Besag-Clifford e-values for unnormalized testing

📝 Abstract

Unnormalized probability distributions are frequently used in machine learning for modeling complex data generating processes. Though Markov chain Monte Carlo (MCMC) algorithms can approximately sample from unnormalized distributions, intractability of their normalizing constants renders likelihood ratio testing infeasible. We propose to use the parallel method of Besag and Clifford to generate samples that are exchangeable with the data under the null, to then generate valid e-values for any number of iterations or algorithmic steps. We show that as the number of samples grows, these Besag-Clifford e-values constructed using the unnormalized likelihood ratio are actually log-optimal up to a multiplicative term that diminishes with the mixing time of the Markov chain. Additionally, averaging over the output of multiple chains retains validity while increasing the e-power. We extend Besag-Clifford e-values to the general problem of unnormalized test statistics, which allows application to composite hypotheses, uncertainty quantification, generative model evaluation, and sequential testing. Through simulations and an application to galaxy velocity modeling, we empirically verify our theory, explore the impact of autocorrelation and mixing, and evaluate the performance of Besag-Clifford e-values.
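The construction described above can be sketched in code. The idea of the Besag-Clifford parallel method is to run a reversible null Markov kernel backward from the observed data to a "hub" state, then run independent forward chains from that hub; under the null, the resulting samples are exchangeable with the data, so the standard exchangeability-based e-value with the unnormalized likelihood ratio as test statistic is valid. The sketch below is a minimal illustration under simplifying assumptions (a one-dimensional Metropolis kernel, a simple Gaussian toy problem); the function names and the specific kernel are hypothetical, not the paper's implementation.

```python
import numpy as np


def mh_step(x, log_q, rng, scale=1.0):
    """One reversible Metropolis step targeting the unnormalized density exp(log_q)."""
    prop = x + scale * rng.normal()
    if np.log(rng.uniform()) < log_q(prop) - log_q(x):
        return prop
    return x


def besag_clifford_evalue(x_obs, log_q0, log_q1, n_copies=99, n_steps=200, rng=None):
    """E-value from samples exchangeable with x_obs under the null (sketch).

    log_q0, log_q1: log *unnormalized* densities under the null and alternative.
    """
    rng = rng if rng is not None else np.random.default_rng()

    # Backward phase: for a reversible kernel, running it forward from the data
    # is distributionally the same as running it backward; this yields the hub.
    hub = x_obs
    for _ in range(n_steps):
        hub = mh_step(hub, log_q0, rng)

    # Forward phase: independent chains from the hub. Under the null, the
    # resulting copies are exchangeable with x_obs for ANY number of steps.
    copies = []
    for _ in range(n_copies):
        y = hub
        for _ in range(n_steps):
            y = mh_step(y, log_q0, rng)
        copies.append(y)

    # Unnormalized likelihood ratio T = q1/q0: the unknown normalizing
    # constants cancel in the ratio of e-value numerator to denominator.
    def T(x):
        return np.exp(log_q1(x) - log_q0(x))

    vals = np.array([T(x_obs)] + [T(y) for y in copies])
    # By exchangeability, E[(M+1) * T(X) / sum_j T(X_j)] = 1 under the null,
    # so this is a valid e-value.
    return (n_copies + 1) * vals[0] / vals.sum()


# Toy example: null N(0, 1) vs. alternative N(2, 1), both given unnormalized.
log_q0 = lambda x: -0.5 * x**2
log_q1 = lambda x: -0.5 * (x - 2.0) ** 2
e = besag_clifford_evalue(1.8, log_q0, log_q1, rng=np.random.default_rng(1))
```

Averaging the e-values returned by several independent runs of `besag_clifford_evalue` retains validity (the average of e-values is an e-value), which mirrors the paper's multi-chain averaging.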
