Uncertainty-Aware Variational Reward Factorization via Probabilistic Preference Bases for LLM Personalization
Authors
No author information available
Journal
No journal information available
Year
2026
Category
-
Country
-
arXiv
http://arxiv.org/abs/2604.00997v1
📝 Abstract
Reward factorization personalizes large language models (LLMs) by decomposing rewards into shared basis functions and user-specific weights. Yet, existing methods estimate user weights from scarce data in isolation and as deterministic points, leading to inaccurate and unreliable inference. We introduce Variational Reward Factorization (VRF), an uncertainty-aware framework that represents each user's preferences as a variational distribution in a shared preference space. VRF infers user distributions via a variational encoder, derives weights through Wasserstein distance matching with shared probabilistic bases, and downweights uncertain estimates through a variance-attenuated loss. On three benchmarks, VRF outperforms all baselines across seen and unseen users, few-shot scenarios, and varying uncertainty levels, with gains extending to downstream alignment.
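The abstract names three components: a variational posterior per user, weights derived by Wasserstein-distance matching against shared probabilistic bases, and a variance-attenuated loss that downweights uncertain users. A minimal sketch of how these pieces could fit together is below; it assumes diagonal Gaussian posteriors (for which the 2-Wasserstein distance has a closed form), a softmax over negative distances for the basis weights, and a `1/(1 + variance)` attenuation factor. All function names and these specific modeling choices are our illustrative assumptions, not the paper's released implementation.

```python
import numpy as np

def w2_sq_diag_gauss(mu1, sigma1, mu2, sigma2):
    """Squared 2-Wasserstein distance between diagonal Gaussians.

    Closed form: ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2.
    """
    return np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2)

def basis_weights(user_mu, user_sigma, basis_mus, basis_sigmas, tau=1.0):
    """Soft-assign a user's posterior to shared probabilistic bases.

    Weight on basis k decays with its Wasserstein distance to the
    user's distribution (softmax over negative distances, temperature tau).
    """
    d = np.array([w2_sq_diag_gauss(user_mu, user_sigma, m, s)
                  for m, s in zip(basis_mus, basis_sigmas)])
    logits = -d / tau
    logits -= logits.max()          # numerical stability
    w = np.exp(logits)
    return w / w.sum()

def variance_attenuated_loss(per_example_loss, user_var):
    """Downweight training examples from users with uncertain posteriors."""
    atten = 1.0 / (1.0 + user_var)  # high variance -> small weight
    return float(np.mean(atten * per_example_loss))
```

For example, a user posterior sitting exactly on one basis receives nearly all of that basis's weight, and examples from a user with posterior variance 1 contribute only half as much to the loss as examples from a fully confident user.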