dTRPO: Trajectory Reduction in Policy Optimization of Diffusion Large Language Models
Authors
Wenxuan Zhang|Lemeng Wu|Changsheng Zhao|Ernie Chang|Mingchen Zhuge|Zechun Liu|Andy Su|Hanxian Huang|Jun Chen|Chong Zhou|Raghuraman Krishnamoorthi|Vikas Chandra|Mohamed Elhoseiny|Wei Wen
Journal
No journal information available
Year
2026
Category
Country
France
📝 Abstract
Diffusion Large Language Models (dLLMs) introduce a new paradigm for language generation, which in turn presents new challenges for aligning them with human preferences. In this work, we aim to improve the policy optimization for dLLMs by reducing the cost of the trajectory probability calculation, thereby enabling scaled-up offline policy training. We prove that: (i) under reference policy regularization, the probability ratio of the newly unmasked tokens is an unbiased estimate of that of intermediate diffusion states, and (ii) the probability of the full trajectory can be effectively estimated with a single forward pass of a re-masked final state. By integrating these two trajectory reduction strategies into a policy optimization objective, we propose Trajectory Reduction Policy Optimization (dTRPO). We evaluate dTRPO on 7B dLLMs across instruction-following and reasoning benchmarks. Results show that it substantially improves the core performance of state-of-the-art dLLMs, achieving gains of up to 9.6% on STEM tasks, up to 4.3% on coding tasks, and up to 3.0% on instruction-following tasks. Moreover, dTRPO exhibits strong training efficiency due to its offline, single-forward nature, and achieves improved generation efficiency through high-quality outputs.
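The abstract's second result, estimating the full trajectory probability with a single forward pass over a re-masked final state, can be sketched in a few lines. The following is a minimal, hypothetical PyTorch sketch, not the paper's code: the model interface (a callable mapping token ids to per-position logits), `MASK_ID`, the sum of per-token log-probabilities as the sequence-level estimate, and the helper names `trajectory_logprob` / `policy_ratio` are all illustrative assumptions.

```python
import torch

MASK_ID = 0  # hypothetical mask-token id; real dLLMs define their own


def trajectory_logprob(model, final_ids: torch.Tensor,
                       mask_id: int = MASK_ID) -> torch.Tensor:
    """Score the trajectory with ONE forward pass: fully re-mask the final
    sequence and read off the log-prob of each generated token in parallel."""
    masked = torch.full_like(final_ids, mask_id)      # re-masked final state
    logits = model(masked)                            # (batch, seq, vocab)
    logp = torch.log_softmax(logits, dim=-1)
    tok = logp.gather(-1, final_ids.unsqueeze(-1)).squeeze(-1)  # (batch, seq)
    return tok.sum(dim=-1)                            # sequence-level estimate


def policy_ratio(policy, reference, final_ids: torch.Tensor) -> torch.Tensor:
    """Importance ratio against a frozen reference policy, the quantity a
    PPO/TRPO-style offline objective would clip and weight (sketch)."""
    with torch.no_grad():
        ref_lp = trajectory_logprob(reference, final_ids)
    cur_lp = trajectory_logprob(policy, final_ids)
    return torch.exp(cur_lp - ref_lp)


if __name__ == "__main__":
    # Toy demo: a random "model" stands in for a real diffusion LLM.
    vocab, batch, seqlen = 32, 2, 8
    dummy = lambda ids: torch.randn(ids.shape[0], ids.shape[1], vocab)
    ids = torch.randint(1, vocab, (batch, seqlen))
    print(policy_ratio(dummy, dummy, ids))
```

Because both the policy and reference scores come from one forward pass each over the same re-masked state, the per-sample cost is independent of the number of diffusion steps, which is the efficiency the abstract attributes to dTRPO's offline, single-forward training.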
📊 Article Statistics
Basic Stats
265 Views
0 Downloads
30 Citations