RewardFlow: Topology-Aware Reward Propagation on State Graphs for Agentic RL with Large Language Models
Authors
Xiao Feng|Bo Han|Zhanke Zhou|Jiaqi Fan|Jiangchao Yao|Ka Ho Li|Dahai Yu|Michael Kwok-Po Ng
Journal
No journal information available
Year
2026
Category
Country
China
📝 Abstract
Reinforcement learning (RL) holds significant promise for enhancing the agentic reasoning capabilities of large language models (LLMs) with external environments. However, the inherent sparsity of terminal rewards hinders fine-grained, state-level optimization. Although process reward modeling offers a promising alternative, training dedicated reward models often entails substantial computational costs and scaling difficulties. To address these challenges, we introduce RewardFlow, a lightweight method for estimating state-level rewards tailored to agentic reasoning tasks. RewardFlow leverages the intrinsic topological structure of states within reasoning trajectories by constructing state graphs. This enables an analysis of state-wise contributions to success, followed by topology-aware graph propagation to quantify contributions and yield objective, state-level rewards. When integrated as dense rewards for RL optimization, RewardFlow substantially outperforms prior RL baselines across four agentic reasoning benchmarks, demonstrating superior performance, robustness, and training efficiency. The implementation of RewardFlow is publicly available at https://github.com/tmlr-group/RewardFlow.
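The abstract describes estimating state-level rewards by building a state graph from reasoning trajectories and propagating terminal outcomes through its topology. The paper's actual formulation is in the linked repository; the following is only a minimal illustrative sketch of the general idea, where the graph construction, the backward mean-of-successors propagation rule, and all names are assumptions, not RewardFlow's method.

```python
from collections import defaultdict

def propagate_rewards(trajectories):
    """Toy sketch: assign each state a reward by propagating terminal
    rewards backward over a graph of states shared across trajectories.
    The averaging rule here is an illustrative assumption."""
    # Build the state graph: record successors observed in any trajectory.
    successors = defaultdict(set)
    terminal_reward = {}
    for states, reward in trajectories:
        for s, t in zip(states, states[1:]):
            successors[s].add(t)
        terminal_reward[states[-1]] = reward  # sparse outcome at the end

    # Terminal states keep their observed outcome; interior states get
    # the mean reward of their successors (a crude contribution estimate).
    reward = dict(terminal_reward)

    def value(s, seen=frozenset()):
        if s in reward:
            return reward[s]
        if s in seen or not successors[s]:
            return 0.0  # cycle or dead end: no credit assigned
        v = sum(value(t, seen | {s}) for t in successors[s]) / len(successors[s])
        reward[s] = v
        return v

    for s in list(successors):
        value(s)
    return reward
```

With two trajectories that share a prefix but diverge into success and failure, the shared states receive an intermediate reward reflecting their mixed contribution, which is the kind of dense, state-level signal the abstract says is fed back into RL optimization.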
📊 Article Statistics
Basic Stats
Views: 181
Downloads: 0
Citations: 15