VideoSeek: Long-Horizon Video Agent with Tool-Guided Seeking
Authors
Jingyang Lin, Jialian Wu, Jiang Liu, Ximeng Sun, Ze Wang, Xiaodong Yu, Jiebo Luo, Zicheng Liu, Emad Barsoum
Journal
No journal information available
Year
2026
Category
Country
Japan
📝 Abstract
Video agentic models have advanced challenging video-language tasks. However, most agentic approaches still rely heavily on greedy parsing over densely sampled video frames, resulting in high computational cost. We present VideoSeek, a long-horizon video agent that leverages video logic flow to actively seek answer-critical evidence instead of exhaustively parsing the full video. This insight allows the model to use far fewer frames while maintaining, or even improving, its video understanding capability. VideoSeek operates in a think-act-observe loop with a well-designed toolkit for collecting multi-granular video observations. This design enables query-aware exploration over accumulated observations and supports practical video understanding and reasoning. Experiments on four challenging video understanding and reasoning benchmarks demonstrate that VideoSeek achieves strong accuracy while using far fewer frames than prior video agents and standalone LMMs. Notably, VideoSeek achieves a 10.2-point absolute improvement on LVBench over its base model, GPT-5, while using 93% fewer frames. Further analysis highlights the significance of leveraging video logic flow, strong reasoning capability, and the complementary roles of the toolkit design.
📊 Article Statistics
Views: 151
Downloads: 0
Citations: 38