PromptHub: Enhancing Multi-Prompt Visual In-Context Learning with Locality-Aware Fusion, Concentration and Alignment
Authors
Tianci Luo, Jinpeng Wang, Shiyu Qin, Niu Lian, Yan Feng, Bin Chen, Chun Yuan, Shu-Tao Xia
Journal
No journal information available
Year
2026
Category
Country
Germany
📝 Abstract
Visual In-Context Learning (VICL) aims to complete vision tasks by imitating pixel demonstrations. Recent work pioneered prompt fusion, which combines the advantages of various demonstrations and offers a promising way to extend VICL. Unfortunately, the patch-wise fusion framework and model-agnostic supervision hinder the exploitation of informative cues, thereby limiting performance gains. To overcome this deficiency, we introduce PromptHub, a framework that holistically strengthens multi-prompting through locality-aware fusion, concentration, and alignment. PromptHub exploits spatial priors to capture richer contextual information, employs complementary concentration, alignment, and prediction objectives to mutually guide training, and incorporates data augmentation to further reinforce supervision. Extensive experiments on three fundamental vision tasks demonstrate the superiority of PromptHub. Moreover, we validate its universality, transferability, and robustness across out-of-distribution settings and various retrieval scenarios. This work establishes a reliable locality-aware paradigm for prompt fusion, moving beyond prior patch-wise approaches. Code is available at https://github.com/luotc-why/ICLR26-PromptHub.
📊 Article Statistics
Views: 407 · Downloads: 0 · Citations: 11