Enhancing Alignment for Unified Multimodal Models via Semantically-Grounded Supervision
Authors
Jiyeong Kim|Yerim So|Hyesong Choi|Uiwon Hwang|Dongbo Min
Journal
No journal information available
Year
2026
Category
Country
Canada
📝 Abstract
Unified Multimodal Models (UMMs) have emerged as a promising paradigm that integrates multimodal understanding and generation within a unified modeling framework. However, current generative training paradigms suffer from inherent limitations. We present Semantically-Grounded Supervision (SeGroS), a fine-tuning framework designed to resolve the granularity mismatch and supervisory redundancy in UMMs. At its core, we propose a novel visual grounding map to construct two complementary supervision signals. First, we formulate semantic Visual Hints to compensate for the sparsity of text prompts. Second, we generate a semantically-grounded Corrupted Input to explicitly enhance the supervision of masking-based UMMs by restricting the reconstruction loss to core text-aligned regions. Extensive evaluations on GenEval, DPGBench, and CompBench demonstrate that SeGroS significantly improves generation fidelity and cross-modal alignment across various UMM architectures.
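The abstract describes restricting a masking-based reconstruction loss to core text-aligned regions identified by a visual grounding map. The paper's exact formulation is not given here; the following is a minimal NumPy sketch under the assumption that the grounding map is a per-pixel alignment score in [0, 1], with the function name, threshold, and mean-squared-error objective all illustrative choices rather than the authors' method.

```python
import numpy as np

def grounded_reconstruction_loss(pred, target, grounding_map, threshold=0.5):
    """Reconstruction loss restricted to text-aligned regions (illustrative sketch).

    pred, target: (H, W, C) image arrays.
    grounding_map: (H, W) per-pixel text-alignment scores in [0, 1] (assumed form).
    Only pixels whose grounding score exceeds `threshold` contribute to the loss.
    """
    mask = (grounding_map > threshold).astype(np.float32)   # 1 inside core regions, 0 elsewhere
    per_pixel = np.mean((pred - target) ** 2, axis=-1)      # per-pixel MSE over channels
    denom = mask.sum() + 1e-8                               # avoid division by zero
    return float((per_pixel * mask).sum() / denom)          # average loss over grounded pixels
```

Pixels outside the grounded regions are excluded from the average entirely, so supervision concentrates on the regions the text prompt actually describes.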
📊 Article Statistics
Basic Stats
Views: 64
Downloads: 0
Citations: 13