Uncovering Cross-Objective Interference in Multi-Objective Alignment
Abstract
Multi-objective alignment in LLMs suffers from cross-objective interference, where improving performance on some objectives degrades others; the paper provides a covariance-based analysis of this failure mode and proposes a method that maintains positive covariance between objective rewards and the training signal.
We study a persistent failure mode in multi-objective alignment for large language models (LLMs): training improves performance on only a subset of objectives while causing others to degrade. We formalize this phenomenon as cross-objective interference and conduct the first systematic study across classic scalarization algorithms, showing that interference is pervasive and exhibits strong model dependence. To explain this phenomenon, we derive a local covariance law showing that an objective improves at first order when its reward exhibits positive covariance with the scalarized score. We extend this analysis to clipped surrogate objectives used in modern alignment, demonstrating that the covariance law remains valid under mild conditions despite clipping. Building on this analysis, we propose Covariance Targeted Weight Adaptation (CTWA), a plug-and-play method that maintains positive covariance between objective rewards and the training signal to effectively mitigate cross-objective interference. Finally, we complement these local improvement conditions with a global convergence analysis under the Polyak–Łojasiewicz condition, establishing when non-convex scalarized optimization achieves global convergence and how cross-objective interference depends on specific model geometric properties.
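For intuition, here is a rough sketch of the kind of first-order condition the abstract's "local covariance law" refers to. The assumptions (a REINFORCE-style gradient estimator, fixed scalarization weights) are illustrative and not taken from the paper; the paper's precise statement and conditions may differ.

```latex
% Rough sketch only; the assumptions (REINFORCE-style estimator, fixed
% weights w_k) are illustrative and not from the paper.
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
Let $S(y)=\sum_k w_k r_k(y)$ be the scalarized score and consider the policy-gradient step
\[
\theta' = \theta + \eta\,\mathbb{E}_{y\sim\pi_\theta}\!\left[\,S(y)\,\nabla_\theta \log \pi_\theta(y)\,\right].
\]
For objective $i$ with $J_i(\theta)=\mathbb{E}_{y\sim\pi_\theta}[\,r_i(y)\,]$, a first-order expansion gives
\[
J_i(\theta') - J_i(\theta)
= \eta\,\Big\langle \mathbb{E}\big[r_i\,\nabla_\theta \log \pi_\theta\big],\;
  \mathbb{E}\big[S\,\nabla_\theta \log \pi_\theta\big]\Big\rangle + O(\eta^2),
\]
so objective $i$ improves at first order exactly when this covariance-like alignment
between its reward $r_i$ and the scalarized score $S$ (through the score function) is positive.
\end{document}
```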
Community
Why does improving one objective in multi-objective RL sometimes hurt others, even when they shouldn't conflict in the MOO/MTL sense? And why only on certain models?
This isn't about Pareto tradeoffs. It's **cross-objective interference** in LLM alignment.
We uncover this in https://arxiv.org/abs/2602.06869 by
- answering why and when this model-dependent interference actually happens.
- conducting the first systematic benchmark of classic MOO/MTL methods for LLM alignment.
- proposing CTWA (Covariance-Targeted Weight Adaptation), our solution that actually works across the board (see the sketch below).
We hope this work provides actionable insights for multi-objective alignment in LLMs.
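As a companion to the post above, here is a minimal, hypothetical sketch of the covariance-targeting idea, not the paper's CTWA algorithm: estimate each objective's covariance with the scalarized training signal on a batch and upweight objectives whose covariance has turned negative. The function name, update rule, and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch only: not the paper's CTWA algorithm. The update rule,
# function name, and hyperparameters below are assumptions for illustration.
import numpy as np

def adapt_weights(rewards: np.ndarray, weights: np.ndarray,
                  lr: float = 0.1, floor: float = 1e-3) -> np.ndarray:
    """rewards: (batch, K) per-objective rewards; weights: (K,) scalarization weights."""
    scalarized = rewards @ weights                     # (batch,) scalarized training signal
    centered_r = rewards - rewards.mean(axis=0)        # center each objective's rewards
    centered_s = scalarized - scalarized.mean()        # center the scalarized signal
    cov = centered_r.T @ centered_s / len(scalarized)  # (K,) estimates of Cov(r_k, S)
    # Upweight objectives whose covariance with the training signal has gone
    # negative, nudging every objective back toward positive covariance.
    new_w = weights + lr * np.maximum(-cov, 0.0)
    new_w = np.maximum(new_w, floor)
    return new_w / new_w.sum()                         # project back onto the simplex

# Toy usage: 3 objectives, batches of 32 per-objective rewards.
rng = np.random.default_rng(0)
w = np.ones(3) / 3
for _ in range(5):
    batch_rewards = rng.normal(size=(32, 3))
    w = adapt_weights(batch_rewards, w)
print(w)
```

In this toy loop the weights stay on the probability simplex, so upweighting an interfered objective necessarily reallocates mass away from objectives that already covary positively with the training signal.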
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Reward-free Alignment for Conflicting Objectives (2026)
- Multi-Task GRPO: Reliable LLM Reasoning Across Tasks (2026)
- APEX: Learning Adaptive Priorities for Multi-Objective Alignment in Vision-Language Generation (2026)
- Orchestrating Tokens and Sequences: Dynamic Hybrid Policy Optimization for RLVR (2026)
- $f$-GRPO and Beyond: Divergence-Based Reinforcement Learning Algorithms for General LLM Alignment (2026)
- AWPO: Enhancing Tool-Use of Large Language Models through Explicit Integration of Reasoning Rewards (2025)
- Feedback Control for Multi-Objective Graph Self-Supervision (2026)