arxiv:2602.06869

Uncovering Cross-Objective Interference in Multi-Objective Alignment

Published on Feb 6 · Submitted by Yining Lu on Feb 9

Abstract

Multi-objective alignment in LLMs suffers from cross-objective interference, where improving performance on some objectives degrades others; the paper gives a covariance-based analysis of this failure mode and proposes a method that maintains positive covariance between objective rewards and the training signal.

AI-generated summary

We study a persistent failure mode in multi-objective alignment for large language models (LLMs): training improves performance on only a subset of objectives while causing others to degrade. We formalize this phenomenon as cross-objective interference and conduct the first systematic study across classic scalarization algorithms, showing that interference is pervasive and exhibits strong model dependence. To explain this phenomenon, we derive a local covariance law showing that an objective improves at first order when its reward exhibits positive covariance with the scalarized score. We extend this analysis to clipped surrogate objectives used in modern alignment, demonstrating that the covariance law remains valid under mild conditions despite clipping. Building on this analysis, we propose Covariance Targeted Weight Adaptation (CTWA), a plug-and-play method that maintains positive covariance between objective rewards and the training signal to effectively mitigate cross-objective interference. Finally, we complement these local improvement conditions with a global convergence analysis under the Polyak-Łojasiewicz condition, establishing when non-convex scalarized optimization achieves global convergence and how cross-objective interference depends on specific model geometric properties.
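The covariance law above is concrete enough to sketch in code. Below is a minimal NumPy illustration assuming linear scalarization of per-sample rewards; the names `scalarize`, `objective_covariances`, and `adapt_weights` are introduced here for illustration, and the re-weighting step is only a hypothetical reading of the CTWA idea (up-weight objectives whose reward covaries negatively with the scalarized score), not the paper's actual update rule.

```python
# Minimal sketch of the covariance law plus a hypothetical CTWA-style re-weighting
# step. Caveat: the update rule below illustrates the stated idea only; it is not
# the paper's exact algorithm.
import numpy as np


def scalarize(rewards: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Linear scalarization: per-sample score s = sum_i w_i * r_i.

    rewards: (batch, n_objectives) per-sample rewards, one column per objective.
    weights: (n_objectives,) non-negative scalarization weights.
    """
    return rewards @ weights


def objective_covariances(rewards: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Cov(r_i, s) over the batch, for each objective i.

    By the local covariance law summarized above, objective i is expected to
    improve at first order only when this covariance is positive.
    """
    s = scalarize(rewards, weights)
    centered_r = rewards - rewards.mean(axis=0, keepdims=True)
    centered_s = s - s.mean()
    return centered_r.T @ centered_s / (len(s) - 1)


def adapt_weights(rewards: np.ndarray, weights: np.ndarray,
                  step: float = 0.1) -> np.ndarray:
    """Hypothetical CTWA-style step: up-weight objectives whose covariance with
    the scalarized score is negative, then renormalize."""
    cov = objective_covariances(rewards, weights)
    new_w = weights + step * np.maximum(-cov, 0.0)  # boost interfered objectives
    new_w = np.clip(new_w, 1e-6, None)
    return new_w / new_w.sum()


# Toy batch: two objectives whose rewards are strongly anti-correlated.
rng = np.random.default_rng(0)
r1 = rng.normal(size=256)
rewards = np.stack([r1, -0.8 * r1 + 0.2 * rng.normal(size=256)], axis=1)
weights = np.array([0.5, 0.5])
print(objective_covariances(rewards, weights))  # second entry comes out negative
print(adapt_weights(rewards, weights))          # so its weight is increased
```

In this toy batch the second objective's reward covaries negatively with the scalarized score, so the covariance law predicts it will degrade under a naive gradient step; the adaptation shifts weight toward it, which is the qualitative behavior the paper describes CTWA as enforcing.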

Community

Paper author · Paper submitter

Why does improving one objective in multi-objective RL sometimes hurt others, even when they shouldn't conflict in the classic MOO/MTL sense? And why only on certain models? 🤔

This isn't a Pareto tradeoff. It's **cross-objective interference** in LLM alignment.

We uncover this in https://arxiv.org/abs/2602.06869 by

  • answering why and when this model-dependent interference actually happens.
  • conducting the first systematic benchmark of classic MOO/MTL methods for LLM alignment (a toy sketch of how such interference can be flagged appears after this post).
  • proposing CTWA (Covariance-Targeted Weight Adaptation), our solution that actually works across the board.

We hope this work provides actionable insights for multi-objective alignment of LLMs.
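For readers who want to see what "interference" means operationally, here is a small self-contained sketch that flags which objectives improved and which degraded between two evaluation runs. It is not the paper's benchmark protocol, and the objective names and scores are made up.

```python
# Hedged sketch: one simple way to flag cross-objective interference from
# per-objective evaluation scores measured before and after a training run.
# The tolerance and the objective names are illustrative choices only.
from typing import Dict, List


def interference_report(before: Dict[str, float],
                        after: Dict[str, float],
                        tol: float = 0.0) -> Dict[str, List[str]]:
    """Partition objectives into improved / degraded given pre- and post-training scores."""
    improved = [k for k in before if after[k] - before[k] > tol]
    degraded = [k for k in before if after[k] - before[k] < -tol]
    return {"improved": improved, "degraded": degraded}


# Toy usage with made-up numbers: one objective improves while another degrades,
# which is the interference pattern discussed in the post.
before = {"helpfulness": 0.62, "safety": 0.81, "conciseness": 0.55}
after = {"helpfulness": 0.71, "safety": 0.74, "conciseness": 0.55}
print(interference_report(before, after, tol=0.01))
# {'improved': ['helpfulness'], 'degraded': ['safety']}
```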


