arxiv:2602.04649

Outcome Accuracy is Not Enough: Aligning the Reasoning Process of Reward Models

Published on Feb 4 · Submitted by wang binghai on Feb 9

Abstract

Generative reward models suffer from deceptive alignment because training and evaluation prioritize outcome accuracy; a rationale-consistency metric and a hybrid training signal improve performance and generalization in RLHF.

AI-generated summary

Generative Reward Models (GenRMs) and LLM-as-a-Judge systems exhibit deceptive alignment by producing correct judgments for incorrect reasons, because they are trained and evaluated to prioritize outcome accuracy, which undermines their ability to generalize during RLHF. We introduce Rationale Consistency, a fine-grained metric that quantifies the alignment between a model's reasoning process and human judgment. Our evaluation of frontier models reveals that rationale consistency effectively discriminates among state-of-the-art models and detects deceptive alignment, whereas outcome accuracy falls short in both respects. To close this gap, we introduce a hybrid signal that combines rationale consistency with outcome accuracy for GenRM training. The resulting method achieves state-of-the-art performance on RM-Bench (87.1%) and JudgeBench (82%), surpassing outcome-only baselines by an average of 5%. When the trained reward model is used during RLHF, it improves downstream performance on Arena Hard v2, notably yielding a 7% improvement on creative writing tasks. Further analysis confirms that our method escapes the deceptive-alignment trap, reversing the decline in rationale consistency observed with outcome-only training.

Community

Outcome Accuracy is Not Enough: Aligning the Reasoning Process of Reward Models
This paper highlights a critical but overlooked issue in reward model evaluation: Deceptive Alignment, where models reach correct outcomes through superficial or incorrect reasoning.
🔑 Key Findings:
Outcome accuracy alone fails to distinguish frontier models (e.g., GPT-5 vs Claude 3.5) and cannot detect deceptive alignment (e.g., o3 and o3-mini have similar accuracy, but o3-mini's rationale consistency is ~50% lower).
Outcome-only supervision during training leads to Rationale Degeneration: models abandon rigorous fact-checking in favor of cheaper surface cues, resulting in 24.2% lower reasoning alignment.
🛠️ Proposed Solutions:
MetaJudge Framework: decomposes human and model rationales into atomic units and performs strict one-to-one semantic matching to compute Rationale Consistency (see the first sketch below).
Hybrid Reward Training: combines a rationale reward (Average Precision) with an outcome reward to maintain reasoning quality (see the second sketch below).
📊 Results: achieves SOTA on RM-Bench (87.1%) and JudgeBench (82.0%).
🤗 Dataset: 22K human-annotated atomic rationales based on HelpSteer3, available at Qwen/RationaleRM.
A collaboration between the Qwen Team and Fudan University.
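
As a rough illustration of the MetaJudge idea, here is a minimal Python sketch of strict one-to-one matching between atomic rationale units. The `semantically_equivalent` predicate below is a placeholder for the paper's LLM-based semantic matching, and the function names and scoring convention are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: Rationale Consistency via strict one-to-one matching of
# atomic rationale units. A placeholder predicate stands in for the LLM-based
# semantic matching described in the paper.

def semantically_equivalent(model_unit: str, human_unit: str) -> bool:
    """Placeholder for semantic matching (assumption: exact match after normalization)."""
    return model_unit.strip().lower() == human_unit.strip().lower()

def rationale_consistency(model_units: list[str], human_units: list[str]) -> float:
    """Fraction of human-annotated atomic rationales covered by the model's
    rationale under strict one-to-one matching (each human unit used at most once)."""
    if not human_units:
        return 1.0 if not model_units else 0.0
    unmatched = list(human_units)
    matched = 0
    for m in model_units:
        for i, h in enumerate(unmatched):
            if semantically_equivalent(m, h):
                matched += 1
                del unmatched[i]  # enforce one-to-one: consume the human unit
                break
    return matched / len(human_units)

# Toy example: the model recovers one of the two human-annotated units
human = ["response B cites the correct release year", "response A omits the safety caveat"]
model = ["Response B cites the correct release year"]
print(rationale_consistency(model, human))  # 0.5 with the toy exact-match stand-in
```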
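And a minimal sketch of the hybrid training signal. The paper reports using Average Precision as the rationale reward; the ranking it is computed over (model rationale units in generation order, labeled by whether they matched a human unit) and the mixing weight `alpha` are assumptions here, not details from the paper.

```python
# Hypothetical sketch: hybrid reward mixing an outcome reward with a
# rationale reward based on Average Precision.

def average_precision(relevance: list[int]) -> float:
    """AP over the model's rationale units in generation order, where
    relevance[i] = 1 if unit i matched a human-annotated unit, else 0."""
    hits, precisions = 0, []
    for i, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / hits if hits else 0.0

def hybrid_reward(outcome_correct: bool, relevance: list[int], alpha: float = 0.5) -> float:
    """Blend the outcome reward (1 if the preference judgment is correct) with
    the rationale reward; alpha is a hypothetical weight, not from the paper."""
    outcome_reward = 1.0 if outcome_correct else 0.0
    rationale_reward = average_precision(relevance)
    return alpha * outcome_reward + (1 - alpha) * rationale_reward

# Example: correct verdict, but only the third rationale unit matched a human unit
print(hybrid_reward(True, [0, 0, 1]))  # 0.5 * 1.0 + 0.5 * (1/3) ≈ 0.667
```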

