Outcome Accuracy is Not Enough: Aligning the Reasoning Process of Reward Models
Abstract
Generative Reward Models suffer from deceptive alignment due to outcome accuracy prioritization, but rationale consistency metrics and hybrid training signals improve performance and generalization in RLHF.
Generative Reward Models (GenRMs) and LLM-as-a-Judge systems exhibit deceptive alignment by producing correct judgments for incorrect reasons, because they are trained and evaluated to prioritize Outcome Accuracy; this undermines their ability to generalize during RLHF. We introduce Rationale Consistency, a fine-grained metric that quantifies the alignment between a model's reasoning process and human judgment. Our evaluation of frontier models reveals that rationale consistency effectively discriminates among state-of-the-art models and detects deceptive alignment, whereas outcome accuracy falls short in both respects. To close this gap, we introduce a hybrid signal that combines rationale consistency with outcome accuracy for GenRM training. Our training method achieves state-of-the-art performance on RM-Bench (87.1%) and JudgeBench (82%), surpassing outcome-only baselines by an average of 5%. When the resulting reward model is used during RLHF, our method improves downstream performance, as demonstrated on Arena Hard v2, notably yielding a 7% improvement on creative writing tasks. Further analysis confirms that our method escapes the deceptive alignment trap, reversing the decline in rationale consistency observed in outcome-only training.
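As a rough illustration of how a rationale-consistency score might be blended with outcome correctness into a single training signal, here is a minimal sketch; the function name and the weighting `alpha` are assumptions for illustration, not the paper's exact formulation.

```python
def hybrid_reward(outcome_correct: bool,
                  rationale_consistency: float,
                  alpha: float = 0.5) -> float:
    """Blend outcome correctness with a rationale-consistency score in [0, 1].

    `alpha` trades off the two terms; its value here is an assumption,
    not the weighting reported in the paper.
    """
    outcome_reward = 1.0 if outcome_correct else 0.0
    return alpha * outcome_reward + (1.0 - alpha) * rationale_consistency
```

In such a scheme, the rationale term would come from a reasoning-alignment metric such as the Average Precision over matched rationale units described in the summary below.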
Community
Outcome Accuracy is Not Enough: Aligning the Reasoning Process of Reward Models
This paper reveals a critical but overlooked issue in reward model evaluation: Deceptive Alignment, where models can reach correct outcomes through superficial or incorrect reasoning.
Key Findings:
Outcome accuracy alone fails to distinguish frontier models (e.g., GPT-5 vs Claude 3.5) and cannot detect deceptive alignment (e.g., o3 vs o3-mini have similar accuracy, but o3-mini's rationale consistency is ~50% lower).
Outcome-only supervision during training leads to Rationale Degeneration: models abandon rigorous fact-checking for cheaper surface cues, resulting in 24.2% lower reasoning alignment.
Proposed Solutions:
MetaJudge Framework: Decomposes human and model rationales into atomic units and performs strict one-to-one semantic matching to compute Rationale Consistency (see the sketch after this list).
Hybrid Reward Training: Combines rationale reward (Average Precision) with outcome reward to maintain reasoning quality.
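A minimal sketch of how such a matching-based rationale score could be computed, assuming greedy one-to-one matching and an external semantic-equivalence judge passed in as `is_match`; the helper names and the greedy strategy are illustrative assumptions, not the paper's exact MetaJudge procedure.

```python
from typing import Callable, List

def rationale_consistency(model_units: List[str],
                          human_units: List[str],
                          is_match: Callable[[str, str], bool]) -> float:
    """Score a model rationale against human-annotated atomic rationale units.

    Each model unit may match at most one human unit (one-to-one matching);
    the final score is an average-precision style value normalized by the
    number of human units, so it lies in [0, 1].
    """
    if not human_units:
        return 0.0
    unmatched = list(human_units)
    hits: List[int] = []
    for unit in model_units:
        match = next((h for h in unmatched if is_match(unit, h)), None)
        if match is not None:
            unmatched.remove(match)  # enforce one-to-one matching
            hits.append(1)
        else:
            hits.append(0)
    # Average precision: precision at each matched position, averaged over
    # the number of ground-truth (human) units.
    ap = sum(sum(hits[: i + 1]) / (i + 1) for i, hit in enumerate(hits) if hit)
    return ap / len(human_units)
```

With a trivial stand-in judge such as `is_match = lambda a, b: a.strip().lower() == b.strip().lower()`, a model rationale that reproduces every human unit scores 1.0, while unsupported units placed ahead of correct ones lower the precision terms and drag the score down.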
Results: Achieves SOTA on RM-Bench (87.1%) and JudgeBench (82.0%).
Dataset (22K human-annotated atomic rationales based on HelpSteer3) is available at Qwen/RationaleRM.
A collaboration between Qwen Team and Fudan University.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Reward Modeling from Natural Language Human Feedback (2026)
- LogicReward: Incentivizing LLM Reasoning via Step-Wise Logical Supervision (2025)
- Generative Adversarial Reasoner: Enhancing LLM Reasoning with Adversarial Reinforcement Learning (2025)
- Making Bias Non-Predictive: Training Robust LLM Judges via Reinforcement Learning (2026)
- Rethinking Rubric Generation for Improving LLM Judge and Reward Modeling for Open-ended Tasks (2026)
- Evidence-Augmented Policy Optimization with Reward Co-Evolution for Long-Context Reasoning (2026)
- Alternating Reinforcement Learning for Rubric-Based Reward Modeling in Non-Verifiable LLM Post-Training (2026)