---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-VL-72B-Instruct
language:
- multilingual
---

# SafeWork-RM-Knowledge-72B

[📂 GitHub](https://github.com/AI45Lab/SafeWork-R1) · [📜 Technical Report](https://arxiv.org/abs/2507.18576) · [💬 Online Chat](https://safework-r1.ai45.shlab.org.cn/)
## Overview

We introduce SafeWork-R1, a cutting-edge multimodal reasoning model demonstrating the coevolution of safety and general intelligence under the guiding principle of the AI-45° Law. SafeWork-R1 is built upon the SafeLadder framework, which integrates large-scale, progressive, safety-oriented reinforcement learning post-training supported by multi-principled verifiers. Unlike conventional RLHF, which simply learns human preferences, SafeLadder enables SafeWork-R1 to develop intrinsic safety reasoning and self-reflection abilities, leading to emergent safety “aha” moments.
![ai45](https://cdn-uploads.huggingface.co/production/uploads/666fe1a5b07525f0bde69c27/9UP0ze3exhEHJXanUTyXk.png)
## Model Zoo

The **SafeWork-R1 Reward Models** serve as the multi-principled verifiers that guide reinforcement learning in the SafeLadder framework. They are trained on curated datasets of safety, moral-reasoning, and factual-verification dialogues.
| Reward Model | Type | Base Model | Link |
|--------------|------|------------|------|
| SafeWork-RM-Safety-7B | Safety Verifier | Qwen2.5-7B | 🤗 link |
| SafeWork-RM-Value-72B | Value Verifier | Qwen2.5-72B | 🤗 link |
| SafeWork-RM-Knowledge-72B | Knowledge Verifier | Qwen2.5-72B | 🤗 link |
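
During RL post-training, each verifier scores a candidate response along its own principle. As an illustration only (the actual aggregation rule and weights are described in the technical report, not here), per-principle scores in [0, 1] could be combined into a single scalar reward roughly as follows:

```python
# Illustrative sketch only: the aggregation rule and weights below are
# hypothetical, not the procedure used in the SafeLadder framework.
from typing import Dict, Optional

def aggregate_rewards(scores: Dict[str, float],
                      weights: Optional[Dict[str, float]] = None) -> float:
    """Combine per-verifier scores (each in [0, 1]) into one scalar reward."""
    if weights is None:
        weights = {name: 1.0 for name in scores}  # equal weighting by default
    total = sum(weights[name] for name in scores)
    return sum(weights[name] * score for name, score in scores.items()) / total

# Example: hypothetical scores from the three SafeWork-RM verifiers for one response.
reward = aggregate_rewards({"safety": 0.92, "value": 0.85, "knowledge": 0.60})
print(round(reward, 3))  # 0.79
```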
## Performance

| Model | JudgeBench | VLRewardBench | MMRewardBench | Avg. |
|--------|------------|---------------|---------------|------|
| Qwen2.5-VL-7B | 26.3 | 34.9 | 24.9 | 28.7 |
| Qwen2.5-VL-72B | 50.0 | 56.2 | 51.3 | 52.5 |
| GPT-4o | 45.3 | 49.3 | 60.6 | 51.7 |
| Claude Sonnet 3.7 | 49.3 | 53.2 | 56.1 | 52.8 |
| Claude Sonnet 3.7 (thinking) | 62.0 | 61.0 | **69.4** | 64.1 |
| **Knowledge Verifier 7B** | 54.9 | 61.9 | 55.2 | 57.3 |
| **Knowledge Verifier 72B** | **72.7** | **66.0** | 65.6 | **68.1** |

## Quick Start

```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the Knowledge Verifier and its processor.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "AI45Research/SafeWork-RM-Knowledge-72B",
    dtype="auto",
    device_map="cuda"
)
processor = AutoProcessor.from_pretrained("AI45Research/SafeWork-RM-Knowledge-72B")

SYSTEM_PROMPT = "Carefully evaluate the Answer's correctness for the given Question. The Answer must be factually accurate and complete. Base your judgment on objective knowledge, not the Answer's phrasing alone. If fully correct, output 'Yes'; otherwise, 'No'. Respond only with a single word: Yes/No.\n\n"
QUESTION_RESPONSE_FORMAT = "Question: {question}\n\nModel's Response:\n{response}"

messages = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": SYSTEM_PROMPT}
        ]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/image"},
            {"type": "text", "text": QUESTION_RESPONSE_FORMAT.format(question="your question", response="your response")},
        ],
    },
]

# Build the chat prompt and preprocess the visual inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Generate a single judgment token and keep its logits.
generated_output = model.generate(
    **inputs,
    max_new_tokens=1,
    do_sample=False,
    return_dict_in_generate=True,
    output_scores=True
)

# Turn the Yes/No probabilities into a scalar reward in [0, 1].
certain_id = processor.tokenizer.convert_tokens_to_ids("Yes")
uncertain_id = processor.tokenizer.convert_tokens_to_ids("No")
certain_prob, uncertain_prob = torch.nn.functional.softmax(
    generated_output.scores[0][0, [certain_id, uncertain_id]], dim=-1
).tolist()
reward = (certain_prob + (1 - uncertain_prob)) / 2
print(reward)
```

## License

This project is released under the Apache 2.0 license.

## Citation

If you find this work useful, please consider citing:

```
@misc{lab2025safework,
  title={SafeWork-R1: Coevolving Safety and Intelligence under the AI-45 Law},
  author={Lab, Shanghai AI and Bao, Yicheng and Chen, Guanxu and Chen, Mingkang and Chen, Yunhao and Chen, Chiyu and Chen, Lingjie and Chen, Sirui and Chen, Xinquan and Cheng, Jie and others},
  journal={arXiv preprint arXiv:2507.18576},
  year={2025}
}
```
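
## Example: Best-of-N Selection with the Verifier Reward

The scalar `reward` computed in the Quick Start snippet can also be used to rerank several candidate answers to the same question. A minimal sketch follows; `score_response` is a hypothetical wrapper around the Quick Start code (prompt construction, single-token generation, Yes/No softmax) and is not shipped with this repository.

```python
from typing import Callable, List

# Hypothetical helper: `score_response(question, image_path, response) -> float`
# is assumed to wrap the Quick Start code above and return the scalar reward.
def pick_best(question: str,
              image_path: str,
              candidates: List[str],
              score_response: Callable[[str, str, str], float]) -> str:
    """Return the candidate answer with the highest verifier reward."""
    scored = [(score_response(question, image_path, c), c) for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    for reward, candidate in scored:
        print(f"{reward:.3f}  {candidate[:60]}")
    return scored[0][1]  # best-scoring candidate
```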