Model Details

This is an int4 model (group_size 128, symmetric quantization) of MiniMaxAI/MiniMax-M2.5, generated by intel/auto-round. Please follow the license of the original model.
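
For intuition, here is a minimal sketch of what symmetric, group-wise int4 quantization of a weight tensor means, written in plain PyTorch. It illustrates the storage format only (per-group scales, a [-8, 7] integer grid), not AutoRound's rounding optimization; all names are illustrative.

import torch

def quantize_int4_symmetric(weight: torch.Tensor, group_size: int = 128):
    """Illustrative symmetric int4 quantization with one scale per group of 128 weights."""
    w = weight.reshape(-1, group_size)               # split the tensor into groups
    scale = w.abs().amax(dim=1, keepdim=True) / 7.0  # symmetric: scale set by the max magnitude
    q = torch.clamp(torch.round(w / scale), -8, 7)   # round-to-nearest onto the int4 grid
    return q.to(torch.int8), scale                   # stored unpacked as int8 here for simplicity

w = torch.randn(256, 512)
q, scale = quantize_int4_symmetric(w)
w_hat = (q.float() * scale).reshape(w.shape)         # dequantize to inspect the error
print((w - w_hat).abs().max())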

How to Use

Environment

uv pip install transformers==4.57.1 torch accelerate --torch-backend=auto
uv pip install vllm --torch-backend=auto
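
Optionally, a quick sanity check confirms the installed versions before loading the model:

import torch, transformers, vllm

print("transformers:", transformers.__version__)  # expect 4.57.1
print("torch:", torch.__version__)
print("vllm:", vllm.__version__)
print("CUDA available:", torch.cuda.is_available())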

HF Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "INC4AI/MiniMax-M2.5-int4-mixed-AutoRound"

model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)

messages = [
    {"role": "user", "content": [{"type": "text", "text": "What is your favourite condiment?"}]},
    {"role": "assistant", "content": [{"type": "text", "text": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"}]},
    {"role": "user", "content": [{"type": "text", "text": "Do you have mayonnaise recipes?"}]}
]

# Render the chat template and move the input ids to the model's device.
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(model.device)

generated_ids = model.generate(model_inputs, max_new_tokens=100)

# Decode the full sequence (prompt plus completion).
response = tokenizer.batch_decode(generated_ids)[0]

print(response)
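
If you want tokens printed as they are generated, transformers' standard TextStreamer utility can be attached to generate; this is an optional variant of the call above:

from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True)  # skip_prompt hides the echoed input
model.generate(model_inputs, max_new_tokens=100, streamer=streamer)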

vLLM Usage

vllm serve INC4AI/MiniMax-M2.5-int4-mixed-AutoRound \
    --port 7777 \
    --host localhost \
    --trust-remote-code \
    --dtype bfloat16 \
    --tensor-parallel-size 4 \
    --enable-auto-tool-choice \
    --tool-call-parser minimax_m2 \
    --reasoning-parser minimax_m2_append_think
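
Once the server is up, it exposes an OpenAI-compatible API on the chosen port. A minimal client call using the openai Python package looks like the following (the API key is a placeholder, since the server above is started without authentication):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:7777/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="INC4AI/MiniMax-M2.5-int4-mixed-AutoRound",
    messages=[{"role": "user", "content": "Do you have mayonnaise recipes?"}],
    max_tokens=200,
)
print(response.choices[0].message.content)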

Generate the Model

auto-round \
    --model_name MiniMaxAI/MiniMax-M2.5 \
    --scheme w4a16 \
    --ignore_layers gate \
    --iters 0 \
    --output_dir MiniMax-M2.5-int4-mixed-AutoRound
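
A rough Python-API equivalent is sketched below, based on the auto_round package's AutoRound class; argument names can differ between releases, and the gate-layer exclusion from the CLI command is omitted here for brevity:

from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "MiniMaxAI/MiniMax-M2.5"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# bits/group_size/sym mirror the w4a16 scheme; iters=0 skips the tuning loop (RTN-style rounding).
autoround = AutoRound(model, tokenizer, bits=4, group_size=128, sym=True, iters=0)
autoround.quantize()
autoround.save_quantized("MiniMax-M2.5-int4-mixed-AutoRound", format="auto_round")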

Ethical Considerations and Limitations

The model can produce factually incorrect output and should not be relied on for factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, this model may generate lewd, biased, or otherwise offensive outputs. Developers should therefore perform safety testing before deploying any application of the model.

Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. To learn more about the quantization tooling used here, see the intel/auto-round repository: https://github.com/intel/auto-round

Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

Cite

@article{cheng2023optimize,
  title={Optimize Weight Rounding via Signed Gradient Descent for the Quantization of {LLMs}},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
