Transformers documentation


Experts backends

All Mixture-of-Experts (MoE) implementations perform the same high-level computation. For each token, a router selects the top-k experts, the token's hidden state is projected through the selected experts' parameters, and the results are aggregated with the routing weights. Experts backends differ only in how those expert matrix multiplications are executed.
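As a reference point, here is a minimal sketch of that computation in plain PyTorch. The tensor names, shapes, and activation function are illustrative assumptions, not the exact internals of any Transformers model.

import torch

def moe_forward(hidden, router_logits, w_in, w_out, k=2):
    # hidden: (num_tokens, hidden_dim), router_logits: (num_tokens, num_experts)
    # w_in: (num_experts, hidden_dim, ffn_dim), w_out: (num_experts, ffn_dim, hidden_dim)
    routing_weights = router_logits.softmax(dim=-1)
    topk_weights, topk_ids = routing_weights.topk(k, dim=-1)
    topk_weights = topk_weights / topk_weights.sum(dim=-1, keepdim=True)

    out = torch.zeros_like(hidden)
    # "eager"-style loop over the experts that received at least one token
    for expert_id in topk_ids.unique():
        token_idx, slot = (topk_ids == expert_id).nonzero(as_tuple=True)
        x = hidden[token_idx]
        y = torch.relu(x @ w_in[expert_id]) @ w_out[expert_id]
        out.index_add_(0, token_idx, y * topk_weights[token_idx, slot, None])
    return out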

The ExpertsInterface provides optimized experts backends. It decouples the experts implementation from the model code, which makes it easy to experiment with different implementations and to add new backends through the same interface.

| Experts backend | Description |
| --- | --- |
| "eager" | Reference implementation that loops over the active experts and applies the projections per expert. |
| "batched_mm" | Uses torch.bmm to compute per-(token, expert) projections in a batched way. |
| "grouped_mm" | Uses torch._grouped_mm to group tokens by expert and run grouped GEMMs (requires PyTorch 2.9+). |

batched_mm is fastest for very small inputs and compilation speeds it up further. grouped_mm performs best for larger inputs.
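The sketch below illustrates the difference in execution strategy. It is an illustration of the idea, not the library code: the eager path loops over experts one at a time, while a batched formulation gathers the hidden states and weights for every selected (token, expert) pair and multiplies them with a single torch.bmm call.

import torch

num_tokens, hidden_dim, ffn_dim, num_experts, k = 8, 16, 32, 4, 2
hidden = torch.randn(num_tokens, hidden_dim)
w_in = torch.randn(num_experts, hidden_dim, ffn_dim)
topk_ids = torch.randint(num_experts, (num_tokens, k))

# "batched_mm"-style: one bmm over all (token, expert) pairs
x = hidden.unsqueeze(1).expand(-1, k, -1).reshape(-1, 1, hidden_dim)  # (num_tokens * k, 1, hidden_dim)
w = w_in[topk_ids.reshape(-1)]                                        # (num_tokens * k, hidden_dim, ffn_dim)
up = torch.bmm(x, w).squeeze(1)                                       # (num_tokens * k, ffn_dim)

# "grouped_mm"-style idea: sort the (token, expert) pairs by expert so each
# expert's tokens form a contiguous block, then run one grouped GEMM over the blocks.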

Set an experts backend

Use the experts_implementation argument in from_pretrained() to instantiate a model with a specific experts backend.

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-MoE-A2.7B",
    dtype="bfloat16",
    experts_implementation="batched_mm",
)
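A quick generation call (standard Transformers usage, shown here only as a sanity check) confirms the model runs with the selected backend.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B")
inputs = tokenizer("Mixture-of-Experts models are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))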

Switch between experts backends at runtime without reloading the model using set_experts_implementation().

model.set_experts_implementation("eager")
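The switch applies in place to the already loaded weights. As a minimal sketch (reusing the model and inputs from the snippets above), you can check that both backends produce the same output up to numerical precision.

import torch

with torch.no_grad():
    logits_eager = model(**inputs).logits      # current backend: "eager"
    model.set_experts_implementation("batched_mm")
    logits_batched = model(**inputs).logits

# Backends compute the same thing; only the execution strategy differs
print((logits_eager - logits_batched).abs().max())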

Backbone-specific experts backend

Multimodal models can have multiple sub-configs (for example, different backbones). You can set a different experts backend per sub-config by passing a dict to experts_implementation at load time.

Keys in the mapping must match sub-config names.

from transformers import AutoModelForImageTextToText

experts_implementation_per_backbone = {
    "text_config": "grouped_mm",
    "vision_config": "eager",
}

model = AutoModelForImageTextToText.from_pretrained(
    "Qwen/Qwen3-VL-Moe",
    experts_implementation=experts_implementation_per_backbone,
)

To set the experts backend globally for all sub-configs, use an empty string as the key.

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-MoE-A2.7B",
    experts_implementation={"": "batched_mm"},
)

torch.compile

All three backends ("eager", "batched_mm", "grouped_mm") are compatible with torch.compile to varying extents. The following table summarizes compatibility:

| Implementation | Compilation modes | dtypes | fullgraph=True |
| --- | --- | --- | --- |
| grouped_mm | None, max-autotune-no-cudagraphs | bfloat16 | Yes |
| batched_mm | all | bfloat16, float16, float32 | Yes |
| eager | all | bfloat16, float16, float32 | No |

Notes:

  • The grouped_mm experts backend currently only supports bfloat16 when compiled with torch.compile. Additionally, it is not compatible with CUDA graphs, so you must use mode=None or mode="max-autotune-no-cudagraphs" when compiling.
  • The eager experts backend uses a data-dependent operation to find which experts are used in a forward pass. This operation is not compatible with full graph compilation (fullgraph=True).
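
For example, to compile a model that uses the grouped_mm backend without CUDA graphs:
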
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-MoE-A2.7B",
    dtype="bfloat16",
    experts_implementation="grouped_mm",
).eval().cuda()

# Works for grouped_mm (no CUDA graphs)
model.forward = torch.compile(model.forward, mode="max-autotune-no-cudagraphs")
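
The batched_mm backend additionally supports full-graph compilation. A sketch, assuming a freshly loaded model:

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-MoE-A2.7B",
    dtype="bfloat16",
    experts_implementation="batched_mm",
).eval().cuda()

# batched_mm is fullgraph-compatible; eager is not, due to its data-dependent expert lookup
model.forward = torch.compile(model.forward, fullgraph=True)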

Benchmarks

This benchmark compares the experts implementations across different input sizes, with and without torch.compile.
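The published figures are not reproduced here, but a comparable measurement can be run locally. The script below is an illustrative sketch (random token IDs, a single timed forward pass per configuration), not the exact benchmark script.

import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-MoE-A2.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, dtype="bfloat16").eval().cuda()

# To measure the compiled variants, wrap the forward pass first, e.g.:
# model.forward = torch.compile(model.forward, mode="max-autotune-no-cudagraphs")

for backend in ("eager", "batched_mm", "grouped_mm"):  # grouped_mm needs PyTorch 2.9+
    model.set_experts_implementation(backend)
    for seq_len in (32, 256, 1024):
        input_ids = torch.randint(tokenizer.vocab_size, (1, seq_len), device="cuda")
        with torch.no_grad():
            model(input_ids)  # warmup
            torch.cuda.synchronize()
            start = time.perf_counter()
            model(input_ids)
            torch.cuda.synchronize()
        print(f"{backend:>10}  seq_len={seq_len:<5} {time.perf_counter() - start:.4f}s")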
