Sink-Aware Pruning for Diffusion Language Models
Abstract
Diffusion Language Models (DLMs) incur high inference cost due to iterative denoising, motivating efficient pruning. Existing pruning heuristics, largely inherited from autoregressive (AR) LLMs, typically preserve attention sink tokens because AR sinks serve as stable global anchors. We show that this assumption does not hold for DLMs: the attention-sink position exhibits substantially higher variance over the full generation trajectory (measured by how the dominant sink locations shift across timesteps), indicating that sinks are often transient and less structurally essential than in AR models. Based on this observation, we propose **Sink-Aware Pruning**, which automatically identifies and prunes unstable sinks in DLMs, in contrast to prior AR-oriented methods that preserve them. Without retraining, our method achieves a better quality-efficiency trade-off and outperforms strong prior pruning baselines under matched compute. Our code is available at https://github.com/VILA-Lab/Sink-Aware-Pruning.
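The trajectory-level instability described above can be quantified in several ways; one minimal sketch (not the paper's exact metric) is the fraction of consecutive denoising steps at which the dominant sink position changes, assuming attention maps of shape `(T, L, L)` averaged over heads and layers:

```python
import numpy as np

def sink_shift_rate(attn_maps):
    """Fraction of consecutive denoising steps where the dominant sink moves.

    attn_maps: array of shape (T, L, L) -- one row-stochastic attention map
    per denoising timestep (hypothetical input format for illustration).
    """
    # Incoming attention mass per key position at each timestep.
    sink_mass = attn_maps.mean(axis=1)        # (T, L)
    # Dominant sink position per timestep.
    dominant = sink_mass.argmax(axis=1)       # (T,)
    shifts = np.count_nonzero(dominant[1:] != dominant[:-1])
    return shifts / (len(dominant) - 1)

# Synthetic check: a stable sink (always position 0) vs. a wandering one.
T, L = 6, 8
stable = np.full((T, L, L), 1.0 / L)
stable[:, :, 0] += 1.0                        # position 0 dominates every step
moving = np.full((T, L, L), 1.0 / L)
for t in range(T):
    moving[t, :, t % L] += 1.0                # dominant sink moves each step

print(sink_shift_rate(stable))  # -> 0.0
print(sink_shift_rate(moving))  # -> 1.0
```

Under this proxy, AR-style stable anchors yield a rate near zero, while the transient DLM sinks described in the abstract yield a high rate.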
Community
Sink-Aware Pruning for Diffusion Language Models identifies and addresses a fundamental blind spot in current pruning recipes for large language models. Most pruning methods are inherited from autoregressive LLMs and assume that attention sink tokens are stable global anchors. We show this assumption does not hold for diffusion language models (DLMs): attention sinks in DLMs shift significantly across denoising steps, making traditional sink-preserving heuristics suboptimal for this generation paradigm.
We propose Sink-Aware Pruning, a diffusion-native pruning strategy that automatically detects and suppresses unstable attention sinks based on their variance over the full denoising trajectory. Without any retraining, our method achieves a better quality–efficiency trade-off and outperforms strong prior pruning baselines under matched compute across multiple DLM families (e.g., LLaDA and Dream).
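The selection step described above can be sketched as follows. This is an illustrative reconstruction, not the released implementation: sink candidates are taken as top-k tokens by incoming attention mass at any timestep, and those whose mass variance over the trajectory exceeds a threshold are flagged as prunable (`top_k` and `var_threshold` are assumed hyperparameters):

```python
import numpy as np

def select_prunable_sinks(attn_maps, var_threshold=0.01, top_k=3):
    """Return candidate sink positions whose attention mass is unstable
    across denoising steps (assumed criterion; illustrative only).

    attn_maps: array of shape (T, L, L) -- row-stochastic attention maps
    over T denoising timesteps for a length-L sequence.
    """
    sink_mass = attn_maps.mean(axis=1)                 # (T, L) incoming mass
    # Candidate sinks: in the top-k by mass at any timestep.
    ranks = np.argsort(-sink_mass, axis=1)[:, :top_k]  # (T, top_k)
    candidates = np.unique(ranks)
    # Instability: variance of each token's mass over the trajectory.
    variance = sink_mass.var(axis=0)                   # (L,)
    return sorted(int(i) for i in candidates if variance[i] > var_threshold)

# Synthetic check: position 0 is a stable sink, position 5 a transient one.
rng = np.random.default_rng(0)
T, L = 8, 16
attn = rng.dirichlet(np.ones(L), size=(T, L))          # row-stochastic maps
attn[:, :, 0] += 0.5                                   # stable sink at 0
attn[T // 2:, :, 5] += 0.8                             # transient sink at 5
attn /= attn.sum(axis=-1, keepdims=True)
pruned = select_prunable_sinks(attn)
print(pruned)
```

The stable sink survives (low variance) while the transient one is flagged, matching the intuition that only unstable DLM sinks should be pruned.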
Our code and implementation will be publicly available on GitHub: https://github.com/VILA-Lab/Sink-Aware-Pruning.