Visual Mamba with DINO Pretraining

Official pretrained checkpoints for "RNN as Linear Transformer: A Closer Investigation into Representational Potentials of Visual Mamba Models"
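
If you just want to inspect the weights, the snippet below is a minimal loading sketch, not the repository's documented API; it assumes the checkpoints are standard PyTorch weight files, and the file name dino_mamba.pth is a placeholder for whichever checkpoint you download from this page.

```python
import torch

# Placeholder file name; substitute the checkpoint downloaded from this
# repository. We assume a plain PyTorch state dict, possibly wrapped in a
# dict under a "state_dict" key.
ckpt = torch.load("dino_mamba.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

# Inspect parameter names before wiring the weights into a Visual Mamba
# backbone via model.load_state_dict(state_dict).
for name in list(state_dict)[:10]:
    print(name)
```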

Model Description

Mamba, originally introduced for language modeling, has recently garnered attention as an effective backbone for vision tasks. However, its underlying mechanism in visual domains remains poorly understood. In this work, we systematically investigate Mamba’s representational properties and make three primary contributions. First, we theoretically analyze Mamba’s relationship to Softmax and Linear Attention, showing that it can be viewed as a low-rank approximation of Softmax Attention and thereby bridging the representational gap between the Softmax and Linear forms. Second, we introduce a novel binary segmentation metric for evaluating activation maps, extending qualitative assessment to a quantitative measure that demonstrates Mamba’s capacity to model long-range dependencies. Third, by leveraging DINO for self-supervised pretraining, we obtain clearer activation maps than those produced by standard supervised training, highlighting Mamba’s potential for interpretability. Notably, our model achieves 78.5% linear probing accuracy on ImageNet, underscoring its strong performance. We hope this work provides valuable insights for future investigations of Mamba-based vision architectures.
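
To make the low-rank view concrete, the sketch below contrasts standard softmax attention with a linear-attention form. This is not the paper's derivation: the elu(x) + 1 feature map is a common stand-in chosen purely for illustration. Because the implicit attention matrix factors as phi(Q) phi(K)^T, its rank is capped by the feature dimension, which is the sense in which linear forms (and, by the paper's analysis, Mamba) act as low-rank approximations of Softmax Attention.

```python
import torch
import torch.nn.functional as F

def softmax_attention(q, k, v):
    # Full softmax attention: the n x n score matrix can be full rank.
    scores = (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v

def linear_attention(q, k, v, eps=1e-6):
    # Linear attention replaces exp(q . k) with a feature-map product
    # phi(q) . phi(k); phi = elu + 1 is one common choice, used here only
    # for illustration. The implicit attention matrix phi(Q) phi(K)^T has
    # rank at most the feature dimension, i.e. it is a low-rank
    # approximation of the softmax form.
    q, k = F.elu(q) + 1, F.elu(k) + 1
    kv = k.transpose(-2, -1) @ v                                 # (d, d_v) summary
    z = q @ k.sum(dim=-2, keepdim=True).transpose(-2, -1) + eps  # normalizer
    return (q @ kv) / z

q, k, v = (torch.randn(1, 8, 16) for _ in range(3))
print(softmax_attention(q, k, v).shape)  # torch.Size([1, 8, 16])
print(linear_attention(q, k, v).shape)   # torch.Size([1, 8, 16])
```

The binary segmentation metric is defined precisely in the paper; the sketch below only illustrates the general idea under an assumed recipe: normalize the activation map to [0, 1], threshold it into a binary mask, and score it against a ground-truth object mask with IoU. The function name and threshold are hypothetical.

```python
import torch

def activation_iou(act_map, gt_mask, threshold=0.5):
    # Hypothetical scoring recipe, not the paper's exact metric:
    # min-max normalize the activation map, binarize at a threshold,
    # and compute IoU against the ground-truth mask.
    act = (act_map - act_map.min()) / (act_map.max() - act_map.min() + 1e-8)
    pred = act > threshold
    gt = gt_mask.bool()
    inter = (pred & gt).sum().float()
    union = (pred | gt).sum().float()
    return (inter / (union + 1e-8)).item()

# Toy example: a 14 x 14 activation map scored against a square mask.
act_map = torch.rand(14, 14)
gt_mask = torch.zeros(14, 14)
gt_mask[4:10, 4:10] = 1
print(activation_iou(act_map, gt_mask))
```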

Links

Paper: https://arxiv.org/abs/2511.18380

Citation

@article{yang2025dinomamba,
  title={RNN as Linear Transformer: A Closer Investigation into Representational Potentials of Visual Mamba Models},
  author={Yang, Timing and Wei, Guoyizhe and Yuille, Alan and Wang, Feng},
  journal={arXiv preprint arXiv:2511.18380},
  year={2025}
}