UniT: Unified Multimodal Chain-of-Thought Test-time Scaling Paper • 2602.12279 • Published 12 days ago • 19
OneVision-Encoder: Codec-Aligned Sparsity as a Foundational Principle for Multimodal Intelligence Paper • 2602.08683 • Published 15 days ago • 47
CoPE-VideoLM: Codec Primitives For Efficient Video Language Models Paper • 2602.13191 • Published 11 days ago • 29
lmms-lab-encoder/wd_temporal_grounding_frames_max_64_max_448x448_pixels_with_fps Updated 10 days ago • 49
GigaBrain-0.5M*: a VLA That Learns From World Model-Based Reinforcement Learning Paper • 2602.12099 • Published 12 days ago • 56
ProCLIP: Progressive Vision-Language Alignment via LLM-based Embedder Paper • 2510.18795 • Published Oct 21, 2025 • 11
DanQing: An Up-to-Date Large-Scale Chinese Vision-Language Pre-training Dataset Paper • 2601.10305 • Published Jan 15 • 36
Innovator-VL: A Multimodal Large Language Model for Scientific Discovery Paper • 2601.19325 • Published 28 days ago • 79