SDAR Collection The models without suffixes use the default block size = 4. • 21 items • Updated 1 day ago • 7
Game-TARS: Pretrained Foundation Models for Scalable Generalist Multimodal Game Agents Paper • 2510.23691 • Published Oct 27, 2025 • 53
Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models Paper • 2509.06949 • Published Sep 8, 2025 • 55
V-JEPA 2 Collection A frontier video understanding model developed by FAIR, Meta, which extends the pretraining objectives of V-JEPA (https://ai.meta.com/blog/v-jepa-yann) • 8 items • Updated Jun 13, 2025 • 179
NativeRes-LLaVA Collection LLaVA using images at their native resolution • 7 items • Updated Jun 14, 2025 • 5
ReasonFlux-Coder Collection Coding LLMs that excel at both writing code and generating unit tests. • 9 items • Updated May 26, 2025 • 11
Co-Evolving LLM Coder and Unit Tester via Reinforcement Learning Paper • 2506.03136 • Published Jun 3, 2025 • 25
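The ReasonFlux-Coder / Co-Evolving entries above describe training a coder and a unit tester jointly from their interaction. As a rough illustration of that interaction only (not the paper's training setup or reward formula), the sketch below runs candidate unit tests against candidate solutions and turns the pass/fail matrix into two reward signals; the hard-coded samples and reward definitions are hypothetical stand-ins for model generations and the paper's actual objective.

```python
# Toy sketch of the execution-feedback idea behind co-evolving a coder and a
# unit tester: solutions are scored by the generated tests, and tests are
# scored by how well they separate correct from incorrect solutions.
# Illustrative only; the "model outputs" below are hard-coded stand-ins.

def run_test(solution_src: str, test_src: str) -> bool:
    """Execute a candidate solution and a candidate test in a shared
    namespace; return True if the test's assertions pass."""
    namespace: dict = {}
    try:
        exec(solution_src, namespace)   # defines e.g. `add`
        exec(test_src, namespace)       # runs assertions against it
        return True
    except Exception:
        return False

# Stand-ins for sampled generations (normally produced by the LLMs).
solutions = [
    "def add(a, b):\n    return a + b",   # correct
    "def add(a, b):\n    return a - b",   # buggy
]
tests = [
    "assert add(2, 3) == 5",
    "assert add(0, 0) == 0",              # weak test: both solutions pass
]

# Pass/fail matrix: reward the coder for passing tests, and reward the tester
# for tests that discriminate between solutions (a test everyone passes or
# everyone fails carries no signal in this toy scoring).
results = [[run_test(s, t) for t in tests] for s in solutions]
solution_reward = [sum(row) / len(tests) for row in results]
test_reward = [
    1.0 - abs(sum(results[i][j] for i in range(len(solutions))) / len(solutions) - 0.5) * 2
    for j in range(len(tests))
]

print("pass matrix:", results)
print("solution rewards:", solution_reward)
print("test discrimination rewards:", test_reward)
```

In a real pipeline these signals would feed an RL objective over many sampled generations; the toy matrix only shows why a test that every candidate passes earns no credit while a discriminative one does.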