PersonaPlex: Voice and Role Control for Full Duplex Conversational Speech Models Paper • 2602.06053 • Published 29 days ago
Post: We collaborated with Hugging Face to enable you to train MoE models 12× faster with 35% less VRAM via our new Triton kernels (no accuracy loss). 🤗 Train gpt-oss locally on 12.8GB VRAM with our free notebooks: https://unsloth.ai/docs/new/faster-moe
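For a rough idea of the flow, here is a minimal sketch using Unsloth's standard FastLanguageModel API. The model id unsloth/gpt-oss-20b, the LoRA settings, and the one-example dataset are illustrative assumptions, not the contents of the linked notebooks:

```python
# Minimal sketch of LoRA fine-tuning a gpt-oss MoE model with Unsloth.
# The model id, LoRA hyperparameters, and toy dataset below are assumptions
# for illustration; see the linked notebooks for the actual setup.
from unsloth import FastLanguageModel
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Load in 4-bit so the model fits in a small VRAM budget; Unsloth patches in
# its Triton kernels (including the MoE kernels) when the model is loaded.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b",  # assumed repo id
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these small matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Tiny placeholder dataset with a single "text" column.
dataset = Dataset.from_dict({"text": ["### Question: 2+2?\n### Answer: 4"]})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # named processing_class in newer trl versions
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        max_steps=30,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```

Loading in 4-bit and training only the LoRA adapters is the usual way a model this size fits into roughly that VRAM budget.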
Post: Now with extra functionality at the same LTX-2 HF Space: you can now also add your last frame alongside your first frame to guide the generated videos by choosing our frame interpolation mode. Try it out: alexnasa/ltx-2-TURBO
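For programmatic use, a hedged sketch with gradio_client is below. Client and handle_file are the library's actual API; the endpoint name and keyword arguments are placeholders, so check the Space's "Use via API" panel for the real signature:

```python
# Hypothetical sketch of calling the LTX-2 Space's frame interpolation mode
# from Python. Client() and handle_file() are real gradio_client APIs; the
# api_name and keyword arguments below are placeholders -- inspect the
# Space's "Use via API" panel for the actual endpoint signature.
from gradio_client import Client, handle_file

client = Client("alexnasa/ltx-2-TURBO")

result = client.predict(
    prompt="a slow camera pan across a foggy forest",  # placeholder prompt
    first_frame=handle_file("first.png"),  # guides the opening of the video
    last_frame=handle_file("last.png"),    # new: guides the closing frame
    mode="frame interpolation",            # assumed name of the new mode
    api_name="/generate",                  # placeholder endpoint name
)
print(result)  # typically a path or URL to the generated video
```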
Modality Gap-Driven Subspace Alignment Training Paradigm For Multimodal Large Language Models Paper • 2602.07026 • Published 9 days ago
OpenResearcher Collection: A Fully Open Pipeline for Long-Horizon Deep Research Trajectory Synthesis • 7 items • Updated 1 day ago