AIdeaLab-VideoMoE-7B-A2B
Introduction

This document is the model card for AIdeaLab-VideoMoE, a conceptual model of a spatio-temporally decoupled Mixture-of-Experts (MoE) architecture proposed by AIdeaLab Inc. For inquiries regarding training, technical collaboration, or research partnerships, please feel free to contact us via our contact form.

VideoMoE is built using a foundation model developed by AIdeaLab under the support of GENIAC (Generative AI Accelerator Challenge)—a national project conducted by the Ministry of Economy, Trade and Industry (METI) and NEDO (New Energy and Industrial Technology Development Organization) to strengthen Japan’s domestic generative AI capabilities.
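The spatio-temporally decoupled MoE idea can be illustrated with a toy NumPy sketch. This is purely our own illustrative approximation: the shapes, the top-1 router, and the two-stage spatial-then-temporal routing below are assumptions for exposition, not the actual VideoMoE architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def top1_moe(x, experts, gate_w):
    """Route each token to the single expert with the highest gate score.

    x: (N, D) tokens; experts: list of (D, D) weight matrices; gate_w: (D, E).
    """
    logits = x @ gate_w                # (N, E) gate scores
    choice = logits.argmax(axis=1)     # top-1 expert index per token
    out = np.empty_like(x)
    for e, w in enumerate(experts):
        mask = choice == e
        out[mask] = x[mask] @ w        # each token only pays for one expert
    return out

T, S, D, E = 4, 6, 8, 3                # frames, spatial tokens, dim, experts
x = rng.standard_normal((T, S, D))

# Stage 1 (spatial experts): route every token independently across space.
spatial = top1_moe(
    x.reshape(-1, D),
    [rng.standard_normal((D, D)) for _ in range(E)],
    rng.standard_normal((D, E)),
).reshape(T, S, D)

# Stage 2 (temporal experts): regroup so the time axis is contiguous,
# then route with a separate expert pool along time.
y = spatial.transpose(1, 0, 2).reshape(-1, D)
temporal = top1_moe(
    y,
    [rng.standard_normal((D, D)) for _ in range(E)],
    rng.standard_normal((D, E)),
)
out = temporal.reshape(S, T, D).transpose(1, 0, 2)
print(out.shape)  # (4, 6, 8)
```

Decoupling the two expert pools means spatial and temporal capacity can be scaled independently while each token still activates only one expert per stage.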

Usage

Install the inference code from source. We recommend managing the Python environment with a tool such as uv.

```shell
git clone https://github.com/AIdeaLab/ST-MoE-DiT.git
cd ST-MoE-DiT
pip install -e .
```

Next, download the model from Hugging Face:

```shell
hf download aidealab/AIdeaLab-VideoMoE-7B-A2B videomoe.safetensors --local-dir=.
```
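If you prefer to script the download, roughly the same thing can be done from Python with the huggingface_hub library (the repo and file names are taken from the command above; `fetch_checkpoint` is our own helper name):

```python
from huggingface_hub import hf_hub_download

def fetch_checkpoint(local_dir: str = ".") -> str:
    """Download videomoe.safetensors into local_dir and return its local path."""
    return hf_hub_download(
        repo_id="aidealab/AIdeaLab-VideoMoE-7B-A2B",
        filename="videomoe.safetensors",
        local_dir=local_dir,
    )
```

Calling `fetch_checkpoint()` performs the download and returns the path to the local file.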

Then run the inference script to generate a video:

```shell
python infer.py
```

The script generates a video. During the run, additional models are downloaded from ModelScope, so the first execution may take some time.

Gallery