
# Fine-tuned CTC model export

## sw_data_ctc_local (CTC fine-tuned)

This folder contains a native fairseq2 checkpoint exported from the training run in this notebook.

## Required files and directories

- `model/pp_00/tp_00/sdp_00.pt` (native fairseq2 checkpoint shard)
- `training_config.yaml` (training config used)
- `dataset_card.yaml` (dataset card used)
- `README.md` (this file)
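Before loading, it can be useful to confirm the export directory is complete. The helper below is a hypothetical sketch (not part of the export); the relative paths are taken from the list above.

```python
from pathlib import Path

# Files this card expects inside the export directory.
REQUIRED = [
    "model/pp_00/tp_00/sdp_00.pt",
    "training_config.yaml",
    "dataset_card.yaml",
    "README.md",
]


def missing_files(export_dir: str) -> list[str]:
    """Return the required paths that are absent under export_dir."""
    root = Path(export_dir)
    return [rel for rel in REQUIRED if not (root / rel).exists()]
```

An empty return value means all four entries are present and loading can proceed.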

## Local loading (recommended)

1. Ensure these paths exist:
   - `{assets_dir}/sw_data_ctc_local.yaml`
   - `{export_dir}/model/pp_00/tp_00/sdp_00.pt`
2. Set the asset directory:
   ```bash
   export FAIRSEQ2_USER_ASSET_DIR="{assets_dir}"
   ```
3. Run inference:
   ```python
   import torch
   from omnilingual_asr.models.inference.pipeline import ASRInferencePipeline

   device = "cuda" if torch.cuda.is_available() else "cpu"
   dtype = torch.bfloat16 if device == "cuda" else torch.float32
   pipeline = ASRInferencePipeline(model_card="sw_data_ctc_local", device=device, dtype=dtype)
   preds = pipeline.transcribe(["/path/to/audio.wav"], batch_size=1)
   print(preds[0])
   ```
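If you prefer to stay entirely in Python, the environment variable from step 2 can also be set with `os.environ`. This is a sketch under the assumption that fairseq2 reads `FAIRSEQ2_USER_ASSET_DIR` when the asset store is first initialized, so the assignment must happen before importing `omnilingual_asr` or `fairseq2` modules; the directory path is a placeholder.

```python
import os

# Point fairseq2 at the directory holding sw_data_ctc_local.yaml.
# Set this *before* importing omnilingual_asr / fairseq2, since the
# asset store may read the variable only once at initialization.
os.environ["FAIRSEQ2_USER_ASSET_DIR"] = "/path/to/fairseq2_assets"
```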

## Notes

- The checkpoint is in *native fairseq2* format (directory-based).
- If you re-export a new checkpoint, overwrite the `model/` directory and update the asset card.

## Export metadata

- Model card name: `sw_data_ctc_local`
- Model directory: `/home/eo/Workspace/OMNI/sw_data_ctc_export/model`
- Training config: `/home/eo/Workspace/OMNI/sw_data_ctc_export/training_config.yaml`
- Dataset card: `/home/eo/Workspace/OMNI/sw_data_ctc_export/dataset_card.yaml`
- Source step: `/home/eo/Workspace/OMNI/omnilingual-asr/outputs/sw_data_ctc/ws_1.a16ae016/checkpoints/step_2000`
- Asset card: `/home/eo/Workspace/OMNI/fairseq2_assets/sw_data_ctc_local.yaml`
