Spaces: Runtime error

Update app.py

app.py CHANGED
@@ -28,7 +28,7 @@ DESCRIPTION = """\
 
 This Gradio demo showcases **IndicSeamless**, a fine-tuned **SeamlessM4T-v2-large** model for **speech-to-text translation** across **13 Indian languages and English**. Trained on **BhasaAnuvaad**, the largest open-source speech translation dataset for Indian languages, it delivers **accurate and robust translations** across diverse linguistic and acoustic conditions.
 
-π **Model Checkpoint:** [ai4bharat/seamless
+π **Model Checkpoint:** [ai4bharat/indic-seamless](https://huggingface.co/ai4bharat/indic-seamless)
 
 #### **How to Use:**
 1. **Upload or record** an audio clip in any supported Indian language.
@@ -42,9 +42,9 @@ hf_token = os.getenv("HF_TOKEN")
 device = "cuda:0" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu"
 torch_dtype = torch.bfloat16 if device != "cpu" else torch.float32
 
-model = SeamlessM4Tv2ForSpeechToText.from_pretrained("ai4bharat/seamless
-processor = SeamlessM4TFeatureExtractor.from_pretrained("ai4bharat/seamless
-tokenizer = SeamlessM4TTokenizer.from_pretrained("ai4bharat/seamless
+model = SeamlessM4Tv2ForSpeechToText.from_pretrained("ai4bharat/indic-seamless", torch_dtype=torch_dtype, token=hf_token).to(device)
+processor = SeamlessM4TFeatureExtractor.from_pretrained("ai4bharat/indic-seamless", token=hf_token)
+tokenizer = SeamlessM4TTokenizer.from_pretrained("ai4bharat/indic-seamless", token=hf_token)
 
 CACHE_EXAMPLES = os.getenv("CACHE_EXAMPLES") == "1" and torch.cuda.is_available()
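The device and dtype selection kept by this commit can be exercised in isolation. Below is a minimal sketch of that logic with the availability checks injected as parameters so it runs without `torch`; the helper name `pick_device` is hypothetical, not part of app.py:

```python
def pick_device(cuda_available, mps_available):
    # Mirrors the ternary in app.py: prefer a CUDA GPU, then Apple
    # Silicon (MPS), and fall back to CPU.
    device = "cuda:0" if cuda_available else "mps" if mps_available else "cpu"
    # bfloat16 on accelerators saves memory; CPU inference stays in float32.
    dtype = "float32" if device == "cpu" else "bfloat16"
    return device, dtype
```

In app.py itself the same two expressions use `torch.cuda.is_available()` and `torch.backends.mps.is_available()` directly, with `torch.bfloat16` / `torch.float32` as the dtype objects.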
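The substance of the fix is that all three `from_pretrained` calls now point at the renamed `ai4bharat/indic-seamless` checkpoint and pass the auth token explicitly. A minimal sketch of assembling those keyword arguments, without downloading any weights (the helper name `indic_seamless_load_kwargs` is hypothetical):

```python
import os

def indic_seamless_load_kwargs(torch_dtype):
    # Arguments matching the fixed model load in app.py: the renamed
    # checkpoint, an explicit dtype, and the HF auth token. token may be
    # None, which from_pretrained treats as anonymous access.
    return {
        "pretrained_model_name_or_path": "ai4bharat/indic-seamless",
        "torch_dtype": torch_dtype,
        "token": os.getenv("HF_TOKEN"),
    }
```

In the Space these kwargs go to `SeamlessM4Tv2ForSpeechToText.from_pretrained` (followed by `.to(device)`), while the feature extractor and tokenizer need only the checkpoint name and token.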