Commit d22a730

Parent(s): afdc9bf

README updates to include code snippets and correct leaderboard URL (#6)

- README updates to include code snippets and correct leaderboard URL (4b653d4d838af288df031ebbba95b8edd911adca)
- up (8e9b160e5190a078e2a65acb3b09175d286ff9bf)

Co-authored-by: Vaibhav Srivastav <reach-vb@users.noreply.huggingface.co>
README.md CHANGED
@@ -33,6 +33,7 @@ task_categories:
 - [Dataset Summary](#dataset-summary)
 - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
 - [Languages](#languages)
+- [How to use](#how-to-use)
 - [Dataset Structure](#dataset-structure)
 - [Data Instances](#data-instances)
 - [Data Fields](#data-fields)
@@ -57,7 +58,7 @@ task_categories:
 - **Homepage:** [MultiLingual LibriSpeech ASR corpus](http://www.openslr.org/94)
 - **Repository:** [Needs More Information]
 - **Paper:** [MLS: A Large-Scale Multilingual Dataset for Speech Research](https://arxiv.org/abs/2012.03411)
-- **Leaderboard:** [
+- **Leaderboard:** [🤗 Autoevaluate Leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=facebook%2Fmultilingual_librispeech&only_verified=0&task=automatic-speech-recognition&config=-unspecified-&split=-unspecified-&metric=wer)
 
 ### Dataset Summary
 
@@ -75,6 +76,55 @@ MLS dataset is a large multilingual corpus suitable for speech research. The dat
 
 The dataset is derived from read audiobooks from LibriVox and consists of 8 languages - English, German, Dutch, Spanish, French, Italian, Portuguese, Polish
 
+### How to use
+
+The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
+
+For example, to download the German config, simply specify the corresponding language config name (i.e., "german" for German):
+```python
+from datasets import load_dataset
+
+mls = load_dataset("facebook/multilingual_librispeech", "german", split="train")
+```
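+
+If you are unsure which language configs are available, you can list them with `datasets.get_dataset_config_names`; a minimal sketch:
+```python
+from datasets import get_dataset_config_names
+
+# Prints the available language configs, e.g. "german", "english", ...
+print(get_dataset_config_names("facebook/multilingual_librispeech"))
+```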
+
+Using the `datasets` library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
+```python
+from datasets import load_dataset
+
+mls = load_dataset("facebook/multilingual_librispeech", "german", split="train", streaming=True)
+
+print(next(iter(mls)))
+```
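+
+Each example is a plain dictionary; a minimal sketch of pulling out the waveform and its transcription (the `audio` and `text` column names are assumed from this dataset's schema):
+```python
+from datasets import load_dataset
+
+mls = load_dataset("facebook/multilingual_librispeech", "german", split="train", streaming=True)
+sample = next(iter(mls))
+
+# "audio" holds the decoded waveform and its sampling rate; "text" holds the transcription.
+print(sample["audio"]["sampling_rate"])
+print(sample["audio"]["array"].shape)
+print(sample["text"])
+```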
+
+*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
+
+Local:
+
+```python
+from datasets import load_dataset
+from torch.utils.data import DataLoader
+from torch.utils.data.sampler import BatchSampler, RandomSampler
+
+mls = load_dataset("facebook/multilingual_librispeech", "german", split="train")
+batch_sampler = BatchSampler(RandomSampler(mls), batch_size=32, drop_last=False)
+dataloader = DataLoader(mls, batch_sampler=batch_sampler)
+```
+
+Streaming:
+
+```python
+from datasets import load_dataset
+from torch.utils.data import DataLoader
+
+mls = load_dataset("facebook/multilingual_librispeech", "german", split="train", streaming=True)
+dataloader = DataLoader(mls, batch_size=32)
+```
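+
+Audio clips vary in length, so the default collation may not stack raw waveforms into one tensor; a minimal sketch of a padding `collate_fn` (the helper name is illustrative, and the `audio`/`text` columns are assumed as above):
+```python
+import torch
+from datasets import load_dataset
+from torch.nn.utils.rnn import pad_sequence
+from torch.utils.data import DataLoader
+
+def collate_batch(samples):
+    # Pad raw waveforms to the longest clip in the batch; keep transcriptions as strings.
+    waveforms = [torch.as_tensor(s["audio"]["array"], dtype=torch.float32) for s in samples]
+    return {"input_values": pad_sequence(waveforms, batch_first=True),
+            "text": [s["text"] for s in samples]}
+
+mls = load_dataset("facebook/multilingual_librispeech", "german", split="train", streaming=True)
+dataloader = DataLoader(mls, batch_size=4, collate_fn=collate_batch)
+```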
+
+To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
+
+### Example scripts
+
+Train your own CTC or Seq2Seq Automatic Speech Recognition models on MultiLingual Librispeech with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
+
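+Before training, a pretrained checkpoint can be sanity-checked on MLS with the `transformers` ASR pipeline; a minimal sketch (the checkpoint name is just an example, any ASR model for the target language works):
+```python
+from datasets import load_dataset
+from transformers import pipeline
+
+asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-large-xlsr-53-german")
+
+mls = load_dataset("facebook/multilingual_librispeech", "german", split="test", streaming=True)
+sample = next(iter(mls))
+
+# The pipeline accepts the raw waveform together with its sampling rate.
+prediction = asr({"raw": sample["audio"]["array"], "sampling_rate": sample["audio"]["sampling_rate"]})
+print(prediction["text"])  # model transcription
+print(sample["text"])      # reference transcription
+```
+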
 ## Dataset Structure
 
 ### Data Instances