Update README.md
README.md
CHANGED

---
license: mit
tags:
- symbolic-music
- music-information-retrieval
- classification
- retrieval
- benchmark
---

# SyMuRBench Datasets and Precomputed Features

This repository contains datasets and precomputed features for [SyMuRBench](https://github.com/Mintas/SyMuRBench), a benchmark for symbolic music understanding models. It includes metadata and MIDI files for multiple classification and retrieval tasks, along with pre-extracted **music21** and **jSymbolic** features.

You can install and use the full pipeline via [https://github.com/Mintas/SyMuRBench](https://github.com/Mintas/SyMuRBench).

---

## Overview

SyMuRBench supports evaluation across diverse symbolic music tasks, including composer, genre, emotion, and instrument classification, as well as score-performance retrieval. This Hugging Face dataset provides:

- Dataset metadata (CSV files)
- MIDI files organized by task
- Precomputed **music21** and **jSymbolic** features
- A configuration-ready structure for immediate use in benchmarking

---

## Tasks Description

| Task Name | Source Dataset | Task Type | # of Classes | # of Files | Default Metrics |
|-----------|----------------|-----------|--------------|------------|-----------------|
| ComposerClassificationASAP | ASAP | Multiclass Classification | 7 | 197 | weighted F1 score, balanced accuracy |
| GenreClassificationMMD | MetaMIDI | Multiclass Classification | 7 | 2,795 | weighted F1 score, balanced accuracy |
| GenreClassificationWMTX | WikiMT-X | Multiclass Classification | 8 | 985 | weighted F1 score, balanced accuracy |
| EmotionClassificationEMOPIA | EMOPIA | Multiclass Classification | 4 | 191 | weighted F1 score, balanced accuracy |
| EmotionClassificationMIREX | MIREX | Multiclass Classification | 5 | 163 | weighted F1 score, balanced accuracy |
| InstrumentDetectionMMD | MetaMIDI | Multilabel Classification | 128 | 4,675 | weighted F1 score |
| ScorePerformanceRetrievalASAP | ASAP | Retrieval | - | 438 (219 pairs) | R@1, R@5, R@10, Median Rank |
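
The classification metrics listed above are the usual weighted F1 score and balanced accuracy. As an illustration only (the benchmark's own evaluation pipeline may compute them differently; `y_true` and `y_pred` below are hypothetical label lists), they could be computed with scikit-learn like this:

```python
from sklearn.metrics import balanced_accuracy_score, f1_score

# Hypothetical ground-truth and predicted labels for one classification task
y_true = ["Q1", "Q2", "Q1", "Q4", "Q3"]
y_pred = ["Q1", "Q2", "Q3", "Q4", "Q3"]

# Weighted F1: per-class F1 averaged with class-frequency weights
weighted_f1 = f1_score(y_true, y_pred, average="weighted")

# Balanced accuracy: mean of per-class recall
bal_acc = balanced_accuracy_score(y_true, y_pred)

print(f"weighted F1 = {weighted_f1:.3f}, balanced accuracy = {bal_acc:.3f}")
```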

---

## Precomputed Features

Precomputed features are available in the `data/features/` folder:
- `music21_full_dataset.parquet`
- `jsymbolic_full_dataset.parquet`

Each file contains a unified table with:
- `midi_file`: Filename of the MIDI file
- `task`: Corresponding task name
- `E_0` to `E_N`: Feature vector

### Example

| midi_file | task | E_0 | E_1 | ... | E_672 | E_673 |
|-----------|------|-----|-----|-----|-------|-------|
| Q1_0vLPYiPN7qY_1.mid | EmotionClassificationEMOPIA | 0.0 | 0.0 | ... | 0.0 | 0.0 |
| Q1_4dXC1cC7crw_0.mid | EmotionClassificationEMOPIA | 0.0 | 0.0 | ... | 0.0 | 0.0 |

These can be loaded with:

```python
import pandas as pd

df = pd.read_parquet("data/features/music21_full_dataset.parquet")
```
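
For a single task, the table can be filtered on the `task` column and the `E_*` columns stacked into a feature matrix. A minimal sketch assuming only pandas and the column layout described above:

```python
import pandas as pd

df = pd.read_parquet("data/features/music21_full_dataset.parquet")

# Keep only the rows belonging to one task
emopia = df[df["task"] == "EmotionClassificationEMOPIA"]

# Feature columns are named E_0 ... E_N
feature_cols = [c for c in df.columns if c.startswith("E_")]
X = emopia[feature_cols].to_numpy()

print(X.shape)  # (number of files, number of features)
```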

## File Structure

The dataset is distributed as a ZIP archive:

`data/datasets.zip`
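
If you prefer not to use the helper shown under "How to Use" below, the archive can also be extracted manually. A minimal sketch using only the Python standard library (the output paths are an assumption based on the layout shown below):

```python
import zipfile
from pathlib import Path

archive = Path("data/datasets.zip")
target = Path("data")

# Unpack the archive; this should produce the datasets/ tree shown below
with zipfile.ZipFile(archive) as zf:
    zf.extractall(target)

print(sorted(p.name for p in (target / "datasets").iterdir()))
```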

After extraction, the structure is:

```
datasets/
├── composer_and_retrieval_datasets/
│   ├── metadata_composer_dataset.csv
│   ├── metadata_retrieval_dataset.csv
│   └── ... (MIDI files organized in subfolders)
├── genre_dataset/
│   ├── metadata_genre_dataset.csv
│   └── midis/
├── wikimtx_dataset/
│   ├── metadata_wikimtx_dataset.csv
│   └── midis/
├── emopia_dataset/
│   ├── metadata_emopia_dataset.csv
│   └── midis/
├── mirex_dataset/
│   ├── metadata_mirex_dataset.csv
│   └── midis/
└── instrument_dataset/
    ├── metadata_instrument_dataset.csv
    └── midis/
```

* CSV files: Contain `filename` and `label` (or pair information for the retrieval task); see the loading sketch below.
* MIDI files: Used as input for the feature extractors.
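
A minimal sketch of reading one of the metadata files with pandas (the `filename` and `label` columns follow the description above; adjust the path to wherever the archive was extracted):

```python
import pandas as pd

# Genre task metadata: one row per MIDI file with its label
meta = pd.read_csv("data/datasets/genre_dataset/metadata_genre_dataset.csv")

print(meta.columns.tolist())         # expected to include "filename" and "label"
print(meta["label"].value_counts())  # class distribution for the genre task
```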

---

## How to Use

You can download and extract everything using the built-in utility:

```python
from symurbench.utils import load_datasets

load_datasets(output_folder="./data", load_features=True)
```

This will:

* Download `datasets.zip` and extract it
* Optionally download the precomputed features
* Update config paths automatically

---

## License

This dataset is released under the MIT License.

---

## Citation

If you use SyMuRBench in your work, please cite:

```bibtex
@inproceedings{symurbench2025,
  author    = {Petr Strepetov and Dmitrii Kovalev},
  title     = {SyMuRBench: Benchmark for Symbolic Music Representations},
  booktitle = {Proceedings of the 3rd International Workshop on Multimedia Content Generation and Evaluation: New Methods and Practice (McGE '25)},
  year      = {2025},
  pages     = {9},
  publisher = {ACM},
  address   = {Dublin, Ireland},
  doi       = {10.1145/3746278.3759392}
}
```