---
pretty_name: SongFormBench
tags:
- MSA
- Benchmark
license: cc-by-4.0
language:
- en
- zh
---
# SongFormBench
[English | [中文](README_ZH.md)]
**A High-Quality Benchmark for Music Structure Analysis**


[Paper](https://arxiv.org/abs/2510.02797) | [Code](https://github.com/ASLP-lab/SongFormer) | [Demo](https://huggingface.co/spaces/ASLP-lab/SongFormer) | [Model](https://huggingface.co/ASLP-lab/SongFormer) | [SongFormDB](https://huggingface.co/datasets/ASLP-lab/SongFormDB) | [SongFormBench](https://huggingface.co/datasets/ASLP-lab/SongFormBench) | [Discord](https://discord.gg/p5uBryC4Zs) | [ASLP Lab](http://www.npu-aslp.org/)
Chunbo Hao¹\*, Ruibin Yuan²,⁵\*, Jixun Yao¹, Qixin Deng³,⁵,
Xinyi Bai⁴,⁵, Wei Xue², Lei Xie¹†
\*Equal contribution †Corresponding author
¹Audio, Speech and Language Processing Group (ASLP@NPU), Northwestern Polytechnical University
²Hong Kong University of Science and Technology
³Northwestern University
⁴Cornell University
⁵Multimodal Art Projection (M-A-P)
---
## What is SongFormBench?
SongFormBench is a **carefully curated, expert-annotated benchmark** for evaluating music structure analysis (MSA) models. It provides a unified standard for comparing MSA systems.
### Dataset Composition
- **SongFormBench-HarmonixSet (BHX)**: 200 songs from HarmonixSet
- **SongFormBench-CN (BC)**: 100 Chinese popular songs
**Total: 300 high-quality annotated songs**
---
## Key Highlights
### **Unified Evaluation Standard**
- Establishes a **standardized benchmark** for fair comparison across MSA models
- Eliminates inconsistencies in evaluation protocols
### **Simple Label System**
- Adopts the widely used 7-class classification system (as described in [arXiv:2205.14700](https://arxiv.org/abs/2205.14700))
- Preserves **pre-chorus** segments for finer granularity
- Easy conversion to 7 classes (pre-chorus → verse) for compatibility; see the sketch after this list
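A minimal sketch of that conversion, assuming each annotation is a list of `(start_sec, end_sec, label)` segments with lowercase label strings (the actual field names and segment format in the dataset may differ):

```python
# Collapse the extra "pre-chorus" class into "verse" to obtain the
# standard 7-class label set. The (start_sec, end_sec, label) segment
# format is an assumption; adapt it to the dataset's actual schema.
def to_seven_class(segments):
    return [
        (start, end, "verse" if label == "pre-chorus" else label)
        for start, end, label in segments
    ]

# Hypothetical example annotation
example = [(0.0, 12.3, "intro"), (12.3, 30.1, "verse"),
           (30.1, 41.7, "pre-chorus"), (41.7, 65.0, "chorus")]
print(to_seven_class(example))
```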
### **Expert-Verified Quality**
- Multi-source validation
- Manual corrections by expert annotators
### **Multilingual Coverage**
- **First Chinese MSA dataset** (100 songs)
- Bridges the gap in Chinese music structure analysis
- Enables cross-lingual MSA research
---
## Getting Started
### Quick Load
```python
from datasets import load_dataset
# Load the complete benchmark
dataset = load_dataset("ASLP-lab/SongFormBench")
```
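To confirm what was downloaded, you can print the resulting `DatasetDict` and peek at one record; no particular split or column names are assumed here, since they can be read from the printed schema:

```python
from datasets import load_dataset

# Load the complete benchmark
dataset = load_dataset("ASLP-lab/SongFormBench")

# Print the available splits and their features (column names and types)
print(dataset)

# Peek at the first record of the first split to see the annotation schema
first_split = next(iter(dataset.values()))
print(first_split[0])
```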
---
## Resources & Links
- Paper: [arXiv:2510.02797](https://arxiv.org/abs/2510.02797)
- Code: [GitHub Repository](https://github.com/ASLP-lab/SongFormer)
- Model: [SongFormer](https://huggingface.co/ASLP-lab/SongFormer)
- Dataset: [SongFormDB](https://huggingface.co/datasets/ASLP-lab/SongFormDB)
---
## Citation
```bibtex
@misc{hao2025songformer,
  title         = {SongFormer: Scaling Music Structure Analysis with Heterogeneous Supervision},
  author        = {Chunbo Hao and Ruibin Yuan and Jixun Yao and Qixin Deng and Xinyi Bai and Wei Xue and Lei Xie},
  year          = {2025},
  eprint        = {2510.02797},
  archivePrefix = {arXiv},
  primaryClass  = {eess.AS},
  url           = {https://arxiv.org/abs/2510.02797}
}
```
---
## Mel Spectrogram Details
Environment configuration follows the official BigVGAN implementation. If the original audio sources become unavailable, you can reconstruct the audio from the provided mel spectrograms as follows.
### SongFormBench-HarmonixSet
Uses the official HarmonixSet mel spectrograms. To reconstruct the audio:
```bash
# Clone the BigVGAN repository
git clone https://github.com/NVIDIA/BigVGAN.git
# Navigate to the HarmonixSet utilities
cd utils/HarmonixSet
# Set BIGVGAN_REPO_DIR in inference_e2e.sh to the cloned BigVGAN directory,
# then run the inference script
bash inference_e2e.sh
```
### SongFormBench-CN
Reproduce using [**bigvgan_v2_44khz_128band_256x**](https://huggingface.co/nvidia/bigvgan_v2_44khz_128band_256x).
First download `bigvgan_v2_44khz_128band_256x`, add the BigVGAN project directory to your `PYTHONPATH`, and then use the reference implementation in `utils/CN/infer.py`.
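As a complementary reference, below is a minimal mel-to-audio sketch adapted from the BigVGAN model card; the mel file path `example_mel.pt` and the output filename are placeholders, and `utils/CN/infer.py` remains the authoritative implementation:

```python
import torch
import soundfile as sf

# Requires the BigVGAN repository on PYTHONPATH (provides the `bigvgan` module).
import bigvgan

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the 44.1 kHz / 128-band / 256x-upsampling checkpoint from the Hub
model = bigvgan.BigVGAN.from_pretrained(
    "nvidia/bigvgan_v2_44khz_128band_256x", use_cuda_kernel=False
)
model.remove_weight_norm()
model = model.eval().to(device)

# Placeholder: load a precomputed mel spectrogram of shape [1, 128, frames]
mel = torch.load("example_mel.pt").to(device)

with torch.inference_mode():
    wav = model(mel)  # [1, 1, samples], values in [-1, 1]

sf.write("example_recon.wav", wav.squeeze().cpu().numpy(), 44100)
```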
---
## Contact
For questions, issues, or collaboration opportunities, please visit our [GitHub repository](https://github.com/ASLP-lab/SongFormer) or open an issue.