---
dataset_info:
  features:
    - name: images
      sequence: image
    - name: question
      dtype: string
    - name: answers
      sequence: string
    - name: correct_answer
      dtype: string
    - name: question_type
      dtype: string
  splits:
    - name: train
      num_bytes: 5167070090.512
      num_examples: 172384
    - name: static
      num_bytes: 3140831722.665
      num_examples: 127405
    - name: val
      num_bytes: 305661617.158
      num_examples: 4001
    - name: test
      num_bytes: 125653489.0
      num_examples: 150
  download_size: 2182325666
  dataset_size: 8739216919.335
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: static
        path: data/static-*
      - split: val
        path: data/val-*
      - split: test
        path: data/test-*
---

# SAT-v2 Dataset

## Paper

**SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models**

This dataset is part of the SAT (Spatial Aptitude Training) project, which introduces a dynamic benchmark for evaluating and improving spatial reasoning capabilities in multimodal language models.

- **Project Page**: [https://arijitray.com/SAT/](https://arijitray.com/SAT/)
- **Paper**: [arXiv:2412.07755](https://arxiv.org/abs/2412.07755)

## Dataset Description

SAT-v2 is a comprehensive spatial reasoning benchmark containing over 300,000 questions across multiple splits. The dataset tests various aspects of spatial understanding, including perspective-taking, object relationships, and dynamic scene understanding.

## Loading the Dataset

```python
from datasets import load_dataset

# Load the training split
dataset = load_dataset("array/SAT-v2", split="train")

# Or load a specific split
val_dataset = load_dataset("array/SAT-v2", split="val")
static_dataset = load_dataset("array/SAT-v2", split="static")
test_dataset = load_dataset("array/SAT-v2", split="test")

# Access a sample
sample = dataset[0]
print(sample["question"])
print(sample["answers"])
print(sample["correct_answer"])
```

## Dataset Splits

- **train**: 172,384 examples - Dynamic training questions
- **static**: 127,405 examples - Static spatial reasoning questions
- **val**: 4,001 examples - Validation set
- **test**: 150 examples - Test set

**Important Note on Test Set Evaluation:**

When evaluating on the test set, please use circular evaluation by switching the position of the correct answer to avoid position bias; a minimal sketch of this procedure is given at the end of this card. If you're using lmms-eval, refer to the implementation here: [https://github.com/arijitray1993/lmms-eval/tree/main/lmms_eval/tasks/sat_real](https://github.com/arijitray1993/lmms-eval/tree/main/lmms_eval/tasks/sat_real)

## Citation

If you use this dataset, please cite:

```bibtex
@misc{ray2025satdynamicspatialaptitude,
      title={SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models},
      author={Arijit Ray and Jiafei Duan and Ellis Brown and Reuben Tan and Dina Bashkirova and Rose Hendrix and Kiana Ehsani and Aniruddha Kembhavi and Bryan A. Plummer and Ranjay Krishna and Kuo-Hao Zeng and Kate Saenko},
      year={2025},
      eprint={2412.07755},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.07755},
}
```
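
## Circular Evaluation Sketch

The test-set note above asks for circular evaluation: the answer choices are reordered so that the correct answer appears in every position, and a question counts as correct only if the model answers correctly under every ordering. The following is a minimal sketch of that idea, not the lmms-eval implementation linked above; `ask_model` is a hypothetical placeholder you would replace with your own multimodal inference call, while the sample fields (`images`, `question`, `answers`, `correct_answer`) follow the schema declared in this card's header.

```python
from datasets import load_dataset


def ask_model(images, question, choices):
    """Hypothetical model call: return the index of the chosen answer.

    Replace this stub with your own multimodal model inference code.
    """
    raise NotImplementedError


def circular_correct(sample):
    """Circular evaluation for one sample: rotate the answer list so the
    correct answer occupies every position once, and count the sample as
    correct only if the model picks the correct answer every time."""
    answers = list(sample["answers"])
    correct = sample["correct_answer"]
    base = answers.index(correct)
    for shift in range(len(answers)):
        # Rotate so the correct answer lands at position `shift`
        rotated = answers[base - shift:] + answers[:base - shift]
        pred = ask_model(sample["images"], sample["question"], rotated)
        if rotated[pred] != correct:
            return False
    return True


test_set = load_dataset("array/SAT-v2", split="test")
accuracy = sum(circular_correct(s) for s in test_set) / len(test_set)
print(f"Circular accuracy: {accuracy:.3f}")
```

For the 150-question test split this multiplies the number of model calls by the number of answer choices per question, which is the intended cost of removing position bias.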