---
dataset_info:
  features:
    - name: question_id
      dtype: string
    - name: image
      dtype: image
    - name: subject
      dtype: string
    - name: question_type
      dtype: string
    - name: year
      dtype: string
    - name: paper
      dtype: string
    - name: language
      dtype: string
    - name: answer
      dtype: string
    - name: answer_sources
      dtype: string
    - name: requires_image
      dtype: bool
  splits:
    - name: train
      num_bytes: 101253285.86
      num_examples: 1460
  download_size: 97675003
  dataset_size: 101253285.86
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - image-text-to-text
  - question-answering
license: mit
language:
  - en
  - hi
tags:
  - multimodal
  - vlm
  - scientific-reasoning
  - benchmark
  - education
---

# mmJEE-Eval: A Bilingual Multimodal Benchmark for Exam-Style Evaluation of Vision-Language Models

- **Paper:** mmJEE-Eval: A Bilingual Multimodal Benchmark for Evaluating Scientific Reasoning in Vision-Language Models
- **Code:** https://github.com/ArkaMukherjee0/mmJEE-Eval
- **Project Page:** https://mmjee-eval.github.io

## Introduction

mmJEE-Eval is a bilingual, multimodal benchmark for evaluating vision-language models, comprising 1,460 challenging questions drawn from seven years (2019-2025) of India's JEE Advanced competitive examination. We evaluate 17 state-of-the-art VLMs and find that open-weight models (7B-400B parameters) struggle significantly, peaking at 40-50% accuracy, whereas frontier models from Google and OpenAI reach 77-84%. mmJEE-Eval is markedly harder than the text-only JEEBench, the only other well-established benchmark built on JEE Advanced problems, with performance drops of 18-56% across all models. Our analyses of metacognitive self-correction, cross-lingual consistency, and human-rated reasoning quality show that contemporary VLMs still exhibit genuine scientific reasoning deficits despite strong question-solving ability (as evidenced by high Pass@K accuracies), establishing mmJEE-Eval as a challenging, complementary benchmark that effectively discriminates between model capabilities.
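
For reference, Pass@K measures the probability that at least one of K sampled attempts at a question is correct. The sketch below uses the widely used unbiased estimator for pass@k given n attempts with c correct; whether the paper computes Pass@K in exactly this way is not specified here, so treat it as illustrative only:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    attempts drawn (without replacement) from n total attempts,
    of which c are correct, is correct."""
    if n - c < k:
        # Fewer than k incorrect attempts exist, so every k-subset
        # must contain at least one correct attempt.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical example: 8 attempts per question, 3 of them correct.
print(round(pass_at_k(n=8, c=3, k=1), 3))  # 0.375
print(round(pass_at_k(n=8, c=3, k=4), 3))  # 0.929
```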

## Sample Usage

You can load the dataset using the Hugging Face datasets library:

```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("ArkaMukherjee/mmJEE-Eval")

# Access the training split
train_dataset = dataset["train"]

# Print an example
print(train_dataset[0])

# To run evaluation scripts, please refer to the official GitHub repository:
# https://github.com/ArkaMukherjee0/mmJEE-Eval
```
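
Because each example carries structured metadata (`subject`, `question_type`, `year`, `paper`, `language`, `requires_image`), you can slice the benchmark before running an evaluation. A minimal sketch, assuming label strings such as `"Physics"` and `"hi"` (check the actual values in your copy of the dataset):

```python
from datasets import load_dataset

# Load just the train split (the only split declared in the metadata).
dataset = load_dataset("ArkaMukherjee/mmJEE-Eval", split="train")

# Slice the benchmark by the metadata fields declared above.
# NOTE: the label strings "Physics" and "hi" are assumptions; inspect
# the actual values first, e.g. sorted(set(dataset["subject"])).
subset = dataset.filter(
    lambda ex: ex["subject"] == "Physics"
    and ex["language"] == "hi"
    and ex["requires_image"]
)

print(len(subset))
example = subset[0]
print(example["question_id"], example["year"], example["answer"])

# The "image" column is an Image feature, so it decodes to a PIL image:
example["image"].save("question.png")
```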