arXiv:2602.02537

WorldVQA: Measuring Atomic World Knowledge in Multimodal Large Language Models

Published on Jan 28 · Submitted by taesiri on Feb 4

Abstract

AI-generated summary: WorldVQA is a benchmark for evaluating the visual world knowledge of multimodal large language models by separating visual knowledge retrieval from reasoning to measure memorized facts.

We introduce WorldVQA, a benchmark designed to evaluate the atomic visual world knowledge of Multimodal Large Language Models (MLLMs). Unlike current evaluations, which often conflate visual knowledge retrieval with reasoning, WorldVQA decouples these capabilities to strictly measure "what the model memorizes." The benchmark assesses the atomic capability of grounding and naming visual entities across a stratified taxonomy, spanning from common head-class objects to long-tail rarities. We expect WorldVQA to serve as a rigorous test of visual factuality, thereby establishing a standard for assessing the encyclopedic breadth and hallucination rates of current and next-generation frontier models.
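To make the setup concrete, below is a minimal sketch of how a WorldVQA-style evaluation could be scored: atomic naming items stratified by frequency tier, with per-tier accuracy and a simple hallucination rate. This is an illustration only; the item fields, tier labels, and the convention of counting a wrong, non-abstaining answer as a hallucination are assumptions, not the benchmark's actual schema or metric definitions.

from collections import defaultdict
from dataclasses import dataclass

# Hypothetical WorldVQA-style item: one image, one atomic naming question,
# a gold entity name, and a frequency tier (e.g. "head", "torso", "tail").
# These field names are illustrative, not the benchmark's real schema.
@dataclass
class Item:
    image_path: str
    question: str   # e.g. "What is the name of this landmark?"
    gold_name: str
    tier: str       # frequency stratum in the taxonomy

def evaluate(items, predict):
    """Compute per-tier accuracy and a simple hallucination rate.

    `predict(image_path, question)` stands in for an MLLM call; it returns
    a string answer, or None if the model abstains. Treating a wrong,
    non-abstaining answer as a hallucination is an assumed convention,
    not necessarily the paper's definition.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    hallucinated = 0

    for item in items:
        answer = predict(item.image_path, item.question)
        total[item.tier] += 1
        if answer is not None and answer.strip().lower() == item.gold_name.lower():
            correct[item.tier] += 1
        elif answer is not None:
            hallucinated += 1  # confidently named the wrong entity

    accuracy = {tier: correct[tier] / total[tier] for tier in total}
    hallucination_rate = hallucinated / sum(total.values())
    return accuracy, hallucination_rate

# Toy usage with a dummy model that always answers "Eiffel Tower".
if __name__ == "__main__":
    items = [
        Item("img_001.jpg", "Name this landmark.", "Eiffel Tower", "head"),
        Item("img_002.jpg", "Name this landmark.", "Sigiriya", "tail"),
    ]
    acc, hall = evaluate(items, lambda img, q: "Eiffel Tower")
    print(acc, hall)  # {'head': 1.0, 'tail': 0.0} 0.5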

Models citing this paper 0

Datasets citing this paper 2

Spaces citing this paper 0

Collections including this paper 1