MathVista: Evaluating Math Reasoning in Visual Contexts
Code for the Paper "MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts".
For more details, please refer to the project page with dataset exploration and visualization tools: https://mathvista.github.io/.
:bell: If you have any questions or suggestions, please don't hesitate to let us know. You can comment on Twitter or post an issue in this repository.
[Webpage] [Paper] [Huggingface Dataset] [Leaderboard] [Visualization] [Result Explorer] [Twitter]
Tentative logo for MathVista. Generated by DALL·E 3 prompted by
"A photo-based logo with a gradient of soft blue and modern typography, accompanied by the title 'MathVista'".
- 💥 News 💥
- 👀 About MathVista
- 🏆 Leaderboard 🏆
- 📊 Dataset Examples
- 📖 Dataset Usage
- 🔮 Evaluations on MathVista
- 📝 Evaluation Scripts of Our Models
- 📈 Evaluation Results
- 📜 License
- ☕ Stay Connected!
- ✅ Cite
- 🧠 Related Work
- 🤝 Contributors
💥 News 💥
- [2024.06.20] 💥 Claude 3.5 Sonnet achieves new SOTA on MathVista with 67.7! Learn more at the Anthropic blog.
- [2024.05.13] 💥 OpenAI's GPT-4o Outperforms Humans on MathVista! For the first time, OpenAI's new GPT-4o model has achieved a higher score than the human average on MathVista, scoring 63.8 compared to humans' 60.3. Learn more at the OpenAI blog.
- [2024.01.16] 🌟 Our MathVista paper has been accepted for an Oral presentation at ICLR 2024 (only the top 85 of over 7,200 submissions)! 🎉 Cheers!
- [2023.12.21] 🚀 Qwen-VL-Plus achieves 43.3%, establishing itself as the best-performing open-source model. 🎉 Congratulations!
- [2023.12.08] 🔍 We've updated the leaderboard and radar graphs with the fine-grained scores of the Gemini family models. Thanks to the Gemini Team and Google for providing us with these results! 👏
- [2023.12.06] 🚀 Google's newly released multimodal model, Gemini, shows impressive abilities on MathVista, achieving a new SOTA performance with 50.3%! 🎉 Cheers!!
- [2023.11.17] 🌟 Congratulations to SPHINX (V2), which is now the SOTA open-source multimodal model on MathVista, reaching 36.7%. 👏
- [2023.10.25] 🚀 Dive into our comprehensive 112-page evaluation of GPT-4V, Bard, and other Large Multimodal Models, encompassing both quantitative and qualitative insights. Explore the full paper now! 📄✨
- [2023.10.16] 🔍 We are working on a comparative study of the GPT-4V model. Stay tuned for the detailed report! 📑
- [2023.10.15] We finished the manual evaluation of GPT-4V with the playground chatbot on the testmini set of MathVista. 🚀 GPT-4V achieves a substantial gain of 15.1% ⬆️ over Bard, reaching a new record of 49.9%! 🎉
- [2023.10.15] Our dataset is now accessible at Huggingface Datasets.
- [2023.10.15] Our dataset is now accessible at Papers With Code.
- [2023.10.03] The top-performing model, 🎭 Multimodal Bard, achieved a score of 34.8% on the testmini set for MathVista 📊.
- [2023.10.03] Our work was featured by Aran Komatsuzaki on Twitter. Thanks!
- [2023.10.03] Our paper is now accessible at https://arxiv.org/abs/2310.02255.
👀 About MathVista
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability to reason mathematically in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging.
Source dataset distribution of MathVista.
With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks.
Accuracy scores on the testmini set (1,000 examples) of MathVista.
We further explore GPT-4V's emerging ability of self-verification, its use of self-consistency, and its capacity for goal-directed multi-turn human-AI dialogues, highlighting its promising potential for future research.
Accuracy scores of one leading LLM (i.e., PoT GPT-4), four primary LMMs, random chance, and human performance on MathVista.
🔍 See the accuracy scores without Gemini Ultra
For more details, you can find our project page here and our paper here.
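As a quick start, the benchmark can be loaded and scored locally. The sketch below is unofficial: the dataset ID `AI4Math/MathVista` and the `question` field name are assumptions based on the Hugging Face dataset page, and the exact-match scorer is only a simplified stand-in for the answer-extraction pipeline used in the official evaluation scripts.

```python
# Unofficial usage sketch: load MathVista from Hugging Face and score
# predictions by exact string match (the real evaluation first extracts
# the answer from free-form model output).

def exact_match_accuracy(predictions, answers):
    """Return the percentage of predictions that exactly match the gold answers."""
    assert len(predictions) == len(answers), "prediction/answer counts must match"
    correct = sum(p.strip() == a.strip() for p, a in zip(predictions, answers))
    return 100.0 * correct / len(answers)

if __name__ == "__main__":
    # Requires `pip install datasets`; dataset ID assumed from the HF page.
    from datasets import load_dataset

    testmini = load_dataset("AI4Math/MathVista", split="testmini")  # 1,000 examples
    print(testmini[0]["question"])  # each record pairs an image with a question
```

The `testmini` split is the 1,000-example subset used for the leaderboard below; the full `test` split withholds its answers, which is why score files for it are generated by the authors.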
🏆 Leaderboard 🏆
Contributing to the Leaderboard
🚨🚨 The leaderboard is continuously being updated.
The evaluation instructions are available at 🔮 Evaluations on MathVista and 📝 Evaluation Scripts of Our Models.
To submit your results on the testmini subset to the leaderboard, please send your result JSON file and score JSON file to this email, referring to the template files below:
- output_testmini_template_for_leaderboard_submission.json
- scores_testmini_template_for_leaderboard_submission.json
To submit your results on the test subset to the leaderboard, please send your result file to this email (we will generate the score file for you), referring to the template file below:
Leaderboard on the testmini subset
Accuracy scores on the testmini subset (1,000 examples):
| # | Model | Method | Source | Date | ALL | FQA | GPS | MWP | TQA | VQA | ALG | ARI | GEO | LOG | NUM | SCI | STA |
|---|-------|--------|--------|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| - | Human Performance* | - | Link | 2023-10-03 | 60.3 | 59.7 | 48.4 | 73.0 | 63.2 | 55.9 | 50.9 | 59.2 | 51.4 | 40.7 | 53.8 | 64.9 | 63.9 |
| 1 | Grok-2 🥇 | LMM 🖼️ | Link | 2024-08-13 | 69.0 | - | - | - | - | - | - | - | - | - | - | - | - |
| 2 | Grok-2 mini 🥈 | LMM 🖼️ | Link | 2024-08-13 | 68.1 | - | - | - | - | - | - | - | - | - | - | - | - |
| 3 | Claude 3.5 Sonnet 🥉 | LMM 🖼️ | Link | 2024-06-20 | 67.7 | - | - | - | - | - | - | - | - | - | - | - | - |
| 4 | LLaVA-OneVision | LMM 🖼️ | Link | 2024-08-06 | 67.5 | - | - | - | - |