A-Bench: Are LMMs Masters at Evaluating AI-generated Images?

40 Citations · 10 Authors · 3 Data Points

Abstract

How to accurately and efficiently assess AI-generated images (AIGIs) remains a critical challenge for generative models. Given the high costs and extensive time commitments of user studies, many researchers have turned to large multi-modal models (LMMs) as AIGI evaluators, though their precision and validity remain questionable. Furthermore, traditional benchmarks mostly use naturally captured content rather than AIGIs to test the abilities of LMMs, leaving a noticeable gap for AIGIs. Therefore, in this paper we introduce A-Bench, a benchmark designed to diagnose whether LMMs are masters at evaluating AIGIs. Specifically, A-Bench is organized under two key principles: 1) emphasizing both high-level semantic understanding and low-level visual quality perception to address the intricate demands of AIGIs; 2) employing various generative models for AIGI creation and various LMMs for evaluation, ensuring a comprehensive validation scope. Ultimately, 2,864 AIGIs from 16 text-to-image models are sampled, each paired with question-answer pairs annotated by human experts. We hope that A-Bench will significantly enhance the evaluation process and promote generation quality for AIGIs.
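To make the evaluation protocol concrete, below is a minimal Python sketch of how an LMM might be scored on a benchmark of this shape: each sample pairs an AIGI with an expert-annotated multiple-choice question, and accuracy is computed over the model's answers. The `query_lmm` function, the JSON field names, and the file layout are all hypothetical placeholders for illustration, not the official A-Bench format or API.

```python
import json
from pathlib import Path


def query_lmm(image_path: str, question: str, choices: list[str]) -> str:
    # Hypothetical stand-in for a real LMM call (API or local model).
    # As a trivial placeholder baseline, always answer with the first choice.
    return choices[0]


def evaluate(benchmark_file: str) -> float:
    """Score an LMM on (image, question, choices, answer) samples.

    Assumes a JSON list of records such as:
      {"image": "imgs/0001.png",
       "question": "Is the subject's left hand anatomically correct?",
       "choices": ["yes", "no"],
       "answer": "no"}
    This schema is illustrative, not the released A-Bench format.
    """
    samples = json.loads(Path(benchmark_file).read_text())
    correct = 0
    for sample in samples:
        prediction = query_lmm(sample["image"], sample["question"], sample["choices"])
        if prediction.strip().lower() == sample["answer"].strip().lower():
            correct += 1
    return correct / len(samples)


if __name__ == "__main__":
    accuracy = evaluate("a_bench_samples.json")
    print(f"LMM accuracy: {accuracy:.2%}")
```

In practice, reporting accuracy separately along the two principles above (semantic understanding vs. quality perception) and per source generator would mirror the comprehensive validation scope the benchmark aims for.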
