An Expert Schema for Evaluating Large Language Model Errors in Scholarly Question-Answering Systems
Published in Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI '26), 2026
Large Language Models (LLMs) are transforming scholarly tasks like search and summarization, but their reliability remains uncertain. Current evaluation metrics for testing LLM reliability are primarily automated approaches that prioritize efficiency and scalability, but they lack contextual nuance and fail to reflect how scientific domain experts assess LLM outputs in practice. We developed and validated a schema for evaluating LLM errors in scholarly question-answering systems that reflects the assessment strategies of practicing scientists. In collaboration with domain experts, we identified 20 error patterns across seven categories through thematic analysis of 68 question-answer pairs. We validated this schema through contextual inquiries with 10 additional scientists, which revealed not only which errors experts naturally identify but also how structured evaluation schemas can help them detect previously overlooked issues. We find that domain experts use systematic assessment strategies, including technical precision testing, value-based evaluation, and meta-evaluation of their own practices. We discuss implications for supporting expert evaluation of LLM outputs, including opportunities for personalized, schema-driven tools that adapt to individual evaluation patterns and expertise levels.
Recommended citation: Anna Martin-Boyle, William Humphreys, Martha Brown, Cara Leckey, and Harmanpreet Kaur. (2026). "An Expert Schema for Evaluating Large Language Model Errors in Scholarly Question-Answering Systems." In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI '26), April 13–17, 2026, Barcelona, Spain. ACM.
