Steven Bedrick

Context matching is not reasoning when performing generalized clinical evaluation of generative language models

Andrew Wen, Qiuhao Lu, Yu-Neng Chuang, Guanchu Wang, Jiayi Yuan, Jiamu Zhang, Liwei Wang, Sunyang Fu, Kurt D. Miller, Heling Jia, Steven D. Bedrick, William R. Hersh, Kirk E. Roberts, Xia Hu, Hongfang Liu
npj Digital Medicine, Jan 2026

Abstract

Current discussion of the clinical capabilities of generative language models (GLMs) centers predominantly on multiple-choice question-answering (MCQA) benchmarks derived from clinical licensing examinations. While accepted for human examinees, characteristics unique to GLMs call the validity of such benchmarks into question. Here, we assess the validity of five benchmarks using eight GLMs, ablating for parameter size and reasoning capability, and use prompt permutation to test three key assumptions that underpin the generalizability of MCQA-based assessments: that knowledge is applied rather than memorized, that semantically consistent prompts yield consistent answers, and that situations with no correct answer can be recognized. Although large models are more resilient to our perturbations than small models, we globally invalidate all three assumptions, with implications for reasoning models. Additionally, despite retaining the underlying knowledge, small models are prone to memorization. All models exhibit significant failure in null-answer scenarios. We conclude by suggesting several adaptations toward more robust benchmark designs that better reflect real-world conditions.
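The prompt-permutation strategy described above can be sketched in miniature: reorder the answer options of an MCQA item and check whether a model's choice tracks the option's content rather than its position or letter label. This is an illustrative reconstruction under stated assumptions, not the authors' evaluation code; the function names and scoring scheme here are hypothetical.

```python
import itertools

def permute_mcqa(question, options):
    """Generate prompt variants by reordering the answer options.

    A model that reasons over content should select the same underlying
    option regardless of where it appears or which letter labels it.
    Each variant records a mapping from letter label back to the
    original option index, so answers can be compared across variants.
    """
    labels = "ABCDEFGH"[: len(options)]
    variants = []
    for perm in itertools.permutations(range(len(options))):
        body = "\n".join(f"{labels[i]}. {options[j]}" for i, j in enumerate(perm))
        variants.append(
            {"prompt": f"{question}\n{body}", "mapping": dict(zip(labels, perm))}
        )
    return variants

def consistency(answers, variants):
    """Fraction of variants agreeing with the modal underlying choice.

    `answers` holds the letter label a model returned for each variant;
    a score of 1.0 means the model's pick was position-invariant.
    """
    picks = [v["mapping"][a] for a, v in zip(answers, variants)]
    modal = max(set(picks), key=picks.count)
    return picks.count(modal) / len(picks)
```

A perfectly consistent model would score 1.0; positional or label biases of the kind the abstract describes would pull the score toward chance.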

