Evaluation of the performance of large language models in clinical decision-making in endodontics


Özbay Y., Erdoğan D., Dinçer G. A.

BMC Oral Health, vol.25, no.1, 2025 (SCI-Expanded, Scopus)

  • Publication Type: Article
  • Volume: 25 Issue: 1
  • Publication Date: 2025
  • Doi Number: 10.1186/s12903-025-06050-x
  • Journal Name: BMC Oral Health
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, CINAHL, MEDLINE, Directory of Open Access Journals
  • Keywords: Chat GPT, Chatbot, Endodontics, Endodontology, Large Language model
  • Bilecik Şeyh Edebali University Affiliated: Yes

Abstract

Background: Artificial intelligence (AI) chatbots excel at generating language. The growing use of generative AI large language models (LLMs) in healthcare and dentistry, including endodontics, raises questions about their accuracy, and their potential to assist clinicians' decision-making in endodontics is worth evaluating. This study aimed to comparatively evaluate the answers provided by Google Bard, ChatGPT-3.5, and ChatGPT-4 to clinically relevant questions from the field of endodontics.

Methods: Forty open-ended questions covering different areas of endodontics were prepared and introduced to Google Bard, ChatGPT-3.5, and ChatGPT-4. The validity of the questions was assessed using the Lawshe Content Validity Index. Two experienced endodontists, blinded to the chatbots, evaluated the answers using a 3-point Likert scale. All responses deemed to contain factually wrong information were noted, and a misinformation rate for each LLM was calculated (number of answers containing wrong information / total number of questions). One-way analysis of variance and the post hoc Tukey test were used to analyze the data; significance was set at p < 0.05.

Results: ChatGPT-4 achieved the highest score and the lowest misinformation rate (p = 0.008), followed by ChatGPT-3.5 and Google Bard, respectively. The difference between ChatGPT-4 and Google Bard was statistically significant (p = 0.004).

Conclusion: ChatGPT-4 provided the most accurate and informative answers in endodontics. However, all LLMs produced varying levels of incomplete or incorrect answers.
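The misinformation rate and the between-model comparison described in the Methods can be sketched as follows. This is a minimal illustration in pure Python: the Likert scores and wrong-answer counts below are hypothetical placeholders, not the study's data, and the ANOVA is written out by hand only to make the computation explicit.

```python
# Hypothetical example: scores and counts below are illustrative only,
# not the data reported in the study.

def misinformation_rate(wrong_answers: int, total_questions: int) -> float:
    """Misinformation rate = answers containing wrong information / total questions."""
    return wrong_answers / total_questions

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA over k groups of Likert scores."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (each group mean vs. the grand mean)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (each score vs. its own group mean)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative 3-point Likert ratings for a subset of questions
gpt4  = [3, 3, 2, 3, 3, 2, 3, 3, 3, 2]
gpt35 = [2, 3, 2, 2, 3, 1, 2, 3, 2, 2]
bard  = [1, 2, 2, 1, 3, 1, 2, 2, 1, 2]

print(misinformation_rate(6, 40))  # e.g. 6 wrong answers out of 40 → 0.15
print(one_way_anova_f(gpt4, gpt35, bard))
```

In practice the study reports p-values rather than raw F statistics; converting F to a p-value (and running the post hoc Tukey test) would typically be done with a statistics package such as SciPy or statsmodels rather than by hand.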