McDermott M¹, Mufarreh N², Bamberger H²
¹Duly Health and Care, Naperville, IL, USA; ²Kettering Health Dayton, Dayton, OH, USA
Hypothesis
Artificial intelligence can provide patients with appropriate educational material after they receive a diagnosis from a hand surgeon.
Methods
The online artificial intelligence model ChatGPT was accessed, and a conversation about hand surgery was initiated. ChatGPT was asked to provide information about ten different hand surgery diagnoses. The responses were recorded and analyzed to determine whether they were suitable sources of patient education. A crucial aspect of effective patient education material is an appropriate reading level. Each response was therefore assessed with the Flesch-Kincaid readability test, the Gunning Fog readability test, and the Ford, Caylor, Sticht (FORCAST) index to determine its readability grade level. These results were compared against the recommendations of the American Medical Association (6th grade and below) and the National Institutes of Health (8th grade and below) to assess whether the responses were too difficult for the general public.
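The abstract does not specify which software was used to compute these indices. For illustration, a minimal Python sketch of the three grade-level formulas follows. The formulas themselves are standard; the vowel-group syllable counter and the treatment of every word with three or more syllables as "complex" are simplifying assumptions (the Gunning Fog definition excludes proper nouns, familiar jargon, and certain suffixed words, and production tools use dictionary-based syllable counts).

```python
import re

def count_syllables(word):
    # Crude heuristic: count vowel groups, drop a trailing silent "e".
    # Real readability tools use dictionary-based syllable counts.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability_grades(text):
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    n_words = len(words)
    syllables = [count_syllables(w) for w in words]
    n_syllables = sum(syllables)
    complex_words = sum(1 for s in syllables if s >= 3)  # simplification
    monosyllables = sum(1 for s in syllables if s == 1)

    # Flesch-Kincaid grade level
    fk = 0.39 * (n_words / sentences) + 11.8 * (n_syllables / n_words) - 15.59
    # Gunning Fog index
    fog = 0.4 * ((n_words / sentences) + 100 * complex_words / n_words)
    # FORCAST: 20 - N/10, where N is the count of single-syllable
    # words in a 150-word sample (scaled here to the full text)
    forcast = 20 - (monosyllables * 150 / n_words) / 10
    return fk, fog, forcast

fk, fog, forcast = readability_grades("Carpal tunnel syndrome is caused by "
                                      "pressure on the median nerve.")
print(f"FK {fk:.1f}, Fog {fog:.1f}, FORCAST {forcast:.1f}")
```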
Results
ChatGPT successfully generated patient education material for all ten topics. Readability analysis yielded an average Flesch-Kincaid grade level of 11.0 ± 3.5 (11th grade), a Gunning Fog index of 14.5 ± 0.9 (college level), and a FORCAST grade level of 11.9 ± 0.8 (12th grade). None of the responses scored below the 11th-grade level (Figure 1).
Summary Point
While ChatGPT can provide patient education, its responses are written at a reading level significantly above the comprehension level of the average patient.