TY - JOUR
T1 - Assessing the Validity, Safety, and Utility of ChatGPT’s Responses for Patients with Frozen Shoulder
AU - Yang, Seoyon
AU - Kim, Younji
AU - Chang, Min Cheol
AU - Jeon, Jongwook
AU - Hong, Keeyong
AU - Yi, You Gyoung
N1 - Publisher Copyright:
© 2025 by the authors.
PY - 2025/2
Y1 - 2025/2
AB - This study evaluates the potential of ChatGPT as a tool for providing information to patients with frozen shoulder, focusing on the validity, safety, and utility of its responses. Five experienced physicians selected fourteen key questions on musculoskeletal disorders through discussion and verified their adequacy by consulting one hundred and twenty patients with frozen shoulder for additional or alternative inquiries. These questions were input into ChatGPT version 4.0, and its responses were assessed by the physicians on a 5-point Likert scale ranging from 1 (least favorable) to 5 (most favorable) for validity, safety, and utility. For validity, 85.7% of the responses scored 5 and 14.3% scored 4. For safety, 92.9% received a score of 5, while the remaining response (7.1%) received a 4. Utility ratings were similarly high, with 85.7% of responses rated 5 and 14.3% rated 4. These results indicate that ChatGPT provides generally valid, safe, and useful information for patients with frozen shoulder. However, users should be aware of potential gaps or inaccuracies, and ChatGPT should not be considered a substitute for professional medical advice, diagnosis, or treatment; continued updates are necessary to ensure reliable and accurate guidance.
KW - ChatGPT
KW - adhesive capsulitis
KW - artificial intelligence
KW - frozen shoulder
KW - musculoskeletal disorders
KW - safety
KW - utility
KW - validity
UR - http://www.scopus.com/inward/record.url?scp=85219001022&partnerID=8YFLogxK
U2 - 10.3390/life15020262
DO - 10.3390/life15020262
M3 - Article
AN - SCOPUS:85219001022
SN - 2075-1729
VL - 15
JO - Life
JF - Life
IS - 2
M1 - 262
ER -