In recent years, I’ve noticed that artificial intelligence (AI)-based natural language platforms, such as ChatGPT and other health-related chatbots, along with social media, can become patients’ and parents’ primary sources of information on myopia. As is the case with patients using Google for diagnostic and treatment information, we should broach with patients and their parents the dangers of over-reliance on AI-generated myopia management content and such content from social influencers. Here are the talking points I suggest to accomplish this.
“The Information Isn’t Customized”
Although evidence-based myopia interventions have demonstrated statistically significant results, the heterogeneity of efficacy across studies and the significant individual variability within studies show there is no “1-size-fits-all” approach for each patient. As a result, we must communicate to patients and their parents that clinical decisions depend on objective findings, including age of myopia onset, family history, previous axial-length progression, corneal curvature, refractive error, binocular vision status, previous treatments, etc. Also, we must educate them that AI cannot measure these variables, nor can it integrate them into a personalized treatment plan.
“The Information May Be Biased, Incomplete, or Outdated”
AI systems are trained on large datasets that may contain biased, incomplete, or outdated information. For example, newer spectacle designs for myopia may be underrepresented in these datasets. As a result, we must communicate to patients and their parents that AI platforms may place a disproportionate emphasis on more established interventions or fail to clearly distinguish among treatment options with different levels of supporting evidence. Also, we should educate them that recommendations shared by online influencers can lack transparency regarding potential conflicts of interest.
“The Information Can Be Misinterpreted”
AI tools are designed to provide direct answers, often in a confident tone, without clarifying the uncertainty or nuance behind those answers. Therefore, we must educate patients and their parents that such answers can be misinterpreted as definitive medical advice.
Initiate the Conversation
While it makes sense for patients and parents to prepare for medical appointments by conducting research online, we must promptly initiate a conversation with them about the aforementioned limitations of content from AI and social influencers. Also, we should stress that information gleaned from nonclinician sources should never replace professional consultation, diagnostic evaluation, or evidence-based treatment planning tailored to each patient. Patients and parents need to understand that turning primarily to AI platforms or social media for reassurance or decision-making regarding myopia may result in fragmented care, poor compliance, or misaligned expectations. OM


