Artificial intelligence (AI) is rapidly reshaping health care, and its impact on myopia management is increasingly evident: from diagnostics and treatment planning to patient education and risk prediction, AI offers the benefits of speed, consistency, and scalability. That said, the future of AI in myopia care depends not only on technological innovation, but also on our ability to manage risks—particularly those related to misinformation, deepfake technologies, and confirmation bias.
Misinformation
One of the most subtle, yet dangerous, risks of AI is the spread of misinformation disguised as “partial truths,” which can amplify misleading messages that shape clinical decisions and patient behavior.
Unlike blatant falsehoods, partial truths—especially when paired with pseudo-scientific reasoning—can appear logical and persuasive. When dressed in the polish of professional language and visual data, they often bypass critical scrutiny.
For instance, an AI-written article may accurately state that outdoor time lowers the risk of myopia onset, but then falsely conclude that no further intervention is needed in the early stage of myopic development. Such misinformation can lead families to reject or delay evidence-based treatments, such as orthokeratology or multifocal contact lenses.

Deepfake Technologies
Another growing threat is the use of deepfake, or AI-generated, technologies in the scientific and clinical space.
Originally associated with fake media, deepfakes can now fabricate research data, clinical images, and patient cases that appear alarmingly authentic.1 As a result, these manipulations can mislead peer reviewers, confuse clinicians, and misinform students.
I think the most concerning implication of deepfake technologies is the potential for commercial misuse, where falsified evidence is created to promote specific products or treatments. This undermines the foundation of evidence-based medicine. Combating this requires critical thinking, improved editorial standards, and, possibly, tools to detect AI-generated fabrications.
Confirmation Bias
Perhaps the most persistent and underestimated risk of AI is the amplification of confirmation bias. AI platforms can tailor content based on user behavior, leading to echo chambers where clinicians and patients alike are repeatedly exposed to information that reinforces their preexisting beliefs.
In the case of myopia, a clinician might overly favor a treatment, such as orthokeratology, based on past success, while a parent—exposed to alarming claims on social media—might reject atropine regardless of professional guidance. When both parties operate in parallel bias loops, treatment plans may reflect perception rather than science, which, ultimately, hurts our patients.
Clarifying AI’s Role
To navigate these challenges, we must clarify AI’s role in care delivery. AI can offer quick answers and synthesized information, but it cannot replace our ability to reason, to contextualize, and to make ethically sound decisions. It’s important to remember that an over-reliance on AI dulls these human faculties. Let’s view AI as a tool to sharpen clinical judgment, not override it. OM