
A study recently published in Eye evaluated awareness of, acceptance of, and concerns about artificial intelligence (AI) in the ophthalmic community. The goal was to share insights in this area, particularly as they relate to the further implementation of these technologies.
The authors conducted a systematic literature review to assess the acceptance of AI in ophthalmology among different stakeholders, including patients with ocular conditions, ophthalmologists, allied eye-care professionals and the general public. They aimed to map acceptance levels and identify influencing psychological factors using a theoretical framework.
Methodology
- Scope: Searches identified 16 relevant studies.
- Framework: The Unified Theory of Acceptance and Use of Technology (UTAUT) model was applied, focusing on four core dimensions—performance expectancy, effort expectancy, social influence, facilitating conditions—and four moderating variables: age, gender, prior experience, and voluntariness of use.
- Analysis: The review included quantitative surveys and qualitative assessments, evaluating how well studies addressed each construct.
Key Findings
- General Acceptance
  - Across all stakeholder groups, acceptance of ophthalmic AI was high, with users perceiving tangible benefits in diagnostic accuracy and workflow efficiency.
- Dominant Influencing Factors
  - Performance expectancy (perceived usefulness) and effort expectancy (ease of use) were the most widely addressed influences in the primary studies.
  - Social influence (peer recommendations, institutional endorsement) and facilitating conditions (infrastructure, training, support systems) received comparatively limited attention.
- Moderating Variables
  - The impact of demographics and background (age, gender, prior AI exposure and voluntariness of use) on acceptance was under-explored.
- Stakeholder-Specific Insights
  - Ophthalmologists: generally optimistic about AI aiding disease detection and clinical decision-making, provided reliability and interpretability are assured.
  - Allied professionals: optometrists and technicians valued AI for triage and streamlined workflows but expressed concerns about responsibility and role changes.
  - Patients and the public: willing to accept AI in their eye care, especially when it augments rather than supplants clinician judgment.
Identified Concerns & Gaps
- Privacy and data security: Widespread concerns over patient data use were noted.
- Trust and transparency: Stakeholders emphasized the need for clear understanding of AI decision-making processes (“black-box” issues).
- Regulatory and accountability issues: Questions remain around liability, validation, reimbursement and policy frameworks.
- Economic burden: Costs related to development, implementation and maintenance remain barriers.
Authors’ Recommendations
To enhance real-world adoption, the authors propose:
- Conducting more robust qualitative and mixed-method studies, especially in underrepresented populations and settings, with clear operational definitions.
- Designing larger-scale, randomized controlled trials to evaluate clinical and cost-effectiveness.
- Implementing clinician and patient education initiatives to build trust and understanding.
- Strengthening infrastructure—such as privacy architecture, IT integration and clinician support—to support technology rollout.
- Addressing cost-effectiveness through health economic modeling and reimbursement planning.
Conclusion
Ran et al. highlight strong baseline acceptance of ophthalmic AI across stakeholders, driven primarily by perceived performance gains and ease of use. However, they caution that without addressing gaps in trust, transparency, regulation and infrastructure, wide-scale implementation may falter. They advocate for multidisciplinary collaboration, rigorous clinical evidence, and strategic system design to secure AI's role in sustainable eye care.