4 Criteria to Help You Determine If Your AI is Ethically Founded


Conversations about mitigating bias and maintaining ethics in AI often focus on design and development. Will the algorithm yield discriminatory results, or will it have varying levels of accuracy across different groups? 

 

These are important considerations, but they’re only part of the picture. Ethics isn’t just about what happens on the back end; bias can occur not just when developers are working on AI, but also once it’s in doctors’ offices, serving patients. An ethics-based approach that doesn’t consider the full product lifecycle runs the risk of overlooking harm and failing to deliver on the incredible promise AI holds to improve care.

 

Ethics must be the foundation of AI development and deployment. Without a strong ethical foundation, these technologies fall short of their purpose: empowering professionals and delivering meaningful benefits to people.

AI: Incredibly Smart, Shockingly Stupid

One of my favorite quotes comes from computer scientist Yejin Choi: “AI is incredibly smart and shockingly stupid.” It was the title of her 2023 TED Talk, and it’s just as accurate for health care AI as it is for large language models. 

 

It might be surprising to hear that sentiment endorsed by someone in Medical Affairs at a health care AI company, but my close familiarity with AI systems has made the quote resonate even more. Every AI system has limitations. When implementing any system, it’s crucial to recognize which tasks it’s suited to, and which tasks it isn’t. 

 

To determine whether an AI system is designed and developed on an ethical foundation, developers and end users should evaluate it against several key criteria:

1. Validate Rigorously

Ensuring AI systems deliver consistent, accessible care requires continuous oversight beyond initial deployment. While an ethical foundation is critical at the outset of development, long-term success depends on the disciplined application and ongoing refinement of those principles to meet real-world demands, including liability, standards of care, FDA regulation, and reimbursement models. 

 

In the health care ecosystem, validation is not a one-time exercise or a siloed responsibility; it is an ongoing process essential to sustainable adoption and clinical impact. Upholding ethical considerations in AI requires a continuing commitment from the people behind the technology. 

2. Improve Patient Outcomes

If health care providers are investing time and resources into implementing a new AI system, it must deliver measurable improvements in patient care. This is what separates “Glamour AI” from “Impact AI.” The former may look impressive on the surface, but it doesn’t meaningfully change clinical decision-making or results. It’s AI for the sake of AI. At best, it diverts attention and resources from tools that could actually improve care; at worst, it introduces risk.

 

Impact AI, in contrast, changes patients’ lives by helping them get care they might not otherwise receive—and has the data to back it up. AI can be the difference between a patient getting a diabetic retinopathy exam that catches the condition in time for effective treatment, and a patient skipping their exam because a trip to a specialist’s office is inconvenient or inaccessible. These human-level impacts provide powerful case studies, supported by peer-reviewed data on early detection and testing adherence. As an AI system user, feel empowered to ask for real-world evidence, ideally evidence that matches your patient population and/or practice setting.

 

3. Assume Liability

One of the largest obstacles to physician adoption of AI is unresolved questions about liability. If the system makes a mistake, who’s to blame: the health care provider or the AI developer? 

 

In a study by the Organisation for Economic Co-operation and Development, 71% of medical associations expressed fear that AI will increase physicians’ liability, and an AMA report last year found that more than four-fifths of physicians identified not being held liable for AI errors as key to their adoption of the technology.

 

Many developers decline to take a position on liability, preferring to embrace the legal gray area. This poses a serious challenge to trust and adoption and brings ethical implications, too. If an AI system has the potential to deliver real patient benefit, don’t the developers have an obligation to address the concerns that slow its adoption? When evaluating AI systems, know the developer’s stance on liability so you can make appropriate decisions on patient disclosures and your own liability.

 

4. Maximize Transparency

In health care, a lack of transparency can undermine ethical principles and erode trust. If the AI system was trained on biased, nonrepresentative data, it will output biased results that don’t serve all patients equally. Once the system is deployed, developers must not only monitor for accuracy, but openly share that data, too. That way adopters can make informed decisions and recognize how AI fits into their overall strategy. 

 

Different AI models will have different strengths and weaknesses, but if we refuse to level with decision-makers about what those are, we can’t expect them to make decisions that most benefit patients and care providers. Consider the full product life cycle to evaluate if transparency has remained a priority, from concept to commercialization. 
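For teams with technical resources, one concrete way to act on this transparency principle is to track a deployed system’s performance separately for each patient subgroup or care setting, rather than relying on a single aggregate number. A minimal sketch of that kind of audit is below; the group labels and records are purely illustrative, not drawn from any real system or dataset.

```python
# Minimal sketch: auditing a deployed model's accuracy by subgroup.
# The records below are illustrative; in practice they would come from
# post-deployment monitoring data.

from collections import defaultdict

def accuracy_by_group(records):
    """Return per-group accuracy for (group, prediction, truth) records."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, truth in records:
        total[group] += 1
        if pred == truth:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical monitoring records: (care setting, model output, ground truth)
records = [
    ("clinic_A", 1, 1), ("clinic_A", 0, 0), ("clinic_A", 1, 0), ("clinic_A", 1, 1),
    ("clinic_B", 0, 0), ("clinic_B", 0, 1), ("clinic_B", 1, 1), ("clinic_B", 0, 1),
]

scores = accuracy_by_group(records)
# A large gap between groups is a signal to investigate before trusting
# the system equally across settings.
gap = max(scores.values()) - min(scores.values())
```

Aggregate accuracy alone would hide that the two settings perform differently; surfacing the per-group numbers, and the gap between them, is exactly the kind of data developers should share openly with adopters.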

Closing Thoughts

Health care AI represents not only a technological leap but a profound shift in how decisions are made at scale. As AI systems become increasingly embedded in clinical workflows, ethics moves from a compliance concern to a strategic imperative. 

 

Ethical considerations are not optional in AI development; they are its very bedrock. Leaders who treat ethical governance as an afterthought risk the safety and trust of their organization and the public. Conversely, those who prioritize ethics in their AI adoption strategy will emerge as part of the story of how AI is improving people’s lives. At its core, health care is a people-helping-people business, and AI is a tool to help people help other people, better. 

Author

  • Dena Weitzman, OD, FAAO

    Dena Weitzman, OD, FAAO, serves as the Senior Director of Medical Affairs at Digital Diagnostics, the first company to receive FDA clearance for an autonomous AI diagnostic platform. With more than 15 years of experience in health care, she is dedicated to advancing the integration of AI in the field. Dr. Weitzman began her career as Vice President of Optometry at the Infant Welfare Society in Chicago, followed by a transition to academia at Midwestern University, Downers Grove, Illinois, where she served as the Associate Dean of Clinical Affairs. She graduated from the Indiana University School of Optometry in 2010 and completed a residency at the Illinois College of Optometry. Dr. Weitzman is an active member of numerous professional society AI health committees and task forces. A passionate advocate for the safe, ethical, and effective use of AI, she frequently engages with diverse health care disciplines through speaking engagements and publications, educating providers on optimal AI utilization.




