Transparency Paradox: Questioning AI Fears by Examining Ourselves

Scot Morris, OD

I was listening to a discussion recently about transparency in health care. The conversation led me to ponder a few basic realities: some we often acknowledge and others we might choose to ignore.


On one side, people argue that the real concern about AI is the “fear of the black box.” It’s the idea that we feed data into algorithms and machine processes, which push out an answer, but we don’t really understand how the AI got from one point to the other. This creates trust issues. As humans, whether clinicians or patients, we inherently distrust what we do not understand. We want to understand why an AI platform suggests a particular diagnosis or treatment so we can critically evaluate the suggestion, catch potential errors and identify situations where AI might be unreliable.


Is that what we say we want, or what we actually want? Some will argue that, in an effort to protect intellectual property, the algorithms driving these black box decisions are deliberately kept vague. Others say these processes are simply too technically complex for the average human brain to understand.

Searching for Transparency

These are both valid points; however, I would argue a different perspective. Where is the transparency in the current system? Is the algorithm that I learned the same as yours, or the provider’s down the street? Probably not! Which one is right? We all think ours is correct because…well…it’s ours. Do we really know? How many of us are tracking all the data points of every decision we make over several years and thousands of patient encounters to check whether our “clinical decision making” (i.e., our algorithm) is correct? How many of us have ever actually written out the algorithms we use for each of the thousands of diseases we are expected to identify and treat — much less the hundreds of thousands of combinations? I’m guessing that very few of us have done that.


We could argue that this lack of transparency in an AI system could potentially contribute to an adverse event, faulty clinical decision-making or misdiagnosis. True! But who is monitoring our current health care providers’ clinical decision making? Where is the accountability in our current “gray box” system? We cling to concepts of peer review, M&M conferences, licensing boards, etc. These systems are imperfect and reactive, and providers are ultimately responsible for patient care. We do not seem to have accountability police around for the quality of care provided — just the quantity. And even that is woefully lacking. AI, by contrast, has the potential for proactive monitoring through built-in, evidence-based checks-and-balances systems.


Algorithmic Transparency

We can continue this argument: if we have algorithmic transparency, then we can potentially detect and mitigate any bias in the system. I agree! But I wonder who is detecting and mitigating the inherent biases present in our current “gray box” system? In an AI algorithm, we can see and correct those biases for every user, in any language, within seconds. How do we change the inherent biases of the 70,000+ eye care providers in North America alone? That might prove to be a much bigger task than changing the biases within a series of transparent AI algorithms. Which algorithms, black box or gray box, are easier to correct to prevent the harmful biases that may persist undetected in our current system? Which algorithms will lead to and exacerbate health inequities?

Self-reflection

In light of the above arguments, maybe our focus should be more on the transparency of the results rather than the transparency of the process itself. To do so, we must face the real question: What is it that we truly fear? Do we have the courage to discover which system — ours or AI’s — is more unbiased, more transparent, more accurate, safer, more efficient and provides a better overall experience? Are we willing to confront our own gray matter biases? Or, will we continue to mask our discomfort as principled resistance, when in reality, it’s fear of the unknown — a fear that a machine might outperform us or outthink us?


What if we asked a better question: What becomes possible when we choose to augment our gray matter with the black box? Imagine a world where this technology doesn’t replace human connection, but enhances it. Instead of exhausting our bandwidth on data overload, we use it to elevate empathy, presence and purpose. The real question isn’t “if” we should use AI. It’s “how” we will use it to redefine what only humans can do best: deliver care that is deeply human.

Author

  • Scot Morris, OD

Scot Morris, OD, has practiced for 25 years in various clinical settings and has served as a technology author, chief optometric editor of a magazine, corporate advisor, practice consultant, and prominent educator. He has started or cofounded multiple companies within the eye care industry and participated in multiple clinical trials. Among the challenges he consistently hears about from providers, patients, companies, and the health system are inefficient care delivery, clinical decision-making errors, rising costs, access issues, and failure to provide connected care.

Through his various roles, Dr. Morris has focused on improving system efficiencies and on teaching peers how to improve care delivery. His peers have voted him one of the 50 most influential people in eye care and one of the top 250 innovators in the industry. Driven to always find a better way and to share that knowledge to make people and processes better, Dr. Morris has spent his career thinking about health care challenges, how to solve them, and how to educate others to do the same. As a result, he has spent the last few years focusing on these issues and codeveloping a knowledge platform, the AMI Knowledge System (AMIKnowS), to share and evolve knowledge in hopes of solving many health care issues and enabling the delivery of accessible and unbiased health care regardless of income, education, or geography.


