Is ChatGPT Health A Revamp of Dr. Google?

Image of a phone using ChatGPT.
Image courtesy of Canva.

What’s the latest with ChatGPT Health? Scot Morris, OD, and Rehan Ahmed, MD, recently broke it all down on the weekly Real Talk podcast.

About 300 million AI chat conversations a week involve health care, Dr. Ahmed says. Patients post symptoms, medications and lab results while asking, “What does this mean? What am I diagnosed with?” 

To address that demand, OpenAI created ChatGPT Health. The product "lets users connect their medical data with their labs, medications and visit summaries." Dr. Ahmed also notes that ChatGPT Health connects with major EHR ecosystems such as Epic, as well as with large lab providers.

Skepticism About Data Flow and Privacy

Dr. Ahmed says that ChatGPT Health has guardrails. “There’s no diagnostics that are in there. It doesn’t make clinical decisions, it doesn’t prescribe, it doesn’t write back into EHR. But this is a huge first step. It also connects with wearables like your Apple Watch or glucometer,” he says. 

He also notes that the company has said it will not use health data for training sets. Voicing concern about integration across eye care systems, Dr. Morris says it is "almost impossible to integrate with eye care EHRs across the board." He cites security and privacy challenges, including HIPAA compliance, and worries that companies may still seek consumer data in indirect ways.

Dr. Ahmed points out that “we’re already starting to see EHR systems, even in the optical space, integrating and using some type of LLM for doctors, and so I think that’s definitely going to be coming.”  

The Case For Synthetic Data

Another big topic of discussion: synthetic data. Synthetic data is "fake data," but Dr. Morris clarifies that it is "mathematically real": it mimics the statistical patterns of real data without exposing actual patient records. He offers an analogy: "If you want to teach AI to bake a cake but you don’t want to show photos of a real cake that people made, you have to create this 3D digital simulation of what a cake looks like–you try to give it texture and height and chemistry. You build all this but there was never a baker or flour or sugar, and there’s no frosting. It’s a digital twin that looks and acts like the truth without all the negatives, the baggage of reality."

Synthetic data is a simulation we can learn from without putting anybody at risk. Dr. Morris predicts that most AI training data will be synthetic by 2030: algorithms will work through simulated scenarios, with human supervision determining whether each answer is correct.

“We’re going to train models that are more private, less biased, and maybe then we can start creating situations humans haven’t even encountered yet. It’s not just going to be fake data or fake information–it will be the fuel for the next generation of secure, non-scarce intelligence that algorithms can learn from,” he says.

Dr. Morris also expresses a broader hope for industry attitudes in the coming months: he wants clinicians to see AI as a tool that improves their work lives. “AI is here to make our life better and you can’t ignore it,” he says.

For more on this conversation, listen to this episode of Real Talk.

Author

  • Savannah Pearson

    Savannah joined the Jobson editorial team in 2025 with a background in copywriting and marketing. Writing has been central to her life, and as a high myope, she brings personal insight and genuine passion that enrich her editorial work.

