Advice

Ask Little Miss AI: Should I Trust LLMs with My Medical Data?

AI is an excellent tool that democratizes medical information, but—not so fast.



Illustration via Getty Images

Little Miss AI is a recurring advice column. Have a question? Email us!

If I generate insights about my medical condition using ChatGPT, should I share the output with my doctor? Should I upload it to my healthcare portal?

Also: Is there a way to enter my whole medical record into ChatGPT and have it come up with tests I should take? If I do that, is my medical information not protected enough?

— Russ A., Marblehead

Dear Russ A.,

These are outstanding questions, and you are clearly a proactive “patient.” The medical arena offers some of the most promising and beneficial ways artificial intelligence can help us humans. But there are still land mines to navigate, mainly around confidentiality and accuracy (not nothing!). I will gladly offer my personal opinion here, but have also sourced two experts who deal with these questions every day: Waichi Wong, of Boston Medical Consulting, and Emilia Javorsky, director of the Futures Program at the Future of Life Institute.


Image generated in Perplexity by Lisa Pierpont.

Both agree that AI is an excellent tool that democratizes medical information and enables folks to learn more about their medical needs. But as Javorsky—who describes herself as an AI advocate and ethicist—puts it, “ChatGPT is not a doctor, and general purpose LLMs [large language models] like ChatGPT have not been cleared by the FDA as medical devices for diagnosis.” In other words, there is zero guarantee that what ChatGPT shares with you is accurate. Still, AI is a good way to translate complicated medical jargon into words that anyone can understand. So, sure, ask ChatGPT (or Perplexity, Claude, Gemini, etc.) for medical insights, but make certain that you use that material as a conversation starter with your human doctor.

As for uploading that AI-generated data into your healthcare portal…hey ho, not so fast. The issue with artificial intelligence is that it can be Just. Plain. Wrong. If you upload your AI findings to your portal, future providers could be confused and misled by erroneous information. That’s a big deal! Now, if you feel there is low risk of a fiasco like that, Wong advises uploading a brief summary with a short list of questions. “Practically,” she says, “many portals have message length limits, and clinicians may not have time to read a long attachment.”

Regarding entering your entire medical history into ChatGPT for guidance on tests and diagnosis: how cool would that be? The short answer is yes, someday that will be the case, and AI is tracking toward it by the nanosecond. But today? AI has some hurdles. First, there’s a possibility that it will recommend tests that are not necessary. “Lab interpretation is context-dependent—reference ranges differ by laboratory, and small deviations can be clinically meaningless,” Wong says. “Physicians interpret results in the setting of the whole clinical picture and decide whether a borderline abnormality is truly important.” Additionally, while cost is, of course, a factor in deciding what tests a patient receives, so is the risk of false positives. “Such findings then require follow up testing and procedures that are often not risk free, meaning patients take on risks without benefit,” Javorsky says.

Finally, on your question of whether your medical information is protected if you share it with ChatGPT: In most cases, it’s a big fat no. Doctors are bound by HIPAA, a U.S. federal law that protects a patient’s medical information from unauthorized disclosure. Your records are confidential, and you have legal rights if HIPAA is not honored. Big tech companies abide by no such law. In fact, your shared medical record and personal identifiers could be used to train future AI models. No problem with that? Go for it.

Previously: How Can AI Help My Senior Mom Communicate Better?