
A Typo in Medical Records Can Cause AI Doctors to Malfunction

An AI’s likelihood of providing incorrect medical advice increases with the presence of typos, formatting errors, or slang, according to MIT researchers. Their June study, awaiting peer review, showed that even colorful or emotional language could disrupt AI’s medical guidance.

In a recent Boston Globe interview, coauthor Marzyeh Ghassemi cautioned against the dangers of doctors overly relying on AI. “I love developing AI systems,” said Ghassemi, an MIT professor. “But it’s clear to me that naïve deployments of these systems… will lead to harm.”

Patients who struggle with English or use emotional language could face discrimination: when AI models read complaints written in imperfect language, such as those sent by email, they may give faulty advice. For the study, researchers took documents from medical records and health-related Reddit posts and altered them with typos, grammatical errors, and informal hedges like “kind of.” They then presented these cases to four AI models, including GPT-4, and asked whether the patient needed medical attention. The models were 7 to 9 percent more likely to recommend against seeking care when the input contained flawed language.

“This is a complex issue,” Paul Hager of the Technical University of Munich told the Globe, noting that the models became less accurate even though the added language did not change the facts of the case. Notably, the AI disproportionately advised women against seeking medical care, echoing a long history of women’s health concerns being downplayed.

Even when researchers removed explicit gender references, the models still identified female patients, reinforcing the bias and the resulting inaccuracies. Ghassemi connected this work to her earlier research showing that AI models express less empathy when they detect a patient’s race.

Separately, a Lancet study suggested that reliance on AI might erode doctors’ diagnostic skills, a phenomenon termed “deskilling.” Gastroenterologist Omer Ahmad voiced concern that doctors whose skills have atrophied may struggle to spot the errors AI makes.

Ghassemi warns that doctors who lean on AI risk losing essential communication skills, and that patients seeking medical advice from chatbots may be penalized for errors in how they phrase their questions. She advocates for regulation requiring that AI systems be trained on diverse data, with the aim of eliminating inequities in their responses.
