AI in Health Care: Why Human Oversight Is Non-Negotiable (2026)

AI's Imperfections: Navigating the Complexities of Healthcare

The Unavoidable Reality of AI Errors

Despite AI's remarkable advancements, we must confront the fact that its errors are an inherent part of its nature. This raises critical questions about its role in healthcare, an area where even the smallest mistake can have dire consequences.

The Enthusiasm vs. Reality Gap

AI's success has sparked an era of bold claims and unbridled enthusiasm. Yet, users frequently encounter AI errors, from digital assistants misinterpreting speech to chatbots fabricating facts. These mistakes are often tolerated due to the technology's efficiency gains, but what happens when AI is tasked with critical decisions in healthcare?

The Controversial Proposal

A bill introduced in the U.S. House of Representatives in 2025 proposed allowing AI systems to prescribe medications autonomously. This idea has sparked intense debate among health researchers and lawmakers. How would such a system work in practice? What are the potential consequences, especially if AI leads to negative outcomes or, worse, patient deaths?

A Researcher's Perspective

As a researcher studying complex systems, I investigate how different components interact to produce unpredictable outcomes. My work explores the limits of science, particularly AI. Over the years, I've worked on projects ranging from traffic light coordination to tax evasion detection. Despite their effectiveness, these systems are not infallible.

The Inescapable Errors of AI

Research from my lab suggests that certain properties of the data used to train AI models contribute to errors. This issue is likely to persist, regardless of the resources invested in improving AI models. Simply put, nobody and nothing, not even AI, is perfect.

Alan Turing's Wisdom

Alan Turing, the father of computer science, once said, "If a machine is expected to be infallible, it cannot also be intelligent." Learning from mistakes is an integral part of intelligence. This principle applies to AI as well, where errors are an inevitable part of its learning process.

The Limits of Classification

In a recent study, my colleagues and I demonstrated that perfectly organizing certain datasets into clear categories may be impossible. This is because elements of many categories overlap, leading to inherent classification errors. For example, an AI model trained on a dataset of dogs might perfectly distinguish Chihuahuas from Great Danes based on weight and height. However, it could struggle to tell an Alaskan Malamute from a Doberman Pinscher, as these breeds can have similar physical characteristics.
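The breed example can be made concrete with a small simulation. The sketch below uses invented weight distributions, not real breed statistics: when two classes overlap on a feature, even the optimal decision rule misclassifies a large fraction of cases.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical weight distributions (kg) for two overlapping breeds;
# the numbers are invented for illustration, not real breed statistics.
malamute = rng.normal(loc=38.0, scale=4.0, size=n)
doberman = rng.normal(loc=36.0, scale=4.0, size=n)

# With equal class sizes and equal variances, the optimal (Bayes) rule
# on this one feature is a threshold halfway between the two means.
threshold = (38.0 + 36.0) / 2
correct = np.sum(malamute > threshold) + np.sum(doberman <= threshold)
accuracy = correct / (2 * n)
print(f"Best possible accuracy on weight alone: {accuracy:.2f}")  # well below 1.0
```

No amount of extra training helps here: the error comes from the overlap in the data itself, not from the algorithm.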

The Challenge of Predicting Student Outcomes

In 2021, my students and I began studying classifiability, using data from over half a million students at the Universidad Nacional Autónoma de México. We aimed to predict which students would finish their degrees on time. Despite testing various algorithms, the best accuracy we achieved was around 80%, meaning at least 1 in 5 students were misclassified. Many students had identical grades, ages, genders, and socioeconomic statuses, yet their outcomes differed. Under such circumstances, perfect predictions are impossible.
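The ceiling we hit has a simple structural cause, which a toy calculation can illustrate. In the hypothetical sketch below (invented records, not our actual dataset), some students share identical features but have different outcomes, so the best any classifier can possibly do is predict the majority outcome within each feature group.

```python
from collections import Counter, defaultdict

# Toy records: (grades, age, sex, socioeconomic level) -> finished on time?
# Invented for illustration; several students share identical features
# but differ in outcome, so no classifier can get them all right.
records = [
    (("A", 19, "F", 2), True),
    (("A", 19, "F", 2), True),
    (("A", 19, "F", 2), False),  # same features, different outcome
    (("C", 21, "M", 1), False),
    (("C", 21, "M", 1), True),   # same features, different outcome
    (("B", 20, "M", 3), True),
]

# Upper bound on any classifier's accuracy: for each feature group,
# predict the majority outcome; minority cases are unavoidable errors.
groups = defaultdict(list)
for features, outcome in records:
    groups[features].append(outcome)

best_correct = sum(max(Counter(outcomes).values()) for outcomes in groups.values())
print(f"Best achievable accuracy: {best_correct / len(records):.2f}")  # 4/6
```

At scale, the same logic applies: once many real students are indistinguishable on the recorded features, a hard accuracy ceiling follows, no matter which algorithm is used.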

The Diminishing Returns of More Data

You might assume that more data would improve predictability, but learning curves often show diminishing returns: each additional 1% of accuracy might require 100 times more data. At that rate, we could never gather enough data to significantly improve our model's performance.
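The hundredfold figure can be derived under a common modeling assumption: that error falls off as a power law in dataset size. In the sketch below, the exponent b is a hypothetical value chosen for illustration, not one fitted to our study's data; the point is that a flat learning curve turns a single point of error reduction into an enormous data requirement.

```python
# Sketch under an assumed power-law learning curve: error(n) = a * n**(-b).
# Solving a * n1**(-b) = e1 and a * n2**(-b) = e2 for the ratio gives
# n2 / n1 = (e1 / e2) ** (1 / b): the data multiplier depends only on
# the error ratio and the exponent b, not on the constant a.
def data_multiplier(e1, e2, b):
    """How many times more data to go from error rate e1 down to e2."""
    return (e1 / e2) ** (1.0 / b)

# Hypothetical exponent b = 0.01 (a very flat learning curve): cutting
# the error rate from 21% to 20% requires on the order of 100x more data.
m = data_multiplier(0.21, 0.20, b=0.01)
print(f"Data multiplier: {m:.0f}x")
```

With a steeper curve (larger b) the multiplier shrinks, but for hard, noisy prediction problems exponents tend to be small, which is exactly the diminishing-returns regime.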

Additionally, unpredictable life events, such as unemployment, death, or pregnancy, can occur after a student's first year at university, affecting their ability to finish on time. Even with an infinite number of students, our predictions would still contain errors.

Complexity Limits Prediction

In general, complexity limits prediction. The word "complexity" comes from the Latin "plexus," meaning intertwined. The components of a complex system are interconnected, and their interactions determine their behavior. Studying these components in isolation can provide misleading insights.

Take, for example, a car traveling through a city. While it's theoretically possible to predict when it will reach its destination based on its speed, the car's actual speed depends on its interactions with other vehicles. Since those interactions are unpredictable, precise predictions are possible only a few minutes into the future.

The Risks of AI in Healthcare

The same principles apply to prescribing medications. Different conditions can have similar symptoms, and individuals with the same condition may exhibit different symptoms. This complexity creates significant overlaps in healthcare datasets, making error-free AI diagnoses challenging.

While humans also make errors, the legal implications of AI misdiagnoses are unclear. Who would be held responsible if a patient is harmed? The pharmaceutical company? The software developer? The insurance agency? The pharmacy?

The Role of Hybrid Intelligence

In many contexts, neither humans nor machines alone are the best option for a given task. "Centaurs," or "hybrid intelligence," pairings of humans and machines, often outperform either on its own. For example, a doctor could use AI to suggest candidate drugs based on a patient's medical history, physiology, and genetic makeup. This approach is already being explored in precision medicine.

However, common sense and the precautionary principle suggest that AI should not prescribe drugs without human oversight. The potential for mistakes in AI technology means that human supervision is likely always necessary when human health is at stake.

Final Thoughts and Questions

As we navigate the complexities of AI in healthcare, it's crucial to strike a balance between innovation and caution. How can we ensure that AI's benefits are realized while minimizing its risks? What role should human oversight play in AI-assisted healthcare decisions? These are questions that demand our attention and thoughtful consideration.

Author: Cheryll Lueilwitz
