Yet despite these similarities, the two industries diverge sharply in how they manage risk, integrate new technologies, and—crucially—learn from failure.
As AI becomes embedded in clinical practice, healthcare has a rare opportunity to learn from aviation’s hard-won safety culture. The lesson is not simply about technology, but about systems, incentives, and human behaviour.
Medical errors in the United States are estimated to cause deaths equivalent to more than 700 Boeing 777 crashes every year. And yet, while a single plane crash dominates global headlines, the cumulative harm caused by medical error remains largely invisible.
This discrepancy is a textbook example of salience bias.
Humans are wired to fear sudden, dramatic catastrophes more than slow, distributed ones, even when the latter cause far greater harm.
In healthcare, failure is frequent but diffuse, occurring across thousands of hospitals and millions of patient interactions. Without systematic reporting and shared analysis, lessons are lost, patterns remain hidden, and preventable harm persists.
AI promises extraordinary gains in healthcare—from earlier diagnosis to predictive population health—but it also introduces risks that aviation has grappled with for decades. Chief among them is automation bias: the tendency to defer to machine outputs even when they are wrong or poorly understood.
In aviation, pilots are rigorously trained to fly with and without autopilot. Manual competence is consistently maintained through ongoing assessments. By contrast, poorly designed AI risks creating a generation of clinicians who are either deskilled or ‘never-skilled’. A clinician’s ability to critically appraise AI outputs depends on having first developed strong independent judgment.
Alert fatigue provides another cautionary tale. Clinicians today face hundreds of alerts and warnings from electronic patient record (EPR) systems, many of them low-value or non-actionable. Expecting sustained vigilance in such conditions is unrealistic. Aviation solved this problem by ruthlessly prioritising cockpit alerts so that only genuinely critical issues demand immediate attention. Healthcare must apply the same discipline to alerts in clinical software, as the sketch below illustrates.
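To make the point concrete, here is a minimal sketch of what such triage could look like. The severity tiers, the `Alert` structure, and the `triage` function are illustrative assumptions rather than any real EPR vendor’s API; the idea is simply that only critical, actionable alerts should interrupt the clinician, while everything else is deferred for review.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Tuple


class Severity(Enum):
    CRITICAL = 1   # immediate risk of harm: interrupt the clinician
    WARNING = 2    # relevant but not urgent: display passively
    ADVISORY = 3   # informational: batch for later review


@dataclass
class Alert:
    message: str
    severity: Severity
    actionable: bool  # can the clinician actually act on this now?


def triage(alerts: List[Alert]) -> Tuple[List[Alert], List[Alert]]:
    """Split alerts into those that interrupt and those that are deferred.

    Only critical, actionable alerts demand immediate attention; everything
    else is queued so it never competes with genuinely urgent signals.
    """
    interrupt = [a for a in alerts
                 if a.severity is Severity.CRITICAL and a.actionable]
    deferred = [a for a in alerts
                if not (a.severity is Severity.CRITICAL and a.actionable)]
    return interrupt, deferred


if __name__ == "__main__":
    inbox = [
        Alert("Potassium 7.1 mmol/L with ECG changes", Severity.CRITICAL, True),
        Alert("Duplicate paracetamol order", Severity.WARNING, True),
        Alert("Theoretical drug interaction, no dose change needed", Severity.ADVISORY, False),
    ]
    urgent, later = triage(inbox)
    print("Interrupt now:", [a.message for a in urgent])
    print("Review later: ", [a.message for a in later])
```

The design choice mirrors the cockpit: the interruption channel is reserved for signals that genuinely require an immediate human response, so that vigilance is spent where it matters.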
Technology does not create safety—culture does.
Aviation’s safety record rests on an uncompromising commitment to transparency. Every incident and near-miss is reported, analysed, and shared across the industry without reflexive blame. Accountability exists, but the primary goal is systemic learning, not individual punishment.
Healthcare, by contrast, still operates largely within a blame culture. Errors are underreported or hidden, shaped by fear of litigation and professional sanction. This culture not only drives clinician burnout but also actively undermines learning, especially in an AI-enabled environment.
In “clinician-in-the-loop” AI deployments, clinicians risk becoming liability sinks, absorbing responsibility for errors rooted in flawed algorithms, biased data, or poor system design. In autonomous deployments, where there is no clinician to absorb the blame, it remains unclear how liability will be managed. Aviation takes a different approach, distributing responsibility across pilots, engineers, manufacturers, and regulators. Healthcare must move in the same direction, recognising that safety emerges from human–AI interaction, not from isolated human decisions.
Aviation does not hoard safety data; it shares it. Incidents in one airline inform improvements across the entire industry. Healthcare AI, by contrast, is often deployed in silos.
If healthcare adopted aviation’s model of open, system-wide learning—sharing performance data, near-misses, and model behaviour across institutions—it could dramatically accelerate improvement. Pooling insights from AI systems deployed across diverse clinical settings would enable faster optimisation, bias detection, and risk mitigation. The learning potential is enormous, but only if transparency is treated as infrastructure rather than liability.
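As an illustration of what treating transparency as infrastructure could mean in practice, the sketch below pools hypothetical per-site performance reports for a shared model and flags sites whose error rates drift from the pooled baseline. The `SiteReport` structure, the reporting fields, and the tolerance threshold are assumptions for illustration only, not an existing reporting standard.

```python
from dataclasses import dataclass
from statistics import mean
from typing import List


@dataclass
class SiteReport:
    """Hypothetical summary a hospital might share about a deployed model."""
    site: str
    n_cases: int
    error_rate: float   # observed error rate on locally reviewed cases
    near_misses: int    # errors caught by clinicians before reaching patients


def flag_outlier_sites(reports: List[SiteReport], tolerance: float = 0.05) -> List[str]:
    """Flag sites whose error rate exceeds the pooled average by more than `tolerance`.

    Pooling reports across institutions can surface patterns (for example,
    bias against a local population) that no single site could detect alone.
    """
    pooled = mean(r.error_rate for r in reports)
    return [r.site for r in reports if r.error_rate > pooled + tolerance]


if __name__ == "__main__":
    reports = [
        SiteReport("Hospital A", n_cases=1200, error_rate=0.04, near_misses=18),
        SiteReport("Hospital B", n_cases=900, error_rate=0.05, near_misses=11),
        SiteReport("Hospital C", n_cases=400, error_rate=0.15, near_misses=30),
    ]
    print("Sites needing review:", flag_outlier_sites(reports))
```

Even a mechanism this simple only works if sites are willing to report honestly, which is precisely why the cultural shift away from blame has to precede the technical one.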
Few passengers would willingly board a plane with no pilot, and few would want to fly without autopilot either. Healthcare is heading towards the same balance of human expertise and automation.
Patients should be confident that AI supports clinicians without replacing their judgement, and that doctors remain highly skilled, engaged, and capable of challenging machine outputs when needed. Achieving this balance requires more than technical excellence; it demands careful system design, cultural change, and a relentless commitment to learning from error.
The defining challenge for healthcare AI is not intelligence, but trustworthiness. Aviation approaches safety by designing systems that respect human limits, surface risk early, and learn relentlessly from failure. If healthcare applies these same principles—system-level accountability, deliberate skill preservation, continuous monitoring, and open learning—AI can become a stabilising force rather than a new source of fragility. Progress will come not from smarter algorithms alone, but from wiser systems built around them.