Perspectives

From Cockpits to Clinics: What Healthcare Can Learn from Aviation in the Age of AI

Dr Annabelle Painter

Healthcare and aviation share a defining characteristic: they are complex adaptive systems in which small errors can cascade into catastrophic outcomes. Both increasingly rely on advanced technology, automation, and data-driven decision-making.

Yet despite these similarities, the two industries diverge sharply in how they manage risk, integrate new technologies, and—crucially—learn from failure. 

As AI becomes embedded in clinical practice, healthcare has a rare opportunity to learn from aviation’s hard-won safety culture. The lesson is not simply about technology, but about systems, incentives, and human behaviour. 

The Salience of Failure: Why Some Risks Are Ignored 

Medical errors in the United States are estimated to cause deaths equivalent to more than 700 Boeing 777 crashes every year. And yet, while a single plane crash dominates global headlines, the cumulative harm caused by medical error remains largely invisible. 

This discrepancy is a textbook example of salience bias: humans are wired to fear sudden, dramatic catastrophes more than slow, distributed ones, even when the latter cause far greater harm.

In healthcare, failure is frequent but diffuse, occurring across thousands of hospitals and millions of patient interactions. Without systematic reporting and shared analysis, lessons are lost, patterns remain hidden, and preventable harm persists. 

Automation Bias, Deskilling, and Alert Fatigue: Lessons from the Cockpit 

AI promises extraordinary gains in healthcare—from earlier diagnosis to predictive population health—but it also introduces risks that aviation has grappled with for decades. Chief among them is automation bias: the tendency to defer to machine outputs even when they are wrong or poorly understood. 

In aviation, pilots are rigorously trained to fly both with and without autopilot, and manual competence is maintained through ongoing assessment. By contrast, poorly designed AI risks creating a generation of clinicians who are either deskilled or ‘never-skilled’. A clinician’s ability to critically appraise AI outputs depends on having first developed strong independent judgement. 

Alert fatigue provides another cautionary tale. Clinicians today face hundreds of alerts and warnings from electronic patient record (EPR) systems, many of them low-value or non-actionable. Expecting sustained vigilance in such conditions is unrealistic. Aviation solved this problem by ruthlessly prioritising cockpit alerts so that only genuinely critical issues demand immediate attention. Healthcare must apply the same discipline to alerts in clinical software. 
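
To make the idea concrete, here is a minimal sketch, not drawn from any real EPR product, of how a tiered triage layer could route alerts so that only critical, actionable ones interrupt a clinician. The severity tiers, field names, and routing rules are illustrative assumptions.

    from dataclasses import dataclass
    from enum import IntEnum

    class Severity(IntEnum):
        """Illustrative tiers, loosely echoing cockpit alert philosophy."""
        ADVISORY = 1   # log quietly for later review
        CAUTION = 2    # batch into a periodic digest
        WARNING = 3    # interrupt the clinician now

    @dataclass
    class Alert:
        message: str
        severity: Severity
        actionable: bool  # is there a concrete action the clinician can take now?

    def triage(alerts: list[Alert]) -> dict[str, list[Alert]]:
        """Route alerts so only critical, actionable ones demand immediate attention."""
        routed: dict[str, list[Alert]] = {"interrupt": [], "digest": [], "log": []}
        for alert in alerts:
            if alert.severity is Severity.WARNING and alert.actionable:
                routed["interrupt"].append(alert)
            elif alert.severity is Severity.CAUTION:
                routed["digest"].append(alert)
            else:
                routed["log"].append(alert)
        return routed

    # Only the actionable warning interrupts; the rest are batched or logged.
    routed = triage([
        Alert("Potassium 6.8 mmol/L - review now", Severity.WARNING, actionable=True),
        Alert("Duplicate paracetamol order", Severity.CAUTION, actionable=True),
        Alert("Formulary substitution applied", Severity.ADVISORY, actionable=False),
    ])
    print({tier: len(items) for tier, items in routed.items()})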

Learning from Failure: Why Culture Drives Safety 

Technology does not create safety—culture does. 

Aviation’s safety record rests on an uncompromising commitment to transparency. Every incident and near-miss is reported, analysed, and shared across the industry without reflexive blame. Accountability exists, but the primary goal is systemic learning, not individual punishment. 

Healthcare, by contrast, still operates largely within a blame culture. Errors are underreported or hidden, shaped by fear of litigation and professional sanction. This culture not only drives clinician burnout, it actively undermines learning—especially in an AI-enabled environment.

In “clinician-in-the-loop” AI deployments, clinicians risk becoming liability sinks, absorbing responsibility for errors rooted in flawed algorithms, biased data, or poor system design. In autonomous AI deployments, where there is no clinician to carry the blame, it remains unclear how liability will be managed. Aviation takes a different approach, distributing responsibility across pilots, engineers, manufacturers, and regulators. Healthcare must move in the same direction, recognising that safety emerges from human–AI interaction, not isolated human decisions. 

Shared Learning at System Scale 

Aviation does not hoard safety data; it shares it. Incidents in one airline inform improvements across the entire industry. Healthcare AI, by contrast, is often deployed in silos. 

If healthcare adopted aviation’s model of open, system-wide learning—sharing performance data, near-misses, and model behaviour across institutions—it could dramatically accelerate improvement. Pooling insights from AI systems deployed across diverse clinical settings would enable faster optimisation, bias detection, and risk mitigation. The learning potential is enormous, but only if transparency is treated as infrastructure rather than liability. 
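
As a purely hypothetical sketch of what treating transparency as infrastructure might involve, the snippet below defines a de-identified AI incident record that any institution could contribute to a shared registry. The field names and the registry concept are assumptions for illustration, not an existing standard or system.

    from dataclasses import dataclass, field, asdict
    from datetime import date
    import json

    @dataclass
    class AIIncidentReport:
        """Hypothetical de-identified record for pooling AI near-misses across sites."""
        model_name: str        # which AI system was involved
        model_version: str
        care_setting: str      # e.g. "emergency department", "primary care"
        event_type: str        # e.g. "near-miss", "no-harm error", "harm"
        description: str       # free-text account of what happened and why
        contributing_factors: list[str] = field(default_factory=list)
        reported_on: str = field(default_factory=lambda: date.today().isoformat())

    def submit(report: AIIncidentReport) -> str:
        """Serialise a report for a shared registry; in practice this would be an API call."""
        return json.dumps(asdict(report), indent=2)

    print(submit(AIIncidentReport(
        model_name="sepsis-risk-model",
        model_version="2.1",
        care_setting="emergency department",
        event_type="near-miss",
        description="Alert suppressed by local configuration; deterioration caught on ward round.",
        contributing_factors=["local configuration drift", "threshold change not re-validated"],
    )))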

Key Takeaways 

  • Systemic Thinking Over Individual Blame 
    Aviation recognises that failures arise from interactions between people, technology, and environment. Healthcare regulation—particularly around AI—should do the same, encouraging transparency, systemic inquiry, shared reporting, and honest disclosure without creating perverse incentives for silence or defensive practice. 
  • Intentional Alert Prioritisation 
    Clinicians cannot safely manage hundreds of competing alerts. Healthcare systems should emulate aviation’s approach: fewer, tightly prioritised alerts, informed by human-centred design. 
  • Robust Feedback Loops 
    Aviation continuously feeds incident data back into training, design, and regulation. Healthcare AI requires the same discipline: ongoing monitoring, post-deployment evaluation, and structured learning from near-misses must be treated as core safety functions and resourced accordingly (a minimal illustration follows this list). 
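
The sketch below is one hedged illustration of what such a feedback loop could look like in practice: comparing the post-deployment precision of an AI system's alerts against an agreed floor and flagging drift for human review. The metric, the 0.80 threshold, and the function itself are assumptions, not a prescribed standard.

    from statistics import mean

    def monitor_alert_precision(confirmed: list[bool], minimum_precision: float = 0.80) -> dict:
        """Illustrative post-deployment check: share of AI alerts confirmed by clinician review.

        confirmed: one boolean per alert raised in the review period,
        True if the clinician judged the alert to be genuine.
        The 0.80 floor is an assumed, locally agreed threshold.
        """
        if not confirmed:
            return {"status": "no data", "precision": None, "alerts_reviewed": 0}
        precision = mean(confirmed)  # booleans average to the confirmed fraction
        return {
            "status": "review required" if precision < minimum_precision else "within tolerance",
            "precision": round(precision, 3),
            "alerts_reviewed": len(confirmed),
        }

    # Example month: 14 of 20 alerts confirmed -> precision 0.70, triggering a safety review.
    print(monitor_alert_precision([True] * 14 + [False] * 6))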

 

The Future: Clinician-AI Co-Pilots 

Few passengers would willingly board a plane with no pilot—and few would want to fly without autopilot either. Healthcare is heading in the same direction. 

Patients should be confident that AI supports clinicians without replacing their judgement, and that doctors remain highly skilled, engaged, and capable of challenging machine outputs when needed. Achieving this balance requires more than technical excellence; it demands careful system design, cultural change, and a relentless commitment to learning from error. 

The defining challenge for healthcare AI is not intelligence, but trustworthiness. Aviation approaches safety by designing systems that respect human limits, surface risk early, and learn relentlessly from failure. If healthcare applies these same principles—system-level accountability, deliberate skill preservation, continuous monitoring, and open learning—AI can become a stabilising force rather than a new source of fragility. Progress will come not from smarter algorithms alone, but from wiser systems built around them. 

 
