Opinion | Totally safe systems paradox & Lion Air crash
This week, Indonesian investigators released the first report into the fatal crash of Lion Air’s Boeing 737 MAX 8. In response to the news, AeroTime has received the following comment from Nick Oliver, Professor of Management at the University of Edinburgh Business School, on the interactions between human operators and highly complex technology.
The preliminary investigation into the loss of a brand-new Lion Air Boeing 737 MAX indicates that a malfunction of a new anti-stall system caused the aircraft’s automated systems and the pilots to take contradictory actions. This led to the aircraft plummeting into the sea with the loss of all 189 passengers and crew on board.
The irony is that the new system was supposed to make things safer, by reducing the risk of pilots accidentally stalling the aircraft, as happened in the cases of Air France 447 in 2009 and AirAsia 8501 in 2014. The system automatically commands the aircraft to go nose-down when it detects a risk of an aerodynamic stall. In this case, it repeatedly and erroneously gave the nose-down command.
The finger-pointing has started already. Pilots’ unions have accused Boeing of not doing enough to brief airlines about the new system. Lion Air has been accused of poor maintenance and fault reporting. Pilot error has been cited, because the pilots failed to recognise the faulty system and isolate it quickly enough.
But there is another, more disquieting interpretation, one that goes beyond commercial aviation: the interaction of highly complex technologies and their human operators. It is encapsulated in what safety science calls ‘the paradox of almost totally safe systems’, and it explains why even ultra-safe systems are prone to rare catastrophes that are almost impossible to predict and avoid.
The more we rely on complex system designs for safety, the less practice their human operators get in dealing with rare events that require rapid diagnosis and correction. As system complexity rises, diagnosing system behaviour becomes more difficult, because the technology throws more and more ‘curveballs’ at its operators. This happened with Air France 447 in 2009, and nearly a decade on, it looks as if it has happened again.