Flying Too High: AI and Air France Flight 447

Published Jul 19, 2024, 4:01 AM

Panic has erupted in the cockpit of Air France Flight 447. The pilots are convinced they’ve lost control of the plane. It’s lurching violently. Then, it begins plummeting from the sky at breakneck speed, careening towards catastrophe. The pilots are sure they’re done for.

Only, they haven’t lost control of the aircraft at all: one simple manoeuvre could avoid disaster…

In the age of artificial intelligence, we often compare humans and computers, asking ourselves which is “better”. But is this even the right question? The case of Air France Flight 447 suggests it isn't, and that the consequences of asking the wrong question are disastrous.

For a full list of sources, see the show notes at timharford.com.

Pushkin. When the trouble started in the middle of the Atlantic, Captain Marc Dubois was in the flight rest compartment, right next to the flight deck. He was in charge of Air France flight four four seven, en route overnight from Rio de Janeiro to Paris, but he was tired. He had been seeing the sights of Rio with his girlfriend: Copacabana Beach, a helicopter tour, and he hadn't had a lot of sleep. The airliner was in the hands of flight officers David Robert and Pierre-Cédric Bonin, and when the trouble started, First Officer David Robert pressed the call button to summon Captain Dubois. When you're asleep and the alarm goes off, how quickly do you wake up? Captain Dubois took ninety eight seconds to get out of bed and into the flight deck. Not exactly slow, but not quick enough. By the time Dubois arrived on the flight deck of his own airplane, he was confronted with a scene of confusion. The plane was shaking so violently that it was hard to read the instruments. An alarm was alternating between a chirruping trill and an automated voice: Stall, Stall, Stall. His junior co-pilots were at the controls. In a calm tone, Captain Dubois asked, what's happening? Co-pilot David Robert's answer was less calm: We completely lost control of the airplane and we don't understand anything. We tried everything. Two of those statements were wrong. The crew were in control of the airplane. It was doing exactly what they told it to do, and they hadn't tried everything. In fact, one very simple course of action would soon have solved their problem. But David Robert was certainly right on one count: they didn't understand what was happening.

I'm Tim Harford, and you're listening to Cautionary Tales.

The disappearance of Air France flight four four seven in the early hours of the first of June two thousand and nine was at first an utter mystery. The plane was an Airbus A three thirty, a modern airplane with an excellent safety record.
In the fifteen years since being introduced in the early nineteen nineties, not a single passenger A three thirty had crashed anywhere in the world. This one was just four years old and fully serviced. The crew were highly trained, the captain experienced, and there seemed to be nothing too challenging about the conditions. And yet somehow flight four four seven had simply fallen out of the sky. Search teams found traces of wreckage on the surface of the waves a few hours later, confirming that the plane had been destroyed and all two hundred and twenty eight people on board were dead. But the black box flight recorder, containing possibly vital clues to the cause of the disaster? It was somewhere on the bottom of the Atlantic Ocean. It wasn't until nearly two years later that the black box was discovered and the mystery could start to be solved. This, dear listener, is not just a story about a plane crash. It's a warning to all of us about what's coming.

Air France flight four four seven had begun with an on-time takeoff from Rio de Janeiro at seven twenty nine pm on May the thirty first, two thousand and nine, bound for Paris. With hindsight, the three pilots had their weaknesses. Captain Marc Dubois, fifty eight, had plenty of experience flying both light airplanes and large passenger aircraft, but he'd had very little sleep. Pierre-Cédric Bonin, thirty two, was young and didn't have many flying hours under his belt. David Robert, thirty seven, had recently become an Air France manager and no longer flew full time. He was flying this route to keep his credentials as a pilot active. Fortunately, given these potential fragilities, the crew were in charge of one of the most advanced planes in the world, an Airbus A three thirty, legendarily smooth and easy to fly.
Like any other modern aircraft, the A three thirty has an autopilot to keep the plane flying on a programmed route, but it also has a much more sophisticated automation system called assistive fly-by-wire. A traditional airplane gives the pilot direct control of the flaps on the plane, its rudder, elevators, and ailerons. This means the pilot has plenty of latitude to make mistakes. Fly-by-wire is much smoother, and potentially safer, too. It inserts itself between the pilot, with all his or her faults, and the plane's physical mechanisms. A tactful translator between human and machine, it observes the pilot tugging on the controls, figures out how the pilot wanted the plane to move, and executes that maneuver perfectly. It will turn a clumsy movement into a graceful one. This makes it very hard to crash an A three thirty. Very hard, but, it turns out, not impossible.

As the plane approached the equator, the junior pilot, Pierre-Cédric Bonin, was flying, or, more precisely, was letting the autopilot fly. Captain Dubois was with him. Ahead on the weather radar, they could see tropical thunderstorms gathering, which at that time of year and in that location was common enough. We're not bothered by storm clouds, eh? said the old hand, Dubois. Young Bonin didn't respond. He was, it would turn out, very much bothered by the thunderstorms, and many captains would have chosen to divert around them, for the comfort of the passengers as much as anything. That wasn't a possibility that was discussed. Instead, Dubois noted, we'll wait a little and see if that goes away. And if not, then what? Not Captain Dubois's problem. A few minutes later, at eleven pm Rio time, he pressed the buzzer to summon David Robert so that Dubois could take a nap. This wasn't particularly unusual. Everyone needs a rest, after all, and the junior pilots need to get some experience making decisions about the plane.
With the plane on course to fly straight into thunderstorms, Dubois's decision to leave the flight deck raises questions. The chief investigator of the crash, Alain Bouillard, spoke to the writer and pilot William Langewiesche about that. His leaving was not against the rules. Still, it is surprising. If you're responsible for the outcome, you do not go on vacation during the main event. With Dubois gone, Pierre-Cédric Bonin's nerves about the storms became more apparent. Putain la vache, putain! he yelled at one point. The outburst, the French equivalent of fucking hell, fuck, seemed to be provoked by nothing in particular. He talked with David Robert about how it was a shame that they couldn't fly high enough to clear the storms. But they couldn't. There's a limit to how high a plane can go. The higher you fly, the further you are from dangers on the ground, but the thinner the atmosphere becomes. And the atmosphere, of course, is what the wings are using to support the aircraft. Too high, and the margins for error become tight. That's okay, though, because on an A three thirty, the assistive fly-by-wire system always keeps the pilots within those margins.

As the plane approached the storm, ice crystals rattled unnervingly against the windscreen and ice began to form on the wings. Bonin and Robert switched on the anti-icing system to prevent too much ice building up and slowing the plane down. Robert nudged Bonin a couple of times to pull left, avoiding the worst of the weather. Bonin seemed slightly distracted, perhaps put on edge by the fact that they hadn't plotted a route around the storms much earlier. A faint odor of electrical burning filled the cockpit and the temperature rose. Robert assured Bonin that all this was the result of the electrical storm, not an equipment failure. But the ice wasn't just forming on the wings. It had also blocked the plane's air speed sensors, meaning that the autopilot could no longer fly the plane by itself.
A defrosting system activated to melt the ice and unblock the sensors, but in the meantime the pilots needed to take control. An alarm sounded in the cockpit, notifying Bonin and Robert that the autopilot had disconnected, and a message popped up, adding that at the same time, and for the same reason, the assistive fly-by-wire system had stopped assisting. No longer would it be the smooth-tongued interpreter between pilot and plane. Instead, the system was a literal-minded translator that would relay any instruction, no matter how foolish. Pierre-Cédric Bonin was in direct, unmediated control of the airplane, a situation with which he had almost no experience. Still, all he needed to do was to keep the plane flying straight and level for a couple of minutes until the air speed indicators defrosted. How hard could that be?

Cautionary Tales will return after the break.

Not long ago, Fabrizio Dell'Acqua, a researcher at Harvard Business School, ran an experiment to see how people performed when they were assisted by an algorithm. The experiment was designed to be practical and realistic. It involved professional recruiters being paid to evaluate real resumes, equipped with commercially available software that uses the sophisticated pattern recognition we call machine learning to assess and grade those resumes. Some of the recruiters were given software that was designed to operate at a very high standard. For simplicity, Dell'Acqua calls that good AI. Other recruiters, chosen at random, were given an algorithm which didn't work quite as well, or bad AI. They were told that the algorithm was patchy: it would give good advice, but it would also make mistakes. Then there was a third group, also chosen at random, who got no AI support at all. It turned out that the computer assistance was very helpful. Whether recruiters were given good AI or bad AI, they made more accurate recruitment choices than the recruiters with no AI at all. But here's the surprise.
The recruiters with good AI did worse than those with bad AI. Why? Because they switched off. The group who had the good AI spent less time analyzing each application. They more or less left the decision to the computer. The group who knew they had a less reliable AI tool spent more effort and paid closer attention to the applications. They used the AI, but they also used their own judgment, and despite having a worse tool, they made more accurate decisions. With the rise of powerful new AI systems, we tend to ask who's better, humans or computers? The Dell'Acqua experiment reminds us that that might be the wrong question. Often decisions are made by humans and computers working together, and just using the best computer doesn't necessarily get the best results out of the humans.

Pierre-Cédric Bonin was flying at high altitude in thin, unforgiving air, into a thunderstorm. It was dark, with an unnerving burning smell in the cabin because of the electrical charge in the air, and the clatter of hailstones on the windshield. Then there was the sound of the alarm announcing that the autopilot had disconnected. Bonin needed all the help he could get, and just at that moment the assistive fly-by-wire system disconnected, but Bonin had no real experience flying without it. When the autopilot disengaged, Bonin grabbed the control stick, and immediately the trouble began. The plane rocked right and left and right and left, and each time Bonin over-corrected. He was used to flying in the thick air of takeoff and landing, whereas at high altitude the plane behaved differently. And more importantly, Bonin was used to flying with the assistive fly-by-wire gracefully interpreting his every move, and suddenly he was having to fly the plane without it. Right and left and right and left, it rocked ten times in thirty seconds. The side-to-side rocking of the plane must have been unsettling, but it wasn't particularly dangerous.
What was dangerous was that Bonin also pulled back on the control stick, sending the plane into a climb. In the thin air, a climbing plane could easily stall. Stalling is what happens when the wings don't generate enough lift. A stalling plane is pointed upwards, trying to climb, but it's losing forward speed and losing height, scrabbling for altitude as it slides down through the air. So why did Bonin point the plane up and risk a stall? It was an instinctive reaction from a pilot used to taking control of the plane at takeoff and landing, when a stall is unlikely and the main danger comes from not having enough height and slamming into the ground. If there's a problem as you're landing, you gun the engines and point the nose of the plane upwards. That's what Bonin was doing. In an article in Popular Mechanics, the aviation journalist Jeff Wise explained: intense psychological stress tends to shut down the part of the brain responsible for innovative, creative thought. Instead, we tend to revert to the familiar and the well rehearsed. At more than thirty seven thousand feet, the familiar and well rehearsed action of pointing the nose of the plane up wasn't going to make Bonin safer. It was bringing the entire plane closer to catastrophe.

In nineteen forty two, two psychologists, Abraham and Edith Luchins (they were married), published the results of a famous experiment. In this experiment, subjects were given three different sized water jugs and asked to figure out how to measure out a certain amount. For example, one jug might have a capacity of twenty ounces, the second one hundred ounces, and the third four ounces. The question is, how would you measure seventy two ounces using these jugs? The answer: fill the one hundred ounce jug, then pour off twenty ounces into the medium sized jug. Then you fill the small four ounce jug twice from the big jug. With pencil and paper, it's not too tricky to figure this out.
One hundred minus twenty minus four minus four gives you seventy two ounces. The Luchins gave their experimental subjects several of these problems, each with different sized jars and a different target volume of water, but each time the solution followed the same pattern: fill the big jar, then use it to fill the medium jar once and the small jar twice. Now comes the trick. The Luchins would give people a problem like this. The big jar holds thirty nine ounces, the medium jar holds fifteen, the small jar holds three. How do you get eighteen ounces? Well, you can repeat the same process as before: fill the big jar and use it to fill the medium jar once and the small jar twice. It works, but if you do it that way, you're over-complicating things, because you could simply fill the medium and the small jar. Fifteen plus three is eighteen. That's much easier. But a lot of people missed that obvious solution, because they'd already solved a bunch of previous problems that required the more elaborate method. Abraham and Edith Luchins also had a control group. They hadn't been given any practice problems. Instead, they started with the eighteen ounce problem, and of course most of them found the simple solution. Not having practiced was actually an advantage. They saw the problem with fresh eyes and solved it quickly and simply. The people who had practiced tended to get stuck with a clumsy solution. The Luchins called this the Einstellung effect. Einstellung is perhaps best translated here as state of mind. The practiced participants found a simple rule of thumb that seemed to work, and so they began applying it unthinkingly. As the Luchins put it, the problem solving act had been mechanized. Bonin's instinctive attempt to climb by pulling back on the stick demonstrated an Einstellung effect in two ways. First, as Jeff Wise explained, he was reverting to his instinct that when you're in trouble, safety is to be found by pulling the plane up and seeking height.
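Incidentally, the jug arithmetic is simple enough to check by machine. Here is a minimal sketch in Python, not part of the Luchins' original study, that treats each step as adding or pouring off one jarful and searches breadth-first for the shortest recipe. It finds the genuinely four-step practice solution, and the two-step answer to the trick problem that the practiced participants missed:

```python
from collections import deque

def simplest_recipe(target, jars, max_steps=6):
    """Breadth-first search over measuring recipes. Each step either
    adds a full jar to the measured amount (+jar) or pours one jarful
    off (-jar). Because the search is breadth-first, the first recipe
    that reaches the target uses the fewest steps."""
    queue = deque([(0, ())])
    seen = {0}
    while queue:
        total, steps = queue.popleft()
        if total == target:
            return steps
        if len(steps) == max_steps:  # cap the search depth
            continue
        for jar in jars:
            for delta in (jar, -jar):
                new_total = total + delta
                if new_total >= 0 and new_total not in seen:
                    seen.add(new_total)
                    queue.append((new_total, steps + (delta,)))
    return None

# Practice problem: four steps really are needed here.
print(simplest_recipe(72, [100, 20, 4]))  # (100, -20, -4, -4)

# Trick problem: the fresh-eyed two-step answer, fifteen plus three.
print(simplest_recipe(18, [39, 15, 3]))   # (15, 3)
```

Unlike the practiced participants, the search has no Einstellung: it weighs every problem afresh, so the trick problem comes back with the simple answer rather than the mechanized four-step rule.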
Second, Bonin had almost always flown the A three thirty with the assistive fly-by-wire, and with the assistive fly-by-wire operating, you literally cannot stall the plane. The computer won't let you. Bonin had been trained by his own airplane never to worry about stalls, never to even think about stalls, because stalls simply can't happen. As flight four four seven began to lose air speed and altitude, an automated voice announced: Stall. Qu'est-ce que c'est que ça? said David Robert. What was that? Stall. Stall. Over the next four minutes, the word stall would be repeated more than seventy times. But Bonin and Robert, it seems, couldn't grasp that a stall was possible. Their Einstellung, their state of mind, made that risk inconceivable.

I first heard the story of flight four four seven told on the 99% Invisible podcast back in twenty fifteen. By the way, 99% Invisible is amazing, and if by some miracle you're not already a listener, go and subscribe. You can thank me later. Now, in twenty fifteen, this seemed like a warning about self driving cars. Here's a pilot who grew so reliant on his assistive technology that he forgot how to fly a plane at high altitude. So what happens when the self driving cars take over and we all become Pierre-Cédric Bonin, unable to remember what to do when the computer needs us to take over? I see the story differently now. It's not just the self driving cars. It's the appearance of artificial intelligence everywhere. Consider those decision making algorithms that Fabrizio Dell'Acqua gave to professional recruiters, which made them switch off and let the algorithm handle the problem. He called that study Falling Asleep at the Wheel. I think you can see why. Or generative AI, which we use to paint pictures, create videos, write essays. Like the assistive fly-by-wire on an A three thirty, it's a technological miracle.
But like the assistive fly-by-wire, the question is not how well the computer works. It's how well the computers and the humans work together. Consider the hapless lawyers who turned to ChatGPT for help in formulating a case, only to find it had simply invented new cases. Not only did this actually happen, it's happened more than once. In a New York case, the lawyers were fined five thousand dollars and ordered to write letters of apology to the judges whose names had been taken in vain by ChatGPT. In Canada, another lawyer was let off with a warning. The Supreme Court of British Columbia believed her when she said she didn't really understand how ChatGPT worked, which I can believe too. By now, surely even the lawyers have figured out that you can't ask ChatGPT to prepare a legal submission for you without checking. But problems with generative AI can occur in more surprising places.

Cautionary Tales will return in a moment.

Jeremy Utley, Kian Gohar, and Henrik Werdelin are experts in ideation or, to use its more everyday label, brainstorming: creative problem solving as a group. Naturally, when they heard about the launch of ChatGPT, they asked themselves what this new tool might bring to the ideation process. After all, ChatGPT was a sudden sensation: powerful, flexible, easy to use. And the problem that the lawyers had, that ChatGPT just makes stuff up, isn't a problem for ideation, because the aim isn't accuracy but to generate a huge range of solutions as quickly as possible; you work out the details later. So the three researchers decided to conduct a simple experiment in which they compared ideation sessions using ChatGPT with ideation sessions without it. Jeremy Utley, who teaches innovation at Stanford University, thought that ChatGPT would help teams produce vastly more ideas. Maybe twice as many, five times as many, one hundred times as many.
He told the podcast You Are Not So Smart that he thought the question their study would answer was how many multiples more ideas are AI-assisted teams generating? And then he saw the results. He told You Are Not So Smart: my first thought was, oh no, oh no. For many of the teams using ChatGPT, the entire collaborative back and forth of the ideation process stopped. Instead, the room would be silent except for the pecking at keyboards. Each person would be staring into their screen, displaying what the researchers came to describe as resting AI face. And the ideas they produced? Utterly mediocre. Equipped with the latest, greatest, most sophisticated tool in the history of brainstorming, these teams produced totally predictable stuff. Nothing brilliant, nothing particularly varied, nothing that didn't need a lot of development work, and above all, just not many ideas. Which is insane, because ideation is all about creating a huge variety of ideas and sorting through them later, and ChatGPT is absolutely a machine for producing a huge variety of ideas. It was the Einstellung problem again. What people really needed to do was to engage with each other and engage with the AI: prompting it, discussing the prompts, going back to the machine, mixing things up, varying their queries, asking for more. But what ChatGPT gave them looked a lot like a Google search bar. You type in your question, you get an answer, and then you stop. You feel like you've seen this situation before, and so you do what you always do, and if it doesn't work, often you just do it again. You get stuck.

Pierre-Cédric Bonin was certainly stuck. His instinct was to pull back on the control stick, which was stalling the plane, which was sinking, sinking, sinking towards the Atlantic Ocean. All around him and David Robert, alarms were sounding, including the automated voice: Stall, Stall, Stall. But they just didn't seem to be able to diagnose their self-inflicted problem.
By this time, even the air speed indicators had defrosted. There was literally nothing wrong with the plane. If they'd gently pointed the nose of the plane downwards, it would have regained speed and lift and pulled out of the stall. They had plenty of altitude to do that, but they didn't. Robert had pressed the button to summon Captain Dubois from the rest cabin. Fuck, where is he? In a panic, he mashed it again and again. Fuck, is he coming or not? Remember, Captain Dubois took only ninety eight seconds to reach the flight deck. What's happening? Dubois seemed calm, given the circumstances. David Robert and Pierre-Cédric Bonin were not. Bonin had stalled the plane, which was plummeting out of the sky, nose way up in the air, at one hundred and fifty feet per second. David Robert had noted that the air speed indicators had failed, and although the other readings were accurate, including the Stall, Stall, Stall, he didn't believe them. The Air France pilots were hideously incompetent, says William Langewiesche. Langewiesche argued that the pilots simply weren't used to flying their own airplane at altitude without the help of the computer. Even Captain Dubois had spent only four hours in the last six months actually flying the plane rather than supervising the autopilot, and he'd had the help of the full assistive fly-by-wire system. If the plane flies itself, when do the pilots get to practice? So far, we haven't seen that problem with modern AI systems, but it's obvious that trouble is coming. Think of the recruiters who fell asleep at the wheel, the lawyers who didn't understand ChatGPT, and the brainstorming groups who stared slack-jawed at their screens rather than talking to each other. In each case, we can see an all too human willingness to abandon our own judgment and let the computer do the thinking. And the more we do that, the less practice we will get. Better AIs are coming, of course, and that will only make things worse.
The psychologist James Reason, the author of Human Error, explains why: skills need to be practiced continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practicing these basic control skills. When manual takeover is necessary, something has usually gone wrong. This means that operators need to be more, rather than less, skilled in order to cope with these atypical conditions. This is called the paradox of automation. Unreliable automation keeps the operators sharp and well practiced, but the better the automated system gets, the less experience the operators will have in doing things for themselves, and, cruelly, the weirder the situations will be when the computer gives up. You might say, well then, we shouldn't use these automated systems. Pilots should practice their skills rather than using assistive fly-by-wire. We should memorize phone numbers instead of relying on our smartphones. Kids should learn long division rather than using calculators. Heck, books are a disaster. In the good old days, before books, people used to just remember fifteen hour epic poems such as the Iliad. Wow. But that's not going to happen. Anyway, these tools don't just make life easier, they improve our performance. You can do more sophisticated calculations with a pocket calculator than without one. A library can contain vastly more information than any human could memorize, and modern planes with autopilots and assistive fly-by-wire are much, much safer than the old fashioned kind. But there is a price to be paid. Sometimes we'll find we can't remember a phone number, or how to do long division. Or perhaps we'll find we've asked an AI system to help us brainstorm, or to help us decide who to hire, or write new laws, or help us control weapon systems or plan military strategy. Maybe we stop paying attention, or become so helplessly out of practice that when the computer lets us down, we don't even notice.
By the time Captain Marc Dubois returned to the flight deck, it was still possible to rescue the plane: point the nose downward, regain forward air speed, and dive out of the stall. The plane still had enough altitude to make that possible. But Dubois would have had to take in a lot of information in a very short space of time to diagnose the stall, and neither he nor Robert could directly see that Bonin was still yanking back on the control stick, instinctively trying to climb. The plane was falling so quickly that some of the indicators had stopped giving readouts, and the ones that were working might have seemed unbelievable because of the extreme speed of the fall. And then there's a fundamental ambiguity in a stall. You're pointing up, but you're falling down. To stop descending, you'd first have to dive. That can make it difficult both to diagnose the problem and to talk about it. Less than a minute after Captain Dubois entered the flight deck, there's an exchange. Robert says, you're climbing. Then he says, you're going down, down, down, down. Is that an instruction to point the nose down, or a description of the plane, which is falling fast? Captain Dubois echoes: going down. Bonin asks, am I going down now? Robert and Dubois disagree. Robert answers, go down. Dubois says, no, you climb here. That's a description, not an order. Robert adds, go down. Bonin says, I'm climbing, okay, so we're going down. It's a mess. Are they climbing or going down? Both. The nose is pointed up, Bonin's stick is back, and they're falling at more than ten thousand feet a minute. Maybe Captain Dubois has realized they're stalling. Maybe not. He doesn't say so directly, and he's not at the controls. Bonin is. All the while, the computer voice is adding: Stall. Stall. It takes another minute before there's some kind of clarity. The plane has fallen through the ten thousand feet mark. There's now less than a minute left. Robert says: climb, climb, climb, climb. Of course he does.
The plane is plummeting. Bonin replies, but I've been at maxi nose up for a while. At last, Captain Dubois seems to understand what Bonin has done. No, no, don't climb. At this point, David Robert pushes a button to switch control to his seat and pushes the nose of the plane down. Bonin, presumably panicking, pushes his button, silently takes back control of the aircraft and sticks the nose back up. It doesn't matter. It's too late anyway. They only have seconds left. Pierre-Cédric Bonin's wife is back in the passenger cabin. Their two young sons are back in Paris. Does Bonin realize they're about to be orphaned? Probably. We're going to crash, he says. This can't be true. Fuck, we're dead, says David Robert. In less than three seconds, the plane will belly-flop into the Atlantic Ocean, instantly killing all two hundred and twenty eight people on board. Robert, Bonin, Captain Dubois, Bonin's wife, Dubois's girlfriend, everyone. Pierre-Cédric Bonin's last words: but what's happening?

For a full list of our sources, see the show notes at timharford dot com. Cautionary Tales is written by me, Tim Harford, with Andrew Wright. It's produced by Alice Fiennes with support from Marilyn Rust. The sound design and original music is the work of Pascal Wyse. Sarah Nix edited the scripts. It features the voice talents of Ben Crowe, Melanie Guttridge, Stella Harford, Jemma Saunders, and Rufus Wright. The show also wouldn't have been possible without the work of Jacob Weisberg, Ryan Dilley, Greta Cohn, Eric Sandler, Carrie Brody, and Christina Sullivan. Cautionary Tales is a production of Pushkin Industries. It's recorded at Wardour Studios in London by Tom Berry. If you like the show, please remember to share, rate and review, tell your friends, and if you want to hear the show ad free, sign up for Pushkin Plus on the show page in Apple Podcasts or at pushkin dot fm slash plus.
