Attention quickly focused primarily on four factors: the Boeing 737 Max flight control system; Lion Air's maintenance; aircrew training and performance; and the Aircraft Flight Manual (AFM).
The B737 Max, the latest variant in the seemingly immortal 737 series, adds something called Maneuvering Characteristics Augmentation System (MCAS). The point of MCAS is to automatically trim the aircraft in the nose-down direction in the event of excessive angle-of-attack (AOA). (Angle of attack is the angle between the wing and the relative wind. Imagine an airplane completely level, but falling straight down — its AOA is 90º; same airplane, but in level flight, the AOA would be 0º. Stall AOA is defined as that AOA beyond which lift decreases, and is around 20º. Typical AOA varies between about 2.5º in cruise, and up to 7º during some phases of departure and approach.)
Avoiding as many details as possible, the B737 Max had engines that were larger in diameter than anything that had ever been installed on the 737. This presents an engineering problem, as the original 737 design had very short landing gear struts. As engine diameters have gotten larger, this has required more elaborate ways to keep them from hitting the ground during crosswind landings. With the Max, this meant mounting the engines further forward, and higher, than previously.
The result aggravated what has always been a handling issue with airplanes having wing-mounted engines. The correct response to a stall is to do two things simultaneously: reduce AOA (lower the nose) and increase thrust. However, because the engines are below the wings, increasing thrust creates a very pronounced nose-up force, to the extent that if a stall is entered at low speed and idle thrust, the upward force generated by increased engine thrust can overcome the aerodynamic force available to push the nose down.
With the Max, Boeing decided that the thrust-induced nose-up pitching moment had gotten sufficiently pronounced that the flight control system needed to step in and automatically trim the airplane nose down in order to augment the pilot's response.
In and of itself, that is a good thing — if AOA gets too high, lower it. Easy peasy. And it really is easy. AOA sensors are brick-simple: they are really nothing more than wind vanes hooked to a variable resistor. As one might expect, simple means rugged and reliable. In nearly forty years of flying, I have never experienced an AOA failure.
The problem here should be obvious: what never fails, did, and as a consequence, MCAS tried to take control of the plane. The crew ultimately lost the fight.
In the mishap sequence, this first leads to Lion Air maintenance. The aircraft had experienced airspeed indicator problems on the preceding four flights. Inexplicably, Lion Air maintenance replaced an AOA sensor, which would be akin to replacing your steering wheel to fix the speedometer. Not only did that predictably fail to fix the problem, the new sensor was likely installed improperly: on its penultimate flight, the airplane suffered an AOA failure, accompanied by an MCAS intervention, which that crew was able to manage.
Now over to the pilots. They should have been aware of the issues with airspeed and AOA. The first item on the Captain's preflight is reviewing the maintenance logbook; for the First Officer, it is the first thing following the exterior preflight. Yet either they didn't do so, or the logbook failed to convey sufficient information, or the crew failed to consider the ramifications of erroneous AOA readings.
Whatever the reason, they were both surprised by MCAS and insufficiently aware not only of how it works, but that there even was such a thing. Had they been familiar with MCAS, they would have known that it is inhibited unless the flaps and slats are fully retracted. Simply selecting Flaps 1 (which brings the leading edge slats to half travel and slightly extends the trailing edge flaps) would have put paid to it. As well, following the pilot adage "if things suddenly go to shit, undo the last thing you did" would have put things right no matter how aware they were of MCAS.
Alternatively, they could have gone to the Unscheduled Stab Trim procedure, which goes like this:
1. Position both (there are two completely independent trim systems) Stab Trim switches to cut-out.
2. Disengage the autopilot if engaged.
3. Alternately reengage each system to isolate the faulty one.
4. If both primary systems are borked, proceed using the alternate trim system.
As with almost all aircraft mishaps, there are a great many links in the chain. Documentation, training, maintenance, and aircrew performance will each appear in the final report. It will perhaps fault Boeing for inadequate MCAS documentation in the AFM, and faulty MCAS implementation (more on that below). Lion Air maintenance will take a shellacking for not just likely poor maintenance procedures, but also shortcomings in documentation.
Finally, the pilots. Even if Boeing takes a hit for providing insufficient MCAS documentation in the AFM, it remains true that the crew had the means to shut off MCAS (cut out the primary pitch trim system) and then resort to the alternate trim system. That they didn't is clear; however, until the cockpit voice recorder is found, we won't know for certain why. I suspect fingers will be pointed at training. Outside the Anglosphere, the EU, and Japan, the rest of the world doesn't put nearly as much emphasis on, and money into, training and standardization.
Modern airliners, and by that I mean anything built since the mid-1980s, have three sensing systems: Air Data, Inertial Reference, and GPS. Air Data provides altitude, true airspeed, air temperature, angle of attack, and vertical speed (how fast the airplane is changing altitude). Inertial Reference measures acceleration in all three axes and, through first and second integration, calculates horizontal and vertical speed, as well as position. Finally, GPS measures position, and by differentiating position over time calculates speed in the horizontal plane.
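The division of labor between the inertial and GPS sides can be sketched in a few lines: the inertial system integrates measured acceleration (once for velocity, twice for position), while GPS does the inverse, differentiating successive position fixes to recover speed. A toy one-dimensional illustration, with made-up numbers:

```python
# Toy sketch of the two derivations described above. The numbers and
# one-second sample interval are invented for illustration only.

DT = 1.0  # sample interval, seconds

# Inertial side: a steady 2 m/s^2 acceleration, integrated twice.
accel = [2.0] * 5          # five one-second samples of acceleration
velocity = 0.0
positions = [0.0]
for a in accel:
    velocity += a * DT                               # 1st integration: accel -> velocity
    positions.append(positions[-1] + velocity * DT)  # 2nd integration: velocity -> position

# GPS side: differentiate the last two position fixes to recover speed.
gps_speed = (positions[-1] - positions[-2]) / DT

print(velocity)   # 10.0 m/s after 5 s at 2 m/s^2
print(gps_speed)  # 10.0 m/s, the same answer from the position track
```

The point is that the two systems arrive at the same quantity from opposite directions, which is exactly what makes cross-checking them possible.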
As well, the airplane knows how much it weighs, how it is loaded, trim, wing configuration, and control positions.
All of these things are interrelated, all the time. For example, given a set of values for airspeed, weight, air temperature, and so forth, there is only one altitude for which they can all be true. It is possible, in theory, to calculate any one of those parameters given values for all the rest.
Yet so far as I know, no airplane anywhere even tries.
Presuming what is known about the Lion Air mishap is roughly true, MCAS provides a perfect example: if it sees an impending stall angle of attack, it effectively assumes control of the pitch axis, without checking to see if, given all the other parameters, a stall angle of attack is reasonable.
Instead, it should go like this:
Scene: shortly after takeoff, accelerating through slat retraction speed, when MCAS wakes up.
MCAS, "YIKES WE ARE STALLING WE ARE ALL GOING TO DIE. Oh, wait, let's ask around the office."
"Hey, GPS, you space cadet, what are you seeing for groundspeed?"
"MCAS, at the moment, 195 knots, with plus 20 knot change over the last ten seconds."
"Inertials, what have you got?"
"MCAS, ten degree flight path angle, fifteen degree pitch attitude, 195 knots, plus 20 knots over the last ten seconds, 1.2G vertical acceleration."
"Okay, Air Data, over to you."
"MCAS, 198 knots true airspeed, vertical speed 2500 feet per minute, and AOA off the charts."
MCAS to self: With all that info, AOA should be about 5º, not 20º. Hmm, Inertials says the difference between pitch and flight path angle is 5º. We are accelerating AND climbing. Not only that, but at this airspeed, a stall AOA would put about 5G on the airplane which a) isn't happening, and b) would have long since shed the wings.
I know, instead of having a helmet fire over something that cannot possibly be true, I'll throw an AOA Unreliable alert, disable direct AOA inputs, then just sit on my digital hands.
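The core of that office conversation is a plausibility check: in the vertical plane, AOA is roughly pitch attitude minus flight path angle, so the inertial data alone is enough to call BS on the vane. A minimal sketch of that check; the function name, tolerance, and logic are my own assumptions, not Boeing's design:

```python
# Hypothetical cross-check an MCAS-like system could run before acting.
# Tolerance and structure are illustrative assumptions.

def aoa_plausible(measured_aoa_deg, pitch_deg, flight_path_deg,
                  tolerance_deg=3.0):
    """AOA should roughly equal pitch attitude minus flight path angle.
    If the measured AOA disagrees badly with that estimate, flag the
    sensor instead of trimming nose down."""
    estimated_aoa = pitch_deg - flight_path_deg
    return abs(measured_aoa_deg - estimated_aoa) <= tolerance_deg

# The scenario above: 15 deg pitch, 10 deg flight path, so AOA should be
# about 5 deg. A vane reading near stall AOA fails the check.
print(aoa_plausible(20.8, 15.0, 10.0))  # False: throw AOA Unreliable, inhibit MCAS
print(aoa_plausible(5.2, 15.0, 10.0))   # True: vane agrees with the inertials
```

A failed check would drive the "AOA Unreliable" alert rather than a nose-down trim command.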
In essence, this is what pilots are supposed to do all the time. If I were flying my airplane and the stall warning system activated under those conditions, I would correlate it with all the other available information and immediately reject it as impossible.
The list of mishaps such data integration could have prevented is almost beyond counting. AF447 ended up in the middle of the Atlantic because the airplane didn't have the sense to calculate that an airspeed of zero was impossible. It had enough information available to replace the erroneous measured value with a calculated value, instead of throwing up a perfect shitstorm of worthless warnings. (Granted, the pilots then proceeded to kill themselves and everyone else, but the airplane forged the first link in that chain.)
Less famously, about five years ago my company had a tail strike on landing in Denver that did about $11 million in damage to the plane. It happened because the airplane was told it had 100,000 pounds less freight than was actually on board. Yes, there were multiple lapses that caused that error to go undetected. And the crew failed to note the slower climb, and higher pitch attitudes throughout the flight; to be fair, the performance differences weren't glaring. But comparing measured and calculated parameters would have highlighted something was out of whack: fuel flow too high, angle of attack too high, trim wrong, and that thing has to be aircraft weight.
The Buffalo mishap was due to undetected clear icing on the wings. The crew should have noticed the pitch attitude was too high for the configuration and airspeed, but there is absolutely no reason that problem couldn't have been highlighted well before things got out of hand.
To me, this seems simple. (Maybe Bret can tell me otherwise.) A set of a dozen or so simultaneous equations each calculating a given parameter using the measured values of the remaining parameters. Each calculated value should be roughly the same as its measured value, and everything has to be internally consistent; otherwise, something is wrong.
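Such a set of equations might be sketched as follows. The relations here are deliberately simplified (small-angle geometry, a no-wind assumption), and every name and tolerance is my own invention, not anything certified:

```python
import math

# A minimal sketch of the consistency check described above: each
# parameter is re-derived from the others, and measured-vs-calculated
# residuals beyond tolerance flag a suspect sensor. Simplified physics,
# illustrative tolerances.

def consistency_check(m, aoa_tol=3.0, vs_tol=500.0, gs_tol=40.0):
    """m: dict of measured values. Returns the parameters whose measured
    value disagrees with the value calculated from the rest."""
    suspects = []

    # AOA should roughly equal pitch attitude minus flight path angle.
    if abs(m["aoa_deg"] - (m["pitch_deg"] - m["flight_path_deg"])) > aoa_tol:
        suspects.append("aoa_deg")

    # Vertical speed should match true airspeed resolved along the
    # flight path; 1 knot is about 101.3 ft/min.
    calc_vs = m["tas_kt"] * 101.3 * math.sin(math.radians(m["flight_path_deg"]))
    if abs(m["vs_fpm"] - calc_vs) > vs_tol:
        suspects.append("vs_fpm")

    # Absent a huge wind, GPS groundspeed and true airspeed should agree.
    if abs(m["gs_kt"] - m["tas_kt"]) > gs_tol:
        suspects.append("gs_kt")

    return suspects

# A climb where everything is internally consistent except the AOA vane:
measurements = {"pitch_deg": 12.0, "flight_path_deg": 7.0, "aoa_deg": 25.0,
                "tas_kt": 198.0, "vs_fpm": 2500.0, "gs_kt": 195.0}
print(consistency_check(measurements))  # ['aoa_deg']
```

A real implementation would carry a dozen such relations (weight, fuel flow, trim, and so on) rather than three, but the shape of the idea is the same: everything must be mutually consistent, or something is wrong.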
Yet what seems simple to me must not be, because such a thing does not exist.
Well, actually, it does: it is called Pilots. If there were never any circumstances where a BS flag needed waving, then pilots wouldn't be required. Those circumstances are far more common than the rare Lion Airs, AF447s, et al. would indicate. You never hear of the crash that didn't happen because the pilots effectively said "Yeah, no. We aren't doing that, because it doesn't make any sense."
Unfortunately, error cues can be subtle and easy to miss if everything else appears correct, or if the pilots aren't very experienced, or their training isn't very good, or they aren't on their A-game, or their background doesn't include much hands-on flying.
And this seems to have implications for autonomous vehicles of any kind. I don't think we fully comprehend how much expertise is within the operator, because operators themselves can't fully articulate what they are doing. Go ahead, try to describe what is required to ride a bike. It takes pilots years to reach a point where accumulated experience provides sufficient judgment to stop oddball situations getting worse.
It seems that these guys couldn't deal with a manageable situation, but those sorts of things get handled every day without making news. Take the human out of the system, though, and we will start finding out how much we don't know about what we know.