Attention quickly focused on four factors: the Boeing 737 Max flight control system; Lion Air's maintenance; aircrew training and performance; and the Aircraft Flight Manual (AFM).
The B737 Max, the latest variant in the seemingly immortal 737 series, adds something called the Maneuvering Characteristics Augmentation System (MCAS). The point of MCAS is to automatically trim the aircraft in the nose-down direction in the event of excessive angle-of-attack (AOA). (Angle of attack is the angle between the wing and the relative wind. Imagine an airplane completely level, but falling straight down — its AOA is 90º; the same airplane in level flight would have an AOA of 0º. Stall AOA is defined as the AOA beyond which lift decreases, and is around 20º. Typical AOA varies between about 2.5º in cruise and up to 7º during some phases of departure and approach.)
Skipping as much detail as possible: the B737 Max has engines larger in diameter than anything ever installed on a 737. This presents an engineering problem, because the original 737 design has very short landing gear struts. As engine diameters have grown, keeping the engines from hitting the ground during crosswind landings has required ever more elaborate solutions. With the Max, this meant mounting the engines further forward, and higher, than previously.
The result aggravated what has always been a handling issue with airplanes having wing-mounted engines. The correct stall recovery is to do two things simultaneously: reduce AOA (lower the nose) and increase thrust. However, because the engines are below the wings, increasing thrust creates a very pronounced nose-up force, to the extent that if a stall is entered at low speed and idle thrust, the upward force generated by increasing engine thrust can overcome the aerodynamic force available to push the nose down.
With the Max, Boeing decided that the thrust-induced nose-up pitching moment had become sufficiently pronounced that the flight control system needed to step in and automatically trim the airplane nose down in order to augment the pilot's response.
In and of itself, that is a good thing — if AOA gets too high, lower it. Easy peasy. And it really is easy. AOA sensors are brick-simple: they are really nothing more than wind vanes hooked to a variable resistor. As one might expect, simple means rugged and reliable. In nearly forty years of flying, I have never experienced an AOA failure.
The problem here should be obvious: what never fails, did, and as a consequence, MCAS tried to take control of the plane. The crew ultimately lost the fight.
In the mishap sequence, this first leads to Lion Air maintenance. The aircraft had experienced airspeed indicator problems on the preceding four flights. Inexplicably, Lion Air's maintenance replaced an AOA sensor — this would be akin to replacing your steering wheel to fix the speedometer. Not only did that predictably fail to fix the problem, they likely failed to install the new AOA sensor properly, because on its penultimate flight the airplane suffered an AOA failure, accompanied by MCAS intervention, which that crew was able to manage.
Now over to the pilots. They should have been aware of the issues with airspeed and AOA. The first item on the Captain's preflight is reviewing the maintenance logbook; for the First Officer, it is the first thing following the exterior preflight. Yet either they didn't do so, or the logbook failed to convey sufficient information, or the crew failed to consider the ramifications of erroneous AOA readings.
Whatever the reason, they were surprised, insufficiently aware not only of how MCAS works, but that there even was such a thing. Had they been familiar with MCAS, they would have known that it is inhibited unless the flaps and slats are fully retracted; simply selecting Flaps 1 (which brings the leading edge slats to half travel, and slightly extends the trailing edge flaps) would have put paid to it. As well, following the pilot adage, "if things suddenly go to shit, undo the last thing you did," would have put things right no matter how aware they were of MCAS.
Alternatively, they could have gone to the Unscheduled Stab Trim procedure, which goes like this:
1. Position both (there are two completely independent trim systems) Stab Trim switches to cut-out.
2. Disengage the autopilot if engaged.
3. Alternately reengage the systems to isolate the faulty system.
4. If both primary systems are borked, proceed using the alternate trim system.
As with almost all aircraft mishaps, there are a great many links in the chain. Documentation, training, maintenance, and aircrew performance will each appear in the final report. It will perhaps fault Boeing for inadequate MCAS documentation in the AFM, and faulty MCAS implementation (more on that below). Lion Air maintenance will take a shellacking for not just likely poor maintenance procedures, but also shortcomings in documentation.
Finally, the pilots. Even if Boeing takes a hit for providing insufficient MCAS documentation in the AFM, it remains true that the crew had the means to shut off MCAS — cut out the primary pitch trim system — and then resort to the alternate trim system. That they didn't is clear; however, unless the cockpit voice recorder is found, we may never know for certain why. I suspect fingers will be pointed at training. Outside the Anglosphere, the EU, and Japan, the rest of the world doesn't put nearly as much emphasis on, and money into, training and standardization.
Fine.
Modern airliners, and by that I mean anything built since the mid-1980s, have three sensing systems: Air Data, Inertial Reference, and GPS. Air Data provides altitude, true airspeed, air temperature, angle of attack, and vertical speed (how fast the airplane is changing altitude). Inertial Reference measures acceleration in all three axes and, through first- and second-order integration, calculates horizontal and vertical speed, as well as position. Finally, GPS measures position, and from successive positions over time derives speed in the horizontal plane.
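To make the inertial piece concrete, here is a toy, one-axis sketch (everything about it is illustrative — names, units, and sampling rate are mine, not any avionics vendor's code):

```python
def integrate_inertial(accels, dt, v0=0.0, x0=0.0):
    """One axis of what an inertial reference unit does: integrate
    measured acceleration once for velocity, a second time for position."""
    v, x = v0, x0
    for a in accels:
        v += a * dt   # first integration: acceleration -> velocity
        x += v * dt   # second integration: velocity -> position
    return v, x

# Ten seconds of a steady 0.5 m/s^2 acceleration, sampled at 10 Hz:
v, x = integrate_inertial([0.5] * 100, dt=0.1)
print(v, x)  # ~5.0 m/s and ~25.25 m
```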
As well, the airplane knows how much it weighs, how it is loaded, trim, wing configuration, and control positions.
All of these things are interrelated, all the time. For example, given a set of values for airspeed, weight, air temperature, and so forth, there is only one altitude for which they can all be true. It is possible, in theory, to calculate any of those parameters given values for all the rest.
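For instance, the air data system's true airspeed can be back-calculated from two things it never touches: GPS groundspeed and the known wind. A minimal sketch (the function names and the 15-knot tolerance are my inventions):

```python
import math

def tas_from_gps(gs_kt, track_deg, wind_kt, wind_from_deg):
    """Back-calculate true airspeed from GPS groundspeed and the wind.
    The air vector is the ground vector minus the wind vector."""
    gs_n = gs_kt * math.cos(math.radians(track_deg))
    gs_e = gs_kt * math.sin(math.radians(track_deg))
    # Wind direction is where the wind blows FROM, hence the negation.
    wind_n = -wind_kt * math.cos(math.radians(wind_from_deg))
    wind_e = -wind_kt * math.sin(math.radians(wind_from_deg))
    return math.hypot(gs_n - wind_n, gs_e - wind_e)

def airspeed_consistent(measured_tas_kt, gps_tas_kt, tolerance_kt=15.0):
    """Flag whether the measured and back-calculated airspeeds agree."""
    return abs(measured_tas_kt - gps_tas_kt) <= tolerance_kt

# 450 kt groundspeed heading north with a 50 kt direct tailwind (from 180):
print(tas_from_gps(450, 0, 50, 180))    # ~400 kt
print(airspeed_consistent(0.0, 400.0))  # False: the pitot is lying
```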
Yet so far as I know, no airplane anywhere even tries.
Presuming what is known about the Lion Air mishap is roughly true, MCAS provides a perfect example: if it sees an impending stall angle of attack, it effectively assumes control of the pitch axis, without checking to see if, given all the other parameters, a stall angle of attack is reasonable.
Instead, it should go like this:
Scene: shortly after takeoff, accelerating through slat retraction speed, when MCAS wakes up.
MCAS, "YIKES WE ARE STALLING WE ARE ALL GOING TO DIE. Oh, wait, let's ask around the office.
"Hey, GPS, you space cadet, what are you seeing for groundspeed?"
"MCAS, at the moment, 195 knots, with plus 20 knot change over the last ten seconds."
"Inertials, what have you got?"
"MCAS, ten degree flight path angle, fifteen degree pitch attitude, 195 knots, plus 20 knots over the last ten seconds, 1.2G vertical acceleration"
"Okay, Air Data, over to you."
"MCAS, 198 knots true airspeed, vertical speed 2500 feet per minute, and AOA off the charts."
MCAS to self: With all that info, AOA should be about 5º, not 20º. Hmm, Inertials says the difference between pitch and flight path angle is 5º. We are accelerating AND climbing. Not only that, but at this airspeed, a stall AOA would put about 5G on the airplane which a) isn't happening, and b) would have long since shed the wings.
I know, instead of having a helmet fire over something that cannot possibly be true, I'll throw an AOA Unreliable alert, disable direct AOA inputs, then just sit on my digital hands.
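Written out as code, that office conversation amounts to a plausibility gate something like this (a sketch only; the thresholds are invented, and this is emphatically not Boeing's actual logic):

```python
def aoa_plausible(vane_aoa_deg, pitch_deg, fpa_deg, load_factor_g):
    """The 'ask around the office' check, as code (thresholds invented).

    In coordinated flight, AOA is roughly pitch attitude minus flight
    path angle, and a genuine near-stall AOA at speed would show up as
    heavy G on the accelerometers.
    """
    derived_aoa = pitch_deg - fpa_deg          # e.g. 15 - 10 = 5 degrees
    if abs(vane_aoa_deg - derived_aoa) > 4.0:
        return False   # vane disagrees with the inertials
    if vane_aoa_deg >= 18.0 and load_factor_g < 2.0:
        return False   # "stall" AOA without the G that must accompany it
    return True

# The scene above: vane says 20, inertials imply 5, 1.2G. Implausible,
# so throw AOA UNRELIABLE and inhibit MCAS instead of trimming nose down.
print(aoa_plausible(20.0, 15.0, 10.0, 1.2))  # False
```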
In essence, this is what pilots are supposed to do all the time. If I were flying my airplane and the stall warning system activated under those conditions, I would correlate it with all the other available information and immediately reject it as impossible.
The list of mishaps such data integration could have prevented is almost beyond counting. AF447 ended up in the middle of the Atlantic because the airplane didn't have the sense to calculate that an airspeed of zero was impossible. It had enough information available to replace the erroneous measured value with a calculated value, instead of throwing up a perfect shitstorm of worthless warnings. (Granted, the pilots then proceeded to kill themselves and everyone else, but the airplane forged the first link in that chain.)
Less famously, about five years ago my company had a tail strike on landing in Denver that did about $11 million in damage to the plane. It happened because the airplane was told it had 100,000 pounds less freight than was actually on board. Yes, there were multiple lapses that caused that error to go undetected. And the crew failed to note the slower climb and higher pitch attitudes throughout the flight; to be fair, the performance differences weren't glaring. But comparing measured and calculated parameters would have highlighted that something was out of whack: fuel flow too high, angle of attack too high, trim wrong; the one thing that explains all of them is aircraft weight.
The Buffalo mishap was due to undetected clear icing on the wings. The crew should have noticed the pitch attitude was too high for the configuration and airspeed, but there is absolutely no reason that problem couldn't have been highlighted well before things got out of hand.
To me, this seems simple. (Maybe Bret can tell me otherwise.) A set of a dozen or so simultaneous equations, each calculating a given parameter from the measured values of the remaining parameters: each calculated value should roughly match its measured value, and everything has to be internally consistent; otherwise, something is wrong.
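Something like this bare-bones sketch, where each measured parameter carries a function that recomputes it from the others, and any residual beyond tolerance raises a flag (nothing here is real avionics; the estimators and tolerances are invented):

```python
import math

# Each measured parameter gets a function that back-calculates it from the
# other measurements. A real set would run a dozen or more such checks.
def est_groundspeed(s):
    # Groundspeed should equal true airspeed plus the along-track wind.
    return s["tas_kt"] + s["tailwind_kt"]

def est_vertical_speed(s):
    # Flight path angle and TAS imply a vertical speed (1 kt ~ 101.3 ft/min).
    return s["tas_kt"] * 101.3 * math.sin(math.radians(s["fpa_deg"]))

CHECKS = {
    "gs_kt":  (est_groundspeed, 20.0),     # (estimator, tolerance)
    "vs_fpm": (est_vertical_speed, 700.0),
}

def inconsistent(state):
    """Every calculated value should roughly match its measured value;
    return the names of the ones that don't."""
    return [name for name, (est, tol) in CHECKS.items()
            if abs(state[name] - est(state)) > tol]

# A consistent state: nothing flagged.
print(inconsistent({"tas_kt": 250, "tailwind_kt": 30, "gs_kt": 282,
                    "fpa_deg": 3.0, "vs_fpm": 1300}))  # []
# A vertical speed that can't be true given TAS and flight path angle:
print(inconsistent({"tas_kt": 250, "tailwind_kt": 30, "gs_kt": 282,
                    "fpa_deg": 3.0, "vs_fpm": 4000}))  # ['vs_fpm']
```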
Yet what seems simple to me must not be, because such a thing does not exist.
Well, actually, it does, it is called Pilots. If there were never any circumstances where a BS flag needed waving, then pilots wouldn't be required. Those circumstances are far more common than the rare Lion Airs, AF447s, et al would indicate. You never hear of the crash that didn't happen because the pilots effectively said "Yeah, no. We aren't doing that, because it doesn't make any sense."
Unfortunately, error cues can be subtle and easy to miss if everything else appears correct, or if the pilots aren't very experienced, or their training isn't very good, or they aren't on their A-game, or their background doesn't include much hands-on flying.
And this seems to have implications for autonomous vehicles of any kind. I don't think we fully comprehend how much expertise is within the operator, because operators themselves can't fully articulate what they are doing. Go ahead, try to describe what is required to ride a bike. It takes pilots years to reach the point where accumulated experience provides sufficient judgment to stop oddball situations from getting worse.
It seems that these guys couldn't deal with a manageable situation, but those sorts of things get handled every day without making news. Take the human out of the system, though, and we will start finding out how much we don't know about what we know.
22 comments:
There are few times when I read something for free and feel like I should be paying for it. When you write about planes, I get that feeling. Thanks.
I agree with CLovis on that. I've never seen such clear analysis of aircraft problems, anywhere, ever.
"Yet, despite what seems simple to me must no be, because such a thing does not exist."
Are these various sensors part of different subsystems by different manufacturers with no intercommunication specs? Because, otherwise, I agree, it should be simple enough to detect that one or more readings are inconsistent, and in your examples above, so inconsistent that it should be automatically flagged to the pilot that one or more of the sensors can't be relied on and that any autonomous subsystems that rely on them should disengage as soon as the pilots are ready.
Or have these things not been integrated because pilots are expected to notice and take action? Which you say they usually do except once in a great while as in this case.
"And this seems to have implications for autonomous vehicles of any kind."
I think "any kind" is probably too strong. For example, an autonomous fork lift guided by rails in a warehouse that never goes particularly fast probably is sufficiently different that said implications don't really apply.
But beyond that, can you give some examples of aircraft failures (such as sensor failures) where a pilot recovered but a computer could not have? Because in this failure you've written about, the pilots probably could've done better, but I see no reason a computer couldn't've figured it out if it had been programmed/trained correctly.
Another reason I'm not sure it has such wide implications is that while a commercial large aircraft pilot is very, very skilled, those of us driving cars (or riding bicycles) really aren't. When things go wrong, we very often crash. For example, if we hit a patch of ice, most of us simply don't do the right thing and we lose control of the car. Some variant of a deep convolutional net will simply be better at those edge cases because it will have been trained on a huge number of hours of such cases, whereas someone like me has never been exposed to most of those edge cases.
"Go ahead, try to describe what is required to ride a bike."
It's been done at many levels (e.g. rule-based, neural-network based, etc.). Some of those require "describing" (i.e. programming) how to control a bicycle; others just train nets and don't require description.
It's pretty simple anyway. Push on pedals to go, squeeze brake to stop, turn in the direction you're tilting to become more vertical, tilt in the direction you wish to turn (turn first in the opposite direction you wish to tilt). There are delays and speed has a major effect on those delays which you have to get a feel for, but there's really nothing to it which is why one can teach a 4-year-old to ride a bike in an afternoon.
I'll be surprised if by 2030 most new ground vehicles don't have complete autonomous capabilities available. Will they make mistakes? Yeah, but they'll be better than most human drivers. But we'll see - it's only 11 years and then you can tell me you told me so if I'm wrong.
On the other hand, for large commercial aircraft, I'll be surprised if there aren't still pilots. The cost of the pilot is very small compared to cost of fuel, capital equipment, etc. and the incentive to get rid of pilots isn't overwhelming.
Great, great post, Skipper. Even an arts and law grad can follow it, more or less. Many thanks.
[Clovis:] There are few times when I read something for free and feel like I should be paying for it …
Just returning the favor.
[Bret:] Are these various sensors part of different subsystems by different manufacturers with no intercommunication specs?
All of them go through the Flight Management System.
Here are some examples. Optimum cruise altitude predictions require outside air temperature, forecast and actual winds, and aircraft gross weight. To provide pitch commands, the FMS needs vertical and horizontal speeds, altitude delta, and pitch attitude. The Ground Proximity Warning System (GPWS) takes inputs from air data, inertials, the instrument landing system, and radio altimeters.
I could go on, but not helpfully.
About six months ago, en route from Tel Aviv to Cologne, the left radar altimeter failed to a value varying between -6' and -14'. The radalt is required for autolandings, certain hand flown approaches in low visibility, the landing configuration caution system, the Ground Proximity Warning System, and the Wind Shear Warning System. Any radalt value greater than 2500' above ground level is ignored, and the radalt value blanks.
But what about my situation? Clearly, a negative radalt value shouldn't even exist. But it did, and parts of the flight management system decided they were on the ground, even though every other input (horizontal speed, barometric altitude, THE OTHER RADAR ALTIMETER, etc.) quite clearly said that was way wrong. The airplane stopped properly maintaining altitude, and threw erroneous landing-gear-not-down warnings. It wouldn't follow its own vertical path guidance on the descent, and airspeed control went to crap.
We dealt with it through various means ranging from lower automation modes to manual override where required.
But, with all the information available, surely it isn't beyond the ken of Boeing to run some sanity checks on the radalt before letting it take over the show.
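The check could be as dumb as this sketch (limits invented; baro_height_agl_ft stands in for a hypothetical baro-minus-terrain estimate of height above the ground):

```python
def radalt_credible(radalt_ft, other_radalt_ft, baro_height_agl_ft, airborne):
    """Sanity-check one radar altimeter before any system consumes it.
    Illustrative rules only, with invented limits."""
    if airborne and radalt_ft < 0:
        return False       # a negative height in flight is impossible
    if other_radalt_ft is not None and abs(radalt_ft - other_radalt_ft) > 100:
        return False       # THE OTHER RADAR ALTIMETER disagrees
    if radalt_ft < 100 and baro_height_agl_ft > 2500:
        return False       # everything else says the ground is miles away
    return True

# The Tel Aviv-Cologne failure: left radalt stuck near -10 ft in cruise
# (the right one blanked above 2500 ft AGL, hence None):
print(radalt_credible(-10, None, 34000, True))  # False: reject it
```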
Sometimes the same failure in other circumstances leads to a far worse outcome.
[Bret:] Or have these things not been integrated because pilots are expected to notice and take action? Which you say they usually do except once in a great while as in this case.
That the latter is nearly always the case is no reason for the former. I think the actual reason boils down to this: aircraft designers don't do it because aircraft designers haven't done it. And certification requirements don't specify it. And mishap investigations never bring it up.
[Bret:] But beyond that, can you give some examples of aircraft failures (such as sensor failures) where a pilot recovered but a computer could not have?
On at least one flight prior to the ultimate flight, the crew was apparently faced with the same, or a similar, problem, and successfully dealt with it. We wouldn't otherwise have heard about it.
Eight or so years ago, I was pilot flying into Indianapolis from Anchorage. For spacing, approach control vectored us across final, then back to intercept.
The airplane turned the wrong way. Or would have, had I not given it the single digit salute.
(More later.)
[Bret:] Another reason I'm not sure it has such wide implications is that while a commercial large aircraft pilot is very, very skilled, those of us driving cars (or riding bicycles) really aren't. When things go wrong, we very often crash.
I disagree. "When things go wrong, we very often crash" is probably no more true of auto drivers than it is pilots.
Yes, there are more car crashes than airliner crashes, but my underlying point remains the same: the crashes that didn't happen obscure what we don't know about what we know, and until AI can stand outside the realm it is controlling, it will be, at best, a simulacrum.
For example, when I was in my late teens, I was driving on a parkway with two lanes in either direction, no divider. I'm in the right lane, and it is completely clear ahead. The line of cars to my left starts slowing.
Not sharply, but for no apparent reason. So I did, too. And as a consequence did not run over a Golden Retriever coming from the other side of the road. That is the sort of non-accident that happens all the time, and it happened because of a judgment made in the absence of evidence. What heuristic do you provide to AI to deal with that?
What we can articulate about any non-trivial task is superficial. Your description of what is required to ride a bike is both accurate and lacking. It completely leaves out observing one's surroundings at an extremely granular level — I'm riding alongside a row of parallel parked cars. Six cars ahead, the passenger side door starts to open. Now what? (And that is leaving aside that an autonomous system capable of making observations at that level, then deciding how to act upon them, would dwarf the bike it was on.)
Operating in three dimensions is at least three orders of magnitude harder than in two. But the environment is almost completely ordered, in stark contrast with a typical city street. So in many regards, developing autonomous airplanes is a much less daunting task than cars.
Yet there lies Lion Air. Either what seems simple to me is a lot harder than I know, which means the truly difficult stuff will be impossible.
Or our rate of solving problems is so slow that autonomous vehicles are a lot further off than we think.
Skipper, I usually skip technical articles, but you make things clear and understandable. I hope you're right about autonomous vehicles because experience and intuition (as in your example about the dog on the road) can't be built into machines.
Hey Skipper asked: "What heuristic do you provide to AI to deal with that?"
Did you have a heuristic or did you simply react?
AI technology has evolved to the point where there are very limited heuristics and programming. It's about training deep networks with literally billions of hours of actual driving experience and then allowing them to react.
Unless you can point to something in a human brain that's beyond physical (i.e. supernatural), for example magic, or a god given soul, or intelligent design hardwiring the brain for driving vehicles, or whatever, I'd like to know what you think a brain can do that a computer inherently cannot. Unless there is supernatural influence, a brain is a computer, no more, no less.
That's the change in AI in the last 5 years. Prior to that it was all clever programming and heuristics and it was painfully slow going. Now it's having computers mimic desired behavior - a much easier task because the mimicry simply emerges within the complexity of the deep networks. Artificial Intelligence is starting on the path of exponential progress because it no longer requires intelligence to create it.
[Bret:] Did you have a heuristic or did you simply react?
It is safe to say I reacted. However, the question is: to what?
I deduced that I needed to stop, but since the deduction came in the absence of overt evidence, it was really a hunch. While that particular incident sticks out in my memory, I am sure there are similar hunches beyond counting because they prevented anything memorable happening. We don't know what we know, nor how we got to know it.
AI can't learn from what didn't happen.
[Bret:] Unless you can point to something in a human brain that's beyond physical (i.e. supernatural), for example magic, or a god given soul, or intelligent design hardwiring the brain for driving vehicles, or whatever, I'd like to know what you think a brain can do that a computer inherently cannot.
Take fifteen minutes and listen to this, a segment from This American Life.
It takes the position that we know enough about physics to conclude that free will is impossible, that the laws of physics leave no room for free will.
The argument is compelling, as far as it goes. Unfortunately, imho, it doesn't go far enough. The permutations of all the brain's neurons and all their possible states lead to a problem space so huge that even given total knowledge of a brain at some instant, and a specific stimulus, there isn't enough time left in the universe to figure out what the next state will be. Of course, that doesn't exclude the possibility that the next state is pre-determined by the previous state. Equally, it can't exclude the possibility that there is some executive function in the brain, the "self", which imposes preferences on successive states, thereby performing free will.
That isn't just down to the number of neurons, but also to their possible states. In the brain, how many states may a neuron have? No one knows, but it seems certain that neurons are analog. Also, neurons are densely interconnected — in the human brain, each neuron is synaptically connected to roughly 7,000 others.
This means that the structure and operating principles of the brain are utterly unlike any computer's. A brain is no more a computer than a computer is a brain; they are no more alike than chalk and cheese. Just as there are plenty of things a digital computer can do that my brain can't, these differences must mean there are things my brain can do that are outside the realm of a computer. I'll bet one of them is the ability of the human brain to synthesize new knowledge from existing knowledge. A human doesn't need billions of hours of driving experience to become a pretty credible driver (and far more credible in countries that have decent standards). That deep networks require billions of hours of training to be fine right up to the second they aren't should be a clue that brains can do pretty easily what is still well beyond AI.
A honeybee brain is pretty simple. Is AI anywhere near being able to do what a honeybee does, never mind in a honeybee sized package?
[Bret:] Artificial Intelligence is starting on the path of exponential progress because it no longer requires intelligence to create it.
I agree that AI is going to go well beyond its heuristics-bound shell. But so long as AI is dependent upon two-state basic units, it will be bound in ways a brain isn't.
Nope.
Hey Skipper wrote: "Take fifteen minutes and listen to this..."
Sorry, other than short clips of music, I don't watch or listen to things. I simply can't do it without becoming extremely irritated. Is there a transcript? Or can you summarize?
Hey Skipper wrote: "...it seems certain neurons are analog ... But so long as AI is dependent upon two-state basic units..."
Say What? I'm not totally sure what you mean by "two-state" basic units but if you mean that a computer is limited because it's digital (binary) rather than analog, that makes no sense. And you should know that.
It's not a big trick to make an analog computer (indeed many have been built), so it would seem they'd be really popular if there were an advantage. There's not an advantage: while digital computers have quantization noise, analog computers have analog noise, which for a given level of cost is more limiting than quantization noise, and that's why virtually all modern computers are digital. The human brain has all kinds of noise, but that's not in any way an advantage; it's a huge disadvantage.
If your goal is to not be predictable (and therefore also not repeatable), that's easy. Introduce quantum random numbers into some of the calculations and voila, you'll have a computer that's as flaky as a brain. Yay! That seems like a silly goal to me, but predictability can be overcome in a computer if you so desire.
But AIs interacting with the real world are not predictable either. For example, if you had an AI drive a car from Los Angeles to New York, do you believe you could predict the exact path with exact timing? You could not, not with all the computing power of the universe, because the world essentially introduces the random components.
Hey Skipper wrote: "A human doesn't have to do billions of hours of driving experience to become a pretty credible driver..."
A typical human is trained (i.e. is alive) in general motor control for about 150,000 hours before they then learn to drive. AIs do not need nearly as many hours to be trained to drive.
Hey Skipper wrote: "That billions of hours are required to train deep networks..."
Obviously, billions of hours will never be used to train deep networks. Otherwise, we'll all be dead before they're ready.
Hey Skipper wrote: "...should be a clue that brains can do pretty easily what is still well beyond AI."
Yes, brains are better at screwing up, making mistakes, being inconsistent, being emotional, lying, cruelty, hate, war, destruction, malice, and lots of other things. They are probably slightly better than AIs at driving "still" at this point.
"Still" won't last long.
Bret, are you convinced that our brains produce the emotions you list? Mr. Google equivocates on the subject.
erp,
In a void, brains produce nothing at all and would be effectively comatose, so I agree that emotions are a response of a brain to current and past stimulus, both external and internal to the body.
... therefore you believe that AI brains will also respond to current and past stimuli, both external and internal to their bodies, to create emotions?
No, that's why I wrote "brains are better at ... being emotional..."
Bret, I'm confused. Then you do believe that AI brains will evolve to develop emotions naturally the same way human and animal brains do?
Since AIs don't (at least for now) have a biological body, there'd be no reason I can see that they would be developed in a way that would evolve emotion-like constructs.
Yep.
What you describe should be fairly easy.
As you say, almost every sensor produces a value that could instead be calculated from the other sensors, often from different combinations of other sensors.
Airspeed, for example, has three sources:
1. The measured value from the airspeed indicator (the pitot tube).
2. A calculation based on inertial data and control settings (attitude, inertial inputs, flaps, thrust).
3. A calculation based on GPS velocity, attitude, and a guesstimate of wind speeds.
There are quite a few ways to handle it. You could loop through all the important sensors, calculating what they should say compared to what they do say, and if there's a serious discrepancy the flight control system flags the sensor as bad and substitutes the back-calculated value for the input.
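For example, a simple mid-value selection with substitution (every number here is made up):

```python
import statistics

def select_airspeed(pitot_kt, inertial_est_kt, gps_est_kt, tolerance_kt=15.0):
    """Mid-value selection with substitution of a back-calculated value.

    Returns (airspeed, pitot_ok). If the pitot reading strays too far
    from the median of the three sources, use the average of the two
    back-calculated estimates instead and flag the sensor as bad.
    """
    median = statistics.median([pitot_kt, inertial_est_kt, gps_est_kt])
    if abs(pitot_kt - median) > tolerance_kt:
        return (inertial_est_kt + gps_est_kt) / 2.0, False
    return pitot_kt, True

# AF447-style failure: pitot reads near zero while the inertial and GPS
# estimates agree on ~470 kt -> substitute ~470 and flag the pitot.
print(select_airspeed(0.0, 470.0, 475.0))  # (472.5, False)
```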
But I have another issue with Boeing's MCAS system.
***
The reason Boeing added MCAS is that at high angles of attack, the far-forward engine mounting moves the center of lift forward (the nacelles themselves generate lift), while the thrust vector times sine(AOA) adds an upward force component far ahead of the center of gravity, pushing the nose up.
But the MCAS runs nose down trim for 10 seconds, adding up to 1.5 degrees of down trim, then waits and checks the AOA, and then repeats the cycle, potentially all the way to the travel stop on the jack screw.
I think that’s the wrong way to do it.
Fly an older 737 NG or Classic and measure the pitch performance at high AOA, perhaps using yoke forces as the standard. Then fly the 737 MAX at those same thrust/AOA points and measure its pitch forces. Adjust the change in the angle of incidence of the tail (the pitch trim) to make the required yoke forces on the MAX the same as on the NG. This could be done with a simple lookup table or a curve fit, but the point is that for any combination of thrust, airspeed, and AOA, the difference between the two planes will be constant, and thus the MCAS system should just adjust the trim to a pre-determined point relative to where it was originally set by the pilot.
That’s not what they did. What they did is act like the pitch force difference between an NG and a MAX grows over time, even if the AOA and thrust aren’t changing. They adjust, take a sensor reading, and possibly keep right on adjusting. The MCAS system is actively trying to fly the aircraft to get a particular outcome (reduced AOA), instead of compensating for a built-in flight dynamics difference between models.
At a particular airspeed, thrust, and AOA, it's going to take only a particular adjustment (in degrees) between the horizontal trim setting of an NG and that of a MAX to result in the same pitch angle, pitch rate, or control stick forces for the pilot. So compared to an NG, a MAX might require 2 more degrees of negative pitch trim in a particular set of conditions.
The MCAS system should've been designed to dial in those extra two degrees. Instead the system is seeking a particular result in seeing a decreased AOA. But why should it? Maybe the pilot wants a high AOA, which is why he isn't shoving the nose forward. Maybe he's trying to show off the gentle nature of the plane's stall to some Arab prince. The plane doesn't know what he wants to do, so as an engineer I'd think the system's job, given its origin, is to make the MAX fly like an NG, so an experienced NG pilot doesn't notice any handling difference.
They could probably have added a cam to the jack screw that converts AOA, airspeed, and throttle setting to a predetermined trim change, and that's how I'd approach the software.
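As a sketch, indexing on AOA alone for brevity (a real table would also index thrust and airspeed, and every number here is invented, not flight-test data):

```python
import bisect

# Hypothetical calibration: additional nose-down trim (degrees) the MAX
# needs relative to the NG, as measured in flight test, indexed by AOA.
AOA_BREAKPOINTS = [5.0, 10.0, 15.0, 20.0]   # degrees
TRIM_DELTA      = [0.0,  0.5,  1.2,  2.0]   # degrees nose down

def mcas_trim_offset(aoa_deg):
    """Linearly interpolate the predetermined MAX-vs-NG trim delta.
    A one-shot offset from the pilot's trim setting, not an iterative
    loop chasing a lower AOA."""
    if aoa_deg <= AOA_BREAKPOINTS[0]:
        return TRIM_DELTA[0]
    if aoa_deg >= AOA_BREAKPOINTS[-1]:
        return TRIM_DELTA[-1]
    i = bisect.bisect_right(AOA_BREAKPOINTS, aoa_deg) - 1
    frac = (aoa_deg - AOA_BREAKPOINTS[i]) / (AOA_BREAKPOINTS[i+1] - AOA_BREAKPOINTS[i])
    return TRIM_DELTA[i] + frac * (TRIM_DELTA[i+1] - TRIM_DELTA[i])

# commanded_trim = pilot_trim - mcas_trim_offset(aoa): bounded by the
# table maximum, so it can never run the jack screw to the stop.
print(mcas_trim_offset(12.0))  # 0.78 degrees
```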
Skipper (I suppose the Unknown above is Skipper),
Some news sites have been claiming the MCAS system was not even mentioned in the MAX manuals - that must be fake news, right?
Your idea above looks like a good one, though I suppose it would require so many new tests and certifications that Boeing will hardly try to change the philosophy of MCAS right now, lest it take more than a year to provide the upgrades needed to get all the MAXes back off the ground ASAP.
Do you think Boeing will end up paying reparations for the two lost planes so far? And with so many planes grounded, shouldn't those airlines be suing Boeing too?
[Clovis:] Skipper (I suppose the Unknown above is Skipper),
Some news sites have been claiming the MCAS system was not even mentioned in the MAX manuals - that must be fake news, right?
No, Unknown is truly unknown, but nonetheless raises interesting points.
It is true that MCAS was not mentioned in the manuals.
Airliners are very complex machines. Some line needs drawing to separate what the pilots need to know from what is possible to know. Generally, anything that crews cannot control is below that line. I have no earthly idea how the landing gear sequences. Why should I? I can't affect the sequence in any way, and if any part of it goes wrong, then there is a Non-Normal Checklist to deal with the consequences.
Same for MCAS. It is beyond pilot control, and if something goes wrong, there is a NNC to deal with it.
The shortfall in Boeing's reasoning is that MCAS was vulnerable to a single point failure, which means that what had once been virtually completely unknown — uncommanded pitch trim — became far more likely. So even though the procedure existed to quickly deal with the problem, it had been heretofore so uncommon as to never feature in training.
I think in the case of Lion Air, the maintenance issues were so serious as to minimize, if not eliminate, Boeing's liability.
As for Ethiopian, for the life of me, I can't figure out how the heck the crew didn't instantly jump to shutting off power to the primary pitch trim.
[Unknown:] The MCAS system is actively trying to fly the aircraft to get a particular outcome (reduced AOA)
I'm speculating here, but I think MCAS was designed to deal with a particular situation: a botched missed approach.
They happen with alarming frequency. Not the missed approaches themselves — they are fairly rare — but once a missed approach is initiated, far too often things go pear-shaped.
For instance: my company had a missed approach at XXXX. But due to inconsistent flight modes, the airplane got to 45º nose high, and 93 knots before the crew initiated an unusual attitude recovery.
In my airplane, the 757, the engines will just about, but not quite, overpower pitch authority under those circumstances. However, with the MAX, the sudden application of full thrust without an immediate and significant pitch down might have made matters worse. So MCAS was added to make the pitch authority on the MAX similar to that of previous generation 737s.
[Unknown:] They could probably have added a cam to the jack screw that converts AOA, airspeed, and throttle setting to a predetermined trim change, and that's how I'd approach the software.
I disagree. Software for MCAS could be modified to look at the difference between pitch attitude and flight path vector (both available from the inertial reference units) and compare it to both the FO and CA AOA values. That difference, BTW, *should* be AOA.
Every time I look in my Heads Up Display, I see both the flight path vector and pitch attitude. I can see that the difference is essentially equal to the displayed AOA, which is directly measured.
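Sketched in code (the thresholds are my guesses, nothing more), the vote might look like:

```python
def mcas_aoa_input(capt_vane_deg, fo_vane_deg, pitch_deg, fpa_deg,
                   vane_split_limit=5.5, inertial_limit=4.0):
    """Pick an AOA value for MCAS to act on, or refuse and return None.

    The inertially derived AOA (pitch minus flight path angle) acts as
    the tiebreaker: a vane that disagrees with both the other vane and
    the inertials is voted out; if nothing agrees, MCAS stands down.
    """
    derived = pitch_deg - fpa_deg
    if abs(capt_vane_deg - fo_vane_deg) <= vane_split_limit:
        return (capt_vane_deg + fo_vane_deg) / 2.0
    # Vanes disagree: trust whichever one the inertials corroborate.
    for vane in (capt_vane_deg, fo_vane_deg):
        if abs(vane - derived) <= inertial_limit:
            return vane
    return None  # AOA UNRELIABLE: inhibit MCAS, alert the crew

# Lion Air-like inputs: captain's vane 20 deg, FO's 5 deg, inertials say 5:
print(mcas_aoa_input(20.0, 5.0, 15.0, 10.0))  # 5.0: the good vane wins
```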
Why Boeing didn't think to do that is a real mystery.
Skipper, the reason may not be that mysterious as Boeing is probably hiring for diversity rather than the ability to understand what you're saying here.
Wish you were flying and designing the planes my kids and grandkids are constantly flying around the world on.
Hope all is well with you guys.