Friday, October 18, 2019

Quote of the Day

"Men are qualified for civil liberty in exact proportion to their disposition to put moral chains upon their own appetites,—in proportion as their love to justice is above their rapacity,—in proportion as their soundness and sobriety of understanding is above their vanity and presumption,—in proportion as they are more disposed to listen to the counsels of the wise and good, in preference to the flattery of knaves. Society cannot exist, unless a controlling power upon will and appetite be placed somewhere; and the less of it there is within, the more there must be without. It is ordained in the eternal constitution of things, that men of intemperate minds cannot be free. Their passions forge their fetters." -- EDMUND BURKE, “Letter to a Member of the National Assembly,” 1791.—The Works of the Right Honorable Edmund Burke, vol. 4, pp. 51–52 (1899).

This quote was embedded in an interesting speech by US Attorney General William Barr.

Thursday, October 17, 2019

Epic Construction Projects

There are train tracks directly behind my office. It used to be a single track, but a 2nd one was built and is about a mile long.

It took more than 2 years from the start of construction to finish that mile of track. At that rate, it would've taken 4,000 years to complete the US Transcontinental Railroad. So I was amused when an article by the historian Victor Davis Hanson (VDH) asked the question:
Does anyone believe that contemporary Americans could build another transcontinental railroad in six years?
Oh I'm sure someone, somewhere believes that it could be done again, but it seems beyond implausible to me. You'd think that with all of the technology we've developed, we could just snap our fingers and voila!, new railroads and regular roads and bridges and ... would appear in no time. But no, not even close.

VDH notes other typical nearly absurdly slow projects:
Californians tried to build a high-speed rail line. But after more than a decade of government incompetence, lawsuits, cost overruns and constant bureaucratic squabbling, they have all but given up. The result is a half-built overpass over the skyline of Fresno — and not yet a foot of track laid.

California’s roads now are mostly the same as we inherited them, although the state population has tripled. We have added little to our freeway network, either because we forgot how to build good roads or would prefer to spend the money on redistributive entitlements.

When California had to replace a quarter section of the earthquake-damaged San Francisco Bay Bridge, it turned into a near-disaster, with 11 years of acrimony, fighting, cost overruns — and a commentary on our decline into Dark Ages primitivism. Yet 82 years ago, our ancestors built four times the length of our single replacement span in less than four years. It took them just two years to design the entire Bay Bridge and award the contracts.

Our generation required five years just to plan to replace a single section. In inflation-adjusted dollars, we spent six times the money on one-quarter of the length of the bridge and required 13 agencies to grant approval. In 1936, just one agency oversaw the entire bridge project.

California has not built a major dam in 40 years. Instead, officials squabble over the water stored and distributed by our ancestors, who designed the California State Water Project and Central Valley Project.
Contemporary Californians would have little food or water without these massive transfers, and yet they often ignore or damn the generation that built the very system that saves us.

America went to the moon in 1969 with supposedly primitive computers and backward engineering. Does anyone believe we could launch a similar moonshot today? No American has set foot on the moon in the last 47 years, and it may not happen in the next 50 years.
VDH wonders if a new mythology will be born based on our forebears being able to construct wonders far beyond our modern-day capabilities:
Many of the stories about the gods and heroes of Greek mythology were compiled during Greek Dark Ages. Impoverished tribes passed down oral traditions that originated after the fall of the lost palatial civilizations of the Mycenaean Greeks.
Dark Age Greeks tried to make sense of the massive ruins of their forgotten forebears’ monumental palaces that were still standing around. As illiterates, they were curious about occasional clay tablets they plowed up in their fields with incomprehensible ancient Linear B inscriptions.
We of the 21st century are beginning to look back at our own lost epic times and wonder about these now-nameless giants who left behind monuments [such as the transcontinental railroad] that we cannot replicate, but instead merely use or even mock.
I do see his point. Who isn't frustrated with traffic being badly slowed for years while crews patch a few holes at a snail's pace?

However, VDH did leave out a few details that I think are important. First, the working conditions were really, really bad for most of those epic projects. Around 1,200 people died building the Transcontinental Railroad. The construction of the Golden Gate Bridge was noted for how "safe" it was - only 11 people died. Things are much, much more comfortable now. Almost nobody would be willing to work in those conditions and take those risks (especially for what they were paid) and even fewer in power are willing to let them take those risks.

Yet before we blame those running the projects for the death toll, we need to keep in mind that those horrible working conditions were often a step up from what the workers were previously experiencing. For example,
Many more workers were imported from the Guangdong Province of China, which at the time, beside great poverty, suffered from the violence of the Taiping Rebellion. Most Chinese workers were planning on returning with their new found "wealth" when the work was completed. Most of the men received between one and three dollars per day, the same as unskilled white workers ... A diligent worker could save over $20 per month after paying for food and lodging—a "fortune" by Chinese standards.
Second, though he does grudgingly admit it, VDH glosses over the fact that working with modern technology very often creates more value than building yet another road. Instead of concrete, we build most of our roads with glass fiber and electrons and both the market and the taxpayer think that's more valuable.

So to me, it's not so much that we were once competent at building immense material things and now we're not. Instead, it's that once upon a time we were very poor, and the best we could do was work high-risk construction jobs for the "fortune" of a net $20 per month, whereas now we can do oh-so-much better doing other things. And those who still work construction jobs (reasonably) demand orders-of-magnitude higher pay, far better working conditions, and far better safety.

VDH ends his article with:
Our ancestors were builders and pioneers and mostly fearless. We are regulators, auditors, bureaucrats, adjudicators, censors, critics, plaintiffs, defendants, social media junkies and thin-skinned scolds. A distant generation created; we mostly delay, idle and gripe.

As we walk amid the refuse, needles and excrement of the sidewalks of our fetid cities; as we sit motionless on our jammed ancient freeways; and as we pout on Twitter and electronically whine in the porticos of our Ivy League campuses, will we ask: “Who were these people who left these strange monuments that we use but can neither emulate nor understand?”

In comparison to us, they now seem like gods.
Perhaps we do "mostly delay, idle and gripe." But we can afford to, our ancestors could not. To me, our ancestors seem far less like gods and far more like people desperately impoverished compared to us trying to do the best they could. I thank them for taking the risks and building our comfort, I really do, but gods? Not so much.

Thursday, October 03, 2019

Flynn Effected

The Flynn Effect is one of the most cited topics in the debate over nature versus nurture regarding intelligence:

The Flynn effect is the substantial and long-sustained increase in both fluid and crystallized intelligence test scores that were measured in many parts of the world over the 20th century.[1] When intelligence quotient (IQ) tests are initially standardized using a sample of test-takers, by convention the average of the test results is set to 100 and their standard deviation is set to 15 or 16 IQ points. When IQ tests are revised, they are again standardized using a new sample of test-takers, usually born more recently than the first. Again, the average result is set to 100. However, when the new test subjects take the older tests, in almost every case their average scores are significantly above 100.
Test score increases have been continuous and approximately linear from the earliest years of testing to the present. For the Raven's Progressive Matrices test, a study published in the year 2009 found that British children's average scores rose by 14 IQ points from 1942 to 2008.[2] Similar gains have been observed in many other countries in which IQ testing has long been widely used, including other Western European countries, Japan, and South Korea.[1]
This effect is strong evidence against intelligence being overwhelmingly heritable (though since IQs run from less than 50 to 200+, there's still a fair amount of potential room for nature). As a result, James R. Flynn (for whom the effect was named) has been something of a hero for those who discount the heritable nature of intelligence.
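
The re-standardization mechanic described in the quote is easy to illustrate. Below is a small sketch with made-up raw scores (all numbers here are mine, chosen for illustration): a norming cohort is mapped to mean 100 / SD 15, and a later cohort with higher raw scores, graded against the old norms, averages well above 100, which is the Flynn effect's signature.

```python
import statistics

def standardize(raw_scores, target_mean=100, target_sd=15):
    """Map raw test scores onto an IQ scale: the norming sample is
    forced to mean 100 and SD 15 by a linear transform."""
    mu = statistics.mean(raw_scores)
    sigma = statistics.stdev(raw_scores)
    return [target_mean + target_sd * (x - mu) / sigma for x in raw_scores]

# Hypothetical norming sample (raw items answered correctly):
old_cohort = [38, 42, 45, 47, 50, 52, 55, 58, 61, 64]
iq_scale = standardize(old_cohort)   # this cohort averages exactly 100

# A later cohort answering more items correctly, but scored against
# the OLD norms (the old cohort's mean and SD):
mu = statistics.mean(old_cohort)
sigma = statistics.stdev(old_cohort)
new_cohort = [x + 8 for x in old_cohort]   # Flynn-style raw gain
new_on_old_norms = [100 + 15 * (x - mu) / sigma for x in new_cohort]

print(round(statistics.mean(iq_scale)))          # 100 by construction
print(round(statistics.mean(new_on_old_norms)))  # 114: above 100 on old norms
```

The same raw gain keeps disappearing each time the test is re-normed, which is why the effect only shows up when new test-takers sit the older tests.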

It seems, though, that Mr. Flynn's hero status has waned substantially. He recently tried to get a book published (In Defense of Free Speech: The University as Censor), but it was rejected out-of-hand by the publisher. The reasons for the rejection were explained in an email (the whole article is interesting) from the publisher:
I am contacting you in regard to your manuscript In Defense of Free Speech: The University as Censor. Emerald believes that its publication, in particular in the United Kingdom, would raise serious concerns. By the nature of its subject matter, the work addresses sensitive topics of race, religion, and gender. The challenging manner in which you handle these topics as author, particularly at the beginning of the work, whilst no doubt editorially powerful, increase the sensitivity and the risk of reaction and legal challenge. As a result, we have taken external legal advice on the contents of the manuscript and summarize our concerns below.
There are two main causes of concern for Emerald. Firstly, the work could be seen to incite racial hatred and stir up religious hatred under United Kingdom law. Clearly you have no intention of promoting racism but intent can be irrelevant. For example, one test is merely whether it is “likely” that racial hatred could be stirred up as a result of the work. This is a particular difficulty given modern means of digital media expression. The potential for circulation of the more controversial passages of the manuscript online, without the wider intellectual context of the work as a whole and to a very broad audience—in a manner beyond our control—represents a material legal risk for Emerald. ... [emphasis added]
The ironies are frightening (to me), yet delicious. The first is that a book arguing for free speech is censored. That's kinda gettin' near the end of the road for free speech, isn't it? The second is that a progressive hero is censored. As long as he was willing to research and write stuff that supports that which all right thinking people are certain is correct, he's a hero and is cited incessantly. Write something a little different and bzzzzt, throw the bastard out.

The truth may be dangerous and now we're at a point where trying to find the truth is even more dangerous.

Wednesday, September 18, 2019

Another Topic Too Dangerous to Discuss?

I find the topic of sex, gender, identity, power and social constructionism very interesting. And here's an interesting article on the topic with the following catchy excerpt:
I basically just made it up.
Human characteristics generally have a basis in some mix of nature and nurture (or DNA and memes if you prefer). Topics like the above are dangerous to discuss because if it can be interpreted that one is putting just a little too much emphasis on nature (for example that the contribution of nature/DNA is non-zero) then one can get in a lot of trouble.

I sometimes wonder if the study of biology, and particularly genetics, is going to be shut down in the future. The problem is that it's seemingly ever more at odds with social science. Biologists are finding more and more correlations between genes and human traits like intelligence and various behaviors via Genome-Wide Association Studies (GWAS), and are starting to propose mechanisms for the genetic basis of those traits, while social scientists flatly assert that what the biologists are finding simply cannot be correct.

Perhaps not all of biology will be banned - just those topics that have to do with things like intelligence, behavior and identity. Nonetheless, it seems like we might be headed for a different sort of Creationism - not one that's deity based, but rather social science based.

Tuesday, September 17, 2019

Richard Stallman Resigns

There have been topics that I've wanted to write about but have been hesitant to do so. For example, I found the Epstein phenomenon to be fascinating (though awful), from his motivations to his operations to his (apparent) suicide. However, it was moderately clear that writing even one word about the subject that could possibly be interpreted by anybody as not being politically correct could be devastating to me.

I met Richard Stallman, a MacArthur Fellowship ("genius grant") recipient and quintessential MIT nerd, a few times when I was at MIT, both at CSAIL and at parties. He was, in my opinion, quite opinionated and could be very abrasive, but he was also very smart, very talented, and extremely productive, and as far as I could tell he seemed, overall, to have a good heart.

He was recently forced to resign from various positions:
In 2019, Stallman was reported by colleagues to have made statements by email in defense of Marvin Minsky, then deceased, against allegations of sexual abuse in connection with Jeffrey Epstein's alleged child sex trafficking operation.[114] In the resulting furor, Stallman resigned from both MIT[115][116] and the Free Software Foundation.[117]
I'm not totally sure, but my recollection is that Minsky was at least somewhat of a mentor to Stallman, so it's not surprising that Stallman might be inclined to try to defend his dead mentor. And given that he's the quintessential MIT nerd, it's also not surprising that he'd lack the filters to realize that doing so would be a really bad idea.

Anyway, if I needed confirmation that Epstein was yet another topic I should stay way away from, this was it.

My question is: what can I write about that won't get me in trouble? I guess more science and math stuff so that's what I'll focus on.

Wednesday, September 11, 2019

Interesting Abstract

Here is an abstract I found interesting:

Technological innovation can create or mitigate risks of catastrophes—such as nuclear war, extreme climate change, or powerful artificial intelligence run amok—that could imperil human civilization. What is the relationship between economic growth and these existential risks? In a model of endogenous and directed technical change, with moderate parameters, existential risk follows a Kuznets-style inverted Ushape. This suggests we could be living in a unique “time of perils,” having developed technologies advanced enough to threaten our permanent destruction, but not having grown wealthy enough yet to be willing to spend much on safety. Accelerating growth during this “time of perils” initially increases risk, but improves the chances of humanity’s survival in the long run. Conversely, even short-term stagnation could substantially curtail the future of humanity. Nevertheless, if the scale effect of existential risk is large and the returns to research diminish rapidly, it may be impossible to avert an eventual existential catastrophe.

This has been my intuition for a long time. My metaphor is this: humanity/civilization is in a scramjet accelerating down a runway toward a brick wall. If we go full pedal-to-the-metal, we might, just might, be going fast enough to lift off in time to clear the wall. If we don't, we won't reach a high enough speed to clear it, but unfortunately our forward momentum is already too great to stop, so we're sure to hit the wall, and that will be the end.

Thursday, September 05, 2019

YASP (Yet Another Sunset Photo)

As usual, from my apartment. This time with a little foreground rain. No rainbow though.

Wednesday, June 26, 2019

Capitalism on Parade

Socialists have a long-standing argument against capitalism: it commodifies human relations, trades lives for money, and exploits the brown working class for the pleasure and benefit of the white rich.

I give you Exhibit A, which actually isn't trying to be Exhibit A.

I was going to summarize the video, except it is so well done as to well reward the 20 minutes spent watching it.

If this shows socialists to be right, why are they, nonetheless, wrong?

Wednesday, May 22, 2019

Burning Down the House

Those of a certain age, towards the trailing edge of the Baby Boom, very likely vividly remember Dr. Seuss, aka Theodor Seuss Geisel.

When the first of his books came out, the pantheon of primary readers was anchored by a trio of pallid characters: Dick, Jane, and some damn dog Spot. They did boring things — run! play! — in various boring ways, enervated by cold porridge prose.

Then along came Dr. Seuss. Off kilter drawings, quirky rhythms, and hare-brained adventures.

He is how I learned to read. I made my parents read those things to me until I had completely memorized them, a process made easier not only by their novelty, but by the cadences shoving the words into my brain. Having made the connection between sounds and words and letters, learning to read came effortlessly.

So how to view Dr. Seuss?

Poisonously, of course:

In the fall of 2017, there was a furor involving Dr. Seuss, the first lady and a school librarian that many people found surprising and disconcerting. In celebration of National Read a Book Day, Melania Trump had sent a parcel containing 10 Seuss titles to a school in Massachusetts.

What could possibly be wrong with this, providing such a powerful tool to help children read?

Lots, apparently.

At that point, we were well into the first year of the “Resistance,” and the librarian, Liz Phipps Soeiro, wanted to make various political points. Attacking Dr. Seuss was one of them. “Dr. Seuss is a bit of a cliché, a tired and worn ambassador for children’s literature,” she wrote in an open letter to Mrs. Trump, adding: “Dr. Seuss’s illustrations are steeped in racist propaganda, caricatures, and harmful stereotypes.”

Being woke must be a real burden; but then, the savior business is never easy.

To be completely fair, Mr. Geisel was not without sin.

… in Geisel’s juvenilia, his early political cartooning and some of his first books for children, he evoked ethnic and racial caricatures that were common in the early 20th century and that, by the lights of the early 21st, appear shocking and shameful.

Is the full body of Geisel’s work fatally tainted by “harmful stereotypes”? Do the origins of the hat-wearing cat really lie in minstrelsy, as Kansas State University professor Philip Nel and others believe? And if so—assuming these transgressions are detectable to the civilian eye, which is not a sure thing—do they outweigh the joy and love of reading that Dr. Seuss brought to all sorts of children and families?

The author, Mrs. Gurdon, misses a critical point: the librarian's presumption of an inborn moral-superiority superpower.

This librarian is not alone. Every one of these wokelings, the ones who want to tear down statues, rename buildings, or rubbish people like Mr. Geisel, is, must be, asserting that their wokeness is timeless: that had they been alive in Dr. Seuss's time, they would have been just as enlightened as they are now. We must trust that their judgment is not some post hoc virtue signaling, but rather comes from a deeper place accessible only to the vanguard, and give them special dispensation to decide for the rest of us which parts of our culture must be excised.

Thus, the first question to be asked of this librarian, and of every one of her ilk: Just who the hell do you think you are?

Thursday, May 09, 2019

Only the Best and Brightest

While I have quit wasting $20/month on the NYT — at which even the least discerning puppies turn up their noses — I still get their daily news summary. From today's comes this bit of journalistic excellence:

Indonesia: A group has begun translating the Quran into sign language, helping millions of deaf Muslims get access to their holiest book for the first time.

Sunday, April 14, 2019

The Book I Should Have Written

I'm old enough to remember those halcyon days when Earth was going to be burdened with so many people that some would get pushed off the edge.

Ummm. Not so much.

In the recently published Empty Planet, The Government and UN Experts are — shocking, I know — lagging the fight.

“The great defining event of the twenty-first century,” they say, “will occur in three decades, give or take, when the global population starts to decline. Once that decline begins, it will never end.”

For roughly thirty years, fertility has been declining, starting with the developed world. Since the 1990s, global fertility has plummeted far faster than anyone predicted, and may well go below the replacement rate within a decade.

The UN Population Division has systematically overestimated fertility, with projections out of date almost as soon as they are published. For example:

The U.N.’s most recent population forecasts suggest that the average U.S. total fertility rate from 2015 to 2020 should be 1.9 children per woman. In reality, CDC data shows U.S. fertility has averaged about 1.8 children per woman from 2015 to 2018. In 2019, early indications are that fertility will probably be nearer 1.7 children per woman.

Contrary to expectations, instead of recovering along with the economy, the US total fertility rate has continued to drop, now standing at 1.76. That amounts to 125 fewer daughters out of 1000 women per generation.
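
The "125 fewer daughters" figure checks out as a back-of-envelope calculation, assuming (simplistically) that half of births are girls and ignoring mortality:

```python
# Back-of-envelope: daughters born per 1,000 women over one generation,
# ignoring mortality and assuming half of births are girls (both are
# simplifications; the actual fraction of female births is ~48.8%,
# which would make the shortfall somewhat larger).
tfr = 1.76
women = 1000
daughters = women * tfr / 2      # 880.0 daughters per 1,000 women
shortfall = women - daughters    # 120.0, in the same ballpark as "~125"
print(daughters, shortfall)
```

With replacement, 1,000 women would be succeeded by roughly 1,000 daughters; at a TFR of 1.76 each generation of potential mothers is about 12% smaller than the last, and the shrinkage compounds.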

And the disconnect isn't limited to just the US — it is nearly global. UN population forecasts are almost certainly wrong, and not by just a little, but by billions.

Population decline isn't unique in history. The bubonic plague decimated Europe in the 1300s. War and famine have caused temporary, smaller, declines.

However, the recent, relentless decrease in fertility, during times of historic peace, health, and material comfort, is wholly unprecedented. So far, there is no indication that women, given a meaningful choice, will choose to have enough children to prevent a steady, relentless, global population decline.

(Territory we covered here at Great Guys in 2013 and 2016.)

Yet, somehow, the woke are completely eaten up by GlobalWarmingClimateChangeChaos. Amazing.

Thursday, February 14, 2019

Justice Kavanaugh and Global Warming

What do they have in common?

When Dr. Christine Ford's accusation against now-Justice Kavanaugh arrived with all the subtlety of the stricken Hindenburg, there was one thing that was near as dammit certain: the correlation between political proclivity and assault assessment.

Which is, or should be, beyond odd.

After all, in as much as they occupy entirely different realms, judicial philosophy and inclination towards coerced sex don't have any obvious correlation.

Yet when Dr. Ford's accusation came to light, the correlation between attitude towards constitutional originalism and Dr. Ford's credibility was nearly one. Progressives almost without exception found Dr. Ford credible; conservatives, incredible.

Same goes for Anthropogenic Global Warming. Conservative ≅ disdainful. Progressive ≅ dainful. Yet AGW, as an objective fact, just as Dr. Ford's accusation, is completely independent of judicial philosophy or political priors. These strong relationships shouldn't exist, yet there they are, nonetheless.

Welcome to motivated reasoning.

Clearly, a great many people simply do not think things through independently of their desire for a preferred outcome. Kavanaugh is to be resisted, therefore any impeachment of his character is true, and to heck with that bothersome evidence nonsense.

And just as clearly, should one have settled on individualistic free markets as the sine qua non of human flourishing, then AGW cannot, must not, be true.

Of course, as should be transparently obvious to even the most casual observer of reality, I am uniquely immune to motivated reasoning.

No matter that I agree with constitutional originalism, I am certain that Dr. Ford is a moral cretin.

And completely disregard the fact I am an individualist, AGW is nothing more than scientistic catechisms.

My reasoning is entirely unmotivated.

Now you know.

Tuesday, February 05, 2019

Deep Learning and Emergent Deception

With all of the processing power available, all kinds of Deep Neural Network learning topologies are possible with tens of millions of connections or "parameters" (which are similar in purpose to synapses in a biological brain).

Among the more interesting networks, to me, are Generative Adversarial Networks (GANs): two (or more) connected networks that fight to "outsmart" each other in a game. I've written about synthetic face generation before, and those applications use GANs. One network in the GAN learns to distinguish between real faces and synthetic faces and is called the discriminative network. The other network learns to generate synthetic faces and, not surprisingly, is called the generative network.

The generative network is "rewarded" when a synthetic face is so realistic that it fools the discriminative network and "punished" when the discriminative network correctly identifies the face as synthetic. And when the generative network is rewarded, the discriminative network is punished, and vice-versa. The two networks are locked in this zero-sum, win-at-all-costs struggle, each trying to be rewarded and to avoid punishment. If the GAN is set up correctly (getting the setup right is mostly guesswork and trial and error), it can produce really impressive results, as with the synthetic faces.
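
The reward/punishment loop can be sketched in a few lines. Here's a minimal, illustrative 1-D GAN in plain numpy; this is a toy, not the face-generation setup: the "data" is just numbers drawn from N(4, 1), the generator is a linear map of noise, and the discriminator is a single logistic unit. All names and hyperparameters are mine, chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# "Real" data: scalars from N(4, 1). Generator: g(z) = a*z + b on noise
# z ~ N(0, 1). Discriminator: one logistic unit, D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0            # generator parameters
w, c = 0.0, 0.0            # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: reward D(real) -> 1 and D(fake) -> 0.
    sr = sigmoid(w * real + c)
    sf = sigmoid(w * fake + c)
    w -= lr * (np.mean((sr - 1.0) * real) + np.mean(sf * fake))
    c -= lr * (np.mean(sr - 1.0) + np.mean(sf))

    # Generator step: reward D(fake) -> 1 (non-saturating GAN loss).
    sf = sigmoid(w * fake + c)
    a -= lr * np.mean((sf - 1.0) * w * z)
    b -= lr * np.mean((sf - 1.0) * w)

# The generator's outputs drift toward the real distribution's mean of 4,
# at which point the discriminator can no longer tell the two apart.
print(round(float(b), 2))
```

Each update punishes whichever network just lost the round, which is the whole adversarial trick; swap the scalars for images and the linear maps for deep convolutional nets and you have the face-generating setup.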

But deception is an inherent part of the generative network. After all, it's designed to try to fool the discriminative network and, ultimately, us humans. Recently, a generative network went well past the bounds of deception expected by its creators. The application is this: transform aerial images into street maps and back, to automate much of the image processing for things like Google Maps.

The above images show the process. There's the original aerial photograph (a), the street view (b), and the synthetic aerial view (c) that's reconstructed ONLY from the street view (b).

But wait! Looking at image (c), which is constructed from ONLY image (b), how on earth did it guess where to put the air conditioning units on the long white building? Or the trees? None of those details are in the street view image (b), right?

It turns out that the network "decided" to cheat:
It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect.

In fact, the computer is so good at slipping these details into the street maps that it had learned to encode any aerial map into any street map! It doesn’t even have to pay attention to the “real” street map — all the data needed for reconstructing the aerial photo can be superimposed harmlessly on a completely different street map...
In other words, the street view map has gazillions of minute variations that aren't visible to the human eye that encode the data required for the remarkable aerial reconstructions.
This practice of encoding data into images isn’t new; it’s an established science called steganography, and it’s used all the time to, say, watermark images or add metadata like camera settings. But a computer creating its own steganographic method to evade having to actually learn to perform the task at hand is rather new.
Note the last sentence. The generative network wasn't very good at generating the reconstructed aerial view the way it was supposed to. So instead, it figured out how to encode the data it needed so it didn't have to learn how to do it the right way.
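
The network invented its encoding scheme on its own, but the underlying trick, hiding data in image variations too small to notice, is classic steganography. Here's a minimal least-significant-bit sketch in numpy (the cover "image" and payload are made up for illustration; the GAN's learned encoding was far subtler than this):

```python
import numpy as np

def hide(cover, payload):
    """Write payload bytes into the least significant bit of each pixel."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten()                    # flatten() returns a copy
    assert bits.size <= flat.size, "cover image too small for payload"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def reveal(stego, n_bytes):
    """Read n_bytes back out of the least significant bits."""
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.full((64, 64), 128, dtype=np.uint8)   # a flat gray "image"
secret = b"aerial features"
stego = hide(cover, secret)

print(reveal(stego, len(secret)))   # b'aerial features'
# No pixel differs from the cover by more than 1 gray level:
print(int(np.abs(stego.astype(int) - cover.astype(int)).max()))  # 1
```

A one-level change per pixel is invisible to the eye but trivially readable by a machine, which is exactly the loophole the generative network exploited to smuggle the aerial details through the street map.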

The thing I find most interesting is the emergent deception. Nobody predicted this would happen (since it wasn't a desired result) and I don't think anybody could've predicted it.

We're currently able to use multiple networks with hundreds of millions of connections and we're already seeing emergent behavior that can't be predicted. Every ten years gives about a factor of 100 increase in processing power and network complexity.
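
That factor-of-100-per-decade figure is roughly what you get from compounding a Moore's-Law-style doubling every ~18 months; a quick sanity check:

```python
# A doubling every ~18 months compounds to roughly 100x per decade:
doublings_per_decade = 10 * 12 / 18      # about 6.7 doublings
factor = 2 ** doublings_per_decade
print(round(factor))                     # about 100
```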

It will be interesting to see what emerges when thousands of networks with billions of connections interact.

Tuesday, January 29, 2019

Happy 60th Birthday to the Integrated Circuit!

And what a momentous invention it has been:
The invention of the transistor-based logic engine, the integrated circuit, turned 60 this year. Today, humanity fabricates 1,000 times more transistors annually than the entire world grows grains of wheat and rice combined. Collectively, all those transistors consume more electricity than the state of California. The rise of transistors as “engines of innovation” emerged from Moore’s Law. And we’re still in its early days: paraphrasing Mark Twain, recent reports of the death of that Law are greatly exaggerated.

Monday, January 21, 2019

Artificial Deception

I've written recently about state-of-the-art creation of synthesized faces and I concluded:
I think that the day is coming within my lifetime when there'll be no need for human actors. Any screenwriter will just be able to work with AI based tools to create and produce movies. 
But what if the "screenwriter" isn't creating a work that's meant to be viewed as fiction, but rather a fictional story that's intended to look like news? In other words, what if the screenwriter wants to purposely create fake news? And what if those creations become ever more indistinguishable from real videos of real events?

It's actually beginning to happen:
Lawmakers and experts are sounding the alarm about "deepfakes," forged videos that look remarkably real, warning they will be the next phase in disinformation campaigns.
The manipulated videos make it difficult to distinguish between fact and fiction, as artificial intelligence technology produces fake content that looks increasingly real. [...]
Experts say it is only a matter of time before advances in artificial intelligence technology and the proliferation of those tools allow any online user to create deepfakes.
As a sort of expert in this area, I believe that to be true as well.

Pornography is one of the biggest areas where deepfakes are developing at the moment. For example:
Deepfakes are already here, including one prominent incident involving actress Scarlett Johansson. Johansson was victimized by deepfakes that doctored her face onto pornographic videos.
“Nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired,” she told The Washington Post in December, calling the issue a “lost cause.”
Ms. Johansson is wise enough to realize that trying to do much about it is a "lost cause." The problem is that the software to "understand" Ms. Johansson's face and to manipulate it realistically to replace the face of someone in a video, porn or otherwise, is actually fairly trivial, widely available, and getting easier and easier to access and use. The genie is out of the bottle and there's no way to recapture it.

Besides, porn is probably fairly far down the list of things to worry about, even if it will be a driver of the technology. Other sorts of fake news will generally be more of a problem:
Other cases have resulted in bloodshed. Last year, Myanmar's military is believed to have pushed fake news fanning anti-Muslim sentiment on Facebook that ignited a wave of killings in the country.
And as the fakes get better and better, inciting mobs will be easier and easier.

Of course governments, which like to regulate everything under the sun, are working to legislate against this sort of technology use:
Farid said First Amendment speech must be balanced with the new, emerging challenges of maintaining online security. [...]
Other countries are already working to ban deepfakes.
Australia, Farid noted, banned such content after a woman was victimized by fake nudes and the United Kingdom is also working on legislation.
Unfortunately (or maybe fortunately, depending on your point of view), my guess is that there's very little governments can do to stifle this sort of thing. Pretty much anybody with a high-end graphics card and a little too much time on their hands will be able to create these sorts of things.

In the end, I believe that the main reason fake news, including deepfake news, is a problem is that we're too damn gullible. The reason fake news works is because we want to believe it:
“We have to stop being so gullible and stupid of how we consume content online,” Farid said. “Frankly, we are all part of the fake news phenomenon.”
My guess is that after the first couple of outrageous deepfakes that catch us unawares, we'll quickly learn to be more skeptical. Hopefully, the first deepfakes don't drive us to nuclear war or anything completely catastrophic first.

Sunday, December 30, 2018

Kicking Tin: Lion Air 610

A month and a half ago, Lion Air 610* crashed 13 minutes after takeoff. Weather was not a factor. The aircraft had been delivered new to Lion Air ten weeks previously. (*The link is worth following. As shoddy as reporting on aircraft mishaps generally is, this is very well done.)

Attention quickly focused primarily on four factors: the Boeing 737 Max flight control system; Lion Air's maintenance; aircrew training and performance; and the Aircraft Flight Manual (AFM).

The B737 Max, the latest variant in the seemingly immortal 737 series, adds something called Maneuvering Characteristics Augmentation System (MCAS). The point of MCAS is to automatically trim the aircraft in the nose-down direction in the event of excessive angle-of-attack (AOA). (Angle of attack is the angle between the wing and the relative wind. Imagine an airplane completely level, but falling straight down — its AOA is 90º; same airplane, but in level flight, the AOA would be 0º. Stall AOA is defined as that AOA beyond which lift decreases, and is around 20º. Typical AOA varies between about 2.5º in cruise, and up to 7º during some phases of departure and approach.)

Avoiding as many details as possible, the B737 Max had engines that were larger in diameter than anything that had ever been installed on the 737. This presents an engineering problem, as the original 737 design had very short landing gear struts. As engine diameters have gotten larger, this has required more elaborate ways to keep them from hitting the ground during crosswind landings. With the Max, this meant mounting the engines further forward, and higher, than previously.

The result aggravated what has always been a handling issue with airplanes having wing mounted engines. The correct response to stall recovery is to do two things simultaneously: reduce AOA (lower the nose) and increase thrust. However, because the engines are below the wings, increasing thrust creates a very pronounced nose-up force, to the extent that if a stall is entered at low speed and idle thrust, the upward force generated by increased engine thrust can overcome the aerodynamic force available to push the nose down.

With the Max, Boeing decided that the thrust induced nose-up pitching moment had gotten sufficiently pronounced that the flight control system needed to step in and automatically trim the airplane nose down in order to augment the pilot's response.

In and of itself, that is a good thing — if AOA gets too high, lower it. Easy peasy. And it really is easy. AOA sensors are brick-simple: they are really nothing more than wind vanes hooked to a variable resistor. As one might expect, simple means rugged and reliable. In nearly forty years of flying, I have never experienced an AOA failure.

The problem here should be obvious: what never fails, did, and as a consequence, MCAS tried to take control of the plane. The crew ultimately lost the fight.

In the mishap sequence, this first leads to Lion Air maintenance. The aircraft had experienced airspeed indicator problems on the preceding four flights. Inexplicably, Lion Air's maintenance replaced an AOA sensor — this would be akin to replacing your steering wheel to fix the speedometer. Not only did that predictably fail to fix the problem, but they also likely failed to install the new AOA sensor properly, because on its penultimate flight, the airplane suffered an AOA failure, accompanied by MCAS intervention, which that crew was able to manage.

Now over to the pilots. They should have been aware of the issues with airspeed and AOA. The first item on the Captain's preflight is reviewing the maintenance logbook; for the First Officer, it is the first thing following the exterior preflight. Yet either they didn't do so, or the logbook failed to convey sufficient information, or the crew failed to consider the ramifications of erroneous AOA readings.

Whatever the reason, they were surprised by MCAS and insufficiently aware not only of how it works, but that there even was such a thing. Had they been familiar with MCAS, they would have known that it is inhibited unless the flaps and slats are fully retracted; simply selecting Flaps 1 (which brings the leading edge slats to half travel, and slightly extends the trailing edge flaps) would have put paid to it. As well, following the pilot adage "if things suddenly go to shit, undo the last thing you did" would have put things right no matter how aware they were of MCAS.

Alternatively, they could have gone to the Unscheduled Stab Trim procedure, which goes like this:

1. Position both Stab Trim switches (there are two completely independent trim systems) to cut-out.
2. Disengage the autopilot if engaged.
3. Alternately reengage the systems to isolate the faulty one.
4. If both primary systems are borked, proceed using the alternate trim system.

As with almost all aircraft mishaps, there are a great many links in the chain. Documentation, training, maintenance, and aircrew performance will each appear in the final report. It will perhaps fault Boeing for inadequate MCAS documentation in the AFM, and faulty MCAS implementation (more on that below). Lion Air maintenance will take a shellacking for not just likely poor maintenance procedures, but also shortcomings in documentation.

Finally, the pilots. Even if Boeing takes a hit for providing insufficient MCAS documentation in the AFM, it remains true that the crew had the means to shut off MCAS — cut out the primary pitch trim system — and then resort to the alternate trim system. That they didn't is clear; however, until the cockpit voice recorder is found, we will never know for certain why. I suspect fingers will be pointed at training. Outside the Anglosphere, EU, and Japan, the rest of the world doesn't put nearly as much emphasis on, and money into, training and standardization.


But I am still baffled

Modern airliners, and by that I mean anything built since the mid-1980s, have three sensing systems: Air Data, Inertial Reference, and GPS. Air Data provides altitude, true airspeed, air temperature, angle of attack, and vertical speed (how fast the airplane is changing altitude). Inertial Reference measures acceleration in all three axes, and through successive integration calculates horizontal and vertical speed, as well as position. Finally, the GPS measures position, and by differentiating position over time calculates speed in the horizontal plane.

As well, the airplane knows how much it weighs, how it is loaded, trim, wing configuration, and control positions.

All of these things are interrelated, all the time. For example, given a set of values for airspeed, weight, air temperature, and so forth, there is only one altitude for which they can all be true. It is possible, in theory, to calculate any of those parameters given values for all the rest.

Yet so far as I know, no airplane anywhere even tries.

Presuming what is known about the Lion Air mishap is roughly true, MCAS provides a perfect example: if it sees an impending stall angle of attack, it effectively assumes control of the pitch axis, without checking to see if, given all the other parameters, a stall angle of attack is reasonable.

Instead, it should go like this:

Scene: shortly after takeoff, accelerating through slat retraction speed, when MCAS wakes up.

MCAS, "YIKES WE ARE STALLING WE ARE ALL GOING TO DIE. Oh, wait, let's ask around the office."

"Hey, GPS, you space cadet, what are you seeing for groundspeed?"

"MCAS, at the moment, 195 knots, with plus 20 knot change over the last ten seconds."

"Inertials, what have you got?"

"MCAS, ten degree flight path angle, fifteen degree pitch attitude, 195 knots, plus 20 knots over the last ten seconds, 1.2G vertical acceleration."

"Okay, Air Data, over to you."

"MCAS, 198 knots true airspeed, vertical speed 2500 feet per minute, and AOA off the charts."

MCAS to self: With all that info, AOA should be about 5º, not 20º. Hmm, Inertials says the difference between pitch and flight path angle is 5º. We are accelerating AND climbing. Not only that, but at this airspeed, a stall AOA would put about 5G on the airplane which a) isn't happening, and b) would have long since shed the wings.

I know, instead of having a helmet fire over something that cannot possibly be true, I'll throw an AOA Unreliable alert, disable direct AOA inputs, then just sit on my digital hands.

In essence, this is what pilots are supposed to do all the time. If I were flying my airplane and the stall warning system activated under those conditions, I would correlate that with all the other available information and immediately reject it as impossible.

The list of mishaps such data integration could have prevented is almost beyond counting. AF447 ended up in the middle of the Atlantic because the airplane didn't have the sense to calculate that an airspeed of zero was impossible. It had enough information available to replace the erroneous measured value with a calculated value, instead of throwing up a perfect shitstorm of worthless warnings. (Granted, the pilots then proceeded to kill themselves and everyone else, but the airplane forged the first link in that chain.)

Less famously, about five years ago my company had a tail strike on landing in Denver that did about $11 million in damage to the plane. It happened because the airplane was told it had 100,000 pounds less freight than was actually on board. Yes, there were multiple lapses that caused that error to go undetected. And the crew failed to note the slower climb, and higher pitch attitudes throughout the flight; to be fair, the performance differences weren't glaring. But comparing measured and calculated parameters would have highlighted something was out of whack: fuel flow too high, angle of attack too high, trim wrong, and that thing has to be aircraft weight.

The Buffalo mishap was due to undetected clear icing on the wings. The crew should have noticed the pitch attitude was too high for the configuration and airspeed, but there is absolutely no reason that problem couldn't have been highlighted well before things got out of hand.

To me, this seems simple. (Maybe Bret can tell me otherwise.) A set of a dozen or so simultaneous equations each calculating a given parameter using the measured values of the remaining parameters. Each calculated value should be roughly the same as its measured value, and everything has to be internally consistent; otherwise, something is wrong.
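A minimal sketch of one such cross-check, in Python. To be clear, this is nothing from any real flight control law; it rests on two simplifications I'm assuming (in steady, no-wind flight, AOA is roughly pitch attitude minus flight path angle, and holding stall AOA at speed V demands a load factor of roughly (V/Vs)²), and the stall speed and tolerance values are made-up illustrative numbers:

```python
def aoa_plausibility(measured_aoa_deg, pitch_deg, flight_path_deg,
                     tas_kt, stall_speed_kt, load_factor_g,
                     tolerance_deg=3.0):
    """Sanity-check a measured angle of attack against independent sources.

    Two independent estimates:
      1. Inertial: in steady, no-wind flight, AOA ~ pitch minus flight path angle.
      2. Aerodynamic: holding stall AOA at true airspeed V would demand a load
         factor of roughly (V / Vs)**2; if that G isn't present, the stall
         indication can't be real.

    Returns True if the measured AOA is plausible, False if it should be
    flagged unreliable and ignored.
    """
    inertial_aoa = pitch_deg - flight_path_deg
    stall_would_need_g = (tas_kt / stall_speed_kt) ** 2

    disagrees_with_inertial = abs(measured_aoa_deg - inertial_aoa) > tolerance_deg
    g_rules_out_stall = load_factor_g < 0.5 * stall_would_need_g

    # Only distrust the vane when an independent source contradicts it AND
    # the load factor says a stall can't possibly be happening.
    return not (disagrees_with_inertial and g_rules_out_stall)
```

Fed the numbers from the dialogue above (measured AOA over 20º, pitch 15º, flight path 10º, 195 knots, an assumed 90-knot stall speed, 1.2 G), it flags the reading as implausible, which is exactly the "AOA Unreliable" outcome described.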

Yet what seems simple to me must not be, because such a thing does not exist.

Well, actually, it does; it is called Pilots. If there were never any circumstances where a BS flag needed waving, then pilots wouldn't be required. Those circumstances are far more common than the rare Lion Airs, AF447s, et al. would indicate. You never hear of the crash that didn't happen because the pilots effectively said "Yeah, no. We aren't doing that, because it doesn't make any sense."

Unfortunately, error cues can be subtle and easy to miss if everything else appears correct, or if the pilots aren't very experienced, or their training isn't very good, or they aren't on their A-game, or their background doesn't include much hands-on flying.

And this seems to have implications for autonomous vehicles of any kind. I don't think we fully comprehend how much expertise is within the operator, because operators themselves can't fully articulate what they are doing. Go ahead, try to describe what is required to ride a bike. It takes pilots years to reach a point where accumulated experience provides sufficient judgment to stop oddball situations getting worse.

It seems that these guys couldn't deal with a manageable situation, but those sorts of things get handled every day without making news. Take the human out of the system, though, and we will start finding out how much we don't know about what we know.

Monday, December 17, 2018

I've written about Artificial Intelligence based face and scene creation before but now the same researchers have taken it one step farther. The top eight pictures were generated with those previous algorithms - pretty realistic but not quite right, especially around the eyes. The bottom pictures were generated by the updated algorithms and it's very hard for me to see the fakeness.

The article is here and the relevant video follows:

I'll admit that I'm not quite following all of the explanation, but it's still fascinating for me to watch.

I think that the day is coming within my lifetime when there'll be no need for human actors. Any screenwriter will just be able to work with AI based tools to create and produce movies.

More Computing Efficiency

Not only have computers gotten exponentially faster per dollar, they've also gotten amazingly more energy efficient:

Over the past 60 years, the energy efficiency of ever-less expensive logic engines has improved by over one billion fold. No other machine of any kind has come remotely close to matching that throughout history.
Consider the implications even from 1980, the Apple II era. A single iPhone at 1980 energy-efficiency would require as much power as a Manhattan office building. Similarly, a single data center at circa 1980 efficiency would require as much power as the entire U.S. grid. But because of efficiency gains, the world today has billions of smartphones and thousands of datacenters.
Of course an iPhone would have been impossible to build at any price in 1980 and even if possible would have required the space of an entire Manhattan office building!

Saturday, November 10, 2018

Is Anybody Out There?

Back in the 1950s, Enrico Fermi posed the eponymous paradox: surrounded by an uncountable number of stars, why haven't we encountered extra-terrestrial intelligence?

After all, even if life, and subsequently intelligent life, is statistically unlikely, its existence elsewhere is statistically certain. Further, since it is extremely unlikely that humans are the first intelligent life to emerge in our galaxy, the seeming absence of other intelligent life is a puzzle that needs explaining.

A decade later, Frank Drake formulated an equation supplying the terms that must be considered in contemplating how many extraterrestrial intelligences (ETIs) there might be.

In successive decomposition, it goes something like this: the number of stars, the fraction that have planets, the fraction of those that have habitable planets, the fraction of them that go on to develop life, the fraction of life bearing planets that yield intelligent life, the fraction that release detectable signals into space, and the duration those signals are emitted.

Of all those parameters, only the number of stars is even approximately known. But it is so large that even the multiplicative combination of very low probabilities would seem to make the existence of ETIs certain.

There are two potential resolutions to the Fermi paradox.

The first wasn't even remotely predictable in the 1950s and 1960s. At the time, radio and TV signals were often broadcast from 100,000 watt transmitters. What no one could predict then is a near certainty within a couple decades: our planet going dark. The combination of low-power satellite transmitters, cellular networks, and near-pervasive landline networks has rendered high-power transmitters all but obsolete.

Now that alone doesn't eliminate the Fermi paradox, because even if other ETIs don't radiate enough energy to be detectable, that is of no real help: the likelihood that at least one ETI developed long before we did is a near certainty; therefore, such a civilization should long ago have pervaded the galaxy.

That, in turn, requires a more or less heroic assumption — that moving anything more than trivial masses to other stars is possible.

Taken in combination, it is possible that the galaxy is littered with ETIs that will be forever confined to their stars, and undetectable from every other ETI.

But what if the certainty the Drake Equation predicts is wrong? What if widespread optimistic presumptions about some of its elements have greatly overstated their likelihood?

The problem with the Drake equation is that it uses point estimates for each of the factors.

To quickly see the problems point estimates can cause, consider the following toy example. There are nine parameters (f1, f2, . . .) multiplied together to give the probability of ETI arising at each star.

Suppose that our true state of knowledge is that each parameter could lie anywhere in the interval [0, 0.2], with our uncertainty being uniform across this interval, and being uncorrelated between parameters.

In this example, the point estimate for each parameter is 0.1, so the product of point estimates is a probability of 1 in a billion. Given a galaxy of 100 billion stars, the expected number of life-bearing stars would be 100, and the probability of all 100 billion events failing to produce intelligent civilizations can be shown to be vanishingly small: 3.7 × 10−44. Thus in this toy model, the point estimate approach would produce a Fermi paradox: a conflict between the prior extremely low probability of a galaxy devoid of ETI and our failure to detect any signs of it.

Instead, the authors account for our uncertainty by applying a Monte Carlo simulation — randomly assigning a probability in the range [0, 0.2] for each factor, then combining the values for each of the factors.

The result?

More than 22% of the simulations produce a galaxy devoid of even one ETI.
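That result is easy to reproduce. Here is my reconstruction of the toy model in Python; the chance that all N = 100 billion stars fail to produce an ETI is (1 − p)^N, well approximated by exp(−N·p), and the function names and sample counts are my own choices:

```python
import math
import random

def empty_galaxy_fraction(n_sims=100_000, n_stars=1e11, n_factors=9, seed=1):
    """Monte Carlo version of the toy model: draw each of the nine factors
    uniformly from [0, 0.2], multiply them into a per-star probability p of
    intelligent life arising, then compute the chance the whole galaxy is
    empty, (1 - p)**n_stars ~ exp(-n_stars * p). Returns the average of
    that chance over all simulated galaxies."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        p = 1.0
        for _ in range(n_factors):
            p *= rng.uniform(0.0, 0.2)
        total += math.exp(-n_stars * p)  # P(no ETI anywhere in this galaxy)
    return total / n_sims

# For comparison, the point-estimate approach: p = 0.1**9 = 1e-9, and the
# chance of an empty galaxy collapses to exp(-100), about 3.7e-44.
```

Running it gives a bit over 0.2, i.e., roughly the 22% quoted above, versus the 3.7 × 10⁻⁴⁴ implied by the point estimates. The difference is entirely due to draws where several factors happen to land near zero, which the point estimate averages away.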

But wait, there's more.

Instead of assigning point probabilities to each factor, we can model each factor as itself a combination of factors. Take the existence of life as an example. Abiogenesis is a transition from non-life to life that "… occurs at some rate per unit time per unit volume of a suitable prebiotic substrate." Using informed guesses about rate, volume, protein folding, etc., yields a range of estimates for the existence of life on suitable planets spanning 20 orders of magnitude. (There is much more to this than I am presenting, btw.)

Applying uncertainty distributions reflecting current knowledge to each of the factors in the Drake Equation, what do you suppose the likelihood is that we are alone, not just in the galaxy, but in the entire observable universe?

Nearly 38%.

I sure didn't see that coming.