
Tuesday, January 29, 2019

Happy 60th Birthday to the Integrated Circuit!

And what a momentous invention it has been:
The invention of the transistor-based logic engine, the integrated circuit, turned 60 this year. Today, humanity fabricates 1,000 times more transistors annually than the entire world grows grains of wheat and rice combined. Collectively, all those transistors consume more electricity than the state of California. The rise of transistors as “engines of innovation” emerged from Moore’s Law. And we’re still in its early days: paraphrasing Mark Twain, recent reports of the death of that Law are greatly exaggerated.

Monday, January 21, 2019

Artificial Deception

I've written recently about state-of-the-art creation of synthesized faces and I concluded:
I think that the day is coming within my lifetime when there'll be no need for human actors. Any screenwriter will just be able to work with AI-based tools to create and produce movies.
But what if the "screenwriter" isn't creating a work that's meant to be viewed as fiction, but rather a fictional story that's intended to look like news? In other words, what if the screenwriter wants to purposely create fake news? And what if those creations become ever harder to distinguish from real videos of real events?

It's actually beginning to happen:
Lawmakers and experts are sounding the alarm about "deepfakes," forged videos that look remarkably real, warning they will be the next phase in disinformation campaigns.
The manipulated videos make it difficult to distinguish between fact and fiction, as artificial intelligence technology produces fake content that looks increasingly real. [...]
Experts say it is only a matter of time before advances in artificial intelligence technology and the proliferation of those tools allow any online user to create deepfakes.
As a sort of expert in this area, I believe that to be true as well.

Pornography is one of the biggest areas where deepfakes are being developed at the moment. For example:
Deepfakes are already here, including one prominent incident involving actress Scarlett Johansson. Johansson was victimized by deepfakes that doctored her face onto pornographic videos.
“Nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired,” she told The Washington Post in December, calling the issue a “lost cause.”
Ms. Johansson is wise enough to realize that trying to do much about it is a "lost cause." The problem is that the software to "understand" Ms. Johansson's face and to manipulate it realistically to replace the face of someone in a video, porn or otherwise, is actually fairly trivial, widely available, and getting easier and easier to access and use. The genie is out of the bottle and there's no way to recapture it.
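
To make concrete just how low the barrier has become, here's a minimal Python sketch (the image filenames are placeholders I've made up) that uses OpenCV's stock face detector to do a crude cut-and-paste face swap. A real deepfake pipeline swaps in a trained neural network instead of this naive paste, but the basic building blocks really are just a pip install away:

```python
# A minimal sketch, not a deepfake pipeline: detect a face in each image and
# paste one over the other. Assumes OpenCV (pip install opencv-python) and two
# hypothetical files, source.jpg and target.jpg.
import cv2

# OpenCV ships a pre-trained frontal-face detector with the library itself.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

source = cv2.imread("source.jpg")   # face we want to transplant
target = cv2.imread("target.jpg")   # frame we want to alter

def first_face(image):
    """Return (x, y, w, h) of the largest detected face, or None."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return max(faces, key=lambda f: f[2] * f[3]) if len(faces) else None

src_box, dst_box = first_face(source), first_face(target)
if src_box is not None and dst_box is not None:
    sx, sy, sw, sh = src_box
    dx, dy, dw, dh = dst_box
    # Crude swap: resize the source face to fit the target's face box and paste it in.
    patch = cv2.resize(source[sy:sy + sh, sx:sx + sw], (dw, dh))
    target[dy:dy + dh, dx:dx + dw] = patch
    cv2.imwrite("swapped.jpg", target)
```

That's the whole thing: a free library, a bundled face detector, and a few lines of glue. The deepfake tools circulating online just replace the paste step with a learned model that matches lighting, expression, and motion.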

Besides, porn is probably fairly far down the list of things to worry about, even if it will be a driver of the technology. Other sorts of fake news will generally be more of a problem:
Other cases have resulted in bloodshed. Last year, Myanmar's military is believed to have pushed fake news fanning anti-Muslim sentiment on Facebook that ignited a wave of killings in the country.
And as the fakes get better and better, inciting mobs will be easier and easier.

Of course, governments, which like to regulate everything under the sun, are working to legislate against this sort of technology use:
Farid said First Amendment speech must be balanced with the new, emerging challenges of maintaining online security. [...]
Other countries are already working to ban deepfakes.
Australia, Farid noted, banned such content after a woman was victimized by fake nudes, and the United Kingdom is also working on legislation.
Unfortunately (or maybe fortunately, depending on your point of view), my guess is that there's very little governments can do to stifle this sort of thing. Pretty much anybody with a high-end graphics card and a little too much time on their hands will be able to create these fakes.

In the end, I believe that the main reason fake news, including deepfake news, is a problem is that we're too damn gullible. Fake news works because we want to believe it:
“We have to stop being so gullible and stupid of how we consume content online,” Farid said. “Frankly, we are all part of the fake news phenomenon.”
My guess is that after the first couple of outrageous deepfakes catch us unawares, we'll quickly learn to be more skeptical. Hopefully, those early fakes won't drive us to nuclear war or anything else completely catastrophic first.