Monday, January 21, 2019

Artificial Deception

I've written recently about state-of-the-art creation of synthesized faces and I concluded:
I think that the day is coming within my lifetime when there'll be no need for human actors. Any screenwriter will just be able to work with AI based tools to create and produce movies. 
But what if the "screenwriter" isn't creating a work that's meant to be viewed as fiction, but rather a fictional story that's intended to look like news? In other words, what if the screenwriter wants to purposely create fake news? And what if those creations are ever more indistinguishable from real videos of real events?

It's actually beginning to happen:
Lawmakers and experts are sounding the alarm about "deepfakes," forged videos that look remarkably real, warning they will be the next phase in disinformation campaigns.
The manipulated videos make it difficult to distinguish between fact and fiction, as artificial intelligence technology produces fake content that looks increasingly real. [...]
Experts say it is only a matter of time before advances in artificial intelligence technology and the proliferation of those tools allow any online user to create deepfakes.
As a sort of expert in this area, I believe that to be true as well.

Pornography is one of the biggest areas where deepfakes are developing at the moment. For example:
Deepfakes are already here, including one prominent incident involving actress Scarlett Johansson. Johansson was victimized by deepfakes that doctored her face onto pornographic videos.
“Nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired,” she told The Washington Post in December, calling the issue a “lost cause.”
Ms. Johansson is wise enough to realize that trying to do much about it is a "lost cause." The problem is that the software to "understand" Ms. Johansson's face and to manipulate it realistically to replace the face of someone in a video, porn or otherwise, is actually fairly trivial, widely available, and getting easier and easier to access and use. The genie is out of the bottle and there's no way to recapture it.

Besides, porn is probably fairly far down the list of things to worry about, even if it will be a driver of the technology. Other sorts of fake news will generally be more of a problem:
Other cases have resulted in bloodshed. Last year, Myanmar's military is believed to have pushed fake news fanning anti-Muslim sentiment on Facebook that ignited a wave of killings in the country.
And as the fakes get better and better, inciting mobs will be easier and easier.

Of course governments, which like to regulate everything under the sun, are working to legislate against this sort of technology use:
Farid said First Amendment speech must be balanced with the new, emerging challenges of maintaining online security. [...]
Other countries are already working to ban deepfakes.
Australia, Farid noted, banned such content after a woman was victimized by fake nudes, and the United Kingdom is also working on legislation.
Unfortunately (or maybe fortunately, depending on your point of view), my guess is that there's very little governments can do to stifle this sort of thing. Pretty much anybody with a high-end graphics card and a little too much time on their hands will be able to create these sorts of things.

In the end, I believe that the main reason fake news, including deepfake news, is a problem is that we're too damn gullible. The reason fake news works is because we want to believe it:
“We have to stop being so gullible and stupid of how we consume content online,” Farid said. “Frankly, we are all part of the fake news phenomenon.”
My guess is that after the first couple of outrageous deepfakes that catch us unawares, we'll quickly learn to be more skeptical. Hopefully, the first deepfakes don't drive us to nuclear war or anything completely catastrophic first.

19 comments:

erp said...

Some genius will figure out a way to tell fact from fiction. I am a great believer in American ingenuity.🤓

Clovis said...

I am no genius, nor American, but I see the solution as almost trivial.

Digital cameras shall run 'stamp protocols', where each frame will carry an encrypted correlation between its pixels (and also cross-frame correlations in videos), plus a system to give public validation of encryption keys.

Hence, every time someone captures the images and changes their contents by anything more than rescaling, it will leave clear fingerprints. After a while, the public will give only satirical value to non-validated images/videos.

Like so many things in the tech business, the first ones to turn the above ideas into workable protocols will set the standard for a decade.
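[Editor's note: Clovis's "stamp protocol" could be sketched roughly as below. This is a hypothetical illustration, not an existing standard; the frame data, hash chain, and per-device key are all assumptions, and a real deployment would use public-key signatures so anyone could verify the stamp without holding the secret.]

```python
import hashlib
import hmac

# Hypothetical per-device signing key. A real scheme would use an
# asymmetric key pair with the public half published for validation.
DEVICE_KEY = b"camera-serial-0001-secret"

def stamp_frames(frames):
    """Chain each frame's hash to the previous digest (the cross-frame
    correlation Clovis describes), then sign the final digest so any
    later edit to any frame breaks verification."""
    digest = b"\x00" * 32
    for frame in frames:
        digest = hashlib.sha256(digest + frame).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

frames = [b"frame-1-pixels", b"frame-2-pixels", b"frame-3-pixels"]
stamp = stamp_frames(frames)

# Verification recomputes the chain: the untouched video validates...
assert hmac.compare_digest(stamp, stamp_frames(frames))

# ...while doctoring a single frame invalidates the stamp.
doctored = [b"frame-1-pixels", b"frame-2-FAKED!", b"frame-3-pixels"]
assert not hmac.compare_digest(stamp, stamp_frames(doctored))
```

The hash chain is what makes this "kind of what block-chaining is all about," as Skipper notes below: each frame commits to everything before it.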

erp said...

Clovis, you may or may not be a genius (I am in no position to judge that), but you are indeed an American -- a South American, as we are North Americans. All in the family.

Hey Skipper said...

[OP:] Lawmakers and experts are sounding the alarm about "deepfakes ...

Oh, you mean like that Covington kerfuffle?


Clovis:

Isn't that kind of what block-chaining is all about?

Clovis said...

Skipper,

Kind of a similar idea, though its application to images and videos at a large, daily scale would call for some novelties.

Bret said...

Clovis wrote: "Digital cameras shall run..."

Shall run? You mean a worldwide law that all manufacturers (including hobbyists) of digital cameras (including video cameras), digital camera chips, and all related circuitry and software shall have embedded unbreakable electronics and software to support ... something? Furthermore, no photoshopping or editing of any sort allowed, not even to fix pimples on models' faces? Even further, no analog cameras allowed (because one could, of course, write a fake video onto analog media)?

Even if I didn't think what you're proposing is technically impossible, I'd say, "Good luck with that!"

Bret said...

Hey Skipper wrote: "Oh, you mean like that Covington kerfuffle?"

That's why I wrote that the main problem is that we are "too damn gullible." It took a pretty high degree of gullibility to fall for that one.

erp said...


The answer is to get the government out of our faces. Make them stick to what the Constitution says are their duties and responsibilities.

Another thing. I'm pretty sure that none of you have ever been fooled into believing that those old movies were real, i.e., that Frankenstein, movies showing famous cities and landmarks, etc. were filmed on location, or that war movies and westerns were real, BUT I can tell you that people at the time were so gullible, they were "fooled."

My father took me to see The Wolf Man when I was seven and I was scared to death. I remember the feeling very clearly.

Now, they're campy. Same thing will happen in the future.

Clovis said...

Bret,

Apparently, the internet, as well as all the digital cameras of every manufacturer, works using many common protocols without any 'worldwide law' ever being necessary, with the exception of the law of supply and demand.

As soon as the need for safety against image manipulation becomes too great a demand, suppliers will come.

Clovis said...

"Furthermore, no photoshopping or editing of any sort allowed, not even to fix pimples on models' faces?"

I sure didn't imply so. The point is, the author of the picture will be able to prove whether it was altered. Most of the time, he won't bother, unless the picture/video appears in a doctored stunt and the need to invalidate such manipulation is called for.

Hey Skipper said...

[Bret:] That's why I wrote that the main problem is that we are "too damn gullible." It took a pretty high degree of gullibility to fall for that one.

Gullibility driven by a hate-fueled determination to be deceived.

Artificial deception is completely wasted on the Vanguard of the Intellectual Elite™.

Bret said...

Hey Skipper wrote: "Gullibility driven by a hate-fueled determination to be deceived."

Perhaps.

I almost did a post about the Covington Kid incident, but it seemed pretty much everyone else wrote about it over and over, and I had nothing to add.

But I think that it's not gullibility in this case. To a substantial number of people, the mere existence of white male Catholics (especially ones wearing MAGA hats) is intensely evil. That's simply a subjective opinion. That there was no wrongdoing by this boy at that time is immaterial. As far as they're concerned, he was evil before the incident and evil after the incident, and the incident itself didn't matter.

I'm personally especially saddened that such hate can be directed towards a 16-year-old boy who, in my opinion, is really mostly still just a child. But it is what it is. Makes me glad I'm old and don't have all that many decades to watch the world become more and more hateful and violent.

Peter said...

I agree with Bret. I don't think this is about gullibility and I don't think it's one-sided. My belief is that social media is converting what heretofore were largely private, personal, low-intensity prejudices and distastes into collective high-intensity hate-fests. Once again we can thank Messrs. Zuckerberg et al. for connecting us all and bringing us together.

One of the interesting things about the Covington story is how the Black Hebrews, who seem to be pretty unpleasant characters and who instigated the incident, are rarely mentioned. Some may say that's because progressives can't bring themselves to criticize any black activist group no matter how wacky, but I think an equally convincing explanation is simply that they weren't in the video and therefore aren't part of the virtual world this was played out in.

I'm not sure exactly what Clovis means when he holds out the prospect of safety measures, but I'm skeptical, mainly because hardly anyone thinks he or she is part of the problem--it's the other side that keeps getting taken in. I'm still waiting for just one person to emerge who will say he or she changed their vote in 2016 because of some Russian site.

erp said...

Good point! 😉

Clovis said...

Peter,

---
I'm not sure exactly what Clovis means when he holds out the prospect of safety measures, but I'm skeptical, mainly because hardly anyone thinks he or she is part of the problem
---

Indeed, I expressed myself naively by claiming people won't care much for fake videos once safety measures are in place to identify them.

People already ignore solid evidence and favor the fakes - or crazily biased ones - when it conforms to their worldview, so new methods to identify AI-generated videos won't do much to change the tone of our culture wars.

Yet, as a lawyer, you must appreciate the legal value of being able to show that a fake video out there is indeed fake.

Peter said...

People already ignore solid evidence and favor the fakes - or crazily biased ones - when it conforms to their worldview

Is that it? Or is it more that people discount fake news/evidence in support of their worldview as irrelevant to the basic truth of that view, but are convinced that contrary fake news persuades and blinds the dangerous dummies on the other side? Fake news on your side is like when a member of your soccer team commits a foul the referee misses. It's just part of the game and in no way diminishes the truth that your team is the superior side. But if the other side does it, it determined the outcome and they stole the game.

What a Piece of Work is Man!

Bret said...

Clovis wrote: "Yet, as a lawyer, you must appreciate the legal value of being able to show that a fake video out there is indeed fake."

I'm still at a loss as to how one would prove fakeness.

Let's start by assuming that technology was created and widely adopted such that all processes, from gathering photons on the image sensor, to sampling the analog charges created by those photons, to creating the digital stream from those samples, to storing that digital stream, were all perfectly secure and perfectly "watermarked" with an unbreakable, camera-specific, cryptography-based signature. I don't think all of that is anywhere near possible, but OK, let's go with it for the moment. There are still some pretty big holes.

For example, I can create a fake video, project it onto a screen, and then use one of the above camera systems to create a "real" video from it. Smudge a little grease on the lens and you probably couldn't prove that it was a video of a projection either.

For example, I've got my real (or fake) video and I change formats, say to compress it. All that watermarking goes poof. I can't then prove it's "real" but you can't prove it's fake. You say show me the original. I say oops, the original was too big to store, that's why I compressed it in the first place.
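[Editor's note: the re-encoding objection can be shown with a toy example. This is an illustration under assumed inputs; `zlib` merely stands in for a video transcoder, since any re-encoding changes the byte stream a signature was computed over.]

```python
import hashlib
import zlib

# Hypothetical "watermarked" original, and the same content re-encoded.
original = b"raw video bytes " * 100
recompressed = zlib.compress(original)  # stands in for format conversion

# A signature or watermark computed over the original byte stream...
original_digest = hashlib.sha256(original).hexdigest()

# ...no longer matches the re-encoded stream, even though the frames it
# decodes to may be visually identical. The chain of custody "goes poof."
assert hashlib.sha256(recompressed).hexdigest() != original_digest
```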

For example, a great number of videos are "photoshopped" to remove blemishes from models, make the contrast and brightness better, etc. In some sense, these are "fake" as well (certainly manipulated/edited), but not in the sense we're talking about. I'll bet a huge percentage of the videos we see on television have some significant effects editing.

For example, ...

So I'm skeptical...

Clovis said...

Bret,

Sure, all your fake video examples are possible - heck, I am taking for granted from the start that future AI will create perfect videos, with no signs of forgery detectable by analyzing the quality of the image per se.

What you are missing is the change of protocols: people will no longer officially accept a video without a publicly available validation.

For example, suppose I can copy a 100-dollar note with perfection. I make copies of as many thousands of them as you wish - again, with perfection - and offer them all in exchange for your robot companies. Do we have a deal? If not, tell me why.

erp said...

Cash is outlawed.