Deepfakes: Tools & Rules to Save the Open Internet: What? How? Why?
Tuesday 2J
Convener: Kathryn Harrison, founder, DeepTrust Alliance
Notes-taker(s): Scott Mace
Tags for the session - technology discussed/ideas considered:
Deepfakes
Discussion notes, key understandings, outstanding questions, observations, and, if appropriate to this discussion: action items, next steps:
Kathryn: I was part of the IBM blockchain team. This is my first IIW; diving right in.
https://www.DeepTrustAlliance.com
Definition of deepfake: a technique for human image synthesis based on AI, created through the combination and superimposition of existing images and video using a machine learning (ML) technique called a generative adversarial network (GAN).
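For readers unfamiliar with GANs, a minimal toy sketch of the idea in Python/PyTorch (the network sizes and random stand-in data are purely illustrative, not any real deepfake pipeline): a generator learns to produce samples that a discriminator cannot distinguish from real data.

# Toy GAN sketch: generator vs. discriminator (illustrative only).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)        # stand-in for real images
    fake = G(torch.randn(32, latent_dim))   # the generator's forgeries

    # Discriminator: learn to label real as 1, fake as 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to fool the discriminator into labeling fakes as real.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The adversarial loop is why fakes keep improving: every weakness the detector finds becomes training signal for the forger.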
This summer there was the "cheapfake" video of Nancy Pelosi. Not a new problem: fraudulent content has existed since before the printing press.
The tools are rapidly accelerating; it is getting cheaper and faster to build this.
Motivated enemies.
In this country it is hard to have an honest discussion about it.
Will impact every industry that does business on the internet. How do I know if this is real?
Today you have a few options. Look at the web site. Look for caption or source. Or use your eyes.
Social engineering
Market manipulation (cf. the Casino Royale movie). Someone could publish a fake video of a CEO saying whatever; the market might correct itself quickly, but you could make a ton of money in the short term.
Extortion and harassment. Rana Ayyub, an Indian journalist, was targeted with fake porn created in India.
Banking and financial services. Already seeing fake-audio theft of hundreds of thousands of dollars, with CEOs' voices impersonated.
Social media fakes
Democracy
Just the tip of the fakes iceberg.
2.1 billion fake FB accounts disabled in Q1 2019.
Fake followers, fake views.
Why now?
Will take tech and human solutions to solve it.
This market is incredibly fractured: FB solves for FB, Google for Google, the NYTimes for the NYTimes.
But content ricochets across the net like a pinball.
Need persistent verification of content via an open standard.
If anything is going to get to scale, it has to be done in an open way.
We are dealing with potentially nefarious actors, and this is an arms race.
We need to pull together all the different parties impacted by this and get them thinking about how it will affect their business. There is a lot of fear, but in executives' plans we haven't seen anything major yet. That has to change; so much work needs to happen.
This is what I am driving toward: there needs to be a way to go from the digital edge, the real world, into the content, so you can know its source. It should plug into existing identity solutions. There are already lots of metadata standards, but all that metadata gets stripped out by FB, etc.
A checkmark could say we know the source of this content. A standard.
An opportunity to go from the device: from the Apple iPhone XS onward, why couldn't the phone generate a key where the photo was taken?
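A rough sketch of what that device-key idea could look like, assuming an Ed25519 keypair generated on the device (the Python cryptography package stands in here for the phone's secure hardware; a real implementation would keep the private key in the secure enclave):

# Sketch: a capture device signs the photo's bytes at creation time,
# so anyone downstream can verify the content came from that device.
# Assumes the 'cryptography' package; key names are hypothetical.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # created once, on-device
public_key = device_key.public_key()        # published / registered

photo_bytes = b"...raw image bytes..."      # placeholder content
signature = device_key.sign(photo_bytes)    # attached as metadata

# Verification by anyone who trusts the registered public key;
# raises InvalidSignature if the content was tampered with.
public_key.verify(signature, photo_bytes)

The signature survives only if platforms stop stripping metadata, which is why an open standard matters more than the cryptography itself.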
Q: Similar to the email spam problem. How can we trust who signed the message?
Scott Mace: Content that’s guaranteed anonymous?
Q: The goal of maskirovka is to change your view of reality to what your adversary wants you to believe is real, in order to change your decision process. Our brains are attack surfaces for changing our view of reality. People trust crackpots.
Q: We don’t have a society any longer with sources that people trust. Trust is a way bigger problem than the technology.
Has to be both technical and societal. We don't have a clear lexicon for defining these things, so every company decides for itself what is real or not: Reddit pulled all the deepfake channels; FB keeps deepfakes, doesn't flag them, just doesn't surface them in the newsfeed.
There's a whole spectrum of misinformation, including info that is wrong but that you don't mean to be used in the wrong way.
There needs to be a set of rules and best practices so companies aren't trying to make these decisions on their own. In August, journalists put Cloudflare in the spotlight for hosting 8chan and its manifestos. There was no line in the sand, no statement of how we as an industry think about these things. Given it's a network problem, with dozens of state actors and others navigating these policies, we need to create best practices that draw that line.
It's also a question of how far back you go to answer "what is real?" Measuring intent is extremely difficult. One approach is to warn people that they may be looking at false information, via a checkmark or a confidence score.
Q: Like the warning on a pack of cigarettes. Let's say we are successful in identifying deepfakes. What's to keep someone from declaring "here is my official version"?
Scott: Right to be forgotten abuse.
Trying to run forensics with traditional statistical analysis is getting difficult.
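For context, one traditional statistical technique is error level analysis: recompress a JPEG at a known quality and look at where the recompression error differs, since edited regions often recompress differently. A rough sketch using Pillow (thresholds and usage are illustrative only; modern GAN output often leaves no such residue, which is why these methods are losing ground):

# Error level analysis (ELA) sketch: diff an image against a
# fixed-quality recompression of itself; unusually bright regions
# can hint at edits. Illustrative, not a reliable deepfake detector.
import io
from PIL import Image, ImageChops

def error_level(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)  # recompress once
    buf.seek(0)
    resaved = Image.open(buf)
    return ImageChops.difference(original, resaved)  # per-pixel error

# Usage (illustrative): error_level("photo.jpg").show()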
Q: Regarding warnings falling on deaf ears: the cigarette risk is a static variable, but if you say "in this community there is a likely possibility you will be pickpocketed today," it registers differently on the human psyche.
What are the seat belts or air bags as you get into dangerous situations? Automated safety features.
Q: It’s a spiritual problem.
Scott: How will this intersect with, or conflict with, publishers' push to get more compensation for content they produce today but that is ultimately monetized by others?
The first product we’re working on is a digital watermark for images.
A few standards could give you a ledger that establishes the provenance of content.
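The session didn't specify the ledger's design; as a generic illustration only (not the DeepTrust Alliance's actual product), here is a hash-chained provenance log in Python, where each entry commits to the content hash and to the previous entry, so history can't be silently rewritten:

# Hash-chained provenance ledger sketch (generic illustration).
import hashlib, json, time

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_entry(ledger: list, content: bytes, source: str) -> dict:
    entry = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "source": source,              # e.g., signing device or publisher
        "timestamp": time.time(),
        "prev": entry_hash(ledger[-1]) if ledger else None,
    }
    ledger.append(entry)
    return entry

ledger = []
append_entry(ledger, b"...image bytes...", "camera-key-123")  # hypothetical ids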
Scott: Recent selfies people have taken have ended up being used in a Trump ad. Need technology to detect that, without the whole world having to detect it.
Does consent become part of this?
Virginia cracking down on deepfake porn and lack of consent.
Scott: How do we protect Edward Snowden's physical address from being disclosed?
And yet verify it’s Snowden.
Deepfakes can be hilarious. But at least you'd know where they came from.
Q: I believe people are starving for trust. Teaching media literacy in school.
Q: Are our cars safe?
Scott: Pedestrian injuries & deaths are way up.
We will launch officially in November. I have a mailing list.