Mark Zuckerberg is the Latest Victim of Deep Fake Videos


Two U.K.-based artists created a deepfake of Facebook CEO Mark Zuckerberg to show just how dangerous AI-generated videos can be. Facebook is leaving the video up, sticking to a controversial stance it took when a doctored video of House Speaker Nancy Pelosi (D-California) went viral.

Deepfakes are fake videos that show a person saying or doing something they did not. The technique uses a mixture of real footage and artificial intelligence to falsify someone’s actions or speech.

As the technology gets better, many are worried that such videos will be used to spread misinformation and propaganda online.

A deepfake video of Mark Zuckerberg presents a new challenge

The video, posted to Facebook-owned Instagram over the weekend, falsely portrays Zuckerberg as saying,

“Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

An Instagram spokesperson told CNN Business on Tuesday that the site will treat the video “the same way we treat all misinformation on Instagram.” If it’s marked as false by third-party fact checkers, the spokesperson said, the site’s algorithms won’t recommend people view it.

The Zuckerberg video, which was first reported by Vice, comes as the US Congress prepares to hold its first hearing on the potential threats posed by deepfake videos. Earlier this year, the US Director of National Intelligence warned that America’s adversaries may use deepfake technology in future disinformation campaigns targeting the country. The video had fewer than 5,000 views before it was first reported by news media, but how Facebook treats it could set a precedent for its handling of future deepfake videos.


Engineers at Facebook’s AI research lab created a machine learning system that can not only clone a person’s voice, but also their cadence — an uncanny ability they showed off by duplicating the voices of Bill Gates and other notable figures.

This system, dubbed MelNet, could lead to more realistic-sounding AI voice assistants or voice models, such as those used by people with speech impairments — but it could also make it even more difficult to distinguish between actual speech and audio deepfakes.

The speech is still somewhat robotic, but the voices are recognizable — and if researchers can smooth out the system even slightly, it’s conceivable that MelNet could fool a casual listener into thinking they’re hearing a public figure say something they never actually uttered.
