Facebook Begins to Censor Controversial Health Cures and Misinformation

Facebook announced Tuesday that it has changed its algorithm to curb the circulation of “misleading health content,” following reports that the platform is riddled with phony cancer cures.

In a blog post, Facebook product manager Travis Yeh said: “In order to help people get accurate health information and the support they need, it’s imperative that we minimize health content that is sensational or misleading.”

Facebook’s update does not explicitly mention what it is doing about groups dedicated to promoting “exaggerated or sensational health claims.” Most of the announcement focuses on down-ranking posts in the News Feed and notes that some pages will be affected.

The announcement comes after the social media company was singled out for failing to stop the spread of damaging health messages, such as the anti-vaxx conspiracy theory that urges parents not to vaccinate their children.

“For the first update, we consider if a post about health exaggerates or misleads — for example, making a sensational claim about a miracle cure,” Yeh wrote.

“For the second update, we consider if a post promotes a product or service based on a health-related claim — for example, promoting a medication or pill claiming to help you lose weight.”
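The two updates Yeh describes amount to a down-ranking filter: posts matching either criterion keep circulating but receive a reduced ranking score. Facebook has not published its classifiers, so the keyword lists and penalty factor below are made-up placeholders; this is only a minimal sketch of the idea, not the actual implementation:

```python
from dataclasses import dataclass

# Hypothetical trigger phrases standing in for Facebook's unpublished classifiers.
SENSATIONAL_TERMS = {"miracle cure", "doctors hate", "cures cancer"}
COMMERCIAL_HEALTH_TERMS = {"lose weight fast", "weight loss pill"}

@dataclass
class Post:
    text: str
    score: float  # baseline News Feed ranking score

def down_rank(post: Post, penalty: float = 0.1) -> float:
    """Apply the two updates: demote sensational health claims (update 1)
    and posts selling a product via a health claim (update 2).
    Returns the adjusted ranking score; non-matching posts are unchanged."""
    text = post.text.lower()
    if any(term in text for term in SENSATIONAL_TERMS):
        post.score *= penalty
    if any(term in text for term in COMMERCIAL_HEALTH_TERMS):
        post.score *= penalty
    return post.score

# A flagged post sinks in the feed; an ordinary post keeps its score.
feed = [Post("This miracle cure doctors hate!", 1.0),
        Post("Park cleanup this Saturday", 1.0)]
feed.sort(key=down_rank, reverse=True)
```

Note that this models demotion, not removal — which matches the announcement's focus on reducing distribution rather than deleting content.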

Facebook’s Libra Cryptocurrency Poses a Double Threat

Facebook’s new cryptocurrency platform could provide the embattled social media giant with a new revenue stream of historic proportions as it contends with a possible federal antitrust probe and continued scrutiny over its data privacy practices.

Regulators will be watching closely when Facebook Inc. unveils its cryptocurrency project this week. Their vigilance is warranted.

Weekend media leaks suggest that Facebook’s “Libra” project will be a continuation of its past efforts to expand its payments business and keep customers within the walled garden of its social media apps by creating its very own money.

Facebook’s cryptocurrency could thrive in emerging markets, providing a more stable alternative for transferring money in areas with volatile currencies and unstable governments, according to RBC Capital Markets. The firm expects “Libra” to facilitate person-to-person payments, traditional e-commerce and spending on apps or gaming services on Facebook-owned properties.

“We believe this may prove to be one of the most important initiatives in the history of the company to unlock new engagement and revenue streams,” RBC Capital Markets analysts said in a note to investors.

It’s crucial that Libra doesn’t become a protective glue that binds Zuckerberg’s social networks even more closely together at a time when many regulators want to break them up. Libra will be presented as an open-source partnership whose benefits are available to all, but to what extent will it really be held at arm’s length from the Zuckerberg empire? Indeed, if the financial and business benefits of using Libra accrue mainly to Facebook, it will merely enshrine its market dominance.

Facebook has already partnered with more than a dozen companies, including Visa and PayPal, that have invested in the cryptocurrency and will help to oversee its use, The Wall Street Journal reported. The cryptocurrency will reportedly be tied to several traditional fiat currencies in a bid to protect “Libra” from price volatility that has hurt other leading digital currencies such as bitcoin.
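A peg to “several traditional fiat currencies” works like a currency basket: each token is backed by fixed amounts of each currency, so its reference value is the sum of those amounts at current exchange rates, and a swing in any one currency is damped in proportion to that currency's share. The basket composition and exchange rates below are invented for illustration only — Libra's actual reserve makeup was not public at the time of the report:

```python
# Hypothetical basket: fixed units of each currency backing one token.
BASKET_UNITS = {"USD": 0.50, "EUR": 0.27, "JPY": 22.0}
# Illustrative exchange rates, in USD per one unit of each currency.
USD_RATES = {"USD": 1.00, "EUR": 1.12, "JPY": 0.0092}

def token_value_usd(units, rates):
    """Reference value of one token in USD: sum over currencies of
    (units held per token) * (USD exchange rate)."""
    return sum(amount * rates[cur] for cur, amount in units.items())

base = token_value_usd(BASKET_UNITS, USD_RATES)

# If the euro alone falls 10%, the token falls far less, because
# only the EUR slice of the basket is affected.
shocked_rates = dict(USD_RATES, EUR=USD_RATES["EUR"] * 0.90)
shocked = token_value_usd(BASKET_UNITS, shocked_rates)
drop_pct = 100 * (base - shocked) / base  # roughly a 3% drop, not 10%
```

This diversification is the mechanism the article alludes to when it says the peg is meant to shield “Libra” from the price volatility that has hurt currencies like bitcoin, whose value floats freely.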

With scrutiny of Facebook’s practices at an all-time high, the cryptocurrency could reshape the company’s business. Barclays analyst Ross Sandler predicted in March that the digital currency could produce as much as $19 billion in new revenue by 2021.

While no one wants to choke innovation unnecessarily, Facebook hasn’t exactly done much to earn everybody’s trust in recent years. Any chance to put the necessary controls in at the beginning, rather than firefighting down the road, should be grabbed by the regulators.

Mark Zuckerberg is the Latest Victim of Deep Fake Videos

Two U.K.-based artists created a deepfake of Facebook CEO Mark Zuckerberg to show just how dangerous AI-generated videos can be. Facebook is leaving the video up, sticking to a controversial stance it took when a doctored video of House Speaker Nancy Pelosi (D-California) went viral.

Deepfakes are fake videos that show a person saying or doing something they did not. The technique uses a mixture of real footage and artificial intelligence to falsify someone’s actions or speech.

As the technology gets better, many are worried that such videos will be used to spread misinformation and propaganda online.

A deepfake video of Mark Zuckerberg presents a new challenge

The video, posted to Facebook-owned Instagram over the weekend, falsely portrays Zuckerberg as saying,

“Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

An Instagram spokesperson told CNN Business on Tuesday that the site will treat the video “the same way we treat all misinformation on Instagram.” If it’s marked as false by third-party fact checkers, the spokesperson said, the site’s algorithms won’t recommend people view it.

The Zuckerberg video, which was first reported by Vice, comes as the US Congress prepares to hold its first hearing on the potential threats posed by deepfake videos. Earlier this year, the US Director of National Intelligence warned that America’s adversaries may use deepfake technology in future disinformation campaigns targeting the country. The video had fewer than 5,000 views before first being reported by news media, but how Facebook treats it could set a precedent for its handling of future deepfake videos.

AI Fakes Bill Gates’ Voice and Speech Patterns

Engineers at Facebook’s AI research lab created a machine learning system that can not only clone a person’s voice, but also their cadence — an uncanny ability they showed off by duplicating the voices of Bill Gates and other notable figures.

This system, dubbed MelNet, could lead to more realistic-sounding AI voice assistants or voice models, the kind used by people with speech impairments — but it could also make it even more difficult to discern between actual speech and audio deepfakes.

The speech is still somewhat robotic, but the voices are recognizable — and if researchers can smooth out the system even slightly, it’s conceivable that MelNet could fool the casual listener into thinking they’re hearing a public figure say something they never actually uttered.