
Facebook Begins To Censor Controversial Health Cures and Misinformation

Facebook announced Tuesday that it has changed its algorithm to curb the circulation of “misleading health content,” following reports that the platform is riddled with phony cancer cures.

In a blog post, Facebook product manager Travis Yeh said: “In order to help people get accurate health information and the support they need, it’s imperative that we minimize health content that is sensational or misleading.”

Facebook’s update does not explicitly mention what it is doing about groups dedicated to promoting “exaggerated or sensational health claims.” Most of the announcement focuses on down-ranking posts in the News Feed and predicts some pages will be affected.

The announcement comes after the social media company has been singled out for failing to stop the spread of damaging health messages, such as the anti-vaxx conspiracy theory that urges parents not to vaccinate their children.

“For the first update, we consider if a post about health exaggerates or misleads — for example, making a sensational claim about a miracle cure,” wrote Facebook product manager Travis Yeh.

“For the second update, we consider if a post promotes a product or service based on a health-related claim — for example, promoting a medication or pill claiming to help you lose weight.”


Facebook’s Libra Will Introduce A Global Digital ID

Buried in Facebook’s Libra white paper are two short sentences hinting that the project’s ambitions go even further than bringing billions of people into the global financial system.

More than launching a price-stable cryptocurrency for the masses, Libra could be aiming to change the way people trust each other on the internet.

At the top of page nine, in a section describing the consortium that will govern the Libra coin, the white paper states:

“An additional goal of the association is to develop and promote an open identity standard. We believe that decentralized and portable digital identity is a prerequisite to financial inclusion and competition.”

It’s a problem almost as old as the internet itself. As the classic “New Yorker” cartoon put it, “on the internet, nobody knows you’re a dog.”

In such an environment, businesses need to guard against fraud, but the copious amounts of personal data consumers must share to prove they are who they say they are leaves them vulnerable to identity theft and spying.

Libra, when launched, will be just one of the over 1,600 cryptocurrencies globally (with new ones emerging every week)—the largest being Bitcoin, followed by Ripple, Ethereum and Tether that cumulatively account for about 90% of the market capitalization. In India, too, Paytm and PhonePe cumulatively account for a little over 80% share of all digital wallet accounts in the country. Libra, thus, will not have a first mover’s advantage.


Facebook’s Libra Will Become A Blockchain Lottery For The Rich

To understand how early investors in Facebook’s new Libra blockchain will make money over time, it helps to look at a lossless lottery called PoolTogether. The comparison to Libra is not one-to-one, but the key insight behind both is the same: earning interest on your own money is good, but it’s better to also earn interest on other people’s money.

PoolTogether sells digital lottery tickets for 20 DAI (the stablecoin generated by the MakerDAO protocol). It sells as many tickets as it can, and all the DAI gets put into the Ethereum-based money market protocol called Compound. There, the ticket money collects interest over the life of the pool, and at the end one ticket earns all the interest off everyone’s ticket price.

But everyone else gets the money they paid for their tickets back, too – ergo, no losers. Except the ones paying the interest.
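
To make the mechanic concrete, here is a minimal simulation of a lossless lottery of the kind described above. The ticket counts, function names and 2% interest figure are purely illustrative; this is not PoolTogether’s actual smart-contract code, which runs on Ethereum and draws its yield from Compound.

```python
import random

TICKET_PRICE_DAI = 20  # each ticket costs 20 DAI, per the description above

def run_pool(tickets_per_player, interest_rate):
    """Simulate one round of a lossless lottery.

    tickets_per_player: dict mapping player name -> number of tickets bought
    interest_rate: yield earned on the pooled deposits over the round
                   (illustrative; in PoolTogether this comes from Compound)
    """
    pool = sum(n * TICKET_PRICE_DAI for n in tickets_per_player.values())
    interest = pool * interest_rate

    # Every ticket has an equal chance, so whales with more tickets win more often.
    entries = [name for name, n in tickets_per_player.items() for _ in range(n)]
    winner = random.choice(entries)

    # Everyone gets their principal back; only the winner also gets the interest.
    payouts = {name: n * TICKET_PRICE_DAI for name, n in tickets_per_player.items()}
    payouts[winner] += interest
    return winner, payouts

winner, payouts = run_pool({"newbie": 1, "whale": 1000}, interest_rate=0.02)
print(winner, payouts)
```

With 1,000 tickets against one, the whale wins its own interest back almost every round, which is exactly the dynamic the article goes on to describe for Libra.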

“What excites me is that I think it can actually move the needle on economic health for a lot of people,” PoolTogether’s creator, Leighton Cusack, told CoinDesk. “People get excited about lotteries. They don’t get excited about savings accounts. This is a way of nudging them in the right direction.”

Libra is also designed so that a select few capture the interest earned on money tucked away by the vast many.

As CoinDesk previously reported, there are two tokens that make Libra work. Most of the attention has been on the Libra coin, the stablecoin backed by some as-yet-unnamed basket of bonds and currencies. To get that basket started, though, Facebook came up with the idea for the “Libra investment token” (LIT).

Like PoolTogether, the whole point of LIT is to earn interest off other people’s deposits.

On PoolTogether, everybody is betting that they can win the interest off of everyone else’s tickets. A crypto newbie could buy one ticket for 20 DAI and get all the interest earned off a whale who bought 1,000 tickets. On the Libra protocol, it works the same way, except the same whales always win.
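
A toy model makes the contrast explicit. The function below assumes the broad structure reported so far: users who buy Libra coins fund a reserve, and interest on that reserve flows only to holders of the investment token. The names, rates and amounts are invented for illustration, not taken from Libra’s documentation.

```python
def distribute_reserve_interest(coin_deposits, lit_holdings, annual_rate):
    """Toy model: interest on the pooled reserve goes to LIT holders pro rata,
    while the people who deposited money for Libra coins earn nothing.

    coin_deposits: dict user -> fiat deposited into the reserve
    lit_holdings:  dict investor -> Libra investment tokens held
    annual_rate:   assumed yield on the reserve's bonds and bank deposits
    """
    reserve = sum(coin_deposits.values())
    interest = reserve * annual_rate
    total_lit = sum(lit_holdings.values())
    return {investor: interest * share / total_lit
            for investor, share in lit_holdings.items()}

# A million small users fund the reserve; a handful of LIT holders split the yield.
print(distribute_reserve_interest(
    coin_deposits={f"user{i}": 100 for i in range(1_000_000)},
    lit_holdings={"founding_member_A": 10, "founding_member_B": 30},
    annual_rate=0.02,
))
```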

Far from decentralizing the financial system, Libra will further centralize our data. Calibra, the company that will develop products and services based around Libra, says it will “use customer data to conduct research projects related to financial inclusion and economic opportunity”. In other words, as Privacy International observes, “even more intimate profiling of individuals allowing organisations to offer products and services with discriminatory pricing based on a large dossier of data”.


Facebook’s Cryptocurrency Is A License To Print Money

Buy Facebook’s Libra and you are literally giving the social media giant a license to print money, in the form of its new cryptocurrency.

Facebook’s plan to operate its own digital currency poses risks to the international banking system that should trigger a speedy response from global policymakers, according to the organisation that represents the world’s central banks.

Although the move of big tech firms such as Facebook, Amazon and Alibaba into financial services could speed up transactions and cut costs, especially in developing world countries, it could also undermine the stability of a banking system that has only just recovered from the crash of 2008.

“Libra” will be pegged to a basket of mainstream currencies at a value of about a dollar, and rooted in the model of secure, immutable online transactions we know as blockchain. Its operations will be overseen by a new organisation based in Geneva, open to any company or corporation that has a value of at least $1bn (£790m) and will invest a minimum of $10m. There are currently 28 such participants, ranging from Uber and Spotify to Mastercard.
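
The final composition of that basket was never published, so the weights and exchange rates in the sketch below are made up; it only illustrates how a basket peg keeps a coin near one dollar without tracking any single currency.

```python
def basket_units(target_usd, weights, rates):
    """Fix the amount of each currency one coin represents so that, at today's
    exchange rates, the coin is worth target_usd."""
    return {ccy: target_usd * w / rates[ccy] for ccy, w in weights.items()}

def coin_value_usd(units, rates):
    """Mark the coin to market as exchange rates move."""
    return sum(amount * rates[ccy] for ccy, amount in units.items())

# Hypothetical basket weights and USD exchange rates, for illustration only.
weights = {"USD": 0.50, "EUR": 0.30, "GBP": 0.10, "JPY": 0.10}
rates_today = {"USD": 1.00, "EUR": 1.12, "GBP": 1.27, "JPY": 0.0093}
units = basket_units(1.00, weights, rates_today)

# If the euro weakens, the coin drifts slightly below a dollar rather than
# swinging with any one national currency.
rates_later = dict(rates_today, EUR=1.05)
print(round(coin_value_usd(units, rates_later), 4))
```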

If you’re one of the site’s 2.6 billion users, Facebook’s operators know where you are all the time, whether you’re logged on or not. They know what you’re buying, even if you’re in a brick-and-mortar shop. They scan photos you upload for biometrics. They mine your data and sell it to advertisers, but they won’t say how much of it, only that it’s a small amount, promise.

Facebook’s not the product. We are.

There are vast markets in the developing world where Facebook and its subsidiaries have millions of users but are held back from creaming off advertising revenue by the simple matter of poverty. Libra offers new revenue opportunities, partly by inviting people to exchange national currencies for the new medium, thereby gifting Facebook and its partners a vast pool of funds. In these territories, and more affluent places, the new currency also offers Facebook the chance to accelerate what sits at the heart of everything it does: the harvesting of endless data, which can then be monetized.

Chris Hughes, a co-founder of Facebook, last week added his voice to concerns being expressed over big tech’s move into finance, warning that Libra could shift power into the wrong hands.

Hughes, who is co-chair of the Economic Security Project, an anti-poverty campaign group, said: “If even modestly successful, Libra would hand over much of the control of monetary policy from central banks to these private companies. If global regulators don’t act now, it could very soon be too late.”

At that point, surveillance capitalism would colonize the shrinking parts of our lives it has so far left relatively untouched. It would amass mountains of lucrative information about everything from our friends to when we last paid a speeding fine – and in the process, bring unlimited corporate power into areas we still consider subject to the checks and balances of democracy. The state could also have a field day: can you imagine the glee at the Department for Work and Pensions if it was able to monitor not just people’s universal credit payments but how they spent them?

But the biggest question of all is screamingly obvious, and worth asking for the thousandth time: how, in any meaningful way, can we hold Facebook – and Google – to account, and drastically limit their power?


Your Status Updates Could Predict A Whole Range Of Health Conditions

Language in Facebook posts may be able to predict whether someone will develop diabetes and other conditions including depression, anxiety, alcohol abuse, sexually-transmitted diseases, and drug abuse better than demographic information like age, sex, and race.

Using an automated data collection technique, the researchers from the University of Pennsylvania and Stony Brook University in the US analysed the entire Facebook post history of nearly 1,000 patients who agreed to have their electronic medical record data linked to their profiles.

People who often use the words “God” and “pray” in their Facebook posts are 15 times more likely to develop Type 2 diabetes than people who rarely use those terms on the platform, the new study from the University of Pennsylvania School of Medicine finds.

Looking into 21 different conditions, researchers found that all 21 were predictable from Facebook alone. In fact, 10 of the conditions were better predicted using Facebook data than demographic information.
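
As a rough illustration of that comparison, the sketch below fits one classifier on word counts from posts and another on a demographic feature, using entirely fabricated data; the study’s actual models and linked medical records are of course far richer than this.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Fabricated example posts and labels, for illustration only.
posts = ["pray for my family tonight", "god bless this day",
         "beach trip with friends", "late night gaming session",
         "thank god and pray daily", "bought new running shoes"]
has_condition = [1, 1, 0, 0, 1, 0]                   # hypothetical diagnosis labels
demographics = [[54], [61], [29], [24], [58], [31]]  # age only, also fabricated

# Model 1: language features (word counts from posts).
vectorizer = CountVectorizer()
language_model = LogisticRegression().fit(vectorizer.fit_transform(posts), has_condition)

# Model 2: demographic features only.
demographic_model = LogisticRegression().fit(demographics, has_condition)

new_post = ["please pray for me"]
print(language_model.predict_proba(vectorizer.transform(new_post))[0][1])
print(demographic_model.predict_proba([[40]])[0][1])
```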

“This work is early, but our hope is that the insights gleaned from these posts could be used to better inform patients and providers about their health,” said Raina Merchant, an associate professor at University of Pennsylvania. 

The study doesn’t show exactly why “God” and “pray” were linked to diabetes.

However, a 2011 study from Northwestern University found that those who begin regularly attending religious services while young are more likely to become obese by the middle of their lives. Some of the Facebook data also showed that the words “drink” and “bottle” were more predictive of alcohol abuse.

Additionally, words expressing hostility — like “dumb” and some expletives — served as indicators of drug abuse and psychoses. 

“Our digital language captures powerful aspects of our lives that are likely quite different from what is captured through traditional medical data,” said Andrew Schwartz, an assistant professor at Stony Brook University. 

Merchant is hopeful that social-media posts could one day help doctors diagnose diseases like diabetes early or prevent them altogether, but there’s still more research to do before your doctor begins analyzing your status updates. Merchant plans to conduct a large study later this year that shares social-media information directly with health providers.

For those worried about privacy in the latest report, Merchant says it’s a top priority. “We made it very easy for patients to decide they no longer wanted to participate anymore, and we didn’t look at any data from their friends. This would be an opt-in process, and privacy needs to be part of the conversation,” she said.


Facebook’s Libra Cryptocurrency Poses a Double Threat

Facebook’s new cryptocurrency platform could provide the embattled social media giant with a new revenue stream of historic proportions as it contends with a possible federal antitrust probe and continued scrutiny over its data privacy practices.

Regulators will be watching closely when Facebook Inc. unveils its cryptocurrency project this week. Their vigilance is warranted.

Weekend media leaks suggest that Facebook’s “Libra” project will be a continuation of its past efforts to expand its payments business and keep customers within the walled garden of its social media apps by creating their very own money.

Facebook’s cryptocurrency could thrive in emerging markets, providing a more stable alternative for transferring money in areas with volatile currencies and unstable governments, according to RBC Capital Markets. The firm expects “Libra” to facilitate person-to-person payments, traditional e-commerce and spending on apps or gaming services on Facebook-owned properties.

“We believe this may prove to be one of the most important initiatives in the history of the company to unlock new engagement and revenue streams,” RBC Capital Markets analysts said in a note to investors.

It’s crucial that Libra doesn’t become a protective glue that binds Zuckerberg’s social networks even more closely together at a time when many regulators want to break them up. Libra will be presented as an open-source partnership whose benefits are available to all, but to what extent will it really be held at arm’s length from the Zuckerberg empire? Indeed, if the financial and business benefits of using Libra accrue mainly to Facebook, it will merely enshrine its market dominance.

Facebook has already partnered with more than a dozen companies, including Visa and PayPal, that have invested in the cryptocurrency and will help to oversee its use, The Wall Street Journal reported. The cryptocurrency will reportedly be tied to several traditional fiat currencies in a bid to protect “Libra” from price volatility that has hurt other leading digital currencies such as bitcoin.

With scrutiny of Facebook’s practices at an all-time high, the cryptocurrency could reshape the company’s business. Barclays analyst Ross Sandler predicted in March that the digital currency could produce as much as $19 billion in new revenue by 2021.

While no one wants to choke innovation unnecessarily, Facebook hasn’t exactly done much to earn everybody’s trust in recent years. Any chance to put the necessary controls in at the beginning, rather than firefighting down the road, should be grabbed by the regulators.


Invasion of AI: How Will This Technology Change the World? (Part 2)

In this article we will take a closer look at how AI is being used to detect Photoshopped images, and at the virtual worlds that might become the internet of the future. One of the crazier things AI might be used for is accident prevention and even predicting the future.

AI tool automatically spots Photoshopped faces

Photoshop has long been one of the primary sources of manipulated photos and imagery, so in an attempt to counter the fake news epidemic, Adobe has also started developing tools that can both detect when an image has been manipulated, and reverse the changes to reveal the original. Last year its engineers created an AI tool that detects edited media created by splicing, cloning, and removing objects.

“While we are proud of the impact that Photoshop and Adobe’s other creative tools have made on the world, we also recognize the ethical implications of our technology,” Adobe said in a company blog post. “Fake content is a serious and increasingly pressing issue.”

This is far from the first time Adobe has come up with ways to try and counter misuse of its products. Already built into Photoshop are image recognition tools that prevent scans or photos of certain bank notes from being opened at all, although it’s far from foolproof.

“The idea of a magic universal ‘undo’ button to revert image edits is still far from reality,” said Adobe researcher Richard Zhang.

To create the software, engineers trained a neural network on a database of paired faces, containing images both before and after they’d been edited using Liquify.

The resulting algorithm is impressively effective. When asked to spot a sample of edited faces, human volunteers got the right answer 53 percent of the time, while the algorithm was correct 99 percent of the time.
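
Adobe has not published the full architecture or training pipeline, but the setup described above amounts to a binary classifier trained on original/edited face pairs. The PyTorch sketch below is a hypothetical, stripped-down version of that idea, using random tensors in place of real face crops.

```python
import torch
import torch.nn as nn

class FakeFaceDetector(nn.Module):
    """Tiny stand-in for a manipulated-face classifier."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)  # one logit: edited vs. original

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = FakeFaceDetector()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in batch: 8 face crops, half original (label 0), half Liquify-edited (label 1).
images = torch.randn(8, 3, 128, 128)
labels = torch.tensor([0., 0., 0., 0., 1., 1., 1., 1.]).unsqueeze(1)

# One training step on the fabricated batch.
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```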

Since it’s limited to just faces tweaked by this Photoshop tool, don’t expect this research to form any significant barrier against the forces of evil lawlessly tweaking faces left and right out there. But this is just one of many small starts in the growing field of digital forensics.

“We live in a world where it’s becoming harder to trust the digital information we consume,” said Adobe’s Richard Zhang, who worked on the project, “and I look forward to further exploring this area of research.”

Virtual worlds

A virtual world is a computer-based online community environment that is designed and shared by individuals so that they can interact in a custom-built, simulated world. These worlds are still being created, but you can already buy such things as expensive houses and other virtual products.

Why buy virtual land for real money? Because when you are in VR with your friends, playing, building, learning, socializing and just plain having fun, you forget that it’s not physically real! Modern virtual reality is “good enough”, providing enough data to the eyes, ears and brain to simulate reality and immerse you in a completely new world.

One of these worlds is called Decentraland, but it’s far from the only one. Most of the giant tech companies are working on virtual worlds. Facebook has shown it can use this technology in several ways, not just in the virtual world. For virtual worlds to succeed they have to be very realistic, and the new software developed by Facebook will help to achieve that. By combining personal assistants like Alexa with this technology, Facebook hopes to gather information from the real world that will eventually be rendered graphically in these virtual worlds. So expect personal assistants to be advertised as personal helpers too, making it easier to find a lost book or your keys by being better at monitoring your actions and your environment.

These virtual worlds will start out as simple rooms for chatting in console/PC games and through social media, but they will quickly evolve into the way we go online. The internet as we know it today has made the world smaller and made it easier to connect with people around the world, but virtual worlds will make it even easier. The technology has the potential to change many aspects of our lives, such as travel, where these worlds would be able to instantly send us anywhere in the world. That could impact the travel industry and how tourism works. Another aspect where this technology could have a huge impact is education, and not just in technologically advanced countries. It could benefit the poor and undereducated in ways that could change entire nations or even continents.

One of the more “out there” ideas for this technology is to create virtual worlds, where we humans can upload our minds in order to live forever.

AI can predict the future

Scientists often say that nothing happens by coincidence. Everything can be measured and analysed, which is exactly what AI systems do when they predict the future. These systems might not be able to look far into the future or determine our fates at birth (not yet), but they can determine whether people are at risk of premature death. They can see workplace accidents before they happen, predict a user’s actions on social media, when people are going to take sick days off work, the chances of mental health issues, or what product you will buy next.


Invasion of AI: How Will This Technology Change the World? (Part 1)

“We live in a world that is beyond our control, and life is in a constant flux of change. So we have a decision to make: keep trying to control a storm that is not going to go away or start learning how to live within the rain.” Although this Glenn Pemberton quote isn’t referring to the modern-day struggle with new technologies, it captures the issues surrounding AI.

Oxford University’s Future of Humanity Institute asked several hundred machine-learning experts to predict AI capabilities over the coming decades.

Notable dates included AI writing essays that could pass for being written by a human by 2026, truck drivers being made redundant by 2027, AI surpassing human capabilities in retail by 2031, writing a best-seller by 2049, and doing a surgeon’s work by 2053. They estimated there was a relatively high chance that AI beats humans at all tasks within 45 years and automates all human jobs within 120 years.

In this series of articles we’ll show how artificial intelligence impacts our lives in both positive and negative ways.

What is artificial intelligence?

AI systems will typically demonstrate at least some of the following behaviors associated with human intelligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity.

AI is often used today to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon’s Alexa and Apple’s Siri, to recognise who and what is in a photo, to spot spam, or detect credit card fraud and much more.

Let’s take a look at some of the areas already impacted by artificial intelligence.

Robots and driverless cars

The desire for robots to be able to act autonomously and understand and navigate the world around them means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in robotics, it is helping robots move into new areas such as self-driving cars and delivery robots, as well as helping robots learn new skills. The Chinese company Baidu has invested in developing self-driving cars, powered by its deep learning algorithm, Baidu AutoBrain, and, following several years of tests, plans to roll out fully autonomous vehicles in 2018 and mass-produce them by 2021. General Motors recently said it would build a driverless car without a steering wheel or pedals by 2019, while Ford committed to doing so by 2021, and Waymo, the self-driving group inside Google parent Alphabet, will soon offer a driverless taxi service in Phoenix.

Fake news and Deepfakes

We already have neural networks that can create photo-realistic images or replicate someone’s voice in a pitch-perfect fashion. With that comes the potential for hugely disruptive social change, such as no longer being able to trust video or audio footage as genuine. Concerns are also starting to be raised about how such technologies are being used to misappropriate people’s image, with tools already created to convincingly splice famous actresses into adult films, fake political statements and more.

While this aspect of artificial intelligence seems scary, it also has some productive uses. As unsettling as it is to be able to swap out other people’s faces and voices, the same technique makes it easier and less expensive to make movies and TV shows. And just like the face-swapping apps of yesterday, this technology will also be used for fun and entertainment.

Speech and language recognition

One of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple’s Siri, Amazon’s Alexa, the Google Assistant, and Microsoft Cortana. This technology is already being used in other products, such as TVs, smartwatches and other wearables. In the next couple of years everybody will be using universal translators, with every word recorded and fed into giant AI networks.

With researchers pursuing a goal of 99 percent accuracy, expect speaking to computers to become the norm alongside more traditional forms of human-machine interaction.

Social Media

Internally, each of the tech giants uses AI to help drive a myriad of services, such as serving search results, offering recommendations, recognizing people and things in photos, translating on demand and spotting spam. The list is extensive.

These systems feature absurd processing power and instant analytical capabilities. They eat big data and crap hyper-targeted marketing. They take no breaks or vacation days, and spend no time screwing around on Facebook (except to ingest behavioral insights to make themselves smarter).

People’s faces are being used like cookies to help offer targeted services that match a customer’s preferences. Other companies are using facial recognition to detect the moods of their customers and, in turn, offer them suitable product recommendations.
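
As a toy illustration of the “face as cookie” idea, the sketch below matches a freshly captured face embedding against a stored customer database using cosine similarity. The embeddings here are random stand-ins; real systems use dedicated face-recognition models and far larger galleries.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend each known customer's face has already been reduced to an embedding
# vector by some face-recognition model; these vectors are random stand-ins.
rng = np.random.default_rng(0)
customer_db = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}

def identify(face_embedding, threshold=0.6):
    """Return the best-matching customer above the threshold, or None."""
    best_name, best_score = None, threshold
    for name, stored in customer_db.items():
        score = cosine_similarity(face_embedding, stored)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# A noisy re-capture of "alice" walking into the shop.
probe = customer_db["alice"] + rng.normal(scale=0.1, size=128)
print(identify(probe))  # -> "alice", who can then be served targeted offers
```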

With the focus on fake news and online hate speech, more and more tech giants are implementing filters and other vetting processes with the help of AI systems. Facebook training agents to negotiate and even lie is a huge problem that shows the big tech companies are guilty of doing the very things they promise to protect people from.

Entertainment

Choosing which song to release or which movie to produce is already largely decided by AI systems. AI systems have already produced music, art and TV shows. While these AI-made productions still can’t compete with human creativity and struggle with things such as emotion and humor, they are getting close to matching us or even surpassing us.

There have been examples of AI producing art and news stories that humans couldn’t recognize as the work of a machine.

Much of the entertainment related AI business will get popularized through Augmented Reality devices such as future smartphones and smartglasses.

Law enforcement

There can be no doubt that artificial intelligence (AI) helps defend government and business systems from cyberattacks, but conversely, AI systems can be used to augment attacks against governments and corporations, and even against small businesses and private individuals.

While police forces in western countries have generally only trialed using facial-recognition systems at large events, in China the authorities are mounting a nationwide program to connect CCTV across the country to facial recognition and to use AI systems to track suspects and suspicious behavior, and are also trialing the use of facial-recognition glasses by police.

In the near future, law enforcement will get new tools to fight crime and solve so-called cold cases where DNA is the only evidence. A Belgian team of scientists is working on this right now, and if the work is successful, police around the world will be able to get much more information from a typical DNA sample. Our DNA shapes how we look, and AI systems may soon read a DNA sample and extract physical traits, diseases and much more.
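
The Belgian team’s methods have not been detailed publicly, but the general idea, predicting visible traits from genetic markers, can be sketched with a simple classifier. The markers, encodings and labels below are fabricated; real forensic DNA phenotyping uses far larger marker panels and more careful statistics.

```python
from sklearn.ensemble import RandomForestClassifier

# Each row encodes three hypothetical SNPs as 0/1/2 copies of a minor allele.
# Both the genotypes and the trait labels are made up for illustration.
genotypes = [[0, 2, 1], [1, 2, 2], [2, 0, 0], [0, 1, 0], [2, 2, 1], [1, 0, 0]]
eye_color = ["blue", "blue", "brown", "blue", "brown", "brown"]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(genotypes, eye_color)
print(model.predict([[1, 2, 1]]))        # predicted trait for a new sample
print(model.predict_proba([[1, 2, 1]]))  # with an associated confidence
```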

Healthcare

AI could eventually have a dramatic impact on healthcare, helping radiologists to pick out tumors in x-rays, aiding researchers in spotting diseases and identifying molecules that could lead to more effective drugs.

AI will be a powerful tool in the world of genetic manipulation and give us a better understanding of our genes and potentially gene therapy techniques such as CRISPR.

Even mental health will be impacted by AI, through personal advice and therapy, but also in detecting issues such as mental breakdowns or other major psychological problems.

Business

AI is also a major factor in business, and almost every company uses it in some form or another. Banking security is one area where AI is being used, but so is everything from stock trading to job interviews.

Jobs

While AI won’t replace all jobs, what seems to be certain is that AI will change the nature of work, with the only question being how rapidly and how profoundly automation will alter the workplace.

There is barely a field of human endeavour that AI doesn’t have the potential to impact. As AI expert Andrew Ng puts it: “many people are doing routine, repetitive jobs. Unfortunately, technology is especially good at automating routine, repetitive work”, saying he sees a “significant risk of technological unemployment over the next few decades”.

Some experts think AI will improve the workplace and even increase the number of human jobs.


Facebook Will Pay Users To Let It Track Them

Facebook will once again begin paying people to monitor how they use their phone through a new app called Study. The app will monitor which apps are installed on a person’s phone, the time spent using those apps, the country they’re in, and additional app data that could reveal which specific features they’re using, among other things.

The company previously rolled out two similar apps that tracked what activities people did on their phones. But both were shut down after drawing criticism for infringing on privacy and for violating Apple’s App Store guidelines.

The launch of Study shows that Facebook clearly feels that it still needs this data on how people are using their phones, and also that Facebook has learned a thing or two from the last controversy. Study will only be available to people 18 and up; it’ll only be available on Android, where deeper phone access can be granted by each user; and it’ll open with a series of screens describing what type of data the app collects and how it’ll be used.

Facebook promises it won’t snoop on user IDs, passwords or any of participants’ content, including photos, videos or messages. It won’t sell participants’ info to third parties, use it to target ads or add it to their account or the behavior profiles the company keeps on each user. Yet while Facebook writes that “transparency” is a major part of “Approaching market research in a responsible way,” it refuses to tell us how much participants will be paid.
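
Facebook has not published Study’s data schema, so the record below is only a guess at what one usage entry might contain, based on the categories the announcement lists (installed apps, time in app, country, feature usage); every field and value is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AppUsageRecord:
    """Hypothetical shape of one Study telemetry entry."""
    app_package: str                  # which app was used
    minutes_in_app: int               # time spent in it
    country: str                      # where the participant is
    features_used: list = field(default_factory=list)  # in-app features touched

record = AppUsageRecord(
    app_package="com.example.competitor_app",  # made-up package name
    minutes_in_app=42,
    country="US",
    features_used=["stories", "checkout"],
)
print(record)
```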

An investigation from January revealed that Facebook had been quietly operating a research program codenamed Atlas that paid users ages 13 to 35 up to $20 per month in gift cards in exchange for root access to their phone so it could gather all their data for competitive analysis.


Mark Zuckerberg is the Latest Victim of Deep Fake Videos

Two U.K.-based artists created a deepfake of Facebook CEO Mark Zuckerberg to show just how dangerous AI-generated videos can be. Facebook is leaving the video up, sticking to a controversial stance it took when a doctored video of House Speaker Nancy Pelosi (D-California) went viral.

Deepfakes are fake videos that show a person saying or doing something they did not. The technique uses a mixture of real footage and artificial intelligence to falsify someone’s actions or speech.

As the technology gets better, many are worried that such videos will be used to spread misinformation and propaganda online.

A deepfake video of Mark Zuckerberg presents a new challenge

The video, posted to Facebook-owned Instagram over the weekend, falsely portrays Zuckerberg as saying,

“Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

An Instagram spokesperson told CNN Business on Tuesday that the site will treat the video “the same way we treat all misinformation on Instagram.” If it’s marked as false by third-party fact checkers, the spokesperson said, the site’s algorithms won’t recommend people view it.

The Zuckerberg video, which was first reported by Vice, comes as the US Congress prepares to hold its first hearing on the potential threats posed by deepfake videos. Earlier this year, the US Director of National Intelligence warned that America’s adversaries may use deepfake technology in future disinformation campaigns targeting the country. The video had less than 5,000 views before first being reported by news media, but how Facebook treats it could set a precedent for its handling of future deepfake videos.

AI fakes Bill Gates’ voice and his speech patterns

Engineers at Facebook’s AI research lab created a machine learning system that can not only clone a person’s voice, but also their cadence — an uncanny ability they showed off by duplicating the voices of Bill Gates and other notable figures.

This system, dubbed MelNet, could lead to more realistic-sounding AI voice assistants or voice models of the kind used by people with speech impairments — but it could also make it even more difficult to distinguish actual speech from audio deepfakes.
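
MelNet works in the frequency domain, modelling mel spectrograms rather than raw waveforms. The sketch below only shows what that intermediate representation looks like for a synthetic tone; it is not Facebook’s model code, and a separate vocoder would be needed to turn such spectrograms back into speech.

```python
import numpy as np
import librosa

sr = 16000
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
waveform = 0.5 * np.sin(2 * np.pi * 220 * t)  # synthetic tone as a stand-in for speech

# The mel spectrogram: the time-frequency representation a model like MelNet
# is trained to generate, frame by frame.
mel = librosa.feature.melspectrogram(y=waveform, sr=sr, n_mels=80)
log_mel = librosa.power_to_db(mel)

print(log_mel.shape)  # (80 mel bands, ~63 frames for two seconds of audio)
```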

The speech is still somewhat robotic, but the voices are recognizable — and if researchers can smooth out the system even slightly, it’s conceivable that MelNet could fool the casual listener into thinking they’re hearing a public figure saying something they never actually uttered.