
The Cannabis Industry Is Changing

An examination of the current state of the cannabis market makes it clear that the use of artificial intelligence is not a passing trend but a real opportunity to provide solutions to a pressing market need.

Imagine that you have a day-by-day record of how your plants are reacting to all the different inputs (environmental conditions, nutrient feed, pH, CO2, light spectrum, etc.) and your software is making millions of calculations to draw insights on what is making a difference to your output. The output may be yield, energy consumption, labor cost, and so on, but the software can also model what would happen if you changed the way you grow.
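To make that concrete, here is a minimal sketch of the kind of analysis such software might run, assuming a hypothetical daily log file (grow_log.csv) and invented column names; a real cultivation platform would use far richer data and models:

```python
# Minimal sketch: rank which grow-room inputs appear to drive yield.
# The CSV file and column names here are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

log = pd.read_csv("grow_log.csv")            # hypothetical day-by-day grow log
inputs = ["temperature", "humidity", "ph", "co2_ppm",
          "nutrient_ec", "light_hours"]       # assumed column names
X, y = log[inputs], log["yield_g_per_m2"]     # assumed yield column

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X, y)

# Which inputs the model credits with most of the variation in yield
for name, score in sorted(zip(inputs, model.feature_importances_),
                          key=lambda pair: -pair[1]):
    print(f"{name:>12}: {score:.2f}")

# "What if" modelling: predict yield under a changed growing regime
scenario = X.mean().to_frame().T              # an average day as the baseline
scenario["co2_ppm"] += 200                    # e.g. raise CO2 enrichment
print("Predicted yield:", model.predict(scenario)[0])
```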

Access to vast amounts of data allows growers to optimize for environmental changes and variables, and even to refine the strain itself. Growers can target the CBD or THC levels they want and adjust the genetic makeup to consistently produce the types of strains that sell best.

The medical cannabis market alone offers more than 30,000 different marijuana strains, each used to treat a different set of symptoms and concerns. This creates considerable confusion both for buyers and at the dispensing end. With so many strains available, the purchaser is often at a loss as to which one is best for their specific needs or condition. Artificial intelligence can draw on existing data from studies and peer-reviewed journals to match symptoms and ailments to one of the available strains.
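A toy version of that matching step might look like the sketch below; the strain profiles and symptom weights are invented placeholders, whereas a real system would build them from study data and peer-reviewed literature:

```python
# Toy sketch: score strains against a patient's reported symptoms.
# The strain-to-symptom weights are invented placeholders, not medical data.
STRAIN_PROFILES = {
    "Strain A": {"insomnia": 0.8, "chronic pain": 0.6, "anxiety": 0.2},
    "Strain B": {"anxiety": 0.8, "nausea": 0.7, "insomnia": 0.1},
    "Strain C": {"chronic pain": 0.9, "inflammation": 0.8},
}

def rank_strains(symptoms):
    """Return strains ordered by how well they cover the given symptoms."""
    scores = {
        name: sum(profile.get(symptom, 0.0) for symptom in symptoms)
        for name, profile in STRAIN_PROFILES.items()
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])

# A buyer reporting insomnia and anxiety gets a ranked shortlist
print(rank_strains(["insomnia", "anxiety"]))
```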

Cannabis in everyday items

Innovators are taking up the gauntlet to cultivate this versatile plant for a medley of biodegradable materials including plastic polymers, building products, fabrics, wood, biofuel, paper and even car components.

It’s not new. The fiber from industrial hemp (Cannabis sativa) has been used for thousands of years to make paper, rope, cloth and fuel.

Hemp is a weed, so it grows prolifically with little water and no pesticides. It takes up relatively little space, produces more pulp per acre than trees, and is biodegradable. Hemp crops even give back by returning nutrients to the soil and sequestering carbon dioxide.

Morris Beegle, co-founder and president of WAFBA (We Are For Better Alternatives) is a staunch advocate of industrial hemp.

Beegle set up his hemp company in 2012 and then launched the NoCo Hemp Expo, which has grown to be the largest in the world.

With a merchandising company called TreeFreeHemp, Beegle produces a vast array of custom products including paper, business cards, flyers, posters, CD and DVD sleeves and more. Drawing from his background in the music industry, he even produces boutique, custom-made guitars, using hemp for the body, straps, picks and volume knobs.

Currently, there are fewer than a million acres of hemp growing across the planet. Beegle sees this starting to grow exponentially over the next five to 20 years. “I don’t think there’s any way to stop it now.”

The silly CBD products

Sellers around the world are careful not to claim any specific medical benefits for the CBD products because of a lack of clinical evidence, so they are instead marketed as food supplements.

New CBD products range from CBD water to cooking and massage oils, pills, chewing gum, transdermal patches, pessaries, gin, beer and lube. The crown for silliest CBD product of the year, however, belongs indisputably to the CBD-infused pillowcases sold by one hopeful firm of US fabric-makers.

There is now no denying the medicinal value of CBD and THC – not even by the several European governments that for years maintained the opposite, even as they rubber-stamped the cultivation and export of the world’s largest medicinal cannabis crop.

In many cases, the CBD industry is taking consumers for a ride. Lab tests have analysed high-street offerings and found that more than half of the most popular CBD oils sold do not contain the level of CBD promised on the label. And a look at the label of those products shows that many are sold at such low concentrations that even the guesstimated doses, measured in drops, cannot deliver more than a scant few milligrammes of the active ingredient – whereas medical trials use many times more.

Scientists and politicians are, thankfully, catching up with hundreds of years of folk wisdom: it’s not news to anyone who regularly smokes a spliff that cannabis is relaxing, or that it can help you sleep far more soundly than a glass of red wine, or improve your mood. The interplay between THC, CBD, and the hundreds of other active compounds in the cannabis plant could one day be isolated, identified, tested and proven to offer symptomatic help or even a cure for dozens of life-threatening conditions. But decades of pointless prohibition based on specious moral arguments have prevented proper medical research that could have benefited millions.


App Undresses A Photo Of Any Woman With A Single Click

Any woman can be a victim of DeepNude’s “x-rays”. All it takes is a picture of a person – ideally wearing as little clothing as possible – fed into the app. From there, the software generates a new, algorithmically created image of the woman stripped of all clothing, in which her breasts and genitals are clearly visible. And all in less than a minute.

Women are the only targets of DeepNude... so far. If you try it with a photo of a man, it simply does not work. According to reports online, however, the developers are already working on male nudes.

Motherboard first drew attention to DeepNude, reporting on Wednesday that the app used machine learning technology to “undress” the images by algorithmically superimposing body parts on them.

Katelyn Bowden, founder and executive director of Badass, an organization that fights revenge porn, told Motherboard: “It’s absolutely terrifying. Anyone can be a victim of revenge porn without ever having taken a nude photo. This technology should not be accessible to the public.”

Following the controversy that erupted, the anonymous creator of the app, who goes by the name Alberto, claimed he had shut it down.

“Despite the safety measures adopted (watermarks), if 500,000 people use it, the probability that people will misuse it will be too high. […] The world is not yet ready for DeepNude.”

Until a few days ago, anyone who wanted to “see” someone naked had to pay $50; probably because of high demand, the price is now $99.99. Paying unlocks the premium option, which allows the image to be modified with finer control. Nudes can also be generated for free in the unpaid version, though those images are partly covered by a watermark.

“We don’t want to make money this way,” the statement said. “Surely some copies of DeepNude will be shared on the web, but we don’t want to be the ones to sell it.” Alberto claimed that he’s just a “technology enthusiast,” motivated by curiosity and a desire to learn.

Alberto said he was inspired to create DeepNude by ads for gadgets like X-Ray glasses that he saw while browsing magazines from the 1960s and 70s, which he had access to during his childhood. The logo for DeepNude, a man wearing spiral glasses, is an homage to those ads.

“Like everyone, I was fascinated by the idea that they could really exist and this memory remained,” he said. “About two years ago I discovered the potential of AI and started studying the basics. When I found out that GAN networks were able to transform a daytime photo into a nighttime one, I realized that it would be possible to transform a dressed photo into a nude one. Eureka. I realized that x-ray glasses are possible! Driven by fun and enthusiasm for that discovery, I did my first tests, obtaining interesting results.”

Despite the shuttering of DeepNude – for now at least – similar services will likely be hot on its heels.


EU Wants To Ban AI-powered Citizen Scoring And Mass Surveillance

A group of policy experts assembled by the EU has recommended that it ban the use of AI for mass surveillance and mass “scoring of individuals”: a practice that potentially involves collecting varied data about citizens — everything from criminal records to their behavior on social media — and then using it to assess their moral or ethical integrity.

In its latest report, the EU’s High Level Expert Group on Artificial Intelligence says that “AI enabled mass scale of scoring of individuals” should be banned. In addition, instances where AI and big data could be used to identify national security threats should be tightly regulated.

“While there may be a strong temptation for governments to ‘secure society’ by building a pervasive surveillance system based on AI systems, this would be extremely dangerous if pushed to extreme levels,” the report released today reads.

The group also calls for commercial surveillance of individuals and societies to be “countered” — suggesting the EU’s response to the potency and potential for misuse of AI technologies should include ensuring that online people-tracking is “strictly in line with fundamental rights such as privacy”, including when it concerns ‘free’ services.

However, much of the report simply recommends “further study,” while other recommendations, like limits on the use of emotional tracking and assessment technologies, are maddeningly vague.

Following the publication of the report, the EU will look to explore the practicalities of the recommendations in time for concrete proposals by early 2020. And, somehow, it will have to turn that into legislation that will protect European citizens’ rights in an age of big data and artificial intelligence.

“Europe can distinguish itself from others by developing, deploying, using, and scaling Trustworthy AI, which we believe should become the only kind of AI in Europe, in a manner that can enhance both individual and societal well-being,” the document reads.

Other key recommendations:

  • Closely follow data collection practices of institutions and businesses
  • Require self-identification of AI systems in human-machine interactions
  • Support challenges to address climate change and hold an annual “AI for good” challenge
  • Include workers whose jobs are impacted by AI in the AI design process
  • Map skills shortages to identify AI opportunities
  • Support the development of AI testing systems that let civil society organizations conduct independent quality verification
  • Support elementary AI education courses for all EU citizens
  • Fund government employee AI training and assess potential privacy and personal data risks of AI systems before government agencies procure them
  • Create monitoring mechanisms to track the impact of AI on European members states and across the EU
  • Fund additional research into the impact of AI on individuals and society, including on the rule of law, democracy, jobs, and social systems and structures

AI Capable Of Showing How The Universe Works

A team of researchers recently pioneered the world’s first AI universe simulator. It’s fast; it’s accurate; and its creators are baffled by its ability to understand things about the cosmos that it shouldn’t.

Scientists have used computer simulations for decades to try to digitally reverse-engineer the origin and evolution of our universe. The best traditional methods on modern hardware take minutes and produce results of only moderate accuracy. The world’s first AI universe simulator, on the other hand, produces results with far greater accuracy in just milliseconds.

The speed and accuracy of the Flatiron Institute project, called the Deep Density Displacement Model (D3M for short), weren’t the biggest surprise for the astrophysicists who used artificial intelligence techniques to generate complex 3-D simulations of the universe – including how much of the cosmos is dark matter – in an astonishing 30 milliseconds.

The real shock was that D3M could accurately simulate how the universe would look if certain parameters were tweaked even though the model had never received any training data where those parameters varied. The results are so fast, accurate and robust that even the creators aren’t sure how it all works.

“It’s like teaching image recognition software with lots of pictures of cats and dogs, but then it’s able to recognize elephants. Nobody knows how it does this, and it’s a great mystery to be solved.”

Our universe is a strange and mostly unknown place. Humanity is just beginning to set our sights beyond observable space to determine what’s out there and how it all ended up the way it is. Computer simulations like those made by D3M have become essential to theoretical astrophysics.

Scientists want to know how the cosmos might evolve under various scenarios, such as if the dark energy pulling the universe apart varied over time. Such studies require running thousands of simulations, making a lightning-fast and highly accurate computer model one of the major objectives of modern astrophysics.
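D3M’s internals aren’t spelled out here, but the general idea behind such an emulator can be sketched in a few lines: train a model on the outputs of slow, conventional simulations, then query it almost instantly at new parameter values. Everything in the sketch below (the stand-in “simulation”, the parameter ranges) is purely illustrative and not D3M’s actual architecture:

```python
# Illustrative sketch of a simulation emulator (not D3M itself): learn a fast
# mapping from cosmological parameters to a toy summary statistic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def run_slow_simulation(omega_m, sigma_8):
    """Stand-in for a minutes-long conventional run; returns a toy 'clustering' value."""
    return sigma_8 * omega_m ** 0.55 + 0.01 * rng.normal()

# Build a training set from conventional (slow) simulations
params = rng.uniform([0.2, 0.6], [0.4, 1.0], size=(500, 2))   # Omega_m, sigma_8
targets = np.array([run_slow_simulation(om, s8) for om, s8 in params])

emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                        random_state=0)
emulator.fit(params, targets)

# A near-instant query at parameter values never simulated directly
print(emulator.predict([[0.31, 0.81]]))
```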


Mass Surveillance Will Track Everything And Everyone In The Near Future

Persistent Surveillance Systems wants to spend three years recording outdoor human movements in a major U.S. city, KMOX news radio reports.

Ross McNutt, who runs Persistent Surveillance Systems, was inspired by his stint in the Air Force tracking Iraqi insurgents. He tested mass-surveillance technology over Compton, California, in 2012. In 2016, the company flew over Baltimore, feeding information to police for months (without telling city leaders or residents) while demonstrating how the technology works to the FBI and Secret Service.

There’s really no telling whether surveillance of this sort has already been conducted over your community as private and government entities experiment with it.

The technology is straightforward: A fixed-wing plane outfitted with high-resolution video cameras circles for hours on end, recording everything in large swaths of a city. One can later “rewind” the footage, zoom in anywhere, and see exactly where a person came from before or went after perpetrating a robbery or drive-by shooting … or visiting an AA meeting, a psychiatrist’s office, a gun store, an abortion provider, a battered-women’s shelter, or an HIV clinic. On the day of a protest, participants could be tracked back to their homes.

“Someday, most major developed cities in the world will live under the unblinking gaze of some form of wide-area surveillance.”

Author Arthur Holland Michel says the sheer amount of data will make it impossible for humans in any city to examine everything that is captured on video. But efforts are under way to use machine learning and artificial intelligence to “understand” more. “If a camera that watches a whole city is smart enough to track and understand every target simultaneously,” he writes, “it really can be said to be all-seeing.”

Walmart stores are using AI cameras to monitor checkouts

Walmart’s early use of AI at its stores isn’t just for the sake of convenience. The retailer has confirmed to Business Insider that it’s using camera-based computer vision tech to deter theft and losses at its checkouts (including self-checkouts) in over 1,000 stores.

The camera systems started rolling out in Walmart stores as far back as two years ago under a program reportedly called ‘Missed Scan Detection’ internally. 

While many stores have security cameras, few are using AI to study activity on this level. How long does Walmart preserve the data, and is there anything identifying? We’ve asked Walmart for comment, but it’s safe to say that many customers aren’t aware that AI is at work.
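Walmart hasn’t published how Missed Scan Detection works, but the basic logic of such a system can be sketched as a comparison between what the cameras believe crossed the checkout and what the register actually recorded; the event format and function below are hypothetical:

```python
# Hypothetical sketch of a missed-scan check: compare items the vision system
# believes crossed the checkout with items the register actually recorded.
from collections import Counter

def missed_scans(vision_events, pos_events):
    """Return items seen by the camera more often than they were scanned."""
    seen = Counter(event["item"] for event in vision_events)
    scanned = Counter(event["item"] for event in pos_events)
    return {item: count - scanned[item]
            for item, count in seen.items()
            if count > scanned[item]}

vision_events = [{"item": "cereal"}, {"item": "cereal"}, {"item": "milk"}]
pos_events = [{"item": "cereal"}, {"item": "milk"}]

alerts = missed_scans(vision_events, pos_events)
if alerts:
    print("Possible missed scans:", alerts)   # {'cereal': 1}
```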


PowerPoint’s New AI Coach Wants People To Think And Speak More Inclusively

Microsoft PowerPoint is set to strip away the last vestiges of humanity from presentations with tweaks to its Designer functionality and a coach to help users “deliver the perfect presentation”.

Slightly creepily, it also comments on “inclusive language”, culturally insensitive phrases and swears. We assume it was trained on social media platforms, with a staffer waving a hand at the content and saying: “See that? None of that.” It also makes sure that you don’t commit the greatest sin of presenting: just reading the slides.

The technology is an evolution of an inclusivity checker that Microsoft announced for Word earlier this year. The checker scans your text for the use of unnecessarily gendered pronouns as part of a general grammar check. The PowerPoint team simply imported this AI logic into PowerPoint–and added voice recognition to the mix.

What does that mean exactly?

It’s hard to say. But no doubt, Microsoft is being deliberately vague. If the company wants to nudge the 1.2 billion Office users on the planet to speak more sensitively in an era of unprecedented political divide–an era when, for some reason, being considerate to others is a form of partisan activism–this is exactly the fine line it has to walk.

More than perhaps any other company in tech, Microsoft has made inclusivity a north star for the company’s products, from its Xbox controllers to its productivity software to its research efforts. It’s not necessarily just out of the goodness of their hearts, either; inclusivity has proven to be a powerful differentiator for Microsoft, which has leveraged the cause to reposition itself from sleepy to woke.

What’s particularly intriguing about the coach, however, is that it’s attempting to modify human behavior in a way that is one step removed from Microsoft products themselves. It’s not suggesting different words rendered on a screen; it’s suggesting different words coming out of users’ mouths. In this sense, it’s trying to change the way people act.
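As for the mechanics, Microsoft hasn’t documented the internals, but the text side of such a checker can be approximated as a simple flag-and-suggest pass over a speech transcript; the word list and suggested alternatives below are assumptions for illustration, not Microsoft’s actual rules:

```python
# Illustrative sketch of a flag-and-suggest pass over a spoken transcript.
# The flagged terms and alternatives are assumptions for the example.
import re

SUGGESTIONS = {
    "chairman": "chairperson",
    "mankind": "humanity",
    "guys": "everyone",
}

def review_transcript(transcript):
    """Yield (term, suggestion, sentence) for each flagged usage."""
    for sentence in re.split(r"(?<=[.!?])\s+", transcript):
        for term, alternative in SUGGESTIONS.items():
            if re.search(rf"\b{term}\b", sentence, flags=re.IGNORECASE):
                yield term, alternative, sentence.strip()

talk = "Thanks guys. Our chairman will now read the next slide word for word."
for term, alternative, sentence in review_transcript(talk):
    print(f"Consider '{alternative}' instead of '{term}': {sentence}")
```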


New Monopoly With Voice-Controlled AI Prevents You from Cheating

If you’ve ever played a marathon game of Monopoly, you know that it brings out the very worst in people—even your dear, dirty, lying, cheating friends and family. Sick of all the conniving?

The upcoming Monopoly Voice Activated Banking Game, available July 1, adds an omnipotent banker in the form of a voice-activated top hat that manages the game’s financial transactions.

The Monopoly voice banking game features lights and sounds and comes with an interactive Mr. Monopoly banking unit. The iconic Monopoly top hat is voice-activated, and the personality of Mr. Monopoly really shines as he handles all of the transactions. He keeps tabs on players’ money and properties, so there’s no cash or cards to think about. Talk to Mr. Monopoly and he responds. For instance, press your token’s button and say, “Buy St. James Place.” Mr. Monopoly will track the transaction, keeping the game moving. With the Monopoly voice game, players travel around the board aiming to be the person with the most money and highest property value to win!

As smart assistants go, Mr. Monopoly is nowhere near as intelligent as Alexa, or even Siri. Players use one of four buttons on the top hat speaker to identify themselves (it can’t tell different voices apart on its own) and then make verbal requests like “Buy Boardwalk” or “Build a hotel.” All of the financial transactions are handled by Mr. Monopoly electronically. As with previous iterations of the game, physical cash isn’t even included, which should help eliminate at least one method of cheating.
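Under the hood, the electronic banker is essentially a ledger keyed to the four player buttons plus a small command vocabulary. A toy sketch of that bookkeeping (with example property prices, not Hasbro’s actual code) might look like this:

```python
# Toy sketch of the electronic banker: a ledger plus a tiny command handler.
# Property prices are examples; the real unit also handles rent, auctions, etc.
PROPERTY_PRICES = {"St. James Place": 180, "Boardwalk": 400}

class VoiceBanker:
    def __init__(self, players=4, starting_cash=1500):
        self.cash = {p: starting_cash for p in range(players)}
        self.owner = {}

    def buy(self, player, prop):
        price = PROPERTY_PRICES[prop]
        if prop in self.owner or self.cash[player] < price:
            return f"Player {player + 1} cannot buy {prop}."
        self.cash[player] -= price
        self.owner[prop] = player
        return f"Player {player + 1} bought {prop} for ${price}."

banker = VoiceBanker()
# Pressing token button 1 and saying "Buy St. James Place":
print(banker.buy(0, "St. James Place"))
print(banker.buy(1, "St. James Place"))   # already owned, so refused
```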


Your Status Updates Could Predict A Whole Range Of Health Conditions

Language in Facebook posts may be able to predict whether someone will develop diabetes and other conditions including depression, anxiety, alcohol abuse, sexually transmitted diseases, and drug abuse better than demographic information like age, sex, and race.

Using an automated data collection technique, the researchers from University of Pennsylvania and Stony Brook University in the US analysed the entire Facebook post history of nearly 1,000 patients who agreed to have their electronic medical record data linked to their profiles. 

People who often use the words “God” and “pray” in their Facebook posts are 15 times more likely to develop Type 2 diabetes than people who rarely use those terms on the platform, the new study from the University of Pennsylvania School of Medicine finds.

Looking into 21 different conditions, researchers found that all 21 were predictable from Facebook posts alone. In fact, 10 of the conditions were better predicted using Facebook data than using demographic information.
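The comparison at the heart of the study boils down to training one model on language features and another on demographics, then seeing which predicts a diagnosis better. A bare-bones sketch of that setup, with a hypothetical data file and column names, could look like this:

```python
# Bare-bones sketch of the language-vs-demographics comparison.
# The data file and column names are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

data = pd.read_csv("patients.csv")       # columns: posts, age, sex, has_condition

# Model 1: language only (bag of words over each patient's Facebook posts)
X_text = TfidfVectorizer(min_df=5).fit_transform(data["posts"])
lang_auc = cross_val_score(LogisticRegression(max_iter=1000),
                           X_text, data["has_condition"],
                           scoring="roc_auc", cv=5).mean()

# Model 2: demographics only
X_demo = pd.get_dummies(data[["age", "sex"]])
demo_auc = cross_val_score(LogisticRegression(max_iter=1000),
                           X_demo, data["has_condition"],
                           scoring="roc_auc", cv=5).mean()

print(f"language AUC {lang_auc:.2f} vs demographics AUC {demo_auc:.2f}")
```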

“This work is early, but our hope is that the insights gleaned from these posts could be used to better inform patients and providers about their health,” said Raina Merchant, an associate professor at University of Pennsylvania. 

The study doesn’t show exactly why “God” and “pray” were linked to diabetes.

However, a 2011 study from Northwestern University found that those who begin regularly attending religious services while young are more likely to become obese by middle age. Some of the Facebook data also showed that the words “drink” and “bottle” were predictive of alcohol abuse.

Additionally, words expressing hostility — like “dumb” and some expletives — served as indicators of drug abuse and psychoses. 

“Our digital language captures powerful aspects of our lives that are likely quite different from what is captured through traditional medical data,” said Andrew Schwartz, an assistant professor at Stony Brook University. 

Merchant is hopeful that social-media posts could one day help doctors diagnose diseases like diabetes early or prevent them altogether, but there’s still more research to do before your doctor begins analyzing your status updates. Merchant plans to conduct a large study later this year that shares social-media information directly with health providers.

For those worried about privacy in the latest report, Merchant says it’s a top priority. “We made it very easy for patients to decide they no longer wanted to participate anymore, and we didn’t look at any data from their friends. This would be an opt-in process, and privacy needs to be part of the conversation,” she said.


AI Has Made Video Surveillance Automated And Extremely Terrifying

Gone are the days when a store’s security cameras only mattered to shoplifters.

Now, with the rising prevalence of surveillance systems constantly monitored by artificial intelligence, ubiquitous security systems can watch, learn about, and discriminate against shoppers more than ever before.

AI can flag people based on their clothing or behavior, identify people’s emotions, and find people who are acting “unusual.”

Recent developments in video analytics—fueled by artificial intelligence techniques like machine learning—enable computers to watch and understand surveillance videos with human-like discernment. Identification technologies make it easier to automatically figure out who is in the videos. And finally, the cameras themselves have become cheaper, more ubiquitous, and much better; cameras mounted on drones can effectively watch an entire city. Computers can watch all the video without human issues like distraction, fatigue, training, or needing to be paid. The result is a level of surveillance that was impossible just a few years ago.

That’s the gist of a new ACLU report titled “The Dawn of Robot Surveillance,” about how emerging AI technology enables security companies to constantly monitor and collect data about people — opening new possibilities in which power is abused or underserved communities are overpoliced.

To prevent the worst consequences of this new smart surveillance tech, the ACLU report calls for strong legislation that would limit how the camera feeds can be used — especially to prevent mass data collection about people who are just going about their lives.

“Growth in the use and effectiveness of artificial intelligence techniques has been so rapid that people haven’t had time to assimilate a new understanding of what is being done, and what the consequences of data collection and privacy invasions can be,” concludes the report.

Video analytics does more than identify actions; it allows computers to understand what’s going on in a video. They can flag people based on their clothing or behavior, identify people’s emotions through body language and behavior, and find people who are acting “unusual” based on everyone else around them. Amazon’s in-store cameras, for example, can analyze customer sentiment.
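The “unusual behavior” piece is, at its core, anomaly detection over features extracted from video tracks. A minimal illustration with made-up per-person track features (dwell time, walking speed, direction changes) might be:

```python
# Minimal illustration of flagging "unusual" movement with anomaly detection.
# The per-person track features are made up for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Columns: dwell time (s), average speed (m/s), direction changes per minute
normal_tracks = rng.normal([60, 1.3, 2], [15, 0.2, 1], size=(500, 3))
odd_track = np.array([[900, 0.1, 14]])        # loiters, barely moves, paces

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_tracks)

print(detector.predict(odd_track))            # -1 means flagged as anomalous
```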

Data storage has become incredibly cheap, and cloud storage makes it all so easy. Video data can easily be saved for years, allowing computers to conduct all of this surveillance backwards in time.

In democratic countries, such surveillance is marketed as crime prevention—or counterterrorism. In countries like China, it is blatantly used to suppress political activity and for social control. In all instances, it’s being implemented without a lot of public debate by law-enforcement agencies and by corporations in public spaces they control.

Discrimination will become automated. Those who fall outside norms will be marginalized. And most importantly, the inability to live anonymously will have an enormous chilling effect on speech and behavior, which in turn will hobble society’s ability to experiment and change. The recent ACLU report discusses these harms in more depth. While it’s possible that some of this surveillance is worth the trade-offs, we as a society need to make decisions about it deliberately and intelligently.