Gone are the days when a store’s security cameras only mattered to shoplifters.
Now, with surveillance systems increasingly monitored by artificial intelligence, ubiquitous cameras can watch, learn about, and discriminate against shoppers more than ever before.
AI can flag people based on their clothing or behavior, identify people’s emotions, and find people who are acting “unusual.”
Recent developments in video analytics—fueled by artificial intelligence techniques like machine learning—enable computers to watch and understand surveillance videos with human-like discernment. Identification technologies make it easier to automatically figure out who is in the videos. And finally, the cameras themselves have become cheaper, more ubiquitous, and much better; cameras mounted on drones can effectively watch an entire city. Computers can watch all the video without human limitations like distraction, fatigue, the need for training, or the need to be paid. The result is a level of surveillance that was impossible just a few years ago.
That’s the gist of a new ACLU report titled “The Dawn of Robot Surveillance,” which describes how emerging AI technology enables security companies to constantly monitor and collect data about people, opening new avenues for the abuse of power and the overpolicing of underserved communities.
To prevent the worst consequences of this new smart surveillance tech, the ACLU report calls for strong legislation that would limit how the camera feeds can be used — especially to prevent mass data collection about people who are just going about their lives.
“Growth in the use and effectiveness of artificial intelligence techniques has been so rapid that people haven’t had time to assimilate a new understanding of what is being done, and what the consequences of data collection and privacy invasions can be,” concludes the report.
Video analytics does more than identify actions; it lets computers understand what’s going on in a video. They can flag people based on their clothing or behavior, identify people’s emotions through body language and behavior, and find people who are acting “unusual” compared with everyone around them. Amazon’s in-store cameras, for example, can analyze customer sentiment.
Data storage has become incredibly cheap, and cloud storage makes it all so easy. Video can easily be saved for years, allowing computers to conduct all of this surveillance retroactively.
In democratic countries, such surveillance is marketed as crime prevention—or counterterrorism. In countries like China, it is blatantly used to suppress political activity and for social control. In all instances, it’s being implemented without a lot of public debate by law-enforcement agencies and by corporations in public spaces they control.
Discrimination will become automated. Those who fall outside norms will be marginalized. And most importantly, the inability to live anonymously will have an enormous chilling effect on speech and behavior, which in turn will hobble society’s ability to experiment and change. The recent ACLU report discusses these harms in more depth. While it’s possible that some of this surveillance is worth the trade-offs, we as a society need to make those decisions deliberately and intelligently.