A group of policy experts assembled by the EU has recommended that it ban the use of AI for mass surveillance and the mass “scoring of individuals,” a practice that potentially involves collecting varied data about citizens — everything from criminal records to their behavior on social media — and then using it to assess their moral or ethical integrity.
In its latest report, the EU’s High Level Expert Group on Artificial Intelligence says that “AI enabled mass scale scoring of individuals” should be banned. It also recommends that uses of AI and big data to identify national security threats be tightly regulated.
“While there may be a strong temptation for governments to ‘secure society’ by building a pervasive surveillance system based on AI systems, this would be extremely dangerous if pushed to extreme levels,” the report released today reads.
The group also calls for commercial surveillance of individuals and societies to be “countered” — suggesting the EU’s response to the potency and potential for misuse of AI technologies should include ensuring that online people-tracking is “strictly in line with fundamental rights such as privacy”, including when it concerns ‘free’ services.
However, much of the report simply recommends “further study,” while other recommendations, like limits on the use of emotional tracking and assessment technologies, are maddeningly vague.
Following the publication of the report, the EU will explore the practicalities of these recommendations, with concrete proposals due by early 2020. Then comes the harder task: turning them into legislation that will protect European citizens’ rights in an age of big data and artificial intelligence.
“Europe can distinguish itself from others by developing, deploying, using, and scaling Trustworthy AI, which we believe should become the only kind of AI in Europe, in a manner that can enhance both individual and societal well-being,” the document reads.
Other key recommendations:
- Closely follow data collection practices of institutions and businesses
- Require self-identification of AI systems in human-machine interactions
- Support challenges to address climate change and hold an annual “AI for good” challenge
- Include workers whose jobs are impacted by AI in the AI design process
- Map skills shortages to identify AI opportunities
- Support the development of AI testing systems that let civil society organizations conduct independent quality verification
- Support elementary AI education courses for all EU citizens
- Fund government employee AI training and assess potential privacy and personal data risks of AI systems before government agencies procure them
- Create monitoring mechanisms to track the impact of AI on European member states and across the EU
- Fund additional research into the impact of AI on individuals and society, including on the rule of law, democracy, jobs, and social systems and structures