In an overwhelming vote on Wednesday, the European Parliament adopted the landmark Artificial Intelligence Act, which restricts certain AI applications it considers “high-risk” and outlaws others entirely.
Lawmakers backed the measure 523 to 46, with 49 abstentions.
The regulation will outlaw some AI applications that threaten citizens’ rights, such as systems that scrape facial images from the internet or CCTV footage to build facial recognition databases.
Also banned will be emotion recognition in the workplace and schools, social scoring, predictive policing based solely on profiling a person or assessing their characteristics, and AI that manipulates human behavior or exploits people’s vulnerabilities.
The vote came five years after the regulations were first proposed.
The AI Act is expected to serve as a global signpost for other governments grappling with how to regulate the fast-developing technology.
Big tech companies have generally supported the need to regulate AI while lobbying to ensure any rules work in their favor.
OpenAI CEO Sam Altman caused a minor stir last year when he suggested the ChatGPT maker could pull out of Europe if it couldn’t comply with the AI Act, before backtracking to say there were no plans to leave.
Like many EU regulations, the AI Act was initially intended to act as consumer safety legislation, taking a “risk-based approach” to products or services that use artificial intelligence.
The riskier an AI application, the more scrutiny it faces. The vast majority of AI systems are expected to be low risk, such as content recommendation systems or spam filters; for these, companies can choose to follow voluntary requirements and codes of conduct.
High-risk uses of AI, such as in medical devices or critical infrastructure like water and electricity networks, face stricter requirements, such as using high-quality data and providing clear information to users.
Some AI applications are banned outright because they are judged to pose an unacceptable risk, such as social scoring systems that govern how people behave, certain types of predictive policing, and emotion recognition systems in schools and workplaces.
Other banned uses include police scanning faces in public with AI-powered remote “biometric identification” systems, except in cases of serious crimes such as kidnapping or terrorism.