Artificial Intelligence (AI) is a fascinating frontier in technology, but it isn’t without its pitfalls. From facial recognition systems that misidentify innocent people as criminals to biased algorithms that ruin lives, AI can have serious repercussions when it goes awry. The Dutch childcare benefits scandal, for instance, was a stark reminder of how easily innocent individuals can be labeled as fraudsters by erroneous automated decisions. These situations underline a sobering reality: the consequences of biased AI can resonate on a large scale, perpetuating stereotypes and marginalizing vulnerable groups.
In response to these growing concerns, the European Union has introduced the AI Act, the first comprehensive piece of legislation governing the use of AI. The law sets firm boundaries around AI practices, categorically banning systems it deems hazardous to health, safety, and fundamental rights. Notably, this protection extends to outputs produced by systems operated outside the EU, a commitment to safeguarding citizens regardless of where the technology originates. Article 5 of the regulation lists the prohibited AI practices, and non-compliance carries hefty fines: up to €35 million or 7% of a company’s total worldwide annual turnover, whichever is higher.
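To see how that ceiling works in practice: the applicable maximum is whichever of the two amounts is larger, so for large companies the 7% figure quickly overtakes the fixed cap. The short sketch below illustrates the arithmetic; the turnover figure is purely hypothetical.

```python
# Minimal sketch of the "whichever is higher" fine cap for prohibited practices.
# The constants reflect the figures cited above; the turnover is hypothetical.

FIXED_CAP_EUR = 35_000_000   # fixed cap of €35 million
TURNOVER_SHARE = 0.07        # 7% of total worldwide annual turnover

def max_fine(annual_turnover_eur: float) -> float:
    """Return the maximum possible fine: the higher of the two amounts."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_turnover_eur)

# Hypothetical company with €1 billion in annual turnover:
print(f"€{max_fine(1_000_000_000):,.0f}")  # €70,000,000 -- 7% exceeds the fixed cap
```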
What AI Practices Are Now Under Fire?
Under the AI Act, several AI practices have been banned or partially prohibited:
- Totally banned: AI systems that manipulate or deceive individuals, such as voice-activated toys that might encourage harmful behavior in kids.
- Totally banned: Systems that exploit people’s vulnerabilities (for example, due to age or disability) or apply social scoring that leads to detrimental treatment in contexts unrelated to where the data was originally collected.
- Totally banned: The untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases.
- Partially banned: Predictive policing that assesses an individual’s risk of committing a crime based solely on profiling or personality traits; assessments based on objective, verifiable facts directly linked to criminal activity remain allowed.
- Partially banned: Biometric categorization systems that attempt to infer sensitive attributes such as race or religious beliefs from biometric data, with a narrow carve-out for certain law enforcement uses of lawfully acquired datasets.
- Partially banned: Emotion recognition systems in workplaces and educational institutions that infer emotional states, for instance from facial expressions, with exceptions for medical and safety purposes.
Navigating the Loopholes
However, there are loopholes that could undermine these protections. For instance, AI systems developed or used exclusively for “national security” purposes fall outside the Act’s scope entirely. While national security can be a legitimate ground for exceptions, each case must still be assessed against the EU Charter of Fundamental Rights. Moreover, the Act applies only to systems placed on the market or used within the EU, creating a troubling scenario in which prohibited systems could still be exported to other countries.
What to Do If Your Rights Are At Risk
If you suspect your rights have been compromised by a banned AI system, it’s crucial to bring your concerns to the relevant market surveillance authority. In Germany, for instance, you can file a complaint with the Bundesnetzagentur. However, the effectiveness of this enforcement remains to be seen, and it will take collective advocacy to ensure compliance and accountability.
For further insights on our advocacy efforts surrounding the Artificial Intelligence Act, feel free to explore our ongoing work.
It’s imperative that as technology advances, so too do our regulations and ethical considerations surrounding AI. The world will benefit from evolving frameworks that prioritize human rights while enabling innovation.