EU AI Act: Unacceptable AI Practices and Examples


The EU AI Act officially entered into force on August 1, 2024, setting new standards for artificial intelligence (AI) use across the European Union. This essential regulatory framework is designed to harness AI's potential while ensuring the protection of fundamental rights. Here's a detailed look at the AI practices considered an "unacceptable risk" under the Act, along with relevant examples:

1. Social Scoring by Governments

AI systems used by governments to evaluate individuals based on behavior, economic status, or personal characteristics, leading to discriminatory outcomes, are prohibited.

  • Example: A system that monitors citizens' behavior and assigns them scores for social compliance, which then influences access to public services or employment opportunities.

2. Exploitative Practices Targeting Vulnerable Groups

The Act prohibits AI systems that exploit the vulnerabilities of specific groups, such as children or people with disabilities, ensuring protection against manipulation and harm.

  • Example: A gaming app that uses AI to target children with offers and rewards that encourage them to spend large amounts of money.

3. Real-Time Remote Biometric Identification in Public Spaces

The use of real-time remote biometric identification, such as facial recognition in public spaces, is prohibited, except in strictly defined law enforcement scenarios subject to rigorous oversight.

  • Example: Surveillance cameras that identify and track individuals in real-time in shopping malls or other public spaces without their explicit consent.

4. Subliminal Manipulation

AI systems that use subliminal techniques to significantly distort behavior in a harmful way are prohibited. This measure protects individuals from hidden manipulation.

  • Example: A marketing application that uses AI to insert subliminal messages into video ads, influencing consumer behavior without their awareness.

5. Indiscriminate Surveillance

Mass surveillance through AI, which involves large-scale monitoring without specific objectives or legal justification, is prohibited to protect privacy and prevent unwarranted intrusion.

  • Example: A monitoring system that collects and analyzes data from all electronic devices in an urban area without a specific purpose or legal authorization.

6. Automated Decisions Without Human Oversight

Critical decision-making systems that affect people's lives, health, and rights must include human oversight. This ensures protection against unfair or wrong decisions made solely by automated processes; a minimal illustration of such an oversight gate follows the example below.

  • Example: An AI system used to decide on granting or denying bank loans without any human verification or intervention, which can lead to errors or discrimination.
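As a rough sketch of what "human oversight" can mean in practice (not a prescription from the Act itself), the hypothetical Python example below routes automated loan recommendations through a human reviewer instead of letting the model issue final denials on its own. All names and thresholds here (`LoanApplication`, `score_application`, `human_review`, the 0.8 approval cutoff) are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class LoanApplication:
    applicant_id: str
    amount: float
    credit_score: int  # hypothetical input feature

def score_application(app: LoanApplication) -> float:
    """Toy stand-in for an AI model's approval score in [0, 1]."""
    return min(app.credit_score / 850, 1.0)

def human_review(app: LoanApplication, score: float) -> str:
    # Placeholder: a real system would open a review task for a credit
    # officer, showing the score and the underlying application data.
    print(f"Escalating {app.applicant_id} (score={score:.2f}) for human review")
    return "pending human decision"

def decide_with_oversight(app: LoanApplication, approve_threshold: float = 0.8) -> str:
    """The automated score never produces a final rejection by itself:
    anything below the (hypothetical) threshold goes to a human."""
    score = score_application(app)
    if score >= approve_threshold:
        return "approved (automated, subject to audit)"
    return human_review(app, score)

if __name__ == "__main__":
    print(decide_with_oversight(LoanApplication("A-123", 15000.0, 610)))
```

The design choice illustrated is simply that the automated path can only produce favorable or escalated outcomes; adverse decisions always pass through a person who can see and override the model's recommendation.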

Understanding and Complying with the EU AI Act

Understanding these prohibitions is crucial for companies developing or implementing AI technologies. Compliance with the EU AI Act not only avoids legal risks but also strengthens trust and credibility in AI solutions. Companies must prioritize ethical AI practices to align with regulatory standards and societal expectations.

The EU AI Act represents a vital step in ensuring the responsible use of AI. Strict regulations on unacceptable risks are necessary to protect citizens' fundamental rights and prevent technology misuse. These measures will stimulate ethical innovation, providing companies with the framework to develop AI solutions that are not only technologically advanced but also consistent with our society's values. By adhering to these regulations, we can build a future where AI is a trusted ally, not a source of concern.

The EU AI Act's clear rejection of unacceptable risks serves as a foundation for responsible AI innovation. Companies that effectively navigate these regulations will be leaders in creating AI solutions that are both innovative and ethically sound.

Feel free to contact me for further discussions or insights on AI regulations.

You can find the original article here

Alexandru Dan

CEO, TVL Tech