Recent EC guidelines on the AI Act


The European Union’s Artificial Intelligence Act (AI Act) establishes the first comprehensive legal framework for regulating AI systems, adopting a risk-based approach to ensure safety, the protection of fundamental rights, and trustworthy AI development. To support the effective implementation of its provisions, the European Commission (EC) issued two key sets of guidelines in February 2025: one on the definition of AI systems (February 6) and another on prohibited AI practices (February 4). These non-binding guidelines aim to clarify critical aspects of the Act and assist stakeholders in navigating compliance requirements.


Guidelines on the Definition of AI Systems (Feb. 6, 2025)

These guidelines clarify what constitutes an “AI system” under Article 3(1) of the EU AI Act. Their purpose is to ensure that stakeholders can accurately identify whether their software or technology falls within the scope of the Act. Key takeaways include:

  • Broad and Technology-Neutral Definition: An AI system is a machine-based system designed to operate with varying levels of autonomy that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
  • Case-by-Case Analysis: Determining whether a system qualifies as an AI system requires analysis based on functionality rather than specific technologies. This ensures flexibility and adaptability to emerging technologies.
  • Evolving Framework: The definition is designed to accommodate future advancements in AI, ensuring that the regulatory framework remains relevant over time.

These guidelines provide foundational clarity for businesses and developers to understand whether their systems are subject to regulation under the AI Act. For more details, refer to the official EC document: Guidelines on the definition of an artificial intelligence system.

Guidelines on Prohibited AI Practices (Feb. 4, 2025)

This set of non-binding guidelines elaborates on Article 5 of the EU AI Act, which lists practices that are deemed to pose “unacceptable risks” and are therefore prohibited. Key takeaways are:

  • The draft guidelines aim to increase clarity and provide insight into the Commission’s interpretation of the prohibited practices under Article 5 of the Act.
  • The draft guidelines are lengthy but remain in draft form and, even when finalized, will be non-binding; all guidance provided therein is subject to the formal requirements set forth in the Act.
  • Though not comprehensive, the draft guidelines are a helpful step in assessing whether an AI system qualifies as prohibited under the Act.

The prohibited practices cover applications that violate fundamental rights or EU values. Key examples include:

  • Subliminal or Manipulative Techniques: Prohibits AI systems that use subliminal techniques or manipulative strategies beyond individuals’ awareness to distort their behavior in ways that cause significant harm.
  • Exploitation of Vulnerabilities: Bans systems exploiting vulnerabilities related to age, disability, or socio-economic conditions to distort behavior and cause harm.
  • Biometric Categorization: Outlaws systems inferring sensitive attributes (e.g., political beliefs or sexual orientation) from biometric data, unless strictly necessary for law enforcement purposes and accompanied by appropriate safeguards.
  • Real-Time Remote Biometric Identification: Prohibits its use in publicly accessible spaces except for narrowly defined law enforcement purposes with judicial authorization.
  • Untargeted Facial Scraping: Prohibits the indiscriminate scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases. Examples include commercial facial recognition systems built using images scraped from social media platforms.
  • Emotion Recognition in Sensitive Contexts: Bans AI systems designed to infer emotions in workplaces and educational institutions, except for medical or safety reasons, due to concerns about their reliability and potential for discriminatory outcomes.
  • AI for Social Scoring: Prohibits systems that evaluate or classify individuals based on their social behaviour or personal characteristics where the resulting score leads to detrimental treatment in contexts unrelated to those in which the data was generated, or to treatment that is unjustified or disproportionate.

For further information, consult the official EC publication: Guidelines on prohibited artificial intelligence practices.


Conclusion

Together, these two guidelines provide essential clarity for implementing the EU AI Act’s provisions. By defining what constitutes an AI system and detailing prohibited practices, they help ensure consistency across Member States while fostering ethical innovation and safeguarding fundamental rights. These resources are invaluable for businesses seeking to align their operations with the EU’s regulatory framework for AI.

Author(s)

Stefano Modafferi

[ IT Innovation Centre, University of Southampton ]