EU AI Act Interim Legal Counsel

2 February 2025: Are You Ready for the EU AI Act’s First Deadline?

1. Introduction

The EU AI Act is here, setting a bold precedent for artificial intelligence regulation worldwide. While most provisions won’t apply until August 2026, 2 February 2025 – this coming Sunday – marks an earlier, critical deadline for businesses involved with AI systems. From that date, Article 5 – a strict ban on AI practices deemed to pose an unacceptable risk – starts to apply, demanding immediate attention and action to ensure compliance.

This milestone presents not only a legal challenge but also an opportunity for companies to demonstrate their commitment to ethical AI practices.

2. What is the EU AI Act?

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (the “EU AI Act”) entered into force on 1 August 2024. It is a comprehensive regulatory framework designed to mitigate the risks associated with artificial intelligence while fostering innovation. It introduces a risk-based classification system, placing AI applications into four categories:

  1. Unacceptable Risk (Prohibited Practices): AI systems deemed intolerable because of their impact on fundamental rights and safety; these practices are prohibited under Article 5.
  2. High-Risk AI Systems: Applications subject to strict oversight, such as medical diagnostics and recruitment algorithms.
  3. Limited Risk: AI systems requiring transparency measures, such as chatbots disclosing their non-human nature.
  4. Minimal Risk: Most AI applications, like basic software tools, face no additional requirements.

By focusing on protecting fundamental rights and ensuring transparency, the Act aims to establish trust in AI systems across industries.
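
For readers compiling an internal AI inventory, the four tiers can be captured as a simple lookup for first-pass triage. The following is a minimal, illustrative Python sketch: the tier names mirror the list above, but the example systems and the mapping itself are hypothetical and are no substitute for a proper legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers (see the list above)."""
    UNACCEPTABLE = "prohibited practice under Article 5"
    HIGH = "strict obligations (e.g. conformity assessment, documentation)"
    LIMITED = "transparency obligations (e.g. disclosing the use of AI)"
    MINIMAL = "no additional obligations"

# Hypothetical internal AI inventory used for a first-pass triage.
ai_inventory = {
    "workplace emotion monitoring tool": RiskTier.UNACCEPTABLE,
    "CV-screening recruitment algorithm": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.name} ({tier.value})")
```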

3. Article 5: Prohibited AI Practices

At the core of the EU AI Act is Article 5, which bans specific AI practices considered to pose unacceptable risks. The scope of this prohibition is extensive: it applies to public and private operators alike, whether they place such systems on the market, put them into service or use them. Among the banned practices are:

  • Social scoring: Systems that evaluate or score individuals based on their social behaviour or personal characteristics, leading to detrimental or unjustified treatment.
  • Emotion recognition in sensitive environments: Systems that infer people’s emotions in workplaces or educational institutions, subject to narrow exceptions for medical or safety reasons.
  • Facial recognition database expansion: AI tools that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Predictive profiling of criminal behaviour: AI systems that predict the risk of a person committing a criminal offence based solely on profiling or personality traits.
  • Biometric categorisation: Systems that use biometric data to deduce sensitive attributes such as race, political opinions, religious beliefs or sexual orientation.

4. How Does the EU AI Act Impact Businesses?

Businesses developing or using AI must adapt quickly to avoid falling foul of the Act. For providers of AI platforms and general-purpose AI services, such as Google Cloud AI or Amazon SageMaker, the challenge lies in restricting prohibited uses without limiting legitimate applications. Many are:

  • Revising customer contracts to outline prohibited uses clearly.
  • Developing Codes of Conduct to align practices with regulatory expectations.
  • Incorporating technical safeguards into their platforms to prevent misuse (see the illustrative sketch below).
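
What such technical safeguards look like will differ from platform to platform. Purely as an illustration, a provider might screen a customer’s declared use case against the Article 5 categories before enabling certain features; the keyword list and the screen_use_case function below are hypothetical and not drawn from any vendor’s actual tooling.

```python
# Illustrative prohibited-use screen; the patterns and logic are hypothetical.
PROHIBITED_PATTERNS = [
    "social scoring",
    "emotion recognition in the workplace",
    "untargeted scraping of facial images",
    "predicting criminal behaviour from profiling",
    "biometric categorisation of sensitive attributes",
]

def screen_use_case(declared_use: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a customer's declared use case."""
    text = declared_use.lower()
    for pattern in PROHIBITED_PATTERNS:
        if pattern in text:
            return False, f"Declared use matches a prohibited practice: {pattern}"
    return True, "No Article 5 match found; route to standard compliance review."

print(screen_use_case("Emotion recognition in the workplace for HR analytics"))
```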

5. Penalties for Non-Compliance

The stakes for non-compliance are high. Pursuant to Article 99(3) of the EU AI Act, infringements of the Article 5 prohibitions can result in fines of up to €35 million or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher, making this one of the most stringent penalty regimes globally.
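
As a quick illustration of how the cap works (the fine is the higher of the two amounts), here is a minimal sketch using a hypothetical turnover figure:

```python
def max_article_5_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper limit of a fine for an Article 5 infringement under Article 99(3):
    €35 million or 7% of total worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical undertaking with €2 billion in worldwide annual turnover.
print(f"€{max_article_5_fine(2_000_000_000):,.0f}")  # prints €140,000,000
```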

While the AI Office plans to release guidelines in early 2025 to help clarify compliance requirements, businesses must act now to assess and mitigate risks.

6. Why 2 February 2025 Matters

This deadline is more than just a regulatory milestone; it’s a wake-up call. Even if your business isn’t directly affected by the ban on prohibited practices, compliance readiness sends a powerful message to stakeholders, regulators, and customers alike: you are serious about ethical AI. To prepare, businesses should:

  1. Audit AI Systems: Identify whether any systems fall under the Article 5 prohibitions.
  2. Engage Experts: Seek legal and technical guidance to interpret the Act’s requirements.
  3. Monitor Developments: Stay updated on guidelines from the AI Office and industry practices.

7. Final Countdown: Is Your Business Ready?

The EU AI Act’s first major deadline is just a few days away. From 2 February 2025, certain AI practices will be outright banned, and non-compliance could mean fines of up to €35 million or 7% of worldwide turnover.

If your business develops or uses AI, now is the time to act. Even if you’re not directly affected, this deadline signals a shift – AI regulation is here, and enforcement is starting. Are you ready?

Thoughts, comments or questions? Get in touch!

Gundo Haacke, Interim Legal Counsel & Owner of Haacke Commercial Legal Services.
Blog article first published on 29 January 2025.
Image credit: Created by the author using AI.

Disclaimer
The information provided in this blog article is for general informational purposes only. Nothing contained in this blog article constitutes legal advice, nor is it intended to be a substitute for legal counsel on any subject matter. The author disclaims any liability in connection with the use of this information.
