EU's AI Act Takes Effect: Companies Face Fines for 'Unacceptable Risk' AI Systems

Alexis Rowe

February 02, 2025 · 4 min read

The European Union's comprehensive AI regulatory framework, the AI Act, has reached its first compliance deadline: as of February 2, regulators can ban AI systems deemed to pose "unacceptable risk" or cause harm. Companies found to be using prohibited AI applications in the EU will face fines regardless of where they are headquartered.

The AI Act, which officially came into force on August 1, 2024, covers a wide range of use cases where AI interacts with individuals, from consumer applications to physical environments. The Act sorts AI systems into four risk levels: minimal risk, limited risk, high risk, and unacceptable risk. While minimal-risk AI systems, such as email spam filters, face no regulatory oversight, unacceptable-risk applications are prohibited entirely.
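To make the tiering concrete, here is a minimal sketch of the four categories as a Python enum; the class and the example mappings are illustrative, not drawn from the Act's text:

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from least to most regulated."""
    MINIMAL = 1       # e.g., email spam filters: no regulatory oversight
    LIMITED = 2       # lighter obligations, mainly transparency
    HIGH = 3          # strict compliance requirements
    UNACCEPTABLE = 4  # prohibited entirely

# Illustrative mappings based on examples cited in the article
examples = {
    "email spam filter": RiskTier.MINIMAL,
    "social scoring system": RiskTier.UNACCEPTABLE,
}
```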

Some of the prohibited AI activities include social scoring, manipulating individuals' decisions subliminally or deceptively, exploiting vulnerabilities, predicting criminal behavior based on appearance, and using biometrics to infer personal characteristics. Companies found to be using these AI applications in the EU will be subject to fines of up to €35 million (~$36 million), or 7% of their annual revenue from the prior fiscal year, whichever is greater.
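As a rough illustration of how the "whichever is greater" cap scales with company size, here is a short sketch using the figures above (the function name and the example revenue are ours, not from the Act):

```python
def max_fine_eur(prior_year_revenue_eur: float) -> float:
    """Illustrative cap on AI Act fines for prohibited practices:
    the greater of a flat EUR 35 million or 7% of prior-year revenue."""
    FLAT_CAP = 35_000_000   # EUR 35 million
    REVENUE_SHARE = 0.07    # 7% of the prior fiscal year's revenue
    return max(FLAT_CAP, REVENUE_SHARE * prior_year_revenue_eur)

# Example: a company with EUR 2 billion in prior-year revenue faces a
# cap of EUR 140 million, since 7% exceeds the EUR 35M floor.
print(max_fine_eur(2_000_000_000))  # 140000000.0
```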

According to Rob Sumroy, head of technology at the British law firm Slaughter and May, the fines won't kick in immediately. "Organizations are expected to be fully compliant by February 2, but ... the next big deadline that companies need to be aware of is in August," Sumroy said. "By then, we'll know who the competent authorities are, and the fines and enforcement provisions will take effect."

Last September, over 100 companies signed the EU AI Pact, a voluntary pledge to start applying the principles of the AI Act ahead of its entry into application. Signatories, including Amazon, Google, and OpenAI, committed to identifying AI systems likely to be categorized as high risk under the AI Act. However, some tech giants, such as Meta and Apple, skipped the Pact, and French AI startup Mistral, one of the AI Act's harshest critics, also opted not to sign.

Sumroy notes that most companies won't be engaging in prohibited practices anyway. The key concern for organizations is whether clear guidelines, standards, and codes of conduct will arrive in time to provide clarity on compliance. The working groups are meeting their deadlines on the code of conduct for developers, but it remains unclear how other laws already on the books will interact with the AI Act's provisions.

The European Commission is set to release additional guidelines in "early 2025" following a consultation with stakeholders in November, but these guidelines have yet to be published. Sumroy emphasizes that understanding how these laws fit together will be crucial for organizations, particularly around overlapping incident notification requirements.

The AI Act also carves out exceptions for certain systems, such as those used by law enforcement for targeted searches or to prevent imminent threats to life, and for systems that infer emotions in workplaces and schools where there is a medical or safety justification. These exemptions require authorization from the appropriate governing body, and the Act stresses that law enforcement cannot make decisions based solely on these systems' outputs.

As the EU's AI Act takes effect, companies operating in the EU must navigate a complex regulatory landscape to ensure compliance and avoid significant fines. With the next major deadline arriving in August, organizations will need to stay vigilant and adapt as the regulatory environment evolves.
