The EU’s AI Act officially came into force on 1 August 2024, making it the world’s first comprehensive regulation of AI technology.
To help you understand what this means for your business, we’ve combed through the documentation to bring you a high-level summary of what the regulations contain and what business leaders need to know.
1. The AI Act classifies AI according to its risk levels
The new regulations break down AI applications by the level of risk they pose to public safety, fundamental rights, or the environment. This risk is divided into four categories:
- Unacceptable risk. This refers to malicious AI or AI that seriously threatens citizens’ rights – for example, social scoring systems and AI systems that create facial recognition databases through untargeted scraping of facial images. All systems deemed to pose an unacceptable risk are prohibited under the AI Act.
- High risk. These systems face the most rigorous regulations because they pose a significant risk of harm to fundamental rights. Any AI system that profiles individuals is considered high-risk – for instance, systems that use AI to evaluate creditworthiness or to allocate work tasks based on personal traits.
- Limited risk. These are systems that carry a risk of misleading or manipulating users but are unlikely to do significant harm. Chatbots are one example; the regulations stipulate that users must be informed they are talking to AI, but the rules are less stringent because the potential consequences are limited.
- Minimal risk. This covers all other AI systems – for example, an AI-enabled spam filter. Minimal-risk systems will not require additional restrictions for deployment.
2. Changes to this classification are expected with the rise of generative AI
Although the AI Act is already in force, big changes could soon be underway thanks to the explosion of generative AI (GenAI) in the years since the legislation was first drafted.
In 2021, when much of the work on the regulations took place, most AI applications available within the EU single market fell under the “minimal risk” category and were therefore expected to go unregulated.
By contrast, GenAI is now the type of AI most widely used by organisations, and it has many potentially dangerous uses. This will not only make updates to the legislation necessary but could also expand the scope of the regulation itself.
3. There are special rules for General Purpose AI (GPAI) providers
The EU AI Act has specific rules for “general purpose” AI (or GPAI): any AI model or system with enough generality to perform a wide range of tasks for which it was not specifically designed. ChatGPT and Google’s Gemini are two examples.
Under the new legislation, all GPAI model providers must:
- Provide technical documentation, including the training and testing process for the model
- Give instructions for its use and how to integrate it into AI systems
- Comply with the Copyright Directive
- Publish a summary of the content used for training
Providers of free and open-license GPAI models need only comply with the last two points, unless their models present a systemic risk, in which case they must also:
- Conduct model evaluations
- Carry out adversarial testing to assess and mitigate systemic risks
- Track, document, and report serious incidents and corrective measures to the AI Office and any relevant national authorities
- Put cybersecurity protections in place
4. Like GDPR, the AI Act will apply even if your business is based outside the EU
There are many similarities between the AI Act and the EU’s General Data Protection Regulation (GDPR), which was adopted in 2016 and has applied since 2018. Both regulations were world firsts in protecting citizens from the potential ramifications of technological development.
There are also important regulatory similarities. Just as GDPR applies to any business that processes the personal data of people in the EU, regardless of where that business is established, the same is true of the AI Act.
Don’t risk non-compliance just because you’re based outside the EU; the penalties could be huge.
5. Penalties for non-compliance range up to €35 million
For companies that fall foul of the new regulations, fines can be severe. Any company that breaks the rules around prohibited AI could face fines of up to €35 million, or up to 7% of its total worldwide annual turnover for the preceding financial
year – whichever is higher.
Non-compliance with other areas of the AI Act can trigger fines of up to €15 million, or 3% of worldwide annual turnover, whichever is higher.
There are also fines for supplying incorrect, incomplete, or misleading information to the authorities enforcing the regulations: up to €7.5 million or 1% of worldwide annual turnover, again whichever is higher.
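To make the arithmetic concrete, here is a minimal sketch in Python of the “whichever is higher” rule, using the thresholds quoted above. The tier names and function are our own illustration, not terms from the Act, and actual fines are set case by case by the enforcing authorities.

```python
# Illustrative only: the "whichever is higher" fine arithmetic described
# above, using the thresholds quoted in this article.

# (fixed cap in EUR, share of worldwide annual turnover) per violation tier
FINE_TIERS = {
    "prohibited_ai": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(violation: str, turnover_eur: float) -> float:
    """Theoretical maximum fine for a given worldwide annual turnover."""
    fixed_cap, turnover_share = FINE_TIERS[violation]
    # The Act applies whichever of the two amounts is higher.
    return max(fixed_cap, turnover_share * turnover_eur)

# A company with EUR 1bn turnover breaching the prohibited-AI rules faces
# up to max(EUR 35m, 7% of 1bn) = EUR 70m.
print(f"{max_fine('prohibited_ai', 1_000_000_000):,.0f}")  # 70,000,000
```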
6. Developers of AI systems face many obligations – but they’re not the only ones
In the terminology of the AI Act, AI “providers” are the developers who make the systems. “Deployers” (called “users” in earlier drafts) are the natural or legal persons that use an AI system in a professional capacity – for instance, a business using it as part of its service – not the end-users interacting with the technology.
Many of the AI Act’s provisions apply to providers putting high-risk systems onto the market, but as a deploying organisation, you’re still responsible for proper human oversight, carrying out due diligence, and keeping your clients informed.
7. Deadlines for compliance begin in 2025…
So, how long do you have before your AI activities must be compliant with EU law? Here’s a brief timeline:
- Prohibited systems must be offline by February 2025
- GPAI must be compliant by August 2025
- High-risk AI systems under Annex III (including biometrics, safety components for critical digital infrastructure, and recruitment) must be compliant by August 2026
- High-risk AI systems under Annex I must be compliant by August 2027
8. … But AI systems launched before the deadlines get a two-year grace period
This timeline applies differently depending on when you launch your AI system. After each deadline, any new AI system it covers must be compliant at launch – but if you launch your system before the relevant deadline, you get two extra years to bring it into compliance.
For example, let’s say you’ve developed an AI model for recruitment screening – considered “high risk” under Annex III – and you launch it in June 2026. You have until June 2028 to make it fully compliant with the law, whereas if you launched it two months later, it would need to be compliant on launch day.
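To make the grace-period arithmetic concrete, here is a minimal Python sketch. The deadline dates mirror the timeline in point 7 (illustratively set to the first of the month; check the official text of the Act for exact dates), and the category names and function are purely our own illustration.

```python
from datetime import date

# Deadlines as given in the timeline above (point 7), approximated to
# the first of the month for illustration.
DEADLINES = {
    "gpai": date(2025, 8, 1),
    "high_risk_annex_iii": date(2026, 8, 1),
    "high_risk_annex_i": date(2027, 8, 1),
}

def compliance_due(category: str, launch: date) -> date:
    """When a system launched on `launch` must be fully compliant."""
    deadline = DEADLINES[category]
    if launch < deadline:
        # Launched before the deadline: two extra years from launch.
        return launch.replace(year=launch.year + 2)
    # Launched on or after the deadline: compliant from day one.
    return launch

# The article's example: an Annex III recruitment-screening system
# launched in June 2026 has until June 2028...
print(compliance_due("high_risk_annex_iii", date(2026, 6, 1)))  # 2028-06-01
# ...while one launched two months later must comply at launch.
print(compliance_due("high_risk_annex_iii", date(2026, 8, 1)))  # 2026-08-01
```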
This could lead to a rush of half-baked AI products launching in the next
few years. Don’t be one of them: build compliance into your AI applications from the very beginning.
To find out how to build AI responsibly and with maximum business impact, read our report.
Discover why AI is nothing without you
At Sopra Steria, we believe AI’s true potential is unlocked with human collaboration. By blending human creativity with advanced AI technology, we empower people to address society’s most pressing challenges, from combating disease to mitigating climate change, while helping our clients achieve their digital transformation goals.
We emphasise critical thinking and education to ensure AI upholds core human values like respect and fairness, minimising ethical risks. Together, we’ll create a future where AI inspires positive impact and enhances human brilliance. That’s why we believe that AI is nothing without you!