Introduction: Background to the EU AI Act
The EU has introduced a groundbreaking legal framework for artificial intelligence, directly applicable in all EU member states, and the clock is ticking for businesses across every sector.
Regulation (EU) 2024/1689, better known as the EU AI Act, is not just another piece of legislation; it is a paradigm shift designed to foster trustworthy, human-centric AI while safeguarding fundamental rights.
If your company is developing, deploying, or even just considering integrating AI into its operations, now is the time to sit up and take notice.
AI regulation for businesses in the EU
The future of AI in Europe will be regulated, and understanding these new rules, particularly those governing “high-risk” systems, is paramount to continued success and to avoiding significant penalties.
At its core, the AI Act employs a risk-based approach, categorising AI systems according to their potential to cause harm. While some AI applications are deemed to pose an “unacceptable risk” and are outright prohibited (think social scoring by governments or manipulative subliminal techniques), the vast majority fall into categories ranging from “minimal risk” to “limited risk” and, most critically for businesses, “high-risk”.
Understanding how your use of AI is classified under the AI Act is therefore key to determining your required level of compliance.
It is equally important to know your role within the AI life cycle, as the regulation places distinct responsibilities on different key players. Chief among these are two functions: providers and deployers.
A provider is any natural or legal person who develops an AI system or a general-purpose AI model, or who engages third parties to develop one on its behalf, and who places it on the market or puts it into service under its own name or trademark. This could, for instance, be a company building an AI-powered recruitment tool.
A deployer, on the other hand, is any natural or legal person using an AI system developed by a third party in a professional capacity. This could be a hospital using an AI diagnostic tool, a bank utilising an AI credit-scoring system, or an HR department implementing an AI recruitment platform. Note, however, that a deployer who significantly modifies the third-party AI system, or markets it under their own name, will be considered a provider.
EU AI Act compliance: a long-term opportunity
The AI Act is not merely a legal hurdle but an opportunity. Proactive compliance will not only shield your business from substantial fines (which can reach up to €35 million or 7% of global annual turnover for serious infringements) but also build consumer trust, enhance brand reputation and foster innovation within a clear, secure framework.
Companies must start by conducting an AI inventory, identifying all AI systems currently in use or planned for use, and assessing their risk classification. Developing comprehensive compliance plans, updating internal policies, investing in employee training on human oversight and risk management, and ensuring robust data governance are immediate, critical steps.
Moreover, it is crucial to understand that the AI Act does not operate in a vacuum. Businesses must also consider its interplay with existing and upcoming EU legislation, where applicable: notably the General Data Protection Regulation (GDPR), which governs the processing of personal data; the Digital Operational Resilience Act (DORA), which is vital for ensuring the cybersecurity and operational resilience of financial entities; the directive on measures for a high common level of cybersecurity across the Union (NIS2); and the Cyber Resilience Act (CRA), among others.
Conclusion: What are the next steps?
Whether your company is contemplating the in-house development of an AI system, engaging external developers to create a bespoke solution, or simply considering the adoption of an off-the-shelf AI application, compliance is no longer optional – it is an immediate necessity. Ignoring the regulations carries not only the risk of substantial financial penalties but also significant reputational damage in an increasingly AI-aware market.
This article was first published in the Times of Malta on 03/08/2025.