The EU AI Act: What You Need to Know


The EU AI Act was endorsed by the European Parliament on Wednesday 13 March. It is expected to become law in April, following the formality of final approval by EU member states.

This is a landmark moment. Last year the Biden administration signed an executive order requiring major AI companies to notify the government when developing a model that could pose serious risks, while Chinese regulators have also set out rules focused on generative AI. But the EU AI Act is a significant step up when it comes to the regulation of artificial intelligence and could serve as a blueprint for how global governance handles the technology moving forwards.

Dragos Tudorache, an MEP who helped draft the AI Act, said: “The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology and where it, the technology, helps us leverage new discoveries, economic growth, societal progress and unlock human potential” [1].

While the act by definition covers only EU territories, its implications stretch further. No company, least of all the tech giants based across the Atlantic, will want to forgo access to Europe, and working in the EU will mean complying with its regulations. “Anybody that intends to produce or use an AI tool will have to go through that rulebook,” said Guillaume Couneson, a partner at law firm Linklaters [2].

The EU has a track record of making influential first moves in tech regulation, as evidenced by the General Data Protection Regulation (GDPR) and the Digital Markets Act (DMA). The EU AI Act is likely to have an equally global impact.

“The act is enormously consequential, in terms of shaping how we think about AI regulation and setting a precedent,” says Rishi Bommasani, who researches the societal impact of AI at Stanford University in California [3].

As with all things artificial intelligence, though, there are a number of unknowns. Other governing bodies will be closely monitoring how the EU AI Act progresses. Couneson notes that “the EU approach will likely only be copied if it is shown to work” [4].

What are the laws?

The EU AI Act seeks to establish a definition of AI that is definitive yet broad enough to cover the diversity of AI’s current uses and any potential future developments. Drawing on the OECD’s definition, the act describes an AI system as: “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” [5]

The act plans to regulate AI according to a tiered system based on perceived risk. Systems that carry “unacceptable risk” are banned in their entirety. Such systems include those that use biometric data to infer sensitive information such as a person’s sexual orientation or gender identity.

Also outlawed are government-run social scoring systems that use AI to rank citizens based on their behaviour and trustworthiness, enabling Minority Report-esque predictive policing. Emotion recognition, which would allow schools or workplaces to monitor students’ and workers’ emotional states by tracking facial tics and body posture, is prohibited, as is the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. There are exceptions: biometric identification systems can be used in special circumstances, such as in the prevention of a terror threat or in sexual exploitation and kidnapping cases.

For lower-risk systems such as generative AI – the term for systems that produce plausible text, images, video and audio from simple prompts, the most prominent example being ChatGPT – developers will be required to tell users when they are interacting with AI-generated content and to provide detailed summaries of the data used to train the model, which must adhere to EU copyright law.

It is unclear whether the law can be applied retroactively to existing models – for example, in the alleged copyright infringement cases in which The New York Times is suing OpenAI and Getty Images is suing Stability AI. A number of writers, musicians and artists have also raised concerns that their work was used to train models without their consent or financial compensation.

Moving forwards, open-source models, which are freely available to the public, unlike “closed” models such as OpenAI’s GPT-4, will be exempt from the copyright requirement. This approach of encouraging open-source AI differs from US strategy, according to Bommasani. He suggests that “the EU’s line of reasoning is that open source is going to be vital to getting the EU to compete with the US and China” [6].

People, companies or public bodies that issue deepfakes will need to disclose that the content has been artificially generated or manipulated. Content produced for “evidently” artistic, creative or satirical work will still need to be flagged, but in an “appropriate manner that does not hamper the display or enjoyment of the work”.

High-risk AI systems like those used in critical infrastructure or medical devices will face more regulations, requiring those systems to “assess and reduce risks,” be transparent about data usage and ensure human oversight.

Fines will range from €7.5m or 1.5% of a company’s total worldwide turnover – whichever is higher – for giving incorrect information to regulators, to €15m or 3% of worldwide turnover for breaching certain provisions of the act, such as transparency obligations, to €35m, or 7% of turnover, for deploying or developing banned AI tools. More proportionate fines will be used for smaller companies and startups.
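The fines follow a simple “whichever is higher” rule, so the effective penalty scales with company size. A minimal sketch of the arithmetic (the function name and argument layout are illustrative, not part of the act’s text):

```python
def applicable_fine(fixed_eur: float, turnover_pct: float,
                    worldwide_turnover_eur: float) -> float:
    """Return the applicable fine for a tier: the fixed amount or the
    percentage of worldwide turnover, whichever is higher."""
    return max(fixed_eur, turnover_pct * worldwide_turnover_eur)

# A company with €2bn worldwide turnover breaching a transparency obligation
# (the €15m / 3% tier): 3% of €2bn is €60m, which exceeds the €15m floor.
print(applicable_fine(15_000_000, 0.03, 2_000_000_000))  # 60000000.0
```

For a small company the fixed amount dominates instead: at €100m turnover, 1.5% is only €1.5m, so the €7.5m floor applies.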

The reaction: The (tempered) positives

“Europe is now a global standard-setter in AI,” wrote Thierry Breton, the European commissioner for internal market, on X (formerly Twitter), leading praise for the bill [7].

The lobby group BusinessEurope also acknowledged its historic resonance, with director general Markus J. Beyrer describing it as a “pivotal moment for AI development in Europe” [8].

“We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency,” said Italian MEP and Internal Market Committee co-rapporteur Brando Benifei [9].

However, many who generally approve of the bill – or even euphorically celebrated it – also tempered their praise with reservations, mainly regarding whether it can be effectively put into practice.

Dragos Tudorache, MEP, demonstrated both sides when he said, “The rules we have passed in this mandate to govern the digital domain – not just the AI Act – are truly historical, pioneering. But making them all work in harmony with the desired effect and turning Europe into the digital powerhouse of the future will be the test of our lifetime” [10].

This is the prevailing sentiment: actualising these ideas will be difficult. In a similar vein to Tudorache, having called the bill pivotal, BusinessEurope’s Markus J. Beyrer also noted that: “The need for extensive secondary legislation and guidelines raises significant questions about legal certainty and law’s interpretation in practice, which are crucial for investment decisions” [11].

Jenia Jitsev, an AI researcher at the Jülich Supercomputing Centre in Germany and co-founder of LAION (Large-scale Artificial Intelligence Open Network), a non-profit organisation aimed at democratising machine learning, showed even greater scepticism. “The demand to be transparent is very important,” they said. “But there was little thought spent on how these procedures have to be executed” [12].

The reaction: The negative

Those above considered the legislation to contain good ideas that would be difficult to implement. Others consider the ideas themselves to be wrong. The bill’s leading critics tend to fall into one of two camps: (1) those who think the bill regulates too much, and (2) those who think it regulates too little.

Those who think the bill is regulating too much are of the opinion that applying limits to AI development does nothing but quash innovation, slowing our progress and potentially denying the benefits truly powerful AI could bring.

Cecilia Bonefeld-Dahl, director-general for DigitalEurope, which represents the continent’s technology sector, said: “We have a deal, but at what cost? We fully supported a risk-based approach based on the uses of AI, not the technology itself, but the last-minute attempt to regulate foundation models has turned this on its head.

“The new requirements – on top of other sweeping new laws like the Data Act – will take a lot of resources for companies to comply with, resources that will be spent on lawyers instead of hiring AI engineers” [13].

In general, the major tech players are not enamoured with the idea of regulation. This is hardly surprising given that it will serve to limit their potential profits. “It is critical we don’t lose sight of AI’s huge potential to foster European innovation and enable competition, and openness is key here,” said Meta’s head of EU affairs [14].

Last year OpenAI chief executive Sam Altman caused a minor stir when he suggested the company might pull out of Europe if it could not comply with the AI Act. He later backtracked on the statement, which was likely made as a way of applying pressure on regulators [15].

Anand Sanwal, chief executive of the New York-based data company CB Insights, wrote that the EU now had more AI regulations than meaningful AI companies. “So a heartfelt congrats to the EU on their landmark AI legislation and continued efforts to remain a nothing-burger market for technology innovation. Bravo!” [16]

After the preliminary bill passed in December, John Thornhill, innovation editor at the Financial Times, wrote: “The conjugation of modern technology tends to go: the US innovates, China emulates and Europe regulates. That certainly appears to be the case with artificial intelligence” [17].

French President Emmanuel Macron seems to agree. “We can decide to regulate much faster and much stronger than our major competitors,” the French leader said in December. “But we will regulate things that we will no longer produce or invent. This is never a good idea” [18].

Macron’s scepticism was to be expected. As talks reached the final stretch last year, the French and German governments both tried to water the bill down, pushing back against some of the strictest ideas for regulating generative AI and arguing that the rules would hurt European start-ups such as France’s Mistral AI and Germany’s Aleph Alpha [19].

This move was heavily criticised by those who feel that the bill regulates too little. Civil-society groups such as Corporate Europe Observatory raised concerns that European companies and Big Tech were overly influential in shaping the final text [20].

“This one-sided influence meant that ‘general-purpose AI’ was largely exempted from the rules and only required to comply with a few transparency obligations,” watchdogs including the observatory and LobbyControl wrote in a statement, referring to AI systems capable of performing a wider range of tasks [21].

After it was announced that Mistral had partnered with Microsoft, legislators raised further concerns. Kai Zenner, a parliamentary assistant who played a key role in drafting the Act and is now an adviser to the United Nations on AI policy, wrote that the move was strategically smart and “maybe even necessary” for the French start-up, but said “the EU legislator got played again” [22].

Digital-rights group Access Now said the final text of the legislation was full of loopholes and failed to adequately protect people from some of the most dangerous uses of AI [23].

Kilian Vieth-Ditlmann, deputy head of policy at German non-profit organisation Algorithmwatch, which campaigns for responsible AI use, agreed. “We fear that the exemptions for national security in the AI Act provide member states with a carte blanche to bypass crucial AI regulations and create a high risk of abuse,” she said [24].

Next steps: For business

In the wake of the act, PwC recommends that all businesses dealing with AI take the following steps.

First, create an AI exposure register that allows the company to assess its exposure to all AI-related risks. Second, risk-assess each use case identified in the register in line with the EU AI Act risk assessment framework, so as to mitigate potential risks and breaches. Third, establish appropriate AI governance structures to manage the risks of AI responsibly in line with the EU AI Act. Fourth, implement an upskilling programme and roll out awareness sessions to equip stakeholders for responsible use and oversight [25].
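As a rough illustration of what the first two steps might look like in practice, the sketch below maps each AI use case to a risk tier and records mitigations. The class names and tier labels are hypothetical, loosely mirroring the act’s risk-based structure rather than anything prescribed by PwC or the act itself:

```python
from dataclasses import dataclass, field

# Tier labels loosely mirroring the act's risk-based structure (illustrative only)
RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

@dataclass
class UseCase:
    name: str
    description: str
    risk_tier: str
    mitigations: list = field(default_factory=list)

@dataclass
class AIExposureRegister:
    entries: list = field(default_factory=list)

    def add(self, use_case: UseCase) -> None:
        # Reject tiers outside the assumed classification scheme
        if use_case.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {use_case.risk_tier}")
        self.entries.append(use_case)

    def by_tier(self, tier: str) -> list:
        return [e for e in self.entries if e.risk_tier == tier]

register = AIExposureRegister()
register.add(UseCase("support-chatbot", "customer-facing generative chat", "limited",
                     ["disclose AI interaction to users"]))
print(len(register.by_tier("limited")))  # 1
```

Grouping entries by tier makes the follow-on work (governance, upskilling) easy to scope, since the obligations in the act differ by tier.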

Next steps: For Ireland

Writing in the Irish Independent, Jim Dowling, CEO of the AI firm Hopsworks and associate professor at KTH Royal Institute of Technology in Stockholm, says that the EU AI Act can be an opportunity for Ireland.

He particularly focuses on the “regulatory sandbox” provision included in the bill, under which national governments can provide infrastructure and state support to help their local AI companies build out their AI. Dowling argues this “regulatory sandbox” can “create a nurturing space for European and Irish companies to build globally competitive AI platforms before the wave of massively capitalised US-based companies, such as Sam Altman’s OpenAI, dominate the global AI market” [26].

He likens the opportunity to the one China took in the 2010s, when it legislated to protect its nascent cloud computing companies – Tencent, Alibaba and ByteDance. The combination of regulation and large-scale investment “gave their local cloud computing companies time to grow from seeds into global cloud computing giants.”

He thinks the EU AI Act can do the same for Ireland, but that the time to act is now. Ireland “has a budget surplus and not many legacy companies to support. If we invest now in AI, the EU AI act will give our companies the time they need to create network effects within Europe, and then be ready to take on the world.”

The EU AI Act

Artificial intelligence is both a threat and an opportunity; no amount of legislation is going to change that. Overregulation threatens to stall progress and innovation. Under-regulation threatens our civil liberties, or perhaps our very existence. The EU AI Act is a landmark moment, but there will be many more landmark moments to come.

More on AI

The Ethical Minefield of Artificial Intelligence

AI and the Future of Work





