Introduction
In his latest book, AI: Unexplainable, Unpredictable, Uncontrollable, AI Safety expert Roman Yampolskiy highlights a core issue at the heart of the continued development of AI. The problem is not that we don’t know precisely how we’re going to control AI, but that we have yet to prove it is actually possible to control it.
Yampolskiy, PhD, a tenured professor at the University of Louisville, writes: “It is a standard practice in computer science to first show that a problem doesn’t belong to a class of unsolvable problems before investing resources into trying to solve it or deciding what approaches to try.” [1] The question of whether AI is ultimately controllable has not yet been shown to be solvable. And yet today’s tech giants push ahead with development at breakneck speed all the same. Yampolskiy contends that this lax approach could have existential consequences.
What is AI Safety?
AI Safety is a bit of a catch-all term but can broadly be defined as the attempt to ensure that AI is deployed in ways that do not cause harm to humanity.
The subject has grown in prominence as AI tools have become increasingly sophisticated in recent years, with some of the most nightmarish doom scenarios prophesied by the technology’s naysayers coming to look increasingly plausible.
The need to guardrail against the worst of AI’s possibilities led to the Biden administration’s AI Executive Order in October 2023, the UK’s AI Safety Summit a matter of days later, the EU AI Act, which was approved in March of this year, and the landmark agreement between the UK and US, signed earlier this month, to pool technical knowledge, information and talent on AI safety moving forwards.
The push and pull, as ever, is between how much regulation, if any, we should be putting on AI –– whether we are stifling its potential for innovation by doing so, or simply taking sensible, even vital precautions.
The sudden firing then re-hiring of CEO Sam Altman by the OpenAI board last year was supposedly prompted by concerns that he was prioritising innovation over AI Safety to the point of negligence. This theory is circumstantially backed up by the emergence of Anthropic, a rival AI company set up by brother and sister Dario and Daniela Amodei in 2021, after each of them left an executive position at OpenAI over concerns about the company’s handling of AI Safety.
Meanwhile, Altman, Dario Amodei and Google DeepMind chief executive Demis Hassabis were among the signatories on a one-sentence statement released last year by the Center for AI Safety, a nonprofit organisation [2]. The open letter, signed by more than 350 executives, researchers and engineers working in AI, read simply: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
The stakes couldn’t be higher.
Unexplainable
A much-vaunted notion is that of ‘explainable AI’, defined by IBM as “a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms.” [3]
Put more simply, as the name suggests, after AI performs a task for the user, it will then explain how it did it. Except, as the gap between our intelligence and the ‘superintelligence’ of AI continues to grow, we will soon reach a stage where we simply will not understand how the technology achieved its aims, whether or not it is programmed to tell us. As Albert Einstein said: “It would be possible to describe everything scientifically, but it would make no sense. It would be a description without meaning –– as if you described a Beethoven symphony as a variation of wave pressure.” [4]
Yampolskiy pushes the analogy further, saying: “It is likely easier for a scientist to explain quantum physics to a mentally challenged deaf and mute four-year-old raised by wolves than for superintelligence to explain some of its decisions to the smartest human.” [5]
He notes that it would potentially be possible for AI to only produce decisions that it knows are explainable at our level of understanding, but that doing so would require the AI to knowingly not make the best decision available to it. This, of course, would defeat the point of using such advanced technology in the first place; we are already quite capable of making the wrong decision on our own.
Unpredictable
Given that AI is not explainable, it is in turn necessarily unpredictable –– how can you predict the actions of something you don’t (and can’t) understand? As is already the case with black box AI, the term used to describe AI models that arrive at conclusions or decisions without providing any explanations as to how they were reached, we will be in the dark as to how AI achieved its aims and what it might do to achieve future ones. We may be able to set goals for AI and be accurate in our prediction that it will ultimately achieve them, but the crucial how will be lost, even to the technology’s own programmers.
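For a concrete sense of how limited today’s attempts to peer inside a black box are, consider what mainstream explainability tooling actually does. The sketch below is a minimal illustration only, using scikit-learn’s permutation importance on an invented dataset (the features, model and data are placeholders, not drawn from any system discussed here); it reports which inputs mattered overall, but says nothing about the reasoning behind any individual decision.

```python
# Illustrative only: a post-hoc explainability probe on a black-box model,
# using permutation importance from scikit-learn on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                 # four synthetic input features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # outcome driven by features 0 and 2

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a crude, human-readable answer to "which inputs mattered?"
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```

Even that coarse summary assumes we understand the input features in the first place –– a luxury that, on Yampolskiy’s account, will not survive the widening gap between human and machine intelligence.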
Yampolskiy comes to the conclusion that the “unpredictability of AI will forever make 100% safe AI an impossibility, but we can still strive for Safer AI because we are able to make some predictions about AIs we design.” [6]
Uncontrollable
AI advocates believe that we will be able to control it. They say that even Artificial General Intelligence (AGI) –– a system that can solve problems without manual intervention, similar to a human being –– will be imbued with our values and as such act in our best interests.
Even Nick Bostrom, a philosopher and professor at Oxford University, whose bestselling book Superintelligence: Paths, Dangers, Strategies showed him to be far from an optimist when it comes to this topic, has commented that, “Since the superintelligence or posthumans that will govern the post-singularity world will be created by us, or might even be us, it seems that we should be in a position to influence what values they will have. What their values are will then determine what the world will look like, since due to their advanced technology they will have a great ability to make the world conform to their values and desires.” [7]
Yampolskiy argues the other side: “As we develop intelligent systems that are less intelligent than we are, we can maintain control, but once such systems become more intelligent than we are, we lose that ability.” [8]
He suggests it is more likely that our values will adjust to those of the superintelligence than that its values will be shaped and constrained by our own. As the technology reveals itself to be of greater intelligence than any human who has ever lived, it is only rational that humanity will defer to its ideas, as it has to any number of great thinkers in the past.
The only way to control AI in any real sense, then, would be to put such limitations on it as to constrain its many potential benefits, to the point it ceases to be the revolutionary technology so fervently preached by its advocates. This is the great conundrum, the unsolvable debate: progress with vast, existential risk or safety at the expense of development? As Yampolskiy puts it, “unconstrained intelligence cannot be controlled and constrained intelligence cannot innovate.” [9] It’s one or the other; someone has to decide.
The deciders
“Regulating technology is about safety, but it is also about the kind of civilization we wish to create for ourselves. We can’t leave these big moral questions for AI companies to decide,” writes author of The Digital Republic: Taking Back Control of Technology, Jamie Susskind, in the Financial Times. [10]
And yet it increasingly feels like that’s precisely what we’ve done. We may read about the drama of Sam Altman’s firing and rehiring or of Elon Musk’s recent move to sue OpenAI and Altman himself, but these events play out like soap opera storylines in the headlines. Very few of us actually understand how far this technology has already been pushed, let alone where it’s going.
“The companies that make these things are not rushing to share that data,” says Gary Marcus, professor emeritus of psychology and neural science at New York University, speaking to The Atlantic in December. “And so it becomes this fog of war. We really have no idea what’s going on. And that just can’t be good.” [11]
Eliezer Yudkowsky, a research leader at the Machine Intelligence Research Institute and one of the founding thinkers in the field of AGI, has written that, “if we had 200 years to work on this problem and there was no penalty for failing at it, I would feel very relaxed about humanity’s probability of solving this eventually.” [12] But the precise problem is that the tech giants today are not taking their time. They don’t want safe AI in 200 years if they can have some form of AI today. The only thing that seems to matter is cornering the market. Such short-termism could have devastating consequences.
There is some hope that AI itself could provide the solution –– that it might use its superintelligence to work out how it could be controlled. But sharing that answer with humans would be self-defeating in the extreme. Unless superintelligence comes with a heavy streak of masochism baked in, this seems an unlikely scenario.
The unsolvable problem of AI Safety
Yampolskiy writes that “the burden of proof [to demonstrate AI is controllable] is on those who claim that the problem is solvable, and the current absence of such proof speaks loudly about the inherent dangers of the proposal to develop AGI.” [13]
An unexplainable, unpredictable, uncontrollable AI superintelligence will drastically re-shape the world order, perhaps even overhauling it. AI Safety is needed to stop it. While recent measures are plentiful, none addresses the AI control problem itself. Meanwhile, in Silicon Valley, development continues apace. It is easy to write off AI critics as prophets of doom or enemies of progress, but to proceed without proper safety provisions in place is to open a door we may not be able to close. As Yampolskiy concludes, “the chances of a misaligned AI are not small. In fact, in the absence of an effective safety program, that is the only outcome we will get.” [14]
More on AI
Combatting Cybersecurity Risks
The EU AI Act: What you Need to Know
The Ethical Minefield of Artificial Intelligence
Sources
[1] Yampolskiy, R. V. (2024). AI: Unexplainable, Unpredictable, Uncontrollable. Taylor & Francis Ltd.
[2] https://www.safe.ai/work/statement-on-ai-risk
[3] https://www.ibm.com/topics/explainable-ai
[5] Yampolskiy, R. V. (2024). AI: Unexplainable, Unpredictable, Uncontrollable. Taylor & Francis Ltd.
[6] Yampolskiy, R. V. (2024). AI: Unexplainable, Unpredictable, Uncontrollable. Taylor & Francis Ltd.
[7] https://mason.gmu.edu/~rhanson/vc.html
[8] Yampolskiy, R. V. (2024). AI: Unexplainable, Unpredictable, Uncontrollable. Taylor & Francis Ltd.
[9] Yampolskiy, R. V. (2024). AI: Unexplainable, Unpredictable, Uncontrollable. Taylor & Francis Ltd.
[10] https://www.ft.com/content/b259b126-225b-4158-90a0-abebfd0119fc
[11] https://www.theatlantic.com/newsletters/archive/2023/12/ai-tech-instability-gary-marcus/676286/
[12] Yampolskiy, R. V. (2024). AI: Unexplainable, Unpredictable, Uncontrollable. Taylor & Francis Ltd.
[13] Yampolskiy, R. V. (2024). AI: Unexplainable, Unpredictable, Uncontrollable. Taylor & Francis Ltd.
[14] Yampolskiy, R. V. (2024). AI: Unexplainable, Unpredictable, Uncontrollable. Taylor & Francis Ltd.
Introduction
The risk of cybercrime is on a steep upward trajectory. In North America it has risen by 61%, in Europe, the Middle East and Africa by 66%, in Latin America by 58%, and in Asia-Pacific by 74% [1]. According to the U.S. Cybersecurity and Infrastructure Security Agency, 47% of American adults have had their information exposed online by cybercriminals [2]. Meanwhile, in Ireland, cybercrime is the number one threat when it comes to financial crime, with fraud and tax evasion in joint second place [3].
A recent investigation by Mandiant revealed that governments, businesses and financial institutions are the three primary targets of cyber threats [4]. Meanwhile, the firm Cybersecurity Ventures projects that the global financial damage from cybercrime will reach $10.5 trillion by 2025 [5]. That figure would make it the world’s third largest economy, behind only the U.S. and China.
It’s vital that businesses start putting cybercrime front of mind. This article will dig into the reasons for cybercrime’s increased prevalence, the core steps businesses need to take to protect themselves, the role of AI, and the impact on small businesses.
Why is cybercrime on the rise?
Beyond the obvious factor of technological advancement, cybercrime is rising for three main reasons. First, the pandemic. In the US, nearly 470,000 phishing attacks were launched by hackers in the first three weeks of March 2020. About 9,000 of those were related to COVID-19 –– a 667% increase from February [6].
The pandemic forced remote working on a number of businesses. All of a sudden, staff were working from home, potentially over less secure connections. Equally, staff were more likely to fall for a fake email from their boss or IT department at home than they would be in an office together. As we will address later, innocent internal errors are a key cause of security breaches, and the home/hybrid working setup makes such instances more likely.
Second, Russia’s invasion of Ukraine. Targeting of users in NATO countries by Russian hackers increased over 300% in 2022 as compared with 2020 [7].
Third, China. According to US officials, the number of attacks from China has intensified greatly in recent years. “The People’s Republic of China represents the most critical threat [among cyber risks],” General Timothy Haugh, head of US Cyber Command, said while speaking at a Vanderbilt event earlier this year [8].
The cost of cybercrime
UnitedHealth, the hugely successful American healthcare conglomerate, suffered a ransomware attack in February. The company reportedly paid a $22m ransom to the BlackCat hacker group [9]. But the initial payment is just the start of the cost companies suffer in the wake of such breaches. UnitedHealth reported an $872m first-quarter hit from the attack –– and warned that number could potentially reach $1.6bn. That’s not to mention the reputational damage: customers lose trust, and things can quickly go into a tailspin.
Meanwhile, the IMF has warned that “the probability of a firm experiencing an extreme loss of $2.5bn as a result of a cyber incident” had now risen to “about once every 10 years”. [10]
In Ireland, Banking & Payments Federation Ireland (BPFI) stats show fraudsters stole nearly €85 million through frauds and scams in 2022, an increase of 8.8% on the previous year [11]. Meanwhile, the HSE attack of 2021 still lives long in the memory. It is the largest known attack against a health service computer system in history. It also demonstrates that the cost of a future breach may not solely be money, but human lives. Companies can’t afford to take any risks.
Cybercrime considerations
Despite the growing risks from cybercrime, a number of businesses have been slow to act. Brandon Wales, a top official at the U.S. Cybersecurity and Infrastructure Security Agency, has urged boards to increase company investment in cyber defences and to ensure management treat hacking threats as a core business risk. [12]
That comes from the top. “This needs to be driven at the board level,” Wales said, speaking at the Wall Street Journal’s CIO Network Summit. “You don’t want to start thinking about cybersecurity after your network has been brought down by a ransomware operator.”
There are two broad approaches to take: Cyber Risk Management and Cyber Resilience. Cyber risk management is the preventative aspect. It’s about monitoring risks and identifying threats before they happen. Cyber resilience is about equipping oneself with the tools to recover quickly in the wake of any cyber incident.
Within those pillars are more specific issues to address. Writing in Forbes, Rob Harrison, SVP of Products & Services at Sophos, breaks down the specific risks companies face into three categories: external risks, internal risks, and cloud risks. [13]
External risks involve an attempted breach from an outside source, whether cybercriminals, hacktivists or nation-state actors. The type of attack can vary from ransomware to distributed denial-of-service attacks.
To combat external risks requires regular monitoring of the threat landscape. Technology changes fast and cybercriminals are innovative. Organisations need to be proactive in ensuring their defences are up to date and that they have the appropriate countermeasures in place.
Internal risks involve someone with system access compromising security. That can come by way of an employee, partner or other third party. It can be intentional or entirely accidental. Sometimes someone will be deliberately stealing data –– they could be a victim of extortion or harbour ill feeling toward the company. Or it could be an entirely innocent mishandling of data with devastating consequences.
To combat internal risks requires having a sturdy and constantly evolving security system in place. But it is equally about building a culture. Training employees on the importance of cybersecurity and how to handle data securely is vital. Writing in Forbes, Justin Slaten, chief information officer at Venbrook Group, LLC, advises against relying on once-a-year training alone, arguing multiple sessions are needed. “Training sessions throughout the year will create a well-prepared and vigilant team capable of warding off savvy scammers,” he writes. [14]
Cloud-based services are something the majority of us make use of daily in our personal and professional lives. The cloud is deeply practical, but it almost became a trope for comedy shows to reference the fact that no one really knows how it works. Harbouring all one’s data in this liminal space comes with risk.
To combat cloud-based intrusions, companies should be using encryption, multifactor authentication and regular audits –– not to mention ensuring all data is backed up elsewhere. You don’t want the data stolen or deleted by a bad-faith actor to be the only records you have.
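As a loose illustration of the encrypt-and-back-up advice, the sketch below uses the widely used Python cryptography package (Fernet symmetric encryption). It is a minimal sketch only: the data and file names are placeholders, and the key handling is deliberately simplified –– real deployments need a proper key management service.

```python
# Illustrative sketch of "encrypt before you back up", using the Python
# `cryptography` package. The data and paths are placeholders; real
# deployments need proper key management, not a key sitting in memory.
from pathlib import Path
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: store in a secrets manager or HSM
plaintext = b"name,email\nJane Doe,jane@example.com\n"  # stand-in for real records

ciphertext = Fernet(key).encrypt(plaintext)
Path("records_backup.enc").write_bytes(ciphertext)  # only ciphertext leaves the machine

# Restore path: the backup is useless to a thief without the key.
restored = Fernet(key).decrypt(Path("records_backup.enc").read_bytes())
assert restored == plaintext
```

The point is less the specific library than the habit: any data that leaves your perimeter, whether to a cloud bucket or a backup drive, should be unreadable without a key you control.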
Decisions for businesses
Businesses face some key decisions as to how they’re going to address cybercrime. The first is whether they are going to handle their cybersecurity in-house or rely on a third-party vendor to do it for them. Both options have pros and cons –– one offers trade expertise, the other system control. Third-party cybersecurity firms are likely to offer better know-how as to how to protect your business but the option also introduces third-party risk.
Third-party risk, it should be noted, does not just come from cybersecurity firms you contract but from any third-party technology service your company makes use of. Slaten writes that, “As you embrace third-party technologies in a quest to offer better service, you also open the door to unseen and future threats with new updates and service changes.” [15]
Jason Hart, chief technology officer for EMEA at Rapid7, recommends businesses re-examine the role of the chief information security officer [16]. Often this role is awarded strictly for technological prowess, but as Hart acknowledges, it’s crucial now for them to share the attributes of a COO. They need to be able to think big picture, lead transformational change and spot which aspects of the business are most affected in a breach.
There’s no right or wrong answer when it comes to insourcing or outsourcing. Each company must decide what works best for them.
Human vs AI
Another choice businesses must make is how much to rely on AI in their cyber defence versus relying on human agents.
Harrison writes that, “Driven by the economics of ransomware, organizations will likely face human-driven rather than automated attacks. To defend against human ingenuity, you need human defenders.” [17]
Others suggest AI defences are needed. Sam King, chief executive of the security group Veracode, says: “You can now take a GenAI model and train it to automatically recommend fixes for insecure code, generate training materials for your security teams, and identify mitigation measures in the event of an identified threat, moving beyond just finding vulnerabilities.” [18]
Bartosz Skwarczek, Founder and President of the Supervisory Board of G2A Capital Group, defines AI’s key attributes when it comes to combatting cybercrime in real time as: (1) its ability to monitor and analyse behaviour patterns, detecting and acting on anomalies; (2) its ability to predict the outcomes of unusual behaviour; (3) its ability to implement preventative measures, such as preventing deletions, logging off suspicious users and notifying operators of the suspected malicious activity; and (4) its training and machine learning capabilities –– by training itself to “remember” previous incidents and actions, its ability to identify suspicious activity, predict outcomes and prevent criminal initiatives continuously improves. [19]
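To make the first of those attributes concrete, the sketch below shows a toy behaviour-based anomaly detector using scikit-learn’s IsolationForest. The activity numbers are invented for illustration and bear no relation to any product or vendor mentioned in this article.

```python
# Illustrative sketch of behaviour-based anomaly detection, using scikit-learn's
# IsolationForest. Each row of invented "activity" is:
# [hour of day, megabytes downloaded, failed login attempts].
import numpy as np
from sklearn.ensemble import IsolationForest

normal_activity = np.array([
    [9, 12, 0], [10, 8, 1], [14, 20, 0], [11, 15, 0], [16, 10, 1],
    [9, 11, 0], [13, 18, 0], [15, 9, 0], [10, 14, 1], [12, 16, 0],
])
model = IsolationForest(contamination=0.1, random_state=0).fit(normal_activity)

# A 3 a.m. session pulling 500 MB after 7 failed logins should stand out.
new_events = np.array([[10, 13, 0], [3, 500, 7]])
print(model.predict(new_events))  # 1 = looks normal, -1 = flagged as anomalous
```

Real systems work on far richer signals, but the principle is the same: learn what normal looks like, then flag what doesn’t fit.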
Another advantage of AI is that using it for mundane, time-consuming and repetitive tasks frees up the human workforce to think about the big picture. Meanwhile, with more than 3.5 million unfilled positions in the cybersecurity workforce in 2023, for many, using AI will be a necessity rather than a choice [20].
AI systems are currently far from perfect, though their advocates expect them to improve drastically in the coming years. Still, some combination of human and AI defence seems the most effective approach, both now and moving forwards.
AI-driven cyber security cannot “fully replace existing traditional methods,” warns Gang Wang, associate professor of computer science at the University of Illinois Grainger College of Engineering [21]. To be successful, he says, “different approaches complement each other to provide a more complete view of cyber threats and offer protections from different perspectives.”
Impact on small businesses
Small businesses are generally speaking less prepared to deal with a potential cyber attack –– they lack the resources to implement a strong defence system or to adequately train their personnel. According to a Grant Thornton International Business Report from 2023, one in three small-to-medium businesses in Ireland fell victim to cybercrime between May 2021 and April 2022 [22]. One in three were also reported to have paid out to cybercriminals, with €22,773 the average payout.
There is talk that the government plans to create a national anti-ransomware organisation and offer cash subsidies to small businesses to help fight cybersecurity threats. Michael Kavanagh, CEO of the Compliance Institute, told The Irish Times that, “The timelines for this are unclear but there’s no doubt that the move would be laudable and welcomed with open arms by many businesses that continue to be plagued by ransomware attacks.” [23]
For the majority of small businesses, such support cannot come soon enough.
Combatting cybersecurity risks
Cybercrime is on the rise. Technological advancements paired with geopolitical instability have contributed to an increasingly fractious security environment. The cost of a cyber attack –– financially and reputationally –– can devastate a business. As such, greater precautions need to be taken. Businesses must decide whether they’re going to invest in their in-house cybersecurity unit or offload the duty to a third party. Equally, they must find the balance between human and AI defence measures. Small businesses especially lack the resources to adequately defend themselves and will be reliant on potential government support. But businesses of all sizes should be taking steps to better defend themselves.
More on AI
The EU AI Act: What you Need to Know
The Ethical Minefield of Artificial Intelligence
Sources
[1] https://www.ft.com/partnercontent/google/situation-critical-fighting-back-against-cyber-threats.html
[3] https://www.irishtimes.com/special-reports/2024/03/29/cybercrime-a-major-threat-to-small-businesses/
[4] https://www.ft.com/partnercontent/google/situation-critical-fighting-back-against-cyber-threats.html
[6] https://blog.barracuda.com/2020/03/26/threat-spotlight-coronavirus-related-phishing/
[8] https://www.ft.com/content/bfe01131-1ae0-4df8-bdfe-3447def01053
[9] https://www.ft.com/content/bfe01131-1ae0-4df8-bdfe-3447def01053
[10] https://www.ft.com/content/bfe01131-1ae0-4df8-bdfe-3447def01053
[11] https://www.irishtimes.com/special-reports/2024/03/29/cybercrime-a-major-threat-to-small-businesses/
[18] https://www.ft.com/content/35d65b91-5072-40dc-861c-565d602e740e
[21] https://www.ft.com/content/35d65b91-5072-40dc-861c-565d602e740e
[22] https://www.irishtimes.com/special-reports/2024/03/29/cybercrime-a-major-threat-to-small-businesses/
[23] https://www.irishtimes.com/special-reports/2024/03/29/cybercrime-a-major-threat-to-small-businesses/
Introduction
The EU AI Act was endorsed by the European Parliament on Wednesday 13th March. It is expected to become law in April following the formality of final approval from member states.
This is a landmark moment. Last year the Biden administration signed an executive order requiring major AI companies to notify the government when developing a model that could pose serious risks, while Chinese regulators have also set out rules focused on generative AI. But the EU AI Act is a significant step up when it comes to the regulation of artificial intelligence and could serve as a blueprint for how global governance handles the technology moving forwards.
Dragos Tudorache, an MEP who helped draft the AI Act, said: “The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology and where it, the technology, helps us leverage new discoveries, economic growth, societal progress and unlock human potential” [1].
While by definition the act only covers EU territories, its implications stretch further. No company, not least the tech giants based across the Atlantic, is going to want to forgo access to Europe. As such, in order to work in the EU, they will need to comply with its regulations. “Anybody that intends to produce or use an AI tool will have to go through that rulebook,” said Guillaume Couneson, a partner at law firm Linklaters [2].
The European Parliament has a track record of making influential first moves in tech regulation, as evidenced by its General Data Protection Regulation (GDPR) and Digital Markets Act (DMA). The EU AI Act is likely to have an equally global impact.
“The act is enormously consequential, in terms of shaping how we think about AI regulation and setting a precedent,” says Rishi Bommasani, who researches the societal impact of AI at Stanford University in California [3].
As with all things artificial intelligence, though, there are a number of unknowns. Other governing bodies will be closely monitoring how the EU AI Act progresses. Couneson notes that “the EU approach will likely only be copied if it is shown to work” [4].
What are the laws?
The EU AI Act seeks to establish a definition of AI that is definitive yet broad enough to cover the diversity of AI’s current uses and any potential future developments. As such, drawing from the OECD’s definition, the act describes an AI system as: “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” [5]
The act plans to regulate AI according to a tiered system based on perceived risk. Systems that carry “unacceptable risk” are banned in their entirety. Such systems include those that use biometric data to infer sensitive information such as a person’s sexual orientation or gender identity.
Also outlawed are government-run social scoring systems that use AI to rank citizens based on their behaviour and trustworthiness, enabling Minority Report-esque predictive policing. Emotion recognition, which would give schools or workplaces the ability to monitor workers’ emotional states and activity by analysing facial tics and body posture, is prohibited, as is the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. There are exceptions: biometric identification systems can be used in special circumstances, such as in the prevention of a terror threat or in sexual exploitation and kidnapping cases.
For lower-risk systems such as generative AI – the term for systems that produce plausible text, image, video and audio from simple prompts, the most prominent example being ChatGPT – developers will be forced to tell users when they are interacting with AI-generated content, as well as to provide detailed summaries of the content used to train the model, which must adhere to EU copyright law.
It’s unclear whether the law can be applied retroactively to existing models –– for example, in the cases of alleged copyright infringement over which The New York Times is suing OpenAI and Getty Images is suing StabilityAI. A number of writers, musicians and artists have also raised concerns that their work was used to train models without their consent or financial compensation.
Moving forwards, open-source models, which are freely available to the public, unlike “closed” models such as OpenAI’s GPT-4, will be exempt from the copyright requirement. This approach of encouraging open-source AI differs from US strategy, according to Bommasani. He suggests that “the EU’s line of reasoning is that open source is going to be vital to getting the EU to compete with the US and China” [6].
People, companies or public bodies that issue deepfakes will need to disclose whether the content has been artificially generated or manipulated. If it is done for “evidently” artistic, creative or satirical work, it will still need to be flagged, but in an “appropriate manner that does not hamper the display or enjoyment of the work”.
High-risk AI systems like those used in critical infrastructure or medical devices will face more regulations, requiring those systems to “assess and reduce risks,” be transparent about data usage and ensure human oversight.
Fines will range from €7.5m or 1.5% of a company’s total worldwide turnover – whichever is higher – for giving incorrect information to regulators, to €15m or 3% of worldwide turnover for breaching certain provisions of the act, such as transparency obligations, to €35m, or 7% of turnover, for deploying or developing banned AI tools. More proportionate fines will be used for smaller companies and startups.
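Because each tier applies whichever figure is higher, the fixed amounts matter mainly for smaller firms; for large companies the percentage dominates. A quick worked example, using a hypothetical €4bn worldwide turnover:

```python
# Worked example of the tiered fines described above. The €4bn turnover is
# hypothetical; each tier applies whichever amount is higher, as described.
def ai_act_fine(worldwide_turnover_eur: float, fixed_eur: float, pct: float) -> float:
    return max(fixed_eur, pct * worldwide_turnover_eur)

turnover = 4_000_000_000  # hypothetical: €4bn

print(ai_act_fine(turnover, 7_500_000, 0.015))  # incorrect information to regulators: €60m
print(ai_act_fine(turnover, 15_000_000, 0.03))  # breaching provisions such as transparency: €120m
print(ai_act_fine(turnover, 35_000_000, 0.07))  # deploying or developing banned AI tools: €280m
```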
The reaction: The (tempered) positives
“Europe is now a global standard-setter in AI,” wrote Thierry Breton, the European commissioner for internal market, on X (formerly Twitter), leading praise for the bill [7].
The lobby group Business Europe also acknowledged its historic resonance, with director general Markus J. Beyrer describing it as a “pivotal moment for AI development in Europe” [8].
“We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency,” said Italian MEP and Internal Market Committee co-rapporteur Brando Benifei [9].
However, many who generally approve of the bill – or even euphorically celebrated it – also tempered their praise with reservations, mainly regarding whether it can be effectively put into practice.
Dragos Tudorache, MEP, demonstrated both sides when he said, “The rules we have passed in this mandate to govern the digital domain – not just the AI Act – are truly historical, pioneering. But making them all work in harmony with the desired effect and turning Europe into the digital powerhouse of the future will be the test of our lifetime” [10].
This is the prevailing sentiment: actualising these ideas will be difficult. In a similar vein to Tudorache, Business Europe’s Markus J. Beyrer, having called the bill pivotal, also noted that: “The need for extensive secondary legislation and guidelines raises significant questions about legal certainty and law’s interpretation in practice, which are crucial for investment decisions” [11].
Jenia Jitsev, an AI researcher at the Jülich Supercomputing Centre in Germany and co-founder of LAION (Large-scale Artificial Intelligence Open Network), a non-profit organisation aimed at democratising machine learning, showed even greater scepticism. “The demand to be transparent is very important,” they said. “But there was little thought spent on how these procedures have to be executed” [12].
The reaction: The negative
Those above considered the legislation to contain good ideas that would be difficult to implement. Others consider the ideas themselves to be wrong. The bill’s leading critics tend to fall into one of two camps: (1) those who think the bill regulates too much, and (2) those who think it regulates too little.
Those who think the bill is regulating too much are of the opinion that applying limits to AI development does nothing but quash innovation, slowing our progress and potentially denying the benefits truly powerful AI could bring.
Cecilia Bonefeld-Dahl, director-general for DigitalEurope, which represents the continent’s technology sector, said: “We have a deal, but at what cost? We fully supported a risk-based approach based on the uses of AI, not the technology itself, but the last-minute attempt to regulate foundation models has turned this on its head.
“The new requirements – on top of other sweeping new laws like the Data Act – will take a lot of resources for companies to comply with, resources that will be spent on lawyers instead of hiring AI engineers” [13].
In general, the major tech players are not enamoured with the idea of regulation. This is hardly surprising given that it will serve to limit their potential profits. “It is critical we don’t lose sight of AI’s huge potential to foster European innovation and enable competition, and openness is key here,” said Meta’s head of EU affairs [14].
Last year, OpenAI chief executive Sam Altman caused a minor stir when he suggested the company might pull out of Europe if it could not comply with the AI Act. He later backtracked on the statement, which was likely made as a way of applying pressure on regulators [15].
Anand Sanwal, chief executive of the New York-based data company CB Insights, wrote that the EU now had more AI regulations than meaningful AI companies. “So a heartfelt congrats to the EU on their landmark AI legislation and continued efforts to remain a nothing-burger market for technology innovation. Bravo!” [16]
After the preliminary bill passed in December, Innovation Editor at the Financial Times John Thornhill wrote that, “The conjugation of modern technology tends to go: the US innovates, China emulates and Europe regulates. That certainly appears to be the case with artificial intelligence” [17].
French President Emmanuel Macron seems to agree. “We can decide to regulate much faster and much stronger than our major competitors,” the French leader said in December. “But we will regulate things that we will no longer produce or invent. This is never a good idea” [18].
Macron’s scepticism was to be expected. As talks reached the final stretch last year, the French and German governments both tried to water the bill down, pushing back against some of the strictest ideas for regulating generative AI and arguing that the rules would hurt European start-ups such as France’s Mistral AI and Germany’s Aleph Alpha [19].
This move was heavily criticised by those who feel that the bill regulates too little. Civil-society groups such as Corporate Europe Observatory raised concerns that European companies and Big Tech were overly influential in shaping the final text [20].
“This one-sided influence meant that ‘general-purpose AI’ was largely exempted from the rules and only required to comply with a few transparency obligations,” watchdogs including the observatory and LobbyControl wrote in a statement, referring to AI systems capable of performing a wider range of tasks [21].
After it was announced that Mistral had partnered with Microsoft, legislators raised further concerns. Kai Zenner, a parliamentary assistant who played a key role in drafting the Act and is now an adviser to the United Nations on AI policy, wrote that the move was strategically smart and “maybe even necessary” for the French start-up, but said “the EU legislator got played again” [22].
Digital-rights group Access Now said the final text of the legislation was full of loopholes and failed to adequately protect people from some of the most dangerous uses of AI [23].
Kilian Vieth-Ditlmann, deputy head of policy at German non-profit organisation Algorithmwatch, which campaigns for responsible AI use, agreed. “We fear that the exemptions for national security in the AI Act provide member states with a carte blanche to bypass crucial AI regulations and create a high risk of abuse,” she said [24].
Next steps: For business
In the wake of the act, PwC recommends all businesses that deal with AI take the following steps.
First, create an AI exposure register that allows the company to assess its exposure to all AI-related risks. Second, risk-assess each of the use cases identified in that register in line with the EU AI Act Risk Assessment Framework, so as to mitigate any potential risks and breaches. Third, establish appropriate AI governance structures to manage the risk of AI responsibly in line with the EU AI Act. Fourth, implement an upskilling programme and roll out awareness sessions to equip stakeholders for responsible use and oversight [25].
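As a loose illustration of the first step, the sketch below shows what a minimal AI exposure register entry might look like in code. The field names and risk tiers are assumptions modelled on the act’s risk categories described above, not a PwC or EU template.

```python
# Minimal sketch of an AI exposure register. Field names and risk tiers are
# assumptions loosely following the act's risk categories; this is not a
# PwC or EU template.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright under the act
    HIGH = "high"                  # e.g. critical infrastructure, medical devices
    LIMITED = "limited"            # transparency obligations (e.g. generative AI)
    MINIMAL = "minimal"

@dataclass
class AIUseCase:
    name: str
    business_owner: str
    vendor_or_in_house: str
    risk_tier: RiskTier
    mitigations: list[str] = field(default_factory=list)

register = [
    AIUseCase("CV screening assistant", "HR", "third-party vendor", RiskTier.HIGH,
              ["human review of every rejection", "annual bias audit"]),
    AIUseCase("Marketing copy generator", "Marketing", "in-house", RiskTier.LIMITED,
              ["AI-generated content labelled to users"]),
]

for use_case in register:
    print(f"{use_case.name}: {use_case.risk_tier.value} risk, "
          f"{len(use_case.mitigations)} mitigation(s) recorded")
```

Even a register this simple forces the questions the act cares about: who owns the use case, where the model comes from, and what mitigations are in place.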
Next steps: For Ireland
Writing in the Irish Independent, Jim Dowling, CEO of the AI firm Hopsworks and associate professor at KTH Royal Institute of Technology in Stockholm, says that the EU AI Act can be an opportunity for Ireland.
He particularly focuses on the “regulatory sandbox” provision included in the bill, under which national governments will be able to provide infrastructure support for their local AI companies as they build out their AI. Dowling argues this “regulatory sandbox” can “create a nurturing space for European and Irish companies to build globally competitive AI platforms before the wave of massively capitalised US-based companies, such as Sam Altman’s OpenAI, dominate the global AI market” [26].
He likens the opportunity to that taken by China in the 2010s, in which they legislated to protect their nascent cloud computing companies – Tencent, Alibaba, and ByteDance. The combination of regulations and large-scale investment “gave their local cloud computing companies time to grow from seeds into global cloud computing giants.”
He thinks the EU AI Act can do the same for Ireland, but that the time to act is now. Ireland “has a budget surplus and not many legacy companies to support. If we invest now in AI, the EU AI act will give our companies the time they need to create network effects within Europe, and then be ready to take on the world.”
The EU AI Act
Artificial intelligence is both a threat and an opportunity; no amount of legislation is going to change that. Overregulation threatens to stall progress and innovation. Under-regulation threatens our civil liberties, or perhaps our very existence. The EU AI Act is a landmark moment, but there will be many more landmark moments to come.
More on AI
The Ethical Minefield of Artificial Intelligence
Sources
[2] https://www.wsj.com/tech/ai/ai-act-passes-european-union-law-regulation-e04ec251
[3] https://www.nature.com/articles/d41586-024-00497-8
[4] https://theguardian.com/technology/2024/mar/14/what-will-eu-proposed-regulation-ai-mean-consumers
[5] https://www.lexology.com/library/detail.aspx?g=4ba63092-0cc5-447b-bae9-157afd91c11e
[6] https://www.nature.com/articles/d41586-024-00497-8
[10] https://www.irishtimes.com/business/2024/03/13/eu-parliament-embraces-new-ai-rules/
[12] https://www.nature.com/articles/d41586-024-00497-8
[13] https://www.ft.com/content/d5bec462-d948-4437-aab1-e6505031a303
[14] https://theguardian.com/technology/2024/mar/14/what-will-eu-proposed-regulation-ai-mean-consumers
[16] https://www.ft.com/content/a402cea8-a4a3-43bb-b01c-d84167d857d5
[17] https://www.ft.com/content/a402cea8-a4a3-43bb-b01c-d84167d857d5
[18] https://www.ft.com/content/2b18b3e7-5b92-4577-9c8e-6db2bdd016d8
[19] https://www.irishtimes.com/business/2024/03/13/eu-parliament-embraces-new-ai-rules/
[20] https://www.irishtimes.com/business/2024/03/13/eu-parliament-embraces-new-ai-rules/
[21] https://www.irishtimes.com/business/2024/03/13/eu-parliament-embraces-new-ai-rules/
[22] https://www.irishtimes.com/business/2024/03/13/eu-parliament-embraces-new-ai-rules/
[23] https://www.wsj.com/tech/ai/ai-act-passes-european-union-law-regulation-e04ec251
[24] https://theguardian.com/technology/2024/mar/14/what-will-eu-proposed-regulation-ai-mean-consumers
Introduction
As the world continues to evolve, so does the way we use technology to improve our lives and workplaces. New York City recently adopted final regulations on the use of AI in hiring and promotion processes, marking a significant step in addressing potential biases and ethical concerns surrounding the use of AI in the workplace. The question now is: will other countries follow suit and implement similar regulations?
As AI increasingly moves from automating drudge work to playing a more prominent role in decision-making, it’s vital that we understand the implications and potential risks. The good news is that some countries have already started to take action in this area.
Global progress on regulations
The European Union, for instance, unveiled its proposed AI regulations in April 2021. While these regulations are still in the proposal stage, they represent a comprehensive approach to governing AI use across various sectors, including hiring and promotions. The EU’s proposed rules are designed to ensure that AI systems are transparent, accountable, and respect fundamental rights.
Japan, another key player in AI development, established the AI Technology Strategy Council in 2016. The Council has since released a series of strategic guidelines that consider the ethical, legal, and social issues surrounding AI use. While these guidelines are not legally binding, they provide a framework for companies and the Japanese government to consider as they develop AI systems and technologies.
Ethical challenges
In contrast, countries like China and Russia have prioritised developing and deploying AI for economic and strategic gains, with less emphasis on ethical considerations. However, as AI becomes more integrated into hiring and promotion processes globally, it’s likely that these countries will also have to address the ethical challenges presented by AI.
So, what are the chances of the NYC regulations being successful? It largely depends on how well they are enforced and how willing companies are to adapt their practices. One of the keys to success will be educating employers about the benefits of ethical AI use and the potential risks of non-compliance.
Biases and discrimination
The impact of AI in hiring and promotion goes far beyond automating menial tasks. By leveraging AI’s ability to analyse vast amounts of data, we can make better, more informed decisions in these areas. However, this also raises the risk of perpetuating biases and discrimination.
As we’ve seen in recent years, AI algorithms can sometimes unintentionally reinforce existing biases due to the data they’re trained on. By implementing regulations like those in NYC, we can help ensure that AI is used responsibly and that it truly serves to benefit all members of society.
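One common way auditors quantify this kind of bias is the impact ratio: each group’s selection rate divided by that of the most-selected group, with values well below 1.0 treated as a red flag. The sketch below uses invented numbers purely for illustration.

```python
# Illustrative bias check with invented numbers: selection rates by group and
# the impact ratio (each group's rate divided by the highest group's rate).
hiring_outcomes = {
    # group: (candidates screened in by the AI tool, total candidates)
    "group_a": (120, 400),
    "group_b": (45, 300),
}

selection_rates = {g: selected / total for g, (selected, total) in hiring_outcomes.items()}
best_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / best_rate:.2f}")
# group_a: selection rate 0.30, impact ratio 1.00
# group_b: selection rate 0.15, impact ratio 0.50
```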
The key takeaway is that while the use of AI in hiring and promotion can be hugely beneficial, it’s essential to have regulations in place to ensure ethical practices. Now that New York City has taken this bold step, we’ll likely see more countries and cities follow in its footsteps.
Conclusion
In conclusion, the adoption of AI regulations in New York City is a significant move towards ensuring the responsible and ethical use of AI in hiring and promotion processes. As AI continues to play an increasingly important role in our lives, it’s crucial that governments and businesses alike prioritise transparency, accountability, and the protection of fundamental rights. By doing so, we can harness the power of AI to create a fairer, more inclusive society – and that’s something worth celebrating.
So, will other countries follow New York City’s lead? I believe they will, and it’s only a matter of time before AI regulations become a global norm. Let’s keep the conversation going, stay informed, and make the best decisions.