The Ethical Minefield of Artificial Intelligence

Introduction

In May 2023, the “Godfather of AI”, Dr. Geoffrey Hinton, announced his resignation from Google, citing concerns over the potential ramifications of advancing AI technologies [1]. Six months later, OpenAI’s board ousted CEO Sam Altman over concerns that he was placing technological advancement ahead of human safety and ethics, only to bring him back four days later [2]. Then, in February 2024, Elon Musk sued both OpenAI and Altman, claiming they had abandoned the startup’s original mission: to develop artificial intelligence for the benefit of humanity, not for profit [3].

Put simply, implementing artificial intelligence in an ethical manner currently poses more questions than answers, and disagreements over the direction of travel are growing increasingly heated. Data privacy, discrimination, deepfake technology, job losses and environmental impact are all problems in need of solutions.

Data privacy

AI is built on data. But questions remain over how the data feeding its insatiable appetite is being stored, used and accessed. Sensitive information such as people’s locations, sexual preferences, health records and habits is hoarded somewhere in the internet’s great database – we traded privacy for convenience some time ago. But who can access this data? Who is it being disseminated to? Is it vulnerable to breaches by hackers, or to unwarranted surveillance by governmental or corporate entities? It’s not clear. Worse still, “black box” AI is, by definition, not understood even by its creators [4]. How it chooses to manipulate our data is and will remain an unknown; all we can do is cross our fingers and hope it’s working in our interests.

Of course, data privacy laws exist. But they were written prior to AI’s emergence and as such fall well short of managing its capabilities. The average citizen knows little of what information they have unknowingly given away over the years, much less how it is being used. This will only worsen.

Discrimination

Many fear that AI could end up perpetuating inequality and discrimination. As noted, AI models are built on data. Any data fed into the system – whether it reflects existing biases or underrepresents certain groups – will be learned from and built upon. Examples of biased algorithmic decision-making have already been reported in healthcare, hiring, and other settings [5]. For example, a recruiting tool at Amazon was found to prefer male candidates for jobs requiring technical skills [6].

Algorithmic bias is complicated. Would implementing an algorithm that displays discriminatory bias be acceptable if its level of bias appears lower than that displayed by society as a whole, for example? An industry with a record of giving just 30% of jobs to women would, on the numbers, be improved by an algorithm that gave 35%. And yet something about such a concession feels troubling. Perhaps it is the optimist’s view that individual and even institutional biases can be rooted out, whereas algorithmic ones cannot. To sign off on the problem would be to give it legitimacy.
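
To make the arithmetic concrete, here is a minimal sketch in Python of how such a comparison might be quantified as selection rates and a demographic-parity gap. The figures are the hypothetical ones from the paragraph above, not real audit data:

```python
# A minimal sketch of the arithmetic behind the 30% vs 35% example.
# All numbers are hypothetical and purely illustrative.

def selection_rate(decisions: list[int]) -> float:
    """Fraction of candidates receiving a positive outcome (1 = hired)."""
    return sum(decisions) / len(decisions)

# Hypothetical hiring outcomes per 100 candidates: 1 = hired, 0 = rejected.
women_industry = [1] * 30 + [0] * 70    # 30% of women hired historically
women_algorithm = [1] * 35 + [0] * 65   # 35% hired under the algorithm
men = [1] * 50 + [0] * 50               # assume 50% of men hired throughout

for label, women in (("industry", women_industry), ("algorithm", women_algorithm)):
    gap = selection_rate(men) - selection_rate(women)
    print(f"{label}: women {selection_rate(women):.0%}, parity gap {gap:.0%}")

# industry: women 30%, parity gap 20%
# algorithm: women 35%, parity gap 15%
```

On these assumed numbers, the algorithm narrows the gap but does not close it – which is precisely the concession the paragraph above finds troubling: less biased is not the same as unbiased.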

Tara Behrend, PhD, a professor at Michigan State University’s School of Human Resources and Labor Relations, notes that the problem is not always as high-stakes as hiring, but can be just as consequential. For example, an AI-driven career guidance system could unintentionally steer a woman away from jobs in STEM (science, technology, engineering, and maths), influencing her entire life trajectory.

“That can be potentially hugely consequential for a person’s future decisions and pathways,” Behrend says. “It’s equally important to think about whether those tools are designed well” [7].

Another potential problem is that AI is not just feeding off our existing biases but forging more. “AI has many biases, but we’re often told not to worry, because there will always be a human in control,” said Helena Matute, PhD, a professor of experimental psychology at Universidad de Deusto in Bilbao, Spain. “But how do we know that AI is not influencing what a human believes and what a human can do?” [8]

In a study Matute conducted with graduate student Lucía Vicente, participants classified images for a simulated medical diagnosis, either with or without the help of AI. They found that when the AI system made errors, humans inherited the same biased decision-making, even after they stopped using the AI. “If you think of a doctor working with this type of assistance, will they be able to oppose the AI’s incorrect advice?” Matute asked.

Any football fans reading this may recall the effect that being “sent to the monitor” by VAR officials had on referees when the system was first implemented in the Premier League. Rather than using the review to reconsider their original decision, referees overturned it practically every time, the act of going to the monitor serving as little more than a form of ritual theatre. No matter how good at our jobs we are, if we are told we’re wrong by a higher authority, our natural inclination is to believe them.

Job loss

According to a report by Goldman Sachs, AI has the potential to replace around 300 million full-time jobs [9]. One quarter of all work tasks in Europe and the US could be automated [10]. The effect could be catastrophic. Would governments then subsidise these displaced workers? Would new industries emerge? Or would we witness unemployment, poverty and, in all likelihood, protests and riots on an unprecedented scale? No one’s quite sure.

Utopians tend to posit that AI will simply take over the tasks we don’t want to do, giving us more time to focus on ourselves and better, more profound endeavours. Similar arguments were made about typewriters, printers and the internet. As it turned out, all that really changed was the amount of work one was expected to get through in a day. We raise our expectations to meet the tools available to us. Productivity is the name of the game; that’s not going to change.

Businesses will be forced to make tough choices. The people-versus-profit decision has always been a component of corporate thinking, but it will soon become far more stark. If a company sees that it can cut its costs substantially every year by moving to newly available AI tools, will it really show loyalty to its staff? If so, how much, and for how long? Those that choose to prioritise profit, as many will, will have to embrace a swift, grand overhaul that could produce unparalleled turmoil. One also wonders what the effect will be on the psyche of retained staff as they watch the ease with which their colleagues are automated out the door.

Deepfakes

We’re already seeing increasingly sophisticated deepfakes online. At the moment, the falsity is detectable. Soon it won’t be. The ramifications are terrifying on a number of levels. Politically, we’re going to see democracy pushed to the brink as videos emerge of candidates for office saying or doing something repulsive days before an election, potentially swaying undecided voters. Worse still could be deepfake footage from war zones. The wrong video believed by the wrong people could escalate a conflict; at a minimum, it will pour fuel on the fire.

In the post-truth society we occupy, citizens already live in different realities based on the ideology they submit to. The gulf seems set to widen.

That’s not to mention the effect deepfakes will have on scamming. Should you receive a call from a loved one claiming to have been kidnapped and desperately asking for money, will you be able to tell whether it is really them? AI technology is already nearing the point of accurately replicating the voice of anyone with vocal recordings (podcasts, YouTube videos) in the public ether. It used to be that we believed something because we saw it with our own eyes or heard it with our own ears. In the coming years, even that won’t be enough.

Then there is the pornographic aspect, which has regrettably already begun. Taylor Swift is the most famous victim of deepfaked images online, but the exact same thing will soon be happening in classrooms up and down the country. In fact, it’s already started [11]. Compromising images of teenage girls are being created by AI and then spread amongst their classmates. Needless to say, this is reprehensible – and harmful in the extreme. How do you prepare a young girl for such psychological damage? Why should you have to? Because almost every new technology throughout history has been quickly turned into a weapon of misogyny. AI is no different.

Environment

AI models require an unconscionable amount of energy to train [12]. For all its promise as a tool of progress, artificial intelligence is an enormous resource consumer. Researchers are working to create energy-efficient models, but as of right now the AI revolution is quite literally unsustainable.

The responsibility of businesses

Regulation will be introduced in an attempt to minimise the potential damage AI poses. Indeed, in 2023, the Biden administration released an executive order on Safe, Secure, and Trustworthy AI, and the European Union came close to passing its first comprehensive AI Act [13]. But the executive order is limited in its scope and authority, and Silicon Valley will continue pushing back. As such, for the time being at least, if AI is to be effectively regulated, businesses will have to regulate it themselves.

Writing in Harvard Business Review, Reid Blackman, PhD, author of Ethical Machines, suggests how. He says that organisations need to assemble “a senior-level working group that is responsible for driving AI ethics in your organisation…At a minimum, we recommend involving four kinds of people: technologists, legal/compliance experts, ethicists, and business leaders who understand the problems you’re trying to solve for using AI” [14].

Companies that fail to act now risk reputational damage and missed opportunities to build trust with customers and key stakeholders. This is delicate territory, but the principle is simple: customers will feel kindly towards a company that prioritises high ethical standards.

What they won’t respond to are the businesses choosing instead to cash in on the chaos. As Adrienne LaFrance, executive editor of The Atlantic, writes, “Corporations that stand to profit off this new technology are already memorising the platitudes necessary to wave away the critics. They’ll use sunny jargon like ‘human augmentation’ and ‘human-centred artificial intelligence.’ But these terms are as shallow as they are abstract” [15].

Self-regulation and ethics have never walked comfortably hand in hand, but until more official channels get a grip on this era-defining technology, businesses must regulate themselves as best they can, with transparency and moral values at the centre of their thinking.

LaFrance sums the situation up well:

“In the face of world-altering invention, with the power of today’s tech barons so concentrated, it can seem as though ordinary people have no hope of influencing the machines that will soon be cognitively superior to us all. But there is tremendous power in defining ideals, even if they ultimately remain out of reach. Considering all that is at stake, we have to at least try.”

Sources

[1] https://www.irishtimes.com/technology/2023/05/11/why-godfather-of-ai-geoffrey-hinton-quit-google-to-speak-out-about-risks/

[2] https://abcnews.go.com/Business/sam-altman-reaches-deal-return-ceo-openai/story?id=105091534

[3] https://www.reuters.com/legal/elon-musk-sues-openai-ceo-sam-altman-breach-contract-2024-03-01/

[4] https://www.techopedia.com/definition/34940/black-box-ai

[5] https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai

[6] https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G/

[7] https://www.apa.org/monitor/2024/04/addressing-equity-ethics-artificial-intelligence

[8] https://www.apa.org/monitor/2024/04/addressing-equity-ethics-artificial-intelligence

[9] https://www.forbes.com/sites/forbestechcouncil/2024/02/08/the-ethics-of-ai-balancing-innovation-with-responsibility/

[10] https://www.forbes.com/sites/forbestechcouncil/2024/02/08/the-ethics-of-ai-balancing-innovation-with-responsibility/

[11] https://www.nytimes.com/2024/03/02/opinion/deepfakes-teenagers.html

[12] https://www.forbes.com/sites/nishatalagala/2022/05/31/ai-ethics-what-it-is-and-why-it-matters/

[13] https://www.pwc.com/jp/en/knowledge/column/generative-ai-regulation09.html

[14] https://hbr.org/2022/03/ethics-and-ai-3-conversations-companies-need-to-be-having

[15] https://www.theatlantic.com/magazine/archive/2023/07/generative-ai-human-culture-philosophy/674165/