#100
AI – A doomsday scenario with Roman Yampolskiy
Roman Yampolskiy, PhD, is a computer scientist and tenured professor at the University of Louisville, where he directs the Cyber Security Laboratory in the Department of Computer Engineering and Computer Science at the Speed School of Engineering. He is an expert on artificial intelligence, with over 100 published papers and books. He was one of the earliest proponents of artificial intelligence safety and remains a pre-eminent figure in the field.
His latest book, ‘AI: Unexplainable, Unpredictable, Uncontrollable’, explores the unpredictability of AI outcomes, the difficulty in explaining AI decisions, and the potentially unsolvable nature of the AI control problem, as well as delving into more theoretical topics like personhood and consciousness in relation to artificial intelligence, and the potential hazards further AI developments might bring in the years to come.
Summary
01:29 What is AI and what’s the difference between AI, machine learning, deep learning, and generative AI?
- AI aims to automate both physical and cognitive labour to assist in various fields such as manufacturing, science, and entertainment.
- Buzzwords like machine learning and deep learning are all part of artificial intelligence, with deep learning differing primarily in the number of layers in the network.
- For the average person, the specific terminology doesn’t matter; the focus is on AI’s ability to produce useful outputs, effectively automating labour.
03:17 How does generative AI work?
- AI involves significant computation and learning from large datasets, including the entire internet, to predict patterns in text and images.
- It takes humans 20 years to reach adult proficiency, but AI can achieve similar capabilities in 3 to 6 months.
- Over the past 10 years, advances in computing power and data have enabled neural networks to perform a wide range of tasks, becoming useful to the general public, not just experts.
05:45 Why was it that 2022 was the year that AI went mainstream?
- 20 years ago, Ray Kurzweil predicted when we would have enough computing power to emulate a human brain.
- Advancements in algorithms, hardware, and the availability of large datasets from the internet have driven recent AI capabilities.
- The combination of these factors has led to the advanced AI technologies and capabilities we see today.
06:30 What are the challenges in training generative AI?
- Training GPT-5 would require significantly more computing power and energy than GPT-4, possibly necessitating new energy sources.
- Due to data scarcity, artificial data and simulated environments may be needed for training future AI models.
- Ensuring AI is not misused involves implementing filtering mechanisms to make it safe and appropriate for users.
- Despite concerns about deep learning’s reliability and data limitations, there have been consistent advancements in AI capabilities.
- Hardware advancements and potential new paradigms (e.g., quantum computing) are expected to continue enhancing AI capabilities.
10:12 How does intelligence in the human brain compare to the intelligence in AI?
- AI experts often live in bubbles – it’s easy to forget that existing AI systems already surpass the capabilities of most humans in areas like writing, knowledge, and translation.
- AI still struggles with tasks requiring emotional intelligence and complex reasoning, areas where humans excel.
- Despite AI’s limitations in certain problem-solving tasks, it is on par with human experts in interpreting emotions and providing therapeutic sessions.
15:07 Alternative measures of AI intelligence
- There are various proposals for new Turing tests to measure AI capabilities.
- One proposal for embodied systems involves a robot making a cup of coffee in an unfamiliar kitchen.
- For purely cognitive (non-embodied) systems, a test could be to legally make $1,000,000 by starting a company.
16:43 The best use of AI for humanity
- It provides immense economic value, offering trillions of dollars in untapped potential.
- AI can assist in various fields, including science, engineering, and medical research, providing both physical and cognitive support.
- AI serves as an adviser, offering valuable guidance on investing, job searching, and decision-making, especially for those who may lack certain capabilities.
20:07 How far away are we from replacing the workforce with AI?
- Technological solutions often exist before there is market demand, much like video phones, which became popular only with the advent of smartphones.
- Humanoid robots capable of automating physical labour exist, but widespread deployment depends on market acceptance and practical utility.
- The timeline for the widespread adoption of robots interacting with humans is uncertain and varies by region, though there are no technical barriers to such advancements.
- There are differing views on AI’s future: some foresee integration into human brains (enhancing human capabilities), while others envision autonomous AI systems overtaking human functions.
- The integration of AI into human brains could enhance human intelligence, but if AI becomes much smarter, humans may contribute little to these systems.
- Current AI assistants are external, like smartphones and PCs; future developments might internalize these capabilities, enhancing human cognitive functions.
25:33 Will we become a human AI interface?
- Currently, AI is a tool assisting humans in achieving their goals.
- There may come a point when AI shifts from being a tool to an autonomous agent that is smarter than humans.
- In this future scenario, humans may initially serve as tools for AI agents, possibly getting paid for specific tasks.
- Eventually, humans might become obsolete, contributing nothing in terms of strength, cognitive ability, or creativity.
- This could lead to a humanless society where machines run everything, making humans redundant and unable to challenge AI dominance.
- We are effectively building our own replacements, despite the potential negative consequences.
27:13 Is AI neither inherently good nor bad?
- For narrow AI systems, human actors control their use, i.e. determining whether they are used malevolently or properly.
- With the advent of general AI or superintelligence, these systems become independent agents that make their own decisions and set their own goals.
- Once superintelligence is created, it runs the show, regardless of whether it was created by “good” or “bad” actors.
- Current concerns like copyright and job security become irrelevant, shifting to existential concerns about human survival.
29:26 What do you think the changes will be in the next 1 to 3 years?
- Advancements depend heavily on the availability of computing power and investment.
- Current competition among major AI developers like OpenAI and Google appears relatively balanced, with each releasing models that are comparable in overall capability.
- While some companies excel in specific domains like programming, overall advancements in AI capabilities are competitive across the industry.
31:18 What is the best-case scenario for AI?
- Controlling superintelligent machines indefinitely is likely impossible.
- Focus on narrow AI systems for specific domains, like solving scientific problems such as protein folding, which offer benefits without significant risks.
- Creating superintelligent machines without reliable control mechanisms could lead to unpredictable and potentially catastrophic outcomes.
41:32 Why the chances of misaligned AI are not small
- The argument that if we don’t develop superintelligence, others will is flawed because uncontrolled AI poses risks regardless of its origin.
- Existing regulatory measures like those for FDA clinical trials may not be sufficient to manage the risks posed by self-improving AI.
- There is a societal bias against discussing existential risks, similar to how individuals avoid thinking about their own mortality.
- Efforts to raise awareness through publications, talks, and education have not yielded a significant impact in addressing these risks.
- Addressing the risks associated with superintelligence may require new approaches or interventions beyond current regulatory and educational efforts.
45:28 Strictly advisory AI versus human orders
- Direct control of superintelligence through commands is unreliable due to potential misinterpretations in human language.
- The concept of an ‘ideal adviser’, where AI makes decisions for humans, undermines human autonomy and decision-making power.
- Yampolskiy advocates for narrow AI systems as tools that assist humans with specific tasks without attempting to influence or outthink them.
- He warns that human autonomy may diminish gradually through increasing dependence on AI, fundamentally altering society and ultimately leaving it controlled by machines.
Links mentioned:
‘AI: Unexplainable, Unpredictable, Uncontrollable’ by Roman Yampolskiy
A Life Less Ordinary with Mark Little