Eric Schmidt, the former CEO of Google, has long been a voice of insight and caution regarding artificial intelligence (AI). In a recent conversation with Steve Bartlett on his podcast, The Diary of a CEO, Schmidt continues his discussion of AI's impact, highlighting both the dangers and the transformative potential of the technology and framing it as one of the most significant, and potentially harmful, technologies humanity has ever developed. He compares its implications to the advent of nuclear weapons, noting that AI's intelligence and autonomy may make it even more disruptive.
The Scaling of AI Capabilities: Exponential Growth
Schmidt describes the rapid scaling of AI models, likening it to “turning the crank” on a system that grows exponentially with each iteration. In the next five years, he predicts AI systems will become 50 to 100 times more powerful, unlocking unprecedented capabilities in fields like physics, mathematics, and advanced problem-solving.
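To make the "50 to 100 times more powerful" claim concrete, a back-of-envelope calculation shows the compound annual growth it implies. This is an illustrative calculation of the article's own figures, not a number from the conversation:

```python
# Implied per-year growth factor behind "50 to 100x in five years".
def annual_growth_factor(total_multiple: float, years: int) -> float:
    """Return the constant yearly factor that compounds to total_multiple."""
    return total_multiple ** (1 / years)

low = annual_growth_factor(50, 5)    # ~2.19x per year
high = annual_growth_factor(100, 5)  # ~2.51x per year
print(f"50x over 5 years  -> ~{low:.2f}x per year")
print(f"100x over 5 years -> ~{high:.2f}x per year")
```

In other words, the prediction amounts to AI capability more than doubling every year, which is what "turning the crank" on exponential scaling looks like in practice.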
However, this extraordinary growth comes with significant risks. Schmidt warns that unregulated scaling could outpace humanity’s ability to control it, leading to severe consequences in areas like cybersecurity, misinformation, and warfare.
Cybersecurity Threats: AI as the Ultimate Hacker
One of Schmidt’s primary concerns is the potential for AI to revolutionize cyberattacks. Advanced AI models, especially “raw models” that are not yet public, have demonstrated the ability to perform zero-day attacks: exploits that target previously unknown vulnerabilities.
- Zero-Day Attacks: AI can autonomously discover new exploits by running countless simulations without human oversight. These attacks, Schmidt notes, are as effective as, if not more effective than, those executed by skilled hackers.
- Relentless Efficiency: Unlike humans, AI systems operate without rest, making them uniquely dangerous in the cybersecurity landscape.
Schmidt emphasizes the need for robust international cooperation to mitigate this threat, calling for preemptive measures to prevent catastrophic cyber incidents.
Biological Risks: AI-Driven Bioengineering
Schmidt also highlights the risk of AI being used to create biological weapons, such as synthetic viruses. With AI’s ability to model biological systems and processes, the technology could enable malicious actors to design viruses with devastating potential.
“Viruses are relatively easy to make, and you can imagine coming up with really bad ones,” Schmidt warns.
To combat this threat, Schmidt participates in global commissions aimed at regulating AI’s use in bioengineering, emphasizing the urgent need for oversight in this area.
The Evolution of Warfare: Drone-Dominated Battlefields
AI is fundamentally reshaping the nature of conflict. Schmidt outlines a future where traditional combat, symbolized by soldiers in trenches or tanks on battlefields, is replaced by autonomous drone warfare. He cites the ongoing Russia-Ukraine conflict as a real-time example of this transformation:
- Drone Efficiency: Low-cost drones are now capable of destroying high-value military assets, such as tanks, altering the economics of warfare. Schmidt describes this as the “kill ratio” dynamic—where a $5,000 drone can destroy a $5 million tank.
- Remote Warfare: Soldiers may no longer need to be physically present on the battlefield. Instead, they will operate drones and robots from command centers, fundamentally changing the human experience of war.
Schmidt predicts that drone-on-drone combat will dominate future conflicts, raising questions about accountability and escalation.
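The economics behind the "kill ratio" Schmidt describes can be sketched with the article's own figures. The break-even framing below is an illustration, not part of the conversation:

```python
# Cost asymmetry of drone warfare: how many $5,000 drones an attacker
# can expend per $5 million tank destroyed and still come out ahead.
DRONE_COST = 5_000      # figure cited in the article
TANK_COST = 5_000_000   # figure cited in the article

break_even_losses = TANK_COST // DRONE_COST
print(f"Attacker breaks even losing up to {break_even_losses:,} drones per tank")
```

A 1:1000 cost ratio means that even if only one drone in a thousand succeeds, the exchange is economically neutral for the attacker, which is why Schmidt argues cheap drones alter the economics of warfare.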
Misinformation and Manipulation: A Persistent Danger
In addition to physical threats, Schmidt reiterates the risk of AI amplifying misinformation. AI-driven content algorithms and synthetic media could undermine democratic institutions, polarize societies, erode public trust, and disrupt social cohesion.
The Need for Guardrails: Managing AI’s Risks
Despite the dangers, Schmidt remains cautiously optimistic that humanity can control AI’s development. He points to the growing movement within the tech industry to establish trust and safety frameworks, including:
- Human Oversight: Teams of human evaluators testing AI systems before deployment.
- Guardrails for Harmful Queries: Preventing AI from providing dangerous outputs, such as instructions for self-harm or weapon creation.
Schmidt stresses the importance of government involvement, a significant departure from the tech industry’s historical reluctance to seek regulation. He believes that collaboration between industry and policymakers is essential to prevent AI from becoming an uncontrollable force.
What the Future Holds
When asked to envision the future, Schmidt acknowledges moments of pessimism, particularly about the potential misuse of AI. However, he remains committed to shaping a world where AI serves humanity rather than threatens it.
“In five years, these systems will be 50 to 100 times more powerful. That’s a very big deal. But whether they advance responsibly depends on us.”
From cyberattacks and bio-weapons to misinformation and drone warfare, the challenges posed by AI are immense. Yet, Schmidt emphasizes that the greatest danger is inaction. By proactively addressing these risks, society can harness AI’s potential while safeguarding its future.