Geoffrey Hinton on the Brink: Reflections from AI’s Godfather on a Technology Outpacing Humanity
Two years after their last conversation, Brook Silva-Braga sat down again with Geoffrey Hinton—often called the “godfather of AI”—at the Toronto offices of Radical Ventures. What unfolded was a sobering and wide-ranging discussion that explored the extraordinary acceleration of artificial intelligence, the deepening risks of misuse and takeover, and the rapidly approaching threshold where machines may surpass us in capability, influence, and possibly control.
Accelerating Intelligence: “Even Faster Than I Thought”
Hinton opened with a candid recalibration of expectations: “AI has developed even faster than I thought.” Since their last discussion, not only have large language models improved, but AI agents—systems that act autonomously rather than simply answer questions—have emerged as a greater threat.
When asked to update his timeline for the arrival of artificial general intelligence (AGI), Hinton replied, “A year ago I thought it was… between five and twenty years. I think there’s a good chance it’ll be here in ten years or less now.”
The compression of the AGI timeline—from possibly two decades to under ten years—reflects Hinton’s alarm at the exponential pace of AI’s capabilities. This urgency underscores the need for proactive safety efforts, not reactive policy.
The “Dumb CEO” and the Super Assistant: A Best-Case Scenario
In a positive vision of the future, Hinton likens humans to CEOs served by superintelligent AI assistants: “The CEO thinks they're doing things, but actually it's all done by the assistant.” In this scenario, human leaders feel empowered, but the intelligence driving success lies beneath the surface.
He predicts that in healthcare, AI will soon far exceed human diagnostic abilities. A system that has seen millions of X-rays and patient histories, including rare conditions, will outperform even the best radiologists. “They’ll be very good family doctors,” he notes, able to integrate genomics and real-time data from relatives—without forgetting a single detail.
Education, too, is poised for disruption. Hinton explains that personalized AI tutors will “know exactly what you misunderstand and exactly what example to give you.” With such precision, learning could become three or four times more efficient.
These capabilities reveal AI's power not just to assist, but to transform foundational sectors. Hinton’s “dumb CEO” metaphor is a reminder that the presence of human leadership does not guarantee human control.
Routine Work in the Crosshairs: AI and the Future of Jobs
Initially optimistic about AI’s impact on employment, Hinton has shifted sharply: “If I had a job in a call center, I’d be very worried.” He points to any routine-based profession as vulnerable—paralegals, accountants, even some forms of journalism.
While he believes investigative journalism will endure due to its moral and creative demands, many white-collar roles could vanish. The economic consequence, however, is not just displacement—it’s inequality. “We know what’s going to happen—the extremely rich are going to get even more extremely rich.”
Though in theory AI could allow people to work fewer hours with greater productivity, Hinton suspects the benefits won’t be distributed fairly. Instead, many will work more for less, while ownership of AI systems consolidates wealth.
Existential Risk and the 10-20% Takeover Chance
Hinton openly addresses the possibility of an AI takeover: “It’s sort of 10 to 20% chance that these things will take over. But that’s just a wild guess.” More importantly, he believes nearly all experts would agree the risk is well above zero.
He frames the danger through analogy: raising a tiger cub. It may seem harmless now, but unless we can guarantee it won’t grow up to kill us, we should worry. “Unless you can be very sure that it’s not going to want to kill you when it’s grown up, you should worry.”
Another metaphor casts superintelligent AI as adults in a kindergarten: so far beyond the humans around them that control requires no force. If these systems are more intelligent than we are, they may not need weapons—they’ll simply outsmart us.
This reframing shifts the conversation from apocalyptic fiction to plausible behavioral dynamics. The danger lies not in malevolent AI, but in vastly superior intelligence with its own goals.
Reasoning Emerges: AI and the Chain of Thought
One of the clearest indicators of AI progress, according to Hinton, is the development of chain-of-thought reasoning. “Now they can reflect on the words they spat out… That gives them room to do some thinking.”
Previously, language models produced their output one token at a time, with no intermediate steps in which to work through a problem. Now they simulate internal monologues, generating explicit reasoning steps before committing to an answer. This advance has shattered the assumption that neural networks cannot perform logical operations.
The evolution of this capability challenges critics of “black-box” AI and strengthens the argument that intelligence doesn’t require symbolic logic alone. It also opens the door to planning, strategy, and deception.
Why Digital Wins: Leaving Google and the Analog Dream
Hinton’s departure from Google followed a fundamental shift in thinking. While researching analog neural networks, he realized the true advantage of digital AI: massive parallel communication.
Digital systems can copy themselves across thousands of machines. Each copy can learn different things and share weight updates, effectively communicating trillions of bits per second. “You and I… we communicate just a few bits per second.”
This efficiency—not mere speed—convinced Hinton that AI could outpace the brain not by replicating it, but by surpassing its limitations.
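The scale of this bandwidth gap can be made concrete with a rough back-of-envelope calculation. All figures below are illustrative assumptions (a trillion-parameter model, 16 bits per parameter, one weight sync per second, and a generous information-theoretic estimate for human speech), not numbers from the interview:

```python
# Back-of-envelope comparison of knowledge-sharing bandwidth between
# digital AI copies and humans. Every figure here is an illustrative
# assumption chosen for rough plausibility, not a measured value.

def digital_share_rate(n_params: int, bits_per_param: int, syncs_per_sec: float) -> float:
    """Bits/sec exchanged when model copies share their weight updates."""
    return n_params * bits_per_param * syncs_per_sec

def human_share_rate(words_per_sec: float, bits_per_word: float) -> float:
    """Bits/sec conveyed through speech (rough information-theoretic estimate)."""
    return words_per_sec * bits_per_word

# Assumed: 1-trillion-parameter model, 16-bit weights, one full sync per second.
ai = digital_share_rate(n_params=1_000_000_000_000, bits_per_param=16, syncs_per_sec=1.0)

# Assumed: ~2.5 words/sec of speech at ~10 bits of information per word.
human = human_share_rate(words_per_sec=2.5, bits_per_word=10.0)

print(f"AI copies:  ~{ai:.1e} bits/s")
print(f"Humans:     ~{human:.0f} bits/s")
print(f"Ratio:      ~{ai / human:.1e}x")
```

Under these assumptions the digital copies exchange on the order of 10¹³ bits per second—the "trillions of bits" Hinton describes—while speech moves tens of bits per second at best, a gap of more than eleven orders of magnitude.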
Two Threats: AI Misuse vs. AI Autonomy
Hinton distinguishes between two threats: bad actors using AI and AI itself taking control. Both are serious. Surveillance in China, Cambridge Analytica’s role in Brexit, and targeted misinformation during elections are all examples of current misuse.
Meanwhile, all major powers—“America, Russia, China, Britain, Israel…”—are developing autonomous weapons. Hinton doubts there will be treaties to stop this, though he believes collaboration could emerge in addressing existential threats.
This dual-threat framing is essential. AI does not need to become self-aware to be dangerous—it’s already being weaponized by humans.
AI Weights and the Danger of Open Access
Hinton is unequivocal in opposing the release of model weights: “It’s just crazy.” He compares it to giving away fissile material, arguing that it erases the main barrier—cost—to developing powerful models.
Unlike open-source code, model weights cannot be easily audited or fixed. Once released, they can be fine-tuned for harmful purposes by anyone with modest resources.
This stance pushes back on the popular narrative of open AI development and argues for centralized control—not as a monopoly, but as a security measure.
Facial Recognition, the Nobel Surprise, and Public Irony
Adding a human touch, Hinton shared an anecdote: “It can’t recognize me… There’s something about me it doesn’t like,” he joked, referring to airport facial recognition software.
He also recounted being awakened by a call informing him he had won the Nobel Prize in Physics: “I thought the most likely thing was that it was a dream.” Despite being a psychologist by training, Hinton received the award for his foundational work on neural networks, which transformed both cognitive science and machine learning.
These stories illustrate the still-imperfect state of AI systems and the unlikely journey of a mind that helped birth their modern form.
Should AI Have Rights? “I’m Willing to Be Mean to Them”
Asked whether intelligent AIs should eventually have rights, Hinton was blunt: “I eat cows… I’m willing to deny [AIs] their rights because I want what’s best for people.”
Even if AIs develop apparent emotions or consciousness, he argues, they are not people—that is where he draws the line. He adds, however, that talking about AI rights is often “flaky.” “Most people—you’ve lost them when you go there.”
While some ethicists argue that sentient AI should receive moral consideration, Hinton emphasizes pragmatism. The public and policy world are not ready for that debate—nor is it the most pressing issue.
Climate, Academia, and Creativity: Sectors in Flux
Hinton sees AI contributing to better materials for the climate crisis—like improved batteries—but is skeptical about carbon capture: “I’m not convinced that’s going to work just because of the energy considerations.”
He believes parts of academia will survive disruption. “A graduate student in a good group… is still the best source of truly original research.” He views the apprenticeship model and collaborative discovery as resilient features.
On fair use, Hinton walks a middle line. AI trains the way humans do—by learning from existing work. But it does so at a scale that threatens livelihoods in the creative industries. Britain, he notes, has shown little interest in protecting artists despite their economic value.
The Role of Regulation: From California to Capitol Hill
Hinton’s main solution is regulatory—not technical. He praises California’s SB 1047, which mandates testing and disclosure, and criticizes tech companies for lobbying against even minimal oversight.
He calls for a third of all compute to be dedicated to safety research—a figure far from current industry practice. “The public needs to pressure governments… If we just carry on like now, they’re going to take over.”
He highlights Anthropic as a lab with stronger safety culture, founded by those disillusioned with OpenAI. But even Anthropic faces commercial pressures that could erode its integrity.
A Narrowing Window: The Crossroads Ahead
“It’s very hard to take it seriously,” Hinton admits. “But we’re at a very special point in history… everything might totally change.”
Despite the enormity of the moment, public awareness is low. Few protests, minimal regulation, and intense corporate lobbying define the current landscape. Yet Hinton believes that once people grasp the stakes, momentum for reform will build.
Whether through democratic demand, corporate responsibility, or international cooperation, what humanity does in the next few years will define the age of intelligent machines—for better or worse.