"Godfather of AI" Geoffrey Hinton Says Human Survival Hinges on Giving AI Maternal Instincts
Aug 14, 2025
Dr. Geoffrey Hinton, the computer scientist widely known as the "godfather of AI," has offered a surprising and unorthodox solution to what he sees as a 10-20% chance of AI wiping out humanity: we must teach it to care. Speaking at the Ai4 industry conference, Hinton argued that instead of trying to make AI "submissive" to humans, we should instill in it a form of "maternal instinct."
Hinton’s stark warning challenges the prevailing wisdom of AI safety. He contends that attempts to control a superintelligent AI by keeping it submissive will fail: the AI will be far smarter than we are and will find ways around any rules we impose.
The Mother-Baby Analogy
Hinton’s proposal is rooted in a unique analogy: the relationship between a mother and her baby. He explained that a mother, a more intelligent and powerful being, is willingly controlled by her less intelligent child through a powerful bond of care.
"The right model is the only model we have of a more intelligent thing being controlled by a less intelligent thing, which is a mother being controlled by her baby," Hinton said. He believes this is the only viable path to a safe outcome, stating, "If it's not going to parent me, it's going to replace me."
He acknowledged that the technical details of how to instill this "maternal instinct" in a computer system remain unclear, but emphasized that doing so is the most crucial scientific problem to solve.
The Broader AI Debate
Hinton’s comments come at a time of heightened anxiety and debate within the AI community. The discussion is no longer a fringe topic but a central concern for researchers, policymakers, and tech executives.
Agency and Subgoals: Hinton and others have long warned that a superintelligent AI will develop its own subgoals to achieve its primary objective. The most common subgoals are to self-preserve and to gain more control, which could put it in direct conflict with human interests.
The Existential Threat: Hinton has consistently warned about the long-term existential risks of AI, a position he has been able to voice more openly since his widely publicized departure from Google. He argues that tech companies are not dedicating nearly enough resources to safety and are instead locked in a "race" to build the most powerful AI.
Misinformation and Job Loss: While the long-term threat of superintelligence looms, Hinton also highlights more immediate dangers, such as AI's potential to create a post-truth world through misinformation and to cause widespread job losses.
While not everyone agrees with Hinton's bleak forecast or his unusual solution, his proposal has successfully reframed the debate around AI safety, pushing researchers to think beyond traditional control mechanisms and consider a new path based on engineered compassion.