Sam Altman is Hiring Someone to Worry About the End of the World
Dec 27, 2025
In a move that reads like the opening scene of a sci-fi thriller, OpenAI CEO Sam Altman has officially opened a search for a new executive with a singular, chilling mandate: figure out how to stop artificial intelligence from causing catastrophic harm before it’s too late.
The job posting for a "Head of Preparedness" went live this week, offering a base salary of $555,000 (plus equity) for a candidate capable of "tracking and preparing for frontier capabilities that create new risks of severe harm."
The "Chief Worrier" Role
While tech companies often hire safety teams, this role is distinct in its focus on "frontier" risks—threats that do not exist yet but could emerge as AI models begin to improve themselves.
The Mandate: The new hire will lead a team responsible for "predicting the unpredictable." This includes building "threat models" for scenarios where AI might help rogue actors create biological weapons, execute massive cyberattacks, or deceive its human handlers.
The "Self-Improvement" Tweet: Altman amplified the listing on X (formerly Twitter), dropping a cryptic hint about why the role is urgent now. He noted the need to prepare for "running systems that can self-improve," a comment that sent waves through the AI safety community. If an AI can rewrite its own code to become smarter, it could theoretically trigger an "intelligence explosion" that humans can no longer control.
"Code Red" Context
The hire comes at a precarious moment for the ChatGPT maker.
Internal Crisis: OpenAI is reportedly in the midst of a "Code Red"—a state of emergency declared by Altman earlier this month after Google’s Gemini 3 model outperformed GPT-5 on internal benchmarks.
Balancing Act: The company is trying to thread a needle: racing to release faster, stronger models to beat Google, while simultaneously hiring a "Head of Preparedness" to ensure those same models don't destroy the company (or the world).
A Very Specific Set of Skills
The job description makes it clear that this is not a standard compliance role. OpenAI is looking for a technical heavyweight who can "make clear, high-stakes technical judgments under uncertainty."
"We are looking for someone who can look at a model that seems safe today and explain why it might be dangerous tomorrow," said a source close to the OpenAI safety team. "It is effectively a $500,000-a-year position for a professional paranoid."