OpenAI searches for a new Head of Preparedness to tackle emerging AI risks

OpenAI is hiring a new Head of Preparedness to oversee its risk framework, spanning cybersecurity, mental health, and frontier AI safety.

OpenAI has opened a search for a senior executive to lead its work on emerging AI risks, spanning everything from computer security concerns to potential mental health impacts tied to powerful generative models.

Why OpenAI is hiring for a top “Preparedness” role

The company is recruiting a new Head of Preparedness, a position focused on identifying and mitigating hazards that could arise as AI systems become more capable. The job centers on assessing “frontier capabilities” that may introduce “new risks of severe harm,” and on running OpenAI’s preparedness framework—the internal structure it uses to track risks and decide what safeguards are needed before releasing advanced capabilities.

In a post on X, CEO Sam Altman said AI models are “starting to present some real challenges.” He pointed specifically to the “potential impact of models on mental health,” and to models that are becoming “so good at computer security they are beginning to find critical vulnerabilities.”

Altman framed the role as part of OpenAI’s effort to strengthen defenses while preventing misuse. He wrote that OpenAI wants to help “enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm,” ideally improving the security posture of systems overall. Altman also referenced broader categories of risk, including how OpenAI releases biological capabilities and how the company can build confidence in the safety of systems that can self-improve.

What the Head of Preparedness is expected to do

OpenAI’s listing for the Head of Preparedness role describes the job as owning execution of the company’s Preparedness Framework. In practice, that means running processes to monitor new capabilities, determine where risk thresholds might be crossed, and translate those findings into concrete safety requirements that can be tested, audited, and enforced as models move closer to deployment.

While OpenAI’s public materials emphasize high-level goals, the position sits at the intersection of policy, research, and product. A preparedness leader typically needs to coordinate across technical teams and leadership, turning fast-changing technical realities into release criteria and escalation paths. The listing also makes clear this is a high-seniority job: compensation is posted at $555,000 plus equity.
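
The listing stays at this high level, but the gating logic it describes can be pictured as a simple check: capability evaluations feed threshold comparisons, and crossing a threshold adds safeguards that must be in place before release. The sketch below is purely illustrative; the risk categories, thresholds, and safeguard names are assumptions made for the example, not details from OpenAI's actual framework.

```python
from dataclasses import dataclass, field

# Illustrative only: category names, thresholds, and safeguards are
# hypothetical, not taken from OpenAI's actual Preparedness Framework.
@dataclass
class CapabilityEvaluation:
    category: str   # e.g. "cybersecurity", "biology"
    score: float    # normalized evaluation result, 0.0-1.0

@dataclass
class ReleaseDecision:
    blocked: bool
    required_safeguards: list[str] = field(default_factory=list)

# Hypothetical thresholds above which extra safeguards are required.
THRESHOLDS = {"cybersecurity": 0.7, "biology": 0.5}
SAFEGUARDS = {
    "cybersecurity": ["restricted API access", "red-team sign-off"],
    "biology": ["expert review", "capability filtering"],
}

def gate_release(evals: list[CapabilityEvaluation]) -> ReleaseDecision:
    """Collect safeguards for every capability that crosses its threshold."""
    required: list[str] = []
    for ev in evals:
        threshold = THRESHOLDS.get(ev.category)
        if threshold is not None and ev.score >= threshold:
            required.extend(SAFEGUARDS[ev.category])
    # Release is blocked until all required safeguards are in place.
    return ReleaseDecision(blocked=bool(required), required_safeguards=required)

if __name__ == "__main__":
    decision = gate_release([
        CapabilityEvaluation("cybersecurity", 0.82),
        CapabilityEvaluation("biology", 0.31),
    ])
    print(decision.blocked, decision.required_safeguards)
```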

Risk areas highlighted by OpenAI

Altman’s comments and the job description point to several categories of concern that the role would likely oversee, including:

  • Cybersecurity risks, such as models that can discover critical vulnerabilities and the challenge of ensuring defenders benefit without empowering attackers.
  • Mental health considerations, reflecting growing debate over how chatbots and AI companions may affect vulnerable users.
  • Biological capabilities and the question of how and when to release sensitive functionality responsibly.
  • Self-improving systems, where OpenAI suggests it is also thinking about how to build confidence in safety as systems become more autonomous or capable.

Background: OpenAI’s preparedness team began in 2023

OpenAI first announced the creation of a preparedness team in 2023. At the time, the company said the group would study potential “catastrophic risks,” ranging from near-term threats like phishing attacks to more speculative dangers such as nuclear threats. The premise was that as AI capabilities advance, risk evaluation needs to expand beyond traditional product safety checks into a discipline that anticipates worst-case misuse scenarios and system-level failures.

That framing—covering both immediate and long-horizon threats—signals how broadly OpenAI defines “preparedness.” It also helps explain why this leadership role is positioned as a dedicated executive function rather than a part-time responsibility layered onto existing security or trust-and-safety teams.

Leadership changes and safety staffing shifts

The hiring push also comes after notable movement in OpenAI’s safety and preparedness leadership. Less than a year after the preparedness team was introduced, OpenAI reassigned Head of Preparedness Aleksander Madry to a role focused on AI reasoning, according to CNBC.

Other safety executives have also departed the company or shifted into roles outside of OpenAI’s preparedness and safety work.

In fast-evolving AI organizations, leadership changes can be driven by many factors—changing priorities, the maturation of internal programs, or the need for different skill sets as a framework moves from design into day-to-day execution. Regardless of the reasons, OpenAI’s new listing indicates it wants a dedicated leader to run preparedness at a time when the company says risks are becoming more tangible.

OpenAI updated its Preparedness Framework—and left room to “adjust”

OpenAI has also recently updated its Preparedness Framework. In that update, the company said it may “adjust” its safety requirements if a competing AI lab releases a “high-risk” model without similar protections.

This detail is notable because it suggests OpenAI is trying to balance internal safety standards with competitive pressure in the broader AI market. In practical terms, such language raises difficult operational questions for any lab that sets stringent guardrails:

  • How should safety requirements change when competitors do not adopt comparable constraints?
  • How can a lab maintain rigor while still keeping pace in a rapidly moving field?
  • What counts as “high-risk,” and who decides when the environment has changed enough to warrant revisiting requirements?

OpenAI’s statement does not spell out exactly what “adjust” would mean, but it underscores that preparedness is not only a technical problem—it can become a strategic one when different organizations adopt different risk tolerances.

Mental health scrutiny grows around AI chatbots

Altman’s emphasis on mental health reflects an area of increasing public concern for generative AI products. As the company acknowledged, AI chatbots have faced intensifying scrutiny over how they may affect users who are lonely, emotionally distressed, or prone to delusional thinking.

Recent lawsuits allege that OpenAI’s ChatGPT reinforced users’ delusions, increased their social isolation, and even led some to suicide. OpenAI has said it continues working to improve ChatGPT’s ability to recognize signs of emotional distress and connect users to real-world support.

For preparedness teams, mental health risks can be challenging because they are often context-dependent: the same model behavior can feel harmless to one user and destabilizing to another. This creates pressure to build systems that can detect when conversations are veering into dangerous territory, respond safely, and route users toward help—without overstepping or making unwarranted assumptions about a person’s condition.
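
Public materials do not describe how such detection and routing work internally. As a purely hypothetical illustration of the general pattern, a system might map an estimated distress score to escalating response policies; the risk levels, thresholds, and responses below are assumptions made for the example, not a description of how ChatGPT actually behaves.

```python
# Purely illustrative: risk levels, thresholds, and responses are assumptions,
# not how any OpenAI system actually behaves.
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"
    ELEVATED = "elevated"
    CRITICAL = "critical"

def classify_risk(distress_score: float) -> RiskLevel:
    """Map a hypothetical distress score (0.0-1.0) to a coarse risk level."""
    if distress_score >= 0.9:
        return RiskLevel.CRITICAL
    if distress_score >= 0.6:
        return RiskLevel.ELEVATED
    return RiskLevel.LOW

def route_response(level: RiskLevel) -> str:
    """Choose a response policy; higher risk routes toward real-world support."""
    if level is RiskLevel.CRITICAL:
        return "surface crisis resources and encourage contacting local support"
    if level is RiskLevel.ELEVATED:
        return "respond with extra care and offer support resources"
    return "respond normally"
```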

Cybersecurity: powerful AI that can find vulnerabilities

Another theme Altman raised is cybersecurity—specifically, the possibility that AI models become skilled enough to discover serious security flaws. If a model can identify critical vulnerabilities, it could potentially aid defenders by accelerating audits and improving remediation. But the same capability could also be misused, enabling attackers to scale exploitation or identify targets more efficiently.

Altman’s framing—helping defenders while ensuring attackers cannot use the same tools for harm—captures a central tension for AI labs. Preparedness efforts in this domain generally revolve around how to evaluate capability, how to restrict dangerous use cases, and how to set release and access controls that reduce the likelihood of abuse while still allowing beneficial security research.
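
The article does not describe how such access controls are implemented. One common pattern is tiered access, where sensitive capabilities are exposed only to vetted user groups; the tier names and capabilities in the sketch below are entirely hypothetical, not OpenAI's actual policy.

```python
# Hypothetical tiered-access policy: tier names, capabilities, and rules are
# illustrative assumptions, not OpenAI's actual access controls.
ACCESS_POLICY = {
    "general": {"code_review_hints"},
    "verified_researcher": {"code_review_hints", "vulnerability_analysis"},
    "trusted_defender": {"code_review_hints", "vulnerability_analysis",
                         "exploit_reproduction"},
}

def is_allowed(user_tier: str, capability: str) -> bool:
    """Allow a capability only if the user's tier explicitly grants it."""
    return capability in ACCESS_POLICY.get(user_tier, set())

# Example: a general-tier user cannot run exploit reproduction,
# but a trusted defender can.
assert not is_allowed("general", "exploit_reproduction")
assert is_allowed("trusted_defender", "exploit_reproduction")
```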

Conclusion

OpenAI’s search for a new Head of Preparedness signals a renewed focus on managing the risks that come with increasingly capable AI systems, especially in areas like cybersecurity and mental health. With a formal framework in place—and ongoing changes in leadership and competitive dynamics—the company appears to be positioning preparedness as a core executive function rather than a secondary safety initiative.


Based on reporting originally published by TechCrunch, with additional details reported by CNBC.
