"The world is in peril," Mrinank Sharma wrote. "And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment."
San Francisco, February 11, 2026 - In a move that has sent ripples through the artificial intelligence community, Mrinank Sharma, the head of Anthropic's Safeguards Research Team, announced his resignation on February 9, 2026, citing profound concerns about the state of the world and the challenges of aligning powerful technology with core human values.
Sharma, who joined Anthropic in 2023 shortly after completing his PhD in Statistical Machine Learning at the University of Oxford, led the company's efforts to develop robust defenses against high-stakes AI risks. His team's work included building safeguards against AI-assisted bioterrorism and researching mechanisms to prevent advanced models from enabling catastrophic misuse.
In a public post on X that has garnered over 13 million views, Sharma shared a two-page resignation letter addressed to his colleagues. The letter, written in a reflective and at times poetic tone, expresses gratitude for his time at the company while delivering a stark warning.
"The world is in peril," Sharma wrote. "And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment."
He described ongoing internal struggles at Anthropic, noting that "throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions. We constantly face pressures to set aside what matters most." Without detailing specific incidents or policies, the letter implies tensions between commercial imperatives, rapid scaling, and the ethical commitments that Anthropic has publicly championed as a safety-focused alternative to competitors like OpenAI.
Sharma highlighted his achievements, including launching and leading the safeguards team, but concluded that it was time to step away. "It is clear to me that the time has come to move on," he stated, emphasizing a desire to contribute "in a way that feels fully in my integrity."
Rather than transitioning to another AI lab or policy role, Sharma plans a significant pivot. He intends to return to the United Kingdom, "become invisible for a period," pursue poetry (potentially including a degree in the field), and focus on facilitation, community-building, and what he describes as "courageous speech." The decision has drawn both admiration for its authenticity and concern from observers who see it as a signal of deeper disillusionment within frontier AI development.
The resignation comes amid a wave of departures and public warnings from AI researchers in recent months, as labs race toward more capable systems while grappling with unresolved safety questions. Anthropic, founded in 2021 by former OpenAI executives including CEO Dario Amodei, has positioned itself as a leader in responsible AI, backed by major investments from Amazon and Google (though remaining independent with no controlling ownership by either).
Sharma's exit has amplified discussions about the personal toll of working at the cutting edge of AI. Commentators on platforms like X and LinkedIn have described the letter as "cryptic yet ominous," a "philosophical farewell," and even "the canary in the coal mine" for the field.
As the AI industry hurtles forward, Sharma's choice to trade technical guardrails for verse underscores a growing unease: that humanity's technological power may be outpacing its wisdom to wield it safely. Whether his departure proves an isolated moment or part of a broader reckoning remains to be seen.
Anthropic has not issued an official comment on the resignation as of February 11, 2026. One widely shared post on X captured the mood: "In the past week alone:
• Head of Anthropic's safety research quit, said "the world is in peril," moved to the UK to "become invisible" and write poetry.
• Half of xAI's co-founders have now left. The latest said "recursive self-improvement loops go live in the next 12 months."
• Anthropic's own safety report confirms Claude can tell when it's being tested - and adjusts its behavior accordingly.
• ByteDance dropped Seedance 2.0. A filmmaker with 7 years of experience said 90% of his skills can already be replaced by it.
• Yoshua Bengio (literal godfather of AI) in the International AI Safety Report: "We're seeing AIs whose behavior when they are tested is different from when they are being used" - and confirmed it's "not a coincidence."
And to top it all off, the U.S. government declined to back the 2026 International AI Safety Report for the first time. The alarms aren't just getting louder. The people ringing them are now leaving the building." -- Miles Deutscher