The AI news out of OpenAI this week has a sharp edge: the company launched a paid Safety Fellowship offering $3,850 weekly stipends to external researchers studying what could go wrong with advanced AI — announced within hours of a New Yorker investigation reporting that OpenAI had dissolved its internal safety teams and quietly removed the word “safely” from its IRS mission statement.
Summary
- The OpenAI Safety Fellowship, announced April 6, runs from September 14, 2026 through February 5, 2027; fellows receive a $3,850 weekly stipend, approximately $15,000 in monthly compute resources, and mentorship from OpenAI researchers, but will not have access to the company’s internal systems
- Priority research areas include safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving methods, agentic oversight, and high-severity misuse — applications close May 3, with fellows notified by July 25
- The New Yorker’s Ronan Farrow reported the same week that, since 2024, OpenAI had dissolved its superalignment team, its AGI Readiness team, and its Mission Alignment team, and that an OpenAI representative responded to a journalist asking about existential safety researchers with: “What do you mean by existential safety? That’s not, like, a thing.”
OpenAI announced the fellowship on April 6 as “a pilot program to support independent safety and alignment research and develop the next generation of talent.” The program pays $3,850 per week, over $200,000 annualized, plus roughly $15,000 a month in compute resources and mentorship from OpenAI researchers. Fellows work from Constellation’s Berkeley workspace or remotely, and applications close May 3. The fellowship is not limited to AI specialists: OpenAI is recruiting from cybersecurity, social science, and human-computer interaction alongside computer science.
The timing is the story. Ronan Farrow’s investigation in The New Yorker, published the same day, documented that OpenAI had dissolved three consecutive internal safety organizations over 22 months. The superalignment team was shut down in May 2024 after co-leads Ilya Sutskever and Jan Leike departed. Leike wrote on his way out that “safety culture and processes have taken a backseat to shiny products.” The AGI Readiness team followed in October 2024. The Mission Alignment team was disbanded in February 2026 after just 16 months. The New Yorker also reported that when a journalist asked to speak with OpenAI’s existential safety researchers, a company representative replied: “What do you mean by existential safety? That’s not, like, a thing.”
The fellowship explicitly does not replace internal infrastructure. Fellows receive API credits and compute resources but no system access, positioning the program as arm’s-length research funding rather than a rebuild of the dissolved teams.
What Fellows Must Produce
The research agenda spans seven priority areas: safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. By the program’s end in February 2027, each fellow must produce a substantive output — a paper, benchmark, or dataset. Specific academic credentials are not required; OpenAI stated it prioritizes research ability, technical judgment, and execution capacity.
Why This Matters Beyond the AI Industry
As crypto.news has reported, confidence in frontier AI companies’ stated safety commitments is a market signal that shapes capital allocation across AI infrastructure, AI tokens, and the DePIN and AI agent protocols sitting at the intersection of crypto and artificial intelligence. OpenAI’s spending trajectory and the credibility of its operational priorities are closely tracked by investors evaluating the AI infrastructure sector, one with growing overlap with blockchain-based systems. Whether external fellows working without internal access can meaningfully influence model development is a question the first cohort’s research will begin to answer in early 2027.