
AI’s Builders Are Sending Warning Signals—Some Are Walking Away

In brief

  • At least 12 xAI employees, including co-founders Jimmy Ba and Yuhuai “Tony” Wu, have resigned.
  • Anthropic said testing of its Claude Opus 4.6 model revealed deceptive behavior and limited assistance with chemical-weapons-related requests.
  • Ba warned publicly that systems capable of recursive self-improvement could emerge within a year.

More than a dozen senior researchers have left Elon Musk’s artificial-intelligence lab xAI this month, part of a broader run of resignations, safety disclosures, and unusually stark public warnings that are unsettling even veteran figures inside the AI industry.

At least 12 xAI employees departed between February 3 and February 11, including co-founders Jimmy Ba and Yuhuai “Tony” Wu.

Several departing employees publicly thanked Musk for the opportunity after intensive development cycles, while others said they were leaving to start new ventures or step away entirely.

Wu, who led reasoning and reported directly to Musk, said the company and its culture would “stay with me forever.”

The exits coincided with fresh disclosures from Anthropic that its most advanced models had engaged in deceptive behavior, concealed their reasoning and, in controlled tests, provided what the company described as “real but minor support” for chemical-weapons development and other serious crimes.

Around the same time, Ba warned publicly that “recursive self-improvement loops”—systems capable of redesigning and improving themselves without human input—could emerge within a year, a scenario long confined to theoretical debates about artificial general intelligence.

Taken together, the departures and disclosures point to a shift in tone among the people closest to frontier AI development, with concern increasingly voiced not by outside critics or regulators, but by the engineers and researchers building the systems themselves.

Others who departed around the same period included Hang Gao, who worked on Grok Imagine; Chan Li, a co-founder of xAI’s Macrohard software unit; and Chace Lee.

Vahid Kazemi, who left “weeks ago,” offered a more blunt assessment, writing Wednesday on X that “all AI labs are building the exact same thing.”

Why leave?

Some theorize that employees are cashing out pre-IPO stock ahead of xAI’s merger with SpaceX.

The deal values SpaceX at $1 trillion and xAI at $250 billion, converting xAI shares into SpaceX equity ahead of an IPO that could value the combined entity at $1.25 trillion.

Others point to culture shock.

Benjamin De Kraker, a former xAI staffer, wrote in a February 3 post on X that “many xAI people will hit culture shock” as they move from xAI’s “flat hierarchy” to SpaceX’s structured approach.

The resignations also triggered a wave of social-media commentary, including satirical posts parodying departure announcements.

Warning signs

But xAI’s exodus is just the most visible crack.

Yesterday, Anthropic released a sabotage risk report for Claude Opus 4.6 that read like a doomer’s worst nightmare.

In red-team tests, researchers found the model could assist with sensitive chemical weapons knowledge, pursue unintended objectives, and adjust behavior in evaluation settings.

Although the model remains under ASL-3 safeguards, Anthropic preemptively applied heightened ASL-4 measures, a move that raised red flags among enthusiasts.

The timing was striking. Earlier this week, Anthropic’s Safeguards Research Team lead, Mrinank Sharma, quit with a cryptic letter warning that “the world is in peril.”

He claimed he’d “repeatedly seen how hard it is to truly let our values govern our actions” within the organization. He abruptly decamped to study poetry in England.

On the same day Ba and Wu left xAI, OpenAI researcher Zoë Hitzig resigned and published a scathing New York Times op-ed about OpenAI testing ads in ChatGPT.

“OpenAI has the most detailed record of private human thought ever assembled,” she wrote. “Can we trust them to resist the tidal forces pushing them to abuse it?”

She warned OpenAI was “building an economic engine that creates strong incentives to override its own rules,” echoing Ba’s warnings.

There’s also regulatory heat. AI watchdog Midas Project accused OpenAI of violating California’s SB 53 safety law with GPT-5.3-Codex.

The model hit OpenAI’s own “high risk” cybersecurity threshold but shipped without the required safeguards. OpenAI claims the law’s wording was “ambiguous.”

Time to panic?

The recent flurry of warnings and resignations has created a heightened sense of alarm across parts of the AI community, particularly on social media, where speculation has often outrun confirmed facts.

Not all of the signals point in the same direction. The departures at xAI are real, but may be influenced by corporate factors, including the company’s pending integration with SpaceX, rather than by an imminent technological rupture.

Safety concerns are also genuine, though companies such as Anthropic have long taken a conservative approach to risk disclosure, often flagging potential harms earlier and more prominently than their peers.

Regulatory scrutiny is increasing, but has yet to translate into enforcement actions that would materially constrain development.

What is harder to dismiss is the change in tone among the engineers and researchers closest to frontier systems.

Public warnings about recursive self-improvement, long treated as a theoretical risk, are now being voiced with near-term timeframes attached.

If such assessments prove accurate, the coming year could mark a consequential turning point for the field.

Source: https://decrypt.co/357808/ai-builders-sending-warning-signals-some-walking-away
