Musk’s ‘World War III’ Threat Resurfaces at OpenAI Trial

By Ava Mitchell

A threat Elon Musk made during the 2022 Twitter acquisition dispute has come back to haunt him. OpenAI is now using it against him in his ongoing lawsuit, arguing that he tried to intimidate the company into a settlement just days before the trial began.

What Happened in Court

OpenAI’s legal team informed the court that Musk tried to “coerce” a settlement before the trial started. They referenced a message he sent during the 2022 Twitter acquisition saga, warning of “World War III” consequences if the deal didn’t go his way. OpenAI’s lawyers aim to portray Musk as someone who relies on extreme pressure tactics to achieve his goals, rather than as a person genuinely concerned about AI safety.

Musk sued OpenAI in 2024, claiming the organization strayed from its original nonprofit mission to develop artificial general intelligence (AGI)—AI systems capable of performing any intellectual task a human can—for humanity’s benefit. Instead, he argues that OpenAI shifted its focus toward profit after Microsoft invested billions in the company. OpenAI has strongly rejected these claims, calling the lawsuit a competitive maneuver from someone who wants to hinder them while building his own AI company, xAI.

Musk’s Star Witness Has a Warning for Everyone

Stuart Russell, a computer science professor at UC Berkeley and a leading voice in AI safety research, is Musk’s only expert witness at the trial. Russell co-authored what many consider the standard textbook on artificial intelligence, which is widely used in universities globally.

But Russell’s testimony might not work in Musk’s favor. Although he shares concerns about the risks posed by unchecked AI development, he also believes that governments should regulate all frontier AI labs—including Musk’s xAI. He doesn’t see OpenAI as uniquely dangerous; rather, he views the entire rush to develop powerful AI systems without safety measures as perilous, like passing out rocket launchers during a traffic dispute.

Russell has specifically warned about an AGI arms race. This scenario involves companies and governments racing to create more powerful AI systems, fearing that a slowdown could mean falling behind, while safety takes a backseat to speed.

OpenAI Fires Back

OpenAI’s defense hinges on a clear argument: Musk co-founded the company in 2015, advocated strongly for its pursuit of powerful AI, and only turned against it after leaving the board to start his competing AI company. Essentially, they claim his concern is less about safety and more about competition.

The “World War III” message, originally sent to pressure Twitter into completing its sale to Musk, suggests a pattern: Musk makes extreme threats when things don’t go his way. OpenAI wants the jury to view his lawsuit through that lens.

OpenAI: By The Numbers

  • Founded: 2015
  • CEO: Sam Altman
  • Headquarters: San Francisco, CA
  • Sector: Artificial intelligence
  • Largest investor: Microsoft (multi-billion dollar stake)
  • Original structure: Nonprofit
  • Current structure: Capped-profit, transitioning to for-profit

What This Means for Everyday Users

If you use ChatGPT, Microsoft Copilot, or any of the many tools built on OpenAI’s technology, the trial’s outcome could impact you more than you realize. Here’s how:

If Musk wins and the court finds that OpenAI violated its original charitable mission, significant changes to the company’s operations could follow. This might limit its ability to pursue commercial partnerships or raise funds as it currently does. Such a shift could slow product development or alter how OpenAI’s tools are priced and distributed.

On the other hand, a clear victory for OpenAI could embolden the company to proceed with its plans for a full transition to a for-profit structure. Critics worry this might prioritize investor returns over safety research.

Ultimately, this case is becoming a test of whether AI companies can be held accountable for their founding promises—and who decides when those promises have been broken.

Community Reaction

“Using the Twitter ‘WWIII’ message in court is genuinely clever lawyering. Musk sent that to pressure someone into a deal. Now they’re showing the jury he does this all the time.”

— Reddit user on r/technology

“Stuart Russell testifying for Musk while also saying xAI needs to be regulated too is not the slam dunk Musk probably wanted. His own witness is arguing against the entire industry including him.”

— YouTube commenter on trial coverage

What To Watch

  • Russell’s full testimony: The jury’s reaction to an expert witness who critiques both sides might influence how much weight his arguments carry for Musk’s case.
  • OpenAI’s for-profit conversion: OpenAI is pushing to restructure into a fully for-profit company. The trial’s outcome could directly impact whether—and how—that transition occurs.
  • Musk’s counter-positioning: Keep an eye on how xAI markets itself compared to OpenAI during and after the trial. Legal battles often double as public relations contests in the tech world.
  • Government regulation signals: Russell’s testimony advocating for government oversight of all frontier AI labs may attract attention from lawmakers already focused on the AI sector.

Sources: TechCrunch — Musk’s only AI expert witness fears an AGI arms race | Ars Technica — Musk’s World War III threat haunts him at OpenAI trial

Ava Mitchell

Ava Mitchell is a digital culture journalist at Explosion.com covering social media platforms, streaming services, and the creator economy. With 4 years reporting on TikTok, Instagram, YouTube, and the apps that shape daily life, Ava specializes in explaining platform policy changes and their impact on everyday users. She previously managed social media strategy for a tech startup, giving her firsthand experience with the platforms she now covers.