
A nine-person federal jury will begin deliberating Monday in Oakland on whether OpenAI CEO Sam Altman and president Greg Brockman violated charitable-trust law when they converted the nonprofit AI lab they co-founded with Elon Musk into an $852-billion for-profit corporation. A verdict against the executives could force the removal of both men and the unwinding of a restructuring that underpins hundreds of billions of dollars in investor commitments, putting the future of the world's most widely used AI platform in legal jeopardy.
An advisory verdict could arrive as early as next week. Judge Yvonne Gonzalez Rogers, presiding at the Ronald V. Dellums Federal Building, retains sole authority over the final liability ruling. She will also open a parallel remedies phase Monday to consider penalties, including the ouster of Altman and Brockman and up to $134 billion in disgorgement from OpenAI and Microsoft, should OpenAI be found liable.
Musk Charges OpenAI Diverted His $38 Million to Enrich Corporate Insiders
Elon Musk, who co-founded OpenAI alongside Altman and Brockman in December 2015 before departing the board in 2018, is suing the company and its leadership on two civil claims: breach of charitable trust and unjust enrichment. He alleges that roughly $38 million he donated to the nonprofit between 2015 and 2017 was diverted for unauthorized commercial purposes, and that Altman and Brockman reneged on founding commitments to keep the company's mission altruistic and its technology open-source.
Musk is seeking the removal of Altman and Brockman from their leadership roles, the unwinding of OpenAI's October 2025 recapitalization into a public benefit corporation, and the redirection of up to $134 billion in what his legal team calls "wrongful gains" from OpenAI and Microsoft to OpenAI's nonprofit foundation. On the stand, Musk renounced any personal financial benefit, framing the relief as a restoration of the charitable mission.
A ruling in Musk's favor could jeopardize Microsoft's more than $13 billion in cumulative investment in OpenAI and disrupt the company's anticipated path toward an IPO near a $1 trillion valuation.
Brockman's 2017 Journal Called the Nonprofit Commitment 'a Lie'
The most damaging piece of evidence in the case is not an email from Altman. It is a personal journal entry by Greg Brockman, written in November 2017 after a meeting with co-founder Ilya Sutskever: "Cannot say we are committed to the non-profit... if three months later we're doing b-corp then it was a lie." Judge Gonzalez Rogers cited that entry directly in her January 2026 ruling that sent the case to trial, finding "ample evidence" and rejecting nearly every OpenAI motion to dismiss.
Three weeks of testimony produced a parade of Silicon Valley witnesses, including Altman, Brockman, Microsoft CEO Satya Nadella, and Musk himself, and ultimately came down to two competing narratives: that Altman and Brockman "stole a charity," as Musk's lead counsel Steven Molo told the jury, or that Musk simply "didn't get his way at OpenAI," as the defense framed it.
OpenAI Says Musk Sought to Control AGI, Then Walked Away
OpenAI's defense, presented by attorneys Sarah Eddy and William Savitt, countered that no binding commitments were ever made to Musk about corporate structure, and that his donations came with no restrictions. Eddy told jurors that Musk was not motivated by the nonprofit mission but by a desire for dominance. "He wanted dominion over AGI," she said. "That's why this was such a high-stakes conversation." She pointed to testimony that Musk had discussed his children inheriting control of OpenAI as evidence of that ambition.
Eddy also noted that as early as 2017 — years before the lawsuit — Musk himself had pushed to create a for-profit subsidiary and fought to control it, contradicting his claim to have been a committed nonprofit steward.
Altman, who testified earlier in the trial, painted Musk as a power-seeker whose true goal was control over the development of artificial general intelligence. "No, you can't steal it, but Mr. Musk did try to kill it," Altman testified, referring to OpenAI. He added that Musk had launched the rival xAI, attempted to recruit away OpenAI researchers, and engaged in what Altman described as "business interference." Altman also testified that Musk's 2018 departure was both a source of instability and, for many employees, a morale boost.
Microsoft, a co-defendant on an aiding-and-abetting theory, argued separately that its investment was the very thing that kept OpenAI solvent long enough to build what Musk now wants returned. Nadella testified that the deal was Microsoft's defense against becoming "the next IBM."
Safety Dispute from 2018 Surfaces as Trophy in Oakland Courtroom
OpenAI's attorneys brought a physical object to court to illustrate the company's safety commitment: a small golden trophy of a donkey's rear end, inscribed with the words "Never stop being a jackass for safety." The trophy belonged to Joshua Achiam, OpenAI's chief futurist. In 2018, at an all-hands meeting where Musk announced his departure, Achiam challenged Musk's plan to build a competing AI company as fast as possible, calling the approach "unsafe and reckless." Musk, Achiam testified, "snapped and called me a jackass." Colleagues, including then-OpenAI researcher Dario Amodei, now CEO of Anthropic, and David Luan responded at the next all-hands by presenting Achiam with the trophy. Judge Gonzalez Rogers declined to admit it as formal evidence, so it never appeared before the jury.
Musk Skips Closing Arguments to Join Trump's Beijing Delegation
Musk was absent from Thursday's closing arguments — the culmination of the largest civil trial of his life — because he was traveling in Beijing as part of President Donald Trump's state-visit delegation, alongside Tim Cook, Jensen Huang, and Larry Fink. His attorney Steven Molo apologized to the jury on his behalf.
Outside the courthouse, protesters added their own commentary. One demonstrator circled the building in a costume depicting Musk holding a prop bag of ketamine and driving a miniature Cybertruck. Another held a photo of Altman alongside a sign reading, "Stop AGI or we're all gonna die."
Verdict Could Set Binding Rules for Every AI Lab That Started as a Charity
Legal observers have called Musk v. Altman the most consequential legal proceeding in AI history — not simply because of the principals involved, but because of the questions it forces the legal system to answer. At its core, the case asks whether a nonprofit organization can convert itself into a for-profit entity without the consent of its original donors, and whether those donors can seek restitution when they believe the mission has been abandoned. There is no settled legal precedent for this at the scale and speed at which AI development operates.
Both sides have described the case, with rare agreement, as one that will shape the next decade of governance for AI labs that started as charities and have become the most valuable private companies in the world. Anthropic, founded by former OpenAI employees including Dario Amodei, has watched the proceedings closely. A ruling against OpenAI could establish new fiduciary duties for AI organizations and reshape governance frameworks for research labs globally — disrupting the funding structures that tie frontier AI development to national security considerations.
Meanwhile, Musk's competing AI venture xAI is expected to go public as a part of SpaceX as early as June at a target valuation of $1.75 trillion — a figure OpenAI's legal team cited repeatedly as context for Musk's motivations in bringing the suit.
700 Million ChatGPT Users Face Uncertainty Over Governance, Access, and Safety Oversight
If Judge Gonzalez Rogers sides with Musk, the hundreds of millions of people who rely on OpenAI's products — from ChatGPT, which 700 million people use weekly, to Microsoft Copilot — will face uncertainty about the future governance, safety oversight, and product continuity of the world's leading AI platform. The remedies phase could result in the removal of the executives who built those products, the unwinding of the corporate structure that funds their development, and the redirection of billions of dollars from commercial investors to a nonprofit foundation.
A verdict for OpenAI would, conversely, ratify a model in which a charity's mission can be superseded by commercial imperatives — with no legal mechanism for early donors to demand accountability. The California Attorney General, who oversees nonprofit conversions, has already approved OpenAI's restructuring process; evidence from the trial could nonetheless draw fresh regulatory scrutiny.
Whatever the advisory verdict says, the trial has already done something significant: it has subjected the governance of artificial intelligence to public examination before nine ordinary citizens, whose conclusions will echo through the industry regardless of what the judge ultimately decides.
ⓒ 2026 TECHTIMES.com All rights reserved. Do not reproduce without permission.




