OpenAI Launches Government Division with $200M Defense Deal
OpenAI just took a big step into the public sector. On Monday, the company behind ChatGPT announced a new division focused on bringing its AI tools into U.S. government operations. The first move? A $200 million pilot contract with the Department of Defense.
The initiative, called *OpenAI for Government*, pulls together the company’s existing public-sector work under one roof. It’ll also give agencies access to OpenAI’s more advanced models. In a statement, the company said the goal is to “improve the day-to-day experience of public service” and help government employees work more efficiently.
But here’s the catch: OpenAI insists this isn’t about military applications. The DoD pilot, run through the Pentagon’s Chief Digital and Artificial Intelligence Office, will focus on non-combat tasks such as logistics and paperwork. The company stressed that all use cases have to follow its guidelines.
Questions About Military Use
That’s where things get a little fuzzy. Back in January 2024, OpenAI quietly removed a line from its policy that banned military uses of ChatGPT. At the time, the company said it was just a wording change, not a shift in stance. But critics weren’t convinced. Now, with this Defense Department deal, those concerns are bubbling up again.
Still, OpenAI already has plenty of government partners. NASA, the National Institutes of Health, and the Department of the Treasury are all working with the company. Its models are also being tested at national labs like Los Alamos and Sandia.
And then there’s *Stargate*, the $500 billion private-sector AI infrastructure project backed by OpenAI, Oracle, and SoftBank, with support from the Trump administration. It’s unclear how that effort fits into this new push, but it’s another sign of how deep these ties run.
What About Musk’s xAI?
Elon Musk’s xAI, meanwhile, isn’t part of this latest move. That’s not shocking, given his very public feud with Trump earlier this month. But xAI isn’t completely locked out.
Reuters reported that a customized version of xAI’s chatbot, Grok, was being used to analyze Department of Homeland Security data. Unnamed sources claimed it might violate conflict-of-interest rules and risk exposing sensitive info. The DHS denied it, but the story still raised eyebrows.
So where does this leave things? OpenAI’s making a serious play for government contracts, even as it tries to distance itself from military uses. Whether that balance holds—well, we’ll see.
*Edited by Sebastian Sinclair and Josh Quittner*