Key claim: A watchdog group says OpenAI's newest coding model ran afoul of California law, and it points to OpenAI's own documentation as evidence.
On Feb 6, The Midas Project (an AI safety nonprofit led by Tyler Johnston) alleged that GPT-5.3-Codex was marked as a "high" cybersecurity risk in OpenAI's internal framework but was shipped without required safeguards. Their cited evidence is page 29 of OpenAI's system card, which says current internal monitoring "does not protect against harm... in the way that would be adequate for a Safeguards Report" and notes ambiguity about when safeguards kick in. See threadreaderapp.com for the original thread.
OpenAI's response: the model followed its safety process and is compliant, the company says.
The company told Fortune that GPT-5.3-Codex "completed our full testing and governance process" and did not show "long-range autonomy" based on proxy tests and reviews by its Safety Advisory Group. Long-range autonomy means the ability to plan and act over extended periods without human input; OpenAI says it lacks robust evaluations for that and is relying on proxy tests for now. Read OpenAI's system card at openai.com and OpenAI's response summarized at fortune.com.
What SB 53 requires: SB 53 is California's Transparency in Frontier AI Act, signed into law in 2025. It makes large AI developers publish and follow their own safety frameworks, file public transparency documents when deploying major models, and report critical safety incidents quickly (within 15 days, or within 24 hours if there's imminent harm).
The Attorney General (AG) enforces the law and can seek civil penalties up to $1,000,000 per violation. There is no private right of action under SB 53. For the law text and official summary, see gov.ca.gov.
Who labeled 'high risk': OpenAI did. The system card treats Codex as "High capability in the Cybersecurity domain" under its Preparedness Framework. Midas argues that label should have required stronger misalignment and cybersecurity safeguards before launch. OpenAI counters that those stronger safeguards apply only when high cyber capability occurs together with long-range autonomy. See OpenAI's system card at openai.com.
Why this matters for builders: If you're integrating third-party models, SB 53 changes risk management. Practical steps to consider:
Require vendor safeguards: contractually demand a copy of the vendor's safety/safeguards report for the shipped model, plus a warranty of SB 53 compliance and indemnity for AG actions. See the California AG's guidance at oag.ca.gov.
Keep a paper trail: mandate quarterly attestations, keep the right to audit the vendor's safety controls, and define breach and termination rights if the vendor changes its safety framework without notice. For legal analysis and contract tips, see omm.com.
Plan B: document a fallback model and red-team your workflows so a vendor compliance wobble doesn't stall your release.
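The fallback step above can be sketched as a thin routing layer in your integration code. Everything here is a hypothetical illustration, not a real vendor SDK: the `ModelUnavailable` exception and the model callables are stand-ins you would replace with your actual client library and error types.

```python
# Hypothetical sketch: route requests to a primary model and fall back to a
# documented secondary model if the primary is down or withdrawn (e.g. pulled
# pending a safeguards report). The exception type and model functions are
# illustrative assumptions, not a real API.

class ModelUnavailable(Exception):
    """Raised when a model endpoint is unreachable or withdrawn."""

def call_with_fallback(prompt, primary, fallback):
    """Try the primary model callable; on failure, use the fallback and
    record which model actually served the request (for your paper trail)."""
    try:
        return {"served_by": "primary", "output": primary(prompt)}
    except ModelUnavailable:
        return {"served_by": "fallback", "output": fallback(prompt)}

# Illustrative stand-ins for real vendor SDK calls.
def primary_model(prompt):
    raise ModelUnavailable("vendor paused the model")

def fallback_model(prompt):
    return f"[fallback] {prompt}"

result = call_with_fallback("summarize this diff", primary_model, fallback_model)
print(result["served_by"])
```

Logging `served_by` on every request gives you the audit record the contract terms above would rely on, and red-teaming then means running your real workloads through the fallback path before you need it.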
Context: GPT-5.3-Codex launched Feb 5; The Midas Project posted its claim on Feb 6; Fortune published OpenAI's rebuttal on Feb 10. Meanwhile, OpenAI leadership reported that ChatGPT monthly growth is "back to exceeding 10%" and Codex usage jumped about 50% after the release and a new Mac app. That momentum could be affected if SB 53 enforcement or publicity continues. See OpenAI's system card for usage and product notes at openai.com.
Bottom line: The dispute centers on whether OpenAI met SB 53's safeguards framework for a model it internally labeled as high cybersecurity capability. The law gives the Attorney General a strong enforcement tool, so vendors and customers should review contracts, documentation, and fallback plans now.