California Passes Landmark AI Law
- Katarzyna Celińska

- Oct 13
- 2 min read
California has officially enacted SB 53 – The Transparency in Frontier Artificial Intelligence Act, signed by the Governor on September 29, 2025.
This landmark legislation establishes one of the most comprehensive AI safety, transparency, and accountability frameworks in the United States.
☑️ Transparency Requirements:
Large “frontier AI developers” (with annual revenues over USD 500 million) must publish AI safety frameworks on their websites, detailing how they mitigate potential model risks and what standards they follow.
☑️ Incident Disclosure:
Developers are required to report critical safety incidents to the state — including loss of control, unauthorized access, or deceptive AI behavior that could cause injury or death.
☑️ Public Reporting & Oversight:
The Office of Emergency Services will collect and manage safety reports and allow the public to report potential AI-related dangers.
☑️ Whistleblower Protections:
Employees who disclose AI safety or security issues are legally protected from retaliation — a first of its kind in U.S. AI legislation.
☑️ Penalties:
Non-compliance may result in civil penalties up to USD 1 million.
☑️ Ethical AI Research & Governance:
The law creates a consortium under the Government Operations Agency to develop sustainable and equitable AI standards, including public computing clusters for AI research.

The EU is no longer the only jurisdiction with an AI Act: U.S. states are now stepping in, and California's SB 53 is an important milestone. It is a well-balanced law that combines AI transparency, incident disclosure, risk assessment, and cybersecurity practices. Especially interesting, and commendable, is the whistleblower protection component: employees who identify safety or ethical issues will be able to report them without fear of retaliation, fostering a culture of responsibility within AI development teams.
Organizations developing or using frontier AI models in California must prepare by:
✅ Implementing AI risk and safety frameworks.
✅ Establishing internal reporting and monitoring channels.
✅ Training teams on responsible AI use and disclosure obligations.
✅ Implementing a whistleblower protection program.
Author: Sebastian Burgemejster