California Bill Could Regulate AI Safety

A new bill that has advanced to the California Assembly floor represents both a significant step forward in AI governance and a risk to the technology’s innovative growth. Officially called California Senate Bill 1047 – and also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act – the bill is meant to regulate large-scale AI models in the state of California.

Authored by State Senator Scott Wiener, this bill would require AI companies to test their models for safety. Specifically, the bill targets “covered models,” which are AI models that exceed certain compute and cost thresholds. Any model that costs more than $100 million to train would fall under the jurisdiction of this bill.
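
As a purely illustrative sketch, a developer self-assessing against the bill’s cost threshold might run a check like the one below. The function name, inputs, and logic here are assumptions made for illustration; the bill’s actual “covered model” definition also involves a compute threshold, and it is a legal test, not a programmatic one.

```python
# Hypothetical, simplified self-check against the bill's $100 million
# training-cost threshold. All names here are illustrative assumptions,
# not anything defined by the bill itself.

COVERED_COST_THRESHOLD_USD = 100_000_000  # cost threshold cited in the bill

def is_covered_model(training_cost_usd: float) -> bool:
    """Return True if training cost exceeds the bill's cost threshold."""
    return training_cost_usd > COVERED_COST_THRESHOLD_USD

print(is_covered_model(150_000_000))  # True: would fall under the bill
print(is_covered_model(20_000_000))   # False: below the cost threshold
```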

As of August 27, 2024, the bill has passed the California Assembly Appropriations Committee and will soon advance to the Assembly floor for a final vote.

California Senate Bill 1047 includes a variety of requirements for builders of large AI models. One of these is a “full shutdown” capability that enables someone with authority to immediately shut down a model that is behaving unsafely or being used under dangerous circumstances.
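
The bill doesn’t dictate how a full shutdown must be implemented. One minimal sketch, assuming a hypothetical in-process serving layer, is a kill switch that refuses all further requests once tripped; every name below is an illustrative assumption, not anything prescribed by the bill.

```python
# Minimal, hypothetical sketch of a "full shutdown" control for a model
# serving layer. Class and method names are illustrative assumptions.
import threading

class ModelService:
    def __init__(self) -> None:
        # Thread-safe flag so any authorized operator can trip the switch.
        self._shutdown = threading.Event()

    def full_shutdown(self) -> None:
        """Trip the kill switch; all further requests are refused."""
        self._shutdown.set()

    def handle_request(self, prompt: str) -> str:
        if self._shutdown.is_set():
            raise RuntimeError("Model is shut down; request refused.")
        return self._run_inference(prompt)

    def _run_inference(self, prompt: str) -> str:
        return f"(model output for: {prompt})"  # placeholder inference

service = ModelService()
print(service.handle_request("hello"))
service.full_shutdown()  # an authorized operator trips the switch
# service.handle_request("hello")  # would now raise RuntimeError
```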

On top of this, developers would be required to produce a written safety and security protocol that spells out how they would handle a worst-case scenario with the AI model. Companies such as Amazon, Google, Meta, and OpenAI have already made voluntary pledges to the Biden Administration to ensure the safety of their AI products. That said, this new bill would give the California government certain powers to enforce its regulations.

Additionally, California Senate Bill 1047 would require companies to retain an unredacted and unchanged copy of the model’s safety and security protocol for as long as the model is in use, plus five years. This is meant to ensure that developers maintain a complete and accurate record of their safety measures, allowing for thorough audits and investigations if needed. If an adverse event were to occur with the model, this record would make it possible to prove that developers were adhering to safety standards – or that they weren’t.
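
The retention window itself reduces to simple date arithmetic: keep the protocol while the model is in use, then for five more years. A minimal sketch, with a hypothetical function name:

```python
# Hypothetical calculation of the record-retention deadline: the protocol
# must be kept as long as the model is in use, plus five years after.
from datetime import date

def retention_deadline(model_retired_on: date) -> date:
    """Five years after the model is taken out of use.

    Naive sketch: does not handle the Feb 29 edge case.
    """
    return model_retired_on.replace(year=model_retired_on.year + 5)

print(retention_deadline(date(2027, 3, 1)))  # 2032-03-01
```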

In short, the bill is meant to prohibit companies from making a model commercially available if there is an unreasonable risk of causing or enabling harm.

Of course, it’s easy to put some words on a page. It’s much more difficult to actually follow through on the promises those words make. To that end, the bill would also create the Board of Frontier Models within the Government Operations Agency. This board would provide high-level guidance on AI policy and regulation, approve regulations proposed by the Frontier Model Division, and ensure that oversight measures keep pace with the explosion of AI technology.

This bill also gives the California Attorney General the power to address potential harms caused by AI models. The Attorney General would have the authority to take action against developers whose AI models cause severe harm or pose imminent public safety threats. This person would also be able to bring civil actions against non-compliant developers and enforce penalties for violations.

If the bill passes, developers would have until January 1, 2026, to begin retaining a third-party auditor annually to perform an independent compliance audit. Developers would be required to keep an unredacted copy of the audit report and grant the Attorney General access to it upon request.

As one might imagine, this bill is causing an uproar among the Silicon Valley elite. One of the biggest concerns is that it could hamper innovation in the AI community. Many of the country’s AI companies are based in California, so the bill would have major ramifications for the entire U.S. tech industry. Some critics believe the regulations will slow companies down and allow foreign organizations to gain ground.

Additionally, there is debate around the definitions of “covered models” and “critical harm.” While both phrases appear many times within the bill, some consider their definitions too broad or vague, which could lead to overregulation.

That said, the bill also has many supporters. In fact, Elon Musk has thrown his support behind it, stating on X that he has been “an advocate for AI regulation, just as we regulate any product/technology that is a potential risk.”

As of right now, we don’t know if or when the bill will pass the Assembly floor’s final vote. If it does, it will go to the Governor for either a signature or a veto.

California has an opportunity to shape the future of AI development with this bill, and it will be interesting to see which way the decision swings.
