California Poised to Enact Nation’s First Broad Regulation of AI


Updated September 30, 2024

On September 29, one day before the deadline, Governor Gavin Newsom vetoed SB 1047. In his veto message, Governor Newsom stated that while he agreed that California could not “afford to wait for a major catastrophe to occur before protect[ing] the public,” SB 1047 was not the right way to do so. In Newsom’s view, the bill was too prescriptive and lacked the necessary flexibility: “[a]daptability is critical as we race to regulate a technology still in its infancy,” and “any framework for effectively regulating AI needs to keep pace with the technology itself.” The governor also pointed out that “32 of the world’s 50 leading AI companies” call California home, alluding to the importance of ensuring that any regulation of AI not have an overly chilling effect on innovation.

While SB 1047 may still impact the way state legislators craft future AI regulations (much as Washington State’s failed Privacy Act has shaped state approaches to consumer data privacy), California is, at least for the moment, back to the drawing board regarding how to approach AI.


California stands on the verge of implementing comprehensive AI regulation that would, among other things, seek to hold AI developers accountable in the event that their advanced models cause major harms. If enacted, California’s Senate Bill 1047 (“SB 1047”) would represent the broadest regulation of AI to date in the U.S. (following in the footsteps of the EU AI Act, which arguably is even stronger than SB 1047; read more about the EU AI Act in our ongoing series here and here). SB 1047 has evolved significantly since it was first introduced, due at least in part to significant lobbying from various groups, including Anthropic, the developer of the LLM Claude. On August 29, one day before the end of the legislative session, the California Senate officially passed the bill by a vote of 29-9, sending it to Governor Gavin Newsom’s desk. Key aspects of SB 1047 include mandating that regulated AI companies issue public-facing safety reports, creating liability for AI companies when their models cause certain widespread harms, and protecting AI whistleblowers.

Applicability

SB 1047 applies only to certain advanced AI models that meet cost- and computing-power-related thresholds (defined as “Covered Models”), as well as certain “derivative” AI models generated by Covered Models. The bill’s unit of measurement for computing power is the number of floating-point operations (a basic type of computer arithmetic) executed by the model, or “FLOPs” for short. For models developed prior to January 1, 2027, the thresholds are:

  • Models trained using more than 10^26 FLOPs and costing more than $100 million to train; and
  • Covered Models fine-tuned using at least 3 × 10^25 FLOPs and costing more than $10 million to fine-tune.
Notably, no AI models currently in existence satisfy these thresholds. While the thresholds may seem high, it is important to view them in the context of both the considerable compute required to train even smaller models and the significant monetary investments being made in the AI industry, which are already reportedly in the hundreds of billions of dollars annually. The ultimate scope of applicability remains uncertain, as models become more complex, compute (potentially) becomes cheaper, and the California Government Operations Agency sets and revises compute thresholds.
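
To put these figures in context, the following Python sketch is purely illustrative. The threshold constants restate the bill’s numbers as described above; the sustained cluster throughput used in the timing example is an assumed value chosen for illustration, not anything drawn from SB 1047.

```python
# Illustrative sketch of SB 1047's "Covered Model" training threshold:
# more than 10^26 FLOPs AND more than $100 million in training cost.
# The cluster throughput below is an assumed example, not from the bill.

TRAINING_FLOP_THRESHOLD = 1e26         # floating-point operations
TRAINING_COST_THRESHOLD = 100_000_000  # US dollars

def is_covered_training_run(total_flops: float, cost_usd: float) -> bool:
    """Both the compute and the cost thresholds must be exceeded."""
    return total_flops > TRAINING_FLOP_THRESHOLD and cost_usd > TRAINING_COST_THRESHOLD

print(is_covered_training_run(2e26, 150_000_000))  # True
print(is_covered_training_run(5e25, 150_000_000))  # False: compute below threshold

# Scale example: a hypothetical cluster sustaining 1e19 FLOP per second would
# need 1e26 / 1e19 = 1e7 seconds (roughly 116 days) of continuous training
# to accumulate the threshold amount of compute.
sustained_flops_per_second = 1e19  # assumed, for illustration only
seconds_to_threshold = TRAINING_FLOP_THRESHOLD / sustained_flops_per_second
print(f"~{seconds_to_threshold / 86_400:.0f} days at 1e19 FLOP/s")
```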

Obligations

Generally, SB 1047 imposes a duty on anyone developing or fine-tuning Covered Models to “take reasonable care to avoid [their models] posing an unreasonable risk of causing or materially enabling a critical harm.” The bill specifies harms that rise to the level of “critical harm,” including:

  • “[T]he creation or use of [a] chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties”;
  • “Mass casualties or at least [$500m] of damage resulting from an [AI] model engaging in conduct that . . . [a]cts with limited human oversight, intervention, or supervision [and] [r]esults in death, great bodily injury, property damage, or property loss, and would, if committed by a human, constitute a crime . . . that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime”; and
  • “Other grave harms to public safety and security that are of comparable severity to the [other defined] harms.”

Developers are also subject to obligations at various stages of the development and deployment process: prior to any training, during training, and after deployment. Many of these obligations must be documented by the developer in a written “safety and security protocol” (“SSP”), which must also be made (at least partially) public. SSPs are “documented technical and organizational protocols” that a developer must use to “manage the risks of developing and operating covered models . . . across their life cycle.” Developers must also be able to implement a “full shutdown” (i.e., a full cessation of operation) of Covered Models within their control.

Before Release

Once a developer has completed training or fine-tuning a Covered Model, but prior to deployment, it must conduct assessments to determine whether the model “is reasonably capable of causing or materially enabling a critical harm.” The results of these tests must be recorded and retained, with enough detail to allow a third party to replicate them, for as long as the model is available commercially, publicly, or for “foreseeably public use,” and for a period of 5 years thereafter. The developer must take reasonable care to implement “appropriate safeguards to prevent . . . critical harm(s),” and must not use or deploy models if there is “an unreasonable risk that the [model] will cause or materially enable a critical harm.”

Ongoing

Once a Covered Model is deployed, developers must conduct annual reviews of their SSP, as well as annual assessments to determine if the model is reasonably capable of causing or materially enabling a critical harm. The developer must also submit an annual statement to the California AG that, at a minimum, provides the outcomes of its annual assessments and describes the process used by the signatory to verify the developer’s compliance with its obligations under SB 1047. Additionally, beginning on January 1, 2026, developers are required to retain a third-party auditor to conduct an independent audit of their model, the (redacted) results of which are required to be both publicly posted and sent to the California AG.

In the event of any “Artificial Intelligence Safety Incident” involving a Covered Model, the developer is required to notify the California AG within 72 hours of learning of the incident or “learning facts sufficient to establish a reasonable belief that an . . . incident has occurred.” SB 1047 defines an Artificial Intelligence Safety Incident as “an incident that demonstrably increases the risk of a critical harm occurring” because of:

  • “A [model] autonomously engaging in behavior other than at the request of a user”;
  • The “[t]heft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights”;
  • “The critical failure of technical or administrative controls”; or
  • “Unauthorized use . . . to cause or materially enable critical harm.”

Enforcement

The California AG may enforce SB 1047 by bringing civil actions for violations. For violations that have not yet resulted in any actual harm and do not represent “an imminent risk or threat to public safety,” the AG may seek injunctive relief (plus attorney’s fees). For violations stemming from actual harm or imminent risk, the AG may seek fines of up to 10% of the cost of compute used to train a model for the first violation, and of up to 30% for subsequent violations. SB 1047 also specifies that whistleblowers are protected under existing state law. The bill does not impose any criminal penalties and does not create any private right of action.
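
As a rough illustration of how these penalty ceilings scale, the short sketch below applies the 10% and 30% caps to a hypothetical training-compute cost. The $150 million figure is an assumed example rather than a number from the bill, and any actual penalty would depend on the facts of the case.

```python
# Illustrative only: SB 1047 caps civil penalties at a percentage of the cost
# of the compute used to train the model (10% for a first violation, 30% for
# subsequent violations). The training cost below is a hypothetical example.

FIRST_VIOLATION_CAP = 0.10
SUBSEQUENT_VIOLATION_CAP = 0.30

def max_penalty(training_compute_cost_usd: float, first_violation: bool) -> float:
    """Return the statutory ceiling, not the amount a court would award."""
    rate = FIRST_VIOLATION_CAP if first_violation else SUBSEQUENT_VIOLATION_CAP
    return training_compute_cost_usd * rate

hypothetical_cost = 150_000_000  # assumed training-compute cost in USD
print(max_penalty(hypothetical_cost, first_violation=True))   # 15,000,000.0
print(max_penalty(hypothetical_cost, first_violation=False))  # 45,000,000.0
```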

Compute Cluster “Know Your Customer”

SB 1047 also creates “Know Your Customer” (“KYC”) obligations for operators of large “Compute Clusters,” defined as any “set of machines transitively connected by data center networking of over 100 gigabits per second that has a theoretical maximum computing capacity of at least 10^20 [FLOPS] and [capable of being] used for training artificial intelligence.” Operators of Compute Clusters are required to, among other things, collect certain information from their customers, assess whether prospective customers intend to utilize the compute to train a Covered Model, and be able to implement a full shutdown of the resources under their control.
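
For readers who think in hardware terms, the sketch below restates the statutory Compute Cluster definition as a simple threshold check. The field names and example values are our own illustrative assumptions, not language from the bill.

```python
# Illustrative restatement of SB 1047's "Compute Cluster" definition:
# machines transitively connected by data-center networking of over 100
# gigabits per second, with theoretical maximum capacity of at least
# 10^20 FLOPS, usable for training AI. Names and values are assumed.

from dataclasses import dataclass

NETWORK_THRESHOLD_GBPS = 100      # "over 100 gigabits per second"
CAPACITY_THRESHOLD_FLOPS = 1e20   # theoretical maximum computing capacity

@dataclass
class Cluster:
    network_gbps: float           # inter-machine networking bandwidth
    peak_flops: float             # theoretical maximum FLOPS
    usable_for_ai_training: bool

def triggers_kyc_obligations(c: Cluster) -> bool:
    return (
        c.network_gbps > NETWORK_THRESHOLD_GBPS
        and c.peak_flops >= CAPACITY_THRESHOLD_FLOPS
        and c.usable_for_ai_training
    )

print(triggers_kyc_obligations(Cluster(400, 2e20, True)))   # True
print(triggers_kyc_obligations(Cluster(100, 5e19, True)))   # False
```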

Whistleblower Protections

SB 1047’s whistleblower protections come in the wake of the revelation earlier this year that OpenAI had been placing provisions in its employment contracts seeking to limit certain public disclosures, including some that might fall under the umbrella of whistleblowing. Under the bill, developers would be required to create a confidential internal process through which individuals could anonymously disclose information about potential violations or risks. Developers would also need to provide anyone working on the development of a Covered Model, including contractors, with “clear notice . . . of their rights and responsibilities” related to whistleblowing. The bill also prohibits employment agreements from including terms or conditions that would prevent employees or contractors from disclosing information to the AG or the state Labor Commissioner, and bars retaliation against whistleblowers.

Whether SB 1047 will be enacted, however, remains unclear. SB 1047 is highly contentious, with various high-profile publications, companies, and individuals coming out both in favor of and in opposition to it. OpenAI opposes SB 1047, instead favoring legislation at the federal level with preemptive effect. Anthropic, on the other hand, which engaged in extensive lobbying related to SB 1047, has expressed some support (though it has stopped short of endorsing the bill, stating instead that its benefits likely outweigh its costs). However, we have yet to hear from the person whose opinion regarding SB 1047 will decide its ultimate fate: Governor Newsom. Newsom has until September 30 to sign or veto the bill. According to some commentators, a legislative override of any veto appears unlikely, leaving the ultimate decision as to whether the bill becomes law to Newsom. Newsom faces intense pressure regarding the bill, and, at least for now, it is unclear how he will respond.

Even if Newsom ultimately rejects SB 1047, the insight it provides into what is and is not important – for industry, for experts, and for the public – is valuable. As it has with consumer privacy protections, California could very well lead the way on state regulation of AI, and SB 1047, whether or not it becomes law, likely represents one of the paths other states might follow.

Ben Mishkin and Daniel Kilburn are attorneys in Cozen O’Connor’s Technology, Privacy and Data Security Group.
