
Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

California State Legislature
Full name: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Introduced: February 7, 2024
Senate voted: May 21, 2024 (32–1)
Sponsor(s): Scott Wiener
Governor: Gavin Newsom
Bill: SB 1047
Website: Bill Text

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, is a 2024 California bill intended to reduce the risks posed by frontier artificial intelligence models, the largest and most powerful foundation models. If passed, the bill would also establish CalCompute, a public cloud computing cluster for startups, researchers and community groups.

Background

The bill was motivated by the rapid increase in capabilities of AI systems in the 2020s, including the release of ChatGPT in November 2022.

In May 2023, AI pioneer Geoffrey Hinton resigned from Google, warning that humankind could be overtaken by AI within the next 5 to 20 years.[1][2] Later that same month, the Center for AI Safety released a statement signed by Hinton and other AI researchers and leaders: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Governor Newsom and President Biden issued executive orders on artificial intelligence in late 2023.[3][4] Senator Wiener says his bill draws heavily on the Biden executive order.[5]

Provisions

SB 1047 initially covers AI models with training compute over 10^26 integer or floating-point operations. The same compute threshold is used in the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. In contrast, the European Union's AI Act set its threshold at 10^25, one order of magnitude lower.[6]

In addition to the compute threshold, the bill sets a cost threshold of $100 million. The goal is to exempt startups and small companies, while covering large companies that spend over $100 million on a single training run.
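The way the two thresholds combine can be illustrated with a minimal Python sketch; the helper name and example figures below are hypothetical and are not language from the bill itself.

    # Minimal sketch (hypothetical helper, not text from SB 1047): a model is
    # initially covered only if it exceeds BOTH the compute threshold and the
    # training-cost threshold described above.
    COMPUTE_THRESHOLD_OPS = 1e26      # integer or floating-point operations
    COST_THRESHOLD_USD = 100_000_000  # $100 million spent on a training run

    def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
        """Return True only if both thresholds are exceeded."""
        return (training_ops > COMPUTE_THRESHOLD_OPS
                and training_cost_usd > COST_THRESHOLD_USD)

    # Example: a 2e26-operation run costing $150 million would be covered;
    # the same compute at a $50 million cost would not.
    print(is_covered_model(2e26, 150_000_000))  # True
    print(is_covered_model(2e26, 50_000_000))   # False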

Developers of models that exceed the compute and cost thresholds are required to conduct safety testing for the following risks:

  • Creation or use of a weapon of mass destruction
  • Cyberattacks on critical infrastructure causing mass casualties or at least $500 million of damage
  • Autonomous crimes causing mass casualties or at least $500 million of damage
  • Other harms of comparable severity

Developers of covered models are required to implement reasonable safeguards to reduce risk, including the ability to shut down the model. Whistleblowing provisions protect employees who report safety problems and incidents.

The bill establishes a Frontier Model Division to review the results of safety tests and incidents, and issue guidance, standards and best practices. It also creates a public cloud computing cluster called CalCompute to enable research into safe AI models, and provide compute for academics and startups.

Reception

Supporters of the bill include Turing Award recipients Geoffrey Hinton and Yoshua Bengio.[7] The Center for AI Safety, Economic Security California[8] and Encode Justice[9] are sponsors.

The bill is opposed by industry trade associations including the California Chamber of Commerce, the Chamber of Progress[a], the Computer & Communications Industry Association[b] and TechNet[c].[13] Meta and Google argue that the bill would undermine innovation.[14]

Public opinion

A David Binder Research poll commissioned by the Center for AI Safety Action Fund found that in May 2024, 77% of Californians support a proposal to require companies to test AI models for safety risks before releasing them.[15] A poll by the AI Policy Institute found 77% of Californians think the government should mandate safety testing for powerful AI models.[16]

See also

Notes

  1. ^ whose corporate partners include Amazon, Apple, Google and Meta[10]
  2. ^ whose members include Amazon, Apple, Google and Meta[11]
  3. ^ whose members include Amazon, Anthropic, Apple, Google, Meta and OpenAI[12]

References

  1. ^ Metz, Cade (2023-05-01). "'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead". The New York Times.
  2. ^ Lazarus, Ben (2023-05-06). "The godfather of AI: why I left Google". The Spectator.
  3. ^ "Governor Newsom Signs Executive Order to Prepare California for the Progress of Artificial Intelligence". Governor Gavin Newsom. 2023-09-06.
  4. ^ "President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence". White House. 2023-10-30.
  5. ^ Myrow, Rachael (2024-02-16). "California Lawmakers Take On AI Regulation With a Host of Bills". KQED.
  6. ^ "Artificial Intelligence – Questions and Answers". European Commission. 2023-12-12.
  7. ^ Kokalitcheva, Kia (2024-06-26). "California's AI safety squeeze". Axios.
  8. ^ DiFeliciantonio, Chase (2024-06-28). "AI companies asked for regulation. Now that it's coming, some are furious". San Francisco Chronicle.
  9. ^ Korte, Lara (2024-02-12). "A brewing battle over AI". Politico.
  10. ^ "Corporate Partners". Chamber of Progress.
  11. ^ "Members". Computer & Communications Industry Association.
  12. ^ "Members". TechNet.
  13. ^ Daniels, Owen J. (2024-06-17). "California AI bill becomes a lightning rod—for safety advocates and developers alike". Bulletin of the Atomic Scientists.
  14. ^ Korte, Lara (2024-06-26). "Big Tech and the little guy". Politico.
  15. ^ "California Likely Voter Survey: Public Opinion Research Summary". David Binder Research.
  16. ^ "AIPI Survey". AI Policy Institute.

External links