California: Draft Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act

The California Senate has passed bill SB (for "Senate Bill") 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act. In the preceding debate, the requirements had been softened to a certain extent at the instigation of the industry.

SB 1047 amends existing California laws, the Business and Professions Code and the Government Code. In essence, developers of a covered AI model must take certain measures even before training, including the following:

  • take cybersecurity measures against unauthorized access, misuse or unsafe modifications;
  • provide for an immediately effective shutdown option;
  • have, document, implement, store and annually review a security protocol.

Before use, developers must take further measures:

  • assess whether the model can cause "critical harms", i.e. whether it is capable of any of the following:
    • produce or use a chemical, biological, radiological or nuclear weapon;
    • cause damage of at least USD 500M through cyberattacks on critical infrastructure (one tenth of the damage the CrowdStrike incident is said to have caused) or through (semi-)autonomous behavior that has certain qualified consequences and corresponds to a criminal offense requiring intent or gross negligence;
    • cause other threats to public safety of comparable severity;
  • keep traceable information about the training and the tests;
  • take risk mitigation measures against critical harms;
  • ensure the traceability of the model so that its actions and any damage can be attributed.

In addition, an annual compliance statement must be filed with the Attorney General.

Furthermore, there is an obligation to report safety incidents, price discrimination based on the market power of the providers of powerful models is banned, and there is a certain degree of protection for whistleblowers.

These requirements relate to particularly powerful or otherwise general-purpose models, analogous to the threshold for GPAI models with systemic risk under the AI Act (Art. 51 para. 2), subject to special rules for models based on an existing model ("derivative models").

The requirements also apply to infrastructure operators such as data centers if their customers purchase computing power sufficient to train a covered model; these operators have certain KYC obligations and must be able to shut down the infrastructure for a customer in an emergency.

The fate of SB 1047 depends on Governor Gavin Newsom, who can sign or veto the bill.