The proposed California "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (SB 1047, see here) was short-lived: California Governor Gavin Newsom vetoed it, i.e. refused to sign it (Veto). In particular, he fears:

  • that a false sense of security may arise if the law regulates only the very largest providers, since smaller models can also be dangerous;
  • that the bill does not sufficiently address the real risks, e.g. whether a model is operated in a sensitive context, makes sensitive decisions or uses sensitive data.

It is true, he concedes, that regulation should not wait for a major disaster, and California could certainly dare to go it alone. However, it must be based on concrete experience and science:

I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities.

Here, as in the discussions in Switzerland, a certain tension is evident between the aim of preventing risks and the requirement to regulate only on the basis of concrete data. This tension can only be resolved if regulation is able to react quickly:

Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.

In Switzerland, this is anything but guaranteed outside of emergency law, which raises the question of how far the legislator can actually rely on empirical evidence.