The proposed California “Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act” (SB 1047, see here) had a short life: California Governor Gavin Newsom vetoed it (Veto). He fears in particular
- that a false sense of security can arise if the law regulates only the very largest providers, since smaller models can also be dangerous;
- that the bill does not sufficiently address the actual risks, e.g. whether a model is deployed in a sensitive context, makes sensitive decisions, or processes sensitive data.
Newsom concedes that regulation should not wait for major disasters, and that California could well dare to go it alone. However, he insists, regulation must be grounded in concrete experience and science:
> I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities.
Here, as in the discussions in Switzerland, a certain tension is evident between the aim of preventing risks and the demand to regulate only on the basis of concrete data. This tension can only be resolved if regulation is able to react quickly:
> Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.
In Switzerland, such speed is anything but guaranteed outside of emergency law, which raises the question of how far the legislator can actually rely on empirical evidence.