China (TC260): AI Safety Governance Framework Version 1.0

China's National Information Security Standardization Technical Committee (TC260) has published the first version of the "Artificial Intelligence Security Governance Framework". The framework is available here in English. TC260 plays a key role in the development of standards in the areas of cybersecurity, data security and data protection. These standards are generally not legally binding as long as no law refers to them (and they are to be distinguished from "national standards" with higher binding force). TC260 has already published standards in the past, for example for the safety of generative AI.

After introductory general commitments to innovation, protection, governance, responsibility, etc., the framework identifies the risks associated with the technology as such. These are essentially the known risks that also serve as guiding principles of the AI Act, but they are broken down and categorized in great detail.

The members of the value chain – developers, service providers and users – should therefore take protective measures against opacity (lack of explainability), bias and discrimination, infringement of intellectual property rights, violations of data protection law, disclosure of sensitive information (e.g. in the areas of nuclear energy and weapons technology) and the use of AI in these areas, unauthorized export of personal data, attacks on models, failures, and excessive profiling. To this end, the framework requires sufficient governance across the value chain, graduated by risk class, and formulates very specific requirements and measures. A table summarizes the risks, technical measures and governance measures.
