China’s National Information Security Standardization Technical Committee (TC260) has published the first version of its “Artificial Intelligence Security Governance Framework”. An English version of the framework is available here. TC260 plays a key role in developing standards in the areas of cybersecurity, data security and data protection. These standards are generally not legally binding unless a law refers to them (and they are to be distinguished from “national standards”, which carry greater binding force). TC260 has already published standards in the past, including on the security of generative AI.
After introductory general commitments to innovation, protection, governance, responsibility and the like, the framework identifies the risks associated with the technology as such. These are essentially the known risks that also underpin the AI Act, but they are broken down and categorized in far greater detail.
The members of the value chain (developers, service providers and users) should therefore take protective measures against: opacity (lack of explainability); bias and discrimination; infringement of intellectual property rights; violations of data protection law; the disclosure of sensitive information, e.g. in the areas of nuclear energy and weapons technology, and the use of AI in those areas; the unauthorized export of personal data; attacks on models; failures; and excessive profiling. To this end, the framework calls for sufficiently graduated risk classes and for governance across the value chain, and it formulates very specific requirements and measures. A table summarizes the risks, the technical measures and the governance measures.