The German Federal Office for Information Security (BSI) is the central national body for information security in Germany and the author of the BSI IT-Grundschutz (IT baseline protection). One of its focal points is artificial intelligence; to this end, the BSI maintains a topic page where studies worth reading can be found.
On August 5, 2024, the BSI published a relatively short white paper on transparency in AI systems, dated July 1, 2024. It will likely not remain visible for long amid the plethora of publications, but it attempts to shed light on the topic from a fundamental perspective.
According to the BSI, transparency in AI systems means providing information about the system, including its limitations, over the entire life cycle, from planning and design through development, validation and commissioning to use and ongoing evaluation. Transparency should not only enable stakeholders to make informed decisions, but also strengthen trustworthiness and the protection of fundamental rights, as required by the AI Act (AIA). The paper refers in particular to Art. 13 AIA (the fundamental transparency requirement for high-risk AI systems and the basis for the instructions for use with which the provider must inform the deployer; notably, this obligation is not assigned to the provider in Art. 16 AIA, but rather follows from the definition of the instructions for use in Art. 3(15) AIA) and to Art. 53 AIA (the obligation of providers of GPAI models to create technical documentation for the model and make it available to providers whose systems build on the GPAI model). However, the AIA contains various other provisions in the service of transparency, in particular Art. 50 for chatbots and other systems designed to interact directly with natural persons.
Interestingly, the BSI rightly points out that transparency can also be harmful: it can reveal new attack vectors, and disclosed limitations can be exploited. An appropriate balance must therefore be found.