By Anne-Sophie Morand and David Vasella
With the rapid advance of artificial intelligence (AI), there is a growing need in companies for structures and processes that help to ensure the safe, responsible and legally compliant use of AI technology. AI governance plays a central role here. Although functioning AI governance is a challenge for companies – like any governance – it is not only necessary in order to reduce risks, but also an opportunity that paves the way for risk-adequate innovation.
What is “AI governance”?
The term “governance” denotes a system of steering and control, i.e. the regulatory framework required to manage a company and monitor its activities.
In the context of AI, “AI governance” refers to this governance framework for AI, i.e. to the development and implementation of organizational measures, processes, controls and tools that help to make the use of AI trustworthy, responsible, ethical, legally permissible and efficient.
AI governance is usually part of a company’s general governance landscape and is often closely intertwined with data governance, i.e. the parallel or overlapping framework for managing the handling of personal and other data and information. However, it is still an area in its own right. Data governance focuses on the handling of data, while AI governance takes into account the particular challenges of AI technology. In addition, data is relatively static, while AI systems learn and evolve. Traditional data governance can therefore hardly guarantee the ethical and legally compliant use of AI.
In its scope of application, AI governance generally comprises the following aspects:
- Purchase, operation and use of AI systems. What constitutes an AI system is defined in Art. 3 para. 1 of the European AI Regulation (AI Act). Despite the EU Commission’s guidelines on the concept of an AI system, it is still unclear when a semi-intelligent system crosses the threshold to an AI system (see our FAQ). The scope of application of AI governance should therefore not be drawn too narrowly, as FINMA has also emphasized in its “Regulatory Notice 08/2024 – Governance and risk management in the use of artificial intelligence”;
- Development and sale of AI systems; and
- Development and sale of general-purpose AI models. A general-purpose AI model (GPAIM) is not the same as an AI system and is addressed separately in the EU AI Regulation: a GPAIM is an AI model that serves general purposes, is broadly applicable and can be integrated into downstream systems (Art. 3 No. 63 AI Act). An example is OpenAI’s “GPT‑4” model; the corresponding AI system would be “ChatGPT”.
Companies can also draw on the ISO standard 42001:2023 “Information technology – Artificial intelligence – Management system” for support. The standard defines requirements for an AI management system (AIMS) and supports the systematic development, deployment and use of AI systems; AI governance based on this standard can also be integrated more easily with existing management systems, e.g. for quality (ISO 9001), information security (ISO 27001) or data protection (ISO 27701) (ISO 42001, Annex D).
The use of AI tools (e.g. ChatGPT, Whisper, Claude, Perplexity, NotebookLM, Gemini, etc.) by employees at work, but in a private capacity, on private initiative or with private licenses is an issue that must be dealt with separately. Such use of AI tools by employees is usually regulated by existing internal ICT guidelines. Companies often prohibit such use or at least specify which data employees may and may not feed into these tools. It should be noted in particular that the providers of the tools do not act as processors in this case, but as controllers and therefore have a great deal of freedom in handling the data entered. In the case of company licenses, on the other hand, the providers act – albeit not without exception – as processors and are therefore under the control of the company.
Why does a company need AI governance?
From a company’s perspective, there are various reasons for implementing AI governance.
Compliance with regulatory requirements
Demanding regulatory requirements are coming into force worldwide in the digital sector. The AI Act is particularly complex, potentially far-reaching, often unclear in its application and, due to its extraterritorial effect, also relevant for companies in Switzerland. As is well known, it pursues a risk-based approach by distinguishing between prohibited practices (mainly applications that a responsible company would refrain from using of its own accord), high-risk systems (e.g. when AI is used in the work context or for credit checks), limited risks (such as chatbots) and other applications with minimal risks.
Companies domiciled in Switzerland are covered by its territorial scope of application,
- if they place AI systems on the market or put them into operation in the EU in the role of a provider, or
- if they are the provider or operator (“deployer”) of an AI system and use the output produced by the AI system in the EU. It is largely unclear when this is the case; however, the use of output in the EU is likely to presuppose a certain intention or orientation, but at the same time also covers the case that an AI system has a relevant impact on persons in the EU.
In November 2023, the Federal Council instructed DETEC (OFCOM) and the FDFA (Europe Division) to prepare an outline of possible AI regulation, which was to serve as the basis for a decision on how to proceed. This overview was published on February 12, 2025, together with the Federal Council’s decision on how it intends to tackle the topic of AI from a regulatory perspective. As was to be expected, the Federal Council does not want a Swiss AI ordinance – it has taken on board widespread concerns that such a regulation would lead to high costs. However, it has decided to implement the Council of Europe’s AI Convention, which Switzerland signed on March 27, 2025.
This is not surprising: the AI convention was significantly co-developed by Switzerland under its chairmanship. It
- is the world’s first intergovernmental agreement on AI that is binding for contracting parties and must now be incorporated into Swiss law, although there is considerable scope for implementation;
- is primarily aimed at state actors; private actors are only covered where its provisions have a direct or indirect horizontal effect among private individuals, such as the duty of equal pay in employment relationships or the provisions on racial discrimination.
Many areas will therefore not be affected. Where the law does need to be amended, the Federal Council wants to make sector-specific and technology-neutral adjustments wherever possible. General, cross-sector regulation should only be enacted in central areas relevant to fundamental rights, e.g. in data protection law. The ratification of the AI Convention is then to be flanked by legally non-binding measures, e.g. self-declaration agreements and industry solutions.
The Federal Council has instructed the FDJP, together with DETEC and the FDFA, to submit a consultation draft for the implementation of the AI Convention by the end of 2026. At the same time, a plan is to be drawn up for further, legally non-binding measures. DETEC is responsible for this. It is therefore foreseeable that additional rules will also apply in Switzerland, in some cases across the board, otherwise selectively and in addition to the existing legal framework, which is also applicable to AI, as the FDPIC has rightly pointed out.
In addition to these regulations, the existing law remains applicable to the use of AI, e.g. provisions of data protection, labor, copyright or fair trading law.
ESG aspects should not be forgotten either. Incorporating ESG principles into AI governance can help to take into account aspects of environmental protection, social responsibility and transparent corporate governance in the development and use of AI. The AI Act no longer contains any requirements in this regard, in contrast to draft versions that still required environmental impact assessments and reporting on energy consumption. However, ISO 42001 requires an assessment of whether climate change is a relevant issue for the company and mentions environmental impacts as a potential organizational objective (Annex C.2.4).
Against this backdrop, functioning AI governance can help companies to meet both current and future legal requirements. This relative security is a prerequisite for the efficient use of AI in the company.
Building trust
Trust replaces uncertainties with assumptions and thus reduces complexity. In a company’s relationship with its customers, its employees and its partners, trust is a crucial component. This applies in particular to issues that are highly complex, have a potentially high impact and at the same time are not visible or comprehensible from the outside.
Trust that companies handle the technology responsibly and only use trustworthy AI systems (or only use AI systems ethically) is therefore essential. It helps to reduce internal and external resistance to AI initiatives and to promote the acceptance of AI technologies in day-to-day business and their integration into company processes. Conversely, poor-quality results, security incidents, discrimination and other undesirable effects can lead to a loss of trust that is not easy to recover. This requires risk management and quality assurance, including testing potential AI systems, checking training data for bias, testing model accuracy, contingency plans should a critical system misbehave, etc. AI governance therefore also supports the continuity of business operations.
Appropriate and functioning AI governance therefore leads to a better understanding of AI among the stakeholders involved – employees, customers, partners and authorities – and builds and maintains trust. This is particularly true when not only legal requirements but also ethical standards and social expectations are taken into account.
This is also associated with a competitive advantage: appropriate AI governance underscores the company’s commitment to responsible conduct and transparency, including to the outside world, which can have a positive impact on its reputation. AI governance also plays an important role in promoting innovation. Companies can encourage creativity and experimentation within responsible boundaries through clear, understandable rules that are known within the company. If developers know the rules of the game and the limits within which they may operate, this not only promotes safety in dealing with AI, but also its use, which can otherwise be slowed down by more or less vague and more or less justified concerns. All of this promotes the stability and reputation of the company – in markets where trust and reliability are essential, this is a competitive factor.
Interim conclusion
AI governance is nothing fundamentally new, but rather a new area of application for governance. Nevertheless, in the beginning – before the broader emergence of accessible AI technologies – this area was less developed and only present in companies that were already heavily involved with corresponding technologies (then more under the term “machine learning”). The more clearly the risks of using AI tools emerged, the more important well thought-out AI governance became. Today, it can be seen as a strategic necessity for many companies.
Implementation of AI governance
AI first came to the fore as a basic technology and only later as a regulatory and legal issue. Within companies, it was therefore the business – the first line – that drove the topic forward. Accordingly, responsibility for the topic lay primarily with the business functions, for example with a CAIO (Chief AI Officer) or a CDAO (Chief Data & Analytics Officer).
Compliance tasks, by contrast, were often much less clearly assigned. They frequently fell to the persons or bodies responsible for data protection, e.g. a data protection officer (DPO), as those most familiar with the topic. This has now changed to a certain extent: some companies have created their own governance structure for AI, while others – probably the majority – have used existing structures and assigned responsibility for AI to them.
One way or another: AI governance must be tailored to the respective company. The following best practices can (hopefully) help with this.
Understanding the general conditions of the company
First of all, AI governance should correspond to the company’s AI strategy. This presupposes that the company has defined concrete goals for dealing with AI technology, taking into account its specifics, needs and cultural environment (see ISO 42001, clause 4). This also means that the use of AI is neither a strategy nor an end in itself – AI is nothing more and nothing less than a tool. This does not contradict the fact that the technology itself and the applications based on it are developing so rapidly that a certain amount of trial and error is necessary and sensible. Companies therefore need to develop a very clear vision.
The following questions, for example, can help:
- How is the company already using AI? AI is not just generative AI in the form of ChatGPT and related systems, the term is much broader and many companies have been using AI for a long time (e.g. recommendation and expert systems, fraud detection, speech recognition, energy control, robotics, etc.). To this end, AI applications should first be inventoried. As a rule, a clear view is initially lacking, and even application directories rarely provide information about the actual use of in-house and purchased AI in the company.
- What are the company’s values and vision? How important is trust for the company’s activities? What is the perception of the company internally and externally? What are the reputational risks associated with the use of technology? Is the company in the public eye, does the public have an emotional relationship with the company? How important are ethical concerns (bias, fairness and transparency)? Has the company already made a public commitment to ethical principles?
- How does the company earn its money? How important is innovation? What products and services does it offer, now and in the future? Can AI help to improve products or services, develop new products or improve the customer experience?
- What risks are associated with the use of AI? How sensitive is the company to operational risks, for example, how important is business continuity and in which areas? How exposed is the company to legal risks? Is it regulated, does it offer critical products, does it use a large amount of personal data?
- What regulatory framework conditions is the company subject to? For example, is it active in the financial sector, healthcare, medical devices or telecommunications? Is it listed on the stock exchange? What ESG standards should it comply with, does it have sustainability goals?
- Which purchasing, production and sales processes are important for the company? Where is the greatest potential for increasing efficiency, and how could it be realized?
- What resources are available to the company? What resources (data, expertise, infrastructure, budget) would be required? For example, is existing data suitable for the use of AI?
- How can the company deal with change and learning? Can experience from pilot projects be used? Does the company have employees who are able to acquire the necessary skills if required?
- How can responsibility for the use of AI be assigned? Are there already existing committees or roles that can be integrated? Do new responsibilities need to be created or existing roles adapted? Is the topic anchored in the management? Is there a structured way of dealing with risks?
- What governance already exists? Does the company have, for example, quality assurance, data protection, information security or other management systems, parts of which can be used or compared?
- How big and how complex is the company? What degree of formalization of the processes can it handle, or vice versa – what degree of formalization is necessary?
As mentioned, it is important to gain an understanding of the company’s AI risk landscape and assess it from the company’s perspective. This may also include a more detailed legal analysis. For example, if a company manufactures medical devices that embody or incorporate AI systems and sells them on the EU market, the EU AI Regulation becomes relevant and the product may well fall into the category of high-risk AI systems. If the company does not adhere to the relevant requirements, it risks fines, reputational damage and operational risks. When setting up AI governance, the focus here must be on compliance with the EU AI Regulation.
Define principles
When implementing AI governance, it has proven useful to define principles that guide the use of AI. Each company will have to define its own principles – there is no universal approach. However, the key principles include safety, fairness, transparency, quality and accuracy, accountability, human oversight and sustainability. These principles can be based on the objectives and controls specified in ISO 42001 (Annex C). They should not remain euphonious buzzwords, but should be filled with life; however, this is no reason to dispense with them as guiding principles.
Dealing with AI technology entails different risks depending on the application and context. It may therefore make sense to base AI governance on a risk-based approach, similar to that of the EU AI Regulation (which is quite sensible in its thrust). To this end, risk criteria should be defined; in a second step, different requirements can be imposed depending on the risk category, or an AI system impact assessment can be carried out.
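Purely by way of illustration, the following sketch shows how such a risk-based categorization could be mapped in a simple tool; the criteria, categories and requirements are assumptions made for the example and would have to be defined by each company itself (e.g. along the lines of its own risk appetite or the AI Act’s categories):

```python
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AIUseCase:
    # Illustrative criteria only; real criteria must be defined by the company.
    name: str
    used_for_social_scoring: bool
    affects_individual_rights: bool
    processes_personal_data: bool
    interacts_with_humans: bool


def categorize(use_case: AIUseCase) -> RiskCategory:
    """Assign a risk category based on simplified, illustrative criteria."""
    if use_case.used_for_social_scoring:
        return RiskCategory.PROHIBITED
    if use_case.affects_individual_rights and use_case.processes_personal_data:
        return RiskCategory.HIGH
    if use_case.interacts_with_humans:
        return RiskCategory.LIMITED
    return RiskCategory.MINIMAL


# Example mapping of risk categories to governance requirements (also illustrative).
REQUIREMENTS = {
    RiskCategory.PROHIBITED: ["do not deploy"],
    RiskCategory.HIGH: ["impact assessment", "human oversight", "management approval"],
    RiskCategory.LIMITED: ["transparency notice", "entry in AI inventory"],
    RiskCategory.MINIMAL: ["entry in AI inventory"],
}

if __name__ == "__main__":
    use_case = AIUseCase("CV pre-screening", False, True, True, True)
    category = categorize(use_case)
    print(category.value, REQUIREMENTS[category])
```

Whether such criteria are evaluated in a questionnaire, a spreadsheet or a small tool is secondary; what matters is that the categorization is documented and leads to defined requirements.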
Set a clear framework
A central aspect of implementation is then the creation of an “AI governance framework” which defines, for example, the general starting position and risks, the objectives of AI governance, definitions and scope of application, the categorization of AI systems, and the principles and responsibilities. Clear, pragmatic and comprehensible guidelines should be established, and the scope of application of the framework and a procedure for exceptions (“exception to policy”) should be defined.
Such a framework can be more or less complex, but it is essential that it is tailored to the company and that it clearly defines the key principles. This also serves to protect the management bodies, which have to define these principles, but at the same time can also delegate tasks effectively.
A successive approach is recommended – here, too, there is no need to aim for perfection from the outset. The first step in implementation should focus on the key management objectives and the relevant risks. For example, a policy with specific guidelines and, in particular, internal responsibilities, combined with a reporting system and the involvement of an independent body – more or less, depending on the company – to check admissibility may be sufficient. Over time – when AI is used more extensively or for more sensitive processes – further elements can be added, e.g. more sophisticated directories, defined test grids, topic-specific training, contractual requirements for suppliers, recommendations from an internal ethics board, management approval, etc.
Define responsibilities and competencies
Involve management level
Even if specialized departments, positions or functions are created, the management level must be involved in the development and implementation of AI governance. Without this, acceptance of governance within the company suffers; conversely, managers can provide the necessary resources. As mentioned above, managers also have a vested interest in not only setting the strategic course but also in effectively delegating responsibility, and this presupposes that level-appropriate competence exists at all levels – including the management level – that a reporting system exists and that the recipients of reports are able to understand them.
Define responsibility for projects
For every AI project, a company should then designate a responsible person or unit who bears internal responsibility for compliance (in the sense of “accountability”) and who decides on the development or use of an AI system within the scope of their function or competencies (e.g. a business owner). In addition, a contact person can be designated who need not be identical with the business owner, but who is available as a direct point of contact for questions.
Central contact point
It is highly recommended to designate a person or business unit responsible for AI governance as the central point of contact. It plays a key role in monitoring and updating AI governance (ISO 42001, for example, provides for a process for reporting concerns (A.3.3), for which a clear point of contact is useful) and should have both the necessary technical expertise and sufficient authority within the company. Such a unit can, for example, be an existing data governance team that is familiar with interdisciplinary cooperation. In larger companies that have been dealing with the topic of AI for some time, a separate department for AI governance is increasingly being set up.
Interdisciplinary working group
The complexity and versatility of AI technology require a wide range of specialist knowledge and skills. Many companies are in an orientation phase at the beginning, where they do not yet have a clear idea of the scope of the AI topic for the company. It is worth forming an interdisciplinary working group at the beginning, made up of people from different areas (e.g. lawyers; IT, security and ethics experts or even people from the business itself) in order to take the various aspects into account when implementing AI governance.
However, such a group – an accompanying expert committee – should be distinguished from a decision-making body. In particular, companies that follow a “Three Lines” approach, i.e. the separation between the revenue-generating units (the “business”) and an independent compliance function, should not undermine this separation by having decisions made by mixed committees. On the other hand, there is no reason why such committees should not make joint suggestions, as long as this does not jeopardize the independence of the decisions.
Ethics Committee
AI governance generally goes beyond ensuring compliance. AI systems should not only be permissible, but also trustworthy and ethical. Many companies have therefore established ethics councils or committees to support AI initiatives and ensure that they comply with ethical standards and social values.
Swisscom, for example, has a Data Ethics Board that also deals with AI projects as soon as they could be sensitive from an ethical perspective. Smaller companies can and should also deal with ethical issues, especially if they are active in sensitive areas or with sensitive data. If AI is to be used to evaluate employee data (e.g. for the currently much-discussed sentiment analysis), this is always the case.
Internal communication and training
Internal communication and training are essential elements of AI governance. Employees should understand the purpose of AI governance and how it affects their work. Open and honest communication with employees creates the necessary trust. This requires clear communication and appropriate training measures (the AI Act requires this anyway under the heading of “AI literacy”).
Iterative process
AI governance should be understood as a continuous and iterative process. It is not a one-off project that is completed after an implementation phase (just like other branches of governance). The structures and processes of AI governance should be reviewed regularly and adjusted if necessary. There is no other way for companies to react to new challenges and changes – be they technological, regulatory or market-related (and every system gives rise to misuse and idle time – for this reason alone, systems should be continuously adapted).
This iterative approach is a process of testing, reviewing and adapting that is intended to keep AI governance up to date with technology, regulation and practice, but at the same time requires a culture of learning. Feedback from employees should be obtained on an ongoing basis.
Continuous monitoring of the systems
Finally, AI governance processes should provide for “health checks” to continuously monitor AI systems that have already been tested. To maintain an overview of all AI applications in the company, it is also essential to keep a list of the AI systems and AI models that have been developed or purchased.
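As a purely illustrative sketch – the fields, the review interval and the helper function are assumptions, not a prescribed format – such an inventory with simple “health check” tracking could look like this:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class AISystemRecord:
    """One entry in the company's AI inventory (fields are illustrative)."""
    name: str
    owner: str                      # business owner accountable for the system
    source: str                     # e.g. "in-house" or "purchased"
    risk_category: str              # result of the company's risk categorization
    last_health_check: Optional[date] = None
    open_findings: list[str] = field(default_factory=list)


def overdue_health_checks(inventory: list[AISystemRecord],
                          max_age_days: int = 180) -> list[AISystemRecord]:
    """Return systems whose last health check is missing or older than the interval."""
    today = date.today()
    return [
        record for record in inventory
        if record.last_health_check is None
        or (today - record.last_health_check).days > max_age_days
    ]


if __name__ == "__main__":
    inventory = [
        AISystemRecord("Support chatbot", "Customer Care", "purchased", "limited",
                       last_health_check=date(2025, 1, 10)),
        AISystemRecord("Fraud detection", "Risk", "in-house", "high"),
    ]
    for record in overdue_health_checks(inventory):
        print(f"Health check due: {record.name} (owner: {record.owner})")
```

Whether the inventory is kept in a spreadsheet, a GRC tool or a small script is a question of company size and maturity; the decisive point is that it is complete, has a clear owner and is reviewed at defined intervals.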