By Anne-Sophie Morand and David Vasella

In connection with the rapid development in the field of artificial intelligence (AI), companies have a growing need for structures and processes that help ensure the safe, responsible and legally compliant use of AI technology. AI governance plays a central role here. Although functioning AI governance is a challenge for companies – like any governance – it is not only necessary to reduce risks, but also an opportunity that paves the way for risk-adequate innovation.

What is “AI governance”?

The term “governance” denotes a system of control and regulation, i.e. the regulatory framework required to manage a company and monitor its activities.

In the context of AI, “AI governance” refers to this governance framework as applied to AI, i.e. to the development and implementation of organizational measures, processes, controls and tools that help to make the use of AI trustworthy, responsible, ethical, legally permissible and efficient.

AI governance is usually part of a company’s general governance landscape and is often closely intertwined with data governance, i.e. the parallel or overlapping framework for managing the handling of personal and other data and information. However, it is an area in its own right: data governance focuses on the handling of data, while AI governance addresses the particular challenges of AI technology. In addition, data is relatively static, while AI systems learn and evolve. Traditional data governance alone can therefore hardly guarantee the ethical and legally compliant use of AI.

In terms of its scope of application, AI governance generally comprises the following aspects:

  • Purchase, operation and use of AI systems. What constitutes an AI system is defined, for the scope of the European AI Regulation (AI Act), in its Art. 3 para. 1. Despite the EU Commission’s guidelines on the concept of an AI system, it is still unclear when a semi-intelligent system crosses the threshold to an AI system (see our FAQ). The scope of application of AI governance should therefore not be drawn too narrowly, a point FINMA has also emphasized in its “Regulatory Notice 08/2024 – Governance and risk management in the use of artificial intelligence”;
  • Development and sale of AI systems; and
  • Development and sale of general-purpose AI models. A general-purpose AI model (GPAIM) is not the same as an AI system and is addressed separately in the EU AI Regulation: a GPAIM is an AI model that serves general purposes, is broadly applicable and can be integrated into downstream systems (Art. 3 No. 63 AI Act). An example is OpenAI’s “GPT‑4” model; the corresponding AI system would be “ChatGPT”.

Companies can also draw on ISO standard 42001:2023 (“Information technology – Artificial intelligence – Management system”) for support. The standard defines requirements for an AI management system (AIMS) and supports the systematic development, deployment and use of AI systems. AI governance according to this standard can also be integrated more easily with existing management systems, e.g. for quality (ISO 9001), information security (ISO 27001) or data protection (ISO 27701) (ISO 42001, Annex D).

The use of AI tools (e.g. ChatGPT, Whisper, Claude, Perplexity, NotebookLM, Gemini, etc.) by employees at work, but in a private capacity, on private initiative or with private licenses, is an issue that must be dealt with separately. Such use of AI tools by employees is usually regulated by existing internal ICT guidelines. Companies often prohibit such use or at least specify which data employees may and may not feed into these tools. It should be noted in particular that the providers of the tools do not act as processors in this case, but as controllers, and therefore have a great deal of freedom in handling the data entered. In the case of company licenses, by contrast, the providers act – albeit not without exception – as processors and are therefore under the control of the company.

Why does a company need AI governance?

From a company’s perspective, there are various reasons for implementing AI governance.

Compliance with regulatory requirements

Demanding regulatory requirements are coming into force worldwide in the digital sector. The AI Act is particularly complex, potentially far-reaching, often unclear in its application and, due to its extraterritorial effect, also relevant for companies in Switzerland. As is well known, it pursues a risk-based approach by distinguishing between prohibited practices (mainly applications that a responsible company would refrain from using of its own accord), high-risk systems (e.g. when AI is used in the employment context or for credit checks), limited risks (such as chatbots) and other applications with minimal risks.

Companies domiciled in Switzerland are covered by its territorial scope of application,

  • if they place AI systems on the market or put them into operation in the EU in the role of a provider, or
  • if they are the provider or operator (“deployer”) of an AI system and the output produced by the AI system is used in the EU. It is largely unclear when this is the case; the use of output in the EU is likely to presuppose a certain intention or orientation, but at the same time it also covers the case that an AI system has a relevant impact on persons in the EU.

In November 2023, the Federal Council instructed DETEC (OFCOM) and the FDFA (Europe Division) to prepare an outline of possible approaches to regulating AI, which was to serve as the basis for a decision on how to proceed. This overview was published on February 12, 2025, together with the Federal Council’s decision on how it intends to tackle the topic of AI from a regulatory perspective. As was to be expected, the Federal Council does not want a Swiss AI ordinance – it has taken on board widespread concerns that such a regulation would lead to high costs. However, it has decided to implement the AI Convention of the Council of Europe, which Switzerland signed on March 27, 2025.

This is not surprising: the AI Convention was significantly co-developed by Switzerland under its chairmanship. It

  • is the world’s first intergovernmental agreement on AI that is binding for contracting parties and must now be incorporated into Swiss law, although there is considerable scope for implementation;
  • is primarily aimed at state actors. Private actors are only covered where its provisions have a direct or indirect horizontal effect among private individuals; examples include the duty of equal pay in employment relationships or the provisions on racial discrimination.

Many areas will therefore not be affected. Where the Federal Council does want to amend the law, it intends to make sectoral and, wherever possible, technology-neutral adjustments. General, cross-sector regulation should only be enacted in central areas relevant to fundamental rights, e.g. in data protection law. The ratification of the AI Convention is then to be flanked by legally non-binding measures, e.g. self-declaration agreements and industry solutions.

The Federal Council has instructed the FDJP, together with DETEC and the FDFA, to submit a consultation draft for the implementation of the AI Convention by the end of 2026. At the same time, a plan is to be drawn up for further, legally non-binding measures; DETEC is responsible for this. It is therefore foreseeable that additional rules will also apply in Switzerland, in some cases across the board, otherwise selectively and in addition to the existing legal framework, which is also applicable to AI, as the FDPIC has rightly pointed out.

In addition to these regulations, there is the existing law that remains applicable to the use of AI, e.g. provisions of data protection, labor, copyright or fair trading law.

ESG aspects should not be forgotten either. Incorporating ESG principles into AI governance can help to take into account environmental protection, social responsibility and transparent corporate governance in the development and use of AI. The AI Act no longer contains any requirements in this regard, in contrast to draft versions that still required environmental impact assessments and reporting on energy consumption. However, ISO 42001 requires an assessment of whether climate change is a relevant issue for the company and mentions environmental impacts as a potential organizational objective (Annex C.2.4).

Against this backdrop, functioning AI governance can help companies to meet both current and future legal requirements. This relative security is a prerequisite for the efficient use of AI in the company.

Confidence building

Trust replaces uncertainties with assumptions and thus reduces complexity. In a company’s relationship with its customers, employees and partners, trust is a crucial component. This applies in particular to issues that are highly complex, have a potentially high impact and at the same time are not visible or comprehensible from the outside.

Confidence that companies handle the technology responsibly and only use trustworthy AI systems (or only use AI systems ethically) is therefore essential. It helps to reduce internal and external resistance to AI initiatives, promote the acceptance of AI technologies in day-to-day business and support their integration into company processes. Conversely, poor-quality results, security incidents, discrimination and other undesirable effects can lead to a loss of trust that is not easy to make up for. This requires risk management and quality assurance, including testing potential AI systems, checking training data for bias, testing model accuracy, contingency plans should a critical system misbehave, etc. AI governance therefore also supports the continuity of business operations.
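
Such checks need not be heavyweight to begin with. As a minimal sketch of what checking training data for bias might look like – assuming a tabular training set with a binary label and a protected attribute; the column names and the review threshold are purely illustrative:

```python
import pandas as pd

# Toy training data: a binary label and a protected attribute.
df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b", "b", "a"],
    "label": [1, 0, 1, 1, 1, 1, 1, 0],
})

# Selection rate (share of positive labels) per group.
rates = df.groupby("group")["label"].mean()

# Demographic parity difference: the gap between the highest and lowest rate.
# A gap above a company-defined threshold (say 0.2) would trigger closer review.
gap = rates.max() - rates.min()
print(rates.to_dict(), f"gap={gap:.2f}")
```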

Appropriate and functioning AI governance therefore leads to a better understanding of AI among the stakeholders involved – employees, customers, partners and authorities – and thus to building and maintaining trust. This is particularly true when not only legal requirements but also ethical standards and social expectations are taken into account.

This is also associated with a competitive advantage: appropriate AI governance underscores the company’s commitment to responsible conduct and transparency, including to the outside world, which can have a positive impact on its reputation. AI governance also plays an important role in promoting innovation. Companies can encourage creativity and experimentation within responsible boundaries through clear, understandable rules that are known within the company. If developers know the rules of the game and the limits within which they may operate, this not only promotes safety in dealing with AI, but also promotes its use, which can otherwise be slowed down by more or less vague and more or less justified concerns. All of this strengthens the stability and reputation of the company – in markets where trust and reliability are essential, this is a competitive factor.

Interim conclusion

AI governance is nothing fundamentally new, but rather a new area of application for governance. Nevertheless, before the broader emergence of accessible AI technologies, this area was less developed and only present in companies that were already heavily involved with the corresponding technologies (then more under the term “machine learning”). The more clearly the risks of using AI tools emerged, the more important well-thought-out AI governance became. Today, it can be seen as a strategic necessity for many companies.

Implementation of AI governance

AI first came to the fore as a basic technology and only later as a regulatory and legal issue. Within companies, it was therefore the business – the first line – that drove the topic forward. Accordingly, responsibility for the topic lay primarily with the business functions, for example with a CAIO (Chief AI Officer) or a CDAO (Chief Data & Analytics Officer).

Compliance tasks, by contrast, were often much less clearly assigned. They were frequently given to the persons or bodies responsible for data protection, e.g. a data protection officer (DPO), as those most familiar with the topic. This has now changed to a certain extent: some companies have created their own governance structure for AI, while others – probably the majority – have used existing structures and assigned responsibility for AI to them.

One way or another: AI governance must be tailored to the respective company. The following best practices can (hopefully) help with this.

Understanding the general conditions of the company

First of all, AI governance should correspond to the company’s AI strategy. This presupposes that the company has defined concrete goals for dealing with AI technology, taking into account its specifics, needs and cultural environment (see ISO 42001, clause 4). This also means that the use of AI is neither a strategy nor an end in itself – AI is nothing more and nothing less than a tool. This does not contradict the fact that the technology itself and the applications based on it are developing so rapidly that a certain amount of trial and error is necessary and sensible. Companies therefore need to develop a very clear vision.

The following questions, for example, can help:

  • How is the company already using AI? AI is not just generative AI in the form of ChatGPT and related systems; the term is much broader, and many companies have been using AI for a long time (e.g. recommendation and expert systems, fraud detection, speech recognition, energy control, robotics, etc.). To this end, AI applications should first be inventoried. As a rule, a clear view is initially lacking, and even application directories rarely provide information about the actual use of in-house and purchased AI in the company.
  • What are the company’s values and vision? How important is trust for the company’s activities? How is the company perceived internally and externally? What reputational risks are associated with the use of the technology? Is the company in the public eye, and does the public have an emotional relationship with it? How important are ethical concerns (bias, fairness and transparency)? Has the company already made a public commitment to ethical principles?
  • How does the company earn its money? How important is innovation? What products and services does it offer, now and in the future? Can AI help to improve products or services, develop new products or improve the customer experience?
  • What risks are associated with the use of AI? How sensitive is the company to operational risks; for example, how important is business continuity, and in which areas? How exposed is the company to legal risks? Is it regulated, does it offer critical products, does it use a large amount of personal data?
  • What regulatory framework conditions is the company subject to? For example, is it active in the financial sector, healthcare, medical devices or telecommunications? Is it listed on the stock exchange? What ESG standards should it comply with, and does it have sustainability goals?
  • Which purchasing, production and sales processes are important for the company? Where is the greatest potential for increasing efficiency, and how?
  • What resources are available to the company? What resources (data, expertise, infrastructure, budget) would be required? For example, is existing data suitable for the use of AI?
  • How can the company deal with change and learning? Can experience from pilot projects be used? Does the company have employees who are able to acquire the necessary skills if required?
  • How can responsibility for the use of AI be assigned? Are there already existing committees or roles that can be integrated? Do new responsibilities need to be created or existing roles adapted? Is the topic anchored in the management? Is there a structured way of dealing with risks?
  • What governance already exists? Does the company have, for example, quality assurance, data protection, information security or other management systems, parts of which can be used or drawn on for comparison?
  • How big and how complex is the company? What degree of formalization of processes can it handle – or, conversely, what degree of formalization is necessary?

As mentioned, it is important to gain an understanding of the company’s AI risk landscape and assess it from the company’s perspective. This may also include a more detailed legal analysis. For example, if a company manufactures medical devices that embody or incorporate AI systems and sells them on the EU market, the EU AI Regulation becomes relevant, and the product may well fall into the category of high-risk AI systems. If the company does not adhere to the relevant requirements, it risks fines, reputational damage and operational risks. When setting up AI governance, the focus here must be on compliance with the EU AI Regulation.

Define principles

When implementing AI governance, it has proven useful to define principles that guide the use of AI. Each company will have to define its own principles – there is no universal approach. However, the key principles include safety, fairness, transparency, quality and accuracy, accountability, human oversight and sustainability. These principles can be based on the objectives and controls specified in ISO 42001 (Annex C). They should not remain euphonious buzzwords, but should be filled with life; however, this is no reason to dispense with them as guiding principles.

Dealing with AI technology can entail different risks depending on the application and context. It may therefore make sense to base AI governance on a risk-based approach, similar to that of the EU AI Regulation (which is quite sensible in its thrust). To this end, risk criteria should be defined; in a second step, different requirements can be set depending on the risk category, or an AI system impact assessment can be carried out.
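
As an illustration of what such an internal risk triage might look like, here is a minimal sketch – the categories echo the AI Act’s terminology, but the criteria and names are assumptions a company would replace with its own:

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"   # never deployed
    HIGH = "high"               # e.g. HR or credit decisions: full review
    LIMITED = "limited"         # e.g. chatbots: transparency duties
    MINIMAL = "minimal"         # basic hygiene only

@dataclass
class AIUseCase:
    name: str
    affects_individuals: bool   # does the output affect specific persons?
    domain: str                 # e.g. "hr", "credit", "marketing"
    autonomous_decisions: bool  # acts without human review?

def categorize(uc: AIUseCase) -> RiskCategory:
    """Illustrative triage rules; each company must define its own criteria."""
    if uc.domain in {"hr", "credit"} and uc.affects_individuals:
        return RiskCategory.HIGH
    if uc.affects_individuals and uc.autonomous_decisions:
        return RiskCategory.HIGH
    if uc.affects_individuals:
        return RiskCategory.LIMITED
    return RiskCategory.MINIMAL

print(categorize(AIUseCase("CV screening", True, "hr", False)))  # RiskCategory.HIGH
```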

Set a clear framework

A central aspect of implementation is then the creation of an “AI governance framework” which defines, for example, the general starting position and risks, the objectives of AI governance, definitions and scope of application, the categorization of AI systems, and the principles and responsibilities. Clear, pragmatic and comprehensible guidelines should be established, and the scope of application of the framework and a procedure for exceptions (“exception to policy”) should be defined.
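
For orientation, the skeleton of such a framework could be captured in a simple structure like the following – the section names are illustrative, not a prescribed standard:

```python
# Hypothetical outline of an AI governance framework document.
AI_GOVERNANCE_FRAMEWORK = {
    "starting_position": "business context, current AI use, key risks",
    "objectives": "what the governance is meant to achieve",
    "definitions_and_scope": "what counts as an AI system here; who is covered",
    "system_categorization": "risk categories and the criteria behind them",
    "principles": ["safety", "fairness", "transparency", "accountability"],
    "responsibilities": "business owners, central contact point, committees",
    "exception_to_policy": "who may grant exceptions, and how they are logged",
}
```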

Such a framework can be more or less complex, but it is essential that it is tailored to the company and that it clearly defines the key principles. This also serves to protect the management bodies, which have to define these principles but at the same time can also delegate tasks effectively.

A step-by-step approach is recommended – here, too, perfection should not come at the expense of progress. The first step in implementation should focus on the key management objectives and the relevant risks. For example, a policy with specific guidelines and, in particular, internal responsibilities, combined with a reporting system and the involvement of an independent body – more or less, depending on the company – to check admissibility, may be sufficient. Over time – when AI is used more extensively or for more sensitive processes – further elements can be added, e.g. more sophisticated directories, defined test grids, topic-specific training, contractual requirements for suppliers, recommendations from an internal ethics board, management approval, etc.

Define responsibilities and competencies

Involve the management level

Even if specialized departments, positions or functions are created, the management level must be involved in the development and implementation of AI governance. Without this, governance will not gain acceptance in the company; conversely, managers can provide the necessary resources. As mentioned above, managers also have a vested interest in not only setting the strategic course, but also in effectively delegating responsibility. This in turn presupposes level-appropriate competence at all levels – including the management level – that a reporting system exists, and that the recipients of reports are able to understand them.

Define responsibility for projects

For every AI project, a company should then designate a responsible person or unit who bears internal responsibility for compliance (in the sense of “accountability”) and who decides on the development or use of an AI system within the scope of their function or competencies (e.g. a business owner). In addition, a contact person can be designated who need not be the same as the business owner, but who is available as a direct contact for questions.

Central contact point

It is highly recommended to designate a person or business unit responsible for AI governance as the central point of contact. This unit plays a key role in monitoring and updating AI governance (ISO 42001, for example, provides for a process for reporting concerns (A.3.3), for which a clear point of contact is useful) and should have both the necessary technical expertise and sufficient authority within the company. Such a unit can, for example, be an existing data governance team that is familiar with interdisciplinary cooperation. In larger companies that have been dealing with the topic of AI for some time, a separate department for AI governance is increasingly being set up.

Interdisciplinary working group

The complexity and versatility of AI technology require a wide range of specialist knowledge and skills. Many companies are initially in an orientation phase in which they do not yet have a clear idea of the scope of the AI topic for the company. It is worth forming an interdisciplinary working group at the beginning, made up of people from different areas (e.g. lawyers; IT, security and ethics experts; or people from the business itself), in order to take the various aspects into account when implementing AI governance.

However, it is important to distinguish such a group – an accompanying expert committee – from a decision-making body. In particular, companies that follow a “Three Lines” approach, i.e. the separation between the revenue-generating units, the “business”, and an independent compliance function, should not undermine this separation by having decisions made by mixed committees. On the other hand, there is no reason why such committees should not make joint suggestions, as long as this does not jeopardize the independence of the decisions.

Ethics committee

AI governance generally goes beyond ensuring compliance: AI systems should not only be permissible, but also trustworthy and ethical. Many companies have therefore established ethics councils or committees to support AI initiatives and ensure that they comply with ethical standards and social values.

Swisscom, for example, has a Data Ethics Board that also deals with AI projects as soon as they could be sensitive from an ethical perspective. Smaller companies can and should also deal with ethical issues, especially if they are active in sensitive areas or with sensitive data. If AI is to be used to evaluate employee data (e.g. for the currently much-discussed sentiment analysis), this is always the case.

Internal communication and training

Internal communication and training are essential elements of AI governance. Employees should understand the purpose of AI governance and how it affects their work. Open and honest communication with employees creates the necessary trust. This requires clear communication and appropriate training measures (the AI Act requires the latter anyway under the heading of “AI literacy”).

Iterative process

AI governance should be understood as a continuous and iterative process. It is not a one-off project that is completed after an implementation phase (just like other branches of governance). The structures and processes of AI governance should be reviewed regularly and adjusted if necessary. There is no other way for companies to react to new challenges and changes – be they technological, regulatory or market-related (and every system produces misuse and idle time – for this reason alone, systems should be constantly adapted).

This iterative approach is a process of testing, reviewing and adapting that is intended to keep AI governance up to date with technology, regulation and practice, but it also requires a culture of learning. Feedback from employees should be obtained on an ongoing basis.

Continuous monitoring of the systems

Finally, AI governance processes should provide for “health checks” to continuously monitor AI systems that have already been tested. To maintain an overview of all AI applications in the company, it is also essential to keep a list of the AI systems and AI models that have been developed or purchased.
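
As a closing illustration, such a list combined with simple health-check metadata could start as little more than the following sketch – the field names, thresholds and example entries are assumptions, not a standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in a hypothetical company AI register."""
    name: str
    owner: str                  # business owner accountable for the system
    risk_category: str          # from the company's own risk triage
    last_health_check: date
    accuracy: float             # latest measured accuracy on a test set

def needs_health_check(rec: AISystemRecord, today: date,
                       max_age_days: int = 90, min_accuracy: float = 0.9) -> bool:
    """Flag systems whose last check is stale or whose accuracy has degraded."""
    stale = (today - rec.last_health_check).days > max_age_days
    return stale or rec.accuracy < min_accuracy

register = [
    AISystemRecord("invoice-fraud-detector", "finance", "limited", date(2025, 1, 10), 0.94),
    AISystemRecord("cv-screening-assistant", "hr", "high", date(2024, 6, 1), 0.88),
]
for rec in register:
    if needs_health_check(rec, date(2025, 4, 1)):
        print(f"review due: {rec.name}")
```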