datenrecht.ch

Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law

Text of the Council of Europe Convention, adopted on 17 May 2024. Each article is followed by the corresponding text of the Explanatory Report.


Preamble

The member States of the Council of Europe and the other signatories hereto,

Considering that the aim of the Council of Europe is to achieve greater unity between its members, based in particular on the respect for human rights, democracy and the rule of law;

Recognising the value of fostering co-operation between the Parties to this Convention and of extending such co-operation to other States that share the same values;

Conscious of the accelerating developments in science and technology and the profound changes brought about through activities within the lifecycle of artificial intelligence systems, which have the potential to promote human prosperity as well as individual and societal well-being, sustainable development, gender equality and the empowerment of all women and girls, as well as other important goals and interests, by enhancing progress and innovation;

Recognising that activities within the lifecycle of artificial intelligence systems may offer unprecedented opportunities to protect and promote human rights, democracy and the rule of law;

Concerned that certain activities within the lifecycle of artificial intelligence systems may undermine human dignity and individual autonomy, human rights, democracy and the rule of law;

Concerned about the risks of discrimination in digital contexts, particularly those involving artificial intelligence systems, and their potential effect of creating or aggravating inequalities, including those experienced by women and individuals in vulnerable situations, regarding the enjoyment of their human rights and their full, equal and effective participation in economic, social, cultural and political affairs;

Concerned by the misuse of artificial intelligence systems and opposing the use of such systems for repressive purposes in violation of international human rights law, including through arbitrary or unlawful surveillance and censorship practices that erode privacy and individual autonomy;

Conscious of the fact that human rights, democracy and the rule of law are inherently interwoven;

Convinced of the need to establish, as a matter of priority, a globally applicable legal framework setting out common general principles and rules governing the activities within the lifecycle of artificial intelligence systems that effectively preserves shared values and harnesses the benefits of artificial intelligence for the promotion of these values in a manner conducive to responsible innovation;

Recognising the need to promote digital literacy, knowledge about, and trust in the design, development, use and decommissioning of artificial intelligence systems;

Recognising the framework character of this Convention, which may be supplemented by further instruments to address specific issues relating to the activities within the lifecycle of artificial intelligence systems;

Underlining that this Convention is intended to address specific challenges which arise throughout the lifecycle of artificial intelligence systems and encourage the consideration of the wider risks and impacts related to these technologies including, but not limited to, human health and the environment, and socio-economic aspects, such as employment and labour;

Noting relevant efforts to advance international understanding and co-operation on artificial intelligence by other international and supranational organisations and fora;

Mindful of applicable international human rights instruments, such as the 1948 Universal Declaration of Human Rights, the 1950 Convention for the Protection of Human Rights and Fundamental Freedoms (ETS No. 5), the 1966 International Covenant on Civil and Political Rights, the 1966 International Covenant on Economic, Social and Cultural Rights, the 1961 European Social Charter (ETS No. 35), as well as their respective protocols, and the 1996 European Social Charter (Revised) (ETS No. 163);

Mindful also of the 1989 United Nations Convention on the Rights of the Child and the 2006 United Nations Convention on the Rights of Persons with Disabilities;

Mindful also of the privacy rights of individuals and the protection of personal data, as applicable and conferred, for example, by the 1981 Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (ETS No. 108) and its protocols;

Affirming the commitment of the Parties to protecting human rights, democracy and the rule of law, and fostering trustworthiness of artificial intelligence systems through this Convention,

Have agreed as follows:


Explanatory Report

6. The Preamble reaffirms the commitment of the Parties to protecting human rights, democracy and the rule of law and recalls international legal instruments and treaties of the Council of Europe and the United Nations which directly deal with topics within the scope of this Framework Convention.

7. During the negotiation and subsequent adoption of this Framework Convention, the following international legal and policy instruments on artificial intelligence, in particular those prepared by the Council of Europe and other international organisations and processes, were taken into account:

a) Declaration of the Committee of Ministers of the Council of Europe on the manipulative capabilities of algorithmic processes, adopted on 13 February 2019 – Decl(13/02/2019)1;

b) Recommendation on Artificial Intelligence adopted by the OECD Council on 22 May 2019 (the “OECD AI Principles”);

c) Recommendation of the Committee of Ministers of the Council of Europe to member States on the human rights impacts of algorithmic systems, adopted on 8 April 2020 – CM/Rec(2020)1;

d) Resolutions and Recommendations of the Parliamentary Assembly of the Council of Europe, examining the opportunities and risks of artificial intelligence for human rights, democracy, and the rule of law and endorsing a set of core ethical principles that should be applied to AI systems;[2]

e) UNESCO Recommendation on the Ethics of Artificial Intelligence adopted on 23 November 2021;

f) G7 Hiroshima Process International Guiding Principles for Organisations Developing Advanced AI Systems and Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (adopted on 30 October 2023); and

g) EU Regulation laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) adopted on [exact date in April 2024 to be inserted].

8. Furthermore, the negotiations were inspired by elements of the following political declarations:

a) Declaration by Heads of State and Government made at the 4th Council of Europe Summit in Reykjavík on 16 – 17 May 2023;

b) G7 Leaders’ Statement on the Hiroshima AI Process of 30 October and 6 December 2023; and

c) The Bletchley Declaration by Countries Attending the AI Safety Summit, 1 – 2 November 2023.

9. The Preamble sets out the basic aim of the Framework Convention – to ensure that the potential of artificial intelligence technologies to promote human prosperity, individual and societal well-being and to make our world more productive, innovative and secure, is harnessed in a responsible manner that respects, protects and fulfils the shared values of the Parties and is respectful of human rights, democracy and the rule of law.

10. The Drafters wished to emphasise that artificial intelligence systems offer unprecedented opportunities to protect and promote human rights, democracy and the rule of law. At the same time, they also wished to acknowledge that there are serious risks and perils arising from certain activities within the lifecycle of artificial intelligence such as, for instance, discrimination in a variety of contexts, gender inequality, the undermining of democratic processes, impairing human dignity or individual autonomy, or the misuse of artificial intelligence systems by some States for repressive purposes, in violation of international human rights law. The Drafters also wanted to draw attention to human dignity and individual autonomy as foundational values and principles that are essential for the full realisation of human rights, democracy and the rule of law and that can also be adversely impacted by certain activities within the lifecycle of artificial intelligence systems. The Drafters wished to emphasise that when referring to individuals that can be affected by artificial intelligence systems creating or aggravating inequalities, these include individuals discriminated against based on their “race”[3] or ethnicity, including indigenous individuals. They also wished to emphasise the need to avoid discrimination on grounds of sex, bias or other systemic harms, in accordance with international obligations and in line with relevant United Nations declarations. Furthermore, trustworthy artificial intelligence systems will embody principles such as those set out in Chapter III of the Framework Convention that should apply to activities within the lifecycle of artificial intelligence systems. Finally, the Drafters were fully aware that the increasing use of artificial intelligence systems, due to their transformative nature for societies, brings new challenges for human rights, democracy and the rule of law which are not yet foreseeable at the time of drafting.

11. Consequently, the Preamble sets the scene for a variety of legally binding obligations contained in the Framework Convention that aim to ensure that the activities within the lifecycle of artificial intelligence systems that have the potential to interfere with the respect for human rights, the functioning of democracy, or the observance of the rule of law in both the public and private sectors are in full compliance with this Framework Convention.

Chapter I – General provisions

Article 1 – Object and purpose

1. The provisions of this Convention aim to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law.

2. Each Party shall adopt or maintain appropriate legislative, administrative or other measures to give effect to the provisions set out in this Convention. These measures shall be graduated and differentiated as may be necessary in view of the severity and probability of the occurrence of adverse impacts on human rights, democracy and the rule of law throughout the lifecycle of artificial intelligence systems. This may include specific or horizontal measures that apply irrespective of the type of technology used.

3. In order to ensure effective implementation of its provisions by the Parties, this Convention establishes a follow-up mechanism and provides for international co-operation.


Explanatory Report

On the object and purpose of the Framework Convention and its relationship with the existing human rights protection regimes and mechanisms

12. Paragraphs 1 and 2 set out the object and purpose of the Framework Convention, which is to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law. At the same time, it is important to underline that the Framework Convention does not intend to regulate all aspects of the activities within the lifecycle of artificial intelligence systems, nor artificial intelligence technologies as such. Both its object and purpose are confined to questions pertaining to the mandate of the Council of Europe, with a focus on artificial intelligence systems which have the potential to interfere with human rights, democracy and the rule of law.

13. The Framework Convention ensures that each Party’s existing applicable obligations on human rights, democracy and the rule of law are also applied to activities within the lifecycle of artificial intelligence systems. In this sense, the Framework Convention is aligned with the applicable human rights protection systems and mechanisms of each Party, including their international law obligations and other international commitments and their applicable domestic law. As such, no provision of this Framework Convention is intended to create new human rights or human rights obligations or undermine the scope and content of the existing applicable protections, but rather, by setting out various legally binding obligations contained in its Chapters II to VI, to facilitate the effective implementation of the applicable human rights obligations of each Party in the context of the new challenges raised by artificial intelligence. At the same time, the Framework Convention reinforces the role of international human rights law and relevant aspects of domestic legal frameworks in relation to activities within the lifecycle of artificial intelligence systems that have the potential to interfere with human rights, democracy and the rule of law.

Regarding activities within the lifecycle of artificial intelligence systems

14. Throughout its text the Framework Convention creates various obligations in relation to the activities within the lifecycle of artificial intelligence systems. This reference to the lifecycle ensures a comprehensive approach towards addressing AI-related risks and adverse impacts on human rights, democracy and the rule of law by capturing all stages of activities relevant to artificial intelligence systems. Applying these obligations to the entirety of the lifecycle ensures that the Convention can cover not only current but also future risks, which is one of the ways in which the Drafters sought to make the Framework Convention future-proof in view of rapid and often unpredictable technological developments. It is important to clarify that, throughout the Framework Convention, “within” is not used as a technical term and is not meant to have a limiting effect on the concept of the lifecycle.

15. With that in mind, and without giving an exhaustive list of activities within the lifecycle which are specific to artificial intelligence systems, the Drafters aim to cover any and all activities from the design of an artificial intelligence system to its retirement, no matter which actor is involved in them. It is the intentional choice of the Drafters not to specify them explicitly, as they may depend on the type of technology and other contextual elements and change over time; but, drawing inspiration from the latest work of the OECD at the time of the drafting, the Drafters give as examples of relevant activities: (1) planning and design, (2) data collection and processing, (3) development of artificial intelligence systems, including model building and/or fine-tuning existing models for specific tasks, (4) testing, verification and validation, (5) supply/making the systems available for use, (6) deployment, (7) operation and monitoring, and (8) retirement. These activities often take place in an iterative manner and are not necessarily sequential. They may also start all over again when there are substantial changes in the system or its intended use. The decision to retire an artificial intelligence system from operation may occur at any point during the operation and monitoring phase.

Regarding the implementation of the Framework Convention

16. Paragraph 2 of Article 1 sets out the approach to implementation agreed upon by the States which negotiated the Framework Convention. This provision requires Parties to give effect to the provisions of this Framework Convention, but also provides that they enjoy a certain margin of flexibility as to how exactly to give effect to the provisions of the Framework Convention, in view of the underlying diversity of legal systems, traditions and practices among the Parties and the extremely wide variety of contexts of use of artificial intelligence systems in both public and private sectors.

17. In order to account for existing rules and mechanisms in the domestic legal system of each Party, paragraph 2 of Article 1 and many of the obligations require Parties to “adopt or maintain” certain measures to address the risks of artificial intelligence. In using “adopt or maintain”, the Drafters wished to provide flexibility for Parties to fulfil their obligations by adopting new measures or by applying existing measures such as legislation and mechanisms that existed prior to the entry into force of the Framework Convention. Use of both of these terms acknowledges that, for the purpose of domestic implementation, either of these approaches may be equally sufficient. Paragraph 2 of Article 1 further provides that such measures should be “graduated and differentiated as may be necessary in view of the severity and probability of the occurrence of adverse impacts on human rights, democracy and the rule of law”. This provision conveys that measures pursuant to the Framework Convention need to be tailored to the level of risk posed by an artificial intelligence system within specific spheres, activities and contexts, as appropriate, and that it falls on Parties to the Framework Convention to decide how to balance the relevant competing interests in each sphere, taking into account specificities of activities in the private sector, their domestic regulatory framework and national agenda for artificial intelligence, while ensuring the protection and promotion of human rights, democracy and the rule of law. The Parties may also take into account specificities of public sector activities such as law enforcement, migration, border control, asylum and the judiciary.

18. It is crucial that, in accordance with Article 1, paragraph 2, the consideration of the mentioned issues should start with an assessment by each Party of risks and potential impacts on human rights, democracy and the rule of law in a given context, and consideration of maintaining or establishing appropriate measures to address those impacts. In reaching an understanding of such potential impacts of activities within the lifecycle of artificial intelligence systems, Parties should consider the broader context, including power asymmetries that could further widen existing inequalities and societal impacts. Given the wide range of sectors and use cases in which artificial intelligence systems are used and could be deployed in the future, such as the distribution of social welfare benefits, decisions on the creditworthiness of potential clients, staff recruitment and retention processes, criminal justice procedures, immigration, asylum procedures and border control, policing, and targeted advertising and algorithmic content selection, some adverse impacts could translate into human rights violations throughout the whole society. They could also potentially impact social justice, alter the relationship and affect the trust between citizens and government, and affect the integrity of democratic processes.

19. After careful consideration of the respective risks and other relevant factors, each Party will need to decide whether it will fulfil its obligations by applying existing measures or updating its domestic regulatory framework and, if so, how. It must be borne in mind that, by virtue of the respective international human rights obligations and commitments, each Party already has in place various human rights protection and conflict adjudication mechanisms as well as specific manner(s) of administering the relevant rules and regulations.

20. Parties could therefore, for example, decide to keep making use of existing regulation, simplify, clarify or improve it, or they could work on improving its enforcement or on making existing remedies more accessible or more available (see the commentary regarding Articles 14 – 15 in paragraphs 95 – 104 below). Parties could also consider the adoption of new or additional measures, which could take the shape of rule-based, principle-based or goal-based legislation, policy or regulation; the establishment of compliance mechanisms and standards; co-regulation and industry agreements to facilitate self-regulation; or resort to various combinations of the above. Measures to be adopted or maintained pursuant to the Framework Convention may also consist of administrative and non-legally binding measures, interpretative guidance, circulars, internal mechanisms and processes, or judicial case-law, as each Party deems appropriate in line with the “graduated and differentiated approach” described in Article 1, paragraph 2. Any mention of adopting or maintaining “measures” in this Framework Convention may also be satisfied by appropriate administrative measures.

21. Furthermore, to implement the principles and obligations set forth in the Framework Convention, a Party may adopt AI-specific measures or maintain and update so-called “horizontal” measures that are applicable irrespective of the type of technology used, such as, for example, non-discrimination, data protection and other legislation that could be relied upon to implement specific principles and obligations of this Framework Convention.

Regarding the follow-up mechanism

22. Paragraph 3 notes that, to ensure effective implementation of the Framework Convention, the Framework Convention establishes a follow-up mechanism, which is set out in Chapter VII (see the commentary in paragraphs 129 – 135), and provides for international co-operation (see the commentary to Article 25 in paragraphs 137 – 140).

Article 2 – Definition of artificial intelligence systems

For the purposes of this Convention, “artificial intelligence system” means a machine-based system that for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments. Different artificial intelligence systems vary in their levels of autonomy and adaptiveness after deployment.

Explanatory Report

23. The definition of an artificial intelligence system prescribed in this Article is drawn from the latest revised definition adopted by the OECD on 8 November 2023. The choice of the Drafters to use this particular text is significant not only because of the high quality of the work carried out by the OECD and its experts, but also in view of the need to enhance international co-operation on the topic of artificial intelligence and facilitate efforts aimed at harmonising governance of artificial intelligence at a global level, including by harmonising the relevant terminology, which also allows for the coherent implementation of different instruments relating to artificial intelligence within the domestic legal systems of the Parties.

24. The definition reflects a broad understanding of what artificial intelligence systems are, specifically as opposed to other types of simpler traditional software systems based on the rules defined solely by natural persons to automatically execute operations. It is meant to ensure legal precision and certainty, while also remaining sufficiently abstract and flexible to stay valid despite future technological developments. The definition was drafted for the purposes of the Framework Convention and is not meant to give universal meaning to the relevant term. The Drafters took note of the Explanatory Memorandum accompanying the updated definition of an artificial intelligence system in the OECD Recommendation on Artificial Intelligence (OECD/LEGAL/0449, 2019, amended 2023) for a more detailed explanation of the various elements in the definition. While this definition provides a common understanding between the Parties as to what artificial intelligence systems are, Parties can further specify it in their domestic legal systems for further legal certainty and precision, without limiting its scope.

25. This definition must be read in light of other relevant provisions of the Framework Convention, which refer to (1) the systems with potential to interfere with human rights, democracy, or the rule of law and (2) the graduated and differentiated approach in Article 1 and contextual elements in the Framework Convention’s individual provisions (Articles 4 and 5, see the respective commentaries in paragraphs 37 – 41, 42 – 48 below).

Article 3 – Scope

1. The scope of this Convention covers the activities within the lifecycle of artificial intelligence systems that have the potential to interfere with human rights, democracy and the rule of law as follows:

a. Each Party shall apply this Convention to the activities within the lifecycle of artificial intelligence systems undertaken by public authorities, or private actors acting on their behalf.

b. Each Party shall address risks and impacts arising from activities within the lifecycle of artificial intelligence systems by private actors to the extent not covered in subparagraph a in a manner conforming with the object and purpose of this Convention.

Each Party shall specify in a declaration submitted to the Secretary General of the Council of Europe at the time of signature or when depositing its instrument of ratification, acceptance, approval or accession, how it intends to implement this obligation, either by applying the principles and obligations set forth in Chapters II to VI of this Convention to activities of private actors or by taking other appropriate measures to fulfil the obligation set out in this subparagraph. Parties may, at any time and in the same manner, amend their declarations.

When implementing the obligation under this subparagraph, a Party may not derogate from or limit the application of its international obligations undertaken to protect human rights, democracy and the rule of law.

2. A Party shall not be required to apply this Convention to activities within the lifecycle of artificial intelligence systems related to the protection of its national security interests, with the understanding that such activities are conducted in a manner consistent with applicable international law, including international human rights law obligations, and with respect for its democratic institutions and processes.

3. Without prejudice to Article 13 and Article 25, paragraph 2, this Convention shall not apply to research and development activities regarding artificial intelligence systems not yet made available for use, unless testing or similar activities are undertaken in such a way that they have the potential to interfere with human rights, democracy and the rule of law.

4. Matters relating to national defence do not fall within the scope of this Convention.


Expl­ana­to­ry Report

26. This Frame­work Con­ven­ti­on has a broad scope to encom­pass the acti­vi­ties within the life­cy­cle of artic­le intel­li­gence systems that have the poten­ti­al to inter­fe­rence with human rights, demo­cra­cy and rule of law.

27. Con­si­stent with Recom­men­da­ti­on No. R (84) 15 of the Com­mit­tee of Mini­sters to mem­ber Sta­tes Rela­ting to Public Lia­bi­li­ty of 18 Sep­tem­ber 1984, the Draf­ters’ shared under­stan­ding is that the term “public aut­ho­ri­ty” means any enti­ty of public law of any kind or any level (inclu­ding supra­na­tio­nal, Sta­te, regio­nal, pro­vin­cial, muni­ci­pal, and inde­pen­dent public enti­ty) and any pri­va­te per­son when exer­cis­ing pre­ro­ga­ti­ves of offi­ci­al authority.

28. Sub­pa­ra­graph 1 (a) obli­ges the Par­ties to ensu­re that such acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems com­ply with the pro­vi­si­ons of this Frame­work Con­ven­ti­on when under­ta­ken by public aut­ho­ri­ties as well as pri­va­te actors acting on their behalf. This would include an obli­ga­ti­on to com­ply with the pro­vi­si­ons of this Frame­work Con­ven­ti­on in regard to acti­vi­ties for which public aut­ho­ri­ties dele­ga­te their respon­si­bi­li­ties to pri­va­te actors or direct them to act, such as acti­vi­ties by pri­va­te actors ope­ra­ting pur­su­ant to a con­tract with a public aut­ho­ri­ty or other pri­va­te pro­vi­si­on of public ser­vices, as well as public pro­cu­re­ment and contracting.

29. Sub­pa­ra­graph 1 (b) obli­ges all Par­ties to address risks and impacts to human rights, demo­cra­cy and the rule of law in the pri­va­te sec­tor also for pri­va­te actors to the ext­ent the­se are not alre­a­dy cover­ed under sub­pa­ra­graph 1 (a). Fur­ther, refe­ren­ces to object and pur­po­se have the effect of import­ing all of the con­cepts of Artic­le 1, i.e. addres­sing risks is not mere­ly ack­now­led­ging tho­se risks, but requi­res the adop­ti­on or main­tai­ning of appro­pria­te legis­la­ti­ve, admi­ni­stra­ti­ve or other mea­su­res to give effect to this pro­vi­si­on as well as co-ope­ra­ti­on bet­ween the Par­ties as in the pro­vi­si­ons on the fol­low-up mecha­nism and inter­na­tio­nal co-ope­ra­ti­on. Howe­ver, the obli­ga­ti­on does not neces­s­a­ri­ly requi­re addi­tio­nal legis­la­ti­on and Par­ties may make use of other appro­pria­te mea­su­res, inclu­ding admi­ni­stra­ti­ve and vol­un­t­a­ry mea­su­res. So while the obli­ga­ti­on is bin­ding and all Par­ties should com­ply with it, the natu­re of the mea­su­res taken by the Par­ties could vary. In any case, when imple­men­ting the obli­ga­ti­on under para­graph 1, sub­pa­ra­graph (b), a Par­ty may not dero­ga­te from or limit the appli­ca­ti­on of its inter­na­tio­nal obli­ga­ti­ons under­ta­ken to pro­tect human rights, demo­cra­cy and rule of law.

30. To ensu­re legal cer­tain­ty and trans­pa­ren­cy, each Par­ty is obli­ged to set out in a decla­ra­ti­on how it intends to meet the obli­ga­ti­on set out in this para­graph, eit­her by app­ly­ing the prin­ci­ples and obli­ga­ti­ons set forth in Chap­ters II to VI of the Frame­work Con­ven­ti­on to acti­vi­ties of pri­va­te actors or by taking other appro­pria­te mea­su­res to ful­fil the obli­ga­ti­on set out in this para­graph. For Par­ties that have cho­sen not to app­ly the prin­ci­ples and the obli­ga­ti­ons of the Frame­work Con­ven­ti­on in rela­ti­on to acti­vi­ties of other pri­va­te actors, the Draf­ters expect the approa­ches of tho­se Par­ties to deve­lop over time as their approa­ches to regu­la­te the pri­va­te sec­tor evolve.

31. All Par­ties should sub­mit their decla­ra­ti­ons to the Secre­ta­ry Gene­ral of the Coun­cil of Euro­pe at the time of signa­tu­re, or when depo­si­ting an instru­ment of rati­fi­ca­ti­on, accep­tance, appr­oval or acce­s­si­on. Sin­ce it is important for Par­ties to the Frame­work Con­ven­ti­on to know what decla­ra­ti­ons have been for­mu­la­ted, the Secre­ta­ry Gene­ral of the Coun­cil of Euro­pe will imme­dia­te­ly share the decla­ra­ti­ons recei­ved with the other Par­ties. Par­ties may, at any time and in the same man­ner, amend their declarations.

32. While maintaining a broad scope of the Framework Convention, paragraph 2 envisages that a Party is not required to apply this Framework Convention to the activities within the lifecycle of artificial intelligence systems related to the protection of its national security interests, regardless of the type of entities carrying out the corresponding activities. Such activities must nevertheless be conducted in a manner consistent with the applicable international law obligations, since national security is included in the scope of many international human rights treaties, such as but not limited to the Council of Europe Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR), the American Convention on Human Rights (Pact of San José), the United Nations (UN) International Covenant on Civil and Political Rights (ICCPR) and the UN International Covenant on Economic, Social and Cultural Rights (ICESCR). Activities to protect national security interests that interfere with human rights must be provided for by law, respect the essence of the human rights and, as applicable within the scope of the aforementioned obligations, constitute a necessary and proportionate measure in a democratic society. These activities must also be conducted with respect for the Parties’ democratic processes and institutions, as provided for in their domestic legislation in compliance with applicable international law. This exception from the scope of the Framework Convention applies only if and insofar as the activities relate to the protection of national security interests.
This maintains in the scope of the Framework Convention activities regarding ‘dual use’ artificial intelligence systems insofar as these are intended to be used for other purposes not related to the protection of the Parties’ national security interests and are within the Party’s obligations under Article 3. All regular law enforcement activities for the prevention, detection, investigation, and prosecution of crimes, including threats to public security, also remain within the scope of the Framework Convention if and insofar as the national security interests of the Parties are not at stake.

33. As regards para­graph 3, the wor­ding reflects the intent of the Draf­ters to exempt rese­arch and deve­lo­p­ment acti­vi­ties from the scope of the Frame­work Con­ven­ti­on under cer­tain con­di­ti­ons, name­ly that the arti­fi­ci­al intel­li­gence systems in que­sti­on have not been made available for use, and that the test­ing and other simi­lar acti­vi­ties do not pose a poten­ti­al for inter­fe­rence with human rights, demo­cra­cy and the rule of law. Such acti­vi­ties exclu­ded from the scope of the Frame­work Con­ven­ti­on should in any case be car­ri­ed out in accordance with appli­ca­ble human rights and dome­stic law as well as reco­g­nis­ed ethi­cal and pro­fes­sio­nal stan­dards for sci­en­ti­fic research.

34. It is also the intent of the Draf­ters to con­sider that arti­fi­ci­al intel­li­gence systems that are made available for use as a result of such rese­arch and deve­lo­p­ment acti­vi­ties would need in prin­ci­ple to com­ply with the Frame­work Con­ven­ti­on, inclu­ding in regard to their design and development. 

35. The exemption for research and development activities contained in paragraph 3 should be implemented without prejudice to the principle of “safe innovation”, see Article 13, and the exchange between Parties on information about risks, as well as significant positive or negative effects on human rights, democracy and the rule of law, arising in research contexts, see Article 25, paragraph 2, on “international co-operation”.

36. For the exemp­ti­on of “mat­ters rela­ting to natio­nal defence” from the scope of the Frame­work Con­ven­ti­on, the Draf­ters deci­ded to use lan­guage taken from Artic­le 1, d, of the Sta­tu­te of the Coun­cil of Euro­pe (ETS No 1) which sta­tes that “[m]atters rela­ting to natio­nal defence do not fall within the scope of the Coun­cil of Euro­pe”. This exemp­ti­on does not imply that acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems rela­ting to natio­nal defence are not cover­ed by inter­na­tio­nal law.

Chap­ter II – Gene­ral obligations

Artic­le 4 – Pro­tec­tion of human rights

Each Par­ty shall adopt or main­tain mea­su­res to ensu­re that the acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems are con­si­stent with obli­ga­ti­ons to pro­tect human rights, as enshri­ned in appli­ca­ble inter­na­tio­nal law and in its dome­stic law.

Expl­ana­to­ry Report

37. This pro­vi­si­on refers to the obli­ga­ti­ons of each Par­ty in the sphe­re of human rights pro­tec­tion, as enshri­ned in appli­ca­ble inter­na­tio­nal and dome­stic law, with respect to acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems.

38. Under international law, the Parties have the duty to ensure that their domestic law is in conformity with their international legal obligations, which includes obligations under international treaties which are binding on them. International human rights law establishes the obligation for each Party to respect, protect, and fulfil human rights. Each Party has an obligation to ensure that its domestic law is in conformity with its applicable international human rights obligations. At the same time, Parties are free to choose the ways and means of implementing their international legal obligations, provided that the result is in conformity with those obligations. This is an obligation of result and not an obligation of means. In this respect, the principle of subsidiarity is essential, putting upon the Parties the primary responsibility to ensure respect for human rights and to provide redress for violations of human rights.

39. Below is a list of the main glo­bal and regio­nal inter­na­tio­nal human rights instru­ments and trea­ties to which various Sta­tes that nego­tia­ted the Frame­work Con­ven­ti­on may be Par­ties to (in chro­no­lo­gi­cal order):

United Nati­ons instru­ments:

1. The 1965 United Nati­ons Inter­na­tio­nal Con­ven­ti­on on the Eli­mi­na­ti­on of All Forms of Racial Dis­cri­mi­na­ti­on (ICERD);

2. The 1966 United Nati­ons Inter­na­tio­nal Covenant on Civil and Poli­ti­cal Rights and its Optio­nal Pro­to­cols (ICCPR);

3. The 1966 United Nati­ons Inter­na­tio­nal Covenant on Eco­no­mic, Social and Cul­tu­ral Rights (ICESCR) and its Optio­nal Protocol;

4. The 1979 United Nati­ons Con­ven­ti­on on the Eli­mi­na­ti­on of All Forms of Dis­cri­mi­na­ti­on Against Women (CEDAW) and its Optio­nal Protocol;

5. The 1984 United Nati­ons Con­ven­ti­on against Tor­tu­re and Other Cruel, Inhu­man or Degra­ding Tre­at­ment or Punish­ment and its Optio­nal Protocol;

6. The 1989 United Nati­ons Con­ven­ti­on on the Rights of the Child (UNCRC) and its Optio­nal Protocols;

7. The 2006 United Nati­ons Con­ven­ti­on for the Pro­tec­tion of All Per­sons from Enforced Dis­ap­pear­ance; and

8. The 2006 United Nati­ons Con­ven­ti­on on the Rights of Per­sons with Disa­bi­li­ties (UNCRPD) and its Optio­nal Protocol.

Coun­cil of Euro­pe and EU instru­ments:

1. The 1950 Council of Europe Convention for the Protection of Human Rights and Fundamental Freedoms (ETS No. 5, ECHR) and its Protocols;

2. The 1961 European Social Charter (ETS No. 35, ESC) and its Protocols and the 1996 Revised European Social Charter (ETS No. 163);

3. The 1981 Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data, as amended (ETS No. 108, CETS No. 223) and its Protocols;

4. The 1987 Euro­pean Con­ven­ti­on for the Pre­ven­ti­on of Tor­tu­re and Inhu­man or Degra­ding Tre­at­ment or Punish­ment (ETS No. 126) and its Protocols;

5. The 1997 Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine (ETS No. 164, the Oviedo Convention) and its Protocols;

6. The 1998 Frame­work Con­ven­ti­on for the Pro­tec­tion of Natio­nal Mino­ri­ties (ETS No. 157);

7. The 2000 Char­ter of Fun­da­men­tal Rights of the Euro­pean Uni­on (CFR, reco­g­nis­ed with the same legal value as the Trea­ties pur­su­ant to Artic­le 6 (1) of the Trea­ty on EU);

8. The 2005 Coun­cil of Euro­pe Con­ven­ti­on on Action against Traf­ficking in Human Beings (CETS No. 197);

9. The 2007 Council of Europe Convention on the Protection of Children against Sexual Exploitation and Sexual Abuse (CETS No. 201, the Lanzarote Convention); and

10. The 2011 Council of Europe Convention on Preventing and Combating Violence Against Women and Domestic Violence (CETS No. 210, the Istanbul Convention).

Other regio­nal instru­ments:

1. The 1969 Ame­ri­can Con­ven­ti­on on Human Rights (Pact of San José) and its first addi­tio­nal Protocols;

2. The 1985 Inter-Ame­ri­can Con­ven­ti­on to Pre­vent and Punish Torture;

3. The 1994 Inter-Ame­ri­can Con­ven­ti­on on the Forced Dis­ap­pear­ance of Persons;

4. The 1994 Inter-Ame­ri­can Con­ven­ti­on on the Pre­ven­ti­on, Punish­ment and Era­di­ca­ti­on of Vio­lence against Women;

5. The 1999 Inter-Ame­ri­can Con­ven­ti­on on the Eli­mi­na­ti­on of All Forms of Dis­cri­mi­na­ti­on against Per­sons with Disabilities;

6. The 2013 Inter-Ame­ri­can Con­ven­ti­on against Racism, Racial Dis­cri­mi­na­ti­on, and Rela­ted Forms of Into­le­rance; and

7. The 2015 Inter-Ame­ri­can Con­ven­ti­on on Pro­tec­ting the Human Rights of Older Persons.

40. In addi­ti­on to the legal obli­ga­ti­ons resul­ting from inter­na­tio­nal human rights law, Artic­le 4 of the Frame­work Con­ven­ti­on also refers to the pro­tec­tion of human rights in each Party’s dome­stic law. The­se typi­cal­ly include con­sti­tu­tio­nal and other sub­or­di­na­te norms and rules, as well as mecha­nisms for super­vi­si­on and enforce­ment of their imple­men­ta­ti­on, which aim to pro­tect human rights. The Draf­ters wis­hed to cla­ri­fy that refe­rence to dome­stic law in this pro­vi­si­on and else­whe­re is not inten­ded to ser­ve as pro­vi­ding for an exemp­ti­on from the obli­ga­ti­ons of the Par­ties to com­ply with their inter­na­tio­nal law obligations.

41. Against the abo­ve back­ground, the gene­ral obli­ga­ti­on in Artic­le 4 of the Frame­work Con­ven­ti­on requi­res Par­ties to take stock of their exi­sting human rights obli­ga­ti­ons, frame­works and mecha­nisms in their dome­stic legal system and, in line with the approach descri­bed in Artic­le 1, para­graph 2, ensu­re that the exi­sting frame­works, rules and mecha­nisms con­ti­n­ue to pro­tect and pro­mo­te human rights, con­si­stent with inter­na­tio­nal human rights obli­ga­ti­ons, and are suf­fi­ci­ent and effec­ti­ve to respond to the evol­ving arti­fi­ci­al intel­li­gence landscape.

Artic­le 5 – Inte­gri­ty of demo­cra­tic pro­ce­s­ses and respect for the rule of law

1. Each Party shall adopt or maintain measures that seek to ensure that artificial intelligence systems are not used to undermine the integrity, independence and effectiveness of democratic institutions and processes, including the principle of the separation of powers, respect for judicial independence and access to justice.

2. Each Par­ty shall adopt or main­tain mea­su­res that seek to pro­tect its demo­cra­tic pro­ce­s­ses in the con­text of acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems, inclu­ding indi­vi­du­als’ fair access to and par­ti­ci­pa­ti­on in public deba­te, as well as their abili­ty to free­ly form opinions.


Expl­ana­to­ry Report

42. Arti­fi­ci­al intel­li­gence tech­no­lo­gies pos­sess signi­fi­cant poten­ti­al to enhan­ce demo­cra­tic values, insti­tu­ti­ons, and pro­ce­s­ses. Poten­ti­al impacts include the deve­lo­p­ment of a deeper com­pre­hen­si­on of poli­tics among citi­zens, enab­ling increa­sed par­ti­ci­pa­ti­on in demo­cra­tic deba­te or impro­ving the inte­gri­ty of infor­ma­ti­on in online civic space. Simi­lar­ly, poli­ti­cal repre­sen­ta­ti­ves, can­di­da­tes, public offi­ci­als or public repre­sen­ta­ti­ves can estab­lish clo­ser con­nec­tions with indi­vi­du­als, ulti­m­ate­ly enhan­cing the abili­ty of poli­ti­cal repre­sen­ta­ti­ves, public offi­ci­als or public repre­sen­ta­ti­ves to repre­sent the public more effec­tively. This ali­gnment bet­ween poli­ti­cal repre­sen­ta­ti­ves, public offi­ci­als or public repre­sen­ta­ti­ves and citi­zens has the poten­ti­al to trans­form elec­to­ral cam­paigns and signi­fi­cant­ly enhan­ce the poli­cy­ma­king pro­cess, foste­ring grea­ter inclu­si­ve­ness, trans­pa­ren­cy and efficiency.

43. Con­cerns regar­ding the use of arti­fi­ci­al intel­li­gence in poli­tics have long been pre­sent, but tho­se spe­ci­fi­cal­ly asso­cia­ted with demo­cra­ci­es and the elec­to­ral pro­cess have inten­si­fi­ed with recent tech­no­lo­gi­cal advance­ments. The recent­ly intro­du­ced appli­ca­ti­ons of this emer­ging tech­no­lo­gy could pose num­e­rous thre­ats to demo­cra­cy and human rights, ser­ving as a potent tool for frag­men­ting the public sphe­re and under­mi­ning civic par­ti­ci­pa­ti­on and trust in demo­cra­cy. Such tools could enable users, inclu­ding mali­cious actors, to dis­se­mi­na­te dis­in­for­ma­ti­on and mis­in­for­ma­ti­on that could under­mi­ne infor­ma­ti­on inte­gri­ty (inclu­ding through the use of AI-gene­ra­ted con­tent or AI-enab­led mani­pu­la­ti­on of authen­tic con­tent) and, whe­re appli­ca­ble, the right of access to infor­ma­ti­on; make pre­ju­di­ced decis­i­ons about indi­vi­du­als, poten­ti­al­ly resul­ting in dis­cri­mi­na­to­ry prac­ti­ces; influence court rulings, with poten­ti­al impli­ca­ti­ons for the inte­gri­ty of the justi­ce system; and under­ta­ke ille­gal or arbi­tra­ry sur­veil­lan­ce, lea­ding to rest­ric­tions on the free­dom of assem­bly or free­dom of expres­si­on, and privacy. 

44. The use of arti­fi­ci­al intel­li­gence tech­no­lo­gy in the abo­ve-descri­bed man­ner could esca­la­te ten­si­ons or under­mi­ne public trust which is a main ele­ment of an effec­ti­ve demo­cra­tic govern­ment. Arti­fi­ci­al intel­li­gence has the capa­bi­li­ty to gene­ra­te fal­se infor­ma­ti­on or lead to the exclu­si­on of indi­vi­du­als or tho­se who may be under­re­pre­sen­ted or in a vul­nerable situa­ti­on from the demo­cra­tic pro­ce­s­ses. It could also exa­cer­ba­te mani­pu­la­ti­ve con­tent cura­ti­on. Despi­te its advan­ta­ge­ous aspects, arti­fi­ci­al intel­li­gence car­ri­es the signi­fi­cant risk to nega­tively impact the demo­cra­tic pro­cess and the exer­cise of rele­vant human rights. Howe­ver, with the imple­men­ta­ti­on of appro­pria­te safe­guards, the­se tech­no­lo­gies may pro­ve bene­fi­ci­al to democracy.

45. In Artic­le 5, the Draf­ters wis­hed to point towards spe­ci­fic sen­si­ti­ve con­texts (para­graph 1 cove­ring main­ly the rele­vant insti­tu­tio­nal aspects and para­graph 2 cove­ring prin­ci­pal­ly the rele­vant demo­cra­tic pro­ce­s­ses) whe­re a poten­ti­al use of arti­fi­ci­al intel­li­gence should be pre­ce­ded by a careful con­side­ra­ti­on of risks to demo­cra­cy and the rule of law and accom­pa­nied with appro­pria­te rules and safe­guards. Despi­te the lack of a com­mon­ly agreed upon defi­ni­ti­on of the term “demo­cra­tic insti­tu­ti­ons and pro­ce­s­ses”, the refe­rence is being made to all systems of govern­ment with cer­tain basic fea­tures and insti­tu­ti­ons which are com­mon to all demo­cra­tic countries.

46. In imple­men­ting its obli­ga­ti­ons to pro­tect demo­cra­tic insti­tu­ti­ons and pro­ce­s­ses under Artic­le 5, Par­ties may wish to focus, for exam­p­le, on the risks of arti­fi­ci­al intel­li­gence systems to:

a) the principle of separation of powers (in executive, legislative and judicial branches);

b) an effective system of checks and balances between the three branches of government, including effective oversight of the executive branch;

c) where applicable, a balanced distribution of powers between different levels of government (so-called vertical separation of powers);

d) political pluralism (ensured in large part by the protection of human rights the respect of which is essential for a thriving democracy, such as freedom of expression, freedom of association and freedom of peaceful assembly; and existence of pluralist and independent media and a range of political parties representing different interests and views) and fair access to and participation in public debate;

e) participation in democratic processes through free and fair elections, and a plurality of forms of meaningful civil and political participation;

f) political majority rule coupled with respect of the rights of political minority(ies);

g) respect for the rule of law (generally encompassing the principles of legality, legal certainty and non-arbitrariness) and the principle of access to justice and its proper administration; and

h) respect for the principle of judicial independence.

47. Furthermore, the integrity of democracy and its processes is based on two important assumptions referred to in Article 7, namely that individuals have agency (capacity to form an opinion and act on it) as well as influence (capacity to affect decisions made on their behalf). Artificial intelligence technologies can strengthen these abilities but, conversely, can also threaten or undermine them. It is for this reason that paragraph 2 of the provision refers to the need to adopt or maintain measures that seek to protect “the ability [of individuals] to freely form opinions”. With respect to public sector uses of artificial intelligence, this could refer to, for example, general cybersecurity measures against malicious foreign interference in the electoral process or measures to address the spreading of misinformation and disinformation.

48. At the same time, this pro­vi­si­on is not inten­ded to crea­te, redu­ce, extend or other­wi­se modi­fy the exi­sting appli­ca­ble stan­dards regar­ding any human rights, inclu­ding free­dom of expres­si­on (such as for instance regar­ding poli­ti­cal adver­ti­sing), free­dom of asso­cia­ti­on and free­dom of assem­bly, as pro­vi­ded for in each Party’s appli­ca­ble inter­na­tio­nal obli­ga­ti­ons and dome­stic human rights law.

Chap­ter III – Prin­ci­ples rela­ted to acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems

Artic­le 6 – Gene­ral approach

This chap­ter sets forth gene­ral com­mon prin­ci­ples that each Par­ty shall imple­ment in regard to arti­fi­ci­al intel­li­gence systems in a man­ner appro­pria­te to its dome­stic legal system and the other obli­ga­ti­ons of this Convention.

Expl­ana­to­ry Report

49. This pro­vi­si­on makes clear that the prin­ci­ples con­tai­ned in this Chap­ter should be incor­po­ra­ted into the Par­ties’ dome­stic approa­ches to the regu­la­ti­on of arti­fi­ci­al intel­li­gence systems. As such, they are pur­po­seful­ly draf­ted at a high level of gene­ra­li­ty, with the inten­ti­on that they should be over­ar­ching requi­re­ments that can be applied fle­xi­bly in a varie­ty of rapid­ly chan­ging con­texts. They are also pur­po­si­ve, expres­sing the rea­son behind the rule and have very broad appli­ca­ti­on to a diver­se ran­ge of circumstances.

50. The Draf­ters wis­hed to make it clear that the imple­men­ta­ti­on of this Chap­ter, in line with the obli­ga­ti­ons set out in Artic­les 4 and 5, should be car­ri­ed out by each Par­ty in line with the approach descri­bed in Artic­le 1, para­graph 2, in a man­ner appro­pria­te to its dome­stic legal system, and also taking into account the other obli­ga­ti­ons con­tai­ned in this Frame­work Convention.

51. This point is par­ti­cu­lar­ly important inso­far as, as alre­a­dy men­tio­ned ear­lier, by vir­tue of their respec­ti­ve inter­na­tio­nal human rights obli­ga­ti­ons each Par­ty alre­a­dy has a detail­ed legal regime of human rights pro­tec­tion with its own set of rules, prin­ci­ples and prac­ti­ces regar­ding the scope, con­tent of rights and pos­si­ble rest­ric­tions, dero­ga­ti­ons or excep­ti­ons to the­se rights as well as the func­tio­ning of the appli­ca­ble super­vi­si­on and enforce­ment mechanisms.

52. Fur­ther­mo­re, not­hing in this Frame­work Con­ven­ti­on is inten­ded to impact exi­sting human rights obli­ga­ti­ons when­ever they over­lap with the prin­ci­ples in Chap­ter III.

Artic­le 7 – Human dignity and indi­vi­du­al autonomy

Each Par­ty shall adopt or main­tain mea­su­res to respect human dignity and indi­vi­du­al auto­no­my in rela­ti­on to acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems.

Expl­ana­to­ry Report

53. This pro­vi­si­on empha­sis­es the importance of human dignity and indi­vi­du­al auto­no­my as part of human-cen­tric regu­la­ti­on and gover­nan­ce of the acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems that fall in the scope of the Frame­work Con­ven­ti­on. Acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems should not lead to the dehu­ma­nizati­on of indi­vi­du­als, under­mi­ne their agen­cy or redu­ce them to mere data points, or anthro­po­mor­phise arti­fi­ci­al intel­li­gence systems in a way which inter­fe­res with human dignity. Human dignity requi­res ack­now­led­ging the com­ple­xi­ty and rich­ness of human iden­ti­ty, expe­ri­ence, values, and emotions.

54. Upholding human dignity implies respecting the inherent value and worth of each individual, regardless of their background, characteristics, or circumstances and refers in particular to the manner in which all human beings should be treated. Since the dignity of the human person is universally agreed as constituting the basis of human rights[4], the reference to it as the first principle of Chapter III highlights the global character of the Framework Convention since all Parties recognise the inherent dignity of the human person as an underlying basis of human rights, democratic participation and the rule of law.

55. Individual autonomy is one important aspect of human dignity and refers to the capacity of individuals for self-determination; that is, their ability to make choices and decisions, including without coercion, and live their lives freely. In the context of artificial intelligence, individual autonomy requires that individuals have control over the use and impact of artificial intelligence technologies in their lives, and that their agency and autonomy are not thereby diminished. Human-centric regulation acknowledges the significance of allowing individuals to shape their experiences with artificial intelligence, ensuring that these technologies enhance rather than infringe upon their autonomy. The Drafters considered that referring to this concept in this Framework Convention is particularly appropriate in view of the capacity of artificial intelligence systems for imitation and manipulation.

Artic­le 8 – Trans­pa­ren­cy and oversight

Each Par­ty shall adopt or main­tain mea­su­res to ensu­re that ade­qua­te trans­pa­ren­cy and over­sight requi­re­ments tail­o­red to the spe­ci­fic con­texts and risks are in place in respect of acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems, inclu­ding with regard to the iden­ti­fi­ca­ti­on of con­tent gene­ra­ted by arti­fi­ci­al intel­li­gence systems.

Expl­ana­to­ry Report

56. Due to cer­tain fea­tures that distin­gu­ish arti­fi­ci­al intel­li­gence systems from tra­di­tio­nal com­pu­ting systems, which may include com­ple­xi­ty, opa­ci­ty, adap­ta­bi­li­ty, and vary­ing degrees of auto­no­my, acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems fal­ling within the scope of the Frame­work Con­ven­ti­on requi­re appro­pria­te safe­guards in the form of trans­pa­ren­cy and over­sight mechanisms.

57. The prin­ci­ple of trans­pa­ren­cy in Artic­le 8 refers to open­ness and cla­ri­ty in the gover­nan­ce of acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems and means that the decis­i­on-making pro­ce­s­ses and gene­ral ope­ra­ti­on of arti­fi­ci­al intel­li­gence systems should be under­stan­da­ble and acce­s­si­ble to appro­pria­te arti­fi­ci­al intel­li­gence actors and, whe­re neces­sa­ry and appro­pria­te, rele­vant stake­hol­ders. In cer­tain cases, this could also refer to pro­vi­ding addi­tio­nal infor­ma­ti­on, inclu­ding, for exam­p­le, on the algo­rith­ms used, sub­ject to secu­ri­ty, com­mer­cial and intellec­tu­al pro­per­ty and other con­side­ra­ti­ons, as detail­ed in para­graph 62 below. The means of ensu­ring trans­pa­ren­cy would depend on many dif­fe­rent fac­tors such as, for instance, the type of arti­fi­ci­al intel­li­gence system, the con­text of its use or its role, and the back­ground of the rele­vant actor or affec­ted stake­hol­der. Moreo­ver, rele­vant mea­su­res include, as appro­pria­te, recor­ding key con­side­ra­ti­ons such as data pro­ven­an­ce, trai­ning metho­do­lo­gies, vali­di­ty of data sources, docu­men­ta­ti­on and trans­pa­ren­cy on trai­ning, test­ing and vali­da­ti­on data used, risk miti­ga­ti­on efforts, and pro­ce­s­ses and decis­i­ons imple­men­ted, in order to aid a com­pre­hen­si­ve under­stan­ding of how the arti­fi­ci­al intel­li­gence system’s out­puts are deri­ved and impact human rights, demo­cra­cy and the rule of law. This will in par­ti­cu­lar help to ensu­re accoun­ta­bi­li­ty and enable per­sons con­cer­ned to con­test the use or out­co­mes of arti­fi­ci­al intel­li­gence system, whe­re and as appli­ca­ble (see the com­men­ta­ry to Artic­le 14, in para­graphs 95 – 102).

58. Pro­vi­ding trans­pa­ren­cy about an arti­fi­ci­al intel­li­gence system could thus requi­re com­mu­ni­ca­ting appro­pria­te infor­ma­ti­on about the system (such as, for instance, purpose(s), known limi­ta­ti­ons, assump­ti­ons and engi­nee­ring choices made during design, fea­tures, details of the under­ly­ing models or algo­rith­ms, trai­ning methods and qua­li­ty assu­rance pro­ce­s­ses). The term ‘algo­rith­mic trans­pa­ren­cy’ is often used to descri­be open­ness about the pur­po­se, struc­tu­re and under­ly­ing actions of an algo­rithm-dri­ven system. Addi­tio­nal­ly, trans­pa­ren­cy may invol­ve, as appro­pria­te, informing per­sons con­cer­ned or the wider public about the details of data used to crea­te, train and ope­ra­te the system and the pro­tec­tion of per­so­nal data along with the pur­po­se of the system and how it was desi­gned, tested and deployed. Trans­pa­ren­cy should also include informing per­sons con­cer­ned about the pro­ce­s­sing of infor­ma­ti­on and the types and level of auto­ma­ti­on used to make con­se­quen­ti­al decis­i­ons, and the risks asso­cia­ted with the use of the arti­fi­ci­al intel­li­gence system. Pro­vi­ding trans­pa­ren­cy could in addi­ti­on faci­li­ta­te the pos­si­bi­li­ty for par­ties with legi­ti­ma­te inte­rests, inclu­ding copy­right hol­ders, to exer­cise and enforce their intellec­tu­al pro­per­ty rights.
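The system information listed in paragraphs 57 and 58 (purpose, known limitations, data provenance, degree of automation, risks) is in practice often collected in a structured documentation record, sometimes called a “model card”. The following sketch is purely illustrative and is not prescribed by the Framework Convention; all field names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Illustrative transparency record; field names are hypothetical,
    not prescribed by the Framework Convention."""
    purpose: str
    known_limitations: list[str] = field(default_factory=list)
    training_data_provenance: list[str] = field(default_factory=list)
    automation_level: str = "human-in-the-loop"
    risks: list[str] = field(default_factory=list)

    def summary(self) -> str:
        # A short, human-readable disclosure for persons concerned.
        return (f"Purpose: {self.purpose}; "
                f"automation: {self.automation_level}; "
                f"known limitations: "
                f"{', '.join(self.known_limitations) or 'none listed'}")

record = AISystemRecord(
    purpose="triage of customer-support tickets",
    known_limitations=["trained on English text only"],
    training_data_provenance=["public support forums (2020-2023)"],
    risks=["misclassification of urgent requests"],
)
print(record.summary())
```

Such a record supports both aims of Article 8: it documents key considerations for oversight bodies and can be condensed into a disclosure for the persons concerned.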

59. The provision also provides for measures with regard to the identification of AI-generated content in order to avoid the risk of deception and enable distinction between authentic, human-generated content and AI-generated content as it becomes increasingly hard for people to identify. Such measures could include techniques such as labelling and watermarking – which usually involves embedding a recognisable signature into the output of an artificial intelligence system – subject to the availability of these technologies and their proven effectiveness, the generally acknowledged state of the art, and specificities of different types of content. Promoting the use of technical standards, open-source licences and the collaboration of researchers and developers supports the development of more transparent artificial intelligence systems in the long run.
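To make the labelling idea concrete: one very simple approach is to attach a machine-readable provenance label with an integrity code to generated text. The sketch below is purely illustrative and is not a technique endorsed by the Framework Convention; production systems use far more robust schemes (cryptographic signing of media, statistical watermarks embedded in the output itself), and the key and label format here are invented:

```python
import hashlib
import hmac

# Hypothetical secret held by the AI-system provider (illustration only).
PROVIDER_KEY = b"example-key-not-for-production"

def label_content(text: str) -> str:
    """Append a provenance label plus an HMAC so tampering is detectable."""
    tag = hmac.new(PROVIDER_KEY, text.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{text}\n[AI-generated | provenance:{tag}]"

def verify_label(labelled: str) -> bool:
    """Check that the label matches the content it is attached to."""
    body, _, footer = labelled.rpartition("\n")
    if not footer.startswith("[AI-generated | provenance:"):
        return False
    tag = footer.split("provenance:")[1].rstrip("]")
    expected = hmac.new(PROVIDER_KEY, body.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(tag, expected)

out = label_content("Draft answer produced by an assistant.")
print(verify_label(out))                              # True
print(verify_label(out.replace("Draft", "Edited")))   # False: content altered
```

The limitation noted in the paragraph applies here too: a label appended to content can be stripped, which is why the Explanatory Report conditions such measures on the proven effectiveness and state of the art of the chosen technique.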

60. It is important to underline two important aspects of the principle of transparency, notably explainability and interpretability. The term “explainability” refers to the capacity to provide, subject to technical feasibility and taking into account the generally acknowledged state of the art, sufficiently understandable explanations about why an artificial intelligence system provides information, produces predictions, content, recommendations or decisions, which is particularly crucial in sensitive domains such as healthcare, finance, immigration, border services and criminal justice, where understanding the reasoning behind decisions produced or assisted by an artificial intelligence system is essential. In such cases transparency could, for instance, take the form of a list of factors which the artificial intelligence system takes into consideration when informing or making a decision.
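For a simple additive scoring model, such a “list of factors” can be derived directly from each input’s contribution to the output. The sketch below is a minimal illustration only; the factor names and weights are invented, and real systems (and more complex models) require far more sophisticated explanation methods:

```python
# Hypothetical weights of a simple additive scoring model (illustration only).
WEIGHTS = {"income": 0.5, "existing_debt": -0.8, "years_employed": 0.3}

def explain(applicant: dict) -> list[tuple[str, float]]:
    """Return each factor's contribution to the score, largest impact first."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

applicant = {"income": 4.0, "existing_debt": 2.0, "years_employed": 6.0}
for factor, contribution in explain(applicant):
    print(f"{factor}: {contribution:+.1f}")
```

A factor list of this kind is exactly the sort of disclosure paragraph 60 envisages for decisions in sensitive domains: it shows a person concerned which inputs weighed for or against the outcome.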

61. Ano­ther important aspect of trans­pa­ren­cy is inter­pr­e­ta­bi­li­ty, which refers to the abili­ty to under­stand how an arti­fi­ci­al intel­li­gence system makes its pre­dic­tions or decis­i­ons or, in other words, the ext­ent to which the out­puts of arti­fi­ci­al intel­li­gence systems can be made acce­s­si­ble and under­stan­da­ble to experts and non-experts ali­ke. It invol­ves making the inter­nal workings, logic, and decis­i­on-making pro­ce­s­ses of arti­fi­ci­al intel­li­gence systems under­stan­da­ble and acce­s­si­ble to human users, inclu­ding deve­lo­pers, stake­hol­ders, and end-users, and per­sons affec­ted. Both aspects are also cru­cial in mee­ting the requi­re­ments men­tio­ned in Artic­les 12, 13 and 14 in gene­ral and para­graph (b) in par­ti­cu­lar, and in Artic­le 16. Addi­tio­nal­ly, the Draf­ters wis­hed to under­line that trans­pa­ren­cy in the con­text of arti­fi­ci­al intel­li­gence systems is sub­ject to tech­no­lo­gi­cal limi­ta­ti­ons – often the pre­cise pathway to a par­ti­cu­lar out­co­me of an arti­fi­ci­al intel­li­gence system is not rea­di­ly acce­s­si­ble even to tho­se who design or deploy it. The rea­li­sa­ti­on of the prin­ci­ple of trans­pa­ren­cy in such cir­cum­stances is a que­sti­on of degree, the sta­te of the art, cir­cum­stances and context.

62. Since the disclosure of some of this information in pursuit of transparency may run counter to privacy, confidentiality (including, for instance, trade secrets), national security, protection of the rights of third parties, public order, judicial independence as well as other considerations and legal requirements, in implementing this principle Parties are required to strike a proper balance between various competing interests and make the necessary adjustments in the relevant frameworks without altering or modifying the underlying regime of the applicable human rights law.

63. As regards the second principle referred to in this provision, oversight in the context of artificial intelligence systems refers to various mechanisms, processes and frameworks designed to monitor, evaluate and guide activities within the lifecycle of artificial intelligence systems. These can potentially consist of legal, policy and regulatory frameworks, recommendations, ethical guidelines, codes of practice, audit and certification programmes, and bias detection and mitigation tools. They could also include oversight bodies and committees, competent authorities such as sectoral supervisory authorities, data protection authorities, equality and human rights bodies, National Human Rights Institutions (NHRIs) or consumer protection agencies, continuous monitoring of current developing capabilities and auditing, public consultations and engagement, risk and impact management frameworks and human rights impact assessment frameworks, technical standards, as well as education and awareness programmes.

64. One option, in some cases, could be to provide for some form of protection from retaliation for internal whistleblowers who report misconduct and the veracity of public statements by artificial intelligence actors. In this regard, the Drafters wished to make particular reference to Recommendation CM/Rec(2014)7 of the Committee of Ministers to member States on the protection of whistleblowers.

65. Given the complexity of artificial intelligence systems and the difficulty of overseeing them, Parties are encouraged to implement measures ensuring that these systems are designed, developed and used in such a way that there are effective and reliable oversight mechanisms, including human oversight[5], within the lifecycle of artificial intelligence systems. The principle of oversight is more general and thus different from the specific substantive obligation set out in Article 26 of the Framework Convention, which requires Parties to establish or designate effective mechanisms to oversee compliance with the obligations in the Framework Convention, as given effect by the Parties in their domestic legal system (see the commentary to Article 26 in paragraphs 141-144 below).

Article 9 – Accountability and responsibility

Each Party shall adopt or maintain measures to ensure accountability and responsibility for adverse impacts on human rights, democracy and the rule of law resulting from activities within the lifecycle of artificial intelligence systems.

Explanatory Report

66. The principle of accountability and responsibility in this provision refers to the need to provide mechanisms in order for individuals, organisations or entities responsible for the activities within the lifecycle of artificial intelligence systems to be answerable for the adverse impacts on human rights, democracy or the rule of law resulting from the activities within the lifecycle of those systems. Namely, the provision requires Parties to establish new frameworks and mechanisms, or to maintain existing frameworks and mechanisms as may then be applied to activities within the lifecycle of artificial intelligence systems, to give effect to that requirement. This may also include judicial and administrative measures, civil, criminal and other liability regimes and, in the public sector, administrative and other procedures so that decisions can be contested, or the placement of specific responsibilities and obligations on operators.

67. In line with the approach described in the commentary to Article 4 in paragraphs 37-41 and the commentary to Article 6 in paragraphs 50-51 above, the terms “adverse impacts on human rights, democracy and the rule of law” used in this provision refer principally to the human rights obligations and commitments applicable to each Party’s existing frameworks on human rights, democracy and the rule of law. These standards, insofar as applicable, include the notion of a “violation of human rights” contained in Article 2 of the ICCPR, Articles 13, 34, 41 and 46 of the ECHR and Articles 25 and 63 of the Pact of San José. As regards democracy and the rule of law, see in particular the contexts mentioned in the commentary to Article 5 (paragraphs 45 and 46 above) and the relevant applicable existing domestic frameworks regarding the protection of the integrity of democratic processes and institutions.

68. This principle emphasises the need for clear lines of responsibility and the ability to trace actions and decisions back to specific individuals or entities in a way that recognises the diversity of the relevant actors and their roles and responsibilities. This is important to ensure that, for example, in case the use of an artificial intelligence system results in an adverse impact on human rights, democracy or the rule of law, there is a mechanism to identify such outcomes and attribute responsibility in an appropriate manner. In other words, all actors responsible for the activities within the lifecycle of artificial intelligence systems, irrespective of whether they are public or private organisations, must be subject to each Party’s existing framework of rules, legal norms and other appropriate mechanisms so as to enable effective attribution of responsibility applied to the context of artificial intelligence systems.

69. The principle of accountability and responsibility is inseparable from the principle of transparency and oversight, since the mechanisms of transparency and oversight enable accountability and responsibility by making clearer how artificial intelligence systems function and produce outputs. When the relevant stakeholders understand the underlying processes and algorithms, it becomes easier to trace and assign responsibility in the event of adverse impacts on human rights, democracy or the rule of law, including violations of human rights.

70. Finally, due to the previously described features of an artificial intelligence lifecycle, the principle of accountability and responsibility also includes the requirement for States to adopt or maintain measures aimed at ensuring that those responsible for artificial intelligence systems consider the potential risks to human rights, democracy and the rule of law resulting from the activities within the lifecycle of artificial intelligence systems. This includes proactive action in preventing and mitigating both the risks and adverse impacts to human rights, democracy or the rule of law (see the commentary to Article 16 in paragraphs 105-112).

Article 10 – Equality and non-discrimination

1. Each Party shall adopt or maintain measures with a view to ensuring that activities within the lifecycle of artificial intelligence systems respect equality, including gender equality, and the prohibition of discrimination, as provided under applicable international and domestic law.

2. Each Party undertakes to adopt or maintain measures aimed at overcoming inequalities to achieve fair, just and equitable outcomes, in line with its applicable domestic and international human rights obligations, in relation to activities within the lifecycle of artificial intelligence systems.


Explanatory Report

71. In formulating Article 10, paragraph 1, which mentions “equality, including gender equality and the prohibition of discrimination, as provided under applicable international and domestic law”, the Drafters’ intention was to refer specifically to the body of existing human rights law consisting of international (at both global and regional levels) and domestic legal instruments applicable to each Party, which together provide a solid legal basis and guidance for each Party to consider what measures to adopt or maintain, with a view to ensuring equality and prohibition of discrimination in respect of the issues in the relevant spheres in the context of activities within the lifecycle of artificial intelligence systems.

72. At the global level, frameworks relevant to each Party may include the following provisions:

a) Articles 2, 24 and 26 of the ICCPR;

b) Articles 2, 3 and 7 of the ICESCR; and

c) Specialised legal instruments such as the ICERD, the CEDAW, the UNCRC and the UNCRPD.

73. At the regional level, frameworks relevant to each Party may include:

a) Article 14 of the ECHR and its Protocol No. 12;

b) Paragraphs 20 and 27 of Part I, Article 20 of Part II and Article E of Part V of the ESC;

c) Specialised legal instruments of the Council of Europe such as Article 4 of the Framework Convention for the Protection of National Minorities and Article 4 of the Istanbul Convention;

d) Title III of the EU Charter of Fundamental Rights, the EU Treaties (e.g. Article 2 of the Treaty on European Union, Article 10 of the Treaty on the Functioning of the European Union), EU secondary legislation[6] and the relevant case-law of the Court of Justice of the European Union;

e) Article 24 of the Pact of San José; and

f) Specialised legal instruments, such as the 1999 Inter-American Convention on the Elimination of All Forms of Discrimination against Persons with Disabilities, the 2013 Inter-American Convention against Racism, Racial Discrimination, and Related Forms of Intolerance and the 2015 Inter-American Convention on Protecting the Human Rights of Older Persons.

74. Parties should consider relevant elements of their domestic law, which could include constitutional law, statutes and jurisprudence.

75. The Drafters also reflected on the real and well-documented risk of bias that can constitute unlawful discrimination arising from the activities within the lifecycle of artificial intelligence systems. The Framework Convention requires the Parties to consider appropriate regulatory, governance, technical or other solutions to address the different ways through which bias can intentionally or inadvertently be incorporated into artificial intelligence systems at various stages throughout their lifecycle. The following issues have been well-documented with regard to some artificial intelligence systems:

a) potential bias of the algorithm’s developers (e.g. due to the conscious or unconscious stereotypes or biases of developers);

b) potential bias built into the model upon which the systems are built;

c) potential biases inherent in the training data sets used (e.g. when the dataset is inaccurate or not sufficiently representative), or in the aggregation or evaluation of data (e.g. where groups are inappropriately combined, or if benchmark data used to compare the model to other models does not adequately represent the population that the model would serve);

d) biases introduced when such systems are implemented in real-world settings (e.g. exposure to a biased environment once it is being used, or due to a biased use of the artificial intelligence system, malicious use or attacks that intentionally introduce bias by manipulating the artificial intelligence system) or as artificial intelligence evolves by self-learning due to errors and deficiencies in determining the working and learning parameters of the algorithm; or

e) automation or confirmation biases, whereby humans may place unjustified trust in machines and technological artefacts or situations where they select information that supports their own views, in both cases ignoring their own potentially contradictory judgment and validating algorithmic outputs without questioning them.

76. The issues of equality in the specific artificial intelligence context include relatively new categories of problems such as ‘technical bias’, which occurs from problems in applying machine learning that results in additional biases that are not present in the data used to train the system or make decisions; and ‘social bias’, i.e. failures to properly account for historical or current inequalities in society in the activities within the lifecycle of artificial intelligence systems such as designing and training models. These inequalities include, for example, historical and structural barriers to gender equality and to fair and just treatment for persons belonging to groups that have been or are still partly underserved, discriminated against, or otherwise subject to persistent inequality. These issues also include the recognition that various individuals experience different impacts based on factors which are linked to their personal characteristics, circumstances or membership of a group, including those covered by the relevant and applicable instruments included in paragraphs 72 and 73 of the Explanatory Report as interpreted by the relevant jurisprudence and practices of international human rights treaty bodies.

77. The provision makes clear that the required approach under this Article should not stop at simply requiring that a person not be treated less favourably “without objective and reasonable justification” based on one or more protected characteristics that they possess in relevant matters of a protected sector. Parties undertake to adopt new or maintain existing measures aimed at overcoming structural and historical inequalities, to the extent permitted by their domestic and international human rights obligations, and moreover these processes should, where appropriate, be informed by the views of those impacted.

78. Mindful of conceptual, doctrinal, legal and technical differences between the ways these issues are addressed in the domestic legal systems of various Parties, and in order to provide the Parties with the necessary flexibility in this regard, the Drafters inserted a formulation which enables each Party to comply with the obligation set out in paragraph 2 of Article 10 in line with its own applicable domestic and international human rights obligations and commitments, by applying the applicable existing frameworks to the context of activities within the lifecycle of artificial intelligence systems.

Article 11 – Privacy and personal data protection

Each Party shall adopt or maintain measures to ensure that, with regard to activities within the lifecycle of artificial intelligence systems:

a. privacy rights of individuals and their personal data are protected, including through applicable domestic and international laws, standards and frameworks; and

b. effective guarantees and safeguards have been put in place for individuals, in accordance with applicable domestic and international legal obligations.

Explanatory Report

79. The protection of privacy rights and personal data protection is a common principle required for effectively realising many other principles in this Framework Convention. Personal data collection is already ubiquitous not only as the basis of business models across many industries, but also as one of the key activities of government agencies, including law enforcement authorities, which use a variety of technologies and automated systems that collect, process and generate personal data in decision-making processes that directly impact people’s lives. With artificial intelligence systems being principally data-driven, in the absence of appropriate safeguards the activities falling within the lifecycle of such systems could pose serious risks to the privacy of individuals.

80. Despite some differences in the legal traditions, specific rules and protection mechanisms, the States which negotiated the Framework Convention share a strong commitment to the protection of privacy, for example, as enshrined at the global level in Article 17 of the ICCPR and regionally in Article 8 of the ECHR, Article 8 of the EU Charter and Article 11 of the Pact of San José.

81. At its core, the privacy rights of individuals entail partially overlapping elements with varying degrees of legal recognition and protection across jurisdictions, such as: (1) a protected interest in limiting access to an individual’s life experiences and engagements; (2) a protected interest in secrecy of certain personal matters; (3) a degree of control over personal information and data; (4) the protection of personhood (individuality or identity, dignity, individual autonomy); and (5) the protection of intimacy and physical, psychological or moral integrity. The provision underlines these various approaches by pointing at some of the key commonalities in this sphere, even though it is not intended to endorse or require any particular regulatory measures in any given jurisdiction.

82. In view of the key role that the protection of personal data plays in safeguarding privacy rights and other human rights in the digital world, the Drafters made a specific mention in the text of the provision of the domestic and international laws, standards and frameworks in the sphere of personal data protection. In order to underline their importance in ensuring effective protection in the artificial intelligence context, Article 11, subparagraph (b) also explicitly refers to other “guarantees and safeguards” that individuals (also called “data subjects” in some jurisdictions) usually enjoy by virtue of such laws, standards and frameworks. The Drafters consider this obligation to require Parties to take measures to protect privacy.

83. One such instrument is the Council of Europe’s Convention 108+, which covers both the public and private sectors and is open to accession by States at a global level. At the EU level, the General Data Protection Regulation (Regulation (EU) 2016/679, “GDPR”) is a comprehensive data protection law that applies to natural or legal persons that process personal data belonging to natural persons in the European Union, regardless of whether the processing takes place in the European Union or not. At the domestic level, most of the States which negotiated the Framework Convention have dedicated personal data or privacy protection laws and often specialised authorities responsible for the proper supervision of the relevant rules and regulations.

Article 12 – Reliability

Each Party shall take, as appropriate, measures to promote the reliability of artificial intelligence systems and trust in their outputs, which could include requirements related to adequate quality and security throughout the lifecycle of artificial intelligence systems.

Explanatory Report

84. This provision points to the potential role to be played by standards, technical specifications, assurance techniques and compliance schemes in evaluating and verifying the trustworthiness of artificial intelligence systems and for transparently documenting and communicating evidence for this process. Standards, in particular, could provide a reliable basis to share common expectations about certain aspects of a product, process, system or service, with a view to building justified confidence in the trustworthiness of an artificial intelligence system if its development and use are compliant with these standards.

85. This provision highlights the importance of establishing measures that seek to assure the reliability of artificial intelligence systems through measures addressing key aspects of functioning such as robustness, safety, security, accuracy and performance, as well as functional prerequisites such as data quality and accuracy, data integrity, data security and cybersecurity. Relevant standards, requirements, assurance and compliance schemes may cover these elements as a precondition for successfully building justified public trust in artificial intelligence technologies.

86. Technical standards can help deliver mutually understood and scalable artificial intelligence assurance and compliance, provided that they are developed in a transparent and inclusive process that encourages consistency with applicable international and domestic human rights instruments.

87. In addition, measures to be adopted or maintained under this provision should aim at ensuring that, like any other software system, artificial intelligence systems are “secure and safe by design”, which means that the relevant artificial intelligence actors should consider security and safety as core requirements, not just technical features. They should prioritise security and safety throughout the entire lifecycle of the artificial intelligence system.

88. In some cases, it may not be enough to set out standards and rules about the activities within the lifecycle of artificial intelligence systems. Measures to promote reliability may therefore include, depending on the context, providing relevant stakeholders with clear and reliable information about whether artificial intelligence actors have been following those requirements in practice. This means ensuring, as appropriate, end-to-end accountability through process transparency and documentation protocols. There is a clear connection between this principle and the principle of transparency and oversight in Article 8 and the principle of accountability and responsibility in Article 9.

89. Assurance and compliance schemes are important both for securing compliance with rules and regulations, and also for facilitating the assessment of more open-ended risks where rules and regulations alone do not provide sufficient guidance to ensure that a system is trustworthy. There is an important role for consensus-based technical standards in this context to fill gaps and also to provide guidance on mitigating risks from a technical standpoint (see also the commentary to Article 16 in paragraphs 105 and 112 below).

Article 13 – Safe innovation

With a view to fostering innovation while avoiding adverse impacts on human rights, democracy and the rule of law, each Party is called upon to enable, as appropriate, the establishment of controlled environments for developing, experimenting and testing artificial intelligence systems under the supervision of its competent authorities.

Explanatory Report

90. This provision points at an important theme which lies at the heart of the approach of the Framework Convention: Parties should seek to promote and foster innovation in line with human rights, democracy and the rule of law. One suitable way to stimulate responsible innovation with regard to artificial intelligence is by enabling the authorities in the relevant sector of activity to set up “controlled environments” or “frameworks” to allow development, training, live experimentation and testing of innovations under the competent authorities’ direct supervision, in particular to encourage the incorporation of quality, privacy and other human rights concerns, as well as security and safety concerns, in the early stages. This is especially important as certain risks associated with artificial intelligence systems can only be effectively addressed at the design stage.

91. It is also important to recognise that some artificial intelligence developers, including those with a public interest mission, cannot proceed with their innovation unless they can be reasonably sure that it will not have harmful implications, and incorporate appropriate safeguards to mitigate risks in a controlled environment. Given that innovation is essentially collaborative and path-dependent, with new systems building on what has taken place before, there is a risk that this innovation may be impeded because it cannot equally use or build on existing innovations that are not sufficiently secure. This provision is not meant to stifle innovation but recognises that innovation may be shaped as much by regulation as by the absence of it. Failure to create an environment in which responsible innovation can flourish risks stifling such innovation and leaving the playing field open to more reckless approaches.

92. In view of the diversity and underlying complexity of legal systems and regulatory traditions in the States which negotiated the Framework Convention, the provision leaves the specific details of the relevant arrangements up to the Parties, provided that the regimes set up under this provision comply with the requirement to “avoid adverse impacts on human rights, democracy and the rule of law”. One approach to achieve these goals is, for instance, “regulatory sandboxes” that aim to foster innovation, provide legal certainty and enable regulatory learning. Other approaches include special regulatory guidance or no-action letters to clarify how regulators will approach the design, development, or use of artificial intelligence systems in novel contexts.

93. The approaches pointed at by this provision offer many advantages particularly suitable in the case of artificial intelligence systems, given the fast pace of their development and the ubiquitous character of their use:

1) By allowing controlled development, testing, validating and verifying of artificial intelligence systems, such approaches may help identify potential risks and issues associated with artificial intelligence systems early in the development process. This proactive approach may enable developers to address concerns before widespread deployment. Sandboxes or the issuance of informal regulatory guidance, for example, provide an environment that simulates real-world conditions, allowing for development and rather realistic testing of artificial intelligence applications. This may help uncover challenges that might not be apparent in isolated testing environments and enables co-operation with the competent authorities in earlier stages of the innovation lifecycle.

2) Such approaches facilitate knowledge-sharing among private entities, regulators and other stakeholders. These collaborative environments may foster a better understanding of artificial intelligence technologies, their implications and potential governance approaches, and provide legal certainty to innovators and support them in their compliance journey.

3) Artificial intelligence technologies evolve rapidly, and traditional regulatory frameworks may struggle to keep pace. Such approaches make it possible to learn about the opportunities and risks of an innovation at an early stage, provide evidence for regulatory learning purposes and may provide flexibility for regulations and technologies to be tested to check their adaptability to the changing landscape of artificial intelligence. Based on the results obtained, the framework can be interpreted to take into account these novel challenges and specific contexts, implemented more effectively or, where needed, adjusted.

4) Such environments may allow regulators to experiment with different regulatory approaches and evaluate their effectiveness in ensuring respect for human rights, democracy and the rule of law, as well as the prevention and mitigation of adverse impacts on them. This iterative process may help regulators develop informed policies which strike a balance between fostering innovation and protecting the public interest.

5) The existence of such approaches can boost public and industry confidence by demonstrating that regulators are actively engaged in understanding and overseeing artificial intelligence technologies to ensure respect for human rights, democracy and the rule of law. This transparency contributes to building trust in the responsible development and deployment of artificial intelligence.

6) Such approaches allow organisations developing and deploying artificial intelligence systems, which could also include other stakeholders, as appropriate, to work closely with regulators to understand and meet compliance requirements. This collaborative approach helps streamline the regulatory process and compliance, which is particularly helpful for smaller companies that lack the necessary resources.

Chapter IV – Remedies

Explanatory Report

94. Since the obligations in this Chapter are intended to complement each Party’s applicable international and domestic legal regime of human rights protection, which includes not only specific rules and procedures but also diverse institutions and supervisory and enforcement mechanisms, the implementation of the obligations in this Chapter should be carried out by each Party applying their existing frameworks to the context of artificial intelligence systems. In doing so, Parties should have in mind the object and purpose of the Framework Convention, which is to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law.


Artic­le 14 – Remedies

1. Each Party shall, to the extent remedies are required by its international obligations and consistent with its domestic legal system, adopt or maintain measures to ensure the availability of accessible and effective remedies for violations of human rights resulting from the activities within the lifecycle of artificial intelligence systems.

2. With the aim of sup­port­ing para­graph 1 abo­ve, each Par­ty shall adopt or main­tain mea­su­res including:

a. mea­su­res to ensu­re that rele­vant infor­ma­ti­on regar­ding arti­fi­ci­al intel­li­gence systems which have the poten­ti­al to signi­fi­cant­ly affect human rights and their rele­vant usa­ge is docu­men­ted, pro­vi­ded to bodies aut­ho­ri­sed to access that infor­ma­ti­on and, whe­re appro­pria­te and appli­ca­ble, made available or com­mu­ni­ca­ted to affec­ted persons;

b. measures to ensure that the information referred to in subparagraph a is sufficient for the affected persons to contest the decision(s) made or substantially informed by the use of the system, and, where relevant and appropriate, the use of the system itself; and

c. an effective possibility for persons concerned to lodge a complaint to competent authorities.

Expl­ana­to­ry Report

95. As alre­a­dy men­tio­ned, each Par­ty alre­a­dy has in place exi­sting frame­works in rela­ti­on to human rights, demo­cra­cy and the rule of law. The Frame­work Con­ven­ti­on requi­res Par­ties to app­ly tho­se exi­sting frame­works to the con­text of acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems.

96. Due to certain unique characteristics of artificial intelligence systems, such as their technical complexity, their data-driven character and the relative opaqueness of the operations of some such systems, human interactions with these systems have been affected by the problems of opaqueness and information asymmetry, i.e. a significant imbalance in the access to, understanding of, or control over information between different parties involved in the activities within the lifecycle of artificial intelligence systems.

97. This pro­blem is par­ti­cu­lar­ly acu­te in situa­tions whe­re human rights are adver­se­ly impac­ted by the acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems, as the affec­ted or poten­ti­al­ly affec­ted per­sons may not beco­me awa­re of such impacts or have the neces­sa­ry infor­ma­ti­on to exer­cise their rights in this con­nec­tion or avail them­sel­ves of rele­vant pro­ce­du­res and safeguards.

98. That is why this pro­vi­si­on recalls the prin­ci­ple that a reme­dy needs to be both effec­ti­ve and acce­s­si­ble. In order to be effec­ti­ve, the reme­dy must be capa­ble of direct­ly reme­dy­ing the impug­ned situa­tions, and in order to be acce­s­si­ble, it has to be available with suf­fi­ci­ent pro­ce­du­ral safe­guards in place to make the reme­dy meaningful for the per­son con­cer­ned. In order to under­line the link and ensu­re com­ple­men­ta­ri­ty with the appli­ca­ble inter­na­tio­nal and dome­stic human rights pro­tec­tion mecha­nisms, the pro­vi­si­on uses the legal ter­mi­no­lo­gy refe­ren­ced in Artic­le 2 of the ICCPR, Artic­le 13 of the ECHR and Artic­le 25 of the Pact of San José. The term of “vio­la­ti­ons of human rights” used in the first para­graph of this pro­vi­si­on refers to the well-estab­lished noti­ons con­tai­ned in Artic­le 2 of the ICCPR, Artic­les 13, 34, 41 and 46 of the ECHR and Artic­les 25 and 63 of the Pact of San José, if and as appli­ca­ble to respec­ti­ve future Par­ties of this Frame­work Con­ven­ti­on (see the com­men­ta­ry in para­graph 67 above).

99. Con­si­stent with the prin­ci­ples in Artic­les 8 (Prin­ci­ple of trans­pa­ren­cy and over­sight) and 9 (Prin­ci­ple of accoun­ta­bi­li­ty and respon­si­bi­li­ty), Artic­le 14 of the Frame­work Con­ven­ti­on requi­res Par­ties to adopt or main­tain spe­ci­fic mea­su­res to docu­ment and make available cer­tain infor­ma­ti­on to the affec­ted per­sons in order to sup­port the aim of making available, acce­s­si­ble and effec­ti­ve reme­dies for vio­la­ti­ons of human rights in the con­text of acti­vi­ties in the life­cy­cle of an arti­fi­ci­al intel­li­gence system. The rele­vant con­tent in the infor­ma­ti­on-rela­ted mea­su­res should be con­text-appro­pria­te, suf­fi­ci­ent­ly clear and meaningful, and cri­ti­cal­ly, pro­vi­de a per­son con­cer­ned with an effec­ti­ve abili­ty to use the infor­ma­ti­on in que­sti­on to exer­cise their rights in the pro­ce­e­dings in respect of the rele­vant decis­i­ons affec­ting their human rights. It is also important to recall that excep­ti­ons, limi­ta­ti­ons or dero­ga­ti­ons from such trans­pa­ren­cy obli­ga­ti­ons are pos­si­ble in the inte­rest of public order, secu­ri­ty and other important public inte­rests as pro­vi­ded for by appli­ca­ble inter­na­tio­nal human rights instru­ments and, whe­re neces­sa­ry, to meet the­se objectives.

100. For vio­la­ti­ons of human rights resul­ting from the acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems, it is also important to pro­vi­de the per­sons con­cer­ned with an effec­ti­ve pos­si­bi­li­ty to lodge a com­plaint to com­pe­tent aut­ho­ri­ties, as spe­ci­fi­ed in Artic­le 14, para­graph 2, sub­pa­ra­graph (c) of the Frame­work Con­ven­ti­on. This may include the over­sight mechanism(s) refer­red to in Artic­le 25. In some situa­tions, effec­ti­ve redress may include com­plaints by public inte­rest orga­ni­sa­ti­ons, in accordance with a Party’s dome­stic legal system.

101. The Draf­ters wis­hed to under­line that the expres­si­ons “signi­fi­cant­ly affect human rights” in sub­pa­ra­graph (a) of para­graph 2 of Artic­le 14 and “signi­fi­cant­ly impact(s) upon the enjoy­ment of human rights” in para­graph 1 of Artic­le 15 both intro­du­ce a thres­hold requi­re­ment, which means that (1) the rele­vant requi­re­ments of Artic­les 14 and 15 do not app­ly auto­ma­ti­cal­ly to all arti­fi­ci­al intel­li­gence systems fal­ling within the scope of Artic­le 3 of the Frame­work Con­ven­ti­on; (2) that the arti­fi­ci­al intel­li­gence systems which have no signi­fi­cant effect or impact on human rights do not fall within the scope of the spe­ci­fic new obli­ga­ti­ons in this Artic­le; and (3) it is up to the Par­ties of the Frame­work Con­ven­ti­on to exami­ne whe­ther, in view of their exi­sting inter­na­tio­nal and dome­stic human rights law and the con­text and other rele­vant cir­cum­stances in rela­ti­on to a given arti­fi­ci­al intel­li­gence system, such system can be said to have “signi­fi­cant effect” or “signi­fi­cant impact” on human rights.

102. Likewise, the expression “substantially informed by the use of the [artificial intelligence] system” in subparagraph (b) of Article 14 is meant to introduce a threshold requirement which underlines that not every use of an artificial intelligence system in decision-making triggers the application of subparagraph (b); these measures should apply only in cases where the decision has been at least “substantially informed” by the use of the system. It is at the discretion of the Parties to the Framework Convention to define the meaning of this expression, consistent with their applicable international and domestic human rights law.

Artic­le 15 – Pro­ce­du­ral safeguards

1. Each Par­ty shall ensu­re that, whe­re an arti­fi­ci­al intel­li­gence system signi­fi­cant­ly impacts upon the enjoy­ment of human rights, effec­ti­ve pro­ce­du­ral gua­ran­tees, safe­guards and rights, in accordance with the appli­ca­ble inter­na­tio­nal and dome­stic law, are available to per­sons affec­ted thereby.

2. Each Par­ty shall seek to ensu­re that, as appro­pria­te for the con­text, per­sons inter­ac­ting with arti­fi­ci­al intel­li­gence systems are noti­fi­ed that they are inter­ac­ting with such systems rather than with a human.


Expl­ana­to­ry Report

103. Para­graph 1 of Artic­le 15 sets out a sepa­ra­te obli­ga­ti­on for the Par­ties to ensu­re that the exi­sting pro­ce­du­ral gua­ran­tees, safe­guards and rights pre­scri­bed in the appli­ca­ble inter­na­tio­nal and dome­stic human rights law remain available and effec­ti­ve in the arti­fi­ci­al intel­li­gence con­text. Whe­re an arti­fi­ci­al intel­li­gence system sub­stan­ti­al­ly informs or takes decis­i­ons impac­ting on human rights, effec­ti­ve pro­ce­du­ral gua­ran­tees should, for instance, include human over­sight, inclu­ding ex ante or ex post review of the decis­i­on by humans. Whe­re appro­pria­te, such human over­sight mea­su­res should gua­ran­tee that the arti­fi­ci­al intel­li­gence system is sub­ject to built-in ope­ra­tio­nal cons­traints that can­not be over­ridden by the system its­elf and is respon­si­ve to the human ope­ra­tor, and that the natu­ral per­sons to whom human over­sight has been assi­gned have the neces­sa­ry com­pe­tence, trai­ning and aut­ho­ri­ty to car­ry out that role.

104. Paragraph 2 of Article 15 deals specifically with situations of direct human interaction with an artificial intelligence system. In such cases and where appropriate taking into account the circumstances and context of use, and with a view in particular to avoiding the risk of manipulation and deception, persons interacting with an artificial intelligence system should be duly notified that they are interacting with an artificial intelligence system rather than with a human. For example, interactions with AI-enabled chatbots on government websites would likely trigger the notification obligation under this provision. At the same time, this obligation is not intended, for instance, to cover situations where the very purpose of the use of the system would be counteracted by the notification (law enforcement scenarios) or where the use of the system is obvious from the context, which renders notification unnecessary.

Chap­ter V – Assess­ment and miti­ga­ti­on of risks and adver­se impacts

Artic­le 16 – Risk and impact manage­ment framework

1. Each Party shall, taking into account the principles set forth in Chapter III, adopt or maintain measures for the identification, assessment, prevention and mitigation of risks posed by artificial intelligence systems by considering actual and potential impacts to human rights, democracy and the rule of law.

2. Such mea­su­res shall be gra­dua­ted and dif­fe­ren­tia­ted, as appro­pria­te, and:

a. take due account of the context and intended use of artificial intelligence systems, in particular as concerns risks to human rights, democracy, and the rule of law;

b. take due account of the severity and probability of potential impacts;

c. consider, where appropriate, the perspectives of relevant stakeholders, in particular persons whose rights may be impacted;

d. apply iteratively throughout the activities within the lifecycle of the artificial intelligence system;

e. include monitoring for risks and adverse impacts to human rights, democracy, and the rule of law;

f. include documentation of risks, actual and potential impacts, and the risk management approach; and

g. require, where appropriate, testing of artificial intelligence systems before making them available for first use and when they are significantly modified.

3. Each Party shall adopt or maintain measures that seek to ensure that adverse impacts of artificial intelligence systems to human rights, democracy, and the rule of law are adequately addressed. Such adverse impacts and measures to address them should be documented and inform the relevant risk management measures described in paragraph 2.

4. Each Par­ty shall assess the need for a mora­to­ri­um or ban or other appro­pria­te mea­su­res in respect of cer­tain uses of arti­fi­ci­al intel­li­gence systems whe­re it con­siders such uses incom­pa­ti­ble with the respect for human rights, the func­tio­ning of demo­cra­cy or the rule of law.


Expl­ana­to­ry Report

105. In order to take into account the iterative character of the activities within the lifecycle of artificial intelligence systems and also to ensure the effectiveness of the measures undertaken by the Parties, the Framework Convention contains a dedicated provision prescribing the need to identify, assess, prevent and mitigate, ex ante and, as appropriate, iteratively throughout the lifecycle of the artificial intelligence system, the relevant risks and potential impacts to human rights, democracy and the rule of law by following and enabling the development of a methodology with concrete and objective criteria for such assessments. These obligations are one of the key tools for enabling the implementation of the requirements of the Framework Convention, Chapters II and III in particular, and should be implemented by the Parties in light of all relevant principles, including the principles of transparency and oversight as well as the principle of accountability and responsibility.

106. The pur­po­se of this pro­vi­si­on is to ensu­re a uni­form approach towards the iden­ti­fi­ca­ti­on, ana­ly­sis, and eva­lua­ti­on of the­se risks and the assess­ment of impacts of such systems. At the same time, it is based on the assump­ti­on that the Par­ties are best pla­ced to make rele­vant regu­la­to­ry choices, taking into account their spe­ci­fic legal, poli­ti­cal, eco­no­mic, social, cul­tu­ral, and tech­no­lo­gi­cal con­texts, and that they should accor­din­gly enjoy a cer­tain fle­xi­bi­li­ty when it comes to the actu­al gover­nan­ce and regu­la­ti­on which accom­pa­ny the processes.

107. This is the prin­ci­pal rea­son why the pro­vi­si­on men­ti­ons gra­dua­ted and dif­fe­ren­tia­ted mea­su­res which should take due account of “the con­text and inten­ded use of arti­fi­ci­al intel­li­gence systems” that allo­ws fle­xi­bi­li­ty to the Par­ties in the approa­ches and metho­do­lo­gies they choo­se to car­ry out this assess­ment. In par­ti­cu­lar, the Par­ties may choo­se to imple­ment this assess­ment at the dif­fe­rent levels, such as at regu­la­to­ry level by pre­scrib­ing dif­fe­rent cate­go­ries of risk clas­si­fi­ca­ti­on and/or at ope­ra­tio­nal level by rele­vant actors assi­gned with respon­si­bi­li­ties for the acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems. Par­ties may also choo­se to focus at the ope­ra­tio­nal level only on cer­tain pre-defi­ned cate­go­ries of arti­fi­ci­al intel­li­gence systems in line with the gra­dua­ted and dif­fe­ren­tia­ted approach to keep the bur­den and obli­ga­ti­ons pro­por­tio­na­te to the risks (Artic­le 16, para­graph 2, sub­pa­ra­graph (a)). Par­ties could also con­sider the capa­ci­ty of various cate­go­ries of pri­va­te sec­tor actors to respond to the­se requi­re­ments, in par­ti­cu­lar tho­se regar­ding docu­men­ta­ti­on and com­mu­ni­ca­ti­on with rele­vant aut­ho­ri­ties and stake­hol­ders, and whe­re pos­si­ble and appro­pria­te, adjust them accordingly.

108. The Draf­ters also wis­hed to cla­ri­fy that along with the risks to human rights, demo­cra­cy and the rule of law, the assess­ments can, whe­re appro­pria­te, take due account of the need to pre­ser­ve a healt­hy and sus­tainable envi­ron­ment, as well as pro­jec­ted bene­fits for socie­ty as a who­le and posi­ti­ve impacts on human rights, demo­cra­cy and the rule of law. Such fac­tors as “seve­ri­ty”, “pro­ba­bi­li­ty”, dura­ti­on and rever­si­bi­li­ty of risks and impacts are also very important in the arti­fi­ci­al intel­li­gence con­text and should be taken into account in the risk manage­ment frame­work (Artic­le 16, para­graph 2, sub­pa­ra­graph (b)), spe­ci­fi­cal­ly when iden­ti­fy­ing and asses­sing risks and poten­ti­al impacts. Moreo­ver, it is important to spe­ci­fy that the requi­re­ment to take into account the per­spec­ti­ve of per­sons who­se rights may be impac­ted, depen­ding on con­text, to the ext­ent prac­ti­ca­ble and whe­re appro­pria­te, ent­ails con­side­ring the per­spec­ti­ve of a varie­ty of rele­vant stake­hol­ders, such as out­side tech­ni­cal experts and civil socie­ty (Artic­le 16, para­graph 2, sub­pa­ra­graph (c)).

109. The provision is also based on the understanding that carrying out risk assessment at the beginning of the artificial intelligence system lifecycle is only a first, albeit critical, step in a much longer, end-to-end process of responsible evaluation and re-assessment (Article 16, paragraph 2, subparagraph (d)). In the risk and impact assessment process, attention should be paid both to the dynamic and changing character of activities within the lifecycle of artificial intelligence systems and to the shifting conditions of the real-world environments in which systems are intended to be deployed. The provision further introduces requirements regarding not only the documenting of the relevant information during the risk management processes, but also the application of sufficient preventive and mitigating measures in respect of the risks and impacts identified. It is important for the requirement of proper documentation of the risk and impact management processes in Article 16, paragraph 2, subparagraph (f) to play its role in the identification, assessment, prevention and mitigation of risks or adverse impacts to human rights, democracy or the rule of law arising throughout the lifecycle of artificial intelligence systems. Both technical documentation and documentation of risks and adverse impacts should be properly drawn up and regularly updated. Where appropriate, the documentation may include public reporting of adverse impacts. Testing (Article 16, paragraph 2, subparagraph (g)) may include providing independent auditors with access to aspects of artificial intelligence systems.

110. Para­graph 3 of Artic­le 16 also pre­scri­bes the appli­ca­ti­on of mea­su­res in respect of the risks and impacts iden­ti­fi­ed, in order to ade­qua­te­ly address the adver­se impacts of arti­fi­ci­al intel­li­gence systems to human rights, demo­cra­cy and the rule of law.

111. Para­graph 4 of Artic­le 16 sta­tes that Par­ties to the Frame­work Con­ven­ti­on shall assess the need for mora­to­ria, bans, or other appro­pria­te mea­su­res regar­ding uses of arti­fi­ci­al intel­li­gence systems that they con­sider “incom­pa­ti­ble” with the respect of human rights, demo­cra­cy, and the rule of law. The deter­mi­na­ti­on of what is “incom­pa­ti­ble” in this con­text is made by each Par­ty, as is the assess­ment of whe­ther such a sce­na­rio would requi­re a mora­to­ri­um or ban, on the one hand, or ano­ther appro­pria­te mea­su­re, on the other. Wit­hout mea­su­res pro­hi­bi­ting, limi­ting or other­wi­se regu­la­ting the use of arti­fi­ci­al intel­li­gence systems in the­se cir­cum­stances, such uses could pose exce­s­si­ve risks to human rights, demo­cra­cy, and the rule of law.

112. While this pro­vi­si­on lea­ves the details of how to address mora­to­ria, bans or other appro­pria­te mea­su­res to each Par­ty, given their gra­vi­ty, mea­su­res like mora­to­ria or bans should only be con­side­red in cir­cum­stances whe­re a Par­ty asses­ses that a par­ti­cu­lar use of an arti­fi­ci­al intel­li­gence system poses an unac­cep­ta­ble risk to human rights, demo­cra­cy or the rule of law. Fur­ther con­side­ra­ti­on may include, for exam­p­le, careful exami­na­ti­on of whe­ther the­re are any mea­su­res available for miti­ga­ting that risk. The­se mea­su­res should also be accom­pa­nied with appro­pria­te­ly orga­ni­s­ed review pro­ce­du­res in order to enable their update, inclu­ding pos­si­ble rever­sal (for exam­p­le, once rele­vant risks have been suf­fi­ci­ent­ly redu­ced or appro­pria­te miti­ga­ti­on mea­su­res have beco­me available, or new unac­cep­ta­ble prac­ti­ces have been iden­ti­fi­ed). The Draf­ters also note the importance of public con­sul­ta­ti­ons when dis­cus­sing mea­su­res set out under this provision.

Chap­ter VI – Imple­men­ta­ti­on of the Convention

Artic­le 17 – Non-discrimination

The imple­men­ta­ti­on of the pro­vi­si­ons of this Con­ven­ti­on by the Par­ties shall be secu­red wit­hout dis­cri­mi­na­ti­on on any ground, in accordance with their inter­na­tio­nal human rights obligations.

Expl­ana­to­ry Report

113. This Artic­le pro­hi­bits dis­cri­mi­na­ti­on in the Par­ties’ imple­men­ta­ti­on of the Frame­work Con­ven­ti­on. The mea­ning of dis­cri­mi­na­ti­on in Artic­le 17 is iden­ti­cal to that laid out in the appli­ca­ble inter­na­tio­nal law, such as, inter alia, Artic­le 26 of the ICCPR, Artic­le 2 of the ICESCR, Artic­le 14 of the ECHR and its Pro­to­col No. 12, Artic­le 24 of the Pact of San José, and Artic­le E of the ESC, if and as appli­ca­ble to Par­ties to the Frame­work Convention.

114. Taken tog­e­ther, the­se pro­vi­si­ons cover a broad ran­ge of non-dis­cri­mi­na­ti­on grounds which are lin­ked to indi­vi­du­als’ per­so­nal cha­rac­te­ri­stics, cir­cum­stances or mem­ber­ship of a group, inclu­ding tho­se cover­ed by the rele­vant and appli­ca­ble instru­ments inclu­ded in para­graphs 72 and 73 of the Expl­ana­to­ry Report as inter­pre­ted by the rele­vant juris­pru­dence and prac­ti­ces of inter­na­tio­nal human rights trea­ty bodies.

115. Not all of the­se grounds are expli­ci­t­ly sta­ted or iden­ti­cal­ly for­mu­la­ted in the human rights trea­ties by which the Par­ties to the pre­sent Frame­work Con­ven­ti­on may be bound. Tho­se trea­ties usual­ly con­tain open-ended lists of such grounds, as inter­pre­ted by the juris­pru­dence of com­pe­tent inter­na­tio­nal courts such as the Euro­pean and the Inter-Ame­ri­can Courts of Human Rights and in the rele­vant prac­ti­ce of com­pe­tent inter­na­tio­nal bodies, such as the United Nati­ons Human Rights Com­mit­tee. The­re may thus be varia­ti­ons bet­ween the various inter­na­tio­nal human rights regimes appli­ca­ble to dif­fe­rent Par­ties. As with other human rights con­ven­ti­ons and trea­ties, here too the approach of the Frame­work Con­ven­ti­on is not to crea­te new human rights obli­ga­ti­ons or to redu­ce, extend or other­wi­se modi­fy the scope or con­tent of the inter­na­tio­nal human rights obli­ga­ti­ons appli­ca­ble to a Par­ty (see the com­ment to Artic­le 1, in para­graph 13 above).

Artic­le 18 – Rights of per­sons with disa­bi­li­ties and of children

Each Par­ty shall, in accordance with its dome­stic law and appli­ca­ble inter­na­tio­nal obli­ga­ti­ons, take due account of any spe­ci­fic needs and vul­nerabi­li­ties in rela­ti­on to respect for the rights of per­sons with disa­bi­li­ties and of children.

Expl­ana­to­ry Report

116. This provision sets out an obligation for the Parties, in the context of the activities within the lifecycle of artificial intelligence systems, to take due account of “specific needs and vulnerabilities in relation to respect of the rights of persons with disabilities and of children” and in this regard it correlates directly with the provisions and the legal regime of the UNCRPD and the UNCRC as well as the applicable domestic law of each Party on the rights of persons with disabilities and the rights of the child. Explicit reference to the applicable domestic law on the rights of the child and the rights of persons with disabilities has been inserted, in particular, with a view to taking into consideration the situation of any Party to the Framework Convention which did not ratify the UNCRC or the UNCRPD, but nevertheless has domestic legislation securing the enjoyment of such rights.

117. The refe­rence to dome­stic law in this pro­vi­si­on is meant sole­ly to point at pro­vi­si­ons of dome­stic law which pro­vi­de the level of pro­tec­tion in the rele­vant con­text simi­lar or sup­ple­men­ta­ry to the UNCRPD or the UNCRC, and such refe­rence can­not be invo­ked by a Par­ty as justi­fi­ca­ti­on for its fail­ure to per­form this trea­ty obli­ga­ti­on. The objec­ti­ve is thus to gua­ran­tee the hig­hest pos­si­ble level of con­side­ra­ti­on for any spe­ci­fic needs and vul­nerabi­li­ties in rela­ti­on to respect of the rights of per­sons with disa­bi­li­ties and of child­ren, inclu­ding trai­ning on digi­tal liter­a­cy, as explai­ned in rela­ti­on to Artic­le 20 in the Expl­ana­to­ry Report.

118. In view of the serious risk that artificial intelligence technologies could be used to facilitate sexual exploitation and sexual abuse of children, and the specific risks that they pose to children, in the context of the implementation of this provision the Drafters considered the obligations set forth in the Lanzarote Convention, the Optional Protocol to the UN Convention on the Rights of the Child on the sale of children, child prostitution and child pornography, and General Comment No. 25 to the UNCRC on children’s rights in relation to the digital environment.

Artic­le 19 – Public consultation

Each Par­ty shall seek to ensu­re that important que­sti­ons rai­sed in rela­ti­on to arti­fi­ci­al intel­li­gence systems are, as appro­pria­te, duly con­side­red through public dis­cus­sion and mul­ti­stake­hol­der con­sul­ta­ti­on in the light of social, eco­no­mic, legal, ethi­cal, envi­ron­men­tal and other rele­vant implications.

Expl­ana­to­ry Report

119. The pur­po­se of this artic­le is to prompt the Par­ties, inso­far as appro­pria­te, to foster civic enga­ge­ment, empower indi­vi­du­als and experts to par­ta­ke in public dis­cus­sion on issues of broad social and poli­ti­cal importance, and crea­te grea­ter public awa­re­ness of the fun­da­men­tal and emer­ging que­sti­ons, inclu­ding issues appli­ca­ble to the ear­ly stages of design, rai­sed by the acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems. Views of socie­ty and various per­spec­ti­ves should be ascer­tai­ned and taken into due con­side­ra­ti­on as far as pos­si­ble with regard to the rele­vant pro­blems, which could include, for exam­p­le, risks as well as posi­ti­ve and adver­se impacts. To this end, meaningful “public dis­cus­sion and mul­ti-stake­hol­der con­sul­ta­ti­on” are recommended.

120. Engagement should involve a diverse range of stakeholders, including the general public, industry experts, academics, National Human Rights Institutions (NHRIs), and civil society. For the Drafters of the Framework Convention, these discussions and consultations play a crucial role in ensuring that artificial intelligence systems align with universal human rights and address relevant concerns regarding human rights, democracy and the rule of law, by reflecting a broad range of perspectives and thus informing the relevant policy-making and regulatory initiatives.

121. The expres­si­on “as appro­pria­te” lea­ves it to the Par­ties to deter­mi­ne the topics, fre­quen­cy and other moda­li­ties of such con­sul­ta­ti­ons in the light of social, eco­no­mic, legal, ethi­cal, envi­ron­men­tal and other rele­vant impli­ca­ti­ons. For exam­p­le, Sta­tes may orga­ni­se sur­veys and que­sti­on­n­aires, public work­shops, focus groups, citi­zen juries and deli­be­ra­ti­ve pol­ling, expert panels and con­sul­ta­ti­ve com­mit­tees, public hea­rings, natio­nal and inter­na­tio­nal con­fe­ren­ces, or com­bi­na­ti­ons of the abo­ve. Final assess­ment and incor­po­ra­ti­on of the out­co­mes of such dis­cus­sions and con­sul­ta­ti­ons into the rele­vant poli­cy initia­ti­ves could also be ade­qua­te­ly and appro­pria­te­ly com­mu­ni­ca­ted to the rele­vant stakeholders.

Artic­le 20 – Digi­tal liter­a­cy and skills

Each Par­ty shall encou­ra­ge and pro­mo­te ade­qua­te digi­tal liter­a­cy and digi­tal skills for all seg­ments of the popu­la­ti­on, inclu­ding spe­ci­fic expert skills for tho­se respon­si­ble for the iden­ti­fi­ca­ti­on, assess­ment, pre­ven­ti­on and miti­ga­ti­on of risks posed by arti­fi­ci­al intel­li­gence systems.

Expl­ana­to­ry Report

122. The provision draws the attention of the Parties to the fact that promotion of digital literacy and digital skills for all segments of the population is critically important in today’s technology-driven world. The two terms refer to the ability to use, understand, and engage with digital technologies, including artificial intelligence and other data-based technologies, effectively, and thus contribute to promoting broad awareness and understanding in the general population and to preventing and mitigating risks or adverse impacts on human rights, democracy or the rule of law, as well as other societal harms such as malicious or criminal use of such technologies. The Drafters also wished to mention particularly beneficial effects of such programmes for individuals from diverse backgrounds and those who may be underrepresented or in vulnerable situations, which may include, for example, women, girls, indigenous peoples, elderly people and children, with due respect for safeguards regarding the use of artificial intelligence systems for people in situations of vulnerability.

123. Owing to the object and pur­po­se of the Frame­work Con­ven­ti­on, the spe­ci­fic trai­ning pro­gram­mes regar­ding arti­fi­ci­al intel­li­gence tech­no­lo­gies refer­red to under Artic­le 20 could include enhan­cing awa­re­ness of and the abili­ty to mana­ge the poten­ti­al risks and adver­se impacts of arti­fi­ci­al intel­li­gence systems in the con­text of human rights, demo­cra­cy or the rule of law and, depen­ding on con­text, could cover such topics as:

a. the concept of artificial intelligence;
b. the purpose of particular artificial intelligence systems;
c. capabilities and limitations of different types of artificial intelligence models and the assumptions underlying them;
d. socio-cultural factors associated with the design, development, and use of artificial intelligence systems, including in relation to data used to train them;
e. human factors relevant to the use of artificial intelligence systems, such as how end users may interpret and use outputs;
f. domain expertise relevant to the context in which artificial intelligence systems are used;
g. legal and policy considerations;
h. perspectives of individuals or communities that disproportionately experience adverse impacts of artificial intelligence systems.

124. In view of how essen­ti­al trai­ning is to tho­se respon­si­ble for the iden­ti­fi­ca­ti­on, assess­ment, pre­ven­ti­on and miti­ga­ti­on of risks posed by arti­fi­ci­al intel­li­gence, the pro­vi­si­on refers addi­tio­nal­ly to this spe­ci­fic group of addres­sees (such actors include, for instance, judi­cia­ry, natio­nal super­vi­so­ry aut­ho­ri­ties, data pro­tec­tion aut­ho­ri­ties, equa­li­ty and human rights bodies, ombuds, con­su­mer pro­tec­tion aut­ho­ri­ties, arti­fi­ci­al intel­li­gence pro­vi­ders and arti­fi­ci­al intel­li­gence users), in par­ti­cu­lar with refe­rence to the appli­ca­ti­on of the metho­do­lo­gy set out in Artic­le 16.

Artic­le 21 – Safe­guard for exi­sting human rights

Not­hing in this Con­ven­ti­on shall be con­strued as limi­ting, dero­ga­ting from or other­wi­se affec­ting the human rights or other rela­ted legal rights and obli­ga­ti­ons which may be gua­ran­teed under the rele­vant laws of a Par­ty or any other rele­vant inter­na­tio­nal agree­ment to which it is party.

Expl­ana­to­ry Report

125. Con­si­stent with the 1969 Vien­na Con­ven­ti­on on the Law of Trea­ties, this artic­le seeks to ensu­re that the Frame­work Con­ven­ti­on har­mo­nious­ly coexists with other inter­na­tio­nal human rights trea­ties and instru­ments, such as tho­se listed in para­graph 39 above.

126. This pro­vi­si­on rein­forces that the over­all aim of this Frame­work Con­ven­ti­on is to ensu­re the hig­hest level of pro­tec­tion of human rights, demo­cra­cy and the rule of law in the con­text of the acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems. In this con­text, all refe­ren­ces to dome­stic law in this Frame­work Con­ven­ti­on should be read as limi­t­ed to cases whe­re dome­stic law pro­vi­des for a hig­her stan­dard of human rights pro­tec­tion than appli­ca­ble inter­na­tio­nal law.

Artic­le 22 – Wider protection

None of the pro­vi­si­ons of this Con­ven­ti­on shall be inter­pre­ted as limi­ting or other­wi­se affec­ting the pos­si­bi­li­ty for a Par­ty to grant a wider mea­su­re of pro­tec­tion than is sti­pu­la­ted in this Convention.

Expl­ana­to­ry Report

127. This pro­vi­si­on safe­guards tho­se pro­vi­si­ons of dome­stic law and exi­sting and future bin­ding inter­na­tio­nal instru­ments, which pro­vi­de sup­ple­men­ta­ry pro­tec­tion in respect of acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems in sen­si­ti­ve con­texts from the point of view of human rights, demo­cra­cy and the rule of law, going bey­ond the level secu­red by this Frame­work Con­ven­ti­on; this Frame­work Con­ven­ti­on shall not be inter­pre­ted so as to rest­rict such pro­tec­tion. The phra­se “wider mea­su­re of pro­tec­tion” can be inter­pre­ted as pro­vi­ding the pos­si­bi­li­ty of put­ting a per­son, for exam­p­le, in a more favoura­ble posi­ti­on than pro­vi­ded for under the Frame­work Convention.

Chap­ter VII – Fol­low-up mecha­nism and co-operation

Expl­ana­to­ry Report

128. Chap­ter VII of the Frame­work Con­ven­ti­on con­ta­ins pro­vi­si­ons which aim at ensu­ring the effec­ti­ve imple­men­ta­ti­on of the Frame­work Con­ven­ti­on by the Par­ties through a fol­low-up mecha­nism and co-ope­ra­ti­on. This is the mecha­nism announ­ced in Artic­le 1 para­graph 3.

Artic­le 23 – Con­fe­rence of the Parties

1. The Con­fe­rence of the Par­ties shall be com­po­sed of repre­sen­ta­ti­ves of the Par­ties to this Convention.

2. The Par­ties shall con­sult peri­odi­cal­ly with a view to:

a. faci­li­ta­ting the effec­ti­ve appli­ca­ti­on and imple­men­ta­ti­on of this Con­ven­ti­on, inclu­ding the iden­ti­fi­ca­ti­on of any pro­blems and the effects of any reser­va­ti­on made in pur­su­an­ce of Artic­le 34, para­graph 1, or any decla­ra­ti­on made under this Convention;

b. con­side­ring the pos­si­ble sup­ple­men­ta­ti­on to or amend­ment of this Convention;

c. con­side­ring mat­ters and making spe­ci­fic recom­men­da­ti­ons con­cer­ning the inter­pre­ta­ti­on and appli­ca­ti­on of this Convention;

d. faci­li­ta­ting the exch­an­ge of infor­ma­ti­on on signi­fi­cant legal, poli­cy or tech­no­lo­gi­cal deve­lo­p­ments of rele­van­ce, inclu­ding in pur­su­it of the objec­ti­ves defi­ned in Artic­le 25, for the imple­men­ta­ti­on of this Convention;

e. faci­li­ta­ting, whe­re neces­sa­ry, the fri­end­ly sett­le­ment of dis­pu­tes rela­ted to the appli­ca­ti­on of this Con­ven­ti­on; and

f. faci­li­ta­ting co-ope­ra­ti­on with rele­vant stake­hol­ders con­cer­ning per­ti­nent aspects of the imple­men­ta­ti­on of this Con­ven­ti­on, inclu­ding through public hea­rings whe­re appropriate.

3. The Con­fe­rence of the Par­ties shall be con­ve­ned by the Secre­ta­ry Gene­ral of the Coun­cil of Euro­pe when­ever neces­sa­ry and, in any case, when a majo­ri­ty of the Par­ties or the Com­mit­tee of Mini­sters requests its convocation.

4. The Con­fe­rence of the Par­ties shall adopt its own rules of pro­ce­du­re by con­sen­sus within twel­ve months of the ent­ry into force of this Convention.

5. The Par­ties shall be assi­sted by the Secre­ta­ri­at of the Coun­cil of Euro­pe in car­ry­ing out their func­tions pur­su­ant to this article.

6. The Con­fe­rence of the Par­ties may pro­po­se to the Com­mit­tee of Mini­sters appro­pria­te ways to enga­ge rele­vant exper­ti­se in sup­port of the effec­ti­ve imple­men­ta­ti­on of this Convention.

7. Any Par­ty which is not a mem­ber of the Coun­cil of Euro­pe shall con­tri­bu­te to the fun­ding of the acti­vi­ties of the Con­fe­rence of the Par­ties. The con­tri­bu­ti­on of a non-mem­ber of the Coun­cil of Euro­pe shall be estab­lished joint­ly by the Com­mit­tee of Mini­sters and that non-member.

8. The Con­fe­rence of the Par­ties may deci­de to rest­rict the par­ti­ci­pa­ti­on in its work of a Par­ty that has cea­sed to be a mem­ber of the Coun­cil of Euro­pe under Artic­le 8 of the Sta­tu­te of the Coun­cil of Euro­pe (ETS No. 1) for a serious vio­la­ti­on of Artic­le 3 of the Sta­tu­te. Simi­lar­ly, mea­su­res can be taken in respect of any Par­ty that is not a mem­ber Sta­te of the Coun­cil of Euro­pe by a decis­i­on of the Com­mit­tee of Mini­sters to cea­se its rela­ti­ons with that Sta­te on grounds simi­lar to tho­se men­tio­ned in Artic­le 3 of the Statute.


Expl­ana­to­ry Report

129. This artic­le pro­vi­des for the set­ting-up of a body under the Frame­work Con­ven­ti­on, the Con­fe­rence of the Par­ties, com­po­sed of repre­sen­ta­ti­ves of the Parties.

130. The estab­lish­ment of this body will ensu­re equal par­ti­ci­pa­ti­on of all Par­ties in the decis­i­on-making pro­cess and in the Frame­work Con­ven­ti­on fol­low-up pro­ce­du­re and will also streng­then co-ope­ra­ti­on bet­ween the Par­ties to ensu­re pro­per and effec­ti­ve imple­men­ta­ti­on of the Frame­work Convention.

131. The fle­xi­bi­li­ty of the fol­low-up mecha­nism estab­lished by this Frame­work Con­ven­ti­on is reflec­ted by the fact that the­re is no tem­po­ral requi­re­ment for its con­vo­ca­ti­on. It will be con­ve­ned by the Secre­ta­ry Gene­ral of the Coun­cil of Euro­pe (para­graph 3) as appro­pria­te and peri­odi­cal­ly (para­graph 2). Howe­ver, it must in any case be con­ve­ned at the request of the majo­ri­ty of the Par­ties or at the request of the Com­mit­tee of Mini­sters of the Coun­cil of Euro­pe (para­graph 3).

132. With respect to this Frame­work Con­ven­ti­on, the Con­fe­rence of the Par­ties has the tra­di­tio­nal fol­low-up com­pe­ten­ci­es and plays a role in respect of:

a) the effec­ti­ve imple­men­ta­ti­on of the Frame­work Con­ven­ti­on, by making pro­po­sals to faci­li­ta­te or impro­ve the effec­ti­ve use and imple­men­ta­ti­on of this Frame­work Con­ven­ti­on, inclu­ding the iden­ti­fi­ca­ti­on of any pro­blems the­r­ein, and the effects of signi­fi­cant legal, poli­cy or tech­no­lo­gi­cal deve­lo­p­ments per­tai­ning to the acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems, as well as the effects of any decla­ra­ti­on or reser­va­ti­on made under this Frame­work Convention;

b) the amend­ment of the Frame­work Con­ven­ti­on, by making pro­po­sals for amend­ment in accordance with Artic­le 28, para­graph 1 and for­mu­la­ting its opi­ni­on on any pro­po­sal for amend­ment of this Frame­work Con­ven­ti­on which is refer­red to it in accordance with Artic­le 28, para­graph 3;

c) a gene­ral advi­so­ry role in respect of the Frame­work Con­ven­ti­on by expres­sing spe­ci­fic recom­men­da­ti­ons on any que­sti­on con­cer­ning its inter­pre­ta­ti­on or appli­ca­ti­on, inclu­ding, for instance, sug­ge­st­ing inter­pre­ta­ti­ons of legal terms con­tai­ned in the Frame­work Con­ven­ti­on. Alt­hough not legal­ly bin­ding in natu­re, the­se recom­men­da­ti­ons may be seen as a joint expres­si­on of opi­ni­on by the Par­ties on a given sub­ject which should be taken into account in good faith by the Par­ties in their appli­ca­ti­on of the Frame­work Convention;

d) ser­ving as a forum for faci­li­ta­ting the exch­an­ge of infor­ma­ti­on on signi­fi­cant legal, socie­tal, poli­cy or tech­no­lo­gi­cal deve­lo­p­ments in rela­ti­on to the appli­ca­ti­on of the pro­vi­si­ons of the Frame­work Con­ven­ti­on, inclu­ding in rela­ti­on to the inter­na­tio­nal co-ope­ra­ti­on acti­vi­ties descri­bed in Artic­le 25;

e) in accordance with Artic­le 29 of the Frame­work Con­ven­ti­on, faci­li­ta­ting, whe­re neces­sa­ry, the fri­end­ly sett­le­ment of dis­pu­tes rela­ted to the appli­ca­ti­on of its pro­vi­si­ons, in a non-bin­ding, con­sul­ta­ti­ve capacity;

f) faci­li­ta­ting co-ope­ra­ti­on with stake­hol­ders, inclu­ding non-govern­men­tal orga­ni­sa­ti­ons and other bodies which can impro­ve the effec­ti­ve­ness of the fol­low-up mecha­nism. In view of the high­ly tech­ni­cal sub­ject mat­ter of the Frame­work Con­ven­ti­on, para­graph 6 of Artic­le 23 express­ly points at the pos­si­bi­li­ty for the Con­fe­rence of the Par­ties to seek, whe­re appro­pria­te, rele­vant expert advice.

133. The Con­fe­rence of the Par­ties must adopt rules of pro­ce­du­re estab­li­shing the way in which the fol­low-up system of the Frame­work Con­ven­ti­on ope­ra­tes, on the under­stan­ding that its rules of pro­ce­du­re must be draf­ted in such a way that such fol­low-up is effec­tively ensu­red. The rules of pro­ce­du­re shall be adopted by con­sen­sus, name­ly a decis­i­on taken in the absence of sus­tained objec­tion and wit­hout a for­mal vote. Artic­le 23, para­graph 4 fur­ther sti­pu­la­tes that the Con­fe­rence of the Par­ties shall adopt such rules within twel­ve months of the ent­ry into force of the Frame­work Convention.

134. Para­graph 7 con­cerns the con­tri­bu­ti­on of Par­ties which are not mem­ber Sta­tes of the Coun­cil of Euro­pe to the finan­cing of the acti­vi­ties of the Con­fe­rence of the Par­ties. The con­tri­bu­ti­ons of mem­ber Sta­tes to the­se acti­vi­ties are cover­ed coll­ec­tively by the ordi­na­ry bud­get of the Coun­cil of Euro­pe, whe­re­as non-mem­ber Sta­tes con­tri­bu­te indi­vi­du­al­ly, in a fair man­ner. The Frame­work Con­ven­ti­on does not sti­pu­la­te the form in which the con­tri­bu­ti­ons, inclu­ding the amounts and moda­li­ties, of Par­ties which are not mem­bers of the Coun­cil of Euro­pe shall be estab­lished. The legal basis for the con­tri­bu­ti­on of such Par­ties will be the Frame­work Con­ven­ti­on its­elf and the act(s) estab­li­shing that con­tri­bu­ti­on. The Frame­work Con­ven­ti­on does not affect dome­stic laws and regu­la­ti­ons of Par­ties gover­ning bud­ge­ta­ry com­pe­ten­ci­es and pro­ce­du­res for bud­ge­ta­ry appro­pria­ti­ons. Wit­hout pre­ju­di­ce to the agree­ment refer­red to abo­ve, one of the ways for a Par­ty which is not a mem­ber of the Coun­cil of Euro­pe to make its payment of con­tri­bu­ti­on is to pay within the limit of bud­get appro­ved by the legis­la­ti­ve branch.

135. Para­graph 8 of this pro­vi­si­on gives the Con­fe­rence of the Par­ties the aut­ho­ri­ty to deli­be­ra­te on the limi­ta­ti­on of invol­vement in its pro­ce­e­dings by any Par­ty that has been dis­qua­li­fi­ed from mem­ber­ship of the Coun­cil of Euro­pe pur­su­ant to Artic­le 8 of the Sta­tu­te of the Coun­cil of Euro­pe for a serious vio­la­ti­on of Artic­le 3 of the Sta­tu­te. Simi­lar action can be under­ta­ken regar­ding any Par­ty that is a non-mem­ber of the Coun­cil of Euro­pe by a decis­i­on of the Com­mit­tee of Mini­sters of the Coun­cil of Europe.

Artic­le 24 – Report­ing obligation

1. Each Par­ty shall pro­vi­de a report to the Con­fe­rence of the Par­ties within the first two years after beco­ming a Par­ty, and then peri­odi­cal­ly the­re­af­ter with details of the acti­vi­ties under­ta­ken to give effect to Artic­le 3, para­graph 1, sub-para­graphs a and b.

2. The Con­fe­rence of the Par­ties shall deter­mi­ne the for­mat and the pro­cess for the report in accordance with its rules of procedure.


Expl­ana­to­ry Report

136. To enable co-ope­ra­ti­on and regu­lar updates on the imple­men­ta­ti­on of the Frame­work Con­ven­ti­on, each Par­ty should pro­vi­de a report to the Con­fe­rence of the Par­ties within the first two years after beco­ming a Par­ty and then peri­odi­cal­ly the­re­af­ter, with details of the acti­vi­ties under­ta­ken to give effect to Artic­le 3, para­graph 1, sub­pa­ra­graphs (a) and (b). The Con­fe­rence of the Par­ties will deter­mi­ne the for­mat and the pro­cess for the report in accordance with its rules of pro­ce­du­re. The Draf­ters stron­gly encou­ra­ge the Par­ties to invi­te signa­to­ries not yet Par­ties to the Frame­work Con­ven­ti­on to share infor­ma­ti­on on the steps and mea­su­res taken to address risks to human rights, demo­cra­cy and the rule of law and to faci­li­ta­te exchanges.

Artic­le 25 – Inter­na­tio­nal co-operation

1. The Par­ties shall co-ope­ra­te in the rea­li­sa­ti­on of the pur­po­se of this Con­ven­ti­on. Par­ties are fur­ther encou­ra­ged, as appro­pria­te, to assist Sta­tes that are not Par­ties to this Con­ven­ti­on in acting con­sist­ent­ly with the terms of this Con­ven­ti­on and beco­ming a Par­ty to it.

2. The Par­ties shall, as appro­pria­te, exch­an­ge rele­vant and useful infor­ma­ti­on bet­ween them­sel­ves con­cer­ning aspects rela­ted to arti­fi­ci­al intel­li­gence which may have signi­fi­cant posi­ti­ve or nega­ti­ve effects on the enjoy­ment of human rights, the func­tio­ning of demo­cra­cy and the obser­van­ce of the rule of law, inclu­ding risks and effects that have ari­sen in rese­arch con­texts and in rela­ti­on to the pri­va­te sec­tor. Par­ties are encou­ra­ged to invol­ve, as appro­pria­te, rele­vant stake­hol­ders and Sta­tes that are not Par­ties to this Con­ven­ti­on in such exch­an­ges of information.

3. The Par­ties are encou­ra­ged to streng­then co-ope­ra­ti­on, inclu­ding with rele­vant stake­hol­ders whe­re appro­pria­te, to pre­vent and miti­ga­te risks and adver­se impacts on human rights, demo­cra­cy and the rule of law in the con­text of acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems.


Expl­ana­to­ry Report

137. This artic­le sets out the pro­vi­si­ons on inter­na­tio­nal co-ope­ra­ti­on bet­ween Par­ties to the Frame­work Con­ven­ti­on. It starts by men­tio­ning the obli­ga­ti­on appli­ca­ble among Par­ties to afford one ano­ther the grea­test mea­su­re of assi­stance in con­nec­tion with the rea­li­sa­ti­on of the pur­po­se of this Frame­work Con­ven­ti­on, which is to ensu­re that acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems are ful­ly con­si­stent with human rights, demo­cra­cy and the rule of law.

138. This gene­ral obli­ga­ti­on is sup­ple­men­ted by an important point regar­ding the need for the Par­ties to offer sup­port, as dee­med sui­ta­ble, to Sta­tes that have not yet beco­me Par­ties to this Frame­work Con­ven­ti­on. This assi­stance should be aimed at gui­ding the­se Sta­tes in alig­ning their actions with the prin­ci­ples out­lined in this Frame­work Con­ven­ti­on and ulti­m­ate­ly encou­ra­ging their acce­s­si­on to it. This col­la­bo­ra­ti­ve effort should seek to pro­mo­te a coll­ec­ti­ve com­mit­ment to the goals and pro­vi­si­ons of the Frame­work Con­ven­ti­on, foste­ring a broa­der and more inclu­si­ve adherence to its terms among Sta­tes glo­bal­ly. Such sup­port and gui­dance do not neces­s­a­ri­ly imply finan­cial assistance.

139. Fur­ther­mo­re, the co-ope­ra­ti­on set up by the Frame­work Con­ven­ti­on should include faci­li­ta­ti­on of the sha­ring of per­ti­nent infor­ma­ti­on regar­ding various aspects of arti­fi­ci­al intel­li­gence bet­ween the Par­ties, inclu­ding mea­su­res adopted to pre­vent or miti­ga­te risks and impacts on human rights, demo­cra­cy and the rule of law. This infor­ma­ti­on exch­an­ge should encom­pass ele­ments that could exert sub­stan­ti­al posi­ti­ve or adver­se impacts on the enjoy­ment of human rights, the func­tio­ning of demo­cra­tic pro­ce­s­ses, and the respect of the rule of law, inclu­ding risks and effects that have ari­sen in rese­arch con­texts and in rela­ti­on to the pri­va­te sec­tor. This sha­ring also extends to risks and effects that have sur­faced within the con­texts of rese­arch on arti­fi­ci­al intel­li­gence, pro­mo­ting a com­pre­hen­si­ve under­stan­ding of the mul­ti­face­ted impli­ca­ti­ons of the­se tech­no­lo­gies across the­se cri­ti­cal domains. In this regard, the pro­vi­si­on also points at the need for the Par­ties to include rele­vant non-Sta­te actors, such as aca­de­mics, indu­stry repre­sen­ta­ti­ves, and civil socie­ty orga­ni­sa­ti­ons, with a view to ensu­ring a mul­ti-stake­hol­der view of the rele­vant topics.

140. Last­ly, the pro­vi­si­on direct­ly spe­ci­fi­es that, for the fol­low-up of the appli­ca­ti­on of the Frame­work Con­ven­ti­on to be tru­ly effec­ti­ve, the Par­ties’ efforts in co-ope­ra­ti­on should aim spe­ci­fi­cal­ly at the pre­ven­ti­on and miti­ga­ti­on of risks and adver­se impacts resul­ting from the acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems and that such co-ope­ra­ti­on should moreo­ver include a pos­si­bi­li­ty of invol­ving repre­sen­ta­ti­ves of non-govern­men­tal orga­ni­sa­ti­ons and other rele­vant bodies.

Artic­le 26 – Effec­ti­ve over­sight mechanisms

1. Each Par­ty shall estab­lish or desi­gna­te one or more effec­ti­ve mecha­nisms to over­see com­pli­ance with the obli­ga­ti­ons in this Convention.

2. Each Par­ty shall ensu­re that such mecha­nisms exer­cise their duties inde­pendent­ly and impar­ti­al­ly and that they have the neces­sa­ry powers, exper­ti­se and resour­ces to effec­tively ful­fil their tasks of over­see­ing com­pli­ance with the obli­ga­ti­ons in this Con­ven­ti­on, as given effect by the Parties.

3. In case a Par­ty has pro­vi­ded for more than one such mecha­nism, it shall take mea­su­res, whe­re prac­ti­ca­ble, to faci­li­ta­te effec­ti­ve coope­ra­ti­on among them.

4. In case a Par­ty has pro­vi­ded for mecha­nisms dif­fe­rent from exi­sting human rights struc­tures, it shall take mea­su­res, whe­re prac­ti­ca­ble, to pro­mo­te effec­ti­ve coope­ra­ti­on bet­ween the mecha­nisms refer­red to in para­graph 1 and tho­se exi­sting dome­stic human rights structures.


Expl­ana­to­ry Report

141. This pro­vi­si­on requi­res Par­ties to adopt or main­tain effec­ti­ve mecha­nisms to over­see com­pli­ance with the obli­ga­ti­ons in the Frame­work Con­ven­ti­on. In view of the ubi­qui­tous cha­rac­ter of the use of arti­fi­ci­al intel­li­gence systems and the fact that all Par­ties alre­a­dy have various regu­la­ti­ons and super­vi­sing mecha­nisms in place for the pro­tec­tion of human rights in various sec­tors, the pro­vi­si­on empha­sis­es the need for the Par­ties to review the alre­a­dy exi­sting mecha­nisms to app­ly to the con­text of acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems. Par­ties may also choo­se to expand, real­lo­ca­te, adapt, or rede­fi­ne their func­tions or, if appro­pria­te, set up enti­re­ly new struc­tures or mecha­nisms. The pro­vi­si­ons under this Artic­le lea­ve the­se decis­i­ons express­ly to the Par­ties’ dis­creti­on, sub­ject to the con­di­ti­ons in para­graphs 2 and 3, with an under­stan­ding that the rele­vant bodies should be vested with the suf­fi­ci­ent powers to effec­tively pur­sue their over­sight activities.

142. Whe­ther estab­lished, new­ly set up or desi­gna­ted, such bodies should satis­fy the cri­te­ria set out in para­graph 2 of the pro­vi­si­on inso­far as they should be func­tion­al­ly inde­pen­dent from the rele­vant actors within the exe­cu­ti­ve and legis­la­ti­ve bran­ches. The refe­rence to “inde­pendent­ly and impar­ti­al­ly” in para­graph 2 deno­tes a suf­fi­ci­ent degree of distance from rele­vant actors within both exe­cu­ti­ve and legis­la­ti­ve bran­ches, sub­ject to over­sight enab­ling the rele­vant body(ies) to car­ry out their func­tions effec­tively. This term accom­mo­da­tes a varie­ty of types of func­tion­al inde­pen­dence that could be imple­men­ted in dif­fe­rent legal systems. For exam­p­le, this may include over­sight func­tions embedded within par­ti­cu­lar govern­ment bodies that assess or super­vi­se the deve­lo­p­ment and use of arti­fi­ci­al intel­li­gence systems.

143. A num­ber of fur­ther ele­ments men­tio­ned in the pro­vi­si­on con­tri­bu­te to safe­guar­ding the requi­red level of func­tion­al inde­pen­dence: the bodies should have the neces­sa­ry powers, exper­ti­se, inclu­ding in human rights, tech­ni­cal know­ledge and pro­fi­ci­en­cy, as well as other resour­ces to ful­fil their tasks effectively.

144. Given the shared sub­ject mat­ter and a real pos­si­bi­li­ty that the over­sight of the acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems is shared by mul­ti­ple aut­ho­ri­ties across a ran­ge of sec­tors (this is par­ti­cu­lar­ly true for Par­ties with exi­sting spe­cia­li­sed human rights mecha­nisms, such as for exam­p­le data pro­tec­tion, equa­li­ty bodies, or Natio­nal Human Rights Insti­tu­ti­ons (NHRIs), acting in a given sec­tor or across sec­tors), the pro­vi­si­on requi­res the Par­ties to pro­mo­te effec­ti­ve com­mu­ni­ca­ti­on and co-ope­ra­ti­on bet­ween them.

Chap­ter VIII – Final clauses

Expl­ana­to­ry Report

145. With some excep­ti­ons, the pro­vi­si­ons in Artic­les 27 to 36 are essen­ti­al­ly based on the Model Final Clau­ses for Con­ven­ti­ons, Addi­tio­nal Pro­to­cols and Amen­ding Pro­to­cols con­clu­ded within the Coun­cil of Euro­pe adopted by the Com­mit­tee of Mini­sters at its 1291st mee­ting of the Mini­sters’ Depu­ties, on 5 July 2017.

Artic­le 27 – Effects of the Convention

1. If two or more Par­ties have alre­a­dy con­clu­ded an agree­ment or trea­ty on the mat­ters dealt with in this Con­ven­ti­on, or have other­wi­se estab­lished rela­ti­ons on such mat­ters, they shall also be entit­led to app­ly that agree­ment or trea­ty or to regu­la­te tho­se rela­ti­ons accor­din­gly, so long as they do so in a man­ner which is not incon­si­stent with the object and pur­po­se of this Convention.

2. Par­ties which are mem­bers of the Euro­pean Uni­on shall, in their mutu­al rela­ti­ons, app­ly Euro­pean Uni­on rules gover­ning the mat­ters within the scope of this Con­ven­ti­on wit­hout pre­ju­di­ce to the object and pur­po­se of this Con­ven­ti­on and wit­hout pre­ju­di­ce to its full appli­ca­ti­on with other Par­ties. The same applies to other Par­ties to the ext­ent that they are bound by such rules.


Expl­ana­to­ry Report

146. Para­graph 1 of Artic­le 27 pro­vi­des that Par­ties are free to app­ly agree­ments or trea­ties con­clu­ded pri­or to this Frame­work Con­ven­ti­on, inclu­ding inter­na­tio­nal trade agree­ments, that regu­la­te acti­vi­ties within the life­cy­cle of arti­fi­ci­al intel­li­gence systems fal­ling within the scope of this Frame­work Con­ven­ti­on. Howe­ver, Par­ties must respect the object and pur­po­se of the Frame­work Con­ven­ti­on when doing so and the­r­e­fo­re can­not have obli­ga­ti­ons that would defeat its object and purpose.

147. Para­graph 2 of this artic­le also ack­now­led­ges the increa­sed inte­gra­ti­on of the Euro­pean Uni­on, par­ti­cu­lar­ly as regards regu­la­ti­on of arti­fi­ci­al intel­li­gence systems. This para­graph, the­r­e­fo­re, per­mits Euro­pean Uni­on mem­ber Sta­tes to app­ly Euro­pean Uni­on law that governs mat­ters dealt with in this Frame­work Con­ven­ti­on bet­ween them­sel­ves. The Draf­ters inten­ded Euro­pean Uni­on law to include mea­su­res, prin­ci­ples and pro­ce­du­res pro­vi­ded for in the Euro­pean Uni­on legal order, in par­ti­cu­lar laws, regu­la­ti­ons or admi­ni­stra­ti­ve pro­vi­si­ons as well as other requi­re­ments, inclu­ding court decis­i­ons. Para­graph 2 is inten­ded, the­r­e­fo­re, to cover the inter­nal rela­ti­ons bet­ween Euro­pean Uni­on mem­ber Sta­tes and bet­ween Euro­pean Uni­on mem­ber Sta­tes and insti­tu­ti­ons, bodies, offices and agen­ci­es of the Euro­pean Uni­on. The same clau­se should also app­ly to other Par­ties that app­ly Euro­pean Uni­on rules to the ext­ent they are bound by the­se rules in view of their par­ti­ci­pa­ti­on in the Euro­pean Uni­on inter­nal mar­ket or being sub­ject to inter­nal mar­ket treatment.

148. This pro­vi­si­on does not affect the full appli­ca­ti­on of this Frame­work Con­ven­ti­on bet­ween the Euro­pean Uni­on or Par­ties that are mem­bers of the Euro­pean Uni­on, and other Par­ties. This pro­vi­si­on simi­lar­ly does not affect the full appli­ca­ti­on of this Frame­work Con­ven­ti­on bet­ween Par­ties that are not mem­bers of the Euro­pean Uni­on to the ext­ent they are also bound by the same rules and other Par­ties to the Frame­work Convention.

Artic­le 28 – Amendments

1. Amend­ments to this Con­ven­ti­on may be pro­po­sed by any Par­ty, the Com­mit­tee of Mini­sters of the Coun­cil of Euro­pe or the Con­fe­rence of the Parties.

2. Any pro­po­sal for amend­ment shall be com­mu­ni­ca­ted by the Secre­ta­ry Gene­ral of the Coun­cil of Euro­pe to the Parties.

3. Any amend­ment pro­po­sed by a Par­ty, or the Com­mit­tee of Mini­sters, shall be com­mu­ni­ca­ted to the Con­fe­rence of the Par­ties, which shall sub­mit to the Com­mit­tee of Mini­sters its opi­ni­on on the pro­po­sed amendment.

4. The Com­mit­tee of Mini­sters shall con­sider the pro­po­sed amend­ment and the opi­ni­on sub­mit­ted by the Con­fe­rence of the Par­ties and may appro­ve the amendment.

5. The text of any amend­ment appro­ved by the Com­mit­tee of Mini­sters in accordance with para­graph 4 shall be for­ward­ed to the Par­ties for acceptance.

6. Any amend­ment appro­ved in accordance with para­graph 4 shall come into force on the thir­tieth day after all Par­ties have infor­med the Secre­ta­ry Gene­ral of their accep­tance thereof.


Expl­ana­to­ry Report

149. This Artic­le pro­vi­des for a pos­si­bi­li­ty of amen­ding the Frame­work Con­ven­ti­on and estab­lishes the mecha­nism for such pro­cess. This amend­ment pro­ce­du­re is pri­ma­ri­ly inten­ded to be for rela­tively minor chan­ges of a pro­ce­du­ral and tech­ni­cal cha­rac­ter. The Draf­ters con­side­red that major chan­ges to the Frame­work Con­ven­ti­on could be made in the form of amen­ding protocols.

150. Amend­ments to the pro­vi­si­ons of the Frame­work Con­ven­ti­on may be pro­po­sed by a Par­ty, the Com­mit­tee of Mini­sters of the Coun­cil of Euro­pe or the Con­fe­rence of the Par­ties. The­se amend­ments shall then be com­mu­ni­ca­ted to the Par­ties to the Frame­work Convention.

151. On any amend­ment pro­po­sed by a Par­ty or the Com­mit­tee of Mini­sters, the Con­fe­rence of the Par­ties shall sub­mit to the Com­mit­tee of Mini­sters its opi­ni­on on the pro­po­sed amendment.

152. The Com­mit­tee of Mini­sters shall con­sider the pro­po­sed amend­ment and any opi­ni­on sub­mit­ted by the Con­fe­rence of the Par­ties and may appro­ve the amendment.

153. In accordance with para­graphs 5 and 6, any amend­ment appro­ved by the Com­mit­tee of Mini­sters would come into force only when all Par­ties have infor­med the Secre­ta­ry Gene­ral of the Coun­cil of Euro­pe of their accep­tance. This requi­re­ment seeks to ensu­re equal par­ti­ci­pa­ti­on in the decis­i­on-making pro­cess for all Par­ties and that the Frame­work Con­ven­ti­on will evol­ve in a uni­form manner.

Artic­le 29 – Dis­pu­te settlement

In the event of a dis­pu­te bet­ween Par­ties as to the inter­pre­ta­ti­on or appli­ca­ti­on of this Con­ven­ti­on, the­se Par­ties shall seek a sett­le­ment of the dis­pu­te through nego­tia­ti­on or any other peaceful means of their choice, inclu­ding through the Con­fe­rence of the Par­ties, as pro­vi­ded for in Artic­le 23, para­graph 2, sub-para­graph e.

Expl­ana­to­ry Report

154. The Draf­ters con­side­red it important to include in the text of the Frame­work Con­ven­ti­on an artic­le on dis­pu­te sett­le­ment, which impo­ses an obli­ga­ti­on on the Par­ties to seek a peaceful sett­le­ment of any dis­pu­te con­cer­ning the appli­ca­ti­on or the inter­pre­ta­ti­on of the Frame­work Con­ven­ti­on through nego­tia­ti­on or any other peaceful means of their choice.

155. In addi­ti­on to nego­tia­ti­on as spe­ci­fi­cal­ly men­tio­ned in the first para­graph of this Artic­le, Par­ties may have recour­se to any other peaceful means of their choice, as refer­red to in Artic­le 33 of the Char­ter of the United Nati­ons. As pro­vi­ded in Artic­le 23, they may also, by mutu­al con­sent, turn to the Con­fe­rence of the Par­ties at any stage. The pro­vi­si­on does not speak fur­ther about any spe­ci­fic pro­ce­du­res to be adopted in the con­text of poten­ti­al dis­pu­tes. Any pro­ce­du­re for sol­ving dis­pu­tes shall be agreed upon by the Par­ties concerned.

Artic­le 30 – Signa­tu­re and ent­ry into force

1. This Con­ven­ti­on shall be open for signa­tu­re by the mem­ber Sta­tes of the Coun­cil of Euro­pe, the non-mem­ber Sta­tes which have par­ti­ci­pa­ted in its draf­ting and the Euro­pean Union.

2. This Con­ven­ti­on is sub­ject to rati­fi­ca­ti­on, accep­tance or appr­oval. Instru­ments of rati­fi­ca­ti­on, accep­tance or appr­oval shall be depo­si­ted with the Secre­ta­ry Gene­ral of the Coun­cil of Europe.

3. This Con­ven­ti­on shall enter into force on the first day of the month fol­lo­wing the expi­ra­ti­on of a peri­od of three months after the date on which five signa­to­ries, inclu­ding at least three mem­ber Sta­tes of the Coun­cil of Euro­pe, have expres­sed their con­sent to be bound by this Con­ven­ti­on in accordance with para­graph 2.

4. In respect of any signa­to­ry which sub­se­quent­ly expres­ses its con­sent to be bound by it, this Con­ven­ti­on shall enter into force on the first day of the month fol­lo­wing the expi­ra­ti­on of a peri­od of three months after the date of the depo­sit of its instru­ment of rati­fi­ca­ti­on, accep­tance or approval.


Expl­ana­to­ry Report

156. Para­graph 1 sta­tes that the Frame­work Con­ven­ti­on is open for signa­tu­re by Coun­cil of Euro­pe mem­ber Sta­tes, non-mem­ber Sta­tes that par­ti­ci­pa­ted in its ela­bo­ra­ti­on (Argen­ti­na, Austra­lia, Cana­da, Costa Rica, the Holy See, Isra­el, Japan, Mexi­co, Peru, the United Sta­tes and Uru­gu­ay) and the Euro­pean Uni­on. Once the Frame­work Con­ven­ti­on enters into force, in accordance with para­graph 3, other non-mem­ber Sta­tes not cover­ed by this pro­vi­si­on may be invi­ted to acce­de to the Frame­work Con­ven­ti­on in accordance with Artic­le 31, para­graph 1.

157. Para­graph 2 sta­tes that the Secre­ta­ry Gene­ral of the Coun­cil of Euro­pe is the depo­si­ta­ry of the instru­ments of rati­fi­ca­ti­on, accep­tance or appr­oval of this Frame­work Convention.

158. Para­graph 3 sets the num­ber of rati­fi­ca­ti­ons, accep­tances or appr­ovals requi­red for the Frame­work Convention’s ent­ry into force at five. At least three of the­se must be made by Coun­cil of Euro­pe mem­bers, in accordance with the trea­ty-making prac­ti­ce of the Organisation.

Artic­le 31 – Accession

1. After the ent­ry into force of this Con­ven­ti­on, the Com­mit­tee of Mini­sters of the Coun­cil of Euro­pe may, after con­sul­ting the Par­ties to this Con­ven­ti­on and obtai­ning their unani­mous con­sent, invi­te any non-mem­ber Sta­te of the Coun­cil of Euro­pe which has not par­ti­ci­pa­ted in the ela­bo­ra­ti­on of this Con­ven­ti­on to acce­de to this Con­ven­ti­on by a decis­i­on taken by the majo­ri­ty pro­vi­ded for in Artic­le 20.d of the Sta­tu­te of the Coun­cil of Euro­pe, and by unani­mous vote of the repre­sen­ta­ti­ves of the Par­ties entit­led to sit on the Com­mit­tee of Ministers.

2. In respect of any acce­ding Sta­te, this Con­ven­ti­on shall enter into force on the first day of the month fol­lo­wing the expi­ra­ti­on of a peri­od of three months after the date of depo­sit of the instru­ment of acce­s­si­on with the Secre­ta­ry Gene­ral of the Coun­cil of Europe.


Explanatory Report

159. After the entry into force of this Framework Convention, the Committee of Ministers of the Council of Europe may, after consulting the Parties to this Framework Convention and obtaining their unanimous consent, invite any non-member State of the Council of Europe which has not participated in the elaboration of the Framework Convention to accede to it. This decision requires the two-thirds majority provided for in Article 20.d of the Statute of the Council of Europe, and the unanimous vote of the representatives of the Parties entitled to sit on the Committee of Ministers.

Article 32 – Territorial application

1. Any State or the European Union may, at the time of signature or when depositing its instrument of ratification, acceptance, approval or accession, specify the territory or territories to which this Convention shall apply.

2. Any Party may, at a later date, by a declaration addressed to the Secretary General of the Council of Europe, extend the application of this Convention to any other territory specified in the declaration. In respect of such territory, this Convention shall enter into force on the first day of the month following the expiration of a period of three months after the date of receipt of the declaration by the Secretary General.

3. Any declaration made under the two preceding paragraphs may, in respect of any territory specified in said declaration, be withdrawn by a notification addressed to the Secretary General of the Council of Europe. The withdrawal shall become effective on the first day of the month following the expiration of a period of three months after the date of receipt of such notification by the Secretary General.


Explanatory Report

160. Paragraph 1 is a clause on territorial application such as those often used in international treaty practice, including in the conventions elaborated within the Council of Europe. Any Party may specify the territory or territories to which the Framework Convention applies. It is well understood that it would be incompatible with the object and purpose of the Framework Convention for any Party to exclude parts of its territory from application of the Framework Convention without valid reason (such as the existence of different legal status or different legal systems applying in matters dealt with in the Framework Convention).

161. Paragraph 2 is concerned with the extension of application of the Framework Convention to territories for whose international relations the Parties are responsible or on whose behalf they are authorised to give undertakings.

Article 33 – Federal clause

1. A federal State may reserve the right to assume obligations under this Convention consistent with its fundamental principles governing the relationship between its central government and constituent states or other similar territorial entities, provided that this Convention shall apply to the central government of the federal State.

2. With regard to the provisions of this Convention, the application of which comes under the jurisdiction of constituent states or other similar territorial entities that are not obliged by the constitutional system of the federation to take legislative measures, the federal government shall inform the competent authorities of such states of the said provisions with its favourable opinion, and encourage them to take appropriate action to give them effect.


Explanatory Report

162. Consistent with the goal of enabling the largest possible number of States to become Parties to the Framework Convention, Article 33 allows for a reservation which is intended to accommodate the difficulties federal States may face as a result of their characteristic distribution of power between central and regional authorities and the fact that in some systems the federal government of the particular country may not be constitutionally competent to fulfil the treaty obligations. Precedents exist for federal declarations or reservations to other international agreements[7], including the Convention on Cybercrime (ETS No. 185) on enhanced co-operation and disclosure of electronic evidence of 23 November 2001 (Article 41).

163. Article 33 recognises that some variations in coverage may occur as a result of well-established domestic law and practice of a Party which is a federal State. Such variations must be based on its Constitution or other fundamental principles and practices concerning the division of powers in relation to the matters covered by the Framework Convention between the central government and the constituent States or territorial entities of a federal State.

164. Some articles of the Framework Convention contain requirements to adopt or maintain legislative, administrative or other measures that a federal State may be unable to require its constituent States or other similar territorial entities to adopt or maintain.

165. In addition, paragraph 2 of Article 33 provides that, in respect of provisions the implementation of which falls within the legislative jurisdiction of the constituent States or other similar territorial entities, the federal government shall refer the provisions to the authorities of these entities with a favourable endorsement, encouraging them to take appropriate action to give them effect.

Article 34 – Reservations

1. By a written notification addressed to the Secretary General of the Council of Europe, any State may, at the time of signature or when depositing its instrument of ratification, acceptance, approval or accession, declare that it avails itself of the reservation provided for in Article 33, paragraph 1.

2. No other reservation may be made in respect of this Convention.


Explanatory Report

166. Article 34 specifies that a State may make use of the reservation provided for in Article 33, paragraph 1, either at the moment of signing or upon depositing its instrument of ratification, acceptance, approval, or accession.

167. Paragraph 2 specifies that no reservation may be made in relation to any provision of this Framework Convention, with the exceptions provided for in paragraph 1 of this article.

Article 35 – Denunciation

1. Any Party may, at any time, denounce this Convention by means of a notification addressed to the Secretary General of the Council of Europe.

2. Such denunciation shall become effective on the first day of the month following the expiration of a period of three months after the date of receipt of the notification by the Secretary General.


Explanatory Report

168. In accordance with the United Nations Vienna Convention on the Law of Treaties, Article 35 allows any Party to denounce the Framework Convention at any time. The sole requirement is that the denunciation be notified to the Secretary General of the Council of Europe, who shall act as depositary of the Framework Convention.

169. This denunciation takes effect three months after it has been received by the Secretary General.

Article 36 – Notification

The Secretary General of the Council of Europe shall notify the member States of the Council of Europe, the non-member States which have participated in the drafting of this Convention, the European Union, any signatory, any contracting State, any Party and any other State which has been invited to accede to this Convention, of:

a. any signature;

b. the deposit of any instrument of ratification, acceptance, approval or accession;

c. any date of entry into force of this Convention, in accordance with Article 30, paragraphs 3 and 4, and Article 31, paragraph 2;

d. any amendment adopted in accordance with Article 28 and the date on which such an amendment enters into force;

e. any declaration made in pursuance of Article 3, paragraph 1, sub-paragraph b;

f. any reservation and withdrawal of a reservation made in pursuance of Article 34;

g. any denunciation made in pursuance of Article 35;

h. any other act, declaration, notification or communication relating to this Convention.

Explanatory Report

170. Article 36 lists the notifications that, as the depositary of the Framework Convention, the Secretary General of the Council of Europe is required to make, and also designates the recipients of these notifications (States and the European Union).