
Abiding by rules of ethics and law when developing and maintaining an AI system in healthcare (8-9 min read - by Romain-Daniel Gosselin).
In previous posts, we explored the ethical and legal context of using AI systems in clinical practice. This latest instalment of my post series is aimed more at AI governance professionals who oversee the developers, maintainers, and integrators of AI systems, upstream of bedside use. As a reminder, Switzerland decided against a dedicated AI law in order to preserve the competitiveness of its private sector, taking the view that current laws are sufficient, albeit possibly in need of amendment.
Many ethical principles are particularly relevant to today's post, such as autonomy (which also requires the preservation of transparency), trust, fairness, privacy and non-maleficence. Together they guarantee that AI systems preserve the self-determination of all participants by maintaining their control over their personal data on the one hand, and that the process will not harm them on the other.
On the menu:
- The central importance of the Swiss Federal Data Protection Act
- Honorable mention of the EU AI Act
- The specificities of AI development for medicine
- The importance of certification
- Links between federal acts and the existence of cantonal laws
- The issue of liability in case of defect
- The control authorities
Curbing the data feast: the new Data Protection Act.
It is an understatement to write that the training and testing of AI systems, in particular ML and DL systems, are data intensive. It is thus no surprise that one key regulatory player is the new federal Data Protection Act (nLPD), along with its accompanying ordinances on data protection (OPDo) and certification (OCPD). The purpose of these laws is to protect the personality and fundamental rights of natural persons whose personal data is processed (nLPD Art. 1). The Act's revision in 2023 was particularly important in aligning it with the European General Data Protection Regulation (GDPR, applicable since 2018), with which Swiss organisations exchanging data with the EU already had to comply.
Let’s briefly review some nLPD sections particularly relevant to the design and development of biomedical AI systems, and thus to AI governance professionals, maintainers, and integrators:
- Processing: the specific purpose and transparency of data collection (nLPD Art. 6, al. 3), a time limitation on data processing (nLPD Art. 6, al. 4), and the verification of data accuracy (nLPD Art. 6, al. 5) are all mandatory.
- Privacy: any system collecting personal data must be designed so that the protection of users' privacy is integrated into its structure (Privacy by Design) and must ship with the highest level of security enabled by default, i.e. without any need for user intervention (Privacy by Default), upon release (nLPD Art. 7). A minimal code sketch follows this list.
- Sensitive data: genetic and biometric data are deemed sensitive (nLPD Art. 5, let. c), which implies that: 1) consent must be given explicitly for data collection (nLPD Art. 6, al. 7, let. a); 2) a prior data protection impact assessment must be carried out in the case of large-scale processing of sensitive data (nLPD Art. 22, al. 2, let. a), which is more than likely to occur in AI design.
- Record: processing activities must be recorded (nLPD Art. 12). Among other items, the record includes the categories of data subjects, the retention period for the data, and a description of the measures taken to guarantee data security (NB: exceptions exist for entities that have fewer than 250 employees and whose data processing poses a negligible risk of harm to the personality of the subjects).
- Commissioner: the Federal Data Protection and Information Commissioner (PFPDT) is the authority that supervises the application of the nLPD (nLPD Art. 4). They must be consulted whenever the impact assessment indicates that the planned processing poses a high risk to fundamental rights (nLPD Art. 23), must be promptly notified in the event of a data security breach (nLPD Art. 24), and are entitled to carry out an investigation (nLPD Art. 50), which may result in the modification, suspension or termination of the processing, or even the destruction of the data (nLPD Art. 51).
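To make the above more tangible for developers, here is a minimal Python sketch of what Privacy by Default (Art. 7) and a record of processing activities (Art. 12) could look like inside an AI pipeline. All class names and fields are my own illustrative assumptions, not an official schema, and real compliance obviously requires legal review rather than a dataclass.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Privacy by Default (nLPD Art. 7): the most protective settings are the
# defaults, so no user intervention is needed to reach maximal protection.
@dataclass
class PrivacySettings:
    share_with_third_parties: bool = False   # opt-in, never opt-out
    use_for_model_training: bool = False     # off until explicitly enabled
    pseudonymise_identifiers: bool = True    # on unless deliberately disabled
    encrypt_at_rest: bool = True

# Record of processing activities (nLPD Art. 12): purpose, categories of
# data subjects, retention period and security measures, among other items.
@dataclass
class ProcessingRecord:
    purpose: str                              # specific purpose (Art. 6, al. 3)
    data_subject_categories: list[str]
    retention_until: date                     # time limitation (Art. 6, al. 4)
    security_measures: list[str]
    sensitive_data: bool = False              # genetic/biometric (Art. 5, let. c)
    explicit_consent_obtained: bool = False   # required if sensitive (Art. 6, al. 7, let. a)

def check_record(record: ProcessingRecord) -> None:
    """Refuse processing that would plainly violate the principles above."""
    if record.sensitive_data and not record.explicit_consent_obtained:
        raise PermissionError("Sensitive data requires explicit consent.")
    if record.retention_until < date.today():
        raise PermissionError("Retention expired: delete or anonymise the data.")

# Example: training a pneumonia-detection model on pseudonymised X-rays.
record = ProcessingRecord(
    purpose="Training of a pneumonia-detection model on chest X-rays",
    data_subject_categories=["adult in-patients"],
    retention_until=date.today() + timedelta(days=5 * 365),
    security_measures=["AES-256 encryption at rest", "role-based access control"],
    sensitive_data=True,
    explicit_consent_obtained=True,
)
check_record(record)
```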
What about the EU AI Act?
Unlike Switzerland, the European Union has legislated directly on AI, with the enactment of the groundbreaking European AI Act (EU AI Act) in 2024, the world's first comprehensive legal framework on AI. The challenge faced by the Union in creating the law from scratch was to satisfy all parties: public authorities, citizens (seeking protection of their rights) and businesses (seeking competitiveness). The Act is now in a two-year, step-by-step implementation phase before it applies in its entirety. It aims to guarantee that European citizens can trust AI outcomes and that AI systems respect fundamental human rights and do no harm to humans. It applies to AI systems deployed in the EU, whether or not you are based in the EU; therefore, although Switzerland is not an EU member state, any company aiming to deploy an AI system in member states will have to comply with it. Grumpy Swiss readers might blame European bureaucracy, but at least the AI question is addressed head-on.
The EU AI Act takes an interesting approach: it classifies AI applications by risk level (assessed against fundamental rights) as unacceptable (prohibited), high, limited, or minimal risk. The higher the risk, the tougher the rules. Let’s go straight to the point: health-related AI systems are automatically considered high-risk (Annex III, al. 5), unless you are provocatively developing a system that falls into the “unacceptable” category, such as manipulative or social-scoring algorithms, in which case you may have more advanced ethical issues than I initially anticipated.
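As a toy illustration (and nothing more: the real classification follows from the Act's annexes and legal analysis, not from a lookup table), the risk-tier logic can be pictured like this; the use cases named are my own examples:

```python
from enum import Enum

# The EU AI Act's four risk tiers: the higher the risk, the tougher the rules.
class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations (conformity assessment, documentation, human oversight)"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Purely illustrative mapping; any real determination belongs to lawyers.
EXAMPLES = {
    "social scoring of citizens": RiskLevel.UNACCEPTABLE,
    "clinical decision support": RiskLevel.HIGH,   # health-related => high-risk
    "customer-service chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

for use_case, level in EXAMPLES.items():
    print(f"{use_case}: {level.name} -> {level.value}")
```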
Specificities of the clinical context.
Any R&D project, AI or not, that uses samples from persons (biological material, in vivo human embryos and foetuses, or health-related personal data) must also bow to the Human Research Act (LRH) and its ordinances (ORH, OClin, OClin-Dim). (NB: sharp eyes will have noticed that I deliberately excluded the organisational ordinance, OrgLRH, which deals with regulatory bodies.)
These laws serve as Switzerland's interpretation of foundational international agreements and declarations like the Oviedo Convention (the first legally-binding international text designed to preserve human rights in biological research), the Helsinki Declaration, and we can even ultimately refer back to the overarching Universal Declaration of Human Rights (1948). They are the ethical safeguards ensuring the development of biomedical AI systems serving humanity rather than the other way around.
Certification, release and commercialisation.
If your AI system is anything that could be considered a medical device, it will also have to abide by the federal Therapeutic Products Act (LPTh) and its ordinances on medical devices (ODim, ODiv). These regulations make explicit reference to European law, and the alignment of Swiss regulation with international recommendations is also embodied in Swissmedic's contribution to the International Council for Harmonisation (ICH) guidelines, which create a harmonised approach to medical innovation and make global deployment smoother. Certification is a mandatory prerequisite for deployment in Switzerland. This door-opener is granted by the Swiss Agency for Therapeutic Products (Swissmedic) for medical systems, so your product will have to go through their process. Ethically, the quality control guaranteed by official certification is a safeguard preserving non-maleficence and privacy, and ultimately trust and self-determination.
Connections between laws and cantonal regulations.
You already guessed (and feared) it: these federal laws do not work in complete isolation. They sometimes refer to one another's articles (for example, LRH Art. 42 directly mentions the nLPD), and they sometimes address the same issues. This is not dramatically surprising, since in democratic countries the laws governing different aspects of society are largely based on the same handful of fundamental ethical principles: respect and self-determination of citizens, equity and justice, and non-maleficence, for example. The principles of information and consent, for instance, are widely covered by the different texts, although with certain specificities. Potential frictions or conflicts across laws will have to be sorted out with your favourite lawyer. I intuitively suspect that the lex specialis and lex posterior doctrines, the Swiss hierarchy of laws, and case law will all play a role in untangling this legal mess.
Each canton also maintains its own data protection law complementing federal legislation (NB: cantonal laws may be more restrictive, but not more permissive, than federal law). For example, Bern has its loi sur la protection des données (LCPD), Geneva has its loi sur l’information du public, l’accès aux documents et la protection des données personnelles (LIPAD), Vaud operates under its loi sur la protection des données personnelles (LPrD), Fribourg uses its loi sur la protection des données (LPrD, yes, I know, 4-letter acronyms tend to repeat!), and Valais follows its loi sur l'information du public, la protection des données et l'archivage (LIPDA).
Defects in your AI systems? The liability issue.
Current Swiss law offers its own directives on who holds the hot potato when an AI system makes a mistake. One guiding principle: laws are a matter of human beings, and an AI system itself cannot be held responsible. In other words, there can be no trustworthy AI without human accountability, and in case of AI malfunction a human will always be found and held accountable. AI systems can be regarded as products, meaning that developers and distributors are liable under the federal Product Liability Act (LRFP; the analysis is somewhat technical and beyond the scope of this post, but I invite the reader to consult Ariane Morin’s excellent article on the matter). In a nutshell, the liability of a manufacturer (designer) requires evidence that a defect existed in the AI system at the time it was released or handed over to the user (LRFP Art. 4, al. 1, let. c and Art. 5, al. 1, let. b). Importantly, software updates mean that the manufacturer retains control over the system, so their accountability continues.
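One practical consequence for maintainers: since liability hinges on the system's state at the moment of release or hand-over, it is worth keeping tamper-evident release records. Here is a hedged sketch of one way to do that; the function names, fields, and file layout are my assumptions, not a legal requirement:

```python
import hashlib
import json
from datetime import datetime, timezone

def release_manifest(model_path: str, version: str, notes: str) -> dict:
    """Fingerprint the exact artefact shipped, so its state at release
    time (cf. LRFP Art. 4 and 5) can be demonstrated later."""
    with open(model_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "version": version,
        "released_at": datetime.now(timezone.utc).isoformat(),
        "model_sha256": digest,
        "notes": notes,  # e.g. validation results, known limitations
    }

def log_release(manifest: dict, log_path: str = "releases.jsonl") -> None:
    """Append each release, and each subsequent update (updates mean
    continued control, hence continued accountability), to a log."""
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(manifest) + "\n")
```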
Oversight authorities and standards: who is watching you?
The regulatory oversight landscape is as fragmented as the laws themselves, with different authorities monitoring different aspects of AI deployment. Cantonal ethics committees (CER) oversee compliance with the Human Research Act and its ordinances, ensuring your research meets ethical standards. Data protection falls under the watchful eyes of federal and cantonal data protection officers, who ensure your AI systems respect privacy rights and data-handling requirements. For medical devices, Swissmedic serves as the primary gatekeeper, evaluating safety and efficacy before allowing market entry. This distributed oversight system means you might find yourself presenting to multiple authorities, each with its own procedures and timelines, depending on the specificities of your biomedical AI development. Here is a summary:
- CER and Swissethics ensure compliance with the LRH and its ordinances (authority and jurisdiction ruled by ORH, OrgLRH, OClin, OClin-Dim).
- Swissmedic ensures compliance with the LPTh and its ordinances (authority and jurisdiction ruled by ODim and OClin-Dim).
- Certification bodies on data protection and the PFPDT ensure compliance with the nLPD and its ordinances (authority and jurisdiction ruled by the Ordinance on Data Protection Certification, OCPD).
Conclusion: consequences for AI system ethics assessments
A dense legal framework sets limits on the development of medical AI systems, because these systems lie at the interface of many ethical stakes, from the privacy of personal data, self-determination and respect for research participants to accountability in product commercialisation. Ethical assessments aim to review the aspects outlined above, ensuring that your AI project is ethically sound, down to the clear delineation of accountability, and aligned with the relevant legislation. Ultimately, assessors like me help foster innovation while upholding human rights.
Abbreviations and links
nLPD: Nouvelle loi fédérale sur la protection des données (new Federal Act on Data Protection) https://www.fedlex.admin.ch/eli/cc/2022/491/en (English) / https://www.fedlex.admin.ch/eli/cc/2022/491/fr (French)
OPDo: Ordonnance sur la protection des données (Ordinance on Data Protection) https://www.fedlex.admin.ch/eli/cc/2022/568/en (English) / https://www.fedlex.admin.ch/eli/cc/2022/568/fr (French)
OCPD: Ordonnance sur les certifications en matière de protection des données (Ordinance on Data Protection Certification) https://www.fedlex.admin.ch/eli/cc/2022/569/en (English) / https://www.fedlex.admin.ch/eli/cc/2022/569/fr (French)
LPTh: Loi fédérale sur les produits thérapeutiques (Therapeutic Products Act) https://www.fedlex.admin.ch/eli/cc/2001/422/en (English) / https://www.fedlex.admin.ch/eli/cc/2001/422/fr (French)
ODim: Ordonnance sur les dispositifs médicaux (Ordinance on medical devices) https://www.fedlex.admin.ch/eli/cc/2020/552/en (English) / https://www.fedlex.admin.ch/eli/cc/2020/552/fr (French)
ODiv: Ordonnance sur les dispositifs médicaux de diagnostic in vitro (Ordinance on In Vitro Diagnostic Medical Devices) https://www.fedlex.admin.ch/eli/cc/2022/291/en (English) / https://www.fedlex.admin.ch/eli/cc/2022/291/fr (French)
OClin-Dim: Ordonnance sur les essais cliniques de dispositifs médicaux (Ordinance on clinical trials of medical devices) https://www.fedlex.admin.ch/eli/oc/2024/323/fr (French)
PFPDT: Préposé fédéral à la protection des données et à la transparence (Federal data protection and information commissioner) https://www.edoeb.admin.ch/en (English) / https://www.edoeb.admin.ch/fr (French)
LRFP: Loi fédérale sur la responsabilité du fait des produits https://www.fedlex.admin.ch/eli/cc/1993/3122_3122_3122/fr (French)
EU AI Act: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng (English)
GDPR: General Data Protection Regulation https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng (English)
LRH: Loi relative à la recherche sur l’être humain (Federal act on research involving human beings): https://www.fedlex.admin.ch/eli/cc/2013/617/en (English) / https://www.fedlex.admin.ch/eli/cc/2013/617/fr (French)
ORH: Ordonnance relative à la recherche sur l’être humain à l’exception des essais cliniques (Ordinance on research involving human beings except clinical trials) https://www.fedlex.admin.ch/eli/oc/2024/321/fr (French)
OClin: Ordonnance sur les essais cliniques hors essais cliniques de dispositifs médicaux (Ordinance on clinical trials, excluding clinical trials of medical devices) https://www.fedlex.admin.ch/eli/oc/2024/322/fr (French)
Oviedo Convention: https://rm.coe.int/168007cf98, also available on the Swiss portal of federal law: https://www.fedlex.admin.ch/eli/cc/2008/718/fr
Helsinki Declaration: https://www.wma.net/policies-post/wma-declaration-of-helsinki/
Universal Declaration of Human Rights: https://www.un.org/en/about-us/universal-declaration-of-human-rights
Swissmedic: https://www.swissmedic.ch/swissmedic/en/home.html
LIPAD (Geneva): Loi sur l’information du public, l’accès aux documents et la protection des données personnelles https://silgeneve.ch/legis/data/RSG/rsg_a2_08.htm
LPrD (Vaud): Loi sur la protection des données personnelles https://prestations.vd.ch/pub/blv-publication/actes/consolide/172.65?key=1543934892528&id=cf9df545-13f7-4106-a95b-9b3ab8fa8b01
LPrD (Fribourg): Loi sur la protection des données https://bdlf.fr.ch/app/fr/texts_of_law/17.1
LIPDA (Valais): Loi sur l'information du public, la protection des données et l'archivage https://lex.vs.ch/app/fr/texts_of_law/170.2
LCPD (Bern): Loi sur la protection des données https://www.belex.sites.be.ch/app/fr/texts_of_law/152.04
Aspects juridiques de l'intelligence artificielle (2024, Alexandre Richa and Damiano Canapa; Stämpfli Verlag AG)
Swissethics: https://swissethics.ch/
Banner created with ChatGPT, text 100% written by me (a human!)
