
The Swiss ethical and legal framework on AI in clinical settings (7-8 min read).
As announced in a recent post, welcome to this series of articles on ethics and regulation of medical AI in Switzerland!
In this inaugural post, we will explore how the use of medical AI is regulated. By “use” we mean when AI is implemented directly for patient care. As a reminder, Switzerland decided not to enact a specific law for AI, thus preserving the competitiveness of private innovation. Instead, the Federal Council considered that existing laws were sufficient, with some possible amendments, and encouraged the development of sectoral recommendations. It’s no surprise, then, that the tentacular Swiss regulatory framework for biomedical AI may give headaches. In this post, the new Federal act on data protection (nLPD)* and its ordinances (OPDo, OCPD), along with the legal guidance from the Swiss medical association (FMH), will take center stage. The European Union (EU) AI Act will only be mentioned in passing, since today is mostly about domestic matters between practitioners and patients.
Disclaimer: this overview is an outline intended to provide clarity and general directions. For more detailed guidance adapted to the specificities of your own professional practice, consult ethicists or lawyers with specific expertise in medical technology.
NB: For the sake of simplicity, we will mostly focus on AI systems used for direct patient management (e.g. for diagnostics or treatment), which are considered medical devices, deliberately putting aside systems for administrative management, triage, accounting, etc.
*Find abbreviations and links at the end of the post.
Health data is sensitive data: organisational obligations.
The nLPD distinguishes between (generic) personal data and sensitive personal data. Personal data is any information concerning a person that makes it possible to identify them, or contributes to this identification, and is, in principle, always worthy of protection. Additional requirements are made explicit for the processing of so-called "sensitive" personal data.
Under the nLPD, health data and genetic data automatically fall into the category of sensitive personal data (nLPD Art. 5, al. c), as does biometric data when it enables the unequivocal identification of the patient. This classification creates a ripple effect throughout the entire deployment process. The nLPD therefore makes it mandatory for healthcare institutions and health professionals to:
- conduct a careful impact assessment that evaluates the implications of AI-based decision-making for patient rights (nLPD Art 22, al. a);
- maintain detailed records of all AI processing activities (nLPD Art 12), with patients retaining the right to be informed about the collection and processing of their data; it is important to note that the revision of the law emphasizes the monitoring of data processing, not only the data themselves (a minimal sketch of such a record follows this list);
- report any residual risks or any security incident to the Federal data protection and information commissioner (PFPDT, nLPD Art 23, Art 24 and OPDo Art 15), whose recommendations can halt implementation entirely. Patients may have to be informed either for their own protection or upon request by the PFPDT, unless the provision of this information is impossible or requires disproportionate effort.
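To make the record-keeping obligation concrete, here is a minimal sketch, in Python, of what a single entry in a record of AI processing activities could look like. The schema and field names are illustrative assumptions of mine, not a legally prescribed format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of one entry in a record of processing activities
# (nLPD Art 12). The schema is an illustrative assumption, not a legal format.
@dataclass
class AIProcessingRecord:
    system_name: str        # the certified AI device or software used
    purpose: str            # the specific purpose, identifiable by the patient
    data_categories: list   # e.g. ["MRI scans", "clinical history"]
    legal_basis: str        # e.g. "consent"
    recipients: list = field(default_factory=list)  # external providers, if any
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example entry, logged each time the AI system processes patient data.
entry = AIProcessingRecord(
    system_name="MRI-triage assistant",  # hypothetical device name
    purpose="support of radiological diagnosis",
    data_categories=["MRI scans", "clinical history"],
    legal_basis="consent",
)
print(entry)
```

The point is less the exact format than the habit: one entry per processing activity, capturing what was processed, why, on what legal basis, and when.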
Personal data can only be collected with the patient’s consent (nLPD Art 6, al. 7), for a specific purpose that is clearly identifiable for the patient (nLPD Art 6, al. 3). If personal data are required, then the use of AI must be justified (nLPD Art 31) or the data must be anonymized.
I see you frowning, grunting and complaining. Yes I do! But, as we will see, some of these obligations are more intimidating in theory than in practice.
Patient-focused obligations: transparency and information.
The watchword is patient self-determination. It is a fundamental human right that grants every patient the right to dispose of their own body, and to accept or refuse healthcare offered by professionals. The patient’s ability to exercise their freedom of choice is therefore strongly linked to the principles of information and consent.
In theory (you see me coming here…), healthcare providers must clearly inform patients when their data is collected (nLPD Art 6 and 19), and consent is required. The patient may claim their collected data in a portable format (nLPD Art 28), and if this is health data they may consent to having it communicated by a health professional (nLPD Art 25, al. 3, the only instance where health data is explicitly mentioned in the nLPD). There are exceptions to the duty to inform, such as when the provision of information would defeat the purpose of the data processing (nLPD Art 20).
When AI systems are used for automated decision-making that affects patient treatment and that has a legal consequence for, or a considerable adverse effect on, the patient, the practitioner must inform the patient about this process in a clear manner (nLPD Art 21, al. 1). Patients retain the right to request human review of these decisions (nLPD Art 21, al. 2). Healthcare providers must therefore establish clear procedures for patients to exercise this right and ensure that qualified professionals are available to provide oversight; a minimal sketch of such a procedure follows.
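Purely for illustration (the names and workflow below are my assumptions, not a prescribed procedure), tracking a patient’s request for human review could be as simple as this:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of handling a patient's request for human review of an
# automated decision (nLPD Art 21). Names and workflow are assumptions.
@dataclass
class AutomatedDecision:
    patient_id: str
    output: str             # e.g. "treatment plan A recommended"
    patient_informed: bool  # Art 21 al. 1: clear information was given
    review_requested: bool = False
    reviewed_by: Optional[str] = None  # qualified professional providing oversight

def request_human_review(decision: AutomatedDecision, reviewer: str) -> None:
    """Record the patient's request and assign a qualified reviewer (Art 21 al. 2)."""
    decision.review_requested = True
    decision.reviewed_by = reviewer

decision = AutomatedDecision("patient-042", "treatment plan A recommended",
                             patient_informed=True)
request_human_review(decision, reviewer="Dr. Example")
print(decision)
```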
Should a healthcare institution or professional call on external AI providers or consultants, clear contractual arrangements become mandatory, and great care must be taken to avoid violation of medical confidentiality (nLPD Art 9, and OPDo Art 7).
No panic! The Swiss medical association (FMH) has clarified things further.
Still frowning? Sure you are. But no need to get upset, I have good news (again)! Remember when we mentioned that the Swiss AI regulatory maze was a blend of existing guidelines and sectoral recommendations? Well, that is indeed where the Swiss medical association (FMH) comes to the rescue to clarify the situation in the specific medical context, backed by nLPD Art. 11, which grants professional, industry and trade associations the right to issue their own codes of conduct. The FMH has created an online practical guide about medical regulation, which includes the use of AI.
Infrastructure: relaxed duties.
First, if the AI-based tool you use is a medical device, it must have been certified and approved as such by Swissmedic (hopefully it is the case!). You can then assume that the preliminary impact assessment has been performed by the developers and double-checked by the certification body, as defined by nLPD Art 13, LPTh Art 46-47 and its ordinance on medical devices (ODim, pretty much in its entirety). Hooray!
In a similar vein, the medical community is no stranger to the documentation and logging of professional acts: it is called the patient's medical record, made mandatory by Art 12 of the code of deontology of the Swiss Medical Association (FMH).
Information: simplified requirements.
With respect to patient information, although Art 10 of the FMH code of deontology grants patients the right to receive comprehensive information from their doctor, the FMH redefines its contours, and those of the related consent, in the case of AI. Just like for any medical procedure, there are limits set on patient information to prevent patients from being overwhelmed with details on technology and risks, since this would ultimately be a hurdle to self-determination (Chapters 3.2 and 3.3).
The FMH clarifies that there are two contexts where patient information (and the related consent) about AI is required (Chapter 3.2):
- when using AI presents a significant risk for the patient,
- when the information plays a role in the patient’s decisions regarding their treatments.
Only typical risks should be communicated, with no need to mention unpredictable ones (Chapter 3.2). So, if your automated AI system is known to misdiagnose early axial spondyloarthritis as sacroiliac joint dysfunction in 10% of analysed MRI scans, and if this error rate is close to or higher than that of a human radiologist, the risk must be communicated, since the patient might prefer human analysis. However, information about potentially missing exceptionally rare infectious or cancerous conditions is not mandatory.
Finally, you theoretically do not have to justify the collection of personal data as long as the collection and its objectives are transparent and visible to the patient (Chapter 7.2). If the collection of personal data is less visible, the patient must be informed and has the right to access all their collected data (Chapter 3.2).
If the collection of personal data is required for treatment, the use of AI must be justified or the data must be anonymized (Chapter 6.7, and nLPD Art 31). In the absence of explicit consent, the patient also has the right to be informed about the use of automated medical decisions that rest on AI only (Chapter 6.7, and nLPD Art 21). The FMH recommends a mixed oral and written approach for patient information, for example using brochures about data processing in the waiting room before the consultation. A minimal sketch of what stripping identifiers can look like follows.
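For illustration only, here is a minimal sketch of removing direct identifiers from a record before it reaches an AI tool. One caveat: keyed hashing, as shown, is pseudonymization; true anonymization requires that re-identification be impossible altogether, which may demand stronger measures. All field names are my assumptions.

```python
import hashlib
import secrets

# Hypothetical sketch: removing direct identifiers before AI processing.
# Keyed hashing is pseudonymization, not full anonymization; the salt must be
# stored separately from the dataset (or destroyed, for stronger guarantees).
SECRET_SALT = secrets.token_hex(16)

DIRECT_IDENTIFIERS = {"name", "address", "phone", "insurance_number"}

def strip_identifiers(record: dict) -> dict:
    """Drop direct identifiers and replace the patient ID with a keyed hash."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    digest = hashlib.sha256((SECRET_SALT + record["patient_id"]).encode())
    cleaned["patient_id"] = digest.hexdigest()[:16]
    return cleaned

print(strip_identifiers({
    "patient_id": "P-042",
    "name": "Jane Doe",                       # removed
    "mri_finding": "suspected sacroiliitis",  # kept for the AI tool
}))
```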
Accountability: human oversight.
You use AI in patient care and a medical error occurs. Who is legally responsible? Swiss law maintains a clear principle: artificial intelligence systems cannot bear legal liability, which always traces back to human decision-makers (Chapter 6.7). Unless there is a clearly identifiable technical malfunction in the AI system, the designers of the device or software will not be legally liable. This means that every AI deployment must have clearly defined human accountability structures on your side as a healthcare provider.

Healthcare professionals remain fully accountable for clinical decisions made using AI assistance, particularly when these decisions fall within their domain of professional competence. They are responsible for the decision to use AI or not, for using the AI system according to its user instructions, and for ensuring that the AI output is reviewed by a human. Importantly, liability for AI-based clinical decisions extends to the practitioner’s legal accountability for the exactness of the data (Chapter 7.2, and nLPD Art 6, al. 5). So, imagine that you use an AI system that integrates not only clinical information (tacitly assumed to be exact) but also other types of sensitive data (e.g. religious, political) and demographic data (e.g. ZIP code, education, income, marital status) to predict compliance with a specific treatment. You are fully responsible for ensuring that these data are exact and up to date in your system, to prevent bias and the risk of misclassification. A minimal sketch of such a check follows.
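As an illustration of what checking data exactness could mean in practice, here is a minimal sketch that flags missing or stale demographic fields before a prediction is run. The field names and the one-year staleness threshold are arbitrary assumptions, not requirements from the nLPD or the FMH.

```python
from datetime import date

# Hypothetical pre-inference check on data exactness (nLPD Art 6, al. 5).
# Threshold and field names are illustrative assumptions only.
MAX_AGE_DAYS = 365
CHECKED_FIELDS = ["zip_code", "education", "income", "marital_status"]

def check_before_inference(record: dict) -> list:
    """Return warnings; an empty list means the record looks usable."""
    warnings = []
    for name in CHECKED_FIELDS:
        entry = record.get(name)
        if entry is None:
            warnings.append(f"missing field: {name}")
            continue
        age_days = (date.today() - entry["last_confirmed"]).days
        if age_days > MAX_AGE_DAYS:
            warnings.append(f"stale field: {name} (confirmed {age_days} days ago)")
    return warnings

record = {"zip_code": {"value": "1200", "last_confirmed": date(2023, 1, 15)}}
for warning in check_before_inference(record):
    print(warning)  # prompt the practitioner to update data before prediction
```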
If an incident affects the security of the patient's data, the physician bears the burden of proof: it is their responsibility to demonstrate that they properly informed the patient and obtained their consent prior to treatment. It is no surprise that written information has progressively become commonplace.
If a medical practice wishes to receive help in implementing data protection requirements, it may call upon an internal or external data protection officer. However, such advice is optional for private medical practices and does not constitute a legal obligation (Chapter 7.2).
Conclusion
For you as a medical practitioner, the Swiss rules on medical AI may look overwhelming, but in practice they mostly reinforce what you already do: protect patient data, inform and involve patients in decisions, and remain accountable for clinical choices. The nLPD provides the legal backbone, while the FMH translates it into concrete guidance that fits clinical routines. In the end, AI-based devices are just tools like many others you already use, certified and approved by Swissmedic. Their safe and ethical use depends on your judgment, documentation, and ability to keep patients’ trust at the center of care. Only a handful of specificities legally distinguish them from the usual medical devices, until a proper Swiss AI act comes (perhaps) into existence.
In future articles, we will cover the (widespread?) use of LLMs such as ChatGPT in medicine, and the regulation of AI design. Stay tuned!
Abbreviations and links
nLPD: Nouvelle loi sur la protection des données (New act on data protection)
https://www.fedlex.admin.ch/eli/cc/2022/491/en (English) / https://www.fedlex.admin.ch/eli/cc/2022/491/fr (French)
OPDo: Ordonnance sur la protection des données (Ordinance on data protection)
https://www.fedlex.admin.ch/eli/cc/2022/568/en (English) / https://www.fedlex.admin.ch/eli/cc/2022/568/fr (French)
OCPD: Ordonnance sur les certifications en matière de protection des données (Ordinance on data protection certification)
https://www.fedlex.admin.ch/eli/cc/2022/569/en (English) / https://www.fedlex.admin.ch/eli/cc/2022/569/fr (French)
LPTh: Loi sur les produits thérapeutiques (Act on therapeutic products)
https://www.fedlex.admin.ch/eli/cc/2001/422/en (English) / https://www.fedlex.admin.ch/eli/cc/2001/422/fr (French)
ODim: Ordonnance sur les dispositifs médicaux (Ordinance on medical devices)
https://www.fedlex.admin.ch/eli/cc/2020/552/en (English) / https://www.fedlex.admin.ch/eli/cc/2020/552/fr (French)
PFPDT: Préposé fédéral à la protection des données et à la transparence (Federal data protection and information commissioner)
https://www.edoeb.admin.ch/en (English) / https://www.edoeb.admin.ch/fr (French)
EU AI Act: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng (English)
Swissmedic: https://www.swissmedic.ch/swissmedic/en/home.html
Bases juridiques pour le quotidien médical : https://leitfaden.samw.fmh.ch/fr/guide-pratique-bases-juridique/tables-des-matieres-guide-jur.cfm# (French)
Code of deontology: https://www.fmh.ch/files/pdf30/standesordnung---fr---2024-04.pdf (French)
Pictures created with ChatGPT, text 100% written without.