Cybersecurity and AI Act: Compliance for hospitals and public institutions
Czech hospitals and public healthcare institutions face a complex regulatory landscape. Obligations arising from NIS2, the AI Act and the GDPR overlap and may result in significant fines. To minimize the risk of sanctions and operational failures, it is essential to understand the systemic compliance requirements under Czech legislation. This article explains in detail the key obligations and risks for the healthcare sector in the Czech Republic.

Table of contents
- The NIS2 Directive and healthcare providers: A new legal standard
- Practical impacts of NIS2 on hospital operations
- What the AI Act is and why it applies to healthcare
- Anonymisation vs re-identification: The real-world problem
- Regulatory conflict in practice: A case study
- Practical compliance steps: What a hospital should do immediately
- Most common mistakes and how to avoid them
- Public institutions: Compliance specifics in community healthcare
Healthcare providers in the Czech Republic are facing an unprecedentedly complex regulatory environment. NIS2, the new AI Act, the GDPR and relevant Czech legislation create layered obligations that overlap, complement each other and sometimes even conflict.
Hospitals that want to minimise the risk of fines, sanctions and operational failures must understand how these regulations function as a system. It is not enough to treat them as separate sets of rules; they form an interconnected network of requirements. This article explains the real compliance demands and shows where the most common failures lie.
The NIS2 Directive and healthcare providers: A new legal standard
The NIS2 Directive (Directive (EU) 2022/2555 on measures for a high common level of cybersecurity across the Union) is a European legal framework focused on the cybersecurity of critical infrastructure and key services. It is not a general, one-size-fits-all standard: its obligations are explicitly targeted at sectors that are “essential” to the functioning of society.
Healthcare providers are listed in NIS2 as a sector considered essential, which means the strictest requirements apply to them.
In the Czech Republic, the NIS2 Directive was implemented into national law through Act No. 264/2025 Coll., on cybersecurity (the “Cybersecurity Act”), which took effect on 1 November 2025.
The Cybersecurity Act applies to so-called essential entities and important entities. Hospitals most often fall into the category of essential entities in the critical sector of healthcare.
Healthcare providers must implement and maintain a comprehensive cyber risk management system. When setting up these processes (including governance and responsibilities in deploying tools), legal support for artificial intelligence (AI) implementation can also help. This system includes threat identification, impact assessment, implementation of security measures and ongoing monitoring. For the legal setup of security governance, supplier obligations and incident-response documentation, it can be useful to consult IT and Software Law, Cybersecurity. It is not merely a one-off investment in software, but a continuous risk-reduction process.
Hospitals must significantly change their approach to identity and access management (IAM). From November 2025, it is mandatory to implement the “least privilege” principle: every employee, application and system may have access only to the information and functionality strictly necessary for their work.
In practice, this means, for example, that a radiologist reading CT scans should not have access to the administrative invoicing module. Likewise, a surgical ward nurse should not have access to psychiatric records from another department without a clear justification. These principles are key to protecting sensitive data. A practical overview of how GDPR, AI Act requirements and licensing interact in digital health tooling is also discussed in Legal Checklist for SaaS Platforms in the EU: GDPR, AI Act and Licensing. For practical guidance on setting internal rules and contractual arrangements for digital services (including data handling), a comparison in the article SaaS platform in the EU: Legal treatment of Terms and Conditions, GDPR and licensing arrangements for AI outputs may be useful.
Implementation measures for IAM under the Cybersecurity Act No. 264/2025 Coll. include creating and documenting an IAM policy and implementing a centralised identity management system. The “least privilege” principle must be applied to all users, together with strong authentication, including multi-factor authentication (MFA) wherever possible.
Further measures include proper logging of all access attempts, including unsuccessful ones. Regular audits and reviews of assigned access are also important to ensure ongoing relevance and security. This is not a theoretical exercise, but a practical necessity. Where IAM is supported by automated decision-making or monitoring, the compliance design should also reflect the Implementation of Artificial Intelligence (AI) requirements relevant to healthcare operations.
If a hospital finds that a technical manager permanently retains rights to a system they no longer use, or that healthcare staff can open anything in any patient file without further restriction, it is directly breaching the requirements of the Cybersecurity Act.
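The combination of least privilege, deny-by-default and logging of every access attempt (including failed ones) can be illustrated with a minimal sketch. The role names, permission strings and user IDs below are purely hypothetical, not taken from any real hospital system:

```python
from collections import defaultdict
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("iam-audit")

# Hypothetical role-to-permission map: each role gets ONLY what it needs.
ROLE_PERMISSIONS = {
    "radiologist": {"pacs:read", "ris:write"},
    "ward_nurse": {"ehr:read:own_ward", "medication:record"},
    "it_admin": {"iam:manage"},
}

def is_allowed(user_id: str, role: str, permission: str) -> bool:
    """Deny by default; log every attempt (including failures) for audit."""
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    log.info("access_attempt user=%s role=%s perm=%s allowed=%s",
             user_id, role, permission, allowed)
    return allowed

# A radiologist may read imaging data, but not the invoicing module:
assert is_allowed("u123", "radiologist", "pacs:read") is True
assert is_allowed("u123", "radiologist", "billing:read") is False
```

The point of the sketch is the default: an unknown role or permission yields `False`, and the audit trail required by the Cybersecurity Act is produced as a side effect of every check, not as a separate afterthought.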
In the event of incidents, rapid reporting is key. In practice, it is worth addressing obligations under healthcare regulations at the same time, which is covered by services in the area of healthcare law. If there is a patient data breach, an attack on an information system, data unavailability (e.g., ransomware) or another cybersecurity incident, the hospital must report the incident without undue delay to the National Cyber and Information Security Agency (NÚKIB).
NIS2 works with a staged regime: an early warning within 24 hours, followed by an incident notification no later than 72 hours from the moment the hospital became aware of the incident. Hospitals should also map which authority will be competent depending on the affected product or service, as summarised in Which Czech Authority Regulates Your Health and Personal Care Product in 2026? Under specific regulations, it may also be necessary to inform other relevant supervisory authorities. Meeting these deadlines is essential to minimise impacts and comply with regulatory requirements.
A breach of, or failure to comply with, these obligations may result in fines of up to CZK 250 million or 2% of the organisation’s worldwide turnover, whichever is higher. For a mid-sized hospital, this represents a real financial impact that may threaten operations and the continuity of healthcare provision.
Practical impacts of NIS2 on hospital operations
Understanding the rules on paper is one thing; understanding what they mean in a hospital’s day-to-day operations is another.
Healthcare providers have high turnover—seasonal staff, maternity leave, retirements. Each new worker must be granted access only to the systems they truly need, and only for the period during which they work. The HR impacts (setting roles, responsibilities and documentation for onboarding and offboarding) are also well illustrated by the text Partner Jakub Oliva for Hospodářské noviny: mistakes in HR, the biggest risks arise with key people and internal processes. NIS2 requires that every change be documented and regularly audited. If this is not done, people who no longer work at the hospital retain access.
Hospitals often use external IT providers, suppliers of diagnostic systems, or cloud services for data archiving. The NIS2 Directive requires a hospital to contractually agree cybersecurity rules with each such provider.
At the same time, the hospital must verify compliance with these rules, for example through a technical audit or inspection. If an external provider does not have adequate security and a data breach occurs, the hospital bears primary responsibility, even if the data was physically stored by someone else.
Modern medical devices, such as monitors, ventilators, infusion pumps, or diagnostic machines, are connected to the hospital network. NIS2 requires them to be part of the healthcare provider’s comprehensive cybersecurity strategy.
Older devices that lack modern security features and cannot be updated represent a significant risk. Such risks must be managed proactively, for example by isolating these devices from the rest of the network to minimize potential threats.
All of these situations require managerial and technical solutions that an ordinary hospital may not be able to handle without expert assistance. The attorneys at ARROWS, a Prague-based law firm, focus on compliance audits, preparation of documentation, and support in dealings with supervisory authorities in these situations.
EU Artificial Intelligence Regulation (AI Act) and healthcare institutions: A new layer of regulation for algorithms
What is the AI Act and why it applies to healthcare
Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (the EU Artificial Intelligence Regulation), which entered into force on 1 August 2024, is the world’s first comprehensive legislative regulation of artificial intelligence. It confronts questions older frameworks left open: what if an algorithm fails or misleads? Who is responsible? What if an AI system learns something undesirable?
The AI Act divides AI systems into four categories according to the level of risk they pose. These categories are unacceptable risk, high risk, limited risk, and minimal risk. Each category entails different regulatory obligations and requirements.
Systems with unacceptable risk are completely prohibited, such as social scoring, behavioural manipulation, or emotion recognition in the workplace without a medical reason. These systems are considered too dangerous for society.
High-risk AI systems are subject to strict requirements, especially in medicine. These typically include diagnostic tools, clinical decision support, and systems affecting access to services, which may have a fundamental impact on health and safety.
Systems with limited risk have transparency obligations; for example, chatbots must disclose that they are AI. Minimal-risk AI systems, such as spam filters, have no special obligations and are considered safe.
In hospitals and healthcare institutions, the most common category is high-risk AI systems, which require special attention and compliance with strict regulatory standards. These systems are critical for diagnosis and treatment.
The rules for high-risk AI systems will become fully effective and applicable from 2 August 2026. From that date, all affected institutions must ensure full compliance with the AI Act.
Examples of high-risk AI systems in hospitals:
- AI for cancer diagnosis from mammograms, X-ray images, CT data
- AI for detecting diabetic retinopathy or other eye diseases
- AI for clinical decision support (clinical decision support systems – CDSS)
- AI for triage of patients in emergency cases
- AI for patient prognosis, prediction of disease progression
- AI for allocation of healthcare resources
Specific requirements for high-risk AI in medicine
1. Risk management system
The hospital must identify and analyse all foreseeable risks that the AI system may pose to patients’ health, safety, or fundamental rights. This includes questions such as what happens if the algorithm fails or makes an error, and which patient groups are more at risk.
It is also necessary to consider what happens if the algorithm learns something undesirable, or how bias in the data will manifest itself. This analysis must be carried out throughout the system’s entire lifecycle—not only during implementation, but continuously as the system is used and new data is collected.
2. High-quality data governance
AI systems learn from data. If the training data is biased in any way, the AI system will inherit that bias and reflect it in its outputs. A typical example is a cardiovascular risk algorithm trained primarily on Caucasian patients.
A system trained in this way is much less accurate for patients of African or Asian origin, which can lead to incorrect diagnoses or inadequate treatment. Hospitals must therefore ensure that the data is representative.
The hospital must ensure that training data is representative of the target population. The system must be tested on different patient groups and it must be reported transparently where the AI system performs worse or better.
It is also important to document where the data comes from and how it was cleaned and processed. Data governance also includes identifying and addressing known bias in the data to prevent discrimination and diagnostic errors.
If a hospital deploys an algorithm trained only on data from men and then uses it for women with substantially worse accuracy, it has breached the AI Act rules. This can have serious consequences for the care of female patients.
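Bias testing of this kind does not require exotic tooling; at its core it is a per-group accuracy comparison against a documented threshold. The following sketch is illustrative only — the group labels, sample data and the 10-percentage-point threshold are assumptions for the example, not values prescribed by the AI Act:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, y_true, y_pred) triples.
    Returns a dict mapping each group to its accuracy."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        correct[group] += int(y_true == y_pred)
    return {g: correct[g] / total[g] for g in total}

def max_accuracy_gap(per_group):
    """Largest accuracy difference between any two groups."""
    vals = list(per_group.values())
    return max(vals) - min(vals)

# Hypothetical evaluation results: (group, ground truth, model prediction).
records = [
    ("male", 1, 1), ("male", 0, 0), ("male", 1, 1), ("male", 0, 0),
    ("female", 1, 0), ("female", 0, 0), ("female", 1, 1), ("female", 1, 0),
]
per_group = accuracy_by_group(records)   # male: 1.0, female: 0.5
gap = max_accuracy_gap(per_group)        # 0.5

# Flag for review if the gap exceeds a documented threshold (here: 10 p.p.).
needs_review = gap > 0.10
```

A documented run of a check like this, with the threshold justified in the risk management file, is exactly the kind of evidence a supervisory authority would expect to see.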
3. Technical documentation
Before deploying an AI system, complete technical documentation must be in place. It must include a detailed description of the system and its purpose, as well as a description of all inputs, processes, and outputs. Documentation is key to transparency and auditability.
The technical documentation must also include a risk management report, information on training and testing data, and testing results including error rates. A plan for monitoring the system after it is put into operation is also essential.
This documentation must be available in the relevant official language and must be made available to supervisory authorities upon request. Its absence or inadequacy may lead to serious sanctions.
4. Human oversight
The most important principle is that the algorithm must not decide on its own. A physician must always be able to understand how the algorithm works and what its limitations are. At the same time, they must be able to verify that the algorithm is functioning correctly.
The physician must also be able to ignore or override the algorithm’s output if they believe it is in the patient’s best interests. If the system is not functioning properly, the physician must be able to suspend it to prevent potential harm to the patient.
This means that AI for diagnostics is not a “second opinion you can ignore”. It is a system that is actively monitored and must remain under the physician’s control. Human oversight is essential to ensure safety and quality of care.
5. Transparency and informing patients
Patients on whom AI is used must be informed. There is no obligation to disclose all details of the algorithm. However, they should be aware that their diagnosis or treatment includes an element of artificial intelligence. This is key to transparency.
This requirement is not only about ethics; it has specific legal consequences. If a patient did not know that their diagnosis was performed by an algorithm and later finds out that the algorithm contained an error, they have a stronger legal position in any potential court dispute.
The intersection of the AI Act with the MDR and the GDPR
This is where things become truly complex. If your AI system is part of a medical device—for example, an AI module in diagnostic software—then three regulations apply to it simultaneously. These regulations overlap and complement each other.
| Regulation | What it covers | When it takes effect |
| --- | --- | --- |
| MDR (Regulation (EU) 2017/745 on medical devices) | Safety and performance of medical devices | Already in force |
| AI Act (Regulation (EU) 2024/1689 on artificial intelligence) | Algorithms, their risks, transparency, human oversight | Fully applicable from 2 August 2026 |
| GDPR (Regulation (EU) 2016/679 on the protection of personal data) | Protection of patient data | Already in force |
All three apply to a single system at the same time. The MDR addresses questions such as “Is the device safe and effective?”, the AI Act addresses “Is the algorithm non-manipulative, transparent, and under control?”, and the GDPR addresses “Is patient data protected?”. This complexity requires a coordinated approach.
Attorneys from ARROWS, a Prague-based law firm, navigate this three-way interplay of regulations and help hospitals understand which obligations apply to them, how the rules affect each other, and what exactly they must do.
GDPR, NIS2 and the AI Act: How they interact in relation to health data
Health data under pressure from three regulations
Data concerning health, genetic data and biometric data constitute special categories of personal data under the GDPR. Such data is subject to enhanced protection, which places increased demands on healthcare providers.
A healthcare provider is considered the “controller” of this data and is fully responsible for its security. Processing is only possible under strict conditions—for example, for healthcare purposes, with the patient’s explicit consent, or for reasons of substantial public interest in the area of public health.
In the event of a security breach, such as a data leak, the hospital must notify the Office for Personal Data Protection (ÚOOÚ) and the affected patients without undue delay. This is key to minimising impacts and meeting legal obligations.
GDPR breaches may lead to fines of up to EUR 20 million or 4% of total worldwide annual turnover, whichever is higher. These sanctions underline the seriousness of non-compliance with personal data protection rules.
NIS2 then comes into play and requires technical protection of this data. This means implementing appropriate encryption, access rights, logging and incident management. NIS2 complements the GDPR in the technical aspects of security.
The AI Act then adds that if this data is used to train an AI algorithm, the hospital must ensure that the algorithm is not biased, is transparent, and remains under human control. The AI Act therefore extends data protection into the sphere of artificial intelligence.
As a result, hospitals must view the data simultaneously as data that must be protected under the GDPR, technically secured under NIS2, and also safely anonymised for AI training without infringing patients’ rights under the AI Act and the GDPR.
Anonymisation vs. re-identification: The real-world problem
One of the critical points is anonymising data for AI training. In theory, it should be possible to take health data, remove patient identifiers (name, personal identification number, address), and then safely use it to train an algorithm without breaching the GDPR.
In practice, however, it is far more complicated. In December 2024, the European Data Protection Board (EDPB) issued Opinion 28/2024, stating that for many types of health data, re-identification is possible even where the data is considered “anonymous”.
If you have data from a patient with a rare disease, it may be possible to identify them solely from a combination of age, sex, district code and diagnosis. This shows how easily re-identification can occur even with seemingly anonymous data.
This means that a hospital that believes it is safely training AI on “anonymous” data may later find out that the data is not in fact anonymous and that it has breached the GDPR. Such situations present significant legal and reputational risk.
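A standard way to make such a re-identification risk measurable is a k-anonymity check: how small is the smallest group of records sharing the same quasi-identifiers (age band, sex, district, diagnosis)? The sketch below is a simplified illustration with invented records; real anonymisation assessments consider further attacks (l-diversity, linkage with external datasets) beyond this single metric:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest equivalence-class size over the quasi-identifier columns.
    k == 1 means at least one patient is unique and potentially
    re-identifiable even though direct identifiers were removed."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    return min(Counter(keys).values())

# Hypothetical de-identified dataset (no names, no personal numbers).
patients = [
    {"age_band": "60-69", "sex": "F", "district": "CZ0642", "dx": "C50"},
    {"age_band": "60-69", "sex": "F", "district": "CZ0642", "dx": "C50"},
    # A rare diagnosis in a small district is unique on its own:
    {"age_band": "30-39", "sex": "M", "district": "CZ0201", "dx": "rare"},
]

k = k_anonymity(patients, ["age_band", "sex", "district", "dx"])
# k == 1 here: the dataset is NOT safely anonymous despite lacking names.
```

The documented output of a test like this is the kind of “technical analysis and documentation” a hospital needs before treating a training dataset as anonymous.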
Related questions on protecting health data in AI projects:
- Can I use the same health data for clinical practice and for AI training?
  Yes, but each use needs its own legal basis. For clinical practice, this is the healthcare purpose; for AI training, it may be explicit consent or legitimate interest (provided all GDPR and AI Act conditions are met). If a patient has objected to the use of their data for AI training, you cannot use it without new consent.
- If I anonymise the data, can I then use it at any time?
  No. Anonymisation must be done properly and must be testable. Removing a name does not make the data truly anonymous; you must be able to demonstrate that re-identification is not reasonably possible. This requires technical analysis and documentation, without which you risk breaching the GDPR.
- What if an external AI provider trains AI on my data?
  Then the external provider is the “processor” and you are the “controller”. You must have a written data processing agreement with the processor ensuring GDPR compliance, and you remain responsible for data security. If a leak occurs, you are liable even if the data was physically held by the external provider.
Regulatory clash in practice: A case study
Situation:
Let us imagine a specific situation handled by the law firm’s attorneys: a university hospital wants to implement an AI system to detect cancer from mammography images. The system will be trained on data from 50,000 patients from the hospital over the last 10 years.
This system will be deployed as a support tool for radiologists and integrated into the existing medical image management system. The system will be leased from a third-party provider—specifically, an AI start-up.
Which regulations affect this project?
1. GDPR:
Health data (images, diagnoses, age, sex) will be used for training. This is processing of personal data and requires a legal basis. The hospital must have patients’ consent or a legitimate interest.
At the same time, it must ensure that the data is protected, including encryption, access rights and incident management. Any breach here can have serious consequences for patients’ privacy and for the hospital.
2. NIS2:
The data will be stored in the cloud with a third-party provider and accessed by a number of people within the hospital. IAM, logging and incident management must therefore be implemented. In the event of a data leak, the incident must be reported to the competent authorities.
3. AI Act:
The system is high-risk because it is a medical application. It must undergo a risk assessment, bias testing, and have human oversight in place. Complete technical documentation is also required.
Patients must be informed that AI is being used in their care. The system must also be monitored regularly to detect whether the algorithm is becoming less accurate for certain groups of female patients, which could amount to discrimination.
4. MDR:
If the system is part of a regulated medical device, it must also comply with the Medical Device Regulation (MDR). This includes clinical evaluation, post-market surveillance, and incident reporting.
5. Czech legislation:
In addition to European regulations, Czech legislation also applies in the Czech Republic. This includes the Act on Health Services and the Conditions for Their Provision, implementing decrees, and the hospital’s internal regulations on data protection and information security. All of these must be aligned.
What should a hospital do?
First, the hospital should have a Fundamental Rights Impact Assessment and a Data Protection Impact Assessment prepared by attorneys from a Prague-based law firm. These analyses are key to identifying risks.
It is also essential to prepare and approve an AI policy within the hospital that clearly defines how artificial intelligence will be handled in the given healthcare facility. This policy will ensure a consistent and secure approach to AI technologies.
It is also crucial to sign an agreement with the AI provider that includes a clear allocation of responsibilities, data security, and confidentiality. It must be ensured that healthcare data is properly protected.
This includes encryption of data both in transit and at rest, setting appropriate access rights, and backups. It is also necessary to implement human oversight, where radiologists receive training on how the system works, its limitations, and the procedure in the event of failure.
The hospital must test the system for bias and monitor how it performs across different age groups, skin tones and BMI ranges. It is also important to implement post-market monitoring.
Post-market monitoring means continuously checking whether the algorithm is still working correctly. This also includes informing patients, either generally in the waiting room or individually during the diagnostic process. An incident response plan must be prepared.
Such a project is complex, and without legal and technical support, a hospital may not achieve the correct outcome.
Attorneys from ARROWS advokátní kancelář assist hospitals with these projects—from early planning and compliance checks through to implementation and post-market monitoring.
Penalties for breaches: Real figures and impacts
Breaches of the NIS2 Directive, the AI Act, or the GDPR are not mere administrative offences. They are serious violations with real financial and operational impacts that can jeopardise the very functioning of a healthcare facility.
Penalties for breaches of NIS2 (Cybersecurity Act No. 264/2025 Coll.) can reach up to CZK 250 million or 2% of the organisation’s total worldwide annual turnover, whichever is higher. This level of fines represents a significant financial risk.
In addition to fines, members of statutory bodies may also face personal liability. This means that not only the organisations themselves, but also individual persons responsible for security, may be affected, increasing the pressure to comply.
Penalties for breaches of the AI Act (Regulation (EU) 2024/1689) vary depending on severity. For breaches of prohibited practices (Article 5), fines of up to EUR 35 million or 7% of total worldwide annual turnover may be imposed, whichever is higher.
For breaches of other obligations, such as those relating to risk management systems, data quality, technical documentation, human oversight, accuracy, robustness, and cybersecurity (Articles 10–50), fines of up to EUR 15 million or 3% of total worldwide annual turnover may be imposed.
Failure to provide correct information under the AI Act may lead to fines of up to EUR 7.5 million or 1.5% of total worldwide annual turnover, whichever is higher. These penalties reflect the importance of transparency and proper disclosure.
For breaches of the GDPR, such as violations of core data protection principles or obtaining insufficient consent, fines of up to EUR 20 million or 4% of total worldwide annual turnover may be imposed. These are among the highest sanctions under EU regulation.
For a hospital with annual turnover of CZK 100 million, this means fines in the tens of millions of Czech crowns for each individual breach. If three regulations are breached at the same time—which is not uncommon—the fines may accumulate, which could be existentially threatening.
However, financial impacts are only one side of the coin. Other consequences may be worse, such as service disruption, loss of trust, litigation, and blocked transactions, which can have a long-term negative impact on the hospital.
Service disruption: If a cybersecurity incident occurs due to poor security (a NIS2 compliance failure), clinical and operational systems may go down and the hospital will be unable to treat patients. Operating theatres will be left without their systems, and pharmaceutical stores will be inaccessible.
Loss of trust: Once it becomes known that the hospital had poor security or used an AI system without transparency, patients and doctors lose trust. This has a long-term impact on reputation.
Litigation: Patients affected by a GDPR or AI Act breach may sue for damages. If a patient was diagnosed by an AI system with observed bias and has worse treatment outcomes, they may sue for compensation.
Blocked transactions: If the hospital is part of a larger organisation or subject to investor oversight, compliance breaches in areas such as GDPR, NIS2, and the AI Act may block mergers, acquisitions, or financing.
Practical steps to compliance: What a hospital should do immediately
Compliance status audit
The first thing a hospital must do is determine where it stands. It should have a compliance audit carried out to assess key areas. This includes identifying all AI systems in use and their risk classification under the AI Act.
The audit should also verify whether the relevant documentation is maintained for these systems, such as risk assessments and technical documentation, and whether patient consent is ensured. Protection of healthcare data under the GDPR is also essential.
The level of cybersecurity needs to be assessed, including identity and access management (IAM), logging, and incident management. The audit will also verify whether AI systems are tested for bias.
It is also crucial to confirm that physicians have been trained so they know how to use AI systems properly. Without a thorough audit, the hospital is operating blindly and does not know where the real risks lie or how to address them effectively.
Governance and policy
The hospital should adopt a written AI policy, as unwritten internal rules and traditions are not sufficient. This policy should clearly define what AI means in the hospital context and which systems are high-risk for the hospital.
The AI policy must set out how new AI systems are approved before deployment, including designation of the responsible committee or approval by the director. It must also clearly define who is responsible for compliance, such as an AI officer, the legal department, or IT.
The policy should also specify how AI-related incidents are handled, such as an incorrect diagnosis or a data breach. It is also important to define how patients are informed about the use of AI.
Last but not least, the policy must describe how AI systems are monitored after deployment. Without such a policy, decision-making is chaotic and risks are not properly managed, which can have serious consequences for hospital operations.
Training and awareness
Physicians and healthcare staff who work with AI systems must know exactly what they are doing. They should not simply blindly trust the output of an AI system; they should understand how it works and its limitations. This is the foundation of safe use of AI.
They must also know in which situations the algorithm is not reliable, for example for certain patient age groups. It is essential that they understand they can override or ignore the AI output and how to report concerns if they believe the algorithm is not working properly.
This requires training that is not a one-off, but ongoing. If the algorithm is updated, staff must undergo training again so they are always fully informed about new features and potential risks.
Data governance and security
Healthcare data in AI projects must be protected under GDPR and NIS2. Key measures include encryption of data in transit and at rest, and implementation of access rights so that only those who need it have access to sensitive information.
Logging is also essential, recording who accessed the data, when, and what they did with it. It is also important to have effective incident management in case of a data breach and a data retention policy to define how long the data is stored.
At the same time, the data must be of sufficient quality for AI training. It must be representative of the target population, not only men or patients from wealthier areas. It must be tested for bias.
Testing will show how AI performs across different patient groups. The data must also be properly documented, including information on its origin, cleaning, and the assumptions used. Transparency in this area is key.
Monitoring and post-deployment audit
Once an AI system is deployed, the work is only beginning. The hospital must continuously monitor the algorithm’s performance and track whether it remains equally accurate over time. It is important to look for new bias.
It is also necessary to determine whether the algorithm behaves the same across all patient groups. Feedback from physicians and audits are also important to verify how the system is actually being used. If issues arise, they must be addressed through updates.
Without post-market monitoring, the hospital does not know whether the algorithm is still serving it properly and safely. Regular monitoring is therefore essential to maintain compliance with regulatory requirements and to ensure quality of care.
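Post-market monitoring can be reduced to a simple, auditable mechanism: track rolling accuracy over recent cases and raise an alert when it drops below the documented baseline by more than an agreed tolerance. The sketch below is a minimal illustration; the baseline, window size and tolerance are hypothetical values that a real deployment would set in its technical documentation:

```python
from collections import deque

class PerformanceMonitor:
    """Rolling accuracy monitor: alert when recent accuracy falls more
    than `tolerance` below the documented baseline."""

    def __init__(self, baseline: float, window: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct: bool) -> None:
        """Record whether the algorithm's output was confirmed correct."""
        self.outcomes.append(int(correct))

    def rolling_accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes)

    def alert(self) -> bool:
        """True when performance has degraded beyond the tolerance."""
        return self.rolling_accuracy() < self.baseline - self.tolerance

# Hypothetical scenario: documented baseline accuracy of 92%,
# but only 80 of the last 100 confirmed cases were correct.
mon = PerformanceMonitor(baseline=0.92, window=100, tolerance=0.05)
for _ in range(80):
    mon.record(True)
for _ in range(20):
    mon.record(False)
degraded = mon.alert()  # 0.80 < 0.87, so the drift alert fires
```

An alert like this does not decide anything by itself; it triggers the human review, root-cause analysis and possible suspension that the AI Act’s human-oversight requirement presupposes.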
Documentation and audit trail
Complete documentation must be available for all of the above steps. If an inspection or court proceedings occur, the hospital must be able to demonstrate its decisions and the measures taken. Documentation serves as evidence of compliance.
The hospital must be able to show what decision was made and why, what risks were assessed, and what measures were adopted. It is also important to record who made the decisions and when, and what audits were carried out and what their results were.
Without adequate documentation, the hospital cannot prove that it acted in accordance with the regulations. With thorough documentation, however, it has a chance to defend itself and demonstrate its compliance, which is crucial in today’s regulatory environment.
Attorneys from ARROWS advokátní kancelář work with hospitals on all of these steps. They assist with audits, preparation of policies, management training, drafting contracts with AI providers, compliance monitoring, and representation in regulatory inspections.
Most common mistakes and how to avoid them
Mistake 1: Ignoring the AI Act on the assumption that their systems are not "artificial intelligence"
Many hospitals mistakenly believe that the AI Act applies only to robotics or chatbots. In reality, it covers any algorithm used to support or make decisions with a potential impact on fundamental rights.
Hospitals typically overlook the AI Act for image-based diagnostic algorithms, clinical decision support tools, and older mortality prediction systems.
Under the AI Act, these systems are high-risk even if the hospital considers them merely "software" or "just a tool". As a result, the hospital may be in de facto breach without realising it.
The breach then comes to light only when an inspector finds that the system has no risk management, is undocumented, and patients are uninformed, with serious consequences for the operation and reputation of the healthcare facility.
How to avoid it: Have a thorough audit carried out to identify which systems are high-risk under the AI Act. Only on the basis of these findings can you effectively plan a compliance strategy and minimize risks.
Mistake 2: Believing that anonymization solves the GDPR issue
Many hospitals think: "We will provide training data without names and addresses, simply anonymise it, and then we can use the data to train AI without any problem." In practice, this assumption is often incorrect.
The reality is that anonymisation is very difficult, especially with healthcare data. As mentioned, the European Data Protection Board (EDPB) states in Opinion 28/2024 that such data can, in many cases, be re-identified.
Safely anonymising healthcare data requires expert technical capabilities and scientific rigour. Without this, a hospital risks believing it is GDPR-compliant when in fact it is not, which may lead to sanctions.
How to avoid it: Have an audit carried out by privacy engineering and GDPR experts who will assess whether your anonymisation measures are truly effective. If you are unsure, it is better to obtain patients’ explicit consent, which is a safer and more robust solution.
Mistake 3: Failure to implement human oversight (Human Oversight)
Under the AI Act, a high-risk AI system must be subject to human oversight. Doctors must be able to understand how the AI works and must be able to override it if they see it is not functioning properly. However, many hospitals ignore this.
Some hospitals believe human oversight is unnecessary, or that doctors will monitor the systems automatically. They also often lack the time or resources for the necessary training, which results in inadequate implementation of this key obligation.
The reality is that without human oversight, it is not possible to demonstrate compliance with the AI Act. And if something goes wrong, such as an incorrect diagnosis leading to patient harm, the hospital cannot defend itself in Czech courts by saying “the system did it on its own”.
How to avoid it: Implement human oversight as a mandatory and integral part of every AI project. This includes systematic training, clear procedures for monitoring and intervention, and regular audits—not merely an additional measure.
Mistake 4: Underestimating NIS2 requirements
The new Cybersecurity Act No. 264/2025 Coll. is still unfamiliar to many hospitals, which have not yet studied it properly. Many believe that legacy IT security systems will be sufficient, which is a fundamental mistake with potentially serious consequences.
The reality is that NIS2 requires a new approach, such as identity and access management (IAM) based on the “least privilege” principle. In addition, centralised logging, effective incident management, and comprehensive technical and organisational measures are required.
For a medium-sized to large hospital, this is not merely a question of updating a firewall. It involves a complete reorganisation of IT infrastructure and processes, requiring significant investment and time.
If a hospital does not implement these measures by 1 November 2025, it will be in breach and may face high fines that could jeopardise its operations. That is why timely action is absolutely critical.
How to avoid it: Arrange an immediate NIS2 compliance audit to obtain an overview of the current situation. Then prepare a detailed 12–18 month implementation plan. It is critical not to wait until the last minute, as the process is time-consuming.
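On the technical side, the "least privilege" principle that NIS2 requires can be sketched in a few lines: each role is granted only the permissions it needs, and anything not explicitly granted is denied by default. The roles and permission names below are purely illustrative assumptions.

```python
# Illustrative least-privilege access control: roles map to the minimum
# set of permissions they need; everything else is denied by default.
ROLE_PERMISSIONS = {
    "physician": {"read_record", "update_record"},
    "nurse": {"read_record"},
    "billing": {"read_invoice"},
}

def is_allowed(role: str, permission: str) -> bool:
    # Default deny: unknown roles and ungranted permissions are refused.
    return permission in ROLE_PERMISSIONS.get(role, set())
```

The point is the default-deny posture: a billing clerk cannot read clinical records unless that right has been deliberately granted and documented.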
Mistake 5: A weak contract with the AI provider
Hospitals often lease an AI system from a third-party provider. However, the contract is often too short and contains only basic information such as price, term and notice period, and nothing more about responsibilities.
The reality is that when something fails, the hospital assumes it will sue the provider. But if the contract does not cover liability for data, security, algorithm accuracy, or incident management, the hospital bears the entire burden itself in any potential dispute.
A proper contract must include key clauses. These include liability for data, including encryption and deletion, and accuracy and performance, which set a minimum level of functionality for the AI system.
Incident response and the right to audit are also essential, enabling the hospital to verify security and compliance. The contract must define warranty periods and intellectual property rights (IP rights) so it is clear who owns what.
How to avoid it: Have the contract with the AI provider prepared or reviewed by the lawyers at ARROWS advokátní kancelář, a Prague-based law firm. These experts understand both the specifics of artificial intelligence and healthcare law, and will ensure comprehensive protection of your interests.
Risk table: Practical issues and solutions
| Potential issues | How ARROWS can help (office@arws.cz) |
| --- | --- |
| The hospital does not know which AI systems it uses or whether they are high-risk. Without an audit, it is not possible to say what needs to be done. Risk: an inspection finds a breach. | ARROWS lawyers carry out a comprehensive audit of AI and data governance at the hospital. They identify all AI systems, assess their risk level under the AI Act, and prepare a list of compliance obligations. |
| Healthcare data is not sufficiently protected; risk of a data breach (NIS2, GDPR). Without proper encryption, identity and access management (IAM) and incident management, data is vulnerable. If a breach occurs, there is a risk of fines and loss of patient trust. | ARROWS assists with the preparation and review of data governance, including the security of data flows. It helps the hospital design IAM, encryption, logging and incident management in line with NIS2 and GDPR. |
| The AI system does not have sufficient human oversight; doctors do not know how the system works. If the AI makes a mistake and nobody detects it, the patient suffers harm and the hospital is liable. | ARROWS helps design human oversight mechanisms: training for doctors, monitoring procedures, rules for overriding AI output. It prepares documentation to demonstrate that human oversight has been implemented. |
| The contract with the AI provider is weak; the hospital cannot enforce data security or accuracy. If an incident occurs, the hospital cannot sue the provider because the contract does not cover it. | ARROWS lawyers prepare or review contracts with AI providers, including clauses on data liability, security, performance, incident management and the right to audit. They protect the hospital legally. |
| The hospital does not know how to communicate with patients about AI; without proper information, it breaches the AI Act transparency requirements. | ARROWS helps the hospital design patient information materials that meet the AI Act transparency requirements while remaining clear and understandable. |
| After deploying the AI system, the hospital does not carry out post-market monitoring and does not know whether the algorithm is still working correctly. Algorithms degrade over time or learn undesirable behaviour; without monitoring, the hospital is blind. | ARROWS lawyers help design and document the post-market monitoring process: how data is collected, which metrics are measured, and who decides on updates. They ensure ongoing compliance. |
Public institutions: Compliance specifics in community healthcare
Registration of AI systems
If a hospital is a public institution and operates a high-risk AI system, it must register it in the pan-European database of AI systems under the AI Act. This database is intended to become publicly accessible, at least in part.
The database will serve as an overview of which AI systems are used in Europe. Registration is not voluntary; it is a mandatory requirement for certain high-risk systems. If a hospital fails to comply, it breaches the AI Act.
Fundamental Rights Impact Assessment
Public healthcare institutions have a specific obligation: before deploying a high-risk AI system, they must carry out a Fundamental Rights Impact Assessment. This analysis is key to protecting patients’ rights.
This means that hospitals must formally analyse which groups of patients could be adversely affected by the AI system. It is also necessary to assess risks to fundamental rights, such as the right to health, non-discrimination, and privacy.
Measures adopted to minimise these risks must also be defined. This assessment must be submitted to the supervisory authority, ensuring transparency and accountability.
Governance and accountability
Public institutions have clearly defined chains of accountability, including the director, the board, and management. AI governance must therefore be an integral part of the institution’s management structure. This is crucial for effective oversight.
Related questions: Compliance of public healthcare institutions
- Who is responsible if an AI system fails and a patient suffers harm?
  Liability is not concentrated in a single person. It is a combination: the AI provider (did it sell the system without sufficient safety?), the hospital as the operator (was there insufficient human oversight?), and the physician (did they ignore a warning signal from the AI?). Litigation will examine each party's role. This is why documentation is so important, so it is clear who did what.
- How does AI governance differ in a private vs. a public hospital?
  Public hospitals have more bureaucracy and control mechanisms, but also greater accountability to the public. They must be transparent and accountable, which means more documentation and audits. Private hospitals may be more agile, but they also bear more responsibility for the risks themselves.
- Can an exception under the AI Act be made for the "public interest in health"?
  The AI Act includes certain exemptions for healthcare purposes, but these are not "rights" to break the rules. The exemptions relate more to interpretation: what qualifies as high-risk, or the conditions for data processing. Both regulatory frameworks are built on the premise that health is a public interest, which is why there is more protection, not less.
Conclusion
Healthcare institutions in the Czech Republic face an unprecedentedly complex regulatory landscape, consisting of at least three interconnected regimes: NIS2, the AI Act, and the GDPR. In addition, there are overlaps with national law and sector-specific rules such as the Medical Device Regulation (MDR).
This complex situation requires a proactive approach and a deep understanding of all regulatory requirements in order to minimise risks and ensure smooth and safe delivery of healthcare services in compliance with the law.
The NIS2 Directive and the Cybersecurity Act No. 264/2025 Coll., which entered into force on 1 November 2025, require healthcare institutions to implement rigorous cybersecurity. This includes identity and access management (IAM) based on the “least privilege” principle.
It is also necessary to introduce centralised logging, effective incident management, and regular audits. Breaches may result in fines of up to CZK 250 million and other legal consequences, underscoring the seriousness of compliance with these requirements.
The EU Artificial Intelligence Regulation (AI Act), which entered into force on 1 August 2024 and becomes fully applicable to high-risk systems from 2 August 2026, introduces a comprehensive framework for regulating artificial intelligence.
High-risk AI systems in medicine—such as those used for diagnostics, clinical decision support, or triage—must undergo risk management, bias testing, documentation, and have human oversight ensured. Breaches may result in fines of up to EUR 35 million.
The GDPR treats health data as a special category of personal data requiring enhanced protection, transparency, and control. Anonymisation is not a simple solution: the EDPB states in Opinion 28/2024 that such data can, in many cases, be re-identified, which raises the bar for the protective measures hospitals must apply.
In practice, this means hospitals must:
- Understand which of their AI systems are high-risk.
- Implement risk management and human oversight.
- Ensure that healthcare data is protected and secure.
- If they train AI on patient data, ensure the correct legal bases and security measures.
- Regularly monitor and audit compliance.
- Train staff and maintain documentation.
- Have an incident response plan in place.
- If they are public institutions, register their AI systems in the EU database.
Without this strategy—and without expert legal and technical support—a hospital is exposed to a high risk of fines, service disruptions, litigation, and loss of patient trust.
The lawyers and advisors at ARROWS advokátní kancelář have experience with all aspects of compliance. They understand overlapping regulatory regimes and will help the hospital identify its specific obligations, design strategies, and support implementation.
If an inspection or incident occurs, they will assist with defence and resolution. If you want to minimise risk and be confident that your healthcare institution is compliant with the law, contact ARROWS advokátní kancelář at office@arws.cz.
Frequently asked questions: Cybersecurity and the AI Act in hospitals
- What should I do if I do not have time to go through all of this and I have neither IT security specialists nor lawyers?
  First, have an audit carried out: an external firm will tell you where you stand and what risks you face. Then contact the attorneys at ARROWS advokátní kancelář (office@arws.cz), who will help you with the legal aspects and planning. Without an audit and without legal support, you will be working blind.
- If I have an AI system that has been running for 5 years without issues, do I have to update it to comply with the new AI Act?
  Yes, the AI Act applies to all high-risk AI systems used after 2 August 2026, regardless of how old they are. If the system meets the criteria for high-risk AI (which diagnostic systems typically do), it must comply with the AI Act. This means risk management, documentation, human oversight, and monitoring. The attorneys at ARROWS can help you assess what specifically needs to be done.
- How do I assess whether our AI system is high-risk?
  A system is high-risk if it is used to support or make decisions and has a significant impact on patients' health or fundamental rights. Typical examples: image-based diagnostics, clinical decision support, triage, prognosis, resource allocation. The simplest approach is to have it assessed; the attorneys at ARROWS are experienced in this.
- Do patients have to formally consent if AI is used on their data?
  It depends on the situation. If AI is used as a support tool directly in their treatment (e.g., an AI tool that helps diagnose their scan), then they must be informed (AI Act). If their data are used to train AI and it is not for their own treatment, then you need consent (GDPR) or a legitimate interest (subject to conditions). The attorneys at ARROWS can help you distinguish between scenarios and propose an appropriate approach.
- If we deploy AI without compliance and it is discovered, what penalty can we expect?
  It depends on the severity. A minor administrative AI with no major impact may result only in a warning and a request to remedy. Diagnostic AI without human oversight used on thousands of patients may bring sanctions in the millions of Czech crowns or euros, along with incident reporting, audits, and possible media exposure. The simplest solution is to address compliance from the outset; it is not expensive.
- How long does it take to design and implement compliance?
  It depends on the scope. A simple audit and report: several months. Full implementation of IAM under NIS2, an AI policy, and training: 12 to 18 months. The attorneys at ARROWS will help you with the legal aspects; the technical part is handled by your IT team or external IT providers. Compliance is not a one-off task but an ongoing process.
Do you have further questions? Contact the attorneys at ARROWS advokátní kancelář – office@arws.cz. We will be happy to help you understand your specific situation and propose a solution.
Notice: The information contained in this article is of a general informational nature only and is intended for basic orientation in the topic based on the legal framework as of 2026. Although we strive for maximum accuracy, legal regulations and their interpretation evolve over time. We are ARROWS advokátní kancelář, an entity registered with the Czech Bar Association (our supervisory authority), and for maximum client protection we maintain professional liability insurance with a limit of CZK 400,000,000. To verify the current wording of regulations and their application to your specific situation, it is necessary to contact ARROWS advokátní kancelář directly (office@arws.cz). We accept no liability for any damages arising from the independent use of the information in this article without prior individual legal consultation.
Read also:
- Integration of Satellite Intelligence and AI Under EU/NATO Oversight
- Research Sponsorship Agreements in 2026: Key Clauses and Czech Compliance Risks
- Medicinal Product Distribution Licensing and Compliance in Czechia
- Marketing Authorisation and Market Entry for Medicinal Products in Czechia
- Medicinal Product Registration in the Czech Republic: Process, Timeline and Risks