Compliance with AI Act and GDPR

A 2025 guide for medium-sized and large enterprises

18.3.2025

The use of Artificial Intelligence (AI) in companies brings huge opportunities, but also new legislative requirements. The European Union is responding to the development of AI with the Artificial Intelligence Regulation (AI Act), which complements the existing data protection framework provided by the GDPR (General Data Protection Regulation). For medium-sized and large businesses, ensuring compliance with both regulations is key - the stakes are high: heavy fines (the GDPR allows penalties of up to 4% of global annual turnover, the AI Act up to 7% of global annual turnover for the most serious infringements), reputational damage and other legal consequences. As practice confirms, non-compliance can cost a company not only money in fines, but also its reputation and market position. That is why AI Act and GDPR compliance should be a strategic priority for every organization - preventive measures are always cheaper than dealing with the consequences.

Author of the article: ARROWS law firm (JUDr. Jakub Dohnal, Ph.D., LL.M., office@arws.cz, +420 245 007 740)

Main requirements of the AI Act and GDPR

The AI Act establishes the world's first comprehensive legal framework for AI. A key principle is the risk-based classification of AI systems. The regulation divides AI applications into four categories: prohibited (unacceptable uses, e.g. manipulative techniques, social scoring, etc.), high risk (AI systems with a major impact on the safety or rights of individuals, e.g. in healthcare, transport, finance, education or recruitment), limited risk (e.g. generative AI such as ChatGPT - requiring certain transparency measures, such as labelling of AI-generated content) and minimal or no risk (common AI applications not specifically covered by the AI Act). High-risk AI systems will be subject to strict requirements - including certification and pre-market conformity assessment, maintenance of technical documentation, human oversight of their decision-making, measures for data quality and prevention of bias (discrimination), and system security. Providers of these systems must also expect registration in an EU database, ongoing monitoring and updating of the system, and liability for any damage caused. However, even for less risky AI, the AI Act imposes certain obligations - e.g. to ensure that users know they are interacting with AI (unless it is obvious) and to label AI-generated content (so-called AI transparency). Importantly, the AI Act affects not only AI developers and vendors, but also, to some extent, businesses as users - if a business uses an AI system subject to regulation, it must comply with the rules set out for its use.

The GDPR focuses on data protection and applies to all entities that process the personal data of EU citizens. For companies, this means in particular the obligation to process personal data lawfully and transparently - to have a valid legal basis for each processing operation (consent, contract, legitimate interest, etc.), to inform data subjects about the purpose and method of processing, and to respect their rights (e.g. the rights of access, rectification, erasure and objection). The GDPR also requires the implementation of organisational and technical measures to secure data and minimise risks (encryption, pseudonymisation, access restriction, etc.) and, in the event of a data breach, requires the incident to be reported to the authorities and often to the individuals concerned. Automated decision-making and profiling are key to AI - the GDPR (in Article 22) gives individuals the right not to be subject to a decision based solely on automated processing (including AI) if it has legal or similarly significant effects. In other words, if a company uses AI to make automated decisions affecting individuals (e.g. scoring clients, automated credit rejection or selection of job applicants), it must either obtain explicit consent or ensure that a human element enters the process, or at least give the individuals concerned the opportunity to challenge the decision and have it reviewed by a human. The GDPR and the AI Act intersect whenever AI systems process personal data - then both regulations need to be complied with simultaneously. For example, the AI Act emphasises respect for fundamental rights and explicitly requires AI systems to comply with the GDPR, meaning that data used for AI training or decision-making must be obtained and used within the confines of data protection law.

Practical steps towards compliance

So what is the specific course of action for a company to comply with both the AI Act and GDPR? A proactive and systematic approach is key. Below are practical steps and actions that medium and large businesses should consider to achieve and maintain compliance:

Setting up internal processes and responsibilities

The first step is to establish AI governance within the organization - clear rules and processes for developing or using AI systems. Management should identify who will be responsible for the AI compliance agenda (for example, expanding the scope of the existing data protection officer - DPO - to include AI, or establishing a new role for AI ethics and compliance). In addition, it is advisable to conduct an internal audit or inventory of all AI tools and projects that the company uses or develops and to assess their riskiness from both the AI Act and GDPR perspectives. This inventory will allow you to focus on the essential areas. Internal guidelines and policies also need to be updated - for example, adding rules for the use of generative AI (what employees can or cannot enter into tools like ChatGPT), defining the approval process for new AI solutions, and setting up procedures for reporting AI-related incidents. Last but not least, this includes regular training of employees - the AI Act introduces the requirement of "AI literacy", i.e. ensuring that employees handling AI have adequate knowledge of the rules and risks. The training should make employees aware of the implications of the AI Act and GDPR and the ethical principles of AI, and teach them to recognise and deal with risk situations.
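
For illustration, one entry of such an inventory might be captured in a simple structured form like the sketch below. The field names and categories are our own assumptions for demonstration, not a format prescribed by the AI Act or the GDPR.

```python
# Illustrative inventory record for one AI tool; the field names and categories
# are assumptions for demonstration, not a format prescribed by the AI Act or GDPR.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    name: str                      # e.g. "CV screening assistant"
    vendor_or_internal: str        # supplier name, or "internal" for in-house development
    business_purpose: str          # what the tool is used for
    processes_personal_data: bool  # triggers GDPR obligations if True
    ai_act_risk_category: str      # "prohibited" / "high" / "limited" / "minimal" (assessment result)
    compliance_owner: str          # person accountable for this tool's compliance
    last_reviewed: date            # when the assessment was last revisited

# One example entry in the company-wide inventory
inventory = [
    AIToolRecord(
        name="CV screening assistant",
        vendor_or_internal="ExampleVendor s.r.o.",
        business_purpose="pre-selection of job applicants",
        processes_personal_data=True,
        ai_act_risk_category="high",   # recruitment is among the high-risk use cases
        compliance_owner="HR director",
        last_reviewed=date(2025, 3, 1),
    )
]
```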

Identify AI systems subject to regulation

Not all AI tools in a company will fall into the high or limited risk categories under the AI Act. It is therefore necessary to sort AI systems into the regulation's risk categories. Ask: are we using AI in an area that the AI Act identifies as high risk? Typical examples for companies include AI for recruiting and HR (resume screening, automated candidate assessment), employee evaluation, as well as AI in financial services (client scoring, fraud detection), healthcare (diagnostic algorithms), manufacturing and maintenance (e.g. AI for critical infrastructure management), etc. If a company develops its own AI-based product, it is an AI system provider - then it must also address potential certification and notification under the AI Act. In contrast, common uses of AI in marketing (e.g. e-commerce recommendation algorithms), data analytics or customer service (chatbots) are likely to fall into the category of limited or minimal risk. Each AI tool identified should be assessed in terms of its purpose, the type of data being processed and the potential impact. If it processes personal data, it is covered by the GDPR; if it falls within the AI Act definitions, note which specific obligations will apply.
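
As a purely illustrative starting point, a first-pass triage of the inventory could look like the following sketch. The keyword sets are simplified assumptions for demonstration only; the authoritative list of high-risk use cases is in the AI Act itself, and the final classification should always be a legal assessment.

```python
# Illustrative first-pass triage of inventoried AI tools by AI Act risk category.
# The keyword sets are simplified assumptions; the conclusion is a legal one, not a script's.

HIGH_RISK_AREAS = {"recruitment", "hr_evaluation", "credit_scoring",
                   "medical_diagnostics", "critical_infrastructure", "education_assessment"}
LIMITED_RISK_AREAS = {"chatbot", "content_generation", "recommendation"}

def triage_risk(use_case_area: str, processes_personal_data: bool) -> dict:
    """Return a rough risk triage for one AI use case (a starting point, not a legal conclusion)."""
    if use_case_area in HIGH_RISK_AREAS:
        category = "high"
    elif use_case_area in LIMITED_RISK_AREAS:
        category = "limited"
    else:
        category = "minimal"
    return {
        "ai_act_category": category,
        "gdpr_applies": processes_personal_data,
        "follow_up": ("detailed conformity check and DPIA" if category == "high" and processes_personal_data
                      else "document the assessment and apply transparency measures"),
    }

# Example: an AI tool used for resume screening that works with applicants' personal data
print(triage_risk("recruitment", processes_personal_data=True))
```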

Dealing with high-risk AI applications

For AI systems that fall into the AI Act's high-risk category, companies should put special measures in place. In practice, this means developing a detailed compliance plan for each such system. This includes conducting a conformity assessment - i.e. verifying that the system meets all of the AI Act's technical and procedural requirements prior to deployment. Many of these requirements will primarily be fulfilled by the suppliers/developers (e.g. creating technical documentation, ensuring testing and risk minimisation). However, if a company acquires a high-risk AI system from a third party, it is not out of the woods - as a user, it is responsible for verifying that the system has the necessary certifications and meets legal requirements, and for using it only in accordance with the instructions and limitations specified by the provider. Firms should enter into contracts with AI vendors covering compliance obligations so that it is clear who provides what (similar to GDPR processing contracts). Further, human oversight of AI outputs should be set up - for decisions that have a significant impact on people (e.g. unsuccessful candidates or clients), there should be a human review process or a route of appeal. Monitoring and testing of an AI system should not end with commissioning - high-risk AI requires ongoing monitoring of performance, accuracy and adverse effects. If an incident occurs (e.g. the AI fails and causes damage, or proves to be non-compliant), you need to be able to report the issue to the supervisory authorities (similar to how security breaches are reported under the GDPR). In short, high-risk AI systems should be subject to the same risk management regime as, say, occupational safety or financial risks - with clear responsibilities, documentation and controls.
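
The human-oversight element can be pictured as a simple gate in the decision workflow. The sketch below is a hypothetical illustration of the pattern: the confidence threshold, the ai_score/ai_recommendation inputs and the review queue are assumptions, not part of any specific product or standard.

```python
# A minimal sketch of a "human in the loop" gate for a high-risk AI decision.
# Threshold, inputs and the review queue are hypothetical assumptions used only
# to illustrate the pattern described in the text above.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_oversight")

CONFIDENCE_THRESHOLD = 0.85  # assumed internal policy value

def request_human_review(application: dict, proposal: str) -> str:
    # Placeholder: in practice this would create a task in the company's workflow system.
    log.info("Application %s queued for manual review (AI proposed: %s)",
             application.get("id"), proposal)
    return "pending_human_review"

def decide_with_oversight(application: dict, ai_score: float, ai_recommendation: str) -> str:
    """Adverse or low-confidence AI outputs never take effect without a human decision."""
    log.info("AI recommendation=%s, score=%.2f for application %s",
             ai_recommendation, ai_score, application.get("id"))
    if ai_recommendation == "reject" or ai_score < CONFIDENCE_THRESHOLD:
        return request_human_review(application, ai_recommendation)
    return ai_recommendation  # low-impact, high-confidence outcomes may pass through

# Example: a rejection proposed by the AI is routed to a human reviewer
print(decide_with_oversight({"id": "A-123"}, ai_score=0.62, ai_recommendation="reject"))
```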

Ensuring transparency and accountability of AI systems

Whether an AI system is high-risk or low-risk, transparency towards users, clients and employees is key. The company should be prepared to explain, at least in basic terms, how the AI system works - especially for decision-making algorithms, it is a good idea to have internal documentation of what data and criteria go into the decision. For external parties (e.g. customers), it should be made clear, in an understandable form, that the decision was made using AI and, on request, a basic explanation of the logic should be available (without revealing trade secrets or protected know-how, of course). This transparency is a requirement of both the AI Act and the GDPR - the GDPR mandates that data subjects be informed of automated decision-making and of the measures taken, and the AI Act goes further in requiring the provision of user documentation for some systems and, where appropriate, the publication of information in the high-risk register. A good step is to develop an AI Code of Conduct for the company, in which management commits to the responsible use of AI (without discrimination, with respect for privacy, etc.), and to communicate this commitment publicly - this will strengthen the trust of customers and employees. Responsibility also means a willingness to bear the consequences: if an AI system causes an error, the company should be able to respond - investigate the incident, provide redress to those affected (e.g. compensation, the ability to repeat the decision-making process in a different way), and modify the system or process to prevent the situation from happening again. In practice, we recommend appointing an AI committee or board within the enterprise to oversee all major AI projects, approve them for compliance and ethics, and act as an advisory body in contentious situations.
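
One practical way to support such explanations is to keep a structured record of every AI-assisted decision. The following sketch shows one possible shape of such a record and a plain-language explanation derived from it; the field names are illustrative assumptions, not a statutory format.

```python
# Illustrative record of an AI-assisted decision, kept so that an understandable
# explanation can be given on request; field names are assumptions, not a statutory format.
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class DecisionRecord:
    subject_reference: str         # pseudonymised identifier of the person affected
    decision: str                  # e.g. "application not shortlisted"
    made_with_ai: bool             # the use of AI must be disclosed to the person affected
    main_criteria: List[str]       # data and criteria that drove the outcome, in plain language
    model_version: str             # which system/version produced the proposal
    human_reviewer: Optional[str]  # who confirmed the decision, if anyone
    timestamp: datetime

def explanation_for_subject(record: DecisionRecord) -> str:
    """Produce a short, non-technical explanation without revealing trade secrets."""
    parts = [f"The decision '{record.decision}' was prepared with the help of an AI system,",
             f"based mainly on: {', '.join(record.main_criteria)}."]
    if record.human_reviewer:
        parts.append("The final decision was confirmed by a member of staff.")
    parts.append("You may ask for the decision to be reviewed by a human.")
    return " ".join(parts)

record = DecisionRecord("subject-0042", "application not shortlisted", True,
                        ["required certification missing", "less than 2 years of experience"],
                        "screening-model v1.3", "HR specialist", datetime.now())
print(explanation_for_subject(record))
```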

Aligning AI Act requirements with GDPR

Although the AI Act and the GDPR are separate regulations, in practice they overlap and complement each other. It is advantageous for companies to take a holistic view of compliance - setting up a single risk assessment process for AI projects that takes into account both privacy and fundamental rights impacts. The GDPR already requires a Data Protection Impact Assessment (DPIA) for high-risk processing; the AI Act adds a requirement for a fundamental rights impact assessment for high-risk AI systems. These analyses can be combined - when planning to deploy AI that will process personal data, the company conducts an extended risk analysis assessing both privacy risks (GDPR) and broader ethical and societal risks (AI Act). Information gathered from existing DPIAs can be reused for the new AI Act assessment, avoiding duplication and saving time. Alignment also takes place at the level of technical measures - e.g. the AI Act's requirement for quality training data without bias complements the GDPR principles of data minimisation and accuracy, and the obligation to keep records of AI activity (logs) meets the GDPR obligation to document processing activities. In the area of data subjects' rights, if AI processes personal data, the company must be able to comply with subjects' requests under the GDPR (provide their data, correct it, delete it) - so privacy by design and privacy by default must be kept in mind at the design stage of an AI system. A practical tip is to create a compliance matrix: in one column the GDPR requirements, in the other the AI Act requirements, with a note on how each area is treated internally. For example, "Informing data subjects - addressed under the GDPR by providing information to the subject, which at the same time satisfies the AI Act's transparency requirement", etc. Such a matrix will help to identify potential gaps and avoid duplicated effort. Let us also remember that as of 1 January 2024, an amendment to the Personal Data Processing Act is in force in the Czech Republic, which allows personal data to be transferred to an AI system only if GDPR compliance is ensured - so national regulations are also starting to reflect AI. Aligning processes to meet both standards simultaneously is the most effective route to true compliance.
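
Such a matrix can live in a spreadsheet or even in a simple structured file. The sketch below shows a few illustrative rows; the mapping is an example of the format, not an exhaustive legal analysis.

```python
# A minimal sketch of the compliance matrix suggested above: for each area,
# note the GDPR angle, the AI Act angle, and how it is handled internally.
# The rows are illustrative examples only.
compliance_matrix = [
    {
        "area": "Informing individuals",
        "gdpr": "Privacy notice, information on automated decision-making (Art. 13-15, 22)",
        "ai_act": "Transparency: people must know they interact with AI; AI-generated content labelled",
        "internal_treatment": "Updated privacy notice + AI disclosure in customer communication",
    },
    {
        "area": "Risk assessment",
        "gdpr": "DPIA for high-risk processing (Art. 35)",
        "ai_act": "Fundamental rights impact assessment for high-risk AI",
        "internal_treatment": "Single combined impact assessment template",
    },
    {
        "area": "Records and logs",
        "gdpr": "Records of processing activities (Art. 30)",
        "ai_act": "Logging and record-keeping for high-risk systems",
        "internal_treatment": "Central log retained per internal retention policy",
    },
]

for row in compliance_matrix:
    print(f"{row['area']}: GDPR -> {row['gdpr']} | AI Act -> {row['ai_act']} | Internally -> {row['internal_treatment']}")
```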

Practical examples - typical mistakes and how to avoid them

Practical experience also shows that companies often neglect certain obligations when implementing AI. Here are four common mistakes to avoid:

  • Mistake 1: Sharing sensitive data with public AI tools. It happens that AI-enthusiastic employees don't hesitate to paste company or personal data into freely available services (e.g. using ChatGPT to generate a report from real customer data). By doing so, they may inadvertently violate the GDPR (if they pass the data to a third party without safeguards) and expose the company to the risk of know-how leakage. How to avoid this: Set clear rules on the use of AI for employees. Explain what content must not be uploaded to external AI systems (e.g. client personal data, internal strategy documents) and why. Also consider technical measures to prevent sensitive data from being sent online (filters, access restrictions) - a simple illustration of such a filter follows after this list. Educate employees that data entered into AI tools may be retained by the operator and may never come back under the firm's full control.
  • Mistake 2: Automated decisions without the possibility of human intervention. Some companies are tempted by the idea of fully automated decision-making (e.g. an AI system evaluating a loan application or an applicant's resume on its own). However, if there is no possibility of human review, there is a risk of violating Article 22 of the GDPR, as well as a risk of unfair or unverified AI outputs. Remedy: Implement a "human in the loop" principle for every AI decision-making process - set up a process where a questionable or negative AI decision is always confirmed by a human, or at least ensure that the customer/applicant can challenge the decision and request a manual review. This will not only satisfy the legal requirements, but also improve the quality of the output (AI can be wrong or biased). At the same time, test the algorithm regularly for bias - e.g. whether it discriminates against a certain group - and document the results.
  • Mistake 3: Relying on a vendor without checking it yourself. For example, a company purchases AI software from a well-known vendor and assumes that this "covers" compliance. But even when using external AI, the principle of user responsibility applies. If an AI tool causes harm to customers or employees, the company's reputation will suffer, regardless of the fact that the technology came from outside. Recommendation: Carry out your own due diligence on the vendor and the delivered solution. Has a privacy impact assessment been conducted? What assurances does the contract give regarding liability for errors? Ideally, you should conduct your own audit of the delivered solution in a test run - to verify how it handles data, what security features it has, and whether it delivers the promised functionality. Include liability and indemnification provisions in the contract with the vendor in case its AI causes infringement or damage to third parties.
  • Mistake 4: Underestimating documentation and the consent of data subjects. When deploying AI that processes personal data, companies often forget to update their privacy policies and obtain the necessary consents. For example, a company starts using an AI analytics tool to profile customers in detail, but fails to adjust the customer information or the legal basis for the processing accordingly. How to get it right: Any new use of personal data - including the deployment of AI - must be transparently communicated to data subjects. Therefore, before deploying AI, review your information obligations (policies, notices) and add a mention of the use of AI, including its purpose and logic. Check the legal basis: if the AI does something beyond the original purpose of the data collection, you may need to obtain consent from the data subjects (e.g. to process data for automated profiling). For more complex data use cases, consider consulting a DPO or lawyer, or even consulting the Data Protection Authority beforehand, especially if the DPIA shows a high risk that cannot be fully mitigated.
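
As a simple illustration of the technical measure mentioned under Mistake 1, a pre-submission check might look like the sketch below. The patterns are simplified assumptions for demonstration; a real deployment would rely on the company's own data classification rules and DLP tooling.

```python
# Illustrative pre-submission filter for text intended for external AI services (Mistake 1).
# The patterns below are simplified assumptions, not a complete detection mechanism.
import re

BLOCKED_PATTERNS = {
    "e-mail address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "Czech birth number": re.compile(r"\b\d{6}/\d{3,4}\b"),
    "phone number": re.compile(r"\+?\d{3}[ ]?\d{3}[ ]?\d{3}[ ]?\d{3}"),
}

def check_before_external_ai(text: str) -> list[str]:
    """Return the reasons why the text should NOT be sent to a public AI tool as-is."""
    return [label for label, pattern in BLOCKED_PATTERNS.items() if pattern.search(text)]

# Example usage: block a prompt that contains personal identifiers
draft = "Summarise the complaint from jan.novak@example.com, birth number 900101/1234."
issues = check_before_external_ai(draft)
if issues:
    print("Blocked - remove or pseudonymise first:", ", ".join(issues))
else:
    print("No obvious personal identifiers detected - internal policy still applies.")
```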

How ARROWS can help

Navigating the new regulations and putting them into practice effectively can be a challenge for businesses. ARROWS Law Firm has a team of experts in information technology and data protection law who follow the latest developments in AI Act and GDPR practice. We are happy to help you with comprehensive compliance:

  • Audit and risk analysis: we will conduct a thorough audit of your AI tools and data processing, identify potential risk points in terms of AI Act and GDPR and propose specific remedial measures.
  • Process setup and documentation: we will prepare the necessary legal documentation - from template consents and information for data subjects, to contractual clauses with AI vendors, to supporting documents for impact assessments (DPIA/FRIA).
  • Staff training and support: we will provide training to management and rank-and-file staff on how to use AI properly within the confines of the law. We will teach your team to recognize the legal risks when working with AI and to respond correctly to requests from data subjects or scrutiny from authorities.
  • Ongoing consultation and monitoring: the legislative framework for AI will continue to evolve. We'll be your partner for ongoing consultation - whether it's to introduce a new AI project, handle an incident, or communicate with a supervisory authority. We'll alert you to new obligations in advance so you're always ahead of the game.

Don't hesitate to contact us - together we'll ensure your business doesn't have to worry about fines or other negative impacts. With the right processes in place, you can use AI safely and responsibly as a competitive advantage while we keep an eye on the legal side. Contact ARROWS to arrange an AI compliance consultation today - we can help you turn regulatory obligations into practical solutions and benefits for your business.