The use of Artificial Intelligence (AI) in companies brings huge opportunities, but also new legislative requirements. The European Union is responding to the development of AI by adopting the Artificial Intelligence Regulation (AI Act), which will complement the existing data protection framework provided by the GDPR (General Data Protection Regulation). For medium and large businesses, ensuring compliance with both regulations is key - the stakes are high, with heavy fines (the GDPR allows for penalties of up to 4% of global annual turnover, the AI Act for up to 7% of turnover for the most serious infringements), reputational damage and other legal implications. As practice confirms, non-compliance can cost a company not only money in fines, but also its reputation and market position. That is why AI Act and GDPR compliance should be a strategic priority for every organization - preventive measures are always cheaper than dealing with the consequences.
Author of article: ARROWS law firm (JUDr. Jakub Dohnal, Ph.D., LL.M., office@arws.cz, +420 245 007 740)
The AI Act establishes the world's first comprehensive legal framework for AI. A key principle is the risk-based classification of AI systems. The regulation divides AI applications into four categories: prohibited (unacceptable uses, e.g. manipulative techniques, social scoring, etc.), high risk (AI systems with a major impact on safety or the rights of individuals, e.g. in healthcare, transport, finance, education or recruitment), limited risk (e.g. generative AI like ChatGPT - requiring certain transparency measures, such as labelling of AI-generated content) and minimal or no risk (common AI applications not specifically covered by the AI Act). High-risk AI systems will be subject to strict requirements - including certification and pre-market conformity assessment, maintenance of technical documentation, provision of human oversight of their decision-making, measures for data quality and the prevention of bias (discrimination), and system security. Providers of these systems must also expect registration in an EU database, ongoing monitoring and updating of the system, and liability for any damage caused. However, even for less risky AI, the AI Act imposes certain obligations - e.g. to ensure that users know they are interacting with AI (unless it is obvious) and to label AI-generated content (so-called AI transparency). Importantly, the AI Act affects not only AI developers and vendors, but also, to some extent, businesses as users - if a business uses an AI system subject to regulation, it must comply with the rules set out for its use.
The GDPR focuses on data protection and applies to all entities that process the personal data of EU citizens. For companies, this means in particular the obligation to process personal data lawfully and transparently - to have a valid legal basis for each processing operation (consent, contract, legitimate interest, etc.), to inform data subjects about the purpose and method of processing and to respect their rights (e.g. access, rectification, erasure or objection). The GDPR also requires the implementation of organisational and technical measures to secure data and minimise risks (encryption, pseudonymisation, access restriction, etc.) and, in the event of a data breach, requires the incident to be reported to the authorities and often to the individuals concerned. Automated decision-making and profiling are key to AI - the GDPR (in Article 22) gives individuals the right not to be subject to a decision based purely on automated processing (including AI) if it has legal or similarly significant effects. In other words, if a company uses AI to make automated decisions affecting individuals (e.g. scoring clients, automated credit rejection or selection of job applicants), it must either obtain explicit consent or ensure that a human element enters the process, and provide the individuals concerned with the opportunity to challenge the decision and have it reviewed by a human. The GDPR and AI intersect when AI systems process personal data - then both regulations need to be complied with simultaneously. For example, the AI Act emphasises respect for fundamental rights and explicitly requires AI systems to comply with the GDPR, meaning that data used for AI training or decision-making must be obtained and used within the confines of data protection law.
So what is the specific course of action for a company to comply with both the AI Act and GDPR? A proactive and systematic approach is key. Below are practical steps and actions that medium and large businesses should consider to achieve and maintain compliance:
The first step is to establish AI governance within the organization - clear rules and processes for developing or using AI systems. Management should identify who will be responsible for the AI compliance agenda (for example, expanding the scope of the existing data protection officer - DPO - to include AI, or establishing a new role for AI ethics and compliance). In addition, it is advisable to conduct an internal audit or inventory of all AI tools and projects that the company uses or develops and assess their riskiness from both the AI Act and GDPR perspectives. This inventory will allow you to focus on the essential areas. Internal guidelines and policies also need to be updated - for example, adding rules for the use of generative AI (what employees can or cannot enter into tools like ChatGPT), defining the approval process for new AI solutions, and setting up procedures for reporting AI-related incidents. Last but not least, this includes regular training of employees - the AI Act introduces the requirement of "AI literacy", i.e. ensuring that employees handling AI have adequate knowledge of the rules and risks. The training should make employees aware of the implications of the AI Act and GDPR and the ethical principles of AI, and teach them to recognise and deal with risk situations.
Not all AI tools in a company will fall into the high or limited risk categories under the AI Act. Therefore, each AI system needs to be classified by risk category. Ask: are we using AI in an area that the AI Act identifies as high risk? Typical examples for companies include: AI for recruiting and HR (resume screening, automated candidate assessment), employee assessment, as well as AI in financial services (client scoring, fraud detection), healthcare (diagnostic algorithms), manufacturing and maintenance (e.g. AI for critical infrastructure management), etc. If a company develops its own AI-based product, it is an AI system provider - then it must also address potential certification and notification under the AI Act. In contrast, common uses of AI in marketing (e.g. e-commerce recommendation algorithms), data analytics or customer service (chatbots) are likely to fall into the category of limited or minimal risk. Each AI tool identified should be assessed in terms of its purpose, the type of data being processed and the potential impact. If it processes personal data, it is covered by the GDPR; if it falls within the AI Act definitions, note what specific obligations will apply.
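To make the inventory and classification step more tangible, here is a minimal illustrative sketch in Python of how such an internal AI register might be kept. The risk levels mirror the four AI Act categories described above; the field names, example systems and classifications are our own assumptions for illustration, not a prescribed format or a legal assessment.

```python
from dataclasses import dataclass
from enum import Enum

# The four AI Act risk levels described above
class AIActRisk(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    processes_personal_data: bool  # if True, GDPR obligations apply
    risk_category: AIActRisk       # assessed against the AI Act categories
    provider: str                  # in-house development or third-party vendor

# Illustrative inventory entries (example values only)
inventory = [
    AISystemRecord("CV screening tool", "Automated shortlisting of job applicants",
                   True, AIActRisk.HIGH, "third-party vendor"),
    AISystemRecord("E-shop recommender", "Product recommendations for customers",
                   True, AIActRisk.MINIMAL, "in-house"),
]

# Flag which compliance tracks each system triggers
for record in inventory:
    if record.risk_category is AIActRisk.HIGH:
        print(f"{record.name}: high risk - conformity assessment, documentation, human oversight")
    if record.processes_personal_data:
        print(f"{record.name}: personal data - GDPR applies (legal basis, transparency, security)")
```

Even a simple register like this shows at a glance which systems trigger the high-risk regime and which "only" the GDPR, and it can later feed into the compliance matrix discussed further below.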
For AI systems that fall into the AI Act's high-risk category, companies should put special measures in place. In practice, this means developing a detailed compliance plan for each such system. This includes conducting a conformity assessment - i.e., verifying that the system meets all of the AI Act's technical and procedural requirements prior to deployment. Many of these requirements will need to be met primarily by the suppliers/developers (e.g. creating technical documentation, ensuring testing and risk minimisation). However, if a company acquires a high-risk AI system from a third party, it is not out of the woods - as a user, it has a responsibility to verify that the system has the necessary certifications and meets legal requirements, and to use it only in accordance with the instructions and limitations specified by the provider. Firms should enter into contracts with AI vendors covering compliance obligations so that it is clear who is responsible for what (similar to GDPR data processing agreements). Further, human oversight of AI outputs should be set up - for decisions that have a significant impact on people (e.g. rejected candidates or clients), there should be a human review process or a route of appeal. Monitoring and testing of an AI system should not end with deployment - high-risk AI requires ongoing monitoring of performance, accuracy and adverse effects. If an incident occurs (e.g. the AI fails and causes damage, or proves to be non-compliant), you need to be able to report the issue to the supervisory authorities (similar to how security breaches are reported under the GDPR). In short, high-risk AI systems should be subject to the same risk management regime as, say, occupational safety or financial risks - with clear responsibilities, documentation and controls.
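As one possible way of embedding human oversight into day-to-day operations, the sketch below (in Python, with purely illustrative names) shows a simple rule: an AI decision with a significant impact on a person is not acted upon until a human reviewer has confirmed it. This is a simplified example of the principle, not a complete implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIDecision:
    subject_id: str
    outcome: str                       # e.g. "reject_application"
    significant_impact: bool           # legal or similarly significant effect on a person
    reviewed_by: Optional[str] = None  # human reviewer who confirmed the decision, if any

def may_apply(decision: AIDecision) -> bool:
    """Decisions with significant impact require human review before they take effect."""
    return not decision.significant_impact or decision.reviewed_by is not None

# Usage: an automated rejection is held back until a recruiter confirms it
decision = AIDecision("applicant-42", "reject_application", significant_impact=True)
print(may_apply(decision))   # False - waiting for human review
decision.reviewed_by = "HR reviewer"
print(may_apply(decision))   # True - the decision can now be communicated
```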
Whether a high-risk or low-risk AI system, transparency towards users, clients and employees is key. The company should be prepared to explain how the AI system works, at least in basic terms - especially for decision-making algorithms, it is a good idea to have internal documentation of what data and criteria go into the decision. Towards external parties (e.g. customers), it should be made clear, in an understandable form, that a decision was made using AI and, on request, a basic explanation of its logic should be provided (without revealing trade secrets or protected know-how, of course). This transparency is a requirement of both the AI Act and the GDPR - the GDPR mandates that data subjects be informed of automated decision-making and the safeguards in place, and the AI Act goes further in requiring user documentation for some systems and, where appropriate, the publication of information in the EU database of high-risk AI systems. A good step is to develop an AI Code of Conduct for the company, in which management commits to the responsible use of AI (without discrimination, with respect for privacy, etc.), and to communicate this commitment publicly - this will strengthen the trust of customers and employees. Responsibility also means a willingness to bear the consequences: if an AI system causes an error, the company should be able to respond - investigate the incident, provide redress to those affected (e.g. compensation, the opportunity to have the decision re-made in a different way), and modify the system or process to prevent the situation from happening again. In practice, we recommend appointing an AI committee or board within the enterprise to oversee all major AI projects, approve them for compliance and ethics, and act as an advisory body in contentious situations.
Although the AI Act and GDPR are separate regulations, in practice they overlap and complement each other. It is advantageous for companies to take a holistic view of compliance - setting up a single risk assessment process for AI projects that takes into account both privacy and fundamental rights impacts. The GDPR already requires a Data Protection Impact Assessment (DPIA) for high-risk processing; the AI Act adds a requirement for a fundamental rights impact assessment for high-risk AI systems. These analyses can be combined - when planning to deploy AI that will process personal data, a company conducts an extended risk analysis assessing both privacy risks (GDPR) and broader ethical and societal risks (AI Act). Information gathered from existing DPIAs can be used for the new AI Act assessment, avoiding duplication and saving time. Alignment also takes place at the level of technical measures - e.g. the AI Act's requirement for quality training data without bias complements the GDPR principles of data minimisation and accuracy, and the obligation to keep records of AI activity (logs) meets the GDPR obligation to document processing activities. In the area of data subjects' rights, if AI processes personal data, the company must be able to comply with subjects' requests under the GDPR (provide their data, correct it, delete it) - so privacy by design and privacy by default must be kept in mind at the design stage of an AI system. A practical tip is to create a compliance matrix: list the GDPR requirements in one column and the AI Act requirements in another, and mark how each area is covered internally. For example: "Informing data subjects - addressed under the GDPR through the privacy notice, while also meeting AI Act transparency requirements", etc. Such a matrix helps to identify potential gaps as well as areas where one measure satisfies both regulations. Let's also remember that as of 1 January 2024, an amendment to the Personal Data Processing Act is in force in the Czech Republic, which allows personal data to be transferred to an AI system only if GDPR compliance is ensured - so national regulations are also starting to reflect AI. Aligning processes to meet both standards simultaneously is the most effective route to true compliance.
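One simple way to keep such a matrix up to date is sketched below. The rows, the GDPR article references and the internal document names are illustrative assumptions only - the point is the structure (requirement area, GDPR coverage, AI Act coverage, internal measure) and the automatic flagging of gaps, not an exhaustive legal mapping.

```python
# Illustrative compliance matrix: each row maps one requirement area to how it is
# covered under the GDPR and the AI Act, and where it is handled internally.
compliance_matrix = [
    {
        "area": "Informing individuals",
        "gdpr": "Privacy notice (Art. 13/14); information on automated decisions (Art. 22)",
        "ai_act": "Transparency obligations (disclosure of AI use, user documentation)",
        "internal": "Privacy notice v3.2; AI notice on the customer portal",
    },
    {
        "area": "Risk / impact assessment",
        "gdpr": "DPIA for high-risk processing (Art. 35)",
        "ai_act": "Fundamental rights impact assessment for high-risk AI",
        "internal": "Combined AI risk assessment template",
    },
    {
        "area": "Records and logging",
        "gdpr": "Records of processing activities (Art. 30)",
        "ai_act": "Logging and record-keeping for high-risk systems",
        "internal": "",  # not yet covered - this is a gap
    },
]

# Simple gap check: highlight areas with no internal measure recorded
for row in compliance_matrix:
    if not row["internal"].strip():
        print(f"GAP: no internal measure recorded for '{row['area']}'")
```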
Practical experience also shows that companies often neglect certain obligations when implementing AI. Here are four common mistakes to avoid:
Navigating the new regulations and putting them into practice effectively can be a challenge for businesses. ARROWS Law Firm has a team of experts in information technology and data protection law who follow the latest developments in AI Act and GDPR practice. We are happy to help you with comprehensive compliance.
Don't hesitate to contact us - together we'll ensure your business doesn't have to worry about fines or other negative impacts. With the right processes in place, you can use AI safely and responsibly as a competitive advantage while we keep an eye on the legal side. Contact ARROWS to arrange an AI compliance consultation today - we can help you turn regulatory obligations into practical solutions and benefits for your business.