AI and personal data in 2024

21.3.2024

With the growing importance of artificial intelligence ("AI"), client demands in this area are increasing, and it is therefore crucial to stay up to date. Inseparably linked to AI is the processing of personal data, an issue addressed by the General Data Protection Regulation ("GDPR"), which also applies to data processing involving AI and seeks to protect the personal data of individuals in the EU.

What is GDPR and how does it protect us?

In case you haven't heard of it: the GDPR sets out the rules for processing personal data in the EU. Its principles apply to any entity that collects, stores, processes or transfers the personal data of individuals in the EU, including organisations and companies that use AI to process such data.

One of the main principles of the GDPR is the purpose limitation principle enshrined in Article 5(1)(b) GDPR: personal data may only be collected for specified and legitimate purposes and may not be further processed in a manner incompatible with those purposes.

The same therefore applies to the use of AI in the processing of personal data. Organisations must ensure that the use of AI is consistent with the purposes for which the data was collected and that it avoids excessive or unwarranted processing. In practice, this means that the attitude of "the more data in the database, the better" calls for caution: always consider what data you are actually going to work with and why.

How to approach personal data when working with AI?

The protection of personal data should be taken into account from the outset when developing AI systems, in particular by taking technical and organisational measures that minimise the risk of loss, misuse or unauthorised access to the data. Adequate security measures such as data encryption, anonymisation or pseudonymisation should be used, complemented by secure data storage and monitoring of who has access to the data.
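
By way of illustration, the short Python sketch below shows one way pseudonymisation might be applied before personal data is handed to an AI pipeline. It is only a minimal example under our own assumptions: the field names, the PSEUDONYM_SALT environment variable and the keyed-hash approach are illustrative, not a prescribed or complete solution, and a real deployment would combine such a step with encryption, access controls and proper key management.

    # Minimal illustration: replace direct identifiers with a keyed hash so that
    # raw names and e-mail addresses do not enter the AI pipeline.
    # Field names and the PSEUDONYM_SALT variable are hypothetical.
    import hashlib
    import hmac
    import os

    SECRET_KEY = os.environ.get("PSEUDONYM_SALT", "change-me").encode()

    def pseudonymise(value: str) -> str:
        """Return a keyed hash of a direct identifier; without the key the value
        can no longer be attributed to a specific person."""
        return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

    record = {"name": "Jane Doe", "email": "jane@example.com", "order_total": 42.0}
    safe_record = {
        "name": pseudonymise(record["name"]),
        "email": pseudonymise(record["email"]),
        "order_total": record["order_total"],  # non-identifying data passes through unchanged
    }
    print(safe_record)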

Data protection considerations in the field of AI are also reflected in the new EU AI Act, which regulates the use of artificial intelligence. If personal data is processed in contravention of the GDPR, or if AI is used in contravention of the new AI Act, penalties can be imposed, typically in the form of fines, as discussed below.

Personal data means any information relating to an identified or identifiable natural person (Article 4(1) GDPR). Wherever such data is involved, it is therefore necessary, at a minimum, to address the contracts relating to the processing of personal data, in line with the principles outlined above.

The GDPR also places emphasis on transparency and the duty to inform, enshrined together with lawfulness and fairness in Article 5(1)(a) GDPR. Individuals whose personal data are processed must be properly informed of the purposes of the processing, the categories of data processed and, last but not least, of their rights in relation to their data (such as the right to be forgotten under Article 17 GDPR). This applies all the more when AI is used: clear information should be provided on how the AI systems process personal data and what risks this involves.

The security of AI should also be monitored from the perspective of companies, which could, for example, leak trade secrets or breach confidentiality obligations. If your employees use tools such as ChatGPT at work, it is therefore imperative that they do not enter any sensitive data whose disclosure could cause irreparable damage.

It is therefore advisable to organise employee training on this issue and to introduce internal guidelines. With this in mind, we also recommend reviewing relationships outside the company, for example with suppliers, where it may be appropriate to amend the relevant contracts.

Healthcare and artificial intelligence "in its infancy"

Healthcare is a topic that cannot be overlooked in relation to AI. It should be stressed that health data constitutes a special category of personal data subject to increased protection, with correspondingly higher fines. At the same time, the World Health Organization (WHO) itself cautions that AI should be used in healthcare only very carefully.*

Beyond the well-known applications that track, for example, calorie intake, there is also the concept of telemedicine as an entirely new field, which was the subject of an article by Mgr. Barbora Dlabolová (available here). Telemedicine essentially covers the remote provision of healthcare using digital devices, yet it is currently not specifically regulated. There are risks here too, which the WHO likewise points out, and the provision of health services as we know them remains the preferred model.

If healthcare, or data on our state of health, is to be linked to new AI technology, it will certainly be necessary to amend Act No. 372/2011 Coll., on health services and the conditions of their provision, so that the legislation keeps pace with this progress. It will likewise be more than appropriate for individual healthcare institutions to adjust their internal guidelines and policies to reflect the introduction of AI.

What happens if these rules are broken?

If you break the GDPR rules on handling personal data, you could face hefty fines, including where you fail to comply with data protection requirements when using AI. It is therefore important to comply carefully with the GDPR in your organisation and to ensure that your AI systems also meet these requirements.

According to Article 83 of the GDPR, fines for breaches of GDPR obligations can reach up to EUR 10,000,000 or, in the case of a company, up to 2% of its total annual worldwide turnover for the preceding financial year, whichever is higher; for the most serious infringements, the ceiling rises to EUR 20,000,000 or 4% of turnover.

In the case of the AI Act, fines can reach up to EUR 30,000,000 or, if the infringement is committed by a company, up to 6% of its total annual worldwide turnover for the preceding financial year, whichever is higher.
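
To illustrate how the "whichever is higher" mechanism works in practice, the short sketch below compares the fixed ceiling with the turnover-based ceiling under both regimes. The annual turnover figure is purely hypothetical and the calculation shows only the statutory maximum, not the fine actually imposed in a given case.

    # Illustration of the "fixed amount or percentage of turnover, whichever is
    # higher" mechanism. The turnover figure below is a hypothetical example.

    def max_fine(fixed_cap_eur: float, turnover_share: float, annual_turnover_eur: float) -> float:
        """Return the higher of the fixed ceiling and the turnover-based ceiling."""
        return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

    annual_turnover = 2_000_000_000  # hypothetical worldwide turnover of EUR 2 billion

    gdpr_ceiling = max_fine(10_000_000, 0.02, annual_turnover)    # EUR 40 million in this example
    ai_act_ceiling = max_fine(30_000_000, 0.06, annual_turnover)  # EUR 120 million in this example

    print(f"GDPR ceiling:   EUR {gdpr_ceiling:,.0f}")
    print(f"AI Act ceiling: EUR {ai_act_ceiling:,.0f}")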

A cautionary example is the fine imposed on Hello bank! for the unauthorised handling of its clients' biometric signatures, for which it was fined a quarter of a million Czech crowns.

AI offers many benefits and innovations, but personal data should be processed with respect for the privacy and rights of individuals. The GDPR provides an important framework of rules to protect personal data and to ensure that the use of AI complies with fundamental privacy principles. Artificial intelligence is also covered in our articles here and here.

* WHO calls for safe and ethical artificial intelligence for health. World Health Organization, 2023. Available from: https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health [cited 2024-03-20].

Should you have any questions on this topic, please do not hesitate to contact us; we would be happy to review your case.

Kateřina Chaloupková collaborated on the article.
