AI at work: who is responsible for AI's mistakes?

14.2.2025

Author of the article: JUDr. Barbora Kořenářová, ARROWS law firm (office@arws.cz, +420 245 007 740)

The use of Artificial Intelligence (AI) in the work environment is fast becoming the norm. Companies are implementing automated systems for employee management, data analysis or customer support. However, this raises a fundamental question: who is responsible when AI makes a mistake? In this article, we look at the legal aspects of AI liability, how Czech and European law addresses it, and how companies should prepare for new regulations.

Liability for AI mistakes under Czech law

Current Czech legislation contains no specific rules on liability for AI decisions. Companies must therefore rely on the general rules: liability for damage under § 2910 of the Civil Code, the Labour Code provisions on compensation for damage in employment relationships (§ 250 et seq.), and product liability rules as applied to software.

The main issues addressed in practice are:

  • Is AI just a tool, or can it be held liable for its own errors? - AI has no legal personality, so today the person or entity deploying it is always the one legally liable.
  • Who is liable for damage caused by AI? - If AI is integrated into work processes, the employer is liable. In the case of an AI error in HR (e.g. a discriminatory hiring decision), both the employer and the software vendor may be liable.
  • How to protect yourself contractually? - Companies should clearly define employees' responsibilities when working with AI in employment contracts and internal policies, and ensure they carry adequate liability insurance.

European AI regulation: what will the AI Act bring?

The European Union has adopted the AI Act (Regulation (EU) 2024/1689), which entered into force in August 2024 and will apply in stages. It sets out rules for the use of AI across sectors. Key points for employers:

  • Classification of AI systems according to risk - AI used in employment relationships (recruitment, HR and workforce management) falls into the "high-risk" category under Annex III of the regulation.
  • Duty of transparency - Employers will have to inform employees how AI systems work and what decisions they make.
  • Accountability for AI decisions - The regulation requires effective human oversight of high-risk AI systems; responsibility for their decisions remains with people.
  • Mandatory testing of AI systems - Companies will have to audit their AI tools to minimize the risk of errors and discrimination.

How should firms prepare?

To help employers avoid legal risks associated with the use of AI, we recommend the following steps:

  1. Update employment contracts - Clearly define employees' responsibilities for working with AI and its limits.
  2. Ensure human oversight - Have key decisions reviewed by employees so that an AI error cannot translate directly into a legal problem.
  3. Internal guidelines on AI use - Ensure employees know how to work with AI, what is allowed and what is not.
  4. AI Act Readiness - Keep track of legislative developments and prepare for new obligations.

Conclusion

AI brings innovation, but also new legal challenges. Companies need to prepare for questions of liability for AI mistakes while adapting to the new European regulation. Unclear rules can lead to legal disputes and penalties. We therefore recommend consulting lawyers who specialise in employment law and AI compliance.

Don't expose yourself to unnecessary risks - contact ARROWS lawyers to ensure you get the right legal solution.