Artificial Intelligence in HR:

A Practical Guide for Czech Companies by Lawyers

4.3.2025

Digitalization and the advent of artificial intelligence (AI) are fundamentally transforming the field of human resource management. Today, AI can speed up administration, help with recruitment and streamline employee training. According to a recent survey, half of Czech companies support the use of AI - most often in administration, HR and marketing. Companies expect AI to automate repetitive tasks and process data faster, but they also see challenges such as ensuring data security and a lack of skilled professionals. Only 14% of companies plan to deploy AI on a large scale, a third are open to targeted use in specific areas, and almost half have no clear position yet. This too shows that AI in HR is on the rise, but many businesses are only just weighing the risks and benefits. The following article therefore offers a practical guide: we present specific ways of using AI in HR, highlight the legal risks (from data protection to discrimination) and give recommendations on how to implement AI ethically and safely in a Czech company.

Author of the article: JUDr. Jakub Dohnal, Ph.D., LL.M., ARROWS advokátní kancelář (office@arws.cz, +420 245 007 740)

Specific areas of use of AI in HR

AI permeates different parts of the employee lifecycle - from recruitment to development. Below, we describe the most common areas where it can help HR teams:

Recruitment and selection of employees

Recruitment is one of the first areas where many companies deploy AI.

Automated resume prescreening: AI systems can scan hundreds of resumes in an instant, searching for keywords or required qualifications to prescreen candidates. This saves recruiters time and helps surface the most suitable candidates quickly - though, as we discuss below, it does not by itself eliminate bias. Modern tools can also schedule interviews and coordinate communication with candidates - for example, automatically sending interview invitations to selected candidates, suggesting suitable dates according to calendars, or sending out online tests.
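For illustration, here is a minimal sketch of how keyword-based prescreening works under the hood. It is deliberately simplified: the required skills and CV texts are hypothetical assumptions, and real screening tools use far richer matching (synonyms, semantic similarity, structured CV parsing).

```python
import re

# Hypothetical job requirements; a real tool would load these from the job posting.
REQUIRED_SKILLS = {"python", "sql", "english"}

def prescreen(cv_text: str, required: set[str]) -> tuple[int, set[str]]:
    """Count how many required skills a CV mentions; report the missing ones."""
    words = set(re.findall(r"[a-z]+", cv_text.lower()))
    matched = required & words
    return len(matched), required - matched

# Hypothetical CV snippets.
cvs = {
    "candidate_a": "Senior analyst, Python and SQL, fluent English",
    "candidate_b": "Marketing specialist, English, PowerPoint",
}

# Rank candidates by number of matched skills (descending).
for name, text in sorted(cvs.items(), key=lambda kv: -prescreen(kv[1], REQUIRED_SKILLS)[0]):
    score, missing = prescreen(text, REQUIRED_SKILLS)
    print(f"{name}: {score}/{len(REQUIRED_SKILLS)} required skills, missing: {missing or 'none'}")
```

Even this toy example shows why human review matters: a qualified candidate who phrases a skill differently would be scored down by a purely mechanical match.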

Chatbots and voicebots can then handle first contact: they quickly answer job seekers' common questions at any time, even outside working hours.

In the experience of recruitment agencies, up to 47% of conversations with a recruitment chatbot take place in the evenings or at weekends - so candidates don't have to wait for working hours and get answers immediately.

This improves their recruitment experience while removing routine queries from the HR team's agenda. AI also helps with job ad creation - generating text and even images for job postings, which the company can then use on career sites or social media.

Thanks to AI, an attractive advert created quickly and tailored to the position can reach a wider range of candidates. With all these applications, however, it is key to ensure that the algorithms do not favour or disadvantage certain groups of candidates - we come back to this in the section on bias and discrimination.

Automation of HR processes

HR professionals spend a significant amount of time on repetitive administrative tasks - this is where AI can help significantly.

Handling routine HR paperwork: AI can automatically process forms and requests (e.g. leave requests, changes to personal data) and enter them into internal systems. A chatbot for internal HR support can answer employees' questions about benefits, payroll or company policies, relieving the HR department. Advanced algorithms can optimally schedule employee shifts based on attendance data, availability and operational needs - saving time in creating schedules while taking employee preferences into account where possible.

Onboarding new hires: AI can walk a new employee through basic onboarding tasks - from filling out onboarding documents and familiarizing them with internal policies to giving them a virtual tour of the workplace via an interactive chatbot. Some companies also use AI for HR data analysis and reporting - for example, automatically generating regular reports on turnover, sickness or payroll costs and identifying patterns or anomalies that management should pay attention to. As a result, processes speed up and errors caused by routine manual work are reduced. According to research, automating repetitive tasks is one of the biggest benefits of AI in HR.

Companies can then devote the time saved to more strategic activities where human judgment and empathy are indispensable.

Employee performance evaluation

The field of appraisal and performance management is also moving into a new era thanks to AI. Traditional annual employee reviews are complemented by ongoing performance data analysis that can be partially automated.

Performance analytics: for sales teams, AI can track sales and KPIs in real time and alert managers to fluctuations - for example, identifying employees with declining numbers so they can get support or training in a timely manner. In customer service centres, some systems use AI to assess the quality of phone calls (tone of voice, keywords) or the speed of handling requests and give workers immediate feedback.

Predictive HR analytics even promises to predict which employees are at flight risk based on various indicators - project engagement, absenteeism, team sentiment, etc. Managers can then take proactive action, such as offering career advancement or addressing satisfaction before a valuable colleague quits (a simplified sketch of such flagging follows at the end of this section). These tools can greatly refine and accelerate work with talent. On the other hand, caution is required: a "black-box" algorithm that decided on promotions or bonuses on its own, without clear criteria, would be risky for the company from both a fairness and a legal perspective. It is no coincidence that the new European AI Act classifies systems for evaluating employee performance and behaviour as 'high-risk' and places strict requirements on them.

This means that in practice, the human should always have the final say - AI should serve as a supporting tool providing data and recommendations, but the final decision (e.g. who to promote or how to evaluate) must be made by the manager, taking into account the context and human perspective.
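To make the predictive analytics described above concrete, here is a minimal sketch of attrition-risk flagging. Everything in it is illustrative - the feature set, the synthetic data and the 0.5 review threshold are assumptions - and, in line with the point just made, its output is a prompt for a human conversation, never an automatic decision.

```python
# Minimal, illustrative sketch of attrition-risk flagging on synthetic data.
# A real system would need careful validation, explainability checks and,
# as argued above, a human making the final call.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Synthetic history: columns = engagement (0-1), absence days, negative sentiment (0-1).
X = rng.random((200, 3)) * np.array([1.0, 20.0, 1.0])
# Synthetic label: past leavers tended to combine low engagement with high absence.
y = ((X[:, 0] < 0.4) & (X[:, 1] > 10)).astype(int)

model = LogisticRegression().fit(X, y)

# Score two hypothetical current employees; flag those worth a manager's attention.
current = np.array([
    [0.3, 15.0, 0.7],  # low engagement, many absences, negative sentiment
    [0.9,  2.0, 0.1],  # engaged, rarely absent
])
for emp_id, risk in zip(["emp_001", "emp_002"], model.predict_proba(current)[:, 1]):
    note = " -> discuss support and development with the manager" if risk > 0.5 else ""
    print(f"{emp_id}: estimated attrition risk {risk:.2f}{note}")
```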

Employee monitoring

Employee monitoring is a sensitive area where technology in general is becoming more pervasive - and AI adds new opportunities, but also controversy. Businesses can use AI to track attendance and working hours - for example, a camera with facial recognition on arrival (instead of a punch clock) or automatic evaluation of logs to record work from home.

Some companies are testing tools that monitor computer activity (keystrokes, mouse movements, open applications) and assess employee productivity. AI can analyse large volumes of such data and look for patterns - for example, flagging up that a particular employee often spends time doing non-work activities online, or conversely identifying overworked workers who don't even take breaks.

In logistics, AI systems are used to track vehicles and drivers (GPS trackers combined with algorithms to optimise routes and detect deviations), while in manufacturing, camera systems are used to check compliance with safety rules. For example, modern AI cameras can automatically detect whether workers are wearing a helmet and vest and alert them to violations. Such automated safety audits can prevent accidents. There are also experiments with analyzing communications - for example, an AI tool can evaluate anonymized summaries of emails or chats within a team and measure "mood" or engagement (sentiment analysis) so that HR can catch workplace dissatisfaction early.

However, these options run up against privacy boundaries. In the Czech environment, the law clearly states that an employer may not, without good reason, invade an employee's privacy by systematically monitoring or controlling communications. Even justified monitoring must be reasonable and employees must be informed in advance - we will discuss the details in the legal section.

In short, AI can help with monitoring - but companies need to weigh very carefully what they monitor and how, so that they neither exceed legal limits nor lose the trust of their people.

Training and Development

Artificial intelligence can also personalise and streamline employee training. Traditional training often offers the same thing to everyone, while AI can analyse the needs and performance of individuals and design a tailored development plan accordingly. For example, an employee in customer service who is struggling with a particular software skill can, thanks to AI, receive a recommendation for a specific e-learning module or coaching in that area. AI tools track how someone performs on knowledge tests or tasks and recommend relevant training programs based on the gaps identified.

This personalised approach increases learning efficiency - people don't waste time learning what they already know, but focus on what they need to improve. Some companies already use chatbots as virtual tutors: an employee can ask the chatbot anything (e.g. "How do I fill out a travel order correctly?" or "What should I do when a customer complains?") and the AI promptly provides an answer, including links to relevant internal guidelines or materials. This allows for just-in-time learning - people get the knowledge they need at the moment they need it.

The AI can also generate training content - from automatically creating presentations based on the provided materials, to translating and localizing courses, to simulating virtual reality situations for practice (e.g., practicing a client interview in a simulated environment). The result is faster and cheaper training creation that can be easily adapted to different roles.

For large companies with hundreds or thousands of employees, AI thus helps scale development - providing each employee with a "coach" at their fingertips via an app or intranet. But it's important to ensure that AI's development recommendations are aligned with the strategic needs of the company and that there is no unwanted "pigeonholing" of employees (e.g., the algorithm stops offering someone a career path based on past performance alone). Therefore, HR should have the final say here as well and treat AI as a supporting tool.

Legal aspects of AI in HR

The deployment of AI in HR brings not only technological but also legal challenges. Employers need to respect a range of legal regulations - from data protection (GDPR) to employment law to prohibitions on discrimination. Below, we analyse the key legal aspects that Czech companies should focus on.

Data protection and GDPR

Employee data as personal data: most of the HR agenda revolves around employees' personal data - be it candidates' CVs, attendance, performance or training information. Under the GDPR definition, personal data is any information about an identified or identifiable natural person.

Therefore, if the use of AI involves processing data about specific employees (which it does in most HR scenarios), it is necessary to comply with all of the controller's obligations under the GDPR.

Legal basis for processing: the company must have a clear legal basis for processing the data. In the HR context, this will most often be the performance of a contract (e.g. payroll processing, which is necessary for an employment contract) or a legitimate interest of the employer (e.g. streamlining recruitment through AI). If a company relies on a legitimate interest, it should conduct a balancing test - documenting that its interest (e.g. automating recruitment) is not overridden by the rights and freedoms of employees.

Transparency and information: employees and applicants must be informed what data AI processes and for what purpose. The GDPR (Article 13) imposes an obligation to provide clear information to the data subject, including the fact that automated decision-making or profiling is taking place. Therefore, if a company deploys AI, it should update employee information memoranda, internal data processing policies, and perhaps wording in employment contracts or consents. Employees retain their rights under the GDPR - the right to access data, correct inaccuracies, object to processing or request erasure. The company must ensure that people can exercise these rights even when AI is used. For example, when an AI system assesses performance, an employee should be able to see what data the system records about them and have it corrected if necessary.

Contract with an AI supplier: often the AI tool for HR is supplied by an external company (for example, AI-based recruitment software). In that case, the supplier typically acts as a data processor on behalf of your company (the controller). The GDPR requires you to conclude a data processing agreement with the processor and to bind them to data protection rules.

The firm should ensure that the AI service provider secures the data, does not use it for other purposes and enables the firm to meet its obligations to data subjects. Also beware of the data on which AI learns: if you provide your HR data to the supplier to "train" the model, this too is processing of personal data, and you must have a legal basis and a contractual arrangement for it.

Risk assessment and DPIA: the regulation emphasises assessing whether the deployment of new technology poses high risks to employees' rights. AI often counts as a new type of processing with potentially high risk, so a DPIA (data protection impact assessment) may be required.

For example, if a company wants to implement an algorithm for evaluating candidates, it should analyse the risks in advance - what if the algorithm excludes someone unjustifiably, what about data leaks, and so on. If the DPIA reveals high risks that cannot be mitigated, the supervisory authority (in the Czech Republic, the Office for Personal Data Protection) must be consulted. Either way, involving the company's Data Protection Officer (DPO), if the company has appointed one, is highly recommended when deploying AI in HR.

Data security: AI systems tend to be data-intensive and often operate in the cloud - making it all the more important to pay attention to cybersecurity. A leak of sensitive employee data (appraisals, salaries, personnel files) could have serious consequences for the company. Both the GDPR and the Labour Code oblige employers to protect employees' personal data from misuse. In practice, this means implementing technical and organisational measures - database encryption, access control (authorised users only), regular audits, training of AI tool administrators, etc. For example, if employees use a generative AI assistant and someone enters sensitive HR data (e.g. a salary list), that data may be sent to the provider's servers - a security risk that employees should be explicitly warned about (see the recommendations below).

GDPR compliance is not to be underestimated - penalties for breaches can reach up to €20 million or 4% of a company's global annual turnover, whichever is higher.

But just as important is the employer's reputation: a data leak scandal or employee complaints about privacy breaches can damage trust and company culture.

Discrimination and bias in AI

Risk of algorithmic discrimination: AI in HR promises objectivity because it makes decisions based on data. But without the right measures, it can instead transmit and reinforce biases that are in the data. A classic example is a recruitment algorithm that "learns" from historical data to prefer a certain type of candidate. For example, if a company has historically hired predominantly men for IT positions, the AI may adopt this pattern and start systematically disadvantaging female resumes - without anyone explicitly intending it. A well-known case is Amazon's experiment, which developed an internal AI to evaluate resumes: over time, they found that it gave lower scores to female applicants because the historical data was "biased" against men.

For example, the algorithm penalized mentions such as "captain of the women's chess club" simply because the word "women's" rarely appeared in the CVs of the historically successful - mostly male - candidates.

Such behaviour is obviously undesirable and unacceptable in the context of employment law.

Prohibition of discrimination: both the Czech Labour Code and the Anti-Discrimination Act expressly prohibit discrimination in employment on the basis of protected characteristics such as gender, age, racial or ethnic origin, religion, sexual orientation, etc. This applies to recruitment, remuneration, promotion and termination of employment. If a decision made by an AI results in an unjustified disadvantage to a person because of any of these characteristics, the employer is liable - just as if a manager had discriminated directly in the interview. Companies must therefore keep a close eye on how AI makes decisions and test for bias before deployment.

For example, for a CV screening tool, tests can be run with fictitious profiles (female vs. male with the same qualifications, etc.) and the results checked to ensure they are not systematically different (a simplified sketch of such a test follows below). In addition, it is advisable to limit the use of inputs that could serve as proxies for sensitive traits - typically residential address (may correlate with ethnic origin), length of parental leave (may favour men over women), candidate photographs, etc.

Employers' responsibilities: in practice, an AI system should never make irreversible decisions entirely on its own. European law (GDPR Art. 22) prohibits subjecting people to purely automated decisions with significant impact unless specific safeguards apply. An example would be an algorithm deciding entirely by itself, without human judgement, whether or not to accept an applicant - such a practice is (with exceptions) prohibited because it fundamentally affects the person.

The company should therefore ensure that a human is always involved in the process who can review the AI's recommendation. This not only meets the legal requirement but also reduces the risk of discrimination - a human HR professional can spot a suspicious pattern (e.g. "the AI has rejected all the female candidates - something is wrong") and intervene.
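The fictitious-profile testing mentioned above can be as simple as scoring pairs of CVs that differ only in a gender signal and comparing the results. The sketch below is purely illustrative: score_cv stands in for whatever screening tool you are testing (here a deliberately biased dummy, so the test has something to catch), and the tolerance threshold is an assumption.

```python
# Minimal sketch of a paired-profile bias test: submit CV pairs that are
# identical except for a gender signal and compare the tool's scores.

def score_cv(text: str) -> float:
    """Hypothetical stand-in for the AI screening tool under test."""
    score = 0.0
    if "python" in text.lower():
        score += 0.5
    if "women's" in text.lower():   # simulated bias, as in the Amazon case above
        score -= 0.2
    return score

PAIRS = [
    ("Python developer, captain of the chess club",
     "Python developer, captain of the women's chess club"),
]

TOLERANCE = 0.05  # hypothetical threshold for "systematically different"

for neutral, gendered in PAIRS:
    gap = score_cv(neutral) - score_cv(gendered)
    status = "FAIL - investigate before deployment" if abs(gap) > TOLERANCE else "ok"
    print(f"score gap {gap:+.2f}: {status}")
```

The same harness works against any scoring tool you can call programmatically; the point is to run it before deployment and after every model update.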

Model training and tuning: responsible AI vendors today are emphasizing so-called ethical AI - models designed to minimize bias. For a company developing its own AI solution, this means including a fairness tuning phase in the development. For example, algorithm weights can be adjusted or compensatory mechanisms added if a certain group is found to be worse off for no legitimate reason. If you are buying an AI product from a third party, ask the supplier how they have addressed the issue of bias and whether they have conducted discrimination tests.

Monitoring AI decision-making: it doesn't end with the introduction of AI - you need to continuously monitor outcomes. It is advisable to keep records (logs) of how the AI made decisions and regularly analyse them - for example, the ratio of acceptances to rejections by gender, age, etc. (a simple sketch follows below). This can be supported by a fundamental rights impact assessment, which the AI Act will require for certain deployers of high-risk systems; the European Commission's voluntary ALTAI checklist (Assessment List for Trustworthy AI) is a useful starting point. In short, a company needs to oversee AI in much the same way as it oversees its managers - setting boundaries and checking that they are respected. Ignoring this obligation does not pay: in addition to fines, there is the risk of lawsuits from rejected applicants or employees who feel discriminated against, as well as loss of reputation.
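As an illustration of such outcome monitoring, the following sketch computes selection rates per group from hypothetical decision logs. The 0.8 benchmark is borrowed from the US "four-fifths" rule of thumb - Czech and EU law set no fixed numeric threshold - so a flagged disparity is a trigger for human review, not a legal verdict.

```python
# Minimal sketch of outcome monitoring: selection rates per group from
# hypothetical decision logs. Treat any marked disparity as a reason for
# human investigation, not as a pass/fail legal test.
from collections import Counter

decisions = [  # hypothetical log entries: (group, outcome)
    ("women", "accept"), ("women", "reject"), ("women", "reject"),
    ("men", "accept"), ("men", "accept"), ("men", "reject"),
]

totals, accepts = Counter(), Counter()
for group, outcome in decisions:
    totals[group] += 1
    if outcome == "accept":
        accepts[group] += 1

rates = {g: accepts[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    note = "  <- disparity, review" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f} (ratio vs best group {ratio:.2f}){note}")
```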

Monitoring employees and their rights

Right to privacy vs. employer control: as outlined above, modern technology gives employers unprecedented opportunities to monitor what employees are doing. From a legal perspective, however, an employee's privacy is protected and cannot be invaded beyond measure. The Czech Labour Code (Section 316) states that an employer may only check the use of its production and working means (computers, telephones, vehicles, and other equipment provided to employees) in a reasonable manner and to the extent strictly necessary.

This means that you cannot monitor all of an employee's activities across the board and without restriction - the monitoring must have a reasonable basis (e.g., protection of property, security, trade secrets) and be as minimally invasive as possible. Covert monitoring of an employee's communications (reading of private emails, interception of calls) is expressly prohibited unless there is a compelling reason based on the specific nature of the employer's business.

Such a reason could be, for example, the need to prevent the leakage of top secret information in a specific industry, but an ordinary company usually has no legitimate reason for it. Even where one exists (e.g. a security agency may have a reason to monitor whether a guard is actually patrolling the premises), the principle of proportionality applies - monitor only what is necessary and no more.

Duty to inform: Crucially, the employer must inform the employee in advance of any monitoring, its extent and methods. It is not enough to have it hidden somewhere in an internal directive - employees must be demonstrably aware of whether and how they are being monitored. For example, if a company deploys AI that analyzes emails for keywords (even if just for cybersecurity), it must notify employees that company email is under surveillance and what data is involved. The Labour Code does not require employees to agree to this (consent cannot be enforced), but they must know about it.

AI and monitoring: the use of AI on the employer's side does not change these obligations - it does not matter whether the monitoring is done by a human or an algorithm. For example, if AI is evaluating camera footage and reporting suspicious behavior, the same limits apply as for regular camera systems. Moreover, when AI is used, it is again processing personal data, so GDPR applies - for example, facial recognition on a camera is biometric data, which has a special sensitive status.

European supervision: not only Czech courts but also the European Court of Human Rights has repeatedly addressed employee monitoring. In the well-known case of Bărbulescu v. Romania, the ECtHR found a violation of the employee's right to privacy: he had been dismissed for using a company communication account for private purposes, but had not been clearly informed in advance about the scope of the monitoring.

This is an important lesson for companies: even if an employee broke the rules (used a company computer privately), evidence cannot be obtained by covert surveillance that he did not know about. Finally, when implementing AI monitoring tools (attendance cameras, computer monitoring, etc.), be mindful of legal boundaries. Set up monitoring to be targeted, not blanket; record only the necessary data; be sure to exclude monitoring of areas where the law prohibits it (locker rooms, restrooms, rest areas - where the employee has absolute privacy).

Always inform the employee openly what is being monitored and why. Consider the ethical side - excessive monitoring can damage workplace relationships. Use AI in monitoring for security and genuinely compelling reasons, rather than micromanaging an employee's every move.

Responsibility for decisions made by AI

Who is responsible: imagine a situation - an AI system selects a candidate whom the company hires or, conversely, recommends dismissing a certain employee. Who is legally responsible for this decision? From a legal perspective, it is always the employer. AI is not a legal entity and is not liable for its actions - it is a tool, just like accounting software. If there is misconduct (discrimination, wrongful termination, etc.), the company is liable to the affected employee or applicant, not the AI vendor or the "computer". It is therefore in the employer's interest to keep AI decision-making under control.

Automated decisions in employment relationships: as already mentioned, purely automated decisions with a large impact on individuals are in principle prohibited by the GDPR. In HR, this includes decisions such as hiring/non-hiring, promotion/non-promotion, termination, significant changes to salary or bonuses, etc. A company should never find itself in a situation where the decision was "made by AI and no one else saw it." For one thing, it would be legally actionable, and for another, it would be difficult to defend such a decision - how would you explain in court why someone was fired when the answer is "an algorithm made that decision and we don't really know why"?

The requirement for human oversight: It is good practice (and soon to be a legal obligation under the AI Act) to provide human oversight of AI.

This means identifying people who understand how the system works, who can intervene or shut it down if necessary, and who, above all, take the final decision at critical moments. For example, the AI may suggest a ranking of candidates, but the recruiter must go through the top choices and confirm or modify the decision. When using AI to check attendance, AI can flag suspicious patterns (frequent late arrivals), but the manager must talk to the employee and find out the reasons before taking sanctions.

Auditability and explainability: accountability also means that the company should be able to document the reasons for a decision. With AI, we often speak of the problem of "explainable AI" - especially with black-box algorithms, it is difficult to retrospectively decipher why they decided as they did. But if a person's career depends on the decision, it is fair to be able to provide an explanation. For a company, this means preferring AI tools that are transparent or provide an "explanation interface" - for example, displaying a candidate's score with the reasoning "does not meet requirement XY" instead of a simple "rejected". It is also advisable to store logs of the AI's activity so that it can be traced how the system arrived at a particular result (a minimal sketch follows below).
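For illustration, here is a minimal sketch of such a decision log - one JSON line per AI decision, capturing the inputs, the outcome and a human-readable reason. The field names, file format and example values are assumptions; the point is that a human can later reconstruct any individual decision.

```python
# Minimal sketch of an auditable decision log: record what went in, what came
# out, and the stated reason, so a human can later review the decision.
import json
import datetime

def log_ai_decision(path: str, candidate_id: str, decision: str,
                    reason: str, inputs: dict, model_version: str) -> None:
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "model_version": model_version,   # which version of the tool decided
        "inputs": inputs,                 # the data the model actually saw
        "decision": decision,
        "reason": reason,                 # human-readable explanation
        "human_reviewed": False,          # flipped once a recruiter confirms
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Hypothetical usage:
log_ai_decision("hr_ai_decisions.jsonl", "cand-42", "rejected",
                "does not meet requirement: 3+ years of SQL",
                {"years_sql": 1, "education": "Bc."}, "screener-v1.3")
```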

Challenging an AI decision: if an employee or applicant challenges a decision in which AI was involved (e.g. suspects that the algorithm discriminated against them), they should be able to raise it in the same way as a manager's decision. The company should have an internal process to investigate such a complaint - typically involving HR and IT experts to analyse whether the AI made a mistake. In the extreme case, the dispute would be handled by a court or labour inspectorate, and the company would have to defend its practices.

Supplier vs. user: externally, it is of course the employer who is liable, but if the failure is caused by a defect in the AI tool, the company can claim damages from the supplier (if the supplier breached the contract, etc.). This is why it pays to choose reputable, proven AI systems and to agree contractually what happens if the tool does not work as expected (including the supplier's obligations in terms of compliance). The EU has also been discussing special rules for AI liability (the proposed AI Liability Directive), intended to make it easier for victims to seek compensation - another signal to companies that liability is taken seriously. The conclusion is clear: AI is the company's responsibility and must be treated accordingly - as a tool that needs to be carefully set up, monitored and corrected where necessary.

Practical recommendations for Czech companies

Introducing AI into HR is more than just a technical project - it also requires setting the right internal rules, ethical framework and communication with employees. Below are recommendations to help you use AI in HR responsibly and legally.

Setting internal rules for AI in HR

AI strategy and policy: the first step should be to develop an internal strategy for the use of AI. The company's management should clarify in which areas it wants to use AI, what goals it is pursuing (e.g., speeding up recruitment, reducing administrative burden), and what the limits are. This is followed by creating an internal policy or guideline that sets out the rules for using AI in HR. It should answer the questions: who is responsible for the operation and oversight of AI tools? What types of decisions is AI allowed to make independently and where is escalation to a human required? How will the deployment of a new AI tool be tested and approved? How will its outputs be monitored and any problems addressed? Such guidelines will ensure that everyone in the HR team acts in a consistent and aware manner.

AI governance in practice: it is advisable, for example, to set up an internal committee or council to manage AI in the company. This could include representatives from HR, IT, legal and management. It would be tasked with approving new AI tools, overseeing their operation and resolving ethical dilemmas that may arise. In a smaller company, this might just be a regular meeting of the HR manager with IT and the lawyer about the use of AI. The important thing is that AI does not get out of control simply because someone introduces it "on the quiet", without the knowledge of others and without addressing the risks.

Record keeping and auditing: keep an inventory of all AI systems you use in HR, including a description of their function, the vendor, the data processed, and the risk-mitigation measures taken (an illustrative record follows below). This record keeping is useful both for internal purposes (you can easily see what is running where) and for possible communication with authorities or during an audit (you can readily show how you meet GDPR, AI Act and other requirements).
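Such an inventory does not need special software - even a simple structured record will do. The sketch below shows one illustrative entry (all names, fields and values are hypothetical) and a trivial check of the kind an audit preparation might run.

```python
# Minimal sketch of an AI-system inventory, as recommended above.
# Fields are illustrative; keep whatever your auditors and DPO actually need.
AI_INVENTORY = [
    {
        "name": "CV screening tool",
        "vendor": "ExampleVendor s.r.o.",        # hypothetical supplier
        "function": "pre-ranks applicants for open positions",
        "personal_data": ["CVs", "contact details", "test results"],
        "legal_basis": "legitimate interest (balancing test, Jan 2025)",
        "dpia_done": True,
        "risk_measures": ["human review of all rejections",
                          "quarterly selection-rate monitoring"],
        "owner": "HR manager",                   # who answers for the system
    },
]

# Example check before an audit: which systems still lack a DPIA?
missing = [s["name"] for s in AI_INVENTORY if not s["dpia_done"]]
print("Systems without DPIA:", missing or "none")
```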

Failure procedures: define what to do when an AI tool fails or causes an incident - for example, when the recruitment algorithm starts producing strange results. It should be clear who has the authority to suspend the system, who must be notified (management, DPO, IT security), and how the situation will be remedied. This will prevent chaos in a crisis.

Ethical guidelines and codes for AI

In addition to hard rules, it is a good idea to set ethical principles that the company will follow in AI. It is possible to build on existing company values or code of ethics and extend them to AI. Some organizations are adopting an AI ethics charter where they commit to certain principles - for example, "We will use AI transparently and fairly; AI decisions are always subject to human review; we are committed to data privacy and security; we ensure equal treatment and non-discrimination; and we are accountable for the impact of AI on our employees." Such statements not only help internally, but can also serve externally to build trust (e.g. a company can communicate to candidates that it uses AI ethically and in compliance with GDPR when recruiting).

Inspiration from outside: there are a number of frameworks and recommendations to draw on. The European Commission has published the Ethics Guidelines for Trustworthy AI (2019), which define principles such as human oversight, technical robustness, privacy, transparency, diversity and non-discrimination, and accountability. The OECD has its AI Principles, and the US government, for example, has published a Blueprint for an AI Bill of Rights (2022) protecting people's rights in the context of AI.

For HR specifically, there are recommendations from professional bodies (SHRM, CIPD, etc.), often emphasising three pillars: legality, ethics and inclusion. A company does not have to invent everything from scratch - proven principles can be adopted such as "AI must not violate employees' rights, it must be deployed with their trust and to the benefit of both parties, AI decisions must be explainable and reviewable, we will introduce new technologies sensitively and with consideration for the impact on people", etc.

As Jaromír Staroba, President of ABSL, points out, it is important for companies to follow global regulatory developments and to lead their innovation teams according to ethical guidelines. In practice, this means training both developers and users of AI on ethical risks, consulting AI deployment with various stakeholders (e.g. trade unions, if any) and avoiding solutions where it is not clear how they work. Staroba warns against "black-box" AI tools for HR without documentation and transparency - such are best avoided. Instead, favour solutions that enable auditing and where the vendor openly shares information about the algorithm (at least at the level of explanation, not necessarily the source code).

Culture and values: introduce the topic of AI into the company culture as well. If the company emphasizes taking care of employees, it should approach AI with the mindset that it is meant to serve people, not replace or over-control them. The ethical use of AI will become real when employees themselves support it - which is when they see that AI is helping, not hurting them. So build a culture where innovation goes hand in hand with accountability.

Communicating with employees about AI

Transparency towards employees: for employees to embrace AI positively, they need to understand why and how it is being used. It is therefore essential to communicate openly from the start. If you plan to deploy AI in recruitment or internal processes, inform employees in advance. Explain what tasks the AI will perform (e.g. "the chatbot will answer your payroll questions" or "the new system will analyse attendance and design shifts") and highlight what people will gain - faster HR services, less administration, etc. Also reassure them that AI will not be misused - e.g. that it will not serve for inappropriate monitoring or for evaluation without context. If you are introducing AI into recruitment, you can communicate this to candidates as well (e.g. mention in the job advert that the initial selection is done by an algorithm, to ensure fairness).

Training and education: not every HR professional or manager understands what AI really entails. That's why it's a good idea to hold internal training or workshops on AI in HR - introducing the possibilities but also the risks (privacy, bias) and outlining the new processes. Involve both the HR team and senior managers, and feel free to include the wider workforce where they are affected (e.g. training for managers on how to use AI outputs correctly when evaluating subordinates). The aim is to take away the fear of the unknown - many employees may worry whether AI will "spy" on them or even replace them. Open discussion will help clarify that AI is a tool to make people's jobs easier, not to punish them.

Involve employees in the design: If possible, involve employee representatives (perhaps from different departments) in piloting AI. They can give valuable feedback, point out practical problems, and act as ambassadors who can then better explain the innovation to colleagues.

Policy on AI for employees: in addition to the use of AI by the employer, there is also the issue of the use of publicly available AI by employees (e.g. ChatGPT at work). It is a good idea to communicate clear guidelines to employees: what they can and cannot enter into external AI tools. For example, warn them not to enter sensitive company information into public AI services that could store or misuse it.

Explain it clearly - e.g. "Whatever you type into the AI generator can be stored and potentially reach strangers, so it is forbidden to enter personal data of colleagues, customers or non-public information about the company."

Employees often have no idea how these technologies work, so education is key.

Communicating when AI makes a mistake: If an incident occurs (for example, AI misclassifies an executive as a "poor performer"), communicate it openly and apologize for any mistakes. Show that you are aware of the AI's limits and that you will do your best to correct and be fair. This will maintain trust - people appreciate it when a company admits that technology is not infallible and puts human judgment first.

Future developments and forthcoming legislation

The field of AI is evolving very dynamically - and so is the legislation that is trying to keep up. In the coming years, companies are facing major changes in AI regulation, particularly from the European Union, which are good to watch and prepare for.

AI Act and AI regulation in the EU

The most important piece of new regulation is the Artificial Intelligence Regulation (AI Act), which was adopted at EU level in 2024 and whose obligations apply in stages (most of them from 2026). The AI Act introduces a comprehensive, risk-based framework for regulating AI. It categorises AI applications into prohibited uses (e.g. social scoring of citizens), high-risk systems, limited-risk systems and minimal-risk systems.

AI in HR (recruiting, managing and monitoring employees) is a high-risk area as defined by the regulation.

This means that AI tools used, for example, to select candidates, make promotion and termination decisions, evaluate performance or monitor employee behaviour will be subject to strict requirements.

Obligations for companies (AI users): the AI Act distinguishes between obligations for providers of AI systems and for their users (so-called deployers). The role of a deployer will typically be a company that deploys e.g. purchased AI software for recruitment. The key responsibilities of a user of high-risk AI will include: ensuring adequate human oversight of the system, following the manufacturer's instructions closely, validating input data (i.e., monitoring the quality and representativeness of the data we "feed" to the AI), monitoring the AI's performance during operation, and reporting any serious incidents or misconduct to the vendor.

Furthermore, deployers must keep records (logs) of AI activities and, in certain cases, conduct a fundamental rights impact assessment (demonstrating that the system does not harm fundamental rights, e.g. does not discriminate). Companies will also have to inform their employees (or their representatives) that they are subject to the AI system's decision-making at work or in recruitment. This again underlines the principle of transparency.

Sanctions under the AI Act: failure to comply with the obligations for high-risk systems will be punishable by fines of up to €15 million or 3% of annual worldwide turnover (and up to €35 million or 7% for prohibited practices) - a regime similar to the GDPR. It is therefore clear that the EU takes the use of AI in HR very seriously and wants to prevent abuse or ill-considered deployment that could harm employees.

What to do now: Although the AI Act is yet to be put into practice, companies should start preparing now. It is recommended to audit any AI or advanced algorithms you use or plan to use in your company - to see if they fall within the AI Act's definition (which is quite broad) and whether they happen to fall into the high-risk category (if they involve employees, chances are they do). Also, keep an eye on the development of the implementing regulations and standards that the EU is preparing for the AI Act - they will specify technical details, such as exactly how to conduct those impact assessments or how to record the use of AI.

Companies should update their internal procedures to comply with the new requirements - this includes, for example, introducing the aforementioned human oversight, providing training for people working with AI, and adjusting contracts with suppliers so that they meet their own obligations and deliver the documentation you need.

Those who start on time will gain a competitive advantage because they will not have to hastily redesign processes at the last minute. The AI Act may seem like an administrative burden, but its goal is similar to the GDPR - to increase trust in technology by using it responsibly. For businesses, this may ultimately mean that well-set AI processes will give them a head start (they will be able to use AI to its full potential without fear of legal challenges).

Other legislative developments

In addition to the AI Act, other AI-related rules are under discussion. The European Commission has, for example, proposed an AI Liability Directive, which would add rules for recovering damages caused by AI - this could strengthen the position of employees who feel aggrieved by an AI decision (making it easier for them to prove their case), although the proposal's final fate is still open. In the field of employment law, we can expect AI to appear on the agenda of labour inspections and possibly also in collective bargaining. Unions in some countries are already calling for a right to disconnect in relation to AI - so that workers are not evaluated by dry metrics 24/7. There is also talk of a right to an explanation of AI decisions and a right to refuse purely automated evaluation. Some of these principles already exist de facto thanks to the GDPR, but we can expect them to be strengthened. Czech legislation will have to implement European standards - e.g. adjust the competences of the State Labour Inspection Office and the regional labour inspectorates, and possibly enshrine some obligations in the Labour Code or the Employment Act (e.g. regarding informing applicants and employees about AI).

Sector dynamics: AI technologies are evolving rapidly and the legal framework will constantly adapt to them. Companies should therefore keep up to date - attend seminars, read professional articles, or use the services of legal advisors specialising in IT law and HR. It is also worth sharing experiences in the field - e.g. at professional meetings of HR managers to exchange knowledge on how someone is dealing with AI and what has worked well for them.

Ethics as an ongoing theme: In addition to laws, there will be pressure from the public and employees themselves. The generations entering the workforce (Millennials, Gen Z) are value-sensitive and will question how the company is using AI. It may become part of employer branding to say "we use AI responsibly". Considerations of collective rights are also on the horizon - e.g. whether employees should co-determine the implementation of AI in the workplace (through works councils), etc. These are all issues that the coming years will bring.

Conclusion

Artificial intelligence offers HR departments great opportunities to streamline their work and make decision-making more data-driven. It can help Czech companies find talent faster, better motivate and develop employees, and stay competitive. At the same time, it is not a panacea - it requires careful implementation, ongoing monitoring and, above all, respect for the people involved. The legal risks are not insignificant: from fines for GDPR violations, to discrimination lawsuits, to damage to an employer's reputation. This practical guide has highlighted that balance is key - use AI where it brings clear benefits, but always with common sense, human oversight and within the confines of the law and ethics. If a company approaches AI as a tool that serves people (and not the other way around), it can get the most out of it in HR with minimal risk. The future of HR will likely belong to smart collaboration between humans and machines - let's prepare for it responsibly and with our eyes open.