AI governance in companies: How to do it right to avoid legal problems and fines
AI governance is not a luxury – it is a legal obligation that companies ignore at their own risk. Regulators are increasing pressure, litigation is multiplying, fines are rising, and the room for doubt about who is accountable is narrowing rapidly. If your company uses artificial intelligence without a clear governance structure, you are taking on risks that can quickly turn into costs running into the millions, project shutdowns, or a reputational crisis. In this article, you will learn what governance means in practice, what you risk without it, how to set it up properly, and where to find help.
Article contents
- What AI governance is and why it is no longer optional
- Attorneys from ARROWS, a Prague-based law firm, understand both regulatory pressure and practical operational challenges
- Key elements of AI governance in practice
- Data management and data governance
Here is a summary of the most important points:
- The absence of AI governance is no longer an overlookable detail: Regulators (in the EU, the USA and the Czech Republic) are actively enforcing compliance; companies without documented governance face fines, sanctions, and a weaker position in litigation.
- Governance covers the entire lifecycle: From tool selection, through data control, model testing, output audits, incident management, to continuous monitoring – it is not just paperwork, but day-to-day operations.
- Our attorneys in Prague at ARROWS can help you set up and maintain governance structures, address compliance issues, and protect you from regulatory trouble as well as operational failures.
- Mistakes are expensive: AI hallucinations, serious incidents with no response plan, discriminatory outputs, data leaks, invalid contracts – all of it is paid for through high fines and lost trust from partners and customers.
What AI governance is and why it is no longer optional
AI governance is not just a label for the person who “takes care” of AI. It is a systematic set of processes, rules and mechanisms that ensure that systems using artificial intelligence remain safe, lawful, ethical and effective. It is about control from day one – from tool selection through data management, output monitoring and incident handling.
The global regulatory environment has changed fundamentally. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (the “EU AI Act”) is already in force, with obligations phasing in: prohibited practices apply from February 2025, obligations for providers of general-purpose AI models apply from August 2025, and from August 2026 the Regulation will apply to most high-risk AI systems. In the USA, individual states are adopting their own laws – Texas, Utah, Colorado and others.
The Czech Republic is not isolated: as an EU Member State, it is governed by the EU AI Act and national frameworks are also gradually developing. Companies that wait for a “federal solution” or a “Czech law” are already behind.
At the same time, case law is evolving. Without clearly documented governance, it is no longer possible to defend yourself by saying “we didn’t know” or “the model did it on its own”. For companies implementing AI tools across business units, it is often practical to map responsibilities and documentation to an internal governance program such as Implementation of Artificial Intelligence (AI). A court and a regulator will want to see what exactly your company knew, what steps it took, and why it believed it was safe.
For AI governance to be effective, it must be part of day-to-day operations. When setting internal rules (responsibilities, tool approvals, incident management), in practice the contractual framework and negotiation of commitments with suppliers are often addressed as well, which can be supported by the contracts and negotiations team. First and foremost, this means it is not only a matter for the IT department or the legal team – it must be on the management board’s agenda, reflected in recruitment rules, in procurement processes, and in communication with business partners.
Key elements of AI governance in practice
Inventory and classification of AI systems
The first step is to know which AI systems in your company do what. Surprisingly often, it turns out that companies have no clear overview: employees use AI tools such as ChatGPT without approval, individual departments build their own AI tools, and developers integrate models into applications without reporting it – this is so-called shadow AI.
Related questions on classifying AI systems:
- How do I find out what AI I am using when employees commonly choose it themselves? If you do not have central control, your data may end up in unapproved tools. In such cases, legal review typically overlaps with technical measures and incident response planning covered by IT and Software Law and Cybersecurity. A practical solution is a combination of technical visibility (network monitoring, data loss prevention (DLP) tools), employee training, and, above all, providing approved alternatives. If you block employees without offering something useful, they will look for workarounds.
- Do I have to ban public AI tools such as ChatGPT? Not necessarily. It depends on the type of data and the context. Where public tools are allowed, companies often formalize the rules through management decisions and minutes (for how courts treat such resolutions, see Do Your Board Minutes Pass the Test? How Courts Treat Corporate Resolutions in the Czech Republic). Either way, you must clearly define which data employees may enter into which tools and what happens if they breach this rule.
- What happens if I miss something in the inventory? If a regulator or a court later discovers a system that was not on your list, they will ask questions about your due diligence and systematic approach. This weakens you legally and reputationally. In a situation where an AI incident turns into a dispute with a supplier or a client, it is appropriate to address the evidence strategy and liability in a timely manner, which typically falls within the area of commercial and litigation disputes.
Proper practice therefore includes: a documented list of all AI systems, their purpose, who is responsible for operations, what data they process, what risks they carry, and how they are monitored.
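For illustration only, the documented inventory described above can be sketched as a simple data structure with an automated gap check. The field names and the example record below are our own assumptions, not a prescribed legal format:

```python
from dataclasses import dataclass

# A minimal sketch of one entry in an AI system inventory.
# Field names (owner, risk_level, etc.) are illustrative, not a legal standard.
@dataclass
class AISystemRecord:
    name: str                 # e.g. "Support chatbot"
    purpose: str              # what the system is used for
    owner: str                # person responsible for operations
    data_categories: list     # what data it processes
    risk_level: str           # "high" | "medium" | "low"
    monitoring: str           # how and how often it is monitored

inventory = [
    AISystemRecord(
        name="Support chatbot",
        purpose="Answering customer FAQs",
        owner="Head of Customer Service",
        data_categories=["customer queries"],
        risk_level="low",
        monitoring="quarterly output review",
    ),
]

# Flag records with missing ownership or monitoring - the gaps a
# regulator would ask about first.
gaps = [r.name for r in inventory if not r.owner or not r.monitoring]
print(gaps)  # -> []
```

Even a lightweight structure like this lets you answer a regulator's first questions: what runs, who owns it, and how it is watched.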
Data management and data governance
AI systems are only as good and as safe as their data. If inappropriate, incorrect or discriminatory data enters a model, the model will replicate it. If data is not sufficiently protected, a leak from one AI system will turn into a mass leak of sensitive information.
Data governance in the context of AI means:
- Classification: Know which data is sensitive (personal data, trade secrets, health information, etc.).
- Access rights: Restrict access only to those who truly need it.
- Data minimisation: Do not collect or store more data than necessary.
- Provenance and lineage: Track where the data comes from, how it is transformed, and where it ends up.
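As a purely illustrative sketch of the classification and access-rights points above, a pre-submission check can block sensitive data classes from leaving for unapproved external tools. The category names and the approved-tool list are hypothetical assumptions, not a standard:

```python
# Illustrative sketch: a pre-submission gate that blocks sensitive data
# classes from being sent to an unapproved external AI tool.
# The category names and the approved-tool list are assumptions.
SENSITIVE = {"personal_data", "trade_secret", "health_data"}
APPROVED_TOOLS = {"internal-llm"}

def may_submit(data_classes: set, tool: str) -> bool:
    """Allow submission only if the tool is approved or no sensitive class is present."""
    if tool in APPROVED_TOOLS:
        return True
    return not (data_classes & SENSITIVE)

print(may_submit({"marketing_copy"}, "public-chatbot"))   # True
print(may_submit({"personal_data"}, "public-chatbot"))    # False
print(may_submit({"personal_data"}, "internal-llm"))      # True
```

In practice such a gate would sit inside a DLP tool or proxy; the point is that the rules on which data may go where must be explicit enough to be enforced mechanically.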
Many companies do not realise that when they input data into tools such as Copilot without verifying that it is properly protected, they may be breaching Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (GDPR), and exposing themselves to fines.
Our attorneys in Prague at ARROWS advokátní kancelář can assist by auditing your data governance and preparing rules so that they comply with legal requirements.
Model testing and validation
Regulators and courts are clear: operating an AI system without prior testing and without continuous monitoring is not acceptable. This means:
- Pre-deployment validation: Verify that the model operates safely on real data, does not produce discriminatory outputs, and that hallucinations are within an acceptable level.
- Post-deployment monitoring: Track how the model behaves after deployment, whether drift has occurred (a change in behaviour over time), whether its accuracy is declining, and whether it has started generating problematic content.
- Bias testing: Specifically test whether the model discriminates on the basis of gender, age, nationality, or another protected characteristic.
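The bias testing described above can be illustrated with a minimal demographic-parity check on hypothetical approval decisions. The data and the 0.8 threshold (echoing the well-known "four-fifths rule" from US employment-selection practice) are assumptions for this sketch, not a legal test:

```python
# Minimal bias check under an assumed setup: binary approval decisions
# (1 = approved) split by a protected attribute. The demographic-parity
# ratio compares approval rates between groups; values far below 1.0
# signal that the model needs closer review.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def parity_ratio(group_a, group_b):
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    return min(ra, rb) / max(ra, rb)

men   = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
women = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

ratio = parity_ratio(men, women)
print(round(ratio, 2))  # 0.5 -> well below the 0.8 threshold, escalate for review
```

A single metric like this is never sufficient on its own, but it is the kind of documented, repeatable check a regulator will expect to see.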
A simple example: a company deploys an AI system to score the creditworthiness of loan applicants, and no one tests it for gender bias. The model generates scores biased against women (a real-life scenario). The regulator discovers it, the company pays fines under the GDPR and the EU AI Act, and faces a class action from the affected applicants.
Related questions on testing:
- How much testing is enough? That depends on the system’s risk level. High-risk decisions (employment, loans, healthcare) require very thorough testing. Low-risk ones (e.g., message classification for internal analytics) require less. ARROWS attorneys can help you determine the minimum standard for your specific case.
- Who should do the testing – in-house or externally? Ideally a combination. In-house testing by an experienced team, but whenever the risk is high, an external audit will verify objectivity. High-risk AI systems will require third-party conformity assessment (by a so-called notified body) under the EU AI Act.
- What risk do I run if I test only once and then go back to operations? Models degrade over time. Without monitoring (e.g., monthly), you may only learn about an issue when the regulator arrives for an audit or when an affected customer contacts you.
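A monthly drift check of the kind mentioned above can be as simple as comparing recent accuracy against the baseline recorded at validation time. The numbers and the tolerance below are illustrative assumptions to be tuned per system:

```python
# Sketch of a monthly drift check: alert when recent accuracy falls
# more than TOLERANCE below the accuracy recorded at validation time.
# The baseline and 5-point tolerance are assumptions, not a standard.
BASELINE_ACCURACY = 0.92
TOLERANCE = 0.05

def drift_alert(recent_accuracy: float) -> bool:
    """True when accuracy has dropped more than TOLERANCE below the baseline."""
    return (BASELINE_ACCURACY - recent_accuracy) > TOLERANCE

monthly = {"2026-01": 0.91, "2026-02": 0.89, "2026-03": 0.84}
alerts = [month for month, acc in monthly.items() if drift_alert(acc)]
print(alerts)  # ['2026-03'] -> more than 5 points below the baseline
```

The governance value is less in the arithmetic than in the record it produces: a log showing you checked every month is exactly the evidence missing in most disputes.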
Risk management and impact assessment
The EU AI Act, the NIST AI Risk Management Framework, and national regulations expressly require companies to identify and document the risks associated with their AI systems. This is more than just IT security – it includes legal, ethical, operational, and reputational risks.
A key step is carrying out AI impact assessments, which are similar to data protection impact assessments (DPIAs) under the GDPR. They answer:
- What system are you using and what does it do?
- Who does it affect (employees, customers, the public)?
- How severe are potential errors (discrimination, hallucinations, data leaks)?
- Which legal frameworks apply?
- What prevention and monitoring are in place?
- Who is responsible if something fails?
Without such a document, you have no evidence that you approached the matter seriously. With it, you have a defence in case a court or regulator focuses its attention on you.
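For illustration, the questions above can be captured as a structured record with an automated completeness check, so a half-finished assessment is caught before deployment. The field names are our own assumptions, not a prescribed EU AI Act template:

```python
# Illustrative AI impact assessment record mirroring the questions above;
# the field names are assumptions, not a prescribed regulatory template.
REQUIRED_FIELDS = [
    "system_description", "affected_persons", "potential_harms",
    "legal_frameworks", "prevention_and_monitoring", "accountable_person",
]

assessment = {
    "system_description": "Credit-scoring model for consumer loans",
    "affected_persons": "Loan applicants",
    "potential_harms": "Discriminatory scoring, incorrect rejections",
    "legal_frameworks": "EU AI Act (high-risk), GDPR",
    "prevention_and_monitoring": "Pre-deployment bias test, monthly drift check",
    "accountable_person": "",  # left blank - the check below catches it
}

missing = [f for f in REQUIRED_FIELDS if not assessment.get(f)]
print(missing)  # ['accountable_person']
```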
Governance structure and accountability
ARROWS advokátní kancelář attorneys have seen dispute after dispute where everyone points fingers: “We thought IT was dealing with it… IT thought legal was handling it… the legal department thought management was responsible…” The result: no one had clear accountability, mistakes piled up, and everyone paid the price.
Proper governance must have clearly defined roles and responsibilities for all stakeholders:
- Management/Board: Sets the strategy, approves risk tolerance, ensures that AI governance is part of corporate processes.
- AI Governance Committee (if any): Coordinates between individual teams, approves system deployments, handles escalations.
- Data stewards: Ensure data quality and security.
- Legal team: Monitors compliance with legal requirements, prepares contracts with suppliers.
- IT/Security: Ensures technical security and monitoring.
- Business owners: Bear operational responsibility for specific AI systems in their areas.
- Human reviewers: Ensure that critical decisions are not fully automated without human involvement.
Document this allocation in writing. Ideally in an AI governance charter or policy. Without it, everyone will argue when something goes wrong.
Table of key risks and how ARROWS advokátní kancelář helps
| Potential issues | How ARROWS helps (office@arws.cz) |
| --- | --- |
| Absence of an inventory of AI systems; a regulator or court discovers systems your company has not documented. | ARROWS attorneys can help you conduct an audit of AI use within your company and create a central inventory of all systems. We will prepare documentation that demonstrates your systematic approach and due care. |
| Data leaks and GDPR breaches due to entering sensitive information into unapproved AI tools. | ARROWS provides a review of your data governance and rules for using AI. We help set up controls, employee training, and policies so that data does not end up in the wrong places. |
| Discriminatory outputs from an AI model; fines from the regulator and class actions by affected persons. | ARROWS attorneys will guide you through testing and auditing AI systems for bias. If an issue is identified, we will prepare a remediation plan and represent you in dealings with regulators. |
| AI system failure without a plan; hallucinations that reach the public and damage reputation. | We work with you to prepare incident response plans, monitoring, and documentation. When an issue arises, we can defend you against sanctions and media pressure. |
| Incorrect contracts with AI suppliers; undefined responsibilities, limited recourse options. | ARROWS attorneys will ensure that your supplier contracts clearly define obligations, liability, and your rights to audit and remediation. We also address complex issues of indemnities and liability caps. |
Regulatory framework: Where it came from and what risks you now face
EU AI Act – the most binding regulation
The EU AI Act entered into force in August 2024 and is being rolled out gradually:
- From February 2025: Prohibited practices (manipulation, exploitation, social scoring) are already in effect.
- From August 2025: Obligations for providers of general-purpose AI models (training data summaries, transparency and copyright-policy duties).
- From August 2026: Full application to most high-risk AI systems.
What does this mean in practice? If you use AI in decision-making on employment, loans, healthcare, or public services, you have obligations relating to documentation, testing, audit logs, and incident reporting. If you breach them, you face fines of up to EUR 35 million or 7% of global turnover, whichever is higher.
For companies operating in the Czech Republic or the EU, there is no way around it: even if you are based outside the EU, if you have EU customers or employees, the AI Act applies to you.
US state regulation
There is no federal law in the US yet, but individual states are taking it seriously:
- Texas (from January 2026): Systems designed to incite self-harm, engage in unlawful discrimination, or produce illegal deepfakes are prohibited; mandatory transparency for government and healthcare AI.
- Colorado (from June 2026): Transparency requirements for high-risk AI, discrimination audits.
- Utah (already in effect): An obligation to clearly communicate that it is AI; the company is liable for illegal practices as if it carried them out itself.
- California (phased implementation): Regulations on ADMT (automated decisionmaking technology) governing decisions about individuals, plus new data rights.
A company with Czech employees in the US or with US customers must be prepared to comply with multiple regulations at once.
The Czech Republic and international elements
Czech legislation is still developing, but it will automatically be based on the EU AI Act. In addition, Czech law (commercial, employment, healthcare) applies to all AI decisions that affect these areas—even if they are not explicitly mentioned.
If you have clients or partners in other countries, you must take into account that different rules apply in each region. The attorneys at ARROWS advokátní kancelář handle cross-border AI governance through the ARROWS International network—we can help you coordinate obligations across multiple countries.
Most common questions on the regulatory framework and emerging governance:
- How do I know whether an AI system is “high-risk” under the EU AI Act? High-risk systems include, for example, systems for employee screening, credit scoring, traffic management, and healthcare. Lower risk would be, for instance, an FAQ chatbot. It is not always clear. ARROWS’ Prague-based attorneys can help you classify your specific systems: office@arws.cz
- Can I just deploy an AI tool such as ChatGPT in HR for candidate screening? Theoretically yes, but in practice it is risky without governance. You must test for bias, keep audit records, ensure human oversight, and have remediation mechanisms. If you do not and the matter ends up in litigation, your defence will be weak.
- Who is responsible for compliance—the technology department or the lawyers? Both. Technology builds the system and monitoring; lawyers ensure it complies with regulations. Without clear cooperation, things become complicated. ARROWS’ Prague-based attorneys can help you set up that cooperation: office@arws.cz
Practical risks addressed by AI governance
Hallucinations and uncertainty of outputs
Generative AI models sometimes simply make things up—confidently generating content that does not exist at all. A legal research tool could cite a “precedent” that exists in no database. An HR system could produce a report about a candidate that bears no relation to reality.
Without governance: Errors make it into production and harm clients or partners.
With governance: You have human review processes, you have audit records, you can show that you addressed the issue, and you can say without difficulty “this was an AI error that we have now fixed”—and have evidence for it.
Discrimination and bias
Models learn from data. If historical data reflect discrimination (e.g., women received worse evaluations), the model learns it. Failures follow: in the US, there have been cases where AI employment screening significantly discriminated against women and certain minorities.
Without governance: You may find yourself a defendant in a class action or facing a fine from a regulator. The defence “we didn’t know” will not hold up.
With governance: You test for bias before deployment, you have data on what you tested, you have a remediation plan, and the regulator sees you as a responsible player. That is a much better position.
Data leaks and GDPR breaches
An employee enters personal data into a public AI tool such as ChatGPT to have something summarised. The data are now on the provider’s server and may become part of the training data. This can be a GDPR breach—with fines of up to EUR 20 million or 4% of global turnover, whichever is higher.
Without governance: The rules are unclear to everyone, everyone does whatever they want, leaks accumulate, and you will not even have a plan for how to deal with them.
With governance: You have training, you have approved tools, you have data rules, you have monitoring. When an issue arises, you have documentation you can present to the regulator.
Infringement of intellectual property rights (IP infringement) and copyright
Generative models often draw content from public sources—including protected works. If you have a model trained in this way, you may end up in a legal dispute or have to pay penalties. And if your model produces content that strongly resembles an author’s work, you risk a lawsuit.
Governance is important here: You have an audit of what was in the training data, you have a policy, you have the ability to prohibit it—then you have a defence.
Vendor lock-in and supplier risk
A company commits to a single AI vendor that changes its pricing policy or stops operating its model. You are trapped.
With governance: You have specific performance requirements in your contracts, you have exit clauses, you have audit rights, and you have a plan B. ARROWS’ attorneys in Prague can ensure that your supplier contracts are not one-sided.
How to set up AI governance in practice – a detailed plan
Phase 1: Preparation and training (1–2 months)
- Clarify who in leadership is responsible for AI.
- Familiarise top management with the regulatory framework and real risks.
- Train key people from IT, legal, HR, and business lines.
- Prepare a basic AI governance policy (or have it prepared by ARROWS advokátní kancelář: office@arws.cz).
Phase 2: Inventory and classification (2–3 months)
- Map all AI systems in the company—internal, purchased, shadow AI.
- Classify them by risk (high, medium, low).
- Document what they do, what data they process, and who is responsible for them.
Phase 3: Governance structure and responsibilities (1 month)
- Establish an AI Governance Committee (cross-functional).
- Clearly define roles: who decides, who handles data, who monitors, who escalates.
- Document it in writing.
Phase 4: Testing and audit (ongoing)
- Subject high-risk AI systems to an independent audit or conformity assessment.
- Test for bias, hallucinations, and data security.
- Set up continuous monitoring.
- Create an incident response plan.
Phase 5: Training and communication
- Train all employees on AI governance rules.
- Communicate what is and is not permitted.
- Make sure people know how to report suspected issues.
Phase 6: Regular review and updates
- Review your governance policies at least once a year.
- Monitor regulatory changes.
- Adapt processes to new situations.
The attorneys at ARROWS advokátní kancelář can guide you through all of these phases—from policy preparation, through a compliance audit, to preparing for a potential regulatory inspection. We are insured for professional liability up to CZK 400,000,000, so you can rely on our dependability: office@arws.cz
Table of typical mistakes companies make—and how to avoid them
| Mistake | Consequence | How to address it |
| --- | --- | --- |
| No written AI governance policy; everything is only “in managers’ heads”. | When people change or when a regulator shows up, you have nothing to point to. A court will conclude you handled it carelessly. | Draft an AI governance charter or policy and update it regularly. ARROWS attorneys in Prague can help you: office@arws.cz |
| Testing only once, at deployment; then you return to operations without oversight. | The model degrades over time, starts making mistakes, and it can take months before anyone notices. In the meantime, it causes damage. | Set up monitoring—monthly, quarterly, or annually depending on risk. For high-risk decisions, monthly is the minimum. |
| Unapproved AI in individual departments; management does not know what is being used. | Shadow AI processes data that should never have entered an AI tool, security is not controlled, and compliance cannot be ensured. | Make the AI inventory real, provide employees with approved alternatives, train them, and enforce the policy. |
| No clear allocation of responsibility; everyone thinks someone else is dealing with it. | When something breaks, everything is muddled, no one takes ownership, and errors pile up. | Develop a responsibility matrix naming the owner of each AI system and each governance function. |
| Supplier contracts without AI-specific clauses; you have no access to audits, models, or training data. | You are unable to demonstrate compliance, you have no right to control what the supplier does with your data, and you have limited recourse in the event of an error. | Review all supplier contracts; add AI-specific clauses on audit rights, incident reporting, data handling, and indemnities. ARROWS attorneys in Prague can help you: office@arws.cz |
Most common questions about governance in the workplace and operations
- If I ban a certain AI tool (e.g., ChatGPT) and instruct everyone to work with an approved tool, will that be enough? Partly yes, but you must combine it with access controls (DLP tools, monitoring), training, and a clear policy on what to watch out for (no trade secrets, no personal data, etc.). A ban without an alternative will lead people to find workarounds.
- How much does proper AI governance cost? Isn’t it only for large corporations? Governance can be scaled. For a smaller company, a simple policy, an inventory, and regular training are sufficient. For a large corporation, it is a more complex structure and toolset. In both cases, it is significantly cheaper than dealing with the consequences—an incident caused by missing governance can cost millions. ARROWS attorneys in Prague can help you build governance proportionate to your size: office@arws.cz
- What is the fastest way to ensure basic compliance? Start with an AI policy (you can use a template), inventory what you use, train people, and set up basic monitoring. It is not ideal, but it is better than nothing. Then go deeper.
- If an AI incident occurs in the company (e.g., a data leak), should I report it to the regulator? It depends on the nature of the incident. If it is a personal data breach under the GDPR, then yes—you must report it within 72 hours of becoming aware of it. In other cases, the reporting obligation depends on the specific situation and regulation (e.g., under the EU AI Act, serious incidents involving high-risk systems must be reported). ARROWS attorneys in Prague can help you determine your obligations and prepare your communications with the regulator: office@arws.cz
Summary
AI governance is neither a paperwork exercise nor a complex IT project—it is a core part of how a modern company operates AI systems so they deliver benefits without legal and operational risks. Regulators are no longer waiting, and fines are increasing. Litigation is multiplying. Everyone is asking: what have you documented? What have you tested? How do you know your model is not discriminatory? How do you manage data? Who is responsible?
Companies that reject governance are gambling with high stakes: they think it is unnecessary, until the regulator calls or a lawsuit is filed.
Properly set up governance, on the other hand, is the best investment. It enables you to pursue AI without fear, you understand your risks, you have a plan to manage them, and if something happens, you have a way to defend yourself. The regulator sees that you are a responsible player.
If you are not sure where to go next, or you want your governance reviewed from a legal perspective, contact the attorneys at ARROWS advokátní kancelář. We have experience with regulatory pressure in the Czech Republic and the EU, we understand clients’ operational challenges, and we can advise you on what the minimum is for you and how far you should go.
We handle audits, preparation of governance policies, supplier contracts, and representation before regulators. Email us at office@arws.cz and we will schedule a consultation.
Most common questions about AI governance in companies
- Do I need to implement AI governance right now, or can I wait? The sooner, the better. Regulation is continuously tightening, and compliance will be more expensive if you fall behind. Moreover, if you already have AI in operation and start governance late, you will have to audit retrospectively and potentially fix things that should have been set up correctly from the start. If you know you are using AI, start now. Contact ARROWS, a Prague-based law firm: office@arws.cz
- What is the difference between “AI governance” and “privacy by design” under the GDPR? Privacy by design focuses on the protection of personal data. AI governance is broader—it includes data security, but also bias testing, operational risks, regulatory compliance, and incident management. The GDPR is part of AI governance, but only one part. The two approaches complement each other.
- Can I create governance myself, or do I have to hire it out? You can create it yourself if you have experts on your team. But it is risky to underestimate—small mistakes in the rules scale. ARROWS attorneys in Prague can help with verification, auditing, and preparing templates so that everything aligns with the legal framework: office@arws.cz
- What if my AI is from an external supplier—who is responsible for compliance, them or me? You are both responsible. The supplier is responsible for its system; you (as the operator) are responsible for ensuring the system is used properly, monitored, and that outputs are reviewed. If the supplier fails and you had a duty to address this within your governance, the failure is also on you.
ARROWS attorneys in Prague can prepare a management presentation on regulatory risks: office@arws.cz
Notice: The information contained in this article is of a general informational nature only and is intended for basic orientation in the topic based on the legal state as of 2026. Although we take maximum care to ensure accuracy, legislation and its interpretation evolve over time. We are ARROWS, a Prague-based law firm, an entity registered with the Czech Bar Association (our supervisory authority), and for maximum client protection we maintain professional liability insurance with a limit of CZK 400,000,000. To verify the current wording of regulations and their application to your specific situation, it is necessary to contact ARROWS directly (office@arws.cz). We accept no liability for any damages arising from the independent use of the information in this article without prior individual legal consultation.
Read also:
- Integration of Satellite Intelligence and AI Under EU/NATO Oversight
- Data Management in Gambling: GDPR, AML and Cybersecurity Risks in 2026
- Lobbying in the Czech Republic Under the New Law: Who It Affects and What Has Changed
- How to outsource legal tasks in the Czech Republic without losing control over strategy
- How to Protect Yourself as a Company Executive in Czechia (Before It’s Too Late)