
Artificial intelligence (AI) is becoming a driving force for innovation across many industries, and the pharmaceutical industry is no exception. From developing new drugs to optimizing clinical trials, AI offers revolutionary opportunities. But with these opportunities come complex legal challenges. As a potential client of a law firm, you may be asking yourself: how do we navigate this rapidly evolving field and minimize the risks?
Author of the article: ARROWS (JUDr. Jakub Dohnal, Ph.D., LL.M., office@arws.cz, +420 245 007 740)
Imagine being able to reduce years of drug research and development to mere months. AI can analyse vast amounts of data, identify promising molecules, predict their effects and optimise manufacturing processes with unprecedented efficiency. This not only accelerates the time to market for innovative drugs, but also reduces costs and makes medicine accessible to a wider range of patients. For pharmaceutical companies this means a competitive advantage; for patients, the hope of previously unavailable therapies.
AI has the potential to transform every step of a drug's journey, from early research and development through clinical trials to manufacturing.
Although the benefits of AI are undeniable, its advent raises questions to which current legislation often has no clear answers. This is where the biggest challenges for your business arise.
1. Liability for a defective AI-developed drug: who will bear the consequences?
Imagine a situation where an AI designs a drug that turns out to be harmful after it is launched. Who is responsible in that case? Is it the company that developed the AI? The pharmaceutical company that produced and marketed the drug? Or the programmer who wrote the AI system? This question is fundamental, and its answer is key to trust in AI in medicine.
Czech and European product liability legislation is complex. It typically applies strict liability to the manufacturer, meaning that the manufacturer's fault does not have to be proven. With AI, however, the question arises as to who the real "manufacturer" is when a system makes decisions autonomously. If an AI system is shown to have made a "mistake", it must be assessed whether the error lies in the design, in the data on which the AI was trained, or in the algorithm itself. Is AI a tool or a separate actor? And what if the error results from a complex interaction between multiple AI systems, or between AI and humans?
Risks and penalties: Getting liability wrong can lead to huge financial penalties that can paralyse even large companies. There is also the risk of damage to a company's reputation, which takes years to build and can be destroyed in a matter of days. In extreme cases, the individuals responsible for bringing a defective product to market may even face criminal liability. It is crucial to be clear about the risks you are taking and how to defend against them effectively.
ARROWS lawyers deal with product liability issues, including those involving AI components, on a daily basis and are ready to help you navigate these complex issues. Our experience allows us to identify weaknesses in contractual arrangements and propose solutions that minimize your risks.
2. Intellectual property: who owns the AI creation?
AI systems are capable of generating new molecules, algorithms, procedures, and even entire therapeutic plans. Who then owns the intellectual property in these creations? Can an AI be named as the inventor on a patent?
Current patent law is built around the concept of a human inventor. An "invention" or "work" created by AI is a novelty that existing rules were not designed for. Most jurisdictions currently recognise only a natural person as an author or inventor. This means that if AI creates something unique without significant human intervention, legal protection is uncertain. Without clear ownership of the intellectual property, you risk losing your R&D investment and facing legal disputes with competitors who might freely exploit your innovation.
Risks and penalties: IP uncertainty can lead to arbitration, litigation and the loss of valuable innovation if your rights are not properly protected. Imagine that AI discovers a breakthrough molecule, but you cannot patent it because it is unclear who the "inventor" is. You would lose both exclusivity and a massive investment.
ARROWS lawyers can advise you on how to ensure that your intellectual property is protected even as the boundaries between human and machine creation are blurring. We help clients create robust contractual frameworks with AI developers that clearly define ownership of IP rights to AI outputs, while monitoring developments in the field to find the best possible solutions for you.
3. Privacy and cybersecurity: sensitive data in the hands of AI
Pharmaceutical research and development often works with extremely sensitive personal patient data, including medical history, genetic information, treatment results and biometric data. AI systems analyse this data on a large scale, which carries huge risks in terms of data protection (GDPR) and cybersecurity.
The GDPR imposes strict requirements on the processing of personal data, especially sensitive data. This includes the principles of data minimisation, purpose limitation, accuracy, transparency and accountability. The use of AI in medicine often involves the processing of huge data sets, increasing the risk of data leakage, misuse or unauthorised access.
Risks and fines: Breaches of the GDPR can lead to astronomical fines - up to €20 million or 4% of total worldwide annual turnover, whichever is higher. In addition, there is the risk of reputational damage and loss of trust among patients and partners, which can have long-term negative effects on your business. Securing data against cyber attacks is absolutely key; any data leak or cyber incident can have devastating consequences. Imagine the sensitive medical records of thousands of patients being leaked to the public. That is every company's nightmare and a threat that must be taken very seriously.
ARROWS lawyers are experts in data privacy and cybersecurity issues and will help you set up robust systems and processes to avoid these risks. We routinely handle GDPR compliance issues in the pharmaceutical industry and AI implementation, including the preparation of Data Protection Impact Assessments (DPIAs), which are often necessary for AI systems handling sensitive data.
4. Ethical aspects and discrimination: justice in an algorithmic world
In addition to legal issues, AI in the pharmaceutical industry brings a number of profound ethical dilemmas that need to be actively addressed.
AI algorithms learn from data. If that data is skewed, incomplete or unrepresentative, AI can reproduce or even exacerbate the discrimination and biases it contains. For example, if AI learns from data that does not adequately represent different ethnic groups, ages or genders, AI-designed treatments may be less effective for some populations or lead to misdiagnoses. This raises serious ethical questions and may expose you to legal challenges on the grounds of discrimination.
Bias in AI models is a major concern, especially in healthcare, as systems trained on biased or unrepresentative data can produce skewed results, leading to inequities in diagnosis, treatment, and drug discovery.
If an AI model is built or trained on biased data, these biases can manifest in its outputs and exacerbate existing inequalities. To mitigate these risks, AI models must be trained on diverse and representative datasets.
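For illustration only, the sketch below shows one very simple form such a representativeness check might take. It is written in Python and assumes a hypothetical training dataset held in a pandas DataFrame with an "ethnicity" column, plus reference population shares supplied by the study team; real projects would rely on far more thorough bias audits.

```python
# Minimal, illustrative sketch of a dataset representativeness check.
# The column name, reference shares and tolerance are assumptions for this example.
import pandas as pd

def representation_gaps(df: pd.DataFrame, column: str,
                        reference_shares: dict[str, float],
                        tolerance: float = 0.05) -> dict[str, float]:
    """Return groups whose share in the dataset deviates from the reference
    population share by more than the given tolerance."""
    observed = df[column].value_counts(normalize=True)
    gaps = {}
    for group, expected in reference_shares.items():
        diff = float(observed.get(group, 0.0)) - expected
        if abs(diff) > tolerance:
            gaps[group] = diff  # positive = over-represented, negative = under-represented
    return gaps

# Example with made-up numbers: "A" is flagged as over-represented,
# "B" and "C" as under-represented.
trial_data = pd.DataFrame({"ethnicity": ["A"] * 800 + ["B"] * 150 + ["C"] * 50})
print(representation_gaps(trial_data, "ethnicity",
                          {"A": 0.60, "B": 0.25, "C": 0.15}))
```

Such checks do not by themselves prevent discrimination, but documenting them helps demonstrate that bias was considered and addressed.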
It is also important to ensure that AI decision-making processes are transparent and explainable. One of the main difficulties is the "black box" problem: AI systems, and deep learning models in particular, reach their conclusions through decision-making processes that are opaque and difficult for humans to understand. This lack of transparency is particularly problematic in medicine, because clinicians need to understand how AI systems arrive at their recommendations in order to ensure patient safety.
The concept of "black box" AI, where it is not clear how the system arrived at a particular decision, is unacceptable in medicine. What if an AI "decides" that a certain patient will not receive a specific treatment? How can we ensure that this decision is fair, ethical and justifiable? Patients and doctors need to trust that AI decisions are objective and based on the best available information.
Human oversight, or the human-in-the-loop model, is key to mitigating risks and liability issues. The EU AI Act requires that high-risk AI systems be subject to appropriate human oversight. This means that humans must have the ability and authority to intervene in and override AI decisions, and that this oversight must be meaningful, not merely symbolic. In the pharmaceutical industry, AI must complement, not replace, clinical and regulatory judgment.
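As a purely illustrative sketch of what "meaningful, not merely symbolic" oversight can look like in software terms, the Python example below (all names are hypothetical) never acts on an AI recommendation until a named, qualified reviewer has recorded an explicit, documented decision:

```python
# Illustrative human-in-the-loop gate; the structure and field names are
# assumptions for this example, not a prescribed or legally required design.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIRecommendation:
    candidate_id: str      # internal identifier of, e.g., a drug candidate
    recommendation: str    # the model's suggested action
    model_version: str     # which model produced it, kept for traceability

@dataclass
class HumanReview:
    reviewer: str          # qualified clinician or regulatory expert
    approved: bool
    rationale: str         # documented reasoning is what makes oversight meaningful
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def release_decision(rec: AIRecommendation, review: Optional[HumanReview]) -> str:
    """The AI output alone never triggers action; only a documented human decision does."""
    if review is None:
        return f"{rec.candidate_id}: pending human review (model {rec.model_version})"
    if not review.rationale.strip():
        return f"{rec.candidate_id}: returned - the review lacks a documented rationale"
    verdict = "approved" if review.approved else "overridden by the reviewer"
    return f"{rec.candidate_id}: {verdict} by {review.reviewer} on {review.reviewed_at:%Y-%m-%d}"
```

The point of such a gate is evidential as much as technical: it records who exercised oversight, when, and why, which is exactly what a regulator or court will ask about.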
Risks and penalties: Discrimination is not only ethically unacceptable but also punishable under anti-discrimination laws. It can lead to lawsuits, huge fines and irreparable reputational damage. It is essential to ensure that AI systems are designed and tested to minimise the risk of discrimination and to comply with ethical standards. This includes careful selection and validation of training data, transparent algorithms, and mechanisms for human oversight and control.
At ARROWS, we focus on the ethical aspects of AI and help clients avoid the pitfalls associated with algorithmic discrimination, including through ethical audits of AI systems.
5. Regulation and approval processes: new rules for new technologies
Existing regulatory frameworks for the approval of medicines and medical devices were designed for traditional pharmaceutical research and development. The advent of AI requires them to be updated and adapted. How can we ensure that medicines developed or tested with AI are safe and effective? How do we verify and validate algorithms that can learn and change their behaviour over time? Existing clinical trial and validation procedures may not be sufficient for AI.
Under the Artificial Intelligence Act (AI Act), which comprehensively regulates AI systems according to their level of risk, AI systems used in medicine and drug development are likely to fall into the "high-risk" category. That means stricter requirements for transparency, human oversight, accuracy, cybersecurity and pre-market conformity assessment. Meeting these requirements will demand significant effort and investment.
Risks and penalties: Failure to comply with the new regulatory requirements can lead to denial of drug approval (which can mean the loss of hundreds of millions to billions in development investment), product recalls and massive fines. It is essential to prepare for these changes and proactively adapt your research, development and approval processes.
With ARROWS, you will always be up to date on the latest regulatory changes and able to respond in a timely manner. Our lawyers continuously monitor the development of AI regulation and its impact on the pharmaceutical industry at both EU and national level. We can help you interpret the new rules and prepare for certification and approval processes.
Although the legal landscape of AI in pharmaceuticals is still taking shape, one thing is clear: a passive approach does not pay. Proactive legal advice will allow you not only to minimize the risks but also to take full advantage of the potential that AI in medicine offers.
What should you consider and how can ARROWS lawyers help you?
Artificial intelligence in pharmaceuticals is not a distant vision of the future; it is today's dynamic reality. Those who understand and master the legal challenges it brings early on will gain a significant competitive advantage: they will be able to innovate faster, more efficiently and with fewer risks. Those who hesitate risk losing market share and facing legal difficulties that could threaten their very existence.
At ARROWS, we understand that you may be feeling uncertain in this rapidly changing area. Our mission is to be your trusted partner to help you navigate the complexities of AI regulation. We are ready to support you at all stages - from strategic planning and process setup, to resolving specific legal issues, to representing you in potential litigation. We are your insurance policy for innovation.
Don't wait for problems to appear; stay one step ahead. Investing in legal advice on AI will pay for itself many times over by minimizing risks and maximizing the potential that AI offers in medicine.
Don't want to deal with this on your own? More than 2,000 clients trust us, and we have been named Law Firm of the Year 2024. Take a look at our references HERE.