As artificial intelligence (AI) becomes more prominent in the professional sphere, it is necessary to examine which rules its use should follow and whether that use conflicts with existing law. In a field evolving as rapidly as AI, regulation and legislation are changing almost constantly, both because of the pace of development and because no comprehensive legal framework for artificial intelligence has yet been enacted. How are we dealing with this challenge, and how are others around the world dealing with it?
Users in the Czech Republic will be most affected by the AI Act (1), a European Union regulation that divides AI systems into categories, each subject to different regulatory rules. Some AI systems will be banned outright. This category includes, for example, emotion recognition systems, and is also set to include advanced algorithms for real-time facial recognition.
AI-generated content will also be significantly affected by regulation. Systems that generate images or text (such as ChatGPT) should be required to make it clear that their output was created with the help of AI. This is intended to prevent misuse in the form of disinformation, and will also affect the copyright status of AI-generated content.
The regulation of artificial intelligence is now a hotly debated topic in the United States as well. Congress invited Sam Altman (2), CEO of OpenAI (the creator of ChatGPT), to testify at a hearing on the subject. Altman agreed that AI needs strong regulation, and he himself suggested that OpenAI and the government work closely together to achieve the best possible outcome.
Tim Cook (3), Apple's CEO, commented that companies should create their own teams to handle self-regulation. Since AI is evolving very quickly, Cook believes this could be a way to adopt appropriate rules fast enough to keep pace.
AI's liability for its output is also being examined in light of the Communications Decency Act (4), in force in the US since 1996. Section 230 of that law states that no provider or user of an interactive computer service shall be treated as the publisher or speaker of information provided by another information content provider. The problem is that language models such as ChatGPT learn precisely by exploring and drawing on content from other sources.
Unlike the European Union and the United States, the UK has decided to go its own way. It plans to address the situation through the ICO (5,6), the UK Information Commissioner's Office, an independent body that operates separately from the government and ensures that information rights are upheld. In future, the ICO should also be responsible for the introduction, promotion and oversight of AI in the relevant sectors.
Another big difference between the EU act and the UK approach is that, unlike the EU, the UK plans to regulate only two types of systems:
Five principles are currently planned to cover both sectors:
Over the next 12 months, the government is expected to brief companies on the future shape of regulation and how to apply it in practice. The ICO accepted the government's proposal with one reservation: it objects to the treatment of Article 22 of the UK GDPR, which imposes a duty to provide a justification where a decision has significant legal or personal effects on an individual, whereas the regulatory proposal merely suggests considering whether to provide one.
The government is inviting sector participants to comment on the proposal by 21 June 2023, and intends to publish a plan for the coming years after the consultation.
Elon Musk (7) met with top Chinese officials in Beijing at the end of May this year. After the visit, he mentioned that China has its own plan to regulate AI in the future. Indeed, the first draft of a regulatory law is already available.
The draft currently consists of twenty articles (8), the most important of which are briefly summarized below. It may come as no surprise that one of the very first articles obliges providers to restrict any content that in any way opposes socialism in China. The same article also requires fairness, prohibits disadvantaging individuals, and imposes an obligation to provide accurate information.
The draft further addresses responsibility for AI outputs, for the training data, and for how that data is curated. Everything must comply with the cybersecurity law of the People's Republic of China and with other laws and regulations; otherwise, the provider may be fined.
Another important point of the proposal is that providers should be required to share data on who uses their software, when, and how. They should also bear responsibility for preventing dependence on and overuse of artificial intelligence, and act responsibly with regard to data protection. Furthermore, they must not unlawfully store information that could be used to identify a user.
As elsewhere, providers should ensure that content created by artificial intelligence is properly labelled so that it cannot be confused with content created by a natural person. If a Chinese citizen finds unlabelled AI-generated content, they have the right to report it, and the user who misused the system will have their access to the AI either restricted or blocked entirely.
Petr Prucek contributed to this article.