Chatbots and Personal Data Protection – a Recent Case from Italy

May 05, 2023

After the Italian data protection authority temporarily banned the viral ChatGPT application over unlawful personal data processing by OpenAI, the start-up that developed the platform, the issue of personal data protection in the context of artificial intelligence came to the fore.

What are chatbots?

Chatbots are software for online communication and interaction with users that operates without human involvement.

This advanced technology relies on fully automated processes, i.e., it communicates with customers in writing or speech, or even through other applications installed on mobile phones and other devices. Chatbots therefore act as virtual assistants through which users can obtain information and services.

Case from Italy

In March this year, the Italian data protection authority temporarily blocked the use of the artificial intelligence software ChatGPT in Italy due to personal data processing contrary to the rules and principles in this field, primarily the lack of a valid legal basis for the mass collection and storage of personal data subsequently used for “training algorithms that are the essence of this platform”, as well as the absence of an age-verification mechanism, as a result of which the application “exposed minors to absolutely inappropriate answers compared to the level of their development and conscience”.

The Italian regulator therefore ordered OpenAI, as the controller, to immediately suspend the processing in question with respect to users in Italy and to submit, within 20 days, a report on the measures it would take to bring its operations into line with the GDPR, failing which it could face a multi-million euro fine.

The application has been available to users in Italy again since May, as OpenAI introduced additional options and controls in the meantime: it enabled users to request deletion of their personal data in accordance with the GDPR, introduced an age-verification tool and made publicly available an explanation of how the platform processes personal data.

The application attracted millions of users in a short period of time, as it can generate human-like text and has a wide range of uses: it can answer questions, write essays, translate texts, write code, etc.

Artificial intelligence and personal data protection

As this case shows, the use of chatbots and similar technologies entails significant risks with regard to personal data protection.

Under the definition in the GDPR, as well as in the domestic Law on Personal Data Protection, personal data covers not only direct identifiers of a person (such as name, surname or e-mail address) but also any other data relating to a natural person on the basis of which their identity can be determined, even indirectly. In other words, personal data includes all information that, in combination, can be used to establish a person’s identity, and that very principle of combining information lies at the core of artificial intelligence.

The question is therefore to what extent these essentially opposing interests can be balanced, i.e., how the functioning of applications built on this and similar principles can be reconciled with the principles and other conditions of lawful personal data processing, primarily lawfulness, fairness and transparency, as well as integrity and confidentiality of processing.

This article is for information purposes only and is not intended to provide legal advice. Should you need additional information, please contact us directly.