The Italian data protection authority, Garante, has raised concerns about potential GDPR violations by OpenAI’s ChatGPT, giving OpenAI 30 days to respond to the allegations.
The Italian data protection authority, Garante, has raised concerns about potential violations of the European Union’s General Data Protection Regulation (GDPR) by OpenAI’s ChatGPT. The move follows a months-long investigation that culminated in a formal notice to OpenAI over suspected breaches of EU privacy rules. OpenAI was given 30 days to respond and file a defense against the allegations.
Italian authorities had previously ordered a temporary ban on ChatGPT’s local data processing in Italy, citing issues such as the lack of an appropriate legal basis for collecting and processing personal data to train ChatGPT’s algorithms. The regulator also flagged concerns about the safety of children and the AI tool’s tendency to produce inaccurate information. OpenAI has since addressed some of these issues, but it now faces preliminary findings that its operations may breach EU law. The central question is what legal basis OpenAI has for processing personal data to train its AI models, given that ChatGPT was developed using data mined from the public internet.
OpenAI originally claimed “contract performance” as the legal basis for training ChatGPT’s models, but the Italian authority disputed this. That leaves only two potential legal grounds: consent or legitimate interests. Obtaining consent from every individual whose data has been processed seems impractical, leaving legitimate interests as the primary remaining basis. However, this basis requires OpenAI to allow data subjects to object to processing, which poses challenges for the continued operation of an AI chatbot.
In response to growing regulatory risks in the EU, OpenAI is establishing a physical base in Ireland, with the aim of having oversight of its GDPR compliance led by Ireland’s Data Protection Commission. The move is part of a wider effort to address data protection concerns across the EU. In addition to the Italian investigation, OpenAI is also under scrutiny in Poland following a complaint about inaccurate information produced by ChatGPT and OpenAI’s response to the complainant.
The outcome of this investigation is likely to have significant implications not only for ChatGPT, but also for the wider landscape of AI applications and their adherence to EU data protection standards. As the situation unfolds, it highlights the challenges and complexities innovative technologies such as AI chatbots face in navigating Europe’s strict data protection regulations.