By Henry George for TDPel Media.
German authorities are investigating the use of personal data by ChatGPT, a popular AI chatbot, and have demanded answers from its US maker OpenAI.
This follows similar moves by regulators in France and Spain, and a temporary ban by Italy last month.
Germany’s regional data protection authorities have compiled a questionnaire for OpenAI and expect a response by June 11.
They want to know whether a data protection impact assessment has been carried out, whether the data protection risks are under control, and whether people whose data is used by ChatGPT are sufficiently informed of their rights under EU law.
German regulators are particularly concerned about the processing of data relating to minors.
EU’s Central Data Regulator Forms Task Force
The European Union’s central data regulator has formed a task force to help countries harmonise their policies and address privacy concerns.
The task force will address concerns raised by regulators in Germany, Italy, France and Spain about ChatGPT’s use of personal data.
ChatGPT’s Capabilities and Controversies
ChatGPT is an AI chatbot that can generate essays, poems and conversations from the briefest of prompts.
It has proved itself capable of passing some tough exams.
However, it has been dogged by concerns that its abilities could lead to widespread cheating in schools, supercharge disinformation on the web, and replace human workers.
The chatbot can only function if it is trained on vast datasets, raising concerns about where OpenAI gets its data and how that information is handled.
The increasing scrutiny of ChatGPT’s use of personal data highlights the need for robust data protection measures when developing AI chatbots, especially given their capacity to collect and process vast amounts of personal data.
The formation of a task force by the EU’s central data regulator is a positive step towards addressing privacy concerns and harmonising policies across different countries.
As AI chatbots become more widespread, regulators and developers will need to work together to ensure these technologies are built in a way that respects people’s privacy and data protection rights.