The Irish Data Protection Commission (DPC) has launched an investigation into Google over its use of personal information in developing its Pathways Language Model 2 (PaLM 2) AI model. The regulator will examine whether the tech giant complied with the requirements of the General Data Protection Regulation (GDPR) when processing the personal data of citizens of the European Union (EU) and the European Economic Area (EEA).
The PaLM 2 model was released in May 2023 and preceded the Gemini family of AI models, which now underpin the American corporation's AI products. Gemini, launched in December of the same year, became the key model for generating text and images across Google services.
Under the GDPR, companies are required to conduct a data protection impact assessment before processing personal information, especially where the nature of the processing is likely to pose a high risk to the rights and freedoms of individuals. This requirement is particularly relevant for new technologies and is "crucial to ensure adequate consideration and protection of fundamental human rights and freedoms," the regulator said in a statement.
The DPC's investigation into Google is not the first time the European regulator has scrutinized tech giants' development of large language models. In June 2024, following consultations with the DPC, Meta suspended training of its AI model on publicly available content posted by Facebook and Instagram users in Europe. Meta subsequently limited the availability of some of its AI products to users in the region.
Another example of DPC intervention involved the platform X. In July, users of the platform discovered that their posts were being used to train AI systems from xAI, the startup founded by Elon Musk. In August, following legal proceedings brought by the DPC, the platform suspended the processing of European users' data for training its AI chatbot Grok. It was the first time the regulator had used its powers to take such action against a technology company.