CEOs of more than 25 companies, including Meta, Spotify and Prada, have written an open letter to the European Union (EU), warning that overly stringent and inconsistent AI regulation could significantly limit the region’s innovation potential and deprive it of the expected economic benefits.
In the open letter, initiated by Meta, the company leaders state their position: “Europe has lost its competitive advantage and innovation potential compared to other regions. There is a real risk of falling further behind in the AI era due to inconsistent regulatory decisions.” The authors emphasize that AI could significantly increase the productivity of Europeans and stimulate economic growth, but warn that the EU may fail to realize this potential in full.
Signatories include representatives of leading companies from a range of sectors: Swedish telecommunications equipment manufacturer Ericsson, German software developer SAP, industrial conglomerate Thyssenkrupp and Italian luxury fashion house Prada. Researchers, civil society representatives and industry associations have also joined the initiative, showing how broadly the issue resonates within the European business community.
The letter’s key demand is the harmonization of existing EU rules and a modernized interpretation of the General Data Protection Regulation (GDPR). The companies insist on a clear and consistent regulatory framework that would allow European data to be used effectively to train AI models without compromising user privacy.
The European Commission responded to the appeal with a statement in support of AI innovation. The EU executive stressed that the incoming Justice Commissioner will be expected to strike a balance under the GDPR between enforcing the law and accommodating commercial interests. The statement can be read as a signal of readiness for dialogue and compromise. Still, European lawmakers insist that tough regulation is needed to counter monopolistic tendencies in the tech sector, combat disinformation and protect minors online.
The business community’s appeal follows recent announcements by Meta and Apple that they would delay the launch of new AI features in Europe. In June of this year, Apple said it would likely postpone the rollout of its Apple Intelligence system for European iPhone users, citing uncertainty caused by the new digital competition law. Meta followed in July, suspending the EU release of its multimodal AI model capable of processing text, images and speech, and likewise cited the unpredictability of the European regulatory environment. Meta had previously shelved plans to train AI models on public posts from adult Facebook and Instagram users in Europe after intervention by the Irish data protection regulator.
Over the past decade, the EU has strengthened its position as an influential global technology regulator. The GDPR, which came into force in 2018, has become an international benchmark for personal data protection. New EU legislation on digital competition, online content moderation and AI regulation has already forced leading tech companies to adapt their business models to the region’s requirements, albeit with difficulty.
Phased launches of new products and features outside the US are standard practice for tech companies. Google’s AI chatbot Bard (later renamed Gemini), for example, appeared in the EU a few months after launching in the US and UK; the delay was partly due to the Irish regulator’s demand for additional privacy protection features. Meta also launched its social platform Threads in the EU several months after its US debut.
Despite the regulatory challenges, the EU, with a population of about 450 million, remains one of the largest and wealthiest markets in the world, which explains why technology companies seek compromise with European regulators to keep access to this strategically important region. The open letter, however, warns of the risk that European organizations could be left with limited access to open AI models.
The letter’s central argument is that inconsistent application of the GDPR creates uncertainty about what types of data may be used to train AI models. “If companies and institutions plan to invest tens of billions of euros in the development of generative AI for European citizens, they need clear, consistently applied rules allowing the use of European data,” the appeal emphasizes.