European Commission Takes Aim at AI-Generated Disinformation Ahead of Elections

The European Commission is requiring major technology platforms to detect AI-generated content in order to protect European elections from disinformation, underscoring its robust approach to maintaining democratic integrity.

In a proactive move to protect the integrity of the upcoming European elections, the European Commission has mandated tech giants like TikTok, X (formerly Twitter) and Facebook to step up their efforts to detect AI-generated content. This initiative is part of a broader strategy to fight disinformation and protect democratic processes from the potential threats posed by generative artificial intelligence and deepfakes.

Mitigation measures and public consultation

The Commission has produced draft election security guidelines under the Digital Services Act (DSA) which highlight the importance of clear and consistent labeling of AI-generated content that may substantially resemble or misrepresent real persons, objects, places, entities or events. These guidelines also highlight the need for platforms to provide users with tools to label AI-generated content, increasing transparency and accountability in digital spaces.

A public consultation period is underway, allowing stakeholders to provide feedback on these draft guidelines until 7 March. The focus is on implementing “reasonable, proportionate and effective” mitigation measures to prevent the creation and spread of AI-generated disinformation. Key recommendations include watermarking AI-generated content for easy recognition and ensuring that platforms adapt their content moderation systems to effectively detect and manage such content.

Emphasis on transparency and consumer empowerment

The proposed guidelines advocate transparency, calling on platforms to disclose the sources of information used in generating AI content. This approach aims to enable users to distinguish between authentic and misleading content. In addition, tech giants are encouraged to integrate safeguards to prevent the generation of false content that could influence user behavior, especially in the context of elections.

The EU legislative framework and industry response

These guidelines are inspired by the recently approved EU AI Act and the non-binding AI Pact, underscoring the EU’s commitment to regulating the use of generative AI tools such as OpenAI’s ChatGPT. Meta, the parent company of Facebook and Instagram, responded by announcing its intention to label AI-generated posts, in line with EU pressure for greater transparency and consumer protection against fake news.

The role of the Digital Services Act

The DSA plays a critical role in this initiative, applying to a wide range of digital businesses and imposing additional obligations on Very Large Online Platforms (VLOPs) to mitigate systemic risks in areas such as democratic processes. The DSA regulations aim to ensure that AI-generated information relies on reliable sources, particularly in the context of elections, and that platforms take proactive measures to limit the effects of AI-generated “hallucinations”.

As the European Commission prepares for the elections in June, these guidelines mark a significant step towards ensuring that the online ecosystem remains a space for fair and informed democratic engagement. By addressing the challenges posed by AI-generated content, the EU aims to strengthen its electoral processes against disinformation, maintaining the integrity and security of its democratic institutions.

