US Senate Introduces DEFIANCE Act to Combat AI-Generated Nonconsensual Deepfakes

The US Senate proposed the DEFIANCE Act in response to the growing proliferation of AI-generated, non-consensual explicit images, illustrated by recent deepfake incidents involving Taylor Swift. The bill aims to give victims legal protection and recourse against those who produce and distribute such content.

The United States Senate is currently considering the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024, known as the DEFIANCE Act. This bipartisan bill was introduced in response to growing concern about non-consensual, sexually explicit “deepfake” images and videos, particularly those created with artificial intelligence (AI). The legislation was prompted in large part by recent incidents in which AI-generated explicit images of singer Taylor Swift spread rapidly across social media platforms.

The DEFIANCE Act aims to provide federal civil protection for victims who are identifiable in these “digital forgeries.” The bill defines the term as visual depictions created or altered using software, machine learning, AI, or other computer-generated means to appear falsely authentic. It would allow victims to pursue claims against those who create, possess, or distribute such explicit AI-generated content without consent, with a statute of limitations of ten years from the time the person depicted becomes aware of the images or turns 18.

The need for such a law is underscored by a 2019 study which found that 96% of deepfake videos online were non-consensual pornography, often used to exploit and harass women, especially celebrities, politicians, and other public figures. The widespread distribution of these deepfakes can have severe consequences for victims, including job loss, depression, and anxiety.

There is currently no federal law in the United States that specifically addresses non-consensual, digitally fabricated pornography modeled on real people, although some states, such as Texas and California, have passed their own legislation. Texas makes creating such content a criminal offense with potential jail time for offenders, while California allows victims to sue for damages.

The bill's introduction comes at a time when the issue of online sexual exploitation, particularly of minors, is receiving significant attention. The Senate Judiciary Committee, in a hearing titled “Big Tech and the Online Child Sexual Exploitation Crisis,” is examining the role of social media platforms in the spread of such content and the need for legislative action.

This legislative initiative highlights the growing concern about the misuse of AI technology to create deepfake content and the need for legal frameworks to protect people from such exploitation and harassment.
