Bipartisan Efforts to Address Nonconsensual AI Pornography
U.S. lawmakers from both sides of the aisle are taking a firm stand against the proliferation of nonconsensual AI-generated explicit content, commonly referred to as "deepfakes." Recent incidents involving public figures have magnified the issue, prompting Congress to introduce several pieces of legislation to curtail this digital abuse.
Explicit AI-generated content depicting singer Taylor Swift drew considerable attention earlier this year. The incident sparked an outcry and prompted urgent calls for stricter regulation. Both high-profile figures and members of the general public, including minors in educational settings, have fallen prey to this form of cyberbullying.
Legislative Measures to Counteract Deepfake Porn
One prominent bill is the Defiance Act, proposed in March. It would establish a federal civil cause of action, empowering victims to sue those who create, distribute, or solicit deepfake content. Another proposal, the Take It Down Act, introduced the previous month, seeks to create a federal criminal offense for disseminating, or threatening to publish, such synthetic media. It would also establish a mechanism for victims to request the removal of nonconsensual deepfakes from online platforms.
Rep. Alexandria Ocasio-Cortez (D-NY) and Sen. Dick Durbin (D-IL), champions of the Defiance Act, have joined forces with Sens. Ted Cruz (R-TX) and Amy Klobuchar (D-MN), backers of the Take It Down Act, reflecting genuinely bipartisan support.
Challenges in the Fight Against AI-Derived Exploitation
Despite bipartisan backing, several obstacles could thwart the passage of these bills, particularly as the nation approaches a volatile electoral season. Previous attempts to advance legislation have met resistance over concerns about privacy, technological innovation, and victim protection. For instance, the Defiance Act was blocked in June when Sen. Cynthia Lummis (R-WY) objected to a unanimous consent request; despite supporting the bill's aims, she argued that more precise language was needed to safeguard civil liberties.
Advancing Victim Rights and Platform Accountability
The debate over the nonconsensual distribution of explicit deepfakes inevitably brings Section 230 into focus. This federal law has come under scrutiny for shielding online platforms from liability for user-posted content. According to victims' rights attorney Carrie Goldberg, a federal law addressing this provision may be critical to effectively combating deepfake distribution.
Lawmakers have themselves been targets of nonconsensual deepfakes, which has prompted a more personal investment in the issue. Victims' rights advocate Anna Olivarius emphasized the need for legal consequences, pointing to the serious harms deepfakes inflict, including damage to victims' mental health and safety.
As the conversation around deepfake regulation continues, additional efforts, such as a bill introduced by Sens. Maggie Hassan (D-NH) and John Cornyn (R-TX), have contributed to a multifaceted approach. That legislation would criminalize sharing deepfakes without consent and permit victims to sue violators.
Congress faces pressure from legal experts to enact effective strategies against nonconsensual explicit deepfakes. Given the gravity of this digital threat, which measures should be prioritized to protect individuals' rights and curb the spread of maliciously altered media? Your views and experiences can spur further discussion on this critical matter.