Legislative Actions Addressing AI and Human Rights Concerns

Published: December 04, 2025
Category: Special Requests
Word Count: 396 words
Voice: libby

Full Transcript

On March 11, 2024, Rep. Celeste Maloy, a Republican from Utah, and Rep. Jake Auchincloss, a Democrat from Massachusetts, introduced the Deepfake Liability Act, a bipartisan bill aimed at holding social media platforms accountable for nonconsensual AI-generated sexual images and for cyberstalking.

This legislation seeks to amend Section 230 of the Communications Decency Act, which currently provides legal immunity to online platforms for user-generated content. The new bill would require platforms to take proactive measures, including preventing cyberstalking and removing abusive deepfakes upon receiving reports from victims.

Maloy emphasized the importance of this legislation, stating that victims deserve real help and that companies must actively work to protect users. The bill proposes that companies must fulfill a duty of care to maintain their immunity, mandating them to investigate credible complaints and remove privacy-violating content identified by victims.

Notably, AI-generated content would no longer receive automatic immunity under Section 230, addressing the growing concern over the misuse of generative AI tools to create harmful content. Auchincloss criticized the current protections, arguing that AI should not enjoy privileges that journalists do not have.

The Deepfake Liability Act builds on the Take It Down Act, passed in 2025, which made it a federal crime to publish intimate images without consent, including AI-generated deepfakes. The Take It Down Act received bipartisan support and was signed into law by President Donald Trump on May 19, 2025; it requires platforms to remove such material within 48 hours of a report from a victim.

The new Deepfake Liability Act expands upon this framework by tying Section 230 protections to whether platforms actively work to mitigate abuse. Supporters of the new legislation, including Danielle Keats Citron from the Cyber Civil Rights Initiative, argue that it closes a significant loophole by making platforms responsible not only for content they create but also for harmful content they solicit.

This legislative push reflects broader societal concern about AI's impact on personal privacy and the law, alongside ongoing debate over stricter rules for how tech companies handle online abuse and protect children.

Critics, including the Electronic Frontier Foundation, have raised concerns about the potential for overreach and the risk that platforms will remove lawful content to avoid liability. The Deepfake Liability Act marks a significant shift in the legislative landscape regarding AI and human rights, emphasizing the need for accountability in digital spaces.
