Take It Down Act 2025 (USA)

From the IFTAS Trust & Safety Library - supporting volunteer moderators in the Fediverse

Disclaimer: The information on this page is for general informational purposes only and is not legal advice. It may not reflect the most current legal developments and should not be relied upon without seeking legal counsel. IFTAS and its members do not endorse or take responsibility for the content of any third-party links provided. Use of this page does not create a solicitor-client relationship.

Summary #

The TAKE IT DOWN Act (S.146, 119th Congress) is a US federal law that criminalises the non-consensual distribution of intimate images (NCII), including AI-generated or digitally altered content. It requires covered platforms to remove such material within 48 hours of a valid report and take reasonable steps to prevent re-uploads. The law applies to both real and synthetic media and is enforced by the Federal Trade Commission as an unfair or deceptive practice. While intended to protect victims of image-based abuse, it presents complex challenges for decentralised platforms due to federated content distribution, limited resources, and unclear jurisdictional boundaries.

What does the Act require? #

Covered platforms must remove reported NCII content within 48 hours of a valid user report, and must take reasonable steps to prevent re-uploads of the same content. Failure to comply is treated as an unfair or deceptive act under US law, enforced by the Federal Trade Commission. Platforms have a one-year grace period (until May 2026) to implement any tooling needed to comply with the Act.
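The Act does not specify how "reasonable steps" against re-uploads should be implemented; in practice, platforms commonly match incoming media against hashes of previously removed files. The Python sketch below is a minimal illustration under that assumption, not a compliance recipe: the function names are hypothetical, and exact cryptographic hashing only catches byte-identical re-uploads (industry tools such as StopNCII rely on perceptual hashing to also catch re-encoded or resized copies).

```python
import hashlib

# Hypothetical store of hashes for media already removed as NCII.
# A real deployment would persist this (database, shared hash list).
removed_hashes: set[str] = set()

def media_fingerprint(data: bytes) -> str:
    """SHA-256 of the raw bytes: catches exact re-uploads only.
    Re-encoded or resized copies need perceptual hashing (e.g. PDQ)."""
    return hashlib.sha256(data).hexdigest()

def record_removal(data: bytes) -> None:
    """Call when media is taken down, so the same file cannot return."""
    removed_hashes.add(media_fingerprint(data))

def is_known_removed(data: bytes) -> bool:
    """Check an incoming upload against previously removed media."""
    return media_fingerprint(data) in removed_hashes
```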

What is a “Covered Platform”? #

A “covered platform” under the TAKE IT DOWN Act includes any online service that allows users to share or distribute user-generated content, particularly where such content may include intimate images or videos. This includes social media platforms, messaging services with public sharing functions (such as Discord or Telegram), media hosting sites (such as YouTube or Reddit), and federated or decentralised services.

The Act does not set a size threshold: it applies to any platform that allows content to be uploaded and shared with others, regardless of size or commercial status.

Even small, volunteer-run instances will be expected to comply if they are based in or hosted in the US, host US user accounts, or federate content that reaches US audiences.

The Act does not limit applicability to commercial companies. Non-commercial and community-run platforms are not explicitly exempted.

Implications for Moderators and Admins #

Before Notification: If NCII is posted by a user and no one is aware of its nature, admins are not considered to have “knowingly” published it. If NCII is federated into your service, you are not knowingly publishing it until a report has been made.

After Notification: Once a report is made by a victim or representative (e.g., via STOPNCII.org or internal flagging tools), the platform or instance may be deemed to “knowingly” host the content if no action is taken within the mandated 48-hour period. Failure to act promptly after being put on notice is a legal risk and could result in liability under the Act.
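Because the 48-hour clock starts when a report is received, tracking each report's deadline explicitly can help moderators prioritise. A minimal sketch, assuming a simple in-memory queue; the field names and structure here are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

DEADLINE = timedelta(hours=48)  # removal window set by the Act

@dataclass
class NCIIReport:
    report_id: str
    content_url: str
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def time_remaining(self) -> timedelta:
        """How long is left before the 48-hour window closes."""
        return (self.received_at + DEADLINE) - datetime.now(timezone.utc)

    def is_overdue(self) -> bool:
        return self.time_remaining() <= timedelta(0)

def triage(reports: list[NCIIReport]) -> list[NCIIReport]:
    """Surface the most urgent reports first in a moderation queue."""
    return sorted(reports, key=lambda r: r.time_remaining())
```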

Administrators of US-based instances may face civil enforcement actions from the FTC, reputational harm for failing to act on abuse reports, and legal exposure if they knowingly host or redistribute illegal content. Even non-US instances could face deplatforming or intermediary pressure if their hosting providers, upstream services, or payment processors are US-based.

False Positives and False Reporting #

False reporting refers to actions where a user intentionally or mistakenly reports content as non-consensual intimate imagery when it is not.

False positives occur when moderators erroneously identify content as NCII, leading to unjustified removal or censorship.

Key Concerns for Moderators and Service Providers #

Weaponisation of Reporting Tools: Bad actors may deliberately misuse NCII reporting systems to silence marginalised users, journalists, or political dissidents. They may try to suppress artistic, educational, or LGBTQ+ content under false pretences, or trigger takedowns of consensual or non-sexual content by misrepresenting intent.

Such abuse disproportionately affects users with limited visibility or social capital to appeal unjust actions.

Lack of Verification Standards: The Act does not prescribe a robust process for verifying whether the depicted person is the complainant, or whether the content is, in fact, intimate or sexual. In decentralised environments, instance admins may lack legal training or verification tools, increasing the risk of mistakes or inconsistent handling.

Due Process and Transparency: The Act lacks clear mandates for notifying users when content is removed, or for providing appeals mechanisms for those who believe their content was wrongly taken down. For decentralised services, this raises ethical and operational concerns, especially when content is removed across multiple federated instances without explanation.

IFTAS Recommendations #

If you haven’t already, add rules to your Community Guide or ToS explicitly disallowing NCII and deepfakes (follow the links for example rules). Define what qualifies as NCII under your community guidelines, distinguishing it from consensual content, satire, and protest material.

Be sure to check your incoming user reports in a timely fashion, and ensure your moderators have access to clear policy and process. Remember that handling some types of reports can cause secondary trauma to you and your moderators; resources are available for wellness and resilience.

Treat all NCII reports seriously, even if they seem unclear or incomplete. Where feasible, provide users with a way to appeal takedown decisions. Document and re-evaluate flagged cases periodically.

Be alert to coordinated reporting campaigns targeting women, LGBTQ+ users, sex workers, and other marginalised communities.

Consider reviewing the domains listed on the IFTAS Do Not Interact List, and monitor the Domain Observatory for domains observed to be sources of NCII. Domains that knowingly publish NCII will be listed on the SW-ISAC account; follow this account to be notified when new domains are observed to be knowingly publishing NCII or deepfake content.

Lobby the developers of the platforms you use to incorporate tooling that allows admins and moderators to ban, reject, auto-report, or flag for review posts that contain links to known publishers of illegal content; a sketch of what such tooling might look like follows.
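Until such tooling exists natively, admins can approximate it externally. The sketch below is a hypothetical illustration: it scans post text for links whose domain matches a blocklist. The blocklist contents, matching rules, and function names are assumptions for illustration, not the actual IFTAS list format.

```python
import re
from urllib.parse import urlparse

# Illustrative blocklist; in practice this would be loaded from a
# maintained source such as the IFTAS Do Not Interact list.
BLOCKED_DOMAINS = {"example-ncii-host.invalid"}

URL_RE = re.compile(r"https?://[^\s<>\"']+")

def blocked_links(post_text: str) -> list[str]:
    """Return any links in a post whose host (or a subdomain of it)
    is on the blocklist, so the post can be held for review."""
    hits = []
    for url in URL_RE.findall(post_text):
        host = (urlparse(url).hostname or "").lower()
        if any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS):
            hits.append(url)
    return hits
```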

Collaborate with peer admins and moderators in IFTAS Connect to share good practices, block malicious actors, and reduce the risk of bad-faith reporting spreading harm.

This page was last updated on 2025-05-20

IFTAS Community Library is licensed under CC BY-NC-SA 4.0, unless otherwise noted.