Synthetic Media

From the IFTAS Moderator Library, supporting Fediverse trust & safety

Updated on 2024-03-27

Definition

Content that has been generated or manipulated to appear as though it is based on reality, when it is in fact artificial. Also referred to as manipulated media. Synthetic media is sometimes, but not always, generated through algorithmic processes such as artificial intelligence or machine learning. A deepfake is a form of synthetic media in which an image or recording is altered to misrepresent someone as doing or saying something that was not done or said.

Background

One common application of synthetic media is the deepfake: a hyper-realistic video or audio recording that appears to show a real person saying or doing things they never actually did. Deepfakes are typically created by using deep learning to replace one person’s likeness with another’s. They raise significant ethical concerns, especially regarding consent, disinformation, and defamation.

Ethical concerns regarding the use of generative AI are hotly debated. When AI systems are trained on data acquired without the consent of the individuals who created, or are depicted in, the media, the training can infringe on their privacy rights. This is particularly concerning when personal or sensitive information is involved. Using copyrighted materials to train AI without permission from the copyright holders potentially violates intellectual property laws and disrespects the creators’ rights and efforts.

If the training data includes biased or unrepresentative samples, the AI’s outputs will likely perpetuate these biases, leading to unfair or discriminatory outcomes.

Using someone’s data or likeness without their consent undermines their autonomy and control over their own digital identity. It can lead to situations where individuals feel misrepresented or exposed without their approval.

By using content without consent, AI developers may unjustly profit from others’ work, depriving creators of fair compensation and income.

Moderation Challenges

As synthetic media technology becomes more sophisticated, it becomes increasingly difficult to distinguish authentic from synthetic content. This makes detection challenging even for experienced moderators, and the sheer volume of content posted can be overwhelming.

There may be ambiguous or non-existent policies regarding what constitutes unacceptable synthetic media, making it hard for moderators to make consistent, predictable, and fair decisions.

Moderating harmful or deceptive content, including realistic synthetic media, can have a negative psychological impact on human moderators, leading to stress or trauma (see Traumatic Content: Coping With Exposure). Computer-Generated CSAM (“CG-CSAM”) is a particularly troubling issue and has caused significant harm to moderators in the past, both through the nature of the content itself and the volume of automated posts.

Current detection tools may not be fully effective at identifying all types of synthetic media, leading to false positives or false negatives. This can undermine trust in the moderation process.
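To illustrate why no detection tool escapes this trade-off, consider the decision threshold a tool applies to its confidence scores. The sketch below uses entirely hypothetical scores (not from any particular detector) to show how lowering the threshold trades false negatives for false positives, and vice versa:

```python
# A minimal sketch of the false-positive / false-negative trade-off in
# synthetic-media detection. All scores below are hypothetical.

# (detector_score, is_actually_synthetic) pairs -- hypothetical output
scored_posts = [
    (0.95, True), (0.80, True), (0.62, False),
    (0.55, True), (0.30, False), (0.10, False),
]

def evaluate(threshold):
    """Count both error types at a given decision threshold."""
    # Flagged as synthetic but actually authentic: false positive.
    fp = sum(1 for score, real in scored_posts if score >= threshold and not real)
    # Passed as authentic but actually synthetic: false negative.
    fn = sum(1 for score, real in scored_posts if score < threshold and real)
    return fp, fn

for threshold in (0.25, 0.50, 0.75):
    fp, fn = evaluate(threshold)
    print(f"threshold={threshold:.2f}: {fp} false positive(s), {fn} false negative(s)")
```

No single threshold drives both counts to zero here, which is why even well-tuned tools mislabel some content and why moderator review remains necessary.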

Moderators may face backlash from community members who feel their content was unfairly labelled or censored, especially in cases where the distinction between creative and deceptive use of synthetic media is subjective.

Watermarking

Efforts are underway to create technology standards that embed the provenance of a given media object, so that platforms can label or flag media produced through synthetic means. The Coalition for Content Provenance and Authenticity (C2PA) addresses the prevalence of misleading information online through the development of technical standards for certifying the source and history (or provenance) of media content. C2PA is a Joint Development Foundation project, formed through an alliance between Adobe, Arm, Intel, Microsoft, and Truepic.

Separately, the IPTC has published a standard vocabulary for identifying the source of digital media, primarily to support the news media.
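As an illustration, the IPTC Digital Source Type vocabulary includes a value for media created by a trained algorithmic model, which can be embedded in an image’s XMP metadata. A minimal sketch of checking for it follows; it assumes the exiftool command-line utility is installed, and the tag and group names reflect our reading of the IPTC/XMP mapping, so verify them against current exiftool documentation:

```python
# A minimal sketch: check an image for the IPTC Digital Source Type
# property, which publishers can use to declare AI-generated media.
# Assumes the exiftool utility is installed; the tag/group name below
# is our assumption and should be verified against exiftool docs.
import json
import subprocess

# IPTC NewsCodes URI for media created by a generative (trained) model.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def digital_source_type(path):
    """Return the embedded Digital Source Type URI, or None if absent."""
    result = subprocess.run(
        ["exiftool", "-json", "-XMP-iptcExt:DigitalSourceType", path],
        capture_output=True, text=True, check=True,
    )
    record = json.loads(result.stdout)[0]
    return record.get("DigitalSourceType")

if digital_source_type("upload.jpg") == TRAINED_ALGORITHMIC_MEDIA:
    print("Image declares itself as AI-generated media.")
```

Note that provenance metadata is only present when the producing tool writes it and the hosting platform preserves it; its absence proves nothing about a file’s origin.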

Example Rule

Posting synthetic media that deceives, misrepresents, or harms individuals is not permitted. Content that has been altered or generated by AI must be clearly labelled.

(Make sure to recommend how users should label their posts: with hashtags, a statement, a visible watermark, etc.)
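If your community settles on a hashtag convention, a disclosure check can be automated. The sketch below is hypothetical: the hashtag list is an example only, and each community should choose and document its own labelling convention:

```python
# A minimal sketch of checking whether a post carries an AI-disclosure
# hashtag. The tag list is hypothetical -- substitute whatever labels
# your community's rule actually recommends.
import re

DISCLOSURE_TAGS = {"#ai", "#aigenerated", "#aiart", "#syntheticmedia"}

def has_ai_disclosure(post_text):
    """Return True if the post contains a recognised disclosure hashtag."""
    hashtags = {tag.lower() for tag in re.findall(r"#\w+", post_text)}
    return bool(hashtags & DISCLOSURE_TAGS)

print(has_ai_disclosure("Sunset over the fjord #AIGenerated"))  # True
print(has_ai_disclosure("Sunset over the fjord"))               # False
```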

Discussion

Discuss this label in the Synthetic Media forum.
