Information for Software Developers and Designers

From the IFTAS Moderator Library, supporting Fediverse trust & safety

Updated on 2024-04-16

If you are creating an app or a web service that enables interpersonal communication, the following resources can help you build in safeguards and follow responsible design principles.

Consent

User Safety

  • Prosocial Design: The Prosocial Design Network curates and researches evidence-based design solutions to bring out the best in human nature online.
  • Safety by Design: From Australia’s eSafety Commissioner, this proactive and preventative approach focuses on embedding safety into the culture and leadership of an organisation. It emphasises accountability and aims to foster more positive, civil and rewarding online experiences for everyone.

Policy Design Considerations

  • Authentication Cheat Sheet (OWASP): Authentication is the process of verifying that an individual, entity, or website is who or what it claims to be by determining the validity of one or more authenticators (like passwords, fingerprints, or security tokens) that are used to back up this claim.
  • The Real Name Fallacy (J. Nathan Matias): People often say that online behavior would improve if every comment system forced people to use their real names. It sounds like it should be true – surely nobody would say mean things if they faced consequences for their actions?

Accountability and Transparency

Account and Content Reporting Workflow

  • Content moderators commonly experience trauma similar to that suffered by first responders. Even if you have never reviewed traumatic content yourself, your app or service may deliver such content to the people using your moderation workflow. When presenting reported content to a service provider or moderator, always (a brief interface sketch follows this list):
    • Show the classification clearly, so the moderator is aware of the type of content they are about to review
    • Blur all media until the moderator hovers to view a greyscale version (re-blur on mouseleave or when hover is no longer detected)
    • Greyscale all media until the moderator clicks to toggle full colour (allow toggling back to greyscale)
    • Mute all audio until the moderator requests audio
    • Allow the moderator to reclassify the report
  • Allow the service operator to choose from a list of harms or rules they want to receive reports about
  • Offer the end user a path to report an actor, behaviour, or content, e.g. “report this account” or “report this post”
  • Group report reasons by type and classification, and label each report. Use standard metadata to classify and present reported content, and standard language to describe the reporting context. Consider a multi-step report submission process that allows fine-grained reporting, e.g. (a data-structure sketch of this taxonomy and an example report payload follow the list):
    • Report an Account
      • Imposter
        • Account Takeover (account-takeover)
        • Impersonation (impersonation)
        • Sock Puppet / False Identity (sock-puppet)
      • Inauthentic Engagement
        • Astroturfing (astroturfing)
        • Brigading (brigading)
        • Catfishing (catfishing)
        • Content Farming (farming)
        • Service Abuse (service-abuse)
        • Troll (troll)
      • Dangerous Person or Organisation
        • Child Exploitation (csea)
        • Terroristic or Violent Person or Organisation (tvec)
    • Report a Post
      • Spam (spam)
      • Bullying
        • Brigading (brigading)
        • Doxxing / PII (doxxing)
        • Harassment (online-harassment)
      • Deception
        • Phishing (phishing)
        • Scam / Fraud (content-and-conduct-related-risk)
        • Sock Puppet / False Identity (sock-puppet)
        • Sextortion (sextortion)
      • Intellectual Property
        • Copyright (copyright-infringement)
        • Counterfeit Goods or Services (counterfeit)
      • Nudity / Sexual Activity
        • Explicit Content (explicit-content)
        • Child Sexual Abuse (csam)
      • False Information
        • Defamation (defamation)
        • Misinformation (misinformation)
        • Manipulated Media / Deepfake (synthetic-media)
      • Hateful Content
        • Hate Speech or Symbols (hate-speech)
        • Dehumanisation (dehumanisation)
      • Suicide or Self-harm (content-and-conduct-related-risk)
      • Sale of illegal or regulated goods or services (content-and-conduct-related-risk)
      • Violent Content
        • Glorification of Violence (glorification-of-violence)
        • Inciting Violence (incitement)
        • Violent Threat (violent-threat)
      • Terms of Service Violation / Community Guidelines Violation (service-abuse)
      • Something Else / Not Listed (unclassified)
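Below is a minimal sketch of the media presentation guidance above, assuming a browser-based moderation queue. The class names, selectors, and helper names are illustrative assumptions, not part of any particular Fediverse platform.

```typescript
// Sketch: de-sensitised presentation of reported media in a moderation queue.
// Assumes a browser environment; selectors and names are illustrative only.

const BLUR = "blur(16px)";
const GREYSCALE = "grayscale(100%)";

// Show the classification clearly before any media is revealed.
function renderClassification(container: HTMLElement, classification: string): void {
  const label = document.createElement("p");
  label.textContent = `Reported as: ${classification}`;
  container.prepend(label);
}

function attachSafeMediaHandlers(container: HTMLElement): void {
  // Images: blurred and greyscaled by default. Hovering reveals a greyscale
  // version, leaving re-blurs, and clicking toggles colour on and off.
  container.querySelectorAll<HTMLImageElement>("img.reported-media").forEach((img) => {
    let showColour = false;
    const apply = (hovering: boolean) => {
      const filters: string[] = [];
      if (!hovering) filters.push(BLUR);        // blur until hover
      if (!showColour) filters.push(GREYSCALE); // greyscale until toggled
      img.style.filter = filters.join(" ");
    };
    apply(false);
    img.addEventListener("mouseenter", () => apply(true));
    img.addEventListener("mouseleave", () => apply(false)); // re-blur on mouseleave
    img.addEventListener("click", () => {
      showColour = !showColour; // allow toggling back to greyscale
      apply(true);
    });
  });

  // Audio and video: muted until the moderator explicitly requests audio.
  container
    .querySelectorAll<HTMLMediaElement>("audio.reported-media, video.reported-media")
    .forEach((media) => {
      media.muted = true;
      media.autoplay = false;
    });
}
```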
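The taxonomy above can be carried as data so that every report is labelled with a standard slug and the service operator can choose which harms they want to receive reports about. The type names and helper below are illustrative assumptions; only the slugs come from the list above, and the array is abbreviated rather than exhaustive.

```typescript
// Sketch: the report taxonomy as data. Slugs are the standard labels listed
// above; type and helper names are illustrative, not a defined API.

interface ReportReason {
  slug: string;               // standard metadata label, e.g. "doxxing"
  label: string;              // human-readable name shown to the reporter
  group: string;              // category heading, e.g. "Bullying"
  target: "account" | "post"; // what the reason applies to
}

const REPORT_REASONS: ReportReason[] = [
  { slug: "account-takeover", label: "Account Takeover", group: "Imposter", target: "account" },
  { slug: "impersonation", label: "Impersonation", group: "Imposter", target: "account" },
  { slug: "sock-puppet", label: "Sock Puppet / False Identity", group: "Imposter", target: "account" },
  { slug: "doxxing", label: "Doxxing / PII", group: "Bullying", target: "post" },
  { slug: "online-harassment", label: "Harassment", group: "Bullying", target: "post" },
  { slug: "hate-speech", label: "Hate Speech or Symbols", group: "Hateful Content", target: "post" },
  { slug: "unclassified", label: "Something Else / Not Listed", group: "Other", target: "post" },
  // ...remaining entries follow the same pattern
];

// Let the operator enable only the harms they want to receive reports about,
// then present just those reasons in the reporting flow.
function reasonsForOperator(enabledSlugs: Set<string>, target: "account" | "post"): ReportReason[] {
  return REPORT_REASONS.filter((r) => r.target === target && enabledSlugs.has(r.slug));
}
```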
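A fine-grained, multi-step flow ("report this post", then a category, then a specific reason) might produce a submission like the one below. The field names are illustrative assumptions; the classification value is one of the standard slugs listed above.

```typescript
// Sketch: what a multi-step report submission might produce. Field names are
// illustrative; the classification is a standard slug from the taxonomy above.

interface ReportSubmission {
  target: { kind: "account" | "post"; id: string };
  classification: string; // fine-grained reason, e.g. "online-harassment"
  category: string;       // broader grouping, e.g. "Bullying"
  comment?: string;       // the reporter's own description of the context
}

// Step 1: "Report this post" -> Step 2: choose "Bullying" -> Step 3: choose "Harassment".
const example: ReportSubmission = {
  target: { kind: "post", id: "https://example.org/users/alice/statuses/1" },
  classification: "online-harassment",
  category: "Bullying",
  comment: "Repeated targeted insults in replies.",
};
```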