In community moderation, we sometimes encounter individuals whose participation appears, on the surface, to be passionate or provocative – but on closer inspection, reveals patterns of trolling behaviour. These participants don’t simply disagree; they actively undermine community cohesion, weaponise the language of inclusion, and test the boundaries of civil speech.
Let’s break down how this dynamic works, using quotes drawn from a recent case in our community (anonymised, but verbatim):
Presenting as the Victim While Launching the Attack
“If I go march in a pride parade, accepted, if I do all manner of alternate deviant lifestyle choices, accepted, if I share pro Trump, Elon, Infowars views, phew, what a backlash.”
This quote pretends to seek inclusion, but the phrase “alternate deviant lifestyle choices” reveals the actual intent: to insult under the guise of expressing a viewpoint. It’s a rhetorical bait-and-switch, provocation wrapped in performative grievance.
Using Dehumanising Language to Undermine Others
“People are complex, many can not operate above a base level, an animal or herd mentality.”
This kind of language doesn’t belong in any respectful community. It degrades others and dismisses disagreement as intellectual inferiority. This is classic troll rhetoric: belittle first, then claim persecution.
Painting Community Standards as Censorship
“Great effort has been put in to silence me… many will never know I exist on the fediverse, not for any choice of their own either.”
The implication here is that moderation equates to censorship. In truth, this user’s content was moderated because it was disrespectful, not because of the specific views. Framing moderation as “silencing” is a common tactic to discredit community governance.
Dismissing Others While Demanding to Be Heard
“Try to phrase a question, I may try to answer, but if it’s more word salad, I’ll pass and wait for a steak.”
This quote is another common move in the troll playbook: insult someone’s clarity or intelligence while pretending to leave the door open for conversation. It sets up a dynamic where the troll controls the terms of engagement and reserves the right to disengage with derision.
Claiming Good Faith, Demonstrating the Opposite
“All I did was present my views, with care.”
Claiming care while repeatedly using inflammatory language shows a disconnect between self-perception and impact. True good faith requires listening, adapting, and respecting community boundaries. When a user insists they were respectful despite a record of dehumanising remarks and provocations, they’re often seeking to reverse the burden of justification onto the moderators.
Quoting Guidelines to Undermine Their Intent
“‘Diversity of views and of people on teams powers innovation, even if it is not always comfortable.’ It’s not always comfortable, but, suck it up buttercup, except, you didn’t.”
This is a classic example of weaponising community guidelines to mock and undermine them. By quoting a sincere principle about diversity of thought, and then responding with “suck it up buttercup”, the user trivialises the very value they pretend to uphold. It reveals a disdain for discomfort not as a price of growth, but as a tool of provocation.
Invoking Bad Faith with a Smile
“I am behind enemy lines, put on some tunes and march forward as I believe is right😊”
The emoji doesn’t soften the statement; it underscores it. When someone describes a community space as “enemy lines”, they are not here to collaborate. They are here to provoke, and the smile becomes part of the mockery.
Behavioural Observations
Tactical Victimhood with Dominance Framing
The user repeatedly positions themselves as a victim of suppression – “behind enemy lines”, “silenced”, “not for any choice of their own” – while simultaneously using aggressive and derisive language (“suck it up buttercup”, “herd mentality”, “deviant lifestyle choices”).
This juxtaposition suggests strategic victimhood: adopting the posture of the oppressed in order to deflect accountability while justifying antagonistic behaviour. It’s a common tactic in online trolling, designed to erode moderators’ ability to uphold community norms without being accused of censorship.
This reflects a dominance–submission inversion, where an individual uses claims of marginalisation to justify exercising control over others’ boundaries. It is more aligned with narcissistic injury than with sincere marginalisation.
Provocation Disguised as “Careful Speech”
The user asserts:
“All I did was present my views, with care.”
Yet they also say:
“Try to phrase a question, I may try to answer, but if it’s more word salad, I’ll pass and wait for a steak.”
This contradiction between claimed intent and actual speech suggests gaslighting behaviour – presenting themselves as rational and respectful while engaging in baiting and denigration.
This reflects what is known in trust and safety circles as covert antagonism – deliberate provocation masked in plausible deniability. It is used to frustrate moderation and create self-doubt in others, especially in empathetic communities.
Testing Boundaries, Not Seeking Dialogue
Rather than engaging in dialogue, the user explicitly states:
“I am behind enemy lines.”
This isn’t the language of someone trying to foster understanding; it frames the space as a battlefield. Combined with the declaration that they will “march forward” regardless of reception, this signals a premeditated decision to provoke.
This is consistent with what is termed reactance – a psychological resistance to perceived authority or social constraint. But it goes further, into malicious compliance: purposefully engaging in surface-level “legitimate” behaviour – e.g., quoting guidelines – while violating the spirit of the rules.
Entitlement to Platform, Not Contribution
The user’s recurring theme is that others are wrong to exclude them. They reference being “locked out,” “not chosen,” or “silenced,” without ever acknowledging that others may simply not want to federate with their content or behaviour.
This reflects a sense of entitlement to audience and engagement – as though their presence must be accepted, regardless of its impact on others.
This may stem from a high-control worldview: a belief that decentralised spaces should behave in predictable, ideologically uniform ways – or else be accused of hypocrisy. It is often tied to authoritarian personality traits, particularly when expressions of pluralism are reinterpreted as betrayal or exclusion.
Intent, Not Identity, Matters Most
It’s important to note that none of these insights presume anything about the user’s identity – only their behavioural patterns. What we see here is a deliberate rhetorical strategy:
- Discredit moderation by invoking the community’s own guidelines
- Frame their provocations as “free speech”
- Provoke a reaction to trigger moderation, then portray the consequences as unjust persecution
This user’s pattern matches what researchers describe as performative bad faith – where the goal is not to build, contribute, or challenge constructively, but to provoke, destabilise, and delegitimise the community’s values from within.
Moderation isn’t about ideological policing. It’s about protecting community integrity. Trolls often claim to be “just expressing views” or “offering another perspective,” but when their methods are adversarial and dismissive, their purpose becomes clear.
Moderator teams can consult Responding to Performative Bad Faith for guidance on identifying and responding to bad faith interactions.
True diversity of views is built on mutual respect, self-awareness, and a willingness to engage constructively – not to dominate or derail. When those values are absent, moderation must intervene.
And we will.