I noticed today an occurrence of a user complaining about Lemmy being worse than Reddit. The modlog shows how toxic they are. When this was pointed out, the user deleted their account.

https://web.archive.org/web/20241217101003/https://sopuli.xyz/post/20276017?scrollToComments=true

Deleted account: https://kbin.melroy.org/u/Pyrin

This seems to answer the question that comes up once in a while, “a public modlog is only useful for mods” (https://feddit.org/post/4920887/3235141): this example shows that it can also be useful for identifying toxic users.
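For what it’s worth, the modlog isn’t only browsable in the UI; Lemmy also exposes it over HTTP, so anyone can pull a user’s moderation history themselves. Here is a rough sketch in Python, assuming the standard `/api/v3/modlog` endpoint and an `other_person_id` filter (the exact parameter names can differ between Lemmy versions, and the person id below is made up):

```python
# Rough sketch: fetch the public modlog from a Lemmy instance and list the
# entries that target a given user. Filter parameter names are assumptions
# and may vary between Lemmy versions.
import requests

INSTANCE = "https://sopuli.xyz"   # any Lemmy instance with a public modlog
TARGET_PERSON_ID = 12345          # hypothetical numeric id of the user

resp = requests.get(
    f"{INSTANCE}/api/v3/modlog",
    params={"other_person_id": TARGET_PERSON_ID, "limit": 50},
    timeout=30,
)
resp.raise_for_status()
modlog = resp.json()

# The response groups entries by action type (removed comments, bans, etc.),
# each as a list; print whatever came back for this user.
for action_type, entries in modlog.items():
    if isinstance(entries, list):
        for entry in entries:
            print(action_type, entry)
```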

As you may know, [email protected] is a community dedicated to calling out power tripping mods.

Should we consider having a similar community for toxic users?

There is already [email protected], but I feel like the “lore” there is more about large-scale events (like the recent cats wave) than about specific users.

Edit: Updated the title, and put the emphasis on creating a community to call out toxic users rather than “dunking” on the user that was banned.

  • OpenStars@piefed.social · 5 days ago

    It definitely could go either way.

    The toxicity needs to be discussed in order to deal with it, but what is the real benefit of doing that at the per-user level? To make a cross-instance blacklist? The affected users would just create an alt, plus what is “toxic” to some (“I want women to not be treated as people”) is the epitome of grace and class to others - someFUCKINGhow?!

    A complicating factor is that currently, moderator reports aren’t even federated across instances, and that won’t be added until at least 0.20, which Nutomic has put on the Lemmy Roadmap. Not that this should either hinder or accelerate the need for such a community, it just seems tangentially related?

    I keep coming back to the idea of porn: should it not exist (no, I mean yes, I mean it should not be entirely banned, since studies show that banning it at least correlates with, if not actually contributes to, actual irl physical violence), or can it simply be labeled properly? The problem being that while the Fediverse does an excellent job of labeling NSFW content (and PieFed even adds a new category, on top of NSFW, for “gore”), it fails miserably at labeling most other things - e.g. you cannot criticize Russia, China, or North Korea on the infamous “community of privacy and FOSS enthusiasts, run by Lemmy’s developers”, though how would the latter have in any way implied the former, in its wording?

    Making porn be “opt-in” makes it safe to visit the Fediverse even at work, without fear of being part of the company’s “cost savings plan” (at least due to such a reason as this, assuming they even need a reason at all). Failing to label toxic users as toxic allows them to mix in amongst all the other users, with no distinction offered except to allow or deny, at which point the moderation requires effort to perform that task. Unless we try other ways?