I’m kinda regretting not naming it oneninesix, but here we are. I guess I love letters.

To anyone wondering what’s up, I did this on my phone while out in the “big city”, so I’m still waiting to get home to do anything serious. I have a few ~~suckers~~ really nice people who volunteered for modding along with me. Anyone else who is interested, drop me a line. I’ll be picking mods when I get home in a few hours. Sorry for the wait, and I’ll do my best to put out any fires in the meantime. I didn’t think this would take off!

For those wondering, here’s my take on moderating the place.

  1. Moderation is there to facilitate an experience for users in line with the goals of the community and the instance. It’s not to push a personal agenda, give you a bigger hammer in debates, set up a digital fiefdom, etc. You certainly can and should include your mod experience on your dating profile, though. Unilateral decisions are not cool except in a few situations, like if 100% of your userbase is usurped by literal Nazis.

  2. 196 exists to be a place where you post something (often but not always something goofy) when you visit. I know not everyone does, and that’s fine - I still love you. Posts can’t be offensive or hurtful, though, especially not intentionally so. Unintentional vs. intentional is, I believe, a HUGE distinction and needs to be considered when moderating.

  3. ~~LBJ~~ LBZ exists as an inclusive, (relatively) judgment-free zone for gender-diverse folks. I intend for us to uphold that here. I say relatively judgment-free because there will be people looking to start shit, and mods and admins are going to have to judge their actions, but only their actions.

If you wanna be my modder, you gotta get with my bullet points…or argue persuasively why I should amend them (but that part doesn’t fit the tune). The three big things I’m looking for otherwise are diverse viewpoints, whether you can remain reasonably impartial, and whether you can say you’re sorry. That last one is huge for me. As a mod, you’re going to mess up. I used to mod on Reddit and I certainly did! I find it’s important for maintaining the community’s respect to be able to admit when you made a bad call and say what you’ll do to avoid it in the future.

@[email protected], pointers would be welcome as I think you do a great job.

Community feedback is encouraged and welcome, just be aware I’ll be a little slow to respond for a bit.

PS: wow, I really DO love letters!

Edit: Corrected point three, damn autocorrect! Believe it or not, we’re not an inclusive community in LBJ’s corpse.

Update 20/1/25: We’re replete with mods for now! Thank you to everyone who reached out. I’ll start pulling these stickies as they become irrelevant; I’m just a full-disclosure kind of person, so I want people to know what is and has been going on.

  • WrittenInRed [any]@lemmy.dbzer0.com · 6 hours ago

    I’ve been thinking recently about chain-of-trust algorithms and decentralized moderation, and am considering making a bot that functions a bit like fediseer but is designed more for individual users, where people can be vouched for by other users. Ideally you end up with a network where trust is generated pseudo-automatically from interactions between users, and reports could then be used to gauge whether a post should be removed, based on the trust level of the people making the reports vs. the person getting reported (rough sketch below). It wouldn’t necessarily be a perfect system, but I feel like there would be a lot of upsides to it, and it could hopefully lead to mods/admins only needing to remove the most egregious stuff while anything more borderline gets handled via community consensus. (The main issue is lurkers would get ignored with this, but idk if there’s a great way to avoid that happening tbh.)
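
    To make the report-weighting idea concrete, here’s a minimal sketch (every name, number, and threshold is a placeholder I made up for illustration, not a settled design):

```python
# Sketch only: decide whether reports should auto-remove a post by comparing
# the combined trust of the reporters against the trust of the author.
from dataclasses import dataclass


@dataclass
class Report:
    reporter: str  # username of the person filing the report


def should_remove(reports: list[Report], author: str,
                  trust: dict[str, float], margin: float = 1.5) -> bool:
    """Remove only when the reporters' combined trust clearly outweighs the author's."""
    reporter_weight = sum(trust.get(r.reporter, 0.0) for r in reports)
    author_weight = trust.get(author, 0.0)
    # Require a clear margin so one trusted reporter can't nuke an equally
    # trusted poster; borderline cases stay in the human mod queue.
    return reporter_weight > margin * max(author_weight, 1.0)


# Example: three moderately trusted users report a low-trust account.
trust = {"alice": 0.9, "bob": 0.7, "cara": 0.8, "spammer": 0.1}
reports = [Report("alice"), Report("bob"), Report("cara")]
print(should_remove(reports, "spammer", trust))  # True
```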

    My main issue atm is how to do vouching without it being too annoying for people to keep up with. Not every instance enables downvotes, and upvote/downvote totals in general aren’t necessarily reflective of someone’s trustworthiness. I’m thinking maybe it can be based on interactions, where replies to posts/comments get scored by a sentiment-analysis model and that positive/negative number is what feeds into trust? I still don’t think that’s a perfect solution or anything, but it would probably be a decent starting point.
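
    For the sentiment part, something like this rough sketch is what I’m picturing (it uses the stock Hugging Face sentiment-analysis pipeline purely as a stand-in; the actual model and the size of the nudges are completely undecided):

```python
# Sketch only: turn replies into implicit vouches via a sentiment model.
# Uses the default Hugging Face "sentiment-analysis" pipeline as a stand-in;
# any model that maps text to a signed score would do.
from transformers import pipeline

_classifier = pipeline("sentiment-analysis")  # downloads a small default model


def reply_signal(reply_text: str) -> float:
    """Map a reply to roughly [-1, 1]; positive replies act as a small vouch."""
    result = _classifier(reply_text)[0]
    sign = 1.0 if result["label"] == "POSITIVE" else -1.0
    return sign * result["score"]


def apply_reply(trust: dict[str, float], target: str, reply_text: str,
                weight: float = 0.05) -> None:
    """Nudge the reply target's trust by a small, capped amount so one
    enthusiastic thread can't mint a fully trusted account."""
    signal = reply_signal(reply_text)
    trust[target] = min(1.0, max(0.0, trust.get(target, 0.0) + weight * signal))
```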

    If trust decays over time as well, then it rewards more active members somewhat and makes it a lot harder to build up a bot swarm: if you wanted any significant number of accounts, you’d have to have them all posting at around the same time, which would be a much more obvious activity spike.
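
    Decay could be as simple as an exponential half-life on inactivity (the 60-day half-life below is just an example number, nothing is tuned):

```python
# Sketch only: exponential trust decay so idle (or stockpiled bot) accounts
# drift back toward zero. The half-life is an arbitrary example value.
def decay(trust: float, days_inactive: float, half_life_days: float = 60.0) -> float:
    """Halve trust for every `half_life_days` of inactivity."""
    return trust * 0.5 ** (days_inactive / half_life_days)


print(decay(0.8, 60))   # 0.4 after one half-life
print(decay(0.8, 180))  # 0.1 after three
```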

    Idk, this was a wall of text lol, but it’s something I’ve been considering for a while and whenever this sort of drama pops up it makes me want to work on implementing something.

    • stray@pawb.social · 4 hours ago

      I’m always wary of how such systems can be gamed and how they’ll influence user behavior, but the only downside to trying is the effort you put in. Even if you fail miserably, I imagine the exercise itself would improve our understanding of what works, what doesn’t, and how to form better approaches in the future. To succeed in making a system which improves user interactions would be a truly wonderful thing, and it may even translate to IRL applications. I would urge you to follow through with this for as long as you feel it’s something you’d like to do.

      • WrittenInRed [any]@lemmy.dbzer0.com · 4 hours ago

        Yeah, those are basically my thoughts too lol. Even if it ends up not working out, the process of trying will still be good since it’ll give me more experience. The aspects you’re wary of are definitely my two biggest concerns as well. I think (or at least hope) that with the rules I’m thinking of for how trust is generated, it would mostly affect behaviour positively? I’m imagining that by awarding trust for receiving positive replies, combined with a small reward for making positive replies in the first place, it would mostly just lead to more positive interactions overall. And I don’t think I’d ever want a system like this to punish making a negative reply - at most it might punish getting negative replies in response - since hopefully that keeps people from avoiding confrontation of harmful content just to dodge being punished. Honestly it might even be better to only ever reward trust and never retract it except via decay over time, but that’s something worth testing I imagine.
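
        Concretely, the reward-only variant would look something like this rough sketch (the gains and the cap are made-up numbers, nothing is settled):

```python
# Sketch only: reward-only trust updates. Receiving a positive reply earns
# the author some trust, writing one earns the replier a little, and negative
# replies are deliberately a no-op; decay over time is the only way down.
def on_positive_reply(trust: dict[str, float], author: str, replier: str,
                      author_gain: float = 0.05, replier_gain: float = 0.01) -> None:
    trust[author] = min(1.0, trust.get(author, 0.0) + author_gain)
    trust[replier] = min(1.0, trust.get(replier, 0.0) + replier_gain)


def on_negative_reply(trust: dict[str, float], author: str, replier: str) -> None:
    # Intentionally does nothing: calling out harmful content shouldn't cost
    # anyone trust, so nobody gets punished for making or receiving a negative reply.
    pass
```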

        And in terms of gaming the system, I do think that’s kinda my bigger concern tbh. I feel like the most likely negative outcome is something like bots/bad actors finding a way to scam it, or the community turning into an echo chamber where ideas (that aren’t harmful) get pushed out, or the community drifting towards the center and becoming less safe for marginalized people. I do feel like that’s part of the reason 196 would be a pretty good community to use a system like this, though, since there’s already a very strong foundation of super cool people who could be made the initial trusted group, and then it would hopefully lead to a better result.

        There are examples of similar sorts of systems, but they’re mostly various cryptocurrencies or other P2P systems that use trust just to verify that peers aren’t malicious, and it’s never really been tested for moderation afaik (I could have missed an example of it online, but I’m fairly confident in saying this). I think stuff like the Fediverse and other decentralized or even straight-up P2P networks are a good place for this sort of thing to work, though, since a lot of the culture is already conducive to decentralizing previously centralized systems, and the communities tend to be smaller, which helps it feel more personal and deters bad actors/botting attempts - there aren’t a ton of incentives, and they become easier to recognize.

    • Roflmasterbigpimp@lemmy.world · 6 hours ago

      Hey wow, that’s an awesome idea! I’m currently in training to become a software developer myself and this sounds really impressive!

      Have you already started?

      • WrittenInRed [any]@lemmy.dbzer0.com · 6 hours ago

        I’ve been looking at the Lemmy API and stuff, and into some existing libraries/implementations of trust networks, but that’s about it so far tbh. I think I’m gonna start working on an implementation later today maybe; this whole mod drama and the discussion it led to really make me want to start lol.