Moderation is work. Trolls traumatize. Humans powertrip. All this could be resolved via AI.

  • okr765@lemmy.okr765.com
    link
    fedilink
    English
    arrow-up
    1
    ·
    22 hours ago

    The AI used doesn’t necessarily have to be an LLM. A simple model for determining the “safety” of a comment wouldn’t be vulnerable to prompt injection.