Moderation is work. Trolls traumatize. Humans power-trip. All of this could be resolved via AI.

  • Alice@beehaw.org · 1 day ago

    I’m just curious how this would differ from the automatic moderation tools we already have. I know moderating can actually be a traumatic job because of stuff like gore and CSEM, but we already have automatic filters in place for that material, and things still slip through the cracks. Can we train an AI to recognize it when it hasn’t already been added to a filter? And if so, wouldn’t it hit false positives and require an appeal system, which could still be used to traumatize people?
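
    To make that trade-off concrete, here’s a minimal sketch of how this kind of pipeline is often wired up, using a hypothetical classifier score and made-up thresholds rather than any platform’s real system. The point is that anything the model isn’t confident about still lands in front of a human, which is exactly where the false-positive and appeal burden comes from.

        from dataclasses import dataclass
        from enum import Enum, auto

        class Action(Enum):
            REMOVE = auto()        # confident match: take down automatically
            HUMAN_REVIEW = auto()  # uncertain: route to a moderator (or appeal) queue
            ALLOW = auto()         # low score: leave it up

        @dataclass
        class Decision:
            score: float   # hypothetical classifier confidence that content violates policy
            action: Action

        def triage(score: float,
                   remove_threshold: float = 0.95,
                   review_threshold: float = 0.60) -> Decision:
            # Illustrative thresholds, not tuned values: everything the
            # classifier is unsure about still ends up with a person.
            if score >= remove_threshold:
                return Decision(score, Action.REMOVE)
            if score >= review_threshold:
                return Decision(score, Action.HUMAN_REVIEW)
            return Decision(score, Action.ALLOW)

        print(triage(0.70).action)  # Action.HUMAN_REVIEW: a borderline post still needs a human

    Raising the removal threshold cuts false positives but pushes more content into the human-review queue; lowering it does the opposite and removes more legitimate posts, which is what drives the appeals.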