Not a member, but thought I’d share this for the artists here in case they haven’t seen the news.

    • AphoticDev@lemmy.dbzer0.com · 1 year ago

      If this takes off, it will be bypassed within a month. Adversarial training is something Stable Diffusion users already invented, and we use it to make our artwork better by poisoning the dataset to teach the network what a wrong result looks like. They reinvented our wheel.
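The "teach the network what a wrong result looks like" idea can be sketched with a toy negative-example setup (this is a hypothetical stand-in, not the actual Stable Diffusion workflow, which operates on latent embeddings): deliberately bad samples are kept in the training set but explicitly labeled as bad, so the model learns to score them low rather than imitate them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: "good" outputs cluster around +1 in feature space,
# deliberately "wrong" outputs around -1. Features are made up
# for illustration only.
good = rng.normal(loc=1.0, scale=0.5, size=(200, 8))
bad = rng.normal(loc=-1.0, scale=0.5, size=(200, 8))
X = np.vstack([good, bad])
y = np.concatenate([np.ones(200), np.zeros(200)])  # 0 = "wrong result"

# Plain logistic-regression gradient descent: the poisoned/bad
# examples are not hidden from the model, they are labeled as bad.
w = np.zeros(8)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

# The trained model scores known-bad examples low and good ones high.
score_good = (1.0 / (1.0 + np.exp(-(good @ w + b)))).mean()
score_bad = (1.0 / (1.0 + np.exp(-(bad @ w + b)))).mean()
print(round(score_good, 2), round(score_bad, 2))
```

The point of the sketch: labeled "wrong" examples are a training signal, not damage, which is why a known poisoning scheme can be absorbed rather than merely filtered out.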

    • kromem@lemmy.world · 1 year ago

      It’s fine; outside of laboratory conditions it won’t work anyway. Diverse, conflicting “reverse labels” would erase the signal-to-noise ratio of the biased pixels once they’re aggregated across real-world training data, so there’s no need to stress about any kind of reaction to it.
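The averaging argument above can be sketched numerically (a toy model with made-up numbers, not the actual poisoning tool's pipeline): if each poisoned image carries a small pixel perturbation pushing toward a *different* randomly chosen "wrong" concept, the perturbations point in diverse directions and their aggregate shrinks toward zero, while any single image remains clearly perturbed.

```python
import numpy as np

rng = np.random.default_rng(0)

n_images, n_pixels = 10_000, 64

# Hypothetical per-image poison: a small perturbation in a random
# direction (one random "reverse label" per image).
directions = rng.normal(size=(n_images, n_pixels))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
perturbations = 0.05 * directions  # each image's poison has norm 0.05

per_image_strength = np.linalg.norm(perturbations, axis=1).mean()
aggregate_strength = np.linalg.norm(perturbations.mean(axis=0))

print(per_image_strength)   # ~0.05: every individual image is poisoned
print(aggregate_strength)   # much smaller: diverse poisons cancel out
```

This only models the "diverse labels wash out in aggregate" claim; a coordinated poison where many images push toward the *same* wrong concept would not cancel this way.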