• jubilationtcornpone@sh.itjust.works

    It will not surprise me at all if this becomes a thing. Advanced social engineering relies on extracting little bits of information at a time to form a complete picture without arousing suspicion. This is how really bad cases of identity theft work as well: the identity thief gets one piece of info, leverages it to get another and another, and before you know it they’re at the DMV convincing someone to issue a driver’s license with your name and their picture on it.

    AI models get trained to screen for some types of fraud, but at some point it seems like it could become an endless game of whack-a-mole.

    • flashgnash@lemm.ee
      While you can get information out of them, pretty sure what that person meant was that sensitive information would not have been included in the training data or the prompt in the first place, if anyone developing it had a functioning brain cell or two.

      It doesn’t know the sensitive data to give away, though it can happily just make it up.
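
      For example, a minimal sketch of what keeping sensitive data out of the prompt might look like (the field names here are made up, not from any real system):

      ```python
      # Hypothetical sketch: strip sensitive fields from a record before it
      # ever reaches the LLM prompt, so the model has nothing real to leak.
      SENSITIVE_FIELDS = {"ssn", "dob", "drivers_license", "full_address"}

      def build_prompt(customer_record: dict, question: str) -> str:
          # Keep only non-sensitive fields; the model never sees the rest.
          safe_record = {k: v for k, v in customer_record.items()
                         if k not in SENSITIVE_FIELDS}
          return f"Customer context: {safe_record}\nQuestion: {question}"

      print(build_prompt(
          {"name": "Jane Doe", "ssn": "000-00-0000", "plan": "basic"},
          "What plan is this customer on?",
      ))
      ```

      Anything the model then “reveals” about those stripped fields is pure hallucination, not a leak.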