I asked Google Bard whether it thought Web Environment Integrity was a good or bad idea. Surprisingly, not only did it respond that it was a bad idea, it even went on to urge Google to drop the proposal.

      • novibe@lemmy.ml · 18 points · 11 months ago

        That ignores all the papers on emergent features of LLMs and the fact that they are basically black boxes. Yes, we “trained” them to write what we want to hear. But we don’t really understand what happens inside them. We can’t categorically claim things like “they are only regurgitating what they heard”, because that is not a scientific or even a philosophical statement.

        If you think about it for a second, it’s also applicable to human beings…

          • novibe@lemmy.ml · 7 points · 11 months ago (edited)

            I think to assume what you assume is also incorrect given current data.

            And that’s my entire point… What is it doing? How is what it’s doing different from a mind or intelligence?

            Our brains and minds evolved to “fill in the blank” in many situations, shaped by survival pressure over millions of years of selection. So what is the actual difference?

            I’m not saying it’s “conscious”, but why is it not a mind?

      • Elise@beehaw.org · 2 points · 11 months ago

        I’ve actually developed quite a bit with GPT-4, have beta access, and have written some fancy prompts, if I do say so myself.

        Telling me ‘isn’t it obvious’ doesn’t make it more obvious to me.

    • graham1@gekinzuku.com · 9 points · 11 months ago

      Large language models literally do subspace projections on text to break it into contextual chunks, and then memorize the chunks. That’s how they’re defined.

      Source: the paper that defined the transformer architecture and formulas for large language models, which alone has been cited some 85,000 times in academic sources https://arxiv.org/abs/1706.03762

      • notfromhere@lemmy.one · 6 points · 11 months ago

        Hey, that comment’s a bit off the mark. Transformers don’t just memorize chunks of text; they’re way more sophisticated than that. They use attention mechanisms to figure out which parts of the text are important and how they relate to each other. It’s not about memorizing, it’s about understanding patterns and relationships. The paper you linked doesn’t say anything about these models just regurgitating information.

        • graham1@gekinzuku.com · 4 points · 11 months ago

          I believe your “They use attention mechanisms to figure out which parts of the text are important” is just a restatement of my “break it into contextual chunks”, no?
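The attention mechanism the two commenters are arguing about can be sketched as the scaled dot-product attention from the cited paper (arXiv:1706.03762). The matrix sizes and random inputs below are toy values for illustration only, not anything from the thread:

```python
# Minimal sketch of scaled dot-product attention:
#   Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
# Shapes and inputs here are illustrative toy values.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise relevance between tokens
    weights = softmax(scores)        # each row is a distribution over tokens
    return weights @ V               # context-weighted mix of value vectors

# Toy example: 3 tokens with 4-dimensional embeddings
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one context-mixed vector per token
```

Nothing in this sketch is literal memorization: the output for each token is a weighted average of the other tokens’ value vectors, with weights computed from the input itself, which is roughly what both “contextual chunks” and “attention mechanisms” are gesturing at.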

    • Pfnic@feddit.ch · 1 point · 11 months ago

      Yes, because online discussions usually aren’t inherently subjective and are instead backed by sourceable knowledge. Sorry for the cynicism, but one can always find some source that underlines any given point, so everything should be taken with a grain of salt.

      I’d personally argue that the way generative AI works lends itself to producing answers that fit the general consensus of the internet content relevant to the given prompt, because it calculates the most likely response based on the information available. Since most information relevant to “Google Web DRM” is critical of it (Google doesn’t call it DRM themselves), it makes sense that a prompt querying the AI for opinions on Web DRM will result in a rather negative response, unless Google tampers with it to their advantage.
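      The “most likely response” idea above can be sketched as picking the highest-probability next token from a model’s output distribution. The vocabulary and scores below are entirely invented for illustration; a real model would produce scores over tens of thousands of tokens:

```python
# Toy sketch of next-token prediction: a language model assigns a score
# (logit) to each candidate next token, softmax turns the scores into
# probabilities, and greedy decoding picks the most likely token.
# The vocabulary and logit values here are invented for illustration.
import math

def softmax(logits):
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Hypothetical scores for the token after "Web Environment Integrity is a ..."
logits = {"bad": 2.1, "good": 0.3, "new": -0.5, "controversial": 1.4}
probs = softmax(logits)
most_likely = max(probs, key=probs.get)
print(most_likely)  # "bad" — the dominant sentiment in the training text wins
```

      On this view, if most of the training text about a topic is critical, the critical continuation simply gets the highest probability, which is the mechanism behind the consensus-mirroring argument above.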