• El Barto@lemmy.world (+25/-1) · 6 months ago

    Such a clickbaity article.

    Here’s the meat of it:

    Have they finally achieved consciousness and this is how they show it?!

    No. The answer, as is usually the case with these things, is that we are anthropomorphizing a step too far. These models don’t care about what is and isn’t random. They don’t know what “randomness” is! They answer this question the same way they answer all the rest: by looking at their training data and repeating what was most often written after a question that looked like “pick a random number.” The more often it appears, the more often the model repeats it.
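    The mechanism described above can be sketched in a few lines. This is a toy illustration with made-up counts (the answer strings and frequencies are hypothetical, not real corpus statistics): if "42" was the most common reply to "pick a random number" in the training data, frequency-proportional sampling keeps producing "42".

```python
import random
from collections import Counter

# Hypothetical counts of answers seen after "pick a random number"
# in a training corpus. Illustrative numbers only.
answer_counts = {"42": 50, "7": 30, "37": 15, "73": 5}

def llm_style_pick(counts, rng):
    """Sample an answer in proportion to its training-data frequency."""
    answers = list(counts)
    weights = [counts[a] for a in answers]
    return rng.choices(answers, weights=weights, k=1)[0]

rng = random.Random(0)
picks = Counter(llm_style_pick(answer_counts, rng) for _ in range(1000))

# The distribution of "random" picks mirrors corpus frequency:
# "42" dominates, rare answers stay rare.
print(picks.most_common())
```

    Nothing here "cares" about randomness; the skew falls straight out of the weights, which is the commenter's point.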

    • Pennomi@lemmy.world (+12/-4) · 6 months ago

      LLMs are AI. But then again, so are mundane algorithms like A* Pathfinding. Artificial Intelligence is an extraordinarily broad field.

      Very few, if any, people claim that ChatGPT is “Artificial General Intelligence”, which is what you probably meant.
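      For reference, this is the kind of "mundane" AI the comment means: a minimal A* pathfinding sketch (standard textbook algorithm, not tied to any particular library), searching a small grid with a Manhattan-distance heuristic.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest-path length on a 4-connected grid; 1 = wall, 0 = free."""
    def h(p):  # Manhattan distance: admissible heuristic on a grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start)]  # (f = g + h, g, node)
    best = {start: 0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best.get((r, c), float("inf")):
                    best[(r, c)] = ng
                    heapq.heappush(frontier, (ng + h((r, c)), ng, (r, c)))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # → 6 (must detour around the wall)
```

      No deep learning involved, yet it has sat squarely inside the academic field of AI for decades.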

      • DaseinPickle (+11/-2) · 6 months ago

        It’s a marketing term that’s used to describe so many different technologies it has become meaningless. People just use it to give their tech some sci-fi vibes.

        • Pennomi@lemmy.world (+3/-4) · 6 months ago

          Sorry but that’s bullshit. You can’t disqualify an entire decades-old field of study because some marketing people used it wrong.

          • DaseinPickle (+4/-3) · 6 months ago

            No it’s not. Engineers and researchers calling any tech they make “AI” is the bullshit. It has nothing to do with intelligence; they used the term wrong from the very beginning.

            • Pennomi@lemmy.world (+5/-4) · 6 months ago

              Please read up on the history of AI: https://en.m.wikipedia.org/wiki/Artificial_intelligence

              Alan Turing was the first person to conduct substantial research in the field that he called machine intelligence.[5] Artificial intelligence was founded as an academic discipline in 1956.[6]

              You are conflating the modern “deep learning” technique of AI, which has really only existed for a short time, with the entire history of AI development, which has existed for (probably much) longer than you’ve been alive. It’s a very common misconception.

              • DaseinPickle (+2/-2) · 6 months ago

                Just because it’s old doesn’t make it true. The Democratic People’s Republic of Korea (DPRK) was established in 1948. Do you think North Korea is democratic just because it’s called that?

  • SkyNTP@lemmy.ml (+13/-1) · edited · 6 months ago

    “Favourite numbers” is just another way of saying model bias, a centuries-old notion.

    There’s no ethics in journalism. That’s the real story here.

    • kakes@sh.itjust.works (+4) · 6 months ago

      I swear every article posted to Lemmy about LLMs is written by my 90-year-old grandpa, given how out of touch they are with the technology. If I see another article about what ChatGPT “believes”…

  • kromem@lemmy.world (+12) · 6 months ago

    No. The answer, as is usually the case with these things, is that we are anthropomorphizing a step too far.

    No, you are taking it too far before walking it back to get clicks.

    I wrote in the headline that these models “think they’re people,” but that’s a bit misleading.

    “I wrote something everyone will know is bullshit in the headline to get you to click on it, before denouncing the bullshit at the end of the article as if it was a PSA.”

    I am not sure if I could loathe how ‘journalists’ cover AI more.

    • Match!!@pawb.social (+7/-1) · 6 months ago

      Journalistic integrity! Journalists now print retractions in the very article where the errors appear.

  • geography082@lemm.ee (+8) · 6 months ago

    “because they think they are people” … hmmmmmmmmmmmmm this quote makes my neurons stop doing synapse