• NaibofTabr@infosec.pub · 6 days ago

      I mean… duh? The purpose of an LLM is to map words to meanings… to derive what a human intends from what they say. That’s it. That’s all.

      It’s not a logic tool or a fact regurgitator. It’s a context interpretation engine.

      The real flaw is that people assume that because it can sometimes understand what you mean (better than past attempts could), it must be capable of reasoning.

      • vithigar@lemmy.ca · 5 days ago

        Not even that. LLMs have no concept of meaning or understanding. What they do, in essence, is space-filling based on patterns from their training data.

        It’s like showing a bunch of shapes to someone, then drawing a few lines and asking them to complete the shape. All of the shapes are lamp posts, but you haven’t told them that, and they have no idea what a lamp post is. They will still produce results like the shapes you’ve shown them, which generally end up looking like lamp posts.

        Except the “shape” in this case is a sentence, or a poem, or self-insert erotic fan fiction, none of which an LLM “understands”; it just matches the shape of what’s been written so far against previous patterns and extrapolates.
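        To make the “match the shape” idea concrete, here’s a toy sketch (my own illustration, not how a real LLM works internally; real models use neural networks over long contexts rather than raw counts): a bigram model that continues a prompt purely by reproducing word pairs seen in its training text. The tiny corpus and prompt are made up for the example.

        ```python
        # Toy "shape completion" for text: a bigram model that continues a
        # prompt purely by matching which word followed which in training
        # data. It has no notion of meaning, only of observed patterns.
        import random
        from collections import defaultdict

        corpus = (
            "the old lamp post flickered at night "
            "the lamp post stood on the corner "
            "the old dog slept at night"
        ).split()

        # Record which words were seen following each word.
        following = defaultdict(list)
        for prev, nxt in zip(corpus, corpus[1:]):
            following[prev].append(nxt)

        def complete(prompt, length=6):
            words = prompt.split()
            for _ in range(length):
                candidates = following.get(words[-1])
                if not candidates:
                    break
                # Sample in proportion to how often each continuation was seen.
                words.append(random.choice(candidates))
            return " ".join(words)

        print(complete("the old"))  # e.g. "the old lamp post stood on the corner"
        ```

        Scale the counting up to billions of parameters and whole documents of context and you get something that produces very convincing lamp posts, still without knowing what one is.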

        • NaibofTabr@infosec.pub · 5 days ago

          Well yes… I think that’s essentially what I’m saying.

          It’s debatable whether our own brains really operate any differently. For instance, if I say the word “lamppost”, your brain determines the meaning of that word based on the context of my other words around “lamppost” and also all of your past experiences that are connected with that word - because we use past context to interpret present experience.

          In an abstract, nontechnical way, training a machine learning model on a corpus of data is sort of like trying to give it enough context to interpret new inputs in an expected/useful way. In the case of LLMs, it’s an attempt to link the use of words and phrases with contextual meanings, so that a computer system can interact with natural human language (rather than with specially prepared and formatted language, like a programming language).

          It’s all just statistics though. The interpretation is based on ingesting lots of contextual uses. It can’t really understand… it has nothing to understand with. All it can do is associate newly input words with generalized contextual meanings based on probabilities.
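          A crude sketch of what “generalized contextual meaning from statistics” can look like (my own toy illustration; real models learn dense vectors rather than raw co-occurrence counts): represent each word by the counts of the words appearing around it, and two words come out “similar” exactly when they show up in similar contexts.

          ```python
          # Crude distributional "meaning": represent a word by the counts of
          # words that co-occur with it in the same sentence, then compare
          # words by the cosine similarity of those count vectors.
          import math
          from collections import Counter, defaultdict

          sentences = [
              "the lamppost lit the dark street",
              "the streetlight lit the dark road",
              "the apple fell from the tree",
          ]

          contexts = defaultdict(Counter)
          for sentence in sentences:
              words = sentence.split()
              for i, word in enumerate(words):
                  for j, neighbor in enumerate(words):
                      if i != j:
                          contexts[word][neighbor] += 1

          def similarity(a, b):
              va, vb = contexts[a], contexts[b]
              dot = sum(va[w] * vb[w] for w in va)
              norm = (math.sqrt(sum(c * c for c in va.values()))
                      * math.sqrt(sum(c * c for c in vb.values())))
              return dot / norm if norm else 0.0

          # "lamppost" and "streetlight" occur in near-identical contexts.
          print(similarity("lamppost", "streetlight"))  # ~0.86
          print(similarity("lamppost", "apple"))        # ~0.57 (mostly shared "the")
          ```

          There’s no understanding anywhere in there: “lamppost” is close to “streetlight” only because the surrounding words are. Swap the corpus and the “meaning” swaps with it.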

          • MutilationWave@lemmy.world · 5 days ago

            I wish you’d talked more about how we humans work. We are at the mercy of pattern recognition. Even when we try not to be.

            When “you” decide to pick up an apple, it’s about to be in your hand by the time your software has caught up with the hardware. Then your brain tells “you” a story about why you picked up the apple.

            • IlovePizza@lemmy.world · 5 days ago

              I really don’t think that is always true. You should see me going back and forth in the kitchen trying to decide what to eat 😅

      • Kilgore Trout@feddit.it · 5 days ago (edited)

        I mean… duh?

        My reaction too, but scientific, peer-reviewed, published studies are very important if, for example, we want to stop our judicial systems from adopting LLM AI.

    • metaStatic@kbin.earth · 6 days ago · 15 upvotes / 74 downvotes

      Plenty of people can’t reason either. The current state of AI is closer to us than we’d like to admit.

      • petrol_sniff_king@lemmy.blahaj.zone · 6 days ago

        DAE people are really stupid? 50% of all people are dumber than average, you know. Heh. NoW jUsT tHinK abOuT hOw dUmb tHe AverAgE PeRsoN iS. Maybe that’s why they can’t get my 5-shot venti caramel latte made with steamed whipped cream right. *cough* Where is my adderall.

      • Haggunenons@lemmy.world · 6 days ago

        As clearly demonstrated by the number of downvotes you are receiving, you well-reasoning human.

      • peto (he/him)@lemm.ee · 6 days ago

        I sincerely hope that people aren’t using LLM-AI to do reasoning tasks. I appreciate that I am likely wrong, but LLMs are neither the totality nor the pinnacle of AI tech. I don’t think we are meaningfully closer to AGI than we were before LLMs blew up.

      • Syrc@lemmy.world · 5 days ago

      That’s just false. People are all capable of reasoning; it’s just that plenty of them reach terribly wrong conclusions from doing it, often because they’re not “good” at reasoning. But they’re still able to do it, unlike AI (at least for now).