• finitebanjo@lemmy.world · +41 / −2 · 5 days ago

    You know, OpenAI published a paper in 2020 modelling how far they were from human-level language error rates, and it correctly predicted the accuracy of GPT-4. DeepMind also published a study in 2022 with the same kind of metrics and found that even with unlimited training data and compute, models would never break below an irreducible loss floor of about 1.69.
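
    For context, the loss model that DeepMind paper fits (Hoffmann et al., 2022) is, with its published constants, roughly

    $$ L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}, \qquad E \approx 1.69,\ \alpha \approx 0.34,\ \beta \approx 0.28, $$

    where $N$ is parameter count and $D$ is training tokens. Let both grow without bound and the loss bottoms out at the irreducible term $E$, which is where the 1.69 figure comes from (a per-token loss, not a percentage error rate).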

    These companies knew that their underlying approach was plateauing and that overfitting trashed their models.

    Sam Altman and all these other fuckers knew, they’ve always known, that their LLMs would never function perfectly. They’re convincing all the idiots on earth that they’re selling an AGI prototype while they already know it’s a dead end.

    • JasminIstMuede@lemmy.blahaj.zone · +19 · 5 days ago

      As far as I know, the DeepMind paper was actually a challenge to the OpenAI paper, suggesting that models were undertrained and, because of that, underperformed while using too much compute. They trained a 70B-parameter model (Chinchilla) on far more data and outperformed much larger models while using less compute (rough sketch of the sizing rule below). I don’t think any general conclusion about a hard ceiling on LLM performance can be drawn from this.

      However, this does not change the fact that there are areas (ones that rely on correctness) that simply cannot be replaced by this kind of model, and it is a foolish pursuit.
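
      As a back-of-the-envelope sketch of that result, using the common approximations that training compute is about C ≈ 6·N·D FLOPs and that compute-optimal training uses roughly 20 tokens per parameter (both rules of thumb, not exact values from the paper):

      ```python
      def chinchilla_optimal(compute_flops: float) -> tuple[float, float]:
          """Solve C = 6*N*D together with D = 20*N for a compute budget C,
          giving N = sqrt(C / 120) params and D = 20*N tokens (rules of thumb)."""
          n = (compute_flops / 120) ** 0.5
          return n, 20 * n

      # Gopher-scale budget: 6 FLOPs/param/token * 280e9 params * 300e9 tokens
      n, d = chinchilla_optimal(6 * 280e9 * 300e9)
      print(f"params ~ {n / 1e9:.0f}B, tokens ~ {d / 1e12:.1f}T")
      # -> params ~ 65B, tokens ~ 1.3T (close to Chinchilla's actual 70B / 1.4T)
      ```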

        • finitebanjo@lemmy.world · +7 · 5 days ago

          Human hardware is pretty impressive; we might need to move on from binary computers to emulate it efficiently.

            • finitebanjo@lemmy.world · +9 · edited · 5 days ago

              Neurons produce multiple types of neurotransmitters, which means they can have effective states beyond just on or off.

              I’m not suggesting we resurrect analogue computers, per se, but I think we need something with a little more complexity as a middle ground. It could even be something as simple as binary with conditional memory, maybe (toy sketch below). Idk. I see the problem, not the solution.

              I’m also not saying you can’t emulate it with binary, but I am saying it isn’t as efficient.
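
              As a toy illustration only (made-up names and constants, not a biological model): a binary gate carries one bit, while a unit whose output depends on a mix of transmitter-like channels plus a decaying trace of past activity has a far richer effective state.

              ```python
              from dataclasses import dataclass, field

              def binary_unit(x: float, threshold: float = 0.5) -> int:
                  # Classic two-state gate: the entire state is one bit.
                  return 1 if x > threshold else 0

              @dataclass
              class MultiChannelUnit:
                  # Hypothetical unit: graded output shaped by several channels
                  # and a leaky memory of recent activity ("conditional memory").
                  levels: dict = field(default_factory=lambda: {
                      "excite": 0.0, "inhibit": 0.0, "modulate": 0.0})
                  trace: float = 0.0

                  def step(self, **inputs: float) -> float:
                      self.levels.update(inputs)
                      drive = (self.levels["excite"] - self.levels["inhibit"]
                               + 0.5 * self.levels["modulate"] * self.trace)
                      self.trace = 0.9 * self.trace + 0.1 * drive  # decaying memory
                      return drive  # graded and history-dependent, not just 0/1

              u = MultiChannelUnit()
              print(binary_unit(0.7))                 # 1 -- one bit, no history
              print(u.step(excite=0.7, inhibit=0.2))  # ~0.5 (graded)
              print(u.step(modulate=1.0))             # ~0.525 -- depends on past trace
              ```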