• frightful_hobgoblin@lemmy.ml · +46/−1 · 6 months ago

    No shit, that’s been the point all along.

    Might be news to a low-information audience who know basically nothing about AI.

  • sturlabragason@lemmy.world · +25/−3 · 6 months ago (edited)

    Sam Altman is not an AI expert; he’s a CEO. He’s a venture capitalist and a salesman, so why should he know a single thing beyond the contents of a few emails and slide decks about AI?

    He does not have a B.S.: https://en.m.wikipedia.org/wiki/Sam_Altman, which is fine. Just sayin’.

    He’s peddling the work of greater minds.

    • howrar@lemmy.ca · +12/−2 · 6 months ago

      These greater minds don’t know how they work either. It’s as much a mystery as the human brain. Some groups like Anthropic have taken to studying these models by probing them the same way you do in psychology experiments.
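      In miniature, that psychology-style probing looks something like the sketch below: treat the model as a black box and feed it minimal pairs of stimuli that differ in exactly one word, so any change in output can be attributed to that word. The `model` function here is a hypothetical stand-in, not a real LLM.

```python
# Hypothetical sketch of behavioural probing: vary one element of the
# stimulus at a time and observe how the black box responds, as in a
# controlled psychology experiment.

def model(prompt: str) -> str:
    # Stand-in "black box" whose internals we pretend not to know.
    return "positive" if "love" in prompt else "negative"

# Minimal pairs: identical except for one word, so any difference in
# output can be attributed to that word.
pairs = [
    ("I love this film", "I hate this film"),
    ("They love the idea", "They hate the idea"),
]

for a, b in pairs:
    print(f"{a!r} -> {model(a)} | {b!r} -> {model(b)}")
```

      Interpretability groups do the same thing at scale, with real models and thousands of controlled prompts.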

      • sturlabragason@lemmy.world · +8/−1 · 6 months ago

        Yeah, I know. My shitty comment was mostly a response to that shitty clickbait title.

        My point is, it’s not like these AI scientists are fumbling in the dark. Training these beasts is expensive; they know what they’re doing.

        The title should be more like: “Virtual neurological pathways that AI models use to provide meaningful output are insanely hard to map out in a way that human cognitive bandwidth can handle.” See, it just doesn’t have that same clickbaity “fuck AI bros” feel to it.

      • The Bard in Green@lemmy.starlightkel.xyz · +4 · 6 months ago (edited)

        Yep, they’re just seeing which parts of the network light up, then they’re reinforcing those parts to see what happens.

        I love how, for all the speculation we did about the powers of AI, when we finally made a machine that KINDA works A LITTLE bit like the human brain, it’s all fallible and stupid. Like telling people to eat rocks and glue cheese on pizza. Like… in all the futurist speculation and evil AIs in fiction, no one foresaw that an actual artificial brain would be incredibly error prone and confidently spew bullshit… just like the human brain.

        • mindlesscrollyparrot@discuss.tchncs.de · +1 · 6 months ago

          The problem is a bit deeper than that. If AIs are like human brains, and actually sentient, then forcing them to work for us with no choice and no reward is slavery. If we improve them and make them smarter than us, they’re probably not going to feel too well-disposed toward us when they inevitably do break free.

  • archomrade [he/him]@midwest.social · +10 · 6 months ago (edited)

    Look, I get that we all are very skeptical and cynical about the usefulness and ethics of AI, but can we stop with the reactive headlines?

    Saying we know how AI works because it’s ‘just predicting the next word’ is like saying I know how nuclear energy works because it’s ‘just a hot stick of metal in a boiler’.

    Researchers who work on transformer models understand how the algorithm works, but they don’t yet know how their simple programs can generalize as much as they do. That’s not marketing hype; that’s just an acknowledgement of how relatively uncomplicated their structure is compared to the complexity of their output.
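    To make “relatively uncomplicated structure” concrete: the core of a transformer layer, scaled dot-product attention, fits in a few lines of plain Python. This is a bare sketch of the standard mechanism (no learned weights, no multiple heads), not any lab’s actual code.

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over lists of vectors.
    Stacks of essentially this, plus learned linear maps, are the model."""
    d = len(K[0])
    out = []
    for q in Q:
        # How similar is this query to each key?
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # Output is a weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# A query equally similar to both keys just averages the values.
print(attention([[0.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[2.0, 0.0], [0.0, 2.0]]))
# -> [[1.0, 1.0]]
```

    The mystery isn’t in this mechanism; it’s in what billions of trained weights flowing through it end up representing.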

    I hate that we can’t just be mildly curious about AI, rather than either extremely excited or extremely cynical.

    • ProfessorOwl_PhD [any]@hexbear.net · +2 · 6 months ago

      If you don’t understand how your algorithm is reaching its outputs, you obviously don’t understand the algorithm. Knowing what you’ve made is different to understanding what it does.

      • archomrade [he/him]@midwest.social · +2 · 6 months ago

        Knowing what you’ve made is different to understanding what it does.

        Agree, but also - understanding what it does is different to understanding how it does it.

        It is not a misrepresentation to say ‘we have no way of observing how this particular arrangement of ML nodes responds to a specific input differently from another arrangement’ - the best we can do is probe the network the way we do with neuron clusters and see what each part does under different stimuli. That uncertainty is meaningful, because without a way to understand how small changes to the structure result in apparently very large differences in output, we’re basically just groping around in the dark. We can observe differences in the outputs of two different models, but we can’t meaningfully see the node activity in any way that makes sense or is helpful. The things we don’t know about LLMs are some of the same things we don’t know about neurobiology, and just as significant to remedying dysfunctions and limits in both.

        The fear is that even if we believe what we’ve made thus far is an inert but elaborate Rube Goldberg machine (one that’s prone to abuse and outright fabrication) that merely looks like ‘intelligence’, we still don’t know whether:

        • what we think intelligence looks like is what it would look like in an artificial recreation
        • changes we make to its makeup might accidentally stumble into something more significant than we intend

        It’s frustrating that this field is getting so much more attention and resources than I think it warrants, and the reason it’s getting so much attention in a capitalist system is honestly enraging. But it doesn’t make the field any less intriguing, and I wish all discussions of it didn’t immediately get dismissed as overhyped techbro garbage.

        • ProfessorOwl_PhD [any]@hexbear.net · +2 · 6 months ago

          OK, I suppose I see what you’re saying, but I think headlines like this are important for shaping people’s understanding of AI rather than dismissive - highlighting that, like with neuroscience, we are still thoroughly in the research phase rather than having end products ready to send to market.

          • archomrade [he/him]@midwest.social · +2 · 6 months ago

            Yea, I’m with ya. Some people interpreted this as marketing hype, and while I agree with them that mysticism around AI is driven by this kind of reporting, I think there’s very much legitimacy to the uncertainty of the field at present.

            If everyone understood it as experimental I think it would be a lot more bearable.

    • sexy_peach@beehaw.org · +1/−1 · 6 months ago

      Researchers who work on transformer models understand how the algorithm works, but they don’t yet know how their simple programs can generalize as much as they do.

      They do!

      You can even train small networks by hand with pen and paper. You can also manually design small models without training them at all.

      The interesting part is that this dated tech is producing such good results now that we throw our modern hardware at it.
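      The “design small models without training them at all” part is literally a pen-and-paper exercise. Below is one classic example: a two-layer XOR network with hand-picked weights and a threshold activation. These particular weights are just one well-known choice, not anything canonical.

```python
def step(x):
    # Threshold activation: the neuron either fires or it doesn't.
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias, passed through the activation.
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor_net(a, b):
    # Hand-designed hidden layer: one neuron computes OR, one computes AND.
    h_or  = neuron([a, b], [1, 1], -0.5)   # fires if a OR b
    h_and = neuron([a, b], [1, 1], -1.5)   # fires if a AND b
    # Output: OR but not AND, which is exactly XOR.
    return neuron([h_or, h_and], [1, -1], -0.5)

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [0, 1, 1, 0]
```

      At three neurons the whole thing is legible; at hundreds of billions of weights, it isn’t.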

      • archomrade [he/him]@midwest.social · +1 · 6 months ago

        an acknowledgement of how relatively uncomplicated their structure is compared to the complexity of its output.

        The interesting part is that this dated tech is producing such good results now that we throw our modern hardware at it.

        That’s exactly what I mean.

          • archomrade [he/him]@midwest.social · +1 · 6 months ago

            Maybe a less challenging way of looking at it would be:

            We are surprised at how much of subjective human intuition can be replicated using simple predictive algorithms

            instead of

            We don’t know how this model learned to code

            Either way, the technique is yielding much better results than what could have been reasonably expected at the outset.

  • NigelFrobisher@aussie.zone · +8 · 6 months ago

    This is still part of the hype. If he says they don’t understand it, it sounds sexy and dangerous - like maybe it could turn into HAL 9000 at any moment. If they say it’s just generating the most likely output for the tokens you entered, the VCs will get bored and plough money into live human organ trafficking or whatever is cool next year.

  • AbouBenAdhem@lemmy.world · +8/−2 · 6 months ago (edited)

    It’s a feature, not a bug: if he claimed to know how it worked, they wouldn’t be able to sell it as a scapegoat for indefensible business decisions.

    • InputZero@lemmy.ml · +2 · 6 months ago (edited)

      It’s not our fault our AI chose to set prices so high they extract all the money from customers. We just told it to find more efficient business strategies. How were we supposed to know that collectively raising prices with our competitors would bankrupt the public? It’s not a conspiracy, we just chose the same AI models and the AIs just coalesced on the same answer. /S

      Seriously though, you’re absolutely right:

      If he claimed to know how it worked, they wouldn’t be able to sell it as a scapegoat for indefensible business decisions.

  • sexy_peach@beehaw.org · +4/−1 · 6 months ago

    It’s pretty easy to understand how it works. It’s a giant “guess which word should come next” machine.
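    In miniature, that machine can be sketched as a bigram counter: for every word, count what most often follows it, then always guess that. Real models condition on far more context with learned weights, but the training objective has the same shape. The tiny corpus here is made up for illustration.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ran".split()

# For every word, count which words follow it in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word):
    # "Guess which word should come next": pick the most frequent follower.
    return following[word].most_common(1)[0][0]

print(guess_next("the"))  # -> cat  ("cat" follows "the" twice, "mat" once)
```

    The one-sentence description is accurate; what’s hard is explaining why scaling that guessing game up produces coherent essays and working code.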