• FiveMacs@lemmy.ca
    6 months ago

    Can’t wait for all these companies to lose all this money on rushed, far-from-ready-to-implement ‘tech’

    • Karyoplasma@discuss.tchncs.de
      6 months ago

      They win big because they are saving a lot from the mass lay-offs and the free advertising they get. And in the unlikely scenario where they actually face difficulties, they will just steal more money from the taxpayers in the form of a bail-out.

    • [email protected]@sh.itjust.works
      6 months ago

      They lose money for 5 years, establish AI use as mandatory to seem credible on the world stage, cause smaller businesses to spend money on a worthless resource in order to appear more successful, and win when those same smaller businesses begin folding, thus reducing competition, or win when they continue spending money on it. Regardless, AI will gradually become a norm and the companies that invested in it will have seen their investment come to fruition.

      • fmstrat@lemmy.nowsci.com
        6 months ago

        I’ve been wondering more and more if current GPT is more a side-show. Cool to look at, shows progress in tech, but more importantly sets you up as the people to build algorithms for military and surveillance use. Long-term high-margin contracts paid for by the public.

  • mynachmadarch@kbin.social
    6 months ago

    This and glue sauce are so worrisome. Sure, most people probably know better than to actually do that, but what about the ones who don’t know? How many know how bad it is to mix bleach and ammonia? How long until Google AI is poisoned enough to recommend that for a tough stain?

    • cley_faye@lemmy.world
      6 months ago

      Yes, the issue is not the glaring error we catch and laugh about; it’s the ones that fly under the radar. Those could turn out to be truly harmful.

    • Infynis@midwest.social
      6 months ago

      Bleach and ammonia is a meme, and they’re pulling from Reddit for answers, so I expect not long at all.

      • dumbass
        6 months ago

        Dang it Peggy! You taught the whole town how to make mustard gas!

    • Rolder@reddthat.com
      6 months ago

      Hmm, I feel like the people gullible enough to believe that have significant overlap with the people who wouldn’t trust Google / “Big Tech” in the first place.

    • mrgreyeyes@feddit.nl
      6 months ago

      The AI is going to play World of Warcraft for the next few years while it comes of age.

  • 2deck@lemmy.world
    6 months ago

    Just imagine how many not-so-obvious or nuanced ‘facts’ are being misrepresented. Right there, under billions of searches.

    There will be ‘fixes’ for this, but it’s never been easier to shape ‘the truth’ and public opinion.

    • Flying Squid@lemmy.worldOPM
      6 months ago

      It’s worse. So much worse. Now ChatGPT will have a human voice with simulated emotions that sounds eminently trustworthy and legitimately intelligent. The rest will follow quickly.

      People will be far more convinced of lies being told by something that sounds like a human being sincere. People will also start believing it really is alive.

      • 2deck@lemmy.world
        6 months ago

        Inb4 summaries and opinion pieces start including phrases like “think of the children”, “may lead to dire consequences” and “should concern everybody”

        • Flying Squid@lemmy.worldOPM
          6 months ago

          My point is that, a lot of the time, people will trust something that sounds like it is being said sincerely by a living person more than they will regular text results, because the “living person” sounds like they have emotions, which makes them sound like a member of our species, which makes them sound more trustworthy.

          There’s a reason why predators sometimes disguise themselves, or part of themselves, as their prey. The anglerfish wouldn’t be as successful without that little light telling nearby fish “mate with me.”

          • NeatNit@discuss.tchncs.de
            6 months ago

            I didn’t make any comment about what you’re saying, I saw your point and had nothing to add.

            A garden path sentence is one where you read it wrong the first time around and have to backtrack to understand it, for example: the old man the boat.

            “a human being” is normally a noun, but then it turns out “being” is actually a verb.

            https://en.wikipedia.org/wiki/Garden-path_sentence

  • EnderMB@lemmy.world
    6 months ago

    I work in AI.

    We’ve known this about LLMs for many years. One of the reasons they weren’t widely used was hallucinations, where they can be coerced into saying something confidently incorrect. OpenAI created a great set of tools that showed true utility for LLMs, and people were largely able to accept that even if it’s sometimes wrong, it’s good for basic tasks like writing a doc outline or filling in boilerplate in scripts.

    Sadly, grifters have decided that LLMs are the future, and they’ve put them into applications where they have no more benefit than other, compositional models. While they’re great at orchestration, they’re just not suited to search, answering broad questions with limited knowledge, or voice-based search - all areas they’ll be launched in. This doesn’t even scratch the surface of an LLM being used for critical subjects that require knowledge of health or the law, because those companies that decided that AI will build software for them, or run HR departments, are going to be totally fucked when a big mistake happens.

    It’s an arms race that no one wants, and one that arguably hasn’t created anything worthwhile yet, outside of a wildly expensive tool that will save you some time. What’s even sadder is that I bet you could go to any of these big tech companies and ask IC’s if this is a good use of their time and they’ll say no. Tens of thousands of jobs were lost, and many worthwhile projects were scrapped so some billionaire cunts could enter an AI pissing contest.

  • FIST_FILLET@lemmy.ml
    6 months ago

    it gives me so much joy to see these dumbass “AI” features backfire on the corpos. did you guys know that nutritionists recommend drinking at least one teaspoon of liquid chlorine per day? source: i am an expert. i own CNN, Reuters, The Guardian and JSTOR. i have a phd in human hydration and my thesis was about how olympic athletes actually performed 6% better on average when they supplemented their meals with a spoonful of liquid chlorine.

  • sepi@piefed.social
    6 months ago

    Looking forward to one of my stupid comments coming up as an answer for a real query on google.