• Rookwood@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    11
    ·
    2 days ago

    Current AI is just going to be used to further disenfranchise citizens from reality. It’s going to be used to spread propaganda and create noise so that you can’t tell what is true and what is not anymore.

    We already see people like Elon using it in this way.

  • UnderpantsWeevil@lemmy.world
    link
    fedilink
    English
    arrow-up
    9
    ·
    2 days ago

    McDonalds removes AI drive-throughs after order errors because they aren’t generating increased profits

    Schools, doctor’s offices, and customer support services will continue to use them because reducing quality of service appears to have no impact on the influx in private profit-margin on revenue.

  • Allonzee@lemmy.world
    link
    fedilink
    arrow-up
    46
    ·
    edit-2
    3 days ago

    They just want to make an economy they don’t have to pay anyone to profit from. That’s why slavery became Jim Crow became migrant labor and with modernity came work visa servitude to exploit high skilled laborers.

    The owners will make sure they always have concierge service with human beings as part of upgraded service, like they do now with concierge medicine. They don’t personally suffer approvals for care. They profit from denying their livestock’s care.

    Meanwhile we, their capital battery livestock property, will be yelling at robots about refilling our prescription as they hallucinate and start singing happy birthday to us.

    We could fight back, but that would require fighting the right war against the right people and not letting them distract us with subordinate culture battles against one another. Those are booby traps laid between us and them by them.

    Only one man, a traitor to his own class no less, has dealt them so much as a glancing blow, while we battle one another about one of the dozens of social wedges the owners stoke through their for profit megaphones. “Women hate men! Christians hate atheists! Poor hate more poor! Terfs hate trans! Color hate color! 2nd Gen immigrants hate 1st Gen immigrants!” On and on and on and on as we ALL suffer less housing, less food, less basic needs being met. Stop it. Common enemy. Meaningful Shareholders.

    And if you think your little 401k makes you a meaningful shareholder, please just go sit down and have a juice box, the situation is beyond you and you either can’t or refuse to understand it.

    • ameancow@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      3 days ago

      And if you think your little 401k makes you a meaningful shareholder

      “In this company we’re all like family, you don’t have to worry about anything.”

    • FMT99@lemmy.world
      link
      fedilink
      arrow-up
      1
      arrow-down
      2
      ·
      2 days ago

      I mean I don’t know how it is where you live but here taking the orders has been 99% supplanted by touch screens (without AI) So yeah, a machine can do that job.

  • kibiz0r@midwest.social
    link
    fedilink
    English
    arrow-up
    163
    arrow-down
    1
    ·
    4 days ago

    In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate.

    But that’s not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is “companies will buy our products so they can do more with less.” It’s not “business custom­ers will buy our products so their products will cost more to make, but will be of higher quality.”

    Cory Doctorow: What Kind of Bubble is AI?

    • dance_ninja@lemmy.world
      link
      fedilink
      arrow-up
      41
      arrow-down
      2
      ·
      4 days ago

      AI tools like this should really be viewed as a calculator. Helpful for speeding up analysis, but you still require an expert to sign off.

    • Apytele@sh.itjust.works
      link
      fedilink
      arrow-up
      12
      arrow-down
      3
      ·
      4 days ago

      Very much so. As a nurse the AI components I like are things that bring my attention to critical results (and combinations of results) faster. So if my tech gets vitals and the blood pressure is low and the heart rate is high and they’re running a temperature, I want it to call both me and the rapid response nurse right away and we can all sort out whether it’s sepsis or not when we get to the room together. I DON’T want it to be making decisions for me. I just want some extra heads up here and there.

      • albert180@discuss.tchncs.de
        link
        fedilink
        arrow-up
        8
        arrow-down
        3
        ·
        3 days ago

        You don’t need AI for this and it’s probably Not using “AI”

        Also in other Countries there is No bullshit Separation between Nurses and “Techs”

        • KubeRoot@discuss.tchncs.de
          link
          fedilink
          English
          arrow-up
          6
          arrow-down
          1
          ·
          3 days ago

          What they’re describing is the kind of thing where the “last-gen” iteration/definition of AI, as in pretrained neural networks, are very applicable - taking in various vitals as inputs and outputting a value for if it should be alarming. For simple things you don’t need any of that, but if you want to be detecting more subtle signs to give an early warning, it can be really difficult to manually write logic for that, while machine learning can potentially catch cases you didn’t even think of.

        • Apytele@sh.itjust.works
          link
          fedilink
          arrow-up
          3
          arrow-down
          5
          ·
          edit-2
          3 days ago

          Also in other Countries there is No bullshit Separation between Nurses and “Techs”

          do you want a sticker? ⭐

          • MutilationWave@lemmy.world
            link
            fedilink
            arrow-up
            3
            arrow-down
            7
            ·
            3 days ago

            Hey, do you want to go fuck yourself?

            I work in the hospital world and tele techs are working harder than nurses 99% of the time. So what job title do you have that makes you feel special? ⭐🖕⭐

            • albert180@discuss.tchncs.de
              link
              fedilink
              arrow-up
              1
              ·
              3 days ago

              In the Hospital World where I Work there are Nurses and Doctors delivering care. No need for thousand bullshit sub Jobs aimed at cutting wages and making Patient Care worse

    • fine_sandy_bottom@discuss.tchncs.de
      link
      fedilink
      arrow-up
      4
      arrow-down
      1
      ·
      4 days ago

      Ideally, yeah - people would review and decide first, then check if the AI opinion confers.

      We all know that’s just not how things go in a professional setting.

      Anyone, including me, is just going to skip to the end and see what the AI says, and consider whether it’s reasonable. Then spend the alotted time goofing off.

      Obviously this is not how things ought to be, but it’s how things have been every time some new tech improves productivity.

    • SquatDingloid@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      7
      ·
      2 days ago

      Machine Learning is awesome for medicine, when they run your genetic sequence and then say “we should check for this weird genetic illness that very few people have because it’s likely you’ll have it” that comes from Machine Learning algorithms finding patterns in the old patient data we feed it.

      Machine Learning is great for finding discrepancies in big data sets, like statistics of illnesses.

      Machine Learning (AI) is incapable of making good decisions based on that statistical analysis though, which is why it’s still a horrible idea to totally automate medicine.

      • jagged_circle@feddit.nl
        link
        fedilink
        English
        arrow-up
        4
        ·
        2 days ago

        It also makes tons of mistakes and false-positives.

        There’s a right way to use it, and the wrong way is by using proprietary algorithms that haven’t published openly and reviewed by the government and experts. And with failsafes to override the decisions made by the algorithms, in recognition that they often make terrible mistakes that disproportionately harm minorities.

  • supersquirrel@sopuli.xyz
    link
    fedilink
    arrow-up
    32
    arrow-down
    5
    ·
    edit-2
    3 days ago

    Yeah fuck AI but can we stop shitting on fast food jobs like they are embarassing jobs to have that are somehow super easy.

    What you should hate about AI is the way it is used as a concept to dehumanize people and the labor they do and this kind of meme/statement works against solidarity in our resistance by backhandedly insulting people working in fastfood.

    Is it the most complicated job in the world? Probably not, but that doesn’t mean these jobs aren’t exhausting and worthy of respect.

    The whole point of AI is to provide a narrative framework that allows the ruling class to further dehumanize labor and treat workers worse (because replacement with automation is just around the corner).

    Realize that agreeing to this framework of low paid jobs as easy and worthless plays right into the actual reasons the ruling class are pushing AI so hard. The true power is in the story not the tech.

    • Notyou@sopuli.xyz
      link
      fedilink
      arrow-up
      9
      ·
      3 days ago

      I have to had so many conversations with people still thinking fast food is only for high school kids. It’s odd. If I say how will they be open during school hours, they make up some bullshit ‘get a better job.’ It doesn’t make snese. Most of these people don’t have good jobs and are lucky to be supported in their current lifestyle. They don’t see that though.

      I try to push the point of ‘they are paying for your time and for you to be on standby.’ you don’t need to be actively moving all 8 hours. Your bosses don’t. I’ve seen so many waste of time meetings to justify their welfare jobs. It’s comical. They don’t produce value. They are leeches. Not all, but too many.

      • StopTouchingYourPhone@lemmy.world
        link
        fedilink
        arrow-up
        8
        ·
        3 days ago

        I hate that talking point so much (and hear it all the time from people complaining about immigrants turkin ur jerbs). The Fast-Food-Jobs-Are-Brutal-And-Pay-Shit-Wages-Because-They’re-Building-Teen-Character narrative is anti-worker bullshit that denies folk job security and a living wage.

        Someone’s widowed nan needs this job. The single dad living next door needs this job. A diverse workforce - that includes young people looking for a summer gig - need this job.

    • ameancow@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      edit-2
      3 days ago

      Can we also talk about how much everyone, everywhere relies on service industry workers and how much everyone would absolutely lose their goddamn minds if they had to make their own burgers and fries twice a week, AND how these staple institutions, jobs we deemed so important that we made people work at them during a pandemic, how much the prices of these sandwiches and snacks has gone up in the last few years, how even bringing up the possibility of increasing minimum wage for these difficult and demanding jobs leads to an entire social “discourse” and fierce debates about if people should be able to afford things.

      • supersquirrel@sopuli.xyz
        link
        fedilink
        arrow-up
        1
        ·
        edit-2
        2 days ago

        Also centrists who think of themselves as tech savy will smugly tell you the only way technology can improve fastfood workers lives is by eliminating jobs and thus all the ruling class has to do is push inflation up and these types of people will shout down anyone who argues we need to pay fastfood workers more to compensate because that must be pushing against the “natural” path of technological progress.

        It is just another form of bootlicking honestly.

        • ameancow@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          22 hours ago

          The AI cult/singularity bros is absolutely a bootlicking cult, if not licking the boots of the giant tech companies that have no intention of making the world better, then they’re licking the imaginary boots of some kind of AI-mommy that they predict will just “be invented” any day now, aaaannnny day, and that AI will make everyone wealthy.

          Literally, they think an artificial super intelligence will help them pick stocks and invest and everyone will be rich. Don’t dare ask how, just believe it. Don’t ask what the several billion people are going to do who live subsistence lifestyles working land and manual labor to support our entire infrastructure. I guess they’ll also pick the right stocks and get rich and all the presidents and corporate leaders will just throw their hands in the air as their accumulated wealth becomes worthless overnight.

          I am so tired of human ignorance and escapism. We gotta live in the now, and solve the problems we have right now, and stop finding creating ways to blame others so we don’t have to do the hard shit.

          • supersquirrel@sopuli.xyz
            link
            fedilink
            arrow-up
            2
            ·
            edit-2
            21 hours ago

            I agree and to sharpen the edge to this point even more, this is also about centrists looking to AI for hope because they have utterly and completely ceded control over narratives about what kind of futures are possible or desirable to conservatives and the ultrawealthy.

            People think the best way towards a more humane society is by beating around the bush and never drawing a line in the sand for when abuse and exploitation have gone far enough and while it is understandable to a degree as an individual coping strategy, it is precisely this kind of societal mindset that fascism catches on and grows like wildfire in.

            This kind of escapism can only lead one place in the end.

    • Syrc@lemmy.world
      link
      fedilink
      arrow-up
      2
      arrow-down
      2
      ·
      3 days ago

      I don’t think it’s shitting on fast food jobs at all. The point of this is that taking orders at a fast food is, in the micro, an extremely easy task. What makes the job as a whole exhausting is the fact that you have to do that for a full shift and the human brain gets stressed from doing that. But AI doesn’t, and yet it’s messing up the simplest part of the job.

      • supersquirrel@sopuli.xyz
        link
        fedilink
        arrow-up
        7
        arrow-down
        1
        ·
        edit-2
        3 days ago

        I don’t agree we can just authoritatively state in broad terms that working fastfood is extremely easy in any framing, especially for shit pay and lack of quality recuperation time associated with getting treated like you aren’t really a human being (more like an approximation of a robot).

        That is my whole point.

        • Syrc@lemmy.world
          link
          fedilink
          arrow-up
          3
          arrow-down
          2
          ·
          3 days ago

          I never said “working fast food” is extremely easy. What I said is, listening to a customer speaking and just relying that to a machine is extremely easy.

          Doing that for a full shift is NOT easy. Doing that while being stressed because the pay is shit and you might even have another job on top of that is NOT easy. Being treated as a robot for half of your non-sleeping life is NOT easy. But all of those things are not easy for a human. None of these are issues for a software, whose hardest task is simply “listening to a customer speaking and just relying that to a machine”, which is, taking out of the equation human matters like stress, emotions and whatnot, extremely easy.

          • supersquirrel@sopuli.xyz
            link
            fedilink
            arrow-up
            4
            ·
            edit-2
            3 days ago

            You are subscribing to an abstraction of the inherently human labor of preparing a to-go meal for someone that assumes one can or should utterly remove the human aspect of that interaction.

            …and before someone comes at me with some form of an argument that I am arguing against a future with automation that will be better for everyone I want to emphasize that is again accepting a number of framings implicitly without first critically examining them.

            For one, why is the profession of feeding people hot food in a speedy manner in remote places or late hours considered so unworthy of a basic respect that people constantly shit on it as a job?

            If it is truly as demeaning and inhuman as we all casually assume when we use fast food labor as the butt of our points, as an insult in the form of association, than why can we only ever ask of technology in the context of the food service industry “how do we remove the humanity from this thing?” and never “how do we restore or embue humanity to this thing?”.

            In otherwords, why does fastfood work have to be seen as unworthy of being considered a respectable job? If there is an existential crisis here to be solved it is clearly not with helping massive corporations further slash operating costs and investments in stable decent employment, but with examining and addressing what horrifically went wrong that we have slept walk (by and large) into thinking this is an ok or healthy way to think about other human beings.

            • Syrc@lemmy.world
              link
              fedilink
              arrow-up
              3
              ·
              3 days ago

              …I don’t think I understood your point. I’ll try giving my answers to these questions but I’m sure I misunderstood most of them.

              For one, why is the profession of feeding people hot food in a speedy manner in remote places or late hours considered so unworthy of a basic respect that people constantly shit on it as a job?

              In otherwords, why does fastfood work have to be seen as unworthy of being considered a respectable job?

              Because it’s a terrible job that I don’t think anyone actually wants to do. We’ve already talked about how stressful and unsatisfying it is as a job, there’s pretty much no upside to it.

              why can we only ever ask of technology in the context of the food service industry “how do we remove the humanity from this thing?” and never “how do we restore or embue humanity to this thing?”

              Personally, because I don’t think it’s possible. It’s a very “mechanical” job (save a very small number of people like restaurant chefs), and giving it “humanity” (less stressful shifts, less pressure and higher pay) is counterproductive to both what companies want (more money) and what customers want (to eat food for cheap and quickly, even at odd times or in odd places).

              I think it’s one of the best jobs to be replaced because it’s easy (for a machine) and no human actually likes doing it. The issue is, of course, that the cut costs will go straight to the pockets of the CEOs and will not be used to improve the customer experience (or at least make it cheaper), so the working class will just have less jobs while having to pay the same to eat, but that’s a widespread issue with capitalism that’s far harder to fix.

              If there is an existential crisis here to be solved it is clearly not with helping massive corporations further slash operating costs and investments in stable decent employment, but with examining and addressing what horrifically went wrong that we have slept walk (by and large) into thinking this is an ok or healthy way to think about other human beings.

              I feel like you’re conflating two things here: people that don’t consider “working at a fast food” worthy of respect (imo rightfully, because again, it’s a terrible job), and people that don’t consider “people who work at a fast food” worthy of respect (probably because they believe in the “hustler” mentality and are convinced that it’s their fault if they’re stuck with a shitty job).

              My opinions on a job and on someone who work at said job are vastly different, and not just for the food industry. I’m guessing a lot of people also think similarly, I’ve never seen people shit on fast food workers as people, except for the aforementioned delusional types who think anyone could be a billionaire if they just put in “enough work”.

              Again, sorry but I don’t think I really got the meaning of your last comment so do tell me if I completely missed your point and all my answers were gibberish based on assumptions I had.

              • supersquirrel@sopuli.xyz
                link
                fedilink
                arrow-up
                3
                arrow-down
                1
                ·
                edit-2
                2 days ago

                Personally, because I don’t think it’s possible. It’s a very “mechanical” job (save a very small number of people like restaurant chefs), and giving it “humanity” (less stressful shifts, less pressure and higher pay) is counterproductive to both what companies want (more money) and what customers want (to eat food for cheap and quickly, even at odd times or in odd places).

                I think it’s one of the best jobs to be replaced because it’s easy (for a machine) and no human actually likes doing it.

                These two paragraph are full of the common assumptions and generalizations we assert as a society about fastfood work and frankly I am tired of having to nod my head and pretend like they are indisputable facts. Nothing you said is evidence, you have just dutifully sketched out the narrative we use to dehumanize fastfood work (and other “essential work”).

                My opinions on a job and on someone who work at said job are vastly different, and not just for the food industry. I’m guessing a lot of people also think similarly, I’ve never seen people shit on fast food workers as people, except for the aforementioned delusional types who think anyone could be a billionaire if they just put in “enough work”.

                You are participating in a very dangerous slight of hand here by saying that in a society that utterly defines your worth and potential from your job that it is theoretically reasonable to disparage a job because why would anyone ever conflate a person with their job??

                Everything about our society conflates the identity of people with their job (especially along vectors of oppression), any attempt to divide those two except as basically an academic excersize is pointless and harmfully obscures the extremely class based rigidity of the society we live in (speaking as a USian, tho I am sure the pattern isnt tooo different elsewhere).

                People have been convinced by the rich to think fastfood work is demeaning, pathetic and worthless and I think it is honestly pretty disgusting how willing people are to jump on that bandwagon and do free work for the ruling class in helping undermine worker leverage to demand a decent life.

                • Syrc@lemmy.world
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  2 days ago

                  These two paragraph are full of the common assumptions and generalizations we assert as a society about fastfood work and frankly I am tired of having to nod my head and pretend like they are indisputable facts. Nothing you said is evidence, you have just dutifully sketched out the narrative we use to dehumanize fastfood work (and other “essential work”).

                  …so what exactly is wrong about what I said? You’re saying they’re assumptions and generalizations but didn’t bring any counterpoint.

                  People have been convinced by the rich to think fastfood work is demeaning, pathetic and worthless and I think it is honestly pretty disgusting how willing people are to jump on that bandwagon and do free work for the ruling class in helping undermine worker leverage to demand a decent life.

                  I… really don’t think that’s what’s happening? At least barring the aforementioned delusional people. If anything, jobs that are considered horrible and demeaning like certain teachers and nurses get MORE sympathy from the public exactly because we see that’s a terrible way of living and that’s not okay.

                  What do you think we should do then? Act like it’s an awesome job and everyone is happy doing it? Wouldn’t that have the opposite effect of making people think all is good and nothing needs improvement?

  • finitebanjo@lemmy.world
    link
    fedilink
    arrow-up
    41
    arrow-down
    2
    ·
    3 days ago

    You know, OpenAI published a paper in 2020 modelling how far they were from human language error rate and it correctly predicted the accuracy of GPT 4. Deepmind also published a study in 2023 with the same metrics and discovered that even with infinite training and power it would still never break 1.69% error rate.

    These companies knew that their basic model was failing and that overfitying trashed their models.

    Sam Altman and all these other fuckers knew, they’ve always known, that their LLMs would never function perfectly. They’re convincing all the idiots on earth that they’re selling an AGI prototype while they already know that it’s a dead-end.

    • JasminIstMuede@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      19
      ·
      3 days ago

      As far as I know, the Deepmind paper was actually a challenge of the OpenAI paper, suggesting that models are undertrained and underperform while using too much compute due to this. They tested a model with 70B params and were able to outperform much larger models while using less compute by introducing more training. I don’t think there can be any general conclusion about some hard ceiling for LLM performance drawn from this.

      However, this does not change the fact that there are areas (ones that rely on correctness) that simply cannot be replaced by this kind of model, and it is a foolish pursuit.

        • finitebanjo@lemmy.world
          link
          fedilink
          arrow-up
          7
          ·
          3 days ago

          Human hardware is pretty impressive, might need to move on from binary computers to emulate it efficiently.

            • finitebanjo@lemmy.world
              link
              fedilink
              arrow-up
              9
              ·
              edit-2
              3 days ago

              Neurons produce multiple types of neurotransmitters. That means they can have an effective state different from just on or off.

              I’m not suggesting we resurrect analogue computers, per se, but I think we need to find something with a little more complexity for a good middle ground. It could even be something as simple as binary with conditional memory, maybe. Idk. I see the problem not the solution.

              I’m also not saying you can’t emulate it with binary, but I am saying it isn’t as efficient.

  • activ8r@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    38
    arrow-down
    2
    ·
    3 days ago

    If I’ve said it once, I’ve said it a thousand times. LLMs are not AI. It is a natural language tool that would allow an AI to communicate with us using natural language…

    What it is being used for now is just completely inappropriate. At best this makes a neat (if sometimes inaccurate) home assistant.

    To be clear: LLMs are incredibly cool, powerful and useful. But they are not intelligent, which is a pretty fundamental requirement of artificial intelligence.
    I think we are pretty close to AI (in a very simple sense), but marketing has just seen the fun part (natural communication with a computer) and gone “oh yeah, that’s good enough. People will buy that because it looks cool”. Nevermind that it’s not even close to what the term “AI” implies to the average person and it’s not even technically AI either so…

    I don’t remember where I was going with this, but capitalism has once again fucked a massive technical breakthrough by marketing it as something that it’s not.

    Probably preaching to the choir here though…

    • glitchdx@lemmy.world
      link
      fedilink
      English
      arrow-up
      11
      ·
      3 days ago

      We also have hoverboards. Well, “hoverboards”, because that’s the branding. They have wheels, and don’t hover.

    • Swordgeek@lemmy.ca
      link
      fedilink
      arrow-up
      7
      ·
      3 days ago

      Yep, a great summary.

      I keep telling people that what they call AI (e.g. LLMs) are fancy autocomplete. Little more.

      • Flying Squid@lemmy.world
        link
        fedilink
        arrow-up
        5
        ·
        3 days ago

        They’re sentence-constructing machines. Very advanced ones. There was one in the 80s called Racter that spat out a lot of legible text that was basically babble. Now it looks like it isn’t babble and that’s sometimes the case.

      • activ8r@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        4
        ·
        3 days ago

        Essentially auto-predict 2.0

        Fucking cool and it annoys me to no end that it gets slated because of unrealistic expectations.

    • cmhe@lemmy.world
      link
      fedilink
      arrow-up
      5
      ·
      3 days ago

      Well it seems like a pretty natural fallacy to think that if something talks to us, in a language that we understand, that it must be intelligent. But it also doesn’t help that LLMs, aka. fancy text generators built with machine learning algorithms, are marketed as artificial intelligence.

    • doktormerlin@feddit.org
      link
      fedilink
      arrow-up
      4
      ·
      3 days ago

      The LLMs can also be EXTREMELY useful, if used correctly.

      Instead of replacing customer service workers, use the speech processing to highlight keywords on the service workers PC, so they can quickly find the right internal wiki page? Atlassian Intelligence works pretty neat in that way, a Help desk ticket already has some keywords highlighted and when you click on it, it shows an AI summary of what this means from resources in the Atlassian account. Helps inexperienced people to quickly get up to speed and it’s only helping, not replacing.

    • NaibofTabr@infosec.pub
      link
      fedilink
      English
      arrow-up
      32
      arrow-down
      8
      ·
      4 days ago

      I mean… duh? The purpose of an LLM is to map words to meanings… to derive what a human intends from what they say. That’s it. That’s all.

      It’s not a logic tool or a fact regurgitator. It’s a context interpretation engine.

      The real flaw is that people expect that because it can sometimes (more than past attempts) understand what you mean, it is capable of reasoning.

      • vithigar@lemmy.ca
        link
        fedilink
        arrow-up
        20
        ·
        3 days ago

        Not even that. LLMs have no concept of meaning or understanding. What they do in essence is space filling based on previously trained patterns.

        Like showing a bunch of shapes to someone, then drawing a few lines and asking them to complete the shape. And all the shapes are lamp posts but you haven’t told them that and they have no idea what a lamp post is. They will just produce results like the shapes you’ve shown them, which generally end up looking like lamp posts.

        Except the “shape” in this case is a sentence or poem or self insert erotic fan fiction, none of which an LLM “understands”, it just matches the shape of what’s been written so far with previous patterns and extrapolates.

        • NaibofTabr@infosec.pub
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          1
          ·
          3 days ago

          Well yes… I think that’s essentially what I’m saying.

          It’s debatable whether our own brains really operate any differently. For instance, if I say the word “lamppost”, your brain determines the meaning of that word based on the context of my other words around “lamppost” and also all of your past experiences that are connected with that word - because we use past context to interpret present experience.

          In an abstract, nontechnical way, training a machine learning model on a corpus of data is sort of like trying to give it enough context to interpret new inputs in an expected/useful way. In the case of LLMs, it’s an attempt to link the use of words and phrases with contextual meanings so that a computer system can interact with natural human language (rather than specifically prepared and formatted language like programming).

          It’s all just statistics though. The interpretation is based on ingestion of lots of contextual uses. It can’t really understand… it has nothing to understand with. All it can do is associate newly input words with generalized contextual meanings based on probabilities.

          • MutilationWave@lemmy.world
            link
            fedilink
            arrow-up
            2
            arrow-down
            2
            ·
            3 days ago

            I wish you’d talked more about how we humans work. We are at the mercy of pattern recognition. Even when we try not to be.

            When “you” decide to pick up an apple it’s about to be in your hand by the time your software has caught up with the hardware. Then your brain tells “you” a story about why you picked up the apple.

            • IlovePizza@lemmy.world
              link
              fedilink
              arrow-up
              2
              ·
              3 days ago

              I really don’t think that is always true. You should see me going back and forth in the kitchen trying to decide what to eat 😅

      • Kilgore Trout@feddit.it
        link
        fedilink
        arrow-up
        4
        ·
        edit-2
        3 days ago

        I mean… duh?

        My same reaction, but scientific, peer-reviewed and published studies are very important if e.g. we want to stop our judicial systems from implementing LLM AI

    • metaStatic@kbin.earth
      link
      fedilink
      arrow-up
      15
      arrow-down
      74
      ·
      4 days ago

      plenty of people can’t reason either. the current state of AI is closer to us than we’d like to admit.

      • petrol_sniff_king@lemmy.blahaj.zone
        link
        fedilink
        arrow-up
        17
        arrow-down
        3
        ·
        4 days ago

        DAE people are really stupid? 50% of all people are dumber than average, you know. Heh. NoW jUsT tHinK abOuT hOw dUmb tHe AverAgE PeRsoN iS. Maybe that’s why they can’t get my 5-shot venti caramel latte made with steamed whipped cream right. *cough* Where is my adderall.

      • Haggunenons@lemmy.world
        link
        fedilink
        arrow-up
        18
        arrow-down
        6
        ·
        4 days ago

        As clearly demonstrated by the number of downvotes you are receiving, you well-reasoning human.

      • peto (he/him)@lemm.ee
        link
        fedilink
        English
        arrow-up
        6
        ·
        4 days ago

        I severely hope that people aren’t using LLM-AI to do reasoning tasks. I appreciate that I am likely wrong, but LLMs are neither the totality or the pinnacle of AI tech. I don’t think we are meaningfully closer to AGI than we were before LLMs blew up.

      • Syrc@lemmy.world
        link
        fedilink
        arrow-up
        2
        ·
        3 days ago

        That’s just false. People are all capable of reasoning, it’s just that plenty of them get terribly wrong conclusions from doing that, often because they’re not “good” at reasoning. But they’re still able to do that, unlike AI (at least for now).

    • MiDaBa@lemmy.ml
      link
      fedilink
      arrow-up
      31
      ·
      4 days ago

      That’s probably it’s primary function. That and maximizing profits through charging flex pricing based on who’s the biggest sucker.

  • Bluefalcon@discuss.tchncs.de
    link
    fedilink
    arrow-up
    35
    ·
    edit-2
    4 days ago

    Bitch just takes orders and you want to make movies with it? No AI wants to work hard anymore. Always looking for a handout.

  • ch00f@lemmy.world
    link
    fedilink
    arrow-up
    33
    arrow-down
    4
    ·
    4 days ago

    What blows my mind about all this AI shit is that these bots are “programmed” by just telling them what to do. “You are an employee working at McDonald’s” and they take it from there.

    Insanity.

    • BradleyUffner@lemmy.world
      link
      fedilink
      English
      arrow-up
      25
      arrow-down
      1
      ·
      4 days ago

      Yeah, all the control systems are in-band, making them impossible to control. Users can just modify them as part of the normal conversation. It’s like they didn’t learn anything from phone phreaking.