• 𝙲𝚑𝚊𝚒𝚛𝚖𝚊𝚗 𝙼𝚎𝚘𝚠@programming.dev · ↑ 20 ↓ 6 · 1 month ago

    He’s already given you 5 examples of positive impact. You’re just moving the goalposts now.

    I’m happy to bash morons who abuse generative AIs in bad applications and I can acknowledge that LLM-fuelled misinformation is a problem, but don’t lump “all AI” together and then deny the very obvious positive impact other applications have had (e.g. in healthcare).

    • IsoSpandy@lemm.ee · ↑ 3 ↓ 5 · 1 month ago

      LLMs fucking suck. But there are things that don’t suck. AI chess engines have entirely changed the game, and AI protein predictors have brought designer drugs and nanobots within our grasp.

      It’s just that tech bros want to grab quick cash from us peasants, and that somehow equates to integrating ChatGPT into everything. The most moronic kind of AI has become their poster child. It’s as if you asked people what a US president is like in character and everybody pointed to Trump as the example.

    • technocrit@lemmy.dbzer0.com · ↑ 3 ↓ 6 · 1 month ago

      He’s already given you 5 ~~examples~~ anecdotes of positive impact.

      Who’s upvoting this? Is Lemmy really this scientifically illiterate?

    • GreenKnight23@lemmy.world · ↑ 4 ↓ 11 · 1 month ago

      those aren’t examples, they’re hearsay. “oh everybody knows this to be true”

      > You are ignoring ALL of the positive applications of AI from several decades of development, and only focusing on the negative aspects of generative AI.

      generative AI is the only “AI”. everything that came before that was a thought experiment based on the human perception of a neural network. it’d be like calling a first draft a finished book.

      if you consider the Turing Test AI, then it blurs the line between a neural net and nested if/else logic.
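
      that kind of “AI” really was just branching. a toy sketch of the idea (illustrative only, not any real system):

```python
# A toy ELIZA-style responder: pure nested if/else logic, no learning.
# Keyword choices and replies are invented purely for illustration.

def respond(message: str) -> str:
    text = message.lower()
    words = text.split()
    if "hello" in words or "hi" in words:
        return "Hello! How are you feeling today?"
    if "sad" in text or "unhappy" in text:
        # nested branch: a "why" in the message changes the canned reply
        if "why" in text:
            return "Why do you think you feel that way?"
        return "I'm sorry to hear that. Tell me more."
    if "?" in message:
        return "What do you think the answer is?"
    return "Please, go on."
```

      it can look conversational for a few turns, but every reply is a hand-written rule.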

      > Here is a non-exhaustive list of some applications:

      > • In healthcare as a tool for earlier detection and prevention of certain diseases

      great, give an example of this being used to save lives from a peer-reviewed source that won’t be biased by product development or hospital marketing.

      > • For anomaly detection in intrusion detection systems, protecting web servers

      let’s be real here, this is still a golden turd and is more ML than AI. I know because it’s my job to know.

      > • Disaster relief for identifying the affected areas and aiding in planning the rescue effort

      hearsay, give a credible source of when this was used to save lives. I doubt that AI could ever be used in this way because it’s basic disaster triage, which would open ANY company up to litigation should their algorithm kill someone.

      > • Fall detection in e.g. phones and smartwatches that can alert medical services, especially useful for the elderly.

      this is dumb. AI isn’t even used in this and you know it. algorithms are not AI. falls are detected when a sudden change in gyroscopic speed/direction is identified based on a set number of variables. everyone falls the same when your phone is in your pocket. dropping your phone shows up differently due to a change in mass and spin. again, algorithmic, not AI.
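
      the threshold logic being described is roughly this (a toy sketch; the thresholds and window size are invented for illustration, real devices tune them per sensor and fuse more signals):

```python
# Toy threshold-based fall detection: a free-fall window (acceleration
# near 0 g) followed shortly by an impact spike. Plain rules, no learning.

FREE_FALL_G = 0.3   # below this, the device is roughly in free fall
IMPACT_G = 2.5      # above this shortly after, we call it an impact

def detect_fall(accel_magnitudes_g: list[float], window: int = 5) -> bool:
    """Return True if a free-fall sample is followed by an impact spike."""
    for i, sample in enumerate(accel_magnitudes_g):
        if sample < FREE_FALL_G:
            # look for an impact within the next `window` samples
            for later in accel_magnitudes_g[i + 1 : i + 1 + window]:
                if later > IMPACT_G:
                    return True
    return False
```

      a spike without the preceding free-fall window (like dropping the phone onto a couch) doesn’t trigger it.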

      > • Various forecasting applications that can help plan e.g. production to reduce waste. Etc…

      forecasting is an algorithm, not AI. ML would determine how accurate an algorithm is based on what it knows. algorithms and ML are not AI.
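
      case in point: classic forecasting like exponential smoothing is plain arithmetic with no learning loop anywhere (toy example, values illustrative):

```python
# Exponential smoothing: forecast the next value as a weighted average
# of past observations. Deterministic arithmetic, nothing "learned".

def exponential_smoothing(series: list[float], alpha: float = 0.5) -> float:
    """Higher alpha weights recent observations more heavily."""
    forecast = series[0]
    for value in series[1:]:
        forecast = alpha * value + (1 - alpha) * forecast
    return forecast
```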

      > There have even been a lot of good applications of generative AI, e.g. in production, especially for construction, where a generative AI can design a functionally equivalent product with less material, while still maintaining the strength. This reduces the cost of manufacturing, and also the environmental impact due to the reduced material usage.

      this reads just like the marketing bullshit companies promote to show how “altruistic” they are.

      > Does AI have its problems? Sure. Is generative AI being misused and abused? Definitely. But just because some applications are useless it doesn’t mean that the whole field is.

      I won’t deny there is potential there, but we’re a loooong way from meaningful impact.

      > A hammer can be used to murder someone, that does not mean that all hammers are murder weapons.

      just because a hammer is a hammer doesn’t mean it can’t be used to commit murder. dumbest argument ever, right up there with “the only way to stop a bad guy with a gun is a good guy with a gun.”

      • Feathercrown@lemmy.world · ↑ 12 ↓ 3 · 1 month ago

        > generative AI is the only “AI”. everything that came before that was a thought experiment based on the human perception of a neural network. it’d be like calling a first draft a finished book.

        You clearly don’t know much about the field. Generative AI is the new thing that people are going crazy over, and yes it is pretty cool. But it’s built on research into other types of AI-- classifiers being a big one-- that still exist in their own distinct form and are not simply a draft of ChatGPT. In fact, I believe classification is one of the most immediately useful tasks that you can train an AI for. You were given several examples of this in an earlier comment.

        Fundamentally, AI is a way to process fuzzy data. It’s an alternative to traditional algorithms, where you need a hard answer with a fairly high confidence but have no concrete rules for determining the answer. It analyzes patterns and predicts what the answer will be. For patterns that have fuzzy inputs but answers that are relatively unambiguous, this allows us to tackle an entire class of computational problems which were previously impossible. To summarize, and at risk of sounding buzzwordy, it lets computers think more like humans. And no, for the record, it has nothing to do with crypto.
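
        To make that concrete, here’s a minimal sketch of learning a pattern from examples instead of hard-coding rules, using a nearest-centroid classifier (the data and labels are invented purely for illustration):

```python
# Nearest-centroid classification: "training" is just averaging each
# class's example points; prediction picks the closest class centroid.

def centroid(points: list[tuple[float, float]]) -> tuple[float, float]:
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def train(labeled: dict[str, list[tuple[float, float]]]) -> dict[str, tuple[float, float]]:
    return {label: centroid(pts) for label, pts in labeled.items()}

def classify(model: dict[str, tuple[float, float]], point: tuple[float, float]) -> str:
    """Predict the label whose centroid is closest to the point."""
    def dist2(c: tuple[float, float]) -> float:
        return (c[0] - point[0]) ** 2 + (c[1] - point[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

# Fuzzy inputs: no example sits on a hand-written rule boundary, yet the
# learned centroids still give a confident answer for unseen points.
model = train({
    "cat": [(1.0, 1.2), (0.8, 0.9), (1.3, 1.1)],
    "dog": [(4.0, 3.8), (4.2, 4.1), (3.9, 4.3)],
})
```

        Nobody wrote a rule saying where “cat” ends and “dog” begins; the boundary falls out of the examples. That’s the core difference from a traditional algorithm.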

        Nobody here will give you peer-reviewed articles because it’s clear that your position is overconfident for your subject knowledge, so the likelihood that a valid response will change your mind is very small and not worth the effort. That includes me, sorry. I can explain in more detail how non-generative AI works if you’d like to know more.

        • GreenKnight23@lemmy.world · ↑ 2 ↓ 4 · 1 month ago

          not once did I mention ChatGPT or LLMs. why do aibros always use them as an argument? I think it’s because you all know how shit they are and call it out so you can disarm anyone trying to use it as proof of how shit AI is.

          everything you mentioned is ML and algorithm interpretation, not AI. fuzzy data is processed by ML. fuzzy inputs, ML. AI stores data similarly to a neural network, but that does not mean it “thinks like a human”.

          if nobody can provide peer reviewed articles, that means they don’t exist, which means all the “power” behind AI is just hot air. if they existed, just pop it into your little LLM and have it spit the articles out.

          AI is a marketing joke like “the cloud” was 20 years ago.

          • Feathercrown@lemmy.world · ↑ 6 ↓ 1 · 1 month ago

            > not once did I mention ChatGPT or LLMs. why do aibros always use them as an argument? I think it’s because you all know how shit they are and call it out so you can disarm anyone trying to use it as proof of how shit AI is.

            You were talking about generative AI. Of that category, only text and image generation are mature and producing passable output (music gen sounds bad, video gen is existentially horrifying, code gen or Photoshop autofill etc. are just subsets of text or image gen). I don’t think LLMs or image gen are shit. LLMs in particular are easy to mischaracterize and therefore misuse, but they do have their uses. And image gen is legitimately useful.

            Also, I wouldn’t characterize myself as an “ai bro”. I’ve tested text and image generation like half a dozen times each, but I tend to avoid them by default. The exception is Google’s AI search, which can be legitimately useful for extremely quickly summarizing concepts that are fundamental to some fields but foreign to me, which I can then go verify later. I’ve been following AI news closely but I don’t have much of a stake in this myself. If it helps my credibility, I never thought NFTs were a good idea. I think that’s a good baseline for “are your tech opinions based on hype or reality”, because literally every reasonable person agrees that they were stupid.

            > everything you mentioned is ML and algorithm interpretation, not AI. fuzzy data is processed by ML. fuzzy inputs, ML.

            ML is a type of AI, but clearly you have a different definition; what do you mean when you say “AI”?

            > AI stores data similarly to a neural network, but that does not mean it “thinks like a human”.

            That was poorly worded on my part. I know that it doesn’t actually “think”. My point was that it can approach tasks which require heuristic rather than exact algorithms, which used to be exclusively in the human-only category of data processing capabilities. I hope that’s a more clear statement.

            > if nobody can provide peer reviewed articles, that means they don’t exist

            “won’t” =/= “can’t”, but fine, if you specify what you’re looking for I’m willing to do your job for you and find articles on this. However, if you waste my time by making me search for stuff and then ignore it, you’re going on my shared blocklist. What exactly are you looking for? I will try my best to find it, I assure you.

            > if they existed, just pop it into your little LLM and have it spit the articles out.

            Again, I feel like you’re using “AI” to mean “human-level intelligence”, which is incorrect. Anyways, you know that if I asked an LLM to do this it would generate fake citations. I’m not arguing against that; LLMs don’t possess knowledge and do not know what truth is. That’s not why they’re useful.

            > AI is a marketing joke like “the cloud” was 20 years ago.

            I think they’re a bit more useful than the cloud was, but this comparison isn’t entirely inaccurate.

        • technocrit@lemmy.dbzer0.com · ↑ 3 ↓ 6 · edited · 1 month ago

          Classification =/= intelligence.

          My spell checker can classify incorrectly spelled words. Is that intelligence? The whole field is a phony grift.

          > AI is a way to ~~process fuzzy data~~ sell fuzzy statistics.

          > Nobody here will give you peer-reviewed articles

          Nuf said.

          • Feathercrown@lemmy.world · ↑ 8 · 1 month ago

            I don’t get why people are harping on the term used so much. Whether we call it “intelligence” or not, and even how we define “intelligence” (fairly difficult to do), have no bearing on its abilities. Feel free to call it Machine Learning where applicable, although afaik that term has a more specific meaning so ymmv.

            People can use AI to sell things or do bad things, because it’s a new and situationally very powerful tool. It’s also something that’s not very well understood, so it’s particularly susceptible to grifting. I would recommend anyone in today’s world to take some time and really understand how it works, so that you know when people are being truthful about its applications and when they’re just overhyping a nonsense feature.

            In the world of the past, access to knowledge determined how successful at learning the truth people were. Today, that success is determined by your ability to discriminate between good and bad information. We have access to nearly infinite knowledge and nearly infinite lies. Don’t waste the opportunity to learn to tell the difference. It is the greatest asset you can have.

            If you want specific studies, please specify exactly what you’re looking for, and perhaps I can help after work. Alternatively, if you know already, you can simply try to find them yourself, which imo would be more efficient.