Disclaimer: I am asking this for a college class. I need to find an example of AI being used unethically. I figured this would be one of the best places to ask. Maybe this could also serve as a good post to collect examples.

So what have you got?

  • ArcRay@lemmy.dbzer0.com (OP) · 21 hours ago

    Excellent point. I think there are some legitimate uses of AI, especially in image processing for science-related topics.

    But for the most part, almost every common use is unethical. Whether it’s the energy demands (and their contribution to climate change), the theft of intellectual property, or the spread of misinformation, the list goes on. Overall, it’s a huge net negative on society.

    I remember hearing about the lawyer one. IIRC ChatGPT was citing cases that didn’t even exist. How do you not check what it wrote? You wouldn’t blindly accept the predictive text from your phone’s keyboard and autocorrect, so why would you blindly trust a fancier autocorrect?
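    To make the “fancier autocorrect” comparison concrete, here’s a toy next-word predictor built from bigram counts. The corpus is made up and a real LLM operates at a vastly larger scale, but the core move is the same: pick a likely next word, with no notion of whether the result is true.

    ```python
    # Toy next-word predictor from bigram counts (made-up corpus).
    from collections import Counter, defaultdict

    corpus = "the court ruled that the case was dismissed and the court adjourned"
    words = corpus.split()

    # Count which word follows which in the training text.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

    def predict_next(word):
        # Picks the most frequent follower seen in training; it has no
        # concept of whether the continuation it produces is factually true.
        followers = bigrams.get(word)
        return followers.most_common(1)[0][0] if followers else None

    print(predict_next("the"))  # "court" (follows "the" twice in the corpus)
    ```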

    • Greg Clarke@lemmy.ca · 15 hours ago

      > But for the most part, almost every common use is unethical.

      The most common uses of AI are not in the headlines. Your email spam filter is AI.
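      For what it’s worth, the machine learning behind a classic spam filter can be as plain as a naive Bayes classifier over word counts. A minimal sketch with made-up messages (not any real filter’s implementation):

      ```python
      # Minimal naive Bayes spam classifier over word counts.
      # Training messages are made up for illustration.
      from collections import Counter
      import math

      train = [
          ("win a free prize now", "spam"),
          ("limited offer click now", "spam"),
          ("meeting rescheduled to friday", "ham"),
          ("lunch tomorrow with the team", "ham"),
      ]

      word_counts = {"spam": Counter(), "ham": Counter()}
      label_counts = Counter()
      for text, label in train:
          label_counts[label] += 1
          word_counts[label].update(text.split())

      vocab_size = len(set(word_counts["spam"]) | set(word_counts["ham"]))

      def classify(text):
          scores = {}
          for label in ("spam", "ham"):
              total = sum(word_counts[label].values())
              # log prior + log likelihoods with add-one smoothing
              score = math.log(label_counts[label] / sum(label_counts.values()))
              for word in text.split():
                  score += math.log((word_counts[label][word] + 1) / (total + vocab_size))
              scores[label] = score
          return max(scores, key=scores.get)

      print(classify("free prize offer"))     # spam
      print(classify("team meeting friday"))  # ham
      ```

      Nothing headline-grabbing, but it’s the same family of techniques.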

        • Greg Clarke@lemmy.ca · 7 hours ago

          You should be accurate with your language if you’re going to claim a whole industry is unethical. It’s also important to distinguish between the technology and its implementation. LLMs can be trained and used in ethical ways.

          • hendrik@palaver.p3x.de · edited · 7 hours ago

            I’m not really sure I want to agree here. We’re currently in the middle of a hype wave around LLMs, so that’s what most people mean when they talk about “AI”. Of course that’s wrong. I tend to use the term “machine learning” when I don’t want to confuse people with a muddied term.

            And I must say, most (not all) machine learning is done in a problematic way. Tesla cars have been banned from company parking lots, Alexa saves your private conversations in the cloud, and the algorithms that power the web weigh on society and spy on me. The successful companies are built on copyright theft or their users’ personal data. None of that is really transparent to anyone, and oftentimes it’s opt-out, if we get a choice at all. But of course there are legitimate interests. I believe a dishwasher or a spam filter would be trained ethically, and probably also the image detection for medical applications.

            • Greg Clarke@lemmy.ca · 5 hours ago

              I 100% agree that big tech is using AI in very unethical ways. And this isn’t even new: the chairman of the U.N. Independent International Fact-Finding Mission on Myanmar stated that Facebook played a “determining role” in the Rohingya genocide. And recently Zuck actually rolled back the programs that were meant to prevent this in the future.

              • hendrik@palaver.p3x.de · edited · 3 hours ago

                I think quite a few of our current societal issues (in western societies as well) come from algorithms and filter bubbles. I think that’s the main contributing factor to why people can’t talk to each other any more and everyone gets radicalized into the extremes. And in the broader picture, the surrounding attention economy fuels populists and does away with any factual view of the world.

                It’s not AI’s fault, but it’s machine learning that powers these platforms and decides who gets attention and who gets confined to which filter bubble. I think that’s super unhealthy for us. But sure, it’s more the prevailing internet business model to blame here, not directly the software that powers it. I have to look up what happened to the Rohingya…

                We get a few other issues with social media as well, which aren’t directly linked to algorithms. We’ll see how the LLMs fit into that. I’m not sure how they’re going to change the world, but everyone seems to agree this is very disruptive technology.
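                To make the “decides who gets attention” part concrete, here’s a crude, entirely hypothetical sketch of engagement-driven ranking (no platform publishes its real scoring): posts that resemble what a user already engaged with score higher, so the feed narrows over time.

                ```python
                # Hypothetical engagement-driven feed ranking, not any
                # platform's actual algorithm. Posts similar to what the
                # user already engaged with get boosted.

                def rank_feed(posts, user_history_topics):
                    def score(post):
                        # Reward predicted engagement...
                        engagement = post["likes"] + 2 * post["comments"]
                        # ...and boost topics the user already clicked on.
                        familiarity = 2.0 if post["topic"] in user_history_topics else 1.0
                        return engagement * familiarity
                    return sorted(posts, key=score, reverse=True)

                posts = [
                    {"topic": "politics", "likes": 120, "comments": 40},
                    {"topic": "gardening", "likes": 90, "comments": 10},
                    {"topic": "politics", "likes": 30, "comments": 25},
                ]

                # A user who clicked on politics before gets an even more
                # political feed: the filter-bubble loop in miniature.
                for post in rank_feed(posts, user_history_topics={"politics"}):
                    print(post["topic"])
                ```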