Meta’s AI image generator is coming under fire for its apparent struggles to create images of couples or friends from different racial backgrounds.

  • jedibob5@lemmy.world

    I hate that AI/ML development has become so fixated on generative AI - images, video, sound, text, and whatnot. It’s kind of crazy to me that AI can generate output with the degree of accuracy that it does, but honestly, I think generative AI is, in a sense, barking up the wrong tree in terms of where AI’s true strengths lie.

    AI can actually turn out to be really good at certain kinds of problem-solving, particularly optimization problems. AI essentially “learns” by extremely rapid and complex trial and error, so when it’s presented with a problem that has many complex, interdependent variables and needs an optimal solution, a properly trained AI model can reach remarkably effective solutions far quicker than any human could, and can explore avenues of success that humans would otherwise miss. This is particularly applicable to a lot of engineering problems.

    Honestly, I’d be very intrigued to see an AI model trained on average traffic data for a section of a city’s street grid: observations from a series of cameras set up to watch traffic patterns over the course of a few months, measuring the average number of cars passing through at various times of day, their average speed, and other such patterns. Then set it on the task of optimizing stoplight timings to maximize traffic flow and minimize the time cars spend waiting at red lights. If the model is set up carefully enough (including a data-collection plan meticulous enough to properly capture average traffic patterns, outlier disincentives to keep cars at little-used cross streets from waiting 10 minutes for a green light, etc.), I feel this is exactly the kind of problem an AI model is suited to solve.
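
    For illustration only, here’s a minimal sketch of how those measurements might be turned into an objective to optimize; the wait-time model below is a made-up toy placeholder, not a real traffic simulation:

```python
# Hypothetical sketch: green-light durations are the decision variables, and
# the score combines an estimated total wait with an outlier penalty so that
# little-used cross streets never wait absurdly long for a green.

def average_wait(green_durations, main_volumes, cross_volumes):
    """Toy estimate: busier main streets benefit from longer greens, but every
    extra second of green is an extra second of red for the cross street."""
    return sum(v_main * (300.0 / g) + v_cross * g
               for g, v_main, v_cross in zip(green_durations, main_volumes, cross_volumes))

def objective(green_durations, main_volumes, cross_volumes, max_green=180.0):
    # Lower is better. The penalty is the "outlier disincentive": punish any
    # plan that leaves a cross street stuck behind an overly long green phase.
    penalty = sum(1000.0 for g in green_durations if g > max_green)
    return average_wait(green_durations, main_volumes, cross_volumes) + penalty
```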

    AI should be used on complex, data-intensive problems that humans can’t solve on their own, or at least not without a huge amount of time and effort. Generative AI doesn’t actually solve any new problems. Why should we care whether an AI can generate an image of an interracial couple? There are countless human artists who would happily take a commission to draw an interracial couple (or whatever else your heart desires) for you, without anyone needing to invest billions of dollars in increasingly complex models built on dubiously sourced (at best) datasets that still don’t produce results as good as the real thing. Humans are already good at unscripted creativity, and computers are already good at massive volumes of complex calculations, so why force a square peg into a round hole?

    • TheChurn@kbin.social

      “AI” isn’t needed to solve optimization problems; that’s what we have optimization algorithms for.

      Define an objective and parameters, hand the problem to any one of the dozens of general solvers, and you’ll get approximate answers. Large cities already use models like these for traffic flow; there’s a whole field of literature on it.

      The one closest to what you mentioned is a genetic algorithm, again a decades-old technique that has very little in common with Generative “AI”.
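
      As a rough, hedged sketch of that workflow, you could define a toy objective (a stand-in for a real traffic model, not real data) and hand it to an off-the-shelf solver such as scipy’s differential_evolution, an evolutionary method from the same family as genetic algorithms:

```python
# Hedged sketch: define an objective and bounds, hand it to a general solver.
# The objective is a toy stand-in for a real traffic model.
import numpy as np
from scipy.optimize import differential_evolution

def total_wait(green_durations):
    # Toy objective (lower is better): main-street waiting shrinks with longer
    # greens, cross-street waiting grows with them (longer reds).
    g = np.asarray(green_durations)
    return float(np.sum(300.0 / g + 0.5 * g))

bounds = [(10.0, 120.0)] * 8   # one green duration (seconds) per intersection

result = differential_evolution(total_wait, bounds, seed=0)
print("timings:", np.round(result.x, 1), "wait score:", round(result.fun, 1))
```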

      • Chozo@fedia.io

        The problem, though, is not knowing beforehand what the optimal algorithm is for the particular situation. You can only formulate a model from factors you already know about. That’s why AI can train itself and develop its own new algorithms, and can pick up on unknown factors that might have gone unnoticed by humans.

        • TheChurn@kbin.social

          No, that’s not a real problem either. Model search techniques are very mature; the first automated tools for this were released in the ’90s, and they’ve only gotten better.

          AI can’t ‘train itself’; there is no training required for an optimization problem. A system that queries the value of the objective function (“how good is this solution?”), tweaks the parameters (the traffic light timings) according to the optimization algorithm, and then queries the objective function again isn’t training itself and isn’t learning. It is centuries-old mathematics.
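
          As a hedged illustration of that loop (a toy objective and plain hill climbing, nothing trained and nothing learned):

```python
# Sketch of "query the objective, tweak the parameters, query again."
# No training and no learning: just a hill-climbing loop over a toy
# stand-in objective for traffic light timings.
import random

def how_good_is_this_solution(timings):
    # Assumed toy objective; lower means less total waiting.
    return sum(300.0 / t + 0.5 * t for t in timings)

random.seed(0)
timings = [60.0] * 8                       # start with 60-second greens everywhere
best = how_good_is_this_solution(timings)

for _ in range(10_000):
    # Tweak one timing at random, clamped to a sane range.
    candidate = list(timings)
    i = random.randrange(len(candidate))
    candidate[i] = min(120.0, max(10.0, candidate[i] + random.uniform(-5.0, 5.0)))
    score = how_good_is_this_solution(candidate)   # query the objective again
    if score < best:                               # keep the tweak only if it helps
        timings, best = candidate, score

print([round(t, 1) for t in timings], round(best, 1))
```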

          There’s a lot of intentional and unintentional misinformation around what “AI” is, what it can do, and what it can do that is actually novel. Beyond Generative AI - the new craze - most of what is packaged as AI is a mature algorithm applied to an old problem in a stagnant field, then repackaged as a corporate press release.

          Take drug discovery. No, “AI” didn’t just make 50 new antibiotics; they hired a chemist who graduated in the last decade, who understands commercial retrosynthetic search tools, and who asked the biopharma guy which functional groups they think would work.

    • UnpluggedFridge@lemmy.world

      Generative AI has the data, and the required data is immense. The challenge for other potential applications is that the data is not there or not available. There are no other aspects of human productivity that are as widely recorded and available as images and text. The next big thing will require an actual data collection effort; you won’t get it from scraping the Internet.