A group of hackers that says it believes “AI-generated artwork is detrimental to the creative industry and should be discouraged” is hacking people who try to use a popular interface for the AI image generation software Stable Diffusion, via a malicious extension for that interface shared on Github.

ComfyUI is an extremely popular graphical user interface for Stable Diffusion that’s shared freely on Github, making it easier for users to generate images and modify their image generation models. ComfyUI_LLMVISION, the extension that was compromised to hack users, allowed users to integrate the large language models GPT-4 and Claude 3 into the same interface.
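
For context on what an extension like this looks like under the hood: a ComfyUI extension is typically a small Python package of custom node classes that ComfyUI discovers and adds to its node graph. Below is a minimal, hypothetical sketch of such a node wrapping an OpenAI chat call; it is not the actual ComfyUI_LLMVISION code, and the class name, node name, and defaults are invented for illustration.

```python
# Hypothetical sketch of a ComfyUI custom node that wraps an LLM chat call.
# This is NOT the ComfyUI_LLMVISION code; names and structure are illustrative.
# Assumes the official `openai` Python package (>= 1.0) and an OPENAI_API_KEY
# environment variable.

from openai import OpenAI


class LLMPromptNode:
    """Minimal custom node: send a text prompt to GPT-4 and return the reply."""

    @classmethod
    def INPUT_TYPES(cls):
        # ComfyUI calls this to learn which inputs/widgets the node exposes.
        return {
            "required": {
                "prompt": ("STRING", {"multiline": True, "default": ""}),
            }
        }

    RETURN_TYPES = ("STRING",)
    FUNCTION = "generate"
    CATEGORY = "llm"

    def generate(self, prompt):
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return (response.choices[0].message.content,)


# ComfyUI scans custom node packages for this mapping to register their nodes.
NODE_CLASS_MAPPINGS = {"LLMPromptNode": LLMPromptNode}
```

A node like this needs the user’s API keys available at runtime, which is exactly the kind of secret the compromised extension was reportedly harvesting.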

The ComfyUI_LLMVISION Github page is currently down, but a Wayback Machine archive of it from June 9 states that it was “COMPROMISED BY NULLBULGE GROUP.”

  • pavnilschanda@lemmy.world

    Based on the discussion that I’ve seen, it looks like the “Anti-AI” motive was an excuse, since all the hack actually did was steal API keys and potentially sell them. Here’s a discussion thread on Reddit that goes into this more.

  • IHeartBadCode@kbin.run

    “AI-generated artwork is detrimental to the creative industry and should be discouraged”

    Man, you wouldn’t guess how airbrush artists felt when Photoshop came around.

  • awesomesauce309@midwest.social

    I really don’t understand this. All these search engine companies give millions of users a single button to create the most soulless art you’ve ever seen, but instead of caring about that, they attack the tool that gives users the most control over their generations. You can argue that unlimited competition is bad for commission artists, but this attack is not “Pro Art”.

    Using Creative Cloud isn’t a sin, but helping maintain Adobe’s industry stranglehold should be.

    • fishos@lemmy.world

      Honestly, I feel like being a Luddite and, every time someone shows art from now on, critiquing the ever-loving hell out of their process.

      “Did you make the brushes yourself from sheep you raised? Did you grind the pigments from plants you grew yourself?”

      Art is amazing, but artists are some of the most delicate people. Their entire career is, in a way, a showcase of themselves, and if you take any part of that away from them or judge it, they become incredibly hostile and take it deeply personally. But literally the same kinds of criticisms they’re making now are taught in art history as reactions to previous advancements. It’s just the same fragile egos afraid that they’re not as special anymore.

      • CosmicTurtle0@lemmy.dbzer0.com

        Imo, there are too many good artists and not enough of a market.

        The problem with the market is that buyers don’t want to pay what artists want. For every one person who wants a bespoke painting, there are about 30 people who are okay with mediocre work and probably couldn’t tell the difference between a painting someone spent weeks on and one that took a few hours.

        I’m not saying artists don’t deserve to get paid. There are just not enough people willing to pay what they want. Is AI stealing their jobs? Probably. But they weren’t getting those jobs in the first place.

        A few years ago, I was looking for an art student who would be willing to paint a very simple beach painting for me. Nothing fancy and really I probably could have done it on my own. There was an art school nearby and I went to a showcase and asked around. For an 8x10 canvas, it was going to be $1k minimum.

        That’s insane. I wasn’t asking for detail. Just something with a sky, ocean, and sand. I found something similar to what I was looking for at a yard sale for $5.

        I was willing to pay up to $300 with the possibility of doing more commissions down the road.

        During the pandemic, I ended up taking some painting classes and learned to do the simple painting on my own for the price of a good bottle of wine.

      • awesomesauce309@midwest.social

        If you want to make a pie from scratch…

        I’m sure there are plenty of people who make their living (or maybe barely scrape by) off digital art and are affected by this, so I can understand some touchiness. I mean, why pay $100 for an account avatar or other small commissions when you can generate it yourself in one second? But also, why pay a scribe to copy an entire book by hand when a printing press does it faster? The only difference between these statements is that hand-copying wasn’t a widespread profession 5 years ago.

        To me, an artist is someone who uses tools to realize their vision. As technology progresses, so do the tools. ComfyUI is leagues better as a tool than something like DALL-E will ever be, but no, the entirety of “AI-generated art” is a sin and must be attacked. Oh, and not the corporate zeitgeist heisters, but the users of the community-driven software.

  • A_Very_Big_Fan@lemmy.world

    Honestly, I still don’t understand the “stealing” argument. Does the stealing occur during training? From everything I’ve learned about the technology, the training, in terms of the data given and the end result, isn’t any different from me scrolling through Google Images to get a concept of how to draw something. It’s not like they have a copy of the whole Internet on their servers to make it work.

    Does it occur during image generation? Because try as I might, I’ve never been able to get it to output copyrighted material. I know overfitting used to be an issue, but we figured out how to solve that a long time ago. “But the signatures!!” Yeah, it has never output a recognizable/legible signature; it just associates signatures with art.

    Shouldn’t art theft be judged like any other copyright matter? It doesn’t matter how it was created; it matters whether it violates fair use. I really don’t think training crosses that line, and I’ve yet to see these models output a copy of another image outside of image-to-image models.
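
    One rough way to illustrate the “no copy of the whole Internet” point, assuming the commonly cited approximate figures of a roughly 2 GB Stable Diffusion 1.x checkpoint trained on a LAION-scale dataset of about 5 billion images (these numbers are back-of-the-envelope assumptions added here, not the commenter’s):

    ```python
    # Back-of-the-envelope arithmetic only; the inputs are rough approximations.
    checkpoint_bytes = 2 * 1024**3      # ~2 GB of model weights
    training_images = 5_000_000_000     # ~5 billion images (LAION-5B scale)

    bytes_per_image = checkpoint_bytes / training_images
    print(f"{bytes_per_image:.2f} bytes of weights per training image")
    # ~0.43 bytes per image: far too little to store copies of the training set,
    # though researchers have extracted near-copies of a few heavily duplicated
    # training images, so memorization is not literally zero.
    ```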

    • retrospectology@lemmy.world

      It’s theft of labor without any compensation, aimed at cheapening the very value of that labor.

      A human artist can, and often does, train simply by looking at the real world. The art they then produce is a result of that knowledge being interpreted and stylized by their own brain and perception. The decision making on how to represent a given subject, what details to add and leave out to achieve an effect, is done by the artist themselves. It’s a product of their internal mental laboring.

      By contrast, if you trained an AI on photos alone, it would never, ever produce anything that looks like a drawing or a piece of art; it would never create a stylized piece or make a creative decision of its own.

      In order to produce art, the AI must be fueled with human-created art that humans labored to produce. The human artists are not being compensated for the use of that labor, and even worse, the AI is leveraging it to make the human labor worth less. What’s more, the AI’s ability will stagnate without further theft of newer, more novel art and concepts.

      Without that keystone of human labor the AI simply can’t function.

      Ripping off so many people at once and so chaotically that you can’t distinguish exactly how any given individual is being exploited doesn’t mean those people aren’t still being ripped off. The machine that the tech bros created could not exist without the stolen labor of the artists.

      • A_Very_Big_Fan@lemmy.world

        I get the sentiment, but I don’t think anything here addresses anything I haven’t already mentioned. The labor is certainly being used, and it’s certainly for profit, but not in any way that humans don’t already do.

        I really am sympathetic towards artists, though. Like I get that a lot of demand for their work could one day be taken by what generative AI is working towards. I just don’t understand how we can reasonably call it theft/crime when a computer figures out how to make an image by looking at other images but not when humans do it. The whole thing seems like an appeal to emotion.

  • Ð Greıt Þu̇mpkin@lemm.ee

    For me, the funniest moment of this whole saga was when the AI bros were claiming that they weren’t stealing anyone’s art, but then flipped shit when a FOSS tool was released that let people reformat their art pieces specifically so that it would be harmful to AI art generators that copied them.

    • Even_Adder@lemmy.dbzer0.com

      You’ve got it backwards. Glaze and Nightshade aren’t FOSS, and Ben Zhao, the University of Chicago professor behind them, stole GPLv3 code for Glaze. GPLv3 is a copyleft license that requires you to share your source code and license your project under the same terms as the code you used; you also can’t distribute your project as binary-only or proprietary software. When pressed, they only released the code for their front end, remaining in violation of the terms of the GPLv3 license.

      Moreover, Nightshade and Glaze only work against open-source models, because the only open models are Stable Diffusion’s; companies like Midjourney and OpenAI, with their closed-source models, aren’t affected by this. Attacking a tool that the public can inspect, collaborate on, and use free of cost isn’t something that should be celebrated.