cross-posted from: https://lemmy.ml/post/5400607

This is a classic case of tragedy of the commons, where a common resource is harmed by the profit interests of individuals. The traditional example of this is a public field that cattle can graze upon. Without any limits, individual cattle owners have an incentive to overgraze the land, destroying its value to everybody.

We have commons on the internet, too. Despite all of its toxic corners, it is still full of vibrant portions that serve the public good — places like Wikipedia and Reddit forums, where volunteers often share knowledge in good faith and work hard to keep bad actors at bay.

But these commons are now being overgrazed by rapacious tech companies that seek to feed all of the human wisdom, expertise, humor, anecdotes and advice they find in these places into their for-profit A.I. systems.

  • albigu@lemmygrad.ml · 9 months ago
    This is a classic case of tragedy of the commons, where a common resource is harmed by the profit interests of individuals.

    No, it’s not. It would be if all of this content were licensed as Creative Commons with no author rights, but as could be seen a while back with the OpenAI shadow-library story, these corporations are actively stealing content to train their models. This is corporate theft, but the culprits are too prominent and rich to ever face any repercussions.

    And it’s not like this is new. Google Images was built entirely around the idea that, if an image is on the web, Google somehow has the right to store it on its own servers and present it to users with ads. Small YouTubers spent years having their videos randomly claimed through Content ID by well-known, huge scam accounts that turned pretending to own other people’s stuff into a whole business. M$ GitHub trained Copilot on repositories without any regard for breaking copyleft (i.e. no attribution when code is replicated).

    For these companies, it’s standard practice not to ask permission before potentially algorithmically wrecking somebody’s livelihood, and sometimes not even to inform them afterwards (e.g. the mystical YouTube “Algorithm” that keeps changing without so much as patch notes, and the cargo cult it spawned).

    For them (and for the rest of the ruling class, obviously), (intellectual) property rights exist in a hierarchy, and as long as you are at the top, everything below you is free real estate.

    And the best part is that these “AI” content-generation systems are still comically bad when actually put into practice, but the corporations feel an unending urge to deploy them ASAP, because they’d rather serve up heaps of complete garbage content than pay living wages. Burgers will literally employ rotting zombies if it means they can skimp on salaries and increase unemployment to drive down everyone else’s wages.

    Got a bit worked up there lol.