Which of the following sounds more reasonable?
- I shouldn’t have to pay for the content that I use to tune my LLM and its algorithms.
- We shouldn’t have to pay for the content we use to train and teach an AI.
By calling it AI, the corporations can advocate for a position that’s blatantly pro-corporate and anti-writer/artist, and trick people into supporting it under the guise of technological progress.
Okay, I’m glad I’m not too far off the mark then (I’m not an AI expert/it’s not my field of study).
I think this also points to, and is a great example of, another worrying trend: the consolidation of computing power in the hands of a few large companies. Even without factoring in the development of true AI, or whether that can or will happen anytime soon, LLMs really show off the massive scale of both computational power consolidation and data harvesting by only a very few entities. I’m guessing I’m not alone here in finding that increasingly concerning, particularly since a lot of development is being driven toward surveillance applications.