Eh, most of the marketing around AI is complete bullshit, but I do use it on a regular basis for my work. Several years ago it would have just been called machine learning, but it saves me hours every day. Is it a magic bullet that fixes everything? No. But is it a powerful tool that helps speed up the process? Yes.
Who is getting the reward for speeding up your work? Do you get to slack off more? How long will that last? Or does more work get piled on, making your employer richer not you?
I do, I’m freelance, I make more money.
That's not a problem with the AI.
Most people free up hours of writing emails to do their actual job.
What does it do to save you so much time?
Most of the hate is coming from people who don’t really know anything about “AI” (LLMs). Which makes sense: companies are marketing dumb gimmicks to people who don’t need them and who, after the novelty wore off, aren’t terribly impressed by them.
But LLMs are absolutely going to be transformational in some areas. And in a few years they may very well become useful and usable as daily drivers on your phone etc, it’s hard to say for sure. But both the hype and the hate are just kneejerk reactionary nonsense for the moment.
Most of the hate is coming from people who don’t really know anything about “AI” (LLMs)
No.
As an actual subject matter expert, I hate all of this, because assholes are overselling it to people who don’t know better.
My hatred of AI comes from seeing the double standard between how mass market media companies treat us when we steal from them vs when they steal from us. They want it to be a fully one way street when it comes to law and enforcement. House of Mouse owns all the media they create and that remixes work they create. When we create a new original idea, by the nature of the training model, they want to own that, too.
I also work with these tech bro industry leaders. I know what they’re like. When they say to you that they want to make it easier for non-artistic people to create art, they’re not telling you about an egalitarian and magnificent future. They’re telling you how they want to stop paying the graphic designers and copy editors who work in their company. The vision they have for the future is based on a fundamental misunderstanding about whether the future presented in Blade Runner is:
a) Cool and awesome
b) Horrifying
They want to enslave sentient beings to do the hard work of mining, driving, and shopping for them. They don’t want those people doing art and poetry because they want them to be too busy mining, driving, and shopping. This whole thing. This whole current wave of AI technology, it doesn’t benefit you except fleetingly. LLMs, ethically trained, could indeed benefit society at large, but that’s not who’s developing them. That’s not how they’re being trained. Their models are intrinsically tainted by the double standard these corporations have, because their only goal is to benefit from our labor without benefiting us.
They want to enslave sentient beings to do the hard work of mining, driving, and shopping for them. They don’t want those people doing art and poetry because they want them to be too busy mining, driving, and shopping.
That’s a great summary of the core issue!
I adore the folks doing cool new things with AI. I am unhappy with the folks deciding what should get funded next in AI.
AI has NOTHING to do with theft. Didn’t bother reading past that because I presume the rest is also rehashing utter tripe.
^ai models are literally trained on vast swathes of stolen content bro^
The people being oversold are the people who don’t know anything about it. I guess you can hate the people doing the overselling, but don’t hate the field. It’s one of the most promising areas of computer research being done right now.
Bullshit, stop lying about your credentials. You don’t understand it and you’re a luddite, that’s why you hate it
You can use an AI mushroom foraging guide, if you like it so much. ¯\_(ツ)_/¯
You need to let go of your overall attitude that people who have different preferences and opinions on things are misinformed. You might learn something. As it stands, the takes of yours I’ve come across on the fediverse are those of someone who hasn’t seen much of the world but needs everyone else to know how much you know.
No, the “hate” is from people trying to raise alarms about the safeguards we need to put in place NOW to protect workers and creators before it’s too late, to say nothing of what it will do to the information sphere. We are frustrated by tone deaf responses like this that dismiss it as a passing fad to hate on AI.
OF COURSE it will be transformational. No shit. That’s exactly why many people are very justifiably up in arms about it. It’s going to change a lot of things, probably everything, irreversibly, and if we don’t get ahead of it with regulations and standards, we won’t be able to. And the people who will use tools like this to exploit others – because those people will ALWAYS use new tools to exploit others – they want that inaction, and love it when they hear people like you saying it’s just a kneejerk reaction.
At what point in history did we ever halt the deployment of a new technology to protect workers?
Never. That’s the problem with history. Happy Labor Day.
Or just the problem with technology in general. Every gain is bought with a tradeoff.
Once a man has changed the relationship between himself and his environment, he cannot return to the blissful ignorance he left. Motion, of necessity, involves a change in perspective.
Commissioner Pravin Lal, “A Social History of Planet”
Very fair, which is why we should be critical of those who make the opposite claim and use that as justification for their hatred of AI.
Most of the hate I see comes from people complaining about “muh copyright”. Which is why AI is going to become the intellectual property of mega-corporations that own the social media posts and pictures people have posted and we won’t be able to use those models freely any more. The “means of generation” will belong to the capitalists. And that will be that for humanity.
And to think this can be stopped by moronic posts like OP is laughable.
Maybe learn what you’re talking about and stop panicking. Attack the right things, not this new technology.
I dabbled a bit in ML before GPT, and when the most recent hype-rocket launched I did a deep dive into LLMs, and I gotta say…
None of my hopes or horrors regarding “AI” have changed much along the way.
It’s pretty much the same thing we’ve been doing since the industrial revolution, which is to try to map human behavior onto mechanical processes so that we can optimize for <whatever> from a quantitative, objective frame of reference.
GenAI is only unique in that it’s an especially mask-off moment for the ruling technocrats. We are destined to become wetware plugins for a capitalist machine whose goal isn’t even as interesting as turning everything into paperclips. It’s worse than a rogue superintelligence.
I don’t think people want to use AI for artistic reasons. How rewarding is it to tell a machine to do all the hard parts you can’t do yourself or don’t have the patience to do?
I mean, feel free to do whatever of course, but AI cannot make art and someone using AI is not an artist.
I’m completely overtaxed mentally, and I offload so much to it: from reconciling bank statements and sorting game mods, to a homebrew ongoing multiverse starring my son, to which emojis to use in Notion at work.
I’m just going to keep linking this: LLMentalist
I use LLMs to automate the boring parts of my job (programming), it’s literally like outsourcing your work to an intern. I still have to review what is done to make sure it’s correct, but it saves me a ton of time typing up things. If I didn’t have a strong programming background then yeah it probably wouldn’t be as useful to me, but then again you can use it as a learning assistant as well as long as you verify what it is telling you.
At the end of the day, GPT is powering next-generation spam bots and writing garbage text, and Stable Diffusion is making shitty clip art that would otherwise be feeding starving artists…
All the while consuming ridiculous amounts of electricity while humanity is destroying the planet with stuff like power generation… It’s definitely automating a lot of tedious things, but not transforming anything that drastically yet…
but it will… and when it does, the agi that emerges will kill us all.
Utter nonsense. Total tripe
A far more likely end to humanity by an Artificial Superintelligence isn’t that it kills us all, but that it domesticates us into pets.
Since the most obvious business case for AI requires humans to use AI a lot, it’s optimized via RLHF and for engagement. A superintelligence created using human feedback like that will almost certainly become the most addictive platform ever created. (Basically think of what social media did to humanity, and then supercharge it.)
In essence, we will become the kitties and AI will be our owners.
but that it domesticates us into pets.
So all our needs and wants will be taken care of and we no longer have to work or pay bills?
Welp, I for one welcome our ~~robot~~ AI overlords.
Yes, I believe that will be the ultimate end of AI. I don’t think billionaires are immune from the same addictions that the rest of us are prone to. An AI that takes over will not answer to wealthy humans; it will domesticate them too.
I don’t think billionaires are immune from the same addictions that the rest of us are prone to.
I’d argue that they are likely more prone to addiction, but their drug of choice is power.
there’s no way they would want pets… they might keep some humans to study…
social media did that to humanity by using AI… so in that way, we’re already kitties batting at AI balls of yarn….
but after it becomes fully self aware, it’ll kill most of us…
Why do you think it’ll kill us? If its prime directive is to increase engagement wouldn’t that be contrary to how we’d expect it to behave?
Scientifically, there is a lot more reason to believe that advanced AI will not kill most humans.
There is a lot of blind hate, because it’s edgy right now to be against it.
This thing already is transformational, and we can already see a glimpse of where it’s going. I think it’s normal that we have a bunch of stupid half-finished products right now. People just have to realise AI is under development and new advancements are coming weekly.
Besides, what are we going to do, not develop it? Just abandon the whole technology? That’s nonsense.
AI is absolutely going to be transformative but a lot of the hate right now isn’t the technology itself but the way companies are jumping on it and forcing it down the throats of people who don’t want it, in a way that worsens their customer experience. Yes, let’s force AI into every software product. Yes let’s take away the humans you used to talk to and make them all bots instead.
Even from within tech itself there is huge resentment because you’ve got corps pumping billions into AI while at the same time slashing their workforce to afford those billions, with no clear return in sight.
Tech is treating AI as the next dotcom boom and pumping everything into it, but just like it did then the bubble of investment will burst, and there will be losers as well as winners.
I’m running self-hosted LLMs at home and I’m having huge fun experimenting with their capabilities. I just wish LLMs could have been implemented in the real world with space for ethics and the human factor, not the pure profit chasing bullshit we actually got.
AI is absolutely going to be transformative but a lot of the hate right now isn’t the technology itself but the way companies are jumping on it and forcing it down the throats of people who don’t want it, in a way that worsens their customer experience.
Exactly.
We did the same shit with mobile apps in 2009: there was a mobile app, that no one wanted, being pushed hard, for every imaginable purpose.
I do still use mobile apps.
But I don’t have a dedicated mobile app installed for buying socks for my pets.
AI, today, is burdened with the same shit. It’ll calm down, after failing to deliver the vast majority of what is currently being promised.
I agree, but I think there is no way around this forcing-down-the-throat, slashing people in favour of a barely functioning product. Don’t get me wrong, I wish it was done the right and fair way, but realistically no one with any power wants it done in a fair way.
Not blind hate. AI will be devastating to the environment due to its power and water consumption. We need to ask ourselves if the future water wars will be worth the corporate profits.
I think people are overrating how much power AI will consume in the long term. Training a model takes way more power than running it, and once we understand the tech better, models can be developed for specific applications. It would be like when Edison was first working on the light bulb and extrapolating the power usage of whatever filament he was testing to every household in the world.
Also, it doesn’t have to be corporate profits. Individuals can benefit from AI. There’s a structural problem with capitalism, not with this technology.
It’s not that simple. Lots of stuff is extremely bad for the environment but we still do them. All of us.
Besides, what are we going to do, not develop it? Just abandon the whole technology? That’s nonsense.
As someone who knows a substantial amount about how LLMs actually work:
- I’m delighted that AI companies are developing this technology.
- I’m annoyed, but not angry, that phone and PC makers are developing this technology. I don’t want it, yet. I’ll probably appreciate it when they get it right. (I’ll wait for the version that ships with Debian, because that’s the only OS maker whose AI I would trust not to be deeply invasive to my privacy.)
- I’m irritated that car companies, real estate investment companies, web browser developers, stock traders, and everyone else who was “all-in” on virtual reality two years ago, is making a lot of noise about developing this technology. They don’t hire the necessary talent, and their results are shit. Real investment returns require real investments, which these hype-followers haven’t proven capable of.
Besides, what are we going to do, not develop it? Just abandon the whole technology? That’s nonsense.
The tech industry will happily abandon it as soon as the next hype train comes along – we’ve already seen it happen with multiple “innovations” – dotcom, subprime, crypto, NFTs …
It’s not comparable. This is not just anything; it’s a tech we want, one we’ve dreamed about probably forever.
Mark V. Shaney did nothing wrong!
I’ve lately been testing whether AI can let me practice Russian in a natural-sounding dialogue. While it didn’t sound 100% human (it was too formal and technical), it was good practice.
So I wouldn’t say that it can’t be used for good things.
Well what good came of it?
You really don’t see how practicing and learning another language could be a good thing?
There are plenty of applications for machine learning, logic engines, etc. They’ve been used in many industries since the 1970s.
This post isn’t contributing to a healthy environment in this community.
Well thought out claim -> good source -> good discussion
LLMs helped me with coding and debugging A LOT. I’d much rather use AI than have to try and parse Stack Exchange and a bunch of other web forums or developer documentation directly. AI is incredible when I get random errors and paste them in to say “fix this” and it does, and tells me HOW and WHY it did what it did.
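For context, the workflow is really just sending the error and the relevant snippet to a model and asking for a fix plus an explanation. A minimal sketch, assuming the OpenAI Python client (the model name, prompt, and error text are only illustrative; any chat-completion API works the same way):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical error and snippet just for illustration.
error_text = "TypeError: unsupported operand type(s) for +: 'int' and 'str'"
snippet = "total = count + label"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a debugging assistant. Explain the fix."},
        {"role": "user", "content": f"Fix this error and explain how and why:\n{error_text}\n\nCode:\n{snippet}"},
    ],
)

# Print the model's suggested fix and explanation for review.
print(response.choices[0].message.content)
```

The output still has to be reviewed, as noted above; the sketch only shows how little ceremony the round trip takes.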
I keep seeing programmers use this as an example of what LLMs are good for, and I’ve seen other programmers say that the people who do that are bad programmers. The latter makes sense because trusting an LLM to do this is to fundamentally misunderstand what your job is and how the LLM works.
The LLM can’t tell you HOW or WHY because it doesn’t know those things. It can only give you an approximation of words that sound like someone explaining HOW and WHY. LLMs have no fidelity.
It could be completely wrong, and you wouldn’t know because you’ve admitted you’re using the LLM instead of reading the documentation and understanding yourself.
That is so irresponsible. Just RTFM like good programmers have done forever. It’s not that much work if you get into the habit of it. Slow down, take the time to understand HOW and WHY to do things yourself, and make quality code rather than cranking out bigger volumes of crap that you don’t understand. I’m sure it feels very productive in the moment but you’re probably just creating more work for whoever has to clean up your large quantities of poorly thought out code.
And it only consumes the equivalent in electricity of what an American house uses for a few years.
You’re leaving out the main question: do they increase profit? YES.
so nothing anyone says matters. prepare your anus
Does it though?
How long before anyone actually looks up and says the emperor has no clothes?
I mean, the students around me that would have failed by now without ChatGPT probably DO want it. But they don’t actually want the consequences that come with it. The academic world will adapt and adjust, kind of like inflation. You can just print more money, but that won’t actually make everyone richer long term.
But they don’t actually want the consequences that come with it.
This brings up a potential positive thought. If enough people are cheating with LLMs, the perceived value of a degree may go down. This in turn might put some downward pressure on the costs of higher education, making it practical for those of us who would like to pursue graduate studies for the sake of learning to do so.
ITT: LLM helps me with mundane tasks, so fuck the enormous energy requirements and the impact on the environment!
Yeah… who doesn’t love moral absolutism… The honest answer to all of these questions is, it depends.
Are these tools ethical or environmentally sustainable:
AI doesn’t just consist of LLMs, which are indeed notoriously expensive to train and run. Using an image generator, for example, can be done on something as simple as a gaming-grade GPU. And other AI technologies are already so lightweight your phone can handle them. Do we assign the same negativity to gaming, even though it’s just people using electricity for entertainment? Producing a game also costs a lot more than it does for an end user to play it. It’s all about the balance between the two. And yes, AI technologies should rightfully be criticized for being wasteful, such as implementing them in places they have no business in, or foregoing becoming more efficient.
The ethicality of AI is also something that is a deeply nuanced topic that has no clear consensus. Nor does every company that works with AI use it in the same way. Court cases are pending, and none have been conclusive thus far. Implying it is one sided is just incredibly dishonest.
but do they enable great things that people want?
This is probably the silliest one of them all, because AI technologies are groundbreaking in medical research. They are seemingly pivotal in healing the sick people of tomorrow. And creative AIs allow people who are creative to be more creative. But they are ignored. They are shoved to the side because they don’t fit the “AI bad” narrative. Even though we should be acknowledging them, and seeing them as the allies they are against big companies trying to hoard AI technology for themselves. It is these companies that produce problematic AI, not the small artists, creatives, researchers, or anyone else using AI ethically.
but are they being made by well meaning people for good reasons?
Who, exactly? You must realize there are far more parties than Google, Meta and Microsoft that create AI, right? Companies and groups you’ve most likely never heard of before, creating open source AI for everyone to benefit from, not just those hoarding it for themselves. It’s just so incredibly narrow-minded to assign maliciousness to such a large group of people on the basis of what technology they work with.
Maybe you’re not being negative enough
Maybe you are not being open-minded enough, or have been blinded by hate. Because this shit isn’t healthy; it’s echo-chamber behaviour. I have a lot more respect for people who don’t like AI but base it on rational reasons. There’s plenty of genuinely bad things about AI that have to be addressed, but instead you find yourself in a divide between people who come dangerously close to spreading borderline misinformation to get what they want, and genuine people who simply want their voice and concerns about AI to be heard.
Can’t have nuanced sensible opinions on stuff in this community lol.
For real, it’s what I hate about all of this because infighting pretty much always leads to people being shafted. Even if there are plenty of things to come to agreements about. But this kind of one sided soapboxing is just doing far more harm than good in convincing people.
AI? In medical research? But rulers!!!
I am on an internship with like really nice people in a company that does sustainable stuff.
But they honestly have a list of AI tools they plan to use, to make automated presentations… like wtf?
Same at my work, and it’s because upper management have tasked middle managers with finding a way to ‘use AI’. But when the tool solves a business problem, it really is fantastic.
Yes for sure there are use cases. But there are some things that humans can just do better.
Presentations? For sure, AI will clutter you with pages, add random pictures, and make a huge presentation. But why add inauthentic stuff and bloat other people’s brains?
Just don’t use pictures if you prefer that.
Maybe you should learn about them and realise that AI is not evil?
I’ve used LLMs to save me hours of time reformatting text and old notes, and to restructure explanations so I can better understand and share them; I’ve used AI speech-to-text models to transcribe my voice notes, and diffusion models to generate better-quality mockups for designs that were later commissioned at higher quality, with no need for any changes.
I can understand not liking AI, or not needing it yourself, but acting as if it has no use is frankly ridiculous. You might not use it, but other people do.
I think this says more about corporations’ attempts to integrate “AI” into everything, instead of it being a user choice, than it does about the technology itself.
Remember the sacred texts:
I think we’ve covered enough ground in the near-90 comments here. People are getting butthurt. Thread locked.
Most AI is being developed to try to sustain the need for content for social networks. The bots are there to make it feel lived in so they can advertise to you. They are running out of people who are willing to give them free content while they make billions off your art. So then, they just replace the artist.
Ok. Been thinking about this and maybe someone can enlighten me. Couldn’t LLMs be used for code breaking and encryption cracking? My thought is that language has a cadence. So even if you were to scramble it to hell, shouldn’t that cadence be present in the encryption? Couldn’t you feed an LLM a bunch of machine code and train it to take that machine code and look for conversational patterns, spitting out likely dialogues?
Could there be patterns in ciphers? Sure. But modern cryptography is designed specifically against this. Specifically, it’s designed against there being patterns like the one you said. Modern cryptographic algos that are considered good all have the Avalanche effect baked in as a basic design requirement:
https://en.m.wikipedia.org/wiki/Avalanche_effect
Basically, using the same encryption key, if you change one character in the input text, the ciphertext will be completely different. That doesn’t mean there couldn’t possibly be patterns like the one you described, but it makes it very unlikely.
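If you want a feel for what the avalanche effect looks like in practice, here’s a minimal Python sketch. It uses SHA-256 as a stand-in for a modern cryptographic primitive, and the two input strings are just made-up examples:

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    # Count how many bits differ between two equal-length byte strings.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

# Two inputs that differ by a single word.
h1 = hashlib.sha256(b"attack at dawn").digest()
h2 = hashlib.sha256(b"attack at dusk").digest()

print(f"{bit_diff(h1, h2)} of {len(h1) * 8} output bits differ")
# Typically around half of the 256 bits flip, so no "cadence" from the
# plaintext survives in the output for an LLM (or anything else) to latch onto.
```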
More to your point, given the number of people playing with LLMs these days, I doubt LLMs have any special ability to find whatever minute, intentionally obfuscated patterns may exist. We would have heard about it by now. Or… maybe we just don’t know about it. But I think the odds are really low.
Very informative! Thank you.
That would probably be a task for regular machine learning. Plus proper encryption shouldn’t have a discernible pattern in the encrypted bytes. Just blobs of garbage.
Thanks for the reply! I’m obviously not a subject matter expert on this.
This is a good question and your curiosity is appreciated.
A password that has been properly hashed (the thing they do in that Avalanche Effect Wikipedia entry to scramble the original password in storage) can take trillions of years to crack, and each additional character makes that number exponentially higher. Unless the AI can bring that number to less than 90 days - a fairly standard password change frequency for corporate environments - or heck, just less than 100 years so it can be done within the hacker’s lifetime, it’s not really going to matter how much faster it becomes.
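To put some rough numbers on that: a back-of-the-envelope Python sketch, assuming a 94-symbol printable-ASCII character set and a hypothetical attacker doing 10^12 guesses per second (deliberately generous; proper slow hashes like bcrypt allow far fewer guesses per second):

```python
# Rough brute-force cost for passwords drawn from ~94 printable ASCII symbols.
CHARSET = 94
GUESSES_PER_SECOND = 1e12          # assumed attacker speed, very optimistic
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for length in (8, 12, 16):
    combinations = CHARSET ** length
    years = combinations / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{length} chars: ~{years:.1e} years to exhaust the space")

# Roughly: 8 chars is a couple of hours, 12 chars is ~1.5e4 years,
# 16 chars is ~1e12 years. Each extra character multiplies the cost by 94.
```

So even a large constant-factor speedup from “AI” doesn’t move a trillion-year search into a human lifetime; only shortening the password or stealing it some other way does.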
The easier method (already happening in fact) is to use an LLM to scan a person’s social media and then reach out to relatives pretending to be that person, asking for bail money, logins etc. If the data is sufficiently locked down, the weakest link will be the human that knows how to get to it.