AI is overhyped, but it's obvious that at some point in the future, AI will be able to match human intelligence. Some guy in the 1600s probably said the same about the first steam-powered vehicle - that it would still be snake oil in 100 years. Little did he know he was off by about 250 years.
The common-language concept of AI (i.e. AGI) - sure, it will happen one day.
This specific avenue of approaching the problem ending up being the one that evolves all the way to AGI, though - that doesn't seem at all likely: its rate of improvement has stalled, it's unable to do logic, and it has the infamous hallucinations, so all indications are that it's yet another dead end.
Mind you, plenty of dead ends in this domain ended up being useful - for example, the original neural network architectures were good enough for character recognition and enabled things like automated mail sorting - but the bubble around this specific generation of machine-learning architectures seems way out of proportion to how far this generation has turned out to be able to go.
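For concreteness, here's a minimal sketch of that older class of model - a small multilayer perceptron classifying scikit-learn's bundled 8x8 digit images. It illustrates the technique, not the actual mail-sorting systems of the era:

    # A small "classic" neural net doing character recognition.
    # Illustrative sketch only - the historical USPS systems differed,
    # but this is the same class of model the comment refers to.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)  # 1797 grayscale 8x8 digit images
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # One small hidden layer - roughly the scale of late-80s/90s networks.
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    print(f"test accuracy: {clf.score(X_test, y_test):.2f}")  # ~0.97 on a typical run

Nothing exotic - and that modest accuracy was already enough to be commercially useful, which is the point: a dead end for AGI can still pay its way.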
That's my point, though: the first steam-powered vehicles were obviously promising. But all large language models can do is parrot back at you what they already know, which they got from humanity.
I thought AI was supposed to be super intelligent and was going to invent teleporters and make us all immortal and stuff. Humans don't know how to do those things, so how can a parrot work it out?
Of course the early models of anything are bad. But the whole concept and its practical applications will eventually be improved upon as other foundational and prerequisite technologies arrive and enhance the entire project. And of course, progress doesn't happen overnight.
I'm not fanboying AI, but I'm not sure why the dismissive tone, as if we live in a magical world where technology should by now have let us travel through space and time (I mean, I wish we could). The first working AI is already here. It's still AI even if it's in its infancy.
I want my functional, safe, and nearly free jetpack.
Because I've never seen anyone prove that large language models are anything other than very, very complicated text prediction. I've never seen them do anything that requires original thought.
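To make "text prediction" concrete, here's the simplest possible version of the idea - a toy bigram model that predicts each next word from counts. Real LLMs replace the lookup table with a transformer over billions of parameters, but the interface is the same: context in, next-token distribution out. (A sketch for illustration, not how any production model is built.)

    import random
    from collections import defaultdict, Counter

    # Toy "language model": count which word follows which in a corpus,
    # then predict the next word by sampling from those counts.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Build bigram counts: for each word, how often each successor appears.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def predict_next(word: str) -> str:
        """Sample the next word in proportion to how often it followed `word`."""
        successors = bigrams[word]
        words = list(successors.keys())
        weights = list(successors.values())
        return random.choices(words, weights=weights, k=1)[0]

    # Generate text by repeatedly feeding the prediction back in.
    word = "the"
    output = [word]
    for _ in range(8):
        if not bigrams[word]:  # dead end: word never seen with a successor
            break
        word = predict_next(word)
        output.append(word)

    print(" ".join(output))  # e.g. "the cat sat on the mat the cat ate"

Everything it emits is recombined from its training data - which is exactly the "parrot" objection, just at a vastly smaller scale.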
To borrow from the Bobiverse book series: no self-driving car has ever worked out that the world is round - not due to lack of intelligence, but simply due to lack of curiosity.
Without original thinking, I can't see how it's going to invent revolutionary technologies, and I've never seen anybody demonstrate that there's even the tiniest speck of original thought, imagination, or inquisitiveness in these things.