With that said, you might look at researchers using AI to come up with new, useful protein folds and to advance biology in general. The roadblock, to my understanding (data science guy, not biologist), is the time it takes to discover these things, i.e. how long it would take evolution to get there. Admittedly that’s still somewhat quantitative.
For qualitative examples we always have hallucinations, and that’s a poorly understood mechanism that may well be able to produce actual creativity. But it’s the nature of AI models to remain within (or close to within) the corpus of knowledge they were trained on. Though that line of thinking leads to “nothing new under the sun”, so I’ll stop rambling now.
The roadblock, to my understanding (data science guy, not biologist), is the time it takes to discover these things, i.e. how long it would take evolution to get there. Admittedly that’s still somewhat quantitative.
Yes.
But it’s the nature of AI models to remain within (or close to within) the corpus of knowledge they were trained on.
That’s fundamentally solvable.
I’m not against attempts at general artificial intelligence, just against one particular approach to it. Also, no matter how much we want to pretend it’s something general, what we in fact want is something that thinks like a human.
What all these companies like DeepSeek and OpenAI have been doing lately with “chain-of-thought” models is, in my opinion, what they should have been focused on in the first place: how do you organize data for a symbolic logic model, how do you generate and check syllogisms, and how do you then synthesize algorithms based on those syllogisms? There seems to be something like a chicken-and-egg problem between logic and algebra: in such a system each seems necessary for the other, so they depend on each other (for a machine, that is; we humans get to keep a few things constant for most of our existence). And the predictor into which they’ve invested so much data is a minor part which doesn’t have to be so powerful.
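To make the “generate and check syllogisms” part concrete, here is a toy sketch of what I mean (entirely my own illustration, not anything these labs actually ship; Statement and entails are names I just made up): represent categorical statements symbolically and test a candidate syllogism by brute force over small model worlds.

```python
from itertools import product

class Statement:
    """A categorical statement like 'All M are P' or 'Some S are M'."""
    def __init__(self, quantifier, subject, predicate):
        self.quantifier = quantifier  # "all" or "some"
        self.subject = subject
        self.predicate = predicate

    def holds(self, world):
        """world maps each individual to the set of terms it belongs to."""
        subjects = [x for x, terms in world.items() if self.subject in terms]
        if self.quantifier == "all":
            return all(self.predicate in world[x] for x in subjects)
        return any(self.predicate in world[x] for x in subjects)

def entails(premises, conclusion, terms=("S", "M", "P"), individuals=2):
    """Brute-force entailment check: if every premise holds in some small
    world but the conclusion fails there, the syllogism is invalid."""
    memberships = list(product([False, True], repeat=len(terms)))
    for rows in product(memberships, repeat=individuals):
        world = {i: {t for t, inside in zip(terms, row) if inside}
                 for i, row in enumerate(rows)}
        if all(p.holds(world) for p in premises) and not conclusion.holds(world):
            return False
    return True

# Barbara: All M are P, All S are M  =>  All S are P  (valid)
premises = [Statement("all", "M", "P"), Statement("all", "S", "M")]
print(entails(premises, Statement("all", "S", "P")))  # True
print(entails(premises, Statement("all", "P", "S")))  # False, not a valid inference
```

Generation is then just proposing candidate conclusions and keeping the ones a check like this accepts; that’s exactly the part where the big statistical predictor wouldn’t need to be so powerful.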
I’m not against attempts at general artificial intelligence, just against one particular approach to it. Also, no matter how much we want to pretend it’s something general, what we in fact want is something that thinks like a human.
Agreed. The techbros pretending that the stochastic parrots they’ve created are general AI annoys me to no end.
While not as academically cogent as your response (totally not feeling inferior at the moment), it has struck me that LLMs would make a fantastic input/output layer for a greater system, analogous to the Wernicke/Broca areas of the brain. It seems like they’re trying to get a parrot to swim by having it do literally everything. I suppose the thing that sticks in my craw is the giveaway that they’ve promised this one technique (more or less; I know it’s more complicated than that) can do literally everything a human can, which should be an entire parade of red flags to anyone with a drop of knowledge of data science or fraud. I know that it’s supposed to be a universal function approximator hypothetically, but I think the gap between hypothesis and practice is very large, and we’re dumping a lot of resources into filling in the canyon (chucking more data at the problem) when we could be building a bridge (creating specialized models that work together).
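As a purely hypothetical sketch of that bridge (every name and function below is made up, and a real LLM would replace the hard-coded parse/render stubs), the language model would only translate between free text and structured tasks, and specialized components would do the actual work:

```python
from dataclasses import dataclass

@dataclass
class Task:
    kind: str      # e.g. "arithmetic" or "lookup"
    payload: dict

def llm_parse(user_text: str) -> Task:
    """Stand-in for an LLM turning free text into a structured task."""
    if "+" in user_text:
        a, b = user_text.split("+", 1)
        return Task("arithmetic", {"a": float(a), "b": float(b)})
    return Task("lookup", {"query": user_text.strip()})

# The "specialists": each is a narrow model/solver that does one job well.
SPECIALISTS = {
    "arithmetic": lambda p: p["a"] + p["b"],
    "lookup": lambda p: f"no knowledge base wired up for {p['query']!r}",
}

def llm_render(task: Task, result) -> str:
    """Stand-in for an LLM turning a structured result back into prose."""
    return f"({task.kind}) {result}"

def answer(user_text: str) -> str:
    task = llm_parse(user_text)                    # language in
    result = SPECIALISTS[task.kind](task.payload)  # specialist does the work
    return llm_render(task, result)                # language out

print(answer("2 + 3"))              # (arithmetic) 5.0
print(answer("fold this protein"))  # (lookup) no knowledge base wired up ...
```

The point being that the middle layer could be swapped for a proper protein-folding model, a theorem prover, or whatever else, without asking the language part to do everything itself.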
Now that I’ve used a whole lot of cheap metaphor on someone who casually dropped ‘syllogism’ into a conversation, I’m feeling like a freshman in a grad-level class. I’ll admit I’m nowhere near up to date on specific models and bleeding-edge techniques.
An elegant way to make someone feel ashamed for using many smart words, ha-ha.
I know that it’s supposed to be a universal function approximator hypothetically, but I think the gap between hypothesis and practice is very large, and we’re dumping a lot of resources into filling in the canyon (chucking more data at the problem) when we could be building a bridge (creating specialized models that work together).
The metaphor is correct; I think it’s some social mechanism that makes them reach for a brute-force solution first. Normally, spending more resources to achieve the same result would be a downside, but if it’s a resource otherwise not in demand, one that only the stronger parties like corporations and governments possess in sufficient amounts, then it can be an upside for someone, because it shifts the balance.
And LLMs already appear good enough to build captcha-solving machines, machines for faking “proof” images or video, fraudulent chatbots, or machines that predict someone’s (or some crowd’s) responses well enough to play them. So I’d say commercially they are already successful.
Now that I’ve used a whole lot of cheap metaphor on someone who casually dropped ‘syllogism’ into a conversation, I’m feeling like a freshman in a grad-level class. I’ll admit I’m nowhere near up to date on specific models and bleeding-edge techniques.
We-ell, it’s just hard to describe the idea without using that word, but I haven’t even finished my BS yet (lots of procrastinating, running away and long interruptions), and the only bit of up-to-date knowledge I had was what DeepSeek prints when answering, so.
Ooooooh. Ok that makes sense.