It doesn’t need to think like us; it just needs to be able to interact with us easily. Doing that while thinking differently is difficult.
In order to control our current machines, we had to learn entirely new languages (programming).
To make them accessible to more people, we had to spend a ton of time and effort building a translation layer (the interface).
We’re obviously already seeing user interfaces being replaced by chat bots, but as for programming languages, I don’t see those going away anytime soon.
Aside from LLMs only really being good at generating text, AI in general has a major problem: You cannot reason about it.
Being able to reason about it is important, because that is what allows you to compose multiple components.
Basically, modern programs often use thousands of libraries and operating system calls, each of which needs to be essentially 100% reliable.
If each of them were just 99.9% reliable, the whole stack would be 0.999^1000 ≈ 0.368, so only about 36.8% reliable as a whole.
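A quick back-of-the-envelope check in Python, assuming 1000 independent components that each succeed 99.9% of the time (the figures are just the illustrative numbers from above, not measurements):

```python
def overall_reliability(per_component: float, components: int) -> float:
    """Probability that every component works, assuming independence."""
    return per_component ** components

# 1000 components at 99.9% each: the whole stack only works ~36.8% of the time.
print(overall_reliability(0.999, 1000))     # ~0.368

# The same stack at "six nines" per component stays ~99.9% reliable overall,
# which is why libraries have to be far better than "pretty good".
print(overall_reliability(0.999999, 1000))  # ~0.999
```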
Obviously, this is just a ballpark estimate that simplifies quite a bit, but pretty much all of humanity's computer science achievements are stacked on top of one another in all these libraries.

AI will not come up with all of that anew every time it needs to do something. It will continue to tap into libraries, which will continue to be formulated and reasoned about with programming languages.