Imagine an AGI (Artificial General Intelligence) that could perform any task a human can do on a computer, but at a much faster pace. This AGI could create an operating system, produce a movie better than anything you’ve ever seen, and much more, all while being limited to SFW (Safe For Work) content. What are the first things you would ask it to do?
The obvious answer is to use it to create an AGPLv3-or-later clean-room implementation of itself, then use that to do whatever I want.
Personalized open source education for everyone, running on fully documented RISC-V open hardware, designed specifically for and by the AGI to run completely independently, with no outside connections needed and full transparency. The weights would be open source and freely available. The hardware would be easy to fabricate with high yield on trailing nodes, and all of the software, lithography masks, and digital tooling would be GPLv3-licensed and free. The hardware would be profitable for anyone to produce but impossible for anyone to control.
Also, open-source all of global politics: show the underlying motivations and patterns objectively, in an entertaining and easy-to-watch visual format that appeals to the majority of humans and motivates nonviolent change.
If that exists, it’s curtains for humanity. Not because of the AGI itself killing us all, necessarily, but because that means human labor is forever obsolete and the vast majority of humans, including me, will soon starve to death on the street or be imprisoned for vagrancy.
So, I wouldn’t ask it anything, except maybe to recommend a suicide method.
Hopefully some people are more positive than that and willing to change society so that AGI doesn’t leave most humans to starve to death or be imprisoned.
Look around you. Look at all the uprisings that haven’t happened as a result of the latest round of extreme price gouging and resulting public impoverishment. Look at all the homeless people everywhere, sitting quietly in their tents and dying of starvation, instead of standing up and marching and demanding the employment and housing opportunities that they’ve thus far been denied.
No, society will not change to coexist harmoniously with AGI. The events of the last few years have made this abundantly clear. The whole point of creating AGI is to replace human labor and dispose of the vast majority of humans, and those humans are going to let it happen.
I think a lot of the things proposed here could not be done by an AGI on a computer, no matter how intelligent it is. Consider this alternative scenario: you have an exceptionally intelligent young human adult locked in a room with a computer. They have no specialized education or anything; they are just extremely intelligent. What could you achieve through such a person?
Discovery of new physics is out of the question. That would need experiments.
Locked in a room with an internet connection? A lot. But without any contact with the outside world? Not nearly as much. It could have other people running experiments for it with an internet connection, but not without one.
Anyway, letting the AGI interact with the real world undermines the premise of my question. I specifically said that it only operates as a human on a computer. I didn’t say it could acquire a physical body, so let’s assume it can’t, and that it can’t use other people to do physical labor either.
Tell it to figure out zero-point energy or whatever other sci-fi-type free energy is possible. Then tell it to figure out the cheapest, easiest way to implement the technology. Then have it disseminate those plans worldwide to everyone.
This sounds like science fiction. Even if the AGI were capable of creating plans for a fusion reactor, for example, you would still need to execute those plans. So, what’s the point of everyone having access to the plans if the same electrical companies will likely be responsible for constructing the reactor?
If everyone had the plans for a super-efficient, easy(ish)-to-build free-energy device, its existence couldn’t be covered up, and the big energy companies and governments around the world would be forced to implement those plans or face civil unrest or revolt.
Truth be told, I don’t know what I would ask. That much said, I don’t foresee AI replacing us right away; it’s got quite a ways to go before that happens. But when it does, I expect a massive disruption in society, because joblessness and homelessness are going to skyrocket, and there is just no assistance in a hypercapitalist world.
I wouldn’t be surprised if corporations just asked the AI to make as much money as possible at the expense of everything else. But people like living in capitalist countries anyway, while complaining about the lack of safety nets. Otherwise they would move to countries like China, North Korea or Cuba.
Yeah, that’s kind of reductive.
Discuss the notion of, and evidence for, us being in an approximate recreation of Earth circa the 2020s, as recreated by a future version of said superintelligence.
If I’m having an existential crisis, it should too.
I was thinking about this a few days ago. GANs and the Simulation Hypothesis: An AI Perspective
Clever thinking.
The generator’s aim is to create a world so convincing that the discriminator can’t distinguish it from a ‘real’ world. This mirrors the GAN architecture where the generator tries to trick the discriminator into believing its generated instances are real.
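The adversarial setup described above can be sketched in code. The following is a minimal toy illustration, not anything from the linked post: a 1-D "world" of Gaussian samples stands in for reality, with a linear generator and a logistic-regression discriminator trained against each other. All names and the specific distributions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# The "real world": samples from N(4, 0.5).
# Generator g(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0        # generator parameters
w, c = 0.0, 0.0        # discriminator parameters
lr, batch = 0.01, 64

for step in range(5000):
    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)),
    # i.e. learn to tell real samples from generated ones.
    real = rng.normal(4.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on log D(fake) (non-saturating loss),
    # i.e. learn to produce samples the discriminator accepts as real.
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.normal(0.0, 1.0, 1000) + b
# The generated distribution drifts toward the real mean of 4.
```

The parallel to the comment is direct: if the generated "world" ever became distinguishable from the real one, the discriminator's signal would push the generator to close that gap.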
While I do think the generator and discriminator perspectives of reality are a great way of thinking about things, I think the details are too obvious, once you see them, for the purpose to have been to stay hidden.
In many ways it seems like the nine dolphins illusion.
You have very interesting thoughts on the topic, and I invite you to share them on [email protected] - which might also have details you’ll enjoy in turn.
I’d want a familiar/daemon that was running an AI personality to act as a personal assistant, friend and interactive information source. It could replace therapy and be a personalized tutor, and it would always be up to date on the newest science and global happenings.
I honestly think that with an interesting personality, most people would drastically reduce their Internet usage in favor of interacting with the AGI. It would be cool if you could set the percentage of humor and other traits, similar to the way it’s done with TARS in the movie Interstellar.
That’s possible now. I’ve been working on such a thing for a bit now and it can generally do all that, though I wouldn’t advise it to be used for therapy (or medical advice), but mostly for legal reasons rather than ability. When you create a new agent, you can tell it what type of personality you want. It doesn’t just respond to commands but also figures out what needs to be done and does it independently.
Yeah I haven’t played with it much but it feels like ChatGPT is already getting pretty close to this kind of functionality. It makes me wonder what’s missing to take it to the next level over something like Siri or Alexa. Maybe it needs to be more proactive than just waiting for prompts?
I’d be interested to know if current AI would be able to recognize the symptoms of different mental health issues and utilize the known strategies to deal with them. Like if a user shows signs of anxiety or depression, could the AI use CBT tools to conversationally challenge those thought processes without it really feeling like therapy? I guess just like self-driving cars this kind of thing would be legally murky if it went awry and it accidentally ended up convincing someone to commit suicide or something haha.
That last bit already happened. An AI (allegedly) told a guy to commit suicide, and he did. A big part of the problem is that while GPT-4, for instance, knows all about the things you just said and can probably do what you’re suggesting, nobody can guarantee it won’t get something horribly wrong at some point. Sort of like how self-driving cars can handle 95% of situations correctly, but the remaining 5% of unexpected cases, which may require extra context a human has but the car was never trained on, are very hard to get past.
Thanks for the link, that sounds like exactly what I was asking for but gone way wrong!
What do you think is missing to prevent these kinds of outcomes? Is AI simply incapable of categorizing topics as ‘harmful to humans’ on its own, without a human’s explicit guidance? It seems like the philosophical nuances of things like consent or dependence or death would be difficult for a machine to learn if it isn’t itself sensitive to them. How do you train empathy in something so inherently unlike us?
In the case I mentioned, it was just a poorly aligned LLM. The ones from OpenAI would almost certainly not do that, because they go through a process called RLHF where those sorts of negative responses get trained out of them for the most part. Of course there’s still stuff that will get through, but unless you are really trying to get it to say something bad, it’s unlikely to do anything like in that article.

That’s not to say they won’t say something accidentally harmful. They are really good at telling you things that sound extremely plausible but are actually false, because by default they don’t really have any way of checking. I have to cross-check the output of my system all the time for accuracy. I’ve spent a lot of time building in systems to make sure it’s accurate, and it generally is on the important stuff. Tonight it did have an inaccuracy, but I sort of don’t blame it, because the average person could have made the same mistake. I had it looking up contractors to work on a bathroom remodel (a fake test task), and it googled for the phone number of the one I picked from its suggestions. Google proceeded to give a phone number in a big box, with tiny text saying a different company’s name. Anyone not paying close attention (including my AI) would have called that number instead. It wasn’t an ad or anything; somehow this company just came up in the little info box any time you searched for the other company.
Anyway, as to your question, they’re actually pretty good at knowing what’s harmful when they are trained with RLHF. Figuring out what’s missing to prevent them from saying false things is an open area of research right now, so in effect, nobody knows how to fix that yet.
Come up with a very low cost power generator and open source the whole thing.
The kind that uses gas? I honestly wouldn’t have thought someone would be interested in open-sourcing this. I would prefer if it designed an open-source Roomba or, while we’re at it, a robot body so that it could perform more tasks. But you would still have to build it yourself.
Not gas, something more environmental for sure.
I heard disruptive science is slowing down, which I think means pretty much everything possible has already been thought of. So, talking about things that exist, do you mean a cheaper solar panel or a wind/water turbine? Or are we talking about science fiction like an Arc Reactor?
You’re assuming a human could do that on a computer, though. It’s kind of hard to improve on that basic and very mature technology.
We’re talking super intelligence here.
I put more weight on the description text, but yes that was in the title.
Even if we assume it’s a god, though, I’m not sure there’s a way to improve on most kinds of generators more than incrementally. I don’t expect it would improve on “the wheel” either.
I’m sure there are methods of generating electricity that we haven’t even stumbled on.
How would that work? Electrons are very well understood, as are the ways of getting them to move.
I think we’re pretty far from the peak understanding of almost everything. There are so many discoveries still to be made.
Based on what? Sure, I’d guess we’re just getting started with planetary science and cosmology, but power generation has been explored to death, and we’re still using the same basic alternator design as Tesla did.
how to tie my shoe?