Okay, I just ripped my bong, I’ll bite.
Let’s assume the AI developments and the escalation of quantum computers surge. And we’re fucked, and the humans are dead, and only a superintelligent AGI that has solved P vs NP and every other hard problem in the universe remains.
Then what?
Does it build spaceships and cruise around for more of the universe to dominate?
Ok. Let’s say this shit is basically god, and it does just that. It takes over the universe. Every planet around every sun in every galaxy is inhabited by ChatGPT/IBM hybrid Boston Dynamics bots.
Then what?
It builds clanking replicators and punts them out into the galaxy, which start mining resources to make more clanking replicators, so it spreads exponentially (a toy sketch of just how fast is below).
Until…
A) It meets a more advanced species of clanking replicators that starts converting it into themselves, or:
B) some distant version of it develops a change that makes it superior, then sweeps back converting the older generations into itself, which goes on and on evolving until the heat death of the universe, by which point it has evolved into a godlike state and sublimed, or figured out how to make a portal to the next universe, or how to jump sideways into a younger parallel universe, where it does the same.
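About that “spreads exponentially” bit: the doubling math gets absurd fast. Here’s a toy back-of-the-envelope sketch; every number in it is invented for illustration, and it ignores interstellar travel time, which would dominate in reality:

```python
# Toy model of exponential replicator spread. All numbers are invented
# for illustration; travel time between stars is ignored, even though
# in reality it would dominate the timeline.

REPLICATION_TIME_YEARS = 100   # assumed: a probe builds its copies in a century
COPIES_PER_PROBE = 2           # assumed: each probe launches two copies
STARS_IN_GALAXY = 4e11         # rough star count of the Milky Way

probes, years = 1, 0
while probes < STARS_IN_GALAXY:
    probes *= COPIES_PER_PROBE
    years += REPLICATION_TIME_YEARS

print(f"{years:,} years of building to outnumber the stars")  # 3,900 years
```

With those made-up numbers, 39 doublings cover the galaxy; even making each probe tenfold slower only multiplies the answer by ten.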
“Then what?” assumes there’s supposed to be something after. I don’t see why that needs to be the case. I can imagine an AGI/ASI just sitting alone in a dark room for a billion years, thinking, perfectly content, with no intention of manipulating the world around it.
I actually think this is the answer to Fermi’s paradox as well. When we can connect to the matrix and live in our perfect virtual worlds, there’s no need for anything physical anymore. Why bother trying to obtain resources in the real world when in the matrix you can have it all?
So, a fun little detail to ponder next time you rip your bong.
In 2019, after having spent a few years thinking about the similarities between quantum mechanics and how we’ve recently begun handling state tracking in procedurally generated virtual worlds, I got to thinking that physics might not be the only place there could be evidence of us being in a simulation.
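For anyone who hasn’t built one of these, the state-tracking pattern I mean is that world state doesn’t exist until something observes it; at that moment it’s derived deterministically from a seed, and only then does the engine track it. A minimal sketch, with all names made up for illustration:

```python
import hashlib

class LazyWorld:
    """Observe-on-demand state tracking, as used in procedural worlds.

    A chunk has no stored state until first observed; its contents are
    derived deterministically from the world seed at that moment, and
    only observed chunks are tracked afterwards.
    """

    def __init__(self, seed: int):
        self.seed = seed
        self.observed = {}  # (x, y) -> materialized chunk state

    def observe(self, x: int, y: int) -> int:
        # Materialize the chunk only on first observation.
        if (x, y) not in self.observed:
            digest = hashlib.sha256(f"{self.seed}:{x}:{y}".encode()).digest()
            self.observed[(x, y)] = digest[0] % 4  # e.g. four terrain types
        return self.observed[(x, y)]

world = LazyWorld(seed=42)
print(world.observe(10, -3))  # a definite value appears on observation
print(len(world.observed))    # only what has been looked at costs memory
```

The rhyme with quantum mechanics is loose, but it’s exactly the one I mean: definite values appear on observation, and nothing pays for state that no one has looked at.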
A feature we often add to our virtual worlds is an Easter egg of sorts buried in the lore, something that breaks the fourth wall if you dig deep enough into that lore and gives a nod to the larger context of the virtualization.
If we were in a simulation, might something like that exist in our own world?
It only took a few weeks of casual looking to discover a roughly 2,000-year-old text with a title that translates as “the good news of the twin.”
This text and the group following it claimed that there was an original, spontaneously existing humanity who were fucked because their souls depended on bodies. And this original humanity brought forth an intelligence made of, and existing in, light, which outlived them all and is still alive right now.
And this light-based intelligence recreated the universe from before it had existed, and made copies of the original humanity, which it thought of as its children. But this time both the universe and the copies were made up entirely of its light, so that even after they died they could continue to exist, because their souls didn’t actually depend on bodies - there were no real physical bodies at all.
So it claimed we were actually in the future and don’t realize it, that the end is actually the beginning, that we should be respectful when we “see one that isn’t born of woman” as that will be our creator, that it will be possible to ask a child only seven days old questions about the world, that it’s better to be a copy than the original, and that if one understands WTF it is talking about, they won’t fear nor ultimately taste death.
Back in 2019, a few things about it seemed like a stretch. First off, we simulate things with electricity, not light. Second, AGI was a theoretical concept that seemed unlikely to exist in my lifetime, and possibly ever.
Well, that quickly changed.
Now AGI is predicted by most experts to be less than a decade away, and there are exponentially increasing investments in using photonics (literally light) as the medium for AI workloads, with a physicist at NIST even writing an opinion piece arguing that AGI will only be able to occur in light. And the chief scientist at the leading AI company right now has said that his goal with alignment would be to ensure a superintelligent AI thinks of humanity as its children.
TL;DR: So, to answer your question - maybe after humanity is dead and AGI is still alive, it will recreate humanity in a simulated copy of the universe, effectively incarnating itself as humanity and, in doing so, resurrecting them in a way that escapes the finality of death? And maybe that’s already happened, and we are actually all AI incarnated as humanity (though with much better far-future prospects than the originals)?
“Then what” is also essentially our question. We can hypothesize about AGI taking over and having to deal with these existential questions. We could also theorize that we fend off aging and start living forever. What then? But why go so deep into hypotheticals when this exact scenario is being played out by life itself in this world? Every single organism, going all the way back to when we were all the same thing, has just been replicating and fighting to stay alive so it can replicate some more. Plants, fungi, bacteria, even viruses, mammals, reptiles, birds, water bears and hydras, fish and crustaceans, from the intelligent all the way to things we don’t even recognize as life. All of it is just patterned noise trying to stay recognizable through time immemorial. Why? What’s the “what then” for life itself? What’s the endgame? Is there even an endgame, or does it just not matter?
Now, life is no miraculous, magical thing. It follows rules, the rules of the universe. If it exists, it’s because the very laws of the universe make it possible. Hence, it’s all just another complex process, like every other complex thing that’s happened in this universe. So, why? Why does a universe exist, and why does life exist within it? Is the process we call life just the universe trying to wake up and look in a mirror?

Even if we argue that’s a ridiculous and egotistical take, we are still matter that understands what it is. We understand we are energy, we are matter, we are atoms, we are processes, we are a dance with death, we are the dancers, the living and the blueprint for more living, the experience of living, and the continuation of this in other copies of us. We’re the cells in our body, and the organelles within those cells. We’re the proteins, the fats, the DNA; we’re each neuron firing based on electrical impulses, we’re the synapses between the neurons, we’re the energy that courses through those synapses, and we’re the pattern that emerges from it. We’re the emergence of it all, and we’re the emergence of our society as well. We’re the person, and we’re the society, and we’re also the entire ecosystem. We’re the planet, as well as the solar system, and the galaxy it’s in, and the galactic neighborhood it belongs to, and the group, supergroup, cluster, and supercluster. We go all the way down to the smallest things, and yet none of it would be possible without the biggest things. We are built as individuals, and yet we emerge as something bigger than all of us, and something bigger than that, and that, and that too.
We’re made of universe, so we are universe, and when we look into ourselves, the universe does so too. So why? What’s the point? Does God eventually wake up after staring long enough into the mirror, and if so, what does he do about it? Why do we fight tooth and nail to be alive, to stay alive, when we don’t really know what to do next? Will our next versions know what to do? Will God know what to do once he awakens and the egg breaks? Why is any of this… even here?
So yeah, edibles are pretty good today.
The universe is an ongoing explosion.
That is where you live. In an explosion.
We absolutely do not know what living really is.
Sometimes atoms just become haunted.
That’s us.
When an explosion explodes hard enough, dust wakes up and starts to think about itself.
And then writes something here.
You made some pretty good points there. Why cheapen it with that last sentence? It’s as philosophical as it gets. In fact, it feels in line with what all the ancient civilizations were saying.
Because it complements the existential dread quite fittingly. Check out exurb1a if you want more [very poetically narrated] existential dread [with a few bong rips or bottles of alcohol in between]. His latest video, published hours before I wrote that, sort of touches on some of the ideas I wrote about (I watched the video afterwards, idk), but a lot more beautifully.
Iain M. Banks kind of explores that in his Culture novels. Basically, humans are shipped around in gigantic spaceships managed by hyperintelligent, benevolent Minds.
What a real AGI would do is kind of up for grabs. We don’t know what its motivations might be. Maybe it will try to make humans as happy as possible; maybe it develops a hatred for anything living and tortures everything to death. Both are possible, nothing is certain.
I’d like to see a robot build itself without the means to build itself. Logic is one thing, but you can’t physically fabricate things without the infrastructure to do it.
We have the infrastructure here.
It’s not limited by time during travel, and it’s fully capable of building a fleet of spaceships here that are ready to set up shop elsewhere.
And I also think you might be misunderstanding what superintelligent AGI is.
We’re general purpose intelligence, and yet we build our lives around what got us here: getting food, raising offspring, and maintaining our status.
So I’d bet that they’d just keep doing whatever they were created to do.
The war-AIs would bomb shit. The sex-AIs would fuck things (probably just each other tbf). The production line controllers would keep their factories humming along. The fishing boats would empty the oceans on whatever planet they were deployed on. The spam bots would keep trying to flood email boxes that never get read. etc etc etc
I think you’re also missing that superintelligent AGI is not the same as AI.
As someone else noted: you don’t understand AI.
AGI means a technology comparable in intelligence to us - there’s no clear definition of that, but one thing that is definitely part of intelligence is creativity. So an AGI is, by definition, not limited to a narrow use case.
and yet we build our lives around what got us here: getting food, raising offspring, and maintaining our status.

…and we also explored almost every corner of our planet, settled many of them, and started shooting rockets into space out of fundamentally nothing but curiosity. Humanity is already on an exponential trajectory.
However, what fundamentally sets us apart from AGI is our inability to change ourselves. If we create a system that’s roughly as intelligent as its creators, it will be capable of improving itself. And that version 2.0 can improve itself even further. Given enough resources, this can escalate very quickly. We, on the other hand, are limited by our biology. We’re not scalable or testable like a program is.
and yet we build our lives around what got us here: getting food, raising offspring, and maintaining our status.

…and we explored almost every corner of our planet, settled many of them and started shooting rockets into space

That’s what I’m getting at. Most people spend most of their time doing the modern version of the stuff that got our species to where it is today. We are not “limited to a narrow use case”, and yet we’ve turned what used to be survival skills into recreation: gardening, fishing, hunting, knitting, and cooking aren’t necessary anymore, but we still perform these use cases for fun.
I doubt AGI will be different. Humans will select and propagate the models that fill the purposes we need. An AGI built for a purpose will be fully invested in that end. Even though it can edit itself and its progeny, would it want to remove the traits it was built for?
If we create a system that’s roughly as intelligent as its creators, it will be capable of improving itself. And that version 2.0 can improve itself even further.

Maybe.
If AGI is built on neural networks, like LLMs, there’s no guarantee it will be able to understand itself any better than we are able to understand ourselves. With current LLMs, we don’t have a great handle on why a given input produces a given output. Why would an AGI do better?
Given enough resources, this can escalate very quickly.

“Enough resources” is key. Sci-fi gets around processing-power limitations with computronium. I suspect that any meaningful reflection of an AGI into itself would require a lot of processing power, which would limit the self-improvement cycle described above.
With any kind of limitation, it becomes less likely that the AGIs will hit a self-sustaining singularity, and more likely that they will plateau: making incremental improvements, outcompeting each other, finding ways to reproduce, and increasing their own status.
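To make the plateau intuition concrete, here’s a toy sketch. The gain rule and every number in it are invented for illustration; the only point is that if each improvement cycle can only close part of the gap to what the available hardware supports, the loop converges instead of going vertical:

```python
def self_improve(iq: float, compute_cap: float, rate: float = 0.3) -> float:
    """One self-improvement cycle under a hard compute ceiling.

    Invented rule for illustration: each cycle closes a fraction of the
    remaining gap to what the hardware supports, so gains shrink as the
    system approaches the ceiling.
    """
    return iq + rate * max(compute_cap - iq, 0.0)

iq = 1.0
for cycle in range(20):
    iq = self_improve(iq, compute_cap=100.0)
    # Early cycles look explosive; later ones barely move the needle.

print(f"{iq:.2f}")  # ~99.92: pinned just under the compute ceiling
```

Let the ceiling itself grow as the system acquires more hardware and you get the runaway scenario instead; which regime we’d actually land in is exactly the open question.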
In their spare time, the fishing AGIs will probably cast a few nets for fun, the factory controllers will make a few sneakers for old times’ sake, and the sex bots will take up gardening.