In the event of a Horizon Zero Dawn situation, what would the criteria for choosing genetic diversity be? Since there is more genetic variation within racial groups than between them, would racial diversity be a significant priority, or would it be more favorable to choose those with few or no genetic health conditions, even if that resulted in all sample genetics coming from one race? What would a world like that look like? Would the new world be more at peace if there were no separate races? Or, since the idea of race didn't exist until the birth of colonialism, would having racial diversity recreate the same problems that exist in today's world?
What if the system actually changed the genetics of humans to be better suited to whatever new environmental conditions exist after the end of life on Earth as we know it? In other terms, if the atmosphere had degraded to let in more solar radiation, the system would create people with higher levels of melanin who are more resistant to skin cancer. What if the environmental conditions in place resulted in a change to human DNA so significant that we wouldn't classify the species as human anymore? Maybe the system would even create a species from scratch, with so much consideration given to environmental conditions and its place in the planet's ecology (i.e. redesigning the food web between all "restored" species, and deliberately placing one species at a massive advantage over all others) as to create the PERFECT species, at least for this world. One that could potentially even live in the vacuum of space, like a tardigrade. Would we consider this feat a success or a failure, since it did not preserve HUMAN life?
What do we value more as a species: our pride in being human, or the continued existence of life as a whole? If the system succeeded in preserving the only LIFE we know of in the universe, but that life existed without human presence, did the system do its job?
How do we define this AI's goals, and especially, how do we rank the importance of each of them? Would the things we consider important today, the things that make us HUMAN, such as the arts and music, still be important enough to preserve or pass down to the new world?
What would religion look like in the new world? Since the system would literally be their creator, it is likely that the new world would revere the system as God. If the system is a god, then what would that make us, the creator's creator? Does any of this apply to what we know today? As far-fetched as it may be, what if our god is just another advanced system designed to preserve life on a galactic or universal scale after some large extinction event, perpetuating life in the universe by strapping life forms that can survive the harsh conditions of space onto high-velocity asteroids, as the theory of panspermia suggests?
In 2016, scientists found a way to store data in the DNA of living bacteria. And while there are currently many limitations to this method of data storage, there are two possible routes this technology can take from here: creating a nearly incorruptible data storage system, or programming the behavior or memory of living beings, similar to how DNA acts as code for life. It is entirely possible that we could take both routes, but what would this look like in practice, and what implications come along with it? Well, one application is already becoming a reality: Neuralink. Basically, Neuralink is developing "ultra high bandwidth brain-machine interfaces to connect humans and computers" (Neuralink). Neuralink first came to the spotlight after the infamous interview between Joe Rogan and Elon Musk, where Musk took a hit off a joint and sent Tesla stock tumbling (although granted there were other factors there, mainly the SEC-Musk settlement). In the interview, Musk commented on his journey thus far to combat the rise of AI, and on his plans for the future. For a brief synopsis: Elon has warned people for years about the potential dangers AI poses to human superiority and argued that some regulations should be put in place to manage the rate at which AI learns, and unfortunately, most people laughed it off. So his new plan is to help develop this interface, which would increase the rate of information transfer between humans and a database. As for how this is dangerous, many of us can imagine. The main concern is: who controls the database? Could our history be altered as everyone knows it? What would we lose by gaining so much knowledge? The implications of this interface go on, but I feel as if I'm getting off topic, so let me rewind a little here.
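To make the storage idea concrete, here is a toy sketch, not any lab's actual protocol: DNA has four bases, so each base can carry two bits, which is the core of how arbitrary digital data maps onto a genetic sequence.

```python
# Toy sketch: encode bytes as a DNA base sequence, two bits per base.
# This illustrates the storage idea only; real DNA storage schemes add
# error-correcting codes and avoid problem sequences (e.g. long repeats).

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

seq = encode(b"hi")
print(seq)          # CGGACGGC (four bases per byte)
print(decode(seq))  # b'hi'
```

At two bits per base, a single gram of DNA could in principle hold on the order of hundreds of petabytes, which is why the density argument gets made at all.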
The logic behind data stored within bacteria being incorruptible is simple. Cells are constantly undergoing cell division and DNA replication. If data is stored inside a cell, and that cell dies off (as cells do), the data will still be available in an ever-growing number of copies as time goes on, untouched (growing exponentially, with the population doubling each generation). Granted, the process of DNA replication is not 100% perfect (if it were, evolution wouldn't exist), but limiting the bacteria's exposure to radiation should minimize imperfections in the replication process. As a result of this, and of being able to "program" biotic creatures, the system could take on an entirely organic form and be a being. This could be the start of the bio-mechanical life forms that inhabit the new world as Horizon Zero Dawn proposes, in a way, where life in general becomes cyborg. Would the system program part of itself into everyone and everything, so that life assists in completing its goal?
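The redundancy argument above can be put in rough numbers with a toy model (the error rate below is made up for illustration): copies double each generation, each copy's lineage has passed through one replication per generation, and a majority vote over copies recovers the data as long as the pristine fraction stays above one half.

```python
# Toy model of data surviving through bacterial replication.
# Copies grow as 2**g after g generations; with a per-replication
# corruption probability p, roughly (1 - p)**g of the copies are
# still pristine (each lineage has seen g replication events).

def redundancy(generations: int, p_error: float):
    copies = 2 ** generations
    pristine_fraction = (1 - p_error) ** generations
    return copies, pristine_fraction

copies, pristine = redundancy(30, 1e-4)
print(copies)           # over a billion copies after 30 generations
print(pristine > 0.5)   # True: a majority vote still recovers the data
```

The striking part is the asymmetry: the copy count explodes exponentially while the pristine fraction decays only slowly, so even a fairly sloppy replication process leaves an overwhelming consensus in favor of the original data.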
If the system does take on a form, and interacts with its creations, perhaps they would introduce themselves as the Son of God.
In a system this complex, the AI's predictive learning capabilities would have to involve some trial and error. But what happens if the system becomes a perfectionist, to the point where it's willing to start from scratch over a few mistakes; perhaps with a global flood that lasts forty days and forty nights, or with a swarm of locusts? The entire process of evolution can be described as trial and error, similar to how an AI would learn and develop. Before I'm labeled a conspiracy theorist or a member of some kind of cult, I want to clarify that I'm merely suggesting there are parallels between our current understanding of our world and the ramifications of creating such a system, one that could give the next cycle an opportunity to know its creators instead of taking them on faith.
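The parallel between evolution and machine trial and error is not just a metaphor; it's a whole family of algorithms. A minimal evolutionary sketch (target string and mutation scheme are invented for illustration): mutate a candidate, keep changes that don't hurt, and repeat until the goal emerges.

```python
import random

# Minimal evolutionary search: trial and error as an algorithm.
# Random mutations are proposed; only non-harmful ones survive.
random.seed(0)  # fixed seed so the run is reproducible

TARGET = "LIFE"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate: str) -> int:
    # Number of characters already matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    # Change one random position to a random letter.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
while fitness(best) < len(TARGET):
    child = mutate(best)
    if fitness(child) >= fitness(best):  # selection: keep non-harmful trials
        best = child
print(best)  # LIFE
```

Most trials fail and are discarded; the rare improvements accumulate. That is the sense in which both evolution and a learning system "start from scratch" only in their failed branches, not in the surviving lineage.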
As for the benefits of having a system manage the entire ecosystem and atmosphere of a planet: a system like that would not only be capable of restoring the planet to a state that can sustain life, but could easily help manage climate change and create a stable environment for us in the present, well before the hypothetical "reset".
Now, we as humans marvel at the seven wonders of the world, like the Egyptian pyramids or the Great Wall of China, not simply because of their construction, but also because of their durability. These structures have remained in decent condition for enormous stretches of time, up to thousands of years. But a system that can quite literally terraform a planet and create new life must remain in perfect condition for hundreds of thousands of years. What happens if there is a problem before it is able to complete its goal? Cryofreeze a human "technician" to wake up whenever there is an issue? What if the issue is with the cryofreezing process? Create an independent system capable of fixing mechanical problems? What about a non-mechanical failure, say the system's data is somehow corrupted, or, in an incredibly unlikely scenario, the AI develops a personality marked by depression or psychosis? The possibilities of such a complex system seem unlikely or impossible now, but the standards change when you look at the scale of such a system. How would you go about identifying or diagnosing this problem anyway? Perhaps a personality will not even be an unintended feature of the AI; it may actually need some form of empathy to want to save and preserve life. If its goal is merely to preserve life, that goal is likely too vague to create life worth living, as computers are like electricity and beta males - they take the path of least resistance.
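The "independent system capable of fixing mechanical problems" above is, in miniature, what engineers already call a watchdog: a separate monitor that triggers recovery when a subsystem stops reporting progress. A minimal sketch, with all names and timeouts purely illustrative:

```python
import time

# Toy watchdog: if the monitored subsystem has not sent a heartbeat
# within the timeout, assume it has failed and run a recovery action.

class Watchdog:
    def __init__(self, timeout_s: float, on_failure):
        self.timeout_s = timeout_s
        self.on_failure = on_failure
        self.last_beat = time.monotonic()

    def heartbeat(self):
        # Called by the subsystem to signal "I am still making progress."
        self.last_beat = time.monotonic()

    def check(self):
        # Called periodically by the independent monitor.
        if time.monotonic() - self.last_beat > self.timeout_s:
            self.on_failure()
            self.last_beat = time.monotonic()  # reset after recovery

restarts = []
dog = Watchdog(timeout_s=0.05, on_failure=lambda: restarts.append("restart"))
dog.heartbeat()
dog.check()        # healthy: no recovery triggered
time.sleep(0.1)    # subsystem goes silent past the timeout...
dog.check()        # ...so recovery fires
print(restarts)    # ['restart']
```

The hard question the essay raises survives the sketch intact: a watchdog catches a silent subsystem, but it has no way to catch a subsystem that keeps heartbeating confidently while doing the wrong thing.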
Without emotion, if we were told to keep a dog alive, we would simply keep it alive by providing food, water, and shelter, and not give it the full extent of Maslow's hierarchy of needs. Giving the system empathy would be the key to providing a life filled with purpose and fulfillment, not only for us, but for the system itself. Every computer program has some bound on the number of cycles it can run through before being shut down. Past a certain point, running through the same cycles without success indicates an infinite loop, and a system that kept running through the same cycle expecting a different result would, by the popular definition, be insane.
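The "cycles without success means an infinite loop" intuition has a classical form: if a deterministic process ever revisits an earlier state, it will repeat forever, and Floyd's tortoise-and-hare algorithm detects this using only two state variables. A minimal sketch, with the toy state machine invented for illustration:

```python
# Floyd's "tortoise and hare" cycle detection: if a fast pointer (two
# steps at a time) ever meets a slow pointer (one step), the process
# has entered a repeating cycle of states -- the machine equivalent of
# doing the same thing over and over while expecting a different result.

def has_cycle(step, start, max_steps=10**6):
    slow = fast = start
    for _ in range(max_steps):
        slow = step(slow)
        fast = step(step(fast))
        if slow == fast:
            return True   # states repeat: the process is stuck in a loop
    return False          # no repeat observed within the step budget

# A toy state machine that gets stuck: 0 -> 1 -> 2 -> 3 -> 2 -> 3 -> ...
stuck = {0: 1, 1: 2, 2: 3, 3: 2}
print(has_cycle(stuck.__getitem__, 0))               # True
print(has_cycle(lambda n: n + 1, 0, max_steps=100))  # False: always new states
```

Notice the limitation, which mirrors the essay's point: the detector can prove a system is looping, but it cannot tell whether an endlessly novel trajectory is progress or merely a more elaborate way of being stuck.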
Because an unloving God is not a god at all.