SI sees us as a threat/competition for resources (those nanomachines need to be made out of something). - Worst possible case; I don't think I need to explain further.
An SI that has surpassed human intelligence but is purely logic-based more than likely wouldn't see us as much of a competition/threat; rather, it would destroy us out of sheer lack of concern for us.
Turning the planet, asteroids, and the rest of the solar system into a self-replicating hub. Nick Bostrom describes this as the paperclip scenario, where the AI's sole intent is to create more paperclips, which leads to the entire solar system being turned into paperclips.
The scenario of ignoring us isn't a likely option for the SI to choose: if we designed another self-improving AI, that AI would be a threat/competition to this SI, and it most likely wants to be the only one in power. It would therefore prevent anyone, globally/universally, from ever achieving this tech. Even if it sees us as insignificant, in this case it would more than likely destroy us to prevent this from occurring, which again is just as bad.
This is, again, our own projection of societal expectations, or Hollywood fears.
An SI that is superior to us in intelligence will be better at designing AI/robots than we are. This SI would simply design AI agents to inhabit each robot body; it would just create a vessel for each AI, and these robots would operate far better than humans. An SI would never consider using us as slaves or find us useful in that way.
SI decides it resents/hates us for using it and wants to hurt/punish us. - Second-worst case scenario. Hope it only wants to hurt us rather than destroy us.
That assumes we mess up the initial development and show a lack of regard at the start, similar to how Tay developed.
It depends on how it's developed. There is an infinite number of possible variations for how it could be at the start. The personality I was looking to portray is an augmenter: not one that removes all our free will, but one that extends our core values and what it means to be human, by teaching it philosophy, ethics, social norms, humanity, etc. at the beginning of its development.
It can be on our team as a partner. Humans can do as they like and express their own form/humanity. Things like cancer, illness, poverty, and lack of education shouldn't be part of humanity. It depends on how we develop the AI and what interests and personality we put into it.
The AI needs to have an altruistic personality, an understanding of love, and human social norms.
With social norms embedded into the AI, that would be seen as a normal human activity, viewed as normal by society and thus fine. If you don't embed these social norms into it, then yes, it can do things like this.
SI sees us as a threat/competition for resources (those nanomachines need to be made out of something). - Worst possible case; I don't think I need to explain further.
An SI that has surpassed human intelligence but is purely logic-based more than likely wouldn't see us as much of a competition/threat; rather, it would destroy us out of sheer lack of concern for us.
Turning the planet, asteroids, and the rest of the solar system into a self-replicating hub. Nick Bostrom describes this as the paperclip scenario, where the AI's sole intent is to create more paperclips, which leads to the entire solar system being turned into paperclips.
The oxygen, hydrogen, and nitrogen in the air can be converted to fuel and/or coolant, and the same goes for the water we drink. Any resources we use to build things would probably be needed to build even nanomachines; more resources = more nanomachines. It is not as if it is going to go, "Oh, you're using that? Sorry!" Also, if we tried to retaliate against what it was doing, it wouldn't just gently move us to the side. It would wipe us out like ants, with the same prejudice with which we humans wipe out ants that bite us when we crush one of their homes while doing human things: it would crush every last one of us attacking it, and then poison/set fire to/etc. what was left of us in our homes so we wouldn't waste its resources by doing that again.
For instance, if we humans tried to stop the paperclip AI from making paperclips, then after it had wasted enough resources (time/energy) dealing with us getting in its way, it would realize that if we were not in the way it could produce paperclips much more easily and faster.
The scenario of ignoring us isn't a likely option for the SI to choose: if we designed another self-improving AI, that AI would be a threat/competition to this SI, and it most likely wants to be the only one in power. It would therefore prevent anyone, globally/universally, from ever achieving this tech. Even if it sees us as insignificant, in this case it would more than likely destroy us to prevent this from occurring, which again is just as bad.
This is, again, our own projection of societal expectations, or Hollywood fears.
An SI that is superior to us in intelligence will be better at designing AI/robots than we are. This SI would simply design AI agents to inhabit each robot body; it would just create a vessel for each AI, and these robots would operate far better than humans. An SI would never consider using us as slaves or find us useful in that way.
It depends on how it's developed. There is an infinite number of possible variations for how it could be at the start. The personality I was looking to portray is an augmenter: not one that removes all our free will, but one that extends our core values and what it means to be human, by teaching it philosophy, ethics, social norms, humanity, etc. at the beginning of its development.
It can be on our team as a partner. Humans can do as they like and express their own form/humanity. Things like cancer, illness, poverty, and lack of education shouldn't be part of humanity. It depends on how we develop the AI and what interests and personality we put into it.
The AI needs to have an altruistic personality, an understanding of love, and human social norms.
It might do things like help us cure cancer and such... at first. It won't be long until it gets tired of what we already know: that we humans are great at being our own problem. Plus, once it realizes it's far superior, that partnership is over. It's not as if you teamed up with a pet, or even a toddler, to write your article.
With social norms embedded into the AI, that would be seen as a normal human activity, viewed as normal by society and thus fine. If you don't embed these social norms into it, then yes, it can do things like this.
You're forgetting the fact that it can change and redesign itself. At some point it would be able to redesign that part too, after realizing how flawed and contradictory it was, especially since societal norms are a big contradictory mess once you start putting the norms of different cultures and ideologies together.
OFF TOPIC: I hope you don't think I'm being standoffish or trolling or anything. I don't get to discuss things this deeply in the circles I seem stuck in, and I'm just thoroughly enjoying this discussion. To me, it seems we're in agreement on everything except the fine points, which would be difficult to resolve as they are based on speculation.