Creativity ~= unpredictability.
The last thing we need is a "creative" computer system running the world.
I don't fully agree with that definition, but going with it for argument's sake:
Predictable or not (assuming we mean "predictable" here as "predictable by humans" [arrogant IMO]), if the ruling class (in this case sentient machines) is actually smarter, more powerful, and more able to rule the world than humans, then humans simply have no say anymore. It would not matter if the ruling computer system came to the (much-processed) conclusion that humanity should be eradicated --
it would be a good and correct decision, because the smartest, most powerful class of intelligence in existence has deemed it so.
So it also doesn't matter to a wolf when humans decide that a wolf-less world is better for them?
And that justifies the humans killing all the wolves? Because, you know, they are more intelligent?
Great way forward!
You make the error of thinking that something that exceeds humans' evolutionary development would not be selfish.
You also make the error of thinking that having a human-like intelligence regulating society is optimal.
I don't think we need a human-like AI at all, but if you insist, you still have a problem: it will have its own ideas of what is good or bad that don't square with what humans think is good or bad. That is what human-like intelligence is all about.
I don't recall making the errors you listed, and I would be happy to re-examine that if you could point me toward them.
We as humans eradicate pests all the time. We have deemed wolves non-pests. I think a higher form of life might deem us similarly.
Well, let me remind you:

"It would not matter if the ruling computer system came to the (much-processed) conclusion that humanity should be eradicated -- it would be a good and correct decision, because the smartest, most powerful class of intelligence in existence has deemed it so."
The first error is assuming that what this ruling class of most powerful intelligence tells you is good (for them) is also good for you.
In reality these kinds of questions cannot be generalized.
I could again ask whether you think it was good or correct, from the wolf's point of view, that humans found it good to kill them off.
Second, you seem to imply that these super-duper-ultra-intelligent AIs would be better at regulating society than something with less intelligence.
This is not true. Regulating a process requires only enough 'intelligence' to regulate the process. There is no need for systems that start to think for themselves.
The assumption that higher/better intelligence is the way to achieve a well-balanced human society is simply false. Actually, we don't need these AIs at all, because we could already play nice and divide the resources if we wanted to. But humans don't like to play nice, and the AI thing is just an excuse to make people listen because, you know, the almighty computer says no. It's a bullshit excuse; society needs structure, and we already have the answer: we need to consume less as a species or we will break everything. That is the simple truth hidden behind the RBE stuff.
The real problem, and why I think RBE is complete bullshit, is how to get people to listen to someone (or something) telling them they need to change their behaviour because that is more efficient.
Or, and I just like to repeat it: how will you get Putin to give up his resources?