Whyfuture.com — I have written an article on artificial intelligence, technology, and the future. The key point is how to design an altruistic superintelligence.
I explained at length why I have serious doubts that we could control (in the end, it's always an issue of control) a super AI by teaching it human ethics.
Besides, a super AI would have access to all the information about it that we have put on the Internet.
We could control the flow of information to the first generation, but forget about doing so for the next ones.
It would know our suspicions, our fears, and the hatred many humans feel toward it. All of this would also fuel its negative thoughts about us.
But even if we could control the first generations, we would soon lose control of their creation, since later generations would be created by AIs themselves.
We also teach ethics to children, yet some of them turn out badly anyway.
A super AI would probably be as unpredictable to us as a human can be.
With a super AI, we (or future AIs) would only have to get it wrong once to be in serious trouble.
It would be able to replicate and change itself very quickly and assume absolute control.
(Of course, we are assuming that AIs would be willing to change themselves without limits, ending up out-evolving themselves; like us, they might have second thoughts about creating AIs superior to themselves.)
I can see no other solution than treating AI like nuclear, chemical, and biological weapons, with major safeguards and international controls.
We have been somewhat successful at controlling the spread of those weapons.
But in due time it will be much easier to create a super AI than a nuclear weapon, since one could be built without any rare materials like enriched uranium.
I wonder if the best way forward isn't to freeze the development of autonomous AI and concentrate our efforts on artificially enhancing our own minds, or on devices we can link to ourselves to increase our intelligence but that depend on us to work.
But even if international controls were created, they would probably only postpone the creation of a super AI.
In due time, one will be too easy to create. A terrorist or a doomsday sect could create one more easily than a virus or a nuclear or nanotech weapon.
So I'm not very optimistic about the issue anyway.
But, of course, the possibility that malicious people might secretly create one in 50 years shouldn't stop us from trying to avoid the danger for the next 20 or 30 years.
A real menace is at least 10 years away.
Well, most people care about themselves 10 years in the future about as much as they care about a human being on the other side of the world: a sympathetic interest, but they are not ready to do much to prevent the harm.
It's nice that a fellow bitcointalker is trying to do something.
But I'm much more pessimistic than you. For the reasons I stated in the OP, I think that teaching ethics to an AI changes little and offers not even minimal assurance.
It's something like teaching an absolute king, as a child, to be a good king.
History shows how that ended. But we wouldn't be able to chop off the head of an AI, as was done to Charles I or Louis XVI.
It would still be a jump in the dark.