We basically have little idea of how our brain works, how it creates consciousness and allows us to be intelligent; therefore, we don't have a clue how to teach or program a machine to be as intelligent as a human.
We are just building computers with massive processing power and algorithms structured in layers and connections loosely similar to our nervous system (neural networks), feeding them massive amounts of data and expecting them to learn, by trial and error, how to make sense of it (deep learning).
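Just to give a rough idea of what "layers and connections trained by trial and error" means, here is a deliberately tiny sketch in Python (my own toy illustration, with an invented task and made-up sizes, nothing like the scale of the real systems): a two-layer network that keeps nudging its connection weights until it reproduces a simple pattern.

```python
# Toy sketch: a tiny two-layer "neural network" learning the XOR pattern
# by trial and error, i.e. by repeatedly adjusting its weights to reduce error.
import numpy as np

rng = np.random.default_rng(0)

# Inputs and desired outputs (the XOR truth table).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: the "connections" between the neurons.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    # Forward pass: each layer transforms the output of the previous one.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # How far the network's answer is from what we wanted.
    error = output - y

    # Backward pass (backpropagation): nudge every weight in the
    # direction that reduces the error a little.
    grad_output = error * output * (1 - output)
    grad_W2 = hidden.T @ grad_output
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    grad_W1 = X.T @ grad_hidden

    W2 -= learning_rate * grad_W2
    W1 -= learning_rate * grad_W1

# After training, the outputs should be close to [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```

No one tells the network how XOR works; it only ever sees examples and its own mistakes, which is the same basic principle, scaled up enormously, behind deep learning.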
However, while AlphaGo learned to play Go with human assistance and data, AlphaGo Zero learned completely by itself, from scratch, with no human data, through so-called reinforcement learning (
https://www.nature.com/articles/nature24270.epdf), by playing countless games against itself. It ended up beating AlphaGo.
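To make "learning from scratch by playing against itself" a bit more concrete, here is another toy sketch in Python (again my own illustration, not DeepMind's algorithm; the game, the pile size and the parameters are just assumptions): an agent learns the simple game of Nim purely from self-play, with no human examples.

```python
# Toy self-play reinforcement learning: a pile of stones, each player removes
# 1-3 stones, whoever takes the last stone wins. The agent plays both sides.
import random

PILE = 15                 # starting pile size (assumed for illustration)
ACTIONS = (1, 2, 3)       # how many stones a player may remove
Q = {(s, a): 0.0 for s in range(1, PILE + 1) for a in ACTIONS if a <= s}

alpha, epsilon = 0.1, 0.2  # learning rate and exploration rate

def best_value(state):
    """Best value the player to move can expect from this pile size."""
    return max(Q[(state, a)] for a in ACTIONS if a <= state)

def choose(state):
    """Mostly pick the best known move, sometimes try something else."""
    legal = [a for a in ACTIONS if a <= state]
    if random.random() < epsilon:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(state, a)])

for episode in range(50_000):
    state = PILE
    while state > 0:
        action = choose(state)
        next_state = state - action
        if next_state == 0:
            target = 1.0                      # took the last stone: win
        else:
            target = -best_value(next_state)  # zero-sum: opponent's gain is my loss
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = next_state                    # the other "self" moves next

# For winnable piles (not a multiple of 4), the learned move leaves the
# opponent a multiple of 4 stones, which is the known optimal strategy.
print({s: max((a for a in ACTIONS if a <= s), key=lambda a: Q[(s, a)])
       for s in range(1, PILE + 1)})
```

Nothing here resembles the scale or sophistication of AlphaGo Zero, but the loop is the same in spirit: play against yourself, see who wins, and adjust your evaluations from the result alone.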
Moreover, the same algorithm, AlphaZero, taught itself chess in 4 hours and then beat the best chess engine, Stockfish, 28 to 0 with 72 draws, using less computing power than Stockfish.
A grandmaster, seeing how these AIs play chess, said that "they play like gods".
Then, it did the same thing with the game Shogi (
https://en.wikipedia.org/wiki/AlphaZero).
Yes, AlphaZero is more or less a general AI, ready to learn by itself anything with clear rules and then beat every one of us.
So, since no one knows how to teach machines to be intelligent, the goal is to create algorithms that will figure out, by trial and error, how to develop a general intelligence comparable to ours.
If a computer succeeds and becomes really intelligent, we most probably won't know how it did it, what its real capacities are, how we can control it, or what we can expect from it.
("even their developers aren’t sure exactly how they work":
http://www.sciencemag.org/news/2017/03/brainlike-computers-are-black-box-scientists-are-finally-peering-inside).
All of this is being done by greedy corporations and some optimistic programmers, trying to make a name for themselves.
This seems a recipe for disaster.
Perhaps we might be able to figure out afterwards how they did it, and learn a lot about ourselves and about intelligence from them.
But in the meantime we might have a problem with them.
AI development should be overseen by an independent public body (as argued by Musk recently:
https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html) and internationally regulated.
One of the first regulations should address deep learning and self-learning computers, not necessarily for specific tasks, but for general intelligence, including language and abstract reasoning.
And, sorry, but forget about open source AI. In the wrong hands, this could be used with very nasty consequences (see this 7-minute video:
https://www.youtube.com/watch?v=HipTO_7mUOw).
I had hoped that a general human-level AI couldn't be created without a new generation of hardware. But AlphaZero can run on less powerful computers (a single machine with four TPUs), since it doesn't have to check 80 million positions per second, as Stockfish does, but just 80 thousand.
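In rough numbers (using the figures reported for that match):

```python
# Rough arithmetic on the reported search rates, for illustration only.
stockfish_positions_per_second = 80_000_000
alphazero_positions_per_second = 80_000
print(stockfish_positions_per_second / alphazero_positions_per_second)  # 1000.0
```

A thousand times fewer positions searched per second, compensated by a much better evaluation of each position, which is why the hardware requirements drop so sharply.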
Since our brain uses much of its capacity running basic things (like the beating of our heart, the flow of blood, the work of our organs, the control of our movements, etc.) that an AI won't need, perhaps current supercomputers already have enough capacity to run a super AI.
If this is the case, the whole matter depends solely on software.
And, at the current pace of AI development, there probably won't be time to adopt any international regulations, since that normally takes at least 10 years.
Without international regulations, governments won't stop or really slow down AI development, for fear of being left behind on this decisive technology.
Therefore, it seems that a general AI comparable to humans (and, being much faster, therefore much better) is inevitable in the short term, perhaps in less than 10 years.
The step to a super AI will be taken shortly after that, and we won't have any control over it.
"I met with Michael Page, the Policy and Ethics Advisor at OpenAI. (...) He responded that his job is to “look at the long-term policy implications of advanced AI.” (...) I asked Page what that means (...) “I’m still trying to figure that out.” (...) “I want to figure out what can we do today, if anything. It could be that the future is so uncertain there’s nothing we can do.”" (https://futurism.com/openai-safe-ai-michael-page/)