Robots will never replace humans, because there are problems with AI even at the theoretical level. Of course, I don't mean the AI we already have in some robots, but AI that would work the way a human mind does. The problem I'm talking about can be understood via the "Chinese room" thought experiment. For those who aren't familiar with it, here is a brief description:
"suppose that artificial intelligence takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.But does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese?
Now imagine the same done by a human being who sits in a closed room with a book containing an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. This person could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer passed the Turing test this way, it follows that the person would pass it as well, simply by running the program manually.
However, this person would not be able to understand the conversation. Therefore, the computer would not be able to understand the conversation either.
Without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and since it does not think, it does not have a "mind" in anything like the normal sense of the word."
I believe that thought experiment has a few problems. Two come to mind. First, the fact that such a system doesn't have a mind as we understand it, or doesn't think the way we do, doesn't mean much: it points to a problem with our understanding of those things, or with the definitions we use to describe them, not to a limit on building systems with general intelligence. Second, the experiment is effectively claiming that no set of instructions or actions can produce a mind. But we do have minds, so this amounts to saying that our minds come from somewhere other than the functioning of our bodies. That's fine if you believe in gods or something similar, but it's not very useful otherwise.