AI = Artificial Intelligence. Meaning it is fake intelligence.
Meaning, real intelligence can still outsmart it. Thus humans will control machines for a long time: up until they are self-replicating, self-repairing, self-sustaining, and self-aware.
We already have rudimentary self-replicating algorithms. They're also self-repairing/self-amending, which is why algorithmic trading with machine learning works so well compared to traditional, commercially available trading bots. Considering that these algorithms automatically adjust as new data rolls in, you could already say they are self-aware (of their relevant metrics, at least) as well. The main reason these algorithms aren't shitting on human intelligence is a lack of sufficiently intertwined networks, but we're slowly getting there.
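To make the "adjusts as new data rolls in" point concrete, here's a minimal online-learning sketch using scikit-learn's SGDRegressor. The features, targets, and "stream" are entirely synthetic stand-ins for illustration, not a real trading setup:

```python
# Minimal sketch of online learning: the model updates itself as each
# new batch of data arrives, instead of being retrained from scratch.
# The data here is synthetic; a real trading setup involves far more.
import numpy as np
from sklearn.linear_model import SGDRegressor

model = SGDRegressor(learning_rate="constant", eta0=0.01)

rng = np.random.default_rng(42)
for step in range(1000):
    # Fake features (e.g. recent returns) and a fake target (next return).
    X = rng.normal(size=(32, 4))
    y = X @ np.array([0.5, -0.2, 0.1, 0.0]) + rng.normal(scale=0.1, size=32)

    # partial_fit nudges the weights with just this batch -- the
    # "self-amending" behaviour described above, nothing more magical.
    model.partial_fit(X, y)
```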
As far as "outsmarting" these algorithms is concerned, that's only partially true. We can screw with AI algorithms by analyzing precisely how they assimilate data (which doesn't work with every algorithm, because it's sometimes impossible to tell what's going on inside; many are effectively black boxes). But at the specific tasks they were built for, they are usually better than humans by a large margin. As mentioned earlier, the only reason they aren't capable of solving general tasks is that they're currently specialized algorithms for specific tasks. Once we can efficiently connect many such specialized "modules" together, humans will look hopelessly outclassed compared to AI. Which raises the question of whether we want to merge, or inevitably have AI as our near-literal God.
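That "analyzing precisely how they assimilate data" attack has a well-known concrete form: the fast gradient sign method (FGSM) from the adversarial-examples literature. A minimal sketch with a toy PyTorch model; the model, input, and label are stand-ins, not any particular system:

```python
# Sketch of the fast gradient sign method (FGSM): perturb an input in
# the direction that increases the model's loss, exploiting knowledge
# of how the model processes its input. Toy model and input only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)  # stand-in input
label = torch.tensor([0])                   # its true class

loss = loss_fn(model(x), label)
loss.backward()

# One FGSM step: a tiny nudge per dimension, aimed precisely where it
# hurts the model most. This is why white-box access matters -- and why
# black-box models are harder (not impossible) to fool this way.
epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()
```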
AI = Artificial Intelligence. Meaning it is fake intelligence.
Meaning, real intelligence can still outsmart it. Thus humans will *continue to* control machines for a long time: up until they are self-replicating, self-repairing, self-sustaining, and self-aware.
FTFY.
Machines do what they are asked to do.
Some machine learning algorithms (such as artificial neural networks) are based on the principle of biomimicry, but the result is not even close to the real thing.
Biomimicry implies two things:
- Our understanding of living systems is still very limited. Hence genetic algorithms, artificial neural networks, etc. are by essence incomplete. You can look at it this way: an artificial neural network is maybe 10% nature-inspired (or 20%, it is very subjective here) and the rest pure engineering (see the sketch after this list).
- Even with a full understanding of natural processes, the algorithms that come out of our attempts to mimic them wouldn't copy 100% of reality, because, you know, organic cells (or organic processes in general) > transistors. Maybe this will change with a major disruption in technology, for example a fully functional quantum computer.
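To make that "mostly engineering" point concrete: the entire "biological" content of an artificial neuron is a weighted sum plus a threshold-like nonlinearity; everything else (initialization, learning rates, layer sizes) is engineering. A minimal sketch:

```python
# An entire "artificial neuron": weighted sum + nonlinearity.
# This is roughly all that survives of the biological inspiration;
# real neurons have spike timing, neurotransmitters, dendritic
# computation, and much more that this abstraction throws away.
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    activation = np.dot(weights, inputs) + bias   # "synaptic" weighting
    return max(0.0, activation)                   # ReLU "firing"

print(neuron(np.array([0.5, -1.0, 2.0]), np.array([0.1, 0.4, 0.3]), 0.05))
```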
For now, biomimicry only gives us some general paradigms that emerged naturally. And it works because Nature is just damn well made. But as long as we don't understand the true meaning and processes of a thought, we'll never have a sentient machine. Sci-fi stories where some creator more or less randomly creates true (artificial) intelligence are pure fantasy.
I would argue the contrary, in continuation of my ramblings above. We don't need to mimic the human brain perfectly. Humans simply don't compare to specific-task algorithms. All it takes to trump us is connecting modules together while allowing the algorithm itself to create new ones. And as I've mentioned, the first algorithms that write code already exist. So either higher computational power or more efficient algorithms should be enough, and in reality both of these will arrive over time.
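In its most naive form, "connecting specialized modules" is just routing: some controller decides which specialist handles a task. The sketch below is a made-up toy to show the compositional idea, not any existing system:

```python
# Toy illustration of "connecting specialized modules": a dispatcher
# routes each task to a narrow specialist. Real systems (mixture-of-
# experts, tool-using agents) are far more involved; this only shows
# the compositional idea.
from typing import Callable

def arithmetic_module(task: str) -> str:
    return str(eval(task))  # fine for a toy; never eval untrusted input

def reverse_module(task: str) -> str:
    return task[::-1]

MODULES: dict[str, Callable[[str], str]] = {
    "math": arithmetic_module,
    "reverse": reverse_module,
}

def dispatch(kind: str, task: str) -> str:
    # The "intelligence" here is hardcoded routing; the hard part the
    # post alludes to is learning the routing and growing new modules.
    return MODULES[kind](task)

print(dispatch("math", "2 + 3 * 4"))   # -> 14
print(dispatch("reverse", "modules"))  # -> seludom
```

The open problem this gestures at is exactly the part the toy hardcodes: learning that routing table and letting the system add entries to it on its own.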
We didn't copy birds or fish to fly through the skies or swim through the oceans, and we're significantly better at both of those tasks. There is no compelling reason to create a carbon copy of the human brain for this specific purpose.