The problem is that we often confuse learning systems and neural networks with artificial intelligence. These are completely different concepts and systems, with different goals.
ChatGPT can hardly be called artificial intelligence or a "matrix."
In fact, it is a huge repository of information that can find relevant answers among trillions of blocks of data and formulate responses that resemble human speech: a sort of Google search engine wrapped in a human-like interface for presenting search results.
Yes, neural networks and other technologies are at work here, collecting and systematizing the underlying information. But this system has no INTELLIGENCE.
I hope you have already seen this masterpiece: "The neural network was asked to draw a salmon swimming against the current. Well, it drew one."
In ChatGPT, the picture is much better, but intelligence never appeared there either. Since one of the fields I work in is IT (software development), I, like everyone else, have seen how ChatGPT "produces perfect ready-made code." I tried giving it a task, something along the lines of "Write code so that when you click something, something opens somewhere." The system began asking clarifying questions so that the problem statement would be described as precisely as possible, in order to understand which "code from the knowledge base to take." Without a clear problem statement, you will not get any code.

More precisely, wherever there is room for variability and you have to work out the essence of the process yourself, ChatGPT will not help you. So if we talk about "replacement": perhaps it will replace some novice developers, who on simple tasks are essentially "translators" from human language into a programming language. Although I agree that as the system learns and its knowledge base grows, the level of "replacement" may rise.
That's one of the funniest things I've seen in a while.
I semi-agree with your statement about "intelligence": it's a complicated term, and it doesn't map fully onto what we consider intelligent in a human.
At many levels, however, it's already impossible to distinguish an AI's creation from a human's, be that images, text, or any other sort of "creative" output.
With further adjustments, added filters, and improved databases, I think the gap between this and something that feels identical to interacting with a human will become smaller and smaller and finally vanish, unless conscious efforts are made to preserve some sort of difference.
It doesn't matter that the way the output was reached has nothing to do with the way a human produces theirs. If you can no longer see the difference in the result, this will have massive impacts on human society. There's no way around it.