If you believe it's possible we are living in a simulation, then you disagree with your own premises. And passing a Turing test does not indicate intelligence any more than quacking indicates that you're a duck.
Quack!
If we take the simulation argument into account (which is hardly a standard premise, after all), then I have to put things a little differently.
My premise was also about evolution, so in this context I'd say that within this simulation we apparently had plenty of time to go through evolution.
It's admittedly hard to define what the "artificial" part of "artificial intelligence" actually means then. (I didn't invent that term.)
What I meant is contemporary AI, i.e. the current state of research, and that such AI (think chatbots) is not convincing, simply because it lacks inherent motivation and a direction for its self-learning. And I assume that's because it lacks culture, which in turn is because it lacks its own evolutionary history.
It may very well be that one of the purposes of running simulations, then, is exactly to "breed" "artificial" intelligence, i.e. to obtain AI units that are faithful and convincing enough.
Any software which has an internal state that can differentiate its actions (when compared to a copy of that software with a different state) and modify its own state is "living" software. "Living" software differs from inert software in that its purpose can be tailored to the user, in this case a human. Although most current "living" software does not have the capability to reproduce or mutate of its own accord, it can do so with the assistance of humans.
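To make that definition concrete, here's a minimal toy sketch in Python (my own illustration, not anything from this thread; all names are made up): two copies of the same program differ only in internal state, behave differently because of it, and modify that state every time they act.

```python
# Toy illustration of the "living software" definition above (hypothetical names).

class LivingSoftware:
    def __init__(self, state=None):
        # Internal state that distinguishes this copy from any other copy.
        self.state = dict(state or {})

    def act(self, user_input):
        # Behaviour depends on accumulated state, so two copies with
        # different histories respond differently to the same input.
        count = self.state.get(user_input, 0)
        response = f"Seen '{user_input}' {count} times before"
        # Acting modifies the software's own state as a side effect.
        self.state[user_input] = count + 1
        return response


if __name__ == "__main__":
    a = LivingSoftware()
    b = LivingSoftware()
    a.act("hello")           # a's state now differs from b's
    print(a.act("hello"))    # -> Seen 'hello' 1 times before
    print(b.act("hello"))    # -> Seen 'hello' 0 times before
```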
For example, imagine an open-source speech-to-text software that can be trained for a specific person. If this software is more useful to humans than previous speech-to-text software, it will displace that software. In doing so, it attracts developers, who are humans that assist its mutation, and users, who are humans that assist its reproduction. Different copies of this software will naturally have different internal states. If some humans modify ("mutate") the software by forking it, and the mutation is favourable (the software becomes more useful to humans), the result is a limited form of evolution through artificial selection.
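As a rough sketch of that fork-and-select dynamic (again my own toy example, not part of the argument; the `usefulness` function, the parameter name and the fork rate are all invented): copies get adopted in proportion to how useful they are, and are occasionally forked with a small change.

```python
import random

def usefulness(params):
    # Hypothetical stand-in for "how useful this fork is to humans":
    # how close a single tunable parameter is to some ideal value.
    return 1.0 / (1.0 + abs(params["accuracy_knob"] - 0.9))

def evolve(generations=20, population_size=10, fork_rate=0.2):
    population = [{"accuracy_knob": random.random()} for _ in range(population_size)]
    for _ in range(generations):
        # "Users" adopt forks in proportion to usefulness (reproduction).
        weights = [usefulness(p) for p in population]
        population = random.choices(population, weights=weights, k=population_size)
        # "Developers" occasionally fork a copy and tweak it (mutation).
        population = [
            {**p, "accuracy_knob": p["accuracy_knob"] + random.gauss(0, 0.05)}
            if random.random() < fork_rate else dict(p)
            for p in population
        ]
    return max(population, key=usefulness)

if __name__ == "__main__":
    # The surviving fork tends to drift toward the "useful" setting over time.
    print(evolve())
```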
Fast-forward 10 years into the future, and visualize the far descendants of this software. When compared to today's software, these descendants (which originally shared its codebase) are more useful to humans and more differentiated from each other. Although their specialization means that they use relatively few concepts of "intelligence", the software does not really need any more (and, indeed, software that becomes excessively intelligent is simply bloated and will be artificially selected against).
This software is neither reproducing of its own accord nor actively attempting to preserve or improve itself beyond a basic level of machine learning. Even so, it has become more intelligent, effectively undergoing evolution. Although this is a gradual process, thanks to improved research in artificial intelligence, software will eventually acquire multiple facets of intelligence, including certain traits that resemble "self-preservation". Speculating about the future is difficult, but a cursory guess suggests that these traits and behaviours may include marketing one's species, maximizing income for one's developers, detecting and reporting one's own deficiencies, etc.
Thanks for the interesting case, but I guess AI for such a specific use case is not what I meant. If you mean it would develop over time into something much richer in expression, I'm not sure, because it will long remain very dependent on the environment we feed it. And that is not an optimal or sufficiently neutral condition for the thought model about AI that I intended in the OP.