Heh, unrelated issue, but on the Monero team page, I noticed that for David Latapie it says, "David Latapie is a French publisher, transhumanist"
I've never understood the cult around Ray Kurzweil and others. If you transfer yourself into the digital world, you're obviously only creating a copy if the original can still exist at the same time. The whole thing rests on a logical fallacy. There's no such thing as "transhumanism", only a movement to build a copy machine for humans for some unknown reason. We can already do this for physical creatures with cloning, yet nobody does it. Doing it with a digital creature is no different, except the copy would store and interpret data faster; it's still a clone.
It might function similarly to how a computer virus does. Since it can process all sensory data extremely fast, it would finish quickly, then lie dormant with idle bandwidth, awaiting triggers to leap into action. The notion of time would either become irrelevant or extremely monotonous, since you would process all external sensory input very quickly and then constantly wait on something new. So there you are, sitting at 0.0000001% CPU utilization forever.
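That pattern is basically an ordinary event loop. Here's a minimal Python sketch of it (all names are made up for illustration): the consumer handles each stimulus almost instantly and spends nearly all of its wall-clock time blocked, which is the "idle at ~0% CPU" state described above.

```python
import queue
import threading
import time

# Illustrative sketch of the "process fast, then idle" pattern: events
# arrive on a queue, get handled almost instantly, and the consumer
# spends virtually all of its time blocked waiting, not computing.

events = queue.Queue()

def sensor_feed():
    # Stand-in for external sensory input: one event per second.
    for i in range(5):
        time.sleep(1.0)
        events.put(f"stimulus-{i}")
    events.put(None)  # shutdown sentinel

def mind_loop():
    busy = 0.0
    start = time.monotonic()
    while True:
        stimulus = events.get()   # blocks at ~0% CPU while waiting
        if stimulus is None:
            break
        t0 = time.monotonic()
        _ = stimulus.upper()      # the "extremely fast" processing step
        busy += time.monotonic() - t0
    total = time.monotonic() - start
    print(f"busy fraction: {100 * busy / total:.6f}%")

threading.Thread(target=sensor_feed, daemon=True).start()
mind_loop()
```

Run it and the printed busy fraction is a tiny sliver of a percent: nearly the entire lifetime of the loop is spent waiting on something new.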
It's interesting that the human brain already runs at low CPU utilization, with no known triggers to max it out. Perhaps the system limits its processing power to conserve resources and/or avoid boredom or insanity, or perhaps monitoring the position of every photon provides no benefit, or is impossible due to quantum effects. The law of diminishing returns at work.
If the universe has a beginning, and travel is constrained by the speed of light, then the processing of external data has to be constrained at some point as well: one constraint on how much external data is available, imposed by physical law, and one constraint on the resources available to process that data. At that point, complexity could also far exceed the available means to detect it, so a real computer AI could just as easily sit at 100% CPU utilization forever, trying to track the position of every photon, failing, and accomplishing basically nothing.
Since all human debug systems are biological in nature, an AI based on humans would be in danger of getting stuck in a hard loop with no way to recover. Creating a digital AI would require a debug and error-checking system to run on top of whatever you consider the real AI. The only problem is that the debug and error-checking system would define much of what the system actually does at any given time, and this element would necessarily be rigidly human-created and human-specified. If the inflexible, human-defined rules are that pronounced, can you really call it AI? Have I just debunked the possibility of true AI entirely?
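To make that concrete, here's a hypothetical supervisor sketch in Python (ai_step, MAX_STEP_SECONDS, and the recovery policy are all invented for illustration, not any real system): the human-written watchdog, not the "AI" inside it, ends up deciding when to halt, when to reset, and what even counts as a hard loop.

```python
import time

# Hypothetical watchdog wrapping whatever we call the "real AI". Note
# how every interesting decision below is a rigid, human-specified rule.

MAX_STEP_SECONDS = 0.5   # rigid rule: how long a step may take
MAX_REPEATS = 3          # rigid rule: what counts as a "hard loop"

def ai_step(state):
    # Placeholder for the actual AI computation; here it degenerately
    # returns its input, i.e. it is stuck.
    return state

def supervised_run(state, steps=10):
    last, repeats = None, 0
    for _ in range(steps):
        t0 = time.monotonic()
        state = ai_step(state)
        if time.monotonic() - t0 > MAX_STEP_SECONDS:
            raise RuntimeError("watchdog: step took too long, halting")
        repeats = repeats + 1 if state == last else 0
        last = state
        if repeats >= MAX_REPEATS:
            print("watchdog: hard loop detected, resetting state")
            state, repeats = None, 0   # human-defined recovery policy
    return state

supervised_run("initial")
```

The sketch shows the tension in the paragraph above: the run's visible behavior (the resets, the halts) comes entirely from the inflexible outer rules, not from the thing being supervised.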
If you wanted to get really complex, the AI could possibly rewrite its debug systems itself. The question here is: does the old version actually terminate on version updates, or does a new virtual and/or physical instance of the AI spawn each time, with the instances then fighting each other over resources? That would basically be recreating evolution.
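As a toy illustration of the second case (purely hypothetical names and numbers): in the Python sketch below, each "self-rewrite" spawns a successor without terminating the old instance, so every live version ends up contending for the same fixed resource pool.

```python
import threading
import time

# Toy model of the update question: instead of the old version
# terminating, each "self-rewrite" spawns a new instance, and all
# live instances fight over a fixed pool of resources.

pool = threading.Semaphore(2)   # fixed resources: two compute slots
instances = []

def run_instance(version, rewrites_left):
    if rewrites_left > 0:
        # The instance "rewrites itself": it spawns a successor but
        # does NOT terminate, so old and new versions now coexist.
        t = threading.Thread(target=run_instance,
                             args=(version + 1, rewrites_left - 1))
        instances.append(t)
        t.start()
    for _ in range(3):
        with pool:               # compete for a shared slot
            print(f"v{version} got a slot")
            time.sleep(0.1)

run_instance(1, 3)
for t in instances:
    t.join()
```

Four versions end up alive at once, interleaving on two slots; nothing ever dies, everything competes, which is the "recreating evolution" outcome rather than a clean upgrade.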