https://techcrunch.com/2024/12/10/google-says-its-new-quantum-chip-indicates-that-multiple-universes-exist/
and
https://blog.google/technology/research/google-willow-quantum-chip/
The logic of this is difficult for me to follow: they claim that the fact that their quantum computer is so much faster than a regular computer (it does something in 5 minutes that a "regular" supercomputer would need 10^25 years for, which is roughly a million billion times longer than our Universe has existed) somehow indicates that the quantum computer uses parallel universes to do the calculations. This just does not seem right to me.
However, I can imagine the quantum computer "spawning" those universes, maybe virtually, to "help" its calculations.
Otherwise, it is not clear to me how it managed to link up with those preexisting universes, as there is nothing there that one might consider an interface.
I read the D. Deutsch book, but the argument there was just as unclear. Perhaps I am just not getting it.
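(Aside: a quick sanity check of the "million billion" ratio above; this is my own arithmetic, assuming the standard ~13.8-billion-year age estimate.)

```python
claimed_classical_years = 1e25   # Google's figure for the classical runtime
age_of_universe_years = 13.8e9   # standard Lambda-CDM estimate

print(f"{claimed_classical_years / age_of_universe_years:.1e}")
# ~7.2e14, i.e. roughly a "million billion" times the universe's age
```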
One could make an algorithm arbitrarily slow, so that it would take several times the age of our universe (itself an estimate based on our current understanding) to compute. Think of using an abacus to solve a complex problem. The fact that a modern computer (quantum or not) can solve the same problem vastly faster does not mean it's somehow using parallel universes to solve it.
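To make the abacus point concrete, here is a toy sketch of my own (nothing to do with Google's benchmark): two programs computing the same Fibonacci number, one exponentially slower than the other. The slow one isn't tapping into other universes; it's just a worse algorithm on the same hardware.

```python
from functools import lru_cache

def fib_slow(n):
    """Naive recursion: ~2^n calls. An 'abacus-grade' algorithm."""
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    """Memoized recursion: O(n) calls. Same answer, far fewer steps."""
    if n < 2:
        return n
    return fib_fast(n - 1) + fib_fast(n - 2)

# Identical results; the speed gap is pure algorithm, not exotic physics.
assert fib_slow(25) == fib_fast(25) == 75025
```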
I'm not buying it.
I'm pretty sure you are familiar with this, Biodom. And probably AlcoHoDL, too. But I'm laying this down for those who might not be.
The speed or slowness of an algorithm does not depend on the hardware used to run its implementation (i.e., abacus vs. fast CPU). "Time complexity" - that's what it's called - is measured as the number of steps the algorithm must go through as a function of the size of its input. The same number of steps performed by an abacus or by a CPU takes different amounts of time, of course, but what's relevant is the rate of growth of the number of steps as the size of the input grows. That's why doubling a key's length does not double the number of steps needed to brute-force it, but squares it: exhaustive search takes ~2^n steps for an n-bit key, and 2^(2n) = (2^n)^2. Purely for illustration: if it took 1,000 steps to break a 100-bit key, a 200-bit key would take 1,000,000. Bring the key length to, say, 1000 bits and you're set for a few decades. Or centuries.
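A minimal sketch of that growth (the absolute numbers are illustrative; only the growth rate matters):

```python
# Exhaustive search over n-bit keys takes ~2^n steps, so doubling the
# key length squares the step count: 2^(2n) == (2^n)^2.
for bits in (100, 200, 1000):
    print(f"{bits}-bit key: ~{2 ** bits:.3e} steps")

assert 2 ** 200 == (2 ** 100) ** 2  # doubling the length squares the work
```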
Quantum computing's breakthrough is not about a device faster than an abacus, or faster than the best CPU. It is about a new paradigm of computation that introduces different algorithms with smaller time complexity. For example, if all possible 100-bit keys could be tested in just one operation, that would be constant time, and this could be cleverly exploited to speed up the brute-forcing of longer keys by suitable grouping or whatnot.
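For a real, provable instance of that kind of complexity drop (my example, not something Google's announcement claims): Grover's quantum search algorithm finds a key among N = 2^n candidates in on the order of sqrt(N) = 2^(n/2) steps, versus ~2^n classically. A rough comparison of the step counts:

```python
# Classical exhaustive search over n-bit keys: ~2^n steps.
# Grover's algorithm (quantum search): ~2^(n/2) steps.
for bits in (100, 200):
    classical = 2 ** bits
    grover = 2 ** (bits // 2)
    print(f"{bits}-bit key: classical ~{classical:.1e} steps, "
          f"Grover ~{grover:.1e} steps")
```

Note that even Grover's quadratic speedup "only" halves the effective key length, which is why doubling key sizes is the standard defensive advice against it.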
That's why I called out to Google, hoping they publish a paper with statements of:
1. The problem to be solved
2. The classical (non quantum) algorithm
3. The quantum algorithm
If/when they do, the scientific community will be able to evaluate the complexity speedup. Until that day, it's hype and uninformed journalists - people who say "exponentially" thinking it means "much".
Thanks, d_eddie, for that reply. I'm well aware of your point, but I don't think the Google scientist frames it that way; or maybe he tries to, but fails. If anything, he seems to do the exact opposite, i.e., he seems to equate the quantum paradigm with the conventional computing paradigm. He says that a conventional computer would take 10 septillion years, which is longer than the age of the universe, to do the same task their quantum computer does in 5 minutes, and that therefore the quantum computer must somehow use multiple universes (i.e., a multitude of conventional computers) to manage to do it so fast. His argument collapses the moment he compares conventional CPU time with the age of the universe.
Thought experiment: imagine a time far into the future, when an ultra-fast but still conventional supercomputer (essentially an ultra-fast abacus) has been developed that completes the given task in 26 billion years. That is just under 26.7 billion years, one recent estimate of the age of our universe. Does this mean that the same old quantum computer that was using multiple universes in 2024 has now suddenly switched to using only one?
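Putting rough numbers on that thought experiment (my own arithmetic, reusing the figures from this thread): both machines are astronomically slower than the 5-minute quantum run; only one of them happens to cross the "age of the universe" line, and nothing about the quantum chip changes between the two scenarios.

```python
MINUTES_PER_YEAR = 60 * 24 * 365.25   # ~5.26e5 minutes in a year

QC_MINUTES = 5.0
for label, years in [("2024 supercomputer", 1e25),
                     ("future 'abacus'", 26e9)]:
    ratio = years * MINUTES_PER_YEAR / QC_MINUTES
    print(f"{label}: quantum chip is ~{ratio:.0e} times faster")
# ~1e30x vs ~3e15x: both absurdly large, yet only the first exceeds
# the age of the universe. Nothing about the quantum chip changed.
```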
Quantum computers are exciting in that they can simultaneously evaluate many parameters of a problem, and this may well mean that they are indeed using multiple universes to achieve it (far-fetched as that may sound), but Google's argument that "conventional computers take too long, therefore our QC uses multiple universes" does not make logical sense (to me, at least).
I hope the above explanation is clear enough.
Not to me, sorry.
Do they mean that once you switch it on, it (or just the qubits in it) basically 'floats' in multiple universes, somehow?
Additionally, since different universes are supposed to have different properties, how would that come into play?
I could understand it better if they were to say that it "creates" multiple "mathematical" virtual universes (or computational continuums), but all this talk about universes makes me cringe.
Bitcoin is pulling up because it went to near infinity in multiple universes... we're just overcoming the resistance in this one.