
Topic: IOTA - page 732. (Read 1473233 times)

hero member
Activity: 714
Merit: 500
November 16, 2015, 05:36:40 PM

Scott also writes regularly about DWave and their snake-oil version of a quantum computer that your picture alludes to. See http://www.scottaaronson.com/blog/?p=2448


Scott Aaronson is a champion of scalable quantum computers: http://spectrum.ieee.org/tech-talk/computing/hardware/why-im-wagering-100000-on-quantum-computing

Not sure why you bring up D-Wave; everyone knows that they are doing quantum annealing, not proper quantum computation. None of this suggests we should not take a physical theory seriously. That's what this really boils down to: engineering challenges. The theory of quantum mechanics is crystal clear on this topic.
legendary
Activity: 2142
Merit: 1009
Newbie
November 16, 2015, 05:11:47 PM
You are rather confused about the abilities of quantum computers.
A 2^18 increase in sequential computation is also a 2^18 increase in quantum runtime.
Please read http://www.cs.virginia.edu/~robins/The_Limits_of_Quantum_Computers.pdf
to understand what quantum computers can and cannot do.

Scott also writes regularly about DWave and their snake-oil version of a quantum computer that your picture alludes to. See http://www.scottaaronson.com/blog/?p=2448

DWave is not a quantum computer, that's true.

Regarding that 2^18 issue, your paper says:
Quote
A small number of particles in superposition states can carry an enormous amount of information: a mere 1,000 particles can be in a superposition that represents every number from 1 to 2^1,000 (about 10^300), and a quantum computer would manipulate all those numbers in parallel, for instance, by hitting the particles with laser pulses.
While it's obvious that 1 number is not enough for the Argon2 computation, if we assume that 10 numbers are enough, then 18*10 extra qubits should solve the problem. Right?
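
A minimal sketch of the counting behind that (illustrative only): n qubits can be in a superposition over 2^n basis states, which is the source of the "2^1,000" figure in the quote, and 18 qubits index 2^18 values. Whether that width offsets a 2^18 sequential penalty is exactly what the quoted reply disputes.

Code:
# State counting only -- this says nothing about runtime, which is the disputed point.
def basis_states(n_qubits: int) -> int:
    """Number of basis states an n-qubit register can superpose over."""
    return 2 ** n_qubits

print(len(str(basis_states(1000))))  # 302 digits, i.e. the "about 10^300" in the quote
print(basis_states(18))              # 262144 = 2^18 values indexed by 18 extra qubits
print(18 * 10)                       # the post's assumption of ~10 such registers -> 180 qubits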
legendary
Activity: 990
Merit: 1108
November 16, 2015, 05:01:57 PM
The whitepaper (Table 1) says that reducing memory for Argon2d by a mere factor of 7 requires increasing the amount of computation by 2^18, and it only gets much worse beyond that.

Best of luck with your perfect quantum computer.

So it requires adding 18 qubits to that perfect quantum computer, it seems?

You are rather confused about the abilities of quantum computers.
A 2^18 increase in sequential computation is also a 2^18 increase in quantum runtime.
Please read http://www.cs.virginia.edu/~robins/The_Limits_of_Quantum_Computers.pdf
to understand what quantum computers can and cannot do.

Scott also writes regularly about DWave and their snake-oil version of a quantum computer that your picture alludes to. See http://www.scottaaronson.com/blog/?p=2448
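
A rough sketch of that runtime point in numbers, assuming the textbook Grover iteration count of roughly (pi/4)*sqrt(N) sequential oracle calls (the nonce-space size below is purely illustrative):

Code:
import math

def grover_runtime(search_space: int, cost_per_eval: float) -> float:
    """Sequential Grover runtime, in units of one cheap hash evaluation."""
    iterations = (math.pi / 4) * math.sqrt(search_space)  # sequential oracle calls
    return iterations * cost_per_eval

N = 2 ** 64                              # illustrative nonce space
base = grover_runtime(N, 1)              # full-memory Argon2d evaluation
penalized = grover_runtime(N, 2 ** 18)   # memory cut ~7x per Table 1 of the paper

print(penalized / base)                  # 262144.0 -- the 2^18 penalty carries over intact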
hero member
Activity: 1069
Merit: 682
November 16, 2015, 04:51:02 PM
Two years ago I was the first German blogger to take notice of Nxt. For IOTA I hope I can also play an important role in creating attention in the German-speaking communities (which include Switzerland and Austria as well).

This first post includes a lot of information from this thread, as well as some parts of the Cointelegraph interview and other sources from the web.
In addition I brought attention to Jinn and how IOTA is related to this semiconductor start-up:
https://altcoinspekulant.wordpress.com/2015/11/15/iota-kryptowaehrungsrevolution-zum-internet-of-things/

Have a good start to the week!


Thanks a lot!

Of course, David. It would be great if I could contact you as well for an interview, not right now but at the beginning of December, when we get closer to the ICO date. Just 4-5 questions.
Many thanks in advance!
legendary
Activity: 2142
Merit: 1009
Newbie
November 16, 2015, 03:14:51 PM
The whitepaper (Table 1) says that reducing memory for Argon2d by a mere factor of 7 requires increasing the amount of computation by 2^18, and it only gets much worse beyond that.

Best of luck with your perfect quantum computer.

So it requires adding 18 qubits to that perfect quantum computer, it seems?

Have you seen this pic:

legendary
Activity: 990
Merit: 1108
November 16, 2015, 03:07:40 PM
Here's one: Argon2, winner of the Password Hashing Competition.

The Argon2 whitepaper says that a time-memory trade-off can still be used. At some point the trade-off stops working, because the computational units will occupy more space than the removed memory, but this protection won't work for a quantum computer with its perfect parallelism of computations. Looks like Argon2 fails to deliver protection against quantum computers.

The whitepaper (Table 1) says that reducing memory for Argon2d by a mere factor of 7 requires increasing the amount of computation by 2^18, and it only gets much worse beyond that.

Best of luck with your perfect quantum computer.
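
A back-of-the-envelope sketch using only the data point cited here (memory divided by 7, computation multiplied by 2^18); the absolute memory figure is made up for illustration:

Code:
full_memory_mib  = 1024           # hypothetical Argon2d memory parameter
full_computation = 1.0            # normalized cost of one honest evaluation

reduced_memory_mib  = full_memory_mib / 7
reduced_computation = full_computation * 2 ** 18

# time * memory product, the usual measure of attacker advantage
honest  = full_computation * full_memory_mib
cheater = reduced_computation * reduced_memory_mib

print(f"memory saved:        {full_memory_mib / reduced_memory_mib:.1f}x")
print(f"extra computation:   {reduced_computation / full_computation:.0f}x")
print(f"time*memory blow-up: {cheater / honest:.0f}x")   # ~37449x worse for the attacker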
legendary
Activity: 2142
Merit: 1009
Newbie
November 16, 2015, 02:55:56 PM
You should find more reputable sources to support your questionable claims.

There are not that many papers that analyze algebraic attacks on double SHA256. Look at http://link.springer.com/chapter/10.1007%2F978-3-642-21702-9_6#page-1 and https://bitcointalksearch.org/topic/m.2851659 to get an understanding of how single bits can be computed faster than the whole hash. https://en.wikipedia.org/wiki/Algebraic_normal_form may also help.
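
To make the algebraic-normal-form idea concrete, here is a toy example on a 3-bit function rather than on double SHA256: the Moebius transform turns the truth table of a single output bit into its ANF, a XOR of AND-monomials. This illustrates the technique only; it says nothing about how compact the ANF of one bit of double SHA256 would be.

Code:
def anf(truth_table):
    """Fast Moebius transform: truth table of length 2^n -> ANF coefficients."""
    coeffs = list(truth_table)
    n = len(coeffs).bit_length() - 1
    for i in range(n):
        step = 1 << i
        for j in range(len(coeffs)):
            if j & step:
                coeffs[j] ^= coeffs[j ^ step]
    return coeffs

# One output bit of a 3-input function: majority(x2, x1, x0).
tt = [1 if bin(x).count("1") >= 2 else 0 for x in range(8)]

coeffs = anf(tt)
monomials = [format(m, "03b") for m, c in enumerate(coeffs) if c]
print(monomials)   # ['011', '101', '110']  ->  x0*x1 ^ x0*x2 ^ x1*x2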
legendary
Activity: 2142
Merit: 1009
Newbie
November 16, 2015, 02:47:36 PM
Here's one: Argon2, winner of the Password Hashing Competition.

The Argon2 whitepaper says that a time-memory trade-off can still be used. At some point the trade-off stops working, because the computational units will occupy more space than the removed memory, but this protection won't work for a quantum computer with its perfect parallelism of computations. Looks like Argon2 fails to deliver protection against quantum computers.
legendary
Activity: 990
Merit: 1108
November 16, 2015, 02:41:47 PM
This means that on average computing a single bit takes less time than computing the whole hash.

Like I said, it takes about a percent less.

All that article does is propose an extremely inefficient way of evaluating SHA256,
as some of the comments there already point out.

You should find more reputable sources to support your questionable claims.
legendary
Activity: 990
Merit: 1108
November 16, 2015, 02:25:11 PM
Of course I was talking about hash-functions that don't allow for time-memory trade-offs.

Give me the name of one such function, please. The trade-off is a pretty universal thing; the best a function can do is to keep time*memory*advice constant, if I'm not mistaken.

You are quite mistaken. This is a recognized weakness in scrypt's design.

Here's one: Argon2, winner of the Password Hashing Competition.

Most of the PHC candidates qualify, since time-memory-trade-off resistance was one of the design goals.
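
For concreteness, this is roughly what the memory-hardness knob looks like from code, assuming the argon2-cffi Python bindings; the parameters are illustrative choices, not anything from the whitepaper.

Code:
# Minimal sketch, assuming the argon2-cffi package (pip install argon2-cffi).
# memory_cost is in KiB; raising it is what makes the time-memory trade-off
# expensive for an attacker in the first place.
from argon2.low_level import hash_secret_raw, Type

digest = hash_secret_raw(
    secret=b"nonce-or-password",
    salt=b"16-byte-salt-ok!",
    time_cost=3,            # passes over memory
    memory_cost=1 << 16,    # 64 MiB
    parallelism=4,
    hash_len=32,
    type=Type.D,            # Argon2d, the data-dependent variant discussed here
)
print(digest.hex())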
legendary
Activity: 2142
Merit: 1009
Newbie
November 16, 2015, 01:27:47 PM
so that each single bit in one round depends on pretty much all bits of the previous rounds.

This means that after some number of rounds SHA256 doesn't give any better mixing, hence it's possible to take a shortcut by finding a polynomial with a smaller number of operators.


That is a long document to read. Where exactly does it claim that?

Quote
I introduced a novel algorithm to solve the bitcoin mining problem without using (explicit) brute force. Instead, the nonce search is encoded as a decision problem and solved by a SAT solver in such a way that a satisfiable instance contains a valid nonce. The key ingredients in the algorithm are a non-deterministic nonce and the ability to take advantage of the known structure of a valid hash using assume statements.

A couple of benchmarks demonstrated that already with simple parameter tuning dramatic speed ups can be achieved. Additionally, I explored the contentious claim that the algorithm might get more efficient with increasing bitcoin difficulty. Initial tests showed that block 218430 with considerably higher difficulty is solved more efficiently than the genesis block 0 for a given nonce range.

This means that on average computing a single bit takes less time than computing the whole hash.
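
The "assume statements" pattern from that quote, shown on a toy decision problem (this sketch assumes the python-sat package and does not encode SHA256 at all; it only shows the solve-under-assumptions mechanism):

Code:
# Toy version of the quoted approach (pip install python-sat). Variables 1..4
# stand in for nonce bits, the clauses are a stand-in constraint, and the
# assumptions pin down bits the way satcoin pins down known hash structure.
from pysat.solvers import Glucose3

solver = Glucose3()
solver.add_clause([2, 3, 4])   # "at least one of bits 2..4 is set"
solver.add_clause([-1, 3])     # "bit 1 implies bit 3"

# Assume bit 1 is set and bit 4 is clear; ask whether a valid "nonce" exists.
if solver.solve(assumptions=[1, -4]):
    model = solver.get_model()
    nonce_bits = [1 if v > 0 else 0 for v in model[:4]]
    print("candidate nonce bits:", nonce_bits)
else:
    print("no nonce under these assumptions")
solver.delete()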
legendary
Activity: 2142
Merit: 1009
Newbie
November 16, 2015, 01:21:04 PM
Of course I was talking about hash-functions that don't allow for time-memory trade-offs.

Give me the name of one such function, please. The trade-off is a pretty universal thing; the best a function can do is to keep time*memory*advice constant, if I'm not mistaken.
legendary
Activity: 990
Merit: 1108
November 16, 2015, 12:37:39 PM
But Hashcash with large memory requirements will likely not be affected as long as scaling quantum computers up to millions of bits remains elusive.

I didn't find information on the time-memory trade-off of quantum computers, but if we assume that the trade-off is no worse than that of classical computers, then a memory increase in the hashing function can be counteracted by increasing the time we run the computations. So Hashcash with large memory won't save us.

Of course I was talking about hash-functions that don't allow for time-memory trade-offs.
legendary
Activity: 990
Merit: 1108
November 16, 2015, 12:33:59 PM
Computing a single bit of a hash is almost as much effort as computing the whole hash; you might be saving a percent or two at most.

Could you provide a proof of this statement?

This follows directly from how SHA256 is defined.
It is many rounds of confusion and diffusion,
so that each single bit in one round depends on pretty much all bits of the previous rounds.

Quote
http://jheusser.github.io/2013/02/03/satcoin.html claims the opposite.
That is a long document to read. Where exactly does it claim that?
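
That avalanche behaviour is easy to check empirically: flip one input bit and count how many of the 256 output bits of double SHA256 change (a quick sanity check, not a proof).

Code:
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

msg = bytearray(b"block header with some nonce 0000")
h1 = double_sha256(bytes(msg))

msg[-1] ^= 0x01                      # flip a single input bit
h2 = double_sha256(bytes(msg))

diff = int.from_bytes(h1, "big") ^ int.from_bytes(h2, "big")
print(bin(diff).count("1"), "of 256 output bits changed")   # typically around 128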
legendary
Activity: 2142
Merit: 1009
Newbie
November 16, 2015, 12:28:53 PM
But Hashcash with large memory requirements will likely not be affected as long as scaling quantum computers up to millions of bits remains elusive.

I didn't find information on the time-memory trade-off of quantum computers, but if we assume that the trade-off is no worse than that of classical computers, then a memory increase in the hashing function can be counteracted by increasing the time we run the computations. So Hashcash with large memory won't save us.
legendary
Activity: 2142
Merit: 1009
Newbie
November 16, 2015, 11:51:03 AM
Computing a single bit of a hash is almost as much effort as computing the whole hash; you might be saving a percent or two at most.

Could you provide a proof of this statement? http://jheusser.github.io/2013/02/03/satcoin.html claims the opposite.
legendary
Activity: 990
Merit: 1108
November 16, 2015, 11:39:36 AM
Hashcash PoW (like Bitcoin's and most altcoins') is amenable to Grover search, which can search a space of n nonces in time O(sqrt(n)).

But Hashcash with large memory requirements will likely not be affected as long as scaling quantum computers up to millions of bits remains elusive.

Non-hashcash PoWs like Cuckoo Cycle are even less affected, as they are immune to Grover search.

Also, a quantum computer doesn't need to evaluate a whole hash value; it can verify the first bits and, with high probability, throw away nonces that don't fit.

Computing a single bit of a hash is almost as much effort as computing the whole hash; you might be saving a percent or two at most.

Quote
BTW, why is Cuckoo Cycle less affected? Birthday-paradox problems are solved with N^(1/3) effort vs N^(1/2).

Because, unlike birthday-collision problems, Cuckoo Cycle is a more structured search problem: you must find a 42-cycle in an arbitrary graph.
There are no known quantum speedups for such graph problems.
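
The asymptotics above in round numbers, for an illustrative 2^64 search space (a counting sketch only: no constants, no memory costs, no hardware assumptions):

Code:
N = 2 ** 64

classical_search   = N                    # expected order of tries when one nonce works
grover_search      = round(N ** 0.5)      # ~2^32 sequential oracle calls
classical_birthday = round(N ** 0.5)      # ~2^32 hashes to find a collision
quantum_collision  = round(N ** (1 / 3))  # the N^(1/3) figure quoted above

for name, q in [("classical search", classical_search),
                ("Grover search", grover_search),
                ("classical birthday", classical_birthday),
                ("quantum collision N^(1/3)", quantum_collision)]:
    print(f"{name:>26}: ~2^{q.bit_length() - 1}")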
legendary
Activity: 2142
Merit: 1009
Newbie
November 16, 2015, 10:49:54 AM
Hashcash PoW (like Bitcoin's and most altcoins') is amenable to Grover search, which can search a space of n nonces in time O(sqrt(n)).

But Hashcash with large memory requirements will likely not be affected as long as scaling quantum computers up to millions of bits remains elusive.

Non-hashcash PoWs like Cuckoo Cycle are even less affected, as they are immune to Grover search.

"Improvements" like Scrypt make nodes more vulnerable to spam. Also, a quantum computer doesn't need to evaluate a whole hash value, it can verify first bits and throw away nonces that don't suit with high probability. BTW, why is Cuckoo Cycle less affected, birthday paradox problems are solved with N^(1/3) effort VS N^(1/2).
legendary
Activity: 990
Merit: 1108
November 16, 2015, 10:39:29 AM
Pretty much all current public-key cryptography is done for.

PoW blockchain mining too.

Hashcash PoW (like Bitcoin's and most altcoins') is amenable to Grover search, which can search a space of n nonces in time O(sqrt(n)).

But Hashcash with large memory requirements will likely not be affected as long as scaling quantum computers up to millions of bits remains elusive.

Non-hashcash PoWs like Cuckoo Cycle are even less affected, as they are immune to Grover search.
legendary
Activity: 2142
Merit: 1009
Newbie
November 16, 2015, 10:36:20 AM
Is it easy for Ethereum or PoS to switch to new algorithms to protect the public keys?

Existing schemes for QC-proof signing require pretty big signatures. In IOTA, for example, ~433 bytes are occupied by the signature; that's exactly 50% of the transaction size.
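
For a sense of scale, the size arithmetic for the simplest hash-based (quantum-resistant) signature scheme, plain Lamport over SHA-256. This is not IOTA's actual scheme; it only illustrates why QC-proof signatures are bulky next to ~70-byte ECDSA signatures.

Code:
# Lamport one-time signature sizes with a 256-bit hash: one preimage is revealed
# per message bit. Winternitz-style schemes trade extra hashing for much smaller
# signatures.
hash_bits  = 256
hash_bytes = hash_bits // 8

lamport_signature = hash_bits * hash_bytes       # 8192 bytes
lamport_pubkey    = 2 * hash_bits * hash_bytes   # 16384 bytes

print(f"Lamport signature:  {lamport_signature} bytes")
print(f"Lamport public key: {lamport_pubkey} bytes")
print(f"transaction size implied above: ~{round(433 / 0.5)} bytes")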