
Topic: Scrypt-based coins: Any proof that this algorithm is really good? - page 2. (Read 6370 times)

legendary
Activity: 2142
Merit: 1010
Newbie
Scrypt looks OK; the problem comes from the chosen parameters (1024, 1 and 1), I think. Maybe "problem" is too strong a word, but a really good algorithm is supposed to use every byte of the scratchpad with equal probability. Unfortunately, usage of the scratchpad is not symmetrical. For example, if you divide the memory into 64-byte chunks, you'll notice that every even chunk is used twice while every odd one is used only once. There are also evident patterns in the jumps from chunk to chunk. And so on.
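
One way to see this asymmetry, as a rough sketch rather than a faithful scrypt implementation: instrument BlockMix (the function scrypt applies to each 128-byte scratchpad block when r = 1) and count how often each 64-byte sub-block of its input is read. Salsa20/8 is replaced by a placeholder hash below, since only the access pattern is being counted, not the values.

Code:
import hashlib

r = 1  # LTC-style parameter

def H(block64):
    # Stand-in for Salsa20/8: any 64-byte -> 64-byte function will do here,
    # because we only care which sub-blocks get touched.
    return hashlib.sha512(block64).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def blockmix(B, tally):
    # B: list of 2r 64-byte sub-blocks; tally counts reads per sub-block.
    X = B[-1]
    tally[2 * r - 1] += 1            # the initial X reads the LAST sub-block
    out = []
    for i in range(2 * r):
        X = H(xor(X, B[i]))          # each round reads sub-block i once more
        tally[i] += 1
        out.append(X)
    return out[0::2] + out[1::2]     # scrypt's even/odd output shuffle

tally = [0] * (2 * r)
blockmix([bytes([i]) * 64 for i in range(2 * r)], tally)
print(tally)  # [1, 2] -- the two 64-byte halves are not read equally often

With r = 1, one half of every 128-byte block is read twice per BlockMix call and the other only once, which matches the kind of asymmetry described above (up to the chunk-numbering convention).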

I don't think there is any need to discuss this topic further. We should just pray that no one solves the problem of how to simplify Salsa20/8.
legendary
Activity: 2142
Merit: 1010
Newbie
Good idea! Thank you.
hero member
Activity: 842
Merit: 507
I think it would be interesting to discuss this with the author of scrypt, Colin Percival. There is even a mailing list dedicated to scrypt here:
http://www.tarsnap.com/lists.html
legendary
Activity: 2142
Merit: 1010
Newbie
I'm talking about the algorithm that runs between the two SHA-256 (PBKDF2-HMAC-SHA256) passes to make hashing GPU-unfriendly. Its name is "Scrypt".

It's supposed to do calculations that can't be done in parallel and that require a lot of memory. The parameters used for LTC and similar forks (1024, 1 and 1) force you to use only 1 thread and at least 128 KB of memory to compute 1 hash value. I rewrote the algorithm so I could track the path of every bit. After that I applied mathematical methods for simplifying integrated circuits (it's like when you have 1000 NAND or NOR elements and wish to use as few of them as possible, say just 300). I failed to build the complete Scrypt circuit this way because of memory constraints, but I managed to simplify small pieces of it. I'm afraid that with a lot of memory anyone could build the complete scheme and collapse the thousands of Salsa20/8 steps. This would lead to a huge hashrate boost, e.g. 1 Mh/s vs 1 Kh/s with the "usual" Scrypt. No need to explain why that would be very bad for LTC and the others.
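
For context: scrypt sandwiches its memory-hard core (ROMix) between two PBKDF2-HMAC-SHA256 passes, and an LTC-style proof-of-work hash feeds the 80-byte block header in as both password and salt. A minimal sketch using Python's OpenSSL-backed hashlib.scrypt; the all-zero header is just a placeholder:

Code:
import hashlib

header = bytes(80)  # placeholder 80-byte block header

# n=1024, r=1, p=1, 32-byte output: needs 128 * r * n = 128 KB of
# scratchpad per hash, which is where the memory requirement comes from.
pow_hash = hashlib.scrypt(header, salt=header, n=1024, r=1, p=1, dklen=32)
print(pow_hash.hex())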

Well, now my question: is there any theory that proves the quality of Scrypt?

PS: My modified Scrypt code does fewer calculations. It gives an approx. 20% boost over the original code from the Tarsnap project, thanks to sequential memory access. Imagine what will happen if someone gets rid of the redundant XORs of the 16 temporary memory cells in Salsa20/8...
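
For reference, the Salsa20/8 core that scrypt calls thousands of times per hash, transcribed into Python from the reference round structure; the closing loop that folds the 16 input words back into the state (by addition) is presumably the "16 temporary memory cells" step the PS alludes to:

Code:
MASK = 0xffffffff

def R(a, b):
    # 32-bit rotate-left of a by b bits
    a &= MASK
    return ((a << b) | (a >> (32 - b))) & MASK

def salsa20_8(B):
    # B: 16 32-bit words (one 64-byte cell); returns 16 32-bit words.
    x = list(B)
    for _ in range(4):  # 8 rounds = 4 double-rounds
        # operate on columns
        x[4]  ^= R(x[0]  + x[12], 7);  x[8]  ^= R(x[4]  + x[0],  9)
        x[12] ^= R(x[8]  + x[4], 13);  x[0]  ^= R(x[12] + x[8], 18)
        x[9]  ^= R(x[5]  + x[1],  7);  x[13] ^= R(x[9]  + x[5],  9)
        x[1]  ^= R(x[13] + x[9], 13);  x[5]  ^= R(x[1]  + x[13],18)
        x[14] ^= R(x[10] + x[6],  7);  x[2]  ^= R(x[14] + x[10], 9)
        x[6]  ^= R(x[2]  + x[14],13);  x[10] ^= R(x[6]  + x[2], 18)
        x[3]  ^= R(x[15] + x[11], 7);  x[7]  ^= R(x[3]  + x[15], 9)
        x[11] ^= R(x[7]  + x[3], 13);  x[15] ^= R(x[11] + x[7], 18)
        # operate on rows
        x[1]  ^= R(x[0]  + x[3],  7);  x[2]  ^= R(x[1]  + x[0],  9)
        x[3]  ^= R(x[2]  + x[1], 13);  x[0]  ^= R(x[3]  + x[2], 18)
        x[6]  ^= R(x[5]  + x[4],  7);  x[7]  ^= R(x[6]  + x[5],  9)
        x[4]  ^= R(x[7]  + x[6], 13);  x[5]  ^= R(x[4]  + x[7], 18)
        x[11] ^= R(x[10] + x[9],  7);  x[8]  ^= R(x[11] + x[10], 9)
        x[9]  ^= R(x[8]  + x[11],13);  x[10] ^= R(x[9]  + x[8], 18)
        x[12] ^= R(x[15] + x[14], 7);  x[13] ^= R(x[12] + x[15], 9)
        x[14] ^= R(x[13] + x[12],13);  x[15] ^= R(x[14] + x[13],18)
    # the 16 feed-forward operations: fold the input words back into the state
    return [(x[i] + B[i]) & MASK for i in range(16)]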