
Topic: Pollard's kangaroo ECDLP solver - page 91. (Read 55599 times)

member
Activity: 846
Merit: 22
June 20, 2020, 07:35:25 PM
@COBRAS
I answered you on the github ticket.

New release 2.0 is out:
    Performance increase
    Kangaroo backup via the server (-wss)
    Fixed rare wrong points

https://github.com/JeanLucPons/Kangaroo/releases/tag/2.0
Thanks for testing it ;)

Something is off with the server (version 3). I'll start with 6 clients and the server slowly but surely starts showing clients dropping off. However, when I go check the clients, they say "server ok". Never had that issue with previous server versions.

Bro, how do I use the -g option?

I tried -g g1136,g1256,g2136,g2256 but it didn't work!!!


Help me please, very much needed right now.

Big thank you
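For what it's worth, the README describes -g as taking plain gridSizeX,gridSizeY pairs, one pair per GPU, with no "g1"/"g2" prefixes. A hedged sketch of an invocation, assuming a single GPU #0 and an input file named in.txt (the 136x256 grid is just the value shown in the logs elsewhere in this thread, not a recommendation):

Code:
# one GPU (id 0), kernel grid 136x256
./kangaroo -gpu -gpuId 0 -g 136,256 in.txt
# two GPUs, one gridsize pair per GPU
./kangaroo -gpu -gpuId 0,1 -g 136,256,136,256 in.txt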
full member
Activity: 1050
Merit: 219
Shooters Shoot...
June 20, 2020, 07:29:46 PM
@COBRAS
I answered you on the github ticket.

New release 2.0 is out:
    Performance increase
    Kangaroo backup via the server (-wss)
    Fixed rare wrong points

https://github.com/JeanLucPons/Kangaroo/releases/tag/2.0
Thanks for testing it ;)

Something is off with the server (version 3). I'll start with 6 clients and the server slowly but surely starts showing clients dropping off. However, when I go check the clients, they say "server ok". Never had that issue with previous server versions.
member
Activity: 846
Merit: 22
June 20, 2020, 12:24:50 PM
@Jean_Luc, help please:

Compilation error on Ubuntu:

Code:
main.cpp:335:13: error: 'exit' was not declared in this scope exit(0);

Can someone please help me fix this error?


Br

How do I fix this?


Big thank you.
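For what it's worth, this 'exit' was not declared error usually just means that newer GCC versions no longer pull in <cstdlib> transitively. A minimal sketch of a likely fix (my assumption, not an official patch) is to add the header near the other includes in main.cpp and rebuild:

Code:
// main.cpp -- add alongside the existing #include lines
#include <cstdlib>   // declares exit(), used at main.cpp:335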
member
Activity: 846
Merit: 22
June 20, 2020, 11:53:12 AM
@Jean_Luc, help please:

Compilation error on Ubuntu:

Code:
main.cpp:335:13: error: 'exit' was not declared in this scope exit(0);

How do I fix this?


Big thank you.
jr. member
Activity: 30
Merit: 122
June 20, 2020, 10:56:14 AM
https://github.com/brichard19/eclambda

Can anyone try my tool on a 2080ti? On a 2080S it gets around 1300MKeys/sec when using 24-bit DP.


Will you commit your source ?

I have not yet decided if I want to.
sr. member
Activity: 462
Merit: 696
June 20, 2020, 10:38:30 AM
https://github.com/brichard19/eclambda

Can anyone try my tool on a 2080ti? On a 2080S it gets around 1300MKeys/sec when using 24-bit DP.


Will you commit your source ?
jr. member
Activity: 30
Merit: 122
June 20, 2020, 10:28:29 AM
https://github.com/brichard19/eclambda

Can anyone try my tool on a 2080ti? On a 2080S it gets around 1300MKeys/sec when using 24-bit DP.
sr. member
Activity: 616
Merit: 312
June 20, 2020, 05:32:51 AM
-snip-
Thanks for testing it ;)

+100 MK/s in 2.0 on a 2080 Ti.
The expected number of operations is quite different between v1.11 and 2.0.
GPU and host memory usage also decreased in 2.0.

Code:
Kangaroo v1.11alpha
Start:4000000000000000000
Stop :7FFFFFFFFFFFFFFFFFF
Keys :1
Number of CPU thread: 0
Range width: 2^74
Jump Avg distance: 2^37.02
Number of kangaroos: 2^22.09
Suggested DP: 11
Expected operations: 2^39.07
Expected RAM: 348.1MB
DP size: 16 [0xFFFF000000000000]
GPU: GPU #0 GeForce RTX 2080 Ti (68x64 cores) Grid(136x256) (417.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^22.09 kangaroos [27.1s]
[1389.79 MK/s][GPU 1389.79 MK/s][Count 2^39.04][Dead 3][07:40 (Avg 06:55)][265.3/338.2MB]
Key# 0 [1S]Pub:  0x03726B574F193E374686D8E12BC6E4142ADEB06770E0A2856F5E4AD89F66044755
       Priv: 0x4C5CE114686A1336E07

Code:
Kangaroo v2.0
Start:4000000000000000000
Stop :7FFFFFFFFFFFFFFFFFF
Keys :1
Number of CPU thread: 0
Range width: 2^74
Jump Avg distance: 2^37.02
Number of kangaroos: 2^22.09
Suggested DP: 12
Expected operations: 2^38.60
Expected RAM: 254.9MB
DP size: 16 [0xFFFF000000000000]
GPU: GPU #0 GeForce RTX 2080 Ti (68x64 cores) Grid(136x256) (347.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^22.09 kangaroos [19.4s]
[1502.21 MK/s][GPU 1502.21 MK/s][Count 2^38.26][Dead 0][04:03 (Avg 04:37)][154.8/200.3MB]
Key# 0 [1S]Pub:  0x03726B574F193E374686D8E12BC6E4142ADEB06770E0A2856F5E4AD89F66044755
       Priv: 0x4C5CE114686A1336E07
Thanks for the new release.
jr. member
Activity: 40
Merit: 2
June 20, 2020, 02:17:34 AM
My speed increased by 60 MK/s :)
sr. member
Activity: 462
Merit: 696
June 20, 2020, 01:53:03 AM
@COBRAS
I answered you on the github ticket.

New release 2.0 is out:
    Performance increase
    Kangaroo backup via the server (-wss)
    Fixed rare wrong points

https://github.com/JeanLucPons/Kangaroo/releases/tag/2.0
Thanks for testing it ;)
member
Activity: 313
Merit: 34
June 19, 2020, 11:36:50 PM
Sorry for the off-topic.

Can someone share a compiled Ubuntu 16 / CUDA 10 version of Kangaroo? Very much needed. I tried compiling the latest from GitHub but ran into trouble.

Via PM, if someone can do this.


Code:


 make gpu=1 ccap=20 all
cd obj &&       mkdir -p SECPK1
g++ -DWITHGPU -m64 -mssse3 -Wno-unused-result -Wno-write-strings -O2 -I. -I/usr/local/cuda-10.0/include -o obj/SECPK1/IntGroup.o -c SECPK1/IntGroup.cpp
SECPK1/IntGroup.cpp: In constructor 'IntGroup::IntGroup(int)':
SECPK1/IntGroup.cpp:24:42: error: 'malloc' was not declared in this scope
   subp = (Int *)malloc(size * sizeof(Int));
                                          ^
SECPK1/IntGroup.cpp: In destructor 'IntGroup::~IntGroup()':
SECPK1/IntGroup.cpp:28:12: error: 'free' was not declared in this scope
   free(subp);
            ^
Makefile:80: recipe for target 'obj/SECPK1/IntGroup.o' failed
make: *** [obj/SECPK1/IntGroup.o] Error 1



Huh??
What's your GPU model?
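Whatever the GPU model turns out to be, the 'malloc'/'free' not declared errors themselves are the usual missing-header issue with newer GCC: <cstdlib> is no longer included transitively. A hedged sketch of a fix (an assumption, not the maintainer's patch):

Code:
// SECPK1/IntGroup.cpp -- add alongside the existing includes
#include <cstdlib>   // declares malloc() and free()

Separately, ccap=20 targets compute capability 2.0; the GPU-model question probably matters because ccap should match your card (for example, ccap=75 for an RTX 2080 Ti).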
member
Activity: 846
Merit: 22
June 19, 2020, 05:26:40 PM
Sorry for the off-topic.

Can someone share a compiled Ubuntu 16 / CUDA 10 version of Kangaroo? Very much needed. I tried compiling the latest from GitHub but ran into trouble.

Via PM, if someone can do this.


Code:


 make gpu=1 ccap=20 all
cd obj &&       mkdir -p SECPK1
g++ -DWITHGPU -m64 -mssse3 -Wno-unused-result -Wno-write-strings -O2 -I. -I/usr/local/cuda-10.0/include -o obj/SECPK1/IntGroup.o -c SECPK1/IntGroup.cpp
SECPK1/IntGroup.cpp: In constructor 'IntGroup::IntGroup(int)':
SECPK1/IntGroup.cpp:24:42: error: 'malloc' was not declared in this scope
   subp = (Int *)malloc(size * sizeof(Int));
                                          ^
SECPK1/IntGroup.cpp: In destructor 'IntGroup::~IntGroup()':
SECPK1/IntGroup.cpp:28:12: error: 'free' was not declared in this scope
   free(subp);
            ^
Makefile:80: recipe for target 'obj/SECPK1/IntGroup.o' failed
make: *** [obj/SECPK1/IntGroup.o] Error 1



Huh??
legendary
Activity: 1914
Merit: 2071
June 19, 2020, 02:36:49 PM

But why didn't it work when we moved the DPs to range*32 with arulbero's method?


Because each patch can reach only 1/32 of the points.
sr. member
Activity: 616
Merit: 312
June 19, 2020, 02:32:16 PM
I tried to search for keys in the same range as the work file. Everything is much faster:
I solved 1 key in a 54-bit range using only tame and wild DPs. After that I ran a test with 1000 pubkeys and got a decent result.
Expected ops: 2^28.06 for one key;
on average I got 2^27.20 for one key. This value varies depending on how many DPs you have gained in the work file.

Here is the work-file info:
Code:
DP bits   : 8
Start     : 40000000000000
Stop      : 7FFFFFFFFFFFFF
DP Count  : 658682 2^19.329
HT Max    : 12 [@ 009F12]
HT Min    : 0 [@ 000015]
HT Avg    : 2.51
HT SDev   : 1.58
But why didn't it work when we moved the DPs to range*32 with arulbero's method?
P.S. The interesting thing is that 2^27.20 is exactly 2^28.06 - (tameDPs + wildDPs/2)*2^DP
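A quick numeric check of that P.S. (a sketch only; it assumes the 658682 DPs split roughly 50/50 between tame and wild, which the work-file info above does not state):

Code:
// Check: 2^28.06 - (tameDPs + wildDPs/2) * 2^DP  ~  2^27.2
#include <cmath>
#include <cstdio>

int main() {
    double expected = std::pow(2.0, 28.06);     // expected ops per key without a work file
    double dpCount  = 658682.0;                 // DP Count from the work-file info above
    double tame = dpCount / 2.0, wild = dpCount / 2.0;        // assumed 50/50 split
    double saved = (tame + wild / 2.0) * std::pow(2.0, 8.0);  // DP bits = 8
    printf("remaining ops ~ 2^%.2f\n", std::log2(expected - saved));  // prints ~2^27.2
    return 0;
}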

member
Activity: 846
Merit: 22
June 19, 2020, 08:23:08 AM
I also ran a test of 1000 pubkeys on the same range, but with normal solving, without tricks.
Here is the result:
Total    OP: 273125509453.87 = 2^37.99
Average  OP: 2^28.04

Unfortunately the difference is very small.

--------------------------------------------------------------------------------------------------------

I read this article:

https://medium.com/@johncantrell97/how-i-checked-over-1-trillion-mnemonics-in-30-hours-to-win-a-bitcoin-635fe051a752

this is the puzzle https://twitter.com/alistairmilne/status/1266037520715915267

I think that zielar could have won that prize easily too.

About this part of the article:

Quote
In a GPU you have four main types of memory available to you (Global, Constant, Local, and Private). Global memory is shared across all GPU cores and is very slow to access, you want to minimize its use as much as possible. Constant and Private memory are extremely fast but limited in space. I believe most devices only support 64kB of constant memory. Local memory is shared by a “group” of workers and its speed is somewhere between Global and Constant.

My goal was to fit everything I needed into the 64kB of constant memory and never need to read from global or local memory to maximize the speed of the program. This proved to be a bit tricky because the standard precomputed secp256k1 multiplication table took up exactly 64kB by itself.

@JeanLuc

How much constant memory do you use for the multiplication and for the addition?

32 jumps are 16kB for x and y-coordinate + 8 kB for their private keys (32 * 256bit = 8kB) + what else?



It was solved on 18.06.

Good day. What was the length of the privkey, bro?
full member
Activity: 277
Merit: 106
June 19, 2020, 08:18:45 AM
I also ran a test of 1000 pubkeys on the same range, but with normal solving, without tricks.
Here is the result:
Total    OP: 273125509453.87 = 2^37.99
Average  OP: 2^28.04

Unfortunately the difference is very small.

--------------------------------------------------------------------------------------------------------

I read this article:

https://medium.com/@johncantrell97/how-i-checked-over-1-trillion-mnemonics-in-30-hours-to-win-a-bitcoin-635fe051a752

this is the puzzle https://twitter.com/alistairmilne/status/1266037520715915267

I think that zielar could have won that prize easily too.

About this part of the article:

Quote
In a GPU you have four main types of memory available to you (Global, Constant, Local, and Private). Global memory is shared across all GPU cores and is very slow to access, you want to minimize its use as much as possible. Constant and Private memory are extremely fast but limited in space. I believe most devices only support 64kB of constant memory. Local memory is shared by a “group” of workers and its speed is somewhere between Global and Constant.

My goal was to fit everything I needed into the 64kB of constant memory and never need to read from global or local memory to maximize the speed of the program. This proved to be a bit tricky because the standard precomputed secp256k1 multiplication table took up exactly 64kB by itself.

@JeanLuc

How much constant memory do you use for the multiplication and for the addition?

32 jumps are 16kB for x and y-coordinate + 8 kB for their private keys (32 * 256bit = 8kB) + what else?



It was solved on 18.06.
legendary
Activity: 1914
Merit: 2071
June 19, 2020, 02:40:55 AM
@JeanLuc
How much constant memory do you use for the multiplication and for the addition?
32 jumps are 16kB for x and y-coordinate + 8 kB for their private keys (32 * 256bit = 8kB) + what else?

I use the following setting to prefer L1 cache as shared mem is not used.
cudaDeviceSetCacheConfig(cudaFuncCachePreferL1);

In constant mem:
Code:
__device__ __constant__ uint64_t _0[] = { 0ULL,0ULL,0ULL,0ULL,0ULL };
__device__ __constant__ uint64_t _1[] = { 1ULL,0ULL,0ULL,0ULL,0ULL };
__device__ __constant__ uint64_t _P[] = { 0xFFFFFFFEFFFFFC2F,0xFFFFFFFFFFFFFFFF,0xFFFFFFFFFFFFFFFF,0xFFFFFFFFFFFFFFFF,0ULL };
__device__ __constant__ uint64_t MM64 = 0xD838091DD2253531; // 64bits lsb negative inverse of P (mod 2^64)
__device__ __constant__ uint64_t _O[] = { 0xBFD25E8CD0364141ULL,0xBAAEDCE6AF48A03BULL,0xFFFFFFFFFFFFFFFEULL,0xFFFFFFFFFFFFFFFFULL };
__device__ __constant__ uint64_t jD[NB_JUMP][4];
__device__ __constant__ uint64_t jPx[NB_JUMP][4];
__device__ __constant__ uint64_t jPy[NB_JUMP][4];

I will definitely reduce jD to 128 bits in the next release; the less constant memory used, the better. There are 64KB available, but for the L1 cache, the lower the better.


128 bit * 32 = 4kB saved, good.

If you accept breaking compatibility with the #115 search, you can save another 1kB by picking, as jumps, points whose first 32 bits of the x-coordinate are 0; there are many of them in the file of old DPs.
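To put byte counts on those constant-memory arrays: a back-of-the-envelope sketch, assuming NB_JUMP = 32 (an assumption on my part; check the GPU engine headers for the real value):

Code:
// Sizes of the jump tables declared above, assuming NB_JUMP = 32.
#include <cstdint>
#include <cstdio>

int main() {
    const size_t NB_JUMP = 32;                        // assumption
    size_t full = NB_JUMP * 4 * sizeof(uint64_t);     // 256-bit entries -> 1024 bytes per array
    size_t half = NB_JUMP * 2 * sizeof(uint64_t);     // 128-bit jD      ->  512 bytes
    printf("jD + jPx + jPy (256-bit): %zu bytes\n", 3 * full);     // 3072
    printf("saving from 128-bit jD  : %zu bytes\n", full - half);  // 512
    return 0;
}

Either way this is far below the 64KB constant-memory limit; as noted above, the real concern is L1 cache pressure.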
sr. member
Activity: 462
Merit: 696
June 18, 2020, 10:14:46 PM
@JeanLuc
How much constant memory do you use for the multiplication and for the addition?
32 jumps are 16kB for x and y-coordinate + 8 kB for their private keys (32 * 256bit = 8kB) + what else?

I use the following setting to prefer L1 cache as shared mem is not used.
cudaDeviceSetCacheConfig(cudaFuncCachePreferL1);

In constant mem:
Code:
__device__ __constant__ uint64_t _0[] = { 0ULL,0ULL,0ULL,0ULL,0ULL };
__device__ __constant__ uint64_t _1[] = { 1ULL,0ULL,0ULL,0ULL,0ULL };
__device__ __constant__ uint64_t _P[] = { 0xFFFFFFFEFFFFFC2F,0xFFFFFFFFFFFFFFFF,0xFFFFFFFFFFFFFFFF,0xFFFFFFFFFFFFFFFF,0ULL };
__device__ __constant__ uint64_t MM64 = 0xD838091DD2253531; // 64bits lsb negative inverse of P (mod 2^64)
__device__ __constant__ uint64_t _O[] = { 0xBFD25E8CD0364141ULL,0xBAAEDCE6AF48A03BULL,0xFFFFFFFFFFFFFFFEULL,0xFFFFFFFFFFFFFFFFULL };
__device__ __constant__ uint64_t jD[NB_JUMP][4];
__device__ __constant__ uint64_t jPx[NB_JUMP][4];
__device__ __constant__ uint64_t jPy[NB_JUMP][4];

I will definitely reduce jD to 128 bits in the next release; the less constant memory used, the better. There are 64KB available, but for the L1 cache, the lower the better.

In that case the best choice for solving keys is using the CPU :D

Yes, this is true for small ranges (as written in the README).

Great work!

Thanks ;)

For the #120, if you and Zielar use 2^30 kangaroos, you need to use a DP < 28.

Yes, we haven't launched the run yet; we will make our choice in the days to come. A small DP also increases the memory needed.

If you reduce k, you reduce the speed, then you have to reduce theta (DP).

Right.

How many kangaroos run in parallel on a single V100? At which speed?

~2^20 for the last 2 runs, we will see for #120.
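A rough check of where that DP < 28 figure comes from; hedged, since it assumes the DP overhead is roughly nbKangaroo * 2^dp (consistent with the "Expected operations" lines in the logs earlier in this thread) and a 2^119-wide range for #120:

Code:
// DP overhead vs. the ~2*sqrt(N) baseline for puzzle #120, with 2^30 kangaroos.
#include <cmath>
#include <cstdio>

int main() {
    double baseline  = 2.0 * std::pow(2.0, 119.0 / 2.0);   // ~2^60.5 group operations
    double kangaroos = std::pow(2.0, 30.0);
    for (int dp = 26; dp <= 30; dp++) {
        double overhead = kangaroos * std::pow(2.0, (double)dp);
        printf("dp=%d  overhead = 2^%.1f  (%.0f%% of baseline)\n",
               dp, std::log2(overhead), 100.0 * overhead / baseline);
    }
    return 0;
}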

legendary
Activity: 1914
Merit: 2071
June 18, 2020, 06:23:26 PM
I also ran a test of 1000 pubkeys on the same range, but with normal solving, without tricks.
Here is the result:
Total    OP: 273125509453.87 = 2^37.99
Average  OP: 2^28.04

Unfortunately the difference is very small.

--------------------------------------------------------------------------------------------------------

I read this article:

https://medium.com/@johncantrell97/how-i-checked-over-1-trillion-mnemonics-in-30-hours-to-win-a-bitcoin-635fe051a752

this is the puzzle https://twitter.com/alistairmilne/status/1266037520715915267

I think that zielar could have won that prize easily too.

About this part of the article:

Quote
In a GPU you have four main types of memory available to you (Global, Constant, Local, and Private). Global memory is shared across all GPU cores and is very slow to access, you want to minimize its use as much as possible. Constant and Private memory are extremely fast but limited in space. I believe most devices only support 64kB of constant memory. Local memory is shared by a “group” of workers and its speed is somewhere between Global and Constant.

My goal was to fit everything I needed into the 64kB of constant memory and never need to read from global or local memory to maximize the speed of the program. This proved to be a bit tricky because the standard precomputed secp256k1 multiplication table took up exactly 64kB by itself.

@JeanLuc

How much constant memory do you use for the multiplication and for the addition?

32 jumps are 16kB for x and y-coordinate + 8 kB for their private keys (32 * 256bit = 8kB) + what else?

sr. member
Activity: 616
Merit: 312
June 18, 2020, 05:20:39 PM
I also ran a test of 1000 pubkeys on the same range, but with normal solving, without tricks.
Here is the result:
Total    OP: 273125509453.87 = 2^37.99
Average  OP: 2^28.04