
Topic: Bitcoin puzzle transaction ~32 BTC prize to who solves it - page 24. (Read 245773 times)

newbie
Activity: 18
Merit: 0
Does anyone have anything to say? Very simple math! Only I don't know what to do for the next keys. My brain is full of ideas and methods, but I can't implement them.
https://postimg.cc/bdmjP5wx
member
Activity: 165
Merit: 26
You probably configured the grid incorrectly. Nsight shows that 4 blocks run per SM simultaneously, which is why I use an 88×128 grid (22 SMs × 4 = 88). And yes, the speed is roughly correct, since it matches the amount of DP accumulated over a given period of time.

Nah. After a closer look at the specs of the 1660S, I see it has a higher SM clock frequency and a wider memory bus than the card I was comparing against, so that explains some things.

@kTimesG, you have written more than once that your program is several times faster than JLP's and the other clones. Can a 1660 Super really run at 2 Gk/s?

I have no idea, I can't test on that card. I can currently squeeze 6.2 Gk/s out of an RTX 4090, but some users here claim they can get 8 Gk/s or more. I think RetiredCoder might have an even faster version. Technically it is plausible; it really depends on how well the kernel is implemented. And as I said some time ago: if someone manages to fully parallelize the inversion, we could see a doubling in speed Smiley That b***h is really time-consuming.
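For context, the inversion these kernels spend so much time on is usually amortized with Montgomery's batch-inversion trick: n field inverses for the cost of one modular inversion plus about 3(n-1) multiplications, with a prefix-product phase that is inherently sequential (which is what makes full parallelization hard). A minimal Python sketch of the trick over the secp256k1 field; the values and names are illustrative, not anyone's actual kernel code:

```python
# Montgomery batch inversion over the secp256k1 prime field.
P = 2**256 - 2**32 - 977  # secp256k1 field prime

def batch_inverse(xs, p=P):
    """Invert every element of xs mod p using a single modular inversion."""
    n = len(xs)
    prefix = [1] * (n + 1)
    for i, x in enumerate(xs):          # prefix[i+1] = x0 * x1 * ... * xi
        prefix[i + 1] = prefix[i] * x % p
    inv = pow(prefix[n], p - 2, p)      # one inversion of the whole product
    out = [0] * n
    for i in range(n - 1, -1, -1):      # peel off one factor at a time
        out[i] = inv * prefix[i] % p    # = (x0..xi)^-1 * (x0..x_{i-1}) = xi^-1
        inv = inv * xs[i] % p
    return out

xs = [3, 7, 12345, 2**200 + 1]
invs = batch_inverse(xs)
assert all(x * y % P == 1 for x, y in zip(xs, invs))
```

The prefix loop is the sequential part: each product depends on the previous one, so a GPU thread group can only parallelize it with tree-style scan techniques, at the cost of extra multiplications.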
newbie
Activity: 4
Merit: 0
...........

Hello. Is the new LPKangaroo_OW_OT version not working on puzzle 135? I see 2.0/4.0 MB and no save data is added when I use puzzle 135. I saw the same behavior when I tried the original from JLP.
jr. member
Activity: 42
Merit: 0
What grid size are you using with the "original JLP Kangaroo version 1.7" in order to see it at 700 Mk/s on a GTX 1660S? Are you sure about that speed being real?

Because with no code changes, using the 1.7 release tag, I can't go beyond 300 Mk/s (or rather ~250 in reality, since the stats display is somewhat broken) on a card that is both newer and superior to that model. And looking at the nvcc compile stats, I doubt it could even reach triple that speed on an inferior card with 25% fewer CUDA cores.
You probably configured the grid incorrectly. Nsight shows that 4 blocks run per SM simultaneously, which is why I use an 88×128 grid (22 SMs × 4 = 88). And yes, the speed is roughly correct, since it matches the amount of DP accumulated over a given period of time.
Or you used a small DP value, in which case the speed will of course be much lower. For example, with DP 13 the speed is only 950 Mk/s (on the patched version) versus 1100 Mk/s with DP 20.
P.S. But you are right, there is a glitch in the speed calculation: JLP forgot to reset the average values before calculating the speed (thread.cpp).
After making that change the speed became 950 Mk/s
Code:
GPU: GPU #0 NVIDIA GeForce GTX 1660 SUPER (22x64 cores) Grid(88x128) (141.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^20.46 kangaroos [7.6s]
[952.59 MK/s][GPU 952.59 MK/s][Count 2^35.49][Dead 0][50s (Avg 01:06:23)][3.4/10.6MB]

@kTimesG, you have written more than once that your program is several times faster than JLP's and the other clones. Can a 1660 Super really run at 2 Gk/s?

I would like to test this version with a 3060, can you share it?
The release is available; you can try it.


This is a pain in the *ss to install on Windows 11 with a 3060: Visual Studio 2022 + CUDA 12

 make gpu=1 ccap=86 all

I've been trying for several hours without success. I managed to compile it for the CPU.

sr. member
Activity: 652
Merit: 316
What grid size are you using with the "original JLP Kangaroo version 1.7" in order to see it at 700 Mk/s on a GTX 1660S? Are you sure about that speed being real?

Because with no code changes, using the 1.7 release tag, I can't go beyond 300 Mk/s (or rather ~250 in reality, since the stats display is somewhat broken) on a card that is both newer and superior to that model. And looking at the nvcc compile stats, I doubt it could even reach triple that speed on an inferior card with 25% fewer CUDA cores.
You probably configured the grid incorrectly. Nsight shows that 4 blocks run per SM simultaneously, which is why I use an 88×128 grid (22 SMs × 4 = 88). And yes, the speed is roughly correct, since it matches the amount of DP accumulated over a given period of time.
Or you used a small DP value, in which case the speed will of course be much lower. For example, with DP 13 the speed is only 950 Mk/s (on the patched version) versus 1100 Mk/s with DP 20.
P.S. But you are right, there is a glitch in the speed calculation: JLP forgot to reset the average values before calculating the speed (thread.cpp).
After making that change the speed became 950 Mk/s
Code:
GPU: GPU #0 NVIDIA GeForce GTX 1660 SUPER (22x64 cores) Grid(88x128) (141.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^20.46 kangaroos [7.6s]
[952.59 MK/s][GPU 952.59 MK/s][Count 2^35.49][Dead 0][50s (Avg 01:06:23)][3.4/10.6MB]
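For readers puzzling over the DP numbers in these logs: the DP value is how many leading zero bits a point's x-coordinate must have before it is stored as a distinguished point, and the mask printed next to "DP size: 13" (0xFFF8000000000000) selects exactly those top bits of a 64-bit word. A small sketch of that mask arithmetic; this is illustrative, not JLP's actual code:

```python
# Distinguished-point (DP) mask: with DP bits = d, a point is "distinguished"
# when the top d bits of the high 64 bits of its x-coordinate are all zero.
def dp_mask(dp_bits):
    return ((1 << dp_bits) - 1) << (64 - dp_bits)

def is_distinguished(x_high64, dp_bits):
    return x_high64 & dp_mask(dp_bits) == 0

assert dp_mask(13) == 0xFFF8000000000000   # matches the "DP size: 13" log line
assert is_distinguished(0x0000_1234_5678_9ABC, 13)
assert not is_distinguished(0x8000_0000_0000_0000, 13)
# On average one point in 2^13 qualifies, so raising DP from 13 to 20 shrinks
# the hash table by roughly 2^7 at the cost of longer paths to each stored DP.
```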

@kTimesG, you have written more than once that your program is several times faster than JLP's and the other clones. Can a 1660 Super really run at 2 Gk/s?

I would like to test this version with a 3060, can you share it?
The release is available; you can try it.
newbie
Activity: 22
Merit: 1

The original JLP Kangaroo version 1.7 gives about 700 Mk/s on a GTX 1660 Super; a few changes in the code bring it to 1.1 Gk/s, and around 2.2 Gk/s on a 2080 Ti.

What grid size are you using with the "original JLP Kangaroo version 1.7" in order to see it at 700 Mk/s on a GTX 1660S? Are you sure about that speed being real?

Because with no code changes, using the 1.7 release tag, I can't go beyond 300 Mk/s (or rather ~250 in reality, since the stats display is somewhat broken) on a card that is both newer and superior to that model. And looking at the nvcc compile stats, I doubt it could even reach triple that speed on an inferior card with 25% fewer CUDA cores.

I got 1308.29 Mk/s with 2× GTX 1660S, about 700 each.
member
Activity: 165
Merit: 26

The original JLP Kangaroo version 1.7 gives about 700 Mk/s on a GTX 1660 Super; a few changes in the code bring it to 1.1 Gk/s, and around 2.2 Gk/s on a 2080 Ti.

What grid size are you using with the "original JLP Kangaroo version 1.7" in order to see it at 700 Mk/s on a GTX 1660S? Are you sure about that speed being real?

Because with no code changes, using the 1.7 release tag, I can't go beyond 300 Mk/s (or rather ~250 in reality, since the stats display is somewhat broken) on a card that is both newer and superior to that model. And looking at the nvcc compile stats, I doubt it could even reach triple that speed on an inferior card with 25% fewer CUDA cores.
newbie
Activity: 22
Merit: 1
I wonder what exactly you mean by 1.1 Gk/s. Is only one type of kangaroo included in that? Using my 1080 Ti, I get about 580 Mk/s with my version.
The original JLP Kangaroo version 1.7 gives about 700 Mk/s on a GTX 1660 Super; a few changes in the code bring it to 1.1 Gk/s, and around 2.2 Gk/s on a 2080 Ti.
The 1080 Ti is an old card; I don't know what speed you can get.
P.S. It doesn't matter whether it's one type of kangaroo or both; the speed is the same.
Code:
Kangaroo v1.7Gfix (Only Wild)
Gx=79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy=483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
G Multipler: 0x1
JMP Multipler: 0x1
Loading: testwork
Start:0
Stop :7FFFFFFFFFFFFFFFF
Keys :1
KeyX :D2779258710A6FCD4E978335698E5C1B20795F8C3AAE524714E0E40EBACDB213
KeyY :D973566FFD3D6F79192827E1F93CCBF7E7F2EAF48F762C72C37578EA8154D978
LoadWork: [HashTable 1665.5/2088.3MB] [07s]
Number of CPU thread: 0
NB_RUN: 128
GPU_GRP_SIZE: 128
NB_JUMP: 32
Range width: 2^67
JMP bits DEC: 34
Jump Avg distance min: 2^32.95
Jump Avg distance max: 2^33.05
Jump multipled by: 0x1
Jump Avg distance: 2^32.97 [96]
Number of kangaroos: 2^20.46
Suggested DP: 13
Expected operations: 2^35.11
Expected RAM: 184.5MB
DP size: 13 [0xFFF8000000000000]
GPU: GPU #0 NVIDIA GeForce GTX 1660 SUPER (22x64 cores) Grid(88x128) (141.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^20.46 kangaroos [9.4s]
[1132.46 MK/s][GPU 1132.46 MK/s][Count 2^38.71][Dead 0][04s (Avg 32s)][1677.2/2103.0MB]
Key# 0 [1S]Pub:  0x02D2779258710A6FCD4E978335698E5C1B20795F8C3AAE524714E0E40EBACDB213
       Priv: 0x7989031FDA5BA3BF5

I would like to test this version with a 3060, can you share it?
sr. member
Activity: 652
Merit: 316
I wonder what exactly you mean by 1.1 Gk/s. Is only one type of kangaroo included in that? Using my 1080 Ti, I get about 580 Mk/s with my version.
The original JLP Kangaroo version 1.7 gives about 700 Mk/s on a GTX 1660 Super; a few changes in the code bring it to 1.1 Gk/s, and around 2.2 Gk/s on a 2080 Ti.
The 1080 Ti is an old card; I don't know what speed you can get.
P.S. It doesn't matter whether it's one type of kangaroo or both; the speed is the same.
Code:
Kangaroo v1.7Gfix (Only Wild)
Gx=79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy=483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
G Multipler: 0x1
JMP Multipler: 0x1
Loading: testwork
Start:0
Stop :7FFFFFFFFFFFFFFFF
Keys :1
KeyX :D2779258710A6FCD4E978335698E5C1B20795F8C3AAE524714E0E40EBACDB213
KeyY :D973566FFD3D6F79192827E1F93CCBF7E7F2EAF48F762C72C37578EA8154D978
LoadWork: [HashTable 1665.5/2088.3MB] [07s]
Number of CPU thread: 0
NB_RUN: 128
GPU_GRP_SIZE: 128
NB_JUMP: 32
Range width: 2^67
JMP bits DEC: 34
Jump Avg distance min: 2^32.95
Jump Avg distance max: 2^33.05
Jump multipled by: 0x1
Jump Avg distance: 2^32.97 [96]
Number of kangaroos: 2^20.46
Suggested DP: 13
Expected operations: 2^35.11
Expected RAM: 184.5MB
DP size: 13 [0xFFF8000000000000]
GPU: GPU #0 NVIDIA GeForce GTX 1660 SUPER (22x64 cores) Grid(88x128) (141.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^20.46 kangaroos [9.4s]
[1132.46 MK/s][GPU 1132.46 MK/s][Count 2^38.71][Dead 0][04s (Avg 32s)][1677.2/2103.0MB]
Key# 0 [1S]Pub:  0x02D2779258710A6FCD4E978335698E5C1B20795F8C3AAE524714E0E40EBACDB213
       Priv: 0x7989031FDA5BA3BF5
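As a sanity check on this log, the "Expected operations: 2^35.11" line is consistent with the usual kangaroo estimate of roughly 2·sqrt(N) group operations plus a DP overhead of about m·2^dp, where m is the number of kangaroos. A rough back-of-the-envelope calculation using the numbers printed above (an approximate textbook formula, not necessarily JLP's exact one):

```python
import math

# Rough kangaroo cost: ~2*sqrt(N) operations plus DP overhead of about
# (number of kangaroos) * 2^dp. Inputs are taken from the log above.
range_bits = 67          # "Range width: 2^67"
kangaroos  = 2 ** 20.46  # "Number of kangaroos: 2^20.46"
dp         = 13          # "Suggested DP: 13"

ops = 2 * math.sqrt(2 ** range_bits) + kangaroos * 2 ** dp
print(f"expected ops ~ 2^{math.log2(ops):.2f}")  # ~2^35.07, near the logged 2^35.11
```

The DP-overhead term also shows why a huge kangaroo herd with a large DP value can dominate the cost: 2^20.46 kangaroos at DP 13 already contribute about 2^33.5 wasted steps on top of the 2^34.5 core work.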
member
Activity: 7
Merit: 0

P.S. If you like it, I can add a version with a bit more speed (GTX 1660 Super: 1.1 Gk/s)

I wonder what exactly you mean by 1.1 Gk/s. Is only one type of kangaroo included in that? Using my 1080 Ti, I get about 580 Mk/s with my version.
sr. member
Activity: 652
Merit: 316
member
Activity: 7
Merit: 0
I didn't touch the workfile or change it using a script. I did try using a CPU; I'll now try using a GPU.
Edit: it doesn't work with a GPU either. What did you change from JLP's version?
I downloaded the KangarooOT and KangarooOW versions from GitHub to make sure we were using the same tools.
For the experiment we will look for the public key 02d2779258710a6fcd4e978335698e5c1b20795f8c3aae524714e0e40ebacdb213,
whose private key is 0x7989031fda5ba3bf5, in the 67-bit range from 0 to 7ffffffffffffffff.
Step 0: create a 67bitWrong.txt file with the following content:
Code:
0
7ffffffffffffffff
03633CBE3EC02B9401C5EFFA144C5B4D22F87940259634858FC7E59B1C09937852
Step 1: For the accumulation of tame DPs, create a step1.bat file with the following content and launch it:
Code:
kangarooOT -t 0 -gpu -gpuId 0 -g 88,128 -m 2.5 -d 13  -wi 120 -w testwork 67bitWrong.txt
for /l %%i in (1,1,6) do (
    echo Iteration %%i    
    kangarooOT -t 0 -gpu -gpuId 0 -g 88,128 -m 2.5 -d 13  -wi 120 -w testwork -i testwork
)
pause
Step 2: To change the public key in the work file, create a step2.bat file and launch it:
Code:
py changewf.py -f testwork -pub 02d2779258710a6fcd4e978335698e5c1b20795f8c3aae524714e0e40ebacdb213 -rb 0 -re 7ffffffffffffffff
pause
Step 3: To find the public key, create a step3.bat file and launch it:
Code:
kangarooOW -t 0 -gpu -gpuId 0 -g 88,128 -i testwork
pause

Thanks for the details, I already got it to work using my own version.
If you can share changewf.py though, that'd be great.
sr. member
Activity: 652
Merit: 316
I didn't touch the workfile or change it using a script. I did try using a CPU; I'll now try using a GPU.
Edit: it doesn't work with a GPU either. What did you change from JLP's version?
I downloaded the KangarooOT and KangarooOW versions from GitHub to make sure we were using the same tools.
For the experiment we will look for the public key 02d2779258710a6fcd4e978335698e5c1b20795f8c3aae524714e0e40ebacdb213,
whose private key is 0x7989031fda5ba3bf5, in the 67-bit range from 0 to 7ffffffffffffffff.
Step 0: create a 67bitWrong.txt file with the following content:
Code:
0
7ffffffffffffffff
03633CBE3EC02B9401C5EFFA144C5B4D22F87940259634858FC7E59B1C09937852
Step 1: For the accumulation of tame DPs, create a step1.bat file with the following content and launch it:
Code:
kangarooOT -t 0 -gpu -gpuId 0 -g 88,128 -m 2.5 -d 13  -wi 120 -w testwork 67bitWrong.txt
for /l %%i in (1,1,6) do (
    echo Iteration %%i    
    kangarooOT -t 0 -gpu -gpuId 0 -g 88,128 -m 2.5 -d 13  -wi 120 -w testwork -i testwork
)
pause
Step 2: To change the public key in the work file, create a step2.bat file and launch it:
Code:
py changewf.py -f testwork -pub 02d2779258710a6fcd4e978335698e5c1b20795f8c3aae524714e0e40ebacdb213 -rb 0 -re 7ffffffffffffffff
pause
Step 3: To find the public key, create a step3.bat file and launch it:
Code:
kangarooOW -t 0 -gpu -gpuId 0 -g 88,128 -i testwork
pause
P.S. If you like it, I can add a version with a bit more speed (GTX 1660 Super: 1.1 Gk/s)
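The reason this tame-then-wild workflow works at all is that tame kangaroos depend only on the search range, never on the target key, so a tame DP table built once can be reused against any key in that range. A toy Python illustration of the idea in a small multiplicative group; everything here is a simplified model with made-up parameters, not the JLP workfile format or the changewf.py script:

```python
# Toy kangaroo-with-distinguished-points in the group Z_p^*, showing why a
# tame DP table built once can be reused for ANY key in the range.
P_GRP = 10007        # small prime; 5 generates Z_P^*
G = 5
RANGE_BITS = 13      # keys live in [0, 2^13)
DP_MOD = 8           # toy DP rule: element divisible by 8

def step(x):
    """Jump size depends only on the current element, never on the key."""
    return 1 << (x % 8)

def build_tame_table():
    """Tame phase: sweep the range with known exponents and record DPs.
    The target key appears nowhere here - that is the whole point."""
    table, x = {}, 1
    for e in range(2 ** RANGE_BITS):
        if x % DP_MOD == 0:
            table[x] = e          # element -> known discrete log
        x = x * G % P_GRP
    return table

def solve(y, table, max_steps=100_000):
    """Wild phase: walk from target y, tracking travelled distance d.
    A DP hit in the tame table yields the key as table[x] - d."""
    x, d = y, 0
    for _ in range(max_steps):
        if x % DP_MOD == 0 and x in table:
            return (table[x] - d) % (P_GRP - 1)
        j = step(x)
        x = x * pow(G, j, P_GRP) % P_GRP
        d += j
    return None

table = build_tame_table()                   # built once...
for secret in (1234, 7777):                  # ...reused for two different keys
    k = solve(pow(G, secret, P_GRP), table)
    assert k is not None and pow(G, k, P_GRP) == pow(G, secret, P_GRP)
```

In the real tool the tame side is also a random walk rather than a sweep, and the table lives in the workfile; but the key-independence of the tame data is the same, which is why swapping the public key in an existing workfile is enough to attack a new target.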
member
Activity: 7
Merit: 0
Edit: it doesn't work with a GPU either. What did you change from JLP's version?

Judging by the things he has shared here on the forum and on GitHub, it isn't a single line or a few of them; I think he rewrote a lot of the code and added new code to it.

If he doesn't share it, I doubt you can replicate it.

I think the issue is that we need to overwrite the key in the workfile; I'm working on a script to do it now...
hero member
Activity: 862
Merit: 662
Edit: it doesn't work with a GPU either. What did you change from JLP's version?

Judging by the things he has shared here on the forum and on GitHub, it isn't a single line or a few of them; I think he rewrote a lot of the code and added new code to it.

If he doesn't share it, I doubt you can replicate it.
member
Activity: 7
Merit: 0
When I try this trick, it just doesn't find the collision. Even when it should take just a second, it never finds it. Can you confirm these are the steps you did?
Choose a range (say, 48-bit)
Choose any key that is not in that range
Run Kangaroo on it for some time (with -m 3, to match your example exactly) and make it save the work into a workfile.
Then load the workfile into a new session with an actual key inside the range.
This is what I did; it didn't work.
Before searching for the public key in the range, did you change it in the work file using the script?
P.S. 48-bit...? Maybe you are using the CPU and not the GPU; then it won't work. I reworked the JLP program for GPU only; I don't know who still uses the CPU.

I didn't touch the workfile or change it using a script. I did try using a CPU; I'll now try using a GPU.

Edit: it doesn't work with a GPU either. What did you change from JLP's version?
newbie
Activity: 68
Merit: 0
Quote

You must have had some really bad teachers if what you got out of your math classes was that everything is possible.

The very definition of cryptography is to hide information in such a way that it cannot be deciphered using the same mathematics it is built upon. Why? Because it is mathematically proven that this is not possible. So blindly making assumptions ("I believe that, I think that", etc.) without considering the core mathematical properties of what you're referring to is simply called ignorance, and it can produce nothing useful, except a vague sense of pathologically seeing patterns in what is actually random noise, because that is what you want to see, not because there are any.

TL;DR: you can never convince anyone who knows what a hashing algorithm is made for that it is broken just because you think it is broken. If the hashes showed even the slightest regularity, that would be a pattern, and they can't contain a pattern because of how they are defined (noise, no pattern, no predictability, etc.); hence no argument you can bring will ever make sense, which makes the debate itself useless from the start. Instead, come up with an actual proof, not beliefs. And that would be just step 1 of 10 on the way from "RIPEMD is uniform, so what I'm doing makes sense" to "hey, let's see if this can break ECC".

Hence, the comparison with an axe used to hunt wild geese is pretty accurate.

Mathematics is in everything. It always will be.

67. When the wallet is found, I will show the detailed proof to a few people.

Actually, I am someone who loves love, respect and sharing, like everyone else.
But thieves and foxes are everywhere. I don't trust anyone.

But I will share this topic with friends like Zielar and Alberto.
When they turn it into software, they may or may not want to share it. That is their business.
sr. member
Activity: 652
Merit: 316
When I try this trick, it just doesn't find the collision. Even when it should take just a second, it never finds it. Can you confirm these are the steps you did?
Choose a range (say, 48-bit)
Choose any key that is not in that range
Run Kangaroo on it for some time (with -m 3, to match your example exactly) and make it save the work into a workfile.
Then load the workfile into a new session with an actual key inside the range.
This is what I did; it didn't work.
Before searching for the public key in the range, did you change it in the work file using the script?
P.S. 48-bit...? Maybe you are using the CPU and not the GPU; then it won't work. I reworked the JLP program for GPU only; I don't know who still uses the CPU.
member
Activity: 7
Merit: 0
Bro, can you provide your script? Wink
This is JLP's Kangaroo with precalculated tame kangaroos.
You can make them yourself by running Kangaroo in the required range with the -m 3 parameter in a loop, using a false public key. In this way you will accumulate a lot of DPs.
After that, you will only need wild kangaroos.

When I try this trick, it just doesn't find the collision. Even when it should take just a second, it never finds it. Can you confirm these are the steps you did?

Choose a range (say, 48-bit)
Choose any key that is not in that range

Run Kangaroo on it for some time (with -m 3, to match your example exactly) and make it save the work into a workfile.

Then load the workfile into a new session with an actual key inside the range.

This is what I did; it didn't work.
member
Activity: 165
Merit: 26
Now, let him study math.
The math operation works; I have proven it (to myself).

In this competition you are either a software developer or someone interested in math. I am not a software developer, but didn't they prove to you at school that everything is possible mathematically? Or do they teach you how to hunt geese with an axe where you live?

You must have had some really bad teachers if what you got out of your math classes was that everything is possible.

The very definition of cryptography is to hide information in such a way that it cannot be deciphered using the same mathematics it is built upon. Why? Because it is mathematically proven that this is not possible. So blindly making assumptions ("I believe that, I think that", etc.) without considering the core mathematical properties of what you're referring to is simply called ignorance, and it can produce nothing useful, except a vague sense of pathologically seeing patterns in what is actually random noise, because that is what you want to see, not because there are any.

TL;DR: you can never convince anyone who knows what a hashing algorithm is made for that it is broken just because you think it is broken. If the hashes showed even the slightest regularity, that would be a pattern, and they can't contain a pattern because of how they are defined (noise, no pattern, no predictability, etc.); hence no argument you can bring will ever make sense, which makes the debate itself useless from the start. Instead, come up with an actual proof, not beliefs. And that would be just step 1 of 10 on the way from "RIPEMD is uniform, so what I'm doing makes sense" to "hey, let's see if this can break ECC".

Hence, the comparison with an axe used to hunt wild geese is pretty accurate.