
Topic: Bitcoin puzzle transaction ~32 BTC prize to who solves it - page 7. (Read 230740 times)

jr. member
Activity: 42
Merit: 0
Hi guys!
Is there any central place where ALL SEARCHED ranges are?
I am interested in ALL SEARCHED ranges from the whole puzzle, NOT only for the unsolved one.

If there are not in any central place, are there any log files from people who have already searched some ranges?

Thanks

Well, I’m sure the NSA has a very detailed log of every range that’s been searched—probably right next to their collection of cat videos and conspiracy theories. 😜
newbie
Activity: 7
Merit: 0
Hi guys!
Is there any central place where ALL SEARCHED ranges are?
I am interested in ALL SEARCHED ranges from the whole puzzle, NOT only for the unsolved one.

If there are not in any central place, are there any log files from people who have already searched some ranges?

Thanks
member
Activity: 165
Merit: 26
You are my hero! We need more magic circles! Grin
Are you kidding, hero?! If you really mean it, come and help me! Not by donating, but by telling me if the magic circle helped you?!
I have enhanced the magic circles by combining shapes. The things I see from the discovered keys are amazing, but I don't know how to summarize the unknown keys to find the full key.

Let me summarize your amazing discoveries, and also help you find the full keys:

In every single puzzle, the position of the leading "1" bit is hard-coded into the key. That alone creates a pattern.

This explains pretty much all of your discoveries, but unfortunately this is something we already know.

Maybe subtract the range start from your decimal keys so that you actually have unbiased data to work with. Let us know how well your circles and shapes and columns of digits work for you after that.
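The "subtract the range start" advice can be sketched in a few lines of Python. This is only an illustration of the normalization step, using the well-known solved keys for puzzles 1, 5 and 10; any pattern hunting should run on the resulting offsets, not the raw keys.

```python
# Puzzle #n's key lies in [2^(n-1), 2^n - 1], so the leading 1 bit is
# guaranteed. Subtracting the range start leaves the genuinely random part.
solved = {1: 0x1, 5: 0x15, 10: 0x202}   # known solved keys (examples)

offsets = {}
for n, key in solved.items():
    range_start = 1 << (n - 1)          # the hard-coded leading bit
    offsets[n] = key - range_start      # unbiased residue in [0, 2^(n-1))
    assert 0 <= offsets[n] < range_start

print(offsets)   # run any "magic circle" analysis on these values instead
```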
newbie
Activity: 18
Merit: 0
You are my hero! We need more magic circles! Grin
Are you kidding, hero?! If you really mean it, come and help me! Not by donating, but by telling me if the magic circle helped you?!
I have enhanced the magic circles by combining shapes. The things I see from the discovered keys are amazing, but I don't know how to summarize the unknown keys to find the full key.
I would appreciate it if I could send them to you, and if they really give you a clue, we split the rewards. The split depends on how much my magic rings helped: for example, if they helped you 20%, then my share is 20%.
newbie
Activity: 11
Merit: 0
I just got this information, but don't ask me where I got it from.

The solvers of #120 and #125 were the same people, and they were miners: they have long run a mining farm on which they mined Dogecoin, Ethereum, and other small altcoins.

They had access to huge GPU power, and most probably used Jean's Kangaroo for bruteforcing.

Most importantly: they are RIGHT NOW running and hunting for #130 again! I hope we can stop them and claim #130's reward before they do, before the end of the year. Based on calculations of the miners' power (3Emiwzxme7Mrj4d89uqohXNncnRM15YESs), the estimated time for #130 to be found by the miners should be around January or February 2024, or if they're lucky, already in December 2023!

So, to summarize: you guys have until February 2024 to claim #130's reward before the miners solve their third puzzle and take #120, #125, and #130.
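For scale, the claimed timeline can be sanity-checked with back-of-envelope arithmetic. The speeds below are pure assumptions, not known figures for any real farm: Pollard's kangaroo over the 2^129-wide #130 interval needs roughly 2*sqrt(2^129) group operations.

```python
import math

# Textbook kangaroo cost for the #130 interval (width 2^129): ~2*sqrt(2^129)
ops = 2 * math.sqrt(2 ** 129)               # about 2^65.5 operations

years = {}
for gks in (10, 1_000, 100_000):            # hypothetical aggregate Gk/s rates
    seconds = ops / (gks * 1e9)
    years[gks] = seconds / (365 * 86_400)
    print(f"{gks} Gk/s -> {years[gks]:.1f} years")
```

Only at aggregate speeds far beyond a handful of consumer GPUs does a months-long timeline become plausible, which is consistent with the "huge GPU power" part of the rumor but says nothing about its truth.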

Comment: this guy is correct.

As @Etar said, we need to unite to stop the miners, or they will take all the prizes for themselves.

And what kind of uniting would that be??
newbie
Activity: 18
Merit: 0
Does anyone have anything to say? It's very simple math! I just don't know what to do for the next keys. My brain is full of ideas and methods, but I can't implement them.
https://postimg.cc/bdmjP5wx
member
Activity: 165
Merit: 26
You probably configured the grid incorrectly. Nsight shows that 4 blocks are working simultaneously; that's why I have a grid of 88*128. And yes, the speed is +/- correct, since it matches the amount of DP accumulated over a certain period of time.

Nah. After a closer look at the 1660S specs, I see it has a higher SM clock frequency and a larger memory bus width than the card I was comparing against, so that explains some things.

@kTimesG  you have written more than once that your program is several times faster than JLP and other clones. Can you really run 1660 Super at 2Gk/s?

I have no idea; I can't test on that card. I can currently squeeze 6.2 Gk/s out of an RTX 4090, but some users here claim they can get 8 Gk/s or more. I think RetiredCoder might have an even faster version. Technically it's plausible; it really depends on how well the kernel is implemented. And as I said some time ago: if someone manages to fully parallelize the inversion, we can get a doubling in speed Smiley That b***h is really time-consuming.
newbie
Activity: 3
Merit: 0
...........

Hello. Is the new LPKangaroo_OW_OT version not working on puzzle 135? I see 2.0/4.0 MB and no saving happens when I use puzzle 135. I saw the same behavior when I tried the original from JLP.
jr. member
Activity: 42
Merit: 0
What grid size are you using with the "original JLP Kangaroo version 1.7" in order to see it at 700 Mk/s on a GTX 1660S? Are you sure about that speed being real?

Cause with no code changes and using the 1.7 release tag, I can't go beyond 300 Mk/s (or rather, ~250 in reality, since the stats display is kinda broken) on a card that is both newer and superior to that model. And looking at the nvcc compile stats, I have some doubts that it would even be capable of triple that speed on an inferior card with 25% fewer CUDA cores.
You probably configured the grid incorrectly. Nsight shows that 4 blocks are working simultaneously; that's why I have a grid of 88*128. And yes, the speed is +/- correct, since it matches the amount of DP accumulated over a certain period of time.
Or you used a small DP value; then of course the speed will be much lower. For example, with DP 13 the speed is only 950 Mk/s (on the patched version) versus 1100 Mk/s with DP 20.
P.S. But you are right, there is a glitch in the speed calculation: JLP forgot to reset the average values before calculating the speed (thread.cpp).
After making the changes, the speed became 950 Mk/s:
Code:
GPU: GPU #0 NVIDIA GeForce GTX 1660 SUPER (22x64 cores) Grid(88x128) (141.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^20.46 kangaroos [7.6s]
[952.59 MK/s][GPU 952.59 MK/s][Count 2^35.49][Dead 0][50s (Avg 01:06:23)][3.4/10.6MB]
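As a side note on the DP 13 vs DP 20 trade-off above, here is a hedged sketch of what a DP mask like the `0xFFF8000000000000` seen in these logs means, assuming JLP's convention of testing the high 64 bits of a walk point's x-coordinate:

```python
# A point is "distinguished" when the top `dp` bits of its x-coordinate
# word are all zero; only those points are written to the host hash table.
dp = 13
mask = ~((1 << (64 - dp)) - 1) & 0xFFFFFFFFFFFFFFFF
assert hex(mask) == "0xfff8000000000000"   # matches the DP mask in the logs

def is_dp(x_high64):
    return (x_high64 & mask) == 0

# On average 1 in 2^dp points qualifies, so a larger DP value means fewer
# host transfers and table writes (higher raw kernel speed), at the cost of
# more "wasted" steps per kangaroo before a collision surfaces.
```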

@kTimesG  you have written more than once that your program is several times faster than JLP and other clones. Can you really run 1660 Super at 2Gk/s?

I would like to test this version with a 3060, can you share it?
The release is available; you can try it.


This is a pain in the *ss to install on Windows 11 with a 3060 (Visual Studio 2022 + CUDA 12):

 make gpu=1 ccap=86 all

I've been trying for several hours without success. I managed to compile it for the CPU.

sr. member
Activity: 642
Merit: 316
What grid size are you using with the "original JLP Kangaroo version 1.7" in order to see it at 700 Mk/s on a GTX 1660S? Are you sure about that speed being real?

Cause with no code changes and using the 1.7 release tag, I can't go beyond 300 Mk/s (or rather, ~250 in reality, since the stats display is kinda broken) on a card that is both newer and superior to that model. And looking at the nvcc compile stats, I have some doubts that it would even be capable of triple that speed on an inferior card with 25% fewer CUDA cores.
You probably configured the grid incorrectly. Nsight shows that 4 blocks are working simultaneously; that's why I have a grid of 88*128. And yes, the speed is +/- correct, since it matches the amount of DP accumulated over a certain period of time.
Or you used a small DP value; then of course the speed will be much lower. For example, with DP 13 the speed is only 950 Mk/s (on the patched version) versus 1100 Mk/s with DP 20.
P.S. But you are right, there is a glitch in the speed calculation: JLP forgot to reset the average values before calculating the speed (thread.cpp).
After making the changes, the speed became 950 Mk/s:
Code:
GPU: GPU #0 NVIDIA GeForce GTX 1660 SUPER (22x64 cores) Grid(88x128) (141.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^20.46 kangaroos [7.6s]
[952.59 MK/s][GPU 952.59 MK/s][Count 2^35.49][Dead 0][50s (Avg 01:06:23)][3.4/10.6MB]

@kTimesG  you have written more than once that your program is several times faster than JLP and other clones. Can you really run 1660 Super at 2Gk/s?

I would like to test this version with a 3060, can you share it?
The release is available; you can try it.
newbie
Activity: 11
Merit: 0

The original JLP Kangaroo version 1.7 gives a speed of about 700 Mk/s for a GTX 1660 Super; a few changes in the code and the speed will be 1.1 Gk/s, and around 2.2 Gk/s for a 2080 Ti.

What grid size are you using with the "original JLP Kangaroo version 1.7" in order to see it at 700 Mk/s on a GTX 1660S? Are you sure about that speed being real?

Cause with no code changes and using the 1.7 release tag, I can't go beyond 300 Mk/s (or rather, ~250 in reality, since the stats display is kinda broken) on a card that is both newer and superior to that model. And looking at the nvcc compile stats, I have some doubts that it would even be capable of triple that speed on an inferior card with 25% fewer CUDA cores.

I got 1308.29 Mk/s with 2x GTX 1660S, about 700 each.
member
Activity: 165
Merit: 26

The original JLP Kangaroo version 1.7 gives a speed of about 700 Mk/s for a GTX 1660 Super; a few changes in the code and the speed will be 1.1 Gk/s, and around 2.2 Gk/s for a 2080 Ti.

What grid size are you using with the "original JLP Kangaroo version 1.7" in order to see it at 700 Mk/s on a GTX 1660S? Are you sure about that speed being real?

Cause with no code changes and using the 1.7 release tag, I can't go beyond 300 Mk/s (or rather, ~250 in reality, since the stats display is kinda broken) on a card that is both newer and superior to that model. And looking at the nvcc compile stats, I have some doubts that it would even be capable of triple that speed on an inferior card with 25% fewer CUDA cores.
newbie
Activity: 11
Merit: 0
I wonder what exactly you mean by 1.1 Gk/s. Is only one type of kangaroo included in that? Using my 1080 Ti, I get about 580 Mk/s with my version.
The original JLP Kangaroo version 1.7 gives a speed of about 700 Mk/s for a GTX 1660 Super; a few changes in the code and the speed will be 1.1 Gk/s, and around 2.2 Gk/s for a 2080 Ti.
The 1080 Ti is an old card, I don't know what speed you can get.
P.S. It doesn't matter whether it's one type of kangaroo or both; the speed is the same.
Code:
Kangaroo v1.7Gfix (Only Wild)
Gx=79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy=483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
G Multipler: 0x1
JMP Multipler: 0x1
Loading: testwork
Start:0
Stop :7FFFFFFFFFFFFFFFF
Keys :1
KeyX :D2779258710A6FCD4E978335698E5C1B20795F8C3AAE524714E0E40EBACDB213
KeyY :D973566FFD3D6F79192827E1F93CCBF7E7F2EAF48F762C72C37578EA8154D978
LoadWork: [HashTable 1665.5/2088.3MB] [07s]
Number of CPU thread: 0
NB_RUN: 128
GPU_GRP_SIZE: 128
NB_JUMP: 32
Range width: 2^67
JMP bits DEC: 34
Jump Avg distance min: 2^32.95
Jump Avg distance max: 2^33.05
Jump multipled by: 0x1
Jump Avg distance: 2^32.97 [96]
Number of kangaroos: 2^20.46
Suggested DP: 13
Expected operations: 2^35.11
Expected RAM: 184.5MB
DP size: 13 [0xFFF8000000000000]
GPU: GPU #0 NVIDIA GeForce GTX 1660 SUPER (22x64 cores) Grid(88x128) (141.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^20.46 kangaroos [9.4s]
[1132.46 MK/s][GPU 1132.46 MK/s][Count 2^38.71][Dead 0][04s (Avg 32s)][1677.2/2103.0MB]
Key# 0 [1S]Pub:  0x02D2779258710A6FCD4E978335698E5C1B20795F8C3AAE524714E0E40EBACDB213
       Priv: 0x7989031FDA5BA3BF5

I would like to test this version with a 3060, can you share it?
sr. member
Activity: 642
Merit: 316
I wonder what exactly you mean by 1.1 Gk/s. Is only one type of kangaroo included in that? Using my 1080 Ti, I get about 580 Mk/s with my version.
The original JLP Kangaroo version 1.7 gives a speed of about 700 Mk/s for a GTX 1660 Super; a few changes in the code and the speed will be 1.1 Gk/s, and around 2.2 Gk/s for a 2080 Ti.
The 1080 Ti is an old card, I don't know what speed you can get.
P.S. It doesn't matter whether it's one type of kangaroo or both; the speed is the same.
Code:
Kangaroo v1.7Gfix (Only Wild)
Gx=79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy=483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
G Multipler: 0x1
JMP Multipler: 0x1
Loading: testwork
Start:0
Stop :7FFFFFFFFFFFFFFFF
Keys :1
KeyX :D2779258710A6FCD4E978335698E5C1B20795F8C3AAE524714E0E40EBACDB213
KeyY :D973566FFD3D6F79192827E1F93CCBF7E7F2EAF48F762C72C37578EA8154D978
LoadWork: [HashTable 1665.5/2088.3MB] [07s]
Number of CPU thread: 0
NB_RUN: 128
GPU_GRP_SIZE: 128
NB_JUMP: 32
Range width: 2^67
JMP bits DEC: 34
Jump Avg distance min: 2^32.95
Jump Avg distance max: 2^33.05
Jump multipled by: 0x1
Jump Avg distance: 2^32.97 [96]
Number of kangaroos: 2^20.46
Suggested DP: 13
Expected operations: 2^35.11
Expected RAM: 184.5MB
DP size: 13 [0xFFF8000000000000]
GPU: GPU #0 NVIDIA GeForce GTX 1660 SUPER (22x64 cores) Grid(88x128) (141.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^20.46 kangaroos [9.4s]
[1132.46 MK/s][GPU 1132.46 MK/s][Count 2^38.71][Dead 0][04s (Avg 32s)][1677.2/2103.0MB]
Key# 0 [1S]Pub:  0x02D2779258710A6FCD4E978335698E5C1B20795F8C3AAE524714E0E40EBACDB213
       Priv: 0x7989031FDA5BA3BF5
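As a side note on those log lines: the "Expected operations: 2^35.11" figure can be roughly reproduced with the textbook kangaroo estimate (JLP's exact formula may differ slightly): ~2*sqrt(W) useful group operations, plus a DP overhead of about n_kangaroos * 2^dp extra steps before collisions surface.

```python
import math

W = 2 ** 67          # range width, from "Range width: 2^67"
n_kang = 2 ** 20.46  # from "Number of kangaroos: 2^20.46"
dp = 13              # from "Suggested DP: 13"

# useful work + per-kangaroo overhead of walking until the next DP
ops = 2 * math.sqrt(W) + n_kang * 2 ** dp
print(f"2^{math.log2(ops):.2f}")   # ~2^35.1, close to the reported 2^35.11
```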
newbie
Activity: 7
Merit: 0

P.S. If you like it, I can add a version with a bit more speed (GTX 1660 Super - 1.1 Gk/s)

I wonder what exactly you mean by 1.1Gk/s. Only one type of kangaroos included in that? Using my 1080 Ti, I get about 580Mk/s with my version.
sr. member
Activity: 642
Merit: 316
newbie
Activity: 7
Merit: 0
I didn't touch the workfile or change it using a script. I did try using a CPU; I'll try now using a GPU.
Edit: it doesn't work with a GPU either. What did you change from JLP's version?
I downloaded the KangarooOT and KangarooOW version from GitHub to make sure we were using the same tools.
For the experiment we will look for the public key 02d2779258710a6fcd4e978335698e5c1b20795f8c3aae524714e0e40ebacdb213
whose private key is 0x7989031fda5ba3bf5 in the 67-bit range from 0 to 7ffffffffffffffff
Step 0: create a 67bitWrong.txt file with the following content:
Code:
0
7ffffffffffffffff
03633CBE3EC02B9401C5EFFA144C5B4D22F87940259634858FC7E59B1C09937852
Step 1: For the accumulation of tame DPs, create a step1.bat file with the following content and launch it:
Code:
kangarooOT -t 0 -gpu -gpuId 0 -g 88,128 -m 2.5 -d 13  -wi 120 -w testwork 67bitWrong.txt
for /l %%i in (1,1,6) do (
    echo Iteration %%i    
    kangarooOT -t 0 -gpu -gpuId 0 -g 88,128 -m 2.5 -d 13  -wi 120 -w testwork -i testwork
)
pause
Step 2: To change the public key in the working file, make a step2.bat file and launch:
Code:
py changewf.py -f testwork -pub 02d2779258710a6fcd4e978335698e5c1b20795f8c3aae524714e0e40ebacdb213 -rb 0 -re 7ffffffffffffffff
pause
Step 3: To find the public key, make a step3.bat file and launch it:
Code:
kangarooOW -t 0 -gpu -gpuId 0 -g 88,128 -i testwork
pause

Thanks for the details, I already got it to work using my own version.
If you can share changewf.py though, that'd be great.
sr. member
Activity: 642
Merit: 316
I didn't touch the workfile or change it using a script. I did try using a CPU; I'll try now using a GPU.
Edit: it doesn't work with a GPU either. What did you change from JLP's version?
I downloaded the KangarooOT and KangarooOW version from GitHub to make sure we were using the same tools.
For the experiment we will look for the public key 02d2779258710a6fcd4e978335698e5c1b20795f8c3aae524714e0e40ebacdb213
whose private key is 0x7989031fda5ba3bf5 in the 67-bit range from 0 to 7ffffffffffffffff
Step 0: create a 67bitWrong.txt file with the following content:
Code:
0
7ffffffffffffffff
03633CBE3EC02B9401C5EFFA144C5B4D22F87940259634858FC7E59B1C09937852
Step 1: For the accumulation of tame DPs, create a step1.bat file with the following content and launch it:
Code:
kangarooOT -t 0 -gpu -gpuId 0 -g 88,128 -m 2.5 -d 13  -wi 120 -w testwork 67bitWrong.txt
for /l %%i in (1,1,6) do (
    echo Iteration %%i    
    kangarooOT -t 0 -gpu -gpuId 0 -g 88,128 -m 2.5 -d 13  -wi 120 -w testwork -i testwork
)
pause
Step 2: To change the public key in the working file, make a step2.bat file and launch:
Code:
py changewf.py -f testwork -pub 02d2779258710a6fcd4e978335698e5c1b20795f8c3aae524714e0e40ebacdb213 -rb 0 -re 7ffffffffffffffff
pause
Step 3: To find the public key, make a step3.bat file and launch it:
Code:
kangarooOW -t 0 -gpu -gpuId 0 -g 88,128 -i testwork
pause
P.S. If you like it, I can add a version with a bit more speed (GTX 1660 Super - 1.1 Gk/s)
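The reason the step 1 / step 2 trick works can be illustrated with a toy example (tiny numbers, multiplicative group mod p, not secp256k1): tame-kangaroo positions depend only on the generator and the range, never on the target public key, so tame DPs accumulated against a throwaway key can be reused after swapping in the real one. Wild positions, by contrast, are just the target key times a tame-style walk.

```python
p, g = 1019, 2               # toy group parameters (assumption, not secp256k1)
x = 173                      # pretend private key
h = pow(g, x, p)             # its "public key"

def tame_trail(start_exp, steps, target_pub=None):
    # NOTE: target_pub is deliberately unused -- that is the whole point:
    # the tame walk is a function of g and the range only.
    y, d, trail = pow(g, start_exp, p), 0, []
    for _ in range(steps):
        trail.append((y, d))
        s = 1 << (y % 8)                 # pseudo-random jump from position
        y = y * pow(g, s, p) % p
        d += s
    return trail

# Identical tame trail whether we "target" a wrong key or the right one:
assert tame_trail(64, 50, target_pub="wrong") == tame_trail(64, 50, target_pub=h)

# A wild kangaroo at distance d sits at h*g^d = g^(x+d), so a position match
# against a tame point reveals x from the two recorded distances:
d = 37
assert h * pow(g, d, p) % p == pow(g, x + d, p)
```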
newbie
Activity: 7
Merit: 0
Edit - doesn't work with a GPU either. what did you change from JLP's version?

From the things he has shared here on the forum and on GitHub, it isn't a single line or a few of them; I think he rewrote a lot of code and added new code to it.

If he doesn't share it, I doubt you can replicate it.

I think the issue is that we need to overwrite the key in the workfile; working on a script to do that now...
hero member
Activity: 862
Merit: 662
Edit - doesn't work with a GPU either. what did you change from JLP's version?

From the things he has shared here on the forum and on GitHub, it isn't a single line or a few of them; I think he rewrote a lot of code and added new code to it.

If he doesn't share it, I doubt you can replicate it.