
Topic: Bitcoin puzzle transaction ~32 BTC prize to who solves it - page 52. (Read 245102 times)

newbie
Activity: 38
Merit: 0
Is there any other known Kangaroo version that is usable for >128-bit ranges?

Note that it's Windows-only, and if you want to compile it yourself you need PureBasic, which is paid.



Also note that you need PureBasic v5.31 specifically; other versions, to my knowledge, do not work.
jr. member
Activity: 79
Merit: 1
member
Activity: 63
Merit: 14
Is there any other known Kangaroo version that is usable for >128-bit ranges?

This has been answered 10 thousand times in this same thread some pages back.

The only publicly available Kangaroo that works beyond 125 bits is Etar's.

It's based on JLP's but it can go up to 192 bits.

It has an edge in speed too, yielding roughly 1 GK/s more than JLP's (at least on my setup).

https://github.com/Etayson/Etarkangaroo

Note that it's Windows-only, and if you want to compile it yourself you need PureBasic, which is paid.

jr. member
Activity: 67
Merit: 1
Python scans about 50,000 addresses per second.
The GPU scans about 5,000,000,000 addresses per second.
The GPU scans roughly 100,000 times faster than Python,
so you can forget about Python; you have to write code for the GPU.
You don't have an eternity to solve it with Python.
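For scale, here is a quick back-of-the-envelope check of those rates over the puzzle-66 interval (the 50,000/s and 5 G/s figures are the rough numbers quoted above, not measured benchmarks):

```python
# Rough sanity check of the quoted scan rates over the puzzle-66
# interval [2^65, 2^66). Both rates are the approximate figures from
# the post, not benchmarks.
PY_RATE = 50_000                # keys/s, pure Python
GPU_RATE = 5_000_000_000        # keys/s, GPU brute forcer

range_size = 2**66 - 2**65      # number of keys in the interval
SECONDS_PER_YEAR = 365 * 24 * 3600

py_years = range_size / PY_RATE / SECONDS_PER_YEAR
gpu_years = range_size / GPU_RATE / SECONDS_PER_YEAR
print(f"Python: ~{py_years:.2e} years to sweep the range")
print(f"GPU:    ~{gpu_years:.0f} years to sweep the range")
print(f"Speedup: {GPU_RATE // PY_RATE:,}x")
```

Even the GPU figure shows why, without the public key, brute force at 66 bits is a lottery rather than a plan.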
newbie
Activity: 20
Merit: 0
Bro, can I talk about the 1000 BTC puzzle? I am using Python but can't use my GPU. I have an RTX 4070, please help.

I am using Rotor-Cuda with relatively small blocks, driven by my Python script.
newbie
Activity: 1
Merit: 0
Bro, can I talk about the 1000 BTC puzzle? I am using Python but can't use my GPU. I have an RTX 4070, please help.
member
Activity: 165
Merit: 26
You run a 65 bit range searching for pubkey Z, solve in expected time, and accumulate x amount of DPs to solve (whatever number you want to use, it is irrelevant)
Now, you tamed the wilds from the previous 65 bit run.
Lastly, you run a 65 bit range searching for pubkey W, on average, how much quicker will you solve pubkey W versus pubkey Z?

A LOT BETTER, if this was your question. And every time I benchmark to test a new solution, I tame the found wild DPs, so results improve over time. I also periodically trim DPs that fall outside a certain range, since almost 99% of the solutions are found via a DP in a specific median range. I almost never find a key by hitting a DP that is below 50% of the interval, for example, even though there were a lot of them in that region. Don't ask why that happens; I can only speculate it has to do with average travel size, who knows.

Currently I have an 80-bit solver DP database that can solve any key in under 6 seconds on a GPU, and under a minute if using the CPU.
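For readers wondering what such a DP database does mechanically, here is a minimal toy sketch of the tame/wild distinguished-point idea in a small multiplicative group mod a prime. All parameters here are toy assumptions for illustration; real solvers work on secp256k1 and this is not the poster's code:

```python
# Toy Pollard-kangaroo DP lookup in Z_p^* (illustration only; toy
# parameters, not secp256k1, and not the poster's actual solver).
# We look for x with g^x = y (mod p), x known to lie in [0, N).
p = 4294967291                 # small 32-bit prime (assumption)
g = 2
N = 2**20                      # interval size
K = 16                         # jump table: powers of two
jumps = [2**i for i in range(K)]
DP_MASK = (1 << 6) - 1         # "distinguished" element: low 6 bits are 0

def walk(elem, dist, steps, record=None, lookup=None):
    """One kangaroo: a tame records DP->distance, a wild checks for hits."""
    for _ in range(steps):
        i = elem % K                        # pseudo-random jump choice
        elem = elem * pow(g, jumps[i], p) % p
        dist += jumps[i]
        if elem & DP_MASK == 0:             # distinguished point
            if record is not None:
                record[elem] = dist         # tame: store distance
            elif lookup is not None and elem in lookup:
                return lookup[elem] - dist  # tame_dist - wild_dist = x
    return None

x_secret = 123456                           # pretend-unknown key
y = pow(g, x_secret, p)

tames = {}                                  # the "DP database"
walk(pow(g, N, p), N, 200_000, record=tames)   # tame starts at top of range

found = None
for offset in range(64):                    # restart wild with tiny offsets
    r = walk(y * pow(g, offset, p) % p, offset, 200_000, lookup=tames)
    if r is not None:
        found = r % (p - 1)                 # normalize the exponent
        break
print("recovered exponent:", found)
```

The point of the poster's setup is that the table is built once; solving a new key is then just walking a wild kangaroo until it lands on a stored DP.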
member
Activity: 873
Merit: 22
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
It depends on how you plan to reuse them.

You are looking at it from one perspective, to run the original pub, in its original range, with the DPs generated from a lower range.

There are 9 million ways to skin a cat.

I am sure you have done this kind of test and analysis, so answer me this: if you reuse DPs found during a 66-, 65-, or 70-bit range run to find a key in the same exact range, how much search time did it take? Was it less, and if so, how much less, on average?

It's not as easy as it sounds. If the jump rules stay the same (so the DPs can actually work between different intervals), then the DPs are valid and usable, but unfortunately the interval doubles in size for every bit increase. So where are the DPs that should cover the newly added other half? Nowhere.

To get the optimal runtime, the Tames start somewhere around the middle of the interval + some common offset (so they are all, on average, to a minimum closest distance to the Wild/private key).

But what we have, are DPs of kangaroos started from the middle point of the first half of the new interval (around half of them passing into the second half of the new interval). And also the DPs converted from Wild distances after a solve (all these DPs will start from where the private key was in the old interval, so we don't know what they cover).

So now we should run new Tames to cover the second added half (traveling a double distance than what Tames did over the previous interval). And we also need Wilds that need to cover the entire new interval (since they can be in the first or the second half of the new interval).

Since the jumps are the same size, there need to be more of them! Usually, if the interval increases, we do longer jumps, but this breaks DP reuse.

In short, existing DPs would help, but only if the private key is in the first half, not the second. And only if the jump rules are kept and don't affect too much the expected runtime.

Repeat for every new bit added... for 5 added bits the interval is 32 times larger. So I guess my math is wrong: it will take sqrt(32) times more operations to solve, but the DP coverage from reusing the previous DPs is very low, not 17%, if I'm not mistaken... I think my fallacy was assuming that if you solved an interval and then increase it by 5 bits, 17% of the work needed was already performed; my fault was that 97% of the DPs required for this to be valid are missing, since all of the known ones are sitting in a very tight corner.

Did you try to use the saved files from JLP's Kangaroo?
full member
Activity: 1232
Merit: 242
Shooters Shoot...
It depends on how you plan to reuse them.

You are looking at it from one perspective, to run the original pub, in its original range, with the DPs generated from a lower range.

There are 9 million ways to skin a cat.

I am sure you have done this kind of test and analysis, so answer me this: if you reuse DPs found during a 66-, 65-, or 70-bit range run to find a key in the same exact range, how much search time did it take? Was it less, and if so, how much less, on average?

It's not as easy as it sounds. If the jump rules stay the same (so the DPs can actually work between different intervals), then the DPs are valid and usable, but unfortunately the interval doubles in size for every bit increase. So where are the DPs that should cover the newly added other half? Nowhere.

To get the optimal runtime, the Tames start somewhere around the middle of the interval + some common offset (so they are all, on average, to a minimum closest distance to the Wild/private key).

But what we have, are DPs of kangaroos started from the middle point of the first half of the new interval (around half of them passing into the second half of the new interval). And also the DPs converted from Wild distances after a solve (all these DPs will start from where the private key was in the old interval, so we don't know what they cover).

So now we should run new Tames to cover the second added half (traveling a double distance than what Tames did over the previous interval). And we also need Wilds that need to cover the entire new interval (since they can be in the first or the second half of the new interval).

Since the jumps are the same size, there need to be more of them! Usually, if the interval increases, we do longer jumps, but this breaks DP reuse.

In short, existing DPs would help, but only if the private key is in the first half, not the second. And only if the jump rules are kept and don't affect too much the expected runtime.

Repeat for every new bit added... for 5 added bits the interval is 32 times larger. So I guess my math is wrong: it will take sqrt(32) times more operations to solve, but the DP coverage from reusing the previous DPs is very low, not 17%, if I'm not mistaken... I think my fallacy was assuming that if you solved an interval and then increase it by 5 bits, 17% of the work needed was already performed; my fault was that 97% of the DPs required for this to be valid are missing, since all of the known ones are sitting in a very tight corner.

Lol, I stopped reading when I realized you didn't understand what I said and what I asked.

I will try to clarify:

You run a 65 bit range searching for pubkey Z, solve in expected time, and accumulate x amount of DPs to solve (whatever number you want to use, it is irrelevant)
Now, you tamed the wilds from the previous 65 bit run.
Lastly, you run a 65 bit range searching for pubkey W, on average, how much quicker will you solve pubkey W versus pubkey Z?

Better?

I have run 1000s of these tests. I was counting on the fact that you had too, since you talk about "pre-compiling" a work file to use with a bot for the 66 key.
member
Activity: 165
Merit: 26
It depends on how you plan to reuse them.

You are looking at it from one perspective, to run the original pub, in its original range, with the DPs generated from a lower range.

There are 9 million ways to skin a cat.

I am sure you have done this kind of test and analysis, so answer me this: if you reuse DPs found during a 66-, 65-, or 70-bit range run to find a key in the same exact range, how much search time did it take? Was it less, and if so, how much less, on average?

It's not as easy as it sounds. If the jump rules stay the same (so the DPs can actually work between different intervals), then the DPs are valid and usable, but unfortunately the interval doubles in size for every bit increase. So where are the DPs that should cover the newly added other half? Nowhere.

To get the optimal runtime, the Tames start somewhere around the middle of the interval + some common offset (so they are all, on average, to a minimum closest distance to the Wild/private key).

But what we have, are DPs of kangaroos started from the middle point of the first half of the new interval (around half of them passing into the second half of the new interval). And also the DPs converted from Wild distances after a solve (all these DPs will start from where the private key was in the old interval, so we don't know what they cover).

So now we should run new Tames to cover the second added half (traveling a double distance than what Tames did over the previous interval). And we also need Wilds that need to cover the entire new interval (since they can be in the first or the second half of the new interval).

Since the jumps are the same size, there need to be more of them! Usually, if the interval increases, we do longer jumps, but this breaks DP reuse.

In short, existing DPs would help, but only if the private key is in the first half, not the second. And only if the jump rules are kept and don't affect too much the expected runtime.

Repeat for every new bit added... for 5 added bits the interval is 32 times larger. So I guess my math is wrong: it will take sqrt(32) times more operations to solve, but the DP coverage from reusing the previous DPs is very low, not 17%, if I'm not mistaken... I think my fallacy was assuming that if you solved an interval and then increase it by 5 bits, 17% of the work needed was already performed; my fault was that 97% of the DPs required for this to be valid are missing, since all of the known ones are sitting in a very tight corner.
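The sqrt(32) and 17% figures in the post can be sanity-checked numerically: expected kangaroo work grows roughly as c*sqrt(N) for an interval of size N, and the constant c cancels in ratios, so it doesn't matter here:

```python
import math

# Sanity check of the sqrt(32) / 17% figures above: expected kangaroo
# work ~ c*sqrt(N); c cancels when comparing interval sizes.
def expected_ops(bits, c=2.0):
    return c * math.sqrt(2**bits)

for added in (1, 5):
    ratio = expected_ops(65 + added) / expected_ops(65)
    print(f"+{added} bit(s): {ratio:.2f}x more work; "
          f"previous work is {100/ratio:.1f}% of the new total")
```

So adding 5 bits multiplies the expected work by sqrt(32), about 5.66x, making the old work about 17.7% of the new total, consistent with the figure quoted in the thread.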
full member
Activity: 1232
Merit: 242
Shooters Shoot...
Maybe he/she is just a brilliant mind...and that person knows a way to reuse the DPs from 120,125 to find 130 and the rest.

I would like to know how he/she did it !

For every 5-bit range increase, the previous range covers about 17% of the new work, but reusing DPs means using the same jumps, so more jumps on average per kangaroo = more total operations than optimal.

If you start searching in 135 bits, then it will take the same amount of time to find private key 0x1 or private key 2**134 - 1 or private key 2**89. So if we adapt the jumps needed to go through 135 bits and solve a lower interval than that (as if we would try to jump through 135), that means longer jumps on average per kangaroo = more total operations needed than optimal.

I think it's pretty obvious what the 120 / 125 / 130 solver has done. You won't find the software publicly, you should all forget about that ever happening. Learn to code, this is what this competition is about first and foremost, not the prize.

It depends on how you plan to reuse them.

You are looking at it from one perspective, to run the original pub, in its original range, with the DPs generated from a lower range.

There are 9 million ways to skin a cat.

I am sure you have done this kind of test and analysis, so answer me this, if you reuse DPs found during a 66 or 65 or 70 bit range, to find a key in the same exact range, how much search time did it take, was it less, if so, how much less, on average?
member
Activity: 873
Merit: 22
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
Maybe he/she is just a brilliant mind...and that person knows a way to reuse the DPs from 120,125 to find 130 and the rest.

I would like to know how he/she did it !

For every 5-bit range increase, the previous range covers about 17% of the new work, but reusing DPs means using the same jumps, so more jumps on average per kangaroo = more total operations than optimal.

If you start searching in 135 bits, then it will take the same amount of time to find private key 0x1 or private key 2**134 - 1 or private key 2**89. So if we adapt the jumps needed to go through 135 bits and solve a lower interval than that (as if we would try to jump through 135), that means longer jumps on average per kangaroo = more total operations needed than optimal.

I think it's pretty obvious what the 120 / 125 / 130 solver has done. You won't find the software publicly, you should all forget about that ever happening. Learn to code, this is what this competition is about first and foremost, not the prize.


This is zielar and JLP; they use Kangaroo. They were unsuccessful the first time they tried to solve #125. But Kangaroo can reuse previously calculated kangaroos for the next ranges - the saved files from 125 to solve 2^130, etc. And they got 13 BTC this time, whereas previously they got 1.2 and 1.25 BTC; the difference is big, so now they got more GPUs and solved it. Bye bye puzzles Cry
member
Activity: 503
Merit: 38
I would milk the BTC amount for years.


You mean 0.035 BTC per month? You need 30 years to spend Puzzle 130 like that.  Grin
member
Activity: 165
Merit: 26
Maybe he/she is just a brilliant mind...and that person knows a way to reuse the DPs from 120,125 to find 130 and the rest.

I would like to know how he/she did it !

For every 5-bit range increase, the previous range covers about 17% of the new work, but reusing DPs means using the same jumps, so more jumps on average per kangaroo = more total operations than optimal.

If you start searching in 135 bits, then it will take the same amount of time to find private key 0x1 or private key 2**134 - 1 or private key 2**89. So if we adapt the jumps needed to go through 135 bits and solve a lower interval than that (as if we would try to jump through 135), that means longer jumps on average per kangaroo = more total operations needed than optimal.

I think it's pretty obvious what the 120 / 125 / 130 solver has done. You won't find the software publicly, you should all forget about that ever happening. Learn to code, this is what this competition is about first and foremost, not the prize.
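To make the "longer jumps" point concrete, here is a sketch under a common heuristic (conventions vary between implementations) where the jump table holds powers of two and the mean jump is chosen near sqrt(N)/2 for an interval of size N:

```python
import math

# Illustrating the jump-size point above: a common heuristic (exact
# conventions differ between implementations) picks power-of-two jumps
# whose mean is near sqrt(N)/2 for an interval of size N = 2**bits.
def jump_table(bits):
    target_mean = math.sqrt(2**bits) / 2
    k = 1
    while (2**k - 1) / k < target_mean:   # mean of jumps 2^0 .. 2^(k-1)
        k += 1
    return [2**i for i in range(k)]

small, big = jump_table(70), jump_table(135)
mean_small = sum(small) / len(small)
mean_big = sum(big) / len(big)
print(f"70-bit table : {len(small)} jumps, mean {mean_small:.3g}")
print(f"135-bit table: {len(big)} jumps, mean {mean_big:.3g}")
# Reusing the 135-bit table on a 70-bit interval makes the average jump
# roughly 2^32 times too long, so DPs from runs tuned to different
# interval sizes can't be shared without giving up the optimal runtime.
```

This is why DP reuse forces a trade-off: either keep the old, undersized jumps (more jumps per kangaroo than optimal) or retune the jumps and lose compatibility with the stored DPs.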
jr. member
Activity: 42
Merit: 0
from the moment they convert the BTC into EUR, their identity is revealed and they are no longer anonymous.

Why directly in EUR? Is there a swap from BTC to XMR?
And then you slowly spend the XMR through the Volet card....
I would milk the BTC amount for years.
hero member
Activity: 630
Merit: 731
Bitcoin g33k
then why is there a non-zero balance on both of these addresses?
I'm sorry, I'm new to this topic, I'm just trying to figure out what's going on here and most importantly HOW...  Roll Eyes

because from the moment they convert the BTC into EUR, their identity is revealed and they are no longer anonymous. If things go wrong, the “lucky winner” then has a problem on their hands when they are forced to account for their actions or make a statement. Thanks to KYC Wink

EDIT: As I just realized, your post was deleted. Thanks to the quote I made, everyone knows what I replied to.
hero member
Activity: 630
Merit: 731
Bitcoin g33k
Code:
GPUEngine: SetParams: Failed to copy to constant memory (distance): an illegal memory access was encountered
GPUEngine: SetKangaroos: an illegal memory access was encountered
GPUEngine: Kernel: an illegal memory access was encountered
[+] SolveKeyGPU Thread GPU#1: 2^22.00 kangaroos [42.1s]
Segmentation fault (core dumped)

guess why - backdoored? Cheesy hope you didn't run it on your own computer where you also run btc wallet applications, right?

relax, just kidding ...  Cool Grin RTFM and look into the detect-cuda script; if required, manually assign your correct SMs and also the GPU size

and don't be disappointed if the program doesn't work as you had hoped. You can also try -t 0, which should disable the CPU and use only the GPU. However, you will notice that the GPU is not working at all; use a program of your choice to check whether the GPU is utilized. For example, if you have an Nvidia GPU, simply start the nVidia X Server (Linux) or nVidia Control Panel (Windows) and be amazed that the program shows 0% GPU load. Puzzled? Cheesy If you use -t 0, you think the CPU is switched off, at least that's what this ominous kangaroo-256 tells you. But there will only be one CPU core running and no GPU at all Wink, besides the weird and wrong performance meters related to the GPU core settings you chose. So this program is crap
jr. member
Activity: 79
Merit: 1
SolveKeyCPU Thread 254: 1024 kangaroos
SolveKeyCPU Thread 251: 1024 kangaroos
GPU: GPU #0 NVIDIA GeForce RTX 3060 (28x128 cores) Grid(56x256) (147.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
 

Someone please help me, why is it taking so long? I've been stuck on this for like 5 minutes.



SolveKeyCPU Thread 254: 1024 kangaroos
SolveKeyCPU Thread 251: 1024 kangaroos
GPU: GPU #0 NVIDIA GeForce RTX 3060 (28x128 cores) Grid(56x256) (147.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
 

Someone please help me, why is it taking so long? I've been stuck on this for like 5 minutes.

then "Segmentation fault (core dumped)"
member
Activity: 873
Merit: 22
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
1Prestige1zSYorBdz94KA2UbJW3hYLTn4 was emptied half an hour ago Smiley Probably the owner got scared of the script...

It seems that the puzzle 125 and puzzle 130 BTC are sitting on 3Emiwzxme7Mrj4d89uqohXNncnRM15YESs.

120 was also solved by 3Emi.

This 3Emi guy is definitely hiding something, or is the creator himself.

120 was solved by Jean-Luc Pons, the developer of Kangaroo, in the Technical Development thread.



Hi, I think this is user zielar https://bitcointalksearch.org/user/zielar-1020539 taking 120, 125, and 130. He works at a company where there are many GPUs; I think the company doesn't know that he is using their equipment.

Companies that use a large number of GPUs, especially at the scale of 20,000 or more:

1.Google
2.Amazon (AWS)
3.NVIDIA
4.Meta (Facebook)
5.Tesla: Tesla Dojo
6.Cryptocurrency Mining Farms

In which company from the list above do you think it is possible to solve puzzle 130 unnoticed?  Grin

Do you still have doubts that it was this person who solved it? https://bitcointalksearch.org/topic/m.51803085

He is a partner with 200 Tesla GPUs, which is what solved the 2^120 private key.
newbie
Activity: 16
Merit: 0
1Prestige1zSYorBdz94KA2UbJW3hYLTn4 was emptied half an hour ago Smiley Probably the owner got scared of the script...

It seems that the puzzle 125 and puzzle 130 BTC are sitting on 3Emiwzxme7Mrj4d89uqohXNncnRM15YESs.

120 was also solved by 3Emi.

This 3Emi guy is definitely hiding something, or is the creator himself.

Key 130 was found by the same person as the previous two.
I think he used Kangaroo, and this person is a miner. The timing is roughly consistent.
I also don't think he will share the keys he found or how exactly he found them.

Yeah, I agree. I am using vanity search for the small ones. For example, there are around 10 billion addresses starting with "1BY8G" between 2^66 and 2^67. I run 10 vanity search instances, and around 300 unique addresses are found daily. It is kind of a lottery for me. 300/10B is a very good probability.

As for using the Kangaroo method for 130 bits... as far as I know, none of the publicly available software was able to do it, because it was limited to 125 or 128 bits (I don't remember which).

BTW, how is vanity search "better" than normal brute force? Isn't it like you still just randomly generate private keys and check if they match the given address? If so... then all you do is save addresses that start with the same characters, but it doesn't increase your chances. Correct me please if I'm wrong.

For the ones which do not have a public key and are small, I am using this just for fun. If it does not have a public key, and you brute force between 2^66 and 2^67 checking 1B records per second, your chance will be 1B * 60 * 60 * 24 / 2^66 per day. Currently 10 vanity instances find 300-400 unique records per day, and as I explained, there are approximately 10B addresses starting with "1BY8G" between 2^66 and 2^67. However, vanity will most probably get stuck when the count reaches big numbers, because it will keep hitting the same addresses and the find rate will decrease. I did not say it is better than brute forcing, I just enjoy this way of doing the search. It will be around 100K vanity records per year; if I am lucky it will hit that address, if not, who cares Grin
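The daily-odds arithmetic in the post can be checked directly (the 1B keys/s rate is the poster's figure, not a benchmark):

```python
# Checking the arithmetic in the post above: chance per day of hitting
# the key when scanning the interval [2^66, 2^67) at 1B keys/s
# (the poster's quoted rate, not a measured benchmark).
RATE = 1_000_000_000              # keys per second
interval = 2**66                  # size of [2^66, 2^67)

per_day = RATE * 60 * 60 * 24 / interval
years_to_expect = 1 / per_day / 365
print(f"daily hit probability: {per_day:.3e}")
print(f"expected wait: ~{years_to_expect:,.0f} years")
```

At that rate the expected wait is on the order of millennia, which is exactly why the poster calls it a lottery.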