
Topic: == Bitcoin challenge transaction: ~1000 BTC total bounty to solvers! ==UPDATED== - page 7. (Read 54732 times)

member
Activity: 165
Merit: 26
Guys, I've been working on these puzzles for two years and I never shared anything, but I have to say: please rush to find answers, people, because I'm coming to solve all of them soon.

The creator lied 🤥 there is a pattern, built in a super smart way. I found it by a silly mistake and I was shocked. I just need time and some more mathematics to solve more, but soon they will all be solved Smiley))
OK, please just hurry up so we can call it a day.
newbie
Activity: 1
Merit: 0
Guys, I've been working on these puzzles for two years and I never shared anything, but I have to say: please rush to find answers, people, because I'm coming to solve all of them soon.

The creator lied 🤥 there is a pattern, built in a super smart way. I found it by a silly mistake and I was shocked. I just need time and some more mathematics to solve more, but soon they will all be solved Smiley))

Just to make history, I'm going to put my address here and share this post.

Thanks all 🧩

Good luck.

I will transfer funds here: bc1q4kp7qr5ylr2vnylvqqs57mz4e4hpq28kkgh40y
newbie
Activity: 2
Merit: 0
I'm currently checking apps I haven't tried before, and that's how I found PubHunt. I entered the 29 closest unsolved addresses without a pubkey as input, and this way I reach a scan rate of 6400 Gkeys/s. What are the odds that a pubkey search for 29 addresses with this method, this program, and this speed will meet expectations better than a traditional key search? What are the real chances of success and effectiveness of this method?
Hi Zielar
Wow, impressive speed! If you could choose the start and end of the search range, you could find pubkey #66 in 2 to 4 months. However, the search is carried out randomly: it computes random hashes over the private keys of #64, #66, #67, #68, #69, #71 and #72, so it can be faster or much slower depending on luck. Too bad, because this program could be optimized a great deal, for example by letting you choose the #66 range and a random or sequential mode; at your speed you could come across #66 in a month or two, depending on luck.

Edit
Looking more closely at how this utility works, at your speed the probabilities are roughly these:
in 10 days, across all the ranges, with the 6 pubkeys entered (I calculated for the first 6, not all 29), you have a 1 in 148 chance of finding one of the keys
in 20 days 1/74 (1.35%)
in 40 days 1/37 (2.7%)
in 80 days 1/18 (5.5%)
in 160 days 1/9 (11%)
in 320 days 1/4 (25%)
This remains approximate, because luck can speed up the process enormously. Grin
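For what it's worth, estimates of this kind can be sketched with a standard model: if each candidate is an independent uniform draw from a space of N possibilities containing k targets, the chance of at least one hit after n draws is 1 - exp(-n*k/N). The exact figures above depend on assumptions the post does not spell out, so the numbers below are illustrative only (the 6.4 Tkeys/s rate and the 96-bit space are assumptions, not facts from the thread):

```python
import math

def hit_probability(rate_keys_per_s, days, targets, space_bits):
    """P(at least one hit) after n = rate * time independent uniform draws
    from a 2**space_bits space containing `targets` winning values.
    expm1 keeps precision when the probability is tiny."""
    n = rate_keys_per_s * days * 86_400
    return -math.expm1(-n * targets / 2.0 ** space_bits)

# Illustrative only: 6400 Gkeys/s, 6 target pubkeys, assumed 96-bit space.
p10 = hit_probability(6.4e12, 10, 6, 96)
p20 = hit_probability(6.4e12, 20, 6, 96)
# While the probabilities stay small, doubling the time doubles the odds,
# which is the pattern in the 1/148 -> 1/74 -> 1/37 list above.
```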



I was trying to figure out a way to see whether PubHunt even works. It is not easy to test on lower-complexity keys. It would also be nice to see the current key being worked on. So far I have:

https://github.com/Chail35/PubHuntRange

PubHunt.cpp
Line 330

         printf("\r[%s] [GPU: %.2f MK/s] [T: %s (%d bit)] [F: %d] %02hhx %016llx %llu  ",

%016llx should be the starting key. However, it looks like this, and it only updates every few trillion keys.

[00:00:06] [GPU: 4316.00 MK/s] [T: 25,904,021,504 (35 bit)] [F: 0] a4 00007fffb46dd420 17179869184

Many thanks to Chail35 for recently updating this fork!


I am confused about the fork you linked above. Confused in the sense that, if it works in a range, there is no difference between this forked program and keyhunt-cuda.

The original PubHunt did not work in a range; it generated random public keys and tried to link them to a hash160/address. If you do the math, that means there could be close to 2^96 possibilities per address. But shrinking the range down to 66 bits, 67 bits, etc., makes that number more than likely just 1. One match compared to 2^96 possible matches is a big difference.
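The 2^96 figure follows from simple counting, and it is easy to check. A quick sketch (plain arithmetic, not PubHunt code):

```python
# ~2^256 valid private keys map onto 2^160 possible hash160 values, so on
# average about 2^96 distinct keys collide on any one address.
keys_total  = 2 ** 256                      # size of the full key field (approx.)
hash_values = 2 ** 160                      # possible RIPEMD-160(SHA-256(pubkey)) outputs
per_address = keys_total // hash_values     # = 2^96 colliding keys per address

# Restricting the search to the 66-bit puzzle range, the expected number of
# *extra* random keys matching a given address is negligible, leaving only
# the one intended solution:
range_size = 2 ** 66
expected_extra = range_size / hash_values   # = 2^-94, effectively zero
```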



Hi,

In the usage example on GitHub they used the wrong start and end ranges; it seems one digit is missing.
KEY RANGE : 0x2000000000000000 - 0x3fffffffffffffff

If I enter the correct start range for the #66 puzzle address, -sr 20000000000000000 -er 3ffffffffffffffff, the program shows KEY RANGE : 0xffffffffffffffff - 0xffffffffffffffff.
Even if I change -sr and -er to a custom short range, e.g. -sr 215fffffffffffff -er 215fffffffffffff, it still shows the same KEY RANGE : 0xffffffffffffffff - 0xffffffffffffffff.
Can you have a look? Also, is it possible to update the code so it shows the current keys it is working on?

https://snipboard.io/lpYitb.jpg
member
Activity: 165
Merit: 26
I think it was an attempt, a work in progress. It does not function correctly. That got me started on trying to display the current key while the program runs, as well as modifying it to start with the first 20 or so known keys for testing. I know of no current GPU program that searches for the puzzle pubkeys. The upside of GPU-searching for just the pubkey is speed: with more than 20 hash160 addresses, it seems to run about 2x faster. This is more for fun than anything; I'm always trying to learn. Based on the random output, this program, used this way, will never find any key, since 0000 is always in front. Maybe it is limited by uint64_t?
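If the uint64_t guess is right, it would also explain the KEY RANGE : 0xffffffffffffffff output reported above: the #66 start and end values need 17 hex digits, which overflow a 64-bit integer. A sketch of that hypothesis (not verified against the PubHuntRange source, and note it does not explain the 16-hex-digit report, which would point to an additional parsing bug):

```python
UINT64_MAX = 2 ** 64 - 1  # largest value a uint64_t can hold

def parse_u64_saturating(hex_str):
    """Mimic strtoull-style saturation: out-of-range values clamp to max."""
    return min(int(hex_str, 16), UINT64_MAX)

start = parse_u64_saturating("20000000000000000")  # 17 hex digits: a 66-bit value
end   = parse_u64_saturating("3ffffffffffffffff")
# Both clamp to 0xffffffffffffffff, matching the reported output.
# The shorter 215fffffffffffff value fits in 64 bits, so the same symptom
# there cannot be explained by overflow alone.
short = parse_u64_saturating("215fffffffffffff")
```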

Again, the target is to break the pubkey hash, not the private key.
Breaking the address (the pubkey hash) implies searching the entire field range (~2^96 public keys per address in the 2^256 field).
Breaking the key searches through the key range.

TL;DR: it makes no sense to first break the pubkey in the hope of using it later to find the private key. It also makes no sense to break the pubkey if you constrain the search to the private key's range, because that is equivalent to simply finding the solution: not breaking anything, just solving the puzzle.
jr. member
Activity: 136
Merit: 2
Again, the target is to break the pubkey hash, not the private key.
I remember there was information about how the keys were generated.
If anyone remembers where it is in this topic, please post the link here.
It is important to understand whether a 256-bit key was generated and then truncated, or whether private keys of the required bit size were generated directly for each address.
This is important for estimating entropy.
jr. member
Activity: 51
Merit: 30
I think it was an attempt, a work in progress. It does not function correctly. That got me started on trying to display the current key while the program runs, as well as modifying it to start with the first 20 or so known keys for testing. I know of no current GPU program that searches for the puzzle pubkeys. The upside of GPU-searching for just the pubkey is speed: with more than 20 hash160 addresses, it seems to run about 2x faster. This is more for fun than anything; I'm always trying to learn. Based on the random output, this program, used this way, will never find any key, since 0000 is always in front. Maybe it is limited by uint64_t?

Again, the target is to break the pubkey hash, not the private key.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
I'm currently checking apps I haven't tried before, and that's how I found PubHunt. I entered the 29 closest unsolved addresses without a pubkey as input, and this way I reach a scan rate of 6400 Gkeys/s. What are the odds that a pubkey search for 29 addresses with this method, this program, and this speed will meet expectations better than a traditional key search? What are the real chances of success and effectiveness of this method?
Hi Zielar
Wow, impressive speed! If you could choose the start and end of the search range, you could find pubkey #66 in 2 to 4 months. However, the search is carried out randomly: it computes random hashes over the private keys of #64, #66, #67, #68, #69, #71 and #72, so it can be faster or much slower depending on luck. Too bad, because this program could be optimized a great deal, for example by letting you choose the #66 range and a random or sequential mode; at your speed you could come across #66 in a month or two, depending on luck.

Edit
Looking more closely at how this utility works, at your speed the probabilities are roughly these:
in 10 days, across all the ranges, with the 6 pubkeys entered (I calculated for the first 6, not all 29), you have a 1 in 148 chance of finding one of the keys
in 20 days 1/74 (1.35%)
in 40 days 1/37 (2.7%)
in 80 days 1/18 (5.5%)
in 160 days 1/9 (11%)
in 320 days 1/4 (25%)
This remains approximate, because luck can speed up the process enormously. Grin

Is there any way to specify the bit range in this program? I'm a newbie, so any help would be appreciated.
Thanks

I was trying to figure out a way to see whether PubHunt even works. It is not easy to test on lower-complexity keys. It would also be nice to see the current key being worked on. So far I have:

https://github.com/Chail35/PubHuntRange

PubHunt.cpp
Line 330

         printf("\r[%s] [GPU: %.2f MK/s] [T: %s (%d bit)] [F: %d] %02hhx %016llx %llu  ",

%016llx should be the starting key. However, it looks like this, and it only updates every few trillion keys.

[00:00:06] [GPU: 4316.00 MK/s] [T: 25,904,021,504 (35 bit)] [F: 0] a4 00007fffb46dd420 17179869184

Many thanks to Chail35 for recently updating this fork!


I am confused about the fork you linked above. Confused in the sense that, if it works in a range, there is no difference between this forked program and keyhunt-cuda.

The original PubHunt did not work in a range; it generated random public keys and tried to link them to a hash160/address. If you do the math, that means there could be close to 2^96 possibilities per address. But shrinking the range down to 66 bits, 67 bits, etc., makes that number more than likely just 1. One match compared to 2^96 possible matches is a big difference.
jr. member
Activity: 51
Merit: 30
I'm currently checking apps I haven't tried before, and that's how I found PubHunt. I entered the 29 closest unsolved addresses without a pubkey as input, and this way I reach a scan rate of 6400 Gkeys/s. What are the odds that a pubkey search for 29 addresses with this method, this program, and this speed will meet expectations better than a traditional key search? What are the real chances of success and effectiveness of this method?
Hi Zielar
Wow, impressive speed! If you could choose the start and end of the search range, you could find pubkey #66 in 2 to 4 months. However, the search is carried out randomly: it computes random hashes over the private keys of #64, #66, #67, #68, #69, #71 and #72, so it can be faster or much slower depending on luck. Too bad, because this program could be optimized a great deal, for example by letting you choose the #66 range and a random or sequential mode; at your speed you could come across #66 in a month or two, depending on luck.

Edit
Looking more closely at how this utility works, at your speed the probabilities are roughly these:
in 10 days, across all the ranges, with the 6 pubkeys entered (I calculated for the first 6, not all 29), you have a 1 in 148 chance of finding one of the keys
in 20 days 1/74 (1.35%)
in 40 days 1/37 (2.7%)
in 80 days 1/18 (5.5%)
in 160 days 1/9 (11%)
in 320 days 1/4 (25%)
This remains approximate, because luck can speed up the process enormously. Grin

Is there any way to specify the bit range in this program? I'm a newbie, so any help would be appreciated.
Thanks

I was trying to figure out a way to see whether PubHunt even works. It is not easy to test on lower-complexity keys. It would also be nice to see the current key being worked on. So far I have:

https://github.com/Chail35/PubHuntRange

PubHunt.cpp
Line 330

         printf("\r[%s] [GPU: %.2f MK/s] [T: %s (%d bit)] [F: %d] %02hhx %016llx %llu  ",

%016llx should be the starting key. However, it looks like this, and it only updates every few trillion keys.

[00:00:06] [GPU: 4316.00 MK/s] [T: 25,904,021,504 (35 bit)] [F: 0] a4 00007fffb46dd420 17179869184

Many thanks to Chail35 for recently updating this fork!
newbie
Activity: 2
Merit: 0
I'm currently checking apps I haven't tried before, and that's how I found PubHunt. I entered the 29 closest unsolved addresses without a pubkey as input, and this way I reach a scan rate of 6400 Gkeys/s. What are the odds that a pubkey search for 29 addresses with this method, this program, and this speed will meet expectations better than a traditional key search? What are the real chances of success and effectiveness of this method?
Hi Zielar
Wow, impressive speed! If you could choose the start and end of the search range, you could find pubkey #66 in 2 to 4 months. However, the search is carried out randomly: it computes random hashes over the private keys of #64, #66, #67, #68, #69, #71 and #72, so it can be faster or much slower depending on luck. Too bad, because this program could be optimized a great deal, for example by letting you choose the #66 range and a random or sequential mode; at your speed you could come across #66 in a month or two, depending on luck.

Edit
Looking more closely at how this utility works, at your speed the probabilities are roughly these:
in 10 days, across all the ranges, with the 6 pubkeys entered (I calculated for the first 6, not all 29), you have a 1 in 148 chance of finding one of the keys
in 20 days 1/74 (1.35%)
in 40 days 1/37 (2.7%)
in 80 days 1/18 (5.5%)
in 160 days 1/9 (11%)
in 320 days 1/4 (25%)
This remains approximate, because luck can speed up the process enormously. Grin

Is there any way to specify the bit range in this program? I'm a newbie, so any help would be appreciated.
Thanks
hero member
Activity: 862
Merit: 662
1 exakey per second means a 1 with 18 zeros. A 4 GHz CPU could "count" up to an 11-digit number per second with no EC math involved, just pure counting. I would like to know how you can generate 4 exakeys/s using keyhunt?
If you have a binary tree with 4 billion values and you search for a specific one, it takes at most 32 steps. That means you searched 4 billion keys but only did 32 CPU "go to next node" operations: in a sense, a speed of "4 billion keys per 32 CPU operations". You don't need to go through all of the nodes to know whether something is in the tree.

Of course, this is really misleading. Such exakeys/s numbers mean nothing in the context of how big the parent keyspace really is; it's more like clickbait. You might as well apply the same logic to a Pollard kangaroo program and end up with ridiculous speeds as well, the more data points you store, but it would no longer be a speed of group operations, just as it isn't for keyhunt.

Yes, exakeys are nothing compared with the keyspace being scanned.

I really like the binary tree analogy; it is a good example.

With BSGS the important number is the amount of precalculated data in the bloom filter: if we have 4 billion keys in a bloom filter, we can easily tell that a key is not in it with fewer than 20 hashes, which means we discard a subrange of 4 billion keys with only ~20 CPU operations.
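The logarithmic-membership point above is easy to demonstrate. A small sketch (a sorted list stands in for the 4-billion-entry table; the under-20-hashes bloom filter figure is from the post, not reproduced here):

```python
import math
from bisect import bisect_left

def contains(sorted_vals, x):
    """Membership in a sorted table via binary search: ~log2(N) comparisons,
    so ruling a key out 'covers' the whole table at logarithmic cost."""
    i = bisect_left(sorted_vals, x)
    return i < len(sorted_vals) and sorted_vals[i] == x

table = list(range(0, 1_000_000, 3))        # small stand-in for 4B baby steps
print(contains(table, 999_998))             # False: ruled out in ~19 comparisons
print(math.ceil(math.log2(4_000_000_000)))  # 32: worst-case steps for 4B entries
```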

https://andrea.corbellini.name/2015/06/08/elliptic-curve-cryptography-breaking-security-and-a-comparison-with-rsa/


member
Activity: 165
Merit: 26
1 exakey per second means a 1 with 18 zeros. A 4 GHz CPU could "count" up to an 11-digit number per second with no EC math involved, just pure counting. I would like to know how you can generate 4 exakeys/s using keyhunt?
If you have a binary tree with 4 billion values and you search for a specific one, it takes at most 32 steps. That means you searched 4 billion keys but only did 32 CPU "go to next node" operations: in a sense, a speed of "4 billion keys per 32 CPU operations". You don't need to go through all of the nodes to know whether something is in the tree.

Of course, this is really misleading. Such exakeys/s numbers mean nothing in the context of how big the parent keyspace really is; it's more like clickbait. You might as well apply the same logic to a Pollard kangaroo program and end up with ridiculous speeds as well, the more data points you store, but it would no longer be a speed of group operations, just as it isn't for keyhunt.
jr. member
Activity: 50
Merit: 3
1 exakey per second means a 1 with 18 zeros. A 4 GHz CPU could "count" up to an 11-digit number per second with no EC math involved, just pure counting. I would like to know how you can generate 4 exakeys/s using keyhunt?
jr. member
Activity: 115
Merit: 1
I wonder: is it more probable to find a key through a random approach or with consecutive trials?

Random mode adds the probability of searching the same range more than once. Also, keyhunt's reported speed slowly grows: at the start it is 4 exakeys/s, after a week it is 6 exakeys/s (maybe it's just wrong counting, not a real speed boost).

By the way, is keyhunt suitable for puzzle 130?
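On the random-vs-consecutive question: for a single trial, both modes are equally likely to hit the key, but random sampling revisits work, which is the repetition the post mentions. A small sketch of the coverage difference under a simple uniform-sampling model (an illustration, not a statement about keyhunt internals):

```python
import math

def coverage_sequential(n, N):
    """Sequential scanning never repeats: n draws cover n/N of the space."""
    return min(n / N, 1.0)

def coverage_random(n, N):
    """Uniform random draws (with replacement) revisit slots: expected
    coverage is 1 - exp(-n/N) instead of n/N."""
    return 1.0 - math.exp(-n / N)

N = 2 ** 20
n = N  # one full pass' worth of work
print(coverage_sequential(n, N))        # 1.0  : every slot visited once
print(round(coverage_random(n, N), 3))  # 0.632: ~37% of slots never sampled
```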
newbie
Activity: 5
Merit: 0
I wonder: is it more probable to find a key through a random approach or with consecutive trials?
newbie
Activity: 1
Merit: 0
I am not sure whether somebody will solve 13zb1hQbWVsc2S7ZTZnP2G4undNNpdh5so this year.

But I wish you good luck Smiley!

Currently I am using Rotor-Cuda, but without any result so far.
I have no clue how to divide the ranges.

hero member
Activity: 862
Merit: 662
I don't see any violation of the forum rules in saying what I think.

And who mentioned anything about the rules?

All I am saying is: stop spreading that nonsense.

How many times in the past did users comment "What if puzzle 64 is not in the expected range?" Now we know they just made fools of themselves (just saying it out loud).

One should not spend beyond their means trying to find one of the private keys.

Exactly
full member
Activity: 1162
Merit: 237
Shooters Shoot...
The challenge creator already stated the reasons for the challenge.

Nothing else to say.

One should not spend beyond their means trying to find one of the private keys.

You can't do that and then say all of this is BS, or that the creator is laughing. If anything, they would laugh in comfort, knowing BTC wallets are safe, for now.
newbie
Activity: 5
Merit: 0
Please keep that kind of "thinking" to yourself
I don't see any violation of the forum rules in saying what I think. It is very strange to ask someone not to say something you don't want to discuss. Just don't discuss it.

People used to say the same about puzzle 120; they said "The author moved it for himself." In that case, why increase the value of the puzzles tenfold again after #120 was solved?
But not a word about #125, which moved to the same address as #120 shortly afterwards. Super lucky, or a very tricky person who spent nothing and is doing it for fun? Nonsense.

There is no reason or proof here to support the idea of an incremental puzzle with free cheese to prove safety or anything else.

I just want to address my message to the authors: if you need significant progress on any idea, you don't need to create a "mystique" around it. Just stay in touch with the crowd, explain what you need, and set the task. Everyone knows that the only way to guess something is random or brute force. No bugs, no exploits, nothing. Otherwise, the private keys would have been cracked long ago by people who really know about hacking.
There are a lot of people here who waste a lot of time creating software in the hope of getting something in return, not to prove something to someone.
If the author of the "puzzle" rewarded only those who developed the software, that would be fair. But as it stands, anyone who uses someone else's software can guess a key and take the "reward". What a mess.
Maybe the author doesn't treat cryptocurrency as money? It is definitely money, and it can be dangerous to play with, because you are playing with people. Take responsibility for what happens as a result.

Maybe this all looks like I want to blame someone for my own frustration, but I just want to say: perhaps the author is simply laughing at everyone for some reason, or there is a much greater deception behind this.
hero member
Activity: 862
Merit: 662
Perhaps the author was just joking with everyone and opened all the wallets himself. Thinking out loud...

People used to say the same about puzzle 120; they said "The author moved it for himself." In that case, why increase the value of the puzzles tenfold again after #120 was solved?

Please keep that kind of "thinking" to yourself
member
Activity: 165
Merit: 26
I don't need to prove to anyone what I see, but if it helps someone, the logic is simple:

Imagine a slot machine. It has one slot with 65536**2 options. One generation = one rotation.
The pseudocode is simple:
A true random source of 65536**2-range values can (and will) spit out a (42, 42, ...) sequence just as likely as (0x7b03aa9f, 0x33bcf51c, ...). If your argument is that it is less likely for the same sub-ranges to be part of a combined range, that is correct, but the sum of the probabilities of all these cases is below 0.00000...01% of the entire count of possibilities, as demonstrated by your huge generated files. So: a lot of convoluted work to exclude a relatively few close-to-zero edge cases.
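The equal-likelihood point can be made concrete: under i.i.d. uniform sampling, every specific sequence of the same length has exactly the same probability, no matter how "patterned" it looks. A minimal sketch using the 65536**2 slot machine described above:

```python
from fractions import Fraction

M = 65536 ** 2  # options per rotation of the slot machine

def sequence_probability(seq):
    """Probability of drawing exactly this sequence, i.i.d. uniform over M."""
    return Fraction(1, M) ** len(seq)

# A 'regular-looking' sequence and an 'arbitrary-looking' one of the same
# length are exactly equally likely:
p_regular   = sequence_probability((42, 42))
p_arbitrary = sequence_probability((0x7b03aa9f, 0x33bcf51c))
```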