
Topic: Bitcoin puzzle transaction ~32 BTC prize to who solves it - page 8. (Read 189853 times)

hero member
Activity: 630
Merit: 731
Bitcoin g33k
Private Keys per second: 170893.39

Can you guys write some decent code at least...

It's mission impossible, even in C++.

I took that personally

I wrote a C program to do the same as your Python script, and I get at least 100 times more numbers per second in a single thread on my laptop's Core i5

Code:
Total 639630811 numbers in 39 seconds: 16400790.000000 numbers/s
Total 650116562 numbers in 40 seconds: 16252914.000000 numbers/s
Total 671088064 numbers in 41 seconds: 16368001.000000 numbers/s
Total 681573815 numbers in 42 seconds: 16227947.000000 numbers/s

I know it is still far from the required number, but it is 100x faster...
I would have been very surprised if it had been the other way around. C is of course more performant and much more efficient in these computing areas; Python can hardly keep up.

hero member
Activity: 861
Merit: 662
Private Keys per second: 170893.39

Can you guys write some decent code at least...

It's mission impossible, even in C++.

I took that personally

I wrote a C program to do the same as your Python script, and I get at least 100 times more numbers per second in a single thread on my laptop's Core i5

Code:
Total 639630811 numbers in 39 seconds: 16400790.000000 numbers/s
Total 650116562 numbers in 40 seconds: 16252914.000000 numbers/s
Total 671088064 numbers in 41 seconds: 16368001.000000 numbers/s
Total 681573815 numbers in 42 seconds: 16227947.000000 numbers/s

I know it is still far from the required number, but it is 100x faster...

hero member
Activity: 1736
Merit: 857
are close to 0% of the total possible different patterns

Interestingly, the proportion of such numbers increases as the range grows higher.

Quote
============================= RESTART: D:\NumHex.py ============================
In the range from 0x1000 to 0x2000, 1 hexadecimal numbers were found with repeating digits occurring four or more times in a row.
This accounts for 0.02% of the total hexadecimal numbers in the range.

============================= RESTART: D:\NumHex.py ============================
In the range from 0x10000 to 0x20000, 32 hexadecimal numbers were found with repeating digits occurring four or more times in a row.
This accounts for 0.05% of the total hexadecimal numbers in the range.

============================= RESTART: D:\NumHex.py ============================
In the range from 0x100000 to 0x200000, 737 hexadecimal numbers were found with repeating digits occurring four or more times in a row.
This accounts for 0.07% of the total hexadecimal numbers in the range.

============================= RESTART: D:\NumHex.py ============================
In the range from 0x1000000 to 0x2000000, 15617 hexadecimal numbers were found with repeating digits occurring four or more times in a row.
This accounts for 0.09% of the total hexadecimal numbers in the range.

============================= RESTART: D:\NumHex.py ============================
In the range from 0x10000000 to 0x20000000, 311282 hexadecimal numbers were found with repeating digits occurring four or more times in a row.
This accounts for 0.12% of the total hexadecimal numbers in the range.

============================= RESTART: D:\NumHex.py ============================
In the range from 0x100000000 to 0x200000000, 5963072 hexadecimal numbers were found with repeating digits occurring four or more times in a row.
This accounts for 0.14% of the total hexadecimal numbers in the range.

Anyone who wishes can run the calculation independently for the 130-bit range, but it will take a very long time:

Code:
def count_repeated_digits_hex(start, end):
    """Count values in [start, end] whose hex form has 4+ identical digits in a row."""
    count = 0
    total_nums = int(end, 16) - int(start, 16) + 1

    def has_repeated_digits(num_str):
        prev_digit = None
        consecutive_count = 0

        for digit in num_str:
            # Extend the current run of identical digits, or start a new one
            if digit == prev_digit:
                consecutive_count += 1
            else:
                consecutive_count = 1

            if consecutive_count >= 4:
                return True

            prev_digit = digit

        return False

    for num in range(int(start, 16), int(end, 16) + 1):
        num_str = hex(num)[2:]  # Remove '0x' prefix
        if has_repeated_digits(num_str):
            count += 1

    percentage = (count / total_nums) * 100

    return count, percentage

start = "0x100000000"
end = "0x200000000"
result, percentage = count_repeated_digits_hex(start, end)
print(f"In the range from {start} to {end}, {result} hexadecimal numbers were found with repeating digits occurring four or more times in a row.")
print(f"This accounts for {percentage:.2f}% of the total hexadecimal numbers in the range.")
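For the 130-bit range a full enumeration is hopeless, but the same percentage can be estimated in seconds by sampling instead of iterating. A minimal Monte Carlo sketch (the helper mirrors the 4-in-a-row test above; function names are mine):

```python
import random

def has_run_of_four(num_str):
    """True if num_str contains 4 or more identical consecutive digits."""
    prev, streak = None, 0
    for d in num_str:
        streak = streak + 1 if d == prev else 1
        if streak >= 4:
            return True
        prev = d
    return False

def estimate_percentage(start_hex, end_hex, samples=100_000, seed=1):
    """Estimate (by uniform sampling) the share of values with a 4+ digit run."""
    rng = random.Random(seed)
    lo, hi = int(start_hex, 16), int(end_hex, 16)
    hits = sum(has_run_of_four(format(rng.randint(lo, hi), 'x'))
               for _ in range(samples))
    return 100.0 * hits / samples

# Puzzle 130 range: sampling sidesteps iterating over 2**129 values.
print(estimate_percentage("0x200000000000000000000000000000000",
                          "0x3ffffffffffffffffffffffffffffffff"))
```

With a hundred thousand samples the estimate should be accurate to a few hundredths of a percentage point, which is enough to see whether the trend from the smaller ranges continues.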
member
Activity: 286
Merit: 15
There are algorithms that actually produce uniform real randomness (not some deterministic PRNG bullshit like someone mentioned earlier, I mean really lol? Haven't they heard about os.urandom and how it works?)

Code:
import os, sys
import time

min_range = 18446744073709551615   # 2**64 - 1
max_range = 36893488147419103231   # 2**65 - 1
counter = 0  # keys generated so far
start_time = time.time()

while True:
    random_bytes = os.urandom(9)              # 72 random bits
    full_bytes = b'\x00' * 23 + random_bytes  # left-pad to 32 bytes
    dec = int.from_bytes(full_bytes, byteorder='big')
    counter += 1
    rate = counter / (time.time() - start_time)
    sys.stdout.write("\033[01;33m\rPrivate Keys per second: {:.2f}\r".format(rate))
    sys.stdout.flush()
    if min_range <= dec <= max_range:
        if dec == 30568377312064202855:
            print("\nSeed :", random_bytes)
            print("Generated number:", dec)
            break

This is a Python script that tests os.urandom speed.
The example works for Puzzle 65. There is no hash or secp256k1 operations here - just numbers.
Result is (on my PC) :
Private Keys per second: 170893.39

Do you know how many numbers need to be generated per second to find Puzzle 65 in 10 minutes?

30744573456182586  Private Keys per second !

It's mission impossible, even in C++.

We need Grey aliens' hardware from Zeta Reticuli to solve this.
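The quoted rate can be double-checked with two lines of arithmetic: puzzle 65's keyspace is [2^64, 2^65 - 1], i.e. 2^64 keys, and sweeping all of it in 600 seconds requires exactly the figure above.

```python
range_size = 2**65 - 2**64            # puzzle 65 keyspace: 2**64 keys
keys_per_second = range_size // 600   # sweep it all in 10 minutes
print(keys_per_second)                # 30744573456182586
```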
newbie
Activity: 18
Merit: 0
I arranged the discovered keys up to number 65 in hex format, without spaces, back to back.
With a rotation of 180 degrees and a distance of 550 between each character, we see a special order.
- Professors and elders of science, can we know the reason for this?
- If these keys are random, why should this order occur?
- Can someone write a script that will give us the same output from the keys?
Friends, the shape of a circle is not an omen, nor imagination, nor nonsense!
The circle is mathematics. The formula within the formula...
https://www.talkimg.com/images/2024/05/05/roBa2.gif
jr. member
Activity: 40
Merit: 6
You and I are probably talking about different things. I mean, it is extremely unlikely that among the remaining undisclosed keys there are patterns such as ffff or 8888 or more repetitions of the same digits.
At the same time, I doubt that such combinations as, for example, c5ec5e or dd4dd4, etc., can be considered unlikely. Therefore, I think it is possible to discard numbers containing patterns of more than three identical digits.
Without any calculations, I roughly assumed that there could not be more than 20% of such numbers in the range.
Sorry to ruin your intuition, but we were talking about exactly the same thing.

The effort to discard unlikely patterns is massively undermined by the fact that such unlikely patterns are close to 0% of the total possible different patterns (e.g. the ones your mind sees as being without a pattern).

42 is a totally random number, isn't it? Until you find a gazillion correlations, starting with the fact that it's now no longer random, because I mentioned it in this thread. What's the chance a key will contain c5ec5e? Lower now, because you mentioned it? This is a classic fallacy.

A random number generator does NOT care about any patterns, repetitions, or weirdnesses of human perception.

Just because a found key doesn't contain a sequence of 30 consecutive identical digits is not because it was impossible; it is because there are billions upon billions of other possible combinations that had exactly the same chance of being selected by this thing called nature / reality. Repeat the experiment many billions of times and it WILL appear. It can appear the first time or the Nth time; the chances are the same at any time...

There are algorithms that actually produce uniform real randomness (not some deterministic PRNG bullshit like someone mentioned earlier, I mean really lol? Haven't they heard about os.urandom and how it works?) as a benefit to speed up results, not as something that cripples the search. So if you're working against randomness, it's a lost battle on all possible fronts. You need to embrace it instead.
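The point about patterns is easy to verify empirically: draw uniform keys and the "unlikely" digit runs show up at exactly the rate elementary probability predicts, neither avoided nor favored. A seeded sketch (helper name and sample size are my choices):

```python
import random

def longest_run(hex_str):
    """Length of the longest run of identical hex digits in hex_str."""
    best = cur = 1
    for a, b in zip(hex_str, hex_str[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

rng = random.Random(42)
# Draw 200,000 uniform 65-bit keys (16-17 hex digits each).
runs = [longest_run(format(rng.getrandbits(65), 'x')) for _ in range(200_000)]

share = sum(r >= 4 for r in runs) / len(runs)
print(f"keys with a 4+ digit run: {share:.3%}")  # small, but clearly nonzero
print(f"longest run observed: {max(runs)}")      # "impossible" runs do occur
```

A per-position chance of roughly (1/16)^3 across about thirteen starting positions predicts a few tenths of a percent, which is about what the sample shows.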
newbie
Activity: 19
Merit: 0
I'd say the proportion of "unlikely patterns" in a private key of size n is more like very close to 0.00% (zero percent) the higher n gets. And it goes towards 0 exponentially fast as n grows.

You and I are probably talking about different things. I mean, it is extremely unlikely that among the remaining undisclosed keys there are patterns such as ffff or 8888 or more repetitions of the same digits.
At the same time, I doubt that such combinations as, for example, c5ec5e or dd4dd4, etc., can be considered unlikely. Therefore, I think it is possible to discard numbers containing patterns of more than three identical digits.
Without any calculations, I roughly assumed that there could not be more than 20% of such numbers in the range.

True, this is another pattern.

But here's something I found yesterday. I sincerely just want some code to run on a Mac.

Does anyone know of code for Mac? Python?

thx in advance
hero member
Activity: 1736
Merit: 857
I'd say the proportion of "unlikely patterns" in a private key of size n is more like very close to 0.00% (zero percent) the higher n gets. And it goes towards 0 exponentially fast as n grows.

You and I are probably talking about different things. I mean, it is extremely unlikely that among the remaining undisclosed keys there are patterns such as ffff or 8888 or more repetitions of the same digits.
At the same time, I doubt that such combinations as, for example, c5ec5e or dd4dd4, etc., can be considered unlikely. Therefore, I think it is possible to discard numbers containing patterns of more than three identical digits.
Without any calculations, I roughly assumed that there could not be more than 20% of such numbers in the range.
jr. member
Activity: 37
Merit: 1
wish me luck, current speed is 100K keys per sec.

100K keys/s is very low.
You're better off using keyhunt by alberto.
You may get 1M+ keys/s even on a potato CPU.



Nice info, but I already knew about Alberto's BSGS from years past.
jr. member
Activity: 37
Merit: 1

wish me luck, current speed is 100K keys per sec.

It must be a joke! You're joking, aren't you?

😂🥶🤫
newbie
Activity: 8
Merit: 0
wish me luck, current speed is 100K keys per sec.

100K keys/s is very low.
You're better off using keyhunt by alberto.
You may get 1M+ keys/s even on a potato CPU.

newbie
Activity: 33
Merit: 0

wish me luck, current speed is 100K keys per sec.

It must be a joke! You're joking, aren't you?
member
Activity: 286
Merit: 15
wish me luck, current speed is 100K keys per sec.

No luck here.
jr. member
Activity: 37
Merit: 1
| 38F6219EE3A3D85D4 | 13zb1P1kw3PyhCiwD3UNpD84fx2axrJhdv   | 20d4593f55c6ad7c26ec814666fce11c74c26240 |
| 22EB598FC469C5C0C | 13zb1WBV2Z48YuUnqEeSMa3a5P4GTMNBXw   | 20d459b3e617ee5b2f4a0ade3489c74e18ae6726 |
| 31D5E00B519D1D343 | 13zb1cs4B4gNiq87JnSNub9o51qtvFFE6G   | 20d45a2091876e79ea6a59f6702d6541b46887a2 |  
| 20C79DCD75513ADA1 | 13zb1iQ3mvS8s4JxBfrT4VU3Kct49jR8z2   | 20d45a7a920ba6f3362cfc5a9a708b717947f31a |
| 340FAE4332F3F93F3 | 13zb1au9kvqRGr497sGb3hoY11NdtZTzHo   | 20d45a00a1a6976e5b3a70ac3ec54387495f799b |
| 3CD7796DC62D738D6 | 13zb1BJZdLWFXnY8opAHtC7tEYKuAx7UY6   | 20d45880e5f2775bef64b160083c7f70ad4e4a2b |
| 3C661DD146906999C | 13zb14dzagJSAs8ExAxiVURvK4hxHGBSvr   | 20d45814826a5130c6b6e99846eefb7b29b08966 |
| 2570B465E2E2B7539 | 13zb1gj8kjC8BcPiLQchHiUYCP7whomxQK   | 20d45a5f65a8db614542cb063ba508920e8294c7 |
| 3512F971A039784C3 | 13zb111npnaATyQibmmhyGo4gyBBgLcB2g   | 20d457d9924eb254900aab37b690f42e3794deb3 |
| 2E4115FFD6DED498D | 13zb1NJoKk8QNVq39ovmvmTP6Qczi6QdEN   | 20d45933d9cd875940a19674e419891c4e1965e7 |
| 2FC3BE3D049B93B5C | 13zb1df5wTfUxc8DCE2FguuX9HRqniUvrj   | 20d45a2d798a79492525fb7c58802c00c99f3c7d |
| 273FD355C07294A64 | 13zb1MeqFdsnQgszDHxyaVLGBmjXL5ikQo   | 20d45929349816c3327f2be99df4595f1653538b |
| 200DAE45EAC5DF2A5 | 13zb16nndRPtF5z6W253AaJ8tmyb2M9hrf   | 20d458377ffaa3c1fb8b5f38d8ae0a1b1a759d15 |
| 28CA317317998B18B | 13zb11DtFvnKEDqSCc1KEMMj6xo16QxFHc   | 20d457dcf660bc47dcd9e54b6fbc36d90a239338 |
| 34E3010DE01C74869 | 13zb1JNs7GtHcpxLEdjhGq6mxnGSHga37d   | 20d458f3f0e8de9ae3711ddbcb843f506dbe7197 |
| 219478B3A52F0735F | 13zb1qhSuRqytgNHBU4aZ2WRNhjVURarVf   | 20d45af1491f5d903895555f040353db14414ed3 |
| 2355254DEE6FCDFF6 | 13zb1yUcbaQfieXJ74pKCZaH4CWiD5bkf2   | 20d45b6fc960b83900f812c21028e77c3da9e126 |
| 3A41C7D979F1BAC9D | 13zb1hidRcTS6S737V8FT5PfazHTU4oX2f   | 20d45a6f84891560f096a6726eeae68fe51277dc |
| 3AA5DA3B2B0279D80 | 13zb14sfH99jTZKTwVvtavCpheZ5fZjGrV   | 20d45818576e6483f52ea575422e2b80a5683f7b |
| 250F27D26FF94B3CA | 13zb1VotqWUouo7HmtaatUKcFsMtv6tkJk   | 20d459add86cbcb25a1d8fbd0db2cf62365b3d04 |

wish me luck, current speed is 100K keys per sec.


Just updating my progress, because I'm rarely active; I need someone's help translating my Rust code into C++.
newbie
Activity: 19
Merit: 0
On the other hand, the proportion of such "beautiful numbers" in the search ranges is probably no more than ~ 20%, that is, not very much. But maybe I'm wrong, and you're excluding certain numbers for some other reason.
I'd say the proportion of "unlikely patterns" in a private key of size n is more like very close to 0.00% (zero percent) the higher n gets. And it goes towards 0 exponentially fast as n grows.

The number of valid distinct combinations when selecting k elements from a set of n is massively larger towards the middle (n/2) and the ranges close to the middle, compared to the edges.

And this gap gets tighter and tighter as n increases, resembling a single point in the middle which, when looked at with a "microscope", contains 99.9999% of all the possibilities. Everything else (the 0.0001%) is in the ranges before and after the central point.

This is called the central limit theorem (long-term, all results will be around the average). If this law were invalid in observation, it would mean randomness is not part of the structure of the game, and bias in the phenomenon (a fake coin, a compromised die, non-randomly generated bits) could be proven with 99.99% certainty. But guess what: we already have solutions found with sequences of eleven consecutive 1 bits, and all other sorts of sequences of 1s and 0s as well, which, if you compute them probabilistically, are not at all far from the statistical model of a randomly generated phenomenon, so...? Excluding patterns just complicates the efficiency of the search; not sure I would play such a gamble (higher bid, lower reward).
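The concentration claim can be made exact with binomial coefficients: among all 66-bit values, C(66, k) of them have exactly k one-bits, and almost all of the mass sits in a narrow band around k = 33. A sketch (the ±10-bit band is an arbitrary choice of mine):

```python
from math import comb

n = 66
total = 2**n

# Share of 66-bit values whose popcount is within 10 of n/2 = 33.
middle = sum(comb(n, k) for k in range(23, 44)) / total
# Share with at most 10 one-bits: the "extreme pattern" tail.
edge = sum(comb(n, k) for k in range(0, 11)) / total

print(f"popcount in 23..43: {middle:.4%}")  # the overwhelming majority
print(f"popcount <= 10:     {edge:.2e}")    # vanishingly small tail
```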



It really depends.

In grim reality, the problem is that most randomization at the computer level is not truly random.
We suffer this problem a lot in my area, where what should be random always has a bias or a preference.

Now couple that with code more than 10 years old, which may have more rudimentary random generation compared with modern scripts...
I worked 15 years ago with similar code generators... and yet we saw prevalence in some number generation, clusters and sequences; even with different seeds, some numbers were significantly more prevalent than others.

That's why I'm doing the calculation by hand: I can see clearer patterns arising and reduce the range.
It will take a while, maybe, idk sincerely, but I want to try another route because I don't have a GPU.

Another point is not having a pool of checked addresses.

So, if we think code like BitCrack or keyhunt also picks keys at "random", chances are that, in the long run with enough computers, we are double-checking some addresses, losing time and computing power in the process.

It can be a small headache, but as an example: on https://privatekeys.pw/scanner you can see several people with more than a trillion addresses checked.
The caveat is that many of them will be in sequential mode for puzzle 66, i.e. rechecking the same first addresses over and over.

My perspective on this challenge is to prove that with the extended public key + child private key you can discover all keys.
However, we are going by brute force, and as a mathematician I prefer a more elegant way even if it takes more time... and that's why I'm trying to solve this puzzle.
newbie
Activity: 19
Merit: 0
but idk, I had an idea:
is there any way to extend https://privatekeyfinder.io/bitcoin-puzzle/random-keys/66 from 50 keys at a time to, say, 500? That would cut the time by 10x.
ik ik, it will maybe take more time than a normal vanity address search.

My idea is not only to find puzzle #66, but maybe, if you cross-check against other wallets with balance, more random wallets can be seen at once. It's simple and light for any PC.
You'll need to wait until the end of the galaxy we live in (times many billions) until you stumble upon a key with a wallet balance.

My man, I know math and statistics very, very, very well. Better than you think.
However, I don't have a GPU; just a single laptop, doing 3000 other tasks in parallel and almost out of disk space.

I'm not a neckbeard in a mom's basement with an entire day and money to spend on a GPU.

Today I tried, unsuccessfully, to install keyhunt. IDK if there is another good one for Mac.

I'm calculating by hand to shrink the range, and I'm using https://privatekeys.pw/scanner to scan.
But idk if I can trust it in case I find the key.
jr. member
Activity: 40
Merit: 6
On the other hand, the proportion of such "beautiful numbers" in the search ranges is probably no more than ~ 20%, that is, not very much. But maybe I'm wrong, and you're excluding certain numbers for some other reason.
I'd say the proportion of "unlikely patterns" in a private key of size n is more like very close to 0.00% (zero percent) the higher n gets. And it goes towards 0 exponentially fast as n grows.

The number of valid distinct combinations when selecting k elements from a set of n is massively larger towards the middle (n/2) and the ranges close to the middle, compared to the edges.

And this gap gets tighter and tighter as n increases, resembling a single point in the middle which, when looked at with a "microscope", contains 99.9999% of all the possibilities. Everything else (the 0.0001%) is in the ranges before and after the central point.

This is called the central limit theorem (long-term, all results will be around the average). If this law were invalid in observation, it would mean randomness is not part of the structure of the game, and bias in the phenomenon (a fake coin, a compromised die, non-randomly generated bits) could be proven with 99.99% certainty. But guess what: we already have solutions found with sequences of eleven consecutive 1 bits, and all other sorts of sequences of 1s and 0s as well, which, if you compute them probabilistically, are not at all far from the statistical model of a randomly generated phenomenon, so...? Excluding patterns just complicates the efficiency of the search; not sure I would play such a gamble (higher bid, lower reward).

newbie
Activity: 2
Merit: 0
I have a question. Maybe someone can help.
How would you do the following?
Let's say the decimal is generated using multiplication. For example, multiply 10 by 2 to get 20, convert it into hex and then into a public key, and compare the resulting key with the one we are looking for. So you set 10 as the base, multiply it by each value from 1 to 10000, and check every result.
So is it possible to implement such a script?
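Yes, such a script is implementable. A minimal pure-Python sketch of the idea: for each multiplier m, treat base*m as a private key, compute its public point on secp256k1 by double-and-add, and compare against the target point. The curve constants below are the standard secp256k1 parameters; the function names are mine, and a real search would use a fast library rather than this.

```python
# Standard secp256k1 parameters
P  = 2**256 - 2**32 - 977
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
G  = (Gx, Gy)

def point_add(a, b):
    """Add two points on y^2 = x^3 + 7 over GF(P); None is the identity."""
    if a is None:
        return b
    if b is None:
        return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0:
        return None  # a + (-a) = identity
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def scalar_mult(k, point=G):
    """Compute k * point by double-and-add."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, point)
        point = point_add(point, point)
        k >>= 1
    return result

def search(base, target_point, limit=10000):
    """Try private keys base*1, base*2, ..., base*limit against target_point."""
    for m in range(1, limit + 1):
        if scalar_mult(base * m) == target_point:
            return base * m
    return None

# Toy demo: the target is the public point of (secret) private key 200.
target = scalar_mult(200)
print(hex(search(10, target, limit=100)))  # finds 10 * 20 = 200 -> 0xc8
```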
hero member
Activity: 1736
Merit: 857

I am working on a probabilistic BSGS version. It will increase the "probabilistic speed", but since probabilistic means dropping some keys with unlikely endings or repeated patterns, there is the possibility of a miss.


First of all, let me express my respect to you for creating an excellent Keyhunt program.
Could unlikely key values be numbers in which several (for example, more than three) identical digits appear in a row? I assume this because, from the point of view of probability, it is unlikely that there is even one key of this kind in the puzzle.

On the other hand, the proportion of such "beautiful numbers" in the search ranges is probably no more than ~ 20%, that is, not very much. But maybe I'm wrong, and you're excluding certain numbers for some other reason.
hero member
Activity: 861
Merit: 662
What are the possible solutions? For now, let's consider only the case of public keys, since going through addresses is uninteresting and useless. So there are two options:
A. A fast discrete logarithm algorithm. Whether it exists is unknown, just as it is unknown whether P = NP is true. But there are two algorithms that, in a sense, can be considered as such: kangaroo and BSGS, since they reduce the complexity to O(n^(1/2)).
B. Probabilistic approximation, that is, in other words, a reduction of the search space, where the above algorithms can already be effective, working on reasonable resources rather than on all the video cards in the world.

I am working on a probabilistic BSGS version. It will increase the "probabilistic speed", but since probabilistic means dropping some keys with unlikely endings or repeated patterns, there is the possibility of a miss.

There are no other options yet. And these are definitely not magic circles or "solstice of prime numbers" tables.
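To make the O(n^(1/2)) point concrete, here is baby-step giant-step in a toy setting: solving g^x = h in the multiplicative group mod a small prime with about 2*sqrt(n) group operations instead of n. (Purely illustrative and my own sketch; kangaroo and the real BSGS work in the secp256k1 group, not in mod-p arithmetic like this.)

```python
from math import isqrt

def bsgs(g, h, p):
    """Solve g^x = h (mod p); runs in about 2*sqrt(p) multiplications."""
    m = isqrt(p) + 1
    # Baby steps: a table of g^j for j in [0, m)
    table = {pow(g, j, p): j for j in range(m)}
    # Giant steps: compare h * (g^-m)^i against the table
    g_inv_m = pow(g, -m, p)
    gamma = h
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * g_inv_m % p
    return None  # no solution: h is outside the subgroup generated by g

p = 1000003                  # a small prime, standing in for a big group
h = pow(2, 654321, p)        # "public key" with an unknown exponent
x = bsgs(2, h, p)
print(x, pow(2, x, p) == h)  # the recovered exponent satisfies 2^x = h
```

The table costs ~sqrt(p) memory, which is exactly the time/memory trade-off that distinguishes BSGS from kangaroo at real key sizes.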

Yeah, agree with you 100%; all those users writing BS should not be here.