
Topic: == Bitcoin challenge transaction: ~1000 BTC total bounty to solvers! ==UPDATED==

full member
Activity: 1162
Merit: 237
Shooters Shoot...

He can't lol.

Really, what do you mean?

How is 0x10492A658E39638E5 the first value for the 66-bit range?

Maybe I am misreading your statement(s).
jr. member
Activity: 136
Merit: 2
newbie
Activity: 6
Merit: 0
If you can do that, congratulations, because you just partially broke the elliptic curve.

No, I mean I can reduce the generator range to skip non-random values, so the brute-force time is reduced too.

For example, a 23-bit key to test (Python 3.11 + ice_secp256k1.dll).
With the secret algo:
GOT: KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3qYjgd9M7rVkthFNsQ6i7
10.363348245620728 s

With the usual range (2^22 ... 2^23-1):
GOT: KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3qYjgd9M7rVkthFNsQ6i7
16.832353353500366 s

With big values, like 66 bits, a lot of values are simply skipped as NOT random binary values, because they can't have been randomly generated by the author (by the wallet software).
For example, the first value for the 66-bit range is 100000100100100101010011001011000111000111001011000111000111001011; all values less than this fail.
The generator gives this value as the first value once the randomness rules are applied.

Anyway, pure Python is not a good instrument to get a result. I want to use numba cuda.jit, but I'm still learning how to.

Hi fecell, can you please explain more about why values less than "100000100100100101010011001011000111000111001011000111000111001011" will fail? Thanks and regards.
jr. member
Activity: 136
Merit: 2
If you can do that, congratulations, because you just partially broke the elliptic curve.

No, I mean I can reduce the generator range to skip non-random values, so the brute-force time is reduced too.

For example, a 23-bit key to test (Python 3.11 + ice_secp256k1.dll).
With the secret algo:
GOT: KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3qYjgd9M7rVkthFNsQ6i7
10.363348245620728 s

With the usual range (2^22 ... 2^23-1):
GOT: KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3qYjgd9M7rVkthFNsQ6i7
16.832353353500366 s

With big values, like 66 bits, a lot of values are simply skipped as NOT random binary values, because they can't have been randomly generated by the author (by the wallet software).
For example, the first value for the 66-bit range is 100000100100100101010011001011000111000111001011000111000111001011; all values less than this fail.
The generator gives this value as the first value once the randomness rules are applied.

Anyway, pure Python is not a good instrument to get a result. I want to use numba cuda.jit, but I'm still learning how to.
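
Just to make the comparison reproducible: below is a minimal pure-Python sketch of a plain sequential scan over a key range, without ice_secp256k1.dll and without any "randomness" filter (that part is fecell's secret algo, which is not shown here). The example key 0x5C0FFE is only a placeholder in the 2^22 ... 2^23-1 range; expect the scan to take minutes in pure Python, which is exactly why a DLL or CUDA is used instead.

Code:
import hashlib

# secp256k1 domain parameters
P  = 2**256 - 2**32 - 977
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def point_add(p1, p2):
    # Affine addition/doubling on secp256k1; None is the point at infinity
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1) * pow(2 * y1, P - 2, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, P - 2, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mult(k):
    # k*G by double-and-add
    result, addend = None, (Gx, Gy)
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result

def hash160(point):
    # RIPEMD160(SHA256(compressed pubkey)); needs an OpenSSL with ripemd160
    x, y = point
    pub = (b'\x03' if y & 1 else b'\x02') + x.to_bytes(32, 'big')
    return hashlib.new('ripemd160', hashlib.sha256(pub).digest()).digest()

def scan(start, end, target_h160):
    # Sequential scan: compute start*G once, then add G for every next key
    point = scalar_mult(start)
    for k in range(start, end):
        if hash160(point) == target_h160:
            return k
        point = point_add(point, (Gx, Gy))
    return None

target = hash160(scalar_mult(0x5C0FFE))      # placeholder 23-bit key
print(hex(scan(2**22, 2**23, target)))       # -> 0x5c0ffe (eventually)

Note the scan adds G once per key instead of recomputing k*G from scratch; that incremental add is what the GPU tools do in bulk.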
jr. member
Activity: 41
Merit: 2
Those are tests with KeyHunterGPU; Rotor has some flaws in it, especially if using the continue option. I stopped using/testing all versions of Rotor after I discovered a bug with the continue option, because it was causing keys to be skipped.

Hello. What is the continue option? What bug exactly? Also, can somebody explain to me what the maxFound option is and how it is used in the code? Thanks.
The continue option was an option in rotorcuda that would save how many keys were searched and the grid size, and readjust the range on a restart.
It had flaws, in that sometimes it would not adjust correctly, or the total-keys-searched line would be blank.

The maxFound option was the maximum number of keys the program could find in a single kernel call. I don't remember that being in keyhuntcuda or rotorcuda; it's more of a vanitysearch/forks-of-vanitysearch thing.

Thanks. I also had some thoughts about optimizing this program.

1. It uses global device memory access even when searching for a single key. Why can't it fit the searched Bitcoin address's RIPEMD-160 hash, the public key increment function, and the RIPEMD-160(SHA-256) functions in cache? (See the sketch below.)
2. As I understand it, the kernel is launched from the CPU thread several times over the range. Why not launch it just once for the entire range supplied to the kernel?
3. Could RIPEMD-160(SHA-256) use Tensor cores?
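
On point 1: for what it's worth, the cached-lookup idea is easy to prototype from Python with numba's cuda.jit (mentioned a few posts up). This is only a hypothetical sketch, not how KeyHunt/Rotor actually does it, and the all-zero target hash160 is a placeholder; it just shows the target being baked into constant memory (which is served from the on-chip constant cache) rather than read from global memory by every thread.

Code:
import numpy as np
from numba import cuda

# Placeholder 20-byte target hash160 (all zeros) -- swap in a real one
TARGET_H160 = np.zeros(20, dtype=np.uint8)

@cuda.jit
def compare_kernel(hashes, hits):
    # hashes: (n, 20) uint8 device array of already-computed hash160 values
    # The target lives in constant memory, cached on-chip
    target = cuda.const.array_like(TARGET_H160)
    i = cuda.grid(1)
    if i < hashes.shape[0]:
        match = True
        for j in range(20):
            if hashes[i, j] != target[j]:
                match = False
                break
        hits[i] = 1 if match else 0

# Example launch: 1,000,000 candidates, 256 threads per block
n = 1_000_000
hashes = cuda.to_device(np.random.randint(0, 256, size=(n, 20), dtype=np.uint8))
hits = cuda.device_array(n, dtype=np.uint8)
compare_kernel[(n + 255) // 256, 256](hashes, hits)
print(int(hits.copy_to_host().sum()), "matches")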
full member
Activity: 1162
Merit: 237
Shooters Shoot...
Those are tests with KeyHunterGPU; Rotor has some flaws in it, especially if using the continue option. I stopped using/testing all versions of Rotor after I discovered a bug with the continue option, because it was causing keys to be skipped.

Hello. What is the continue option? What bug exactly? Also, can somebody explain to me what the maxFound option is and how it is used in the code? Thanks.
The continue option was an option in rotorcuda that would save how many keys were searched and the grid size, and readjust the range on a restart.
It had flaws, in that sometimes it would not adjust correctly, or the total-keys-searched line would be blank.

The maxFound option was the maximum number of keys the program could find in a single kernel call. I don't remember that being in keyhuntcuda or rotorcuda; it's more of a vanitysearch/forks-of-vanitysearch thing.
jr. member
Activity: 41
Merit: 2
Those are tests with KeyHunterGPU; Rotor has some flaws in it, especially if using the continue option. I stopped using/testing all versions of Rotor after I discovered a bug with the continue option, because it was causing keys to be skipped.

Hello. What is the continue option? What bug exactly? Also, can somebody explain to me what the maxFound option is and how it is used in the code? Thanks.
member
Activity: 43
Merit: 10
Yes, you could. I implemented a total key counter in mine. It would print the total # of keys to a file; that way, even if the power went out, I'd have a good starting point.

If you do it this way, make sure you keep/know the start and end range. You then need to take the total keys run, divide by the number of threads (grid size), and then add that number to your initial/last start AND end range.

Example:
Say you had a start range of 0 and an end range of 1,000,000 (keeping it small for this purpose) and your grid size was 10x10. The program says you have run/checked 10,000 keys total.
Take 10,000 (total keys) and divide by 10x10=100 (grid size); 10,000 / 100 = 100. So each GPU thread checked 100 keys.
So for your next batch file, you would have a start/end range of 100:1000100.
If you only change the start range by 100, then you are overlapping/possibly missing keys checked on the other threads. If you stop and think about it, or do the math, it'll make sense.
Your first thread checked 0-100 (now on second run it should start at 100 and be on the hook to check up to 10,100); the last thread checked 990,000-990,100. If you don't adjust the end range as well, your last thread will now be checking 999,900 instead of starting where it left off at 990,100.
Lol, again, if you do the math you'll understand. Hope it made some sense.


That's actually a very good explanation, really appreciate it man!
full member
Activity: 1162
Merit: 237
Shooters Shoot...
Yeah, I was testing different grids for my card the other day, within the reasonable bounds, keeping it a small multiple of the original grid.

After a while I decided to play with it a bit and increased the grid by larger factors, thus arriving at numbers like 18000 and 32000. They are not arbitrary.

I thought the program wouldn't even start. To my surprise, not only did it run, but it ran at increased speeds. Then it came to mind that it was probably jumping over a bunch of keys.  Roll Eyes

It was too good to be true. I was getting a constant 5GK/s with sudden peaks to 9GK/s every few seconds running sequential X-Point mode with a grid-size like --gpux 36000x512 against #130.

Raise it much further than that and the speed will keep dropping to 0.00 MK/s for a few seconds during the entire search.

Anyway, if going for random mode, I guess this issue could be overlooked? One can have more threads and thus search faster, the trade-off being that some keys get skipped...

About this other flaw [in a scenario where the grid-size is not causing it to skip keys], could I avoid it by updating the lower range in my .bat file to be the last key shown in the counter before terminating the session, instead of using continue.bat?

Thanks!

Yes, you could. I implemented a total key counter in mine. It would print the total # of keys to a file; that way, even if the power went out, I'd have a good starting point.

If you do it this way, make sure you keep/know the start and end range. You then need to take the total keys run, divide by the number of threads (grid size), and then add that number to your initial/last start AND end range.

Example:
Say you had a start range of 0 and an end range of 1,000,000 (keeping it small for this purpose) and your grid size was 10x10. The program says you have run/checked 10,000 keys total.
Take 10,000 (total keys) and divide by 10x10=100 (grid size); 10,000 / 100 = 100. So each GPU thread checked 100 keys.
So for your next batch file, you would have a start/end range of 100:1000100.
If you only change the start range by 100, then you are overlapping/possibly missing keys checked on the other threads. If you stop and think about it, or do the math, it'll make sense.
Your first thread checked 0-100 (now on second run it should start at 100 and be on the hook to check up to 10,100); the last thread checked 990,000-990,100. If you don't adjust the end range as well, your last thread will now be checking 999,900 instead of starting where it left off at 990,100.
Lol, again, if you do the math you'll understand. Hope it made some sense.
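
The restart math above fits in a few lines, if it helps anyone; a small Python helper sketch (the names are made up):

Code:
def next_range(start, end, total_keys_checked, grid_x, grid_y):
    # Shift BOTH ends of the range by the keys each thread already covered
    threads = grid_x * grid_y                    # e.g. 10 x 10 = 100 threads
    per_thread = total_keys_checked // threads   # keys per GPU thread so far
    return start + per_thread, end + per_thread

# The example above: range 0..1,000,000, grid 10x10, 10,000 keys checked
print(next_range(0, 1_000_000, 10_000, 10, 10))  # -> (100, 1000100)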
member
Activity: 43
Merit: 10
Yeah, I was testing different grids for my card the other day, within the reasonable bounds, keeping it a small multiple of the original grid.

After a while I decided to play with it a bit and increased the grid by larger factors, thus arriving at numbers like 18000 and 32000. They are not arbitrary.

I thought the program wouldn't even start. To my surprise, not only did it run, but it ran at increased speeds. Then it came to mind that it was probably jumping over a bunch of keys.  Roll Eyes

It was too good to be true. I was getting a constant 5GK/s with sudden peaks to 9GK/s every few seconds running sequential X-Point mode with a grid-size like --gpux 36000x512 against #130.

Raise it much further than that and the speed will keep dropping to 0.00 MK/s for a few seconds during the entire search.

Anyway, if going for random mode, I guess this issue could be overlooked? One can have more threads and thus search faster, the trade-off being that some keys get skipped...

About this other flaw [in a scenario where the grid-size is not causing it to skip keys], could I avoid it by updating the lower range in my .bat file to be the last key shown in the counter before terminating the session, instead of using continue.bat?

Thanks!
full member
Activity: 1162
Merit: 237
Shooters Shoot...
You might be interested in reading and learning more about grid sizes; there are more stats you could find by visiting this nvidia page, with technical details on what grid sizes are acceptable for different applications. You can't simply use an arbitrary grid size.
Digger doesn't even own a GPU or PC, lol. He's doing all of his tests from an old Blackberry flip phone Smiley

I've used some large grid sizes, CY4NiDE, but I keep them as multiples. If the card's stock grid is 38,128, then I will keep a front grid that is a multiple of 38. I normally run a multiple of 38x256. I've used 760x512 with no issues, 1520x512 with no issues, and I think one multiple higher.

Those are tests with KeyHunterGPU; Rotor has some flaws in it, especially if using the continue option. I stopped using/testing all versions of Rotor after I discovered a bug with the continue option, because it was causing keys to be skipped.



copper member
Activity: 1330
Merit: 899
🖤😏
You might be interested in reading and learning more about grid sizes; there are more stats you could find by visiting this nvidia page, with technical details on what grid sizes are acceptable for different applications. You can't simply use an arbitrary grid size.
member
Activity: 43
Merit: 10
I definitely got ahead of myself.  Grin

I ran X-Point mode using --gpux 32000,512 against 20 keys spread over the 2^40 range, and only 2 of those keys were found.

Same with --gpux 18000,512 and anything in between.

In the end, only with --gpux 1024,512 was the program able to find all 20 keys without skipping.

I'll run 2^50 next against more keys to see if this effect gets mitigated as the space increases.

Haven't checked Address mode or Hash160 mode yet.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
Hey there, thanks for your reply. Much appreciated.

So if it can pass the 2^40 test without skipping any keys, can I deem it safe?

So far no problems with 2^35, 2^38, 2^39, 2^40.

--gpux 32000,512
I would run a few tests. Put some keys at the beginning of the range, in the middle, and at the end. I've used some large grid sizes with no issues.

If you are using the rekey option, you will get fluctuation in your speed no matter the grid size, as it spins up to rekey and then picks back up.
member
Activity: 43
Merit: 10
Hey there, thanks for your reply. Much appreciated.

So if it can pass the 2^40 test without skipping any keys, can I deem it safe?

So far no problems with 2^35, 2^38, 2^39, 2^40.

--gpux 32000,512
full member
Activity: 1162
Merit: 237
Shooters Shoot...
While running some performance tests with Rotor-Cuda, I've noticed that when I assign a monstrous grid-size to my GPU, I can get more speed.

If I set it like this --gpux 18000,512 I get a steady 4.62 GK/s peaking at 6.90 GK/s for a few seconds.

Can this cause any problems, like skipping keys during the search? If so, can anyone recommend a good grid-size for a 3080ti?

Thanks in advance!
The best thing to do is to test your grid size and run through a small range, something like a 2^40 range. See if the grid size finds the key or not.

I use a similar version of KeyHunt Cuda / Rotor and haven't missed a key with a large grid size. But seriously, run a simple test to know for sure with your card and setup.
member
Activity: 43
Merit: 10
While running some performance tests with Rotor-Cuda, I've noticed that when I assign a monstrous grid-size to my GPU, I can get more speed.

If I set it like this --gpux 18000,512 I get a steady 4.62 GK/s peaking at 6.90 GK/s for a few seconds.

Can this cause any problems, like skipping keys during the search? If so, can anyone recommend a good grid-size for a 3080ti?

Thanks in advance!
jr. member
Activity: 35
Merit: 1
How much time and what resources would it take to find the private key in the range 40000000000000000000000000000...7ffffffffffffffffffffffffffff (puzzle 115) with Kangaroo?
copper member
Activity: 1330
Merit: 899
🖤😏
Does anyone have concerns about the advancement of machine learning algorithms becoming increasingly capable of breaking complex encryption methods, such as secp256k1 and others?
What are "secp256k1 encryption methods"? An elliptic curve doesn't encrypt anything; it's just a numbering system with 2 sides, negative and positive.

Here is an example of how it is possible to "break" the key:
Imagine this is your private key:
368812095337 and this is your public key:
023eb3a5c5c9e1303055215b2f2f455288bda053e1791a2b06d2c385cbc94eb51a

Now imagine you have zero knowledge about the private key. Can you or an AI tell us how we can reduce the size of the private key down to 368812 without even knowing the actual key above? If you guys can do that, I'd be concerned.
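
For anyone who wants to see the one-way relationship being described here, a quick sketch using the python-ecdsa package that derives the compressed public key from the decimal private key quoted above; going in that direction is trivial, and the challenge is that nobody can efficiently go the other way.

Code:
from ecdsa import SigningKey, SECP256k1

priv = 368812095337                          # the example private key above
sk = SigningKey.from_secret_exponent(priv, curve=SECP256k1)
point = sk.verifying_key.pubkey.point        # priv * G on secp256k1

# Compressed SEC encoding: 02/03 prefix by y parity, then the 32-byte x
prefix = b'\x03' if point.y() % 2 else b'\x02'
print((prefix + point.x().to_bytes(32, 'big')).hex())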
newbie
Activity: 1
Merit: 0
Does anyone have concerns about the advancement of machine learning algorithms becoming increasingly capable of breaking complex encryption methods, such as secp256k1 and others?