
Topic: == Bitcoin challenge transaction: ~1000 BTC total bounty to solvers! ==UPDATED== - page 9. (Read 56094 times)

member
Activity: 165
Merit: 26
Quote
I don't need to prove to anyone what i see, but if it helps someone, the logic is simple:

Imagine a slot machine. It has 1 slot with 65536**2 options. One generation = one rotation.
The pseudocode is simple:
A true random source of values in a 65536**2 range can (and will) spit out a (42, 42, ...) sequence just as likely as (0x7b03aa9f, 0x33bcf51c, ...). If your argument is that it's less likely for identical sub-ranges to be part of a combined range, that is correct, but the sum of probabilities of all these cases is below 0.00000...01% of the entire count of possibilities - as demonstrated by your huge generated files. So, a lot of convoluted work to exclude a relatively few close-to-zero edge cases.
newbie
Activity: 5
Merit: 0
By the way. Did any of the topic participants even find private keys? A rhetorical question.
The funds from the last two solved keys were moved to the same address, but for a very long time not a penny was taken from them. Apparently, he did it for fun and not for almost a million bucks?
This all looks like nonsense. I've found the "author's" message and it's unconvincing. "Safety prove" sounds like nonsense, unless the author really doesn't know how to entertain himself at his own cost.
There are too many people here hammering at the 66th key. The number is not so large that this wouldn't have happened over so many years with so many people. Perhaps the author was just joking with everyone and opened all the wallets himself. Thinking out loud...
newbie
Activity: 5
Merit: 0
The sum of all probabilities is always the same, no matter how you “divide” the possibilities.
I thought so too until i started running simulations. I'll explain later.

This is a very complex way to lose performance instead of simply generating a single random number.
Exactly the same waste of computing power, i don't understand how you came to that conclusion.

Numbers are not "converted to hexadecimal", they are numbers. Views are converted.
I know. In the way we usually see them, decimal numbers look visually simpler than hexadecimal ones. I didn't say that the results are different.

I don't need to prove to anyone what i see, but if it helps someone, the logic is simple:

Imagine a slot machine. It has 1 slot with 65536**2 options. One generation = one rotation.
The pseudocode is simple:

Code:
long a = rand(1, 65536**2);
int count = 0;

while (true) {
        count++;

        if (rand(1, 65536**2) == a) {
                print(count);
                a = rand(1, 65536**2);
                count = 0;
        }
}
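A runnable Python sketch of the slot machine above, using a much smaller range (1000 instead of 65536**2, an assumption made so hits come quickly); the average gap between hits should come out close to the range size:

```python
import random

def slot_machine(range_size, hits_wanted, seed=0):
    """Spin until `hits_wanted` matches occur; record the spins between hits."""
    rng = random.Random(seed)
    target = rng.randrange(range_size)
    counts, spins = [], 0
    while len(counts) < hits_wanted:
        spins += 1
        if rng.randrange(range_size) == target:
            counts.append(spins)            # record the gap before resetting it
            spins = 0
            target = rng.randrange(range_size)
    return counts

counts = slot_machine(range_size=1000, hits_wanted=2000)
avg = sum(counts) / len(counts)
print(avg)  # geometric distribution: mean gap is close to range_size (1000)
```

With 65536**2 options the expected gap is ~4.3 billion spins per hit, which is why the single big slot feels so much rarer in any short simulation.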

Now try the same thing, but with two slots of 65536. I already said earlier that, despite the identical number of options, paradoxically, the frequency of two short numbers appearing in a row is in most cases much higher than that of a single long number.
This works up to the third or fourth "slot". The more there are, the lower the chance. Thus, generating all 8 characters gives an almost zero chance of "catching" the desired value if you do not exclude a second hit.
There is no need to guess one value out of an extra-large range. You can significantly narrow the search if you combine ranges and brute force. That's all i'm talking about.
In the case of the 66th key, the range is reduced from 2*4294967296*4294967296 to 4294967296.
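The disputed claim is easy to test directly. Below is a minimal simulation (using 64 in place of 65536, an assumption so that hits are frequent enough to measure) comparing one draw from a 64**2 range against two independent draws from a range of 64 each; the hit rates come out the same, about 1/4096:

```python
import random

def hit_rates(n, trials, seed=1):
    """Compare hitting one target in range n*n vs. two targets in range n."""
    rng = random.Random(seed)
    target_big = rng.randrange(n * n)
    target_a, target_b = rng.randrange(n), rng.randrange(n)
    one_draw = two_draws = 0
    for _ in range(trials):
        if rng.randrange(n * n) == target_big:
            one_draw += 1
        if rng.randrange(n) == target_a and rng.randrange(n) == target_b:
            two_draws += 1
    return one_draw / trials, two_draws / trials

p1, p2 = hit_rates(n=64, trials=1_000_000)
print(p1, p2)  # both come out near 1/4096
```

Splitting the range changes nothing about the total probability; it only makes partial matches (one half correct) more frequent, which is what short simulations tend to pick up on.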

The ranges in the text document above cost me over $60k. If someone tells me that this helped him narrow down his search and he found the key, of course i won't be happy for him, but maybe it's comforting to know that i screwed up so that someone else wouldn't (no).

There is nothing more to talk about here, it's over. Just take my ranges.
member
Activity: 165
Merit: 26
i just came to this conclusion:
I ran hundreds of tests and simulations and found that if a number is divided into "chunks", the probability of hitting the target increases many times.
That's not how probabilities work. The sum of all probabilities is always the same no matter what way you "split" the possibilities.

I generate two random numbers, convert them to hex, concatenate them and pass them to a modified rotor-cuda so that it can iterate through the remaining 8 values.
That's a very complicated way to waste performance, instead of simply generating a single random number.
Numbers do not "convert to hex", they are numbers. Representations convert.

I never iterate over the full value of 00000000-ffffffff because the likelihood of there being 4 zeros or 4 "f" at the beginning is extremely small.

If in the first chunk we generate one number within 65536**2 (1 00000000 00000000) instead of two separate 0-65535 values, then the simulation shows that hitting a number within 4 billion is much harder than hitting two 0-65535 numbers twice. Mathematics often says the opposite, but i only believe the simulation, which showed me that in this case it is much more likely.
There's zero proof for your statement. You believe in simulating what? A single observation out of a gazillion choices, each with identical probability?
Statistics work long-term, you can't have a conclusion from a single expected result.
A key with value 0xFFFFFFF....F has exactly the same chances as any other random key. It seems to me you are trying to say that randomness follows some "model", when in fact its only definition is total unpredictability and a complete lack of rules or patterns.

Sorry for your addiction.
newbie
Activity: 5
Merit: 0
Could you share with us whatever you have done in those 3 years?  I mean this could be somewhat true that solving these puzzles could be addictive, but I'm curious to know what were you doing exactly? I just hope it wasn't sitting and watching the screen while brute force tools were running.  Knowing the ways you tried will definitely help others not to try them, you can at least do that.

Oh, i feel naked in the middle of the street...

At first it was really just an observation of brute force, because i did not fully understand the strategy of “attacking” with random numbers. Then I started modifying the existing software. Finally, i came up with my own software.
I am an engineer, not a programmer or mathematician; i had to study, so most of the time i was struggling with technologies that were new to me.
Addiction began to appear when i felt that i was influencing the process. More like an obsession.
I will have access to the code on Monday. In fact, i didn't come up with anything totally new, i just came to this conclusion:
I ran hundreds of tests and simulations and found that if a number is divided into "chunks", the probability of hitting the target increases many times. For example:

1 0000 0000 00000000

What we see: the first digit is the leading digit of the hex key. If we take, for example, the 66th key, then this digit can be 2 or 3. A little later i will explain what i did in this case.
The second and third groups have a range from 0000 to ffff, so, 65536 options.
I generate two random numbers, convert them to hex, concatenate them and pass them to a modified rotor-cuda so that it can iterate through the remaining 8 values. The resulting range for example:

1fb1206ac0000ffff
1fb1206acffff0000

I never iterate over the full value of 00000000-ffffffff because the likelihood of there being 4 zeros or 4 "f" at the beginning is extremely small.

If in the first chunk we generate one number within 65536**2 (1 00000000 00000000) instead of two separate 0-65535 values, then the simulation shows that hitting a number within 4 billion is much harder than hitting two 0-65535 numbers twice. Mathematics often says the opposite, but i only believe the simulation, which showed me that in this case it is much more likely.

Having a cluster of servers and an orchestrator that gave each GPU a group of new random numbers, it took me 2-4 seconds to brute-force the tail of each key. Up to 40 GPUs worked simultaneously (as many as i could afford to rent).
If the number starts with 2-3, i went through the values twice, changing the starting digit.
The second and third chunks are saved into a regular text document as one value, so the generator never hits the same numbers twice and wastes no time. The maximum size of this document would be only 39 gigabytes.
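As a sanity check on the 39-gigabyte figure (assuming, as the layout suggests, one line per combined chunk: 8 hex characters plus a newline):

```python
# All possible combined chunks: two 16-bit halves concatenated = 65536**2 values.
entries = 65536 ** 2              # 4,294,967,296 possible chunk values
bytes_per_line = 8 + 1            # 8 hex chars, e.g. "1fb1206a", plus '\n'
total_gb = entries * bytes_per_line / 10 ** 9
print(total_gb)  # ~38.65 GB, matching the "only 39 gigabytes" estimate
```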
I understand that going through all 4 billion options would take many lives, but my system seemed to me the only one possible in terms of probability and i still believe that it would have worked, but i no longer have the money for further experiments.

I've attached my text document with all of the ~1.4 million chunks that i checked on the 66th key. I checked both variants - with 2 and with 3 at the beginning. The tail, of course, was checked by brute force.
https://drive.google.com/file/d/1EeEMjnQ_6xa9S_88zPSr8OELCNkwXL3o/view?usp=sharing

Hope it helps someone.
It will be very funny if my code didn't work well. But i checked it on the already-solved wallets, so i don't know what could have broken it.
jr. member
Activity: 50
Merit: 3
Could you share with us whatever you have done in those 3 years?  I mean this could be somewhat true that solving these puzzles could be addictive, but I'm curious to know what were you doing exactly? I just hope it wasn't sitting and watching the screen while brute force tools were running.  Knowing the ways you tried will definitely help others not to try them, you can at least do that.
newbie
Activity: 5
Merit: 0
Hi.
Three years ago i first heard about the puzzle. At first i was interested in reading about it, then i started solving it. Like all the other solvers, i was looking for software, formulas, and theories, and i developed a lot of my own software. I bought and rented gpus, cpus, and servers, and did not give up, because solving the puzzle would help me solve some financial difficulties. I didn't notice how the first year passed.
The idea of opening the puzzle took hold of me and it turned into an addiction. I'm not a gambler, never liked it, but puzzles turned out to be a much more severe addiction, because i always had the feeling that everything depended on my knowledge and perseverance, and not on luck. But i was wrong.
I tried to stop many times, talked about it with a psychologist, came up with various forms of prohibition, but this only fueled in me a greater desire to solve at least one puzzle.
At work, i became inattentive, i was demoted, then fired altogether, because i did not perceive new information well and no longer met the required level. I got into debt because i couldn’t get a good job again, i started taking medications that the doctor prescribed for me, but it didn’t help. Six months ago i developed insomnia, became nervous, and almost stopped communicating with my wife, although i wanted to. Three months ago she couldn’t stand it and left me. I sold almost everything to pay off some of my debts and survive. In 2 weeks, i am going of my own free will for three months to a clinic for people with mental disorders. This is the only thing that will help me avoid going to court for failure to pay bank debts and gives me hope that under the supervision of doctors and in complete isolation i will be able to get rid of this addiction.

I don't know how many people created the puzzle, who you are, or what goals you actually pursue, but one thing i can tell you is that even the most insignificant-looking idea always has consequences. When creating toys like this puzzle, you didn't think about others. The reward stored in every wallet is bait that can become poison for someone.
I want to blame you, but i can't, because i'm not an evil person. Perhaps i will feel better that i have told you all this.
Maybe you think something like "just do your job and don't look at the puzzle" - please, stop, you just know nothing about the disease of addiction.

If you can, please help me close at least some of my debts. My wallet is bc1q7pp4h4wc8p8czajfkwc7sc049d9vyrc8ftnct2

If you don't, i'll understand.
This message is for everyone - know your limits beforehand.

I won't respond to messages in the thread because i'm embarrassed, sorry.
full member
Activity: 1232
Merit: 242
Shooters Shoot...
i have one more doubt, let's say i have a public key (not for the puzzle), what input should i give in the start and end range?

Unless you remember the range in which the private key was generated, you'll have to search the entire range. No one can predict/give you a correct answer.
newbie
Activity: 4
Merit: 0
i have one more doubt, let's say i have a public key (not for the puzzle), what input should i give in the start and end range?
full member
Activity: 1232
Merit: 242
Shooters Shoot...
can you explain to me how kangaroo works? For example, if i have a public key for a bitcoin address, can i use kangaroo to brute-force its private key?

Enjoy the read!

https://github.com/JeanLucPons/Kangaroo
newbie
Activity: 4
Merit: 0
can you explain to me how kangaroo works? For example, if i have a public key for a bitcoin address, can i use kangaroo to brute-force its private key?
member
Activity: 165
Merit: 26
I wanted to ask if someone got the public key for #66, as i have tried to brute-force it but my hardware is not capable of doing so. If someone can link any code or algorithm (or even the public key if you have it), i'd appreciate it.
I think you have the answer already by looking at whether #66 was emptied or not. That is, the creator surely has the public key, as he surely has the private key. Otherwise, someone would have used kangaroos a long time ago if the pubKey were known.

I think it's pointless to "brute-force" the pubKey. I assume what you are thinking is somewhere along these lines:
1. Find some 256-bit number X (32 bytes) that results in RIPE(SHA('03' | X)) = decodeBase58(address)
2. Use ECDLP solver on P(X, y(X)) since you know it has a known long prefix.

Flaws:
- you assume that there's only one X that results in the first equality. Chances are 1 in 2**96 that you'll get an X corresponding to a 66-bit private key. There are 2**256 SHA inputs that map to 2**160 RIPE hashes, so 1 address can be obtained (in theory) in 2**96 ways.
One of those 2**96 X's is the one having the 66-bit key. Almost all of the others will be in the 256-bit range.
- you're limited by the SHA256 computing power of the hardware. Assuming you're after finding a collision, and let's say you have some GPUs providing 10 GH/s, you're still looking at around 100 years or more until you find such a collision, which only guarantees you some 1 in 2**96 rate of success.
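The 2**96 figure follows from a simple counting argument between the two hash output spaces, which can be checked directly (treating the x-coordinate as an arbitrary 256-bit input):

```python
sha_inputs = 2 ** 256    # possible 256-bit x-coordinates fed into SHA-256
ripe_outputs = 2 ** 160  # possible RIPEMD-160 digests (the hash160/address space)
preimages_per_address = sha_inputs // ripe_outputs
print(preimages_per_address == 2 ** 96)  # on average ~2**96 X's per address
```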

So to speed up your search by a factor of 2**96, you'll need to brute-force the private key in order to (correctly) do:
RIPE(SHA('03' | P.x)) = decodeBase58(address)
where P is the public key of the private key.

It will still take you the same amount of time due to the SHA limit above, but you have the guarantee of success.

Good luck.
newbie
Activity: 4
Merit: 0
I wanted to ask if someone got the public key for #66, as i have tried to brute-force it but my hardware is not capable of doing so. If someone can link any code or algorithm (or even the public key if you have it), i'd appreciate it.
jr. member
Activity: 34
Merit: 11
Just wanted to let the community know that our pool has now implemented a key finder bonus.  I know a lot of you have been wanting it and now it is here.

Current bonus is around $3300 at the time of writing this.

The bonus is calculated as 1% of the top ten contributor payout and the users in the current top ten are not eligible.

Have a great day and hope to see everyone on the pool.

http://www.ttdsales.com/66bit/

Chris
copper member
Activity: 69
Merit: 0
This is a continuation of the popular "32BTC puzzle" - https://bitcointalksearch.org/topic/bitcoin-puzzle-transaction-32-btc-prize-to-who-solves-it-1306983 and https://bitcointalksearch.org/topic/archive-bitcoin-challenge-discusion-5166284 - updated with the latest data.


Quote
2015-01-15 - a transaction was created on the blockchain (included in block 339085) containing transfers to 256 different Bitcoin addresses: https://blockchain.info/tx/08389f34c98c606322740c0be6a7125d9860bb8d5cb182c02f98461e5fa6cd15

2017-07-11 - funds from addresses 161-256 were moved to the same number of addresses in the lower range - thus increasing the amount of funds on them. This took place in block 475240: https://www.blockchain.com/btc/tx/5d45587cfd1d5b0fb826805541da7d94c61fe432259e68ee26f4a04544384164

2019-05-31 - the creator of the "puzzles" created outgoing transactions with a value of 1 satoshi from addresses #65, #70, #75, #80, #85, #90, #95, #100, #105, #110, #115, #120, #125, #130, #135, #140, #145, #150, #155, #160, probably with the aim of comparing the difficulty of finding the private key for an address that has an outgoing transaction (and therefore a revealed public key) with one that has no such transaction.

2023-04-16 - the creator of the challenge paid rounded amounts of BTC to the remaining unsolved addresses - thus increasing the value of the prizes by a total of over 900 BTC. From this moment, for example, the reward for address 66 is not 0.66 BTC but 6.6 BTC.
Quote
First output: take random number from 2^0 up to 2^1-1, use it as private key
Second output: take random number from 2^1 up to 2^2-1, use it as private key
Third output: take random number from 2^2 up to 2^3-1, use it as private key
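A small sketch of the quoted scheme (the n-th output's key is drawn uniformly from [2^(n-1), 2^n - 1]):

```python
def puzzle_range(n):
    """Private-key range implied by the quoted scheme for the n-th output."""
    return 2 ** (n - 1), 2 ** n - 1

for n in (1, 2, 3, 66):
    lo, hi = puzzle_range(n)
    print(n, hex(lo), hex(hi))
# key #66 lies in [0x20000000000000000, 0x3ffffffffffffffff],
# which is why its hex form always starts with 2 or 3
```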

The outputs relate to the transaction from 2015 in which payments were made to all 256 addresses.

You can check them: ==> http://bit.do/maintx <==

Quote

Useful websites for challengers:


I have already commented under another thread. This is a telltale sign of a successful lattice attack on a Bitcoin wallet to reveal private keys, and then transferring the funds out to deterministically generated wallets having similar public keys/addresses.
member
Activity: 158
Merit: 39
Here are some tips to speed up keyhunt-cuda (rotor-cuda):

Apply this and you will need a smaller grid size; 4096x512 will be enough for a 4090:

https://bitcointalksearch.org/topic/m.63526413

Also change this:

__device__ __noinline__ void CheckHashSEARCH_MODE_SA(uint64_t* px, uint64_t* py, int32_t incr, uint32_t* hash160, uint32_t* out)
{
   switch (mode) {
   case SEARCH_COMPRESSED:
      CheckHashCompSEARCH_MODE_SA(px, (uint8_t)(py[0] & 1), incr, hash160, out);
      break;
   case SEARCH_UNCOMPRESSED:
      CheckHashUnCompSEARCH_MODE_SA(px, py, incr, hash160, out);
      break;
   case SEARCH_BOTH:
      CheckHashCompSEARCH_MODE_SA(px, (uint8_t)(py[0] & 1), incr, hash160, out);
      CheckHashUnCompSEARCH_MODE_SA(px, py, incr, hash160, out);
      break;
   }
}

to this, because doing a switch-case in a kernel is a very bad idea:

__device__ __noinline__ void CheckHashSEARCH_MODE_SA(uint64_t* px, uint64_t* py, int32_t incr, uint32_t* hash160, uint32_t* out)
{
   CheckHashCompSEARCH_MODE_SA(px, (uint8_t)(py[0] & 1), incr, hash160, out);
}

Also, maxFound can be completely removed when searching the puzzle, because we only need one returned result anyway.

Rotor-cuda speed with this mods:

  [00:17:10] [CPU+GPU: 6.71 Gk/s] [GPU: 6.71 Gk/s] [C: 36.453247 %] [R: 0] [T: 6,412,923,043,840 (43 bit)] [F: 0]
  [00:17:11] [CPU+GPU: 6.71 Gk/s] [GPU: 6.71 Gk/s] [C: 36.500549 %] [R: 0] [T: 6,421,244,542,976 (43 bit)] [F: 0]
  [00:17:12] [CPU+GPU: 6.71 Gk/s] [GPU: 6.71 Gk/s] [C: 36.547852 %] [R: 0] [T: 6,429,566,042,112 (43 bit)] [F: 0]
  [00:17:13] [CPU+GPU: 6.71 Gk/s] [GPU: 6.71 Gk/s] [C: 36.595154 %] [R: 0] [T: 6,437,887,541,248 (43 bit)] [F: 0]
  [00:17:15] [CPU+GPU: 6.71 Gk/s] [GPU: 6.71 Gk/s] [C: 36.642456 %] [R: 0] [T: 6,446,209,040,384 (43 bit)] [F: 0]
  [00:17:16] [CPU+GPU: 6.72 Gk/s] [GPU: 6.72 Gk/s] [C: 36.689758 %] [R: 0] [T: 6,454,530,539,520 (43 bit)] [F: 0]
  [00:17:17] [CPU+GPU: 6.72 Gk/s] [GPU: 6.72 Gk/s] [C: 36.737061 %] [R: 0] [T: 6,462,852,038,656 (43 bit)] [F: 0]
  [00:17:18] [CPU+GPU: 6.72 Gk/s] [GPU: 6.72 Gk/s] [C: 36.784363 %] [R: 0] [T: 6,471,173,537,792 (43 bit)] [F: 0]
  [00:17:20] [CPU+GPU: 6.72 Gk/s] [GPU: 6.72 Gk/s] [C: 36.831665 %] [R: 0] [T: 6,479,495,036,928 (43 bit)] [F: 0]
  [00:17:21] [CPU+GPU: 6.71 Gk/s] [GPU: 6.71 Gk/s] [C: 36.878967 %] [R: 0] [T: 6,487,816,536,064 (43 bit)] [F: 0]


Thanks.

Hi, are you willing to share your code / github? Or is it some secret sauce? Smiley
full member
Activity: 1232
Merit: 242
Shooters Shoot...
Hi everyone,
I have an idea that could greatly speed up puzzle 66 and all the others with little expenditure of energy.

How many pools are there?

 
Quote
saatoshi_rising wrote: "Finally, I wish to express appreciation of the efforts of all developers of new cracking tools and technology.  The 'large bitcoin collider' is especially innovative and interesting!"

So pools are the solution because unity is strength!

So why not share the scanned intervals among all pools?

Let me explain further....

The idea would be to provide a central server or hosting service to store all the ranges that each pool has scanned.

The update would not be in real time, for security reasons, but new records would be added once a day, 24 hours after the scan of the range.

The requirement for each pool to access the data would be to contribute a minimum number of daily ranges (example: 10000 ranges per day).

Then it would be a matter of adding to the various pool programs an extra parameter in range selection, i.e., whether or not to include ranges from shared pools in the scan.

Of course, ranges scanned by other pools would not automatically be reported as done, because we do not have absolute certainty that they have been completed; free ranges would simply be given priority.

And there would also be no need to divide the prize between pools, each going their own way.

Doing so would drastically reduce search time.
What do you think?

sorry for my English, I am using a translator

It's a good idea; however, there are only 2 pools that I know of, one of which is shady and had some bad ranges/math for the total number of ranges needed to search.

I wouldn't trust anything from that certain pool.
newbie
Activity: 10
Merit: 0
Hi everyone,
I have an idea that could greatly speed up puzzle 66 and all the others with little expenditure of energy.

How many pools are there?

 
Quote
saatoshi_rising wrote: "Finally, I wish to express appreciation of the efforts of all developers of new cracking tools and technology.  The 'large bitcoin collider' is especially innovative and interesting!"

So pools are the solution because unity is strength!

So why not share the scanned intervals among all pools?

Let me explain further....

The idea would be to provide a central server or hosting service to store all the ranges that each pool has scanned.

The update would not be in real time, for security reasons, but new records would be added once a day, 24 hours after the scan of the range.

The requirement for each pool to access the data would be to contribute a minimum number of daily ranges (example: 10000 ranges per day).

Then it would be a matter of adding to the various pool programs an extra parameter in range selection, i.e., whether or not to include ranges from shared pools in the scan.

Of course, ranges scanned by other pools would not automatically be reported as done, because we do not have absolute certainty that they have been completed; free ranges would simply be given priority.

And there would also be no need to divide the prize between pools, each going their own way.

Doing so would drastically reduce search time.
What do you think?

sorry for my English, I am using a translator
member
Activity: 63
Merit: 14
Here are some tips to speed up keyhunt-cuda (rotor-cuda):

Apply this and you will need a smaller grid size; 4096x512 will be enough for a 4090:

https://bitcointalksearch.org/topic/m.63526413

Also change this:

__device__ __noinline__ void CheckHashSEARCH_MODE_SA(uint64_t* px, uint64_t* py, int32_t incr, uint32_t* hash160, uint32_t* out)
{
   switch (mode) {
   case SEARCH_COMPRESSED:
      CheckHashCompSEARCH_MODE_SA(px, (uint8_t)(py[0] & 1), incr, hash160, out);
      break;
   case SEARCH_UNCOMPRESSED:
      CheckHashUnCompSEARCH_MODE_SA(px, py, incr, hash160, out);
      break;
   case SEARCH_BOTH:
      CheckHashCompSEARCH_MODE_SA(px, (uint8_t)(py[0] & 1), incr, hash160, out);
      CheckHashUnCompSEARCH_MODE_SA(px, py, incr, hash160, out);
      break;
   }
}

to this, because doing a switch-case in a kernel is a very bad idea:

__device__ __noinline__ void CheckHashSEARCH_MODE_SA(uint64_t* px, uint64_t* py, int32_t incr, uint32_t* hash160, uint32_t* out)
{
   CheckHashCompSEARCH_MODE_SA(px, (uint8_t)(py[0] & 1), incr, hash160, out);
}

Also, maxFound can be completely removed when searching the puzzle, because we only need one returned result anyway.

Rotor-cuda speed with this mods:

  [00:17:10] [CPU+GPU: 6.71 Gk/s] [GPU: 6.71 Gk/s] [C: 36.453247 %] [R: 0] [T: 6,412,923,043,840 (43 bit)] [F: 0]
  [00:17:11] [CPU+GPU: 6.71 Gk/s] [GPU: 6.71 Gk/s] [C: 36.500549 %] [R: 0] [T: 6,421,244,542,976 (43 bit)] [F: 0]
  [00:17:12] [CPU+GPU: 6.71 Gk/s] [GPU: 6.71 Gk/s] [C: 36.547852 %] [R: 0] [T: 6,429,566,042,112 (43 bit)] [F: 0]
  [00:17:13] [CPU+GPU: 6.71 Gk/s] [GPU: 6.71 Gk/s] [C: 36.595154 %] [R: 0] [T: 6,437,887,541,248 (43 bit)] [F: 0]
  [00:17:15] [CPU+GPU: 6.71 Gk/s] [GPU: 6.71 Gk/s] [C: 36.642456 %] [R: 0] [T: 6,446,209,040,384 (43 bit)] [F: 0]
  [00:17:16] [CPU+GPU: 6.72 Gk/s] [GPU: 6.72 Gk/s] [C: 36.689758 %] [R: 0] [T: 6,454,530,539,520 (43 bit)] [F: 0]
  [00:17:17] [CPU+GPU: 6.72 Gk/s] [GPU: 6.72 Gk/s] [C: 36.737061 %] [R: 0] [T: 6,462,852,038,656 (43 bit)] [F: 0]
  [00:17:18] [CPU+GPU: 6.72 Gk/s] [GPU: 6.72 Gk/s] [C: 36.784363 %] [R: 0] [T: 6,471,173,537,792 (43 bit)] [F: 0]
  [00:17:20] [CPU+GPU: 6.72 Gk/s] [GPU: 6.72 Gk/s] [C: 36.831665 %] [R: 0] [T: 6,479,495,036,928 (43 bit)] [F: 0]
  [00:17:21] [CPU+GPU: 6.71 Gk/s] [GPU: 6.71 Gk/s] [C: 36.878967 %] [R: 0] [T: 6,487,816,536,064 (43 bit)] [F: 0]


Thanks.


That's actually very interesting. Could you share your raw files with these mods applied? 

I've tried it on my own files but I can't get it to recompile afterwards; it throws a bunch of errors.

Also you've mentioned that you are using Vladimir's version of Rotor, try using Phrutis' version. I think it's a little bit faster.

Thanks!!
jr. member
Activity: 44
Merit: 2
In your example, what card are you using and are you searching x point or address?

I'm searching puzzle 66 by address.
This is the max speed for now.
Trying to figure out how to reduce memory usage and fit the entire app into the GPU L2 cache.
Looks like reducing GRP_SIZE helps, but then there are a lot of kernel calls. Probably need to bring back STEP_SIZE, which was removed from rotor-cuda but vanitysearch still has.
Experimenting...