Topic: Large Bitcoin Collider Thread 2.0 - page 17. (Read 57143 times)

newbie
Activity: 4
Merit: 0
May 31, 2017, 11:14:14 AM
#90
hi

thanks for your support.
how can i use fpga instances? what is the command?
thanks in advance...

regards,
killer king
legendary
Activity: 1120
Merit: 1037
฿ → ∞
May 31, 2017, 10:00:52 AM
#89
ROYAL247 has submitted 1410 Gkeys to date... so that's not sufficient. (You can find out with the ./LBC -q command.)
But the nick is rising fast (https://lbc.cryptoguru.org/stats/ROYAL247). Let it work overnight and you'll have your GPU auth.

Looks like you did it! Congrats.

For better GPU performance, Amazon recommends you do

nvidia-smi -pm 1
nvidia-smi --auto-boost-default=0
nvidia-smi -ac 2505,875


(with sudo if you are not root already), then

./LBC -gpu -x

in order to get a good/updated speed benchmark for your GPU client.

And after that, the p2.16xlarge is to be started with

./LBC -c 64 -gpu -gopt dev=1-16:lws=512

Be prepared that upon first invocation it takes quite a while until the machine has started up all 64 processes. In my experience, the AWS IO subsystem is laggy as hell. After that, all should be well.

AWS systems differ slightly in their performance even in the same machine class. The bigger the system, the bigger the performance variance. You can expect

p2.xlarge ~11 Mkeys/s
p2.8xlarge ~80 - 88 Mkeys/s
p2.16xlarge ~155 - 170 Mkeys/s

legendary
Activity: 1120
Merit: 1037
฿ → ∞
May 31, 2017, 03:08:51 AM
#88
Hey guys,

you're the only two left with clients older than 1.140 (you have 1.067, in case you didn't know).
Please see https://bitcointalksearch.org/topic/m.18665920 ("News" link from my sig)

As of tomorrow, the server will require client 1.140 as the minimum version. Shortly thereafter, I would like to phase out the FTP updates.
The new versions accept updates of the client and the generator only via an HTTPS connection (with name validation on ;)),
so they address and fix some of the valid security concerns raised about a month ago.

You guys are bravely fighting your way to 3000 Gkeys, so please don't let the server pull the plug on you.

edit: If these are some "fire and forget" clients, I recommend the "eternal run" setup - see
https://lbc.cryptoguru.org/man/user#eternal-run (scroll down to the "Get Work" section).
This will keep your clients, generators and BLF file up to date with minimal overhead (one check per day).
It is also more robust against transient network errors, or even errors in the update process itself.
legendary
Activity: 1120
Merit: 1037
฿ → ∞
May 30, 2017, 03:34:16 PM
#87
it's the issue because i simply modified the client with GPU authorize 0 to 1
then i got blacklisted. but i am paying for amazon ec2 more than 28K per annum, so i would like to use it at least per LBC.
that's the only thing i modified in LBC client.
and i already submitted more than 3000 G keys with royal360 user id.
hope you can understand and authorize for gpu.
thanks in advance...

The client is made so that even adding a whitespace is considered tampering: please don't.

ROYAL247 has submitted 1410 Gkeys to date... so that's not sufficient. (You can find out with the ./LBC -q command.)
But the nick is rising fast (https://lbc.cryptoguru.org/stats/ROYAL247). Let it work overnight and you'll have your GPU auth.

HOWEVER, I could enable GPU auth for you immediately (because honestly, that's quite a hardware punch you have access to there) if you let me play with an F1 instance for - say - 3x8 hours in return. Gentleman's agreement ;)

6 instances p2.16xlarge could reach the 1 Gkeys/s on their own ... intriguing.
newbie
Activity: 4
Merit: 0
May 30, 2017, 02:50:06 PM
#86
it's the issue because i simply modified the client with GPU authorize 0 to 1
then i got blacklisted. but i am paying for amazon ec2 more than 28K per annum, so i would like to use it at least per LBC.
that's the only thing i modified in LBC client.
and i already submitted more than 3000 G keys with royal360 user id.
hope you can understand and authorize for gpu.
thanks in advance...
newbie
Activity: 4
Merit: 0
May 30, 2017, 02:35:58 PM
#85
actually i have 6 instances with p2.16xlarge and 6 with f1.8xlarge (fpga) instances.
i have these instances for another project since last year. but if you would give an opportunity i will use them for LBC.
hope you can understand the issue and let me know your opinion.
thanks in advance and waiting for your response asap.......
legendary
Activity: 1120
Merit: 1037
฿ → ∞
May 30, 2017, 04:42:12 AM
#84
i am trying to run LBC client on amazon ec2 p2.16xlarge (64CPUs with GPU)

The p2.16xlarge is supported and would give you around 170 Mkeys/s (at the moment this would be equivalent to the total of the current pool capacity...).

In order to get GPU authorization, you have to submit 3000 Gkeys. With CPU. That's our "proof of discipline". It's also a little gesture of fairness towards the early adopters. You have around 680 Gkeys now.

You should - by the way - refrain from doing things like

Code:
1496064624    50.19.147.36 PUT-NIL: tampered (K1LL3RK1NG-65f2)
1496064763    50.19.147.36 PUT-NIL: tampered (K1LL3RK1NG-65f2)
1496064848    50.19.147.36 PUT-NIL: tampered (K1LL3RK1NG-65f2)
1496064915    50.19.147.36 PUT-NIL: tampered (K1LL3RK1NG-65f2)
1496064934    50.19.147.36 PUT-NIL: tampered (K1LL3RK1NG-65f2)
1496065010    50.19.147.36 PUT-NIL: tampered (K1LL3RK1NG-65f2)
1496065068    50.19.147.36 PUT-NIL: tampered (K1LL3RK1NG-65f2)
1496065115    50.19.147.36 PUT-NIL: tampered (K1LL3RK1NG-65f2)
1496065262    50.19.147.36 PUT-NIL: tampered (K1LL3RK1NG-65f2)
1496065384    50.19.147.36 PUT-NIL: tampered (K1LL3RK1NG-65f2)
1496065443    50.19.147.36 PUT-NIL: tampered (K1LL3RK1NG-65f2)
1496065645    50.19.147.36 PUT-NIL: tampered (K1LL3RK1NG-65f2)
1496065690    50.19.147.36 PUT-NIL: tampered (K1LL3RK1NG-65f2)
1496065793    50.19.147.36 PUT-NIL: blacklisted (K1LL3RK1NG-65f2)
1496065848    50.19.147.36 PUT-NIL: blacklisted (K1LL3RK1NG-65f2)
...
1496066026    50.19.147.36 PUT-NIL: blacklisted (K1LL3RK1NG-65f2)
1496066101    50.19.147.36 PUT-NIL: blacklisted (K1LL3RK1NG-65f2)
1496066608     52.7.40.130 PUT-NIL: tampered (K1LLY0UALL-65f2)
...
1496067243     52.7.40.130 PUT-NIL: blacklisted (K1LL3RK1NG-65f2)
1496067535     52.7.40.130 PUT-NIL: blacklisted (K1LL3RK1NG-65f2)
1496068278   34.202.31.106 PUT-NIL: blacklisted (K1LL3RK1NG-65f2)

 ;) Ok?

So maybe you are better off using an m4.16xlarge (64 vCPUs) and, as soon as you have 3000 Gkeys, moving to the p2.xxx
newbie
Activity: 4
Merit: 0
May 30, 2017, 03:53:19 AM
#83
hi @rico666

i am trying to run LBC client on amazon ec2 p2.16xlarge (64CPUs with GPU)
but getting an error like GPU authorize: no
it's working only on cpus
i installed all compatible drivers and there is no issue from hardware.
what would be the issue?
thanks in advance...

regards,
Killer King
full member
Activity: 158
Merit: 113
May 28, 2017, 11:45:49 AM
#82
Ethereum seems to be held together with band-aids. I am sure there must be some bad private keys out there, as it's an emerging technology with lots of bad implementations. An Ethereum address is just hex(Keccak256(pubkey)[12:]), where pubkey is an ECDSA public key represented as 32 bytes for x and 32 bytes for y (just the good old uncompressed public key with the first byte removed) - also easy to calculate and to build a bloom filter for.
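The derivation shape described above can be sketched as follows. Note one loud caveat: Python's hashlib sha3_256 is NIST SHA-3, not the original Keccak-256 padding that Ethereum actually uses, so this only illustrates the "hash 64 bytes, keep the last 20" pipeline - a real derivation needs a genuine Keccak-256 implementation.

```python
import hashlib

def eth_address_shape(pubkey_xy: bytes) -> str:
    """Illustrate hex(Keccak256(pubkey)[12:]): hash the 64-byte x||y
    public key and keep the last 20 bytes of the 32-byte digest.
    NOTE: hashlib's sha3_256 is NIST SHA-3, NOT the Keccak-256 padding
    Ethereum uses -- this shows only the shape of the computation."""
    assert len(pubkey_xy) == 64              # 32 bytes x + 32 bytes y
    digest = hashlib.sha3_256(pubkey_xy).digest()
    return "0x" + digest[12:].hex()          # drop first 12 bytes -> 20 left

addr = eth_address_shape(bytes(64))
print(addr, len(addr))                       # 0x-prefixed, 40 hex chars -> 42 total
```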
legendary
Activity: 1120
Merit: 1037
฿ → ∞
May 28, 2017, 03:42:30 AM
#81
Such an interesting project. Could the collider be applied to alts with smaller market caps and fewer addresses?

In principle yes, but

fewer addresses = lower hit probability

(I assume with "fewer addresses" you mean those in use/with funds)

Also, the size of the codomain (the number of possible addresses) of the address-generation process is an important parameter. The only reason I started this in the first place is Bitcoin's 160-bit hash space (for both P2PKH and P2SH), which I believe will prove insufficient within a couple of years (contrary to what "educated" guesses may say).
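The "fewer addresses = lower hit probability" point is easy to quantify. A back-of-envelope sketch, assuming (purely for illustration) around 2^25 funded addresses in a 160-bit codomain:

```python
# Back-of-envelope: how hard is hitting *some* funded address in a
# 160-bit codomain? N_FUNDED = 2**25 is an illustrative assumption.
SPACE = 2 ** 160
N_FUNDED = 2 ** 25

p_per_key = N_FUNDED / SPACE          # chance one random key hits any funded address
expected_keys = SPACE // N_FUNDED     # expected keys tried until a hit

print(f"p per key = 2^-135 = {p_per_key:.3e}")
print(f"expected keys ~ 2^{expected_keys.bit_length() - 1}")
```

Halving the number of funded addresses doubles the expected work, which is why fewer addresses makes an alt-coin collider correspondingly less attractive.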

An almost 1:1 Litecoin collider could be done, obviously, because Litecoin is an almost 1:1 Bitcoin clone. ;)

I have not looked into the other alts too much, but from what I have seen e.g. at Monero, a collider would be really futile there.
sr. member
Activity: 261
Merit: 250
May 27, 2017, 07:02:20 PM
#80
Such an interesting project. Could the collider be applied to alts with smaller market caps and fewer addresses?
legendary
Activity: 1120
Merit: 1037
฿ → ∞
May 25, 2017, 07:00:45 AM
#79
and still: 30.958 seconds on 1 CPU core for 2 x 10 x 2^24 keys.

(10.8 Mkeys/s per core) GPU load is about 40% higher than before. That's nearly a doubling of the speed! :)

The hashes 0c96c911f51bee4250b3d2a2b86b8853fb8c5830 and 82bf3725df95cd4260b003d21063a1b85a66ab21 from the test below (they are the symmetric siblings of what we compute now) are from
http://www.directory.io/904625697166532776746648320380374280100293470930272690489102837043110636675

where I use 1CvKupTzRqsDi5Zf4QdbVYhmaQUkF667hM (82bf3725df95cd4260b003d21063a1b85a66ab21) and 129Zk4KrdjCtTkPDKDA9yKEoyQMKg7nnY4 (0c96c911f51bee4250b3d2a2b86b8853fb8c5830) for testing.

The privkey shown is still wrong; it should be e.g.

0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140-0x35
instead of
0000000000000000000000000000000000000000000000000000000000000001+0x35

but that's cosmetic.

Code:
$ make test
time ./kardashev -I 0000000000000000000000000000000000000000000000000000000000000001 -c 10000 -L 10 -g
2d17543d32448acc7a1c43c5f72cd5be459ab302:u:priv:0000000000000000000000000000000000000000000000000000000000000001+0x5e
02e62151191a931d51cdc513a86d4bf5694f4e51:c:priv:0000000000000000000000000000000000000000000000000000000000000001+0x65
0c96c911f51bee4250b3d2a2b86b8853fb8c5830:u:priv:0000000000000000000000000000000000000000000000000000000000000001+0x35
82bf3725df95cd4260b003d21063a1b85a66ab21:c:priv:0000000000000000000000000000000000000000000000000000000000000001+0x36
9d74ffdb31068ca2a1feb8e34830635c0647d714:u:priv:00000000000000000000000000000000000000000000000000000000000f9001+0xf8c
3d6871076780446bd46fc564b0c443e1fd415beb:c:priv:00000000000000000000000000000000000000000000000000000000000f9001+0xf8c
...

So why is it brittle, unoptimized and still shitty?

Brittle, because I had a bug in the symmetry code that did not find uncompressed keys. Fixed. ;)

The symmetry computation is still done on the CPU, and while it's no big deal (a 256-bit subtraction), we want to move load from the CPU to the GPU - right?
So I'll do that before I release it. Also, there are still lots of optimizations missing, so I am quite confident I can achieve a per-core rate more like 5.8 Mkeys/s * 2 (~11.6 Mkeys/s per core) -> 23.2 Maddrs/s per core, with around 25% GPU load per CPU core on my machine.
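The "256-bit subtraction" above is, presumably, computing the symmetric sibling n - k modulo the secp256k1 group order n: the points k*G and (n-k)*G share the same x coordinate (y is negated), which is what makes two hash160s fall out of one EC computation. A sketch under that reading:

```python
# secp256k1 group order (the constant shown in the post, plus one)
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def symmetric(k: int) -> int:
    """The 256-bit subtraction: -k mod N. The points k*G and (N-k)*G
    share the same x coordinate (y is negated)."""
    return N - k

# The 'cosmetic' display issue: key 1+0x35 = 0x36 has the symmetric
# sibling (N-1) - 0x35, i.e. N - 0x36.
k = 0x36
print(hex(symmetric(k)))
assert symmetric(symmetric(k)) == k          # negation is an involution
assert symmetric(k) == (N - 1) - 0x35        # matches the corrected display
```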

So watch out, next release will give you GPU users a nice speed bump.

edit: 11.9 Mkeys/s per core now (shittiness levels bearable - symmetry computation is on the GPU already) ... time to make a release.
legendary
Activity: 1120
Merit: 1037
฿ → ∞
May 19, 2017, 12:11:49 PM
#78
Originally, I intended to make sure that in -gopt dev=x,y,z (see above) each device is given just once. But I was so enthusiastic about AWS support that I pushed the release out before I had implemented that filtering.

Turns out, it's a feature! :D

Normally, -gopt dev=x,y,z distributes CPU processes evenly across the given GPU devices. But what if you had two GPUs in your system, one faster than the other? You would not want both to get the same share of work.

Or you have two identical GPUs, but one drives your desktop and you want to put a lower load on it. Normally, you would have to start - again - a separate LBC process for each GPU to achieve an asymmetric workload.

Not with our feature. ;)

Assume GPU 1 has only 2 GB VRAM and GPU 2 has 4 GB VRAM. Naturally this limits the number of processes that can run on GPU 1, but why should we constrain GPU 2 because of it? Assume you have 10 cores to fire at the GPUs and you want 3 on GPU 1 and 7 on GPU 2. Solution:

-c 10 -gopt dev=1,1,1,2,2,2,2,2,2,2

Now that's an extreme example, where we basically had to enumerate all CPU processes explicitly for their GPU assignment. What if we simply have two 1080s (plenty of VRAM) and would like to assign 12 of our cores in a 1:2 ratio to these two GPUs? Solution:

-c 12 -gopt dev=1,2,2

Now GPU 1 gets 4 processes and GPU 2 gets 8. A 1:3 ratio? Simple:

-c 12 -gopt dev=1,2,2,2

GPU1: 3 processes, GPU2: 9 processes
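All the ratios above are consistent with one simple rule: process i goes to entry i mod len(list) of the dev list. That's a guess at the mechanism (LBC's actual dispatch code isn't shown here), but it reproduces every example in this post:

```python
from collections import Counter

def assign(n_procs: int, devs: list[int]) -> Counter:
    """Hypothetical model of -gopt dev=...: round-robin CPU processes
    over the dev list, process i -> devs[i % len(devs)]."""
    return Counter(devs[i % len(devs)] for i in range(n_procs))

print(assign(10, [1, 1, 1, 2, 2, 2, 2, 2, 2, 2]))  # GPU1: 3, GPU2: 7
print(assign(12, [1, 2, 2]))                        # GPU1: 4, GPU2: 8
print(assign(12, [1, 2, 2, 2]))                     # GPU1: 3, GPU2: 9
```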

So I like that feature very much and will keep it as is.



I intend to develop -gopt further. As is, it accepts the dev and lws "subparameters". Subparameters are delimited by ":". Another subparameter in the future may be:

"bloom=0" (to keep the bloom filter check on the host CPU, which should allow using GPUs with less than 512 MB VRAM).
Nope: nobloom=, with the possibility to declare a set of devices which shall not use the on-GPU bloom filter check.
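For readers unfamiliar with what a nobloom= device would run on the host: a bloom filter membership test sets/checks k bit positions per item, gives no false negatives and a tunable false-positive rate. This is a toy sketch of that check only - the hash choices and LBC's actual BLF file layout are assumptions, not the real format:

```python
import hashlib

class Bloom:
    """Toy bloom filter: k derived bit positions per item. Illustrates
    the host-side membership test only; LBC's real BLF layout differs."""
    def __init__(self, m_bits: int = 1 << 20, k: int = 7):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        # Derive k positions by salting a SHA-256 (arbitrary choice here)
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        # No false negatives; false positives possible.
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = Bloom()
bf.add(bytes.fromhex("0c96c911f51bee4250b3d2a2b86b8853fb8c5830"))
print(bytes.fromhex("0c96c911f51bee4250b3d2a2b86b8853fb8c5830") in bf)  # True
```

The point of bloom=0/nobloom= is exactly this trade: the bit array lives in host RAM instead of VRAM, at the cost of shipping candidate hash160s back from the GPU.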

I have not yet decided about implementing other GPU features to be optional.
legendary
Activity: 1120
Merit: 1037
฿ → ∞
May 18, 2017, 02:59:07 AM
#77
A new version of the client is available. The generators have also been updated to accept a "GPU worksize" parameter.
Biggest changes:
  • comfortable multi-GPU operation
  • support of Amazon GPU instances (p2.xxxx)

This is only of interest for people with multi-GPU setups or those who want to run LBC on Amazon GPU instances; almost everyone else (*) will not need the new features. LBC will now distribute the workload across all given CPUs and GPUs in a balanced way. In previous versions, you had to start a separate LBC client with the -gdev X parameter for every GPU you had. No more.

Assume you have 4 GPUs and 32 CPUs. LBC will assign 8 CPUs to each GPU:

./LBC -c 32 -gpu -gopt dev=1-4

Or you have 32 CPUs and 8 GPUs and are on Amazon:

./LBC -c 32 -gpu -gopt dev=1-8:lws=512

Assume you would like to distribute the work of your 16 CPUs only to GPUs 1, 3, 5 and 7:

./LBC -c 16 -gpu -gopt dev=1,3,5,7


So the old -gdev X parameter is gone, replaced by the more generic (and more complex) -gopt parameter.
If you have just one GPU, you do not need to care about any of this (-gopt dev=1 is the default).

BTW: dev=1-4 is the same as dev=1,2,3,4 or dev=1,2,3-4 etc. - just shorter. dev=1-2 and dev=1,2 are also equivalent.
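The equivalences just described can be expanded mechanically - this is a sketch of that grammar (ranges and single devices mixed, comma-separated), not LBC's actual parser:

```python
def expand_devs(spec: str) -> list[int]:
    """Expand a dev spec like '1-4' or '1,2,3-4' into a device list.
    Sketch of the equivalence described above, not LBC's real parser."""
    devs = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            devs.extend(range(lo, hi + 1))   # inclusive range
        else:
            devs.append(int(part))
    return devs

print(expand_devs("1-4"))                        # [1, 2, 3, 4]
print(expand_devs("1,2,3-4"))                    # [1, 2, 3, 4]
print(expand_devs("1-2") == expand_devs("1,2"))  # True
```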


(*) There may be one exception: if you have old GPUs and get a "CL_OUT_OF_RESOURCES" error, you could try -gopt with lws=X, where X is 512, 256, 128 or 64.
hero member
Activity: 1193
Merit: 507
Pinch.Network Guaranteed Airdrop
May 17, 2017, 04:20:31 PM
#76
Anyone with some spare time who wants to fiddle with my machine?
I am completely new to Linux and can't get this thing going on my "server" with an i7 920.

It will run 24/7, with a Quadro 4000 once I earn my GPU auth xD
full member
Activity: 158
Merit: 113
May 17, 2017, 07:50:26 AM
#75
I am not sure if that would be considered a collision as well, but if we end up with a P2PKH hash160 from a
P2SH process, or vice versa, it could be considered a collision IMHO.

So yes - it will be done eventually.

That still means the same RIPEMD-160 hash, so yes.
legendary
Activity: 1120
Merit: 1037
฿ → ∞
May 15, 2017, 05:21:51 PM
#74
Uh - finally commanding AWS GPU hardware:

Code:
ubuntu@ip-172-31-42-141:~/collider$ ./LBC -x -gpu
GPU authorized: yes
Testing mode. Using page 0, turning off looping.
Benchmark info not found - benchmarking... done.
Your speed is roughly 3466192 keys/s per CPU core.
o
Test ok. Your test results were stored in FOUND.txt.
Have a look and then you may want to remove the file.
2d17543d32448acc7a1c43c5f72cd5be459ab302:u:priv:0000000000000000000000000000000000000000000000000000000000000001+0x5e
02e62151191a931d51cdc513a86d4bf5694f4e51:c:priv:0000000000000000000000000000000000000000000000000000000000000001+0x65
9d74ffdb31068ca2a1feb8e34830635c0647d714:u:priv:00000000000000000000000000000000000000000000000000000000000f9001+0xf8c
3d6871076780446bd46fc564b0c443e1fd415beb:c:priv:00000000000000000000000000000000000000000000000000000000000f9001+0xf8c
Ending test run.

As with everything Amazon: assume just half of what they promise. They say you have 32 vCPUs? Assume you have only 16.
Their GPU hardware says max worksize 1024? Guess what...

I'll make a new kardashev-haswell capable of running on AWS. Please don't tell Unknownhostname or we're screwed. :D

edit: p2.8xlarge ~90 Mkeys/s - the sharp spike you can see in the stats from 2017-05-16 - 2017-05-23

edit2: more convenience - no separate LBC startup for multi-GPU operation

Code:
root@ip-172-31-39-222:~/collider# ./LBC -c 32 -gopt dev=1-8:lws=512
GPU authorized: yes
Ask for work... got blocks [5158380559-5158410766] (31675 Mkeys)
oooo....ooooo (85.29 Mkeys/s)
legendary
Activity: 1120
Merit: 1037
฿ → ∞
May 15, 2017, 11:01:13 AM
#73
If you can make a list of hardware that could be "fun" to play with then I can scavenge and maybe find most of it to you.
I want to learn more master... xD

For now, I also want to learn more, e.g. about GPU programming, especially OpenCL.
While the OpenCL code in LBC is actually faster than the one in OpenCL-vanitygen,
it seems it is still far below its potential.

If the hashcat benchmarks are to be believed, my low-to-mid-range GPU
should be able to perform at least 270 mega-hash160 per second!!!
(compared to the current LBC implementation: 46 mega-hash160, which puts around 85% load on it)

And we're talking regular hash160 = RMD160(SHA256(x)), not the tuned Bitcoin hash160
(where you have fixed input sizes for SHA256 and a very restricted input size for RMD160, both resulting
in a significant speedup of the code - which we use).
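Why fixed input sizes buy so much: SHA-256 and RIPEMD-160 both work on 64-byte blocks, and padding adds one 0x80 byte plus an 8-byte length field. A 33-byte compressed pubkey therefore always fits in exactly one SHA-256 block (and the RMD160 input is always the fixed 32-byte digest, also one block), so the compression loops can be specialized and fully unrolled. Quick arithmetic check:

```python
# SHA-256 (and RIPEMD-160) process 64-byte blocks; padding costs
# 1 byte (0x80) plus an 8-byte message-length field.
def sha256_blocks(msg_len: int) -> int:
    return (msg_len + 1 + 8 + 63) // 64

print(sha256_blocks(33))  # compressed pubkey   -> 1 block
print(sha256_blocks(65))  # uncompressed pubkey -> 2 blocks
print(sha256_blocks(32))  # RMD160 input: fixed 32-byte digest -> 1 block
```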

In short: although we have problems putting load on the GPUs, it looks like the GPU would be able - with the
correct programming - to take 10 times more load than we currently expect. We're not using the GPU's
vectorization capabilities at all :-(

I thought about asking some professionals for help
https://streamhpc.com/development/making-the-release-version-of-prototype-code/
but I'm afraid of the cost...



legendary
Activity: 1120
Merit: 1037
฿ → ∞
May 15, 2017, 10:45:17 AM
#72
Searching for P2SH puzzles.
There have been multiple P2SH scripts which consisted of only one opcode (byte), likely made as a challenge. I think that with the pool speed, we'd easily find some longer scripts which can be spent (if they were created, of course).

What do you think?

It's in the pipeline. We currently have the "problem" of not being able to put enough load on the GPUs.
There are some tweaks to the generator we'll have to do first, but a GPU-only P2SH search to fill
idle GPU capacity is in the mental incubator already.

I have been made aware that P2SH addresses are probably even more susceptible to collision
attacks than P2PKH - and very GPU-friendly, with basically zero ECC requirements.

I am not sure if that would be considered a collision as well, but if we end up with a P2PKH hash160 from a
P2SH process, or vice versa, it could be considered a collision IMHO.

So yes - it will be done eventually.
full member
Activity: 158
Merit: 113
May 15, 2017, 10:34:11 AM
#71
Hey, I think I have an idea for a pool search project (but it would require rewriting the client a bit): Searching for P2SH puzzles.
There have been multiple P2SH scripts which consisted of only one opcode (byte), likely made as a challenge. I think that with the pool speed, we'd easily find some longer scripts which can be spent (if they were created, of course).

What do you think?