
Topic: Large Bitcoin Collider (Collision Finders Pool) - page 22. (Read 193420 times)

legendary
Activity: 1120
Merit: 1037
฿ → ∞
User SlarkBoy has generously deployed quite a few free giveaways here and there in the LBC search space.

The new BLF file and patch available on the FTP server (auto-updated on LBC restart) already contain these.

While I do not know the privkeys of these, I am aware of the total amount, and it is by far the biggest bounty program of LBC!

We will reveal the complete extent of these freebies once the fireworks of discovering them have started.

Have fun and a big cheers to SlarkBoy!

Rico
legendary
Activity: 1344
Merit: 1046
The generator isn't open source, am I right?
I have such an image for compiling wallet sources with all the GCC, Xcode, Homebrew stuff.
I will upload it in the coming week.

Thanks Wink

Cheers,
Ray
legendary
Activity: 1120
Merit: 1037
฿ → ∞
I know. But what's the reason?  Wink
Is there any incompatibility between Arch Linux and Darwin?


The problem is not the LBC client itself (a Perl script) - that should be pretty much portable across operating systems.

The problem is the generator binaries:
http://unix.stackexchange.com/questions/212754/is-there-a-way-to-run-a-linux-binary-on-os-x

I have no Darwin system - or even Apple hardware - here.

If someone sent a VMware-able MacOS X image my way, I could try to compile the generator for that target OS.

Rico
legendary
Activity: 1344
Merit: 1046
LBC under MacOS Sierra (brew):
...
Any idea?

Not supported.

I know. But what's the reason?  Wink
Is there any incompatibility between Arch Linux and Darwin?

Cheers,
Ray
legendary
Activity: 1120
Merit: 1037
฿ → ∞
LBC under MacOS Sierra (brew):
...
Any idea?

Not supported.
legendary
Activity: 1344
Merit: 1046
LBC under MacOS Sierra (brew):

Code:
Unknown operating system: darwin-thread. Please report.

All packages have been installed successfully.

Code:
perl -V:myarchname

myarchname='i386-darwin'

Any idea?

Thanks in advance.

Cheers,
Ray
legendary
Activity: 1120
Merit: 1037
฿ → ∞


9 days to go - worst case. I take it everyone has their hook-finds in place, or is at least checking daily for the presence of FOUND.txt.
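If you don't have a find hook set up, a daily cron job is enough. A minimal sketch - the collider path and mail address are placeholders, nothing prescribed by LBC:

Code:
# hypothetical cron entry: once a day, alert if FOUND.txt exists and is non-empty
0 8 * * * [ -s /path/to/collider/FOUND.txt ] && echo "LBC: FOUND.txt has entries - check it" | mail -s "LBC find" you@example.com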
legendary
Activity: 1120
Merit: 1037
฿ → ∞
Am I the only one stuck at 12 Mkeys/s?  Angry

If you give me access to the machine, I can have a look at what's wrong.

@unknownhostname

-t 10 is ok. Actually, on AWS I wouldn't change that. If you really wanted to optimize, you could give the instances in Europe -t 10 and the instances in the US/Asia -t 17 (bigger network lag).


Rico
legendary
Activity: 1932
Merit: 2077
Am I the only one stuck at 12 Mkeys/s?  Angry
member
Activity: 62
Merit: 10
So ... what's the best config for a 64 vCPU / 240 GB mem machine?

Right now I have     "time":   10
legendary
Activity: 1120
Merit: 1037
฿ → ∞
Ask for work... got blocks [1735316448-1735339999] (24696 Mkeys)
o ..snip.. o (24.48 Mkeys/s)

I assume that's an m4.16xlarge with -t 10.

The +1.3 Mkeys/s you get compared to what I see right now is because of the -t 10 (you) versus the -t 5 (me).

Actually, I would recommend everyone set something between -t 10 and -t 60 (*) at the moment - depending on how long you want to let LBC run (do not forget: a graceful shutdown can take 60-120 minutes if you have -t 60). Starting from -t 10, the startup cost is pretty much amortized.

Also, I'd recommend not sticking to the "boring" 5/10/20 numbers. Try 11/17/23/37 or so (not exactly these - just dice up some number if you want). You'll help spread the client-server requests more evenly, and you also give your client more "individual" block sizes, which can actually help it get assigned block intervals that would otherwise be left for redistribution (when someone ends their client ungracefully).

(*) and no more than 60 if you don't have enough memory, because the LBC client currently has a small memory leak (you can see it taking more and more space within a round). Nothing tragic, but it's there - I noticed it when I ran a client overnight with -t 420, which actually became critical on my 16 GB machine.
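To illustrate, here is a minimal sketch of writing such a non-round value into the config - assuming the "time" key in lbc.json corresponds to -t, as the config examples further down this page suggest (the cpus/id/secret values are placeholders):

Code:
# write an lbc.json with a non-round block time; everything except "time" is a placeholder
cat > lbc.json <<'EOF'
{
    "cpus":   64,
    "id":     "yourname",
    "secret": "xxxxxxxxxxxxxx",
    "time":   17
}
EOF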


Rico
member
Activity: 62
Merit: 10
Ask for work... got blocks [1735316448-1735339999] (24696 Mkeys)
o ..snip.. o (24.48 Mkeys/s)


CPU only
legendary
Activity: 1120
Merit: 1037
฿ → ∞
AWS is the most shit cloud that ever existed and exists Smiley

What do you suggest as an alternative?

Unknownhostname is right. Alternative? Own hardware.

What I observed at AWS:

You get what they call "vCPUs". I assume that stands for "virtual CPUs". In general, it looks like you get only 50% of the announced CPU power - or in other words, only 50% of the advertised number of CPUs as "real" CPUs. I.e. if you get an instance with 64 vCPUs, you can assume you only get the performance that 32 physical CPUs would give you.

Most of the time you can assume that, but sometimes it's even worse. While I regularly got 18 Mkeys/s out of the m4.16xlarge, some instances gave me only 16 Mkeys/s - I guess Amazon speculates that their customers do not notice the difference (which is understandable, as most customers do not put a 100% CPU load on the machines).
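You can check where that factor of two likely comes from on any instance - a quick sketch using the same /proc/cpuinfo fields quoted further down this page ("siblings" counts hardware threads per package, "cpu cores" counts physical cores):

Code:
# logical CPUs - what AWS sells as vCPUs
grep -c '^processor' /proc/cpuinfo
# threads vs. physical cores per package; if "siblings" is twice "cpu cores",
# every second vCPU is just a hyperthread
grep -m1 'siblings'  /proc/cpuinfo
grep -m1 'cpu cores' /proc/cpuinfo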

Having said all that: I actually started up an m4.16xlarge instance after your report, and I get

Code:
[root@ip-172-31-4-174 collider]# ./LBC
Ask for work... got blocks [1733262128-1733274415] (12884 Mkeys)
ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooo (23.38 Mkeys/s)

with -t 5, so this is expected. You get that performance with any modern 4-core + GPU configuration.

If you get less than these 20+ Mkeys/s, something else is wrong.


Rico
legendary
Activity: 1932
Merit: 2077

1 answer Cheesy

AWS is the most shit cloud that ever existed and exists Smiley

What do you suggest as an alternative?
member
Activity: 62
Merit: 10
I'm using t = 20

Code:
{
    "cpus":   64,
    "id":     "arulbero",
    "secret": "xxxxxxxxxxxxxx",
    "time":   20
}


1 answer Cheesy


AWS is the most shit cloud that ever existed and exists Smiley
legendary
Activity: 1932
Merit: 2077
I'm using t = 20

Code:
{
    "cpus":   64,
    "id":     "arulbero",
    "secret": "xxxxxxxxxxxxxx",
    "time":   20
}
legendary
Activity: 1120
Merit: 1037
฿ → ∞
AWS m4.4x vs m4.16x

The problem is the speed:

1) about 6.3 Mkeys/s                2) about 12 Mkeys/s

Obviously I have 2 different "lbc.json" config files: 1) --> "cpus":   16,   2) --> "cpus":   64.

Why is instance #2  so slow?

What about -t? LBC takes quite a penalty on startup, because it has to load the BLF file and connect to the FTP server to check for updates, and AWS machines have quite bad IO. The startup penalty is even worse with the GPU version, because it additionally has to compile the OpenCL program.

I was able to get 18 Mkeys/s (https://lbc.cryptoguru.org/man/admin#system-speed) from an m4.16xlarge instance with the old client (secp256k1 lib). My guess would be that the new client with your ECC should give at least 32 times the per-CPU speed you see from the benchmark, plus 20%. So at least over 20 Mkeys/s, I guess.

With the AWS instances, anything smaller than -t 10 does not make sense.
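As a concrete (hedged) example - assuming -t on the command line maps to the "time" setting in lbc.json, the way it is used throughout this thread:

Code:
# start LBC from its install directory with a non-round block time >= 10 (AWS)
cd collider
./LBC -t 17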

Rico
legendary
Activity: 1932
Merit: 2077
I'm using AWS instances to run the LBC client (CPU-only version).

I chose 2 instance types (https://aws.amazon.com/ec2/instance-types/):

1) m4.4xlarge  (16 vCPUs)   about $0.13 per hour (spot instance)
2) m4.16xlarge (64 vCPUs)   about $0.55 per hour (spot instance)

generator:  gen-hrdcore-avx2-linux64

1)
Code:
$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 79
model name : Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
stepping : 1
microcode : 0xb000014
cpu MHz : 2300.066
cache size : 46080 KB
physical id : 0
siblings : 16
core id : 0
cpu cores : 8
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt
bugs :
bogomips : 4600.13
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 79
model name : Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
....
processor : 15
vendor_id : GenuineIntel
cpu family : 6
model : 79
model name : Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz

2)
Code:
$ cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 79
model name      : Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
stepping        : 1
...........
processor       : 63
vendor_id       : GenuineIntel
cpu family      : 6
model           : 79
model name      : Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz

The problem is the speed:

1) about 6.3 Mkeys/s                2) about 12 Mkeys/s

Obviously I have 2 different "lbc.json" config files: 1) --> "cpus":   16,   2) --> "cpus":   64.

Why is instance #2  so slow?
legendary
Activity: 3486
Merit: 2287
Top Crypto Casino
Hahaha, nice one! And an idiot like me even believed it for a moment and thought: oh my god, what an assh**le is here with us?  Roll Eyes
 Grin Grin Grin
legendary
Activity: 1120
Merit: 1037
฿ → ∞
April fool's prank?

Killjoy! Grinch! Spoilsport!  Grin

I had hoped to cause more of a stir - maybe even get becoin to show up and hold some lectures.

April 1st confirmed.


Rico