
Topic: Large Bitcoin Collider Thread 2.0

legendary
Activity: 1428
Merit: 1000
October 31, 2017, 01:29:37 AM
Quote
there can be no single 2h average point with 0 Mkeys/s or else you are not eligible - no matter how many Gkeys you have submitted so far

IMO a 2h timeframe is too short. The probability of a power outage or network downtime lasting over 2 hours is quite high.

An average point somewhere between 6 and 12 hours would be more reasonable.
newbie
Activity: 5
Merit: 0
October 30, 2017, 07:21:11 PM
arulbero,
I think this is rico's way to incentivize key rate contributions to the pool, and to reward those who stick with him in his experiment.
Perhaps the re-distributed Gkeys could be assigned to a dead-pool IP of 0.0.0.0 to keep them separate from your work.


I found Google's cloud computing to be quite nice; they are offering a $300 trial of their services, at least for me.
Without asking for more vCPUs, one 24-core Skylake preemptible VM per region will cost $0.20/hour and net ~10 Mkeys/s.
A 96-core VM will cost $0.84/hr.
There are GPU options as well, up to eight K80s or four P100s. An eight-core Broadwell with four P100s will cost $9.399/hr, or $5.799/hr with eight K80s.
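For a rough cost comparison using the numbers above:

\[ \$0.20/\mathrm{hr} \div 10\ \mathrm{Mkeys/s} = \$0.02\ \text{per Mkeys/s-hour on the 24-core CPU option} \]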

I'd like to try these once my GPU is authed.
legendary
Activity: 1914
Merit: 2071
October 30, 2017, 11:10:43 AM
Just my opinion:

when you delete an account, why are its Gkeys added to the other active accounts? Personally, I like to know exactly how many keys I have delivered. I don't need the Gkeys from other accounts. I prefer to see an accurate record.

You could gather all the Gkeys from deleted accounts in a single generic account "other" that doesn't appear in the top 30.


A few hours ago, the LBC computed its 16000 trillionth address (8000 trillion keys; each key yields a compressed and an uncompressed pubkey -> 2 addresses per key).

Meanwhile, Unknownhostname's share has fallen below 50% of the total amount.
legendary
Activity: 1120
Merit: 1037
฿ → ∞
October 30, 2017, 03:28:19 AM
Greetings Colliders!

A few hours ago, the LBC computed its 16000 trillionth address (8000 trillion keys; each key yields a compressed and an uncompressed pubkey -> 2 addresses per key). At the current rate of 25-30 tn keys per day, this means 36 days for 1000 tn keys on average. In other words, the LBC has checked the equivalent of over 62500 billion pages on http://www.directory.io/ and at the moment is doing so at a rate of 2.5 million of these pages per second.
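As a sanity check on that equivalence (assuming directory.io's 128 keys per page, which is consistent with the figures above):

\[ \frac{8000\times10^{12}\ \mathrm{keys}}{128\ \mathrm{keys/page}} = 62.5\times10^{12}\ \mathrm{pages}, \qquad \frac{27.5\times10^{12}\ \mathrm{keys/day}}{128\ \mathrm{keys/page}\cdot 86400\ \mathrm{s/day}} \approx 2.5\times10^{6}\ \mathrm{pages/s} \]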



I have thought for a long time about how and when to distribute the funds that ended up in the LBC Pot (currently 0.12274528 BTC), and here's how the LBC Pot will be distributed among the participating colliders:

Every time the LBC reaches a bit boundary in the search space (53 bits, 54 bits, ...), the current LBC pot will be distributed proportionally to the Gkeys delivered among those who were active in the week before the LBC hits this boundary. This means: if you look at your per-user statistics (e.g. https://lbc.cryptoguru.org/stats/__rico666__) at the time the LBC crosses this boundary, there can be no single 2h average point with 0 Mkeys/s, or else you are not eligible - no matter how many Gkeys you have submitted so far. On the other hand, the speed shown there (as long as it is above 0 Mkeys/s) has no influence on the payout - only the Gkeys you have delivered so far do.
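A minimal sketch of that distribution rule (hypothetical account names and numbers; it assumes the 2h-average eligibility check has already been applied to fill the %active flags):

#!/usr/bin/perl
use strict; use warnings;

my $pot = 0.12274528;    # current LBC pot in BTC
# hypothetical Gkeys delivered per account, and whether the account was
# active (no 0 Mkeys/s 2h average point) in the week before the bit boundary
my %gkeys  = ( alice => 5000, bob => 1200, carol => 300 );
my %active = ( alice => 1,    bob => 1,    carol => 0   );

my @eligible = grep { $active{$_} } sort keys %gkeys;
my $total = 0;
$total += $gkeys{$_} for @eligible;

# payout is proportional to total Gkeys delivered; current speed is irrelevant
printf "%-6s %.8f BTC\n", $_, $pot * $gkeys{$_} / $total for @eligible;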



There are a lot of dead/dormant accounts whose owners seem to have just tested LBC and then stopped colliding. And this is perfectly ok.
On the other hand, I don't think it makes sense to keep these accounts around indefinitely. Most of them have only a single-digit number of Gkeys, but everyone started out small, so an automated cleanup mechanism should not immediately reap small accounts.

This is what will happen:

Starting with the 53-bit search space (which is due in about a month at the current speed), accounts that have been inactive for (7 + Gkeys_delivered) days will be reaped and their Gkeys will be added proportionally to the other active accounts. Consider it a kind of Proof of Stake.  Wink

e.g.

An account has delivered 5 Gkeys and is inactive. This account can be inactive for 12 days before it is reaped.
An account has delivered 1000 Gkeys and is inactive. This account can be inactive for 1007 days before it is reaped.

It's clear that accounts that did some serious colliding in the past, with several thousand Gkeys, do not have to fear being reaped anytime soon.
But I should point out that as colliding speeds change over time, we may change this rule at some point in the future. (Imagine that the hardware/software in 5 years can give you 1 Gkey/s; then probably 1 Gkey will buy you 1 hour of idle time. Or something like that.)

For now, 7 days of idle time is ok for everyone; above that: 1 Gkey = 1 day.
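A one-function sketch of this rule, reproducing the two examples above:

#!/usr/bin/perl
use strict; use warnings;

# an inactive account survives (7 + Gkeys_delivered) days before being reaped
sub days_until_reaped {
    my ($gkeys_delivered) = @_;
    return 7 + $gkeys_delivered;    # 7 days of grace, then 1 Gkey buys 1 day
}

printf "%4d Gkeys -> %4d days\n", $_, days_until_reaped($_) for (5, 1000);   # 12 and 1007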

newbie
Activity: 12
Merit: 0
October 29, 2017, 01:19:02 PM
The Remote Code Execution is there by design and not a vulnerability.
My apologies for implying that the RCE was a bug. I was using the word "vulnerability" to refer to the "state of being exposed to the possibility of being attacked", without judgement of it being a "bug" or a "feature". Your analogy to JavaScript is a good one. Browsers provide an execution sandbox to manage the risks of RCE. Users of LBC should, as you say, take appropriate steps to provide an appropriate sandbox.

To avoid repeating yourself further, you might consider having the client print a message on startup, like: "The LBC client contains a feature to update itself. In the unlikely event that the LBC server were compromised by an attacker, the attacker would be able to execute arbitrary code on your machine as the user that executes LBC. Please execute LBC accordingly."  This is just a well-intended suggestion.

Apologies again for any insult.
legendary
Activity: 1120
Merit: 1037
฿ → ∞
October 28, 2017, 03:28:54 PM
RUNNING LBC AS ROOT IS NEITHER REQUIRED NOR GOOD PRACTICE

The LBC app contains a remote code execution vulnerability.

It seems impossible for people to remember things that have been said once, so I repeat it again. Hopefully it will last for a couple of months:

a) Running LBC as root is not required, but if you run LBC in a VM, or Docker container, or on a dedicated machine, it's as good as any other command running as root. I do it all the time.

b) The LBC is not an App - go to your sweepy wheepy Android/iOS device for that.

c) The Remote Code Execution is there by design and not a vulnerability.

Same as you do not consider the JavaScript remote code execution in your faggy browser a vulnerability. Or do you?

Quote
p.s. Thanks for providing the nvidia libraries. My Docker image does not include them, and some extra Docker foo is required to leverage the GPU.

I have some reports of people successfully doing a PCI passthrough to VMware and KVM virtual machines, so that's definitely an option for a GPU enabled client.
newbie
Activity: 12
Merit: 0
October 28, 2017, 01:06:38 PM
sudo ./LBC -x               

RUNNING LBC AS ROOT IS NEITHER REQUIRED NOR GOOD PRACTICE

The LBC app contains a remote code execution vulnerability. Rico assures us that he has taken many steps to prevent a malicious attacker from leveraging it. But since root privilege is not required, there is no reason to run LBC as root.

For extra protection, you might consider running LBC inside a Docker container. I provide a base image (dcw312/lbc-base:latest) as well as code to create it and use it at:
https://github.com/dcw312/lbc-client-docker
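For example, a hypothetical invocation (the exact entrypoint and any volume mounts depend on how the image is built - see the repo above):

docker run --rm -it dcw312/lbc-base:latest /bin/bash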

p.s. Thanks for providing the nvidia libraries. My Docker image does not include them, and some extra Docker foo is required to leverage the GPU.
jr. member
Activity: 32
Merit: 11
October 26, 2017, 06:36:47 PM
All commands:

sudo apt-get update && sudo apt-get -y upgrade
sudo apt-get install -y linux-image-extra-`uname -r`               
sudo apt install perl bzip2 xdelta3 libgmp-dev libssl-dev gcc make               
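# note: nvidia-367 below and nvidia-375 further down are two different driver versions - install only one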
sudo apt-get install gcc make tmux libssl-dev xdelta3 nvidia-367 nvidia-cuda-toolkit               
sudo cpan OpenCL
sudo cpan -f JSON
sudo cpan -f LWP
sudo cpan -f Parallel::ForkManager
sudo cpan -f Net::SSLeay
sudo cpan -f LWP::Protocol::https
sudo cpan -f Term::ReadKey
sudo apt-get install libnet-ssleay-perl               
sudo apt-get install libcrypt-ssleay-perl               
# apt uses Debian package names, not Perl module names:
sudo apt-get install liblwp-protocol-https-perl
sudo apt-get install libparallel-forkmanager-perl
sudo apt-get install libterm-readkey-perl
sudo apt install ocl-icd-opencl-dev               
sudo apt-get install nvidia-375               
sudo apt-get dist-upgrade -y               
sudo reboot               
sudo ./LBC -h               
sudo ./LBC -x               
newbie
Activity: 5
Merit: 0
October 23, 2017, 04:28:02 AM
Ping came back fine; I updated with those commands previously, but it's still not working. I think this is an issue with the Perl certificate implementation. I had a similar issue running a Perl script on Windows with the same error, which I could not fix.

LBC uses https://metacpan.org/pod/LWP::Protocol::https for HTTPS validation.

Worksforme.

So maybe

$ sudo cpan

cpan> install LWP::Protocol::https

Other than that: ¯\_(ツ)_/¯


Still no luck. Is there any way I can bypass the HTTPS validation?

https://stackoverflow.com/questions/74358/how-can-i-get-lwp-to-validate-ssl-server-certificates#5329129
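For reference, the workaround from that Stack Overflow thread (insecure - it disables certificate verification entirely, so use it only to confirm the diagnosis). Assuming the client goes through LWP::UserAgent, which honors this environment variable:

PERL_LWP_SSL_VERIFY_HOSTNAME=0 ./LBC -x

or, inside a Perl script:

use LWP::UserAgent;
# 0x00 is IO::Socket::SSL's SSL_VERIFY_NONE
my $ua = LWP::UserAgent->new(ssl_opts => { verify_hostname => 0, SSL_verify_mode => 0x00 });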
legendary
Activity: 1120
Merit: 1037
฿ → ∞
October 23, 2017, 04:15:05 AM
Ping came back fine; I updated with those commands previously, but it's still not working. I think this is an issue with the Perl certificate implementation. I had a similar issue running a Perl script on Windows with the same error, which I could not fix.

LBC uses https://metacpan.org/pod/LWP::Protocol::https for HTTPS validation.

Worksforme.

So maybe

$ sudo cpan

cpan> install LWP::Protocol::https

Other than that: ¯\_(ツ)_/¯
newbie
Activity: 46
Merit: 0
October 22, 2017, 09:29:14 PM
Might be time to start up the server again and see what an Nvidia Quadro card can do.
I know I did around a million keys per second with the CPU alone.  Cheesy Cheesy
newbie
Activity: 5
Merit: 0
October 22, 2017, 05:54:50 PM
I'm having issues with the Arch Linux virtual machine on Windows: Problem connecting to server https://lbc.cryptoguru.org/static/client (status: 500 Can't connect to lbc.cryptoguru.org:443 (certificate verify failed)). retries left 29

maybe a

> pacman -S ca-certificates

will help. I remember (faintly) that the ca-certificates package said something like "valid until 20.10" or "next check 20.10" - not sure.

Or you do a

> pacman -Syu

Which is the Arch Linux way of doing "apt-get update && apt-get upgrade".

edit:

Or you have a simple networking problem. Try a

> ping 8.8.8.8

if that works. If yes, update as described above. If no, you have to get your VM networking right first. But that is a matter of the VM configuration, not of the OS running inside the VM.

Ping came back fine; I updated with those commands previously, but it's still not working. I think this is an issue with the Perl certificate implementation. I had a similar issue running a Perl script on Windows with the same error, which I could not fix.
legendary
Activity: 1120
Merit: 1037
฿ → ∞
October 21, 2017, 05:12:57 AM
I'm having issues with the Arch Linux virtual machine on Windows: Problem connecting to server https://lbc.cryptoguru.org/static/client (status: 500 Can't connect to lbc.cryptoguru.org:443 (certificate verify failed)). retries left 29

maybe a

> pacman -S ca-certificates

will help. I remember (faintly) that the ca-certificates package said something like "valid until 20.10" or "next check 20.10" - not sure.

Or you do a

> pacman -Syu

Which is the Arch Linux way of doing "apt-get update && apt-get upgrade".

edit:

Or you have a simple networking problem. Try a

> ping 8.8.8.8

if that works. If yes, update as described above. If no, you have to get your VM networking right first. But that is a matter of the VM configuration, not of the OS running inside the VM.
newbie
Activity: 5
Merit: 0
October 19, 2017, 11:35:12 PM
I'm afraid that almost all systems are now GPU limited.

Only multi-GPU systems can take advantage of other ECC improvements (if there are any) and of the n-k symmetry. I think that a GPU version of the ECC library would not be very useful at the moment. We need above all to speed up sha256/ripemd160 on the GPU.

You are right: multi-GPU systems are not GPU limited, and the client has already been tested on systems with as many as 16 GPUs (currently the Amazon p2.16xlarge delivers 175 Mkeys/s).

I already have an n-k symmetry prototype running - in fact, I had it as early as May 25th:
https://twitter.com/LBC_collider/status/867657663987015680

The single showstopper to enabling n-k symmetry is the accounting work that has to be done on the server.

In theory, with n-k symmetry and better hash160 OpenCL code (which I believe is possible - see also https://lbc.cryptoguru.org/crowdfunding), we may see 2x speedups on the same hardware.

Hi, I'm having issues with the Arch Linux virtual machine on Windows: Problem connecting to server https://lbc.cryptoguru.org/static/client (status: 500 Can't connect to lbc.cryptoguru.org:443 (certificate verify failed)). retries left 29
newbie
Activity: 18
Merit: 0
October 19, 2017, 03:33:14 PM
The solution to the VM's "loadable library and perl binaries are mismatched" error when you run "./collider/LBC -x" is to run "cpan -r", which recompiles the dynamically loaded modules against the current perl binary.
legendary
Activity: 1120
Merit: 1037
฿ → ∞
October 18, 2017, 01:26:47 PM
I'm afraid that almost all systems are now GPU limited.

Only multi-GPU systems can take advantage of other ECC improvements (if there are any) and of the n-k symmetry. I think that a GPU version of the ECC library would not be very useful at the moment. We need above all to speed up sha256/ripemd160 on the GPU.

You are right: multi-GPU systems are not GPU limited, and the client has already been tested on systems with as many as 16 GPUs (currently the Amazon p2.16xlarge delivers 175 Mkeys/s).

I already have an n-k symmetry prototype running - in fact, I had it as early as May 25th:
https://twitter.com/LBC_collider/status/867657663987015680

The single showstopper to enabling n-k symmetry is the accounting work that has to be done on the server.

In theory, with n-k symmetry and better hash160 OpenCL code (which I believe is possible - see also https://lbc.cryptoguru.org/crowdfunding), we may see 2x speedups on the same hardware.
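For background on why the n-k symmetry can roughly halve the EC work (a standard secp256k1 fact, not LBC-specific): if \(Q = kG\) on a curve over \(\mathbb{F}_p\) with group order \(n\), then

\[ (n-k)G = -kG = -Q = (x_Q,\; p - y_Q), \]

so every computed point yields address candidates for two private keys, k and n-k, at the cost of a single field negation.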
legendary
Activity: 1914
Merit: 2071
October 18, 2017, 01:07:38 PM
Hurrah! An over-30% jump from ~31.5 Mkeys/s to ~41.6 Mkeys/s on a 7700K with a 1080 Ti

If your system is more or less like this:

https://bitcointalksearch.org/topic/m.18373053

or like this https://bitcointalksearch.org/topic/m.18446472

then we are now faster than oclvanitygen on fast GPUs too. Remember that 41.6 Mkeys/s means 41.6 M compressed keys + 41.6 M uncompressed keys per second - over 83 Maddresses/s!  Could you test oclvanitygen on your machine?

GPU usage went from ~83% to ~98%

I'm afraid that almost all systems are now GPU limited.

Only multi-GPU systems can take advantage of other ECC improvements (if there are any) and of the n-k symmetry. I think that a GPU version of the ECC library would not be very useful at the moment. We need above all to speed up sha256/ripemd160 on the GPU.
legendary
Activity: 1120
Merit: 1037
฿ → ∞
October 18, 2017, 07:29:20 AM
Hurrah! An over-30% jump from ~31.5 Mkeys/s to ~41.6 Mkeys/s on a 7700K with a 1080 Ti

GPU usage went from ~83% to ~98%

Thanks Arulbero for the optimizations and Rico for quick implementation  Smiley

28% for GPU clients that were not GPU limited - to be precise.  Wink
Your observation is consistent with what is seen on these machines.

i7-6700 CPU @ 3.40GHz + 1080 : 32.47 Mkeys/s -> 42.36 Mkeys/s

That makes the overall collider speed the equivalent of 6 such machines.
If we had 600 of these colliders, the next puzzle transaction privkey would be here in less than 24 hours - worst case.
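For scale, using the per-machine rate above (whether that means under 24 hours then depends on how much of the target space remains):

\[ 600 \times 42.36\ \mathrm{Mkeys/s} \approx 25.4\ \mathrm{Gkeys/s} \approx 2.2\times10^{15}\ \mathrm{keys/day}, \]

while the current ~6 machine-equivalents work out to ~22 tn keys/day, consistent with the 25-30 tn/day figure quoted earlier in the thread.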
legendary
Activity: 1428
Merit: 1000
October 18, 2017, 05:31:15 AM
Hurrah! An over-30% jump from ~31.5 Mkeys/s to ~41.6 Mkeys/s on a 7700K with a 1080 Ti

GPU usage went from ~83% to ~98%

Thanks Arulbero for the optimizations and Rico for quick implementation  Smiley
legendary
Activity: 1120
Merit: 1037
฿ → ∞
October 16, 2017, 08:22:48 AM
Success!

(25.99 Mkeys/s) on a machine (Skylake + M2000M) that gave me 22.7 Mkeys/s max with the previous generator.
Thanks to the new arulbero ECC library.

A nice side effect of arulbero completely ditching the libgmp requirement (by providing his own tailored bignum math) is that the generator binaries are now only half the size of the previous ones.

243184 Oct 16 14:20 kardashev-skylake

237 KiB - we may cast that soon into an FPGA   Grin - just kidding... Or am I?

Still, please have patience until I push the new binaries to the server; I would like to get rid of the FTP server first and move everything to HTTPS communication.


edit: with some newer versions I was able to squeeze 27.26 Mkeys/s out of my machine at around 95-96% GPU usage. Certainly more will be possible in the future.

kardashev-skylake is now available in the new versions; those of you who have Skylake machines with a GPU, please test (the client should auto-update the generator upon restart).

Generators for the other instruction sets will follow soon.