
Topic: Large Bitcoin Collider Thread 2.0 - page 13. (Read 57468 times)

newbie
Activity: 5
Merit: 0
November 01, 2017, 12:21:21 AM
Hereticalsauce is over 3000 Gkeys and has no gpuauth.
GPUAuth set now.

Thank you, sir.
So, Google wants $140 before they'll authorize me for GPU use.  Roll Eyes
Might just as well buy a video card and run this locally; time to dust off the ol' 5970.
legendary
Activity: 1120
Merit: 1037
฿ → ∞
October 31, 2017, 11:36:23 AM
I'd like to try these once my GPU is authed.

5b8f5562ac3873408d26852d74434040 is over 3000 Gkeys and has no gpuauth.
GPUAuth set now.
6fdafb1d737ef98bcbdceaaff16b2a9c is over 3000 Gkeys and has no gpuauth.
GPUAuth set now.
Hereticalsauce is over 3000 Gkeys and has no gpuauth.
GPUAuth set now.
QWERTYUIOP555 is over 3000 Gkeys and has no gpuauth.
GPUAuth set now.
YorikBlin is over 3000 Gkeys and has no gpuauth.
GPUAuth set now.
___vh___ is over 3000 Gkeys and has no gpuauth.
GPUAuth set now.
cyberguard is over 3000 Gkeys and has no gpuauth.
GPUAuth set now.
ddosddos is over 3000 Gkeys and has no gpuauth.
GPUAuth set now.
ffchampmt is over 3000 Gkeys and has no gpuauth.
GPUAuth set now.
legendary
Activity: 1120
Merit: 1037
฿ → ∞
October 31, 2017, 01:28:11 AM
IMO a 2h timeframe is too short. The probability of a power outage or network downtime lasting over 2h is quite high.

An averaging window somewhere between 6h and 12h would be more reasonable.

2h is fine. You can have multiple colliders, even geographically distributed, working for the same id.
Also, because of time constraints, I need to be efficient about any functional changes I make to the LBC;
taking the individual stats as the validation dataset is such an efficiency.

And therefore it stays as announced. It may be a problem if your collider infrastructure is not reliable:
https://lbc.cryptoguru.org/stats/Unknownhostname
But that is - intentionally - your problem, which you either solve or don't.



@arulbero

I may re-think the assignment of forfeited Gkeys: either assign them to some "dummy account", or track them as a separate per-client value, because I agree that the information about which Gkeys were yours and which were foreign should not be diluted.
legendary
Activity: 1428
Merit: 1000
October 31, 2017, 12:29:37 AM
Quote
there can be no single 2h average point with 0 Mkeys/s or else you are not eligible - no matter how many Gkeys you have submitted so far

IMO a 2h timeframe is too short. The probability of a power outage or network downtime lasting over 2h is quite high.

An averaging window somewhere between 6h and 12h would be more reasonable.
newbie
Activity: 5
Merit: 0
October 30, 2017, 06:21:11 PM
arulbero,
I think this is rico's way to incentivize keyrate contributed to the pool, and to reward those who stick with him in his experiment.
Perhaps the re-distributed Gkeys can be assigned to a dead-pool IP of 0.0.0.0 to keep them separate from your work.


I found Google's cloud computing to be quite nice; they are offering a $300 trial of their services, at least for me.
Without requesting a vCPU quota increase, a 24-core Skylake preemptible VM costs $0.20/hour and nets ~10 Mkeys/s.
A 96-core one costs $0.84/hr.
There are GPU options as well, up to eight K80s or four P100s. An eight-core Broadwell VM with four P100s costs $9.399/hr, or $5.799/hr with eight K80s.
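
For reference, a hypothetical gcloud invocation for such a VM (machine type, zone and accelerator count are illustrative, and GPU instances require the quota/authorization step mentioned above):

gcloud compute instances create lbc-worker \
    --zone=us-east1-c \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-p100,count=4 \
    --maintenance-policy=TERMINATE \
    --preemptible \
    --image-family=ubuntu-1604-lts \
    --image-project=ubuntu-os-cloud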

I'd like to try these once my GPU is authed.
legendary
Activity: 1948
Merit: 2097
October 30, 2017, 10:10:43 AM
Just my opinion:

when you delete an account, why are its Gkeys added to the other active accounts? Personally, I like to know exactly how many keys I have delivered. I don't need the Gkeys from other accounts; I prefer to see an accurate record.

You could gather all the Gkeys from deleted accounts in a single generic "other" account that doesn't appear in the top 30.


A few hours ago, the LBC computed its 16000 trillionth address (8000 trillion keys; each key has a compressed & uncompressed pubkey -> 2 addresses per key).

And Unknownhostname's share has now fallen below 50% of the total amount.
legendary
Activity: 1120
Merit: 1037
฿ → ∞
October 30, 2017, 02:28:19 AM
Greetings Colliders!

A few hours ago, the LBC computed its 16000 trillionth address (8000 trillion keys; each key has a compressed & uncompressed pubkey -> 2 addresses per key). At the current rate of 25-30 tn keys per day, this means 36 days per 1000 tn keys on average. In other words, the LBC has checked the equivalent of over 62500 billion pages on http://www.directory.io/ and at the moment is doing so at a rate of 2.5 million of these pages per second.
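
A quick back-of-the-envelope check of those figures, in Python (assuming directory.io lists 128 keys per page):

keys = 8000e12                    # 8000 trillion keys checked so far
rate = 27.5e12                    # ~25-30 trillion keys per day
print(keys / 128 / 1e9)           # ~62500 (billion directory.io pages)
print(1000 / (rate / 1e12))       # ~36 (days per 1000 tn keys)
print(rate / 86400 / 128 / 1e6)   # ~2.5 (million pages per second)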



I have thought a long time about how and when to distribute the funds that ended up in the LBC Pot (currently 0.12274528 BTC), and here's how the LBC Pot will be distributed among the participating colliders:

Every time the LBC reaches a bit boundary in the search space (53 bits, 54 bits, ...), the current LBC pot will be distributed among those who were active in the week before the LBC hit that boundary, proportionally to the Gkeys they have delivered. This means, if you look at your per-user statistics (e.g. https://lbc.cryptoguru.org/stats/__rico666__) at the time the LBC crosses this boundary, there can be no single 2h average point with 0 Mkeys/s or else you are not eligible - no matter how many Gkeys you have submitted so far. On the other hand, the speed in there (as long as it is above 0 Mkeys/s) does not have any influence on the payout - only your Gkeys delivered so far do.
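
A minimal Python sketch of this distribution rule (hypothetical names and data, not the actual server code):

def distribute_pot(pot_btc, accounts):
    # accounts: list of (name, gkeys_delivered, active_all_week)
    eligible = [(name, gkeys) for name, gkeys, active in accounts if active]
    total = sum(gkeys for _, gkeys in eligible)
    # Payout is proportional to Gkeys delivered; speed only gates eligibility.
    return {name: pot_btc * gkeys / total for name, gkeys in eligible}

print(distribute_pot(0.12274528, [
    ("alice", 3000, True),   # active all week -> eligible
    ("bob",   1000, True),
    ("carol", 9999, False),  # had a 0 Mkeys/s point -> not eligible
]))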



There are a lot of dead/dormant accounts that seem to have just tested LBC and then stopped colliding. And this is perfectly ok.
On the other hand, I don't think it makes sense to keep these accounts around indefinitely, as most of them have only single-digit Gkeys. But everyone started out small, so an automated cleanup mechanism should not immediately reap small accounts.

This is what will happen:

Starting with the 53-bit search space (which is due in about a month at current speed), accounts that have been inactive for (7 + Gkeys_delivered) days will be reaped and their Gkeys will be added proportionally to the other active accounts. Consider it a kind of Proof of Stake.  Wink

e.g.

An account that has delivered 5 Gkeys and goes inactive can stay inactive for 12 days before it is reaped.
An account that has delivered 1000 Gkeys and goes inactive can stay inactive for 1007 days before it is reaped.

It's clear that accounts that did some serious colliding in the past, with several thousand Gkeys, do not have to fear being reaped anytime soon.
But I should point out that as colliding speeds change over time, we may change this rule at some point in the future. (Imagine the hardware/software in 5 years gives you 1 Gkey/s; then 1 Gkey will probably buy you only 1 hour of idle time. Or something like that.)

For now, 7 days of idle time is ok for everyone; above that: 1 Gkey = 1 day.
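
As a one-function Python sketch of the rule above (illustrative, not the server code):

def days_before_reap(gkeys_delivered):
    # 7 days of grace for everyone, plus 1 extra day per Gkey delivered
    return 7 + gkeys_delivered

assert days_before_reap(5) == 12
assert days_before_reap(1000) == 1007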

newbie
Activity: 12
Merit: 0
October 29, 2017, 12:19:02 PM
The Remote Code Execution is there by design and not a vulnerability.
My apologies for implying that the RCE was a bug. I was using the word "vulnerability" to refer to the "state of being exposed to the possibility of being attacked", without judging whether it is a "bug" or a "feature". Your analogy to JavaScript is a good one. Browsers provide an execution sandbox to manage the risks of RCE. Users of LBC should, as you say, take appropriate steps to provide a comparable sandbox.

To avoid repeating yourself further, you might consider having the client print a message on startup, like: "The LBC client contains a feature to update itself. In the unlikely event that the LBC server were compromised by an attacker, the attacker would be able to execute arbitrary code on your machine as the user that executes LBC. Please execute LBC accordingly."  This is just a well-intended suggestion.

Apologies again for any insult.
legendary
Activity: 1120
Merit: 1037
฿ → ∞
October 28, 2017, 02:28:54 PM
RUNNING LBC AS ROOT IS NOT REQUIRED OR A GOOD PRACTICE

The LBC app contains a remote code execution vulnerability.

It seems impossible for people to remember things that have been said once, so I repeat it again. Hopefully it will last for a couple of months:

a) Running LBC as root is not required, but if you run LBC in a VM, or Docker container, or on a dedicated machine, it's as good as any other command running as root. I do it all the time.

b) The LBC is not an App - go to your sweepy wheepy Android/iOS device for that.

c) The Remote Code Execution is there by design and not a vulnerability.

Same as you do not consider the JavaScript remote code execution in your faggy browser a vulnerability. Or do you?

Quote
p.s. Thanks for providing the nvidia libraries. My Docker image does not use that and some extra Docker foo is required to leverage the GPU.

I have some reports of people successfully doing a PCI passthrough to VMware and KVM virtual machines, so that's definitely an option for a GPU-enabled client.
newbie
Activity: 12
Merit: 0
October 28, 2017, 12:06:38 PM
sudo ./LBC -x               

RUNNING LBC AS ROOT IS NOT REQUIRED OR A GOOD PRACTICE

The LBC app contains a remote code execution vulnerability. Rico assures us that he has taken many steps to prevent a malicious attacker from leveraging it. But since root privilege is not required, there is no reason to run LBC as root.

For extra protection, you might consider running LBC inside a Docker container. I provide a base image (dcw312/lbc-base:latest) as well as code to create it and use it at:
https://github.com/dcw312/lbc-client-docker
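
For example (hypothetical invocation - check the repo's README for the image's actual entrypoint; the GPU variant assumes the nvidia-docker2 runtime is installed):

> docker run --rm -it dcw312/lbc-base:latest
> docker run --rm -it --runtime=nvidia dcw312/lbc-base:latest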

p.s. Thanks for providing the nvidia libraries. My Docker image does not use that and some extra Docker foo is required to leverage the GPU.
jr. member
Activity: 32
Merit: 11
October 26, 2017, 05:36:47 PM
All commands:

# update the system and install the kernel extras
sudo apt-get update && sudo apt-get -y upgrade
sudo apt-get install -y linux-image-extra-`uname -r`

# build tools, libraries, NVIDIA driver and CUDA/OpenCL support
sudo apt install perl bzip2 xdelta3 libgmp-dev libssl-dev gcc make tmux
sudo apt-get install nvidia-375 nvidia-cuda-toolkit
sudo apt install ocl-icd-opencl-dev

# Perl modules via CPAN
sudo cpan install OpenCL
sudo cpan force install JSON
sudo cpan force install LWP
sudo cpan force install Parallel::ForkManager
sudo cpan force install Net::SSLeay
sudo cpan force install LWP::Protocol::https
sudo cpan force install Term::ReadKey

# or the same modules as Debian packages (apt cannot install CPAN module names directly)
sudo apt-get install libnet-ssleay-perl libcrypt-ssleay-perl
sudo apt-get install liblwp-protocol-https-perl libparallel-forkmanager-perl libterm-readkey-perl

sudo apt-get dist-upgrade -y
sudo reboot
sudo ./LBC -h
sudo ./LBC -x
newbie
Activity: 5
Merit: 0
October 23, 2017, 03:28:02 AM
Ping came back fine; I had already run those update commands, but it's still not working. I think this is an issue with the Perl certificate implementation. I had a similar issue running a Perl script on Windows, with the same error, that I could not fix.

LBC uses https://metacpan.org/pod/LWP::Protocol::https for HTTPS validation.

Worksforme.

So maybe

$ sudo cpan

cpan> install LWP::Protocol::https

Other than that: ¯\_(ツ)_/¯


Still no luck. Is there any way I can bypass HTTPS verification?

https://stackoverflow.com/questions/74358/how-can-i-get-lwp-to-validate-ssl-server-certificates#5329129
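
Per that Stack Overflow thread, LWP honors an environment variable that disables certificate verification. As a last resort only (it removes the protection against man-in-the-middle attacks):

> PERL_LWP_SSL_VERIFY_HOSTNAME=0 ./LBC -x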
legendary
Activity: 1120
Merit: 1037
฿ → ∞
October 23, 2017, 03:15:05 AM
Ping came back fine; I had already run those update commands, but it's still not working. I think this is an issue with the Perl certificate implementation. I had a similar issue running a Perl script on Windows, with the same error, that I could not fix.

LBC uses https://metacpan.org/pod/LWP::Protocol::https for HTTPS validation.

Worksforme.

So maybe

$ sudo cpan

cpan> install LWP::Protocol::https

Other than that: ¯\_(ツ)_/¯
newbie
Activity: 46
Merit: 0
October 22, 2017, 08:29:14 PM
Might be time to start up the server again and see what an NVIDIA Quadro card can do.
I know I did around a million keys per sec with the CPU alone.  Cheesy Cheesy
newbie
Activity: 5
Merit: 0
October 22, 2017, 04:54:50 PM
I'm having issues with the Arch Linux virtual machine on Windows: "Problem connecting to server https://lbc.cryptoguru.org/static/client (status: 500 Can't connect to lbc.cryptoguru.org:443 (certificate verify failed)). retries left 29"

maybe a

> pacman -S ca-certificates

will help. I remember (faintly) that the ca-certificates package said something like "valid until 20.10" or "next check 20.10" - not sure.

Or you do a

> pacman -Syu

Which is the Arch Linux way of "apt-get update && apt-get upgrade".

edit:

Or you have a simple networking problem. Try a

> ping 8.8.8.8

and see if that works. If yes, update as said above. If no, you have to get your VM networking right first. But that is a matter of the VM configuration, not of the OS running inside the VM.

Ping came back fine; I had already run those update commands, but it's still not working. I think this is an issue with the Perl certificate implementation. I had a similar issue running a Perl script on Windows, with the same error, that I could not fix.
legendary
Activity: 1120
Merit: 1037
฿ → ∞
October 21, 2017, 04:12:57 AM
I'm having issues with the Arch Linux virtual machine on Windows: "Problem connecting to server https://lbc.cryptoguru.org/static/client (status: 500 Can't connect to lbc.cryptoguru.org:443 (certificate verify failed)). retries left 29"

maybe a

> pacman -S ca-certificates

will help. I remember (faintly) that the ca-certificates package said something like "valid until 20.10" or "next check 20.10" - not sure.

Or you do a

> pacman -Syu

Which is the Arch Linux way of "apt-get update && apt-get upgrade".

edit:

Or you have a simple networking problem. Try a

> ping 8.8.8.8

and see if that works. If yes, update as said above. If no, you have to get your VM networking right first. But that is a matter of the VM configuration, not of the OS running inside the VM.
newbie
Activity: 5
Merit: 0
October 19, 2017, 10:35:12 PM
I'm afraid that almost all systems are now GPU-limited.

Only multi-GPU systems can take advantage of further ECC improvements (if there are any) and of the n-k symmetry. I think a GPU version of the ECC library would not be very useful at the moment. Above all, we need to speed up sha256/ripemd160 on the GPU.

You are right: multi-GPU systems are not GPU-limited, and the client has already been tested on systems with as many as 16 GPUs (currently the Amazon p2.16xlarge delivers 175 Mkeys/s).

I already have an n-k symmetry prototype running - in fact I had this as early as May 25th:
https://twitter.com/LBC_collider/status/867657663987015680

The single showstopper to enabling n-k symmetry is the accounting work that has to be done on the server.

In theory, with n-k symmetry and better hash160 OpenCL code (which I believe is possible - see also https://lbc.cryptoguru.org/crowdfunding), we may see 2x speedups on the same hardware.

Hi, I am having issues with the Arch Linux virtual machine on Windows: "Problem connecting to server https://lbc.cryptoguru.org/static/client (status: 500 Can't connect to lbc.cryptoguru.org:443 (certificate verify failed)). retries left 29"
newbie
Activity: 18
Merit: 0
October 19, 2017, 02:33:14 PM
The solution to the VM's "loadable library and perl binaries are mismatched" error when you run "./collider/LBC -x" is to run "cpan -r" and let it update everything.
legendary
Activity: 1120
Merit: 1037
฿ → ∞
October 18, 2017, 12:26:47 PM
I'm afraid that almost all systems are now GPU-limited.

Only multi-GPU systems can take advantage of further ECC improvements (if there are any) and of the n-k symmetry. I think a GPU version of the ECC library would not be very useful at the moment. Above all, we need to speed up sha256/ripemd160 on the GPU.

You are right: multi-GPU systems are not GPU-limited, and the client has already been tested on systems with as many as 16 GPUs (currently the Amazon p2.16xlarge delivers 175 Mkeys/s).

I already have an n-k symmetry prototype running - in fact I had this as early as May 25th:
https://twitter.com/LBC_collider/status/867657663987015680

The single showstopper to enabling n-k symmetry is the accounting work that has to be done on the server.

In theory, with n-k symmetry and better hash160 OpenCL code (which I believe is possible - see also https://lbc.cryptoguru.org/crowdfunding), we may see 2x speedups on the same hardware.
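
The n-k symmetry itself is simple to illustrate: if P = k*G = (x, y) on secp256k1, then (n-k)*G = -P = (x, p-y), so one EC multiplication yields address candidates for two private keys. A self-contained Python sketch (illustrative, not LBC code; needs Python 3.8+ for pow(x, -1, p)):

# secp256k1 domain parameters
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(P, Q):
    # affine point addition; None represents the point at infinity
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (x - P[0]) - P[1]) % p)

def ec_mul(k, P=G):
    # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1: R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

k = 0x123456789ABCDEF
x, y = ec_mul(k)
assert ec_mul(n - k) == (x, p - y)   # same x, negated y: two keys per mul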
legendary
Activity: 1948
Merit: 2097
October 18, 2017, 12:07:38 PM
Hurrah! Over a 30% jump, from ~31.5 Mkeys/s to ~41.6 Mkeys/s, on a 7700K with a 1080 Ti.

If your system is more or less like this:

https://bitcointalksearch.org/topic/m.18373053

or like this https://bitcointalksearch.org/topic/m.18446472

then we are now faster than oclvanitygen on fast GPUs too. Remember that 41.6 Mkeys/s means 41.6 M compressed keys + 41.6 M uncompressed keys per second - over 83 Maddresses/s! Could you test oclvanitygen on your machine?

GPU usage went from ~83% to ~98%.

I'm afraid that almost all systems are now GPU-limited.

Only multi-GPU systems can take advantage of further ECC improvements (if there are any) and of the n-k symmetry. I think a GPU version of the ECC library would not be very useful at the moment. Above all, we need to speed up sha256/ripemd160 on the GPU.