
Topic: Flexible mining proxy - page 16. (Read 88812 times)

full member
Activity: 182
Merit: 107
May 04, 2011, 06:04:44 PM
#17
Alright, I've cleaned up the repo and finished adding the final touches (for now...).  Grab the code: https://github.com/cdhowie/Bitcoin-mining-proxy

Setup directions are in the INSTALL file.  Please read the whole thing before asking questions; I spent a long time writing it and proofread it several times to make sure it's complete and correct.  Let me know if anything is unclear or broken, and please feel free to use GitHub's issue tracking to report problems.  :)

If you like the software and find it useful, there is a GPG-signed Bitcoin address on the about page where you can send donations.  Any amount would be appreciated.

For those interested in reliability information, I've been running one miner against it consistently for a full month, and a few other miners against it sporadically.  The getwork proxy has worked 100% of the time for me in my deployed copy, except during power failures.  ;)

@pwnyboy: Thanks for offering the bounty!  I was already getting it ready for release when you posted, so you don't have to send the bounty if you don't want to (it really wasn't what motivated me; I just finally got off my lazy butt because everyone's been waiting for it).  But I won't turn it down either.  ;)  If you still want to send it, feel free to use the donation address.
full member
Activity: 125
Merit: 100
May 04, 2011, 03:10:12 PM
#16
After recent strangeness with Deepbit, I'm putting a 10 BTC bounty on this, contingent upon ultimate release of the working product and source code before I'm forced to write my own.  Looking forward to getting up and running.
member
Activity: 308
Merit: 10
May 04, 2011, 01:39:21 PM
#15
+1, I want to test this too, please.  :)
hero member
Activity: 532
Merit: 505
April 23, 2011, 06:30:33 PM
#14
I would also like to test this, so please keep us up to date.  ;)
full member
Activity: 125
Merit: 100
April 23, 2011, 05:36:39 PM
#13
What is the status of this?  I'd like to start testing and maybe contributing code if you're ready to release it into the wild.
newbie
Activity: 5
Merit: 0
April 20, 2011, 08:40:25 AM
#12
The way I understand the math, all of your recently-submitted shares are considered, no matter which worker they were submitted through -- so running several miners on one worker account or on several should make no difference to the reward you get when the block is solved.  Even if one machine drops out, since the shares are considered collectively, it doesn't matter which account they were submitted through; they will age just like every other share you submit.  (Correct me if I'm wrong, slush.)  In other words, the older shares you submitted from the still-active machines have aged and become worthless too; you just can't tell, because those machines are submitting enough new shares to keep the reward up.

This is interesting.  In that case there is still the clear advantage of being able to quickly switch to a different pool if the current one goes offline for whatever reason.


Now, having said that, my proxy doesn't "dynamically associate a worker account with each machine."  You still need to set up worker accounts in my proxy script.  The difference is that you can assign those worker accounts to more than one pool.  (Although you could probably hack it to do what you want. ;) )

Ah, I think you misunderstood me there; sorry if I wasn't clear enough.  I meant to say that the alternative to what I described was your software; I understood the purpose of your software correctly.
full member
Activity: 182
Merit: 107
April 20, 2011, 07:12:34 AM
#11
Generally this is how I do it.  However, take slush's pool and a GPU cluster that isn't dedicated to generating Bitcoins and does so only when idle: the number of GPUs and machines available constantly changes.  While there is a maximum number of machines, I'd still need software to dynamically associate a worker account with each machine.  This is because (here's where slush's pool comes in) slush's reward-calculation formula includes the time at which the last share was submitted.  Of course this is a great way to prevent cheating, but it also means that if one machine goes offline or switches to a different task, the reward it would normally have earned quickly shrinks to zero.  A proxy tool like yours is therefore more effective for such a cluster, and much easier to manage and automate.

The way I understand the math, all of your recently-submitted shares are considered, no matter which worker they were submitted through -- so running several miners on one worker account or on several should make no difference to the reward you get when the block is solved.  Even if one machine drops out, since the shares are considered collectively, it doesn't matter which account they were submitted through; they will age just like every other share you submit.  (Correct me if I'm wrong, slush.)  In other words, the older shares you submitted from the still-active machines have aged and become worthless too; you just can't tell, because those machines are submitting enough new shares to keep the reward up.
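That "only share age matters, not which account" point can be checked with a toy calculation.  The weight function and decay constant below are my own assumptions for illustration, not slush's actual formula; the point is that any per-share recency weight sums the same way no matter how the shares are split across accounts:

```python
import math

# Toy model of score-based pooling: each share's weight decays with
# age, regardless of which worker account submitted it.  The exact
# weight function and decay constant are assumptions for illustration.
def share_weight(age_seconds, decay=300.0):
    return math.exp(-age_seconds / decay)

def reward_fraction(my_share_ages, all_share_ages):
    """Fraction of the block reward earned by a set of shares."""
    mine = sum(share_weight(a) for a in my_share_ages)
    total = sum(share_weight(a) for a in all_share_ages)
    return mine / total

# The same shares split across two worker accounts...
account_a = [10, 600, 1200]
account_b = [30, 900]
# ...versus all of them submitted through one account:
one_account = account_a + account_b

# Everything submitted to the pool this round (ours plus other miners'):
pool_shares = one_account + [5, 50, 400, 2000]

split = (reward_fraction(account_a, pool_shares)
         + reward_fraction(account_b, pool_shares))
merged = reward_fraction(one_account, pool_shares)
assert abs(split - merged) < 1e-12  # identical reward either way
```

Because the score is a plain sum over shares, splitting the same shares across accounts can never change the total.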

Now, having said that, my proxy doesn't "dynamically associate a worker account with each machine."  You still need to set up worker accounts in my proxy script.  The difference is that you can assign those worker accounts to more than one pool.  (Although you could probably hack it to do what you want. ;) )
newbie
Activity: 5
Merit: 0
April 19, 2011, 03:27:53 PM
#10
If you are using e.g. slush's pool, you should still have a separate account for each worker.  My proxy allows multiple miners to authenticate to it with separate credentials, and the proxy will then authenticate to pools using credentials stored for that worker.  In other words, each worker-pool assignment has its own pool credentials.

Generally this is how I do it.  However, take slush's pool and a GPU cluster that isn't dedicated to generating Bitcoins and does so only when idle: the number of GPUs and machines available constantly changes.  While there is a maximum number of machines, I'd still need software to dynamically associate a worker account with each machine.  This is because (here's where slush's pool comes in) slush's reward-calculation formula includes the time at which the last share was submitted.  Of course this is a great way to prevent cheating, but it also means that if one machine goes offline or switches to a different task, the reward it would normally have earned quickly shrinks to zero.  A proxy tool like yours is therefore more effective for such a cluster, and much easier to manage and automate.


It's "as soon as I clean up the Git repo."  :)  I hope to get to that this week.

Great!  I'll see about dropping you some coins when it's out.  :)
full member
Activity: 182
Merit: 107
April 19, 2011, 12:09:16 PM
#9
This project appears to be very interesting and is in fact exactly what I've been looking for to connect all the machines I have here to one single worker account.

If you are using e.g. slush's pool, you should still have a separate account for each worker.  My proxy allows multiple miners to authenticate to it with separate credentials, and the proxy will then authenticate to pools using credentials stored for that worker.  In other words, each worker-pool assignment has its own pool credentials.
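That per-assignment credential idea can be sketched as a simple lookup.  All the names below are invented for illustration; the real proxy stores this mapping in MySQL, not in code:

```python
# Each (worker, pool) assignment carries its own upstream credentials.
# Every identifier here is a hypothetical example.
pool_credentials = {
    ("worker-a", "slush"):     ("slushuser.a", "pass-a"),
    ("worker-b", "slush"):     ("slushuser.b", "pass-b"),
    ("worker-a", "otherpool"): ("someuser",    "pass-c"),
}

def upstream_login(worker, pool):
    """Look up the pool-side credentials for one worker-pool assignment."""
    return pool_credentials[(worker, pool)]

# Two workers mining the same pool authenticate upstream as two
# different pool accounts:
assert upstream_login("worker-a", "slush") != upstream_login("worker-b", "slush")
```

The miner itself only ever knows its proxy credentials; the pool-side login is substituted by the proxy per assignment.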

Is there any release date set already?

It's "as soon as I clean up the Git repo."  :)  I hope to get to that this week.
newbie
Activity: 5
Merit: 0
April 18, 2011, 10:35:58 AM
#8
This project appears to be very interesting and is in fact exactly what I've been looking for to connect all the machines I have here to one single worker account.
Is there any release date set already?
full member
Activity: 182
Merit: 107
April 12, 2011, 01:42:43 PM
#7
Long-polling proxying is now implemented.  The only remaining feature on my list is connection pooling, to take advantage of HTTP/1.1 keep-alive connections, but I'm not sure how feasible this is in PHP without an external connection-pooling daemon.  I might make a release before this feature is implemented.
full member
Activity: 182
Merit: 107
April 06, 2011, 06:49:13 PM
#6
You did the first metapool.  It was just a matter of time, but still: congrats!  :D
Thanks.  Smiley

Edit: Does this solve long polling somehow?
The current revision does not, but this is going to be implemented pretty soon.  Obviously it will only work on pools that support LP themselves; it will just proxy the LP request.

IMO it makes much more sense to add multi-pool support to each mining client.
I agree.  But I don't have the desire to hack on every client out there to implement something like this.  Further, this approach gives me other benefits, like the ability to manage miners remotely, retarget their pool assignments without restarting them, etc.

If someone implements a consistent multi-pool interface in all the mining clients, I would probably deprecate this project.  But in the meantime it fills the gap and also gives you some other nifty features that client-side multi-pool support alone wouldn't.

That way it doesn't break long-polling, and you can more easily utilize pool-specific features as they appear (such as using BDP).
LP is only "broken" in that I have not yet implemented it.  There's no technical reason it can't be done, I've just been focused on other aspects of the proxy.

A similar BDP proxy could be implemented alongside the existing HTTP-based proxy, with minimal database schema changes.  Once that is done, you could theoretically run HTTP-only miners against a BDP pool efficiently, by having the HTTP proxy get work from the local BDP hub process.

A meta-pool is an additional point of failure.
To a degree, yes.  I run my proxy on my LAN, so network failure is extremely unlikely.  A software bug is the only thing I can think of that would cause a problem (other than the whole box going down, at which point I lose DNS too), and with (... checking DB ...) 82,000 getwork requests processed in the last 2.5 days, my confidence in the proxy code is very high at this point.
legendary
Activity: 1596
Merit: 1100
April 06, 2011, 05:55:33 PM
#5
IMO it makes much more sense to add multi-pool support to each mining client.

That way it doesn't break long-polling, and you can more easily utilize pool-specific features as they appear (such as using BDP).

A meta-pool is an additional point of failure.
legendary
Activity: 1386
Merit: 1097
April 06, 2011, 05:53:18 PM
#4
You did the first metapool.  It was just a matter of time, but still: congrats!  :D

Edit: Does this solve long polling somehow?
full member
Activity: 182
Merit: 107
April 06, 2011, 05:40:59 PM
#3
I really like the idea of being able to do failovers, but the best part is that it is a great starting place for doing a really sophisticated implementation of Raulo's Pool Hopping Exploit!  :'(
Just to be clear, I take no liability for any forks of the project.  ;)
sr. member
Activity: 406
Merit: 250
April 06, 2011, 05:32:39 PM
#2
I really like the idea of being able to do failovers, but the best part is that it is a great starting place for doing a really sophisticated implementation of Raulo's Pool Hopping Exploit!  :'(
full member
Activity: 182
Merit: 107
April 06, 2011, 05:26:05 PM
#1
Edit 2011-05-04: The software has been released! (View post)

----------

Hey all, I'm trying to gauge interest for this.

I've been hacking for about a week and a half on a mining proxy written in PHP, using MySQL for the data store.  I've been running my own miners against it with no problems for the last week.  The basic idea is that you can run multiple miners against multiple pools (the same or different ones, it doesn't matter), and miners can fail over to other pools if something happens to their preferred pool.

Additionally, using the web interface, you can manage pool assignments from any physical location; your miners won't notice that anything has changed when you switch them between pools.  The information on the dashboard can also be used to help determine when a miner goes AWOL.

Here's the more detailed list of how it all works:

  • Multiple pools can be defined.  Pools can be globally enabled/disabled for all workers.
  • Multiple workers can be defined, each with their own credentials to be used against the proxy itself.
  • Each worker can associate with as many pools as you have defined, and can have its own credentials to be used with that pool.  (In other words, you can have worker A and worker B both working slush's pool, but each using their own worker accounts.)
  • Worker-pool associations can be individually enabled/disabled without affecting other workers, or other pools associated with the worker.
  • Worker-pool associations can be ranked by priority.  The highest-priority association will be used -- unless that pool is down or not responding, in which case the next-highest will be tried.
  • All getwork requests are partially logged in the database, and all work submissions are logged as well.  This includes which worker sent the request, which pool ultimately handled the request, and (in the case of work submissions) whether the share was accepted or not.
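The priority/failover behavior above can be sketched roughly like this.  This is a Python sketch with invented names, not the actual PHP code, and it assumes a lower number means higher priority:

```python
class PoolDown(Exception):
    """Raised when a pool is down or not responding."""

def fetch_work(associations, getwork):
    """Try each enabled worker-pool association in priority order.

    associations: list of (priority, pool, credentials) tuples; a lower
    number is assumed to mean higher priority.  getwork(pool,
    credentials) returns a work unit or raises PoolDown.
    """
    for _priority, pool, creds in sorted(associations, key=lambda a: a[0]):
        try:
            return pool, getwork(pool, creds)
        except PoolDown:
            continue  # that pool failed; fall back to the next one
    raise PoolDown("no assigned pool responded")
```

A miner talking to the proxy never sees which pool actually served its request; that only shows up in the request log.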

All this is manageable from a web-based control panel.  Right now the project is not terribly polished -- not well enough for a release anyway -- but the core seems to be working great.  If there is any interest in such a project, I will probably release it under the AGPL.

I'm interested in the views and perspectives of my fellow miners as to whether this project would have any value to the wider community.

Mandatory screenshot:
