
Topic: Multipool - the pool mining pool (with source code) - page 8. (Read 48269 times)

newbie
Activity: 19
Merit: 0
Quote
Amazing that it actually runs! I had imagined the biggest problem with the porting would have been the sockets. Perl on Windows possibly doesn't implement all socket features, like maybe nonblocking mode, or the can_read function of IO::Select. Look in the rpc_server_alt function that handles all client connections. There is a big while loop which iterates over all connected clients, reads any request data available up to that point, and then sends response data when sufficient request data (such as basic authentication) has been obtained. Trace the execution of this loop to see what runs normally and at what exact point the execution gets stuck.

So I've gone through the rpc_server_alt function and it doesn't appear to be hanging anywhere; the loop runs normally whether the worker is connected or not.  But what I did find odd is that it moves into code that appears to process the getwork request *as the worker disconnects*.  Here's a short section of the code I'm referring to.  (I can't give line numbers as the unix2dos converter added a ton of whitespace; in theory it's after line 1400.)

Code:
} else {
    if ($vars->{longpoll}){
        $vars->{rec}=2;
        last;
    }
    $user->{getworks}++;
    my $user_ip=$user->{host}; $user_ip=~s/:.*//;
    my $limiter=$user_limiter{$user_ip};
    my $lawful=user_lawful($limiter);
    my $pair;
    if ($lawful){ # only give good shares to lawful users    <- indicative comment
        $pair = $work_queue->dequeue_nb;                     # <- confirmed this is executed only when the worker is disconnecting
    }
    if (!$pair){
        $pair=$solo_queue->dequeue;
    }
    $limiter->{got_works}++ if $limiter;
    my ($work, $pool_name) = @{$pair};
    {
        lock $work_dequeues;
        $work_dequeues++;
    }
    logd("rpc getwork from $vars->{host} ($pool_name ".substr($work->{data}, 16, 8)." ".substr($work->{data}, 72, 8).") queue: ".$work_queue->pending."/".$WORK_QUEUE_SIZE."w ".$solo_queue->pending."so ".$submit_queue->pending."sh\n");

The rpc getwork message does appear, but only when the worker disconnects.

Again, I'm not strong with Perl, so maybe this code is only supposed to run during disconnection, but the comment and the rest of the code imply it's removing a getwork from the internal queue and giving it to the worker.  Would that be accurate?

hero member
Activity: 607
Merit: 500
A work queue size of 103? Do you see like ten requests per second? That's not supposed to happen - usually it's one request every five seconds per miner. I've seen this before though - what miner version are you using? Does rebooting the server help?
And yes, I will work on the project.

Yes, that's exactly what I'm seeing. And rebooting has the exact opposite effect - over time the request rate decreases, and restarting the server brings it back up.

EDIT: Now I restarted the server and the log suggests that I have a queue size of 5, yet I still see something like 10 requests per second on the screen.

Oh, and I use Multiminer. Maybe that's the culprit.

One more thing: I noticed that there is no reward function for bitcoins-lc, even though it is supported in the pools.conf. Does that mean that the script won't be able to distribute payments from bitcoins-lc to my workers?
There isn't a function for bitcoins.lc yet - they don't show per-round earnings and it will require some magic to sort the rounds out.

I guess there is now, at https://www.bitcoins.lc/transactions



And yes, I will work on the project.

That's great news! :) A big thank you from me.
newbie
Activity: 24
Merit: 0
Hm... I'm still left with some questions. One thing is - is it normal that I see *many* rpc calls per second? I mean, calls like this:

Code:
got work 6c1ea670 b5d42a84 from deepbit (t=0.070/0.082 s/g=0/14 old=32/2209 p=0.00)
got work 6c1ea670 259bd031 from mtred (t=0.118/1.508 s/g=0/12 old=3/44 p=0.00)
got work 6c1ea670 dde84185 from btcmine (t=0.154/0.157 s/g=0/8 old=30/3192 p=0.00)
rpc connection opened from 83.26.173.182:52750
rpc authorization from 83.26.173.182:52747: user=--- pass=x
rpc getwork from 83.26.173.182:52747 (deepbit 6c1ea670 dddc356d) queue: 15/103w 0so 0sh
got work 6c1ea670 3128bc4f from deepbit (t=0.082/0.082 s/g=0/15 old=32/2209 p=0.00)
rpc connection opened from 83.26.173.182:33185
rpc authorization from 83.26.173.182:52750: user=--- pass=x
rpc getwork from 83.26.173.182:52750 (btcmine 6c1ea670 c25c0819) queue: 15/103w 0so 0sh
got work 6c1ea670 6eda19aa from mtred (t=0.106/1.368 s/g=0/13 old=3/44 p=0.00)

I only use two workers, so I assumed it would not generate much traffic - and yet it does :)

My other question is - will you still be working on this project? I've already seen an error in the log while parsing rewards, probably for BTCMine. I've looked at the rewards functions, but they are somewhat complicated, and I doubt I'll manage to fix that myself without breaking something :)

A work queue size of 103? Do you see like ten requests per second? That's not supposed to happen - usually it's one request every five seconds per miner. I've seen this before though - what miner version are you using? Does rebooting the server help?
And yes, I will work on the project.

One more thing: I noticed that there is no reward function for bitcoins-lc, even though it is supported in the pools.conf. Does that mean that the script won't be able to distribute payments from bitcoins-lc to my workers?
There isn't a function for bitcoins.lc yet - they don't show per-round earnings and it will require some magic to sort the rounds out.
hero member
Activity: 607
Merit: 500
One more thing: I noticed that there is no reward function for bitcoins-lc, even though it is supported in the pools.conf. Does that mean that the script won't be able to distribute payments from bitcoins-lc to my workers?
donator
Activity: 2058
Merit: 1007
Poor impulse control.
And eligius-eu is no more. The new URL is in the eligius thread in Pools.
hero member
Activity: 607
Merit: 500
BTC Guild is offline sometimes, depending on which server you use (they have several). Eligius also likes being down from time to time. Don't know about BTCMine.
member
Activity: 109
Merit: 10
Another thing I notice is a lot of these:

Code:
connection timeout to eligius-eu (p=0.49)
connection timeout to btcmine (p=30.00)
connection timeout to btcguild (p=30.00)


Seems a bit strange
hero member
Activity: 607
Merit: 500
Hm... I'm still left with some questions. One thing is - is it normal that I see *many* rpc calls per second? I mean, calls like this:

Code:
got work 6c1ea670 b5d42a84 from deepbit (t=0.070/0.082 s/g=0/14 old=32/2209 p=0.00)
got work 6c1ea670 259bd031 from mtred (t=0.118/1.508 s/g=0/12 old=3/44 p=0.00)
got work 6c1ea670 dde84185 from btcmine (t=0.154/0.157 s/g=0/8 old=30/3192 p=0.00)
rpc connection opened from 83.26.173.182:52750
rpc authorization from 83.26.173.182:52747: user=--- pass=x
rpc getwork from 83.26.173.182:52747 (deepbit 6c1ea670 dddc356d) queue: 15/103w 0so 0sh
got work 6c1ea670 3128bc4f from deepbit (t=0.082/0.082 s/g=0/15 old=32/2209 p=0.00)
rpc connection opened from 83.26.173.182:33185
rpc authorization from 83.26.173.182:52750: user=--- pass=x
rpc getwork from 83.26.173.182:52750 (btcmine 6c1ea670 c25c0819) queue: 15/103w 0so 0sh
got work 6c1ea670 6eda19aa from mtred (t=0.106/1.368 s/g=0/13 old=3/44 p=0.00)

I only use two workers, so I assumed it would not generate much traffic - and yet it does :)

My other question is - will you still be working on this project? I've already seen an error in the log while parsing rewards, probably for BTCMine. I've looked at the rewards functions, but they are somewhat complicated, and I doubt I'll manage to fix that myself without breaking something :)
newbie
Activity: 24
Merit: 0
I had thrown the 'production' switch, which caused the work queue entries to start out at 30, but I was only testing with about 300 MHash/sec.  Occasionally (presumably after a new round began on the current pool or something), I would get dozens of invalid shares in a row from that pool.  Decreasing the work queue size seemed to mitigate the issue somewhat.

I occasionally saw work queue 'purge' messages going by in the log, but never noticed a non-zero number of entries being purged during these rejection streaks.

If you feel like explaining it, how is the work queue purging determined?  I know that Multipool-the-server implements Long Polling.  What about Multipool-the-client? 

For anyone running Multipool locally or with a fairly small hash rate, I might suggest decreasing the 'production' work queue size, which defaults to 30 queue entries (around line 43), to avoid similar issues.  OTOH, maybe this is a non-issue and I was just having problems with Mt. Red.
Anyone running Multipool locally will be fine leaving the $production switch set to false. Setting it to true does precisely such things as increasing the work queue size and rewriting all the addresses to external ones rather than 127.0.0.1.

The work queue size expands or shrinks slowly to meet demand - it stores about 15 seconds' worth of shares. When work is received from a pool with a different prev_block hash than before (substr($work->{data}, 16, 8)), all the work in the queue with the previous hash (including work from all other pools) is purged. This is also about the time the longpoll is sent. The exact timing is a bit complicated: while the pools typically, but not always, all work on the same block, the switch from one block to the next can happen up to 20 seconds apart between pools. So while you might have received a new share (and purged all old shares) from one pool, you might still receive shares with the old block hash from another pool for up to 20 seconds. Most of these shares will be rejected as stale once they are actually solved, so there is still some room for improvement in this area. Also, you don't want to send a longpoll signal ten times in 20 seconds, so you need a way to prevent double longpolls. For now, Multipool only sends longpoll when its own bitcoind reports a block hash change, but this could also be based on a cooldown timer.
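
Roughly, the purge step amounts to something like this (a minimal sketch with made-up helper names, not the actual source - the real queue handling is more involved):

Code:
use threads;
use threads::shared;
use Thread::Queue;

# Sketch: purge queued work built on the old block when the prev_block
# slice of freshly received work changes. Names here are illustrative.
my $last_prev_block :shared = '';

sub purge_stale_work {
    my ($new_work, $queue) = @_;
    my $prev_block = substr($new_work->{data}, 16, 8);
    lock $last_prev_block;
    return if $prev_block eq $last_prev_block;
    $last_prev_block = $prev_block;

    # Drain the queue and keep only entries built on the new block;
    # stale work from all pools is discarded.
    my @keep;
    while (defined(my $pair = $queue->dequeue_nb)) {
        my ($work, $pool_name) = @{$pair};
        push @keep, $pair if substr($work->{data}, 16, 8) eq $prev_block;
    }
    $queue->enqueue($_) for @keep;

    # Longpoll should fire at most once per block change; a cooldown
    # timer is one way to suppress doubles, e.g. (hypothetical names):
    # send_longpoll() if time() - $last_longpoll > $cooldown;
}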

I'm trying to set up Multipool on a Win7 machine (yes, I'm a masochist), and things seem to be more or less working.  I can see Multipool connecting to the regular pools and getting shares.  I can see my miner connect to it, but for some reason the miner never receives any shares from Multipool.  The miner sits connected but idle, with no errors on either side.  I've played around with ports and I'm sure it's connecting properly (firewall deactivated and such), but I just can't get the miner to get any shares.

I'm a coding 'hobbyist', but I've learned Perl in the last 48 hours for this project.  I understand about half of what the code does, but I can't find the section that assigns shares to the workers.  It could be a porting issue (I had to change several lines to make it work with wget for Windows), it could be a config error (IP addresses or such), it could be who the heck knows what??

I don't know what other kind of information anyone would need to help me, but I'm grasping at straws at this point.
Amazing that it actually runs! I had imagined the biggest problem with the porting would have been the sockets. Perl on Windows possibly doesn't implement all socket features, like maybe nonblocking mode, or the can_read function of IO::Select. Look in the rpc_server_alt function that handles all client connections. There is a big while loop which iterates over all connected clients, reads any request data available up to that point, and then sends response data when sufficient request data (such as basic authentication) has been obtained. Trace the execution of this loop to see what runs normally and at what exact point the execution gets stuck.
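
For orientation, the loop has roughly this shape (a minimal sketch with assumed names and port, not the actual rpc_server_alt code):

Code:
use IO::Socket::INET;
use IO::Select;

# Sketch of a select-based client loop like the one described above.
my $listener = IO::Socket::INET->new(LocalPort => 18337, Listen => 10,
                                     ReuseAddr => 1) or die "listen: $!";
my $select = IO::Select->new($listener);
my %client;    # per-socket state: request buffer, auth status, ...

while (1) {
    for my $sock ($select->can_read(0.1)) {
        if ($sock == $listener) {
            my $new = $listener->accept or next;
            $new->blocking(0);    # nonblocking mode - a suspect on Win32
            $select->add($new);
            $client{$new} = { buf => '' };
        } else {
            my $n = sysread($sock, my $chunk, 4096);
            if (!defined $n or $n == 0) {    # error or EOF: client left
                $select->remove($sock);
                delete $client{$sock};
                close $sock;
                next;
            }
            $client{$sock}{buf} .= $chunk;
            # ...once headers and basic auth are complete, reply...
        }
    }
}

Note that can_read also flags a socket whose peer has closed it - the next sysread then returns 0 - so disconnects show up as readable events in the same loop.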
newbie
Activity: 19
Merit: 0
Hey guys, I hope you can help with this one.

I'm trying to set up Multipool on a Win7 machine (yes, I'm a masochist), and things seem to be more or less working.  I can see Multipool connecting to the regular pools and getting shares.  I can see my miner connect to it, but for some reason the miner never receives any shares from Multipool.  The miner sits connected but idle, with no errors on either side.  I've played around with ports and I'm sure it's connecting properly (firewall deactivated and such), but I just can't get the miner to get any shares.

I'm a coding 'hobbyist', but I've learned Perl in the last 48 hours for this project.  I understand about half of what the code does, but I can't find the section that assigns shares to the workers.  It could be a porting issue (I had to change several lines to make it work with wget for Windows), it could be a config error (IP addresses or such), it could be who the heck knows what??

I don't know what other kind of information anyone would need to help me, but I'm grasping at straws at this point.
hero member
Activity: 658
Merit: 500
is multiclone up? I'm getting connection errors

Working for me and at least 5 other people right now.

Post your full miner string?
poclbm.exe --user=1HHroZyBvQQtsp6bSyQ9rpYgDpN2RoTrUM --pass=pass -o multiclone.us.to -p 18080 --device=1 --platform=0 --verbose -v -w128 -f30

mining port is 18337, not 18080 (that's the website port)
thanks, that fixed it
donator
Activity: 2058
Merit: 1007
Poor impulse control.
is multiclone up? I'm getting connection errors

Working for me and at least 5 other people right now.

Post your full miner string?
poclbm.exe --user=1HHroZyBvQQtsp6bSyQ9rpYgDpN2RoTrUM --pass=pass -o multiclone.us.to -p 18080 --device=1 --platform=0 --verbose -v -w128 -f30

mining port is 18337, not 18080 (that's the website port)
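
That is, the same string with just the port changed:

Code:
poclbm.exe --user=1HHroZyBvQQtsp6bSyQ9rpYgDpN2RoTrUM --pass=pass -o multiclone.us.to -p 18337 --device=1 --platform=0 --verbose -v -w128 -f30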
hero member
Activity: 658
Merit: 500
is multiclone up? I'm getting connection errors

Working for me and at least 5 other people right now.

Post your full miner string?
poclbm.exe --user=1HHroZyBvQQtsp6bSyQ9rpYgDpN2RoTrUM --pass=pass -o multiclone.us.to -p 18080 --device=1 --platform=0 --verbose -v -w128 -f30
member
Activity: 79
Merit: 14
is multiclone up? I'm getting connection errors

Working for me and at least 5 other people right now.

Post your full miner string?
hero member
Activity: 658
Merit: 500
is multiclone up? I'm getting connection errors
donator
Activity: 2058
Merit: 1007
Poor impulse control.
What license is the code under?

I've downloaded it and started rewriting some ugly bits, removing embedded defaults, using JSON for the config files, etc.

I would like to make my changes available, but I'd rather know which license they're supposed to be under first.

I'll put the changes on GitHub as soon as they're done and I know the license.

Nice :)

Thought about forking into a pool version and a standalone client? The standalone one wouldn't need as much config and might be easier for some folks (ahem) to set up.
hero member
Activity: 658
Merit: 500
Multipool, are you going to be bringing the pool at hpc.tw back up or will you defer to Multiclone? I need some help to make up for the lost shares.
mf
newbie
Activity: 24
Merit: 0
What license is the code under?

I've downloaded it and started rewriting some ugly bits, removing embedded defaults, using JSON for the config files, etc.

I would like to make my changes available, but I'd rather know which license they're supposed to be under first.

I'll put the changes on GitHub as soon as they're done and I know the license.
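
To give a taste of the direction: a JSON pools.conf could be loaded with something like this (a sketch - the file layout shown is made up):

Code:
use strict;
use warnings;
use JSON::PP;    # in core since Perl 5.14

# Sketch: read a config file as JSON instead of embedded defaults.
sub load_json_conf {
    my ($path) = @_;
    open my $fh, '<', $path or die "open $path: $!";
    my $json = do { local $/; <$fh> };    # slurp the whole file
    close $fh;
    return decode_json($json);
}

# hypothetical layout: { "pools": [ { "name": "...", "url": "..." } ] }
my $pools = load_json_conf('pools.conf');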
member
Activity: 79
Merit: 14
@Multipool:

One potential issue I noticed when I was testing locally before opening this up to everyone else...

I had thrown the 'production' switch, which caused the work queue entries to start out at 30, but I was only testing with about 300 MHash/sec.  Occasionally (presumably after a new round began on the current pool or something), I would get dozens of invalid shares in a row from that pool.  Decreasing the work queue size seemed to mitigate the issue somewhat.

I occasionally saw work queue 'purge' messages going by in the log, but never noticed a non-zero number of entries being purged during these rejection streaks.

I noticed it in particular from Mt. Red, to the extent that I ended up disabling them as a target pool (and I don't specifically remember if it occurred on other pools or not), so it may have been a pool-specific thing, but I thought I'd mention it.

If you feel like explaining it, how is the work queue purging determined?  I know that Multipool-the-server implements Long Polling.  What about Multipool-the-client? 


aside:
For anyone running Multipool locally or with a fairly small hash rate, I might suggest decreasing the 'production' work queue size, which defaults to 30 queue entries (around line 43), to avoid similar issues.  OTOH, maybe this is a non-issue and I was just having problems with Mt. Red.
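
For reference, I'd expect the knob to be a line of roughly this shape near the top of the script (the variable names are my guess from the log output, so check your copy):

Code:
# around line 43; 30 is the starting size with $production on,
# and the queue then adapts to demand from there
my $WORK_QUEUE_SIZE = $production ? 30 : 5;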
member
Activity: 79
Merit: 14
There is one correction I should make. On line 520, replace:
Code:
my $pool_name=$ranked_pools[$i]->{name};
with
Code:
my $pool_name=$pool->{name};
This fixes a threading race I was working on, which crashes the pool about once a day: the ranked pool list can be re-sorted by another thread mid-loop, so $ranked_pools[$i] may no longer be the same pool as the $pool the iteration started with.

When making this tweak, I noticed some nearby code specific to Slush that seems to handle multiple workers.  Did you find it necessary/beneficial to have multiple workers configured for each pool?  Just for Slush?  Is there any special syntax needed for specifying additional workers in the accounts.conf file, or do I just have multiple line-entries for Slush?

Multiclone is up to about 3-4 GHash/sec now, and seems to be handling things beautifully.  Total CPU load average on the server is under 7%.  Looks like there's room in the pool for everyone; jump on in! :)

Did a quick reboot of the pool to enable the tweak, expecting to see a disconnected/reconnecting message, but the process was so quick my miner didn't even seem to notice.  Looked like everyone who had been connected successfully reconnected after the restart.  Cool.