I had thrown the 'production' switch, which starts the work queue at 30 entries, but I was only testing with about 300 MHash/sec. Occasionally (presumably after a new round began on the current pool or something), I would get dozens of invalid shares in a row from that pool. Decreasing the work queue size seemed to mitigate the issue somewhat.
I occasionally saw work queue 'purge' messages going by in the log, but never noticed a non-zero number of entries being purged during these rejection streaks.
If you feel like explaining it, how is the work queue purging determined? I know that Multipool-the-server implements Long Polling. What about Multipool-the-client?
For anyone running Multipool locally/with a fairly small hash rate, I might suggest decreasing the 'production' work queue size which defaults to 30 queue entries (around line 43) to avoid similar issues. OTOH, maybe this is a non-issue and I was just having problems with Mt. Red.
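A rough back-of-the-envelope calculation (not from the Multipool source) shows why a big queue hurts at a small hash rate: at difficulty 1, a share is found on average once per 2^32 hashes, so a small miner drains a 30-entry queue very slowly and most queued work goes stale before it is handed out.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustrative arithmetic only; none of these names come from Multipool.
my $hashrate       = 300e6;                # 300 MHash/sec, as in the post
my $shares_per_sec = $hashrate / 2**32;    # expected difficulty-1 solutions per second
my $needed_for_15s = $shares_per_sec * 15; # entries actually consumed in 15 seconds

printf "shares/sec: %.3f, 15-second demand: %.1f entries\n",
    $shares_per_sec, $needed_for_15s;
```

At 300 MHash/sec that works out to roughly 0.07 shares/sec, i.e. about one queue entry every 15 seconds, so a fixed queue of 30 entries would hold several minutes of work, much of it stale by the time the miner gets it.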
Anyone running Multipool locally will be satisfied leaving the $production switch set to false. Setting it to true does precisely things like increasing the work queue size and rewriting all the addresses from 127.0.0.1 to external ones.
The work queue size expands or shrinks slowly to meet demand - it stores about 15 seconds' worth of shares. When work is received from a pool with a different prev_block hash than before (substr($work->{data}, 16, …)), all the work in the queue with the previous hash (including work from all other pools) is purged. This is also about when the longpoll signal is sent.

The exact timing is a bit complicated: while the pools typically, but not always, all work on the same block, the switch from one block to the next can happen up to 20 seconds apart between pools. So while you might have received new work (and purged all old shares) from one pool, you might still receive work with the old block hash from another pool for up to 20 seconds. Most of those shares will be rejected as stale once they are actually solved, so there is still some room for improvement in this area. Also, you don't want to send a longpoll signal ten times in 20 seconds, so you need a way to prevent double longpolls. For now, Multipool only sends longpoll when its own bitcoind reports a block hash change, but this could also be based on a cooldown timer.
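The purge-on-new-block logic described above could be sketched roughly like this (a minimal illustration, not the actual Multipool code; the names @work_queue and $last_prev are made up, the substr offset follows the post, and the length 64 is an assumption):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @work_queue;       # hypothetical queue of getwork hashrefs
my $last_prev = '';   # prev_block hash of the block we are currently on

# Extract the prev_block field from the getwork data; offset 16 per the
# post above, length 64 is an assumption for this sketch.
sub prev_block { substr($_[0]->{data}, 16, 64) }

sub enqueue_work {
    my ($work) = @_;
    my $prev = prev_block($work);
    if ($prev ne $last_prev) {
        # New block seen: drop every queued entry still referencing the old
        # prev_block hash, regardless of which pool it came from.
        my $before = scalar @work_queue;
        @work_queue = grep { prev_block($_) eq $prev } @work_queue;
        printf "purged %d stale entries\n", $before - scalar @work_queue;
        $last_prev = $prev;
        # ...this is also the point where a longpoll signal would be sent...
    }
    push @work_queue, $work;
}
```

The grep keeps only entries matching the new hash, which in practice means the whole old queue is discarded at once, matching the "dozens of invalid shares in a row" symptom when the purge happens too late.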
I'm trying to set up Multipool on a Win7 machine (yes, I'm a masochist), and things seem to be more or less working. I can see Multipool connecting to the regular pools and getting shares. I can see my miner connect to it, but for some reason the miner never receives any shares from Multipool. The miner sits connected but idle, with no errors on either side. I've played around with ports and I'm sure it's connecting properly (firewall deactivated and such), but I just can't get the miner to get any shares.
I'm a coding 'hobbyist', but I've learned Perl in the last 48 hours for this project. I understand about half of what the code does, but I can't find the section that assigns shares to the workers. It could be a porting issue (I had to change several lines to make it work with wget for Windows), could be a config error (IP addresses or such), could be who the heck knows what?
I don't know what other kind of information anyone would need to help me, but I'm grasping at straws at this point.
Amazing that it actually runs! I had imagined the biggest problem with the porting would have been the sockets. Perl on Windows possibly doesn't implement all socket features, like maybe nonblocking mode, or the can_read function of IO::Select. Look in the rpc_server_alt function that handles all client connections. There is a big while loop which iterates over all connected clients, reads any request data available up to that point, and then sends response data when sufficient request data (such as basic authentication) has been obtained. Trace the execution of this loop to see what runs normally and at what exact point the execution gets stuck.
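A skeleton of the kind of select loop being described, with trace prints at each step, might look like this (illustrative only, not the actual rpc_server_alt code; the port number and variable names are made up). On Windows, can_read() and nonblocking sysread are the likely failure points, so comparing the trace against a working run should show where execution stalls:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;
use IO::Select;

# Hypothetical listener; port 8342 is an arbitrary choice for this sketch.
my $listener = IO::Socket::INET->new(
    LocalPort => 8342, Listen => 5, Reuse => 1
) or die "listen: $!";
my $sel = IO::Select->new($listener);

while (1) {
    my @ready = $sel->can_read(1);          # does this ever return handles on Win32?
    print "can_read returned ", scalar @ready, " handle(s)\n" if @ready;
    for my $fh (@ready) {
        if ($fh == $listener) {
            my $client = $listener->accept or next;
            $sel->add($client);
            print "accepted client\n";
        } else {
            my $n = sysread($fh, my $buf, 4096);
            if (!defined $n or $n == 0) {   # read error or EOF: drop the client
                $sel->remove($fh);
                close $fh;
                print "client closed\n";
                next;
            }
            print "read $n bytes\n";        # is request data arriving at all?
            # ...accumulate per-client request data here; once the headers
            # (including basic authentication) are complete, send the
            # getwork response back on the same handle...
        }
    }
}
```

If "accepted client" prints but "read $n bytes" never does, the problem is on the read side (can_read or sysread semantics on Windows); if reads happen but the miner stays idle, the stall is in the response path after authentication.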