
Topic: [CLOSED] BTC Guild - Pays TxFees+NMC, Stratum, VarDiff, Private Servers - page 381. (Read 903150 times)

legendary
Activity: 1750
Merit: 1007
vphen: The proxy can ignore the request, but the server can ban the connection if it floods shares above the requested difficulty. It's not about the protocol, but more about the server implementation. As a Stratum developer, I'd like to ask why you would want to ignore such a command from the server?

When several rigs point to one proxy, will the server think it's one high-speed rig? If so, each rig may get work with a difficulty larger than 1. Won't that be bad for each rig?

Yes, the proxy will appear to the server as one high-speed rig. That doesn't actually matter, though. Whether you're running fifty 100 MH/s miners or one 5 GH/s miner, a higher difficulty will result in the same number of shares returned [minus a very small (< 0.1%) overhead in the mining software].

If you are the only user on your proxy, the only downside to a higher difficulty applied to all your rigs is that the speed estimates for individual workers will have higher variance. Your net speed estimate (all rigs combined) will not have much variance.
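To put rough numbers on it (an illustrative sketch with made-up hashrates, not anything from the pool's code): the expected share rate depends only on total hashrate and difficulty, so fifty 100 MH/s rigs behind one proxy return the same number of shares as a single 5 GH/s rig.
Code:
# Expected share rate depends only on total hashrate and difficulty,
# not on how that hashrate is split across rigs.  Illustrative numbers only.
def shares_per_hour(total_hashrate_hs, difficulty):
    hashes_per_share = difficulty * 2**32   # expected hashes per share at this difficulty
    return total_hashrate_hs / hashes_per_share * 3600

fifty_small = shares_per_hour(50 * 100e6, difficulty=4)   # fifty 100 MH/s rigs
one_big     = shares_per_hour(5e9,        difficulty=4)   # one 5 GH/s rig
print(fifty_small, one_big)                               # identical expected counts (~1048/hour)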
member
Activity: 76
Merit: 10
vphen: The proxy can ignore the request, but the server can ban the connection if it floods shares above the requested difficulty. It's not about the protocol, but more about the server implementation. As a Stratum developer, I'd like to ask why you would want to ignore such a command from the server?

When several rigs point to one proxy, will the server think it's one high-speed rig? If so, each rig may get work with a difficulty larger than 1. Won't that be bad for each rig?
full member
Activity: 373
Merit: 100
On the https://www.btcguild.com/stratum_beta.php page, I just noticed the difficulty of each new row increasing by 1 and the PPS rate column changing accordingly - did I miss something about how difficulty works?

Since the stratum beta isn't expected to last long, I've been keeping it simple.

All right then - just as long as you're aware of it... Smiley
legendary
Activity: 1750
Merit: 1007
On the https://www.btcguild.com/stratum_beta.php page, I just noticed the difficulty of each new row increasing by 1 and the PPS rate column changing accordingly - did I miss something about how difficulty works?

Since the stratum beta isn't expected to last long, I've been keeping it simple. The stats it shows for your workers are for the current difficulty. When I want to reset everybody's stats after a server reset, I decrease the difficulty of all previous submissions by 1. You end up with a few satoshis of extra income, and it saves me the trouble of adding a bunch of reset counters to each worker entry for the beta.
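For anyone wondering why the PPS rate column scales with the listed difficulty: under pay-per-share, a difficulty-D share is statistically worth D difficulty-1 shares. A minimal sketch (the reward and network difficulty below are placeholder numbers, not the pool's live figures, and BTC Guild's actual PPS also includes transaction fees):
Code:
def pps_value(share_difficulty, block_reward=50.0, network_difficulty=2.0e6):
    # Expected BTC value of one accepted share under plain PPS.
    # A difficulty-D share is worth D difficulty-1 shares, so the credit
    # scales linearly with D.  Reward and network difficulty are placeholders.
    return share_difficulty * block_reward / network_difficulty

print(pps_value(1))   # difficulty-1 share
print(pps_value(2))   # difficulty-2 share, exactly twice the credit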
full member
Activity: 373
Merit: 100
On the https://www.btcguild.com/stratum_beta.php page, I just noticed the difficulty of each new row increasing by 1 and the PPS rate column changing accordingly - did I miss something about how difficulty works?
legendary
Activity: 1750
Merit: 1007
Thank you, everybody, for helping test this new software. We've found the 2nd block on the Stratum pool so far. I'll be doing another restart in a few minutes (it's been almost 24 hours, woohoo!).

No major changes for miners, but it does plug a small memory leak I didn't catch previously.  It wasn't a huge problem, but it would have eventually become one.
legendary
Activity: 1386
Merit: 1097
vphen: The proxy can ignore the request, but the server can ban the connection if it floods shares above the requested difficulty. It's not about the protocol, but more about the server implementation. As a Stratum developer, I'd like to ask why you would want to ignore such a command from the server?
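For reference, the difficulty change is delivered as a plain mining.set_difficulty notification; there is no defined way for the proxy to reply "no". It either honours the new target or keeps submitting low-difficulty shares that the server is free to reject or ban. A minimal sketch of the two messages involved (all values below are examples, not real traffic):
Code:
import json

# Server -> proxy: notification asking for a new share difficulty (example value).
set_difficulty = {"id": None, "method": "mining.set_difficulty", "params": [2]}

# Proxy -> server: share submission.  Params are worker name, job id,
# extranonce2, ntime and nonce -- all example values here.
submit = {"id": 4, "method": "mining.submit",
          "params": ["worker1", "job4711", "00000000", "504e86ed", "b2957c02"]}

print(json.dumps(set_difficulty))
print(json.dumps(submit))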
member
Activity: 76
Merit: 10
I'm looking at mining_proxy.py and saw set_difficulty(1) by default. When will the server ask the proxy to set a new difficulty of 2 or more?

During the initial testing, I'm only using difficulty=1.  I will be enabling the variable difficulty in a few days, which will adjust difficulty to a target rate of 1 share per 5 seconds.

Does the Stratum protocol allow the proxy to refuse a difficulty change?
legendary
Activity: 1792
Merit: 1047
I have also moved a few Gh's for the effort.
full member
Activity: 121
Merit: 100
I'm adding 13 GH/s now, running from btcminer via the stratum-proxy. Let's see how it works out.
legendary
Activity: 1750
Merit: 1007
I'm looking at mining_proxy.py and saw set_difficulty(1) by default. When will the server ask the proxy to set a new difficulty of 2 or more?

During the initial testing, I'm only using difficulty=1.  I will be enabling the variable difficulty in a few days, which will adjust difficulty to a target rate of 1 share per 5 seconds.
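For the curious, a retarget toward 1 share per 5 seconds can be as simple as scaling the current difficulty by the observed share rate. This is only a sketch of the idea, not the actual vardiff code that will go live:
Code:
def retarget(current_difficulty, shares_submitted, window_seconds, target_interval=5.0):
    # Scale difficulty so the worker averages one share per target_interval seconds.
    # Illustrative only -- not the pool's actual algorithm.
    if shares_submitted == 0:
        return max(1, current_difficulty // 2)          # no shares seen: back off
    observed_interval = window_seconds / shares_submitted
    new_difficulty = current_difficulty * target_interval / observed_interval
    # Clamp each retarget to a 4x change so a lucky burst doesn't swing it wildly.
    new_difficulty = min(max(new_difficulty, current_difficulty / 4), current_difficulty * 4)
    return max(1, int(round(new_difficulty)))

# Example: 60 shares in a 60-second window at difficulty 1 -> raw target 5, clamped to 4.
print(retarget(1, shares_submitted=60, window_seconds=60))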
member
Activity: 76
Merit: 10
Sometimes, when the network is not good, using just one TCP socket may make it hard for a 40 GH/s rig to submit its shares in time.
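Back-of-the-envelope (my own arithmetic, not a pool measurement): at difficulty 1 a 40 GH/s rig finds about 9 shares per second, which leaves only around 100 ms per submission if they go out one at a time over a single socket.
Code:
# Rough share-rate arithmetic for a 40 GH/s rig at difficulty 1 (illustrative only).
HASHRATE = 40e9            # 40 GH/s
HASHES_PER_SHARE = 2**32   # expected hashes per difficulty-1 share

shares_per_sec = HASHRATE / HASHES_PER_SHARE    # ~9.3 shares/s
budget_ms = 1000.0 / shares_per_sec             # ~107 ms per submission
print(f"{shares_per_sec:.1f} shares/s, ~{budget_ms:.0f} ms per submission")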
member
Activity: 76
Merit: 10
I'm looking at mining_proxy.py and saw set_difficulty(1) by default. When will the server ask the proxy to set a new difficulty of 2 or more?
legendary
Activity: 1750
Merit: 1007
Restarted the stratum pool again (keeping restarts at a minimum to avoid too much disruption of mining).  Three new updates, one affecting miners:

1) Removed a fringe case where duplicate work may be provided after a longpoll-type work push.
2) Added some extra memory tracking to try to identify a stray memory allocation that never gets removed.
3) Added the "Mined by BTC Guild" tag to the coinbase that has been present on the normal pool servers.


I will be doing another stat reset in 5 minutes. The last stats looked great; users who were reporting dupes before had 0 or 1 dupes over the last 5 hours.
sr. member
Activity: 406
Merit: 250
LTC
Tested with 40 GH/s (one 10 GH/s node and one 30 GH/s node); unfortunately, for some reason, the 30 GH/s node performed badly.
I did install the proxy, but it was not able to upload more than 5-6 results per second (20-25 GH/s), with a lot of shares spending so much time in the upload queue that they became stale once a new job arrived.
This does not happen on a direct connection to the pool; there the upload queue is almost always empty.


Example
Code:
1347652578.006 TS, Submitter2, extracted id=0, data=0
1347652578.168 TS, Submitter2, net-looped id=0, data=0
1347652578.213 TS, Submitter2, extracted id=1, data=0
1347652578.374 TS, Submitter2, net-looped id=1, data=0
1347652578.374 TS, Submitter2, extracted id=2, data=0
1347652578.535 TS, Submitter2, net-looped id=2, data=0
1347652578.535 TS, Submitter2, extracted id=3, data=0
1347652578.696 TS, Submitter2, net-looped id=3, data=0
1347652578.696 TS, Submitter2, extracted id=4, data=0
1347652578.857 TS, Submitter2, net-looped id=4, data=0
1347652578.857 TS, Submitter2, extracted id=5, data=0
1347652579.017 TS, Submitter2, net-looped id=5, data=0
1347652579.017 TS, Submitter2, extracted id=6, data=0
1347652579.177 TS, Submitter2, net-looped id=6, data=0
1347652579.177 TS, Submitter2, extracted id=7, data=0
1347652579.338 TS, Submitter2, net-looped id=7, data=0
1347652579.338 TS, Submitter2, extracted id=8, data=0
1347652579.499 TS, Submitter2, net-looped id=8, data=0
1347652579.499 TS, Submitter2, extracted id=9, data=0
1347652579.659 TS, Submitter2, net-looped id=9, data=0
1347652579.660 TS, Submitter2, extracted id=10, data=0
1347652579.820 TS, Submitter2, net-looped id=10, data=0

"extracted" is logged just before httplib conn.request() and "net-looped" just after httplib conn.getresponse().read(); id is the queue id of the work result being submitted.
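Doing the arithmetic on the timestamps above (my own estimate, based only on this log): each extracted -> net-looped round trip is about 160 ms and the submissions run strictly one after another, so a single synchronous connection tops out near 6 shares per second, which at difficulty 1 is roughly the 25 GH/s ceiling I'm seeing.
Code:
# Rough throughput estimate from the log above (illustrative arithmetic only).
round_trip_s = 0.160                            # extracted -> net-looped, per submission
max_shares_per_sec = 1 / round_trip_s           # ~6.25 with strictly serial submits
hashrate_ceiling = max_shares_per_sec * 2**32   # difficulty-1 shares -> ~26.8 GH/s
print(f"{max_shares_per_sec:.1f} shares/s, ~{hashrate_ceiling / 1e9:.1f} GH/s at difficulty 1")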
legendary
Activity: 1750
Merit: 1007
Doing one more restart to the pool server in about 5 minutes.  Estimated restart time is about 2 seconds.  You'll receive a batch of unknown work rejects immediately after the restart, then it should stabilize.

I'll also be doing a stats reset when this happens, so we can get a new look at acceptance rates.


UPDATE: Restart has happened, and the stats have been reset. Since the beta pool doesn't have a 'Reset Stats' button, all previously submitted shares were adjusted to be 1 less than the actual difficulty, and the stats filtered out that difficulty. This means your earnings are probably a few satoshis higher as a result.
legendary
Activity: 1386
Merit: 1097
Does the same hardware produce dupes on the old mining API, or not? Can you check? Hopefully yes; then it would confirm the bug is in cgminer and not on the proxy/pool side.

The work format produced by the proxy should be exactly the same as from a standard pool, so if we assume cgminer is not broken (and it probably isn't, *if* it doesn't produce stales on the old pool), then the same work payload must be coming into the miner.
legendary
Activity: 1750
Merit: 1007
I have been testing the cgminer+proxy+my server stack heavily and didn't get a single duplicated share. So far there might be two reasons for it: cgminer doesn't receive a response fast enough, so it decides to resubmit the share. Another possibility is that the extranonce1 given by the server is not unique for some reason (threading issue?). There might also be a bug in the proxy so that extranonce2 is not generated uniquely, but I doubt it, since as far as I know about 500k shares have been generated by cgminers on my pool without a single dup.

Hope this helps in debugging.

cgminer is receiving the response; we can see it quite clearly getting the share accepted, sometimes 20+ seconds before it resubmits the duplicate share.

The extranonce1 is absolutely unique, and duplicate shares are only checked against the submissions made over that connection (since sending it over a different connection would use a different extranonce1, and thus be a completely different hash to validate). I've also made sure that there is never a new work push with the same coinbase as one previously sent (i.e., if there were no new transactions on the network since the last work push) by including the job_id inside the coinbase TX.
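To illustrate the coinbase point (a toy sketch, not the pool's real serialization): the coinbase feeds the merkle root, so embedding the job_id means two work pushes can never describe identical work even when the transaction set hasn't changed.
Code:
import hashlib

def toy_coinbase_hash(extranonce1, extranonce2, job_id):
    # Toy coinbase payload; the real transaction serialization is more involved.
    payload = extranonce1 + extranonce2 + job_id.encode() + b"Mined by BTC Guild"
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()

a = toy_coinbase_hash(b"\x01\x02\x03\x04", b"\x00\x00\x00\x00", "job_1001")
b = toy_coinbase_hash(b"\x01\x02\x03\x04", b"\x00\x00\x00\x00", "job_1002")
print(a != b)   # True: different job_id -> different coinbase -> different merkle root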


Hopefully somebody can submit a log of their proxy -and- cgminer at some point so we can clearly see the timing of:
1) When cgminer submitted it to the proxy
2) When the proxy received the response
3) When cgminer received the response from the proxy
4) When the proxy received the duplicate submission from cgminer
legendary
Activity: 1386
Merit: 1097
I have been testing the cgminer+proxy+my server stack heavily and didn't get a single duplicated share. So far there might be two reasons for it: cgminer doesn't receive a response fast enough, so it decides to resubmit the share. Another possibility is that the extranonce1 given by the server is not unique for some reason (threading issue?). There might also be a bug in the proxy so that extranonce2 is not generated uniquely, but I doubt it, since as far as I know about 500k shares have been generated by cgminers on my pool without a single dup.

Hope this helps in debugging.
sr. member
Activity: 285
Merit: 250
Works great once I remembered to close bitcoin-qt  Embarrassed
2 BFL singles running with 1300% E
1 HD5870 390% E
2 HD6870 590% E
11 stales and 53 Dupes across the 5 devices using CGMiner 2.7.5
I stopped and started the proxy once by mistake in the roughly 10-12 hrs it's been running.

The proxy server PC is an old Core2Duo E7400 with 2 GB RAM running Vista 32-bit, and it is also mining with 2 BFLs and a 5870.
GbE LAN, Comcast cable ISP, with a DD-WRT-flashed WRT310n router.

Edit: Still running without issue after another 12 hrs.