
Topic: Bandwidth of a Pool (Read 4257 times)

legendary
Activity: 1512
Merit: 1036
October 19, 2011, 03:02:21 AM
#23
bitcoins.lc has 2x100 Mbit links, and has still been DDoSed by a single botnet actor until they got IP-banned. You would want enough bandwidth to stay up in the face of evildoers.
hero member
Activity: 737
Merit: 500
October 18, 2011, 08:40:04 AM
#22
The only hashes that a miner "wastes time on" are those that go stale because a new block has been found elsewhere on the network (and the miner does not know about it yet).  They are wasted because if they do find a block, that block will very likely become orphaned.

As teukon said, all other hashing is not wasted, because every single hash you calculate is effectively you "starting over" in your search for a valid block.  No matter how many hashes you have already checked, the very next hash you check has exactly the same probability of being valid or invalid.

No matter how many times you have flipped a coin and had it come up "heads", the next time you flip it, it still has exactly a 50% chance of coming up "heads".
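To make the "starting over" point concrete, here is a tiny Python sketch (the ~2^-32 per-hash probability for a difficulty-1 share and the function name are illustrative assumptions, not anything from the thread):

Code:
# The chance of finding at least one difficulty-1 share in the next n hashes
# does not depend on how many hashes have already been tried: the geometric
# distribution is memoryless, so already_tried deliberately cancels out.
p = 2.0 ** -32   # approx. probability that any single hash meets a difficulty-1 target

def prob_share_within(n_hashes, already_tried=0):
    # already_tried is unused on purpose -- past failures carry no information.
    return 1.0 - (1.0 - p) ** n_hashes

print(prob_share_within(10 ** 9))                          # starting fresh
print(prob_share_within(10 ** 9, already_tried=10 ** 12))  # after a trillion "wasted" hashes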
legendary
Activity: 1246
Merit: 1011
October 16, 2011, 05:20:22 PM
#21
Exactly, that is the point; however, when a block changes, a miner will have "wasted" any hashes being worked on.

Maybe my wording is unclear, but since shares are an artificial measurement there is some "loss", which means a lower-throughput miner takes more hashes on average to earn the same number of shares as a higher-throughput miner.

This is because the pool only sees work in "full share" steps.  If shares were smaller it would be less of an effect, and if shares were larger it would be more of an effect.

Currently a 400 MH GPU outperforms 2x 200 MH GPUs in terms of shares earned by ~3%, and outperforms 4x 100 MH GPUs by about 5%.  I know because I experimented by downclocking GPUs to simulate slower GPUs and running them for nearly a week to compare shares vs. hashrate.

With higher difficulty this effect will be increased.

When hashing (assuming a fixed hashing rate) the expected time to find a share can be approximately modelled by an exponential distribution.  This distribution has a "lack of memory" property.  This means that the expected time until the next share is found is independent of past events.  To say that one has lost progress towards a share is to suggest that it is possible to make progress towards a share; this is simply not compatible with the lack-of-memory property.  A miner can no more make progress towards finding a share than a pool can make progress towards finding a block.
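For reference, here is the lack-of-memory property written out (a standard statement for an exponential waiting time T with rate λ; the notation is mine, not from the thread):

Code:
% Memorylessness of the exponential waiting time T for the next share:
% hashing for time s without success gives no "progress" towards the share.
\Pr(T > s + t \mid T > s) = \frac{\Pr(T > s + t)}{\Pr(T > s)}
                          = \frac{e^{-\lambda(s+t)}}{e^{-\lambda s}}
                          = e^{-\lambda t} = \Pr(T > t)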

One thing which can cause the illusion of partial shares at the end of a round is frequently seeing rejects at the very beginning or end of a round.  This is generally caused by setting high aggression.
donator
Activity: 1218
Merit: 1079
Gerald Davis
October 16, 2011, 04:40:00 PM
#20
The reality is the miner technically has completed some fraction of shares which are lost in the block change.

There is no such thing as a partially completed share.


Exactly, that is the point; however, when a block changes, a miner will have "wasted" any hashes being worked on.

Maybe my wording is unclear, but since shares are an artificial measurement there is some "loss", which means a lower-throughput miner takes more hashes on average to earn the same number of shares as a higher-throughput miner.

This is because the pool only sees work in "full share" steps.  If shares were smaller it would be less of an effect, and if shares were larger it would be more of an effect.

Currently a 400 MH GPU outperforms 2x 200 MH GPUs in terms of shares earned by ~3%, and outperforms 4x 100 MH GPUs by about 5%.  I know because I experimented by downclocking GPUs to simulate slower GPUs and running them for nearly a week to compare shares vs. hashrate.

With higher difficulty this effect will be increased.

legendary
Activity: 1246
Merit: 1011
October 16, 2011, 04:30:30 PM
#19
The reality is the miner technically has completed some fraction of shares which are lost in the block change.

There is no such thing as a partially completed share.
donator
Activity: 1218
Merit: 1079
Gerald Davis
October 16, 2011, 04:17:25 PM
#18
I certainly understand if it's not an easy parameter to change and the servers can take it anyway.

I don't see how share granularity is much of a plus though.  There is a bonus in that the pool can quickly detect when a user has stopped mining and send out a cautionary e-mail.  Low-difficulty shares can be helpful for people trying to measure stales too.  Other than that, I don't see the problem with submitting no shares between longpolls, or why the pool needs to know users' hashrates.

One thing to consider is that higher share difficulty punishes smaller miners.

Pools don't pay for partial shares, so there is already an advantage to being a higher-hashrate miner.

Difficulty 1 = ~4.3 billion hashes (2^32) on average.

10 minutes per block change means that @ 100 MH/s a miner will on average complete ~14 shares per block change.

@ 400 MH/s a miner will on average complete ~56 shares per block change.

The reality is the miner technically has completed some fraction of shares which are lost in the block change.  However for the slower miner it is a larger % of their aggregate output. 

~14.5 shares completed = 14 shares accepted = 0.5 shares lost ≈ 3.5%
~56.5 shares completed = 56 shares accepted = 0.5 shares lost < 1%

The effect is small but real.  Higher-throughput miners suffer less "block change friction" than lower-throughput miners.

If a pool paid for the exact amount of work completed this would be a non-issue, but doing that isn't possible.  A pool approximates work by counting ONLY FULL SHARES.

With a difficulty-4 share:
100 MH/s = ~3.5 shares per block change.  Assuming a fractional loss of 0.5 shares = ~12.5% inefficiency
400 MH/s = ~14 shares per block change.  Assuming a fractional loss of 0.5 shares = ~3.5% inefficiency

Yes, this does mean that even today one 400 MH/s GPU is worth slightly more than 2x 200 MH/s GPUs.
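For anyone who wants to check the arithmetic, a rough Python sketch of the figures above (the 0.5-shares-lost-per-block-change figure is this post's own premise, disputed elsewhere in the thread; the rest is plain expected-value arithmetic):

Code:
# Rough arithmetic behind the post above.  The "0.5 shares lost per block
# change" figure is the poster's premise (disputed elsewhere in the thread);
# the rest is straightforward expected-value arithmetic.
HASHES_PER_DIFF1_SHARE = 2 ** 32   # ~4.3 billion hashes on average
BLOCK_INTERVAL_S = 600             # ~10 minutes per block change

def expected_shares(hashrate_mhs, share_difficulty=1):
    hashes = hashrate_mhs * 1e6 * BLOCK_INTERVAL_S
    return hashes / (HASHES_PER_DIFF1_SHARE * share_difficulty)

for diff in (1, 4):
    for mhs in (100, 400):
        shares = expected_shares(mhs, diff)
        loss = 0.5 / (shares + 0.5)   # claimed fractional-share loss
        print(f"diff {diff}, {mhs} MH/s: ~{shares:.1f} shares/block, "
              f"claimed inefficiency ~{loss:.1%}")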
sr. member
Activity: 266
Merit: 254
October 15, 2011, 05:44:33 AM
#17
A getwork request is about 600 bytes, and a submit work is about 40 bytes.

submit work should be a lot more than 40 bytes... Unless you mean outbound...

Getwork could be reduced to about 40 bytes + maybe another 40 for TCP overhead with a proper differential binary protocol.  80/640 ≈ an 88% reduction.  I'm thinking of working on that as part of the next phase of poolserverj development, but I'd be interested to hear if bandwidth costs really are an issue for pool ops... If so, I just have to hope some miner devs will step up and implement the client side of it.

Basically, the first request contains all the stuff in a normal getwork (though in binary) except the midstate, which is redundant.  Subsequent requests only contain a new merkle root and timestamp; these are the only fields that actually change except at longpoll time.
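Purely as an illustration of how small such a follow-up request could be (the field layout and the delta_getwork name below are my own guesses, not an actual poolserverj or miner protocol):

Code:
import struct

# Hypothetical wire format for a follow-up "differential" getwork: only the
# fields that change between requests (merkle root and timestamp) are sent.
def delta_getwork(merkle_root: bytes, timestamp: int) -> bytes:
    assert len(merkle_root) == 32
    return struct.pack("<32sI", merkle_root, timestamp)   # 32 + 4 = 36 bytes

payload = delta_getwork(b"\x00" * 32, 1318600000)
print(len(payload), "bytes of payload vs ~600 bytes for a JSON getwork")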
vip
Activity: 980
Merit: 1001
October 15, 2011, 03:56:17 AM
#16
We are watching merged mining, but have other things (like the US server) on our priority list before we can implement it.
Paying 55 BTC per block though.
hero member
Activity: 756
Merit: 500
October 15, 2011, 02:37:45 AM
#15
The last quote I had, I fell off my chair: they were pricing per 50 GB blocks of bandwidth.  I think there is a difference with US bandwidth; if I remember correctly, northern US IDCs are faster to Australia than the rest.  But do correct me, old folks do not have good memories.  BTW, are you planning merged mining?  I was about to sign up with you yesterday, but found no merged mining.
vip
Activity: 980
Merit: 1001
October 15, 2011, 12:11:33 AM
#14
That will cost a ton of money if the servers are in Australia. 

last month ozco.in
traffic in 224 GB
traffic out 457 GB
base hashrate ~70-90Ghash/s
yes
we are currently setting up a US server based in Dallas to take some of that pressure off
hero member
Activity: 756
Merit: 500
October 15, 2011, 12:06:42 AM
#13
That will cost a ton of money if the servers are in Australia. 

last month ozco.in
traffic in 224 GB
traffic out 457 GB
base hashrate ~70-90Ghash/s
legendary
Activity: 1246
Merit: 1011
October 14, 2011, 02:40:37 PM
#12
I suppose mostly that's psychological. A user likes to see that the pool knows their approximate hashrate; it's the fastest way to confirm that their setup is working properly and their shares are being accounted for. My users tend to use the hashrate estimate as their go-to stat: if it's more than 20% out (acceptable variance given the way we work it out), they know something is wrong and can investigate further.

If a user has to go more than 10 minutes without submitting a single share, it would be very difficult for us to work out this figure to any acceptable accuracy over a reasonably short timeframe.

That's very true.  I was used to this effect from solo mining and I admit it was much easier to be sure everything was working when you could see shares rolling.

Still, if a server is having resource issues then dropping to difficulty-2 shares seems like a much better idea than renting/buying a second server.
hero member
Activity: 546
Merit: 500
October 14, 2011, 02:04:42 PM
#11

Other than that, I don't see the problem with submitting no shares between longpolls, or why the pool needs to know users' hashrates.


I suppose mostly that's psychological. A user likes to see that the pool knows their approximate hashrate; it's the fastest way to confirm that their setup is working properly and their shares are being accounted for. My users tend to use the hashrate estimate as their go-to stat: if it's more than 20% out (acceptable variance given the way we work it out), they know something is wrong and can investigate further.

If a user has to go more than 10 minutes without submitting a single share, it would be very difficult for us to work out this figure to any acceptable accuracy over a reasonably short timeframe.
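For what it's worth, the estimate itself is simple; here is a sketch of the standard shares-to-hashrate formula (the function name and example figures are mine, not the pool's actual code):

Code:
# Estimating a worker's hashrate from accepted shares: each difficulty-d
# share represents ~d * 2^32 hashes on average, so
#   hashrate ~= shares * d * 2^32 / elapsed_seconds
def estimated_hashrate_mhs(shares_accepted, share_difficulty, window_seconds):
    hashes = shares_accepted * share_difficulty * 2 ** 32
    return hashes / window_seconds / 1e6   # MH/s

# e.g. 84 difficulty-1 shares over an hour is roughly 100 MH/s
print(estimated_hashrate_mhs(84, 1, 3600))

The catch, as noted above, is variance: with fewer, higher-difficulty shares in the window the estimate gets noisier, so the window has to be longer to stay within something like the ±20% band mentioned.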
legendary
Activity: 1246
Merit: 1011
October 14, 2011, 01:59:14 PM
#10
The more you raise the difficulty miners solve at, the less precisely you can track their hashrate. Low-hashrate miners may not even get to submit a share in between longpolls in extreme instances. I dare say you could safely raise it to 2 or 4 without too much trouble though.

Probably the reason most don't do it is that pushpool is set to 1 by default and few pool ops are skilled enough to change this without causing all manner of side effects. Personally, my load and network usage are well within acceptable parameters, so raising it would just cause a loss of share granularity for no real gain.

I certainly understand if it's not an easy parameter to change and the servers can take it anyway.

I don't see how share granularity is much of a plus though.  There is a bonus in that the pool can quickly detect when a user has stopped mining and send out a cautionary e-mail.  Low-difficulty shares can be helpful for people trying to measure stales too.  Other than that, I don't see the problem with submitting no shares between longpolls, or why the pool needs to know users' hashrates.

Given the low fee that most pools ask for, I might expect that a pool server would have to watch its BTC/Watt in a similar way to a miner, so surely there is an incentive to make the servers more efficient.

Ah well, this is just curiosity.  I don't run a pool server nor do I intend to start.
hero member
Activity: 546
Merit: 500
October 14, 2011, 01:49:10 PM
#9
The more you raise the difficulty miners solve at, the less precisely you can track their hashrate. Low-hashrate miners may not even get to submit a share in between longpolls in extreme instances. I dare say you could safely raise it to 2 or 4 without too much trouble though.

Probably the reason most don't do it is that pushpool is set to 1 by default and few pool ops are skilled enough to change this without causing all manner of side effects. Personally, my load and network usage are well within acceptable parameters, so raising it would just cause a loss of share granularity for no real gain.
legendary
Activity: 1246
Merit: 1011
October 14, 2011, 01:36:36 PM
#8
Basically all of it.

Interesting.  I wonder why pools use such low difficulty for their shares then.  With most pools, people are submitting many more shares than they are receiving payments, so the reason is certainly not related to variance.  Higher-difficulty shares would help with server resources, which would allow pools to operate more cheaply too.

Is there some technical reason why difficulty-1 shares are preferred?
vip
Activity: 980
Merit: 1001
October 14, 2011, 01:12:40 PM
#7
last month ozco.in
traffic in 224 GB
traffic out 457 GB
base hashrate ~70-90Ghash/s
full member
Activity: 207
Merit: 100
October 14, 2011, 11:07:59 AM
#6
Last month, ArsBitcoin (~800 GH/s or so last month?) used the following:

data transfer in    495.223 GB
data transfer out 676.921 GB

Some of that is probably backing up files and such, but probably not too much.
hero member
Activity: 546
Merit: 500
October 14, 2011, 10:43:38 AM
#5

How much of that is down to the shares?  If you created a pool accepting difficulty 100 shares then would your bandwidth requirements drop significantly?


Basically all of it. The website is doing virtually nothing compared to the poolserver. In the last 12 hours (time of my log rotation) I've had 43,000 hits to the website. Most of them are API hits, which are pretty small (a couple of hundred bytes). I don't bother running detailed stats on it at the moment though.

In the same timeframe I've had 1,400,000 hits to the poolserver.
A getwork request is about 600 bytes, and a submit work is about 40 bytes.
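A quick back-of-envelope check on those numbers (payload only; the request rate and monthly projection below are my own extrapolation):

Code:
# 1.4 million poolserver hits in 12 hours, at ~600 bytes of getwork payload
# each (per the figures above).  Payload only -- HTTP/TCP overhead, the
# inbound leg and website/API traffic all come on top of this.
hits = 1_400_000
window_hours = 12
getwork_bytes = 600

req_per_sec = hits / (window_hours * 3600)
payload_gb_month = hits * getwork_bytes * (24 / window_hours) * 30 / 1e9
print(f"~{req_per_sec:.0f} requests/s, ~{payload_gb_month:.0f} GB/month of getwork payload")

Per-request protocol overhead, the inbound leg and website traffic multiply this, which is how the monthly totals quoted earlier in the thread end up several times larger.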
legendary
Activity: 1246
Merit: 1011
October 14, 2011, 10:02:31 AM
#4
I can only tell you what I've observed: rfcpool is using about 3 Mbit outbound, 2 Mbit inbound, to do 50 GH/s.

How much of that is down to the shares?  If you created a pool accepting difficulty 100 shares then would your bandwidth requirements drop significantly?