Topic: [CLOSED] BTC Guild - Pays TxFees+NMC, Stratum, VarDiff, Private Servers - page 132. (Read 903163 times)

newbie
Activity: 35
Merit: 0
@eleuthria, can you explain how share difficulty affects miners on the pool?

thanks

Every time a miner performs a hash, it is creating a share.  The difficulty of that share is based on the quality of the hash (leading 0-bits for the most part).  If the first 32 bits of the hash are 0s, then you have a difficulty 1 hash.  If 33 bits are 0, it's difficulty 2; 34 zero bits is difficulty 4, 35 is difficulty 8, and so on.

So the work you receive is the same regardless.  Setting a share difficulty on the pool limits which quality of shares you submit.  Since a difficulty 2 share is exactly half as likely as a difficulty 1 share, the pool rewards you 2 shares at a time instead of 1, but you only submit half as many results (on average, since share finding is a random event).

In the long run, higher share difficulties have no impact on earnings.  Higher difficulties will increase your hourly variance marginally, and have an almost non-existent impact on your 24-hour variance (in terms of share submissions) under BTC Guild's variable difficulty settings.  You will also use significantly less bandwidth, since most of the bandwidth on Stratum goes to share submission/confirmation.

A common misconception is that mining at a high difficulty will hurt your earnings because of "lost work" when stales occur.  This is not correct.  A stale submission hurts more at higher difficulties, but stales are proportionally less frequent.  If your window for stales is 100ms and you find a share every second at difficulty 2, you have a (roughly) 10% chance of submitting a stale in that 100ms window.  At difficulty 4, you would find a share every 2 seconds, meaning a 5% chance of submitting a stale in that window.  So while each stale hurts twice as much on your acceptance rate, it happens half as often.

Thanks for this!
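To put the arithmetic above in concrete terms, here is a minimal C sketch (not from the thread; the 200 GH/s hashrate is an arbitrary illustration value): a difficulty-D share takes roughly D × 2^32 hashes on average, so raising the difficulty stretches the time between shares but leaves the credited work per hour unchanged.

Code:
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Difficulty 1 corresponds to roughly 2^32 hashes per share on average. */
    const double hashes_per_diff1 = pow(2.0, 32);

    /* Example hashrate: 200 GH/s (arbitrary illustration value). */
    const double hashrate = 200e9;

    for (double diff = 1; diff <= 16; diff *= 2) {
        double seconds_per_share = diff * hashes_per_diff1 / hashrate;
        double shares_per_hour   = 3600.0 / seconds_per_share;

        /* Credited work per hour (shares x difficulty) stays constant. */
        printf("diff %2.0f: ~%.3f s/share, ~%.0f shares/hr, credited work/hr = %.0f\n",
               diff, seconds_per_share, shares_per_hour, shares_per_hour * diff);
    }
    return 0;
}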
newbie
Activity: 51
Merit: 0
All miners on the pool are working on the transactions that the pool has selected for inclusion in the next block.  Different pools will normally have a slight difference in the transactions in their blocks due to different size limits, and to which transactions they have seen on the network (since it's all p2p, not all pools see all transactions at the same time).

BTC Guild sends you a new work template every 30 seconds, regardless of whether or not a new block is on the network.  This new template includes an updated list of transactions to include in the block, since the longer it has been since the last block, the more likely it is that new transactions with higher priority/higher fees are waiting on the network for confirmation.

Thanks again
legendary
Activity: 1750
Merit: 1007
@eleuthria, can you explain how share difficulty affects miners on the pool?

thanks

Every time a miner performs a hash, it is creating a share.  The difficulty of that share is based on the quality of the hash (leading 0-bits for the most part).  If the first 32 bits of the hash are 0s, then you have a difficulty 1 hash.  If 33 bits are 0, it's difficulty 2; 34 zero bits is difficulty 4, 35 is difficulty 8, and so on.

So the work you receive is the same regardless.  Setting a share difficulty on the pool limits which quality of shares you submit.  Since a difficulty 2 share is exactly half as likely as a difficulty 1 share, the pool rewards you 2 shares at a time instead of 1, but you only submit half as many results (on average, since share finding is a random event).

In the long run, higher share difficulties have no impact on earnings.  Higher difficulties will increase your hourly variance marginally, and have an almost non-existent impact on your 24-hour variance (in terms of share submissions) under BTC Guild's variable difficulty settings.  You will also use significantly less bandwidth, since most of the bandwidth on Stratum goes to share submission/confirmation.

A common misconception is that mining at a high difficulty will hurt your earnings because of "lost work" when stales occur.  This is not correct.  A stale submission hurts more at higher difficulties, but stales are proportionally less frequent.  If your window for stales is 100ms and you find a share every second at difficulty 2, you have a (roughly) 10% chance of submitting a stale in that 100ms window.  At difficulty 4, you would find a share every 2 seconds, meaning a 5% chance of submitting a stale in that window.  So while each stale hurts twice as much on your acceptance rate, it happens half as often.
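A small C sketch of the stale argument above, reading the 10%/5% figures as the expected number of stale shares per new-block event and using the post's own example (a 100ms stale window and one share per second at difficulty 2, i.e. an implied hashrate of 2 × 2^32 H/s).  The takeaway: each stale costs proportionally more credit at higher difficulty, but stales arrive proportionally less often, so the expected credit lost per block change is unchanged.

Code:
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double stale_window = 0.100;               /* seconds, from the example above */
    const double hashrate     = 2.0 * pow(2.0, 32);  /* one diff-2 share per second     */

    for (double diff = 2; diff <= 16; diff *= 2) {
        double shares_per_sec = hashrate / (diff * pow(2.0, 32));
        double stales_per_blk = shares_per_sec * stale_window;  /* expected stales per block change */
        double credit_lost    = stales_per_blk * diff;          /* each stale costs 'diff' credit   */

        printf("diff %2.0f: ~%.2f%% chance of a stale per block change, expected credit lost = %.2f\n",
               diff, stales_per_blk * 100.0, credit_lost);
    }
    return 0;
}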
hero member
Activity: 692
Merit: 500
@eleuthria, can you explain how share difficulty affects miners on the pool?

thanks
legendary
Activity: 1750
Merit: 1007
Final question on this - are we all hashing the same "package" of transactions or is that a fluid situation managed by the pool software ?  ... suppose it has to be fluid

All miners on the pool are working on the transactions that the pool has selected for inclusion in the next block.  Different pools will normally have a slight difference in the transactions in their blocks due to different size limits, and to which transactions they have seen on the network (since it's all p2p, not all pools see all transactions at the same time).

BTC Guild sends you a new work template every 30 seconds, regardless of whether or not a new block is on the network.  This new template includes an updated list of transactions to include in the block, since the longer it has been since the last block, the more likely it is that new transactions with higher priority/higher fees are waiting on the network for confirmation.
newbie
Activity: 51
Merit: 0
le sigh. why did i just get so codey when he asked for laymans terms?

sudo reboot
Hey, I can do maths  Smiley

EDIT : & Thanks for the clarification ... was busy typing & thinking while you posted - did help and clarify thx

EDIT2 : Didn't see your code 1st time of looking ... but think I can follow it thx again
hero member
Activity: 658
Merit: 500
CCNA: There i fixed the internet.
 le sigh. why did i just get so codey when he asked for laymans terms?

sudo reboot
newbie
Activity: 51
Merit: 0
Final question on this - are we all hashing the same "package" of transactions or is that a fluid situation managed by the pool software ?  ... suppose it has to be fluid
newbie
Activity: 51
Merit: 0
Each hash a miner does is using a different blob of data.  A single change in a single bit will produce a completely different hash, with no deterministic way to know how it will change.  To prevent miners from repeating work, pools take a template of work, and increment a pool-side counter for each miner's work.  This means the template each miner receives is slightly different, so that they will produce completely different hash results.  Miners then take this template and have three values they can change:

1) ExtraNonce - A piece of the coinbase (payment transaction) that allows a miner to increment a counter.  Most pools use 4 bytes for this value, meaning ~4.2 billion possible increments.
2) Nonce - Another 4-byte counter, this is part of the block header.  It has ~4.2 billion possible values as well.  You can use up ~4.2 billion nonces (~4.2 Gigahashes), then increment the ExtraNonce by 1, which allows you to try all 4.2 billion Nonce values again.
3) nTime - This is a timestamp part of the block header.  It *can* be altered within certain limits.  Each change in this would be another 4.2b x 4.2b possible hash results.  Most miners do not increment nTime anymore because there is no reason to alter timestamps with how much work can be generated by default.


By changing a single bit in any of those 3, you get a completely different hash.  There is also the pool-side counter for each miner so there is no overlap, and each pool uses different payout addresses so no two pools have overlapping hashes either.

Thanks Eleuthria - takes a bit of digesting
So a 2 TH/s miner running at the full 2 TH/s would require ~2,100 secs to try all 4.2 billion hashes (Extranonces) for a single nonce setting and a single nTime ?
.... unless of course it got a hash result lower than the target

2 TH/s is actually 2 *trillion* hashes per second.  Your miner adjusts the nonce first, then adjusts the extranonce once it runs out of numbers to try.  At 2 TH/s, this happens roughly 500 times per second.

Of course, Oops

Thanks everyone
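A quick check of the correction above as a minimal C sketch: the nonce field holds 2^32 (~4.3 billion) values, so a 2 TH/s miner exhausts it, and has to bump the extranonce, roughly 2×10^12 / 2^32 ≈ 466 times per second, i.e. the "roughly 500 times per second" quoted above.

Code:
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double nonce_space = pow(2.0, 32);   /* ~4.3 billion nonce values      */
    const double hashrate    = 2e12;           /* 2 TH/s = 2 trillion hashes/sec */

    /* How often the nonce space is exhausted, forcing an extranonce increment. */
    printf("Nonce space exhausted ~%.0f times per second at 2 TH/s\n",
           hashrate / nonce_space);
    return 0;
}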
legendary
Activity: 1750
Merit: 1007
Each hash a miner does is using a different blob of data.  A single change in a single bit will produce a completely different hash, with no deterministic way to know how it will change.  To prevent miners from repeating work, pools take a template of work, and increment a pool-side counter for each miner's work.  This means the template each miner receives is slightly different, so that they will produce completely different hash results.  Miners then take this template and have three values they can change:

1) ExtraNonce - A piece of the coinbase (payment transaction) that allows a miner to increment a counter.  Most pools use 4 bytes for this value, meaning ~4.2 billion possible increments.
2) Nonce - Another 4-byte counter, this is part of the block header.  It has ~4.2 billion possible values as well.  You can use up ~4.2 billion nonces (~4.2 Gigahashes), then increment the ExtraNonce by 1, which allows you to try all 4.2 billion Nonce values again.
3) nTime - This is a timestamp part of the block header.  It *can* be altered within certain limits.  Each change in this would be another 4.2b x 4.2b possible hash results.  Most miners do not increment nTime anymore because there is no reason to alter timestamps with how much work can be generated by default.


By changing a single bit in any of those 3, you get a completely different hash.  There is also the pool-side counter for each miner so there is no overlap, and each pool uses different payout addresses so no two pools have overlapping hashes either.

Thanks Eleuthria - takes a bit of digesting
So a 2 TH/s miner running at the full 2 TH/s would require ~2,100 secs to try all 4.2 billion hashes (Extranonces) for a single nonce setting and a single nTime ?
.... unless of course it got a hash result lower than the target

2 TH/s is actually 2 *trillion* hashes per second.  Your miner adjusts the nonce first, then adjusts the extranonce once it runs out of numbers to try.  At 2 TH/s, this happens roughly 500 times per second.
newbie
Activity: 51
Merit: 0
Each hash a miner does is using a different blob of data.  A single change in a single bit will produce a completely different hash, with no deterministic way to know how it will change.  To prevent miners from repeating work, pools take a template of work, and increment a pool-side counter for each miner's work.  This means the template each miner receives is slightly different, so that they will produce completely different hash results.  Miners then take this template and have three values they can change:

1) ExtraNonce - A piece of the coinbase (payment transaction) that allows a miner to increment a counter.  Most pools use 4 bytes for this value, meaning ~4.2 billion possible increments.
2) Nonce - Another 4-byte counter, this is part of the block header.  It has ~4.2 billion possible values as well.  You can use up ~4.2 billion nonces (~4.2 Gigahashes), then increment the ExtraNonce by 1, which allows you to try all 4.2 billion Nonce values again.
3) nTime - This is a timestamp part of the block header.  It *can* be altered within certain limits.  Each change in this would be another 4.2b x 4.2b possible hash results.  Most miners do not increment nTime anymore because there is no reason to alter timestamps with how much work can be generated by default.


By changing a single bit in any of those 3, you get a completely different hash.  There is also the pool-side counter for each miner so there is no overlap, and each pool uses different payout addresses so no two pools have overlapping hashes either.

Thanks Eleuthria - takes a bit of digesting
So a 2 TH/s miner running at the full 2 TH/s would require ~2,100 secs to try all 4.2 billion hashes (Extranonces) for a single nonce setting and a single nTime ?
.... unless of course it got a hash result lower than the target


hero member
Activity: 658
Merit: 500
CCNA: There i fixed the internet.
EDIT to explain what it is:  In getwork days, the pool provided you a single unit of work, you finished it, and asked for more.  Discarded was a figure of how much work you had asked for that you never got to use due to longpolls making it obsolete.  In GBT/Stratum, pools don't provide you with a unit of work, but they provide you with a template to make work locally.

Is there a good plain English write-up explaining pools and how they operate, or is it all programmer-speak gobbledegook?

My inclination was that a pool just divided the potential solution space amongst live workers and gave them all something to do.
Is that anywhere near correct?


Each hash a miner does is using a different blob of data.  A single change in a single bit will produce a completely different hash, with no deterministic way to know how it will change.  To prevent miners from repeating work, pools take a template of work, and increment a pool-side counter for each miner's work.  This means the template each miner receives is slightly different, so that they will produce completely different hash results.  Miners then take this template and have three values they can change:

1) ExtraNonce - A piece of the coinbase (payment transaction) that allows a miner to increment a counter.  Most pools use 4 bytes for this value, meaning ~4.2 billion possible increments.
2) Nonce - Another 4-byte counter, this is part of the block header.  It has ~4.2 billion possible values as well.  You can use up ~4.2 billion nonces (~4.2 Gigahashes), then increment the ExtraNonce by 1, which allows you to try all 4.2 billion Nonce values again.
3) nTime - This is a timestamp part of the block header.  It *can* be altered within certain limits.  Each change in this would be another 4.2b x 4.2b possible hash results.  Most miners do not increment nTime anymore because there is no reason to alter timestamps with how much work can be generated by default.


By changing a single bit in any of those 3, you get a completely different hash.  There is also the pool-side counter for each miner so there is no overlap, and each pool uses different payout addresses so no two pools have overlapping hashes either.


Just to give an estimate, using Eleuthria's numbers: BTC Guild and most pools send a new work item every 30 seconds.

Supposing a miner did still roll nTime, rolling nTime from 0x00 to 0x01 within 29 seconds would require ~640 PH/s, or ~26 times the current total network hashrate.

The software increments in this order:

Code:

Pseudocode:
for (ntime = 0; ntime < 2^32; ntime++)
{
    for (extranonce = 0; extranonce < 2^32; extranonce++)
    {
        for (nonce = 0; nonce < 2^32; nonce++)
        {
            /* build a candidate block header from the current counters, then hash it */
            GenerateBlockHeader(ntime, extranonce, nonce);
            Hash();
        }
    }
}


It runs through the nonce loop 2^32 times, then steps out one level, increments extranonce, and runs the nonce loop 2^32 more times.  This goes on 2^32 times, then it steps out to the nTime loop, increments nTime, and steps back into the nonce loop.

Repeat until either new work is sent from the pool, we find a block, or nTime is exhausted.

The math:
(ExtraNonce space × Nonce space) / hashrate = seconds per nTime increment

(2^32 × 2^32) / x = 29

which yields x ≈ 6.3609×10^17 hashes/second (~636 PH/s).
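The same calculation as a minimal C sketch, using the post's 29-second interval between work updates:

Code:
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double extranonce_space = pow(2.0, 32);
    const double nonce_space      = pow(2.0, 32);
    const double interval         = 29.0;   /* seconds between work updates, per the post */

    /* Hashrate needed to exhaust extranonce x nonce within one interval,
       i.e. to roll nTime by a single increment. */
    double required = extranonce_space * nonce_space / interval;

    printf("Required hashrate: %.4e H/s (~%.0f PH/s)\n", required, required / 1e15);
    return 0;
}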
legendary
Activity: 1750
Merit: 1007
EDIT to explain what it is:  In getwork days, the pool provided you a single unit of work, you finished it, and asked for more.  Discarded was a figure of how much work you had asked for that you never got to use due to longpolls making it obsolete.  In GBT/Stratum, pools don't provide you with a unit of work, but they provide you with a template to make work locally.

Is there a good plain English write-up explaining pools and how they operate, or is it all programmer-speak gobbledegook?

My inclination was that a pool just divided the potential solution space amongst live workers and gave them all something to do.
Is that anywhere near correct?


Each hash a miner does is using a different blob of data.  A single change in a single bit will produce a completely different hash, with no deterministic way to know how it will change.  To prevent miners from repeating work, pools take a template of work, and increment a pool-side counter for each miner's work.  This means the template each miner receives is slightly different, so that they will produce completely different hash results.  Miners then take this template and have three values they can change:

1) ExtraNonce - A piece of the coinbase (payment transaction) that allows a miner to increment a counter.  Most pools use 4 bytes for this value, meaning ~4.2 billion possible increments.
2) Nonce - Another 4-byte counter, this is part of the block header.  It has ~4.2 billion possible values as well.  You can use up ~4.2 billion nonces (~4.2 Gigahashes), then increment the ExtraNonce by 1, which allows you to try all 4.2 billion Nonce values again.
3) nTime - This is a timestamp part of the block header.  It *can* be altered within certain limits.  Each change in this would be another 4.2b x 4.2b possible hash results.  Most miners do not increment nTime anymore because there is no reason to alter timestamps with how much work can be generated by default.


By changing a single bit in any of those 3, you get a completely different hash.  There is also the pool-side counter for each miner so there is no overlap, and each pool uses different payout addresses so no two pools have overlapping hashes either.
legendary
Activity: 1540
Merit: 1001
EDIT to explain what it is:  In getwork days, the pool provided you a single unit of work, you finished it, and asked for more.  Discarded was a figure of how much work you had asked for that you never got to use due to longpolls making it obsolete.  In GBT/Stratum, pools don't provide you with a unit of work, but they provide you with a template to make work locally.

Is there a good plain English write-up explaining pools and how they operate, or is it all programmer-speak gobbledegook?

My inclination was that a pool just divided the potential solution space amongst live workers and gave them all something to do.
Is that anywhere near correct?

That's pretty close to it.  The pool then divides the rewards up among the workers in proportion to the work each provided.  Exactly which proportional scheme is used is in the fine details.  Most are pretty fair.  Some are subject to abuse.  The one used here is one of the fair ones.

M
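The post above doesn't say which scheme BTC Guild uses, so as a purely illustrative sketch (not BTC Guild's actual payout method), here is the simplest "proportional to work provided" split in C: credit each worker by difficulty-weighted shares and divide the block reward accordingly.  The worker share counts are hypothetical.

Code:
#include <stdio.h>

int main(void)
{
    const double block_reward = 25.0;                  /* BTC per block in this era         */
    double shares[] = { 12000.0, 8000.0, 4000.0 };     /* hypothetical diff-weighted shares */
    int n = sizeof(shares) / sizeof(shares[0]);

    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += shares[i];

    /* Each worker's payout is its fraction of the round's total shares. */
    for (int i = 0; i < n; i++)
        printf("worker %d: %5.1f%% of shares -> %.4f BTC\n",
               i, 100.0 * shares[i] / total, block_reward * shares[i] / total);

    return 0;
}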
newbie
Activity: 51
Merit: 0
EDIT to explain what it is:  In getwork days, the pool provided you a single unit of work, you finished it, and asked for more.  Discarded was a figure of how much work you had asked for that you never got to use due to longpolls making it obsolete.  In GBT/Stratum, pools don't provide you with a unit of work, but they provide you with a template to make work locally.

Is there a good plain English write-up explaining pools and how they operate, or is it all programmer-speak gobbledegook?

My inclination was that a pool just divided the potential solution space amongst live workers and gave them all something to do.
Is that anywhere near correct?
legendary
Activity: 1750
Merit: 1007
Your Antminer includes rejected shares and hardware errors.  Additionally, the pool dashboard is an estimate based on accepted shares, while your local miner reports actual hashing speed.  The average on your Dashboard will always be within +/- 5 to 10% of your actual hash rate (even further off if you have HW errors and/or lots of invalids).  As for "always when the pool luck is extended bad run", this is called selection bias.  You see the same fluctuation regardless of pool luck, but you're trying to associate completely independent events.  tl;dr:  Your brain lies to you.

My rejected and HW are very small (591 & 46 resp)
Have 0 stale but over 40,000 discarded versus more than 210,000 accepted (always around 20%)

Is there something I can do to improve the discarded ? What are they ? Where can I read up on it ?

Guess I do investigate more when luck is low ...


Discarded is a meaningless stat for Stratum & GBT.  Ignore it completely.

EDIT to explain what it is:  In getwork days, the pool provided you a single unit of work, you finished it, and asked for more.  Discarded was a figure of how much work you had asked for that you never got to use due to longpolls making it obsolete.  In GBT/Stratum, pools don't provide you with a unit of work, but they provide you with a template to make work locally.
newbie
Activity: 51
Merit: 0
Your Antminer includes rejected shares and hardware errors.  Additionally, the pool dashboard is an estimate based on accepted shares, while your local miner reports actual hashing speed.  The average on your Dashboard will always be within +/- 5 to 10% of your actual hash rate (even further off if you have HW errors and/or lots of invalids).  As for "always when the pool luck is extended bad run", this is called selection bias.  You see the same fluctuation regardless of pool luck, but you're trying to associate completely independent events.  tl;dr:  Your brain lies to you.

My rejected and HW are very small (591 & 46 resp)
Have 0 stale but over 40,000 discarded versus more than 210,000 accepted (always around 20%)

Is there something I can do to improve the discarded ? What are they ? Where can I read up on it ?

Guess I do investigate more when luck is low ...
legendary
Activity: 1750
Merit: 1007
I've noticed when the pool luck is low the website appears to report my hashrate as low too (after a run of bad luck - like now)
When I check my Ant it disagrees

Right now website says 165GH/s for my Ant worker
The Ant says 185 GH/s

Just wondering why

I've noticed it a few times over the last week, always when pool luck is extended bad run


Your Antminer includes rejected shares and hardware errors.  Additionally, the pool dashboard is an estimate based on accepted shares, while your local miner reports actual hashing speed.  The average on your Dashboard will always be within +/- 5 to 10% of your actual hash rate (even further off if you have HW errors and/or lots of invalids).  As for "always when the pool luck is extended bad run", this is called selection bias.  You see the same fluctuation regardless of pool luck, but you're trying to associate completely independent events.  tl;dr:  Your brain lies to you.
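A minimal C sketch of how a dashboard-style estimate can be derived from accepted shares (the averaging window, share difficulty, and share count below are made-up illustration values; BTC Guild's exact method isn't described in the thread): estimated hashrate ≈ accepted shares × share difficulty × 2^32 / elapsed seconds.  Because share finding is random, this estimate wobbles around the miner's true local rate.

Code:
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double window_seconds = 3600.0;   /* assumed 1-hour averaging window   */
    const double share_diff     = 64.0;     /* assumed share difficulty          */
    const double accepted       = 2400.0;   /* assumed accepted shares in window */

    /* Each accepted share represents ~share_diff * 2^32 hashes of work on average. */
    double est_hashrate = accepted * share_diff * pow(2.0, 32) / window_seconds;

    printf("Estimated hashrate: ~%.1f GH/s\n", est_hashrate / 1e9);
    return 0;
}

With these example numbers the estimate comes out around 183 GH/s, which shows how a miner reading ~185 GH/s locally can easily display a somewhat different figure on the dashboard.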
newbie
Activity: 51
Merit: 0
I've noticed when the pool luck is low the website appears to report my hashrate as low too (after a run of bad luck - like now)
When I check my Ant it disagrees

Right now website says 165GH/s for my Ant worker
The Ant says 185 GH/s

Just wondering why

I've noticed it a few times over the last week, always when pool luck is extended bad run

EDIT : Then perfect timing pool mines a block