
Topic: Vladimir's essential self-defence guide for Bitcoin Miners - page 3. (Read 13271 times)

full member
Activity: 210
Merit: 100
Not entirely accurate. Without going into details, there are things which can be done to minimize and practically eliminate the chance of a block your pool has found being orphaned (strictly speaking about BTC here and not other shitcoin chains).
Yes, what I said was entirely accurate, and I'll re-word it in case I wasn't clear enough. You cannot eliminate invalids. You can, at best, reduce them.

Fair enough.
legendary
Activity: 3878
Merit: 1193
Not entirely accurate. Without going into details, there are things which can be done to minimize and practically eliminate the chance of a block your pool has found being orphaned (strictly speaking about BTC here and not other shitcoin chains).
Yes, what I said was entirely accurate, and I'll re-word it in case I wasn't clear enough. You cannot eliminate invalids. You can, at best, reduce them.
full member
Activity: 210
Merit: 100
Yep, that is the "you treat invalid blocks incorrectly" stance. I got it. But how am I supposed to handle it? From my point of view, if a pool gets an invalid block it is the pool operator's fault
No pool can eliminate invalid blocks. Invalids are expected and are a direct consequence of the 10-minute target and network propagation effects, because solved blocks do not propagate instantly to every node on the network. Another blockchain with a shorter target time, say 3 minutes, would expect to have more invalid blocks.

Your spreadsheet is slightly inaccurate because you are including invalids in the 'actual' column, but not in the 'expected' column. Until you can compensate correctly for the 'expected' number of invalids, it's best to leave them out of the 'actual' column calculations.


Is there any non-falsifiable proof that a given block *was* actually invalid?

Yes. Although it could take some research, I should think it could be verified, by collating data from other pools and a block explorer, whether a pool has falsely marked a block invalid/orphaned.
full member
Activity: 210
Merit: 100
Yep, that is the "you treat invalid blocks incorrectly" stance. I got it. But how am I supposed to handle it? From my point of view, if a pool gets an invalid block it is the pool operator's fault
No pool can eliminate invalid blocks. Invalids are expected and are a direct consequence of the 10-minute target and network propagation effects, because solved blocks do not propagate instantly to every node on the network. Another blockchain with a shorter target time, say 3 minutes, would expect to have more invalid blocks.


Not entirely accurate. Without going into details, there are things which can be done to minimize and practically eliminate the chance of a block your pool has found being orphaned (strictly speaking about BTC here and not other shitcoin chains). In very simple terms, it's just making sure that you arrange very good connectivity between your pool and as many other nodes on the net as possible (especially the other largest nodes, and especially ones geographically well spaced out).

To see some effects of doing this, have a look at this post:
https://bitcoin.org.uk/forums/topic/117-mmc-one-of-the-fastest-bitcoin-mining-pool-in-the-world-with-a-proof/

You can see that in many cases Mainframe is the first to announce a new block to the network even when we didn't find it. Of course this can depend on the location from which these measurements are taken, but I think it illustrates my point that measures can be taken, and it gives evidence that those measures can be effective.
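
To put rough numbers on the propagation effect being debated: under the usual Poisson model of block arrivals, the chance that someone else solves a competing block while yours is still propagating is about 1 - exp(-tau/600) for a propagation delay of tau seconds. A minimal sketch (the delays are hypothetical, not measurements of any pool; only the 600-second target comes from the thread):

```python
import math

T = 600.0  # Bitcoin's target block interval, seconds
# Hypothetical propagation delays; the exponential race model is the
# standard Poisson approximation, not a figure taken from this thread.
for tau in (1, 5, 15, 60):
    p_orphan = 1 - math.exp(-tau / T)
    print(f"propagation delay {tau:>2}s -> orphan risk ~{p_orphan:.3%}")
```

On this model, cutting the delay from 15 s to 1 s cuts the orphan risk from roughly 2.5% to under 0.2%, which is why the connectivity measures described above can matter even though they cannot drive the risk to zero.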
donator
Activity: 2058
Merit: 1007
Poor impulse control.
Yep, that is the "you treat invalid blocks incorrectly" stance. I got it. But how am I supposed to handle it? From my point of view, if a pool gets an invalid block it is the pool operator's fault
No pool can eliminate invalid blocks. Invalids are expected and are a direct consequence of the 10-minute target and network propagation effects, because solved blocks do not propagate instantly to every node on the network. Another blockchain with a shorter target time, say 3 minutes, would expect to have more invalid blocks.

Your spreadsheet is slightly inaccurate because you are including invalids in the 'actual' column, but not in the 'expected' column. Until you can compensate correctly for the 'expected' number of invalids, it's best to leave them out of the 'actual' column calculations.


Is there any non-falsifiable proof that a given block *was* actually invalid?
legendary
Activity: 3878
Merit: 1193
Yep, that is the "you treat invalid blocks incorrectly" stance. I got it. But how am I supposed to handle it? From my point of view, if a pool gets an invalid block it is the pool operator's fault
No pool can eliminate invalid blocks. Invalids are expected and are a direct consequence of the 10-minute target and network propagation effects, because solved blocks do not propagate instantly to every node on the network. Another blockchain with a shorter target time, say 3 minutes, would expect to have more invalid blocks.

Your spreadsheet is slightly inaccurate because you are including invalids in the 'actual' column, but not in the 'expected' column. Until you can compensate correctly for the 'expected' number of invalids, it's best to leave them out of the 'actual' column calculations.
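
The correction being proposed can be made concrete. A minimal sketch with invented numbers (the difficulty, share count and orphan rate are illustrative assumptions, not figures from anyone's spreadsheet):

```python
shares_accepted = 11_000_000   # invented
difficulty = 1_888_786         # invented
blocks_found = 6               # invented; includes one invalid
blocks_invalid = 1
p_orphan = 0.01                # assumed network-wide invalid/orphan rate

# Compare like with like: either count invalids on both sides...
expected_total = shares_accepted / difficulty
# ...or drop them from both sides.
expected_valid = expected_total * (1 - p_orphan)

print(f"valid blocks: {blocks_found - blocks_invalid} actual vs "
      f"{expected_valid:.2f} expected")
print(f"all blocks:   {blocks_found} actual vs {expected_total:.2f} expected")
```

Mixing the two conventions (invalids counted in 'actual' but not in 'expected') biases the 'luck' figure downward, which is the point being made.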
legendary
Activity: 1190
Merit: 1000
I think what he meant is:

You do calculate the probability of finding a valid hash but compare it with finding a valid block (which is lower by design, because the probability of chainsplits/orphans is > 0).

Yep, that is the "you treat invalid blocks incorrectly" stance. I got it. But how am I supposed to handle it? From my point of view, if a pool gets an invalid block it is the pool operator's fault; get the heck better connected and do not run it on some overloaded VPS, this might help.

If anyone thinks that I should have ignored invalid blocks and the shares that went into them, instead of adding invalid blocks' shares to the next blocks, then I am ready to start a new pool which will declare 99% of all blocks invalid, and this pool will have perfectly good luck on the remaining 1% of accepted shares. I'll even throw some guaranteed good luck into it, lol. Anyone want to mine in such a pool?

At the end of the day there are 3 variables involved: the difficulty, the number of accepted shares and the number of solved blocks. That's it. There is no variable for "how lame is the excuse" and such.


So, if I DDoS the smaller pools that can't afford to buy as much bandwidth in perpetuity as I can afford to rent from botnets for a week, and they experience degraded performance as a result, this is either bad luck or the pool operator's fault according to you. I don't agree with that assessment. Does it degrade performance and harm miners? Absolutely. Is it predictable according to the expected performance of a bitcoin pool? Not in my opinion.

Did I ever say that I am calculating odds of future performance? How is presenting 3 months' worth of data out of 3 months' worth of data cherry picking?

Do not answer that, the questions are rhetorical. End of conversation with you, k9quaint.


You don't want me to answer the first question because it would be in the form of your quotes:
As an example of a simple check any miner could do in order to be confident in his pool
I do not see anything wrong with my suggestion to avoid pools which had extremely bad yield in the past.
And of course, this all started with your essay on "Is your pool cheating you."

You don't want me to answer the second question because it would also be in the form of your quotes:
I have specifically chosen the most "unlucky" pool to illustrate my point.
I have calculated odds for the 3-month performance of some pools on that chart.

"Chosen the most unlucky pool" from what dataset? The dataset (http://www.l0ss.net/index30.php) I posted shows a strong negative bias over the last 20 days against the theoretical 0 luck line. Does the set of data that you examined (and then presented us only the small cherry flavored segment) correspond exactly to the theoretical 0 luck?

And what do you have against BTCguild anyway? You seem inordinately hostile and intransigent on this subject, declaring repeatedly that "excuses do not change the yield". Well, nobody here is offering excuses. I am offering explanations. Bitcoin probability does not account for faulty software patch probabilities. Presenting the aftereffects of one as "some mysterious force" or "excuses" or "incompetence" makes it seem like you have a vendetta against this pool you have chosen.

You can't even calculate the odds of finding an "unlucky pool" when you go looking for one. Why should any of us believe you can find one that is cheating you?
legendary
Activity: 2618
Merit: 1007
I think what he meant is:

You do calculate the probability of finding a valid hash but compare it with finding a valid block (which is lower by design, because the probability of chainsplits/orphans is > 0).
legendary
Activity: 1190
Merit: 1000
First point: I believe you calculated the odds incorrectly regarding finding a set of bad luck in the search space of pools

That is only based on your assumption that it is impossible to have almost 100% efficiency in a pool (at the accepted share -> solved block stage). I disagree, and I have seen plenty of examples of solo mining and pools hitting almost exactly 100% efficiency on small and large datasets.

No. But I don't think I can ever convince you of the difference between finding history vs the odds of it repeating itself.

Second point: I believe you treated the invalid blocks found incorrectly in your data set

I disagree. I have been effectively calculating the yield, and I do not care if a block got invalidated, lost in space or whatever; there is no useful reward from invalid blocks but there are accepted shares, hence I have added the accepted shares to the next block, and this is the correct way to do it from my point of view. If you disagree, then you should be happy to mine for a hypothetical pool which marks every other block as invalid. Would you mine for such a pool?


If every block was being marked as invalid, would that be a sign of "bad luck" or of a software problem? If, after the pool operator applied a patch, blocks were now being found successfully, would you expect future runs to revert to every block being invalid? Or would you treat that as a discontinuous function instead of a continuous one?

Third point: I believe you represented past data that contained extreme and discontinuous events as historical norms. If you go looking for a problem in the past and show how the performance deviated from expected as a result of those problems, that is fine. You shouldn't represent that search as the result of a true Monte Carlo run to be expected in future performance.

I have presented 3 months' worth of data. I have not selected it out of 10 years of pool history. I do not care whatever events were there; excuses do not change the yield.

They don't change the yield, but understanding the events that were causal is much more logical than just assuming you understand the distribution perfectly. Congratulations, you found a pool that, if you possessed a time machine, you could warn people in the past not to use.

Fourth point: I believe you are dismissing data that disagrees with you because it is not "as significant". Your response should have been: "Oh, that is interesting, I wonder why that is." I wonder why those pools are under-performing and I believe that someone who claims to be providing a defense course for pool-shopping should wonder as well.

I actually started from that chart, and then I calculated odds for the 3-month performance of some pools on that chart. However, one pool has shown extremely high "bad luck" while others, even though below average in the last few weeks, over a longer period of time hit almost exactly the 0 luck line.

What my math shows is that on the 3-month dataset the roughly 200 Ghps pools hit almost exactly the 0% luck line, while a pool an order of magnitude larger is in the territory of a fraction-of-a-percent probability of being simply a victim of variance.
 
My response is not what you expected. This is exactly because I do not wonder what it is; I know what it is. I know that 2 weeks is not enough data to make any conclusions with 99% certainty, particularly for 200-ish Ghps pools. On a longer time frame, however, the picture is much clearer.

Well, I suppose I should just let you cherry pick your results and pass them off as statistics.

You are very right about one thing, with hindsight we should have absolutely avoided BTCGuild on 7 out of the last 60 days.

Glad we agree on this one. My point though is that if you are 99% certain that a particular pool operator has managed to be either not as competent as he should have been or not as honest, then why would anyone want to bet on the next 3-month period with the same guy when there are alternatives such as PPS pools or pools which do not exhibit huge "bad luck" streaks.

Those who want to experiment with this are more than welcome to, but it is not something I would advise.

BTW, how DDoS or downtime or whatever excuses a pool accepting shares and then not delivering the statistically expected blocks is beyond my understanding. Though, there surely could be a number of technical issues which can cause problems. In any case it is better to be with pools that are not affected by those mysterious factors.

Once again: excuses do not change the yield, and I do not see anything wrong with my suggestion to avoid pools which had extremely bad yield in the past.

You don't understand how a DDoS from a botnet could affect performance? Or how a patch that prevented submitted shares from being aggregated correctly might skew results? I think I see now why your view on this subject is so narrow.
legendary
Activity: 1190
Merit: 1000
Dear k9quaint,

Your argument about "selection bias" makes no sense. Yes, my selected example shows some EXTREMELY unlucky pool. I have specifically chosen the most "unlucky" pool to illustrate my point. However, you are completely failing to treat this in the context of my OP.


The odds of finding an outcome in a dataset are different from the odds of that outcome occurring in a single Monte Carlo run. You need to calculate what the odds were of finding a pool with luck that bad, not what the odds were that a specific pool would experience that luck in the future. One is more likely than the other, and misrepresenting the data in this fashion doesn't do anyone any favors.

My post was not about theorizing about hell knows what. It was about choosing a pool to mine with or not to mine with. BTC Guild was a perfect example of a pool not to touch with a barge pole on the "bad luck" criterion. No more, no less.

If one were to go back to June 1st and then choose a pool to mine in, you would be absolutely correct. Going forward, what sort of prediction would you make about the performance of BTCguild? I went and asked the pool operator about events that had an outsized effect on performance and were unlikely to recur. There were 3, and they were very visible in the data set (DDoS July 4th-7th, DDoS Aug 12th, a bad pushpoold patch from June 29th - July 2nd). Also, you didn't treat the invalid blocks correctly in your data set, but that is another conversation.

Are you trying to tell us here that btcguild's historical performance is OK and miners should now go and mine with them? Do you yourself mine with btcguild?

The historical performance is definitely subpar. However, after dropping those 3 periods the picture changes quite a bit. I do mine with them currently, although I didn't point any significant hashing power at them until August, after the dust had settled from the DDoS and patch issue.

As for the chart you gave me as your "other pools are unlucky too" argument: that chart has at best 20 days of data, is statistically not as significant as my 3-month dataset, and is therefore dismissed.

You are going to throw out 20 days' worth of data that demonstrates a clear trend in 4 pools (including the one you are discussing now)? When I come back in 10 days with 30 days' worth of data, will you dismiss that as well? You were not even curious as to why 4 pools were clearly below trend and the other 2 were flat?

I do not see what point you are trying to make in this thread. It looks like you are nitpicking, trying to come up with some criticism; if so, then you are failing miserably at it. Please tell us what your point is if you reply.

First point: I believe you calculated the odds incorrectly regarding finding a set of bad luck in the search space of pools

Second point: I believe you treated the invalid blocks found incorrectly in your data set

Third point: I believe you represented past data that contained extreme and discontinuous events as historical norms. If you go looking for a problem in the past and show how the performance deviated from expected as a result of those problems, that is fine. You shouldn't represent that search as the result of a true Monte Carlo run to be expected in future performance.

Fourth point: I believe you are dismissing data that disagrees with you because it is not "as significant". Your response should have been: "Oh, that is interesting, I wonder why that is." I wonder why those pools are under-performing and I believe that someone who claims to be providing a defense course for pool-shopping should wonder as well.

You are very right about one thing, with hindsight we should have absolutely avoided BTCGuild on 7 out of the last 60 days. I am trying to keep this discussion collegial and courteous; I am not trying to pick gnat scat out of pepper just to troll you. Wink


donator
Activity: 2058
Merit: 1007
Poor impulse control.
Simulations of pool block finding for different lengths of time give a good idea of how 'lucky' pools might be in the real world. In the histograms below, the 'luck' index = log((number of shares to find x blocks by pool)/(x * difficulty)).

[histograms omitted]

Slower pools will have much larger variability than faster ones, but even after 1000 blocks found some pools will look 'unluckier' than others.
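
A sketch of that kind of simulation, assuming each block takes a geometrically distributed number of shares with success probability 1/difficulty (base-10 log used for the index; the difficulty and pool count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
difficulty = 1_500_000   # arbitrary
pools = 10_000           # simulated pools per histogram

for blocks in (10, 100, 1000):
    # shares needed per block ~ Geometric(1/difficulty)
    shares = rng.geometric(1.0 / difficulty, size=(pools, blocks)).sum(axis=1)
    luck = np.log10(shares / (blocks * difficulty))
    print(f"{blocks:>4} blocks: luck spread (std) = {luck.std():.3f}, "
          f"unluckiest pool = {luck.max():+.3f}")
```

The spread shrinks roughly with the square root of the number of blocks found, which is why slower pools show much larger variability and why some 'unlucky'-looking pools are expected even after 1000 blocks.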
hero member
Activity: 518
Merit: 500
"quote" we saw 6 dice land and the odds of any one of those dice showing a 1 is 1 in 1. "quote"


WHAT?  After the event, looking at the dice, that's allowable, but rolling six dice and getting at least one "1" is [1 - (1 - 1/6)^6] ~= 0.66

Yep, you are right. One roll of 6 dice will show a 1 66.51% of the time. /sadface
My point was the probabilities were different. I assure you, my point was not to belabor the fact that I am a moron.  Grin

lol - it gave me a laugh earlier today.  I don't think you are a moron, but the expression just jumped off the page at me.
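
For reference, the corrected figure is easy to check numerically; nothing is assumed beyond fair dice:

```python
# P(at least one "1" when rolling six fair dice)
p = 1 - (1 - 1/6) ** 6
print(f"{p:.4f}")  # 0.6651
```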
legendary
Activity: 1190
Merit: 1000
"quote" we saw 6 dice land and the odds of any one of those dice showing a 1 is 1 in 1. "quote"


WHAT?  After the event, looking at the dice, that's allowable, but rolling six dice and getting at least one "1" is [1 - (1 - 1/6)^6] ~= 0.66

Yep, you are right. One roll of 6 dice will show a 1 66.51% of the time. /sadface
My point was the probabilities were different. I assure you, my point was not to belabor the fact that I am a moron.  Grin
hero member
Activity: 518
Merit: 500
"quote" we saw 6 dice land and the odds of any one of those dice showing a 1 is 1 in 1. "quote"


WHAT?  After the event, looking at the dice, that's allowable, but rolling six dice and getting at least one "1" is [1 - (1 - 1/6)^6] ~= 0.66
legendary
Activity: 1190
Merit: 1000
Vladimir: After looking at your spreadsheet, I found a flaw in your analysis. You need to calculate the Poisson for all blocks found by all pools (and expected), not just the ones found by btcguild. You then need to determine the likelihood that, out of all the difficulty periods in all the pools, one will exhibit a run of "bad luck" such as the ones you found in BTCguild. You are failing to account for the selection bias effect.

For instance:

I separate people into many groups and have them all flip coins. I then tally each group's expected number of heads and tails. I see that one group has more heads than tails, and I calculate the Poisson distribution of that group as if those were the only coin flips thrown. Of course, this will show that group's efforts as unlikely to have occurred, because I have masked out the efforts of the other groups.

I disagree. This would make some sense if I was claiming that all bitcoin pools have "bad luck" and brought up calculations with BTC Guild data as proof. This is not the case though. Who cares how many groups you separate people into. To illustrate this, imagine one particular "person/pool" with a fair coin which, on the first and only try, tossed the coin 4865 times and got at most 2275 heads. I have calculated that the odds of that happening are 0.0675%. Also, if there were in existence 1400 more 2 Thps pools in the bitcoin network, then yes, we would statistically expect one of them to be this unlucky, but for that the bitcoin network would have to be not 8 Thps strong but almost 3 Phps strong. That would be almost 3 orders of magnitude of difference.
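
The style of tail calculation referenced here is the binomial CDF: the probability of getting at most k heads in n fair tosses. A minimal sketch (the quoted 0.0675% was derived from the original spreadsheet's share data, so this bare coin model is illustrative rather than a reproduction of that figure):

```python
from fractions import Fraction
from math import comb

def binom_tail(n: int, k: int) -> float:
    """Exact P(heads <= k) for n fair coin tosses."""
    return float(Fraction(sum(comb(n, i) for i in range(k + 1)), 2 ** n))

# Numbers from the coin analogy above.
print(f"P(at most 2275 heads in 4865 tosses) = {binom_tail(4865, 2275):.2e}")
```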

What if they flipped a coin made by a "friend" who claims that he made the coin right? It is the first time he has ever made a coin, but he did his best to make sure it was flat and looked good. We should first test the performance of this coin against the universe of coins made by friends, not against the theoretical 50/50. If we need to choose a coin to flip, we should compare them to each other and then choose one.

Also, the point is not that all pools are having bad luck. The point is that the universe of pools was examined and a single unlucky one was chosen. A better analog than coins might be dice. Imagine throwing six dice. One die comes up with a 1, and we calculate the odds of that die landing on 1 correctly as 1 in 6. But that is not what we saw; we saw 6 dice land, and the odds of any one of those dice showing a 1 is 1 in 1. Now let us suppose that these dice were not bought from the store, but were each made by a different individual. Now we are no longer sure what the correct distribution of die faces is for any of the dice.

Finally, the pools use different infrastructure. There is work to be done by the server when a share is submitted. What if pool A waits until all that work is complete before acknowledging to the client that it has received a submitted share? If clients cannot begin work on a second share until the first is acknowledged, this serializes share submission with server-side share collation, at the cost of latency. If pool B allows clients to submit their share and then immediately gives them a new one to work on before the collation work is done on the server, the share submission and processing are overlapped. Pool A's method has the effect of a natural governor on the speed at which clients can use the server. Pool B's method minimizes latency but relies on the server never getting bogged down. Under maximum sustained load, these two systems will measure the same, but under failure conditions they will measure differently.
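
A toy model of the two designs just described, with hypothetical per-share timings (neither pool's real internals are known from the thread):

```python
def pool_a(n_shares: int, rtt: float, collate: float):
    # Pool A: the client waits for submit + server-side collation before
    # starting its next share, so the two costs add up per share.
    client_time = n_shares * (rtt + collate)
    return client_time, 0.0  # (seconds seen by client, leftover backlog)

def pool_b(n_shares: int, rtt: float, collate: float):
    # Pool B: the server acks immediately and collates in the background.
    # Clients run at network speed; collation slower than arrivals piles up.
    client_time = n_shares * rtt
    backlog = max(0.0, n_shares * collate - client_time)
    return client_time, backlog

for collate in (0.01, 0.50):  # healthy server vs bogged-down server
    for name, pool in (("A", pool_a), ("B", pool_b)):
        t, backlog = pool(1000, rtt=0.05, collate=collate)
        print(f"collate {collate:.2f}s: pool {name} "
              f"client time {t:.0f}s, backlog {backlog:.0f}s")
```

Under light load the two look nearly identical; once collation becomes the bottleneck, pool A throttles its clients while pool B keeps acking and quietly accumulates unprocessed work, which is the failure-mode asymmetry described above.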

That is it. I do not allege someone is cheating here. It is perfectly possible that it just happened. Just like if we had someone toss a coin 4865 times and repeat such a feat 1481 times, then we would expect it (at most 2275 heads) to happen only once.

Let me set this str8: I do not allege that the BTC Guild pool is cheating. I would have no factual basis for such a conclusion at all. My conclusion is that during the observed period of time (Summer 2011), for this particular set of data which is publicly provided by BTC Guild, it seems (if the math is correct) that the pool is either extremely (>99.9%) "unlucky", or cheating, or having some technical issues, or being attacked somehow, or otherwise somehow inefficient, or some combination of those.


My other conclusion is that I personally (or other people with a modicum of common sense) shall not touch such an unlucky or inefficient, for whatever reason, pool, and shall look elsewhere. However, those who are unable to understand my reasoning or to do their own DD are more than welcome to continue mining with proportional and extremely unlucky pools.

BTCguild did experience a DoS situation in June; I do not have the exact dates. But the scuttlebutt was that it was from a botnet, that it occurred over the course of a couple of days, and that peace finally "broke out". I am not sure just what effect this might have on the "efficiency" of a pool, but I suppose it is possible it could skew the data.

Edit:

Also, you are comparing a real-world empirical measurement to a perfect-world theoretical average. Since there is no magic fairy dust that would result in a computer solving blocks at an average rate consistently faster than this theoretical average, we can ignore positive biases. If there were an operational friction that was not accounted for, it would exert a negative bias on all results. The way to account for it is to examine the Poisson of all blocks found vs expected, btcguild blocks found vs expected, and all blocks without btcguild vs expected. That may produce a baseline which could illuminate any unseen frictions. It might be interesting to do that exercise for each pool; if deepbit had less friction, its size may mask higher rates of friction in other pools.

True. However, somehow many other pools do not exhibit such inefficiencies; solo mining too appears to be fairly efficient.


Actually, they do. http://www.l0ss.net/index30.php
As you can see, there is very little if any "good" luck to be found in any of these 6 pools over the last 30 days.
Note btccoins.lc, btcmine, and mtred are all well below the expected luck line. What if there is friction that derives from a common component in the makeup of these 3 pools? The luck for these 3 pools is demonstrably worse than btcguild's; perhaps you should calculate the "odds" of their bad luck occurring as well. Then we can calculate the conditional probabilities of those 3 pools + btcguild all having this sort of bad luck at the same time.  Shocked

Also, one side question. How did you account for the invalid blocks?
legendary
Activity: 1190
Merit: 1000
Vladimir: After looking at your spreadsheet, I found a flaw in your analysis. You need to calculate the Poisson for all blocks found by all pools (and expected), not just the ones found by btcguild. You then need to determine the likelihood that, out of all the difficulty periods in all the pools, one will exhibit a run of "bad luck" such as the ones you found in BTCguild. You are failing to account for the selection bias effect.

For instance:

I separate people into many groups and have them all flip coins. I then tally each group's expected number of heads and tails. I see that one group has more heads than tails, and I calculate the Poisson distribution of that group as if those were the only coin flips thrown. Of course, this will show that group's efforts as unlikely to have occurred, because I have masked out the efforts of the other groups.
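
The masking effect can be quantified: if each of N pools independently has probability p of showing a given run of bad luck, the chance that at least one of them shows it is 1 - (1 - p)^N. A minimal sketch with an illustrative tail probability:

```python
p = 6.75e-4  # illustrative tail probability for one pool's "bad luck"
for n_pools in (1, 10, 50):
    p_any = 1 - (1 - p) ** n_pools
    print(f"{n_pools:>2} pools: P(at least one this unlucky) = {p_any:.4f}")
```

With 50 pools, the chance of finding at least one such streak somewhere is about fifty times larger than for a single pre-selected pool, which is the selection-bias point being made.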

Edit:

Also, you are comparing a real-world empirical measurement to a perfect-world theoretical average. Since there is no magic fairy dust that would result in a computer solving blocks at an average rate consistently faster than this theoretical average, we can ignore positive biases. If there were an operational friction that was not accounted for, it would exert a negative bias on all results. The way to account for it is to examine the Poisson of all blocks found vs expected, btcguild blocks found vs expected, and all blocks without btcguild vs expected. That may produce a baseline which could illuminate any unseen frictions. It might be interesting to do that exercise for each pool; if deepbit had less friction, its size may mask higher rates of friction in other pools.
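
A sketch of the baseline exercise proposed in that edit, with invented block counts (scipy's Poisson CDF gives the probability of finding at most the observed number of blocks when a given number were statistically due):

```python
from scipy.stats import poisson

# (found, expected) pairs invented purely for illustration.
datasets = {
    "all pools":           (4700, 4800.0),
    "one pool":            (560, 640.0),
    "all minus that pool": (4140, 4160.0),
}
for name, (found, expected) in datasets.items():
    p = poisson.cdf(found, expected)
    print(f"{name:>20}: P(<= {found} | {expected:.0f} due) = {p:.4f}")
```

If the "all pools" line also sits well below expectation, the friction is network-wide rather than one pool's fault; that is the baseline the paragraph above is asking for.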
hero member
Activity: 518
Merit: 500
Spending some of my weekend looking at this, I went across to https://en.bitcoin.it/wiki/Comparison_of_mining_pools. The range of different payment systems is what introduces the advantages, and that was probably the argument that should have been put forward earlier.

If the payment methods were logical and consistent, then the hop vs non-hop distinction would largely disappear. However, having systems where, for example, "Each submitted share is worth more in the function of time t since start of current round." provides opportunities to maximise returns. That's more an issue with pool rules and the motivations of operators to attract people and earn fees.

Over time, miners would probably drift to an equilibrium state where consistent-payout pools and hop-friendly pools separate, and then converge, as the hopped pools come to confer reduced advantage because they rely on non-hoppers to extract it. Simple - if you want to give away work, belong to a pool with exploits. Again, that was the point Vladimir was trying to make.
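
The exploit being alluded to is easiest to see in a plain proportional pool, where a share's expected payout depends on how far into the round it is submitted. A Monte Carlo sketch (difficulty, reward and positions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
D, B = 100_000, 50.0                                 # difficulty, block reward
round_len = rng.geometric(1.0 / D, size=1_000_000)   # total shares per round

for s in (0, D // 2, 2 * D):  # share submitted after s earlier shares
    live = round_len > s      # only rounds that actually reach position s
    value = (B / round_len[live]).mean()  # proportional: reward / round length
    print(f"share at position {s:>6}: expected value {value:.6f} "
          f"(flat PPS rate would be {B / D:.6f})")
```

Early shares are worth more than B/D and late shares less, so hoppers mine young rounds and leave, extracting their advantage from the miners who stay; that is the equilibrium pressure described above.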
donator
Activity: 2058
Merit: 1007
Poor impulse control.
it's not too desirable to have HUGE miners in your pool as you have to trust them too.

I'll thank you to keep your size-ist remarks to yourself, Sukrim. Vlad looks quite a healthy weight in his photo.
legendary
Activity: 2618
Merit: 1007
Well, he could run a withholding attack (in any pool/payout scheme), for whatever reason - so it's not too desirable to have HUGE miners in your pool as you have to trust them too.
sr. member
Activity: 252
Merit: 251
How about "don't let one miner take over 80% of the hashpower of one pool", Vlad?

Forget that one?  Ooops Smiley

Why would I care if Vlad supplies, say, 100 Ghash to a pool I mine in?

If it's not PPS-based, then all it means is that every member gets paid faster and more frequently.
It also lowers the huge variance that occurs at 1.8 million difficulty.

Really, this, pool hopping, etc. are not advanced math. You can figure these out with common sense.

Some people here seem to oppose everything, for the sake of being against something.
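
For scale, the variance point is easy to quantify: at difficulty d a block takes d * 2^32 hashes on average, and a lone miner's waiting time is roughly exponentially distributed, so its standard deviation equals its mean. A sketch (only the 1.8 million difficulty comes from the post; the 400 Mhps rig is an assumed example):

```python
d = 1_800_000          # difficulty, from the post
h = 400e6              # assumed miner hashrate, 400 Mhps
mean_days = d * 2**32 / h / 86_400
print(f"mean time to one block: {mean_days:.0f} days (std dev about the same)")
```

At roughly 224 days expected per block, pooling is the only practical way for such a miner to smooth income, which is why added hashpower that speeds up a non-PPS pool's rounds helps everyone in it.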