
Topic: [ANN] profit switching auto-exchanging pool - www.middlecoin.com - page 474. (Read 829908 times)

sr. member
Activity: 658
Merit: 250
Assume a constant difficulty.

Assume the pool produces a new block every 30 seconds on average, for a particular coin and at the pool's particular total hashrate.

Miner A has 0.5 megahash hashrate

Miner B has 5 megahash hashrate


Assume these hashrates translate to finding a share every 30 seconds, or 3 seconds respectively (again, on average).



Miner A will only find a share in time approx 50% of the time. So half the time he has 1 share, half the time he has 0 shares. He will have worked an average of 15 seconds for nothing when the block changes.

Miner B will find an average of 10 shares. He, as well, would have been working towards another share when the block changes. However, on average, he will have only been doing so for 1.5 seconds.

So the difference for Miner A is the difference between all or nothing.

The difference for Miner B is between, say, 9 and 10 shares... or maybe 10 and 11 shares...

Do you see it yet?


The bolded part ("half the time he has 1 share, half the time he has 0 shares") is not correct. Sometimes the miner will have 0 shares, sometimes 1, sometimes more than 1. On average, 1.
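A quick numeric sketch of that correction, assuming shares arrive as a Poisson process averaging 1 share per block (Miner A above):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of finding exactly k shares in a block, given an average of lam."""
    return exp(-lam) * lam ** k / factorial(k)

# Miner A: 1 share per 30-second block on average
p0 = poisson_pmf(0, 1.0)       # ~36.8% of blocks: no shares
p1 = poisson_pmf(1, 1.0)       # ~36.8% of blocks: exactly one share
p2_plus = 1 - p0 - p1          # ~26.4% of blocks: two or more shares

# The average still works out to exactly 1 share per block
mean_shares = sum(k * poisson_pmf(k, 1.0) for k in range(50))
```

So "half 0, half 1" is not what the math gives: the zero-share blocks are balanced by the occasional multi-share blocks.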
newbie
Activity: 28
Merit: 0
Assume a constant difficulty.

Assume the pool produces a new block every 30 seconds on average, for a particular coin and at the pool's particular total hashrate.

Miner A has 0.5 megahash hashrate

Miner B has 5 megahash hashrate


Assume these hashrates translate to finding a share every 30 seconds, or 3 seconds respectively (again, on average).



Miner A will only find a share in time approx 50% of the time. So half the time he has 1 share, half the time he has 0 shares. He will have worked an average of 15 seconds for nothing when the block changes.

Miner B will find an average of 10 shares. He, as well, would have been working towards another share when the block changes. However, on average, he will have only been doing so for 1.5 seconds.

So the difference for Miner A is the difference between all or nothing.

The difference for Miner B is between, say, 9 and 10 shares... or maybe 10 and 11 shares...

Do you see it yet?
newbie
Activity: 28
Merit: 0
However, that share is worth 10x as much, so it evens out.

In the case you were talking about, where you only have a 10% chance of getting a share, you would not get rewarded more. The variable here was hashrate, not difficulty. We are talking about different miners with different hashrates across a constant difficulty. Reward for a share is the same.

You don't get rewarded more because your chance of finding a share in time is less.
member
Activity: 94
Merit: 10
There is no actual partial share, but from a probability standpoint, there is the same effect. If it takes you 30 seconds on average to find a share, and I give you a 30-second time limit to find one, 50% of the time you will, 50% of the time you won't. If I give you 50 seconds to find one, more often than not you will, but there will still be plenty of occasions where it takes longer than 50 seconds and you won't make it.

It's probability. Probability isn't very useful with a sample size of ONE, which is what you keep trying to use as an example. The usefulness of probability grows with the size of the sample you apply it to.

If the average time to find a share was significantly longer than the block time, OF COURSE you would get nearly 0 of your hashrate. Think about that. If it takes you one minute to find a share on average and the block changes every 10 seconds, you have a race condition you are only going to win about 1 in 12 times.

I think you either don't quite understand probabilities or how pools work. Looking for shares is completely independent of the past. Your chance to find a share in that time may be relatively low, but since the difficulty is so large, your reward is proportionally higher.

The probability of a certain number of independent events in a certain time is given by the Poisson distribution. Let's assume you submit on average 1 share per new block. Here is the probability of actually finding X shares in that time. Now assume we switch to 10x the previous diff. Of course, it will be unlikely that you submit a share in that time, but it is in no way impossible. Here is the distribution of shares found for that case. So you have about a 10% chance of finding a share (slightly less, because you also have a tiny chance of finding more than one share). However, that share is worth 10x as much, so it evens out.
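That invariance is easy to check numerically; a sketch assuming Poisson share arrivals, with share value in arbitrary units:

```python
from math import exp

def p_at_least_one(lam):
    """Chance of finding at least one share in a block, given mean share count lam."""
    return 1.0 - exp(-lam)

# Before: 1 share per block on average, worth 1 unit each.
# After a 10x diff increase: 0.1 shares per block on average, worth 10 units each.
expected_before = 1.0 * 1
expected_after = 0.1 * 10      # same expected payout per block

chance_after = p_at_least_one(0.1)   # ~9.5%: "about 10%, slightly less"
```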
newbie
Activity: 28
Merit: 0
What I am talking about is when you are working on a share and you DO hear that the block has been solved. That isn't recorded in rejected shares. It just goes away. Yes, on an individual level you might solve the share at exactly the right time and waste no hashrate. You have to look at the statistical probability, over time, to see that you are wasting it.
You don't 'waste' hashrate if this happens, assuming zero latency. As has been said, you don't 'work towards' finding a share. Your chance of finding a share is independent of the past, it is constant at all times. So there is nothing to 'go away' (this is exactly what is meant by whoever brought up the gambler's fallacy). By your logic, if the average time to find a share were significantly longer than the block time, you would have nearly 0% of your actual hashrate, which isn't the case.

Therefore, I don't see how it would skew the stats in favour of faster miners.




There is no actual partial share, but from a probability standpoint, there is the same effect. If it takes you 30 seconds on average to find a share, and I give you a 30-second time limit to find one, 50% of the time you will, 50% of the time you won't. If I give you 50 seconds to find one, more often than not you will, but there will still be plenty of occasions where it takes longer than 50 seconds and you won't make it.

It's probability. Probability isn't very useful with a sample size of ONE, which is what you keep trying to use as an example. The usefulness of probability grows with the size of the sample you apply it to.

If the average time to find a share was significantly longer than the block time, OF COURSE you would get nearly 0 of your hashrate. Think about that. If it takes you one minute to find a share on average and the block changes every 10 seconds, you have a race condition you are only going to win about 1 in 12 times.
member
Activity: 94
Merit: 10
What I am talking about is when you are working on a share and you DO hear that the block has been solved. That isn't recorded in rejected shares. It just goes away. Yes, on an individual level you might solve the share at exactly the right time and waste no hashrate. You have to look at the statistical probability, over time, to see that you are wasting it.
You don't 'waste' hashrate if this happens, assuming zero latency. As has been said, you don't 'work towards' finding a share. Your chance of finding a share is independent of the past, it is constant at all times. So there is nothing to 'go away' (this is exactly what is meant by whoever brought up the gambler's fallacy). By your logic, if the average time to find a share were significantly longer than the block time, you would have nearly 0% of your actual hashrate, which isn't the case.

Therefore, I don't see how it would skew the stats in favour of faster miners.

sr. member
Activity: 312
Merit: 251
Nice summary, Liquidfire. +1 for that
newbie
Activity: 28
Merit: 0
I hope what you're trying to say is that p_share(t) = const. and p_block(t) = const., this is correct. But to represent that, your scale would have to go all the way to infinity.


I understand it goes all the way to infinity on the right side of the scale. But for the sake of the visualization, I decided not to represent that. What the visualization really represents is what will happen the vast majority of the time, I suppose. The fact that it can go to infinity is a negligible effect. In part, because both the block time and share time could go to infinity. If you imagine a bell curve, you are talking about the trailing edge.


This is your fallacy. Changing difficulty would do nothing about that (in your model). The probability to find a share at each point in time would go up, by the same amount everywhere. By your logic, rejected share percentage should be 100%, since most of the scale is to the right of the found block (found block to infinity), which is infinitely more.

But there is another flaw. In reality, as soon as there's a new block, you will start looking for solutions to the new block (with some delay, of course). So there is zero chance of you solving a share for a block after you have received the broadcast for it.

What actually happens when your share is rejected is that your share was found in the few (milli)seconds before you actually received the new block broadcast. It is only due to network lag.

To use a similar visualisation:

x = pool/you searching for block X, s = you find a share
y = pool/you searching for the next block
X = block X found / you receive broadcast

Code:
Pool server  [xxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyy]
Broadcast    [-------------------X-------------------]
                              <--+-->  Total network latency (round trip)
You          [xxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyy]
Broadcast    [----------------------X----------------]
Share        [-s---s---------------s---------------s-]

Any of your shares that is found within the arrows will be dropped (since your share will arrive at the server after it knew about the block)
So essentially, the percentage of rejected shares should be about: t_latency / t_block.

So actually, lowering share difficulty might have the exact opposite effect to the one you want. If it substantially increases load on the server, latency might go up and therefore rejected shares will also go up.

So the effect I'm talking about and rejected shares are not exactly the same thing. A rejected share happens when you solve a share and submit it after the block is solved, but before you hear about the block being solved. This is a problem, but a totally different one, and one that is measured in latency.

What I am talking about is when you are working on a share and you DO hear that the block has been solved. That isn't recorded in rejected shares. It just goes away. Yes, on an individual level you might solve the share at exactly the right time and waste no hashrate. You have to look at the statistical probability, over time, to see that you are wasting it.

It boils down to this... there is an inefficiency in terms of the work you do and how you show your work. If you are always partially through the "statistical" time it takes you to solve a share on average when a block is solved, and you have to start over on a new block, then over time you are wasting that hashpower.

If either your hashrate is better or the diff is lowered, the amount that you will be wasting will be smaller on each block change, because you are more frequently notifying the server of the work you did.

Now, this effect does not impact how the pool does as a whole in terms of finding blocks. Not one bit. It just affects the distribution of the reward, skewing it toward higher-hashrate miners.

There is obviously a drawback to lowering the diff, or we would have already done it. That drawback is increased server load. If the server can't handle that many requests and the network becomes a bottleneck, then it's possible we are worse off for it... But as long as the additional bandwidth is not maxing out any links, that wouldn't happen.

It's understandable that H2O doesn't want to lower the diff, because he gains nothing from it and incurs extra bandwidth. But for individual miners toward the bottom of the hashrate scale, there is an unfair skew toward the top.
member
Activity: 94
Merit: 10

So, if in this process we determine the average time to solve a block as a pool is 1 minute... And we determine for a given miner the average time to solve a share is 45 seconds... What are we really saying?

We are saying that the pool solves a block in 30 seconds about as often as it solves one in 1 minute 30 seconds.

We are also saying that we solve a share in 30 seconds about as often as we do so in 60 seconds.

Block  [0=====15=====30=====45=====60=====75=====90=====105=====120]
Share [0=====15=====30=====45=====60=====75=====90]


So here's a way to think of it - the block solve will fall somewhere along that range, and the share will solve somewhere along its range, at random.

I hope what you're trying to say is that p_share(t) = const. and p_block(t) = const., this is correct. But to represent that, your scale would have to go all the way to infinity.

On any given go-around, I could easily solve a share first; in fact, in this case that will happen more often than not. But MANY times, the block will be solved first, and I will get nothing. This is the loss we are talking about. The work I did means nothing; it is not counted by the pool and credited as such. This loss is present in every cryptocurrency, but it doesn't really become a serious issue until you have these coins that are so easy that we find blocks in seconds and minutes.

But, we can combat this with a smaller diff. Imagine the same scenario, but I lower my diff such that I solve a share on an average of 30 seconds.

Block  [0=====15=====30=====45=====60=====75=====90=====105=====120]
Share [0=====15=====[30]=====45=====60]

Now, you can imagine again - numbers falling randomly on these scales representing the time it takes to solve a block vs a share.
This time, you can visually see its much more likely that the share time will be less than the block solve time.

This is your fallacy. Changing difficulty would do nothing about that (in your model). The probability to find a share at each point in time would go up, by the same amount everywhere. By your logic, rejected share percentage should be 100%, since most of the scale is to the right of the found block (found block to infinity), which is infinitely more.

But there is another flaw. In reality, as soon as there's a new block, you will start looking for solutions to the new block (with some delay, of course). So there is zero chance of you solving a share for a block after you have received the broadcast for it.

What actually happens when your share is rejected is that your share was found in the few (milli)seconds before you actually received the new block broadcast. It is only due to network lag.

To use a similar visualisation:

x = pool/you searching for block X, s = you find a share
y = pool/you searching for the next block
X = block X found / you receive broadcast

Code:
Pool server  [xxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyy]
Broadcast    [-------------------X-------------------]
                              <--+-->  Total network latency (round trip)
You          [xxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyy]
Broadcast    [----------------------X----------------]
Share        [-s---s---------------s---------------s-]

Any of your shares that is found within the arrows will be dropped (since your share will arrive at the server after it knew about the block)
So essentially, the percentage of rejected shares should be about: t_latency / t_block.

So actually, lowering share difficulty might have the exact opposite effect to the one you want. If it substantially increases load on the server, latency might go up and therefore rejected shares will also go up.
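To put rough numbers on t_latency / t_block (the latency and block-time values below are made-up illustrations, not measurements of this pool):

```python
def expected_reject_fraction(latency_s, block_time_s):
    """Approximate rejected-share fraction: shares found during the latency
    window after a block is really solved but before we hear about it."""
    return latency_s / block_time_s

fast_link = expected_reject_fraction(0.2, 30.0)   # 200 ms round trip -> ~0.7%
slow_link = expected_reject_fraction(1.0, 30.0)   # 1 s round trip    -> ~3.3%
```

Note this estimate depends only on latency and block time, not on share difficulty, which is the point being made above.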
legendary
Activity: 1064
Merit: 1000
Thank you h2odysee for providing this service.

I'd like to ask whether there is a pool API for statistics? And please don't let the difficulty-arguers get to you.

Not really. The closest thing right now is this:

http://middlecoin.com/json

It has the same info as the web page.

But that format is going to change soon, so I wouldn't bother.

About the difficulty, I know what's right, and what's not. It's fine if people discuss it, because that will help bring them closer to the truth.

And having a few days of a lower diff, say 256, would surely prove to everyone what is right and what is wrong...
member
Activity: 81
Merit: 10
Today's payout is not matching my wallet deposit.

Did I miss something here Huh



What's your address? Does the transaction look correct in the block explorer?

Things look OK on your end. This looks like some issue with Cryptsy's wallet.

Strange, somehow I got yesterday's deposit today and today's deposit yesterday?
newbie
Activity: 16
Merit: 0
Nice concept; sounds very complicated to pull off properly as described. I did a trial 48 hours of mining with 8.5 MH/s, which landed me with a grand total of 0.24 BTC... so I'm back to mining LTC again. I think the share diff may need to be selectable, as my rig with 6950s in it was barely submitting anything at 512.

That sounds odd. I've been on the pool with 8 MH/s since the 23rd, and in a 48-hour period I'll average anywhere from 0.3 to 0.5 BTC.

http://middlecoin.com/reports/1Mk1jz3Ck1PoN3yTh99Cv9MH6i41Ye42eA.html

I don't know what was wrong with your miners but something seems off.

Mining straight LTC for me on Lite Guarduan will net me 8.5 to 9 LTC a day at the current diff. At the current rate that is 0.20ish BTC a day. I'm averaging at least 0.20+ BTC a day on this pool, except for the couple of bad days that the pool has had. The following days always make up for it, though, and this is all at 512 diff! I honestly don't know what the complaining is about. It's profit, not rocket science.
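A back-of-the-envelope check of that arithmetic; the exchange rate here is a hypothetical figure implied by the post, not a quoted market price:

```python
ltc_per_day = 8.75        # midpoint of "8.5 to 9 ltc a day"
btc_per_ltc = 0.023       # hypothetical LTC/BTC rate consistent with the post
btc_per_day = ltc_per_day * btc_per_ltc   # ~0.20 BTC a day, as stated
```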

full member
Activity: 238
Merit: 119
Today's payout is not matching my wallet deposit.

Did I miss something here?



What's your address? Does the transaction look correct in the block explorer?
member
Activity: 81
Merit: 10
Today's payout is not matching my wallet deposit.

Did I miss something here?

member
Activity: 60
Merit: 10
Nice concept; sounds very complicated to pull off properly as described. I did a trial 48 hours of mining with 8.5 MH/s, which landed me with a grand total of 0.24 BTC... so I'm back to mining LTC again. I think the share diff may need to be selectable, as my rig with 6950s in it was barely submitting anything at 512.
newbie
Activity: 28
Merit: 0

While your description of Gambler's Fallacy is accurate and apt to the statistics described for share calculation, your choice of slot machines as the example is probably the worst possible choice.  Gambling laws around the world regulate the RTP (Theoretical Payout Percentage) for slots and that is controlled by firmware/software. Las Vegas has a minimum payout of 75% and Atlantic City 83%. The slots are individually tweaked upon setup and are audited.  If they pay too much to the House over thousands of pulls, then the casino is fined.   Also, there is a record kept of each pull, winnings paid, pulls per session (if a casino card is used), etc. These all factor in to the calculations/controls. So yes, slot machines have a memory if you pulled 500 times.

So... There is a "progress" of sorts, because over the course of hours, if there isn't at least one payout at that particular machine up to the threshold established by law, the firmware "forces" a win of a certain amount to meet the ratio of winnings needed. Not pure chance then.

What does this have to do with Middlecoin, Hashrates/shares and difficulty? Nada

Just to divert from this conversation, you are right about slots... that's why, while in Vegas, I play video poker. Nevada law requires anything that represents a physical gaming device (a card, dice, etc.) to behave like the real thing. Therefore there's no memory, and the casino just has to rely on the statistical house edge to buffer them. Don't worry - they still do JUST FINE on video poker.
newbie
Activity: 28
Merit: 0
I wanted to bring up the gambler's fallacy several pages ago, but I didn't have an account and so had to wait a while.

Your summary of gamblers fallacy is spot on. However it doesn't apply in this case.

It is relatively easy to calculate a mathematically perfect probability, in the form of "AVERAGE time to solve share", based on the difficulty of the coin and the difficulty of the pool. As well is it possible to calculate average time for the pool to solve a block given the pool's hashrate.

Since we are dealing with averages, this has nothing to do with gamblers fallacy. Instead we have a statistically provable probability.

So, if in this process we determine the average time to solve a block as a pool is 1 minute... And we determine for a given miner the average time to solve a share is 45 seconds... What are we really saying?

We are saying that the pool solves a block in 30 seconds about as often as it solves one in 1 minute 30 seconds.

We are also saying that we solve a share in 30 seconds about as often as we do so in 60 seconds.

Block  [0=====15=====30=====45=====60=====75=====90=====105=====120]
Share [0=====15=====30=====45=====60=====75=====90]


So here's a way to think of it - the block solve will fall somewhere along that range, and the share will solve somewhere along its range, at random.


On any given go-around, I could easily solve a share first; in fact, in this case that will happen more often than not. But MANY times, the block will be solved first, and I will get nothing. This is the loss we are talking about. The work I did means nothing; it is not counted by the pool and credited as such. This loss is present in every cryptocurrency, but it doesn't really become a serious issue until you have these coins that are so easy that we find blocks in seconds and minutes.

But, we can combat this with a smaller diff. Imagine the same scenario, but I lower my diff such that I solve a share on an average of 30 seconds.

Block  [0=====15=====30=====45=====60=====75=====90=====105=====120]
Share [0=====15=====[30]=====45=====60]

Now, you can imagine again - numbers falling randomly on these scales representing the time it takes to solve a block vs a share.
This time, you can visually see its much more likely that the share time will be less than the block solve time.


So why is this a big deal? Because it's biased towards fast miners. With everyone at 512 diff, they achieve something closer to the second depiction than the first, by virtue of their faster hash rate.

So just to illustrate, say we had two miners only, one had 90% of the hash rate and the other had 10%. You would expect those to also be the shares of the profit, but that would not be the case. It might end up looking something more like 92% to 8% because of this effect.

Lowering the diff evens the playing field.

Edit: Another example of the advantage of fast miners.


Say we are working on a block of WhateverCoin. We have been solving WhateverCoin blocks at 60 seconds on average.

John, a 1 m/h miner, has been submitting shares on average of every 50 seconds.

Tim, a 10 m/h miner, has been submitting shares on average of every 5 seconds.


Let's focus on one particular block:


We solve a block of WhateverCoin in 50 seconds, a little ahead of average. Again, it's random.

This time John doesn't solve until 55 seconds! Uh oh... John gets nothing... Actually, John will stop at 50 seconds and start on the next block... but he wasted 50 seconds of hashing power.

Tim got 9 shares in the 50 seconds. He was 5 seconds into the 10th share when the block was found. Tim wasted time too - he wasted 5 seconds. Oh well.


If Tim is 10 times faster than John, you'd expect him to earn 10 times as much. But John got nothing this time!


I can hear you now - "over time, that loss will be made up for statistically"

Nope. You see, Tim is much, much more likely to waste significantly less of his hashing power, on average. If they both lost 30 seconds of their respective hashing power, fine, the loss would be even.

My earlier research led me to the following conclusion:

X = Average time for miner to find a share
Y = Average block find time
X/(2*Y) = Percentage of LOST profit/hashpower

So in this example:

John:

X = 50
Y = 60

50/(2*60) ≈ 41.7% Wasted Hash Power! (this is an extreme example, but possible with some coins)

Tim:

X = 5
Y = 60
5/(2*60) ≈ 4.2% Wasted Hash Power.

If I did this again with higher and higher BLOCK solve times, the effect would dissipate until the difference between the two was negligible.

When the share solve rate starts approaching the block solve rate, it's going to get bad without lowering the diff.
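For what it's worth, the formula above reads X/(2Y), half the average share time divided by the average block time; a sketch reproducing the two figures (whether the model itself is right is what the rest of the thread disputes):

```python
def claimed_wasted_fraction(avg_share_time, avg_block_time):
    """The poster's claimed lost-hashpower fraction: X / (2 * Y)."""
    return avg_share_time / (2.0 * avg_block_time)

john = claimed_wasted_fraction(50, 60)   # ~0.417, the "41.7%" figure
tim = claimed_wasted_fraction(5, 60)     # ~0.042, the "4.2%" figure
```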
hero member
Activity: 798
Merit: 1000
I wanted to bring up the gambler's fallacy several pages ago, but I didn't have an account and so had to wait a while.

It seems to me there is a fundamental misunderstanding of how this whole hashing thing works. It is based on the erroneous belief that during each block you are making "progress" towards a share; progress which is discarded if a new block shows up before you actually find a share. This would be as though you were trying to fill up buckets, but someone kept taking the buckets away and replacing them with empty ones before you could ever fill one entirely. This is wrong. There is no progress.

The reason it's called the gambler's fallacy is this: Imagine a man at a casino. He's been at the same slot machine for hours without getting back a dime. Due to the amount of time and money he's already put in, he's convinced it has to pay off sooner or later and that quitting beforehand would mean forsaking his progress. If for some reason the casino had a weird policy wherein every five minutes he had to move to a new machine, he might be upset by this and claim that every time he moves to a new machine he loses all his "progress" on the previous one. In reality, as long as the different machines are mechanically identical, there is physically nothing different from pulling the crank on one or the other. A slot machine has no memory and has no way of "knowing" that you've tried 500 times and deserve a break.

As for mining, say blocks come by every 10 seconds and you average one share every 12 seconds. Based on a misunderstanding of averages as discrete times at which a share will be found, one might conclude that he would never get a share, as his "progress" would be "reset" 2 seconds shy of finding a share each time. In reality, he has the exact same chance of finding a share every single second; it just averages out to once every twelve. However, it wouldn't be too uncommon to find one in only 8 seconds, or have to wait 16. In fact, while very unlikely, it would be possible for a CPU miner to solo-mine Bitcoin and find a block within seconds, while a guy with a top-of-the-line ASIC doesn't find anything in a month.
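The "no progress" point can also be checked with a small simulation, a sketch assuming exponentially distributed share-finding times: even with blocks changing every 10 seconds and shares averaging 12 seconds, the long-run share rate stays at about 1 per 12 seconds; nothing is lost at block boundaries.

```python
import random

def shares_per_second(block_time, avg_share_time, n_blocks=100_000, seed=42):
    """Simulate a miner whose search restarts at every block boundary."""
    rng = random.Random(seed)
    shares = 0
    for _ in range(n_blocks):
        t = 0.0
        while True:
            t += rng.expovariate(1.0 / avg_share_time)  # time until next share
            if t > block_time:
                break        # block changed; the unfinished share is "discarded"
            shares += 1
    return shares / (n_blocks * block_time)

rate = shares_per_second(block_time=10.0, avg_share_time=12.0)
# rate comes out close to 1/12 ~= 0.0833 despite the constant restarts
```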

In closing, consider the following: If you throw a die once, what are the odds you roll a six? One in six, right? Yes. How about if you throw it twice? 2/6? Nope. "What!?" you might say. "But two rolls should clearly give me twice the chances of one; you're obviously an idiot." Yet by that logic, six throws would give you 6/6, a 100% chance of success, and life is never that certain. The answer is that two rolls net you an 11/36 chance. Why? The answer is actually quite simple: there are a total of 36 equally probable combinations of two rolls, 11 of which include at least one six ([1,6] [2,6] [3,6] [4,6] [5,6] [6,6] [6,1] [6,2] [6,3] [6,4] and [6,5]). And six rolls? Surely in six rolls you should get a six once, right? In fact you only have a 66.5% chance. Still more often than not, but not exactly something to bet your life on.
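Both dice figures check out by brute force (a quick sketch):

```python
from itertools import product

# Two rolls: ordered pairs containing at least one six
hits = [p for p in product(range(1, 7), repeat=2) if 6 in p]
two_roll_chance = len(hits) / 36            # 11/36 ~= 30.6%

# Six rolls: chance of at least one six
six_roll_chance = 1 - (5 / 6) ** 6          # ~66.5%
```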

While your description of Gambler's Fallacy is accurate and apt to the statistics described for share calculation, your choice of slot machines as the example is probably the worst possible choice.  Gambling laws around the world regulate the RTP (Theoretical Payout Percentage) for slots and that is controlled by firmware/software. Las Vegas has a minimum payout of 75% and Atlantic City 83%. The slots are individually tweaked upon setup and are audited.  If they pay too much to the House over thousands of pulls, then the casino is fined.   Also, there is a record kept of each pull, winnings paid, pulls per session (if a casino card is used), etc. These all factor in to the calculations/controls. So yes, slot machines have a memory if you pulled 500 times.

So... There is a "progress" of sorts, because over the course of hours, if there isn't at least one payout at that particular machine up to the threshold established by law, the firmware "forces" a win of a certain amount to meet the ratio of winnings needed. Not pure chance then.

What does this have to do with Middlecoin, Hashrates/shares and difficulty? Nada

More info: http://en.wikipedia.org/wiki/Slot_machine#Payout_percentage
sr. member
Activity: 406
Merit: 250
From CGMiner readme:



About the difficulty, I know what's right, and what's not. It's fine if people discuss it, because that will help bring them closer to the truth.

Just tell me I am right and give us a status update on diverting the hashrate to multiple coins.
sr. member
Activity: 312
Merit: 251
Agrippa, your comparisons might be correct, but they are theoretical.

If we compare REAL numbers like WU, which tells you how much of your raw hashrate
found its way correctly to the pool server, you will understand.

Normal WU = 1950
WU on that Pool = 1500

So there is something going really really wrong here.