Topic: Does a high pool difficulty lower anyone's profits? - page 2. (Read 4420 times)

sr. member
Activity: 434
Merit: 250
Well done mueslo. You have more patience than I do. Smiley
sr. member
Activity: 658
Merit: 250
It's proportional

https://bitcointalksearch.org/topic/m.2900385

Glad to see this discussion is actually progressing. The lack of personal remarks in this thread is almost inexplicable.
member
Activity: 94
Merit: 10
[...]

Here's the code for a per-pool-found-block proportional reward. There's no change in the outcome, which is to be expected: all a proportional reward does is increase variance (as long as there's no pool hopping):

Code:
import numpy.random as rnd
import numpy as np

class worker():
    sharetime = None #time the next share is found by the worker
    def __init__(self,avgsharetime):
        self.avgsharetime = avgsharetime
        self.hashrate = 60/avgsharetime
        self.shares = 0
        self.roundshares = 0
        self.income = 0.
        self.generateShareTime(0.0)
    
    def generateShareTime(self, currenttime):
        self.sharetime = currenttime + rnd.exponential(scale=self.avgsharetime)
    
    def foundShare(self, currenttime):
        self.shares+=1
        self.roundshares+=1
        self.generateShareTime(currenttime)


class pool():
    t = 0. #time elapsed
    blocktime = None #time the next block is found (by the network)
    pblock = 0.02 #probability that a share solves the block
    totalroundshares = 0 #number of total shares solved for this block round
    workers = [] #list of workers in the pool
    def __init__(self,avgnetworkblocktime=10.,workers=[],):
        self.avgnetworkblocktime = avgnetworkblocktime
        self.generateNetworkBlockTime(0.0)
        self.workers=workers
        
    def start(self,duration=100.):
        while self.t < duration:
            nextworker = self.workers[np.argmin([w.sharetime for w in self.workers])]
            nextsharetime = nextworker.sharetime

            if self.blocktime < nextsharetime: #network found a block first
                self.newNetworkBlock()

            else:                            #worker found a share
                self.t = nextworker.sharetime
                self.totalroundshares+=1
                nextworker.foundShare(self.t)

                if rnd.uniform(0,1) < self.pblock: #this share also solves the block
                    self.newPoolBlock()


        
    def generateNetworkBlockTime(self,currenttime):
        self.blocktime = currenttime + rnd.exponential(scale=self.avgnetworkblocktime)
        
    def distributeRewards(self):
        print "Shares: [",
        for w in self.workers:
            print w.roundshares,
            w.income += float(w.roundshares)/float(self.totalroundshares)
            w.roundshares = 0
        self.totalroundshares = 0
        print "]\n"
        
    def newPoolBlock(self):
        print "\nnew pool block t=",self.t,"-> Payout!"
        self.distributeRewards()
        
        self.blocktime = self.t
        self.newNetworkBlock(echo=False)
        
    def newNetworkBlock(self,echo=True):
        self.t=self.blocktime
        if echo: print "new network block t=",self.t
        for w in self.workers: #if you disable these, nothing changes in the outcome
            w.generateShareTime(self.t)
        
        self.generateNetworkBlockTime(self.t)
        


slowworker = worker(avgsharetime=30)
mediumworker = worker(avgsharetime=10)
fastworker = worker(avgsharetime=3)


pool1 = pool(workers=[fastworker, mediumworker, slowworker],avgnetworkblocktime=10.)
pool1.start(duration=100000.)

print fastworker.shares
print slowworker.shares
    
print "Slow worker has: " + str((float(slowworker.hashrate) / float(fastworker.hashrate)) * 100) + ' percent of the hash power of fast worker'
print "Slow worker has: " + str((float(slowworker.income) / float(fastworker.income)) * 100) + ' percent of the profit of fast worker'
print "Slow worker has: " + str((float(slowworker.shares) / float(fastworker.shares)) * 100) + ' percent of the total shares of fast worker'

And some example outputs:
Slow worker has: 10.0 percent of the hash power of fast worker
Slow worker has: 10.0643834344 percent of the profit of fast worker
Slow worker has: 10.2436090226 percent of the total shares of fast worker

Slow worker has: 10.0 percent of the hash power of fast worker
Slow worker has: 9.89355458187 percent of the profit of fast worker
Slow worker has: 9.99728580476 percent of the total shares of fast worker

Slow worker has: 10.0 percent of the hash power of fast worker
Slow worker has: 10.2071470481 percent of the profit of fast worker
Slow worker has: 10.1000090098 percent of the total shares of fast worker

Slow worker has: 10.0 percent of the hash power of fast worker
Slow worker has: 9.78155423167 percent of the profit of fast worker
Slow worker has: 10.1675711134 percent of the total shares of fast worker

Slow worker has: 10.0 percent of the hash power of fast worker
Slow worker has: 9.84810455803 percent of the profit of fast worker
Slow worker has: 9.8902409926 percent of the total shares of fast worker

example found blocks
newbie
Activity: 28
Merit: 0
What makes more sense: when a block is found by the pool itself (and not the whole network), all previous shares are reset to zero and the reward of that block is distributed among shares.  I'm guessing you want the latter?

The fact is I really don't know how it works right now. But when I am talking about block change, I mean block change for the network. I am starting to realize we are all working under a lot of personal assumptions about this, because we don't know for sure what the reward system is. My assumption, and the one I've been arguing from (I hope not for nothing, but at least it would be settled), is that we get paid per block, proportional to the shares we have for that block.

At the end of every block? That wouldn't make much sense though (there is a big chance of loss for the pool operator if the network suddenly becomes fast).


It shouldn't affect the pool as a whole. The pool as a whole would still solve just as many blocks; it should only affect the shares within the pool.
 
member
Activity: 94
Merit: 10
I am curious what will happen if you implement the following conditions:

At the end of the block, worker shares get reset to 0.
Just before that happens, the shares are used to calculate payout, and the payout occurs per block.

At the end of every block? That wouldn't make much sense though (there is a big chance of loss for the pool operator if the network suddenly becomes fast).

What makes more sense: when a block is found by the pool itself (and not the whole network), all previous shares are reset to zero and the reward of that block is distributed among shares.  I'm guessing you want the latter?

(I don't think we're using that either, because it's vulnerable to pool hopping if you know when the pool found a block.)

But anyway, if you want to actually simulate this, you can no longer calculate the time when a block is found, because then it could happen that the pool finds a block without any share actually having been submitted. You now have to actually calculate the odds of one of the miners in the pool having found the block.
newbie
Activity: 28
Merit: 0
If you like, I can change the script I wrote to a per block reward system, I'm sure it would come out the same. If not, I concede.

I am curious what will happen if you implement the following conditions:

At the end of the block, worker shares get reset to 0.
Just before that happens, the shares are used to calculate payout, and the payout occurs per block.



member
Activity: 94
Merit: 10
Its not Miner A vs. the Pool and Miner B vs. the pool

Its Miner A vs. Miner B.

The distribution doesn't matter very much, because miner A and miner B both receive it under the same system. I arbitrarily picked the averages out of thin air. I can do that, because on the pool at any given time we can find miners all over the scale. That's why I can say miner A has this distribution, because no matter what, I can find a miner with that average.

To put it another way, it's pointless to argue over whether miner A with average X would really distribute that way. Even if you are right, there is another miner with average Y who WOULD, and I can pick any of them since the competition is between the two miners, not miner vs. pool.

If the distribution is not that of a Poisson process, all your results will be wrong, because then you are not simulating a Poisson process (which hashing is). If you like, I can change the script I wrote to a per block reward system; I'm sure it would come out the same. If not, I concede.

Which reward scheme is middlecoin using anyway? I have no idea. But I don't think it's per block reward.
newbie
Activity: 28
Merit: 0
Nowhere in my previous post am I assuming PPLNS. I am actually assuming straight PPS. Please read it and try to understand it. Or at least look at the python script.

You were assuming straight PPS, I was assuming per block proportional. Do you see? We will never convince each other because we are both right, in the respective systems we think we are operating under.

Sorry, but you are wrong. You were not simulating per block proportional, you were assuming a wrong distribution for the time between shares.

Its not Miner A vs. the Pool and Miner B vs. the pool

Its Miner A vs. Miner B.

The distribution doesn't matter very much, because miner A and miner B both receive it under the same system. I arbitrarily picked the averages out of thin air. I can do that, because on the pool at any given time we can find miners all over the scale. That's why I can say miner A has this distribution, because no matter what, I can find a miner with that average.

To put it another way, it's pointless to argue over whether miner A with average X would really distribute that way. Even if you are right, there is another miner with average Y who WOULD, and I can pick any of them since the competition is between the two miners, not miner vs. pool.
member
Activity: 94
Merit: 10
Nowhere in my previous post am I assuming PPLNS. I am actually assuming straight PPS. Please read it and try to understand it. Or at least look at the python script.

You were assuming straight PPS, I was assuming per block proportional. Do you see? We will never convince each other because we are both right, in the respective systems we think we are operating under.

Sorry, but you are wrong. You were not simulating per block proportional, you were assuming a wrong distribution for the time between shares.


Thank you for the detailed, thorough response. Give me a chance to digest everything you said. You are clearly more advanced in mathematics and statistics than I (i mean that genuinely not sarcastically) and I would love to learn from you. However, in the mean time, let me ask you this.

If you can't precalculate (you say precalculate, I say predict) the time it takes to solve a block, how is bitcoin (or whatever coin) doing it? Bitcoin itself has to predict the time it will take the network to solve a block, to know how to adjust the difficulty. It MUST maintain an average block find time of 10 minutes, or quite literally the entire thing will come crashing to a screeching halt.

If bitcoin couldn't accurately predict the time it takes to solve a block given X hashpower, we wouldn't be having this conversation because bitcoin would have died, and none of the alt coins that we love to mine at middlecoin would have ever existed.

I elaborated a bit at the end, because that's what I thought was most likely to be read:

Now on to your simulation: You are using the wrong distribution. If you don't know why, read my post again from the top, read the wiki page or watch the video I linked above.

The probability distribution of the times between finding shares (again, this is a Poisson process) is proportional to e^(-t/T), where T is the average time between shares. It has exactly the property that you can restart it at any point without changing anything.

edit to clarify: the chance that you still haven't found a share after time t is P(t) = e^(-t/T). For generating t, you can for example use numpy.random.exponential(scale=T)
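A quick numerical check of that memorylessness property (a sketch, assuming NumPy is available, using the modern `default_rng` API rather than the old `numpy.random` functions in the posts above): conditioning on having already waited s seconds leaves the remaining wait distributed exactly the same.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 30.0  # assumed average time between shares, in seconds

# Draw a large sample of exponentially distributed share-finding times.
waits = rng.exponential(scale=T, size=1_000_000)

# Unconditional mean wait should be close to T.
mean_all = waits.mean()

# Condition on having already waited s = 20 s without a share:
# the *remaining* wait should again average close to T, not T - 20.
s = 20.0
survivors = waits[waits > s] - s
mean_remaining = survivors.mean()

print(mean_all, mean_remaining)  # both near 30
```

This is why resetting every worker's share clock at a block change (as the simulation does) has no effect on the outcome: the exponential clock restarted at the block boundary is statistically identical to the one it replaced.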
sr. member
Activity: 434
Merit: 250
If you can't precalculate (you say precalculate, I say predict) the time it takes to solve a block, how is bitcoin (or whatever coin) doing it? Bitcoin itself has to predict the time it will take the network to solve a block, to know how to adjust the difficulty. It MUST maintain an average block find time of 10 minutes, or quite literally the entire thing will come crashing to a screeching halt.

Bitcoin adjusts difficulty level based on historical data only, no predictions. It compares the time it took to get the last X blocks to what it "should" have taken, and then adjusts the difficulty accordingly to try and be closer to the target speed next time. Which would be very accurate if hash rate was constant.
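That retrospective rule can be sketched in a few lines. The 2016-block window, 600-second target, and 4x clamp are from the reference client; everything else here (the function name, using difficulty directly instead of the compact target representation, float arithmetic) is an illustrative simplification.

```python
def retarget(old_difficulty, actual_timespan, target_timespan=2016 * 600):
    """Bitcoin-style difficulty retarget: purely retrospective.

    Compares how long the last window of blocks actually took against how
    long it should have taken (2016 blocks * 600 s) and rescales difficulty.
    The adjustment is clamped to a factor of 4 per retarget, as in the
    reference client.
    """
    # Clamp the measured timespan before rescaling.
    actual_timespan = max(target_timespan // 4,
                          min(actual_timespan, target_timespan * 4))
    return old_difficulty * target_timespan / actual_timespan

# If the last 2016 blocks took half the target time (hash rate doubled),
# difficulty doubles. No prediction involved, only historical data.
print(retarget(1000.0, 2016 * 300))  # -> 2000.0
```

Note there is no forecast anywhere: if hash rate keeps changing between retargets, the average block time drifts away from 10 minutes until the next adjustment catches up.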
newbie
Activity: 28
Merit: 0
Nowhere in my previous post am I assuming PPLNS. I am actually assuming straight PPS. Please read it and try to understand it. Or at least look at the python script.

You were assuming straight PPS, I was assuming per block proportional. Do you see? We will never convince each other because we are both right, in the respective systems we think we are operating under.
newbie
Activity: 28
Merit: 0

You cannot precalculate when an event may happen like this, that would violate you having a constant chance of finding a hash, and would make it dependent on the past.

Thank you for the detailed, thorough response. Give me a chance to digest everything you said. You are clearly more advanced in mathematics and statistics than I (i mean that genuinely not sarcastically) and I would love to learn from you. However, in the mean time, let me ask you this.

If you can't precalculate (you say precalculate, I say predict) the time it takes to solve a block, how is bitcoin (or whatever coin) doing it? Bitcoin itself has to predict the time it will take the network to solve a block, to know how to adjust the difficulty. It MUST maintain an average block find time of 10 minutes, or quite literally the entire thing will come crashing to a screeching halt.

If bitcoin couldn't accurately predict the time it takes to solve a block given X hashpower, we wouldn't be having this conversation because bitcoin would have died, and none of the alt coins that we love to mine at middlecoin would have ever existed.
sr. member
Activity: 434
Merit: 250
Ah yes, if you are using a proportional payment system based solely on the current block, then on blocks with no shares you'd get zero and on blocks with a share you'd get paid. One could argue that over time it wouldn't matter: if your hash rate is .5 shares per block, then on the blocks with a share you get paid "double" your average contribution and on blocks with no shares you get paid 0, so it averages out to .5 shares/block worth of pay. But only getting paid on blocks where you personally submitted shares kinda violates the spirit of pooled mining, in my opinion.

'Block at a time' proportional payout for a very fast coin, where slow miners might not submit shares every block, seems silly. Proportional is a bad payment model anyway, and is hoppable. Best to use DGM, or PPLNS if implemented the way Meni Rosenfeld says to. Not all pools do PPLNS correctly. Follow this post:

https://bitcointalksearch.org/topic/pplns-39832

To do it right. Smiley

DGM is what I use on my pool and I really like it. (I do not use a negative f value, so there's no risk to the pool at all. PPS is also bad, since it virtually guarantees pool bankruptcy if run long enough.)
member
Activity: 94
Merit: 10
Ok. I had a bit of an epiphany - the argument has been over what diff setting is correct this whole time, but now I understand we really should have been arguing over reward system.

Everything I am saying is accurate for a proportional reward system. If we aren't using a proportional reward system, woah, that probably should have been mentioned weeks ago.

What a lot of the arguments are saying would be true if there was one long block with no block changes.

That's the epiphany I had - we should be arguing to go to PPLNS.


The effect I am describing is present when each block has shares associated with it, and when the block is found the reward is shared among those shares. Everything resets for the new block. Proportional reward.

If we've been using something else, then no wonder we are all disagreeing; we are basically speaking different languages.

But PPLNS would eliminate the bias almost completely. The block change would then have no effect on the payout.

I will concede that in a PPLNS system, yes, all the diff does is introduce variance. I could modify my script in 5 minutes to simulate PPLNS and I bet the numbers would look good.



Let's change this conversation. H20, can you verify what reward system you are using now? It's not explicitly mentioned in your FAQ or announcement statement. Now I realize, unless everyone is on the same page on this, we are going to spin our wheels forever.

Second, instead of arguing about diff - which would alleviate the problem I am describing but not eliminate it, can we talk about PPLNS or something similar?

Nowhere in my previous post am I assuming PPLNS. I am actually assuming straight PPS. Please read it and try to understand it. Or at least look at the python script.
newbie
Activity: 28
Merit: 0
Ok. I had a bit of an epiphany - the argument has been over what diff setting is correct this whole time, but now I understand we really should have been arguing over reward system.

Everything I am saying is accurate for a proportional reward system. If we aren't using a proportional reward system, woah, that probably should have been mentioned weeks ago.

What a lot of the arguments are saying would be true if there was one long block with no block changes.

That's the epiphany I had - we should be arguing to go to PPLNS.


The effect I am describing is present when each block has shares associated with it, and when the block is found the reward is shared among those shares. Everything resets for the new block. Proportional reward.

If we've been using something else, then no wonder we are all disagreeing; we are basically speaking different languages.

But PPLNS would eliminate the bias almost completely. The block change would then have no effect on the payout.

I will concede that in a PPLNS system, yes, all the diff does is introduce variance. I could modify my script in 5 minutes to simulate PPLNS and I bet the numbers would look good.



Let's change this conversation. H20, can you verify what reward system you are using now? It's not explicitly mentioned in your FAQ or announcement statement. Now I realize, unless everyone is on the same page on this, we are going to spin our wheels forever.

Second, instead of arguing about diff - which would alleviate the problem I am describing but not eliminate it, can we talk about PPLNS or something similar?
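A minimal sketch of what a PPLNS payout could look like (an illustration only, not middlecoin's actual scheme; unit-difficulty shares and an equal split over the window are simplifying assumptions): instead of resetting at block boundaries, each found block pays out over the last N accepted shares, regardless of which round they were submitted in.

```python
from collections import deque, defaultdict

def pplns_payout(share_window, block_reward):
    """Split block_reward equally over the shares in the window.

    share_window holds one worker id per accepted share, bounded at N
    (the deque's maxlen). Shares submitted before the last block change
    still count, which is what removes the per-block reset bias.
    """
    payout = defaultdict(float)
    for worker_id in share_window:
        payout[worker_id] += block_reward / len(share_window)
    return dict(payout)

# Last N = 10 shares: the fast worker submitted 9 of them, the slow worker 1.
window = deque(['fast'] * 9 + ['slow'], maxlen=10)
print(pplns_payout(window, 1.0))  # fast gets ~0.9, slow gets ~0.1
```

Because the window slides over block boundaries, where the block change happens to fall has no effect on anyone's expected payout, only on its variance.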



member
Activity: 94
Merit: 10

If the block changes every 30 seconds on average, and you find a share every 30 seconds on average, and someone else finds a share every 5 seconds, how do you have the same chances?

You cannot precalculate when an event may happen like this; that would violate you having a constant chance of finding a hash, and would make it dependent on the past. If I find a share now, I have the exact same chance of finding another one a moment later, since individual hash attempts are (approximately) instantaneous. Or let me put it this way: assuming you have been unlucky and got e.g. 0 shares in the time you were on average supposed to get two, this does not make it any more likely that you will now find shares in the next timespan. Since you are constantly looking for shares, with the same probability at each point in time, the Poisson process continues even over block changes, so you can't just reset the time.

I explained this in the old thread. If you have an event that has a constant chance of happening in time (e.g. finding a hash, nuclear decay in atoms, ...), the amount of times the event occurs on average in a given timespan (here: between two blocks) is given by the Poisson distribution.

Here is the probability (y axis) that you find N shares (x axis) between two blocks, if your average share find time is equal to the average block find time.
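The curve being described (the original image is missing) is the Poisson pmf with λ = 1, i.e. one expected share per block; a sketch to reproduce the numbers:

```python
from math import exp, factorial

def poisson_pmf(n, lam):
    """Probability of finding exactly n shares in an interval where the
    expected count is lam, for a Poisson process."""
    return lam ** n * exp(-lam) / factorial(n)

# Share rate equal to block find rate: expected 1 share per block.
for n in range(5):
    print(n, round(poisson_pmf(n, 1.0), 4))
# n = 0 and n = 1 are equally likely (both ~0.3679): you whiff on a block
# exactly as often as you land exactly one share, as discussed below.
```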

Another analogy. You are gambling with slot machines that are free to use. Every time the block changes you have to change slot machines (but you can do so instantly, we are not simulating latency). By your argument, someone who can pull the lever five times as fast, would get more than five times what you got. Why should that be the case?

I suggest you read up a bit on Poisson processes; I don't think you understand them quite correctly at the moment. Additionally, I'm confident I know what I'm talking about: I study physics. I'm also by no means a fast miner, I have a measly 1 MH/s.



You'll completely whiff on any given block just as often as you get one share.. meanwhile the other guy gets 6 in. If he has a little bad luck, he gets 5. You have a little bad luck? you get 0. But, it all evens out right? A little bad luck this time, a little good luck next time. Of course every time you have a little good luck, you don't get 2, you'll still get 1. He'll get 7

You each have a good block and a bad block. You have 1 share. He has 12.

1/12 is not the same ratio as 1/6.

He might get 7, but that's only 17% more than he should have gotten. When you get 2, that's a 100% more than what you should have gotten. Here's the picture accompanying the above, for a miner that has 6x the hashrate of the share-rate-equal-to-block-rate miner:




Now on to your simulation: You are using the wrong distribution. If you don't know why, read my post again from the top, read the wiki page or watch the video I linked above.

The probability distribution of the times between finding shares (again, this is a Poisson process) is proportional to e^(-t/T), where T is the average time between shares. It has exactly the property that you can restart it at any point without changing anything.


Here is the correct version of your code:
Code:
import numpy.random as rnd

class worker():
    sharetime = None #time the next share is found
    def __init__(self,avgsharetime):
        self.avgsharetime = avgsharetime
        self.hashrate = 60/avgsharetime
        self.shares = 0
        self.generatesharetime(0.0)
        
    def generatesharetime(self, currenttime):
        self.sharetime = currenttime + rnd.exponential(scale=self.avgsharetime)

class pool():
    blocktime = None #time the next block is found
    def __init__(self,avgblocktime):
        self.avgblocktime = avgblocktime
        self.generateblocktime(0.0)
    def generateblocktime(self,currenttime):
        self.blocktime = currenttime + rnd.exponential(scale=self.avgblocktime)

pool1 = pool(2)
worker1 = worker(12)
worker2 = worker(1)
duration = 1000.

t=0.
while t < duration:
    if pool1.blocktime < worker1.sharetime and pool1.blocktime < worker2.sharetime:
        t=pool1.blocktime
        print "new block t=",t
        worker1.generatesharetime(t) #if you disable these, nothing changes in the outcome
        worker2.generatesharetime(t) #
        pool1.generateblocktime(t)
        
    elif worker1.sharetime < worker2.sharetime:
        t=worker1.sharetime
        print "worker 1 found a share t=",t
        worker1.shares+=1
        worker1.generatesharetime(t)

    elif worker2.sharetime < worker1.sharetime:
        t=worker2.sharetime
        print "worker 2 found a share t=",t
        worker2.shares+=1
        worker2.generatesharetime(t)
    else:
        print "this is hugely improbable"


print worker1.shares
print worker2.shares
    
print "Worker 1 has: " + str((float(worker1.hashrate) / float(worker2.hashrate + worker1.hashrate)) * 100) + ' percent of the hash power'
print "But worker 1 has: " + str((float(worker1.shares) / float(worker2.shares + worker1.shares)) * 100) + ' percent of the profit'
#print "Over sample size of " + str(samplesize)
print "When worker1's average share-find-speed was: " + str((float(pool1.avgblocktime) / float(worker1.avgsharetime))) + "x the block time"
 

Example Output:

blocks and shares over time

Worker 1 has: 7.69230769231 percent of the hash power
But worker 1 has: 8.26645264848 percent of the profit
When worker1's average share-find-speed was: 0.166666666667x the block time
sr. member
Activity: 434
Merit: 250
You'll completely whiff on any given block just as often as you get one share.. meanwhile the other guy gets 6 in. If he has a little bad luck, he gets 5. You have a little bad luck? you get 0. But, it all evens out right? A little bad luck this time, a little good luck next time. Of course every time you have a little good luck, you don't get 2, you'll still get 1. He'll get 7

Miners aren't paid based on some sort of weighted system where it's the # of shares per block you submit; it's the total number of shares you submit vs. the total number of shares all miners have submitted.

Say the slow miner's average hash rate is about 1 share every 2 blocks and the fast miner is 9 shares every 2 blocks. Look at the slow miner over a period of 4 blocks. Maybe he submits 0 shares in blocks 1-3, and 2 shares in block 4. Or 2 shares in block 1 and no shares in block 2-4. Or one share in block 1 and one share in block 4, with no shares in blocks 2-3. The end result is after four blocks he's submitted 2 shares.

The fast miner, on the other hand, would average about 18 shares over these 4 blocks. In total after 4 blocks, on average, we'd see 20 total shares. 2 from the slow miner, 18 from the fast miner. And 2/20 of the coins being generated by the pool would be getting paid to the slow miner. Which specific blocks the slow miner submitted the shares on doesn't matter.

If it's more clear, raise the slow miner's average speed to 1 share per block. Whether he gets 1 share each in blocks 1-4, or 4 shares in block 1 and 0 in blocks 2-4, he still submitted 4 shares and is paid 4/total_shares_in_pool. How the shares were spread between the blocks doesn't change this payout.

(I'm assuming something like proportional or PPLNS payout of course. DGM doesn't use a proportional ratio although steady miners over time end up with the exactly fair proportional income based on their relative speeds.)
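The arithmetic above can be checked directly; under a payout proportional to total shares submitted, which blocks the slow miner's shares landed in never enters the calculation (the function name and numbers are just the post's example):

```python
def slow_miner_fraction(slow_shares, fast_shares):
    """Fraction of the pool's total payout earned by the slow miner,
    when payment is proportional to total shares submitted, regardless
    of how those shares were spread across blocks."""
    return slow_shares / (slow_shares + fast_shares)

# Over 4 blocks: the slow miner averages 2 shares, the fast miner 18,
# however those shares happen to be distributed among individual blocks.
print(slow_miner_fraction(2, 18))  # -> 0.1
```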
newbie
Activity: 28
Merit: 0
Looking at a graph of a Poisson distribution, it starts low, peaks, then falls off. That's not the right distribution to use here. We need to start high and fall off; the peak should be the very first sample.

If you are using a difficulty such that you have a 50% chance of getting a share each hash, the distribution will look like so:
1st hash: 50%
2nd hash: 25%
3rd hash: 12.5%
4th hash: 6.25%
and so on, dividing by two each time.

So whatever kind of distribution you call that.
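For the record, that halving sequence is a geometric distribution (the number of Bernoulli trials until the first success); a quick sanity check, assuming each hash independently succeeds with probability p:

```python
def geometric_pmf(k, p=0.5):
    """Probability that the first successful hash is the k-th attempt,
    when each hash independently succeeds with probability p."""
    return (1 - p) ** (k - 1) * p

# With p = 0.5 the probabilities halve with each hash, as described:
print([geometric_pmf(k) for k in range(1, 5)])  # [0.5, 0.25, 0.125, 0.0625]
# In the continuum limit (tiny per-hash probability, huge hash rate) this
# becomes the exponential distribution of waiting times used in the
# simulations above.
```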

Your example would solve something like, "what is the distribution of how many hashes it should take to solve a share, given a 50% success rate".

What I am using Poisson for is to solve "given a known mean of 60 seconds, what is the distribution of the time (in seconds) it will take to solve a share".

So the low-peak-low shape (skewed to the right, with an infinite tail) is correct for my use.



I can tell you'd prefer a full simulation instead of a half simulation. Mine is a half simulation, since I am just using probability to say how long the block/share time took.

Let me give a full simulation some thought. I initially thought it would be hard to do, but the more I think about it, the more it seems possible to make each worker a thread. I thought there would be a race condition, but I forgot about the global interpreter lock in Python. Basically Python can't achieve true multi-threading because of the GIL, which locks the interpreter to one thread at a time, but in this case that would actually be favorable.

Something like calculating how often a worker attempts a hash per second based on its hash rate, putting in a sleep(x) call appropriate to that hash rate, and letting them all go to town until someone solves the block.
sr. member
Activity: 434
Merit: 250
Because the block changed. When he finds one, it's a whole new block by then. All his work on that last block will never be given credit.

There is no "partial work" on blocks. Each random hash is either a valid share or not. When you get notified there is a new block, you keep trying hashes against the new block instead of the old block. The only time a new block has any effect is if you report a share a moment too late and it is 'stale'. But that's true for anyone.

The probability of finding a share is independent of your miner's speed or how many blocks you've looked at previously. A new block notification has no effect on this (unless the difficulty for the new block has changed).

Meanwhile, fast miner got 9 on the last block (or 8, or 10, whatever you like). Even if slow miner finds a block this time, fast miner probably got 8-12 shares again.

If the fast miner is 9x the speed of the slow miner, they should be reporting shares 9x as often as the slow miner. If it takes about 5 shares to find a block, then for every 2 blocks found on average that is 10 total shares, 1 share from the slow miner and 9 shares from the fast miner. So every other block the slow miner didn't report a share during that block. But that doesn't matter. The slow miner is reporting 1/10 of the total shares and will earn 1/10 of the total payout.

Over time, the block changing hurts the slow miner more.

Untrue.
full member
Activity: 168
Merit: 100
It merely calculates how many shares you post per second and adjusts the diff so it's in the desirable range (shares per sec).
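That adjustment loop ("vardiff") can be sketched roughly like this; the target share interval and the clamping bounds are made-up illustrative values, and real implementations differ in the details:

```python
def adjust_difficulty(diff, shares_submitted, elapsed_seconds,
                      target_share_interval=15.0, min_diff=1, max_diff=65536):
    """Rescale a worker's share difficulty toward a target share rate.

    Purely retrospective, like the network retarget: compare the observed
    share interval over the last window with the target and rescale,
    clamped to sane bounds.
    """
    if shares_submitted == 0:
        return max(min_diff, diff / 2)  # no shares at all: back off
    observed_interval = elapsed_seconds / shares_submitted
    new_diff = diff * target_share_interval / observed_interval
    return min(max_diff, max(min_diff, new_diff))

# A worker submitting a share every 5 s at diff 512 gets tripled to 1536,
# pushing the interval toward the 15 s target.
print(adjust_difficulty(512, shares_submitted=12, elapsed_seconds=60))  # -> 1536.0
```

As the thread argues, this only tunes variance and stale-share overhead per worker; it doesn't change anyone's expected share of the pool's income.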