The following is my final word on this subject. I will shut up forever about it, after this. You are free to pick apart my script, find all the logical flaws with it, modify it, publicly shame/praise it. I don't care.
A lot of people aren't satisfied with theoretical equations, so I created a simulation. I wrote a script in Python that does the following things.
It takes four input variables:
- Average block solve time (a result of the pool's hashrate and the network difficulty of the current coin)
- Worker 1 speed (your slower worker; represents the worker's hashrate relative to the rest of the pool)
- Worker 2 speed (your faster worker; same)
- The sample size (a higher value reduces random variance)
For each unit of the sample size, I run one instance of the simulation. Each instance represents one block of a coin.
First I generate a solve time for that block. Our constant input is the average block solve time, so I generate a random number between 0.5x and 1.5x of it; this is the solve time for this particular block.
Remember, by the very definition of the word average, future values will be evenly balanced across both sides. I then run a separate simulation, using the same solve clock, for both workers.
For each worker, I generate a random value between 0.5x and 1.5x of their share solve time. Remember, these values don't have to be realistic, since all we care about is the relation between the two workers.
I check and see if the value is less than the clock. If it is, I credit the worker with 1 share, and subtract the share solve time from the clock time. I do this until the solve time finally becomes greater than the remaining clock.
Thus, I have simulated the number of shares that worker got from the block.
I do the same for the other worker, who has a faster share-solve-time.
The rest is just calculating and displaying statistics.
Here's the code:

import random

class Worker:
    def __init__(self, hashrate):
        self.hashrate = hashrate
        self.sharesolvetime = 60 / hashrate  # average seconds to find one share
        self.shares = 0

class Pool:
    def __init__(self, blockfindtime):
        self.blockfindtime = blockfindtime  # average seconds to find one block

pool1 = Pool(500)
worker1 = Worker(1)
worker2 = Worker(12)
samplesize = 100000

for n in range(samplesize):
    # Solve time for this particular block: 0.5x to 1.5x of the average.
    clock = random.uniform(pool1.blockfindtime / 2, pool1.blockfindtime * 1.5)

    clock1 = clock
    while clock1 > 0:
        sharesolve = random.uniform(worker1.sharesolvetime / 2, worker1.sharesolvetime * 1.5)
        if sharesolve > clock1:
            break
        worker1.shares += 1
        clock1 -= sharesolve

    clock2 = clock
    while clock2 > 0:
        sharesolve = random.uniform(worker2.sharesolvetime / 2, worker2.sharesolvetime * 1.5)
        if sharesolve > clock2:
            break
        worker2.shares += 1
        clock2 -= sharesolve

print("Worker 1 has: " + str(worker1.hashrate / (worker2.hashrate + worker1.hashrate) * 100) + " percent of the hash power")
print("But worker 1 has: " + str(worker1.shares / worker2.shares * 100) + " percent of the profit")
print("Over sample size of " + str(samplesize))
print("When worker1's average share-find-speed was: " + str(pool1.blockfindtime / worker1.sharesolvetime) + "X the block-find-speed")
It displays the following stats:
- What percent of the hash power worker1 has
- What percentage of the profit (shares) he ended up with
- What sample size we used
- What was the ratio of worker1's time to find a share to the pool's time to find a block (another way of saying, how "fast" was the coin)
I will now give you the results of running this script. I will use the same worker speeds, but I will change the block solve time. I will give 5 examples.
One important point: the block solve time represents the speed of the coin; we can't do anything about that.
The share solve time is what we want to affect. There are two ways to do this: change your hashrate, or change the share difficulty.
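The script above hard-codes this relationship as sharesolvetime = 60 / hashrate. As a sketch of the two levers (the share_difficulty parameter is my addition, not something in the original script), share solve time scales up with share difficulty and down with hashrate:

```python
# Sketch, not part of the original script: the base constant 60 plays the
# role of a fixed share difficulty there. Doubling your hashrate or halving
# the share difficulty both halve the time to find a share.
def share_solve_time(hashrate, share_difficulty=1.0, base=60.0):
    return base * share_difficulty / hashrate

print(share_solve_time(1))        # worker1 in the script: 60.0 seconds
print(share_solve_time(12))       # worker2 in the script: 5.0 seconds
print(share_solve_time(1, 0.5))   # half the share difficulty: 30.0 seconds
```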
Now then... the results...
Very Slow coin (something like LTC):
Worker 1 has: 7.69230769231 percent of the hash power
But worker 1 has: 7.12127534135 percent of the profit
Over sample size of 100000
When worker1's average share-find-speed was: 8.33333333333X the block-find-speed
Pretty Slow coin
Worker 1 has: 7.69230769231 percent of the hash power
But worker 1 has: 6.6950187416 percent of the profit
Over sample size of 100000
When worker1's average share-find-speed was: 4.0X the block-find-speed
Medium/Fast Coin
Worker 1 has: 7.69230769231 percent of the hash power
But worker 1 has: 5.89708931026 percent of the profit
Over sample size of 100000
When worker1's average share-find-speed was: 2.0X the block-find-speed
Fast Coin
Worker 1 has: 7.69230769231 percent of the hash power
But worker 1 has: 4.07045734716 percent of the profit
Over sample size of 100000
When worker1's average share-find-speed was: 1.0X the block-find-speed
Extremely Fast coin
Worker 1 has: 7.69230769231 percent of the hash power
But worker 1 has: 1.15306809456 percent of the profit
Over sample size of 100000
When worker1's average share-find-speed was: 0.5X the block-find-speed
A quick analysis of the results supports the following conclusion:
On any pool, for any coin, there is a skew toward faster miners in terms of their percentage of profit relative to their hashrate. The effect grows sharply as the worker's share-find time approaches the pool's block-find time.
This effect is negligible for slow coins with long block times. However, as a slower worker's share-find time approaches the block-find time of a fast coin, that worker begins to lose an extreme amount of profit.
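A rough sanity check of why this happens (my own back-of-envelope approximation, not part of the script): when the block ends, each worker throws away the share it was working on, which costs it very roughly half a share per block on average. A worker who only finds a handful of shares per block loses a much larger fraction of its work than a fast worker does:

```python
# Approximation only (assumes ~0.5 shares lost per worker per block);
# the numbers will not match the simulation exactly, but the trend does.
def predicted_profit_ratio(block_time, solve1, solve2, lost=0.5):
    """Worker1's expected shares as a percentage of worker2's."""
    shares1 = max(block_time / solve1 - lost, 0.0)
    shares2 = max(block_time / solve2 - lost, 0.0)
    return 100.0 * shares1 / shares2

# Same share solve times as the script (60s and 5s), shrinking block time:
for block_time in (500, 240, 120, 60, 30):
    print(block_time, round(predicted_profit_ratio(block_time, 60, 5), 2))
```

With a perfectly fair split, worker1 would earn 100/12, about 8.33, percent of worker2's profit; the predicted ratio falls further and further below that as the block time shrinks.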
This only takes block changes into account, where the client hears about the new block in time. There are also rejected shares, where the client hears about the new block too late and happens to solve a share before hearing about it (this is what is measurable on the site's stats page), and also coin changes.
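Rejected shares could be bolted onto the same model like this (a sketch under my own assumptions, not the original script: I add a propagation delay during which the worker keeps mining the stale block, and any share finished in that window is submitted but rejected):

```python
import random

def simulate_block(block_time, share_time, delay):
    """One block: count accepted shares and stale (rejected) shares."""
    accepted = rejected = 0
    t = 0.0
    while True:
        # Time to find the next share: 0.5x to 1.5x of the average.
        t += random.uniform(share_time / 2, share_time * 1.5)
        if t < block_time:
            accepted += 1          # found before the block was solved
        elif t < block_time + delay:
            rejected += 1          # found before hearing about the new block
        else:
            break                  # worker has heard about the new block
    return accepted, rejected
```

With delay = 0 no shares are ever rejected; the longer the delay relative to the share solve time, the more work is wasted on stale shares.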