
Topic: Does a high pool difficulty lower anyone's profits? (Read 4420 times)

member
Activity: 94
Merit: 10
On the PNG I can see miners (the + marks) forming nice curves that rise exponentially as hashrate drops. At least as of 21:05 on the 25th of August.


Yep, but it jumps around a lot there because there are very few miners in that range. On http://mueslo.de/host/middlecoin/ you can view the last hour.
sr. member
Activity: 736
Merit: 262
Me, Myself & I
If you plot a moving average against that data, some small-hashrate miners around 200th on the list get better rejection averages than some in the top 50. It's dynamic, and mueslo's graph shows what's going on nicely.


The lowest-hashrate miners have some totally irrelevant data shown, distorted by quantisation. Please, to see how 512 and 16 difficulty differ for small miners, start a small CPU/CUDA miner at 10 kH/s, mine for a day at middlecoin with 512 difficulty, and the second day at pool2.us.multipool.us with 16 difficulty. On the first day you will see 99% of lines saying "stratum detected new block", 0.5-0.75% saying "yay" and 0.25-0.5% saying "booo". The second day will be a different story.
Please note that my post is not meant as flaming or making someone's hard work look worse, but as a way of making things better, if possible, because I like the middlecoin idea.
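To put rough numbers on that comparison (a sketch, assuming the usual scrypt convention that a difficulty-1 share takes about 2^16 hashes on average; middlecoin's exact share target may differ):

```python
# Expected shares per day for a small miner at two pool difficulties.
# Assumption: the common scrypt convention that one difficulty-1 share
# takes ~2**16 hashes on average.
HASHES_PER_DIFF1_SHARE = 2 ** 16

def shares_per_day(hashrate_hs, difficulty):
    """Expected share count per day for a hashrate given in hashes/s."""
    shares_per_second = hashrate_hs / (difficulty * HASHES_PER_DIFF1_SHARE)
    return shares_per_second * 86400

# A 10 kH/s CPU/CUDA miner:
for diff in (512, 16):
    print("diff %4d: %6.1f shares/day" % (diff, shares_per_day(10e3, diff)))
```

At difficulty 512 such a miner finds a share only a couple of dozen times per day, which is why the log is dominated by "stratum detected new block" lines.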
sr. member
Activity: 736
Merit: 262
Me, Myself & I
On the PNG I can see miners (the + marks) forming nice curves that rise exponentially as hashrate drops. At least as of 21:05 on the 25th of August.
sr. member
Activity: 414
Merit: 251
The answer is on the middlecoin.com main page. Just calculate rejected over accepted MH/s as a percentage and you will see that the percentage rises going down the table...  Undecided


No it doesn't, no it isn't.


If you plot a moving average against that data, some small-hashrate miners around 200th on the list get better rejection averages than some in the top 50. It's dynamic, and mueslo's graph shows what's going on nicely.

sr. member
Activity: 414
Merit: 251
So there is a little variance wobble for the small hash rates, but it seems to be around the same average as the rest of the miners... well, what do you know, eh?

If you animate those PNGs you can see the variance dance.
member
Activity: 94
Merit: 10
Just calculate rejected over accepted MH/s as a percentage and you will see that the percentage rises going down the table...  Undecided

Here, I made a little something. This should finally clear up misconceptions.

http://mueslo.de/host/middlecoin/miners.png

Regenerates every 5 minutes.
sr. member
Activity: 736
Merit: 262
Me, Myself & I
Running a trial is not possible. The middlecoin creator said that the 512 difficulty is hardcoded. Does higher difficulty favour faster miners over slower ones? The answer is on the middlecoin.com main page. Just calculate rejected over accepted MH/s as a percentage and you will see that the percentage rises going down the table...  Undecided
sr. member
Activity: 490
Merit: 250
Why can't we test it in a real pool?

We could set high difficulty (512) for about a week, then switch to lower difficulty and compare the results.

That would be a real examination of this topic.
sr. member
Activity: 414
Merit: 251
Could it be due to a mixture of cards forcing inefficient driver performance?
Perhaps there is a software bottleneck. I seem to remember some chatter about cards of different generations not playing well together, but I didn't pay much heed as I'm all 7950s & 70s, plus one 6950 on its own with the BIOS flashed.

If some mixtures don't work, maybe some are a bit iffy...
newbie
Activity: 31
Merit: 0
Although there is still one phenomenon I cannot explain. I have a HD 7950 and a HD 5830. The HD 7950 consistently gets 2-5% rejects, while the HD 5830 gets 6-12% rejects, even if I set clock and intensity on the HD 7950 such that the hashrates are equal, so it isn't dependent on hash rates.

I'm guessing that older cards have more latency. So to predict your rejection rate, you'd need to add up the ping to middlecoin.com, plus whatever latency is associated with your GPU.

That does seem to be the most reasonable conclusion. It's hard to imagine that the latency between the GPU and CGMiner is measurable compared to internet latency, but I'm no hardware expert.

[Edit] If that's true, then you'd think everyone running a 5830 would get similar results. Anyone else with an older card experiencing the same thing compared to a newer card?
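As a back-of-envelope sketch of that latency theory: if a share goes stale whenever the block changes during the effective delay between the card's work and the pool's, the expected reject fraction is roughly latency divided by average block time. The latency and block-time figures below are hypothetical, chosen only to show the shape of the relationship:

```python
# Back-of-envelope stale-share estimate: a share is rejected when the
# block changes during the effective delay between the work the miner
# is hashing and the pool's current work. The latency numbers below
# are made-up examples, not measurements.

def expected_reject_rate(effective_latency_s, block_time_s):
    """Fraction of shares expected stale for a given effective latency."""
    return effective_latency_s / block_time_s

block_time = 30.0  # seconds; many scrypt coins have fast blocks

for label, latency in (("newer card, ~1 s effective latency", 1.0),
                       ("older card, ~3 s effective latency", 3.0)):
    rate = expected_reject_rate(latency, block_time)
    print("%s: %.1f%% rejects" % (label, 100 * rate))
```

With fast block times, even a couple of seconds of extra per-card latency would be enough to explain a few percentage points of difference in reject rate.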
member
Activity: 94
Merit: 10
Thanks for doing that, liquidfire.

I should really learn numpy and scipy. They look fun to use. I could make use of them to make my selling bot better.

It really is useful. This is what I learned Python/Numpy/Scipy/Matplotlib with: http://www.math.ethz.ch/education/bachelor/lectures/fs2012/other/nm_pc/NumPhys_handout1.pdf
full member
Activity: 238
Merit: 119
Thanks for doing that, liquidfire.

I should really learn numpy and scipy. They look fun to use. I could make use of them to make my selling bot better.
full member
Activity: 238
Merit: 119
Although there is still one phenomenon I cannot explain. I have a HD 7950 and a HD 5830. The HD 7950 consistently gets 2-5% rejects, while the HD 5830 gets 6-12% rejects, even if I set clock and intensity on the HD 7950 such that the hashrates are equal, so it isn't dependent on hash rates.

I'm guessing that older cards have more latency. So to predict your rejection rate, you'd need to add up the ping to middlecoin.com, plus whatever latency is associated with your GPU.
member
Activity: 94
Merit: 10
Strange, it's hard to imagine that any minor difference between the cards' designs would affect their ability to get a share from the card to the pool that significantly.

Out of curiosity, is the 5830 also serving your monitor?

Nope, that's the 7950's job.
newbie
Activity: 31
Merit: 0
Yeah, they're both in the same machine, the only difference apart from the actual cards is which PCIe slot they're in, obviously. I do occasionally get HW errors on the HD 5830 (about 1 every 2-3 days), but I don't see the connection.

Strange, it's hard to imagine that any minor difference between the cards' designs would affect their ability to get a share from the card to the pool that significantly.

Out of curiosity, is the 5830 also serving your monitor?
member
Activity: 94
Merit: 10
Yeah, they're both in the same machine, the only difference apart from the actual cards is which PCIe slot they're in, obviously. I do occasionally get HW errors on the HD 5830 (about 1 every 2-3 days), but I don't see the connection.
sr. member
Activity: 414
Merit: 251
That must be something to do with getting the share off the card and onto the pool, no?
Though that is quite a difference, assuming no HW-error-type issues: same machine, switch, router, etc.
member
Activity: 94
Merit: 10
My mental roadblock (or rather what cleared it) was the fact that even in a lopsided distribution you can still have an average, but it will be very skewed toward the thick end of the plot. I needed to actually see it plotted before it clicked.

This is why I stuck it out; I thought I might help someone understand it better Smiley Thank you very much for the follow-up. I actually learned something too (at first I thought it was always wrong to calculate the times in advance, which, given the correct distribution of times between finding a share, it of course isn't).
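That lopsided-distribution point can be seen directly with numpy (a small sketch; the 60-second mean solve time is just an example):

```python
import numpy as np
import numpy.random as rnd

# The exponential distribution of times between shares is heavily
# lopsided, yet its average is perfectly well defined: most samples
# fall below the mean, and the long tail drags the mean above the median.
rnd.seed(0)
mean_solve_time = 60.0  # hypothetical mean seconds per share
samples = rnd.exponential(scale=mean_solve_time, size=1000000)

print("mean:   %.2f" % samples.mean())               # close to 60
print("median: %.2f" % np.median(samples))           # ~60 * ln(2), about 41.6
frac_below = (samples < mean_solve_time).mean()
print("fraction below the mean: %.3f" % frac_below)  # ~0.632 = 1 - 1/e
```

About 63% of the samples sit below the mean, which is exactly the "skewed toward the thick end" picture.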

Although there is still one phenomenon I cannot explain. I have a HD 7950 and a HD 5830. The HD 7950 consistently gets 2-5% rejects, while the HD 5830 gets 6-12% rejects, even if I set clock and intensity on the HD 7950 such that the hashrates are equal, so it isn't dependent on hash rates.
newbie
Activity: 28
Merit: 0
I am sure this will make some of you happy...

Mueslo - I finally just took your numpy.random.exponential and replaced my Poisson function with it in my own simulation. When I did, the numbers started coming out looking like they should (actual return close to expected). This backed up your claim that my distribution was wrong. As I started generating plots of each distribution, things started to make sense.

As hard in as I've been dug in on this, most people would just bug out, but it wouldn't be right of me not to tell you that you were right about the distribution. H2O tried to explain this to me too, and a couple of others.

My mental roadblock (or rather what cleared it) was the fact that even in a lopsided distribution you can still have an average, but it will be very skewed toward the thick end of the plot. I needed to actually see it plotted before it clicked.

But my simulation was still correct (yours was too), apart from the distribution, so I guess I learned I'm better at programming than at math :p

Anyway, thanks for actually trying to explain things to me and not being an ass like a couple others. I learned quite a bit researching all of this.


For the record:

import numpy.random as rnd

class Worker(object):
    def __init__(self, hashrate):
        self.hashrate = hashrate
        self.sharesolvetime = 60.0 / hashrate  # mean seconds to solve a share
        self.shares = 0

    def mine_round(self, clock):
        # Count shares found before the round ends after `clock` seconds;
        # times between shares are exponentially distributed.
        while clock > 0:
            sharesolve = rnd.exponential(scale=self.sharesolvetime)
            if sharesolve > clock:
                break
            self.shares += 1
            clock -= sharesolve

class Pool(object):
    def __init__(self, blockfindtime):
        self.blockfindtime = blockfindtime  # mean seconds to find a block

pool1 = Pool(30)
worker1 = Worker(1)
worker2 = Worker(12)
samplesize = 1000000

for n in range(samplesize):
    # Each round lasts until the pool finds a block (exponential time),
    # and both workers mine against the same clock.
    clock = rnd.exponential(scale=pool1.blockfindtime)
    worker1.mine_round(clock)
    worker2.mine_round(clock)

hashshare = float(worker1.hashrate) / (worker1.hashrate + worker2.hashrate)
profitshare = float(worker1.shares) / (worker1.shares + worker2.shares)
print "Worker 1 has: " + str(hashshare * 100) + ' percent of the hash power'
print "But worker 1 has: " + str(profitshare * 100) + ' percent of the profit'
print "Over sample size of " + str(samplesize)
print "When worker1's average share-find-speed was: " + str(float(pool1.blockfindtime) / worker1.sharesolvetime) + 'X the block-find-speed'




Very Slow

Worker 1 has: 7.69230769231 percent of the hash power
But worker 1 has: 7.69363865886 percent of the profit
Over sample size of 1000000
When worker1's average share-find-speed was: 8.33333333333X the block-find-speed

Slow

Worker 1 has: 7.69230769231 percent of the hash power
But worker 1 has: 7.6898534448 percent of the profit
Over sample size of 1000000
When worker1's average share-find-speed was: 4.0X the block-find-speed

Medium

Worker 1 has: 7.69230769231 percent of the hash power
But worker 1 has: 7.68459728742 percent of the profit
Over sample size of 1000000
When worker1's average share-find-speed was: 2.0X the block-find-speed

Fast

Worker 1 has: 7.69230769231 percent of the hash power
But worker 1 has: 7.68286758249 percent of the profit
Over sample size of 1000000
When worker1's average share-find-speed was: 1.0X the block-find-speed

Very Fast

Worker 1 has: 7.69230769231 percent of the hash power
But worker 1 has: 7.67015222587 percent of the profit
Over sample size of 1000000
When worker1's average share-find-speed was: 0.5X the block-find-speed
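A quick analytic cross-check of these results (a sketch, not part of the original simulation): because exponential inter-share times make share arrivals a Poisson process, a worker's expected shares per round is blockfindtime / sharesolvetime, so the expected share split equals the hashrate split at any difficulty:

```python
# Analytic cross-check of the simulation above: with exponential
# inter-share times, share arrivals form a Poisson process, so a
# worker's expected shares per round is blockfindtime / sharesolvetime
# and the expected share split equals the hashrate split exactly.
blockfindtime = 30.0
hashrates = {"worker1": 1.0, "worker2": 12.0}

expected = {}
for name, hashrate in hashrates.items():
    sharesolvetime = 60.0 / hashrate
    expected[name] = blockfindtime / sharesolvetime

total = sum(expected.values())
print("worker1 expected share fraction: %.6f" % (expected["worker1"] / total))
print("worker1 hashrate fraction:       %.6f" % (1.0 / 13.0))
```

The simulated percentages above wobble around this value purely through sampling noise; higher difficulty widens the wobble, not the expectation.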
newbie
Activity: 31
Merit: 0
Quote
There is one provision with that statement, it takes a relatively long time to average out the larger variance the small miners suffer.
This.

Quote
The drawback is that it also increases the day-to-day variance.  In the long-run, it doesn't matter, though.
This.

Quote
Just with a higher variance.
This.

Variance is what I've been mentioning in the original thread for days. High diff will not affect the pool's profit, but it will increase a small miner's variance.

h2o has previously stated that we are using a proportional system.  I agree that changing the payout algorithm to something like PPLNS would also help smooth out the variance.

Right now we have the worst possible setup for small miners - high diff with a payout calculation that does nothing to smooth out variance.

Over a long enough period of time (although I have no idea if that's weeks, months, etc) it shouldn't matter. But isn't the whole point of a pool to minimize variance?
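To put a number on that variance point (a sketch with hypothetical figures, not middlecoin data): daily share counts are roughly Poisson, so the relative day-to-day spread is about 1/sqrt(expected shares), and multiplying the difficulty by 32 multiplies that spread by sqrt(32), about 5.7x:

```python
import math

# Daily share counts are approximately Poisson, so the relative
# day-to-day spread (std/mean) is about 1/sqrt(expected shares).
# Raising difficulty cuts the share count and inflates the spread
# without changing the expected payout. The 800 shares/day figure is
# a hypothetical small miner, not middlecoin data.

def relative_spread(expected_shares):
    return 1.0 / math.sqrt(expected_shares)

shares_at_diff16 = 800.0
for diff in (16, 512):
    n = shares_at_diff16 * 16.0 / diff  # expected shares scale as 1/difficulty
    print("diff %3d: %6.1f shares/day, ~%4.1f%% daily spread"
          % (diff, n, 100 * relative_spread(n)))
```

Same expected income either way, but the day-to-day swings at diff 512 are several times larger, which is exactly the complaint about small miners on a proportional payout.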