We will shortly release an Ethash miner testing tool: a simple open source node.js project that lets you run controlled long-running tests on any miner. The reason this is important is that the displayed hashrates of Claymore and Phoenix are just bullshit. They both add +1.5% or more of fake hashrate.
Plenty of people have already pointed this out just by logging all shares over time, but without proper tooling it's really hard to prove anything conclusively.
Hashrates for 1 active connections:
Global hashrate: 228.51 MH/s vs 235.43 MH/s avg reported [diff -2.94%] (A247366:S1575:R0 shares, 93582 secs)
Global stats: 99.37% acc, 0.63% stale, 0% rej.
[1] hashrate: 228.51 MH/s vs 235.43 MH/s avg reported [diff -2.94%] (A247366:S1575:R0 shares, 93582 secs)
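The core calculation behind log lines like the ones above is simple. This is a minimal sketch in the same spirit as the tester (function names are mine, and the share difficulty is a placeholder since it varies by pool):

```javascript
// Derive an "effective" hashrate purely from submitted shares, then compare it
// to the hashrate the miner displays. Every share (accepted or stale)
// represents roughly shareDifficulty hashes of work on average.
function effectiveHashrate(acceptedShares, staleShares, shareDifficulty, seconds) {
  const totalShares = acceptedShares + staleShares;
  return (totalShares * shareDifficulty) / seconds; // hashes per second
}

// Relative deviation between share-derived and reported hashrate, in percent.
function deviationPct(effective, reported) {
  return ((effective - reported) / reported) * 100;
}

// With the numbers from the log above (in MH/s for readability):
// deviationPct(228.51, 235.43) is about -2.94%.
```

Over a long enough run, this share-derived figure is the only hashrate that actually pays you.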
Look at that: a diff of -2.94% between the reported and the accepted hashrate. What a coincidence, isn't that statistically about the size of the fee? A coding error and simple oversight at best, or willful incompetence at worst.
Miners typically calculate hash rate by counting hash iterations over time. It's a fair way to measure the miner's performance, but there's no direct connection to share submission. You're trying to prove your point statistically when there is a precise way to do it, if you can find the hash loop code in the kernel.
Man, I apologize, but there's just no beating around the bush here: this is maybe the most inaccurate statement I've ever seen on this forum. You just invalidated the whole world of PoW:
it's a fair way to measure the miner's performance but there's no direct connection to share submission.
The connection is exactly the opposite: they are directly and fully connected, and that connection is the basis for EVERYTHING related to proof-of-work. When you calculate N hashes for as many nonces, then given a specific difficulty we can say _exactly_ how many shares matching that difficulty we are _expected_ to find. This is the very essence of proof-of-work, and there are no approximations involved. For a small N, the variance in the _actual_ outcome is high: sometimes we find zero shares, sometimes we find more than expected. As N grows larger and larger, the relative error between the number of shares we are _expected_ to find and the number of shares we _do_ find becomes smaller and smaller. With proven theorems, we can show that after e.g. 1 million shares, we can estimate N from the number of shares found and the difficulty to within +-0.33% at a 99% confidence level.
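To make the statistics concrete, here is a hedged sketch of the two quantities involved. Share finding is a Bernoulli process (each hash is a share with probability 1/d, where d is the share difficulty in hashes), so the share count over N hashes is approximately Poisson. Function names and the z value are my own; a textbook z = 2.576 (99% two-sided) gives roughly +-0.26% after a million shares, and the +-0.33% quoted above corresponds to a slightly more conservative cut-off:

```javascript
// Expected number of shares when scanning nHashes at share difficulty d.
function expectedShares(nHashes, shareDifficulty) {
  return nHashes / shareDifficulty;
}

// Relative half-width (in percent) of the confidence interval on a hashrate
// estimated from kShares observed shares, via the normal approximation to the
// Poisson distribution: standard deviation ~ sqrt(k), so relative error ~ z/sqrt(k).
function relativeErrorPct(kShares, z = 2.576) {
  return (z / Math.sqrt(kShares)) * 100;
}

// relativeErrorPct(1e6) is about 0.26, i.e. roughly +-0.26% after 1M shares.
```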
So, I'm not _trying_ to prove my point statistically, I am indeed proving it. And yes, as mentioned above, I've also reverse engineered both miners to verify the claims. The reason I am proving it statistically instead of doing open surgery in public on closed source miners has already been answered above: it's a matter of principle, and I'm not going to hand out closed source intellectual property to the world. If you want to hack miners, do so yourself. The whole point is that the statistical test is just as effective down to a certain bounded interval that we can prove, but you are absolutely correct that tracking the OpenCL enqueue calls would be more effective and would provide the answer with 100% accuracy. However, since we can run tests where the deviation lands 5x outside our 99% confidence interval, you can be damn sure things are seriously off without having to hack any miners.
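The significance check itself is one line of arithmetic. A sketch, under the same Poisson model as above (the function name is mine, not from the tool):

```javascript
// How many standard deviations separate the shares a miner actually found
// from the shares its claimed hashrate implies it should have found?
// expected = claimedHashes / d; Poisson standard deviation = sqrt(expected).
function shareDeviationSigmas(claimedHashes, sharesFound, shareDifficulty) {
  const expected = claimedHashes / shareDifficulty;
  const sigma = Math.sqrt(expected);
  return (sharesFound - expected) / sigma;
}
```

A result several sigma below zero, sustained over a long run, is exactly the "5x outside the confidence interval" situation: far too unlikely to be variance.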
Please note that I have not made any claims as to knowing _why_ this is the case. If you're a good-hearted person, you are free to believe it's all just a little coding mistake. For some reason, all other miners in the world manage to do these calculations correctly (and they are darn trivial), but only the two biggest closed source miners for crypto's largest-emission GPU-mineable coin happen to have a little bug or two that makes e.g. Nicehash run their miners instead of the alternatives. Right. That's beside the point I'm trying to make, though: when comparing miners by their displayed hash rates, if you don't understand that certain miners include a significant amount of BS hashrate, that comparison is very much invalid.
Anyway, we will soon be releasing this little tester tool and everyone can run it for themselves. There will most probably be a range of "experts" expressing doubts about the statistical test, but what can you do? At the end of the day, you really just have to run this miner (or the open source Ethminer) instead, on a large enough farm, and the numbers will be visible poolside as well. Unfortunately, you need a farm in the TH/s range to really get a tight enough continuous sample on e.g. Ethermine, which is why I linked you to an example wallet above.
As a final little example, this is how the open source Ethminer converges after 900k shares on the same rig at the same clocks as the Phoenix example above:
[[20191114 08:23:17.421]] [LOG] Hashrates for 1 active connections:
[[20191114 08:23:17.421]] [LOG] Global hashrate: 230.97 MH/s vs 230.99 MH/s avg reported [diff -0.01%] (A908252:S6201:R0 shares, 85023 secs)
[[20191114 08:23:17.421]] [LOG] Global stats: 99.32% acc, 0.68% stale, 0% rej.
[[20191114 08:23:17.421]] [LOG] [1] hashrate: 230.97 MH/s vs 230.99 MH/s avg reported [diff -0.01%] (A908252:S6201:R0 shares, 85023 secs)
There just isn't any black magic to this process. Ethminer has no dev fee and no bullshit implementations. If it scans 230.99 MH/s x 85023 secs = 19.639 THashes, it will find the expected number of shares within a very tight range. The same goes for every miner that isn't faking any numbers. It really is that trivial, unless you're running the worst PoW algo in history with seriously skewed entropy, which isn't the case for any of the major algos out there. In this case, Phoenix miner underperformed the open source miner by more than 1% under the same conditions. When people look at the displayed hash rates, though, they will see Phoenix winning by +1.9%. That's how you corner a market of miners who have no tools to verify they're making the optimal choice.
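You can redo that arithmetic yourself from the logged values (this snippet just reproduces the numbers quoted above, nothing more):

```javascript
// Ethminer log values from the run above.
const reported = 230.99e6; // avg reported hashrate, H/s
const seconds = 85023;     // run length from the log

// Total scanned work: ~19.639 THashes, matching the figure in the text.
const totalHashes = reported * seconds;

// Reported-vs-effective deviation, in percent.
function diffPct(effective, reported) {
  return ((effective - reported) / reported) * 100;
}

// Ethminer:  diffPct(230.97e6, 230.99e6) -> about -0.01%
// Phoenix:   diffPct(228.51e6, 235.43e6) -> about -2.94%
```

Two miners, same rig, same clocks; one converges to its displayed number, the other converges to its displayed number minus roughly the fee.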