I agree with all of this, except that I think the pool is receiving the correct number of blocks for the userbase's network hash level, as I really feel that only 5% of our base came over. Since we haven't actually advertised it as ready for prod yet, I think 95% is still solo mining, but then I could be wrong about that.
Yeah, that's what I meant. The pool was finding too few blocks relative to the network hash rate, meaning that the network hash rate estimate must be wrong. Network hash rate approximations can be off, but everyone is mining against the same algorithm, so the rate at which you're finding blocks over a statistically relevant period of time is the most objective reference.
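For reference, here's the back-of-the-envelope arithmetic I'm leaning on, purely as a hypothetical sketch. It assumes Bitcoin-style difficulty semantics, where one block takes roughly difficulty x 2^32 hash attempts on average; the function and names are mine, not from the wallet:

```cpp
#include <cstdint>
#include <cmath>

// Hypothetical helper: back out an implied network hash rate from blocks found.
// Assumes Bitcoin-style difficulty (expected hashes per block ~= difficulty * 2^32).
double ImpliedNetworkHashRate(uint64_t blocksFound, double avgDifficulty,
                              double elapsedSeconds)
{
    const double hashesPerBlock = avgDifficulty * std::pow(2.0, 32);
    // Total expected hash attempts divided by elapsed time = hashes/second.
    return (static_cast<double>(blocksFound) * hashesPerBlock) / elapsedSeconds;
}
```

If the pool's share of blocks over a long window doesn't match its share of this implied rate, the reported network hash rate is the number to distrust.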
A little off-topic, but about the time spent on X11 vs. time spent on Biblehash: since X11 comes first and is independent of Biblehash, do you think it's possible someone could build a hybrid GPU miner? Using a GPU to find X11 solutions and simultaneously feeding those into the CPU to run Biblehash on? Whether there'd be any benefit to that would likely depend on where the CPU is spending the majority of its time, I suppose. Since it sounds like Biblehash is the bottleneck, it's probably not a concern, but if it turns out that the opposite is true then it could be an issue.
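Something like this is what I have in mind, purely as a sketch; GpuFindX11Candidate, BibleHashPasses, and SubmitBlock are hypothetical stand-ins, not anything in the actual miner:

```cpp
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>
#include <vector>

// Hypothetical stand-ins for the two stages:
std::vector<uint8_t> GpuFindX11Candidate();               // GPU scans nonces, returns a header passing X11
bool BibleHashPasses(const std::vector<uint8_t>& header); // expensive CPU Biblehash check
void SubmitBlock(const std::vector<uint8_t>& header);

std::queue<std::vector<uint8_t>> g_candidates; // headers that already met the X11 target
std::mutex g_mtx;
std::condition_variable g_cv;

// GPU side: only forward headers whose X11 hash meets the target.
void GpuProducerLoop()
{
    for (;;) {
        std::vector<uint8_t> header = GpuFindX11Candidate();
        {
            std::lock_guard<std::mutex> lock(g_mtx);
            g_candidates.push(std::move(header));
        }
        g_cv.notify_one();
    }
}

// CPU side: spend Biblehash cycles only on pre-filtered candidates.
void CpuConsumerLoop()
{
    for (;;) {
        std::unique_lock<std::mutex> lock(g_mtx);
        g_cv.wait(lock, [] { return !g_candidates.empty(); });
        std::vector<uint8_t> header = std::move(g_candidates.front());
        g_candidates.pop();
        lock.unlock();
        if (BibleHashPasses(header))
            SubmitBlock(header);
    }
}
```

Any benefit hinges on how much CPU time X11 currently eats; if Biblehash dominates, offloading X11 buys almost nothing.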
Also, it looks like the X11 hash and the Biblehash hash are being compared against the same hashtarget value in the miner code. Do you know if the difficulty readjustment takes this into consideration? This is way outside my field of knowledge, so maybe I'm thinking about this wrong, but imagine a scenario like this:
>At difficulty X, 1 out of 100 hashes are solutions:
>0.01 probability of finding an X11 solution
>0.0001 probability of finding an X11 solution that also produces a Biblehash solution
>At difficulty 2X, 1 out of 200 hashes are solutions:
>0.005 probability of finding an X11 solution
>0.000025 probability of finding an X11 solution that also produces a Biblehash solution
So, the difficulty adjustment system might have expected this to double the expected work per block, but doubling the difficulty would actually increase it by 4 times. Just for the record, I haven't looked at the code related to calculating difficulty at all, I'm just wondering. -edit- Block time probably isn't a good term to use there; I mean the retarget doubles the difficulty so the block time would remain consistent after readjusting for a doubled network hash rate.
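Spelling the arithmetic out (my notation; it assumes the X11 and Biblehash outcomes are independent and both compared against the same target T):

```latex
% p = probability that a single hash meets the shared target T
P_{\mathrm{block}}(T) = P_{\mathrm{X11}\le T} \cdot P_{\mathrm{PoBh}\le T} \approx p^{2}
% Halving the target (doubling difficulty) halves each factor:
P_{\mathrm{block}}(T/2) \approx \left(\frac{p}{2}\right)^{2} = \frac{p^{2}}{4}
```

So halving the target quarters the joint solution rate (exactly the 0.0001 to 0.000025 drop in the numbers above), meaning a 2x retarget buys 4x the expected work per block.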
Yeah, you must be reading my mind. Good finds/observations.
Well, this is sort of an eye-opening experience, actually. The difficulty readjustment does not make independent adjustments for each algorithm.
I agree; given the test harness metrics, someone with a lot of time on their hands could offload the X11 hash (ever since we released the wallet with the faster hashing speed and the outer X11 loop).
I mentioned two things in the next mandatory that would make the wallet smoother: preventing the block clumping (that's a constant retarget), and lowering the stuck-block threshold.
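For illustration, a constant retarget could look something like this; the spacing goal, the clamp bounds, and the names are assumed values for the sketch, not what's in the wallet:

```cpp
#include <algorithm>
#include <cstdint>

static const int64_t nTargetSpacing = 7 * 60; // assumed spacing goal in seconds

// Retarget every block: scale the previous target by actual vs. desired solve
// time, clamping the actual spacing so a single outlier block can't whipsaw
// the difficulty. A bigger target means an easier block.
uint64_t NextTarget(uint64_t oldTarget, int64_t nActualSpacing)
{
    const int64_t clamped = std::max(nTargetSpacing / 4,
                                     std::min(nActualSpacing, nTargetSpacing * 4));
    return oldTarget / static_cast<uint64_t>(nTargetSpacing)
                     * static_cast<uint64_t>(clamped);
}
```

Because every block nudges the target, a run of fast blocks gets damped immediately instead of piling up until the next retarget window, which is the clumping effect.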
I now want to add: tackle the diff readjustment per algo, and remove the difficulty from the X11; in other words, we should put all of the onus for solving the block on the PoBh difficulty level only. I believe we can accomplish this by lowering the X11 difficulty to a minuscule amount and testing this in testnet.
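Roughly what I'm picturing for the check, as a sketch only (identifiers are mine, and I'm using 64-bit values in place of the real 256-bit target arithmetic):

```cpp
#include <cstdint>
#include <limits>

// Minuscule X11 difficulty = an almost-maximal target, so X11 nearly always passes.
const uint64_t TRIVIAL_X11_TARGET = std::numeric_limits<uint64_t>::max() - 1;

// Only the Biblehash (PoBh) comparison is governed by the retargeted difficulty.
bool CheckDualHash(uint64_t x11Hash, uint64_t bibleHash, uint64_t pobhTarget)
{
    if (x11Hash > TRIVIAL_X11_TARGET)
        return false;               // effectively never trips: X11 no longer gates blocks
    return bibleHash <= pobhTarget; // PoBh difficulty is the only real gate
}
```

That way the retarget steers a single probability instead of the product of two, which should make the doubling-quarters problem above go away.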
I would consider this an emergency, but we need to test thoroughly, and we don't want to tick off ccex. We need to give them at least two weeks' notice.
I'll get to work on a new testnet version for us with these mandatory features, ready by Friday for testing; then over the weekend we can announce an upgrade to them for about 14 days later.
In the meantime, I believe the pool problem will be fixed; it passed UAT, and now I need to compile the Windows wallet.