This testing in testnet couldn't be having some impact on the pool mining, could it? It seems as if the leaderboard is barren and miners are either unable to connect or the pool is not showing them as connected.
Investigating...
OK, so fortunately I have the forensic data from the pool that explains this situation. When the pool worked through the GSC superblock height @ 09:30 PM CST last night, it didn't have the govobj in its govobj cache, so it didn't believe the GSC superblock was valid and rejected it.
2019-07-24 02:24:47 CMasternodeMan::CheckAndRemove -- Masternodes: 184, peers who asked us for Masternode list: 0, peers we asked for Masternode list: 0, entries in Masternode list we asked for: 0, nDsqCount: 0
2019-07-24 02:24:48 block.vtx[0]->GetValueOut() 4270 <= blockReward 4270
2019-07-24 02:24:48 IsSuperblockTriggered::SmartContract -- WARNING: No GSC superblock triggered at this height 133680.
IsSuperblockTriggered::SmartContract -- WARNING: No GSC superblock triggered at this height 133680.
Memorizing prayers tip height 133679 @ time 1563935088 deserialized height 0 ...Finished MemorizeBlockChainPrayers @ 1563935088 EGSCQP 133680.000000 CHAIN_NOT_SYNCED
UpdateTip: new best=d850e653f734dea6db8741b4b45b836a1d23e7df2d3af24313a7d5e255241d70 height=133680 version=0x20000000 log2_work=59.51628264 tx=1101419 date='2019-07-24 02:24:37' progress=1.000000 cache=6.9MiB(0txo)
2019-07-24 02:24:48 AddToWallet 89caca51d79e173646a21e4d8b18c9e7dccb25bc133a8c37a9db2c3e333bffcf update
2019-07-24 02:24:48 {PNB}: ACC Prayer Modulus 0 Prayer Modulus 0 complete
ERROR: AcceptBlockHeader: prev block not found
2019-07-24 02:25:05 ERROR: ProcessNewBlock: AcceptBlock FAILED: bad-prevblk (code 0)
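To make the failure mode concrete, here is a minimal sketch of why an empty govobj cache leads to exactly this log pattern. This is not the actual BiblePay source; the cache container, struct fields, and IsBlockValueValid are illustrative stand-ins for the Dash-derived governance code, and only IsSuperblockTriggered is a name taken from the log above.

#include <cstdint>
#include <map>
#include <string>

// Illustrative stand-in for the node's in-memory governance cache, which is
// only filled by gobject propagation from peers.
struct GovObj { std::string sHash; int nEventBlockHeight = 0; };
static std::map<std::string, GovObj> mapGovObjCache;

// Simplified version of the check the log calls IsSuperblockTriggered:
// if no trigger gobject for this height ever arrived, the node concludes
// there is no GSC superblock at this height.
bool IsSuperblockTriggered(int nBlockHeight)
{
    for (const auto& item : mapGovObjCache) {
        if (item.second.nEventBlockHeight == nBlockHeight) return true;
    }
    return false;
}

// Consequence: with no trigger in the cache, only the ordinary block reward
// is allowed at the superblock height, so the real GSC superblock (with its
// larger payout) is rejected; the bad-prevblk errors above follow once later
// blocks build on the block the pool refused.
bool IsBlockValueValid(int nBlockHeight, int64_t nValueOut, int64_t nBlockReward)
{
    if (IsSuperblockTriggered(nBlockHeight)) return true;
    return nValueOut <= nBlockReward;
}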
(I'm resyncing the pool now).
As for our current network state:
The chainz and the explorer.biblepay explorers appear to be correct, and SX is correct. My sanctuary report is 98% correct; however, I see that one sanc has chosen a shorter chain with low diff.
(If everyone could please double-check their hash against one of our explorers: just make sure your diff is > 2000 right now and the hash matches.)
I've seen this happen to the pool specifically a few times now. We are going to need to address this issue permanently, as it's unacceptable.
We need 100% accuracy going forward.
I'll investigate this gobject propagation failure and make it a higher priority than our next release and our next set of features, and I will put a plan in place that gives us a failsafe method to receive the data for the GSC superblock. (I feel as if the older protocol (when we had PODC) was much more resilient, in that nodes missing the contract recovered.) I believe we have more of a propensity for nodes to miss gobjects in the current environment. If we need an emergency patch that puts the gobject in memory (i.e., an emergency sync) within 20 blocks of the GSC height, then we will need to have that in place, as it's unacceptable for any node to miss the govobj sync when 98% of our nodes have the object in memory. A rough sketch of that failsafe follows below.
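Roughly what that emergency sync could look like (a minimal sketch only, under my own assumptions: GetNextGSCHeight, RequestGovObjsFromPeers, the trigger-height set, and the cycle length are all placeholders, not existing BiblePay code):

#include <cstdio>
#include <set>

// All names and values below are placeholders standing in for real chain and
// governance state; a real patch would hook into the existing P2P sync.
static std::set<int> setTriggerHeightsInCache;  // GSC heights we hold a trigger for
static const int GSC_CYCLE_BLOCKS = 205;        // assumed daily GSC spacing (placeholder)
static const int EMERGENCY_SYNC_WINDOW = 20;    // blocks before the superblock height

static int GetNextGSCHeight(int nHeight)
{
    // Next multiple of the GSC cycle strictly above the current height.
    return ((nHeight / GSC_CYCLE_BLOCKS) + 1) * GSC_CYCLE_BLOCKS;
}

static void RequestGovObjsFromPeers(int nSuperblockHeight)
{
    // In a real patch this would re-issue governance sync requests to peers.
    std::printf("Emergency gobject sync requested for height %d\n", nSuperblockHeight);
}

// Called on each new tip: if we are within the window of the next GSC height
// and still hold no trigger gobject for it, force a re-sync instead of
// waiting for normal propagation.
void CheckEmergencyGobjectSync(int nChainHeight)
{
    int nNextGSC  = GetNextGSCHeight(nChainHeight);
    bool bClose   = (nNextGSC - nChainHeight) <= EMERGENCY_SYNC_WINDOW;
    bool bMissing = setTriggerHeightsInCache.count(nNextGSC) == 0;
    if (bClose && bMissing) RequestGovObjsFromPeers(nNextGSC);
}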
PS: One very interesting thing I found: the sanc that went out of sync is the exact same sanc that went out of sync during the last instance. I find it extremely odd that, after deleting the chain files and dat files, the same sanc goes out of sync. It's almost as if one particular network segment is getting hit (unless I forgot to delete banlist.dat, but those bans usually expire in one day). We'll get to the bottom of this; we will search for the actual gobject hash.
PS2: The great news is that I can see this is not a network banlist issue. I can see the winning superblock govobj hash from 133680 was e317c284687997da7ea8994adcb40b3ac1304b2315afee156df098db0ee0ed01, and I can see in the pool's log that when the pool received this object, it was rejected:
CGovernanceManager::MasternodeRateCheck -- Rate too high: object hash = e317c284687997da7ea8994adcb40b3ac1304b2315afee156df098db0ee0ed01, masternode = 4882a52f01fd0f1c93ce58f7a8372f8214396cab5bbdd1080e564099e644b6f3-1, object timestamp = 1563849021, rate = 0.000005, max rate = 0.000004
This is great news, because it means we can find the root cause and correct it. I'm confident that all of our GSC problems from day 1 are similarly related (including all forks): this is a gobject replication issue. It stems from the fact that when distinct sanctuaries submit new GSC triggers each day there is no propagation issue, but when the same sanc creates a new gobject within N days (probably 7 or so, based on the math of the superblock cycle), it causes the rest of the network to refuse to add that gobject and all of its child votes. That has been elusive, but now we have the information to fix it. I also believe this can be fixed in the core code without any workarounds, which makes me very happy.
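To put rough numbers on the rate check above: 0.000005 works out to about one trigger every 200,000 seconds (~2.3 days) from that sanc, while the cap of 0.000004 is about one every 250,000 seconds (~2.9 days), so a repeat submission from the same sanc inside the cycle trips the limit while rotating submitters never would. A tiny illustration, assuming (as in the Dash-derived code this appears to be based on) that the rate is measured per masternode in trigger objects per second:

#include <cstdio>

int main()
{
    // Values taken from the pool's log line above.
    double dRate    = 0.000005;  // observed: roughly 1 trigger per 200,000 s (~2.3 days)
    double dMaxRate = 0.000004;  // allowed:  roughly 1 trigger per 250,000 s (~2.9 days)

    // A second trigger from the same sanc within the cycle shortens the
    // elapsed time between its submissions, pushing dRate over dMaxRate, so
    // the gobject (and its child votes) is refused. Distinct sancs are rated
    // separately, so rotating submitters never hit the cap.
    std::printf("rejected by rate check: %s\n", dRate > dMaxRate ? "yes" : "no");
    return 0;
}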