Lately I have gone through a lot of logs and hashes/txids. I'll get all of this confirmed once I get the block explorer running (it's currently loading the blockchain into its database, and adding 100k+ entries takes some time).
Preliminary analysis of the settings change reads as follows:
During blocks 109500-109680 (the subsidy on new clients changed at 109500, and the first retarget with the new settings was at 109680), the blockchains of the new and old clients forked and merged several times. Fortunately no fork was long enough to create the possibility of a double spend; according to my logs the longest one was:
REORGANIZE: Disconnect 7 blocks; e3c80b289823f910f1ca..f47a9176221c31ee0330
REORGANIZE: Connect 8 blocks; e3c80b289823f910f1ca..235c65d1c0736a264864
REORGANIZE: done
SetBestChain: new best=235c65d1c0736a264864 height=109656 work=63134248907490 date=07/27/13 23:31:34
As the first block in the reorganize is the same on both sides, that leaves only 6 blocks that actually got disconnected. So the worst case is that the block right after e3c80b289823f910f1ca contained a transaction, leaving only 5 blocks to confirm that transaction before the tx got removed from the blockchain.
Quote: "anyone else having high orphan/reject since changeover i have had nothing but"
This issue was real, I just didn't understand the hows and whys back then. Now I'm wiser. The problem lies here:
if (vtx[0].GetValueOut() > GetBlockValue(pindex->nHeight, nFees)) return false;
For those who are not into coding: this basically checks that the block we just received from the network doesn't have a greater block value than it should. Now, as the old client's block subsidy is 40 PWC, it is smaller than the new client's 60 PWC. So blocks generated by the old client got accepted by the new one, but not vice versa. Before the first retarget, the old clients (around 3 MH/s) had more hashing power behind them than the new ones (around 2 MH/s). This led to a situation where the old clients constantly overran blocks generated by the new clients. At block height 109680 the chains finally forked, because the difficulty was different between the clients.
In theory this could happen again if the difficulties were exactly the same, though I find that very, very unlikely. I obviously didn't get this case covered during the testing phase. Some of it was just bad luck: I ran the old and new client side by side on testnet, each hashing with one core, and the old client never got ahead of the new one (though I used smaller variables back then, so the gap between the subsidy change and the retarget wasn't as large as in the live launch). Still, it was totally my fault not to cover this test case.
I'm now working on getting the new client onto a separate network.
Lesson learned:
- change all variables at the same time.
- make changes depend on time rather than block height, as hashrate etc. is not easy to predict.