Ya, seems the dev v2 pool is also stuck/forked... Is pool software the root cause? Or does this forking issue lie within the Burst protocol? I would think developing/updating "new pool software" as a work-around for a deeper issue is the wrong approach to take here, especially since most pool miners would then be incentivised to mine on said pool, which doesn't help at all with decentralization.
I'm solo mining, and since 1.2.6 came out my main burst client (the one I mine against) has been on a fork 4 times.
I also run another named burst client to support the network, without local miners; it got forked once.
Interestingly, my database folders contain a "burst.mv.db" of 7-9 GB (9 GB right after syncing, shrinking to ~7 GB after a few hours).
The downloaded db.zip contains a "burst.h2.db" of ~3.5 GB. I always sync from the network; I only pulled the file out of curiosity and haven't started a wallet with it yet.
Why the different naming and size?
IIRC it has something to do with the way the multithreaded wallet handles the db.
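If it helps clarify the naming: the H2 database the wallet uses can persist data in two on-disk formats. In the 1.4.x series the newer MVStore engine writes a `*.mv.db` file, while the older PageStore engine writes a `*.h2.db` file, and which one you get is controlled by the `MV_STORE` flag in the JDBC URL. The size gap isn't surprising either, since an MVStore file tends to grow until it gets compacted. A minimal sketch of the difference (the file names and paths here are just placeholders, not the wallet's actual configuration; assumes an H2 1.4.x driver on the classpath):

Code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class H2FormatDemo {
    public static void main(String[] args) throws SQLException {
        // MVStore engine (default in recent H2 1.4.x): creates burst.mv.db
        try (Connection mv = DriverManager.getConnection(
                "jdbc:h2:./burst;DB_CLOSE_ON_EXIT=FALSE", "sa", "")) {
            System.out.println("Opened MVStore-backed db -> burst.mv.db");
        }

        // Legacy PageStore engine: creates burst_legacy.h2.db
        try (Connection page = DriverManager.getConnection(
                "jdbc:h2:./burst_legacy;MV_STORE=FALSE;DB_CLOSE_ON_EXIT=FALSE", "sa", "")) {
            System.out.println("Opened PageStore-backed db -> burst_legacy.h2.db");
        }
    }
}

So the downloaded db.zip was most likely exported by a wallet configured for the older PageStore format, while a fresh sync with the current wallet writes the MVStore file.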
Also @includebeer - we haven't been able to pin down the 'root cause' of the issues yet, but they lie within the wallet. Interestingly, they don't seem to happen unless there's a miner (or a bunch of them) pointed at the wallet.
We resolved the fork issue by using the older burst.jar from version 1.2.3. These are temporary fixes until we can reproduce the error for the devs and have them fix the actual issue.
The new pool software will be available to everyone, so the issue with decentralization is a non-issue.
The pool software is there to alleviate the load on the wallets so the issue doesn't rear its head in a production environment while we work on resolving the actual issue.
So far, the devs haven't been able to recreate the problem, since it only seems to happen on pool wallets. We're open to any ideas from anyone about what the 'real issue' could be.
As far as I'm concerned it has to have something to do with the multithreading that was added to the wallet, since the forking only started after that. But since it's seemingly random and we're not able to reproduce it on a testnet, we're setting up some workarounds for now while we investigate the issue further.
I personally don't think there's an issue with the way we're going about it whatsoever.
Ok, thanks for the explanation, I have a much better understanding now. If this defect started appearing after multi-threaded handling was added, I'd start by looking at thread collisions. Likely, the app isn't (completely) thread safe. If race conditions aren't being handled properly, it could lead to exactly the kind of problems we're seeing. The best part: you don't need a production environment to test this. There are a number of thread watchers and debuggers out there that let a dev view, debug, and catch such scenarios as they happen.
Source: I'm a software engineer by profession.
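To make the race-condition point concrete, here's a minimal sketch. It's purely illustrative and has nothing to do with the wallet's actual code: two threads bump a shared "height" counter, once unsafely and once atomically. The plain increment is a read-modify-write, so the two threads can interleave and lose updates:

Code:

import java.util.concurrent.atomic.AtomicInteger;

public class RaceDemo {
    // UNSAFE: plain int, updated from multiple threads with no synchronization
    private static int unsafeHeight = 0;
    // SAFE: AtomicInteger makes each increment a single atomic operation
    private static final AtomicInteger safeHeight = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable pushBlocks = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeHeight++;              // read-modify-write: threads can interleave here
                safeHeight.incrementAndGet(); // atomic: no lost updates
            }
        };
        Thread t1 = new Thread(pushBlocks);
        Thread t2 = new Thread(pushBlocks);
        t1.start(); t2.start();
        t1.join(); t2.join();

        // Both should read 200000; the unsafe counter usually comes up short
        System.out.println("unsafe height: " + unsafeHeight);
        System.out.println("safe height:   " + safeHeight.get());
    }
}

Run the unsafe version under a thread analyzer or a debugger with data-race detection and it gets flagged immediately; that's the kind of offline check I was referring to above, no production pool required.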