Weird. I get why the fix works, but I think the source of the problem may be that I put the "finished := false" inside the for loop by accident. The reason I delay the break is that if the config doesn't have an empty line at the end, it looks like it'd break before actually parsing the last line. If you move the "finished := false" to before the for loop and remove the break you inserted, does it still cause a problem on Linux? Am I mistaken about the early break not causing problems on config files with no empty final line?
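To illustrate what I mean, here's a minimal sketch of the delayed-break pattern, assuming a bufio.Reader-based parser (not the actual config code): ReadString returns the final line together with io.EOF when there's no trailing newline, so the break has to wait until that line has been parsed.

package main

import (
	"bufio"
	"fmt"
	"io"
	"strings"
)

func main() {
	cfg := "key1=a\nkey2=b" // note: no empty final line
	r := bufio.NewReader(strings.NewReader(cfg))

	finished := false // declared BEFORE the loop; inside, it would reset on every pass
	for !finished {
		line, err := r.ReadString('\n')
		if err == io.EOF {
			finished = true // delay the break so this final line still gets parsed
		} else if err != nil {
			break
		}
		if line = strings.TrimSpace(line); line != "" {
			fmt.Println("parsed:", line)
		}
	}
}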
EDIT: I made a mistake that was included in the first version I uploaded to GitHub. I've since corrected it, but if you downloaded a version before the fix, this might be the issue.
https://github.com/nixed18/combfullui/commit/3074fbd709e9602eeaccd639dfcb63dcad7dee66
If you've still got that "continue" on line 70, that's what's causing the permaloop.
Now, the concept of pulling blocks over RPC is solid; I didn't know it was practical or doable. Great job there.
That said, the implementation has its flaws; I fixed two of them.
First of all, shutting down goroutines using the run boolean had a race, so I've added an RWMutex to guard it.
I've also changed the boolean to an integer, so that when we later want to start things again we just keep incrementing it on every startup or shutdown.
The goroutine just compares its own copy of the integer to know whether it should keep running.
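A rough sketch of the pattern, with illustrative names (Miner, runGen) rather than the actual code:

package main

import (
	"fmt"
	"sync"
	"time"
)

type Miner struct {
	mu     sync.RWMutex
	runGen int // bumped on every startup or shutdown
}

// Start bumps the generation and launches a worker bound to that generation.
func (m *Miner) Start() {
	m.mu.Lock()
	m.runGen++
	gen := m.runGen
	m.mu.Unlock()

	go func(myGen int) {
		for {
			m.mu.RLock()
			current := m.runGen
			m.mu.RUnlock()
			if current != myGen {
				return // a newer start/stop happened; exit cleanly
			}
			// ... do one unit of mining work ...
			time.Sleep(100 * time.Millisecond)
		}
	}(gen)
}

// Stop bumps the generation so any running worker sees a mismatch and exits.
func (m *Miner) Stop() {
	m.mu.Lock()
	m.runGen++
	m.mu.Unlock()
}

func main() {
	m := &Miner{}
	m.Start()
	time.Sleep(300 * time.Millisecond)
	m.Stop()
	fmt.Println("stopped")
}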
My understanding was that, because it's creating a new Index instance for each mine_start() to use as a reference, mine_start() instance 1 and mine_start() instance 2 (and all their respective sub-routines) would never be referencing the same run value. Was I incorrect on this?
Secondly, the parsing of the bitcoin block had a serious problem: the block/transaction was getting parsed into map[string]interface{}.
The problem is that Go maps don't guarantee any ordering by key when looped over.
This could've caused reordering of commitments under some conditions. Whoops.
So I've put in a Block struct that contains a slice of transactions, and each transaction contains a slice of outputs. Problem solved, as slices are ordered.
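Roughly this shape, assuming Bitcoin Core's getblock (verbosity 2) JSON field names; the actual structs may differ:

type Output struct {
	Value        float64 `json:"value"`
	ScriptPubKey struct {
		Hex string `json:"hex"`
	} `json:"scriptPubKey"`
}

type Tx struct {
	Txid string   `json:"txid"`
	Vout []Output `json:"vout"` // slice preserves output order
}

type Block struct {
	Hash   string `json:"hash"`
	Height uint64 `json:"height"`
	Tx     []Tx   `json:"tx"` // slice preserves transaction order
}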
You should recreate your databases after this fix just to be sure.
Yea, that'd be bad. I haven't had it happen yet; all my fingerprints (that I've tested, at least) have matched up, but I didn't know that was a possibility. More reading for me lol.
EDIT: It's been a while since I wrote the initial interp code, but from reading it again it doesn't look like it's pulling the relevant TX sections into map[string]interface{}, but into []interface{}. This was done both for the initial array of TXes in the block and for the array of outputs in the TX (https://github.com/nixed18/combfullui/blob/master/miner.go, lines 420 and 427). Even if I'm right and it's stable as is, it's still probably worth moving over to a struct system; from what I've read it's faster than using map[string]interface{}.
EDIT2: Could I see your struct implementation? I've been unable to reuse make_bitcoin_call() without having it output an interface/map[string]interface{} value, and at that point I can't convert back to a struct without marshalling and unmarshalling the JSON a second time. That seems slow.
EDIT3: I was able to do it by unmarshalling the "result" JSON into a json.RawMessage, then unmarshalling again. No clue if it's faster, but it works, and it lets make_bitcoin_call() be used for multiple return values, so that's cool.
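The two-stage decode looks something like this; the envelope field names here are my guess at a generic JSON-RPC response, not necessarily what make_bitcoin_call() actually returns:

package main

import (
	"encoding/json"
	"fmt"
)

// The envelope keeps "result" raw so a second pass can decode it into
// whatever concrete type each call needs.
type rpcResponse struct {
	Result json.RawMessage `json:"result"`
	Error  json.RawMessage `json:"error"`
}

func main() {
	body := []byte(`{"result":{"hash":"00ab","height":688979},"error":null}`)

	var resp rpcResponse
	if err := json.Unmarshal(body, &resp); err != nil {
		panic(err)
	}

	// Second unmarshal targets the concrete struct for this particular call.
	var blk struct {
		Hash   string `json:"hash"`
		Height uint64 `json:"height"`
	}
	if err := json.Unmarshal(resp.Result, &blk); err != nil {
		panic(err)
	}
	fmt.Println(blk.Hash, blk.Height)
}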
EDIT:
Another problem was that you weren't using pointer receivers when programming in an object-oriented way. To highlight the problem:
func (a A) X() {}
fixed as:
func (a *A) X() {}
The lack of a pointer receiver makes a copy of a on every call, which means the method can't write to the original, and could lead to other subtle glitches.
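A tiny self-contained demonstration of the difference, using a toy Counter type rather than anything from the repo:

package main

import "fmt"

type Counter struct{ n int }

// Value receiver: operates on a copy, so the caller's Counter is unchanged.
func (c Counter) IncBroken() { c.n++ }

// Pointer receiver: mutates the caller's Counter.
func (c *Counter) Inc() { c.n++ }

func main() {
	var c Counter
	c.IncBroken()
	fmt.Println(c.n) // 0: the increment happened on a copy
	c.Inc()
	fmt.Println(c.n) // 1: the pointer receiver wrote through
}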
Gotcha, I didn't know you could do that, that's pretty sick.
Further potential improvements
* Remove the mining/miner API. Only keep the height view set to 99999999 to make sure Modified BTC does not troll us when we run on port :2121 as set in the config.
* Write the complete block in a batch instead of writing every commitment separately. Basically, you can open a new leveldb.Batch object and write all the commitments to it. Then, even if the computer shuts down hard due to power loss, leveldb should guarantee that the final block either did or didn't get written completely (see the sketch after this list).
* Change the utxo tag to 128 bits (the commit position is basically the position of the current commitment in the block, counted sequentially, taking both seen and unseen commitments into account):
type UtxoTag struct {
	Height                uint64
	CommitPositionInBlock uint32
	TransactionNum        uint16
	OutputNum             uint16
}
* Change the on-disk block key to Height uint64 as opposed to the block hash. Then you can distinguish block keys from commitment keys by their length.
* Implement a block-specific checksum (probably a SHA256 of all the commits in the block, concatenated in their order). The block-specific checksum can be prepended to the block value whose key is the uint64 height on-disk (also covered in the sketch after this list).
* Implement storage of block headers. The 80-byte BTC header should be recreated from the information provided via the RPC API and put into the database. This makes it feasible to resume the download using a specialized tool that I have in development (the tool will also require access to an API endpoint that returns ALL block headers or the HIGHEST block header).
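To make a few of the points above concrete, here's a rough sketch combining the batched write, the 128-bit utxo key, the height-keyed block record, and the per-block checksum. All names, layouts, and the package name are illustrative assumptions, not a finished schema; it assumes github.com/syndtr/goleveldb:

package storage

import (
	"crypto/sha256"
	"encoding/binary"

	"github.com/syndtr/goleveldb/leveldb"
)

type UtxoTag struct {
	Height                uint64
	CommitPositionInBlock uint32
	TransactionNum        uint16
	OutputNum             uint16
}

// Key packs the tag into 16 bytes (128 bits), big-endian so keys sort by
// height first, then by position within the block.
func (t *UtxoTag) Key() []byte {
	k := make([]byte, 16)
	binary.BigEndian.PutUint64(k[0:8], t.Height)
	binary.BigEndian.PutUint32(k[8:12], t.CommitPositionInBlock)
	binary.BigEndian.PutUint16(k[12:14], t.TransactionNum)
	binary.BigEndian.PutUint16(k[14:16], t.OutputNum)
	return k
}

// writeBlock stages every commitment plus one block record (keyed by height
// alone, so its 8-byte key is distinguishable from the 16-byte commitment
// keys by length) and commits them in a single atomic batch: after a power
// loss the block is either fully written or not written at all.
func writeBlock(db *leveldb.DB, height uint64, tags []UtxoTag, commits [][]byte) error {
	batch := new(leveldb.Batch)

	// Block-specific checksum: SHA256 of all commits concatenated in order.
	h := sha256.New()
	for i := range commits {
		h.Write(commits[i])
		batch.Put(tags[i].Key(), commits[i])
	}

	blockKey := make([]byte, 8)
	binary.BigEndian.PutUint64(blockKey, height)
	batch.Put(blockKey, h.Sum(nil)) // checksum forms the start of the block value

	return db.Write(batch, nil)
}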
This gets into next-step territory: modifying the actual miner_mine_commit_internal() process. Cool.
While building the API to get headers, another thing to consider adding is the option to pull commits in JSON format. While I don't know about pulling ALL the commits in one go, considering there are over 3.5 million of them rofl, pulling them in chunks/pages might be practical. It'd allow other programs (e.g. ClaimVision) to access the commits file locally, without having to use an HTML parser to deal with (127.0.0.1:3232/utxo/).
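A hypothetical sketch of such a paged endpoint; the /commits route, the page size, and the in-memory commit list are all placeholders, not the existing API:

package main

import (
	"encoding/json"
	"log"
	"net/http"
	"strconv"
)

const pageSize = 10000 // arbitrary illustrative page size

// Placeholder commit list; the real source would be the on-disk commits.
var commits = []string{"c78b5dba...", "0000000a..."}

// commitsHandler serves one page of commits as a JSON array,
// e.g. GET /commits?page=2
func commitsHandler(w http.ResponseWriter, r *http.Request) {
	page, _ := strconv.Atoi(r.URL.Query().Get("page"))
	if page < 0 {
		page = 0
	}
	start := page * pageSize
	if start > len(commits) {
		start = len(commits)
	}
	end := start + pageSize
	if end > len(commits) {
		end = len(commits)
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(commits[start:end])
}

func main() {
	http.HandleFunc("/commits", commitsHandler)
	log.Fatal(http.ListenAndServe("127.0.0.1:3232", nil))
}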
EDIT: I'm confused about your last bit:
* Change the on-disk block key to Height uint64 as opposed to the block hash. Then you can distinguish block keys from commitment keys by their length.
* Implement storage of block headers. The 80-byte BTC header should be recreated from the information provided via the RPC API and put into the database. This makes it feasible to resume the download using a specialized tool that I have in development (the tool will also require access to an API endpoint that returns ALL block headers or the HIGHEST block header).
Right now the block hash information is stored as the value, not the key, and is used to check for reorgs. Are you referring to the value here? Just making sure, but you're suggesting that we toss block hashes and just use the headers to check for reorgs, right? And why store the block height as a 64-bit integer? Wouldn't a 32-bit one be fine?
The way I'm reading the commit proposal is as follows:
On-Disk Commits
KEY: 000000000000006889790000000300000001
VALUE: C78B5DBA7CAD6FA564D60BCF3099D0C80FD4CB75CD1A0FB261A568E35B8F6905
The On-Disk Hashes are currently just
KEY: 9999999900688979
VALUE: 0000000000000000001103f2f1f374ee8bf36c64949fcd53e47d0174cf063329
Is the replacement format you're suggesting below?
KEY: 00000000000000688979
VALUE: 010000009500c43a25c624520b5100adf82cb9f9da72fd2447a496bc600b0000000000006cd862370395dedf1da2841ccda0fc489e3039de5f1ccddef0e834991a65600ea6c8cb4db3936a1ae3143991