Topic: [ANN][COMB] Haircomb - Quantum proof, anonymity and more - page 5.

jr. member
Activity: 76
Merit: 8
I have here a skeleton for the sleep-less channel-less downloading of blocks, could it be integrated? I don't think it's a big change.

https://goplay.space/#Yl0k34__XjH

Woah, that use of the next_mine/next_download variables is cool. Yea, I'll try to hook it up; it might take me a sec to fully understand, but it looks good.


Quote
Your explanation about sequential reorg is fine then, let's keep linear.

To finalize the format, I think we also need to pull block headers since genesis using getblockheader.

Can we hard-code the header for block 481822 into haircomb and just use that? It won't work for regtest though, so we can potentially code an option to pull from genesis for those blocks, but if we keep a hard-coded pseudo-genesis haircomb block then we can cut down on the data storage and the build time. The only risk is somebody 51%ing the entire BTC chain back to 481821, right?

I used 481824 in my code because that's the first block that a commit ever appears on, but Natasha's Modded BTC pulls from 481822 so it makes sense to start from there.


Quote
Thinking about it, perhaps to simplify things, it would be ok to have a mutex-protected global in-memory map, keyed by blockhash, containing the block height.

You know, what you previously did in the level db, but just in memory. In the level db, on disk, there would be the other direction (from height to full header).

Then on startup, the leveldb can be looped over and values cached to memory.

Makes sense, a million blocks will only add like 40 MB of memory. But should it be key = hash and value = height, or the other way around? The current leveldb is key = height, value = hash.
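
Something like this is what I'm picturing for the in-memory side, going with key = hash and value = height (names are just placeholders):
Code:
import "sync"

var (
	heightsMu    sync.RWMutex
	blockHeights = map[[32]byte]uint64{} // block hash -> height, filled from leveldb on startup
)

func setBlockHeight(hash [32]byte, height uint64) {
	heightsMu.Lock()
	blockHeights[hash] = height
	heightsMu.Unlock()
}

func getBlockHeight(hash [32]byte) (uint64, bool) {
	heightsMu.RLock()
	h, ok := blockHeights[hash]
	heightsMu.RUnlock()
	return h, ok
}

Each entry is 32 + 8 = 40 bytes of payload, so a million blocks is roughly 40 MB before map overhead, which matches the estimate above.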

Quote
Advice about the migration to the new utxo tag:

1. put the on-disk utxo tag struct into the code
2. write a function to convert the on-disk utxo tag to the old utxo tag (the height is just cast to uint32; based on whether the height is before or after the fork, either fill the txnum and outnum directly or split the output order number uint32 into two uint16 by dividing/remaindering by 10000, and return them in the old struct)
3. when the block is downloaded, generate the on-disk utxo tag and store it next to the commitment
4. when going to mine the block, don't pass the utxo tag as a string but simply as the on-disk utxo tag struct
5. the strictly_monotonic_vouts_bugfix_fork_height if-statement inside miner_mine_commit_internal can then be removed
6. all the remaining functions just look at the height
7. finally, put the old version of the utxo tag into the main map (commits map[[32]byte]utxotag) where the old code does
8. but serialize the new on-disk version of the utxo tag to level db.

This is way better than what I was thinking, I figured we'd have to go through and mod all the code to use a new UTXOtag lol.

I'll try to implement a rough version of this and your new pull code over the next little bit, but I want to confirm the new UTXO struct first. Previously you laid out the following:

Code:
type UtxoTag struct {
    Height uint64
    CommitPositionInBlock uint32
    TransactionNum uint16
    OutputNum uint16
}

Do we need to include the CommitPositionInBlock? I'm not sure what the advantage is to including this.

I'm also concerned about the removal of the fork, because I'm not sure why it was implemented as a fork rather than an entire change like you're proposing here, so the unknown is spooky. But we'll be able to test if this is a problem via fingerprinting, so it should be fine to experiment with.

I'm also going to include the block hash in the block commits fingerprint, unless you think there's a problem doing it. This way there's an explicit tying of the commits to the block header itself; it feels better for some reason.

EDIT: I modded your code to be able to operate from high to low for unmining: https://goplay.space/#2_WoPvMPlLQ. I'll finish merging it tomorrow hopefully.

EDIT2: Running a speed test on the merged code.

EDIT3: As it stands, the current code is super slow. I think this is because the info pulling and info mining are tied together; if there are a bunch of blocks queued up to be mined, mining them locks all the other downloaders out of depositing their payloads and pinging the BTC node for more. For this to be fast, the BTC node needs to be under 100% load all the time, which it isn't right now.

EDIT4: I modded the code to have separate mining/pulling processes. It still seems slower for some reason; however, my pc is acting up and the old version also seems somewhat slower, so I can't really trust the numbers. You might want to give it a shot and see how quick it is for you in comparison. Code's up on my github.

sr. member
Activity: 687
Merit: 269
I have here a skeleton for the sleep-less channel-less downloading of blocks, could it be integrated? I don't think it's a big change.

https://goplay.space/#Yl0k34__XjH

Your explanation about sequential reorg is fine then, let's keep linear.

To finalize the format, I think we also need to pull block headers since genesis using getblockheader.

Thinking about it, perhaps to simplify things, it would be ok to have a mutex-protected global in-memory map, keyed by blockhash, containing the block height.

You know, what you previously did in the level db, but just in memory. In the level db, on disk, there would be the other direction (from height to full header).

Then on startup, the leveldb can be looped over and values cached to memory.

Advice about the migration to the new utxo tag:

1. put the on-disk utxo tag struct into the code
2. write a function to convert the on-disk utxo tag to the old utxo tag (the height is just cast to uint32; based on whether the height is before or after the fork, either fill the txnum and outnum directly or split the output order number uint32 into two uint16 by dividing/remaindering by 10000, and return them in the old struct; see the sketch after this list)
3. when the block is downloaded, generate the on-disk utxo tag and store it next to the commitment
4. when going to mine the block, don't pass the utxo tag as a string but simply as the on-disk utxo tag struct
5. the strictly_monotonic_vouts_bugfix_fork_height if-statement inside miner_mine_commit_internal can then be removed
6. all the remaining functions just look at the height
7. finally, put the old version of the utxo tag into the main map (commits map[[32]byte]utxotag) where the old code does
8. but serialize the new on-disk version of the utxo tag to level db.
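
Roughly, step 2 could look like the sketch below (field names are placeholders and must be matched to the real structs; I'm assuming the post-fork combined order number comes from the commit position in the block):
Code:
// sketch only - match the field names and the old tag layout to the real code
type DiskUtxoTag struct {
	Height                uint64
	CommitPositionInBlock uint32
	TransactionNum        uint16
	OutputNum             uint16
}

type OldUtxoTag struct { // stand-in for the existing in-memory utxotag
	Height uint32
	TxNum  uint16
	OutNum uint16
}

func diskTagToOldTag(d DiskUtxoTag, forkHeight uint64) OldUtxoTag {
	t := OldUtxoTag{Height: uint32(d.Height)} // height is just cast down
	if d.Height < forkHeight {
		// pre-fork blocks: transaction and output counters are stored separately
		t.TxNum, t.OutNum = d.TransactionNum, d.OutputNum
		return t
	}
	// post-fork blocks: one combined order number, re-split across the two old
	// fields by dividing/remaindering by 10000 (assumed here to come from
	// CommitPositionInBlock - adjust to whatever the real source is)
	order := d.CommitPositionInBlock
	t.TxNum, t.OutNum = uint16(order/10000), uint16(order%10000)
	return t
}

The forkHeight argument would just be the existing strictly_monotonic_vouts_bugfix_fork_height constant.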
jr. member
Activity: 76
Merit: 8
There was a bug in the initial release. When I added in the mining redundancy prevention, I forgot to modify the start block to reflect that, so it misses mining the very first comb commit. This will NOT show up in the AES fingerprint, as it doesn't take into account the height of the commit, only the commits themselves and the order they're loaded, but it will give an inaccurate number for how many total comb have been spawned.

I'll have a newer version up in the next couple of days; I want to test out the struct-based reading first. The correction is to just change the number in the following code to 481823, instead of 481824:
Code:
if !u_config.regtest && curr_height < 481824 {
	curr_height = 481824
}

yes I am right about the run boolean, there really was a race condition bug. I found it using:

 go build -race

The reader of that boolean must take the mutex too, not just the writer.

That I see and I've fixed, but I was wondering about your change from a bool to an incrementing int.


Quote
Sorry, I was wrong, you aren't actually looping over the map[string]interface{}, just looking up one value. Thus it's safe.

I already swapped over to structs. It feels better using more static typing anyways, so I'll run a few commit-build benchmarks and make sure it doesn't slow anything down. If it does then we can switch back.

Quote
Yes we need block height ON DISK to be uint64. This is because there will be fast altcombs (with block time 1 second or faster). If we don't do this, we are just repeating the Nakamoto 32bit timestamp bug. (Bitcoin Timestamp Year 2106 Overflow)

Utxo tag IN MEMORY can stay in its current format for now ([2]uint32). Can be bumped to match the on-disk format later.


Gotcha, that makes sense.

Quote
I also think that the LEVELDB integers (inside keys) should be actual binary (big endian, i.e. padded with zeroes from the left), not binary-coded decimals.
This will ensure that the transaction counter and output counter (both uint16) will be capped at 65535, not 9999.

My understanding was that the fork Natasha did fixes the 9999 max by combining the two into one number. If you look at the formatting of commits that have been processed after the fork, IIRC they're all merged to be stored as one large number. Natasha described it here: https://bitcointalk.org/index.php?topic=5195815.140

I may be misunderstanding something about the fix though.

Quote
Inside leveldb blocks (uint64 keys), you can store both the new 32byte block checksum and block header (80byte) concatenated.

The new 32-byte block checksum will be SHA256 of Key 1 CAT Commitment 1 CAT Key 2 CAT Commitment 2 CAT etc. (all the keys and previously unseen commitments inside the block)

Oh store them together, that makes way more sense lol.
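
So the checksum itself would be something along these lines (assuming the keys are the serialized on-disk utxo tags, iterated in block order):
Code:
import "crypto/sha256"

// SHA256(key1 CAT commit1 CAT key2 CAT commit2 CAT ...), keys and previously
// unseen commitments taken in block order
func blockChecksum(keys [][]byte, commitments [][32]byte) (sum [32]byte) {
	h := sha256.New()
	for i := range keys {
		h.Write(keys[i])
		h.Write(commitments[i][:])
	}
	copy(sum[:], h.Sum(nil))
	return sum
}

and then the leveldb value under the uint64 height key would just be that 32-byte checksum followed by the 80-byte header, 112 bytes total.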


Quote
He=BTC Full node

1. read his best hash. if same as our best hash, end, otherwise start loop:
2. read his height
(optional) if his height is 0, feed him all our block headers (by submitheader) starting from genesis+1 and goto 2 (continue loop).
3. read his best hash; if different from the one read previously, goto 2 (continue loop), else goto 4 (break loop).
4. read our block header at his height and compute that header's hash.
5. if our hash at his height == his best hash from steps 1 & 3, reorg to his height, end.
6. if his height was less than or equal to ours, launch the seek reorg height-by-height backwards procedure, then reorg to its result, then fast forward, end.
7. read his hash at our height; if equal to our top hash, fast forward, end.
8. launch the seek reorg height-by-height backwards procedure, then reorg to its result, then fast forward, end.

seek reorg height-by-height backwards:
1. keep checking his hash at decreasing height until it matches our hash at that height.
2. return the height of highest match.
note: can be done using bisection: just keep trying the central height between "highest match"+1 and "lowest non-match"-1.
If the central height was a match, raise the "highest match" integer to it; if it was not a match, lower the "lowest non-match" integer to it. Either way, pick a new central height and repeat. Terminate when "highest match"+1 == "lowest non-match". This search is efficient even without goroutines.

The bisect option makes sense; however, the nature of the BTC chain, at least right now, means it's INSANELY likely that a reorg will occur within the first 2 blocks of the chain edge. I think it makes the most sense to just iterate down the chain, and if the reorg isn't found within the first X iterations, switch to a bisect search.
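
Something like this hybrid is what I'm picturing (hisHash/ourHash are placeholder lookups standing in for the RPC call and our local header store):
Code:
// findReorgHeight returns the highest height where our hash matches his.
// Linear scan near the tip first (reorgs are almost always 1-2 blocks deep),
// then bisection between "highest match" and "lowest non-match".
func findReorgHeight(tip, linearTries uint64, hisHash, ourHash func(uint64) [32]byte) uint64 {
	h := tip
	for i := uint64(0); i < linearTries && h > 0; i++ {
		if hisHash(h) == ourHash(h) {
			return h
		}
		h-- // heights h+1..tip are now known mismatches
	}
	// bisection phase; assumes genesis (height 0) always matches
	highestMatch, lowestNonMatch := uint64(0), h+1
	for highestMatch+1 < lowestNonMatch {
		mid := (highestMatch + lowestNonMatch) / 2
		if hisHash(mid) == ourHash(mid) {
			highestMatch = mid
		} else {
			lowestNonMatch = mid
		}
	}
	return highestMatch
}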


Quote
fast forward:
1. check his hash at our height +1, +2, +3, ....
2. download that block(s).
3. apply them in order from lowest.
4. if applying any block fails, terminate all goroutines and end.
note: can be done by multiple goroutines, if the goroutine's downloaded block is
at current height +1 she will try applying it. otherwise she puts it into a shared map keyed by height and tries
applying the current height+1 block from the map (if it's there).
once the goroutine successfully applies her lowest block, she inspects the map to see if there's another lowest block.
if yes she applies that block too, if no, she goes to check another hash and download another block.
note 2: no need to have unlimited goroutines, just have like 6 of them and wait for them to apply everything using a wait group.
note 3: works without the use of channels, just needs a mutex protected shared map and two shared integer counters (all protected by the same mutex).

This is essentially what the current mining system does. The blocks are requested in incremental order as the counter ticks up, but the order they are received and read in is not fixed. Unfortunately the chokepoint is not how quickly we can read the blocks, but how quickly the BTC node can spit them out at us. Although the funneling method you described seems way better than having a bunch of loops checking a timer over and over again, so if that was your intent then nvm, that makes sense lol.

You have it written that the goroutine that makes the call is also the one that does the mining. Do you think that this is better than having one dedicated routine to query the map and mine the blocks, and just have the callers dump their blocks in the map? I guess that it makes sense that you're removing a loop that is essentially doing nothing most of the time, waiting for blocks to come in.

I think I get what you're trying to build here, like building a skeleton of the BTC chain and then filling in the blanks as it goes along. Like I said, unfortunately right now the biggest limiter from what I can tell is BTC's speed at supplying the information. But structurally it makes sense.

sr. member
Activity: 687
Merit: 269
yes I am right about the run boolean, there really was a race condition bug. I found it using:

 go build -race

The reader of that boolean must take the mutex too, not just the writer.

Sorry, I was wrong, you aren't actually looping over the map[string]interface{}, just looking up one value. Thus it's safe.

Yes we need block height ON DISK to be uint64. This is because there will be fast altcombs (with block time 1 second or faster). If we don't do this, we are just repeating the Nakamoto 32bit timestamp bug. (Bitcoin Timestamp Year 2106 Overflow)


Utxo tag IN MEMORY can stay in its current format for now ([2]uint32). Can be bumped to match the on-disk format later.

I also think that the LEVELDB integers (inside keys) should be actual binary (big endian, i.e. padded with zeroes from the left), not binary-coded decimals.
This will ensure that the transaction counter and output counter (both uint16) will be capped at 65535, not 9999.


Inside leveldb blocks (uint64 keys), you can store both the new 32byte block checksum and block header (80byte) concatenated.

The new 32-byte block checksum will be SHA256 of Key 1 CAT Commitment 1 CAT Key 2 CAT Commitment 2 CAT etc. (all the keys and previously unseen commitments inside the block)




He=BTC Full node

1. read his best hash. if same as our best hash, end, otherwise start loop:
2. read his height
(optional) if his height is 0, feed him all our block headers (by submitheader) starting from genesis+1 and goto 2 (continue loop).
3. read his best hash; if different from the one read previously, goto 2 (continue loop), else goto 4 (break loop).
4. read our block header at his height and compute that header's hash.
5. if our hash at his height == his best hash from steps 1 & 3, reorg to his height, end.
6. if his height was less than or equal to ours, launch the seek reorg height-by-height backwards procedure, then reorg to its result, then fast forward, end.
7. read his hash at our height; if equal to our top hash, fast forward, end.
8. launch the seek reorg height-by-height backwards procedure, then reorg to its result, then fast forward, end.




seek reorg height-by-height backwards:
1. keep checking his hash at decreasing height until it matches our hash at that height.
2. return the height of highest match.
note: can be done using bisection: just keep trying the central height between "highest match"+1 and "lowest non-match"-1.
If the central height was a match, raise the "highest match" integer to it; if it was not a match, lower the "lowest non-match" integer to it. Either way, pick a new central height and repeat. Terminate when "highest match"+1 == "lowest non-match". This search is efficient even without goroutines.



fast forward:
1. check his hash at our height +1, +2, +3, ....
2. download that block(s).
3. apply them in order from lowest.
4. if applying any block fails, terminate all goroutines and end.
note: can be done by multiple goroutines, if the goroutine's downloaded block is
at current height +1 she will try applying it. otherwise she puts it into a shared map keyed by height and tries
applying the current height+1 block from the map (if it's there).
once the goroutine successfully applies her lowest block, she inspects the map to see if there's another lowest block.
if yes she applies that block too, if no, she goes to check another hash and download another block.
note 2: no need to have unlimited goroutines, just have like 6 of them and wait for them to apply everything using a wait group.
note 3: works without the use of channels, just needs a mutex protected shared map and two shared integer counters (all protected by the same mutex).
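
As a rough sketch of that fast forward, with downloadBlock/applyBlock as hypothetical stand-ins for the RPC fetch and the existing mining step:
Code:
import "sync"

// N workers share a mutex-protected map of downloaded blocks plus the two
// counters: nextDownload (next height to fetch) and nextMine (next height to apply).
func fastForward(from, to uint64, workers int,
	downloadBlock func(uint64) []byte, applyBlock func(uint64, []byte) bool) {

	var (
		mu           sync.Mutex
		pending      = map[uint64][]byte{}
		nextDownload = from
		nextMine     = from
		wg           sync.WaitGroup
	)

	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				mu.Lock()
				h := nextDownload
				if h > to || nextMine > to {
					mu.Unlock()
					return
				}
				nextDownload++
				mu.Unlock()

				blk := downloadBlock(h) // slow RPC call, done outside the lock

				mu.Lock()
				pending[h] = blk
				// apply as many consecutive blocks as are now available, in order
				// (applying under the lock keeps it strictly serial; fine for a sketch)
				for b, ok := pending[nextMine]; ok; b, ok = pending[nextMine] {
					delete(pending, nextMine)
					if !applyBlock(nextMine, b) {
						nextMine = to + 1 // poison value: every worker will stop
						break
					}
					nextMine++
				}
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
}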





jr. member
Activity: 76
Merit: 8
Hello,

the config file didn't work out of the box on Linux; a one-liner was needed to add a break in the parsing.

https://textbin.net/rrkcxerx0z

Weird. I get why the fix works, but I think the problem source may be that I put the "finished := false" within the for loop by accident. The reason I delay the break is because if the config doesn't have an empty line at the end, it looks like it'd break before actually parsing the last line. If you move the "finished := false" to before the for loop and remove the break you inserted, does it still cause a problem on Linux? Am I mistaken about the early break not causing problems on config files with no empty final line?

EDIT: I made a mistake that was included in the first version I uploaded to github, I've since corrected it but if you downloaded a version before I did then this might be the issue.
https://github.com/nixed18/combfullui/commit/3074fbd709e9602eeaccd639dfcb63dcad7dee66

If you've still got that "continue" on line 70, that's what's causing the permaloop.

Quote
Now, the concept of pulling blocks over RPC is solid, I didn't know it was practical or doable. Great job there.

That said, the implementation has its flaws; I fixed two of them.

First of all, the shutting down of goroutines using the run boolean had a race, so I've added an RWMutex to guard it.
I've changed the boolean to an integer so that later, when we want to start it again, we just keep increasing it on every startup or shutdown.
The goroutine will just compare its own copy of the integer to know if it should run or not.


My understanding was that, because it's creating a new Index instance for each mine_start() to use as a reference, mine_start() instance 1 and mine_start() instance 2 (and all their respective sub-routines) would never be referencing the same run value. Was I incorrect on this?

Quote
Secondly, the parsing of the bitcoin block had a serious problem. The block/transaction was getting parsed into map[string]interface{}.
Problem is maps don't guarantee ordering by key when getting looped.
This could've caused reordering of commitments in some conditions. Whoops.

So I've put in a Block struct that contains a slice of txs, and each tx contains a slice of outputs. Problem solved, as slices are ordered.

You should recreate your databases after this fix just to be sure.

Yea that'd be bad. I haven't had it happen yet, all my fingerprints (that I've tested at least) have matched up, but I didn't know that that was a possibility. More reading for me lol.

EDIT: It's been a while since I wrote the initial interp code, but from reading it again it doesn't look like it's pulling the relevant TX sections into map[string]interface{}, but that it's pulling them into []interface{}. This was done for both the initial array of TXes in the block, and the array of outputs in the TX (https://github.com/nixed18/combfullui/blob/master/miner.go, lines 420 and 427). Even if I'm right and it's stable as is, it's still probably worth moving over to a struct system; from what I've read it's faster than using map[string]interface{}.

EDIT2: Could I see your struct implementation? I've been unable to reuse make_bitcoin_call() without having it output an interface/map[string]interface{} value, and at that point I can't convert back to a struct without marshalling and unmarshalling the JSON for a second time. That seems slow.

EDIT3: I was able to do it by unmarshalling the "result" JSON into a json.RawMessage, then unmarshalling again. No clue if it's faster, but it works, and it lets make_bitcoin_call() be used for multiple return values, so that's cool.
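
For reference, the shape of what I mean (field names trimmed down; the JSON tags follow the getblock output but this isn't the exact struct from my code):
Code:
import "encoding/json"

type rpcReply struct {
	Result json.RawMessage `json:"result"` // left raw, decoded by the caller
	Error  json.RawMessage `json:"error"`
}

type rpcBlock struct {
	Hash   string `json:"hash"`
	Height uint64 `json:"height"`
	Tx     []struct {
		Vout []struct {
			ScriptPubKey struct {
				Hex string `json:"hex"`
			} `json:"scriptPubKey"`
		} `json:"vout"`
	} `json:"tx"`
}

// decodeBlock unmarshals the envelope once, then unmarshals "result" again
// into the concrete struct for this particular call
func decodeBlock(body []byte) (rpcBlock, error) {
	var reply rpcReply
	var blk rpcBlock
	if err := json.Unmarshal(body, &reply); err != nil {
		return blk, err
	}
	err := json.Unmarshal(reply.Result, &blk)
	return blk, err
}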

Quote
EDIT:
Another problem was that you weren't using pointer receivers when programming in an object oriented way. To highlight the problem:
Code:
func (a A) X() {}
fixed as:
Code:
func (a *A) X() {}
The lack of pointer receivers makes a copy of the receiver a each time, which makes the original not writable and could lead to other subtle glitches.

Gotcha, I didn't know you could do that, that's pretty sick.

Quote
Further potential improvements

* Remove the mining/miner api. Only keep the height view set to 99999999 to make sure Modified BTC does not troll us when we run with port :2121 set in the config.
* Write the complete block in a batch instead of writing every commitment separately. Basically, you can open a new leveldb.Batch object and write all the commitments to it. Then, even if the computer shuts down hard due to power loss, leveldb should guarantee that the final block either did or didn't get written completely.
* Change the utxo tag to 128 bit (commit position is basically the position of the current commitment within the block, counted sequentially and taking both seen and unseen commitments into account):
Code:
type UtxoTag struct {
    Height uint64
    CommitPositionInBlock uint32
    TransactionNum uint16
    OutputNum uint16
}
* Change the on-disk block key to Height uint64 as opposed to the block hash. Then you can distinguish block keys and commitment keys by their length.
* Implement the block-specific checksum (probably a SHA256 of all the commits in the block concatenated in their order). The block-specific checksum can be prepended to the block value whose key is the uint64 height on disk.
* Implement storage of block headers. The 80-byte BTC header should be recreated from the information provided via the RPC api and put into the database. This makes it feasible to resume the download using a specialized tool that I have in development (the tool will also require access to an API endpoint that will return ALL block headers or the HIGHEST block header).

This gets into next-step territory; modifying the actual miner_mine_commit_internal() process. Cool.

While building the API to get headers, another thing to consider adding is the option to pull commits in a json format. While I don't know about pulling ALL the commits in one go, considering there's like over 3.5 million of them rofl, pulling them in chunks/pages might be practical. It'd allow other programs (i.e. ClaimVision) to access the commits file locally for use, without having to use an html parser to deal with (127.0.0.1:3232/utxo/).

EDIT: I'm confused about your last bit;
Quote
* Change the on-disk block key to Height uint64 as opposed to the block hash. Then you can distinguish block keys and commitment keys by their length.
* Implement storage of block headers. The 80-byte BTC header should be recreated from the information provided via the RPC api and put into the database. This makes it feasible to resume the download using a specialized tool that I have in development (the tool will also require access to an API endpoint that will return ALL block headers or the HIGHEST block header).

Right now the block hash information is stored to check for reorgs as the value, not the key. Are you referring to value here? Just making sure, but you're suggesting that we toss block hashes and just use the headers to check for reorgs, right? Why store the block height as a 64-bit, wouldn't a 32-bit be fine?

The way I'm reading the commit proposal is as follows

On-Disk Commits
Code:
KEY: 000000000000006889790000000300000001
VALUE: C78B5DBA7CAD6FA564D60BCF3099D0C80FD4CB75CD1A0FB261A568E35B8F6905

The On-Disk Hashes are currently just
Code:
KEY: 9999999900688979
VALUE: 0000000000000000001103f2f1f374ee8bf36c64949fcd53e47d0174cf063329

Is the replacement format you're suggesting below?
Code:
KEY: 00000000000000688979
VALUE: 010000009500c43a25c624520b5100adf82cb9f9da72fd2447a496bc600b0000000000006cd862370395dedf1da2841ccda0fc489e3039de5f1ccddef0e834991a65600ea6c8cb4db3936a1ae3143991






sr. member
Activity: 687
Merit: 269
Hello,

the config file didn't work out of the box on Linux; a one-liner was needed to add a break in the parsing.

https://textbin.net/rrkcxerx0z

Now, the concept of pulling blocks over RPC is solid, I didn't know it was practical or doable. Great job there.

That said, the implementation has its flaws; I fixed two of them.

First of all, the shutting down of goroutines using the run boolean had a race, so I've added an RWMutex to guard it.
I've changed the boolean to an integer so that later, when we want to start it again, we just keep increasing it on every startup or shutdown.
The goroutine will just compare its own copy of the integer to know if it should run or not.


Secondly, the parsing of the bitcoin block had a serious problem. The block/transaction was getting parsed into map[string]interface{}.
Problem is maps don't guarantee ordering by key when getting looped.
This could've caused reordering of commitments in some conditions. Whoops.

So I've put in a Block struct that contains a slice of txs, and each tx contains a slice of outputs. Problem solved, as slices are ordered.

You should recreate your databases after this fix just to be sure.

EDIT:
Another problem was that you weren't using pointer receivers when programming in an object oriented way. To highlight the problem:
Code:
func (a A) X() {}
fixed as:
Code:
func (a *A) X() {}
The lack of pointer receivers makes a copy of the receiver a each time, which makes the original not writable and could lead to other subtle glitches.
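
A tiny illustration of the difference (hypothetical type, just to show the copy semantics):
Code:
type counter struct{ n int }

func (c counter) bumpByValue()    { c.n++ } // increments a copy; the caller's value stays unchanged
func (c *counter) bumpByPointer() { c.n++ } // increments the caller's value

// var c counter
// c.bumpByValue()   // c.n is still 0
// c.bumpByPointer() // c.n is now 1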

Further potential improvements

* Remove the mining/miner api. Only keep the height view set to 99999999 to make sure Modified BTC does not troll us when we run with port :2121 set in the config.
* Write the complete block in a batch instead of writing every commitment separately. Basically, you can open a new leveldb.Batch object and write all the commitments to it. Then, even if the computer shuts down hard due to power loss, leveldb should guarantee that the final block either did or didn't get written completely (see the sketch after this list).
* Change the utxo tag to 128 bit (commit position is basically the position of the current commitment within the block, counted sequentially and taking both seen and unseen commitments into account):
Code:
type UtxoTag struct {
    Height uint64
    CommitPositionInBlock uint32
    TransactionNum uint16
    OutputNum uint16
}
* Change the on-disk block key to Height uint64 as opposed to the block hash. Then you can distinguish block keys and commitment keys by their length.
* Implement the block-specific checksum (probably a SHA256 of all the commits in the block concatenated in their order). The block-specific checksum can be prepended to the block value whose key is the uint64 height on disk.
* Implement storage of block headers. The 80-byte BTC header should be recreated from the information provided via the RPC api and put into the database. This makes it feasible to resume the download using a specialized tool that I have in development (the tool will also require access to an API endpoint that will return ALL block headers or the HIGHEST block header).
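
For the batch-write point above, roughly (key/value layouts are placeholders for whatever on-disk format gets finalized):
Code:
import "github.com/syndtr/goleveldb/leveldb"

// writeBlock puts every commitment of a block plus the block's own entry into
// one leveldb.Batch, so after a power loss the block is either fully written
// or not written at all.
func writeBlock(db *leveldb.DB, blockKey, blockValue []byte, commitKeys, commitValues [][]byte) error {
	batch := new(leveldb.Batch)
	for i := range commitKeys {
		batch.Put(commitKeys[i], commitValues[i])
	}
	batch.Put(blockKey, blockValue) // checksum + header entry in the same batch
	return db.Write(batch, nil)     // applied atomically
}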



jr. member
Activity: 76
Merit: 8
Alright, I've got a test version up here: https://github.com/nixed18/combfullui

A release is available to be downloaded, or people can build it themselves.

For simplicity I'm gonna refer to the normal version as legacycomb and the test version as mergedcomb


Build

Do the same steps as the normal combfullui (https://bitcointalksearch.org/topic/m.54605575) but with two differences:

1. Make sure you don't install in the same folder as your normal combfullui; after cd Documents, type in mkdir combfullui_rpctest, then type in cd combfullui_rpctest

2. Type in "go get github.com/syndtr/goleveldb/leveldb" to install LevelDB


Set up bitcoin.conf

1. Navigate to the directory where you have stored your BTC blockchain. By default it is stored in C:\Users\YourUserName\Appdata\Roaming\Bitcoin on Windows. You'll know you're there when you see blocks and chainstate folders, along with some .dat files for stuff like the mempool, peers, etc.

2. Look for a file named "bitcoin.conf". If one doesn't exist, make one by right clicking the whitespace and going New>TextFile. Then rename this file to "bitcoin.conf"

3. Open "bitcoin.conf" in Notepad and add the following two entries, replacing XXXXX with what you want your log info to be. This is only used to your BTC node.
Code:
rpcuser=XXXXX
rpcpassword=XXXXX

4. Save and exit


Set up config.txt

1. Navigate to the directory you installed mergedcomb

2. Create a text file called "config.txt". Launching the program will automatically create this file if you don't.

3. Open "config.txt" in Notepad, and add the following lines, replacing the XXXXX with the same values that you used in your "bitcoin.conf".
Code:
btcuser=XXXXX
btcpass=XXXXX

4. Save and exit. If mergedcomb was open during this edit, you must restart the program for your changes to take effect.

Note: There are two further options that you can use in the config: "btcmode=" and "port=". The only valid value for "btcmode=" is "regtest"; any other entry will result in a normal boot. "port=" sets the port that mergedcomb listens on; set it to a normal number lol, I haven't made the proper processing for it to reject letters yet.


Run

Assuming you're using an unmodded BTC, you can run either the BTC or the mergedcomb first, it doesn't matter. While the mergedcomb is running, it'll keep checking if there's a BTC server running on your machine, and if so, will attempt to connect with it. When you run BTC, either run bitcoind.exe OR run bitcoin-qt.exe with "-server" as an argument.

AFAIK this version is compatible with any recent BTC build. It is also compatible with the Modded BTC; however, you must either run it in tandem with legacycomb or modify mergedcomb's port to be 2121. The Modded BTC must be able to communicate with a Haircomb client to start. The default port for mergedcomb is 3232, btw.

Mergedcomb should begin mining automatically, assuming that the BTC chain is up to date. Because BTC's RPC doesn't like to respond to queries while it's validating blocks, mining progress may be slow or non-existent before the BTC chain is all caught up.


Notes

This is essentially a proof of structure, so approach it as such. There's more to do, listed below, but it makes sense to finalize the structure before refining anything.

A brief overview:

Mining: Mergedcomb pings BTC until it is told there's a change in the chain; mergedcomb then sets up a mining config with the start height, target height, and direction (mine or unmine), and begins pulling blocks. Block info is funneled through the miner, which first mines all the commits in the next block by triggering miner_mine_commits_internal() as usual. Then, once all the commits have been stored, the miner stores the block hash under the key 99999999 CAT blockheight. If unmining, the miner REMOVES the hash first, then begins unmining the commits. This ordering should minimize the possibility of having a stored hash alongside incorrect block content. This is important for DB cleaning, which happens on load.

Loading: Mergedcomb iterates through every entry in levelDB. Entries are stored in numerical ordering, which is convenient. First, check that the commit entry belongs to a block whose hash has been stored. If it has, move it to an array for said block. Once a new block is reached, all the commits in the old block's array get mined. Continue until you reach the hashes, aka block 99999999, then stop. If a commit's block does NOT have a stored hash, then it and every commit after it are marked as orphaned, and so are any hashes for those blocks (i.e. if block 6 does not have a stored hash, all the commits on blocks 6, 7, 8, 9, etc. are considered bad and must be redownloaded). After pulling all the orphaned commits and hashes, delete them from the db without mining them. In the future this can be made more efficient, but for now it makes sense from a safety perspective.
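
In code terms, the cleanup check boils down to something like this; it's written as two passes for clarity rather than the single ordered pass described above, and parseEntry is a placeholder that classifies a leveldb entry as either a commit (with its height) or a stored block hash:
Code:
import "github.com/syndtr/goleveldb/leveldb"

// scanForOrphans returns the lowest commit height that has no stored block
// hash; that block and everything above it must be deleted and re-mined.
func scanForOrphans(db *leveldb.DB, parseEntry func(k, v []byte) (height uint64, isBlockHash bool)) (firstBad uint64, found bool) {
	sealed := map[uint64]bool{}

	it := db.NewIterator(nil, nil)
	for it.Next() { // pass 1: note which heights have a sealing hash entry
		if h, isHash := parseEntry(it.Key(), it.Value()); isHash {
			sealed[h] = true
		}
	}
	it.Release()

	it = db.NewIterator(nil, nil)
	for it.Next() { // pass 2: find the lowest unsealed commit height
		if h, isHash := parseEntry(it.Key(), it.Value()); !isHash && !sealed[h] {
			if !found || h < firstBad {
				firstBad, found = h, true
			}
		}
	}
	it.Release()
	return firstBad, found
}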

This setup should, barring manually messing around with the commits files or manually inputting commits, prevent any problems. The hash is always stored/removed as the seal; even if the mine/unmine process is interrupted by a crash, the next time you boot that block's commits will all be orphaned, as they won't have a stored hash, and will get negated and remined.

Note: mergedcomb currently contains some orphaned-block cleaning code I haven't fully tested. The code has been tested for cleaning the most recent block, but what has not been fully tested yet is the multi-block cleaning that would be triggered by removing a block hash that's halfway down the chain. I'll test it over the next couple of days; this is just a heads up that there may be some problems if you try to mess with it.


Testing with BTC's regtest mode has been implemented, not Testnet yet though. As this whole modification doesn't make any changes to the transaction processing code, regtest will work for simulating chainstate changes and how they affect the database. A separate commitsdb is created specifically for regtest, so you don't need to worry about overwriting the normal one.


TODO:
 - Implement block-by-block fingerprinting. This will allow for 100% certainty that no problems have occurred in the commits file, and will also potentially allow for selective block repairs if commits get corrupted/orphaned, rather than bruteforcing every commit above the problem.

 - Better comms structure. The channel communication isn't built on a solid foundation because it doesn't need to be yet. If it is projected that it'll need to expand to become a more substantial part of the system, then reorganizing it properly is a good idea.

 - Better datatype management. I need to go through and make a few things more uniform and consistent I think, I haven't done any optimization in this regard.

 - More thorough error management testing. Right now it seems to work alright, but I haven't gone through it enough to be 100% confident in it yet. This also includes better logging for debugging.

 - Memory management. I need to work out the best way to cut memory usage down; probably combine this with the datatype management lol.

Overall there's a lot to be done to format everything and make it less sloppy, but like I said this is really a structure proof before doing stuff like that.
jr. member
Activity: 76
Merit: 8
thoughts about leveldb in combfullui:

overall I will stand by your core dev decision to implement this db into haircomb,
although there are some disadvantages known to me.

here are my thoughts about decisions you might be facing at this point

1. keep supporting commits.db or drop it?

   - the new leveldb haircomb core in my opinion should not be able to read
   commits.db at all and only be able to write it, and only write it in case
   the user manually clicks downgrade to old commits.db format.

   - the reason for this is that we cannot realistically assume that commits.db
   is consistent.
  
   - full header data need to be present to allow meaningful sync resume support.
  
   - another reason is we need additional data in the leveldb format (complete BTC
   block headers). There is no way to magically recreate this data that do not
   exist in the old format. (e.g while we are able to download the current BTC
   headers we have no way to know/verify that those blocks contain the commitments on file)
  
   - therefore, full resync will be required to use the new leveldb comb full ui.
  
   - commits.db will live and remain working on the legacy/natasha branch.


I hadn't thought about maintaining support for the old commits.db but it makes sense to do. The problem though is that, as you brought up, the old commits data relied on the Modded BTC node to trigger reorgs; it can't detect them by itself. Are you suggesting also combining this with a second, separate database so that people could run the old commits.db format while also using the built in RPC puller, or just that they'd have the option to go back to the legacy system of requiring the Modded BTC?

Quote
2. change of port from :2121 to something else

   - yes. we want people who physically own 1 pc to run both versions in
   parallel, as well as testers to push blocks to both nodes using a hacked syncer.

This makes a ton of sense.

Quote
3. remove the whole /miner/mine api.

   - definitely, we must recreate it correctly (under a different name)

   - there should be:
     - a call to return ALL/THE TOPMOST on chain block headers the combfullui is currently synced to (not just current height).
     - a call to rollback (reorg the chain) to a specific header hash (block hash) + its height (uint64).
     - a call to start uploading one forward block; into this call the syncer will submit the new 80-byte block header.
     - a call to upload one commitment. parameters:
       - commitment (256bit), blockhash(256bit), height(uint64), tx number (uint32), txout number(uint32), total order
         number of the commitment within block including previously seen and unseen commitments (uint32).
     - a flush forward block command, parameters: blockhash(256bit), height(uint64), previously UNSEEN,SEEN commitment count in the block ([2]uint32)
     - a call to trigger to write out the legacy commits.db file.


I agree entirely, although for testing purposes I have it currently left on in my build, it's been invaluable for testing to allow me to manually enter in commits. I also had a more extensive COMB RPC planned, it makes complete sense to include something like that, though I'll probably wait to include it until after the DB stuff I'm working on right now is finalized.

EDIT: I'm curious as to the structure of some of these. Right now I don't have it set up to walk step by step through the commit upload process, in order to be as safe as possible. Allowing manual commit upload means allowing inter-block commit segregation; you could theoretically end up with an incomplete storage of a block's commits and not know it. The way I have it running currently is to upload every commit in a block, then upload the block hash immediately afterwards. When you load Haircomb, it scans all the commits for matching block hashes; if a commit doesn't have a block hash that matches its height, it's considered orphaned and deleted. Batching the processes together and only storing the hash AFTER all block commits have been recorded means that you shouldn't ever end up with incomplete blocks.


Quote
4. a Bitcoin blockchain syncer will be integrated or not?
     - at first not, it will talk to combfullui using the above api (done, but it was using the legacy api to mine blocks.)
     - later it will be integrated into the same exe but still talking to 127.0.0.1 (localhost) using the same api (done, again though with the legacy api)
     - finally syncer can be made to communicate internally and the api deprecated (so that there will be no miner api at all) (done)
     - the syncer will trust an external BTC full node about the longest valid chain selection (i.e. the one ran by the comb node operator) and
       only SPV validate the blocks. (done)
     - only one syncer running at a time will be supported, if multiple ones are detected wanting to push a block, the non first one will be ignored. (pretty much done)

See my notes included in the quote above; once I'm done a little more testing on the reorg code, I'll upload it to github and provide a link. There's still a lot to do to properly secure it all and make it less sloppy, but the framework is there and it works.

Quote
Finally, disadvantages of leveldb:
     - it's multiple files / difficult to share P2P
       - probably just rar/7zip it
     - multiple writing/reading processes at the same time unsupported
       - leveldb combfullui must be that single writing process.
       - syncing while wallet is off WILL NOT BE SUPPORTED.
       - use of 3rd party tools that work on leveldb files MUST BE DISCOURAGED
     - when 2 users sync to the same height their leveldb will not be bit-for-bit identical (commits.db will be) (This is true, but I think with the AES it'll be okay.)
       - we must keep the support for the AES checksum for UI purposes. (This has been kept working successfully during my testing so far)

When you say reading processes, can you elaborate? I'm assuming you're referring to securing the DB during comb operation so that we don't run into any data-race type scenarios, which makes complete sense. I'm also curious as to your emphasis on "syncing while wallet is off WILL NOT BE SUPPORTED," as while I agree it makes the most sense to run everything through Haircomb's prebuilt mining code, I think that you could likely get it running fine without doing that. I'm curious as to your concern, I'm clearly missing something.

I'll test the reorg stuff over the next few days, then I'll get a build up for testing. Thanks for the feedback!

EDIT: One bonus to note with leveldb is that, at least in my trials, commit file build and load times are reduced to like 66% of their previous times. I realize that if there's any sacrifice to security to get this, then obviously it's not worth doing; however, it is a nice bonus.
sr. member
Activity: 687
Merit: 269
thoughts about leveldb in combfullui:

overall I will stand by your core dev decision to implement this db into haircomb,
although there are some disadvantages known to me.

here are my thoughts about decisions you might be facing at this point

1. keep supporting commits.db or drop it?

   - the new leveldb haircomb core in my opinion should not be able to read
   commits.db at all and only be able to write it, and only write it in case
   the user manually clicks downgrade to old commits.db format.

   - the reason for this is that we cannot realistically assume that commits.db
   is consistent.
   
   - full header data need to be present to allow meaningful sync resume support.
   
   - another reason is we need additional data in the leveldb format (complete BTC
   block headers). There is no way to magically recreate this data that do not
   exist in the old format. (e.g while we are able to download the current BTC
   headers we have no way to know/verify that those blocks contain the commitments on file)
   
   - therefore, full resync will be required to use the new leveldb comb full ui.
   
   - commits.db will live and remain working on the legacy/natasha branch.


2. change of port from :2121 to something else

   - yes. we want people who physically own 1 pc to run both versions in
   parallel, as well as testers to push blocks to both nodes using a hacked syncer.

3. remove the whole /miner/mine api.

   - definitely, we must recreate it correctly (under a different name)

   - there should be:
     - a call to return ALL/THE TOPMOST on chain block headers the combfullui is currently synced to (not just current height).
     - a call to rollback (reorg the chain) to a specific header hash (block hash) + its height (uint64).
     - a call to start uploading one forward block; into this call the syncer will submit the new 80-byte block header.
     - a call to upload one commitment. parameters:
       - commitment (256bit), blockhash(256bit), height(uint64), tx number (uint32), txout number(uint32), total order
         number of the commitment within block including previously seen and unseen commitments (uint32).
     - a flush forward block command, parameters: blockhash(256bit), height(uint64), previously UNSEEN,SEEN commitment count in the block ([2]uint32)
     - a call to trigger to write out the legacy commits.db file.

4. a Bitcoin blockchain syncer will be integrated or not?
     - at first not, it will talk to combfullui using the above api
     - later it will be integrated into the same exe but still talking to 127.0.0.1 (localhost) using the same api
     - finally syncer can be made to communicate internally and the api deprecated (so that there will be no miner api at all)
     - the syncer will trust an external BTC full node about the longest valid chain selection (i.e. the one ran by the comb node operator) and
       only SPV validate the blocks.
     - only one syncer running at a time will be supported, if multiple ones are detected wanting to push a block, the non first one will be ignored.

Finally, disadvantages of leveldb:
     - it's multiple files / difficult to share P2P
       - probably just rar/7zip it
     - multiple writing/reading processes at the same time unsupported
       - leveldb combfullui must be that single writing process.
       - syncing while wallet is off WILL NOT BE SUPPORTED.
       - use of 3rd party tools that work on leveldb files MUST BE DISCOURAGED
     - when 2 users sync to the same height their leveldb will not be bit-for-bit identical (commits.db will be)
       - we must keep the support for the AES checksum for UI purposes.
       
jr. member
Activity: 76
Merit: 8
I'm a big dummy; I just realized that there isn't actually any database encryption, the AES is just used for the file fingerprint. Took me long enough. Knowing this, I'm gonna start merging the RPC puller and new DB format into the Haircomb code; hopefully I'll have a decent test version out in the next couple of weeks.

On a different note, another thing I can't believe I didn't realize sooner was that you can do a better version of atomic swapping COMB using the same decider for two contracts, without any new tech required.

1. Alice sends 10 COMB to Contract_1, which is composed of Decider_1 and a pattern_0 merkle tree with Alice_1/Turbine_1.

2. After Alice proves that she's deposited the funds, the Turbine then sends 10 COMB to Contract_2, which is composed of Decider_1, and a pattern_0 merkle tree with Turbine_2/Alice_2


If Alice signs the key true, Alice's funds go to Turbine_1, and the turbine's funds go to Alice_2. If she signs it false, Alice's funds go to Alice_1, and the Turbine's funds go to Turbine_2.


This means that Alice has complete control over her funds and doesn't have to trust a 3rd party to hold them or oversee the trade, and also reduces the trade cost to 2 BTC commits. There's still a problem though; information imbalance. Alice can wait until the Turbine has deposited its funds, then sign the Decider false to rollback. Now Alice has access to her funds (because she can see the TX history of the Decider that unlocked them), but until she gives the long decider to the Turbine the turbine funds are still locked up from the Turbine's view. This is a problem. It would be less of a problem if the turbine was the one who had the information advantage, because they're the ones with a reputation to uphold, however Alice is anonymous; there's no real incentive for her to act in good faith. Technically there isn't a reason for her to actually do this (she gains nothing from the process), but it's still an attack vector for those who just like to watch things burn.

From my thinking so far, the only way to deal with the information advantage in general is to a) rely on the reputation of the information holder to ensure that they supply the tx history, or b) figure out a way to communicate the information over the only channel that Haircomb nodes share: the BTC chain. Both options kinda stink, but option b stinks more; the solution I proposed before was the brute-force based lockbox, but I have a feeling that it could be exploited and used as an attack vector. It also requires a fork, which, while not out of the question because both Hot Cashes and the scripting implementation would be sick, is less than ideal. I don't know how to deal with it in this circumstance though.
jr. member
Activity: 76
Merit: 8
I've been working on integrating an RPC based commits puller into Haircomb and I've run into a bit of a crossroads. In order for it to work well I'll need to rework the storage mechanics of the commits, and I have two options: move over to an established database format (I've been using LevelDB for now), or rework the current storage mechanism of a single, custom file.

The rework is needed to store block hashes with the commits themselves; storing the block hashes is required to detect when the BTC chain orphans a block/blocks, to trigger a commits unmine. Previously this was detected internally by the modded BTC client; however, switching to using BTC's RPC means that this won't work, and reorgs must be detected manually. It only makes sense to store the block hashes with the commits themselves; working with a single database makes it easier to account for all possible events than working with two separate databases simultaneously.

The problem arises with deciding how to move forward. We were talking about it on the telegram, and we can't really figure out why Natasha would have gone with a custom db system like she did; it seems like it would have been incredibly easy to implement a pre-existing db structure into Haircomb (and from my experience with LevelDB, it was very easy). This makes me think that I'm missing something important about why Haircomb uses this custom structure, but I don't know what it'd be.  If I had to guess, it'd be the encryption. I'm not sure how relevant the encryption of the commits file is to the security of Haircomb though. You could probably use an established db structure that utilizes encryption, however I don't know how quickly you'd be able to iterate through the dataset, which could significantly slow down Haircomb's load times.

So I see two ways of moving forward:

1. Switch to an established DB system.
I've been using the Go implementation of LevelDB that can be found here: https://github.com/syndtr/goleveldb. It works fairly well from my limited experience with it, and has similar load times to the current system. I've been storing the block hashes under the UTXO tag 99999999 CAT BLOCKHEIGHT. The database organizes by numerical order of the key, so this clumps all the hashes at the end of the database, and allows me to iterate through the stored commits on load without a problem.

2. Modify the current system.
This could be done by changing the flush system from using FFF...FFF, 999...999 to instead using BLOCKHASH, 99999999 CAT BLOCKHEIGHT, the same format as previously mentioned. All that's needed to identify a flush entry in the commits file as the end of a block is the 99999999 block height in the UTXO tag, which means that both the FFF...FFF and the second portion of the UTXO tag are available to use as data storage. We should be able to just slot these into the existing commits.db file and, on load, process them into their own memory storage.

I've got a prototype working for option 1, but I don't want to put the effort in to finalize it yet without properly going over whether it's the right path to take. It's a much easier implementation than option 2; however, you do lose the current AES encryption that is implemented. You also gain a more stable storage format, which means less corruption, though there may be ways to reduce the corruption risk in the current system. There may also be fast, encrypted databases that I'm not currently aware of. I'd appreciate any thoughts that anybody has on the matter.
jr. member
Activity: 76
Merit: 8
I may have found a bug with the Haircomb Core's reorg code, but it may just be the way I'm testing it, I'm not sure. I'm gonna PM you Watashi.

EDIT: Looking at it a bit more, if it’s an actual bug and not just something I’m messing up it doesn’t seem to be exploitable in any way, and it would explain the random crashes that some people have had. Doesn’t hurt to be safe though.

EDIT2: I patched the bug, it seems to be working fine now on windows. If someone would be willing to test it on a Linux machine I'd appreciate it, I can help you through the re-org steps on BTC's regtest if you need assistance. The repo is here: https://github.com/nixed18/combfullui

The bug fix is in commitfile.go; there are a bunch of other changes with comments and some testing values, though.
jr. member
Activity: 76
Merit: 8
The way I laid out to commit the contracts to the BTC chain has an attack vector. If you are a malicious actor, you can scan the mempool for new transactions coming in with P2WSH addresses, and then test out those transactions as if they were contract IDs. If they were, you could insert your own instructions into the contract by setting a higher fee and getting your transaction on the next block.

Rather than initiating a contract commit using the contract ID directly, you could instead initiate by using SHA256(ID CAT START CAT 0), and then continue inputting the instructions starting with index = 1.
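
i.e. something along the lines of the sketch below (the "START" tag and the zero-byte index encoding are just placeholders for whatever gets specified):
Code:
import "crypto/sha256"

// initiationCommit hides the raw contract ID from mempool watchers by
// committing to SHA256(ID CAT "START" CAT 0) instead of the ID itself
func initiationCommit(contractID [32]byte) [32]byte {
	buf := append(append(contractID[:], []byte("START")...), 0)
	return sha256.Sum256(buf)
}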

This brings up another question, though: what about 51% attacks? You need to reveal the contract ID to tx receivers, so if they had enough power, or the contract was recent enough, they could go back and rewrite the contract. Do you think that this is a risk to worry about?
jr. member
Activity: 76
Merit: 8
Alright I have the beginnings to interpreter law, but nothing concrete yet.

As established, values can be separated into two types, Locked and Free. Free values can be input via a TX history input, and have no inherent ties to the contract, save for a reference to their argument slot. Locked values are more varied; they can either be included in the contract themselves as predefined constants, such as raw integers used to multiply, or be injected into the contract on its creation by becoming part of the address that makes up this particular instance of the contract.

Problems arise when free values can be inserted into areas of the contract and affect the outcome while being completely untethered from the contract itself. See the example below.

Code:
ARGS = 2
int256 ARG[1] = Address
int256 ARG[2] = Address (Free)
ADDRESS = SHA(ARG[1]+1)
CONDITION = TRUE
CONNECT_TO = ARG[2]

This contract determines the CONNECT_TO solely based off of a free value. This is unstable.


One methodology that might be able to restrain free value usage is to require free values to be in some way tethered to a locked value. This could be done through a pointing process, in which a contract must specify which locked value the free value will be used by. This pointing process can then be used to provide a filter to these free values; free values cannot be included in the contract unless they are done so via integration through the locked value that points to them. To illustrate my point (hah), I'll use a version of a Hot Cash contract.

Code:
ARGCOUNT = 6
256bit ARG[1] = Fallback address
256bit ARG[2] = Value at the end of the hashing sequence (to sign with the left leg of the decider)
256bit ARG[3] = Value at the end of the hashing sequence (to sign with the right leg of the decider)
256bit ARG[4] = Value needed to be committed on chain, a free variable (to sign with the left leg of the decider)
256bit ARG[5] = Value needed to be committed on chain, a free variable (to sign with the right leg of the decider)

ADDRESS = SHA256(ARG[1] CAT ARG[2] CAT ARG[3])
CONSTRUCTS = [DECIDER(ARG[2], ARG[3], ARG[4], ARG[5])]
CONDITION = ENTRENCHED(ARG[4]) && ENTRENCHED(ARG[5]) &&
CONSTRUCTS[1]
CONNECT_TO = GET_COMMIT_AT_INDEX(GET_INDEX_OF_COMMIT(ARG[1]) - CONSTRUCTS[1])

In this example, DECIDER() will return the number that has been signed, based off of the input values. If it cannot calculate a value, it will return nil. If it can calculate a value, but the DECIDER has already been used to sign a previous number, it will also return nil. Returning some value that is not false or nil will act as a TRUE statement for the CONDITION, and allow the contract to move onto the CONNECT_TO phase. Once there, the DECIDER() will again return the signed number, and the contract will complete based on said number.

In this example, the free values are never allowed to interact with the main contract itself. They are REQUIRED to be filtered through a native contract. This seems like it could be a core methodology that can secure contracts; by requiring free values to be processed through a native object like this, we can restrict the outcome. This can potentially even expand to nested custom contracts; free values could extend through multiple layers, as long as they were required to be processed directly through a native object to be used.

I'm not sure I'm happy with the syntax though, let me know what you think. Is there a way to break this system?


EDIT: Another phrasing option:

Code:
ARGCOUNT = 5
256bit ARG[1] = Fallback address
256bit ARG[2] = Value at the end of the hashing sequence (to sign with the left leg of the decider)
256bit ARG[3] = Value at the end of the hashing sequence (to sign with the right leg of the decider)
256bit ARG[4] = Value needed to be committed on chain, a free variable (to sign with the left leg of the decider)
256bit ARG[5] = Value needed to be committed on chain, a free variable (to sign with the right leg of the decider)

CONSTRUCTS = [DECIDER(ARG[2], ARG[3], ARG[4], ARG[5])]
ADDRESS = SHA256(ARG[1] CAT CONSTRUCTS[1].ADDRESS)
CONDITION = ENTRENCHED(ARG[4]) && ENTRENCHED(ARG[5]) &&
            CONSTRUCTS[1]
CONNECT_TO = GET_COMMIT_AT_INDEX(GET_INDEX_OF_COMMIT(ARG[1]) - CONSTRUCTS[1])

There may be consequences to hashing ARG[2] and ARG[3] before they're hashed into the main address, I'm not sure.
jr. member
Activity: 76
Merit: 8
I see I see, that makes sense, thanks for the explanation.

Dammmmmm gurl, thassum phat commitz.

You could still do a multi-sig Hot Cash but using multiple Deciders. If a Decider that works like the current ones but could only sign 1 or 0 were implemented, you could do the exact same process as the multi-sig Hot Cash I laid out before. Even if a new binary Decider wasn't included you could still accomplish it with 2 normal Deciders, but then you run into the same problem as your full-wallet multi-sig: there's a strong pressure to follow the leader, because your funds get destroyed if you all disagree.

Do you think a new variable type, "address", makes sense? If you gave the Interpreter instructions that modifying an address in any way (e.g. address * 0) turns it into a normal int256, and also told it to ONLY accept addresses as CONNECT_TO values, it covers some ground, but I don't think it's enough. Essentially the only ways to have addresses in the contract would be to either A) include them as listed "address" in the non-free variable slots, so they're hard coded into the object's address, or B) create them using native functions like GET_ADDRESS_AT_INDEX().
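To make the idea concrete, here's a rough Go-flavoured sketch of that typing rule (names are mine and purely illustrative, this isn't interpreter code): any arithmetic on an address demotes it to a plain int256, and only an untouched address is accepted as CONNECT_TO.

Code:
package typing

// Int256 is a plain 256-bit value; Address is the proposed distinct type.
type Int256 [32]byte
type Address [32]byte

// Mul stands in for any arithmetic op on an address: the result is an Int256,
// never an Address, so something like (address * 0) can no longer be fed
// straight into CONNECT_TO.
func Mul(a Address, k Int256) Int256 {
	// actual 256-bit arithmetic elided; only the type demotion matters here
	_ = a
	_ = k
	return Int256{}
}

// ValidConnectTo accepts only values that are still typed as Address.
func ValidConnectTo(v interface{}) bool {
	_, ok := v.(Address)
	return ok
}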

But I still see there being problems if someone were to make a contract like:

Code:
ARGS = 2
address ARG[1] = Address
int16 ARG[2] = index_displacement (FREE)
ADDRESS = SHA(ARG[1]+1)
CONDITION = TRUE
CONNECT_TO = GET_COMMIT_AT_INDEX(GET_COMMIT_INDEX(ARG[1]) + ARG[2])

Now you just hand out multiple txes with different displacements Angry

There's gotta be a good way to gate contracts like this.



Thinking more about your Hot Cash, two thoughts:

1. I think they aren't actually Hot until you spend them. You could make a Hot Cash with a specific fallback address, and then just never commit that address to the chain until you're ready to do your first spend. So that's pretty cool.
2. I ran a script last night that pulled how many new P2WSH addresses were on each block (a rough sketch of such a script is below). Over 70 blocks it averaged out to 36 new P2WSH addresses per block. While that's a small sample size, even if we multiplied that number by 10 you're looking at 360 new addresses per block. That's 51,840 new P2WSH addresses per day; if you were making multiple payments in a single day you could get away with reusing a fallback address for a good chunk of your day, most of the waking hours. And that's with the overestimation included lol.
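For reference, here's a rough sketch of how such a count could be pulled from a local Bitcoin Core node over JSON-RPC (getblockhash, then getblock at verbosity 2). The URL, credentials, and block window are placeholders, and it counts witness_v0_scripthash outputs rather than deduplicated addresses, so it only approximates the script described above:

Code:
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

const rpcURL = "http://user:pass@127.0.0.1:8332" // placeholder credentials

func rpc(method string, params ...interface{}) (json.RawMessage, error) {
	body, _ := json.Marshal(map[string]interface{}{
		"jsonrpc": "1.0", "id": "p2wsh", "method": method, "params": params,
	})
	resp, err := http.Post(rpcURL, "application/json", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var out struct {
		Result json.RawMessage `json:"result"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out.Result, nil
}

func main() {
	start, n := 700000, 70 // placeholder sample window
	total := 0
	for h := start; h < start+n; h++ {
		raw, err := rpc("getblockhash", h)
		if err != nil {
			panic(err)
		}
		var hash string
		json.Unmarshal(raw, &hash)

		raw, err = rpc("getblock", hash, 2) // verbosity 2: full tx objects
		if err != nil {
			panic(err)
		}
		var blk struct {
			Tx []struct {
				Vout []struct {
					ScriptPubKey struct {
						Type string `json:"type"`
					} `json:"scriptPubKey"`
				} `json:"vout"`
			} `json:"tx"`
		}
		json.Unmarshal(raw, &blk)

		count := 0
		for _, tx := range blk.Tx {
			for _, v := range tx.Vout {
				if v.ScriptPubKey.Type == "witness_v0_scripthash" { // P2WSH output
					count++
				}
			}
		}
		total += count
		fmt.Printf("block %d: %d P2WSH outputs\n", h, count)
	}
	fmt.Printf("average: %.1f per block\n", float64(total)/float64(n))
}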
sr. member
Activity: 687
Merit: 269
yes, all schemes that rely on your single committed DECIDER are more susceptible to short-lived 51% attacks.

1. Wealthy Alice buys a lollipop worth $0.01 from a 51% miner
2. They wait 6 confirmations
3. Wealthy Alice gives a "single committed decider private key" to her entire wallet to the 51% miner as part of the coin history for the $0.01 payment.
4. The 51% miner reorgs the last 6 confirmations and loots Wealthy Alice's entire wallet

If the 51% miner were able to sell lollipops to, say, 90% of the HODLers, it could loot 90% of the COMB in existence.

Sure, you might say a 51% miner could reorg the whole chain back to the point when just 10% of the current COMB existed
and steal 90% of COMB that way, but that would be a long-range 51% attack, much more costly and risky for the attacker.


I think multisig is doable if forked in, for instance up to 3-party multisig.

The total number of commits on chain is rather fat: 21 * CEIL(n/2) for an n-party multisig.


Code:
ARGCOUNT = 69 = 23*n
ARG[1] comb 1 pubkey
ARG[2] comb 2 pubkey
ARG[3] comb 3 pubkey
ARG[4...24] comb 1 sig, free
ARG[25...45] comb 2 sig, free
ARG[46...66] comb 3 sig, free
ARG[67] destination voted by comb 1, free
ARG[68] destination voted by comb 2, free
ARG[69] destination voted by comb 3, free

ADDRESS = SHA256(ARG[1] CAT ARG[2] CAT ARG[3])
CONDITION = MAJORITY(
    ENTRENCHED(ARG[4...24]) && COMBSIGNS(ARG[1], ARG[4...24], ARG[67]),
    ENTRENCHED(ARG[25...45]) && COMBSIGNS(ARG[2], ARG[25...45], ARG[68]),
    ENTRENCHED(ARG[46...66]) && COMBSIGNS(ARG[3], ARG[46...66], ARG[69]))
CONNECT_TO = BITMAJORITY(ARG[67], ARG[68], ARG[69])
WHERE:
MAJORITY(a,b,c) = (a && b) || (a && c) || (b && c)
BITMAJORITY(a,b,c) = (a & b) | (a & c) | (b & c)

Possible problems:
- If a super-majority of keys does not each sign exactly the same destination, then all funds get destroyed.
- Strong pressure to follow the decision of the first signer, because a signer who has already signed cannot change their mind anymore.
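For 3 parties that's 21 * CEIL(3/2) = 42 on-chain commits. To make the destination recovery concrete, here's a small Go sketch of the MAJORITY / BITMAJORITY helpers defined above (my own rendering, assuming 32-byte values): when two of the three voted destinations are identical, the bitwise majority is exactly that destination; when all three differ, the result is a mixture, which is why mismatched votes destroy the funds.

Code:
package multisig

// Majority is true when at least two of the three inputs are true.
func Majority(a, b, c bool) bool {
	return (a && b) || (a && c) || (b && c)
}

// BitMajority takes the bitwise majority of three 32-byte values; if two of
// them are equal, the result is exactly that shared value.
func BitMajority(a, b, c [32]byte) (out [32]byte) {
	for i := range out {
		out[i] = (a[i] & b[i]) | (a[i] & c[i]) | (b[i] & c[i])
	}
	return
}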






jr. member
Activity: 76
Merit: 8
So, am I missing something, or can you sign 16 bit numbers with the Binary Decider method?

1. Make a Merkle tree that has 131,072 leaves, where the leaf pattern is 0, key, 1, key, 2, key, ..., 65,533, key, 65,534, key, 65,535, key.
2. Include the merkle root in the address of the object to use this signed number.
3. If the leaves are level 0, commit one of the level 1 numbers to the chain, e.g. SHA256(58 CAT key)
4. After 6 confirmations, reveal the key.

You just signed a 16 bit value with 1 commit.

EDIT: I'm clearly missing something, because if you could just reveal the key then you could instead do a 65,536-long hash chain, commit one of its elements, and reveal the key. The commit that appeared first on the chain would be the real one. So why isn't this how things are decided?
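For what it's worth, here's a tiny Go sketch of that single-chain idea (the encoding is my assumption: the element at depth n of the chain is what gets committed to sign n):

Code:
package chainsign

import "crypto/sha256"

// Commit returns the chain element that would be committed on chain to sign
// n (0..65535): SHA256 applied n times to the secret key. Revealing the key
// afterwards lets anyone recompute the depth, and therefore the n, of the
// committed element.
func Commit(key [32]byte, n int) [32]byte {
	x := key
	for i := 0; i < n; i++ {
		x = sha256.Sum256(x[:])
	}
	return x
}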

Back on topic, assuming that I was correct and there is no way to escape the requirement of limiting contracts to no OR, IF, or ELSE statements, there's still a phrasing problem. I think it can be dealt with via the interpreter, but I'm not sure how yet.

Code:
ARGS = 2
int256 ARG[1] = Address
int256 ARG[2] = Address (Free)
ADDRESS = SHA(ARG[1]+1)
CONDITION = TRUE
CONNECT_TO = ARG[2]

The interpreter needs a way to determine that this is all jacked up. One possibility is to include a more specific variable type, address, and require the CONNECT_TO statement to use an address in some regard. It could also be required to NOT be a free variable.

Depending on what you allow though, this could happen:
Code:
ARGS = 2
address ARG[1] = Address
int256 ARG[2] = Address (Free)
ADDRESS = SHA(ARG[1]+1)
CONDITION = TRUE
CONNECT_TO = (ARG[1] * 0) + ARG[2]

So clearly that can't be the only thing involved in the solution.
jr. member
Activity: 76
Merit: 8
Alright I have a multi-sig proposal but it requires a new native object: Binary Decider.

Binary Decider can only choose Forward or Rollback, and requires a single commit to do so.



User generates a key, a random 256-bit int. Generate the BDecider Root by doing SHA256( SHA256(Rollback_Address CAT Key) CAT SHA256(Forward_Address CAT Key) ). Include the Root, Rollback, and Forward in the contract. When you want to Decide, commit either SHA256(Rollback_Address CAT Key) or SHA256(Forward_Address CAT Key) to the BTC chain. Then give the other party your key. Whichever commit appears first on the chain is the one used.
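A quick Go sketch of that construction (just restating the hashing above, with CAT as byte concatenation):

Code:
package bdecider

import "crypto/sha256"

// branch computes SHA256(addr CAT key), the value committed on chain to pick
// that branch.
func branch(addr, key [32]byte) [32]byte {
	return sha256.Sum256(append(addr[:], key[:]...))
}

// Root computes SHA256( SHA256(rollback CAT key) CAT SHA256(forward CAT key) ),
// which is what gets baked into the object's address.
func Root(rollback, forward, key [32]byte) [32]byte {
	l := branch(rollback, key)
	r := branch(forward, key)
	return sha256.Sum256(append(l[:], r[:]...))
}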

This can also be used to sign a miscellaneous YES or NO, for use in Custom Contracts. Instead of a Rollback Address use 000...000, and instead of a Forward Address use 000...001. See the Multi-Sig example below:


Code:
ARGCOUNT = 8
256bit ARG[1] = Fallback address
256bit ARG[2] = Value at the end of the hashing sequence (to sign with the left leg of the decider)
256bit ARG[3] = Value at the end of the hashing sequence (to sign with the right leg of the decider)
256bit ARG[4] = BDecider Lock
16bit ARG[5] = Int to be signed, a free variable
256bit ARG[6] = Value needed to be committed on chain, a free variable (to sign with the left leg of the decider)
256bit ARG[7] = Value needed to be committed on chain, a free variable (to sign with the right leg of the decider)
256bit ARG[8] = BDecider Key, a free variable

ADDRESS = SHA256(ARG[1] CAT ARG[2] CAT ARG[3] CAT ARG[4])
CONDITION = ENTRENCHED(ARG[6]) && ENTRENCHED(ARG[7]) &&                                   # If both Decider vals have been entrenched
   IF_DECIDER_SIGNS_VALUE_USING_PREIMAGES(ARG[2...3], ARG[5], ARG[6...7]) &&              # And the Decider is signed with them
   IF_BDECIDER_UNLOCKED(ARG[4], 000...000, 000...001, ARG[8])                             # And the BDecider is unlocked
CONNECT_TO = GET_COMMIT_AT_INDEX(GET_INDEX_OF_COMMIT(ARG[1]) - (ARG[5] * BDECIDER_UNLOCK(ARG[4], 000...000, 000...001, ARG[8])))

IF_BDECIDER_UNLOCKED returns a TRUE or FALSE, but BDECIDER_UNLOCK returns the address itself. Because the contract stipulates the two addresses of the BDecider are 0 and 1 respectively, BDECIDER_UNLOCK can only return one of those.

The CONNECT_TO takes the index of the fallback and subtracts the signed displacement multiplied by either 0 or 1. In this way the second signature is required to sign off, but in a way that makes it impossible to burn any COMB: either they sign a 1, leaving the displacement intact so it points at the Forward Address, or they sign a 0, which zeroes the displacement so it points back at the Fallback Address.
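In other words (a worked example with names of my own choosing), the index arithmetic boils down to:

Code:
package example

// connectIndex: fallbackIndex is GET_INDEX_OF_COMMIT of the fallback, signed
// is the displacement signed by the main Decider, and bit is the 0/1 value
// unlocked from the BDecider. bit == 0 keeps the fallback; bit == 1 applies
// the displacement and lands on the Forward Address.
func connectIndex(fallbackIndex, signed, bit uint64) uint64 {
	return fallbackIndex - signed*bit
}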

I do worry that including raw 256-bit constants in the contract may be a problem, so if we wanted to limit raw number inclusion, this could be rewritten to have those addresses become part of the overall contract address instead.

LMK what you think about not using IFs or ORs.
jr. member
Activity: 76
Merit: 8
Crap crap crap, I just realized there's a MAJOR problem with the precedent that the Multi-Sig would allow.

 - Alice and Bob pay into a contract that says "If EITHER decider signs a non-0 number, all funds go to the corresponding address."
 - Alice signs her Decider Key sending to Chris's Address. She doesn't tell Bob.
 - Bob signs his Decider Key sending to Danny's Address. He doesn't tell Alice.

Now Chris and Danny both have 100% of the funds that Alice and Bob paid in, doubling the COMB. That's not good.

Is there a way to program the client to detect a bad contract like this? Is that what you were showing in your post, and I'm just too much of a baby-brain to notice?

EDIT: Something you could do would be to not allow OR statements in the condition. Rephrasing the Hot Cash like this would fit that requirement:

Code:
ARGCOUNT = 6
256bit ARG[1] = Fallback address
256bit ARG[2] = Value at the end of the hashing sequence (to sign with the left leg of the decider)
256bit ARG[3] = Value at the end of the hashing sequence (to sign with the right leg of the decider)
16bit ARG[4] = Int to be signed, a free variable
256bit ARG[5] = Value needed to be committed on chain, a free variable (to sign with the left leg of the decider)
256bit ARG[6] = Value needed to be committed on chain, a free variable (to sign with the right leg of the decider)

ADDRESS = SHA256(ARG[1] CAT ARG[2] CAT ARG[3])
CONDITION = ENTRENCHED(ARG[5]) && ENTRENCHED(ARG[6]) &&
IF_DECIDER_SIGNS_VALUE_USING_PREIMAGES(ARG[2...3], ARG[4], ARG[5...6])
CONNECT_TO = GET_COMMIT_AT_INDEX(GET_INDEX_OF_COMMIT(ARG[1]) - ARG[4])

Would rephrasing in this manner work for all instances? All contracts would have to be phrased with no IF/ELSE statements in the CONNECT_TO/RETURN, and no OR statements in the CONDITION. All you have are fact-checks. You could still do a multi-sig like this, but it'd REQUIRE communication between both parties.

Except what if Alice just decided she wanted to send all the funds to a destination of her choice, and went ahead and signed her Decider to point at said destination? In the basic case of requiring both parties to sign, Bob now has to either sign in agreement with Alice, or just trap the funds in there forever. There's gotta be a clever way around this, without breaking the contracts.

jr. member
Activity: 76
Merit: 8
OH CRAP I didn't even consider floor(), I understand now. That's spooky. God dammit, I thought I was good at breaking things.


Okay so I understand your layout of the Hot Cash now, thank you for the labelling, it helped a lot. Last night I went through and made the following as an example of a Multi-Sig Hot Cash (I included an updated version after the text), and it has some small differences, but I also think I learned why custom signers won't work; anything that does SHA()^X is a potential weapon.

I think it's important to include in the CONTRACT what the expected type of an argument is; I worry that there'd be a way to cause a problem if someone hacked together a contract history with mismatched types, and the receiver just tried to run it anyways without realizing that it was invalid. It feels too loosey-goosey.

What I think is interesting about your Hot Cash contract is that it runs on receiving a destination address. The version I made would request a GetCommitAtIndex(X) and have the Decider function return TRUE or FALSE for whether the listed number was signed, then the client would go get the INDEX if it was true. Yours makes more sense I think, because at the cost of 256 bits of space you're saving the computer the work of finding the address. It's also clever to not store the signed int, but to just infer it, so I stole that lol. Do you think there's a problem in general with a GetCommitAtIndex() function? Or would it work as an option in other instances?

There's a potential problem with doing nested objects with arguments in TX history data; what if you have two nested objects, and the commits of the second one have occurred, but not the first? I think to deal with this it makes sense to use something like 000...000 in place of just leaving an argument slot empty. The example below will demonstrate, although it's still using the flawed custom Decider for its nested objects.


Non-Nested Custom Decider
Code:
ARG = [
int256,   # 1 - Hashtip_1
int256,   # 2 - Hashtip_2
int16,    # 3 - SignedInt (Free)
int256,   # 4 - CommittedVal_1 (Free)
int256,   # 5 - CommittedVal_2 (Free)
]

#VALID(ARG) checks that it isn't 000...000

ADDRESS = SHA256(ARG[1] CAT ARG[2])
CONDITION = VALID(ARG[4]) && VALID(ARG[5]) && ENTRENCHED(ARG[4]) && ENTRENCHED(ARG[5])
RETURN = SHA256(ARG[4])^(65535-ARG[3]) == ARG[1] && SHA256(ARG[5])^(ARG[3]) == ARG[2] ? TRUE : FALSE


Multi-Sig Hot Cash
Code:
ARG = [
int256,   # 1 - Reference to NN Decider Contract
int256,   # 2 - Fallback Address
int256,   # 3 - Decider1_HashTip_1 (Alice)
int256,   # 4 - Decider1_HashTip_2 (Alice)
int256,   # 5 - CommittedVal1_1 (Alice) (FREE)
int256,   # 6 - CommittedVal1_2 (Alice) (FREE)
int256,   # 7 - Decider2_HashTip_1 (Bob)
int256,   # 8 - Decider2_HashTip_2 (Bob)
int256,   # 9 - CommittedVal2_1 (Bob) (FREE)
int256,   # 10 - CommittedVal2_2 (Bob) (FREE)
int256,   # 11 - Forward Address (FREE)
]

ADDRESS = SHA256(ARG[2] CAT ARG[3] CAT ARG[4] CAT ARG[7] CAT ARG[8])
CONDITION = (  ENTRENCHED(ARG[5]) && ENTRENCHED(ARG[6]) && BUILD_GET_OBJECT(ARG[1], ARG[3...4], 0, ARG[5...6])  ) ||                   # If Alice has signed 0
   (  ENTRENCHED(ARG[9]) && ENTRENCHED(ARG[10]) && BUILD_GET_OBJECT(ARG[1], ARG[7...8], 0, ARG[9...10])  ) ||                # If Bob has signed 0
   (  ENTRENCHED(ARG[2]) && ENTRENCHED(ARG[11]) && ENTRENCHED(ARG[5]) && ENTRENCHED(ARG[6]) &&                               # If Alice and Bob have both signed the same non-0 number
            (  BUILD_GET_OBJECT(ARG[1], ARG[3...4], (CHAINPOS(ARG[11]) - CHAINPOS(ARG[2])), ARG[5...6]) &&
            (  ENTRENCHED(ARG[9]) && ENTRENCHED(ARG[10]) && BUILD_GET_OBJECT(ARG[1], ARG[7...8], (CHAINPOS(ARG[11]) - CHAINPOS(ARG[2])), ARG[9...10])  )  )  )
RETURN = BUILD_GET_OBJECT(ARG[1], ARG[3...4], 0, ARG[5...6]) || BUILD_GET_OBJECT(ARG[1], ARG[7...8], 0, ARG[9...10]) ? ARG[2] : ARG[11]

Let me know what you think about using 000...000 to indicate an empty field in the data. In this case, if Alice hadn't signed anything, but Bob had signed 0, he'd need to indicate in the TX history that Alice hadn't signed anything.

Or is this too sketchy? I'm trying to think whether this could be exploited via the asymmetry that comes with TX histories. In this instance let's say they had the Multisig to make purchases using their combined funds; Alice and Bob put 10 COMB each into the Hot Cash, and now they have purchasing power of 20 COMB. If either of them signs 0, the 20 COMB goes to an LStack that splits the COMB back up and sends 10 to Alice and 10 to Bob. It gets a little messed up when you consider that Bob could sign the TX to 0 and then just not tell Alice. Now she still thinks the wallet has 20 COMB. I don't think she can actually generate COMB with it, though, which is good.

I think to make this contract secure for both parties, in addition to the information you currently have to sign, you'd also have to commit to an address to signal that the Hot Cash has been burnt and a decision has been made. So rather than using a contract like the one above, where two people have separate Deciders, there's probably a better way to do it where the contract also has a killswitch address. Both parties know the killswitch. If the killswitch is ever committed before both Deciders sign a number together, the transaction goes to the Fallback; but if both Deciders sign the same number before the killswitch is committed, the funds go to the Forward. This would actually make the code smaller I think, and it'd definitely make it cheaper to back out of the arrangement.

What do you think about the RETURN function? I used it in the "main" contract too, because if the Interpreter is the one acting as the parent of the sequence, then it knows to treat a RETURN from a main contract as the CONNECT_TO address.



Quote
Guess this primitive  Grin


Code:
ARGCOUNT = 3
256bit ARG[1] = ALICE_ADDRESS
256bit ARG[2] = BOB_ADDRESS
256bit ARG[3] = ARBITRARY 256bit number
ADDRESS = SHA256(ARG[1] CAT ARG[2] CAT ARG[3])
CONDITION = TRUE
CONNECT_TO = SHA256( SHA256( SHA256(ARG[1] CAT ARG[2] CAT (ARG[3]+1)) CAT ARG[1] CAT 0.00000001 COMB  ) CAT ARG[2] CAT 0.00000001 COMB )


a. CONDITION is always TRUE (end of proof)
b. by contradiction: ADDR1 != ADDR2 || CONNECT_TO1 == CONNECT_TO2
     if ADDR1 != ADDR2:
       then by SHA256 collision resistance ARGS1 == ARGS2:
         but ARGS1 != ARGS2 if we want to prove something
     if CONNECTTO1 == CONNECTTO2:
       then by SHA256 collision resistance ARGS1 == ARGS2:
         but ARGS1 != ARGS2 if we want to prove something
   and so: ADDR1 == ADDR2 && CONNECT_TO1 != CONNECT_TO2
   and therefore: CONNECT_TO1 != CONNECT_TO2
 


That looks kinda like an LStack to me, but I'm confused about the ARG[3]+1. By ARBITRARY 256-bit number, I read that as you can put in whatever number you want. But if that were the case, then all the deposited money except for 0.00000002 COMB would just be stuck in the last stack unless you had put enough funds there to trigger it. So clearly I'm misunderstanding something lol.

EDIT: On an unrelated note, I was thinking more about your Hot Cashes, and I realized that if you're making regular payments it'll only be 3 BTC TXes per cycle, because you can just re-use the same fallback address as you did the last time!