
Topic: Including merkleroot of blockheaders in coinbase? (Read 1077 times)

staff
Activity: 4284
Merit: 8808
Do you mean morphing the "bits" field in the header? Is it legal to use a lower-than-required "bits" in a bitcoin header?
No; the commitment included in the block is a hashtree with leaves that look like:

{index for how many back, previous_block_hash, compact encoded work difference between now and then}
...

One leaf for each backpointer. A back pointer is only valid (considered acceptable for traversal in a proof) if the block committing to it meets the work-difference threshold.  Similar to merge mining, it's possible to perform N hashcashes in parallel, all with different thresholds. Just as, if namecoin's difficulty were higher than Bitcoin's, you could merge-mine namecoin and have a bitcoin block with a namecoin commitment in it which is still useless for namecoin even though it's a valid bitcoin block.

So to follow a link that goes one block back, you need bits work-- this is the normal case.

To follow a link that jumps two back, your block must be valid at the 2*bits level... and so on; to successfully mine a 10000-back hop you need a block that is valid when considered at a work threshold of whatever was spanned by those blocks (e.g. 10000x greater if difficulty has been constant).
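Here's a minimal sketch of that rule in Python (the helper names and the work-from-hash approximation are mine, not part of the actual commitment format):

Code:
# Sketch only: when may a backlink be traversed in a proof?

def hash_work(block_hash: int) -> int:
    # Expected number of hashes needed to get a hash this low or lower.
    return (1 << 256) // (block_hash + 1)

def backlink_traversable(block_hash: int, spanned_work: int) -> bool:
    # A link that jumps over `spanned_work` worth of chain is only acceptable
    # if the committing block's hash meets that work threshold: a two-back
    # jump needs ~2*bits of work, a 10000-back jump needs the work spanned
    # by those 10000 blocks.
    return hash_work(block_hash) >= spanned_work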

This is why the proof can hide data, but you can't fake one with less work (on average) than just mining normally.  Individually any block is unlikely to jump far back, but in the aggregate long jumps happen, and as a result the proof is log-sized in the length of the chain. (Actually less when the hashrate is constantly increasing.)

For a given set of commitments there are multiple possible paths, and so multiple proofs for the same chain are possible-- if you could have gone 100 back you could also go 50 back. Sometimes going less far back makes for a smaller proof; perhaps at 51 back there is a single hop that goes all the way. The smallest proof can be constructed in linear time via dynamic programming: start with the block you're trying to reach with your proof (e.g. genesis) and scan forward, keeping track, for each block, of the size of the best proof from it to the destination and the first hop back that achieves it. When you consider a block, check each of its valid backlinks, add the cost of taking that backlink to the cost of its target's best proof, take the minimum as this block's best, and move on to the next block.  When you get to the end, walk backwards. This algorithm always produces the smallest possible proof, though a simple greedy algorithm (always jump as far as you can but not past your destination) isn't usually _that_ much larger.
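A short Python sketch of that dynamic program, under simplifying assumptions (blocks are indexed 0..n-1 with 0 as the destination; backlinks[i] lists only the backlinks of block i that are valid for traversal, as (target_index, hop_cost) pairs where hop_cost is the encoded size of that hop; every block is assumed to at least have its ordinary one-back link):

Code:
def smallest_proof(n_blocks, backlinks, dest=0):
    INF = float("inf")
    best = [INF] * n_blocks     # best[i]: size of the smallest proof from block i back to dest
    hop = [None] * n_blocks     # hop[i]: the first hop back that achieves best[i]
    best[dest] = 0
    for i in range(dest + 1, n_blocks):            # scan forward from the destination
        for target, hop_cost in backlinks[i]:      # consider each valid backlink of block i
            if best[target] + hop_cost < best[i]:
                best[i] = best[target] + hop_cost
                hop[i] = target
    # Walk backwards from the tip to recover the chosen hops.
    path, i = [], n_blocks - 1
    while i != dest:
        path.append((i, hop[i]))
        i = hop[i]
    return best[n_blocks - 1], path

# e.g. smallest_proof(5, [[], [(0, 80)], [(1, 80), (0, 80)], [(2, 80)], [(3, 80), (1, 80)]])
# -> (160, [(4, 1), (1, 0)]): two hops beat chaining four one-back hops.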

Quote
The chance of success is higher because of higher variance, right? The normal way follows a gamma (Erlang) distribution while the compact proof follows an exponential distribution?
Right, exactly-- well, to be more precise, the compact proof and the normal chain are both gamma; but the compact proof takes the log of the shape while increasing lambda-- the same expected work is represented by a smaller number of rarer events, so the expectation is the same but the distribution shape is different.
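A toy numerical illustration of that point (the concrete numbers and the shape/scale parameterization are mine):

Code:
import math

N, D = 100_000, 1.0                  # blocks in the chain, work per block (normalized)
k = max(1, round(math.log2(N)))      # roughly log-many hops in a compact proof

# Normal chain: sum of N exponentials with mean D  ->  Gamma(shape=N, scale=D)
normal_mean, normal_var = N * D, N * D ** 2

# Compact proof: k hops, each spanning N*D/k expected work  ->  Gamma(shape=k, scale=N*D/k)
scale = N * D / k
compact_mean, compact_var = k * scale, k * scale ** 2

print(normal_mean, compact_mean)     # identical expectations
print(compact_var / normal_var)      # variance is ~N/k times larger, so the tails are much fatter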

Technically it's possible to make the proofs "over-prove", e.g. to go X work back you have to show a*X expected total work; but I think I concluded that you can only get constant-factor improvements that way unless you break the asymptotic log scaling of the proofs. One nice thing is that such techniques are between the prover and verifier; the commitments don't change.
legendary
Activity: 1792
Merit: 1111
you have to set out in advance what amount of work you're claiming;

Do you mean morphing the "bits" field in the header? Is it legal to use a lower-than-required "bits" in a bitcoin header?

and the link does nothing unless you hit it. If you try, but aren't successful, you've just wasted your effort.  In other words, with the same amount of hashing power on average you could make a false compact proof that replaces the chain, or you could replace the whole chain normally; they take exactly the same expected amount of work.  But if you didn't have that much hashing power... only enough to, say, replace 6 months; then maybe you could get lucky and successfully create a compact proof for the whole chain with, say, a 0.01% probability of success, while the probability of success for just getting lucky directly mining the blocks would be 0.0000001%; just due to the reduced number of samples.

The chance of success is higher because of higher variance, right? The normal way follows a gamma (Erlang) distribution while the compact proof follows an exponential distribution?

This sounds related to my earlier (bad) idea of "decreasing the variance of block interval w/o decreasing the mean of interval" https://bitcointalksearch.org/topic/decreasing-the-variance-of-block-interval-wo-decreasing-the-mean-of-interval-572816
staff
Activity: 4284
Merit: 8808
Do you mean one can use a very low value hash to replace the whole blockchain?
Not quite; you commit to the work you're targeting... so you can't just find something and replace as much as you can; you have to set out in advance what amount of work you're claiming; and the link does nothing unless you hit it. If you try, but aren't successful, you've just wasted your effort.  In other words, with the same amount of hashing power on average you could make a false compact proof that replaces the chain, or you could replace the whole chain normally; they take exactly the same expected amount of work.  But if you didn't have that much hashing power... only enough to, say, replace 6 months; then maybe you could get lucky and successfully create a compact proof for the whole chain with, say, a 0.01% probability of success, while the probability of success for just getting lucky directly mining the blocks would be 0.0000001%; just due to the reduced number of samples.
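A crude way to see the size of that effect in Python (this models success as simply "find enough qualifying hashes within a fixed budget", which ignores chaining and ordering; every number here is made up):

Code:
import math

def p_at_least(n, mu):
    # P[Poisson(mu) >= n]: chance of finding at least n qualifying hashes
    # when the expected number found with your budget is mu.
    return 1.0 - sum(math.exp(-mu) * mu ** i / math.factorial(i) for i in range(n))

N = 52          # blocks' worth of work you would need to replace (toy units)
budget = 6.0    # but you only have ~6 blocks' worth of expected work
k = 6           # hops a forged compact proof would need (toy value)

# Direct reorg: N ordinary successes.  Forged proof: k successes, each N/k times as hard.
print(p_at_least(N, budget))          # vanishingly small
print(p_at_least(k, budget * k / N))  # still small, but many orders of magnitude larger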

(Though note, you can boost your reorg success probability even without the compact proofs by mining so that the difficulty goes up at the highest possible rate (e.g. 4x every retarget)... and _if_ you assume that everyone's hashrate exponentially goes up forever, then any fixed proportion of the hashrate (e.g. 0.1%) will, in finite but potentially very large time, eventually replace the whole chain just due to that 4x ramping variance attack: eventually you have a chain with similar work but many fewer samples, and so you can get lucky.)

As far as the historical work goes: http://bitcoin.sipa.be/powdays-50k.png -- this is how much time it would take, at the current hashrate, to replace the work in the chain.
legendary
Activity: 1792
Merit: 1111
it is 39136 times more difficult than the current target of 00000000000000001713dd000000000000000000000000000000000000000000 and is expected to have one in 19.4 weeks with current hashrate

39136 blocks is nearly 1 year of blocks.  If you had that much hashing power, then you could replace the chain directly.

The underlying problem is that if the hashing power is increasing exponentially, then all previous POW done is equal to 1 doubling time worth of the current hashing power.

It is higher than that, since the difficulty has been quite stable in the past few months.
legendary
Activity: 1232
Merit: 1094
it is 39136 times more difficult than the current target of 00000000000000001713dd000000000000000000000000000000000000000000 and is expected to have one in 19.4 weeks with current hashrate

39136 blocks is nearly 1 year of blocks.  If you had that much hashing power, then you could replace the chain directly.

The underlying problem is that if the hashing power is increasing exponentially, then all previous POW done is equal to 1 doubling time worth of the current hashing power.
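A quick back-of-the-envelope check of that claim (toy numbers, assuming smooth exponential growth):

Code:
import math

T = 60            # difficulty periods per hashrate doubling (made-up)
periods = 2000    # how far back to sum
current = 1.0     # work per period at today's hashrate (normalized)

# Geometric series of past per-period work, newest first:
past_work = sum(current * 2 ** (-i / T) for i in range(periods))

print(past_work / T)     # ~1.44: all past work is on the order of one doubling time
print(1 / math.log(2))   # the continuous-growth limit, 1/ln 2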
legendary
Activity: 1792
Merit: 1111
Well, as soon as I post this I realize this may not be a good idea as SPV clients have to verify the PoW so they need all blockheaders anyway. However, is this idea useful in some other way?

Behold: http://blockstream.com/sidechains.pdf -- read Appendix B, "Efficient SPV proofs"; in it we present a design that allows you to precisely measure the amount of work done (in expectation) on a chain and compute exactly the same value as a user linearly reading the headers, using only log communication.  (The tradeoff is that because there are few samples the measurement has high variance, so an attacker authoring a false proof could get lucky, similar to how you might find a block on your CPU... so even though the expected work to forge a proof is exactly the same, the probability of success in an attack is higher than for an equivalent-length reorg; though there are various ways to further harden it, most further decrease communications efficiency.)

Do you mean one can use a very low value hash to replace the whole blockchain?

EDIT: I made a mistake

So with the current chainwork=69fdb53bd3217bc08921f at block 354922, I can rewrite the whole blockchain if I'm lucky enough to get a hash below 0000000000000000000026a50fe1b18437....... ?
(I get this number by taking 1/69fdb53bd3217bc08921f https://www.wolframalpha.com/input/?i=1%2F0x69fdb53bd3217bc08921f)

it is 39136 times more difficult than the current target of 00000000000000001713dd000000000000000000000000000000000000000000 and is expected to have one in 38.8 weeks with current hashrate
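A rough Python check of those numbers (plain integer arithmetic; it glosses over the +1 in how work per hash is usually counted):

Code:
chainwork      = 0x69fdb53bd3217bc08921f
current_target = 0x00000000000000001713dd000000000000000000000000000000000000000000

# A single hash below about 2^256 / chainwork represents as much expected work as the whole chain.
required = (1 << 256) // chainwork

print(hex(required))                                     # ~20 leading zero nibbles, consistent with the value above
print(current_target // required)                        # ~39136: that many times harder than a current block
print(current_target // required * 10 / (60 * 24 * 7))   # expected weeks at one block per ten minutes -> ~38.8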
legendary
Activity: 1792
Merit: 1111
I don't think this is possible, for the same reason a signature can't sign itself. The coinbase transaction is used to compute the merkle hash, and therefore cannot contain the merkle hash without altering the hash you wished to include.

To be precise, I'm talking about the merkle hash of the hashes of all previous blocks.
sr. member
Activity: 412
Merit: 287
I don't think this is possible, for the same reason a signature can't sign itself. The coinbase transaction is used to compute the merkle hash, and therefore cannot contain the merkle hash without altering the hash you wished to include.
staff
Activity: 4284
Merit: 8808
Well, as soon as I post this I realize this may not be a good idea as SPV clients have to verify the PoW so they need all blockheaders anyway. However, is this idea useful in some other way?

Behold: http://blockstream.com/sidechains.pdf -- read Appendix B, "Efficient SPV proofs"; in it we present a design that allows you to precisely measure the amount of work done (in expectation) on a chain and compute exactly the same value as a user linearly reading the headers, using only log communication.  (The tradeoff is that because there are few samples the measurement has high variance, so an attacker authoring a false proof could get lucky, similar to how you might find a block on your CPU... so even though the expected work to forge a proof is exactly the same, the probability of success in an attack is higher than for an equivalent-length reorg; though there are various ways to further harden it, most further decrease communications efficiency.)
legendary
Activity: 1792
Merit: 1111
Well, as soon as I post this I realize this may not be a good idea as SPV clients have to verify the PoW so they need all blockheaders anyway. However, is this idea useful in some other way?
legendary
Activity: 1792
Merit: 1111
With the ever-growing blockchain, storing and verifying blockheaders may become a problem for extra-light SPV clients. Would it be beneficial to include the merkle root of the blockheaders in the coinbase (as a soft fork)? SPV clients would then only need to store the headers of relevant blocks.
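Here's a minimal sketch of the idea in Python (purely illustrative: where in the coinbase the root would live, and the exact soft-fork rules, are left unspecified; and as the replies above point out, this alone doesn't let a client skip verifying the proof-of-work of the headers):

Code:
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    # Bitcoin-style tree: duplicate the last element on odd-sized levels.
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# A miner would commit to merkle_root(<hashes of all previous headers>) in its
# coinbase; an extra-light client holding a recent block could then check that
# an old header belongs to the chain with only a log-sized branch:
def verify_branch(leaf_hash, branch, root):
    h = leaf_hash
    for sibling, leaf_is_right in branch:   # branch: (sibling_hash, is_right) pairs, leaf upward
        h = dsha256(sibling + h) if leaf_is_right else dsha256(h + sibling)
    return h == root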