
Topic: Limits to accepting a new longest chain to prevent >50% - page 2. (Read 1678 times)

legendary
Activity: 1232
Merit: 1094
Although perhaps not a very likely scenario, such an attack would be a massive confidence destroyer - so I am wondering: would it not be reasonable for a client to reject a new chain if it contains blocks that it hasn't seen that are much older than blocks in the chain it is already building on (or is this already the case)?

Some of the proof of stake rules do that kind of thing.  The checkpoint system is a manual version to a certain extent.

An extreme version would be that you multiply by age.  Block proof of work is (Block POW) * (time since the node first added the block to the chain).

This is already used for tie-breaking between chains.  If you have two chains of equal POW, you go with the one that was extended first.

You could add a maximum bonus (say 60 days old).  This would allow the chain to heal eventually.
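A minimal sketch of that age-weighted scoring, assuming the 60-day cap mentioned above (all names here are illustrative, not actual client code):

```python
def chain_score(blocks, now, max_bonus=60 * 24 * 3600):
    """Age-weighted chain score: each block's proof of work is
    multiplied by how long this node has known the block, capped at
    a 60-day bonus so the network can eventually heal and converge.

    `blocks` is a list of (work, first_seen_timestamp) pairs, with
    timestamps in seconds."""
    score = 0.0
    for work, first_seen in blocks:
        age = min(now - first_seen, max_bonus)
        score += work * max(age, 1.0)  # floor so brand-new blocks still count
    return score
```

Under such a rule, a secretly mined chain scores poorly when first published (age zero on every block), while the capped bonus keeps a long-separated fork from becoming permanently unbeatable.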
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
My scheme, like all such schemes to add state, has some very serious downsides.  For starters, it makes network convergence not automatic.  I have argued a bunch of times that the trade-off could be worthwhile, but I still have to accept that the burden of proof for messing with such a core concept is very high.

Your scheme sounds interesting (and is actually better than my idea, I must admit) - the automatic convergence is something I don't really see as being a good thing at all (as I stated before, if you have been using a fork for 100s or, more importantly, 1000s of blocks, then such convergence, whilst able to occur automatically, would not occur without a hell of a lot of complaints).
kjj
legendary
Activity: 1302
Merit: 1026
Right now, a new chain takes over as long as the embedded difficulty is at least one hash more than the currently known chain.

I have proposed many times that reorgs beyond a trivial depth should require exponential difficulty increases.  For example, we could say that a reorg of more than 10 blocks requires 1.02 times as much work per block past 10, or dD = 1.02^(blks - 10).

This would force any potential history rewriting attacker to move quickly and make their intentions obvious to all, lest they find themselves fighting that exponential.
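That exponential requirement could be sketched as follows (hypothetical names, not a real client rule):

```python
def required_work_multiplier(reorg_depth, threshold=10, base=1.02):
    """Extra proof-of-work factor a competing chain would need to
    carry before it may displace the current chain, per the proposed
    rule dD = 1.02^(blks - 10)."""
    if reorg_depth <= threshold:
        return 1.0  # shallow reorgs behave exactly as today
    return base ** (reorg_depth - threshold)
```

At a reorg depth of 110 blocks the attacker would need roughly 1.02^100, about 7.2 times the honest chain's work, which is why a history rewriter cannot afford to linger.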

My scheme, like all such schemes to add state, has some very serious downsides.  For starters, it makes network convergence not automatic.  I have argued a bunch of times that the trade-off could be worthwhile, but I still have to accept that the burden of proof for messing with such a core concept is very high.
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
Well, consider coinbase rewards - you can spend them after a certain number (is it 100 or 120?) of further blocks are added to your chain, but if you were mining on the losing fork then such blocks are going to be discarded - if you have already *spent* those funds in the meantime then someone is going to be rather unhappy.

If Bitcoin thinks that 100/120 is the *safe* point to allow spending from coinbase then I would be proposing a figure that would be closely related to that (making it no more subjective than the limit already in place).
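For reference, the answer to the question above is 100: Bitcoin's coinbase maturity rule requires 100 further blocks. A minimal version of the check looks something like this (function and parameter names are illustrative):

```python
COINBASE_MATURITY = 100  # Bitcoin's actual rule: 100 blocks, not 120

def coinbase_spendable(coinbase_height, spend_height):
    """A coinbase output may only be spent by a transaction at least
    100 blocks above the block that created it."""
    return spend_height - coinbase_height >= COINBASE_MATURITY
```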
hero member
Activity: 798
Merit: 1000
"Too old" is a subjective measure and will be disagreed upon. Not everyone is going to see everything at the same time. And disqualifying "too old" means you are accepting a permanent fork if there has been a network split for more than the "too old" amount of time.
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
I understand (and respect) the need to be conservative, and I guess in the end maybe it is not much different to having checkpoints every now and again, but it just seems that disallowing a new longest chain (due to it containing blocks that are too old) would be a more elegant (and automatically ongoing) way to prevent such a major reorg from occurring.

Also if I were (well actually I am) in China and had been running on a separate fork for the last year or so then I don't think I'd want to see my fork merged at all (so it may as well stay forked forever).
hero member
Activity: 798
Merit: 1000
(I think if they were big enough you'd still have a hell of a mess so the current system doesn't really help that much).

A hell of a mess that has a dead simple solution - longest chain wins.

Is it the best solution? It certainly is an inelegant one, but it does fix the problem. I have suggested using a bitcoin days destroyed mechanic in addition to longest chain wins under attack-like scenarios, which doesn't even require a soft fork and would only be an incompatibility issue if the problem occurs, but people around here aren't too interested in straying from the satoshicode.
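A rough sketch of such a days-destroyed comparison (purely illustrative; the actual proposal would need consensus-level details this omits):

```python
def bitcoin_days_destroyed(spent_outputs, now):
    """'Bitcoin days destroyed' for the outputs spent in a chain
    segment: each spent output contributes value * days since it
    last moved.  Under attack-like reorgs, the chain destroying
    more coin-days reflects broader economic activity and could be
    preferred over a longer but secretly mined chain.

    `spent_outputs` is a list of (value_btc, last_moved_timestamp)."""
    DAY = 86400.0
    return sum(value * (now - last_moved) / DAY
               for value, last_moved in spent_outputs)
```

A secretly mined attack chain mostly moves the attacker's own freshly mined coins, so it destroys far fewer coin-days than the public chain it tries to displace.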
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
While this could be an attack of hashpower, it could also be an attack (or mishap) on the internet infrastructure that has caused a separation of mining powers for some time. When they rejoin, your solution would cause a fork that would have to be resolved by the users instead of by a computer.

In the scenario above the fork has already happened *before* trying to apply my solution (i.e. as soon as the miners became separated you have forked) - but yes, with my solution those forks could not be rejoined (I think if they were big enough you'd still have a hell of a mess, so the current system doesn't really help that much).
hero member
Activity: 798
Merit: 1000
Although perhaps not a very likely scenario, such an attack would be a massive confidence destroyer - so I am wondering: would it not be reasonable for a client to reject a new chain if it contains blocks that it hasn't seen that are much older than blocks in the chain it is already building on (or is this already the case)?


It could but it doesn't because "there can be only one". While this could be an attack of hashpower, it could also be an attack (or mishap) on the internet infrastructure that has caused a separation of mining powers for some time. When they rejoin, your solution would cause a fork that would have to be resolved by the users instead of by a computer.
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
One of the concerns that has been raised several times about a >50% attack is that blocks might not be immediately published but instead kept secret whilst the attacker continues to mine ahead, finally publishing all of the blocks at once to form a new longer chain that invalidates all transactions that occurred after it was started (which could be all the way back to the last checkpoint).

Although perhaps not a very likely scenario, such an attack would be a massive confidence destroyer - so I am wondering: would it not be reasonable for a client to reject a new chain if it contains blocks that it hasn't seen that are much older than blocks in the chain it is already building on (or is this already the case)?