Obviously, the criterion is:
Any fork, whether it is the longest chain or not, may not exceed a computable length threshold beyond which it would compromise the block generation rate imposed by the other consensus rules.
Taken strictly, that criterion is violated by the existing chain. In other words, if it had been imposed previously, the network would already have faulted. Block 0 has timestamp 2009-01-03 18:15:05; that was 3501 days ago, which at the target rate of 144 blocks per day implies ~504144 blocks. Yet we are at height 535244.
According to my suggested threshold of 60480 it doesn't fail: the chain is only about 31100 blocks ahead of that schedule, well inside the margin.
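For concreteness, a minimal sketch of that arithmetic (purely illustrative; 144 blocks per day follows from the 10-minute target spacing, and the heights are the ones quoted above):

```python
GENESIS_DAYS_AGO = 3501        # days since block 0 (timestamp 2009-01-03 18:15:05)
BLOCKS_PER_DAY = 144           # 24 h * 60 min / 10-minute target spacing
THRESHOLD = 60480              # the proposed margin (30 retarget periods of 2016 blocks)

implied_height = GENESIS_DAYS_AGO * BLOCKS_PER_DAY   # 504144
actual_height = 535244
excess = actual_height - implied_height              # 31100

print(excess > 0)            # True: the strict criterion is already violated
print(excess <= THRESHOLD)   # True: but the chain is still inside the proposed margin
```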
Any increase in hashrate will result in that condition, implemented strictly, being violated, because the difficulty adjustment is retrospective and only proportional (it includes no integral component). If the condition is violated it can cause spontaneous consensus failure by forcing block times up against the maximum time skew and breaking the network into islands based on small differences in their local clocks. You can add margin so that it hasn't failed _yet_ on the current chain, but what reason do you have to expect that it wouldn't fail at some point in the future for any given choice of margin? There is no amount by which the chain is guaranteed, by construction, not to get ahead. Quite the opposite: under the simple assumption that hashrate will increase over time, the chain is guaranteed to get ahead.
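To illustrate the direction of that effect, here is a toy simulation under deliberately simplified assumptions (ideal expected block times, a pure 2016-block proportional retarget, no clamping or off-by-one); the point is the trend, not exact magnitudes:

```python
def blocks_ahead(growth_per_period: float, periods: int) -> float:
    """Toy model of a purely retrospective, proportional retarget: each
    2016-block period the difficulty is tuned to the *previous* period's
    hashrate, so if hashrate keeps growing every period finishes early and
    the chain pulls further ahead of the 10-minute schedule."""
    BLOCKS, TARGET = 2016, 600.0        # retarget interval, seconds per block
    hashrate, difficulty = 1.0, 1.0     # arbitrary units
    elapsed, height = 0.0, 0
    for _ in range(periods):
        hashrate *= 1.0 + growth_per_period
        # Mine one period at the new hashrate against a difficulty that was
        # set for the old, lower hashrate:
        elapsed += BLOCKS * TARGET * difficulty / hashrate
        height += BLOCKS
        difficulty = hashrate           # retarget: aim for 600 s blocks at this rate
    return height - elapsed / TARGET    # blocks ahead of the wall-clock schedule

# 5% hashrate growth per retarget period, sustained for ~260 periods (roughly a decade):
print(round(blocks_ahead(0.05, 260)))   # ~25000 blocks ahead of schedule
# Sustain it twice as long and the excess roughly doubles; no fixed margin holds forever.
print(round(blocks_ahead(0.05, 520)))   # ~50000
```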
In the last decade the hashrate went from zero to roughly 45,000 peta hashes per second, passing through GPUs, FPGAs and ASICs one after another, and the chain got only about 31,000 blocks ahead of schedule. In the next decade, with that much installed hash power as inertia, we shouldn't expect even half of such a deviation; and more importantly, it won't be a jump over a few days or weeks, it will be incremental and observable, and a simple increase in the threshold would be enough to resolve the problems you are worried about.
So we have a solid definition, provably safe
As far as I can see, your posts contain no such proof. In fact, the existing chain being well ahead of the metric now is concrete proof that some arbitrary safety margin values are not safe. What reason do you have to believe another arbitrary value is safe given that some are provably not, beyond "this setting hasn't failed with the historic chain yet"?
It will take me some time to prepare a formal mathematical proof that there is always a threshold that is safe for a large enough duration of time.
Had your same line of reasoning been followed in 2012, setting a margin of 1000 blocks (or whatever was twice the adequate value at that point in time), then sometime in the next year, after the hashrate grew tremendously, the network would have spontaneously broken into multiple chains.
The suggested threshold of 60480 is not calculated as a fraction of the current block height; it is the number of blocks in 30 consecutive difficulty-adjustment periods (30 × 2016). So in 2012 I would have suggested the same threshold.
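A minimal sketch of the proposed check, assuming nothing beyond the genesis timestamp and the 10-minute target spacing (illustrative names only, not any existing implementation's API):

```python
from datetime import datetime, timezone

GENESIS_TIME = datetime(2009, 1, 3, 18, 15, 5, tzinfo=timezone.utc)
RETARGET_INTERVAL = 2016                # blocks per difficulty adjustment
THRESHOLD = 30 * RETARGET_INTERVAL      # 60480, the same value in 2012 as today

def expected_height(now: datetime) -> int:
    """Height implied by wall-clock time at the 10-minute target spacing."""
    return int((now - GENESIS_TIME).total_seconds() // 600)

def height_plausible(claimed_height: int, now: datetime) -> bool:
    """The proposed cap: no fork tip may run more than 30 retarget periods
    ahead of the schedule implied by elapsed time."""
    return claimed_height <= expected_height(now) + THRESHOLD
```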
and helpful in mitigating the DoS vulnerability under consideration here
What DoS vulnerability? There isn't one, as far as any measurements have thus far determined. Changing unrelated consensus rules, with nothing better than a "doesn't fail yet on the existing chain" level of proof, to fix a "maybe service could be denied here" node behavior seems, frankly, absurd. Additionally, it seems lazy to the point of reckless indifference: "I can't be bothered to implement some P2P handling within the performance envelope I desire, though it clearly can be done through purely local changes, so I'm going to insist that the core consensus algorithm be changed."
It is too much!
No, it is not a lazy approach, whatever. It is about stopping clients from pretending to be honestly and innocently following a bogus chain when they are actually querying a hypothetical chain a billion blocks high, while we expect just 500 to 600 thousand. That simple.
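As a rough illustration only, reusing the hypothetical height_plausible() sketch above (this is not Bitcoin Core's actual P2P code, and the handler and field names are made up):

```python
from datetime import datetime, timezone

def on_version(peer, advertised_height: int) -> None:
    """Hypothetical handler: a peer advertising a chain around a billion
    blocks high, when only ~500-600 thousand are plausible, is dropped
    instead of being synced from."""
    if not height_plausible(advertised_height, datetime.now(timezone.utc)):
        peer.disconnect()
```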
In what universe does it make sense to address a simple implementation programming question through speculative consensus rules with arbitrary parameters that would provably have failed already for some choices of those parameters?
In my universe, every rule has a criterion to be met, and it is a loose definition to say:
the longer chain is always the better chain. I'll set a criterion on this rule:
as long as its height is not obviously bogus.
... some of us don't think of Bitcoin as a pet science project, understand that it's difficult to reason about the full extent of changes, and actually care whether it survives. If you share these positions you should reconsider your arguments, because I don't think they make sense in light of them.
I don't agree. I'll provide proof that there exists a safe threshold for a long enough period that the block chain won't break through it, and that the deviation is getting smaller over time because of the inertia of the installed hash power.
I suppose that with such a proof it is OK to put a cap on the LCR (longest chain rule), right?