This is 'not a fix' but a bad workaround that limits the system.
Imagine if you applied that logic to the blocksize!
I can. The block size limit does indeed limit the system. Just raising it (a bad workaround) and hoping everything will be alright would be a bad move. That is why the infrastructure around it needs to be improved first; after that, the block size limit can be safely increased.
Your doomsday scaremongering about nefarious people creating 2MB blocks filled with a single transaction of 1.99MB of data won't happen..
for two reasons:
1. As you said yourself, it takes 10+ minutes just to validate the transaction before even working on hashing the block itself, so that's 25 BTC forgone while not even mining. Then, if they start mining to create a block: because it's 1.99MB of data, it will take longer to hash than a 1.025MB or 0.99MB block would, so it won't get solved fastest either.
So, with that said, imagine the block height was 400,000. The nefarious miner would need to start by validating the transaction, taking 10 minutes..
Simultaneously, the other miners validate AND hash out height 400,001.
Ten minutes pass, and now the nefarious miner finally gets around to hashing the block, but because 1.99MB of data is more than 1.025MB,
it takes longer.. so again the honest miners hash out 400,002 while the nefarious miner is still working..
And when the nefarious miner finally gets a solution, its block height will be 400,001 (its merkle data is linked to 400,000) while the rest of the network is at 400,003, making the nefarious block instantly out of sync and rejected.. not due to block size rules, but due to being behind in the chain.
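To make that race concrete, here is a minimal Python sketch of the timeline, driven entirely by the poster's assumptions above (10 minutes of validation, a slower solve time for the oversized block, 10-minute blocks for the rest of the network); every number and name here is illustrative, not measured:

# Timeline of the "nefarious big block" race, using the post's assumptions.
VALIDATION_MIN = 10        # assumed: minutes to validate the 1.99MB transaction
NEFARIOUS_SOLVE_MIN = 25   # assumed: slower solve time for the oversized block
HONEST_BLOCK_MIN = 10      # average block interval for the rest of the network

def race(start_height=400_000):
    ready_at = VALIDATION_MIN + NEFARIOUS_SOLVE_MIN   # minute the bad block appears
    honest_height = start_height + ready_at // HONEST_BLOCK_MIN
    nefarious_height = start_height + 1               # it only builds on 400,000
    print(f"nefarious block claims height {nefarious_height} at minute {ready_at}")
    print(f"honest chain is already at height {honest_height}")
    if honest_height > nefarious_height:
        print("stale: rejected for being behind the chain, not for block size")

race()  # -> nefarious block at 400,001 vs honest chain at 400,003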
2. Miners know that being nefarious risks losing out on 3 blocks' worth of time, a potential 75 BTC loss ($30,000), so they would need to be bribed with $30k just to attempt it, with still no guarantee it would work.
Yes, $30k is a small bribe, but it's not as if they would successfully get a large transaction into the chain on the first attempt, so the $30k payments would mount up..
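Checking that figure (a quick calculation; the ~$400/BTC rate is an assumption inferred from the post's own numbers):

blocks_lost = 3
block_reward_btc = 25
btc_price_usd = 400                          # assumed rate implied by the post
loss_btc = blocks_lost * block_reward_btc    # 75 BTC
loss_usd = loss_btc * btc_price_usd          # $30,000
print(loss_btc, loss_usd)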
And there is nothing stopping the devs from adding other rules (a minimal sketch follows this list) that:
a. reject blocks with fewer than 200 transactions, to force miners to actually put multiple transactions into a block instead of creating near-empty ones. That not only solves the doomsday problem but also helps ensure transactions are not held in the mempool for hours while blocks are being solved without them.
b. reject, and do not relay, any single transaction carrying more than 500KB of data, to teach people how to create transactions properly and more lean.
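Here is that sketch, in Python. These are hypothetical policy checks, not actual Bitcoin Core rules; the function names and thresholds are placeholders taken from the list above:

MIN_TXS_PER_BLOCK = 200      # rule a: hypothetical minimum transaction count
MAX_TX_BYTES = 500_000       # rule b: hypothetical per-transaction size cap

def block_is_acceptable(tx_count):
    # Rule a: reject near-empty blocks so the mempool keeps draining
    return tx_count >= MIN_TXS_PER_BLOCK

def tx_is_relayable(tx_size_bytes):
    # Rule b: refuse to relay a single oversized transaction
    return tx_size_bytes <= MAX_TX_BYTES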
E.g. instead of:
single TX {
1originfunds
2originfunds
3originfunds
4originfunds
5originfunds
6originfunds
7originfunds
8originfunds
9originfunds
10originfunds -> 1destination
}
do:
single TX {
1originfunds
2originfunds -> 1destination
}
single TX {
3originfunds
4originfunds -> 1destination
}
single TX {
5originfunds
6originfunds -> 1destination
}
single TX {
7originfunds
8originfunds -> 1destination
}
single TX {
9originfunds
10originfunds -> 1destination
}
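The same splitting idea as a short Python sketch: chunk the spendable outputs into pairs and build one lean transaction per pair (the dict layout and names are illustrative placeholders, not a real wallet API):

def split_into_small_txs(origin_funds, destination, inputs_per_tx=2):
    # Instead of one huge transaction spending every output at once,
    # build several lean transactions, each spending a couple of outputs.
    txs = []
    for i in range(0, len(origin_funds), inputs_per_tx):
        txs.append({"inputs": origin_funds[i:i + inputs_per_tx],
                    "output": destination})
    return txs

txs = split_into_small_txs([f"{n}originfunds" for n in range(1, 11)], "1destination")
# -> 5 small transactions of 2 inputs each, matching the example above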
Oh, and by the way, segwit doesn't prevent the doomsday 1 tx with 4,000 dust inputs, because although segwit separates the signatures, it still needs to check them.. so it would still be 10 minutes of validation time for nefarious segwit miners, or normal miners, before even getting to the hashing part.
In fact, segwit makes it easier at the block-hashing stage for such a doomsday block to become part of the chain faster, because after the 10-minute validation the actual block data won't be 1.99MB; it would be 1MB-1.2MB since it's not holding the signatures.. so the time to actually hash the block can yield a solution sooner than for non-segwit miners..
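A rough back-of-envelope for that size claim: if, as the post implies, roughly 40-50% of a signature-heavy 1.99MB transaction's bytes are witness data, the stripped block lands in the claimed 1MB-1.2MB range. The witness fraction here is purely an assumption:

full_tx_mb = 1.99
for witness_fraction in (0.40, 0.45, 0.50):   # assumed share of bytes that are signatures
    stripped = full_tx_mb * (1 - witness_fraction)
    print(f"{witness_fraction:.0%} witness -> ~{stripped:.2f}MB of block data")
# prints ~1.19MB, ~1.09MB, ~1.00MB: the 1MB-1.2MB range claimed above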