I tried a few setups; one example:
128 GB RAM, >1 TB SSD (5-7 GB/s read / 5-7 GB/s write), 32-core Ryzen, 100 Mbit / 1 Gbit Ethernet.
dbcache set to 64/48/32/16/8/2 GB
par set to 30/24/16/8/0
txindex=0
maxorphantx set to 4096/1024/256/0
maxmempool set to 2048/1024/256
blockreconstructionextratxn set to 16777216/1048576/262144/4096/512
maxreceivebuffer set to 16384/4096/1024
maxsendbuffer set to 8192/1024/512
checkblocks=3
checklevel=1
checkmempool set to 10000/100/0
checkpoints set to 0/1
maxsigcachesize set to 2048/512/32
dblogsize set to 256/96/64
Started the wallet for the first time: no data, no blocks, etc.
The first blocks downloaded very fast; network download speed and disk activity are not in question. Then, after downloading and writing some blk*.dat files, Core started checking blocks, hashes, etc., and the rev*.dat files were created. The higher the block height, the slower the checking went (again, the network and disk are not the bottleneck).
At ~98% completion something went wrong and the wallet had to be closed. It shut down gracefully; debug.log contains lines saying everything finished smoothly. (Anything can happen - power loss, a crash, running out of space - it's not important now.)
Restarted bitcoin-qt and... it started to "sync headers" from ZERO, at a speed of roughly 0.02%/h. WTF?
Restarted again - the same situation.
OK, let's try -reindex. But before that, I copied some blk* and rev* files to another SSD.
bitcoin-qt -reindex -> it opened and started to reindex from the beginning. Watching the blk* and rev* files: the blk* files get no write activity, only reads (extremely fast); then Core starts recalculating something and rewriting the rev* files. Only one CPU core is used! After waiting a long time I compared the new rev* files to the backed-up ones - they are identical, so there was no need to regenerate/rewrite them... but... OK.
Something went wrong again near the end (not important now what and why).
Restarted bitcoin-qt and the situation repeated - 0.02%/h.
-reindex again and all the same.
filesize: blk03200.dat .. blk03249.dat - 6'681'054'357 bytes
download time: 737 s at 100 Mbit (8.64 MB/s) and 90 s at 1 Gbit (70.81 MB/s)
read-from-disk time: 0.91-0.93 s (~6.9 GB/s)
filesize: rev03200.dat .. rev03249.dat - 966'121'582 bytes
(re)indexing by Bitcoin Core (only one core of 32 is used, no matter what -par is set to): 617-628 s (1.46 MB/s)
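Just to show where those MB/s figures come from (a quick sanity check, nothing more), here is the arithmetic. I'm assuming the 1.46 MB/s reindex figure is the rev* bytes written divided by the reindex time; measured against the blk* bytes read it would be about 10 MB/s:

```python
# Sanity check of the throughput figures above (1 MB = 1048576 bytes).
# Assumption (mine): the 1.46 MB/s reindex figure is rev* bytes written
# divided by the reindex wall time.
BLK_BYTES = 6_681_054_357   # blk03200.dat .. blk03249.dat
REV_BYTES = 966_121_582     # rev03200.dat .. rev03249.dat
MB = 1024 * 1024

def mbps(nbytes: float, seconds: float) -> float:
    """Throughput in MB/s."""
    return nbytes / MB / seconds

print(f"download @100 Mbit: {mbps(BLK_BYTES, 737):7.2f} MB/s")   # ~8.6
print(f"download @1 Gbit:   {mbps(BLK_BYTES, 90):7.2f} MB/s")    # ~70.8
print(f"disk read:          {mbps(BLK_BYTES, 0.92):7.2f} MB/s")  # ~6900
print(f"reindex vs blk*:    {mbps(BLK_BYTES, 628):7.2f} MB/s")   # ~10.1
print(f"reindex vs rev*:    {mbps(REV_BYTES, 628):7.2f} MB/s")   # ~1.47
```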
So the big questions are:
1) Why is (re)indexing so slow, and why does it use only ONE CPU core instead of all of them (or at least as many as -par asks for)? If the constraint is that a block can only be checked after the previous one, there is a simple approach: check a batch of sequential blocks at once in different threads, and only once the whole batch and its chaining are confirmed move on to the next batch - but right now it appears to process just one block at a time. (A rough sketch of what I mean follows after question 3.)
2) Why doesn't the wallet simply compare its recalculated data with what was previously written to the rev* files, instead of rewriting them and straining/wearing out the disks, especially SSDs? (FYI, the blk* files can be placed on a mid-range HDD with 120-150 MB/s throughput without any loss, simply because they are read sequentially.) (A compare-before-write sketch also follows after question 3.)
3) When someone has to resync, does Bitcoin Core really download the blocks from the network from scratch? Why? That seems so wrong to me. It would be much better to read the blocks from disk, compute hashes of them (or of parts of them), compare those hashes with other nodes, and only redownload when the file size and the hashes don't match. I suspect this part is hard to code, because (as I understand it) block files can differ from machine to machine due to a different layout of blocks inside them - but why? Couldn't they be organised the same way across the whole network? If that could be done, the whole network would breathe much easier whenever someone resyncs, because only the hashes of the block files would need to be transferred as long as the files are intact.
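To make question 1 concrete, here is a toy sketch of the batching idea - it is not Bitcoin Core code, and `Block` and `check_block_standalone` are stand-ins I made up. The point is only the structure: run the expensive context-free checks for a window of consecutive blocks in parallel, then do the cheap sequential parent-hash check over the whole window before moving on.

```python
# Toy illustration of the idea in question 1 (not Bitcoin Core code).
# A real implementation would use native threads; this only shows the shape.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Block:            # hypothetical, for the sketch only
    height: int
    hash: str
    prev_hash: str
    data: bytes

def check_block_standalone(block: Block) -> bool:
    """Expensive checks that need no chain context (PoW, merkle root, ...)."""
    return len(block.data) > 0  # placeholder for the real validation work

def validate_window(blocks: list[Block], workers: int = 32) -> bool:
    # 1) context-free checks, fanned out across threads
    with ThreadPoolExecutor(max_workers=workers) as pool:
        if not all(pool.map(check_block_standalone, blocks)):
            return False
    # 2) cheap sequential linkage check over the already-verified window
    return all(b.prev_hash == a.hash for a, b in zip(blocks, blocks[1:]))
```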
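And a minimal sketch of the compare-before-write idea from question 2, assuming the freshly recomputed rev data is available in memory; the function and file names are hypothetical, the point is just "hash first, rewrite only on mismatch":

```python
# Minimal compare-before-write sketch (not Bitcoin Core code): skip
# rewriting a rev*.dat file when the recomputed bytes are identical,
# to avoid needless SSD wear.
import hashlib
from pathlib import Path

def write_if_changed(path: Path, new_bytes: bytes) -> bool:
    """Rewrite `path` only if its content differs; return True if written."""
    if path.exists():
        old_digest = hashlib.sha256(path.read_bytes()).digest()
        new_digest = hashlib.sha256(new_bytes).digest()
        if old_digest == new_digest:
            return False  # identical: leave the file (and the SSD) alone
    path.write_bytes(new_bytes)
    return True

# usage (hypothetical): write_if_changed(Path("rev03200.dat"), recomputed_rev_bytes)
```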
PS: I've found that I can use assumevalid=<blockhash> to skip days of work. That helps, but it doesn't address the underlying problem (previously, with something going wrong every time, I got roughly one attempt per day).