only 13 or 14 blocks.
O(log n) can be anything of the form a + b * log n for constants a and b.
In flyclient's case, a could be over 1000 and b over 10.
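To make those constants concrete, here's a quick back-of-the-envelope calculation. The a = 1000 and b = 10 values are just the illustrative upper bounds from above, not figures from the paper:

```python
import math

n = 650_000          # rough Bitcoin chain height at the time of writing
a, b = 1000, 10      # the illustrative upper bounds from above

print(math.log(n))          # reading "O(log n)" literally: ~13.4 blocks
print(a + b * math.log(n))  # same asymptotics, very different number: ~1134
```

Same big-O class, two orders of magnitude apart in practice.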
It also requires each Bitcoin header to commit to a Merkle tree of all previous headers (the paper actually uses a Merkle Mountain Range),
and perhaps to the cumulative difficulty as well.
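To make the commitment concrete, here's a toy sketch using a plain Merkle root over stand-in header hashes. The real construction is an MMR, which additionally supports cheap appends, and the helper names (sha256d, merkle_root) are my own:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Plain Merkle root; flyclient actually uses an MMR."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:                  # Bitcoin-style odd-leaf duplication
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Each new header would carry the root over every header before it.
prev_header_hashes = [sha256d(str(h).encode()) for h in range(100)]  # stand-ins
print(merkle_root(prev_header_hashes).hex())
```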
This makes me feel relieved, as I had been reading the run-time bound literally.
Still, there's another hurdle to overcome, maybe even two. Flyclient uses a probability distribution that takes a block's difficulty as input and returns the chance that the block gets sampled for the proof. First off, Bitcoin's mining difficulty is ever-increasing, while this distribution takes difficulties normalized to a scale from 0 to 1. That means every time the mining difficulty breaks a new high, each and every full node has to recompute the difficulty "percentage" (a terrible name, but let's just call it that) for all of the blocks. I imagine this won't take as much time as transaction verification, but you're still processing 650K+ blocks, so that's going to take a few hours on average.
If the difficulty breaks a new high during that period, the recomputation has to be aborted and started all over again.
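Here's a minimal sketch of the recomputation I mean, with made-up difficulties and my own function name (normalized_positions). The point is that every block's fraction depends on the total, so any growth at the tip invalidates all of them:

```python
def normalized_positions(difficulties: list[float]) -> list[float]:
    """Map each block to its cumulative-difficulty fraction in [0, 1].
    Every value depends on the total, so growth at the tip forces a
    recomputation over the whole chain."""
    total = sum(difficulties)
    acc, out = 0.0, []
    for d in difficulties:
        acc += d
        out.append(acc / total)
    return out

# Toy chain: difficulty keeps rising, as on Bitcoin.
chain = [1.0] * 5 + [2.0] * 5
xs = normalized_positions(chain)

# A new, harder block arrives: all 10 previous fractions shift.
chain.append(4.0)
xs_new = normalized_positions(chain)
print([round(a - b, 3) for a, b in zip(xs_new, xs)])  # every entry changed
```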
The second hurdle is choosing the probability distribution itself. Even if there are no major hot spots in the blockchain that may need extra verification, such as the blocks around the SegWit and Taproot activations (which, given the user-developer drama around them, are theoretically the first targets for rogues), the difficulty, as I mentioned above, is ever-changing. Choosing a new distribution that mostly selects the same blocks the previous one did will be challenging, because you keep having to compute it again.
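To see the drift, here's a hedged sketch that samples block indices from a FlyClient-style tail-heavy distribution (g(x) proportional to 1/(1 - x), truncated at 1 - delta, as I read the paper) at two chain heights with the same randomness. The parameter values and function names are made up; the point is that the normalization shift alone moves nearly every sampled index:

```python
import random

def sample_positions(k: int, delta: float, rng: random.Random) -> list[float]:
    """Inverse-CDF sampling of g(x) = 1 / ((x - 1) * ln(delta))
    on [0, 1 - delta]: blocks near the tip are weighted far more heavily."""
    return [1 - delta ** rng.random() for _ in range(k)]

def selected_blocks(height: int, k: int, delta: float, seed: int) -> set[int]:
    rng = random.Random(seed)
    return {int(x * height) for x in sample_positions(k, delta, rng)}

# Same seed, same k: only the height (i.e. the normalization) changes.
before = selected_blocks(650_000, 100, 2 ** -20, seed=7)
after = selected_blocks(660_000, 100, 2 ** -20, seed=7)
print(f"overlap: {len(before & after)} of {len(before)} blocks")
```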
This also assumes there's a 1-to-1 mapping between difficulties and blocks, which shouldn't be a problem except maybe for the earliest blocks, which all share the minimum difficulty of 1.
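One escape hatch is to key blocks by cumulative difficulty instead, since cumulative work is strictly increasing even when per-block difficulty repeats. A toy check, with made-up numbers:

```python
# Per-block difficulty repeats (the earliest blocks all share difficulty 1),
# so it can't index blocks uniquely; cumulative difficulty can.
difficulties = [1, 1, 1, 1, 2, 2, 3]

per_block = {d: i for i, d in enumerate(difficulties)}
print(len(per_block), "distinct difficulties for", len(difficulties), "blocks")

cumulative, acc = [], 0
for d in difficulties:
    acc += d
    cumulative.append(acc)
assert len(set(cumulative)) == len(difficulties)  # injective: safe as a key
```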