So Luke's new proposal is actually a way to soft-fork to 600k
weight units, which is not at all the same as 300kB.
That's truly bizarre as an idea, apologies to Luke. That would mean keeping the base size limit at 1MB, but only being able to use 600 kWU of weight within it. The segwit discount would still reduce fees for segwit tx's, but the incentive to use segwit tx's as a way to boost capacity would disappear if the base size limit (1MB) were higher than the weight limit (600 kWU). I don't see the rationale for that at all.
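For reference, the mismatch between 600k weight units and 300kB falls straight out of the BIP141 weight formula. A minimal sketch (the helper function and the block compositions are illustrative, not taken from any proposal text):

```python
def weight(base_size, total_size):
    """BIP141 block weight in weight units: 3 * base_size + total_size."""
    return 3 * base_size + total_size

# A purely legacy block has no witness data, so total_size == base_size
# and weight == 4 * base_size. A 600 kWU limit therefore admits only
# 150 kB of legacy transactions, not 300 kB.
print(weight(150_000, 150_000))  # 600000 WU

# A segwit block fits more total bytes under the same weight limit:
# 100 kB of base data plus 200 kB of witness data is 300 kB on the wire
# but still only 600 kWU.
print(weight(100_000, 100_000 + 200_000))  # 600000 WU
```

Note also that under such a fork the 1MB base limit could never bind: a block with 1MB of base data already weighs at least 4 MWU, far above 600 kWU, which is why the base limit becomes dead weight in that scheme.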
I'm disappointed with the press circus Luke has contributed to here -- it's not the first time he's set things up perfectly for his words to be taken out of context and then been so surprised at what happened. But he does make useful contributions, and in the fullness of time drawing more attention to the initial sync problem may prove to be one too, even though I disagree with the approach.
It seems like Luke has a fascination for exploring possibilities without much reasoning as to why the ends are desirable. In the case of the actual BIP141 segwit soft fork, that approach was great, as Luke was motivated to figure out a way to implement segwit. Someone with an "it'll never work" attitude would never have done so.
Blockstream has unpublished code that implements an alternative serialization that reduces tx sizes by around 25%. I don't think it would actually improve IBD time except for very fast computers on fairly slow internet connections: initial sync is more UTXO-update bound than bandwidth bound for most users. It might even slow it down, since the compact serialization is slower to decode. Even on a ludicrously fast machine (24-core 3GHz, NVMe storage, syncing from local hosts over 10GbE), sync currently proceeds at only about 50 Mbit/s. I've been nagging them to publish it. Their interest is in using it to increase capacity on the satellite signal, but it's more generally useful.
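The bottleneck argument above can be sketched with back-of-envelope arithmetic: IBD time is bounded by the slower of download and validation, and a smaller serialization only shrinks the download term. All numbers below are illustrative assumptions (hypothetical chain size and rates), and `ibd_hours` is a made-up helper, not anything from Bitcoin Core:

```python
def ibd_hours(chain_gb, link_mbit, validate_mbit, size_factor=1.0):
    """Rough IBD time: the slower of download and validation wins.
    size_factor scales the on-the-wire size (0.75 ~= a 25% smaller
    serialization); validation still processes every transaction."""
    size_bits = chain_gb * 8e9
    download_s = size_bits * size_factor / (link_mbit * 1e6)
    validate_s = size_bits / (validate_mbit * 1e6)
    return max(download_s, validate_s) / 3600

# Fast link (100 Mbit/s): validation at ~50 Mbit/s dominates, so a 25%
# smaller serialization changes nothing.
print(ibd_hours(200, 100, 50, 1.0))   # ~8.9 h
print(ibd_hours(200, 100, 50, 0.75))  # ~8.9 h -- same bottleneck

# Slow link (20 Mbit/s): download dominates, and the saving shows up.
print(ibd_hours(200, 20, 50, 1.0))    # ~22.2 h
print(ibd_hours(200, 20, 50, 0.75))   # ~16.7 h
```

This matches the point in the thread: the 25% reduction only helps users whose link is slower than their validation throughput, and the model above doesn't even account for the slower decoding of the compact format.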
50Mbit/s is high-end validation performance? Interesting.
I expect and hope that all the IBD activity will move into the background. After that happens, the time it takes matters less than the resources it consumes -- and at that point a 25% bandwidth improvement would look pretty good.
Are you referring to the hybrid SPV concept? (SPV synchronisation finishes first, IBD continues in the background). Or new UTXO set tech?