@Ali
The amount of data we are now talking about is quite significant. To keep track of 1 billion outputs (1024*1024*1024) as a bitfield, in a centralised fashion, requires 128MB. And that is for a single output, and it does not include all the proofs (then it gets really big)!
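For reference, the arithmetic behind that 128MB figure, as a quick Python sketch:

    # One spent/unspent bit per nested output, 1024^3 outputs in the triple-decker case.
    outputs = 1024 ** 3                  # 1,073,741,824 outputs
    size_bytes = outputs // 8            # 134,217,728 bytes
    print(size_bytes / (1024 ** 2))      # 128.0 MiB -- per covenant output, proofs excluded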
By using this scripting technique (with the dreaded covenants.. I like, you no like), instead of the miners storing it, all the data is kept by the users. And each user will only store the tiny amount relevant to themselves (no different to storing a public key - it's not even secure data.. just proofs).
A hypothetical 1 billion nested outputs txn can be handled by making a time/space trade-off. Nodes can implement it without storing a single extra byte, by querying the blockchain for the history, or they can maintain an index to make it more efficient.
There is no way to "distribute" data between users: eventually every node has to confirm that a single nested output is not double-spent, and for that they need the data, a copy of it, to reach consensus. That is how blockchains work, remember? On each spending attempt, nodes have to verify that it has not been attempted before, and they need to either query the history or check an exclusively maintained data structure for speed. A bitfield, definitely.
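A minimal sketch of the two options being described, "scan the history" versus "keep an index" (the chain/transaction layout here is purely illustrative, not taken from any actual proposal):

    # Two ways a node can answer "was nested output i of this covenant spent already?"

    def spent_by_scanning(chain, parent_txid, leaf_index):
        """Time for space: re-derive the answer from history, storing nothing extra."""
        for tx in chain:                      # hypothetical per-tx dicts
            for inp in tx["inputs"]:
                if inp.get("parent_txid") == parent_txid and inp.get("leaf_index") == leaf_index:
                    return True
        return False

    def spent_by_index(bitfields, parent_txid, leaf_index):
        """Space for time: keep a 1024-bit field (128 bytes) per covenant output."""
        return bool(bitfields.get(parent_txid, 0) >> leaf_index & 1)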
I'm going to have to re-raise your '..whole covenant thing and txn commitment to bitfields which is absolutely unnecessary..' and say 'Come on then - what's your way that is simpler, cleaner and more efficient than this way?'
In my proposal everything is straightforward: users maintain their proofs of leaves and supply them, along with the other supplementary data needed to fulfil the nested output script (pubkeys, signatures, scripts, etc.); nodes maintain 1024-bit bitfields internally and eventually tick the respective nested output as spent.
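Roughly what that division of labour could look like, as a sketch (the SHA-256 tree, the proof format and the 1024-leaf width are assumptions for illustration only):

    import hashlib

    def sha256(data):
        return hashlib.sha256(data).digest()

    def verify_leaf_proof(leaf, index, proof, root):
        """User side: supply the leaf plus its Merkle path (10 siblings for 1024 leaves)."""
        node = sha256(leaf)
        for sibling in proof:
            node = sha256(sibling + node) if index & 1 else sha256(node + sibling)
            index >>= 1
        return node == root

    def tick_spent(bitfield, index):
        """Node side: flip one bit in the 1024-bit field once the proof checks out."""
        if bitfield >> index & 1:
            raise ValueError("nested output already spent")
        return bitfield | (1 << index)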
Only the first 'spend' of the original transaction would need to post the next bitfield transaction (not all of them - a nice optimisation). And then, as usual, 1024 normal transactions can be made.
..
Go large! .. with a Triple-Decker.. and we get a billion outputs.. from a single txn output..
.. (not sure what for..)
This 'optimization" you are talking about could be more "optimized" by not requiring the bitfield at all! Because it is not the user's problem.
It would be an anomaly to have weird txns carrying such scripts that nodes have to update incrementally (in your optimized version); it is cleaner and more consistent to have nodes keep track of nested outputs the same way they already do for normal outputs.