Come-from-Beyond has been very cordial to me, so I don't want to defecate on his effort, but I have my doubts about viability for the following reason. The ramifications of this probably need to be discussed more, but it seems to me that requiring users to view all the transactions before they can send one is the antithesis of instant microtransactions, and it also places a burden on who can send a transaction: you need a certain minimum level of connectivity and bandwidth just to send. It is an interesting concept, and maybe a DAG can be integrated into cryptocurrency in other ways. Maybe he needs to figure out how to eliminate this apparent weakness with some paradigm shift. Note it appears to me that Lightning Networks is in some facets (not all) similar to a DAG concept. Perhaps thinking about those two different paradigms will lead to some epiphany.
Hey cool name Iota (IoT)! Good one!
It's not necessary to see all the transactions before sending a payment; one could have a snapshot a few days old and still get their transaction included in the tangle. This is an advantage of the tangle over the blockchain: the consistency requirement is much lower than in Bitcoin. The Lightning Networks approach (more precisely, its improvement by Christian Decker and Roger Wattenhofer in "A Fast and Scalable Payment Network with Bitcoin Duplex Micropayment Channels") is already utilized in Iota.
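To make the attachment model concrete, here is a minimal sketch (my own toy code, not the Iota implementation) of tangle-style attachment: each new transaction approves two existing tips, so a sender only needs a reasonably fresh view of the current tips rather than the full history:

```python
import random

# Toy tangle sketch (illustrative assumptions, not the Iota protocol):
# each new transaction approves two "tips" (transactions not yet
# approved by anyone else).

class Tx:
    def __init__(self, txid, parents):
        self.txid = txid
        self.parents = parents  # txids of the two transactions this tx approves

def tips_of(dag):
    """Transactions not yet approved by any other transaction."""
    approved = {p for tx in dag for p in tx.parents}
    return [tx for tx in dag if tx.txid not in approved]

random.seed(1)
genesis = Tx("genesis", [])
dag = [genesis]
for i in range(10):
    tips = tips_of(dag)
    # uniform random tip selection here; the tangle paper uses a weighted walk
    parents = [random.choice(tips).txid for _ in range(2)]
    dag.append(Tx(f"tx{i}", parents))

print("tips remaining after 10 attachments:", len(tips_of(dag)))
```

The point of the sketch is only that attachment needs the tip set, which a slightly stale snapshot still approximates, rather than the whole DAG.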
I haven't dug into the core issues of the breadth of the tree and its implications for convergence versus divergence, double-spends, and other metrics. So I am limited in the insights I can offer until I do.
I thought you replied to me up thread that the payer needs to accumulate a significant portion of the breadth of the tree (even historically) in order to evaluate where to optimally insert his/her node in the DAG. Thus it seems to me that each payer has to see some N other payers, so this bandwidth and computation load scales as N × N across payers, versus a normal PoW system where the payer's signature is autonomous from the network. The latter is the end-to-end principle, because the intermediaries between the originator of a transaction and its destination are incapable of harm, substitutable, and fungible. Put more abstractly, the intermediaries are idempotent, referentially transparent, transitive, and commutative.
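The scaling claim above can be put in back-of-envelope form (the per-transaction size and sampling fraction below are made-up illustrative numbers, not measurements):

```python
# Back-of-envelope sketch of the scaling argument (assumed numbers).
TX_SIZE_BYTES = 500  # assumed size of one transaction on the wire

def payer_views_others(n_payers, sample_fraction=1.0):
    """If each payer must download some fraction of the other payers'
    transactions before sending, total network load grows ~ N * (N * f)."""
    per_payer = (n_payers - 1) * sample_fraction * TX_SIZE_BYTES
    return n_payers * per_payer  # total bytes moved network-wide

def end_to_end_pow(n_payers):
    """In a plain PoW model the payer just broadcasts one signed tx,
    so the payer-side load grows ~ N."""
    return n_payers * TX_SIZE_BYTES

for n in (100, 10_000):
    print(n, payer_views_others(n), end_to_end_pow(n))
```

Even if the sampling fraction is small, the ratio between the two totals keeps growing with N, which is the asymmetry I'm pointing at.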
I understand conceptually that the global consistency requirement is lower than in a more deterministic traditional PoW or even PoS system (although those diverge under reorganizations, and diverge totally under a 51% attack), but doesn't that come with the tradeoff of a risk that the tree diverges in its *final* conclusion about a double-spend, i.e. two reasonably balanced branches, each containing one side of the double-spend?
I guess what I am after, in terms of characterizing the tradeoffs, is some quantification or conceptualization of the frequency/probability (or characteristic principles) of divergence, as we have succinctly for PoW (selfish mining, 51% attack, orphaned chains, etc.): something expressed in the English language and not requiring differential equation models to comprehend.
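For what it's worth, here is a toy Monte Carlo (my own illustrative model, not drawn from the Iota analysis) of the divergence question: two conflicting branches each carry one side of a double-spend, and each new transaction joins a branch with a probability exponentially biased toward the heavier branch. With no bias the tie can linger; with even modest bias one branch tends to snowball and the conflict resolves:

```python
import math
import random

# Toy model of two conflicting sub-branches (illustrative assumptions):
# weights a, b start tied; each new transaction joins branch A with a
# logistic probability biased by `alpha` toward the heavier branch.

def simulate(alpha, steps=200, rng=None):
    rng = rng or random.Random(0)
    a, b = 1, 1  # starting weights of the two conflicting branches
    for _ in range(steps):
        p_a = 1 / (1 + math.exp(-alpha * (a - b)))  # alpha=0 -> coin flip
        if rng.random() < p_a:
            a += 1
        else:
            b += 1
    return a, b

def resolution(a, b):
    """Fraction of total weight on the heavier branch; 0.5 = unresolved tie."""
    return max(a, b) / (a + b)

rng = random.Random(42)
print("alpha=0.0:", resolution(*simulate(alpha=0.0, rng=rng)))
print("alpha=0.5:", resolution(*simulate(alpha=0.5, rng=rng)))
```

The English-language takeaway of the toy model: the bias strength plays the role that cumulative difficulty plays in PoW, and how fast a balanced double-spend resolves depends on how strongly attachment favors the heavier branch.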