r0ach,
Vitalik did not respond to my challenge.
I genuinely believe, based on listening to their presentations (which I linked upthread), that they sincerely didn't realize partitions are unbounded due to externalities. Math nerds tend to treat every problem as a nail for their math hammer and don't consider other perspectives, such as the following point I made about the work of one of the very smart mathematicians (Greg Meredith) working on Casper (he is also the author of Synereo's white paper):
4. Most fundamental to Synereo's design: I don't see how Greg's math model for the attention model (Reo & AMPs impacts) can be enforced on all nodes. I admit I didn't dig into the math and research he cites in the 56-page white paper (I do sort of understand it conceptually), but I don't think I need to, because there is no way to enforce that all nodes run the same math model. Additionally, I think the concept of paying with AMPs to force content to move uphill against Reo is the wrong model, because the value of advertising is orders of magnitude smaller than the value users get out of social networks. Thus the only model that makes economic sense is Reo. Removing AMPs of course destroys Synereo's funding and profit model, so it would kill the project. Thus I don't expect them to adopt a corrected design.
We'll see how they react to this, whether they ignore it, offer a "solution", or capitulate honestly.
On throughput,
smart contract block chains can't even exploit the CPU's parallelism! Without partitions you can only use one thread for validation. Basically it isn't scalable for anything. Completely useless.
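To make that single-thread constraint concrete, here is a minimal, hypothetical sketch (the contract storage, transaction names, and values are all invented for illustration): when two transactions read-modify-write the same storage slot, the final state depends on execution order, so a validator cannot safely run them on separate threads and must apply them sequentially.

```python
# Hypothetical illustration: two transactions that both read-modify-write
# the same contract storage slot. The outcome depends on execution order,
# so a validator has to apply them one at a time on a single thread.

def tx_double(state):
    # reads the slot, then writes a value derived from what it read
    state["counter"] = state["counter"] * 2

def tx_add_five(state):
    state["counter"] = state["counter"] + 5

# Order A: double first, then add five -> (10 * 2) + 5 = 25
state_a = {"counter": 10}
tx_double(state_a)
tx_add_five(state_a)

# Order B: add five first, then double -> (10 + 5) * 2 = 30
state_b = {"counter": 10}
tx_add_five(state_b)
tx_double(state_b)

# Divergent results: the two orders do not commute, so parallel
# execution without a conflict check would be nondeterministic.
assert state_a["counter"] == 25
assert state_b["counter"] == 30
```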
I quoted Vitalik upthread, linked to that multichain.com page, and indicated he had a solution in mind for parallelization, but I suppose he is thinking along the lines of what I quoted upthread:
programming languages with formal verification systems backed by state-of-the-art theorem provers
Ah, so Vitalik does realize there is a problem of the sort I am pointing out. But perhaps he has not yet realized that even 100% dependently typed scripting won't fix the problem I am claiming is inherently insoluble.
Edit: thinking about this more while eating, I think parallelization could be exploited within a block if the scripts provably do not depend on each other w.r.t. the data in the prior blocks (e.g. by employing partitioned data stores or 100% dependently typed scripts). The prior blocks are static data, so no cross-partition indeterminism can be introduced by externalities: all scripts submitted for the block compute over inputs from that historic static data, which can't be impacted by running any script for the current block. So the externalities wouldn't apply within the block. This still won't allow partitions to span block boundaries, because externalities across block boundaries will apply in that case, as I explained upthread.
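A rough sketch of that intra-block idea (the script names, the read-only prior state, and the disjoint-write-key convention are my own hypothetical construction, not anyone's actual design): if every script in the block only reads the frozen prior-block state and writes its own disjoint key, the scripts can run on separate threads and the results can be merged deterministically, since no script can observe another's output.

```python
from concurrent.futures import ThreadPoolExecutor
from types import MappingProxyType

# Frozen prior-block state: read-only, so concurrent reads are safe.
prior_state = MappingProxyType({"alice": 100, "bob": 50})

# Each script reads only prior_state and writes a single key of its own,
# so the write sets are disjoint and execution order cannot matter.
def script_pay_alice(prior):
    return ("alice", prior["alice"] + 10)

def script_pay_bob(prior):
    return ("bob", prior["bob"] + 20)

scripts = [script_pay_alice, script_pay_bob]

# Validate all scripts in parallel against the static prior state.
with ThreadPoolExecutor() as pool:
    writes = list(pool.map(lambda s: s(prior_state), scripts))

# Merge deterministically: disjoint write sets make the merge order irrelevant.
new_state = dict(prior_state)
new_state.update(writes)
# new_state == {"alice": 110, "bob": 70}
```

The same reasoning breaks across block boundaries: once the next block's scripts can read keys this block wrote, the "static input" assumption fails, which is the externality problem described above.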
So I guess you could scale this as a centralized service, as r0ach wrote.