I also wanted to clear up a number of misconceptions about distribution and scaling.
I see many people talking about scaling on this forum, but very few seem to really understand what scaling means.
What is commonly meant by "scaling" is "linear scaling", which means that a process's speed "scales linearly" with the number of processors it can run on. If a process scales linearly and it takes one second on one CPU, it will take 0.5 seconds on two CPUs, 0.1 seconds on ten CPUs, and so on: the processing speed scales linearly with the number of processors.
Typically, the kind of processing that scales linearly is processing over a large number of elements that are independent from each other, such as rays in a raytracer, or vertices and polygons, where the same operation has to be applied to each element. The elements can be distributed across different cores and processed in parallel, which effectively makes the processing speed scale linearly with the number of processors.
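A minimal sketch of that embarrassingly parallel case, assuming Node.js and its worker_threads module (the per-element work, the worker count and the file name are just placeholders): because the elements are independent, splitting them across N workers divides the wall-clock time by roughly N.

```js
// scaling-sketch.js -- split independent elements across N workers so the
// elapsed time drops roughly as total_work / N (assumes Node.js worker_threads)
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

// stand-in for per-ray / per-vertex work: each element is processed
// independently of all the others, with no shared state
function processElement(x) {
  let acc = 0;
  for (let i = 0; i < 1e6; i++) acc += Math.sin(x + i);
  return acc;
}

if (isMainThread) {
  const elements = Array.from({ length: 1000 }, (_, i) => i);
  const nWorkers = 4; // try 1, 2, 4 and compare the elapsed time
  const chunk = Math.ceil(elements.length / nWorkers);
  const start = Date.now();
  let finished = 0;

  for (let w = 0; w < nWorkers; w++) {
    const slice = elements.slice(w * chunk, (w + 1) * chunk);
    const worker = new Worker(__filename, { workerData: slice });
    worker.on('message', () => {
      if (++finished === nWorkers) {
        console.log(`${nWorkers} workers: ${Date.now() - start} ms`);
      }
    });
  }
} else {
  // worker side: process the slice it was given and send the results back
  parentPort.postMessage(workerData.map(processElement));
}
```

Running it with nWorkers set to 1, 2 and 4 is essentially the "linear scaling" case described above; the point of the next paragraphs is that most application logic does not look like this.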
But 95% of application logic doesn't scale linearly, because one task needs the result of another task as input first, or because its performance doesn't depend that much on processing power applied to a large number of independent elements.
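The usual way to quantify that limit is Amdahl's law (a standard formula, not something specific to this discussion): if only a fraction p of the work can run in parallel, the speedup on N processors is bounded no matter how many processors you add.

```latex
\mathrm{speedup}(N) = \frac{1}{(1 - p) + \frac{p}{N}},
\qquad
\lim_{N \to \infty} \mathrm{speedup}(N) = \frac{1}{1 - p}
```

With p = 0.5, even an unlimited number of CPUs gives at most a 2x speedup, which is why a chain of tasks where each one waits for the previous one's result gains very little from more processors.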
Blockchain tends naturally toward distributed applications, typically web wallets and exchanges, and most uses are not made entirely through a local application with all the data and components on the local machine. Some machines run nodes and handle part of the wallet data caching, and other machines run the front end through a browser, so it is already a form of distributed application.
But that doesn't mean the processing speed scales linearly with the number of nodes on the network. The website won't load a million times faster if there are a million nodes on the network. It only depends on the local processing speed of the machine you are connected to, which processes the website content and keeps the blockchain data synchronized with the network.
The blockchain ecosystem is distributed, but that doesn't mean all the processing required to run an application that uses the blockchain will scale linearly with the number of nodes on the network.
The way blockchain works, it scales close to 0%: all the real processing needed to process a block's transactions is done by a single machine, mostly on a single core, on behalf of the whole network. Only the proof-of-work computation actually scales.
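A toy sketch of why that is, with hypothetical transaction and balance structures (not Bitcoin's real validation rules): every node runs the same validation over the whole block, so adding nodes replicates the work instead of dividing it.

```js
// hypothetical validation rule: the sender must have enough balance
function isValidTransaction(tx, chainState) {
  return (chainState.balances[tx.from] || 0) >= tx.amount;
}

// every node on the network runs this over ALL transactions of the block;
// whether there is 1 node or 1 million, each one repeats the same work
function validateBlock(block, chainState) {
  return block.transactions.every((tx) => isValidTransaction(tx, chainState));
}

const chainState = { balances: { alice: 10 } };
const block = { transactions: [{ from: 'alice', to: 'bob', amount: 5 }] };
console.log(validateBlock(block, chainState)); // true -- but confirmation speed
// is still bounded by what a single machine can validate
```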
And one needs to see that distributing an application is not necessarily distributed computing, and distributed computing doesn't mean computing that scales linearly.
There are other motives for distributed applications than linearly scaling computation. In the case of blockchain it's mostly that not all users want to download the whole chain, the wallet and the whole application onto their machine in order to use it. An application can also be distributed because certain resources are not available symmetrically between the different machines on the network, so some components are executed remotely. But again, it doesn't necessarily mean the global processing speed of the application scales linearly.
To make really efficient use of linearly scaling distributed computing in an everyday application, the application already has to be designed specifically to exploit it, with some form of pipelined processing that can be efficiently distributed over different processors.
This is where I want to get with the approach based on modules and a dynamic data tree: it allows a component to expose an API and exchange data with other applications/modules based on dynamic data types, via JSON-RPC and JS, and it provides what is needed to build applications that exploit distribution to a degree.
But in this case the distribution is mostly there to run complex applications in the browser without having to install anything on the local computer, with the data and certain parts of the processing executed on remote computers controlled via JSON-RPC AJAX requests, rather than to achieve mass scaling.
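As an illustration of the browser side, here is a minimal sketch of such a JSON-RPC AJAX call; the endpoint /jsonrpc and the method name wallet.get_balance are hypothetical placeholders, not purenode's actual API.

```js
// send a JSON-RPC 2.0 request to a remote node; the heavy work (chain data,
// wallet state) stays on that node, the browser only renders the result
async function rpcCall(method, params) {
  const response = await fetch('/jsonrpc', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ jsonrpc: '2.0', id: 1, method, params }),
  });
  const { result, error } = await response.json();
  if (error) throw new Error(error.message);
  return result;
}

rpcCall('wallet.get_balance', { address: '1ExampleAddress' })
  .then((balance) => console.log('balance:', balance));
```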
For 3D rendering, distributed computing is very interesting because a lot of the processing actually scales linearly, but for most application logic it is not necessarily processing power that limits performance, at least not in a way that can easily be scaled out over the network.
The blockchain principle of having all the data replicated identically across the network with a good degree of security can still call for massively scalable computing.
But the purenode system can create new blockchains easily, so one can imagine token-based applications each with their own public chain, which could make things somewhat more scalable when different applications need to access a blockchain at the same time. For operations on the Bitcoin blockchain or other altcoins, though, it would not scale the processing speed of transactions.
But a distributed application framework is not only about distributing processing over different CPUs to make it scale; it is also about making certain resources available on the network through an API, with different requirements regarding digital property and proof of ownership, and different degrees of privacy and access control through cryptography.