My initial scenario was about digital image processing and video processing (the Golem case). To handle video processing, the whole video file needs to be uploaded to the miner, doesn't it?
I would use Golem for that. They specialize in that use case and they can do it very well.
Another case is solving a huge system of differential equations (around 10 million variables or more), for example with the Runge-Kutta method. The real use case for such systems is predictions (of cryptocurrencies' prices). To solve such a huge system, the whole matrix needs to be uploaded as well.
Well, the Runge-Kutta method is not exactly the best choice in this case. What many people forget is that there are multiple types of parallelism. The above method's parallelization potential falls into the first class, called "parallelism across the method": each computation core executes a different portion of the method itself. Because of the complex interdependencies between the variables in those different portions, the ratio of computational work per processor to the amount of data that needs to be exchanged between the cores is pretty low. Such methods are therefore better suited to shared memory systems like multi-processor rigs or GPUs. On the other hand, it does not matter whether you implement them with MPI, on the Elastic network, or on some other distributed memory "cluster" ... in all cases you will face severe performance issues. This is not Elastic's fault.
This is the reason why it is mandatory to analyze which type of algorithm is best suited for systems like Elastic. However, if the method is picked wisely, solving ODEs should not be infeasible on distributed memory systems. One possibility would be to choose a method from the category "parallelism across the step". These do not require a constant exchange of data; cores only need to communicate after each full iteration. Now, if each iteration has sufficient computational volume, it might be worth a shot. However, I am not sure that step-parallel ODE solvers are the best choice anyway, since the effective speed-up (even for a large number of processors) is fairly low (bad convergence behaviour, low robustness, ...)
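As an illustration of the step-parallel idea, here is a toy Parareal-style sketch in Python (Parareal is one well-known step-parallel scheme; the test equation dy/dt = -y, the propagators, and all names are my own assumptions, not anything Elastic provides). The expensive fine propagations within one iteration are independent of each other and could be distributed, while communication happens only once per iteration:

```python
# Parareal sketch for dy/dt = -y on [0, 1], split into N time slices.
import math

def fine(y, t0, t1, n=100):
    """Accurate (expensive) propagator: many small Euler steps."""
    h = (t1 - t0) / n
    for _ in range(n):
        y = y + h * (-y)
    return y

def coarse(y, t0, t1):
    """Cheap propagator: a single Euler step."""
    return y + (t1 - t0) * (-y)

T, N = 1.0, 8
ts = [i * T / N for i in range(N + 1)]

# Initial sequential coarse sweep.
U = [1.0]
for i in range(N):
    U.append(coarse(U[i], ts[i], ts[i + 1]))

for k in range(5):  # Parareal iterations
    # All fine propagations are independent -> parallelizable.
    F = [fine(U[i], ts[i], ts[i + 1]) for i in range(N)]
    G_old = [coarse(U[i], ts[i], ts[i + 1]) for i in range(N)]
    # Cheap sequential correction; this is the only synchronization point.
    Unew = [U[0]]
    for i in range(N):
        Unew.append(coarse(Unew[i], ts[i], ts[i + 1]) + F[i] - G_old[i])
    U = Unew
```

Note the trade-off mentioned above: the sequential correction sweep and the once-per-iteration data exchange cap the achievable speed-up even when many cores are available.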
And then there are "parallelism across the system" methods. This is where Elastic feels at home. Here, the full task is partitioned into multiple subtasks that are worked on independently over a number of iterative step sweeps, and information is only exchanged at the end of each sweep. This eliminates a lot of the communication overhead. To get back to your use case: without having too much experience in solving ODEs, my hunch tells me that the waveform relaxation method could fall into this category.
Finally, we can conclude that Elastic will not be the universal toolkit for all problem classes. There are problems that Elastic can solve very well, problems that should be solved on FPGAs for maximum efficiency, problems that can only be solved on single-core computers, and problems that need shared memory. It will not be efficient to run a Bitcoin miner on a single CPU core, just as it will not be efficient to execute "parallelism across the method" on Elastic.