Hmm... so, why is there a limit like that? I mean 64K ints is not nearly enough to process big data, so that seems like a bottleneck to me.
EK mentioned he wanted this for some of his own use cases. Can he elaborate and point to a practical use case for XEL?
Sorry guys, it's Christmas time and I am family-hopping the whole time. I'll be back home by tomorrow.
We have to have some sort of limit, because otherwise you can easily DoS weak nodes and split them off from the rest of the network.
With no limit, you can just slowly load gigabytes of data into memory and wait until the weak nodes are the first to crash with an out-of-memory error, with the rest following.
If anyone finds a way to use unlimited memory without a) bloating the blockchain, b) bloating the memory of slower nodes, and c) writing the data to disk (which would just shift the point of failure and make processing really slow), we should definitely discuss it!
EDIT: A few weeks earlier I came up with the idea of allowing "data sets" of arbitrary size to be uploaded and having the nodes do calculations on them. But here we run into synchronization problems and would need Golem's block-partial-work-on-subscribe scheme ... which sucks big time.
Enjoy the holidays! Well deserved!
The way I picture distributed computing is that the work request doesn't contain the work data, or even the code, but just a hash of it, e.g. using IPFS to distribute the data/code. Multiple workers should then arrive at the same result, and each worker only publishes the hash of its result. Once there is agreement that the work was computed correctly, payment is made. At least that's how I would do it. But there is nothing wrong with making what you have go live and later expanding it to include more computation methods as they mature.
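Just to make that concrete, here is a minimal sketch in Python of how such a hash-only flow could look (the names WorkRequest, quorum, settle etc. are made up for illustration, and the actual IPFS fetching and computation are left out):

import hashlib
from collections import Counter

# Hypothetical sketch (not actual XEL or Golem code): the work request only
# carries content hashes, e.g. IPFS CIDs, and workers only publish result hashes.

def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class WorkRequest:
    def __init__(self, code_hash: str, data_hash: str, quorum: int):
        self.code_hash = code_hash    # hash pointing to the program, fetched e.g. via IPFS
        self.data_hash = data_hash    # hash pointing to the input data
        self.quorum = quorum          # how many matching results are needed before payout
        self.result_hashes = {}       # worker_id -> hash of the result that worker computed

    def submit(self, worker_id: str, result: bytes) -> None:
        # a worker never publishes the result itself, only its hash
        self.result_hashes[worker_id] = content_hash(result)

    def settle(self):
        # release payment once `quorum` workers agree on the same result hash
        if not self.result_hashes:
            return None, []
        winning_hash, votes = Counter(self.result_hashes.values()).most_common(1)[0]
        if votes < self.quorum:
            return None, []
        winners = [w for w, rh in self.result_hashes.items() if rh == winning_hash]
        return winning_hash, winners  # caller would pay `winners` here

The obvious soft spot, as the discussion below points out, is the settle step: "agreement" is cheap if the agreeing workers are all one attacker's Sybils.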
Thanks ;-) Well, the holiday was two-fold: first the Mustang broke down, then I got a rental which also broke down, and I stood on the highway for 5 hours waiting for a replacement.
What a weekend.
Regarding the work scheme, we all just stuck to the original idea, and together we came up with something that actually works and allows for an almost-reliable proof-of-execution. Of course, it is perfectly suited to only one specific class of computation problems.
What you describe is usable for a different kind of task: one that does not explore a search space for the right solution to a complex computation problem, but instead "map-reduces" a large but simple computation problem into multiple smaller tasks, which are computed by multiple nodes and then puzzled back together afterward.
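As a rough illustration of that second class (hypothetical Python, with a plain sum standing in for the "large but simple" computation):

# Sketch of the map-reduce class of tasks: split a big but simple job into chunks,
# let independent nodes compute the chunks, then puzzle the partial results together.

def split(job, n_chunks):
    size = (len(job) + n_chunks - 1) // n_chunks
    return [job[i * size:(i + 1) * size] for i in range(n_chunks)]

job = list(range(1_000_000))
chunks = split(job, 10)                      # each chunk would go to a different node
partial_results = [sum(c) for c in chunks]   # every node computes its piece independently
assert sum(partial_results) == sum(job)      # the requester only merges, never recomputes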
Golem does exactly what you have described, and I think it is not sufficient to verify the computation result by letting multiple nodes (or should I say my own Sybils) execute the same package and checking that the results are equal. I would go even further ;-) I am quite sure that I will be able to show a way to bypass Golem's verification mechanism (as it is currently described), and I think I would be willing to put a public 1000 dollar bet on it! Sure, I could lose, but at the moment I am really confident. And if I can't pull it off, I think HMC would very likely be able to claim my stake ;-)
(Reference: https://golem.network/doc/Golemwhitepaper.pdf, Nov 2016, Resilience chapter)