OK, no problem. Let's go step by step:
Where? Maybe I'm missing something, but what is the actual innovation here? How is this preferable over spot-priced surplus resources from any of the big players? What is the competitive advantage? Why will Zennet "make it" when every prior attempt failed?
For example, the pricing model is genuinely innovative. It measures consumption far more fairly than common services do, and it mitigates the differences between heterogeneous hardware. The crux is to make assumptions that are relevant only for distributed applications. Then comes the novel algorithm (an economic innovation in itself) for pricing with respect to unknown variables under a linearity assumption (which, surprisingly, holds in Zennet's case when talking about accumulated resource consumption metrics).
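To make that concrete, here is a minimal sketch of what such a linear pricing rule could look like. Everything below (the metric names, the benchmark fitting step via least squares) is my own illustration of the idea, not the algorithm from the docs:

```python
import numpy as np

# Hypothetical accumulated metrics sampled from one host over time
# (e.g. read from procfs); each row is a sampling instant, each column
# one metric: [cpu_instructions, mem_byte_seconds, io_bytes].
metrics = np.array([
    [1.0e9, 2.0e8, 5.0e6],
    [2.1e9, 4.1e8, 9.8e6],
    [3.0e9, 6.2e8, 1.5e7],
])

# Benchmark cost observed at the same instants (currency units).
# The linearity assumption says: cost ~ metrics @ unit_prices.
cost = np.array([0.010, 0.021, 0.031])

# Least-squares fit of the per-unit prices.
unit_prices, *_ = np.linalg.lstsq(metrics, cost, rcond=None)

def price(accumulated: np.ndarray) -> float:
    """Price a job from its accumulated consumption vector."""
    return float(accumulated @ unit_prices)

print(price(np.array([4.0e9, 8.0e8, 2.0e7])))
```

The appeal of the linearity assumption is that accumulated metrics only ever grow with work done, so a single price vector can be agreed up front and then applied continuously.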
The workflow is inconvenient. Reliably executing a batch of jobs will carry a lot of overhead in launching additional jobs, continually monitoring your service quality, managing reputation scores, etc. If you don't expend this overhead (in time, money, and effort) you will pay for service you don't receive or, worse, will end up with incorrect results.
I don't see what the difference is between this situation and any other cloud service.
Also note that Zennet is a totally free market: all parties set their own desired prices.
By launching jobs, you're trusting the security of a lot of random people. As you've said, you have to assume many of these people will be downright malicious. Sure, you can cut off computation with them, but by then they may already be selling your data and/or code to a third party. Even if the provider is entirely altruistic, the security on the host system might be lax, exposing your processes to third parties anyway, and in a way that you couldn't even detect, since the sandbox environment precludes any audit trail. Worse yet, your only recourse after the fact is a ding on the provider's reputation score.
I cannot eliminate this risk entirely, but I can give you control over the probability and expectation of loss, and these come down to reasonable values when massively distributed applications are in mind, together with the free-market principle.
Examples of risk-reducing behaviors:
1. Each worker serves many (say 10) publishers at once, reducing the risk tenfold for both parties.
2. A micropayment protocol settles every few seconds, so neither side is ever exposed for more than a few seconds' worth of work.
3. Since the system targets massively distributed applications, the publisher can rent, say, 10K hosts and, after a few seconds, drop the worst 5K.
4. A worker may choose to serve only known publishers, such as universities.
5. A worker may offer extra reliability (like existing hosting firms) and charge accordingly. (For the last two points, all they have to do is configure their prices/publishers on the client and put their address on their website, so people know which address to trust.)
6. If one computes the same job several times on different hosts, the probability of miscalculation drops: the required investment grows linearly while the risk vanishes exponentially (see the sketch after this list). (Now I see you wrote about this below -- recall this is a free market, so if the "acceptable" risk level means, say, "do the calculation 4 times", the price will adjust accordingly.)
7. A user can filter spammers by requiring proof of work to be invested in pubkey generation (identity mining; also sketched after this list).
I have surely forgotten several more; they appear in the docs.
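To make point 6 concrete, here is a minimal sketch of the redundancy arithmetic. The i.i.d. per-host error probability is my own modeling assumption for illustration:

```python
from math import ceil, comb

def majority_error(p: float, k: int) -> float:
    """Probability that at least half of k replicas are wrong,
    assuming each host errs independently with probability p."""
    m = ceil(k / 2)
    return sum(comb(k, i) * p**i * (1 - p)**(k - i) for i in range(m, k + 1))

# Cost grows linearly in k; the failure probability falls off exponentially.
for k in (1, 2, 4, 8):
    print(f"replicas={k}  cost~{k}x  P(majority wrong)={majority_error(0.1, k):.2e}")
```

And for point 7, a minimal sketch of identity mining, i.e. making identities expensive to create. The leading-zero-bits puzzle below is my own stand-in; the docs may specify a different scheme:

```python
import hashlib
import os

def mine_identity(difficulty_bits: int) -> bytes:
    """Find a key whose SHA-256 digest has difficulty_bits leading
    zero bits, so that generating identities costs real work."""
    target = 1 << (256 - difficulty_bits)
    while True:
        key = os.urandom(32)  # stand-in for a freshly generated pubkey
        if int.from_bytes(hashlib.sha256(key).digest(), "big") < target:
            return key
```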
Since authenticity of service can't be validated beyond a pseudonym and a reputation score, you can't assume computation will be done correctly by any given provider. You are only partly correct that this can be exponentially mitigated by simply running the computation multiple times and comparing outputs: for some types of process the outputs would never be expected to match exactly, and you could never know whether a discrepancy was due to platform differences, context, or foul play. At best this means extra cost in redundant computations, but in most cases it will go far beyond that.
Service discovery (or "grid" facilities) is a commonly leveraged feature in this market, particularly among the big resource consumers. Your network doesn't appear to be capable of matching up buyer and seller, and carrying out price discovery, on anything more than basic infrastructural criteria. Considering the problem of authenticity, I'm skeptical that the network can succeed in price discovery even on just these "low level" resource allocations, since any two instances of the same resource are unlikely to be of equivalent utility, and certainly can't be assumed as such. (How can you price an asset that you can't meaningfully or reliably qualify (or even quantify) until after you have already transacted for it?)
See above regarding the pricing algorithm, which addresses exactly these issues.
As for matching buyers and sellers, we don't do that. The publisher announces that they want to rent computers, publishing their IP address; interested clients then connect to them, and a negotiation begins without any third-party interference.
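For illustration, a minimal sketch of the worker side of that flow. The socket usage is plain Python, but the JSON message shapes (OFFER/ACCEPT/REJECT) and field names are my own invention, not Zennet's wire protocol:

```python
import json
import socket

def negotiate(publisher_ip: str, port: int, my_floor: float) -> bool:
    """Connect to a publisher that announced its IP and haggle directly,
    with no third party in the loop."""
    with socket.create_connection((publisher_ip, port)) as s:
        offer = json.loads(s.recv(4096))  # publisher's opening OFFER
        accept = offer.get("price_per_unit", 0.0) >= my_floor
        reply = {"type": "ACCEPT" if accept else "REJECT"}
        s.sendall(json.dumps(reply).encode())
        return accept
```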
How can this model stay competitive with such a rigid structure? You briefly/vaguely mention GPUs as part of some hand-waving, but demonstrate no real plan for dealing with any other infrastructure resource in general. The technologies employed in HPC are very much a moving target, more so than most other data-center housed applications. Your network offers a very prescriptive "one size fits all" solution which is not likely to be ideal for anyone, and is likely to be sub-optimal for almost everyone.
The structure is not rigid at all; on the contrary, it gives the user full control.
The pricing algorithm is also agnostic to the kind of resource -- it even covers unknown ones! That's a really cool mathematical result.
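To spell out the intuition behind that claim (my own gloss on the linearity argument above, not a quotation from the docs): if the accumulated consumption of an unmeasured resource is approximately a linear combination of the measured accumulated metrics, then its cost gets absorbed into the fitted coefficients:

```latex
u(t) \approx \sum_i a_i\, m_i(t)
\;\Longrightarrow\;
c(t) = \sum_i p_i\, m_i(t) + q\, u(t)
     \approx \sum_i \left(p_i + q\, a_i\right) m_i(t)
```

So the regression from the earlier sketch implicitly prices the unknown resource through the measured ones.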
Where is the literature that I've missed that "actually solved" any of these problems? Where is this significant innovation that suddenly makes a CPUShare-style market "work" just because we've thrown in a blockchain and PoW puzzles around identity?
(I just use CPUShare as the example, because it is so very close to your model. They even had focus on a video streaming service too! Wait, are you secretly Arcangeli just trying to resurrect CPUShare?)
What is the "elevator pitch" for why someone should use this technology over the easier, safer, (likely) cheaper, and more flexible option of purchasing overflow capacity on open markets from established players? Buying from amazon services (the canonical example, ofc) takes seconds, requires no thought, doesn't rely on the security practices of many anonymous people, doesn't carry redundant costs and overheads (ok AWS isn't the best example of this, but at least I don't have to buy 2+ instances to be able to have any faith in any 1) and offers whatever trendy new infrastructure tech comes along to optimize your application.
Don't misunderstand, I certainly believe these problems are all solvable for a decentralized resource exchange. My only confusion is over your assertions that the problems are solved. There is nothing in the materials that is a solution. There are only expensive and partial mitigations for specific cases, which aren't actually the cases people will pragmatically care about.
As for Zennet vs. AWS, see the detailed (yet partial) comparison table in the "About Zennet" article.
If you haven't seen it above: we renamed from Xennet because of this.
I think what I have written so far shows that many of these issues were thought through and answered.
Please reconsider, and please do share further thoughts.