I'd like to point out two subjects that answer many frequently asked questions:
Why a fixed amount of Zencoins?
I have come to the understanding that generating new coins is a fiction if we want to keep the market non-skewed and fair: if new coins are generated, the fairest strategy is to spread them proportionally among all coin holders, but then we have actually done nothing. It is like appending a zero to every balance of a given currency. Hence, without a skewed distribution, coin generation is meaningless.
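To make the "appending a zero" argument concrete, here is a tiny numerical sketch (the names and amounts are made up for illustration): if newly generated coins are spread proportionally to existing balances, every holder's share of the total supply stays exactly the same.

```python
# Hypothetical balances; names and numbers are illustrative only.
balances = {"alice": 50.0, "bob": 30.0, "carol": 20.0}
total = sum(balances.values())

# Generate new coins and spread them proportionally among all holders.
new_coins = 100.0
inflated = {k: v + new_coins * (v / total) for k, v in balances.items()}
inflated_total = sum(inflated.values())

# Each holder's share of the total supply is exactly the same as before,
# so the proportional issuance changed nothing economically.
for name in balances:
    assert abs(balances[name] / total - inflated[name] / inflated_total) < 1e-12
print(inflated)  # every balance scaled by the same factor (here 2x)
```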
Zennet Design's Main Assumption
Zennet's design relies on a main assumption: Zennet is intended only for massive distributed computational jobs. We assume that publishers (bidders) are aware of this assumption and do not rent computers unless:
1. They need 'a lot of' computers (say, at least dozens; of course it depends).
2. The work is divided into many small parts, where each computer is expected to finish one part in a reasonably short time (say < 5 sec). This is partly like saying "your algorithm should be genuinely well distributed rather than only artificially 'distributed', not to mention merely parallel" (a toy splitting sketch follows this list).
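To illustrate point 2, here is a minimal sketch, in plain Python and not using any actual Zennet interface, of what a well-distributed workload looks like: one large task split into many small, independent parts that each finish quickly.

```python
# Minimal sketch of a "well-distributed" workload: one big task split into
# many small, independent parts, each cheap enough to finish in seconds.
# Illustrative only; this does not use Zennet's actual interfaces.

def split(n_items, part_size):
    """Yield (start, end) index ranges, each describing one small job."""
    for start in range(0, n_items, part_size):
        yield start, min(start + part_size, n_items)

def small_job(start, end):
    """A stand-in for real work; must run in well under ~5 seconds."""
    return sum(i * i for i in range(start, end))

# 1,000,000 items become 1000 independent jobs of 1000 items each;
# any host can pick up any job, and losing one host loses at most one small job.
jobs = list(split(1_000_000, 1_000))
results = [small_job(s, e) for s, e in jobs]   # in practice: sent to many hosts
print(len(jobs), sum(results))
```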
This might sound restrictive, but in fact, even nowadays, most of the money spent in the software world goes to such huge, distributed jobs. Just read about Apache Hadoop, which is only part of what Zennet can support. World Community Grid is another example.
The assumption may also sound innocent, but it has strong implications. Let me state some:
1. We may assume that what matters is not the time until a single small job finishes, but the number of small jobs finished. This leads to the pricing model, in which we can obtain unique and optimal solutions from linear operators (which are always preferred because of normality, generalization ability, and more), while taking into account even totally unknown variables. For more information, see this doc. (A toy illustration of such a linear fit follows this list.)
Without the main assumption, linearity would be lost, and we would face a difficult, still-unanswered question in economics and computer science. Nevertheless, our linear algorithm is novel.
2. If a computer suddenly shuts down, or otherwise stops functioning well, the loss is small by all means. People frequently ask me: "What if it's a full night of computation and the computer goes down in the middle of the night? All the data will be lost." The answer is that such a distributed application is not really distributed, and certainly not well distributed. Breaking distributed computations into small enough parts is a mainstream and important practice in this field.
3. Say we have 1000 small jobs. If we distribute them among 10 or 20 computers, the latter will of course be twice as fast, yet we pay the same amount of coins in both cases: 1000 jobs at a given price per job cost 1000 times that price no matter how many machines run them; only the wall-clock time changes. Speed comes for free. This is due to the ability to price the usage in time-independent units, which is justified only if the main assumption is satisfied.
4. The main assumption opens many possibilities to reduce the expected cost of the risks. For example, a publisher can rent, say, 10K hosts and, after a few seconds, drop the less efficient 50% (a toy sketch of this follows the pricing illustration below). Another risk-decreasing behavior is for each provider to work for several publishers simultaneously; say each works for 10 publishers in parallel, so the risk to both sides decreases 10-fold.
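Regarding point 1, here is a toy illustration of the kind of linear fit meant there. It assumes, purely for illustration and not as a specification of Zennet's actual model (see the linked doc), that each finished small job comes with a vector of measured resource counters and that the payment is estimated as a linear function of those counters via ordinary least squares:

```python
import numpy as np

# Illustrative sketch only: a linear pricing fit, not Zennet's actual model.
# Rows: finished small jobs. Columns: measured resource counters per job
# (e.g. CPU time, bytes of I/O, memory-seconds) -- made-up numbers.
measurements = np.array([
    [1.2, 300.0, 0.8],
    [0.9, 210.0, 0.5],
    [1.5, 420.0, 1.1],
    [1.1, 260.0, 0.7],
])
# Agreed payment for each of those jobs, in coins (also made up).
payments = np.array([4.1, 2.9, 5.6, 3.6])

# Ordinary least squares: find per-unit prices so that
# measurements @ prices ~= payments.  Linearity is what lets many jobs
# from many hosts be priced consistently without caring how long any
# single job took on the wall clock.
prices, *_ = np.linalg.lstsq(measurements, payments, rcond=None)
print("estimated per-unit prices:", prices)

# Price a new job from its counters alone -- a time-independent unit of work.
new_job = np.array([1.0, 250.0, 0.6])
print("estimated fair payment:", new_job @ prices)
```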
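And regarding point 4, a hypothetical sketch of the first risk-reducing behavior: rent many hosts, measure how many small jobs each completes during a short warm-up, and keep only the more efficient half. The host names, numbers, and the warm-up measurement are assumptions for illustration, not part of Zennet's protocol:

```python
import random

# Hypothetical warm-up measurement: jobs completed by each rented host
# during the first few seconds (random numbers stand in for real telemetry).
random.seed(0)
completed_jobs = {f"host-{i}": random.randint(0, 20) for i in range(10_000)}

# Rank hosts by throughput and keep only the more efficient 50%;
# the publisher simply stops sending work to (and paying) the rest.
ranked = sorted(completed_jobs, key=completed_jobs.get, reverse=True)
kept = ranked[: len(ranked) // 2]

print(f"kept {len(kept)} of {len(ranked)} hosts")
print("slowest kept host finished", completed_jobs[kept[-1]], "jobs in the warm-up")
```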
PS: I've added some more to the 'economical aspects' comment above, and will continue adding.