The verification receipts will be included in the final product. We will have to know our budget in order to plan it. Up to now, before the sale, we built UI infrastructure (I can show it to you privately) with some good UI engineers, and we made progress with the pricing algorithm, benchmarks, procfs, etc. with some good software engineers, who had to study the subject thoroughly; they're already hands-on and building the client. We also recruited a respected cryptocurrency developer who began working on the wallet and DPOS. Of course my time is invested in our devs' and the other teams' work.
I'm looking forward to seeing the progress! Hopefully my suggestions and reference materials have been helpful in those efforts. Our direction on the design was sound (obv modulo my outstanding concerns like the GPU IOMMU, etc) so if the implementation follows suit then there should be few concerns!
For the verification we will further need a Linux kernel guy and a PLT guy, both talented enough and free enough.
GL! Most people are not like me, in my experience. Finding people with a solid background in both the low-level details and the high-level theories has always been a challenge in my past hires. Most people don't understand both ends of the spectrum, it seems. If they can grok a PCI enumeration they will probably be lost in the lambda calculus, and if they can explain type theory they probably won't know the first thing about a crossbar DMA. Those of us who run the gauntlet between the gate/sub-gate-level semantics that the OS developers care about and the syntactic concerns of PLT that the application developers care about are (increasingly) a very rare breed. Your best bet is to find someone who has been working in "high level synthesis" tools for IC design, since that is the one area where you're forced to constantly bridge the gaps.
I know very few people talented enough, and none of those would be nearly free enough. Myself included.
If I did know of someone to point you toward I'd probably be grabbing them up for my own endeavors anyway, heh.
We need a budget, a more rigid organizational structure, and time, to find the right people and get them deep into the design. This is serious work being done here. I have no plans to release a novel, cutting-edge verification algo implementation that turns out to be a joke.
I've said it before and I'll say it again: I think the less novel and cutting-edge the verification is, the better. KISS. Draw as much as possible from prior art here, with things like lambda-hist and the KLEE gaming work. (Hopefully this is "preaching to the choir" by now?)
We are planning to go with the direction you suggested. 'Authenticating the procfs', to summarize it in three words.
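To give a flavor of what "authenticating the procfs" could look like in practice, here is a minimal sketch. Everything in it is my own illustration: the chained-SHA256 receipt construction, the zeroed genesis value, and the canned snapshot bytes are assumptions, not the actual Zennet client design.

```python
import hashlib

def chain_digest(prev: bytes, snapshot: bytes) -> bytes:
    """Fold one procfs snapshot into a running hash chain, so the final
    digest commits to the whole measurement history, not just the last
    counter values."""
    return hashlib.sha256(prev + snapshot).digest()

# In a real client these would be raw reads of e.g. /proc/stat and
# /proc/<pid>/stat taken on a schedule; canned bytes here for illustration.
snapshots = [b"cpu 100 0 50 900\n", b"cpu 180 0 90 1600\n"]

digest = b"\x00" * 32  # genesis value (an assumption, not a spec)
for snap in snapshots:
    digest = chain_digest(digest, snap)
print(digest.hex())
```

The point of the chain is that a receipt over the final digest implicitly covers every intermediate sample, so a provider can't quietly rewrite old counters after the fact.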
Cool.
A publisher will be able to run e.g. gromacs, and utilize the GPU safely for both sides. That alone is worth gold. We both discussed and understood that a freeway to the provider's GPU is problematic. We can get into this topic again and fine-tune it. In any case, "all HW or nothing, and all right now" might not be the only useful approach.
This will need to be handled very delicately in any case. The special considerations for any peripheral hardware will need to be very clearly enumerated for both providers and publishers. It will need to be made very clear how providing GPU resource (or anything with side-band IO facility) is potentially very dangerous, and requires strict isolation. It will need to be made very clear how utilizing GPU resource (or, again, anything side-band) cannot be as precisely accounted for in receipt validation, and requires strict secondary verification.
I see this as one of the biggest possible Achilles' heels for the project. If providers get hacked because they offered GPU resource without understanding the ramifications they will certainly blame your software instead of themselves. If publishers have excess spend because they utilized GPU resource without understanding the ramifications they will certainly blame your software instead of themselves.
This could easily turn into an image/PR problem.
I didn't understand that. Do you see any issue with Zennet's identity model?
Only the concerns I had brought up previously. This seems to be the inflection point for "balancing out" fraud, so the specifics will be delicate. I still hold that the easiest way to "attack" this network would be to simply mine a mass of identities early to burn through later.
There is no reputation model. There are some other parameters that a node can measure that may be used to increase confidence. It's not that there is a public DB of each address' reputation or something.
There is no explicit reputation model, like a WoT or anything, but there is an implicit reputation model in the histories of the publisher/provider transactions, as you've pointed out to me at least a dozen times now. If this implicit reputation model is easily gamed it is just as bad as having a weak explicit reputation model, no?
The default client will block network access and persistent storage. So you're safe by default. If you want to turn it on, it's your choice. Publishers will typically have to offer more for such access, so it poses a problem from their side as well (it'll be more costly).
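A minimal sketch of what "safe by default" could mean at the config level. The field names and hypervisor flags here are hypothetical placeholders of mine, not the actual client's interface; the point is just that the risky capabilities default to denied and require an explicit opt-in.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class JobPolicy:
    # Both risky capabilities are denied unless explicitly enabled.
    allow_network: bool = False
    allow_persistent_storage: bool = False

def launch_args(policy: JobPolicy):
    """Translate the policy into hypervisor flags (illustrative names).
    A denied capability never reaches the guest's configuration at all."""
    args = ["--net=nat"] if policy.allow_network else ["--no-net"]
    args += ["--disk=persist"] if policy.allow_persistent_storage else ["--ephemeral-disk"]
    return args

print(launch_args(JobPolicy()))                      # the safe default
print(launch_args(JobPolicy(allow_network=True)))    # explicit opt-in
```

The design choice worth preserving is that opting in is a visible, deliberate act in the config, which also gives publishers a natural hook for pricing such jobs higher.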
This is really more of a question of liabilities, and is something we briefly touched on before at one point. I think, IIRC, that we decided that there was "little to be done" about it from the network perspective, and the onus would be on the participants to monitor their own transactions as much as possible. Of course with added security layers (encrypted computations, etc) there is always some risk that your resources could be being applied toward naughty things without you ever having any way to know, but the assumption is that in such a case the provider is also absolved of the liability by the same reasoning. It is a difficult subject, and one that is much more political than technical. You know my feelings on politics: blech. I'll leave this one to the various jurisdictions to sort out on their own, and won't even think of it much further. (Again.)
AWS never got hit by any 0day at the hypervisor layer.
Weeelllllll, not publicly anyway.
Other VPS providers have not always been so lucky.
In any case, what else can I do other than use some of the best-known hypervisors?
Nothing! This is the one big "potential point of failure" that you can do absolutely nothing to mitigate systematically. It either happens or it doesn't, plain and simple. Unfortunately, on a long enough timeline it probably does happen, so it's more a question of response times and impact than anything. All you can do is "be prepared" and demonstrate well that preparedness to your users.
They are considered very safe, and they are.
Everything is safe until the 0day becomes public. OpenSSL was "very safe" for many years, and now I've had my bitcointalk account snooped on twice in under a year. WHOOPS. Ah well, we patch up and are "very safe" again until the next time we discover that we actually weren't.
This vicious cycle is one of the primary motivations for my general interest in formal methods. We have the technology to break the cycle and actually "be" safe, through combinations of isolation and verification. Soon these technologies will even be widely practical, and maybe we can even start to look forward to a day where our software systems aren't Swiss cheese earning millions of people free credit monitoring every month. I hope I live long enough to see that day.
Let's say that given a hypervisor 0day, Zennet might not be the world's biggest problem. Just like breaking SHA or ECDSA would not make Bitcoin's insecurity the #1 world crisis...
For sure. However, more to the point, it is a potential problem on a frighteningly long list of potential problems. I have the advantage (that many others here who are skeptical of your project lack) of first-hand knowledge of your attempts to mitigate all of these problems, and I can say that you are doing a great job of covering all of the bases. However, it is a LOT of bases to cover, and you are no superman. You're good, I hold you in high regard, but with so many and so varied concerns something is bound to be missed. Again, this will just be a question of response times and impacts.
This "not IPO" IPO is a big gamble, one of the biggest in the crypto space to date, but not for the same reasons as most IMO.
I don't share the same concerns as others that you might abscond without producing results, particularly given how open you are about both your initiatives and yourself. I don't see many fraudster devs appearing at conferences to discuss their efforts, or giving up precise personal details in discourse. (They usually go to great lengths to do just the opposite of these two things, so if you are going to defraud everyone then you've just made a lot of extra work for yourself, to say nothing of getting your arse kicked once you do!)
This IPO gamble is not a gamble because you might be some pump&dump fraudster; this IPO gamble is a gamble for precisely the "right reason" for an IPO to be a gamble - what is being attempted is huuuuuuuge and either fails spectacularly or changes the whole landscape of the marketplace. Those are precisely the sort of IPO gambles an investor wants to make, even though in this case the Vegas odds might actually be better with the likely-scam offerings, just because of the scope of this undertaking and the very long list of potential negative outcomes, however well mitigated or accounted for.
I'm rooting for you, particularly considering how much I've helped to lay out the implementation direction, but the analyst in me is still very very afraid that this will be another CPUShare all over again.
(Work faster, I can't wait for the resolution to present itself!)