
Topic: [PRE-ANN][ZEN][Pre-sale] Zennet: Decentralized Supercomputer - Official Thread - page 15. (Read 57089 times)

hero member
Activity: 897
Merit: 1000
http://idni.org

And I have found 100+ customers. Unfortunately they are botnet owners.

That's why we plan to deliver the client configured by default to block internet connections to publishers.
Providers will be able to change this setting, and moreover, they'll be able to allow it only for publishers they trust (such as universities). All a publisher has to do is publish their Zennet address on their website, and providers can choose to trust it.
hero member
Activity: 897
Merit: 1000
http://idni.org
member
Activity: 98
Merit: 10
legendary
Activity: 3836
Merit: 4969
Doomed to see the future and unable to prevent it
hero member
Activity: 897
Merit: 1000
http://idni.org
Below, when I mention "we" I mean HMC and myself:

I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join?

1. Identity.  We agree that PoW mining should be used to claim identity.  I say that identity should be pre-claimed, based on a global difficulty target.  Ohad says that identity should be claimed when the worker client connects to the publisher for initiation, with an arbitrary target set by the publisher.
  My key concern: The publisher has no way to know what difficulty would be appropriate for any given worker.

As above, this description of the issue is based on a misunderstanding. We continue working on this mechanism together.

Quote
2. Verification of execution.  We agree that it would be ideal to have authenticated processes, where the publisher can verify that the worker is well behaved.  Ohad says there doesn't need to be any, and the cost of any approach would be too high.  I say the cost can be made low and that there is a likely critical need.
  My key concern: Without verification of at least some aspects of the job, the publisher can have far too little faith in the results of any one computation, and particularly in the results of a combination of computations.  In particular, lack of verification may lead to rational collusion by workers and added incentive to attack each other.

We are making promising advances in that area.

Quote
3. System benchmarking.  We agree that it would be ideal to have an appropriate system resource model presented to publishers, on which to base their determination of how to dispatch and price their work.  I say the benchmark must represent the same algorithm when taking baseline measurements.  Ohad says the baseline can be established with any algorithm.
  My key concern: Many attacks based on a combined "gaming" of benchmarks and work performance might be possible if the benchmarks are not representative of the actual work to be performed.

4. Pricing mechanism.  We agree that the presented linear decomposition utility pricing objective is probably "just about right."  Ohad says his approach is entirely sound.  I say that the overall model creates an opportunity, particularly because of the prior points taken in conjunction, for an attacker to introduce relevant non-linearity by lying about results.
  My key concern: The fundamental assumption of the system, which ends up "joining together" the entire economic model, actually works in reverse of what was intended, ultimately giving the "sell side" (worker) participants a particular incentive to misrepresent their resources and process.

I think we now agree after clarifying that the contract does not contain any futuristic promise.

PS
To anyone who didn't notice, I've credited HMC in the OP.
sr. member
Activity: 434
Merit: 250
I don't have the time to follow your discussion on IRC, but feel free to post any results here. I will check on this thread.

I'm sure we will continue to relay any pertinent conclusions here.
member
Activity: 98
Merit: 10
I don't have the time to follow your discussion on IRC, but feel free to post any results here. I will check on this thread.
sr. member
Activity: 434
Merit: 250
During the Patriots game? Are you insane?!

We'll still be around after, I'm sure.
sr. member
Activity: 434
Merit: 250
I do agree too, and it seems like it was a misunderstanding based on Ohad's post.

I think we do still have some "fine point" details to work out, but we at least agree on the actual goal here now.

Many processes don't have the properties necessary to be authenticated. For example, you can verify the work of a miner by a simple hash function, but you can't verify the work of a neural network that simply.

Really, you can!  An ANN is really just a composition of sigmoids in a graph, and you can certainly authenticate over sigmoid functions, graphs, and the composition.  You can't assert something like "it hit a correct error rate", because you can't define what a correct meeting of an arbitrary objective would be, but you can certainly assert "evaluation and backprop/annealing were applied correctly" and infer from that that the error rate reached was the same as what you would've gotten by running locally, which is all we desire.
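To illustrate the claim, here's a minimal sketch (all names and values hypothetical, not Zennet's actual scheme) of the simplest possible check: the publisher recomputes one reported sigmoid-layer evaluation and compares a canonical hash of the result against the worker's claim. Real authenticated computation would be far more involved, but the point is that the evaluation step itself is mechanically checkable.

```python
import hashlib
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(weights, inputs):
    # One sigmoid layer: per-unit weighted sum of the inputs, squashed.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]

def digest(vec):
    # Canonical hash of a float vector, rounded to dodge cross-host FP noise.
    return hashlib.sha256(",".join(f"{v:.12f}" for v in vec).encode()).hexdigest()

# Worker reports a digest of its layer output; publisher recomputes and compares.
weights, inputs = [[0.5, -0.25], [0.1, 0.9]], [1.0, 2.0]
claimed = digest(forward(weights, inputs))          # as reported by the worker
assert digest(forward(weights, inputs)) == claimed  # spot check passes
```

The same recompute-and-compare step generalizes to any deterministic piece of the evaluation or backprop graph; what it cannot do is judge whether the objective reached was "good," which matches the distinction drawn above.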

I agree with HMC here. Any kind of benchmarking used must be run alongside the process. Any host can benchmark high and detach resources after the process has begun.

I think you missed something key here.  Benchmarking is continuous, and ongoing, in any case.  In other words, your job is benchmarked "alongside the process", so if you start out benchmarking high and then go about removing applied resources, you will not be able to (assuming we can get the ancillary issues sorted out) continue billing without also reducing your billing rate correspondingly.  We all agree that this will work fine and that rates will converge appropriately.

What we don't agree on is the meaningfulness of the initial "baseline" benchmark that you start from, to do your initial rounds of billing before this convergence starts to "settle into" the correct values via the linear decomposition.  I don't dispute the validity of the linear solve itself, only the applicability of a single "canonical" or general benchmark to any initial billing for an arbitrary process.

The details on this are a bit too deep and maths-y to get into here, I think.  Join #zennet and we can wade into it if you'd like. :-)
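To make the billing side of this concrete, here's a minimal sketch (illustrative rates and usage numbers, not Zennet's actual values) of the linear decomposition idea: the charge for an interval is just a linear combination of measured resource consumption at agreed per-unit rates.

```python
# Illustrative per-unit rates, as in an agreed contract (hypothetical values).
RATES = {"cpu_sec": 0.002, "disk_gb_hr": 0.0005, "mem_gb_hr": 0.001}

def charge(usage):
    # Bill for one interval: a linear combination of measured consumption.
    return sum(RATES[k] * usage.get(k, 0.0) for k in RATES)

interval = {"cpu_sec": 3600, "disk_gb_hr": 10, "mem_gb_hr": 4}
print(round(charge(interval), 4))  # 7.209
```

The dispute above is not about this linear form, but about how the initial per-unit rates are anchored before ongoing measurements have converged.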

Quote
... This may introduce another problem: any open-source OS selected must be heavily modified.

Yes, this has come up as well.  We'd obviously like to avoid something as (insanely) effort-intensive as authenticating an entire kernel and/or VM.  Although Ohad briefly considered it as an option, I discouraged such a "moon shot" goal, favoring instead an approach more like a special-purpose VM layer.

Quote
I think if points 2 and 3 are solved, this won't arise.

I agree!  If we can solve 2 and 3 then any "lower dimension" non-linearity introduced into the pricing model by an "attacking" worker becomes immediately quite visible, and the publisher can reliably abort.

Quote
If we can identify well-behaved nodes that give verifiable results with verifiable resource usage, this incentive wouldn't exist. Any pricing model based on this would be sound.

Exactly.  The conclusion we do all solidly agree on is that if we can verify enough such that a "big lie" becomes very self-evident and "creating lots of continuous small lies over time" becomes very computationally expensive, then the rest of the model follows soundly from that.  (Assuming ID cost is correct, i.e. my point #1 is solidly addressed as well.)

legendary
Activity: 3836
Merit: 4969
Doomed to see the future and unable to prevent it
newbie
Activity: 28
Merit: 0
Please join us in #zennet on freenode
hero member
Activity: 897
Merit: 1000
http://idni.org

3. System benchmarking.  We agree that it would be ideal to have an appropriate system resource model presented to publishers, on which to base their determination of how to dispatch and price their work.  I say the benchmark must represent the same algorithm when taking baseline measurements.  Ohad says the baseline can be established with any algorithm.
  My key concern: Many attacks based on a combined "gaming" of benchmarks and work performance might be possible if the benchmarks are not representative of the actual work to be performed.

4. Pricing mechanism.  We agree that the presented linear decomposition utility pricing objective is probably "just about right."  Ohad says his approach is entirely sound.  I say that the overall model creates an opportunity, particularly because of the prior points taken in conjunction, for an attacker to introduce relevant non-linearity by lying about results.
  My key concern: The fundamental assumption of the system, which ends up "joining together" the entire economic model, actually works in reverse of what was intended, ultimately giving the "sell side" (worker) participants a particular incentive to misrepresent their resources and process.

We just cleared up another misunderstanding (off-board): whether the mentioned contract is some futuristic promise (no) or just the agreed rate per CPU/disk/mem etc. (yes). This had caused confusion about how to avoid consumption spoofing. Short answer: if the publisher trusts procfs, no problem exists. If he doesn't trust it, he will be able to apply the various multivariate linear outlier-detection methods raised.
I think HMC now agrees that the ongoing measurements can be decomposed against past benchmark-run measurements.
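In its simplest one-dimensional form, the outlier detection mentioned here could look something like the following sketch (illustrative numbers; a real check would span cpu/disk/mem jointly): flag hosts whose reported effective per-unit rate deviates strongly from the population, using a median-based score that stays robust to the outlier itself.

```python
import statistics

# Effective per-cpu-second rate reported by five hosts (illustrative values);
# the last host is wildly overreporting its consumption.
rates = [0.0020, 0.0021, 0.0019, 0.0020, 0.0090]

med = statistics.median(rates)
mad = statistics.median([abs(r - med) for r in rates])  # median absolute deviation

# Modified z-score; values above ~3.5 are conventional outlier candidates.
flags = [i for i, r in enumerate(rates)
         if mad and 0.6745 * abs(r - med) / mad > 3.5]
print(flags)  # [4]
```

Median-based statistics are used here instead of mean/stdev because a single large misreport inflates the standard deviation enough to mask itself in small samples.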

I hope the blogger from http://data-science-radio.com/is-zennet-or-any-other-decentralized-computing-real/ will now understand how wrong he was.
legendary
Activity: 3836
Merit: 4969
Doomed to see the future and unable to prevent it

Interesting article. I disagree with the premise that "most" DC users need that level of security for their proprietary tasks. The need for such massive computational power is in itself a deterrent to theft, as the use of the resulting data is specialized to the entity seeking it.
member
Activity: 98
Merit: 10
1. Identity.  We agree that PoW mining should be used to claim identity.  I say that identity should be pre-claimed, based on a global difficulty target.  Ohad says that identity should be claimed when the worker client connects to the publisher for initiation, with an arbitrary target set by the publisher.
  My key concern: The publisher has no way to know what difficulty would be appropriate for any given worker.

I do agree too, and it seems like it was a misunderstanding based on Ohad's post.

2. Verification of execution.  We agree that it would be ideal to have authenticated processes, where the publisher can verify that the worker is well behaved.  Ohad says there doesn't need to be any, and the cost of any approach would be too high.  I say the cost can be made low and that there is a likely critical need.
  My key concern: Without verification of at least some aspects of the job, the publisher can have far too little faith in the results of any one computation, and particularly in the results of a combination of computations.  In particular, lack of verification may lead to rational collusion by workers and added incentive to attack each other.

Many processes don't have the properties necessary to be authenticated. For example, you can verify the work of a miner by a simple hash function, but you can't verify the work of a neural network that simply. If the publisher has 1000 hosts in his VM and wants to verify their work one by one, it would take a lot of computational power on his side. Also, I assume by 'work' we don't mean running a mathematical operation across hosts. I don't know the infrastructure for the VM, but the system may assume all hosts are online and cooperating in a non-malicious way, so it can build and operate an entire OS across them. If one host acts maliciously, it would endanger the integrity of the whole VM. From this perspective, a single defective host in 1000 endangers the entire system, not just 1/1000 of the work.

3. System benchmarking.  We agree that it would be ideal to have an appropriate system resource model presented to publishers, on which to base their determination of how to dispatch and price their work.  I say the benchmark must represent the same algorithm when taking baseline measurements.  Ohad says the baseline can be established with any algorithm.
  My key concern: Many attacks based on a combined "gaming" of benchmarks and work performance might be possible if the benchmarks are not representative of the actual work to be performed.

I agree with HMC here. Any kind of benchmarking used must be run alongside the process. Any host can benchmark high and detach resources after the process has begun. The host can even do this via the network itself: consider renting 1000 hosts just to benchmark high for a publisher and then releasing them. So you either have to benchmark and process at the same time, decreasing the effective resources available, or the work supplied must be 'benchmarkable' itself. From the perspective I introduced in the last question, this does not necessarily mean every publisher should change his work; it would mean running an OS across hosts that can effectively measure the contribution of each host in terms of resources. This may introduce another problem: any open-source OS selected must be heavily modified.

4. Pricing mechanism.  We agree that the presented linear decomposition utility pricing objective is probably "just about right."  Ohad says his approach is entirely sound.  I say that the overall model creates an opportunity, particularly because of the prior points taken in conjunction, for an attacker to introduce relevant non-linearity by lying about results.
  My key concern: The fundamental assumption of the system, which ends up "joining together" the entire economic model, actually works in reverse of what was intended, ultimately giving the "sell side" (worker) participants a particular incentive to misrepresent their resources and process.

I think if points 2 and 3 are solved, this won't arise. If we can identify well-behaved nodes that give verifiable results with verifiable resource usage, this incentive wouldn't exist. Any pricing model based on this would be sound.
sr. member
Activity: 434
Merit: 250
I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join?

3. System benchmarking.  We agree that it would be ideal to have an appropriate system resource model presented to publishers, on which to base their determination of how to dispatch and price their work.  I say the benchmark must represent the same algorithm when taking baseline measurements.  Ohad says the baseline can be established with any algorithm.
  My key concern: Many attacks based on a combined "gaming" of benchmarks and work performance might be possible if the benchmarks are not representative of the actual work to be performed.

If we go with the new idea of "slim" paravirt provability, we might prove the running of the benchmarks themselves.

Yes, but these are related-but-distinct concerns.  Related because of #4 there.  Distinct because even with authentication to verify the correct benchmarks are run I still see a potential problem if we lack that "functional extension" from the benchmark to the job itself.  Our baseline would still be the wrong baseline, we'd just have a proof of it being the correct wrong baseline, heh.
hero member
Activity: 897
Merit: 1000
http://idni.org
I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join?

3. System benchmarking.  We agree that it would be ideal to have an appropriate system resource model presented to publishers, on which to base their determination of how to dispatch and price their work.  I say the benchmark must represent the same algorithm when taking baseline measurements.  Ohad says the baseline can be established with any algorithm.
  My key concern: Many attacks based on a combined "gaming" of benchmarks and work performance might be possible if the benchmarks are not representative of the actual work to be performed.

If we go with the new idea of "slim" paravirt provability, we might prove the running of the benchmarks themselves.
hero member
Activity: 897
Merit: 1000
http://idni.org
I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join?
1. Identity.  We agree that PoW mining should be used to claim identity.  I say that identity should be pre-claimed, based on a global difficulty target.  Ohad says that identity should be claimed when the worker client connects to the publisher for initiation, with an arbitrary target set by the publisher.
  My key concern: The publisher has no way to know what difficulty would be appropriate for any given worker.

I do agree; that was just a misunderstanding. I meant that when the client connects, the publisher can't know which address this IP owns unless they challenge it with some string to sign.
Yes, the PoW should be invested in identity creation, like in the Keyhotee project.
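The identity-creation idea reads roughly as follows: grind a nonce until a hash over the identity meets a difficulty target, so creating an identity costs work but verifying one takes a single hash. A minimal sketch (hypothetical encoding and difficulty, not Zennet's or Keyhotee's actual scheme):

```python
import hashlib

def claim_identity(pubkey: bytes, difficulty_bits: int) -> int:
    # Grind a nonce until SHA-256(pubkey || nonce) has the required number
    # of leading zero bits; the (pubkey, nonce) pair is the identity claim.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

def verify_identity(pubkey: bytes, nonce: int, difficulty_bits: int) -> bool:
    # Verification is a single hash, regardless of how long the grind took.
    h = hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < (1 << (256 - difficulty_bits))

nonce = claim_identity(b"worker-zennet-address", 12)  # cheap demo difficulty
assert verify_identity(b"worker-zennet-address", nonce, 12)
```

The point of dispute above is only who fixes `difficulty_bits`: a global target known in advance, or a per-publisher target set at connection time.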
sr. member
Activity: 434
Merit: 250
I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join?

I'll try, but the key points have been shifting a bit rapidly.  (I consider this a good thing, progress.)

Socrates1024 jumping in moved the goal posts a bit, too, in ways that are probably not obvious from the thread, now.

Perhaps we need that dedicated IRC channel sooner?

1. Identity.  We agree that PoW mining should be used to claim identity.  I say that identity should be pre-claimed, based on a global difficulty target.  Ohad says that identity should be claimed when the worker client connects to the publisher for initiation, with an arbitrary target set by the publisher.
  My key concern: The publisher has no way to know what difficulty would be appropriate for any given worker.

2. Verification of execution.  We agree that it would be ideal to have authenticated processes, where the publisher can verify that the worker is well behaved.  Ohad says there doesn't need to be any, and the cost of any approach would be too high.  I say the cost can be made low and that there is a likely critical need.
  My key concern: Without verification of at least some aspects of the job, the publisher can have far too little faith in the results of any one computation, and particularly in the results of a combination of computations.  In particular, lack of verification may lead to rational collusion by workers and added incentive to attack each other.

3. System benchmarking.  We agree that it would be ideal to have an appropriate system resource model presented to publishers, on which to base their determination of how to dispatch and price their work.  I say the benchmark must represent the same algorithm when taking baseline measurements.  Ohad says the baseline can be established with any algorithm.
  My key concern: Many attacks based on a combined "gaming" of benchmarks and work performance might be possible if the benchmarks are not representative of the actual work to be performed.

4. Pricing mechanism.  We agree that the presented linear decomposition utility pricing objective is probably "just about right."  Ohad says his approach is entirely sound.  I say that the overall model creates an opportunity, particularly because of the prior points taken in conjunction, for an attacker to introduce relevant non-linearity by lying about results.
  My key concern: The fundamental assumption of the system, which ends up "joining together" the entire economic model, actually works in reverse of what was intended, ultimately giving the "sell side" (worker) participants a particular incentive to misrepresent their resources and process.

Did I miss any of the major issues?

Quote
I like the idea, but I'm getting paranoid about the QoS and the verifiability of pieces of work. Also I don't get the need for introduction of a new currency. I'm sure you've discussed these as I've read the first 5/6 posts, but couldn't quite follow you guys.

I also like the idea, but even if the model is fixed it still has some dangerous flaws "as given."  It will be a prime target for hackers, data theft, espionage, and even just general "griefing" by some participants.  In some respects, this is easily resolved, but in other respects it may become very difficult and/or costly.  This will, in any case, have to be a bit of a "wait and see" situation.
hero member
Activity: 897
Merit: 1000
http://idni.org

Super interesting conversation between Ohad and HMC!

But the big question: Who is wrong?

Is there anyone out there in internetland who wants to jump in with a new perspective?

I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join?

I like the idea, but I'm getting paranoid about the QoS and the verifiability of pieces of work. Also I don't get the need for introduction of a new currency. I'm sure you've discussed these as I've read the first 5/6 posts, but couldn't quite follow you guys.

Hi,

Our discussion follows two main approaches:
1. Verifiable computing, and
2. Risk reduction

Zennet does not aim to verify the correctness of the computation, but to offer risk-reduction and control mechanisms. HMC's opinion is that we should stick to path 1, toward verifiable computing, and we're discussing this option off this board as well. HMC also suggests that Zennet's risk-reduction model is incorrect and gives scammers an opportunity to ruin the network. I disagree.
I think it'd be enough for you to read only the last comments, since many of the earlier ones are just clarifications, so you can get right into the clear ones.
More information about Zennet is at http://zennet.sc/about, and more details on the math behind the pricing algo are available at http://zennet.sc/zennetpricing.pdf