
Topic: [PRE-ANN][ZEN][Pre-sale] Zennet: Decentralized Supercomputer - Official Thread - page 17

sr. member
Activity: 434
Merit: 250

it is clear that:
1. you want the computational work to be proven

Ideally, yes.

Quote
2. i want to control the risk expectation and decrease it to practical values

As I've said I'd accept this as compromise, but have yet to see the mechanic by which it is decreased to practical values.

Quote
i claim that your method is less practical, but i'm open to hear more.

Excellent.  Why, specifically, do you claim that my method is less practical?  Also what, specifically, would you like to hear more about?

Quote
you claim that my method won't work.

I only claim that they "shouldn't work, as described."

Quote
now let's focus on either way:
if we want to talk about my approach, then miscalc and procfs spoof are indeed different.

GAH how are they at all different? :-)

In either case the attacker claims to execute one reduction but actually executes another, related reduction.  Where is the difference?  Either I am executing a different algorithm than specified for the job itself (miscalc) or I am executing a different algorithm than specified for the consumption sampling (procfs spoof), but either way I'm doing the same thing - a different reduction from what the publisher thinks I am doing.
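
If it helps, here is a toy sketch of my own (nothing from the Zennet design; a bare step counter stands in for a procfs reading) of why I keep insisting they are the same thing: in both cases the worker executes some reduction other than the one it claims, and the only difference is which output carries the lie.

Code:
def run_and_meter(program, x):
    """Toy model: a 'program' is a list of functions; the step count stands in for procfs."""
    steps, value = 0, x
    for op in program:
        value = op(value)
        steps += 1
    return value, steps

specified_job = [lambda v: v + 1] * 1000   # the reduction the publisher specified
cheap_job     = [lambda v: v + 1] * 10     # the cheaper reduction an attacker prefers

# honest worker: runs the specified reduction, reports the real reading
honest_result, honest_steps = run_and_meter(specified_job, 0)

# "miscalc": a different reduction for the job itself -- the result is the lie
miscalc_result, miscalc_steps = run_and_meter(cheap_job, 0)

# "procfs spoof": a different reduction for the consumption sampling -- the reading is the lie
spoof_result, real_steps = run_and_meter(specified_job, 0)
spoof_steps = real_steps * 10              # fabricated, inflated consumption claim

# Either way, the reduction actually executed is not the one the publisher was told.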

sr. member
Activity: 434
Merit: 250
Quote
By what criteria? As a publisher, how should I estimate a sufficient difficulty to counter the incentive for absconding at some point, considering I can't know the capacity of the worker?

again, it's all a matter of approximations and probabilities.

More "and then some magic happens."

So a provider who makes some technological leap in solving the puzzle gets to violate the constraint at will until the network just "wisens up on its own?"  By what means should it become wise to the fact?

If you remove the difficulty scale it can't really be called PoW anymore, since it no longer manages to prove anything.  The "hash lottery" has to be kept relatively fair by some explicit mechanism, or else anyone who finds a way to buy their "hash tickets" very cheaply breaks any assumption of fairness!
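
To put rough numbers on that concern (the figures are purely illustrative and mine, not anything from the Zennet spec): with a fixed target, the expected cost of a "hash ticket" scales inversely with hash rate, so the constraint evaporates for exactly the participant it most needs to bind.

Code:
difficulty_bits = 32                      # fixed puzzle: hash must have 32 leading zero bits
expected_hashes = 2 ** difficulty_bits    # ~4.3e9 hash attempts per identity, on average

ordinary_rate = 1e7    # hashes/sec, an assumed typical provider
leap_rate     = 1e10   # hashes/sec, an assumed provider after a "technological leap"

print(expected_hashes / ordinary_rate)    # ~429 seconds per identity
print(expected_hashes / leap_rate)        # ~0.4 seconds per identity -- tickets are now nearly free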
sr. member
Activity: 434
Merit: 250
at least we now understand who influences whom, and that the user may change his numbers at any time with or without looking at the market. hence no contradiction or something like that. you may argue about terminology.

Huh Another response that didn't match the statement.  How is the system not doing the pricing?  If you think the market is doing the pricing, this would imply that the pricing only occurs at the fiat denomination.  I don't think this is a notion that would be well accepted.  If you think the user is doing the pricing, why?  How does the user setting the relation between the procfs token and the coin token say anything about the valuation of the actual computation, which is denominated in that procfs token?  I would hope you do understand the difference between valuation and denomination.
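
To spell that distinction out with a toy sketch (my own framing and entirely made-up numbers, nothing from the Zennet spec): the procfs-token-to-coin rate the user sets only re-denominates a value; it says nothing about whether that value was the right price for the computation in the first place.

Code:
procfs_vector = {"cpu_ticks": 5_000, "io_bytes": 2_000_000}   # measured consumption (made up)
unit_weights  = {"cpu_ticks": 0.001, "io_bytes": 0.0000001}   # hypothetical per-unit procfs-token prices

value_in_procfs_tokens = sum(procfs_vector[k] * unit_weights[k] for k in procfs_vector)

tokens_per_coin = 50.0                     # the rate the user sets -- denomination, not valuation
value_in_coins  = value_in_procfs_tokens / tokens_per_coin

# Changing tokens_per_coin rescales value_in_coins, but whether
# value_in_procfs_tokens priced the computation correctly is untouched.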

Quote
draw me a detailed scenario for a publisher hiring say 10K nodes and let's see where it fails

I've already detailed where it can fail.  This is all I've been doing for hours now.

Quote
in theory. in practice, power can shut down and so on.

Eh, I'm going to avoid getting back into this discussion for the hundredth-or-so time.  The whole model of bitcoin *doesn't* actually fall over when the EMPs go off.  The theory remains just as sound.  The protocol can still be enacted, and work, albeit probably with adjusted parameters that account for the (massively) increased network latency caused by lack of electronic communication.

Bitcoin would've literally solved the actual Byzantine Generals' problem, even at the time!

Quote
the probability of a computer giving you a correct answer is never really 1.

It is when the process the computer employs to derive that answer is proof-carrying!  Either you get out the correct answer or you get no output at all.

Quote
how much uptime does AWS guarantee? i think 99.999%

How much uptime does bitcoin guarantee? 100%.  Anti-fragile and all that jazz.  It really is deterministically "immortal," modulo the 51% attack or some hypothetical eventual exhaustion of the hash space.

Six sigma has it all wrong.  We should be building systems that are "forever."  (Particularly being "Bitcoiners.")

Quote
since after the convergence of the network toward a more-or-less stable market, spammers and scammers will earn so little.

Again, why do we think this model will converge in such a direction?  What makes them actually "earn so little?"  What is going to prompt the network participants to behave altruistically when they have both incentive and opportunity not to?

Quote
Quote
People *will* do these things at any given opportunity, and people will use any *other* vulnerability of the system to carry out these practices.

i totally agree. i do not agree that the network is not able to mitigate them and converge to reasonable values.

I never said that I don't think it could.  In fact I explicitly stated the opposite several times.  What I'm saying here is that your network, as described so far, doesn't even seem to mitigate them correctly.

Quote
since the costs are so much lower than big cloud firms' operational costs, we have a large margin to allow some considerable financial risk.

Eh?  How can we know the relative cost a priori?  Why shouldn't we believe the cost of this service will actually average higher, given the need for excess redundancy, etc.?  We've already brought into the discussion the notion that people might even just re-sell AWS, and they certainly wouldn't do so at a loss.

I don't think you've made a safe assumption on this.

Quote
Quote
How exactly is this not just like CPUShare again, and why exactly shouldn't we expect it to fail for the exact same reasons, again?

let me mention that i know nothing about cpushare so i can't address this question

I'll rephrase.  How exactly is this not just a reiteration of every other attempt at a p2p resource market, all of which have failed?

They've all failed for the same reasons I'm assuming your model fails, btw.  No authentication over quote or work.  Inadequately constrained execution context.  Disassociated cost models.  Providers absconding mid-computation.  Providers attacking each other and the network for any possible advantage.  Providers burning through identities to perpetuate their unfair trades.  Requirement for substantial overheads in any attempts at "mitigation" of these problems.

Those who do not learn from history, they say, are doomed.
hero member
Activity: 897
Merit: 1000
http://idni.org
won't you agree that detecting miscalculation and detecting procfs spoofing are two different things?

No!  This is central to my point.  Authentication is authentication, and anything else is not.

(Authentication encompasses both concerns.)

Quote
maybe there is a similarity if our goal is to eliminate them both.

There is more than a similarity; there is a total equivalence.  To assume they are in any way different is a mistake.  They both just constitute a change in reduction semantics within the execution context.

Quote
but all we want is to decrease the risk expectation.

I am actually interested in a goal of eliminating both, of course.

However, all I really want is at least a rational explanation of where risk expectation is decreased, assuming rational behavior by participants (and not assuming honest or even semi-honest behavior beyond what is enforced by blockchain semantics).

It still seems to me that the rational behavior for participants is to default to attack, and that they have little discouraging them from doing so.

it is clear that:
1. you want the computational work to be proven
2. i want to control the risk expectation and decrease it to practical values

i claim that your method is less practical, but i'm open to hear more.
you claim that my method won't work.
now let's focus on either way:
if we want to talk about my approach, then miscalc and procfs spoof are indeed different.
hero member
Activity: 897
Merit: 1000
http://idni.org
it's not like btc difficulty, where the whole network has to agree. it's only local. each participant may choose their own value.

By what criteria? As a publisher, how should I estimate a sufficient difficulty to counter the incentive for absconding at some point, considering I can't know the capacity of the worker?

again, it's all a matter of approximations and probabilities.
sr. member
Activity: 434
Merit: 250
it's not like btc difficulty, where the whole network has to agree. it's only local. each participant may choose their own value.

By what criteria? As a publisher, how should I estimate a sufficient difficulty to counter the incentive for absconding at some point, considering I can't know the capacity of the worker?
sr. member
Activity: 434
Merit: 250
won't you agree that detecting miscalculation and detecting procfs spoofing are two different things?

No!  This is central to my point.  Authentication is authentication, and anything else is not.

(Authentication encompasses both concerns.)

Quote
maybe there is a similarity if our goal is to eliminate them both.

There is more than a similarity; there is a total equivalence.  To assume they are in any way different is a mistake.  They both just constitute a change in reduction semantics within the execution context.

Quote
but all we want is to decrease the risk expectation.

I am actually interested in a goal of eliminating both, of course.

However, all I really want is at least a rational explanation of where risk expectation is decreased, assuming rational behavior by participants (and not assuming honest or even semi-honest behavior beyond what is enforced by blockchain semantics).

It still seems to me that the rational behavior for participants is to default to attack, and that they have little discouraging them from doing so.
hero member
Activity: 897
Merit: 1000
http://idni.org
also note that those UVs are actually "atomic operations".
running one FLOP requires X atomic operations of various types.
we just add their amounts linearly!
but summing consecutive FLOPs will end up correlated.
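
a quick toy illustration of adding the amounts linearly (made-up per-FLOP vectors, not real UV definitions):

Code:
import numpy as np

# per-FLOP cost as counts of three made-up atomic-operation types
flop_add = np.array([2, 1, 0])   # e.g. [alu_ops, loads, stores] -- illustrative only
flop_mul = np.array([4, 1, 0])

# a job doing 1e6 adds and 5e5 muls: the atomic-op amounts just add up linearly
total = 1e6 * flop_add + 5e5 * flop_mul
print(total)   # 4e6 alu_ops, 1.5e6 loads, 0 stores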
hero member
Activity: 897
Merit: 1000
http://idni.org
forget about programs.
think of arbitrary vectors.
every k-dim vector can be written as a linear combination of k linearly independent vectors.
you can take many more vectors, increasing the probability that they'll contain k independent ones.
if the benchmarks are different, k will be enough. but the more the better.

I don't disagree with any of this except the notion that it is acceptable to forget about programs.   Wink

What you propose is fine other than the fact that there is no relation between the end result and the program that you conveniently want to just "forget about."

(Such a relation is not easily established, generally.  Establishing functional equivalence is hard.  Proving lack of divergence in the general case is even known to be impossible.  However, you can't just "punt" like this, forgetting about programs, and assume everything else will just work out soundly.)

it's true for ANY vectors. calling them "procfs msmts" doesn't change the picture.
i can linearly span the last 100 chars you just typed on your pc by using, say, 200 weather readings, one from each of 200 different cities.
of course, only once, otherwise the variance will be too high to make it at all informative.
but for a given program, the variance over several runs is negligible (and in any case can be calculated, and taken into account in the least squares algo).
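
here's a minimal numerical sketch of that decomposition (random stand-in data, not real procfs msmts), just to show the n >= k / rank-k / least-squares point:

Code:
import numpy as np

rng = np.random.default_rng(0)
k, n = 8, 20                       # k procfs dimensions, n benchmarks, n >= k

benchmarks = rng.random((n, k))    # row i = procfs msmts vector of benchmark i
target     = rng.random(k)         # procfs msmts vector of the job to be priced

# least squares: find weights w such that benchmarks.T @ w ~= target
w, _, rank, _ = np.linalg.lstsq(benchmarks.T, target, rcond=None)

print(rank)                                   # k, almost surely, if the benchmarks differ enough
print(np.allclose(benchmarks.T @ w, target))  # True: exact fit once rank == k and n >= k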
sr. member
Activity: 434
Merit: 250
forget about programs.
think of arbitrary vectors.
every k-dim vector can be written as a linear combination of k linearly independent vectors.
you can take many more vectors, increasing the probability that they'll contain k independent ones.
if the benchmarks are different, k will be enough. but the more the better.

I don't disagree with any of this except the notion that it is acceptable to forget about programs.   Wink

What you propose is fine other than the fact that there is no relation between the end result and the program that you conveniently want to just "forget about."

(Such a relation is not easily established, generally.  Establishing functional equivalence is hard.  Proving lack of divergence in the general case is even known to be impossible.  However, you can't just "punt" like this, forgetting about programs, and assume everything else will just work out soundly.)
hero member
Activity: 897
Merit: 1000
http://idni.org
as i wrote, i'll be glad to discuss with you methods for proving computation. not just discuss, but maybe even work on it.
hero member
Activity: 897
Merit: 1000
http://idni.org
it's not about "one size fits all". it's about describing how much has to be invested in a job. like saying "each task of mine is about as heavy as 1000 ffts of random numbers" or a linear combination of many such tasks.
again, such task decomposition is not mandatory, only the ongoing procfs msmts.
another crucial point is that we don't have to have accurate benchmarks at all. just many of them, as different as possible.

Huh  If benchmarks don't have to be accurate then why do them at all?

that's the very point: i don't need accurate benchmarks. i just need them to be different and to keep the system busy. that's all!! then i get my data from reading procfs while the benchmark is running. if you understand the linear independence point, you'll understand this.

Quote
Quote
it is *able* to be customized and coded, hence flexible.

By this measure all software is flexible.  Of course you must have known what I meant, here, as a measure of flexibility relative to alternatives.

Quote
it has to be done only for totally new creatures of hardware.

Totally new creatures of hardware show up every day.  Anyway this is neither here nor there.  I said I wasn't going to hold my breath on that result, and I didn't.  We can move on from it without prejudice.  Smiley

oh well
so you know how to write software that applies to all future hw?
hero member
Activity: 897
Merit: 1000
http://idni.org
very easy: each client picks any amount of ID POW to require from its parties.

Eeep, it just keeps getting more scary!  Cheesy

Who decides how much work is sufficient?  How does any given publisher have any indication about any given provider's ability to perform the identity work?

There kind of has to be some continual consensus on a difficulty here, doesn't there?

it's not like btc difficulty, where the whole network has to agree. it's only local. each participant may choose their own value.
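
for concreteness, a hashcash-style sketch of what "picking your own ID PoW requirement" could look like (my illustration only, not a spec commitment): each party checks incoming identities against whatever bit count it chose for itself.

Code:
import hashlib, itertools

def solve_id_pow(identity: bytes, bits: int) -> int:
    # find a nonce so that sha256(identity || nonce) has `bits` leading zero bits
    for nonce in itertools.count():
        digest = hashlib.sha256(identity + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - bits) == 0:
            return nonce

def check_id_pow(identity: bytes, nonce: int, bits: int) -> bool:
    digest = hashlib.sha256(identity + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0

# each participant chooses `bits` locally; nothing here requires network-wide agreement
nonce = solve_id_pow(b"provider-pubkey", bits=16)
assert check_id_pow(b"provider-pubkey", nonce, bits=16)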
hero member
Activity: 897
Merit: 1000
http://idni.org
man stop mixing miscalculation with procfs spoofing

Man stop thinking they are not exactly the same thing.   Wink

If you can get over that hangup I think we can make better progress.

The attacker's arbitrary control over the execution, without being held to any scrutiny of authentication, is the same problem regardless of whether we are looking at the implications for the pricing or the implications for the execution itself.

The attacker recomposing the execution context is the same behavior in either case.  This is the good ol' red/blue pill problem, just reiterated under a utility function and cost model.

won't you agree that detecting miscalculation and detecting procfs spoofing are two different things?
maybe there is a similarity if our goal is to eliminate them both.
but all we want is to decrease the risk expectation.
hero member
Activity: 897
Merit: 1000
http://idni.org
i have a program which i want to decompose into a linear combination of benchmarks w.r.t. procfs measurements (abbrev. msmts).

i.e., i want to describe the vector x of procfs msmts of a given program as a linear combination of the procfs msmts vectors of another n programs.

assume the dimension of the vectors is k.

all i need to do is have n>=k and have the matrix whose n rows are our vectors be of rank k.
i will get that almost surely if i just pick programs "different enough"; or, if they're just "a bit different", i may increase n.

I still don't see how the n programs have any assumable relation to the program that I'm actually trying to get quoted.  How is the behavior of any of the runs of the n programs indicative of anything about my future run of my job?  How does the thing being priced over serve to price the thing that I actually want priced?

forget about programs.
think of arbitrary vectors.
every k-dim vector can be written as a linear combination of k linearly independent vectors.
you can take many more vectors, increasing the probability that they'll contain k independent ones.
if the benchmarks are different, k will be enough. but the more the better.
sr. member
Activity: 434
Merit: 250
man stop mixing miscalculation with procfs spoofing

Man stop thinking they are not exactly the same thing.   Wink

If you can get over that hangup I think we can make better progress.

The attacker's arbitrary control over the execution, without being held to any scrutiny of authentication, is the same problem regardless of whether we are looking at the implications for the pricing or the implications for the execution itself.

The attacker recomposing the execution context is the same behavior in either case.  This is the good ol' red/blue pill problem, just reiterated under a utility function and cost model.
hero member
Activity: 897
Merit: 1000
http://idni.org
Quote
Precisely the contradiction, restated yet again.  "Translate procfs to zencoin" is the exact same problem as "price procfs" so this statement is precisely restated as "The control on the price is the user's. The system just helps them by doing pricing."

at least we now understand who influences whom, and that the user may change his numbers at any time with or without looking at the market. hence no contradiction or something like that. you may argue about terminology.

Quote
I'm becoming really quite convinced that where you've "gone all wrong" is in this repeated assumption of semi-honest participation.

Much of what you're saying, but particularly this, simply doesn't hold in the explicit presence of an attacker.

draw me a detailed scenario for a publisher hiring say 10K nodes and let's see where it fails

Quote
The one exception to this, of course, being formal proof.  We can actually offer real promises, and this is even central to the "seemingly magic" novelty of bitcoin and altcoins.  Bitcoin really does promise that no one successfully double spends (except with negligible probability) as long as hashing is not excessively centralized and the receiver waits an appropriate number of confirms.  (Both seemingly reasonable assumptions.)

Why you're so readily eschewing approaches that can offer any real promises, even go so far as to deny they exist "in real life," despite our Bitcoin itself being a great counterexample, is confusing to me.

in theory. in practice, power can shut down and so on. the probability of a computer giving you a correct answer is never really 1. how much uptime does AWS guarantee? i think 99.999%

Quote
I assume most users will be rational and will do whatever maximizes their own profit.

I actually go a bit further to assume that users will actually behave irrationally (at their own expense) if necessary to maximize their own profit.  (There's some fun modal logic!)

(I further have always suspected this is the root cause behind the failure of many corporations.)

since after the convergence of the network toward a more-or-less stable market, spammers and scammers will earn so little. they'd rather do decent work and get paid more. even if they don't, the other mitigations mentioned are taking place.

Quote
People *will* do these things at any given opportunity, and people will use any *other* vulnerability of the system to carry out these practices.

i totally agree. i do not agree that the network is not able to mitigate them and converge to reasonable values. since the costs are so much lower than big cloud firms' operational costs, we have a large margin to allow some considerable financial risk.

Quote
How exactly is this not just like CPUShare again, and why exactly shouldn't we expect it to fail for the exact same reasons, again?

let me mention that i know nothing about cpushare so i can't address this question
sr. member
Activity: 434
Merit: 250
it's not about "one size fits all". it's about describing how much has to be invested in a job. like saying "each task of mine is about as heavy as 1000 ffts of random numbers" or a linear combination of many such tasks.
again, such task decomposition is not mandatory, only the ongoing procfs msmts.
another crucial point is that we don't have to have accurate benchmarks at all. just many of them, as different as possible.

Huh  If benchmarks don't have to be accurate then why do them at all?

Quote
it is *able* to be customized and coded, hence flexible.

By this measure all software is flexible.  Of course you must have known what I meant, here, as a measure of flexibility relative to alternatives.

Quote
it has to be done only for totally new creatures of hardware.

Totally new creatures of hardware show up every day.  Anyway this is neither here nor there.  I said I wasn't going to hold my breath on that result, and I didn't.  We can move on from it without prejudice.  Smiley
sr. member
Activity: 434
Merit: 250
very easy: each client picks any amount of ID POW to require from its parties.

Eeep, it just keeps getting more scary!  Cheesy

Who decides how much work is sufficient?  How does any given publisher have any indication about any given provider's ability to perform the identity work?

There kind of has to be some continual consensus on a difficulty here, doesn't there?
sr. member
Activity: 434
Merit: 250
i have a program which i want to decompose into a linear combination of benchmarks w.r.t. procfs measurements (abbrev. msmts).

i.e., i want to describe the vector x of procfs msmts of a given program as a linear combination of the procfs msmts vectors of another n programs.

assume the dimension of the vectors is k.

all i need to do is have n>=k and have the matrix whose n rows are our vectors be of rank k.
i will get that almost surely if i just pick programs "different enough"; or, if they're just "a bit different", i may increase n.

I still don't see how the n programs have any assumable relation to the program that I'm actually trying to get quoted.  How is the behavior of any of the runs of the n programs indicative of anything about my future run of my job?  How does the thing being priced over serve to price the thing that I actually want priced?