true, they're unrelated.
as for load decomposition, why on earth should the programs be identical/homomorphic/isomorphic?
First, homomorphism is an entirely separate thing. I only want the isomorphism.
Second, they "should" be isomorphic because the performance profile of any given algorithm has to be assumed to be potentially unique to that algorithm. No benchmark you could throw at the system will necessarily "load" that system in the same way(s) that my program P will. What we ultimately want to get to is a measure of "how hard is the system working on P", and we can't formulate that starting from some baseline that has nothing to do with P!
programs that perform 1000 FLOPs will take about half the time of programs that perform 2000 similar FLOPs, even if they're two entirely different algos. i'm not counting on that anywhere, just pointing out that i'm apparently not interested at all in such functional equivalence.
Sure, but this reasoning is only really valid if we look at a single measure in isolation - which is not what we intend, per your decomposition! If we run your benchmarks and generate our g values across all metrics, and then use a spin loop for P, we will see 100% processor usage but no other load. Does this mean that we are loading the system with 100% of "P work"? YES, but your model will initially decide that the usage is actually lower, because the decomposition spreads P's purely-CPU load across benchmark dimensions (disk, memory, etc.) that P never touches!
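To make that concrete, here is a minimal numpy sketch of the failure mode. The profiles and numbers are invented purely for illustration, not taken from any real benchmark or from the Zennet spec: a pure-CPU spin loop gets decomposed against benchmark profiles that also exercise disk and memory.

```python
import numpy as np

# Hypothetical illustration (numbers invented): decompose a pure-CPU spin
# loop P against benchmark profiles that also exercise disk and memory.

# Each row is one benchmark's normalized procfs profile: [cpu, disk, mem].
benchmarks = np.array([
    [0.8, 0.6, 0.1],   # benchmark 1: CPU plus heavy disk
    [0.5, 0.1, 0.9],   # benchmark 2: CPU plus heavy memory traffic
])

# P is a busy loop: 100% CPU, nothing else.
P = np.array([1.0, 0.0, 0.0])

# Best least-squares expression of P as a combination of the benchmarks.
coeffs, *_ = np.linalg.lstsq(benchmarks.T, P, rcond=None)
reconstructed = coeffs @ benchmarks

print(coeffs)         # ~[0.75, 0.08]
print(reconstructed)  # ~[0.64, 0.46, 0.15]: CPU understated, phantom disk/mem load
```

The best fit attributes only about 64% of the load to the CPU and assigns disk and memory load that P never generated, so the model concludes P is working the system less than it actually is.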
how is any kind of such equivalence related to resource consumption?
If we can formulate our g in relation to P, then our measures relate too. If we take our g measure using a variety of spin loops (potentially "extended" from P in some way) and then perform our decomposition against our P spin loop, your decomposition will "know" from g that the only meaningful dimension of the performance profile is the CPU, and will correctly measure P as loading the system to 100%.
Obviously this is a contrived example to illustrate the point; no one will want to pay to run a busy loop P. However, taking the point to this extreme makes it simple to illustrate.
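For contrast, here is the same sketch (again with made-up numbers) where the g basis is taken from spin loops derived from P itself, as described above:

```python
import numpy as np

# Continuation of the earlier sketch, hypothetical numbers: this time the g
# basis comes from spin loops "extended" from P, so it spans P's own profile.
P = np.array([1.0, 0.0, 0.0])           # busy loop profile: [cpu, disk, mem]
benchmarks_P = np.array([
    [0.5, 0.0, 0.0],                    # half-rate spin loop
    [1.0, 0.0, 0.0],                    # full-rate spin loop
])
coeffs, *_ = np.linalg.lstsq(benchmarks_P.T, P, rcond=None)
print(coeffs @ benchmarks_P)            # [1. 0. 0.]: P correctly measured as 100% CPU
```

Because the basis now spans P's actual profile, the decomposition recovers the 100% CPU load exactly instead of smearing it over unrelated dimensions.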
again, how come you tie workloads so closely to a specific algo's operation?
Because every algorithm, combined with a particular reduction of that algorithm, has a unique performance profile! A performance profile is unique to a particular run. We can never know in advance how our particular run will behave, but we do know that we can't measure it against the baseline of some other algorithm(s) in a meaningful way. Disk usage, page faults, etc. in our benchmarks have NO bearing on our measure of our busy loop P!
maybe it's needed for proven comps, but is it needed for my approach of estimating consumption and reducing risk?
It is necessary in either case to derive a meaningful measure from the decomposition.
Also, this notion should actually be turned the other way round - proven comps should be considered needed for estimating consumption, as AMiller pointed out on IRC.
" 12:21 < amiller> imo it's not a bad idea to have some pricing structure like that, i'm not sure whether it's novel or not, i feel like the biggest challenge (the one that draws all my attention) is how to verify that the resources used are actually used, and this doens't address that" [SIC]
where are we stuck on agreeing over the procfs vector decomposition?
maybe you're looking for something deep like functional extension, but those vectors can be spanned by almost any sufficiently large set of random vectors. like all vectors.
The problem is that such a span is not meaningful relative to the subsequent measure unless some functional extension exists. Unless our benchmarks are "similar" to our P busy loop, they only introduce noise into our decomposition of our measures over P.
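Here is a small sketch of why "spanning" alone buys nothing (my own illustration, with arbitrary random numbers): any three random profiles do span the metric space, so P can always be written in terms of them, but the resulting coefficients are an artifact of the basis rather than a measure of P.

```python
import numpy as np

# Hypothetical sketch: random "benchmark" profiles span the [cpu, disk, mem]
# space, so the busy loop P always has an exact decomposition -- but the
# coefficients change completely with the choice of basis, so they measure
# the basis, not P.
rng = np.random.default_rng(42)
P = np.array([1.0, 0.0, 0.0])               # busy loop profile

for trial in range(2):
    basis = rng.random((3, 3))              # 3 random benchmark profiles
    coeffs = np.linalg.solve(basis.T, P)    # exact span-based decomposition
    print(coeffs, coeffs @ basis)           # coefficients differ per basis; reconstruction is P either way
```

Both random bases reconstruct P perfectly, yet they give entirely different coefficients. That arbitrariness is the "noise" I'm talking about: any measure built on top of the decomposition inherits it unless the benchmarks are functionally related to P.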
sometimes the verification is so fast that it can be done on the publisher's computers. so as for miscalc, a 3rd party isn't always needed.
Again, I'm assuming most computations will not have referential transparency (will not be pure), precluding this.
moreover, yes, i do assume that each publisher rents many providers and is able to compare between them.
Again, I'd rather find solutions than defer to (potentially costly) mitigation.
I still don't understand which flaw you claim to have found.
You claim that people will just create many addresses, make jobs for a few seconds, and this way fool the whole world and get rich until zennet is doomed?
Among other behaviors. I basically assume that people will do "all the same crap they did with CPUShare et al that made those endeavors fail miserably."
So many mechanisms were offered to prevent this. Such as:
Except, as offered, they don't prevent anything at all. They presume to probabilistically avoid the concern, except that there is no formalism yet around why we should believe they will serve to avoid any of them with any reasonable probability. Again, I defer to AMiller, who always puts things so much better than I ever could:
"12:25 < amiller> there are no assumptions stated there that have anything to do with failure probability, malicious/greedy hosts, etc. that would let you talk about risk and expectation"
3. You can always ask it to hash something and verify the result! Moreover, you can spend your first seconds with your new provider just proving his work. Yes, they can become malicious a moment after, but: you'll find out pretty quickly and never work with that address again, while requiring more POW than this address has.
Again, how is the challenge difficulty to be set? This is still unanswered!
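For reference, here is a minimal sketch of the kind of hash challenge point 3 describes (my own construction, not anything specified by Zennet): the publisher sends random bytes, the provider returns an iterated SHA-256 over them, and the publisher recomputes the chain locally to check it. The `iterations` parameter is exactly the unspecified difficulty I keep asking about.

```python
import hashlib
import os

def make_challenge(size: int = 32) -> bytes:
    """Publisher side: random challenge bytes to send to the provider."""
    return os.urandom(size)

def respond(challenge: bytes, iterations: int) -> bytes:
    """Provider side: iterated SHA-256 over the challenge."""
    digest = challenge
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest

def verify(challenge: bytes, response: bytes, iterations: int) -> bool:
    """Publisher side: recompute the chain and compare."""
    return respond(challenge, iterations) == response

challenge = make_challenge()
iterations = 1_000_000   # the "difficulty" -- nothing in the proposal says how to set it
assert verify(challenge, respond(challenge, iterations), iterations)
```

Note that in this naive form the publisher has to redo exactly the same work to verify, and nothing ties `iterations` to the provider's claimed capacity or to the price being paid.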
4. i'll be glad to see a detailed scenario of an attacker trying to earn. i don't claim he won't make a penny. but show me how he'll make more than a penny a day.
How much an attacker stands to make largely depends on his motive and behavior. Some attackers might make no illegitimate profit at all, but prevent anyone *else* from being able to accept jobs, for example. Some attackers might burn addresses and just take whatever "deposit" payments they can. Some attackers might fake their computation and pocket the difference in energy cost between the fake work and the legitimate work. Whether or not an attacker makes more than a penny a day depends on how capable his approach is, how naive/lax his victims are, and how much traction the network itself has gained.
My concern is not that some attacker will get rich, my concern is that the attackers leeching their "pennies a day" will preclude the network from being able to gain any traction at all, and will send it the way of CPUShare et al.
It is very important to remember that, for some attackers, a dollar a day would be a fortune. Starving people are starving. Some of those starving people are smart and own computers, too.
Can you pinpoint the problem? for miscalc and procfs spoofing, we may assume we don't have the fancy pricing model, and we can discuss it as if we were pricing according to raw procfs.
The root of the problem is just the combination of everything I've already described.
It is all predicated on the lack of authentication over the g values, and made worse by the lack of any cost for identity.