
Topic: [PRE-ANN][ZEN][Pre-sale] Zennet: Decentralized Supercomputer - Official Thread - page 13. (Read 57105 times)

legendary
Activity: 882
Merit: 1002
sr. member
Activity: 434
Merit: 250
A BTC holder who believes the btc price will go up would not spend his btc on a service that is currently priced in dollars.

I see your point now. Maybe we'll think about giving both options.

I see the point too; I just think it is an absurd notion, particularly as applied here.   Wink

Quote
Yet, not everyone bullish on BTC/USD is necessarily bullish on BTC/ZEN.

^^ THIS.  If we are really concerned with some triangular arbitrage then we need to consider all three edges of the triangle!

Quote
On second thought, Zencoin is a commodity, and the sale is not intended for asset speculation. So maybe we'll keep it fiat.

I think this would mostly come down to constraints of the "IPO" itself, such as how escrow would be handled etc.  Ideally you should offer as broad a base of denomination as possible while still remaining convenient to everyone.  ("I'll send you some apples for zencoins." is probably not logistically viable, but "btc, usd, ltc" should be trivial enough.)

sr. member
Activity: 434
Merit: 250
There is no reason to invest using btc; it's just like selling btc for dollars to invest in a startup, and there's a good chance that if btc goes up you won't get your btc back.
If you buy at $350 now and btc goes to $1000, your $350 will still be worth $350; it's actually a waste of btc.

By this reasoning alone, if you believe that the btc price will go up then you should never do anything with a bitcoin except hoard it.  While I often see this reasoning, I don't think it is sound.  I won't step into the holy war and debate this; it is already a well-beaten dead horse.

In any case, why is the denomination relevant at all?  Any argument that you could make for a bitcoin you could also make for any asset, like a dollar or an apple or a share of Apple.  If I invest my apples in zencoin and apples go up, I probably won't get my apples back.  If I give $350 worth of apples for some zencoin and those apples later become worth $1000, I'll still have only $350 worth of zencoin.  By this logic I should never invest any asset in zencoin if I believe that asset will increase in value, because I might be right and the value of that other asset "might" go up.

Of course this is flawed reasoning.  One really needs to account for potential movement of *both* assets in *both* directions.  One needs to consider not just if apples are going to go up or down, but by what percentage they are going to go up or down relative to the percentage that zencoin will go up or down.

The specific underlying assets are not of actual concern, nor are their particular spot prices; only their relative appreciation/depreciation while held matters.  If I think that more new interest will open in zencoin over some time span than in bitcoin, relative to their market caps, then it is rational to exchange my bitcoin (some or all) for zencoin over that time period, and then to incrementally re-balance between the two as that new interest enters the market and the price gaps relative to market caps close.  This is pretty basic economic theory.  If "done correctly" one ends up with a larger-valued pile of both bitcoin and zencoin than one had of bitcoin originally.  Of course this is simplified; a real investment house will also have to consider things like "market sensitivity," i.e. how much their own entry to and exit from both markets will skew the curves.
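
To make the "relative appreciation" point concrete, here is a minimal sketch (hypothetical prices, nothing Zennet-specific) of measuring one asset's return in units of another:
Code:
# What matters is relative return, not either asset's spot price.
# All prices here are hypothetical.

def relative_return(p0_a, p1_a, p0_b, p1_b):
    """Return of asset A measured in units of asset B."""
    return (p1_a / p0_a) / (p1_b / p0_b) - 1.0

# btc goes $350 -> $1000 (+186%); a hypothetical zencoin goes $0.10 -> $0.50 (+400%).
r = relative_return(0.10, 0.50, 350.0, 1000.0)
print("zencoin return measured in btc: %+.0f%%" % (100 * r))  # +75%

# Trading btc for zencoin was rational here even though btc nearly
# tripled in dollars; the decision hinges only on the ratio of returns.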

(I think that if Ohad meets his goals then this will probably be the case over at least a 6-to-12-month timeframe, btw!!  Yah, I'm that optimistic about this project - if he does what he intends I firmly believe that new adoption of the technology will (briefly) outpace new adoption of bitcoin.  No joke.  If this thing works, or even just "works", there will almost certainly be a mad rush of both publishers and providers.  I don't see people flocking away from AWS or anything (the projects cover different use cases and concerns) but I do see this having the potential of very quickly becoming a dominant technology in the space.)

Someone might be tempted to say "but this is an IPO so it is different from general investment of an open asset" but the fact is that the only real difference is that there is no prior historical trade.  Otherwise the market "functions the same" at the moment of the IPO and forward.

Quote
So, if an investor believes that zennet is a better investment than btc then he would invest his *dollars* in zennet.

If an investor believes that asset X is a better investment than asset Y he will trade asset Y for asset X proportional to how much better they believe it to be, possibly adjusted relative to some risk profile.  (Note this says nothing of asset Z.  In your example X=zennet, Y=btc, Z=usd.  You can't cross-relate value comparisons like that unless you start talking about arbitrage.  If you do want to talk about three or more assets then you are looking at hedging between the three *relative* interests, but the premise remains more or less the same from there.)

If they are smart, they will also write out put/call contract pairs to bracket return above a risk free rate to within some acceptable bound of probability.  This is slightly less basic economic theory, but still pretty much "first principles."
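
To illustrate the bracketing, a minimal sketch of a simple collar (the strikes and prices are hypothetical, and this ignores premiums and the risk-free leg):
Code:
# Hold the asset, buy a put at k_put, sell a call at k_call: the value
# at expiry is pinned inside [k_put, k_call], bracketing the return.

def collar_value(spot_at_expiry, k_put, k_call):
    long_put = max(k_put - spot_at_expiry, 0.0)
    short_call = -max(spot_at_expiry - k_call, 0.0)
    return spot_at_expiry + long_put + short_call

for s in (50.0, 100.0, 200.0):
    print(s, collar_value(s, k_put=80.0, k_call=150.0))
# -> 80.0, 100.0, 150.0: downside and upside both bounded, as described.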

All of this is rather off-topic discussion, though.  It is not any concern specific to zennet, and is something that is well covered elsewhere.  This isn't an appropriate thread for discussing general economics or investment strategy, IMO.

hero member
Activity: 897
Merit: 1000
http://idni.org
A BTC holder who believes the btc price will go up would not spend his btc on a service that is currently priced in dollars.

I see your point now. Maybe we'll think about giving both options.
Yet, not everyone bullish on BTC/USD is necessarily bullish on BTC/ZEN.
On second thought, Zencoin is a commodity, and the sale is not intended for asset speculation. So maybe we'll keep it fiat.
legendary
Activity: 2898
Merit: 1017
80% will be sold; 10% of that now, for price discovery, but the plan is to sell them all before the genesis block. If we aren't able to sell them all, we'll destroy what remains. The dedicated 4% will be spread to the people. The remaining 16% will be used for building and maintaining the products etc.
People who rent out their power get paid in Zencoin.
There will never be an IPO, since it's not shares we're talking about. We sell a commodity.
The tokens will be distributed to their owners when the network begins working. We might consider a slow distribution in order not to flood the market.


Please explain, why too small? According to my math, if we sell more, it will cut the buyers' profits.
If the community puts X BTC for 10%, then each coin will be worth twice as much as if it were X BTC for 20%.
In addition, recall that this is only the first sale. In the later sales, the price will be >= the price from this price-discovery sale.
If I missed something, please explain.

There's a dilemma with a real-world service that is currently priced in dollars. You can't provide an existing real-world service and peg it to the bitcoin price (not in this era, anyway). It's just like with Storj (cloud storage): the second the bitcoin price goes up, the service's btc price will go down, and the other way around. (In a way this supports bitcoin's value, since if btc drops in value there will be more demand for bitcoin to pay for the Storj service, but that's a story for another day.)

Am I the only one who thinks that the base price should be derived from electricity costs + hardware costs/profitability + the real-world market price of these resources + whatever?

Let's say you manage to sell the 80% of zencoin for 8000 BTC (at the current price of $350), and the next year bitcoin costs $1000. Do you think demand will overcome supply and people will pay 3x more for Zennet resources instead of Google's supercomputers?


And I still don't know how one gets zencoin / pays for the work.


Quote
The coins will be distributed in proportion to the amount each buyer paid, in BTC/USD (average) value (in order to be fair to all buyers regardless of BTC's ups and downs; it doesn't matter from our side), and not according to a fixed price.

So we fixed the BTC/USD value problem.
Now this problem is only one-time. Once all buyers get their Zencoins, Zennet has nothing to do with BTC, USD or DOGE - Zennet runs on Zencoin only.


That actually doesn't fix the problem for the btc holder... you're missing my point.

A BTC holder who believes the btc price will go up would not spend his btc on a service that is currently priced in dollars.
There is no reason to invest using btc; it's just like selling btc for dollars to invest in a startup, and there's a good chance that if btc goes up you won't get your btc back.
If you buy at $350 now and btc goes to $1000, your $350 will still be worth $350; it's actually a waste of btc.

So, if an investor believes that zennet is a better investment than btc then he would invest his *dollars* in zennet.
hero member
Activity: 500
Merit: 507
Zennet truly looks like the next HUGE thing, and the presale sounds like a solid opportunity to take a small part in it.
The more I am exposed to the thread and its contents, the more I am assured that this project has serious thought and work put into it, and unlike so many shit/scam initiatives, this one has real usability and value in the real world!
A huge thumbs up for Zennet!!  Hope it lives up to the big promises and potential it holds!
hero member
Activity: 897
Merit: 1000
http://idni.org
Quote
Quote
I think I said this before, but if not I at least thought it really hard:  Figure out what the absolute minimum acceptable adoption rate would be and set the difficulty for identity mining to constrain as closely to this as possible.  If you can accept a growth rate of not less than one new user per day make the identity mining find not more than one new identity per day.

It is one heck of a tightrope to walk, in any case.  Too little identity generation will frustrate users and too much identity generation will put them at increased risk.  Unpleasant trade-off is unpleasant.
I will elaborate on this a bit later.


Let's bring some order to the ID POW. I'm including new ideas here; tell me what you think.

Background:

1. When a publisher and a provider agree, the publisher may then utilize and pay the provider, or not utilize and not pay.
2. A publisher may even add only 3 integers over a whole hour, so after 1hr the payment might even be < 1 satoshi.
3. If a publisher does not trust a certain address, they should not give a lot of work to that address.
4. We have two ECDSA keys in the system; call them SSH and ZEN. The ZEN key is just like a BTC priv/pub key pair, i.e. an address. The SSH key is its equivalent for the secure shell connection.

ID POW:

5. If an address begins with 10 zeros, one can quite easily calculate a lower bound on the amount of Zencoins' worth of work needed to create such an address. Say it costs X Zencoins.
6. Now I connect to such a provider and give it work worth X/4 Zencoins. This work should be easily verifiable (like hashing, a matrix eigenvalue problem, etc.).
7. Note that I can securely tell my publisher friends that this address is bad if it miscalculated. How?
7.1. When I began talking with the provider, they proved they own the ZEN key: I challenged them to sign a message.
7.2. This proof was done over SSH, signed with the SSH key.
7.3. So I have in my hands a signed conversation, proving its Zencoin origin, and proving that the eigenvalue returned is incorrect.
8. Now, after the provider invested X Zencoins' worth of POW in their ID, I have a proof in my hand which cost me X/4 (in the worst case, if I couldn't run away without paying after I realized I got scammed), and I can prove to the whole world that this X-worth address is malicious.
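
To make points 5-8 concrete, a minimal sketch (hex-encoded addresses and SHA-256 as a stand-in for the real address derivation are assumptions here, not the final design):
Code:
import hashlib, os

def expected_attempts(zeros):
    # Each hex digit is uniform over 16 values, so a prefix of n zero
    # digits takes ~16**n key generations on average: the lower bound
    # on the work X behind such an address (point 5).
    return 16 ** zeros

def mine_vanity_id(zeros=2):  # tiny difficulty so the demo terminates
    while True:
        priv = os.urandom(32)
        addr = hashlib.sha256(priv).hexdigest()  # stand-in derivation
        if addr.startswith("0" * zeros):
            return priv, addr

priv, addr = mine_vanity_id()
print(addr, "expected work for 10 zeros:", expected_attempts(10))

# Points 6-8 (not shown): hand the provider verifiable work worth X/4 and
# keep the SSH-signed transcript; a wrong eigenvalue in that transcript is
# itself a publishable fraud proof against the X-worth address.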

hero member
Activity: 897
Merit: 1000
http://idni.org
80% will be sold; 10% of that now, for price discovery, but the plan is to sell them all before the genesis block. If we aren't able to sell them all, we'll destroy what remains. The dedicated 4% will be spread to the people. The remaining 16% will be used for building and maintaining the products etc.
People who rent out their power get paid in Zencoin.
There will never be an IPO, since it's not shares we're talking about. We sell a commodity.
The tokens will be distributed to their owners when the network begins working. We might consider a slow distribution in order not to flood the market.


Please explain, why too small? According to my math, if we sell more, it will cut the buyers' profits.
If the community puts X BTC for 10%, then each coin will be worth twice as much as if it were X BTC for 20%.
In addition, recall that this is only the first sale. In the later sales, the price will be >= the price from this price-discovery sale.
If I missed something, please explain.

There's a dilemma with a real-world service that is currently priced in dollars. You can't provide an existing real-world service and peg it to the bitcoin price (not in this era, anyway). It's just like with Storj (cloud storage): the second the bitcoin price goes up, the service's btc price will go down, and the other way around. (In a way this supports bitcoin's value, since if btc drops in value there will be more demand for bitcoin to pay for the Storj service, but that's a story for another day.)

Am I the only one who thinks that the base price should be derived from electricity costs + hardware costs/profitability + the real-world market price of these resources + whatever?

Let's say you manage to sell the 80% of zencoin for 8000 BTC (at the current price of $350), and the next year bitcoin costs $1000. Do you think demand will overcome supply and people will pay 3x more for Zennet resources instead of Google's supercomputers?


And I still don't know how one gets zencoin / pays for the work.


Quote
The coins will be distributed in proportion to the amount each buyer paid, in BTC/USD (average) value (in order to be fair to all buyers regardless of BTC's ups and downs; it doesn't matter from our side), and not according to a fixed price.

So we fixed the BTC/USD value problem.
Now this problem is only one-time. Once all buyers get their Zencoins, Zennet has nothing to do with BTC, USD or DOGE - Zennet runs on Zencoin only.

legendary
Activity: 2898
Merit: 1017
80% will be sold; 10% of that now, for price discovery, but the plan is to sell them all before the genesis block. If we aren't able to sell them all, we'll destroy what remains. The dedicated 4% will be spread to the people. The remaining 16% will be used for building and maintaining the products etc.
People who rent out their power get paid in Zencoin.
There will never be an IPO, since it's not shares we're talking about. We sell a commodity.
The tokens will be distributed to their owners when the network begins working. We might consider a slow distribution in order not to flood the market.


Please explain, why too small? According to my math, if we sell more, it will cut the buyers' profits.
If the community puts X BTC for 10%, then each coin will be worth twice as much as if it were X BTC for 20%.
In addition, recall that this is only the first sale. In the later sales, the price will be >= the price from this price-discovery sale.
If I missed something, please explain.

There's a dilemma with a real-world service that is currently priced in dollars. You can't provide an existing real-world service and peg it to the bitcoin price (not in this era, anyway). It's just like with Storj (cloud storage): the second the bitcoin price goes up, the service's btc price will go down, and the other way around. (In a way this supports bitcoin's value, since if btc drops in value there will be more demand for bitcoin to pay for the Storj service, but that's a story for another day.)

Am I the only one who thinks that the base price should be derived from electricity costs + hardware costs/profitability + the real-world market price of these resources + whatever?

Let's say you manage to sell the 80% of zencoin for 8000 BTC (at the current price of $350), and the next year bitcoin costs $1000. Do you think demand will overcome supply and people will pay 3x more for Zennet resources instead of Google's supercomputers?


And I still don't know how one gets zencoin / pays for the work.
hero member
Activity: 897
Merit: 1000
http://idni.org
Quote
Quote
Recall that there are many models they can borrow with the CPU. Combine with it the integrated LLVM, which can identify many "innocent" (say, OpenCL) kernels. That's for the near future.

Yes, IIRC this is pretty much where our last discussions left off as well: either enforcing semantics with some higher-level wrapper or asserting semantics with some static analysis.  Both are, pragmatically, even more substantial an undertaking than zennet itself!

It may definitely be one of Zennet's future areas of R&D.

Quote

I think I said this before, but if not I at least thought it really hard:  Figure out what the absolute minimum acceptable adoption rate would be and set the difficulty for identity mining to constrain as closely to this as possible.  If you can accept a growth rate of not less than one new user per day make the identity mining find not more than one new identity per day.

It is one heck of a tightrope to walk, in any case.  Too little identity generation will frustrate users and too much identity generation will put them at increased risk.  Unpleasant trade-off is unpleasant.

I will elaborate on this a bit later.

Quote
We call this dependently typed lambda calculus.  It really is the ultimate contract mechanism, as I hope you've begun to realize after reading the other materials that I threw at you.  Funny that we've had the basics for "the total solution" since 1991 and yet still see almost zero application of it today.

Speaking of lambdas for resource contracts I just this week read a recent work that you might be interested in: http://arxiv.org/pdf/1312.2334.pdf

It is about algebraically inferring side effects.  It touches on some of what we'd discussed about type systems for resource access, but unlike most of the other work we discussed, it does so in a simpler, polymorphic setting which allows for a more comprehensible inference method.  They also give a much broader and more complete treatment of the subject, in general, than most of the other papers, which was nice to see.

All so interesting. I can't wait for the days of diving into it. It also draws parallels to my main area of research (machine learning).

Quote
Also of related note, did you see Facebook's OSQuery?

Seems too heavy. https://github.com/google/cadvisor seems better suited to our needs, though. I tested it; it works like magic.
sr. member
Activity: 434
Merit: 250
Safe GPU computing isn't far.

Probably, but precisely none of the commodity configurations today offer it.  Not one.  Sad

Quote
Recall that there are many models they can borrow with the CPU. Combine with it the integrated LLVM, which can identify many "innocent" (say, OpenCL) kernels. That's for the near future.

Yes, IIRC this is pretty much where our last discussions left off as well: either enforcing semantics with some higher-level wrapper or asserting semantics with some static analysis.  Both are, pragmatically, even more substantial an undertaking than zennet itself!

Quote
Quote
Only the concerns I had brought up previously.  This seems to be the inflection point for "balancing out" fraud, so the specifics will be delicate.  I still hold that the easiest way to "attack" this network would be to simply mine a mass of identities early to burn through later.

What do you suggest?

I think I said this before, but if not I at least thought it really hard:  Figure out what the absolute minimum acceptable adoption rate would be and set the difficulty for identity mining to constrain as closely to this as possible.  If you can accept a growth rate of not less than one new user per day make the identity mining find not more than one new identity per day.

It is one heck of a tightrope to walk, in any case.  Too little identity generation will frustrate users and too much identity generation will put them at increased risk.  Unpleasant trade-off is unpleasant.
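
For what it's worth, here's a minimal sketch of the kind of retargeting I mean (the constants are placeholders, not a proposal for Zennet's actual parameters):
Code:
# Retarget identity-mining difficulty so the network finds no more than
# TARGET_PER_DAY new identities, Bitcoin-difficulty style.
TARGET_PER_DAY = 1.0

def retarget(difficulty, ids_found, window_days):
    observed_rate = ids_found / float(window_days)
    # Scale by the overshoot; clamp the step to damp oscillation.
    factor = max(0.25, min(4.0, observed_rate / TARGET_PER_DAY))
    return difficulty * factor

d = 1000000.0
d = retarget(d, ids_found=14, window_days=7)  # 2 ids/day observed
print(d)  # 2000000.0 -- difficulty doubles to choke the rate back down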

Quote
An address's public history is one (untrusted) aspect. There are something like a dozen more parameters, which I've mentioned endless times, like local history (you don't have to find the bad ones; all you need is enough good and trusted ones), altogether letting you control the risk's expectation.

Sure, and again I don't ultimately see much problem with this in premise.  Getting it right in practice is another story.  This is also another case where it will need to be made explicitly clear to the user what the risk details are, and the onus of responsibility that they hold in making their own assurances.  Software criteria can only assist them so much, they will still always need to actually come to their own decision about reasonableness of each provider/publisher they deal with.

Quote
I have thoughts about it too. Building a contract editor that will let you design any contract, but lambda calculus will verify that this contract is safe.

We call this dependently typed lambda calculus.  It really is the ultimate contract mechanism, as I hope you've begun to realize after reading the other materials that I threw at you.  Funny that we've had the basics for "the total solution" since 1991 and yet still see almost zero application of it today.

Speaking of lambdas for resource contracts I just this week read a recent work that you might be interested in: http://arxiv.org/pdf/1312.2334.pdf

It is about algebraically inferring side effects.  It touches on some of what we'd discussed about type systems for resource access, but unlike most of the other work we discussed, it does so in a simpler, polymorphic setting which allows for a more comprehensible inference method.  They also give a much broader and more complete treatment of the subject, in general, than most of the other papers, which was nice to see.

Also of related note, did you see Facebook's OSQuery?
hero member
Activity: 897
Merit: 1000
http://idni.org
Quote
I'm looking forward to seeing the progress!  Hopefully my suggestions and reference materials have been helpful in those efforts.

Definitely! I'm going to do exactly what you told me to do, with the exact paper you referred me to. This will answer the 'novel cutting edge' verification point below: you had a novel idea (well, with some insignificant help from me) and I'm going to implement it as-is. That's, at least, the plan. BTW, I've pretty much kept the details of this verification idea secret. I didn't even publish the name of the article you referred me to.
Defragging the discussion: sure, the right personnel are very difficult to find. Yet Zennet has myself dedicated (and I can cover even all of this by myself; it's only a matter of time), and I can find a PLT guy and a kernel guy. It's not an impossible mission here in Israel and with our connections. I know how to get such people to successfully attack such missions. Also, I'm pretty sure that worldwide interest from professionals will come with time, and the open source community has all the great minds. Of course, I wish you or Socrates1024 would get more involved, but I totally understand the lack of time. I likewise lack time for projects other than Zennet.

Quote
Our direction on the design was sound (obv modulo my outstanding concerns like the GPU IOMMU, etc) so if the implementation follows suit then there should be few concerns!

Our agreements and understandings stand like stone; you know my word.

Quote
Your best bet is to find someone who has been working on "high-level synthesis" tools for IC design, since that is the one area where you're forced to constantly bridge the gaps.
I'll take your advice on it. I even have someone in mind.

Quote
I see this as one of the biggest possible Achilles' heels for the project.  If providers get hacked because they offered GPU resource without understanding the ramifications, they will certainly blame your software instead of themselves.  If publishers have excess spend because they utilized GPU resource without understanding the ramifications, they will certainly blame your software instead of themselves.

Safe GPU computing isn't far. Recall that there are many models they can borrow with the CPU. Combine with it the integrated LLVM, which can identify many "innocent" (say, OpenCL) kernels. That's for the near future. That's without mentioning the progress in HW-side verifiable processors.
For the present, only closed and trusted applications: give me your gromacs job description file and I'll execute it for you myself -- that's roughly the idea, with a few more tweaks.
Add to this the fact that VT-d isn't fully out there yet.

Quote
Only the concerns I had brought up previously.  This seems to be the inflection point for "balancing out" fraud, so the specifics will be delicate.  I still hold that the easiest way to "attack" this network would be to simply mine a mass of identities early to burn through later.

What do you suggest?

Quote
There is no explicit reputation model, like a WoT or anything, but there is an implicit reputation model in the histories of the publisher/provider transactions, as you've pointed out to me at least a dozen times now.  If this implicit reputation model is easily gamed it is just as bad as having a weak explicit reputation model, no?

An address's public history is one (untrusted) aspect. There are something like a dozen more parameters, which I've mentioned endless times, like local history (you don't have to find the bad ones; all you need is enough good and trusted ones), altogether letting you control the risk's expectation.

Quote
This is really more of a question of liabilities, and is something we briefly touched on before at one point.  I think, IIRC, that we decided that there was "little to be done" about it from the network perspective, and the onus would be on the participants to monitor their own transactions as much as possible.  Of course with added security layers (encrypted computations, etc) there is always some risk that your resources could be being applied toward naughty things without you ever having any way to know, but the assumption is that in such a case the provider is also absolved of the liability by the same reasoning.  It is a difficult subject, and one that is much more political than technical.  You know my feelings on politics: blech.  I'll leave this one to the various jurisdictions to sort out on their own, and won't even think of it much further.  (Again.)

All agreed. To rephrase it in my own words: something like Zennet would come to exist sooner or later, with or without Zennet. That's almost a fact, and people will have to learn how to live in the new world, which is much more powerful than the old one. And that's not my fault, even if I wrote Zennet. Zennet is just another piece of software. It's not like discovering how to split the atom.

Quote
Weeelllllll, not publicly anyway.   Wink

Other VPS providers have not always been so lucky.
Quote
Quote
In any case, what else can I do other than use some of the best-known hypervisors?
Nothing!  This is the one big "potential point of failure" that you can do absolutely nothing to mitigate systematically.  It either happens or it doesn't, plain and simple.  Unfortunately, on a long enough timeline it probably does happen, so it's more a question of response times and impact than anything.  All you can do is "be prepared" and demonstrate well that preparedness to your users.


On Zennet you have 3 layers of protection. The hypervisor is only the first. cgroups is the second. And even inside the cgroups container, you don't get root, so that's a third layer.
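
For illustration only, a rough sketch of layers two and three on a Linux host (the cgroup-v1 paths, the nobody uid, and the workload binary name are illustrative assumptions, not the actual client code):
Code:
import os

# Layer 2: cap the guest workload with a cgroup (v1 layout assumed).
cg = "/sys/fs/cgroup/memory/zennet_guest"
if not os.path.isdir(cg):
    os.makedirs(cg)
with open(os.path.join(cg, "memory.limit_in_bytes"), "w") as f:
    f.write(str(512 * 1024 * 1024))  # 512 MB cap

pid = os.fork()
if pid == 0:
    with open(os.path.join(cg, "tasks"), "w") as f:
        f.write(str(os.getpid()))    # place the child in the cgroup
    os.setgid(65534)                 # Layer 3: drop root (nogroup)
    os.setuid(65534)                 # nobody -- an escape lands unprivileged
    os.execvp("compute_job", ["compute_job"])  # hypothetical workload
os.waitpid(pid, 0)
# Layer 1, the hypervisor boundary, sits underneath all of this.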

Quote
Everything is safe until the 0day becomes public.  OpenSSL was "very safe" for many years, and now I've had my bitcointalk account snooped on twice in under a year. WOOPS.  Ah well, we patch up and are "very safe" again until the next time we discover that we actually weren't.

And yet the development, widespread use, and funding of OpenSSL are justified.

Quote
This vicious cycle is one of the primary motivations for my general interest in formal methods.  We have the technology to break the cycle and actually "be" safe, through combinations of isolation and verification.  Soon these technologies will even be widely practical, and maybe we can even start to look forward to a day where our software systems aren't Swiss cheese earning millions of people free credit monitoring every month.  I hope I live long enough to see that day.
I have thoughts about it too. Building a contract editor that will let you design any contract, but lambda calculus will verify that this contract is safe. You may also replace "contract" with "protocol". This is also a step towards generic programming. Life is long; we'll have time for it all.

Quote
For sure.  However, more to the point, it is a potential problem on a frighteningly long list of potential problems.  I have the advantage (that many others here who are skeptical of your project lack) of first-hand knowledge of your attempts to mitigate all of these problems, and I can say that you are doing a great job of covering all of the bases.  However, it is a LOT of bases to cover, and you are no superman.  You're good, I hold you in high regard, but with so many and so varied concerns something is bound to be missed.  Again, this will just be a question of response times and impacts.

Thanks. I quoted this just to mention that giants 1000x bigger than me will probably get involved with time. It's just good for everyone. Who doesn't want such a system?
legendary
Activity: 882
Merit: 1002
Great in-depth discussion between HunterMinerCrafter and Ohad as always.
HunterMinerCrafter come back more often please  Smiley
hero member
Activity: 644
Merit: 500
sr. member
Activity: 434
Merit: 250
Receipt verification will be included in the final product. We will have to know our budget in order to plan it. Up till now, before the sale, we built UI infrastructure (I can show it to you privately) with some good UI engineers, and we are making progress on the pricing algorithm, benchmarks, procfs, etc. with some good software engineers, who had to study the subject thoroughly and are already hands-on, building the client. We also recruited a respected cryptocurrency developer who has begun working on the wallet and DPOS. Of course my time is invested with our devs and the rest of the team's work.

I'm looking forward to seeing the progress!  Hopefully my suggestions and reference materials have been helpful in those efforts.  Our direction on the design was sound (obv modulo my outstanding concerns like the GPU IOMMU, etc) so if the implementation follows suit then there should be few concerns!

Quote
For the verification we will further need a Linux kernel guy and a PLT guy, both talented enough and free enough.

GL!  Most people are not like me, in my experience.  Finding people with a solid background in both the low-level details and the high-level theories has always been a challenge in my past hires.  Most people don't understand both ends of the spectrum, it seems.  If they can grok a PCI enumeration they will probably be lost in the lambda calculus, and if they can explain type theory they probably won't know the first thing about a crossbar DMA.  Those of us who run the gauntlet between the gate/sub-gate-level semantics that the OS developers care about and the syntactic concerns of PLT that the application developers care about are (increasingly) a very rare breed.  Your best bet is to find someone who has been working on "high-level synthesis" tools for IC design, since that is the one area where you're forced to constantly bridge the gaps.

I know very few people talented enough, and none of those would be nearly free enough.  Myself included.  Undecided  If I did know of someone to point you toward I'd probably be grabbing them up for my own endeavors anyway, heh.

Quote
We need a budget, a more rigid organizational structure, and time to find the right people and get them deep into the design. It's serious work being done here. I have no plans to release a novel cutting-edge verification algo implementation that will turn out to be a joke.

I've said before and I'll say again that I think the less novel and cutting edge the verification is, the better.  KISS.  Draw as much as possible from prior art, here, with things like lambda-hist and the klee gaming work.  (Hopefully this is "preaching to the choir" by now?)

Quote
We are planning to go in the direction you have suggested: 'authenticating the procfs', to summarize it in three words.

Cool.

Quote
A publisher will be able to run e.g. gromacs, and utilize the GPU safely for both sides. That alone is worth gold. We both discussed and understood that a freeway to the provider's GPU is problematic. We can get into this topic again and fine-tune it. In any case, "all HW or nothing, and all right now" might not be the only useful approach.

This will need to be handled very delicately in any case.  The special considerations for any peripheral hardware will need to be very clearly enumerated for both providers and publishers.  It will need to be made very clear how providing GPU resource (or anything with side-band IO facility) is potentially very dangerous, and requires strict isolation.  It will need to be made very clear how utilizing GPU resource (or, again, anything side-band) can not be as precisely accounted for in receipt validation, and requires strict secondary verification.

I see this as one of the biggest possible Achilles' heels for the project.  If providers get hacked because they offered GPU resource without understanding the ramifications, they will certainly blame your software instead of themselves.  If publishers have excess spend because they utilized GPU resource without understanding the ramifications, they will certainly blame your software instead of themselves.

This could easily turn into an image/PR problem.

Quote
I didn't understand that. Do you see any issue with Zennet's identity model?

Only the concerns I had brought up previously.  This seems to be the inflection point for "balancing out" fraud, so the specifics will be delicate.  I still hold that the easiest way to "attack" this network would be to simply mine a mass of identities early to burn through later.

Quote
There is no reputation model. There are some other parameters that a node can measure that may be used to increase confidence. It's not that there is a public DB of each address's reputation or something.

There is no explicit reputation model, like a WoT or anything, but there is an implicit reputation model in the histories of the publisher/provider transactions, as you've pointed out to me at least a dozen times now.  If this implicit reputation model is easily gamed it is just as bad as having a weak explicit reputation model, no?

Quote
The default client will block network access and persistent storage.  So you're safe by default. If you want to turn it on, it's your choice. Publishers will typically offer more for such, so it will pose a problem from their side as well (it'll be more costly).

This is really more of a question of liabilities, and is something we briefly touched on before at one point.  I think, IIRC, that we decided that there was "little to be done" about it from the network perspective, and the onus would be on the participants to monitor their own transactions as much as possible.  Of course with added security layers (encrypted computations, etc) there is always some risk that your resources could be being applied toward naughty things without you ever having any way to know, but the assumption is that in such a case the provider is also absolved of the liability by the same reasoning.  It is a difficult subject, and one that is much more political than technical.  You know my feelings on politics: blech.  I'll leave this one to the various jurisdictions to sort out on their own, and won't even think of it much further.  (Again.)

Quote
AWS never got any 0day in its hypervisor layer.

Weeelllllll, not publicly anyway.   Wink

Other VPS providers have not always been so lucky.

Quote
In any case, what else can I do other than use some of the best-known hypervisors?

Nothing!  This is the one big "potential point of failure" that you can do absolutely nothing to mitigate systematically.  It either happens or it doesn't, plain and simple.  Unfortunately, on a long enough timeline it probably does happen, so it's more a question of response times and impact than anything.  All you can do is "be prepared" and demonstrate well that preparedness to your users.

Quote
They are considered very safe, and they are.

Everything is safe until the 0day becomes public.  OpenSSL was "very safe" for many years, and now I've had my bitcointalk account snooped on twice in under a year. WOOPS.  Ah well, we patch up and are "very safe" again until the next time we discover that we actually weren't.

This vicious cycle is one of the primary motivations for my general interest in formal methods.  We have the technology to break the cycle and actually "be" safe, through combinations of isolation and verification.  Soon these technologies will even be widely practical, and maybe we can even start to look forward to a day where our software systems aren't Swiss cheese earning millions of people free credit monitoring every month.  I hope I live long enough to see that day.

Quote
Let's just say that given a hypervisor 0day, Zennet might not be the world's biggest problem. Just as breaking SHA or ECDSA would not make Bitcoin's insecurity the world's #1 crisis...

For sure.  However, more to the point, it is a potential problem on a frighteningly long list of potential problems.  I have the advantage (that many others here who are skeptical of your project lack) of first-hand knowledge of your attempts to mitigate all of these problems, and I can say that you are doing a great job of covering all of the bases.  However, it is a LOT of bases to cover, and you are no superman.  You're good, I hold you in high regard, but with so many and so varied concerns something is bound to be missed.  Again, this will just be a question of response times and impacts.

This "not IPO" IPO is a big gamble, one of the biggest in the crypto space to date, but not for the same reasons as most IMO.

I don't share the same concerns as others that you might abscond without producing results, particularly given how open you are about both your initiatives and yourself.  I don't see many fraudster devs appearing at conferences to discuss their efforts, or giving up precise personal details in discourse.  (They usually go to great lengths to do just the opposite of these two things, so if you are going to defraud everyone then you've just made a lot of work for yourself in not just getting your arse kicked once you do!)

This IPO gamble is not a gamble because you might be some pump&dump fraudster; this IPO gamble is a gamble for precisely the "right reason" for an IPO to be a gamble - what is being attempted is huuuuuuuge and either fails spectacularly or changes the whole landscape of the marketplace.  Those are precisely the sort of IPO gambles an investor wants to make, even though in this case the Vegas odds might actually be better with the scam-possibility offerings, just because of the scope of this undertaking and the very long list of negative outcome potentialities, however well mitigated or accounted for.

I'm rooting for you, particularly considering how much I've helped to lay out the implementation direction, but the analyst in me is still very very afraid that this will be another CPUShare all over again.  Wink

(Work faster, I can't wait for the resolution to present itself!)
hero member
Activity: 897
Merit: 1000
http://idni.org
Delighted to see you here again.
Nothing compares to an honest discussion.
I left you a message on IRC regarding the sale and I wanted to discuss it with you, but somehow we didn't get into it.

Quote
The numbers don't matter a priori.  Any ratio of coins sold to coins destroyed is irrelevant.  You could work your system with an initial distribution of just one coin or a million, and the system as a whole functions just the same.  (Denomination is not value.)

To further assume that price discovery sets a floor is a bit unlike you.  Of course we should hope that this is the case, under the assumptions that resource demand is initially high while resource supply is initially zero, and that, post launch, demand by publishers will continually outpace supply by providers (both growing proportionally), creating a natural scarcity of the representative token, such that market rates would never fall below the initial pricing.  Wouldn't it be great if that could be assumed to hold?  Of course it cannot be, as any number of factors could crash demand, skewing the proportional growth.  If this happens at a time when the ratio of oversupply subsequently becomes greater than the initial amount of demand at launch (relative to the zero initial supply), then it is only rational that the value of the token would cross below this floor.

Please recall that during and after the price-discovery sale, we will continue development and have more funds for it, so naturally the value should go up. That's without even counting the market-size considerations you correctly mentioned.

Quote
This reminds me of an interesting model that I worked out a few years ago about general asset pricing.  It seems to be commonly believed (particularly around these parts, heh) that any given asset has a lower bound on price at zero, and a non-finite upper bound.  This turns out to be logically backwards.  Under any rational model, an arbitrary asset actually has a finite upper bound on price (as there is only so much of value that is not the asset, to be traded for it) and a finite but *negative* lower bound, as any asset could eventually become toxic by way of an external interaction.  (Arguably there are hypothetical assets which might have an effectively unbounded low price.  For example, a device which randomly and suddenly (in uncontrolled fashion) creates a black hole of any random size could be perceived as having a potentially infinitely negative value, as it could come to "cost" the entirety of the universe to hold.  Of course this is a bit absurd, and no practical asset has such a pricing model, but the model is sound despite this.)

That's why I don't set any bounds. But I have to protect the early buyers. See, if I were thinking only of myself, I wouldn't restrict the price to be higher; I'd just sell at whatever price I could get at the moment. The proposed obligation binds the team in order to compensate the buyers. If the community sees this as more an obstacle than a protection, we might reconsider it.

Quote
Anyway, with all of that aside, how are the technical details coming?  I've been meaning to check in with you on the irc for quite some time now, but have been very busy.  Is there any progress on the verification of receipts?

Receipt verification will be included in the final product. We will have to know our budget in order to plan it. Up till now, before the sale, we built UI infrastructure (I can show it to you privately) with some good UI engineers, and we are making progress on the pricing algorithm, benchmarks, procfs, etc. with some good software engineers, who had to study the subject thoroughly and are already hands-on, building the client. We also recruited a respected cryptocurrency developer who has begun working on the wallet and DPOS. Of course my time is invested with our devs and the rest of the team's work. For the verification we will further need a Linux kernel guy and a PLT guy, both talented enough and free enough. We need a budget, a more rigid organizational structure, and time to find the right people and get them deep into the design. It's serious work being done here. I have no plans to release a novel cutting-edge verification algo implementation that will turn out to be a joke. I need to take this part of the project no less seriously (and even more so) than all the others.
So, there is a limit to what can be done without a successful pre-sale. And this pre-sale is intended to build the product to all of our requirements and satisfaction.


Quote
I'm still on the fence about your "not an IPO" IPO.
You cannot avoid the fact that the worth of a currency rests mainly on intangible belief, but computing resources are a tangible asset. Zencoin should be treated differently. Even though it's a commodity in a potentially huge market, we of course don't plan to sell the coins for billions or trillions. So in any case, the coins will be sold at a ridiculously low price, given that the tech is usable. And it is usable, and I'll make that case again if needed.

Quote
If everything lives up to our ideals then I'm all in, but our ideals are set quite loftily.  I still see dozens of ways that this whole project goes the way of CPUShare et al.

You're definitely a man of ideals, which I appreciate and tend toward. Bringing good and efficiency to the world is a different matter.

Quote
If the verification doesn't work to sufficiently secure publishers, I'm out.

We are planning to go in the direction you have suggested: 'authenticating the procfs', to summarize it in three words. Nevertheless, the original design (mitigation only, without verification) does meet industry standards for security and everything. We might both argue over whether industry standards are good enough or not. But the big money doesn't argue; it follows the industry standard.
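
A minimal sketch of what those three words could mean in practice (my own illustration, not the actual client; the sampled fields and the HMAC session key are assumptions):
Code:
import hashlib, hmac, json, time

SESSION_KEY = b"per-rental shared secret"  # assumed agreed at session start

def procfs_snapshot(pid):
    """Sample the counters the payment is based on."""
    with open("/proc/%d/stat" % pid) as f:
        fields = f.read().split()
    utime, stime = int(fields[13]), int(fields[14])  # CPU jiffies
    return {"t": time.time(), "utime": utime, "stime": stime}

def signed_receipt(pid):
    snap = json.dumps(procfs_snapshot(pid), sort_keys=True).encode()
    tag = hmac.new(SESSION_KEY, snap, hashlib.sha256).hexdigest()
    return snap, tag

# The publisher archives (snap, tag) pairs; a bad tag or an implausible
# jump in the counters between receipts flags a cheating provider.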

Quote
If the GPU is available as a resource but not adequately secured, I'm out.

A publisher will be able to run e.g. gromacs, and utilize the GPU safely for both sides. There will be no freeway to the provider's GPU, but the provider will have to trust certain applications (via plugins or so).

Quote
If the identity model is insufficiently restrictive, or sees too much early gaming, I'm out.

I didn't understand that. Do you see any issue with Zennet's identity model?

Quote
If the reputation model isn't capable enough to handle all of the various concerns, I'm out.
There is no reputation model. There are some other parameters that a node can measure that may be used to increase confidence. It's not that there is a public DB of each address's reputation or something.

Quote
If there is not sufficient protection to providers to preclude illegal/immoral uses, I'm out.
The default client will block network access and persistent storage. So you're safe by default. If you want to turn it on, it's your choice. Publishers will typically offer more for such, so it will pose a problem from their side as well (it'll be more costly).

Quote
If this whole thing just evokes some 0day in the hypervisory layer, I'm out.
AWS never got any 0day in its hypervisor layer. In any case, what else can I do other than use some of the best-known hypervisors? They are considered very safe, and they are. Let's just say that given a hypervisor 0day, Zennet might not be the world's biggest problem. Just as breaking SHA or ECDSA would not make Bitcoin's insecurity the world's #1 crisis...
sr. member
Activity: 434
Merit: 250
Please explain, why too small? According to my math, if we sell more, it will cut the buyers' profits.
If the community puts X BTC for 10%, then each coin will be worth twice as much as if it were X BTC for 20%.
In addition, recall that this is only the first sale. In the later sales, the price will be >= the price from this price-discovery sale.
If I missed something, please explain.

On this point, you're both crazy.  Tongue

Ohad you should know better! 

The numbers don't matter a priori.  Any ratio of coins sold to coins destroyed is irrelevant.  You could work your system with an initial distribution of just one coin or a million, and the system as a whole functions just the same.  (Denomination is not value.)

To further assume that price discovery sets a floor is a bit unlike you.  Of course we should hope that this is the case, under the assumptions that resource demand is initially high while resource supply is initially zero, and that, post launch, demand by publishers will continually outpace supply by providers (both growing proportionally), creating a natural scarcity of the representative token, such that market rates would never fall below the initial pricing.  Wouldn't it be great if that could be assumed to hold?  Of course it cannot be, as any number of factors could crash demand, skewing the proportional growth.  If this happens at a time when the ratio of oversupply subsequently becomes greater than the initial amount of demand at launch (relative to the zero initial supply), then it is only rational that the value of the token would cross below this floor.

This reminds me of an interesting model that I worked out a few years ago about general asset pricing.  It seems to be commonly believed (particularly around these parts, heh) that any given asset has a lower bound on price at zero, and a non-finite upper bound.  This turns out to be logically backwards.  Under any rational model, an arbitrary asset actually has a finite upper bound on price (as there is only so much of value that is not the asset, to be traded for it) and a finite but *negative* lower bound, as any asset could eventually become toxic by way of an external interaction.  (Arguably there are hypothetical assets which might have an effectively unbounded low price.  For example, a device which randomly and suddenly (in uncontrolled fashion) creates a black hole of any random size could be perceived as having a potentially infinitely negative value, as it could come to "cost" the entirety of the universe to hold.  Of course this is a bit absurd, and no practical asset has such a pricing model, but the model is sound despite this.)
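
In symbols, the model says roughly this, with $V_{\mathrm{ext}}$ denoting the total value of everything that is not the asset (my notation, nothing more):

$$-\infty \;<\; p_{\min} \;\le\; p \;\le\; p_{\max} \;\le\; V_{\mathrm{ext}} \;<\; \infty, \qquad p_{\min} \le 0$$

i.e. price is bounded above by what the rest of the world can trade for the asset, and bounded below by a finite (possibly negative) toxicity cost.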

Anyway, with all of that aside, how are the technical details coming?  I've been meaning to check in with you on the irc for quite some time now, but have been very busy.  Is there any progress on the verification of receipts?

I'm still on the fence about your "not an IPO" IPO.  If everything lives up to our ideals then I'm all in, but our ideals are set quite loftily.  I still see dozens of ways that this whole project goes the way of CPUShare et al.

If the verification doesn't work to sufficiently secure publishers, I'm out.  If the GPU is available as a resource but not adequately secured, I'm out.  If the identity model is insufficiently restrictive, or sees too much early gaming, I'm out.  If the reputation model isn't capable enough to handle all of the various concerns, I'm out.  If there is not sufficient protection to providers to preclude illegal/immoral uses, I'm out.  If this whole thing just evokes some 0day in the hypervisory layer, I'm out.

There are a lot of ways that this whole thing falls over.  If it doesn't, it will be awesome.

I'll catch you on IRC again soon.
hero member
Activity: 897
Merit: 1000
http://idni.org

In this pre-sale we will sell 10% of all future Zencoins that will ever exist. The community will set the price. The coins will be distributed in proportion to the amount each buyer paid, in BTC/USD (average) value (in order to be fair to all buyers regardless of BTC's ups and downs; it doesn't matter from our side), and not according to a fixed price.


Man, 10% is too small; there will be no profit for investors.


+1

We need a balance of benefits between developer and investor: the developer works hard to deliver a good product, and the investor invests to get some profit. A win-win is a real win.

Please explain, why too small? According to my math, if we sell more, it will cut the buyers' profits.
If the community puts X BTC for 10%, then each coin will be worth twice as much as if it were X BTC for 20%.
In addition, recall that this is only the first sale. In the later sales, the price will be >= the price from this price-discovery sale.
If I missed something, please explain.
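
In numbers (a sketch of the arithmetic only; the raise X and the supply S are made up):
Code:
def price_per_coin(raised_btc, fraction_sold, total_supply):
    return raised_btc / (fraction_sold * total_supply)

X, S = 1000.0, 100000000  # hypothetical raise (BTC) and total supply
print(price_per_coin(X, 0.10, S) / price_per_coin(X, 0.20, S))  # -> 2.0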
sr. member
Activity: 269
Merit: 250

In this pre-sale we will sell 10% of all future Zencoins that will ever exist. The community will set the price. The coins will be distributed in proportion to the amount each buyer paid, in BTC/USD (average) value (in order to be fair to all buyers regardless of BTC's ups and downs; it doesn't matter from our side), and not according to a fixed price.


Man, 10% is too small; there will be no profit for investors.


+1

We need a balance of benefits between developer and investor: the developer works hard to deliver a good product, and the investor invests to get some profit. A win-win is a real win.
full member
Activity: 125
Merit: 100

In this pre-sale we will sell 10% of all future Zencoins that will ever exist. The community will set the price. The coins will be distributed in proportion to the amount each buyer paid, in BTC/USD (average) value (in order to be fair to all buyers regardless of BTC's ups and downs; it doesn't matter from our side), and not according to a fixed price.


Man, 10% is too small; there will be no profit for investors.