
Topic: ion discussion - page 4. (Read 9662 times)

legendary
Activity: 1050
Merit: 1016
September 09, 2015, 09:25:41 PM
#43
I've been discussing the issues raised by r0ach and smooth in private messages. And I want to share something I wrote in a private message.

Quote from: myself
...Does anyone have any idea how much work is ahead of me!...

I do, and I'll tell you exactly how much....a proverbial shit ton!  And once you've shoveled through that, expect a few more!

If your ambition includes a desire to do it right and there are no actual deadlines (only self-imposed ones), then you'll underestimate the time required for everything.  This is due to you overestimating, by a large factor, your ability to do it in a timely manner and to exacting standards.

Personally I think this is fine; a while ago I accepted that it was going to take time to do this right.  Not only for the quality of the product, but also so I could sleep at night without the concern of "Am I going to wake in the morning and find some poor person has lost their life savings due to a stupid bug?"  Thus I stopped giving estimated launch dates, and it's now purely moving on "Valve Time" and IRWIR.

I too have worked on million+ user projects, but crucially ones whose failure rarely results in the loss of someone's livelihood, so corners can be (and constantly are) cut.  Even in the case that a failure could affect some user's life in a negative manner, there is usually some form of insurance in place, either by way of actual policies held by the producing organization, or because the organization itself has deep enough pockets to mitigate any such issues.

We don't have the safety net above, nor the one most other cryptos have.  That is, they base a large amount of what they are doing on Satoshi's ideas and have seen that it works and is of sound theory.  We are pushing the frontier with these new ideas, and I'd be mortified if something went wrong and I knew I'd cut corners just to rush it out the door.
newbie
Activity: 28
Merit: 0
September 09, 2015, 08:11:05 PM
#42
Edit: here at 53:30 Daniel explains they are licensing their code:

https://beyondbitcoin.org/bitshares-dev-hangout-bytemaster-stealth-confidential-transactions/

They are building a business model around building software for the coin ecosystem.  This is very top-down and non-network effects thinking. It is far too slow. We need 1000s of developers working in an ecosystem

It seems like a smart idea to prevent a large corporation or financial institution from taking all the work, not utilizing the original system to give it value, and instead building their internal network, exchange, etc., with it.  Anytime you read coin news about banks, they don't seem very interested in doing things like buying coins from miners; they seem interested in taking the work, building their own system on top of it, or selling the coins to you.  I think people are underestimating the highly refined banker initiative to screw everyone over as much as possible.

The correct solution is to invent a decentralized ecosystem coin that will be used by millions of people every day.

Aiming lower is slower, lower, and not my style. Then you worry about being pecked off by a larger small bird.

I have produced million-user software back when the internet population was 1/10 of its current size.

Sorry, arguing that top-down is a correct solution for a decentralized crypto-currency is an oxymoron to me. Either we do this right, or why bother doing it at all.

Besides, that is not the reason Daniel is doing it. He is doing it because he philosophically believes in control (and/or, like every good socialist, he wants to mask his lack of confidence and his milking of the market under the guise of forming a cult of "sharing" for as long as all such taxes course through him... and he probably believes in socialist organization philosophically, as if he can give a better result if he can control it more). Steve Jobs was a control freak, and his iOS walled garden got destroyed by the more open Android, as the following chart shows...

https://upload.wikimedia.org/wikipedia/commons/a/ac/World_Wide_Smartphone_Sales_Share.png



I've been discussing the issues raised by r0ach and smooth in private messages. And I want to share something I wrote in a private message.

Quote from: myself
Also remember I want to use this coin. Which means I want to go create services in the ecosystem and earn money on those. Even if, for example, my designs get implemented by Blockstream, at least I can go create the types of decentralized services and sites that can earn me $millions.

But while getting this coin to the point where it will scale out, I can't be targeting a measly $20k to leech on my ecosystem for server infrastructure. If I create a product in my ecosystem later, it will shoot for big B2C (to drive more usership to the coin, helping the entire ecosystem), not vertical-market B2B in a small developer market.

I see marketing idiots throughout this space. This is going to be as easy as taking candy from a baby for me from a marketing standpoint.

The name vote is already a reflection of my marketing instincts. I will however need to stay out of debates, as they interfere with my marketing and cause people to perceive me as overbearing. Rather I think it would be more accurate to say there is too much work to do and it is much more amicable to be relating on production instead of these debates. I simply can't allow myself to get sucked into long discussions. I am the only programmer on the project right now. So my overbearingness is a function of necessity. Let's be realistic. Does anyone have any idea how much work is ahead of me!

My greatest successes in the past came when I was implementing software because I wanted to use it myself and to solve a problem I had with existing software. I guess that is because I am very passionate when that is the case, and I am up close and personal with the issues I am trying to solve.
newbie
Activity: 28
Merit: 0
September 09, 2015, 07:55:50 PM
#41
I wrote this...

That's clearly not the case since the block rewards go to zero and the bigger question is whether there would even be enough PoW, not whether there would be too much.

The concern over PoW going gray goo over all the world's power is based on the bizarre premise of moonstruck fanatics that it becomes the world reserve currency tomorrow, despite not even being able to figure out how to scale above 5-7 tps. After a few more block halvings, this would not be the case at all.
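
To put rough numbers on the halving point, here is a back-of-the-envelope sketch in Scala. The 50 BTC subsidy and the halving every 210,000 blocks (roughly every 4 years) are Bitcoin's published schedule, nothing specific to my design:

Code:
// Bitcoin's block subsidy halves every 210,000 blocks (~4 years), so the
// PoW security budget funded by new issuance shrinks geometrically.
object HalvingSketch extends App {
  val initialReward = 50.0 // BTC per block at genesis
  for (halvings <- 0 to 6) {
    val reward = initialReward / math.pow(2, halvings)
    println(f"after $halvings halvings: $reward%.4f BTC per block")
  }
}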

This is why being able to objectively filter 25 - 33% selfish mining attacks and 51% attacks is so critical. Satoshi's design cannot do it. Mine can.

With that key improvement, one can change the way mining is done so that those who are sending transactions are doing the mining. Since they don't care about their mining profitability, you drastically reduce the electricity used, yet simultaneously make it uneconomic to run an ASIC farm. And pools become irrelevant, because no one is mining for profit; rather they mine because they must in order to send a transaction.
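
A minimal sketch of the "mine to send" idea, assuming a toy hash-below-target difficulty rule (the rule and all the names here are illustrative, not my actual design):

Code:
import java.security.MessageDigest

// Illustrative only: the sender grinds a small proof-of-work over the
// transaction itself, so mining is a cost of sending, not a profit center.
object TxPowSketch extends App {
  val sha256 = MessageDigest.getInstance("SHA-256")

  // Toy difficulty rule: hash must begin with `zeroBytes` zero bytes.
  def meetsTarget(h: Array[Byte], zeroBytes: Int): Boolean =
    h.take(zeroBytes).forall(_ == 0)

  // Grind a nonce over the serialized transaction until the target is met.
  def mineTx(txBytes: Array[Byte], zeroBytes: Int): Long = {
    var nonce = 0L
    while (!meetsTarget(sha256.digest(txBytes ++ BigInt(nonce).toByteArray), zeroBytes))
      nonce += 1
    nonce
  }

  val tx = "pay 10 ION to alice".getBytes("UTF-8")
  println(s"nonce ${mineTx(tx, zeroBytes = 2)} satisfies the per-transaction PoW") // ~65k hashes on average
}

No pool or ASIC farm earns anything from that work; it is purely an admission cost for the sender.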

So then you have the unbounded entropy of proof-of-work that ensures its model of security and game theory, without any of the drawbacks.

TADA.  Grin
newbie
Activity: 28
Merit: 0
September 09, 2015, 07:04:39 PM
#40
Bitshares has too much investment (vested "stake") in proof-of-stake to shift quickly, and their small market cap would be overrun well before they could act. Besides, I think they believe in their solution. Their coin appears to be more like a cooperative or corporation than a decentralized open source deal. Rather, I could see them making tweaks to their own design to try to emulate any of the best attributes, and I would need to study DPoS in more detail to see where, if anywhere, limitations for them might lie.

One thing about the Bitshares people is they seem pretty willing to undertake new initiatives and switch direction. If they saw a reason to want to do something different I don't think it would necessarily have to be on the same coin platform at all.

Hey, if Daniel is willing to give up all the collectivism crap about shares in MLM, interest-bearing accounts, etc. (and all that I don't bother to understand about Bitshares), then as a coder I'd explore working with him. I know he is a talented coder. And I see there are other talented coders over there in his community (some guy pulled a Javascript implementation of ECC out of his hat in 30 hours the other day so they can accelerate their web wallet implementation of the new anonymity features).

But alas, I remember from the deep debates I as Anonymint had with Daniel in 2013 that he is inherently socialist and I am inherently anarchist. I recently exchanged a few posts with his brother, and still the same attitude difference seems to persist between us. So I think he will always be looking at problems and solutions more top-down than I am. Meaning I will be looking to encourage the market of network effects for development momentum as early as feasible. I'd rather make aspects orthogonal so others can build and profit, rather than try to build my own well-contained, feature-complete crypto kingdom as Bitshares Corporation appears to do. Give more opportunities to others.

So I am confident that whatever he does with my designs, he will sufficiently screw them up that I don't have to worry too much.  Tongue

But let me reserve my final judgement until I complete my evaluation of DPoS to see what he has done technically.

I am more worried about Vitalik (and separately Blockstream, with their significant resources). That guy appears to be a beast of an intellect. Wish I could get him on my team and rein him in to focus sufficiently on achieving realized designs and implementations. Or maybe I should wish he will go to the competitor and tie them up in forever research and R&D.

Look I can't worry about this. I need to code and see where the chips fall. Let everyone do what they will.


Edit: here at 53:30 Daniel explains they are licensing their code:

https://beyondbitcoin.org/bitshares-dev-hangout-bytemaster-stealth-confidential-transactions/

They are building a business model around building software for the coin ecosystem. This is very top-down and non-network effects thinking. It is far too slow. We need 1000s of developers working in an ecosystem, not the developers of the coin taking a preferential position and crowding out the rest of the potential competition within the ecosystem.

The developers should not be competing with their own ecosystem!!

I am confident the Larimer brothers will screw up despite being technically competent.

Crypto-land requires a very diverse mix of skills.


Edit#2: I was just shaking my head at the end of the audio interview with Daniel. They are paying their community to listen to his interview! How much more of a top-down micromanager could you possibly be?

I am all about using my brain to make big paradigm shifts and then let the market sweep me away.

Now you all get to observe if there is a significant difference... stay tuned...
newbie
Activity: 28
Merit: 0
September 09, 2015, 06:37:31 PM
#39
r0ach, I think it is impossible that Bitcoin adopts such a radical change in its consensus network, because it changes the security model. If any coin was going to supplant Bitcoin, then it would do so at such an adoption rate that Bitcoin couldn't possibly make the change fast enough (e.g. if decentralized exchanges caught on like wildfire). Just look at the debate over the block size increase and imagine the debate over changing the entire model.

I could see Blockstream potentially doing a side chain, so counter measures may have to be taken but it is also not yet clear if side-chains are going to be trusted and used.

Bitshares has too much investment (vested "stake") in proof-of-stake to shift quickly, and their small market cap would be overrun well before they could act. Besides, I think they believe in their solution. Their coin appears to be more like a cooperative or corporation than a decentralized open source deal. Rather, I could see them making tweaks to their own design to try to emulate any of the best attributes, and I would need to study DPoS in more detail to see where, if anywhere, limitations for them might lie.

Dash wouldn't technically be able to shoehorn in something of this sophistication fast enough, if ever. There is so much deep stuff in this; it isn't going to be something you just implement over the weekend. We'll have a few months' lead going in, and if we continue innovating, they will never catch up. For one thing, the Dash people aren't even going to understand the code, because it (at least the portions I write) is written in Scala.  Tongue It is possible some of it will be written in Java, but I am hoping not. Of course some optimizations of critical paths may be written in C and assembly. And yes, I am fully aware that Java libraries suck (remember the Android BitcoinJ RNG bug) and have many security vulnerabilities, and thus I am for the most part not using them.

For example, the anonymity design I did requires a larger EdDSA curve (e.g. ed41417) than is implemented in any standard crypto library. If that is coded in Scala, then how are these C++ coders going to grab it and translate it quickly? Scala is a 3-month learning process for people who know some Java, and probably longer for those who never did Java programming.
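
For a sense of the scale involved, a sketch assuming "ed41417" refers to Curve41417 (Bernstein-Chuengsatiansup-Lange): an Edwards curve x^2 + y^2 = 1 + 3617x^2y^2 over GF(2^414 - 17). This is only the on-curve check; scalar multiplication and the signature scheme are the real work a library must supply:

Code:
// Curve41417 field and curve equation (an assumption about which curve
// "ed41417" means). Standard libraries mostly stop at 255/448-bit curves.
object Ed41417Sketch extends App {
  val p: BigInt = (BigInt(1) << 414) - 17 // ~414-bit field prime
  val d: BigInt = BigInt(3617)            // Edwards coefficient

  def isOnCurve(x: BigInt, y: BigInt): Boolean = {
    val x2 = (x * x) mod p
    val y2 = (y * y) mod p
    ((x2 + y2) mod p) == ((1 + d * x2 * y2) mod p)
  }

  println(isOnCurve(0, 1)) // true: (0, 1) is the Edwards neutral element
}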

You are always best just joining the leader rather than trying to steal from the leader. You only co-opt the leader if the leader is incompetent, such as what Monero had to do to bitmonero and Bytecoin. If my designs are correct, then I expect other top developers to eventually join in. The best developers like to work with the best developers and on the most interesting and leading efforts. I think it is important to demonstrate that quality early on. I'll be publishing some code soon so we can get the ball rolling on the level of sophistication and quality in this project.

Who is going to trust someone who steals code and designs? The community is going to invest in whoever has proven he can create quality solutions. That is the point of starting this thread: to start showing some code. Of course the best exhibits will be usable software with easy-to-use, clean user interfaces, etc.

Right now one of the big problems in crypto is there is so much that isn't implemented and fully refined. I will certainly be challenged in this regard, being only one person. Getting other developers to join is very important eventually. But still the lead has to really lead and produce a herculean output, otherwise the project doesn't fly.
newbie
Activity: 28
Merit: 0
September 09, 2015, 06:12:07 PM
#38
My suggestion is to lock the thread and bump it when you have substantive updates.

I thought I didn't have the lock topic button, as I tried to find it earlier and didn't see it. Now I see it, very small, at the very bottommost left of the thread. I will lock the thread if another raging (unintentional) thread jack spawns.

Again, I did write in the opening post that I might eventually delete posts and resummarize them into one condensed post or into the opening post, if I have time to do so.

I do understand the extreme curiosity and interest in discussing such an important topic as how to improve upon consensus attributes such as scalability. I tried to entertain that curiosity by revealing some of the basic principles I've used to solve that design issue, without revealing the specific inventions that make such a holistic design conform to those principles. So I did provide some substantive response to FUD and myopia, but it is also true that it will just cause more confusion, because obviously if they can't see the entire solution in detail, their misconceptions about the issues are what prevented them from inventing the solution in the first place. So it is best to just not discuss it at all if we are not going to reveal all the details. This was my intent from the opening post; I tried to appease the curiosity somewhat, but it is just not going to work. We have to shut down the discussion on the consensus design until our implementation is near finished and we are ready to publish the white paper.

Interestingly, bytemaster (Bitshares' Daniel Larimer) discusses the Vitalik white paper I mentioned upthread pertaining to scalability of the transaction rate, at the 33 min point of this audio interview:

https://beyondbitcoin.org/bitshares-dev-hangout-bytemaster-stealth-confidential-transactions/

Notice how at the 35:25 min point Daniel quickly glosses over the issue that network bandwidth and latency are the actual limiting factors, because if you want decentralization then you have to allow for great disparity in ISP connections around the world and also be tolerant of network fragmentation and degradation scenarios. Then he admits at 36:45 min that actual measured maximums are in the 40 - 500 transactions per second range, and the latter figure is only on an idealized testnet, so I take that as not representative of real-world networks in harsh environments such as my fabulous 1 Mbps connection here in the Philippines. Another thing he is glossing over is the issue of who is mining, whether they are full nodes, and if not whether they are relying on pools, and what that does to centralization, hence censorship resistance, etc, etc, etc. Daniel doesn't seem to appreciate that Vitalik's point may be that in order to scale within the context of real-world networks, there needs to be a fundamentally different network design. Daniel seems to think the partitioning is only to divide up the CPU load, but that isn't the real bottleneck (as he admitted for 5 seconds and didn't mention again).
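
To put a number on the bandwidth bottleneck, a rough ceiling in Scala, assuming an average transaction size of 250 bytes (an assumed figure; real sizes vary):

Code:
// A full node's link must carry every transaction at least once, so
// tps <= link_bits_per_sec / (8 * tx_bytes). This ignores relay fan-out,
// block data, and protocol overhead, so it is strictly an upper bound.
object BandwidthCeiling extends App {
  def maxTps(linkBitsPerSec: Double, txBytes: Int): Double =
    linkBitsPerSec / (8.0 * txBytes)

  println(f"1 Mbps link:  ~${maxTps(1e6, 250)}%.0f tps upper bound") // ~500
  println(f"10 Mbps link: ~${maxTps(1e7, 250)}%.0f tps upper bound") // ~5000
}

Note that under these assumptions the top of that quoted 40 - 500 range already sits at the raw ceiling of a 1 Mbps link before any overhead, which is exactly the point about real-world networks.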

This consensus network stuff is very holistically detailed and it can't be well discussed in this format. The white paper is very long for a reason. There are a lot of details that have to be covered. And it needs to be precise.

P.S. Daniel makes a great implied point at the 41:45 min point that decentralized exchanges will require much higher transactions-per-second throughput (certainly higher than Bitcoin can handle at this time while remaining widely decentralized). Daniel claims Bitshares can do it, and to the extent this is a true claim and not just a testnet idealization, I think it might be true only because DPoS (Delegated Proof-of-Stake) isn't really well decentralized, but I need to go study it in more detail before I can make a detailed comparison.
legendary
Activity: 1260
Merit: 1000
September 09, 2015, 06:00:10 PM
#37
Since this is exactly like reading Fuserleer's thread where a few details are given, but not enough to analyze the finer points, let's talk about a different subject.  Who do you think the competitors to absorb this code will be if it actually functions?  

As I mentioned in my thread that you linked to earlier in the post, it seems like Bitcoin might be inflexible to change due to a myriad of reasons.  It sounds like someone at BTC core would have to have the balls to scrap everything, admit they've not been able to come up with anything useful in the last several years, then transfer the entire ledger over to a code base from someone probably universally hated by core devs, Anonymint.  Is this possible?

I imagine Darkcoin would cannibalize anything useful from it extremely fast since Duffield isn't afraid to slap some experimental code on a few million market cap userbase.  Bitshares would probably also easily integrate anything useful.  Fuserleer, hey, that's one sneaky guy.  And we all know John Connor isn't afraid of borrowing a little code.
legendary
Activity: 1498
Merit: 1000
September 09, 2015, 05:09:12 PM
#36
The name has grown on me, ion cash... sounds good  Cool
cashion?

EDIT: i-ON
legendary
Activity: 1050
Merit: 1001
September 09, 2015, 05:06:38 PM
#35
The name has grown on me, ion cash... sounds good  Cool
newbie
Activity: 28
Merit: 0
September 09, 2015, 04:15:11 PM
#34
A point worth considering:  If the delegated set is fixed over a prolonged period of time, then it's not trustless.  If the delegate set changes constantly as the result of some random function, then it may be trustless, depending on how large the source set is.

And those are not the only two possible ways to structure the period of delegation. In fact, the delegation membership paradigm in my design is one of my key epiphanies which makes it Byzantine fault tolerant.

There were "issues" with this approach that I wasn't happy with and found myself running around in circles.

Indeed there were key epiphany inventions I discovered which eliminated those circles. That is why I felt what I revealed today isn't enough for someone to figure out my design, unless they can have the same holistic design epiphanies.



Furthermore, if the selected set of delegates is short-lived, then any disruption caused by selecting a dishonest node in the previous set will be minimal, as transactions that failed but are deemed legitimate can simply be re-presented a short time later against a new set of delegates.

The higher the frequency of delegate election, the worse the service they can provide

True (for various reasons such as ratios relating to propagation, denial-of-service, etc).

And you were correct to follow up with this post which I will delete:

I disagree; so long as there is a reliable method to allow everyone to know who the delegates are at any moment in time, present or past, then selection could happen at 1-second intervals or less.

Network latency prevents everyone from knowing who the elected delegates are once you drop below 10-second intervals.

But...

- at one extreme, the service is completely trust-based and very efficient, and at the other extreme there is no advantage at all to having delegates, but they are completely trustless.

False assumptions galore.

Wait for the white paper. Amazing to me that what I invented seems so obvious in hindsight yet it isn't obvious to you all. Interesting.

All discussion about delegation should stop now. Otherwise I will be forced to delete some posts. We are cluttering this thread with too much detail.

Wait for the white paper then you will all understand.



It reduces the exposure of the system to dishonest nodes because, provided that the function output that determines the set selection is random, these dishonest nodes will never know when they will have an opportunity to be dishonest.

This then requires these nodes to be online constantly on the chance that they do get selected in the next round of delegates.  If the selection set is sufficiently large, and the function output is random (or close to it), then the time between subsequent selections may be quite long, thus acting as a discouragement due to costs.

Furthermore, if the selected set of delegates is short-lived, then any disruption caused by selecting a dishonest node in the previous set will be minimal, as transactions that failed but are deemed legitimate can simply be re-presented a short time later against a new set of delegates.

You can increase resilience further if you delegate the work to ALL selected delegates instead of just one or a few of them.  Then they all perform the same work and you can think about using the output from that as a basis for consensus.

You have to consider Sybils and other things too, but the basic philosophy is as above.

Did I explain clearly? :|  Not sure lol

This is a relativistic, probabilistic way of framing objectivity. It is more difficult to prove and argue than what my design attempts. It is perhaps a useful technique and it is an interesting discussion, but we should move it to another thread. We can probably improve our designs by incorporating more discussion along these lines. But anyway, I am overloaded right now just trying to get what I have already designed implemented. So for me K.I.S.S. is important right now.
legendary
Activity: 1050
Merit: 1016
September 09, 2015, 04:11:22 PM
#33
Furthermore, if the selected set of delegates is short-lived, then any disruption caused by selecting a dishonest node in the previous set will be minimal, as transactions that failed but are deemed legitimate can simply be re-presented a short time later against a new set of delegates.

The higher the frequency of delegate election, the worse the service they can provide - at one extreme, the service is completely trust-based and very efficient, and at the other extreme there is no advantage at all to having delegates, but they are completely trustless.

I disagree; so long as there is a reliable method to allow everyone to know who the delegates are at any moment in time, present or past, then selection could happen at 1-second intervals or less.
legendary
Activity: 1008
Merit: 1007
September 09, 2015, 04:08:13 PM
#32
Furthermore, if the selected set of delegates is short-lived, then any disruption caused by selecting a dishonest node in the previous set will be minimal, as transactions that failed but are deemed legitimate can simply be re-presented a short time later against a new set of delegates.

The higher the frequency of delegate election, the worse the service they can provide - at one extreme, the service is completely trust-based and very efficient, and at the other extreme there is no advantage at all to having delegates, but they are completely trustless.
newbie
Activity: 28
Merit: 0
September 09, 2015, 04:07:53 PM
#31
You have to trust that the person/node will do the task; otherwise there is no point in delegating in the first place. Please see my quoted definition of the word DELEGATED.

You continue to ignore information which has already been stated. I have emphasized upthread that the nodes are fungible, replaceable, and unable to monopolize their function. A node that persistently fails to perform its function will simply have removed itself from the network, giving way to the other nodes which are doing theirs.

I think you ought to just wait until I publish the entire white paper, so the whole thing can make sense to you.

I don't know why you are incapable of reading a dictionary properly.

del·e·gate
noun
/ˈdeləɡət/
1. a person sent or authorized to represent others, in particular an elected representative sent to a conference.
legendary
Activity: 1050
Merit: 1016
September 09, 2015, 04:01:43 PM
#30
A point worth considering:  If the delegated set is fixed over a prolonged period of time, then it's not trustless.  If the delegate set changes constantly as the result of some random function, then it may be trustless, depending on how large the source set is.

Interesting point.

How does it change the issue of trust even if the delegating parties change randomly?

The delegating parties themselves for iteration n are still to be trusted, right?


It reduces the exposure of the system to dishonest nodes because, provided that the function output that determines the set selection is random, these dishonest nodes will never know when they will have an opportunity to be dishonest.

This then requires these nodes to be online constantly on the chance that they do get selected in the next round of delegates.  If the selection set is sufficiently large, and the function output is random (or close to it), then the time between subsequent selections may be quite long, thus acting as a discouragement due to costs.

Furthermore, if the selected set of delegates is short-lived, then any disruption caused by selecting a dishonest node in the previous set will be minimal, as transactions that failed but are deemed legitimate can simply be re-presented a short time later against a new set of delegates.

You can increase resilience further if you delegate the work to ALL selected delegates instead of just one or a few of them.  Then they all perform the same work and you can think about using the output from that as a basis for consensus.

You have to consider Sybils and other things too, but the basic philosophy is as above.

Did I explain clearly? :|  Not sure lol
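
Something like this, as a minimal sketch (the hash-based ranking and all the names are illustrative, and the shared per-round seed is assumed to come from somewhere the candidates can't bias):

Code:
import java.security.MessageDigest

// Each round, rank every candidate by H(roundSeed || nodeId) and take the
// k smallest. Nobody can predict selection before the seed is known, so a
// dishonest node must stay online and may wait a long time for its chance
// if the candidate pool is large.
object DelegateSelection extends App {
  def score(seed: Array[Byte], nodeId: String): BigInt = {
    val md = MessageDigest.getInstance("SHA-256")
    md.update(seed)
    BigInt(1, md.digest(nodeId.getBytes("UTF-8")))
  }

  def selectDelegates(seed: Array[Byte], candidates: Seq[String], k: Int): Seq[String] =
    candidates.sortBy(score(seed, _)).take(k)

  val pool = (1 to 100).map(i => s"node-$i")
  val roundSeed = "round-42".getBytes("UTF-8") // stand-in for shared randomness
  println(selectDelegates(roundSeed, pool, k = 5))
}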
newbie
Activity: 28
Merit: 0
September 09, 2015, 03:53:55 PM
#29
If I delegate to you, but you have no choice in how you execute the delegated task, then I don't have to trust you.

Not exactly. If I delegate to you, and your actions have no effect whatsoever on the system, then I don't need to trust you.

I didn't frame my response systemically. I wrote if "I" then "I". Yes of course for a node in a Byzantine fault tolerant system, the node can't deviate from the objective verifiable truth that is Byzantine fault tolerant. Precisely. Then the system doesn't have to trust that delegate.

Even that is not entirely sufficient. The node has to also be fungible, replaceable, and not able to monopolize its function.

And that node must not be capable of influencing the verifiable truth by collusion or any other means.

Actually it is not correct to state "no effect whatsoever". Even miners in Satoshi's proof-of-work have an effect; it is just that the effect is objective verifiable truth and Byzantine fault tolerant (within the tradeoffs of Satoshi's design).
legendary
Activity: 1050
Merit: 1016
September 09, 2015, 03:52:41 PM
#28
I'm not sure how it can be "delegated" and not require trust in some form or fashion.

You didn't pay close attention to the phrases "verifiable truth" and "they make no discretionary choices of significance, persist the minimum state possible, are fungible with each other" I bolded in the generous excerpt that I quoted from my white paper draft.

If I delegate to you, but you have no choice in how you execute the delegated task, then I don't have to trust you.

I will not tell you the precise inventions of how I do that. Absolutely no hints will be given on that for now. Sorry.

On the surface it sounds a little like our old "hatcher" process, where transactions were delegated to a 3rd party of the client's choice.  No trust was required, as the output from a hatcher would either be correct, the minimum state as you put it, or not.  If the latter, it would be apparent to everyone and that work would be rejected along with the transaction.

There were "issues" with this approach that I wasn't happy with and found myself running around in circles. So I dropped it, quite recently in-fact (past 6 months or so) and replaced it with a network wide solution we have now.
newbie
Activity: 28
Merit: 0
September 09, 2015, 03:41:43 PM
#27
I'm not sure how it can be "delegated" and not require trust in some form or fashion.

You didn't pay close attention to the phrases "verifiable truth" and "they make no discretionary choices of significance, persist the minimum state possible, are fungible with each other" I bolded in the generous excerpt that I quoted from my white paper draft.

If I delegate to you, but you have no choice in how you execute the delegated task, then I don't have to trust you.

I will not tell you the precise inventions of how I do that. Absolutely no hints will be given on that for now. Sorry.



These attacks can be squelched if, for observers and peers, their system function is verifiable truth, they can prove a trusted reputation, or they can expend or risk sufficient resources which exceed the gain from cheating.

Also DASH solved the last problem with an entry barrier of 1k DASH for masternodes.

You refer to the game theory of the bounded entropy of buying influence over a verifiable truth.  Roll Eyes I refer you back to the opening post:

...while retaining proof-of-work as unbounded entropy[1]:

...

[1] My position until I am convinced otherwise is that all non-proof-of-work consensus systems have a bounded entropy (e.g. the total stake and/or any initial seeds used for randomization) and thus their attributes (e.g. decentralization, censorship resistance, DoS resistance, Sybil attack resistance, impartiality) are subject to a game theory which is potentially undiscovered. Whereas, the entropy of proof-of-work is unbounded because it is externally added and the game theory is well defined.

I prefer the objectivity of Satoshi's trustless entropy, especially since I rendered pools and ASICs economically irrelevant in my design.
full member
Activity: 208
Merit: 103
September 09, 2015, 03:19:24 PM
#26
Seems like I'm a late arrival to the party. I'm currently hovering somewhere between a "somewhat" and a "yes" (though can't vote anyway as a newbie). The name initially struck me as being a little bland and corporate, but it's growing on me. Certainly snappy and memorable.

Exciting to see this actually starting to happen after following events for so long.

I'm wondering if the timing of the launch will be connected at all with prevailing prices of BTC, gold, etc., seeing as they are possibly reaching their lows around next spring?

Anyway, good luck!
newbie
Activity: 28
Merit: 0
September 09, 2015, 03:06:24 PM
#25
IMO there is too much focus on 100% anonymity (not just here, but crypto in general) and I don't believe it is needed for a crypto to become mainstream for a number of reasons.

Let's see if everyone is still of that mindset when the governments are potentially expropriating every dime after 2017.

Anonymity is not only about moving money, but about being able to continue to do commerce when the government bans you from doing commerce, i.e. they regulate everything and we all become slaves to Facebook and Google.

As the Moneronistas often point out, it is also about fungibility to prevent coin blacklists, whitelists, and redlists.

However, I am hedging my bets, which is why I have decided to make the anonymity orthogonal and let others implement my anonymity breakthrough. Because maybe it is possible that a non-anonymous Knowledge Age can innovate so fast that the slow-moving gubermint doesn't even understand what is going on, even if it is not anonymous.

So let's do both. But let me work on the non-anonymous parts, so I can be out of harm's way and push the coin in the mainstream channels. I already did the holy grail anonymity whitepaper, so I already made my career exclamation mark there.

But I will agree with you that enabling commerce is the highest priority. Everyone should be using crypto-coin when they use social networking. Bitcoin is scaling way too slowly for this to come to fruition fast enough to keep up with how fast the Knowledge Age is accelerating.

I used to be of the same mindset, but came to a few conclusions that led to the realization that if the anonymity provided is as good as, or slightly better than, what Bitcoin offers, then there isn't any real point in spending large amounts of time chasing 100% anonymity down.  Of course, if a solution should show up right in front of your nose, then take advantage of it by all means, but I think you should consider the following:

1.  There has to be a trade-off at some point between anonymity and some other factor, be it the size of the transactions, the speed with which they can be processed, or functionality limitations later on.

The trade-offs appear so far to have become insignificant in my designs, but let's wait for it all to come together, with all the pedantic implementation issues, before we make final conclusions. You may end up being correct.

2.  TPTB will fight against any crypto that is truly anonymous, preventing it from gaining any significant mass-market adoption with the usual rhetoric about drugs, etc.

That might just make adoption grow faster. Let's not assume we know the state of the world as it wakes up to the reality that the world is in a massive default on $227 trillion of debt and a $quadrillion of derivatives, with socialism defaulting as a dying paradigm, and thus all profit opportunity moving into Knowledge Age work. Humans have a way of suddenly realizing the tide is turning, and then they all move to the other side of the boat at once. That Minsky Moment could be upon us soon.

3.  Nor will any infrastructure you need to build on allow you to use it to get it out there.

Base infrastructure is sorely lacking; for example, Tor and I2P are not anonymous against national security agencies that can see all network packets. I invented a simple solution to this problem a couple of weeks ago. It was a spontaneous discovery while I was helping one of the Nxt developers.

4.  The general public doesn't give 1 single thought to anonymity on a daily basis; most people don't care that governments read their email/SMS/web activities, etc.

Ahem. Did you not see the applause that Rand Paul got on this point at the first Republican convention? It was the only point on which he was resoundingly cheered.

The public is starting to wake up, and we are still in the early innings. When the government starts doing more shit like Spain fining you for taking a photo of a police car parked in a handicapped parking space, the people start to smell a rat.

Anonymity's main purpose in a mass-market targeted crypto, IMO, is to guard against criminals figuring out what you own and who you are; second to that is privacy.

Yes, that too. And thus hiding all three of payer, payee, and values is beneficial, even if you do give your viewkey to the authorities to be KYC compliant. But that assumes the rule of law doesn't entirely break down and the government isn't actually the criminal (which appears to be a potential outcome).

If mass market adoption is your goal, I would seriously consider the above because it WILL come and bite you in the ass if you don't implement it wisely.

The anonymity will be as orthogonal as possible if and when it gets done. People can choose if they want to use it or not.

On the other hand, if you simply want to take over the world's black market, then full steam anon is the way to go  Cheesy

It is another market. Why not go after it, as long as it is orthogonal? I am attempting in my design to make it impossible to control which format users want to use for transactions. Thus the government can't claim we put anonymity in the coin. The users did. They can put any feature they want in the coin. It will be out of our control. But wait to see if I can really achieve this. Some of these finer details I am still working through.
newbie
Activity: 28
Merit: 0
September 09, 2015, 02:36:47 PM
#24
Delegated is not the same as decentralized.

Nor is it necessarily an antithetical concept. Delegated can be decentralized and effectively as trustless as Satoshi's design is (with some differing assumptions that have differing tradeoffs).



I haven't noticed the word trustless anywhere - is this design trustless?

Yes and in some respects more so than Satoshi's design, but there is a change in the security model assumption.

You've actually hit upon the key distinction between my design and Satoshi's design. Almost everything else follows from it. But even if I share this with you, there are many details that you'd still have to invent to get a holistically sound design.

I will excerpt from the rough draft of the white paper and bold the relevant phrases...

Quote
1   Decentralized transaction consensus

A Byzantine fault is defined as a disagreement by system participants on the state of the system. A centralized system is inherently Byzantine fault tolerant because it doesn't disagree with itself. Such faults are only system failures if decentralized agreement on system state is required.

For example, the Byzantine fault of differing opinions of voters is not a failure if democracy by majority rule is acceptable. Systemic failure results from obstruction of voting, undetected ballot box stuffing, or purchased votes. These are respectively denial-of-service (DoS), Sybil, and resource-capture attacks.

Protection against the double spending of an account balance is an example of a contract that depends only on the relative order of spend events, and not on the events' timestamps. The later spend must be discarded; otherwise the first payee would be defrauded.
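
For example, a minimal sketch, assuming the spend events have already been totally ordered (producing that order is the consensus problem itself):

Code:
// Given a total order of spend events, double-spend rejection is a pure
// fold: the first spend of an output wins and any later conflicting spend
// is discarded. Only relative order matters, never timestamps.
object DoubleSpendFilter extends App {
  case class Spend(outputId: String, payee: String)

  def filterValid(ordered: Seq[Spend]): Seq[Spend] =
    ordered.foldLeft((Set.empty[String], Vector.empty[Spend])) {
      case ((spent, accepted), s) =>
        if (spent(s.outputId)) (spent, accepted) // later conflicting spend: discard
        else (spent + s.outputId, accepted :+ s) // first spend of this output: accept
    }._2

  val events = Seq(Spend("utxo-1", "alice"), Spend("utxo-1", "mallory"), Spend("utxo-2", "bob"))
  println(filterValid(events)) // Vector(Spend(utxo-1,alice), Spend(utxo-2,bob))
}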

In a decentralized network of observers of spend events, a majority vote on event order would fail due to DoS, Sybil, and resource-capture attacks on the observers.

  • DoS attack on relayed events over the network.
  • Sybil attack creating unlimited observers or network relay peers.
  • Resource-capture of any resource required to be an observer.

These attacks can be squelched if, for observers and peers, their system function is verifiable truth, they can prove a trusted reputation, or they can expend or risk sufficient resources which exceed the gain from cheating.

2   Non-delegated transaction verification by longest chain of proof-of-work

In Satoshi Nakamoto’s longest chain of proof-of-work decentralized consensus system[Nak09], the participant nodes are distrusted and not capable of independently verifying the event order. They risk proof-of-work resources in exchange for block rewards with low probability of gain from cheating, unless the adversary possesses greater than 50% of the total network proof-of-work resources, or 25 - 33% in the case of selfish mining[ES13].

In addition to this Byzantine failure due to capture of at least 25 - 50% of the system resources, Satoshi Nakamoto’s system has other weaknesses.

  • Reliability of the event order requires a proof-of-work confirmation to prevent a double spend due to the Finney attack[Fin11] or gaming of differing mempool heuristics[Hea15]; yet remains unreliable if there is network fragmentation or with too few confirmations given a significant orphan rate combined with some network propagation opacity.
  • Variance of mining[Ros11] incentivizes a limited number of pools of nodes thus concentrating the control of system resources.
  • The scaling tradeoff that all nodes incur the bandwidth and processing load to relay and verify all system-wide transactions, otherwise they delegate to pools which concentrate the control of system resources.
  • Satoshi’s protocol doesn’t verify if pools implement getblocktemplate to enable a node to insert transactions into its winning block to mitigate the pools’ power to censor transactions.
  • Satoshi’s design depends on concentrated control and monolithic network coherence—which sacrifices censorship resistance and network fault tolerance—in order to provide scalability and reliable instant transactions.

Concentrated control accruing naturally or via a resource-capture attack could alter the protocol—such as increasing the money supply of the cryptocurrency (even gifting the debasement to any publicly acceptable entity such as government[1]) or only validating events which accompany some KYC (know your customer) proof of identification. Some argue that a political outcry would move away from such an attack on the protocol, but in reality the preoccupied masses tend to continue to use the clients they are told to use and are accustomed to, such as the popular web clients Coinbase, Blockinfo, etcetera. The masses didn't abandon the dollar in spite of its malevolent holistic effects. Most people would not agree the dollar is malevolent.

The security model of the longest chain of proof-of-work in Satoshi’s system is ultimately founded on the principle that each participant minion will act unselfishly in the short-term to protect the long-term collective benefits of trustless consensus— yet this has proven not to be the case throughout all recorded human history when the individual selfish incentives are great enough.

[1] Some influential people such as Martin Armstrong have simultaneously outlined an expectation of a one-world currency monetary reset solution to the current profligate nation-states' sovereign debt crisis, and called for (a world?) government to tax the money supply instead of income taxes[Arm08].

3   Delegated transaction verification by longest chain of proof-of-work

To overcome the weaknesses in Satoshi’s design, we delegate the verification of event order to _________ nodes which provably record their system functions in the longest chain of proof-of-work.

Delegating to these _________ nodes is harmless because these nodes obey the fundamental end-to-end principle of networks, in the sense that they make no discretionary choices of significance, persist the minimum state possible, are fungible with each other, and can't be monopolized. _________ nodes do set the transaction fee they charge, which is based on transaction data size and not value, since transaction values may be concealed.

...

Quote
3.10   Security model

The security model assumption of Satoshi's design is that every miner—whether alone or in collusion with other miners—has a greater opportunity cost when mining on an incorrect or shorter block chain. Additionally, unlike other reputation-based or proof-of-stake schemes, it is implausible to game the order of block solutions, because the source of the entropy for proof-of-work is external, thus an open instead of a closed thermodynamic system. Also each miner can verify the entire history of the block chain, including every transaction.

The security model weaknesses of Satoshi's design detailed in the prior sections derive from the generative essence that miners can't autonomously determine the correctness of the longest chain w.r.t. double spends, censorship resistance, mining centralization, collusion, and selfish mining.

Our security model retains the assumption of greater opportunity cost when mining on an incorrect or shorter correct block chain, while adding objectivity to the correctness of double spends and censorship resistance. The model is founded on the principle that published data can't be unpublished and that nodes listening for compliance will need to do so for other reasons, such as _________ nodes maximizing the efficiency with which payers can send non-instant transactions...

Although block chain miners don't verify for themselves every transaction in the chain of hashes for each hash stored on the block chain, the nodes listening and checking for compliance ensure that the block chain is verified against the distributed and independent _________ nodes.

Any entity that downloads an archive of all the historic distributed data could perform a full verification to attain the same security as a full node that downloads the entire block chain in Satoshi's design. The distinction from the security model in Satoshi's design is that if the compliance-checking nodes fail to report cheating, the cheating will become nearly implausible to revert. This is why the incentives offered to compliance-checking nodes are lucrative and compliance-checking node membership is permissionless. Compare the odds that no party will avail itself of the profit incentive to the security assumptions in Satoshi's design:

  • Network fragmentation never occurs.
  • The masses will politically take time from their preoccupied lives en masse to fork away from an insidiously (perhaps undetectably so) malevolent 25 - 51% attack, e.g. one that requires KYC or adds some debasement to fund social welfare in a world government or collaboration of regional governments such as EU, G7, G20, Asian Union, and Mercosur.
  • Pools with different names, IP addresses, and servers aren’t controlled by the same entity, i.e. Sybil attacked.
  • Scaling transaction volume and zero confirmations by centralization is secure.

Choose your poison. I think the security assumptions in my design are much more robust, because decentralized, permissionless opportunity is inherently more reliable than centralized control. I would guesstimate the odds of a serious problem arising in my security model to be on the order of an asteroid striking your house, while the odds of the Bitcoin algorithm failing (to remain decentralized and permissionless, not just another fiat) due to any of the items listed above are very palpable.