
Topic: Decrits: The 99%+ attack-proof coin - page 16.

hero member
Activity: 518
Merit: 521
May 18, 2013, 07:48:36 PM
I will be offline for up to 2 days. I will read and reply later, but good to see we are moving towards understanding each other on this one facet.

Btw, you've apparently solved the problem I had with my Proof-of-harddisk needing some external (random) entropy. You proposed a function that salts the keys the attacker can choose before the fact with the randomness of joins/leaves, which they can't control after the fact. Kudos!
hero member
Activity: 798
Merit: 1000
May 18, 2013, 02:39:27 PM
Actually, rethinking option 1), there is a vulnerability. It only becomes relevant when an attacker owns >50% of the consensus in a network that has many multiples of 87.6k SHs.

If a period of the network is overwhelmed by evil SHs, there can be an increasingly long delay before the honest network can put together a legitimate fork. Unless we want CNPs to be able to fracture the network (which is probably a bad idea), the most they can do is delay suspicious looking TBs.

For example, suppose TB 400 belongs to an honest SH sandwiched between 100 evil SHs on each side. If the SH at 300 did something bad, like not acknowledging a TB or signature that he should have, CNPs can only delay his block, but the evil SHs don't care because they will add it anyway, and they will presumably control some portion of the CNPs. Ergo, many people could see an (evil) network functioning for an extended period without knowing that good people are being dropped.

Essentially, if you have 87.6k*5 SHs, and evilcorp owns half or whatever, evilcorp can keep the honest half from forking for many weeks. Yes, it's still crazy unlikely, but after so much time has passed, how do we prove that they are rejecting honest SHs? Only from honest CNPs, but it is essentially only their word because they do not have two chains to prove an honest and a dishonest half (or a dishonest 80% and an honest 20%). Honest SHs may not keep up with the network and must rely on the only chain they can see.

If the 1% that is honest (if we move to 99/1) is spread thin, they have no power and may not even be aware of what evilcorp is doing. The good guys are left hanging with nobody to back them up. This *could* be addressed by some kind of emergency consensus of honest SHs, but detecting this is tricky, and there is no guarantee that a significant enough portion of honest SHs will be watching. And creating a split from that will be very difficult to do in any kind of organized way. In the 99/1 case, they are still in control of 1% of the 10 CD consensus period. And if 99% does something bad, 1% can chug right alongside it showing what happened and why they split.

I think this is why a reasonable time to guaranteed 100% consensus is important. Even though these situations are crazy unrealistic, there may still need to be considerations regarding legitimate network splits (government, internet catastrophe, who knows), or even accidental splits (a software bug; bitcoin has already shown us we need to be aware of this).
hero member
Activity: 798
Merit: 1000
May 18, 2013, 07:26:02 AM
And add any missing transactions. Otherwise there is no point and it doesn't solve the delay attack. And thus we need to communicate the added transactions within the 10-second window. This is another TB, as there is no distinction from what a TB does.

Yes, something about your thought process distracted me. Wink I thought about it a little more after my reply and got what you were saying.

So you are right: in the case of 5 million SHs, those 5.7 SH/s would be including a lot of the same 3-5 byte hashes for missed transactions. It isn't network-breaking, but it is an inefficiency that gets worse as the network gets very large. There are two solutions I see:

1) What you have already suggested: that the entire consensus is not required each CB, as long as we are certain the order can't be gamed.
or
2) Since everyone knows who the "backup" SHs are for each TB, each of those runs a remainder/modulo/hamming-distance check on each tx to determine which of the backups will include the missed txes, divided among that group (see the sketch below). Even if the TB is entirely missed, all the txes will be divided up evenly with no duplication. We could even design the system so that the backup SHs automatically do this anyway, effectively adding a maximum of 4 bytes per tx, and have them create backup TBs at the same time as the primary TB without even waiting to acknowledge it.
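A minimal sketch of the deterministic split in option 2), with hypothetical names (assign_missed_txes) and SHA-256 standing in for whichever remainder/modulo/hamming-distance rule is actually chosen:

```python
import hashlib

def assign_missed_txes(missed_txes, backup_shs):
    """Deterministically split missed transactions among the backup SHs for
    a TB, so each backup includes a disjoint subset (no duplication).

    missed_txes: list of raw transaction bytes
    backup_shs:  ordered list of backup SH identifiers for this TB
    """
    n = len(backup_shs)
    assignment = {sh: [] for sh in backup_shs}
    for tx in missed_txes:
        digest = hashlib.sha256(tx).digest()
        index = int.from_bytes(digest[:4], "big") % n  # picks exactly one backup
        assignment[backup_shs[index]].append(tx)
    return assignment
```

Because every backup runs the same rule over the same data, each missed tx lands in exactly one backup TB, whether or not the primary TB ever shows up.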

Both options seem very reasonable. Leaving people out of consensus scares me a little bit though, even if the only vulnerability is EvilCorp controlling a vast majority of the SHs during that CB (87.6k SHs). If they do, then unless we assume they control 80-90%, they will have almost no power in consecutive blocks.

If we took option 1, do we reshuffle the order every CB (or 10 CD period), or do we just effectively extend the CB process? Do we "commit" everything every 10 CDs to the approved consensus, or do we hold everything until all SHs have signed, however long that takes? I can't think of any vulnerabilities from either scenario, but option 2) above limits the data load to the absolute minimum. I mean, if we assume 1 SH for every 1,000 people on earth (and base the incentive system around this), that is 7 million SHs, only slightly above the 1kB/s figure already stated for 5 million. Totally irrelevant in a network that is only the size of Visa (of course we expect it to grow much bigger).
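A quick back-of-the-envelope check of that scaling claim, assuming the 150-byte signature size and the 876,600-second consensus period used elsewhere in this thread:

```python
SECONDS_PER_CB = 876_600  # one full consensus rotation (the ~10 CD period)
SIG_BYTES = 150           # assumed size of one SH signature record

for shs in (5_000_000, 7_000_000):
    sigs_per_sec = shs / SECONDS_PER_CB
    print(f"{shs:,} SHs -> {sigs_per_sec:.1f} sigs/s -> "
          f"{sigs_per_sec * SIG_BYTES:,.0f} B/s")
# 5,000,000 SHs -> 5.7 sigs/s -> 856 B/s    (the ~1 kB/s figure)
# 7,000,000 SHs -> 8.0 sigs/s -> 1,198 B/s  (only slightly above it)
```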
hero member
Activity: 518
Merit: 521
May 18, 2013, 01:03:04 AM
It seems the key insight was the randomization of the SH order by using the random order in which SHs join and leave. This in theory enables us to select only a portion of the SHs to sign in any finite period without the risk of a delay attack.

This means some of us can aim for a design that lets anyone process transactions, and this is good, so that we can better anonymize the injection point of transactions (my peer can inject my transactions using a MUTE concept).

http://mute-net.sourceforge.net/experiments.shtml

We still need to require SHs to put up a resource, to limit the number of SHs so there isn't spamming, but we can be less concerned about the ability to get consecutive SH signings without control of 100% of the SHs.

As for the required resource, you propose money. The other option is to have SHs prove some work: either Satoshi-style, my harddisk work, or proof-of-burn. We should at some point evaluate the tradeoffs of each.
hero member
Activity: 518
Merit: 521
May 18, 2013, 12:43:10 AM
The more robust fan-out will duplicate the transmissions.

Will there need to be 1 copy for each node that wants a copy? Yes. This is not a weakness or inefficiency, this is decentralization.

I am stating it is more robust (decentralized). We agree.

I was also erroneously implying that if we send data over hops (a multifurcating tree) rather than directly (a star network), there is additional duplication of the quantity of data transmitted. I was wrong to make that implication. The same amount of data is sent. And a multifurcating tree is more energy efficient, analogous to why we don't run a separate pipe from the water pumping station to every house. However, latency typically suffers with the hops, and that is relevant, which is what I probably meant to imply.

This does not negate the points in the previous reply at all.
hero member
Activity: 518
Merit: 521
May 18, 2013, 12:33:26 AM
So if the Bitcoin network takes 2-3 seconds to propagate a TB, then we must limit the number of SHs signing a TB per CB so that we don't need to communicate TBs more frequently than, say, every 10 seconds or so.

You are teetering away again. When more than one SH is selected for 1 TB, which will always be every 10 seconds, the SH without priority will acknowledge someone else's block.

And add any missing transactions. Otherwise there is no point and it doesn't solve the delay attack. And thus we need to communicate the added transactions within the 10-second window. This is another TB, as there is no distinction from what a TB does.


Quote
So this is why the randomization of the order is crucial: otherwise we would need to have all the SHs sign within the maximum time we would allow for a delay in a 100% delay attack (or adjust this for whatever percentage of attack we think is realistic). Without randomization, the asymptotic analysis is that an infinite number of SHs would require us to propagate TBs in an infinitesimally small time.

Instead of misinterpreting this, I'm just going to say I don't understand what you're saying. I think I know what you're saying, but it's probably better to just drop this for now and focus on learning more about other things instead.

See my point above first.

Then I am saying that with randomization of the order in which SHs sign, we don't need every SH to have a turn within every CB (or whatever period we decide is the maximum we want to allow for a potential delay), because the probability of controlling many consecutive signing SHs is much reduced.

With randomization, the problem of potential communication overload is fixed, because we don't need to have every SH sign within that finite period.
hero member
Activity: 798
Merit: 1000
May 18, 2013, 12:25:03 AM
In the most optimized scenario for minimizing hops (not optimal for robustness),

As I said, it is only a recommendation so that the network will not be ad-hoc. Robustness is up to the individual CNPs and how far they want to reach.

Quote
The more robust fan-out will duplicate the transmissions.

Will there need to be 1 copy for each node that wants a copy? Yes. This is not a weakness or inefficiency, this is decentralization.

Quote
So if the Bitcoin network takes 2-3 seconds to propagate a TB, then we must limit the number of SHs signing a TB per CB so that we don't need to communicate TBs more frequently than, say, every 10 seconds or so.

You are teetering away again. When more than one SH is selected for 1 TB, which will always be every 10 seconds, the SH without priority will acknowledge someone else's block.

Quote
So this is why the randomization of the order is crucial: otherwise we would need to have all the SHs sign within the maximum time we would allow for a delay in a 100% delay attack (or adjust this for whatever percentage of attack we think is realistic). Without randomization, the asymptotic analysis is that an infinite number of SHs would require us to propagate TBs in an infinitesimally small time.

Instead of misinterpreting this, I'm just going to say I don't understand what you're saying. I think I know what you're saying, but it's probably better to just drop this for now and focus on learning more about other things instead.
hero member
Activity: 518
Merit: 521
May 17, 2013, 11:59:55 PM
This data must then be sent to the next SH that is expected to sign. But since there is a possibility that this next SH will not receive the information and/or will refuse to act on it, the data must also be sent (or somehow otherwise made available) to every SH expected to sign in the future. This is where my confusion lies in trying to compute the communication requirements.

It is a distributed, p2p network. All peers that want to see all data will see all data. But because of section 2, the distribution network is anything but ad-hoc; it is rather organized. Some easy simulations can be run to determine the maximum number of hops for a given transaction to reach each CNP (to which SHs, CNCs, and SPs are connected). In fact, similarly to the order of consensus, a rolling CNP order should be maintained as well. This rolling order is a recommendation for which other CNPs each CNP should connect to, so that there aren't nodes that get too many requests or nodes that get too few. It also means that transactions will always take the least number of hops to reach wide distribution. And because the transmission protocol will be designed to be excessively efficient, it will be bandwidth-cheap to maintain.

Besides the massive efficiency gains in relation to time and bandwidth, it is roughly the same type of network as bitcoin: a sort of distributed copy of an ongoing shared file. I don't believe it is at all common to see a bitcoin transaction take longer than 2 or 3 seconds to go from one side of the network to the other, and that is with a completely ad-hoc network.

In the most optimized scenario for minimizing hops (not optimal for robustness), the originating SH sends the data to every other SH. So we can compute the minimum communication load. The more robust fan-out will duplicate the transmissions.

So if the Bitcoin network takes 2-3 seconds to propagate a TB, then we must limit the number of SHs signing a TB per CB so that we don't need to communicate TBs more frequently than, say, every 10 seconds or so.

So this is why the randomization of the order is crucial: otherwise we would need to have all the SHs sign within the maximum time we would allow for a delay in a 100% delay attack (or adjust this for whatever percentage of attack we think is realistic). Without randomization, the asymptotic analysis is that an infinite number of SHs would require us to propagate TBs in an infinitesimally small time.

If the randomization of the order of SH signing (modulating the self-chosen key) by the join/leave entropy works, then we no longer need a limit on the number of SHs.

Did I oversimplify?
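A minimal sketch of the modulation just described, with hypothetical names; each SH's self-chosen key is salted with entropy drawn from the join/leave history, which is only fixed after the keys have been chosen:

```python
import hashlib

def signing_order(sh_keys, join_leave_entropy):
    """Order SHs for the next period by salting each self-chosen key (bytes)
    with join/leave entropy (bytes) that the attacker cannot control after
    choosing keys, then sorting by the salted digest."""
    def slot(key):
        return hashlib.sha256(key + join_leave_entropy).digest()
    return sorted(sh_keys, key=slot)
```

An attacker can grind keys before joining, but the salt that decides the final order does not exist yet, so consecutive signing slots cannot be pre-arranged.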
hero member
Activity: 798
Merit: 1000
May 17, 2013, 07:56:56 PM
This data must then be sent to the next SH that is expected to sign. But since there is a possibility that this next SH will not receive the information and/or will refuse to act on it, the data must also be sent (or somehow otherwise made available) to every SH expected to sign in the future. This is where my confusion lies in trying to compute the communication requirements.

It is a distributed, p2p network. All peers that want to see all data will see all data. But because of section 2, the distribution network is anything but ad-hoc; it is rather organized. Some easy simulations can be run to determine the maximum number of hops for a given transaction to reach each CNP (to which SHs, CNCs, and SPs are connected). In fact, similarly to the order of consensus, a rolling CNP order should be maintained as well. This rolling order is a recommendation for which other CNPs each CNP should connect to, so that there aren't nodes that get too many requests or nodes that get too few. It also means that transactions will always take the least number of hops to reach wide distribution. And because the transmission protocol will be designed to be excessively efficient, it will be bandwidth-cheap to maintain.

Besides the massive efficiency gains in relation to time and bandwidth, it is roughly the same type of network as bitcoin: a sort of distributed copy of an ongoing shared file. I don't believe it is at all common to see a bitcoin transaction take longer than 2 or 3 seconds to go from one side of the network to the other, and that is with a completely ad-hoc network.
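One of the "easy simulations" mentioned above can be as small as the following sketch (hypothetical names, idealized relay tree): if each CNP relays to a handful of recommended peers, the worst-case hop count grows only logarithmically with the number of CNPs.

```python
def max_hops(num_cnps, fan_out):
    """Worst-case hops to reach every CNP, assuming an idealized tree in
    which each CNP relays to `fan_out` peers that have not yet seen the data."""
    hops, frontier, reached = 0, 1, 1
    while reached < num_cnps:
        frontier *= fan_out
        reached += frontier
        hops += 1
    return hops

# max_hops(100_000, 8) == 6: a hundred thousand CNPs reached within six hops.
```

A real simulation would add per-hop latency and dropped peers, but the logarithmic growth is what makes a 2-3 second propagation figure plausible.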

Quote
Remember I was pointing out that either there could be duplicated transmissions of communications, or else each TB needs to propagate to all SHs before the next TB can be created and sent. Etlase has not explained a resolution to this as far as I have understood thus far.

The next TB in line does not have to acknowledge the TB directly before it (though an honest one always will if it has seen it). However, I do believe that the vast majority of the time this will be the case.

Now, to explain how a merchant can use the information it receives to determine the risk of accepting a face-to-face transaction: let's reuse the 10-TB window figure for the longest amount of time before a missed TB can be inserted into the chain. If SH X misses his TB Y, then at TB Y+10 his TB can no longer be accepted into the chain and he will receive a soft strike.

So with this knowledge, if TBs 1-10 are all acknowledged by TB 11* (where the transaction the merchant is interested in lies), he knows for certain that the balance of the sending account is accurate and up to date in the ongoing consensus. (This presumes that the general order of the network is stable; e.g., 20% of the consensus isn't missing.) The merchant doesn't even need to have a copy of the shared consensus; he only needs a CNP to prove it with hash trees (or, for decentralized simplicity, he can invest in the network as a SH and/or CNP and maintain this data as part of doing business).

* - acknowledged by TB 11 does not mean that they are all repeated, or that all their hashes are repeated. If TB 10 acks TB 9, and TB 11 acks TB 10, TB 11 implicitly acks TB 9. TB 11 couldn't receive TB 10 without having seen TB 9, because TB 9 had to exist for TB 10 to be transferred around, and so on. No one (in the CNP) can honestly transmit a block that acks another without being able to provide the data of the other block. This, quite literally, means they must be able to provide data in the TB chain *at least* back to the last CB. If a CNP doesn't have the evidence and it is honest, it will refuse to transmit the block until it does have the evidence. A SH can never be fooled into acknowledging something it doesn't have. The network simply will not transmit this data. I *think* this is what you wanted addressed by still being concerned about data overload.

Now if a TB is missing* in the prior 10, it is possible that the account balance for the customer is not accurate. If TB 12 comes along and acknowledges this missing TB, and the merchant's transaction would take the customer's account below 0, the transaction will be denied. If this transaction is for a couple of decrits, it is probably hardly worth worrying about. If the transaction is for 50 decrits, the merchant may want to wait an additional 10 or 20 seconds to reduce or eliminate the possibility of TB insertion.

* - unacknowledged
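A minimal sketch of the merchant-side heuristic just described, with hypothetical names and a hypothetical small-amount threshold; the only inputs are the acknowledgement status of the TBs in the insertion window and the size of the payment:

```python
WINDOW = 10          # TBs after which a missed TB can no longer be inserted
SMALL_AMOUNT = 5.0   # hypothetical "couple of decrits" threshold

def accept_now(window_tbs_acked, amount):
    """Decide whether to accept a face-to-face payment immediately.

    window_tbs_acked: acknowledged-or-not flags for the WINDOW TBs
                      preceding the one carrying the payment
    amount:           payment size in decrits
    """
    if all(window_tbs_acked):
        # Every TB in the window is acknowledged, so no late TB can still be
        # inserted that changes the sender's balance: accept immediately.
        return True
    # A missing TB could still be inserted and overdraw the account.
    # Negligible for small amounts; otherwise wait 10-20 seconds for the
    # window to close before handing over the goods.
    return amount <= SMALL_AMOUNT
```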

But realistically... this "attack" really requires the person to be specifically involved with that SH, or to be that SH, because otherwise the next TBs in line will have picked up any distributed txes changing that account's balance. Now at first this sounds like a great attack vector for every SH to hold back a transaction and get away with a free purchase each time his TB arrives, but one must remember that this vulnerability is only relevant for face-to-face transactions. If the merchant is even slightly suspicious, your chances of succeeding are drastically reduced.

A quick side note: this is the caveat that comes with using a larger number before TBs can't join the chain. The longer it is, the more people can potentially do this in a specific time window, and over a longer period of time. So picking this number requires some deep discussion. But maybe not that deep, and it is something that will also benefit from knowledge gained in live network testing. A way to significantly reduce this vulnerability is to have CNP heuristics delay out-of-order TBs that contain transactions that would cause a double/bad spend. Dick around, and there's a good chance you'll get a soft strike.

Anyways, this opportunity requires the SH owner to miss his TB, pay, send his TB and have it accepted, and leave the merchant's presence within about 100 seconds at one specific time within a 10-day period. Owning 10 SHs in a row does not practically increase this vulnerability because you still have to acknowledge the missing TB or receive a soft strike. Additionally, the double-spend attempt is public and a specific SH is tied to it whether or not he has any actual relation to it (the odds of this happening by chance/accident are astronomically low), so if you do get away with attempting it once, do not expect smart clients to advise this as a "low-risk" situation the next time.

Unfortunately we can't award a soft strike here because though the chance/accident rate is super low, the attack rate could be much higher if an attacker was able to isolate a SH.

Quote
As a separate issue, there would need to be some penalty for SHs who send duplicate information or bogus transactions to themselves in order to overload the communication.

These issues, if they become relevant, can be addressed by CNP heuristic measures. I don't have the exact answers for this. But if enough CNPs agree to delay propagation of TBs they determine are trolling or whatever, the SHs in question will start receiving soft strikes. Coordinated defenses like this are part of the *cough* intent of the design of section 2, but I am fleshing it out more as the discussion comes up, so that's a good thing. As for attempting to DDoS the network, the idea in my notes is to have a maximum number of transactions per block equal to (a network minimum based on scaling up initially to PayPal-like levels) or (the avg. # of txes per block over the last rolling CY) times 3 or 4.
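As a small sketch of that cap (hypothetical names; the floor and the 3-4x multiplier are the values suggested above, and reading the "or" as whichever is larger is an assumption):

```python
def max_txes_per_block(avg_txes_last_cy, network_minimum, multiplier=4):
    """Per-TB transaction cap: a scaling floor, or a multiple of the average
    observed over the last rolling consensus year (CY), whichever is larger."""
    return max(network_minimum, avg_txes_last_cy * multiplier)
```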
hero member
Activity: 518
Merit: 521
May 17, 2013, 04:14:55 PM
For example (and there may be many more), the cartel can increase the # of SH they have until the profitability is below 0, as I stated.

But this would require a significant chunk of all the currency in existence*, and it still does not do anything to the Cloudnet side of the equation. It would require matching the entire honest SHs' deposits to cut their profitability in half

You mean to cut gross revenue in half. Profits are hard to estimate, because for example we don't even know what each SH's cost of capital is.

Profits might be razor thin already, if there are already many SH.
hero member
Activity: 518
Merit: 521
May 17, 2013, 10:38:27 AM
Thank you. I really do want to help. I hope there is some design that everyone can agree upon. It isn't easy, but it is worth trying. Hang in there.
hero member
Activity: 798
Merit: 1000
May 17, 2013, 09:38:57 AM
You were asked upthread by another poster to stop, yet you continue to insert insults in nearly every post. This will not reflect well to others wanting to work with you and help you. You risk causing me to form the opinion that you are incapable of politely working with others.

I'm sorry, I was exasperated and exhausted.

Quote
For example (and there may be many more), the cartel can increase the # of SH they have until the profitability is below 0, as I stated.

But this would require a significant chunk of all the currency in existence*, and it still does not do anything to the Cloudnet side of the equation. It would require matching the entire honest SHs' deposits to cut their profitability in half (100% more for a 50% attack). In the meantime, the opportunity cost of doing this is huge as well. By locking up this money, EvilCorp loses all ability to use that money to do anything else beneficial. Power over transaction activity increases; power for anything else decreases.

I'm sure you're aware of how important cash flow is for any entity. This puts a distinctive stop to that flow.

* - hand-wavy only because we can't predict how much on-network tx activity will exist, and 0.01% tx fee may be too low. I don't know, I haven't run a slew of numbers on various scenarios because it's something I'd rather let people who enjoy that sort of thing do.

Quote
Also it is very poor engineering to design a system that is totally reliant on all its parts. A fault tolerant system can have parts of the system fail and still not entirely fail.

The reliance is on other parts to tolerate those problems. You are dismissing the very ideas that will accomplish fault tolerance as poor design. If you try to rely on tolerance within each system, then that system can be gamed by anyone who has a reasonable control over the majority of the system, as you have been going on and on about.

Quote
Yes I am totally ignoring section 4 and I doubt I would ever agree to it. Anyone who wants to fork can go create another altcoin.

So you would rather deny the idea on principle than accept that no currency can be perfect? It's not just about forking either, that is just one possibility.

Quote
I don't agree with voting to fork a constitution, because voting can be gamed by the power of debt and fiat.

Well debt-based money and fiat money need not exist after decrits. Wink It doesn't matter if voting can be gamed as long as people still have the opportunity to retain value in forking the network, making forking much less painful, making any attempt at controlling the network *much more* painful.

Quote
A politician always promises we can fix it later.

Why fix it later, when we can fix it at the start?

I have tried to address every conceivable thing I could think of, but that still won't be anywhere close to everything. I can't predict the future. I've even considered the rise of quantum computing and I knew that an adaptable protocol for new signature schemes and new hashing schemes must be in from the start so that any transition is as easy as it can be.

Bitcoin proponents claim "we can always fix it" but they never grasp the true difficulty of changing a decentralized system. It is easy with bitcoin, at least for now, as it is highly centralized. But in a system that promotes all sorts of decentralization, and if it is ubiquitous, this will be far more difficult.

Quote
You said SH price is fixed so the higher price must be coming from debasing the system every time someone transacts or ...? And because we can't differentiate transactions from exchanges.

It is temporary deflation. The monetary system has a lot of brakes to prevent endless money printing. EvilCorp will not be able to subvert this, and will pay the price in fiat.
hero member
Activity: 518
Merit: 521
May 17, 2013, 05:07:49 AM
I will attempt to summarize what I understand so far about the proposed transaction processing for Decrits.

The transaction-processing peers (SHs) will be expected to create and sign a new transaction block (TB) containing added transactions and a new top-level hash for the consensus block's hash-tree ledger.

This data must then be sent to the next SH that is expected to sign. But since there is a possibility that this next SH will not receive the information and/or will refuse to act on it, the data must also be sent (or somehow otherwise made available) to every SH expected to sign in the future. This is where my confusion lies in trying to compute the communication requirements.

Remember I was pointing out that either there could be duplicated transmissions of communications, or else each TB needs to propagate to all SHs before the next TB can be created and sent. Etlase has not explained a resolution to this as far as I have understood thus far.

Until he explains this, I can not compute the communication requirements. And this is why I am not trusting his estimates yet for that computation.

As a separate issue, there would need to be some penalty for SHs who send duplicate information or bogus transactions to themselves in order to overload the communication. But I don't know how to detect the latter, except that Etlase proposes only 50% of tx fees go to the SH (so 50% demurrage), so that they are draining their money by doing this. Again I don't know if draining money is an adequate defense, given that if a cartel can delay the transactions of others, they could force spenders (or the retailers, thus hiding the fee in product prices) to pay them an extra fee to get faster transactions.
hero member
Activity: 518
Merit: 521
May 17, 2013, 04:16:05 AM
It may seem like useless arguing if you don't understand; he and I do understand what we are talking about. And we've made a lot of progress on threshing out the issues. One or both of us will need to write cogent explanations of the conclusions, once we reach them.

I need to go do some calculations.

However, I do agree with you that I don't understand the importance of the non-SH peers. So that may cause my calculations to be incorrect, if I am not correctly factoring in the network topology and protocol.
I'm not saying it is useless, I am saying it's much less useful than writing implementation details in pseudocode. That would force Etlase to lay out precisely how the system should work and would allow more people to understand it and discuss it.

Perhaps so, and I am not discouraging him from doing it, but consider also that it could be very low-level and verbose, and perhaps not high-level enough on some key concepts that elucidate the viability of the design.

Agreed that more people need to understand it, so they can jump into this discussion. I guess my next input here is to make a summary of where I think we are, along with some computations. First I will see if he wants to give more feedback on anything I've written. He may be taking a little break to recharge his batteries.

I can't summarize his minting system and the voting system for future protocol modification (sections 3 and 4) because I don't understand the former and probably don't want the latter. The system must work without the voting system for protocol modification. That should be an optional proposal.
sr. member
Activity: 359
Merit: 250
May 17, 2013, 03:39:41 AM
It may seem like useless arguing if you don't understand; he and I do understand what we are talking about. And we've made a lot of progress on threshing out the issues. One or both of us will need to write cogent explanations of the conclusions, once we reach them.

I need to go do some calculations.

However, I do agree with you that I don't understand the importance of the non-SH peers. So that may cause my calculations to be incorrect, if I am not correctly factoring in the network topology and protocol.
I'm not saying it is useless, I am saying it's much less useful than writing implementation details in pseudocode. That would force Etlase to lay out precisely how the system should work and would allow more people to understand it and discuss it.
hero member
Activity: 518
Merit: 521
May 17, 2013, 02:39:35 AM
Etlase2, why waste so much time arguing? You could use this time to make a post describing, point by point, what each participant (SH, CNC, CNB, average user making a transaction, etc.) does or can do from the moment of starting the program to closing it.
That would really move the discussion to a higher level.

It may seem like useless arguing if you don't understand; he and I do understand what we are talking about. And we've made a lot of progress on threshing out the issues. One or both of us will need to write cogent explanations of the conclusions, once we reach them.

I need to go do some calculations.

However, I do agree with you that I don't understand the importance of the non-SH peers. So that may cause my calculations to be incorrect, if I am not correctly factoring in the network topology and protocol.
hero member
Activity: 518
Merit: 521
May 17, 2013, 02:19:06 AM
Disagree, because once an attacker bids up the price of a SH beyond profitability, no one else will buy.

Do you propose to set a price for a SH that never changes? I think this would cause another set of problems.

How am I supposed to argue with these notions? How can you possibly draw any valid conclusions when the idea you have formulated is something completely other than what the OP states?

You were asked upthread by another poster to stop, yet you continue to insert insults in nearly every post. This will not reflect well to others wanting to work with you and help you. You risk causing me to form the opinion that you are incapable of politely working with others.

Quote
Consensus is determined by a group of peers called Shareholders (SH). Anyone may purchase shares in the network with Decrits using a special transaction and become a SH. The price of each share will be a meaningful amount (intended to be in the range of 3,000-5,000 USD), and this money is locked for a period of at least 1 Consensus Year

There is no bidding described here.

As I wrote in the post to which you are replying, I think a fixed price will cause other problems.

For example (and there may be many more), the cartel can increase the # of SH they have until the profitability is below 0, as I stated.

If you assume the cartel won't be processing many transactions, then their cartel is not working. The point is that once they can delay transactions by even a few minutes with even less than 50% share of the number of SH (assuming we didn't have randomization but maybe we do), then business starts going to their members and the thing snowballs towards them.

As I said, randomization is a significant factor that improves this, but not totally. That is another specific analysis that needs to be fleshed out.

No cartel is inherent in this design. And yes, the design is intended to have a fixed cost to purchase a share. Want a share? Buy a share. Perhaps "shareholder" has a poor connotation, but I admitted I am terrible with terminology. With this settled, maybe we can start discussing something closer to a vulnerability in the actual design? I'm sure you have a whole slew of things you can think up for why a fixed price is a bad idea, but I can make cogent arguments since you are using the actual design, for a change.

Do you have any control over your emotions?

Stop the fucking insults. Design is holistic. Stop the dogma and preaching verbiage and focus on technical verbiage only.

This is not a church. Get off your fucking soap box please.

If you don't want peer review, then say it now.

Quote
Incorrect logic because the % inside the system says nothing about resources outside the system that can come in via fiat exchange.

While I have only probably hinted at addressing this, it is addressed, but it involves going into the monetary system, and I still want to nail down consensus for you first.

Yes I am aware that you think the minting system can attain an equilibrium. And I don't know if I will agree. And thus you can't assert that point and expect me to leave it unchallenged.

Also it is very poor engineering to design a system that is totally reliant on all its parts. A fault tolerant system can have parts of the system fail and still not entirely fail.

Quote
Also a higher transaction fee means another altcoin with lower transaction fees can outcompete you.

You keep thinking in terms of a closed system. That is why my point about Coase's Theorem is so fundamental.

Except that part of the design is to foster (peaceful) forking if deemed necessary.

Yes I am totally ignoring section 4 and I doubt I would ever agree to it. Anyone who wants to fork can go create another altcoin.

I don't agree with voting to fork a constitution, because voting can be gamed by the power of debt and fiat.

You may be able to convince me otherwise, but I strongly doubt it. There are certain laws of human nature which are inviolable.

That is hardly thinking in a closed system. If the monetary system stops working in a reasonable manner, the people are free to bust it wide open with something new, and retain value from the old, and part of this break has to be denying access to any of the existing SHs who do not wish to switch. But now we're getting deep into section 4, and you still haven't mastered section 1 yet.

A politician always promises we can fix it later.

Why fix it later, when we can fix it at the start?

Quote
If every SH needs to see these 10-second incremental updates, then the communication-overload attack vector resurfaces.

Every SH does not need to see the updates as they happen. Even ones around the same time frame do not need to see each other. There is no overload of communication unless you can prove that a steady and predictable 1kB/s at 5 million SHs is impossible to overcome.

I will need to do my own math. I don't trust that yours accounts for all the factors. Will do.

There is risk in accepting a transaction if you are missing recent blocks, but not much. I don't even remember now who I started explaining this to, but I think it was you, and I don't really feel like explaining it again at the moment. Suffice it to say that I can go much deeper into why transactions are secure when they're secure, and it is easy to identify when a transaction is risky to accept without waiting. Just take it as a given for now because I'm tired.

Yes you are tired and losing your cool. Take a break please. We will still be here.

Quote
Sent from each signing SH to one other SH. But I was assuming this has to be communicated to all SH, so they will know which transactions have been excluded, so that when they sign, they can include the missing transactions.

Yes, we have to multiply the data by the number of connected peers and divide by about two, assuming half of the network gets the transactions before you. But lite clients only need to maintain 1kB/s to verify any part of the ongoing consensus. (4,000 tx/s [Visa] * 100 + 5.7 SH sigs/s [5 million / 876,600] * 150) = 400,855 B/s; times, say, 100 connected peers, divided by two, is 20MB/s, or approximately a 160Mbit connection, around the very high end of consumer bandwidth available today. And notice that the SH part is still a meaningless amount.
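The arithmetic behind those figures, reproduced as a sketch; the 100-byte transaction size, 150-byte signature size, 100 connected peers, and the half-the-network assumption are the ones stated above:

```python
TX_RATE = 4_000                    # tx/s, roughly Visa-scale
TX_BYTES = 100                     # assumed bytes per transaction
SH_SIG_RATE = 5_000_000 / 876_600  # ~5.7 SH signatures per second
SIG_BYTES = 150                    # assumed bytes per SH signature
PEERS = 100                        # connected peers
SHARE = 0.5                        # half the network already has each tx

base = TX_RATE * TX_BYTES + SH_SIG_RATE * SIG_BYTES  # ~400,856 B/s
relay = base * PEERS * SHARE                         # ~20 MB/s
print(f"base {base:,.0f} B/s, relay {relay / 1e6:.1f} MB/s, "
      f"{relay * 8 / 1e6:.0f} Mbit/s")
# base 400,856 B/s, relay 20.0 MB/s, 160 Mbit/s
```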

Again, I will work through the network topology and math. I don't trust your summary without detailing the system for myself.

Quote
If your minting proposal is able to prevent large exchange of fiat for Decrits, then that is a valid transaction cost in Coase's theorem. But I don't yet understand your minting proposal, so I can not yet agree if it works.

It can't possibly prevent the exchange, but it definitely has a cost. Any drastic uptick in the demand can only be assuaged by higher prices in the short term which means the large exchange is going to have to pay increasingly more fiat to obtain increasingly more decrits.

You said SH price is fixed so the higher price must be coming from debasing the system every time someone transacts or ...? And because we can't differentiate transactions from exchanges.

And in the long term, new decrits will be created and distributed throughout the network. The people using decrits will profit off of selling you decrits for much more than their cost to produce, and then the people using decrits will profit again when new coins are distributed to bring the price back down.

I don't understand this mechanism so I can't agree to it yet.

Quote
But only in the 51+% scenario do they have no chance of getting profit.

Otherwise it is just an economic calculation, same as for Decrits.

This is true, but the economic calculation is much more beneficial to the decrits user. Wink

I don't see this yet. Can't agree yet. May disagree. I don't know.

Quote
Have any of my conclusions been proven wrong yet?

All I concluded was that the system can not be closed and must be natural within the realities of the real world.

Based on some assumption that I think the system is closed.

The ability to fork does not impact whether the current fork is making such assumptions.

The section 4 (in the OP) vote-to-fork aspect of your design is analogous to a politician telling us not to worry about how bad the currently proposed legislation is, because we can always change it later.

Quote
And that the SH would need to be limited. Both conclusions appear to be correct.

Based on not understanding the topology of a distributed network, I think? I really don't see why you still think SHs need to be artificially limited. They will be limited by the currency required to purchase them.

I think I explained it again above?

Quote
Now I understand you were referring to the random order of joins/leaves, which is not "wobble".

The order of joins/leaves has little to do with it.

Sorry, that is mathematically wrong. The randomness of joins/leaves is what gives the order in the system any randomness. The purpose of the deterministic function you proposed is to spread (salt) the randomness to the attacker's keys so they are no longer deterministic for the attacker. Please see my earlier post about that.
sr. member
Activity: 359
Merit: 250
May 17, 2013, 01:56:51 AM
Etlase2, why waste so much time arguing? You could use this time to make a post describing, point by point, what each participant (SH, CNC, CNB, average user making a transaction, etc.) does or can do from the moment of starting the program to closing it.
That would really move the discussion to a higher level.
hero member
Activity: 518
Merit: 521
May 17, 2013, 01:28:24 AM
This is all contingent on the input joins/leaves not being entirely controlled by the attacker,

No, it isn't. Put all the new joins at the end, then mix; it doesn't matter. It doesn't need to be random, it just needs to mix up the current order. Take every second SH in order and put them at the end. Voila, you have a completely new mix of SHs.

The input randomness is required. See my prior post, which I was writing at the same time you were writing yours.

I am also now agreeing with you that a deterministic function can spread this randomness to things which the attacker expected to not be randomized. That is the key point you were trying to make.

But don't forget the input randomness is required, else it doesn't work.
hero member
Activity: 798
Merit: 1000
May 17, 2013, 01:26:11 AM
This is all contingent on the input joins/leaves not being entirely controlled by the attacker,

No, it isn't. Put all the new joins at the end, then mix; it doesn't matter. It doesn't need to be random, it just needs to mix up the current order. Take every second SH in order and put them at the end. Voila, you have a completely new mix of SHs.
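A minimal sketch of that mixing rule, with hypothetical names; new joins are appended, then every second SH in the current order is moved to the end:

```python
def remix_order(current_order, new_joins):
    """Deterministically mix the SH order: append new joins, then move
    every second SH to the end of the list."""
    order = list(current_order) + list(new_joins)
    return order[0::2] + order[1::2]  # every second SH moves to the end

# remix_order(["A", "B", "C", "D", "E"], ["F"]) == ["A", "C", "E", "B", "D", "F"]
```

Whether this alone is enough without the join/leave entropy is the open question in the posts above; the sketch only shows that the mixing step itself is trivial to specify.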