Some posts about the competing block chain scaling designs Bitshares, Iota, eMunie, and "block list":
Let's talk software engineering a bit...
Hmmm... I've found that the major bottlenecks on lower-end stuff are actually the IO DB writes/reads and not so much the crypto-related stuff. Sure, it has a positive effect if you can speed it up, but a good 70%+ of the optimizing I do is how to get data over the IO quicker and more efficiently.
That was like word for word what Bytemaster said in this YouTube video, heh: http://www.youtube.com/watch?v=bBlAVeVFWFM
Daniel Larimer incorrectly claims in that video that it is not reliable to validate transactions in parallel across multiple threads. Nonsense. Only if the inputs to a transaction fail to validate would one need to check whether some other transactions must be ordered in front of it, or whether it is a double-spend. And he incorrectly implies that the only way to get high TX/s is to eliminate storing the UTXO on disk, presumably because he hasn't conceived of using SSD and/or RAID and/or node partitioning. It is impossible to keep the entire world's UTXO in RAM given 36 bytes of storage for each 256-bit output address+value, with even 1 billion users and several addresses per user. He mentions using indices instead of hashes, but enforcing such determinism over a network makes it extremely brittle (it can fail in numerous ways, and having addresses assigned by the block chain violates user autonomy and the end-to-end principle); as well, even 64-bit hashes are subject to collisions at billion-scale. Essentially he is making the error of optimizing at the low level while breaking higher-level semantics, apparently because he hasn't gone about really scaling and solving the problem at the high level, semantically.
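To put rough numbers on the RAM claim (my own arithmetic from the figures above): 10^9 users × 3 outputs each × 36 bytes ≈ 108 GB, which is far beyond commodity RAM yet cheap on SSD and trivially shardable across partitioned nodes.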
Edit: Fuseleer applies the term "vertical scaling" to describe Bitshares' optimization strategy.
Hmmm... I've found that the major bottlenecks on lower-end stuff are actually the IO DB writes/reads and not so much the crypto-related stuff. Sure, it has a positive effect if you can speed it up, but a good 70%+ of the optimizing I do is how to get data over the IO quicker and more efficiently.
What DB system do you use? MySQL? I use
http://docs.oracle.com/javase/8/docs/api/java/nio/MappedByteBuffer.html.
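For the curious, here is a minimal sketch of what the memory-mapped approach looks like; the file name, mapping size, and 32+8 byte record layout are illustrative assumptions, not anyone's actual schema:

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MappedStoreDemo {
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile file = new RandomAccessFile("utxo.dat", "rw");
                 FileChannel channel = file.getChannel()) {
                // Map 64 MB of the file; reads/writes then go through the OS
                // page cache rather than per-call read()/write() syscalls.
                MappedByteBuffer buf =
                    channel.map(FileChannel.MapMode.READ_WRITE, 0, 64L * 1024 * 1024);

                buf.put(new byte[32]);        // 32-byte output hash (placeholder)
                buf.putLong(50_000L);         // 8-byte value in smallest units

                long value = buf.getLong(32); // absolute random-access read
                System.out.println("value = " + value);

                buf.force();                  // flush dirty pages to disk
            }
        }
    }

The absolute random-access read is the point: lookups hit the page cache at memory speed, which is why the bottleneck then tends to shift toward the network.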
I have just recalled that eMunie does much more than just payments; in that case we cannot compare our solutions, because our cryptocurrency works with payments only and doesn't need to do sophisticated stuff like order matching.
MySQL and Derby for development; we'll probably go with Derby or H2 for V1.0.
The data stores themselves are abstracted though, so any DB solution can sit behind them with minor work, so long as it implements the basic interface.
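Presumably that "basic interface" is something like the following minimal key-value contract; the name and method signatures are my guesses for illustration, not Fuseleer's actual code:

    // Hypothetical pluggable data-store contract; Derby, H2, MySQL, or a
    // memory-mapped file could each sit behind it.
    public interface DataStore extends AutoCloseable {
        void put(byte[] key, byte[] value) throws Exception;
        byte[] get(byte[] key) throws Exception;     // null if absent
        boolean delete(byte[] key) throws Exception; // true if removed
    }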
That solution (if it fits your purpose) will be very fast for you; then your IO bottleneck will mainly shift to the network, I imagine?
Both of these methods are horridly inefficient. Cripes, disk space is not at a premium. Duh!
That solution (if it fits your purpose) will be very fast for you; then your IO bottleneck will mainly shift to the network, I imagine?
Network will become a bottleneck at 12'000 TPS (for 100 Mbps).
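A quick sanity check on that figure (my arithmetic, assuming roughly 1 kB per signed transaction): 100 Mbps ≈ 12.5 MB/s, and 12.5 MB/s ÷ ~1 kB per transaction ≈ 12,000 TPS.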
Yup, partitions, my friend: that problem goes away.
I expect that when you do finally issue a white paper, the weakness will be that the economic model is gameable such that there is a loss of either Consistency, Availability, or Partition tolerance (the CAP theorem). Without a proof-of-work (or proof-of-share[1]) block chain, there is no objective SPOT (single-point-of-truth), which becomes really onerous once partitioning is added to the design, because afaics there is then no way to unify the partitioned perspectives. I believe this to be the analogous underlying flaw of Iota and "block list".

The challenge in proving this flaw for Iota et al. is to show a game theory that defeats the assumptions of the developers (white paper), e.g. the selfish-mining game theory against Satoshi's proof-of-work. However, I have argued in Iota's thread that the onus is on them to prove their design doesn't admit such a game theory. Otherwise you all can put these designs into the wild and then we can wait to see if they blow up at scale. Note I haven't had enough time to follow up on Iota lately, and I am waiting for them to get all their final analysis into a final white paper before I sit down and really try to break it, beyond just expressing theoretical concerns/doubts.
[1] In PoS the entropy is bounded, and thus in theory it should be possible to game the ordering. In theory, there should exist a game theory such that the rich always get richer, paralleling the 25-33% share selfish-mining attack on Satoshi's proof-of-work. However, it has not yet been shown whether this is always/often a practical issue.

Proof-of-share can't distribute shares of the money supply to those who do not already have some of the money supply. Proof-of-share is thus not compatible with a debasement scheme that flattens (recycles) the power-law distribution, although neither is proof-of-work once it is dominated by ASICs. Without recycling of the power-law distribution, velocity-of-money suffers unless debt-driven booms are employed, and then government becomes a political expediency to "redistribute from the rich to the poor" (which is then gamed by the rich, with periodic class/world warfare).

Proof-of-share also suffers from conflating coin ownership with mining; thus if not all coin owners are equally incentivized to participate in mining, then the rich control the mining. A coin owner whose holding is worth less than his toenail isn't going to bother using his share to mine. Thus proof-of-share is very incompatible with the direction toward micro-transactions and micro-shares. Any attempt to correct this by weighting smaller shares more heavily can then be gamed by the rich, who can split their holdings into micro-shares.
Ideally debasement should be distributed to an asset that users control but the rich can't profitably obtain.
You can't just make an out-of-context claim that an "honest" majority of the trust reputation will decide the winner of a double-spend. You have to model the state machine holistically before you can make any claim.
Proof-of-work eliminates that requirement because each new iteration of a block solution is independent of the prior one (independent trials, often simplistically modeled as a Poisson process), except to some small extent in selfish mining, which is itself easily modeled with a few equations. See the selfish-mining paper for the state machine, and then imagine how much more complex the model for his design will be.
This independence is what I mean when I say the entropy of PoW is open (unbounded), while it is closed for PoS.[1]
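To spell out that independence (the standard textbook formulation, not something from the selfish-mining paper): with difficulty tuned so blocks arrive at rate \lambda, the number of blocks N_t found in an interval t is distributed as

    \Pr(N_t = k) = \frac{(\lambda t)^k e^{-\lambda t}}{k!}

and the process is memoryless, so a miner holding fraction p of the hashrate wins any given block with probability p, independent of all prior blocks. PoS has no comparable source of fresh entropy per block, which is the sense in which its entropy is closed.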
Daniel Larimer incorrectly claims in that video that it is not reliable to validate transactions in parallel across multiple threads. Nonsense.
Why nonsense? It depends on the linearity of the system. For a linear system order doesn't matter; for a non-linear one it does.
PS: We assume that multithreaded execution can't ensure a specific order of events, which is pretty reasonable for current architectures, since placing a lot of memory barriers would degrade performance significantly.
Because (as indicated/implied by my prior post) it is saner to design your system holistically such that ordering of transactions is an exceptional event, not a routine one.
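To make "ordering as the exceptional event" concrete, here is a minimal sketch of the kind of design I mean; the types and conflict policy are illustrative assumptions, not any project's actual code. Every transaction is validated concurrently on the fast path; only a transaction whose input is already claimed (a potential double-spend or ordering dependency) drops into the slow, ordered path:

    import java.util.List;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentLinkedQueue;

    public class ParallelValidator {
        // Outputs atomically claimed as spent; Set.add() returning false is a
        // lock-free double-spend/ordering-conflict detection.
        private final Set<String> spent = ConcurrentHashMap.newKeySet();
        // Exceptional path: transactions deferred for ordered resolution.
        private final ConcurrentLinkedQueue<Tx> conflicts = new ConcurrentLinkedQueue<>();

        public void validateAll(List<Tx> txs) {
            // Fast path: signature checks and input claims run concurrently.
            txs.parallelStream().forEach(tx -> {
                if (!tx.signatureValid()) return;  // reject outright
                for (String input : tx.inputs()) {
                    if (!spent.add(input)) {       // input already claimed
                        conflicts.add(tx);         // defer; don't block peers
                        return;                    // (partial claims are
                    }                              // reconciled in the slow path)
                }
                apply(tx);
            });
            // Slow path (rare by design): deterministic single-threaded ordering.
            for (Tx tx; (tx = conflicts.poll()) != null; ) resolveConflict(tx);
        }

        private void apply(Tx tx)           { /* commit to the ledger */ }
        private void resolveConflict(Tx tx) { /* ordered double-spend resolution */ }

        interface Tx {
            boolean signatureValid();
            List<String> inputs();
        }
    }

The only synchronization on the fast path is the atomic claim per input; no global ordering of all events is imposed, which is the distinction I'm drawing.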
Conflation of "order book" with TX/s is a category error. It is not even clear if a decentralized "order book" can or should have a deterministic ordering, because determinism may allow the market to be gamed. In any case, it is not relevant to the issue of rate of processing TX/s for signed transactions. Separation-of-concerns is a fundamental principle of engineering.
I expect that when you do finally issue a white paper, the weakness will be that the economic model is gameable such that there is a loss of either Consistency, Availability, or Partition tolerance.
I believe Availability will always be nine nines for any decentralized cryptocurrency and Consistency will always be eventual, so Partition tolerance is the only toy we all can play with.
I have already argued in your Iota thread that your definition of Availability has no relevant meaning (propagation across the peer network is not a semantic outcome). Rather, a meaningful Availability is the ability to get your transaction into the consensus. In Bitcoin, that Availability is limited in several ways:
- Confirmation occurs only every 10 minutes (on average).
- Inclusion in a block is dependent on the whims of the node which won the block, and on the maximum block size.
- One who has sufficient hashrate has higher Availability.
- 51% of the network hashrate can blacklist your transactions, denying your Availability.
It is my stance that the holistic game-theoretic analysis of Availability in Iota, eMunie, and "block list" is much more muddled thus far. The multifurcated tree of Iota appears to be multiple (potentially inConsistent) Partitions, so the Availability to create a new tree branch doesn't appear to be meaningful Availability, since there is no confirmation of consensus.