In a controlled environment it has already demonstrated 100k tps. The bottleneck is network capability rather than the core code, which is exactly what the test demonstrates. Once network capabilities scale up naturally, 100k becomes possible as demand warrants.
You are obscuring the fact that what you just wrote effectively means, "this is how many TX/s a fast server CPU can process, and we eliminated any sense of a real network from the test".
Once network capabilities scale up naturally, 100k becomes possible as demand warrants.
This is like saying it will get faster once we redesign our coin (or monopolize the DPOS witnesses with our own corporate-funded super hardware, because DPOS is really just a way to obscure that we are paying dividends to ourselves, analogous to how masternodes in Dash were ostensibly a way of obscuring that those huge interest fees were going to those who owned the most coins). Let's rather talk about what a coin can do now, today, in the real world of decentralized witnesses of varying capabilities.
Obscuring instamines and other means (cheap pre-sales of ProtoShares that were converted to BitShares?) of having control over a large percentage of the coins, and then setting up a paradigm where coins can be parked to earn dividends. Hmmm. How dumb are we. Hey, more power to them if investors are gullible enough to buy it. But it all starts to fit together when analyzing why they would even think they could have a uniform capability across all witnesses.
Your assumption about which witness is next to sign a block, which led to a dozen more, is incorrect, thus your derived assumptions are also incorrect. So you really have no claim about BitShares and its tps without fully understanding the concepts behind the tests and the feature itself.
If you would be kind enough, you are welcome to cite a reference document. I was working from the official description of DPOS at the official website. As I wrote, I will edit my post for corrections if anyone provides information. You have not yet provided any information. So far I have read only your (perhaps incorrect) spin on the matter.
Why not read the code?
Again, the bottleneck is in the consensus code, which has been optimized so that it is possible to do more than 100k tps; Bitcoin in a controlled environment can't do this, because its bottleneck lies outside of network constraints. By leveraging LMAX technology and applying it to blockchains, they were able to increase efficiency in validating and signing blocks. Propagation is always an issue, which is where scaling up network parameters helps; that is totally feasible, and multiple markets are betting on it and will benefit. Because there is no mining, it is possible off the bat, and it is now optimized to produce more tps. DPOS allows them to maximize decentralization while witnesses remain anonymous, and even so, BitShares following regulatory rules gives less incentive for a regulation attack than Bitcoin.
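For the curious, here is a rough Python sketch of the LMAX-style pattern being referenced (every name is illustrative, not BitShares' actual code): signature verification is moved off the critical path, and a single thread then applies transactions to in-memory state in deterministic order, which is the part a tps benchmark like this measures.

```python
# Rough sketch of an LMAX-style pipeline. All names are illustrative
# assumptions, not BitShares' actual API.
from concurrent.futures import ThreadPoolExecutor

def verify_signature(tx):
    # Stand-in for a real ECDSA/EdDSA check; parallelizable and stateless.
    return tx.get("sig") is not None

def apply_to_state(state, tx):
    # Deterministic in-memory state transition: no disk or network I/O here.
    state[tx["from"]] = state.get(tx["from"], 0) - tx["amount"]
    state[tx["to"]] = state.get(tx["to"], 0) + tx["amount"]

def process_block(state, txs):
    # Stage 1 (parallel): verify signatures on worker threads.
    with ThreadPoolExecutor() as pool:
        checks = list(pool.map(verify_signature, txs))
    # Stage 2 (single writer thread): apply valid transactions in order.
    for tx, ok in zip(txs, checks):
        if ok:
            apply_to_state(state, tx)
    return state

print(process_block({}, [{"from": "a", "to": "b", "amount": 5, "sig": "..."}]))
# {'a': -5, 'b': 5}
```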
With fiber-optic internet, would Bitcoin be able to do 100k tps? No.
LMAX does 100k tps at 1 ms latency:
http://www.infoq.com/presentations/LMAX
On the use of LMAX in BTS:
https://bitshares.org/technology/industrial-performance-and-scalability/
Increasing network parameters will only help Bitcoin with the regulation attack, not scale up tps as efficiently. Today BTC is restricted to 7 tps at 1 MB, so it's orders of magnitude off, and I'd argue that DPOS is still more decentralized than using LN to increase tps and treating Bitcoin as a settlement platform.
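The ~7 tps figure, for reference, is just back-of-envelope arithmetic, assuming an average transaction size of roughly 250 bytes:

```python
# Back-of-envelope check of the ~7 tps Bitcoin figure. The average
# transaction size is an assumption (~250 bytes); real sizes vary.
BLOCK_SIZE_BYTES = 1_000_000     # 1 MB block size limit
AVG_TX_BYTES = 250               # assumed average transaction size
BLOCK_INTERVAL_S = 600           # ~10 minute target block interval

tps = BLOCK_SIZE_BYTES / AVG_TX_BYTES / BLOCK_INTERVAL_S
print(f"~{tps:.1f} tps")         # ~6.7 tps, i.e. the oft-quoted ~7
```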
As I wrote from the start of this, BitShares 2.0 has optimized the witness code so the CPU can scale to 100,000 TX/s, but not only do they apparently require something on the order of LMAX's 1 ms network latency to achieve it, I haven't read anywhere that they've modeled DoS attacks on the transaction propagation network at such high TX/s. Real-time systems are not only about average throughput but also about CIS (guaranteed reliability and continuous throughput). If you send your real-time payment through and the next 10 witnesses queued in the chosen order are DoS attacked so they are unable to receive transactions, then they can't complete their function. That is a fundamental problem that arises from using PoS as the mining method when you claim such high TX/s across nodes of variable hardware and network capabilities (PoS designs claiming more conservative TX/s and block times are thus less likely to bump into these issues, which are external to the speed of updating the ledger in the client code). They can adopt countermeasures, but doing so will push the maximum TX/s rate downward, perhaps significantly.
I am not even confident they can maintain 100 TX/s on a real-world network today, composed of witnesses of widely varying capabilities, while under a DDoS attack. Someone needs to do some modeling.
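The modeling could start as small as this: a toy Monte Carlo sketch (every parameter is an assumption, not a measurement of any real network) of the confirmation delay when an attacker works down the publicly known witness schedule, taking each of the next k witnesses offline with some probability.

```python
# Toy Monte Carlo model of the DoS scenario described above. All numbers
# are illustrative assumptions, not measurements of any real network.
import random

def expected_confirmation_delay(k=10, p_down=0.8, block_interval_s=1.0,
                                trials=100_000):
    """Average seconds until the transaction reaches a witness the
    attacker failed to take down, given 1-second block slots."""
    total = 0.0
    for _ in range(trials):
        slot = 0
        # Walk the known schedule; each of the next k witnesses is
        # knocked offline with probability p_down.
        while slot < k and random.random() < p_down:
            slot += 1
        total += (slot + 1) * block_interval_s
    return total / trials

print(f"{expected_confirmation_delay():.2f} s")  # ~4.6 s vs. the nominal 1 s
```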
LMAX is able to push 6M TPS, but it's not on a blockchain; thus it apparently does not require that kind of latency at all. "BitShares is able to process 100,000 transactions per second without any significant effort devoted to optimization" means that with optimization they can pull a lot more and deal with DDoS or whatnot.
"The real bottleneck is not the memory requirements, but the bandwidth requirements. At 1 million transactions per second and 256 bytes per transaction, the network would require 256 megabytes per second (1 Gbit/sec). This kind of bandwidth is not widely available to the average desktop; however, this level of bandwidth is a fraction of the 100 Gbit/s that Internet 2 furnishes to more than 210 U.S. educational institutions, 70 corporations, and 45 non-profit and government agencies."
"The NASDAQ claims that orders are acknowledged in 1 ms and then executed in just 1 ms. This claim has the built in assumption that the machines doing the trading are on fiber optic connections to the exchange and located within 50 miles. This is due to the fact that light can only travel 186 miles in a millisecond in space and half of that speed on a fiber optic cable. The time it takes for an order to travel 50 miles and back is a full millisecond even if the NASDAQ had no overhead of its own.
If a user in China were to do trading on the NASDAQ then they would expect that order acknowledgement would be at least 0.3 seconds.
If BitShares were configured with 1 second block intervals, then on average orders are acknowledged and/or executed within 0.5 seconds. In other words, the performance of BitShares is on par with a centralized exchange processing orders submitted from around the world. This is the best that can be achieved with a decentralized exchange because it puts everyone on equal footing regardless of their geographical location. In theory, traders could locate their machines within 50 miles of the elected block producers and trade with millisecond confirmations. Unfortunately, block producers are randomly selected from locations all around the world which means that at least some of the time a trader would have higher latency."
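Making the speed-of-light arithmetic in that quote explicit (the 0.3 s figure for China presumably includes routing overhead on top of the raw fiber distance):

```python
# Speed-of-light arithmetic from the quote: light covers ~186 miles per
# millisecond in vacuum and roughly half that in fiber.
def round_trip_ms(miles_one_way, c_miles_per_ms=186.0, fiber_factor=0.5):
    return 2 * miles_one_way / (c_miles_per_ms * fiber_factor)

print(f"{round_trip_ms(50):.2f} ms")          # ~1.08 ms: the 50-mile NASDAQ case
print(f"{round_trip_ms(7000) / 1000:.2f} s")  # ~0.15 s raw fiber round trip from
# ~7,000 miles away; the quoted 0.3 s presumably adds routing overhead
```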
"We setup a test blockchain where we created 200,000 accounts and then made 2 transfers and 1 asset issuance to each account. This is involved a total of 1 million operations. After creating the blockchain we timed how long it took to “reindex” or “replay” without signature verification. On a two year old 3.4 Ghz Intel i5 CPU this could be performed at over 180,000 operations per second. On newer hardware single threaded performance is 25% faster.
Based upon these numbers we have concluded that claiming 100,000 transactions per second is well within the capability of the software."
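Taking the quoted numbers at face value, the implied replay time and newer-hardware throughput are:

```python
# The replay numbers from the quote, taken at face value.
ops = 1_000_000                  # total operations in the test chain
i5_rate = 180_000                # ops/sec on the two-year-old 3.4 GHz i5

print(f"i5 replay time: {ops / i5_rate:.1f} s")    # ~5.6 s for 1M operations
print(f"newer CPU: {i5_rate * 1.25:,.0f} ops/s")   # 225,000 ops/s at +25%
```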
"When measuring performance
we make the assumption that the network is capable of streaming all of the transaction data and that disks are capable of recording this stream. We make the assumption that signature verification has been done in parallel using as many computers as necessary to minimize the latency. A single core of a 2.6 Ghz i7 is able to validate 10,000 signatures per second.
Todays high-end servers with 36 cores (72 with hyper-threading) could easily validate 100,000 transactions per second. All of these steps have been designed to be embarrassingly parallel and to be independent of blockchain state."
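A minimal sketch of the "embarrassingly parallel" verification step described there, with an illustrative stand-in for the real signature check: 36 cores at 10,000 checks/s per core is roughly 360,000 checks/s, which is where the headroom over 100,000 tx/s comes from.

```python
# Sketch of the parallel verification step the quote describes.
# verify_one is an illustrative stand-in, not BitShares code.
from concurrent.futures import ProcessPoolExecutor

def verify_one(tx):
    # Stand-in for a real signature check (~10,000/s per core per the
    # quote). Independent of blockchain state, so it shards cleanly.
    return True

def verify_batch(txs, workers=36):
    # 36 cores x 10,000 checks/s/core ~= 360,000 checks/s of capacity.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return all(pool.map(verify_one, txs, chunksize=1024))

if __name__ == "__main__":
    print(verify_batch([{"sig": "..."}] * 10_000))  # True
```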
Read https://bitshares.org/blog/2015/06/08/measuring-performance/