Out of that list, Bitshares has the best scaling due to DPoS. NXT, if it ever gets transparent forging working, would have lower TPS and slower block times, but would still be better than the other coins except Bitshares. Transparent forging is basically an attempt to recreate DPoS in a more Rube Goldberg fashion.
DPoS is just a centralized version of NXT's PoS, which was implemented so that the Larimers and a select group of manipulators (aka the Communist Chinese Gov) can effectively rig the delegate elections through "approval voting" and control the platform. Also, Bitshares' ridiculous claim of 100k TPS requires hardware with 1TB of RAM.
NXT didn't invent PoS, so you should probably stop lying about that. NXT will also not be worth buying until someone can demonstrate they can even get transparent forging working on the platform. I honestly can't believe you're dumb enough to claim Bitshares can't do 100k TPS. NXT, which would be using an inferior method of deterministic block validation, claims similar numbers.
That 1TB of RAM claim also came from Dan Hughes of Emunie, a guy who doesn't even know how Bitshares works. Tendermint claims 40,000 TPS as well. Do you think Tendermint is lying too? Bitshares can do whatever Tendermint can do or more. NXT will likely do less than Bitshares while also having slower block times due to the way it's structured.
Bitshares is beating NXT to the market with both high scaling and stable, market-pegged assets, the two things actually required for crypto to go mainstream. Nobody wants to hold crypto until it has stability like the dollar.
Well, seeing as Larimer himself said the solution is to "...keep everything in RAM...", how much RAM do you think is required to keep up with a sustained 100,000 tps, if that claim is indeed true?
You don't need to know anything about how that system works when statements like these have been made: 120 bytes per transaction * 100,000 transactions per second * 86,400 seconds in a day = 1,036,800,000,000 bytes (almost 1TB) of data per day. However you cut it, that is a lot of data to manage efficiently in a vertically scaled system (which Bitshares is).
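For anyone who wants to check it, here is that back-of-the-envelope calculation written out as a quick Python sketch. The 120-byte transaction size and 100,000 tps figures are the ones quoted above; everything else is plain arithmetic.

```python
# Back-of-the-envelope: daily data volume at a sustained 100,000 tps,
# using the 120-bytes-per-transaction figure quoted above.

TX_SIZE_BYTES = 120          # assumed average transaction size
TPS = 100_000                # claimed sustained throughput
SECONDS_PER_DAY = 86_400

bytes_per_second = TX_SIZE_BYTES * TPS             # 12,000,000 B/s (~12 MB/s)
bytes_per_day = bytes_per_second * SECONDS_PER_DAY

print(f"{bytes_per_second:,} bytes/s (~{bytes_per_second / 1e6:.0f} MB/s)")
print(f"{bytes_per_day:,} bytes/day (~{bytes_per_day / 1e12:.2f} TB/day)")
# -> 1,036,800,000,000 bytes per day, i.e. roughly 1TB of new data every day
```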
If you have to go to disc, even the fastest PCIe SSDs will struggle to do 100,000 random reads per second after some moderate use, unless you are doing very aggressive and efficient disc management at a low level to ensure transactions are stored sequentially. Even without considering transactions arriving out of order from the network, that in itself is no trivial task.
Even batch-writing large blocks of transactions to reduce the required IOPS might not cut it, as you have no guarantee that the SSD controller is going to (or even can) write them in one consecutive chunk. If the controller has to move data around to make that possible, that is additional overhead and your maximum IOPS will drop anyway.
If the RAM requirements are low, then you will be constantly swapping those transactions in and out of RAM, which means asking an already busy IO stream to perform 100,000 reads AND 100,000 writes each second on top of the processing itself.
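To put a rough number on that IO load, here is a small sketch. The sustained-IOPS figure for the drive is an assumption I've picked purely for illustration; real drives vary widely, and sustained mixed random IOPS under load are usually far below the headline spec sheet number.

```python
# Rough IO budget if transactions have to be swapped between RAM and disc:
# 100,000 random reads AND 100,000 random writes per second, as described above.

TX_SIZE_BYTES = 120
TPS = 100_000

# Assumed figure for illustration only: sustained mixed random IOPS of a fast
# PCIe SSD after moderate use. Headline specs are often much higher, but
# sustained mixed-workload numbers tend to be much lower.
ASSUMED_SUSTAINED_IOPS = 150_000

required_iops = TPS * 2                               # one read + one write per tx
required_bandwidth_mb = TX_SIZE_BYTES * TPS * 2 / 1e6 # ~24 MB/s

print(f"Required: {required_iops:,} IOPS, ~{required_bandwidth_mb:.0f} MB/s")
print(f"Assumed drive budget: {ASSUMED_SUSTAINED_IOPS:,} sustained IOPS")
print(f"That is {required_iops / ASSUMED_SUSTAINED_IOPS:.0%} of the assumed budget")
# The bandwidth is tiny; the pressure is almost entirely on the random
# operation count, which is exactly the point being made above.
```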
My issue isn't that 100,000 transactions cannot be processed per second; my issue is that sustaining that processing capability within RAM and IO limitations is not as trivial as is being made out, and anyone with even a basic understanding of IO systems should be able to see it.
I believe this 100,000 tps relates to the system's ability to match orders, as he mentioned that specifically in the videos on this topic, and NOT to a sustained network load of new transaction processing.
Also, there is this little snippet:
Real-Time Performance
While we are convinced that the architecture is capable of 100,000 transactions per second, real world usage is unlikely to require anywhere near that performance for quite some time. Even the NASDAQ only processes 35,000 messages (aka operations) per second and has only been tested to 60,000 TPS with eventual plans to upgrade to 100,000.
We set up a benchmark to test real-time performance and found that we could easily process over 2000 transactions per second with signature verification on a single machine. On release, the transaction throughput will be artificially limited to just 1000 transactions-per-second.
found here: https://bitshares.org/blog/2015/06/08/measuring-performance/

This also makes me question the 100,000 tps in the real world, as their OWN benchmarks could only achieve 2000+. It's still a lot, sure, but it's not 100,000 tps, is it? So what is going on?
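To be clear about what that kind of figure measures, here is a minimal, hypothetical sketch of a single-machine signature-verification benchmark of the sort described in that quote. I'm using Ed25519 via the Python cryptography package purely because it's easy to run; as far as I know Bitshares/Graphene uses secp256k1 ECDSA in optimised C++, so the absolute numbers won't match theirs, but the shape of the measurement is the same.

```python
# Minimal single-machine signature-verification benchmark (hypothetical sketch).
# NOTE: uses Ed25519 from the 'cryptography' package for convenience; the real
# platform uses a different curve and native code, so numbers will not match.
import os
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

N_TX = 10_000
TX_SIZE_BYTES = 120  # assumed average transaction size, as above

key = Ed25519PrivateKey.generate()
pub = key.public_key()

# Pre-generate dummy "transactions" and their signatures.
txs = [os.urandom(TX_SIZE_BYTES) for _ in range(N_TX)]
sigs = [key.sign(tx) for tx in txs]

start = time.perf_counter()
for tx, sig in zip(txs, sigs):
    pub.verify(sig, tx)          # raises InvalidSignature on a bad signature
elapsed = time.perf_counter() - start

print(f"Verified {N_TX:,} signatures in {elapsed:.2f} s "
      f"(~{N_TX / elapsed:,.0f} verifications/s on one core)")
```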
The real-world benchmarks cannot have produced much more than 2000 tps, because if they had, they would have said so. For example, if the test produced 20,000 tps, why claim it only resulted in 2000? So I conclude from that as well that the 100,000 is either wrong/false or presented in a confusing context.