Hi,
I'd like to address some questions about the specifics of the `2000 tps` test.
Let's start with the specs of the PC mentioned before, the one the test was performed on.
This is my daily workhorse PC:
OS: Ubuntu 16.x
RAM: 16GB
CPU: Intel Core i5
HD: 500GB SSD
Now about the test setup.
The HEAT server tested actually runs inside my Eclipse Java IDE, as opposed to running from the command line (in its own JVM).
Running in the Eclipse IDE, and in our case in DEBUG mode, is quite a limiting factor in my experience.
Running HEAT from the command line does not carry the burden of the full Eclipse Java IDE running alongside it, plus whatever Eclipse is doing under the hood with breakpoints and the ability to pause execution.
We have not yet tested HEAT running from the command line; I expect it to be quite a bit faster in that case.
Now about the websocket client app.
With the help of our newly beloved AVRO binary encoding stack we have been able to generate, sign and store to a file our 500,000 transactions. This process takes a while, a few minutes at least, but I don't think that matters much, since in a real-life situation with possibly hundreds of thousands of users the cost of creating and signing the transactions is spread over all those users.
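For the curious, here's roughly what such a generator could look like. This is a minimal sketch using Apache Avro's generic API; the schema, the field names and the sign() placeholder are illustrative and not HEAT's actual transaction format:

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

import java.io.FileOutputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;

public class TransactionWriter {

  // Hypothetical transaction schema; the real HEAT schema will differ.
  static final Schema SCHEMA = new Schema.Parser().parse(
      "{\"type\":\"record\",\"name\":\"Transaction\",\"fields\":["
    + "{\"name\":\"sender\",\"type\":\"long\"},"
    + "{\"name\":\"recipient\",\"type\":\"long\"},"
    + "{\"name\":\"amount\",\"type\":\"long\"},"
    + "{\"name\":\"signature\",\"type\":\"bytes\"}]}");

  public static void main(String[] args) throws Exception {
    try (OutputStream out = new FileOutputStream("transactions.bin")) {
      BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
      GenericDatumWriter<GenericRecord> writer = new GenericDatumWriter<>(SCHEMA);
      // Generate, sign and append all 500,000 transactions in binary AVRO form.
      for (int i = 0; i < 500_000; i++) {
        GenericRecord tx = new GenericData.Record(SCHEMA);
        tx.put("sender", 1L);
        tx.put("recipient", (long) i);
        tx.put("amount", 100L);
        tx.put("signature", ByteBuffer.wrap(sign(tx)));
        writer.write(tx, encoder);
      }
      encoder.flush();
    }
  }

  // Placeholder for the actual cryptographic signing HEAT uses.
  static byte[] sign(GenericRecord tx) {
    return new byte[64];
  }
}
```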
The client app was written in Java and opens a websocket connection to the HEAT server; since both run on the same machine, we use the localhost address.
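A stripped-down version of such a client, using the standard javax.websocket API, could look like the sketch below. The port and path in the URI are made up for illustration, and loadTransactions() stands in for reading the pre-signed file from the previous step:

```java
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;
import java.net.URI;
import java.nio.ByteBuffer;

// Hypothetical endpoint; the actual HEAT websocket port and path may differ.
@ClientEndpoint
public class TxFirehose {

  @OnMessage
  public void onMessage(String reply, Session session) {
    // Server acknowledgements arrive here.
  }

  public static void main(String[] args) throws Exception {
    WebSocketContainer container = ContainerProvider.getWebSocketContainer();
    Session session = container.connectToServer(
        TxFirehose.class, URI.create("ws://localhost:7755/ws/"));
    // Stream the pre-signed binary transactions (~120 bytes each).
    for (ByteBuffer tx : loadTransactions("transactions.bin")) {
      session.getBasicRemote().sendBinary(tx);
    }
    session.close();
  }

  // Placeholder: reads the transactions generated earlier.
  static Iterable<ByteBuffer> loadTransactions(String file) {
    return java.util.Collections.emptyList();
  }
}
```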
Now you might say: "Hey! Wait a minute. Isn't localhost really fast? Isn't that cheating?"
The short answer: "No! And you might be missing what's actually important here."
While it's absolutely true that localhost is much faster than your average external network, what should be obvious here is the level of bandwidth we are talking about. I suppose anyone reading this has downloaded a movie before, be it over torrent or streamed from the network. Well, there's your proof that running this test over localhost makes no meaningful difference.
Your PC while downloading a movie, or the YouTube server streaming the most recent PewDiePie video to you and probably thousands of others, will process MUCH MUCH more data than our little test here.
One transaction in binary form is about 120 bytes in size; multiply that by 2000 and you need a network that can sustain 240KB of data transferred per second. I'm not sure what internet connections are normal in your country, but here in Holland we can get 400Mbit connections straight to our homes, and that's a standard consumer speed (I looked it up just now).
To put that in perspective, 240KB per second is about 1/200th of the bandwidth of that 400Mbit (50MB/s) connection. You should understand by now that the network is not the bottleneck; it's the cryptocurrency server.
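If you want to check those numbers yourself, the arithmetic fits in a few lines:

```java
public class Bandwidth {
  public static void main(String[] args) {
    int txSize = 120;                    // bytes per transaction, binary form
    int tps = 2000;                      // transactions per second
    long needed = (long) txSize * tps;   // 240,000 B/s = 240KB/s
    long link = 400_000_000L / 8;        // 400Mbit/s = 50MB/s
    System.out.println("needed: " + needed + " B/s");    // 240000
    System.out.println("link:   " + link + " B/s");      // 50000000
    System.out.println("ratio:  1/" + (link / needed));  // 1/208
  }
}
```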
So what's so special about HEAT, you might ask. Why can HEAT do what it does?
Well, for that you'd have to dive into the source code of our competitors, be they Waves, Lisk, NEM, IOTA, NXT, etc. Just this afternoon I took a look at the IOTA source code, which is always interesting (I did the same with the others mentioned).
But I can tell you right now that none of the other currencies (Waves, Lisk, NEM, IOTA, NXT, etc.) will be able to reach speeds similar to what HEAT has now shown it can do.
Why I can claim this is pretty simple.
Cryptocurrencies, basically all of them (blockchain or tangle makes no difference here), follow a similar internal design. They all need to store their balances, transactions, signatures, and so on, and they all use different databases to do so.
Some, like NXT, use the slowest solution of all: a full-fledged SQL database. Others have improved models optimized for higher speed in the form of key/value datastores. IOTA, I learned today, uses RocksDB; Waves is on H2's key/value store; Bitcoin is on LevelDB; etc.
Afaik HEAT is the only one that does not use a database at all. Instead we've modeled our data in such a way that it can be written to a memory-mapped flat file, which is how we store blocks and transactions. Our balances are all kept in memory, and to support on-disk persistence we use https://github.com/OpenHFT/Chronicle-Map as our memory/on-disk hybrid indexing solution.
If you look at ChronicleMap's website you'll see the claim: "Chronicle Map provides in-memory access speeds, and supports ultra-low garbage collection. Chronicle Map can support the most demanding of applications." Oh, and did I mention it grew out of the need for HFT trading systems to be faster than anything else available?
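To give a feel for what this looks like in practice, here's a minimal sketch of both pieces: a ChronicleMap persisted to disk for balances, and a memory-mapped flat file for transactions. The key/value types, file names and sizes are illustrative, not HEAT's actual data layout:

```java
import net.openhft.chronicle.map.ChronicleMap;

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class Storage {
  public static void main(String[] args) throws Exception {
    // Balances: an off-heap map persisted to disk, accessed at in-memory speed.
    try (ChronicleMap<Long, Long> balances = ChronicleMap
        .of(Long.class, Long.class)
        .name("balances")
        .entries(1_000_000)
        .createPersistedTo(new File("balances.dat"))) {
      balances.put(123456789L, 5000L); // account id -> balance
    }

    // Blocks/transactions: appended to a memory-mapped flat file, no database.
    try (RandomAccessFile raf = new RandomAccessFile("transactions.dat", "rw")) {
      MappedByteBuffer region = raf.getChannel()
          .map(FileChannel.MapMode.READ_WRITE, 0, 1 << 20); // map 1MB
      region.put(new byte[120]); // one ~120-byte binary transaction
    }
  }
}
```

No SQL engine, no separate datastore process: writes go straight to memory and the OS takes care of flushing the mapped pages to disk.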
Anyways..
The next test is gonna be even cooler. We'll be hosting the single HEAT instance that forges the blocks on a nice and powerful machine, much faster than my PC, probably something with 64GB RAM and perhaps 32 or 64 cores. My estimate is that we can push the TPS to much higher levels on such a server.
Right now we are adding binary AVRO encoding support to HEAT-SDK, and after that we'll release one of those samples we used to do a month or so ago, with which you can fire transactions from your browser at our test setup. I bet that'll be fun.
- Dennis