Also, bear in mind the two following quotes when I present the points that don't seem to add up, that conflict, or that simply don't make sense.
So, let's proceed...
That is a bad assumption to make if you intend BitShares to run on commodity hardware, as is stated in various texts relating to BitShares 2.0. If you wish to achieve that, then you should really be assuming that the network and disks are NOT capable of such a thing.
Erm... what happened to commodity hardware already? Who has a 36-core CPU lying around?
I just don't see how this is possible while maintaining the minimum amount of data required to ensure a validatable transaction. A 256-bit EC signature alone is ~70 bytes; 30 bytes sure doesn't seem like enough to specify two addresses, a transaction value, and anything else that is required.
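To make that size argument concrete, here is a quick back-of-envelope sketch; the field names and byte counts are illustrative assumptions of mine, not BitShares' actual wire format.

```python
# Back-of-envelope estimate of the bytes a self-contained, verifiable transfer seems to need.
# Field names and sizes are illustrative assumptions, not the BitShares serialization format.
FIELD_BYTES = {
    "ecdsa_signature_der": 71,   # a DER-encoded 256-bit ECDSA signature is typically 70-72 bytes
    "sender_ref": 20,            # e.g. a 160-bit address/account reference
    "recipient_ref": 20,
    "amount": 8,                 # 64-bit integer value
    "fee": 8,
    "nonce_or_expiration": 4,
}

total = sum(FIELD_BYTES.values())
print(f"estimated minimum transaction size: {total} bytes")  # ~131 bytes, well above 30
```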
It's worth noting that the quoted figures for the average BTC transaction are also incorrect, as per this: http://bitshares.github.io/technology/high-performance-and-scalability/
I recall a few respected members here doing research into this, and the average BTC transaction was at least 2x that, usually 3x or greater.
So we have gone from commodity hardware to data centers? What about keeping things decentralized or on commodity hardware?
That is a statistic I can swallow, but my question is, WHO is doing the signature validation? Only people with 32-core machines and 1 TB of memory? If so, how are the rest of the nodes in the network ensuring that this now centralized task is done properly? How can I, with my lowly 8 cores, be sure that the transactions are indeed valid and signed correctly without having to also verify 100k transactions per second?
Ahh, so 100k per second really is only available to people who own 32-core CPUs in a spare data center? If this single machine consisted of commodity hardware, and thus most users of the network had similar, it's not 100k per second, is it; it's 2k.
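To show the scaling logic behind this question, here is a rough sketch; the per-core verification rate is an assumed figure in the range commonly reported for optimized secp256k1 libraries, not a measured BitShares benchmark.

```python
# Rough sketch: how many cores would real-time signature verification need at a given rate?
# PER_CORE is an assumption (optimized secp256k1 code is often quoted around 5k-20k
# verifications per second per core), not a BitShares measurement.
PER_CORE = 10_000  # assumed single-core verification rate (signatures/sec)

def cores_needed(target_tps: int, per_core_rate: int = PER_CORE) -> float:
    return target_tps / per_core_rate

for target in (2_000, 100_000):
    print(f"{target:>7,} tx/s -> ~{cores_needed(target):.1f} cores")
```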
Remember this from up top: "It should be noted that we are talking about the capability of an individual computer which is the ultimate bottleneck", which is in turn confirmed by this next statement.
If the quoted 100k tx/s is really achievable on commodity hardware, why limit it to 1000 transactions per second on release? Could it be that 100k/s on commodity hardware actually is not possible, and that this 1k limit is actually there to accommodate machines slower than the test-bed machine that could achieve 2k/s?
If that is not the case, then I am totally confused. Is it limited by an individual computer to 2k tx/s, or is it not? Do you need the suggested 32 cores to be able to process 100k tx/s, and if so, what about my question about machines that are slower? If the majority of machines are indeed only able to process 2k/s, what purpose do they serve in the network? Are they redundant in ANY transaction processing?
To me, on the surface, it all seems like conflicting statements and contradictory numbers. If the system can process 100k/s without having centralized nodes packing 32 cores and 1 TB of RAM, then I'll take my hat off, but all this information is so confusing that I don't even know what to take away from it.
YOUR THOUGHTFUL RESPONSE IS MUCH APPRECIATED
Since you took the trouble to read some of our BitShares 2.0 documentation and have prepared a polite, professional response, I am pleased to join you in a serious exchange of ideas.
I'll limit my first response to just one of your lines of questioning, lest too many trees hide the forest.
You can think of the transaction rate setting in BitShares 2.0 as a form of "dial a yield". It can be dynamically adjusted by the stakeholders to provide all the throughput they want to pay for. Since BitShares is designed to be a profitable business, it only makes sense to pay for just enough processing capacity to handle peak loads.
The BitShares 2.0 blockchain has a number of "knobs" that can be set by elected delegates. One of them is throughput. The initial setting of that knob is 1000 transactions per second, because right now that is plenty and allows the maximum number of people to provide low-cost nodes. A second knob is the witness node pay rate. If doubling the throughput requires slightly more expensive nodes, the stakeholders just dial up the pay rate until they get enough bidders to provide the number of witness nodes they decide they want (another dial). Pay rate scales with throughput, which scales with revenue, which scales with transaction volume.
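A minimal sketch of that "knobs" idea, assuming a simple parameter record that elected delegates can propose to change; the field names and numbers are illustrative, not the actual BitShares 2.0 parameter set.

```python
# Illustrative sketch of delegate-adjustable chain "knobs"; not real BitShares code.
from dataclasses import dataclass

@dataclass
class ChainParameters:
    max_tx_per_second: int      # throughput knob
    witness_pay_per_month: int  # pay-rate knob, e.g. in dollars' worth of the core asset
    witness_count: int          # how many block signers the stakeholders want

current  = ChainParameters(max_tx_per_second=1_000, witness_pay_per_month=50,  witness_count=101)
proposed = ChainParameters(max_tx_per_second=2_000, witness_pay_per_month=100, witness_count=101)
```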
Now, suppose that a few big applications were to decide to bring all their transactions to the neutral BitShares platform one summer. If we needed to double the throughput, here's what would happen.
The elected delegates would discuss it in public and then publish their recommended new knob settings. Perhaps they pick 2000 transactions per second and $100/month pay for each witness node provider. Everyone who wants to compete for that job then has the funds to upgrade their servers to the next bigger off-the-shelf commodity processor.
As soon as they change those knob settings, the blockchain begins a two-week countdown, during which the stakeholders are given a chance to vote out the delegates from their wallets if they don't like the change. If they are voted out, the blockchain automatically aborts the adoption of the new settings. If not, the settings are changed and the BitShares network shifts gears to run faster.
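Sketching that review-period flow, under the assumption of a single proposed parameter set and a flag for whether the proposing delegates were voted out; names are illustrative, not actual BitShares code.

```python
# Sketch of the countdown described above: a proposed change only takes effect after the
# review period, and is dropped automatically if the proposing delegates are voted out.
from datetime import datetime, timedelta

REVIEW_PERIOD = timedelta(weeks=2)

def effective_parameters(current, proposed, proposed_at: datetime, now: datetime,
                         delegates_voted_out: bool):
    if delegates_voted_out:
        return current                      # stakeholders rejected the change
    if now - proposed_at < REVIEW_PERIOD:
        return current                      # still counting down
    return proposed                         # countdown elapsed; the network shifts gears
```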
There is enough reserve capacity in the software to double our throughput about 8 times, scaling by more than two orders of magnitude with a simple parameter adjustment.
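For reference, the arithmetic behind that headroom figure, starting from the initial 1,000 tx/s setting:

```python
# Successive doublings of the initial 1,000 tx/s knob setting.
for doublings in range(9):
    print(f"{doublings} doublings -> {1_000 * 2 ** doublings:>7,} tx/s")
# 2**8 = 256x headroom, i.e. roughly a quarter-million tx/s from parameter changes alone.
```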
The current knob setting gives us plenty of reserve capacity at the lowest possible witness node price. It could absorb all of Bitcoin's transaction workload without needing to even touch those dials. But if we ever need to take on the workload of something like NASDAQ, VISA, or MasterCard, we can dial up the bandwidth to whatever level the stakeholders vote to support.
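A quick sanity check of the Bitcoin comparison, using the commonly cited figure of roughly 7 tx/s as Bitcoin's protocol ceiling at the time:

```python
# Bitcoin's protocol tops out around 7 tx/s (a commonly cited figure), far below the
# initial 1,000 tx/s knob setting.
bitcoin_peak_tps = 7
initial_setting_tps = 1_000
print(f"headroom over Bitcoin's peak: ~{initial_setting_tps / bitcoin_peak_tps:.0f}x")
```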
So, the BitShares 2.0 platform has plenty of spare bandwidth to handle the ledger bookkeeping functions of just about every blockchain currently in existence. You are all welcome to join us and start using your mining expenses to pay your own developers and marketers instead of electric companies. Nothing else about your business models or token distributions would change. Simply outsource your block signing tasks and get on with your more interesting earthshaking ideas. Think of the industry growth we would all experience if most funds spent on block signers were used to grow our common ecosystem instead. We could all share one common, neutral global ledger platform, where cross-chain transactions, smart contracts and other such innovations were all interoperable!
Or will we waste our industry's development capital on a never-ending mining arms race? Carpe diem!
Stan Larimer, President
Cryptonomex.com