Well, seeing as Larimer himself said the solution is to "...keep everything in RAM...", how much RAM do you think is required to keep up with a sustained 100,000 tps, if that is indeed true?
Just for the record, cross post from https://bitcointalksearch.org/topic/m.12575441
The default wallet has all transactions expiring just 15 seconds after they are signed, which means that the network only has to keep 1,500,000 * 20 bytes (trx id) => 30 MB in memory to protect against replay attacks of the same transaction.
The vast majority of all transactions simply modify EXISTING data structures (balances, orders, etc.). The only types of transactions that increase memory use permanently are account creation, asset creation, and witness/worker/committee member creation. These particular operations COST much more than operations that modify existing data; their cost is derived from the need to keep them in memory forever.
So the system needs the ability to STREAM 11 MB per second of data to disk and over the network (assuming all transactions were 120 bytes).
If there were 6 billion accounts and the average account had 1 KB of data associated with it, then the system would require 6,000 GB, or 6 TB, of RAM. Considering you can already buy motherboards supporting 2 TB of RAM, and probably more if you look in the right places (http://www.eteknix.com/intels-new-serverboard-supports-dual-cpu-2tb-ram/), I don't think it is unreasonable to require 1 TB per BILLION accounts.
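For anyone who wants to sanity-check those numbers, here's a quick back-of-envelope sketch; the constants (100,000 tps, 15 s expiration, 20-byte trx ids, 120-byte transactions, ~1 KB per account) are the ones quoted above, everything else is just arithmetic:

```python
# Back-of-envelope check of the figures quoted above.
TPS = 100_000                # claimed sustained transactions per second
EXPIRATION_S = 15            # default wallet transaction expiration window
TRX_ID_BYTES = 20            # bytes kept per transaction id for replay protection
TX_BYTES = 120               # assumed average transaction size
ACCOUNTS = 6_000_000_000     # hypothetical 6 billion accounts
BYTES_PER_ACCOUNT = 1_000    # assumed ~1 KB of state per account

replay_mb = TPS * EXPIRATION_S * TRX_ID_BYTES / 1e6     # ~30 MB of pending trx ids
stream_mib_s = TPS * TX_BYTES / 2**20                   # ~11.4 MiB/s to disk and network
account_ram_tb = ACCOUNTS * BYTES_PER_ACCOUNT / 1e12    # ~6 TB of RAM for account state

print(f"replay ids: ~{replay_mb:.0f} MB")
print(f"stream: ~{stream_mib_s:.1f} MiB/s")
print(f"account state: ~{account_ram_tb:.0f} TB (~1 TB per billion accounts)")
```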
Ok, that clears that up; maybe he should be a bit clearer in future about what exactly "...keep everything in RAM..." means.
It still leaves a lot of questions unanswered regarding that claim though, specifically the IO-related ones.
Streaming 11 MB/s from disk doesn't sound like it's too hard, but it depends on a number of factors. Reading one large consecutive 11 MB chunk per second is of course child's play, but if you are reading 11 MB in many small reads (or, worse still, if it's a fragmented mechanical platter drive), then that simple task becomes not so simple.
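To put rough numbers on that, treating each transaction as its own small read (the drive IOPS figures below are typical ballpark values, not measurements):

```python
# 11 MB/s as one sequential stream vs. many per-transaction random reads.
TPS = 100_000
TX_BYTES = 120

required_iops = TPS              # one 120-byte read per transaction
hdd_iops = 150                   # ballpark for a 7200 rpm mechanical drive
nvme_iops = 100_000              # ballpark random-read IOPS for a consumer NVMe SSD

print(f"random-read case: ~{required_iops:,} IOPS needed "
      f"(HDD ~{hdd_iops}, NVMe ~{nvme_iops:,})")
# A single sequential ~11 MiB/s stream, by contrast, is trivial for either device.
```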
Also, network IO seems to have some potential issues. 11 MB/s downstream isn't too much of a problem; a 100 Mbit downstream line will just about suffice. But what about upstream? I'm assuming (so correct me if I'm wrong) that these machines will have numerous connections to other machines and will have to relay that information to other nodes. Even if each node only has a few connections (10-20) but has to relay a large portion of those 100,000 tps to each of them, the upstream bandwidth requirements for that node quickly approach multiple gigabits in the worst case.
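A rough worst-case calculation (the peer count and relay fraction are my own assumptions, purely for illustration):

```python
# Upstream bandwidth if a node relays most of the transaction stream to each peer.
TPS = 100_000
TX_BYTES = 120
PEERS = 15               # assumed number of connected peers
RELAY_FRACTION = 1.0     # worst case: every transaction relayed to every peer

upstream_gbit_s = TPS * TX_BYTES * PEERS * RELAY_FRACTION * 8 / 1e9
print(f"upstream: ~{upstream_gbit_s:.1f} Gbit/s")   # ~1.4 Gbit/s at 15 peers, ~1.9 at 20
```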
Furthermore, let's assume that Bitshares is a huge success, is processing just 10,000 tps sustained, and that none of these issues exist other than storage. Bitshares relies on vertical scaling, and we've already determined that 100,000 tps = ~1 TB of data a day, so 10,000 tps = ~100 GB daily. Operators of these machines are going to be spending a lot of money on fast drive space and will have to employ sophisticated storage solutions in order to keep pace. This becomes quite insane at the 100,000 tps level (~365 TB per year); perhaps Bitshares has some chain pruning or other mechanisms to keep this down? (I hope so!)
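The storage figures follow from the same 120-byte-per-transaction assumption:

```python
# Chain growth per day and per year at the two throughput levels discussed.
TX_BYTES = 120
SECONDS_PER_DAY = 86_400

for tps in (10_000, 100_000):
    per_day_gb = tps * TX_BYTES * SECONDS_PER_DAY / 1e9
    per_year_tb = per_day_gb * 365 / 1e3
    print(f"{tps:,} tps -> ~{per_day_gb:.0f} GB/day, ~{per_year_tb:.0f} TB/year")
# ~104 GB/day at 10,000 tps; ~1 TB/day (~380 TB/year) at 100,000 tps
```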
Finally, back to RAM requirements: what measures or mechanisms are in place to prevent someone from creating 1 billion or more empty accounts and causing RAM requirements to shoot upwards, given that this information is kept in RAM? A few machines could easily do this over the course of a couple of weeks if there were no other costs associated with it. I assume there is some filtering to only keep accounts with activity in RAM, as otherwise this will be a major issue.
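For a sense of scale, a sketch of how fast empty accounts would have to be created to add a terabyte of account state, assuming ~1 KB per account and no meaningful creation cost (both assumptions, purely for illustration):

```python
# Account-creation spam: rate needed to add ~1 TB of in-RAM account state.
TARGET_ACCOUNTS = 1_000_000_000   # 1 billion empty accounts
BYTES_PER_ACCOUNT = 1_000         # assumed ~1 KB of in-RAM state each
WEEKS = 2

creations_per_second = TARGET_ACCOUNTS / (WEEKS * 7 * 86_400)
added_ram_tb = TARGET_ACCOUNTS * BYTES_PER_ACCOUNT / 1e12
print(f"~{creations_per_second:,.0f} creations/s for {WEEKS} weeks "
      f"-> +{added_ram_tb:.0f} TB of account state")
# ~830 creations per second sustained over two weeks.
```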
Either way, this is just another example of how vertically scaled systems are not viable. Should Bitshares grow to the level where it is processing 100,000s of transactions per second and has even a few hundred million accounts, you need a machine with 100s of GB of RAM, 100s of TB of storage, and multi-gigabit internet connections... not really accessible to the man on the street.
Perhaps the cost of participating at that level just isn't an issue, as Bitshares has always had a semi-centralized element to it anyway, and most of its supporters don't seem to mind. For me though, relying on ever-increasing hardware performance and sacrificing the core principles which brought us all here in the first place is a cop-out.