I don't know the details of the database in Steem/Graphene, but several different people have said they've tested it at 100x the current transaction volume of Steem and had no problems on ordinary hardware. That takes daily active users to about 4 million, or higher with better hardware.
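(Rough arithmetic behind that figure, as I read it: 4 million daily actives at 100x throughput implies a baseline on the order of 4,000,000 / 100 = 40,000 daily actives today; that baseline is implied by the claim rather than a number I've independently verified.)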
Note that by the transaction rate tests you mention, I presume they are referring to computational load and not database load (for one thing, I doubt they could even simulate real-world data load patterns a priori).
I'm not. They ran the ordinary node, which writes to a database. Real-world load patterns are of course unknowable, and I don't know how realistic any of these tests were, but there is also a pretty (or very) wide margin between running on ordinary cheap, unoptimized hardware and something actually optimized to host a database.
I'm pretty sure the ability to handle only the computational load is much, much higher, but that is completely unrealistic, so it's pointless to even quote those "Wow!" numbers.
Since, afaik, they haven't published the data in an easily accessible format, I have no way to confirm or refute unpublished benchmarks.
But I note you refer to "transactions", while I am referring to edits of blog posts and comments. They may have simulated adding these at 100x the current rate, but perhaps not random editing of existing content in the blockchain data, i.e. random writes (and then random requests for that data from the web server). Since random edits can be so much more costly (and also force a less optimal database design), they can have a big impact even if they are not the most numerous case.
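To make the append-vs-random-edit distinction concrete, here is a toy sketch of my own (not any Steem benchmark; the table layout, row size, and counts are made up) contrasting appending new rows with randomly updating existing ones in SQLite. The absolute numbers depend entirely on the engine, dataset size, and hardware; the point is only the difference in access pattern.

```python
# Toy illustration, not a Steem benchmark: sequential appends (new posts/comments)
# vs. random in-place edits of existing rows in a simple SQLite table.
import random
import sqlite3
import time

conn = sqlite3.connect("toy.db")
conn.execute("DROP TABLE IF EXISTS posts")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")

N = 100_000

# Sequential appends: the "add new content" case.
t0 = time.time()
with conn:
    conn.executemany(
        "INSERT INTO posts (id, body) VALUES (?, ?)",
        ((i, "x" * 256) for i in range(N)),
    )
append_s = time.time() - t0

# Random in-place edits: the "edit existing content" case.
t0 = time.time()
with conn:
    conn.executemany(
        "UPDATE posts SET body = ? WHERE id = ?",
        (("y" * 256, random.randrange(N)) for _ in range(N)),
    )
edit_s = time.time() - t0

print(f"appends: {append_s:.2f}s, random edits: {edit_s:.2f}s")
```

On a dataset much larger than RAM, the random-edit side generally degrades far faster than the append side, and that is the case I'm concerned about.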
Note that I presume they can scale any design with enough capital to spend on infrastructure and engineering. But if this is a significant cost (and maybe it is not, I dunno), it will come out of the users' pockets one way or the other. For example, it is not unfathomable that users' stake becomes insufficient over time and they have to buy stake to continue; instead, they might simply leave for a more efficient system that doesn't need to charge them.
When I refer to scaling, I don't mean that it is physically implausible to scale it. I am referring to how cost and complexity grow with scale (i.e. in the big-O sense), and the potential impacts thereof.
There is a deployment scaling issue in that the database is not fully ACID, and recovery from corruption is slow (more precisely, it will become slow if the tx volume increases; currently it isn't too bad). The only way to deal with that currently is with redundant systems and trailing backups. It is a pain in the ass, but workable.
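For what it's worth, here is a minimal sketch of the kind of trailing backup I mean (my own illustration, not Steem's actual tooling; the directory names, interval, and retention are assumptions, and a real setup would flush or stop the node before copying):

```python
# Hedged sketch: periodically snapshot a node's data directory so that, if the
# live copy is corrupted, you can restart from a recent snapshot instead of
# replaying the whole chain.
import shutil
import time
from datetime import datetime, timezone
from pathlib import Path

DATA_DIR = Path("witness_node_data_dir")   # assumed node data directory
BACKUP_ROOT = Path("backups")
INTERVAL_S = 6 * 3600                      # snapshot every 6 hours (arbitrary)
KEEP = 4                                   # keep the last 4 snapshots

while True:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    shutil.copytree(DATA_DIR, BACKUP_ROOT / stamp)
    # Prune old snapshots so disk usage stays bounded.
    for old in sorted(BACKUP_ROOT.iterdir())[:-KEEP]:
        shutil.rmtree(old)
    time.sleep(INTERVAL_S)
```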
Thanks for sharing that.
Note this is also a weakness of DPoS. If witnesses were instead fungible amongst all nodes, then it wouldn't matter when a database is corrupted on any subset of witnesses.