Tracking transaction inputs back to their outputs will require access to the full transaction store.
Say there will be 1 million transactions per day.
365 days = 365 million transactions per year.
Yes, there is some "compression" or "compaction". What are the estimates of its effectiveness in the average case and in the worst case? I don't know the numbers, please correct me.
10 years = 3,650 million records that everyone would have to scan for every transaction they receive (1 million incoming per day?).
If it grows to 10 million transactions per day, that is 36,500 million records to store, times 10 million accesses per day.
Notice that the total work grows faster than linearly with the number of transactions: the record store grows with the cumulative volume, while the number of lookups grows with the daily volume, so their product grows roughly quadratically.
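To make the arithmetic concrete, here is a back-of-envelope sketch in Python. The 365-day year and the assumption that every incoming transaction triggers a scan of the full record store (worst case, no index) are my assumptions, not established figures:

```python
# Rough growth estimate for the transaction record store and daily scan load.
# Assumptions: 365 days/year, each incoming transaction requires scanning
# the full historical record store (worst case, no compaction or index).

def records_after(years, tx_per_day):
    """Total transaction records accumulated after `years`."""
    return tx_per_day * 365 * years

def daily_record_reads(years, tx_per_day):
    """Records in the store times scans per day -- a worst-case load proxy."""
    return records_after(years, tx_per_day) * tx_per_day

for rate in (1_000_000, 10_000_000):
    stored = records_after(10, rate)
    reads = daily_record_reads(10, rate)
    print(f"{rate:,} tx/day -> {stored:,} records after 10 years, "
          f"~{reads:,} record reads/day")
```

Under these assumptions a 10x increase in the daily rate gives roughly a 100x increase in daily record reads, which is the faster-than-linear growth mentioned above.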
The more deflation occurs, the more separate transactions we will have, right? The 0.01 minimum will disappear.
Yes, CPU power and storage space were growing exponentially in the past, and we are used to that.
Sure, there may be no limit on capacity, but capacity is not what constrains us: what matters is very fast access times for storage, not huge capacity.
They say we have almost reached the limit on CPU clock speed; now it is the number of cores that is doubling.
Another interesting aspect is that the byte size of a block (and therefore the number of transactions in it) is limited,
and the rate of block generation is limited too, by the dynamic difficulty.
So there is an upper limit on the number of transactions per day, imposed by the block size limit together with the block interval.
Shall we publish the numbers?
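As a starting point, here is a rough calculation under assumed figures (1 MB block size limit, ~250 bytes per average transaction, 10-minute block target; please correct them if they are off):

```python
# Rough ceiling on transactions per day implied by the block size limit.
# Assumed figures: 1 MB max block size, ~250 bytes per average transaction,
# one block every 10 minutes on average.

MAX_BLOCK_BYTES = 1_000_000       # assumed block size limit
AVG_TX_BYTES = 250                # assumed average transaction size
BLOCKS_PER_DAY = 24 * 60 // 10    # ~144 blocks at the 10-minute target

tx_per_block = MAX_BLOCK_BYTES // AVG_TX_BYTES
tx_per_day = tx_per_block * BLOCKS_PER_DAY
print(f"~{tx_per_block:,} tx per block, ~{tx_per_day:,} tx per day")
# -> ~4,000 tx per block, ~576,000 tx per day under these assumptions
```

If those figures are roughly right, the ceiling sits well below the 1 million or 10 million transactions per day discussed above.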