"Is it needed yet?"
I think so. According to the graph below, which I got from this blog post (by Gavin Andresen), the mempool (as of May 2015) would often not be cleared when new blocks were found and propagated. Even though this is a small dataset, I take it to mean that there are often unpredictable backlogs that will deter people from using Bitcoin.
[Graph from the blog post: mempool size over time, May 2015. The sharp dips are when new blocks were found.]
I'd say that coding an 8000x increase into the protocol that is in no way connected to actual capacity needs doesn't really line up with that.
Well, keep in mind that miners won't be required to create larger blocks; they'll just have the opportunity to do so if they want to.
To be fair, they would still need to validate larger blocks if another miner were to create one (which does take resources).
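For context on where the 8000x figure likely comes from: I believe it refers to BIP 101's proposed schedule, which would start the cap at 8 MB and double it every two years for twenty years. A quick sanity check of that arithmetic (with those assumptions):

```python
# Sanity check of the "8000x" figure, assuming it refers to BIP 101's
# proposed schedule: an 8 MB cap at activation, doubling every two
# years over twenty years (ten doublings total).
initial_cap_mb = 8
doublings = 20 // 2                            # one doubling per two years
final_cap_mb = initial_cap_mb * 2**doublings   # 8 * 1024 = 8192 MB
current_cap_mb = 1
print(final_cap_mb / current_cap_mb)           # ~8192, i.e. roughly "8000x"
```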
If I remember correctly, Satoshi added the 1 MB limit in 2010 to prevent some sort of DDoS attack. If so, then there are a few important questions:
1) Is that DDoS attack still a viable threat if the maximum block size limit is removed or increased?
I believe that transaction volume was very low when the 1 MB limit was implemented in 2010, with even fewer economically significant transactions (according to the Bitcoin Wiki, MtGox was only created in June 2010, and the first 10k BTC pizza was purchased in May 2010). So I think the greater threat then was that nodes would get overwhelmed by a DDoS attack, versus users having difficulty getting their transactions confirmed.
I believe that the majority of nodes would be able to support the validation, rebroadcasting, and storage of larger blocks, as well as the larger mempool expected to accompany them.
2) If it is, is it possible to prevent the attack with some other method that does not require a maximum block size limit?
If the maximum block size is too low (as I believe it is now), then it would be very easy and cheap for an adversary to prevent legitimate economic transactions from confirming, which would cripple the Bitcoin economy, as evidenced by the most recent stress tests. I also know that too small a maximum block size will allow the mempool to get overwhelmingly large and cause many nodes to crash (my node crashed several times, sometimes multiple times per day, at the tail end of the most recent stress tests).
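To put a rough scale on how fast a backlog can build (these are my own illustrative assumptions, not measurements from the stress tests): at the 1 MB cap the network clears only a few transactions per second, so any sustained spam above that rate grows the mempool linearly.

```python
# Back-of-envelope mempool growth during a spam/stress-test scenario.
# Every number here is an illustrative assumption, not a measurement.
avg_tx_size = 500         # bytes per transaction (assumed average)
block_size = 1_000_000    # current 1 MB block size limit
block_interval = 600      # seconds, ~10 minutes per block

capacity_tps = block_size / avg_tx_size / block_interval
print(f"throughput at the cap: ~{capacity_tps:.1f} tx/s")    # ~3.3 tx/s

spam_tps = 10             # assumed incoming rate during an attack
growth_mb_per_hour = (spam_tps - capacity_tps) * avg_tx_size * 3600 / 1e6
print(f"mempool growth: ~{growth_mb_per_hour:.0f} MB/hour")  # ~12 MB/hour
```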
3) If it is not possible to prevent the attack with some other method, then is it possible to determine the best limit today, in the near future, and in the long term, without needing to have this debate repeatedly for the rest of bitcoin's existence?
I don't think so. It is too difficult to predict that far into the future, as even a small error in an estimate versus actual growth would compound into a very large difference over time. It is, however, much easier to lower the maximum block size limit than it is to raise it (a soft fork could be used to lower it, which is much easier to deploy). So I believe it would be best to be somewhat aggressive in raising the max block size limit, as long as we are confident that the network can handle the larger blocks in the near term.
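To illustrate the compounding problem with hypothetical growth rates (these percentages are just for illustration):

```python
# How a small error in a growth estimate compounds over 20 years.
# The 20%/year and 25%/year figures are hypothetical.
years = 20
planned = 1.20 ** years   # capacity planned for 20%/year demand growth
actual = 1.25 ** years    # demand that actually grows at 25%/year
print(f"planned: {planned:.0f}x  actual: {actual:.0f}x")   # ~38x vs ~87x
```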
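As for why lowering is easier than raising: lowering the cap only tightens the validity rules, so every block new nodes accept is still accepted by old nodes (a soft fork), whereas raising it produces blocks that old nodes reject (a hard fork). A minimal sketch of that asymmetry, using hypothetical limits:

```python
# Why lowering the cap is a soft fork but raising it is a hard fork:
# a tighter rule accepts a strict subset of what old nodes accept.
OLD_CAP = 1_000_000    # current consensus limit (bytes)
LOWERED = 500_000      # hypothetical lowered cap (soft fork)
RAISED = 8_000_000     # hypothetical raised cap (hard fork)

def valid(size, cap):
    return size <= cap

for size in (400_000, 900_000, 4_000_000):
    # Any block valid under LOWERED is also valid under OLD_CAP, so old
    # nodes follow the new chain without upgrading. The 4 MB block is
    # valid only under RAISED, so old nodes reject it and the network
    # splits unless everyone upgrades.
    print(size, valid(size, OLD_CAP), valid(size, LOWERED), valid(size, RAISED))
```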