Because we are a decentralized network
You could instead connect to 20,000 peers and send one tiny spam transaction through each.
D&T made the same remark, to which I responded above. You'd have to filter which transactions you would relay, and then priority versus fees kicks in. The only difference is that it's not a binary decision (has/hasn't enough fees to get relayed); it's just a way of sorting what gets relayed from what does not. You may end up relaying a transaction that today would not be relayed, if there's enough "room" for it.
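The sorting-instead-of-a-binary-cutoff idea can be sketched roughly like this. This is an illustrative toy, not Bitcoin's actual relay code; the transaction fields and the `relay_order` function are assumptions made for the example.

```python
def relay_order(txs, room_kb):
    """Pick which transactions to relay: rank every candidate by
    fee-per-kb and keep taking the best ones until the available
    'room' is used up, rather than applying a fixed fee threshold."""
    ranked = sorted(txs, key=lambda tx: tx["fee"] / tx["size_kb"], reverse=True)
    chosen, used = [], 0.0
    for tx in ranked:
        if used + tx["size_kb"] <= room_kb:
            chosen.append(tx)
            used += tx["size_kb"]
    return chosen
```

With this sorting, a low-fee transaction that today would be dropped outright still gets relayed whenever there is room left over after the better-paying ones.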
1) Memory-limit the memory pool: the set of transactions waiting in memory, eligible to be included in a block. Matt Corallo has been working on that. The limit should be a small multiple of the median block size of the last few hundred blocks.
2) Use the same algorithm/parameters/etc for adding transactions to the memory pool that we use to fill blocks.
3) Only relay transactions that fit into your memory pool. This is the DoS prevention, your transaction won't get relayed if your node doesn't think it will end up in a block soon.
4) Estimate minimum transaction fee / priority needed to get into a block, based on:
a) At startup: the transactions in the last few blocks
b) If you've been running long enough to "warm up" your memory pool: transactions in the memory pool
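Steps 1-4 above fit together naturally: a size-capped pool that evicts its cheapest transactions doubles as both the relay filter (step 3) and, once warmed up, the fee estimator (step 4b). The sketch below is a minimal toy under those assumptions; the class and method names are invented for illustration and are not Bitcoin Core's interfaces, and real code would also have to handle priority, not just fee rate.

```python
import heapq

class MemoryPool:
    def __init__(self, limit_kb):
        self.limit_kb = limit_kb   # step 1: cap, e.g. a small multiple
        self.used_kb = 0.0         # of the recent median block size
        self.heap = []  # min-heap on fee-per-kb: cheapest tx evicted first

    def accept(self, txid, fee, size_kb):
        """Steps 2-3: add a transaction, then evict the cheapest ones
        until the pool fits the cap again. Returns False if the new
        transaction itself was evicted, i.e. it should not be relayed."""
        heapq.heappush(self.heap, (fee / size_kb, size_kb, txid))
        self.used_kb += size_kb
        evicted = []
        while self.used_kb > self.limit_kb:
            rate, size, tid = heapq.heappop(self.heap)
            self.used_kb -= size
            evicted.append(tid)
        return txid not in evicted

    def min_fee_per_kb(self):
        """Step 4b: once the pool is warm, the cheapest transaction still
        in it approximates the fee rate needed to get into a block soon."""
        return self.heap[0][0] if self.heap else 0.0
```

Note the nice property: the relay decision and the fee estimate come from the same data structure, so they can't drift apart the way a hard-coded minimum fee does.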
That's nice, better than what is done today and better than what I was saying above. Much more flexible. Nice.
There is one more change I'd like to make that is independent; re-define "dust" based on the floating transaction fee (e.g. a dust output is any output with a value of less than 1/4 the minimum fee-per-kb required to get into one of the next 6 blocks). And make any transactions with dust outputs non-standard, so they're not included in the memory pool or relayed.
Why not apply the same logic you described above? (They would only get dropped if they can't fit in your memory pool.)
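For concreteness, the floating dust rule being discussed could be sketched as below. The `DUST_FRACTION` constant and the `is_standard` helper are illustrative names, not real Bitcoin code; the 1/4 factor comes from the proposal above.

```python
DUST_FRACTION = 0.25  # a dust output is worth < 1/4 of the min fee-per-kb

def is_standard(output_values, min_fee_per_kb):
    """A transaction is standard (relayed, kept in the memory pool) only
    if none of its outputs fall below the floating dust threshold."""
    dust_threshold = min_fee_per_kb * DUST_FRACTION
    return all(value >= dust_threshold for value in output_values)
```

Because `min_fee_per_kb` floats with demand, the dust threshold rises and falls with it instead of being a hard-coded constant.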