DoS attack: somebody sends you lots of fake transactions with non-existent inputs. To dismiss such a transaction you need to confirm that the input is indeed non-existent, which requires one or more read operations.
If you store the UTXO set on hard disk drives, it would be pretty much trivial to DoS-attack you, because one HDD can do something like 100 random reads per second.
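To make the asymmetry concrete, here is a toy sketch (my illustration, not code from any real node): forging an input is nearly free for the attacker, while the node has to look the referenced outpoint up in its UTXO database just to prove it doesn't exist.

```python
import os

def forge_fake_input():
    """Attacker side: a random txid + index that almost surely doesn't exist."""
    return os.urandom(32).hex(), 0

def is_spendable(utxo_db, txid, index):
    """Node side: one lookup per input. Here utxo_db is an in-memory dict;
    on a real node whose UTXO set lives on an HDD this is a random read."""
    return (txid, index) in utxo_db

# Back-of-envelope: at ~100 random reads per second, a few hundred
# single-input fake transactions per second are enough to saturate one HDD.
```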
This is why people think there should be a limit on the UTXO set size, for example via the introduction of a minimal viable transaction output value. (Say, if a TXO's value is 1 satoshi, it might be seen as spam.)
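A minimal sketch of what such a "minimal viable output value" policy looks like, in the spirit of the dust rule Bitcoin Core applies today (under default settings the threshold works out to 546 satoshis for a typical P2PKH output; the exact number here is illustrative, not part of the discussion above):

```python
DUST_THRESHOLD_SAT = 546  # assumed threshold for a legacy P2PKH output

def violates_dust_policy(tx_outputs):
    """Return True if any output is below the assumed dust threshold.
    tx_outputs is a list of (value_in_satoshis, script) pairs."""
    return any(value_sat < DUST_THRESHOLD_SAT for value_sat, _script in tx_outputs)
```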
Please understand me correctly: this is just what I saw in discussions here, and it appears to be a prevalent opinion.
Of course, it is possible to design the system in such a way that the UTXO set size won't be a problem and won't affect costs; this is what I'm talking about here...
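Just as one illustration of how lookup cost can be decoupled from UTXO-set size (not necessarily the design I mean above): keep a compact probabilistic filter over all unspent outpoints in RAM, so a transaction referencing a non-existent input is rejected without touching the disk at all, and only the rare false positives fall through to the on-disk database.

```python
import hashlib

class OutpointFilter:
    """A toy Bloom filter over (txid, index) outpoints."""

    def __init__(self, num_bits=8 * 1024 * 1024, num_hashes=7):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)  # 1 MiB of RAM in this example

    def _positions(self, txid, index):
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{txid}:{index}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.num_bits

    def add(self, txid, index):
        for pos in self._positions(txid, index):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, txid, index):
        # False means "definitely not in the UTXO set": reject with zero disk I/O.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(txid, index))
```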
I see that the majority of developers still consider block verification a sort of race that happens after each block gets broadcast. I consider that an implementation artefact of the proof-of-concept code. From both information-theoretic and game-theoretic points of view, a block should only refer to transactions that were already broadcast and are normally verified at leisure during the inter-block period.
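Here is a minimal sketch of that "verify at leisure" flow as I read it (my own illustration, not code from any real client): transactions are fully validated when they are first broadcast, spread over the inter-block period, and their ids cached; a block that only references already-verified transactions then needs little more than a membership check when it arrives.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    verified_txids: set = field(default_factory=set)

    def on_transaction(self, txid: str, is_valid: bool) -> None:
        # Expensive checks (signatures, input lookups, fees) happen here,
        # at whatever pace the node can afford during the inter-block window.
        # `is_valid` stands in for the result of that full validation.
        if is_valid:
            self.verified_txids.add(txid)

    def on_block(self, block_txids: list) -> list:
        # No race at block time: just confirm every referenced transaction
        # was already verified, and return whatever still has to be fetched
        # and validated the slow way.
        return [t for t in block_txids if t not in self.verified_txids]
```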
I'm not really familiar with game-theoretic methods of system analysis. I'm way more comfortable with the hierarchical control systems methodology, where the nodes intend to cooperate honestly but have limited information about the state of the whole system, or various resource limitations in their internal processing.
I'll be watching the evolution of this problem with great interest. Maybe someone has a game-theoretic attack hidden up their sleeve. Or it may turn out that this is just a problem in industrial and organizational psychology, where hard-scientific reasoning has to take a back seat.