The few DHTs that exist which are proposed to be attack resistant in any serious way— things like CJDNS's routing or Freenet— work by imposing a 'social' network link topology on the network which is required (by the security assumptions) to be largely sybil-proof. ... A pretty strong requirement.
Fortunately, this is neither here nor there, because the requirements of the Bitcoin system are almost but not completely unlike the services provided by a DHT. DHTs provide users with random access to unordered data. In Bitcoin there is no access pattern which resembles a hash table lookup.
To verify a block we must confirm that the inputs for the transactions it contains are spendable— that they were previously created in the same chain and have not (yet) been spent. For this, all nodes require the same data, not random data. We do not even require the full prior transactions, just the TXouts (potentially reducing tens of kilobytes of data to tens of bytes of data).
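To make the access pattern concrete, here's a minimal sketch of the lookup validation actually needs: a map keyed by (txid, output index) holding only the compact output, not the whole prior transaction. The names and structures are illustrative only, not the reference client's actual ones.

```python
# Minimal sketch (illustrative names, not the reference client's structures):
# validation only needs a map from (txid, output index) to the compact TXout.
from collections import namedtuple

TxOut = namedtuple("TxOut", ["value", "script_pubkey"])  # tens of bytes, not KB

utxo_set = {}  # (txid, vout) -> TxOut; every validating node needs this same data

def add_outputs(txid, outputs):
    """Record a confirmed transaction's outputs as spendable."""
    for vout, txout in enumerate(outputs):
        utxo_set[(txid, vout)] = txout

def spend_input(txid, vout):
    """Check an input refers to an existing unspent output, then consume it."""
    key = (txid, vout)
    if key not in utxo_set:
        raise ValueError("input spends a nonexistent or already-spent output")
    return utxo_set.pop(key)
```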
If we do not have this data, but could verify it if it were handed to us (e.g. if we'd been tracking a committed UTXO set root hash), our peer could provide it for us along with the block. So long as we have _any_ peer willing to give us the block, we'd have a guaranteed way to obtain the required data— immediately eliminating most of the DHT attack weaknesses (in the literature, systems with properties like this are sometimes called D1HTs).
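As a sketch of what "verify it if it were handed to us" could look like, assuming a simple binary hash tree over the UTXO set whose root the node already tracks (Bitcoin has no such commitment today, so the leaf encoding and tree shape here are purely hypothetical): the peer hands over the leaf data plus its sibling hashes, and the node recomputes the root.

```python
# Sketch of checking peer-supplied UTXO data against a committed root hash.
# Assumes a plain binary hash tree over the UTXO set; no such commitment
# exists in Bitcoin today, so this is purely illustrative.
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def verify_utxo_proof(committed_root, leaf_bytes, path):
    """path: list of (sibling_hash, sibling_is_right) ordered from leaf to root."""
    node = h(leaf_bytes)
    for sibling, sibling_is_right in path:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == committed_root
```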
Unfortunately, obtaining just-in-time data comes with a large bandwidth overhead: if you're storing nothing at all then any data you receive must come with hash-tree fragments proving its membership. With conventional hash trees each txin requires on the order of 768 bytes of proof data... and with current technology bandwidth is far more limited than storage, so this may not be a great tradeoff. One possibility here is that with some minor changes nodes could randomly verify fractions of blocks (and use information theoretic PIR to hide from peers which parts they are verifying), and circulate fraud notices (the extracted data needed to prove to yourself that a block is bad) if they find problems. This may be a good option to reduce bandwidth usage for edge clients which currently verify nothing, but it's not helpful overall (since one hop back from the edge the host must have the full block)... I'd say it would be a no-brainer, but getting the rarely executed fraud-proof codepaths correct may be too much of an engineering challenge (considering the level of failure alt implementations have had with the consensus rules in Bitcoin as is).
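For a sense of where a figure like 768 bytes per txin comes from, and why bandwidth ends up being the binding constraint, here's a back-of-the-envelope calculation; the UTXO set size, hash size, and inputs per block are all illustrative assumptions.

```python
# Back-of-the-envelope proof overhead; all of these numbers are assumptions.
import math

utxo_entries = 2 ** 24   # illustrative UTXO set size
hash_size = 32           # bytes per sibling hash in the tree
tree_depth = math.ceil(math.log2(utxo_entries))
proof_per_txin = tree_depth * hash_size
print(proof_per_txin)    # 24 levels * 32 bytes = 768 bytes per input

txins_per_block = 2000   # illustrative
print(txins_per_block * proof_per_txin)  # ~1.5MB of proof data per block
```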
Managing storage also does not have any need for any kind of sophisticated DHT— since 0.8 the reference client separates storage of the UTXO and block data. The UTXO, stored in the chainstate directory, is the exclusive data structure used for verifying new blocks. The blocks themselves are used only for reorganizations and for feeding new nodes that request them— there is no random access used or required to transaction data. If you remove the test for block data in init.cpp, the node will happily start up with all the old blocks deleted and will work more or less correctly until you call a getblock/etc RPC that reads block data or until a newly initializing peer requests old block data from you. The chainstate data is currently about 450MB on disk, so with that plus some recent blocks for reorganization you can already run a full verifying node. The task of initializing a new node requires verifying the historic blocks, so to accommodate that in a world with many pruned nodes we'd want to add some data to the addr messages, but a couple of ranges of served blocks is sufficient— no need for routing or other elaboration. And block ranges match the locality of access, so there is no overhead (beyond a couple bytes in the addr messages).
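The addr extension meant here is nothing fancier than each peer advertising a couple of height ranges it still stores; a purely illustrative sketch (field names and message shape hypothetical, not an actual protocol proposal):

```python
# Illustrative sketch of peers advertising served block ranges in addr data;
# the field names and message shape are hypothetical, not an actual proposal.
from collections import namedtuple

PeerAddr = namedtuple("PeerAddr", ["host", "served_ranges"])  # ranges: [(lo, hi)]

def peers_serving(peers, height):
    """Return the peers whose advertised ranges cover the requested height."""
    return [p for p in peers
            if any(lo <= height <= hi for lo, hi in p.served_ranges)]

peers = [PeerAddr("a.example", [(0, 100000)]),
         PeerAddr("b.example", [(0, 2000), (250000, 300000)])]
print([p.host for p in peers_serving(peers, 1500)])  # both cover height 1500
```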
The change to the reference client in the "ultraprune" patchset to separate out the UTXO in this way was no accident— it was a massive engineering effort specifically done to facilitate the transition to a network where no single node is required to store all the historical data, without any compromise to the security model. You can see this all discussed in detail on the bitcoin-development list, on and off, going back years.
I don't think that any of this is actually incompatible with what you were _actually_ thinking— you're no fool, and your intuition of what would actually work seems more or less the same as mine. But the fuzzy armwave about just scattering block data to peers with no mind to locality or authentication is a common trope of people who have no clue about the security model and who are proposing something which very much won't actually work, so I think it's important to be clear about it: keeping pruned validation data locally and not storing blocks historically is already the plan of record, it's not a DHT, and the DHT proposals are usually completely misguided.
Sipa's analysis about a year ago was that block access frequencies follow an exponentially decaying pattern out to ~2000 blocks back, and beyond that have roughly uniform access probability (further back than 2000 the requesting host is typically a new node requesting all of them in order), so it would be prudent to have hosts store as much of the most recent 2000 or so blocks as they can, and then use additional space for a random range or two from the history. I'd consider 144 to be a sane minimum, because with less than that long reorganizations may cause severe problems.
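A sketch of a retention policy along those lines, with illustrative numbers: always keep the most recent blocks (never fewer than 144, up to ~2000), then spend any leftover budget on a random contiguous range of older blocks.

```python
# Sketch of a retention policy following that analysis; numbers illustrative.
import random

RECENT_TARGET = 2000   # accesses decay exponentially out to roughly here
REORG_MINIMUM = 144    # below this, long reorganizations become a problem

def choose_retained_ranges(tip_height, budget_blocks, rng=random):
    """Return a list of (start, end) block height ranges to keep on disk."""
    budget_blocks = max(budget_blocks, REORG_MINIMUM)
    recent = min(budget_blocks, RECENT_TARGET, tip_height + 1)
    ranges = [(tip_height - recent + 1, tip_height)]
    leftover = budget_blocks - recent
    old_span = tip_height - recent          # heights 0 .. old_span - 1 are "old"
    if leftover > 0 and old_span > leftover:
        start = rng.randrange(0, old_span - leftover)
        ranges.append((start, start + leftover - 1))
    return ranges

print(choose_retained_ranges(tip_height=300000, budget_blocks=5000))
```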