As it appears to be consensus that deriving the spec is too complex and tricky for a human being
I really don't understand this sentiment, especially from one of the devs. I think everyone who complains that it's too difficult simply hasn't bothered to find out how anything works.
In fact, I think it speaks volumes about those complaining that every one of these threads talks about a 'protocol spec', when in fact the way the network passes messages is largely irrelevant to the working of Bitcoin (aside from the fact that it's very well documented and simple). The only important part is the binary format of transactions and blocks, so that you can verify proof-of-work and merkle roots; beyond that it really doesn't matter how you obtain them - you could get all the data you need from blockexplorer.com and bypass connecting to the network entirely if you really wanted to.
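To make that concrete, here's roughly what the proof-of-work check looks like once you have a raw 80-byte header - it's just double SHA-256 plus a comparison against the target encoded in nBits. This is a minimal sketch in Python; the helper names are mine, the field offsets are from the documented header layout.

import hashlib
import struct

def block_hash(header80: bytes) -> bytes:
    """Double SHA-256 of the raw 80-byte header (this is the block hash)."""
    return hashlib.sha256(hashlib.sha256(header80).digest()).digest()

def bits_to_target(bits: int) -> int:
    """Expand the compact nBits representation into the full 256-bit target."""
    exponent = bits >> 24
    mantissa = bits & 0x007FFFFF
    return mantissa << (8 * (exponent - 3))

def check_pow(header80: bytes) -> bool:
    """True if the header hash, read as a little-endian integer, is <= the target."""
    bits = struct.unpack_from("<I", header80, 72)[0]  # nBits sits at byte offset 72
    hash_as_int = int.from_bytes(block_hash(header80), "little")
    return hash_as_int <= bits_to_target(bits)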
There's only one thing you actually need to worry about - block validation, which is explained here. If it's too much to implement in one go, start with SPV-style checks and work your way up; it's better to accept bad blocks than to reject good ones. Just don't mine or relay blocks until you're sure your verification is complete, or else isolate yourself from the rest of the network by connecting only to a single trusted reference client node.
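The "start with SPV-style checks" part really is as small as it sounds: walk the headers in order, check that each one commits to the hash of its predecessor, and check its proof-of-work. A rough sketch, reusing block_hash/check_pow from the snippet above and deliberately ignoring difficulty retargeting, timestamps and the like:

def check_header_chain(headers: list[bytes]) -> bool:
    """Minimal SPV-style check over raw 80-byte headers in chain order."""
    for i, header in enumerate(headers):
        if len(header) != 80 or not check_pow(header):
            return False
        if i > 0:
            prev_hash_field = header[4:36]          # hashPrevBlock at offset 4
            if prev_hash_field != block_hash(headers[i - 1]):
                return False
    return True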
There's also the question of why you actually want to implement a fully verifying node. The only reason I can think of is if you want to mine with it - so you don't waste your effort producing invalid blocks. Other than that, you can get all the functionality you need from SPV checks plus a full copy of the blockchain (making sure to verify the merkle roots) for significantly less development effort, lower CPU and RAM usage, and a lower likelihood of forking yourself into oblivion, at the cost of only a tiny reduction in security.
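The "verify the merkle root" step is also tiny: rebuild the root from the block's txids (raw bytes in internal byte order, double SHA-256, duplicating the last hash at any level with an odd count) and compare it to the header field. Another small sketch with my own helper names:

import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list[bytes]) -> bytes:
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])                 # duplicate the odd one out
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def check_merkle_root(header80: bytes, txids: list[bytes]) -> bool:
    return header80[36:68] == merkle_root(txids)    # merkle root sits at offset 36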
Additionally, there is almost zero financial incentive for creating an alternative implementation - no-one will trust it if it's not open source, and no-one will buy it if it is open source.
The biggest problems in creating an alternative client are software-engineering problems, not implementing the network protocol. How do you guarantee database consistency, or recover from corruption, if the power goes out while you're processing a block? How do you deal with re-orgs? How do you store your blocks/headers for best performance? How do you minimize the expensive processing you perform on bad data before you realize it's bad? How do you properly deal with errors like failed verification or your DB crapping out, so that you don't confuse one with the other and end up stuck on a fork? If you receive an invalid block a number of times from different peers, when do you decide that either you're under attack or that maybe you're rejecting a block that's actually good, and what do you do about it? Etc.
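To give one example of what I mean by the first question: one common answer is to apply a block's state changes and the new chain tip in a single database transaction, so a crash leaves you either fully before or fully after the block, never halfway. A hypothetical sketch (table and column names are made up):

import sqlite3

def apply_block(db: sqlite3.Connection, block_hash: bytes, height: int,
                spent: list[bytes], created: list[tuple[bytes, bytes]]) -> None:
    # One transaction: committed on success, rolled back on any exception,
    # so a power cut mid-block can't leave the UTXO set and the tip out of sync.
    with db:
        db.executemany("DELETE FROM utxos WHERE outpoint = ?",
                       [(o,) for o in spent])
        db.executemany("INSERT INTO utxos (outpoint, txout) VALUES (?, ?)", created)
        db.execute("UPDATE chainstate SET tip_hash = ?, tip_height = ?",
                   (block_hash, height))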
I think people are just scared of having to deal with network sockets and processing binary data; if everything were passed around as JSON there'd be no complaints (other than about the network usage).
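And the binary data really isn't scary - most of the fixed-layout parts are just little-endian fields you can unpack in one call. A sketch of decoding the six block-header fields, again with my own names:

import struct

def parse_header(header80: bytes) -> dict:
    version, prev_block, merkle_root, timestamp, bits, nonce = \
        struct.unpack("<I32s32sIII", header80)
    return {
        "version": version,
        "prev_block": prev_block[::-1].hex(),     # reversed for the conventional hex display
        "merkle_root": merkle_root[::-1].hex(),
        "timestamp": timestamp,
        "bits": bits,
        "nonce": nonce,
    }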