Luke, shame on you for just dropping this message and calling for a forum vote without giving people who disagree with you fair time to write an opposing position. (Edit: Specifically I'm complaining about the forum vote aspect of this; Luke has been pretty upfront with his complaints at least since he started raising them.)
As with a lot of things, the subtle details here are technical and can be tricky to understand. It's hard to find a way of expressing the situation that helps people reason about the consequences without making them learn everything the developers know.
Here is my take on the situation. We have a few related needs:

- We need receiver-controlled disposition of funds: you should be able to elect to have a multi-party secured wallet _personally_ without burdening the people who send to you.
- We need this to have a compact representation, both as an address (600-byte addresses have severe usability problems) and in the chain (so the sender doesn't have to pay larger fees just because you have a complicated escrow for your account).
- Ideally, we'd like to move more of the blockchain storage from outputs to inputs, because inputs are always prunable, and this will help tame chain bloat in the future.
After some discussion it was proposed that we add a new opcode, OP_EVAL, which would allow a transaction to be validated by a script provided in the transaction itself. With a solution in hand, no further alternatives were considered and implementation went ahead.
Things were looking pretty good for about a month, but then Roconnor went to implement OP_EVAL in his own implementation of the bitcoin blockchain validation code and immediately sounded alarms: he found some serious bugs in the OP_EVAL implementation (which were quickly fixed), but he also had a more fundamental complaint: OP_EVAL makes script Turing complete and breaks the ability to statically analyze scripts.
At least the former part wasn't unknown to the other developers, but I don't think we'd thought through the consequences completely, because adding Turing completeness wasn't one of the goals at all (see the list above); it was a side effect. A side effect we thought was tamed by recursion limits (which weren't actually working), but recursion limits aren't enough to recover the analyzability of script that is lost.
Analyzability is important because it's what allows you to make definite, accurate statements about what a script will and won't do without actually executing it. It also makes it possible to write implementations of bitcoin with stronger, concrete, and auditable statements about their security. It's the property that lets you uphold the security principle of "validate input; then act on it". Because the bitcoin system's security comes _PURELY_ from software (it doesn't matter how great Gavin is, he can't reverse your transactions if they go wrong due to software bugs), it's very important for the continued adoption of bitcoin that we be able to back it up with the most secure software possible, and that we be able to prove that security to skeptics.
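To make the analyzability point concrete, here is a minimal sketch (my own illustration, not code from any actual implementation) of the kind of static check a loop-free script permits: you can scan the opcodes of a script and bound the signature operations it can ever perform without executing anything. The opcode byte values are the standard script ones; the scan itself is deliberately simplified.

[code]
# Simplified illustration: statically bound the signature checks in a script
# by scanning opcodes. This works only because script has no loops, so each
# opcode runs at most once and no execution is needed to get the bound.
OP_PUSHDATA1, OP_PUSHDATA2, OP_PUSHDATA4 = 0x4c, 0x4d, 0x4e
OP_CHECKSIG, OP_CHECKMULTISIG = 0xac, 0xae

def count_sigops(script: bytes) -> int:
    sigops, i = 0, 0
    while i < len(script):
        op = script[i]
        i += 1
        if 1 <= op <= 0x4b:                  # direct push of `op` data bytes
            i += op
        elif op == OP_PUSHDATA1:
            i += 1 + script[i]
        elif op == OP_PUSHDATA2:
            i += 2 + int.from_bytes(script[i:i+2], "little")
        elif op == OP_PUSHDATA4:
            i += 4 + int.from_bytes(script[i:i+4], "little")
        elif op == OP_CHECKSIG:
            sigops += 1
        elif op == OP_CHECKMULTISIG:
            sigops += 20                      # conservative worst-case bound
    return sigops
[/code]

With OP_EVAL the code that actually executes is data that is only supplied (and can even be computed) at spend time, so a scan like this can no longer tell you what an output will ultimately demand.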
Roconnor's concerns came very late in the process, but they deserved a lot of weight: as one of the few people who have implemented the whole system from scratch (not something even Gavin can say of himself) he has an important and almost unique perspective, he can rightfully be considered an expert in the field of formal software validation, and his concerns came packaged with real security vulnerabilities that everyone else had missed, proving his deep understanding of the code. TD, author of the BitcoinJ implementation, had already expressed serious reservations about the whole thing.
Perhaps most significant for me is this simple fact: script was very clearly designed by Satoshi NOT to be Turing complete, and considerable effort went into the design to achieve this property. The second sentence of the description of script on our wiki says
"It is purposefully not Turing-complete, with no loops." It seems foolish to me to abandon a core design principle of script, one which greatly helps in reasoning about its security, without a darn good reason. And we don't have a reason at all: our purpose had nothing to do with Turing completeness, that was just a side effect. (The size limits of script and its limited IO make it pretty hard to come up with anything where the Turing completeness is actually useful.)
After further discussion and iteration over roughly a half dozen other proposals, it was realized that it was possible to accomplish _exactly_ the original goals, in a much simpler manner, without introducing Turing completeness. This is what P2SH does. I personally think that if we'd come up with the idea of P2SH first, we would have stopped there and never considered OP_EVAL.
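For anyone who hasn't looked at the proposal, here is a rough sketch of its shape (simplified and illustrative, not the actual validation rules): the output the sender pays to commits only to a 20-byte hash of a "redeem script", and the spender later reveals that script along with the signatures it requires. The example below assumes a 2-of-3 multisig redeem script with placeholder keys, just to show the sizes involved.

[code]
# Illustration only (not consensus code): why P2SH keeps addresses and
# outputs small no matter how complicated the receiver's policy is.
import hashlib

def hash160(data: bytes) -> bytes:
    # RIPEMD160(SHA256(data)); needs an OpenSSL build that provides ripemd160.
    return hashlib.new("ripemd160", hashlib.sha256(data).digest()).digest()

# Placeholder 2-of-3 multisig redeem script:
# OP_2 <33-byte key> <33-byte key> <33-byte key> OP_3 OP_CHECKMULTISIG
redeem_script = b"\x52" + b"".join(b"\x21" + bytes(33) for _ in range(3)) + b"\x53\xae"

# What the sender's output actually contains: OP_HASH160 <20-byte hash> OP_EQUAL
script_pubkey = b"\xa9\x14" + hash160(redeem_script) + b"\x87"

print(len(redeem_script))  # 105 bytes here, more with real/uncompressed keys
print(len(script_pubkey))  # 23 bytes, regardless of the policy's complexity
[/code]

The spending input then supplies the signatures plus the serialized redeem script; nodes check its hash against the committed one and only then evaluate it. Because the redeem script arrives as a literal data push matched against a fixed template, it can be pulled out and analyzed before anything executes, which is exactly the property OP_EVAL gives up.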
Unfortunately, Luke has taken a position that I consider weird: that P2SH is bad because it's a special case instead of being a regular opcode that executes arbitrary code, and that this special-caseness makes it inelegant. My own position is that none of this is natural law: every behavior is a special case and happens only because the software says so. Some parts are more regular than others, but what we should care about is implementation complexity, and P2SH's complexity is very low.
What perplexes me more is that Luke stated he would withhold his complaints if the 'other developers' announced that "Bitcoin 2.0" (whatever that would be) would only use P2SH-style transactions and that non-P2SH would be deprecated. I think a lot of people who have considered this see it as likely, since the potential pruning savings of P2SH-style transactions are tremendous and could significantly reduce spam risks, but no one can make promises about a future system that doesn't exist yet, may never exist, and may not involve the current developers.
[It was also unfortunate that Luke had other commitments at the weekly development meeting where ~everyone else decided that P2SH was an acceptable compromise, so he sort of appeared two days later complaining about it after it was perceived by many other people as already settled. As a result, not a lot of care has been given to his complaints, even though he has been pretty persistent about them.]
I fail to see a lot of genuineness in complaints that take the form of "do it completely, or not at all, a little isn't acceptable" unless there is a good reason why the moderate path is a bad one, and I don't see one here.
P2SH is a good technical solution which addresses our needs with decreased complexity and risk compared to OP_EVAL. Should there ever be a need for the Turing completeness that OP_EVAL offers which is strong enough to justify the costs and risks, it could still be deployed then: P2SH does not preclude the possibility of OP_EVAL in the future.