They both suck and suffer from the same Smart Contract vulnerabilities, so both will die in the end if nothing is done to remedy the problem.
There are no Smart Contract vulnerabilities. slock.it messed up their code. The vulnerability was that they did not protect
their DAO.
A bit like writing a cheque with the correct date, signing it, but leaving the value blank, and then giving it to a homeless man. Is the bank to blame when the homeless man writes in his own value and goes to cash it?
But if the bank recalls all the cash and then issues new cash just to stop one homeless man taking advantage of the bloke who wrote the bad cheque, that could be seen as a governance vulnerability, don't you think?
I agree with that principle. And it is true that what failed was the DAO code, and ethereum "worked like a charm".
But the DAO was written like normal software, on a normal software platform (ethereum). Normal software, on normal software platforms, contains bugs and exploits as soon as it is even slightly complex. If we knew how to write complex code without bugs and exploits, we'd have been doing it for the last 40 years. Even the most talented and experienced software engineers, on platforms they know like the back of their hand, introduce bugs and exploits. That's unavoidable. So what happened to the DAO was perfectly normal.
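To make that concrete: the DAO drain was a reentrancy attack. Here is a minimal Python sketch of the pattern (not real Solidity/EVM code; the Vault class, evil_send, and the numbers are all invented for illustration): the balance is only zeroed AFTER the external payout call, so a malicious recipient can re-enter withdraw() and be paid over and over.

```python
# Minimal reentrancy sketch: the external call happens BEFORE the
# internal state update, so a hostile callee can call back in.

class Vault:
    def __init__(self):
        self.balances = {}

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount

    def withdraw(self, who, send):
        amount = self.balances.get(who, 0)
        if amount > 0:
            send(who, amount)          # external call happens FIRST...
            self.balances[who] = 0     # ...state is updated too late

vault = Vault()
vault.deposit("attacker", 10)
vault.deposit("victim", 90)

stolen = []
depth = [0]

def evil_send(who, amount):
    stolen.append(amount)
    if depth[0] < 9:                   # re-enter before the balance reset
        depth[0] += 1
        vault.withdraw("attacker", evil_send)

vault.withdraw("attacker", evil_send)
print(sum(stolen))                     # 100: far more than the 10 deposited
```

The code "works like a charm" in the same sense ethereum did: every instruction executes exactly as written. The exploit lives entirely in the ordering of two correct-looking lines.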
Now, when bugs and exploits are discovered in normal software, what happens is that one PUSHES A SECURITY UPDATE. You can't do that with a smart contract.
After a while, as your software is used and tested in real-world situations, it becomes secure against bugs. However, you will almost never know that it is secure against exploits. The difference between a bug and an exploit is that a bug is the software misbehaving within the range of "normal user input". As normal usage samples more and more of the normal input space, the statistical probability that a significant island of failing input remains unsampled becomes smaller and smaller. That's why heavily used software becomes more and more bug-free (unless new features are added to it): the space of normal user input is sampled ever more densely. So "old stable software" is essentially bug-free.
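A toy calculation of that sampling argument (the island size and input-space size are of course made up for illustration):

```python
# Suppose a bug lives on a tiny "island" of 10 bad inputs out of a
# million possible normal inputs. The probability that n independent
# normal uses ALL miss the island is (1 - 10/1e6)**n, which collapses
# as usage grows.

p_island = 10 / 1_000_000            # fraction of normal inputs that fail

for n_uses in (1_000, 100_000, 1_000_000):
    p_unnoticed = (1 - p_island) ** n_uses
    print(f"{n_uses:>9} uses -> bug still unnoticed with prob {p_unnoticed:.3g}")
```

After a thousand uses the bug is probably still hidden (~0.99); after a million it is essentially certain to have surfaced (~4.5e-5). That is "old stable software" in one formula.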
An exploit, however, is a malfunction of the software OUTSIDE the normal user input space, sought out deliberately by someone who wants the software to misbehave. The lifetime of an exploit is much, much longer than that of a bug, simply because:
1) the space of possible "abnormal" user inputs is way bigger than the space of normal user input
2) that space is not sampled by normal users
3) that abnormal user input goes against the abstract logic of the software, so the whole engineering process is set up so that the programmer does NOT think of these illogical cases.
As such, the dev crew is effectively "brainwashed" NOT to discover their own exploits. Finding exploits requires a "hacker's mindset".
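A hedged sketch of that distinction, with invented names: a transfer routine that behaves perfectly for every input an honest user would ever send, yet fails catastrophically on an input nobody "normal" would think to try.

```python
# The "abstract logic" assumes amounts are positive, so the programmer
# never thinks to check; a test suite built from normal usage samples
# only positive amounts and passes.

balances = {"alice": 100, "mallory": 5}

def transfer(src, dst, amount):
    if balances[src] >= amount:        # 5 >= -1000 is True!
        balances[src] -= amount
        balances[dst] += amount

transfer("mallory", "alice", 3)        # normal input: works as intended
transfer("mallory", "alice", -1000)    # abnormal input: mints money
print(balances)                        # mallory now holds 1002
```

No amount of normal usage would ever have sampled a negative amount; only someone hunting for misbehavior tries it.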
The "exploit space" is only sampled by real hackers, having a real mobile to hack. You can only protect yourself up to a certain level of "hacker's competence" by demanding an external audit.
But the whole point is that all these things are accepted nuisances in normal software, because normal software needs to be immensely flexible and to cover immensely varied use cases. This is why software platforms are very versatile, and it is exactly this flexibility that allows for all these "unexpected" cases.
A contract is normally much simpler. We've seen with the banking crisis that it is NOT a good idea to allow very complicated contracts. As a general guideline, *you should only sign a contract when you've understood ALL THE FINE PRINT*. That means the terms of a contract shouldn't be too complicated in general.
As such, the "state tree" of a contract should be finite and sufficiently limited that the person signing up can have a complete overview of it: "If this happens, that clause is valid ; if that happens, that clause is valid ; finally if not this but that happens, at that moment, that clause is valid" should be ENTIRELY CLEAR to the signer.
The terms of a smart contract are hence much simpler than the entire state space of general software. If not, things go wrong, in the same way the real-estate security swaps went wrong. There is no good reason to have contracts with very involved state spaces.
And IF we limit the state spaces, then we CAN prove that a given piece of software will ONLY implement such a state tree, on the condition that the platform is *designed that way*, with provability in mind. Such a platform is automatically not Turing complete.
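Continuing the hypothetical escrow sketch above: because the transition table is finite, a small program can mechanically walk EVERY reachable state and check an invariant over all of them. This is only a toy model checker, sketching what "designed with provability in mind" could buy you, not how any existing platform works.

```python
# Exhaustively explore the finite state space of the toy escrow
# contract and check an invariant over every reachable state.

from enum import Enum, auto

class State(Enum):
    FUNDED = auto()
    SHIPPED = auto()
    RELEASED = auto()
    REFUNDED = auto()

TRANSITIONS = {
    (State.FUNDED,  "seller_ships"):    State.SHIPPED,
    (State.FUNDED,  "deadline_passed"): State.REFUNDED,
    (State.SHIPPED, "buyer_confirms"):  State.RELEASED,
    (State.SHIPPED, "arbiter_rejects"): State.REFUNDED,
}

def reachable(start):
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for (src, _event), dst in TRANSITIONS.items():
            if src is s and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

states = reachable(State.FUNDED)

# Invariant: every dead end is a clause the signer agreed to, i.e. the
# money always ends up either released or refunded, never stuck elsewhere.
terminals = {s for s in states
             if not any(src is s for (src, _e) in TRANSITIONS)}
assert terminals <= {State.RELEASED, State.REFUNDED}
print(f"checked all {len(states)} reachable states: invariant holds")
```

On a Turing-complete platform no such exhaustive walk is possible in general; that is exactly the trade-off being argued here.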
Ethereum is not such a platform. This is why, even though it was the DAO code that was buggy, the ethereum platform, running like a charm, was the cause of it: it is a normal software platform, on which complicated, and hence buggy and exploitable, code is going to be written. The DAO was just the first example. Making a software platform on which it is easy, as on all software platforms, to introduce bugs and exploits, but in a system (smart contracts) that doesn't even allow security updates to be pushed, with money at stake, and "open to the world" (not running on air-gapped computers in physically protected rooms), is a recipe for disaster.