What this article doesn't cover is the fundamentally different mental model most of us on the highly technical side of Bitcoin have for smart contracts. The key realization behind it is that smart contracts are about verifying, not computing: the inputs are the contract, the transaction, and additional evidence provided by the user, and the network should either accept or reject the state update represented by the transaction. Script is a predicate. The user(s) do the computation, then prove it to the network.
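To make the shape of that concrete, here's a minimal sketch (the names and types here are hypothetical, not actual Bitcoin code) of validation as a pure predicate:

```python
# Hypothetical sketch: consensus validation as a pure yes/no predicate.
# The network never performs the user's computation; it only checks
# the evidence the user supplies against the committed contract.

def accept_state_update(contract, transaction, witness) -> bool:
    """Return True iff `witness` proves `transaction` satisfies `contract`.

    All inputs come from the user; the function is deterministic and
    side-effect free, so every node reaches the same verdict.
    """
    return contract.predicate(transaction, witness)
```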
Verifying is fundamentally easier (and I mean really fundamentally: verification of an NP statement, given side information-- a witness-- is in P). Verification is also usually much easier to make highly parallel. As a toy example, say I give you some giant numbers A and B and ask you to divide them to compute Q = A/B. The division takes a lot of work. If, instead, I tell you Q, A, and B, you can multiply and check the value-- much easier.
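In code, the asymmetry looks like this (a toy sketch in Python; the function names are mine, chosen for illustration):

```python
# Computing the quotient takes real work for huge numbers; checking a
# claimed quotient needs only one multiplication and a comparison.

def compute_quotient(a: int, b: int) -> int:
    return a // b  # the expensive direction

def verify_quotient(q: int, a: int, b: int) -> bool:
    # q is the correct integer quotient iff q*b <= a < (q+1)*b
    return q * b <= a < (q + 1) * b

a, b = 2**4096 + 12345, 2**2048 + 67
q = compute_quotient(a, b)        # the user does the work once...
assert verify_quotient(q, a, b)   # ...and everyone else just checks
```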
This model is also how we get useful tools like P2SH and MAST... you don't have to provide the contract to the network until it's enforced, and if the contract is complex and not every part needs to be enforced, you need only ever publish the parts that get used. The unused parts are just represented by a compact cryptographic hash.
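Here's a hedged sketch of the MAST idea (illustrative only; real constructions like P2SH and Taproot differ in detail): commit on-chain to the Merkle root over all script branches, then at spend time reveal just the branch being used plus its Merkle path.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a simple binary Merkle tree over script branches."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node if odd count
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    """Sibling hashes needed to recompute the root from leaves[index]."""
    path, level, i = [], [h(leaf) for leaf in leaves], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[i ^ 1])  # the sibling at this level
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return path

def verify_branch(root, branch, index, path):
    """Check that `branch` really is a leaf of the committed tree."""
    node, i = h(branch), index
    for sibling in path:
        node = h(node + sibling) if i % 2 == 0 else h(sibling + node)
        i //= 2
    return node == root

branches = [b"branch A: 2-of-3 multisig", b"branch B: timeout refund"]
root = merkle_root(branches)            # the only thing published up front
proof = merkle_path(branches, 1)
assert verify_branch(root, branches[1], 1, proof)  # reveal only branch B
```

Branch A never touches the chain unless it's actually used; all the network ever sees of it is its contribution to the root hash.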
This distinction also simplifies the system from an engineering perspective. Script is a pure function of a transaction, so if a transaction is valid, it can't be invalidated by other transactions coming or going, except via the single mechanism of anti-doublespending. Similarly, OP_CHECKLOCKTIMEVERIFY isn't OP_PUSH_CURRENT_HEIGHT_ONTO_STACK-- the script expresses what it expects the locktime to be, and the machine verifies it. This construction means that the chain can reorganize without invalidating chains of transactions, and it makes the verification work highly cacheable.
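A sketch of that locktime predicate (simplified; the real opcode also has rules about the input's sequence number and about time-based vs. height-based locktimes):

```python
# Simplified model of OP_CHECKLOCKTIMEVERIFY as a pure function of the
# transaction. The script does NOT read the current block height; it
# asserts a minimum value for the transaction's own nLockTime field,
# and separate consensus rules decide when such a transaction may be
# included in a block.

def checklocktimeverify(required_locktime: int, tx_locktime: int) -> bool:
    return tx_locktime >= required_locktime

# Because the check reads only the transaction itself, a node can
# validate the script once and reuse the result across reorgs.
assert checklocktimeverify(required_locktime=500_000, tx_locktime=640_000)
```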
Is this mental model familiar to people coming from conventional programming (say, on the web)? No. But smart contracts aren't conventional programming, and a blockchain isn't a conventional computing environment (how often does history change out from under most of your programs?). These design elements make for a clean, simple system with predictable interactions. To the extent that they make some things "harder", they do so mostly by exposing latent complexity that might otherwise be ignored-- some tasks are just hard, and abstracting away the details in an environment with irreversible consequences is frequently unwise.
This may be bad news for the hype around converting the whole universe to "distributed apps" (a lot of which just doesn't make sense on its face), but the fact that a lot of people are thinking seriously and carefully about the subject is good news for the technology having a meaningful long-term impact.