It wasn't my assumption; it was something that appeared to have been calculated in the posted paper.
The paper is trying to be "conservative" about how limited it believes the capacity increases from re-parameterization could be... meaning they reach for the largest plausible increases, without regard to many considerations-- e.g. how far the parameters can be pushed before the existing system certainly goes off the rails. They use this approach so they can then argue that even that much isn't enough-- a conclusion I strongly agree with.
That doesn't mean that those parameters are actually workable, however.
In terms of the interblock time, decreases have dramatic effects once you consider an adversarial setting-- honest miners end up diluted, working on many competing forks more often, while a high-hashpower attacker stays focused on a single chain. Decreased interblock times also increase the pressure to consolidate mining into a few pools (or one) by making it more profitable to do so. Many of the tools for increasing the reliability of shorter interblock times, like GHOST, worsen other problems like selfish mining that crop up once you are considering rational actors (instead of just honest/altruistic ones).
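To make the dilution point concrete, here's a toy back-of-the-envelope sketch (my own illustrative model and numbers, not anything from the paper): it treats an honest block as wasted whenever a competing block is found during its propagation delay, while a single well-connected attacker mining privately loses nothing to latency.

```python
import math

def attacker_effective_share(attacker_frac, latency_s, interval_s):
    """Crude model: an honest block is wasted if another block is found during
    its propagation delay (prob ~ 1 - exp(-latency/interval)); a single
    well-connected attacker mining privately loses nothing to latency."""
    honest_frac = 1.0 - attacker_frac
    stale_prob = 1.0 - math.exp(-latency_s / interval_s)
    honest_eff = honest_frac * (1.0 - stale_prob)   # honest work that actually extends the chain
    return attacker_frac / (attacker_frac + honest_eff)

if __name__ == "__main__":
    latency = 10.0      # assumed end-to-end propagation delay, seconds (illustrative)
    attacker = 0.30     # assumed attacker hashpower fraction (illustrative)
    for interval in (600, 120, 60, 30, 15, 5):
        share = attacker_effective_share(attacker, latency, interval)
        print(f"interval {interval:4d}s -> attacker's effective share of chain growth: {share:.3f}")
```

With those made-up numbers, a 30% attacker's effective share of chain growth goes from roughly 30% at a ten minute interval to ~45% at fifteen seconds and ~76% at five seconds.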
If you chart out a simulation of how long a user has to wait for (e.g. 99.9999%) confidence that their transaction won't be reversed, as a function of the expected interblock time, you end up with a chart that looks like this (ignore the scaling):
(flip vertically for 'wait time goes up'). The scaling of this depends on factors like network hashpower distribution and latencies, which are hard to measure and which change. The key takeaway is the derivative: if the interblock time is somewhat longer than optimal for the network reality, it has a fairly small effect on time-until-security... but if it's shorter than optimal it rapidly destroys security. This is why it's important to be relatively conservative with the interblock interval.
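Here is roughly what that simulation looks like as code, under the same toy staleness model as above and with the same assumed (illustrative) latency and attacker hashpower; it uses the simple gambler's-ruin catch-up probability (q/p)^z to pick the confirmation depth for a one-in-a-million reversal risk:

```python
import math

def attacker_share(attacker_frac, latency_s, interval_s):
    """Crude staleness model: honest work only counts if no competing block
    arrives during the propagation delay; the attacker loses nothing."""
    honest_eff = (1.0 - attacker_frac) * math.exp(-latency_s / interval_s)
    return attacker_frac / (attacker_frac + honest_eff)

def wait_for_confidence(attacker_frac, latency_s, interval_s, risk=1e-6):
    """Confirmations (and wall-clock seconds) until the gambler's-ruin
    catch-up probability (q/p)^z falls below `risk`."""
    q = attacker_share(attacker_frac, latency_s, interval_s)
    p = 1.0 - q
    if q >= p:
        return None   # attacker wins the effective hashrate race: waiting never helps
    z = math.ceil(math.log(risk) / math.log(q / p))
    return z, z * interval_s

if __name__ == "__main__":
    latency = 10.0     # assumed propagation delay, seconds (illustrative)
    attacker = 0.30    # assumed attacker hashpower fraction (illustrative)
    for interval in (600, 300, 120, 60, 30, 20, 15, 12, 10):
        result = wait_for_confidence(attacker, latency, interval)
        if result is None:
            print(f"interval {interval:4d}s -> never reaches 99.9999% confidence")
        else:
            z, seconds = result
            print(f"interval {interval:4d}s -> {z:4d} confirmations, ~{seconds/60.0:6.1f} minutes")
```

With those same made-up numbers, the wait time grows roughly in proportion to the interval on the too-long side, but blows up (and eventually never converges) once the interval gets close to the propagation delay-- which is the asymmetry in the derivative described above.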