Of course it wouldn't be 100%, but if the confidence heuristics you implicitly refer to are pretty similar across nodes, I hardly see more than a few percent of the mempool set being dropped for "IBLT-safety" purposes, which doesn't fundamentally change the argument.
Today I would say you could drop (or include as a full txn, depending on fee) a small number of standard txns and get a high IBLT proof success rate. It would make sense to assume that non-standard txns, very stale txns (txns which have been in the memory pool much longer than normal), and very recent txns will not propagate at O(1) cost, and to only include those if the fee is higher than the propagation cost.
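To make that selection rule concrete, here is a hedged sketch of such a heuristic. The function name, field names, and thresholds are all hypothetical, chosen only to illustrate the three buckets described above:

```python
# Hypothetical txn selection heuristic for IBLT block construction.
# Txns that most peers' mempools probably contain go into the IBLT set;
# the rest are relayed in full only if their fee covers the extra cost.
# All names and thresholds below are illustrative, not from any real client.
import time

MAX_AGE = 6 * 3600      # seconds; older => "stale", peers may have evicted it
MIN_AGE = 2             # seconds; newer may not have propagated yet
FEE_PER_FULL_BYTE = 5   # fee a txn must pay per byte to justify full relay

def classify(txn, now=None):
    """Return 'iblt', 'full', or 'drop' for a candidate block txn."""
    now = now if now is not None else time.time()
    age = now - txn["first_seen"]
    predictable = txn["is_standard"] and MIN_AGE <= age <= MAX_AGE
    if predictable:
        return "iblt"                               # O(1) propagation expected
    if txn["fee"] >= FEE_PER_FULL_BYTE * txn["size"]:
        return "full"                               # fee covers O(n) relay
    return "drop"                                   # not worth the orphan risk
```

Under this sketch a well-propagated standard txn rides in the IBLT set for free, while a non-standard or very fresh txn only makes it into the block if its fee pays for being sent in full.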
The larger point is that this is only true today while the txn volume is low. If txn volume remains lower than the bandwidth capacity of most nodes, then larger blocks aren't an issue. You know that thinking in nominal terms for a computing network is silly. In 1980 someone would have said that 1MB blocks every 10 minutes to tens of thousands of nodes all over the world was an impossibly high resource requirement to base a decentralized network on, but today it is trivial. So what matters is not the nominal size but the size relative to the median node's resources. My point is that IBLT only results in O(1) propagation if the IBLT set is smaller than the resource availability of the median node. If it is larger, then the IBLT proof will fail for a majority of the nodes and they will request the full block, resulting in the full O(n) propagation cost.
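The success/failure behaviour described above can be seen in a toy IBLT. This is a hedged sketch, not the actual proposed encoding: the cell layout, hash choices, and table size are illustrative. Each cell keeps a count, an XOR of keys, and an XOR of key checksums; the receiver subtracts its table from the sender's and "peels" pure cells to recover the symmetric difference:

```python
# Toy IBLT (illustrative only). A table of CELLS cells, each [count,
# key_xor, hash_xor]; every key lands in K distinct cells. Decoding the
# difference of two tables succeeds when the symmetric difference is
# small relative to the table, and fails when it is too large.
import hashlib

K = 3        # cells per key
CELLS = 40   # table size; too many differences => decoding fails

def _h(key, salt):
    d = hashlib.sha256(f"{salt}:{key}".encode()).digest()
    return int.from_bytes(d[:8], "big")

def _indices(key):
    # K distinct cell indices per key
    idxs, salt = [], 0
    while len(idxs) < K:
        i = _h(key, salt) % CELLS
        if i not in idxs:
            idxs.append(i)
        salt += 1
    return idxs

def _checksum(key):
    return _h(key, "chk")

def new_table():
    return [[0, 0, 0] for _ in range(CELLS)]   # [count, key_xor, hash_xor]

def insert(table, key, sign=1):
    for i in _indices(key):
        table[i][0] += sign
        table[i][1] ^= key
        table[i][2] ^= _checksum(key)

def subtract(a, b):
    # cell-wise difference of two tables built over different key sets
    return [[a[i][0] - b[i][0], a[i][1] ^ b[i][1], a[i][2] ^ b[i][2]]
            for i in range(CELLS)]

def peel(diff):
    """Recover (keys_only_in_a, keys_only_in_b), or None if decoding fails."""
    only_a, only_b = set(), set()
    progress = True
    while progress:
        progress = False
        for cell in diff:
            # a "pure" cell holds exactly one key; the checksum confirms it
            if cell[0] in (1, -1) and _checksum(cell[1]) == cell[2]:
                key, sign = cell[1], cell[0]
                (only_a if sign == 1 else only_b).add(key)
                insert(diff, key, -sign)   # cancel key from all its cells
                progress = True
    if any(c != [0, 0, 0] for c in diff):
        return None   # too many differences for this table size
    return only_a, only_b
```

A sender would build a table over the block's txids and a receiver over its mempool; `peel(subtract(...))` then yields the handful of txns to fetch. When the sets differ by more than the table can absorb, `peel` returns `None` and the receiver must fall back to requesting the full block, which is exactly the O(n) failure mode described above.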
As a thought exercise: if there were no block limit today and IBLT was already implemented, could a miner create a 1GB block at O(1) cost? The answer is probably not. The median node would not have a superset of the txns in the IBLT set; they would have been dropped either for being stale or simply because the node had insufficient bandwidth to relay all txns. So solving the IBLT would fail and the receiving node would need to request the full block at O(n) cost. Would it make sense for a miner in that scenario to create such a block with txns which pay less than the expected orphan cost? Once again no, no more than it would without IBLT.
What about all blocks up to the median (i.e. a set which has a high confidence of being a subset of the memory pool of >50% of the nodes)? Well, you're right that the marginal cost would be near zero, but the miner can't control the resource availability of other nodes, so he can't force them to accept more. Also, if the majority of nodes can handle that volume and there is sufficient demand for it, why would we want to artificially prevent that transaction volume?
Now a jump to 1GB (or unlimited) block size would be bad today, but it doesn't have anything to do with fees or economics. It has to do with the original reason for the limit, and that is security. It would not be economical (in fact it would be moderately expensive) for a miner to create 1GB blocks, but they would hurt the network by bloating the blockchain. This is why I am not in favor of an unlimited block size. The current block limit has had no impact on economics, but it does provide an upper bound on the damage caused by an attempt to maliciously bloat the blockchain. Those arguing for unlimited block sizes often forget this fact. So from a security standpoint the question becomes: at what rate can the limit be raised with low risk? At some point in the future 1GB will have lower storage and bandwidth costs than 1MB does today. I don't favor Gavin's current proposal only because I think the high rate of change is a risk. A lower rate of change, sublinear to Nielsen's Law over a longer period of time, can get us to the same place with less short-term risk.
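To put rough numbers on that last point: Nielsen's Law is usually quoted as roughly 50%/year bandwidth growth for a high-end user, and the specific rates below are illustrative, not a concrete proposal.

```python
# Back-of-the-envelope: years for an exponentially growing block limit
# to go from 1 MB to 1 GB at a given annual growth rate. The 50%/yr
# figure approximates Nielsen's Law; 30%/yr is an example sublinear rate.
import math

def years_to_grow(start_mb, target_mb, annual_rate):
    """Years for a limit growing at annual_rate to go from start to target."""
    return math.log(target_mb / start_mb) / math.log(1 + annual_rate)

# 1 MB -> 1024 MB at 50%/yr (~Nielsen's Law): about 17 years
# 1 MB -> 1024 MB at 30%/yr (sublinear):      about 26 years
```

In other words, a schedule a bit below Nielsen's Law still reaches 1GB blocks within a few decades, while leaving a growing safety margin between the limit and the median node's capacity.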