Another thing to note is that changing N changes the amount of memory required. This will (eventually) make GPU mining not worthwhile, and much less efficient than CPU mining. ASICs and FPGAs (in their current renditions) won't even come close. This will be a coin ruled by fast memory, and lots of it.
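To make the memory/N relationship concrete, here's a toy scrypt-style sketch (not the real scrypt; the function name and details are simplified assumptions): it fills an N-entry table, then does data-dependent reads back into it, so memory use grows linearly with N and shortcuts cost extra recomputation.

```python
import hashlib

def romix_like(seed: bytes, n: int) -> bytes:
    """Toy scrypt-style mixing: memory use scales linearly with n."""
    # Phase 1: sequentially fill an n-entry table (the memory cost).
    v = []
    x = hashlib.sha256(seed).digest()
    for _ in range(n):
        v.append(x)
        x = hashlib.sha256(x).digest()
    # Phase 2: n pseudo-random, data-dependent lookups into the table;
    # this is what frustrates time/memory trade-offs on small-memory chips.
    for _ in range(n):
        j = int.from_bytes(x[:4], "big") % n
        x = hashlib.sha256(bytes(a ^ b for a, b in zip(x, v[j]))).digest()
    return x
```

Raising n here directly raises the bytes of state an implementation has to hold (or recompute), which is the property that squeezes fixed-function hardware.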
My GPU has 3 GB of memory and a large amount of bandwidth to it; memory access times are on the same order as my CPU's, so I doubt it's going to have significantly different bounds in the absolute worst case. It's an interesting challenge: trying to make the algorithm used by the network parallelism-resistant, while relying on distributed parallelism for part of the network's security.
Looking at it from the brute-force point of view, the obvious case of a botnet has already been stated, and running 1,000 CPU hours on Amazon is almost insanely cheap. Throwing $20 at AWS would probably be enough to out-mine the entire rest of the network if people were bound per CPU. Once again it degenerates into whoever-has-the-most-cash-to-throw-at-the-problem-wins.
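Back-of-envelope on that claim, using an assumed rate of roughly $0.02 per CPU-hour (a hypothetical figure implied by the numbers above, not a quoted AWS price):

```python
# Hypothetical spot price, chosen to match the "$20 buys ~1,000 CPU hours" claim.
price_per_cpu_hour = 0.02  # USD, assumed
budget = 20.00             # USD

cpu_hours = budget / price_per_cpu_hour
print(cpu_hours)  # 1000.0
```

If honest miners are capped at one CPU each, a few hundred rented cores for a day dwarfs a small network's total work.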
Making stuff hardened against GPUs is going to be pretty hard, as they're pretty much computing clusters to start with, minus the problems of inter-node connection distance or power/space overheads. Given that you can write for them in (pretty much) C, and then pick and choose which parts to parallelize, for repetitive proofs-of-work it's almost always going to be possible to find a GPU implementation that is at least as fast. It's probably easy to make something CPU-bound in the short term, but there's no reason it has to stay that way.
Dynamic parameters seem like a good idea at first, but given that optimising compilers can produce a not-horrible solution rather quickly, a GPU miner can either recompile on every parameter change, or just run code that isn't fully unrolled, which is almost certainly what the CPU is doing anyway. As for FPGAs, they are literally designed to be reconfigured on the fly. If the parameters change only once every few days, hours or even minutes, that isn't going to rule them out either.
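The point about general-purpose hardware shrugging off parameter changes can be sketched like this (a toy illustration; the function and parameter names are made up):

```python
import hashlib

def parameterized_pow(header: bytes, rounds: int, mix: int) -> bytes:
    """Toy PoW whose inner-loop structure depends on network parameters.

    A CPU or GPU just takes `rounds` and `mix` as runtime inputs; only a
    fully-unrolled fixed circuit would need re-synthesis when they change.
    """
    x = hashlib.sha256(header).digest()
    for i in range(rounds):
        # The per-round tweak depends on the dynamic parameter `mix`.
        tweak = ((i * mix) & 0xFFFFFFFF).to_bytes(4, "big")
        x = hashlib.sha256(x + tweak).digest()
    return x
```

Nothing here forces recompilation on a CPU, a non-unrolled GPU kernel, or a parameterized FPGA design, which is exactly the objection: dynamic parameters mostly inconvenience implementations nobody was forced to write that way.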
If you succeed in limiting mining to CPUs only, it will just become an extra income stream for botnets.
What might stop the situation we have now (a never-ending race towards more hashing power, which is actually encouraging entrenchment and centralization of power at the expense of network security) is to somehow put a fixed cap on work, split it into shares, and then distribute those shares across the network. Encourage geographic diversity, and reward nodes rather than just raw brute force.
Proof of work does help secure against brute-force attacks long term, by giving a financial incentive to be the biggest brute-force attacker out there first. Those same forces, however, also drive centralization of mining power and entrench a monopoly among those who have been able to re-invest mining profits back into mining. Even the pools consolidate into single points of control, which, even if the pool operators have the best of intentions, are single points of attack and single points of failure.
Distributing work across the whole network of nodes over time isn't the hardest problem to solve. Protecting against brute force without an internal arms race against giants is, though. What protects us seems to be destroying us from within at the same time :-/