I do not think switching from coin to coin will matter so much. Once you have a dozen-plus switching mining pools, they will all be mining the same top 1-5 coins. So you switch from one coin being mined by 4 pools to another being mined by 2 pools to another being mined by 3 pools.. you get my point. The algorithms are not exactly rocket science. Jump in when diff drops || price rises || speculative mining (which can backfire) || bid depth || knowing when the next block to be found is more valuable than the last || etc... In the end the top altcoins are being heavily mined by hashing power that comes and goes.
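The "not rocket science" part can be sketched in a few lines. This is a hypothetical illustration of the simplest kind of switching logic (pick whichever coin pays the most per hash right now); the coin names, numbers, and revenue formula are my assumptions, not any real pool's algorithm, and a real pool would layer the speculative/bid-depth signals above on top of it.

```python
# Hypothetical coin-switching sketch: expected revenue per hash is the block
# reward (converted to BTC) spread over the average number of hashes a block
# takes at the current difficulty. All numbers below are made up.

def revenue_per_hash(block_reward, price_btc, difficulty):
    """Expected BTC earned per hash, assuming a block takes on average
    difficulty * 2**32 hashes to find."""
    return block_reward * price_btc / (difficulty * 2**32)

def pick_coin(coins):
    """Return the coin with the highest expected revenue per hash."""
    return max(coins, key=lambda c: revenue_per_hash(
        c["reward"], c["price"], c["difficulty"]))

# Made-up example coins:
coins = [
    {"name": "coinA", "reward": 50, "price": 0.0010, "difficulty": 900},
    {"name": "coinB", "reward": 25, "price": 0.0030, "difficulty": 1400},
]

best = pick_coin(coins)
print(best["name"])  # coinA: 0.050/900 per 2**32 hashes beats 0.075/1400
```

With a dozen pools all running something this simple against the same handful of coins, they inevitably pile onto the same winners.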
So would a pool admin try the highly speculative route, building an algorithm that tries to predict the future price of an altcoin? I doubt it. They might try it once in a while after they are established, but in the end they will all fall into place once there are 1-5 obvious winners to mine.
So that leaves trading. But not trading how most people think of it; more like timing your selling. I tend to think the best seller could probably beat an automatic sell process, but most pools will probably do much better just selling every day and getting an average over time. No one can predict the future reliably, and in the end just selling every day will more than likely beat the majority who micromanage it.
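The sell-every-day argument is just averaging. A toy illustration with made-up daily prices: selling an equal amount each day realizes the mean price over the period, whereas one badly timed lump sell can realize the minimum.

```python
# Toy example with hypothetical daily prices (BTC per coin). Selling a fixed
# amount every day realizes the average; a single timed sell risks the bottom.

prices = [0.030, 0.024, 0.035, 0.019, 0.028, 0.033, 0.022]

daily_sell = sum(prices) / len(prices)  # price realized by selling daily
worst_timed = min(prices)               # one badly timed lump sell
best_timed = max(prices)                # the lucky case nobody hits reliably

print(f"daily average: {daily_sell:.4f}, "
      f"worst: {worst_timed}, best: {best_timed}")
```

The daily average always lands between the worst and best single-day sells, which is exactly the "beats the majority who micromanage it" claim in expectation.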
So that leaves implementation: putting all the pieces together. Who has the best uptime and suffers the fewest denial-of-service attacks while gaining more hash rate. Who can scale their servers up to handle load and reach the optimal pool size for the best payouts. Who can minimize stales/rejects for users, making things more efficient, probably by running a dozen stratum servers worldwide.
Dunno tho. I think about it sometimes. I have run a pool before but not a switching one.
I like your analysis; in the end you are showing that it is extremely cost prohibitive for tons of multi-pools to spring up.
Just think: after about 300-400 MH/s, your network connection to the internet is heavily taxed; you need a dedicated 50 Mbps line minimum. Anything less and you get high rejects.
You also need a system (or systems) that can handle high I/O; this generally means at least two boxes, with one being a dedicated MySQL box.
After 1 GH/s you need multiple servers to do off-loading/failover/etc.
After 2 GH/s or more you need to start thinking about multiple network lines (if you haven't done so already) and multiple geographic locations.
Add all of that up and I highly doubt a whole lot of newcomers will start showing up once they see all of the work.
The most expensive part of running a pool, for those who are well connected in the ISP world, is DDoS protection. Hands down. And even paying for protection is no guarantee of being protected. No DDoS protection provider will stop a 100 Gbit/s DDoS that runs for 4-12 hours for a couple hundred measly bucks a month. They will null-route you. So you start paying more, and perhaps you can be protected from all but the most vicious of attacks. Plus pools are not just websites: protecting ports 80 and 443 is well refined by these providers. Port 3333 is not.
Surprisingly, a pool does not use a lot of bandwidth. For every 1000 MH/s you basically need 10 Mb/s, and that is with a low diff of 64 for every user. Honestly, bandwidth is hardly a concern; low latency from being well connected to the backbones is much more valuable. And if you use vardiff or just hardcode a diff of 1024, you can cut your bandwidth usage down quite a bit.
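The bandwidth saving from raising share difficulty is simple proportionality: shares submitted per second scale linearly with hashrate and inversely with share difficulty, so the algorithm-specific constants cancel when comparing two difficulties. A quick sketch of that arithmetic, using the ~10 Mb/s per 1000 MH/s figure above as the baseline:

```python
# Share traffic is proportional to hashrate / share_difficulty, so raising
# the difficulty cuts submissions by the same factor regardless of the
# hashing algorithm's constants.

def relative_traffic(diff_old, diff_new):
    """Fraction of share traffic remaining after a difficulty change."""
    return diff_old / diff_new

factor = relative_traffic(64, 1024)
baseline_mbps = 10  # ~10 Mb/s per 1000 MH/s at diff 64, per the text above

print(f"diff 64 -> 1024 leaves {factor:.4f} of the traffic, "
      f"roughly {baseline_mbps * factor:.3f} Mb/s per 1000 MH/s")
```

So a 16x difficulty bump turns ~10 Mb/s into well under 1 Mb/s, at the cost of higher payout variance for small miners (which is what vardiff balances automatically).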
Which leads to I/O, the ultimate challenge for a pool. Without SSD drives (and not cheap ones) you will crumble and die. MySQL can be highly tweaked, and I always wondered whether pool admins have learned how to do this effectively. I had ~1000 MH/s on one Samsung 840 Pro SSD at diff 64 and was doing about ~1000 inserts/updates/reads a second on MySQL. So if I had turned diff up to 1024, you can imagine how much further I could have scaled. Plus I had all sorts of stats running, and a few UNIONs left to remove from my MySQL queries (UNIONs always create a temp table in MySQL, FYI, which makes them horrible), before I folded. So I can only dream of what a RAID array of 5 of those SSDs could do!
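Besides raising diff, the other standard trick for cutting share-table write load is batching: buffer shares in memory and flush them in one multi-row insert per transaction instead of one INSERT (and one fsync) per share. A runnable sketch of the idea, using Python's built-in sqlite3 as a stand-in for MySQL (the table and columns are made up):

```python
# Batched share inserts: one transaction per batch instead of per share.
# sqlite3 stands in for MySQL so the sketch runs anywhere; a real pool
# would use the same executemany pattern against its share table.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE shares (worker TEXT, diff INTEGER, seq INTEGER)")

buffer = []

def record_share(worker, diff, seq, flush_at=100):
    """Queue a share; flush the whole batch once it reaches flush_at."""
    buffer.append((worker, diff, seq))
    if len(buffer) >= flush_at:
        flush()

def flush():
    if buffer:
        with db:  # one transaction per batch, not one per share
            db.executemany("INSERT INTO shares VALUES (?, ?, ?)", buffer)
        buffer.clear()

# Simulate 250 incoming shares: two full batches flush automatically.
for i in range(250):
    record_share(f"worker{i % 8}", 64, i)
flush()  # drain the remaining partial batch

print(db.execute("SELECT COUNT(*) FROM shares").fetchone()[0])  # 250
```

The trade-off is a small window of shares lost on a crash, which most pools accept in exchange for an order-of-magnitude drop in write IOPS.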
So that leads to your point of multiple servers. I never got to that point myself so I can only guess how I would do it.
I would probably want a friend to spin up several VMs in his racks at major data centers worldwide. These would run stratum and the local altcoin daemon(s). All shares would be sent to a hidden MySQL server so it could not be DDoSed.
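That topology can be sketched in a few lines: public stratum frontends never expose the database; they queue shares locally and ship them in batches to the hidden box over a private link. Everything here (class names, batch size, the `shipped` list standing in for the write to the hidden MySQL server) is hypothetical, just to show the shape of it.

```python
# Sketch of the topology above: geographically spread stratum frontends
# buffer shares and forward batches to a hidden aggregator. The "shipped"
# list stands in for a write over a VPN/private network to the DB box.

from collections import deque

class StratumFrontend:
    def __init__(self, region, batch_size=50):
        self.region = region
        self.batch_size = batch_size
        self.queue = deque()
        self.shipped = []  # stand-in for batches sent to the hidden MySQL box

    def on_share(self, worker, diff):
        self.queue.append((worker, diff))
        if len(self.queue) >= self.batch_size:
            self.ship()

    def ship(self):
        """Drain the local queue as one batch to the hidden aggregator."""
        if self.queue:
            self.shipped.append(list(self.queue))
            self.queue.clear()

fe = StratumFrontend("eu-west")
for i in range(120):
    fe.on_share(f"worker{i % 4}", 64)
fe.ship()  # drain the final partial batch

print(len(fe.shipped), sum(len(b) for b in fe.shipped))  # 3 batches, 120 shares
```

The point of the indirection is that attackers only ever see the frontends, which are cheap and replaceable; the stateful database never takes direct traffic.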
Crap knows what I would use for DDoS protection for them. Cloudflare crumbles under a 60-100 Gbit/s DDoS and black-holes you after a few hours of it. I doubt I would use load-balancer hardware/software; just let people pick a stratum server close to them. Dunno. You cross that bridge when you get there. DNS can be used to switch to backup servers, etc., but if someone wants to knock you offline, they can. Simple as that.
As for multiple internet connections, that is assumed by being in a good data center. They always have more than one, up to dozens: 2x10 Gig to the top-of-rack switches or other crazy amounts of bandwidth, GigE to the servers themselves.
So in the end.. will many of these altcoin pools pop up? YES. But what I am questioning is whether, once we have a half dozen successful ones all mining the half dozen best coins, it will start to reach the point of driving prices down to where beating plain litecoin mining itself is pretty tough. Look at ltcrabbit, the PPS pool. It added multi-coin mining for a few coins and was paying litecoin PPS + a bonus on top of that. They are back to mining LTC again, not even bothering with the 3 others they added. Just ain't worth the extra work, I imagine, to earn an extra 1%.