It depends on what kind of data center you're running. My typical OCP cabinets are running 20kW, and I have routers running 30kW. My cabinets are only 2' wide, so I'd bet the power density is 2-3X that of a typical data center.
I just designed one room to run 9.6MW IT load, nominal, up to 32kW per 2' cabinet. These little S9s at 1.3kW aren't so bad.
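To put that 2-3X claim in rough numbers, here's a trivial sketch. The 4' cabinet depth and the 10 kW "typical" per-cabinet allotment (mentioned later in this thread) are my assumptions, not measurements:

```python
# Rough power-density comparison; all numbers are illustrative.
# Assumed: a 2' x 4' cabinet footprint (the 4' depth is my guess),
# and ~10 kW as a "typical" per-cabinet allotment.

CABINET_AREA_SQFT = 2 * 4  # width x assumed depth, in feet

def density_kw_per_sqft(cabinet_kw: float) -> float:
    """Cabinet power draw divided by its floor footprint."""
    return cabinet_kw / CABINET_AREA_SQFT

for label, kw in [("typical 19-inch cabinet", 10),
                  ("OCP cabinet (above)", 20),
                  ("router cabinet (above)", 30)]:
    print(f"{label}: {density_kw_per_sqft(kw):.1f} kW/sq ft")
# ~1.2 vs ~2.5-3.8 kW/sq ft, i.e. roughly the 2-3X claimed above
```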
I'd say that every mining farm I've seen so far is completely missing the point.
Looking at your pictures, I'd suggest you simply isolate the exhaust of all the miners from the intake. I mean a physical barrier that prohibits the hot air from mixing with the cool air. I'd just build a box that contains all the hot air, which will go out that convenient window. Place your shelves against the outside of the box, set the miner on the shelf so that the exhaust fan touches the wood (or better yet, sheet rock), use a pencil to outline the location of the exhaust fan, cut a hole, stick the fan through, and turn it on. Then do the rest of them. Hot air isolation and containment is the key.
If you really want to, measure the differential pressure between the box and outside. If it exceeds 0.5 psi, mount a squirrel cage blower with 20% more capacity than the miners' total to suck the air through the miners and force it outside. If you put a VFD on that blower, you can actually set it to about -0.1. This is critical to improving the effectiveness of the radial fans on the miners; they'll last much longer that way. Don't worry about the fan energy. I've run this test a couple of times and found that the energy used by the exhaust fan is negated by the fan energy reduction across all the miners. The miner chips also run cooler, so they burn less energy. It'll actually improve your site efficiency.
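If you want to sanity-check the blower sizing, here's a back-of-the-envelope sketch. The ~300 CFM per miner is my assumption (check your actual model's fan spec); the 20% margin over the miners' total is from the post above. The VFD then trims the blower down until the box holds that slight negative pressure setpoint:

```python
# Back-of-the-envelope sizing for the containment exhaust blower.
# The ~300 CFM per miner is an assumed figure, not from the post;
# the 20% margin over the miners' total is from the post above.

CFM_PER_MINER = 300   # assumed airflow of one miner at full fan speed
MARGIN = 1.20         # blower capacity: 20% over the miners' combined airflow

def blower_cfm(miner_count: int, cfm_per_miner: float = CFM_PER_MINER) -> float:
    """Minimum airflow the exhaust blower should be rated for."""
    return miner_count * cfm_per_miner * MARGIN

print(blower_cfm(24))  # e.g. 24 miners -> 8640.0 CFM
```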
Sheet rock is better than wood as it's really hard to burn. Your noise level will go way down too. You could even add a layer of insulation but I don't think you'll need it.
Too many mining farms try to throw the air around the room. Too many data centers do too. My data centers all have air to Freon heat exchangers on the back doors of each cabinet.
Think about how the shed solution isolates the hot and cool air. This is doing the same thing.
OCP cabinets tend to have a lot higher power density than most common 19" rackmount-based data centers allow.
Most data centers do NOT allow 20 kW per cabinet - in most I've worked with, you're lucky if you have 10 kW available.
Your 32 kW per cabinet data center is the EXCEPTION, not the norm (though not uncommon in OCP usage).
I've not actually bought any OCP gear, but I keep being tempted by some surplus Quanta Windmill and Winterfell servers/racks, so I have SOME knowledge of the subject and the relatively high power/performance density of OCP vs most "standard" rack-mount servers.
Sheet rock / drywall is generally fire-rated, not just "hard to burn" - how many hours of fire rating tends to vary with the thickness, but I've seen 6 hour ratings on some THICK sheets of the stuff.
Insulation would probably be a waste, given the airflow level and that drywall itself insulates some.
New data centers are moving away from the entire "A/C to cool with" concept due to the costs - the Yahoo "Chicken Coop" design is a lot closer to what most large data centers are doing today, as it's a TON more efficient than traditional designs.
Or look at the GigaWatt "shed" design, which seems to be a lower-tech variant intended to be able to deal with stuff OTHER THAN rack-mounted gear (and seems to be similar to what Bitmain uses in its big farm).
My 19" rack mount cabinets are running 10-30kW depending on how populated the chassis are. My OCPs are running 14-20kW; they are design-limited to 24kW, so technically they cannot match a Cisco 9922 running full out.
About sheet rock: regular sheet rock is NOT fire-rated. Type X is fire-rated, but not fireproof.
Type X is by no means 100% fireproof; it is simply drywall that will stand up against flame longer than regular drywall. Also, just because an area is rocked in Type X does not ensure fire safety. Fire can still find other avenues to travel: vents, doors, gaps, etc.
If a conventional 1/2" thick sheet of drywall will stand up to 30 minutes of fire, then the added 1/8" found in Type X drywall, along with its other properties, will increase your margin of safety by another 30 minutes. For this reason, fire-rated drywall is sometimes called one-hour fire wallboard.
I mentioned you could add insulation. There will be a lot of heat in this containment area. If the sheet rock feels warm to the touch, there is a bit of heat conduction between the hot area and the cool area. You may also want to reduce noise further. Insulation is really cheap for such a small project.
Yahoo experimented with the Chicken Coop. They build hyperscale centers. What many have learned is that you can certainly reduce or even eliminate cooling system energy, but that doesn't mean you'll save total energy. IT load actually increases with temperature, because the CPUs and transport interface components draw more power as they run hotter. The headline was great, near-unity PUE, but alas, they forgot to mention the IT load increase.
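To see how that plays out in the math: PUE only measures overhead relative to IT load, so what matters is total energy, IT load times PUE. A minimal sketch, with every figure invented for illustration (the +20% IT load bump is an assumption, not Yahoo's data):

```python
# Total facility energy is IT load x PUE, so a better PUE can be wiped out
# if free cooling raises the IT load itself (hotter chips, faster server fans).
# Every number here is made up for illustration.

def facility_kw(it_kw: float, pue: float) -> float:
    """Total facility power draw."""
    return it_kw * pue

conventional = facility_kw(it_kw=1000, pue=1.25)        # mechanically cooled, 72F intake
free_cooled  = facility_kw(it_kw=1000 * 1.20, pue=1.05) # hot intake, assumed +20% IT load

print(f"conventional: {conventional:.0f} kW")  # 1250 kW
print(f"free-cooled:  {free_cooled:.0f} kW")   # 1260 kW -- the near-unity PUE headline hides this
```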
I was just in Facebook's newest data center in Fort Worth; it's nothing but OCP, and it's not a chicken coop. Good ole mechanical cooling with a very efficient environmental air exchange system used when conditions allow. Cold aisle temps are 80F, which I found a bit uncomfortable. It's 35MW IT per building with 3 buildings in the plan, which is more than large.
I actually tour many new data centers every year and speak on the subject at data center engineering conferences. Nearly all new builds are putting in Liebert DSE units with pumped refrigerant economizers. I use some of them too, but usually only as a redundancy layer. Our study showed we can actually use less total energy with efficient cooling systems by optimizing for a server intake air temp of 72F. It also makes for a much more comfortable room to work in.
Over the years I think I've seen 25ish sites with direct outdoor air exchange systems intended to be used most nights and winters. Of all those systems, I've only seen two that are still actually used. One of them is my customer, the other at Cisco, but both of those actually yield much less time in eco mode than we thought they would. Every other one was disabled a long time ago.
Oh, and if it's making noise, it took energy to do that. When you go into a loud data center, I can guarantee it is inefficient. All mine are cold everywhere and quiet enough so I can talk to a small group of people in a normal voice. All noise is wasted energy.
apologies for the tangent topics