Compared to this Rube Goldberg contraption, the costs would be insignificant. Further, to mount proprietary gear in these "Data Tanks" also requires NRE. Unless you think that the installation is simply a question of backing up a dump truck full of PC boards to a 40-footer and tilting the bed.
https://docs.google.com/uc?export=download&id=0ByWHHc0u_thNdzB3c2hvVzJkcTQ
Have you read the design guideline document? Installation is just plugging in the mining hardware via an edge connector (like a GPU card), which takes a few seconds at most. The same edge connector could be used for air cooling as well, so no NRE is required. It is an attempt to introduce a standard similar to PCIe to the mining hardware industry, with several key manufacturers very interested in participating. Compare that to installing fans and heatsinks with multiple screws, which takes several minutes at least. It's like desperately holding on to installing a GPU via a manufacturer-specific interface instead of via standardized PCIe. Setting up a few is no problem; setting up many thousands is a different story.
"Rube Goldberg contraption" is a matter of context and perspective. You may see this as over-engineered if you don't compare correctly. But consider that with every hardware generation, mining hardware manufacturers have to design a new case and new cooling infrastructure, run cooling and performance tests, and handle assembly and manufacturing of everything surrounding the boards - that sounds much more like over-engineering to me. Considering that air is inherently inefficient at transporting heat away (it's the insulator in every double-walled Starbucks mug, every double-paned window, etc.), a lot of engineering has to be employed to make it work effectively.
Did you know that Tencent, Baidu and other big data centers in China have an army of technicians who do nothing but replace broken fans? Did you know that Intel has reliability studies of electronics rusting away within a few months in India and China because of the much higher sulfur content in the air/humidity/rain? I know for a fact that many Chinese mining operations need to replace a lot of broken fans as well and have to deal with tons of heat issues - if not on the mining hardware, then on network switches, PSUs, etc. Those open-air chicken farm facilities will probably make it impossible to reuse most components like PSUs once they corrode away.
http://www.intelfreepress.com/news/corrosion-testing-procedures-adapt-to-rising-air-pollution/6763
http://www.oregonlive.com/silicon-forest/index.ssf/2013/10/intel_finds_asian_pollution_ma.html
Sure, some of today's mining hardware at 40nm+ may just barely get away with so-called 'free cooling'. But that comes at the expense of spreading out in low density (e.g. a huge 'chicken farm' / entire building vs. 1 container). If everyone moves towards 28nm and lower in the not too distant future, many will realize very quickly that the physical limits of W/m²K heat transfer in air and single-phase liquid can't be overcome. To cool down hotter mining hardware, you either need relatively cool air (a chiller in a hot climate, or a cold climate - but that is often paired with higher electricity/logistics/labor costs and taxes) and/or a LOT of air volume. I've heard of China installations having to be shut down in the summer and/or having to deal with sub-par performance because of temperature-induced down-throttling. What a surprise, since it all still worked so nicely in the winter, as many have set up air flow only for that scenario... 2-phase immersion cooling has a much higher heat transfer coefficient, so it doesn't require many pumps inside the tanks at all to transport heat away. Fewer moving parts = less maintenance.
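To make the W/m²K point concrete, here is a back-of-envelope sketch using Newton's law of cooling, Q = h·A·ΔT. The h values are typical textbook ranges for each cooling regime, and the board area and temperature difference are made-up illustrative numbers, not measurements from any mining hardware:

```python
# Rough comparison of convective heat-transfer coefficients (h, in W/m^2*K).
# These h values are typical textbook orders of magnitude, not measured data.
H_COEFF = {
    "forced air": 50,             # upper end of forced convection in air
    "single-phase liquid": 1000,  # e.g. pumped water cooling
    "2-phase boiling": 10000,     # nucleate pool boiling in a dielectric fluid
}

def heat_removed_watts(h, area_m2, delta_t_kelvin):
    """Newton's law of cooling: Q = h * A * dT."""
    return h * area_m2 * delta_t_kelvin

# Heat a 0.01 m^2 board surface can shed at a 40 K temperature difference:
for name, h in H_COEFF.items():
    print(f"{name}: {heat_removed_watts(h, 0.01, 40):.0f} W")
```

At the same surface area and temperature difference, boiling moves orders of magnitude more heat than air, which is why the tanks need so little forced flow.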
So how is this different from "576 power supplies [that] cost money"? How is this an example of savings?
If you don't have to replace PSUs but can reuse them across many future hardware generations, you are not forced to buy new miners bundled with new PSUs every time.
But the much more important point is that if miners sell you a finished case with PSUs, it adds lead time to their supply chain. Meaning: a new mining chip that looks really good on paper at the prevailing difficulty might only come into your hands, ready for mining, several weeks or months later. How fast can boards be assembled into cases if those have to be supplied from elsewhere, shipped back to a logistics center after assembly, etc.? And as mentioned above on the NRE: customizing the previous cooling solution for the new mining board, testing it, and ordering fans/heatsinks/water-cooling blocks in bulk with inevitable delivery time, all before assembly can even start.
In comparison, with immersion cooling the mining board could be shipped out right away. That's technically not a saving, but additional realized mining income. The chart and text in the prospectus on the effect of delayed deployment describe how this crucial time difference - getting online just 10 days or less faster - could already pay off the entire DTM container costs and still leave you a nice income on top, while others are still waiting for their hardware and then still have to start setting up.
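As a rough illustration of that deployment-time effect, the arithmetic is simple. The fleet size and per-board revenue below are made-up assumptions for the sketch, not figures from the prospectus:

```python
# Hypothetical numbers: extra income from getting online earlier.
# Fleet size and revenue-per-board-per-day are illustrative assumptions;
# real early-deployment income is even more valuable, since difficulty
# (and thus revenue per board) only falls over time.
def extra_income(days_earlier, boards, revenue_per_board_per_day):
    """Income earned during the head start, before competitors come online."""
    return days_earlier * boards * revenue_per_board_per_day

# e.g. 1,000 boards each earning $15/day at launch difficulty,
# deployed 10 days ahead of an air-cooled competitor:
print(extra_income(10, 1000, 15.0))  # -> 150000.0
```

Whether that covers a container depends entirely on the real numbers, but it shows why days of lead time translate directly into income at the most profitable point of a generation's life.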
The miners are lighter, but (the 40-footer + cooling tower + miners) ain't. All that stuff doesn't ship itself.
It's all in the prospectus: the containers will be set up first at a location chosen by DTM. Only shortly before completion of setup will the mining hardware be purchased, at the best efficiency and price at that point in time. So only the mining hardware has to be shipped around. If it's necessary to ship an entire case, then of course the costs are going to be higher. Some cloud miners have revealed that they have to split mining hardware into boards in one shipment and cases plus heatsinks, etc. in another, for customs duties reasons, because those are much higher on finished products than on components.
Hobby miners might not care if one or a few cases slip through customs. But for an entire farm, the story looks completely different. And again, we're not talking about one generation only (although those savings are already significant, despite shipping the container the first time) - every subsequent mining hardware generation incurs those much higher shipping costs again. A 2U case occupies about the same volume as maybe 5-8 mining boards, while already set-up DataTank containers don't need to be shipped any more and will be ready for new generations.
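A quick volume check on the case-vs-boards claim; all the dimensions below are illustrative assumptions, not measurements of any actual product:

```python
# Back-of-envelope shipping volume: a finished 2U rack case vs. bare boards.
# Dimensions in metres are assumed for illustration only.
case_volume = 0.48 * 0.089 * 0.65   # ~0.028 m^3: a 2U 19" case with depth 65 cm
board_volume = 0.30 * 0.20 * 0.06   # ~0.0036 m^3: one board incl. clearance
boards_per_case_volume = case_volume / board_volume
print(round(boards_per_case_volume, 1))  # roughly 7-8 boards per case's volume
```

With assumptions in that range, one finished case displaces the shipping volume of about 7-8 bare boards, consistent with the 5-8x figure above.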
When a board (blade) goes bad in the "DataTank," replacing it isn't as trivial as replacing an air/indirect water cooled board. A whole cluster needs to be powered down and cooled before being serviced.
See above about replacement time. Consider furthermore that most cooling solutions are already at today's maximum. If the TDP is higher, they will need a new solution, so it's very unlikely the old cooling solution can simply be reused for new mining hardware. You can be pretty sure that a cooling solution for 40nm or even 28nm won't work for 20nm or 14nm. Who deliberately ships oversized heatsinks and fans with the current generation of mining hardware? Even if it would work, do you really think mining hardware designers already plan to keep the same PCB mounting holes for heatsinks, fan connectors, etc. in the same places for the next generation?
With 2-phase immersion cooling none of this matters, since the fluid automatically surrounds the new mining hardware, no matter its shape and format. If you extrapolate the 4kW simulation in 200cc of fluid in a 1L space to an entire server rack, you would be at a theoretical 3-4MW per rack - try imagining how many future mining generations that would cover, when today's air cooling maxes out at maybe 35-45kW per rack with LOTS of effort. It's not only cheap for the first generation already; the costs can truly be split over multiple hardware generations because of its incredible excess thermal capacity.
And how do you know that a whole cluster needs to be powered down? Only a single slot of many inside a tank needs to be powered down, since the PSUs are connected to up to 8 boards within that slot. But even that may not be the end of the road yet.
Judging by the vids, hundreds of boards are sitting in the same Novec tank. This is a closed-loop system, with the vapors condensing and returning to the tank. Opening that tank with the miners powered up and boiling Novec would do 2 things: Vent the vapor into the environment (thus losing exorbitantly expensive Novec, if nothing else), and suffocating you (unless you think that Novec turns into air in vapor phase).
Again, how do you know? Antirack has posted a picture of invited press, 3M people and himself standing around an open, bubbling tank. Antirack is obviously still alive and posting... Furthermore, it's on the company website that minimizing fluid and vapor losses is Allied Control's expertise. Having a way to hot-swap is one of the technologies already developed.
Let me quote myself from another post:
On toxicity and evaporation and all other arguments on that it's not practical - one of the many articles on the web:
http://www.pcworld.idg.com.au/article/542462/intel_sgi_test_full-immersion_cooling_servers/
Does it mean that Intel, SGI, the U.S. Naval Research Laboratory, Lawrence Berkeley National Laboratory, Schneider Electric, etc. are all wrong, suicidal and have no idea what they're doing? Interesting... maybe you could make much more money by teaching all their PhDs a lesson in physics and chemistry and proving them all wrong...
It's also used as a fire extinguishing agent in the Library of Congress, and by the military in extremely confined spaces with troops inside, as a much healthier and better alternative, since its use concentrations stay far below its no-effect limits, unlike other agents - it is intended for fogging an entire room with people inside at high concentrations of Novec, without causing any suffocation:
http://solutions.3m.com/wps/portal/3M/en_US/3MNovec/Home/News/PressReleases/?PC_Z7_RJH9U523007AE0IUQAESTF39O3000000_univid=1273685423873
http://www.youtube.com/watch?v=KNSHsUWcplo
It's perfectly fine to have another opinion and I will always respect that. But spreading wrong claims based on no source at all is not really strengthening any position.