
Topic: .25 BTC BOUNTY for the best answer - page 3. (Read 13574 times)

legendary
Activity: 2044
Merit: 1000
December 30, 2013, 05:50:32 PM
#29
We have a lot of the "silliest" answers here, from people who have no datacenter or AC experience. You put up a bounty, you get stupid spammers and beggars.

Before dismissing any answer as silly or stupid, I think we need to know some additional details, such as how much money he is willing to spend.

Judging by the kind of unit he is considering buying, this sounds more like a garage project, along the lines of DIY and low-cost solutions.

At the company where I work, we recently invested in a cooling solution. The company spent a bit more than 50,000 euros (about 80,000 USD) on a solution based on in-row equipment from APC, like the one linked below, and that was for a very small datacenter with a power consumption far below 40,000 W.

http://www.apc.com/products/family/index.cfm?id=379

So, if someone is only willing to spend in the range of 3,000~4,000 USD, maybe the silly solutions are the ones intended for real datacenters. If he had a budget in the range of 100,000 USD, he would probably be taking his questions to a professional cooling consultant, ordering and paying for a whole study and project instead of offering 0.25 BTC (about 180 USD) for ideas from a community of strangers.

This.

I am looking for low-cost, DIY solutions.  This will be built out in a space not originally intended for crypto mining.  I want to keep expenses to a minimum... no need to spend all the profit on cooling the damn things. 
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
December 30, 2013, 06:45:42 AM
#28
We have a lot of the "silliest" answers here, from people who have no datacenter or AC experience. You put up a bounty, you get stupid spammers and beggars.

Inside a closed air conditioned building, evaporative cooling may enhance efficiency a bit. AC removes humidity, to the point where the IDUs need to pump water out. You could add some humidity back to pre-cool the hot AC intake air (you can't humidify cold air AC output). The humidity would have to be strictly monitored to not go overboard or add more humidity than the AC can remove.

+1
You should pass some of the bounty to DC for being so accurate and on point.

If you are doing raised flooring, you will have moisture sensors under the floor (spills, leaks, plumbing, flooding); you may also want air-humidity alarming if you are using swamp cooling as part of your mix.

Design for what happens when things go wrong, not just for how you want it to work.
legendary
Activity: 1512
Merit: 1036
December 29, 2013, 05:47:13 PM
#27
We have a lot of the "silliest" answers here, from people who have no datacenter or AC experience. You put up a bounty, you get stupid spammers and beggars.

You basically have two choices:

- Traditional refrigerant air conditioning with a condensing outdoor unit,
- Evaporative "Swamp coolers", where facilities allow large outside berth for building flow-through, and the local weather is favorable.

The amount of air conditioning required is calculable. You have two factors:
1. The amount of air conditioning required to keep an unloaded building at room temperature vs. the hottest outside temperatures.
 This is directly related to the building's insulation and R factor. If you have an uninsulated warehouse-style steel building, you are going to use much more AC to keep the building at room temperature than a highly insulated facility would.
2. The amount of heat that needs to be removed from equipment heat generation.

I am not sure of the BTU rating, but I will need to dissipate upwards of 40,000 watts.
Unfortunately, #2 will be the major factor in designing an air conditioning system; your equipment-generated heat is much more than what is needed for building cooling. The downside of air conditioning is that it is a closed system, so even on cool days you'll be running AC equivalent to 40,000 watts of heat removal. This is one factor that has data centers looking for better ways of doing things.
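
To put rough numbers on that comparison, here is a minimal Python sketch of the building-envelope heat gain (factor #1) next to the 40,000 W equipment load (factor #2); the wall/roof area, temperatures and R-values are illustrative assumptions only, and the BTU-to-watt conversion is the 3.412 figure explained just below.

Code:
# Steady-state conduction only: Q[BTU/hr] = Area[ft^2] * dT[degF] / R.
# Ignores solar gain, windows and infiltration. All numbers are assumptions.

BTU_PER_WATT_HOUR = 3.412

def envelope_gain_w(area_ft2, delta_t_f, r_value):
    """Conductive heat gain through the walls/roof, converted to watts."""
    btu_per_hr = area_ft2 * delta_t_f / r_value
    return btu_per_hr / BTU_PER_WATT_HOUR

# Example: 3,000 ft^2 of wall + roof, 110 degF outside vs 75 degF inside.
print(f"Uninsulated steel shell (R-2): {envelope_gain_w(3000, 35, 2):,.0f} W")   # roughly 15,000 W
print(f"Well-insulated shell (R-20):   {envelope_gain_w(3000, 35, 20):,.0f} W")  # roughly 1,500 W
print("Equipment load:                40,000 W")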

Air conditioning has a lot of weird units of measurement; they can't seem to just use joules and watts like a normal physicist would. I will work through some of these measurements, like "tons" and "BTUs", to give you an idea of your AC power bills and required capacity.

1 ton = 12,000 BTU/hr, or 3,517 watts (based on how much ice would be melted to provide the same refrigeration)
1 watt = 3,600 joules per hour
1 BTU = 1,055.05585 joules
1 watt = 3.41214163 BTU/hr

1 therm = 100,000 BTU
EER = Energy Efficiency Ratio = BTU/hr of cooling per watt of electrical input, i.e. the unit's BTU/hr rating vs. the watts it draws. A number of 8-12 is typical.
SEER = season-based voodoo. EER ≈ -0.02 × SEER² + 1.12 × SEER
COP = Coefficient of performance. What we really want to know, i.e. how much electricity it takes to remove a given amount of heat. COP = EER / 3.412

The first thing to figure out is what 40,000 watts equals in these AC terms, and how much electricity it will take. Let's reduce everything to watts and the EER rating:
Wreq = Wload / COP -> Wreq = Wload * 3.412 / EER

So for 40,000 watts, and an example of 9 EER-rated air conditioning, we get
Wreq = 40000W * 3.412 / 9  ->  15,164 watts

Next, how much AC capacity is required in those weird AC terms?
40,000 watts / 3,517 watts per ton ≈ 11.4 tons of air conditioning

So add that power use and capacity on top of what AC would normally be required for the space.
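
For anyone who wants to replay the arithmetic, here is a minimal Python sketch of the same conversions; the 9 EER unit and the $0.10/kWh electricity price are illustrative assumptions, not figures anyone has quoted here.

Code:
# Back-of-the-envelope AC sizing for a 40 kW heat load.

BTU_PER_WATT_HOUR = 3.412   # 1 W of continuous load = 3.412 BTU/hr
WATTS_PER_TON     = 3517    # 1 ton of cooling = 12,000 BTU/hr

def ac_requirements(heat_load_w, eer):
    """Return (electrical watts drawn by the AC, tons of cooling capacity)."""
    cop = eer / BTU_PER_WATT_HOUR      # coefficient of performance
    electrical_w = heat_load_w / cop   # same as heat_load_w * 3.412 / EER
    tons = heat_load_w / WATTS_PER_TON
    return electrical_w, tons

watts_in, tons = ac_requirements(40_000, eer=9)
print(f"AC draw:      {watts_in:,.0f} W")    # ~15,164 W
print(f"AC capacity:  {tons:.1f} tons")      # ~11.4 tons
# Rough monthly electricity cost at an assumed $0.10/kWh, running 24/7:
print(f"Monthly cost: ${watts_in / 1000 * 24 * 30 * 0.10:,.0f}")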


Evaporative cooling is measured a different way, in the temperature drop from the intake air temperature, with accompanying increased humidity. You can make 100F outside air into 75F inside air. However, you will need to look at the cubic-feet-per-minute ratings of the systems to see what can keep up with your heat load. You may decide that 85F will be the maximum "output" temperature after air goes through your racks - for this much cooling, you will be looking at garage-door sized walls of fans from the outside and gallons of water per minute.
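
To put a number on "garage-door sized walls of fans", here is a small Python sketch using the common sea-level rule of thumb Q[BTU/hr] ≈ 1.08 × CFM × ΔT[°F]; the 10 °F and 20 °F allowed temperature rises through the racks are assumptions for illustration.

Code:
# Airflow needed for flow-through (outside-air or evaporative) cooling.

BTU_PER_WATT_HOUR = 3.412

def required_cfm(heat_load_w, delta_t_f):
    """Cubic feet per minute needed to carry heat_load_w away with an
    allowed air-temperature rise of delta_t_f through the racks."""
    btu_per_hr = heat_load_w * BTU_PER_WATT_HOUR
    return btu_per_hr / (1.08 * delta_t_f)

print(f"{required_cfm(40_000, 10):,.0f} CFM")  # ~12,600 CFM for a 10 degF rise
print(f"{required_cfm(40_000, 20):,.0f} CFM")  # ~6,300 CFM if you accept 20 degF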

However, evaporative cooling does have the advantage that you are putting in a massive outside-air circulation system - for the 75% of the day and year when outside air is below 75F, you will need nothing more than to run the fans.

Inside a closed air conditioned building, evaporative cooling may enhance efficiency a bit. AC removes humidity, to the point where the IDUs need to pump water out. You could add some humidity back to pre-cool the hot AC intake air (you can't humidify cold air AC output). The humidity would have to be strictly monitored to not go overboard or add more humidity than the AC can remove.

Whatever system is implemented, you need to direct the airflow through your facility and systems, ideally in a typical contained hot/cold-aisle arrangement.
newbie
Activity: 28
Merit: 0
December 29, 2013, 04:49:11 PM
#26
Indirect Evaporative Cooling


1. Yes.
2. Indirect Evaporative Cooling (IEC) poses less risk than a poorly managed system.
3. Yes.
4. ✗

Furthermore, IEC systems can lower the air temperature without adding moisture to the air, which makes them more attractive than direct evaporative coolers.

newbie
Activity: 6
Merit: 250
December 29, 2013, 04:23:48 PM
#26
Hello,

You should consider using localized cooling instead of trying to cool the whole room.

It is important to know what kind of devices you are trying to cool: USB-based ASIC devices? GPU cards?

The basic idea of localized cooling is to use air ducts to blow fresh air right at the intake of your heat-generating devices. You should also use ducts to carry the hot air away from those devices and, finally, exhaust it out of the room.

I am thinking of something like flexible air-conditioning ducts, the ones made from corrugated aluminium.

Connect the "cold air" ducts to a junction box (cold box) and the ducts carrying "hot air" to another one (hot box). Then you just have to blow air from outside into the "cold box" and exhaust the air from the "hot box" out of the room.


Hope you will find this helpful,

 José Antonio

BTC: 1HJMgnNnJ4ouLTenVFsDRhafPF2ZXHgTg9
hero member
Activity: 658
Merit: 500
Small Red and Bad
December 29, 2013, 12:58:30 PM
#25
You just posted it right above me, no need to repeat Wink He should be out of the newbie section by now; maybe he'll come here and defend his idea.
global moderator
Activity: 3990
Merit: 2717
Join the world-leading crypto sportsbook NOW!
December 29, 2013, 12:06:05 PM
#24
Oil is a really bad idea if you're making a data center.  
1. You need big containers to store your stuff.
2. You need to circulate the oil inside the containers (pumps).
3. If your oil heats up, it's going to evaporate and coat the ceiling and other objects in the room.
4. If the container walls do not dissipate the heat well, the equipment may overheat anyway.
5. Hardware modifications are difficult; be prepared to cover yourself in coolant every time you want to connect a cable.
6. It makes reselling almost impossible and voids the warranty.

Tell it to this guy: https://bitcointalksearch.org/topic/responso-to-topic-389706-389758 haha
hero member
Activity: 658
Merit: 500
Small Red and Bad
December 29, 2013, 11:54:12 AM
#23
Oil is a really bad idea if you're making a data center. 
1. You need big containers to store your stuff.
2. You need to circulate the oil inside the containers (pumps).
3. If your oil heats up, it's going to evaporate and coat the ceiling and other objects in the room.
4. If the container walls do not dissipate the heat well, the equipment may overheat anyway.
5. Hardware modifications are difficult; be prepared to cover yourself in coolant every time you want to connect a cable.
6. It makes reselling almost impossible and voids the warranty.
global moderator
Activity: 3990
Merit: 2717
Join the world-leading crypto sportsbook NOW!
December 29, 2013, 10:05:53 AM
#22
Posting message for generationbitcoin from the newb forum since he is unable. Link here: https://bitcointalksearch.org/topic/responso-to-topic-389706-389758


Hi, I'm answering here because I'm not allowed to post in the actual thread.

https://bitcointalksearch.org/topic/m.4195044

My solution would be something like:

http://www.youtube.com/watch?v=Eub39NaC4rc

Just put the hardware in the oil: no contact with air, no damp, no rust. Oil does not conduct electricity, so no short circuits.

Oil also distributes the heat over the whole surface of the containing tank, which makes it easier to cool down.

Little maintenance.

The only con is that the hardware will be permanently oily and hard to clean and resell.

my 2 cents (and hopefully 25 BTC Tongue)

my BTC wallet: 1L89NqH8vEwmcCkUf9W62fKL8y9Vi3K9KE
newbie
Activity: 28
Merit: 0
December 29, 2013, 01:21:21 AM
#21
buy a cool box:

http://www.minicoolers.co.uk/products/waeco/images/w35open.jpg

pack it full of dry ice and put your equipment inside...your rig will soon look like this:

http://www.freebiespot.net/wp-content/uploads/2010/12/dryice.jpg

send btc here:

15Wzsww4syNhX7joLbKE3mXhH8Fs78EXvc
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
December 29, 2013, 12:42:19 AM
#20
No GPUs in data centers?  You should come to Hollywood, or see Pixar's render walls for some data center GPU mania.
http://www.youtube.com/watch?v=i-htxbHP34s
Any animation or gaming studio of much stature has these GPU render walls.
hero member
Activity: 955
Merit: 1004
December 29, 2013, 12:04:24 AM
#19
Two part answer.

1.  The answer is 42.

2. Datacenter servers are all CPU-based; there are no GPUs, much less GPUs that can mine scrypt coins efficiently. GPU mining is a waste of time for Bitcoin now.

3. So unless you are building custom mining PCs, scrap the datacenter idea.

Here's my wallet address:  12eBRzZ37XaWZsetusHtbcRrUS3rRDGyUJ

Thank You!   Grin
newbie
Activity: 10
Merit: 0
December 28, 2013, 11:56:53 PM
#18
I live in a country with long and hot summers.

1. Warm air rises while cool air sinks
2. Open systems are not that hard to cool
3. All in life is about the flow, not how big or how expensive

If you create a big box or a small room where cool air sinks in and settles at the bottom, you have an inflow that is ready to be sucked up. The devices, like normal cards, can suck up that cool air and spit it out, and they should spit it out upwards. iiiii ... where each "i" is a device and the dots are the hot air spat upwards.

- Mount the devices low, but with 2 inches of space underneath, so that if you poured water on the side of the box/floor there would be no immediate danger; it would just run and level out under the devices (only air is needed, just visualize it).
- The tops of the devices blow off "steam" (hot air, just visualize it) upwards while cool air sinks in from the side.
- [ \ i i i i i i ] cool air slides down to the bottom under the devices, and the fans suck it up and blow it upwards, the way warm air would go anyway.

Circulating air without knowing what it wants to do is not going to work, so work with nature; air alone is enough if you understand the flow. I would use AC airflow with a gap in the inflow path, so that if there were any water it would drip onto the floor before flowing into the cool box with the devices (not a common little drain pipe that can get clogged by a single fly or something).

You could put some kind of kitchen hood (a reversed fan in a tunnel connected to a mouth/hood) above the devices, with the hot air flowing out and up above them (iiii). Not too close to the cool-air input, and maybe not so low that it sucks up the cool bottom air instead of the little fans on the devices (you're going to have to play with it).

Now if all of that won't work, you need an insanely expensive setup that should be underground, or you're just not in a cold country Wink

Just air will do the job: when I opened up the left side of my computer and turned on a normal living-room fan at the front left, it cooled down from 78C to 66C. I also get less dust inside the box than a normal closed computer.

As my next step I will have more cards in a huge cool box with AC airflow blowing in (not even through a tunnel, just pointed at it and not on swing mode), but not right underneath the AC in case it drips in the future. If the ceiling fills up with a cloud of hot air, I will just put a fan blowing backwards into another room or pointing outside, right where the AC unit is. Maybe I will still build a tunnel with just two wide sheets under the AC towards the box on the right \*\
hero member
Activity: 518
Merit: 500
December 28, 2013, 10:44:44 PM
#16
If you set up in a place where the temperature reaches 110F in the summer, cooling is going to cost you an arm and a leg whatever method you employ. Finding a cooler location should be your first priority.
sr. member
Activity: 454
Merit: 250
Technology and Women. Amazing.
December 28, 2013, 09:45:00 PM
#15
lol @ copypaste answers
legendary
Activity: 1512
Merit: 1036
December 28, 2013, 09:25:38 PM
#14
The move to water-cooled applications raises many challenges for facility executives. For example, experience shows that a building’s chilled water system is anything but clean. Few data center operators understand the biology and chemistry of open or closed loop cooling systems. Even when the operating staff does a great job of keeping the systems balanced, the systems still are subject to human errors that can wreak permanent havoc on pipes.
...

You could just post links instead of being a tool:
www.facilitiesnet.com/datacenters/article/Free-Air-WaterCooled-Servers-Increase-Data-Center-Energy-Efficiency--12989
http://www.facilitiesnet.com/datacenters/article/ripple-effect--8227

Location: Obviously wherever you live will play a huge part in this ... if you're near mountains, the suggestions above will get you some interesting ambient air to play with, along with the possibility of cheap local electricity if you put up some windmill/solar near the facility. Again, that is just additional capital cost when you're probably more focused on spending as much as you can on GH/s vs. hedging your own power source.

Building a data center for BITCOIN or ANYCOIN should follow most of the current standards out there. Running computer equipment at high temperatures for extended periods of time greatly reduces reliability and the longevity of components and will likely cause unplanned downtime. Maintaining an ambient temperature range of 68F to 75F (20 to 24C) is optimal for system reliability. This temperature range provides a safe buffer for equipment to operate in the event of air conditioning or HVAC equipment failure while making it easier to maintain a safe relative humidity level.

It is a generally agreed upon standard in the computer industry that expensive IT equipment should not be operated in a computer room or data center where the ambient room temperature has exceeded 85F (30C).
...
Recommended Computer Room Humidity
Relative humidity (RH) is defined as the amount of moisture in the air at a given temperature in relation to the maximum amount of moisture the air could hold at the same temperature. In a Mining Farm or computer room, maintaining ambient relative humidity levels between 45% and 55% is recommended for optimal performance and reliability.
..
You too:
http://www.avtech.com/About/Articles/AVT/NA/All/-/DD-NN-AN-TN/Recommended_Computer_Room_Temperature_Humidity.htm
sr. member
Activity: 423
Merit: 250
December 28, 2013, 09:23:02 PM
#13
The move to water-cooled applications raises many challenges for facility executives. For example, experience shows that a building’s chilled water system is anything but clean. Few data center operators understand the biology and chemistry of open or closed loop cooling systems. Even when the operating staff does a great job of keeping the systems balanced, the systems still are subject to human errors that can wreak permanent havoc on pipes.

Installing dedicated piping to in-row coolers is difficult enough the first time, but it will be nearly intolerable to have to replace that piping under the floor if, in less than five years, it begins to leak due to microbial or chemical attacks. That does happen, and sometimes attempts to correct the problem make it worse.

Consider these horror stories:

A 52-story single-occupant building with a tenant condenser water system feeding its data center and trading systems replaced its entire piping system (live) due to microbial attack.
A four-story data center replaced all of its chilled and condenser water systems (live) when the initial building operators failed to address cross contamination of the chilled water and the condenser water systems while on free cooling.
In yet another high-rise building, a two pipe (non-critical) system was used for heating in the winter and cooling in the summer. Each spring and fall the system would experience water flow blockages, so a chemical cleaning agent was added to the pipes to remove scale build-up.
Before the cleaning agent could be diluted or removed, the heating system was turned on. Thanksgiving night, the 4-inch lines let loose. Chemically treated 180-degree water flooded down 26 stories of the tower. Because no one on site knew how to shut the system down, it ran for two hours before being stopped.

Isolation
Water quality isn’t the only issue to consider. Back in the days of water-cooled mainframes, chilled water was delivered to a flat plate heat exchanger provided by the CPU manufacturer. The other side of the heat exchanger was filled with distilled water and managed by technicians from the CPU manufacturer. Given this design, the areas of responsibility were as clear as the water flowing through the computers.

In today’s designs, some of the better suppliers promote this physical isolation through the use of a “cooling distribution unit” (CDU) with the flat plate heat exchanger inside. Not all CDUs are alike and some are merely pumps with a manifold to serve multiple cooling units. It is therefore wise to be cautious. Isolation minimizes risk.

Currently, vendor-furnished standard CDUs are limited in the number of water-cooled IRC units they can support. Typically these are supplied to support 12 to 24 IRCs with a supply and return line for each. That’s 24 to 48 pipes that need to be run from a single point out to the IRCs. If there are just a few high-density cabinets to cool, that may be acceptable, but, as the entire data center becomes high-density, the volume of piping can become a challenge. Even 1-inch diameter piping measures two inches after it is insulated.

The solution will be evolutionary. Existing data centers will go the CDU route until they reach critical mass. New data centers and ones undergoing major renovations will have the opportunity to run supply and return headers sized for multiple rows of high-density cabinets with individual, valved take-offs for each IRC unit. This reduces clutter under the floor, allowing reasonable airflow to other equipment that remains air-cooled. Again, the smart money will have this distribution isolated from the main chilled water supply and could even be connected to a local air-cooled chiller should the main chilled water plant fail.

Evaluating IRC Units

Given the multitude of water-cooled IRC variations, how do facility executives decide what’s best for a specific application? There are many choices and opportunities for addressing specific needs.

One consideration is cooling coil location. Putting the coils on top saves floor space. And the performance of top-of-the-rack designs is seldom affected by daily operations of server equipment installs and de-installs. But many older data centers and some new ones have been shoehorned into buildings with minimal floor-to-ceiling heights, and many data centers run data cabling in cable trays directly over the racks. Both these situations could make it difficult to put coils on top.

If the coil is on top, does it sit on top of the cabinet or is it hung from the structure above? The method of installation will affect data cabling paths, cable tray layout, sprinklers, lighting and smoke detectors. Be sure that these can all be coordinated within the given overhead space.

Having the coil on the bottom also saves floor space. Additionally it keeps all piping under the raised floor and it allows for overhead cable trays to be installed without obstruction. But it will either increase the height of the cabinet or reduce the number of “U” spaces in the cabinet. A “U” is a unit of physical measure to describe the height of a server, network switch or other similar device. One “U” or “unit” is 44.45 mm (1.75 inches) high. Most racks are sized between 42 and 50 “U”s (6 to 7 feet high) of capacity. To go taller is impractical because doing so usually requires special platforms to lift and install equipment at the top of the rack. To use smaller racks diminishes the opportunities to maximize the data center capacity.

With a coil on the bottom, a standard 42U cabinet will be raised 12 to 14 inches. Will that be too tall to fit through data center and elevator doors? How will technicians install equipment in the top U spaces? One option is a cabinet with fewer U spaces, but that will mean more footprint for the same capacity.
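
A quick sanity check on that doorway question, as a Python sketch; the nominal figures used (1 U = 1.75 in, roughly 4.5 in of frame overhead, an 80 in door) are assumptions for illustration, not vendor specifications.

Code:
U_INCHES = 1.75

def cabinet_height_in(u_spaces, frame_overhead_in=4.5, coil_riser_in=0.0):
    """Approximate overall height: usable U space + frame overhead + any bottom riser."""
    return u_spaces * U_INCHES + frame_overhead_in + coil_riser_in

standard  = cabinet_height_in(42)                      # roughly 78 in tall
with_coil = cabinet_height_in(42, coil_riser_in=13.0)  # plus a 12-14 in bottom coil
door_in   = 80                                         # a common doorway height
print(f"Standard 42U cabinet: {standard:.1f} in")
print(f"With bottom coil:     {with_coil:.1f} in "
      f"({'fits through' if with_coil <= door_in else 'too tall for'} an {door_in} in door)")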

Alternative Locations
Another solution is 1-foot-wide IRC units that are installed between each high-density cabinet. This approach offers the most redundancy and is the simplest to maintain. It typically has multiple fans and can have multiple coils to improve reliability. Piping and power are from under the floor. This design also lends itself to low-load performance enhancements in the future. What’s more, this design usually has the lowest installed price.

On the flip side, it uses more floor space than the other approaches, with a footprint equal to half a server rack. It therefore allows a data center to go to high-density servers but limits the total number of computer racks that can be installed. Proponents of this design concede that this solution takes up space on the data center floor. They admit that data centers have gone to high-density computing for reduced footprint as well as for speed, but they contend that the mechanical cooling systems now need to reclaim some of the space saved.

Rear-door solutions are a good option where existing racks need more cooling capacity. But the design's performance is more affected by daily operations than the other designs, due to the door being opened when servers are being installed or removed. Facility executives should determine what happens to the cooling (and the servers) when the rear door is opened.

No matter which configuration is selected, facility executives should give careful consideration to a range of specific factors:

Connections. These probably pose the greatest risk no matter which configuration is selected. Look at the connections carefully. Are they substantial, able to take the stress of physical abuse when data cables get pulled around them or when they get stepped on while the floor is open? The connections can be anything from clear rubber tubing held on with hose clamps to threaded brass connections.

Think about how connections are made in the site as well as how much control can be exercised over underfloor work. Are workers aware of the dangers of putting stresses on pipes? Many are not. What if the fitting cracks or the pipe joint leaks? Can workers find the proper valve to turn off the leak? Will they even try? Does the data center use seal-tight electrical conduits that will protect power connections from water? Can water flow under the cables and conduits to the nearest drain or do the cables and conduits act like dams holding back the water and forcing it into other areas?

Valve quality. This is a crucial issue regardless of whether the valves are located in the unit, under the floor or in the CDU. Will the valve seize up over time and become inoperable? Will it always hold tight? To date, ball valves seem to be the most durable. Although valves are easy to take for granted, the ramifications of valve selection will be significant.

Servicing.
Because everything mechanical will eventually fail, one must look at IRC units with respect to servicing and replacement. How easy will servicing be? Think of it like servicing a car. Is everything packed so tight that it literally has to be dismantled to replace the cooling coil? What about the controls? Can they be replaced without shutting the unit down? And are the fans (the component that most commonly fails) hard wired or equipped with plug connections?

Condensate Drainage.
A water-cooled IRC unit is essentially a mini computer-room air conditioning (CRAC) unit. As such, it will condense water on its coils that will need to be drained away. Look at the condensate pans. Are they well drained or flat allowing for deposits to build up? If condensate pumps are needed what is the power source?

Some vendors are promoting systems that do sensible cooling only. This is good for maintaining humidity levels in the data center. If the face temperature of the cooling coil remains above the dew point temperature in the room, there will not be any condensation. The challenge is starting up a data center, getting it stabilized and then having the ability to track the data center’s dew point with all the controls automatically adjusting to maintain a sensible cooling state only.

Power. Data centers do not have enough circuits to wire the computers, and now many more circuits are being added for the IRC units. What's more, designs must be consistent and power the mechanical systems to mimic the power distribution of the computers. What is the benefit of having 15 minutes of battery back-up if the servers go out on thermal overload in less than a minute? That being the case, IRC units need to be dual power corded as well. That requirement doubles the IRC circuit quantities along with the associated distribution boards and feeders back to the service entrance.

Before any of the specifics of IRC unit selection really matter, of course, facility executives have to be comfortable with water in the data center. Many are still reluctant to take that step. There are many reasons:

There’s a generation gap. Relatively few professionals who have experience with water-cooled processors are still around.
The current generation of operators have been trained so well about keeping water out of the data center that the idea of water-cooled processors is beyond comprehension.
There is a great perceived risk of making water connections in and around live electronics.
There is currently a lack of standard offerings from the hardware manufacturers.
The bottom line is that water changes everything professionals have been doing in data centers for the last 30 years. And that will create a lot of sleepless nights for many data center facility executives.

Before You Dive In
Traditionally, data centers have been cooled by computer-room air conditioning (CRAC) units via underfloor air distribution. Whether a data center can continue using that approach depends on many factors. The major factors include floor height, underfloor clutter, hot and cold aisle configurations, loss of air through tile cuts and many more too long to list here.

Generally speaking, the traditional CRAC concept can cool a reasonably designed and maintained data center averaging 4 kW to 6 kW per cabinet. Between 6 kW and 18 kW per cabinet, supplementary fan assist generally is needed to increase the airflow through the cabinets.

The fan-assist technology comes in many varieties and has evolved over time.

• First there were the rack-mounted, 1-U type of fans that increase circulation to the front of the servers, particularly to those at the top of the cabinet.

• Next came the fixed muffin fans (mounted top, bottom and rear) used to draw the air through the cabinet. Many of these systems included a thermostat to cycle individual fans on and off as needed.

• Later came larger rear-door and top-mounted fans of various capacities integrated into the cabinet design to maximize the air flow evenly through the entire cabinet and in some cases even to direct the air discharge.

All these added fans add load to the data center and particularly to the UPS. To better address this and to maximize efficiencies, the latest fan-assist design utilizes variable-speed fans that adjust airflow rates to match the needs of a particular cabinet.

Until recently, manufacturers did not include anything more than muffin fans with servers. In the past year, this has started to change. Server manufacturers are now starting to push new solutions out of research labs and into production. At least one server manufacturer is now utilizing multiple variable turbine-type fans in their blade servers. These are compact, high air volume, redundant and part of the manufactured product. More of these server-based cooling solutions can be expected in the coming months.
sr. member
Activity: 423
Merit: 250
December 28, 2013, 09:13:31 PM
#12
Just as water is an effective heat-exchange medium in evaporative cooling systems, it can also be circulated throughout the data center to cool the IT equipment at the cabinet level. In fact, water cooling is far more energy efficient than air cooling. A gallon of water can absorb the same energy per degree of temperature change as 500 cubic feet of air. This yields significant operational savings in typical applications because the circulation of air to remove heat will require 10 times the amount of energy that would be required to move the water to transport the same amount of heat.
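
That gallon-vs-500-cubic-feet comparison is easy to sanity-check; here is a short Python sketch using assumed textbook property values (water at 4.186 J/g·K, air at about 1.2 kg/m³ and 1005 J/kg·K).

Code:
GALLON_L  = 3.785      # US gallon in litres
FT3_TO_M3 = 0.0283168  # cubic feet to cubic metres

water_heat_capacity = GALLON_L * 1000 * 4.186   # grams x J/(g*K) -> about 15,800 J/K
air_mass_kg         = 500 * FT3_TO_M3 * 1.2     # about 17 kg of air
air_heat_capacity   = air_mass_kg * 1005        #                 -> about 17,100 J/K

print(f"1 gallon of water: {water_heat_capacity:,.0f} J/K")
print(f"500 ft^3 of air:   {air_heat_capacity:,.0f} J/K")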

However, it is more expensive to install water piping than ductwork. An engineer can provide cost comparisons to give the owner the financial insight to make a sound decision when constructing a new facility. It is not usually a feasible retrofit for an existing data center.

Rear-door heat exchangers and integral water cooling are options in existing air-cooled data centers to reduce the energy use and cost associated with cooling. They put the water-cooling power of heat exchangers where they are really needed: on the server racks.

Rear-door heat exchangers are mounted on the back of each server rack. Sealed coils within the heat exchanger circulate chilled water supplied from below the raised floor. Hot air exhausted from the server passes over the coils, transferring the heat to the water and cooling the exhaust air to room temperature before it re-enters the room. The heated water is returned to the chiller plant, where the heat is exhausted from the building. Owners can achieve significant operational savings using these devices. To protect the systems during loss of utility power, many facilities put the pumps for the systems on a dedicated uninterruptible power supply (UPS) system.

Owners have been cautious in adopting this approach due to the risk of leaks. The heat exchanger is equipped with baffles that prevent water spraying into the computer in the rare event of a leak. However, water could still leak onto the floor.

Another alternative is integral cooling, a sort of a "mini AC unit" between the cabinets. This close-coupled system takes the hot air discharged from the servers, cools it immediately and then blows it back to the inlet of the server. The system contains the water within the AC unit itself. The installation can also be designed to drain to a piping system under the floor, and it can incorporate leak detectors.


-----------------------------------
An air-side economizer intakes outside air into the building when it is easier to cool than the air being returned from the conditioned space and distributes it to the space; exhaust air from the servers is vented outside. Under certain weather conditions, the economizer may mix intake and exhaust air to meet the temperature and humidity requirements of the computer equipment.

Evaporative cooling uses non-refrigerated water to reduce indoor air temperature to the desirable range. Commonly referred to as swamp coolers, evaporative coolers utilize water in direct contact with the air being conditioned. Either the water is sprayed as a fine mist or a wetted medium is used to increase the rate of water evaporation into the air. As the water evaporates, it absorbs heat energy from the air, lowering the temperature of the air as the relative humidity of the air increases.

These systems are very energy efficient as no mechanical cooling is employed. However, the systems do require dry air to work effectively, which limits full application to specific climates. Even the most conservative organizations, such as financial institutions, are beginning to use these types of systems, especially because ASHRAE has broadened the operating-temperature recommendations for data centers. ASHRAE's Technical Committee 9.9 recommendations allow dry-bulb operating temperatures between 64.4 degrees F (18 degrees C) and 80.6 degrees F (27 degrees C), with humidity controlled to keep dew points below 59.0 degrees F (15 degrees C) or 60 percent RH, whichever is lower. This has given even the most reluctant owners a green light to consider these options.
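
Since that ASHRAE envelope is stated as a dry-bulb range plus a dew-point/RH cap, here is a small Python sketch that checks a temperature/humidity reading against it; the dew point uses the common Magnus approximation (constants b = 17.62, c = 243.12 °C), and the sample readings are made up for illustration.

Code:
import math

def dew_point_c(t_c, rh_pct):
    """Magnus approximation of dew point from dry-bulb temp (C) and RH (%)."""
    b, c = 17.62, 243.12
    gamma = math.log(rh_pct / 100.0) + (b * t_c) / (c + t_c)
    return c * gamma / (b - gamma)

def within_ashrae_tc99(t_c, rh_pct):
    """18-27 C dry bulb, RH <= 60%, dew point <= 15 C (whichever is lower)."""
    return 18.0 <= t_c <= 27.0 and rh_pct <= 60.0 and dew_point_c(t_c, rh_pct) <= 15.0

print(within_ashrae_tc99(24, 50))   # True  -> dew point ~12.9 C
print(within_ashrae_tc99(27, 60))   # False -> dew point ~18.6 C, too humid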

Airside economizers and evaporative cooling systems are difficult to implement in existing data centers because they typically require large HVAC ductwork and a location close to the exterior of the building. In new facilities, these systems increase the capital cost of the facility (i.e., larger building volume), HVAC equipment and ductwork. However, over the course of the lifetime of the facility, these systems significantly reduce operating costs when used in the appropriate climate, ideally, locations with consistent moderate temperatures and low humidity. Even under ideal conditions, the owner of a high-density data center that relies on outside air for cooling must minimize risks associated with environmental events, such as a forest fire generating smoke, and HVAC equipment failures.
full member
Activity: 208
Merit: 117
December 28, 2013, 09:06:11 PM
#11
Location: Obviously wherever you live will play a huge part in this ... if you're near mountains, the suggestions above will get you some interesting ambient air to play with, along with the possibility of cheap local electricity if you put up some windmill/solar near the facility. Again, that is just additional capital cost when you're probably more focused on spending as much as you can on GH/s vs. hedging your own power source.

Building a data center for BITCOIN or ANYCOIN should follow most of the current standards out there. Running computer equipment at high temperatures for extended periods of time greatly reduces reliability and the longevity of components and will likely cause unplanned downtime. Maintaining an ambient temperature range of 68F to 75F (20 to 24C) is optimal for system reliability. This temperature range provides a safe buffer for equipment to operate in the event of air conditioning or HVAC equipment failure while making it easier to maintain a safe relative humidity level.

It is a generally agreed upon standard in the computer industry that expensive IT equipment should not be operated in a computer room or data center where the ambient room temperature has exceeded 85F (30C).

In today's high-density data centers and computer rooms, measuring the ambient room temperature is often not enough. The temperature of the air where it enters a miner can be measurably higher than the ambient room temperature, depending on the layout of the data center and the concentration of heat-producing rigs. Measuring the temperature of the aisles in the data center at multiple height levels can give an early indication of a potential temperature problem. For consistent and reliable temperature monitoring, place a temperature sensor at least every 25 feet in each aisle, with sensors placed closer together if high-temperature equipment like blade servers is in use. I would recommend installing TemPageR, Room Alert 7E or Room Alert 11E rack units at the top of each rack in the data center. As the heat generated by the components in the rack rises, TemPageR and Room Alert units will provide an early warning and notify staff of temperature issues before critical systems, servers or network equipment are damaged.

Recommended Computer Room Humidity
Relative humidity (RH) is defined as the amount of moisture in the air at a given temperature in relation to the maximum amount of moisture the air could hold at the same temperature. In a Mining Farm or computer room, maintaining ambient relative humidity levels between 45% and 55% is recommended for optimal performance and reliability.

When relative humidity levels are too high, water condensation can occur which results in hardware corrosion and early system and component failure. If the relative humidity is too low, computer equipment becomes susceptible to electrostatic discharge (ESD) which can cause damage to sensitive components. When monitoring the relative humidity in the data center, the recommendation is to set early warning alerts at 40% and 60% relative humidity, with critical alerts at 30% and 70% relative humidity. It is important to remember that the relative humidity is directly related to the current temperature, so monitoring temperature and humidity together is critical.
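
To make those thresholds concrete, here is a minimal Python sketch of the alerting logic described above (warn outside 68-75F or 40-60% RH, critical at 85F and above or at 30%/70% RH); the sensor names and readings are hypothetical placeholders, not output from any real monitoring product.

Code:
def classify(temp_f, rh_pct):
    """Return 'ok', 'warning' or 'critical' for a single sensor reading."""
    if temp_f >= 85 or rh_pct <= 30 or rh_pct >= 70:
        return "critical"
    if not (68 <= temp_f <= 75) or not (40 <= rh_pct <= 60):
        return "warning"
    return "ok"

readings = {"aisle-1-top": (72, 48), "aisle-2-top": (79, 52), "aisle-3-top": (88, 62)}
for sensor, (t, rh) in readings.items():
    print(f"{sensor}: {classify(t, rh)}")   # ok / warning / critical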

So in closing: there are many ways to cool, from traditional air conditioning to evaporative systems ... that part really is a math equation on capital cost. The real focus should be maintaining optimal environmental conditions inside the mining farm to ensure your core capital investment stays operational and as efficient as possible.

Tips: 1BhgD5d6YTDhf7jXLLGYQ3MvtDKw1nLjPd