
Topic: [Klondike] Case design thread for K16 - page 5. (Read 37958 times)

legendary
Activity: 952
Merit: 1000
June 13, 2013, 11:11:37 PM
#74
I drew this little diagram in another thread, but thought it might be applicable here.

If you're not in a datacenter, then sealing everything up and channeling your airflow is the best way to go. For proper cooling, you need a low ambient temp. If the miner is dumping 1000+W into your spare bedroom, then the ambient is going to skyrocket. You could throw an AC into the room to combat the increase in temp, or you could throw all the heat out the window, literally, and just keep a fresh supply of cooler air coming in.

This thread has some good ideas for properly insulating and exhausting your rigs. At that point, you don't need to worry about giant AC units; you would just need a steady stream of cool, outside air. I live in the NE USA, and even in summer it barely gets to 25C, and hardly ever to 30C. In winter, it's not uncommon to see below 0C. Fresh, cool air isn't a huge concern for me. ;)

  COLD             HOT
   AIR             AIR
    |               ^
    V               |          OUTSIDE
----FAN-----------FAN---------------
    |               ^          INSIDE
    V               |
     --> ASICs -->
sr. member
Activity: 322
Merit: 250
Supersonic
June 13, 2013, 10:54:25 PM
#73
Had a random thought.

This is only for on premise hosting... not datacenter.

If we use such dense layouts, airflow is clearly channeled: all of the cool air is taken in from one side and hot air comes out the other end. Why not seal the outlets, attach ducts, and release the hot air outside? I guess the devices should run fine with 25-30C ambient in the tropics. Normally you would need air conditioners because the released hot air drives the ambient much higher, but that wouldn't happen if you channel the heat out. The exhaust air should only be 5-10C warmer than ambient, I presume.

I see hosting in a datacenter or controlling airflow as the only reasons for even needing a case. If no airflow control is needed, just keep the naked PCBs.
legendary
Activity: 952
Merit: 1000
June 13, 2013, 04:01:54 PM
#72
That would certainly be more cost effective. I've seen one proposal that tries to fit 72 K16s in a single 3U case... at that density you'd run out of power after filling only 6U of the rack. I think 2.5KW in a single 3U case is way overboard, but even if you cut that in half to 36 K16s, you'd need just over 16KW for the whole rack, and that is unheard of. It would be 2TH in a single rack, though, which is pretty sweet.
It really depends on how much you want to cram into a rack. If you're just looking for a single, really dense unit, sure, you could cram them in until they don't fit anymore; at that point, physical placement is your bigger concern. If you're talking about populating a full rack, power distribution becomes your limiting factor.

20 of them in a 4U chassis would be about 700W (including fans/controller), which is totally doable from a thermal perspective. A 4U chassis could vent a lot more than 700W of heat, especially if you use a lot of fans on the front and back in a push/pull. 10 of those in a rack would be about 1TH/s, at ~7KW. That would be manageable, I think.
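As a sanity check, the chassis and rack figures above work out like this. This is a rough sketch: the ~5 GH/s per K16 and the 60W fan/controller allowance are assumptions chosen to match the post's round numbers, not measured specs.

```python
# Rack-density sketch for the 4U chassis described above.
K16_WATTS = 32         # estimated draw per K16 board (from earlier in the thread)
K16_GHS = 5            # assumed ~5 GH/s per K16 (matches "3-4 boards = 15-20GH/s")
BOARDS_PER_4U = 20
OVERHEAD_WATTS = 60    # assumed fans + controller budget per chassis

chassis_watts = BOARDS_PER_4U * K16_WATTS + OVERHEAD_WATTS  # 700 W per 4U chassis
rack_chassis = 10                                           # ten 4U chassis ~ one rack
rack_watts = rack_chassis * chassis_watts                   # 7000 W per rack
rack_ghs = rack_chassis * BOARDS_PER_4U * K16_GHS           # 1000 GH/s ~ 1 TH/s

print(chassis_watts, rack_watts, rack_ghs)
```

That lands right on the ~700W per chassis and ~7KW, ~1TH/s per rack quoted above.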
sr. member
Activity: 252
Merit: 250
June 13, 2013, 01:13:25 PM
#71
I can't remember how many W each K16 uses. How many K16s would 100W limit you to?

32W per K16 estimated

3x32W=96W
4x32W=128W

not too much eh..

So each 1U server would literally be housing 3-4 tiny little boards for only 15-20GH/s tops. That's not a whole lot, especially when you consider the cost of the 1U server chassis, the PSU, controller, etc.

I'd rather go with a 3U or 4U case that can handle a decent ATX PSU and throw a whole lot more of those K16s in there, maybe sandwiched together into pairs with fans between them?

That would certainly be more cost effective. I've seen one proposal that tries to fit 72 K16s in a single 3U case... at that density you'd run out of power after filling only 6U of the rack. I think 2.5KW in a single 3U case is way overboard, but even if you cut that in half to 36 K16s, you'd need just over 16KW for the whole rack, and that is unheard of. It would be 2TH in a single rack, though, which is pretty sweet.
legendary
Activity: 952
Merit: 1000
June 13, 2013, 11:41:51 AM
#70
I can't remember how many W each K16 uses. How many K16s would 100W limit you to?

32W per K16 estimated

3x32W=96W
4x32W=128W

not too much eh..

So each 1U server would literally be housing 3-4 tiny little boards for only 15-20GH/s tops. That's not a whole lot, especially when you consider the cost of the 1U server chassis, the PSU, controller, etc.

I'd rather go with a 3U or 4U case that can handle a decent ATX PSU and throw a whole lot more of those K16s in there, maybe sandwiched together into pairs with fans between them?
full member
Activity: 224
Merit: 100
June 13, 2013, 09:43:44 AM
#69
hero member
Activity: 924
Merit: 1000
June 13, 2013, 08:27:06 AM
#68
I can't remember how many W each K16 uses. How many K16s would 100W limit you to?

32W per K16 estimated

3x32W=96W
4x32W=128W

not too much eh..

https://www.stackpop.com/configure/colocation/singapore/singapore/singapore/19885
4 kVA for 42U, $2500/month
125 x K16 = 4 kVA
125 x K16 = ~3 K16 per 1U (most likely some slots are unusable or used for switches, spacing, etc.)

https://www.stackpop.com/configure/colocation/united_states/virginia/ashburn/17237
24 kVA, 59U, for $1000 per month (or $2000; unsure if power is included or extra)
750 x K16 = 24 kVA
750 x K16 = ~12.7 K16 per 1U, or ~51 per 4U (most likely some slots are unusable or used for switches, spacing, etc.)
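A quick sketch of the colo arithmetic above. It assumes 1 kVA ≈ 1 kW (unity power factor), which real PSUs won't quite hit, so treat the board counts as upper bounds.

```python
# Colo capacity check: boards per power allotment, then boards per U.
K16_WATTS = 32  # estimated draw per K16, from earlier in the thread

for kva, usable_u in ((4, 42), (24, 59)):
    boards = kva * 1000 // K16_WATTS      # how many K16s the power budget allows
    per_u = round(boards / usable_u, 1)   # average boards per 1U of rack space
    print(kva, boards, per_u)
```

This reproduces the 125 boards (~3 per U) for the Singapore quote and 750 boards (~12.7 per U) for the Ashburn one.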

But these can only be used once the stability of these devices is proven, so they don't need much babysitting by hand.

Cramming higher density into 1U is only viable if you are hosting the rack on your own premises and can arrange adequate cooling. But then why would one use 1U and not bigger cases, where you can have better airflow using more efficient, bigger fans?

For something like the 1U I designed to go into a datacenter, you will need 2nd or 3rd gen ASICs, where power is a third of what it is now. Maybe the 55nm Avalons will work; then it will be datacenter ready. That is why I will have to go the cheaper route and build my own datacenter? LOL... sounds funny to say, but it will be.
sr. member
Activity: 322
Merit: 250
Supersonic
June 13, 2013, 06:16:52 AM
#67
I can't remember how many W each K16 uses. How many K16s would 100W limit you to?

32W per K16 estimated

3x32W=96W
4x32W=128W

not too much eh..

https://www.stackpop.com/configure/colocation/singapore/singapore/singapore/19885
4 kVA for 42U, $2500/month
125 x K16 = 4 kVA
125 x K16 = ~3 K16 per 1U (most likely some slots are unusable or used for switches, spacing, etc.)

https://www.stackpop.com/configure/colocation/united_states/virginia/ashburn/17237
24 kVA, 59U, for $1000 per month (or $2000; unsure if power is included or extra)
750 x K16 = 24 kVA
750 x K16 = ~12.7 K16 per 1U, or ~51 per 4U (most likely some slots are unusable or used for switches, spacing, etc.)

But these can only be used once the stability of these devices is proven, so they don't need much babysitting by hand.

Cramming higher density into 1U is only viable if you are hosting the rack on your own premises and can arrange adequate cooling. But then why would one use 1U and not bigger cases, where you can have better airflow using more efficient, bigger fans?
member
Activity: 77
Merit: 10
June 13, 2013, 01:22:09 AM
#66
I can't remember how many W each K16 uses. How many K16s would 100W limit you to?

32W per K16 estimated

3x32W=96W
4x32W=128W

not too much eh..
legendary
Activity: 952
Merit: 1000
June 13, 2013, 01:11:32 AM
#65
...you'll find most rack densities in the 3-7kW per rack range. At 42u, that's 71-167w per U on average. ...Stick to the kW / rack or even watt-per-U. 100w per U is a pretty good number since it would give you 4.2kW / rack.
I can't remember how many W each K16 uses. How many K16s would 100W limit you to?
sr. member
Activity: 246
Merit: 250
My spoon is too big!
June 12, 2013, 03:05:57 PM
#64
Depending on the data center, you'll find most rack densities in the 3-7kW per rack range. At 42U, that's 71-167W per U on average. You can get more dense if you want, but you're going to pay for the power/cooling that you use, and it won't do you any good to make it more dense. Ultimately, the colo provider had to build their data center with a design power in mind, and if you chew up power before the usable space, they can't sell bare white space without the power/cooling capacity to support it. You really can't look at it in terms of U-space or even physical (white) space alone. Stick to kW per rack, or even watts per U. 100W per U is a pretty good number, since it would give you 4.2kW per rack if it were fully populated. Go higher and you're probably going to be using blanking plates.
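The per-U arithmetic above can be checked on a napkin; here is a quick sketch of just the numbers in the post, nothing more:

```python
# Per-U power budget check for a standard 42U rack.
RACK_U = 42

low_w_per_u = 3000 / RACK_U            # 3 kW/rack -> ~71 W per U
high_w_per_u = 7000 / RACK_U           # 7 kW/rack -> ~167 W per U
design_rack_kw = 100 * RACK_U / 1000   # 100 W/U fully populated -> 4.2 kW/rack

print(round(low_w_per_u), round(high_w_per_u), design_rack_kw)
```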

Source: I'm an 18-year "critical power industry" veteran.
KS
sr. member
Activity: 448
Merit: 250
June 11, 2013, 11:47:32 AM
#63

I can bet anyone 1 satoshi that no datacenter is gonna accept a unit so dense. Personally, I think it's a little dangerous.

The DC's I have contacted recently will let you have a 2200W server per half rack.

edit: 2200W usual, 3500W peak
hero member
Activity: 924
Merit: 1000
June 10, 2013, 11:42:52 PM
#62
The rack in my shop begs to differ, and begs to hold as many of those as it can.

Which? 1U server idea? K256? 700W?
sr. member
Activity: 310
Merit: 250
June 10, 2013, 09:58:40 PM
#61
The rack in my shop begs to differ, and begs to hold as many of those as it can.
hero member
Activity: 924
Merit: 1000
June 10, 2013, 09:57:48 PM
#60

I can bet anyone 1 satoshi that no datacenter is gonna accept a unit so dense. Personally, I think it's a little dangerous.


There are ones more dense than this in centers already. GPU servers and what wattage does a 4U server throw... density? I am thinking with the 55nm Avalon 2nd Generation this will be possible. That is what I am counting on but I want to do a 1U now to see how it all fits together. 3 Months till the next chip! We need to get back on thread though... K16 casings.
sr. member
Activity: 322
Merit: 250
Supersonic
June 10, 2013, 09:40:02 PM
#59

I can bet anyone 1 satoshi that no datacenter is gonna accept a unit so dense. Personally, I think it's a little dangerous.
sr. member
Activity: 322
Merit: 250
Supersonic
June 10, 2013, 09:36:53 PM
#58
Just look at a 1U heatsink fan on a typical Xeon. Even if the CPU is only 90W, you'll have a 45 dBA centrifugal fan on a solid copper heatsink for that ONE CPU.

A 1U sounds noisier than a vacuum cleaner when booting up or at full load... Brought one to the office once to debug. Everyone looked up, shocked, when I turned the sucker on...
Yep. And that's for what, 100-200W? Imagine the density of some of the designs we've seen here.

Not all 1U servers are 100W... or 200W... come on. The average is what, 200 to 400W?

And I guess I shouldn't have bogarted the thread with a 1U server... we are supposed to be talking about K16 casings.


We're guessing at what he just dragged into his office. I doubt he has a super-dense 400W 1U server in a workspace, lol. Have fun trying to use a phone.

My comments were actually made in regard to http://i230.photobucket.com/albums/ee106/PFC4L1FE/MOCK_zps134cc5d8.png

Nah, it wasn't super dense... I still have that box collecting dust for sentimental reasons (decommissioned years ago, and I can't use it for fun because it's noisy). I will look up the PSU capacity and let you know. I believe it's one of the early Xeons with hyperthreading, and the board fits 2 CPUs but has only 1 installed.
legendary
Activity: 1666
Merit: 1185
dogiecoin.com
June 10, 2013, 09:30:03 PM
#57
Just look at a 1U heatsink fan on a typical Xeon. Even if the CPU is only 90W, you'll have a 45 dBA centrifugal fan on a solid copper heatsink for that ONE CPU.

A 1U sounds noisier than a vacuum cleaner when booting up or at full load... Brought one to the office once to debug. Everyone looked up, shocked, when I turned the sucker on...
Yep. And that's for what, 100-200W? Imagine the density of some of the designs we've seen here.

Not all 1U servers are 100W... or 200W... come on. The average is what, 200 to 400W?

And I guess I shouldn't have bogarted the thread with a 1U server... we are supposed to be talking about K16 casings.


We're guessing at what he just dragged into his office. I doubt he has a super-dense 400W 1U server in a workspace, lol. Have fun trying to use a phone.

My comments were actually made in regard to
hero member
Activity: 924
Merit: 1000
June 10, 2013, 09:29:44 PM
#56
You can't run this 1U in your room... lol! Mine is going on the 3rd floor of my shophouse school, well away from the working floor of the school.

No no no.

I think you would have to stack like Cairnsmore/X6500. Even then, the heat might be a bit much. My ambient is 28/32C night/day.
sr. member
Activity: 322
Merit: 250
Supersonic
June 10, 2013, 09:27:15 PM
#55
Just look at a 1U heatsink fan on a typical Xeon. Even if the CPU is only 90W, you'll have a 45 dBA centrifugal fan on a solid copper heatsink for that ONE CPU.

A 1U sounds noisier than a vacuum cleaner when booting up or at full load... Brought one to the office once to debug. Everyone looked up, shocked, when I turned the sucker on...
Yep. And that's for what, 100-200W? Imagine the density of some of the designs we've seen here.

Not all 1U servers are 100W... or 200W... come on. The average is what, 200 to 400W?

And I guess I shouldn't have bogarted the thread with a 1U server... we are supposed to be talking about K16 casings.


I believe when booting up, all the fans etc. are set to max RPM before the controllers/sensors come online... but it's still noisy in moderate usage.

You have two kinds of people: ones (like Bicknellski) who plan on putting it in a dedicated room where people won't be working and don't mind the noise... and people (like me) who would probably have one running in the bedroom or office, where a vacuum cleaner would drive me nuts, but at the same time don't mind leaving it caseless and MacGyvering external airflow...

Without any knowledge of thermodynamics... I'd guesstimate at least two 120mm fans' worth of airflow to remove the heat from 4 K16s (128W)... with ambient < 25C.
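For what it's worth, that guesstimate holds up on a napkin. Using the sensible-heat relation Q = ṁ·cp·ΔT, and assuming sea-level air properties and a 10C intake-to-exhaust rise (both assumptions, matching the "5-10C warmer than ambient" guess earlier in the thread):

```python
# How much airflow does it take to carry away 128 W of heat?
RHO = 1.2            # kg/m^3, assumed air density at sea level
CP = 1005.0          # J/(kg*K), specific heat of air
M3S_TO_CFM = 2118.88 # 1 m^3/s in cubic feet per minute

def required_cfm(watts, delta_t_c):
    """Volumetric airflow needed to remove `watts` at a given temperature rise."""
    m3_per_s = watts / (RHO * CP * delta_t_c)
    return m3_per_s * M3S_TO_CFM

print(round(required_cfm(128, 10), 1))  # ~22.5 CFM for 4 K16s at a 10C rise
```

At roughly 22 CFM required, two 120mm fans (commonly rated around 40-70 CFM each in free air) leave a healthy margin for duct and heatsink back-pressure, so the two-fan guess looks reasonable.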