
Topic: [Klondike] Case design thread for K16 - page 4. (Read 37921 times)

sr. member
Activity: 322
Merit: 250
Supersonic
June 15, 2013, 05:52:55 PM
#93
I can't remember how many W each K16 uses. How many K16s would 100W limit you to?

32W per K16 estimated

3x32W=96W
4x32W=128W

not too much eh..

https://www.stackpop.com/configure/colocation/singapore/singapore/singapore/19885
4 kVA for 42U, $2500/month
125 x K16 = 4 kVA
125 x K16 over 42U = ~3 K16 per 1U (most likely some slots will be unusable, used for switches, spacing, etc.)

https://www.stackpop.com/configure/colocation/united_states/virginia/ashburn/17237
24 kVA, 59U for $1000/month (or $2000; unsure whether power is included or extra)
750 x K16 = 24 kVA
750 x K16 over 59U = 12.7 K16 per 1U, or ~51 per 4U (most likely some slots will be unusable, used for switches, spacing, etc.)

But these can only be used once the stability of these devices is proven, so they don't need much hands-on babysitting.

Cramming higher density into 1U is only viable if you are hosting the rack on your own premises and can arrange adequate cooling. But then why use 1U rather than bigger cases, where you can get better airflow from larger, more efficient fans?



Come and host them in ROMANIA, EU.

You will pay: 125 x K16 = 4 kVA = $503/month

I presume you calculated the electricity cost only, not factoring in cooling, rack space, or other facility charges.

$503 is for electricity only (4 kVA).

But in the near future I will have a temperature-controlled, cooled room and I will open a hosting service. Of course not for $503/month, but it will be a very attractive price considering that I will provide a UPS, a chilled room, backup internet, almost 24/7 on-site staff if something needs to be done, and remote control of your device!

The links I posted are about proper data centers: ones in our own cities that we trust (they handle much more expensive equipment), that we can visit, that offer strong guarantees and SLAs, and that provide good physical security.

Personally, I would host the mining equipment in a different city than the one I'm based in...
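
For anyone comparing these offers, here's a minimal back-of-envelope sketch (Python) of the numbers above. It assumes the ~32 W per K16 estimate, treats kVA as roughly kW (power factor ~1), and assumes the $503 Romanian figure covers 24/7 full-load electricity over a ~30-day month - all assumptions on my part, not vendor quotes.

Code:
WATTS_PER_K16 = 32  # estimated draw per K16 board, per this thread

def k16_capacity(kva, rack_units, monthly_usd):
    """Back-of-envelope: how many K16s a colocation offer can power, and what each costs."""
    watts = kva * 1000                     # treating kVA as ~kW (power factor ~1 assumed)
    boards = watts // WATTS_PER_K16
    return (boards, round(boards / rack_units, 1), round(monthly_usd / boards, 2))

print("Singapore:", k16_capacity(4, 42, 2500))   # (125, 3.0, 20.0)  -> ~$20 per K16/month
print("Ashburn:  ", k16_capacity(24, 59, 1000))  # (750, 12.7, 1.33) -> ~$1.33 per K16/month

# Implied electricity price behind the "$503/month for 4 kVA" quote,
# assuming 24/7 full-load operation over a 30-day month:
kwh_per_month = 4 * 24 * 30                      # 2880 kWh
print("implied price:", round(503 / kwh_per_month, 3), "$/kWh")  # ~0.175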
member
Activity: 92
Merit: 10
June 15, 2013, 05:36:29 PM
#92
I can't remember how many W each K16 uses. How many K16s would 100W limit you to?

32W per K16 estimated

3x32W=96W
4x32W=128W

not too much eh..

https://www.stackpop.com/configure/colocation/singapore/singapore/singapore/19885
4 kVA for 42U, $2500/month
125 x K16 = 4 kVA
125 x K16 over 42U = ~3 K16 per 1U (most likely some slots will be unusable, used for switches, spacing, etc.)

https://www.stackpop.com/configure/colocation/united_states/virginia/ashburn/17237
24 kVA, 59U for $1000/month (or $2000; unsure whether power is included or extra)
750 x K16 = 24 kVA
750 x K16 over 59U = 12.7 K16 per 1U, or ~51 per 4U (most likely some slots will be unusable, used for switches, spacing, etc.)

But these can only be used once the stability of these devices is proven, so they don't need much hands-on babysitting.

Cramming higher density into 1U is only viable if you are hosting the rack on your own premises and can arrange adequate cooling. But then why use 1U rather than bigger cases, where you can get better airflow from larger, more efficient fans?



Come and host them in ROMANIA, EU.

You will pay: 125 x K16 = 4 kVA = $503/month

I presume you calculated the electricity cost only, not factoring in cooling, rack space, or other facility charges.

$503 is for electricity only (4 kVA).

But in the near future I will have a temperature-controlled, cooled room and I will open a hosting service. Of course not for $503/month, but it will be a very attractive price considering that I will provide a UPS, a chilled room, backup internet, almost 24/7 on-site staff if something needs to be done, and remote control of your device!
sr. member
Activity: 434
Merit: 250
June 15, 2013, 05:30:40 PM
#91

~800 W
That appears to be a 3U half-rack unit. Or is it 4U?

I think it needs some space for:
1) Controller - Raspberry Pi
2) USB hub(s)
3) PSU

So, stick two of them in a single 3U (full-length) case, remove about 2 to 4 K16s, and stick in the PSU, hubs, and RPi. Add external power/network ports. Then find a datacenter that'll take 1.5 kW in 3 RU... then profit.

It's a 3U case. It'll have the PSU and controller (an Ubuntu box) separately, as I won't be using hosted rack space. Otherwise, yeah, you could sacrifice some K16 space for the things you mentioned to create a self-contained unit.

I think one challenge in your deployment will be mounting. You would probably need a lot of hex spacers of exact lengths, then drill holes in the case and mount off those, as if they were three pillars.

Or mount them off the big side (not shown in the pic), but it's going to be tricky IMHO.

What's your plan regarding mounting?


Sheet metal frames to lock in columns of three K16s, as well as to channel airflow - essentially creating a "blade" design. The frames will likely be screwed into the top/bottom plates. I'll post pictures once I've finished the CAD work.
sr. member
Activity: 322
Merit: 250
Supersonic
June 15, 2013, 05:27:57 PM
#90
I can't remember how many W each K16 uses. How many K16s would 100W limit you to?

32W per K16 estimated

3x32W=96W
4x32W=128W

not too much eh..

https://www.stackpop.com/configure/colocation/singapore/singapore/singapore/19885
4 kVA for 42U, $2500/month
125 x K16 = 4 kVA
125 x K16 over 42U = ~3 K16 per 1U (most likely some slots will be unusable, used for switches, spacing, etc.)

https://www.stackpop.com/configure/colocation/united_states/virginia/ashburn/17237
24 kVA, 59U for $1000/month (or $2000; unsure whether power is included or extra)
750 x K16 = 24 kVA
750 x K16 over 59U = 12.7 K16 per 1U, or ~51 per 4U (most likely some slots will be unusable, used for switches, spacing, etc.)

But these can only be used once the stability of these devices is proven, so they don't need much hands-on babysitting.

Cramming higher density into 1U is only viable if you are hosting the rack on your own premises and can arrange adequate cooling. But then why use 1U rather than bigger cases, where you can get better airflow from larger, more efficient fans?



Come and host them in ROMANIA, EU.

You will pay: 125 x K16 = 4 kVA = $503/month

I presume you calculated the electricity cost only, not factoring in cooling, rack space, or other facility charges.
KS
sr. member
Activity: 448
Merit: 250
June 15, 2013, 05:26:17 PM
#89
what's the DC in Romania?
member
Activity: 92
Merit: 10
June 15, 2013, 05:20:02 PM
#88
I can't remember how many W each K16 uses. How many K16s would 100W limit you to?

32W per K16 estimated

3x32W=96W
4x32W=128W

not too much eh..
https://www.stackpop.com/configure/colocation/singapore/singapore/singapore/19885
4 kVA for 42U, $2500/month
125 x K16 = 4 kVA
125 x K16 over 42U = ~3 K16 per 1U (most likely some slots will be unusable, used for switches, spacing, etc.)

https://www.stackpop.com/configure/colocation/united_states/virginia/ashburn/17237
24 kVA, 59U for $1000/month (or $2000; unsure whether power is included or extra)
750 x K16 = 24 kVA
750 x K16 over 59U = 12.7 K16 per 1U, or ~51 per 4U (most likely some slots will be unusable, used for switches, spacing, etc.)

But these can only be used once the stability of these devices is proven, so they don't need much hands-on babysitting.

Cramming higher density into 1U is only viable if you are hosting the rack on your own premises and can arrange adequate cooling. But then why use 1U rather than bigger cases, where you can get better airflow from larger, more efficient fans?



Come and host them in ROMANIA, EU.

You will pay: 125 x K16 = 4 kVA = $503/month
sr. member
Activity: 322
Merit: 250
Supersonic
June 15, 2013, 05:10:46 PM
#87

~800 W
That appears to be a 3U half-rack unit. Or is it 4U?

I think it needs some space for:
1) Controller - Raspberry Pi
2) USB hub(s)
3) PSU

So, stick two of them in a single 3U (full-length) case, remove about 2 to 4 K16s, and stick in the PSU, hubs, and RPi. Add external power/network ports. Then find a datacenter that'll take 1.5 kW in 3 RU... then profit.

It's a 3U case. It'll have the PSU and controller (an Ubuntu box) separately, as I won't be using hosted rack space. Otherwise, yeah, you could sacrifice some K16 space for the things you mentioned to create a self-contained unit.

I think one challenge in your deployment will be mounting. You would probably need a lot of hex spacers of exact lengths, then drill holes in the case and mount off those, as if they were three pillars.

Or mount them off the big side (not shown in the pic), but it's going to be tricky IMHO.

What's your plan regarding mounting?
member
Activity: 92
Merit: 10
June 15, 2013, 04:58:11 PM
#86
Yes, indeed the design is very OK... but my point was something a little bit different  Smiley
sr. member
Activity: 434
Merit: 250
June 15, 2013, 04:54:39 PM
#85
I think we should post designs of cases, but built with materials that many of us already have, or that are easy to find and buy... not fancy CNC-drilled cases made in fancy 3D software.

I think you can think whatever you want - regardless of whether you know what you're talking about or not.

The case I posted is readily available for internet order and would only require knocking a few holes in the metal plate for fan cutouts - which could be done with a jigsaw in a pinch.

I thought it might be useful for others considering similar designs; sorry you don't find it useful.
sr. member
Activity: 322
Merit: 250
Supersonic
June 15, 2013, 04:36:29 PM
#84
I think we should post designs of cases, but built with materials that many of us already have, or that are easy to find and buy... not fancy CNC-drilled cases made in fancy 3D software.

I think we should post designs that we are building for ourselves, in the hope that they may help someone else, irrespective of what was used to design them or what materials they use.
member
Activity: 92
Merit: 10
June 15, 2013, 04:34:10 PM
#83
I think we should post designs of cases, but built with materials that many of us already have, or that are easy to find and buy... not fancy CNC-drilled cases made in fancy 3D software.
sr. member
Activity: 434
Merit: 250
June 15, 2013, 11:35:05 AM
#82

~800 W
That appears to be a 3U half-rack unit. Or is it 4U?

I think it needs some space for:
1) Controller - Raspberry Pi
2) USB hub(s)
3) PSU

So, stick two of them in a single 3U (full-length) case, remove about 2 to 4 K16s, and stick in the PSU, hubs, and RPi. Add external power/network ports. Then find a datacenter that'll take 1.5 kW in 3 RU... then profit.

It's a 3U case. It'll have the PSU and controller (an Ubuntu box) separately, as I won't be using hosted rack space. Otherwise, yeah, you could sacrifice some K16 space for the things you mentioned to create a self-contained unit.
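
As a quick sanity check on that 1.5 kW figure, here's a tiny sketch using the ~800 W per half-rack case and ~32 W per K16 estimates quoted above - both rough numbers, not measurements:

Code:
WATTS_PER_K16 = 32                           # estimated draw per board
boards_per_case = 800 // WATTS_PER_K16       # ~25 K16 per half-rack 3U case
boards = 2 * boards_per_case - 4             # two cases in one 3U, minus ~4 slots for PSU/hub/RPi
print(boards, "K16 ->", boards * WATTS_PER_K16, "W")   # 46 K16 -> 1472 W, i.e. ~1.5 kW in 3U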
sr. member
Activity: 322
Merit: 250
Supersonic
June 15, 2013, 11:28:39 AM
#81

~800 W
That appears to be a 3U half-rack unit. Or is it 4U?

I think it needs some space for:
1) Controller - Raspberry Pi
2) USB hub(s)
3) PSU

So, stick two of them in a single 3U (full-length) case, remove about 2 to 4 K16s, and stick in the PSU, hubs, and RPi. Add external power/network ports. Then find a datacenter that'll take 1.5 kW in 3 RU... then profit.
sr. member
Activity: 434
Merit: 250
June 15, 2013, 10:50:27 AM
#80
sr. member
Activity: 367
Merit: 250
June 15, 2013, 05:57:13 AM
#79
I've been thinking some more about the K64 cube heat sink. The one I originally used is very complicated. Here's a model using a 90x100 mm heat sink, similar to off-the-shelf sinks available today:



The heat sink I made up only has fins over the actual Avalon ICs. Such a heat sink could be cheaper, but the airflow between fins would be decreased by the large opening in the middle. Maybe it's better to use a heat sink with fins everywhere?

Here's a model using a "perfect" heat sink I made up:



The idea is to increase the heat exchange in the far end of the tunnel, which would be pointing upwards when in use. Unfortunately, again, a heat sink like that would be complicated and expensive to make. It looks darn slick, though Smiley
sr. member
Activity: 246
Merit: 250
My spoon is too big!
June 14, 2013, 01:14:48 AM
#78
Oh for sure, they do these kinds of things and they can be done, but they are in situations where they're basically using commodity hardware (for them) and have environments where the failure of a machine, rack, row, or in extreme cases even an entire data center doesn't translate to lost service for customers. They can afford to be pioneers and push the envelope. Most of us simply don't have that kind of capability in our houses. They save money on power / cooling equipment and just accept a potentially lower MTBF (i.e., more frequent failures) of server equipment. I worked at a "big company's" data center in Seattle for a while where they had a tent outside that was basically protecting servers from the rain. Intel did similar things. These types of endeavors are what have helped open up the TC 9.9 operating window. It used to be a max of 68F back in 2008 (iirc), but it has since been raised to 80.6F for the cold aisle (inlet), and for some classes of machines even as high as 104F.
sr. member
Activity: 322
Merit: 250
Supersonic
June 14, 2013, 12:58:02 AM
#77
While this seems good in principle, there are some things to keep in mind when using outside air for cooling.

1) Just exhausting air to the outside and using the room as the supply means that the negative pressure you're creating draws the air from somewhere else in the house / building. Exhausting the air out your window means the air to make up for what you're exhausting is going to be coming into the house from somewhere else. You may not feel it but it's happening.
That is what I was counting on... the house/building temperature is not that warm, close to ambient.
2) Using an inlet and an outlet will remove this but then you have to worry about air quality, humidity, and even potentially rain. If you have a long enough duct or have one of those dryer vent covers, you'll avoid rain but humidity (specifically condensation) is a more difficult challenge. The last thing you want to have happen is for the temperature to fall below dewpoint and you start getting condensation. Probably not going to be a huge issue since the temperature change across the device is going to be positive therefore producing exhaust air that is capable of holding more moisture than the intake air; but if it's enclosed and you're taking in cold air in the winter, it could potentially cause condensation on the exterior that could mess some things up. Additionally there's debris to worry about like dust, pollen, or whatever else. This can accumulate in the heat sinks or other airflow areas and either block airflow, foul heat transfer surfaces (meaning the internal temperature rises), or even accumulate and pose a fire risk.
It doesn't get cold here, and never anywhere close to the dew point, but the problem is humidity and/or pollution/dust. I feel using internal building air should be fine, since it's not coming directly from outside...
3) In addition to humidity, electronics may experience damage or small cracks due to excessive rate of change of temperature (called heatup and cooldown rates). ASHRAE TC 9.9 (This is the 2011 version which was recently superseded by the 2012 version, though it's not publicly available from what I could find) is the industry standard for this type of thing. This is all to prolong life and prevent damage or danger to the environment. The point here is that if you have this running in the winter (presumably powered by an external fan) and the hash rate goes down (pool goes down, lose connection to internet, or any number of other reasons), if you continue to pump cold air through there, that could stress sensitive components and ultimately lead to premature failure.
Interesting. I'll go through the doc after a while. I hadn't thought about heat/cool cycles causing problems.
There are some other considerations but these are the top things to keep in mind. It's not as simple as just rejecting heat to the outside. It can be done but if it was simply that easy, every data center would be doing it.

In fact I thought about this because I remembered hearing about someone (either Amazon or Facebook, I'm not sure) dumping the air from the hot aisle outside and pulling in fresh air for cooling instead. They obviously have some expensive equipment in place to treat the air before unleashing it onto the servers...
sr. member
Activity: 246
Merit: 250
My spoon is too big!
June 14, 2013, 12:38:02 AM
#76
While this seems good in principle, there are some things to keep in mind when using outside air for cooling.

1) Just exhausting air to the outside and using the room as the supply means that the negative pressure you're creating draws the air from somewhere else in the house / building. Exhausting the air out your window means the air to make up for what you're exhausting is going to be coming into the house from somewhere else. You may not feel it but it's happening.
2) Using an inlet and an outlet will remove this but then you have to worry about air quality, humidity, and even potentially rain. If you have a long enough duct or have one of those dryer vent covers, you'll avoid rain but humidity (specifically condensation) is a more difficult challenge. The last thing you want to have happen is for the temperature to fall below dewpoint and you start getting condensation. Probably not going to be a huge issue since the temperature change across the device is going to be positive therefore producing exhaust air that is capable of holding more moisture than the intake air; but if it's enclosed and you're taking in cold air in the winter, it could potentially cause condensation on the exterior that could mess some things up. Additionally there's debris to worry about like dust, pollen, or whatever else. This can accumulate in the heat sinks or other airflow areas and either block airflow, foul heat transfer surfaces (meaning the internal temperature rises), or even accumulate and pose a fire risk.
3) In addition to humidity, electronics may experience damage or small cracks due to excessive rate of change of temperature (called heatup and cooldown rates). ASHRAE TC 9.9 (This is the 2011 version which was recently superseded by the 2012 version, though it's not publicly available from what I could find) is the industry standard for this type of thing. This is all to prolong life and prevent damage or danger to the environment. The point here is that if you have this running in the winter (presumably powered by an external fan) and the hash rate goes down (pool goes down, lose connection to internet, or any number of other reasons), if you continue to pump cold air through there, that could stress sensitive components and ultimately lead to premature failure.

There are some other considerations but these are the top things to keep in mind. It's not as simple as just rejecting heat to the outside. It can be done but if it was simply that easy, every data center would be doing it.
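
To put a rough number on the condensation risk mentioned in point 2, here's a minimal sketch using the standard Magnus approximation for dew point (the constants are the usual approximate values, not something out of TC 9.9). If the intake air, or any surface it chills, drops below the dew point of the surrounding room air, you can expect condensation:

Code:
import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Dew point via the Magnus approximation (valid roughly -45..60 C)."""
    a, b = 17.62, 243.12
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

# Example: room air at 25 C and 60% RH condenses on anything colder than ~16.7 C,
# e.g. a case surface chilled by winter intake air.
print(round(dew_point_c(25, 60), 1))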
sr. member
Activity: 322
Merit: 250
Supersonic
June 14, 2013, 12:21:54 AM
#75