Topic: [Klondike] Case design thread for K16 - page 7

hero member
Activity: 924
Merit: 1000
June 04, 2013, 09:43:05 PM
#34
http://www.supermicro.com.tw/products/system/1u/6015/sys-6015x-8.cfm  <---- casing for the 1U design. http://www.mkl.co.id/ <--- you can get them in Indonesia here.

So can a Raspberry Pi, Arduino or BeagleBone run this server or the K64 Cube? Can you fit one somewhere in your modular design?
hero member
Activity: 924
Merit: 1000
June 03, 2013, 10:56:07 PM
#33
Revised 1U Rack Mount




Love the K64 CUBE... stacking them and placing them in a room, you could save on heating costs, or use them in greenhouses. I am imagining them in cylinders.
Those heat sinks might be very expensive, but in a cold climate you could certainly use that extra heat. Might want to see if there are off-the-shelf heat sinks that can do the same thing, or whether they can be extruded. Change the angle of the heat sinks and you get penta-, hexa-, hepta- and octagonal shapes.

You can cut existing heat sinks on a bevel... and I bet you could even have that design made as an extrusion without having to cut it. It definitely would not be as pretty, but it could be done; it would look a bit more utilitarian but would work very well.




full member
Activity: 205
Merit: 100
June 03, 2013, 06:42:42 PM
#32
Nice! That's genius :)
sr. member
Activity: 367
Merit: 250
June 03, 2013, 05:54:01 PM
#31
Here's an idea I had. It's not really feasible, since the heat sinks aren't readily available and would be crazy to custom make. It was fun to put it together in SketchUp though.

What if you could make a heatsink like this?


They would fit together with the Klondike 16 board on the back, like this:

Notice how the pin placement is more dense towards one end of the heat sink.

When you put four together, they create a tunnel:


Less dense end. This is where you blow air in:


Dense end. This is where you want the air to come out:


Now, the air warms up as it travels down the tunnel, so the downstream chips see hotter air. Since there are more pins towards that end, more heat can be shed there, and the board temperature ends up roughly the same at both ends.
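To see why that works, here is a rough back-of-envelope model. The numbers are my own placeholders, not measurements: roughly 2 W per chip (so a K64 cube around 128 W), a guessed 50 CFM actually forced through the tunnel, and made-up fin-to-air resistances:

Code:
# Back-of-envelope sketch: chip temperature along the fin tunnel.
# Assumed numbers, not measurements: ~2 W/chip (K64 cube ~128 W),
# ~50 CFM through the tunnel, made-up per-segment fin resistances.

KG_PER_S_PER_CFM = 0.00057   # air mass flow per CFM at room temperature
CP_AIR = 1005.0              # specific heat of air, J/(kg*K)

def chip_temps(total_w=128.0, cfm=50.0, t_in=25.0, r_fin=(1.0,) * 8):
    """Chip temperature in each segment along the flow direction.

    r_fin: per-segment fin-to-air thermal resistance (K/W). Equal values
    model uniform fins; decreasing values model denser fins downstream.
    """
    m_dot = cfm * KG_PER_S_PER_CFM         # air mass flow, kg/s
    q_seg = total_w / len(r_fin)           # heat dumped into each segment, W
    t_air, temps = t_in, []
    for r in r_fin:
        t_air += q_seg / (m_dot * CP_AIR)  # air warms as it passes each segment
        temps.append(t_air + q_seg * r)    # chip sits q*r above the local air temp
    return temps

uniform = chip_temps()                                  # same fins everywhere
graded = chip_temps(r_fin=(1.0, 0.97, 0.93, 0.90,       # denser (lower R) fins
                           0.87, 0.83, 0.80, 0.76))     # towards the outlet
print([round(t, 1) for t in uniform])   # hottest chips at the outlet end
print([round(t, 1) for t in graded])    # roughly flat along the whole tunnel

With uniform fins the outlet-end chips run a few degrees hotter; grading the fin density flattens the profile, which is exactly the intent of the denser end.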

So, make a case of plexiglass boards and some screws/rods:


Put some fans and feet on, and you are good to go:


Those are two 140mm fans, which will ensure good airflow both inside the heat sink tunnel and over the top of the boards. Air is sucked in at the bottom, and pushed out on top, working with the natural flow of hot air.

Voilà! The K64 cube. ;)

So, the heatsink I made up for this design is, as I said, not feasible. However, it would be possible to use the same tunnel design with a normal heatsink - but the chips towards the top of the cube would get a little warmer than the chips on the bottom.

The cubes could be arranged in a square at least 2x2 for a 256W space heater.
KS
sr. member
Activity: 448
Merit: 250
June 03, 2013, 03:45:16 PM
#30
Need something to fit in the case?  http://cubieboard.org/

BeagleBoard? coinninja looks to have a nice Anubis + cgminer combined mobile-phone distro.
hero member
Activity: 924
Merit: 1000
June 03, 2013, 01:50:34 PM
#29
I am working on the idea of a 1U for 256+ chips; the 512 would need to be at least 2U to fit 50mm fans. Search online for the specs, as there are a number of different configurations.

As for heat, there are two banks of fans, push and pull, so there won't be too much of an issue with that, given that some of the server designs I have seen use this same layout with even more restricted airflow and generate as much or more heat with the PSUs they use; as I stated above, the GPU servers are much hotter. Remember too that the design is spread out over a much wider surface area, with fans blowing over the entire area, so heat should not be such an issue given the CFM is probably more than sufficient. Anyhow, having looked at various cases, the max in a 1U would be a K256 at only 512W, so no heat issue there.
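For a rough sanity check on the CFM point, the usual air-cooling rule of thumb is CFM ≈ 1.76 x watts / ΔT(°C), where ΔT is how far the exhaust air runs above the intake. The fan numbers below are placeholders, not the actual chassis fans:

Code:
# Rule-of-thumb check: deltaT(°C) ≈ 1.76 * Watts / CFM.
# Fan count and per-fan CFM below are assumptions, not the real chassis.

def exhaust_rise_c(watts, cfm):
    """Approximate temperature rise of the exhaust air over the intake."""
    return 1.76 * watts / cfm

airflow = 8 * 10.0                              # e.g. eight 40 mm 1U fans at ~10 CFM each (assumed)
print(round(exhaust_rise_c(512, airflow), 1))   # K256 at 512 W: ~11 °C rise
print(round(exhaust_rise_c(1024, airflow), 1))  # K512 at ~1 kW: ~23 °C, much tighter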

I will post some new mock ups for that tomorrow.

Need something to fit in the case?  http://cubieboard.org/
sr. member
Activity: 367
Merit: 250
June 03, 2013, 01:39:52 PM
#28
That 512 chip monster would be pulling more than 1kW. That's a lot of heat. Yes, the rear will be very hot.
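That 1 kW+ figure lines up with roughly 2 W per chip, which is what the K256-at-512 W numbers elsewhere in the thread imply (an assumption, not a spec):

Code:
# Quick check, assuming ~2 W per chip as implied by K256 ≈ 512 W elsewhere
# in the thread (an assumption, not a measured figure).
watts_per_chip = 512 / 256      # ≈ 2 W per chip
print(512 * watts_per_chip)     # 1024 W for a 512-chip build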

I'm working on my own design for four K16 boards in SketchUp now, will post it later tonight.
full member
Activity: 159
Merit: 100
June 03, 2013, 09:32:58 AM
#27


Trying to fit 512 into a rack mounted server... wish me luck.

I think you're going to have too much heat near the rear of that... Maybe a second row of fans in the middle to help move the air?
I think the higher velocity fans are about twice as thick as those too...
newbie
Activity: 16
Merit: 0
June 03, 2013, 08:58:17 AM
#26
Re: the rackmount chassis -- excellent design, great work. How long is that design? It'd be nice if there were some extra room for some sort of controller -- maybe we could cram a Raspberry Pi in there?
hero member
Activity: 924
Merit: 1000
June 03, 2013, 03:41:50 AM
#25
Ya, I didn't add that to the model... I was using an older BKKCoins design... need to add it for the next one. Also, parts are not to scale, just eyeballed, except the QFN48 size. I just wanted something I could mock up fast, not too detailed. But yes, the power connector has to be added so that the cabling can be assessed. Another problem with the design is that the boards are not oriented to the KLego layout that BKKCoins specified. I am just trying to fit the boards into the space and orient the heat sinks along the axis of the airflow.
hero member
Activity: 728
Merit: 500
June 03, 2013, 03:22:10 AM
#24
Trying to fit 512 into a rack mounted server... wish me luck.

Looks good.
Appears to be 2 or 3U.
Possible to fit it all in 1U?

I think the PSU is not gonna take the full-length space; if that's so, then there's ample room for some embedded computer to control these suckers...

2U it has to be, I think, for 512 and possibly 768... 1U for 256. It is hard to find a PSU that delivers 1200+ W, fits in 1U, and can power boards arranged in 40x40 cm (even in a single-layer design) as 2 layers of 4 x K64. Depending on the heat sink size it might be possible to jam 512 into a 1U, and I am trying to get a venturi effect to push the air hard through the tunnel I am creating; I think it might be possible. The leftover space could house the controller... depending on what is used, in front of the PSU in the corner next to the front fan bank.

Not much headroom on top there... not sure that you'd want to put anything on top; wires are OK, but nothing else, as it is going to be pretty warm there. I was thinking that the top and bottom of the case will need venting slats.

Front View:


Hi,
Just a small remark:
I do not see the power connector in your 3D model.
sr. member
Activity: 266
Merit: 250
The Assman: CEO of Vandelay Import/Export, Inc.
June 03, 2013, 03:08:14 AM
#23
You guys have some great case ideas! I have several hundred chips on order which are destined to become K16s, and I can't wait to pick a beautiful home for them :)
hero member
Activity: 924
Merit: 1000
June 02, 2013, 11:22:07 PM
#22
Which 3D modeling program did you use?

SketchUp 8. I will post up some models in a few days when I get time.

http://www.sketchup.com/
http://sketchup.google.com/3dwarehouse/
hero member
Activity: 924
Merit: 1000
June 02, 2013, 11:04:45 PM
#21
I really don't think this is the "venturi" layout you're looking for. There is no significant reduction in cross section, just a hell of a lot of turbulent flow. I really think that even with a middle row of fans you'd struggle to remove that crazy amount of heat from there.

It is smooth, not very turbulent, through the fins; it is basically less turbulent than the Avalon design given the close distance to the fins, and as the fans are face-mounted this will provide a much more stable and direct airflow across the fins even without baffles. Unless you have a simulator that shows airflow, I suspect you are speculating just as much as I am?

Testing this will not be an issue, since the modular design of the boards means breaking down and re-configuring is simple. So I will test it with and without baffles to see what happens and get temperature readings. Having a larger opening (volume) at the front and a smaller space (volume) at the back, along with 2 banks of fans, I don't think there will be any issues, given this is based right off other server designs with a lot more obstruction of the airflow. I don't see this being an issue at all, but again I will test it out and provide feedback. If it works I'll throw up the SketchUp plans so people can modify them for KiCad or whatnot; I'd be keen to see others work on this as well.
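For what it's worth, a simple continuity check shows what narrowing the duct does to air speed, assuming the fans keep roughly the same volumetric flow going. All dimensions and the CFM below are placeholders, not the real chassis:

Code:
# Continuity check: for a near-constant volumetric flow, v2 = v1 * A1 / A2,
# so air speeds up where the cross-section shrinks. CFM and duct sizes are
# placeholder assumptions, not measurements.

M3S_PER_CFM = 0.000472   # 1 CFM in cubic metres per second

def duct_velocity(cfm, area_m2):
    """Mean air velocity (m/s) through a duct of the given cross-section."""
    return cfm * M3S_PER_CFM / area_m2

flow = 80.0                                  # assumed flow actually pushed through
front = duct_velocity(flow, 0.43 * 0.040)    # wide intake opening (assumed 43 cm x 40 mm)
rear = duct_velocity(flow, 0.43 * 0.025)     # narrower exit once the fins fill the space
print(round(front, 2), round(rear, 2))       # the rear air moves ~1.6x faster

Of course this only says the air moves faster through the constriction if the flow is maintained; whether the fans actually sustain that flow against the extra back-pressure is exactly what the baffle tests should show.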
hero member
Activity: 924
Merit: 1000
June 02, 2013, 09:25:29 PM
#20
Thinking... if the goal is co-location, I don't think datacenters allow 1 kW for 1 or 2U of rack space... it's just too dense...
As for flipping the boards, they are fin to fin so that the heat is pushed out through the fins, getting the best airflow from the fans. Also, the KLego is easier to put together in a fin-to-fin configuration. Access, however, is not easy, and maintenance and inspection will require a lot of time. But given this is a stanchion solution, it is easy enough to test both configurations by changing the stanchion length. I can test both ways and see which is better. I have a feeling fin to fin is optimal if the airflow is constant and in that venturi-type configuration.

Not flipping the boards. Take the whole box *as is* and turn it upside down. You don't change the current layout, just put it on its head. :)

The heat would be trapped then... again, having the larger space above will allow heat to dissipate faster... heat rises. Flipping the design as it is would trap more heat, I think. Again, I can test all this out, run various configurations and see. I think the best solution is to get airflow into both the upper and lower cavity and push the air out, and I think I might have to, given the ambient temperatures here in Indonesia. I am not interested in air-conditioning the floor just for a single unit, so I will have to be very concerned with airflow. The 3 X6500s I have at home seem to do well, although they run between 40C and 52C all the time. I suspect that with the right heat sink and fan configuration the K256 in a 1U should be more than adequate without AC. If I have multiple units, however, then I think colocation will have to be the option, or I will build my own data center on the 3rd floor.

The Avalon boxes have a lot of air blowing at the chips and board as well as the fins in their vertical configuration. The poor goop job results in less efficient heat transfer, as indicated in the thread of the Avalon user who pulled the heat sinks from the PCB. But it seems Avalon is really trying to tunnel the air into a very, very tight space; I just want to see if I can compact this down into something that is a manageable size. I am really keen on what might be possible with a GEN2 Avalon chip, so blade servers or SATA hot-swappable cards could also work with a pre-existing server chassis. We will see what BKKCoins finds when he gets test boards and heat sinks, but the Klondike is easily configured, so testing this will be quite easy. Personally, seeing things like the Tesla 8-GPU server in a 3U/4U configuration, and some of the 2-GPU Teslas crammed into a 1U, gives me confidence that if you get the airflow and heat sinks right it is possible to reach this sort of density.

15% Air Flow Top (Left)
-----------
70% Air Flow Middle Fin-2-Fin Tunnel
-----------
15% Air Flow Bottom (Right)

That way you only need the fan banks front and back. 16 fans for 16 boards seems right, as that is what people will do in a stack configuration anyhow. It is such an amazing design that BKKCoins has come up with. Even if you are not going to do a lot of DIY board building, just configuring cases etc. is going to be great fun, and I bet GEN2 Klondikes will be even more versatile.

Isolated view of the low profile fin-2-fin tunnel concept for a K32 section:

hero member
Activity: 924
Merit: 1000
June 02, 2013, 09:13:48 PM
#19
Thinking... if the goal is co-location, I don't think datacenters allow 1 kW for 1 or 2U of rack space... it's just too dense...

Checking that out. There are plenty of GPU servers that throw that sort of wattage and more, so I'm not sure that is a hard and fast rule; it depends on the nature of the data center and what services they provide. [...]

The thing is, typically a datacenter will bill you per rack unit, and a particular amount of power (and hence cooling expense) comes with it. Generally people that need more power buy more rack units than they need. I've never investigated this point, but that's my understanding.

Hmm... after writing the above I googled a little... it seems power is not an issue.

https://www.stackpop.com/search/colocation -- hit search without entering anything, choose cabinet space 1U... there is a provider offering 15A even...


Power IS a problem. Standard racks are 20A redundant (230V). I have 1U servers that need half a rack just for themselves; it's getting ridiculous. You can ask for more power, but the prices are through the roof.

I'm seriously thinking of getting a DC in Texas or Washington (state).

I am in Indonesia and I know this is a huge issue; then again, our problems here are air conditioning (ambient is 28C to 33C every day), power outages and security. So for the extra electricity cost it might well be worth a colocation setup. I am checking that out now and will definitely get back to you guys about it. The other issue for me is that I don't want to have to deal with having it in my school over the longer term; it's more of a space issue. I can put a few of these K256s together, but I really don't want to house them all here... I suspect that if you have any number of these, the cost of water blocks and maintenance might make it worth a look to colocate if the price is right. Always about the price.

Again, you can do a K256 @ 512 W and get what, 72.192 GH/s, in a 1U... so that is certainly not going to be an issue. As a solution, 1U makes sense if you can get a PSU and an empty 1U barebones chassis at a good price. There must be plenty of chassis lying around somewhere at a great price. Group buy? I am still going to try for a denser K512 in a 1.5U / 2U and test it out at school.
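For reference, the 72.192 GH/s figure works out to 282 MH/s per chip, which I am taking as the working per-chip rate here:

Code:
# Where 72.192 GH/s comes from: 256 chips at a nominal 282 MH/s each
# (the per-chip rate implied by the figure; treat it as an assumption).
chips = 256
mhs_per_chip = 282
total_ghs = chips * mhs_per_chip / 1000.0
print(total_ghs)                   # 72.192 GH/s
print(round(total_ghs / 512, 3))   # ~0.141 GH/s per watt at 512 W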
legendary
Activity: 1666
Merit: 1185
dogiecoin.com
June 02, 2013, 03:59:34 PM
#18
Thinking... if the goal is co-location, I don't think datacenters allow 1 kW for 1 or 2U of rack space... it's just too dense...

Checking that out. There are plenty of GPU servers that throw that sort of wattage and more, so I'm not sure that is a hard and fast rule; it depends on the nature of the data center and what services they provide. http://www.nvidia.com/object/tesla-servers.html If I need to go to a 1U K256, so be it; that is still doable. I just do not want to have to build a server room on my 3rd floor, so this is really an experiment to see if I can get my chips into a rack-mount configuration. If it works I will definitely be doing more of them in the future.

As for flipping the boards, they are fin to fin so that the heat is pushed out through the fins, getting the best airflow from the fans. Also, the KLego is easier to put together in a fin-to-fin configuration. Access, however, is not easy, and maintenance and inspection will require a lot of time. But given this is a stanchion solution, it is easy enough to test both configurations by changing the stanchion length. I can test both ways and see which is better. I have a feeling fin to fin is optimal if the airflow is constant and in that venturi-type configuration.

I really don't think this is the "venturi" layout you're looking for. There is no significant reduction in cross section, just a hell of a lot of turbulent flow. I really think that even with a middle row of fans you'd struggle to remove that crazy amount of heat from there.
KS
sr. member
Activity: 448
Merit: 250
June 02, 2013, 03:16:41 PM
#17
Thinking... if the goal is co-location, I don't think datacenters allow 1 kW for 1 or 2U of rack space... it's just too dense...
As for flipping the boards, they are fin to fin so that the heat is pushed out through the fins, getting the best airflow from the fans. Also, the KLego is easier to put together in a fin-to-fin configuration. Access, however, is not easy, and maintenance and inspection will require a lot of time. But given this is a stanchion solution, it is easy enough to test both configurations by changing the stanchion length. I can test both ways and see which is better. I have a feeling fin to fin is optimal if the airflow is constant and in that venturi-type configuration.

Not flipping the boards. Take the whole box *as is* and turn it upside down. You don't change the current layout, just put it on its head. :)
KS
sr. member
Activity: 448
Merit: 250
June 02, 2013, 03:14:56 PM
#16
Thinking... if the goal is co-location, I don't think datacenters allow 1 kW for 1 or 2U of rack space... it's just too dense...

Checking that out. There are plenty of GPU servers that throw that sort of wattage and more, so I'm not sure that is a hard and fast rule; it depends on the nature of the data center and what services they provide. [...]

The thing is, typically a datacenter will bill you per rack unit, and a particular amount of power (and hence cooling expense) comes with it. Generally people that need more power buy more rack units than they need. I've never investigated this point, but that's my understanding.

Hmm... after writing the above I googled a little... it seems power is not an issue.

https://www.stackpop.com/search/colocation -- hit search without entering anything, choose cabinet space 1U... there is a provider offering 15A even...


Power IS a problem. Standard racks are 20A redundant (230V). I have 1U servers that need half a rack just for themselves; it's getting ridiculous. You can ask for more power, but the prices are through the roof.

I'm seriously thinking of getting a DC in Texas or Washington (state).
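To put rough numbers on it: a 20A feed at 230V is about 4.6 kW for the whole cabinet, and spread over a typical 42U cabinet (my assumption) that is only around 110 W per U, so a single 512 W 1U miner eats the power share of roughly five rack units:

Code:
# Rough numbers for the rack power budget. The 20 A / 230 V figure is from
# the post above; the 42U cabinet height is an assumption.
rack_watts = 20 * 230                        # ~4600 W usable for the whole rack
per_u = rack_watts / 42                      # ~110 W per rack unit
print(round(per_u), round(512 / per_u, 1))   # ~110 W/U; a 512 W 1U box ~ 4.7 U of budget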
sr. member
Activity: 322
Merit: 250
Supersonic
June 02, 2013, 03:02:51 PM
#15
Thinking... if the goal is co-location, I don't think datacenters allow 1 kW for 1 or 2U of rack space... it's just too dense...

Checking that out. There are plenty of GPU servers that throw that sort of wattage and more, so I'm not sure that is a hard and fast rule; it depends on the nature of the data center and what services they provide. [...]

The thing is, typically a datacenter will bill you per rack unit, and a particular amount of power (and hence cooling expense) comes with it. Generally people that need more power buy more rack units than they need. I've never investigated this point, but that's my understanding.

Hmm... after writing the above I googled a little... it seems power is not an issue.

https://www.stackpop.com/search/colocation -- hit search without entering anything, choose cabinet space 1U... there is a provider offering 15A even...