
Topic: Request for Discussion: proposal for standard modular rack miner - page 9.

legendary
Activity: 872
Merit: 1010
Coins, Games & Miners
So, if I'm getting it right, the case would host things like these?

(Removed front and top face to make it clear what's hosted inside)

Front view


Rear view


Legend:

  • Light grey: Case
  • Green: Hashboards
  • Violet: Heatsinks
  • Blue: PSUs
  • Yellow: Controller
  • Black: 80mm Fans

As I see it, it only fits six S1-sized boards, so that's at most about 3.5 kW of heat to dissipate. With 80mm fans you could move enough air for them, but it needs to be worked out properly because the case would be really tight.

Am I right on this one, or did I get it all wrong? I could do a fluid model to see if this thing would work.
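A quick back-of-envelope check of that heat figure (a sketch only; the 15°C air temperature rise and standard air properties are assumptions, not measurements):

Code:
# Rough airflow needed to carry away a given heat load at a given air temperature rise.
p_watts = 3500.0            # assumed heat load from six S1-sized boards
delta_t = 15.0              # assumed allowable air temperature rise, in K
rho, cp = 1.2, 1005.0       # air density (kg/m^3) and specific heat (J/kg/K)

q_m3s = p_watts / (rho * cp * delta_t)   # volumetric flow, m^3/s
q_cfm = q_m3s * 2118.88                  # 1 m^3/s = 2118.88 CFM
print(f"total: {q_cfm:.0f} CFM, per fan (2x4 bank of 80mm): {q_cfm / 8:.0f} CFM")

That works out to roughly 400 CFM total, or about 50 CFM per fan for a 2x4 bank of 80mm fans - plausible for high-speed server fans on paper, though back pressure from tightly packed heatsinks will eat into free-air ratings, which is exactly what a fluid model would show.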
legendary
Activity: 3612
Merit: 2506
Evil beware: We have waffles!
On the PSUs:
Or dedicate a couple of U of rack height and make/sell a PDU case for plugging in either the DPS1200 or, better yet, the IBM 2kW+ PSUs, either independently powered or wired 1+N, to power rack miners above and below each PDU... Please please please Wink

For a 2400W load, 3 of the HPs wired for current share would be marvelous, allowing true hot-swap of a defunct supply (haven't had one yet, though) while the other 2 take up the load. With all 3 in, it gives a nice margin for the supplies, right around the 80% load butter zone.
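Rough per-supply loading for that arrangement, assuming the usual 1200W rating on the HP common-slot units:

Code:
# Per-supply load for N current-sharing supplies feeding a 2400W miner.
rated, load = 1200.0, 2400.0     # assumed supply rating and machine load, in W
for n in (3, 2):                 # all three supplies in, then one failed
    share = load / n
    print(f"{n} supplies: {share:.0f} W each ({share / rated:.0%} of rating)")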
legendary
Activity: 3318
Merit: 1848
Curmudgeonly hardware guy
So. The day's discussions.

Chiguireitor, we spent most of today figuring out how to do things well with S1-spec heatsinks. If we maintain the 4U case size, we can comfortably fit seven heatsinks of S1 dimension and screw pattern, with room alongside for power supplies. Boards would be too tall to allow for PSUs mounted at the top the way we originally planned.

This would allow us (and others) to build a single standard S1-sized board which could act as an upgrade for S1 miners or fit into the rackable machine. It also has the side benefit of being able to build S1-formfactor standalone miners, since we'd already have boards and heatsinks. All that remains to acquire is fans, controllers and a bit of framework.

For GekkoScience specifically, it would mean basically merging the Spec1 and Spec2 designs into a single board. The intent for Spec1 being a quarter S1 instead of a half S1 was that the board could be run standalone as its own 50-150W machine. The Spec2 was almost from the beginning intended to be rack-buildable. If we divide the market into roughly three sectors using Bitmain products as examples, we have the U3, S1 and S2. Our Spec1 would have fit U3 and S1 sectors and the Spec2 fit S2. However, if we make this change, we'd have the Spec1 as a single 30-chip board meeting S1 and S2 sectors and design a different product for the U3 sector. I think we're okay with that plan.

Designing boards for the rack-standard as based on S1 standard also helps cement and maintain S1 as a standard for its own market sector, which I don't think anyone is going to really argue against. This also helps maintain driver compatibility, as a single driver per board design would work for both S1-refit and rack-miner installations.

Since a plethora of waterblocks already exists for the S1 standard, an S1-derived rack machine could be refit with S1 waterblocks pretty readily.

The problems we're coming up with are geometric in nature. There's plenty of room widthwise to fit seven boards. We could probably do eight, but seven gives better power headroom and more efficient per-board cooling off 2400W of available power. The problem comes in when we want to fit supplies and fans together. The width of a pair of DPS1200 supplies plus three 120mm fans will not work in a 17.5" OD rack case. It could be done if we switched to something like the Emerson 1200 that Spondoolies uses, which is also fairly expensive and still pretty tight.
If we want to keep a DPS1200 (whether that exact supply or something with its dimensions), we've got three choices, as far as I can tell.
1. Put the supplies at the front. This makes the power cord readily available but also gets your PSU cooling air blowing out the front. At full power you could be venting over 200W per machine out into your cold-aisle space.
2. Recess the supplies inside the machine. This makes them inaccessible from the outside for replacement, which shouldn't matter to anyone except Spondoolies fanboys since nobody else has ever built a machine with ready-swappable PSUs. It does, however, make plugging in the supplies more cumbersome - we either make you thread your cord carefully into a socket about 2" inside the machine, or we put an external socket wired to an internal plug into your PSU. One option removes convenience and the other adds a fair amount of cost.
3. Use a matrix of 80mm fans across the back instead of 120mm fans. Putting in a 2x4 array of 80mm fans instead of three 120mm fans gives us an additional ~1.5" of horizontal room to play with, making space for PSUs and allowing some gap/play between fans. Small fans will have to spin faster and so will make more noise. Sourcing more fans will probably also increase cost. (A rough width budget for both layouts is sketched below.)
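A rough width budget illustrating the tradeoff between options 1-3 (all dimensions are assumptions for illustration: ~40mm for a DPS1200 stood on edge, standard 120mm and 80mm fan frames, and about 17.0" of usable interior width):

Code:
# Back-of-envelope width budget for the rear panel of a 17.5" OD rack case, in inches.
usable_width = 17.0            # assumed interior width after case walls
psu_width    = 40 / 25.4       # assumed DPS1200 on edge, ~40mm
fan_120      = 120 / 25.4      # 120mm fan frame
fan_80       = 80 / 25.4       # 80mm fan frame

layout_a = 2 * psu_width + 3 * fan_120   # two PSUs + three 120mm fans
layout_b = 2 * psu_width + 4 * fan_80    # two PSUs + a 2x4 matrix of 80mm fans
print(f"2x PSU + 3x 120mm: {layout_a:.1f} in")   # ~17.3 in - essentially no margin
print(f"2x PSU + 4x 80mm:  {layout_b:.1f} in")   # ~15.7 in - roughly 1.5 in of slack

The ~1.5" difference between the two layouts is where the extra horizontal room in option 3 comes from.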

Currently I'm in favor of using 80mm fans across the back, because it's the least cumbersome option and, though it results in more noise, the purpose of rack gear has never been "silent running".
legendary
Activity: 872
Merit: 1010
Coins, Games & Miners
I know copper is expensive now but aluminum retains heat for too long. I'm sure you could reduce the heatsink size by utilizing a more efficient copper heatsink. Copper may have the same properties but there must be something other than aluminum. Could you have rear pull fans and fans on the heatsink pulling off its heat directly? That would limit how many boards you could place in the case but it would be quieter and possibly allow for denser placement of chips on each board.

Copper has WAY better thermal conductivity than aluminium, almost double, but it also weighs a great deal more, so I doubt anyone is going to make big heatsinks out of it.
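Ballpark handbook numbers behind that statement (rounded values, for illustration):

Code:
# Approximate handbook values: thermal conductivity in W/(m*K), density in kg/m^3.
k_cu, k_al     = 400.0, 235.0
rho_cu, rho_al = 8960.0, 2700.0
print(f"conductivity ratio Cu/Al: {k_cu / k_al:.1f}x")             # ~1.7x, i.e. nearly double
print(f"mass ratio Cu/Al, same volume: {rho_cu / rho_al:.1f}x")    # ~3.3x heavier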
legendary
Activity: 1050
Merit: 1001
Yeah, the modular design would solve so many issues... something similar to the S2 backplane, with interchangeable cards not tied to a particular manufacturer.
hero member
Activity: 924
Merit: 1000
I know copper is expensive now but aluminum retains heat for too long. I'm sure you could reduce the heatsink size by utilizing a more efficient copper heatsink. Copper may have the same properties but there must be something other than aluminum. Could you have rear pull fans and fans on the heatsink pulling off its heat directly? That would limit how many boards you could place in the case but it would be quieter and possibly allow for denser placement of chips on each board.
legendary
Activity: 3612
Merit: 2506
Evil beware: We have waffles!
My thoughts:
1) Make the controller aware of power quality via the signal all PSUs provide.
2) Since most server PSUs provide +5V control power, buffer that with a small supercap and use it to power the control board. Size the supercap to allow the controller to run for, say, 10-15 seconds, and use that time to do a controlled shutdown and possibly a restart if the power comes back quickly (a rough sizing sketch follows this list).
3) Is it possible to just use an internal LAN via a multi-port switch? Seems to me it would be a lot easier for addressing the boards and data transfer vs the common SPI bus from board to controller, and there are chips galore out there made just for internal LAN comms.
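A rough sizing sketch for point 2, assuming the controller draws about 0.5A from the +5V rail and can tolerate the rail drooping to roughly 4V before it must shut down:

Code:
# Supercap holdup sizing, constant-current approximation: C = I * t / dV.
i_load  = 0.5     # assumed controller draw, in A
t_hold  = 15.0    # desired holdup time, in s
v_start = 5.0     # +5V control rail
v_min   = 4.0     # assumed lowest voltage the controller still runs at

c_farads = i_load * t_hold / (v_start - v_min)
print(f"~{c_farads:.0f} F of supercap for {t_hold:.0f} s of holdup")   # ~8 F

In practice a small buck/boost regulator behind the cap would let it discharge further and shrink the required capacitance.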
legendary
Activity: 3318
Merit: 1848
Curmudgeonly hardware guy
It also makes a difference if you want to use evaporative coolers to knock your 37C ambient down a bit.

The real point he's making is that for approximately equatorial locations, air cooling is not easy to accomplish, so some provision for waterblock installation is probably necessary.

If you have a string design with a dozen local ground planes all at different absolute potentials, you do not want a heatsink spanning them without isolation. If that's not the case, sure using powerpegs is probably great. I wouldn't design a standard heatsink to require them, but if you can use 'em affordably without catching stuff on fire I wouldn't rule it out.

If you're taking power in through cables instead of a backplane socket, that's not another argument in favor of using a backplane.
legendary
Activity: 872
Merit: 1010
Coins, Games & Miners

with >60% humidity points here in my country, high density only happens with water blocks).


I don't understand that statement; heat sinks don't care about humidity to cool, they care about temperature differential. They do NOT use evaporative cooling like humans do.

[...]

Humid air increases the static pressure the fans have to work against, considerably lowering the air pushed through the heatsink (meaning CFM goes down by a whole lot).

Waterblocks OTOH don't rely on changing ambient conditions (if you're going closed loop).

There's also the possibility of open-loop cooling with chillers and such, but I'm not a fan of that kind of cooling tech.
hero member
Activity: 767
Merit: 500
If you're talking powerpeg like the Alpha 3085 or the Swiftech 370, those things were EXPEN$$IVE for a reason - high cost to MAKE that style of HS, though they worked well. They don't work better than fins for crossflow cooling though, they were intended for updraft/downdraft specifically.

 If not, you'll probably need to explain what you mean.
Powerpeg: http://tem-products.com/index.php/thermal-connectors/power-peg.html
It's basically a round bit of copper that sits through a 2.5mm hole in the board, which the ground/thermal pad of the QFP can solder onto. Since it's a full chunk of copper rather than via holes plated with 2µm of copper, it passes heat through much better, and you can screw a heatsink directly to it. It's been around for a few years now; I even brought it up in the hardware section here to ask why nobody manufacturing miners with thermal vias is using it.

PCI-E hardware can use a LOT more than 175 watts. Look at ANY of the Radeon "x2" cards, typically in the 400+ watt range, for examples. Just have to use enough power connectors.

 Standard PCI-E does NOT use USB in any way shape or form.

I would assume you are using power wires off the PSU directly.

Mini-PCIe in laptops does; many wifi cards use the USB protocol over the PCIe bus. Hell, I used it for my old EEEPC701 mods to put 32GB of flash memory on it (via a hub and 4x 8GB drives).

I was wrong about the power throughput via the bus alone; it's 75W, not 175W. Around page 35 of this document has the power requirements for the bus:
http://read.pudn.com/downloads166/ebook/758109/PCI_Express_CEM_1.1.pdf
legendary
Activity: 872
Merit: 1010
Coins, Games & Miners

...

Out of curiosity, what constitutes high density? In kW per volume.

I've been doing some custom watercooling stuff for fun, wondering how thin the cooling blocks should be. So far I've only gone down to (in total) 7 mm thick blocks and they have had problems with flow.

As for liquid cooling, it should generally require little (or no) servicing, especially since miners aren't kept running for long.


High density varies a lot depending on the nature of your deployment, but it can go as high as 18 kW/m3 depending on the cooling you have.

However, I tend to go a little lower than that; the densest deployment I have is about 2.7 kW/m3.

Watercooling isn't only for density, though; it's also for ultra-high-humidity deployments, like my country's usual >60% (even near 98% for 4 months a year) relative humidity.

A 7mm block is too thin; I'm designing mine with 2cm blocks at least.
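For context on those figures, a rough rack-level number assuming a full 42U rack of the proposed 4U / 2.4kW machines and typical enclosure dimensions (all assumed):

Code:
# Rack-level power density; dimensions are assumptions for illustration.
rack_w, rack_d, rack_h = 0.6, 1.1, 2.0   # assumed 42U enclosure footprint, in meters
machines = 10                            # ten 4U machines, leaving 2U spare
kw_each  = 2.4                           # proposed per-machine load

density = machines * kw_each / (rack_w * rack_d * rack_h)
print(f"{density:.0f} kW/m^3 at the rack")   # ~18 kW/m^3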
legendary
Activity: 1666
Merit: 1183
dogiecoin.com
Also, the PCIe standard (last I checked) provided for a total device power dissipation of 300W. There are nonstandard devices which exceed this, but they aren't labeled with the PCIe standard logo. The PCIe standard allows for one 8-pin jack (at 150W), one 6-pin jack (at 75W), and 75W through the socket.

I don't know about that; my HD7990s had 3x 8-pin = 525W and they were official enough.
legendary
Activity: 1498
Merit: 1030

with >60% humidity points here in my country, high density only happens with water blocks).


I don't understand that statement; heat sinks don't care about humidity to cool, they care about temperature differential. They do NOT use evaporative cooling like humans do.

 Ignore "heat index", that's only an estimate of how hot a HUMAN feels due to humidity level reducing the ability of a human body to cool itself, NOT the same mechanics as for an item cooled by heatsink-to-air heat transfer.



Quote

Using a PCI-hardware backplane, even if the signal is still USB, really isn't any better than a PCB with a securely-mounted heatsink and a fifty-cent cable. By my consideration, it's substantially worse based on cost and longevity.


Cost: more, definitely, when you include the cost of a backplane - though not as much as you think; passive PCI backplanes do exist, have been used for a long time in some hardware, and aren't exactly rare. No need to reinvent the wheel there.

Mounting really isn't any more complex than mounting those heatsinks to the case, or however you're planning to mount them.

Longevity: IME PCI connections tend to last longer than the hardware they're used by. Heck, I've got "ancient" ISA-based gear that still connects reliably after 20+ YEARS of usage. I do NOT see a longevity advantage for the typical cheap connector used on any USB setup - though I doubt it would average much if any worse; BOTH will probably outlast Bitcoin mining.

 The size disadvantage I can see POSSIBLY being an issue, especially since you're trying to limit board length for a better balance on front-to-back cooling.


Just had a thought - but I can see mounting issues getting "interesting". Make the hash boards horizontal instead of vertical, then mount the power supplies to one side of the case. Would probably need a subframe mounted inside the case, or spacers between the boards, to keep the boards from flexing too much. Would give fewer but larger boards, so it would be a bit less "flexible" about incremental upgrades. It WOULD make the cooling issues on the hash boards easier to manage.

 Second thought - why limit it to 4U? As I recall BitFury and Avalon both made rack-mount 6U miners, which would make space management a LOT easier inside the case.
6x 120mm fans on the front would generally give more airflow per square inch than 3x 140mm too, while using a much more common fan size with a LOT more options available.

 Delta, for example, does not list ANY 140mm 12V fans on their website, but they have a TON of 120mm 12V options (the 140mm fans they DO list are 24V and UP).
legendary
Activity: 3318
Merit: 1848
Curmudgeonly hardware guy
SerialLain, have you done any... experiments... lately?

Our design philosophy is simple, durable, reliable. Given two options to provide the same function, we will always pick the one that does so more simply and more durably. A PCI-type socket backplane is a nifty idea for yanking cards in and out quickly and easily, but:
- the backplane burns about an inch of vertical height, reducing hash density
- the backplane PCB is very large and built in low quantities (compared to PCBs) and is therefore comparatively expensive
- means of securing PCBs within the case is cumbersome and unreliable (especially if PCB is secured with a hanging heatsink, versus a secured heatsink with a hanging PCB)
- increases the number of breakable plastic parts
- requires edge-connector fingers on every PCB, which adds to cost
- if something breaks, zero modularity makes repair or replacement difficult or expensive

I'm heavily in favor of using USB 2.0 protocol, which keeps the board-level hardware interfacing very simple. The designer can leverage any number of protocol converters like CP2102, MCP2210 and so on, or tie into a USB-enabled microcontroller. This also simplifies coding at the controller end, as cgminer is already quite good at talking USB. This also simplifies hardware requirements for the controller, as it's trivial to find a decent minicomputer board with USB jacks.
Using a PCI-hardware backplane, even if the signal is still USB, really isn't any better than a PCB with a securely-mounted heatsink and a fifty-cent cable. By my consideration, it's substantially worse based on cost and longevity.
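For illustration, a minimal host-side sketch (assuming Python with pyserial) of the kind of enumeration a controller could do for CP2102-style bridges; the VID/PID pair is the common Silicon Labs default, and the hand-off to the actual mining driver is left out:

Code:
# List attached USB-serial bridges so a controller can hand them to the mining driver.
# Requires pyserial. The VID/PID below are the usual CP210x defaults (an assumption
# about which bridge the boards would actually use).
from serial.tools import list_ports

CP210X = (0x10C4, 0xEA60)

candidates = [p for p in list_ports.comports() if (p.vid, p.pid) == CP210X]
for p in candidates:
    print(f"hashboard candidate on {p.device}: {p.description}")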


Also, the PCIe standard (last I checked) provided for a total device power dissipation of 300W. There are nonstandard devices which exceed this, but they aren't labeled with the PCIe standard logo. The PCIe standard allows for one 8-pin jack (at 150W), one 6-pin jack (at 75W), and 75W through the socket.
newbie
Activity: 21
Merit: 0
...
A removable back plate would indeed solve most of the watercooling issues with an air-only solution (I understand your air-only preference, but with >60% humidity points here in my country, high density only happens with water blocks).
...

Out of curiosity, what constitutes high density? In kW per volume.

I've been doing some custom watercooling stuff for fun, wondering how thin the cooling blocks should be. So far I've only gone down to (in total) 7 mm thick blocks and they have had problems with flow.

As for liquid cooling, it should generally require little (or no) servicing, especially since miners aren't kept running for long.
legendary
Activity: 1498
Merit: 1030
If you're talking powerpeg like the Alpha 3085 or the Swiftech 370, those things were EXPEN$$IVE for a reason - high cost to MAKE that style of HS, though they worked well. They don't work better than fins for crossflow cooling though, they were intended for updraft/downdraft specifically.

 If not, you'll probably need to explain what you mean.



PCI-E hardware can use a LOT more than 175 watts. Look at ANY of the Radeon "x2" cards, typically in the 400+ watt range, for examples. Just have to use enough power connectors.

 Standard PCI-E does NOT use USB in any way shape or form.
hero member
Activity: 767
Merit: 500
A 10-inch-long contiguous aluminum heatsink?

The thermal expansion of such a slab of aluminum will be literally ripping the chips off of the PCB.

Either the heatsink or the PCB needs to be partitioned into sectors.


"rip the chips off" ? are you glueing on the HS? most heasinks have thermal gel, or thermal pads between the chip and sink, then its bolted down to a flex plate or secondary heatsink, or sometimes just machine screws holding onto the PCB.

I have an old heatsink that held 10 audio amps and is 30cm/12in long. It never warped despite the 80-odd degrees Celsius of thermal input.
The only way those heatsinks warp is incorrect installation. Use the PCB to hold 1 kg of heatsink while dangling it in front of a fan and the board flexes off the sink - I'm thinking of the RK-Box that did this.
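A quick expansion check backs that up (coefficients are handbook values; the 50°C swing is an assumption):

Code:
# Linear thermal expansion: dL = alpha * L * dT.
alpha_al  = 23e-6     # 1/K, aluminum
alpha_fr4 = 16e-6     # 1/K, typical FR-4 in-plane
length_mm = 254.0     # 10-inch heatsink
delta_t   = 50.0      # assumed cold-start-to-full-load temperature swing, in K

growth   = alpha_al * length_mm * delta_t                  # ~0.29 mm total growth
mismatch = (alpha_al - alpha_fr4) * length_mm * delta_t    # ~0.09 mm relative to the PCB
print(f"heatsink growth: {growth:.2f} mm, mismatch vs PCB: {mismatch:.2f} mm")

Less than a tenth of a millimetre of mismatch spread over 10 inches is the sort of thing slotted mounting holes and compliant thermal pads are there to absorb.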

Also, I'm still surprised no one wants to use the powerpeg-style heatsink...

Can you talk a little about why USB was chosen over saaayyy a PCI-E bus (a la Block Erupter Blade backplanes)? My thought would be that a USB driver is easier to work with but PCI-E is pretty cool and very modular...

The PCIe hardware is only designed for 175W, and I have a pet peeve about companies using "standard" hardware with non-standard layouts. There will be some stupid person attempting to plug a video card in and going "LOL mah vidz card makes fire! I sue yooou!"... and it could also lead to the PCI-SIG suing the company for misusing their hardware.
 
Now, if there were data throughput via PCIe lanes, that's a completely different dev path again...
Hell, I'm not sure if it was just mini-PCIe (laptop card slots) that only carried USB lanes, or if full-sized PCIe does as well.
hero member
Activity: 924
Merit: 1000
I like the 4U case; plenty of possibilities with cooling à la S2, and a controller like the S4's would be nice. Massive heatsinks seem to be popular and cheaper, but new cooling needs to be developed. Companies should stick to a standard case and board setup; it saves them money, and for us it means just shipping new boards. Saves us from ditching rigs altogether.
legendary
Activity: 3318
Merit: 1848
Curmudgeonly hardware guy
Yeah, a standard miner shouldn't ignore watercooling requirements since a lot of folks would like that option. You'd still be limited by power and data connections, so density wouldn't be as good as some other options (like fitting twice the power from a C1 as from an S3). I think making the rear panel removable shouldn't be too difficult.

The vertical mounting planes are themselves the primary heatsinks, screwed to the bottom of the case. If you space the PCBs off things, you're removing their contact with heatsinks and then things catch on fire. The boards do not mount to the case at any point. It's like how boards are installed in a Dragon.
The separation between PCB and heatsink on the S5 is actually the chips themselves, since most of the heat in a BM1384 comes out the top. If the ground planes on that miner ever interacted with the heatsink they'd short out and break stuff - like the Prisma did.

I think what happens with S5 fan control is more the fault of whoever wrote the driver code than an inability of cgminer to do things right. A separate control software is an option, but that removes the single-point control which an end user might appreciate.
legendary
Activity: 872
Merit: 1010
Coins, Games & Miners
Indeed, the nice thing about it is that you could design a big board and mix it up with a plethora of different architectures inside it.

A removable back plate would indeed solve most of the watercooling issues with an air-only solution (I understand your air-only preference, but with >60% humidity points here in my country, high density only happens with water blocks).

Also, I was thinking about the thermal protection you proposed; it would be nice to have the thermal watchdog in a separate process NOT related to cgminer, as you can see what happened to the S5s with that option.
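A minimal sketch of that kind of standalone watchdog (the sensor path, threshold and service name are hypothetical placeholders; a real one would read the hashboards' own sensors):

Code:
# Standalone thermal watchdog, deliberately independent of cgminer.
# SENSOR and the service name are hypothetical placeholders for illustration.
import subprocess, time

SENSOR  = "/sys/class/hwmon/hwmon0/temp1_input"   # millidegrees C on Linux (placeholder)
LIMIT_C = 90.0                                    # assumed trip point

while True:
    with open(SENSOR) as f:
        temp_c = int(f.read().strip()) / 1000.0
    if temp_c > LIMIT_C:
        # Stop the mining service first; cutting fans/power is a separate decision.
        subprocess.run(["systemctl", "stop", "cgminer"])
    time.sleep(5)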

I was thinking that the boards should go on ATX spacers so they're mounted like standard motherboards in the case (but in this case on the vertical mounting planes), with some separation at the PCB backside to let the heat flow through the ground pads (not unlike the S5).

I really should make a 3D model because I don't seem to make sense with words today (had my birthday celebration yesterday and I'm a little verbally impaired atm Cheesy)