
Topic: Request for Discussion: proposal for standard modular rack miner - page 7. (Read 9671 times)

hero member
Activity: 924
Merit: 1000
Cool.  Not to offend the scam sites but those renders have more info than most of their sites.  NO PREORDERS HERE!  Cheesy  Nice work!  It does help to sketch it out and not just keep it in your head.
member
Activity: 116
Merit: 101
If you can fit all of your controller circuitry in the PSU bay, then you really have more like 0.875in above the cards for cabling/connections.  I partitioned it in my model thinking you might want to stick a controller card up there, but that could be worked out in a few different ways.

Hopefully this view answers most of the dimensional questions.  Let me know what you want tweaked and I can rework it.  The only thing I didn't remember to put a dimension on was the PCB thickness; I assumed the standard 0.062in.  If the S1 cards are thinner, that might be part of why mine seems so tight.

legendary
Activity: 3374
Merit: 1859
Curmudgeonly hardware guy
0.75" is a bit tight for power jacks, but you shouldn't have trouble plugging in a PCIe without kinking the wires.

I question your hashboard space width - a 17.5" case should have about 14" left over after 3.5" of power supply. But pretty much everything else - interchangeable PSU backplanes, controller location, all that - we're on the same page now.
legendary
Activity: 872
Merit: 1010
Coins, Games & Miners
Better to stick with 4U; that way you can have even taller boards. Also, on a 46U rack you would fit 11 of these rackable units, which would amount to 77 boards. Not too shabby Wink
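For anyone checking that math, here it is as a quick Python sketch (assumes exactly 4U per unit and the 7 boards per unit discussed in this thread):

Code:
# Rack-density check: 46U rack, 4U units, 7 hashboards per unit (thread figures)
rack_u = 46
unit_u = 4
boards_per_unit = 7

units = rack_u // unit_u           # -> 11 units, with 2U left over
boards = units * boards_per_unit   # -> 77 boards
print(units, boards)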
member
Activity: 116
Merit: 101
I reread the thread and pulled out some of those dimensions. 
I mocked up a DPS-1200-type supply and went with an overall dimension (including the power blade) of 8.25in x 3.5in x 1.6in.
The PSU channel is currently 11in up to that first bend.  If you opted to have the PSU section rectangular with no angled ducting, you could get away with 13 inches of bay space.  I would imagine you have some kind of backplane card for the three PSUs to plug into? What will this do to the PSU chamber airflow?  Is this worth consideration?

Right now I have it with 0.4in between the first card and the side, and then 1.9 inches card to card.  This means the last card only has 1.5 inches of space on the right side, where it is up against the PSU bay.  I could sink the first card up against the wall and get the full 1.9" per card for heatsink/space, but then you are restricted to single-sided cooling only.

I like the idea of mounting cards to heatsinks, and then heatsinks to chassis.  For the time being I modeled up a strut, giving a 0.1" clearance for a nylon washer or something between the strut and the card.  Also, I'm using 0.055in for all the wall thicknesses.  I would imagine you could go thinner on the interior stuff and gain some fractions of an inch.

Right now as it stands I have 0.075" clearance on the top and the bottom edge of each blade.  This affords you 0.75" of free space in the top shelf for cables and controller circuitry. Is that enough for you?  There's also a good portion of the PSU bay that could do that same job.  I was thinking maybe even put the controller in the top of the PSU bay, and then use a blade-style connector to interface with the PSU backplane.  This backplane could come in different flavors to accommodate different PSUs, and then your controller could use the signal quality lines, etc.

I'm going to start playing around with the simulation stuff soon and see how that pans out. Ideally I'd like to be able to run a full forced-convection study, but I don't know if I have the right package for that.
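For reference, the width budget those spacings imply, treating the 1.9in as card-to-card pitch (a rough sketch, not a spec):

Code:
# Hashboard-bay width check using the numbers quoted above
cards = 7
first_gap = 0.4    # in, first card to side wall
pitch = 1.9        # in, card-to-card spacing
last_gap = 1.5     # in, last card to PSU-bay wall

width = first_gap + (cards - 1) * pitch + last_gap
print(round(width, 2))  # -> 13.3 in, inside the ~14in left after a 3.5in PSU bay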

legendary
Activity: 3374
Merit: 1859
Curmudgeonly hardware guy
That mockup looks pretty much like what I'm thinking. I don't recall the exact numbers offhand, but I think a few posts back I gave some expected dimensions for heatsink height. The heatsink dimensions would be the same as an S1 chassis, but likely with a bit shorter fins. I think I figured for one half inch between the face of one heatsink and the fins of an adjacent heatsink, which allows for board thickness and some tall parts like tantalum caps and such before interfering with neighbors. Taller parts could be put at the top of the board above the heatsink, where clearance to the next board is roughly two inches. That might make pulling cards in and out a bit cumbersome, so it'd be nicer if tall parts were kept on the side with the heatsink, but that's probably going to be left up to manufacturer's discretion.
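A rough sketch of the card pitch those clearances imply (the heatsink depth below is a placeholder guess, not a number from this thread; the board thickness is the standard 0.062in mentioned earlier):

Code:
# Card-pitch sketch: the half inch between a heatsink face and the neighboring
# fins has to swallow the board and any tall parts mounted on it.
heatsink_depth = 1.4   # in, base plus fins (assumed value, not from this thread)
face_to_fins = 0.5     # in, clearance from this post
board_thk = 0.062      # in, standard PCB thickness

pitch = heatsink_depth + face_to_fins
print(pitch, round(face_to_fins - board_thk, 3))  # -> 1.9in pitch, ~0.44in left for parts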

If you have seen a Dragon, I like how those are put together. Basically the PCBs are mounted to the heatsink, and the heatsink is stuck to the case. Screws through the base of the case run right up through tapped holes drilled into the fins. It's pretty solid, and there are no additional rail components like in the S2 - which increase manufacturing cost, can get in the way of airflow, and shouldn't be relied upon for structure given how many of those arrived from the factory with cards flopping all over the place.

I'm not sure offhand what the board height on the S3 is, but S5 boards are a bit shorter than the S1 boards. They can be that much smaller because there are no power components, thanks to string topology. I'd rather not design a flexible standard without allowing room for components for a variety of topologies, so I'd make provision for the full 5.9-inch height. The S1 heatsink is, I think, about 4.5 inches tall, so fitting inside a 5.25" case doesn't allow much extra room above that for anything requiring through-hole components. 3U would be nice, but I don't think it's possible without switching to a different board dimension.
member
Activity: 116
Merit: 101
As I'm currently stuck on a boat for a few days waiting to get into port, I think I'll take a stab at this in Solidworks and see if I can't get some flow/thermal simulations running.

Edit: Here is what I have so far

120mm fans, plenty of spacing, and I'm using 0.055in for all wall thicknesses.  C20 socket.  Separate flow paths for the PSU space and the hash space.

For the PSU space I allotted 4 inches. I think this is too much, but I wanted to play it conservative at first.  How much space wall to wall do you think this should be?

The other thing I'm still guessing on is the placement of the first hash card and the exact spacing.  Right now I have the first card a half inch off the wall, and then 1.75in spacing card to card.  This is totally a rough guess, and I believe you were all spec'ing 2" heatsinks.  Is this two inches overall, or 0.5 inches on one side and 1.5 inches on the other?  Should it include any chip standoff height?

Speaking of standoffs, what did you guys envision for vertical mounting?  I could see something like a 2-strut/board system with nylon standoffs to space the cards off the vertical struts. But I'd like to hear what you all wanted before I model that up.  I think the board mounting will come into play for the flow simulations, so it's worth getting that detail captured.

I have to dig into the simulation stuff tonight and figure out how that's going to work, but safe to say I should run it at 300W per board?  I was thinking 200 CFM per fan assuming no restriction?  So maybe like 100-150 CFM per fan actual flow rate?  I may need an actual performance curve from a fan to get the simulation to mesh out correctly, so if anyone has a specific fan in mind I can try to find and incorporate that performance curve.
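As a first-order sanity check before any CFD, the bulk exit-air temperature rise at those numbers works out as below (a sketch; assumes standard sea-level air properties):

Code:
# Bulk temperature rise: dT = P / (rho * cp * Q)
CFM_TO_M3S = 0.000471947          # 1 CFM in m^3/s
rho, cp = 1.2, 1005.0             # air density (kg/m^3), specific heat (J/kg/K)
power = 7 * 300.0                 # W, 7 boards at the 300W guess above

for cfm_per_fan in (100, 150):
    q = 3 * cfm_per_fan * CFM_TO_M3S      # total flow through 3 fans
    dT = power / (rho * cp * q)
    print(cfm_per_fan, round(dT, 1))      # -> ~12.3 C at 100 CFM, ~8.2 C at 150 CFM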

Also, for heatsinks, are you thinking custom jobs? Or repurposed S1/S3 hardware?



I want to see what kind of numbers come out of a 3 fan pull only simulation.  But it would be pretty easy to rework the model and do a 6 fan push pull.  I would just mirror the duct shape from the back at the front and maybe stretch the whole unit by a touch.

I also could run this as 3 pull 2 push, which might make for an interesting angle on the "negative vs positive pressure" argument.
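To illustrate how the fan-count question might shake out, here's a minimal operating-point sketch: intersect a linear fan curve with a quadratic system curve. The fan numbers are made up, and k (the case's flow resistance) is exactly what the CFD study would need to supply:

Code:
import math

def operating_point(n_parallel, n_series=1, q_max=200.0, p_max=0.3, k=1e-5):
    # Parallel fans add flow; series (push-pull) stages add pressure.
    # q_max (CFM) and p_max (inH2O) are hypothetical fan specs.
    p_eff = n_series * p_max
    q_free = n_parallel * q_max
    # Solve k*q^2 = p_eff*(1 - q/q_free):  k*q^2 + (p_eff/q_free)*q - p_eff = 0
    a, b, c = k, p_eff / q_free, -p_eff
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

print(operating_point(3))     # 3-fan pull:     150.0 CFM total
print(operating_point(3, 2))  # 3+3 push-pull:  200.0 CFM total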

 
legendary
Activity: 3822
Merit: 2703
Evil beware: We have waffles!
Another thought - it looks like you are trying to stay somewhere around the depth of an S2/S4. At least in telecom and industrial electronics, case depths up to 28" are not uncommon (a good example is Bitmain's 1600W PSU) sooo....
legendary
Activity: 3822
Merit: 2703
Evil beware: We have waffles!
I'd say it depends on where the miners are. Most conventional IT hosting facilities charge by rack space used (height) and power, so yes, hash density comes into play. But said facilities are by no means cheap (though they do have other advantages such as security, often uninterruptible power, bodies on hand for monitoring, etc.).

Going the miner-warehouse route like you are thinking of in Venezuela, it's rather a non-issue I'd think.
member
Activity: 116
Merit: 101
Ahh, I knew I had to be missing some key dimension.  

The 3U suggestion was predicated on 5in cards that had "rear"-facing connections, so they could just ever so perfectly fit in a 3U case.  If a 5-inch card is incompatible with the S1 form factor, then throw my 3U suggestion out the window.

I hadn't really given a great deal of thought to backplane vs no backplane.  I was really just considering possible high-level layouts.  It may not be feasible, but I envisioned that a "power rack" could be built in a number of configurations: backplane if it was to be hot-swappable, or wiring harness if it was to be cheaper and simpler.

I think the separate PSU rack really depends on a 3U hashing unit.  At 4U I see the value in having the option to run it internally powered, and that probably offsets the loss of the 8th hashing card.  Is the S1 form factor incompatible with a 5-inch card? And even if it were, do 8 cards still not work in a 3U because of the controller card? The more I think about it, the more I realize that 8 hash boards in a 3U would be an impressively tight squeeze, if it could even be done at all.

This brings me back to my original question...

"Do mining operations really have an issue with overall hashing density per sq ft?  Is there a driving need for the entire unit to occupy a 4U space?  Or could other options be explored without reducing the value of the unit?"

I mean, this appears to be aimed at some level of industry, vs the home mining market.  Do you guys that have hosting services and mining operations really feel the squeeze when it comes to space?  Is that a limiting factor in your operation?  At 4U you can fit, I believe, 10 units on a full rack; at 5U it's 8. So for the same footprint, assuming you run a full rack, you have either 70 hashing cards at 4U, or 64 cards at 5U.  That's less than a 10% difference in overall hashing density.  Is that significant to you guys?
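The arithmetic behind that comparison, assuming 40U of usable rack space (a quick sketch):

Code:
# Hashing density: 4U units with 7 cards vs 5U units with 8 cards
rack_u = 40
for unit_u, cards_per_unit in ((4, 7), (5, 8)):
    units = rack_u // unit_u
    print(unit_u, units, units * cards_per_unit)
# -> 4U: 10 units, 70 cards; 5U: 8 units, 64 cards (about 8.6% fewer)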




legendary
Activity: 3822
Merit: 2703
Evil beware: We have waffles!
I still rather like the separate PSU rack myself. Sell the miner and require that one of the 6-slot PSU cases be bought (and preferably enough HP1200's to power it). Something like what is shown here from TRC http://www.trcelectronics.com/Meanwell/hot-swappable-rcp2000.shtml Then when they get a 2nd miner they are all set; just get sockets/cables (and PSUs) to fill the case and hook up the 2nd miner. Seems pretty painless to me and frees up a lot of room in the miner case.
legendary
Activity: 3374
Merit: 1859
Curmudgeonly hardware guy
S1 card height is actually 5.9 inches. Ignore the proposed card dimensions from the first post as they're something else entirely. If S1 card height is to be the upper bound for allowable dimension, this will not fit in a 3U chassis with vertical cards, and especially not when power cables are considered. As stated, the only way to get 3U is to either change card dimensions or not have vertically-mounted cards (which makes mechanics and thermal considerations more convoluted).

Regarding dictating internal versus external power, I think there's a substantial difference between a 4U unit with the option for either internal or external power and a "4U hashing unit + (1U/2U) power unit". The latter takes up more space and costs a lot more, on account of now you have to build a second case and all the cabling required both internally and externally. As a manufacturer, as a miner and as a hoster of mining gear, it doesn't really make sense to me to do it that way.

With provision for internal PSUs, you are not mandated to use internal PSUs. A factory-option wiring harness (which can be sold separately for refitting machines later, hooray interchangeable parts) connecting to externally-accessible jacks would probably be cheaper than the set of internal PSUs and their backplane board (and would be required for your idea anyway) and it still gives you the ability to do what you propose with external power - but it doesn't remove the option to have a self-contained unit, which a lot of people appreciate. I argue for the option that incorporates - does not mandate, but at the same time does not exclude - what you'd like to see.

The internal power housing could, with small changes and replacing the backplane board, hold any number of PSU models - we've discussed 3 just in the last page. Or if you want no backplane board and the externalizing harness, you can wire it up to server supplies or ATX or whatever you want in whatever configuration. I'm fairly certain that's more flexible being as it includes all the flexibility of your proposal and still allows for a self-contained unit without requiring a separate box for power and with various options for internal supplies as well. The only thing it can't do is 8 blades in a chassis, which your 3U proposal can't really do either.
member
Activity: 116
Merit: 101
I agree that a turnkey solution should be offered.  I am just suggesting that it come in the form of 2 physically separate units, both of which are produced and sold by you.  One would be the hashing unit at 3U; the other would be one or more forms of a "power rack" built specifically to run 1 or 2 of the 3U hashing units.  Anyone wanting turnkey can simply purchase them both.  Anyone who has existing power or wants to use alternative power sources can buy the hashing rack without purchasing the power rack.

Regarding dictating internal vs external power, I don't think there is much of a difference between a turnkey 4U unit with internal power and a turnkey "3U hashing unit + (1U/2U) power unit".  At least not from a consumer's perspective.

The power rack could be built to allow hot swapping for Spondoolies fanboys. It could come in more or less redundant variations.  The point is that it would be a more flexible form factor/standard than an integrated system.

Regarding fitting 8 cards in a 3U space, I understand that it is exceedingly tight.  Perhaps I am missing a key dimension that makes it impossible, but skimming through I see the following numbers.

3U space: 5.25in Nominal, or 133.35 mm

Proposed S1 form factor card height: 5in nominal, or 127mm

Proposed chassis thickness: 2mm per wall, or 4mm overall.

Assuming nominal dimensions, you get a stacked height of 131mm assuming the hashing boards touch the chassis.  This affords you 2.35mm of margin for tolerance and making sure your equipment actually fits in a rack.  Admittedly this is tight, but I don't think that is impossible.
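The same stack-up as a quick script, using the nominal values above:

Code:
# 3U height margin check
u_mm = 44.45                    # 1U in mm
space_3u = 3 * u_mm             # 133.35 mm nominal
card = 127.0                    # 5in card height in mm
chassis = 2 * 2.0               # 2mm top + 2mm bottom
margin = space_3u - (card + chassis)
print(round(margin, 2))         # -> 2.35 mm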

Can we shave a mm or two on the card height to raise the margin?

legendary
Activity: 3374
Merit: 1859
Curmudgeonly hardware guy
My opinion is, make the unit fully self-contained (having its own PSUs, controller, everything required to operate) but allow the option for external power. If you have no provision for internal power, it requires everyone (including folks who just want one) to source PSUs independently. That might not be a problem, but it can be cumbersome and it's pretty universally standard so far that rack gear has internal supplies. Sometimes those supplies suck and we wish external power was an option. With an internal supply the option for external power can still exist (with a simple wiring harness and external jacks, as previously discussed) - and now you have choices. If you dictate no internal power, it can never be an option.

The 4U height currently is required for the preferred board dimension. I think, and others have agreed, that using S1-size boards is good; even without considering cabling, an S1 board is about 3/4 inch taller than maximum allowable height for 3U. Blades could be laid flat (like in the S4) but that makes it difficult to access or maintain bottom-layer boards, and to fit more than about six boards you'd have to put them in rows which reduces air cooling efficiency (as hot air from front boards is trying to cool rear boards) and makes fitting waterblocks and plumbing very difficult in addition to greatly increasing the case depth. Dropping to 3U height is possible if we want to use a different size board, but the reasons for not doing so are fairly numerous.
member
Activity: 116
Merit: 101
I have never mined industrially and have never operated a data center or rack-mounted setup, so pardon my potential ignorance here, but I have a question about the form factor.

Do mining operations really have an issue with overall hashing density per sq ft?  Is there a driving need for the entire unit to occupy a 4U space?  Or could other options be explored without reducing the value of the unit?

It seems like the layout is a key issue here, centering on balancing three variables: the number of boards, the number/size of fans, and the arrangement of PSUs.  The ideal goal of the current discussion appears to be somehow fitting 8 properly cooled hashing boards alongside a robust PSU set inside a 4U case. But the end goal of the discussion is a "standard modular rack miner".  It seems to me like a great number of the existing units on the market are powered externally.  And integrating the PSU with the hashing boards restricts modularity due to layout concerns.

I envision something along the lines of a 3U hashing rack that has 8 boards and the 3 x 120mm fan layout.
Alongside this would be offered a "power rack" that could be 1 or 2U.  I imagine this may add some cost in terms of additional cabling/chassis/wiring, but it does solve the modularity issue, and it gets you back in the 8-cards-with-120mm-fans ballgame.  Just an idea; I haven't really looked at the actual unit dimensions, so I could be missing something obvious here.

legendary
Activity: 1022
Merit: 1003
Higher-quality PSUs cost a lot more up front, unless you're willing to risk buying used units.

Buying used server PSUs is not much of a risk.  Out of the hundreds I've had a hand in either supplying or supplying boards for, there have been 2 failures that I have been made aware of, and I have received 2 DOA units. The real risk is buying an ATX PSU, new or otherwise, as they are complete rubbish compared to server PSUs, and it's not a matter of if but when they will fail (after you have paid more for a PSU that provides less).

220 isn't anywhere near as common in the US as 110. Most HOME miners have ONE 220 outlet available (for their dryer), many don't have any.
 You pretty much have to OWN your own home to be able to add 220 outlets, and most folks have no clue how to wire them up - at which point you're talking expen$$ive electricians to be ABLE to add any 220 outlets to your home.

Actually 220/240V is the MOST common power supplied to homes in North America.  It's called split phase (single phase), and nearly all homes are powered with it in both Canada and the US.  They split the single 220/240V phase into 2x half-phases of 110/120V each at the panel, meaning anyone can have a 220/240V outlet made up by combining opposite-phase 110/120V circuits.  Yes, I do recommend everyone get a certified electrician to do all the work, and all that other liability-ass-covering crap.  I believe that home miners should be looking to make the switch, because the few miners that will still be available to purchase will be moving to 220/240V anyway, a la the S4+ and larger.
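To put numbers on why the switch matters at the wall - same load, half the branch current (a sketch; the 80% continuous-load rule is typical North American practice, stated here as an assumption):

Code:
# Branch current for a ~2400W unit at each line voltage
load_w = 2400.0
for volts in (120, 240):
    print(volts, load_w / volts)
# -> 20.0 A at 120V (over even a 20A branch circuit's 16A continuous limit);
#    10.0 A at 240V (fits a standard 15A circuit with headroom)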

Sidehack has already mentioned using the boards for a different form factor similar to the S1/3/5.  Less serious home miners would be best off waiting for that and powering it with 120V PSUs.  But to sacrifice a significant design element (large server PSUs) for a relatively small market would be silly.
legendary
Activity: 3374
Merit: 1859
Curmudgeonly hardware guy
If by "sizing around one specific PSU" you mean "any of half a dozen existing PSUs which can be acquired new or secondhand will work without significant mechanical alteration" - since the 1Ux2U server PSU is a very common dimension - then yeah, I'm sizing around one specific PSU.

I also don't really consider 4U rack to be a small case, given that it's a taller dimension than any decent rackable gear built in the last year.

Home miner is a secondary consideration for rack gear.
legendary
Activity: 1498
Merit: 1030

I'm amazed how many people disregard the ~5-10% efficiency that can be gained by switching out lower-quality PSUs as well as moving to 220/240V.


Higher-quality PSUs cost a lot more up front, unless you're willing to risk buying used units.

 220 isn't anywhere near as common in the US as 110. Most HOME miners have ONE 220 outlet available (for their dryer), many don't have any.
 You pretty much have to OWN your own home to be able to add 220 outlets, and most folks have no clue how to wire them up - at which point you're talking expen$$ive electricians to be ABLE to add any 220 outlets to your home.

 If you intend for this to be a HOME miner, ASSUME 110VAC not 220VAC or plan to lose most sales to the US (I believe Canada also defaults to 110, but not 100% certain).

Quote

I don't like the idea of resizing the case to fit one specific PSU


 You seem to already be sizing the case around "one specific PSU".
 That would be the one advantage of specifying 2 standard ATX power supplies - even at the 1200+ watt level there are quite a few options - though they're a lot less convenient to make FIT into such a small case as you want to use.
legendary
Activity: 3374
Merit: 1859
Curmudgeonly hardware guy
Sent in a PM:

I'd rather not shift to a 5U if we don't have to. The S1 board is over an inch shorter than 4U, so pushing to 5U would just mean more wasted space at the top where there's nothing but cables.

I also have no problems at all about using more efficient PSUs, which is one of the main reasons we started building server supply interfaces to begin with.

Hey, I don't want to add any clutter from back-and-forth banter in your thread; I just want to reiterate my opinion one last time and leave it at that.  It's your project.

As you may know, cost-effectiveness is going to be the single most important factor in making your project a success (and in turn providing any kind of real benefit to the community). Not sure if you're thinking about 2 or 3 DPS PSUs, but if you go with 3x DPS-1200FBA's, the cost of PSUs will be a minimum of $75, and I'm not sure that having 3x PSUs in a miner makes it any more reliable. I feel like it increases the likelihood of having a failure (which could easily go unnoticed, leading to further PSU failure due to overloading the remaining units). It's not redundancy if you need all 3 in order for it to run reliably.

Also, as I've mentioned, the efficiency of PSUs should not be overlooked, and although running the DPS-1200FBA would be convenient due to availability, its lack of efficiency would not fit very well with your goal of creating an efficient miner worth re-using.  If you were to switch to the DPS-1200TBA Platinum supply, you're looking at nearly triple the price over the FBA.

On the other hand, the IBM 2880W is also proven reliable, Platinum rated, and can be had for $50-$70 each.  They are also much more common and available than the DPS 1200TBA.

 

If the power spec is 2400W, three 1200W PSUs are definitely redundant. It's only not redundant if you're running them off 110V, which is why I specifically said that'd be a non-redundant configuration but still actually possible. I assume there'd be an indicator for PSU functionality which would alert you to a downed supply. I think redundant supplies are probably not essential anyway, given that no miner yet made with an internal PSU had that as an option and nobody seems to complain.
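The N+1 arithmetic, with the ~900W output on 110V input being the commonly quoted derating for this class of server supply (an assumption here, not a datasheet number):

Code:
# Does the unit still meet a 2400W spec with one of three PSUs down?
spec_w = 2400.0
n_psus = 3
for line, psu_w in (("240V", 1200.0), ("110V", 900.0)):
    redundant = (n_psus - 1) * psu_w >= spec_w
    print(line, redundant)
# -> 240V: True (2 x 1200W covers 2400W); 110V: False (2 x 900W = 1800W)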

There are a lot of good options for PSU, all weighing availability, efficiency and price against each other. I don't like the idea of resizing the case to fit one specific PSU that's not shaped like any other PSU ever, because if we want to make more units than we can find of that exact PSU on the secondhand market, then we have to abandon the standard and go with something else. If it's designed in such a way that any of half a dozen existing PSUs which can be acquired new or secondhand will work without significant mechanical alteration, I'd rather see that option as the default.

Especially considering it won't be difficult to bypass any internal PSU and plug in whatever PSU you want as an external.
legendary
Activity: 3374
Merit: 1859
Curmudgeonly hardware guy
I'd rather not shift to a 5U if we don't have to. The S1 board is over an inch shorter than 4U, so pushing to 5U would just mean more wasted space at the top where there's nothing but cables.

I also have no problems at all about using more efficient PSUs, which is one of the main reasons we started building server supply interfaces to begin with.