
Topic: Request for Discussion: proposal for standard modular rack miner - page 3

member
Activity: 116
Merit: 101
I took a stab at your 2in side channel idea.

It definitely has some layout advantages but you do sacrifice some heat sink to get it.  I ran three sets of numbers, all assuming a straight 2 inches of width wall to wall inside that side channel for controllers.

If you leave 0.50 inches between each circuit card and the next heat sink or wall, you can have a maximum heatsink height of 1.29".
If you go with 0.475 inches of clearance between cards, you can get 1.32" of heat sink.
0.40 inches of clearance gets you 1.40" of heat sink.

So you could definitely play around with the actual width of that side section, card clearance, and sink height, and get something workable.
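
If you want to twiddle those numbers yourself: all three cases fit channel width = card clearance + board stack + heatsink height, with the board stack back-solving to roughly 0.20-0.21" (that constant is my inference from the figures above, not a measured value). A minimal sketch:

Code:
# Side-channel tradeoff implied by the numbers above:
#   channel_width = card_clearance + board_stack + heatsink_height
# The ~0.21" board stack (PCB plus mounting) is back-solved from the three
# quoted cases, so treat it as an assumption; outputs land within ~0.01"
# of the figures quoted above.

CHANNEL_WIDTH = 2.00   # inches, wall to wall
BOARD_STACK   = 0.21   # inches (assumed)

def max_heatsink_height(clearance_in):
    """Maximum heatsink height for a given card-to-card clearance."""
    return CHANNEL_WIDTH - clearance_in - BOARD_STACK

for clearance in (0.50, 0.475, 0.40):
    print(f"{clearance:.3f}in clearance -> {max_heatsink_height(clearance):.2f}in of heatsink")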

This is what it looks like as I interpreted your idea.  Shown are the heat sinks at 1.32" tall with 0.475" card spacing, for what it's worth.





Fair enough on the S1 compatibility. 

legendary
Activity: 3318
Merit: 1848
Curmudgeonly hardware guy
Fuzzy, yeah a sleeve mounted to the back panel and used as structure to hold the backplane is exactly what I'm thinking. The top of the case should be an entirely independent panel, I think, for ease of digging into the works.

The top of the hashcard above the heatsink would probably be home to all the tall parts. Any through-hole or tall SMD caps, power jacks, interface or control chips, and if you've got VRMs in your design (that aren't super-large) they probably go up there. With a bottom-cooling chip your PCB is right up against the heatsink and you have clearance to the adjacent board for tall parts, but if they're at the top you have your ~2" clearance to the next board instead of the ~0.5" clearance to the next heatsink. I'm assuming you won't need a lot of airflow up there, and somewhere between 0% and 10% of your heat would be generated there.

Conversation about rolling with S1 dimensions is probably around page 2. Making a single board that works for both rack and small units means someone could design one product instead of two and fit both markets. There's already a lot of S1/3/5 chassis out there waiting to be messed with, and waterblocks for that formfactor are also pretty common.  The S1 design has proven itself pretty well, from almost silent running in the S3 to pushing pretty good power density with overclocked S5. I'm comfortable considering that board size and heatsink layout as a decent home-miner standard, and because of the opportunity for compatibility both from boards and waterblocks, I'm comfortable using that board size in a rack machine as well. The taller board certainly requires changes to case layout, as we've seen in the last several pages, but I think the long-term benefits of compatibility are worth the hassle now.
member
Activity: 116
Merit: 101
I think the sleeve idea is very much what we are thinking, at least as you describe it and I picture it; I can't speak for sidehack.

Regarding the 2 inches or whatever: I think I see what you are getting at, and I will play around with configurations like that next.

Again, it would be helpful to know how much flow you want the tops of the hashing cards to see, the part where there is no heatsink.  Does this need to be entirely open to the hashing-space airflow with little/no obstruction? There is a lot of free space if you drop the hashing-area flowpath to just about the height of one fan and leave everything above it as fair game for circuits, cabling, or tall parts that don't need significant heat dissipation.

Now it's probably a little late in the conversation for this, but I do want to ask: what specifically is the reason for the S1 compliance?  As I understand it you would need to at a minimum replace the hashboards and controller.  Do you envision the actual S1/3/5 heatsinks being used on your upgrade hash boards?  If not, then you are basically building all dimensions of this rack unit around the ability to strap the same hash cards on an existing frame and fan unit.  Is that really worth whatever design sacrifices you make to achieve it?  Just playing devil's advocate here, thinking that if we are trying to define a long-standing form factor that will be applied to both 2-card standalone miners and larger rack-mounted multi-card miners, then we really ought to be certain that we define it right.  My understanding of the argument is basically backward compatibility = more adoption/sales? And perhaps the fear that existing waterblocks couldn't be designed into a new non-S1-compliant frame standard.
legendary
Activity: 3612
Merit: 2506
Evil beware: We have waffles!
Quote: Having that top chamber with a fixed separation does require more disassembly in order to get cards in and out. Hm...

Not if the chamber is part of the top of the case.

To me the easiest way to hold the PSU's is in a sleeve that is part of the case and welded to the rear panel. That sleeve could also serve as a mounting point for the power backplane. Thinking on it, having the sleeve as part of the top as I mentioned earlier means longer cables from the PSU's to the boards, and that's not good...
legendary
Activity: 3318
Merit: 1848
Curmudgeonly hardware guy
If you want to take the machine apart and run your own cabling, taking cables in through a PSU slot directly to the boards would certainly be possible. If you didn't, a simple insert that plugs into the PSU socket on the backplane and has either screw terminals or PCIe jacks accessible sticking out the back of the case would do nicely.

I would rather see common-rail redundancy be not an option at all than see it be the only option. It'd be a good feature to have, and there are certainly situations when you'd see improved performance, but it leaves at least as many situations when split rail is desirable. With a bit of thought on flexible internal connections, any number of configurations of 1 to 3 PSUs in common or separate rails is possible from the same simple hardware.

The most recent render actually fairly closely resembles our original idea, except that PSUs are now behind the cards in a deeper case instead of right over top. Having that top chamber with a fixed separation does require more disassembly in order to get cards in and out. Hm...

What could be done with keeping about 2 inches of width at one end of the case wherein resides the controller and such? You could mount your cabling interconnects at the top of this, and your backplane terminates at that point. You could put any required tall parts on that end of the backplane board so there'd be no restrictions at all on hanging parts in the airpath of an inverted supply. Simple cabling interconnects between backplane output and main cabling input will give you the ability (as previously described) to combine or isolate rails as desired. Something like that could work.

I'm not hell-bent on the DPS1200 so much as on a fairly standard 1Ux2U server PSU. Please note that every time in the last five pages I've considered a design change, I've required accommodation for three different common models in that approximate dimension. I'm fairly set on 4U height, which takes the IBM 2880 out of consideration as an internal supply (and there are numerous other reasons already documented), but also note that every time in the last five pages I've considered a design change I've also required accommodation for interfacing to external PSUs, including the 2880. The first post mentions we're working around the DPS1200 because that's what we have, but from the outset it was intended that provision for other PSUs, internal and external, was essential.
legendary
Activity: 3612
Merit: 2506
Evil beware: We have waffles!
As it stands it looks like the PSU's will be sliding into a sleeve spot welded onto the case lid so it should be an easy change to use other supplies... And again, since the hashboards use PCIe power connectors, this should not be a sticking point.

After all, gotta leave _something_ for folks to tinker with ;)
legendary
Activity: 3612
Merit: 2506
Evil beware: We have waffles!
Can't disagree with that. Aside from the redundancy aspect, not sure why Sidehack is hell-bent on the DPS1200's, but that is what is currently on the board so we follow his lead.
legendary
Activity: 1022
Merit: 1003
IBM 2880W!!!!!!  1 power cord, 1 breakout board, 1 PSU, 80+ platinum, all for ~$50-70 with fan packs.  I realize I am biased because I sell boards for them, but I am biased for a reason.  They are the best...
legendary
Activity: 3612
Merit: 2506
Evil beware: We have waffles!
heh heh heh, ja I still like an external power bank, but if they fit, well, why not keep them (semi-)internal. As for converting to using an external power rack, it would be just as I do now -- run heavy-gauge power feeds that split into short PCIe cables near the cards. No use of the PSU adapters needed. Since the cards use PCIe for power in, it's a non-issue to me at least.

As for mis-matched supplies, THAT is a problem but should not necessarily be yours... Drop-in replacement DPS1200 supplies abound, with many sellers doing Amazon next-day Prime. Or just keep a few from a multi-pack buy as spares. One has to draw a line somewhere.
member
Activity: 116
Merit: 101
I agree with sidehack on not mandating either option but providing for both in an economical way.  I think you pretty much have the connectivity down pat with your A B C grouping situation.  This means each rack unit can have a common backplane and wiring harness that can be purchased in bulk. 

I am not sure on the cost differences between a wire harness, a slot adapter, or a terminal block on the backplane, but I think you could also solve the problem of people wanting to go with external supplies. 

You could offer an entirely new wiring harness for external supplies.
Or you could make the existing wiring harness attach to the backplane with a connector, such that an additional wiring harness can expose the internal harness to the back of the unit by simply connecting in place of the backplane.
Or you could offer slot adapters with terminal blocks that could be used to adapt any supply to the backplane, leaving the internals hardwired.  This would also allow you to expose P_GOOD and current-sharing signals to the external harness.
A slightly more labor-intensive option would be to simply build ALL backplanes with terminal blocks for tying in external supplies, although that may impose an unneeded cost on all stock units with internal supplies.  Either way, I think there is a lot of flexibility involved with the whole concept of grouped boards tied to a 3-PSU slotted backplane.

Regarding the layout and flowpath for PSU's, it sounds like there is little need for cooling on components populating the slot adapter?  I'm hearing that you want the most unobstructed flow path into the PSU intake.  How critical is this? How much flow do we need? Can that number be quantified?  I assume it's something along the lines of "enough airflow to cool 600-700W of PSU waste heat at up to ~35 C ambient intake temps".
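
To put a rough number on that: the standard back-of-envelope relation for air cooling is CFM ≈ 1.76 × watts / ΔT(°C). A minimal sketch using the 600-700W guess above (the 15 C allowable rise over intake is my assumption):

Code:
# Back-of-envelope airflow: volumetric flow needed to carry away a heat
# load at a given allowable air temperature rise.
#   CFM ~= 1.76 * watts / delta_T_C
# (from Q = m_dot * cp * dT, with air at ~1.2 kg/m^3 and cp ~1005 J/kg-K)

def required_cfm(waste_watts, delta_t_c):
    """Airflow in CFM to remove waste_watts at a delta_t_c rise over intake."""
    return 1.76 * waste_watts / delta_t_c

# 600-700W of PSU waste heat, 15 C rise allowed over the ~35 C intake (assumed)
for watts in (600, 700):
    print(f"{watts}W waste heat -> {required_cfm(watts, 15):.0f} CFM")

By that estimate the intake tract only needs to pass something like 70-80 CFM total.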

I would need to work the numbers and run some simulations but intuitively I feel like if the PSU's generate negative pressure in the PSU cooling channel, the flowpath doesn't need to be absolutely perfectly straight. 

I reworked the model with the PSU's flipped.  I focused on trying to maximize the PSU intake tract, again without knowing how much this really needs to breathe.  As such, I envisioned the power leads coming off from underneath the backplane, routing to the outside, then penetrating through the top shelf to be routed to the front of the hashing cards, where they plug in.  This provides balanced flow to all the PSU's, and it only means a little bit of cabling in the front of the PSU duct.

The tradeoff is you make two separate areas for the controller, unless they end up living in the hot zone between fans and hash cards.  How hot do those exhaust temps tend to get?  Is it an issue that the backplane components are hanging down into that hot zone now? Would they get cooked?  My original thought was that you'd want absolute separation of the hot zone from all other components.

How critical is it that the components above the heatsinks on the hashing card see airflow?  Could they be in stagnant/low-flow air? There is a lot of space above the fans that could be sectioned off and made into a cooler space, but this would restrict airflow that comes in between the tops of the hashing cards and the heatsinks.





Thoughts?


legendary
Activity: 3318
Merit: 1848
Curmudgeonly hardware guy
I don't mind screwing ATX folks. But you end up screwing anyone with mismatched server supplies as well. If I've got a machine with a down 1200W PSU and a different 1200W PSU I could drop in while waiting for replacements to ship and arrive, but I can't do that and now I'm down 4TH because the manufacturer mandated single-rail redundancy that I don't really care about - well, that kinda sucks. And the point of allowing external supplies (which folks were arguing very heavily in favor of a very short time ago) includes allowing mix-and-matching of external supplies.
legendary
Activity: 3612
Merit: 2506
Evil beware: We have waffles!
TBH, I feel screw folks that would want to use ATX supplies at these power levels. Use them on the possible 2-blade S1-style modules. Load sharing makes things so much easier. The HP supplies and their ilk are still pretty damn easy to acquire in multi-packs and cheap.
legendary
Activity: 3318
Merit: 1848
Curmudgeonly hardware guy
If the connector is at the top of the case, the fans are right below the connector. In order to access the crapton of room below, your parts hanging down are directly in the way of intake air to the PSU. If the connector is at the bottom, you can still have parts hanging down, but now they're below the PSU's airway in all that open space otherwise unoccupied behind the hashboards. This puts the backplane lower than the top of the hashboards, meaning now the entire backplane has to be behind the hashboards instead of allowing it to be above them. This makes the case a bit longer (potentially) but also means you don't have to remove your PSUs, power backplane, and backplane mounting framework before pulling a hashboard for servicing or replacement.

If we don't do common-rail we lose redundancy. If we're not concerned about being able to load-balance the supplies, then there's no real reason to do a common rail - which makes using different or external supplies relatively trivial. I would prefer if load-balanced redundancy were still an option, but not if it makes every other option more cumbersome to achieve. It's probably not worth the trouble.

One way it could be done is with an intermediate power block connecting the backplane to the internal cabling. Say you have three PSUs 1, 2 and 3 (and each PSU has three cables to the intermediate block) and you have three board groups A B and C. If you keep cabling between the supplies grouped, so 1 powers A, 2 powers B and 3 powers C, now you can use whatever supplies you want and they never interact. But if you take one cable from each supply and tie to each board group, now you have common rail by default because A is powered by 1, 2, and 3 - and so are B and C. Two minutes with a screwdriver and you can switch between common and independent rails. Maybe there's a jumper on the backplane which ties the current-share lines. This also further modularizes the separation between backplane power and the cabling to the boards themselves.
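
To make the "two minutes with a screwdriver" bit concrete, here's a toy sketch of the intermediate block's routing (the names and data structure are made up for illustration, not a spec; only the cable landing points differ between the two configurations):

Code:
# Toy model of the intermediate power block: PSUs 1, 2, 3 each land three
# cables on the block, feeding board groups A, B, C. Identical hardware
# either way; only which cable lands on which group changes.

SPLIT_RAIL  = {"A": {1}, "B": {2}, "C": {3}}                    # 1->A, 2->B, 3->C
COMMON_RAIL = {"A": {1, 2, 3}, "B": {1, 2, 3}, "C": {1, 2, 3}}  # all rails tied

def surviving_groups(config, dead_psu):
    """Board groups still fed by at least one live PSU."""
    return [g for g, psus in config.items() if psus - {dead_psu}]

print(surviving_groups(SPLIT_RAIL, dead_psu=2))   # ['A', 'C'] - group B goes down
print(surviving_groups(COMMON_RAIL, dead_psu=2))  # ['A', 'B', 'C'] - all up, derated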
legendary
Activity: 3612
Merit: 2506
Evil beware: We have waffles!

Quote: Am I missing something about PSU layout?  The fan on the PSU pulls air in from where the edge connector is, right?  And if the edge connector is closer to the top of the case than the bottom, there is a metric crapload of room below for any parts that need to hang down farther.  These parts are also directly in the intake path of the PSU.  And the edge connector for a slot adapter can still be nice and high for mounting to a backplane that's up on top of hash cards.  The fibre mat protects the pins from shorting, and you could build in some struts between each PSU slot to give the case top resistance to being crushed.  I think I'm missing something here with the PSU orientation...
ref https://i.imgur.com/rbcz0pw.jpg for a good look at the backs of the DPS1200's. 7.2kW worth :P
Very much like the PSU's sticking out. Frees up apparent case depth for the power adapters. Highly approve of forcing load sharing; 1 (in this case 2) + n _is_ what the supplies were designed for, after all... Another advantage is that the AC ends of them happen to be the hottest parts of the case, so it's good that they're catching the outside airflow instead of making their contribution inside the case.
member
Activity: 116
Merit: 101
I hadn't thought of using slot adapters; not a bad trick. It sounds like the backplane having integrated load-balancing circuitry is a foregone conclusion?  I ask because there may be some small number of people wanting to piece together systems using different PSU's, and this rules them out if you gang the output from the supplies to a single 12V channel supplying 8 cards in parallel.  Perhaps this is where a factory wire harness option comes into play.

Let's explore the standard backplane with slot adapters.  Oversize the PSU bay slightly, as you mentioned, to accommodate all practical options.  The solution to mismatched PSU's can be in the form of a mechanical system: something intrinsic to each flavor of power socket adapter that prevents two flavors of socket adapters from being inserted adjacent to each other.  I'm not entirely sure how this would work for stopping someone from putting adapter A in slot 1, no PSU in slot 2, and adapter B in slot 3.  But at that point someone is intentionally trying to be stupid.

Edit:

Am I missing something about PSU layout?  The fan on the PSU pulls air in from where the edge connector is, right?  And if the edge connector is closer to the top of the case than the bottom, there is a metric crapload of room below for any parts that need to hang down farther.  These parts are also directly in the intake path of the PSU.  And the edge connector for a slot adapter can still be nice and high for mounting to a backplane that's up on top of hash cards.  The fibre mat protects the pins from shorting, and you could build in some struts between each PSU slot to give the case top resistance to being crushed.  I think I'm missing something here with the PSU orientation...
legendary
Activity: 3318
Merit: 1848
Curmudgeonly hardware guy
So now you don't short your pins, but the potential for mechanical damage is not decreased. You also have no means of getting any standing parts out of the direct airpath of your supplies, since the backside of the board (where you could put any tall parts that aren't the PSU connector itself) is up against your steel ceiling. Immediately below the potentially somewhat congested airspace directly in front of the PSUs, however, you have about five hundred cubic inches of empty space.

It might make the case a bit longer to put connectors at the bottom, but a board could be built that doesn't interfere with hashboards and doesn't pose a risk of shorting or mechanical damage to the case lid.

Also... here's a question. If we allow for multiple PSUs in redundant configuration, is it possible to also isolate PSUs to particular subcircuits so different PSUs could be used in non-redundant (which is to say, not load-balancing) configuration without a substantial hardware change? That needs to be addressed if someone wants to use external ATX supplies or any supplies, internal or external, that can't be put on a common rail.

It might be as simple as having three separate internal busses (3 boards, 3 boards, 2 boards + controller + fans) that get heavy-jumpered together for a shared bus, with the jumpers removed for separate rails. There's probably other ways to do it, but that requirement will partially dictate backplane and internal power design.
legendary
Activity: 3612
Merit: 2506
Evil beware: We have waffles!
Quote: Good power connectors are through-hole parts, and unless a lot of care is taken to make sure things are durably insulated, it wouldn't be hard for something sitting on top to flex the case lid into those pins and short something out.
Good concern, but not too hard to take care of. Use a strip of electrical-grade fibre paper contact-cemented to the case area above the bare points.
legendary
Activity: 3318
Merit: 1848
Curmudgeonly hardware guy
The backplane pretty much just needs to be sockets fit for the PSU (the Emersons and the 1200FBA are quite incompatible; not sure on the 1200TBA) and a few signal lines for load balancing, turn-on and PGOOD flags.

As you mentioned, feasible options would be to either construct interchangeable backplanes, specific to the model of PSU but mounting up to the same space, or to spec a single standard backplane and make slot-in adapters to go from the stock PSU to something else.

I had previously assumed using a different backplane for different PSUs, but adapters isn't a bad idea. That reduces the replacement cost if you want to use a different PSU. The problem is, it allows different models of PSU to run on the same bus - which, depending on their means of load-balancing and stock voltage setpoints, could be fairly disastrous.

What if we make a stock channel size for the PSU (based around the largest practical supply), and when you buy a kit for a different PSU it comes with the power socket adapter and maybe a spring insert that grips the supply and fits tight to the bay slot?

One thing I thought of with your model is, the PSUs are upside-down. That does make the case depth a bit shorter because now the backplane can ride in the space above the boards, but I'm not sure how safe that'll be. Good power connectors are through-hole parts, and unless a lot of care is taken to make sure things are durably insulated, it wouldn't be hard for something sitting on top to flex the case lid into those pins and short something out. That can also cause mounting problems and airflow restrictions to your supplies.
member
Activity: 116
Merit: 101
Before I mock up any specific mechanical solutions for securely attaching PSU's that protrude from the rear, I'd like to define the objective.

I don't know much about PSU interchangeability, so if you could clear up a couple things it will help the brainstorming.

Do PSU's have any standard for the output blade?  As in, if you spec the unit with a DPS-1200FBA and it's internal and flush-mounted, and someone wants to put in a DPS-1200TB, does the unit interface with the same socket on the backplane? Or would each PSU need an entirely separate backplane?

I ask because it would help to know how you plan to offer the unit in its stock form,
what upgrade/modification paths you want to design specifically for and support,
and what upgrades/modifications you want to anticipate and allow for while leaving actual execution up to the end user.

This would shape the types of solutions that make sense for securing the PSU's, i.e.:

A stock unit with DPS-1200FBA supplies, with an optional adapter/guide that interfaces a different-length PSU to the same internal fastener; this would mean the same backplane.
A stock unit with DPS-1200FBA supplies, with an optional PSU bay section that includes a new backplane and a mechanical solution for fastening some other model of PSU.
A stock unit with longer PSU supplies and a stock method for securely attaching those PSU's, with an optional method for attaching the shorter DPS-1200FBA.

Basically I think that actually securing a PSU in a socket is relatively trivial.  What isn't trivial is picking the most flexible solution that is compatible with everyone's needs and not overly expensive to produce.
legendary
Activity: 3612
Merit: 2506
Evil beware: We have waffles!
It's only common at the breaker box and in the main supplying the box, NOT in the home itself. NOT the same thing.

Having done more than a little rewiring over the years, I was FULLY aware of how power commonly arrives at the breaker box - but you can't plug a miner into a breaker box, much less a main supply.

And how many typical homes have rooms wired for a >24A load split between 3 circuits using standard 110/120V outlets? You're talking about having a 2600+W miner that is powerable in the typical home on 15A 110V circuits without re-wiring or a spider nest of extension cords? Keep dreaming.  My point is that 240V can be had for those serious enough to want this miner in their home; otherwise wait for the smaller S1-form-factor miner and power it with ATX. Even the S4+ went 205+V for input, likely because they had too many PSU failures with the S4's on 110/120V; that voltage is inferior in all ways for powering PSU's.

You can't say 240V is not common, because it is "at the breaker" as you said; it just means extra work and/or expense to be able to utilize it from the panel. 3-phase, on the other hand, is not common in North American homes. There's a difference...

If I'm not mistaken this miner is being designed for rack-mounting, efficiency and power density, and I'm not sure that people who rent their homes are the target market here. The fan noise alone on this thing will probably make it prohibitive to have in a typical home anyways.
Agreed. This point has always struck me along the lines of someone wanting a good-size welder or ceramics kiln at home -- sure, you can do it. After you put in the circuits to feed them! If you want something powered from 110V 15A then one must accept the limitations (smaller units) that come with that.
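
For what it's worth, the arithmetic behind the household-power point (treating the miner as a continuous load limited to 80% of breaker rating, which is my assumption about how the usual wiring rules apply):

Code:
# Quick check on running a ~2600W miner from standard 110/120V outlets.
import math

MINER_WATTS  = 2600
VOLTS        = 110
BREAKER_AMPS = 15
usable_amps  = 0.8 * BREAKER_AMPS              # 12A continuous per 15A circuit

total_amps        = MINER_WATTS / VOLTS        # ~23.6A total draw
watts_per_circuit = usable_amps * VOLTS        # ~1320W usable per circuit
circuits          = math.ceil(MINER_WATTS / watts_per_circuit)

print(f"{total_amps:.1f}A total; {watts_per_circuit:.0f}W usable per 15A circuit")
print(f"-> at least {circuits} dedicated circuits, realistically 3 for headroom")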