
Topic: HashFast announces specs for new ASIC: 400GH/s - page 508. (Read 880479 times)

newbie
Activity: 47
Merit: 0
I spend quite a lot of time in a datacenter, and the back of a rack is most certainly accessible by opening the door in the warm corridor. (Otherwise how would you attach power and network cables to the servers?)

I didn't say it was totally inaccessible, but no one wants to go through the back, wade through the cable mess, move cable-management arms and get their fingers cut just to turn a machine on or off, or reset it.

My racks are very neat and tidy at the back. But you do have a point if you're somewhere like this:  Grin

legendary
Activity: 1176
Merit: 1001
Hashfast, any proof that you have something to deliver soon would do our mood some good. I'm getting sadder every time I visit this thread.
legendary
Activity: 980
Merit: 1040
I spend quite a lot of time in a datacenter, and the back of a rack is most certainly accessible by opening the door in the warm corridor. (Otherwise how would you attach power and network cables to the servers?)

I didn't say it was totally inaccessible, but no one wants to go through the back, wade through the cable mess, move cable-management arms and get their fingers cut just to turn a machine on or off, or reset it.
sr. member
Activity: 462
Merit: 250
I haven't heard that other BTC rig manufacturers did.



which is why I lowered the fan speed to run 5-7C cooler   Cheesy

full member
Activity: 210
Merit: 100


2.  The twin, non-redundant power supplies are a waste of space & a sign of sloppy, afterthought engineering.  3 modules in the case?  2 power supplies?  One module gets one, and the remaining two share the other?  Two power supplies to provide 750W?  Honestly?
Is there no *single* off-the-shelf PS which could handle 750W?  They had to enter into a contract with Sea Sonic to provide them with *TWO ANEMIC PSs per box*?  Really?



I'm not saying that the whole thing will fail on the merits of its cooling solution alone.  It likely won't -- 750W is not a huge amount of power to dissipate.  What i *am* saying is this:  If their cooling & packaging design is indicative of their ASIC skillz, what we have here is a giant fail. Smiley


Correct me if I'm wrong, but isn't the total more like 1400 watts per unit?  And since you cannot get a good, efficient 1600W PSU, 2*850 were chosen?

Also, if 2 PCIe cables power each miniboard, then you can have one cable from each PSU powering the 3rd board.

According to Hashfast, their chips draw 0.65W per GH/s, so: 0.65 * 400 * 3 = 780W.  Let's round it up to 900W to make up for fans, pumps & fluff.  Sea Sonic, the company Hashfast has entered into some sort of a deal with (the PR release is unclear), has platinum-rated 1000W gizmos available @ Newegg for $239.

As far as running two power supplies in parallel goes, it might work, or it might be fireworks, or one PS might "loaf" at idle & not add to the fun -- it depends on the output circuitry of the PS.  It's possible to bridge switching PSs, but it's not worth the effort.
Hope this helps.
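For anyone who wants to sanity-check the arithmetic above, here it is in a few lines. The 0.65 W per GH/s figure is Hashfast's claim; the ~120 W overhead for fans & pumps is my own round-number guess, not anything from their spec:

```python
# Back-of-envelope power budget for a 3-board unit, per the post above.
W_PER_GHS = 0.65        # Hashfast's claimed efficiency, watts per GH/s
GHS_PER_BOARD = 400     # one 400 GH/s module
BOARDS = 3

asic_power = W_PER_GHS * GHS_PER_BOARD * BOARDS   # ~780 W at the chips
overhead_w = 120                                  # fans, pumps & fluff (my guess)
wall_power = asic_power + overhead_w              # ~900 W at the wall

print(round(asic_power), round(wall_power))
```

Either way the total lands comfortably under a single platinum 1000W unit.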
legendary
Activity: 1876
Merit: 1000


2.  The twin, non-redundant power supplies are a waste of space & a sign of sloppy, afterthought engineering.  3 modules in the case?  2 power supplies?  One module gets one, and the remaining two share the other?  Two power supplies to provide 750W?  Honestly?
Is there no *single* off-the-shelf PS which could handle 750W?  They had to enter into a contract with Sea Sonic to provide them with *TWO ANEMIC PSs per box*?  Really?



I'm not saying that the whole thing will fail on the merits of its cooling solution alone.  It likely won't -- 750W is not a huge amount of power to dissipate.  What i *am* saying is this:  If their cooling & packaging design is indicative of their ASIC skillz, what we have here is a giant fail. Smiley


Correct me if I'm wrong, but isn't the total more like 1400 watts per unit?  And since you cannot get a good, efficient 1600W PSU, 2*850 were chosen?

Also, if 2 PCIe cables power each miniboard, then you can have one cable from each PSU powering the 3rd board.
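The split described above does pencil out at the connector level. A rough sketch, where the per-board wattage is estimated from the claimed 0.65 W per GH/s and 150 W is the standard 8-pin PCIe connector rating (the cabling scheme itself is speculation, not anything Hashfast has published):

```python
# Can two ATX PSUs feed three ~260 W boards over standard PCIe cables?
PCIE_8PIN_W = 150            # rated limit of one 8-pin PCIe connector
board_w = 0.65 * 400         # ~260 W per 400 GH/s board (claimed efficiency)

cables_per_board = 2
capacity = cables_per_board * PCIE_8PIN_W    # 300 W deliverable per board
assert board_w <= capacity                   # two cables cover one board

# Split: PSU A feeds board 1 plus one cable of board 3,
#        PSU B feeds board 2 plus the other cable of board 3.
psu_load = board_w + board_w / 2             # ~390 W per PSU
print(round(psu_load))                       # well under an 850 W unit
```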
newbie
Activity: 47
Merit: 0
The back is out of reach?

You've never seen the inside of a datacenter, have you?

I spend quite a lot of time in a datacenter, and the back of a rack is most certainly accessible by opening the door in the warm corridor. (Otherwise how would you attach power and network cables to the servers?) I must admit, though, that nothing else about the design of these units makes any sense. Apart from the power supplies, I don't see any screw holes for the rack slides; these seem pretty heavy cases to be mounted purely on the ears.
ImI
legendary
Activity: 1946
Merit: 1019
0.75 btc per day for 400gh at the current diff

less than 0.5 btc per day for the next adjustment.

People will get a lot more btc for their money buying direct.
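For anyone who wants to check those daily-yield figures, the standard expected-reward formula is below. The difficulty used is only an illustrative late-2013 ballpark, so plug in the live number yourself:

```python
# Expected BTC/day = hashrate * 86400 * block_reward / (difficulty * 2^32)
def btc_per_day(hashrate_hs, difficulty, block_reward=25.0):
    """Expected daily reward, ignoring pool fees and luck."""
    return hashrate_hs * 86400 * block_reward / (difficulty * 2**32)

# 400 GH/s at an illustrative difficulty of 2.7e8
yield_now = btc_per_day(400e9, 2.7e8)
print(round(yield_now, 2))   # roughly 0.75

# after a ~60% difficulty jump, the same rig earns under 0.5 BTC/day
yield_next = btc_per_day(400e9, 2.7e8 * 1.6)
print(round(yield_next, 2))
```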

newbie
Activity: 47
Merit: 0
...

3.  As xstr8guy suggested above (not sure if he was joking, but wait...), even flipping the whole megillah around, so that the back of the case faces the front (becomes the intake side), would be a more elegant solution.  At least the case would only get the exhaust from the Rube Goldbergian twin PSs, not the full furnace blast of the 3 ASICs.
...

Nice analogy!

full member
Activity: 210
Merit: 100
crumbs, there is a finite amount of available space in a rackmount unit.  Design a rackmount unit with better airflow, buy modules from HF in bulk, and then resell the package -- I think you will find it is harder than it looks.  Putting the radiator in the back would be an "easy" solution, except you only have 17" of width by 1.75" per U of height to work with.  If the power supply is mounted on the back and the radiator is mounted on the back, the radiator will be tiny -- too little surface area to effectively cool 750W+.  To keep delta-T under 10C over ambient you are going to need 1 to 2 cm2 of radiator surface area per watt (i.e. 750 to 1500 cm2 for a 750W heat load), even with pretty extreme airflow (3000 RPM pusher & puller fans).  There is only so much surface area on the back or front panel of a rackmount unit.

Sure, if you don't want to compromise, build a massively expensive 6U chassis with straight-through-flow power supplies and the entire rest of the back panel devoted to a radiator.  Of course, when you do so, you price yourself out of the market, and people will just buy the more economical solution from Hashfast or Cointerra.

I won't find anything "harder than it looks" -- i've designed and built cooling solutions for a wide range of gear, from 'puter boxen to cars.  I know what i'm doing.  That usually helps.

Putting the radiators in the back is not an elegant option -- it's a design constraint.  If an engineer can't figure out how it's done, there are plenty of careers in ditch digging which remain open to him.

1.  The figures you quote for radiator heat dissipation are simply wrong.  A radiator core has three dimensions:  height, width, and DEPTH -- that is, how THICK the core is.  Your cm2 figure ignores that.  It also ignores the cooling fin design -- it is the surface area of those fins which counts, not the H x W of the radiator.  The volume of water that flows through the core & the design of the header tanks also factor in.  This, again, is elementary stuff, known by every child who played with "My First Watercooler."

2.  The twin, non-redundant power supplies are a waste of space & a sign of sloppy, afterthought engineering.  3 modules in the case?  2 power supplies?  One module gets one, and remaining two get the other?  Two power supplies to provide 750W?  Honest?
There are no *single* off-the-shelf PS which could handle 750W?  They had to enter into a contract with Sea Sonic to provide them with *TWO ANEMIC PSs per box"?  Really?

3.  As xstr8guy suggested above (not sure if he was joking, but wait...), even flipping the whole megillah around, so that the back of the case faces the front (becomes the intake side), would be a more elegant solution.  At least the case would only get the exhaust from the Rube Goldbergian twin PSs, not the full furnace blast of the 3 ASICs.

Finally, @itod:  There's nothing arrogant in what i say.  The problems with this "design" are obvious to a dull-normal 5-year-old, the same 5-year-old who can fire up her dad's CAD & really botch things up.

I'm not saying that the whole thing will fail on the merits of its cooling solution alone.  It likely won't -- 750W is not a huge amount of power to dissipate.  What i *am* saying is this:  If their cooling & packaging design is indicative of their ASIC skillz, what we have here is a giant fail. Smiley
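crumbs' point about depth and fins can be put in numbers. A rough sketch with purely illustrative core dimensions and fin pitch (nothing here is any specific product's geometry):

```python
# Frontal area vs. actual fin surface area of a finned radiator core.
# All dimensions are illustrative, not any specific product's.
width_cm, height_cm, depth_cm = 36.0, 12.0, 4.0   # triple-120-fan-sized core
fins_per_cm = 7                                    # roughly 18 fins per inch

frontal_area = width_cm * height_cm    # the H x W that a cm2-per-watt
                                       # rule of thumb counts: 432 cm2
n_fins = int(fins_per_cm * width_cm)
fin_area = n_fins * 2 * height_cm * depth_cm   # both faces of every fin

print(frontal_area, fin_area)   # the fin surface dwarfs the frontal area
```

With these numbers the wetted fin surface comes out dozens of times larger than the frontal H x W, which is why frontal-area rules of thumb are only crude proxies for a core's real dissipation.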
legendary
Activity: 1974
Merit: 1077
^ Will code for Bitcoins
I stand corrected, thanks.
legendary
Activity: 980
Merit: 1040
I haven't heard that other BTC rig manufacturers did.

legendary
Activity: 1974
Merit: 1077
^ Will code for Bitcoins
HF may have many sins, possibly including being late, but does anyone really think they were careless with something as simple as air-flow direction? Come on, HF did thermal simulations with heated bodies in place of the chips. They surely have someone who knows a thing or two about thermodynamics. Air-flow is the least of our worries.

No one ever went broke by banking on the stupidity of American people.  I won't be the first.
Your argument that you're "sure they know what they are doing" shows us one of two things:

1. That they do know what they're doing or
2.  That you're dead wrong.

You know which one i'm betting on, right? Smiley

Your arrogance is entertaining. My argument was not that I'm "sure they know what they are doing"; it was that they look like they know their way around thermodynamics, having done simulations I haven't heard of other BTC rig manufacturers doing. Doesn't it bother you to claim you know more about the issue than they do?
hero member
Activity: 784
Merit: 1000
0.75 btc per day for 400gh at the current diff

less than 0.5 btc per day for the next adjustment.

People will get a lot more btc for their money buying direct.
hero member
Activity: 784
Merit: 1004
Glow Stick Dance!
Hmm, maybe the illustrator mistakenly put the rack ears on the back of the case.  That would put the PSUs at the front of the case -- odd, but still functional.  A well-built, modern PSU doesn't exhaust much hot air.
donator
Activity: 1218
Merit: 1079
Gerald Davis
crumbs, there is a finite amount of available space in a rackmount unit.  Design a rackmount unit with better airflow, buy modules from HF in bulk, and then resell the package -- I think you will find it is harder than it looks.  Putting the radiator in the back would be an "easy" solution, except you only have 17" of width by 1.75" per U of height to work with.  If the power supply is mounted on the back and the radiator is mounted on the back, the radiator will be tiny -- too little surface area to effectively cool 750W+.  To keep delta-T under 10C over ambient you are going to need 1 to 2 cm2 of radiator surface area per watt (i.e. 750 to 1500 cm2 for a 750W heat load), even with pretty extreme airflow (3000 RPM pusher & puller fans).  There is only so much surface area on the back or front panel of a rackmount unit.

Sure, if you don't want to compromise, build a massively expensive 6U chassis with straight-through-flow power supplies and the entire rest of the back panel devoted to a radiator.  Of course, when you do so, you price yourself out of the market, and people will just buy the more economical solution from Hashfast or Cointerra.
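D&T's space constraint is easy to check against his own rule of thumb. A quick sketch, where the panel dimensions follow the standard 19-inch rack form factor and the 1 to 2 cm2 per watt figure is his:

```python
# How much radiator frontal area actually fits on a rackmount back panel?
CM_PER_IN = 2.54
panel_w = 17.0 * CM_PER_IN       # usable width of a 19" rack case, in cm
u_height = 1.75 * CM_PER_IN      # one rack unit of height, in cm

heat_w = 750
need_min = 1 * heat_w            # cm2 needed at the best-case 1 cm2/W

avail = {u: panel_w * u_height * u for u in (1, 2, 3)}
for u, area in avail.items():
    print(f"{u}U back panel: {area:.0f} cm2, enough? {area >= need_min}")
# even a full 3U back panel (~576 cm2) falls short of the 750 cm2 minimum
```

By this yardstick, a radiator confined to the back panel can't meet the rule of thumb until roughly 4U, which is the heart of his argument.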
full member
Activity: 210
Merit: 100
HF may have many sins, possibly including being late, but does anyone really think they were careless with something as simple as air-flow direction? Come on, HF did thermal simulations with heated bodies in place of the chips. They surely have someone who knows a thing or two about thermodynamics. Air-flow is the least of our worries.

No one ever went broke by banking on the stupidity of American people.  I won't be the first.
Your argument that you're "sure they know what they are doing" shows us one of two things:

1. That they do know what they're doing or
2.  That you're dead wrong.

You know which one i'm betting on, right? Smiley
legendary
Activity: 1974
Merit: 1077
^ Will code for Bitcoins
HF may have many sins, possibly including being late, but does anyone really think they were careless with something as simple as air-flow direction? Come on, HF did thermal simulations with heated bodies in place of the chips. They surely have someone who knows a thing or two about thermodynamics. Air-flow is the least of our worries.
full member
Activity: 210
Merit: 100
As far as PSs being fine with sucking on hot air, i'm not so sure.  Making them do it just because people were too lazy to think about proper air management is absurd.

ATX PSUs are designed to intake heated case air.  It isn't a matter of being too lazy.  A rackmount case is only 17.5" wide, and a power supply is 3.4" wide.  You would need to use a more expensive server power supply (designed for front-to-back airflow), and you would still lose 20% of your front surface area.  Removing 750W+ by water cooling is no small task, and that means a relatively large radiator surface area.

Sure, you could make the case larger (6U+) or put only 2 Sierras per case, but those would be inferior choices IMHO.

Power supplies are designed to work at high ambient temps.  Both servers and PCs intake cooling air from inside the case.  Some high-end PCs allow flipping the PSU to draw outside air, but that is the exception, not the rule.  SeaSonic puts a 5-yr warranty on their power supplies, knowing that 90%+ of the time they will draw in heated case air.  They are designed to handle it.  It has actually become an easier engineering challenge: as PSUs become more efficient, they shed less heat.

On the off-chance you're seriously missing my point, i'll try again.
ATX power supplies are designed to function within their datasheet specs.  Depending on those specs, they may or may not function reliably sucking hot air, but they will certainly have a slimmer thermal margin & a shorter MTBF.  Debating this is silly.

Power supplies inside of PCs do not suck on hot exhaust air.  The exhaust air from a typical GPU, for instance, is exhausted through the mounting bracket & out of the case.  Exhaust from a properly mounted CPU cooler is also aimed at the back of the case & is evacuated, often aided by an additional fan in the rear of the case.  In Hashfast's case, *all of the air inside the case is hot exhaust*.  All of it.

The picture used earlier in this thread, with yellow highlighter, is a picture familiar to every child who has assembled  a computer.  It is an example of *WHAT NOT TO DO*.  It shows a CPU cooler exhausting into the intake of the PS.
The caption beneath the picture reads: DO. NOT. WANT.
This is really basic stuff, not open for debate.

Finally, it is plain stupid to have radiators exhausting inside the case.  That's why it is never done.  Scratch that, it's done by Hashfast.

Buy a water cooling kit.
See if it exhausts inside the case.  If it appears that it does, go back, read the instructions, and correct.
Now that you're done, and the radiator exhaust is aimed the right way, ask yourself why Hashfast didn't spend the same amount of time reading instructions.
Fini.

Edit:  Please examine the pic of a dual P3 i took the time to post for you.  Tell me if you feel that the power supply in that ancient box is sucking on hot air.
donator
Activity: 1218
Merit: 1079
Gerald Davis
As far as PSs being fine with sucking on hot air, i'm not so sure.  Making them do it just because people were too lazy to think about proper air management is absurd.

ATX PSUs are designed to intake heated case air.  It isn't a matter of being too lazy.  A rackmount case is only 17.5" wide, and a power supply is 3.4" wide.  You would need to use a more expensive server power supply (designed for front-to-back airflow), and you would still lose 20% of your front surface area.  Removing 750W+ by water cooling is no small task, and that means a relatively large radiator surface area.

Sure, you could make the case larger (6U+) or put only 2 Sierras per case, but those would be inferior choices IMHO.

Power supplies are designed to work at high ambient temps.  Both servers and PCs intake cooling air from inside the case.  Some high-end PCs allow flipping the PSU to draw outside air, but that is the exception, not the rule.  SeaSonic puts a 5-yr warranty on their power supplies, knowing that 90%+ of the time they will draw in heated case air.  They are designed to handle it.  It has actually become an easier engineering challenge: as PSUs become more efficient, they shed less heat.
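That last point, that efficiency gains shrink the PSU's own heat load, is simple arithmetic. The efficiency values below are the usual 80 PLUS ballparks, not figures from any SeaSonic datasheet:

```python
# Waste heat dissipated inside a PSU = output power * (1/efficiency - 1)
def psu_waste_heat(output_w, efficiency):
    return output_w * (1 / efficiency - 1)

# Typical 80 PLUS ballparks (not SeaSonic datasheet numbers)
for label, eff in [("Bronze ~85%", 0.85), ("Platinum ~92%", 0.92)]:
    print(label, round(psu_waste_heat(750, eff)), "W")
```

At 750 W of output, moving from ~85% to ~92% efficiency roughly halves the heat the PSU itself has to dump into the case air, which is why higher-efficiency units tolerate warm intake air more gracefully.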