
Topic: BFL announces 28nm 600GH/S blade for $4680 - page 31.

full member
Activity: 146
Merit: 100
This could be a good thing or a bad thing depending on whether BFL is actually insolvent right now or not.

They could have run out of money and are trying to fund old pre-orders with new pre-order money. If that's the case, I see prison cells in BFL managers' futures, since it's a blatantly Ponzi-esque tactic that will collapse in on itself eventually.

However, think about the people who ordered after the price of BFL units doubled. They will likely never see a return on investment on their order anyway, so switching to the new product queue and being at the head of that line actually makes a ton of sense for them. This only holds true as long as BFL has enough money to stay afloat without dipping into future pre-order money to fund old pre-orders' production, though.

With their history, though, I would certainly never pay for a pre-order in Bitcoin. For a product in hand that other people have received from a seller with no problem, Bitcoin works great, but for a pre-order the lack of protection for buyers is very worrying.
full member
Activity: 227
Merit: 100
I think around 1 W/GH is a more reasonable estimate; that is the number I will be running with for future energy-consumption estimates.  0.58 (BFL site) or 0.77 (Nasser) just seems too optimistic given the history of both BFL and other companies.

Will

Every opinion is respected. I'm happy we were able to resolve even the most challenging issues we encountered in the past. We will always look forward to making better products for our customers.


Regards,
Nasser
hero member
Activity: 798
Merit: 531
Crypto is King.
It's interesting this was 'leaked' with no official announcement.
legendary
Activity: 1890
Merit: 1003
Regarding power consumption, Radeon 5970 and 5870 both consume more power than our card does, the very reason we took this design approach.

Yeah, but what about when the power doubles from your pre-fab estimates, like every other chip you've built?

Due to the double node jump, the max power should be 0.77 W/GH (3.1 W/GH divided by 4). Based on everything we know from the chip industry (FPGA, CPU, GPU, etc.), that should be the ceiling in power consumption.
Regards,
Nasser

HAHAHAHAHAH.  WHAT!? The calculations for transistor gate energy are pretty complicated, and they certainly aren't linear with respect to surface area. And at 28nm you have more leakage, which leads to more wasted energy as well.

I think this shows you have absolutely no idea what you're talking about.
Where is TheFiend when you need him?

I would love to hear his input on this and many other crazy claims by BFL.
hero member
Activity: 574
Merit: 500


Really?  Why would it be PCIe x16?  I get the idea of making it PCIe so standard motherboards can be used for dense rackmounting.  But why x16?  Using an x1 slot would allow more cards per motherboard.

People still have those riser cables from GPU mining rigs Grin

Three of these cards would make 1.8 TH/s, almost 1000x faster than a 2.1 GH/s 3x5970 rig. Is that really possible? Both are 28nm tech, so why does KnCMiner consume 1 kW for 400 GH/s while BFL claims to be 4x more efficient???

Anyway, I think this is targeting the right users; they know the history and culture of mining.

After some deep deliberation of about 30 seconds, these thoughts came up:

1) Even @ 28nm and 600GH this form factor seems impossible on a single card with a max of 250 watts (anybody who knows more than I care to comment??)
2) My spider senses are telling me this seems like the last throw of the dice
3) If BFL are saying ~ a year, then the translation means NEVER.... as we know that 2 weeks means 6-9 months
4) Funny that the probation is coming to an end and this would be a nice going-away present
5) An old one but a good one... if it seems too good to be true, then it is
6) Did they not do the same thing when the FPGA market was just starting to get going, by announcing ASICs and basically trying to kill the market & competition with VAPOURWARE... look where we are now, ~14 months on

I really don't know what to say, but they will have a fight on their hands Cheesy
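
For what it's worth, here is a quick back-of-the-envelope check of the KnC comparison quoted above. It uses only the figures advertised by the two vendors (none of them are measured numbers), so it only shows how large the claimed efficiency gap is, not whether it is achievable.

Code:
# Claimed efficiency gap between KnC and the announced BFL card.
# Both inputs are vendor-advertised figures quoted in this thread.
knc_w_per_gh = 1000 / 400   # KnC: ~1 kW for 400 GH/s  -> 2.5 W/GH
bfl_w_per_gh = 350 / 600    # BFL blade: 350 W for 600 GH/s -> ~0.58 W/GH

print(f"KnC (claimed): {knc_w_per_gh:.2f} W/GH")
print(f"BFL (claimed): {bfl_w_per_gh:.2f} W/GH")
print(f"BFL would need to be ~{knc_w_per_gh / bfl_w_per_gh:.1f}x more "
      f"efficient on the same 28nm node")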
sr. member
Activity: 322
Merit: 250
I think I got it all filled in. You might see your own quote from this thread in there =)

http://www.youtube.com/watch?v=4jYNMKdv36w
full member
Activity: 238
Merit: 100
Regarding power consumption, Radeon 5970 and 5870 both consume more power than our card does, the very reason we took this design approach.

Yeah, but what about when the power doubles from your pre-fab estimates, like every other chip you've built?

Due to the double node jump, the max power should be 0.77 W/GH (3.1 W/GH divided by 4). Based on everything we know from the chip industry (FPGA, CPU, GPU, etc.), that should be the ceiling in power consumption.
Regards,
Nasser

HAHAHAHAHAH.  WHAT!? The calculations for transistor gate energy are pretty complicated, and they certainly aren't linear with respect to surface area. And at 28nm you have more leakage, which leads to more wasted energy as well.

I think this shows you have absolutely no idea what you're talking about.
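
For anyone wondering why halving the feature size twice doesn't automatically divide power by four: below is a very rough first-order CMOS power sketch. Every number in it is an invented placeholder chosen only to show the shape of the relationship (dynamic power scales with switched capacitance, voltage squared, and frequency, while leakage tends to grow at smaller nodes); none of these values are BFL or foundry figures.

Code:
# Rough first-order CMOS power model.  Placeholder numbers only --
# this illustrates why "divide by 4" is not automatic,
# it is not an estimate of any real chip.

def chip_power(c_switched_nf, vdd, freq_mhz, activity, i_leak_a):
    """Total power = dynamic + static (leakage).

    dynamic = activity * C * Vdd^2 * f   (shrinks as C and Vdd scale down)
    static  = Vdd * I_leak               (tends to GROW at smaller nodes)
    """
    dynamic = activity * (c_switched_nf * 1e-9) * vdd**2 * (freq_mhz * 1e6)
    static = vdd * i_leak_a
    return dynamic + static

# Hypothetical "old node" chip: 100 nF switched capacitance, 1.2 V, 250 MHz.
old = chip_power(100, 1.2, 250, activity=0.2, i_leak_a=0.5)

# Same design after a shrink: capacitance and voltage drop somewhat,
# but leakage current rises -- total power does not simply divide by 4.
new = chip_power(60, 1.0, 250, activity=0.2, i_leak_a=1.5)

print(f"old node: {old:.1f} W, new node: {new:.1f} W, ratio: {old / new:.2f}x")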
hero member
Activity: 767
Merit: 500
I think around 1 W/GH is a more reasonable estimate; that is the number I will be running with for future energy-consumption estimates.  0.58 (BFL site) or 0.77 (Nasser) just seems too optimistic given the history of both BFL and other companies.

Will
legendary
Activity: 966
Merit: 1000
I am glad I mine Litecoin... every other day BTC miners are getting screwed over. First CPU, then GPU, then FPGA, then ASIC, then USB ASIC, then KnC ASIC, then BFL's supposed super ASIC... Man, I would be a nervous wreck wondering when I would get my money back and whether I would have a paperweight in a month or not. I know Bitcoin mining has been great to a few, but it is a mess at this point and getting more and more consolidated.
hero member
Activity: 798
Merit: 531
Crypto is King.


Who's next?
legendary
Activity: 1890
Merit: 1003
This is hilarious... I actually have some BFL hardware, and even I think this means they are finished.  They are obviously trying to move payments to a medium that can't be refunded (bitcoin or wire transfer).  They are using a design which a lot of us *know* cannot dissipate that much heat, and they are moving at a snail's pace with current orders.

Guys... I've never said this before, but I believe they are on the verge of folding and taking the money of anyone who pre-orders down with them..

Regarding power consumption, Radeon 5970 and 5870 both consume more power than our card does, the very reason we took this design approach.

The TDP of the 5970 is 294W and of the 5870 is 224W.   The card is reported to be 350W, which is significantly higher, not lower.

Also, this pretends away the challenges of the form factor and ignores that it was AMD, with three decades of experience, delivering their 12th-generation graphics card family.

AMD's (ATI's) first graphics card looked like this and consumed 10W.  Even AMD isn't immune to the challenges of working in a compact, unforgiving form factor; the 7990 (375W TDP) was delayed by six months due to power/thermal issues that they found challenging to resolve.


While 350W is possible in that form factor, one would have to be willing to bet that, unlike every other time, the simulations aren't lower than reality AND that the company doesn't run into any cooling or power problems due to the high energy density.   350W is 0.6 W/GH.  BFL's current chips are 3.1 W/GH, correct?  A die shrink generally cuts power consumption by 40% (you stated upthread or in another thread that it can be up to 60%, but that would be rather optimistic, don't you think?). 28nm is two die shrinks from the current chip.  So 3.1 W/GH * 0.6 * 0.6 = 1.1 W/GH.  If the chips were identical with only a die shrink we would be looking at 1.1 W/GH (660W for this card).  Now, you did indicate you optimized the chip, but that is a rather significant optimization, wouldn't you say?  Nearly an 86% (1.1/0.6) improvement in performance per watt beyond the die shrink.  Intel is happy with a 5% to 10% improvement in performance per watt (outside of die shrinks).

Given the aggressive improvement in performance per watt necessary, combined with the lack of any headroom (if it misses by even 20%, it can't be cooled in that form factor at that speed), it would need to be a nearly flawless design and execution from start to finish.   It certainly "can" be done, but given BFL's past promises on power and cooling, one would be betting that "this one will be different".
LMAO, I had that video card. I didn't even know it was an ATI card. It was....EGA I think. Way back then....
full member
Activity: 227
Merit: 100
This is hilarious... I actually have some BFL hardware, and even I think this means they are finished.  They are obviously trying to move payments to a medium that can't be refunded (bitcoin or wire transfer).  They are using a design which a lot of us *know* cannot dissipate that much heat, and they are moving at a snail's pace with current orders.

Guys... I've never said this before, but I believe they are on the verge of folding and taking the money of anyone who pre-orders down with them..

Regarding power consumption, Radeon 5970 and 5870 both consume more power than our card does, the very reason we took this design approach.

The TDP of the 5970 is 294W and of the 5870 is 224W.   The card is reported to be 350W, which is significantly higher, not lower.
Still, this is AMD, and even with three decades of experience the 7990 (375W TDP) was delayed by six months due to power/thermal issues that they found challenging to resolve.

While 350W is possible in that form factor, one would have to be willing to bet that
a) BFL hasn't been overly optimistic in power simulations (unlike every other product in the past).  350W is cutting it close; 400W would be nearly impossible.
b) BFL doesn't run into any cooling or power problems due to the high energy density, something that has plagued even veteran companies like AMD and NVIDIA.

As for 350W being realistic: well, it is 0.6 W/GH.  BFL's current chips are 3.1 W/GH.  A die shrink generally cuts power consumption by 40% (you stated upthread up to 60%, but that would be rather optimistic, don't you think?). 28nm is two die shrinks from the current chip.  So 3.1 W/GH * 0.6 * 0.6 = 1.1 W/GH.  Ouch.  1.1 * 600 = 660W.  Now, you did indicate you optimized the chip, but that is a rather significant optimization, wouldn't you say?  Nearly an 86% improvement in performance per watt.  Intel is happy with a 5% to 10% improvement in performance per watt (outside of die shrinks).

Given the aggressive improvement in performance per watt necessary, combined with the lack of any headroom (if it misses by even 20%, it can't be cooled in that form factor at that speed), it would need to be a nearly flawless design and execution from start to finish.   It certainly "can" be done, but given BFL's past promises on power and cooling, one would be betting that "this one will be different".

Regarding the 5970 and 5870, it was my mistake looking at some charts (I'm not good with GPUs generally); what I meant was the 6990. The actual design was modified: stray capacitance and flip-flop instability were resolved (which were causing the majority of the consumption). The numbers we have are lower, and were reported here with margin. The migration between Stratix III and Arria II GX (original Single vs. MiniRig FPGA cards) proved a 50% reduction in power. As both were doing SHA256, taking 50% for this case would be a reasonable number. Including the corrections made to stray capacitance and flip-flop instability, the final figures arrive at an even lower number. The number announced by us is the worst-case scenario based on what we have in our hands; however, as stated, these are estimations.


Regards,
Nasser

hero member
Activity: 767
Merit: 500
Due to the double node jump, the max power should be 0.77 W/GH (3.1 W/GH divided by 4). Based on everything we know from the chip industry (FPGA, CPU, GPU, etc.), that should be the ceiling in power consumption.

Hi Nasser,

Brave of you to dive into the BCT feeding frenzy!

I'm curious about your numbers - 0.77 * 600 = 462W - how come it says 350W on the website?

Will
hero member
Activity: 574
Merit: 500
Jesus fucking Christ. Here we go again.

Christ? Yeah good point! Where is Christ Vleisides (whose name is on the incorp docs).

Can't find him here http://www.butterflylabs.com/management/

It's his uncle or something
donator
Activity: 1218
Merit: 1079
Gerald Davis
This is hilarious... I actually have some BFL hardware, and even I think this means they are finished.  They are obviously trying to move payments to a medium that can't be refunded (bitcoin or wire transfer).  They are using a design which a lot of us *know* cannot dissipate that much heat, and they are moving at a snail's pace with current orders.

Guys... I've never said this before, but I believe they are on the verge of folding and taking the money of anyone who pre-orders down with them..

Regarding power consumption, Radeon 5970 and 5870 both consume more power than our card does, the very reason we took this design approach.

The TDP of the 5970 is 294W and of the 5870 is 224W.   The card is reported to be 350W, which is significantly higher, not lower.

Also, this pretends away the challenges of the form factor and ignores that it was AMD, with three decades of experience, and that the HD 5000 series was their 12th generation of graphics cards. AMD/ATI's first graphics card looked like this and consumed 10W.


Even AMD isn't immune to the challenges of working in a compact, unforgiving form factor; the 7990 (375W TDP) was delayed by six months due to power/thermal issues that they found challenging to resolve.


While 350W is possible in that form factor, one would have to be willing to bet that, unlike every other time, the simulations aren't lower than reality AND that the company doesn't run into any cooling/power problems due to the extremely high energy density.   As for 350W being conservative?  I don't see it.  350W is 0.6 W/GH.  BFL's current chips are 3.1 W/GH, correct?  A die shrink conservatively means at best a 40% reduction in power (miners tend to be always on, so we are really only interested in active load).  28nm is two die shrinks from the current chip.  So 3.1 W/GH * 0.6 * 0.6 = 1.1 W/GH.  If the new chip were just a die shrink of the current one (Intel's tick/tock strategy) we would be looking at 1.1 W/GH (660W for this card).  You stated you will both shrink and optimize (something Intel split up to reduce risk), but that is a rather significant optimization, wouldn't you say?  Nearly an 86% (1.1/0.6) improvement in performance per watt beyond what is gained from the die shrink.  Intel (that small rookie ASIC designer) is happy with a 10% improvement in performance per watt from architectural changes.

Given the aggressive improvement in performance per watt necessary, combined with the lack of any headroom (if it misses by even 20%, it can't be cooled in that form factor at that speed), it would need to be a nearly flawless design and execution from start to finish.   It certainly "can" be done (it isn't beyond the theoretical limits of silicon on forced-air cooling), but given BFL's past promises on power and cooling, one would be betting that "this one will be different".
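
To make the arithmetic above easy to re-run against Nasser's version, here is a small sketch. The per-shrink reduction factors (40% per Gerald Davis's post above vs. roughly 50% per Nasser's reply) are exactly the two assumptions being argued over in this thread, not measured data; every other number is taken from figures quoted in the posts above.

Code:
# Projected board power after two node jumps, under the two competing
# per-shrink assumptions from this thread.  Not measured data.

CURRENT_W_PER_GH = 3.1    # BFL's shipping chips, per the post above
CARD_HASHRATE_GH = 600    # advertised hash rate of the new card
ADVERTISED_WATTS = 350    # advertised board power
NODE_SHRINKS = 2          # both posts treat 28nm as two node jumps away

def projected_watts(reduction_per_shrink):
    """Board power if W/GH falls by `reduction_per_shrink` each shrink
    and nothing else about the design changes."""
    w_per_gh = CURRENT_W_PER_GH * (1 - reduction_per_shrink) ** NODE_SHRINKS
    return w_per_gh * CARD_HASHRATE_GH

for label, reduction in [("Gerald Davis, 40% per shrink", 0.40),
                         ("Nasser, 50% per shrink", 0.50)]:
    watts = projected_watts(reduction)
    print(f"{label}: {watts / CARD_HASHRATE_GH:.2f} W/GH -> {watts:.0f} W")

print(f"Advertised: {ADVERTISED_WATTS / CARD_HASHRATE_GH:.2f} W/GH "
      f"-> {ADVERTISED_WATTS} W")

# The 40% case gives 1.116 W/GH (rounded to 1.1, ~660 W, in the post above);
# the divide-by-four case gives 0.775 W/GH (~465 W), still above 350 W.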
hero member
Activity: 854
Merit: 1000
This really is the endgame, guys.  They're trying to get a little extra (irreversible) cash out of their marks.

I feel like this is similar to the Pirate situation.  Everyone was SURE that he would repay... Well, at least until months after he didn't.
full member
Activity: 238
Merit: 100
Insane. Having a PCIe card manufactured for you by a 3rd party should be pretty easy, but they say they're going to be using their own idiotically slow production facilities.  WTF?

The thing is, a PCIe mining card might make some sense; you can take advantage of infrastructure that already exists.  But 350W? In that tiny case?  WTF?

Avalon, and now KnC and HashFast, are all using huge boxes for their chips in order to cram in enough cooling power.  The heatsink KnC is using for one of its chips probably has 4x the mass you could fit in that card, and the Jupiter will come with 4 of them.

HashFast is using water cooling to remove heat from its single 350W chip, and they have a full-sized PC case for just one chip.

It's total insanity.

My guess is that 1) they didn't want a smaller unit because they wanted to force people to buy something that was more expensive than everything but the Mini Rig. That way, if anyone who'd bought less than two 50 GH/s units (looking at their current prices) wanted to transfer, they'd still need to actually fund their order with more cash.

If they were selling a 100W, 200GH/s card for $1500 then people might only buy one and want the rest of their money back as a refund if they transferred their orders.

I'm 100% sure they're fucked and this is just an attempt to push problems off into the future and give themselves a few more months of breathing room.  Their "orders will be done by September" because so many users will "upgrade" and start waiting for BFL's undoubtedly epically delayed next product.

All you have to do is read Josh's comments to get an idea of how little they give a shit about shipping late.  They clearly don't view it as a priority or something they need to worry about so why would anyone expect them to even bother trying to ship this on time?
legendary
Activity: 1890
Merit: 1003
legendary
Activity: 3430
Merit: 3080
At least they make up for their numerous shortcomings with pure comedy gold  Cheesy

I'm just picturing the brainstorming that came up with this:

Sonny "But guys, we haven't even got the money to produce mock-up box-o'-fans photos? No-one believes anything without glossy publicity shots!?!?"
Josh "How long does it take to type "generic video card stock photo" into image search?"
legendary
Activity: 1890
Merit: 1003
Even their lead engineer thinks that this is possible and talks as if they already had a prototype.... (Tapeout is in August for those who didn't bother to read)

 Assuming this is the same engineer that estimated the prior power requirements for their presently shipping product, I'm not sure how anyone can believe anything being reported prior to having a physical working product.

 Again, I am using well-documented history as an indicator.

Right, last time the estimate was 0.8 to 1.2 watts per GH/s, tops.

Turned out to be anywhere from 1.6 watts to 6 watts per GH/s. [in reality]

This underestimation caused the entire platform to have to be redesigned and caused immense delays of nearly a year!