
Topic: BFL announces 28nm 600GH/S blade for $4680 - page 29. (Read 41048 times)

legendary
Activity: 1190
Merit: 1000
Perhaps he was watching this while posting his responses:
http://www.youtube.com/watch?v=4jYNMKdv36w
legendary
Activity: 1904
Merit: 1007
This is hilarious... I actually have some BFL hardware, and even I think this means they are finished. They are obviously trying to move payments to a medium that can't be refunded (bitcoin or wire xfer). They are using a design which a lot of us *know* cannot dissipate that much heat, and they are moving at a snail's pace with current orders.

Guys... I've never said this before, but I believe they are on the verge of folding and taking the money of anyone who pre-orders with them.

Regarding power consumption, the Radeon 6990 consumes more power than our card does, the very reason we took this design approach.


Regards,
Nasser

EDIT: Corrected '5970' and '5870' to 6990

Simple mistake for a Persian engineer to make late at night in France on a Saturday.

It's 3:00 AM Paris time, and I do enjoy reading threads usually.

Nasser

But you don't enjoy answering my question: what were your power estimates for your current ASIC chip?
full member
Activity: 238
Merit: 100
On the business aspect of this - I'm guessing a lot of the "preorder money" started flowing into KnC and HashFast. If BFL still had orders coming in, I'd guess that has finally stopped.

Still, whatever you want to say about HashFast - I'm not sure they'll deliver on time and I think their prices are too high (although the miner protection plan, if they can pull it off, ameliorates that somewhat). At least with HashFast they're telling you what company they have making the chip. It's Uniquify. And KnC is working with ORSoC and eASIC.

BFL isn't telling you who's making their chips.  I'm guessing there's an NDA and if it's a legit company it may be they don't even want their name out there associated with BFL Grin.  I sure as hell wouldn't.

But still, even though ORSoC and Uniquify aren't taking financial risks with these chips - their names are still on them. If the chips fail spectacularly, those companies will have a major public failure.  If they succeed everyone will see that they're able to produce spectacularly on crazy-tight schedules.

So, I'm confident that the chips will be delivered on (or close to) spec and on (or close to) deadline. That doesn't mean I think HashFast will actually be able to get them all in boxes and shipped on time, though. That's where Avalon actually had issues. KnC will have more time to get everything prepared.

And of course, both HashFast and KnC are using huge boxes with plenty of room for tons of heatsink surface area and lots of fans. A KnC Jupiter will ship with fans rated for 1.2kW of heat dissipation, and HashFast will ship with a waterblock probably capable of about the same; individual waterblocks can remove 1.2kW as well.

But fans in that card-sized configuration definitely cannot remove 350W of heat. GPUs over a certain threshold need 3-slot widths and giant triple fans just to stay cool.

The only way I see this working is if they use waterblocks.

Anyway, you can see the wages of shipping late here. In order to even HOPE to get sales, they have to way under-price their competition. Delivering late is far worse than never having delivered at all.

Had they shipped on time, long ago, they'd be able to simply lower the prices on their 65nm chips, as their production costs would have gone way down by now.
full member
Activity: 238
Merit: 100
All those lemmings that will be cancelling their orders to upgrade  Tongue

I would consider you a bigger lemming for sticking it out this long in the face of skyrocketing difficulty and a completely shady company that has failed to deliver on its promises time and time again, for over a year.
newbie
Activity: 38
Merit: 0
This is hilarious... I actually have some BFL hardware, and even I think this means they are finished. They are obviously trying to move payments to a medium that can't be refunded (bitcoin or wire xfer). They are using a design which a lot of us *know* cannot dissipate that much heat, and they are moving at a snail's pace with current orders.

Guys... I've never said this before, but I believe they are on the verge of folding and taking the money of anyone who pre-orders with them.

Regarding power consumption, Radeon 5970 and 5870 both consume more power than our card does, the very reason we took this design approach.

TDP of the 5970 is 294W and the 5870 is 224W. The card is reported to be 350W, which is significantly higher, not lower.

Also, this pretends away the challenges of the form factor and ignores that this was AMD, with nearly three decades of experience; the HD 5000 series was their 12th generation of graphics cards. AMD/ATI's first graphics card looked like this and consumed 10W.
http://upload.wikimedia.org/wikipedia/commons/thumb/a/ad/ATI_Hercules_Card_1986.xcf/690px-ATI_Hercules_Card_1986.xcf.png

Even AMD isn't immune to the challenges of working in a compact, unforgiving form factor: the 7990 (375W TDP) was delayed by six months due to power/thermal issues that they found challenging to resolve.


While 350W is possible in that form factor, one would have to bet that, unlike every other time, the simulations aren't lower than reality, AND that the company doesn't run into any cooling/power problems from the extremely high energy density.

As for 350W being conservative? I don't see it. 350W at 600 GH/s is 0.6 W/GH. BFL's current chips are 3.1 W/GH, correct? A die shrink conservatively means at best a 40% reduction in power (miners tend to be always on, so we are really only interested in active load), and 28nm is two die shrinks from the current chip. So 3.1 W/GH * 0.6 * 0.6 = 1.1 W/GH. If the new generation were just a die shrink (Intel's tick/tock strategy), we would be looking at 1.1 W/GH (660W for this card). You stated you will both shrink and optimize (something Intel split up precisely to reduce risk), but that is a rather significant optimization, wouldn't you say? Nearly an 86% (1.1/0.6) improvement in performance per watt beyond what is gained from the die shrink. Intel (that small rookie ASIC designer) is happy with a 10% improvement in performance per watt from architectural changes.

Given the aggressive improvement in performance per watt necessary, combined with the lack of any headroom (if it misses by even 20%, it can't be cooled in that form factor at that speed), it would need to be a nearly flawless design and execution from start to finish. It certainly "can" be done (it isn't beyond the theoretical limits of silicon on forced-air cooling), but given BFL's past promises on power and cooling, one would be betting that "this one will be different".
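The napkin math above can be sanity-checked in a few lines of Python. Note that the 40%-per-shrink figure is the post's own assumption, not a measured number:

```python
# Sanity check of the die-shrink estimate argued above.
# Assumption (from the post): each full node shrink cuts active
# power by at best ~40%, i.e. new power = 0.6x old.

current_w_per_gh = 3.1     # BFL's 65nm chips, per the thread
shrink_factor = 0.6        # best-case multiplier per shrink
shrinks = 2                # 65nm -> 28nm treated as two shrinks

shrink_only = current_w_per_gh * shrink_factor ** shrinks
claimed = 350 / 600        # BFL's advertised W/GH for this card

print(f"shrink-only estimate: {shrink_only:.2f} W/GH")  # ~1.12
print(f"advertised figure:    {claimed:.2f} W/GH")      # ~0.58

# The gap is what architectural "optimization" alone would have
# to deliver -- roughly the 86% cited above once both figures are
# rounded to 1.1 and 0.6.
print(f"gap: {shrink_only / claimed:.2f}x")
```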

Give Nasser a break, for he's not quite up to speed. Remember, he was on vacation in Rome at the end of November and early December, on the pre-orderers' dime. He just came outta the woodwork to address a couple of concerns about BFL's new product line, to coincide with the recent video and Josh coming back to play with us naysayers after being away for a week. After today, Nasser will go back to his French cave for several months.

This new revelation is sick on so many levels.

Madness!

+1

Madness!
full member
Activity: 238
Merit: 100
what a joke.
newbie
Activity: 13
Merit: 0
Due to the double node jump, the max power should be 0.77 W/GH (3.1 W/GH divided by 4). Based on everything we know from the chip industry (FPGA, CPU, GPU, etc.), that should be the ceiling on power consumption.

Hi Nasser,

Brave of you to dive into the BCT feeding frenzy!

I'm curious about your numbers - 0.77 * 600 = 462W - how come it says 350W on the website?

What he is saying is that the theoretical power-efficiency gain from 65nm to 28nm alone should guarantee 0.77 W/GH. But he also said they improved the design (maybe fewer transistors, a "sea-of-hashers" design like Bitfury's, etc.), so it should be even lower than 0.77 W/GH.

There's no way that's true.  Check out this paper: Power Consumption in CMOS VLSI chips

It mostly comes down to the gate capacitance. I'm not an IC engineer, but my understanding is that when you have a closed transistor with +V on one side and -V on the other, the charges on the two sides end up forming an incredibly tiny capacitor. So even though the transistor is in the 'off' state, current will still flow a tiny bit, just like it can flow through an uncharged capacitor.

And the thing is, as feature sizes get smaller and smaller, the ratio between surface area and volume goes up, and the gap between the gate and drain gets smaller as well. And of course you have more transistors.

And I also think gate leakage is higher with smaller nodes as well.

On the other hand, the voltage can be a lot lower.

Either way, claiming you'll have a straight linear relationship between feature area and power seems kind of ridiculous to me. I guess we'll see.
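For the curious, the first-order relation behind all of this is the classic CMOS switching-power formula, P ≈ αCV²f. A toy sketch (every number below is an illustrative placeholder, none of it is BFL's data) shows why a shrink helps without power tracking feature size linearly:

```python
# First-order CMOS switching power: P ~= alpha * C * V^2 * f.
# All numbers here are illustrative placeholders, not BFL figures.

def dynamic_power(alpha, cap_f, volts, freq_hz):
    """Activity factor * switched capacitance * V^2 * frequency."""
    return alpha * cap_f * volts ** 2 * freq_hz

# A made-up 65nm-class hashing core:
p65 = dynamic_power(alpha=0.15, cap_f=2e-9, volts=1.2, freq_hz=250e6)

# The same design shrunk to 28nm: less capacitance, lower voltage,
# higher clock. Leakage (not modelled here) grows at the smaller
# node and claws back part of this gain, which is the poster's point.
p28 = dynamic_power(alpha=0.15, cap_f=0.9e-9, volts=0.9, freq_hz=400e6)

print(f"65nm-ish core: {p65:.3f} W, 28nm-ish core: {p28:.3f} W")
```

Because voltage enters squared and leakage isn't captured at all, the real scaling between nodes is messier than any single multiplier.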

Don't sweat the little stuff. This is just the initial announcement. We will take lots of pre-orders first. Then we will get one or two built and say, oops, missed our power estimates. Then we will redesign, double the price, and sell more. Rinse, repeat. Wink
legendary
Activity: 1918
Merit: 1570
Bitcoin: An Idea Worth Spending
This is hilarious... I actually have some BFL hardware, and even I think this means they are finished. They are obviously trying to move payments to a medium that can't be refunded (bitcoin or wire xfer). They are using a design which a lot of us *know* cannot dissipate that much heat, and they are moving at a snail's pace with current orders.

Guys... I've never said this before, but I believe they are on the verge of folding and taking the money of anyone who pre-orders with them.

Regarding power consumption, Radeon 5970 and 5870 both consume more power than our card does, the very reason we took this design approach.

TDP of the 5970 is 294W and the 5870 is 224W. The card is reported to be 350W, which is significantly higher, not lower.

Also, this pretends away the challenges of the form factor and ignores that this was AMD, with nearly three decades of experience; the HD 5000 series was their 12th generation of graphics cards. AMD/ATI's first graphics card looked like this and consumed 10W.


Even AMD isn't immune to the challenges of working in a compact, unforgiving form factor: the 7990 (375W TDP) was delayed by six months due to power/thermal issues that they found challenging to resolve.


While 350W is possible in that form factor, one would have to bet that, unlike every other time, the simulations aren't lower than reality, AND that the company doesn't run into any cooling/power problems from the extremely high energy density.

As for 350W being conservative? I don't see it. 350W at 600 GH/s is 0.6 W/GH. BFL's current chips are 3.1 W/GH, correct? A die shrink conservatively means at best a 40% reduction in power (miners tend to be always on, so we are really only interested in active load), and 28nm is two die shrinks from the current chip. So 3.1 W/GH * 0.6 * 0.6 = 1.1 W/GH. If the new generation were just a die shrink (Intel's tick/tock strategy), we would be looking at 1.1 W/GH (660W for this card). You stated you will both shrink and optimize (something Intel split up precisely to reduce risk), but that is a rather significant optimization, wouldn't you say? Nearly an 86% (1.1/0.6) improvement in performance per watt beyond what is gained from the die shrink. Intel (that small rookie ASIC designer) is happy with a 10% improvement in performance per watt from architectural changes.

Given the aggressive improvement in performance per watt necessary, combined with the lack of any headroom (if it misses by even 20%, it can't be cooled in that form factor at that speed), it would need to be a nearly flawless design and execution from start to finish. It certainly "can" be done (it isn't beyond the theoretical limits of silicon on forced-air cooling), but given BFL's past promises on power and cooling, one would be betting that "this one will be different".

Give Nasser a break, for he's not quite up to speed. Remember, he was on vacation in Rome at the end of November and early December, on the pre-orderers' dime. He just came outta the woodwork to address a couple of concerns about BFL's new product line, to coincide with the recent video and Josh coming back to play with us naysayers after being away for a week. After today, Nasser will go back to his French cave for several months.

This new revelation is sick on so many levels.

Madness!
full member
Activity: 238
Merit: 100
Due to the double node jump, the max power should be 0.77 W/GH (3.1 W/GH divided by 4). Based on everything we know from the chip industry (FPGA, CPU, GPU, etc.), that should be the ceiling on power consumption.

Hi Nasser,

Brave of you to dive into the BCT feeding frenzy!

I'm curious about your numbers - 0.77 * 600 = 462W - how come it says 350W on the website?

What he is saying is that the theoretical power-efficiency gain from 65nm to 28nm alone should guarantee 0.77 W/GH. But he also said they improved the design (maybe fewer transistors, a "sea-of-hashers" design like Bitfury's, etc.), so it should be even lower than 0.77 W/GH.

There's no way that's true.  Check out this paper: Power Consumption in CMOS VLSI chips

It mostly comes down to the gate capacitance. I'm not an IC engineer, but my understanding (which I'm not certain of) is that when you have a closed transistor with +V on one side and -V on the other, the charges on the two sides end up forming an incredibly tiny capacitor. So even though the transistor is in the 'off' state, current will still flow a tiny bit, just like it can flow through an uncharged capacitor.

And the thing is, as feature sizes get smaller and smaller, the ratio between surface area and volume goes up, and the gap between the gate and drain gets smaller as well. And of course you have more transistors.

And I also think gate leakage is higher with smaller nodes as well.

On the other hand, the voltage can be a lot lower.

Either way, claiming you'll have a straight linear relationship between feature area and power seems kind of ridiculous to me. I guess we'll see.
erk
hero member
Activity: 826
Merit: 500
I will never purchase a BFL product unless it is in stock and will ship within 24 hours.

Screw this pre-order scam crap...

BFL should know by now that you only get one chance to prove you can be trusted with pre-orders. A deposit, perhaps, but no longer 100% up front. If they didn't make enough profit from the pre-orders for the existing products to fully fund the 28nm production, then something is wrong.

newbie
Activity: 13
Merit: 0
After the Jalapeno-to-Little-Single "upgrade" screw-over some people got, it's gonna be interesting to see how this washes out. Love the no CC or PayPal. Must have been a little painful giving that money back - having it taken back. Probably way more refunds went through than we will ever know. This might be a good place for a PSA?

BFL Fanboys: Yes, I am a sock puppet and proud of it. Smiley
mrb
legendary
Activity: 1512
Merit: 1028
28nm is two die shrinks from current chip.  So 3.1 w/GH * 0.6 * 0.6 = 1.1 w/GH.

You are off by 2x. A shrink from 65 to 28nm theoretically increases power efficiency to 3.1 / ((65/28)^2) = 0.57 W/GH. That's because power efficiency is inversely proportional to the square of the feature size.

(However, leakage is a bigger problem at 28nm than at 65nm, which is why 28nm GPUs weren't quite as efficient as AMD/Nvidia had predicted. But combined with whatever logic improvements BFL claims, a 0.6 W/GH number is totally plausible at 28nm. Heck, Bitfury does 0.8 W/GH at 55nm!)
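mrb's figure is easy to reproduce, under his stated assumption that power per hash tracks transistor area, i.e. the square of the feature size:

```python
# Reproducing the quadratic-scaling estimate from the post above.
# Assumption: same architecture, power per hash scales with area.

current = 3.1                       # W/GH at 65nm, per the thread
scaled = current / (65 / 28) ** 2   # area shrinks with the square
print(f"{scaled:.3f} W/GH")         # ~0.575, the ~0.57 cited above
```

Compare this with the naive "two shrinks at 40% each" estimate of 1.1 W/GH earlier in the thread; that is the 2x discrepancy mrb is pointing out.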
legendary
Activity: 1918
Merit: 1570
Bitcoin: An Idea Worth Spending
This really is the endgame, guys.  They're trying to get a little extra (irreversible) cash out of their marks.

I feel like this is similar to the Pirate situation.  Everyone was SURE that he would repay... Well, at least until months after he didn't.

hero member
Activity: 742
Merit: 500
I will never purchase a BFL product unless it is in stock and will ship within 24 hours.

Screw this pre-order scam crap...
sr. member
Activity: 1316
Merit: 254
Sugars.zone | DatingFi - Earn for Posting
This has made my day..
All those lemmings that will be cancelling their orders to upgrade  Tongue
I may now actually get my order delivered in time to get some return,
and then firmly put the whole BFL pre-order cr@p behind me.

Go girls - keep pushing me to the front of the line, suckers.  Grin
sr. member
Activity: 322
Merit: 250

What next? Jody printing out the FedEx shipping labels for them by mistake?

Dammit, I meant to put that in the Hitler video.
legendary
Activity: 1918
Merit: 1570
Bitcoin: An Idea Worth Spending
This is hilarious... I actually have some BFL hardware, and even I think this means they are finished. They are obviously trying to move payments to a medium that can't be refunded (bitcoin or wire xfer). They are using a design which a lot of us *know* cannot dissipate that much heat, and they are moving at a snail's pace with current orders.

Guys... I've never said this before, but I believe they are on the verge of folding and taking the money of anyone who pre-orders with them.

Regarding power consumption, Radeon 5970 and 5870 both consume more power than our card does, the very reason we took this design approach.


Regards,
Nasser
Remember when they said this about the Jally while doing simulations and estimating only 5 watts?

Turned out to be closer to 30 watts and the whole thing had to change.

----------------------

Now they are talking about 350 watts in a form factor where large companies like AMD and Nvidia have trouble cooling and dissipating 250+ watts of TDP.

Is it a step up (in theory) if it actually worked on a PCIe bus? Yes, but is that realistic... nah.

Right now they are quoting close to the same numbers they used back when the initial design was 65nm: 0.8~1.2 watts per GH/s.

Why do they never learn? Even their lead engineer thinks this is possible and talks as if they already had a prototype... (Tape-out is in August, for those who didn't bother to read.)

Exactly what I was implying with my last post. What next? Jody printing out the FedEx shipping labels for them by mistake?
full member
Activity: 227
Merit: 100
This is hilarious... I actually have some BFL hardware, and even I think this means they are finished. They are obviously trying to move payments to a medium that can't be refunded (bitcoin or wire xfer). They are using a design which a lot of us *know* cannot dissipate that much heat, and they are moving at a snail's pace with current orders.

Guys... I've never said this before, but I believe they are on the verge of folding and taking the money of anyone who pre-orders with them.

Regarding power consumption, the Radeon 6990 consumes more power than our card does, the very reason we took this design approach.


Regards,
Nasser

EDIT: Corrected '5970' and '5870' to 6990

Simple mistake for a Persian engineer to make late at night in France on a Saturday.

It's 3:00 AM Paris time, and I do enjoy reading threads usually.

Nasser
hero member
Activity: 504
Merit: 500
BFL will make headlines soon... Joining the ranks of pirateat40
sr. member
Activity: 322
Merit: 250

Simple mistake for a Persian engineer to make late at night in France on a Saturday.

 Cheesy Cheesy