
Topic: BFL announces 28nm 600GH/S blade for $4680 - page 30. (Read 41048 times)

legendary
Activity: 1918
Merit: 1570
Bitcoin: An Idea Worth Spending
This is hilarious... I actually have some BFL hardware, and even I think this means they are finished. They are obviously trying to move payments to a medium that can't be refunded (bitcoin or wire transfer). They are using a design which a lot of us *know* cannot dissipate that much heat, and they are moving at a snail's pace with current orders.

Guys... I've never said this before, but I believe they are on the verge of folding and taking the money of anyone who preorders with them.

Regarding power consumption, Radeon 6990 both consume more power than our card does, the very reason we took this design approach.


Regards,
Nasser

EDIT: Corrected '5970' and '5870' to 6990

Simple mistake for a Persian engineer to make late at night in France on a Saturday.
hero member
Activity: 518
Merit: 500
From the BFL website:


Plus

Due to the double node jump, the max power should be 0.77W/GH (3.1W/GH divided by 4). Based on everything we know from any chip industry (FPGA, CPU, GPU, etc.), that should be the ceiling in power consumption.

Regards,
Nasser

Picture says 600 GH/s @ 350W.
BFL "engineer" (whose area of expertise is Visual Basic and .NET) says 0.77W per GH/s.

Unfortunately, multiplication says 600 GH/s * 0.77W per GH/s = 462W.

Based on everything we know about multiplication (FPGA, CPU, GPU, etc.), that should mean you are just as good at guessing TDP in August of 2013 as you were in August of 2012.

If you read carefully, it's noted that 0.77W/GH would be the ceiling due to the node jump, not taking any optimization or correction into consideration. The actual numbers are lower, and the ~0.6W/GH was the number we decided was closest to reality + error margin.


Regards,
Nasser

1) How are you going to get 400+ watts into 1 card with 4 x PCIe connectors?
2) You would not be able to dissipate the heat that 400 watts would generate with a single-fan reference design.
3) Your VB.NET power estimates were out by ~40%; what happened in the last 6 months?

Finally, 4) how can you as a human be associated with this total joke of a company, whose professionalism is NON-EXISTENT if not criminal in nature!

Reference-style fans work the best by far. You could cool a 400-watt card, depending on how the heat was spread out. A 6990 is about the same. They should be able to make a water block for these pretty easily, though. It seems more likely that the card will not run at 400 watts. When is the last time they hit the mark guessing wattage?
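The back-of-envelope math being argued over above is easy to check. A minimal Python sketch, using only the figures quoted in this thread (600 GH/s, 350 W, 3.1 W/GH, ~0.6 W/GH); nothing else is assumed:

```python
# Sanity-check the power figures disputed in this thread.
hashrate_gh = 600            # advertised hashrate, GH/s
spec_watts = 350             # wattage shown on the product picture
ceiling_w_per_gh = 3.1 / 4   # the "double node jump" ceiling: 0.775 W/GH (~0.77)
claimed_w_per_gh = 0.6       # the later "closest to reality" figure

print(hashrate_gh * ceiling_w_per_gh)          # 465.0 W at the full ceiling
print(round(hashrate_gh * claimed_w_per_gh, 1))  # 360.0 W, still above the 350 W spec
print(round(spec_watts / hashrate_gh, 3))      # 0.583 W/GH implied by the spec sheet
```

Even Nasser's own "closest to reality" 0.6 W/GH figure puts the card above the advertised 350 W.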
legendary
Activity: 1190
Merit: 1000
If you read carefully, it's noted that 0.77W/GH would be the ceiling due to the node jump, not taking any optimization or correction into consideration. The actual numbers are lower, and the ~0.6W/GH was the number we decided was closest to reality + error margin.
Regards,
Nasser

How do you have actual numbers for something that will not exist for months? It's just a thought experiment. Please try to use BFL-speak; you have pre-orders on the line.

These actual numbers come from a guy with the skill set of "account management", "sales management", "product management", "solution selling", "new business development", "windows", "product marketing", "Microsoft excel", 2 years of writing Visual Basic and a degree in Telecommunications Engineering from the Islamic Azad University in Tehran.

You might need this to interpret them correctly:


Anyone want to start a pool on the day he starts calling everyone douchebags and trolls?
hero member
Activity: 574
Merit: 500
Once again, FUCKING CRICKETS!!

You know it's BFL when they won't answer simple questions!!

Let me guess: the Vtn value is unknown, as you are not the mythical French bank employee / ASIC chief engineer.
hero member
Activity: 574
Merit: 500
No. We want exact answers to our reasonable questions, i.e. some form of proof.

What is the Vtn?

Not just "IT GO BETTER" or "TRUST ME, I KNOW WHAT I AM TALKING ABOUT",

because on all fronts this has been proven to be false!
legendary
Activity: 1904
Merit: 1007
Regarding power consumption, Radeon 5970 and 5870 both consume more power than our card does, the very reason we took this design approach.

Yeah, but what about when the power doubles from your pre-fab estimates, like every other chip you've built?

Due to the double node jump, the max power should be 0.77W/GH (3.1W/GH divided by 4). Based on everything we know from any chip industry (FPGA, CPU, GPU, etc.), that should be the ceiling in power consumption.



Regards,
Nasser


What were your power estimates for your current ASIC chips?
full member
Activity: 167
Merit: 100
I wonder if their engineers went to DeVry U? Or maybe they just photoshopped those diplomas as well? I feel bad for people with pre-orders who haven't received what they paid for, but anyone that orders this... as Gob would say, COME ON!

mrb
legendary
Activity: 1512
Merit: 1028
Due to the double node jump, the max power should be 0.77W/GH (3.1W/GH divided by 4). Based on everything we know from any chip industry (FPGA, CPU, GPU, etc.), that should be the ceiling in power consumption.

Hi Nasser,

Brave of you to dive into the BCT feeding frenzy!

I'm curious about your numbers - 0.77 * 600 = 462W - how come it says 350W on the website?

What he is saying is that the theoretical power-efficiency increase alone, gained from going 55nm to 28nm, should guarantee 0.77 W/GH. But he also said they improved the design (maybe fewer transistors, a "sea of hashers" design like bitfury's, etc.), so it should be even lower than 0.77 W/GH.
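That scaling argument can be written out explicitly. This is just the idealized "each full node step roughly halves power per hash" rule of thumb implied by the posts above, not a process model:

```python
# Idealized node-jump scaling: assume each full node shrink roughly halves
# dynamic power per hash. Two steps (the "double node jump") divide by 4.
w_per_gh_55nm = 3.1   # efficiency of the 55nm chip, W/GH (from the thread)
node_steps = 2        # the "double node jump" from 55nm down to 28nm

ceiling = w_per_gh_55nm / (2 ** node_steps)
print(ceiling)  # 0.775, quoted in the thread as ~0.77 W/GH
```

Any real-world leakage, voltage, or utilization effects would move the actual number away from this idealized ceiling in either direction.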
sr. member
Activity: 322
Merit: 250
From the BFL website:


Plus

Due to the double node jump, the max power should be 0.77W/GH (3.1W/GH divided by 4). Based on everything we know from any chip industry (FPGA, CPU, GPU, etc.), that should be the ceiling in power consumption.

Regards,
Nasser

Picture says 600 GH/s @ 350W.
BFL "engineer" (whose area of expertise is Visual Basic and .NET) says 0.77W per GH/s.

Unfortunately, multiplication says 600 GH/s * 0.77W per GH/s = 462W.

Based on everything we know about multiplication (FPGA, CPU, GPU, etc.), that should mean you are just as good at guessing TDP in August of 2013 as you were in August of 2012.

If you read carefully, it's noted that 0.77W/GH would be the ceiling due to the node jump, not taking any optimization or correction into consideration. The actual numbers are lower, and the ~0.6W/GH was the number we decided was closest to reality + error margin.


Regards,
Nasser

1) How are you going to get 400+ watts into 1 card with 4 x PCIe connectors?
2) You would not be able to dissipate the heat that 400 watts would generate with a single-fan reference design.
3) Your VB.NET power estimates were out by ~40%; what happened in the last 6 months?

Finally, 4) how can you as a human be associated with this total joke of a company, whose professionalism is NON-EXISTENT if not criminal in nature!
5) Why are you mysteriously on duty on a Saturday night? Is this an official launch or a leak?
full member
Activity: 238
Merit: 100
If you read carefully, it's noted that 0.77W/GH would be the ceiling due to the node jump, not taking any optimization or correction into consideration. The actual numbers are lower, and the ~0.6W/GH was the number we decided was closest to reality + error margin.
Regards,
Nasser

What are the load capacitance and beta values for the transistors in the process you're using? What's the Vtn?
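For context, the quantities being asked about feed the standard first-order CMOS dynamic-power estimate, P ≈ α·C·V²·f. Every number in the sketch below is a purely illustrative placeholder, not BFL process data:

```python
# First-order CMOS dynamic power: P = alpha * C * Vdd^2 * f.
# All values here are hypothetical placeholders for illustration only.
alpha = 0.1      # activity factor: fraction of capacitance switching per cycle
c_farads = 1e-9  # total switched load capacitance, farads (hypothetical)
vdd = 0.9        # core supply voltage, volts (hypothetical)
f_hz = 500e6     # clock frequency, Hz (hypothetical)

power_watts = alpha * c_farads * vdd**2 * f_hz
print(round(power_watts, 4))  # 0.0405 W for these placeholder values
```

The point of the question is that without the real process parameters (capacitance, threshold voltage Vtn, beta), no one outside the fab can verify a W/GH claim.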
legendary
Activity: 1400
Merit: 1000
I owe my soul to the Bitcoin code...
If you read carefully, it's noted that 0.77W/GH would be the ceiling due to the node jump, not taking any optimization or correction into consideration. The actual numbers are lower, and the ~0.6W/GH was the number we decided was closest to reality + error margin.
Regards,
Nasser

How do you have actual numbers for something that will not exist for months? It's just a thought experiment. Please try to use BFL-speak; you have pre-orders on the line.
hero member
Activity: 574
Merit: 500
From the BFL website:


Plus

Due to the double node jump, the max power should be 0.77W/GH (3.1W/GH divided by 4). Based on everything we know from any chip industry (FPGA, CPU, GPU, etc.), that should be the ceiling in power consumption.

Regards,
Nasser

Picture says 600 GH/s @ 350W.
BFL "engineer" (whose area of expertise is Visual Basic and .NET) says 0.77W per GH/s.

Unfortunately, multiplication says 600 GH/s * 0.77W per GH/s = 462W.

Based on everything we know about multiplication (FPGA, CPU, GPU, etc.), that should mean you are just as good at guessing TDP in August of 2013 as you were in August of 2012.

If you read carefully, it's noted that 0.77W/GH would be the ceiling due to the node jump, not taking any optimization or correction into consideration. The actual numbers are lower, and the ~0.6W/GH was the number we decided was closest to reality + error margin.


Regards,
Nasser

1) How are you going to get 400+ watts into 1 card with 4 x PCIe connectors?
2) You would not be able to dissipate the heat that 400 watts would generate with a single-fan reference design.
3) Your VB.NET power estimates were out by ~40%; what happened in the last 6 months?

Finally, 4) how can you as a human be associated with this total joke of a company, whose professionalism is NON-EXISTENT if not criminal in nature!
full member
Activity: 196
Merit: 100
The actual numbers are lower, and the ~0.6W/GH was the number we decided was closest to reality + error margin.

Was that in the board meeting where you decided to overlook all those nasty complicated calculations and instead chose the numbers that fitted the maximum TDP of the PCI-E card you had in mind from your marketing guys?
legendary
Activity: 1190
Merit: 1000
From the BFL website:


Plus

Due to the double node jump, the max power should be 0.77W/GH (3.1W/GH divided by 4). Based on everything we know from any chip industry (FPGA, CPU, GPU, etc.), that should be the ceiling in power consumption.

Regards,
Nasser

Picture says 600 GH/s @ 350W.
BFL "engineer" (whose area of expertise is Visual Basic and .NET) says 0.77W per GH/s.

Unfortunately, multiplication says 600 GH/s * 0.77W per GH/s = 462W.

Based on everything we know about multiplication (FPGA, CPU, GPU, etc.), that should mean you are just as good at guessing TDP in August of 2013 as you were in August of 2012.

If you read carefully, it's noted that 0.77W/GH would be the ceiling due to the node jump, not taking any optimization or correction into consideration. The actual numbers are lower, and the ~0.6W/GH was the number we decided was closest to reality + error margin.


Regards,
Nasser

Decided by whom?
full member
Activity: 227
Merit: 100
From the BFL website:


Plus

Due to the double node jump, the max power should be 0.77W/GH (3.1W/GH divided by 4). Based on everything we know from any chip industry (FPGA, CPU, GPU, etc.), that should be the ceiling in power consumption.

Regards,
Nasser

Picture says 600 GH/s @ 350W.
BFL "engineer" (whose area of expertise is Visual Basic and .NET) says 0.77W per GH/s.

Unfortunately, multiplication says 600 GH/s * 0.77W per GH/s = 462W.

Based on everything we know about multiplication (FPGA, CPU, GPU, etc.), that should mean you are just as good at guessing TDP in August of 2013 as you were in August of 2012.

If you read carefully, it's noted that 0.77W/GH would be the ceiling due to the node jump, not taking any optimization or correction into consideration. The actual numbers are lower, and the ~0.6W/GH was the number we decided was closest to reality + error margin.


Regards,
Nasser
legendary
Activity: 1190
Merit: 1000
From the BFL website:


Plus

Due to the double node jump, the max power should be 0.77W/GH (3.1W/GH divided by 4). Based on everything we know from any chip industry (FPGA, CPU, GPU, etc.), that should be the ceiling in power consumption.

Regards,
Nasser

Picture says 600 GH/s @ 350W.
BFL "engineer" (whose area of expertise is Visual Basic and .NET) says 0.77W per GH/s.

Unfortunately, multiplication says 600 GH/s * 0.77W per GH/s = 462W.

Based on everything we know about multiplication (FPGA, CPU, GPU, etc.), that should mean you are just as good at guessing TDP in August of 2013 as you were in August of 2012.
legendary
Activity: 1890
Merit: 1003
This could be a good thing or a bad thing depending on whether BFL is actually insolvent right now or not.

They could have run out of money and are trying to fund old pre-orders with new pre-order money. If that's the case, I see prison cells in BFL managers' futures, since it's a blatantly Ponzi-esque tactic that will collapse in on itself eventually.

However, think about the people who ordered after the price of BFL units doubled. They will likely never see a return on investment on their order anyway, so switching to the new product queue and being at the head of that line actually makes a ton of sense for them. This only holds true as long as BFL has enough money to stay afloat without dipping into future pre-order money to fund old pre-orders' production, though.

With their history, though, I would certainly never pay for a pre-order in Bitcoin. For a product in hand that other people have received from a seller with no problem, Bitcoin works great, but for a pre-order the lack of protection for buyers is very worrying.

You know this means that the suckers who actually believe this new development are going to want to resell their current BFL hardware to raise money to enter the newer queue.

So underbid every single one of them. Let's drive the cost of BFL hardware into the floor so we can get it cheap as chips.
legendary
Activity: 1918
Merit: 1570
Bitcoin: An Idea Worth Spending
Quote
Since this is our 2nd generation ASIC chip, we're free from the pitfalls sometimes associated with a first generation design. Testing systems, bumping masks, substrates & underfill engineering are all carryovers from our last version of the chip, so they're ready for high volume production once the wafers are ready. The importance of this can't be overstated when considering schedule risk & reliability. Nevertheless, please do not purchase this product if you are unwilling to wait for the product to complete its development.
No bitchin' bitches.

This negates having a manufactured 28nm device in hand.
legendary
Activity: 1918
Merit: 1570
Bitcoin: An Idea Worth Spending
We should all just go to Kansas... #occupyBFL until we get what we paid for, or refunds of money paid + interest.

It'll be the first oCCo demonstration, as in oCcupy [a] Company, with the domain name occo.co or, if that is taken, oc.co.co.
sr. member
Activity: 446
Merit: 250
I hope a load of suckers cancel their Jalapeno orders. It makes the huge queue in front of my order a lot shorter.

In all honesty, from their standpoint, it's not a bad way to clear some of the disgruntled backlog and defer the rest of the discontent from delayed products for a few months.
By charging a transfer fee to people who have been in the queue 10 months, just to start them all over again?!

Fuck me in the ass once, shame on me. Fuck me in the ass twice, when are we getting married?

True this ^^^^. My grandfather always used to say to me: do something wrong once, it's a mistake; twice is a coincidence; and three times (as he was smacking me up the back of the head) you're a fucking idiot.