Topic: Purchasing a Bitforce unit = betting on stagnation? (Read 4824 times)

legendary
Activity: 922
Merit: 1003
When comparing FPGA to potential future competitors (ASIC or really any technology) you can't set the purchase cost of the item bought in the past against the cost of the item just purchased and compare the W:MH or $ ratios directly.

You need to factor in what it has paid back in the interval between when you purchased it and when the future technology becomes available.

If you pay $700 for a BFL now and in 6 months you've made $350 with it, then a nifty ASIC version comes out for $500, you aren't comparing a $500 device against a $700 device, you are comparing a $350 device against a $500 device, and run your calculations from that point.  

A device generating money RIGHT NOW is worth an infinite number of devices that might generate money in the future... until you have both devices generating money RIGHT NOW, you can't really compare them directly.  When they are both available, compare them at their current costs, not at their costs in the past and future.  Otherwise, following that logic, you could deduce that sASICs purchased in 3 years are more valuable than sASICs purchased in 1 year, since those in 3 years will be cheaper than those available next year.  But if the sASIC available in one year is less valuable than the one available in 3 years, that's a reason to wait - and in three years, the one available then will be less valuable than the one available in 5 years, so wait until then.  Repeat this cycle in 5 years.
This is a good perspective on the situation.

Unfortunately even the BFL units are still not shipping and may not be for another month. So although you can PAY for the BFL unit today, you won't be able to MINE with it yet. Still, once you can, the comparison can be made in the manner you've outlined.
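To make the comparison concrete, here is a minimal Python sketch of the logic in the quoted argument above, using its hypothetical $700/$350/$500 figures:

Code:
# Hypothetical figures from the quoted post.
fpga_price    = 700.0  # paid for the BFL unit 6 months ago
earned_so_far = 350.0  # what it has paid back in the meantime
asic_price    = 500.0  # price of the new ASIC device today

# Sunk cost is irrelevant: compare the FPGA's unrecovered cost
# against the ASIC's full price at the moment the ASIC ships.
unrecovered = fpga_price - earned_so_far
print(f"Compare a ${unrecovered:.0f} device against a ${asic_price:.0f} device")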
donator
Activity: 1218
Merit: 1080
Gerald Davis
When comparing FPGA to potential future competitors (ASIC or really any technology) you can't set the purchase cost of the item bought in the past against the cost of the item just purchased and compare the W:MH or $ ratios directly.

You need to factor in what it has paid back in the interval between when you purchased it and when the future technology becomes available.

Good points.  The real risk would be a more efficient system hitting the market before you have paid back a significant portion of your capital cost.  Even if sASICs or more exotic silicon tech eventually hit the market, IMHO that is years away - long enough for any FPGA to have long since paid back its capital cost.

The greatest "threat" to a 45nm FPGA is a .... 28nm FPGA.  The good news is there is likely at least a year (probably two) before 28nm FPGAs are available to the general public at reasonable prices, those miners get built, and they start driving up difficulty.  While a 45nm FPGA will be inferior to a 28nm FPGA in 24 months, it will also have 24 months of cashflow under its belt.
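A rough payback sketch of that point (the monthly net income figure is a made-up placeholder, not from this thread):

Code:
# Does a 45nm FPGA pay for itself before 28nm parts drive up difficulty?
fpga_cost         = 700.0  # upfront cost (hypothetical)
net_per_month     = 70.0   # net income after electricity, assumed constant
months_until_28nm = 24     # the estimated window above

cashflow = net_per_month * months_until_28nm
status = "paid off" if cashflow >= fpga_cost else "not paid off"
print(f"Cashflow before 28nm arrives: ${cashflow:.0f} ({status})")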
legendary
Activity: 1260
Merit: 1000
When comparing FPGA to potential future competitors (ASIC or really any technology) you can't set the purchase cost of the item bought in the past against the cost of the item just purchased and compare the W:MH or $ ratios directly.

You need to factor in what it has paid back in the interval between when you purchased it and when the future technology becomes available.

If you pay $700 for a BFL now and in 6 months you've made $350 with it, then a nifty ASIC version comes out for $500, you aren't comparing a $500 device against a $700 device, you are comparing a $350 device against a $500 device, and run your calculations from that point.  

A device generating money RIGHT NOW is worth an infinite number of devices that might generate money in the future... until you have both devices generating money RIGHT NOW, you can't really compare them directly.  When they are both available, compare them at their current costs, not at their costs in the past and future.  Otherwise, following that logic, you could deduce that sASICs purchased in 3 years are more valuable than sASICs purchased in 1 year, since those in 3 years will be cheaper than those available next year.  But if the sASIC available in one year is less valuable than the one available in 3 years, that's a reason to wait - and in three years, the one available then will be less valuable than the one available in 5 years, so wait until then.  Repeat this cycle in 5 years.

donator
Activity: 1218
Merit: 1080
Gerald Davis
However there are methods of making a completely "custom chip" (no standardized logic layer) using "cells".  The cells are pre-designed low level silicon.  Using higher level software a designer combines cells to make a "custom" chip from standardized blocks.  Design software can then build a custom mask.
This is a real ASIC. Of course everyone uses standard libraries of elements; many of them are produced by fabs like IBM.
I don't think anyone is still laying out routes and silicon pieces by hand :)

The difference between a sASIC and an ASIC is that the first one uses custom masks only for routing; that's why it's cheap.

I never said cell based ASICs aren't "real" ASICs, just that they have lower upfront cost and higher per unit cost than a full custom design.

http://en.wikipedia.org/wiki/Application-specific_integrated_circuit#Standard-cell_design

Generally a cell based ASIC is defined as one where you can't design below the cell level.  This reduces design complexity but results in less than perfect utilization of the silicon compared to a completely custom design.  A cell may be hundreds or even thousands of gates.  That simplifies routing, design, and mask creation but has overhead and wastes die space.  Cell based designs will never be as efficient (in performance per watt or dollar) as a custom design, but they are much simpler to create, resulting in significant time, resource, and upfront cost savings.

Thus, unless millions of units will be built, the higher upfront cost of a full custom design will generally never be recovered.
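That trade-off is simple upfront-cost (NRE) amortization; a sketch with invented cost figures:

Code:
# Break-even volume between a cell based and a full custom ASIC.
# All dollar figures below are invented for illustration only.
cell_nre,   cell_unit   = 500_000.0,   20.0  # lower NRE, higher per-unit cost
custom_nre, custom_unit = 5_000_000.0,  5.0  # higher NRE, lower per-unit cost

# Full custom wins once per-unit savings outweigh the extra NRE:
#   cell_nre + n * cell_unit  >  custom_nre + n * custom_unit
breakeven_units = (custom_nre - cell_nre) / (cell_unit - custom_unit)
print(f"Full custom pays off beyond ~{breakeven_units:,.0f} units")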
donator
Activity: 532
Merit: 501
We have cookies
However there are methods of making a completely "custom chip" (no standardized logic layer) using "cells".  The cells are pre-designed low level silicon.  Using higher level software a designer combines cells to make a "custom" chip from standardized blocks.  Design software can then build a custom mask.
This is a real ASIC. Of course everyone uses standard libraries of elements; many of them are produced by fabs like IBM.
I don't think anyone is still laying out routes and silicon pieces by hand :)

The difference between a sASIC and an ASIC is that the first one uses custom masks only for routing; that's why it's cheap.
donator
Activity: 1218
Merit: 1080
Gerald Davis
Well you can't lump together all ASIC as they get vastly higher efficiency as you move up the cost ladder.
sASIC - lowest upfront cost, highest per unit cost.  Still roughly 2x the efficiency of FPGA (in performance per watt and performance per $).
cell based ASICS.  higher upfront cost, significant risk, much lower per unit cost.
Aren't sASICs (structured ASICs) cell-based ? Smiley

The lines kinda blur, but a sASIC is generally classified as a static computational layer plus a custom routing layer.  The computational layer is mass produced and combined w/ the custom routing layer at the fab.  It can be considered a form of cell based ASIC.  Still, it has the limitation that the computational layer is fixed (like an FPGA): it has a set number of components which can't be changed.  Since the design requirements will never exactly line up w/ the capabilities of any FPGA/sASIC, there is wasted die space.

However there are methods of making a completely "custom chip" (no standardized logic layer) using "cells".  The cells are pre-designed low level silicon.  Using higher level software a designer combines cells to make a "custom" chip from standardized blocks.  Design software can then build a custom mask.  You gain higher efficiencies over a sASIC because you can choose the exact number of cells, chip size, routing, etc.  Upfront costs go way up and per unit costs go way down.

Of course the terms aren't exactly set in stone.  I guess in one sense of the word all structured ASICs are cell based, but at least my understanding is that structured ASICs involve a fixed computational layer.
donator
Activity: 532
Merit: 501
We have cookies
Well you can't lump together all ASICs, as they get vastly higher efficiency as you move up the cost ladder.
sASIC - lowest upfront cost, highest per unit cost.  Still roughly 2x the efficiency of an FPGA (in performance per watt and performance per $).
Cell based ASIC - higher upfront cost, significant risk, much lower per unit cost.
Aren't sASICs (structured ASICs) cell-based? :)
sr. member
Activity: 349
Merit: 250
I think that the next year will be interesting at least.

If we are following the Gartner Hype Cycle, and have passed the "trough of disillusionment," then a stabilized bitcoin price is in our future.

Increased bitcoin price won't force current GPU miners out, and enhanced profitability for the FPGA mining operations will be good news.

In any case, I would NOT be betting on stagnation.
donator
Activity: 1218
Merit: 1080
Gerald Davis
Without knowing the actual ASIC power consumption, I would speculate it to be about 2x more efficient, using 20W/GH for the ASIC - which may be optimistic IMHO.

  I do see where you are coming from though. Even as cheap as an FPGA is to power, if the difficulty goes up enough then it obviously becomes more profitable to use an ASIC. But just how much would difficulty need to go up at 2x (my speculated number, since we lack hard data)? We need to chart or graph it out, I think. My math skills are really pretty basic, so I am not sure whether difficulty would need to increase by the same factor as the efficiency difference between CPU/GPU, CPU/FPGA, GPU/FPGA, or what. We could use the historical difficulty to surmise the growth % from CPU to GPU, but it would be hard to pin down the point where GPU not only took the majority share of the hash rate but where that would intersect with stale earnings for CPU. We would of course have to normalize the price/difficulty data. Even lacking good FPGA global hash data, we could get pretty close to speculating its difficulty apex. We would need to compare the CPU-to-GPU difficulty apex slope in relation to their efficiency, then apply that formula to the GPU-to-FPGA difficulty in relation to efficiency. I can probably pen and paper it, but it will take me considerably longer to trial and error the proper method. Maybe one of the more proficient academics here can lend a hand?

  The questions then are: how much will ASICs' $/MH be? How much cheaper can FPGAs be made? I believe the ASIC $/MH will not be enough of a leap below FPGA build costs to make the FPGA payoff time unreasonable. To make this speculation I am assuming an FPGA cost of $1/MH or less, which is very doable now. LX150-n3 are street priced at $141, a cheap board and components cost $35, and assembly can be done for as low as $17. Total for ~200MH = $193. And the new series of Spartan are due out soon.

A couple of concepts which might enlighten you (or maybe muddy the waters even more):
sASICs (structured ASICs) are roughly 2x to 3x more efficient per watt than FPGAs and have a per unit cost of ~1/2 to 1/5th, depending on volume (5K to 50K units).
ASICs are more like 5x to 20x more efficient per watt than FPGAs and can have a per unit cost as low as 1/10th that of an FPGA, but they really only make sense in volumes of hundreds of thousands of units or more.

So it isn't that a sASIC would be more efficient BUT more expensive.  It's that a sASIC could be 2x as efficient per watt and half the cost.  A true ASIC (even cell based) could be in the <$0.20 per MH and 100MH/W range.
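Applying those multipliers to the FPGA baseline figures quoted elsewhere in this thread (~$1/MH and ~0.042W/MH, i.e. ~24MH/W) gives a feel for the ranges; the multipliers below are one point picked from each stated range, for illustration only:

Code:
# Project sASIC/ASIC figures from an FPGA baseline using the stated
# multipliers (one point from each range, for illustration).
fpga_cost_per_mh = 1.0        # $/MH (from this thread)
fpga_mh_per_w    = 1 / 0.042  # ~24 MH/W (from this thread)

for name, eff_x, cost_x in [("sASIC", 2, 0.5), ("ASIC (low)", 5, 0.2), ("ASIC (high)", 20, 0.1)]:
    print(f"{name:12s} ~${fpga_cost_per_mh * cost_x:.2f}/MH, "
          f"~{fpga_mh_per_w * eff_x:.0f} MH/W")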

Now I find it beyond unlikely we will see sASICs anytime in the next couple of years.  Startup capital is in the hundreds of thousands of dollars.  We are talking about months of talent/salary, IP licensing, high end design software, FPGA prototyping (@ $2000+ per chip), test runs, and contracted (and at a minimum partially prepaid) production runs, etc.  An established player could do it for cheaper, but no fab is going to trust a startup with anything less than full prepayment for 10K units.

True ASICs are even more unlikely, as they require even more customization, and that means more talent, more testing, and - unless you want development times measured in years - even more licensing of IP.  Startup capital is likely in the low millions for a current gen (45nm) ASIC.

So I think any FPGA bought today is safe from the threat of sASIC or cell based ASIC "future" designs for at least 3-5 years.  Bitcoin would need to see significant stabilization and growth before it attracts the kind of capital necessary for those kinds of designs.


Still, remember FPGAs are subject to Moore's law.  28nm FPGAs are very scarce right now and priced off the chart, but in time they will be mundane.  They will deliver ~2x the performance per watt and per $ (slightly less, but using 2 as the multiplier is fine).  That will be the true threat to current gen FPGAs, but even there it will affect new sales and resale value more than profitability for a long time.



Quote
And just how many MH can an ASIC achieve?

This is a meaningless metric.  Say you have a design which gets X GH.  If you quadruple the size of the chip you could get 4x the performance, so the performance per chip isn't relevant.  A chip that has 4x the surface area will generally have lower yields, so at some point there is a "magic" size where the cost of a multi-chip design balances the additional cost of a larger chip.

If you could get a 1 GH board @ 15W for $100, would you really care if it was made up of 1, 2, or 4 chips?  All that matters is performance per watt and performance per $, right?

Still, to get a very loose ballpark figure:
Current FPGAs get about 1MH per square mm.
On a 45nm process a completely custom ASIC could maybe achieve ~20MH per square mm.  On a 100mm^2 chip we are talking ~2GH/s.  Of course there is no reason one would need to stop at 100mm^2; CPUs/GPUs come as large as 500mm^2.  A chip that large could achieve maybe 10GH/s.  However, larger chips = lower yields, so first gen ASICs will likely be designed small.
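The arithmetic behind those ballpark figures:

Code:
# Die-area ballpark from the estimates above.
FPGA_MH_PER_MM2 = 1.0   # current FPGA, ~1 MH/s per mm^2
ASIC_MH_PER_MM2 = 20.0  # speculative full custom 45nm ASIC

for area_mm2 in (100, 500):
    gh = ASIC_MH_PER_MM2 * area_mm2 / 1000
    print(f"{area_mm2} mm^2 ASIC: ~{gh:.0f} GH/s")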


Quote
What would be the estimated power usage of a 1GH ASIC? (Using 'ASIC' as a blanket term for all variations: sASIC, full ASIC, etc.)

Well you can't lump together all ASICs, as they get vastly higher efficiency as you move up the cost ladder.

sASIC - lowest upfront cost, highest per unit cost.  Still roughly 2x the efficiency of an FPGA (in performance per watt and performance per $).
Cell based ASIC - higher upfront cost, significant risk, much lower per unit cost.
Custom ASIC - huge risk, massive upfront cost, "negligible" per unit cost.

hero member
Activity: 504
Merit: 500
Well, I was thinking that if ASICs can produce bitcoin at $0.05 per bitcoin, it would be tough to compete using BFL Singles at a cost of $0.65 per bitcoin.  That doesn't mean we'd be operating at a loss, but it would reduce profitability.  I think it is unlikely near term, but after the block reward halving it might bite us in the ass.  Just a risk that I don't want to leave off my radar screen.
 Time will tell. I for one will be watching very closely what is available behind the scenes with ASIC development. But for now, my money is on FPGAs for at least the next 6-9 months. And of course super efficient GPUs, if the price stays up and difficulty does not jump too much.
I think the FPGA is the most promising also.  They blow all current GPUs away on energy use, and BFL also competes with GPUs on hashpower.

We need to keep an eye on GCN processors.  While many seem to think that for mining they will follow NVIDIA's path and be a yawn, it seems that algorithms can be rewritten to take advantage of the quad threads in each stream processor.  I really would not be surprised to see a much greater hash power boost from the 79xx series.

While it is easy to succumb to thoughts that bitcoin could flounder and die altogether, I think that this is an unreasonable reaction to the "bubble" bursting.

A nice article http://www.avc.com/a_vc/2011/11/bitcoin.html
  So 4.6W for 900MH? I wish I knew, as having that kind of data, or even something close, about a potential ASIC would make the math a ton easier. At least for me. :P

Yeah, I certainly won't turn a blind eye to the 79xx until it has been tested. Sadly I am not capable of writing any kind of code that could attempt to utilize its new architecture. Or any other code for that matter, really. :/

Yeah, the whole bubble thing did not bother me. Look at silver: from $24 to $47 in 6 months. And how long has it been around? I believe as Bitcoin gets into more and more hands the 'bubbles' will have less and less impact. Especially since it is not subject to the 'paper' sword that entities like JP Morgan took to silver's throat in order to make a profit.

Thanks for the article read and the convo. The back and forth debating helps open up angles one may not have thought of otherwise.

  Cheers,
   Derek
hero member
Activity: 504
Merit: 500
I agree with most of what you're saying, Fred0. I still don't see where ASICs would affect the earnings of an FPGA unit in operation now, unless we assume FPGA builders will not lower their prices as more competition comes in and ASIC units are sold at no markup. The FPGAs are already so low on energy costs that a further reduction there from ASICs would not be enough by itself to make ASICs instantly more economical than FPGAs.
Might ASIC adoption cause difficulty to rise, lowering FPGA earnings?
That would almost equally reduce ASIC earnings as well, since the electricity costs will not be that much lower.
But the ASICs' price per MH/s may be lower compared to FPGA.

  Likely they will be. But we really do not know yet.  The important part is that it does not change the earnings potential; only the cost to operate and the electricity cost per BTC will affect earnings.

  If ASICs end up paying back the initial investment a huge percentage faster, then it is quite likely the difficulty will adjust rather quickly to compensate. That would affect anything bought for mining and its time to pay back the investment, including the ASICs. But that is a never ending ladder. The same thing was true for CPU < GPU < FPGA. The key difference between the tech on that ladder is that a CPU is ~90W/MH, a GPU is ~0.44W/MH and an FPGA is ~0.042W/MH. That's a huge jump in efficiency from CPU to GPU, roughly 200x, then only about 10.5x for GPU to FPGA (5970 vs. Ztex). Without knowing the actual ASIC power consumption, I would speculate it to be about 2x more efficient, using 20W/GH for the ASIC - which may be optimistic IMHO.

  I do see where you are coming from though. Even as cheap as an FPGA is to power, if the difficulty goes up enough then it obviously becomes more profitable to use an ASIC. But just how much would difficulty need to go up at 2x (my speculated number, since we lack hard data)? We need to chart or graph it out, I think. My math skills are really pretty basic, so I am not sure whether difficulty would need to increase by the same factor as the efficiency difference between CPU/GPU, CPU/FPGA, GPU/FPGA, or what. We could use the historical difficulty to surmise the growth % from CPU to GPU, but it would be hard to pin down the point where GPU not only took the majority share of the hash rate but where that would intersect with stale earnings for CPU. We would of course have to normalize the price/difficulty data. Even lacking good FPGA global hash data, we could get pretty close to speculating its difficulty apex. We would need to compare the CPU-to-GPU difficulty apex slope in relation to their efficiency, then apply that formula to the GPU-to-FPGA difficulty in relation to efficiency. I can probably pen and paper it, but it will take me considerably longer to trial and error the proper method. Maybe one of the more proficient academics here can lend a hand?

  The questions then are: how much will ASICs' $/MH be? How much cheaper can FPGAs be made? I believe the ASIC $/MH will not be enough of a leap below FPGA build costs to make the FPGA payoff time unreasonable. To make this speculation I am assuming an FPGA cost of $1/MH or less, which is very doable now. LX150-n3 are street priced at $141, a cheap board and components cost $35, and assembly can be done for as low as $17. Total for ~200MH = $193. And the new series of Spartan are due out soon.

  On that note, has anyone had access to any of the Spartan-7 early release chips? And just how many MH can an ASIC achieve? What would be the estimated power usage of a 1GH ASIC? (Using 'ASIC' as a blanket term for all variations: sASIC, full ASIC, etc.)

  Thanks for poking me more about this FPGA-to-ASIC thing, Deepbit. If I have time I will try to apply more than my instincts to giving a proper answer.

  Cheers,
   Derek
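One way to frame the break-even question raised above: at what difficulty does a device's gross revenue fall to its electricity cost? A sketch using the W/MH ladder from this post (the BTC price, electricity rate, and ASIC efficiency are placeholder guesses):

Code:
# Difficulty at which gross revenue equals electricity cost, per MH/s.
BTC_PRICE = 3.00  # $/BTC (placeholder)
KWH_RATE  = 0.10  # $/kWh (placeholder)
REWARD    = 50.0  # BTC per block

def breakeven_difficulty(w_per_mh):
    # Revenue/day per MH/s at difficulty d: 1e6 * 86400 / (d * 2**32) * REWARD * BTC_PRICE
    revenue_at_diff_1 = 1e6 * 86400 / 2**32 * REWARD * BTC_PRICE
    cost_per_day = w_per_mh / 1000 * 24 * KWH_RATE  # $/day per MH/s
    return revenue_at_diff_1 / cost_per_day

for name, w_per_mh in [("GPU", 0.44), ("FPGA", 0.042), ("ASIC guess", 0.02)]:
    print(f"{name:10s} break-even difficulty ~{breakeven_difficulty(w_per_mh):,.0f}")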
sr. member
Activity: 349
Merit: 250
 I agree with most of what you're saying, Fred0. I still don't see where ASICs would affect the earnings of an FPGA unit in operation now, unless we assume FPGA builders will not lower their prices as more competition comes in and ASIC units are sold at no markup. The FPGAs are already so low on energy costs that a further reduction there from ASICs would not be enough by itself to make ASICs instantly more economical than FPGAs. The only real factor will be initial $/MH, and it's nearly impossible to speculate what the $/MH of ASICs will be. My speculation would be that they will have a higher markup over component build costs due to the massive dev costs involved. This should be more than enough to give FPGA miners time to make up the units' starting price/MH.
Well, I was thinking that if ASICs can produce bitcoin at $0.05 per bitcoin, it would be tough to compete using BFL Singles at a cost of $0.65 per bitcoin.  That doesn't mean we'd be operating at a loss, but it would reduce profitability.  I think it is unlikely near term, but after the block reward halving it might bite us in the ass.  Just a risk that I don't want to leave off my radar screen.
 Time will tell. I for one will be watching very closely what is available behind the scenes with ASIC development. But for now, my money is on FPGAs for at least the next 6-9 months. And of course super efficient GPUs, if the price stays up and difficulty does not jump too much.
I think the FPGA is the most promising also.  They blow all current GPUs away on energy use, and BFL also competes with GPUs on hashpower.

We need to keep an eye on GCN processors.  While many seem to think that for mining they will follow NVIDIA's path and be a yawn, it seems that algorithms can be rewritten to take advantage of the quad threads in each stream processor.  I really would not be surprised to see a much greater hash power boost from the 79xx series.

While it is easy to succumb to thoughts that bitcoin could flounder and die altogether, I think that this is an unreasonable reaction to the "bubble" bursting.

A nice article http://www.avc.com/a_vc/2011/11/bitcoin.html
donator
Activity: 532
Merit: 501
We have cookies
I agree with most of what you're saying, Fred0. I still don't see where ASICs would affect the earnings of an FPGA unit in operation now, unless we assume FPGA builders will not lower their prices as more competition comes in and ASIC units are sold at no markup. The FPGAs are already so low on energy costs that a further reduction there from ASICs would not be enough by itself to make ASICs instantly more economical than FPGAs.
Might ASIC adoption cause difficulty to rise, lowering FPGA earnings?
That would almost equally reduce ASIC earnings as well, since the electricity costs will not be that much lower.
But the ASICs' price per MH/s may be lower compared to FPGA.
hero member
Activity: 504
Merit: 500
I agree with most of what you're saying, Fred0. I still don't see where ASICs would affect the earnings of an FPGA unit in operation now, unless we assume FPGA builders will not lower their prices as more competition comes in and ASIC units are sold at no markup. The FPGAs are already so low on energy costs that a further reduction there from ASICs would not be enough by itself to make ASICs instantly more economical than FPGAs.
Might ASIC adoption cause difficulty to rise, lowering FPGA earnings?
That would almost equally reduce ASIC earnings as well, since the electricity costs will not be that much lower.
I for one will be watching very closely what is available behind the scenes with ASIC development.
Do you know something about what happens there, behind the scenes? Care to tell us? :)
hehe, I wish I knew something worth sharing, or had the bits of information I do monitor in a form worth sharing. =)


  Cheers
donator
Activity: 532
Merit: 501
We have cookies
I agree with most of what you're saying, Fred0. I still don't see where ASICs would affect the earnings of an FPGA unit in operation now, unless we assume FPGA builders will not lower their prices as more competition comes in and ASIC units are sold at no markup. The FPGAs are already so low on energy costs that a further reduction there from ASICs would not be enough by itself to make ASICs instantly more economical than FPGAs.
Might ASIC adoption cause difficulty to rise, lowering FPGA earnings?

I for one will be watching very closely what is available behind the scenes with ASIC development.
Do you know something about what happens there, behind the scenes? Care to tell us? :)
hero member
Activity: 504
Merit: 500
  I agree with most of what you're saying, Fred0. I still don't see where ASICs would affect the earnings of an FPGA unit in operation now, unless we assume FPGA builders will not lower their prices as more competition comes in and ASIC units are sold at no markup. The FPGAs are already so low on energy costs that a further reduction there from ASICs would not be enough by itself to make ASICs instantly more economical than FPGAs. The only real factor will be initial $/MH, and it's nearly impossible to speculate what the $/MH of ASICs will be. My speculation would be that they will have a higher markup over component build costs due to the massive dev costs involved. This should be more than enough to give FPGA miners time to make up the units' starting price/MH.

  Time will tell. I for one will be watching very closely what is available behind the scenes with ASIC development. But for now, my money is on FPGAs for at least the next 6-9 months. And of course super efficient GPUs, if the price stays up and difficulty does not jump too much. A little note from my hunt for LX150s leads me to believe a lot of batches have sold in the last few months that would not normally have done so - according to the distributors, anyhow. There was of course no mention of who bought them or for what purpose.

  Cheers

  Edit: Reading over your post again, I am pretty sure we're seeing it about the same way. Not sure what I interpreted at first that we disagreed on. ;p
sr. member
Activity: 349
Merit: 250
.... There is a necessity to recover as much of the dev costs as quickly as possible before the next competing product comes out.  And from an investment perspective, sitting on your cash loses you money.
I used even more conservative figures ($2/BTC, $0.20/kWh electricity), and came up with the hardware paid off in just under a year.

The real challenge is forecasting longer than a year.

Block reward halves in Dec 2012 (+/- 1 month)
Greater adoption of bitcoin
Development of ASIC hashing tech

Actually, I am optimistic about the whole thing.

The block reward halving implies the cost to produce a bitcoin will double.  If the bitcoin price were based on the cost to manufacture, bitcoin would double.  I doubt this is likely, but some people really believe it.  I think that supply and demand trumps all.  Since the block reward halving implies 50% of all the bitcoins that will ever exist are in circulation, we should start to see the deflationary effects of bitcoin kick in - not major, but likely minor - nonetheless stabilizing the bitcoin price.

Greater adoption of bitcoin implies greater demand for bitcoin. Greater demand, greater price.

Development of ASIC hashing tech could lower the cost to produce a bitcoin, but I really think the manufacturing cost to produce a bitcoin is not the major factor in the price.  If, tomorrow, we developed a technology to mine gold that costs $0.01 per ounce, would the price of gold plummet?  I think not, since most of the gold has already been mined.  Likewise, unless ASIC technology comes out really soon, it's not likely to have an effect on the bitcoin price.  If it comes out in a year, we will have already hit the 50% mark and the beginning of the deflationary period of the bitcoin lifecycle.  It could still be detrimental to existing miners, but hopefully the hardware will have already been paid for.  A definite risk factor.

Since we are really in an early adoption phase, I think that the bitcoin price will rise enough to allow anyone to invest in FPGA tech (BFL or other) and recover their hardware costs in under a year.

Heaven knows, I'm not an economist; these are just my speculations and the logic I used to arrive at them.

So stop with all the gloom and doom!
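A payback calculator along Fred0's lines; his exact hardware inputs aren't stated, so the hashrate, wattage, unit cost, and difficulty below are placeholders, and difficulty is held constant, which is the weakest assumption here:

Code:
# Simple payback estimate under the conservative price figures above.
BTC_PRICE  = 2.00       # $/BTC (Fred0's figure)
KWH_RATE   = 0.20       # $/kWh (Fred0's figure)
DIFFICULTY = 1_100_000  # placeholder, held constant
hashrate   = 1000e6     # 1 GH/s unit (placeholder)
watts      = 60.0       # placeholder
hw_cost    = 700.0      # placeholder

btc_per_day = hashrate * 86400 / (DIFFICULTY * 2**32) * 50
net_per_day = btc_per_day * BTC_PRICE - watts / 1000 * 24 * KWH_RATE
print(f"~{btc_per_day:.2f} BTC/day, net ${net_per_day:.2f}/day, "
      f"payback ~{hw_cost / net_per_day:.0f} days")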
hero member
Activity: 504
Merit: 500
To put things in real world perspective, let's pit a BFL Bitforce FPGA against an AMD 5970.

Price:
5970: $300-400
BFL: $700

MH/s:
5970: 840MH/s
BFL: 1000MH/s

Power Consumption:
5970: 350W
BFL: 20W

Cost of operation @ $0.10/kWh:
5970: $26.04/mo
BFL: $1.55/mo

Price:
5970: $300-400 (resale to miners @ 6-12 months: 0%, due to 0 ROI from power usage) (resale to gamers with the 7xxx available: ??)
BFL: $700 (resale to miners pre-ASIC: 100%) (cost to repurpose, and value, unknown)

MH/s:
5970: 840MH/s (@ 925MHz using the phatk mod?)
BFL: 900MH/s (so far)

Power Consumption:
5970: 374W (@ 925MHz, 840MH/s) (does not account for non-idle CPU, mobo, or PSU overhead)
BFL: 60W (@ ~900MH/s) (numbers not official, but known to be more than 20W and less than 80W for now) (does not account for near-idle computer usage)

Cost of operation @ $0.10/kWh:
5970: $26.93/mo
BFL: $4.32/mo

Earnings after electricity @ $3.00/BTC @ 1.1mil difficulty:
5970: $43.12/mo (70.05 gross - 26.93)
BFL: $70.73/mo (75.05 gross - 4.32)
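For anyone checking the figures, the monthly numbers above reduce to a few lines of arithmetic (30-day month; the gross earnings are taken as given from the post):

Code:
# Reproduce the revised cost-of-operation and earnings figures (30-day month).
KWH_RATE = 0.10  # $/kWh
for name, watts, gross in [("5970", 374, 70.05), ("BFL", 60, 75.05)]:
    elec = watts / 1000 * 24 * 30 * KWH_RATE
    print(f"{name}: ${elec:.2f}/mo power, ${gross - elec:.2f}/mo net")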


  You were kidding about the gaming/going out thing, right? ;p If you're a gamer, you're not going to be gaming on your mining 5970s - else you earn 0 and still pay for electricity. With BFL, you can still run your games at the same time on a much more efficient video setup :D thus saving money by staying home more. Waiting on ASICs and assuming their creators have no desire to mark up prices to cover dev costs seems haphazard to me. Look at the markup on existing FPGAs.... There is a necessity to recover as much of the dev costs as quickly as possible before the next competing product comes out.  And from an investment perspective, sitting on your cash loses you money.

DISCLAIMER: My statements are not an endorsement of BFL. I am and will remain of the mindset: I'll believe it when I see it...
member
Activity: 86
Merit: 10
Everyone seems to forget that in a year the 50-coin reward per block is going to drop to 25. That makes it pretty difficult to put hard earned money out - when you can't control price or difficulty - knowing that your gross cash flow is going to get cut in half.

What is the reliability going to be like on these home-brewed FPGA designs?
legendary
Activity: 980
Merit: 1008
Mining is going to remain unprofitable until transaction fees kick in. Oh, I nearly forgot: isn't there a design flaw?
Something which makes it unprofitable to share transactions with high rewards?
Mining is supposed to always fluctuate at the border between unprofitable and profitable. That's how users of Bitcoin get the lowest possible fees. Mining profitability will never get back to the level it was at 6 months ago. Back then, all Bitcoin users were financing miners through the inflation of Bitcoins.
TL;DR: paying 50 BTC per block (as we're doing now) is a huge amount when there are no more than 50 transactions in a block. In essence, every Bitcoin user is right now - through inflation - paying a shared per-transaction fee of 1 BTC for everyone's transactions.

Mining right now - assuming Bitcoin doesn't disappear - is probably more profitable than it ever will be again.
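The implicit per-transaction subsidy is simple arithmetic:

Code:
# Inflation subsidy per transaction implied by the figures above.
block_reward  = 50  # BTC minted per block
txs_per_block = 50  # rough count cited above
print(f"~{block_reward / txs_per_block:.0f} BTC of inflation per transaction")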