Since this isn't an official thread, I'll join the discussion.
Optimization is more important than rushing to a smaller process size.
Bitmine 28nm chips on low power: 0.35 W/GH
ASICMiner 40nm chips on low power: 0.2 W/GH
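To put those two figures in perspective, here's a rough sketch of what the difference means in electricity cost. The $0.10/kWh rate and the 1 TH/s farm size are hypothetical assumptions, not numbers from this thread:

```python
# Rough electricity-cost comparison for the two efficiency figures above.
# The $0.10/kWh rate and 1 TH/s (1000 GH/s) farm size are made-up examples.

def daily_cost_usd(efficiency_w_per_gh, hashrate_gh=1000, kwh_price=0.10):
    """Daily electricity cost (USD) for a farm at the given chip efficiency."""
    watts = efficiency_w_per_gh * hashrate_gh
    kwh_per_day = watts * 24 / 1000
    return kwh_per_day * kwh_price

bitmine_28nm = daily_cost_usd(0.35)    # 0.35 W/GH
asicminer_40nm = daily_cost_usd(0.20)  # 0.20 W/GH
print(f"28nm: ${bitmine_28nm:.2f}/day, 40nm: ${asicminer_40nm:.2f}/day")
```

Over a year at scale that gap adds up, which is the whole point about optimization mattering more than the node.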
Although the AM specs are only simulated, I think they will end up pretty close.
Saving money as well as time is why they went with 40nm. Why spend 10 million in NRE to make 20nm chips when you can make equally efficient 40nm chips?
I have no doubt that eventually smaller process sizes will come out on top, but not with these chips (maybe v2). We will probably have 14nm chips by the time a 20/28nm chip is fully optimized.
With all that said, it does look like Bitmine has the most efficient chips as of now. Much more impressive than Cointerra/Hashfast.
Comparing estimated power consumption with actual measurements is unreasonable; this is not about cheating or anything like that.
This is because estimating a chip's exact dynamic power is very hard. An order-of-magnitude estimate is not bad on a new process node (<90nm).
Fully agree. Right now, the manufacturers are in an arms race, and because time to market is so important, they are taking shortcuts. This in itself is not egregious, but as things slow down and sanity rears its head, things will change.
I think the next iteration of ASIC design will not be aimed so much at a lower process node, but at designing from the ground up for efficiency. It is my understanding (correct me if I'm wrong) that most, if not all, of the current bitcoin ASICs are essentially miniaturized FPGA arrays rather than complete ground-up designs, so they have a lot of room for reducing redundancies, which would free up a lot of silicon real estate. Die shrinks make them bigger and badder fast, but they are not truly optimized. Since the ASIC essentially has one relatively simple task, I suspect that we will soon see chips with much better efficiencies and higher hashrates. But it will probably be a year or more, as that's a much more difficult proposition than what is currently being done.
I truly hope the preorder madness has run its course. Without that perverse incentive, companies will have to focus more on quality and innovation.
There are many poor designs, but we only see a few of them on the market. In fact, the present designs are pretty good; most optimization methods have already been used.
The Bitmine chips might be that efficient on paper, but E's review of his 600GH Coincraft Desk shows it taking 1.08 J/GH at the wall in normal mode. So right now Bitmine's power efficiency, measured at the wall, is about on a par with KNC or Bitfury.
As for low power mode, E's figures show barely any improvement over normal mode if I'm reading them right (down from 1.08 to 1.06 J/GH). So looks like no real low power mode, at least for now. Of course, this is probably because the chips are not being undervolted, which is needed for low power mode.
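For anyone checking those numbers: J/GH at a given hashrate translates directly into watts, since 1 J/GH × 1 GH/s = 1 W. A quick sketch using the Coincraft Desk figures quoted above:

```python
# Sanity check on the wall figures above: joules-per-gigahash times
# hashrate in GH/s gives watts directly (J/GH * GH/s = J/s = W).

def wall_watts(j_per_gh, hashrate_gh):
    """Wall power draw implied by an efficiency figure and a hashrate."""
    return j_per_gh * hashrate_gh

normal = wall_watts(1.08, 600)     # 600 GH Coincraft Desk, normal mode
low_power = wall_watts(1.06, 600)  # claimed "low power" mode
print(f"normal: {normal:.0f} W, low power: {low_power:.0f} W")
```

That works out to roughly 648 W vs 636 W, i.e. only about a 2% difference, which is why the low power mode looks like no real low power mode.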
I'd like to believe this will be fixed in a future firmware release, but does anyone know whether the hardware currently shipping actually supports changing the core voltage?
roy
ETA: Anyone know what efficiencies Cointerra and Hashfast are getting at the wall?
Running at a lower voltage will significantly increase power efficiency. At 1V, Avalon gen2 is 2.5 W/GH on chip; at 0.8V it is 1.45 W/GH, a 42% improvement.
But on lower process nodes the nominal voltage is already down to around 0.9V, and lowering it further to 0.7V would cause extremely slow clock speeds.
Also, the chips themselves are extremely expensive, so running them at a low clock speed is an obvious loss.
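The 42% figure is roughly consistent with the standard dynamic-power model for CMOS. This is a back-of-the-envelope check under that assumption, not a claim about the actual Avalon silicon:

```python
# Dynamic CMOS switching energy per operation scales roughly as C * V^2,
# so W/GH (energy per hash) should scale with V^2 to first order.
# Comparing against the Avalon gen2 numbers quoted above:
#   voltage alone predicts (0.8/1.0)^2 = 0.64 of the original, a 36% drop;
#   the measured drop (2.5 -> 1.45 W/GH) is a bit larger, plausibly from
#   reduced leakage at the lower voltage.

v_nominal, v_low = 1.0, 0.8
predicted_ratio = (v_low / v_nominal) ** 2  # V^2 scaling prediction
measured_ratio = 1.45 / 2.5                 # from the figures above
print(f"predicted savings: {1 - predicted_ratio:.0%}, "
      f"measured savings: {1 - measured_ratio:.0%}")
```

It also shows why going below ~0.7V stops paying off: the quadratic savings shrink while the clock (and therefore hashrate per expensive chip) collapses.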