A GTX 1070 does 5.25 GH/s on XVC (Blake-256 8-round), pulling 150 W - that works out to 35 MH/s/W.
I'd like to stress that it's on a 16nm node. With my full-custom design on one of my 28nm FPGAs, I get 2.1 GH/s at 24 W - that works out to 87.5 MH/s/W.
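For reference, here's a minimal sketch of that efficiency arithmetic using the figures quoted above (the helper function name is just for illustration):

```python
# Hashrate-per-watt comparison using the figures quoted above.
# Efficiency (MH/s/W) = hashrate in MH/s divided by power draw in watts.

def efficiency_mhs_per_watt(hashrate_ghs: float, power_w: float) -> float:
    """Convert a GH/s hashrate and a wattage into MH/s per watt."""
    return (hashrate_ghs * 1000.0) / power_w

gpu = efficiency_mhs_per_watt(5.25, 150)   # GTX 1070 on Blake-256 8-round -> 35.0
fpga = efficiency_mhs_per_watt(2.10, 24)   # 28nm FPGA full-custom design  -> 87.5

print(f"GPU:  {gpu:.1f} MH/s/W")
print(f"FPGA: {fpga:.1f} MH/s/W")
print(f"FPGA is {fpga / gpu:.1f}x more efficient")
```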
As it is, this fight is one-sided. If they had been manufactured on the same node, it wouldn't be a fight - it would be an execution.
Well, it's not fair to compare a GPU with an FPGA. An FPGA can only do one thing well at a time, and then you need to reprogram it; a GPU can do multiple things.
It will consume more energy because of that. If GPUs were specialized only for mining, they would just be ASICs, so yes, it's not all about the nm process node.
Yeah, but my point was, from a mining perspective, a GPU and an FPGA can both mine many algos. The FPGA may be somewhat more restricted in selection, but it can still switch. So comparing raw hash/watt as a measure of merit is faulty, unless what I pointed out holds as well.
FPGAs are in a completely different class... You could lump ASICs into that comparison too. They also can mine multiple algos... They just have to be built from the ground up each and every time. To that extent, so do miners for GPUs (depending on how different the algo is from other ones already made), but the time requirement is quite a bit different.
ASICs can't mine multiple algos unless they're designed to from the start. You can buy an FPGA ONCE and reprogram it - a GPU is closer to this. You don't have to get a new GPU for every algo.
Sure, but oftentimes you have to completely reprogram the thing from the ground up. They're both in a different class of products from GPUs.
You do realize GPUs are pretty much the same, except they expose an instruction set, correct?
It comes down to memory, horsepower (computational units), the instruction sets they support, and the operating environment. Even if you can do one thing really well with an FPGA (much like an ASIC), that doesn't mean it'll do everything else equally well. There's a reason FPGAs have always been the stepping stone to ASICs: if you're going to take the time to program an FPGA, you can take it one step further and design the chip too, which adds a lot more flexibility when it comes to efficiency and raw horsepower (more of whatever you need to produce a certain hashrate, less of whatever isn't being used) and lets your clientele implement them easily (a box you plug in). The level of expertise you need goes up quite a bit hopping from GPU > FPGA > ASIC.
Like I said, there are pretty much three classes: GPUs, ASICs, and FPGAs. FPGAs are like the experimentation ground for ASICs. There are CPUs too, but GPUs can do almost everything CPUs can, only better, especially when it comes to cryptos.