I think the RTX cards with their Tensor cores are much faster than Xilinx FPGAs.
Let me try to calculate that...
For starters, we can ignore the RT (ray tracing) cores; they do a very specific kind of job and I don't expect them to have any use in mining.
-
Nvidia's Tensor cores on a 2080 Ti have an FP16 (aka half precision) computation performance of 110 TFLOPs!
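For anyone wondering where a number like 110 TFLOPs comes from, here's a rough sanity check in Python. The tensor core count (544) and the ~1545 MHz reference boost clock are my own assumptions about the 2080 Ti, not something from the post or the Xilinx page; the point is just that cores x ops-per-clock x clock lands in the right ballpark.

```python
# Back-of-the-envelope check of the ~110 TFLOPs FP16 tensor figure.
# Assumptions (mine, not from the post): 544 tensor cores, ~1545 MHz boost,
# each tensor core doing 64 FP16 fused multiply-adds per clock (= 128 FLOPs).

tensor_cores = 544            # RTX 2080 Ti (TU102): 68 SMs x 8 tensor cores
fma_per_core_per_clock = 64   # 4x4x4 matrix fused multiply-add per clock
flops_per_fma = 2             # one multiply + one add
boost_clock_hz = 1_545e6      # reference boost clock; real cards boost higher

tflops = tensor_cores * fma_per_core_per_clock * flops_per_fma * boost_clock_hz / 1e12
print(f"Peak FP16 tensor throughput: {tflops:.1f} TFLOPs")  # ~107.6, close to the quoted 110
```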
Xilinx's most powerful FPGA can offer 10,948 GFLOPs, or almost 11 TFLOPs (source: https://www.xilinx.com/products/technology/dsp.html#solution). Xilinx doesn't mention which model produced this figure, only that it's from the UltraScale+ family. The fastest FPGA board available for mining is this one:
https://store.mineority.io/sqrl/cvp13/ which costs $6,370 before tax, and if you import it into Europe... I don't want to think about it. This FPGA miner uses the Virtex UltraScale+ VU13P.
Nvidia's Tensor cores also offer mixed-precision computation, which, to be honest, I have no idea what it is, whether Xilinx offers something similar, or whether it matters for mining!
So as far as pure computation performance per $ goes, the RTX 2080 Ti leaves everything else miles behind... and people call the RTX series overpriced.
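To put a number on the per-$ claim, here's a quick sketch. The FPGA price is the $6,370 from the store link above; the 2080 Ti price is my assumption (roughly the $1,199 Founders Edition launch price) since I didn't quote one, and it compares FP16 tensor TFLOPs against the FPGA's quoted DSP GFLOPs, so treat it as a rough illustration only.

```python
# Rough TFLOPs-per-dollar comparison. The GPU price is an assumption, and
# FP16 tensor TFLOPs vs. the FPGA's quoted GFLOPs is not apples-to-apples.

devices = {
    "RTX 2080 Ti (FP16 tensor)": {"tflops": 110.0, "price_usd": 1199.0},  # assumed FE launch price
    "VU13P FPGA board":          {"tflops": 11.0,  "price_usd": 6370.0},  # store price before tax
}

for name, d in devices.items():
    gflops_per_dollar = d["tflops"] * 1000 / d["price_usd"]
    print(f"{name}: {gflops_per_dollar:.1f} GFLOPs per dollar")

# RTX 2080 Ti (FP16 tensor): ~91.7 GFLOPs per dollar
# VU13P FPGA board:          ~1.7  GFLOPs per dollar
```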
I wish, however, someone could go into more detail about how important the above numbers are and which algorithms would benefit most, because I'm sure TFLOPs are NOT the only factor.
For comparison, the 1080 Ti offers 13 TFLOPs FP32 (aka single precision) from the shader GPU only. I don't know if the 1080 Ti can compute FP16 at full rate, but if it could, that'd be 26 TFLOPs (double the FP32).
The 2080 Ti's shader GPU offers 16 TFLOPs FP32, on top of the Tensor cores I mentioned above.
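As a sanity check on the shader numbers: FP32 throughput is just CUDA cores x 2 FLOPs per clock (fused multiply-add) x clock speed. The core counts below are the published specs; the ~1.8 GHz clock is my assumption for real-world boost, which roughly reproduces the 13 / 16 TFLOPs figures (the spec-sheet boost clocks give somewhat lower numbers).

```python
# FP32 shader throughput = CUDA cores * 2 FLOPs/clock (FMA) * clock.
# The ~1.8 GHz clock is an assumed real-world boost, not the spec-sheet value.

def fp32_tflops(cuda_cores: int, clock_ghz: float) -> float:
    return cuda_cores * 2 * clock_ghz / 1000

print(f"GTX 1080 Ti: {fp32_tflops(3584, 1.8):.1f} TFLOPs FP32")  # ~12.9
print(f"RTX 2080 Ti: {fp32_tflops(4352, 1.8):.1f} TFLOPs FP32")  # ~15.7
```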
PS: It took me more than half an hour to gather all these numbers from valid sources for this post! omg