
Topic: Estimated Hash Rates for the RTX 2080 and RTX 2080 Ti - page 2. (Read 11505 times)

jr. member
Activity: 392
Merit: 5
Would it be worth upgrading from a 1080 Ti?

For gamers, sure.

For miners, I don't think so.

I think it's too early to draw conclusions about the effectiveness of the new video cards. To begin with, we need to know the hashrate and power consumption on different algorithms, and then we can make a decision.
jr. member
Activity: 252
Merit: 4
Would it be worth upgrading from a 1080 Ti?

For gamers, sure.

For miners, I don't think so.
newbie
Activity: 10
Merit: 0
After crypto prices fell:

Two RX 570 cards can be bought here for about 1,600 zł (PLN), which is roughly $435.

They easily hash about 55-60 MH/s.

Used ones are even 20% cheaper.

Even with the pill, miners will not buy the 2xxx series for $1,000.
legendary
Activity: 3318
Merit: 1247
Bitcoin Casino Est. 2013
The data are not that impressive considering the price of an RTX 2080 Ti, which is over 1,000 dollars. Given the current rewards from GPU mining, it is not worth spending that amount of money on a card that only mines at less than 50 MH/s. It may be the best card for mining Ethereum yet, but it is still not worth over 1,000 dollars for a single card.

If you want to sell it after some time, it is still a fifty-fifty proposition in my opinion.
full member
Activity: 728
Merit: 169
What doesn't kill you, makes you stronger
If they developed an EthLargement pill for the newer Turing architecture, we might speculate a 50% increase in hashrate at most, or around 84 MH/s.
The 1080 Ti EthLargement Pill works specifically by editing the memory timings of GDDR5X memory, though, and the 2080 Ti uses GDDR6, so I'm not sure it would apply in the same way. By the same argument they could make an EthLargement Pill for the 1060/1070 and plain GDDR5 memory too, but they haven't. I'm not well versed enough in the implementation to argue for or against, but yes, you could be right.
But they've made it for "older cards" too (3:00 of the interview: https://youtu.be/ZLTRYp_kCYg?t=180); it's a private tool, though, and they sell it to companies.
Whether they'll release a 2080 Ti tool for free... I don't know, but they could if they wanted to.
jr. member
Activity: 68
Merit: 6
If they developed an EthLargement pill for the newer Turing architecture, we might speculate a 50% increase in hashrate at most, or around 84 MH/s.
The 1080 Ti EthLargement Pill works specifically by editing the memory timings of GDDR5X memory, though, and the 2080 Ti uses GDDR6, so I'm not sure it would apply in the same way. By the same argument they could make an EthLargement Pill for the 1060/1070 and plain GDDR5 memory too, but they haven't. I'm not well versed enough in the implementation to argue for or against, but yes, you could be right.
newbie
Activity: 106
Merit: 0
Would it be worth upgrading from a 1080 Ti?

sure.
copper member
Activity: 2
Merit: 0
Would it be worth upgrading from a 1080 Ti?
sr. member
Activity: 728
Merit: 252
Healing Galing

If that's true and the RTX 2080 Ti only hashes at 50 MH/s or less, that would make it just as fast as a GTX 1080 Ti with the ETHLargement Pill, correct? That would make it a massive flop for mining ETH, at least.
If they developed an EthLargement pill for the newer Turing architecture, we might speculate a 50% increase in hashrate at most, or around 84 MH/s.
jr. member
Activity: 68
Merit: 6

If that's true and the RTX 2080 Ti only hashes at 50 MH/s or less, that would make it just as fast as a GTX 1080 Ti with the ETHLargement Pill, correct? That would make it a massive flop for mining ETH, at least.
legendary
Activity: 1946
Merit: 1006
Bitcoin / Crypto mining Hardware.
member
Activity: 93
Merit: 41
There are two current algos I am aware of which use matrix multiply: Tensority and Groestl.

I posted some details on them wrt Turing's new features in this post: https://bitcointalksearch.org/topic/m.44769341
full member
Activity: 728
Merit: 169
What doesn't kill you, makes you stronger
Note that the 110 FP16 tflops of performance of the tensor cores is in one specific operation only: 4x4 matrix multiplication and accumulate. That's all tensor cores can do. It's essentially an ASIC which is designed to take two 4x4 matrices of FP16 values and multiply them, accumulating the result into a 4x4 matrix of FP32 values. You can't use a tensor core for anything else except to do matrix multiply and accumulate.

That's why it's rated at such a high TFLOPs number: its hardware has been designed to do only matrix multiply and accumulate. It has no other functional use beyond that; you can't reprogram a tensor core to perform any other operation. Think of a tensor core like an S9 ASIC, except that instead of SHA-256, all it does is 4x4 matrix multiply and accumulate.

On the other hand, Xilinx logic cells can be reconfigured to perform different operations, it's completely flexible, hence VHDL/Verilog development. In fact, using something like Xilinx's SDAccel you can write a C++/OpenCL program and have it built into a bitstream to run on a Xilinx FPGA.

Oh... I see! Good explanation!
Then I guess a specific algo would have to be built to take advantage of this operation. A software developer who creates miners probably wouldn't be able to use these cores for the current algos.
member
Activity: 93
Merit: 41
Note that the 110 FP16 tflops of performance of the tensor cores is in one specific operation only: 4x4 matrix multiplication and accumulate. That's all tensor cores can do. It's essentially an ASIC which is designed to take two 4x4 matrices of FP16 values and multiply them, accumulating the result into a 4x4 matrix of FP32 values. You can't use a tensor core for anything else except to do matrix multiply and accumulate.

That's why it's rated at such a high TFLOPs number: its hardware has been designed to do only matrix multiply and accumulate. It has no other functional use beyond that; you can't reprogram a tensor core to perform any other operation. Think of a tensor core like an S9 ASIC, except that instead of SHA-256, all it does is 4x4 matrix multiply and accumulate.
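The one operation described above can be simulated in plain NumPy. This is only a sketch of the math the unit performs, not the actual CUDA WMMA API: two 4x4 FP16 inputs, multiplied and accumulated into FP32.

```python
import numpy as np

# Simulate the single operation a tensor core performs: D = A @ B + C,
# with A and B as 4x4 FP16 matrices and the accumulator C/D in FP32.
A = np.arange(16, dtype=np.float16).reshape(4, 4)
B = np.eye(4, dtype=np.float16)        # identity, so here D equals A + C
C = np.zeros((4, 4), dtype=np.float32)

# The hardware widens the FP16 products into an FP32 accumulator;
# casting before the matmul mirrors that behaviour.
D = A.astype(np.float32) @ B.astype(np.float32) + C

print(D.dtype)  # float32
```

On real hardware this single fused multiply-accumulate is what the 110 TFLOPs figure measures; in CUDA it is exposed through warp-level matrix intrinsics rather than anything freely reprogrammable.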

On the other hand, Xilinx logic cells can be reconfigured to perform different operations, it's completely flexible, hence VHDL/Verilog development. In fact, using something like Xilinx's SDAccel you can write a C++/OpenCL program and have it built into a bitstream to run on a Xilinx FPGA.
full member
Activity: 728
Merit: 169
What doesn't kill you, makes you stronger
I think the RTX's tensor cores are much faster than a Xilinx FPGA.

Let me try calculate that...

For starters, we can ignore the RT cores; they do one very specific job and I don't expect them to have any use in mining.
 - Nvidia's Tensor cores on a 2080 Ti have an FP16 (aka half precision) compute performance of 110 TFLOPs!
 - Xilinx's most powerful FPGA can offer 10,948 GFLOPs, or almost 11 TFLOPs (source: https://www.xilinx.com/products/technology/dsp.html#solution)

Xilinx doesn't mention which model produced this figure, only that it's from the UltraScale+ family. The fastest FPGA available for mining is this one: https://store.mineority.io/sqrl/cvp13/ which costs $6,370 before tax, and if you import it into Europe... I don't want to think about it. That miner uses the Virtex UltraScale+ VU13P.

Nvidia's Tensor cores can also offer mixed-precision computation which, to be honest, I have no idea what it is, whether Xilinx offers it, or whether it matters for mining!

So as far as pure compute performance per dollar goes, the RTX 2080 Ti puts everything else miles behind... and people call the RTX series overpriced.
I wish someone could go into more detail about how important the above numbers are and which algorithms would benefit most from them, because I'm sure TFLOPs are NOT the only factor.


For comparison, the 1080 Ti offers 13 TFLOPs FP32 (aka single precision), shader GPU only. I don't know whether the 1080 Ti can compute FP16, but if it could, that would be 26 TFLOPs (double the FP32).
The 2080 Ti's shader GPU offers 16 TFLOPs FP32 on top of the Tensor cores I mentioned above.

PS: It took me more than half an hour to gather all these numbers from valid sources!
member
Activity: 93
Merit: 41
Yes that's true, but when using the memory bandwidth values to estimate Ethash hashrates, the value obtained is actually a theoretical zero-latency hashrate. But since Ethash takes each DAG sample from a pseudo-random location, latency actually has a substantial effect. So one conservatively assumes that the overclocking is used largely to make up for the actual latencies as compared to the theoretical zero-latency hashrate.

So for example, a 192-bit bus GTX 1060 with 8 Gbps memory has a theoretical zero-latency Ethash hashrate of around ~23.4 MH/s. An overclock to 9 Gbps would raise this to a theoretical zero-latency rate of ~26.4 MH/s. But due to the actual latencies involved, what we see in reality is hashrates closer to the ~23 MH/s value.
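The zero-latency estimate above can be reproduced from first principles: each Ethash hash performs 64 pseudo-random 128-byte DAG reads, i.e. 8,192 bytes of memory traffic per hash, so the bandwidth ceiling is simply bandwidth divided by 8,192. A minimal sketch:

```python
# Theoretical zero-latency Ethash hashrate from memory bandwidth.
# Each hash = 64 DAG accesses x 128 bytes = 8192 bytes of memory traffic.
BYTES_PER_HASH = 64 * 128

def zero_latency_mhs(bus_bits: int, gbps_per_pin: float) -> float:
    bandwidth = (bus_bits / 8) * gbps_per_pin * 1e9  # bytes per second
    return bandwidth / BYTES_PER_HASH / 1e6          # MH/s

print(zero_latency_mhs(192, 8.0))  # GTX 1060 at 8 Gbps -> ~23.4
print(zero_latency_mhs(192, 9.0))  # overclocked to 9 Gbps -> ~26.4
```

As the post says, the ~3 MH/s the overclock promises on paper is mostly eaten by real-world memory latency.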
member
Activity: 413
Merit: 17
Those 14 Gbps chips can probably be overclocked to at least 15 Gbps, just like most GDDR5 can reach 9 Gbps. Perhaps 14 Gbps is simply the lowest speed at which 100% of chips are stable.
member
Activity: 93
Merit: 41
The memory bandwidths you listed above are for 16 Gbps GDDR6. The RTX cards ship with 14 Gbps memory; the bandwidths are: 2080 Ti (352-bit bus) = 616 GB/s, 2080 (256-bit bus) = 448 GB/s.
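These bandwidth figures follow directly from bus width times per-pin data rate; a quick check of the numbers quoted in this thread:

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8) * per-pin Gbps.
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return (bus_bits / 8) * gbps_per_pin

print(bandwidth_gbs(352, 14))  # RTX 2080 Ti -> 616.0 GB/s
print(bandwidth_gbs(256, 14))  # RTX 2080    -> 448.0 GB/s
print(bandwidth_gbs(384, 16))  # 16 Gbps GDDR6 on a 384-bit bus -> 768.0 GB/s
```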
jr. member
Activity: 208
Merit: 3
The Ethash speed is limited by the memory bandwidth.
With a GDDR6 384-bit memory bus you have ~768 GB/s bandwidth --> ~90 MH/s ETH.
With a GDDR6 256-bit memory bus you have ~512 GB/s bandwidth --> ~60 MH/s ETH.
hero member
Activity: 1190
Merit: 641
Test results should be released in the near future, but they will not reflect all the capabilities of the new generation of video cards.
After a few months, developers will have optimized the software for the new GPU architecture and memory, and then we will see real results.

Here are a few tests of the RTX 2080 Ti & RTX 2080:
https://videocardz.com/77983/nvidia-geforce-rtx-2080-ti-and-rtx-2080-official-performance-unveiled