
Topic: Why is ETH mining more profitable on AMD cards? (Read 675 times)

sr. member
Activity: 2464
Merit: 318


Yes, my bad; I believed what I read here in some other post. I've never personally had an Nvidia card :)

Someone mentioned they solved the ordinary 1080's memory latency by going back to GDDR5, but I found out on the wiki
that they actually did it by making the memory bus wider (352-bit on the 1080 Ti vs the 1080's 256-bit)
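To put rough numbers on that, peak bandwidth is just bus width times per-pin data rate. A quick Python back-of-envelope (the data rates are the published reference specs; partner cards vary):

```python
# Peak memory bandwidth = bus width (bits) * per-pin data rate (Gb/s) / 8.
# Data rates below are reference specs for each card, not guaranteed values.
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

gtx_1070    = peak_bandwidth_gbs(256, 8)    # GDDR5  @ 8 Gb/s  -> 256 GB/s
gtx_1080    = peak_bandwidth_gbs(256, 10)   # GDDR5X @ 10 Gb/s -> 320 GB/s
gtx_1080_ti = peak_bandwidth_gbs(352, 11)   # GDDR5X @ 11 Gb/s -> 484 GB/s
```

So the 1080 Ti's bus is 37.5% wider than the 1080's, and combined with the faster GDDR5X clock it ends up with roughly 50% more raw bandwidth.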
hero member
Activity: 756
Merit: 560
I've seen folks quote specific 1070s at 32-33 Mhash, but they also quote memory overclocks my specific models have never managed.

I have 18 Asus Dual GTX 1070s hashing for me right now. I can get all of them except 2 to break 32 MH/s, but they only seem to run about 72 hours at that speed before crashing. At 29 MH/s they are rock solid and run for weeks on end (at stock clocks they only do 26).

legendary
Activity: 1498
Merit: 1030
A 1070 costs twice as much as AMD cards (while those were being sold at normal prices), so I'd really
expect them to beat them (although Wolf claims his Samsung-based 470s hash at 33 MH/s; I doubt
many 1070s reach that)

1080ti does NOT use gddr5x


 The 1070 has recently been a lot closer in price, but that's due to gouge pricing caused by the miner-driven RX 470/480/570/580 shortages.
 Double at MSRP is about the right ballpark, sometimes even MORE versus the bottom-end "on sale" RX 470s of 5 months ago.

 I've seen folks quote specific 1070s at 32-33 Mhash, but they also quote memory overclocks my specific models have never managed.

 The GTX 1080 AND GTX 1080ti do in fact use GDDR5X (as do the Pascal-based Titan models):

http://nvidianews.nvidia.com/news/a-quantum-leap-in-gaming:-nvidia-introduces-geforce-gtx-1080
http://nvidianews.nvidia.com/news/nvidia-introduces-the-beastly-geforce-gtx-1080-ti-fastest-gaming-gpu-ever


hero member
Activity: 1036
Merit: 606
sr. member
Activity: 2464
Merit: 318
A 1070 costs twice as much as AMD cards (while those were being sold at normal prices), so I'd really
expect them to beat them (although Wolf claims his Samsung-based 470s hash at 33 MH/s; I doubt
many 1070s reach that)

1080ti does NOT use gddr5x

legendary
Activity: 1498
Merit: 1030
AMD is not "stronger than NVidia" on ETH hashing.
They're actually pretty close on performance; typical numbers for NVidia are in this ballpark:

 1060     22 Mhash (same as a non-bios modded 470/570 and very close to a non-modded 480/580 depending on memory speed of the RX card)
 1070     30 Mhash (ones that memory overclock WELL beat ANY RX series card even with BIOS mods).
 1080     26 Mhash (loses to the 1070 despite having a LOT more cores and higher memory bandwidth due to the latency of GDDR5x vs GDDR5)
 1080ti   35 Mhash (GDDR5x hurts again, but the 1080ti is STILL faster than any AMD card by a narrow margin except PERHAPS the Vega).

 Like with AMD cards, these numbers will vary depending on how well you can overclock RAM and how far you drop TDP/undervolt.

 The issue prior to the AMD "gouge pricing" period is that the NVidia cards that beat or close-to-matched the RX 470/480/570/580 on ETH hashrate COST more per hash - the 1080ti in particular cost 2-3 TIMES as much per hash back when you could find AMD cards for MSRP or close.

 ETH is very picky about memory bandwidth AND memory latency - which is what hammers the Fury and Vega lines, and hammers the GTX 1080/1080ti almost as hard.
 It's also why the R9 290x/390x were little if any faster than the R9 290/390, despite the x models having quite a few more cores.
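For the curious, here's a stripped-down toy model of Ethash's inner loop (hashimoto) in Python. The hash function and DAG size are simplified stand-ins (real Ethash uses Keccak and FNV over a DAG past 2 GB), but the access pattern is the point:

```python
import hashlib

ACCESSES = 64     # pseudo-random DAG fetches per hash, as in real Ethash
MIX_BYTES = 128   # bytes fetched per access -> 8 KB of DAG traffic per hash

def toy_hashimoto(header: bytes, nonce: int, dag: bytes) -> bytes:
    """Toy Ethash inner loop: each fetch address depends on the previous
    mix, so reads are unpredictable and uncacheable - the GPU's cores
    spend most of their time waiting on memory, not computing."""
    n_pages = len(dag) // MIX_BYTES
    mix = hashlib.sha3_256(header + nonce.to_bytes(8, "little")).digest()
    for _ in range(ACCESSES):
        # Next page index comes out of the current mix state.
        page = int.from_bytes(mix[:4], "little") % n_pages
        chunk = dag[page * MIX_BYTES:(page + 1) * MIX_BYTES]
        mix = hashlib.sha3_256(mix + chunk).digest()
    return mix
```

Because each address depends on the previous fetch, there's no prefetching your way out of it; the only levers are raw bandwidth, latency, and keeping enough hashes in flight to hide the stalls, which is exactly what RAM overclocks and tighter memory timings buy you.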

sr. member
Activity: 463
Merit: 250
It is not. Nvidia cards are actually stronger on this algo.

The only reason AMD has a better hash/price ratio is that the Dagger algo uses a lot of memory and not that much core compute. Neither Nvidia nor AMD actually produces its own memory, only the GPU chips. That is the reason AMD is not total shit on this algo.

Being able to edit memory timings in the BIOS helps as well.
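The back-of-envelope supporting that: each Ethash hash does 64 random 128-byte DAG reads, so hashrate is capped by memory bandwidth long before core count matters. A sketch (the card figures in the comment are reference specs, used here for illustration):

```python
ACCESSES_PER_HASH = 64    # random DAG reads per Ethash hash
BYTES_PER_ACCESS = 128    # bytes per read -> 8 KB of DAG traffic per hash

def required_bandwidth_gbs(hashrate_mhs: float) -> float:
    """DAG read traffic (GB/s) needed to sustain a given hashrate (MH/s)."""
    return hashrate_mhs * 1e6 * ACCESSES_PER_HASH * BYTES_PER_ACCESS / 1e9

# An RX 480 doing ~28 MH/s needs ~229 GB/s of reads; the reference 8 GB
# card peaks at 256 GB/s (8 Gb/s GDDR5 on a 256-bit bus). The memory bus
# is running nearly flat out while much of the core sits idle.
rx_480_demand = required_bandwidth_gbs(28)
```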
sr. member
Activity: 613
Merit: 305
I am curious to know the technical answer to this question: why is AMD stronger than NVIDIA at calculating the Ethash algorithm?

Is there any relevant GPU compute-core difference that makes this possible?

Or is it just a matter of OpenCL device drivers?
Maybe the way NVIDIA implemented their drivers makes it inefficient to calculate the ETH algorithm, so that AMD comes out stronger on ETH.

Don't be afraid of being too technical while answering; I want to dig as low-level as possible into this topic.