Author

Topic: Mining Performance Evaluation (Read 2472 times)

newbie
Activity: 21
Merit: 0
July 23, 2013, 10:26:43 AM
#7
I am currently going with AMD 7950 cards, as they seem like the best price/performance cards out there right now.
newbie
Activity: 14
Merit: 0
July 22, 2013, 11:56:31 AM
#6
Quote
Interesting. I'd like to know whether a similar trend exists for Nvidia cards.

As would I, since I have a lot of Nvidia hardware for gaming and not many AMD/ATI cards.
member
Activity: 70
Merit: 10
July 22, 2013, 03:11:06 AM
#5

Quote
- (0.2935) is a scale factor.
And this scale factor is almost exactly 1 GHz / 1 MHps / 3375 integer ops = 0.3034; see https://bitcointalksearch.org/topic/m.550288

- Yes, the difference between the above numbers is only 3%, which is probably within the margin of error.
hero member
Activity: 524
Merit: 500
July 22, 2013, 02:40:50 AM
#4
M (MHps) = 0.2935 x (Shaders x Frequency In GHz)

where:

- (M) is the BTC mining speed in megahashes per second;
- (Shaders x Frequency) is the number of shaders on the GPU multiplied by their stock frequency in gigahertz (GHz);
- (0.2935) is a scale factor.
And this scale factor is almost exactly 1 GHz / 1 MHps / 3375 integer ops = 0.3034; see https://bitcointalksearch.org/topic/m.550288
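
A minimal Python sketch of this comparison, using only the two scale factors quoted above (the variable names are just illustrative):

Code:
# Compare the scale factor fitted from the plot (0.2935) with the value
# derived from the integer-op count in the linked post (0.3034).
fitted = 0.2935   # MHps per (shader x GHz), from the linear fit below
derived = 0.3034  # MHps per (shader x GHz), from the integer-op estimate

relative_difference = abs(derived - fitted) / fitted
print(f"relative difference: {relative_difference:.1%}")  # roughly 3 %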
member
Activity: 70
Merit: 10
June 11, 2013, 02:53:31 AM
#3

Quote
I'd like to know whether a similar trend exists for Nvidia cards.

- There is plenty of data on Bitcoin mining performance available here:

https://en.bitcoin.it/wiki/Mining_Hardware_Comparison

But as a rule, Nvidia GPUs are significantly slower at BTC mining due to the specifics of Nvidia's graphics hardware architecture.

That's why people prefer mining on AMD graphics boards... :)
newbie
Activity: 1
Merit: 0
June 11, 2013, 02:36:00 AM
#2
Interesting. I'd like to know whether a similar trend exists for Nvidia cards.
member
Activity: 70
Merit: 10
June 11, 2013, 12:46:21 AM
#1

Well, guys and gals, as the time of GPU mining seems to be close to its logical end and the first ASIC miners are making their way to their first happy customers, I've decided to publish a small piece of research on the factors that really affect the mining performance of modern GPUs.

My idea was that for a class of task that parallelizes well, such as the SHA-256-based Bitcoin mining algorithm, only two GPU parameters should affect the total chip speed. I assumed that these parameters are:

- the number of GPU-specific processors, the so-called 'shaders' (shader units);
- the clock frequency at which these shader units are able to run.

So, it is possible to evaluate the total computing power P of every GPU by a synthetic number: the number of shader units N multiplied by the clock frequency of the GPU at 100% load F:

P = N x F

- For example, the AMD Radeon HD 6670 has 480 shaders working at an 800 MHz (0.8 gigahertz) clock frequency in full-speed mode, i.e. at 100% GPU load.
So the computational power of the Radeon HD 6670 may be evaluated by the formula (480 shaders x 0.8 GHz) = 384 'theoretical' units of computing power.

- In comparison, the AMD Radeon HD 7970, one of the most powerful GPUs on the current market, has 2048 shaders working at a 925 MHz (0.925 gigahertz) clock frequency at full GPU load.
Its power may be evaluated as (2048 shaders x 0.925 GHz) = 1894 units of computational power, so the AMD Radeon HD 7970 looks to be about five times more powerful a hardware solution than the AMD Radeon HD 6670.
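
A minimal Python sketch of this evaluation, using the shader counts and stock clock frequencies quoted above (the helper name is just illustrative):

Code:
# Theoretical computing power P = N x F for the two example cards.
def theoretical_power(shaders, freq_ghz):
    """Return P = number of shader units multiplied by clock frequency in GHz."""
    return shaders * freq_ghz

p_6670 = theoretical_power(480, 0.800)   # Radeon HD 6670 -> 384 units
p_7970 = theoretical_power(2048, 0.925)  # Radeon HD 7970 -> about 1894 units

print(p_6670, p_7970, p_7970 / p_6670)   # the ratio comes out to roughly 5x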

The next question is: how can these 'theoretical units of power' be compared with the real productivity of a GPU on a task such as, for example, Bitcoin mining?

The answer is simple: draw a 2D plot where the vertical axis represents the real computing power of the GPU on the task of Bitcoin mining, in megahashes per second (M, MHps), while the horizontal axis represents the theoretical computing power of the GPU calculated by the method described above.

So, for every modern GPU we can get a simple pair of numbers and use them as 2D plot coordinates: theoretical power P and real power M.

Here is the final table:

[table image]

and the plot:

[plot: M (MHps) versus theoretical power P]

As we can see from the graph above, the plot of M versus P is pretty linear, which shows a direct dependence between the calculated 'theoretical computing power' and the real-world performance measured, in megahashes per second, on actual hardware.
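
For anyone who wants to reproduce the fit, here is a minimal Python sketch; the measured MHps values below are placeholders only and should be filled in with real numbers, for example from the wiki table linked earlier in the thread:

Code:
# Least-squares fit of M = k * P through the origin.
# Each entry is (theoretical power P = shaders x GHz, measured speed M in MHps).
# The M values are placeholders -- substitute real measurements before fitting.
data = [
    (384.0, 0.0),    # Radeon HD 6670: put its measured MHps here
    (1894.4, 0.0),   # Radeon HD 7970: put its measured MHps here
]

k = sum(p * m for p, m in data) / sum(p * p for p, _ in data)
print(f"fitted scale factor: {k:.4f}")  # the fit over the full table gave 0.2935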

As a result, you can now evaluate the mining speed/performance of every GPU in the AMD Radeon 6xx0 and 7xx0 series, and probably of the forthcoming 8xx0 series too, via a simple linear formula:

M (MHps) = 0.2935 x (Shaders x Frequency In GHz)

where:

- (M) is the BTC mining speed in megahashes per second;
- (Shaders x Frequency) is the number of shaders on the GPU multiplied by their stock frequency in gigahertz (GHz);
- (0.2935) is a scale factor.
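
A minimal Python sketch applying this formula, using the stock shader counts and clocks quoted for the two example cards above (the helper name is just illustrative):

Code:
# Estimate BTC mining speed from the fitted linear formula M = 0.2935 * N * F.
SCALE_FACTOR = 0.2935  # MHps per (shader x GHz)

def estimated_mhps(shaders, freq_ghz):
    """Estimated mining speed in MHps for a Radeon 6xx0/7xx0-class GPU."""
    return SCALE_FACTOR * shaders * freq_ghz

print(estimated_mhps(480, 0.800))   # Radeon HD 6670: about 113 MHps
print(estimated_mhps(2048, 0.925))  # Radeon HD 7970: about 556 MHps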

That's all, folks - and happy mining!