Topic: CCminer(SP-MOD) Modded NVIDIA Maxwell / Pascal kernels. - page 1056. (Read 2347601 times)

legendary
Activity: 1764
Merit: 1024
Some sad things about ccminer.
I used to test my "legendary" oc'ed gtx750 in different algo's on nicehash and yaamp.
Let's talk only about quark and NiceHash, as they still look good ))

The hashrate shown by my ccminer was ~6000 kH/s. But on NiceHash, the long-run average hashrate fluctuated smoothly between only 5400 and 5900. Very seldom did it go up to 6000, and it soon dropped to ~5500 again. I'm talking about NiceHash's Average speed, not Speed accepted.
I thought it was OK due to some network losses or something like that.

But now I've temporarily grabbed two Radeon 7950s and put them in place of my gtx750 in the same machine. With the optimized sgminer 5.1.1 (Russian, available from the cryptomining blog) I see about 21-21.5 MH/s in the sgminer window. I use xintensity 512, which is known to provide a good hashrate on the pool...
And NiceHash now shows an Average speed of 21.5-22.3 MH/s. This is even better than what the miner shows!

So this situation is not in favor of NVIDIA and sp_'s ccminer. I know that NVIDIA and sp_'s ccminer are better in terms of performance per watt... but these AMD cards are so dirt cheap now... and give nice absolute performance with a more "honest" hashrate in the miner...

If you pay more in electricity than you make, it doesn't matter what it hashes at. AMD cards use about 50% more electricity. Hope you have a cheap $/kWh.
legendary
Activity: 1400
Merit: 1050
Hi guys, not sure where to post this.

I've published an issue on github about it : https://github.com/cbuchner1/ccminer/issues/31

1.5.53(sp-mod) does not submit anything.

I have tried several pools; it doesn't say reject, it just says nothing.

It goes on and on showing me a hashrate, but nothing is ever submitted.

I've googled this a lot, but to no avail. Anyone had this issue before ?

I'm on RedHat, CUDA 7.0, nVidia GRID K2, compute 3.0 (is support for 3.0 dropped in this version ?)

First, try CUDA 6.5 until we figure out how to use 7.0  Grin
Second, yes, this version doesn't support compute 3.0 (I also haven't seen what exactly you are mining...)
epsylon3 has a version with compute 3.0 support
legendary
Activity: 3164
Merit: 1003


#crysx
I have one of Gigabyte's super overclocked ones, and without any additional OC it hums at 1350 MHz. That's where I find NVIDIA's sweet spot on all types of Gigabyte 750 Tis, even the standard or lower-clocked ones. I set it to 1350 MHz.

Is that the actual clock speed, or the setting in whatever program you use to OC?
The actual clock speed, CapnBDL. No OC.   Smiley

See, this is where I think a baseline reference is needed. Manufacturers supply cards that they themselves have OC'd beyond what NVIDIA states as the reference design (which is 1020 MHz in the case of the 750 Ti). I went and took a look at the Gigabyte website and compared all of their various 750 Ti models; none of them are clocked at 1350 MHz (neither base nor boost).

So, without further info and clarification, I can only assume the card has been OC'd beyond what is listed on the manufacturer's page, or you have a model that is no longer listed.
It is a rare card... haven't seen any more since... I bought it 8 months ago.
newbie
Activity: 4
Merit: 0
Thanks !!! the tpruvot one did work !

The first time I tried to compile it, it didn't work. Here's what I did to get it running:
1- I took a release version, not the repo directly
2- I edited Makefile.am and uncommented the line " nvcc_ARCH += -gencode=arch=compute_30,code=\"sm_30,compute_30\" "
3- I set the CUDA environment:
export CUDA_HOME=/usr/local/cuda-7.0
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64
PATH=${CUDA_HOME}/bin:${PATH}

Then it worked just fine !
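For anyone repeating this on a similar setup, the whole sequence looks roughly like this as one script (the paths come from the post above; the release layout and build commands are assumptions, adjust for your install):

```shell
# Environment for CUDA 7.0 (path taken from the post above).
export CUDA_HOME=/usr/local/cuda-7.0
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64
export PATH=${CUDA_HOME}/bin:${PATH}

# In Makefile.am, uncomment the compute 3.0 line:
#   nvcc_ARCH += -gencode=arch=compute_30,code=\"sm_30,compute_30\"
# then build as usual from the release tarball:
#   ./autogen.sh && ./configure && make
```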

Code:
[2015-06-18 16:19:00] Stratum difficulty set to 0.1
[2015-06-18 16:19:03] GPU #1: GRID K2, 458.14 kH/s
[2015-06-18 16:19:03] GPU #2: GRID K2, 458.08 kH/s
[2015-06-18 16:19:03] GPU #3: GRID K2, 457.04 kH/s
[2015-06-18 16:19:06] GPU #1: GRID K2, 527.81 kH/s
[2015-06-18 16:19:06] GPU #0: GRID K2, 527.77 kH/s
[2015-06-18 16:19:07] GPU #3: GRID K2, 527.55 kH/s
[2015-06-18 16:19:11] GPU #3: GRID K2, 527.39 kH/s
[2015-06-18 16:19:11] accepted: 1/1 (100.00%), 2041.05 kH/s yay!!!

Thanks for your help !
legendary
Activity: 1470
Merit: 1114
Hi guys, not sure where to post this.

I've published an issue on github about it : https://github.com/cbuchner1/ccminer/issues/31

1.5.53(sp-mod) does not submit anything.

I have tried several pools; it doesn't say reject, it just says nothing.

It goes on and on showing me a hashrate, but nothing is ever submitted.

I've googled this a lot, but to no avail. Anyone had this issue before ?

I'm on RedHat, CUDA 7.0, nVidia GRID K2, compute 3.0 (is support for 3.0 dropped in this version ?)


You need to enable compute 3.0 in Makefile.am. There is some discussion earlier in this thread on how to do this. I have no 3.0 cards, but I can confirm it works on 3.5. Also, stick with CUDA 6.5 for now.

FYI, this isn't the kind of problem that warrants a ticket. Besides the fact that you ticketed the wrong fork, SP is focused on compute 5.0 and above and is unlikely to respond to a problem affecting other compute versions, even a legitimate one.

However, you're in the right place now. Be sure to post your results.

Edit: Oops, CUDA 6.5 is not available on Red Hat 7.
legendary
Activity: 1154
Merit: 1001
@ Taiko3615:

This is SP_'s fork, not cbuchner1's (you posted the issue on the wrong ccminer fork on GitHub).
This fork is optimized for compute 5.0+ cards; compute 3.0 will probably not work for most algos, if it works at all.
CUDA 7.0 is known to break some algos, and is generally slower on the ones that do work.

Your best bet, if that's really the only hardware/CUDA setup you have available, would be Epsylon3's fork. As I understand it, Epsylon3 has been working on integrating CUDA 7.0, and his fork is generally more inclusive of older cards.
https://github.com/tpruvot/ccminer/ 

Good luck!
newbie
Activity: 4
Merit: 0
Hi guys, not sure where to post this.

I've published an issue on github about it : https://github.com/cbuchner1/ccminer/issues/31

1.5.53(sp-mod) does not submit anything.

I have tried several pools; it doesn't say reject, it just says nothing.

It goes on and on showing me a hashrate, but nothing is ever submitted.

I've googled this a lot, but to no avail. Anyone had this issue before ?

I'm on RedHat, CUDA 7.0, nVidia GRID K2, compute 3.0 (is support for 3.0 dropped in this version ?)

Code:
[root@sommer ccminer-1.5.53]# ./ccminer -r 0 -a lyra2 -o stratum+tcp://hub.miningpoolhub.com:20507 -u X.X -p X
*** ccminer 1.5.53-git(SP-MOD) for nVidia GPUs by sp-hash@github ***
    Built with the nVidia CUDA SDK 6.5

  Based on pooler cpuminer 2.3.2 and the tpruvot@github fork
   CUDA support by Christian Buchner, Christian H. and DJM34
  Includes optimizations implemented by sp , klaust, tpruvot and tsiv.

[2015-06-18 14:47:37] Starting Stratum on stratum+tcp://hub.miningpoolhub.com:20507
[2015-06-18 14:47:37] NVML GPU monitoring enabled.
[2015-06-18 14:47:37] Binding thread 0 to cpu 0 (mask 1)
0
[2015-06-18 14:47:37] 4 miner threads started, using 'lyra2' algorithm.
[2015-06-18 14:47:37] Binding thread 2 to cpu 2 (mask 4)
2
[2015-06-18 14:47:37] Binding thread 1 to cpu 1 (mask 2)
1
[2015-06-18 14:47:37] Binding thread 3 to cpu 3 (mask 8)
3
[2015-06-18 14:47:37] Stratum difficulty set to 1
[2015-06-18 14:47:38] GPU #0: GRID K2, 11289749
[2015-06-18 14:47:38] GPU #3: GRID K2, 8686590
[2015-06-18 14:47:38] GPU #1: GRID K2, 8660615
[2015-06-18 14:47:38] GPU #2: GRID K2, 8516486
[2015-06-18 14:47:38] GPU #2: GRID K2, 8844740
[2015-06-18 14:47:38] GPU #2: GRID K2, 8844740
[2015-06-18 14:47:38] GPU #0: GRID K2, 8771194
[2015-06-18 14:47:38] GPU #1: GRID K2, 8937571
[2015-06-18 14:47:38] GPU #3: GRID K2, 8788215
[2015-06-18 14:47:38] GPU #2: GRID K2, 9071190
[2015-06-18 14:47:38] GPU #2: GRID K2, 9071190
[2015-06-18 14:47:38] GPU #0: GRID K2, 8807975
[2015-06-18 14:47:38] GPU #3: GRID K2, 9055807
[2015-06-18 14:47:38] GPU #1: GRID K2, 8759813
[2015-06-18 14:47:39] GPU #2: GRID K2, 8984403
[2015-06-18 14:47:39] GPU #2: GRID K2, 8984403
[2015-06-18 14:47:39] GPU #0: GRID K2, 8532861
[2015-06-18 14:47:39] GPU #1: GRID K2, 8637856
[2015-06-18 14:47:39] GPU #3: GRID K2, 8390911
[2015-06-18 14:47:39] GPU #2: GRID K2, 8505948
[2015-06-18 14:47:39] GPU #2: GRID K2, 8505948
[2015-06-18 14:47:39] GPU #0: GRID K2, 8176897
[2015-06-18 14:47:39] GPU #3: GRID K2, 8595384
[2015-06-18 14:47:39] GPU #1: GRID K2, 8423309
[2015-06-18 14:47:39] GPU #3: GRID K2, 8353730
[2015-06-18 14:47:39] GPU #2: GRID K2, 8371279
[2015-06-18 14:47:39] GPU #2: GRID K2, 8371279
[2015-06-18 14:47:39] GPU #0: GRID K2, 8113358
[2015-06-18 14:47:39] GPU #1: GRID K2, 8288406
[2015-06-18 14:47:39] GPU #3: GRID K2, 7940634
[2015-06-18 14:47:39] GPU #2: GRID K2, 8217212
[2015-06-18 14:47:39] GPU #2: GRID K2, 8217212
[2015-06-18 14:47:39] GPU #0: GRID K2, 8043318
[2015-06-18 14:47:39] GPU #1: GRID K2, 8100500
[2015-06-18 14:47:39] GPU #3: GRID K2, 8158376
[2015-06-18 14:47:39] GPU #2: GRID K2, 8514383
[2015-06-18 14:47:39] GPU #2: GRID K2, 8514383
[2015-06-18 14:47:39] GPU #0: GRID K2, 7543812
[2015-06-18 14:47:39] GPU #3: GRID K2, 8136360
[2015-06-18 14:47:39] GPU #1: GRID K2, 7497388
[2015-06-18 14:47:39] GPU #2: GRID K2, 7690066
[2015-06-18 14:47:39] GPU #2: GRID K2, 7690066
[2015-06-18 14:47:39] GPU #0: GRID K2, 7533328
[2015-06-18 14:47:39] GPU #3: GRID K2, 7647657
[2015-06-18 14:47:39] GPU #1: GRID K2, 7625164
[2015-06-18 14:47:39] GPU #2: GRID K2, 7608678

etc....

[2015-06-18 14:53:08] GPU #2: GRID K2, 8787999
[2015-06-18 14:53:08] GPU #3: GRID K2, 8645927
[2015-06-18 14:53:08] GPU #0: GRID K2, 8696574
[2015-06-18 14:53:08] GPU #1: GRID K2, 10588738
[2015-06-18 14:53:09] GPU #2: GRID K2, 4389525
[2015-06-18 14:53:09] GPU #3: GRID K2, 3905827
[2015-06-18 14:53:09] GPU #0: GRID K2, 3895818
[2015-06-18 14:53:09] GPU #1: GRID K2, 10437987
[2015-06-18 14:53:09] hub.miningpoolhub.com:20507 lyra2 block 317019
[2015-06-18 14:53:09] GPU #2: GRID K2, 4319710
[2015-06-18 14:53:09] GPU #2: GRID K2, 106187
[2015-06-18 14:53:10] GPU #1: GRID K2, 3302713
[2015-06-18 14:53:10] GPU #3: GRID K2, 3063271
[2015-06-18 14:53:10] GPU #0: GRID K2, 3021704
[2015-06-18 14:53:10] GPU #2: GRID K2, 3743190
[2015-06-18 14:53:10] GPU #2: GRID K2, 119991
[2015-06-18 14:53:10] GPU #1: GRID K2, 3974467
[2015-06-18 14:53:10] GPU #3: GRID K2, 4139233
[2015-06-18 14:53:10] GPU #0: GRID K2, 4146589
[2015-06-18 14:53:10] GPU #2: GRID K2, 4827706
[2015-06-18 14:53:10] GPU #2: GRID K2, 169269
[2015-06-18 14:53:10] GPU #1: GRID K2, 5249290

etc...
hero member
Activity: 750
Merit: 500


#crysx
I have one of Gigabyte's super overclocked ones, and without any additional OC it hums at 1350 MHz. That's where I find NVIDIA's sweet spot on all types of Gigabyte 750 Tis, even the standard or lower-clocked ones. I set it to 1350 MHz.

Is that the actual clock speed, or the setting in whatever program you use to OC?
The actual clock speed, CapnBDL. No OC.   Smiley

See, this is where I think a baseline reference is needed. Manufacturers supply cards that they themselves have OC'd beyond what NVIDIA states as the reference design (which is 1020 MHz in the case of the 750 Ti). I went and took a look at the Gigabyte website and compared all of their various 750 Ti models; none of them are clocked at 1350 MHz (neither base nor boost).

So, without further info and clarification, I can only assume the card has been OC'd beyond what is listed on the manufacturer's page, or you have a model that is no longer listed.
hero member
Activity: 1064
Merit: 500
MOBU


#crysx
I have one of Gigabyte's super overclocked ones, and without any additional OC it hums at 1350 MHz. That's where I find NVIDIA's sweet spot on all types of Gigabyte 750 Tis, even the standard or lower-clocked ones. I set it to 1350 MHz.

Is that the actual clock speed, or the setting in whatever program you use to OC?
The actual clock speed, CapnBDL. No OC.   Smiley

Thx...that's what I thought...I OC to 1400. Can't go higher without a crash.
legendary
Activity: 3164
Merit: 1003


#crysx
I have one of Gigabyte's super overclocked ones, and without any additional OC it hums at 1350 MHz. That's where I find NVIDIA's sweet spot on all types of Gigabyte 750 Tis, even the standard or lower-clocked ones. I set it to 1350 MHz.

Is that the actual clock speed, or the setting in whatever program you use to OC?
The actual clock speed, CapnBDL. No OC.   Smiley
hero member
Activity: 1064
Merit: 500
MOBU


#crysx
I have one of Gigabyte's super overclocked ones, and without any additional OC it hums at 1350 MHz. That's where I find NVIDIA's sweet spot on all types of Gigabyte 750 Tis, even the standard or lower-clocked ones. I set it to 1350 MHz.

Is that the actual clock speed, or the setting in whatever program you use to OC?
legendary
Activity: 3164
Merit: 1003
OC +200core clock / +250 memory

yup - thats what i thought ...

we really need to have some sort of standard that we can compare against ...

non oc is the best way on a card by card basis ...

every card oc's differently and some cards can be pushed harder than others - with others again being tweaked with firmware and such ...

this means that the readings we give here are utterly useless to compare with ...

they are great readings to compare oc'ing with - and a table / list that would be created as a comparison would be even better ...

who has skills ( and time ) to do such a thing? ...

is there already a site that has a comparison ( and settings ) list? ...

would luv to see how ccminer-spmod compares to other with the same cards ...

#crysx
I have one of Gigabyte's super overclocked ones, and without any additional OC it hums at 1350 MHz. That's where I find NVIDIA's sweet spot on all types of Gigabyte 750 Tis, even the standard or lower-clocked ones. I set it to 1350 MHz.
legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
There used to be a spreadsheet out there that users could upload results, etc., to...

I used to maintain a few of them for scrypt (wow, this is flooded) and keccak, and I haven't heard of any newer ones. To be honest, I don't see the point of starting again, because everything changes so rapidly.


Again, as I'm used to looking at the 24h graphs at NiceHash, I can tell you that the reported average can easily go up or down by 5% or more in a 5h window. That can produce a reading variance as high as 10% overall, or more. I'd put the minimum period for reliable reporting at the 12h mark, and ideally the full 24h time frame.
Regarding the fluctuation on pools, it's probably vardiff. It never reaches equilibrium: it keeps increasing the difficulty until your accepted-share frequency drops below a threshold, then decreases it until your share frequency rises above a threshold, over and over. And if vardiff occasionally jumps too high, you might not submit a single share before a block is solved, so your work is useless.

With fixed difficulty settings you can find a value that lets you submit shares at a steady rate. I prefer submitting a share roughly every ~3 seconds to avoid missing anything. But then again, if a lot of people use too low a difficulty on a big pool, they are practically DDoSing the pool, which I guess is why vardiff was created, though I think most pool owners configure it terribly.

Solomining FTW.
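The vardiff feedback loop described above can be sketched as a toy controller (the thresholds, step factors, and the `adjust_difficulty` name are made up for illustration; real pools use various schemes):

```python
def adjust_difficulty(diff, shares_per_min, target=20.0, band=0.3):
    """Toy vardiff: nudge share difficulty so the accepted-share rate
    stays near `target` shares per minute, within a tolerance band."""
    if shares_per_min > target * (1 + band):
        return diff * 2          # too many shares -> raise diff
    if shares_per_min < target * (1 - band):
        return max(diff / 2, 0.001)  # too few shares -> lower diff, floor it
    return diff                  # inside the band -> leave diff alone
```

Because the controller only reacts after the share rate has drifted outside the band, it keeps overshooting in both directions, which is exactly the oscillation described above.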
legendary
Activity: 1400
Merit: 1050
several reasons:
* you set too high a diff or something like that...
* sgminer uses a buffer to accumulate block headers while solving others, so it always has work on hand, and that's a big advantage on a pool. It also sends different block headers to the different cards, while in ccminer a new block header is obtained only after one has been solved, and all cards work on the same block.
* sgminer 5 was developed by nicehash, so obviously they tuned their system for it...
I see. Concerning the first *: I didn't play with diff, neither in ccminer nor in the password provided to NiceHash. Everything is at the defaults.

I don't see how sgminer could cause a speedup by simply queueing work. I'm considering having my Stratum implementation fill up a global work queue for that specific pool, and then let the mining threads pop work as they will (unless interrupted by a hard restart due to a new block on the network or whatever else that would cause cleanjobs to be set in the mining.notify.) Thing is, it doesn't matter how much work I create and queue for the threads - if there's a new block on the network, I'm flushing all of it, because it's all gotta go - no work based on headers I generated before will be accepted. So, if ccminer just parses mining.notify, generates work, and passes it to all its running threads while restarting them, it'll have pretty much the exact same slowdown I will.
I'm not saying there is a speedup in sgminer but rather a slowdown in ccminer... the advantage of sgminer is that it doesn't have to wait for a new block once it has solved one. It picks up the next from its queue, and each GPU also uses a different block, which is better than the current implementation in ccminer (based on cpuminer), where the block is divided among the cards (kind of pointless when you have fast and slow cards in the same system).
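The buffering scheme discussed above (queue pre-built work, pop from mining threads, flush everything when `clean_jobs` arrives in `mining.notify`) can be sketched like this; the class and method names are ours, not from either miner:

```python
import queue
import threading

class WorkQueue:
    """Toy model of sgminer-style work buffering: mining threads pop
    pre-built work items; a new network block flushes all stale work."""

    def __init__(self):
        self.q = queue.Queue()
        self.lock = threading.Lock()

    def push(self, work):
        # Stratum thread pre-builds headers and queues them.
        self.q.put(work)

    def pop(self, timeout=1.0):
        # Mining threads take the next header instead of waiting
        # for the current one to be solved.
        return self.q.get(timeout=timeout)

    def flush(self):
        # New block on the network (clean_jobs set): every queued
        # header is stale, so discard all of them.
        with self.lock:
            while not self.q.empty():
                self.q.get_nowait()
```

As the posters note, the queue only helps between shares on the same block; on a new network block both approaches must throw away in-flight work.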
legendary
Activity: 1154
Merit: 1001
I think a 5 hr. test would be enough, but... We are a bit off topic...a little. I'm on my tablet right now & am sure I have a bookmark for that stat sheet somewhere. I believe it's on my 'puter. I'll get back & look.

Later-

Again, as I'm used to looking at the 24h graphs at NiceHash, I can tell you that the reported average can easily go up or down by 5% or more in a 5h window. That can produce a reading variance as high as 10% overall, or more. I'd put the minimum period for reliable reporting at the 12h mark, and ideally the full 24h time frame.

On a slightly related note: people who benchmark often, especially on 970s & 980s, should take their cards' temperature into account when benchmarking. From a cold start, my 980s always hash at their fastest for a couple of minutes, until they settle at their usual (hot) hashing temperature. If starting from a warm/hot state, the output is more consistent throughout.
Anyone mining at the North Pole or using liquid cooling will probably not care, though...
hero member
Activity: 1064
Merit: 500
MOBU
I think a 5 hr. test would be enough, but... We are a bit off topic...a little. I'm on my tablet right now & am sure I have a bookmark for that stat sheet somewhere. I believe it's on my 'puter. I'll get back & look.

Later-
full member
Activity: 241
Merit: 100

you know - i do vaguely remember seeing that - now that you mention it ...

but where? ... and how accurate is it? ...

#crysx

The only one I've seen is for Litecoin:      https://litecoin.info/Mining_hardware_comparison
We can use it as a guide to making one better suited to what we want.
legendary
Activity: 2912
Merit: 1091
--- ChainWorks Industries ---
OC +200core clock / +250 memory

yup - thats what i thought ...

we really need to have some sort of standard that we can compare against ...

non oc is the best way on a card by card basis ...

every card oc's differently and some cards can be pushed harder than others - with others again being tweaked with firmware and such ...

this means that the readings we give here are utterly useless to compare with ...

they are great readings to compare oc'ing with - and a table / list that would be created as a comparison would be even better ...

who has skills ( and time ) to do such a thing? ...

is there already a site that has a comparison ( and settings ) list? ...

would luv to see how ccminer-spmod compares to other with the same cards ...

#crysx

I think a standard is good, but stock clocks probably shouldn't be it. Pure stock clocks are usually so low that most people don't mine at them; it hurts efficiency. A slight OC that pretty much all cards can reach makes sense, though.

shouldnt factory clocks BE the standard - even for a 25 minute test? ...

a list for stock ( standard rates ) and a list for oc ( using the same card and what has been done to the card ) ...

whether it is efficient or inefficient - a standard that the tweaks and overclocks ( with the methods of how it was done ) can be compared to - is what i believe should be the 'norm' for comparison ...

for example - you buy a car that is 'factory stock' and test it ... then you tweak and tune and improve then test - and THEN compare the results ...

i believe it should be no different to gpu's ... within reason of course ...

no use trying to compare a liquid nitrogen cooled gpu with no way of duplicating it for the home user ...

as nice as that would be to see Wink ...

#crysx

The goals of the GPU manufacturer and the miner are very different. The GPU manufacturer wants to maximize the number of cards that pass the tests, so they set the standards low. Any miner caring about efficiency will OC somewhat if they can - we're not comparing mining usage to regular gaming usage, we're trying to compare a reasonable baseline vs improvements or better OCs.

With cars, whether you buy it to sit in traffic, or to race, it's still driving. Mining and gaming are two very different types of things - it's not just doing the same thing faster - which, I think, makes it reasonable to have a baseline more suited to mining (that almost all cards can still do) instead of one that was chosen by the manufacturer, who has entirely different goals in mind.

makes sense ...

so how would a baseline be agreed upon for comparison? ...

there really is no standards to base a lot of this off ( omg Smiley - here we go with the extranonce standards issue again ) ...

im curious ...

#crysx

There used to be a spreadsheet out there that users could upload results, etc., to...

you know - i do vaguely remember seeing that - now that you mention it ...

but where? ... and how accurate is it? ...

#crysx
legendary
Activity: 2912
Merit: 1091
--- ChainWorks Industries ---
FWIW, for me, mining quark, nicehash reports exactly the same hashrate as ccminer does locally (using any recent release on windows, 980s).
I've always kept an eye on the 24h averages, and they neatly match what ccminer reports, sometimes the pool reports a little higher, but just a negligible difference.

The instant hashrate (or small windows like 5 minutes) is rather pointless.
Going by instant hashrate, I've mined as slow as 10MH/s, and as fast as 80MH/s, you know, variance...

Just my 0.02 BTC ...

we find similar ...

though at times - the variance is quite large - as you stated ...

mining quark on nicehash us stratum ( westhash ) for the last few hours - and this is proving to be true ...

besides - dont the pools 'average' the hashrate by the collective shares submitted by the miner? ...

if that is so - then it wont ever really be accurate ... isnt this the case? ...

#crysx
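To the question above: yes, pools can only estimate hashrate from the shares you submit, so the figure is statistical. A rough sketch of the standard estimate (the function name is ours; the 2**32 factor is the expected hash count per difficulty-1 share under Bitcoin-style targets):

```python
def pool_hashrate_estimate(shares, share_diff, window_secs):
    """Pool-side hashrate estimate: each accepted share at difficulty
    `share_diff` represents roughly share_diff * 2**32 expected hashes,
    so divide total expected hashes by the reporting window length."""
    return shares * share_diff * 2**32 / window_secs
```

With only a handful of shares in a short window, the estimate swings wildly, which is why the 5-minute numbers look noisy while the 24h average matches the miner's own reading.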
hero member
Activity: 1064
Merit: 500
MOBU
OC +200core clock / +250 memory

yup - thats what i thought ...

we really need to have some sort of standard that we can compare against ...

non oc is the best way on a card by card basis ...

every card oc's differently and some cards can be pushed harder than others - with others again being tweaked with firmware and such ...

this means that the readings we give here are utterly useless to compare with ...

they are great readings to compare oc'ing with - and a table / list that would be created as a comparison would be even better ...

who has skills ( and time ) to do such a thing? ...

is there already a site that has a comparison ( and settings ) list? ...

would luv to see how ccminer-spmod compares to other with the same cards ...

#crysx

I think a standard is good, but stock clocks probably shouldn't be it. Pure stock clocks are usually so low that most people don't mine at them; it hurts efficiency. A slight OC that pretty much all cards can reach makes sense, though.

shouldnt factory clocks BE the standard - even for a 25 minute test? ...

a list for stock ( standard rates ) and a list for oc ( using the same card and what has been done to the card ) ...

whether it is efficient or inefficient - a standard that the tweaks and overclocks ( with the methods of how it was done ) can be compared to - is what i believe should be the 'norm' for comparison ...

for example - you buy a car that is 'factory stock' and test it ... then you tweak and tune and improve then test - and THEN compare the results ...

i believe it should be no different to gpu's ... within reason of course ...

no use trying to compare a liquid nitrogen cooled gpu with no way of duplicating it for the home user ...

as nice as that would be to see Wink ...

#crysx

The goals of the GPU manufacturer and the miner are very different. The GPU manufacturer wants to maximize the number of cards that pass the tests, so they set the standards low. Any miner caring about efficiency will OC somewhat if they can - we're not comparing mining usage to regular gaming usage, we're trying to compare a reasonable baseline vs improvements or better OCs.

With cars, whether you buy it to sit in traffic, or to race, it's still driving. Mining and gaming are two very different types of things - it's not just doing the same thing faster - which, I think, makes it reasonable to have a baseline more suited to mining (that almost all cards can still do) instead of one that was chosen by the manufacturer, who has entirely different goals in mind.

makes sense ...

so how would a baseline be agreed upon for comparison? ...

there really is no standards to base a lot of this off ( omg Smiley - here we go with the extranonce standards issue again ) ...

im curious ...

#crysx

There used to be a spreadsheet out there that users could upload results, etc, too...