Topic: Utility Rate vs Hash Rate when tweaking CGMiner 3.1 (LTC Pool Mining) (Read 2085 times)

sr. member
Activity: 436
Merit: 250
These HW errors only seem to occur when I mine in a pool.
When mining solo I can set cgminer to -I 20 and get 1200 kh/s with 0 HW errors.
Could it be a networking issue?
newbie
Activity: 28
Merit: 0
Mining BTC is difficult now.
full member
Activity: 196
Merit: 100
Check your .conf for your thread concurrency. The gpu-threads setting matters too: with a high thread concurrency (try 21712), set gpu-threads to 1; with a low concurrency (try 8192), set gpu-threads to 2.
For low TC I run Intensity 13; for high TC I try to run -I 20. My .conf is below, with equivalent command lines after it.


Code:
{
  "pools" : [
    {
      "url" : "your pool info here",
      "user" : "your worker info here",
      "pass" : "your password here"
    }
  ],
  "intensity" : "20,20",
  "vectors" : "1,1",
  "worksize" : "256,256",
  "kernel" : "scrypt,scrypt",
  "lookup-gap" : "0,0",
  "thread-concurrency" : "21712,21712",
  "shaders" : "1792,1792",
  "gpu-engine" : "1100-1100,1100-1100",
  "gpu-fan" : "0-100,0-100",
  "gpu-memclock" : "1000,1500",
  "gpu-memdiff" : "0,0",
  "gpu-powertune" : "20,20",
  "gpu-vddc" : "1.250,1.250",
  "temp-cutoff" : "95,95",
  "temp-overheat" : "85,85",
  "temp-target" : "75,75",
  "api-port" : "4028",
  "expiry" : "120",
  "gpu-dyninterval" : "7",
  "gpu-platform" : "0",
  "gpu-threads" : "1",
  "hotplug" : "5",
  "log" : "5",
  "no-pool-disable" : true,
  "queue" : "1",
  "scan-time" : "60",
  "temp-hysteresis" : "3",
  "shares" : "0",
  "kernel-path" : "/usr/local/bin"
}
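
If you want to A/B test those two setups quickly, something like the command lines below should be roughly equivalent to the .conf (just a sketch; the pool/worker/password values are placeholders like in the config above, and the numbers will need tuning per card):

Code:
# high thread concurrency, one GPU thread per card:
cgminer --scrypt -o your_pool_url -u your_worker -p your_password --thread-concurrency 21712 --gpu-threads 1 -I 20
# low thread concurrency, two GPU threads per card:
cgminer --scrypt -o your_pool_url -u your_worker -p your_password --thread-concurrency 8192 --gpu-threads 2 -I 13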
sr. member
Activity: 436
Merit: 250
Quote:
On another note, you should be getting a lot more than 678 kh/s and 18 shares per minute out of two 7950s. I would investigate why high intensity is causing such a vast number of hardware errors (ideally you'd be at 0%; I normally see around 0.1%).

Does anyone know how I can find out what is causing these HW errors?
I've redirected CGMiner's output to a log.txt file and turned on debugging, but there don't seem to be any clues in the generated log... it just shows the HW count climbing each time the stats refresh.
I've also looked in the Windows Hardware and Afterburner logs, but there's nothing there.
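
For reference, I'm capturing the log roughly like this (cgminer writes its messages to stderr, so that's what gets redirected; pool details are placeholders):

Code:
# run without the curses screen and capture the debug output (written to stderr) in log.txt:
cgminer --scrypt -o your_pool_url -u your_worker -p your_password --debug --verbose -T 2>log.txt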

FD
member
Activity: 98
Merit: 10
You always want the highest number of shares per minute, so you would want to go with the 678 kh/s over the 1200 kh/s in this instance. Actual raw hashing power is irrelevant, since the server only sees your viable shares.
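
To put rough numbers on it (assuming the pool's share difficulty stays constant, so payout tracks accepted shares; figures taken from the opening post):

Code:
# shares/min is what actually pays; comparing the two results from the opening post:
echo "scale=1; 18/4" | bc   # config 2 finds 4.5x the shares of config 1
echo "678*4/18" | bc        # so config 1 is only doing ~150 kh/s of "paying" work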

On another note, you should be getting a lot more than 678 kh/s and 18 shares per minute out of two 7950s. I would investigate why high intensity is causing such a vast number of hardware errors (ideally you'd be at 0%; I normally see around 0.1%).
sr. member
Activity: 436
Merit: 250
I haven't read much about the Utility Rate (U:__/m) in CGMiner, so thought I'd ask...

From what I understand, the Utility Rate is the number of Accepted Shares per minute.
So, a low Hash Rate with a high Utility Rate is preferable to a high Hash Rate with a low Utility Rate. Is that right?

I've tested lots of config combinations on my new rig with 2 x HD 7950s on Win7 64, and have included the two at each end of the scale below. Results are over 10 minutes. (--scrypt, --auto-* & pool details removed for clarity.) Thread concurrency is no longer set, as it didn't improve results.

1. --lookup-gap 2 -I 19 -W 256
2. --lookup-gap 1 -I 12 -W 256

Results (after 10 minutes):
1. 1200 kh/s, U:  4 Accepted Shares per minute, GPU temp 79C, 1000s of HW errors
2.  678 kh/s, U: 18 Accepted Shares per minute, GPU temp 64C, 4 HW errors

So config 2, with the lower Intensity and a lookup gap of 1, gives a far greater Utility Rate with a much lower Hash Rate and GPU temps, and almost no HW errors.
This is what I want, right? Greater efficiency rather than the highest Hash Rate, minimal HW errors, and lower running temps too, for 24/7 mining.
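
(In case it helps, I've been reading these counters from the running miner over its API rather than watching the screen. A rough sketch, assuming cgminer was started with --api-listen on the default api-port 4028; the summary reply includes the Utility and Hardware Errors fields:)

Code:
# query a running cgminer (started with --api-listen) for its summary stats:
echo -n "summary" | nc 127.0.0.1 4028
# the reply includes Utility (accepted shares/min) and Hardware Errors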

FD