Guys, should I be able to use sph-sgminer from lasybear to mine X11 with the AMD 13.1 TechPowerUp modded GPU driver installed, or do I have to go to the AMD 14.1 drivers? I'm getting all HW errors and no accepted shares.
My rig is running six Sapphire HD 7950 cards. The miner shows them all hashing, but I get nothing but HW errors. I've changed a few settings, and while searching for answers I saw that a lot of folks have had issues with the new 14.4 beta driver and went back to a 14.1 version.
The modded 13.1 driver was a pain in the ass to get going on all six of my cards in the first place, and it has worked fine for scrypt and scrypt-N, so before I start stripping it out and loading a newer one I want to ask: is anyone mining X11 with the modded 13.1 driver?
I can post my config, but I'm actively changing it right now in hopes of solving my issue.
Be sure to completely remove all old AMD drivers using DDU:
https://forums.geforce.com/default/topic/550192/geforce-drivers/display-driver-uninstaller-ddu-v12-9-3-2-released-06-05-14-/
Get the new 14.4 driver at:
http://support.amd.com/en-us/download
Also delete all *.bin files in the miner directory so they get recompiled by the new driver.
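If you have more than one rig, a small script can clear those cached kernel binaries for you. A minimal Python sketch, assuming a Windows box and an example miner path (MINER_DIR is a placeholder, point it at your own sgminer folder):
# Delete cached OpenCL kernel binaries so the miner rebuilds them
# against the freshly installed driver. MINER_DIR is an example path.
import glob, os

MINER_DIR = r"C:\miners\sgminer"
for bin_file in glob.glob(os.path.join(MINER_DIR, "*.bin")):
    os.remove(bin_file)
    print("deleted", bin_file)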
Here's my working config for an HD 7950 doing 2.8 MH/s and 0.040 WU (sgminer 4.1 x11x13mod) with the 14.4 Catalyst WHQL driver:
{
  "pools" : [
    {
      "url" : "stratum+tcp://uswest.wafflepool.com:3331",
      "user" : "1HANJQygp3jHuzutceBgMT7wfCgEug6h4L_gpu2",
      "pass" : "d=0.008"
    }
  ],
  "intensity" : "16",
  "worksize" : "256",
  "kernel" : "x11mod",
  "lookup-gap" : "2",
  "thread-concurrency" : "24000",
  "shaders" : "1792",
  "gpu-threads" : "4",
  "gpu-engine" : "1065-1100",
  "gpu-fan" : "60-100",
  "auto-fan" : true,
  "gpu-memclock" : "1250",
  "gpu-memdiff" : "0",
  "gpu-powertune" : "0",
  "gpu-vddc" : "0.000",
  "temp-cutoff" : "95",
  "temp-overheat" : "85",
  "temp-target" : "70",
  "api-mcast-port" : "4028",
  "api-port" : "4028",
  "expiry" : "60",
  "failover-switch-delay" : "60",
  "gpu-dyninterval" : "7",
  "gpu-platform" : "0",
  "log" : "5",
  "no-pool-disable" : true,
  "queue" : "0",
  "scan-time" : "59",
  "tcp-keepalive" : "30",
  "temp-hysteresis" : "2",
  "shares" : "0",
  "kernel-path" : "/usr/local/bin",
  "no-client-reconnect" : true,
  "no-submit-stale" : true
}
For an R9 280X doing 3.05 MH/s and 0.045 WU:
Intensity 16 crashes the display driver frequently, so I backed it down to 15.
{
"pools" : [
{
"url" : "stratum+tcp://uswest.wafflepool.com:3331",
"user" : "1HANJQygp3jHuzutceBgMT7wfCgEug6h4L_gpu3",
"pass" : "d=0.008"
}
]
,
"intensity" : "15",
"worksize" : "256",
"kernel" : "x11mod",
"show-coindiff" : true,
"lookup-gap" : "2",
"thread-concurrency" : "24576",
"shaders" : "2048",
"gpu-threads" : "4",
"gpu-engine" : "1050-1080",
"gpu-fan" : "70-100",
"auto-fan" : true,
"gpu-memclock" : "1600",
"gpu-memdiff" : "0",
"gpu-powertune" : "0",
"gpu-vddc" : "0.000",
"temp-cutoff" : "95",
"temp-overheat" : "85",
"temp-target" : "70",
"api-mcast-port" : "4028",
"api-port" : "4028",
"expiry" : "30",
"failover-switch-delay" : "60",
"gpu-dyninterval" : "7",
"gpu-platform" : "0",
"log" : "5",
"no-pool-disable" : true,
"queue" : "0",
"scan-time" : "29",
"tcp-keepalive" : "30",
"temp-hysteresis" : "2",
"shares" : "0",
"kernel-path" : "/usr/local/bin",
"no-client-reconnect" : true,
"no-submit-stale" : true
}
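One tip if you hand-edit these configs: a stray comma or brace will usually keep the file from loading, and it's quicker to catch that with a JSON parser than by rereading the whole thing. A quick Python check, assuming you saved the config as sgminer.conf (use your own filename):
# Sanity-check the config file before launching the miner.
import json

with open("sgminer.conf") as f:
    json.load(f)  # raises an error with line/column info if the JSON is malformed
print("config parses cleanly")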
I'll play around with the thread-concurrency a bit and see if I can improve on that.
If anyone has a starting point for the "rawintensity" setting on either of these cards, please post a working config.
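I haven't tested rawintensity myself, so treat this as a guess to tune from rather than a known-good value: my reading of the sgminer docs is that plain intensity corresponds to a global work size of 2^intensity, while rawintensity sets that work size directly, which would put rough equivalents of the intensities above at:
# Rough rawintensity starting points, assuming intensity I maps to a
# global work size of 2**I (my reading of the sgminer docs -- verify
# against your build before relying on it).
for card, intensity in (("HD 7950", 16), ("R9 280X", 15)):
    print(f"{card}: intensity {intensity} ~ rawintensity {2 ** intensity}")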