
Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0 - page 345. (Read 5806015 times)

legendary
Activity: 3586
Merit: 1099
Think for yourself

"kernel" : "scrypt,scrypt",

"gpu-fan" : "100,100",
"gpu-engine" : "800-930,800-930",

"temp-cutoff" : "92,92",
"temp-overheat" : "88,88",
"temp-target" : "85,85",

"scrypt" : true,

"auto-gpu" : true

Well, it looks like you're using scrypt, which I know nothing about.

You're using auto gpu, which is good. I don't know whether your engine range is OK.

You're not using auto fan, so I would enable that and set the fan range to something like 60-85 instead of running at 100%. It still seems odd that your GPU is even reaching 92. Are you running with your case on?

Since you're not using auto fan and you're running the fans at 100%, the only variable left to control the temperature is lowering the engine clock, and the floor of your range is 800.

Something is missing.
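If you do enable auto fan, a minimal sketch of the fragment I mean (two GPUs; the ranges and target are illustrative, tune them to your own cards):

```json
"auto-fan" : true,
"gpu-fan" : "60-85,60-85",
"temp-target" : "80,80"
```

With auto fan on, cgminer can push the fans toward the top of the range as a card approaches the temp target, instead of only being able to drop the engine clock.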
full member
Activity: 208
Merit: 100
Can someone explain to me what --gpu-powertune x (where x is a number like -10 or 10) does?

Does it try to undervolt or overvolt the card by x%?

Also, just wondering whether my hash rate for litecoin is OK for a 6950+5770 system. I'm getting a combined rate of 615 kH/s; is that good?
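As I understand it, --gpu-powertune sets the card's PowerTune percentage through ADL (shifting the power cap / throttling headroom) rather than changing the voltage by x% directly - treat that as my reading, not gospel. In a config file it takes one value per GPU, comma-separated; a sketch with illustrative values:

```json
"gpu-powertune" : "-10,-10"
```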
newbie
Activity: 13
Merit: 0
Can someone please explain to me what happens? I'm using 2 x 7950.
In the config file I have the option "temp-cutoff" : "90,90".
So when the temperature goes over 90, cgminer turns off a card... but the wrong card. The "cold" card goes off (its temperature dropping), while the hot card keeps working with rising temperature, yet cgminer says that card is off.
So cgminer is turning off the wrong video card in my case.
cgminer 3.1
Learn how to use --gpu-map
Thank you.

Made the correct option "gpu-map" : "0:1,1:0".

Now I face the situation that after a cutoff and cool-down, the GPU doesn't want to go back to work until I restart it (restarting it from the menu works fine).

Are you using auto gpu and auto fan?
If so, do you have a range set for your engine clock and fan speed?
Also, do you have a temp target set?
It seems odd that your GPUs are even reaching 90C.

My config :

"intensity" : "20,20",
"vectors" : "1,1",
"worksize" : "256,256",
"kernel" : "scrypt,scrypt",
"lookup-gap" : "2,2",
"thread-concurrency" : "21712,21712",
"shaders" : "1792,1792",
"gpu-fan" : "100,100",
"gpu-engine" : "800-930,800-930",
"gpu-map" : "0:1,1:0",
"gpu-powertune" : "-11,-11",
"temp-cutoff" : "92,92",
"temp-overheat" : "88,88",
"temp-target" : "85,85",
"api-port" : "4028",
"expiry" : "120",
"gpu-dyninterval" : "7",
"gpu-platform" : "0",
"gpu-threads" : "1",
"hotplug" : "5",
"log" : "5",
"no-pool-disable" : true,
"queue" : "1",
"scan-time" : "60",
"scrypt" : true,
"temp-hysteresis" : "3",
"shares" : "0",
"auto-gpu" : true
legendary
Activity: 3586
Merit: 1099
Think for yourself
Can someone please explain to me what happens? I'm using 2 x 7950.
In the config file I have the option "temp-cutoff" : "90,90".
So when the temperature goes over 90, cgminer turns off a card... but the wrong card. The "cold" card goes off (its temperature dropping), while the hot card keeps working with rising temperature, yet cgminer says that card is off.
So cgminer is turning off the wrong video card in my case.
cgminer 3.1
Learn how to use --gpu-map
Thank you.

Made the correct option "gpu-map" : "0:1,1:0".

Now I face the situation that after a cutoff and cool-down, the GPU doesn't want to go back to work until I restart it (restarting it from the menu works fine).

Are you using auto gpu and auto fan?
If so, do you have a range set for your engine clock and fan speed?
Also, do you have a temp target set?
It seems odd that your GPUs are even reaching 90C.
sr. member
Activity: 658
Merit: 250
Good to know. I guess I shouldn't expect every commit to work perfectly in marginal cases like cross-compiling.

EDIT: I saw some new commits and tried again. Now it works.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
My cross-compiling setup suddenly fails on the newest git master version. I get multiple errors about sys/socket.h not being found. I traced the problem back to commit 31aa4f6cebc51e26b349606fd78d71954bda87da. Commit 657e64477b75603bc9b08eed425bc47f606814cb and everything before that compiles correctly. Is there a new dependency that I'm now missing, or is this a bug?
It's just not complete yet for mingw, which I assume you're compiling for.
sr. member
Activity: 658
Merit: 250
My cross-compiling setup suddenly fails on the newest git master version. I get multiple errors about sys/socket.h not being found. I traced the problem back to commit 31aa4f6cebc51e26b349606fd78d71954bda87da. Commit 657e64477b75603bc9b08eed425bc47f606814cb and everything before that compiles correctly. Is there a new dependency that I'm now missing, or is this a bug? The only sys/socket.h I have (/usr/include/x86_64-linux-gnu/sys/socket.h) is not under my mingw toolchain, but that wasn't a problem when cross-compiling until now. For reference, here are the files in my toolchain: http://pastebin.com/sSEqFL63
newbie
Activity: 13
Merit: 0
Can someone please explain to me what happens? I'm using 2 x 7950.
In the config file I have the option "temp-cutoff" : "90,90".
So when the temperature goes over 90, cgminer turns off a card... but the wrong card. The "cold" card goes off (its temperature dropping), while the hot card keeps working with rising temperature, yet cgminer says that card is off.
So cgminer is turning off the wrong video card in my case.
cgminer 3.1
Learn how to use --gpu-map
Thank you.

Made the correct option "gpu-map" : "0:1,1:0".

Now I face the situation that after a cutoff and cool-down, the GPU doesn't want to go back to work until I restart it (restarting it from the menu works fine).
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Can someone please explain to me what happens? I'm using 2 x 7950.
In the config file I have the option "temp-cutoff" : "90,90".
So when the temperature goes over 90, cgminer turns off a card... but the wrong card. The "cold" card goes off (its temperature dropping), while the hot card keeps working with rising temperature, yet cgminer says that card is off.
So cgminer is turning off the wrong video card in my case.
cgminer 3.1
Learn how to use --gpu-map
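For anyone hitting the same thing: --gpu-map manually pairs OpenCL device numbers with ADL (temperature/fan sensor) device numbers, as OpenCL:ADL pairs, for when the two enumeration orders differ. A sketch for a two-card system where they're swapped:

```json
"gpu-map" : "0:1,1:0"
```

With the mapping wrong, cgminer reads card 0's temperature from card 1's sensor, which is exactly why it cuts off the cool card while the hot one keeps mining.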
sr. member
Activity: 322
Merit: 250
HD 6950 problems.

I've tried different SDKs and drivers; now I can finally mine bitcoin through Java on BitMinter, but still no bitcoin mining with cgminer. I want to use it for LTC or FTC, but it only works with that Java app...

Can you help me please?
newbie
Activity: 13
Merit: 0
Can someone please explain to me what happens? I'm using 2 x 7950.
In the config file I have the option "temp-cutoff" : "90,90".
So when the temperature goes over 90, cgminer turns off a card... but the wrong card. The "cold" card goes off (its temperature dropping), while the hot card keeps working with rising temperature, yet cgminer says that card is off.
So cgminer is turning off the wrong video card in my case.
cgminer 3.1
newbie
Activity: 17
Merit: 0
Build cgminer without enabling support for anything that requires usbutils.c (currently that includes bitforce, modminer, and bflsc). That's what the blog you linked does.

Yeah, I know it's because I am building with --enable-bflsc, which is what I am attempting to do.

I commented out the offending lines and it seems to compile and run. I won't know whether it actually works until I get my BFLs, which hopefully won't be too long.
member
Activity: 112
Merit: 10
The last version of cgminer I was able to compile on OS X was 2.10.5; since then there has been a change to usbutils.c that has prevented it from compiling.

the error is:

usbutils.c:805: error: redefinition of ‘union semun’

I removed the offending code

and tried to recompile and got

usbutils.c: In function ‘cgminer_usb_lock_bd’:
usbutils.c:908: error: ‘union semun’ has no member named ‘seminfo’
usbutils.c:924: error: ‘union semun’ has no member named ‘seminfo’

so it seems the semun union on Mac OS X doesn't include seminfo for some reason.

any ideas?
semun doesn't exist on Linux - you have to define it yourself.
I guess the OS X devs misunderstood that.

You'd have to come by IRC to help work out how to fix this in OS X - on Linux I define it myself as per the requirements, but that fails on OS X because it is already defined.

Maybe in a few hours, though; I don't have time right now.

I am also encountering this issue. Was it ever resolved?

I was using http://www.spaceman.ca/blog/?p=235&cpage=1 to help me get the build up and running.

Build cgminer without enabling support for anything that requires usbutils.c (currently that includes bitforce, modminer, and bflsc). That's what the blog you linked does.

newbie
Activity: 17
Merit: 0
The last version of cgminer I was able to compile on OS X was 2.10.5; since then there has been a change to usbutils.c that has prevented it from compiling.

the error is:

usbutils.c:805: error: redefinition of ‘union semun’

I removed the offending code

and tried to recompile and got

usbutils.c: In function ‘cgminer_usb_lock_bd’:
usbutils.c:908: error: ‘union semun’ has no member named ‘seminfo’
usbutils.c:924: error: ‘union semun’ has no member named ‘seminfo’

so it seems the semun union on Mac OS X doesn't include seminfo for some reason.

any ideas?
semun doesn't exist on Linux - you have to define it yourself.
I guess the OS X devs misunderstood that.

You'd have to come by IRC to help work out how to fix this in OS X - on Linux I define it myself as per the requirements, but that fails on OS X because it is already defined.

Maybe in a few hours, though; I don't have time right now.

I am also encountering this issue. Was it ever resolved?

I was using http://www.spaceman.ca/blog/?p=235&cpage=1 to help me get the build up and running.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Is WU accurate while solo-mining scrypt?
No, because the base used to work out WU with scrypt while solo mining is (at the moment) 64k diff.
hero member
Activity: 807
Merit: 500
Is WU accurate while solo-mining scrypt? I noticed it at 8xx a couple of times while I was tuning, but right now, after 36 hours at a higher hashrate, it is below 2xx; more often than not, both while tuning and at this hashrate, it was around 2xx. I suspect it may not mean much, since it stays at 0 for a long time after mining starts, but I want to confirm I'm not just getting unreported hardware issues from pushing too hard or something. I could mine at a pool to see whether I can get it to 800 and whether I notice a difference, but there is variance, and that would be a waste of time and effort if it doesn't mean anything, so I'm hoping someone knows...
newbie
Activity: 59
Merit: 0
Hi,

Quick question: is it possible to set a different gpu-threads value for each card in the setup?

For instance, I've been trying to mine with 1 x 7970 and 2 x 7950s in the same system.

I tried -g 2,1 but cgminer did not seem to like it. Do I have to run multiple instances to achieve this?
newbie
Activity: 24
Merit: 0
Potential bug report - "corrupted double-linked list"
(or it could be my hardware failing)

Code:
[2013-05-08 04:50:36] Accepted f266a801 Diff 189/64 GPU 0 pool 0
[2013-05-08 04:50:36] Accepted b4effe14 Diff 116/64 GPU 0 pool 0
[2013-05-08 04:50:39] Accepted aeb15073 Diff 83/64 GPU 1 pool 0
[2013-05-08 04:50:39] Accepted a8f92b24 Diff 138/64 GPU 1 pool 0
[2013-05-08 04:50:48] Stratum from pool 0 detected new block*** glibc detected *** cgminer-3.0.0-x86_64-built/cgminer: corrupted double-linked list: 0x0000000001c1ed10 ***
[2013-05-08 04:50:54] Accepted 0458233f Diff 143/64 GPU 0 pool 0
[2013-05-08 04:50:56] Accepted 7bbd5c46 Diff 199/64 GPU 0 pool 0
[2013-05-08 04:50:56] Accepted 6d3aa40a Diff 111/64 GPU 1 pool 0
[2013-05-08 04:50:57] Accepted 4a58484c Diff 112/64 GPU 1 pool 0

Xubuntu 12.04 x64, Catalyst 12.8, cgminer 3.0.0, scrypt, intensity 19, lg2tc22336w256l8

I didn't turn on debug when this happened, so this is all I have. I've only seen it once so far. I'll update to the latest version and let you know if I still get the error.