
Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0 - page 783. (Read 5805728 times)

full member
Activity: 181
Merit: 100
1) Dual GPU cards don't get adjusted/disabled correctly. For example, a 5970 will only disable the one GPU that has the temp sensor:

Code:
 GPU 3: [80.5 C] [DISABLED /78.5 Mh/s] [Q:2 A:11 R:1 HW:0 E:550% U:1.81/m]
 GPU 4: [327.3/324.5 Mh/s] [Q:25 A:23 R:1 HW:0 E:92% U:3.78/m]
Strange, for me each core on my 5970s can be disabled separately and correctly ([g] [d] 0, [g] [d] 1).

2) It'd be awesome if cgminer would record the current clocks on startup, and restore them on exit if the auto tuning changed them at all.
I believe I read it's supposed to already. Could be a bug or older version?

3) Pressing "G" when you have more than a couple of cards makes it impossible to read the output, because the window the output is displayed to is too small unless you have a HUGE screen.
Windows? Make a shortcut and size the font down, or increase the window height from 25 lines to 50.

4) I'd love a way to specify different temperature thresholds per GPU on the command line. If I have different model cards in there, they have different points where they're happy. 5770s get crashy above 90-95, where 5970 and 6990 cards idle near there at times. Smiley
You can, with commas; you just have to know which card is enumerated where, e.g. (--gpu-clock 1000,440,750,950).
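For example, something along these lines sets different values per card (flag names from memory of the README, so double-check --help on your build; the numbers are made up):
Code:
cgminer -o http://pool:port -u user -p pass --gpu-engine 950,850,750 --temp-overheat 85,95,95 --temp-cutoff 90,100,100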

5) My ideal dream would be a way of somehow saying "Any 5970 cards you see, set the temperature thresholds to X/Y, the voltage to Z, etc. Any 5770 cards, the temperature threshold is..." so that I don't have to look up which cards are in which system, just to pass that along to cgminer.
Card-based would be nice to see, but there'd have to be a way to check by manufacturer and such too. I know some 5970s from, say, MSI have different heatsink/fan combinations than ones from PowerColor, and they might not be as tolerant even though they show up as a 5970 by their ID. I don't know how you'd be specific with setting that; maybe with grouping (--gpu-group {5970@950+300+1.5})?

7) Specifying an overclock/underclock range that cgminer is allowed to adjust the clock in would be handy.
A range with a dash in it would be cool. (--gpu-memory 200-700)!
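Something like this is what I'm picturing (hypothetical syntax; I think the engine clock may already take a range when --auto-gpu is on, so the memory range would be the new part):
Code:
cgminer --auto-gpu --gpu-engine 700-950 --gpu-memclock 200-700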

8) One step further, having it attempt to determine (maybe even saved into a local file) how high the clock was able to go without problems, and self-tuning the max clock rate while under the threshold temperature.
Well, if he's already parsing in the config file, then maybe if you specify a config and change something inside, it could get exported back out with these extra settings tacked on in the file? (--config myconfig.json) could get a line with card serial numbers or something unique? ({"_Safe_CARDID": "CLOCK,MEMORY,FAN"})
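Just as a sketch of what I mean; the _safe_ entries below are completely made up, and even the plain option keys are only from memory of what cgminer writes out:
Code:
{
  "gpu-engine": "950,850",
  "gpu-memclock": "300,300",
  "_comment": "the _safe_ entries are hypothetical, not real cgminer keys",
  "_safe_GPU0_serial": "950,300,65",
  "_safe_GPU1_serial": "850,300,60"
}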
legendary
Activity: 3583
Merit: 1094
Think for yourself
+1

p.s. os2sam: do you really still run os/2? Smiley

I remember getting OS/2 Warp for Christmas one year when I was like 12 or 13; too bad for IBM, Win95 came out that summer ;/

Well of course.  Doesn't everyone?

Still running OS/2 Warp 4 on a really old Thinkpad.  The current versions are now called eComStation and I have that on my Personal Laptop and on a VPC on my company laptop.  Having trouble finding a bitcoin miner though Smiley.
Thanks for asking,
Sam
hero member
Activity: 896
Merit: 1000
Buy this account on March-2019. New Owner here!!
+1

p.s. os2sam: do you really still run os/2? Smiley

I remember getting OS/2 Warp for Christmas one year when I was like 12 or 13; too bad for IBM, Win95 came out that summer ;/
full member
Activity: 174
Merit: 100
I have a feature request.
Every once in a while I have a GPU that can't be restarted, but restarting cgminer works fine.

I was wondering if you could add a command-line switch that exits after a GPU failure, i.e. --gf # (where # is the number of GPUs that must fail before cgminer.exe exits).

This would save me from constantly having to check on the miner to make sure that all GPUs are still mining.

Thanks.

Makes sense...

Default should be 0, where:
0 = disabled function, no exit after GPU failure

On Windows, this batch file would do it:
Code:
:cgminer
cgminer.exe -blah -blah -blah -theargumentCkolivaschoosetouse 2
GOTO cgminer

+1
full member
Activity: 168
Merit: 100
Live long and prosper. \\//,
I have a feature request.
Every once in a while I have a GPU that can't be restarted, but restarting cgminer works fine.

I was wondering if you could add a command-line switch that exits after a GPU failure, i.e. --gf # (where # is the number of GPUs that must fail before cgminer.exe exits).

This would save me from constantly having to check on the miner to make sure that all GPUs are still mining.

Thanks.

Makes sense...

Default should be 0, where:
0 = disabled function, no exit after GPU failure

On Windows, this batch file would do it:
Code:
:cgminer
cgminer.exe -blah -blah -blah -theargumentCkolivaschoosetouse 2
GOTO cgminer
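One small addition worth considering: a short pause between restarts so a card that dies instantly doesn't spin the loop. A minimal variation of the above (timeout needs Vista/7; on XP the ping trick gives a similar delay):
Code:
:cgminer
cgminer.exe -blah -blah -blah -theargumentCkolivaschoosetouse 2
REM wait roughly 5 seconds before relaunching
timeout /t 5
REM on XP, use: ping -n 6 127.0.0.1 >nul
GOTO cgminer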
full member
Activity: 235
Merit: 100
I have a feature request.
Every once in a while I have a GPU that can't be restarted, but restarting cgminer works fine.

I was wondering if you could add a command-line switch that exits after a GPU failure, i.e. --gf # (where # is the number of GPUs that must fail before cgminer.exe exits).

This would save me from constantly having to check on the miner to make sure that all GPUs are still mining.

Thanks.
legendary
Activity: 3583
Merit: 1094
Think for yourself
The new GPU features are awesome! A few suggestions/requests:

3) Pressing "G" when you have more than a couple of cards makes it impossible to read the output, because the window the output is displayed to is too small unless you have a HUGE screen.


In Windoze I created a shortcut and changed the layout properties to 55 rows (window height) instead of the default 25.  Now I can see the info for both of my GPUs at once.

You can also modify the pixels per character under the Fonts tab, in conjunction with the layout window height, to get more window real estate.
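If you'd rather not fiddle with the shortcut properties, a mode command at the top of the batch file that launches cgminer does roughly the same thing (a sketch; pick columns/lines to taste):
Code:
mode con: cols=120 lines=55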

Sam
hero member
Activity: 807
Merit: 500
2) It'd be awesome if cgminer would record the current clocks on startup, and restore them on exit if the auto tuning changed them at all.
Are you having trouble in that department, or did you just miss this?
STARTUP / SHUTDOWN:
When cgminer starts up, it tries to read off the current profile information
for clock and fan speeds and stores these values. When quitting cgminer, it
will then try to restore the original values. Changing settings outside of
cgminer while it's running may be reset to the startup cgminer values when
cgminer shuts down because of this.
legendary
Activity: 3583
Merit: 1094
Think for yourself
I know the mainstream thought is that 300 is the sweet spot for mem, and I think this myth exists because it is the lowest point you can downclock to in some early versions of software.

(In a slightly off-topic but related issue: I can't clock my card. I've got drivers installed and can mine, but EVERY clocking app {cgminer, Overdrive, CCC (won't even start), clock tool} either doesn't run or has all sliders grayed out... nobody's helping in the technical forums D:> I've tried different drivers and CCC versions 11.5 through 11.8, and it can't be messed-up installs; I got frustrated and even reformatted, reinstalled Windows, and fresh-installed the drivers for two of the versions, .6 and .8.)

Sorry if I'm stating the overly obvious, but in the ATI CCC did you unlock the overclocking page?  When I first started messing with this stuff I looked at the overclock page a bunch of times before I realized that the lock was actually a button.  I was really irritated about it being grayed out too.
Sam
member
Activity: 90
Merit: 12
Code:
[2011-09-07 13:36:44] Overheat detected, increasing fan to 100%
[2011-09-07 13:36:46] Overheat detected, increasing fan to 100%
[2011-09-07 13:36:49] Overheat detected, increasing fan to 100%
[2011-09-07 13:36:51] Overheat detected, increasing fan to 100%
[2011-09-07 13:36:53] Overheat detected, increasing fan to 100%
[2011-09-07 13:36:55] Overheat detected, increasing fan to 100%

This should probably identify which GPU it's talking about, and maybe have some kind of throttling added to it, if a card's temperature is wiggling around the threshold.
sr. member
Activity: 278
Merit: 250
8) One step further, having it attempt to determine (maybe even saved into a local file) how high the clock was able to go without problems, and self-tuning the max clock rate while under the threshold temperature.

^^^ That's the ticket.  Plus keep track of how long it was able to run at that clock rate and use that info to drive the adjustments.  I've got a bunch of cards that don't like to run for more than 20-30 mins at elevated clocks, and it can take a couple days to dial them in.

member
Activity: 90
Merit: 12
The new GPU features are awesome! A few suggestions/requests:


1) Dual GPU cards don't get adjusted/disabled correctly. For example, a 5970 will only disable the one GPU that has the temp sensor:

Code:
 GPU 3: [80.5 C] [DISABLED /78.5 Mh/s] [Q:2 A:11 R:1 HW:0 E:550% U:1.81/m]
 GPU 4: [327.3/324.5 Mh/s] [Q:25 A:23 R:1 HW:0 E:92% U:3.78/m]

2) It'd be awesome if cgminer would record the current clocks on startup, and restore them on exit if the auto tuning changed them at all.

3) Pressing "G" when you have more than a couple of cards makes it impossible to read the output, because the window the output is displayed to is too small unless you have a HUGE screen.

4) I'd love a way to specify different temperature thresholds per GPU on the command line. If I have different model cards in there, they have different points where they're happy. 5770s get crashy above 90-95, where 5970 and 6990 cards idle near there at times. Smiley

5) My ideal dream would be a way of somehow saying "Any 5970 cards you see, set the temperature thresholds to X/Y, the voltage to Z, etc. Any 5770 cards, the temperature threshold is..." so that I don't have to look up which cards are in which system, just to pass that along to cgminer.

6) Temperatures >100C should be allowed, no matter how bad of an idea that sounds. We have some cards that go up to 105-107C without issue.

7) Specifying an overclock/underclock range that cgminer is allowed to adjust the clock in would be handy.

8) One step further, having it attempt to determine (maybe even saved into a local file) how high the clock was able to go without problems, and self-tuning the max clock rate while under the threshold temperature.





full member
Activity: 181
Merit: 100
I know the mainstream thought is that 300 is the sweet spot for mem, and I think this myth exists because it is the lowest point you can downclock to in some early versions of software.

Can we not test this with some clever CUDA/OpenCL code? Run a light GPU loop that's heavy on memory operations for X seconds, count ops/sec, write the count back from the GPU to the CPU to log to disk, then throttle the memory down further and repeat; you should be able to see a bottleneck appear at some point.

(In a slightly off-topic but related issue: I can't clock my card. I've got drivers installed and can mine, but EVERY clocking app {cgminer, Overdrive, CCC (won't even start), clock tool} either doesn't run or has all sliders grayed out... nobody's helping in the technical forums D:> I've tried different drivers and CCC versions 11.5 through 11.8, and it can't be messed-up installs; I got frustrated and even reformatted, reinstalled Windows, and fresh-installed the drivers for two of the versions, .6 and .8.)
hero member
Activity: 807
Merit: 500
When I build 2.0.0 (with the original or newer -1 source), it works fine as long as I don't add the ADL header files to the ADL_SDK folder.  When I do that, I get this at the end of the make step:
Code:
/usr/bin/ld: cgminer-adl.o: undefined reference to symbol 'dlclose@@GLIBC_2.2.5'
/usr/bin/ld: note: 'dlclose@@GLIBC_2.2.5' is defined in DSO /lib64/libdl.so.2 so try adding it to the linker command line
/lib64/libdl.so.2: could not read symbols: Invalid operation
collect2: ld returned 1 exit status
make[2]: *** [cgminer] Error 1
make[2]: Leaving directory `/usr/src/cgminer-2.0.0'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/usr/src/cgminer-2.0.0'
make: *** [all] Error 2
This is running Fedora 15 with GLIBC 2.14-4.  I'm guessing that is the problem, and I'm guessing it is due to the AMD header files and outside of ck's control, but wanted to report it just in case.  I may look for some repo with a newer GLIBC at some point if I can find time, but in the meantime, FYI:  It would appear that you have to have a pretty recent version of GLIBC to compile the gpu monitoring support.

EDIT:  I actually resolved this issue by defining LDFLAGS to point at ..../ati-stream-sdk-v2.1-lnx64/lib/x86_64/

I guess that means my guess was wrong.

For further clarification: to get this running on Fedora 15 x86_64, I run ./configure, gather the CFLAGS and LDFLAGS settings from the generated Makefile, and then re-run ./configure with CFLAGS set to those settings plus a -I..../ati-stream-sdk-v2.1-lnx64/..../includes/ entry, and LDFLAGS set to those settings plus a -L..../ati-stream-sdk-v2.1-lnx64/lib/x86_64/ entry (where the .... sections indicate paths that vary per machine and that I don't remember and can't see at this very moment). Finally, because Fedora 15's JSON library is too old, I edit the Makefile to use the source's included JSON instead of my installed JSON (I haven't tried setting JSON_INCLUDES for ./configure yet).
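In other words, roughly this (a sketch only; the SDK paths are placeholders for wherever the ati-stream-sdk-v2.1-lnx64 package was unpacked, and the existing CFLAGS/LDFLAGS come out of your own generated Makefile):
Code:
./configure
# copy the generated CFLAGS/LDFLAGS out of Makefile, then re-run with the SDK paths added:
./configure CFLAGS="<existing CFLAGS> -I/path/to/ati-stream-sdk-v2.1-lnx64/include" \
            LDFLAGS="<existing LDFLAGS> -L/path/to/ati-stream-sdk-v2.1-lnx64/lib/x86_64"
make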
hero member
Activity: 896
Merit: 1000
Buy this account on March-2019. New Owner here!!
Hi Gents

Just want to report a possible bug here of some kind.

---Snip---

But if I use CGMiner to clock my cards, I am getting significantly less performance
like 20 Mhash per card difference (total of 3 cards in machine)

I'm using WinXP with 11.6 and am using cgminer to overclock a Radeon 5770 and 5830 to 950MHz and underclock the memory to 300MHz, and I verified that the settings took with GPU Shark.  I could not do that with the ATI CCC.  And my hash rate has improved by 60 to 70MH/s for the pair of GPUs.
Sam

Well for one thing ATI CCC is completely useless IMO anyway.

But
Hi Gents

Just want to report a possible bug here of some kind.

Here is what I am running.

Windows 7 x64
Cgminer 2.0
AMD 11.8 Drivers

(I have been running cgminer since 1.5.something and I am a HUGE fan)
 
CGMiner 2.0 has the same great mining performance as 1.6.2 if I clock the cards with my own util (Sapphire Trixx or MSI Afterburner)
But if I use CGMiner to clock my cards, I am getting significantly less performance
like 20 Mhash per card difference (total of 3 cards in machine)

I am 100% positive I am using the correct syntax to clock the cards on the command line,
and it's the same if I go into the program and use CGMiner to clock them manually.

It does not matter if I have auto-tune features on or off. I know the developer is a Linux guy, but has anyone run into this issue yet? I would love to be able to use cgminer to clock my cards and start mining with a nice batch file (more time for Madden 2012).
Any Ideas?

Cgminer cannot downclock the memory as much as MSI Afterburner or Trixx on some cards. So if that is happening and you do not have enough power available, the cards may need more power than you can supply to keep a steady Mhash. That is what happened to me anyway.

The developer tells me it is because he uses the ATI stuff to change settings. MSI Afterburner and Trixx bypass the ATI stuff and change some settings on their own directly.

You may have an interesting point on cgminer not being able to downclock the memory; is it because it can't downclock past 300? You could be right. I was trying to downclock my mem to 180, which is the ideal spot for the cards I was testing this on.

I don't think it has anything to do with power though; this rig has a Corsair 1200 watt PSU.

But I bet you're right on with the downclock thing.

ckolivas: can you verify that cgminer can't downclock mem past 300? This would be a feature I would very much like. I know the mainstream thought is that 300 is the sweet spot for mem, and I think this myth exists because it is the lowest point you can downclock to in some early versions of software.

In any case, I have 9 GPUs and have been mining since BTC was worth 0.85 USD - my point being:

I have thoroughly tested these cards every which way possible, and although one of my cards prefers 300 for memclock (my XFX 5830, which I hate BTW), all my 6870s and 5870s LOVE 180, plus I am getting some energy savings there (even if it's not a lot, it adds up).
legendary
Activity: 3583
Merit: 1094
Think for yourself
Hi Gents

Just want to report a possible bug here of some kind.

---Snip---

But if I use CGMiner to clock my cards, I am getting significantly less performance
like 20 Mhash per card difference (total of 3 cards in machine)

I'm using WinXP with 11.6 and am using cgminer to overclock a Radeon 5770 and 5830 to 950MHz and underclock the memory to 300MHz, and I verified that the settings took with GPU Shark.  I could not do that with the ATI CCC.  And my hash rate has improved by 60 to 70MH/s for the pair of GPUs.
Sam
sr. member
Activity: 383
Merit: 250
Hi Gents

Just want to report a possible bug here of some kind.

Here is what I am running.

Windows 7 x64
Cgminer 2.0
AMD 11.8 Drivers

(I have been running cgminer since 1.5.something and I am a HUGE fan)
 
CGMiner 2.0 has the same great mining performance as 1.6.2 if I clock the cards with my own util (Sapphire Trixx or MSI Afterburner)

But if I use CGMiner to clock my cards, I am getting significantly less performance
like 20 Mhash per card difference (total of 3 cards in machine)

I am 100% positive I am using the correct syntax to clock the cards on the command line,
and it's the same if I go into the program and use CGMiner to clock them manually.

It does not matter if I have auto-tune features on or off. I know the developer is a Linux guy, but has anyone run into this issue yet? I would love to be able to use cgminer to clock my cards and start mining with a nice batch file (more time for Madden 2012).
Any Ideas?

Cgminer cannot downclock the memory as much as MSI Afterburner or Trixx on some cards. So if that is happening and you do not have enough power available, the cards may need more power than you can supply to keep a steady Mhash. That is what happened to me anyway.

The developer tells me it is because he uses the ATI stuff to change settings. MSI Afterburner and Trixx bypass the ATI stuff and change some settings on their own directly.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Now if this $%&##^ 100% CPU bug can be squashed I'll be laughing.    Bloody AMD *shakes fist*.
Just out of curiosity... couldn't a Sleep(1); be added into each thread? Would this fix the 100% CPU bug?
1ms shouldn't affect the mining at any significant level.
It's while the GPU code is executing that the CPU usage is high due to the driver consuming useless cycles. Sleeping when it comes back to the CPU will do nothing for that.
full member
Activity: 235
Merit: 100
Now if this $%&##^ 100% CPU bug can be squashed I'll be laughing.    Bloody AMD *shakes fist*.
Just out of curiosity... couldn't a Sleep(1); be added into each thread? Would this fix the 100% CPU bug?
1ms shouldn't affect the mining at any significant level.
sr. member
Activity: 252
Merit: 250
As per the README: grab AMD's ADL, unzip and copy the header (*.h) files from the "include" directory to the "cgminer/ADL_SDK" directory. After that, configure should pick up card control support and it'll be available in the compiled binary.

Any other link or torrent to download the ADL_SDK? I hate filling in forms for AMD.

TIA

You have the option to bypass the registration and go straight to the download.

Ooopss! Thanks a lot!