Author

Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0 - page 655. (Read 5806103 times)

full member
Activity: 155
Merit: 100
@Elmojo:
Do a cd "D:\X\Bitcoin Stuff\cgminer-2.2.3-win32\" first.
Tried that, no joy.
In fact, even my shortcuts, which used to kinda work, now give the same error.
It's like the program has gotten corrupted somehow.
I'm gonna delete and reinstall to see if that helps.
legendary
Activity: 3586
Merit: 1099
Think for yourself
New version: 2.2.5

Finally fixed the opencl created zero sized binary bug.


Hey, I finally have a brand spanking new set of .bin files for Windoze.  Smiley
Sam
full member
Activity: 373
Merit: 100
@Elmojo:
Do a cd "D:\X\Bitcoin Stuff\cgminer-2.2.3-win32\" first.
full member
Activity: 155
Merit: 100
You need to run cgminer twice and capture the output:
 - once with "-n" as the only option
 - once with "-D -T --verbose " added to all the options you already use

Quote
You need to run cgminer from a command prompt instead of editing a shortcut.  Then the output will stay on the screen for you to copy and paste.  I am assuming you are running Windows.  In *nix it wouldn't be a command prompt.  In Windows command prompt, in order to copy, you need to right click, select all, highlight what you want to copy (instead of all), and hit enter to copy it to the clipboard.

Thanks, I never would have guessed that, since the 'guides' I read just say to edit the shortcut. Smiley

Okay, I did it.
The first time, the target looked like this: "D:\X\Bitcoin Stuff\cgminer-2.2.3-win32\cgminer.exe" -n
Here's the output:
Code:
C:\Windows\system32>"D:\X\Bitcoin Stuff\cgminer-2.2.3-win32\cgminer.exe" -n
[2012-02-12 22:00:45] CL Platform 0 vendor: NVIDIA Corporation
[2012-02-12 22:00:45] CL Platform 0 name: NVIDIA CUDA
[2012-02-12 22:00:45] CL Platform 0 version: OpenCL 1.1 CUDA 4.1.1
[2012-02-12 22:00:45] Platform 0 devices: 1
[2012-02-12 22:00:45] Unable to load ati adl library
[2012-02-12 22:00:45] 1 GPU devices max detected

The second time, I edited the target to look like this: "D:\X\Bitcoin Stuff\cgminer-2.2.3-win32\cgminer.exe" -o http://127.0.0.1:9332 -u noob -p sauce -D --verbose
Output: nothing.
It appears that suddenly cgminer is "not a valid win32 application". WTH?!

EDIT: I rebooted, and the "not a valid..." error persists.
I'm stumped.  Angry
hero member
Activity: 807
Merit: 500
And the other repeated again and again info often needed with problems:
the full output of 'cgminer -n' and some of 'cgminer -D -T --verbose ...' (obviously not all of it if cgminer is actually running ...)
where '...' are the options you use  <-I have NO idea what this means. I tried running cgminer with the -n flag, and it flashes on the screen so fast I can't read anything, then closes.
You need to run cgminer from a command prompt instead of editing a shortcut.  Then the output will stay on the screen for you to copy and paste.  I am assuming you are running Windows.  In *nix it wouldn't be a command prompt.  In Windows command prompt, in order to copy, you need to right click, select all, highlight what you want to copy (instead of all), and hit enter to copy it to the clipboard.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
New version: 2.2.5

Finally fixed the opencl created zero sized binary bug.
Faster kernels for both phatk and poclbm.
Detection of sdk2.6 and using poclbm if no phatk binary available.

Full changelog:
- Make output buffer write only as per Diapolo's suggestion.
- Constify nonce in poclbm.
- Use local and group id on poclbm kernel as well.
- Microoptimise phatk kernel on return code.
- Adjust engine speed up according to performance level engine setting, not the
current engine speed.
- Try to load a binary if we've defaulted to the poclbm kernel on SDK2.6
- Use the poclbm kernel on SDK2.6 with bitalign devices only if there is no
binary available.
- Further generic microoptimisations to poclbm kernel.
- The longstanding generation of a zero sized binary appears to be due to the
OpenCL library putting the binary in a RANDOM SLOT amongst 4 possible binary
locations. Iterate over each of them after building from source till the real
binary is found and use that.
- Fix harmless warnings with -Wsign-compare to allow cgminer to build with -W.
- Fix missing field initialisers warnings.
- Put win32 equivalents of nanosleep and sleep into compat.h fixing sleep() for
adl.c.
- Restore compatibility with Jansson 1.3 and 2.0 (api.c required 2.1)
- Modularized logging, support for priority based logging
- Move CPU chipset specific optimization into device-cpu
Vbs
hero member
Activity: 504
Merit: 500
Quote from: Phateus on May 11, 2011, 05:05:55 PM

... so nothing has changed since then?

Hmmm... Tough question there mate! To the best of my knowledge, we still have a kernel made for the 2.4 SDK having the best performance today on BFI_INT cards.
Hmmm... so anyone here saying that 2.5 is what everyone uses might be wrong then ...
That graph was done with 2.4 I presume?
It's also only done for a 5870.
Obviously would be worth seeing how it differs on 2.5 and 2.6

As for 69xx and 79xx cards - the graph actually means nothing.

I use 2.4, but I did use 2.6 last night for a few hours (linux of course)
It's the 2 new libraries (libaticalcl.so and libaticalrt.so) that replace those 2 fglrx libraries that actually cause most of the 2.6 suckage.
The 2.6 bin files are smaller and only a few % slower when run on a true 2.4 with 11.6/4; however, 2.6 bins on 2.6 ... well, yep - that is unbelievably bad (due to those libraries, whose real versions I don't even know, except that the README.txt implies they predate 12.1 since it tells you to install 12.1+ after the SDK - obviously for that reason)

Yep, the graph was made for a VLIW5 card, showing that you can get good performance on low mem clocks with a big worksize vs high mem clocks and a small worksize. It suggests using lesser vectors (2) and a big worksize (256/128) for ram clocks below 800MHz (on a 5870); and more vectors (4) and small worksizes (64/128) for faster ram clocks.

69xx cards are VLIW4, so they should have a similar graph (ofc with the clock restriction on linux of abs(core-ram)<150MHz, worksize 64/128 and vectors 4/2 seems the way to go).
79xx cards are GCN which "should" adapt to VLIW4 when needed, but as it stands now, AMD wants us to reinvent the wheel again.

SDK 2.5 is very very similar to SDK 2.4 in terms of compiler performance and optimizations, you can try the latest versions of each to see what gains you get (latest 2.4 is in 11.6 driver (v.2.4.650.9), latest 2.5 is in 11.11 driver (v.2.5.793.1)). Still, the phatk kernel was made for 2.4, and, luckily for us, SDK 2.5's compiler seems to like the same performance tweaks (glitches?) as 2.4.

SDK 2.6 is a really different beast, the phatk kernel will most certainly need to be somewhat rewritten as some performance tweaks in it are no longer useful on the latest compiler.
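To make that concrete, the two corners of that graph could be tried with something like the following (pool, worker and clock numbers are placeholders, and this assumes cgminer's -v/--vectors, -w/--worksize and --gpu-memclock options in the 2.x GPU builds). The first line is the low-memory-clock case with fewer vectors and a big worksize; the second is a stock memory clock with more vectors and a small worksize:
Code:
cgminer -o http://pool:8332 -u worker -p pass -I 9 -v 2 -w 256 --gpu-memclock 300
cgminer -o http://pool:8332 -u worker -p pass -I 9 -v 4 -w 64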



legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
...
And the other repeated again and again info often needed with problems:
the full output of 'cgminer -n' and some of 'cgminer -D -T --verbose ...' (obviously not all of it if cgminer is actually running ...)
where '...' are the options you use  <-I have NO idea what this means. I tried running cgminer with the -n flag, and it flashes on the screen so fast I can't read anything, then closes.

I still think the biggest issue is figuring out why it crashes my graphics driver every time.
...
That's the exact bit my post
  https://bitcointalksearch.org/topic/m.742897 (the bit after 4)
is referring to.

You need to run cgminer twice and capture the output:
 - once with "-n" as the only option
 - once with "-D -T --verbose " added to all the options you already use
legendary
Activity: 1876
Merit: 1000


Kudos for the 2.2.4 release for the 7970.

I was running 2.2.3 on Windows.  Couldn't run them past 1100 stable, even after removing one of the 5 cards, thinking it was a power issue.


now!
Code:
GPU 0: 72.0C 2427RPM 40% | 672.8/660.5Mh/s | 99% | 1140Mhz 1000Mhz 1.17V A:111 R:1 HW:0 U:8.26/m I: 9
GPU 1: 74.0C 2286RPM 39% | 672.9/659.4Mh/s | 99% | 1140Mhz 1000Mhz 1.17V A:142 R:1 HW:0 U:10.57/m I: 9
GPU 2: 74.0C 2167RPM 38% | 672.8/659.9Mh/s | 99% | 1140Mhz 1000Mhz 1.17V A:133 R:1 HW:0 U:9.90/m I: 9
GPU 3: 74.0C 2630RPM 46% | 672.8/655.0Mh/s | 99% | 1140Mhz 1000Mhz 1.17V A:117 R:0 HW:0 U:8.71/m I: 9
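(For anyone wanting to set clocks like those from cgminer itself rather than an external tool, a rough sketch - assuming the --gpu-engine/--gpu-memclock/--gpu-vddc/--auto-fan ADL options in the 2.2.x builds, with placeholder pool details:)
Code:
cgminer -o http://pool:8332 -u worker -p pass -I 9 --gpu-engine 1140 --gpu-memclock 1000 --gpu-vddc 1.170 --auto-fan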
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Quote from: Phateus on May 11, 2011, 05:05:55 PM

... so nothing has changed since then?

Hmmm... Tough question there mate! To the best of my knowledge, we still have a kernel made for the 2.4 SDK having the best performance today on BFI_INT cards.
Hmmm... so anyone here saying that 2.5 is what everyone uses might be wrong then ...
That graph was done with 2.4 I presume?
It's also only done for a 5870.
Obviously would be worth seeing how it differs on 2.5 and 2.6

As for 69xx and 79xx cards - the graph actually means nothing.

I use 2.4, but I did use 2.6 last night for a few hours (linux of course)
It's the 2 new libraries (libaticalcl.so and libaticalrt.so) that replace those 2 fglrx libraries that actually cause most of the 2.6 suckage.
The 2.6 bin files are smaller and only a few % slower when run on a true 2.4 with 11.6/4; however, 2.6 bins on 2.6 ... well, yep - that is unbelievably bad (due to those libraries, whose real versions I don't even know, except that the README.txt implies they predate 12.1 since it tells you to install 12.1+ after the SDK - obviously for that reason)
legendary
Activity: 1876
Merit: 1000
Would it be possible to add a switch to set the memory voltage? By default it's about 1.6V - but since we usually downclock memory so much, it would make sense to lower it considerably.

Not possible to alter the mem voltage via drivers.  Your only option is a flashed BIOS, and even then:
a) on most cards it is impossible - there is no voltage controller, it is a single-value device
b) you have a very good chance of bricking the card.

Didn't I read somewhere that even if you did lower the memory voltage, it would only save a couple of watts...
hero member
Activity: 518
Merit: 500
Quote from: Phateus on May 11, 2011, 05:05:55 PM

... so nothing has changed since then?

Hmmm... Tough question there mate! To the best of my knowledge, we still have a kernel made for the 2.4 SDK having the best performance today on BFI_INT cards.

And what kernel is that ?

Are you referring to phatk ?

Thanks !
Vbs
hero member
Activity: 504
Merit: 500
Quote from: Phateus on May 11, 2011, 05:05:55 PM

... so nothing has changed since then?

Hmmm... Tough question there mate! To the best of my knowledge, we still have a kernel made for the 2.4 SDK having the best performance today on BFI_INT cards.
donator
Activity: 1218
Merit: 1080
Gerald Davis
Would it be possible to add a switch to set the memory voltage? By default it's about 1.6V - but since we usually downclock memory so much, it would make sense to lower it considerably.

Not possible to alter the mem voltage via drivers.  Your only option is a flashed BIOS, and even then:
a) on most cards it is impossible - there is no voltage controller, it is a single-value device
b) you have a very good chance of bricking the card.
hero member
Activity: 1162
Merit: 500
Would it be possible to add a switch to set the memory voltage? By default it's about 1.6V - but since we usually downclock memory so much, it would make sense to lower it considerably.
full member
Activity: 155
Merit: 100
Perhaps if you considered following kano's advice, somebody could actually help you. So far, you haven't given anywhere near enough information.
I didn't realize kano was responding to me. I didn't see anything in his post that made me think he was answering my question.
I looked at the post he linked to, but I don't see anything there that appears to apply to me.
He posted:
1) Read the README <-Did that first, looks like Greek (or Klingon) to me!
or
2) just restart it over and over until it succeeds to generate the new *.bin  <-From what I can tell, mine is generating BINs correctly
or
3) make sure you extracted all the files (including the new *.cl files) <- The cl files are in the directory, not sure what else to check
or
4) Last resort: just rename the old *.bin files to the new names: replace 110817 with 120203 in the names <-My bin files already have 120203 in the name

And the other repeated again and again info often needed with problems:
the full output of 'cgminer -n' and some of 'cgminer -D -T --verbose ...' (obviously not all of it if cgminer is actually running ...)
where '...' are the options you use  <-I have NO idea what this means. I tried running cgminer with the -n flag, and it flashes on the screen so fast I can't read anything, then closes.

I still think the biggest issue is figuring out why it crashes my graphics driver every time.

1.  If you're going to use cgminer, try not to use other clocking tools; let cgminer manage the clocks.
2.  "Heavily OC'd" is probably your issue.

1. I will, if I ever figure out how to use it.
2. Unlikely, it does the same even when my card is set back to stock clocks.

Does any of this help?
I'll be happy to post additional stats and logs, if I can get it working.
hero member
Activity: 896
Merit: 1000
Buy this account on March-2019. New Owner here!!

Code:
C:\cgminer-2.2.4-win32>cgminer -ndev.
[2012-02-12 16:10:55] CL Platform 0 vendor: Advanced Micro Devices, Inc.
[2012-02-12 16:10:55] CL Platform 0 name: AMD Accelerated Parallel Processing
[2012-02-12 16:10:55] CL Platform 0 version: OpenCL 1.1 AMD-APP (851.4)
[2012-02-12 16:10:55] Platform 0 devices: 4
[2012-02-12 16:10:55] GPU 0 AMD Radeon HD 6900 Series hardware monitoring enabled
[2012-02-12 16:10:55] GPU 1 AMD Radeon HD 6900 Series hardware monitoring enabled
[2012-02-12 16:10:55] GPU 2 AMD Radeon HD 6900 Series hardware monitoring enabled
[2012-02-12 16:10:55] GPU 3 AMD Radeon HD 6900 Series hardware monitoring enabled
[2012-02-12 16:10:55] 4 GPU devices max detected

C:\cgminer-2.2.4-win32>
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Thanks very much!

I guess I could whitelist the SDK and have cgminer default to poclbm only when it detects SDK 2.6, instead of always. On Linux it is "OpenCL 1.1 AMD-APP (844.4)". Can anyone tell me what "cgminer -n" tells them on Windows with SDK 2.6, please?

edit: I guess I can install it into my windows VM and see for myself what 32 bit sdk2.6 is

edit: I have them all now. (does anyone use osx?)
hero member
Activity: 896
Merit: 1000
Buy this account on March-2019. New Owner here!!
Thanks for that. The question was about the poclbm kernel being default, not the SDK being default. How does poclbm perform on 2.5 sdk for you?

That's easy.

POCLBM IS ABSOLUTE GARBAGE ON 2.5!

Seriously, I lose 100-150 Mhash per GPU using -k poclbm with the 2.5 SDK
(tested with multiple 5870s and 6990s).

That's what I was trying to say: it is premature to make poclbm the default kernel because most miners are using 2.5.
The only people who want 2.6 and poclbm are people who want to do mining and gaming on the same machine.


Woho. That's more definitive testing. Just for my comfort, can you confirm that's the latest poclbm in 2.2.4 you're talking about? Thanks!

yes, I tested it on a freshly unzipped cgminer 2.2.4
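(For anyone on the 2.5 SDK hitting the same slowdown, the kernel can simply be pinned instead of relying on the default - a sketch assuming the -k/--kernel option and placeholder pool details:)
Code:
cgminer -o http://pool:8332 -u worker -p pass -k phatk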

-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Thanks for that. The question was about the poclbm kernel being default, not the SDK being default. How does poclbm perform on 2.5 sdk for you?

That's easy.

POCLBM IS ABSOLUTE GARBAGE ON 2.5!

Seriously, I lose 100-150 Mhash per GPU using -k poclbm with the 2.5 SDK
(tested with multiple 5870s and 6990s).

That's what I was trying to say: it is premature to make poclbm the default kernel because most miners are using 2.5.
The only people who want 2.6 and poclbm are people who want to do mining and gaming on the same machine.


Woho. That's more definitive testing. Just for my comfort, can you confirm that's the latest poclbm in 2.2.4 you're talking about? Thanks!