
Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0

member
Activity: 76
Merit: 10
Hi all,

Thanks for creating and maintaining such a great program!

Does anyone mine Litecoin/scrypt on a GPU with cgminer and want to share their hashrate?  I'm using an nVidia 680M and getting about 7 Kh/s, but my CPU miner gets about 40 Kh/s.  I know nVidia cards aren't great for Bitcoin, but does my hashrate seem right for LTC?  I'm on Windows 7 Pro with an Intel i7-3720QM and 32 GB RAM.

Thanks,

_theJestre
sr. member
Activity: 451
Merit: 250
Another litecoin miner question.

I have a dual-boot computer.  It has a 6-core AMD processor, 2 GB of memory, and two 5850 cards.  It mines bitcoins well.  Under Ubuntu 11.04 it will not mine litecoins, but if I boot into Windows 7 and run the same version of cgminer, it mines litecoins well.

The hardware, the cgminer version and the cgminer options are the same.  Only the operating system is different.

Windows is running AMD Catalyst 13.1.  Ubuntu is running 12.something.

Is this the solution: Mine with cgminer 2.11.2 on Windows 7 and not on Ubuntu 11.04?

Sam
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Code:
export GPU_MAX_ALLOC_PERCENT=100
export GPU_USE_SYNC_OBJECTS=1
Since the "GPU_MAX_ALLOC_PERCENT" and "GPU_USE_SYNC_OBJECTS" are linux only, what do us windows users do?
The first is not required on windows (it's on by default anyway). The second decreases CPU usage, so if you have that problem on windows, the only way to improve on it is to use linux.
newbie
Activity: 57
Merit: 0

7970 @ 1135/1890, LG 2, TC 22392:
Code:
 GPU 0:  72.0C 3413RPM | 714.6K/715.7Kh/s | A:0 R:1 HW:0 U:0.00/m I:20

Uhm. With LG 2 and TC 22392 I get:

Code:
[2013-03-15 16:04:33] Maximum buffer memory device 0 supports says 805306368
[2013-03-15 16:04:33] Your scrypt settings come to 1467482112
[2013-03-15 16:04:33] Error -61: clCreateBuffer (padbuffer8), decrease TC or increase LG

Ok, the above error seems to go away using:
Code:
export GPU_MAX_ALLOC_PERCENT=100
export GPU_USE_SYNC_OBJECTS=1
(I never ever needed this thing before).
As always YMMV; just because it works on mine doesn't mean it will work on yours - motherboard, CPU, and RAM actually matter with scrypt. However, -g 1 is now almost mandatory with these higher TCs. I'm making 1 GPU thread the default for scrypt in the next version (just made it into git).

Since the "GPU_MAX_ALLOC_PERCENT" and "GPU_USE_SYNC_OBJECTS" are linux only, what do us windows users do?
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
CGMiner keeps telling me "Disabling extra threads due to dynamic mode." How do I stop it from doing this?
Don't use dynamic mode. Smiley

Gee, thanks. I hadn't thought of that.  Roll Eyes

HOW?
... as it says in the README that no one reads ...
Code:
--intensity|-I  Intensity of GPU scanning (d or -10 -> 10, default: d to maintain desktop interactivity)
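In practice that means passing a fixed intensity instead of leaving it on dynamic. A minimal sketch (the pool URL and credentials here are placeholders):
Code:
cgminer -o stratum+tcp://pool.example.com:3333 -u worker -p pass -I 9
With a fixed intensity cgminer no longer disables the extra GPU threads to keep the desktop responsive, at the cost of a laggier desktop.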
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/

7970 @ 1135/1890, LG 2, TC 22392:
Code:
 GPU 0:  72.0C 3413RPM | 714.6K/715.7Kh/s | A:0 R:1 HW:0 U:0.00/m I:20

Uhm. With LG 2 and TC 22392 I get:

Code:
[2013-03-15 16:04:33] Maximum buffer memory device 0 supports says 805306368
[2013-03-15 16:04:33] Your scrypt settings come to 1467482112
[2013-03-15 16:04:33] Error -61: clCreateBuffer (padbuffer8), decrease TC or increase LG

Ok, the above error seems to go away using:
Code:
export GPU_MAX_ALLOC_PERCENT=100
export GPU_USE_SYNC_OBJECTS=1
(I never ever needed this thing before).
As always YMMV; just because it works on mine doesn't mean it will work on yours - motherboard, CPU, and RAM actually matter with scrypt. However, -g 1 is now almost mandatory with these higher TCs. I'm making 1 GPU thread the default for scrypt in the next version (just made it into git).
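For reference, the numbers in the quoted error are consistent with the padbuffer being sized as thread concurrency × 128 KiB (the scrypt scratchpad) divided by the lookup gap: 22392 × 131072 / 2 = 1467482112 bytes, well over the 805306368-byte (768 MiB) allocation limit the driver reports. A minimal Linux launch sketch combining the exports with the single-thread advice (pool details are placeholders, and the tuning values are just the ones from this discussion, not recommendations):
Code:
#!/bin/sh
# Allow the driver to hand out (nearly) all GPU memory, and reduce CPU usage
export GPU_MAX_ALLOC_PERCENT=100
export GPU_USE_SYNC_OBJECTS=1
# Single GPU thread (-g 1) with the LG/TC/intensity values discussed above
./cgminer --scrypt -o stratum+tcp://pool.example.com:3333 -u worker -p pass \
        -g 1 --lookup-gap 2 --thread-concurrency 22392 -I 20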
hero member
Activity: 896
Merit: 1000
CGMiner keeps telling me "Disabling extra threads due to dynamic mode." How do I stop it from doing this?
Don't use several threads (-g 1)
hero member
Activity: 591
Merit: 500
CGMiner keeps telling me "Disabling extra threads due to dynamic mode." How do I stop it from doing this?
Don't use dynamic mode. Smiley
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
I presume you mean a BFL ASIC (there won't be much point running FPGAs once everyone has an ASIC)
So, unless we change the defaults between now and then, the first step (in the future) of building from a git clone would be:
./autogen.sh --disable-opencl --enable-bflsc

This is exactly what I was looking for. Thanks!


I just tried compiling with that flag turned on, and got an error:
Code:
usbutils.c:1842:16: error: 'bflsrc_drv' undeclared (first use in this function)

It looks like line 1842 has a typo:
Code:
	drv_count[bflsrc_drv.drv_id].limit = lim;

bflsrc_drv should be bflsc_drv, to match the definition at line 151.
... Smiley
member
Activity: 112
Merit: 10
I presume you mean a BFL ASIC (there won't be much point running FPGAs once everyone has an ASIC)
So, unless we change the defaults between now and then, the first step (in the future) of building from a git clone would be:
./autogen.sh --disable-opencl --enable-bflsc

This is exactly what I was looking for. Thanks!


I just tried compiling with that flag turned on, and got an error:
Code:
usbutils.c:1842:16: error: 'bflsrc_drv' undeclared (first use in this function)

It looks like line 1842 has a typo:
Code:
	drv_count[bflsrc_drv.drv_id].limit = lim;

bflsrc_drv should be bflsc_drv, to match the definition at line 151.
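For clarity, the corrected line at usbutils.c:1842 (matching the bflsc_drv definition at line 151, as the report suggests) would read:
Code:
	drv_count[bflsc_drv.drv_id].limit = lim;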
member
Activity: 112
Merit: 10
I presume you mean a BFL ASIC (there won't be much point running FPGAs once everyone has an ASIC)
So, unless we change the defaults between now and then, the first step (in the future) of building from a git clone would be:
./autogen.sh --disable-opencl --enable-bflsc

This is exactly what I was looking for. Thanks!
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
I was hoping by now I could have said something like "GPU mining is deprecated, only ASIC code is supported from here on"...

Speaking of which, I'm setting up a new Linux install on an old atom netbook that I plan to use as the driver for a BFL Single (hopefully some time this year... sigh).

For configuration testing purposes, I built it with cpu mining enabled, and it's rocking along at 1 Mh/s...

I assume all I'll need for autogen.sh flags when I want to rebuild cgminer to drive the Single is

Code:
autogen.sh --disable-opencl --disable-adl --enable-bitforce

Is there anything else that I should, or might want, to include?


I presume you mean a BFL ASIC (there won't be much point running FPGAs once everyone has an ASIC)
So, unless we change the defaults between now and then, the first step (in the future) of building from a git clone would be:
./autogen.sh --disable-opencl --enable-bflsc
If you want both BFL ASIC and BFL FPGA
./autogen.sh --disable-opencl --enable-bflsc --enable-bitforce

If it's a source download, however, the first step would be:
./configure --disable-opencl --enable-bflsc
or
./configure --disable-opencl --enable-bflsc --enable-bitforce
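For completeness, the remainder of the build is the usual autotools flow once autogen.sh (or configure) has been run with the desired flags. A sketch; see the README for the full steps:
Code:
./autogen.sh --disable-opencl --enable-bflsc
make
sudo make install   # optional; cgminer also runs fine from the build directory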
hero member
Activity: 924
Merit: 1000
Watch out for the "Neg-Rep-Dogie-Police".....
I'm getting about 20-25% CPU usage on Gentoo (htop shows the main process accounts for all of this, but there are about 10 other cgminer entries) with an AMD 8350 and two 7950s. I have another box with one 5770 and one 7950, and it showed similar behavior (about 13% CPU usage). If I disable the 7950, the usage goes down to 3-4%. I am using -I 5,8 -w 256 -v 1 and I have tried all of the kernels. I have also tried cgminer 2.10.4 and 2.11.2. I have another couple of boxes with 5830s and 5770s that have less than 2% CPU usage. Any thoughts as to why the 7950 boxes have higher CPU usage?

From what I can gather, mixing 7xxx series cards with 5xxx or 6xxx series cards causes issues, as they prefer different driver/SDK setups. There is plenty of info elsewhere on the forums about it; that's how I found out. I keep all my 7xxx series cards separate in their own rig. Here's one link that might help you:

https://bitcointalksearch.org/topic/7970-linux-xubuntu-guide-please-77950

Or, do a search for "7970 settings". Hope it helps a bit.

Peace.
member
Activity: 112
Merit: 10
I was hoping by now I could have said something like "GPU mining is deprecated, only ASIC code is supported from here on"...

Speaking of which, I'm setting up a new Linux install on an old atom netbook that I plan to use as the driver for a BFL Single (hopefully some time this year... sigh).

For configuration testing purposes, I built it with cpu mining enabled, and it's rocking along at 1 Mh/s...

I assume all I'll need for autogen.sh flags when I want to rebuild cgminer to drive the Single is

Code:
autogen.sh --disable-opencl --disable-adl --enable-bitforce

Is there anything else that I should, or might want, to include?
newbie
Activity: 42
Merit: 0
I'm getting about 20-25% CPU usage on Gentoo (htop shows the main process accounts for all of this, but there are about 10 other cgminer entries) with an AMD 8350 and two 7950s. I have another box with one 5770 and one 7950, and it showed similar behavior (about 13% CPU usage). If I disable the 7950, the usage goes down to 3-4%. I am using -I 5,8 -w 256 -v 1 and I have tried all of the kernels. I have also tried cgminer 2.10.4 and 2.11.2. I have another couple of boxes with 5830s and 5770s that have less than 2% CPU usage. Any thoughts as to why the 7950 boxes have higher CPU usage?
.m.
sr. member
Activity: 280
Merit: 260
Hi, after some time I am testing mining again - similar setup, new GPU.
I am not able to lower the GPU memory clock; it always jumps back to 1200 when I enter a different value (and it hangs soon after).
Another guy mentioned he achieves around 300 Mh/s with a 1050 MHz GPU engine clock and 600 MHz memory clock in GUIMiner (windows), and his temps are around 55 C (two fans).
Would anybody have an idea what I can do to improve hash speed?

Linux Fedora Core 16 x64, MSI 7850 (only one fan Sad
cgminer from git,ADL,amd app sdk 2.8

When I run ./cgminer --benchmark I get 228.2 Mh/s (with the engine clock set to 1120 MHz).

Thanks a lot !
.m.
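If ADL is actually working, the clocks can also be pinned from the cgminer command line rather than an external tool. A minimal sketch using the values mentioned above (this needs ADL support compiled in, the pool details are placeholders, and not every card/BIOS will accept a memory clock that low, which may be why it snaps back to 1200):
Code:
./cgminer -o stratum+tcp://pool.example.com:3333 -u worker -p pass \
        --gpu-engine 1050 --gpu-memclock 600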
Lem
newbie
Activity: 78
Merit: 0

7970 @ 1135/1890, LG 2, TC 22392:
Code:
 GPU 0:  72.0C 3413RPM | 714.6K/715.7Kh/s | A:0 R:1 HW:0 U:0.00/m I:20

Uhm. With LG 2 and TC 22392 I get:

Code:
[2013-03-15 16:04:33] Maximum buffer memory device 0 supports says 805306368
[2013-03-15 16:04:33] Your scrypt settings come to 1467482112
[2013-03-15 16:04:33] Error -61: clCreateBuffer (padbuffer8), decrease TC or increase LG

Ok, the above error seems to go away using:
Code:
export GPU_MAX_ALLOC_PERCENT=100
export GPU_USE_SYNC_OBJECTS=1
(I never ever needed this thing before).

However, I get:
Code:
[2013-03-15 16:39:49] Error -5: Enqueueing kernel onto command queue. (clEnqueueNDRangeKernel)
[2013-03-15 16:39:49] GPU 0 failure, disabling!
[2013-03-15 16:39:49] Thread 1 being disabled
[2013-03-15 16:39:49] Error -5: Enqueueing kernel onto command queue. (clEnqueueNDRangeKernel)
[2013-03-15 16:39:49] GPU 0 failure, disabling!
[2013-03-15 16:39:49] Thread 0 being disabled
[2013-03-15 16:39:49] Thread 1 being re-enabled
[2013-03-15 16:39:49] Error -5: Enqueueing kernel onto command queue. (clEnqueueNDRangeKernel)
[2013-03-15 16:39:49] GPU 1 failure, disabling!
[2013-03-15 16:39:49] Thread 2 being disabled
[2013-03-15 16:39:49] Error -5: Enqueueing kernel onto command queue. (clEnqueueNDRangeKernel)
[2013-03-15 16:39:49] GPU 1 failure, disabling!
[2013-03-15 16:39:49] Thread 3 being disabled
[2013-03-15 16:39:49] Error -5: Enqueueing kernel onto command queue. (clEnqueueNDRangeKernel)
[2013-03-15 16:39:49] GPU 0 failure, disabling!
[2013-03-15 16:39:49] Thread 1 being disabled
[2013-03-15 16:39:49] Thread 1 being re-enabled
[2013-03-15 16:39:49] Error -5: Enqueueing kernel onto command queue. (clEnqueueNDRangeKernel)
[2013-03-15 16:39:49] GPU 0 failure, disabling!
[2013-03-15 16:39:49] Thread 1 being disabled
full member
Activity: 160
Merit: 100
Try set "safe" settings and disable ADL. I have similar troubles when ADL is enabled and I switch user or enable screen saver...

Where is the "safe" setting? What is ADL?
legendary
Activity: 1361
Merit: 1003
Don`t panic! Organize!
Try set "safe" settings and disable ADL. I have similar troubles when ADL is enabled and I switch user or enable screen saver...
full member
Activity: 160
Merit: 100
Ok. I still cannot get CGminer working how I'd like it to.
I have been through 7 OS installs in the past two weeks, 20 different ATi driver installs, and 10 CGMiner installs, plus much, much more.


Initially I couldn't get windows to recognize all the GPUs at once, so I went to ubuntu, back to windows, and then to another version of windows... Well, one day it just worked. I did nothing special that I can remember; it all of a sudden just worked...

So the day it worked I threw all my GPU's up on CGMiner (10.5; CCC 13.1), and used it for another week or so. Everything was *seemingly* fine, <75C for each GPU. Ran 1Ghash even with x4 5830's. No OC, no nothing, just stock.

Well, with this I kept getting BSOD and AMD driver crashes. To fix this I flashed my BIOS, reinstalled windows, install AMD driver 12.10 and CGMiner 10.5.
For a while I couldn't (AGAIN) get the GPUs to all be recognized, and, just as happened the first time, it suddenly worked one night...

Now that I had it working...
I tested each GPU individually by running it on CG for 10 mins or so. All the cards ran at about 75C and 3200 RPM, which is what they ran at before this BIOS flash and the new CCC and CG.

NOW, when I run two or more, the cards get HOT: 90C in seconds, but they are maintainable/stable. Prior to this, the temps were low but they would crash the system; now the temps are high but the system won't crash for stupid reasons like before (BIOS, drivers, and OS not speaking correctly).

I have tried setting the GPU fans to 100% and Auto, lowering the clocks, dynamic intensity, more fans, different TIM (reset the TIM on each card multiple times), caressing the GPUs before I go to sleep. Nothing seems to work to get them back to their coolness prior to the new software...



I am not even sure what I am trying to do anymore. So many fucking software problems my head is spinning, and no one can give me a good answer. This computer is making me look like a preschooler.