
Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0 - page 382. (Read 5806057 times)

hero member
Activity: 924
Merit: 501
No one for help?
Most likely your driver version is the problem ...
But you haven't provided any useful information.
Could even just be a setting you've used ...

Here, want to help on a problem with documentation? try this...
https://bitcointalksearch.org/topic/m.1711097

The problem is not the card, as it was working before and is currently working under a different kernel.  As long as lspci can see the card I can use it... except on this one machine, which is driving me crazy.

Happy to try another OS, though my experience with Ubuntu was that lspci didn't show multiple cards (the same problem I'm having with CentOS 6.4).  I know I had cgminer running under CentOS 6.3 with all GPUs operating on a multi-card system, and as I say, I f'd up when I upgraded to 6.4.

Is there a log file somewhere that can give more info?  ANY advice appreciated.
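Since the question is where to look: the kernel ring buffer and the X server log are the usual places when a card vanishes from lspci. A sketch of standard diagnostics (log paths assume a stock CentOS install; grep patterns are illustrative):

```shell
# List AMD/ATI PCI devices (vendor ID 0x1002); a card missing here was
# never enumerated by the kernel, so no driver setting can bring it back.
lspci -nn -d 1002:

# Kernel ring buffer: PCI bring-up errors and radeon/fglrx driver messages
dmesg | grep -iE 'radeon|fglrx|vga'

# X server log, where fglrx records each adapter it detected
grep -i adapter /var/log/Xorg.0.log
```

If lspci itself drops the card after the 6.4 upgrade, the problem is below the driver layer (kernel/PCI), not a cgminer setting.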



hero member
Activity: 518
Merit: 500
Hello. The latest cgminer shows that my 7950 runs at about 650 kH/s, but the mining pools (I tried 4) show in their stats (and pay accordingly) that I make only about 150-200 kH/s.
But if I use Reaper, the pool stats show the real 600-650 kH/s (and pay for it). So what does that mean? A technical problem, or is cgminer stealing kH/s for some backdoor pool?  Huh
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Hi Guys,

Does anyone know any flags that might help keep GPU temps down with a small sacrifice in hash rate? It's very hot where I live during the day, and I noticed that regardless of whether I run cgminer with low aggression and low thread concurrency, the temp is the same as with higher settings.

I'm running 2x 7950s with no flags at the moment, just the profile in guiminer
Reduce --gpu-engine limits
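A concrete sketch of that suggestion, using cgminer's GPU control flags; the clock range, temperature targets, and pool credentials below are placeholder values, not tuned settings:

```shell
# Give --gpu-engine a range (MHz) rather than a fixed clock; with
# --auto-gpu enabled, cgminer lowers the engine clock toward the bottom
# of the range whenever the card exceeds --temp-target.
cgminer --auto-gpu --gpu-engine 600-900 --temp-target 65 --temp-overheat 75 \
        -o stratum+tcp://pool.example.com:3333 -u user -p pass
```

Lowering the top of the range trades hash rate for heat directly, which is usually more effective than reducing intensity or thread concurrency.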
hero member
Activity: 697
Merit: 503
Hi Guys,

Does anyone know any flags that might help keep GPU temps down with a small sacrifice in hash rate? It's very hot where I live during the day, and I noticed that regardless of whether I run cgminer with low aggression and low thread concurrency, the temp is the same as with higher settings.

I'm running 2x 7950s with no flags at the moment, just the profile in guiminer
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
...
 [2013-04-03 21:19:29] CL Platform 0 version: OpenCL 1.2 AMD-APP (1124.2)
...
Sounds like that's it ...
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
No one for help?
Most likely your driver version is the problem ...
But you haven't provided any useful information.
Could even just be a setting you've used ...
member
Activity: 81
Merit: 1002
It was only the wind.
By the way, and this is totally unrelated, is it possible to use Eloipool to run a TRC pool?
I don't know what TRC is. Try it and see?

Honestly, I don't really know how to set up a pool. I'm a programmer, but I haven't found much information out there. If you could point me to a resource, that'd be awesome.
jhd
member
Activity: 63
Merit: 10
full member
Activity: 239
Merit: 100
Additionally, this is what I'm seeing when I run with the -n flag.

The first snippet is the output from 2.10.3.

Code:
 [2013-04-03 21:19:29] CL Platform 0 vendor: Advanced Micro Devices, Inc.
 [2013-04-03 21:19:29] CL Platform 0 name: AMD Accelerated Parallel Processing
 [2013-04-03 21:19:29] CL Platform 0 version: OpenCL 1.2 AMD-APP (1124.2)
 [2013-04-03 21:19:29] Platform 0 devices: 2
 [2013-04-03 21:19:29] 0 Cayman
 [2013-04-03 21:19:29] 1 Cayman
 [2013-04-03 21:19:29] Failed to ADL_Adapter_ID_Get. Error -1
 [2013-04-03 21:19:29] GPU 0 AMD Radeon HD 6900 Series hardware monitoring enabled
 [2013-04-03 21:19:29] GPU 1 AMD Radeon HD 6900 Series hardware monitoring enabled
 [2013-04-03 21:19:29] 2 GPU devices max detected

I ran this with the newest debug version of cgminer.exe and got exactly the same results (minus the newly added USB support, of course).
full member
Activity: 239
Merit: 100
Was wondering if someone could help me troubleshoot my cgminer.

As I've stated in another post, for whatever reason I am unable to mine with any version of cgminer past 2.10.3. I can mine without any issue whatsoever using 2.10.3, but from 2.10.4 onward, cgminer just stops running shortly after it probes for an alive pool.

Card Info - HIS H699F4G4M Radeon HD 6990 - http://www.newegg.com/Product/Product.aspx?Item=N82E16814161366

Driver Info - AMD Catalyst™ 13.2 Beta Driver - http://support.amd.com/us/kbarticles/Pages/amdcatalyst132betadriver.aspx

OS - Windows 8 Pro

AMD SDK Info - http://developer.amd.com/tools/heterogeneous-computing/amd-accelerated-parallel-processing-app-sdk/downloads/

AMD-APP-SDK-v2.8-Windows-64.exe

AMD 6990

Only logs recovered.

Code:
 [2013-04-03 20:47:55] Started cgminer 2.11.3
 [2013-04-03 20:47:56] Probing for an alive pool

This is the command line I'm using: (same command line works perfectly with 2.10.3)

Code:
@echo off
TIMEOUT 60
cgminer.exe -o stratum+tcp://stratum.btcguild.com:3333 -u username_1 -p pass -I 9 --gpu-memclock 300 -w 128 --auto-fan --auto-gpu --temp-target 70 --temp-overheat 80 2>logtofile.txt

Lastly, I installed Dr Mingw debugger and this is the output text from cgminer when it crashes:

Code:
cgminer.exe caused an Access Violation at location 004312e6 in module cgminer.exe Reading from location 08461499.

Registers:
eax=08461495 ebx=01f90d88 ecx=ffb4cf2b edx=00000008 esi=01f72448 edi=01f90baf
eip=004312e6 esp=0028f4e0 ebp=0028f548 iopl=0         nv up ei pl nz na pe nc
cs=0023  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00010202

Call stack:
004312E6  cgminer.exe:004312E6
004333F0  cgminer.exe:004333F0
00430289  cgminer.exe:00430289
00419380  cgminer.exe:00419380
004010B9  cgminer.exe:004010B9  __mingw_CRTStartup  crt1.c:244

00401284  cgminer.exe:00401284  WinMainCRTStartup  crt1.c:274

76B98543  KERNEL32.DLL:76B98543  BaseThreadInitThunk
76F7AC69  ntdll.dll:76F7AC69  RtlInitializeExceptionChain
76F7AC3C  ntdll.dll:76F7AC3C  RtlInitializeExceptionChain

If you want I can try and provide more information if required, but this is what I'm currently experiencing.
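One more thing worth trying with that Dr MinGW report: the raw addresses in the call stack can often be resolved to source lines with binutils, assuming this cgminer.exe was built with debug symbols (-g); a sketch:

```shell
# Resolve the faulting address from the crash report to a function and
# source line inside the MinGW-built binary (requires debug info).
addr2line -f -e cgminer.exe 004312E6
```

A resolved function name would narrow down which code path regressed between 2.10.3 and 2.10.4.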
legendary
Activity: 2674
Merit: 1083
Legendary Escrow Service - Tip Jar in Profile
I only get 53 MH/s for a 670M too. The desktop 670 is listed at 112 MH/s. I tried everything but it stays there. I'm not sure whether it's that much lower just because it's the notebook GPU. But it probably wouldn't turn a profit to mine with it anyway.
member
Activity: 81
Merit: 1002
It was only the wind.
While I agree that using direct USB is probably better overall,

Then we have no argument here! You're agreeing with kano! By the way, and this is totally unrelated, is it possible to use Eloipool to run a TRC pool?
jhd
member
Activity: 63
Merit: 10
I set it up and I only get 600 Mhash for two 7950 cards. That's very poor.
jhd
member
Activity: 63
Merit: 10
Hi, I have a problem. I have two HD 7950s and I want to mine PPCoin (bitparking), but it doesn't work. Can someone share settings for the 7950?
legendary
Activity: 2674
Merit: 1083
Legendary Escrow Service - Tip Jar in Profile
It works now... solution is on bottom...

Hello,

since my roommate has a new notebook with an NVIDIA GTX 670M, I wanted to check whether it's worth mining with it. So I set up an account at pool.itzod.ru and used the standard commands they suggested.

I tried with -I 9 but it failed every time; the driver crashed. Below 9 it works fine, but I wondered why it's only 14 MH/s. It should be 112 MH/s judging from the tables. So I checked GPU-Z and found that only the integrated GPU is working; Optimus isn't enabling the GTX, and that GPU runs at 14 MH/s. I then tried forcing cgminer.exe to use the GTX, but it doesn't change anything. It still uses the integrated GPU, and with even fewer MH/s, only 1-5.

What can I do to force the use of the GTX 670M? And is it maybe even possible to use both GPUs at the same time?

I wonder why it crashed with -I 9 when that's the standard. But maybe such values only work with the normal GPU.

Are there any disadvantages to using stratum with a GPU, or is there no downside?

Thanks!

Edit: I checked out the -n flag and got this result:

Quote
F:\Programme\CGMiner>cgminer.exe -n
CL Platform 0 vendor: Intel(R) Corporation

CL Platform 0 name: Intel(R) OpenCL
CL Platform 0 version: OpenCL 1.1
Platform 0 devices: 1
0       Intel(R) HD Graphics 4000
CL Platform 1 vendor: NVIDIA Corporation

CL Platform 1 name: NVIDIA CUDA
CL Platform 1 version: OpenCL 1.1 CUDA 4.2.1

Platform 1 devices: 1
0       GeForce GTX 670M
Unable to load ati adl library
1 GPU devices max detected
USB all: found 6 devices - listing known devices

No known USB devices

The Intel HD Graphics 4000 is the integrated GPU and the GeForce GTX 670M is the fast one. Optimus should switch between them when speed is needed, but that doesn't seem to work.
So I tried the parameter -d 1 to use the second GPU, but it told me there is no such device.

Edit2: It works now. I used the parameters --gpu-platform 1 -d 0 and now the real GPU is mining... Smiley I will test whether I can mine with the other one too now. But at least it runs now...
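For anyone hitting the same Optimus problem, the fix above amounts to selecting the NVIDIA OpenCL platform explicitly instead of the default Intel one; a sketch with placeholder pool credentials:

```shell
# --gpu-platform 1 selects the "NVIDIA CUDA" OpenCL platform from the
# -n listing above; -d 0 is the first device on that platform, here the
# GTX 670M. Platform numbering comes from cgminer -n and may differ
# between machines.
cgminer.exe --gpu-platform 1 -d 0 \
    -o stratum+tcp://pool.example.com:3333 -u user -p pass
```

The earlier attempt with -d 1 failed because device indices are counted per platform, so the 670M is device 0 on platform 1, not device 1 on platform 0.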
member
Activity: 81
Merit: 1002
It was only the wind.
USB does, yes. But not the devices in question.

If that's the case, then I suppose I see no real benefit to using libusb either.
full member
Activity: 247
Merit: 100
Hello guys,
I tried the latest cgminer (2.11.3 win32) today on my Win8 x64 / Radeon 5850 / Catalyst 13.3 beta machine, but it crashes on start.

Code:
cgminer -o stratum+tcp://coinotron.com:3334 -u blabla -p blabla --scrypt --thread-concurrency 8000 -I 18 -g 1 -w 256

Can you help me?

On Ubuntu 12.10 with Catalyst 2:9.000-0ubuntu3 it works flawlessly.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
How do I --enable-cpu?

It doesn't seem to work o.0
CPU mining is not supported as per the README
member
Activity: 81
Merit: 1002
It was only the wind.
Wait, you just said that it was the current, supported, standard interface, and libusb is low level. Then you said libusb adds a lot of abstraction and does the same things as the serial I/O libs, which would make it higher level. Which is it?
Both. For the network analogy, libusb would be libpcap - it adds some programmer-friendly abstractions on top of a raw socket. It's still working with low-level raw sockets, but in an abstracted way.

If we're using that analogy, then the serial I/O libs would be the regular TCP/IP stack.
Right...
And libpcap is still faster and offers more functionality than the TCP/IP stack, it doesn't reimplement too much.
Even if you implement your own TCP/IP stack on top of it?
But it's nothing like that. USB provides a lot more than serial transfers, as kano said. So it's not a reimplementation.
M3t
newbie
Activity: 42
Merit: 0
Also, does it matter how many threads I have per GPU? It doesn't seem to affect anything... hmm... ?