
Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0 - page 448. (Read 5805537 times)

-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Ok, one of my cards has a fan on it that is dying, but it has given me a chance to observe a behavior in cgminer that, I believe, needs tweaking, when it comes to auto-fan and auto-gpu.

I notice repeatedly, that the temps of my card are fluctuating a lot, and this is why...

SNIP

Creating a PID-type controller for every fan/GPU combination out there, with different heat generation characteristics, different cooling capacities, different fan speed change effects, different fan acceleration capabilities, etc. etc. etc.... is basically impossible. As per the readme, the algorithm is designed to work well most of the time with most hardware, and it will occasionally get it wrong. Since GPU mining is dead long term, I have zero interest in rewriting the algorithm or developing it further. It won't be long before GPU miners are the poor relatives that give me nothing for further development once ASICs hit. Sad, since I really liked GPU mining, but that's the reality. Probably best to find some static fan speed workaround in your case.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/

Cgminer 287 with two cards on my everyday PC. It mined 10001 of the 10000 requested, and it quits normally, but windows thinks it quit unexpectedly. Win pops up a dialog box asking if I want to quit cgminer.  Of course I want to quit... because I have it on a loop.

I had it mine 10K shares and loop. Has looped a few times, and then this happens.  Wish there was a way to tell windows "it's okay, let it quit/crash and don't worry about it" and my loop would restart it w/o babysitting it.
Probably because cgminer returns a "failure" type return code if the number of shares it mined isn't exact.
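Roughly the idea (a sketch only, not cgminer's actual shutdown code, and the numbers are just the ones from the post): the exit status depends on hitting the requested share count exactly, and that status is all a wrapper loop ever sees.
Code:
/* Sketch only (not cgminer's code): the process exit status is non-zero
 * unless the accepted share count exactly matches what was requested,
 * so 10001 of 10000 still reads as a "failure" to whatever launched it. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int requested = 10000;   /* the requested share count from the post */
    int accepted  = 10001;   /* one extra share accepted before shutdown */

    printf("mined %d of %d requested shares\n", accepted, requested);
    return (accepted == requested) ? EXIT_SUCCESS : EXIT_FAILURE;
}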
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
2.9.3 works fine for me in Win7 but when I get into the settings area, there's a 2 second lag. Weird. Older versions did not have this weird lag.
Intentional due to other changes, it's clearing the screen and it doesn't update that frequently.
legendary
Activity: 2128
Merit: 1002
2.9.3 works fine for me in Win7 but when I get into the settings area, there's a 2 second lag. Weird. Older versions did not have this weird lag.
hero member
Activity: 981
Merit: 500
DIV - Your "Virtual Life" Secured and Decentralize
Yes, but it turns off the prompt for everything.
Change a single value at this point in the registry:
HKEY_CURRENT_USER\Software\Microsoft\Windows\Windows Error Reporting
The item is called DontShowUI.
Change it to 1 (enabled) and restart.
Magic: it doesn't dick around anymore.
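If you'd rather script it, a .reg file along these lines does the same thing (same key as above; note it disables the error-reporting dialog for every application, not just cgminer):
Code:
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\Windows Error Reporting]
"DontShowUI"=dword:00000001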
sr. member
Activity: 322
Merit: 250
Sometimes Windows (7 x64) thinks cgminer has crashed when I ask it to stop mining after xxxx shares. Perhaps it IS a crash, because it doesn't happen all the time.

here is the latest info:
Code:
Problem signature:
  Problem Event Name: APPCRASH
  Application Name: cgminer.exe
  Application Version: 0.0.0.0
  Application Timestamp: 508e1955
  Fault Module Name: libusb-1.0.dll
  Fault Module Version: 1.0.12.10532
  Fault Module Timestamp: 503cda37
  Exception Code: c0000005
  Exception Offset: 0000196d
  OS Version: 6.1.7601.2.1.0.768.3
  Locale ID: 1033
  Additional Information 1: 0a9e
  Additional Information 2: 0a9e372d3b4ad19135b953a78882e789
  Additional Information 3: 0a9e
  Additional Information 4: 0a9e372d3b4ad19135b953a78882e789

Cgminer 287 with two cards on my everyday PC. It mined 10001 of the 10000 requested, and it quits normally, but windows thinks it quit unexpectedly. Win pops up a dialog box asking if I want to quit cgminer.  Of course I want to quit... because I have it on a loop.


EDIT: Happened on my new build too... using version 285

Code:
Problem signature:
  Problem Event Name: APPCRASH
  Application Name: cgminer.exe
  Application Version: 0.0.0.0
  Application Timestamp: 508663c0
  Fault Module Name: ntdll.dll
  Fault Module Version: 6.1.7600.16559
  Fault Module Timestamp: 4ba9b29c
  Exception Code: c0000005
  Exception Offset: 000328bf
  OS Version: 6.1.7600.2.0.0.768.3
  Locale ID: 1033
  Additional Information 1: 0a9e
  Additional Information 2: 0a9e372d3b4ad19135b953a78882e789
  Additional Information 3: 0a9e
  Additional Information 4: 0a9e372d3b4ad19135b953a78882e789

I had it mine 10K shares and loop. Has looped a few times, and then this happens.  Wish there was a way to tell windows "it's okay, let it quit/crash and don't worry about it" and my loop would restart it w/o babysitting it.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Hey guys.  I turned the help text files into one HTML help file.

Let me know what you think!

http://test.the47.net/cgminer/readme.html
You'll need to find any '<' and '>' and replace them with &lt; and &gt;
in:
 API-README
 FPGA-README
 README
 linux-usb-cgminer
 windows-build.txt

Oh, also any '&' with &amp;
 in README and API-README

I have done this for stuff in

tags, and other text I have surrounded with

 so it doesn't need escaping yet.

But I will continue to work on it.
Yeah I just checked the source now and noticed most of them had already been done.
Not sure if any others were missed, but there are some brackets in linux-usb-cgminer that were missed - in the first part '4)'
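For anyone doing the escaping by hand, a throwaway filter along these lines covers the three characters mentioned above (an illustration only, not the script actually used to build readme.html):
Code:
/* Throwaway stdin->stdout filter that escapes '<', '>' and '&' for HTML. */
#include <stdio.h>

int main(void)
{
    int c;

    while ((c = getchar()) != EOF) {
        switch (c) {
        case '<': fputs("&lt;", stdout);  break;
        case '>': fputs("&gt;", stdout);  break;
        case '&': fputs("&amp;", stdout); break;
        default:  putchar(c);
        }
    }
    return 0;
}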
sr. member
Activity: 322
Merit: 250
Hey guys.  I turned the help text files into one HTML help file.

Let me know what you think!

http://test.the47.net/cgminer/readme.html
You'll need to find any '<' and '>' and replace them with &lt; and &gt;
in:
 API-README
 FPGA-README
 README
 linux-usb-cgminer
 windows-build.txt

Oh, also any '&' with &amp;
 in README and API-README

I have done this for stuff in

tags, and other text I have surrounded with

 so it doesn't need escaping yet.

But I will continue to work on it.


EDIT: looks like the
 tags still need their contents to be escaped.  Well, that's an easy fix.  [strike]Working on that right now.[/strike]  Fixed.
hero member
Activity: 504
Merit: 500
Scattering my bits around the net since 1980
Ok, one of my cards has a fan on it that is dying, but it has given me a chance to observe a behavior in cgminer that, I believe, needs tweaking, when it comes to auto-fan and auto-gpu.

I notice repeatedly, that the temps of my card are fluctuating a lot, and this is why...

I'm using the following options (I was using hysteresis 4, and changed it to 2 a few days ago thinking that might do a better job of holding the temp on card #2, but it didn't work out that way):

"auto-fan" : true,
"auto-gpu" : true,
"gpu-threads" : "1",
"gpu-engine" : "600-825,600-825",
"gpu-fan" : "0-85,0-85",
"gpu-memdiff" : "200,200",
"intensity" : "9,5",
"temp-hysteresis" : "2",
"temp-target" : "70,70",
"temp-overheat" : "80,80",
"temp-cutoff" : "90,90"

What ends up happening is that the #2 card immediately sets itself to 825 engine, 50 fan, upon startup.

Ok... I'd rather have it start out of the gate at the default clock with 85 fan, but no big deal, it'll find its sweet spot in a few minutes, right? Wrong...

There is some lag time between the heat being generated and the sensor picking it up, so my temp is constantly fluctuating by at least 5 degrees.

First, since the temp hasn't risen yet, cgminer starts stepping down the fan, usually with bigger steps when the temp is below 68. At the same time, the clock is already at 825 and is building up heat. By the time the sensors pick it up, cgminer is raising the fan speed quickly up to 85, at which point the temp is usually over 72-73, and then it starts dropping the clock speed step by step to generate less heat.

So far so good. It will make its way, most of the time, all the way down to 600 before the temp starts to come down. The fan speed gets reduced along the way as expected... but when my temp finally drops below 68 degrees, instead of gradually bringing the clock speed back up 1 step at a time like I would expect, it slams it back up to full in one go, from 600-650 directly to 825 again.

Since the temp is still low, the fan speed continues to be reduced until the heat being generated finally reaches the sensor, which then starts ramping up the fan again quickly as the sensor's temp rises fast, and the cycle continues over and over again.

These temp fluctuations can't be good at all for the card. Granted, the fan on that card isn't operating as efficiently as it used to, but it is still running, it just doesn't spin as freely as it should. This just gave me a chance to watch auto-fan and auto-gpu do their thing over a long period of time.

I have watched my card's temp fluctuate like this for days now, never finding a sweet spot to settle into, which I would expect it to, since the card does cool down once the engine drops to a certain point.

My suggestion would be a change in behavior to auto-fan and auto-gpu...

for auto-fan: allow the fan speed to rise as quickly as it needs to, but never lower it by more than a single step at a time.

for auto-gpu: allow the engine speed to fall as quickly as it needs to, but never raise it by more than a single step at a time.

It should never go directly to the maximum overclock speed, but get there gradually.

Making them adjust more slowly in the direction that would raise the temp of the card gives the sensors more time to pick up the heat being built up, since heat isn't an instant indicator and there is some lag time for it to propagate; but don't limit how fast the card can react when the temp needs to be cut down.

The rationale is that while the sensor may think the card's chips are at 80-85 degrees at that instant, the chips may actually be at 90-95 and the heat just hasn't reached the sensor yet, giving them an even more pronounced heat/cool cycle than they should have.
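Something like this is what I have in mind (made-up names and step sizes, not cgminer's internals): the change is only rate-limited in the direction that adds heat.
Code:
/* Sketch of the suggested asymmetric stepping (hypothetical names and
 * step sizes, not cgminer's actual auto-fan/auto-gpu code).
 * Fan: may jump up as far as needed, but drops at most one step per pass.
 * Engine: may drop as far as needed, but rises at most one step per pass. */
#include <stdio.h>

#define FAN_STEP     5   /* percent per adjustment pass */
#define ENGINE_STEP 25   /* MHz per adjustment pass */
#define FAN_MAX     85
#define ENGINE_MIN 600
#define ENGINE_MAX 825

static int adjust_fan(int current, int wanted)
{
    if (wanted >= current)                        /* rising: go straight there */
        return wanted > FAN_MAX ? FAN_MAX : wanted;
    /* falling: at most one step down per pass */
    return (current - FAN_STEP > wanted) ? current - FAN_STEP : wanted;
}

static int adjust_engine(int current, int wanted)
{
    if (wanted <= current)                        /* falling: go straight there */
        return wanted < ENGINE_MIN ? ENGINE_MIN : wanted;
    /* rising: at most one step up per pass */
    return (current + ENGINE_STEP < wanted) ? current + ENGINE_STEP : wanted;
}

int main(void)
{
    /* temp came in under target, so the controller wants fan 30 and engine 825 */
    int fan    = adjust_fan(50, 30);          /* drops only to 45, not straight to 30 */
    int engine = adjust_engine(600, 825);     /* rises only to 625, not straight to 825 */

    printf("fan %d%%, engine %d MHz\n", fan, engine);
    return 0;
}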

-- Smoov
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Hey guys.  I turned the help text files into one HTML help file.

Let me know what you think!

http://test.the47.net/cgminer/readme.html
You'll need to find any '<' and '>' and replace them with &lt; and &gt;
in:
 API-README
 FPGA-README
 README
 linux-usb-cgminer
 windows-build.txt

Oh, also any '&' with &amp;
 in README and API-README
full member
Activity: 234
Merit: 114
Hi,

I had the same problems on my Windows system too.
Running a ZTex 1.15x with CGMiner results in the following.



Where do all the HW errors come from? It's no real problem, the hashrate is shown correctly in CGMiner and on the pool I am mining on. But it's not nice to see these errors counting up. ;)
sr. member
Activity: 322
Merit: 250
Hey guys.  I turned the help text files into one HTML help file.

Let me know what you think!

http://test.the47.net/cgminer/readme.html

Looks good! Cept the FAQ (point 10) link isn't working.

Doh! Will fix asap.

EDIT: Fixed
legendary
Activity: 952
Merit: 1000
Hey guys.  I turned the help text files into one HTML help file.

Let me know what you think!

http://test.the47.net/cgminer/readme.html

Looks good! Cept the FAQ (point 10) link isn't working.
sr. member
Activity: 322
Merit: 250
Hey guys.  I turned the help text files into one HTML help file.

Let me know what you think!

http://test.the47.net/cgminer/readme.html
sr. member
Activity: 322
Merit: 250

...
There is no 'many' coz there are no implementations of any of the crap that you seem to think is better in GBT.
...
So yes GBT sux and no matter how often you make these deceptive comments, GBT is still crap.


GBT just wants to be friends.  :'(
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Also I note Luke actually makes less-than-1Diff shares only send >=1Diff shares, so those could have losses.
It's not possible to send less than 1diff shares (except for CPU/GPU).
Ah ... OK ... yes to put your vague stupid statement into reality ... it's possible for all devices.
The issue at the moment is that no BTC-FPGA bitstreams have been written that support <1 difficulty.
All the GPU ocl code that I have seen also only supports 1 difficulty.
However, there is no actual need for < 1 difficulty with BTC.
If there was, you'd just need to write a new OCL for the GPUs or a new bitstream for the FPGAs

However, for all Scrypt mining, it is of course possible and mandatory to support < 1 difficulty.
i.e. it does already.

Luke used the laziest way to fix it. A Compliant answer "Should" actually only send what the difficulty is.
Perhaps it is a bit lazy, but it works. Unfortunately, since stratum is representing the target as bdiff, it is impossible for the miner to know exactly what the target is. (Admittedly, it would be possible to get a closer guess of course.)
Look, just cos you're a piece of crap programmer doesn't mean you need to tell everyone you are, here in the cgminer thread.
Go do that else where.

Fractional difficulty is simple.
The only issue is that maybe one in a few billion-trillion shares might be rejected that are valid.
... and they wouldn't ever be a block either.
So who gives a damn if it is that rare to ever lose a share?
i.e. stop writing crappy hack code and do it properly for once.

My second suggestion posted shortly after the one I am quoting lower and before Luke quoted me wondered why a client couldn't ask for or submit a higher difficulty work unit. Say the pool asked for 1.09 and your miner only uses ints why not send diff 2 shares but before you do negotiate with the server for pay based on diff 2. Still runs ints but pays more per unit and requires less network traffic.
Just one of the many things GBT supports but stratum doesn't (yet?).
More FUD - fuck you like to spread it around like shit.
There is no 'many' coz there are no implementations of any of the crap that you seem to think is better in GBT.
Can you name even ONE that is implemented and used anywhere?
And if you can find ONE, then maybe consider listing this delusional many you keep having wet dreams about.

You've been told what is wrong with GBT and ignored it.
So yes GBT sux and no matter how often you make these deceptive comments, GBT is still crap.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Updated with support for fractional diffs and diffs just below 1.
legendary
Activity: 2576
Merit: 1186
Also I note Luke actually makes less-than-1Diff shares only send >=1Diff shares, so those could have losses.
It's not possible to send less than 1diff shares (except for CPU/GPU).

Luke used the laziest way to fix it. A Compliant answer "Should" actually only send what the difficulty is.
Perhaps it is a bit lazy, but it works. Unfortunately, since stratum is representing the target as bdiff, it is impossible for the miner to know exactly what the target is. (Admittedly, it would be possible to get a closer guess of course.)

My second suggestion posted shortly after the one I am quoting lower and before Luke quoted me wondered why a client couldn't ask for or submit a higher difficulty work unit. Say the pool asked for 1.09 and your miner only uses ints why not send diff 2 shares but before you do negotiate with the server for pay based on diff 2. Still runs ints but pays more per unit and requires less network traffic.
Just one of the many things GBT supports but stratum doesn't (yet?).
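For anyone following the bdiff point above: the share target is roughly the bdiff-1 target (0x00000000FFFF followed by 52 zero hex digits, i.e. 65535 * 2^208) divided by the difficulty, and because the difficulty is only an approximate floating-point stand-in for that 256-bit target, a miner can only estimate it. A minimal sketch, not cgminer's actual implementation:
Code:
/* Sketch only: approximate a share target from a bdiff-style difficulty.
 * The bdiff-1 target is 0x00000000FFFF << 208, i.e. 65535 * 2^208, and the
 * share target is roughly that divided by the difficulty. Going through a
 * floating-point difficulty is exactly why only an approximation is possible. */
#include <math.h>
#include <stdio.h>

static double diff_to_target(double diff)
{
    return ldexp(65535.0, 208) / diff;   /* 65535 * 2^208 / diff */
}

int main(void)
{
    const double diffs[] = { 0.5, 1.0, 1.09, 2.0 };

    for (int i = 0; i < 4; i++)
        printf("diff %5.2f -> target ~ %.6e\n", diffs[i], diff_to_target(diffs[i]));
    return 0;
}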
hero member
Activity: 675
Merit: 514
On my system it looks about the same:
Code:
cgminer (1).exe caused an Access Violation at location 0040cc8e in module cgminer (1).exe Reading from location 00000000.

Registers:
eax=00000000 ebx=00000000 ecx=00000000 edx=00000000 esi=00000000 edi=01e4f908
eip=0040cc8e esp=02b6f3d0 ebp=01e73df8 iopl=0         nv up ei pl zr na po nc
cs=0023  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00010246

Call stack:
0040CC8E  cgminer (1).exe:0040CC8E
77D2FA19  ntdll.dll:77D2FA19  RtlAnsiCharToUnicodeChar
77D2FAF6  ntdll.dll:77D2FAF6  RtlAnsiCharToUnicodeChar
77D30093  ntdll.dll:77D30093  LdrGetDllHandleEx
77D2FD2F  ntdll.dll:77D2FD2F  LdrGetDllHandle
76D81A35  KERNELBASE.dll:76D81A35  GetModuleFileNameW
76DA734E  KERNELBASE.dll:76DA734E  IsNLSDefinedString
76D81CFB  KERNELBASE.dll:76D81CFB  GetModuleFileNameW
75733362  kernel32.dll:75733362  RegKrnInitialize
This is with Windows 7
legendary
Activity: 1540
Merit: 1001
Thanks. That is quite different actually. I've uploaded another set of executables into that temporary directory which may get us even further. Can you try them please? There is no way on earth it will compile on VS.

This is more like what I was expecting.  The debug version is much much larger than the production one.  But the output looks about the same...

Code:
cgminerdebug.exe caused an Access Violation at location 0040cb33 in module cgminerdebug.exe Reading from location 00000000.

Registers:
eax=00000000 ebx=624836d0 ecx=74712e09 edx=07c51858 esi=00000000 edi=00770ea0
eip=0040cb33 esp=03fff3d0 ebp=00801480 iopl=0         nv up ei pl nz na pe nc
cs=0023  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00010202

Call stack:
0040CB33  cgminerdebug.exe:0040CB33
7713FAF6  ntdll.dll:7713FAF6  RtlAnsiCharToUnicodeChar
77140093  ntdll.dll:77140093  LdrGetDllHandleEx
7713FD2F  ntdll.dll:7713FD2F  LdrGetDllHandle
76641A35  KERNELBASE.dll:76641A35  GetModuleFileNameW
7666734E  KERNELBASE.dll:7666734E  IsNLSDefinedString
76641CFB  KERNELBASE.dll:76641CFB  GetModuleFileNameW
7666734E  KERNELBASE.dll:7666734E  IsNLSDefinedString
76641CFB  KERNELBASE.dll:76641CFB  GetModuleFileNameW
74AF3362  kernel32.dll:74AF3362  RegKrnInitialize

How about Watcom?  I can dust it off and see if I can get it to run there.

M