
Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0 - page 752. (Read 5805728 times)

donator
Activity: 1218
Merit: 1079
Gerald Davis
Is there a way to tell cgminer to stop running after x minutes?

If it can't be done in cgminer, is there a way via a script in Linux to tell cgminer to shut down and then restart after x minutes?

The reason I ask is that sometimes cgminer is unable to restart a dead miner.  It happens rarely, but with 12 GPUs it does happen, and when I fail to detect it hours can pass and that is wasted time.  What I have found is that when a GPU "dies", stopping cgminer and restarting it from the terminal fixes the problem 100% of the time.

So if I could have cgminer run for 60 minutes and then quit, I could set up a script that simply restarts cgminer every time it quits, which would likely give me nearly 100% uptime with little need for monitoring.
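For what it's worth, a minimal sketch of the wrapper-script approach (the 60-minute interval, the example pool URL and the worker credentials are all placeholders, and it assumes the coreutils "timeout" command is available):

Code:
#!/bin/sh
# Hypothetical restart loop: kill cgminer after 60 minutes, then start it again.
while true; do
    timeout 3600 ./cgminer -o http://pool.example.com:8332 -u worker -p pass
    sleep 5   # short pause so a crash doesn't turn this into a tight loop
done

Since every restart is a fresh process, it should behave the same as stopping cgminer and starting it again from the terminal by hand.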
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Well something also went badly wrong at 2011-10-30 22:54:42
(repeating the stats over and over again ...)

Is something you linked against no good?
Coz yeah, 4.9Mh/s is something like what a CPU should get Smiley

Also ensure you are running everything with default options except maybe add "-I 7"

Are the cards definitely OK, and are you sure they are 5830s?

What does aticonfig tell you?
(sudo aticonfig --lsa)

Mine looks like this:
* 0. 01:00.0 AMD Radeon HD 6900 Series
  1. 02:00.0 AMD Radeon HD 6900 Series

* - Default adapter

...

The three options I'd guess are:
1) OpenCL/other software issue with that version of Linux
2) Not 5830
3) Faulty card
newbie
Activity: 78
Merit: 0
Can you use something like http://pastebin.com/ to show us the whole output of ./configure from when you built cgminer?  Thanks!

http://pastebin.com/ZwhuEG6r

there goes

OK, that covers the obvious problems (none of which are there).  The next step would be:
./cgminer .. options to connect to a pool ... --verbose --text-only --shares 1 > debug.log 2>&1, pastebin it once it completes, and wait for someone with more depth to figure out what's going on.

Ghs / Mhs: yes, my mind was playing tricks on me.

---

Code:
./cgminer -n
1 GPU devices detected


Pastebin of the log:
http://pastebin.com/BDFyPkjk
full member
Activity: 235
Merit: 100
Hi there

All went well, but for some reason, it mines at 4MHASH/s on my 5830.

it used to produce 300Ghash/s when overclocked to 975, and it still does it on phoenix.

Wow,
If you can tell me how I can get 300GHash/s out of my 5830 I'll give you all my bitcoins. Smiley
Sam
I have 6 of them in one machine mining and all producing 296-304 MH/s.
I have the cores clocked up to 950 on the 304 MH/s ones, 930 for the 296 MH/s one,
memory clocked at 310.

You'd need 1,000 to get the amount he accidentally typed Smiley
Of course he meant 300Mh/s (not 300Gh/s)
Didn't see the Gh/s typo, although we know it's meant to be Mh/s and it's just that, a typo, IMHO.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Hi there

All went well, but for some reason, it mines at 4MHASH/s on my 5830.

it used to produce 300Ghash/s when overclocked to 975, and it still does it on phoenix.

Wow,
If you can tell me how I can get 300GHash/s out of my 5830 I'll give you all my bitcoins. Smiley
Sam
I have 6 of them in one machine mining and all producing 296-304 MH/s.
I have the cores clocked up to 950 on the 304 MH/s ones, 930 for the 296 MH/s one,
memory clocked at 310.

You'd need 1,000 to get the amount he accidentally typed Smiley
Of course he meant 300Mh/s (not 300Gh/s)
legendary
Activity: 3583
Merit: 1094
Think for yourself
Hi there

All went well, but for some reason, it mines at 4MHASH/s on my 5830.

it used to produce 300Ghash/s when overclocked to 975, and it still does it on phoenix.

Wow,
If you can tell me how I can get 300GHash/s out of my 5830 I'll give you all my bitcoins. Smiley
Sam
I have 6 of them in one machine mining and all producing 296-304 MH/s.
I have the cores clocked up to 950 on the 304 MH/s ones, 930 for the 296 MH/s one,
memory clocked at 310.


Yep, megahashes, not gigahashes.  I was just being a smart a--.  Sorry, not really productive.
Sam
full member
Activity: 235
Merit: 100
Hi there

All went well, but for some reason, it mines at 4MHASH/s on my 5830.

it used to produce 300Ghash/s when overclocked to 975, and it still does it on phoenix.

Wow,
If you can tell me how I can get 300GHash/s out of my 5830 I'll give you all my bitcoins. Smiley
Sam
I have 6 of them in one machine mining and all producing 296-304 MH/s.
I have the cores clocked up to 950 on the 304 MH/s ones, 930 for the 296 MH/s one,
memory clocked at 310.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Can you use something like http://pastebin.com/ to show us the whole output of ./configure from when you built cgminer?  Thanks!

http://pastebin.com/ZwhuEG6r

there goes
Well, looking at the actual figures, my guess is you started CPU mining instead of GPU mining ... or cgminer couldn't detect the GPUs.

The command "./cgminer -n" will tell you if it can see any GPUs.

If it can't, well, there are things like ... is the window manager running on the machine?
Is your DISPLAY correct and does it have access?
Does aticonfig see the cards?

(... and other such things listed in: ... linux-usb-cgminer)
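As a rough illustration of those checks from a terminal on the mining box (the ":0" display number is just an assumption, use whichever display the cards are attached to):

Code:
export DISPLAY=:0        # point at the X display the cards are attached to
sudo aticonfig --lsa     # does the ATI driver see the adapters?
./cgminer -n             # does cgminer detect any GPUs?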
legendary
Activity: 3583
Merit: 1094
Think for yourself
Hi there

All went well, but for some reason, it mines at 4MHASH/s on my 5830.

it used to produce 300Ghash/s when overclocked to 975, and it still does it on phoenix.

Wow,
If you can tell me how I can get 300GHash/s out of my 5830 I'll give you all my bitcoins. Smiley
Sam
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
At high intensity levels, the time spent stuck in GPU code is in the order of SECONDS, not microseconds. The faster the GPU, the shorter it is, but at intensity 9 or 10 a 200Mhash card could actually be working for more than 5 seconds on each iteration into the GPU. It takes less time when there is only one thread per GPU, but then the hash rate drops off slightly. And NO there is NOT a way to interrupt a GPU once it has started working on the openCL code. While the worksize is somewhere between 64 and 256, the actual requested work every time the GPU is loaded is up to 2^20 iterations (at intensity 10). There is no way to interrupt it. It is not like doing something on a CPU. Faster cards won't take long to return even at high intensity levels, but basically, any shares discovered during this time in the GPU do NOT get returned until the GPU has finished its 2^20 iterations. That's just the way opencl kernel code works. The GPU takes its work and runs off and does it independently of anything else going on in your PC and then only returns answers once it's done. So there is work "wasted" here if it starts just before a longpoll, goes out for say 5 seconds and finds a share in that time. It is then obliged to discard it since cgminer now says that work is no longer valid for the current block of work unless you enable the --submit-stale option.

2^20 iterations is 1,048,576 hashes, right?  If we consider the lower and upper bounds for a modern GPU to be 100 MH/s and 500 MH/s, that is 0.01 s down to 0.002 s.

Not sure how a GPU can take full seconds to finish.  Wouldn't that cause system instability?  I mean the GPU is unusable for other tasks while the OpenCL kernel is running.

2^20 / 2^32 ≈ 0.024%, thus each interrupted "cycle" reduces EV (expected value) by about 0.00024 shares.*
*Granted, each individual iteration will lose either 1 share or 0 shares, but the EV is still a fractional share.

Per GPU 'thread' ...

I can understand people telling me I'm wrong ... what would I know Cheesy

But you gotta realise that if you are talking to the person who wrote the program and you get a different answer - you've made a mistake.
newbie
Activity: 47
Merit: 0
Can you use something like http://pastebin.com/ to show us the whole output of ./configure from when you built cgminer?  Thanks!

http://pastebin.com/ZwhuEG6r

there goes

OK, that covers the obvious problems (none of which are there).  The next step would be:
./cgminer .. options to connect to a pool ... --verbose --text-only --shares 1 > debug.log 2>&1, pastebin it once it completes, and wait for someone with more depth to figure out what's going on.
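For reference, a fully spelled-out form of that command might look something like this (the pool URL, worker name and password are placeholders, not anything from this thread):

Code:
./cgminer -o http://pool.example.com:8332 -u worker -p pass --verbose --text-only --shares 1 > debug.log 2>&1

It should exit on its own after the requested share is accepted, so debug.log stays small enough to pastebin.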
newbie
Activity: 78
Merit: 0
Can you use something like http://pastebin.com/ to show us the whole output of ./configure from when you built cgminer?  Thanks!

http://pastebin.com/ZwhuEG6r

there goes
donator
Activity: 1218
Merit: 1079
Gerald Davis
At high intensity levels, the time spent stuck in GPU code is in the order of SECONDS, not microseconds. The faster the GPU, the shorter it is, but at intensity 9 or 10 a 200Mhash card could actually be working for more than 5 seconds on each iteration into the GPU. It takes less time when there is only one thread per GPU, but then the hash rate drops off slightly. And NO there is NOT a way to interrupt a GPU once it has started working on the openCL code. While the worksize is somewhere between 64 and 256, the actual requested work every time the GPU is loaded is up to 2^20 iterations (at intensity 10). There is no way to interrupt it. It is not like doing something on a CPU. Faster cards won't take long to return even at high intensity levels, but basically, any shares discovered during this time in the GPU do NOT get returned until the GPU has finished its 2^20 iterations. That's just the way opencl kernel code works. The GPU takes its work and runs off and does it independently of anything else going on in your PC and then only returns answers once it's done. So there is work "wasted" here if it starts just before a longpoll, goes out for say 5 seconds and finds a share in that time. It is then obliged to discard it since cgminer now says that work is no longer valid for the current block of work unless you enable the --submit-stale option.

2^20 iterations is 1,048,576 hashes, right?  If we consider the lower and upper bounds for a modern GPU to be 100 MH/s and 500 MH/s, that is 0.01 s down to 0.002 s.

Not sure how a GPU can take full seconds to finish.  Wouldn't that cause system instability?  I mean the GPU is unusable for other tasks while the OpenCL kernel is running.

2^20 / 2^32 ≈ 0.024%, thus each interrupted "cycle" reduces EV (expected value) by about 0.00024 shares.*
*Granted, each individual iteration will lose either 1 share or 0 shares, but the EV is still a fractional share.
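Just to spell that arithmetic out, checked with bc (same numbers as above):

Code:
echo 'scale=4; 2^20 / (100 * 10^6)' | bc   # .0104 s per batch at 100 MH/s
echo 'scale=4; 2^20 / (500 * 10^6)' | bc   # .0020 s per batch at 500 MH/s
echo 'scale=6; 2^20 / 2^32' | bc           # .000244 shares of EV per interrupted batch (~0.024%)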
newbie
Activity: 47
Merit: 0
Can you use something like http://pastebin.com/ to show us the whole output of ./configure from when you built cgminer?  Thanks!
newbie
Activity: 78
Merit: 0
Hi there

I reinstalled my Linux home server, this time using x86 instead of x64.

Since you only provide the x64 binary I had to compile it myself.

I'm using Debian, and on x86 the ATI installer for the drivers had SOME issue, dunno which or why, but it would not install completely.

I had to first install from Synaptic, and then force-install the 11.6 ATI drivers with the installer (Synaptic had 10.9).

I installed those because I found that 11.6 + the 2.4 SDK is best for CPU utilization.

Now, I installed the SDK, pyopencl and all the stuff I needed for phoenix (just to make sure I could GPU mine, since I had never built cgminer myself).

Then I proceeded to build CGMINER with ADL support.

All went well, but for some reason, it mines at 4MHASH/s on my 5830.

it used to produce 300Ghash/s when overclocked to 975, and it still does it on phoenix.

Any idea what might have gone wrong?

thx
legendary
Activity: 1862
Merit: 1011
Reverse engineer from time to time
I agree with Kano with all my 3 hands up
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
At high intensity levels, the time spent stuck in GPU code is in the order of SECONDS, not microseconds. The faster the GPU, the shorter it is, but at intensity 9 or 10 a 200Mhash card could actually be working for more than 5 seconds on each iteration into the GPU. It takes less time when there is only one thread per GPU, but then the hash rate drops off slightly. And NO there is NOT a way to interrupt a GPU once it has started working on the openCL code. While the worksize is somewhere between 64 and 256, the actual requested work every time the GPU is loaded is up to 2^20 iterations (at intensity 10). There is no way to interrupt it. It is not like doing something on a CPU. Faster cards won't take long to return even at high intensity levels, but basically, any shares discovered during this time in the GPU do NOT get returned until the GPU has finished its 2^20 iterations. That's just the way opencl kernel code works. The GPU takes its work and runs off and does it independently of anything else going on in your PC and then only returns answers once it's done. So there is work "wasted" here if it starts just before a longpoll, goes out for say 5 seconds and finds a share in that time. It is then obliged to discard it since cgminer now says that work is no longer valid for the current block of work unless you enable the --submit-stale option.
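If you do want those shares submitted anyway, the option mentioned above is simply added to the normal command line (the pool details here are placeholders):

Code:
./cgminer -o http://pool.example.com:8332 -u worker -p pass --submit-stale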
donator
Activity: 1218
Merit: 1079
Gerald Davis
In just reading Kano's posts, I tend to expect them to be wrong, and keep my mouth shut because I don't want to instigate a flame war or argue about them.  However, the original post about work being "held" does make sense.  If it is true, it isn't being held by the miner, it is being held by the GPU until the work submitted to it (as a group of whatever) is complete.

While that is true, the number of hashes performed before the GPU returns results is relatively tiny.  It is based on the aggression value (or intensity for cgminer).  Even at max aggression this is a fraction of a second, at most a tiny fraction of a share (in expected value).  Combined with the fact that long polls occur relatively infrequently, this is a rounding error in performance.

So if that is what he means he is "correct", but the real-world performance impact is minimal.  I also have to check the source code of the kernel; I believe (but need to verify) that even while the GPU continues to run the nonce range, it returns shares as they are found via callback.  If that is true then there is no performance loss, not even a negligible amount.
hero member
Activity: 807
Merit: 500
No you are just wrong.  While there can be more than one share per nonce range the miner submits shares as it discovers them so at any point in time there is no such thing as incomplete work.  You are 100% wrong if you think a miner holds onto shares before submitting them. Even without merged mining that would be a flawed implementation because at any point a block could be found and then the shares not submitted are stale.

A miner doesn't hold onto shares as there is no value in doing so.  At any point in time there is no such thing as incomplete work.  The next hash is completely independent of prior work.
In just reading Kano's posts, I tend to expect them to be wrong, and keep my mouth shut because I don't want to instigate a flame war or argue about them.  However, the original post about work being "held" does make sense.  If it is true, it isn't being held by the miner, it is being held by the GPU until the work submitted to it (as a group of whatever) is complete.  If the GPU holds it and the miner doesn't know it's there, it can't submit it.

You have an option on worksize, and it is more efficient to run a different worksize on one GPU vs another, presumably because it consumes cycles to deliver work to the GPU and accept completion from the GPU.  You could call this broken, but if you can't get or cancel the half-completed block of work from the GPU, that doesn't imply a coding problem, as hardware can also have constraints and GPUs weren't designed specifically to mine.

Anyway, suppose for a minute that you actually can't get or cancel the half-completed block of work.  Perhaps you could have a worksize of one to resolve this problem, but your efficiency would drop so badly from all the extra cycles that you're far better off getting a group of work back that ends up being stale every now and again than losing those cycles with each piece of work, so why would you want to do that?

I don't have any clue how all of this stuff actually works and have never really dealt with code, but this is one of the few arguments I have seen that makes any sense (although it doesn't matter, as CGMiner supporting merged mining or properly supporting longpolling [whatever you want to call the behavior] has absolutely nothing to do with how a pool behaves, and any pool that would completely reject a share that is still valid for the current blockchain is certainly broken and would still mean that CGMiner SHOULD accept the longpoll).
donator
Activity: 1218
Merit: 1079
Gerald Davis
However, the actual issue is what happens to any incomplete (but started) work occurring at the time of the LP?
Discarding it is actually discarding work that is valid in all cases except on a Bitcoin new block LP.
Submitting it and then getting a 'stale' response is just as bad.
This is work you have done that would be valid if not for merged mining.
i.e. back to what I said about it earlier on ...

There is no such thing as incomplete but started work.  You don't make progress in mining.

Either you have a valid share or you don't.
If you don't then nothing has been lost.
If you do then submit it.

There is no concept of progress.  Each hash is completely independent and on average takes 1/300,000,000th of a second to complete.
Again as I have ALREADY said above, each hash is NOT independent.
The GPU does a set of hashes and returns the results for that set.
That set can contain one or more shares and those shares could be deemed invalid/stale by a pool under the circumstances I have said above and not invalid/stale by the same pool if it was not merged mining.
You seem to have missed the point of how a GPU miner program actually does work.

No you are just wrong.  While there can be more than one share per nonce range the miner submits shares as it discovers them so at any point in time there is no such thing as incomplete work.  You are 100% wrong if you think a miner holds onto shares before submitting them. Even without merged mining that would be a flawed implementation because at any point a block could be found and then the shares not submitted are stale.

A miner doesn't hold onto shares as there is no value in doing so.  At any point in time there is no such thing as incomplete work.  The next hash is completely independent of prior work.