Author

Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0 - page 751. (Read 5805728 times)

legendary
Activity: 3583
Merit: 1094
Think for yourself
If you have more than one GPU and all your monitors hooked up to a single GPU, you can set the "monitor" GPU to dynamic and all other cards to a high I value.  For example, my 3x5970 workstation has 1 GPU set to D and the other 5 set to I=9.  If you have multiple monitors, hook them all up to the same GPU to minimize the number of GPUs that need to be set to dynamic.

That doesn't hold true for Windoze, does it?  Since I have to extend the desktop to the GPU without the monitor to enable it, whatever I do to that GPU seems to have an effect on the desktop GUI as well.
Sam

With new(er) drivers you shouldn't need any tricks like extending desktops or using dummy plugs.  My Win7 workstation has 3 monitors, all on a single GPU.  None of the other GPUs have any desktops extended to them.  I can't remember which driver version got rid of the need for that, but it has been a long time.

Hmm, I'm using Catalyst 11.6 and tried it with that version and also with 11.7, and neither could do it; plus 11.7 has the high CPU utilization issue.

If you get a chance, take a peek at which Catalyst version you're using and drop me a note, and I'll give it a whirl.

Thanks,
Sam
sr. member
Activity: 462
Merit: 250
I heart thebaron
If you have more than one GPU and all your monitors hooked up to a single GPU, you can set the "monitor" GPU to dynamic and all other cards to a high I value.  For example, my 3x5970 workstation has 1 GPU set to D and the other 5 set to I=9.  If you have multiple monitors, hook them all up to the same GPU to minimize the number of GPUs that need to be set to dynamic.

That doesn't hold true for Windoze, does it?  Since I have to extend the desktop to the GPU without the monitor to enable it, whatever I do to that GPU seems to have an effect on the desktop GUI as well.
Sam

With new(er) drivers you shouldn't need any tricks like extending desktops or using dummy plugs.  My Win7 workstation has 3 monitors, all on a single GPU.  None of the other GPUs have any desktops extended to them.  I can't remember which driver version got rid of the need for that, but it has been a long time.
I was just about to post the same thing...lol
No 'extending' required since 11.7
One GPU shows up in display properties, but all 4 are seen and used by CGMiner in my quad-card setups.
No more dummy plugs either.....
donator
Activity: 1218
Merit: 1079
Gerald Davis
If you have more than one GPU and all your monitors hooked up to a single GPU, you can set the "monitor" GPU to dynamic and all other cards to a high I value.  For example, my 3x5970 workstation has 1 GPU set to D and the other 5 set to I=9.  If you have multiple monitors, hook them all up to the same GPU to minimize the number of GPUs that need to be set to dynamic.

That doesn't hold true for Windoze, does it?  Since I have to extend the desktop to the GPU without the monitor to enable it, whatever I do to that GPU seems to have an effect on the desktop GUI as well.
Sam

With new(er) drivers you shouldn't need any tricks like extending desktops or using dummy plugs.  My Win7 workstation has 3 monitors, all on a single GPU.  None of the other GPUs have any desktops extended to them.  I can't remember which driver version got rid of the need for that, but it has been a long time.
legendary
Activity: 3583
Merit: 1094
Think for yourself
If you have more than one GPU and all your monitors hooked up to a single GPU, you can set the "monitor" GPU to dynamic and all other cards to a high I value.  For example, my 3x5970 workstation has 1 GPU set to D and the other 5 set to I=9.  If you have multiple monitors, hook them all up to the same GPU to minimize the number of GPUs that need to be set to dynamic.

That doesn't hold true for Windoze, does it?  Since I have to extend the desktop to the GPU without the monitor to enable it, whatever I do to that GPU seems to have an effect on the desktop GUI as well.
Sam
donator
Activity: 1218
Merit: 1079
Gerald Davis
It was suggested that I=8 be used for 5xxx series ATI cards, while I=9 be used for the 69xx series cards.

After a few stalls here and there I did some playing around, and found that for my quad-6950 rig I actually get a higher share/min rate (the "U" value) when I use I=8 rather than the suggested I=9.....and no GPU stalling or driver crashing, even with a much higher overclock.

Is I=9 still suggested for 69xx cards ?

There is no hard and fast rule because there are a lot of variables.  Essentially, Intensity determines how big a chunk of hashes the GPU processes at one time.  More hashes at once = less overhead; however, when a block changes it means more wasted hashes, because the GPU can't be stopped once it starts a "run".

ckolivas can correct me if I am wrong, but the I rating is based more on the hashing power of the card.  The more hashes you can complete in 1 second, the larger the optimal "batch" should be = higher I value.  However, other factors can alter that too.  If your pool is issuing a higher number of longpolls (LP), a lower I value reduces the number of potential stales.  So really it is a balancing act between stales & raw hashing power.
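To put rough numbers on that balancing act, here is a back-of-envelope sketch in shell, assuming a run is 2^(15+I) hashes (which matches ckolivas's intensity-10 = 2^25 figure elsewhere in the thread); the 350 MH/s speed is just an illustrative GPU, not a measurement:

Code:
# Hashes per run at I=9, assuming run size = 2^(15+I)
I=9
RUN=$((1 << (15 + I)))
echo "$RUN"                             # 16777216 hashes
# Time a ~350 MH/s GPU spends on one run:
echo "scale=3; $RUN / 350000000" | bc   # ~0.048 s
# Expected shares lost if a longpoll interrupts a run mid-flight
# (one share is 2^32 hashes on average):
echo "scale=6; $RUN / 2^32" | bc        # ~0.0039 shares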

Quote
I am currently still using 2.0.6 and love it......insane performance. I pretty much use I=8 for everything, except for working workstations, where I use Dynamic Intensity, and it all works great.

If you have more than one GPU and all your monitors hooked up to a single GPU, you can set the "monitor" GPU to dynamic and all other cards to a high I value.  For example, my 3x5970 workstation has 1 GPU set to D and the other 5 set to I=9.  If you have multiple monitors, hook them all up to the same GPU to minimize the number of GPUs that need to be set to dynamic.
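As a concrete sketch of that setup: cgminer's -I flag accepts a comma-separated per-GPU list, with "d" for dynamic, so a 6-GPU rig driving its monitors from the first GPU might be launched like this (the pool URL and credentials are placeholders):

Code:
# First GPU (driving the monitors) dynamic, the other five at intensity 9
cgminer -o http://pool.example.com:8332 -u worker -p pass -I d,9,9,9,9,9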

legendary
Activity: 3583
Merit: 1094
Think for yourself
They drove a dumptruck full of money and unloaded it on my front lawn. I'm not made of stone. It's not quite true, but some people are still donating (thanks!!), and the alleged other uses for longpoll on the same block were valid.

I'm using --donate 5 right now.  Please don't confuse that with supporting Merged Mining, because I don't.
Thanks for your great work.
Sam
newbie
Activity: 73
Merit: 0
Goddamnit, I was still thinking of when cgminer supported intensities up to 14, which would put a 5770 out for 5 seconds at a time. Of course it only supports up to intensity 10 now, because anything higher proved to be a waste of time: it didn't improve throughput, and did cause nice stalls at 14 - plus people tend to just set the top value thinking it will definitely be better. So yes, you're right, at intensity 10 it's not much time/lost work. I had forgotten exactly how much it was, but intensity 10 is 2^25, just for the record.

I still recall reading recommendations for cards and CGMiner's intensity value a while back.....

It was suggested that I=8 be used for 5xxx series ATI cards, while I=9 be used for the 69xx series cards.

After a few stalls here and there I did some playing around, and found that for my quad-6950 rig I actually get a higher share/min rate (the "U" value) when I use I=8 rather than the suggested I=9.....and no GPU stalling or driver crashing, even with a much higher overclock.

I am currently still using 2.0.6 and love it......insane performance. I pretty much use I=8 for everything, except for working workstations, where I use Dynamic Intensity, and it all works great.

Is I=9 still suggested for 69xx cards ?

Absolutely the same here :) I get more using crossfired 6990s at I=8... thinking about I=7 ;) Besides, I=8 gives me less heat and noise.
sr. member
Activity: 462
Merit: 250
I heart thebaron
Goddamnit, I was still thinking of when cgminer supported intensities up to 14, which would put a 5770 out for 5 seconds at a time. Of course it only supports up to intensity 10 now, because anything higher proved to be a waste of time: it didn't improve throughput, and did cause nice stalls at 14 - plus people tend to just set the top value thinking it will definitely be better. So yes, you're right, at intensity 10 it's not much time/lost work. I had forgotten exactly how much it was, but intensity 10 is 2^25, just for the record.

I still recall reading recommendations for cards and CGMiner's intensity value a while back.....

It was suggested that I=8 be used for 5xxx series ATI cards, while I=9 be used for the 69xx series cards.

After a few stalls here and there I did some playing around, and found that for my quad-6950 rig I actually get a higher share/min rate (the "U" value) when I use I=8 rather than the suggested I=9.....and no GPU stalling or driver crashing, even with a much higher overclock.

I am currently still using 2.0.6 and love it......insane performance. I pretty much use I=8 for everything, except for working workstations, where I use Dynamic Intensity, and it all works great.

Is I=9 still suggested for 69xx cards ?


* I should also add that I am using Windows 7 on all rigs (currently a mixture of 32- & 64-bit machines).
donator
Activity: 1218
Merit: 1079
Gerald Davis
Hmm, bitcoind needs to restrict the size of the coinbase and thus stop all merged mining in its tracks ...

I believe coinbase size is already restricted.  Of course, you do realize that the coinbase field is simply the "nice" way to do merged mining.  It doesn't interfere with or clutter the transaction list.  If there were no coinbase field, merged mining could still be done by creating a bogus transaction which contains a hash of the prior NMC block.

Quote
I think IXCoin is going to be the next merged mining coin if the alt threads are true.
The alt-coin users think a lot of things, most of which never come true.  I don't see any major pool making the back-end changes necessary to add IXCoin, which has no market depth or value.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
They drove a dumptruck full of money and unloaded it on my front lawn. I'm not made of stone. It's not quite true, but some people are still donating (thanks!!), and the alleged other uses for longpoll on the same block were valid.

Actually I agree with you, Kano: NMC ultimately only takes value from BTC. It has to come from somewhere. It was not the argument for merged mining that made me -indirectly- add support for it.

Plus, basically, there is only so much I can do from within my software to influence how people mine bitcoin.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Pity coz in my opinion nmc is a virus

All it does is devalue BTC by taking some BTC value for itself.
It's a crap DNS idea that still hasn't been implemented, doesn't work, and was dying.
Most people didn't want it - if they did want it, it wouldn't have been dying.

Pity it's not already dead.
Now it will linger on and do nothing but take value from BTC.

The shares are free yet have some tradeable value ... where does that value come from? BTC, of course.

Hmm, bitcoind needs to restrict the size of the coinbase and thus stop all merged mining in its tracks ...

I think IXCoin is going to be the next merged mining coin if the alt threads are true.

Great, IXCoin will be doing it soon too ........
c_k
donator
Activity: 242
Merit: 100
I saw your latest commit, awesome work ckolivas! cgminer reigns supreme again :D
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
2^20 iterations is 1,048,576 hashes, right?  If we consider the lower and upper bounds of a modern GPU to be 100 MH/s and 500 MH/s, that is 0.01 s down to 0.002 s per run.

Not sure how a GPU can take full seconds to finish.  Wouldn't that cause system instabilities?  I mean, the GPU is unusable for other tasks while an OpenCL kernel is running.

2^20/2^32 = 0.024%.  Thus each interrupted "cycle" reduces EV (expected value) by 0.00024 shares.*
*Granted, each individual iteration will either be 1 share lost or 0 shares lost, but the EV is still a fractional share.

Goddamnit, I was still thinking of when cgminer supported intensities up to 14, which would put a 5770 out for 5 seconds at a time. Of course it only supports up to intensity 10 now, because anything higher proved to be a waste of time: it didn't improve throughput, and did cause nice stalls at 14 - plus people tend to just set the top value thinking it will definitely be better. So yes, you're right, at intensity 10 it's not much time/lost work. I had forgotten exactly how much it was, but intensity 10 is 2^25, just for the record.
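A quick sanity check on those figures, assuming a 5770 somewhere in the rough neighborhood of 150-200 MH/s (an assumed speed, not a number from the thread):

Code:
echo "scale=2; 2^25 / 200000000" | bc   # I=10: ~0.17 s per run
echo "scale=2; 2^29 / 150000000" | bc   # old I=14: ~3.6 s per run, i.e. seconds-long stalls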
donator
Activity: 1218
Merit: 1079
Gerald Davis

While my "farm" is configurable using startup scripts, and monitoring is possible via ssh sessions, it is "clunky" even with just 5 rigs (30 GPUs). If I ever expanded, I think management would just become more of a pain in the ass.  I have to think there is a better way.  I am thinking of a single webpage interface on a host that shows the current status of each rig, plus options to restart and modify settings.  Aka BAMT, but based on Debian LXDE and using the vastly superior cgminer.

Sounds like a plan. Now make it happen :)
FWIW, I was in fact already looking for a way to remote-manage cgminer without needing SSH. I wanted to make a simple Android app to monitor and control my rigs. Web-based might be good enough, though.

Well, if I (or someone smarter than me) ever got JSON-RPC calls working, making those calls from an Android app wouldn't be too tough.  Pretty sure there are JSON libraries for Android.  It is pretty much: make call A (w/ values x, y, z), wait for the response, then do something based on the response.  An Android app would be pretty cool for multi-rig monitoring & control.
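For what it's worth, later cgminer versions did grow exactly this kind of remote interface: an API socket (enabled with --api-listen, TCP port 4028 by default) that accepts JSON commands. A minimal sketch of querying it from a shell, assuming the API is enabled and the rig's address (a placeholder here) is whitelisted:

Code:
# Ask a rig for its summary stats over cgminer's API (later versions);
# remote hosts also need to be permitted with --api-allow
echo -n '{"command":"summary"}' | nc 192.0.2.10 4028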
hero member
Activity: 518
Merit: 500

While my "farm" is configurable using startup scripts, and monitoring is possible via ssh sessions, it is "clunky" even with just 5 rigs (30 GPUs). If I ever expanded, I think management would just become more of a pain in the ass.  I have to think there is a better way.  I am thinking of a single webpage interface on a host that shows the current status of each rig, plus options to restart and modify settings.  Aka BAMT, but based on Debian LXDE and using the vastly superior cgminer.

Sounds like a plan. Now make it happen :)
FWIW, I was in fact already looking for a way to remote-manage cgminer without needing SSH. I wanted to make a simple Android app to monitor and control my rigs. Web-based might be good enough, though.
donator
Activity: 1218
Merit: 1079
Gerald Davis
Is there a way to tell cgminer to stop running after x minutes?

If it can't be done in cgminer, is there a method, via a script in Linux, to tell cgminer to shut down and then restart after x minutes?

The reason I ask is that sometimes cgminer is unable to restart a dead miner.  It happens rarely, but with 12 GPUs it does happen, and when I fail to detect it hours can pass, and that is wasted time.  What I have found is that when a GPU "dies", stopping cgminer and restarting it from the terminal fixes the problem 100% of the time.

So if I could have cgminer run for 60 minutes and then quit, I could set up a script that simply restarts cgminer every time it quits, which would likely give me nearly 100% uptime with little need for monitoring.

You could use a simple cron script to kill the process and restart it every hour.

I may do that as a short-term solution.  I think long term I may want to look at forking cgminer to provide quit-on-dead-GPU support, RPC calls to control it remotely, and a web interface.  I am not a big fan of JSON-RPC, but since most other Bitcoin tools & utilities seem to use it, I guess I would go that route.  Since cgminer is already complex enough, a branch/fork dedicated to remote operation might make more sense (while getting rid of CPU support and command-line options).

While my "farm" is configurable using startup scripts, and monitoring is possible via ssh sessions, it is "clunky" even with just 5 rigs (30 GPUs). If I ever expanded, I think management would just become more of a pain in the ass.  I have to think there is a better way.  I am thinking of a single webpage interface on a host that shows the current status of each rig, plus options to restart and modify settings.  Aka BAMT, but based on Debian LXDE and using the vastly superior cgminer.
hero member
Activity: 518
Merit: 500
Is there a way to tell cgminer to stop running after x minutes?

If it can't be done in cgminer, is there a method, via a script in Linux, to tell cgminer to shut down and then restart after x minutes?

The reason I ask is that sometimes cgminer is unable to restart a dead miner.  It happens rarely, but with 12 GPUs it does happen, and when I fail to detect it hours can pass, and that is wasted time.  What I have found is that when a GPU "dies", stopping cgminer and restarting it from the terminal fixes the problem 100% of the time.

So if I could have cgminer run for 60 minutes and then quit, I could set up a script that simply restarts cgminer every time it quits, which would likely give me nearly 100% uptime with little need for monitoring.

You could use a simple cron script to kill the process and restart it every hour.
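A hedged example of what such a cron entry could look like (the paths and the screen session name are placeholders; adjust to taste):

Code:
# crontab entry: at the top of every hour, kill cgminer, wait,
# then relaunch it detached inside a screen session
0 * * * * pkill cgminer; sleep 10; screen -dmS miner /home/miner/cgminer/cgminer -c /home/miner/cgminer.conf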
newbie
Activity: 78
Merit: 0
I compiled it again, and now it works :P

300 Gigawatts!
newbie
Activity: 78
Merit: 0
Well something also went badly wrong at 2011-10-30 22:54:42
(repeating the stats over and over again ...)

Is something you linked against no good?
Coz yeah, 4.9 MH/s is something like what a CPU should get :)

Also ensure you are running everything with default options except maybe add "-I 7"

Are the cards certainly OK, and are you sure they are 5830s?

What does aticonfig tell you?
(sudo aticonfig --lsa)

Mine looks like this:
* 0. 01:00.0 AMD Radeon HD 6900 Series
  1. 02:00.0 AMD Radeon HD 6900 Series

* - Default adapter

...

The three options I'd guess are:
1) OpenCL/other software issue with that version of Linux
2) Not 5830
3) Faulty card

It's just one 5830, and it runs fine on Phoenix.

It's just that I like cgminer more :P

My entire GUI crashed when I tried to end cgminer; weird stuff, the log looked fine until that point.

rcocchiararo@omega:~$ sudo aticonfig --lsa
[sudo] password for rcocchiararo:
* 0. 02:00.0 ATI Radeon HD 5800 Series 

* - Default adapter

legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Well, there is --sched-stop HH:MM, which you could set to 60 minutes in the future every time your script starts cgminer.
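A minimal sketch of wiring that flag into a restart loop (pool details are placeholders; it assumes GNU date, and relies on --sched-stop without a --sched-start making cgminer quit at the given time, as described above):

Code:
#!/bin/sh
# Each pass, start cgminer with a stop time 60 minutes out;
# when it quits, the loop brings it straight back up.
while true; do
    ./cgminer -o http://pool.example.com:8332 -u worker -p pass \
        --sched-stop "$(date -d '+60 minutes' +%H:%M)"
    sleep 5
done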