
Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0 - page 700. (Read 5805728 times)

legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
On the scantime and intensity topic: should scantime be increased and intensity decreased when mining on p2pool? I know it works a bit differently than regular pools, and on my 6950 rig, where I usually have an E of 140% and a U of 29.xx, I now have an E of 4% and a U of 3.45.  Am I overworking or underworking the cards?
U is directly proportional to income ...
If it is now 1/10 of what it was, over a long period of time - that's bad.
Edit: assuming a share is still worth the same amount ...
donator
Activity: 798
Merit: 500
On the scantime and intensity topic: should scantime be increased and intensity decreased when mining on p2pool? I know it works a bit differently than regular pools, and on my 6950 rig, where I usually have an E of 140% and a U of 29.xx, I now have an E of 4% and a U of 3.45.  Am I overworking or underworking the cards?
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
If the pool does merged mining and is sending non-BTC LP's (i.e. more than just a single BTC LP per network block) to you, it will reduce your efficiency that way.
This is the most likely reason.

If you want to increase efficiency (for the pool's sake, not yours), decrease thread count to 1 and increase scan time to say 115 seconds.
This sounds a bit wrong. The scantime would be understandable for a slower card that cannot do 2^32 in 60 seconds, but for a fast card it takes only seconds to test them all.
Increasing scantime will allow cgminer to roll the work for longer. Scantime affects rolltime expiration as well.
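For anyone wanting to try that, the relevant switches are -g (threads per GPU) and --scan-time (in seconds). A minimal sketch, with the pool URL and credentials as placeholders:
Code:
cgminer -o http://pool.example.com:8332 -u worker -p password -g 1 --scan-time 115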
legendary
Activity: 1862
Merit: 1011
Reverse engineer from time to time
If the pool does merged mining and is sending non-BTC LP's (i.e. more than just a single BTC LP per network block) to you, it will reduce your efficiency that way.
This is the most likely reason.

If you want to increase efficiency (for the pool's sake, not yours), decrease thread count to 1 and increase scan time to say 115 seconds.
This sounds a bit wrong. The scantime would be understandable for a slower card that cannot do 2^32 in 60 seconds, but for a fast card it takes only seconds to test them all.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
If the pool does merged mining and is sending non-BTC LP's (i.e. more than just a single BTC LP per network block) to you, it will reduce your efficiency that way.
This is the most likely reason.

If you want to increase efficiency (for the pool's sake, not yours), decrease thread count to 1 and increase scan time to say 115 seconds.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
...
Yes, unfortunately I do. I have 9 rigs, 2 with 2 GPUs and the rest with only 1.

I know what efficiency means, I was just wondering why it is so different for different people.
I have no idea what influences efficiency.

Using mainly 5800s and two 6950s.

I am using default settings, no overclocking, intensity 8.

I am not worried about the performance, I was just wondering about the low efficiency.
Ah they're not your computers?
No one would run an extra 120+W per card on purpose Tongue
Maybe work owns them?

A work request represents 2^32 hash attempts (without using rollntime).
For a good quality 6950 on default settings (e.g. GB HD 6950 900MHz/775MHz = ~365MH/s) it should take about 11.77 seconds to complete the full 2^32 hash tests.
However, if it gets any LP messages during those 11.77 seconds then (by supposed definition) all the work requests queued and being worked on will be thrown away and will give no more results beyond what has already been submitted - thus reducing your efficiency.

A higher intensity will make that worse also since it increases how long the GPU is working without replying.
A lower hash rate will also decrease efficiency for the same reason.
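Just to show where that 11.77 second figure comes from - it is simply 2^32 nonces divided by the hash rate (365000000 being the ~365MH/s above), so a slower card spends proportionally longer on each work item:
Code:
$ echo "scale=4; 2^32 / 365000000" | bc
11.7670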

If the pool does merged mining and is sending non-BTC LP's (i.e. more than just a single BTC LP per network block) to you, it will reduce your efficiency that way.
hero member
Activity: 774
Merit: 500
Lazy Lurker Reads Alot
Well, regarding over- or undervolting cards:
There is not really an easy answer; it all depends on how the card's BIOS is made.
I found my XFX and Sapphire Vapor-X cards do not allow any underclocks or overclocks besides the 3 steps programmed into the BIOS for the card's different power states.
I previously had some 5850 cards from Asus that actually responded well to any overclocking tool and settings, where I could put in any value I liked, even ones much higher than the factory allowed. Yet even though I set Afterburner to allow voltage changes, they simply refuse to do so.

But the Sapphire/XFX cards are not responding at all to any of the tools available (Trixx, Afterburner, Ray Adams' ATI Tray Tools); even the AMD overclocking tool crashes with the message "no cards present". So I am stuck being able to change only the MHz on mem and core, and that's it.

I have not read of anyone being able to flash these cards successfully with the Asus TOP BIOS version, so I don't quite dare to do so either.
Although I am sure these very well-built cards could do much more with some slight overvolting.

 
legendary
Activity: 1904
Merit: 1002
I haven't switched up for quite some time: still running Ubuntu 11.04 with Cat 11.6 and SDK 2.4 with no problems for 69xx cards. There doesn't seem to be any real consensus on improvements in software platforms, although SDK 2.6 now appears to be the bane of mining.

Is there any definitive improvement with SDK 2.5 over v2.4, or any Catalyst driver version greater than 11.6?

Also, does anyone else use Arch for mining?


I love Arch...mining with cgminer works great.
hero member
Activity: 868
Merit: 1000
On the BTCGuild thread we are talking about the efficiency of CGMiner for different users

I have 9 miners, all between 200 & 350 MHash, and on each and every one of them the efficiency is between 10% & 20%

This is on 100K+ Accepted shares per miner, so should be statistically valid

I am not complaining or anything since my stales are on avg below 0.4%, just wondering if someone can explain the reasons why the efficiency is so different for different users

Brat

Why do you have 9 miners of 200 to 350 MH ea?  Do you have 9 rigs w/ 1 GPU each?

Low efficiency simply means you are requesting more work than you complete.  In the case of 10%, you are requesting 10 work units and only completing one.

Yes, unfortunately I do. I have 9 rigs, 2 with 2 GPUs and the rest with only 1.

I know what efficiency means, I was just wondering why it is so different for different people.
I have no idea what influences efficiency.

Using mainly 5800s and two 6950s.

I am using default settings, no overclocking, intensity 8.

I am not worried about the performance, I was just wondering about the low efficiency.
full member
Activity: 210
Merit: 100
Dutch, what are those miners?
350 MHash is what a single half-decent GPU achieves.
Are you using only very low-end GPUs?

What queue size and thread count per GPU are you using?
donator
Activity: 1218
Merit: 1079
Gerald Davis
On the BTCGuild thread we are talking about the efficiency of CGMiner for different users

I have 9 miners, all between 200 & 350 MHash, and on each and every one of them the efficiency is between 10% & 20%

This is on 100K+ Accepted shares per miner, so should be statistically valid

I am not complaining or anything since my stales are on avg below 0.4%, just wondering if someone can explain the reasons why the efficiency is so different for different users

Brat

Why do you have 9 miners of 200 to 350 MH ea?  Do you have 9 rigs w/ 1 GPU each?

Low efficiency simply means you are requesting more work than you complete.  In the case of 10%, you are requesting 10 work units and only completing one.
hero member
Activity: 868
Merit: 1000
On the BTCGuild thread we are talking about the efficiency of CGMiner for different users

I have 9 miners, all between 200 & 350 MHash, and on each and every one of them the efficiency is between 10% & 20%

This is on 100K+ Accepted shares per miner, so should be statistically valid

I am not complaining or anything since my stales are on avg below 0.4%, just wondering if someone can explain the reasons why the efficiency is so different for different users

Brat
newbie
Activity: 36
Merit: 0
Hi.

Is there any way to send the minimized cgminer msdos window to the SYSTEM TRAY??

I have Windows Vista, I tried software like TrayIt! and it does not work.

Thanks

I have Vista and TrayIt! indeed does work. You need to run cgminer and, while it is running, open TrayIt!, select to place cgminer in the system tray, and put TrayIt! in your startup folder. Should work.
full member
Activity: 210
Merit: 100
Is there any way to send the minimized cgminer msdos window to the SYSTEM TRAY??
I have Windows Vista, I tried software like TrayIt! and it does not work.

Firstly, there is no MS-Dos in Windows anymore. The correct term is "command line interpreter".

Your best bet might be going with Microsoft's Sysinternals Desktops(1) app.
Create a second virtual desktop and move all those obnoxious text-mode windows to it, freeing your main desktop and taskbar.
The Desktops app lives only as an icon in the system tray.

With a little work you can set up a few separate, task-oriented desktops, like main desktop, bitcoin-mining desktop, porn desktop, and donating-to-cgminer-dev desktop.


Links:
(1)  http://technet.microsoft.com/en-us/sysinternals/cc817881
legendary
Activity: 1316
Merit: 1005
I haven't switched up for quite some time: still running Ubuntu 11.04 with Cat 11.6 and SDK 2.4 with no problems for 69xx cards. There doesn't seem to be any real consensus on improvements in software platforms, although SDK 2.6 now appears to be the bane of mining.

Is there any definitive improvement with SDK 2.5 over v2.4, or any Catalyst driver version greater than 11.6?

Also, does anyone else use Arch for mining?
member
Activity: 77
Merit: 10
Hi.

Is there any way to send the minimized cgminer msdos window to the SYSTEM TRAY??

I have Windows Vista, I tried software like TrayIt! and it does not work.

Thanks
legendary
Activity: 1862
Merit: 1011
Reverse engineer from time to time
I'm not using a json config so I'm not sure what that may be. On the latest version it recognizes the config option gpu-vddc but doesn't seem to apply it to my 5970s.

I *think* it also depends on the amd drivers you use. I could be completely wrong here, but I think it only works on older ones (like 11.6 that ships with linuxcoin). If you are using a recent version of ubuntu, you would have newer drivers by default. On ubuntu 11.10 I was unable to clock over 775 MHz, no such issues on linuxcoin.
The reason this happens is that there are specific hardwired voltages. E.g. my card only accepts certain steps such as 1.075 and 1.087; if I want to change it to, say, 1.078, it won't apply it. Only those specific voltages can be applied, not whatever value you want.
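If you want to try it from the command line rather than the conf file, this is the usual form (pool details are placeholders, and 1.087 is just an example value - it has to be one of the steps your card actually supports, or it may simply not be applied, as described above):
Code:
cgminer -o http://pool.example.com:8332 -u worker -p password --gpu-vddc 1.087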
legendary
Activity: 1876
Merit: 1000



But yeah - state exactly what you want with those 2 commands and once the CPU changes happen I'll put those 2 on first priority (for 5 BTC Smiley


I would like to add V to the list. The more I read about undervolting, the more I want to try it, especially with summer approaching.
Also, since reconfiguring a bunch of rigs will be easy to do from this interface, it would be nice to be able to write the config to a file.


Kano, I am willing to up the ante to get this stuff in....  10 BTC? Anyone else willing to contribute?



1. ability to switch pools (I think setting a pool's priority to 0 would work?)
2. setting the following, for GPUs:
  • "intensity" : "newValue",
  • "gpu-engine" : "newValue",
  • "gpu-vddc" : "newValue",
  • "gpu-memclock" : "newValue",
  • "gpu-fan":      "newValue",  ex(50-85)
3. ability to write the current config to a text file: (command writeConfig, param filename)
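For reference, this is how the current read-only API is reached, assuming cgminer was started with --api-listen (127.0.0.1:4028 is the default). Presumably the new commands would just be more words sent over the same socket:
Code:
$ echo -n "summary" | nc 127.0.0.1 4028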


hero member
Activity: 518
Merit: 500
I'm not using a json config so I'm not sure what that may be. On the latest version it recognizes the config option gpu-vddc but doesn't seem to apply it to my 5970s.

I *think* it also depends on the amd drivers you use. I could be completely wrong here, but I think it only works on older ones (like 11.6 that ships with linuxcoin). If you are using a recent version of ubuntu, you would have newer drivers by default. On ubuntu 11.10 I was unable to clock over 775 MHz, no such issues on linuxcoin.
legendary
Activity: 1428
Merit: 1000
https://www.bitworks.io
Has anyone successfully used gpu-vddc on Radeon 5970 cards? Looking at the GPU details, the results are correct at 1.050, but if I change the voltage to something else it still reports the stock voltage. I'm running 8 GPUs (4x5970), so I am using Linux due to stability issues in Windows, which means I cannot use MSI Afterburner or similar Windows tools.

I just tried to change the voltage, but when I add the voltage line to the conf file, the program reports an error parsing json conf file.
"gpu-vddc" : "1.05",

I'm not using a json config so I'm not sure what that may be. On the latest version it recognizes the config option gpu-vddc but doesn't seem to apply it to my 5970s.