Feature Requests.
Greetings Mr Claymore and fellow miners.
Apologies if there is an existing thread for Claymore miner feature requests; I did search for one, and while I found multiple postings, there wasn't an obvious (to me) category for them.
So, without further ado, having used Claymore's miner for about 6 months, I have a list of refinements for consideration.
Background:
One of the biggest PITAs is the way Windows enumerates GPUs. AMD drivers seem to confuse things further: the numbers used by Claymore, Radeon Settings, GPU-Z, Trixx, etc., and Device Manager all seem to differ, and to make matters worse, every time I add a GPU the numbering changes. Radeon Settings is a bad joke if you have more than 10 GPUs; just checking that all of them are set to compute mode, and/or setting them, can take well over an hour.
Some changes (graphics/compute) fail, and multiple reboots and attempts can be required.
With all the moaning and bitching about GPU shortages, gamers vs miners, etc., regardless of which camp you sit in, miners are a supplier's dream come true: a new market, multiple purchases, a shortage generated, supply-and-demand rules, price hikes (or excuses for them). I'd say ALL consumers are losing from that. But really, AMD, WTH? Miners are your dream customer, purchasing tens of cards versus a gamer's one, so limiting drivers to 13 cards, with a crappy interface that can literally take 3 hours to set up a 13-GPU rig? You should be utterly ashamed of yourselves!
Kudos to Mr Claymore: I see far more development with useful features, prompt action when bugs are encountered, and SOLUTIONS to issues.
(I've been playing with the -y 1 switch; as it needs to be run as admin, that adds other concerns, but all the same, it's nice to see SOMEONE doing something about a moronic situation with AMD drivers.)
One of the challenges of maintaining a multi-GPU rig is identifying and locating the sources of problems. The very same challenge is a daily event in every data/storage centre, and one solution that evolved there is the ident LED: via a management interface it can be tripped by an alarm or illuminated manually, clearly marking the physical hardware in the rack.
So.....requests.
I'm using the -altnum 3 switch, which has at least removed the GPU0 tag that no other application uses, and seems to have aligned the numbering with GPU-Z. However, not so in the EthMan application, which is locked to the default numbering.
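For reference, a minimal sketch of a start line using it (the pool and wallet are placeholders, not my actual setup):

:: start.bat (sketch) - -altnum 3 selects the alternative GPU numbering described above
EthDcrMiner64.exe -epool eu1.ethermine.org:4444 -ewal 0xYourWallet -epsw x -altnum 3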
1.0 Would it be possible to have numbering control in EthMan (or have it clone the setting used in the miner config)?
1.1 Would it be possible to add a control in EthMan to set the fan speed to maximum? It would be nice to have EthMan change the LED colour on GPUs, but I'd imagine that would be a programming nightmare, as no two vendors, or even models of card, could be relied on to behave the same way. So, forgoing that, setting the fan to MAX would serve as a useful manual indicator to identify a problem GPU in the rack.
(I set the Sapphire cards' Nitro LED to change by fan speed, so setting the fan to max would also make the LEDs on those cards change to a specific colour, as well as making the fans audibly/visibly spin up; a launch-time workaround is sketched below.)
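In the meantime, a possible launch-time workaround (not the live EthMan control requested above): the -fanmin switch accepts per-GPU comma-separated values, so a suspect card's fan can be pinned high when the miner starts. A sketch, assuming a 3-GPU rig with the suspect card at index 2, fan management active, and placeholder pool/wallet values:

:: pin the third card's fan at 100% as an audible/visible marker
EthDcrMiner64.exe -epool eu1.ethermine.org:4444 -ewal 0xYourWallet -epsw x -fanmin 0,0,100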
2.0 Console tweak. Option to suppress the purple temp/fan info. (Please keep that in the logs).
(I'm monitoring that in EthMan)
Side note about logs: THANK YOU for sorting out a configurable path for those; having them bundled in with the app was a PITA when cleaning up/updating, etc.
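For anyone who missed that switch, a sketch (the path and the pool/wallet values are placeholders):

:: write the debug log outside the miner folder
EthDcrMiner64.exe -epool eu1.ethermine.org:4444 -ewal 0xYourWallet -epsw x -logfile C:\MinerLogs\claymore.log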
2.1 Console tweak. Option to aggregate the "New job from..." lines into a time-stamped summary (see below), e.g. "n new jobs from ... in last n seconds".
2.2 Console tweak. Option to add time stamps to each line.
Maybe I should explain, or rather ask for some clarification on what I'm currently seeing in the console. I see a lot of new jobs, and indeed shares are found, but there is no means to match the two up. Which jobs are completed? How many jobs are outstanding (buffer/queue size)?
I get the feeling a lot more jobs are received than are solved or produce shares.
Perhaps that is normal?
2.3 What would be cool is to have the means to stop/pause new jobs, monitor the processing and completion of queued jobs, and have some means to tune that.
Or at least monitor it.
Being able to log/see something like:
20180428082631 Job 12345 received, 14th in queue, ......
20180428082701 Job 12345 finished, share accepted 16ms.......
20180428082959 Job 12345 finished, share (confirmed/rejected/stale/incorrect)
On that subject, I NEVER see any rejected shares in the console.
(BTW, I am using -estale 1.)
But what is odd is that some percentage of shares, 1~2%, are stale according to the pool stats, yet Claymore does not indicate that.
So maybe I'm misunderstanding the purpose of "rejected" in the console, but at least for me it seems to serve no purpose, while incorrect shares and stale shares don't seem to be indicated in the console at all.
2.4 Being able to follow the full path (received, processed, submitted, accepted, confirmed, etc.) would be really helpful when debugging performance issues.
3.0 Configs for multiple pools.
Background: sometimes I'm switching coins, and pools. The epools.txt and dpools.txt files are nice, but it's not obvious at a glance which pools are in them.
Being able to define in the BAT file WHICH pool file to use would be nice.
e.g. ETHpools.txt and ETCpools.txt, "selected" in the start batch file, would make that clearer, and mean I could leave static configs deployed rather than having to edit/swap pool files each time and run the risk of messing that up.
Update, May 1st 2018: my bad. I spent some time reviewing the 11.7 readme more thoroughly and spotted the solution to my request 3.0 above. Specifically, Claymore 11.7 (and 11.6) now allows you to specify the file name of your pools file in your startup batch file.
Very cool; I've tested it in 11.7 and confirmed it works, e.g.
-epoolsfile epoolsEthermineETH.txt
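So the split I described in 3.0 can be done today. A sketch of a two-file setup (file names, pool host, wallet and worker are all placeholders), where, if I read the readme right, the miner falls back to the first entry in the pools file when no -epool is given on the command line:

:: startETH.bat (sketch) - select the ETH pool list
EthDcrMiner64.exe -epoolsfile ETHpools.txt

:: startETC.bat (sketch) - select the ETC pool list
EthDcrMiner64.exe -epoolsfile ETCpools.txt

Each pools file then holds one pool per line in the stock epools.txt format, e.g. in ETHpools.txt:

POOL: eu1.ethermine.org:4444, WALLET: 0xYourWallet, PSW: x, WORKER: rig1, ESM: 0, ALLPOOLS: 0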
What I would suggest, though, is sectioning and versioning the readme file.
It seems this feature was already released with 11.6 and mentioned briefly in History.txt, but the scope/scale/use-case/example entry was missing from the readme.
Well, that's about it for now.
Cheers.