I've seen a number of discussions about the Excavator miner, and I've just implemented initial support for recent versions of Excavator as an External Miner. It's included in the latest Awesome Miner release, so it's at least possible to monitor it.
I also have an implementation for it as a Managed Miner in the pipeline, but that one needs a bit more testing before I release it. More operations will also be made available later on.
The Excavator mining software is a bit special: it looks like the configuration file needs to specify the number of workers you want for each GPU (the worker.add command). From an Awesome Miner point of view it would be great if Excavator could simply use all GPUs by default, because having Awesome Miner detect the GPUs first and then launch Excavator with a configuration based on that detection is a possible source of error.
The Excavator miner also provides very few configuration options via the command line, making it difficult to override the behaviors that Awesome Miner enforces by default. I will try to come up with a good way to support this software anyway, but it has a "unique" design.
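To illustrate why that is a possible source of error, here is a minimal sketch of generating such a command file from a detected GPU count. I'm assuming the Excavator-style JSON layout with timed batches of algorithm.add / worker.add calls and that worker.add takes an algorithm index plus a device index; the pool, wallet, and timing values are just placeholders:

```python
# Sketch: generate an Excavator command file with one worker per detected GPU.
# Assumptions (not from this post): the JSON command-file layout with timed
# batches of "algorithm.add" / "worker.add" calls, and that worker.add takes
# [algorithm_index, device_index]. Pool, wallet and timings are placeholders.
import json

def build_excavator_config(gpu_count, algo="equihash",
                           pool="example-pool:3357", user="WALLET.worker"):
    return [
        {"time": 0, "commands": [
            {"id": 1, "method": "algorithm.add", "params": [algo, pool, user]},
        ]},
        {"time": 3, "commands": [
            # One worker per detected GPU; "0" refers to the algorithm added above.
            {"id": 1, "method": "worker.add", "params": ["0", str(device)]}
            for device in range(gpu_count)
        ]},
    ]

if __name__ == "__main__":
    # If Excavator could simply use all GPUs by default, this generation step
    # (and the GPU detection that has to precede it) would not be needed.
    with open("excavator_cmds.json", "w") as f:
        json.dump(build_excavator_config(gpu_count=6), f, indent=2)
```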
In addition to the above, the Excavator miner will never be downloaded automatically, due to the restrictions in the NiceHash EULA.
Thank you for adding Excavator support and for the effort to integrate its new API.
I want to propose some improvements:
1) I think it would be very useful for all AM users to also see the hashrate a pool records for us. Lately almost all pools have had connection problems, been down, or simply recorded a lower hashrate than the one we see in the miner's window.
To achieve this you have to call the pool's API:
https://www.ahashpool.com/api/walletEx/?address= gives you the list of all miners. You would have to uniquely identify each AM instance by assigning it a uniquely generated random ID (which also appears in the pool's API JSON) that stays the same for a rig (do not generate a new ID when starting a new miner, so past values can still be tracked). Once each instance is uniquely identified, download a fresh JSON from the link above every minute and store the values together with a timestamp in an array kept per pool and algo. By taking the difference between two consecutive distinct values and dividing by the number of seconds between them, you get the real per-second hashrate from the pool. You could also show values like unsold, balance, and unpaid for that pool.
The implementation above would give us very useful insight into each pool's performance and could help us earn more; now that BTC has a very low value, every optimization really matters.
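To make the idea concrete, here is a rough sketch of the polling logic I mean. The field names in the walletEx response ("miners", "ID", "accepted", "unsold", "balance", "unpaid") are my assumptions and would need to be adjusted to the real JSON the pool returns:

```python
# Sketch of the pool-side hashrate polling described in 1).
# The JSON field names used here are assumptions, not confirmed pool docs.
import json
import time
import urllib.request

POOL_API = "https://www.ahashpool.com/api/walletEx/?address="
WALLET = "YOUR_BTC_ADDRESS"        # placeholder
RIG_ID = "am-rig-1f3c9a"           # stable per-rig ID, reused across restarts

history = {}  # (pool, algo) -> list of (timestamp, cumulative_value)

def poll_once():
    with urllib.request.urlopen(POOL_API + WALLET, timeout=30) as resp:
        data = json.load(resp)

    for miner in data.get("miners", []):
        if miner.get("ID") != RIG_ID:
            continue  # only track this AM instance
        key = ("ahashpool", miner.get("algo"))
        history.setdefault(key, []).append(
            (time.time(), float(miner.get("accepted", 0))))

        samples = history[key]
        if len(samples) >= 2:
            (t0, v0), (t1, v1) = samples[-2], samples[-1]
            if v1 != v0 and t1 > t0:
                # Difference of two consecutive values over the elapsed
                # seconds gives the pool-side rate, as described above.
                print(key, "pool-side rate:", (v1 - v0) / (t1 - t0))

    # Balance figures could be shown alongside the rate.
    print("unsold:", data.get("unsold"), "balance:", data.get("balance"),
          "unpaid:", data.get("unpaid"))

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(60)  # poll every minute, as proposed
```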
2) Sometimes when pools have too many connections they disable new connections for a period of time for a specific algo. In this case AM remains stuck with the last value it received from the ccminer instance and does nothing to fix that.
I believe it would be better to show income statistics based on submitted (and also accepted) shares; as it is now, AM keeps showing the last hashrate reported by ccminer without any refresh, even if that was hours ago.
If AM waits for hours for an updated hashrate from ccminer and does not take time into consideration, that is not correct behavior. It should have a timer that checks the hashrate received from the miner every x seconds (if there is no updated value for several consecutive checks - identical values could also be a problem - that miner has an issue), with a one-minute delay after the miner is started to allow all GPUs to start up and report hashrates. This way, if one pool/algo server is offline, AM will switch to the next most profitable algo from the list.
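Something along these lines is what I have in mind; the check interval, the stale-check threshold, and the read_hashrate() / switch_to_next_algo() hooks are illustrative placeholders, not existing AM options or APIs:

```python
# Rough sketch of the staleness watchdog described in 2).
import time

CHECK_INTERVAL = 30       # seconds between hashrate checks
STARTUP_GRACE = 60        # let all GPUs start and report first
MAX_STALE_CHECKS = 4      # identical/unchanged readings before acting

def watch_miner(read_hashrate, switch_to_next_algo):
    """read_hashrate() returns the latest value reported by the miner (e.g. ccminer)."""
    time.sleep(STARTUP_GRACE)
    last_value = None
    stale_checks = 0

    while True:
        value = read_hashrate()
        # An unchanged reading is treated as suspicious too, since a stuck
        # miner may keep reporting the same old number.
        if value is None or value == last_value:
            stale_checks += 1
        else:
            stale_checks = 0
            last_value = value

        if stale_checks >= MAX_STALE_CHECKS:
            # Pool/algo looks dead or the miner is stuck: move on instead of
            # keeping an hours-old hashrate on screen.
            switch_to_next_algo()
            stale_checks = 0
            last_value = None
            time.sleep(STARTUP_GRACE)
            continue

        time.sleep(CHECK_INTERVAL)
```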
What do you think, Patrike?