Thanks vmozara
I won't reply about the pool-side difference; I don't want to start a dev war.
I still say Claymore obviously tweaked the effective hashrate of his XMR miner relative to the displayed hashrate, and pretended in his doc that it was because of the -5% of "disabled optims". No problem with that, but a disabled optim should also lower the displayed hashrate: if you explicitly mine at 500 minus 5% of "disabled optims", you must display 475. Period.
Note that the GPU devfee is 0.9%, not 1.5%.
The 1.5% applies to the CPU part only; your 2 kH/s Vegas mine with a 0.9% devfee.
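To make the numbers explicit, here is a minimal sketch of the arithmetic; the 500 H/s and 2 kH/s figures are just the examples above, not measurements:

    #include <stdio.h>

    int main(void)
    {
        /* Claymore example: raw speed 500 H/s, minus the claimed -5% "disabled optims".
           The displayed value should reflect the penalty too. */
        double raw       = 500.0;
        double displayed = raw * (1.0 - 0.05);      /* 475 H/s */

        /* JCE GPU example: 2 kH/s Vegas with the 0.9% GPU devfee.
           The long-term pool-side average should be close to this. */
        double vega      = 2000.0;
        double pool_side = vega * (1.0 - 0.009);    /* 1982 H/s */

        printf("displayed: %.0f H/s, expected pool-side: %.0f H/s\n", displayed, pool_side);
        return 0;
    }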
Nice minimalistic interface
Thanks, this is on purpose: I wanted something clear and simple, close to Claymore 9.7.
I found later versions harder to read. For example, I display the pool-side value of the share, which is useful, but not the hexadecimal hash value, which is useless except for dev debugging.
No monitoring of temperature, speed, voltage or frequency, nor any way to adjust them
No failover pools
Planned
The cards are not clearly identified by bus_id
There's currently the PCIe ID (named "Device" at JCE startup, shown in green).
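For reference, on AMD hardware the PCIe bus/device/function of an OpenCL device can be queried through the cl_amd_device_attribute_query extension. This is only a generic illustration of where such an ID comes from, not necessarily the code JCE uses internally, and print_pcie_id is just a name for the example:

    #include <stdio.h>
    #include <CL/cl.h>
    #include <CL/cl_ext.h>   /* CL_DEVICE_TOPOLOGY_AMD, cl_device_topology_amd */

    /* Print the PCIe bus:device.function of one OpenCL device (AMD-only extension). */
    static void print_pcie_id(cl_device_id dev)
    {
        cl_device_topology_amd topo;
        if (clGetDeviceInfo(dev, CL_DEVICE_TOPOLOGY_AMD,
                            sizeof(topo), &topo, NULL) == CL_SUCCESS
            && topo.raw.type == CL_DEVICE_TOPOLOGY_TYPE_PCIE_AMD)
        {
            printf("PCIe %02x:%02x.%d\n",
                   topo.pcie.bus & 0xff, topo.pcie.device & 0xff, topo.pcie.function & 0xff);
        }
    }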
The heavy algorithm on an RX 550 has an unstable hashrate
I know my heavy is bad, I need to optimize it. I even say so explicitly in my doc.
The display order is not the same as in GPU-Z and OverDrive
Debatable. I currently use the OpenCL order.
Very long compilation at startup. On a 12-card rig it takes about 5 minutes, which makes testing very difficult
When you test on 12 cards, first test on one card, then copy-paste the optimal values to the other 11.
The JCE OpenCL code is generated on the fly and is bound to the whole environment, including the card, the parameters, the memory addresses and the coin. Recycling between two tests is impossible. Recycling between two runs of JCE with the exact same parameters would be possible, but would compromise security; I already explained why. But I haven't given up on finding a way that's both secure and faster.
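For context, the generic OpenCL way to recycle a compiled program between runs is to save the device binary with clGetProgramInfo and reload it with clCreateProgramWithBinary. The sketch below only shows that standard mechanism for a single-device program (error handling omitted); it is exactly the kind of caching JCE avoids for the security reason above, and the two helper names are mine:

    #include <stdlib.h>
    #include <CL/cl.h>

    /* After a successful clBuildProgram, fetch the compiled binary so the
       caller can write it to a cache file. Caller frees the returned buffer. */
    static unsigned char *save_program_binary(cl_program prog, size_t *size_out)
    {
        size_t size = 0;
        clGetProgramInfo(prog, CL_PROGRAM_BINARY_SIZES, sizeof(size), &size, NULL);

        unsigned char *binary  = malloc(size);
        unsigned char *list[1] = { binary };
        clGetProgramInfo(prog, CL_PROGRAM_BINARIES, sizeof(list), list, NULL);

        *size_out = size;
        return binary;
    }

    /* On a later run, rebuild the program from the cached binary instead of
       from source; the build step is then much faster. */
    static cl_program load_program_binary(cl_context ctx, cl_device_id dev,
                                          const unsigned char *binary, size_t size)
    {
        cl_int status, err;
        cl_program prog = clCreateProgramWithBinary(ctx, 1, &dev, &size,
                                                    &binary, &status, &err);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        return prog;
    }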