Ok so I took 5 screenshots with GPU-Z open to see what happens when I try to mine Grin32.
1. https://i.imgur.com/FNylIWV.jpg --> Miner started (as usual it sees the VGA as GPU0 and GPU1, and one of the two *threads?* is not working).
2. https://i.imgur.com/7sqlRro.jpg --> GPU0 crashes and the external script is called.
3. https://i.imgur.com/67nRXgr.jpg --> The miner reports 80 W, but the VGA actually draws 40 W.
4. https://i.imgur.com/8VuYj5k.jpg --> The first *thread?* starts working again, efficiency per GPU thread climbs to 23,000, GPU clock and memory clock decrease, GPU load is now 3% and power draw 5 W, while memory load stays at the same level.
5. https://i.imgur.com/tTTaVZG.jpg --> Card readings show the VGA is not doing any work, but memory is at full load and efficiency (g/s/kW) keeps growing along with the hashrate. The readings will not change even if I leave it for 8 hours, and it will not find any share from this point on.
Do you have 2 x 5500xt?...
Let's try narrowing it down: could you run with only 1 device using `--devices 0`? It should go at the end of the lolMiner.exe command line.
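For reference, the launch line might look something like this; the pool address and wallet are placeholders, and everything besides `--devices 0` is an assumption based on typical lolMiner usage, so adapt it to your existing batch file:

```shell
REM Hypothetical lolMiner launch line (Windows batch syntax).
REM Only the --devices 0 part is the suggested change: it restricts
REM mining to the first detected GPU instead of both "halves".
lolMiner.exe --algo C32 --pool stratum+tcp://grin-pool.example.com:3416 --user YOUR_WALLET --devices 0
```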
I tried, but now the miner starts at 0.03 g/s and crashes instantly. I also use lolMiner to mine BTG and ETH, and there it always starts by recognizing 2 cards instead of 1 yet mines with no problems. I really don't know why it crashes only when trying to mine Grin32.
I have no issues when mining ETH and BTG because it behaves something like a single-core CPU with two threads: the sum of the two reported hashrates is the correct one. For example, while mining BTG I get GPU0: 11.2 sol/s and GPU1: 12.2 sol/s, for a total of 23.4 sol/s, which seems right, and I get no errors at all.
The same happens when I mine Ethereum, no problems there either. The sum is correct, but it looks like one process is being split into two threads. In Device Manager, GPU-Z and HWiNFO64 it appears as one VGA, but with 2 UVD clocks at different frequencies (UVD1: 330, UVD2: 316).
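The sanity check above can be sketched in a few lines; the per-device numbers are the BTG figures from this post, and the dict keys just mirror the miner's GPU0/GPU1 labels:

```python
# When lolMiner shows one physical card as two "GPUs", the per-device
# hashrates should add up to the card's real total.
reported = {"gpu0": 11.2, "gpu1": 12.2}  # sol/s as shown by the miner

total = sum(reported.values())
# round() guards against floating-point noise in the sum
print(f"total: {round(total, 1)} sol/s")  # -> total: 23.4 sol/s
```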
The issue is not generic; it only happens when I try to mine Grin32.