Thanks for the support!
TeamBlackMiner_1_44_cuda_11_6_beta6.7z
DAG generation takes 10 seconds on the 3070 Mobile in 1.44 beta 6 with --dagintensity 1. I can also run stably at higher memory clocks.
https://i.ibb.co/KGmn76x/1-44.png
My observations after almost a day of mining on a 12x3080 non-LHR rig:
1. If you are using Hive OS, use CUDA 11.4; 11.5 gives a slight drop in hashrate.
2. I tried all of the miner's cores; the core is not selected correctly automatically in Hive OS (1.42 picks core 1, 1.43 picks core 3). On 20xx/30xx series cards you must set the value manually: 6 to 8 if you use 1.42, and 12 to 15 if you use 1.43.
3. It makes no sense to set the core clock above 1440: it gives no performance gain, it only increases the cards' temperature and power consumption. Raising the core clock only helps on older cards like the 1070, 1080, and 1080 Ti with GDDR5 memory. On cards with GDDR6/6X memory the core does not participate in ETH mining, so there is NO point in increasing it!
I tested the range from 1200 to 1600; anything above 1440 gives a drop in hashrate.
4. The --xintensity option needs to be tuned individually; for example, 8100 works better for me than 4096.
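For reference, the two flags discussed in these notes can be combined on one command line. This is only a sketch under assumptions: --dagintensity and --xintensity are the flags named in this thread, the values are the ones reported above, and the binary name plus the rest of the command line (pool, wallet, etc.) are placeholders to fill in from the TBM documentation.

```shell
# Sketch only: --dagintensity and --xintensity are the flags discussed above;
# the binary name and remaining options are placeholders, not verified syntax.
./TBMiner \
    --dagintensity 1 \
    --xintensity 8100
# ...plus your usual pool/wallet options per the TBM documentation
```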
Bugs I found: on a 13-card non-LHR rig (2x3090, 7x3080, 4x3070) I could not achieve stable operation; there were constant DAG-generation errors and a lot of invalid shares. A 12x3070 non-LHR rig also gives errors during DAG generation, even with --dagintensity 1 always on.
Wishes: you should run detailed tests under Hive OS; mining on Windows is extremely inconvenient and not popular, and big miners don't use Windows. Improve stability and fix the automatic core selection.
I will continue testing it on different rigs under Hive OS to find bugs and errors, as well as the optimal settings.
https://i.ibb.co/7QxhBbb/2022-01-23-151607.png
I've extensively tested TBM with an RTX 3090 and can conclusively say that setting the core clock above 1440 does make a difference, especially in a Windows environment. Setting the core clock is the only way we have to set the cards' voltage in Linux, and 1650 on the GPU sets the voltage to 0.78 V, which is the sweet spot for the 3090 with a high xintensity value.
I've tested 1500, 1550 and 1600, and the only way I've been able to obtain 135+ MH/s at-pool hashrate is at 1650. One caveat on those numbers: the testing was done on versions 1.24 to 1.28, after which I stopped because I was happy with the performance, and the later versions focused on LHR, which for some reason had a negative effect on non-LHR cards until this latest release. Version 1.27 was where I achieved the highest rates on my 3090, and version 1.41 is where the AMD cards started to achieve 66+ MH/s under Windows and 65+ under Hive OS.
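On Hive OS or any Linux box, one way to pin the core clock the way described above is nvidia-smi's clock-lock switch. The switch itself is a standard nvidia-smi option; the 1650 value is simply the figure quoted in this post, not a general recommendation, and clock locking needs root.

```shell
# Lock GPU 0's core clock to 1650 MHz (given as a min,max pair).
sudo nvidia-smi -i 0 --lock-gpu-clocks=1650,1650

# Revert GPU 0 to default clock behaviour when done testing.
sudo nvidia-smi -i 0 --reset-gpu-clocks
```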
OK, so I take it you're using it now? Can you show your uptime in Hive OS?