Thanks for the support!
TeamBlackMiner_1_44_cuda_11_6_beta6.7z
The DAG build takes 10 seconds on the 3070 Mobile in 1.44 beta 6 with --dagintensity 1. I am able to run stably at higher memory clocks.
My observations after almost a day of mining on a 12x3080 non-LHR rig.
1. If you are using Hive OS, use CUDA 11.4; 11.5 gives a slight drop in hashrate.
2. I tried all of the miner's kernels; the kernel is not picked correctly automatically in Hive OS (1.42 defaults to kernel 1, 1.43 to kernel 3). You have to set the value manually: 6 to 8 if you use 1.42, and 12-15 if you use 1.43, for 20xx/30xx series cards.
3. It makes no sense to set the core clock above 1440; it gives no extra performance and only increases the cards' temperature and power consumption. Raising the core clock only helps on older cards like the 1070, 1080 and 1080 Ti with GDDR5 memory. On cards with GDDR6/6X memory the core does not participate in ETH mining, so there is NO point in increasing it! I tested the range from 1200 to 1600, and anything above 1440 gives a drop in hashrate.
4. The --xintensity option needs to be tuned individually; for example, 8100 works better for me than 4096 (see the example commands after this list).
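A rough sketch of how this looks on a Linux rig, in case it helps. The pool/wallet options are placeholders, the binary name TBMiner is assumed from the release archive naming, and any flag other than --xintensity and --dagintensity should be checked against your build's --help output; the clock cap is done through nvidia-smi, not the miner:

# cap the core clock at 1440 MHz on GPU 0 instead of pushing it higher (GDDR6/6X cards)
sudo nvidia-smi -i 0 --lock-gpu-clocks=1440,1440
# start the miner with an explicitly tuned xintensity and a slow, safer DAG build
./TBMiner <your usual pool/wallet options> --xintensity 8100 --dagintensity 1
# if the automatic kernel pick is wrong (point 2 above), force the kernel manually; check --help for the exact flag name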
Bugs I found: on a rig with 13 video cards (2x 3090, 7x 3080, 4x 3070, all non-LHR) I could not achieve stable operation; there were constant DAG generation errors and a lot of invalid shares. A 12x3070 non-LHR rig also gives errors during DAG generation, even with --dagintensity 1 always on.
Requests: you should run detailed tests under Hive OS; mining on Windows is extremely inconvenient and not popular, and big miners don't use Windows. Improve stability and fix the automatic kernel selection.
I will continue to test on different rigs under Hive OS to find bugs and errors, as well as the optimal settings.
I've extensively tested TBM with an RTX 3090 and can conclusively say that setting the core clock above 1440 does make a difference, especially in a Windows environment. Setting the core clock is the only way we have to set the card voltage in Linux, and locking the GPU at 1650 sets the voltage to 0.78 V, which is the sweet spot for the 3090 with a high xintensity value.
I've tested 1500, 1550 and 1600, and the only way I've been able to obtain 135+ MH/s at-pool hashrate is at 1650. One caveat on those numbers: the testing was done on versions 1.24 to 1.28; after that I stopped because I was happy with the performance, and the later versions focused on LHR, which for some reason had a negative effect on the non-LHR cards until this latest release. Version 1.27 was where I achieved the highest rates on my 3090, and version 1.41 is where the AMD cards started to reach 66+ MH/s under Windows and 65+ under HiveOS.
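If you want to see what a given clock lock does to the voltage on Linux, here is a minimal sketch using standard nvidia-smi commands (the voltage readout only exists on reasonably recent drivers, so treat it as optional):

# lock the 3090 core at 1650 MHz, which per the numbers above lands around 0.78 V
sudo nvidia-smi -i 0 --lock-gpu-clocks=1650,1650
# query the resulting core voltage (reported in mV on drivers that support it)
nvidia-smi -i 0 -q -d VOLTAGE
# undo the lock if you want the card to manage its clocks again
sudo nvidia-smi -i 0 --reset-gpu-clocks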
OK, I think you are using it now? Can you show your uptime in Hive OS?
Here is my current HiveOS status. I have a Windows machine running an AMD 6900 XT which I'm using to try to optimize the AMD cards in the Linux rig.
I just realized that the GPU and memory stats are missing from the display (this happens with HiveOS).
Here is my HiveOS GUI for the reference values:
Not bad. What CUDA version and drivers do you use? 11.5? Or 11.4?
I'm using CUDA 11.4 right now since 11.5 underperforms in Linux, but I'm switching back and forth between drivers 470.94 and 495.46 while running the 11.4 CUDA runtime to see if there is any driver benefit.
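For anyone comparing drivers the same way, the quickest sanity check I know of is a plain nvidia-smi query (nothing TBM-specific here):

# show which driver version is currently loaded
nvidia-smi --query-gpu=driver_version --format=csv,noheader
# note: the "CUDA Version" shown in the plain nvidia-smi header is only the maximum the driver supports;
# the runtime TBM actually uses is whichever CUDA 11.4 or 11.5 package you launch it with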
How is your test going? What are the results?
I see some minor improvements when using the 495.46 drivers with the CUDA 11.4 library under version 1.43 (I didn't run any CUDA 11.5 tests). My average over the testing timeframe (19 hours or so) was about 140 MH/s, but I lost my logs when HiveOS updated to the newest version of TBM (and I couldn't get the AMD cards to hash properly with that, so I'm manually running the Nvidia cards with the new 1.44 binary and running the AMD cards through HiveOS on 1.43).
I'd prefer not to have to set the kernel manually, but for some reason with the new set of kernels the autotuning finds a lot of 100s yet picks the lowest (so it seems to like kernel 3). My understanding is that the low kernels are good for low intensities and the new kernels are good for low power, but I'm running my 3090 at high(ish) intensity and high power, so it would be good to have an idea of which kernel should be best for the 3090 and the 2060S based on that 'style', so I don't have to spend days running tests on each kernel (at least 2 hours each, with multiple non-serial runs).
That being said, this morning my pool-side average hashrate for my rig on 2Miners (the last 6-hour average) was 511 MH/s. That is the combined hashing of 5 AMD 6-series cards, one 3090 and one 2060S. With T-Rex and TRM I would expect a maximum of about 470 MH/s (usually it was much lower, 440-450 MH/s), so being at 511 MH/s is almost a 10% improvement across all the cards.