
Topic: Team Black Miner (ETHW ETC Vertcoin Ravencoin Zilliqa +dual +tripple mining) - page 27.

jr. member
Activity: 139
Merit: 3
I am leaving the house now and can continue testing tomorrow.
Thanks for the support!

TeamBlackMiner_1_44_cuda_11_6_beta6.7z

The DAG is taking 10 seconds on the 3070 Mobile in 1.44 beta 6 with --dagintensity 1. I am able to run stably at higher memclocks.

https://i.ibb.co/KGmn76x/1-44.png

My observations from almost a day of mining on a 12x3080 non-LHR rig.
1. If you are using Hive OS, use CUDA 11.4; 11.5 gives a slight drop in hashrate.
2. I tried all of the miner's kernels; the kernel is not set correctly automatically in Hive OS (1.42 picks kernel 1, 1.43 picks kernel 3). You must set it manually: 6 to 8 if you use 1.42, and 12 to 15 if you use 1.43, on 20xx/30xx series cards.
3. It makes no sense to set the core clock above 1440: it gives no extra performance and only increases the cards' temperature and power consumption. Raising the core clock only helps on older cards like the 1070, 1080 and 1080 Ti with GDDR5 memory. On cards with GDDR6/6X memory the core clock is not the limiting factor in ETH mining, and there is NO point in increasing it!
I tested the range from 1200 to 1600; anything above 1440 gives a drop in hashrate.
4. The --xintensity option needs to be tuned per rig; for example, 8100 works better for me than 4096 (see the launch sketch after this list).
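For reference, a minimal launch sketch with these flags. Only --xintensity and --dagintensity come from this thread; the pool host/port and wallet are placeholders, and the other flag spellings are from memory of the TBM readme, so double-check everything against TBMiner --help:

    # --xintensity: tune per rig; 8100 worked better than 4096 on this rig
    # --dagintensity 1: slower but safer DAG build when the memory is heavily overclocked
    ./TBMiner --algo ethash --hostname eu1.ethermine.org --port 4444 \
      --wallet 0xYourWalletHere --xintensity 8100 --dagintensity 1

In Hive OS the same flags would go into the flight sheet's extra config arguments.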

Bugs that I found: on a rig with 13 video cards (2x3090, 7x3080, 4x3070, no LHR) I could not achieve stable operation; constant DAG generation errors and a lot of invalid shares. A 12x3070 non-LHR rig also gives errors during DAG generation, even with --dagintensity 1 always on.

Requests: you should run detailed tests under Hive OS; mining on Windows is extremely inconvenient and unpopular, and big miners don't use Windows. Improve stability and fix the automatic kernel selection.

I will continue to test it on different rigs to find bugs and errors under Hive OS, as well as to find the optimal settings.

https://i.ibb.co/7QxhBbb/2022-01-23-151607.png

I've extensively tested TBM with an RTX 3090 and can conclusively say that setting the core clock above 1440 does make a difference, especially in a Windows environment. Setting the core clock is the only way we have to set the cards' voltage in Linux, and 1650 on the GPU sets the voltage to 0.78 V, which is the sweet spot for the 3090 with a high xintensity value.

I've tested 1500, 1550 and 1600, and the only way I've been able to obtain a 135+ MH/s at-pool hashrate is at 1650. One caveat on those numbers: the testing was done on versions 1.24 to 1.28; afterwards I stopped because I was happy with the performance, and the later versions focused on LHR, which for some reason had a negative effect on the non-LHR cards until this latest release. Version 1.27 was where I achieved the highest rates on my 3090, and version 1.41 is where the AMD cards started to reach 66+ MH/s under Windows and 65+ under HiveOS.
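For anyone wanting to reproduce the Linux side of this, a minimal sketch using nvidia-smi's clock locking; the 1650 value is from the post above, and -pm/-lgc/-rgc are standard nvidia-smi options on recent drivers:

    sudo nvidia-smi -pm 1                 # enable persistence mode so settings stick between runs
    sudo nvidia-smi -i 0 -lgc 1650,1650   # lock GPU 0's core clock at 1650 MHz (~0.78 V on this 3090)
    sudo nvidia-smi -i 0 -rgc             # reset the lock when done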



OK, I think you're using it now? Can you show your uptime in Hive OS?

Here is my current HiveOS status. I have a Windows machine running an AMD 6900XT which I'm using to try to optimize the AMD cards in the Linux rig.

https://i.imgur.com/Y0wuSej.png

I just realized that the GPU and memory stats are missing from the display (happens with HiveOS).
Here is my HiveOS GUI for the reference values:

https://i.imgur.com/b1bjcil.png

Not bad. What CUDA version and drivers do you use? 11.5? Or 11.4?

I'm using CUDA 11.4 right now, since 11.5 underperforms on Linux. But I'm switching back and forth between drivers 470.94 and 495.46 while running the 11.4 CUDA runtime to see if there are any driver benefits.

How is your test going? What are the results?

I see some minor improvements when using the 495.46 drivers with the CUDA 11.4 library under version 1.43 (I didn't run any CUDA 11.5 tests). My average over the testing timeframe (19 hours or so) was about 140 MH/s, but I lost my logs when HiveOS updated to the newest version of TBM (and I couldn't get the AMD cards to hash properly with that, so I'm running the Nvidia cards manually with the new 1.44 binary and the AMD cards through HiveOS on 1.43).

I'd prefer not to have to set the kernel manually, but for some reason with the new set of kernels the autotuning finds a lot of 100s but picks the lowest (so it seems to like kernel 3). My understanding is that the low kernels are good for low intensities and the new kernels are good for low power, but I'm running my 3090 at high(ish) intensity and high power. It would be good to have an idea of which kernel should be best for the 3090 and the 2060S based on that 'style', so I don't have to spend days running tests on each kernel (at least 2 hours each, with multiple non-consecutive runs).

That being said: this morning my pool-side average hashrate for my rig on 2Miners (the last 6-hour average) was 511 MH/s. This is the combined hashrate of 5 AMD 6-series cards, one 3090 and one 2060S. With T-Rex and TRM I would expect a maximum of about 470 MH/s (usually it was much lower, 440-450 MH/s), so 511 MH/s is close to a 10% improvement across all the cards.
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
There is a new CUDA version, 11.6, with the latest Nvidia driver. Have you taken a look at it?

Should work with this setting set in nvidiaProfileInspector (Windows):
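(The screenshot is missing here. The setting being referred to is presumably nvidiaProfileInspector's "CUDA - Force P2 State" option toggled to Off, which keeps the driver from dropping memory clocks to the P2 state during compute loads; that is an inference from context, not a confirmed statement from the dev.)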



I can confirm this. TBM can't handle a Vega card. I tried playing with different timings and settings to make it work, but the best I get is around 70% of the max speed I get with TRM or Phoenix. I sold the Vega card though, so I can't test it anymore.

Might need a kernel rewrite. The issue has been registered:

https://github.com/sp-hash/TeamBlackMiner/issues/235
newbie
Activity: 30
Merit: 0
I'm using CUDA 11.4 right now, since 11.5 underperforms on Linux. But I'm switching back and forth between drivers 470.94 and 495.46 while running the 11.4 CUDA runtime to see if there are any driver benefits.

How is your test going? What are the results?
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
v1.45

1. Fixed CUDA stats in mixed-card rigs and missing AMD stats on Linux.
2. Fixed a bug in the LHR detector; sometimes it didn't detect correctly.
3. Fixed the AMD-rig fail-to-start issue on Linux introduced in 1.44.
4. Improved the default settings for LHR mode.

https://github.com/sp-hash/TeamBlackMiner/releases

TeamBlackMiner_1_45_cuda_11_4.7z
https://www.virustotal.com/gui/file/4b4e7f5d3f94855a20c3f8dc093cb329522213abf5518cac5b6a02f903d5d03a?nocache=1

TeamBlackMiner_1_45_cuda_11_5.7z
https://www.virustotal.com/gui/file/081b774689766644ff54cb08ffc914162f522c4c955b57f66f9570a07de054fb?nocache=1

TeamBlackMiner_1_45_Ubuntu_18_04_Cuda_11_4.tar.xz
https://www.virustotal.com/gui/file/11a14093aadcf09563fe4398f6b8f073e0b9882721968d66473349faaa30b9e4?nocache=1

TeamBlackMiner_1_45_Ubuntu_18_04_Cuda_11_5.tar.xz
https://www.virustotal.com/gui/file/5f6f2392d630a889396161a5cf7eb2c8a628c392b3413947a1256fcbaf02b659?nocache=1
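A quick way to check a downloaded archive against these reports: the hex string in each VirusTotal /gui/file/ URL is the file's SHA-256, so you can compare it with a local hash. Sketch for the first archive:

    sha256sum TeamBlackMiner_1_45_cuda_11_4.7z
    # should print: 4b4e7f5d3f94855a20c3f8dc093cb329522213abf5518cac5b6a02f903d5d03a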
jr. member
Activity: 60
Merit: 2

I use amdmemtweak under Windows, TBM 1.43 CUDA 11.4. It works with TRM, Gminer and lolMiner but not with TBM! In TRM/Gminer/lolMiner the Vega 64 gives me a similar result: 50.5 MH/s!
thanks
alex



I can confirm this. TBM can't handle a Vega card. I tried playing with different timings and settings to make it work, but the best I get is around 70% of the max speed I get with TRM or Phoenix. I sold the Vega card though, so I can't test it anymore.
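For what it's worth, a sketch of the kind of timing tweak being discussed, assuming the tool is Eliovp's AMD Memory Tweak with a binary named amdmemtweak. The flag spellings and the example value are from memory and untested on Vega, so verify against its --help before trying:

    amdmemtweak --current          # print the current memory timings (flag spelling assumed)
    amdmemtweak --i 0 --ref 30     # hypothetical: set the refresh value on GPU 0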
newbie
Activity: 21
Merit: 0
There is a new CUDA version, 11.6, with the latest Nvidia driver. Have you taken a look at it?

Also, with recent Nvidia drivers, cards like the 3080 and 3090 can hit a bug where the card gets stuck in the P3 state after a while and only produces about half of the expected hashrate. Can you confirm, and maybe create a workaround for this issue?
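A possible stopgap on Linux, assuming the stuck state can be overridden from userspace: pin the clocks with nvidia-smi (the same mechanism as the voltage trick earlier in the thread) and see whether the hashrate recovers. Whether this actually clears the bug is speculative; the commands themselves are standard nvidia-smi options:

    sudo nvidia-smi -i 0 -lgc 1440,1440   # pin GPU 0's core clock so the driver can't downclock it
    sudo nvidia-smi -i 0 -rgc             # release the pin afterwards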
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
00:00:02 [2022-01-24 01:00:36.111] eth_submitHashrate: Pool reponded with error
00:00:02 [2022-01-24 01:00:36.111] Error(-32602): Invalid params
Which pool is this?
Hiveon

Do you get any other error messages? Mining should continue even if eth_submitHashrate fails. Is this the CUDA 11.4 or 11.5 build?

This error code indicates a missing Ethereum address, so perhaps something is wrong in your setup or wallet address.

(Invalid parameters: must provide an Ethereum address. Code -32602)
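To illustrate what's happening underneath (a sketch; the id values and the hashrate are made up, but the method name and the -32602 code are standard JSON-RPC as used by ethproxy-style pools): the miner periodically reports its hashrate, and the pool rejects the call when it can't tie it to a valid address:

    {"id":9,"jsonrpc":"2.0","method":"eth_submitHashrate","params":["0x1dcd6500","0x5f3c..."]}
    {"id":9,"jsonrpc":"2.0","error":{"code":-32602,"message":"Invalid params"}}

(0x1dcd6500 is 500,000,000 H/s, i.e. 500 MH/s.)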
newbie
Activity: 30
Merit: 0
00:00:02 [2022-01-24 01:00:36.111] eth_submitHashrate: Pool reponded with error
00:00:02 [2022-01-24 01:00:36.111] Error(-32602): Invalid params

Which pool is this?

Hiveon
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
00:00:02 [2022-01-24 01:00:36.111] eth_submitHashrate: Pool reponded with error
00:00:02 [2022-01-24 01:00:36.111] Error(-32602): Invalid params

Which pool is this?
newbie
Activity: 30
Merit: 0
v1.44

1. Speedup for the RTX 3xxx series, non-LHR/LHR +1-2%.
2. Rewrote the DAG generator to work better on high OC (3060 Ti / 3070).
3. Fixed empty CUDA stats when running with a selection of the GPUs.
4. Fixed a bug in the DAG validation code for CUDA.


I upgraded to 1.44 and now I'm unable to start mining (all-AMD rig + Ubuntu):


00:00:02 [2022-01-24 01:00:36.111] eth_submitHashrate: Pool reponded with error
00:00:02 [2022-01-24 01:00:36.111] Error(-32602): Invalid params

1.42 works, so I had to downgrade.
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
2. I tried all of the miner's kernels; the kernel is not set correctly automatically in Hive OS (1.42 picks kernel 1, 1.43 picks kernel 3). You must set it manually: 6 to 8 if you use 1.42, and 12 to 15 if you use 1.43, on 20xx/30xx series cards.

The program will autotune to find the best kernel. In 1.44, kernel 1 or 12 seems to be best on RTX 3xxx cards.
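If the autotune keeps picking a kernel you don't want, pinning it manually is what the Hive OS users above are doing. A sketch, assuming the option is spelled --kernel as in the TBM readme (verify with TBMiner --help):

    ./TBMiner --algo ethash --kernel 12   # hypothetical: pin kernel 12 for an RTX 3xxx card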
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
v1.44

1. Speedup for the RTX 3xxx series, non-LHR/LHR +1-2%.
2. Rewrote the DAG generator to work better on high OC (3060 Ti / 3070).
3. Fixed empty CUDA stats when running with a selection of the GPUs.
4. Fixed a bug in the DAG validation code for CUDA.

https://github.com/sp-hash/TeamBlackMiner/releases


TeamBlackMiner_1_44_cuda_11_5.7z
https://www.virustotal.com/gui/file/0dd87f656176f9e9e8e7930841e5c08eb32fd5ddc692e2989690c88861b3bce7?nocache=1

TeamBlackMiner_1_44_cuda_11_4.7z
https://www.virustotal.com/gui/file/083c0f128c80aa8bf480605ce43a38b000126ab923851d01b9456a3838c4869d?nocache=1

TeamBlackMiner_1_44_Ubuntu_18_04_Cuda_11_5.tar.xz
https://www.virustotal.com/gui/file/600c4cda61cf0a6ea06f87d958c3b1499b6d5bd27af9d56a3625ed256ed3aead?nocache=1

TeamBlackMiner_1_44_Ubuntu_18_04_Cuda_11_4.tar.xz
https://www.virustotal.com/gui/file/600c4cda61cf0a6ea06f87d958c3b1499b6d5bd27af9d56a3625ed256ed3aead?nocache=1
jr. member
Activity: 42
Merit: 2

The Vega needs a program called memtweak. It's built into HiveOS. I don't know the optimal parameters, because I have never owned a Vega.


I use amdmemtweak under Windows, TBM 1.43 CUDA 11.4. It works with TRM, Gminer and lolMiner but not with TBM! In TRM/Gminer/lolMiner the Vega 64 gives me a similar result: 50.5 MH/s!
thanks
alex

sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
The next build is good. My 3070 LHR can run around 200-300 MHz higher memclock without DAG verification errors.


Hi sp_!
What about the Vega 64? This is the last problematic card with your miner, the others are OK :-)
Getting 33.5 MH/s on Ethereum vs 50.5 with TRM :-(

The Vega needs a program called memtweak. It's built into HiveOS. I don't know the optimal parameters, because I have never owned a Vega.

Bugs that I found: on a rig with 13 video cards (2x3090, 7x3080, 4x3070, no LHR) I could not achieve stable operation; constant DAG generation errors and a lot of invalid shares. A 12x3070 non-LHR rig also gives errors during DAG generation, even with --dagintensity 1 always on.

Hopefully v1.44 will solve this bug. But it might not work at high intensities; to make it work at high intensities you need to lower the memclock.
newbie
Activity: 40
Merit: 0
What is your uptime? With the tweak?

around 8 hrs
https://ibb.co/RTzqKwy

Sorry, that's the last screenshot I have; I didn't know I would need it. I've already switched to Gminer for testing just now. For my LHR card a core clock of 1485 is better in terms of efficiency than 1440. It can go up to 1600 and add more hash, but it also consumes more power and drops efficiency. IMO, invalids in other miners (not TBM) are related to the memclock being too high, or to the core needing more clock (more power) to stabilize the memclock. If they still appear, try the P0 state with the same process: add core clock while starting the memclock 400 lower than your P2 value, then raise it step by step. If they still appear, you should lower the memclock.

If we drop the core from 1440 to 1380, hash and power both drop. So yes, the core needs to support the memory.
jr. member
Activity: 42
Merit: 2
Hi sp_!
What about the Vega 64? This is the last problematic card with your miner, the others are OK :-)
Getting 33.5 MH/s on Ethereum vs 50.5 with TRM :-(

Please help me!
Alex
jr. member
Activity: 139
Merit: 3
I'm using CUDA 11.4 right now, since 11.5 underperforms on Linux. But I'm switching back and forth between drivers 470.94 and 495.46 while running the 11.4 CUDA runtime to see if there are any driver benefits.

And here is my 6900XT on Windows with TBM 1.42, in case you're running any AMD 6-series cards. (This card usually underperforms the 6800/XT by about 1-2 MH/s, so I'm seriously contemplating switching my main rig back to Windows and just using something like TeamViewer as a remote interface, but I like the efficiency I get with Linux...)

jr. member
Activity: 139
Merit: 3
Not bad. What CUDA version and drivers do you use? 11.5? Or 11.4?

I'm using CUDA 11.4 right now, since 11.5 underperforms on Linux. But I'm switching back and forth between drivers 470.94 and 495.46 while running the 11.4 CUDA runtime to see if there are any driver benefits.
newbie
Activity: 30
Merit: 0
I just realized that the GPU and memory stats are missing from the display (happens with HiveOS).
Here is my HiveOS GUI for the reference values:

https://i.imgur.com/b1bjcil.png

Not bad. What CUDA version and drivers do you use? 11.5? Or 11.4?
jr. member
Activity: 139
Merit: 3
Here is my current HiveOS status. I have a Windows machine running an AMD 6900XT which I'm using to try to optimize the AMD cards in the Linux rig.

https://i.imgur.com/Y0wuSej.png

I just realized that the GPU and memory stats are missing from the display (happens with HiveOS).
Here is my HiveOS GUI for the reference values:

https://i.imgur.com/b1bjcil.png
