
Topic: [ANN] TeamRedMiner v0.10.10 - Ironfish/Kaspa/ZIL/Kawpow/Etchash and More

newbie
Activity: 22
Merit: 0
Any idea why no status info or hashrate display shows up for me with 0.4.0?

It just says "Team Red Miner version 0.4.0" with a blinking cursor behind it.
But mining is working normally in the background, as I can see from my power consumption and the pool monitor.

I was using 0.3.7 before that, which worked fine.
full member
Activity: 163
Merit: 100

Was having the same problem with my RX Vega. Adding this to the .bat file fixed it:
 
set GPU_MAX_WORKGROUP_SIZE=256
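If you prefer launching from a script instead of a .bat, here is a minimal sketch of the same fix in Python, forcing GPU_MAX_WORKGROUP_SIZE to 256 in the miner's environment. The executable name, pool URL and flags below are placeholders for illustration, not exact TRM options:

Code:
import os
import subprocess

# Copy the current environment and force the workgroup-size variable to 256.
# To unset it entirely instead: env.pop("GPU_MAX_WORKGROUP_SIZE", None)
env = os.environ.copy()
env["GPU_MAX_WORKGROUP_SIZE"] = "256"

# Launch the miner with the cleaned-up environment (all flags illustrative).
subprocess.run(
    ["teamredminer.exe", "-a", "lyra2rev3",
     "-o", "stratum+tcp://your.pool:port", "-u", "YOUR_WALLET", "-p", "x"],
    env=env,
)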
full member
Activity: 729
Merit: 114
But with this setting the hashrate is only 2000 h/s, not 3000 h/s.

3000 h/s was never the correct hashrate for Vegas.
jr. member
Activity: 41
Merit: 1
Setting the environment variable worked for me too.
newbie
Activity: 50
Merit: 0
I think the dead GPU problem is from the .NET Core install that was required for the GGM Grin miner.
https://github.com/mozkomor/GrinGoldMiner
https://dotnet.microsoft.com/download
I have a Vega rig that I previously used to mine Grin, and it has problems with some miners like Team Red (but not lyra) and WildRig.

BOOM! THIS IS IT! We should probably have hardcoded all our desired local worksizes; then this wouldn't have happened.

FOR THE TIME BEING FOLKS, SEND DONATIONS TO HAMMUH AND MAKE SURE YOUR ENVIRONMENT VARIABLE GPU_MAX_WORKGROUP_SIZE IS EITHER SET TO 256 OR NOT SET AT ALL!

I'll start working on making sure that isn't necessary in an upcoming release.

It worked! Thank you for the quick reply.
member
Activity: 658
Merit: 86
I think the dead GPU problem is from the .NET Core install that was required for the GGM Grin miner.
https://github.com/mozkomor/GrinGoldMiner
https://dotnet.microsoft.com/download
I have a Vega rig that I previously used to mine Grin, and it has problems with some miners like Team Red (but not lyra) and WildRig.

BOOM! THIS IS IT! We should probably have hardcoded all our desired local worksizes; then this wouldn't have happened.

FOR THE TIME BEING FOLKS, SEND DONATIONS TO HAMMUH AND MAKE SURE YOUR ENVIRONMENT VARIABLE GPU_MAX_WORKGROUP_SIZE IS EITHER SET TO 256 OR NOT SET AT ALL!

I'll start working on making sure that isn't necessary in an upcoming release.
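For anyone wondering why that variable matters: as I understand it, GPU_MAX_WORKGROUP_SIZE caps the maximum work-group size the AMD OpenCL runtime reports, so a miner that derives its kernel launch sizes from that query silently gets the capped value instead of the 256 its kernels were tuned for. A rough pyopencl sketch of that mechanism, illustrative only and not TRM's actual code:

Code:
# Illustrative only - not TRM's code. Shows how an inherited
# GPU_MAX_WORKGROUP_SIZE can change what a miner derives for its kernels.
import pyopencl as cl

platform = cl.get_platforms()[0]
device = platform.get_devices(device_type=cl.device_type.GPU)[0]

# The AMD runtime caps this query when GPU_MAX_WORKGROUP_SIZE is set, so a
# low leftover value from another install shows up here instead of 256.
queried_max = device.max_work_group_size

derived_local_size = queried_max    # what you get if you trust the query
hardcoded_local_size = 256          # the value the kernels were tuned for

print("driver reports max work-group size:", queried_max)
print("derived:", derived_local_size, "intended:", hardcoded_local_size)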
full member
Activity: 729
Merit: 114
I think the dead GPU problem is from the .NET Core install that was required for the GGM Grin miner.
https://github.com/mozkomor/GrinGoldMiner
https://dotnet.microsoft.com/download
I have a Vega rig that I previously used to mine Grin, and it has problems with some miners like Team Red (but not lyra) and WildRig.

Thanks to this and kerney, we figured out the problem. I'll let kerney provide an update.
jr. member
Activity: 41
Merit: 1
I think the dead GPU problem is from the .NET Core install that was required for the GGM Grin miner.
https://github.com/mozkomor/GrinGoldMiner
https://dotnet.microsoft.com/download
I have a Vega rig that I previously used to mine Grin, and it has problems with some miners like Team Red (but not lyra) and WildRig.
member
Activity: 658
Merit: 86
cnr won't start...lyra2rev3 works though.



win10, driver ver. 18.6.1 // 1408-905 | 1100-900
What could be the problem?

Looking into a similar issue right now; a few Vega rigs seem to be having init issues. Just to collect some more data: what CPU do you have in that rig? Also, I assume you have enough swap enabled?

I am having the exact same error. This has been my favorite miner for CN, but I can't get it running. When I do 12+12, the average hashrate on my flashed Vega 56s shows 1450 h/s, while the current hashrate shows 3000 h/s.

I am not sure what the problem is. SRBMiner works fine, but I prefer this one.

Please give us the solution to this ASAP.

P.S. I tried it on both 19.x and 18.6 drivers... the same thing happens.

Working on it. It's always messy when we can't reproduce the issue ourselves; heavyarms1912 is testing build after build right now to zero in on it.
member
Activity: 340
Merit: 29
Yep, absolutely. Now also confirmed in HWINFO64.
GPU Clock = 1360-ish (vs 1408 set, so 50-ish droop)
GPU Memory Clock = 1100.0 MHz (solid)
GPU SOC Clock = 1199.0 MHz (solid)

EDIT: same across all 6 cards.

EDIT 2: I've now been playing with the lower mem pstates and noticed that whatever clocks I enter there are being ignored and the cards are using presets instead. Very weird. SOC then sets itself to different freqs as well depending on which mem p_states are active...

Ok, I see what happened...  Looks like your PPT has an edit changing p7 SOC to 1200 from 1107. 

Look for "C0,D4,01,00,07" (only first occurrence) and try changing it to "6C,B0,01,00,07".

Still not sure why that would require 1 V. When I run 1200 SOC, I generally see voltage requirements around 900 mV +/- 20 mV.
So the 1200 is hardcoded in the reg, yeah. I can change it down to 1107; it'll spare me further headaches.

Now why are the mem p_states hardcoded too?
Whatever clocks I enter there are ignored and the cards use presets instead.
Currently I run in mem_p2 and it forces an 800 MHz clock. That gives me 1800 h/s per card (power draw for 6 at the wall is 1120 W) and is actually more efficient than running 2000 h/s at a total power draw of 1400 W!

That drop from 1200 to 1107 SOC should save you at least 50 mV, in my experience.

Drivers don't allow changing mem states p0-p2 at runtime. You should be able to edit them in the PPT if you want, but I've never bothered. In the past, you could not edit core p0-p5 either (the whole reason everyone uses PPTs), but this was changed with 19.x drivers. Unfortunately, they didn't make the same change for mem states (at least in early versions of 19.x - haven't tested the latest versions).
I can confirm that hardcoding back down to 1107 solved everything.
I'm now running 1408/1100 @ 850 mV, SOC clock @ 1107 MHz. 12 kh/s, 1200 W at the wall.
Of course this means I will not push the mem clocks higher, but tbh it was never really worth it, especially as it completely buggered up the power states/delivery.

Thanks WillJ7 and pbfarmer, you've been legends, as usual. If I only had merit left to spare I'd dish it all out to you!

Awesome! Glad you got it worked out. A couple of last tweaks for you...

1. You can go up to 1107 on mem - though it obviously won't make much of a difference.

2. I tend to see 1400 effective possible with 850 mV. With a 1408 setting, you're probably running quite a bit lower on at least some of your GPUs due to acg - my 64 settings range from 1425-1550 to get to 1400. Just keep bumping your clock setting until you see close to 1400 in HWiNFO.

3. If you really want to shave off the last bit of excess power, you might be able to get voltages on some cards down as low as 825 mV. Every card is different though, so like the core clock setting, you'll have to tune each individually. And it's an opaque relationship between voltage and core clock - as you drop mV, the core setting will need to be raised to maintain your target clock. It's a bit of an art form finding the balance.
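For anyone curious what those PPT bytes quoted above actually encode: assuming each clock entry is a little-endian DWORD in 10 kHz units followed by the p-state index, the quoted edit (C0,D4,01,00,07 -> 6C,B0,01,00,07) is exactly p7 SOC going from 1200 MHz back to 1107 MHz. A quick Python check under that assumption:

Code:
import struct

def clock_mhz(entry: bytes) -> float:
    # First 4 bytes: little-endian uint32, clock stored in 10 kHz units (assumption).
    return struct.unpack("<I", entry[:4])[0] / 100.0

old = bytes([0xC0, 0xD4, 0x01, 0x00, 0x07])  # value found in the edited PPT
new = bytes([0x6C, 0xB0, 0x01, 0x00, 0x07])  # value being restored

print(clock_mhz(old))  # 1200.0 -> the edited p7 SOC clock
print(clock_mhz(new))  # 1107.0 -> the stock p7 SOC clock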
newbie
Activity: 21
Merit: 0
cnr won't start...lyra2rev3 works though.

https://i.imgur.com/H8oLL3i.png

win10, driver ver. 18.6.1 // 1408-905 | 1100-900
What could be the problem?

Looking into a similar issue right now; a few Vega rigs seem to be having init issues. Just to collect some more data: what CPU do you have in that rig? Also, I assume you have enough swap enabled?

I am having the exact same error. This has been my favorite miner for CN, but I can't get it running. When I do 12+12, the average hashrate on my flashed Vega 56s shows 1450 h/s, while the current hashrate shows 3000 h/s.

I am not sure what the problem is. SRBMiner works fine, but I prefer this one.

Please give us the solution to this ASAP.

P.S. I tried it on both 19.x and 18.6 drivers... the same thing happens.
newbie
Activity: 50
Merit: 0
cnr won't start...lyra2rev3 works though.

https://i.imgur.com/H8oLL3i.png

win10, driver ver. 18.6.1 // 1408-905 | 1100-900
What could be the problem?

Looking into a similar issue right now; a few Vega rigs seem to be having init issues. Just to collect some more data: what CPU do you have in that rig? Also, I assume you have enough swap enabled?

I have an Intel Celeron G3930 @ 2.90 GHz. I'm not sure what swap is, but if it means virtual memory, then yes - I set virtual memory to 60000 MB.
SRBMiner cnr works with the same settings.
member
Activity: 658
Merit: 86
cnr won't start...lyra2rev3 works though.



win10, driver ver. 18.6.1 // 1408-905 | 1100-900
What could be the problem?

Looking into a similar issue right now; a few Vega rigs seem to be having init issues. Just to collect some more data: what CPU do you have in that rig? Also, I assume you have enough swap enabled?
member
Activity: 658
Merit: 86
Any chance of getting higher numbers than 16 for cn_config? The Vega VII only gets 2.5 kh/s with TeamRed; I get 2.9 kh/s with SRBMiner for cnr.

Hi alu! Oh, for sure. I wrote about it earlier today: we have done zero optimization for the VII. We even debated not allowing the Radeon VII on CN at all, just because it would look bad and be far from the quality standard we're striving for. However, we left it in; I believe it could still be interesting to choose TRM from an efficiency perspective in some cases. Other miners definitely have higher performance for now, though.

That said, I have gotten ~2750 h/s with TRM on my VII using 16+16 or 16-16; I can't remember which was the better one. We have done some initial tests, and we know we'll provide solid hashrate and efficiency on the VII as soon as we've had the proper time for some more R&D.
hero member
Activity: 1274
Merit: 556
Yep, absolutely. Now also confirmed in HWINFO64.
GPU Clock = 1360-ish (vs 1408 set, so 50-ish droop)
GPU Memory Clock = 1100.0 MHz (solid)
GPU SOC Clock = 1199.0 MHz (solid)

EDIT: same across all 6 cards.

EDIT 2: I've now been playing with the lower mem pstates and noticed that whatever clocks I enter there are being ignored and the cards are using presets instead. Very weird. SOC then sets itself to different freqs as well depending on which mem p_states are active...

Ok, I see what happened...  Looks like your PPT has an edit changing p7 SOC to 1200 from 1107.  

Look for "C0,D4,01,00,07" (only first occurrence) and try changing it to "6C,B0,01,00,07".

Still not sure why that would require 1 V. When I run 1200 SOC, I generally see voltage requirements around 900 mV +/- 20 mV.
So the 1200 is hardcoded in the reg, yeah. I can change it down to 1107; it'll spare me further headaches.

Now why are the mem p_states hardcoded too?
Whatever clocks I enter there are ignored and the cards use presets instead.
Currently I run in mem_p2 and it forces an 800 MHz clock. That gives me 1800 h/s per card (power draw for 6 at the wall is 1120 W) and is actually more efficient than running 2000 h/s at a total power draw of 1400 W!

That drop from 1200 to 1107 SOC should save you at least 50 mV, in my experience.

Drivers don't allow changing mem states p0-p2 at runtime. You should be able to edit them in the PPT if you want, but I've never bothered. In the past, you could not edit core p0-p5 either (the whole reason everyone uses PPTs), but this was changed with 19.x drivers. Unfortunately, they didn't make the same change for mem states (at least in early versions of 19.x - haven't tested the latest versions).
I can confirm that hardcoding back down to 1107 solved everything.
I'm now running 1408/1100 @ 850 mV, SOC clock @ 1107 MHz. 12 kh/s, 1200 W at the wall.
Of course this means I will not push the mem clocks higher, but tbh it was never really worth it, especially as it completely buggered up the power states/delivery.

Thanks WillJ7 and pbfarmer, you've been legends, as usual. If I only had merit left to spare I'd dish it all out to you!
legendary
Activity: 4256
Merit: 8551
'The right to privacy matters'
Oh, this looks like I have some work to do here.

I have been pointing the Vegas at BCI.

I have not mined this coin in a while, other than with my Threadripper.

I will need to check out how to get the Threadripper to mine this.

Will a Threadripper work with this software, or is it GPU-only?
member
Activity: 340
Merit: 29
Yep, absolutely. Now also confirmed in HWINFO64.
GPU Clock = 1360-ish (vs 1408 set, so 50-ish droop)
GPU Memory Clock = 1100.0 MHz (solid)
GPU SOC Clock = 1199.0 MHz (solid)

EDIT: same across all 6 cards.

EDIT 2: I've now been playing with the lower mem pstates and noticed that whatever clocks I enter there are being ignored and the cards are using presets instead. Very weird. SOC then sets itself to different freqs as well depending on which mem p_states are active...

Ok, I see what happened...  Looks like your PPT has an edit changing p7 SOC to 1200 from 1107. 

Look for "C0,D4,01,00,07" (only first occurrence) and try changing it to "6C,B0,01,00,07".

Still not sure why that would require 1 V. When I run 1200 SOC, I generally see voltage requirements around 900 mV +/- 20 mV.
So the 1200 is hardcoded in the reg, yeah. I can change it down to 1107; it'll spare me further headaches.

Now why are the mem p_states hardcoded too?
Whatever clocks I enter there are ignored and the cards use presets instead.
Currently I run in mem_p2 and it forces an 800 MHz clock. That gives me 1800 h/s per card (power draw for 6 at the wall is 1120 W) and is actually more efficient than running 2000 h/s at a total power draw of 1400 W!

That drop from 1200 to 1107 SOC should save you at least 50 mV, in my experience.

Drivers don't allow changing mem states p0-p2 at runtime. You should be able to edit them in the PPT if you want, but I've never bothered. In the past, you could not edit core p0-p5 either (the whole reason everyone uses PPTs), but this was changed with 19.x drivers. Unfortunately, they didn't make the same change for mem states (at least in early versions of 19.x - haven't tested the latest versions).
newbie
Activity: 50
Merit: 0
cnr won't start...lyra2rev3 works though.

https://i.imgur.com/H8oLL3i.png

win10, driver ver. 18.6.1 // 1408-905 | 1100-900
What could be the problem?
jr. member
Activity: 73
Merit: 2
Just an update.

Downloaded the latest version last night, mined lyra2rev3 overnight, and did notice a slight increase in hashing performance.

Switched over to CN/R just recently and got everything running within 10-15 minutes, no problems. I did not measure voltage or anything, but settings have been left mostly the same as with CN v8. I'm on Windows 10 1607 with the network update. I think most of the drivers are 19.x.x, but I might have a few on 18.x.x. Performance seems to be on par with what it was before, and now I'm just testing for stability and longevity.

Overall this is probably the smoothest fork update for my little farm! I'm liking the addition of color, so much easier to read things lol.

Another quick update: I just measured power from the wall on one of my 3x Vega 56 rigs and I'm reading slightly under 600 W total system power, so almost the same as with CN v8, maybe even a bit less. This system uses WattMan for my settings; the other systems have a soft power table mod and use OverdriveNTool. Looking very stable, but unfortunately difficulty is still mighty high.
hero member
Activity: 1274
Merit: 556
Yep, absolutely. Now also confirmed in HWINFO64.
GPU Clock = 1360-ish (vs 1408 set, so 50-ish droop)
GPU Memory Clock = 1100.0 MHz (solid)
GPU SOC Clock = 1199.0 MHz (solid)

EDIT: same across all 6 cards.

EDIT 2: I've now been playing with the lower mem pstates and noticed that whatever clocks I enter there are being ignored and the cards are using presets instead. Very weird. SOC then sets itself to different freqs as well depending on which mem p_states are active...

Ok, I see what happened...  Looks like your PPT has an edit changing p7 SOC to 1200 from 1107.  

Look for "C0,D4,01,00,07" (only first occurrence) and try changing it to "6C,B0,01,00,07".

Still not sure why that would require 1 V. When I run 1200 SOC, I generally see voltage requirements around 900 mV +/- 20 mV.
So the 1200 is hardcoded in the reg, yeah. I can change it down to 1107; it'll spare me further headaches.

Now why are the mem p_states hardcoded too?
Whatever clocks I enter there are ignored and the cards use presets instead.
Currently I run in mem_p2 and it forces an 800 MHz clock. That gives me 1800 h/s per card (power draw for 6 at the wall is 1120 W) and is actually more efficient than running 2000 h/s at a total power draw of 1400 W!
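For reference, the efficiency argument in those numbers works out to roughly a 12% gain in hashes per watt; a quick back-of-the-envelope calc using the figures quoted above:

Code:
# Hashes per watt for the 6-card rig, using the figures quoted above.
low_mem  = 6 * 1800 / 1120   # mem_p2 @ 800 MHz: ~9.6 h/s per watt
high_mem = 6 * 2000 / 1400   # full mem clock:   ~8.6 h/s per watt
print(round(low_mem, 2), round(high_mem, 2), round(low_mem / high_mem - 1, 3))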