Author

Topic: Claymore's Dual Ethereum AMD+NVIDIA GPU Miner v15.0 (Windows/Linux) - page 822. (Read 6590718 times)

newbie
Activity: 19
Merit: 0
Hello everyone.

I'm new to this forum and I'm here to describe a big problem I have with Claymore.
I can't find a solution and nobody has been able to help me.

I mine with six Sapphire Nitro RX 480 4GB cards on Windows 10.

https://cdn.discordapp.com/attachments/317646633104048128/323717897786490892/claymore_s.jpg

As you can see in this picture, I get good results, BUT the fan reading is not shown for all GPUs.
I only ever see the fan for GPU0.
I don't know why; it used to show the fans for all GPUs.

And I use MSI Afterburner to overclock my cards, but the OC only applies to GPU0 and not the other cards.
I think it's probably related to only seeing the GPU0 fan.

Do you have an explanation for my problem?
I tested Claymore 9.4 and 9.5, but the problem exists in every Claymore version.

Thank you.

Uff, still the same question for the millionth time. Disable CrossFire, don't use RDP...

Thx, I was going to ask the same question. CrossFire is disabled. As for RDP, this must be another question asked a million times, but what is the problem with it?

Thx

Carlos
newbie
Activity: 19
Merit: 0
Hi Everybody:

Like many posters here, I'll start by saying that I'm new here and new to mining. I started mining on Friday; so far so good, and I have learned a lot.

Right now I am mining with only 3 ASUS STRIX RX580 8GB OC cards. I have reduced the GPU clock by 5% and increased the memory clock to 2200 MHz, which took me from 24.5 MH/s to 27.4 MH/s, all in AMD's Wattman. I had problems with the ASUS Aura light effects; they seemed to interfere with hashrate stability, so they are now off and uninstalled.

I have read about some widely known power adjustments. As I understand it, the idea is to reduce voltage (and so power consumption) while increasing the memory clock (and so MH/s). I have also read about doing this with AMD's Wattman or MSI's Afterburner, and about BIOS flashing.

Knowing that many have been able to get 29 MH/s at 135 W from these cards, my questions so far are:

- Is there a need to flash a new BIOS to get the best performance? If so, how many MH/s could I gain, or how much power could I save?
- If I can get similar improvements with AMD's Wattman, which parameters should I use?

Thank you very much for your support.

Carlos


If you're still using Windows (which is about as stable as a house of cards during a hurricane), then you can easily undervolt using MSI AB or whatever. Also, only 27.4MH/s? Those can't be Samsung, are they?

LOL.... "...If you're still using Windows (which is about as stable as a house of cards during a hurricane)..." ~ Wolf0, 2017

thanks for brief grin... needed one.... after the BTC bloodbath last night

27.5 MH/s -- standard 1500 straps -- maybe Elpida memory

possibly SK Hynix... a lucky card, this

Use GPU-Z to find out the memory type

It seems I am lucky; these are Samsung, I just checked.

What do I need to change? Which settings should I use?

Thx very much,


Damn! I thought a copy+paste hackjob from 1750 -> 2000 would do better than that on Samsung K4G80325FB. I figured at least 28.5 MH/s (which, for Samsung, is still not good) but... wow.

Hi:

So far I have reduced the voltage by 96 mV, increased the GPU clock to 1400 MHz and the memory clock to 2200 MHz. I keep reading about that 1750 => 2000 "thing"; I understand it can be done by flashing the BIOS. So if I understand correctly, there is nothing else I can do to improve my MH/s except flash the BIOS, and 27.5 MH/s is the most I can get with Afterburner?

Thx very much,

Carlos
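For reference, Claymore's miner itself also exposes clock/voltage flags for AMD cards as an alternative to Afterburner/Wattman (flags per the miner's readme; the numbers below are purely illustrative placeholders, not tuned recommendations, and the wallet is a placeholder):

```shell
# Illustrative launch line only - values are NOT recommendations.
# -cclock: core clock (MHz), -mclock: memory clock (MHz),
# -cvddc: core voltage (mV), -powlim: power limit (%)
./EthDcrMiner64.exe -epool eu1.ethermine.org:4444 -ewal <your_wallet> -epsw x \
  -cclock 1150 -mclock 2200 -cvddc 850 -powlim -10
```

Whether driver-level flags like -cvddc take effect depends on card, BIOS, and driver version, so verify the applied values with a monitoring tool after launch.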
hero member
Activity: 2548
Merit: 626
I already asked about 20 pages ago, but here it goes again, because I did not get an answer:
can the pool job timeout be changed? It's 15 minutes now; can it be set lower?
newbie
Activity: 2
Merit: 0
Hello,

newbie here. I am facing the following problem with the installation.

OpenCL does not seem to be working. I am using an MSI GTX 980 Ti. Can anyone come to my rescue???

The message:
SYSTEM32\OpenCL.dll is either not designed to run on Windows or it contains an error. Try installing the program again using the original installation media or contact your system administrator or the software vendor for support. Error status 0xc000012f.

newbie
Activity: 12
Merit: 0
Is anyone mining with both AMD and NVIDIA cards in a single rig? I'm having trouble getting Afterburner to work with the AMD cards when I mix them. Anyone got a solution to this? Cheesy
full member
Activity: 224
Merit: 102

How do you check for memory errors under Linux? I have asked this question a few times with no answer Sad

Quite serious ones will appear in the kernel log - so you'd just check dmesg. I'm pretty sure there's a register with the count somewhere in the space, if I directly access the GPU (not bothering with the driver), but there's already SO MUCH awesome shit I can do with this access that I have yet to implement (and I've implemented plenty!) so it's not that high on my TODO list...
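As a rough starting point (a sketch, not an exact error counter - the exact message strings vary between the radeon and amdgpu drivers and across kernel versions), likely fault lines can be filtered out of the kernel log like this:

```shell
# Filter likely AMD GPU fault lines out of kernel-log text on stdin.
# Point it at `dmesg` output (may require root) or a saved log file.
gpu_faults() {
  grep -iE 'amdgpu|radeon' | grep -iE 'fault|error'
}

# typical use:
#   dmesg | gpu_faults
```

This only catches faults severe enough for the driver to log; silent data corruption from too-tight memory timings will not appear here.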

So that's it. There is no way for non-terminal gurus to easily check for memory errors. I have been a Mac user for about 25 years, and I also have some FreeBSD & Linux servers. I like Unixes and hate Windows, but... for example, this little HWiNFO64 is really handy and easy to use. I would like to try building a Linux miner system, but I don't want to ruin my cards by running them on the edge of millions of hardware errors without knowing it...
And the workaround of first running them on Windows to find each card's limits, then flashing those values into their BIOSes, and only then running them under Linux seems a bit uncomfortable, don't you think? Smiley

ACTUALLY - even if you're an expert in bash, you still can't see the memory errors - all of them. You would have to write code: access the GPU directly, telling the driver to go fuck itself, and read them out.

About your worry with memory errors - they have zero chance of harming the GPU. A memory error basically means that the delay waited before a given memory command was simply not long enough, and as such you (most likely) got garbage back, assuming it was a read command. It's not going to hurt a thing, besides possibly your profits.

There is one other option, if you're a dev with a shitload of time... (or you just bribe me for a copy of mine) - write a tool to directly access the VRM controller(s) on the GPU and command them directly. This is fun & rewarding, because you find out that those nice Windows tools like MSI AB and Sapphire Trixx hide SO much power from you, and are like safety scissors when you need a scalpel.

Wolf, maybe this sounds strange, but - are you 100% sure that running cards 24/7 for a year with millions of memory errors will have NO impact in terms of degradation or damage to those cards? I have seen a lot of cards, for example RX 480 8GB with Samsung memory, which in normal conditions can easily do 30+ MH/s, but which were barely hitting 27 MH/s! Something must have degraded them, and memory errors are my idea no. 1.

I am absolutely certain. The reason I'm paid so well for custom performance timings is because not only do I not copy+paste timing sets, I also do not blindly change values - I understand how to use & interact with GDDR5, and how it functions, to an extent. I don't mean in code, storing & retrieving shit, but more on the level of how to operate it, and how the GPU's memory controller will drive it, the various delays required between different commands issued, and whatnot. Attempting to do something too quickly (like back-to-back ACTIVE commands to rows in different banks without waiting long enough) will simply end with incorrect data if you fuck up just a little, or a memcrash (identifiable by the GPU's core clock being normal, but the memclk dropping to 300 and it not hashing) if you fuck up a lot. You ain't gonna damage it, short of voltage modifications.

Now - I have seen this case of Samsung just not being... well, Samsung, in some cases. In all of them, the issue was heat. Now, I know what you're thinking. Something along the lines of the core temp being more than fine, right? This is due to the cooling being what I call a "show cooler." XFX RS XXX (470 or 480, 4G or 8G), as well as MSI's Armor coolers are ones I have personally bought and confirmed this behavior. They ensure the GPU's ASIC is connected *really* well to the heatsink - and that's about all they do. Most gamers/overclockers/miners don't even know there are other temp sensors, let alone check them... this means that while everything on the PCB besides the core is left to cook, most notably the VRM controller(s) and the GDDR5, all appears well! This has been the cause of my Samsung under-performance issues without fail.

Well, a little example. I had 2 identical cards, Nitro+ RX 480 4GB Samsung. Same settings, same core, mem, voltages, etc... simply the exact same confirmed settings. But one card was running 13-15 degC hotter than the other. Yes, they were far enough apart to make cooling, airflow, etc. irrelevant. And guess what? The hotter card, even with the same mem straps, was hashing a lot lower. And why was that? Both cards had the same factory cooling solution and backplate (not sure about the Nitro+ VRM heatsink, etc.). My guess: one card was simply "a bit screwed" or something; my first idea is that the previous owner ran it with mem errors and the memory has since been more likely to produce further mem errors...
So what is the conclusion? Should I change the settings on all my cards to run faster (more MH/s) but with mem errors, or have all cards running slower but clean with zero mem errors? Where is the threshold at which mem errors start affecting accepted shares due to incorrect shares?
member
Activity: 87
Merit: 11
Does anyone know when the DAG will outgrow 3GB GPUs in Claymore? There are some reasonable prices for 3GB CUDA cards on the market now, and in times of GPU scarcity these are pretty tempting. Looking at GPU-Z right now, it shows 2953 MB dedicated memory on my RX 470s, and I don't know if that means we are getting close to the 3GB limit. I put together a sheet suggesting the DAG may not exceed 3GB until Nov 2018. Any thoughts on how much GPU memory is required beyond the DAG size? If anyone else is interested in my DAG size sheet, it's https://goo.gl/YHM7HY. I couldn't find such a table anywhere, so I created one myself. If you see any errors, please let me know.
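For anyone who wants to sanity-check such a sheet: the Ethash DAG grows by 8 MiB per epoch (one epoch = 30000 blocks) from a 1 GiB base. This bash sketch ignores the spec's final rounding of the size down to a prime number of 128-byte rows, so it reads a few MiB high at most:

```shell
# Approximate Ethash DAG size in MiB for a given epoch (epoch = block / 30000).
# Ignores the prime-rounding step of the real spec, so slightly overestimates.
dag_mib() {
  local epoch=$1
  echo $(( (1073741824 + 8388608 * epoch) / 1048576 ))
}

dag_mib 0     # epoch 0: 1024 MiB
dag_mib 256   # epoch 256: 3072 MiB -- the first epoch at 3 GiB
```

By this formula the 3 GiB mark falls at epoch 256 (block ~7,680,000); the miner also needs some working memory beyond the DAG itself, so 3GB cards drop out somewhat earlier.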
donator
Activity: 1610
Merit: 1325
Miners developer
Claymore - thanks for adding the new API to enable and disable individual GPU's. I've added support for that in Awesome Miner.
Would it also be possible to extend the API to include accepted / rejected shares per GPU? Thanks!

It will require some changes in the Stratum implementation (in all supported versions) because I will have to use several IDs in share-submit requests to identify the GPU in the server response. I would not like to change that code because it can break compatibility with some pools. I will do it, but I need to run a lot of tests to check compatibility with all pools. No ETA for now though.
donator
Activity: 1610
Merit: 1325
Miners developer
Claymore, I noticed the front page of the thread with the new release links no longer mentions the issues with Nvidia 9xx and new drivers on Windows 10.  Has that been resolved and Maxwell cards can mine at full hashrate with latest drivers in windows 10?

I removed it because the latest drivers solved the issues for Pascal cards. Well, maybe I should put those notes back, though I need to find them somewhere first...

How long does the devfee usually take?

Claymore claims that he takes 1%, but in fact he takes more. You can watch the mining status: in every devfee mining session he always gets the share. He is cheating on the 1%. I think he takes at least 3%.

In fact you don't understand the mining details. Some people thought like you and tried to verify the fee rate in my miners technically, for example:
https://bitcointalksearch.org/topic/edit-confirmed-fee-is-accurate-1681108
About number of shares: I explained it several times, then added to FAQ here:
https://bitcointalksearch.org/topic/claymores-zcashbtg-amd-gpu-miner-v126-windowslinux-1670733

Here it is again:
.....
Q: Why do I see more shares for devfee than in my own mining over the same time?
A: Most pools support variable diff: they change the "share target" some time after connection. For example, if you have a very powerful rig, after connecting you will send shares very often. It takes some CPU time to check your shares, so after a while the pool will send a higher share target and the miner will send fewer shares (but each will have more value). When the pool updates the share target you will see a "Pool sets new share target" line in the miner. This way the pool can adjust the number of shares the miner sends and balance its load.
So check the log or console text to see the current target for the main mining thread and for the devfee thread. For example:
DevFee: Pool sets new share target: 0x0083126e (diff: 500H) - this is for devfee mining connection
Pool sets new share target: 0x0024fa4f (diff: 1772H) - this is for main mining connection
As you can see, the share target for main mining is about 3.5 times higher, so for main mining the miner sends about 3x fewer shares (but each has ~3x more value) than for devfee mining.
.....

For ETH, most popular pools don't use vardiff, but if you see many devfee shares it means that you use such a pool.
So if you see that devfee mining gets more shares, check in the log what share target is used for devfee; it will be much lower than for main mining, so those shares are much cheaper.
Of course, you can forget about all these complex technical details and continue to think that I fool the entire community and take 3% or 5% or 7%...
I have written miners for a long time and a lot of people use them; such a thing would be discovered quickly because there are ways to check the real devfee mining rate.
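The ratio quoted in the FAQ above is easy to sanity-check; this shell sketch just reproduces it from the two logged pool difficulties (1772H main vs 500H devfee), since a share's value scales with the difficulty it was mined at:

```shell
# Pool difficulty is inversely proportional to the share target, so the
# relative value per share is just the ratio of the two diffs.
main_diff=1772
devfee_diff=500
# x10 to keep the arithmetic in integers
echo $(( main_diff * 10 / devfee_diff ))   # prints 35 -> ~3.5x per main share
```

So roughly 3.5x fewer main-mining shares carry the same total value as the devfee shares - the counts alone say nothing about the fee percentage.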
sr. member
Activity: 448
Merit: 250
I just wanted to mention that the Mega.nz link on the OP of this thread is infected with a few viruses (as of 24hrs ago). Can someone please look into this?
I would hate to see anyone get burned...
member
Activity: 101
Merit: 10
Dear,

I use ethos (linux), how can I set email in your config file?
-ewal
-eworker

legendary
Activity: 1540
Merit: 1003
How long does the devfee usually take? I just saw "DevFee: Stop mining and disconnect".

Is Claymore charging us anything, or is it the nanopool that I'm using? How much is the fee, and how often does this happen (if more than once)?

Otherwise I'm very satisfied with Claymore. I had an issue today where Windows Defender flagged it as a threat/virus and deleted the files, but I solved it by excluding the folder so the antivirus won't scan it.
member
Activity: 101
Merit: 10
My batch file:

setx GPU_FORCE_64BIT_PTR 0
setx GPU_MAX_HEAP_SIZE 100
setx GPU_USE_SYNC_OBJECTS 1
setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_SINGLE_ALLOC_PERCENT 100
EthDcrMiner64.exe -epool eu1.ethermine.org:4444 -ewal 0x9793F71eC2f913d01D8564f1F3E4bF73a02a2Cd1.123 -epsw x


but I get this error - where am I going wrong? It must be a simple mistake Huh



C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - POOL>setx GPU_FORCE_64BIT_PTR 0

SUCCESS: Specified value was saved.

C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - POOL>setx GPU_MAX_HEAP_SIZE 100

SUCCESS: Specified value was saved.

C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - POOL>setx GPU_USE_SYNC_OBJECTS 1

SUCCESS: Specified value was saved.

C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - POOL>setx GPU_MAX_ALLOC_PERCENT 100

SUCCESS: Specified value was saved.

C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - POOL>setx GPU_SINGLE_ALLOC_PERCENT 100

SUCCESS: Specified value was saved.

C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - POOL>EthDcrMiner64.exe -epool eu1.ethermine.org:4444 -ewal 0x9793F71eC2f913d01D8564f1F3E4bF73a02a2Cd1.home1 -epsw
'EthDcrMiner64.exe' is not recognized as an internal or external command,
operable program or batch file.

C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - POOL>pause
Press any key to continue . . .

Your Windows Defender is deleting some files from the Claymore folder.
member
Activity: 357
Merit: 26
Quick shoutout for anyone still running v9.3 on Win10 - Defender just decided it was a Trojan and deleted it on my rigs. v9.5 appears to be fine ...

Always smart to add a Windows Defender exception on the folder you keep your mining software in.  Miners regularly get falsely flagged as trojans because they get packaged in trojans.

Top tip there! Knew about the reasons behind it, but excluding the folder is an excellent shout. Thanks.
legendary
Activity: 2212
Merit: 1038
...
'EthDcrMiner64.exe' is not recognized as an internal or external command
...

There be a Microsloth error! Windows Defender probably deleted it for you, all that dangerous money could harm a little feller like you! In any case your miner executable is missing.
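Since Defender deletions keep coming up in this thread, a pre-flight existence check saves some head-scratching. This is a bash sketch of the idea (on Windows the equivalent is an `if not exist` test at the top of the batch file):

```shell
# Refuse to launch if the miner binary has been quietly quarantined.
check_miner() {
  if [ ! -f "$1" ]; then
    echo "missing: $1 - check your antivirus quarantine/exclusions" >&2
    return 1
  fi
}

# usage before launching:
#   check_miner ./EthDcrMiner64.exe || exit 1
```

A clear "missing file" message at startup is far easier to diagnose than the generic "not recognized as an internal or external command" error.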
newbie
Activity: 14
Merit: 0
Claymore,
there is a problem with EthMan 3.1 + dual miner 9.5:
If a rig goes down and then comes back up after some time, it doesn't recover in EthMan. On the screenshot you can see that the miner restarted and is working - I opened the web console and it has already been running for 2 minutes - but EthMan still shows "lost ****". Even after 10 minutes EthMan doesn't see the rig. So, as with EthMan 3.0, I need to restart it to see all rig stats again.

If you added cryptography and HTTPS, then maybe there is some trouble with SSL? Does the certificate need to be renewed if the connection was lost?

https://pp.userapi.com/c637223/v637223838/6dcbe/XV9iyhdO-c0.jpg

Can confirm this; same problem here.
The last EthMan update improved the 100% lock after a warning, but a rig that was offline and comes back up is not (always?) shown as alive until EthMan is restarted.
Also, the batch action is still not retriggered in the case of a persisting failure.
The same thing is happening here.