
Topic: Claymore's Dual Ethereum AMD+NVIDIA GPU Miner v15.0 (Windows/Linux) - page 821. (Read 6590565 times)

legendary
Activity: 1510
Merit: 1003
Need advice from Mister Claymore or someone else.
I'm mining now at ethermine.org (Europe (France): eu1.ethermine.org:4444)
The pool uses diff 40M

My 3-4 card rigs are doing fine, but I have two standalone machines with a single 30 MH/s card each, and these one-card rigs show a huge (5-15%) stale share rate on the pool. All rigs share the same fast Internet access as the multi-card rigs, with 70-100 ms latency to the pool. Is this due to the high pool difficulty combined with the low (30 MH/s) rig speed? Is there a way to fix it without changing pools? I can't move these cards into the multi-card rigs ((
 
I'm using latest Claymore 9.5 but older versions seem to behave the same.
Thanx in advance
klf
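For a rough sanity check on the question above, here is a sketch of the share-timing math. The model is a deliberate simplification and an assumption on my part: it treats the pool's "40M" difficulty as a plain hash-count target and treats a share as stale if it is found during the one-network-round-trip window after a new block; real pools differ in both respects.

```python
# Rough share-timing model for a fixed-difficulty Ethash pool.
# ASSUMPTIONS: difficulty is a plain hash-count target (shares found at
# rate hashrate/difficulty), and the stale window per block is roughly
# one network round-trip. Real pools vary in both respects.

def expected_share_interval(pool_diff_hashes, hashrate_hs):
    """Average seconds between shares for one worker."""
    return pool_diff_hashes / hashrate_hs

def expected_stale_fraction(latency_s, avg_block_time_s):
    """Fraction of shares found for an already-obsolete job."""
    return latency_s / avg_block_time_s

# One 30 MH/s card on a 40M-diff pool:
print(expected_share_interval(40e6, 30e6))   # ~1.3 s between shares
# 100 ms latency vs a ~15 s ETH block time:
print(expected_stale_fraction(0.1, 15.0))    # ~0.7% expected stales
```

On this simple model the stale fraction depends on latency and block time, not on rig hashrate, and 100 ms latency should only produce well under 1% stales, so a 5-15% rate likely has some other cause than the diff/speed ratio.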
legendary
Activity: 1344
Merit: 1000
I'm having some problems running the miner on my system. It gives me a CUDA error saying it can't allocate memory for the DAG.



I've tried setting -lidag, or just -li, to the lowest intensity, and got the latest drivers from NVIDIA, but it still gives the same error. Help?

You can mine one of the other forks of ETH, like Musicoin or Expanse.

I'm also new to mining; I just tried a couple of days back on my PC and got the same error message as above. Is it a problem with my PC, or a script or settings issue?

I thought that if it works, I can invest money in a small rig first and then continue. Your suggestions will help a lot, and many thanks in advance.
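The allocation errors discussed above come down to the Ethash DAG outgrowing the card's VRAM. As a sketch, the commonly cited linear growth estimate (~1 GiB at epoch 0, ~8 MiB more per 30,000-block epoch) is an approximation I'm assuming here; the real DAG size is a prime-derived value just below this bound, and drivers also reserve some VRAM, so cards fail slightly earlier in practice.

```python
# Approximate Ethash DAG size by epoch (epoch = block_number // 30000).
# ASSUMPTION: linear estimate only; the real size is the largest
# prime-derived value just under this bound, so it differs slightly.

DAG_INIT_BYTES = 2**30      # ~1 GiB at epoch 0
DAG_GROWTH_BYTES = 2**23    # ~8 MiB added per epoch

def approx_dag_bytes(epoch):
    return DAG_INIT_BYTES + epoch * DAG_GROWTH_BYTES

def first_epoch_exceeding(vram_bytes):
    """First epoch whose DAG no longer fits in the given VRAM."""
    epoch = 0
    while approx_dag_bytes(epoch) <= vram_bytes:
        epoch += 1
    return epoch

print(first_epoch_exceeding(2 * 2**30))  # 2 GiB cards: prints 129
```

This is why 2 GB cards stopped mining ETH around epoch 129, while forks of ETH that launched later are still on low epochs with small DAGs.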
sr. member
Activity: 420
Merit: 260
Can someone share their .bat file to dual mine Ethereum and Expanse? Mine is giving me a "failed to get n2size" error.

EthDcrMiner64.exe -epool us2.ethermine.org:14444 -ewal account.name -epsw x -dpool exp.suprnova.cc:3333 -dwal account.name -dpsw x -esm 3

You cannot dual mine ETH and Expanse, since Expanse is a fork of ETH that uses the same algorithm. You can dual mine Ethereum and a compatible second coin like Decred, Sia, or Pascal, or mine Expanse plus Decred, Sia, or Pascal. You cannot dual mine two Ethash coins.
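For reference, a hypothetical .bat for a valid pairing (Ethash primary plus a Blake-14r secondary); the pool hosts, ports, and wallet/worker names below are placeholders of mine, so substitute your own pool's connection details:

```bat
:: Valid dual pair: ETH (Ethash) + DCR (Blake-14r) via -dcoin dcr.
:: All pool/wallet values below are placeholders - use your own.
EthDcrMiner64.exe -epool us2.ethermine.org:14444 -ewal YOUR_ETH_ADDRESS.worker -epsw x -dpool YOUR_DCR_POOL:PORT -dwal YOUR_DCR_ACCOUNT.worker -dpsw x -dcoin dcr
```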
legendary
Activity: 2450
Merit: 1004
Windows 10 Defender killed 9.3 yesterday, flagging it as Trojan:Win32/Skeeyah.A!rfn.
member
Activity: 79
Merit: 10
Can someone share their .bat file to dual mine Ethereum and Expanse? Mine is giving me a "failed to get n2size" error.

EthDcrMiner64.exe -epool us2.ethermine.org:14444 -ewal account.name -epsw x -dpool exp.suprnova.cc:3333 -dwal account.name -dpsw x -esm 3
full member
Activity: 306
Merit: 100
Can anyone here share their hashrate on this miner with a 1060, and if possible the power consumption as well? Thanks!
legendary
Activity: 1246
Merit: 1024
Your GPU is only 2 GB; it has to be at least 3 GB to hold the DAG.

Isn't the ETH miner supposed to work with 2 GB cards, though?

No, it is impossible, as the DAG file must fit in the card's memory and it is now over 2 GB in size. There's no way around that.
newbie
Activity: 2
Merit: 0
Your GPU is only 2 GB; it has to be at least 3 GB to hold the DAG.

Isn't the ETH miner supposed to work with 2 GB cards, though?
newbie
Activity: 1
Merit: 0
Trying to get going with mining for the first time. I've been trying to find my own way, but I'm stuck and could use a hand. I'm trying to set up geth, and I think I'm getting OK at understanding that. I run it with

 geth --rpc --fast --cache=1024

and it goes on its merry way

but when i launch ethdcrminer64 with the command provided by the first post

./ethdcrminer64 -epool


i get:

ETH: 1 pool is specified
Main Ethereum pool is
DCR: 0 pool is specified
AMD OpenCL platform not found


then after a bunch of other stuff i get

No pool specified for Decred! Ethereum-only mining mode is enabled
ETHEREUM-ONLY MINING MODE ENABLED (-mode 1)

Probably you are trying to mine Ethereum fork. Please specify "-allcoins 1" or "-allpools 1" option. Check "Readme" file for details.
Pool sent wrong data, cannot set epoch, disconnectETH: Connection lost, retry in 20 sec...

I've tried doing just that, setting the --allcoins 1 or the --allpools 1 option,

but then it just sits there, occasionally printing my GPU's temp and fan speed, until the watchdog restarts it, and rinse and repeat. Any advice?
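For solo mining against a local geth node, here is a sketch of how the two commands fit together. The geth RPC flags shown match this era of geth and are assumptions on my part (double-check against your version), and note that Claymore options take a single dash, not a double one:

```shell
# Let geth expose the RPC interface the miner will poll
# (127.0.0.1:8545 are geth's defaults, shown explicitly here).
geth --rpc --rpcaddr 127.0.0.1 --rpcport 8545 --fast --cache=1024

# In another terminal: point the miner at the local node.
# -mode 1 = ETH-only; single-dash options (-allcoins, not --allcoins).
./ethdcrminer64 -epool http://127.0.0.1:8545 -mode 1 -allcoins 1
```

geth must be fully synced before the miner will get useful work; until then the miner will mostly sit and retry, which matches the behavior described above.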
sr. member
Activity: 420
Merit: 260
I'm having some problems running the miner on my system. It gives me a CUDA error saying it can't allocate memory for the DAG.



I've tried setting -lidag, or just -li, to the lowest intensity, and got the latest drivers from NVIDIA, but it still gives the same error. Help?

You can mine one of the other forks of ETH, like Musicoin or Expanse.
member
Activity: 87
Merit: 11
Your GPU is only 2 GB; it has to be at least 3 GB to hold the DAG.
newbie
Activity: 2
Merit: 0
I'm having some problems running the miner on my system. It gives me a CUDA error saying it can't allocate memory for the DAG.

https://puu.sh/wjonC/51837b2990.png

I've tried setting -lidag, or just -li, to the lowest intensity, and got the latest drivers from NVIDIA, but it still gives the same error. Help?
sr. member
Activity: 326
Merit: 250
Greetings!

Does this miner support CUDA 8, i.e. use NVIDIA CUDA 8 for more efficient mining, like the Genoil CUDA miner did?


I want to install the Claymore miner on Linux (Ubuntu 16.04); does it work with CUDA?


Cheers!!
full member
Activity: 238
Merit: 100
Hi Everybody:

As in a lot of the messages I have seen here, I will start by stating that I'm new here and new to mining. I started mining on Friday; so far so good, and I have learned a lot.

Right now I am mining with only 3 ASUS STRIX RX 580 8GB OC cards. I have reduced the GPU clock by 5% and increased the memory clock to 2200 MHz, which took me from 24.5 MH/s to 27.4 MH/s, all in AMD's Wattman. I had problems with the Asus Aura light effects; they seemed to interfere with hashrate stability, so they are now off and uninstalled.

I have read about some widely known power adjustments. I understand the idea is to reduce voltage (and thus power consumption) and increase the memory clock speed (and thus MH/s). I have also read about doing this with AMD's Wattman or MSI's Afterburner, and about BIOS flashing.

Knowing that many have been able to get 29 MH/s at 135 W from these cards, my questions so far are the following:

- Is there a need to flash a new BIOS to get the best performance? If so, how many MH/s, or how much power reduction, could I get?
- If I can get similar improvements with AMD's Wattman, which parameters should I use?

Thank you very much for your support.

Carlos


If you're still using Windows (which is about as stable as a house of cards during a hurricane), then you can easily undervolt using MSI AB or whatever. Also, only 27.4 MH/s? Those can't be Samsung, can they?

How do you check for memory errors under Linux? I've asked this question a few times with no answer :(

Quite serious ones will appear in the kernel log - so you'd just check dmesg. I'm pretty sure there's a register with the count somewhere in the space, if I directly access the GPU (not bothering with the driver), but there's already SO MUCH awesome shit I can do with this access that I have yet to implement (and I've implemented plenty!) so it's not that high on my TODO list...

So that's it. There is no way for non-terminal-gurus to easily check for memory errors. I have been a Mac user for about 25 years, and I also have some FreeBSD & Linux servers. I like Unixes and hate Windows, but... for example, this little HWiNFO64 is really handy and easy to use. I would like to try building a Linux miner system, but I don't want to ruin my cards by running them on the edge of millions of hardware errors without knowing it...
And the solution of first running them on Windows to find each card's limits, then flashing those values into their BIOSes, and after that running them under Linux, seems a bit uncomfortable, don't you think? :)

ACTUALLY - even if you're an expert in bash, you still aren't able to see the memory errors - all of them. You would have to code - access the GPU directly, telling the driver to go fuck itself, and read them.

About your worry with memory errors - they have zero chance of harming the GPU. A memory error basically means the delay before a given memory command simply was not long enough, and as such, you got garbage back (most likely), assuming it was a read command. It's not going to hurt a thing, besides possibly your profits.

There is one other option, if you're a dev with a shitload of time... (or you just bribe me for a copy of mine) - write a tool to directly access the VRM controller(s) on the GPU, and command them directly. This is fun & rewarding, because you find out the Windows... those nice tools like MSI AB and Sapphire Trixx... they hide SO much power from you, and are like safety scissors when you need a scalpel.

Wolf, maybe it sounds strange, but - are you 100% sure that running cards 24/7 for a year with millions of memory errors will have NO impact in terms of degradation or damage to those cards? I have seen a lot of cards, for example RX 480 8GB with Samsung memory, which in normal conditions can easily do 30+ MH/s, but were barely hitting 27 MH/s! Something had to have degraded them, and memory errors are my idea no. 1.

I am absolutely certain. The reason I'm paid so well for custom performance timings is because not only do I not copy+paste timing sets, I also do not blindly change values - I understand how to use & interact with GDDR5, and how it functions, to an extent. I don't mean in code, storing & retrieving shit, but more on the level of how to operate it, and how the GPU's memory controller will drive it, the various delays required between different commands issued, and whatnot. Attempting to do something too quickly (like back-to-back ACTIVE commands to rows in different banks without waiting long enough) will simply end with incorrect data if you fuck up just a little, or a memcrash (identifiable by the GPU's core clock being normal, but the memclk dropping to 300 and it not hashing) if you fuck up a lot. You ain't gonna damage it, short of voltage modifications.

Now - I have seen this case of Samsung just not being... well, Samsung, in some cases. In all of them, the issue was heat. Now, I know what you're thinking. Something along the lines of the core temp being more than fine, right? This is due to the cooling being what I call a "show cooler." XFX RS XXX (470 or 480, 4G or 8G), as well as MSI's Armor coolers are ones I have personally bought and confirmed this behavior. They ensure the GPU's ASIC is connected *really* well to the heatsink - and that's about all they do. Most gamers/overclockers/miners don't even know there are other temp sensors, let alone check them... this means that while everything on the PCB besides the core is left to cook, most notably the VRM controller(s) and the GDDR5, all appears well! This has been the cause of my Samsung under-performance issues without fail.

Well, a little example. I had 2 identical cards, Nitro+ RX 480 4GB Samsung. Same settings, same core, mem, voltages, etc. - simply the exact same confirmed settings. But one card was running 13-15 degC hotter than the second one. Yes, far enough apart to make the cooling factor, airflow, etc. irrelevant. And guess what? The hotter card, even with the same mem straps, was hashing a lot lower. And why was that? Both cards had the same factory cooling solution and backplate (not sure about the Nitro+ VRM heatsink). My guess: one card was simply "a bit screwed" or something; my first idea is that the previous owner ran it with mem errors and the memory has since been more likely to produce further mem errors...
So what is the conclusion? Should I change the settings on all my cards to run faster (more MH/s) but with mem errors, or just have all cards running slower but clean, with zero mem errors? Where is the border at which mem errors start affecting accepted shares due to incorrect shares?

Cards may look identical but have different BIOS settings, especially when you mention a "previous owner"... BIOS settings affect voltage and frequencies, which affect temperature. If the cards are used, the thermal paste could have dried out and need replacing...
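As a concrete example of the dmesg check mentioned above, a rough one-liner; the message strings are driver-version-dependent assumptions on my part, and this only catches errors severe enough for the driver to log (it will not see the sub-threshold errors discussed in this thread):

```shell
# Scan the kernel log for amdgpu/radeon messages that look like memory
# or GPU faults; a clean result means "nothing logged", not "zero errors".
dmesg | grep -iE 'amdgpu|radeon' | grep -iE 'error|fault|ecc'
```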
full member
Activity: 238
Merit: 100
My batch:

setx GPU_FORCE_64BIT_PTR 0
setx GPU_MAX_HEAP_SIZE 100
setx GPU_USE_SYNC_OBJECTS 1
setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_SINGLE_ALLOC_PERCENT 100
EthDcrMiner64.exe -epool eu1.ethermine.org:4444 -ewal 0x9793F71eC2f913d01D8564f1F3E4bF73a02a2Cd1.123 -epsw x


but I get this error - where am I going wrong? I must be making a simple mistake.



C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - PO
OL>setx GPU_FORCE_64BIT_PTR 0

SUCCESS: Specified value was saved.

C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - PO
OL>setx GPU_MAX_HEAP_SIZE 100

SUCCESS: Specified value was saved.

C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - PO
OL>setx GPU_USE_SYNC_OBJECTS 1

SUCCESS: Specified value was saved.

C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - PO
OL>setx GPU_MAX_ALLOC_PERCENT 100

SUCCESS: Specified value was saved.

C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - PO
OL>setx GPU_SINGLE_ALLOC_PERCENT 100

SUCCESS: Specified value was saved.

C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - PO
OL>EthDcrMiner64.exe -epool eu1.ethermine.org:4444 -ewal 0x9793F71eC2f913d01D856
4f1F3E4bF73a02a2Cd1.home1 -epsw
'EthDcrMiner64.exe' is not recognized as an internal or external command,
operable program or batch file.

C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - PO
OL>pause
Press any key to continue . . .

Cause: The script does not know where your "EthDcrMiner64.exe" is located - note that your prompt shows a Claymore CryptoNote miner folder, which is a different miner and likely does not contain EthDcrMiner64.exe at all.
Solution: Add a full path in the string "C:\XXXX\XXXX\EthDcrMiner64.exe -epool eu1.ethermine.org:4444 -ewal 0x9793F71eC2f913d01D8564f1F3E4bF73a02a2Cd1.123 -epsw x"
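Besides hard-coding the full path, another common fix is to have the batch file change into its own directory first. A sketch (the pool and wallet line is copied from the batch above; this assumes EthDcrMiner64.exe sits next to the .bat):

```bat
:: %~dp0 expands to the folder containing this .bat, so the script works
:: no matter where it is launched from - as long as EthDcrMiner64.exe
:: sits in the same folder as the batch file.
cd /d "%~dp0"
EthDcrMiner64.exe -epool eu1.ethermine.org:4444 -ewal 0x9793F71eC2f913d01D8564f1F3E4bF73a02a2Cd1.123 -epsw x
```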
newbie
Activity: 17
Merit: 0
Windows Defender today placed ethdcrminer64.exe in quarantine (Trojan:Win32/Skeeyah.A!rfn).
http://imgur.com/a/0te75
Anyone have the same problem?

I found it, sorry.
full member
Activity: 224
Merit: 102
(snipped: quote of the memory-error discussion above)
Honestly, it's simpler than you think when it comes to memory errors... I think HWiNFO64 is giving you too much information. Basically, try checking your average pool hashrate (I like Nanopool for this, with their 1, 3, 6, 12, and 24 hour averages). Just worry about the amount of valid shares adding up to the hashrate you want, because that is what determines what you get paid in the end.
I like Nanopool too, and I agree accepted shares are nice information, but Nanopool does not show stale or invalid shares. Ethermine will show you the exact number of stales.
But you didn't answer my "where is the border" question :)
newbie
Activity: 15
Merit: 0
Can anyone help? I have one rig working perfectly; my other rig with the same config won't mine. I have 4 1060s running at 2-3 MH/s. Watching Afterburner, the cards won't go over 40% power... If I switch to Vertcoin with ccminer, they work perfectly. I don't understand why my other rig won't mine if I'm using the exact same settings.
member
Activity: 91
Merit: 10
I've got one GPU which makes one incorrect share per 2-3 thousand good shares, running at 30.7 MH/s, with zero memory errors in HWiNFO64. And you would trash its BIOS? Can't agree...
What you say makes sense... I have a tendency to be too cautious. I could tolerate 2 errors separated by enough delay (e.g. 60 minutes).
sr. member
Activity: 276
Merit: 250
My hashrate dropped from 30 to 29.7 MH/s a couple of days ago. The miner never went off or anything. Why do I get 0.3 MH/s less now? Anyone else have this issue?
I am/was using 9.3 and life was great. Now every Claymore version gives me 0.3 MH/s less (versions 8.1 to 9.5, all the same now). I don't understand what happened.