Author

Topic: PhoenixMiner 6.2c: fastest Ethereum/Ethash miner with lowest devfee (Win/Linux) - page 409. (Read 784965 times)

newbie
Activity: 49
Merit: 0
Just to report in: the latest Windows 10 update pushed out yesterday is not compatible, and the miner will hang. I rolled back the update to get the miner working again.
newbie
Activity: 74
Merit: 0
What is the most stable driver for the Nvidia 1060 6GB?
newbie
Activity: 129
Merit: 0
Hello Phoenix.
I have a request for the next version of your miner:

The H/s is shown every 5 seconds, which is too much for me and fills the screen with unneeded information.
Could you make the H/s display interval follow the -gswin value, so that with -gswin 5 you see the H/s every 5 seconds, but with -gswin 15 it shows every 15 seconds?
And/or add an option to show the H/s only after every received job, like in the Claymore miner?
Thank you.
full member
Activity: 357
Merit: 101
I have had this error regularly since 2018.04.22 18:14. I do not remove logs, so I have the full history. Sometimes I get other errors as well. My error history is the following:
....
   We are working on some changes/workarounds for Nvidia cards that will be included in 3.0, and when they are ready we will release a beta version and ask any affected miners to test whether the fixes work for them.

Hello,
With this version I get lots of share timeouts.
I never had this problem before; I have a stable Internet connection and I am using DwarfPool.
Here is the log:

Code:
....

How can I fix it?
Regards,
Marcin
   We use the term "share timeout" to report that the miner didn't hear back from the pool about a submitted share. We wait about 10 minutes and then count the share as rejected, even though it may have been accepted by the pool, which for some reason wasn't able to send a response. There are no changes in PhoenixMiner that would cause more or fewer such shares: it is entirely up to the pool to respond with either an accepted or a rejected message after receiving a share. A possible reason is that the pool (or its particular server) is overloaded and share-acceptance messages are treated as low priority, so if a share is sent during a period of high network activity (like a few job changes one after the other), these messages are simply "lost" by the pool.

   TLDR: This is not caused by PhoenixMiner but by the pool, and the only thing you can do is to change the pool (or its server) if this happens too often. How often is "too often" is up to you, but you can use the rejected shares as a guide, as we (conservatively) count shares without a response as rejected.


The legendary AMD 15.12 driver triggers the debugging error; please make some sort of exception/fix for it in your next release.
   Thank you for reporting this. We almost never see this problem in our testing, so it will be good to have one reliable test case that always fails.

Dev, when will new coins appear, such as Egem and Clo?
   They are already implemented in the development version; we will probably release a beta in a week or so.

Hi Phoenix, I have tried your miner and it is very good, except that I can't set a single GPU to 50% usage so I can still use it for editing and gaming. In Claymore I can do that; any help would be greatly appreciated.
   You can use the -mi 0 option to give priority to whatever else you are doing on the GPU (gaming, etc.). For example, if you have five GPUs and the third one is connected to your display, use the following: -mi 14,14,0,14,14. You can additionally lower the GPU load with the -li option; in our five-GPU example, that would be -li 0,0,1,0,0. Increase the -li value as necessary.
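
   A minimal start.bat sketch of that five-GPU example (the pool is taken from elsewhere in the thread; the wallet is a placeholder):

Code:
PhoenixMiner.exe -pool eth-eu2.nanopool.org:9999 -wal YOUR_WALLET -mi 14,14,0,14,14 -li 0,0,1,0,0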


So how can we make the fans here work the same as in Claymore? Any ideas? If I set -tt 60 and -fanmax 90 in Claymore, as long as the temperature is higher than -tt 60 the fan stays at 90%; if the temperature is 50C, the fan in Claymore is around 0%, then jumps to 50% and back to 0% again. So how can this miner work the same way?
   In PhoenixMiner it depends on the card's BIOS and the drivers, but generally the fan spins as low as possible (though no lower than -fanmin) until the temperature hits -tt. Then the fan gradually spins up until it reaches -fanmax. If the temperature is still rising when it hits -tmax, the card will begin to throttle in order to keep the temperature at about -tmax.
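
   To illustrate how these options fit together, a sketch with arbitrary example values (not recommendations; pool and wallet are placeholders):

Code:
PhoenixMiner.exe -pool eth-eu2.nanopool.org:9999 -wal YOUR_WALLET -fanmin 30 -fanmax 90 -tt 60 -tmax 79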


Trying to get JSON-RPC responses from Phoenix and having some serious issues. If someone could tell me what I did wrong here I would greatly appreciate it; this code works perfectly fine for Claymore's, but for PhoenixMiner it doesn't do anything.

Code:
try
{
    var clientSocket = new System.Net.Sockets.TcpClient();

    // PhoenixMiner's CDM remote manager speaks the Claymore-compatible protocol (this rig uses port 3337).
    if (clientSocket.ConnectAsync("127.0.0.1", 3337).Wait(5000))
    {
        // The request must be terminated with a newline, or the miner will keep waiting for more input.
        string get_menu_request = "{\"id\":0,\"jsonrpc\":\"2.0\",\"method\":\"miner_getstat1\"}\n";
        NetworkStream serverStream = clientSocket.GetStream();
        byte[] outStream = System.Text.Encoding.ASCII.GetBytes(get_menu_request);
        serverStream.Write(outStream, 0, outStream.Length);
        serverStream.Flush();

        // Decode only the bytes actually received, not the whole (zero-filled) buffer.
        byte[] inStream = new byte[clientSocket.ReceiveBufferSize];
        int bytesRead = serverStream.Read(inStream, 0, clientSocket.ReceiveBufferSize);
        string _returndata = System.Text.Encoding.ASCII.GetString(inStream, 0, bytesRead);

        if (_returndata.Length == 0)
        {
            throw new Exception("Invalid data");
        }

        Console.WriteLine(_returndata);
    }
    else
    {
        // Connection attempt timed out after 5 seconds.
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    Console.WriteLine(ex.StackTrace);
    // Application-specific error bookkeeping and logging.
    stats.ex = ex;
    logger.LogWrite("Host socket exception: " + ex.ToString());
}


Appreciate the help.
   At first glance the code seems OK. The usual reason for this problem is a missing newline (\n) at the end of the request, but you seem to have it. Just make sure that it doesn't get mangled in some way during the conversion of the request string from ASCII to a byte array.
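
   If in doubt, a quick hypothetical check (not part of any PhoenixMiner API) that the terminator survives the ASCII conversion before the request is sent:

Code:
byte[] requestBytes = System.Text.Encoding.ASCII.GetBytes(get_menu_request);
// ASCII maps '\n' to the single byte 0x0A; if the last byte is not LF,
// the string was mangled somewhere and the miner will keep waiting for more input.
if (requestBytes[requestBytes.Length - 1] != (byte)'\n')
{
    throw new InvalidOperationException("JSON-RPC request is not newline-terminated");
}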


In version 2.9e, after closing the miner I get errors like these:

[screenshots]

In version 2.8c everything is fine. Video cards: 1070 and 1070 Ti.
   It's just another reincarnation of the Nvidia errors with overclocked cards under 2.9e. For now, either use 2.8 for your Nvidia cards or lower the overclock settings. We are trying to make the new kernels more stable under overclocked conditions, but fundamentally the higher speed loads the hardware more, so any hardware problems at high memory overclocks become more critical.


Hi,
How do I solve this? I'm using PhoenixMiner 2.9e and get this error every 10 minutes:
2018.05.17:19:27:32.267: eths Eth: Unable to read pool response: The semaphore timeout period has expired
2018.05.17:19:27:32.267: eths Eth: Reconnecting in 20 seconds...
   There is some kind of problem with the pool. Switch to a different pool or to a different server of the same pool.


It's kind of a copy of Claymore but better in my view (more info & faster MH/s).

I used exactly the same config for PhoenixMiner as I used for Claymore. I set the mvddc, cvddc, cclock, and memclock to a specific value for each card. I have 6x RX 580 8GB and I set a value for every individual card.
Claymore was consuming 850W at a hashrate of 181 MH/s.
I tried PhoenixMiner 2.9e (exact same parameters) and it used 1200W!!! I didn't notice at first because I was using TeamViewer. After about 4 minutes I checked my miner in the basement because the temperature of the cards had also increased, and I saw 1200W on a 1000W PSU. I immediately switched the power off.

Now I do have 2 questions:


1. How did it consume 350W more?
2. Did I badly damage my 1000W gold+ PSU?

Thanks
   If you are using older AMD drivers (the blockchain beta drivers, or any driver before 18.x.x), the voltage settings won't work properly in PhoenixMiner. So either install new drivers or use a third-party tool to control the voltages. With the same voltages, the power usage will be roughly the same (+/- a few percent) between our miner and Claymore.

   As for your second question: no, you didn't damage anything. First, the power rating of the PSU is at its output, and at high load its efficiency is usually no more than 85%, so at the input (the electrical outlet, where you are measuring the consumption) that is 1000/0.85 ≈ 1176W. Second, every high-end PSU has multiple protections and will shut down automatically if it is pushed near its limits. That being said, continuous (24/7) load above 70-80% of the PSU's rated power is not recommended.


So is anyone able to help with this issue

Code:
GPU1: Radeon RX Vega (pcie 38), OpenCL 2.0, 8 GB VRAM, 64 CUs
GPU2: Radeon RX 580 Series (pcie 39), OpenCL 2.0, 8 GB VRAM, 36 CUs
Listening for CDM remote manager at port 3333 in read-only mode
Eth: the pool list contains 3 pools
Eth: primary pool: eth-au.dwarfpool.com:8008
Starting GPU mining
Eth: Connecting to ethash pool eth-au.dwarfpool.com:8008 (proto: EthProxy)
Eth: Connected to ethash pool eth-au.dwarfpool.com:8008 (163.47.16.147)
Eth: New job #4330e9cb from eth-au.dwarfpool.com:8008; diff: 2000MH
GPU1: Starting up... (0)
Eth: Generating light cache for epoch #187
GPU2: Starting up... (0)
GPU1: 34C 49%, GPU2: 45C 0%
build: Failed to GPU1 program: clBuildProgram (-11)
GPU1: Failed to load kernels: clCreateKernel (-46)
Eth: New job #58668d23 from eth-au.dwarfpool.com:8008; diff: 2000MH
GPU1: Starting up... (0)
Thread(s) not responding. Restarting.
Eth speed: 0.000 MH/s, shares: 0/0/0, time: 0:00
GPUs: 1: 0.000 MH/s (0) 2: 0.000 MH/s (0)
   Is this a regular AMD Vega GPU, or one of the first or PRO editions? Also, it would be helpful to tell us the driver version.


So I have a very basic question I would like to verify. Say I am mining EtherGem solo through Geth. That coin's difficulty goes up and down, and right now it ranges from about 700 MH to about 1.2 GH. When I log in to my miners and see a miner that has been running for about 5 days, it states the highest difficulty level for a found block as something like 120 GH. Why does it have a result that is some 120x more difficult than what is needed to win the block?

And is that what it means, that I found something at 120 GH during that 5-day period?! If so, that is pretty awesome, if that is what it means. Thank you.
   The network difficulty of EtherGEM is currently between 600 and 1000 GH. Note that you may find a block even at a lower difficulty, but 120 GH is in no way a guarantee that you have found a block. You just need to check your wallet to see whether you have received a block reward.
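
   As a back-of-the-envelope sketch of what those numbers mean for solo mining (the figures below are illustrative examples, not your actual rig): for ethash, the expected time to find a block is roughly the network difficulty in hashes divided by your hashrate.

Code:
// Rough expected time to a solo block; both values are illustrative examples.
double networkDifficulty = 800e9;   // ~800 GH network difficulty
double rigHashrate = 180e6;         // ~180 MH/s rig
double expectedSeconds = networkDifficulty / rigHashrate;
System.Console.WriteLine($"Expected about {expectedSeconds / 60:F0} minutes per block on average");
// Prints about 74 minutes; actual block finds are random, so the variance is large.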


At each DAG update [so every 5 days and a few hours] I'm running into the issue shown in the screenshot:

https://pasteboard.co/HlSWewG.png

I looked through the readme file for a fix for this; I already have 16 GB of virtual memory allocated, so that shouldn't be the issue.

The cards in question are all Nvidia, so the options suggested won't work [as the description says, only for AMD].

Any help is appreciated!

Cheers,

Greg
   This is more overclocking and/or PSU related. If your cards are overclocked, they may fail during DAG generation, as DAG generation is more computationally intensive than normal ethash computation. Another possible reason is a PSU that is at its limit. During DAG generation the cards draw more power, and if the PSU is near its limit, the voltages may drop too much and the cards will crash. In the next release we will add the -gser option to serialize DAG generation, which addresses the second problem.


PhoenixMiner developer, when you release a newer version of this miner software, please also state which version of the Nvidia driver you tested it with. Different Nvidia drivers often give different results and different problems, so indicating which version you work with would help avoid all this.
    We usually test with a fairly new driver version, but we also have rigs with older versions. However, it is not possible to test with every version. The rule of thumb is (as always): if it works, don't break it. There is virtually nothing to gain from the latest Nvidia drivers, because the current GPUs are based on a fairly old architecture, and future driver updates are mostly game-related optimizations and fixes.


So is anyone able to help with this issue

Code:
GPU1: Radeon RX Vega (pcie 38), OpenCL 2.0, 8 GB VRAM, 64 CUs
GPU2: Radeon RX 580 Series (pcie 39), OpenCL 2.0, 8 GB VRAM, 36 CUs
Listening for CDM remote manager at port 3333 in read-only mode
Eth: the pool list contains 3 pools
Eth: primary pool: eth-au.dwarfpool.com:8008
Starting GPU mining
Eth: Connecting to ethash pool eth-au.dwarfpool.com:8008 (proto: EthProxy)
Eth: Connected to ethash pool eth-au.dwarfpool.com:8008 (163.47.16.147)
Eth: New job #4330e9cb from eth-au.dwarfpool.com:8008; diff: 2000MH
GPU1: Starting up... (0)
Eth: Generating light cache for epoch #187
GPU2: Starting up... (0)
GPU1: 34C 49%, GPU2: 45C 0%
build: Failed to GPU1 program: clBuildProgram (-11)
GPU1: Failed to load kernels: clCreateKernel (-46)
Eth: New job #58668d23 from eth-au.dwarfpool.com:8008; diff: 2000MH
GPU1: Starting up... (0)
Thread(s) not responding. Restarting.
Eth speed: 0.000 MH/s, shares: 0/0/0, time: 0:00
GPUs: 1: 0.000 MH/s (0) 2: 0.000 MH/s (0)
What is your RAM and what is your virtual memory right now? Thanks.

16GB Ram
50,000 MB Page


This is a bug with only the Vega 64; I have seen it after the Windows update and driver 18.4.1. No fix yet, as PM seems to have built the Vega kernels with only a Vega 56. Hopefully it's fixed in the new version.
   We are waiting for the Vega 64s to arrive and will fix it for sure.


So I have a very basic question I would like to verify. Say I am mining EtherGem solo through Geth. That coin's difficulty goes up and down, and right now it ranges from about 700 MH to about 1.2 GH. When I log in to my miners and see a miner that has been running for about 5 days, it states the highest difficulty level for a found block as something like 120 GH. Why does it have a result that is some 120x more difficult than what is needed to win the block?

And is that what it means, that I found something at 120 GH during that 5-day period?! If so, that is pretty awesome, if that is what it means. Thank you.

Dev, if you could answer this real quick in your next post I would appreciate it, just one sentence if you can. The other thing I did today: I grabbed about 200 MB of log files and threw all of that data into Excel, some 900k+ lines. I don't think Excel would even let me use all of the data I grabbed, and needless to say Excel did not like it. So I sorted all of my found shares and found-block difficulties for the past couple of weeks. My highest find was 455 TH, so about half a PH, or roughly 1/6 of the way to finding an Ethereum block at the current difficulty.

While fighting Excel through this exercise, aside from making a macro I can run to truncate, sort, and filter all the logs: would it be possible on your end (I have no idea) to display the highest difficulty at the end of each round? Not just shares or blocks found, but a snapshot of the highest difficulty reached once a new block arrives. I guess that would never be higher than the coin's current difficulty, as you would have found the block, but if you're working on a high-difficulty coin it would still be nice to see how high you got for each block. That raises the question: OK, at the end of block X I get to, say, 1.2 PH, that's great, then the block ends and a new block arrives. On a pool this would almost certainly have resulted in a share, right, but you would have shot past the share difficulty and still been climbing toward the block difficulty, so the end-of-block highest difficulty is still valid.

But for those of us who solo mine 99% of the time, we don't have pesky shares to deal with, and at the end of each block we could see how high we got, or even a "this is your block height, this is the current block difficulty... you achieved 32% of the block" message (achieved difficulty / current difficulty).

Just thinking about this, I have no idea if you are even able to capture the highest GPU difficulty achieved at the end of each block, but that data would be really awesome to see and look through, to see how close we get to hitting an Ethereum block. Calcs tell me my own rig(s) would take 72 days; I even made my own calculator, all tweaked out, and it says the same, but watching the progress as it works would be awesome! Anyway, just some thoughts. Great miner, by the way, thanks a ton.
   It's doable and not a bad idea; however, it will have to wait in line behind other TODOs. It may be possible to do something like this for pool mining too, but then we would have to get the network difficulty from another source, as the pool doesn't report it.


Hey,

I have a problem. I was using PhoenixMiner 2.5d; sometimes when a CUDA error happened (about once a month) the miner restarted itself without a problem. I started using 2.9e, but whenever a CUDA error happens the miner just stays at the restarting part without actually restarting. What could be the problem?

start.bat (address removed)

setx GPU_FORCE_64BIT_PTR 0
setx GPU_MAX_HEAP_SIZE 100
setx GPU_USE_SYNC_OBJECTS 1
setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_SINGLE_ALLOC_PERCENT 100
PhoenixMiner.exe -pool eth-eu2.nanopool.org:9999 -pool2 eth-eu1.nanopool.org:9999 -wal -pass x -coin eth -cdm 0 -nvidia -wdog 1 -rmode 1
pause

.....

If you press any key the miner just closes itself and you have to restart it manually; if you remove the pause from the bat file, the miner simply closes itself.

Windows 10 64-bit, no April update because that almost killed the rig.

Thanks in advance.
   We are working on a fix for these problems.

newbie
Activity: 1
Merit: 0
Hey,

I have a problem. I was using PhoenixMiner 2.5d; sometimes when a CUDA error happened (about once a month) the miner restarted itself without a problem. I started using 2.9e, but whenever a CUDA error happens the miner just stays at the restarting part without actually restarting. What could be the problem?

start.bat (address removed)

setx GPU_FORCE_64BIT_PTR 0
setx GPU_MAX_HEAP_SIZE 100
setx GPU_USE_SYNC_OBJECTS 1
setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_SINGLE_ALLOC_PERCENT 100
PhoenixMiner.exe -pool eth-eu2.nanopool.org:9999 -pool2 eth-eu1.nanopool.org:9999 -wal -pass x -coin eth -cdm 0 -nvidia -wdog 1 -rmode 1
pause


2018.05.20:23:38:41.707: GPU3 CUDA error in CudaProgram.cu:264 : the launch timed out and was terminated (702)
2018.05.20:23:38:41.707: GPU2 CUDA error in CudaProgram.cu:264 : the launch timed out and was terminated (702)
2018.05.20:23:38:41.707: GPU2 GPU2 search error: the launch timed out and was terminated
2018.05.20:23:38:41.707: GPU3 GPU3 search error: the launch timed out and was terminated
2018.05.20:23:38:41.748: GPU4 CUDA error in CudaProgram.cu:264 : the launch timed out and was terminated (702)
2018.05.20:23:38:41.748: GPU4 GPU4 search error: the launch timed out and was terminated
2018.05.20:23:38:41.767: GPU5 CUDA error in CudaProgram.cu:264 : the launch timed out and was terminated (702)
2018.05.20:23:38:41.767: GPU5 GPU5 search error: the launch timed out and was terminated
2018.05.20:23:38:41.768: GPU1 CUDA error in CudaProgram.cu:264 : the launch timed out and was terminated (702)
2018.05.20:23:38:41.768: GPU1 GPU1 search error: the launch timed out and was terminated
2018.05.20:23:38:42.547: wdog Thread(s) not responding. Restarting.

press any key to continue....

If you press any key the miner just closes itself and you have to restart it manually; if you remove the pause from the bat file, the miner simply closes itself.

Windows 10 64-bit, no April update because that almost killed the rig.

Thanks in advance.
jr. member
Activity: 36
Merit: 12
It's kind of a copy of Claymore but better in my view (more info & faster MH/s).

I used exactly the same config for PhoenixMiner as I used for Claymore. I set the mvddc, cvddc, cclock, and memclock to a specific value for each card. I have 6x RX 580 8GB and I set a value for every individual card.
Claymore was consuming 850W at a hashrate of 181 MH/s.
I tried PhoenixMiner 2.9e (exact same parameters) and it used 1200W!!! I didn't notice at first because I was using TeamViewer. After about 4 minutes I checked my miner in the basement because the temperature of the cards had also increased, and I saw 1200W on a 1000W PSU. I immediately switched the power off.

Now I do have 2 questions:


1. How did it consume 350W more?
2. Did I badly damage my 1000W gold+ PSU?

Thanks

As for your question about damaging the PSU, probably not.  The wattage rating on a PSU is what the unit is expected to OUTPUT... in your case, 1000W.  However, it is not perfectly efficient, so the draw from the wall will be higher.  If your PSU is Gold rated (80% efficient), then at maximum capacity it should be pulling about 1200W at the wall.  This isn't a good thing to do in practice, since it generates a lot of heat, leaves no headroom for the system to draw any more power (which would cause instability/crashes), and efficiency decreases at higher loads, which means more $$ for power.  But if you only ran like that for a few minutes, then you don't need to worry about damage.  A 1000W PSU is pretty borderline for six 580s, but if it's working for you then great.

As I said before, try the 18.3.4 drivers (and make sure you enable compute mode in settings) and run PhoenixMiner again with your voltage tweaks.

Thank you for your two replies. I've installed the 18.3.4 drivers and set them to compute mode. I'm now getting the most stable hashrate in like 6 months! It stays between 183 and 183.9 MH/s. The blockchain drivers were buggy as hell, but they worked (7+ days straight) at the time without crashes, so that's why I never updated them. But this is a lot better and even uses less power (835W). I'm still tuning the drivers and miner, because Phoenix is restarting about every 3 hours with GPU6 hanging (that never happened in Claymore, but it may be due to the drivers). So I'm trying to reduce the speed and give it some more volts; we'll see how that goes.
Thanks, man!
newbie
Activity: 31
Merit: 0
So I have a very basic question I would like to verify. Say I am mining EtherGem solo through Geth. That coin's difficulty goes up and down, and right now it ranges from about 700 MH to about 1.2 GH. When I log in to my miners and see a miner that has been running for about 5 days, it states the highest difficulty level for a found block as something like 120 GH. Why does it have a result that is some 120x more difficult than what is needed to win the block?

And is that what it means, that I found something at 120 GH during that 5-day period?! If so, that is pretty awesome, if that is what it means. Thank you.

Dev, if you could answer this real quick in your next post I would appreciate it, just one sentence if you can. The other thing I did today: I grabbed about 200 MB of log files and threw all of that data into Excel, some 900k+ lines. I don't think Excel would even let me use all of the data I grabbed, and needless to say Excel did not like it. So I sorted all of my found shares and found-block difficulties for the past couple of weeks. My highest find was 455 TH, so about half a PH, or roughly 1/6 of the way to finding an Ethereum block at the current difficulty.

While fighting Excel through this exercise, aside from making a macro I can run to truncate, sort, and filter all the logs: would it be possible on your end (I have no idea) to display the highest difficulty at the end of each round? Not just shares or blocks found, but a snapshot of the highest difficulty reached once a new block arrives. I guess that would never be higher than the coin's current difficulty, as you would have found the block, but if you're working on a high-difficulty coin it would still be nice to see how high you got for each block. That raises the question: OK, at the end of block X I get to, say, 1.2 PH, that's great, then the block ends and a new block arrives. On a pool this would almost certainly have resulted in a share, right, but you would have shot past the share difficulty and still been climbing toward the block difficulty, so the end-of-block highest difficulty is still valid.

But for those of us who solo mine 99% of the time, we don't have pesky shares to deal with, and at the end of each block we could see how high we got, or even a "this is your block height, this is the current block difficulty... you achieved 32% of the block" message (achieved difficulty / current difficulty).

Just thinking about this, I have no idea if you are even able to capture the highest GPU difficulty achieved at the end of each block, but that data would be really awesome to see and look through, to see how close we get to hitting an Ethereum block. Calcs tell me my own rig(s) would take 72 days; I even made my own calculator, all tweaked out, and it says the same, but watching the progress as it works would be awesome! Anyway, just some thoughts. Great miner, by the way, thanks a ton.

   
newbie
Activity: 31
Merit: 0
At each DAG update [so every 5 days and a few hours] I'm running into the issue shown in the screenshot:

https://pasteboard.co/HlSWewG.png

I looked through the readme file for a fix for this; I already have 16 GB of virtual memory allocated, so that shouldn't be the issue.

The cards in question are all Nvidia, so the options suggested won't work [as the description says, only for AMD].

Any help is appreciated!

Cheers,

Greg

Hey Greg,

So from your image, everything starts to go to shit when you "DAG up", as I like to call it: your move to DAG 188. You wouldn't happen to be on 3GB cards, would you? I think DAG 188 is at or near the "oh shit, 3GB is done for" range.

Remember that PM auto-generates the DAG for epoch +2 each time; you can disable this, but you may be pre-hitting that coin's cutoff for your GPU's memory size. If you have 4GB or higher cards, have you tried the following:

  -lidag Slow down DAG generation to avoid crashes when switching DAG epochs
      (0-3, default: 0 - fastest, 3 - slowest). This option works only on AMD cards

My gut says you are on 3GB cards and, for the coin you are mining, the DAG has gotten too high. You will need to find a new altcoin with a much lower DAG to "DAG down" to, like Pirl or Callisto or Whale, whichever you favor. That's my guess; I would like to see the first screen of PM after you fire it up, to see the card details.
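
For a rough sense of where DAG epoch 188 sits, a back-of-the-envelope sketch (the real ethash formula also rounds the size down to a prime-related value, so treat the result as approximate):

Code:
// Approximate ethash DAG size: starts at 1 GiB at epoch 0 and grows by 8 MiB per epoch.
const long initBytes = 1L << 30;    // 1 GiB
const long growthBytes = 1L << 23;  // 8 MiB per epoch
int epoch = 188;
double dagGiB = (initBytes + growthBytes * epoch) / (double)(1L << 30);
System.Console.WriteLine($"Epoch {epoch}: ~{dagGiB:F2} GiB DAG");  // ~2.47 GiB

So at epoch 188 the DAG itself is only around 2.5 GB; 3GB cards tend to hit trouble earlier on Windows 10 because the OS reserves part of the VRAM, which matches the "Windows 7 or Linux lasts longer" comments later in the thread.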
jr. member
Activity: 117
Merit: 3
So is anyone able to help with this issue

Code:
GPU1: Radeon RX Vega (pcie 38), OpenCL 2.0, 8 GB VRAM, 64 CUs
GPU2: Radeon RX 580 Series (pcie 39), OpenCL 2.0, 8 GB VRAM, 36 CUs
Listening for CDM remote manager at port 3333 in read-only mode
Eth: the pool list contains 3 pools
Eth: primary pool: eth-au.dwarfpool.com:8008
Starting GPU mining
Eth: Connecting to ethash pool eth-au.dwarfpool.com:8008 (proto: EthProxy)
Eth: Connected to ethash pool eth-au.dwarfpool.com:8008 (163.47.16.147)
Eth: New job #4330e9cb from eth-au.dwarfpool.com:8008; diff: 2000MH
GPU1: Starting up... (0)
Eth: Generating light cache for epoch #187
GPU2: Starting up... (0)
GPU1: 34C 49%, GPU2: 45C 0%
build: Failed to GPU1 program: clBuildProgram (-11)
GPU1: Failed to load kernels: clCreateKernel (-46)
Eth: New job #58668d23 from eth-au.dwarfpool.com:8008; diff: 2000MH
GPU1: Starting up... (0)
Thread(s) not responding. Restarting.
Eth speed: 0.000 MH/s, shares: 0/0/0, time: 0:00
GPUs: 1: 0.000 MH/s (0) 2: 0.000 MH/s (0)
What is your RAM and what is your virtual memory right now? Thanks.

16GB Ram
50,000 MB Page


This is a bug with only the Vega 64; I have seen it after the Windows update and driver 18.4.1. No fix yet, as PM seems to have built the Vega kernels with only a Vega 56. Hopefully it's fixed in the new version.
newbie
Activity: 27
Merit: 1
So is anyone able to help with this issue

Code:
GPU1: Radeon RX Vega (pcie 38), OpenCL 2.0, 8 GB VRAM, 64 CUs
GPU2: Radeon RX 580 Series (pcie 39), OpenCL 2.0, 8 GB VRAM, 36 CUs
Listening for CDM remote manager at port 3333 in read-only mode
Eth: the pool list contains 3 pools
Eth: primary pool: eth-au.dwarfpool.com:8008
Starting GPU mining
Eth: Connecting to ethash pool eth-au.dwarfpool.com:8008 (proto: EthProxy)
Eth: Connected to ethash pool eth-au.dwarfpool.com:8008 (163.47.16.147)
Eth: New job #4330e9cb from eth-au.dwarfpool.com:8008; diff: 2000MH
GPU1: Starting up... (0)
Eth: Generating light cache for epoch #187
GPU2: Starting up... (0)
GPU1: 34C 49%, GPU2: 45C 0%
build: Failed to GPU1 program: clBuildProgram (-11)
GPU1: Failed to load kernels: clCreateKernel (-46)
Eth: New job #58668d23 from eth-au.dwarfpool.com:8008; diff: 2000MH
GPU1: Starting up... (0)
Thread(s) not responding. Restarting.
Eth speed: 0.000 MH/s, shares: 0/0/0, time: 0:00
GPUs: 1: 0.000 MH/s (0) 2: 0.000 MH/s (0)
What is your RAM and what is your virtual memory right now? Thanks.

16GB Ram
50,000 MB Page
jr. member
Activity: 222
Merit: 2
PhoenixMiner developer, when you release a newer version of this miner software, please also state which version of the Nvidia driver you tested it with. Different Nvidia drivers often give different results and different problems, so indicating which version you work with would help avoid all this.



======

WiNEther Miner is a graphical interface for ethminer with advanced watchdog options and monitoring. Download: https://github.com/digitalpara/WiNETH
newbie
Activity: 3
Merit: 0
At each DAG update [so every 5 days and a few hours] I'm running into the issue shown in the screenshot:

https://pasteboard.co/HlSWewG.png

I looked through the readme file for a fix for this; I already have 16 GB of virtual memory allocated, so that shouldn't be the issue.

The cards in question are all Nvidia, so the options suggested won't work [as the description says, only for AMD].

Any help is appreciated!

Cheers,

Greg
member
Activity: 367
Merit: 34
Hello all, I could really use some help here.

I switched from Claymore 11.6 to PhoenixMiner 2.9e due to the DAG and MS Windows 10 issues that prevent the Nvidia 1060 3GB cards from working.  However, for the last 3 weeks I have been getting an illegal memory access error:

 GPU10 CUDART error in CudaProgram.cu:127 : an illegal memory access was encountered (77)
2018.05.15:15:13:00.328: GPU11 CUDA error in CudaProgram.cu:102 : an illegal memory access was encountered (700)

To be clear, I can no longer get the Nvidia 1060 cards to work with Claymore or PhoenixMiner on ETH or PIRL....

OC'ing the cards seems to be a common cause, but I'm not OC'ing them; I can't even get to the point of doing that...

Would welcome any thoughts, I can't seem to find any solutions online.

Russell

The memory errors, I have no idea about.
But you should start shopping for new cards with a minimum of 4GB of memory.
At best you probably have a few weeks or a month or so of usage left on those 3GB cards.
Then you won't be able to mine Ethereum with them.

See:  https://investoon.com/tools/dag_size




3GB cards will last into 2019 with Windows 7 or Linux.
newbie
Activity: 31
Merit: 0
So I have a very basic question I would like to verify. Say I am mining EtherGem solo through Geth. That coin's difficulty goes up and down, and right now it ranges from about 700 MH to about 1.2 GH. When I log in to my miners and see a miner that has been running for about 5 days, it states the highest difficulty level for a found block as something like 120 GH. Why does it have a result that is some 120x more difficult than what is needed to win the block?

And is that what it means, that I found something at 120 GH during that 5-day period?! If so, that is pretty awesome, if that is what it means. Thank you.
newbie
Activity: 51
Merit: 0
So is anyone able to help with this issue

Code:
GPU1: Radeon RX Vega (pcie 38), OpenCL 2.0, 8 GB VRAM, 64 CUs
GPU2: Radeon RX 580 Series (pcie 39), OpenCL 2.0, 8 GB VRAM, 36 CUs
Listening for CDM remote manager at port 3333 in read-only mode
Eth: the pool list contains 3 pools
Eth: primary pool: eth-au.dwarfpool.com:8008
Starting GPU mining
Eth: Connecting to ethash pool eth-au.dwarfpool.com:8008 (proto: EthProxy)
Eth: Connected to ethash pool eth-au.dwarfpool.com:8008 (163.47.16.147)
Eth: New job #4330e9cb from eth-au.dwarfpool.com:8008; diff: 2000MH
GPU1: Starting up... (0)
Eth: Generating light cache for epoch #187
GPU2: Starting up... (0)
GPU1: 34C 49%, GPU2: 45C 0%
build: Failed to GPU1 program: clBuildProgram (-11)
GPU1: Failed to load kernels: clCreateKernel (-46)
Eth: New job #58668d23 from eth-au.dwarfpool.com:8008; diff: 2000MH
GPU1: Starting up... (0)
Thread(s) not responding. Restarting.
Eth speed: 0.000 MH/s, shares: 0/0/0, time: 0:00
GPUs: 1: 0.000 MH/s (0) 2: 0.000 MH/s (0)
What is your RAM and what is your virtual memory right now? Thanks.
newbie
Activity: 27
Merit: 1
So is anyone able to help with this issue

Code:
GPU1: Radeon RX Vega (pcie 38), OpenCL 2.0, 8 GB VRAM, 64 CUs
GPU2: Radeon RX 580 Series (pcie 39), OpenCL 2.0, 8 GB VRAM, 36 CUs
Listening for CDM remote manager at port 3333 in read-only mode
Eth: the pool list contains 3 pools
Eth: primary pool: eth-au.dwarfpool.com:8008
Starting GPU mining
Eth: Connecting to ethash pool eth-au.dwarfpool.com:8008 (proto: EthProxy)
Eth: Connected to ethash pool eth-au.dwarfpool.com:8008 (163.47.16.147)
Eth: New job #4330e9cb from eth-au.dwarfpool.com:8008; diff: 2000MH
GPU1: Starting up... (0)
Eth: Generating light cache for epoch #187
GPU2: Starting up... (0)
GPU1: 34C 49%, GPU2: 45C 0%
build: Failed to GPU1 program: clBuildProgram (-11)
GPU1: Failed to load kernels: clCreateKernel (-46)
Eth: New job #58668d23 from eth-au.dwarfpool.com:8008; diff: 2000MH
GPU1: Starting up... (0)
Thread(s) not responding. Restarting.
Eth speed: 0.000 MH/s, shares: 0/0/0, time: 0:00
GPUs: 1: 0.000 MH/s (0) 2: 0.000 MH/s (0)
newbie
Activity: 7
Merit: 0
It's kind of a copy of Claymore but better in my view (more info & faster MH/s).

I used exactly the same config for PhoenixMiner as I used for Claymore. I set the mvddc, cvddc, cclock, and memclock to a specific value for each card. I have 6x RX 580 8GB and I set a value for every individual card.
Claymore was consuming 850W at a hashrate of 181 MH/s.
I tried PhoenixMiner 2.9e (exact same parameters) and it used 1200W!!! I didn't notice at first because I was using TeamViewer. After about 4 minutes I checked my miner in the basement because the temperature of the cards had also increased, and I saw 1200W on a 1000W PSU. I immediately switched the power off.

Now I do have 2 questions:


1. How did it consume 350W more?
2. Did I badly damage my 1000W gold+ PSU?

Thanks

As for your question about damaging the PSU, probably not.  The wattage rating on a PSU is what the unit is expected to OUTPUT... in your case, 1000W.  However, it is not perfectly efficient, so the draw from the wall will be higher.  If your PSU is Gold rated (80% efficient), then at maximum capacity it should be pulling about 1200W at the wall.  This isn't a good thing to do in practice, since it generates a lot of heat, leaves no headroom for the system to draw any more power (which would cause instability/crashes), and efficiency decreases at higher loads, which means more $$ for power.  But if you only ran like that for a few minutes, then you don't need to worry about damage.  A 1000W PSU is pretty borderline for six 580s, but if it's working for you then great.

As I said before, try the 18.3.4 drivers (and make sure you enable compute mode in settings) and run PhoenixMiner again with your voltage tweaks.
newbie
Activity: 7
Merit: 0
It's kind of a copy of Claymore but better in my view (more info & faster MH/s).

I used exactly the same config for PhoenixMiner as I used for Claymore. I set the mvddc, cvddc, cclock, and memclock to a specific value for each card. I have 6x RX 580 8GB and I set a value for every individual card.
Claymore was consuming 850W at a hashrate of 181 MH/s.
I tried PhoenixMiner 2.9e (exact same parameters) and it used 1200W!!! I didn't notice at first because I was using TeamViewer. After about 4 minutes I checked my miner in the basement because the temperature of the cards had also increased, and I saw 1200W on a 1000W PSU. I immediately switched the power off.

Now I do have 2 questions:


1. How did it consume 350W more?
2. Did I badly damage my 1000W gold+ PSU?

Thanks

I'm betting that you are running the AMD blockchain drivers? If so, I had the same experience with PhoenixMiner + blockchain drivers: the command-line clock and voltage settings did not take effect. Once I upgraded to Adrenalin 18.3.4, Phoenix worked perfectly... I find it to be more stable than Claymore, for sure.
jr. member
Activity: 36
Merit: 12
It's kind of a copy of Claymore but better in my view (more info & faster MH/s).

I used exactly the same config for PhoenixMiner as I used for Claymore. I set the mvddc, cvddc, cclock, and memclock to a specific value for each card. I have 6x RX 580 8GB and I set a value for every individual card.
Claymore was consuming 850W at a hashrate of 181 MH/s.
I tried PhoenixMiner 2.9e (exact same parameters) and it used 1200W!!! I didn't notice at first because I was using TeamViewer. After about 4 minutes I checked my miner in the basement because the temperature of the cards had also increased, and I saw 1200W on a 1000W PSU. I immediately switched the power off.

Now I do have 2 questions:


1. How did it consume 350W more?
2. Did I badly damage my 1000W gold+ PSU?

Thanks
jr. member
Activity: 47
Merit: 1
So how can we make the fans here work the same as in Claymore? Any ideas? If I set -tt 60 and -fanmax 90 in Claymore, as long as the temperature is higher than -tt 60 the fan stays at 90%; if the temperature is 50C, the fan in Claymore is around 0%, then jumps to 50% and back to 0% again. So how can this miner work the same way?
Actually, it does work like that. Both Phoenix and Claymore have a temperature-management matrix (or a function) that translates the current temperature, target temperature, and min/max speeds into a target fan speed.
Adding the formula/matrix would enrich the documentation, but on the other hand this is a "proprietary" part of the code that is not likely to be revealed. PH, correct me if I am wrong.
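
Purely as an illustration of what such a function might look like (a hypothetical sketch, not the actual PhoenixMiner or Claymore logic, which is proprietary):

Code:
// Hypothetical linear mapping from temperature to fan speed, for illustration only.
// Below -tt the fan stays at -fanmin; between -tt and -tmax it ramps linearly up to -fanmax.
static int TargetFanSpeed(int tempC, int tt, int tmax, int fanMin, int fanMax)
{
    if (tempC <= tt) return fanMin;     // below target temperature: keep the fan as low as allowed
    if (tempC >= tmax) return fanMax;   // at or above the throttle point: maximum configured speed
    double frac = (double)(tempC - tt) / (tmax - tt);
    return fanMin + (int)System.Math.Round(frac * (fanMax - fanMin));
}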

If you wish, you can take over control of the temperature and fan speed yourself with the OC tools.
I am doing this on 2 of my rigs, as both the GPU self-management and PhoenixMiner's temperature management fail to keep the temperature under control in an optimal way (in these particular setups, in my opinion).

On Nvidia, I am doing this as part of the OC (setting the fans to a fixed spin rate) using nvidiaInspector.
Code:
start C:\dev\Tools\Guru3D.com\nvidiaInspector.exe -setBaseClockOffset:0,0,-200 -setMemoryClockOffset:0,0,795 -setPowerTarget:0,59 -setTempTarget:0,1,65 -setFanSpeed:0,45

On AMD GPUs I am using overdrive5_64.exe
Code:
overdrive5_64.exe -a 1 -F 45


One more thing. If you have multiple GPUs, then -tt 60 might not be right. Try -tt 60,60,60 ... listing a target temperature for each GPU rather than just one.