I have been getting this error regularly since 2018.04.22:18:14. I do not remove logs, so I have the full history. Sometimes I get other errors. My error history is the following:
....
We are working on some changes/workarounds for Nvidia cards that will be included in 3.0, and when they are ready, we will release a beta version and ask any affected miners to test if fixes are working for them.
Hello,
With this version I have lots of share timeouts.
I never had this problem, I have stable Internet connection and using DwarfPool.
Here is the log:
How can I fix it?
Regards,
Marcin
We use the term "share timeout" to report that the miner didn't hear back from the pool about a submitted share. We wait about 10 minutes and then count the share as rejected, even though it may have been accepted by the pool, which for some reason failed to send a response. There are no changes in PhoenixMiner that would cause more or fewer such shares - it is entirely up to the pool to respond with either an accepted or rejected message after receiving a share. A possible reason is that the pool (or its particular server) is overloaded and treats share acceptance messages as low priority, so if a share is sent during a period of high network activity (like a few job changes one after the other), these messages are simply "lost" by the pool.
TLDR: This is not caused by PhoenixMiner but by the pool, and the only thing you can do is change the pool (or its server) if this happens too often. How often is "too often" is up to you, but you can use the rejected shares as a guide, as we (conservatively) count shares without a response as rejected.
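The bookkeeping described above can be sketched roughly like this (illustrative Python only, not the miner's actual code; the ~10-minute window and the conservative "timeout counts as rejected" rule are the only details taken from the post):

```python
import time

SHARE_TIMEOUT = 600  # seconds, roughly the 10-minute window described above

class ShareTracker:
    def __init__(self):
        self.pending = {}   # share id -> submission timestamp
        self.accepted = 0
        self.rejected = 0

    def submit(self, share_id):
        # Remember when the share was sent to the pool
        self.pending[share_id] = time.monotonic()

    def pool_response(self, share_id, ok):
        # Normal case: the pool answered with accepted/rejected
        if self.pending.pop(share_id, None) is not None:
            if ok:
                self.accepted += 1
            else:
                self.rejected += 1

    def expire(self, now=None):
        # Shares with no pool response for SHARE_TIMEOUT seconds are
        # conservatively counted as rejected ("share timeout")
        now = time.monotonic() if now is None else now
        for share_id, sent in list(self.pending.items()):
            if now - sent >= SHARE_TIMEOUT:
                del self.pending[share_id]
                self.rejected += 1
```

The point of the sketch is that the timeout path is driven entirely by the pool's silence - the miner has nothing to do with whether the response arrives.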
The legendary AMD 15.12 driver triggers the debugging error. Please add some sort of exception/fix for it in your next release.
Thank you for reporting this. We almost never see this problem in our testing, so it will be good to have one reliable test case that always fails.
Dev, when will new coins appear, such as Egem and Clo?
They are already implemented in the development version, we will probably release a beta in a week or so.
Hi Phoenix, I have tried your miner and it is very good, except that I can't set a single GPU to 50% usage so I can still use it for editing and gaming. In Claymore I can do that; any help would be greatly appreciated.
You can use the -mi 0 option to give priority to whatever else you are doing on the GPU (gaming, etc.). For example, if you have five GPUs and the third one is connected to your display, use the following:
-mi 14,14,0,14,14
You can additionally lower the GPU load by using the -li option. In our example with five GPUs, that would be:
-li 0,0,1,0,0
Increase the -li option value as necessary.
So how can we make the fans here work the same as in Claymore? Any ideas? If I set -tt 60 and -fanmax 90 in Claymore, then as long as the temperature is higher than 60C, the fan stays at 90%. If the temperature is 50C, the fan in Claymore sits around 0%, jumps to 50%, and then drops back to 0%. How can this miner work the same way?
In PhoenixMiner it depends on the card's BIOS and the drivers, but generally the fan spins as low as possible (though no lower than -fanmin) until the temperature hits -tt. Then the fan gradually spins up until it hits -fanmax. If the temperature is still rising when it hits -tmax, the card will begin to throttle in order to keep the temperature at about -tmax.
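A simplified model of that behavior (illustrative Python; the linear ramp between -tt and -tmax is an assumption for the sketch - the real curve depends on the card's BIOS and drivers, as noted above):

```python
def fan_speed(temp, tt=60, tmax=85, fanmin=0, fanmax=100):
    """Approximate fan duty cycle (%) for a given temperature (C).

    Below -tt the fan stays as low as allowed (fanmin); between -tt and
    -tmax it ramps up toward fanmax; at -tmax the card starts throttling.
    The linear ramp here is an illustrative assumption only.
    """
    if temp <= tt:
        return fanmin
    if temp >= tmax:
        return fanmax
    frac = (temp - tt) / (tmax - tt)
    return fanmin + frac * (fanmax - fanmin)
```

For example, with the defaults above, 50C gives the minimum fan speed, and the fan only reaches fanmax as the temperature approaches the throttle point.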
Trying to get JSON-RPC responses from Phoenix and having some serious issues. If someone could tell me what I did wrong here I would greatly appreciate it; this code works perfectly fine for Claymore's, but for PhoenixMiner it doesn't do anything.
try
{
    var clientSocket = new System.Net.Sockets.TcpClient();
    if (clientSocket.ConnectAsync("127.0.0.1", 3337).Wait(5000))
    {
        // The trailing \n is required - the miner reads line-delimited requests
        string get_menu_request = "{\"id\":0,\"jsonrpc\":\"2.0\",\"method\":\"miner_getstat1\"}\n";
        NetworkStream serverStream = clientSocket.GetStream();
        byte[] outStream = System.Text.Encoding.ASCII.GetBytes(get_menu_request);
        serverStream.Write(outStream, 0, outStream.Length);
        serverStream.Flush();
        byte[] inStream = new byte[clientSocket.ReceiveBufferSize];
        // Use the number of bytes actually read; decoding the whole buffer
        // would append garbage after the response
        int bytesRead = serverStream.Read(inStream, 0, clientSocket.ReceiveBufferSize);
        if (bytesRead == 0)
        {
            throw new Exception("Invalid data");
        }
        string _returndata = System.Text.Encoding.ASCII.GetString(inStream, 0, bytesRead);
        Console.WriteLine(_returndata);
    }
    else
    {
        // Connection attempt timed out after 5 seconds
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    Console.WriteLine(ex.StackTrace);
    stats.ex = ex;
    logger.LogWrite("Host socket exception: " + ex.ToString());
}
Appreciate the help.
At first glance the code seems OK. The usual reason for this problem is a missing newline (\n) at the end of the request, but you seem to have it. Just make sure that it doesn't get mangled in some way during the conversion of the request string from ASCII to a byte array.
In version 2.9e, after closing the miner, I get errors like these:
In version 2.8c everything is fine. Video cards 1070 and 1070 Ti.
It's just another reincarnation of the Nvidia errors with overclocked cards under 2.9e. For now, either use 2.8 for your Nvidia cards or lower the overclock settings. We are trying to make the new kernels more stable under overclocked conditions, but fundamentally the higher speed loads the hardware more, and hence any hardware problems at high memory overclocks become more critical.
Hi,
How do I solve this? I'm using PhoenixMiner 2.9e and get this error every 10 minutes:
2018.05.17:19:27:32.267: eths Eth: Unable to read pool response: The semaphore timeout period has expired
2018.05.17:19:27:32.267: eths Eth: Reconnecting in 20 seconds...
There is some kind of problem with the pool. Switch to a different pool or a different server of the same pool.
It's kind of a copy of Claymore, but better in my view (more info and a faster hashrate).
I used exactly the same config for PhoenixMiner as I used for Claymore. I set mvddc, cvddc, cclock, and memclock to a specific value for each card. I have 6x RX 580 8GB and set a value for every individual card.
Claymore was consuming 850W at a hashrate of 181 MH/s.
I tried PhoenixMiner 2.9e (exact same parameters) and it used 1200W! I didn't notice at first because I was using TeamViewer. After about 4 minutes I checked my miner in the basement because the temperature of the cards had also increased, and then I saw 1200W on a 1000W PSU. I immediately switched the power off.
Now I do have 2 questions:
1. How did it consume 350W more?
2. Did I badly damage my 1000W gold+ PSU?
Thanks
If you are using older AMD drivers (blockchain beta drivers, or any driver before 18.x.x), the voltage settings won't work properly in PhoenixMiner. So either install new drivers or use a third-party tool to control the voltages. With the same voltages, the power usage will be roughly the same (within a few percent) between our miner and Claymore.
As for your second question - no, you didn't damage anything. First, the power rating of a PSU is at its output, and at high load its efficiency is usually no more than 85%, so at the input (the electrical outlet, where you are measuring the consumption) that is 1000/0.85 = 1176W. Second, every high-end PSU has multiple protections and will shut down automatically if pushed near its limits. That being said, a continuous (24/7) load above 70-80% of the PSU's rated power is not recommended.
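The arithmetic in one line (Python; the 85% efficiency figure is the estimate from the answer above, not a measured value):

```python
def input_power(output_watts, efficiency=0.85):
    """Power drawn at the wall for a given load at the PSU's output."""
    return output_watts / efficiency

# A "1000W" PSU at full output and ~85% efficiency draws about 1176W
# at the outlet, which is where a wall meter measures.
```

So a wall-meter reading above the PSU's rated wattage does not by itself mean the PSU was overloaded at its output.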
So is anyone able to help with this issue
GPU1: Radeon RX Vega (pcie 38), OpenCL 2.0, 8 GB VRAM, 64 CUs
GPU2: Radeon RX 580 Series (pcie 39), OpenCL 2.0, 8 GB VRAM, 36 CUs
Listening for CDM remote manager at port 3333 in read-only mode
Eth: the pool list contains 3 pools
Eth: primary pool: eth-au.dwarfpool.com:8008
Starting GPU mining
Eth: Connecting to ethash pool eth-au.dwarfpool.com:8008 (proto: EthProxy)
Eth: Connected to ethash pool eth-au.dwarfpool.com:8008 (163.47.16.147)
Eth: New job #4330e9cb from eth-au.dwarfpool.com:8008; diff: 2000MH
GPU1: Starting up... (0)
Eth: Generating light cache for epoch #187
GPU2: Starting up... (0)
GPU1: 34C 49%, GPU2: 45C 0%
build: Failed to GPU1 program: clBuildProgram (-11)
GPU1: Failed to load kernels: clCreateKernel (-46)
Eth: New job #58668d23 from eth-au.dwarfpool.com:8008; diff: 2000MH
GPU1: Starting up... (0)
Thread(s) not responding. Restarting.
Eth speed: 0.000 MH/s, shares: 0/0/0, time: 0:00
GPUs: 1: 0.000 MH/s (0) 2: 0.000 MH/s (0)
Is this a regular AMD Vega GPU, or one of the first or PRO editions? Also, it would be helpful to tell us the driver version.
So I have a very basic question I would like to verify. Say I am mining EtherGem solo through Geth. That coin's difficulty goes up and down, and right now it ranges from about 700 MH to about 1.2 GH. When I log in to my miners and look at one that has been running for about 5 days, its stats show the highest difficulty level for a found block as something like 120 GH. Why does it report a result some 120x more difficult than what is needed to win the block?
And is that what it means - that during the 5-day period I found something at 120 GH?! If so, that is pretty awesome. Thank you.
The network difficulty of EtherGEM is currently between 600 and 1000 GH. Note that you may find a block even with lower difficulty, but 120 GH is in no way a guarantee that you have found a block. You just need to check your wallet to see if you have received a block reward.
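The relationship can be stated as one comparison (illustrative Python; a solution's "difficulty" here means the highest network difficulty that solution would have satisfied):

```python
def wins_block(solution_difficulty, network_difficulty):
    """A found solution wins the block only if its difficulty meets or
    exceeds the network difficulty at the moment it is found."""
    return solution_difficulty >= network_difficulty

# With EtherGEM's network difficulty around 600-1000 GH, a highest find
# of 120 GH falls short, which is why checking the wallet for a block
# reward is the only reliable confirmation.
```

The "highest difficulty" statistic is just the best solution seen so far; it only wins if the network difficulty was at or below it at the time.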
At DAG update [so every 5 days and a few hours] I'm running into the issue shown in the screenshot:
https://pasteboard.co/HlSWewG.png
I tried to find in the readme file whether there's a fix for this; I already have 16 GB of virtual memory allocated, so that shouldn't be the issue.
The cards in question are all Nvidia, so the suggested codes won't work [as the title says, they are only for AMD].
Any help is appreciated!
Cheers,
Greg
This is more likely overclocking- and/or PSU-related. If your cards are overclocked, they may fail during DAG generation, as it is more computationally intensive than the normal ethash computation. Another possible reason is a PSU that is at its limit. During DAG generation the cards draw more power, and if the PSU is near its limit, the voltages may drop too much and the cards will crash. In the next release we will add the -gser option to serialize DAG generation, which solves the second problem.
PhoenixMiner developer, when you work on a newer version of this miner, please also state which version of the Nvidia driver you tested with. Different Nvidia drivers often produce different results and different problems, so to avoid all this, please indicate which version you work with.
We usually test with a fairly new driver version, but we also have rigs with older versions. However, it is not possible to test with every version. The rule of thumb is (as always): if it works, don't break it. There is virtually nothing to gain from the latest Nvidia drivers, because the current GPUs are based on a fairly old architecture, and future driver optimizations are just game-related fixes, etc.
So is anyone able to help with this issue
GPU1: Radeon RX Vega (pcie 38), OpenCL 2.0, 8 GB VRAM, 64 CUs
build: Failed to GPU1 program: clBuildProgram (-11)
GPU1: Failed to load kernels: clCreateKernel (-46)
What is your RAM and what is your virtual memory set to right now? Thanks.
16 GB RAM
50,000 MB page file
This is a bug with the Vega 64 only. I have seen it since the Windows update and driver 18.4.1. No fix yet, as PM seems to have built the Vega kernels with only a Vega 56. Hopefully it's fixed in the new version.
We are waiting for the Vega 64s to arrive and will fix it for sure.
So I have a very basic question I would like to verify. Say I am Mining EtherGem solo through Geth. [...]
Dev, if you could answer this real quick in your next post I would appreciate it - just one sentence if you can. The other thing I did today: I grabbed about 200 MB of log files and threw all of that data into Excel, some 900k+ lines. I don't think Excel would even let me use all of the data I grabbed, and needless to say Excel did not like it. So I sorted all of my found shares and block-found difficulties for the past couple of weeks. My highest find was 455 TH, so about half a PH, or 1/6 of the way to finding an Ethereum block at the current difficulty.
While fighting Excel through this exercise (aside from making a macro I can run to truncate, sort, and filter all the logs), would it be possible on your end to display the highest difficulty at the end of each round - not just shares or blocks found, but a snapshot of the highest reached difficulty once a new block arrives? I guess that would never be higher than the coin's current difficulty, because then you would have found the block, but if working on a high-difficulty coin it would still be nice to see how high you got for each block. That raises a question: say at the end of block X I got to 1.2 PH, and then the block ended and a new block arrived. On a pool this would almost certainly have resulted in a share, but you would have shot past the share difficulty while still climbing toward the block difficulty, so the end-of-block highest difficulty is still valid.
But for those of us who solo mine 99% of the time, we don't have pesky shares to deal with, and at the end of each block we could see how high we got, or even "this is your block height and this is the current block difficulty... you achieved 32% of the block" (achieved difficulty / current difficulty).
Just thinking about this - I have no idea if you are even able to capture the end-of-block highest GPU difficulty achieved, but that data would be really awesome to see and look through, to see how close we get to hitting an Ethereum block. My calcs tell me my own rigs need 72 days (I even made my own calculator, all tweaked out, and it says the same), but watching the progress as it works would be awesome! Anyway, just some thoughts. Great miner, by the way - thanks a ton.
It's doable and not a bad idea; however, it will have to wait in line behind other TODOs. It may be possible to do something like this for pool mining too, but then we would have to get the network difficulty from another source, as the pool doesn't report it.
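In the meantime, the Excel workflow described above can be scripted. A rough sketch in Python (the log line format below is hypothetical - PhoenixMiner's real log lines differ, so the regex must be adapted to whatever your version actually prints):

```python
import re

# Hypothetical log line format for illustration only; adapt LINE_RE to
# the actual share/difficulty lines in your miner's logs.
LINE_RE = re.compile(r"share difficulty (?P<diff>[\d.]+) (?P<unit>[MGT])H")

SCALE = {"M": 1e6, "G": 1e9, "T": 1e12}

def highest_difficulty(lines):
    """Return the highest share difficulty (in hashes) seen in the logs."""
    best = 0.0
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            best = max(best, float(m.group("diff")) * SCALE[m.group("unit")])
    return best
```

Feeding all log files through a function like this avoids Excel's row limits entirely and gives the per-period "highest find" number directly.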
Hey,
I have a problem. I was using PhoenixMiner 2.5d, and sometimes when a CUDA error happened (about once a month) the miner restarted itself without a problem. I started using 2.9e, but whenever a CUDA error happens the miner just stays stuck at the restarting part without actually restarting. What could be the problem?
start.bat (address removed)
setx GPU_FORCE_64BIT_PTR 0
setx GPU_MAX_HEAP_SIZE 100
setx GPU_USE_SYNC_OBJECTS 1
setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_SINGLE_ALLOC_PERCENT 100
PhoenixMiner.exe -pool eth-eu2.nanopool.org:9999 -pool2 eth-eu1.nanopool.org:9999 -wal -pass x -coin eth -cdm 0 -nvidia -wdog 1 -rmode 1
pause
.....
If you press any key, the miner just closes itself and you have to restart it manually; if you remove the pause from the bat file, the miner simply closes itself.
Windows 10 64-bit, no April update because that almost killed the rig.
Thanks in advance.
We are working on a fix for these problems.