
Topic: [ANN] ccminer 2.3 - opensource - GPL (tpruvot) - page 167. (Read 500112 times)

legendary
Activity: 1484
Merit: 1082
ccminer/cpuminer developer
I will check yaamp... And yes, there was a small decrease, but it is not related to the optimisations; it comes from a change in scan ranges.

v1.5 is not finished, still under heavy work ;)

Do you know the commit hash of your build? (git show)

edit: for sm_30, yes, I know; I will make a special change for that (I pushed the sp commit by mistake)

edit2: yaamp fixed, thanks for the bug report :)
sr. member
Activity: 329
Merit: 250
Thank you Epsylon3 for this miner. I'm testing 1.5-git and found two issues:
Code:
*** ccminer 1.5-git for nVidia GPUs by tpruvot@github ***
        Built with the nVidia CUDA SDK 6.5

  Based on pooler cpuminer 2.3.2
  CUDA support by Christian Buchner and Christian H.
  Include some of djm34 additions and sp optimisations

BTC donation address: 1AJdfCpLWPNoAMDfHF1wD5y8VgKSSTHxPo

[2014-11-23 03:34:06] Starting Stratum on stratum+tcp://yaamp.com:3633
[2014-11-23 03:34:06] NVML GPU monitoring enabled.
[2014-11-23 03:34:06] 1 miner threads started, using 'x13' algorithm.
[2014-11-23 03:34:06] Stratum extranonce answer id is not correct!
[2014-11-23 03:34:06] stratum time is at least 34s in the future
[2014-11-23 03:34:06] yaamp.com:3633 x13 block 8885
[2014-11-23 03:34:06] accepted: 1/1 (100.00%), 0.00 khash/s yay!!!
[2014-11-23 03:34:16] stratum time is at least 87s in the future
[2014-11-23 03:34:16] yaamp.com:3633 x13 block 8885
[2014-11-23 03:34:16] GPU #0: GeForce GTX 660, 1291 kH/s
[2014-11-23 03:35:16] GPU #0: GeForce GTX 660, 1287 kH/s
[2014-11-23 03:35:19] stratum time is at least 88s in the future
[2014-11-23 03:35:19] yaamp.com:3633 x13 block 8885
[2014-11-23 03:35:19] GPU #0: GeForce GTX 660, 1290 kH/s
[2014-11-23 03:36:20] GPU #0: GeForce GTX 660, 1285 kH/s
[2014-11-23 03:36:24] yaamp.com:3633 x13 block 8885
[2014-11-23 03:36:24] GPU #0: GeForce GTX 660, 1290 kH/s
[2014-11-23 03:36:31] yaamp.com:3633 x13 block 78884
[2014-11-23 03:36:31] GPU #0: GeForce GTX 660, 1290 kH/s
While the mining.set_extranonce feature works fine with NiceHash and has no side effects on trademybit.com, it breaks mining on yaamp.com (after the first share, no more "yay"s).
Second, the hash rate has decreased by ~200 kH/s on all x## algos compared with version 1.4.9 on sm_30 (GTX 660). Do you think it's caused by changes made in the calculation of the average hash rate, or by the recent sm_50 optimizations?
legendary
Activity: 1484
Merit: 1082
ccminer/cpuminer developer
Is there an AMD version of this? I skimmed through the other post and didn't see an answer.

No... CUDA is an NVIDIA compute language.


tpruvot's ccminer-

Since adding extranonce compatibility and other recent improvements to ccminer, I have seen a much lower error rate when mining x11/x13.         --scryptr

Thanks :) I try to reduce the problems I see... It's not perfect for the moment, and 1.5 is not ready, but it is usable.
newbie
Activity: 36
Merit: 0
Is there an AMD version of this? I skimmed through the other post and didn't see an answer.
legendary
Activity: 1797
Merit: 1028
tpruvot's ccminer-

Since adding extranonce compatibility and other recent improvements to ccminer, I have seen a much lower error rate when mining x11/x13.         --scryptr
sr. member
Activity: 476
Merit: 250
Custom fan or stock (NVIDIA) fan?
I have the stock fan, which I actually prefer... my 290X was an MSI Twin Frozr, and the 780 Ti on top of it was totally unmanageable because the 290X was blowing hot air on it...

I also use the fan speed curve with MSI AB (the temp limit is 79°C, but it never went that far...)

No matter what, this is within NVIDIA's standard temp range, but it is still surprising...

G1s have the custom Windforce setup, which should be plenty of cooling. I'm sure they would go down to maybe 65 degrees if they weren't inside the case and sometimes set up for SLI, but I can't imagine they'd ever be cooler than that. All of my 750s and 750 Tis run 50-61°C most of the time, so it seems normal to me that the 970 would be about 10-20°C more.
legendary
Activity: 1400
Merit: 1050
I'm not so sure P0 is really faster on the 9xx... memory speed is a bit lower in P2, but the (max) core freq seems to be the same:

GPU temp on the 970 is 77°C :o How do you do that? :D (my 980 never goes beyond 73~74°C at 120% TDP)
Never mind, I see your 750 Ti is at 71°C (I hope it is only a setting and not its real temp...)

P0 isn't necessarily the fastest, but it is the easiest to overclock...
The main problem with P2 is that you also need NVIDIA Inspector to overclock it...
I can overclock the core clock and TDP with MSI AB, but to overclock the mem clock I need NVIDIA Inspector, and two programs to overclock the same cards is a bit too much...

Now I am not sure if it is possible to change it; from what I read, it seems that any CUDA application (not a game, obviously) runs at P2 by design...

Simple: 2 cards in the same tower running ccminer ;)
Still, this is hot. I have 2 GTX 980s in my tower (the 750 and 780 are outside) and it doesn't go that high...
but the fans run higher too (~90% for the 980). For the 750 Ti this is really abnormal; I never saw that card go higher than 60°C (outside or inside the tower).
You should add some autofan to ccminer (or use MSI AB)

My 970 Gigabyte G1's are never cooler than 71°C while mining. That's not normal? I have plenty of fans, custom water cooling on the CPU, and the fans are set to 75-100% most of the time. I used to run with the side panel off so it would stay around 75°C, but it only goes up to 78°C with it on, so I just set the temp limit in Precision X to 77°C (95% power) and stick with that.

Even with a large fan blowing directly at them it stays around 70°C. Pretty sure those temps are completely normal.
Custom fan or stock (NVIDIA) fan?
I have the stock fan, which I actually prefer... my 290X was an MSI Twin Frozr, and the 780 Ti on top of it was totally unmanageable because the 290X was blowing hot air on it...

I also use the fan speed curve with MSI AB (the temp limit is 79°C, but it never went that far...)

No matter what, this is within NVIDIA's standard temp range, but it is still surprising...
sr. member
Activity: 476
Merit: 250
I'm not so sure P0 is really faster on the 9xx... memory speed is a bit lower in P2, but the (max) core freq seems to be the same:

GPU temp on the 970 is 77°C :o How do you do that? :D (my 980 never goes beyond 73~74°C at 120% TDP)
Never mind, I see your 750 Ti is at 71°C (I hope it is only a setting and not its real temp...)

P0 isn't necessarily the fastest, but it is the easiest to overclock...
The main problem with P2 is that you also need NVIDIA Inspector to overclock it...
I can overclock the core clock and TDP with MSI AB, but to overclock the mem clock I need NVIDIA Inspector, and two programs to overclock the same cards is a bit too much...

Now I am not sure if it is possible to change it; from what I read, it seems that any CUDA application (not a game, obviously) runs at P2 by design...

Simple: 2 cards in the same tower running ccminer ;)
Still, this is hot. I have 2 GTX 980s in my tower (the 750 and 780 are outside) and it doesn't go that high...
but the fans run higher too (~90% for the 980). For the 750 Ti this is really abnormal; I never saw that card go higher than 60°C (outside or inside the tower).
You should add some autofan to ccminer (or use MSI AB)

My 970 Gigabyte G1's are never cooler than 71°C while mining. That's not normal? I have plenty of fans, custom water cooling on the CPU, and the fans are set to 75-100% most of the time. I used to run with the side panel off so it would stay around 75°C, but it only goes up to 78°C with it on, so I just set the temp limit in Precision X to 77°C (95% power) and stick with that.

Even with a large fan blowing directly at them it stays around 70°C. Pretty sure those temps are completely normal.
legendary
Activity: 3164
Merit: 1003
Has anyone noticed much difference in hash rates between a 970 and a 970 OC at stock clock offsets? Thanks.
legendary
Activity: 1400
Merit: 1050
I'm not so sure P0 is really faster on the 9xx... memory speed is a bit lower in P2, but the (max) core freq seems to be the same:

GPU temp on the 970 is 77°C :o How do you do that? :D (my 980 never goes beyond 73~74°C at 120% TDP)
Never mind, I see your 750 Ti is at 71°C (I hope it is only a setting and not its real temp...)

P0 isn't necessarily the fastest, but it is the easiest to overclock...
The main problem with P2 is that you also need NVIDIA Inspector to overclock it...
I can overclock the core clock and TDP with MSI AB, but to overclock the mem clock I need NVIDIA Inspector, and two programs to overclock the same cards is a bit too much...

Now I am not sure if it is possible to change it; from what I read, it seems that any CUDA application (not a game, obviously) runs at P2 by design...

Simple: 2 cards in the same tower running ccminer ;)
Still, this is hot. I have 2 GTX 980s in my tower (the 750 and 780 are outside) and it doesn't go that high...
but the fans run higher too (~90% for the 980). For the 750 Ti this is really abnormal; I never saw that card go higher than 60°C (outside or inside the tower).
You should add some autofan to ccminer (or use MSI AB)
member
Activity: 81
Merit: 1002
It was only the wind.
I'm not so sure P0 is really faster on the 9xx... memory speed is a bit lower in P2, but the (max) core freq seems to be the same:

GPU temp on the 970 is 77°C :o How do you do that? :D (my 980 never goes beyond 73~74°C at 120% TDP)
Never mind, I see your 750 Ti is at 71°C (I hope it is only a setting and not its real temp...)

P0 isn't necessarily the fastest, but it is the easiest to overclock...
The main problem with P2 is that you also need NVIDIA Inspector to overclock it...
I can overclock the core clock and TDP with MSI AB, but to overclock the mem clock I need NVIDIA Inspector, and two programs to overclock the same cards is a bit too much...

Now I am not sure if it is possible to change it; from what I read, it seems that any CUDA application (not a game, obviously) runs at P2 by design...

Games run at P0; therefore, a miner can run at P0, even if you need to disassemble a game and rip the code out! :D
legendary
Activity: 1484
Merit: 1082
ccminer/cpuminer developer
I'm not so sure P0 is really faster on the 9xx... memory speed is a bit lower in P2, but the (max) core freq seems to be the same:

GPU temp on the 970 is 77°C :o How do you do that? :D (my 980 never goes beyond 73~74°C at 120% TDP)
Never mind, I see your 750 Ti is at 71°C (I hope it is only a setting and not its real temp...)

P0 isn't necessarily the fastest, but it is the easiest to overclock...
The main problem with P2 is that you also need NVIDIA Inspector to overclock it...
I can overclock the core clock and TDP with MSI AB, but to overclock the mem clock I need NVIDIA Inspector, and two programs to overclock the same cards is a bit too much...

Now I am not sure if it is possible to change it; from what I read, it seems that any CUDA application (not a game, obviously) runs at P2 by design...

Simple: 2 cards in the same tower running ccminer ;)
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
The API is open. It seems you just need to call NvAPI_GPU_GetPstates20:

NVAPI_INTERFACE NvAPI_GPU_GetPstates20(__in NvPhysicalGpuHandle hPhysicalGpu,
                                       __inout NV_GPU_PERF_PSTATES20_INFO *pPstatesInfo);

http://docs.nvidia.com/gameworks/content/gameworkslibrary/coresdk/nvapi/group__gpupstate.html#gaeffe0838ca9850b9984fa9be117f637e

Detailed Description

The GPU performance state APIs are used to get and set various performance levels on a per-GPU basis. P-States are GPU active/executing performance capability and power consumption states.

P-States range from P0 to P15, with P0 being the highest performance/power state and P15 being the lowest. Each P-State maps to a performance level. Not all P-States are available on a given system. The definitions of the P-States are currently as follows:

P0/P1 - Maximum 3D performance
P2/P3 - Balanced 3D performance-power
P8 - Basic HD video playback
P10 - DVD playback
P12 - Minimum idle power consumption


Here is the method to call:

NVAPI_INTERFACE NvAPI_GPU_GetPstates20(__in NvPhysicalGpuHandle hPhysicalGpu,
                                       __inout NV_GPU_PERF_PSTATES20_INFO *pPstatesInfo);

DESCRIPTION: This API retrieves all performance state (P-State) 2.0 information. (The P-State definitions are as listed above.)
TCC_SUPPORTED

Since:
Release: 295
SUPPORTED OS: Windows XP and higher

Parameters:
[in]   hPhysicalGPU   GPU selection
[out]   pPstatesInfo   P-States information retrieved, as documented in declaration above
Returns:
This API can return any of the error codes enumerated in NvAPI_Status. If there are return error codes with specific meaning for this API, they are listed below.
legendary
Activity: 1400
Merit: 1050
I'm not so sure P0 is really faster on the 9xx... memory speed is a bit lower in P2, but the (max) core freq seems to be the same:

GPU temp on the 970 is 77°C :o How do you do that? :D (my 980 never goes beyond 73~74°C at 120% TDP)
Never mind, I see your 750 Ti is at 71°C (I hope it is only a setting and not its real temp...)

P0 isn't necessarily the fastest, but it is the easiest to overclock...
The main problem with P2 is that you also need NVIDIA Inspector to overclock it...
I can overclock the core clock and TDP with MSI AB, but to overclock the mem clock I need NVIDIA Inspector, and two programs to overclock the same cards is a bit too much...

Now I am not sure if it is possible to change it; from what I read, it seems that any CUDA application (not a game, obviously) runs at P2 by design...

Games run at P0; therefore, a miner can run at P0, even if you need to disassemble a game and rip the code out! :D
I don't think it is really needed, but I haven't seen anything that says what triggers P0... so it is a bit puzzling.
legendary
Activity: 1400
Merit: 1050
I'm not so sure P0 is really faster on the 9xx... memory speed is a bit lower in P2, but the (max) core freq seems to be the same:

GPU temp on the 970 is 77°C :o How do you do that? :D (my 980 never goes beyond 73~74°C at 120% TDP)
Never mind, I see your 750 Ti is at 71°C (I hope it is only a setting and not its real temp...)

P0 isn't necessarily the fastest, but it is the easiest to overclock...
The main problem with P2 is that you also need NVIDIA Inspector to overclock it...
I can overclock the core clock and TDP with MSI AB, but to overclock the mem clock I need NVIDIA Inspector, and two programs to overclock the same cards is a bit too much...

Now I am not sure if it is possible to change it; from what I read, it seems that any CUDA application (not a game, obviously) runs at P2 by design...
legendary
Activity: 1484
Merit: 1082
ccminer/cpuminer developer
I'm not so sure P0 is really faster on the 9xx... memory speed is a bit lower in P2, but the (max) core freq seems to be the same:

sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
You should implement powertune and clock settings. The NVIDIA GPUs also have P0-P15 states that indicate the performance level. ccminer is currently running in P2 (P0 is the fastest).


http://docs.nvidia.com/gameworks/content/gameworkslibrary/coresdk/nvapi/group__gpupstate.html
legendary
Activity: 1484
Merit: 1082
ccminer/cpuminer developer
Thanks... I will add that into the api folder...

API 1.2 is under work... There were also some mismatches on Windows with the monitoring card IDs, only detectable on multi-GPU configs when playing with the -d option.

I plan to add a query to get stratum/pool infos like the block height, URL, etc... and maybe another one for the config and/or the supported algo list.
hero member
Activity: 644
Merit: 500
For those interested, I'm currently writing a small monitoring app in C# for ccminer with tpruvot's API. Here's the C# code for anyone who also wants to play with the API (very basic though; error handling is up to you :P).

Code:
using System;
using System.Net.Sockets;
using System.Text;

namespace ccMonitor
{
    static class Api
    {
        // ccminer's API listens on 127.0.0.1:4068 by default (see --api-bind)
        public static string GetSummary(string ip = "127.0.0.1", int port = 4068)
        {
            return Request(ip, port, "summary");
        }

        public static string GetThreads(string ip = "127.0.0.1", int port = 4068)
        {
            return Request(ip, port, "threads");
        }

        public static string GetHistory(int thread = 0, string ip = "127.0.0.1", int port = 4068)
        {
            return Request(ip, port, "histo|" + thread);
        }

        // Opens a TCP connection, sends the command, and returns the raw reply.
        // A single Read() is enough for these short replies; loop over Read()
        // if you ever expect more than the 2560-byte buffer.
        private static string Request(string ip, int port, string message)
        {
            string responseData;

            using (TcpClient client = new TcpClient(ip, port))
            using (NetworkStream stream = client.GetStream())
            {
                byte[] data = Encoding.ASCII.GetBytes(message);
                stream.Write(data, 0, data.Length);

                data = new byte[2560];

                int bytes = stream.Read(data, 0, data.Length);
                responseData = Encoding.ASCII.GetString(data, 0, bytes);
            }

            return responseData;
        }
    }
}
legendary
Activity: 3164
Merit: 1003
CCMINER crashes-

My rig is running smoothly, with nearly 3,500 accepts since this morning before school. My other, headless, rig has been running for a couple of days with v1.4.9. I ssh into either rig and open "screen" via the command line. I close out of "screen" when gone.

I had poor luck on the problem rig when running it with a keyboard and an attached monitor.     -scryptr

p.s. - currently on NiceHash, two 6-card NVIDIA 750 Ti rigs mining X11 earn less than one 6-card AMD 280X rig mining NeoScrypt. But the twelve 750 Ti cards consume half the electricity of the six 280X cards.       -scryptr
Now you get the point of the 750 Ti, and it's probably cheaper. :)