
Topic: CCminer(SP-MOD) Modded NVIDIA Maxwell / Pascal kernels. - page 1078. (Read 2347601 times)

legendary
Activity: 1470
Merit: 1114
-Faster quark
-Faster groestlcoin
-Faster aes (tiny speedup in most algos)
-added --gpu-memspeed and --gpu-engine but they only work on Windows and for some GPUs (nvidia-smi limitation)


1.5.51(sp-MOD) is available here: (29-may-2015)

https://github.com/sp-hash/ccminer/releases/tag/1.5.51

The sourcecode is available here:

https://github.com/sp-hash/ccminer

Post your stats here. Card name/gpu clock/memclock



Quark seeing 6120 KH/s, up from 6050, on EVGA GTX750ti SC LP no aux pwr, default clocks and ccminer params, CentOS 6.

I noticed you don't include sm52 in the makefile. Any particular reason?
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
newer numbers for lyra:
gtx980:   1990kh/s
gtx750ti: 1140kh/s
gtx780ti: 2787kh/s (lol doesn't seem to want to stop increasing...)

I suspect you have done something like:

1. Reduced the register count
2. changed the launch config like in cudaminer/the cryptonight miner, -l 8x60 etc.

am I right?
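
For reference, those two changes look roughly like this in CUDA. This is an illustrative sketch only, assuming a made-up kernel name and placeholder numbers; it is not the actual lyra2 code, and the register cap could just as well come from nvcc's -maxrregcount.

Code:
#include <stdint.h>

// 1. cap registers per thread via launch bounds (max 64 threads/block,
//    ask the compiler to fit at least 8 blocks per SM)
__global__ void __launch_bounds__(64, 8)
dummy_lyra_kernel(uint32_t *d_hash, uint32_t startNonce)
{
    uint32_t thread = blockIdx.x * blockDim.x + threadIdx.x;
    d_hash[thread] = startNonce + thread;   // placeholder work
}

// 2. a cudaminer-style "-l 8x60" launch config read as blocks x threads
void launch_dummy(uint32_t *d_hash, uint32_t startNonce)
{
    dim3 grid(8);
    dim3 threads(60);                        // d_hash must hold 8*60 entries
    dummy_lyra_kernel<<<grid, threads>>>(d_hash, startNonce);
}

Trading registers for occupancy is the usual source of this kind of jump, though only sp_ can confirm what he actually changed.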
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
speeds build 51 (factory standard clocks)

groestl:
gtx 970 (gigabyte oc): 23.3 MH/s
gtx 750ti (gigabyte windforce): 7.56 MH/s

quark:
gtx 970 (gigabyte oc): 16.2 MH/s (up from 15.7 in build 50)
gtx 750ti (gigabyte windforce): 5.7 MH/s


legendary
Activity: 1400
Merit: 1050
-Faster quark
-Faster groestlcoin
-Faster aes (tiny speedup in most algos)
-added --gpu-memspeed and --gpu-engine but they only work on Windows and for some GPUs (nvidia-smi limitation)


1.5.51(sp-MOD) is available here: (29-may-2015)

https://github.com/sp-hash/ccminer/releases/tag/1.5.51

The sourcecode is available here:

https://github.com/sp-hash/ccminer

Post your stats here. Card name/gpu clock/memclock


how much faster is the new groestl?
By the way, there is a bug in ccminer.cpp which is the origin of the duplicate shares:
the line resetting the nonce value when checking if the target has changed in miner_thread
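
A hypothetical fragment of what a guarded miner_thread update could look like; the names work, g_work, have_new_work() and target_changed() are stand-ins, not the actual ccminer.cpp code.

Code:
if (have_new_work(&work, &g_work)) {
    memcpy(&work, &g_work, sizeof(work));
    work.nonce = 0;   /* restart scanning only when genuinely new work arrives */
} else if (target_changed(&work, &g_work)) {
    memcpy(work.target, g_work.target, sizeof(work.target));
    /* keep work.nonce as-is: resetting it here makes the thread rescan the
       same range and resubmit shares, i.e. the duplicates described above */
}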

newer numbers for lyra:
gtx980:   1990kh/s
gtx750ti: 1140kh/s
gtx780ti: 2787kh/s (lol doesn't seem to want to stop increasing...)

sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
-Faster quark
-Faster groestlcoin
-Faster aes (tiny speedup in most algos)
-added --gpu-memspeed and --gpu-engine but they only work on Windows and for some GPUs (nvidia-smi limitation)


1.5.51(sp-MOD) is available here: (29-may-2015)

https://github.com/sp-hash/ccminer/releases/tag/1.5.51

The sourcecode is available here:

https://github.com/sp-hash/ccminer

Post your stats here. Card name/gpu clock/memclock

legendary
Activity: 1510
Merit: 1003
I said earlier in this thread that the Titan X should be able to change frequencies with the nvml api. Someone should test this guess )))
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
new numbers from lyra (work still in progress):
gtx980: 1900kh/s
gtx780ti: 2400kh/s
gtx750ti: 990kh/s
nice, my gtx750 with latest sp_ mod can do only 905khs. (1510/1600) Performance is memory constrained ...
actually mine with the latest sp_mod (1365/2900) does 770kh/s... now at 1031kH/s (same settings)
gtx780ti 2588kh/s
gtx980 1900kh/s (no change)

good job. More interested in your quark optimizations, so I submitted another one @github.
legendary
Activity: 1470
Merit: 1114

what would you suggest for the higher level card? ... even if all we do is test with it ...
Titan X )))
I don't think so.

http://cryptomining-blog.com/tag/gtx-titan-x-hashrate/

Might be worth waiting for the 980ti.
legendary
Activity: 1484
Merit: 1082
ccminer/cpuminer developer
yep, but it looks like it's being worked on by nvidia and could appear soon. I will try to find beta versions and follow them.

but no, same behavior in 352.09 (released May 18)
member
Activity: 111
Merit: 10
yep, after some tests I can say these application clocks are not used at all on linux... could be some day...

the setting is indeed changed in nvidia-smi -q but doesn't really change the device clocks (I also tried cudaResetDevice to be sure after changes)

http://docs.nvidia.com/deploy/nvml-api/group__nvmlDeviceCommands.html#group__nvmlDeviceCommands_1gc2a9a8db6fffb2604d27fd67e8d5d87f

From that nvmlDeviceCommands page, it looks like it is only supported "For Tesla products from the Kepler family."

On the page: http://docs.nvidia.com/deploy/nvml-api/nvml-api-reference.html#nvml-api-reference
Quote
1. NVML API Reference
Supported products:
Full Support
NVIDIA GeForce Line: None

Limited Support
NVIDIA GeForce Line: All current and previous generation GeForce-branded parts

So only limited support, but it doesn't seem to list what is/isn't supported.
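
For anyone who wants to test this on their card, a minimal standalone sketch of the NVML call being discussed (build with -lnvidia-ml); the clock numbers are examples only, and on plain GeForce parts the expected outcome is NVML_ERROR_NOT_SUPPORTED or a silently ignored setting, matching the "limited support" note above.

Code:
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    nvmlDevice_t dev;
    nvmlReturn_t rc;

    if (nvmlInit() != NVML_SUCCESS)
        return 1;
    if (nvmlDeviceGetHandleByIndex(0, &dev) != NVML_SUCCESS) {
        nvmlShutdown();
        return 1;
    }

    /* memory clock first, then graphics clock, both in MHz */
    rc = nvmlDeviceSetApplicationsClocks(dev, 3505, 1392);
    printf("SetApplicationsClocks: %s\n", nvmlErrorString(rc));

    nvmlShutdown();
    return 0;
}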

legendary
Activity: 1400
Merit: 1050
new numbers from lyra (work still in progress):

gtx980: 1900kh/s
gtx780ti: 2400kh/s
gtx750ti: 990kh/s

nice, my gtx750 with latest sp_ mod can do only 905khs. (1510/1600) Performance is memory constrained ...
actually mine with the latest sp_mod (1365/2900) does 770kh/s... now at 1031kH/s (same settings)
gtx780ti 2588kh/s
gtx980 1900kh/s (no change)
legendary
Activity: 1484
Merit: 1082
ccminer/cpuminer developer
yep, after some tests I can say these application clocks are not used at all on linux... could be some day...

the setting is indeed changed in nvidia-smi -q but doesn't really change the device clocks (I also tried cudaResetDevice to be sure after changes)

http://docs.nvidia.com/deploy/nvml-api/group__nvmlDeviceCommands.html#group__nvmlDeviceCommands_1gc2a9a8db6fffb2604d27fd67e8d5d87f
legendary
Activity: 3164
Merit: 1003
@sp could you change the retries on pool disconnects to a 15 second default please. thx
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
Calling the API directly is currently not working on win32. That's why I use the command line. The code was pretty bugged :) Late-night quick changes (after work (a 10-hour workday in C#))

In the next version I will retrieve the supported clocks and select the closest match.

A memclock of 1504 is supported but not 1505. If you specify --gpu-memspeed 1505 it will crash. I want to force it to 1504, which is supported.
This is something you cannot do from the command line.
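
Something along these lines could do the closest-match lookup with NVML, assuming the supported-clock list is readable on the card (it may not be on GeForce); this is a sketch of the idea, not the code that will ship:

Code:
#include <stdlib.h>
#include <nvml.h>

/* pick the supported memory clock nearest to 'wanted' (e.g. 1505 -> 1504),
   falling back to 'wanted' if the list cannot be read */
static unsigned int closest_supported_memclock(nvmlDevice_t dev, unsigned int wanted)
{
    unsigned int clocks[128];
    unsigned int count = 128, best, i;

    if (nvmlDeviceGetSupportedMemoryClocks(dev, &count, clocks) != NVML_SUCCESS || count == 0)
        return wanted;

    best = clocks[0];
    for (i = 1; i < count; i++) {
        if (abs((int)clocks[i] - (int)wanted) < abs((int)best - (int)wanted))
            best = clocks[i];
    }
    return best;
}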
legendary
Activity: 1510
Merit: 1003
I think all this stuff is only useful as a whole: when you can monitor GPU temp and adjust engine/memory frequencies and fan speed together. Right now you are just trying to invoke an external executable to set clocks, which can be done easily in a .bat file prior to the ccminer launch. In its current state I think this part of your work is useless (((
legendary
Activity: 1484
Merit: 1082
ccminer/cpuminer developer
you made a bunch of code mistakes again:

case '1070' should be case 1070

-ac   --applications-clocks= Specifies clocks

(not the reverse)

indeed, nvidia-smi was updated (on linux too, in 346.72) but it's maybe only a first step and not fully implemented by nvidia
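
To spell out why the case remark matters (hypothetical helper, only to show the difference): in C, '1070' is a multi-character character constant, an implementation-defined int (typically 0x31303730), so a case '1070': label never matches a clock value of 1070.

Code:
static int is_known_memclock(int memclock)
{
    switch (memclock) {
    case 1070:   /* correct: the integer constant 1070 */
        return 1;
    default:     /* a case '1070': label would compare against 0x31303730 */
        return 0;
    }
}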
legendary
Activity: 1510
Merit: 1003

what would you suggest for the higher level card? ... even if all we do is test with it ...
Titan X )))
legendary
Activity: 2912
Merit: 1091
--- ChainWorks Industries ---
I have now added --gpu-memclock and --gpu-engine

I just use nvidia-smi and it seems to fail to adjust the clocks on the 750ti's, but reports no problems on the gtx 970 (if you set the correct speeds)
However, when I monitor the GPU in GPU-Z the clocks are not changed... (might be a permission issue or something)


If anyone can make it work there is a commandline tool available here that you can test:

C:\\Progra~1\\NVIDIA~1\\NVSMI\\nvidia-smi

maybe this is why we are having so much trouble adjusting ( regardless of whether it is linux or windows ) the clocks on the 750ti oc cards here? ...

it might be wise to invest in a higher level card ...

what would you suggest for the higher level card? ... even if all we do is test with it ...

#crysx
legendary
Activity: 1510
Merit: 1003
I have now added --gpu-memclock and --gpu-engine

I just use nvidia-smi and it seems to fail to adjust the clocks on the 750ti's, but reports no problems on the gtx 970 (if you set the correct speeds)
However, when I monitor the GPU in GPU-Z the clocks are not changed... (might be a permission issue or something)


If anyone can make it work there is a commandline tool available here that you can test:


"Supported products:
- Full Support
    - All Tesla products, starting with the Fermi architecture
    - All Quadro products, starting with the Fermi architecture
    - All GRID products, starting with the Kepler architecture
    - GeForce Titan products, starting with the Kepler architecture
- Limited Support
    - All Geforce products, starting with the Fermi architecture
"

This tool doesn't give full control over plain GeForce cards. I think programs such as GPU-Z, NVIDIA Inspector or MSI Afterburner have special driver communication hacks that are not available as open source (((

Some more:
"reading various sensors of graphics cards isn't as easy as people might imagine it to be. In GPU-Z's case, it needs to read and write to the I2C bus via MMIO(Memory Mapped Input-Output) on the graphics card, this can only be achieved through what is called a kernel-mode driver on Microsoft Windows operating systems. And that is exactly how GPU-Z does it, but have you ever wondered if the way GPU-Z is doing it is safe?

The driver that GPU-Z uses is a digitally signed kernel-mode driver, so it can run with DSEO enabled without asking the user for permission, it can access physical memory at a whim. You use DeviceIoctl, you specify an address in physical memory and size and the driver returns a pointer to that address, now you can fiddle with kernel space memory however you like."
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
I have now added --gpu-memclock and --gpu-engine

I just use nvidia-smi and it seems to fail to adjust the clocks on the 750ti's, but reports no problems on the gtx 970 (if you set the correct speeds)
However, when I monitor the GPU in GPU-Z the clocks are not changed... (might be a permission issue or something)


If anyone can make it work there is a commandline tool available here that you can test:

C:\\Progra~1\\NVIDIA~1\\NVSMI\\nvidia-smi
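
A rough sketch of driving nvidia-smi from code the way described above, using the short path quoted here and the -ac/--applications-clocks switch mentioned earlier in the thread; the helper name and clock values are made up, and plain GeForce cards may simply refuse the setting.

Code:
#include <stdio.h>
#include <stdlib.h>

/* set application clocks via the command line: memory clock first, then gpu clock */
static void set_app_clocks_cmdline(unsigned int memclock, unsigned int gpuclock)
{
    char cmd[256];
    snprintf(cmd, sizeof(cmd),
             "C:\\Progra~1\\NVIDIA~1\\NVSMI\\nvidia-smi -ac %u,%u",
             memclock, gpuclock);
    if (system(cmd) != 0)
        fprintf(stderr, "nvidia-smi failed to set %u,%u\n", memclock, gpuclock);
}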