
Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 948. (Read 3426921 times)

newbie
Activity: 53
Merit: 0
Quote from: Readme
Example for Litecoin Mining on coinotron pool with GTX 660 Ti

cudaminer -d gtx660ti -l K28x32 -C 2 -i 0 -o stratum+tcp://coinotron.com:3334 -O workername:password

Anyone else getting infinite "result does not validate on CPU" errors with these settings?
I have an Asus GTX 660 Ti OC

You are not alone; I have the same problem with my GTX 770 and cannot find a setting without validation errors...
legendary
Activity: 1400
Merit: 1050
Has anyone been able to get an EVGA GTX 780 Ti with ACX cooling to 640 kHash/s?

The highest I can get is around 400. I'm just guessing different values now, but the old version could get very close to 650.
cudaminer -a scrypt:2048 -d gtx780ti -H 1 -l t15x16 -C 2 -i 0

For Vertcoin I'm getting 181 kHash/s. The GPU reaches 1.1 GHz when running cudaminer. I should be getting over 650. With the old version the GPU would only get up to around 1000 MHz, so now I'm getting 100 MHz more but much worse hash rates.


It's difficult to tell whether you are talking about scrypt, scrypt:2048, or both...
Anyhow, on Vertcoin I use t15x32 and get around 320 kHash/s (pointless though...)
newbie
Activity: 26
Merit: 0
Has anyone been able to get an EVGA GTX 780 Ti with ACX cooling to 640 kHash/s?

The highest I can get is around 400. I'm just guessing different values now, but the old version could get very close to 650.
cudaminer -a scrypt:2048 -d gtx780ti -H 1 -l t15x16 -C 2 -i 0

For Vertcoin I'm getting 181 kHash/s. The GPU reaches 1.1 GHz when running cudaminer. I should be getting over 650. With the old version the GPU would only get up to around 1000 MHz, so now I'm getting 100 MHz more but much worse hash rates.



Start at T15x24 and scale the x24 down by 1 until you find the best fit for you. T15x16 was the best for me; others with 780 Tis landed on T15x20...
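That sweep is easy to script. The sketch below just walks the configs from x24 down to x16; the actual cudaminer call is commented out (and its --benchmark flag is an assumption inherited from the pooler cpuminer base, not something confirmed in this thread), so the loop itself only prints what it would try:

```shell
#!/bin/sh
# Hedged sketch: try T15x24 down to T15x16 and let the miner's own
# kHash/s output show which launch config wins on your card.
for x in 24 23 22 21 20 19 18 17 16; do
    echo "trying launch config T15x${x}"
    # cudaminer -a scrypt:2048 -d gtx780ti -H 1 -C 2 -i 0 -l "T15x${x}" --benchmark
done
```

Let each config run for a minute or two before moving on, since the reported hash rate takes a moment to settle.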
newbie
Activity: 21
Merit: 0
Has anyone been able to get an EVGA GTX 780 Ti with ACX cooling to 640 kHash/s?

The highest I can get is around 400. I'm just guessing different values now, but the old version could get very close to 650.
cudaminer -a scrypt:2048 -d gtx780ti -H 1 -l t15x16 -C 2 -i 0

For Vertcoin I'm getting 181 kHash/s. The GPU reaches 1.1 GHz when running cudaminer. I should be getting over 650. With the old version the GPU would only get up to around 1000 MHz, so now I'm getting 100 MHz more but much worse hash rates.

legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
I'm getting really sick of Google and the way it can't handle a spreadsheet. 352 rows and 50 users and it's dead.

It keeps disconnecting me, reverting changes I made, giving errors, etc.

Does anybody have a recommendation for an alternative to replace this =/%! piece of +!%/?
member
Activity: 91
Merit: 10
2014-02-04: all three of my GPUs run in one instance now. Autotune is much better too: it found the optimal settings for my 660 Ti (K7x32), and was almost optimal for my 660 Ti SC (only 15 kHash/s short of what K7x32 can do).
member
Activity: 69
Merit: 10
I posted a new release 2014-02-04 fixing two important bugs.

- autotune was underreporting kHash/s values if the kernel finished in under 50 ms (forgot to divide the elapsed time by the number of measurements, doh!)
- Multi-GPU support was not working - it is now.



OMG at last!!!! Thank you, thank you, thank you!!!
hero member
Activity: 756
Merit: 502
Quote from: Readme
Example for Litecoin Mining on coinotron pool with GTX 660 Ti

cudaminer -d gtx660ti -l K28x32 -C 2 -i 0 -o stratum+tcp://coinotron.com:3334 -O workername:password

Anyone else getting infinite "result does not validate on CPU" errors with these settings?
I have an Asus GTX 660 Ti OC

28x32 is a big launch config. Try 7x32 or 14x32 maybe...
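For reference, the quoted Readme example with the smaller launch config suggested above would look like this (pool address and worker credentials are the placeholders from the quoted example, not real values):

```shell
# Same Readme invocation, but with K7x32 instead of the oversized K28x32:
cudaminer -d gtx660ti -l K7x32 -C 2 -i 0 -o stratum+tcp://coinotron.com:3334 -O workername:password
```

If K7x32 validates cleanly, K14x32 is the next step up before hash rates and stability start trading off.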
newbie
Activity: 34
Merit: 0
Quote from: Readme
Example for Litecoin Mining on coinotron pool with GTX 660 Ti

cudaminer -d gtx660ti -l K28x32 -C 2 -i 0 -o stratum+tcp://coinotron.com:3334 -O workername:password

Anyone else getting infinite "result does not validate on CPU" errors with these settings?
I have an Asus GTX 660 Ti OC
sr. member
Activity: 280
Merit: 250
So I broke something. Hopefully someone can enlighten me as to where my problem lies (other than in the chair).

Please keep in mind I make no claim to knowing what the hell I'm doing, but there's only one way to learn, right?

Two GTX 670s, not in SLI, separate .bat files for each, x64 cudaminer


  On VTC, K7x32 was producing ~133 kH/s per card. After tinkering with some different kernels I got CUDA error 30 and the display driver crashed. I rebooted and went back to K7x32, but that now produced CUDA error 30 and a crash. I reinstalled CUDA and the video drivers and was able to use K7x32 with the same results (133 per card). I tinkered with different kernels once again (using the WHQL driver instead of the beta this time, to see if there was any difference) and got CUDA error 30 etc., but this time a CUDA/driver reinstall didn't fix the issue.

Currently running K7x20 with 155 kH/s on one card and 135 on the other atm, but what did I break exactly, and how?

Maybe the x32 config is at the limit of what the WDDM graphics driver will allow you to allocate. Sometimes it works, and sometimes it doesn't. Leave out any -m 1 or -C 1/2 options, or reduce the x32 to something slightly smaller, e.g. x30 or x28.




Settled on K7x24 for now, getting 160 kH/s / 157 kH/s. Pulling -m and -C seemed to be the big factor. Thanks!
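In .bat terms, the change that fixed it amounts to the following (the device index, pool address, and worker credentials here are placeholders for illustration, not values from the thread):

```shell
# Before: memory flags plus a large x32 launch config crashed the
# WDDM driver (CUDA error 30):
#   cudaminer -d 0 -i 0 -m 1 -C 2 -l K7x32 -o stratum+tcp://pool:3333 -O worker:pass

# After: drop -m/-C so the driver manages memory itself, and shrink
# the launch config:
cudaminer -d 0 -i 0 -l K7x24 -o stratum+tcp://pool:3333 -O worker:pass
```

With one .bat per card, the same edit is applied to each file, changing only the -d index.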
hero member
Activity: 756
Merit: 502
So I broke something. Hopefully someone can enlighten me as to where my problem lies (other than in the chair).

Please keep in mind I make no claim to knowing what the hell I'm doing, but there's only one way to learn, right?

Two GTX 670s, not in SLI, separate .bat files for each, x64 cudaminer


  On VTC, K7x32 was producing ~133 kH/s per card. After tinkering with some different kernels I got CUDA error 30 and the display driver crashed. I rebooted and went back to K7x32, but that now produced CUDA error 30 and a crash. I reinstalled CUDA and the video drivers and was able to use K7x32 with the same results (133 per card). I tinkered with different kernels once again (using the WHQL driver instead of the beta this time, to see if there was any difference) and got CUDA error 30 etc., but this time a CUDA/driver reinstall didn't fix the issue.

Currently running K7x20 with 155 kH/s on one card and 135 on the other atm, but what did I break exactly, and how?

Maybe the x32 config is at the limit of what the WDDM graphics driver will allow you to allocate. Sometimes it works, and sometimes it doesn't. Leave out any -m 1 or -C 1/2 options, or reduce the x32 to something slightly smaller, e.g. x30 or x28.

sr. member
Activity: 280
Merit: 250
So I broke something. Hopefully someone can enlighten me as to where my problem lies (other than in the chair).

Please keep in mind I make no claim to knowing what the hell I'm doing, but there's only one way to learn, right?

Two GTX 670s, not in SLI, separate .bat files for each, x64 cudaminer


  On VTC, K7x32 was producing ~133 kH/s per card. After tinkering with some different kernels I got CUDA error 30 and the display driver crashed. I rebooted and went back to K7x32, but that now produced CUDA error 30 and a crash. I reinstalled CUDA and the video drivers and was able to use K7x32 with the same results (133 per card). I tinkered with different kernels once again (using the WHQL driver instead of the beta this time, to see if there was any difference) and got CUDA error 30 etc., but this time a CUDA/driver reinstall didn't fix the issue.

Currently running K7x20 with 155 kH/s on one card and 135 on the other atm, but what did I break exactly, and how?
newbie
Activity: 28
Merit: 0
Nice work :)
Running scrypt-jane (UTC) on a GTX 680 at 400 kH/s (430 normally) using -i 0 -C 0 -m 0 -H 2 -l K8x32
newbie
Activity: 3
Merit: 0
Anything else I could check?
What device nodes starting with nvidia are in your /dev folder? Only the nvidiactl node, or also a node representing a GPU?

Is the nvidia kernel module loaded?

Christian


Code:
[root@fedora cudaminer]# ls -lh /dev/nvidia*
crw-rw-rw-. 1 root root 195,   0 feb  4 14:35 /dev/nvidia0
crw-rw-rw-. 1 root root 195, 255 feb  4 14:35 /dev/nvidiactl
[root@fedora cudaminer]# lsmod | grep nvidia
nvidia_uvm             34728  0
nvidia              10677446  57 nvidia_uvm
drm                   283349  2 nvidia
i2c_core               38476  3 drm,i2c_i801,nvidia

(sorry for the delay, the forum doesn't let me post too many replies in a short time)
hero member
Activity: 756
Merit: 502
Anything else I could check?
What device nodes starting with nvidia are in your /dev folder? Only the nvidiactl node, or also a node representing a GPU?

Is the nvidia kernel module loaded?

Any nvidia-related error messages in your kernel log or system log?

Christian
newbie
Activity: 3
Merit: 0
[2014-02-04 15:08:32] Unable to query CUDA driver version! Is an nVidia driver installed?

Creating the nvidia device nodes manually may be required.

see here
https://bitcointalk.org/index.php?topic=167229.msg4750585;topicseen#msg4750585
Thanks for a fast reply!!

I've already tried. With this configuration:
Code:
[root@fedora cudaminer]# cat ./nvidia_nodes
#!/bin/bash
timestamp=`date`
modprobe nvidia-uvm

if [ ! -c /dev/nvidiactl ]
then
    echo "Server $HOSTNAME device files re-created at $timestamp"
    
    # Count the number of NVIDIA controllers found.
    N3D=`lspci | grep -i NVIDIA | grep "3D controller" | wc -l`
    NVGA=`lspci | grep -i NVIDIA | grep "VGA compatible controller" | wc -l`

    N=`expr $N3D + $NVGA - 1`
    for i in `seq 0 $N`; do
        mknod -m 666 /dev/nvidia$i c 195 $i;
    done
    mknod -m 666 /dev/nvidiactl c 195 255
else
    echo "Files exists"
    exit 1
fi

... and this result:
Code:
[root@fedora cudaminer]# ./nvidia_nodes
Files exists

Anything else I could check?
hero member
Activity: 756
Merit: 502
[2014-02-04 15:08:32] Unable to query CUDA driver version! Is an nVidia driver installed?

Creating the nvidia device nodes manually may be required.

see here
https://bitcointalk.org/index.php?topic=167229.msg4750585;topicseen#msg4750585
hero member
Activity: 756
Merit: 502
I posted a new release 2014-02-04 fixing two important bugs.

- autotune was underreporting kHash/s values if the kernel finished in under 50 ms (forgot to divide the elapsed time by the number of measurements, doh!)
- Multi-GPU support was not working - it is now.

newbie
Activity: 3
Merit: 0
Hi, I have a problem and I'm out of ideas for how to fix it. cudaminer gives me this error:
Code:
[root@fedora cudaminer]# ./CudaMiner-master/cudaminer --help
  *** CudaMiner for nVidia GPUs by Christian Buchner ***
            This is version 2014-02-02 (beta)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
   Cuda additions Copyright 2013,2014 Christian Buchner
 LTC donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm
 BTC donation address: 16hJF5mceSojnTD3ZTUDqdRhDyPJzoRakM
 YAC donation address: Y87sptDEcpLkLeAuex6qZioDbvy1qXZEj4
[2014-02-04 15:08:32] Unable to query CUDA driver version! Is an nVidia driver installed?
I have CUDA installed:
Code:
[root@fedora cudaminer]# nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2013 NVIDIA Corporation
Built on Wed_Jul_17_18:36:13_PDT_2013
Cuda compilation tools, release 5.5, V5.5.0
And the Nvidia Driver updated:
Code:
[root@fedora cudaminer]# nvidia-settings -v
nvidia-settings:  version 331.38  (buildmeister@swio-display-x64-rhel04-15)  Wed Jan  8 19:53:03 PST 2014
I've already searched the forum and I'm sure I
 - have correctly installed autoconf and automake
 - first ran autogen.sh, then configure, then make
 - have the CUDA paths exported system-wide so it works correctly

I've been dealing with this for more than a day so any suggestions are welcome!!

[ps]
Code:
[root@fedora cudaminer]# uname -a
Linux fedora 3.12.9-301.fc20.x86_64 #1 SMP Wed Jan 29 15:56:22 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
sr. member
Activity: 280
Merit: 250

1024 would get me ~280 with best results from Y14x20
2048 it's ~133 with the best results from K7x32

isn't Y an alias for K? ;)


Actually, it's funny you mention that. If I try to run Y7x32, it fails horribly and crashes the driver.

According to the code, it shouldn't crash... K and Y really do the same thing.

Code:
            switch (kernelid)
            {
                case 'T': case 'Z': *kernel = new NV2Kernel(); break;
                case 't':           *kernel = new TitanKernel(); break;
                case 'K': case 'Y': *kernel = new NVKernel(); break;
                case 'k':           *kernel = new KeplerKernel(); break;
                case 'F': case 'L': *kernel = new FermiKernel(); break;
                case 'f': case 'X': *kernel = new TestKernel(); break;
                case ' ': // choose based on device architecture
                    *kernel = Best_Kernel_Heuristics(props);
                break;




After tinkering some more, K7x32 isn't working. It has to be something on my end. There's a separate 670 in another machine that's happily hashing away with K7x32, but now these 670s won't. Going to reinstall drivers/CUDA and see if anything changes.