Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 1084. (Read 3426947 times)

newbie
Activity: 28
Merit: 0
ESET NOD32 sees this new version as a PUP (Potentially Unwanted Program) and warns against its installation.  This didn't occur with the 4-22 or 4-30 versions.

This is also the case for other antivirus programs (and for other miners). Most likely the miner binaries have been compiled into malware and/or dropped as payloads by trojans, and the antivirus companies lazily mark the payload as dangerous instead of the program that delivers it.

However, I've found what seem to be two effective, painless fixes for this problem, which I would rather not allow people who plant miners via trojan viruses to know about. Anyone who would like to know these workarounds is welcome to PM me and try to persuade me.
newbie
Activity: 32
Merit: 0
It also works with CUDA 5.5. In case anyone else needs it, here's my configure.sh:

Code:
./configure "CFLAGS=-O3" "CXXFLAGS=-O3" --with-cuda=/usr/local/cuda-5.5 --build=x86_64-unknown-linux-gnu --host=i686-unknown-linux-gnu

Various forum posts had given me the idea that building a 64-bit binary was a bad idea.

EDIT: I removed the --build and --host and recompiled just to see what would happen. It seems like it's working just fine without them, and now I'm getting lines that look like this:

[2013-09-08 10:27:19] accepted: 2/2 (100.00%), 80.80 khash/s (yay!!!)

So I guess the 64-bit binary works just fine?
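If you want to confirm which kind of binary the build actually produced, the file utility will tell you (a quick sketch):

Code:
file ./cudaminer
# a 64-bit build reports something like: ELF 64-bit LSB executable, x86-64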
newbie
Activity: 32
Merit: 0
OK, you guys were on the money: the issue was with the driver in the Fedora repositories. Once I installed the latest driver straight from Nvidia, it started working with my CUDA 5.0 binary. Thanks for the help.
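For anyone else hitting this on Fedora, the rough install procedure looks like this (a sketch; the exact file name depends on the driver version you download from nvidia.com):

Code:
# switch to a text console and stop X first; the installer won't run under X.
# the nouveau driver also needs to be blacklisted.
sudo sh ./NVIDIA-Linux-x86_64-<version>.run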
newbie
Activity: 32
Merit: 0
I tried downgrading to CUDA 5.0 and recompiling. Pretty much the same result:

Code:
LD_LIBRARY_PATH=/usr/local/cuda-5.0/lib64/ ./cudaminer -d 0 -i 0 --benchmark
   *** CudaMiner for nVidia GPUs by Christian Buchner ***
             This is version 2013-07-13 (alpha)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
       Cuda additions Copyright 2013 Christian Buchner
   My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

[2013-09-08 08:59:59] 1 miner threads started, using 'scrypt' algorithm.
[2013-09-08 08:59:59] GPU #0: starting up...

[2013-09-08 08:59:59] GPU #0:  with compute capability 131072.0
[2013-09-08 08:59:59] GPU #0: interactive: 0, tex-cache: 0 , single-alloc: 0
[2013-09-08 08:59:59] GPU #0: Performing auto-tuning (Patience...)
[2013-09-08 08:59:59] GPU #0:    0.00 khash/s with configuration  0x0
[2013-09-08 08:59:59] GPU #0: using launch configuration  0x0
Floating point exception (core dumped)

If I remove the -d 0 -i 0 arguments, here's what I get instead:

Code:
LD_LIBRARY_PATH=/usr/local/cuda-5.0/lib64/ ./cudaminer --benchmark
   *** CudaMiner for nVidia GPUs by Christian Buchner ***
             This is version 2013-07-13 (alpha)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
       Cuda additions Copyright 2013 Christian Buchner
   My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

[2013-09-08 09:58:51] GPU #6: starting up...

[2013-09-08 09:58:51] GPU #6:  with compute capability 131072.0
[2013-09-08 09:58:51] GPU #6: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #6: Performing auto-tuning (Patience...)
[2013-09-08 09:58:51] GPU #0: starting up...

[2013-09-08 09:58:51] GPU #7: starting up...

[2013-09-08 09:58:51] GPU #0:  with compute capability -67106336.32627
[2013-09-08 09:58:51] GPU #0: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #0: Performing auto-tuning (Patience...)
[2013-09-08 09:58:51] GPU #1: starting up...

[2013-09-08 09:58:51] GPU #-1: starting up...

[2013-09-08 09:58:51] GPU #7:  with compute capability 131072.0
[2013-09-08 09:58:51] GPU #7: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #0: starting up...

[2013-09-08 09:58:51] GPU #0:    0.00 khash/s with configuration  0x0
[2013-09-08 09:58:51] GPU #0: using launch configuration  0x0
[2013-09-08 09:58:51] GPU #0: starting up...

[2013-09-08 09:58:51] GPU #6:    0.00 khash/s with configuration  0x0
[2013-09-08 09:58:51] GPU #6: using launch configuration  0x0
[2013-09-08 09:58:51] GPU #1: starting up...

[2013-09-08 09:58:51] GPU #0:  with compute capability 131072.0
[2013-09-08 09:58:51] GPU #3: starting up...

[2013-09-08 09:58:51] GPU #4: starting up...

[2013-09-08 09:58:51] GPU #0: starting up...

[2013-09-08 09:58:51] GPU #0:  with compute capability 131072.0
[2013-09-08 09:58:51] GPU #0: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #1:  with compute capability 131072.0
Floating point exception (core dumped)

What on earth is it doing? GPU #7? GPU #-1? This can't be right. I agree that it looks like it's not detecting my video card correctly, but how can I figure out what the issue is? I have no idea at all what makes it think I have more than one GPU in my system.
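One way to narrow it down is to compare what the driver and the CUDA runtime each enumerate. A diagnostic sketch (the deviceQuery path assumes the CUDA samples were installed under the toolkit directory, which may differ on your system):

Code:
# what the NVIDIA kernel driver sees
nvidia-smi -L

# what the CUDA runtime sees; deviceQuery ships with the CUDA samples
cd /usr/local/cuda-5.0/samples/1_Utilities/deviceQuery
make
./deviceQuery

If deviceQuery reports a single device with a sane compute capability (3.0 for a GTX 650 Ti) while cudaminer prints garbage, the problem sits between the driver and the CUDA libraries cudaminer was linked against.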
hero member
Activity: 675
Merit: 514
I'm not sure if driver version 319.32 supports CUDA 5.5.
newbie
Activity: 32
Merit: 0
I have CUDA 5.5 installed; that's what I compiled cudaminer against.
sr. member
Activity: 247
Merit: 250
I'm using the proprietary driver. I can play games just fine, so I would think the driver isn't the issue? I could be wrong, but:

Code:
glxinfo | grep OpenGL
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 650 Ti/PCIe/SSE2
OpenGL core profile version string: 4.3.0 NVIDIA 319.32
OpenGL core profile shading language version string: 4.30 NVIDIA via Cg compiler
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 4.3.0 NVIDIA 319.32
OpenGL shading language version string: 4.30 NVIDIA via Cg compiler
OpenGL context flags: (none)
OpenGL profile mask: (none)
OpenGL extensions:

Those drivers should work, but there are newer ones. Do you have the CUDA 5 dev kit installed?
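A quick way to check which toolkit is installed (a sketch; the nvcc path assumes a default install):

Code:
/usr/local/cuda/bin/nvcc --version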
newbie
Activity: 32
Merit: 0
I'm using the proprietary driver. I can play games just fine, so I would think the driver isn't the issue? I could be wrong, but:

Code:
glxinfo | grep OpenGL
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 650 Ti/PCIe/SSE2
OpenGL core profile version string: 4.3.0 NVIDIA 319.32
OpenGL core profile shading language version string: 4.30 NVIDIA via Cg compiler
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 4.3.0 NVIDIA 319.32
OpenGL shading language version string: 4.30 NVIDIA via Cg compiler
OpenGL context flags: (none)
OpenGL profile mask: (none)
OpenGL extensions:
sr. member
Activity: 247
Merit: 250

Code:
LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib64/ ./cudaminer -o stratum+tcp://ltc.give-me-coins.com:3333 -O [my credentials] -t 1024
  *** CudaMiner for nVidia GPUs by Christian Buchner ***
            This is version 2013-07-13 (alpha)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
      Cuda additions Copyright 2013 Christian Buchner
  My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

[2013-09-06 20:41:06] Starting Stratum on stratum+tcp://ltc.give-me-coins.com:3333
[2013-09-06 20:41:06] 1024 miner threads started, using 'scrypt' algorithm.
[2013-09-06 20:41:07] GPU #0: starting up...

[2013-09-06 20:41:07] GPU #1: starting up...

I only have one GPU (a GTX 650 Ti), so I'm not sure GPU #1, GPU #4, etc. make any sense.
Is this because I'm on a 64-bit system?

This is usually a driver issue. Make sure your drivers are up to date.
hero member
Activity: 675
Merit: 514
I compiled for Linux (Fedora 18) and it appears to have worked, but when I try to run the program, I get this:

Code:
[2013-09-06 20:39:25] thread 31334 create failed
cudaminer doesn't recognise your graphics card.
One of the Linux freaks here should be able to help you.
newbie
Activity: 32
Merit: 0
I compiled for Linux (Fedora 18) and it appears to have worked, but when I try to run the program, I get this:

Code:
LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib64/ ./cudaminer -o stratum+tcp://ltc.give-me-coins.com:3333 -O [my credentials]
   *** CudaMiner for nVidia GPUs by Christian Buchner ***
             This is version 2013-07-13 (alpha)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
       Cuda additions Copyright 2013 Christian Buchner
   My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

[2013-09-06 20:39:24] Starting Stratum on stratum+tcp://ltc.give-me-coins.com:3333
[2013-09-06 20:39:25] thread 31334 create failed

If I limit the number of threads with a -t option, I get something more like this:

Code:
LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib64/ ./cudaminer -o stratum+tcp://ltc.give-me-coins.com:3333 -O [my credentials] -t 1024
   *** CudaMiner for nVidia GPUs by Christian Buchner ***
             This is version 2013-07-13 (alpha)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
       Cuda additions Copyright 2013 Christian Buchner
   My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

[2013-09-06 20:41:06] Starting Stratum on stratum+tcp://ltc.give-me-coins.com:3333
[2013-09-06 20:41:06] 1024 miner threads started, using 'scrypt' algorithm.
[2013-09-06 20:41:07] GPU #0: starting up...

[2013-09-06 20:41:07] GPU #1: starting up...

[2013-09-06 20:41:07] GPU #1:  with compute capability 0.0
[2013-09-06 20:41:07] GPU #1: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-06 20:41:07] GPU #1: Performing auto-tuning (Patience...)
[2013-09-06 20:41:07] GPU #0: w�= with compute capability 0.0
[2013-09-06 20:41:07] GPU #0: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-06 20:41:07] GPU #0: Performing auto-tuning (Patience...)
[2013-09-06 20:41:07] GPU #4: starting up...

[2013-09-06 20:41:07] GPU #4:  with compute capability 0.0
[2013-09-06 20:41:07] GPU #4: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-06 20:41:07] GPU #4: Performing auto-tuning (Patience...)
[2013-09-06 20:41:07] GPU #4:    0.00 khash/s with configuration  0x0
[2013-09-06 20:41:07] GPU #4: using launch configuration  0x0
Floating point exception (core dumped)

I only have one GPU (a GTX 650 Ti), so I'm not sure GPU #1, GPU #4, etc. make any sense.
Is this because I'm on a 64-bit system?
hero member
Activity: 516
Merit: 500
CAT.EX Exchange
I now get the "stratum_recv_line failed to parse a newline-terminated string" error about once every 6 hours of operation (which freezes mining).

Would it be better if I went back to my own stratum proxy?

That is what I did.
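For anyone wanting to try the same thing, a rough sketch of the proxy setup. This assumes slush's stratum-mining-proxy (a scrypt-capable fork is needed for litecoin); the flag names and default ports are from memory, so verify them against the proxy's --help:

Code:
# the proxy speaks stratum to the pool and serves getwork locally
python mining_proxy.py -o ltc.give-me-coins.com -p 3333

# point cudaminer at the proxy's local getwork port (8332 by default)
./cudaminer -o http://127.0.0.1:8332 -O username.worker:password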
newbie
Activity: 17
Merit: 0
I now get the "stratum_recv_line failed to parse a newline-terminated string" error about once every 6 hours of operation (which freezes mining).

Would it be better if I went back to my own stratum proxy?
hero member
Activity: 756
Merit: 502

Mine on a pool for testing and see if the reported hits are validated by the CPU.
~600 kHash seems way too high for this type of card.

When solo mining, you might only find out after months that your computed results were bogus.

Christian
sr. member
Activity: 322
Merit: 250


Uhm, 600+ kh/s?

What is that all about? I'm not on a pool; I'm solo mining a coin.

I'm going to let it run for a bit; maybe it's fake?

Thanks
member
Activity: 104
Merit: 10
Yes, I had the same issue. Make sure you set it up, get on a pool, etc. You don't have to make a .bat file if you don't know how, though; I prefer shortcuts anyway. You can right-click and create a shortcut on the desktop, then right-click the shortcut, go to Properties, and in the Target field just type the command like the one shown above.


By the way, thanks for the wemineltc pool suggestion; it's a night-and-day difference!

Also, is anyone still trying to fix that stratum "failed to parse" error? I would love a fix. Is it not that common? I would expect it to be fairly common.

I am old school; I just find .bat files easier since they are plain text and you can just cut and paste. Shortcuts can be a pain to set up with parameters.

Either way works.
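For the .bat route, a minimal sketch; the pool URL is the one mentioned earlier in the thread, and the credentials are placeholders to replace with your own worker:

Code:
@echo off
REM launch cudaminer against a stratum pool; -O takes worker:password
cudaminer.exe -o stratum+tcp://ltc.give-me-coins.com:3333 -O username.worker:password
REM keep the window open if the miner exits with an error
pause

Save it next to cudaminer.exe and double-click it to start mining.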
sr. member
Activity: 274
Merit: 250
Any chance this gets adapted for the Yacoin family, the scrypt coins with a higher N-factor? AMD card performance takes a bigger hit from higher N than CPUs do. Would the performance gap between Nvidia and AMD cards also narrow?

It doesn't work? I was using multipool.us with it pretty successfully, and it has a variety of coins. I don't know the N-factor of those coins off the top of my head.

The Yacoin family of coins uses a modified version of scrypt (scrypt-jane) for hashing, so a normal miner that works with litecoin etc. won't work without adaptation.

BTW, I just tested the CUDA miner with litecoin and it works well. I'm getting ~60 kh/s on a 650 Ti.
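To make the N-factor point concrete, here is a minimal Python sketch using the standard library's hashlib.scrypt. It only illustrates how N drives the per-hash memory cost; scrypt-jane also swaps in different hash and mixing functions, which this does not model, and the higher N value is just an illustrative stand-in:

Code:
import hashlib

# Litecoin-style scrypt hashes the 80-byte block header with itself as the
# salt, using N=1024, r=1, p=1; scratchpad memory is roughly 128 * r * N
# bytes per hash, so raising N is what squeezes GPU memory hardest.
header = b"\x00" * 80  # placeholder block header

for n in (1024, 32768):  # 1024 = litecoin; 32768 = an illustrative higher N
    digest = hashlib.scrypt(header, salt=header, n=n, r=1, p=1, dklen=32)
    print("N=%5d: ~%5d KiB scratchpad, digest %s..."
          % (n, 128 * n // 1024, digest.hex()[:16]))

Litecoin's fixed N=1024 needs about 128 KiB of scratchpad per hash, which fits GPU caches far more comfortably than the multi-megabyte scratchpads a growing N-factor demands.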
sr. member
Activity: 252
Merit: 250
Any chance this gets adapted for the Yacoin family, the scrypt coins with a higher N-factor? AMD card performance takes a bigger hit from higher N than CPUs do. Would the performance gap between Nvidia and AMD cards also narrow?

It doesn't work? I was using multipool.us with it pretty successfully, and it has a variety of coins. I don't know the N-factor of those coins off the top of my head.
Multipool doesn't do scrypt-jane coins.
sr. member
Activity: 247
Merit: 250
Any chance this gets adapted for the Yacoin family, the scrypt coins with a higher N-factor? AMD card performance takes a bigger hit from higher N than CPUs do. Would the performance gap between Nvidia and AMD cards also narrow?

It doesn't work? I was using multipool.us with it pretty successfully, and it has a variety of coins. I don't know the N-factor of those coins off the top of my head.