I tried downgrading to CUDA 5.0 and recompiling, but I get pretty much the same result:
LD_LIBRARY_PATH=/usr/local/cuda-5.0/lib64/ ./cudaminer -d 0 -i 0 --benchmark
*** CudaMiner for nVidia GPUs by Christian Buchner ***
This is version 2013-07-13 (alpha)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
Cuda additions Copyright 2013 Christian Buchner
My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm
[2013-09-08 08:59:59] 1 miner threads started, using 'scrypt' algorithm.
[2013-09-08 08:59:59] GPU #0: starting up...
[2013-09-08 08:59:59] GPU #0: with compute capability 131072.0
[2013-09-08 08:59:59] GPU #0: interactive: 0, tex-cache: 0 , single-alloc: 0
[2013-09-08 08:59:59] GPU #0: Performing auto-tuning (Patience...)
[2013-09-08 08:59:59] GPU #0: 0.00 khash/s with configuration 0x0
[2013-09-08 08:59:59] GPU #0: using launch configuration 0x0
Floating point exception (core dumped)
If I remove the -d 0 -i 0 arguments, here's what I get instead:
LD_LIBRARY_PATH=/usr/local/cuda-5.0/lib64/ ./cudaminer --benchmark
*** CudaMiner for nVidia GPUs by Christian Buchner ***
This is version 2013-07-13 (alpha)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
Cuda additions Copyright 2013 Christian Buchner
My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm
[2013-09-08 09:58:51] GPU #6: starting up...
[2013-09-08 09:58:51] GPU #6: with compute capability 131072.0
[2013-09-08 09:58:51] GPU #6: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #6: Performing auto-tuning (Patience...)
[2013-09-08 09:58:51] GPU #0: starting up...
[2013-09-08 09:58:51] GPU #7: starting up...
[2013-09-08 09:58:51] GPU #0: with compute capability -67106336.32627
[2013-09-08 09:58:51] GPU #0: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #0: Performing auto-tuning (Patience...)
[2013-09-08 09:58:51] GPU #1: starting up...
[2013-09-08 09:58:51] GPU #-1: starting up...
[2013-09-08 09:58:51] GPU #7: with compute capability 131072.0
[2013-09-08 09:58:51] GPU #7: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #0: starting up...
[2013-09-08 09:58:51] GPU #0: 0.00 khash/s with configuration 0x0
[2013-09-08 09:58:51] GPU #0: using launch configuration 0x0
[2013-09-08 09:58:51] GPU #0: starting up...
[2013-09-08 09:58:51] GPU #6: 0.00 khash/s with configuration 0x0
[2013-09-08 09:58:51] GPU #6: using launch configuration 0x0
[2013-09-08 09:58:51] GPU #1: starting up...
[2013-09-08 09:58:51] GPU #0: with compute capability 131072.0
[2013-09-08 09:58:51] GPU #3: starting up...
[2013-09-08 09:58:51] GPU #4: starting up...
[2013-09-08 09:58:51] GPU #0: starting up...
[2013-09-08 09:58:51] GPU #0: with compute capability 131072.0
[2013-09-08 09:58:51] GPU #0: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #1: with compute capability 131072.0
Floating point exception (core dumped)
What on earth is it doing? GPU #7? GPU #-1? That can't be right. I agree that it looks like it isn't detecting my video card correctly, but how can I figure out where the problem is? There is only one GPU in this machine, so I have no idea what makes cudaminer think there are more.
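
To narrow things down, I figured I could query the CUDA runtime directly instead of going through cudaminer. Below is a minimal sketch (my own test program, not part of cudaminer; it assumes nvcc from the CUDA 5.0 toolkit is on the path and the same LD_LIBRARY_PATH as above) that prints the device count and compute capability the runtime reports:

// devicecheck.cu - ask the CUDA runtime what devices it actually sees.
// Build with something like: nvcc devicecheck.cu -o devicecheck
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("CUDA device count: %d\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        err = cudaGetDeviceProperties(&prop, i);
        if (err != cudaSuccess) {
            printf("cudaGetDeviceProperties(%d) failed: %s\n",
                   i, cudaGetErrorString(err));
            continue;
        }
        printf("Device %d: %s, compute capability %d.%d\n",
               i, prop.name, prop.major, prop.minor);
    }
    return 0;
}

If something like this reports one device with a sane compute capability, that would suggest the driver and toolkit are fine and the problem is in cudaminer (or in how I built it); if it also reports nonsense, the driver/runtime setup itself is suspect. Does that sound like a reasonable way to tell the two apart?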