
Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 1048. (Read 3426947 times)

sr. member
Activity: 247
Merit: 250
That latest "GeForce 331.82 Driver" which is required to run this release hangs and BSODs my PC. Therefore, I'm not able to run the 18th Dec release  Cry

On GTS 450

Can you help out, guys? It's a bit urgent.

I'm having the same problem, 2 780s

Clean install?
full member
Activity: 226
Merit: 100
That latest "GeForce 331.82 Driver" which is required to run this release hangs and BSODs my PC. Therefore, I'm not able to run the 18th Dec release  Cry

On GTS 450

Can you help out, guys? It's a bit urgent.

I'm having the same problem, 2 780s
full member
Activity: 210
Merit: 100
Crypto News & Tutorials - Coinramble.com
That latest "GeForce 331.82 Driver" which is required to run this release hangs and BSODs my PC. Therefore, I'm not able to run the 18th Dec release  Cry

On GTS 450

Can you help out, guys? It's a bit urgent.
dga
hero member
Activity: 737
Merit: 511
I've been googling for ages and I have read the "README" file at least five times, but nowhere could I find what the intensity setting K28x8 or similar actually means.

I'm using the argument -l 28x8, but I might as well be burning my GPU right now because I have no actual idea what any of those numbers mean, and it isn't explained anywhere.

Could someone please explain how those numbers work so that I can use them properly on my GTX 660?

Thank you!

It would be a bit like explaining to a passenger how to land a plane. Wouldn't it be easier if I just showed him how to push the auto-land button? (yes, newer models have that feature).

To understand the terminology of launch configurations like -l K28x8, you would have to understand the CUDA programming model, what a launch grid is, what a thread block is, and how it consists of warps that are independently scheduled by your Kepler multiprocessor's four warp schedulers. And you would have to understand what parameters could make sense on your particular GPU architecture to achieve high occupancy. You would also have to know certain limits imposed by the shared memory and registers used by a given kernel.

Try auto-tuning first. Pass either -l auto, or no -l argument at all.

If that doesn't find a satisfactory configuration, we can talk about blocks and warps and the memory requirements.
The treatise linked to in my follow-up posting also has a bit of information.

Christian


Here's what I found as a noob... My card likes multiples of 160. It used to be 80x2, then 10x16, and now 5x32 is the best. So find your "magic number" by running autotune several times and looking at the first four-digit hash number it gives you (mine was 5120), then divide by 32 to get your magic number. Then experiment with multiples. It has always worked out best for me and I have no idea why.  Grin

Step 1:  Figure out how many CUDA cores your device has by googling for it and looking at NVidia's page.  Example:  GTX 660
http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-660/specifications

Has 960 CUDA cores.

Step 2:  Figure out how many of those cores are physically present on each execution unit.
  Step 2a:  Figure out your compute capability
   https://developer.nvidia.com/cuda-gpus
    Example, GTX 660, compute capability 3.0

  Step 2b:  For compute capability 3.0, each execution unit has 192 CUDA cores.

Step 3:  Divide #CUDA cores by number per execution unit.  Ex:  960 / 192 = 5.

  This tells you how many independent execution units you have ("SMXes" is the name for them in Kepler-based devices).
Setting the first number to the number of execution units is good.

Therefore:  5 is a very good choice for the first number in your tuning.  The second number depends on the amount of memory you have, but for compute capability 3.0 and 3.5 devices, 32 is a pretty good one.  So 5x32, as the follow-up poster suggested, is probably the right answer for you.  But trusting autotune in general is probably simpler. :-)
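Putting that together for the GTX 660 in question, the computed config would be passed something like this (just a sketch: the pool URL and worker credentials are placeholders, and if I read the README right the K prefix simply selects the Kepler kernel, as in the configs quoted elsewhere in this thread):

Code:
cudaminer -o stratum+tcp://your.pool.example:3333 -u yourworker -p yourpassword -l K5x32 -H 1 -C 2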

  -Dave
full member
Activity: 167
Merit: 100
How do you use cudaminer through a proxy?

Anyone? I can see there is a setting -x or --proxy. How do I use that?
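For what it's worth, cudaminer is based on pooler-cpuminer (see the version banner quoted later in this thread), where -x/--proxy takes the form [PROTOCOL://]HOST[:PORT]. An untested sketch, with the proxy address, pool URL and credentials all placeholders; whether stratum traffic actually honours the proxy may depend on the cudaminer/cpuminer version:

Code:
cudaminer -x socks5://127.0.0.1:1080 -o stratum+tcp://your.pool.example:3333 -u yourworker -p yourpassword -l auto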
full member
Activity: 173
Merit: 100
I've been googling for ages and I have read the "README" file at least five times, but nowhere could I find what the intensity setting K28x8 or similar actually means.

I'm using the argument -l 28x8, but I might as well be burning my GPU right now because I have no actual idea what any of those numbers mean, and it isn't explained anywhere.

Could someone please explain how those numbers work so that I can use them properly on my GTX 660?

Thank you!

It would be a bit like explaining to a passenger how to land a plane. Wouldn't it be easier if I just showed him how to push the auto-land button? (yes, newer models have that feature).

To understand the terminology of launch configurations like -l K28x8, you would have to understand the CUDA programming model, what a launch grid is, what a thread block is, and how it consists of warps that are independently scheduled by your Kepler multiprocessor's four warp schedulers. And you would have to understand what parameters could make sense on your particular GPU architecture to achieve high occupancy. You would also have to know certain limits imposed by the shared memory and registers used by a given kernel.

Try auto-tuning first. Pass either -l auto, or no -l argument at all.

If that doesn't find a satisfactory configuration, we can talk about blocks and warps and the memory requirements.
The treatise linked to in my follow-up posting also has a bit of information.

Christian


Here's what I found as a noob... My card likes multiples of 160. It used to be 80x2, then 10x16, and now 5x32 is the best. So find your "magic number" by running autotune several times and looking at the first four-digit hash number it gives you (mine was 5120), then divide by 32 to get your magic number. Then experiment with multiples. It has always worked out best for me and I have no idea why.  Grin
full member
Activity: 126
Merit: 100
I've been googling for ages and I have read the "README" file at least five times, but nowhere could I find what the intensity setting K28x8 or similar actually means.

I'm using the argument -l 28x8, but I might as well be burning my GPU right now because I have no actual idea what any of those numbers mean, and it isn't explained anywhere.

Could someone please explain how those numbers work so that I can use them properly on my GTX 660?

Thank you!

Your GPU isn't gonna burn up, dude. Relax. Hundreds of 660 Ti owners here have been mining away on a whole host of Kxxxx configurations. I myself have used K7x32, K12x16, K31x6, K14x8, K14x16, and K12x8 (recommended by -l auto).

and THEY BURNED MY GPU OH MY GOD

hero member
Activity: 756
Merit: 502

The GPU_MAX_ALLOC_PERCENT setting is snake oil for nVidia CUDA devices. I am pretty sure the driver won't care about this flag.

Christian

Hi everyone, I wrote a small Treatise on Cuda Miner, mind helping me check it over? (Much updated! wow!)
http://www.reddit.com/r/Dogecoinmining/comments/1tguse/a_treatise_on_cuda_miner/
hero member
Activity: 756
Merit: 502
I've been googling for ages and I have read the "README" file at least five times, but nowhere could I find what the intensity setting K28x8 or similar actually means.

I'm using the argument -l 28x8, but I might as well be burning my GPU right now because I have no actual idea what any of those numbers mean, and it isn't explained anywhere.

Could someone please explain how those numbers work so that I can use them properly on my GTX 660?

Thank you!

It would be a bit like explaining to a passenger how to land a plane. Wouldn't it be easier if I just showed him how to push the auto-land button? (yes, newer models have that feature).

To understand the terminology of launch configurations like -l K28x8, you would have to understand the CUDA programming model, what a launch grid is, what a thread block is, and how it consists of warps that are independently scheduled by your Kepler multiprocessor's four warp schedulers. And you would have to understand what parameters could make sense on your particular GPU architecture to achieve high occupancy. You would also have to know certain limits imposed by the shared memory and registers used by a given kernel.

Try auto-tuning first. Pass either -l auto, or no -l argument at all.

If that doesn't find a satisfactory configuration, we can talk about blocks and warps and the memory requirements.
The treatise linked to in my follow-up posting also has a bit of information.

Christian
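For anyone who does want the deeper picture: below is a rough CUDA C++ sketch of what a launch configuration like K5x32 corresponds to, i.e. a grid of thread blocks where each block holds a number of 32-thread warps. The kernel is a toy placeholder, not cudaminer's actual scrypt kernel, and the numbers are simply the GTX 660 values discussed above.

Code:
// Toy illustration of a "5 blocks x 32 warps" launch (CUDA C++).
// This is NOT cudaminer's kernel; it only shows how the two numbers
// in a -l K5x32 style config map onto a CUDA launch grid.
#include <cuda_runtime.h>

__global__ void dummy_kernel(int *out)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    out[tid] = tid;   // each of the 5*32*32 threads writes its own index
}

int main()
{
    const int blocks          = 5;                     // first number: one block per SMX
    const int warps_per_block = 32;                    // second number in -l
    const int threads         = warps_per_block * 32;  // 32 threads per warp = 1024/block

    int *d_out = 0;
    cudaMalloc(&d_out, blocks * threads * sizeof(int));

    // <<<grid, block>>>: 5 thread blocks of 32 warps each, scheduled
    // across the SMXes by the hardware's warp schedulers.
    dummy_kernel<<<blocks, threads>>>(d_out);
    cudaDeviceSynchronize();

    cudaFree(d_out);
    return 0;
}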
full member
Activity: 210
Merit: 100
I've been googling for ages and I have read the "README" file at least five times, but nowhere could I find what the intensity setting K28x8 or similar actually means.

I'm using the argument -l 28x8, but I might as well be burning my GPU right now because I have no actual idea what any of those numbers mean, and it isn't explained anywhere.

Could someone please explain how those numbers work so that I can use them properly on my GTX 660?

Thank you!
full member
Activity: 167
Merit: 100
How do you use cudaminer through a proxy?
full member
Activity: 173
Merit: 100
Is that configuration based off of autotune, or did you just select it?

The settings were originally "-D -H 1 -m 1 -d 0 -i 1 -l K16x16 -C 2", copied from here: https://litecoin.info/Mining_hardware_comparison

They were working fine at first for multiple hours of mining, but wouldn't work once I restarted the client, so I tweaked them a bit until I got it to work again.
The autotune settings haven't really been working for me, just a bunch of "result does not validate on CPU" errors.

To get the other card working you need "-d 0,1", and then you can set "-i 1,0" to use the 2nd card to its max.

Ex. "-D -d 0,1 -i 1,0 -l auto,auto -H 1 -C 2"
sr. member
Activity: 408
Merit: 250
ded
So after running my GTX 560 Ti for 2 days straight at a solid 280 khash/s on the new 12/18 software, cudaminer out of nowhere spiked the hashrate up to about 450 and the card started giving hardware errors, as it can't run that high.

I got a notification from my mining pool that a worker was down, so I RDP'd to the machine, closed out cudaminer, and restarted my script, with no changes made at all.

Now all of a sudden cudaminer is saying, "unable to query CUDA driver version.  Is an nVidia driver installed."
This of course isn't true.

Seeing as this happened the very first time I ran cudaminer, I simply tried reinstalling the driver.  When that didn't work I tried downgrading the driver and still had no luck.  I even installed the CUDA development kit and that didn't work either.  I can no longer get cudaminer to launch with any of the 3 versions that I have previously used.

I'm very confused at the moment.  The only thing crossing my mind is that maybe when I RDP to the machine, the graphics settings change for remote desktop and the CUDA driver gets disabled, and therefore cudaminer cannot relaunch.

Anyone ever tried to restart cudaminer via RDP before?
The bigger question is why cudaminer decided to randomly jump to 450 khash/s after 2 straight days mining at 280.

Thoughts, comments, help, all appreciated.  5k doge to anyone that can help me find a solution.

Lots doge you rich coins wow cudaminer wow doge happy coin.


Driver crashed? That happens to me if I try to push my OC too high. Does it still happen after a reboot?

I haven't used RDP, but I am using Chrome Remote Desktop and haven't had issues.

WOOOT!!!!!  kernels10, you have been awarded 5k doge.  My conclusion about RDP was 100% accurate, and I was able to verify that via Chrome Remote Desktop.

I used RDP to install Chrome Remote Desktop, exited RDP, entered through Chrome Remote Desktop, and the scripts started up perfectly.  What this verified is that, at least on the GTX 560 Ti, RDP does indeed kill the CUDA nVidia drivers upon connection, making it impossible to restart cudaminer.

I'm curious if this is the case with all Microsoft RDP sessions.

Thx  Cheesy
DL7Kf4tT1heq4E8NX41mSCKoaWnsySQEAt

Maybe MS RDP disables it as "unnecessary" for performance reasons?
I am not too familiar with RDP at all.
sr. member
Activity: 247
Merit: 250
Is that configuration based off of autotune, or did you just select it?

The settings were originally "-D -H 1 -m 1 -d 0 -i 1 -l K16x16 -C 2", copied from here: https://litecoin.info/Mining_hardware_comparison

They were working fine at first for multiple hours of mining, but wouldn't work once I restarted the client, so I tweaked them a bit until I got it to work again.
The autotune settings haven't really been working for me, just a bunch of "result does not validate on CPU" errors.

Take out -l, because that could be an old config that is no longer valid. You need to autotune after every upgrade.
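For example, the settings quoted above with the stale -l dropped so autotune runs at startup (per Christian's earlier post, passing -l auto does the same thing):

Code:
-D -H 1 -m 1 -d 0 -i 1 -C 2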
newbie
Activity: 3
Merit: 0
Is that configuration based off of autotune, or did you just select it?

The settings were originally "-D -H 1 -m 1 -d 0 -i 1 -l K16x16 -C 2", copied from here: https://litecoin.info/Mining_hardware_comparison

They were working fine at first for multiple hours of mining, but wouldn't work once I restarted the client, so I tweaked them a bit until I got it to work again.
The autotune settings haven't really been working for me, just a bunch of "result does not validate on CPU" errors.
sr. member
Activity: 247
Merit: 250
Recently started mining. I've got 2 Gigabyte GTX 770 4GB cards; however, I'm seeing some weird results.

My settings look like this: "-i 1 -l K16x16 -C 2"

I'm seeing a lot of "result does not validate on CPU"; basically every change I make to the settings will cause this error. Even restarting the client with the same settings will give me this error a bunch of times, and I have to tweak the settings, try to launch with the new settings, and then switch back to my old settings to get it to work. Even launching it without any pre-set settings will cause an error.

Once I do get it to work, one of the GPUs reports a hashrate of ~2000 khash/s, which is obviously false, and the other shows ~350 khash/s. However, if I change my settings to "-i 0", one of the GPUs will show a hashrate of ~90,000 khash/s while the other one remains at 350.

Also, looking at MSI Afterburner, only one card is being utilized: GPU #1 is at 100% usage while GPU #2 is sitting at 0%. I can't seem to get both GPUs to be utilized.

Any suggestions or ideas?

is that configuration based off of autotune? or did you just selected it?
newbie
Activity: 3
Merit: 0
Recently started mining. I've got 2 Gigabyte GTX 770 4GB cards; however, I'm seeing some weird results.

My settings look like this: "-i 1 -l K16x16 -C 2"

I'm seeing a lot of "result does not validate on CPU"; basically every change I make to the settings will cause this error. Even restarting the client with the same settings will give me this error a bunch of times, and I have to tweak the settings, try to launch with the new settings, and then switch back to my old settings to get it to work. Even launching it without any pre-set settings will cause an error.

Once I do get it to work, one of the GPUs reports a hashrate of ~2000 khash/s, which is obviously false, and the other shows ~350 khash/s. However, if I change my settings to "-i 0", one of the GPUs will show a hashrate of ~90,000 khash/s while the other one remains at 350.

Also, looking at MSI Afterburner, only one card is being utilized: GPU #1 is at 100% usage while GPU #2 is sitting at 0%. I can't seem to get both GPUs to be utilized.

Any suggestions or ideas?
newbie
Activity: 26
Merit: 0
Is there any way to do backup pools?

This is a feature to come; read the bottom of the readme.txt file.
sr. member
Activity: 791
Merit: 273
This is personal
Is there any way to do backup pools?
newbie
Activity: 4
Merit: 0
Hey everyone,

I decided to start mining some alt coins and thought I'd fire up a device I had lying around. I'm using a Tesla S870 system, but only 1 of the 2 connections for now (2 cards).  I'm having an issue finding a stable mining configuration. Essentially, the system appears to be busy and accepting work, but no coins are ever mined. I've done a lot of searching and trial & error with the -l command-line option. As I understand it, the -l option should be multiprocessors x CUDA cores, correct?  I.e. (from deviceQuery):

Code:
  (16) Multiprocessors x (  8) CUDA Cores/MP:    128 CUDA Cores

Therefore -l L16x8 ?

If I pump up the -l setting (e.g. L128x64, but it can be much lower), it will often say it's getting 300+ kh/s, which I believe is completely off. I noticed this in another post in this thread, and it seemed to be a one-off. I know that this is not rockstar hardware, but I would like to use it for some light mining. My questions are:

(1) What's the _right_ way to determine the -l settings? I have tried many options as well as 'auto' with -D and even -P (I am a web guy, after all Wink ), which often leads to L0x0 and crashes.

(2) Is there anything I can do to help with support for this hardware?

Here's my configuration:

OS:

Code:
$ uname -a
Linux hypercoil 3.5.0-23-generic #35~precise1-Ubuntu SMP Fri Jan 25 17:13:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

NVIDIA Driver:

Code:
$ cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module  304.54  Sat Sep 29 00:05:49 PDT 2012
GCC version:  gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)

NVCC/Cuda Tools:

Code:
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2012 NVIDIA Corporation
Built on Fri_Sep_21_17:28:58_PDT_2012
Cuda compilation tools, release 5.0, V0.2.1221

CudaMiner:

Code:
$ ./cudaminer
   *** CudaMiner for nVidia GPUs by Christian Buchner ***
             This is version 2013-12-10 (beta)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
       Cuda additions Copyright 2013 Christian Buchner
   My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

Kind regards,

DW