
Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 1051. (Read 3426947 times)

full member
Activity: 196
Merit: 100
what does the -H 1 parallelization do?

and why do you have -i 0? What does that do vs. -i 1?

With the -H flag at 1, the SHAish parts of the app are done multi-threaded on the CPU.

The -i flag says whether the app may load the GPU as much as possible (0) or keep some headroom for the user interface (1).
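As a loose illustration of the -H 1 idea (this is NOT cudaMiner's actual code, and scrypt mining involves far more than SHA-256; every name below is made up), here is what "doing the CPU-side hashing multi-threaded" means in miniature:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def sha256d(data: bytes) -> bytes:
    # Double SHA-256, the kind of CPU-side hashing work -H parallelizes.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def hash_batch(headers, threads=1):
    # threads=1 behaves like single-threaded CPU hashing (the -H 0 case);
    # threads>1 spreads the batch over CPU worker threads (the -H 1 case).
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(sha256d, headers))

# Eight fake 80-byte block headers.
headers = [i.to_bytes(4, "little") * 20 for i in range(8)]

# Multi-threading changes throughput, never the hashes themselves:
assert hash_batch(headers, 1) == hash_batch(headers, 4)
```

The trade-off the thread describes follows directly: more worker threads means more CPU load, which is why -H 1 raises CPU usage.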
newbie
Activity: 6
Merit: 0
Ok. Windows 7 user here. Ran it using cudaminer.exe -i 13 -D -H 1 -C 2 -o stratum+tcp://pool.com:3333 -O id.worker:pass
All I got is:
https://i.imgur.com/qW0qoO2.png
Edit: Tried it with a 3335 port, since that is what my pool uses. No luck there either.

It's the pool, not the miner, in this case.
sr. member
Activity: 406
Merit: 250
Ok. Windows 7 user here. Ran it using cudaminer.exe -i 13 -D -H 1 -C 2 -o stratum+tcp://pool.com:3333 -O id.worker:pass
All I got is:

Edit: Tried it with a 3335 port, since that is what my pool uses. No luck there either.
full member
Activity: 140
Merit: 100
Okay guys I looked in the spreadsheet for GTS 450 code, still cannot run the miner (it runs for a microsecond and then closes)

Code:
cudaminer.exe -i 0 -D -H 1 -C 2 -l F24x8 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 


I can easily run the 7 Dec release on this code:

Code:
cudaminer.exe -i 0 -l auto -m 1 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 


The -i flag is intensity; you have it set to zero. Set it to 13 and see what happens. I'd also drop the -l auto flag; just let autotune figure it out, then set -l to whatever it settled on.

Changed -i and -l auto as you said; it still doesn't start up (same microsecond startup).
Code:
cudaminer.exe -i 13 -D -H 1 -C 2 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 



One thing a lot of people get wrong is targeting the executable.

Is your cudaminer folder in the C:\ root? If it is, your .bat would look something like this:

START C:\cudaminer\cudaminer.exe -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass

If your file is in C:\ you could run that right now for autotuning.
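Since a window that closes instantly hides whatever error the miner printed (the "runs for a microsecond and then closes" symptom above), a hypothetical launcher .bat along these lines keeps the window open; the path and pool details are placeholders from the thread, not verified values:

```
@echo off
REM Hypothetical launcher; adjust the path to wherever cudaminer.exe lives.
START /WAIT C:\cudaminer\cudaminer.exe -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:pass
REM PAUSE keeps the window open so a crash message can actually be read.
PAUSE
```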
member
Activity: 117
Merit: 10
I get 224kH/s on my MSI GTX 660 (non-TI) with optimal settings and a good overclock with maximum powertune.

Settings: -l K10x16 -i 0 -m 1

While I use the machine, I can set the card to minimum powertune, slight overclock on engine clock, use -i 1 of course, and still get 170kH/s. Real power is at 77% of nominal, at a very efficient 0.962V figure for voltage.

Keep up the good work! I am considering a donation :)

Add: Just tried the -H 1 parallelization. Together with a more parallel K5x32 it got me to 180kH/s, with interactive powersave settings. My UI now is a tiny bit less responsive and CPU load is somewhat higher, though.


hi 660 owner!

what does the -H 1 parallelization do?

and why do you have -i 0? What does that do vs. -i 1?



It's all in the readme.
full member
Activity: 126
Merit: 100
I get 224kH/s on my MSI GTX 660 (non-TI) with optimal settings and a good overclock with maximum powertune.

Settings: -l K10x16 -i 0 -m 1

While I use the machine, I can set the card to minimum powertune, slight overclock on engine clock, use -i 1 of course, and still get 170kH/s. Real power is at 77% of nominal, at a very efficient 0.962V figure for voltage.

Keep up the good work! I am considering a donation :)

Add: Just tried the -H 1 parallelization. Together with a more parallel K5x32 it got me to 180kH/s, with interactive powersave settings. My UI now is a tiny bit less responsive and CPU load is somewhat higher, though.


hi 660 owner!

what does the -H 1 parallelization do?

and why do you have -i 0? What does that do vs. -i 1?

full member
Activity: 210
Merit: 100
Crypto News & Tutorials - Coinramble.com
Okay guys I looked in the spreadsheet for GTS 450 code, still cannot run the miner (it runs for a microsecond and then closes)

Code:
cudaminer.exe -i 0 -D -H 1 -C 2 -l F24x8 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 


I can easily run the 7 Dec release on this code:

Code:
cudaminer.exe -i 0 -l auto -m 1 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 


The -i flag is intensity; you have it set to zero. Set it to 13 and see what happens. I'd also drop the -l auto flag; just let autotune figure it out, then set -l to whatever it settled on.

Changed -i and -l auto as you said; it still doesn't start up (same microsecond startup).
Code:
cudaminer.exe -i 13 -D -H 1 -C 2 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 

full member
Activity: 140
Merit: 100
Okay guys I looked in the spreadsheet for GTS 450 code, still cannot run the miner (it runs for a microsecond and then closes)

Code:
cudaminer.exe -i 0 -D -H 1 -C 2 -l F24x8 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 


I can easily run the 7 Dec release on this code:

Code:
cudaminer.exe -i 0 -l auto -m 1 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 


The -i flag is intensity; you have it set to zero. Set it to 13 and see what happens. I'd also drop the -l auto flag; just let autotune figure it out, then set -l to whatever it settled on.
full member
Activity: 210
Merit: 100
Okay guys I looked in the spreadsheet for GTS 450 code, still cannot run the miner (it runs for a microsecond and then closes)

Code:
cudaminer.exe -i 0 -D -H 1 -C 2 -l F24x8 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 


I can easily run the 7 Dec release on this code:

Code:
cudaminer.exe -i 0 -l auto -m 1 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 
newbie
Activity: 43
Merit: 0
Hi,

A new miner here. I have a rig that's not meant for mining, but I thought I'd use it for a trial run to see how things work. I am mining Litecoins with this setup.

I have 4x nVidia Quadro K6000 cards and I am running cudaMiner in autotune mode (if that's the mode it runs in when no extra flags are specified).

One thing I've noticed is that one of the 4 GPUs gives a hash rate of 485 kH/s while the other 3 range from 280-370 kH/s. I am getting a total average of 1450 kH/s. Is this hash rate any good? How do I make the other 3 run at 485 kH/s as well?

Does anyone have suggestions for a manual config to get better results? UI responsiveness is of no use to me; just suggest the settings that would give maximum performance.

Lastly, I would like to thank the OP for cudaMiner. From what I've read, it seems to be the only miner that makes nVidia cards useful at all.

Regards.
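As a quick sanity check, the figures in the post are self-consistent:

```python
# With one card at 485 kH/s and a 1450 kH/s total, the other three
# must average about 322 kH/s, which fits the stated 280-370 kH/s spread.
total_khs = 1450
fast_card = 485
others_avg = (total_khs - fast_card) / 3
assert 280 <= others_avg <= 370
print(f"{others_avg:.1f} kH/s")  # 321.7 kH/s
```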

Try adding -D to see which config it picks for each card, then apply the one that gives the highest hashrate to all of them. After that you can play around with -H 0/1/2. Otherwise, going by previous posts, you should try a -l config of (number of SMX units on the card)x32, which for the K6000 should be -l T15x32, or maybe the Kepler kernel with K15x32.
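A sketch of that advice as commands (the pool URL and worker names are the thread's placeholders; the -d device-selection and -D debug flags appear elsewhere in this thread, but treat the exact invocations as unverified):

```
# Probe one card at a time with debug output and autotune,
# noting which kernel config wins on each device (0-3):
cudaminer.exe -d 0 -D -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:pass
# ...repeat with -d 1, -d 2, -d 3...

# Then pin the best config (e.g. the suggested T15x32) and run all cards:
cudaminer.exe -l T15x32 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:pass
```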
member
Activity: 117
Merit: 10

Coolbits is not supported anymore for Fermi and Kepler. Despite Nvidia advertising their unified driver architecture with equal features on all platforms, it is actually impossible to software-overclock 4xx and newer cards on GNU/Linux. nvclock doesn't work either with the newer cards.
The only way is to flash a modified VBIOS. But most of the modified ones floating around on the net, for example the one on TechInferno by svl7, disable GPU Boost and set a low base clock, relying on the user to set the clock rate in software.
You could edit one yourself (Kepler BIOS Tweaker 1.26 (not 1.25) supports base clock, boost and TDP editing for recent cards), but I'd advise you to have a backup card for recovering if it goes wrong.

I'm also curious about OC on Linux, but I don't even need to set any clocks, just raise the power target (like in MSI Afterburner). Is that impossible, too?
newbie
Activity: 4
Merit: 0
Hi,

A new miner here. I have a rig that's not meant for mining, but I thought I'd use it for a trial run to see how things work. I am mining Litecoins with this setup.

I have 4x nVidia Quadro K6000 cards and I am running cudaMiner in autotune mode (if that's the mode it runs in when no extra flags are specified).

One thing I've noticed is that one of the 4 GPUs gives a hash rate of 485 kH/s while the other 3 range from 280-370 kH/s. I am getting a total average of 1450 kH/s. Is this hash rate any good? How do I make the other 3 run at 485 kH/s as well?

Does anyone have suggestions for a manual config to get better results? UI responsiveness is of no use to me; just suggest the settings that would give maximum performance.

Lastly, I would like to thank the OP for cudaMiner. From what I've read, it seems to be the only miner that makes nVidia cards useful at all.

Regards.
full member
Activity: 196
Merit: 100
I get 224kH/s on my MSI GTX 660 (non-TI) with optimal settings and a good overclock with maximum powertune.

Settings: -l K10x16 -i 0 -m 1

While I use the machine, I can set the card to minimum powertune, slight overclock on engine clock, use -i 1 of course, and still get 170kH/s. Real power is at 77% of nominal, at a very efficient 0.962V figure for voltage.

Keep up the good work! I am considering a donation :)

Add: Just tried the -H 1 parallelization. Together with a more parallel K5x32 it got me to 180kH/s, with interactive powersave settings. My UI now is a tiny bit less responsive and CPU load is somewhat higher, though.
full member
Activity: 126
Merit: 100
Anyone have optimal settings for GTX660 Ti?
I'm just starting out and my settings aren't working. I've been reading the readme file for an hour now trying to make sense of the -H -C and all.

C:\cudaminer\x64\cudaminer.exe -H 2 -d 0 -l auto -i 1 K7x32 -o stratum+tcp://www.zzz.com:3333 -O worker:x

This is my first time so I must have the settings wrong. Thanks!


edit: ok, I went back a few pages and just copied other people's 660 Ti settings. I am testing out

cudaminer.exe -H 1 -i 0 -C 1 -D -l K14x16 -o stratum...   (avg around 265 khash/s)

and

cudaminer.exe -H 1 -i 0 -C 1 -D -l K7x32 -o stratum   (will report later)
full member
Activity: 136
Merit: 100
Gigabyte GTX 770 OC 4 GB card :D ...few beers last night.

How does ~330 kH/s sound with the new cudaminer... 4 GB.
full member
Activity: 210
Merit: 100
...
550Ti
Linux kernel-3.11 x86_64  cudatoolkit-5.5 90-92 kH
Win7x64 72-76 kH

Aha! It's all in the settings...

Using the following:
cudaminer -i 0 -C 1 -H 1 -o stratum.... -O ....

--- the result is
Code:
GPU #0: GeForce GTX 550 Ti, 313344 hashes, 94.62 khash/s
accepted: 35/35 (100.00%), 94.62 khash/s (yay!!!)

To get this result, I also had to stop mining on the CPU, since CPU usage is over 50% of 2 cores with that setup. My setup is a GV-N550OC-1GI at its factory overclock running on a lowly AMD 5200+ / Asus M2AVM / 6GB. (Ubuntu 13.04 with the kernel 3.12 .DEBs stolen from "trusty"'s archive)

On GTS 450, recent miner won't launch with code:
Code:
cudaminer.exe -i 0 -C 1 -H 1
or with any other settings :( Someone please give me a working command for the 450.

Tried this too:
Code for 18 Dec release to run on GTS 450 please.

Change -l 32x4 to -l auto, remove -C (it's ignored now).
Not working
sr. member
Activity: 408
Merit: 250

Coolbits is not supported anymore for Fermi and Kepler. Despite Nvidia advertising their unified driver architecture with equal features on all platforms, it is actually impossible to software-overclock 4xx and newer cards on GNU/Linux. nvclock doesn't work either with the newer cards.
The only way is to flash a modified VBIOS. But most of the modified ones floating around on the net, for example the one on TechInferno by svl7, disable GPU Boost and set a low base clock, relying on the user to set the clock rate in software.
You could edit one yourself (Kepler BIOS Tweaker 1.26 (not 1.25) supports base clock, boost and TDP editing for recent cards), but I'd advise you to have a backup card for recovering if it goes wrong.

yep, no coolbits.

I guess I'm gonna have to try this out sooner than I'd planned lol
newbie
Activity: 2
Merit: 0

Coolbits is not supported anymore for Fermi and Kepler. Despite Nvidia advertising their unified driver architecture with equal features on all platforms, it is actually impossible to software-overclock 4xx and newer cards on GNU/Linux. nvclock doesn't work either with the newer cards.
The only way is to flash a modified VBIOS. But most of the modified ones floating around on the net, for example the one on TechInferno by svl7, disable GPU Boost and set a low base clock, relying on the user to set the clock rate in software.
You could edit one yourself (Kepler BIOS Tweaker 1.26 (not 1.25) supports base clock, boost and TDP editing for recent cards), but I'd advise you to have a backup card for recovering if it goes wrong.
newbie
Activity: 19
Merit: 0
...
550Ti
Linux kernel-3.11 x86_64  cudatoolkit-5.5 90-92 kH
Win7x64 72-76 kH

Aha! It's all in the settings...

Using the following:
cudaminer -i 0 -C 1 -H 1 -o stratum.... -O ....

--- the result is
Code:
GPU #0: GeForce GTX 550 Ti, 313344 hashes, 94.62 khash/s
accepted: 35/35 (100.00%), 94.62 khash/s (yay!!!)

To get this result, I also had to stop mining on the CPU, since CPU usage is over 50% of 2 cores with that setup. My setup is a GV-N550OC-1GI at its factory overclock running on a lowly AMD 5200+ / Asus M2AVM / 6GB. (Ubuntu 13.04 with the kernel 3.12 .DEBs stolen from "trusty"'s archive)
