
Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 286. (Read 3426976 times)

newbie
Activity: 59
Merit: 0
from 1200h/s to 1400h/s

fuck it, time to sell my amd rigs.

but it isn't profitable right? 2-3 monero a day?

My profit calc says otherwise Cheesy

Mining with the cryptonight miner is very unstable. The hashrate shown on the miner and on the pool is very different; most of the time the pool hashrate can be 50% below what the miner shows. I think the miner's share submission to the pool needs improvement, because the pool doesn't receive shares often, which cuts the hashrate shown on the pool most of the time. The only time the pool shows the same hashrate as the miner is right after the initial connection; after that the hashrate starts to decay. The hashrate shown on the miner is just a guideline; what matters most is how much hash the pool receives, because that is what they are going to pay you for. So whatever you see in a profit calc needs to be reduced by 30-40% due to the unstable hashrate.

meh, it's pretty much accurate for me, still within ~5-10%, so no complaints there.

I think that might depend on the pool/difficulty; that's why you have that issue. Or maybe you just need more RAM, not sure.
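(For reference, a back-of-the-envelope sketch of why pool-side numbers swing: the pool can only estimate hashrate from accepted shares, roughly H/s = shares * share difficulty / elapsed seconds. The diff-15000 and 280 H/s figures are taken from the 750 Ti logs later in the thread; the 10-minute window and the unlucky share count are assumptions for illustration.)

#include <stdio.h>

int main(void)
{
    /* The pool only sees shares, so its estimate is roughly
       H/s = accepted_shares * share_difficulty / elapsed_seconds. */
    double share_diff = 15000.0;  /* "Pool set diff to 15000", from a log later in the thread */
    double miner_hs   = 280.0;    /* one 750 Ti, per the readings later in the thread */

    printf("expected share interval: %.0f s\n", share_diff / miner_hs);  /* ~54 s */

    /* A 10-minute window only holds ~11 expected shares, so an unlucky run
       of 7 shares already reads as a ~37% drop on the pool's graph. */
    printf("unlucky pool estimate: %.0f H/s\n", 7.0 * share_diff / 600.0);  /* 175 H/s */
    return 0;
}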
legendary
Activity: 1764
Merit: 1006
from 1200h/s to 1400h/s

fuck it, time to sell my amd rigs.

but it isn't profitable right? 2-3 monero a day?

My profit calc says otherwise Cheesy

Mining with the cryptonight miner is very unstable. The hashrate shown on the miner and on the pool is very different; most of the time the pool hashrate can be 50% below what the miner shows. I think the miner's share submission to the pool needs improvement, because the pool doesn't receive shares often, which cuts the hashrate shown on the pool most of the time. The only time the pool shows the same hashrate as the miner is right after the initial connection; after that the hashrate starts to decay. The hashrate shown on the miner is just a guideline; what matters most is how much hash the pool receives, because that is what they are going to pay you for. So whatever you see in a profit calc needs to be reduced by 30-40% due to the unstable hashrate.

meh, it's pretty much accurate for me, still within ~5-10%, so no complaints there.

from 1200h/s to 1400h/s

fuck it, time to sell my amd rigs.
For XMR:
The AMD R9 290X does about 750 H/s at stock (GPU @ 1000, Tri-X).
The pair of an R9 290X @ 1000 and an R9 290 @ 900 (underclocked for noise and heat, -100 mV) gives me about 1390 H/s.
Power usage is about 330 W for the pair.
Maybe when the BBR ccminer arrives I will switch to Nvidia; up to now AMD is good enough.



I still have 15 AMD GPUs: seven 7950s, five 280Xs, and three 7970s, so I'll probably just switch out the 7950s.
full member
Activity: 210
Merit: 100
Crypto Currency Supporter
Hello every1,

I am using ccminer v1, which gets me 1800 kH/s from my 660 Ti, and ccminer 1.2 for 2200 kH/s; it just makes my card lag a bit when using the PC. Are there any links to a better/more optimized ccminer build other than the GitHub release? Thank you very much!
sr. member
Activity: 480
Merit: 250
Does the new cryptonight release of ccminer support anything other than the 750 Ti? I tried to get it running on my laptop with a GeForce GT 750M (don't laugh), which is compute 3.0 compatible, but Windows reported that the driver had crashed and recovered, ccminer showed outrageously high hashrates while never submitting anything successfully to the pool, and my GPU meter showed basically zero activity. Do I need to do more than put the "-a cryptonight" flag in the batch file?

I get exactly the same problem with a 750 Ti on Windows 8.1, as described in detail here:
https://bitcointalksearch.org/topic/m.7600519
Any help appreciated.

Same problem on Windows 8.1. Is there an optimized version for Windows 8.1?

Ah, my laptop is also running Windows 8.1, but it never occurred to me that could be the problem. Please tsiv, is there any way to fix this? Have mercy on us!  Cheesy
full member
Activity: 137
Merit: 100
Does the new cryptonight release of ccminer support anything other than the 750 Ti? I tried to get it running on my laptop with a GeForce GT 750M (don't laugh), which is compute 3.0 compatible, but Windows reported that the driver had crashed and recovered, ccminer showed outrageously high hashrates while never submitting anything successfully to the pool, and my GPU meter showed basically zero activity. Do I need to do more than put the "-a cryptonight" flag in the batch file?

I get exactly the same problem with a 750 Ti on Windows 8.1, as described in detail here:
https://bitcointalksearch.org/topic/m.7600519
Any help appreciated.

Same problem on Windows 8.1. Is there an optimized version for Windows 8.1?

Cross-post from the bounty thread:

I'm trying to use the pre-compiled ccminer-cryptonight_20140630_r2 ccminer on Windows 8.1 with a GTX 750 Ti and seem to be having some problems getting results.

I am pointing the miner at minexmr as indicated on their website:
http://minexmr.com/
with a batch file as follows:
C:\monero\ccminer-cryptonight_20140630_r2\ccminer.exe -t 1 -d gtx750ti -o stratum+tcp://pool.minexmr.com:7777 -u <wallet address> -p x

At launch, I get a series of results like:
GPU #0: GeForce GTX 750 Ti, using 40 blocks of 8 threads
Pool set diff to 15000
GPU #0: GeForce GTX 750 Ti, 93.81 H/s
then a popup says the display driver stopped responding and has recovered. After that I see results with crazy high hash numbers like this:
GPU #0: GeForce GTX 750 Ti, 163611988.12 H/s
interspersed with
'stratum detected new block'
but no accepted results within a half-hour check period.

I also tried downloading the previous release, but switching to that one makes the cmd.exe window pop up and vanish immediately on my system (Windows 8.1, driver 337.88). The GTX 750 Ti is not attached to a display output.

Any help appreciated. Not sure what's going wrong.

Pretty sure it's still a TDR issue. The biggest part of the cryptonight core still runs as a single launch, and it just might take over 2 seconds, at which point Windows with the default TDR delay considers the GPU stuck and does a driver reset. See https://bitcointalksearch.org/topic/m.7529269 for a workaround. I plan on looking at splitting the work down; at a quick glance it looks like it could be run piece by piece. It will probably hurt performance a bit, since the encryption keys have to be saved and reloaded on every kernel launch and the launches themselves have some overhead. My thought is to make it a command-line option, allowing the user to decide how much (or if) they want to split it up. Maybe add a few microseconds of sleep between the launches too, to stop the display freezing for 1+ seconds at a time and make the computer at least semi-usable.
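(For reference, a minimal sketch of the splitting idea described above. This is not tsiv's actual code: the kernel name, the stand-in loop body, and the fixed split factor are made up for illustration. The linked workaround takes the other route and raises the limit itself, via the documented TdrDelay DWORD under HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers.)

#include <cuda_runtime.h>
#include <stdint.h>

// Hypothetical partial kernel: each launch advances the memory-hard loop by
// `iters` steps, resuming from per-thread state saved in device memory.
__global__ void cn_core_partial(uint32_t *thread_state, uint32_t iters)
{
    uint32_t idx = blockIdx.x * blockDim.x + threadIdx.x;
    uint32_t s = thread_state[idx];
    for (uint32_t i = 0; i < iters; i++)
        s = s * 1664525u + 1013904223u;   // stand-in for the real per-iteration work
    thread_state[idx] = s;                // save state so the next launch can resume
}

int main(void)
{
    const uint32_t TOTAL_ITERS = 524288;  // cryptonight main-loop iteration count
    const int blocks = 40, threads = 8;   // launch config from the 750 Ti log above
    const int split = 8;                  // would become the proposed cmd-line option

    uint32_t *d_state;
    cudaMalloc(&d_state, blocks * threads * sizeof(uint32_t));
    cudaMemset(d_state, 0, blocks * threads * sizeof(uint32_t));

    for (int i = 0; i < split; i++) {
        cn_core_partial<<<blocks, threads>>>(d_state, TOTAL_ITERS / split);
        cudaDeviceSynchronize();          // each piece now finishes well under the 2 s TDR limit
        // a few microseconds of sleep here would keep the display semi-responsive
    }
    cudaFree(d_state);
    return 0;
}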

full member
Activity: 137
Merit: 100
Compute 2.0/3.0/3.5 win32 binary: https://github.com/tsiv/ccminer-cryptonight/releases/tag/v0.13

No need to update if you're running the previous release; this update simply pulls out the other ccminer algos and adds compute 2.0 support.

Time to show some gratitude: the binary release readme.txt contains tsiv's "motivational addresses" Wink

pushed some VTC and XMR your way.

Not gonna lie, working on this algo is making a huge dent in the whiskey fund. Cheers mate Smiley
newbie
Activity: 23
Merit: 0
Compute 2.0/3.0/3.5 win32 binary: https://github.com/tsiv/ccminer-cryptonight/releases/tag/v0.13

No need to update if you're running the previous release; this update simply pulls out the other ccminer algos and adds compute 2.0 support.

Time to show some gratitude: the binary release readme.txt contains tsiv's "motivational addresses" Wink

pushed some VTC and XMR your way.
hero member
Activity: 868
Merit: 1000
Does the new cryptonight release of ccminer support anything other than the 750 Ti? I tried to get it running on my laptop with a GeForce GT 750M (don't laugh), which is compute 3.0 compatible, but Windows reported that the driver had crashed and recovered, ccminer showed outrageously high hashrates while never submitting anything successfully to the pool, and my GPU meter showed basically zero activity. Do I need to do more than put the "-a cryptonight" flag in the batch file?

I get exactly the same problem with a 750 Ti on Windows 8.1, as described in detail here:
https://bitcointalksearch.org/topic/m.7600519
Any help appreciated.

Same problem on Windows 8.1. Is there an optimized version for Windows 8.1?
sr. member
Activity: 280
Merit: 250
Does the new cryptonight release of ccminer support anything other than the 750 Ti? I tried to get it running on my laptop with a GeForce GT 750M (don't laugh), which is compute 3.0 compatible, but Windows reported that the driver had crashed and recovered, ccminer showed outrageously high hashrates while never submitting anything successfully to the pool, and my GPU meter showed basically zero activity. Do I need to do more than put the "-a cryptonight" flag in the batch file?

I get exactly the same problem with a 750 Ti on Windows 8.1, as described in detail here:
https://bitcointalksearch.org/topic/m.7600519
Any help appreciated.
full member
Activity: 149
Merit: 100
Took the easy gains first, around 18% boost on 750 Ti. Still trying to wrap my head around the hard part.

[2014-06-30 00:11:59] GPU #5: GeForce GTX 750 Ti, 286.07 H/s
[2014-06-30 00:11:59] GPU #1: GeForce GTX 750 Ti, 286.72 H/s
[2014-06-30 00:11:59] GPU #2: GeForce GTX 750 Ti, 285.23 H/s
[2014-06-30 00:11:59] GPU #0: GeForce GTX 750 Ti, 284.80 H/s
[2014-06-30 00:11:59] GPU #4: GeForce GTX 750 Ti, 284.69 H/s
[2014-06-30 00:11:59] GPU #3: GeForce GTX 750 Ti, 284.42 H/s
[2014-06-30 00:12:04] GPU #0: GeForce GTX 750 Ti, 274.37 H/s
[2014-06-30 00:12:04] accepted: 4/4 (100.00%), 1701.50 H/s (yay!!!)

...more performance gain than I thought. Really nice, thanks!
Your donation address is missing, though; I can't find it in your signature or on GitHub.

What's the power usage per GPU with the new build? It seems GPUs are getting close to Xeons in efficiency.

My 6x750 Ti rig is drawing 310 W from the wall at 280 H/s per card: a BTC H81 board with a Celeron G1820. That's somewhere slightly under 50 W per card, plus whatever the rest of the rig pulls.

Edit: Put some addresses on the GitHub front page earlier today, btw. My "member rank" on these forums allows for a massive 50-character signature, pretty much good for nothing.

Same for me, but with the old compiled version (about 220 H/s per card).

Where can I find this new update as a Win64 compiled version?
full member
Activity: 137
Merit: 100
Took the easy gains first, around 18% boost on 750 Ti. Still trying to wrap my head around the hard part.

[2014-06-30 00:11:59] GPU #5: GeForce GTX 750 Ti, 286.07 H/s
[2014-06-30 00:11:59] GPU #1: GeForce GTX 750 Ti, 286.72 H/s
[2014-06-30 00:11:59] GPU #2: GeForce GTX 750 Ti, 285.23 H/s
[2014-06-30 00:11:59] GPU #0: GeForce GTX 750 Ti, 284.80 H/s
[2014-06-30 00:11:59] GPU #4: GeForce GTX 750 Ti, 284.69 H/s
[2014-06-30 00:11:59] GPU #3: GeForce GTX 750 Ti, 284.42 H/s
[2014-06-30 00:12:04] GPU #0: GeForce GTX 750 Ti, 274.37 H/s
[2014-06-30 00:12:04] accepted: 4/4 (100.00%), 1701.50 H/s (yay!!!)

...more performance gain than I thought. Really nice, thanks!
Your donation address is missing, though; I can't find it in your signature or on GitHub.

What's the power usage per GPU with the new build? It seems GPUs are getting close to Xeons in efficiency.

My 6x750 Ti rig is drawing 310 W from the wall at 280 H/s per card: a BTC H81 board with a Celeron G1820. That's somewhere slightly under 50 W per card, plus whatever the rest of the rig pulls.

Edit: Put some addresses on the GitHub front page earlier today, btw. My "member rank" on these forums allows for a massive 50-character signature, pretty much good for nothing.
legendary
Activity: 1400
Merit: 1050
Can you message me the code for reading the memory values out that gives you the 2 uint32s?
I'm at work, so I can't load any code files up, only view text.

I will have a look.
You mean the .cu file?

By the way, could anyone with older cards (Fermi & Kepler) have a look at the performance of this executable:

https://mega.co.nz/#!tMl0TAAK!UOlmfbmhT1_UEDC7NSYUcNFTTrdVDIp2Q6DJ1Riok5U
Sad

Did my previous exe (or Amph's exe) work for you? (Because if you had a problem before with my previous exe, well, it won't change...)
newbie
Activity: 47
Merit: 0
Can you message me the code for reading the memory values out that gives you the 2 uint32s?
I'm at work, so I can't load any code files up, only view text.

I will have a look.
You mean the .cu file?

By the way, could anyone with older cards (Fermi & Kepler) have a look at the performance of this executable:

https://mega.co.nz/#!tMl0TAAK!UOlmfbmhT1_UEDC7NSYUcNFTTrdVDIp2Q6DJ1Riok5U
Sad
http://sd.uploads.ru/Pju6f.png
newbie
Activity: 15
Merit: 0
Took the easy gains first, around 18% boost on 750 Ti. Still trying to wrap my head around the hard part.

[2014-06-30 00:11:59] GPU #5: GeForce GTX 750 Ti, 286.07 H/s
[2014-06-30 00:11:59] GPU #1: GeForce GTX 750 Ti, 286.72 H/s
[2014-06-30 00:11:59] GPU #2: GeForce GTX 750 Ti, 285.23 H/s
[2014-06-30 00:11:59] GPU #0: GeForce GTX 750 Ti, 284.80 H/s
[2014-06-30 00:11:59] GPU #4: GeForce GTX 750 Ti, 284.69 H/s
[2014-06-30 00:11:59] GPU #3: GeForce GTX 750 Ti, 284.42 H/s
[2014-06-30 00:12:04] GPU #0: GeForce GTX 750 Ti, 274.37 H/s
[2014-06-30 00:12:04] accepted: 4/4 (100.00%), 1701.50 H/s (yay!!!)

...more performance gain than I thought. Really nice, thanks!
Your donation address is missing, though; I can't find it in your signature or on GitHub.

What's the power usage per GPU with the new build? It seems GPUs are getting close to Xeons in efficiency.
legendary
Activity: 1400
Merit: 1050
Just had a look and I can't see any way to do it without merging. There are ways to get 64-bit values out of the memory, but I can't find them.
I was thinking of using the funnel shift (without shifting Grin); it is supposed to work with the uint64 type (at least I read that, however I can't find any method returning a long long).
Not to mention I'm not entirely sure it's for the same type of memory... (such a mess for a beginner...)
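(For reference: the CUDA funnel-shift intrinsics (sm_32 and up) do consume a 64-bit hi:lo pair, but they only ever return 32 bits, which would explain why no long-long-returning method turns up. A minimal sketch of a 64-bit rotate built from two of them follows; rotl64 is a made-up helper name, and this form assumes 0 < n < 32.)

// 64-bit rotate-left assembled from two 32-bit funnel shifts (0 < n < 32).
__device__ unsigned long long rotl64(unsigned long long x, unsigned int n)
{
    unsigned int lo  = (unsigned int)x;
    unsigned int hi  = (unsigned int)(x >> 32);
    unsigned int rhi = __funnelshift_l(lo, hi, n);  // upper word of (hi:lo) << n
    unsigned int rlo = __funnelshift_l(hi, lo, n);  // upper word of (lo:hi) << n
    return ((unsigned long long)rhi << 32) | rlo;
}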
sr. member
Activity: 350
Merit: 250
Just had a look and I can't see any way to do it without merging. There are ways to get 64-bit values out of the memory, but I can't find them.
legendary
Activity: 1400
Merit: 1050
Can you message me the code for reading the memory values out that gives you the 2 uint32s?
I'm at work, so I can't load any code files up, only view text.

I will have a look.
You mean the .cu file?

By the way, could anyone with older cards (Fermi & Kepler) have a look at the performance of this executable:

https://mega.co.nz/#!tMl0TAAK!UOlmfbmhT1_UEDC7NSYUcNFTTrdVDIp2Q6DJ1Riok5U
sr. member
Activity: 350
Merit: 250
Can you message me the code for reading the memory values out that gives you the 2 uint32s?
I'm at work, so I can't load any code files up, only view text.

I will have a look.
legendary
Activity: 1400
Merit: 1050
Something strange:
I am able to get the 780 Ti up to 3477 kH/s (on X15) at 97% GPU usage, however I lose 200 kH/s on the 750 Ti.
(It also compiles a lot faster.)

Apparently this has to do with how the merge of the two uint32s into a uint64 is done...
faster on 750 Ti: __double_as_longlong(__hiloint2double(HI, LO))
faster on 780 Ti: (unsigned long long)LO | (((unsigned long long)HI) << 32ULL);

What is strange is that I thought GPUs were optimised for real numbers (rather than integers).


Very strange. Maybe compute 5 cards are better at that **shrugs shoulder**
Are you still using the 2 uint32s being merged?
For the moment yes; the best thing would be to just get a pointer to the texture memory, because the long long is right there, but I can only access it in smaller chunks.
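(For reference, the two variants side by side as __device__ helpers in a .cu file. merge_shift and merge_double are made-up names, and LO/HI stand for the two 32-bit words read from memory; both produce (HI << 32) | LO, they just travel through different register paths, which is presumably why they favor different compute capabilities.)

// Plain integer path: reportedly faster on the 780 Ti (compute 3.5).
__device__ unsigned long long merge_shift(unsigned int LO, unsigned int HI)
{
    return (unsigned long long)LO | (((unsigned long long)HI) << 32ULL);
}

// Reinterpret path via a double register: reportedly faster on the 750 Ti (compute 5.0).
__device__ unsigned long long merge_double(unsigned int LO, unsigned int HI)
{
    return __double_as_longlong(__hiloint2double(HI, LO));
}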
sr. member
Activity: 350
Merit: 250
Something strange:
I am able to get the 780 Ti up to 3477 kH/s (on X15) at 97% GPU usage, however I lose 200 kH/s on the 750 Ti.
(It also compiles a lot faster.)

Apparently this has to do with how the merge of the two uint32s into a uint64 is done...
faster on 750 Ti: __double_as_longlong(__hiloint2double(HI, LO))
faster on 780 Ti: (unsigned long long)LO | (((unsigned long long)HI) << 32ULL);

What is strange is that I thought GPUs were optimised for real numbers (rather than integers).


Very strange. Maybe compute 5 cards are better at that **shrugs shoulder**
Are you still using the 2 uint32s being merged?