Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 195. (Read 3426932 times)

newbie
Activity: 25
Merit: 0
I am stuck compiling ccminer on Linux (Linux Mint, actually). Please help me.

- installed the 340.24 driver from the xorg-edgers repo
- installed the CUDA toolkit 6.0 from nvidia
- downloaded the current ccminer source from github (once cbuchner1's, once djm34's)
- ./autogen.sh
- ./configure --with-cuda=/usr/local/cuda-6.0
- make -j8
- export LD_LIBRARY_PATH=/usr/local/cuda-6.0/lib64
- export PATH=$PATH:/usr/local/cuda-6.0/bin
- ccminer -a x11 --benchmark, with or without exiting X

"Unable to query CUDA driver version! Is an nVidia driver installed?"
:-(

/dev/nvidia* exists

Any ideas what I did wrong?

OK, I figured it out myself. Here is what is needed to get ccminer working on Linux once you have compiled it.
#1 Besides the nvidia module, the nvidia_uvm module needs to be loaded into the kernel. You can do this by adding the line 'nvidia_uvm' to /etc/modules (on Linux Mint), and nvidia_uvm will then be loaded automatically at boot. nvidia_uvm is a separate package on Linux Mint.
#2 The /dev/nvidia-uvm device needs to be created and made accessible to 'others'. You can do it this way:
Code:
sudo mknod -m 660 /dev/nvidia-uvm c 250 0   # create the UVM device node (major 250, minor 0)
sudo chmod o+rw /dev/nvidia-uvm             # allow non-root users to read/write it
After that you can run ccminer as a regular user, not root.

If you want a quick compilation, modify Makefile.am: change all compute_30/sm_30 strings into compute_50/sm_50 and get rid of the compute_35/sm_35 strings. Thanks to that, compilation takes about 15 minutes, but the resulting ccminer will only work on Maxwell cards with compute 5.0.

In my case I get about 2300 kH/s for the x11 algo, versus 2600 on Windows 8, but on Windows I also add 100 MHz to the GPU clock.
newbie
Activity: 14
Merit: 0
This might be a dumb question, but could anyone point me in the right direction on what we are currently mining and with what algo? I have 5x 750ti's; I was on scrypt, and now I'm on x11 just mining multipool, since I'm so out of the loop lately on what to mine, how to mine, and which miner to run. I'm currently using ccminer 1.0 or 1.1. Also, what driver is everyone using now, and is everyone still on x11? Just wondering what is most efficient for mining now. Thanks so much in advance!

My setup:
5x 750ti's
Windows 8.1 Pro
Driver Version 335.23
sr. member
Activity: 401
Merit: 250
Code:
var inner_hash = sha256(block_header);
var chain_index = floor(chain_height * (inner_hash_last_32_bits / max_uint));
var chain_hash = sha256(blockchain[chain_index].header + inner_hash);
var tran_index = floor(blockchain[chain_index].tran_count * (chain_hash_last_32_bits / max_uint));
var final_hash = sha256(blockchain[chain_index].transaction[tran_index] + chain_hash);

Doing this, your coin has no future.
Excluding ASICs/GPUs at launch is OK; that gives people an incentive to mine it. But in the long term you need to keep your network secure, not just attract people, and for that ASICs are still the best solution (low cost, low power consumption, little to no maintenance...).
GPUs and CPUs are versatile; they are not meant to do only one task forever.

I'd really love to read some research on why this would be.  Early on Bitcoin survived with just CPU and later GPU miners, and there were more full nodes in operation then than there are now.  I fear that the ASIC trend has led to increasing centralization of hashing power which is not good for the distributed nature of the network.  As for power, although an ASIC uses less power the arms race of people bringing warehouses full of them online can't be good for the overall power consumption.  Not sure if anyone has ever tried to graph total power consumption of Bitcoin over time (estimating the spread on the types of miners out there) but I think you would see a lot more power being used now than a year ago or a year before that.
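For readers who want to experiment with the PoBC idea, here is a minimal runnable sketch of the pseudocode quoted above, in Python. It is only an illustration: sha256 stands in for whatever hash a real coin would use, a toy in-memory list stands in for a real 20 GB blockchain, and the header/transactions field names are invented for the example.
Code:
import hashlib
from math import floor

MAX_UINT32 = 0xFFFFFFFF

def sha256(data):
    return hashlib.sha256(data).digest()

def last_32_bits(digest):
    # Interpret the final 4 bytes of a digest as an unsigned 32-bit integer.
    return int.from_bytes(digest[-4:], "big")

def pobc_hash(block_header, blockchain):
    inner_hash = sha256(block_header)
    # The last 32 bits of the inner hash pick a pseudo-random block:
    # e.g. 0x80000000 / 0xFFFFFFFF over a 1000-block chain lands near index 500.
    chain_index = floor(len(blockchain) * (last_32_bits(inner_hash) / MAX_UINT32))
    chain_index = min(chain_index, len(blockchain) - 1)  # clamp when the ratio hits 1.0
    block = blockchain[chain_index]
    chain_hash = sha256(block["header"] + inner_hash)
    # The chain hash then picks one transaction within that block.
    tran_index = floor(len(block["transactions"]) * (last_32_bits(chain_hash) / MAX_UINT32))
    tran_index = min(tran_index, len(block["transactions"]) - 1)
    return sha256(block["transactions"][tran_index] + chain_hash)

# Toy chain: 1000 fake blocks with 10 fake transactions each.
chain = [{"header": b"header-%d" % i,
          "transactions": [b"tx-%d-%d" % (i, j) for j in range(10)]}
         for i in range(1000)]
print(pobc_hash(b"candidate block header", chain).hex())
Note that every candidate header walks a different path through the chain, which is exactly the broadcast-vs-random-access property being argued about in the surrounding posts.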
sr. member
Activity: 401
Merit: 250
OK, did you look at the pseudocode I put here? Not sure how broadcasting will work when every calculation is going to require a different set of records from the blockchain.

Code:
var inner_hash = sha256(block_header);
var chain_index = floor(chain_height * (inner_hash_last_32_bits / max_uint));
var chain_hash = sha256(blockchain[chain_index].header + inner_hash);
var tran_index = floor(blockchain[chain_index].tran_count * (chain_hash_last_32_bits / max_uint));
var final_hash = sha256(blockchain[chain_index].transaction[tran_index] + chain_hash);

No, I hadn't seen it - did you post it in this thread?

Hm... at first glance, you're correct, it won't work.

The link was in my signature but I should have probably linked directly to it much earlier in this chain.
legendary
Activity: 1154
Merit: 1001
[...] Anyway it would scare off every miner unless new mini-blockchain technology is implemented. [...]

Perhaps like so?  Wink
https://bitcointalksearch.org/topic/annxcn-cryptonite-1st-mini-blockchain-coin-m7-pow-no-premine-713538

Cheers,
~ Myagui
legendary
Activity: 1400
Merit: 1050

OMG, that is one of the worst "new" implementations yet.

DOOMED before launch.

I'm beyond sick of people just continuing to plug in new or modified algorithms to try and buy a little time for 'their' CPU miner to instamine 'their' coin before the GPU miners can catch up.  I really want to see some new concepts used in block generation.  Still wish someone would try out my PoBC concept.  I'd be really curious if it is even possible to develop a GPU miner which can see an entire blockchain.
Why do you want to see the entire blockchain? (There is no point in that.)

To solve two problems.  

First, require every miner to have a copy of the blockchain which effectively means running a full node.  There have been issues on the Bitcoin network with a declining number of full nodes being operated.

Next, make it harder (if not impossible) to implement an ASIC for mining as a large blockchain will be difficult for the ASIC to access with any speed.

CPU mining will have the easiest time, as the CPU can access the blockchain from disk, or, on a machine with lots of RAM, via direct memory access, which is faster. GPU mining should be possible but will be constrained by memory bandwidth to the host system. An ASIC will be really hamstrung, as it would have to either be an internal card in a machine or have a high-speed link to a machine holding the blockchain.

Nooooope. Nope nope nope nope nope.
You only need ONE copy of the chain, see. So an ASIC just needs one large bank of high-speed memory to run your PoW on 1 billion custom hardware chips. Game over, thanks for playing.

You are going to have one heck of a memory bottleneck with 1 billion chips trying to talk to a single bank of memory. That is a bunch of chips all having to pull a header and a transaction from some 20 GB of data (the size of the current Bitcoin blockchain) for every hash calculation.

No you won't - it's read once and broadcast to all threads. If you sync them like CUDA warps, you'll be in business.

OK, did you look at the pseudocode I put here? Not sure how broadcasting will work when every calculation is going to require a different set of records from the blockchain.

Code:
var inner_hash = sha256(block_header);
var chain_index = floor(chain_height * (inner_hash_last_32_bits / max_uint));
var chain_hash = sha256(blockchain[chain_index].header + inner_hash);
var tran_index = floor(blockchain[chain_index].tran_count * (chain_hash_last_32_bits / max_uint));
var final_hash = sha256(blockchain[chain_index].transaction[tran_index] + chain_hash);

Doing this, your coin has no future.
Excluding ASICs/GPUs at launch is OK; that gives people an incentive to mine it. But in the long term you need to keep your network secure, not just attract people, and for that ASICs are still the best solution (low cost, low power consumption, little to no maintenance...).
GPUs and CPUs are versatile; they are not meant to do only one task forever.
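To put rough numbers on the memory-bottleneck argument in this exchange, here is a back-of-the-envelope sketch in Python. The byte counts are illustrative assumptions (an 80-byte Bitcoin-style header plus a ~300-byte average transaction fetched per hash), not protocol constants.
Code:
# Back-of-the-envelope: random-read bandwidth demanded per unit of hash rate.
# Byte counts are illustrative assumptions, not protocol constants.
HEADER_BYTES = 80    # a Bitcoin-style block header
AVG_TX_BYTES = 300   # assumed average transaction size
BYTES_PER_HASH = HEADER_BYTES + AVG_TX_BYTES

for hashrate in (1e6, 1e9, 1e12):  # 1 MH/s, 1 GH/s, 1 TH/s
    gb_per_s = hashrate * BYTES_PER_HASH / 1e9
    print(f"{hashrate:.0e} H/s needs {gb_per_s:,.1f} GB/s of random reads")
Under those assumptions, even 1 GH/s already implies roughly 380 GB/s of reads scattered across a ~20 GB data set; whether broadcast tricks or caches could absorb that is exactly what the posts above are arguing about.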
hero member
Activity: 938
Merit: 1000

OMG, that is one of the worst "new" implementations yet.

DOOMED before launch.

I'm beyond sick of people just continuing to plug in new or modified algorithms to try and buy a little time for 'their' CPU miner to instamine 'their' coin before the GPU miners can catch up.  I really want to see some new concepts used in block generation.  Still wish someone would try out my PoBC concept.  I'd be really curious if it is even possible to develop a GPU miner which can see an entire blockchain.
Why do you want to see the entire blockchain? (There is no point in that.)

To solve two problems. 

First, require every miner to have a copy of the blockchain which effectively means running a full node.  There have been issues on the Bitcoin network with a declining number of full nodes being operated.

Next, make it harder (if not impossible) to implement an ASIC for mining as a large blockchain will be difficult for the ASIC to access with any speed.

CPU mining will have the easiest time, as the CPU can access the blockchain from disk, or, on a machine with lots of RAM, via direct memory access, which is faster. GPU mining should be possible but will be constrained by memory bandwidth to the host system. An ASIC will be really hamstrung, as it would have to either be an internal card in a machine or have a high-speed link to a machine holding the blockchain.

Miners need to connect to a node in order to mine anyway (a pool node or their own node), so the first problem does not actually need to be solved. But the second sounds interesting. Anyway, it would scare off every miner unless new mini-blockchain technology is implemented.
full member
Activity: 168
Merit: 100
Depends on how the algo is written.

Say it was something like x11 or x13 (whatever), where after you generate the current round for each algo, you take the resulting hash and, with some function that looks at it, go out and pull a "random" block based on that hash. The block info just pulled (pseudo-random) is then fed into the next algo, etc...

So in some way you use the current hash to figure out which block needs to be looked up.

That would keep it from being able to "distribute" the same info via threads to all procs/cores/whatever.

Carlo
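A hedged sketch of what Carlo describes: in a real x11/x13 the rounds would be eleven-plus different hash functions (blake, bmw, groestl, ...), but here plain sha256 rounds stand in for them, and a toy list stands in for the blockchain. The point is only that each round's output determines the next lookup, so the fetches are serially dependent and cannot be prefetched or broadcast ahead of time.
Code:
import hashlib

def chained_lookup_hash(block_header, blockchain, rounds=11):
    # Round 0: hash the candidate header.
    state = hashlib.sha256(block_header).digest()
    for _ in range(rounds):
        # The current state picks a pseudo-random block; which one cannot
        # be known until the previous round has finished.
        index = int.from_bytes(state[-4:], "big") % len(blockchain)
        # Mix the fetched block's data into the next round's input.
        state = hashlib.sha256(blockchain[index] + state).digest()
    return state

# Toy usage: 1000 fake block payloads.
chain = [b"block-%d-data" % i for i in range(1000)]
print(chained_lookup_hash(b"candidate header", chain).hex())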
sr. member
Activity: 401
Merit: 250

OMG, that is one of the worst "new" implementations yet.

DOOMED before launch.

I'm beyond sick of people just continuing to plug in new or modified algorithms to try and buy a little time for 'their' CPU miner to instamine 'their' coin before the GPU miners can catch up.  I really want to see some new concepts used in block generation.  Still wish someone would try out my PoBC concept.  I'd be really curious if it is even possible to develop a GPU miner which can see an entire blockchain.
Why do you want to see the entire blockchain? (There is no point in that.)

To solve two problems.  

First, require every miner to have a copy of the blockchain which effectively means running a full node.  There have been issues on the Bitcoin network with a declining number of full nodes being operated.

Next, make it harder (if not impossible) to implement an ASIC for mining as a large blockchain will be difficult for the ASIC to access with any speed.

CPU mining will have the easiest time, as the CPU can access the blockchain from disk, or, on a machine with lots of RAM, via direct memory access, which is faster. GPU mining should be possible but will be constrained by memory bandwidth to the host system. An ASIC will be really hamstrung, as it would have to either be an internal card in a machine or have a high-speed link to a machine holding the blockchain.

Nooooope. Nope nope nope nope nope.
You only need ONE copy of the chain, see. So an ASIC just needs one large bank of high-speed memory to run your PoW on 1 billion custom hardware chips. Game over, thanks for playing.

You are going to have one heck of a memory bottleneck with 1 billion chips trying to talk to a single bank of memory. That is a bunch of chips all having to pull a header and a transaction from some 20 GB of data (the size of the current Bitcoin blockchain) for every hash calculation.

No you won't - it's read once and broadcast to all threads. If you sync them like CUDA warps, you'll be in business.

OK, did you look at the pseudocode I put here? Not sure how broadcasting will work when every calculation is going to require a different set of records from the blockchain.

Code:
var inner_hash = sha256(block_header);
var chain_index = floor(chain_height * (inner_hash_last_32_bits / max_uint));
var chain_hash = sha256(blockchain[chain_index].header + inner_hash);
var tran_index = floor(blockchain[chain_index].tran_count * (chain_hash_last_32_bits / max_uint));
var final_hash = sha256(blockchain[chain_index].transaction[tran_index] + chain_hash);
sr. member
Activity: 401
Merit: 250

OMG, that is one of the worst "new" implementations yet.

DOOMED before launch.

I'm beyond sick of people just continuing to plug in new or modified algorithms to try and buy a little time for 'their' CPU miner to instamine 'their' coin before the GPU miners can catch up.  I really want to see some new concepts used in block generation.  Still wish someone would try out my PoBC concept.  I'd be really curious if it is even possible to develop a GPU miner which can see an entire blockchain.
Why do you want to see the entire blockchain? (There is no point in that.)

To solve two problems. 

First, require every miner to have a copy of the blockchain which effectively means running a full node.  There have been issues on the Bitcoin network with a declining number of full nodes being operated.

Next, make it harder (if not impossible) to implement an ASIC for mining as a large blockchain will be difficult for the ASIC to access with any speed.

CPU mining will have the easiest time, as the CPU can access the blockchain from disk, or, on a machine with lots of RAM, via direct memory access, which is faster. GPU mining should be possible but will be constrained by memory bandwidth to the host system. An ASIC will be really hamstrung, as it would have to either be an internal card in a machine or have a high-speed link to a machine holding the blockchain.

Nooooope. Nope nope nope nope nope.
You only need ONE copy of the chain, see. So an ASIC just needs one large bank of high-speed memory to run your PoW on 1 billion custom hardware chips. Game over, thanks for playing.

You are going to have one heck of a memory bottleneck with 1 billion chips trying to talk to a single bank of memory. That is a bunch of chips all having to pull a header and a transaction from some 20 GB of data (the size of the current Bitcoin blockchain) for every hash calculation.
sr. member
Activity: 401
Merit: 250

OMG, that is one of the worst "new" implementations yet.

DOOMED before launch.

I'm beyond sick of people just continuing to plug in new or modified algorithms to try and buy a little time for 'their' CPU miner to instamine 'their' coin before the GPU miners can catch up.  I really want to see some new concepts used in block generation.  Still wish someone would try out my PoBC concept.  I'd be really curious if it is even possible to develop a GPU miner which can see an entire blockchain.
Why do you want to see the entire blockchain? (There is no point in that.)

To solve two problems. 

First, require every miner to have a copy of the blockchain which effectively means running a full node.  There have been issues on the Bitcoin network with a declining number of full nodes being operated.

Next, make it harder (if not impossible) to implement an ASIC for mining as a large blockchain will be difficult for the ASIC to access with any speed.

CPU mining will have the easiest time, as the CPU can access the blockchain from disk, or, on a machine with lots of RAM, via direct memory access, which is faster. GPU mining should be possible but will be constrained by memory bandwidth to the host system. An ASIC will be really hamstrung, as it would have to either be an internal card in a machine or have a high-speed link to a machine holding the blockchain.
legendary
Activity: 1400
Merit: 1050

OMG, that is one of the worst "new" implementations yet.

DOOMED before launch.

I'm beyond sick of people just continuing to plug in new or modified algorithms to try and buy a little time for 'their' CPU miner to instamine 'their' coin before the GPU miners can catch up.  I really want to see some new concepts used in block generation.  Still wish someone would try out my PoBC concept.  I'd be really curious if it is even possible to develop a GPU miner which can see an entire blockchain.
Why do you want to see the entire blockchain? (There is no point in that.)
full member
Activity: 263
Merit: 100
v2.0.1 "Split Screen" (2014-08-01) Windows binary, alpha version

Rewritten to work with all ccminer forks where the per-GPU console output looks like this:
[2014-08-01 06:38:56] GPU #2: GeForce GTX 750 Ti, 291.38 H/s

Install: copy it into your miner's exe directory

Launch:
splitscreen.exe miner "arguments" [--height xx]

miner - name of miner exe
arguments - string of arguments for miner
--height - number of rows in console window (default: 30)

Do NOT use the -q switch in the argument string!

Example:
splitscreen.exe ccminer.exe "-a cryptonight -o stratum+tcp://xmr1.crypto-pool.fr:7777 -u 4AbB64kKv2rQuw3M3iSHhKGgfnkETg3mMWuDLACeJAnqKwhb6jZxs1dFXvRUgtTPj2Mt6EM6d3FGZLAYyVuh6WKy7ohqVYY -p x"

splitscreen.exe nvminer.exe "-d 0,1,2 -a jackpot -o stratum+tcp://www.hashharder.com:9958 -u JfQVwQBdbqeL2Vo1zNrPgD4AGWoqjzExSA -p x" --height 50

https://github.com/zelante/ccminer/releases/download/v2.0.1/splitscreen_v2.0.1_alfa.zip

Very good one (works separately from the miner).
A little suggestion from me: would it be OK to have the pool (-o) and user (-u) displayed within the upper part of your splitscreen?

Edit- I got some error and it only lets me close the splitscreen.
(splitscreen with nvminer 7b)


Try this version: https://github.com/zelante/ccminer/releases/tag/v2.0.2
I'll add pool and user later...
legendary
Activity: 1400
Merit: 1050
Is there any way to run ccminer with X11, X13 or NIST5 on a Geforce 570?  It only works on my 680.
You need to use a pre-killer-groestl version, as the latest one isn't compatible with compute 2.0.
sr. member
Activity: 401
Merit: 250

OMG, that is one of the worst "new" implementations yet.

DOOMED before launch.

I'm beyond sick of people just continuing to plug in new or modified algorithms to try and buy a little time for 'their' CPU miner to instamine 'their' coin before the GPU miners can catch up.  I really want to see some new concepts used in block generation.  Still wish someone would try out my PoBC concept.  I'd be really curious if it is even possible to develop a GPU miner which can see an entire blockchain.
sr. member
Activity: 266
Merit: 256
Is there any way to run ccminer with X11, X13 or NIST5 on a Geforce 570?  It only works on my 680.
member
Activity: 112
Merit: 10
Guys, I want to urge you all to donate to the people who develop for us on ccminer and cudaminer... the people who post new source for the new algos and the various utilities, the profit calcs, the splitscreen (zelante), nvminer, ccminer... you know what I am talking about.

This is crucial, because all those people take time out of their own personal lives to help us...
If no donations and help are given to them, I predict a bleak GPU-mining future... and I am not talking only about CUDA mining; the same also applies to the AMD GPU people.
member
Activity: 81
Merit: 10
Splitscreen working as advertised, even when using the /taskkill .bat

Edit:  well, almost.  It displays properly but crashes with an error when /taskkill is used.  It does successfully re-launch the miner, but leaves the error window open as well.
hero member
Activity: 809
Merit: 501
I have one of the EVGA 740 4GB cards, and I'm trying to mine YAC (scrypt-chacha, NFactor 15). I can't get past 630 hash/s. Anyone have suggestions on settings? Here is what I have:

-q -a scrypt-jane -o stratum+tcp://yac.m-s-t.org:3333 -O worker:password -H 2 -i 0 -l auto -L 2 -C 2 -m 0