Topic: [ANN] ccminer 2.3 - opensource - GPL (tpruvot) - page 34. (Read 500113 times)

legendary
Activity: 2716
Merit: 1094
Black Belt Developer
How do I split GPUs to mine different pools?

Run multiple ccminer instances with -d.
Maybe you know how to change the share difficulty on pools like Suprnova?

No way.
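For reference, a minimal sketch of the -d split (the algo, pool URLs and wallets here are placeholders, not from this thread): run one instance per card, each with its own -d and -o:

ccminer -d 0 -a x17 -o stratum+tcp://poolA.example:3737 -u WALLET_A.rig1 -p x
ccminer -d 1 -a x17 -o stratum+tcp://poolB.example:3737 -u WALLET_B.rig1 -p x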
newbie
Activity: 104
Merit: 0
How do I split GPUs to mine different pools?

Run multiple ccminer instances with -d.
Maybe you know how to change the share difficulty on pools like Suprnova?
newbie
Activity: 104
Merit: 0
How do I split GPUs to mine different pools?

Run multiple ccminer instances with -d.
Thank you! It works!!!
legendary
Activity: 2716
Merit: 1094
Black Belt Developer
How do I split GPUs to mine different pools?

Run multiple ccminer instances with -d.
newbie
Activity: 104
Merit: 0
How do I split GPUs to mine different pools?
legendary
Activity: 3248
Merit: 1070
I'm getting an "unsupported extranonce size of 16" error when mining Equihash. How do I solve this?

...OK, it's a pool-side issue...
newbie
Activity: 39
Merit: 0
Bitcore @ yiimp seems to have some problems.

https://image.ibb.co/dHHQnb/bcore.png
Yeah, I think it's sp_'s dev fee. :)
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
Bitcore @ yiimp seems to have some problems.

newbie
Activity: 16
Merit: 0
Hi,

I am using ccminer for Electroneum. I want to use the electroneum.hashparty.io mining server, but I can only add one mining rig per ETN address.

I already added all my rigs on suprnova.cc, but I think Suprnova is stealing from me. I want to shift all my rigs to hashparty.io now.

How can I add all my rigs on hashparty.io?

Thanks.
newbie
Activity: 5
Merit: 0
Schap, please look again -- I did set the intensity to 14.99, but two lines from the bottom of my saved text it gets reset to 12... not by me; it won't stay at 14 after I set it there.
newbie
Activity: 14
Merit: 0
I am a micro miner, with one GPU card in my desktop computer, and no programmer, just an "end consumer". I am using the precompiled Windows 64-bit version of ccminer 2.2.2 from GitHub. The miner works, but I am unable to increase the intensity level when scrypt mining. No matter what intensity I set on the command line with the -i option, and no matter where on the command line I put it, even though it initially assigns more threads, it then reverts to a different (default?) intensity a few lines down. My GPU doesn't even warm up, and the hash rate is low. Rather than overclocking my GPU, I would prefer to increase the intensity so that it works harder. How do I get it to actually run with the intensity settings I want? This is what I see:


D:\ccminer-x64-2.2.2-cuda9>ccminer-x64 -a scrypt -o stratum+tcp://us.multipool.us:XXX -u theXXX.1 -p XXX -i 14.99

*** ccminer 2.2.2 for nVidia GPUs by tpruvot@github ***
Built with VC++ 2013 and nVidia CUDA SDK 9.0 64-bits
Originally based on Christian Buchner and Christian H. project
Include some kernels from alexis78, djm34, djEzo, tsiv and krnlx.

BTC donation address: 1AJdfCpLWPNoAMDfHF1wD5y8VgKSSTHxPo (tpruvot)

[2017-11-08 06:34:23] Adding 16128 threads to intensity 14, 32512 cuda threads
[2017-11-08 06:34:23] Starting on stratum+tcp://us.multipool.us:XXX
[2017-11-08 06:34:23] NVML GPU monitoring enabled.
[2017-11-08 06:34:23] NVAPI GPU monitoring enabled.
[2017-11-08 06:34:23] 1 miner thread started, using 'scrypt' algorithm.
[2017-11-08 06:34:24] Stratum difficulty set to 131072 (2.00000)
[2017-11-08 06:34:24] scrypt block 1631827, diff 10898.083
[2017-11-08 06:34:25] GPU #0: 32 hashes / 4.0 MB per warp.
[2017-11-08 06:34:26] GPU #0: Performing auto-tuning, please wait 2 minutes...
[2017-11-08 06:34:26] GPU #0: maximum total warps (BxW): 1484
[2017-11-08 06:34:45] Stratum difficulty set to 256 (0.00391)
[2017-11-08 06:34:45] scrypt block 1631828, diff 8720.082
[2017-11-08 06:35:09] scrypt block 1631829, diff 8720.082
[2017-11-08 06:36:47] GPU #0: 589791.16 hash/s with configuration T12x11
[2017-11-08 06:39:34] scrypt block 1631830, diff 8720.082
[2017-11-08 06:39:34] GPU #0: using launch configuration T12x11
[2017-11-08 06:39:34] scrypt block 1631831, diff 8720.082
[2017-11-08 06:39:34] GPU #0: Intensity set to 12.0313, 4224 cuda threads
[2017-11-08 06:39:34] scrypt factor set to 9 (1024)

I XXX'd out the actual pool, worker name, and password, but you can see where I attempted to set the intensity to 14.99 on the command line and was assigned extra threads; then, lower down, the intensity was changed to 12.0313 with only a fraction of the threads. I have also tried setting the T levels myself in advance so that the auto-tuning does not occur, with the same result. Any help would be appreciated. Thank you in advance.

You just change the value after -i.

You have set it to 14, so you're running at an intensity of 14.
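If the scrypt auto-tuner keeps overriding it, one thing to try (a sketch only; the T14x24 value is just an example, and --no-autotune is assumed to be available in this build) is pinning the launch configuration yourself:

ccminer-x64 -a scrypt -o stratum+tcp://us.multipool.us:XXX -u theXXX.1 -p XXX -l T14x24 --no-autotune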
newbie
Activity: 5
Merit: 0
I am a micro miner, with one GPU card in my desktop computer, and no programmer, just an "end consumer". I am using the precompiled Windows 64-bit version of ccminer 2.2.2 from GitHub. The miner works, but I am unable to increase the intensity level when scrypt mining. No matter what intensity I set on the command line with the -i option, and no matter where on the command line I put it, even though it initially assigns more threads, it then reverts to a different (default?) intensity a few lines down. My GPU doesn't even warm up, and the hash rate is low. Rather than overclocking my GPU, I would prefer to increase the intensity so that it works harder. How do I get it to actually run with the intensity settings I want? This is what I see:


D:\ccminer-x64-2.2.2-cuda9>ccminer-x64 -a scrypt -o stratum+tcp://us.multipool.us:XXX -u theXXX.1 -p XXX -i 14.99

*** ccminer 2.2.2 for nVidia GPUs by tpruvot@github ***
Built with VC++ 2013 and nVidia CUDA SDK 9.0 64-bits
Originally based on Christian Buchner and Christian H. project
Include some kernels from alexis78, djm34, djEzo, tsiv and krnlx.

BTC donation address: 1AJdfCpLWPNoAMDfHF1wD5y8VgKSSTHxPo (tpruvot)

[2017-11-08 06:34:23] Adding 16128 threads to intensity 14, 32512 cuda threads
[2017-11-08 06:34:23] Starting on stratum+tcp://us.multipool.us:XXX
[2017-11-08 06:34:23] NVML GPU monitoring enabled.
[2017-11-08 06:34:23] NVAPI GPU monitoring enabled.
[2017-11-08 06:34:23] 1 miner thread started, using 'scrypt' algorithm.
[2017-11-08 06:34:24] Stratum difficulty set to 131072 (2.00000)
[2017-11-08 06:34:24] scrypt block 1631827, diff 10898.083
[2017-11-08 06:34:25] GPU #0: 32 hashes / 4.0 MB per warp.
[2017-11-08 06:34:26] GPU #0: Performing auto-tuning, please wait 2 minutes...
[2017-11-08 06:34:26] GPU #0: maximum total warps (BxW): 1484
[2017-11-08 06:34:45] Stratum difficulty set to 256 (0.00391)
[2017-11-08 06:34:45] scrypt block 1631828, diff 8720.082
[2017-11-08 06:35:09] scrypt block 1631829, diff 8720.082
[2017-11-08 06:36:47] GPU #0: 589791.16 hash/s with configuration T12x11
[2017-11-08 06:39:34] scrypt block 1631830, diff 8720.082
[2017-11-08 06:39:34] GPU #0: using launch configuration T12x11
[2017-11-08 06:39:34] scrypt block 1631831, diff 8720.082
[2017-11-08 06:39:34] GPU #0: Intensity set to 12.0313, 4224 cuda threads
[2017-11-08 06:39:34] scrypt factor set to 9 (1024)

I XXX'd out the actual pool, worker name, and password, but you can see where I attempted to set the intensity to 14.99 on the command line and was assigned extra threads; then, lower down, the intensity was changed to 12.0313 with only a fraction of the threads. I have also tried setting the T levels myself in advance so that the auto-tuning does not occur, with the same result. Any help would be appreciated. Thank you in advance.
newbie
Activity: 14
Merit: 0
CUDA 9 does not support compute 2.

What about CUDA 8? I have 8 installed.
legendary
Activity: 2716
Merit: 1094
Black Belt Developer
CUDA 9 does not support compute 2.
legendary
Activity: 1470
Merit: 1114
Guys, I'm having a tough time getting this to run with my Nvidia Tesla 2090; can anyone help, please? I downloaded the CUDA 8 version off git, and this is the error I am getting:
[2017-11-07 20:02:04] Starting on stratum+tcp://yiimp.eu:3737
[2017-11-07 20:02:04] NVML GPU monitoring enabled.
[2017-11-07 20:02:04] 1 miner thread started, using 'x17' algorithm.
[2017-11-07 20:02:20] Stratum difficulty set to 0.016
[2017-11-07 20:02:20] x17 block 1630213, diff 1136.048
[2017-11-07 20:02:20] GPU #0: Intensity set to 20, 1048576 cuda threads
[2017-11-07 20:02:20] GPU #0: aes_cpu_init invalid device symbol
[2017-11-07 20:02:21] GPU #0: aes_cpu_init invalid device symbol
Cuda error in func 'x13_hamsi512_cpu_init' at line 685 : invalid device symbol.

I also changed the Makefile.am to match the correct compute capability, which should be nvcc_ARCH = -gencode=arch=compute_20,code="sm_20,compute_20"

thanks for any help


This model is no longer listed in the nVidia CUDA compute list - https://developer.nvidia.com/cuda-gpus/

It's listed on the legacy page as compute 2.0.

https://developer.nvidia.com/cuda-legacy-gpus

Since it's not the compute version, perhaps it's the drivers or the CUDA version. As a legacy product it's probably no longer supported in the latest drivers and CUDA (CUDA 8 was the last toolkit to support compute 2.x). You need to go back in time with all the software.
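One quick way to see both the compute capability and the driver/runtime versions the card reports is the deviceQuery CUDA sample (assuming the samples are installed; the path varies by install):

cd /usr/local/cuda/samples/1_Utilities/deviceQuery
make
./deviceQuery | grep -E "CUDA Capability|Driver Version"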

newbie
Activity: 14
Merit: 0
Guys, I'm having a tough time getting this to run with my Nvidia Tesla 2090; can anyone help, please? I downloaded the CUDA 8 version off git, and this is the error I am getting:
[2017-11-07 20:02:04] Starting on stratum+tcp://yiimp.eu:3737
[2017-11-07 20:02:04] NVML GPU monitoring enabled.
[2017-11-07 20:02:04] 1 miner thread started, using 'x17' algorithm.
[2017-11-07 20:02:20] Stratum difficulty set to 0.016
[2017-11-07 20:02:20] x17 block 1630213, diff 1136.048
[2017-11-07 20:02:20] GPU #0: Intensity set to 20, 1048576 cuda threads
[2017-11-07 20:02:20] GPU #0: aes_cpu_init invalid device symbol
[2017-11-07 20:02:21] GPU #0: aes_cpu_init invalid device symbol
Cuda error in func 'x13_hamsi512_cpu_init' at line 685 : invalid device symbol.


I also changed the Makefile.am to match the correct compute capability, which should be nvcc_ARCH = -gencode=arch=compute_20,code="sm_20,compute_20"

thanks for any help


This model is no longer listed in the nVidia CUDA compute list - https://developer.nvidia.com/cuda-gpus/

If it is the next version up from the 2075, then you will find that, at compute 2.0, it is too old for even ccminer-tpruvot to compile for. Have you tried adding this compute level in the Makefile.am, then compiling?

tpruvot, you would know better whether this compute level is supported or not.

#crysx

Here is what part of the Makefile.am looks like:

if HAVE_WINDOWS
ccminer_SOURCES += compat/winansi.c
endif

ccminer_LDFLAGS  = $(PTHREAD_FLAGS) @CUDA_LDFLAGS@
ccminer_LDADD    = @LIBCURL@ @JANSSON_LIBS@ @PTHREAD_LIBS@ @WS2_LIBS@ @CUDA_LIBS@ @OPENMP_CFLAGS@ @LIBS@ $(nvml_libs)
ccminer_CPPFLAGS = @LIBCURL_CPPFLAGS@ @OPENMP_CFLAGS@ $(CPPFLAGS) $(PTHREAD_FLAGS) -fno-strict-aliasing $(JANSSON_INCLUDES) $(DEF_INCLUDES) $(nvml_defs)

#nvcc_ARCH  = -gencode=arch=compute_50,code=\"sm_50,compute_50\"
nvcc_ARCH = -gencode=arch=compute_20,code=\"sm_20,compute_20\"
#nvcc_ARCH += -gencode=arch=compute_61,code=\"sm_61,compute_61\"
#nvcc_ARCH += -gencode=arch=compute_52,code=\"sm_52,compute_52\"
#nvcc_ARCH += -gencode=arch=compute_35,code=\"sm_35,compute_35\"
#nvcc_ARCH += -gencode=arch=compute_30,code=\"sm_30,compute_30\"
#nvcc_ARCH += -gencode=arch=compute_20,code=\"sm_21,compute_20\"

nvcc_FLAGS = $(nvcc_ARCH) @CUDA_INCLUDES@ -I. @CUDA_CFLAGS@
nvcc_FLAGS += $(JANSSON_INCLUDES) --ptxas-options="-v"
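A side note, and only a guess at the failure mode: in the stock file the quotes around the code list are backslash-escaped, so an edited line should keep the backslashes, e.g.:

nvcc_ARCH = -gencode=arch=compute_20,code=\"sm_20,compute_20\"

Without them, the shell strips the quotes before nvcc ever sees the argument.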
newbie
Activity: 14
Merit: 0
Yes, as I said above, I did add nvcc_ARCH = -gencode=arch=compute_20,code="sm_20,compute_20"
legendary
Activity: 2870
Merit: 1091
--- ChainWorks Industries ---
Guys, I'm having a tough time getting this to run with my Nvidia Tesla 2090; can anyone help, please? I downloaded the CUDA 8 version off git, and this is the error I am getting:
[2017-11-07 20:02:04] Starting on stratum+tcp://yiimp.eu:3737
[2017-11-07 20:02:04] NVML GPU monitoring enabled.
[2017-11-07 20:02:04] 1 miner thread started, using 'x17' algorithm.
[2017-11-07 20:02:20] Stratum difficulty set to 0.016
[2017-11-07 20:02:20] x17 block 1630213, diff 1136.048
[2017-11-07 20:02:20] GPU #0: Intensity set to 20, 1048576 cuda threads
[2017-11-07 20:02:20] GPU #0: aes_cpu_init invalid device symbol
[2017-11-07 20:02:21] GPU #0: aes_cpu_init invalid device symbol
Cuda error in func 'x13_hamsi512_cpu_init' at line 685 : invalid device symbol.


I also changed the Makefile.am to match the correct compute capability, which should be nvcc_ARCH = -gencode=arch=compute_20,code="sm_20,compute_20"

thanks for any help


This model is no longer listed in the nVidia CUDA compute list - https://developer.nvidia.com/cuda-gpus/

If it is the next version up from the 2075, then you will find that, at compute 2.0, it is too old for even ccminer-tpruvot to compile for. Have you tried adding this compute level in the Makefile.am, then compiling?

tpruvot, you would know better whether this compute level is supported or not.

#crysx
newbie
Activity: 14
Merit: 0
Guys, I'm having a tough time getting this to run with my Nvidia Tesla 2090; can anyone help, please? I downloaded the CUDA 8 version off git, and this is the error I am getting:
[2017-11-07 20:02:04] Starting on stratum+tcp://yiimp.eu:3737
[2017-11-07 20:02:04] NVML GPU monitoring enabled.
[2017-11-07 20:02:04] 1 miner thread started, using 'x17' algorithm.
[2017-11-07 20:02:20] Stratum difficulty set to 0.016
[2017-11-07 20:02:20] x17 block 1630213, diff 1136.048
[2017-11-07 20:02:20] GPU #0: Intensity set to 20, 1048576 cuda threads
[2017-11-07 20:02:20] GPU #0: aes_cpu_init invalid device symbol
[2017-11-07 20:02:21] GPU #0: aes_cpu_init invalid device symbol
Cuda error in func 'x13_hamsi512_cpu_init' at line 685 : invalid device symbol.


I also changed the Makefile.am to match the correct compute capability, which should be nvcc_ARCH = -gencode=arch=compute_20,code="sm_20,compute_20"

thanks for any help
newbie
Activity: 73
Merit: 0
Hi,

I'm using tpruvot's 2.2.2 under Ubuntu 16.04. I have suddenly found out that some algos are about 15-20% slower on AMD boards than on Intel boards.

Two rigs with the same cards: 4x Palit 1060 3 GB (Samsung memory) + 1x 1050 Ti. All rigs use PCIe splitters.

The rig using an H87 Intel mobo with a G1840 CPU produces around 96 kH/s on Lyra2v2, whereas the rig on a G43-970 AMD mobo with an Athlon II 245 CPU produces only around 80 kH/s. Overclock settings are the same on both rigs: mem +800, core +240. Drivers are 384.66, CUDA 8.0. I compiled ccminer on one rig and copied it over to the others.

The same goes for groestl, myr-gr, etc. However, neoscrypt produces around 2500 kH/s on all rigs, no matter the platform.

Any ideas why this could be happening? Would it make sense to compile on the AMD rigs individually (maybe there are some AMD CPU optimizations out there)? I've heard that ccminer is pretty hard on the CPU; could it be that the AMD CPU is not fast enough to keep up?

Thanks!!

P.S. There is no significant performance difference on Ethash (Claymore) or Equihash (EWBF or ZM).
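One way to test the CPU-bottleneck theory (standard Linux tools; this assumes a single ccminer instance per rig):

top -H -p $(pidof ccminer)

If one of ccminer's threads sits pinned near 100% of a core on the AMD rig but not on the Intel rig, the CPU is the likely culprit.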