
Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 580. (Read 3426975 times)

member
Activity: 61
Merit: 10
Try this
sudo apt-get install libcurl4-openssl-dev



32-bit: sudo ldconfig /usr/local/cuda/lib

64-bit: sudo ldconfig /usr/local/cuda/lib64
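Note the package name differs by package manager; a minimal sketch of picking the right libcurl dev package (the package names are the common ones for each distro family, so verify against your distro's repositories):

```shell
# Sketch: choose the libcurl development package for the local package
# manager before running ./configure. Package names are assumptions
# based on common distros; verify with your distro's docs.
pick_libcurl_pkg() {
    if command -v apt-get >/dev/null 2>&1; then
        echo "libcurl4-openssl-dev"   # Debian/Ubuntu
    elif command -v yum >/dev/null 2>&1; then
        echo "libcurl-devel"          # CentOS/RHEL/Fedora
    else
        echo "unknown"
    fi
}

# Then e.g.: sudo apt-get install "$(pick_libcurl_pkg)"
pick_libcurl_pkg
```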
newbie
Activity: 10
Merit: 0
Hi, I have a problem building the miner at the ./configure step. The error is below:




checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether to enable maintainer-specific portions of Makefiles... no
checking for style of include used by make... GNU
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking dependency style of gcc... gcc3
checking for gcc option to accept ISO C99... -std=gnu99
checking how to run the C preprocessor... gcc -std=gnu99 -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking whether gcc -std=gnu99 needs -traditional... no
checking whether gcc -std=gnu99 and cc understand -c and -o together... yes
checking dependency style of gcc -std=gnu99... gcc3
checking for ranlib... ranlib
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking dependency style of g++... gcc3
checking for gcc -std=gnu99 option to support OpenMP... -fopenmp
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking sys/endian.h usability... no
checking sys/endian.h presence... no
checking for sys/endian.h... no
checking sys/param.h usability... yes
checking sys/param.h presence... yes
checking for sys/param.h... yes
checking syslog.h usability... yes
checking syslog.h presence... yes
checking for syslog.h... yes
checking for sys/sysctl.h... yes
checking whether be32dec is declared... no
checking whether le32dec is declared... no
checking whether be32enc is declared... no
checking whether le32enc is declared... no
checking for size_t... yes
checking for working alloca.h... yes
checking for alloca... yes
checking for getopt_long... yes
checking whether we can compile AVX code... yes
checking whether we can compile XOP code... yes
checking whether we can compile AVX2 code... no
configure: WARNING: The assembler does not support the AVX2 instruction set.
checking for json_loads in -ljansson... no
checking for pthread_create in -lpthread... yes
checking for gawk... (cached) gawk
checking for curl-config... no
checking whether libcurl is usable... no
configure: error: Missing required libcurl >= 7.15.2
[root@BTCminingCYP CudaMiner]# sudo aptitude install libcurl4-gnutls-dev
sudo: aptitude: command not found
[root@BTCminingCYP CudaMiner]# sudo yum install libcurl4-gnutls-dev
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: www.gtlib.gatech.edu
 * extras: www.gtlib.gatech.edu
 * updates: www.gtlib.gatech.edu
Setting up Install Process
No package libcurl4-gnutls-dev available.
Error: Nothing to do
[root@BTCminingCYP CudaMiner]# ./autogen.sh
configure.ac:103: error: possibly undefined macro: AC_MSG_ERROR
      If this token and others are legitimate, please use m4_pattern_allow.
      See the Autoconf documentation.
[root@BTCminingCYP CudaMiner]# aclocal;autoconf;automake;
[root@BTCminingCYP CudaMiner]# ./autogen.sh
[root@BTCminingCYP CudaMiner]# ./configure
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether to enable maintainer-specific portions of Makefiles... no
checking for style of include used by make... GNU
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking dependency style of gcc... gcc3
checking for gcc option to accept ISO C99... -std=gnu99
checking how to run the C preprocessor... gcc -std=gnu99 -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking whether gcc -std=gnu99 needs -traditional... no
checking whether gcc -std=gnu99 and cc understand -c and -o together... yes
checking dependency style of gcc -std=gnu99... gcc3
checking for ranlib... ranlib
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking dependency style of g++... gcc3
checking for gcc -std=gnu99 option to support OpenMP... -fopenmp
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking sys/endian.h usability... no
checking sys/endian.h presence... no
checking for sys/endian.h... no
checking sys/param.h usability... yes
checking sys/param.h presence... yes
checking for sys/param.h... yes
checking syslog.h usability... yes
checking syslog.h presence... yes
checking for syslog.h... yes
checking for sys/sysctl.h... yes
checking whether be32dec is declared... no
checking whether le32dec is declared... no
checking whether be32enc is declared... no
checking whether le32enc is declared... no
checking for working alloca.h... yes
checking for alloca... yes
checking for getopt_long... yes
checking whether we can compile AVX code... yes
checking whether we can compile XOP code... yes
checking whether we can compile AVX2 code... no
configure: WARNING: The assembler does not support the AVX2 instruction set.
checking for json_loads in -ljansson... no
checking for pthread_create in -lpthread... yes
checking for SSL_library_init in -lssl... yes
checking for EVP_DigestFinal_ex in -lcrypto... yes
./configure: line 7268: syntax error near unexpected token `,'
./configure: line 7268: `LIBCURL_CHECK_CONFIG(, 7.15.2, ,'




Could anyone tell me what's wrong and how I can fix it? Please don't ignore my message; I really need help because I'm new to this miner. Thanks to those taking the time to read this, and especially to anyone willing to help me out.
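For what it's worth, the `LIBCURL_CHECK_CONFIG(, 7.15.2, ,` syntax error above is the classic symptom of an unexpanded autoconf macro: aclocal couldn't find libcurl.m4 (which ships with the libcurl development package, typically `libcurl-devel` on a yum system like this one), so the macro name was copied literally into `configure` and the shell choked on it. A small sketch that detects the symptom (the `check_unexpanded` helper is hypothetical, written for this post):

```shell
# If aclocal cannot find libcurl.m4, LIBCURL_CHECK_CONFIG is copied into
# ./configure verbatim and the shell fails with a syntax error.
# This helper flags that symptom in a generated configure script.
check_unexpanded() {
    # prints the offending line(s), or "ok" if the macro was expanded
    grep -n 'LIBCURL_CHECK_CONFIG' "$1" || echo "ok: macro expanded"
}

# Likely fix on a yum system (package name assumed), then regenerate:
#   sudo yum install libcurl-devel
#   ./autogen.sh && ./configure
```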
member
Activity: 61
Merit: 10
Driver 337 Xubuntu 32bit no overclock
Code:
[2014-04-16 21:32:25] GPU #0: 337964.42 hash/s with configuration K112x2
[2014-04-16 21:32:25] GPU #0: using launch configuration K112x2
[2014-04-16 21:32:25] GPU #0: GeForce GTX 660 Ti, 13706 khash/s
[2014-04-16 21:32:30] DEBUG: job_id='78' extranonce2=00000000 ntime=534f3634
[2014-04-16 21:32:42] DEBUG: job_id='79' extranonce2=00000000 ntime=534f3641
[2014-04-16 21:32:42] Stratum detected new block
[2014-04-16 21:32:42] GPU #0: GeForce GTX 660 Ti, 310.02 khash/s
[2014-04-16 21:32:42] GPU #1: GeForce GTX 670, 334.33 khash/s
full member
Activity: 210
Merit: 100
It's probably already been posted before, but there's 600 pages, and I haven't been following this. 

cudaminer-2013-12-18 is hands-down the fastest for me.  With all of the later versions I only get about 100 khash, but on this version I get 160+.  I use an NVIDIA 650 Ti.
sr. member
Activity: 350
Merit: 250
On my Windows 7 install, I installed the cards one by one with a reboot in between. I found that even after the first card, Windows did not detect the cards automatically and did not install drivers.

I went into Device Manager and found the cards had been detected but nothing installed. So I simply right-clicked each one and told it to install drivers from the net. Worked perfectly.
full member
Activity: 238
Merit: 100
Medichain: The Medical Big-Data Platform
Something that I had to deal with last night when installing a 5th & 6th GPU in my system....

I fought for hours trying to get 6 cards all recognized and working, and didn't make much progress understanding the problem until I slept on it and came at it again this morning... so I wanted to share, in case it saves someone else a few hours of frustration in the future.

I am not positive of the 'cause' that was hanging up the process, but I was able to get it working and have a theory...

After spending hours testing, re-testing, installing single cards in different orders, installing and uninstalling many versions of the NVIDIA drivers, and getting ready to throw it all out the window, I came to the conclusion that the NVIDIA driver package itself gets in the way of installing the hardware.

So I ripped out all the drivers and devices I had installed, and let Windows do all the hardware detection and installation of the cards without using anything from NVIDIA.

This worked, and all 6 cards are mining away since this morning with no problems.

My theory evolved from my first build: with the 4 cards I started with, I put them all in at once, and when the system came up they were all detected at once and installed fine. Good dutiful me installed the NVIDIA package immediately afterward, and everything still ran fine... until I tried to add two new cards. Cards 5 & 6 would not be recognized no matter what I tried.

Once the NVIDIA package is installed, though, it tries to install its own driver when you add a new card, or may even keep the card from installing at all. Nothing worked until I ripped everything out, put all 6 cards in at once, and let Windows install everything from scratch. It worked on the first try with no problems.

I am going to try installing the full NVIDIA package tonight to see if there is any performance gain over the base Windows drivers, but I suspect it will work just fine as well now that all 6 cards are working in Win7.

There are tons of 6-GPU builds, and those who write about them regularly comment on having no problems with Windows installing all 6 cards, but they are bringing them all up new at the same time, which would avoid this problem.

Hope that helps someone down the line...

I will also be able to post some comparisons as the two new cards I added are MSI 750 TI TF Gaming cards, where the original 4 are MSI 750 TI OC cards (single fan).

I will be OC'ing and raising the TDP on the new cards tonight as well so it should be a good comparison.

I do think I am seeing about a 5% drop in per-card performance with the added 2 cards so far (3250/card mining Groestlcoin before, 3075/card after adding the two new cards; 175/3250 ≈ 5.4%). Hopefully that is just the new cards running at stock clocks and bringing down the average, but I will let you all know what I can figure out.
newbie
Activity: 13
Merit: 0
The only way I've done it is by flashing the BIOS. It's pretty easy with the editor.
legendary
Activity: 1400
Merit: 1050
can you overclock through nvidia-smi (or some other way, if the card isn't plugged into a monitor)?
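For context on this question: `nvidia-smi -ac` can set application clocks, but only on boards that expose them (mostly Tesla/Quadro; GeForce usually reports N/A). GeForce cards generally need the Coolbits route through `nvidia-settings`, which normally requires a running X server. A hedged sketch (the offset values and the `[3]` performance-level index are illustrative examples, not universal):

```shell
# Check whether the board exposes application clocks at all
# (GeForce boards usually report "N/A" here):
nvidia-smi -q -d SUPPORTED_CLOCKS

# GeForce route: enable Coolbits in xorg.conf, restart X, then apply
# clock offsets. Offset values below are illustrative only.
sudo nvidia-xconfig --cool-bits=8
nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[3]=50" \
                -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=200"
```

These commands need the NVIDIA driver and (for nvidia-settings) an X session, so treat them as a starting point rather than a recipe.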
newbie
Activity: 13
Merit: 0
I'm on the latest NVIDIA driver for Linux, 337.12, with CUDA 6.

Unfortunately I can't tell what options are in nvidia-settings, as my monitor is plugged into the Intel video.

From the CLI, though, everything looks fine. ccminer works well; I think there is a slight increase, but it could be placebo.

Code:
+------------------------------------------------------+                       
| NVIDIA-SMI 337.12     Driver Version: 337.12         |                       
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 750 Ti  Off  | 0000:02:00.0     N/A |                  N/A |
| 40%   37C  N/A     N/A /  N/A |     40MiB /  2047MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 750 Ti  Off  | 0000:03:00.0     N/A |                  N/A |
| 40%   33C  N/A     N/A /  N/A |     40MiB /  2047MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 750 Ti  Off  | 0000:04:00.0     N/A |                  N/A |
| 40%   33C  N/A     N/A /  N/A |     40MiB /  2047MiB |     N/A     
Cuda device query output.
Code:
Device 2: "GeForce GTX 750 Ti"
  CUDA Driver Version / Runtime Version          6.0 / 6.0
  CUDA Capability Major/Minor version number:    5.0
  Total amount of global memory:                 2048 MBytes (2147287040 bytes)
  ( 5) Multiprocessors, (128) CUDA Cores/MP:     640 CUDA Cores
  GPU Clock rate:                                1291 MHz (1.29 GHz)
  Memory Clock rate:                             3200 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 2097152 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Bus ID / PCI location ID:           4 / 0
member
Activity: 80
Merit: 10
Everyone running Windows? :/

Win7 x64 with 6x 750 ti

Seems like Windows is preferred, and it's certainly a lot easier to handle (particularly if you are a complete Linux novice like me...). I will report back on any attempts at Linux overclocking, for anyone who does happen to use it!
legendary
Activity: 1400
Merit: 1050
full member
Activity: 182
Merit: 100
About the VRM heat thing, mine seems to hit 90C according to the sensor shown in GPU-Z.
I did not overclock; I'm using an Asus GeForce GTX 780 DirectCU II Overclocked.
full member
Activity: 238
Merit: 100
sr. member
Activity: 350
Merit: 250
Everyone running Windows? :/

not by choice

For anyone interested, I put the details I added into the webserver today, sorted out the ports for the servers as well, and pushed the data onto my old page so you can see it.


Temps and fan speed update every second within ccminer, so it's always up to date.
It shows GPU name, temperature, and fan speed, so it doesn't matter what GPU you have installed; it will set the name automatically.

It is still limited to one instance per GPU right now, sadly, but I intend to remove that limitation soon. So far it's been 3 hours of coding; I don't think I'm doing too badly, as I'd never touched C++ before this.
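The same name/temperature/fan readout described above can also be pulled from the shell; a sketch using nvidia-smi's CSV query mode (`format_row` is a hypothetical display helper written for this post, not part of ccminer):

```shell
# Turn one CSV row from nvidia-smi into a one-line display string.
format_row() {
    # input like: "GeForce GTX 750 Ti, 37, 40 %"
    IFS=',' read -r name temp fan <<< "$1"
    echo "${name# }: ${temp# }C, fan ${fan# }"
}

# Live use (requires the NVIDIA driver to be installed):
#   nvidia-smi --query-gpu=name,temperature.gpu,fan.speed \
#              --format=csv,noheader |
#       while IFS= read -r row; do format_row "$row"; done
```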
member
Activity: 80
Merit: 10
Just wondering, has anyone tried the latest beta Linux drivers that enable overclocking? I'm using my 750 Tis in my media/backup server, so it's running Linux. It would be good to boost the hashrate a little!

Also, is there any documentation yet for the latest commits on Git? I was wondering how the failover works, i.e. does it return to the original pool when it comes back up? I'm not using it yet, as I'm mining YAC and I believe the latest versions don't do scrypt-jane yet.

Everyone running Windows? :/
full member
Activity: 146
Merit: 100
After an incident (driver crash), the card goes into fail-safe mode (half clock speed).
You need to reboot or reset the driver (there is a tool to reset the driver; someone else might remember its name).

Devcon.exe (Device Console or something like that... it is from Microsoft)

There is a 32 bit and 64 bit version. Christian has both of the exe files in his advanced script download.

Very useful for resetting the cards after the driver reset without rebooting or having to do it manually.


After I have a driver crash, I just press reset in MSI Afterburner, or apply a slightly lower overclock, and that seems to fix it as well. At least, my hashrates do not indicate otherwise.
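For reference, a hedged sketch of the devcon route mentioned above (run from an elevated Windows command prompt; `PCI\VEN_10DE*` is NVIDIA's PCI vendor ID pattern, but confirm with `devcon find` before restarting anything):

```bat
:: List NVIDIA devices first to confirm the hardware-ID pattern matches.
devcon find "PCI\VEN_10DE*"

:: Restart them (equivalent to disable/enable in Device Manager).
devcon restart "PCI\VEN_10DE*"
```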
sr. member
Activity: 350
Merit: 250
Yeah, I think it uses the NVML API to force the state of the GPU; I have been messing around with it for a few hours, so I did see that.
legendary
Activity: 1400
Merit: 1050
Since we're on the topic of temps, my card's doing something odd. I start cuda/ccminer and then OC the card. Fine, the core clock goes up and stays there as long as the mining continues. Then there's a small connection error, and the MSI stats for all the clocks and the temp drop to 0. On restarting, the mem clock goes back to the OC'd value, but the core clock is underclocked, i.e. even lower than its original clock. Why is this?
After an incident (driver crash), the card goes into fail-safe mode (half clock speed).
You need to reboot or reset the driver (there is a tool to reset the driver; someone else might remember its name).
sr. member
Activity: 308
Merit: 250
New to mining. I'm mining on an Asus GTX 780, stock with no OC, and getting about 530 kH/s. My computer is on 24/7, usually gaming when I'm on and mining when I'm off. I'm curious about my temps. Right now I'm seeing GPU temps of 65-69C and VRM temps of 70-74C with the fan at 80%. Is this in the safe zone?

Anything below 75C is safe IMO, but less is better anyway.

GPU or VRM? Both?
That's for the GPU; my VRM runs at 98C most of the time.
98C is way too high for the VRM (well, I think so... I can't monitor mine on the 780 Ti).

I use HWiNFO for monitoring VRM temps. You can try it.

@bigjme

Are you sure? That 98C has got me worried, but there is no way I can get it down except maybe water cooling.

98C is too hot for continuous running. You'd be greatly reducing its life. ~80C is alright, I guess. Maybe keep it near a window and space the cards out?