Hello everyone. I would like to know if there is still someone interested in trying to compile this in Linux.
I'm having trouble compiling on Ubuntu 10.04. I am preparing a little Bash script based on previous posts so that others can benefit from it as well, but I can't get past the issue with nvcc (I'm using gcc and g++ version 4.4). I'm also attaching my video card's specs below. Let me know if you need more information.
The error is this:
/usr/lib/gcc/x86_64-linux-gnu/4.4.3/../../../../lib/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
collect2: ld returned 1 exit status
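If I am reading the message right, ld is being asked to produce a full executable from a file that has no main() in it, and nvcc only skips the host link step when told to (for example with -cubin or -c). Here is a minimal sketch with a throwaway kernel, just to illustrate the difference; this is my own test file, nothing from the miner source:
# Kernel-only file with no main(), purely for illustration
cat > /tmp/kernel_only.cu <<'EOF'
__global__ void dummy() { }
EOF
nvcc /tmp/kernel_only.cu                            # compiles AND links -> "undefined reference to `main'"
nvcc -cubin -arch=sm_12 /tmp/kernel_only.cu         # device code only, no host link step
nvcc -c /tmp/kernel_only.cu -o /tmp/kernel_only.o   # or stop at an object file for a later link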
The script is this:
#!/bin/bash
# Script to modify the source to build rpcminer with CUDA support
echo "Make sure to have installed the CUDA drivers and toolkit";
echo "You can go to https://developer.nvidia.com/cuda-downloads";
echo "and https://help.ubuntu.com/community/Cuda";
sleep 3;
# 0. Prepare the environment
# Unpack the zip file with the source and go into the folder
unzip -q -d rpcminer bitcoin-remote-rpc-*-src.zip && cd rpcminer
# 1. Open up the CMakeLists.txt. There's a group of seven options
# (lines 5-11 in my version of the file). I set "Enable CUDA
# miner" and "Build RPC miner" to ON and everything else to OFF.
sed -i 's-\(OPTION(BITCOIN_\(ENABLE\|BUILD\)_\(OPENCL\|REMOTE\|GUI\|DAEMON\).*\)ON)-\1OFF)-g;' CMakeLists.txt;
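# Sanity check (my own addition, not in the original steps): only the CUDA and
# RPC miner options should still end in ON) after the substitution above
grep -n 'OPTION(BITCOIN_' CMakeLists.txt;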
# 2. Ran cmake → see step 4 below
# 3. Cleaned up some compile errors:
# 3A. Added "#include "
# and "using namespace boost;" to serialize.h after the rest of the
# #includes
# 3A-a. Find the last #include line that references boost (there is
# another #include further down, near line 1112)
last_line_include_boost=`sed -n '/\#include.*boost/=' ./src/serialize.h | tail -n 1`;
# 3A-b. Add both "#include " and "using namespace boost;"
sed -i $last_line_include_boost's-^\(.*\)$-\1\n\#include \nusing namespace boost\;-g' ./src/serialize.h;
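# Optional check (my own addition): print the lines around the insertion point to
# confirm the new #include and the using directive landed after the last boost include
sed -n "${last_line_include_boost},+3p" ./src/serialize.h;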
# 3C. Modified bitcoinminercuda.cu to "#define _BITCOIN_MINER_CUDA_"
# right above the #ifdef that checks for it.
# 3C-a. For every file named "bitcoinminercuda.cu", check if it does
# not have "#define _BITCOIN_MINER_CUDA_". If that is the case, find
# the line with "#ifdef _BITCOIN_MINER_CUDA_", and put
# "#define# _BITCOIN_MINER_CUDA_" before it
for i in `find ./ -iname 'bitcoinminercuda.cu'`; do
if [[ -z `grep '#define _BITCOIN_MINER_CUDA_' "$i"` ]]; then
sed -i '/\#ifdef _BITCOIN_MINER_CUDA_/{x;s-.*-\#define _BITCOIN_MINER_CUDA_-;p;x}' "$i";
fi;
done;
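# Optional check (my own addition): the #define should now sit directly above the #ifdef
grep -n -B1 '#ifdef _BITCOIN_MINER_CUDA_' ./src/cuda/bitcoinminercuda.cu;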
# 4. Ran make → run cmake and make here
cmake ./ && make
# 5. Build the appropriate ".cubin" file
# 5A. Ask the user to input the correct compute capability version
# for their GPU
# http://wiki.blender.org/index.php/Dev:2.6/Source/Render/Cycles/Building
# http://ondoc.logand.com/d/365/html
# https://en.wikipedia.org/wiki/CUDA
# http://docs.nvidia.com/cuda/cuda-c-programming-guide/
# 5A-a. Print a message with the link to find the right information
echo "Visit the website https://developer.nvidia.com/cuda-gpus";
echo "and look for the Compute Capability corresponding to the model";
echo "of your GPU. Then input the number and press enter";
# 5A-b. Read value from standard input
read GPU_CompCap;
# 5A-c. Delete the period in the version
GPU_CompCap=`echo "$GPU_CompCap" | tr -d '.'`;
# 5A-d. Run nvcc to compile the cubin file
nvcc ./src/cuda/bitcoinminercuda.cu -gencode arch=compute_"$GPU_CompCap",\"code=sm_$GPU_CompCap,compute_$GPU_CompCap\" --keep
I have been using 1.3 for GPU_CompCap. I also tried 1.2, with the same result.
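For reference, with GPU_CompCap set to 12 the nvcc line above expands to the first (commented) form below. From my reading of the nvcc manual the escaped quotes should wrap only the value after code=, and since step 5 only needs the .cubin I wonder whether the host link step should be skipped entirely; both variants are guesses on my part rather than something I have confirmed:
# What the shell actually hands to nvcc with GPU_CompCap=12 (the escaped quotes come through literally):
#   nvcc ./src/cuda/bitcoinminercuda.cu -gencode arch=compute_12,"code=sm_12,compute_12" --keep
# Quoting form as shown in the nvcc manual, with code= kept outside the quotes:
nvcc ./src/cuda/bitcoinminercuda.cu -gencode arch=compute_12,code=\"sm_12,compute_12\" --keep
# Device-code-only variant, so ld is never asked to find a main():
nvcc ./src/cuda/bitcoinminercuda.cu -cubin -arch=sm_12 --keep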
The specs of my card are:
Device 0: "NVS 3100M"
CUDA Driver Version / Runtime Version 5.0 / 5.0
CUDA Capability Major/Minor version number: 1.2
Total amount of global memory: 511 MBytes (536084480 bytes)
( 2) Multiprocessors x ( 8) CUDA Cores/MP: 16 CUDA Cores
GPU Clock rate: 1468 MHz (1.47 GHz)
Memory Clock rate: 790 Mhz
Memory Bus Width: 64-bit
Max Texture Dimension Size (x,y,z) 1D=(8192), 2D=(65536,32768), 3D=(2048,2048,2048)
Max Layered Texture Size (dim) x layers 1D=(8192) x 512, 2D=(8192,8192) x 512
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 16384 bytes
Total number of registers available per block: 16384
Warp size: 32
Maximum number of threads per multiprocessor: 1024
Maximum number of threads per block: 512
Maximum sizes of each dimension of a block: 512 x 512 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 1
Maximum memory pitch: 2147483647 bytes
Texture alignment: 256 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): No
Device PCI Bus ID / PCI location ID: 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 5.0, CUDA Runtime Version = 5.0, NumDevs = 1, Device0 = NVS 3100M
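In case it is useful, the compute capability could also be pulled straight out of the deviceQuery output instead of being typed in by hand (assuming the deviceQuery binary from the CUDA samples is reachable on the PATH; the listing above is its output):
# Grab the "CUDA Capability Major/Minor version number" line and drop the dot (e.g. 1.2 -> 12)
GPU_CompCap=`deviceQuery | sed -n 's-.*CUDA Capability Major/Minor version number:\s*--p' | tr -d '. '`;
echo "Detected compute capability: $GPU_CompCap";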
If someone can help me with the optimal parameters, that would be greatly appreciated :D