
Topic: CCminer(SP-MOD) Modded NVIDIA Maxwell / Pascal kernels. - page 1076. (Read 2347601 times)

legendary
Activity: 1470
Merit: 1114

ultimately its about building - compiling - rolling out - streamlining ... all clones of one worker ... all with linux - all with exact same hardware - all with ccminer-spmod

Hey Guys...for some of us...this stuff is all greek.   Huh   I made no sense of any of it and would really like to keep up with the Jones here. It would be nice to see some Win-install files, just like you had release #s before. So much love would come your way!    Shocked

Thanks

A little quotation error corrected.

If you mean precompiled Windows binaries, they are still produced and available on the release page as
posted by SP. If you want to compile your own Windows binaries it's not too difficult, though it is time consuming, and I might
be able to help. I've gone through it a couple of times and it can be done with all freeware, except for the OS of course.

In short you need Visual Studio Community, the CUDA toolkit, the ccminer source and lots of disk space. ccminer.sln is
the configuration file and you double click on it to open the "Solution" in VS. You can edit the properties to add
compute versions (similar to what Linux folks do with makefiles). Then Build Solution. Eventually if everything
works you should find a release directory with a freshly minted ccminer.exe file in it.

I can write up a better tutorial if you like but it will take some time.



that actually would be good ... and highly beneficial jo ...

i for one would like to learn how to build under windows ... hell - ill build a windows machine JUST for it ...

tutor me / us please ...

#crysx
@joblo

Yes!  How much disk space do I need and will I be able to use the system while it compiles? I only have 1 and it's my mining machine, but I'd really like to learn to Windows compile.  

ASUS x77pro, Intel i7 3770, 16GB, 256GB SSD, 1TB SSHD, 1TB HD, 2TB NAS, Win8.1 Pro, CUDA 6.5 (have 7 too but not running). None of the HDs are completely empty tho, but I have a combined ~0.75 TB available on the computer and ~1 TB on the NAS. Mining equipment currently running is one S3+, three R-Box 110s, two 1.3 MH/s Fury/Zeus, five RedFury USB sticks, three Gridseed duals, and I vid card mine (naturally) as well.

I think that's it. What else do I need to D/L, and install for Windows compiling? Thanks much joblo

I can post a very rough draft outline which might get you going if you're resourceful, but be patient.
I don't have specific numbers for disk space but vs_community seems to insist on being installed on the
system partition although it states "across all drives" in its requirements. It is probably better to put CUDA on that
drive as well. Probably 10 GB combined, but that might not account for temporary bloat during installation.
The ccminer files can be on another drive. How much space
you require depends on how many compute versions are coded and how many old compiles you
leave lying around. Your PC will be usable while compiling unless you are running other CPU intensive stuff.
You can easily be mining while compiling.

Disclaimer: Use at your own risk. I am not responsible for any loss or damage to your computer or your sanity. Grin

1. need at least 20 GB disk space

2. download vs_community, cudatools

3. install vs_community

4. install cuda

5. download and unpack ccminer

6. open ccminer.sln

7. verify the solution configuration is set to Release, Win32

8. change config to customize compute versions (see the sketch below)
   project->properties->configuration properties->cuda c/c++->device
   select code generation, edit

9. build solution

10. find ccminer.exe in the release dir
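
For step 8, here is a rough sketch of what the Code Generation field might look like for a 750 Ti plus a 9xx card, and the flags nvcc roughly ends up with. The exact values depend on your cards and CUDA version, so treat it purely as an illustration:

Code:
# illustrative "Code Generation" value (project->properties->CUDA C/C++->Device):
compute_50,sm_50;compute_52,sm_52

# roughly what nvcc gets passed as a result:
-gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_52,code=sm_52

Keep in mind compute_52 needs the CUDA 6.5 toolkit for GTX 9xx (or CUDA 7.0+), as noted elsewhere in the thread.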


legendary
Activity: 2912
Merit: 1091
--- ChainWorks Industries ---

in this case - i did do clean - and that is part of the compile / installation script ... when i took sp's suggestion of copying back the previous makefile from the older v51 compile - the compilation went through without issue ...


clean or distclean? If you modify config files you need to do a distclean. Otherwise there must be an error
in Makefile.am. A diff of the files may make the error more apparent.

always distclean ... my apologies ...

i didnt end up at the office today - had other personal errands to run ( unfortunately same with tomorrow - but will have time tomorrow afternoon ) - so i didnt get a chance to make changes and 'play' with the systems ...

you suggested centos 7 from memory ... is that straightforward enough to get cuda installed and running within centos? ...

as im much more familiar with rhel based systems ( centos / scientific linux / fedora ) than debian based systems ( ubuntu ) ...

#crysx
legendary
Activity: 1470
Merit: 1114

in this case - i did do clean - and that is part of the compile / installation script ... when i took sp's suggestion of copying back the previous makefile from the older v51 compile - the compilation went through without issue ...


clean or distclean? If you modify config files you need to do a distclean. Otherwise there must be an error
in Makefile.am. A diff of the files may make the error more apparent.
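
A minimal sketch of what a clean rebuild after editing Makefile.am usually looks like (the compiler flags are just placeholders):

Code:
# throw away everything generated by earlier builds, including Makefile.in
make distclean
# regenerate Makefile.in / configure from the edited Makefile.am
./autogen.sh
./configure CXXFLAGS="-O3"
# rebuild
make -j4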
legendary
Activity: 2912
Merit: 1091
--- ChainWorks Industries ---
for sm_52 support you need this sdk : https://developer.nvidia.com/cuda-downloads-geforce-gtx9xx

else, ive fixed cuda 7.0 support on x11 in my linux branch but... we lose 180kH/s on a 750ti with the same code (compared to cuda 6.5) :/ so... i don't recommend it

so does this mean we have a version that is worse off in terms of 'lower end' card support? ...

or is it just a matter of time before the coding becomes on par with 6.5? ...

btw - where is your git Epsylon3? ...

#crysx
hero member
Activity: 1064
Merit: 500
MOBU

ultimately its about building - compiling - rolling out - streamlining ... all clones of one worker ... all with linux - all with exact same hardware - all with ccminer-spmod

Hey Guys...for some of us...this stuff is all greek.   Huh   I made no sense of any of it and would really like to keep up with the Jones here. It would be nice to see some Win-install files, just like you had release #s before. So much love would come your way!    Shocked

Thanks

A little quotation error corrected.

If you mean precompiled Windows binaries, they are still produced and available on the release page as
posted by SP. If you want to compile your own Windows binaries it's not too difficult, though it is time consuming, and I might
be able to help. I've gone through it a couple of times and it can be done with all freeware, except for the OS of course.

In short you need Visual Studio Community, the CUDA toolkit, the ccminer source and lots of disk space. ccminer.sln is
the configuration file and you double click on it to open the "Solution" in VS. You can edit the properties to add
compute versions (similar to what Linux folks do with makefiles). Then Build Solution. Eventually if everything
works you should find a release directory with a freshly minted ccminer.exe file in it.

I can write up a better tutorial if you like but it will take some time.



that actually would be good ... and highly beneficial jo ...

i for one would like to learn how to build under windows ... hell - ill build a windows machine JUST for it ...

tutor me / us please ...

#crysx
@joblo

Yes!  How much disk space do I need and will I be able to use the system while it compiles? I only have 1 and it's my mining machine, but I'd really like to learn to Windows compile.  

ASUS x77pro, Intel i7 3770, 16GB, 256GB SSD, 1TB SSHD, 1TB HD, 2TB NAS, Win8.1 Pro, CUDA 6.5 (have 7 too but not running). None of the HDs are completely empty tho, but I have a combined ~0.75 TB available on the computer and ~1 TB on the NAS. Mining equipment currently running is one S3+, three R-Box 110s, two 1.3 MH/s Fury/Zeus, five RedFury USB sticks, three Gridseed duals, and I vid card mine (naturally) as well.

I think that's it. What else do I need to D/L, and install for Windows compiling? Thanks much joblo
legendary
Activity: 1050
Merit: 1293
Huh?

ultimately its about building - compiling - rolling out - streamlining ... all clones of one worker ... all with linux - all with exact same hardware - all with ccminer-spmod

Hey Guys...for some of us...this stuff is all greek.   Huh   I made no sense of any of it and would really like to keep up with the Jones here. It would be nice to see some Win-install files, just like you had release #s before. So much love would come your way!    Shocked

Thanks

A little quotation error corrected.

If you mean precompiled Windows binaries, they are still produced and available on the release page as
posted by SP. If you want to compile your own Windows binaries it's not too difficult, though it is time consuming, and I might
be able to help. I've gone through it a couple of times and it can be done with all freeware, except for the OS of course.

In short you need Visual Studio Community, the CUDA toolkit, the ccminer source and lots of disk space. ccminer.sln is
the configuration file and you double click on it to open the "Solution" in VS. You can edit the properties to add
compute versions (similar to what Linux folks do with makefiles). Then Build Solution. Eventually if everything
works you should find a release directory with a freshly minted ccminer.exe file in it.

I can write up a better tutorial if you like but it will take some time.



that actually would be good ... and highly beneficial jo ...

i for one would like to learn how to build under windows ... hell - ill build a windows machine JUST for it ...

tutor me / us please ...

#crysx

I have no experience in compiling ccminer for windows.

I do have some for sgminer under Windows and I have found out that it's super easy with MinGW.

There is even a tutorial for it in the master git branch under the winbuild dir.

Maybe most of those steps apply to ccminer as well.
legendary
Activity: 2912
Merit: 1091
--- ChainWorks Industries ---

ultimately its about building - compiling - rolling out - streamlining ... all clones of one worker ... all with linux - all with exact same hardware - all with ccminer-spmod

Hey Guys...for some of us...this stuff is all greek.   Huh   I made no sense of any of it and would really like to keep up with the Jones here. It would be nice to see some Win-install files, just like you had release #s before. So much love would come your way!    Shocked

Thanks

A little quotation error corrected.

If you mean precompiled Windows binaries, they are still produced and available on the release page as
posted by SP. If you want to compile your own Windows binaries it's not too difficult, though it is time consuming, and I might
be able to help. I've gone through it a couple of times and it can be done with all freeware, except for the OS of course.

In short you need Visual Studio Community, the CUDA toolkit, the ccminer source and lots of disk space. ccminer.sln is
the configuration file and you double click on it to open the "Solution" in VS. You can edit the properties to add
compute versions (similar to what Linux folks do with makefiles). Then Build Solution. Eventually if everything
works you should find a release directory with a freshly minted ccminer.exe file in it.

I can write up a better tutorial if you like but it will take some time.



that actually would be good ... and highly beneficial jo ...

i for one would like to learn how to build under windows ... hell - ill build a windows machine JUST for it ...

tutor me / us please ...

#crysx
legendary
Activity: 1470
Merit: 1114

ultimately its about building - compiling - rolling out - streamlining ... all clones of one worker ... all with linux - all with exact same hardware - all with ccminer-spmod

Hey Guys...for some of us...this stuff is all greek.   Huh   I made no sense of any of it and would really like to keep up with the Jones here. It would be nice to see some Win-install files, just like you had release #s before. So much love would come your way!    Shocked

Thanks

A little quotation error corrected.

If you mean precompiled Windows binaries, they are still produced and available on the release page as
posted by SP. If you want to compile your own Windows binaries it's not too difficult, though it is time consuming, and I might
be able to help. I've gone through it a couple of times and it can be done with all freeware, except for the OS of course.

In short you need Visual Studio Community, the CUDA toolkit, the ccminer source and lots of disk space. ccminer.sln is
the configuration file and you double click on it to open the "Solution" in VS. You can edit the properties to add
compute versions (similar to what Linux folks do with makefiles). Then Build Solution. Eventually if everything
works you should find a release directory with a freshly minted ccminer.exe file in it.

I can write up a better tutorial if you like but it will take some time.

legendary
Activity: 1797
Merit: 1028
Anyone using some 960s? What are you getting for Quark/X11/Neo?
I assume ccminer doesn't scale and needs to be tailored to each card's gen.

You can get 10 MH/s if you overclock; with standard clocks, around 9.2 MH/s.



GTX 960--

Mining Quark, I get 10.5 MH/s with +80 MHz / +240 MHz core/mem on the 2 GB 960 SSC card with default intensity.  It runs on Win 7 x64.

--scryptr
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
Anyone using some 960s? What are you getting for Quark/X11/Neo?
I assume ccminer doesn't scale and needs to be tailored to each card's gen.

You can get 10 MH/s if you overclock; with standard clocks, around 9.2 MH/s.

legendary
Activity: 1484
Merit: 1082
ccminer/cpuminer developer
for sm_52 support you need this sdk : https://developer.nvidia.com/cuda-downloads-geforce-gtx9xx

else, ive fixed cuda 7.0 support on x11 in my linux branch but... we lose 180kH/s on a 750ti with the same code (compared to cuda 6.5) :/ so... i don't recommend it
hero member
Activity: 1064
Merit: 500
MOBU

ultimately its about building - compiling - rolling out - streamlining ... all clones of one worker ... all with linux - all with exact same hardware - all with ccminer-spmod

Hey Guys...for some of us...this stuff is all greek.   Huh   I made no sense of any of it and would really like to keep up with the Jones here. It would be nice to see some Win-install files, just like you had release #s before. So much love would come your way!    Shocked

Thanks
legendary
Activity: 2912
Merit: 1091
--- ChainWorks Industries ---

ultimately its about building - compiling - rolling out - streamlining ... all clones of one worker ... all with linux - all with exact same hardware - all with ccminer-spmod ( and sgminer at the moment for the current amd cards ) ...

i guess we will need to wait and see if any of these plans come to light ... im on this full time and daily now - so time is not an issue any more ...

edit - the compile still bombs ... but i am tired and need sleep - so maybe im doing something wrong ... tomorrow will work on it further ...

#crysx

Hardware shouldn't be an issue except for the CPU and GPU. There is no technical need, though you may have other reasons,
to have all the HW identical. Even the CPU and GPU differences can be overcome in many cases. Different GPU generations
can be supported with a single compile as discussed in the part that I snipped. And differences in CPU extensions can be worked
around by specifying the architecture that works on the least capable CPU. Not really a good idea for CPU critical progs.

Software, of course, has to be the same.

I don't know if there is any benefit to having matching GPUs in the same rig. I tend to mix bigger and smaller cards as it's
easier on power supplies (I'm less likely to need a new bigger one) and heat dissipation especially on Linux where fan control
is usually limited to only one card.

Regarding your compile problems, and assuming they are related to the compute version setting in the makefile, the syntax
is pretty straightforward if you understand the difference between = and +=. Any C programmer would not have to think twice
but you've said many times you are not a coder so I'll offer the following.

= is a direct variable assignment. There must be exactly one of these for the compute version.
+= will add to the existing variable. There may be any number of these or none and they must be
after the direct assignment.
# is a comment. Those lines are not read by the compiler.

If you are compiling for only one compute version you only need the direct assignment of that version.
If you are compiling for multiple compute versions you need one direct assignment followed by as many
add assignments as needed to specify all the desired compute versions.

I took a close look at the code I posted and it should have worked for you as is. I also took another look
at your problem description, the part about Makefile.in. AFAIK that file is not part of the clone but created
by the compile process (either autogen or configure, not sure which). The point is did you do a make distclean
before attempting the second compile with the modified Makefile.am? I sometimes delete the tree and re-expand
the tarball if I'm having a bad compile day. You may want to reclone and edit (or replace) Makefile.am before doing
anything else.



i appreciate your explanation ... it has helped ...

just to clarify a couple of points jo ...

the reason i have designed the machines in the format they are AND we have the farm mining the way it does ( structurally and software ) is to simplify maintenance and maintain an even level of comparative statistics with respect to power draw - physical location and 'stacking' ( which is why these machines are to be rebuilt in the custom designed frames we have almost finished ) - cooling and software roll out ...

the main design goal is duplication ... which is why a development machine ( testing ) need not comply with the overall farm design - though it is beneficial for the software rollout ...

so when compiling - the cpu / gpu and software can be setup once - and rolled out to the existing infrastructure AND future when expansion to the farm is required ... so keeping everything in line with the gen and model and software is of major importance if the 'hands on' component is to be minimized ...

so all the same hardware and software would be much more beneficial across the farm - in order for the rollout to happen smoothly - and in the future of the farm in terms of expansion or even upgrades ( say - when the gtx 980 oc cards become the 'norm' ( so to speak - as 750ti oc has ) ) ...

so when the compilation breaks - it becomes an issue ... not so if there was only one machine to handle - thats simple ... but an issue when there are multiple machines with the same break ...

in this case - i did do clean - and that is part of the compile / installation script ... when i took sp's suggestion of copying back the previous makefile from the older v51 compile - the compilation went through without issue ...

i guess its just a matter of changing the makefile back to the way it was - or to the way you have posted earlier which i havent retested ... i was completely spent last night ... so mistakes would have been prevalent in almost all of the changes made to anything i touch last night ...

the main goal is to just standardize the clone / compile / install procedure via a script and rollout the same way across the machines of the farm ...

for the moment - gigabyte 750ti oc lp cards are the only cards that will be used in this farm ... so compute_52 is useless to us here ...

obviously this is a rare issue and only seems to happen with us - as sp develops for the broader community ...

but i like to bring it to light here - just in case someone else has a similar issue ...

tanx again jo ...

#crysx
legendary
Activity: 1470
Merit: 1114

ultimately its about building - compiling - rolling out - streamlining ... all clones of one worker ... all with linux - all with exact same hardware - all with ccminer-spmod ( and sgminer at the moment for the current amd cards ) ...

i guess we will need to wait and see if any of these plans come to light ... im on this full time and daily now - so time is not an issue any more ...

edit - the compile still bombs ... but i am tired and need sleep - so maybe im doing something wrong ... tomorrow will work on it further ...

#crysx

Hardware shouldn't be an issue except for the CPU and GPU. There is no technical need, though you may have other reasons,
to have all the HW identical. Even the CPU and GPU differences can be overcome in many cases. Different GPU generations
can be supported with a single compile as discussed in the part that I snipped. And differences in CPU extensions can be worked
around by specifying the architecture that works on the least capable CPU. Not really a good idea for CPU critical progs.
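
A minimal sketch of pinning the CPU architecture at configure time; the -march value here is purely an example stand-in for whatever matches the least capable CPU in the farm:

Code:
# build for a hypothetical lowest-common-denominator CPU (example value only)
./configure CFLAGS="-O3 -march=nehalem" CXXFLAGS="-O3 -march=nehalem"
make -j4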

Software, of course, has to be the same.

I don't know if there is any benefit to having matching GPUs in the same rig. I tend to mix bigger and smaller cards as it's
easier on power supplies (I'm less likely to need a new bigger one) and heat dissipation especially on Linux where fan control
is usually limited to only one card.

Regarding your compile problems, and assuming they are related to the compute version setting in the makefile, the syntax
is pretty straightforward if you understand the difference between = and +=. Any C programmer would not have to think twice
but you've said many times you are not a coder so I'll offer the following.

= is a direct variable assignment. There must be exactly one of these for the compute version.
+= will add to the existing variable. There may be any number of these or none and they must be
after the direct assignment.
# is a comment. Those lines are not read by the compiler.

If you are compiling for only one compute version you only need the direct assignment of that version.
If you are compiling for multiple compute versions you need one direct assignment followed by as many
add assignments as needed to specify all the desired compute versions.
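
To put that together, a hedged example of how the assignments might look in Makefile.am with two compute versions enabled and a third left commented out:

Code:
# direct assignment: exactly one of these
nvcc_ARCH  = -gencode=arch=compute_50,code=\"sm_50,compute_50\"
# add assignments: zero or more, always after the direct assignment
nvcc_ARCH += -gencode=arch=compute_52,code=\"sm_52,compute_52\"
# commented out, so ignored
#nvcc_ARCH += -gencode=arch=compute_30,code=\"sm_30,compute_30\"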

I took a close look at the code I posted and it should have worked for you as is. I also took another look
at your problem description, the part about Makefile.in. AFAIK that file is not part of the clone but created
by the compile process (either autogen or configure, not sure which). The point is did you do a make distclean
before attempting the second compile with the modified Makefile.am? I sometimes delete the tree and re-expand
the tarball if I'm having a bad compile day. You may want to reclone and edit (or replace) Makefile.am before doing
anything else.

legendary
Activity: 1470
Merit: 1114
Anyone using some 960s? What are you getting for Quark/X11/Neo?

I assume ccminer doesn't scale and needs to be tailored to each card's gen.

I'm not sure what you mean by ccminer scaling and I have no data for the 960 but
I did observe fairly linear hash rate vs. CUDA core count scaling among the 750ti, 970 & 980
in quark & X11. Don't know about neo and I do know that lyra2 doesn't scale linearly
(as was graciously pointed out to me).

I'm satisfied with using the default ccminer parms. It works on all my cards, including
compute 3.5,  and produces hash rates comparable to test results posted by others.
legendary
Activity: 1764
Merit: 1024
Anyone using some 960s? What are you getting for Quark/X11/Neo?

I assume ccminer doesn't scale and needs to be tailored to each card's gen.
legendary
Activity: 2912
Merit: 1091
--- ChainWorks Industries ---
hi all ...

has anyone any issues with compiling since sp introduced compute_52 into the makefile? ...

sp - the compile crashes with - nvcc fatal   : Unsupported gpu architecture 'compute_52' - since you introduced the compute_52 arch into git ...

even if you comment out the option in the makefile - the compilation bombs out with - config.status: error: cannot find input file: `Makefile.in' ...

could you 'fix' this please? ...

#crysx

Guess you missed this post....

https://bitcointalksearch.org/topic/m.11475406

If you only have 750s you don't need compute_52. As SP pointed out compute_52 will give
better performance on 9xx cards.

tanx jo ... i understand ...

BUT - when the latest git clone was made and a compile got underway - the compile bombed out ...

editing the Makefile.am and removing that part for compute_52 did nothing more than create more errors concerning a missing Makefile.in ...

i have written a very simple script for the compilation of ccminer-spmod for our systems ...

simple but effective - in that it automates the entire process from cloning to compilation to installation and rebooting of the system to mine with the new miner without manual intervention for each of the machines in the farm ...

unfortunately - when there is an issue with the compilation ( as this has currently ) the process is broken ... which means manual intervention - and i REALLY dont like manual intervention Tongue ...

as per sp's advice - the compilation finishes successfully when the 'older' Makefile.am replaces the new one ( that is still currently in git ) and the latest ccminer gets compiled ...

but that takes intervention on my part for EVERY miner in the farm - and that is a major pain ...

so whether we need compute_52 or not - it exists currently in the Makefile.am when you clone and thus compilation crashes ... this was not an issue prior to the latest change - which made everything run automatically ...

and im a lazy guy - that likes automation Wink ...

#crysx

I see. You want the default makefile to work for your process. So compute_50 should be the default with others
being optional requiring manual intervention. Ex:

Code:
nvcc_ARCH = -gencode=arch=compute_50,code=\"sm_50,compute_50\"
#nvcc_ARCH += -gencode=arch=compute_52,code=\"sm_52,compute_52\"
#nvcc_ARCH += -gencode=arch=compute_35,code=\"sm_35,compute_35\"
#nvcc_ARCH += -gencode=arch=compute_30,code=\"sm_30,compute_30\"

Or you could just install the supported cuda version and your process should work
with the existing default makefile.

I'm curious whether you do separate compiles for each system or just one central
compile. If your environment is homogeneous you could compile once and distribute
the executable to all rigs. This would simplify your process in the event manual intervention
(i.e. editing the makefile) is needed. With a central compile there is less incentive to optimize
for only the arch you need, i.e. less need to modify the makefile.

It wasn't my intent to mess up your process with my query. I was just wondering if there
was a technical reason for not including the compute_52 option, even in commented form.

Since compute_52 support doesn't exist in the default cuda version it makes sense not to enable
it by default in the makefile.


tanx for the example code .. i havent tried it with that makefile - im not a developer and only know so much - enough that would get me into a lot of trouble ... hence where we are now Wink ...

what we have here is 'becoming' a homogeneous environment ... this is the intent and the reason why the farm is going through a huge transition from the 'bitsa' system in its current form - to the cloned system homogeneous farm that it will become in the coming months ...

you are completely on par with what the goal is - having a singular carbon copy of EACH machine throughout the farm ...

it is a major pain in the rear in its current incarnation ... different cpu's with different motherboards with different psu's and a mix of usb risers and cabled risers ... mostly with the same video card ...

that is all changing ... which will ( as you mentioned ) make it a far simpler process with only ONE compile and ONE roll out with ONE script across the entirety of the farm in its present state - making it an easier way to expand the farm in the future ... we have just taken delivery of another 40 x identical motherboards which are ready to be built ( including rebuilding the current workers ) into clone workers for the farm ...

the only technical reasoning behind the omission of compute_52 is that it 'broke' the compile ... thats all ... but tomorrow ( as its late here now ) i will make the changes to the makefile and see if it compiles 'normally' without breaking ...

there is a need to get a higher level card ( or three ) for testing purposes - as the entire farm ( apart from the amd 280x cards ) is comprised of the gigabyte 750ti oc lp cards ( including the standard oc powered ones - but they will be sold off soon ) ... so the makefile in that format would make a lot of sense ...

the process we have here is a process put together by me for the 'production' environment and not for the 'development' environment ( which currently is a single machine with 2 x gigabyte 750ti oc cards ) - which makes for a double handling of manual intervention if the compiles break all the time ... corporate SOE has rubbed off a little here Wink ...

so its a major learning process for me - and with all the help that you and the forum has given - is fast becoming a standardized way of compiling and rolling out - albeit not as streamlined as i would personally like it ... yet Smiley ... im still working on that ...

ultimately its about building - compiling - rolling out - streamlining ... all clones of one worker ... all with linux - all with exact same hardware - all with ccminer-spmod ( and sgminer at the moment for the current amd cards ) ...

i guess we will need to wait and see if any of these plans come to light ... im on this full time and daily now - so time is not an issue any more ...

edit - the compile still bombs ... but i am tired and need sleep - so maybe im doing something wrong ... tomorrow will work on it further ...

#crysx
legendary
Activity: 1470
Merit: 1114
hi all ...

has anyone any issues with compiling since sp introduced compute_52 into the makefile? ...

sp - the compile crashes with - nvcc fatal   : Unsupported gpu architecture 'compute_52' - since you introduced the compute_52 arch into git ...

even if you comment out the option in the makefile - the compilation bombs out with - config.status: error: cannot find input file: `Makefile.in' ...

could you 'fix' this please? ...

#crysx

Guess you missed this post....

https://bitcointalksearch.org/topic/m.11475406

If you only have 750s you don't need compute_52. As SP pointed out compute_52 will give
better performance on 9xx cards.

tanx jo ... i understand ...

BUT - when the latest git clone was made and a compile got underway - the compile bombed out ...

editing the Makefile.am and removing that part for compute_52 did nothing more than create more errors concerning a missing Makefile.in ...

i have written a very simple script for the compilation of ccminer-spmod for our systems ...

simple but effective - in that it automates the entire process from cloning to compilation to installation and rebooting of the system to mine with the new miner without manual intervention for each of the machines in the farm ...

unfortunately - when there is an issue with the compilation ( as this has currently ) the process is broken ... which means manual intervention - and i REALLY dont like manual intervention Tongue ...

as per sp's advice - the compilation finishes successfully when the 'older' Makefile.am replaces the new one ( that is still currently in git ) and the latest ccminer gets compiled ...

but that takes intervention on my part for EVERY miner in the farm - and that is a major pain ...

so whether we need compute_52 or not - it exists currently in the Makefile.am when you clone and thus compilation crashes ... this was not an issue prior to the latest change - which made everything run automatically ...

and im a lazy guy - that likes automation Wink ...

#crysx

I see. You want the default makefile to work for your process. So compute_50 should be the default with others
being optional requiring manual intervention. Ex:

Code:
nvcc_ARCH = -gencode=arch=compute_50,code=\"sm_50,compute_50\"
#nvcc_ARCH += -gencode=arch=compute_52,code=\"sm_52,compute_52\"
#nvcc_ARCH += -gencode=arch=compute_35,code=\"sm_35,compute_35\"
#nvcc_ARCH += -gencode=arch=compute_30,code=\"sm_30,compute_30\"

Or you could just install the supported cuda version and your process should work
with the existing default makefile.

I'm curious whether you do separate compiles for each system or just one central
compile. If your environment is homogeneous you could compile once and distribute
the executable to all rigs. This would simplify your process in the event manual intervention
(i.e. editing the makefile) is needed. With a central compile there is less incentive to optimize
for only the arch you need, i.e. less need to modify the makefile.
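
As a rough illustration of the central-compile idea (the hostnames, user and restart command below are placeholders, not anything from this thread):

Code:
# push one centrally built ccminer binary out to every rig
for rig in rig01 rig02 rig03; do
    scp ccminer "miner@${rig}:/usr/local/bin/ccminer"
    # restart however the rig launches its miner; a systemd unit is only an assumption here
    ssh "miner@${rig}" "sudo systemctl restart ccminer"
done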

It wasn't my intent to mess up your process with my query. I was just wondering if there
was a technical reason for not including the compute_52 option, even in commented form.

Since compute_52 support doesn't exist in the default cuda version it makes sense not to enable
it by default in the makefile.
hero member
Activity: 677
Merit: 500
Just now setting a larger mem speed - on quark, hashing is going up...
legendary
Activity: 2912
Merit: 1091
--- ChainWorks Industries ---
hi all ...

has anyone any issues with compiling since sp introduced compute_52 into the makefile? ...

sp - the compile crashes with - nvcc fatal   : Unsupported gpu architecture 'compute_52' - since you introduced the compute_52 arch into git ...

even if you comment out the option in the makefile - the compilation bombs out with - config.status: error: cannot find input file: `Makefile.in' ...

could you 'fix' this please? ...

#crysx

Guess you missed this post....

https://bitcointalksearch.org/topic/m.11475406

If you only have 750s you don't need compute_52. As SP pointed out compute_52 will give
better performance on 9xx cards.

tanx jo ... i understand ...

BUT - when the latest git clone was made and a compile got underway - the compile bombed out ...

editing the Makefile.am and removing that part for compute_52 did nothing more than create more errors concerning a missing Makefile.in ...

i have written a very simple script for the compilation of ccminer-spmod for our systems ...

simple but effective - in that it automates the entire process from cloning to compilation to installation and rebooting of the system to mine with the new miner without manual intervention for each of the machines in the farm ...
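
A minimal sketch of such a clone / compile / install / reboot script; the repo URL, install path and build steps are placeholders rather than crysx's actual script:

Code:
#!/bin/bash
# sketch only: clone, build, install, reboot
set -e
git clone https://github.com/sp-hash/ccminer.git /tmp/ccminer-build
cd /tmp/ccminer-build
./autogen.sh
./configure
make -j"$(nproc)"
# install over the old binary and restart the rig so it comes back up mining
install -m 755 ccminer /usr/local/bin/ccminer
reboot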

unfortunately - when there is an issue with the compilation ( as this has currently ) the process is broken ... which means manual intervention - and i REALLY dont like manual intervention Tongue ...

as per sp's advice - the compilation finishes successfully when the 'older' Makefile.am replaces the new one ( that is still currently in git ) and the latest ccminer gets compiled ...

but that takes intervention on my part for EVERY miner in the farm - and that is a major pain ...

so whether we need compute_52 or not - it exists currently in the Makefile.am when you clone and thus compilation crashes ... this was not an issue prior to the latest change - which made everything run automatically ...

and im a lazy guy - that likes automation Wink ...

#crysx