Topic: CCminer(SP-MOD) Modded NVIDIA Maxwell / Pascal kernels. - page 29. (Read 2347632 times)

legendary
Activity: 2716
Merit: 1094
Black Belt Developer
The comparison of CUDA cores between different architectures worked well enough for Maxwell vs Pascal, at least until the miner devs figured out how to get the best out of the latter, and even then it wasn't a big difference. The same will be true for Turing (if there is any difference at all).
Nvidia has been focusing on improving things other than integer math for many years now.
legendary
Activity: 1764
Merit: 1024



Also Jugger will never be happy.

I have nothing against him. If he just put facts out there instead of conjecture and wild statements that have no basis in reality, I wouldn't be so harsh on him. He's a dev, I'm a miner, I have a predisposition to like him, but just when I start to soften up to him he turns into Trump and spews a bunch of BS smoke-and-mirrors type shit. I'm just looking for honesty... like posting a 3-second start-up comparison; he does stuff like that constantly, and I just feel he's fishin' for nubs for fees. The reason most devs make it hard to know the real hashrate stats is because most devs know that next week another miner will be out there beating their stats. I hold no ill will towards him though.

Yeah, that's BS. You, much like a few other people here, are just looking for an axe to grind because profits are shitty right now and you guys bought into crypto at one of the worst times to start mining in the history of mining.

It sucks, yes, right now really sucks, especially considering the amount of dark hash on the Euro side that's waiting for an increase in revenue (which means no real increase in revenue), but go take your shit elsewhere. Objective comparisons with photos and no bias are welcome. Constant shit talking because you're butt-hurt about your current finances is not.

sp, do us a favor: buy a 2080 Ti and do some tests on all algos, maybe we get good perf on other non-ETH algos
We will definitely get better speed on the GTX 2080 Ti because of the wider memory bus and more CUDA cores. And it will be at lower power. The question is how big the speed-up will be.

A raw estimate can be made by comparing the number of CUDA cores (only for low-memory-usage algos).
Power usage (at the same clock and number of CUDA cores) should be 10-15% less.

You can only compare CUDA cores between cards of the same architecture; you can't directly compare between different architectures. Current pre-release discussion points to the 20x series being 50% faster than its last-gen counterpart. So basically a 1080 Ti is a 2080, only the 2080 runs at 215 W instead of 250 W, so roughly 15% more efficient for the same hash (250 / 215 ≈ 1.16). Currently it's cheaper to buy 1080 Tis and that will probably remain so for some time, especially buying second hand off eBay.

The memory bus only affects coins that are memory-hard (Dagger/Cryptonight/etc.), and the only way to tell how much of a difference that makes is after getting the cards in hand, and even then it'll take a few months before developers figure out what they can exploit. For instance, Vega was out for 4-5 months before they figured out how to make it do work in Cryptonight; the 1080/Ti was out a year and a half. It's entirely possible the new cards will perform quite poorly in memory-hard algos until the kinks are ironed out, as happened with GDDR5X.

The rush for the 20xx series is like January all over again, with really shitty revenue.

All the more proof that if you bought into mining this January or later, you made a reallllly bad life choice.
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
CUDA 9.2 performs better with more registers. You need launch bounds on your kernels. In my open source I have changed the register usage in the project file (Visual Studio). These adjustments are not added to the Linux Makefile.
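For anyone who hasn't used launch bounds before, here is a minimal sketch of what that looks like on a CUDA kernel. This is not sp_'s actual kernel code; the kernel body, the TPB value and the min-blocks hint are placeholders for illustration only.

Code:
#include <stdint.h>

#define TPB 256  /* threads per block -- placeholder value, tune per algo/GPU */

/* __launch_bounds__(TPB, 2) tells nvcc the kernel is launched with at most
 * TPB threads per block and should fit at least 2 resident blocks per SM.
 * That caps the registers each thread may use -- the same knob as tuning
 * register usage in the Visual Studio project / Makefile, but per kernel
 * instead of per file. */
__global__ void __launch_bounds__(TPB, 2)
example_hash_kernel(uint32_t threads, uint32_t *d_hash)
{
    const uint32_t thread = blockDim.x * blockIdx.x + threadIdx.x;
    if (thread < threads)
    {
        /* the real per-nonce hashing work would go here */
        d_hash[thread] ^= thread;
    }
}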
legendary
Activity: 2716
Merit: 1094
Black Belt Developer
Then you compile with cuda 9.1
Is ccminer built with CUDA 9.1 faster than ccminer built with CUDA 9.2? Or does it depend on the algo/hardware?

I did some tests and 9.2 is slightly faster on most algos.
legendary
Activity: 1106
Merit: 1014
Then you compile with cuda 9.1
Is ccminer built with CUDA 9.1 faster than ccminer built with CUDA 9.2? Or does it depend on the algo/hardware?
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
and these are the words of a developer who has studied all of SP's work and the other miners

I have 1300 commits to the ccminer open source project on GitHub (2014-2018). What have you done for the community besides throwing shit in my thread? The stratum code in ccminer needs some work. My miner isn't performing well when the latency to the pool is high. This is a known issue from ccminer 2.2.4 and ccminer 1.0 alexis: "job not found" rejected shares, etc. The fee miners have improved the stratum code and gained a few percent more pool shares with a lower hashrate (GPU speed).

Points to work on (a rough sketch of what this could look like follows the list):

1. Never disconnect from the pool.
2. Reduce the sleep time.
3. Clean up the blocking thread code.
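To make points 1-3 concrete, a rough sketch of the shape such a stratum loop could take. This is not ccminer's actual stratum thread; stratum_connect() and stratum_handle_line() are placeholder names and the timings are made up.

Code:
#include <stdbool.h>
#include <unistd.h>

extern bool stratum_connect(void);      /* placeholder: (re)open the socket   */
extern bool stratum_handle_line(void);  /* placeholder: read/dispatch one msg */

void stratum_loop(void)
{
    int backoff_ms = 100;
    for (;;) {
        if (!stratum_connect()) {
            usleep(backoff_ms * 1000);      /* short back-off, not sleep(30)   */
            if (backoff_ms < 2000)
                backoff_ms *= 2;
            continue;                       /* keep retrying, never give up    */
        }
        backoff_ms = 100;
        while (stratum_handle_line())       /* returns false only on a socket  */
            ;                               /* error, not on a slow or old job */
    }
}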

Then you compile with cuda 9.1, rewrite all the output to the command line, add a fee, make up a stupid name and publish the new "faster" miner with my open source GPU kernel code inside... You might need to add a couple of sock puppet accounts here on bitcointalk, make a Discord channel, buy some positive reviews, and then you are in business.. May the fee be with you. :-)
legendary
Activity: 1106
Merit: 1014
12nm production vs 14nm for Pascal. Less heat and higher clocks should be possible.
AFAIK every Pascal card worth mining with is 16nm. Only low-end 1030s and 1050s are 14nm, while 1060 and anything above that is 16nm.
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
A raw estimate can be made by comparing the number of CUDA cores (only for low-memory-usage algos).
Power usage (at the same clock and number of CUDA cores) should be 10-15% less.

12nm production vs 14nm for Pascal. Less heat and higher clocks should be possible. X16r/x16s/X17/C11 etc. should be around 50% faster than on the Pascal cards. My guess: 2080 Ti +50%.
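As a rough worked example of that core-count estimate, taking the rumoured Turing specs at face value: the 2080 Ti is listed with 4352 CUDA cores versus 3584 on the 1080 Ti, so 4352 / 3584 ≈ 1.21, i.e. about +21% from core count alone, before any clock or architecture gains are counted.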
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
@sp_
code is not compiling due to an error

I have submitted a fix @ github
legendary
Activity: 2716
Merit: 1094
Black Belt Developer
sp, do us a favor: buy a 2080 Ti and do some tests on all algos, maybe we get good perf on other non-ETH algos
We will definitely get better speed on the GTX 2080 Ti because of the wider memory bus and more CUDA cores. And it will be at lower power. The question is how big the speed-up will be.

A raw estimate can be made by comparing the number of CUDA cores (only for low-memory-usage algos).
Power usage (at the same clock and number of CUDA cores) should be 10-15% less.
sr. member
Activity: 954
Merit: 250
sp, do us a favor: buy a 2080 Ti and do some tests on all algos, maybe we get good perf on other non-ETH algos
We will definitely get better speed on the GTX 2080 Ti because of the wider memory bus and more CUDA cores. And it will be at lower power. The question is how big the speed-up will be.
legendary
Activity: 3248
Merit: 1070
sp, do us a favor: buy a 2080 Ti and do some tests on all algos, maybe we get good perf on other non-ETH algos
newbie
Activity: 46
Merit: 0
@sp_

code is not compiling due to an error

Code:
deepongi@infected-MS-7996:~/Downloads/suprminer-master$ ./build.sh
make: *** No rule to make target 'distclean'.  Stop.
clean
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking whether to enable maintainer-specific portions of Makefiles... no
checking for style of include used by make... GNU
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking whether gcc understands -c and -o together... yes
checking dependency style of gcc... gcc3
checking for gcc option to accept ISO C99... none needed
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking whether gcc needs -traditional... no
checking dependency style of gcc... gcc3
checking for ranlib... ranlib
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking dependency style of g++... gcc3
checking for gcc option to support OpenMP... -fopenmp
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking sys/endian.h usability... no
checking sys/endian.h presence... no
checking for sys/endian.h... no
checking sys/param.h usability... yes
checking sys/param.h presence... yes
checking for sys/param.h... yes
checking syslog.h usability... yes
checking syslog.h presence... yes
checking for syslog.h... yes
checking for sys/sysctl.h... yes
checking whether be32dec is declared... no
checking whether le32dec is declared... no
checking whether be32enc is declared... no
checking whether le32enc is declared... no
checking for size_t... yes
checking for working alloca.h... yes
checking for alloca... yes
checking for getopt_long... yes
checking for json_loads in -ljansson... yes
checking for pthread_create in -lpthread... yes
checking for gzopen in -lz... yes
checking for SSL_free in -lssl... yes
checking for EVP_DigestFinal_ex in -lcrypto... yes
checking for gawk... (cached) gawk
checking for curl-config... /usr/bin/curl-config
checking for the version of libcurl... 7.58.0
checking for libcurl >= version 7.15.2... yes
checking whether libcurl is usable... yes
checking for curl_free... yes
checking that generated files are newer than configure... done
configure: creating ./config.status
config.status: creating Makefile
config.status: creating compat/Makefile
config.status: creating compat/jansson/Makefile
config.status: creating ccminer-config.h
config.status: executing depfiles commands
make  all-recursive
make[1]: Entering directory '/home/deepongi/Downloads/suprminer-master'
Making all in compat
make[2]: Entering directory '/home/deepongi/Downloads/suprminer-master/compat'
make[3]: Entering directory '/home/deepongi/Downloads/suprminer-master/compat'
make[3]: Nothing to be done for 'all-am'.
make[3]: Leaving directory '/home/deepongi/Downloads/suprminer-master/compat'
make[2]: Leaving directory '/home/deepongi/Downloads/suprminer-master/compat'
make[2]: Entering directory '/home/deepongi/Downloads/suprminer-master'
gcc -DHAVE_CONFIG_H -I.  -fopenmp  -pthread -fno-strict-aliasing  -I/usr/local/cuda/include -DUSE_WRAPNVML    -g -O2 -MT ccminer-crc32.o -MD -MP -MF .deps/ccminer-crc32.Tpo -c -o ccminer-crc32.o `test -f 'crc32.c' || echo './'`crc32.c
gcc -DHAVE_CONFIG_H -I.  -fopenmp  -pthread -fno-strict-aliasing  -I/usr/local/cuda/include -DUSE_WRAPNVML    -g -O2 -MT ccminer-hefty1.o -MD -MP -MF .deps/ccminer-hefty1.Tpo -c -o ccminer-hefty1.o `test -f 'hefty1.c' || echo './'`hefty1.c
g++ -DHAVE_CONFIG_H -I.  -fopenmp  -pthread -fno-strict-aliasing  -I/usr/local/cuda/include -DUSE_WRAPNVML    -O3 -march=native -D_REENTRANT -falign-functions=16 -falign-jumps=16 -falign-labels=16 -MT ccminer-ccminer.o -MD -MP -MF .deps/ccminer-ccminer.Tpo -c -o ccminer-ccminer.o `test -f 'ccminer.cpp' || echo './'`ccminer.cpp
g++ -DHAVE_CONFIG_H -I.  -fopenmp  -pthread -fno-strict-aliasing  -I/usr/local/cuda/include -DUSE_WRAPNVML    -O3 -march=native -D_REENTRANT -falign-functions=16 -falign-jumps=16 -falign-labels=16 -MT ccminer-pools.o -MD -MP -MF .deps/ccminer-pools.Tpo -c -o ccminer-pools.o `test -f 'pools.cpp' || echo './'`pools.cpp
mv -f .deps/ccminer-crc32.Tpo .deps/ccminer-crc32.Po
mv -f .deps/ccminer-hefty1.Tpo .deps/ccminer-hefty1.Po
g++ -DHAVE_CONFIG_H -I.  -fopenmp  -pthread -fno-strict-aliasing  -I/usr/local/cuda/include -DUSE_WRAPNVML    -O3 -march=native -D_REENTRANT -falign-functions=16 -falign-jumps=16 -falign-labels=16 -MT ccminer-util.o -MD -MP -MF .deps/ccminer-util.Tpo -c -o ccminer-util.o `test -f 'util.cpp' || echo './'`util.cpp
g++ -DHAVE_CONFIG_H -I.  -fopenmp  -pthread -fno-strict-aliasing  -I/usr/local/cuda/include -DUSE_WRAPNVML    -O3 -march=native -D_REENTRANT -falign-functions=16 -falign-jumps=16 -falign-labels=16 -MT ccminer-bench.o -MD -MP -MF .deps/ccminer-bench.Tpo -c -o ccminer-bench.o `test -f 'bench.cpp' || echo './'`bench.cpp
util.cpp: In function 'void print_hash_tests()':
util.cpp:2328:35: error: too many arguments to function 'void x16r_hash(void*, const void*)'
  x16r_hash(&hash[0], &buf[0],false);
                                   ^
In file included from util.cpp:36:0:
miner.h:940:6: note: declared here
 void x16r_hash(void *output, const void *input);
      ^~~~~~~~~
mv -f .deps/ccminer-pools.Tpo .deps/ccminer-pools.Po
Makefile:1864: recipe for target 'ccminer-util.o' failed
make[2]: *** [ccminer-util.o] Error 1
make[2]: *** Waiting for unfinished jobs....
mv -f .deps/ccminer-bench.Tpo .deps/ccminer-bench.Po
mv -f .deps/ccminer-ccminer.Tpo .deps/ccminer-ccminer.Po
make[2]: Leaving directory '/home/deepongi/Downloads/suprminer-master'
Makefile:2262: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory '/home/deepongi/Downloads/suprminer-master'
Makefile:680: recipe for target 'all' failed
make: *** [all] Error 2
deepongi@infected-MS-7996:~/Downloads/suprminer-master$


Tried on Ubuntu and Arch Linux.

This seems to be the error in the code:

Code:
util.cpp: In function 'void print_hash_tests()':
util.cpp:2328:35: error: too many arguments to function 'void x16r_hash(void*, const void*)'
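sp_ says above that a fix has been pushed to github; I haven't checked what it actually is, but the error itself is just a mismatch between the call in util.cpp (three arguments) and the prototype in miner.h (two). Either of the following hypothetical changes would make it compile; the third parameter name is a guess:

Code:
/* Option A: update the prototype in miner.h to match the call site
 * (assuming the implementation really does take a third flag):        */
void x16r_hash(void *output, const void *input, bool benchmark);

/* Option B: otherwise, drop the extra argument at util.cpp:2328:      */
x16r_hash(&hash[0], &buf[0]);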
member
Activity: 392
Merit: 27
http://radio.r41.ru
Why do you think other developers do not put their work on GitHub??? It's protection from SP, who only takes other people's work, cranks up the intensity and sells it as his own. That's why it's a negative. And these are the words of a developer who has studied all of SP's work and the other miners.
jr. member
Activity: 213
Merit: 3



Also Jugger will never be happy.

I have nothing against him. If he just put facts out there instead of conjecture and wild statements that have no basis in reality, I wouldn't be so harsh on him. He's a dev, I'm a miner, I have a predisposition to like him, but just when I start to soften up to him he turns into Trump and spews a bunch of BS smoke-and-mirrors type shit. I'm just looking for honesty... like posting a 3-second start-up comparison; he does stuff like that constantly, and I just feel he's fishin' for nubs for fees. The reason most devs make it hard to know the real hashrate stats is because most devs know that next week another miner will be out there beating their stats. I hold no ill will towards him though.
member
Activity: 392
Merit: 27
http://radio.r41.ru
Previously, people bought his miner because there was no choice. Now there is more choice, and the miner from SP loses on all counts. You're looking at the hashrate the console draws, and you can draw anything; in real profit in coins, SP loses.
This has already been proven, and not by just one person: there are addresses where miners from SP_, z-enemy and t-rex all worked.
The highest hashrate was from SP_, and the largest number of coins came from z-enemy.
PLEASE LEAVE THE THREAD--

Nobody has proven anything about whose miner is best.  You just post your negative opinions and spite.  You talk like you've done crypto since day one, but your account is practically new.  It does not add up.       --scryptr
My old account was hacked and stolen, so I have to write from this one.
And what else am I supposed to do? The man sells his miner for 0.05 BTC and on top of that there is a 2% devfee.
I warn other users not to buy the miner from him.
legendary
Activity: 1797
Merit: 1028
Previously, people bought his miner because there was no choice. Now there is more choice, and the miner from SP loses on all counts. You're looking at the hashrate the console draws, and you can draw anything; in real profit in coins, SP loses.
This has already been proven, and not by just one person: there are addresses where miners from SP_, z-enemy and t-rex all worked.
The highest hashrate was from SP_, and the largest number of coins came from z-enemy.
PLEASE LEAVE THE THREAD--

Nobody has proven anything about whose miner is best.  You just post your negative opinions and spite.  You talk like you've done crypto since day one, but your account is practically new.  It does not add up.       --scryptr
member
Activity: 392
Merit: 27
http://radio.r41.ru
Previously, people bought his miner because there was no choice. Now there is more choice, and the miner from SP loses on all counts. You're looking at the hashrate the console draws, and you can draw anything; in real profit in coins, SP loses.
This has already been proven, and not by just one person: there are addresses where miners from SP_, z-enemy and t-rex all worked.
The highest hashrate was from SP_, and the largest number of coins came from z-enemy.
legendary
Activity: 1764
Merit: 1024
Based on early whispers I'm hearing, the 20xx series isn't that great for crypto or games ('great' being infinitely faster than Pascal). It's still faster, but not by nearly as much as Nvidia wants you to think; it's mainly about raytracing. So unless that can be repurposed, the newer GPUs aren't that interesting or more power efficient.

I'm keeping an eye on eBay, and people are starting to dump their GPUs. Depending on early benchmarks for the 20xx series, it might be worth snatching these up if you're playing the long game.

SP, I understand this is an epic thread, but you should consider making a Discord channel. However, it would probably be overrun with trolls.


Also Jugger will never be happy.
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
The miners can be downloaded and you can perform the tests yourself.
In my short test, they were run at default intensities. Higher intensities don't always produce a better result. If you mine on a multipool with rapid block switching, you need low intensities. If you mine on a pool with one coin and a long block time, you can increase the intensity. The alexis miner cannot run at intensity 24 because it uses more memory than the spmod-git. t-rex doesn't seem to produce a better result at -i 24 either, rather worse. CUDA 9.1 might be better than CUDA 9.2. Build it yourself, find the best settings and share them.
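For anyone trying this, a couple of hedged example invocations (the pool URL and wallet are placeholders; -a, -o, -u, -p and -i are the standard ccminer options):

Code:
# multipool with rapid block switching: keep the intensity modest
./ccminer -a x16r -i 20 -o stratum+tcp://pool.example.com:3636 -u YOUR_WALLET -p x

# single-coin pool with long block times: push the intensity up and compare
./ccminer -a x16r -i 22 -o stratum+tcp://pool.example.com:3636 -u YOUR_WALLET -p x

In most ccminer forks -i N launches roughly 2^N threads per scan, so each +1 doubles the batch size and the memory the kernels touch, which is why the memory-heavier builds fall over at -i 24.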