
Topic: How will this change the world of mining?? GTX 1080 / 1070 - page 40.

full member
Activity: 174
Merit: 100
I think it's around 15 MH/s.

This is sp-mod open-source release 62 mining lyra2v2 (980ti, 1275 MHz core).


The sp-mod private #5 does 16.2 MH/s at the same clocks.

I'll check 62 later and report; will try to overclock it as well.
I've got 16.3 at 180 W without OC, on build 80.
legendary
Activity: 1050
Merit: 1000
More like 40% more speed at half the power of the current Nvidia cards used in mining, going by the early benchmarks I've been seeing, *IF* their mining potential matches their increased clock rate.

We'll have to see if the changes to the architecture make them more efficient on some coins that current higher-end Nvidia cards do NOT mine efficiently, like Ethereum.



Mining is never compared to gaming performance-wise; it's just impossible to compare them head to head.
sr. member
Activity: 506
Merit: 252
I would actually love to see at what VRAM amount the TLB starts to thrash.

https://bitcointalksearch.org/topic/assessing-the-impact-of-tlb-trashing-on-memory-hard-algorhitms-1268355

With this tool from Genoil you should be able to test it:

https://github.com/Genoil/dagSimCL
legendary
Activity: 3248
Merit: 1070
vaulter, did you try this?

12.5 MH/s sounds like good ol' TLB thrashing. If there is a Linux or WDDM 1.x driver (Win 7 or 8.1), I would suggest installing that and trying again. Knowing that the 980ti currently does about 6 MH/s while thrashing, we may be in for a surprise.

Check Bus Interface Load in GPU-Z. It should be close to 0%. When the TLB is thrashing it becomes high, around 50-60%.
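On Linux, where GPU-Z isn't available, a rough equivalent (my own suggestion, not from the tool above) should be to watch the PCIe throughput counters while the miner runs:

nvidia-smi dmon -s t

The rxpci/txpci columns show PCIe receive/transmit throughput in MB/s; near zero means the DAG is staying in VRAM, while a sustained high number would point to thrashing over the bus.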
full member
Activity: 174
Merit: 100
I think it's around 15 MH/s.

This is sp-mod open-source release 62 mining lyra2v2 (980ti, 1275 MHz core).


The sp-mod private #5 does 16.2 MH/s at the same clocks.

I'll check 62 later and report; will try to overclock it as well.
legendary
Activity: 3248
Merit: 1070
I think there is no point in comparing now; better to wait for a proper driver and proper code before comparing. I can bet my ass that they are way faster and consume less.

It's always like that with a new generation.
legendary
Activity: 1498
Merit: 1030
12.5 isn't impressive at all though - now that I've got my HD 7870 running, it's pulling very close to that (12.05 with occasional bumps up to 12.3) on Ethereum. On the other hand, my best estimate is that it's eating about 100 watts at the wall (a massively overkill Seasonic X1250 (gold) is running the system right now), which isn't competitive.

Pretty sad that a 2 GB, roughly 5-year-old card design is pretty much matching the throughput of Nvidia's latest and greatest - though it could be early days and the pre-release drivers aren't optimised yet.

I was just looking at my local Craigslist; I see a 7870 listed for $90 and a pair of R9 270s (the same card except a single 6-pin connector instead of 2, plus an updated BIOS) for $160. I'm being tempted......

sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
I think it's around 15 MH/s.

This is sp-mod open-source release 62 mining lyra2v2 (980ti, 1275 MHz core).


The sp-mod private #5 does 16.2 MH/s at the same clocks.
legendary
Activity: 2590
Merit: 1022
Leading Crypto Sports Betting & Casino Platform
The hashrate is also good; better to wait for better drivers and optimization. I think they are far above the current gen, imho, but if there is indeed a thrashing problem, I don't mind running a 1070 at 12.5 MH/s on Ethereum while consuming only 30 W. No card can do better than that anyway, so I'm fine in any case, yes.
legendary
Activity: 3808
Merit: 1723
So far the efficiency seems very good; however, going by what Nvidia usually charges for their GPUs, I don't think it will be worth the efficiency upgrade.

But you never know until the final pricing comes out for all the cards, along with the real benchmark results.

full member
Activity: 174
Merit: 100
I think 1.7.1 or something - can you point me to the proper one to test?
31.9 intensity didn't change anything..

The fastest kernels are not open source. What do you get in the lyra2v2 algo?

I think it's around 15 MH/s.
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
I think 1.7.1 or something - can you point me to the proper one to test?
31.9 intensity didn't change anything..

The fastest kernels are not open source. What do you get in the lyra2v2 algo?
sr. member
Activity: 438
Merit: 250
Build with 5.0 and 5.2 only:

-gencode arch=compute_50,code=sm_50;-gencode arch=compute_52,code=sm_52

And compile with the latest CUDA 7.5.


Ok, thanks.
full member
Activity: 174
Merit: 100
As for decred - 3300 at 180 W TDP.

Nice. The Maxwell can do this (Decred sp-mod #9):

750ti TDP: 38 W
980 TDP: 240 W (79% used) = 189 W

Which kernel did you test?

Try -i 31.9



I think 1.7.1 or something - can you point me to the proper one to test?
31.9 intensity didn't change anything..
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
Build with 5.0 and 5.2 only:

-gencode arch=compute_50,code=sm_50;-gencode arch=compute_52,code=sm_52

And compile with the latest CUDA 7.5.
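For reference, a bare nvcc line following that advice might look like this (the file names here are just placeholders, not the actual ccminer source layout):

nvcc -O3 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -c some_kernel.cu -o some_kernel.o

In the VS2013 project the same two pairs go into the CUDA C/C++ -> Code Generation property, written as compute_50,sm_50;compute_52,sm_52, if I remember the field right.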
sr. member
Activity: 438
Merit: 250
What is it, sp_, that makes this 1080 run ccminer while it is Compute 6.0, but it doesn't run ethminer (error: invalid device symbol during cudaMemcpyToSymbol)? My bins are built for -gencode arch=compute_30,code=sm_30;-gencode arch=compute_35,code=sm_35;-gencode arch=compute_50,code=sm_50;-gencode arch=compute_52,code=sm_52
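My guess: "invalid device symbol" from cudaMemcpyToSymbol usually means the binary contains no SASS or PTX the new device can load, so the symbol never gets registered. Once a toolkit that knows Pascal is out (8.0), the fix should be adding something like

-gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61

(whichever compute capability the card actually reports); on CUDA 7.5 the best you can do is also embed recent PTX, e.g. -gencode arch=compute_52,code=compute_52, and let the driver JIT it.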

sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
As for decred - 3300 at 180 W TDP.

Nice. The Maxwell can do this (Decred sp-mod #9):

750ti TDP: 38 W
980 TDP: 240 W (79% used) = 189 W

Which kernel did you test?

Try -i 31.9
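For a quick apples-to-apples number I'd run the built-in benchmark at that intensity (assuming the decred algo name in this fork is unchanged):

ccminer -a decred -i 31.9 --benchmark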

sr. member
Activity: 438
Merit: 250

Downloading VS 2013 now - I've never compiled before - are there any tips beyond the GitHub instructions on compiling the current master?

Download CUDA 8.0 RC. Oh wait, it isn't out yet. Cool. A couple of days max, I guess.
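For what it's worth, the usual route on Windows (assuming tpruvot's tree still ships the VS2013 solution file) is roughly:

git clone https://github.com/tpruvot/ccminer

then install the CUDA 7.5 toolkit, open ccminer.sln in VS2013, pick the Release / x64 configuration and build. The -gencode targets live in the project's CUDA Code Generation property if you need to change them.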
full member
Activity: 174
Merit: 100
The 1080 will cost $700, have 2500 shaders and be clocked at 1600 MHz.
A used 980ti can be picked up for $400, has 2760 shaders and can be overclocked to 1500 MHz stable (Gigabyte G1 Windforce). Quark will draw around 240 W.

The shader count of the GTX 1070 is unknown.

Benchmarking Pascal is easy:

compile this source code

https://github.com/tpruvot/ccminer


run it with

ccminer -a quark --benchmark


And then compare it to my modded Maxwell kernel.

Here are the results on the 980ti: 31.8 MH/s (132.5 kH/s per watt)
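(If you are on Linux rather than Visual Studio, a minimal sketch - assuming the build scripts that ship in tpruvot's repo - would be:

git clone https://github.com/tpruvot/ccminer
cd ccminer
./build.sh
./ccminer -a quark --benchmark

where build.sh simply wraps ./autogen.sh, ./configure and make.)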




As I mentioned in the other topic,
1080:
Quark - 32 MH/s at around 170 W
Ethereum - crashes on Genoil 1.0.7 with a "device bit not recognizes" message (something like that)
With the OpenCL Ethereum miner - 12.5 MH/s at... 30 W
Neoscrypt is not optimized - 0.450
What do you suggest to test next?

1070:
Quark was at 24 MH/s at 110 W
The same for Ethereum - 12.5 MH/s at 30 W

Must be nice to be an insider, loaner or keeper?

The 1070 results don't seem to scale the same as the 1080. Based on the 1080 rate you posted, I was estimating 26 MH/s on the 1070. But the power usage on the 1070 is unexpectedly low compared with the 970/980 ratio.

Neoscrypt is a curious beast. The original neoscrypt kernel (DJM34) performs better on Kepler (the 780ti specifically) than the improved Pallas neoscrypt kernel, although Pallas's works better on Maxwell, both compiled with CUDA 6.5. Then the Pallas neoscrypt took a big hit when compiled with CUDA 7.5. DJM34 took a crack at it and restored much of the lost hash. Now it appears it's taking another hit on Pascal.

I would suggest trying the original DJM34 neoscrypt (SP_MOD 58) compiled with CUDA 6.5, the Pallas kernel compiled with 6.5 & 7.5, and the improved DJM34 (I think that is what you already tested).

Have you compiled your cpp-ethereum miner (Genoil) yourself?
Same problem in build 0.9 with GTX 9xx cards and the prebuilt binaries...
Compiled it myself and everything works.
Downloading VS 2013 now - I've never compiled before - are there any tips beyond the GitHub instructions on compiling the current master?
sr. member
Activity: 420
Merit: 252
The 1080 will cost $700, have 2500 shaders and be clocked at 1600 MHz.
A used 980ti can be picked up for $400, has 2760 shaders and can be overclocked to 1500 MHz stable (Gigabyte G1 Windforce). Quark will draw around 240 W.

The shader count of the GTX 1070 is unknown.

Benchmarking Pascal is easy:

compile this source code

https://github.com/tpruvot/ccminer


run it with

ccminer -a quark --benchmark


And then compare it to my modded Maxwell kernel.

Here are the results on the 980ti: 31.8 MH/s (132.5 kH/s per watt)




As I mentioned in the other topic,
1080:
Quark - 32 MH/s at around 170 W
Ethereum - crashes on Genoil 1.0.7 with a "device bit not recognizes" message (something like that)
With the OpenCL Ethereum miner - 12.5 MH/s at... 30 W
Neoscrypt is not optimized - 0.450
What do you suggest to test next?

1070:
Quark was at 24 MH/s at 110 W
The same for Ethereum - 12.5 MH/s at 30 W

Must be nice to be an insider, loaner or keeper?

The 1070 results don't seem to scale the same as the 1080. Based on the 1080 rate you posted, I was estimating 26 MH/s on the 1070. But the power usage on the 1070 is unexpectedly low compared with the 970/980 ratio.

Neoscrypt is a curious beast. The original neoscrypt kernel (DJM34) performs better on Kepler (the 780ti specifically) than the improved Pallas neoscrypt kernel, although Pallas's works better on Maxwell, both compiled with CUDA 6.5. Then the Pallas neoscrypt took a big hit when compiled with CUDA 7.5. DJM34 took a crack at it and restored much of the lost hash. Now it appears it's taking another hit on Pascal.

I would suggest trying the original DJM34 neoscrypt (SP_MOD 58) compiled with CUDA 6.5, the Pallas kernel compiled with 6.5 & 7.5, and the improved DJM34 (I think that is what you already tested).

Have you compiled your cpp-ethereum miner (Genoil) yourself?
Same problem in build 0.9 with GTX 9xx cards and the prebuilt binaries...
Compiled it myself and everything works.