
Topic: How will this change the world of mining?? GTX 1080 / 1070 - page 41. (Read 134091 times)

sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
[quote]

I think 1.7.1 or something. Can you point me to the proper one to test?
The 31.9 intensity didn't change anything...
[/quote]

The fastest kernels are not open source. What do you get in the lyra2v2 algo?
sr. member
Activity: 438
Merit: 250
Build with 5.0 and 5.2 only:

-gencode arch=compute_50,code=sm_50;-gencode arch=compute_52,code=sm_52

And compile with the latest CUDA 7.5.
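For reference, the semicolon-separated list above is the form used in a Visual Studio project's CUDA settings; on the nvcc command line the same targets are passed as separate flags. A minimal sketch (the .cu file name and output name are placeholders, not taken from the thread):

```shell
# Hypothetical nvcc invocation with the two Maxwell targets above;
# the source and object file names are illustrative only.
GENCODE="-gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52"
echo "nvcc $GENCODE -O3 -c cuda_quark_blake512.cu -o cuda_quark_blake512.o"
```

Dropping the older compute_30/compute_35 targets, as suggested here, shortens build time and lets the compiler assume Maxwell-era register and shared-memory limits.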


ok thanks.
full member
Activity: 174
Merit: 100
As for Decred: 3300 at 180 W TDP.

Nice. A Maxwell can do this (Decred sp-mod #9):

750ti TDP: 38 W
980 TDP: 240 W (79% used) = 189 W

Which kernel did you test?

Try -i 31.9



I think 1.7.1 or something. Can you point me to the proper one to test?
The 31.9 intensity didn't change anything...
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
Build with 5.0 and 5.2 only:

-gencode arch=compute_50,code=sm_50;-gencode arch=compute_52,code=sm_52

And compile with the latest CUDA 7.5.
sr. member
Activity: 438
Merit: 250
sp_, how is it that this 1080 runs ccminer while it is Compute 6.0, but it doesn't run ethminer (error: invalid device symbol during cudaMemcpyToSymbol)? My binaries are built for -gencode arch=compute_30,code=sm_30;-gencode arch=compute_35,code=sm_35;-gencode arch=compute_50,code=sm_50;-gencode arch=compute_52,code=sm_52

sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
As for Decred: 3300 at 180 W TDP.

Nice. A Maxwell can do this (Decred sp-mod #9):

750ti TDP: 38 W
980 TDP: 240 W (79% used) = 189 W

Which kernel did you test?

Try -i 31.9

sr. member
Activity: 438
Merit: 250

Downloading VS 2013 now. I've never compiled before; are there any tips beyond the GitHub instructions on compiling the current master?

Download CUDA 8.0 RC. Oh wait, it isn't out yet. A couple of days max, I guess.
full member
Activity: 174
Merit: 100
The 1080 will cost $700, have 2500 shaders, and be clocked @ 1600 MHz.
A used 980ti can be picked up for $400, has 2760 shaders, and can be overclocked to 1500 MHz stable (Gigabyte G1 Windforce). Quark will draw around 240 W.

The shader count of the GTX 1070 is unknown.

Benchmarking Pascal is easy.

Compile this source code:

https://github.com/tpruvot/ccminer


Run it with

ccminer -a quark --benchmark


And then compare it to my modded Maxwell kernel.

Here are the results on the 980ti: 31.8 MH/s (132.5 kH/s per watt)
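The quoted efficiency figure is consistent with the stated 240 W draw: 31.8 MH/s divided by 240 W is 132.5 kH/s per watt. A quick sanity check:

```shell
# Sanity-check the 980ti efficiency figure: 31.8 MH/s at 240 W.
# awk handles the floating-point division.
awk 'BEGIN { printf "%.1f kH/s per watt\n", 31800 / 240 }'
```

The same division applied to new-card results makes the Maxwell/Pascal comparison a one-liner.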




As I mentioned in the other topic:
1080:
Quark: 32 MH/s at around 170 W
Ethereum: crashes on Genoil 1.0.7 with a "device bit not recognizes" message (something like that)
With the OpenCL Ethereum miner: 12.5 MH/s at... 30 W
Neoscrypt is not optimized: 0.450
What do you suggest to test next?

1070:
Quark was at 24 MH/s at 110 W
The same for Ethereum: 12.5 MH/s at 30 W

Must be nice to be an insider, loaner, or keeper?

The 1070 results don't seem to scale the same as the 1080. Based on the 1080 rate you posted I was
estimating 26 MH/s on the 1070. But the power usage on the 1070 is unexpectedly low compared with
the 970/980 ratio.

Neoscrypt is a curious beast. The original Neoscrypt kernel (DJM34) performs better on Kepler (the 780ti specifically)
than the improved Pallas Neoscrypt kernel, although Pallas's works better on Maxwell, both compiled with CUDA 6.5.
Then the Pallas Neoscrypt took a big hit when compiled with CUDA 7.5. DJM34 took a crack at it and restored much
of the lost hashrate. Now it appears it's taking another hit on Pascal.

I would suggest trying the original DJM34 Neoscrypt (SP_MOD 58) compiled with CUDA 6.5, the Pallas kernel compiled with
6.5 & 7.5, and the improved DJM34 (I think that is what you already tested).

Have you compiled your cpp-ethereum miner (Genoil) yourself?
Same problem in build 0.9 with GTX 9xx cards and prebuilt binaries...
Compiled it myself and everything works.
Downloading VS 2013 now. I've never compiled before; are there any tips beyond the GitHub instructions on compiling the current master?
sr. member
Activity: 420
Merit: 252
The 1080 will cost $700, have 2500 shaders, and be clocked @ 1600 MHz.
A used 980ti can be picked up for $400, has 2760 shaders, and can be overclocked to 1500 MHz stable (Gigabyte G1 Windforce). Quark will draw around 240 W.

The shader count of the GTX 1070 is unknown.

Benchmarking Pascal is easy.

Compile this source code:

https://github.com/tpruvot/ccminer


Run it with

ccminer -a quark --benchmark


And then compare it to my modded Maxwell kernel.

Here are the results on the 980ti: 31.8 MH/s (132.5 kH/s per watt)




As I mentioned in the other topic:
1080:
Quark: 32 MH/s at around 170 W
Ethereum: crashes on Genoil 1.0.7 with a "device bit not recognizes" message (something like that)
With the OpenCL Ethereum miner: 12.5 MH/s at... 30 W
Neoscrypt is not optimized: 0.450
What do you suggest to test next?

1070:
Quark was at 24 MH/s at 110 W
The same for Ethereum: 12.5 MH/s at 30 W

Must be nice to be an insider, loaner, or keeper?

The 1070 results don't seem to scale the same as the 1080. Based on the 1080 rate you posted I was
estimating 26 MH/s on the 1070. But the power usage on the 1070 is unexpectedly low compared with
the 970/980 ratio.

Neoscrypt is a curious beast. The original Neoscrypt kernel (DJM34) performs better on Kepler (the 780ti specifically)
than the improved Pallas Neoscrypt kernel, although Pallas's works better on Maxwell, both compiled with CUDA 6.5.
Then the Pallas Neoscrypt took a big hit when compiled with CUDA 7.5. DJM34 took a crack at it and restored much
of the lost hashrate. Now it appears it's taking another hit on Pascal.

I would suggest trying the original DJM34 Neoscrypt (SP_MOD 58) compiled with CUDA 6.5, the Pallas kernel compiled with
6.5 & 7.5, and the improved DJM34 (I think that is what you already tested).

Have you compiled your cpp-ethereum miner (Genoil) yourself?
Same problem in build 0.9 with GTX 9xx cards and prebuilt binaries...
Compiled it myself and everything works.
sr. member
Activity: 420
Merit: 252
Preordered 2 cards at work for "evaluation": 1 x GTX 1070, 1 x GTX 1080.
Will report when they arrive...
full member
Activity: 174
Merit: 100
Let's say... tester.
As for Neoscrypt: it is 300-450 kH/s depending on the version of ccminer (tested both DJM34 and the latest builds), but that's at 100 W TDP; obviously it will improve.
As for Decred: 3300 at 180 W TDP.
Quark is 35 MH/s with a +150 MHz overclock and around 200 W TDP.

Is this for the 1080? What about the 1070?
I don't have a 1070 at the moment, but the one I tested (the final one can be better)
was 110 W at 24 MH/s on Quark.
I think it will be the best choice for mining... or maybe the 1060; depends on pricing and final performance.
legendary
Activity: 3248
Merit: 1070
Let's say... tester.
As for Neoscrypt: it is 300-450 kH/s depending on the version of ccminer (tested both DJM34 and the latest builds), but that's at 100 W TDP; obviously it will improve.
As for Decred: 3300 at 180 W TDP.
Quark is 35 MH/s with a +150 MHz overclock and around 200 W TDP.

Is this for the 1080? What about the 1070?
full member
Activity: 174
Merit: 100
The 1080 will cost $700, have 2500 shaders, and be clocked @ 1600 MHz.
A used 980ti can be picked up for $400, has 2760 shaders, and can be overclocked to 1500 MHz stable (Gigabyte G1 Windforce). Quark will draw around 240 W.

The shader count of the GTX 1070 is unknown.

Benchmarking Pascal is easy.

Compile this source code:

https://github.com/tpruvot/ccminer


Run it with

ccminer -a quark --benchmark


And then compare it to my modded Maxwell kernel.

Here are the results on the 980ti: 31.8 MH/s (132.5 kH/s per watt)




As I mentioned in the other topic:
1080:
Quark: 32 MH/s at around 170 W
Ethereum: crashes on Genoil 1.0.7 with a "device bit not recognizes" message (something like that)
With the OpenCL Ethereum miner: 12.5 MH/s at... 30 W
Neoscrypt is not optimized: 0.450
What do you suggest to test next?

1070:
Quark was at 24 MH/s at 110 W
The same for Ethereum: 12.5 MH/s at 30 W

Must be nice to be an insider, loaner, or keeper?

The 1070 results don't seem to scale the same as the 1080. Based on the 1080 rate you posted I was
estimating 26 MH/s on the 1070. But the power usage on the 1070 is unexpectedly low compared with
the 970/980 ratio.

Neoscrypt is a curious beast. The original Neoscrypt kernel (DJM34) performs better on Kepler (the 780ti specifically)
than the improved Pallas Neoscrypt kernel, although Pallas's works better on Maxwell, both compiled with CUDA 6.5.
Then the Pallas Neoscrypt took a big hit when compiled with CUDA 7.5. DJM34 took a crack at it and restored much
of the lost hashrate. Now it appears it's taking another hit on Pascal.

I would suggest trying the original DJM34 Neoscrypt (SP_MOD 58) compiled with CUDA 6.5, the Pallas kernel compiled with
6.5 & 7.5, and the improved DJM34 (I think that is what you already tested).

Let's say... tester.
As for Neoscrypt: it is 300-450 kH/s depending on the version of ccminer (tested both DJM34 and the latest builds), but that's at 100 W TDP; obviously it will improve.
As for Decred: 3300 at 180 W TDP.
Quark is 35 MH/s with a +150 MHz overclock and around 200 W TDP.
newbie
Activity: 10
Merit: 0
They are also much faster, up to 50%,
but they are so costly for ETH mining.
sr. member
Activity: 438
Merit: 250

Ethereum: crashes on Genoil 1.0.7 with a "device bit not recognizes" message (something like that)
With the OpenCL Ethereum miner: 12.5 MH/s at... 30 W

Add compute 5.0 and sm 5.0 and rebuild. Then it should work without device messages. The 1.0.7 is built with 5.2 only.

Oh, did I? Well, mining ETH on compute 5.x / Windows is pointless anyway. The 1080 is Compute 6.0, so I don't know what Compute 5.0 would change there.

12.5 MH/s sounds like good ol' TLB thrashing. If there is a Linux or WDDM 1.x driver (Win 7 or 8.1), I would suggest installing that and trying again. Knowing that the 980ti does about 6 MH/s currently while thrashing, we may be in for a surprise.

Check the Bus Interface Load in GPU-Z. It should be close to 0%. When TLB thrashing, it becomes high, around 50-60%.
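One plausible explanation for why adding a 5.x target helps a Compute 6.0 card at all (my assumption, not confirmed anywhere in the thread): `code=sm_XX` embeds only a binary for that exact architecture, while `code=compute_XX` also embeds PTX that the driver can JIT-compile for newer GPUs such as Pascal. A hedged sketch of a flag list that keeps such a PTX fallback:

```shell
# Hypothetical gencode list: SASS binaries for Maxwell plus a PTX fallback
# (code=compute_52) that the driver can JIT-compile for Compute 6.0 (Pascal).
# Native sm_60 targets would additionally require CUDA 8.0.
GENCODE="-gencode arch=compute_50,code=sm_50"
GENCODE="$GENCODE -gencode arch=compute_52,code=sm_52"
GENCODE="$GENCODE -gencode arch=compute_52,code=compute_52"
echo "$GENCODE"
```

Without any PTX in the fatbinary, a new architecture has nothing to JIT from, which would match the "invalid device symbol" failures reported above.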
legendary
Activity: 1498
Merit: 1030
The 1080 will cost $700, have 2500 shaders, and be clocked @ 1600 MHz.
A used 980ti can be picked up for $400, has 2760 shaders, and can be overclocked to 1500 MHz stable (Gigabyte G1 Windforce). Quark will draw around 240 W.


The GTX 1080 was demonstrated overclocking to over 2 GHz at the Nvidia demo, on stock air cooling, in the reference design.
Also, while it has fewer shaders, they're supposed to be more efficient, so it's not a straight-up comparison.

More importantly, some of my web digging over the last few days (slow at work) turned up references to architecture changes that might help it noticeably on Ethereum, as they're targeted specifically at improving random access. They also left the register count per processing unit alone while halving the CUDA units per, effectively doubling the registers per processing unit, which should help a LOT if TLB/register limits are what's holding the higher-end cards back.

Quote

The shadercount of the gtx 1070 is unknown.


2048, per a couple of sites, which matches up well with the comparative benchmarks vs. the 1080, though it is not officially specified so far.




Quote

Ethereum: crashes on Genoil 1.0.7 with a "device bit not recognizes" message (something like that)
With the OpenCL Ethereum miner: 12.5 MH/s at... 30 W
What do you suggest to test next?

1070:
The same for Ethereum: 12.5 MH/s at 30 W


Ethereum using qtminer.
X11, but that's off topic, so post it in "the other thread"?
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer

Ethereum: crashes on Genoil 1.0.7 with a "device bit not recognizes" message (something like that)
With the OpenCL Ethereum miner: 12.5 MH/s at... 30 W

Add compute 5.0 and sm 5.0 and rebuild. Then it should work without device messages. The 1.0.7 is built with 5.2 only.
legendary
Activity: 3248
Merit: 1070
The 1080 will cost $700, have 2500 shaders, and be clocked @ 1600 MHz.
A used 980ti can be picked up for $400, has 2760 shaders, and can be overclocked to 1500 MHz stable (Gigabyte G1 Windforce). Quark will draw around 240 W.

The shader count of the GTX 1070 is unknown.

Benchmarking Pascal is easy.

Compile this source code:

https://github.com/tpruvot/ccminer


Run it with

ccminer -a quark --benchmark


And then compare it to my modded Maxwell kernel.

Here are the results on the 980ti: 31.8 MH/s (132.5 kH/s per watt)




As I mentioned in the other topic:
1080:
Quark: 32 MH/s at around 170 W
Ethereum: crashes on Genoil 1.0.7 with a "device bit not recognizes" message (something like that)
With the OpenCL Ethereum miner: 12.5 MH/s at... 30 W
Neoscrypt is not optimized: 0.450
What do you suggest to test next?

1070:
Quark was at 24 MH/s at 110 W
The same for Ethereum: 12.5 MH/s at 30 W

It lacks optimization. I can't believe that the 1070 does only 12.5 on Ethereum; it must be at least double, if not more. Same for the 1080.

And in fact the consumption is too low, because the GPU is not working at max. Please check the GPU usage.
sr. member
Activity: 294
Merit: 250
While you could definitely see some profit on a few current coins, you'd be missing out on the most profitable one to mine at the moment (Ethereum) since Nvidia cards don't work as well as AMD cards on Ethash. Personally, I'm holding out on buying a new card until I get to see how well AMD's newer offerings mine.

According to AMD, the efficiency improvement of the new cards should be similar to that of Nvidia cards. So we just need to wait two more months.
member
Activity: 85
Merit: 10
"That's just like, your opinion, man."
While you could definitely see some profit on a few current coins, you'd be missing out on the most profitable one to mine at the moment (Ethereum) since Nvidia cards don't work as well as AMD cards on Ethash. Personally, I'm holding out on buying a new card until I get to see how well AMD's newer offerings mine.