
Topic: SILENTARMY v5: Zcash miner, 115 sol/s on R9 Nano, 70 sol/s on GTX 1070 - page 32. (Read 209309 times)

sr. member
Activity: 652
Merit: 266
FYI - latest eXtremal kernel and params utterly crash my AMD systems over a period of time.  About half drop to 0 Sol/sec and the other half become unresponsive and require hard AC cycle.

It also won't function at all on gtx970.

The kernel is more memory intensive; if you have a VDROP+ modded BIOS, some cards may not handle it well.
Two of my RX 480s are behaving very well, but the other two constantly crash. Although the cards are all the same "Sapphire Nitro RX480 8GB OC", they had different BIOS revisions. I bought them in pairs, and the first pair seems to handle the VDROP+ BIOS while the second keeps crashing. Even with an unmodded BIOS the cards do 95 Sol/s vs 105 Sol/s modded.

newbie
Activity: 36
Merit: 0
FYI - latest eXtremal kernel and params utterly crash my AMD systems over a period of time.  About half drop to 0 Sol/sec and the other half become unresponsive and require hard AC cycle.

It also won't function at all on gtx970.
legendary
Activity: 3248
Merit: 1070
eXtremal's latest kernel was merged into my Windows port. Thanks a lot!

https://github.com/zawawawa/silentarmy

Really good small improvement each time. Now I'm doing 100 Sol/s per GPU at stock without overclock, actually with a heavy underclock on the mem.

I think we still have plenty of room, because a 1070 should at the very least reach 150, but I suspect 180 can be achieved too.

Does it mean the memory speed is not so important for the new implementation, so the 1080 could be better?

It is, and the 1080 uses a different type of memory (GDDR5X) than the 1070; that's why it is worse at hashing.

But apparently the theoretical maximum for NVidia on Equihash should be higher than AMD's.
full member
Activity: 730
Merit: 102
Trphy.io
eXtremal's latest kernel was merged into my Windows port. Thanks a lot!

https://github.com/zawawawa/silentarmy

Really good small improvement each time. Now I'm doing 100 Sol/s per GPU at stock without overclock, actually with a heavy underclock on the mem.

I think we still have plenty of room, because a 1070 should at the very least reach 150, but I suspect 180 can be achieved too.

Does it mean the memory speed is not so important for the new implementation, so the 1080 could be better?
newbie
Activity: 16
Merit: 0
Updated cuda port with latest changes.

Code:
https://github.com/krnlx/nheqminer

The previous release had a memory leak. I checked on Ubuntu 16.04 and Win10.
legendary
Activity: 3248
Merit: 1070
eXtremal's latest kernel was merged into my Windows port. Thanks a lot!

https://github.com/zawawawa/silentarmy

Really good small improvement each time. Now I'm doing 100 Sol/s per GPU at stock without overclock, actually with a heavy underclock on the mem.

I think we still have plenty of room, because a 1070 should at the very least reach 150, but I suspect 180 can be achieved too.
full member
Activity: 243
Merit: 105
Wait next +10% in 12h, then 2 days pause before major release (~+50% ).
Not 10% :( Only 7-8% for NVidia (AMD may be +1% :) ):
http://coinsforall.io/distr/input.cl
for AMD: http://coinsforall.io/distr/param.h.amd
for NVidia: http://coinsforall.io/distr/param.h.nvidia

Stock GTX1070 - 93 Sol/s now.

Code:
#define LDS_COLL_SIZE (NR_SLOTS * 20 * (64 / THREADS_PER_ROW))

seems to be better for 1070
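For context, the local-memory footprint implied by that define can be estimated offline. A minimal sketch, where the NR_SLOTS and THREADS_PER_ROW values are illustrative placeholders and not SILENTARMY's actual settings:

```python
# Rough estimate of the LDS collision-buffer footprint implied by
#   #define LDS_COLL_SIZE (NR_SLOTS * 20 * (64 / THREADS_PER_ROW))
# NR_SLOTS and THREADS_PER_ROW below are hypothetical values only.
def lds_coll_entries(nr_slots, threads_per_row):
    return nr_slots * 20 * (64 // threads_per_row)

entries = lds_coll_entries(684, 64)   # placeholder parameters
print(entries)                        # 13680 entries
print(entries * 4 / 1024, "KiB")      # bytes used, assuming 4-byte entries
```

Tweaking the `20` coefficient trades collision-buffer capacity against local memory per workgroup, which is presumably why the sweet spot differs between a 1070 and AMD cards.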
legendary
Activity: 1708
Merit: 1000
Solarcoin.org
Getting on average 70 Sol/s per card with MSI 7950s.
legendary
Activity: 1274
Merit: 1000
Quote
Scrypt died for GPUs when the Gridseed and later ASICs showed up for it - had nothing to do with AMD vs Nvidia.
 I wasn't around early enough for the Bitcoin GPU days but it appears that the same thing happened there.


I started mining when you could still get bitcoins with GPUs for a short while on slushpool (https://slushpool.com/accounts/login/?next=/dashboard/), on his old pool, about the time Avalon killed it. Avalon made the first ASIC miner, then the Ant S1 and the ASIC USB sticks came, and they cost a ton to buy. Then GPU mining moved to scrypt coins, and Gridseed was the first scrypt ASIC miner, until that type of miner killed off GPUs again. Then it went to X11-type coins, here https://www.ltcrabbit.com and some others, and then the new X11-type ASIC miners hit. Now look at GPU mining; I thought it would die, but not yet, and maybe never - they won't let it, with good reason. Anyone remember that guy on YouTube that made a milk-crate rig for Litecoin mining? https://www.youtube.com/watch?v=fzK-twEBKF4 If you could get over 1 MH (not 10 MH, my bad) you were a badass. Fun times and days!
sr. member
Activity: 449
Merit: 251
The RX 480 is a tossup with the GTX 1070 on memory access - which is why they're comparable at best in performance on algorithms like the ones ZEC and ETH use, despite the GTX 1070 costing almost twice as much.

Nvidia gets the "scraps" on mining because most mining algorithms don't use most of the parts of an NVidia card that make it competitive with AMD cards on general or compute-bound usage at a given price point, and as a result few folks use NVidia cards to mine, which makes them a much lower priority for development.

 It's not "lack of development" ALONE that keeps Nvidia uncompetitive on a hash/$ basis for ETH and ZEC (and derivatives using the same algorithms).
 It's the inherent design of the ALGORITHMS that keeps NVidia uncompetitive on a hash/$ basis, coupled with the higher PRICE of their cards that have competitive memory access, even when development IS mature.

It's waaaaay too early to call this based on memory bus width. There is a lot of theorycrafting and it's all based on current hashrates and extrapolating against the original CPU miner code, not GPU optimized code, and not code made specifically for Nvidia hardware.

The only algo that doesn't fully utilize a 1070 is Dagger (Ethereum), which I've mentioned before. That has led to a misconception about the capabilities of a 1070... see your post. There are a lot of other algos out there... NeoS, Lyra2v2, Lbry, and more, all of which the 1070 performs quite well in. However, they aren't high volume, and that leads to statements like the one you made... assuming all of crypto land is just Dagger-Hashimoto. Dagger is the only really memory-bound algo out there; Cryptonote is too, but it's controlled by CPU botnets because of that.

It is the lack of development in Equihash, that's for certain. The only Nvidia optimized miner that has come out was from Nicehash and it was worthless a day later as it wasn't being made by the big three.

The reason there is so much AMD development is that there are a lot more miners with AMD cards.  They have historically been the best investment mining-wise.  They were better with Bitcoin and Litecoin (which are both heavily compute limited) because they supported certain key operations in hardware.  AMD tends to have better price/performance anyway, even for gaming.  The newer algorithms are memory hard on purpose, to be heavily ASIC resistant; there are ASICs for most non-memory-hard algorithms.  Sure, a 1070 has more compute than a 480, but it costs 2X as much, so it is kinda silly to buy for mining when you can get the same or faster speed for half the cost.  The 1060 3GB is a decent choice for Ethereum, but the 470 is still a lot better cost/perf.

The term you were looking for is 'scrypt' and that is where things died for AMD as well. Everything went private in 2014 and you couldn't make money at the end of it on AMD hardware. I know, I had AMD hardware back then. I reinvested multiple times in order to get around that. The original CCminer was when Nvidia 750tis started separating out, after a decent amount of work. It wasn't until Ethereum came out that AMD was profitable again at the end of 2015.

I assume many of you guys are eth-babies, you started mining this spring and everything is Dagger to you. It's not the way it works.

Price/performance for gaming has no merit whatsoever in a conversation about cryptos.

The 'price/performance' based solely around Dagger is silly. Dagger isn't the only algo.

AMD is definitely not the vast majority, maybe like a 70/30 split, but it's not just AMD. Almost all of Lbry right now is on Nvidia hardware as it's much more efficient and performs much better. When was the last time you mined Lbry, NeoS, or Lyra2v2? Is it profitable for you? There is a reason it's not. There are a lot of miners on Ethereum as well, as there isn't anywhere else to go right now due to the lack of development on Equihash.

Don't personify me as 'Nvidia' because I own Nvidia hardware. I am not Nvidia.

Equihash is obviously memory bound, you just don't want to admit it.  Theoretically you could do it with low memory, but it would be insanely inefficient.

I know there was the dip in mining profit in 2014 due to the crypto burst.  I mined in 2012-2013, then stopped for a while and started again when it was worth it.  I know the more proper terms would be SHA256 and scrypt; I was just naming them by the main coins.  Obviously both are irrelevant now due to ASICs.

Dagger isn't the only algo out there, but it easily has the most market share.  Ethereum's market share is over 100x what Lbry's is.  So sure, you can mine Lbry with Nvidia, but since its market share is a lot lower, profitability is low, because it's the only thing Nvidia cards are good at.  So if mining is 70/30 like you say, then 70% are smart miners, and 30% are Nvidia fans who want Nvidia to be better on hopes and dreams, or who just mine when not gaming since they have the card anyway.  It is hard to believe 70/30 when Ethereum has 100x the market share, and Nvidia costs 2x as much for the same speed.  Nvidia is subpar with Ethereum, and it will be subpar with Zcash.  Sure, you can still make a profit, it will just take a while.  Having a handful of other coins that Nvidia is good at doesn't help when none of them are coins anyone cares about.  And if they do take off, surely ASICs will be made, rendering your GPUs useless anyway.

I got some 1060 3GB cards to see how they do; they are decent at Ethereum, definitely a lot better price/perf than the 1070 or 1080.  For a short period, Lbry profit was good, now not so much.  90% of my farm is AMD though.  I would mine on either AMD or Nvidia hardware depending on profitability, I just don't see the point of whining that one or the other isn't as good.  Instead, you should just admit the facts and go buy different hardware.

Sure it has the most market share, but there are a LOT more algos, and there have been a lot more big coins in the past before they went under. Dagger is the flavor of this year. It wasn't last year, and it probably won't be next year after the Ethereum armageddon. Nvidia performed better in past years. Crypto isn't a Dagger brand; that doesn't mean all algos will be memory bound, or that Equihash is memory bound because Dagger was. I already read the theorycrafting thread on BCT your information came from; that was extrapolated from CPU mining and the original code. It's not the same.

Not sure where the Nvidia 'fanboyism' came from. I'm glad you mined during '12-'13; that means you understand Nvidia had better all-around value last year, because you were around back then. That also means you understand Nvidia cards are about 40% more efficient than the 4XX series. You also understand how mining works, and this spring, when Ethereum is basically pointless to mine and all the AMD miners are flopping around looking for something to mine that caters to their cards and there isn't anything there, you won't go down with the ship... But that's also all theorycrafting.

That aside, as I've mentioned, development for Nvidia is wide open. ZC seems to be working on it, but he's behind. We still need someone to pick up the gauntlet, fee or not, for Nvidia miners. It's virgin terrain with almost no competition on the Nvidia side.

Sure there are other coins, but my point is, Dagger coins easily have 95%+ market share of non-ASIC coins.  So even though there are dozens of other coins, they don't really matter for mining.  Sure, the 1070 is more efficient for gaming and some other compute-limited coins, but memory-limited coins are the future, due to not wanting ASICs to rule the land.  So, since all the profitable ones are memory limited, AMD clearly has an advantage mining, since Nvidia's cards are overpriced.  Even if mining, say, Lbry, sure the 1070 is faster, but it costs twice as much, and you are better off getting AMD cards due to memory. Compute power doesn't really matter for mining these days, so the 1070 isn't king, the 470/480 is.  Even for things like protein folding, the 480 has similar compute power for the money, since the 1070 is overpriced.  Nvidia isn't as far ahead on folding as they used to be.  You can undervolt to lessen the efficiency gap.

As for optimized miners, developers have little incentive to optimize for the 1070, since it has bad ROI, and so fewer miners use it.  And they need to make a dedicated CUDA port, since Nvidia kinda sucks at OpenCL.  So there is little reward for their dev efforts.  If Nvidia weren't bad at OpenCL, it would be easier to develop Nvidia miners.  So blame Nvidia for hating OpenCL and having overpriced cards.
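To put a rough number on "memory bound": Zcash's Equihash instance uses n=200, k=9, and the generalized-birthday problem it is built on starts from about 2^(n/(k+1)+1) BLAKE2b-derived strings that must be kept around across the collision rounds. A back-of-the-envelope sketch; the per-entry overheads of real solvers vary and are not modeled here:

```python
# Back-of-envelope memory estimate for Equihash with the Zcash
# parameters n=200, k=9. Only the raw hash data is counted; real
# solvers add index trees and collision tables on top of this.
n, k = 200, 9
num_strings = 2 ** (n // (k + 1) + 1)   # ~2^21 initial hash strings
string_bytes = n // 8                   # 25 bytes of hash data each
raw = num_strings * string_bytes        # raw hash data alone

print(num_strings)            # 2097152
print(raw / 2**20, "MiB")     # 50.0 MiB of raw strings
```

Since every round has to re-sort and collide those strings, throughput ends up gated by memory bandwidth and latency rather than raw compute, which matches the argument above.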
full member
Activity: 243
Merit: 105
Updated cuda port with latest changes.

Code:
https://github.com/krnlx/nheqminer
newbie
Activity: 36
Merit: 0
Wait next +10% in 12h, then 2 days pause before major release (~+50% ).
Not 10% :( Only 7-8% for NVidia (AMD may be +1% :) ):
http://coinsforall.io/distr/input.cl
for AMD: http://coinsforall.io/distr/param.h.amd
for NVidia: http://coinsforall.io/distr/param.h.nvidia

Stock GTX1070 - 93 Sol/s now.

Something about this version doesn't mine on a GTX 970.
It looks like the solvers crash once launched. Any way to produce debug logs for you?
legendary
Activity: 1498
Merit: 1030
The RX 480 is a tossup with the GTX 1070 on memory access - which is why they're comparable at best in performance on algorithms like the ones ZEC and ETH use, despite the GTX 1070 costing almost twice as much.

Nvidia gets the "scraps" on mining because most mining algorithms don't use most of the parts of an NVidia card that make it competitive with AMD cards on general or compute-bound usage at a given price point, and as a result few folks use NVidia cards to mine, which makes them a much lower priority for development.

 It's not "lack of development" ALONE that keeps Nvidia uncompetitive on a hash/$ basis for ETH and ZEC (and derivatives using the same algorithms).
 It's the inherent design of the ALGORITHMS that keeps NVidia uncompetitive on a hash/$ basis, coupled with the higher PRICE of their cards that have competitive memory access, even when development IS mature.

It's waaaaay too early to call this based on memory bus width. There is a lot of theorycrafting and it's all based on current hashrates and extrapolating against the original CPU miner code, not GPU optimized code, and not code made specifically for Nvidia hardware.

The only algo that doesn't fully utilize a 1070 is Dagger (Ethereum), which I've mentioned before. That has led to a misconception about the capabilities of a 1070... see your post. There are a lot of other algos out there... NeoS, Lyra2v2, Lbry, and more, all of which the 1070 performs quite well in. However, they aren't high volume, and that leads to statements like the one you made... assuming all of crypto land is just Dagger-Hashimoto. Dagger is the only really memory-bound algo out there; Cryptonote is too, but it's controlled by CPU botnets because of that.

It is the lack of development in Equihash, that's for certain. The only Nvidia optimized miner that has come out was from Nicehash and it was worthless a day later as it wasn't being made by the big three.

The term you were looking for is 'scrypt' and that is where things died for AMD as well.


 ETH and ZEC are both memory-limited algorithms - and are where AMD is currently shining once again.
 NeoS, Lyra2v2, and Lbry don't make much - even with the limitations of the memory system on a 1070 being no faster than the AMD RX 470/480, I still see better profitability out of my 1070s on ETH than on any of the coins based on those algorithms.


 Scrypt died for GPUs when the Gridseed and later ASICs showed up for it - had nothing to do with AMD vs Nvidia.
 I wasn't around early enough for the Bitcoin GPU days but it appears that the same thing happened there.


 Also, I never specified memory bus width - I'm talking OVERALL memory access, the stuff that keeps the R9 290x hashing at the same rate on ETH as the R9 290 (among other examples).
 The reason the R9 290/390 and such are competitive on ETH and ZEC is that their bus width and other memory subsystem design make up for their much lower memory speed, but the algorithms used in ETH and ZEC are very much memory-access limited more than compute limited (or the R9 290x would hash noticeably better than the R9 290 does - on ETH at least, where the code has been well optimised, they hash pretty much identically, presuming the same clocks and the same BIOS memory-system mods).

 Do keep in mind that for ETH at least there IS a miner (genoil's) that started out as CUDA specific and is well optimised for NVidia, yet the AMD RX series cards match or better the NVidia GTX 10xx cards on that algorithm in both raw performance AND hash/watt, and at a much lower price point.
 This isn't the case as much for ZEC (the code is still getting optimised), but it's become apparent that ZEC is yet another "memory hard" algorithm by design and implementation that does not reward superior compute performance past the point where the memory subsystem starts hitting its limits (if not as much so as ETH).


 No, I'm not an "ETH baby" - all of my early ETH rigs were Scrypt rigs back in the day (give or take some cards getting moved around) that spent their time after Scrypt went ASIC doing d.net work (and most of the HD 7750s from my scrypt days are STILL working d.net via the BOINC MooWrapper project).


 I don't know where you're coming up with NVidia being 40% more efficient than the RX 4xx series - right now it's looking like actual efficiency is more or less a tossup, but very dependent on what you're actually working on with a given card. Even on Folding, where NVidia offers a clear performance lead, the RX 480 is a tossup with the GTX 1070 on PPD/$ at the card level and very close at the system level, and very close on PPD/watt (less than 10% per the data I've seen at the card level).
 I do NOT see a 40% efficiency benefit to NVidia even in one of its biggest strongholds.
legendary
Activity: 1764
Merit: 1024
The RX 480 is a tossup with the GTX 1070 on memory access - which is why they're comparable at best in performance on algorithms like the ones ZEC and ETH use, despite the GTX 1070 costing almost twice as much.

Nvidia gets the "scraps" on mining because most mining algorithms don't use most of the parts of an NVidia card that make it competitive with AMD cards on general or compute-bound usage at a given price point, and as a result few folks use NVidia cards to mine, which makes them a much lower priority for development.

 It's not "lack of development" ALONE that keeps Nvidia uncompetitive on a hash/$ basis for ETH and ZEC (and derivatives using the same algorithms).
 It's the inherent design of the ALGORITHMS that keeps NVidia uncompetitive on a hash/$ basis, coupled with the higher PRICE of their cards that have competitive memory access, even when development IS mature.

It's waaaaay too early to call this based on memory bus width. There is a lot of theorycrafting and it's all based on current hashrates and extrapolating against the original CPU miner code, not GPU optimized code, and not code made specifically for Nvidia hardware.

The only algo that doesn't fully utilize a 1070 is Dagger (Ethereum), which I've mentioned before. That has led to a misconception about the capabilities of a 1070... see your post. There are a lot of other algos out there... NeoS, Lyra2v2, Lbry, and more, all of which the 1070 performs quite well in. However, they aren't high volume, and that leads to statements like the one you made... assuming all of crypto land is just Dagger-Hashimoto. Dagger is the only really memory-bound algo out there; Cryptonote is too, but it's controlled by CPU botnets because of that.

It is the lack of development in Equihash, that's for certain. The only Nvidia optimized miner that has come out was from Nicehash and it was worthless a day later as it wasn't being made by the big three.

The reason there is so much AMD development is that there are a lot more miners with AMD cards.  They have historically been the best investment mining-wise.  They were better with Bitcoin and Litecoin (which are both heavily compute limited) because they supported certain key operations in hardware.  AMD tends to have better price/performance anyway, even for gaming.  The newer algorithms are memory hard on purpose, to be heavily ASIC resistant; there are ASICs for most non-memory-hard algorithms.  Sure, a 1070 has more compute than a 480, but it costs 2X as much, so it is kinda silly to buy for mining when you can get the same or faster speed for half the cost.  The 1060 3GB is a decent choice for Ethereum, but the 470 is still a lot better cost/perf.

The term you were looking for is 'scrypt' and that is where things died for AMD as well. Everything went private in 2014 and you couldn't make money at the end of it on AMD hardware. I know, I had AMD hardware back then. I reinvested multiple times in order to get around that. The original CCminer was when Nvidia 750tis started separating out, after a decent amount of work. It wasn't until Ethereum came out that AMD was profitable again at the end of 2015.

I assume many of you guys are eth-babies, you started mining this spring and everything is Dagger to you. It's not the way it works.

Price/performance for gaming has no merit whatsoever in a conversation about cryptos.

The 'price/performance' based solely around Dagger is silly. Dagger isn't the only algo.

AMD is definitely not the vast majority, maybe like a 70/30 split, but it's not just AMD. Almost all of Lbry right now is on Nvidia hardware as it's much more efficient and performs much better. When was the last time you mined Lbry, NeoS, or Lyra2v2? Is it profitable for you? There is a reason it's not. There are a lot of miners on Ethereum as well, as there isn't anywhere else to go right now due to the lack of development on Equihash.

Don't personify me as 'Nvidia' because I own Nvidia hardware. I am not Nvidia.

Equihash is obviously memory bound, you just don't want to admit it.  Theoretically you could do it with low memory, but it would be insanely inefficient.

I know there was the dip in mining profit in 2014 due to the crypto burst.  I mined in 2012-2013, then stopped for a while and started again when it was worth it.  I know the more proper terms would be SHA256 and scrypt; I was just naming them by the main coins.  Obviously both are irrelevant now due to ASICs.

Dagger isn't the only algo out there, but it easily has the most market share.  Ethereum's market share is over 100x what Lbry's is.  So sure, you can mine Lbry with Nvidia, but since its market share is a lot lower, profitability is low, because it's the only thing Nvidia cards are good at.  So if mining is 70/30 like you say, then 70% are smart miners, and 30% are Nvidia fans who want Nvidia to be better on hopes and dreams, or who just mine when not gaming since they have the card anyway.  It is hard to believe 70/30 when Ethereum has 100x the market share, and Nvidia costs 2x as much for the same speed.  Nvidia is subpar with Ethereum, and it will be subpar with Zcash.  Sure, you can still make a profit, it will just take a while.  Having a handful of other coins that Nvidia is good at doesn't help when none of them are coins anyone cares about.  And if they do take off, surely ASICs will be made, rendering your GPUs useless anyway.

I got some 1060 3GB cards to see how they do; they are decent at Ethereum, definitely a lot better price/perf than the 1070 or 1080.  For a short period, Lbry profit was good, now not so much.  90% of my farm is AMD though.  I would mine on either AMD or Nvidia hardware depending on profitability, I just don't see the point of whining that one or the other isn't as good.  Instead, you should just admit the facts and go buy different hardware.

Sure it has the most market share, but there are a LOT more algos, and there have been a lot more big coins in the past before they went under. Dagger is the flavor of this year. It wasn't last year, and it probably won't be next year after the Ethereum armageddon. Nvidia performed better in past years. Crypto isn't a Dagger brand; that doesn't mean all algos will be memory bound, or that Equihash is memory bound because Dagger was. I already read the theorycrafting thread on BCT your information came from; that was extrapolated from CPU mining and the original code. It's not the same.

Not sure where the Nvidia 'fanboyism' came from. I'm glad you mined during '12-'13; that means you understand Nvidia had better all-around value last year, because you were around back then. That also means you understand Nvidia cards are about 40% more efficient than the 4XX series. You also understand how mining works, and this spring, when Ethereum is basically pointless to mine and all the AMD miners are flopping around looking for something to mine that caters to their cards and there isn't anything there, you won't go down with the ship... But that's also all theorycrafting.

That aside, as I've mentioned, development for Nvidia is wide open. ZC seems to be working on it, but he's behind. We still need someone to pick up the gauntlet, fee or not, for Nvidia miners. It's virgin terrain with almost no competition on the Nvidia side.
legendary
Activity: 2688
Merit: 1240
The dupe problem on zawawa's build still remains, but it's a bit better.

Edit: over time the dupe problem gets even worse. 1/3 of shares submitted are flagged as dupes.


On every pool? Usually duplicates are very rare with equihash
sr. member
Activity: 728
Merit: 304
Miner Developer
There seems to be a problem with the random number generator.
I will try the RtlGenRandom API.
sr. member
Activity: 728
Merit: 304
Miner Developer
Oops, I forgot to sync the repo. Now it's up to date.
legendary
Activity: 1292
Merit: 1000

git clone https://github.com/mbevand/silentarmy
download latest input.cl and param.h.nvidia from eXtremal's post.
replace them and compile.


Yes, there is an old input.cl if you git clone from zawa.

Now I have close to 90 Sol/s :)
sr. member
Activity: 652
Merit: 266
So my last test was with the zawa version and the files exchanged for eXtremal's... just can't get the python to compile the exe due to some Windows SDK things... but

--instance=1
Total 416.2 sol/s [dev0 102.2, dev1 108.8, dev2 96.9, dev3 113.3] 12 shares
Total 417.7 sol/s [dev0 100.4, dev1 106.6, dev2 96.8, dev3 113.2] 12 shares
Total 421.6 sol/s [dev0 100.6, dev1 111.6, dev2 103.3, dev3 112.7] 12 shares
Total 420.3 sol/s [dev0 106.0, dev1 110.4, dev2 101.3, dev3 112.9] 12 shares
Total 420.4 sol/s [dev0 103.2, dev1 109.5, dev2 102.9, dev3 112.0] 12 shares
Total 418.9 sol/s [dev0 104.3, dev1 110.7, dev2 101.3, dev3 111.2] 12 shares
Total 419.2 sol/s [dev0 103.9, dev1 113.1, dev2 101.9, dev3 107.8] 12 shares
Total 419.4 sol/s [dev0 100.0, dev1 114.5, dev2 104.8, dev3 109.6] 12 shares
Total 420.0 sol/s [dev0 100.1, dev1 114.4, dev2 102.5, dev3 110.2] 13 shares


GTX1080 + GTX1070 + GTX1080 + GTX1070..nice power draw of 4 640Watts
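Those totals are easy to sanity-check by parsing the miner's status lines; a small sketch that assumes only the log format shown above:

```python
import re

# Parse a SILENTARMY-style status line like:
#   Total 416.2 sol/s [dev0 102.2, dev1 108.8, dev2 96.9, dev3 113.3] 12 shares
line = "Total 416.2 sol/s [dev0 102.2, dev1 108.8, dev2 96.9, dev3 113.3] 12 shares"

total = float(re.search(r"Total ([\d.]+) sol/s", line).group(1))
per_dev = [float(v) for v in re.findall(r"dev\d+ ([\d.]+)", line)]

print(total)                     # 416.2
print(round(sum(per_dev), 1))    # per-device sum; can differ slightly from
                                 # Total (different averaging windows)
```

Feeding a whole log through this makes it trivial to average per-device rates over time instead of eyeballing individual lines.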


I'm still about 80 sol/s under linux with new zawa and new param.h ...
?
git clone https://github.com/mbevand/silentarmy
download latest input.cl and param.h.nvidia from eXtremal's post.
replace them and compile.
legendary
Activity: 1292
Merit: 1000
So my last test was with the zawa version and the files exchanged for eXtremal's... just can't get the python to compile the exe due to some Windows SDK things... but

--instance=1
Total 416.2 sol/s [dev0 102.2, dev1 108.8, dev2 96.9, dev3 113.3] 12 shares
Total 417.7 sol/s [dev0 100.4, dev1 106.6, dev2 96.8, dev3 113.2] 12 shares
Total 421.6 sol/s [dev0 100.6, dev1 111.6, dev2 103.3, dev3 112.7] 12 shares
Total 420.3 sol/s [dev0 106.0, dev1 110.4, dev2 101.3, dev3 112.9] 12 shares
Total 420.4 sol/s [dev0 103.2, dev1 109.5, dev2 102.9, dev3 112.0] 12 shares
Total 418.9 sol/s [dev0 104.3, dev1 110.7, dev2 101.3, dev3 111.2] 12 shares
Total 419.2 sol/s [dev0 103.9, dev1 113.1, dev2 101.9, dev3 107.8] 12 shares
Total 419.4 sol/s [dev0 100.0, dev1 114.5, dev2 104.8, dev3 109.6] 12 shares
Total 420.0 sol/s [dev0 100.1, dev1 114.4, dev2 102.5, dev3 110.2] 13 shares


GTX1080 + GTX1070 + GTX1080 + GTX1070..nice power draw of 4 640Watts


I'm still about 80 sol/s under linux with new zawa and new param.h ...
?