
Topic: SILENTARMY v5: Zcash miner, 115 sol/s on R9 Nano, 70 sol/s on GTX 1070 - page 31. (Read 209309 times)

sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
Thank you SP_, it is running much better. Also, if I am running -cd 00 11 22, do I leave the -t 0 or change it?

You need spaces between the numbers. Use -t 0 for slow CPUs.
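With the devices spaced out it would look something like this (rest of the command line as before; listing a device twice runs two threads on it):
Code:
nheqminer.exe ... -cs -cd 0 0 1 1 2 2 -t 0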
hero member
Activity: 494
Merit: 500
Thank you SP_, it is running much better. Also, if I am running -cd 00 11 22, do I leave the -t 0 or change it?
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
SP_, will this fork pick up a 750 Ti? When I run it, it does not look like it is finding them, and the hash rate is very slow on my 1070...

Check the bat file in the 7z file (zcash.bat). You need to run with -cs to use the sp-modded CUDA port of the SilentArmy kernel. You also need to specify the devices with the -cd parameter. On slow CPUs use -t 0. On fast CPUs use -cd 0 0 to run 2 threads on one GPU; -cd 0 0 1 1 2 2 3 3 will run 4 GPUs with 2 threads each, just like the Claymore miner.

There is a bug in release #1: only 8 threads are supported.
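For illustration, a zcash.bat along these lines (the pool host and wallet are placeholders, and -l/-u are assumed to be nheqminer's usual location/user flags):
Code:
:: -cs selects the sp-modded CUDA port of the SilentArmy kernel
:: -cd 0 0 1 1 runs 2 threads on each of GPUs 0 and 1 (release #1 supports max 8 threads)
:: -t 0 disables CPU mining (use on slow CPUs)
nheqminer.exe -l eu1-zcash.flypool.org:3333 -u t1YourZcashAddress.rig1 -cs -cd 0 0 1 1 -t 0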
hero member
Activity: 494
Merit: 500
SP_, will this fork pick up a 750 Ti? When I run it, it does not look like it is finding them, and the hash rate is very slow on my 1070...
newbie
Activity: 7
Merit: 0
Any chance to test a Linux compile?
Thanks in advance
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
Just a Windows binary, and it's an nheqminer fork.
legendary
Activity: 1510
Merit: 1003
What do you get with the memory underclocked? (-500?)
Hi SP_, I cannot test that because I didn't do the screen mod to allow OC on Linux (I should do that, it is not hard, I'm just being lazy  Grin)
Can you test it, newmz?
It should be better than 88 Sol/s in my case, but it seems that my processor is not fast enough (low GPU utilization, 88%).

I have made a version of the CUDA port of the SA kernel.

Zcash sp-mod #1

https://github.com/sp-hash/ccminer/releases/tag/zcash-spmod1

Low power, more hash.
Since you did it for ccminer, someone can do it for sgminer...
Can you share a command line? I cannot see an equihash algo...
It is not ccminer; it is an nheqminer binary just uploaded to the ccminer GitHub ))
sr. member
Activity: 652
Merit: 266
What do you get with the memory underclocked? (-500?)
Hi SP_, I cannot test that because I didn't do the screen mod to allow OC on Linux (I should do that, it is not hard, I'm just being lazy  Grin)
Can you test it, newmz?
It should be better than 88 Sol/s in my case, but it seems that my processor is not fast enough (low GPU utilization, 88%).

I have made a version of the CUDA port of the SA kernel.

Zcash sp-mod #1

https://github.com/sp-hash/ccminer/releases/tag/zcash-spmod1

Low power, more hash.
Since you did it for ccminer, someone can do it for sgminer...
Can you share a command line? I cannot see an equihash algo...
hero member
Activity: 2548
Merit: 626
How come Aug 31??
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
What do you get with the memory underclocked? (-500?)
Hi SP_, I cannot test that because I didn't do the screen mod to allow OC on Linux (I should do that, it is not hard, I'm just being lazy  Grin)
Can you test it, newmz?
It should be better than 88 Sol/s in my case, but it seems that my processor is not fast enough (low GPU utilization, 88%).

I have made a version of the CUDA port of the SA kernel.

Zcash sp-mod #1

https://github.com/sp-hash/ccminer/releases/tag/zcash-spmod1

Low power, more hash.
legendary
Activity: 1274
Merit: 1000
I remember doing X11 with GPUs here (https://www.ltcrabbit.com/index.php) for a while, back when I was trying to mine bitcoins on Slush's old pool, until GPU mining became useless for Bitcoin while I learned, and I went to ASIC miners after that. My first SHA-256 ASICs were USB stick miners; I had a few. That might have been around the middle of 2014, in May or later; I can still look at my payouts on ltcrabbit.com to check. I never mined anything but X11 on ltcrabbit till I got my first scrypt ASIC miner in 2015. I was about to give up mining, then the S5 came out (I forget the date) and I got pulled back in; that miner was so cheap, who could refuse? Now I really like GPU mining over Bitcoin ASIC mining.

legendary
Activity: 1764
Merit: 1024
The RX 480 is a tossup with the GTX 1070 on memory access - which is why they're comparable at best in performance on algorithms like the ones ZEC and ETH use, despite the GTX 1070 costing almost twice as much.

Nvidia gets the "scraps" on mining because most mining algorithms don't use most of the parts of an NVidia card that make it competitive with AMD cards on general or compute-bound usage at a given price point, and as a result few folks use NVidia cards to mine on, which makes them a much lower priority for development.

It's not "lack of development" ALONE that keeps Nvidia uncompetitive on a hash/$ basis for ETH and ZEC (and derivatives using the same algorithms).
It's the inherent design of the ALGORITHMS that keeps NVidia uncompetitive on a hash/$ basis, coupled with the higher PRICE of their cards that have competitive memory access, even when development IS mature.

It's waaaaay too early to call this based on memory bus width. There is a lot of theorycrafting, and it's all based on current hashrates and on extrapolating against the original CPU miner code - not GPU-optimized code, and not code made specifically for Nvidia hardware.

The only algo that doesn't fully utilize a 1070 is Dagger (Ethereum), which I've mentioned before, and which has led to a misconception about the capabilities of a 1070... see your post. There are a lot of other algos out there... NeoS, Lyra2v2, Lbry, and more, in all of which the 1070 performs quite well. However, they aren't high volume, and as such it leads to statements like the one you made... assuming all of crypto land is just Dagger-Hashimoto. Dagger is the only really memory-bound algo out there; Cryptonote also is, but that one is controlled by CPU botnets because of it.

It is the lack of development in Equihash, that's for certain. The only Nvidia-optimized miner that has come out was from Nicehash, and it was worthless a day later, as it wasn't being made by the big three.

The term you were looking for is 'scrypt', and that is where things died for AMD as well.

ETH and ZEC are both memory-limited algorithms - and are where AMD is currently shining once again.
NeoS, Lyra2v2, and Lbry don't make much - even with the limitations of the memory system on a 1070 making it no faster than the AMD RX 470/480, I still see better profitability out of my 1070s on ETH than on any of the coins based on those algorithms.

Scrypt died for GPUs when the Gridseed and later ASICs showed up for it - it had nothing to do with AMD vs Nvidia.
I wasn't around early enough for the Bitcoin GPU days, but it appears that the same thing happened there.

Also, I never specified memory bus width - I'm talking OVERALL memory access, the stuff that keeps the R9 290x hashing at the same rate on ETH as the R9 290 (among other examples).
The reason the R9 290/390 and such are competitive on ETH and ZEC is that their bus width and other memory subsystem design make up for their much lower memory speed, but the algorithms used in ETH and ZEC are very much memory-access limited more than compute limited (or the R9 290x would hash noticeably better than the R9 290 does - on ETH at least, where the code has been well optimised, they hash pretty much identically, presuming the same clocks and the same BIOS memory system mods).

Do keep in mind that for ETH at least there IS a miner (genoil's) that started out as CUDA specific and is well optimised for NVidia, yet the AMD RX series cards match or better the NVidia GTX 10xx cards on that algorithm on both raw performance AND hash/watt, and at a much lower price point.
This isn't the case as much for ZEC (the code is still getting optimised), but it's become apparent that ZEC is yet another "memory hard" algorithm by design and implementation that does not reward superior compute performance past the point where the memory subsystem starts hitting its limits (if not as much so as ETH).

No, I'm not an "ETH baby" - all of my early ETH rigs were Scrypt rigs back in the day (give or take some cards getting moved around) that spent their time after Scrypt went ASIC doing d.net work (and most of the HD 7750s from my scrypt days are STILL working d.net via the BOINC MooWrapper project).

I don't know where you're coming up with NVidia being 40% more efficient than the RX 4xx series - right now it's looking like actual efficiency is more or less a tossup, but very dependent on what you're actually working on with a given card. Even on Folding, where NVidia offers a clear performance lead, the RX 480 is a tossup with the GTX 1070 on PPD/$ at the card level and very close at the system level, and very close on PPD/watt (less than 10% per the data I've seen at the card level).
I do NOT see a 40% efficiency benefit to NVidia even in one of its biggest strongholds.

That is definitely incorrect. Private kernels killed Scrypt mining... ASICs came along later. If you weren't around at the end of '14 you wouldn't have figured that out. Not everything is the big bad ASIC boogeyman... Sometimes it's just greed and people turning off the lights. You can Google my posts and check them out from BCT in '14. Hence why I'm here trying to motivate some development on Nvidia's side.

"I don't know where you're coming up with NVidia being 40% more efficient than the RX 4xx series - right now it's looking like actual efficiency is more or less a tossup"

With a lack of coding for Nvidia, you're making this statement off of current conditions and rates. Do you think as much effort is going into developing code for Nvidia as for AMD right now? The answer is no. You already said no. The efficiency argument is based off of algos that actually use more than memory, and not just that, but gaming as well. While mining isn't gaming, gaming has been optimized quite a bit over the years. When one brand is getting maxed, the other is as well. Go look up some hardware benchmarks; that's pretty fundamental stuff.

Genoil's miner isn't CUDA optimized. That was Dagger, not Equihash. His endeavours in Equihash are focused on AMD hardware, as that is what he owns. It wasn't until recently that he made an Nvidia-compatible miner, and it's just a port of SAv5.

Alright, how about some sources for Equihash being hardware memory-bus-width locked that I haven't seen on BCT and that aren't extrapolated from a CPU miner or from current rates on AMD hardware? You know the Fury also has a better processor than an R9 290? You also know that an RX 480 is basically a mid-range GPU with processing power to match (close to or a bit less than an R9 290)?

Do you also know that if you want to check whether an algo is memory limited, you can go into GPU-Z, check the MCU (memory controller unit), and see the load on it? Mine sits at 38% at 108 Sol/s for a 1070. If we want to take a page from your book and 'extrapolate' from that, that means there is potential there for 284 Sol/s on a 1070 - that is, IF it's completely memory bound and without any sort of optimizing for Nvidia hardware. NeoS also sits around 30% MCU usage. Dagger sits at 100% right before it trades off to more GPU and power usage (if you use a dual miner). Cryptonote also sits at 100% utilization. Weird, all the 'smart minds' and no one bothers checking the gauges.
By similar extrapolation, a 480 could do 266 Sol/s (no memory OC). So slightly slower at half the cost, with similar power. So even with both using theoretically optimal miners, the 1070 is still a poor choice. A 1060 3GB would be reasonable, but still not as good in Sol/$.
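(For reference, the arithmetic behind both of those 'if purely memory-bound' ceilings - a rough upper bound only, since MCU load is just a proxy:)
Code:
projected ceiling = observed rate / MCU load
GTX 1070: 108 Sol/s / 0.38 ≈ 284 Sol/s
(the 266 Sol/s figure for the 480 follows the same formula with its own readings)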

Where are you getting your power usage numbers from? Power efficiency matches neither other algos nor gaming benchmarks. You know why so many people are complaining about 'Claymore killing my GPUs'? Because they aren't used to a full power load on their hardware. They based it around silly low numbers, like the first releases of Equihash or even Ethereum. As a miner more fully utilizes your hardware, it will start approaching the maximum TDP of the card. While the 1070 and the 480 have similar TDPs, the amount of processing power available to a 1070 is almost double that of a 480.

Is MCU usage sitting at 38%, or did you just take my number and use it yourself? How about a screenshot?

my bad
Code:
That is definitely incorrect. Private kernels killed Scrypt mining... ASICs came along later

It looked that way back then to me. I also stopped using GPUs to mine any coins when SHA-256 ASIC miners came out; if I remember right they came out before scrypt ASIC miners did, so you're probably right, and I must have just come around to mining about the time the shit hit the fan and missed more than I thought. I do know the first scrypt ASIC was made by LKETC; it looked like a Gridseed G-Black miner but had more hash power, and it may have had help from someone who made the Gridseed, but LKETC made the first scrypt ASIC miner. Avalon made the first ASIC miner of any kind. I remember when LKETC's came out, and shortly after, Gridseed came out with their version and took over the market, so to speak, then Zeus and all the others, including scam companies (which Zeus turned into). But I didn't make the above post; I made another one, a shorter version.

ASIC scrypt miners debuted in '15. Wolf0 is one of the popular private kernel devs that killed Scrypt mining. Back then he only sold to big farms, and only a handful of them. X11 started coming out around the end of '14, but once again it was specifically private kernels that dominated it, and it was unprofitable with public miners at the end of '14. X11 ASIC miners didn't debut until '16, though they were probably in use since the end of '15.

I looked at a ton of different options then, but sold all of my AMD hardware at the time, as it was below power costs to mine with said hardware even with pretty cheap electricity. I took a 60% loss on my assets because of it. I would've made a pretty good amount of money on Ethereum with it, but you know, hindsight is always 20/20, and it was an entire year before Ethereum came out.

This is going to happen again once Ethereum starts going PoS, which it's already doing. This spring it's going to be pointless to mine Ethereum, and all that AMD hash is going to be looking for something juicy to sink its teeth into. Power efficiency is going to start mattering a lot more. Maybe Equihash will turn into the next Dagger, but those are big shoes to fill and we're just at the beginning.
legendary
Activity: 1274
Merit: 1000
my bad
Code:
That is definitely incorrect. Private kernels killed Scrypt mining... ASICs came along later

It looked that way back then to me. I also stopped using GPUs to mine any coins when SHA-256 ASIC miners came out; if I remember right they came out before scrypt ASIC miners did, so you're probably right, and I must have just come around to mining about the time the shit hit the fan and missed more than I thought. I do know the first scrypt ASIC was made by LKETC; it looked like a Gridseed G-Black miner but had more hash power, and it may have had help from someone who made the Gridseed, but LKETC made the first scrypt ASIC miner. Avalon made the first ASIC miner of any kind. I remember when LKETC's came out, and shortly after, Gridseed came out with their version and took over the market, so to speak, then Zeus and all the others, including scam companies (which Zeus turned into). But I didn't make the above post; I made another one, a shorter version.

It started in 2009 or sooner. I thought it was BS then and never bothered with it till the $1000 bitcoin, then the crash.
CPU
GPU
FPGA, by Butterfly Labs, we all remember them
then I came in about here or a bit before.
ASIC and the "arms race," as it was called, till now.

Thanks, top
sr. member
Activity: 449
Merit: 251
The RX 480 is a tossup with the GTX 1070 on memory access - which is why they're comparable at best in performance on algorithms like the ones ZEC and ETH use, despite the GTX 1070 costing almost twice as much.

Nvidia gets the "scraps" on mining because most mining algorithms don't use most of the parts of an NVidia card that make it competitive with AMD cards on general or compute-bound usage at a given price point, and as a result few folks use NVidia cards to mine on, which makes them a much lower priority for development.

It's not "lack of development" ALONE that keeps Nvidia uncompetitive on a hash/$ basis for ETH and ZEC (and derivatives using the same algorithms).
It's the inherent design of the ALGORITHMS that keeps NVidia uncompetitive on a hash/$ basis, coupled with the higher PRICE of their cards that have competitive memory access, even when development IS mature.

It's waaaaay too early to call this based on memory bus width. There is a lot of theorycrafting, and it's all based on current hashrates and on extrapolating against the original CPU miner code - not GPU-optimized code, and not code made specifically for Nvidia hardware.

The only algo that doesn't fully utilize a 1070 is Dagger (Ethereum), which I've mentioned before, and which has led to a misconception about the capabilities of a 1070... see your post. There are a lot of other algos out there... NeoS, Lyra2v2, Lbry, and more, in all of which the 1070 performs quite well. However, they aren't high volume, and as such it leads to statements like the one you made... assuming all of crypto land is just Dagger-Hashimoto. Dagger is the only really memory-bound algo out there; Cryptonote also is, but that one is controlled by CPU botnets because of it.

It is the lack of development in Equihash, that's for certain. The only Nvidia-optimized miner that has come out was from Nicehash, and it was worthless a day later, as it wasn't being made by the big three.

The term you were looking for is 'scrypt', and that is where things died for AMD as well.

ETH and ZEC are both memory-limited algorithms - and are where AMD is currently shining once again.
NeoS, Lyra2v2, and Lbry don't make much - even with the limitations of the memory system on a 1070 making it no faster than the AMD RX 470/480, I still see better profitability out of my 1070s on ETH than on any of the coins based on those algorithms.

Scrypt died for GPUs when the Gridseed and later ASICs showed up for it - it had nothing to do with AMD vs Nvidia.
I wasn't around early enough for the Bitcoin GPU days, but it appears that the same thing happened there.

Also, I never specified memory bus width - I'm talking OVERALL memory access, the stuff that keeps the R9 290x hashing at the same rate on ETH as the R9 290 (among other examples).
The reason the R9 290/390 and such are competitive on ETH and ZEC is that their bus width and other memory subsystem design make up for their much lower memory speed, but the algorithms used in ETH and ZEC are very much memory-access limited more than compute limited (or the R9 290x would hash noticeably better than the R9 290 does - on ETH at least, where the code has been well optimised, they hash pretty much identically, presuming the same clocks and the same BIOS memory system mods).

Do keep in mind that for ETH at least there IS a miner (genoil's) that started out as CUDA specific and is well optimised for NVidia, yet the AMD RX series cards match or better the NVidia GTX 10xx cards on that algorithm on both raw performance AND hash/watt, and at a much lower price point.
This isn't the case as much for ZEC (the code is still getting optimised), but it's become apparent that ZEC is yet another "memory hard" algorithm by design and implementation that does not reward superior compute performance past the point where the memory subsystem starts hitting its limits (if not as much so as ETH).

No, I'm not an "ETH baby" - all of my early ETH rigs were Scrypt rigs back in the day (give or take some cards getting moved around) that spent their time after Scrypt went ASIC doing d.net work (and most of the HD 7750s from my scrypt days are STILL working d.net via the BOINC MooWrapper project).

I don't know where you're coming up with NVidia being 40% more efficient than the RX 4xx series - right now it's looking like actual efficiency is more or less a tossup, but very dependent on what you're actually working on with a given card. Even on Folding, where NVidia offers a clear performance lead, the RX 480 is a tossup with the GTX 1070 on PPD/$ at the card level and very close at the system level, and very close on PPD/watt (less than 10% per the data I've seen at the card level).
I do NOT see a 40% efficiency benefit to NVidia even in one of its biggest strongholds.

That is definitely incorrect. Private kernels killed Scrypt mining... ASICs came along later. If you weren't around at the end of '14 you wouldn't have figured that out. Not everything is the big bad ASIC boogeyman... Sometimes it's just greed and people turning off the lights. You can Google my posts and check them out from BCT in '14. Hence why I'm here trying to motivate some development on Nvidia's side.

"I don't know where you're coming up with NVidia being 40% more efficient than the RX 4xx series - right now it's looking like actual efficiency is more or less a tossup"

With a lack of coding for Nvidia, you're making this statement off of current conditions and rates. Do you think as much effort is going into developing code for Nvidia as for AMD right now? The answer is no. You already said no. The efficiency argument is based off of algos that actually use more than memory, and not just that, but gaming as well. While mining isn't gaming, gaming has been optimized quite a bit over the years. When one brand is getting maxed, the other is as well. Go look up some hardware benchmarks; that's pretty fundamental stuff.

Genoil's miner isn't CUDA optimized. That was Dagger, not Equihash. His endeavours in Equihash are focused on AMD hardware, as that is what he owns. It wasn't until recently that he made an Nvidia-compatible miner, and it's just a port of SAv5.

Alright, how about some sources for Equihash being hardware memory-bus-width locked that I haven't seen on BCT and that aren't extrapolated from a CPU miner or from current rates on AMD hardware? You know the Fury also has a better processor than an R9 290? You also know that an RX 480 is basically a mid-range GPU with processing power to match (close to or a bit less than an R9 290)?

Do you also know that if you want to check whether an algo is memory limited, you can go into GPU-Z, check the MCU (memory controller unit), and see the load on it? Mine sits at 38% at 108 Sol/s for a 1070. If we want to take a page from your book and 'extrapolate' from that, that means there is potential there for 284 Sol/s on a 1070 - that is, IF it's completely memory bound and without any sort of optimizing for Nvidia hardware. NeoS also sits around 30% MCU usage. Dagger sits at 100% right before it trades off to more GPU and power usage (if you use a dual miner). Cryptonote also sits at 100% utilization. Weird, all the 'smart minds' and no one bothers checking the gauges.
By similar extrapolation, a 480 could do 266 Sol/s (no memory OC). So slightly slower at half the cost, with similar power. So even with both using theoretically optimal miners, the 1070 is still a poor choice. A 1060 3GB would be reasonable, but still not as good in Sol/$.
legendary
Activity: 1764
Merit: 1024
The RX 480 is a tossup with the GTX 1070 on memory access - which is why they're comparable at best in performance on algorithms like the ones ZEC and ETH use, despite the GTX 1070 costing almost twice as much.

Nvidia gets the "scraps" on mining because most mining algorithms don't use most of the parts of an NVidia card that make it competitive with AMD cards on general or compute-bound usage at a given price point, and as a result few folks use NVidia cards to mine on, which makes them a much lower priority for development.

It's not "lack of development" ALONE that keeps Nvidia uncompetitive on a hash/$ basis for ETH and ZEC (and derivatives using the same algorithms).
It's the inherent design of the ALGORITHMS that keeps NVidia uncompetitive on a hash/$ basis, coupled with the higher PRICE of their cards that have competitive memory access, even when development IS mature.

It's waaaaay too early to call this based on memory bus width. There is a lot of theorycrafting, and it's all based on current hashrates and on extrapolating against the original CPU miner code - not GPU-optimized code, and not code made specifically for Nvidia hardware.

The only algo that doesn't fully utilize a 1070 is Dagger (Ethereum), which I've mentioned before, and which has led to a misconception about the capabilities of a 1070... see your post. There are a lot of other algos out there... NeoS, Lyra2v2, Lbry, and more, in all of which the 1070 performs quite well. However, they aren't high volume, and as such it leads to statements like the one you made... assuming all of crypto land is just Dagger-Hashimoto. Dagger is the only really memory-bound algo out there; Cryptonote also is, but that one is controlled by CPU botnets because of it.

It is the lack of development in Equihash, that's for certain. The only Nvidia-optimized miner that has come out was from Nicehash, and it was worthless a day later, as it wasn't being made by the big three.

The term you were looking for is 'scrypt', and that is where things died for AMD as well.

ETH and ZEC are both memory-limited algorithms - and are where AMD is currently shining once again.
NeoS, Lyra2v2, and Lbry don't make much - even with the limitations of the memory system on a 1070 making it no faster than the AMD RX 470/480, I still see better profitability out of my 1070s on ETH than on any of the coins based on those algorithms.

Scrypt died for GPUs when the Gridseed and later ASICs showed up for it - it had nothing to do with AMD vs Nvidia.
I wasn't around early enough for the Bitcoin GPU days, but it appears that the same thing happened there.

Also, I never specified memory bus width - I'm talking OVERALL memory access, the stuff that keeps the R9 290x hashing at the same rate on ETH as the R9 290 (among other examples).
The reason the R9 290/390 and such are competitive on ETH and ZEC is that their bus width and other memory subsystem design make up for their much lower memory speed, but the algorithms used in ETH and ZEC are very much memory-access limited more than compute limited (or the R9 290x would hash noticeably better than the R9 290 does - on ETH at least, where the code has been well optimised, they hash pretty much identically, presuming the same clocks and the same BIOS memory system mods).

Do keep in mind that for ETH at least there IS a miner (genoil's) that started out as CUDA specific and is well optimised for NVidia, yet the AMD RX series cards match or better the NVidia GTX 10xx cards on that algorithm on both raw performance AND hash/watt, and at a much lower price point.
This isn't the case as much for ZEC (the code is still getting optimised), but it's become apparent that ZEC is yet another "memory hard" algorithm by design and implementation that does not reward superior compute performance past the point where the memory subsystem starts hitting its limits (if not as much so as ETH).

No, I'm not an "ETH baby" - all of my early ETH rigs were Scrypt rigs back in the day (give or take some cards getting moved around) that spent their time after Scrypt went ASIC doing d.net work (and most of the HD 7750s from my scrypt days are STILL working d.net via the BOINC MooWrapper project).

I don't know where you're coming up with NVidia being 40% more efficient than the RX 4xx series - right now it's looking like actual efficiency is more or less a tossup, but very dependent on what you're actually working on with a given card. Even on Folding, where NVidia offers a clear performance lead, the RX 480 is a tossup with the GTX 1070 on PPD/$ at the card level and very close at the system level, and very close on PPD/watt (less than 10% per the data I've seen at the card level).
I do NOT see a 40% efficiency benefit to NVidia even in one of its biggest strongholds.

That is definitely incorrect. Private kernels killed Scrypt mining... ASICs came along later. If you weren't around at the end of '14 you wouldn't have figured that out. Not everything is the big bad ASIC boogeyman... Sometimes it's just greed and people turning off the lights. You can Google my posts and check them out from BCT in '14. Hence why I'm here trying to motivate some development on Nvidia's side.

"I don't know where you're coming up with NVidia being 40% more efficient than the RX 4xx series - right now it's looking like actual efficiency is more or less a tossup"

With a lack of coding for Nvidia, you're making this statement off of current conditions and rates. Do you think as much effort is going into developing code for Nvidia as for AMD right now? The answer is no. You already said no. The efficiency argument is based off of algos that actually use more than memory, and not just that, but gaming as well. While mining isn't gaming, gaming has been optimized quite a bit over the years. When one brand is getting maxed, the other is as well. Go look up some hardware benchmarks; that's pretty fundamental stuff.

Genoil's miner isn't CUDA optimized. That was Dagger, not Equihash. His endeavours in Equihash are focused on AMD hardware, as that is what he owns. It wasn't until recently that he made an Nvidia-compatible miner, and it's just a port of SAv5.

Alright, how about some sources for Equihash being hardware memory-bus-width locked that I haven't seen on BCT and that aren't extrapolated from a CPU miner or from current rates on AMD hardware? You know the Fury also has a better processor than an R9 290? You also know that an RX 480 is basically a mid-range GPU with processing power to match (close to or a bit less than an R9 290)?

Do you also know that if you want to check whether an algo is memory limited, you can go into GPU-Z, check the MCU (memory controller unit), and see the load on it? Mine sits at 38% at 108 Sol/s for a 1070. If we want to take a page from your book and 'extrapolate' from that, that means there is potential there for 284 Sol/s on a 1070 - that is, IF it's completely memory bound and without any sort of optimizing for Nvidia hardware. NeoS also sits around 30% MCU usage. Dagger sits at 100% right before it trades off to more GPU and power usage (if you use a dual miner). Cryptonote also sits at 100% utilization. Weird, all the 'smart minds' and no one bothers checking the gauges.
legendary
Activity: 3248
Merit: 1070
What do you get with the memory underclocked? (-500?)

620-630 Sol/s, and 670 with +550 mem. Not worth it for just 50 Sol/s, because the consumption also increases by 50 watts or more.
hero member
Activity: 710
Merit: 502
What do you get with the memory underclocked? (-500?)
Hi SP_, I cannot test that because I didn't do the screen mod to allow OC on Linux (I should do that, it is not hard, I'm just being lazy  Grin)

Can you test it, newmz?

It should be better than 88 Sol/s in my case, but it seems that my processor is not fast enough (low GPU utilization, 88%).
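(For anyone else held up by the same thing: the 'screen mod' is just enabling CoolBits in xorg.conf. A rough sketch, assuming the stock NVIDIA Linux driver tools; the offset value is only illustrative, and the transfer-rate offset is twice the effective memory-clock change:)
Code:
# enable clock-offset controls, then restart X
sudo nvidia-xconfig --cool-bits=28
# e.g. underclock GPU 0's memory by ~500 MHz at the top performance level
nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=-1000"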
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
What do you get with the memory underclocked? (-500?)
sr. member
Activity: 372
Merit: 250
The road of excess leads to the palace of wisdom
eXtremal's latest kernel was merged into my Windows port. Thanks a lot!

https://github.com/zawawawa/silentarmy

Really good - a small improvement each time. Now I'm doing 100 Sol/s per GPU at stock without overclock, actually with a heavy underclock on the mem.

I think we still have plenty of room, because a 1070 should at the very least reach 150, but I suspect 180 can be achieved too.

I get 112-114 Sol/s on a 1070 with the memory overclocked +500.
hero member
Activity: 710
Merit: 502
The new krnlx build is getting 88 Sol/s on one GTX 1070 G1 at stock clocks and a 90 W TPL, Lubuntu 14.04. It's getting better now  Grin
GPU utilization reported by nvidia-smi is 88%, so it's probably the processor being pushed hard; that's probably why I am not getting the advertised 91 Sol/s.
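(A handy way to keep an eye on that without a GUI - standard nvidia-smi query options:)
Code:
# poll GPU core and memory-controller utilization once per second
nvidia-smi --query-gpu=utilization.gpu,utilization.memory --format=csv -l 1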