
Topic: [ANN] Kryptohash | Brand new PoW algo | 320bit hash | ed25519 | PID algo for dif - page 29.

sr. member
Activity: 329
Merit: 250
I'm going to put my HD7970 in my Win7 system and see what happens.


Edit: Got the same results. It seems that with the 14.12 Omega drivers you can only overclock the GPU. Underclocking either the GPU or the memory won't work.
sr. member
Activity: 350
Merit: 250
The ADL code in sgminer has a lot of changes.  I'm going to try to port them to cgminer-khc and see what happens.



Edit:

No change in behavior.  Autotune only works on the 6950. This rig has Win7 x64.

Perhaps there is an issue with the driver in Windows 8.1. My two rigs where autotune won't work are running Windows 8.1 Pro.

I've got the 14.12 Omega on Win7 x64. The drivers seem pretty stable, except when the VRM temps reach the tolerance threshold and starve the core voltage; that is pretty much what saves my second card from bricking. Both cards are running the 15.43 BIOS, since it idles the cards at 500 MHz, rather than the 15.44 BIOS, which idles at 300 MHz and allows the mouse pointer driver corruption after a card gets reset by the driver.
sr. member
Activity: 329
Merit: 250
The ADL code in sgminer has a lot of changes.  I'm going to try to port them to cgminer-khc and see what happens.



Edit:

No change in behavior.  Autotune only works on the 6950. This rig has Win7 x64.

Perhaps there is an issue with the driver in Windows 8.1. My two rigs where autotune won't work are running Windows 8.1 Pro.
sr. member
Activity: 350
Merit: 250
I've switched to beta2; I am worried about my power supply. Angry

There is no need to go back to beta2. The Kernel is the same.

You can remove the '--cl-opt-disable' option, delete the 'kshake320-546-uint2{GPUNAME}v1w256i4.bin' file, and cgminer beta3 should run just like beta2.

Or, you can take advantage of the Intensity setting. For your R270, you can use intensity 3 or 4 on each card. Try specifying: -I 3 or -I 4
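Something like this, for example (a sketch only; the pool URL and credentials are placeholders, and -I applies to all cards unless you give a comma-separated per-card list):

cgminer.exe --kryptohash -o http://YOUR_POOL:PORT -u YOUR_USER -p YOUR_PASS -I 3

As above, delete the old .bin file first so the Kernel gets rebuilt with the new setting.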


@wr104

I am also getting the HW errors again with the Nvidia cards.  Was there something besides the .bin files that needed to be deleted?

EDIT: It works fine for my 750ti's. Getting around 80 kh/s, up from about 50 kh/s. The 970 gets HW errors though. Not sure what is going on with that yet.

As I said, the Kernel hasn't changed. If the '--cl-opt-disable' option isn't helping on the 970, just remove it and cgminer should pretty much work as before. Don't forget to delete 'kshake320-546-uint2{GPUNAME}v1w256i4.bin' every time you make a change, so cgminer is forced to re-compile the Kernel the next time it runs.

Also, try changing the Work Size using the '--worksize' option on your 970. The default is now 256, and your nVidia GPU might not like that value.
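For example (a sketch only; substitute your actual GPU name in the .bin filename, and the value 64 is just a guess at what the 970 might prefer):

del kshake320-546-uint2{GPUNAME}v1w256i4.bin
cgminer.exe --kryptohash --worksize 64 ...

--worksize also accepts a comma-separated list, one value per GPU, if you run mixed cards.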

 


A request for an increased value range for 'gpu-engine'. Presently it will not accept a value below 1 GHz; by default it tries to run at 1.1 GHz.

Presently, if using "cl-opt-disable" : true with "Intensity" : "8" on an R9 280x DD, I am getting a fair hash rate of about 235 KH/s, but the driver is locking up. It doesn't lock up until the card reaches around 90C, which puts the VRMs near their max tolerance of 120C with the increased wattage draw under the new kernel.

Would it be possible to allow gpu-engine to go down to around 850 MHz on the Tahiti, to maintain full utilization of the memory while providing the ability to lower running temps across all the hardware on the card?
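Roughly what I'd like to be able to run (a sketch; stock cgminer already accepts a range for gpu-engine, and temp-overheat/temp-hysteresis are the existing options named above, so only the 1 GHz floor is in the way):

cgminer.exe --kryptohash --gpu-engine 850-1000 --temp-overheat 85 --temp-hysteresis 3 ...

with the clock dropping toward 850 MHz as the card approaches temp-overheat.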

I've been playing with all the GPU settings (Manual Engine clock, Autotune, Powertune, etc.) on all my cards, and this seems to be an issue with the drivers or with the ADL SDK.

On my R9 290x, only reducing Powertune reduces the GPU clock.
On my HD7970, nothing makes the GPU clock change.
On my HD6950, everything works fine.


On a 280x I can directly specify the clock setting or give it a range (1000-1100+), which cgminer is supposed to adjust dynamically based on temp-overheat + temp-hysteresis, and it was observed doing so on beta2. It just won't go below 1 GHz even when told to.

It's a bummer that one cannot do the same based on VRM temps, as that would have saved a lot of cards throughout the history of BTC. No biggie for me though; I'd just swap out burned VRMs with new ones, since I do a lot of component-level board repairs at my day gig. Could be another interesting application of PID in there as well. Smiley

I don't have the GPU temp swap thing going on, unless it has GPU-Z spoofed too. Funny though: GPU0 does run hotter than GPU1, but the VRM on GPU1 runs hotter than on GPU0. I've seen this while mining under other algos too, so it may be a problem with that card. Wackier still, the temp-overheat + temp-hysteresis logic that triggers the clock-down seems to throttle the cooler GPU1 card. But realistically, the '"cl-opt-disable" : true' option is for water-cooled rigs. I was running the clock at 1 GHz, WS256, I6 and still hit 90C, caught the VRM at 120C, and only saw a 24 Kh/s gain. Not worth it to me unless I can run fewer MHz on the clock and avoid having to unbrick my card.

Unless something got messed up with the ADL library when I built Beta3, the Autotune should work the same as Beta2.

The temperature mapping issue exists because there is no way to automatically correlate, in code, what OpenCL thinks is GPU0 with what ADL reports as GPU0.
The way to tell for sure is to disable one GPU in cgminer and watch the temperatures in the status bar. If the disabled GPU doesn't cool down, then you know you have an incorrect mapping and need to use the --gpu-map option.
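For example, on a hypothetical two-GPU rig, start cgminer with only device 0 enabled:

cgminer.exe --kryptohash -d 0 -o http://YOUR_POOL:PORT -u YOUR_USER -p YOUR_PASS

GPU1 is now idle, so its temperature in the status bar should fall. If it doesn't (and the GPU0 reading falls instead), the mapping is crossed and you would add something like --gpu-map 0:1,1:0.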

Edit: I just recompiled a new cgminer from a fresh sandbox, including the latest ADL SDK, and I get the same results... Autotune isn't working on Tahiti or Hawaii. Cayman works fine.



If I disable GPU0 then the temp drops on GPU0, but the other condition still exists: if GPU0 goes to 88C, it throttles GPU1 down to 1 GHz. So maybe the Autotune mappings are crossed in your latest kernel, but then again, the cgminer source has had a lot of hands in it.
hero member
Activity: 690
Merit: 500
120C is max tolerance

How do you know that? I mean, is there a datasheet or something?
sr. member
Activity: 350
Merit: 250
I'm getting around 240 - 247 per 280x,
clocked 1000/1050, vddc 1100.
Still having problems keeping the heat on my cards down; the secondary card hits 90+ in a couple of minutes.

At that temp you'd better watch your VRMs. 120C is the max tolerance, and if you let it go over that for any length of time you are doing damage and will brick your card(s). GPU-Z will tell you your VRM temps.
sr. member
Activity: 329
Merit: 250
I've switched to beta2; I am worried about my power supply. Angry

There is no need to go back to beta2. The Kernel is the same.

You can remove the '--cl-opt-disable' option, delete the 'kshake320-546-uint2{GPUNAME}v1w256i4.bin' file, and cgminer beta3 should run just like beta2.

Or, you can take advantage of the Intensity setting. For your R270, you can use intensity 3 or 4 on each card. Try specifying: -I 3 or -I 4


@wr104

I am also getting the HW errors again with the Nvidia cards.  Was there something besides the .bin files that needed to be deleted?

EDIT: It works fine for my 750ti's. Getting around 80 kh/s, up from about 50 kh/s. The 970 gets HW errors though. Not sure what is going on with that yet.

As I said, the Kernel hasn't changed. If the '--cl-opt-disable' option isn't helping on the 970, just remove it and cgminer should pretty much work as before. Don't forget to delete 'kshake320-546-uint2{GPUNAME}v1w256i4.bin' every time you make a change, so cgminer is forced to re-compile the Kernel the next time it runs.

Also, try changing the Work Size using the '--worksize' option on your 970. The default is now 256, and your nVidia GPU might not like that value.

 


A request for an increased value range for 'gpu-engine'. Presently it will not accept a value below 1 GHz; by default it tries to run at 1.1 GHz.

Presently, if using "cl-opt-disable" : true with "Intensity" : "8" on an R9 280x DD, I am getting a fair hash rate of about 235 KH/s, but the driver is locking up. It doesn't lock up until the card reaches around 90C, which puts the VRMs near their max tolerance of 120C with the increased wattage draw under the new kernel.

Would it be possible to allow gpu-engine to go down to around 850 MHz on the Tahiti, to maintain full utilization of the memory while providing the ability to lower running temps across all the hardware on the card?

I've been playing with all the GPU settings (Manual Engine clock, Autotune, Powertune, etc.) on all my cards, and this seems to be an issue with the drivers or with the ADL SDK.

On my R9 290x, only reducing Powertune reduces the GPU clock.
On my HD7970, nothing makes the GPU clock change.
On my HD6950, everything works fine.


On a 280x I can directly specify the clock setting or give it a range (1000-1100+), which cgminer is supposed to adjust dynamically based on temp-overheat + temp-hysteresis, and it was observed doing so on beta2. It just won't go below 1 GHz even when told to.

It's a bummer that one cannot do the same based on VRM temps, as that would have saved a lot of cards throughout the history of BTC. No biggie for me though; I'd just swap out burned VRMs with new ones, since I do a lot of component-level board repairs at my day gig. Could be another interesting application of PID in there as well. Smiley

I don't have the GPU temp swap thing going on, unless it has GPU-Z spoofed too. Funny though: GPU0 does run hotter than GPU1, but the VRM on GPU1 runs hotter than on GPU0. I've seen this while mining under other algos too, so it may be a problem with that card. Wackier still, the temp-overheat + temp-hysteresis logic that triggers the clock-down seems to throttle the cooler GPU1 card. But realistically, the '"cl-opt-disable" : true' option is for water-cooled rigs. I was running the clock at 1 GHz, WS256, I6 and still hit 90C, caught the VRM at 120C, and only saw a 24 Kh/s gain. Not worth it to me unless I can run fewer MHz on the clock and avoid having to unbrick my card.

Unless something got messed up with the ADL library when I built Beta3, the Autotune should work the same as Beta2.

The temperature mapping issue exists because there is no way to automatically correlate, in code, what OpenCL thinks is GPU0 with what ADL reports as GPU0.
The way to tell for sure is to disable one GPU in cgminer and watch the temperatures in the status bar. If the disabled GPU doesn't cool down, then you know you have an incorrect mapping and need to use the --gpu-map option.

Edit: I just recompiled a new cgminer from a fresh sandbox, including the latest ADL SDK, and I get the same results... Autotune isn't working on Tahiti or Hawaii. Cayman works fine.

member
Activity: 143
Merit: 10
I'm getting around 240 - 247 per 280x,
clocked 1000/1050, vddc 1100.
Still having problems keeping the heat on my cards down; the secondary card hits 90+ in a couple of minutes.
member
Activity: 86
Merit: 11
Nice miner upgrade regarding hashrate.
My power consumption went from around 240 watts to 315 watts on my Sapphire R9 290.

Update: I forgot the ADL_SDK. After recompiling, consumption is now around 250 watts.
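In case it helps anyone else, the usual cgminer build pattern (a sketch from memory, paths hypothetical) is to drop the ADL SDK headers into the source tree before configuring:

cp /path/to/ADL_SDK/include/*.h cgminer-khc/ADL_SDK/
cd cgminer-khc && ./autogen.sh && ./configure && make

Without those headers the build simply proceeds without ADL support, which would explain the extra draw if Powertune limits weren't being applied.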
sr. member
Activity: 350
Merit: 250
I guess when beta4 comes out, it will consume as much power as the scrypt algo. Angry

I think it's rather good because it means that the miner is fully utilizing the GPU. Unless you can't adjust the intensity at all.

You know you have full utilization of the card(s) when you no longer need to run the space heater that burns 1500 watts; the trick is not to burn up the new heater... Grin
hero member
Activity: 690
Merit: 500
I guess when beta4 comes out, it will consume as much power as the scrypt algo. Angry

I think it's rather good because it means that the miner is fully utilizing the GPU. Unless you can't adjust the intensity at all.
sr. member
Activity: 350
Merit: 250
I've switched to beta2; I am worried about my power supply. Angry

There is no need to go back to beta2. The Kernel is the same.

You can remove the '--cl-opt-disable' option, delete the 'kshake320-546-uint2{GPUNAME}v1w256i4.bin' file, and cgminer beta3 should run just like beta2.

Or, you can take advantage of the Intensity setting. For your R270, you can use intensity 3 or 4 on each card. Try specifying: -I 3 or -I 4


@wr104

I am also getting the HW errors again with the Nvidia cards.  Was there something besides the .bin files that needed to be deleted?

EDIT: It works fine for my 750ti's. Getting around 80 kh/s, up from about 50 kh/s. The 970 gets HW errors though. Not sure what is going on with that yet.

As I said, the Kernel hasn't changed. If the '--cl-opt-disable' option isn't helping on the 970, just remove it and cgminer should pretty much work as before. Don't forget to delete 'kshake320-546-uint2{GPUNAME}v1w256i4.bin' every time you make a change, so cgminer is forced to re-compile the Kernel the next time it runs.

Also, try changing the Work Size using the '--worksize' option on your 970. The default is now 256, and your nVidia GPU might not like that value.

 


A request for an increased value range for 'gpu-engine'. Presently it will not accept a value below 1 GHz; by default it tries to run at 1.1 GHz.

Presently, if using "cl-opt-disable" : true with "Intensity" : "8" on an R9 280x DD, I am getting a fair hash rate of about 235 KH/s, but the driver is locking up. It doesn't lock up until the card reaches around 90C, which puts the VRMs near their max tolerance of 120C with the increased wattage draw under the new kernel.

Would it be possible to allow gpu-engine to go down to around 850 MHz on the Tahiti, to maintain full utilization of the memory while providing the ability to lower running temps across all the hardware on the card?

I've been playing with all the GPU settings (Manual Engine clock, Autotune, Powertune, etc.) on all my cards, and this seems to be an issue with the drivers or with the ADL SDK.

On my R9 290x, only reducing Powertune reduces the GPU clock.
On my HD7970, nothing makes the GPU clock change.
On my HD6950, everything works fine.


On a 280x I can directly specify the clock setting or give it a range (1000-1100+), which cgminer is supposed to adjust dynamically based on temp-overheat + temp-hysteresis, and it was observed doing so on beta2. It just won't go below 1 GHz even when told to.

It's a bummer that one cannot do the same based on VRM temps, as that would have saved a lot of cards throughout the history of BTC. No biggie for me though; I'd just swap out burned VRMs with new ones, since I do a lot of component-level board repairs at my day gig. Could be another interesting application of PID in there as well. Smiley

I don't have the GPU temp swap thing going on, unless it has GPU-Z spoofed too. Funny though: GPU0 does run hotter than GPU1, but the VRM on GPU1 runs hotter than on GPU0. I've seen this while mining under other algos too, so it may be a problem with that card. Wackier still, the temp-overheat + temp-hysteresis logic that triggers the clock-down seems to throttle the cooler GPU1 card. But realistically, the '"cl-opt-disable" : true' option is for water-cooled rigs. I was running the clock at 1 GHz, WS256, I6 and still hit 90C, caught the VRM at 120C, and only saw a 24 Kh/s gain. Not worth it to me unless I can run fewer MHz on the clock and avoid having to unbrick my card.
jr. member
Activity: 59
Merit: 10
I guess when beta4 comes out, it will consume as much power as the scrypt algo. Angry
sr. member
Activity: 329
Merit: 250
I've switched to beta2; I am worried about my power supply. Angry

There is no need to go back to beta2. The Kernel is the same.

You can remove the '--cl-opt-disable' option, delete the 'kshake320-546-uint2{GPUNAME}v1w256i4.bin' file, and cgminer beta3 should run just like beta2.

Or, you can take advantage of the Intensity setting. For your R270, you can use intensity 3 or 4 on each card. Try specifying: -I 3 or -I 4


@wr104

I am also getting the HW errors again with the Nvidia cards.  Was there something besides the .bin files that needed to be deleted?

EDIT: It works fine for my 750ti's. Getting around 80 kh/s, up from about 50 kh/s. The 970 gets HW errors though. Not sure what is going on with that yet.

As I said, the Kernel hasn't changed. If the '--cl-opt-disable' option isn't helping on the 970, just remove it and cgminer should pretty much work as before. Don't forget to delete 'kshake320-546-uint2{GPUNAME}v1w256i4.bin' every time you make a change, so cgminer is forced to re-compile the Kernel the next time it runs.

Also, try changing the Work Size using the '--worksize' option on your 970. The default is now 256, and your nVidia GPU might not like that value.

 


A request for an increased value range for 'gpu-engine'. Presently it will not accept a value below 1 GHz; by default it tries to run at 1.1 GHz.

Presently, if using "cl-opt-disable" : true with "Intensity" : "8" on an R9 280x DD, I am getting a fair hash rate of about 235 KH/s, but the driver is locking up. It doesn't lock up until the card reaches around 90C, which puts the VRMs near their max tolerance of 120C with the increased wattage draw under the new kernel.

Would it be possible to allow gpu-engine to go down to around 850 MHz on the Tahiti, to maintain full utilization of the memory while providing the ability to lower running temps across all the hardware on the card?

I've been playing with all the GPU settings (Manual Engine clock, Autotune, Powertune, etc.) on all my cards, and this seems to be an issue with the drivers or with the ADL SDK.

On my R9 290x, only reducing Powertune reduces the GPU clock.
On my HD7970, nothing makes the GPU clock change.
On my HD6950, everything works fine.
full member
Activity: 129
Merit: 100
The new cgminer is almost killing my power supply. I think I need to set the intensity lower.
sr. member
Activity: 329
Merit: 250
I've switched to beta2; I am worried about my power supply. Angry

There is no need to go back to beta2. The Kernel is the same.

You can remove the '--cl-opt-disable' option, delete the 'kshake320-546-uint2{GPUNAME}v1w256i4.bin' file, and cgminer beta3 should run just like beta2.

Or, you can take advantage of the Intensity setting. For your R270, you can use intensity 3 or 4 on each card. Try specifying: -I 3 or -I 4


@wr104

I am also getting the HW errors again with the Nvidia cards.  Was there something besides the .bin files that needed to be deleted?

EDIT: It works fine for my 750ti's. Getting around 80 kh/s, up from about 50 kh/s. The 970 gets HW errors though. Not sure what is going on with that yet.

As I said, the Kernel hasn't changed. If the '--cl-opt-disable' option isn't helping on the 970, just remove it and cgminer should pretty much work as before. Don't forget to delete 'kshake320-546-uint2{GPUNAME}v1w256i4.bin' every time you make a change, so cgminer is forced to re-compile the Kernel the next time it runs.

Also, try changing the Work Size using the '--worksize' option on your 970. The default is now 256, and your nVidia GPU might not like that value.

 


A request for an increased value range for 'gpu-engine'. Presently it will not accept a value below 1 GHz; by default it tries to run at 1.1 GHz.

Presently, if using "cl-opt-disable" : true with "Intensity" : "8" on an R9 280x DD, I am getting a fair hash rate of about 235 KH/s, but the driver is locking up. It doesn't lock up until the card reaches around 90C, which puts the VRMs near their max tolerance of 120C with the increased wattage draw under the new kernel.

Would it be possible to allow gpu-engine to go down to around 850 MHz on the Tahiti, to maintain full utilization of the memory while providing the ability to lower running temps across all the hardware on the card?

I'll look into the 'gpu-engine' parameter, but in the meantime, try lowering the Intensity for GPU0. For some reason, GPU0 always seems to be the one getting hotter. The R9 280x has 2048 stream processors, meaning that Intensity 8 is the max value you can use on it.
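For example, to back only GPU0 off while leaving the second card at 8 (a sketch; I haven't verified these exact values on a 280x):

cgminer.exe --kryptohash -I 7,8 ...

The comma-separated form sets one Intensity per GPU, in device order.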
 
Also, beware that on systems with more than one GPU, cgminer may incorrectly show a secondary card's temperature as GPU0's temp. I realized this after I put my second 290x in the rig that has the HD7970, keeping the HD7970 disabled. To correct this, use the '--gpu-map' option to map the GPU number seen by OpenCL to the GPU number seen by ADL.
In my case, since I only have 2 GPUs, the mapping was simple: --gpu-map 0:1,1:0

legendary
Activity: 1400
Merit: 1000
@wr104

I am also getting the HW errors again with the Nvidia cards.  Was there something besides the .bin files that needed to be deleted?

EDIT: It works fine for my 750ti's. Getting around 80 kh/s, up from about 50 kh/s. The 970 gets HW errors though. Not sure what is going on with that yet.

What build instructions, or wget and command line, did you use to get it working with your 750ti?
I've been trying for days; I must be missing something.
I'm on 64-bit Ubuntu 14.

I did not compile or build anything.

I just downloaded and ran the miner from the GitHub release. I am on Windows 7 64-bit.

Did you try just running the miner that he already has built for Linux? https://github.com/kryptohash/cgminer-khc/releases/tag/v3.7.6-Beta3

EDIT: Here is the bat that I use: cgminer.exe --kryptohash --kernel kshake320v2 -d 1,2,3 -I 14 -o http://khc.nonce-pool.com:4300 -u YOUR_USER_NAME_HERE -p YOUR_PASSWORD_HERE --shaders 640 --shaders-mul 8
I probably should be using Nonce Pool's stratum, though.
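For Linux, a rough sketch using the prebuilt Beta3 release (the asset filename below is hypothetical; check the release page for the real one):

wget https://github.com/kryptohash/cgminer-khc/releases/download/v3.7.6-Beta3/LINUX_X64_TARBALL
tar xf LINUX_X64_TARBALL && cd cgminer-khc
./cgminer --kryptohash -o http://khc.nonce-pool.com:4300 -u YOUR_USER_NAME_HERE -p YOUR_PASSWORD_HERE

with the same user/pass placeholders as in the bat above.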
sr. member
Activity: 350
Merit: 250
I've switched to beta2; I am worried about my power supply. Angry

There is no need to go back to beta2. The Kernel is the same.

You can remove the '--cl-opt-disable' option, delete the 'kshake320-546-uint2{GPUNAME}v1w256i4.bin' file, and cgminer beta3 should run just like beta2.

Or, you can take advantage of the Intensity setting. For your R270, you can use intensity 3 or 4 on each card. Try specifying: -I 3 or -I 4


@wr104

I am also getting the HW errors again with the Nvidia cards.  Was there something besides the .bin files that needed to be deleted?

EDIT: It works fine for my 750ti's. Getting around 80 kh/s, up from about 50 kh/s. The 970 gets HW errors though. Not sure what is going on with that yet.

As I said, the Kernel hasn't changed. If the '--cl-opt-disable' option isn't helping on the 970, just remove it and cgminer should pretty much work as before. Don't forget to delete 'kshake320-546-uint2{GPUNAME}v1w256i4.bin' every time you make a change, so cgminer is forced to re-compile the Kernel the next time it runs.

Also, try changing the Work Size using the '--worksize' option on your 970. The default is now 256, and your nVidia GPU might not like that value.

 


A request for an increased value range for 'gpu-engine'. Presently it will not accept a value below 1 GHz; by default it tries to run at 1.1 GHz.

Presently, if using "cl-opt-disable" : true with "Intensity" : "8" on an R9 280x DD, I am getting a fair hash rate of about 235 KH/s, but the driver is locking up. It doesn't lock up until the card reaches around 90C, which puts the VRMs near their max tolerance of 120C with the increased wattage draw under the new kernel.

Would it be possible to allow gpu-engine to go down to around 850 MHz on the Tahiti, to maintain full utilization of the memory while providing the ability to lower running temps across all the hardware on the card?
hero member
Activity: 979
Merit: 510
@wr104

I am also getting the HW errors again with the Nvidia cards.  Was there something besides the .bin files that needed to be deleted?

EDIT: It works fine for my 750ti's. Getting around 80 kh/s, up from about 50 kh/s. The 970 gets HW errors though. Not sure what is going on with that yet.

What build instructions or wget and command line did you use to get it working with your 750ti ?
I've been trying for days, I must be missing something.
I'm on Linux 64bit Ubuntu 14.
legendary
Activity: 1400
Merit: 1000
I meant to say max group work size.  Sorry.

Delete the .bin and try --worksize 1024 and see what happens.


Also, if you want your 750s to use a different worksize value, you can specify, for example: --worksize 1024,256,256,256

I still get the HW error, but that is OK; my 750ti's work. Even when --worksize 1024 did work, the speed went from over 200 kh/s down to 40 kh/s.

I must need something else, some file, for the 970 to work. Maybe an updated OpenCL file, or this Nvidia driver is holding it back.