
Topic: [XMR] JCE Miner Cryptonight/forks, now with GPU! - page 86. (Read 90814 times)

member
Activity: 350
Merit: 22
I didn't test on cards older than the HD7000 and I don't think it will even compile, but you can test.

Use the same config as my Bonaire in the doc, and try raising multi_hash to 240 in case it's better.
newbie
Activity: 16
Merit: 0
I have a very old AMD HD5670 that I am just curious to try with the GPU miner. I know you said on Github you didn't have very good luck with a 1GB card, but I am just too curious.

What settings do you think I should try with this old 1GB card?


Thanks!
member
Activity: 350
Merit: 22
hello,

When I say the fees are included, I mean the displayed hashrate is the same whether or not you're in a devfee session. It is the raw physical hashrate.
When you press 'h', you lock the mining thread to force it to update the hashrate counter and display it on screen, so pressing it again and again lowers the hashrate; that's normal, and the same applies to the CPU version. Better to use the JSON HTTP output, which is updated in the network thread and doesn't have this problem: from your browser you can press F5 as much as you want, just not 'h' from the console.

The effective hashrate should converge, mathematically, to a bit below the average physical hashrate, normally by ~2% over the very long term.
If it's higher, you had a good-luck bias. You can check that number two ways:
1. Look at your pool; you should see a similar hashrate (the pool's averaging window may differ from JCE's, but over the long term they should converge).
2. Take your total hashes and divide by the uptime; you should get the same value. The hashes themselves can be checked against your pool history or the JCE log, and the uptime by... your wall clock.

A difference can be explained by instant peaks from your cards that make you mine faster than the blue instant hashrate reports, if you mine so fast that the 512-hash window becomes stale quickly. I admit 512 was good for a CPU, which mines at 150 h/s per core at best, but that's just half a second for a Vega, and an average over half a second is too short. That's the problem. Good remark; I'll increase the averaging period for fast cards like the Vega.

Version 0.30c online
Quote
Netcode fixes
Bittube v4
newbie
Activity: 11
Merit: 0
Hi JCE, I just started testing your miner (I opted for the "a" version due to noted regressions in "b"), and after about 15 hours of testing on my 8 x Vega 56 rig things are looking very good. I can post more details once it has proven stable over time, but upon initial observation I am quite pleased.

I understand that the h/s displayed in blue is the true average physical hashes per second over the past 512 hashes. In Advanced Topics, you state that the displayed number has no tweak and includes the fees. If I understand correctly, could this be why, when I refresh quickly using 'h', I see values drop considerably on one thread, then another, then yet another? The only reason I ask is that a highly variable h/s on any given thread can be an early warning sign that the card/thread is unstable due to overly aggressive configuration settings or clocks/voltages. Based on my observations the numbers come right back up into a normal, stable range, so it seems this is by design, that you mine to your dev pool on a thread-by-thread basis, and that this is not necessarily a sign of instability. I just want to be sure before I spend more time tweaking performance.

Also, the effective net hashrate, calculated as total hashes / total seconds spent mining, is often much higher than my average physical hashrate, even over an extended period. Since this number is supposed to account for stale/invalid shares as well as fees, that's surprising; I would expect it to be lower. Do you have any thoughts on this?

Keep up the good work!
member
Activity: 350
Merit: 22
A possible technical reason could be a bug in the code causing false positives: good shares that are not found.
Also, if the mining of N shares (N = multi_hash in JCE, intensity in Stak) is interrupted after only K shares have been computed (K < N), the counters can diverge from what was really mined.
member
Activity: 190
Merit: 59
OK, sorry, I won't do it anymore. I just got angry because nobody listened to me. I will stick with JCE now.
member
Activity: 350
Merit: 22
It may be possible, but slightly less secure. Kernels are not insecure in RAM; they are insecure when passed to OpenCL as binaries.

I read that SRB indirectly replied to my post about hashrate. I didn't want to pollute his topic, so I answer here. Doktor, you're welcome to answer here if you want.

I said, and repeat, that Claymore was cheating, at least in the XMR miner. Just use the no-fee mode and you get the exact same displayed hashrate with a punishing -5% effective result. Plug it into a proxy and you get -10%, as a vengeance. And the hashrate difference was a lot worse on versions 10+ than on 9.7.
That's what I carefully measured when I mined with Claymore, before I developed my own miner.
And I say the opposite for Stak and Xmrig. It means anybody claiming Stak or Xmrig cheats (I read one such claim on your topic) just did a bad test, since they explicitly do nothing bad; their code is clear, open, and clean.
The only cheaty thing is that Stak mines 2 s more per session than stated, to handle the connection delay. I find it oversized, since no pool has a 2 s ping, but that's a small difference.

Code:
inline bool is_dev_time()
{
//Add 2 seconds to compensate for connect
constexpr size_t dev_portion = static_cast<size_t>(double(iDevDonatePeriod) * fDevDonationLevel + 2.);
....

However, everyone, please do not jump to another miner's thread to claim you found a better one. That's not polite. I know Xmrig did it a lot to Stak, so it's somehow a tradition, but please avoid it.
newbie
Activity: 54
Merit: 0
Quote
"Lermite" mmh, that name sounds familiar...
I'm the one you're thinking about.

One way to save compile time without compromising security would be to compile the kernel of the first thread of a GPU and keep it in RAM or cache, injecting it into the other threads of the same GPU if they have the exact same parameters.
This optimization would not apply to a GPU with only one thread, nor to one with several threads with different parameters, but since the most common case is two identical threads per GPU, it could save much time at each startup.
member
Activity: 350
Merit: 22
Wow, so many messages, thanks all.

I'm currently burning my Ryzen with the Bittube-v4 fork test. I'm one day late, but the speed on Ryzen is higher than the other miner's. I won't give numbers since I may have badly configured the other one (xmrig); I'll let you test for yourselves, but my assembly optimization looks good.

Code:
Detecting OpenCL-capable GPUs...
No OpenCL-capable GPU found.

I have a known bug with APUs, nVidia cards, and probably other cases. I have several exotic cards on my rigs (Baffin, Tahiti, Pitcairn, and even a Bonaire) but no APU or RX 470 yet. That's why I keep it labeled "prototype" for now. I plan to buy an APU to reproduce the bug, and I hope fixing it will fix the other cases too.

Quote
Could your program compare the currently compiled kernel to the best result (which would be saved) and keep the better of the two? That way, each GPU could have an optimum kernel for its intensity setup.

As I already mentioned, the OpenCL in the prototype is dynamic and expandable for security reasons. There is no way to save or reuse a kernel, on purpose. JCE kernels, should you dump them, would not work, even on a subsequent run of JCE.
I hope I'll find a way to be both secure and fast, and then maybe I'll be able to provide reusable kernels.

SRB kernels are normal kernels with just the file encrypted. One can attach an OpenCL debugger and look at the clear IL code (GCN pseudo-assembly). The same goes for Cast and Claymore. I want to be more secure. (No, I didn't do that myself; my code uses exclusive optimizations I originally developed for the 32-bit JCE CPU version, since GCN GPUs are scalar 32-bit now, hence why the performance is different, and lower on Heavy for now.)

Quote
what happens with miner if one of the cards get stuck or some problem occurs?
Yeah, a watchdog is planned, to look for zero hashrate or GPU overheat. But it's not a high priority for now.

"Lermite" mmh, that name sounds familiar...
newbie
Activity: 54
Merit: 0
Quote
Anyone have some good settings for mining Cryptonight-Fast? That's the latest Masari algo. It looks like the regular V7 settings work pretty well.
Here are mine:

RX 580 4GB with lazy memory, GPU 1280 MHz, VRAM 1950 MHz: 1816 h/s
     { "mode": "GPU", "worksize": 4, "alpha": 128, "beta": 8, "gamma": 4, "delta": 4, "epsilon": 4, "zeta": 4, "index": 1, "multi_hash": 944 },
     { "mode": "GPU", "worksize": 4, "alpha": 128, "beta": 8, "gamma": 4, "delta": 4, "epsilon": 4, "zeta": 4, "index": 1, "multi_hash": 944 },

RX 570 8GB with awesome memory, GPU 1280 MHz, VRAM 2250 MHz: 1868 h/s
     { "mode": "GPU", "worksize": 4, "alpha": 128, "beta": 8, "gamma": 4, "delta": 4, "epsilon": 4, "zeta": 4, "index": 2, "multi_hash": 1024 },
     { "mode": "GPU", "worksize": 4, "alpha": 128, "beta": 8, "gamma": 4, "delta": 4, "epsilon": 4, "zeta": 4, "index": 2, "multi_hash": 1024 },
newbie
Activity: 10
Merit: 0
Hi, can anyone share settings for the Vega 56?

One thing I notice compared to SRB is that JCE draws about 10 watts more.
newbie
Activity: 81
Merit: 0
Anyone have some good settings for mining Cryptonight-Fast? That's the latest Masari algo. It looks like the regular V7 settings work pretty well.
newbie
Activity: 25
Merit: 0
No nicehash. It's on https://pool.catalyst.cash

Quote
is it with Nicehash or a normal pool?
I hope I haven't made a regression because of my Nicehash fixes

I'm implementing Bittube-v2 right now
sr. member
Activity: 1484
Merit: 253
Quote
I mined a bit of XTL with the GPU miner

Config: Win 10 build 1803, AMD drivers 18.6.1
3X RX570 4GB with bios mod.
Display GPU (0): 1230/2100. Intensity: 720
GPU 1: 1230/2035. Intensity: 816
GPU 2: 1230/2030. Intensity: 816

Max speed: 2780 H/s. Power draw less than 350W. Others have a much higher intensity for this kind of algo, so I could get better hashrates.

JCE, one idea:
At some point, for one particular Stellite V4 kernel compilation, my display GPU (always much slower than the other two) had an exceptional hashrate for its intensity: around 1030 H/s. Could your program compare the currently compiled kernel to the best result (which would be saved) and keep the better of the two? That way, each GPU could have an optimum kernel for its intensity setup.
Did you try intensity 832?

Right now:
720
880
880
Max hashrate: 2833 H/s.
345W from the wall

Does the speed drop above 880? Good intensities are 896 and 912.
full member
Activity: 1120
Merit: 131
Quote
I mined a bit of XTL with the GPU miner

Config: Win 10 build 1803, AMD drivers 18.6.1
3X RX570 4GB with bios mod.
Display GPU (0): 1230/2100. Intensity: 720
GPU 1: 1230/2035. Intensity: 816
GPU 2: 1230/2030. Intensity: 816

Max speed: 2780 H/s. Power draw less than 350W. Others have a much higher intensity for this kind of algo, so I could get better hashrates.

JCE, one idea:
At some point, for one particular Stellite V4 kernel compilation, my display GPU (always much slower than the other two) had an exceptional hashrate for its intensity: around 1030 H/s. Could your program compare the currently compiled kernel to the best result (which would be saved) and keep the better of the two? That way, each GPU could have an optimum kernel for its intensity setup.
Did you try intensity 832?

Right now:
720
880
880
Max hashrate: 2833 H/s.
345W from the wall
sr. member
Activity: 1484
Merit: 253
On my RX 580 8GB the miner is pretty stable.
But on my 270X 4GB, one thread periodically gets stuck and doesn't restart until I reboot the computer. On Claymore's 11.3 with the same OC parameters, the 270X runs stably.
member
Activity: 190
Merit: 59
OK, I just started the JCE miner on 34 Vegas to see whether they all get good results and stable operation. A total of 5 rigs with different motherboards, with and without risers, etc. After 24 hours I will count the hashes and run the calculations. If successful, I will update the remaining rigs.

So far I haven't encountered any stability issues. What happens if one of the cards gets stuck or some problem occurs? Does the miner have any kind of watchdog or restart feature? To be honest, I didn't even read the manual; I just copy-pasted settings you guys posted here and it worked beautifully.
sr. member
Activity: 1484
Merit: 253
Quote
I mined a bit of XTL with the GPU miner

Config: Win 10 build 1803, AMD drivers 18.6.1
3X RX570 4GB with bios mod.
Display GPU (0): 1230/2100. Intensity: 720
GPU 1: 1230/2035. Intensity: 816
GPU 2: 1230/2030. Intensity: 816

Max speed: 2780 H/s. Power draw less than 350W. Others have a much higher intensity for this kind of algo, so I could get better hashrates.

JCE, one idea:
At some point, for one particular Stellite V4 kernel compilation, my display GPU (always much slower than the other two) had an exceptional hashrate for its intensity: around 1030 H/s. Could your program compare the currently compiled kernel to the best result (which would be saved) and keep the better of the two? That way, each GPU could have an optimum kernel for its intensity setup.
Did you try intensity 832?
full member
Activity: 1120
Merit: 131
I mined a bit of XTL with the GPU miner

Config: Win 10 build 1803, AMD drivers 18.6.1
3X RX570 4GB with bios mod.
Display GPU (0): 1230/2100. Intensity: 720
GPU 1: 1230/2035. Intensity: 816
GPU 2: 1230/2030. Intensity: 816

Max speed: 2780 H/s. Power draw less than 350W. Others have a much higher intensity for this kind of algo, so I could get better hashrates.

JCE, one idea:
At some point, for one particular Stellite V4 kernel compilation, my display GPU (always much slower than the other two) had an exceptional hashrate for its intensity: around 1030 H/s. Could your program compare the currently compiled kernel to the best result (which would be saved) and keep the better of the two? That way, each GPU could have an optimum kernel for its intensity setup.
hero member
Activity: 935
Merit: 1001
When running probe I get:
Code:
For Windows 64-bits
Analyzing Processors topology...
Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
Assembly codename: generic_aes_avx
  SSE2          : Yes
  SSE3          : Yes
  SSE4          : Yes
  AES           : Yes
  AVX           : Yes
  AVX2          : Yes

Found CPU 0, with:
  L1 Cache:    32 KB, shared with CPU 1
  L2 Cache:   256 KB, shared with CPU 1
  L3 Cache:  8192 KB, shared with CPU 1, 2, 3, 4, 5, 6, 7
Found CPU 1, with:
  L1 Cache:    32 KB, shared with CPU 0
  L2 Cache:   256 KB, shared with CPU 0
  L3 Cache:  8192 KB, shared with CPU 0, 2, 3, 4, 5, 6, 7
Found CPU 2, with:
  L1 Cache:    32 KB, shared with CPU 3
  L2 Cache:   256 KB, shared with CPU 3
  L3 Cache:  8192 KB, shared with CPU 0, 1, 3, 4, 5, 6, 7
Found CPU 3, with:
  L1 Cache:    32 KB, shared with CPU 2
  L2 Cache:   256 KB, shared with CPU 2
  L3 Cache:  8192 KB, shared with CPU 0, 1, 2, 4, 5, 6, 7
Found CPU 4, with:
  L1 Cache:    32 KB, shared with CPU 5
  L2 Cache:   256 KB, shared with CPU 5
  L3 Cache:  8192 KB, shared with CPU 0, 1, 2, 3, 5, 6, 7
Found CPU 5, with:
  L1 Cache:    32 KB, shared with CPU 4
  L2 Cache:   256 KB, shared with CPU 4
  L3 Cache:  8192 KB, shared with CPU 0, 1, 2, 3, 4, 6, 7
Found CPU 6, with:
  L1 Cache:    32 KB, shared with CPU 7
  L2 Cache:   256 KB, shared with CPU 7
  L3 Cache:  8192 KB, shared with CPU 0, 1, 2, 3, 4, 5, 7
Found CPU 7, with:
  L1 Cache:    32 KB, shared with CPU 6
  L2 Cache:   256 KB, shared with CPU 6
  L3 Cache:  8192 KB, shared with CPU 0, 1, 2, 3, 4, 5, 6

Detecting OpenCL-capable GPUs...
No OpenCL-capable GPU found.

I have a motherboard-integrated graphics chip and two RX 470 8GB cards, Win7 64-bit, AMD Robinh00d blockchain driver.

Why aren't the GPUs being recognized?  They work in other miners.