Topic: WildRig Multi 0.35.1 beta 2 multi-algo miner for AMD & NVIDIA - page 45

newbie
Activity: 29
Merit: 0
kernel-gfx804 ?
member
Activity: 720
Merit: 49
As @dragonmike wrote, I'm not an FPGA programmer, so no, I won't do anything related to FPGAs.
hero member
Activity: 1274
Merit: 556
I highly doubt it. Andrucrypt is not an FPGA programmer. He's been developing miners for AMD hardware and I'm quite confident that he's not going to start doing stuff for SQRL Research's Acorn FPGAs! Cheesy
You should look at their Discord channel if you want more info.
member
Activity: 720
Merit: 49
Is there 8 GB of RAM on the rig?
Already fixed the problem - the rig had 4 GB of RAM, upgraded to 8 GB - working OK now. But it only works with 6 RX 580 8GB GPUs; with 12 RX 580 8GB GPUs it doesn't work.
Anyway, your CPU is a low-end one, so monitor your real hashrate on the pool side. Until I move everything to the GPU (the MerkleTree part), mining efficiency can be much lower - by up to 50% - if the pool sends jobs too frequently.
newbie
Activity: 1
Merit: 0
Is there 8 GB of RAM on the rig?
Already fixed the problem - the rig had 4 GB of RAM, upgraded to 8 GB - working OK now. But it only works with 6 RX 580 8GB GPUs; with 12 RX 580 8GB GPUs it doesn't work.
member
Activity: 720
Merit: 49
Trying to run wildrig-multi 0.15.0.9 on HiveOS with the MTP algo, connecting to 2Miners. What is wrong?

Is there 8 GB of RAM on the rig?
member
Activity: 720
Merit: 49
0.15.0.12 beta
- fixed client.reconnect logic

Download
member
Activity: 720
Merit: 49
Blockchain drivers, or the latest drivers with compute mode on? For 15.10. RX 580 8GB btw.

Thanks for the great AMD miner!
Drivers shouldn't matter, but I use 18.3.4 and 18.6.1. With the latest drivers there could be a compatibility problem with the binary kernels; I haven't tested them yet.
newbie
Activity: 42
Merit: 0
Blockchain drivers, or the latest drivers with compute mode on? For 15.10. RX 580 8GB btw.

Thanks for the great AMD miner!
member
Activity: 720
Merit: 49
0.15.0.10 beta
- improved speed for different hash orders in algorithms like x16r, x16s, x16rt, hex, timetravel and so on
Download. Still not an official release (because I need to do more work on the MTP algo and prepare some benchmarks; that's why the link points to GitHub)

Note: don't forget to download the new kernel for Hawaii cards if you use them.
full member
Activity: 307
Merit: 101
Also one more thing about that time without shares: as you can see, in the end you found a "high diff" share and your effort didn't change a lot (from 105% to 107% only), so you hardly lost anything here because you solved a huge share.

I watch the "transactions" directly in the pool and see how my revenue fluctuates with every block, even though this shouldn't be the case with PPLNS. Yes, each rig mines a dev fee, but that is only for two minutes every hour. However, I see huge variations (up to 100% in revenue from block to block), i.e. from minute to minute (since suprnova currently finds basically all blocks).

In the above example, let's assume a new block was found at 13:36:08. This means all work done from 13:32:12 until 13:36:08 was for nothing, because no shares were submitted, right? It is as if the rig had been offline. The rig didn't submit any shares from 13:32:12 until 13:36:08, and on average 3-4 blocks were found during that time (all of them by the pool, let's assume). With PROP that would mean zero revenue for the blocks found while the rig submitted zero shares. With PPLNS it would still mean some revenue, but it would ramp down and end up lower than nominal.
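To make the PROP vs. PPLNS point concrete, here is a rough sketch of how PPLNS weighting is usually described (a simplified illustration with made-up numbers and an assumed share window; not suprnova's actual implementation):
Code:
# Simplified PPLNS illustration (not any specific pool's code).
# When a block is found, the reward is split over the last N submitted
# shares in proportion to their difficulty. A rig that submitted nothing
# inside that window contributes zero weight for that block.

def pplns_payout(window_shares, block_reward, my_worker):
    """window_shares: list of (worker, share_difficulty) for the last N shares."""
    total = sum(d for _, d in window_shares)
    mine = sum(d for w, d in window_shares if w == my_worker)
    return block_reward * mine / total if total else 0.0

# Hypothetical numbers: the single 2.05G share counts fully once submitted,
# but blocks found during the four share-less minutes get no weight from this rig.
window = [("others", 6.0e9), ("me", 2.05e9)]
print(pplns_payout(window, block_reward=12.5, my_worker="me"))  # ~3.18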

The lucky high-diff share found at 13:36:11 will count for the block which started (in our assumed example) at 13:36:08. It was found within 3 seconds. It was not the result of the previous four minutes of work, right?

One reason why I don't like overly high diffs is that you will often end up with outdated work, i.e. no share found yet while a new block has already been found. So the work has to be abandoned and it was for nothing. Yes, miners nowadays at least notice this, abandon the old work and begin with fresh work. Still, it was wasted time if no share was found. In the old days miners would stubbornly continue trying to find a share for the outdated work, wasting even more time, then find it, submit it and trigger a stale share in the pool.
member
Activity: 720
Merit: 49
Optimization is needed in general, so nothing can be done specifically for those "high diff" jobs you got. But there is one cool new parameter in the miner that you can try: --max-difficulty Smiley E.g. if you set --max-difficulty 1000, the miner will reconnect to the pool and get a new job if the current job arrives with 1G difficulty. That should help, because the diff is reset after the reconnect, but it still needs monitoring and testing.

Also one more thing about that time without shares: as you can see, in the end you found a "high diff" share and your effort didn't change a lot (from 105% to 107% only), so you hardly lost anything here because you solved a huge share.
full member
Activity: 307
Merit: 101
Okay, here is a problem. See screenshot.

I'd say it is a problem with two causes: the fluctuating hashrate due to the algo, but mostly the suboptimal vardiff implementation of the pool, which ramps up way too much and then remains way too high for way too long. Both causes amplify each other.

See on the first screenshot, around 13:30, how the pool ramps the vardiff up to 2.05G because the hashrate skyrockets to 80 MH/s. However, just then the hashrate drops (when the hash order changes) to below 30 MH/s and the rig isn't able to find a single share for four minutes (13:32:12 to 13:36:11). Yes, I know the typical pool-operator talk about how it doesn't matter for revenue because shares are weighted by difficulty. Sure. But 2.05G multiplied by ZERO shares is still ZERO over that four-minute period.

From experience I can tell that shares should be found/submitted roughly every five seconds for an average block time of one minute. Too low a diff is bad (server traffic, latency), and too high a diff is bad too (no shares at all, working on outdated jobs).
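To illustrate why an overshooting vardiff hurts, here is a rough sanity check (my own sketch; it only assumes that, at a fixed share difficulty, the expected time between shares scales inversely with hashrate - the per-algo difficulty units each pool uses are not modelled):
Code:
# Rough vardiff sanity check. At a fixed share difficulty, the expected time
# between shares scales inversely with hashrate, so a difficulty tuned at the
# hashrate peak becomes far too high once a hash-order change slows the rig.

def expected_interval(interval_at_ref_s, ref_hashrate, current_hashrate):
    # If a share arrives every interval_at_ref_s at ref_hashrate, this is the
    # expected interval once the rig runs at current_hashrate instead.
    return interval_at_ref_s * ref_hashrate / current_hashrate

# Difficulty tuned for ~5 s per share at the 80 MH/s peak:
print(expected_interval(5, 80e6, 30e6))   # ~13 s once the rig falls to 30 MH/s

# Difficulty overshooting to ~60 s per share at the peak (roughly what an
# aggressive vardiff does) leaves minutes between shares at the low end:
print(expected_interval(60, 80e6, 30e6))  # ~160 s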

I understand the pool operators' incentive to minimize server traffic by letting workers crunch higher diffs, but in this case the vardiff clearly is not working well because it goes way too high and remains there for too long. That's bad for the workers and bad for the pool too.

@ocminer: What do you think?

@andrucrypt: Is it technically possible to optimize the miner so the hashrate is higher for those hash orders where it is currently much lower? Or is this limited by the varying core and memory usage and by the actual hardware specs of the cards it runs on?
full member
Activity: 307
Merit: 101
@nanona yeah, I found that bug today too and released the fix, thanks for the info Smiley In some cases it could crash the miner or cause a BSOD (who knows...), because with x16rt there was out-of-range memory writing (RAM and VRAM - those thousands of found shares are not good; the limit is 15 per GPU).

0.15.0.9 beta
- improved stability
- implemented client.reconnect (needed for some pools, e.g. miningrigrentals, zpool)
Download

No problems throughout the night on all rigs (R9 Nanos) with this latest version.
Also, no more unrealistic "max" value now (the previous screenshots showed 415 MH/s, an order of magnitude more than the rig can actually push). It shows 63 MH/s now, which is plausible and occurs for certain hash orders (approx. 15 MH/s per card).
newbie
Activity: 103
Merit: 0
WildRig is getting serious with their mining software. This would be a good system for miners. But apart from that, Whalesburg is also capable of competing with it: with the auto-switching algorithm that exists on that system, the Whalesburg platform could achieve higher mining profitability than the others.

Hey, man) What are you comparing it to? Whalesburg is a good idea, but it's a pool with a single algo, ethash. WildRig is a multi-algo miner. Or is this just a referral post to get some sweets from Whalesburg?)))))
sr. member
Activity: 700
Merit: 251
WildRig is getting serious with their mining software. This would be a good system for miners. But apart from that, Whalesburg is also capable of competing with it: with the auto-switching algorithm that exists on that system, the Whalesburg platform could achieve higher mining profitability than the others.
member
Activity: 720
Merit: 49
@nanona yeah, I found that bug today too and released the fix, thanks for the info Smiley In some cases it could crash the miner or cause a BSOD (who knows...), because with x16rt there was out-of-range memory writing (RAM and VRAM - those thousands of found shares are not good; the limit is 15 per GPU).

0.15.0.9 beta
- improved stability
- implemented client.reconnect (needed for some pools, e.g. miningrigrentals, zpool)
Download
full member
Activity: 307
Merit: 101
I noticed that the rigs would sometimes crash during or shortly after the dev mining period. Maybe the dev mining has a bug; at least it looks a bit strange in the console.
Sometimes they don't crash and just continue normal mining, as in these screenshots. The pool will, however, show lots of invalid shares during the dev mining period.
Also, the cards cool down completely in those two minutes, so I don't think they are doing any dev mining during that time. You might want to check so you don't lose out on your fees.

I used the previous versions of this miner to mine x22i for two months and they didn't show this strange behaviour while mining the dev fee: no hundreds of "oops, stale share" lines, and the cards wouldn't cool down at all.