
Topic: [ANN] TeamRedMiner v0.10.10 - Ironfish/Kaspa/ZIL/Kawpow/Etchash and More - page 56. (Read 211762 times)

member
Activity: 658
Merit: 86
No trolling: you have a good product, no need to play this dirty game to get attention.

Fair enough, and I get your perspective 100%. Now, just for a second, assume I'm correct and put yourself in my shoes:

  • I strongly believe closed source miners should present accurate numbers. We've always educated miners on how to use e.g. xmrig-proxy to test our CN kernels and make sure we are indeed producing the claimed results poolside.
  • We have the best ethash kernel as measured by a number of metrics. If you could read GCN assembly, I could e.g. show that we're running X% fewer VALU instructions, chose an original way of structuring and running the kernel so we could remove other bloat, and more.
  • Currently, it doesn't matter how good we are. The competition on Polaris cards is so cutthroat that +1.5% is a huge number (for Vegas we often have a distinct advantage if you take the time to tune properly, but not always beating 1.5%). So, everyone comparing Polaris miners will see that we're "worse". Farm admins testing miners for 30 mins come back and say "no no, not better". Then they test poolside with a single rig for 24h, still just getting random results because it's far, far from enough shares to prove anything. I have no problem acknowledging solid work and being beaten, but this... No.

So, imho it's all about whether I'm correct or not, and I will wrap up our tester tool today. It's very bad timing that I've been swamped for two days while this exploded, not my intent.

Lastly, if I'm NOT correct about all this, and for some reason I'm a complete fool who has misinterpreted everything and I'm the only one able to generate these results, I will most definitely issue the necessary apologies to all parties involved and go eat my underwear, then look like a fool for the rest of my life. Simple as that.
newbie
Activity: 156
Merit: 0
No trolling: you have a good product, no need to play this dirty game to get attention.
member
Activity: 658
Merit: 86
Advertising your own work by spitting on other devs' work is not the way to win!!!!
I understand you're late to the ETH game and want to get attention in a dirty way.

Well, you're always a bit of a troll and probably an alias account for someone else, posting obscene posts removed by moderators in this thread, I know that, but I agree with both you and joblo, this can REALLY look that way. It's all about whether I'm correct or not, isn't it?


newbie
Activity: 156
Merit: 0
Advertising your own work by spitting on other devs' work is not the way to win!!!!
I understand you're late to the ETH game and want to get attention in a dirty way.
newbie
Activity: 24
Merit: 0
Hello! Will you update the miner for the Monero fork on Friday?
Thank you, I've been using your miner for almost a year!

member
Activity: 658
Merit: 86

We will shortly release an Ethash miner testing tool; it's just a simple open source node.js project. You will be able to run controlled long-running tests on all miners. The reason this is important is that the displayed hashrates of Claymore and Phoenix are just bullshit. They both add +1.5% or more of fake hashrate.

Plenty of people have already pointed this out just by logging all shares over time, but it's really hard to prove anything.
Code:
Hashrates for 1 active connections:
    Global hashrate: 228.51 MH/s vs 235.43 MH/s avg reported [diff -2.94%] (A247366:S1575:R0 shares, 93582 secs)
    Global stats: 99.37% acc, 0.63% stale, 0% rej.
    [1] hashrate:  228.51 MH/s vs 235.43 MH/s avg reported [diff -2.94%] (A247366:S1575:R0 shares, 93582 secs)

Look at that, a diff of -2.94% between the reported and accepted hashrate, what a coincidence?

Isn't that statistically the same amount as the fee?  A likely coding error: a simple oversight at best, or
willful incompetence at worst.

Miners typically calculate hash rate by counting hash iterations over time. It's a fair way to measure
the miner's performance, but there's no direct connection to share submission. You're trying to prove
your point statistically when there is a precise way to do it, if you can find the hash loop code in the kernel.


Man, I apologize, but there's just no beating around the bush here, this is maybe the most inaccurate statement I've ever seen in this forum. You just invalidated the whole world of PoW:

It's a fair way to measure the miner's performance, but there's no direct connection to share submission.

The connection is exactly the opposite: they are directly and fully connected, and it's the basis for EVERYTHING related to proof-of-work. When you calculate N hashes for as many nonces, given a specific difficulty we can say _exactly_ how many shares matching that diff we are _expected_ to find. This is the very essence of proof-of-work, and there are no approximations involved. For a small N, the variance in the _actual_ outcome will be high: sometimes we find zero shares, sometimes we find more than expected. As N grows larger and larger, the relative error between the number of shares we are _expected_ to find and the number of shares we _do_ find becomes smaller and smaller. With proven theorems, we can show that after e.g. 1 million shares, we can estimate N from the number of shares found and the difficulty to within +-0.33% at a 99% confidence level.
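The expectation math above is simple to sketch. Below is a minimal illustration (the function names are mine, and the exact +-0.33% figure in the post may come from a slightly different confidence convention; the standard two-sided 99% z-score gives roughly 0.26% for one million shares):

```python
import math

def expected_shares(hashes, share_difficulty):
    # Each hash is an independent trial with success probability
    # 1/share_difficulty, so the expected share count is simply:
    return hashes / share_difficulty

def relative_error_bound(n_shares, z=2.576):
    # Share counts are approximately Poisson(n): standard deviation
    # sqrt(n), hence relative error ~ z/sqrt(n) at the chosen
    # confidence level (z = 2.576 for two-sided 99%).
    return z / math.sqrt(n_shares)

# Scanning 1e12 hashes at a share difficulty of 4e9 hashes/share:
print(expected_shares(1e12, 4e9))                 # 250.0 expected shares

# After one million found shares the hashrate estimate is tight:
print(f"{relative_error_bound(1_000_000):.2%}")   # 0.26%
```

The key property is the 1/sqrt(n) scaling: quadrupling the share count halves the uncertainty, which is why long controlled runs are needed.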

So, I'm not _trying_ to prove my point statistically, I am indeed proving it. And yes, as mentioned above I've also reverse engineered both miners to verify the claims. The reason I am proving it statistically instead of doing open surgery in public on closed source miners has already been answered above: it's a matter of principle, I'm not going to hand out closed source intellectual property to the world. If you want to hack miners, do so yourself. The whole point is that the statistical test is just as effective down to a certain bounded interval that we can prove, but you are absolutely correct in that tracking the OpenCL enqueue calls would be more effective and provide the answer with 100% accuracy. However, since we can run tests where the deviation is 5x outside our 99% confidence level accuracy, you can be damn sure things are seriously off without having to hack miners.

Please note that I have not made any claims as to knowing _why_ this is the case. If you're a good-hearted person you are free to believe it's all just a little coding mistake. For some reason all other miners in the world manage to do these calculations correctly (and they are darn trivial), but only the two biggest closed source miners in the world, for the largest-emission GPU-mineable coin in crypto, happen to have a little bug or two that makes e.g. Nicehash run their miners instead of other alternatives. Right. That's beside the point I'm trying to make though: when comparing miners by looking at their displayed hash rate, if you don't understand that there is a significant amount of BS hashrate involved in certain miners, that comparison is very much invalid.

Anyway, we will soon be releasing this little tester tool and everyone can run it for themselves. There will most probably be a range of "experts" expressing doubts around the statistical test, but what can you do? At the end of the day, you really just have to run this miner (or the open source Ethminer) instead on a large enough farm and the numbers will be visible poolside as well. Unfortunately, you need a farm in the TH/s range to really get a tight enough continuous sample on e.g. Ethermine, which is why I linked you to an example wallet above.

As a final little example, this is how the open source Ethminer converges after 900k shares on the same rig at the same clocks as the Phoenix example above:

Code:
[[20191114 08:23:17.421]] [LOG]    Hashrates for 1 active connections:
[[20191114 08:23:17.421]] [LOG]    Global hashrate: 230.97 MH/s vs 230.99 MH/s avg reported [diff -0.01%] (A908252:S6201:R0 shares, 85023 secs)
[[20191114 08:23:17.421]] [LOG]    Global stats: 99.32% acc, 0.68% stale, 0% rej.
[[20191114 08:23:17.421]] [LOG]    [1] hashrate:  230.97 MH/s vs 230.99 MH/s avg reported [diff -0.01%] (A908252:S6201:R0 shares, 85023 secs)

There just isn't any black magic to this process. Ethminer has no dev fee and no bullshit implementations. If it scans 230.99 MH/s x 85023 secs = 19.639 THashes, it will find the expected number of shares within a very tight range. Same thing with all miners that aren't faking any numbers. It really is that trivial, unless you're running the worst PoW algo in history with some seriously skewed entropy, which isn't the case for any of the major algos out there. In this case, Phoenix miner underperformed the open source miner by more than 1% running under the same conditions. When people look at the displayed hash rates though, they will see Phoenix winning by +1.9%. That's how you corner a market of miners with no tools to verify they're making the optimal choice.
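For the curious, the arithmetic in that claim is easy to reproduce (values copied from the Ethminer log above):

```python
reported_mhs = 230.99   # avg reported hashrate from the log, MH/s
poolside_mhs = 230.97   # hashrate computed from accepted shares, MH/s
seconds = 85_023        # run length from the log

# Total work scanned over the run:
total_hashes = reported_mhs * 1e6 * seconds
print(f"{total_hashes / 1e12:.3f} THashes")   # 19.639 THashes

# Relative difference between poolside and reported hashrate:
rel_diff = (poolside_mhs - reported_mhs) / reported_mhs
print(f"{rel_diff:.2%}")                      # -0.01%
```

A -0.01% gap after 900k shares is exactly the kind of convergence the statistical argument predicts for an honest miner.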



member
Activity: 204
Merit: 10
Running tests on ETH, Vega 64 reference cards, on Windows 10 using 18.6.1 drivers.

An actual effective core speed of 930 MHz is the best speed for ETH on my cards.
By this I mean the speed you see in HWiNFO or in the stats report when you press 'S' in TRM.
Anything above this just adds temp/power with no HR improvement, and anything below it reduces the HR.


Setting :
TRM Eth Config -> A1024
Timing -> --RAS 32 --RCDRD 12 --RCDWR 10 --RC 44 --RP 12 --RRDS 3 --RRDL 3 --FAW 12 --REF 15600 --RFC 248
ODT speeds : 930(P6 locked)/1028(p3 unlocked)/835mV  @ 1028 SOC

HWinfo speeds : 935/1028/812mV
TRM speeds : 935/1028/835mV

Results:
47.76 MH/s per card

On 6 cards:
1275 core -> 286.0 MH/s @ 1278 watts -> 0.2238 MH/W
 930 core -> 286.0 MH/s @ 1166 watts -> 0.2453 MH/W
Note: these watts are at the wall with 5 120mm fans blasting the cards; actual efficiency (hash/watt) is better than this.

Note:
  • You can get the same hash/watt results with 1100 mem clock and go over 50 MH/s
  • I don't use 1100 mem clock because the HBM temps aren't controllable!! My ambients are high
  • I don't push tREF over 15600 because I want my cards to stay alive long
member
Activity: 658
Merit: 86
To verify that we're not insane, we've also reverse engineered both miners, analysed their kernels, traced their OpenCL enqueue calls, and the evidence is clear.
Wow, this deserves a separate article (Medium maybe?). Indeed, if you trace all OpenCL calls and save timestamps with microsecond precision, it's easy to calculate the real hashrate miner-side, without having to wait for 1000000 shares.

I've debated this. Regardless of what I think of the whole setup, I think it's even worse to hack a miner publicly and leave the kernels wide open etc. It's a matter of principle. You also leave the door open for arguments like "yes, but I have my super defense that notices your pathetic workarounds, so I give you my shit kernel instead". Bogus arguments, I've dumped the running kernels in 100% clean environments directly from mapped vram to verify miners were running the same kernels at later points in time, but these arguments are still very hard to prove wrong.

Therefore, I think it's better to provide a testing tool instead that anyone can run on a cloud vm (or on your local LAN with a few IP redirects), acting as a fake pool with low diff, but not low enough to affect miner operation. The miner will then run in a completely clean environment, zero tampering involved, and it's impossible for it to understand it's mining against a fake pool. Moreover, the testing tool is useful for other purposes as well, like generating bad shares much more quickly when tuning and testing OC.

Anyway, this way it will take around 2 days to get 1,000,000 shares, sure, but I think it's worth it. You can prove that you'll have a bound of +-0.3% at a 99% confidence level with that magnitude of shares, which is good enough for our purposes. It should also be sufficient for enough people to verify the results; I don't believe the whole wide world needs to test it themselves.
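To illustrate the "around 2 days" figure: with the fake pool serving fixed low-diff work, the expected share rate is just hashrate divided by share difficulty. A rough sketch (the 40 Mhash/share difficulty here is a made-up illustration value, not the tool's actual setting):

```python
def shares_per_day(hashrate_hs, share_difficulty):
    # Expected shares per day: hashes computed per day divided by
    # the number of hashes expected per share.
    return hashrate_hs * 86_400 / share_difficulty

def days_for_shares(target, hashrate_hs, share_difficulty):
    # Days needed to accumulate `target` shares in expectation.
    return target / shares_per_day(hashrate_hs, share_difficulty)

# A ~230 MH/s rig against an assumed 40 Mhash/share pool diff:
print(f"{days_for_shares(1_000_000, 230e6, 40e6):.1f} days")   # 2.0 days
```

Lowering the pool diff shortens the test proportionally, which is why the fake pool keeps it as low as it can without affecting miner operation.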

We will start by posting a github project, then we'll see what happens.
member
Activity: 116
Merit: 66
To verify that we're not insane, we've also reverse engineered both miners, analysed their kernels, traced their OpenCL enqueue calls, and the evidence is clear.
Wow, this deserves a separate article (Medium maybe?). Indeed, if you trace all OpenCL calls and save timestamps with microsecond precision, it's easy to calculate the real hashrate miner-side, without having to wait for 1000000 shares.
member
Activity: 658
Merit: 86
Hi,

Can someone share a hashrate and power consumption comparison for a Polaris card (RX 580 8GB) between TRM, PM, and Claymore with the same settings (core clock, mem clock, other tweaks)?

Thanks.

We will shortly release an Ethash miner testing tool; it's just a simple open source node.js project. You will be able to run controlled long-running tests on all miners. The reason this is important is that the displayed hashrates of Claymore and Phoenix are just bullshit. They both add +1.5% or more of fake hashrate. I don't ask anyone to buy these claims without being able to verify them themselves though, which is why we'll release the tool (which acts as a low diff fake pool with a static epoch and a controlled job update mechanism) and everyone can see for themselves if they are willing to mine air for ~24h. To verify that we're not insane, we've also reverse engineered both miners, analysed their kernels, traced their OpenCL enqueue calls, and the evidence is clear.

Plenty of people have already pointed this out just by logging all shares over time, but it's really hard to prove anything. You need controlled runs of (preferably) 1 million shares, which is more or less impossible without controlling the environment properly. Another simple way is just to find a big farm from e.g. the frontpage of ethermine.org (found blocks) and check the difference between their reported and accepted hashrate.

Here is one example: https://ethermine.org/miners/6F714AaAAF72977267601cC1cADC49fb3966Ff89/dashboard

Reported: 3.206 TH/s, Accepted: 3.111 TH/s, difference -2.97%. I'm fairly certain this is Phoenix miner. Why do we have a diff of -3%? The dev fee is 0.65% and there are ~1% stale shares, and this is a HUGE sample set that should converge nicely. Contrary to what people seem to believe, the additional -1.35% should _not_ just disappear. For comparison, here is a run using our soon-to-be-released testing tool for 247k shares on Phoenix 4.7c:

Code:
Hashrates for 1 active connections:
    Global hashrate: 228.51 MH/s vs 235.43 MH/s avg reported [diff -2.94%] (A247366:S1575:R0 shares, 93582 secs)
    Global stats: 99.37% acc, 0.63% stale, 0% rej.
    [1] hashrate:  228.51 MH/s vs 235.43 MH/s avg reported [diff -2.94%] (A247366:S1575:R0 shares, 93582 secs)

Look at that, a diff of -2.94% between the reported and accepted hashrate, what a coincidence? To be fair though, 247k shares is only good for a +-0.5% estimate at 99% confidence, so we must treat the value accordingly. The point is that neither CM nor PM can ever produce a poolside hashrate over time that matches their displayed hashrates.
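The Ethermine gap arithmetic above is easy to reproduce. A quick sanity check under the simplifying assumption that stale shares earn nothing (dashboard numbers from the post; the small differences from the post's -2.97%/-1.35% figures come from rounding and from exactly how stales are credited):

```python
reported_ths = 3.206    # TH/s, from the dashboard
accepted_ths = 3.111    # TH/s, from the dashboard
dev_fee = 0.0065        # 0.65% dev fee, per the post
stale = 0.01            # ~1% stale shares, per the post

# Observed gap between accepted and reported hashrate:
observed = accepted_ths / reported_ths - 1
print(f"observed gap:    {observed:.2%}")                # -2.96%

# Gap explainable by fee + stales alone (stales assumed uncredited):
explained = (1 - dev_fee) * (1 - stale) - 1
print(f"explained gap:   {explained:.2%}")               # -1.64%

print(f"unexplained gap: {observed - explained:.2%}")    # -1.32%
```

Over a TH/s-scale sample the unexplained ~1.3% is far outside what share variance can account for, which is the crux of the argument.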

Sorry for the long rant to your simple question; the bigger point I'm trying to make is this: unless miners start accepting that CM and PM are bullshitting their displayed hashrates, "comparing" CM/PM/TRM/Ethminer is pointless, you'll just be concluding that CM and PM are much better than they really are.
member
Activity: 658
Merit: 86
Hi! Could you give me an example of tuning an AMD 570 for Turtlecoin? More specifically, which is the most power efficient core clock/voltage? I've read the documentation and it said 1050 MHz for the core, but how low can the voltage go: 800 mV or even less? Currently I am at 1150/850, but the power consumption is 20% more than CN-r.

Chukwa and CN/r are completely different algos, there's just no point expecting a power consumption X because CN/r draws Y.

For your question, you just have to test your way to a stable setup, impossible to say where your 570 will croak. If you have a power target you don't want to exceed, you'll have to clock down the mem clk to throttle the algo; otherwise just continue to dial down the core clk and voltage and see where you end up.
jr. member
Activity: 176
Merit: 2
Hi,

can someone share hashrate and power consumption comparison for Polaris card (RX 580 8GB) between TRM, PM, Claymore with the same setting (Core Clock, Mem Clock, Other Tweak) ?

Thanks.
legendary
Activity: 1881
Merit: 3057
All good things to those who wait
Hi! Could you give me an example of tuning an AMD 570 for Turtlecoin? More specifically, which is the most power efficient core clock/voltage? I've read the documentation and it said 1050 MHz for the core, but how low can the voltage go: 800 mV or even less? Currently I am at 1150/850, but the power consumption is 20% more than CN-r.
member
Activity: 658
Merit: 86
Is/will the miner be updated for the November 30 XMR update?

Hi!

We currently do not plan to support RandomX for GPUs, or turn TRM into a CPU miner with RandomX support. We continue our work on GPU-centric algos and aim to provide the best possible implementations for AMD for the algos we support. For RandomX, GPUs will be profitable for a very short time around the fork, it simply isn't worth the huge investment in time.

full member
Activity: 500
Merit: 105
Is/will the miner be updated for the November 30 XMR update?
jr. member
Activity: 195
Merit: 4
Yes - you can also use the GUI. Sometimes REF doesn't make a huge difference either. What type of memory do you have? Post your CLI line for reference also.

Sapphire RX 570 4GB Pulse and Sapphire RX 580 8GB Pulse, both with Hynix memory.
CLI command:
winamdtweak.exe --i 1,2 --REF 30

Okay.

So in my batch file I do this for each GPU:
WinAMDTweak.exe --gpu 0 --ref 30
TIMEOUT /T 1
WinAMDTweak.exe --gpu 1 --ref 30
TIMEOUT /T 1
WinAMDTweak.exe --gpu 2 --ref 30
TIMEOUT /T 1
WinAMDTweak.exe --gpu 3 --ref 30
TIMEOUT /T 1
WinAMDTweak.exe --gpu 4 --ref 30
TIMEOUT /T 1
WinAMDTweak.exe --gpu 5 --ref 30
full member
Activity: 585
Merit: 110
Yes - you can also use the GUI. Sometimes REF doesn't make a huge difference either. What type of memory do you have? Post your CLI line for reference also.

Sapphire RX 570 4GB Pulse and Sapphire RX 580 8GB Pulse, both with Hynix memory.
CLI command:
winamdtweak.exe --i 1,2 --REF 30
full member
Activity: 585
Merit: 110
I ran into the same problem - but I also confirmed (using amdmemtweak --current) that the settings weren't actually being applied.  I could only get it to work a single time after each machine power-cycle.  I tried the GUI version (including the original and xl versions) and had the same issue.

Then I finally figured out there was some conflict while also running HWiNFO64.  So maybe try a full power cycle, make sure you don't run any hardware monitoring apps (HWiNFO, GPU-Z, etc.), and try the CLI again - and use the '--current' option afterwards to make sure your settings actually get applied as expected.

Oh, this is it.
I OC using OverdriveNTool and use HWiNFO to monitor the GPU temps.
I tried applying after a restart; it didn't prompt that it was applied successfully, but it displayed the changes with the --current parameter.
Thanks!
member
Activity: 340
Merit: 29
Very cool, I just left it running on a 4-card Polaris machine.

Anyone know of good mem timings to use with AMDmemtweak on Elpida and Hynix 4GB cards?

ref 30

using amdmemtweak cli
created a bat file to apply ref 30 in 6 gpus starting from gpu 0 to gpu 5 index
but after running it
it makes no difference
any way to do it in gui?

I ran into the same problem - but I also confirmed (using amdmemtweak --current) that the settings weren't actually being applied.  I could only get it to work a single time after each machine power-cycle.  I tried the GUI version (including the original and xl versions) and had the same issue.

Then I finally figured out there was some conflict while also running HWiNFO64.  So maybe try a full power cycle, make sure you don't run any hardware monitoring apps (HWiNFO, GPU-Z, etc.), and try the CLI again - and use the '--current' option afterwards to make sure your settings actually get applied as expected.
member
Activity: 658
Merit: 86
Very cool, I just left it running on a 4-card Polaris machine.

Anyone know of good mem timings to use with AMDmemtweak on Elpida and Hynix 4GB cards?

ref 30

Thanks a lot! Free +1 MH/s on each card.

I have memory straps made with Polaris BIOS Editor some 2 years ago, but REF was 5 on all cards.

To set it on command line, the argument should be --REF 30000, right?

No, that would rather be for Vegas. For Polaris, it's just "--REF 30". You can always inspect the current values to get a feeling for the right scale.
 


Thanks. Great, great work. Now I can use TRM only.

As you said, other miners do not report real hashrate. Using it for >12h, despite it reporting 2 MH/s less than the miner I was using, the calculated hashrate at the pool went up 4 MH/s...

That said, I'm not getting the miner's reported hashrate to show on eth.nanopool.org. How can I turn that on?

I believe nanopool can only do reported hashrate with the ethproxy protocol. We typically default to one of the other possible protocols. Try adding "--eth_stratum_mode=ethproxy", I think it will solve it for you. Otherwise, let us know and we'll look into it more closely.


Nope, it says "Pool eth-eu1.nanopool rpc method "eth_getWork" timed out after 69 seconds. Discarding."

That's nothing to worry about though, but you're still not seeing any reported hashrates? Please confirm and I'll look into it.



No. It briefly reported (black line), but then it stopped reporting again.



Thank you. I had a report from a different pool where the reported hashrate became stale; will investigate.