So, at the very least it's not hurting anything, and hopefully it's helping. I'll need to run it longer to see how the share rate and efficiency hold up... From what I've seen so far, it's once I get past the 40-share mark that orphans/dead shares start popping up.
On a side note though - is anyone else seeing their DOA rate climb over the past week or so? I was happily running around 3-4% DOA for the longest time, but now I'm seeing it in the 6-8% range quite a lot. Still better than the pool average, but I'm wondering if there's an underlying reason for the rise. (And yes, I've turned my S3s down from the default queue of 4096; I've tried 0 and 1, and neither made any major difference - rough config sketch below.)
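For reference, the queue tweak is just cgminer's standard "queue" option; the S3 exposes it in the web UI, but in a plain cgminer.conf it would look roughly like this sketch (the pool URL and payout address are placeholders, and where the file lives depends on the firmware):

    {
      "pools": [
        {
          "url": "stratum+tcp://your-p2pool-node:9332",
          "user": "1YourPayoutAddressHere",
          "pass": "x"
        }
      ],
      "queue": "1"
    }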
I'm seeing similar results to you with Matt's fork and the embedded Python-based relay, since I got this running on Windows Server 2012 R2 with an update from Matt.
Version: unknown 7032706f6f6c2d6d6173746572
Pool rate: 3.00 PH/s (13% DOA+orphan)
Share difficulty: 12,700,000
Node uptime: 14.9 hours
Peers: 6 out, 3 in
Local rate: 2.38 TH/s (3.6% DOA)
Expected time to share: 6.4 hours
Shares: 2 total (0 orphaned, 0 dead)
Efficiency: 115.0%
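As a rough sanity check on those figures, the expected time to a share is just share difficulty x 2^32 divided by local hashrate. A quick Python snippet (just illustrating the arithmetic, with the numbers above plugged in):

    # Expected time to find a p2pool share at the current share difficulty.
    share_difficulty = 12700000      # from the "Share difficulty" line above
    local_rate = 2.38e12             # 2.38 TH/s, in hashes per second

    expected_seconds = share_difficulty * 2**32 / local_rate
    print("Expected time to share: %.1f hours" % (expected_seconds / 3600))
    # prints roughly 6.4 hours, matching what the node reports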
Not seeing the increase in DOA yet, but I've probably jinxed it now, so time will tell.
GBT latency is the big issue for me, as I'm 12,000+ km of fiber away from Matt's US-West node, which I'm using because it was 100 ms faster than the AU node.
This is normal, as US-West servers usually have lower latency than Asian nodes due to network topology.
Currently GBT latency is between 0.2 and 0.4 seconds, so I'm happy with that, and the low DOA and shares being found at a normal rate seem to support that things are running OK.
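If anyone wants to spot-check GBT latency outside of p2pool's console output, something like the Python sketch below times a raw getblocktemplate call against bitcoind - the RPC URL, username and password are placeholders for your own setup, and newer bitcoind versions may also expect a "rules" field in the request:

    import time
    import requests  # third-party; pip install requests

    RPC_URL = "http://127.0.0.1:8332"   # placeholder bitcoind RPC endpoint
    RPC_USER = "rpcuser"                # placeholder credentials
    RPC_PASS = "rpcpassword"

    payload = {"jsonrpc": "1.0", "id": "gbt-test",
               "method": "getblocktemplate", "params": [{}]}

    start = time.time()
    resp = requests.post(RPC_URL, json=payload, auth=(RPC_USER, RPC_PASS))
    latency = time.time() - start

    resp.raise_for_status()
    print("getblocktemplate round trip: %.3f s" % latency)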
This is on a 7200 RPM SATA drive, as the SSD died.