I'm trying to evaluate the difficulty algorithm, so I installed masari this morning, but it will not sync new blocks past 63507 (7 blocks after the changeover?).
All it shows are messages like this:
2017-12-04 14:36:45.897 [P2P0] INFO global src/cryptonote_protocol/cryptonote_protocol_handler.inl:302 [50.17.174.202:38080 OUT] Sync data returned a new top block candidate: 63507 -> 63879 [Your node is 372 blocks (0 days) behind]
SYNCHRONIZATION started
2017-12-04 14:37:30.716 [P2P5] INFO global src/cryptonote_protocol/cryptonote_protocol_handler.inl:302 [103.28.22.111:40256 INC] Sync data returned a new top block candidate: 63507 -> 63879 [Your node is 372 blocks (0 days) behind]
SYNCHRONIZATION started
2017-12-04 14:37:32.781 [P2P8] INFO global src/p2p/net_node.inl:258 Host 34.234.145.76 blocked.
Miners simply changing coins in pursuit of the best price/difficulty ratio is desired behavior, but it is also an "attack" on, or "unfair" to, your dedicated miners who are not as efficiently selfish. In one sense dedicated miners are merely whining. But the coin should take an interest, because they protect against 51% attacks by adding consistent diversity, and because they are less likely to sell the coin.

If difficulty algorithms could be perfect, the "attack" would not exist. They can't be perfect because the only way to know current hashrate is to calculate it from recent solvetime and difficulty data, so there is a delay in response. If price changes a lot and the difficulty is slow, then big miners come in and get coins at low difficulty when the price jumps higher, then leave when difficulty catches up, leaving constant miners with higher-than-appropriate difficulty for the length of the averaging window. But if the difficulty is made to respond fast, it has to base the calculation on fewer data points, so it will naturally suffer more statistical "accidents" on the small-N window.

Historically I have pushed for low N, less than 30. But after seeing BCH do exceptionally well at keeping the number of delays low by using a large window of N=144, I am having a change of heart. Coins have told me cryptonote's original code is effectively an N=300 and they have had to fork to get away from it. The problem (presumably) is that there is a good price increase so they get a lot of mining, but then it suddenly drops and no one wants to mine it, and it's going to take 300/2=150 blocks to get half-way back down to where it needs to be. I have not yet looked into the cryptonote code and data from coins to see exactly what the problem is, but that makes me afraid of N=144 for small coins. I have also been told BCH seems to be depending on Chinese pools actively deciding not to harm BCH.
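To make the window tradeoff concrete, here is a minimal Python sketch of a plain simple-moving-average (SMA) adjustment (an illustration of the idea, not cryptonote's or BCH's actual code): the larger N is, the more blocks it takes for a hashrate change to work its way through the window.

```python
def next_difficulty_sma(difficulties, solvetimes, N, T):
    """Simple moving average sketch (illustrative, not any coin's exact code):
    next difficulty = average difficulty over the last N blocks, scaled by
    target solvetime T divided by average observed solvetime."""
    window_d = difficulties[-N:]
    window_t = solvetimes[-N:]
    avg_d = sum(window_d) / len(window_d)
    avg_t = sum(window_t) / len(window_t)
    return avg_d * T / avg_t
```

At steady state (all solvetimes equal to T) this returns the current difficulty unchanged; if hashrate doubles so solvetimes halve across the whole window, it returns double the difficulty. The catch is that a sudden change only shifts the averages by 1/N per block, which is why an N=300 window needs on the order of 150 blocks to get half-way back after miners leave.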
Zcash and its clones have done well with Digishield v3 with N=17, which is effectively an N=63 algorithm. For this reason I considered N=60 to be safe, and larger like I was seeking, but not as risky as N=144. The weighted nature of this algorithm makes it respond faster to hashrate changes, which also means it will overshoot and undershoot more than Zcash, so big miners will see more opportunities to jump on when the price/difficulty ratio looks good (from more accidentally-low difficulty). However, since it responds faster, they will not be able to stay on and get as many blocks as they normally do on Zcash and its clones.

On Zcash they get about 20 "cheap-difficulty" blocks about 3 times a day, as the difficulty accidentally goes low about 3x per day, a "loss" of about 10% of Zcash coins. So constant miners have to pay an excess difficulty of 10%. For Masari's WWHM N=60 algorithm, I expect twice as many price/difficulty opportunities per day than if Digishield v3 N=17 ("N=63") code were used, but only 1/2 as many blocks stolen per opportunity. I see this in testing. I do not know if not being able to get as many blocks "per attack" makes attacks less appealing, but I hope so. Also, Digishield v3 does not adjust for 5 or 6 blocks after a sudden hashrate change begins, which may cause some minor oscillations in Zcash. Although "blocks stolen" at "cheap-difficulty" cost might be about the same, testing indicates post-"attack" delays will be 1/3 as much in Masari as in Zcash.
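For reference, the "weighted" idea can be sketched like this (a hypothetical linearly-weighted version I'm using for illustration, not Masari's exact WWHM code): recent solvetimes get more weight than old ones, so the estimate reacts faster than a plain average over the same N.

```python
def next_difficulty_lwma(difficulties, solvetimes, N, T):
    """Linearly-weighted sketch in the spirit of WWHM (illustrative only,
    not Masari's exact code): the most recent solvetime gets weight N,
    the oldest gets weight 1, so recent hashrate changes dominate."""
    d = difficulties[-N:]
    st = solvetimes[-N:]
    avg_d = sum(d) / N
    # weights 1..N, oldest to newest
    weighted_t = sum((i + 1) * st[i] for i in range(N))
    k = N * (N + 1) / 2 * T  # normalizer: weighted_t == k at steady state
    return avg_d * k / weighted_t
```

Because the newest block carries N times the weight of the oldest, one fast recent block moves the result far more than one fast block at the back of the window, which is the faster response (and larger overshoot) described above.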
Review the following chart to see "state of the art" difficulty algorithms. I have learned a lot in the past few days. I just found two levels of improvement over what Masari is using (which is what I recommended), even though I had been expecting the WWHM that Masari is using to be the best algorithm. For details on EMA and Dynamic EMA (the two new improvements) see
http://zawy1.blogspot.com/2017/11/best-difficulty-algorithms.html Someday I'll write an "all things difficulty" article on my blog to cover everything.
https://user-images.githubusercontent.com/18004719/33560871-02d98bf6-d8df-11e7-821a-147c0ae04165.gif
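For what it's worth, the EMA family can be sketched per-block like this (my paraphrase in Python; the linked post has the exact formulas, and the Dynamic EMA additionally varies N on the fly): each block nudges difficulty toward the hashrate implied by its own solvetime, with roughly N blocks of memory and no fixed window at all.

```python
import math

def next_difficulty_ema(prev_d, solvetime, N, T):
    """Per-block exponential moving average sketch (a paraphrase of the
    EMA family, not a verified copy of the linked post's exact formula).
    A solvetime below target T raises difficulty, above T lowers it;
    N sets how much memory of past blocks the average retains."""
    a = math.exp(-solvetime / (N * T))
    return prev_d * ((T / solvetime) * (1 - a) + a)
```

At solvetime == T the difficulty is unchanged; a fast block raises it and a slow block lowers it, each by about 1/N of the implied error, so it responds continuously instead of waiting for a window to roll over.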