
Topic: [ANN] Noirbits Update required Improved algos !!!! - page 11

full member
Activity: 154
Merit: 100
Can we put a hard floor on difficulty at 0.22?

Yeah, it's currently defined in two places in my implementation (I need to fix that, but it's not urgent): once in main.cpp, and once in diff.cpp. However, you need to figure out how many target bits that is, since
the min. difficulty (or max. target) is defined as :

Code:
const CBigNum CDiffProvider::bnProofOfWorkLimit(~uint256(0) >> 20);

Just replace 20 with the desired number of target bits for the min. difficulty and it should be good.
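
For reference, a quick way to sanity-check what minimum difficulty a given shift gives you (rough sketch, assuming the standard difficulty-1 target of 0x1d00ffff from the Bitcoin codebase; this helper is just for illustration, it's not in the repo):

Code:
// Approximate minimum difficulty for bnProofOfWorkLimit = ~uint256(0) >> nShift.
// difficulty = diff1_target / max_target, with diff1_target ~= 0xffff * 2^208
// and ~uint256(0) >> nShift ~= 2^(256 - nShift), so diff ~= 65535 / 2^(48 - nShift).
#include <cmath>
#include <cstdio>

double MinDifficultyForShift(int nShift)
{
    return 65535.0 / std::pow(2.0, 48 - nShift);
}

int main()
{
    std::printf("shift 20 -> min diff ~%f\n", MinDifficultyForShift(20)); // ~0.000244
    std::printf("shift 30 -> min diff ~%f\n", MinDifficultyForShift(30)); // ~0.25
    return 0;
}

Note that a plain shift can only land on powers of two (0.25, 0.125, ...), so an exact 0.22 floor would need a hand-picked target rather than just changing the shift.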

Now I'm really gone until tomorrow...

member
Activity: 104
Merit: 10
Can we put a hard floor on difficulty at 0.22?
full member
Activity: 154
Merit: 100
Same here Smiley

We just need to decide on the height for the new retarget algorithm... 25000 25020 sounds like a safe bet to give everyone time to upgrade...
I think 21000. We are getting the wild fluctuations because people are still jumping in on the low difficulty, so we are not averaging the right number of blocks per day. Can we push a message to all the clients telling them to upgrade? We're currently getting 600 blocks per day, so that gives 3 days. That seems like enough to get the majority of clients that matter (the pools are the big deal).

I'm not sure, I have to check the source, but not that I know of. The JSON-RPC API has nothing like that... It would actually be a nifty feature to be able to push "upgrade required" messages to clients; it would make transitions like this one easier. That's why I'm saying 25020 is safer: if we get a diff drop again like the other day, the 2K blocks are gonna fly by in less than 2 days.

We could split the difference, say around the closest multiple of 60 in the 22500 range. That should give an extra 1.5 days, it's not too far in the future, and it gives some margin in case of hashrate increases.

We're gonna need barwizi and everyone who can to communicate on the change as soon as we merge Testing into master, so we reach a maximum of users. Concerning pools, I run miners-united, so that won't be an issue, but there's still coinmine (feeleep needs to be contacted), minepool (I don't know the owner, most likely MarkusRomanus since he advertised on the thread), and the P2Pools... For Cryptsy, BigVern can apparently push the new client within about 12 hours. We also need to relay the info on the other cryptocoin forums that have Noirbits listed.

But I'm out for today, it's nice & hot outside, and the grilled pork & rosé are calling me out...

Edit: if you have time, please test the latest version from my GitHub: edit diff.h, change the min. height required to the current height, and test that you can successfully mine with the new algo on testnet. I haven't gotten around to it, and it's the last thing that needs to be done before we can safely merge the changes into the master branch.
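
Something along these lines is what I mean by the diff.h edit (hypothetical excerpt only; the actual constant name and value in diff.h may differ):

Code:
// diff.h -- hypothetical excerpt for testing, the real constant name may differ.
// For a testnet run, lower the switchover height to (or below) the current
// block height so the new retarget rules are exercised immediately.
static const int nNewRulesMinHeight = 100;   // example: set to the current testnet height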
member
Activity: 104
Merit: 10
Same here Smiley

We just need to decide on the height for the new retarget algorithm... 25000 25020 sounds like a safe bet to give everyone time to upgrade...
I think 21000. We are getting the wild fluctuations because people are still jumping in on the low difficulty, so we are not averaging the right number of blocks per day. Can we push a message to all the clients telling them to upgrade? We're currently getting 600 blocks per day, so that gives 3 days. That seems like enough to get the majority of clients that matter (the pools are the big deal).

full member
Activity: 154
Merit: 100
Alright, just pushed a commit with the new CDiff implementations (COldNetDiff & CMainNetDiff). I have validated that COldNetDiff works with the current blockchain; now I still need to test CMainNetDiff, which provides the new rules.

If I still have enough time before BBQ time, I'll start working on the test benches, shouldn't take too long.
full member
Activity: 154
Merit: 100
Same here Smiley

We just need to decide on the height for the new retarget algorithm... 25000 25020 sounds like a safe bet to give everyone time to upgrade...
legendary
Activity: 882
Merit: 1000
I'm back and very hung over, what's new?
member
Activity: 104
Merit: 10
Well I only ever compile the daemon with make and the makefiles in /src.

Qt is the cross-platform library used for the UI. But there's a Makefile in the main directory that's used to compile Noirbits-qt (the UI client), and I'm not sure how I should handle dependencies in that one... If you can have a look at it, I'd appreciate it. Otherwise, I'll go ahead and try with QtCreator. I asked barwizi to have a look at the makefiles, but he seems to have been MIA since yesterday, probably busy.

Edit : never mind that, QtCreator handles that file automagically...

What do you think about the new retarget implementation ? Feel free to improve it if you want...

I looked at the retarget implementation. Unfortunately I have to work today, so I can't work on it. My one comment is that it's probably better to factor out whole functions rather than specific steps in the code, i.e. you have a set of "new" functions and a set of "old" functions. That way it's easy to look at each set of functions and see what they do. It also makes it easier to handle switchovers (hoping we won't have many, but we will probably have a non-zero number after this one): we get to the target block, do the switchover, then you go into the code, copy the "new" functions over the "old" functions, and remove the switchover block id. Then if another change in the algorithm is required, you simply change the switchover block id again, modify the "new" functions again, etc. If you make that change now, and comment the code for the "new" and "old" functions, it will make any future changes, if they are required, really easy to implement and test.

If we write a testbench to simulate these types of fluctuations, then we would use it on the new functions only. It's easier if we have whole functions not wound up with testing switchover block IDs, etc. That type of testbench would be great, not only for us, but also for the original bitcoin codebase (it would help anyone else starting a new coin to see how their algorithm works before getting screwed testing it in the wild).

I see your point. I usually factor steps into functions, because function names carry meaning and make code easier to parse mentally and maintain later.

I've started reworking the code as you suggested, by splitting CMainNetDiff into two classes, COldNetRules and CNewNetRules, each applying its own retarget algorithm without any branching dependent on height. I'm introducing a CDiffProvider static class that will handle the branching depending on height and the net in use (test or main), and return the appropriate CDiff instance.
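
Roughly, the structure I have in mind looks something like this (just a sketch: the class names are the ones above, but the signatures, bodies, and switchover heights are placeholders, not the actual repo code):

Code:
// Sketch of the class split described above -- signatures and heights are assumptions.
class CBlockIndex;   // from the Bitcoin-derived codebase

class CDiff
{
public:
    virtual ~CDiff() {}
    virtual unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast) const = 0;
};

class COldNetRules : public CDiff
{
public:
    unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast) const
    {
        return 0; // pre-switchover retarget rules would live here
    }
};

class CNewNetRules : public CDiff
{
public:
    unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast) const
    {
        return 0; // new 30-block retarget rules would live here
    }
};

class CDiffProvider
{
public:
    // All height/net branching lives here; the rule classes stay branch-free.
    static const CDiff& GetDiff(int nHeight, bool fTestNet)
    {
        static COldNetRules oldRules;
        static CNewNetRules newRules;
        const int nSwitchHeight = fTestNet ? 10 : 25020;   // hypothetical switchover heights
        if (nHeight >= nSwitchHeight)
            return newRules;
        return oldRules;
    }
};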

I'm thinking writing a testbench with that structure will be a breeze: all we'll have to do is construct a fake chain of CBlockIndex objects with timestamps, heights, and target bits that reflect the hashrate variations we have seen on the Noirbits network, or that could plausibly be seen. I'll get back to you when I'm done, probably not tonight though; it's Friday night here, hence drinking time, and my brain's gonna be off for the next 12 hours or so Wink

Sounds great. I'm thinking that the testbench (and testsuite) can have a set of random tests.

The tests would define parameters like the following:

BaseHashingPower - this would distinguish between a situation like Bitcoin, where there are tons of constant hash power, and a new coin where maybe 5 MH/s is constant and other hashing can come and go. (Currently about 5 MH/s for Noirbits.)
VariableHashingPower - this is the amount that can come in and out. (Currently about 150 MH/s for Noirbits.)
A set of probabilities describing how quickly people add or remove power. What we want is something that lets us randomly add hashing: the lower the difficulty, the higher the probability of adding hash at any point in time, and when the difficulty increases people are more likely to drop out. We could also define some other factors for how the hashing comes in and out. The test would then simulate timesteps (like a minute), randomly find blocks based on hashing power, and randomly add or remove hashing power (in some minimum increments). This should let us model a number of different cases and pre-test our algorithm change to see how it reacts to a situation like the one we are currently in (and test a number of variations to see what works best). Pass/fail for the test would be a set of benchmarks like variance from the target behavior (in our case, variance from 2-minute blocks over time, max/min time to generate a set of 30 blocks, etc.). We're looking for actual numbers, but also an acceptance test where, if we go out of range, the algorithm fails.
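
To make that concrete, something like this toy loop is what I'm picturing (just a sketch: every number and the SimulatedRetarget() hook are placeholders, not the actual Noirbits code):

Code:
// Toy hashrate-fluctuation simulator -- a sketch of the testbench idea above.
#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <vector>

static const double BASE_HASH_MHS     = 5.0;     // always-on hashrate (MH/s)
static const double VARIABLE_HASH_MHS = 150.0;   // hashrate that comes and goes (MH/s)
static const int    TARGET_SPACING_S  = 120;     // 2-minute blocks
static const int    RETARGET_BLOCKS   = 30;

// Plug the retarget function under test in here (naive, unclamped version for now).
double SimulatedRetarget(double oldDiff, const std::vector<int>& blockTimes)
{
    long long span = 0;
    for (size_t i = 0; i < blockTimes.size(); ++i) span += blockTimes[i];
    double actualSpacing = double(span) / blockTimes.size();
    return oldDiff * TARGET_SPACING_S / actualSpacing;
}

int main()
{
    double diff = 0.22, variableIn = 0.0;   // fraction of variable hashrate currently mining
    std::vector<int> blockTimes;
    int secsSinceBlock = 0;

    for (int step = 0; step < 100000; ++step) {   // one step = one second
        // Miners slowly join when difficulty is below some reference, leave when above.
        if (double(rand()) / RAND_MAX < 0.0005)
            variableIn = (diff < 0.22) ? std::min(1.0, variableIn + 0.1)
                                       : std::max(0.0, variableIn - 0.1);

        double hashMHs = BASE_HASH_MHS + variableIn * VARIABLE_HASH_MHS;
        // Chance of finding a block this second: hashrate / (difficulty * 2^32).
        double pBlock = hashMHs * 1e6 / (diff * 4294967296.0);
        ++secsSinceBlock;
        if (double(rand()) / RAND_MAX < pBlock) {
            blockTimes.push_back(secsSinceBlock);
            secsSinceBlock = 0;
            if ((int)blockTimes.size() == RETARGET_BLOCKS) {
                diff = SimulatedRetarget(diff, blockTimes);
                blockTimes.clear();
            }
        }
    }
    std::printf("final difficulty after 100000 simulated seconds: %f\n", diff);
    return 0;
}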

In my everyday design life, this is how we make 90% of our design decisions: make a model, run a bunch of representative tests of whatever we're trying to optimize, try a large range of parameters, and select the ones that work best (since I do chip design, management is usually presented with a set of cost/performance alternatives).

[edit]

One other thing required is a random delay in the system's response. Since at this point it looks like humans are deciding to add/remove hashing power, the effective difficulty being responded to will lag by some amount after a change. You'd have a distribution that says, for example, 10% of the system is using difficulty from an hour ago, 25% is using it from 2 minutes ago, 20% from 10 minutes ago. This models the delay in reaction to the change (when the difficulty gets high, it takes a little time for power to drop out, and when it's low, it takes a little while for it to come in).
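
A cheap way to model that in the testbench (sketch only; the percentages are just the example numbers above):

Code:
// Pick which past difficulty a joining/leaving miner is reacting to, using the
// example distribution above (10% an hour old, 25% two minutes old, 20% ten
// minutes old, remainder reacting to the current difficulty).
#include <cstdlib>

int ReactionDelaySeconds()
{
    double r = double(rand()) / RAND_MAX;
    if (r < 0.10) return 3600;   // difficulty from an hour ago
    if (r < 0.35) return 120;    // from 2 minutes ago
    if (r < 0.55) return 600;    // from 10 minutes ago
    return 0;                    // current difficulty
}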

The other thing (pie-in-the-sky talking here) would be to write something that goes through the block chain and generates the probability parameters. You could then go through the block chains of all the other coins and use them as test cases for your algorithm. A lot of other people have tried and screwed things up, so this would be a good way to use all of their mistakes to tune our algorithm to be the best possible.

Having said all that, I think we need an algorithm change ASAP, so we should get our current proposed changes in as quickly as possible, because I think they will improve things. Then we can target another algorithm refinement for 3 weeks away or something like that, so we can test a bunch of tweaked parameters and find a set which is much better, then make one more change that makes the coin more solid than any that has come before.
full member
Activity: 154
Merit: 100
Well I only ever compile the daemon with make and the makefiles in /src.

Qt is the cross-platform library used for the UI. But there's a Makefile in the main directory that's used to compile Noirbits-qt (the UI client), and I'm not sure how I should handle dependencies in that one... If you can have a look at it, I'd appreciate it. Otherwise, I'll go ahead and try with QtCreator. I asked barwizi to have a look at the makefiles, but he seems to have been MIA since yesterday, probably busy.

Edit : never mind that, QtCreator handles that file automagically...

What do you think about the new retarget implementation ? Feel free to improve it if you want...

I looked at the retarget implementation. Unfortunately I have to work today, so I can't work on it. My one comment is that it's probably better to factor out whole functions rather than specific steps in the code, i.e. you have a set of "new" functions and a set of "old" functions. That way it's easy to look at each set of functions and see what they do. It also makes it easier to handle switchovers (hoping we won't have many, but we will probably have a non-zero number after this one): we get to the target block, do the switchover, then you go into the code, copy the "new" functions over the "old" functions, and remove the switchover block id. Then if another change in the algorithm is required, you simply change the switchover block id again, modify the "new" functions again, etc. If you make that change now, and comment the code for the "new" and "old" functions, it will make any future changes, if they are required, really easy to implement and test.

If we write a testbench to simulate these types of fluctuations, then we would use it on the new functions only. It's easier if we have whole functions not wound up with testing switchover block IDs, etc. That type of testbench would be great, not only for us, but also for the original bitcoin codebase (it would help anyone else starting a new coin to see how their algorithm works before getting screwed testing it in the wild).

I see your point. I usually factor steps into functions, because function names carry meaning and make code easier to parse mentally and maintain later.

I've started reworking the code as you suggested, by splitting CMainNetDiff into two classes, COldNetRules and CNewNetRules, each applying its own retarget algorithm without any branching dependent on height. I'm introducing a CDiffProvider static class that will handle the branching depending on height and the net in use (test or main), and return the appropriate CDiff instance.

I'm thinking writing a testbench with that structure will be a breeze: all we'll have to do is construct a fake chain of CBlockIndex objects with timestamps, heights, and target bits that reflect the hashrate variations we have seen on the Noirbits network, or that could plausibly be seen. I'll get back to you when I'm done, probably not tonight though; it's Friday night here, hence drinking time, and my brain's gonna be off for the next 12 hours or so Wink
member
Activity: 104
Merit: 10
Well I only ever compile the daemon with make and the makefiles in /src.

Qt is the cross-platform library used for the UI. But there's a Makefile in the main directory that's used to compile Noirbits-qt (the UI client), and I'm not sure how I should handle dependencies in that one... If you can have a look at it, I'd appreciate it. Otherwise, I'll go ahead and try with QtCreator. I asked barwizi to have a look at the makefiles, but he seems to have been MIA since yesterday, probably busy.

Edit : never mind that, QtCreator handles that file automagically...

What do you think about the new retarget implementation ? Feel free to improve it if you want...

I looked at the retarget implementation. Unfortunately I have to work today, so I can't work on it. My one comment is that it's probably better to factor out whole functions rather than specific steps in the code, i.e. you have a set of "new" functions and a set of "old" functions. That way it's easy to look at each set of functions and see what they do. It also makes it easier to handle switchovers (hoping we won't have many, but we will probably have a non-zero number after this one): we get to the target block, do the switchover, then you go into the code, copy the "new" functions over the "old" functions, and remove the switchover block id. Then if another change in the algorithm is required, you simply change the switchover block id again, modify the "new" functions again, etc. If you make that change now, and comment the code for the "new" and "old" functions, it will make any future changes, if they are required, really easy to implement and test.

If we write a testbench to simulate these types of fluctuations, then we would use it on the new functions only. It's easier if we have whole functions not wound up with testing switchover block IDs, etc. That type of testbench would be great, not only for us, but also for the original bitcoin codebase (it would help anyone else starting a new coin to see how their algorithm works before getting screwed testing it in the wild).

full member
Activity: 154
Merit: 100
Well I only ever compile the daemon with make and the makefiles in /src.

Qt is the cross-platform library used for the UI. But there's a Makefile in the main directory that's used to compile Noirbits-qt (the UI client), and I'm not sure how I should handle dependencies in that one... If you can have a look at it, I'd appreciate it. Otherwise, I'll go ahead and try with QtCreator. I asked barwizi to have a look at the makefiles, but he seems to have been MIA since yesterday, probably busy.

Edit : never mind that, QtCreator handles that file automagically...

What do you think about the new retarget implementation ? Feel free to improve it if you want...
member
Activity: 104
Merit: 10
Agreed. Current source only applies a limited (80%) diff drop after 4 hours.

Have a look here : https://github.com/ftcminer/Noirbits/tree/Testing

I moved the diff calc into diff.h & diff.cpp (that's actually why I need help with the Qt Makefile, to add these files in; it's already done for Noirbitsd, but not for the UI client).

* Disclaimer : I'm no C++ pro...
What's Qt? What are you using to compile? I'm decent with regular makefiles, and makepp as well.
full member
Activity: 154
Merit: 100
Agreed. Current source only applies a limited (80%) diff drop after 4 hours.

Have a look here : https://github.com/ftcminer/Noirbits/tree/Testing

I moved the diff calc into diff.h & diff.cpp (that's actually why I need help with the Qt Makefile, to add these files in; it's already done for Noirbitsd, but not for the UI client).

* Disclaimer : I'm no C++ pro...
member
Activity: 104
Merit: 10
Seems a bit too complex... the four hour retarget is to avoid killing the coin with massive hashrate drops.... look @ CHNCoin...
You can't have a yo-yo either, so you don't want to drop it too low. It's a classic control systems problem: an unstable system fluctuates violently. It's better to take a little extra recovery time and land more softly.

The only real question I see is how to calculate the difficulty on a retarget with fewer than 30 blocks. You can either just calculate since the last retarget, or you could use the last 30 actual blocks (taking the difficulty of each block into account); either way you could add some weighting or just make them equal. Where is the source on github?
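
For the second option, something like this is what I'm picturing (just a sketch with a stand-in block index, not code from the repo):

Code:
// Estimate the next difficulty from the last 30 actual blocks, weighting each
// block by the difficulty it was mined at (total work / elapsed time), then
// rescale so blocks would come every 2 minutes.
#include <stdint.h>
#include <cstddef>

// Minimal stand-in for the chain index, just for this sketch.
struct FakeBlockIndex
{
    int64_t nTime;            // block timestamp
    double dDifficulty;       // difficulty the block was mined at
    FakeBlockIndex* pprev;    // previous block
};

double NextDifficultyFromLast30(const FakeBlockIndex* pindexLast)
{
    if (pindexLast == NULL) return 0.0;

    const int nBlocks = 30;
    const int64_t nTargetSpacing = 120;   // 2-minute blocks

    double workDone = 0.0;   // sum of difficulties ~ total work over the window
    const FakeBlockIndex* pindex = pindexLast;
    for (int i = 0; i < nBlocks && pindex != NULL && pindex->pprev != NULL; ++i) {
        workDone += pindex->dDifficulty;
        pindex = pindex->pprev;
    }
    int64_t nTimeSpan = pindexLast->nTime - pindex->nTime;
    if (nTimeSpan <= 0) nTimeSpan = 1;

    // Difficulty that would make blocks arrive every nTargetSpacing seconds
    // at the hashrate implied by the observed window.
    return workDone * nTargetSpacing / nTimeSpan;
}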
member
Activity: 104
Merit: 10
By the way, does anyone here have the know-how to update makefiles?
What do you need?
full member
Activity: 154
Merit: 100
By the way, does anyone here have the know-how to update makefiles?
full member
Activity: 154
Merit: 100
Seems a bit too complex... the four hour retarget is to avoid killing the coin with massive hashrate drops.... look @ CHNCoin...
member
Activity: 104
Merit: 10
Thinking about the adjustment algorithm.

Assuming that we change the retarget to 30 blocks:

1) When difficulty goes up, it seems fine to just use the last 30 blocks to calculate the new difficulty (with some max change)
2) When difficulty goes down, it seems fine to just use the last 30 blocks to calculate the new difficulty (with some max change)

My statistics are terrible, but it looks like 30 blocks gives you about ±10% with 75% confidence. That seems fine. My real concern is the 4-hour forced retarget. Take the worst case: in 6 hours we only get a single block. Since this is random, that one sample is nowhere near enough to set the new difficulty with any confidence at all. I'm not sure of the best way to handle this situation, but I'm going to propose something different from the path we were already going down. Currently the regular retargets can change the difficulty by a max of 80% in one direction (this seems reasonable). What if we make it depend on the current block, but make the max amount of change 55-60% instead of 80%, so that we can dampen the fluctuations? If we're worried about how quickly we will recover, then we should use a shorter forced retarget (like 3 hours) instead of a bigger percentage change at 4 hours. We also need a hard floor on the difficulty. We can either pick the value, or my suggestion is that we do something like 90% of the lowest calculated value from the last week (the lowest calculated hashrate from 168 retargets), or some shorter period of time. This gives us a lower bound for where you would expect the hash rate to go. This calculation is going to be a little tricky because of the 4-hour forced retarget possibility.
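
Roughly, the limits would combine like this (sketch only; the 55% cap and the one-week floor window are just the numbers proposed above, nothing is settled):

Code:
// Apply the proposed limits to a freshly calculated difficulty:
//  - cap the per-retarget change at +/-55% (value still under discussion),
//  - never go below 90% of the lowest difficulty seen over the last ~168 retargets.
#include <algorithm>
#include <deque>

double ApplyRetargetLimits(double dOldDiff, double dCalculatedDiff,
                           const std::deque<double>& lastWeekDiffs)
{
    const double dMaxChange = 0.55;   // proposed 55-60% max swing per retarget

    double dNewDiff = std::min(dCalculatedDiff, dOldDiff * (1.0 + dMaxChange));
    dNewDiff        = std::max(dNewDiff,        dOldDiff * (1.0 - dMaxChange));

    if (!lastWeekDiffs.empty()) {
        // Hard floor: 90% of the lowest difficulty from the last week's retargets.
        double dLowest = *std::min_element(lastWeekDiffs.begin(), lastWeekDiffs.end());
        dNewDiff = std::max(dNewDiff, 0.9 * dLowest);
    }
    return dNewDiff;
}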

Comments?

Oatmo
member
Activity: 104
Merit: 10
Just kicked in another 3 miners to help get us over the hump.
full member
Activity: 154
Merit: 100
I talked to sal002 and he said he'd take it off of coinchoose in a few hours.

About 14 hours until the next retarget according to minersunited.

EDIT: It's been taken off coinchoose! 12-13 hours now at the current hash.

Just a quick note on retarget times on miners-united: they're based on the current network hashrate, which itself is based on the time it took to find all the blocks since the last retarget. So you need to wait a few blocks (I'd say at least 10) after a retarget before the measure is somewhat accurate.

Hashrate measurement is hardly ever accurate once you take into account diff changes, luck, and so on... Most pools that display hashrates, I believe, use the last 120 blocks (the default of getnetworkhashps) to sample find times and estimate the hashrate, but with Noirbits that's bound to be off since it will always span two retargets.

As you can see, in practice it's been almost a day since the last retarget, whilst the pool was announcing 12 to 14 hours until the next one... I'm trying to find a more accurate way to sample hashrate and retarget estimates, but until then, it at least spares you the mental math Smiley
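
For reference, the usual getnetworkhashps-style estimate is just total work divided by elapsed time, something like this (sketch, not the pool's actual code):

Code:
// Estimate network hashrate from the last N blocks: total work done
// (difficulty * 2^32 hashes per block) divided by the time span covered.
#include <stdint.h>

double EstimateNetworkHashPS(const double* blockDiffs, const int64_t* blockTimes, int nBlocks)
{
    if (nBlocks < 2) return 0.0;
    double totalWork = 0.0;
    for (int i = 0; i < nBlocks; ++i)
        totalWork += blockDiffs[i] * 4294967296.0;   // 2^32 hashes per unit of difficulty
    int64_t nTimeSpan = blockTimes[nBlocks - 1] - blockTimes[0];
    if (nTimeSpan <= 0) return 0.0;
    return totalWork / nTimeSpan;   // hashes per second
}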