Well I only ever compile the daemon with make and the makefiles in /src.
Qt is the cross-platform library used for the UI. But there's a Makefile in the main directory that's used to compile Noirbits-qt (the UI client), and I'm not sure how I should handle dependencies in that one... If you can have a look at it, I'd appreciate it. Otherwise, I'll go ahead and try with QtCreator. I asked barwizi to have a look at the makefiles, but he seems to be MIA since yesterday, probably busy.
Edit: never mind that, QtCreator handles that file automagically...
What do you think about the new retarget implementation? Feel free to improve it if you want...
I looked at the retarget implementation. Unfortunately I have to work today, so I can't work on it. My one comment is that it's probably better to factor out whole functions rather than specific steps in the code, i.e. you have a set of "new" functions and a set of "old" functions. That way it's easy to look at each set of functions and see what they do. It also makes it easier to handle switchovers (hoping we won't have many, but we'll probably have a non-zero number after this one): we get to the target block, do the switchover, then you go into the code, copy the "new" functions over the "old" functions, and remove the switchover block ID. Then if another change in the algorithm is required, you simply change the switchover block ID again, modify the "new" functions again, etc. If you make that change now, and comment the code for the "new" and "old" functions, it will make any future changes, if they are required, really easy to implement and test.
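Roughly what I have in mind, as a sketch (the function names, signatures, and the switchover height here are all placeholders, not your actual code):

```cpp
// Placeholder sketch -- names, signatures, and the fork height are made up.
class CBlockIndex;

static const int nSwitchoverBlock = 25000; // placeholder switchover height

// Old retarget rules, kept verbatim until the switchover block.
unsigned int GetNextWorkRequired_Old(const CBlockIndex* pindexLast);

// New retarget rules -- the only thing the testbench needs to exercise.
unsigned int GetNextWorkRequired_New(const CBlockIndex* pindexLast);

// In the real code nHeight would just come from pindexLast->nHeight + 1.
unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast, int nHeight)
{
    // The only branch on height lives here; both rule sets stay whole functions.
    if (nHeight >= nSwitchoverBlock)
        return GetNextWorkRequired_New(pindexLast);
    return GetNextWorkRequired_Old(pindexLast);
}
```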
If we write a testbench to simulate these types of fluctuations, then we would use it on the new functions only. It's easier if we have whole functions that aren't tangled up with switchover block IDs and the like. That type of testbench would be great, not only for us, but for the original Bitcoin codebase (it would help anyone else starting a new coin to see how their algorithm works before testing it in the wild and getting screwed).
I see your point. I usually factor steps into functions, because function names carry meaning and make the code easier to parse mentally and to maintain later.
I've started reworking the code as you suggested, by splitting CMainNetDiff into two classes, COldNetRules and CNewNetRules, each applying its own retarget algorithm without any branching dependent on height. I'm introducing a CDiffProvider static class that will handle the branching depending on height and on the net in use (test or main), and return the appropriate CDiff instance.
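Roughly the shape I have in mind (everything below is a placeholder sketch: the retarget bodies are stubbed out and the switchover height is made up):

```cpp
class CBlockIndex;

// Common interface both rule sets implement.
class CDiff
{
public:
    virtual ~CDiff() {}
    virtual unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast) = 0;
};

// Current retarget algorithm, copied as-is, with no height checks inside.
class COldNetRules : public CDiff
{
public:
    unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast) { return 0; /* stub */ }
};

// New retarget algorithm, also free of any height checks.
class CNewNetRules : public CDiff
{
public:
    unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast) { return 0; /* stub */ }
};

// The only place that branches on height and on main/test net.
class CDiffProvider
{
public:
    static CDiff& GetDiff(int nHeight, bool fTestNet)
    {
        static COldNetRules oldRules;
        static CNewNetRules newRules;
        static const int nSwitchoverBlock = 25000; // placeholder height
        if (fTestNet || nHeight >= nSwitchoverBlock)
            return newRules;
        return oldRules;
    }
};
```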
I'm thinking writing a testbench with that structure will be a breeze: all we'll have to do is construct a fake chain of CBlockIndex's with timestamps, heights, and target bits that reflect the hashrate variations we have seen on the Noirbits network, or that could plausibly be seen (something like the sketch below). I'll get back to you when I'm done, probably not tonight though; it's Friday night here, hence drinking time, and my brain's gonna be off for the next 12 hours or so.
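What I mean by a fake chain, very roughly (FakeBlock and BuildFakeChain are names I just made up for the sketch, nothing like this exists in the tree yet):

```cpp
#include <vector>
#include <cstddef>

// Minimal stand-in for the CBlockIndex fields the retarget code cares about.
struct FakeBlock
{
    int nHeight;
    unsigned int nTime;   // block timestamp
    unsigned int nBits;   // compact difficulty target at that height
    FakeBlock* pprev;
};

// Fill a chain where every block arrives nSpacing seconds after the previous one;
// the testbench can then perturb the timestamps to mimic real hashrate swings.
void BuildFakeChain(std::vector<FakeBlock>& chain, int nBlocks,
                    unsigned int nStartTime, unsigned int nSpacing,
                    unsigned int nStartBits)
{
    chain.resize(nBlocks);
    for (int i = 0; i < nBlocks; i++)
    {
        chain[i].nHeight = i;
        chain[i].nTime = nStartTime + i * nSpacing;
        chain[i].nBits = nStartBits;
        chain[i].pprev = (i > 0) ? &chain[i - 1] : NULL;
    }
}
```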
Sounds great. I'm thinking that the testbench (and testsuite) can have a set of random tests.
The tests would define parameters like the following:
BaseHashingPower - this would distinguish between a situation like Bitcoin, where there is tons of constant hash power, and a new coin where we maybe have 5 MH/s that is constant while other power can come and go (currently about 5 MH/s for Noirbits).
VariableHashingPower - this is the amount that can come in and out (currently about 150 MH/s for Noirbits).
A set of probabilities describing how quickly people add or subtract power. What we want is something that lets us randomly add hashing power: the lower the difficulty, the higher the probability of adding hash at any point in time, and when the difficulty increases, people are more likely to drop out. We could also define some other factors on how the hashing comes in and out. The test would then simulate timesteps (like a minute), randomly find a block based on hashing power, and randomly add or subtract hashing power (in some minimum increments), roughly like the sketch below. This should allow us to model a number of different cases, and also pretest our algorithm change to see how it reacts to a situation like the one we are currently in (and test a number of variations to see what seems to work best). Pass/fail for the test would then be a set of benchmarks like variance from the target behavior (in our case, variance from 2-minute blocks over time, max/min time to generate a set of 30 blocks, etc.). We're looking for actual numbers, but then also an acceptance test where, if we go out of range, the algorithm fails.
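Something like this per-minute step is what I'm picturing (every name and constant in the sketch is made up for illustration, and the join/leave model is just one guess at how to parameterize it):

```cpp
#include <cstdlib>
#include <cmath>
#include <algorithm>

struct SimParams
{
    double dBaseHash;      // constant hashrate in MH/s (~5 for Noirbits right now)
    double dVariableHash;  // hashrate that can come and go, in MH/s (~150 right now)
    double dJoinProb;      // base per-minute probability of variable hash joining
    double dLeaveProb;     // base per-minute probability of variable hash leaving
};

static double Rand01() { return (double)rand() / RAND_MAX; }

// One simulated minute. dDifficulty is the current difficulty, dEquilibriumDiff the
// difficulty at which miners are indifferent, dActiveVariable the MH/s mining now.
// Returns true if a block was found during this minute.
bool SimulateMinute(const SimParams& p, double dDifficulty, double dEquilibriumDiff,
                    double& dActiveVariable)
{
    // Miners tend to join when difficulty is below the "fair" level and leave above it.
    double dRatio = dDifficulty / dEquilibriumDiff;
    if (Rand01() < p.dJoinProb / dRatio)
        dActiveVariable = std::min(p.dVariableHash, dActiveVariable + 10.0); // 10 MH/s steps
    if (Rand01() < p.dLeaveProb * dRatio)
        dActiveVariable = std::max(0.0, dActiveVariable - 10.0);

    // Expected blocks this minute = 60 * hashrate / (difficulty * 2^32), hashrate in H/s.
    double dHashPerSec = (p.dBaseHash + dActiveVariable) * 1e6;
    double dBlocksPerMinute = 60.0 * dHashPerSec / (dDifficulty * 4294967296.0);
    return Rand01() < 1.0 - std::exp(-dBlocksPerMinute); // Poisson chance of >= 1 block
}
```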
In my everyday design life, this is how we make 90% of our design decisions: make a model, run a bunch of tests representative of what we're trying to optimize, try a large range of parameters, and select the ones that work best (since I do chip design, management is usually presented with a set of cost/performance alternatives).
[edit]
One other thing required is a random delay in the system's response. Since at this point it looks like humans are deciding to add/remove hashing power, after a change in difficulty the effective difficulty the system responds to will be delayed by some amount. You would have a distribution that says, for example, 10% of the system is using the difficulty from an hour ago, 25% is using it from 2 minutes ago, 20% from 10 minutes ago. This models the delay in reaction to the change (when the difficulty gets high, it takes a little time for power to drop out, and when it's low, it takes a little while for it to come in).
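As a sketch, using the example percentages above (the remainder reacting to the current difficulty is just filler so the distribution adds up; none of these numbers are measured):

```cpp
#include <cstdlib>

// Which difficulty the "average miner" is reacting to right now, given a history
// array indexed by minutes back (aDiffHistory[0] = the current difficulty).
double PerceivedDifficulty(const double* aDiffHistory, int nHistoryLen)
{
    double r = (double)rand() / RAND_MAX;
    int nDelayMinutes;
    if (r < 0.10)      nDelayMinutes = 60; // 10% still reacting to an hour ago
    else if (r < 0.35) nDelayMinutes = 2;  // 25% reacting to 2 minutes ago
    else if (r < 0.55) nDelayMinutes = 10; // 20% reacting to 10 minutes ago
    else               nDelayMinutes = 0;  // remainder: current value (my filler)
    if (nDelayMinutes >= nHistoryLen)
        nDelayMinutes = nHistoryLen - 1;   // clamp if we don't have that much history
    return aDiffHistory[nDelayMinutes];
}
```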
The other thing (pie-in-the-sky talking here) would be to write something which goes through the block chain and generates the probability parameters. You could then go through the block chains for all the other coins and use them as testcases for your algorithm. A lot of other people have tried and screwed things up, so this would be a good way to use all of their mistakes to tune our algorithm to be the best possible.
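The extraction pass could start as simply as estimating the hashrate over each window of blocks from the timestamps and difficulty, something like this (all the names here are made up, and it assumes we can dump the chain into an array first):

```cpp
#include <vector>

struct BlockSample
{
    unsigned int nTime;   // block timestamp
    double dDifficulty;   // difficulty at that height
};

// Estimated network hashrate (H/s) over each window of nWindow blocks, derived from
// observed block times and difficulty (expected hashes per block = difficulty * 2^32).
// These series are what the join/leave probabilities would be fitted against.
std::vector<double> EstimateHashrate(const std::vector<BlockSample>& chain, size_t nWindow)
{
    std::vector<double> vRates;
    for (size_t i = nWindow; i < chain.size(); i += nWindow)
    {
        double dSeconds = double(chain[i].nTime) - double(chain[i - nWindow].nTime);
        if (dSeconds <= 0.0)
            continue; // timestamps aren't strictly ordered, skip weird windows
        double dAvgDiff = 0.0;
        for (size_t j = i - nWindow; j < i; j++)
            dAvgDiff += chain[j].dDifficulty;
        dAvgDiff /= double(nWindow);
        vRates.push_back(dAvgDiff * 4294967296.0 * double(nWindow) / dSeconds);
    }
    return vRates;
}
```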
Having said all that, I think we need an algorithm change ASAP, so we should get our current proposed changes in as quickly as possible, because I think they will improve things. Then we can target another algorithm refinement for 3 weeks away or something like that, so we can test a bunch of tweaked parameters, find a set which is much better, and make one more change that makes the coin more solid than any that has come before.