This means that with the starting Nfactor of 4, min N = 1 << (4 + 1) = 32, and with the maximum Nfactor of 30, max N = 1 << (30 + 1) = 2147483648.
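For reference, here's a tiny standalone C snippet showing that relationship between Nfactor and N (the variable names are just mine for illustration, not identifiers from the coin source):

[code]
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const unsigned minNfactor = 4;   /* starting Nfactor */
    const unsigned maxNfactor = 30;  /* maximum Nfactor */

    /* N = 1 << (Nfactor + 1), so the two extremes work out to: */
    uint64_t minN = (uint64_t)1 << (minNfactor + 1);  /* 32 */
    uint64_t maxN = (uint64_t)1 << (maxNfactor + 1);  /* 2147483648 */

    printf("min N = %llu\n", (unsigned long long)minN);
    printf("max N = %llu\n", (unsigned long long)maxN);
    return 0;
}
[/code]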
I've ported some of the code to VBA and created an Excel spreadsheet graphing N over the next 10 years. Download it at http://goo.gl/pQqkI
Please feel free to review the code and let me know if there are any errors, and to use the spreadsheet for your own needs around YAC.
Thanks for working up a spreadsheet. I haven't looked closely at the accuracy of the data, although I did note that it showed N=64 for only one day before jumping to N=128. I haven't kept watching N since I stopped mining when N went to 64 (thus invalidating my FPGA implementation). When I have a moment, I'll pull the C code in GetNfactor() out into a small standalone program to verify the timestamps for the Nfactor++ events. Your data is different from what pocopoco (or perhaps it was someone else, I can't remember) posted a while back in the official YACoin thread for the time vs. Nfactor++ events, but that doesn't mean what pocopoco posted was correct either.
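Something along these lines is what I have in mind for that standalone program. Note that the GetNfactor() here is only a dummy stub so the harness compiles and runs on its own (the real function would be pasted in from the coin source), and the chain start time is a placeholder, not the actual launch timestamp:

[code]
/* Standalone harness sketch for finding the Nfactor++ timestamps.
 * Replace the dummy GetNfactor() below with the real one from the
 * coin source before trusting any of the output. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

typedef int64_t int64;

/* DUMMY stand-in that just bumps Nfactor once a year so the harness
 * runs by itself; swap in the real GetNfactor() from the coin code. */
static unsigned char GetNfactor(int64 nTimestamp)
{
    const int64 nChainStartTime = 1368000000;  /* placeholder start time */
    if (nTimestamp <= nChainStartTime)
        return 4;
    int64 years = (nTimestamp - nChainStartTime) / (365LL * 24 * 60 * 60);
    unsigned char n = (unsigned char)(4 + years);
    return n > 30 ? 30 : n;
}

int main(void)
{
    const int64 start = 1368000000;                         /* placeholder */
    const int64 end   = start + 10LL * 365 * 24 * 60 * 60;  /* ~10 years out */

    unsigned char prev = GetNfactor(start);
    printf("Nfactor at start: %u\n", (unsigned)prev);

    for (int64 t = start; t <= end; t += 60) {   /* 1-minute resolution */
        unsigned char nf = GetNfactor(t);
        if (nf != prev) {
            char buf[32];
            time_t tt = (time_t)t;
            strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", gmtime(&tt));
            printf("Nfactor %u -> %u at %s UTC (unix %lld)\n",
                   (unsigned)prev, (unsigned)nf, buf, (long long)t);
            prev = nf;
        }
    }
    return 0;
}
[/code]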
Looking at the graphed data, it appears that N very roughly tracks Moore's law, with some step lengths shorter than the usual 18-24 month doubling period and others longer.
Assuming my data is correct, what is everyone's opinion of N's growth? Does it seem realistic to a) keep GPUs out and b) keep CPU mining feasible?
Offhand, for question (b), it appears from your data that CPU mining remains feasible for quite a long time. Your data shows 512MB needed to calculate a hash in the year 2023, and that's an amount of RAM I think just about everyone already has available for hashing today. Disclaimer: I haven't double-checked that your data is correct.
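For anyone who wants to sanity-check the memory figures, scrypt's big scratchpad needs roughly 128 * r * N bytes per hash. Assuming r = 1 (my understanding of YACoin's scrypt-chacha parameters, so double-check me on that), 512MB corresponds to N = 2^22, i.e. Nfactor 21. A quick C sketch of the progression:

[code]
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const unsigned r = 1;  /* assumption: YACoin's scrypt-chacha uses r = 1 */

    /* scrypt's V array needs roughly 128 * r * N bytes per hash */
    for (unsigned nfactor = 4; nfactor <= 24; nfactor++) {
        uint64_t N     = (uint64_t)1 << (nfactor + 1);
        uint64_t bytes = 128ULL * r * N;
        printf("Nfactor %2u -> N = %10llu -> ~%7.1f MiB per hash\n",
               nfactor, (unsigned long long)N, bytes / (1024.0 * 1024.0));
    }
    return 0;
}
[/code]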
On question (a), I doubt 512MB needed per hash is enough to exclude GPUs, especially a decade out. It may be enough to keep GPUs from having a massive advantage over CPUs once both technologies have to hit slow external RAM rather than fast internal L1/L2 caches, but we won't know for sure until we see what the future holds for GPU development: how much RAM GPUs carry (and thus how many hashes can be calculated in parallel), whether GPUs start getting large quantities of fast L1/L2 cache like CPUs, what the ratio of CPU to GPU cache looks like in a decade, and so on. Probably too early to tell. We also need to see some of the GPU implementations or adaptations of cgminer released so we can see what hash rates they're achieving. Unlike Litecoin, where (almost) everyone derived their OpenCL implementation from Reaper, I suspect that for YACoin there were multiple independent adaptations of cgminer done with no direct contact between the people performing each one. Too early to tell on that as well, or to determine exactly how widespread GPU mining of YACoin actually was/is.
The possibility also exists that technologies other than CPUs and GPUs, which we can't yet anticipate, come into existence and widespread use during that timeframe, or that someone identifies a TMTO shortcut in scrypt+chacha20/8, as happened with the TMTO shortcut for scrypt+salsa20/8 that made GPUs practical for calculating Litecoin hashes.