Correct me if I am wrong: each time the N-factor adaptation kicks in, the current ASICs become useless, correct?
If that is the case, I suggest the Dev shorten the N-factor schedule.
The next adaptation is not for another 2 years, and the 1st gen scrypt ASICs are here already, although they do not bring much advantage over GPUs other than power consumption.
I guess this gen is 65nm-ish chips, but we all know how fast the BTC ASICs evolved from 65nm to 28nm — a matter of months, and the hashrate improvement was just enormous.
I say the Dev should move the 1st adaptation to somewhere around July-Sep 2014 to fight the 2nd gen scrypt ASICs, assuming the 65nm-to-28nm evolution follows approximately the same schedule as the BTC ASICs did.
And the 2nd adaptation to Feb-April 2015, just in case they do not evolve that fast.
Remember, once the ASICs reach 28nm, it is very, very hard for those guys to improve them for at least a couple of years.
Reason:
1. There is nothing they can do about the architecture to improve it because it is pre-determined by Scrypt.
2. Adding more chips/cores to improve the hashrate is a nearly linear trade-off: more hash costs proportionally more.
3. Now, shrinking the die. Once they reach 28nm, they will certainly want to go to 22nm or 20nm, but 20nm-ish is already near the cutting edge of current process technology and therefore a bottleneck. I think nowadays only a very small handful of companies, like Intel, can produce quality consumer-level 20nm-ish chips, and I doubt they are gonna share that tech with ASIC developers. And it will most likely take more than 2-3 years before 20nm becomes the standard outside a company like that.
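To put point 1 in concrete terms: scrypt's cost is pinned by its memory-hard core (ROMix), which forces every hash to fill and then randomly re-read a scratchpad of 128 · r · N bytes, so an ASIC designer cannot restructure the algorithm around that memory requirement. A minimal sketch of the arithmetic — the "block header" input is made up for illustration, but the KDF call itself uses Python's real stdlib `hashlib.scrypt`:

```python
import hashlib

def scrypt_scratchpad_bytes(n, r=1):
    """Memory each scrypt hash must touch: 128 * r * N bytes (the ROMix scratchpad)."""
    return 128 * r * n

# Litecoin-style scrypt: N=1024, r=1 -> 128 KiB per hash
print(scrypt_scratchpad_bytes(1024))   # → 131072

# Each N-factor bump doubles N, which doubles the per-hash scratchpad:
for n in (2048, 4096, 8192):
    print(n, scrypt_scratchpad_bytes(n))

# The stdlib can run the KDF itself (illustrative inputs, not a real block header):
digest = hashlib.scrypt(b"block header", salt=b"", n=1024, r=1, p=1, dklen=32)
print(len(digest))                     # → 32
```

The only knob a scrypt ASIC really has is how fast and how densely it can implement that scratchpad, which is exactly the die-shrink problem in point 3.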
I believe this is not really a concern for us, though I do understand where you are coming from. Here is my opinion:
The N-factor was studied and argued for a long time, both ways. The N-factor change was largely held off to keep older 1 GB memory GPUs in the game, to broaden VTC's mining base and keep those miners mining. At the current N-factor these miners can still report good hashrates, and they will likely have upgraded long before the next N-factor bump.
Also, understand that what makes VTC hard to mine for GPUs is even more of a problem for ASICs. Until we get ASIC numbers in for VTC with the current Gridseed chips (which look like they may be nothing more than an FPGA design transplanted directly to ASIC), it's entirely possible that an ASIC that gets a given hashrate on LTC would see a huge drop on VTC — a much bigger drop than current GPUs even — because GPUs have access to huge amounts of memory (by ASIC standards).
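The memory argument can be made quantitative: since each in-flight scrypt hash needs its own 128 · r · N byte scratchpad, the number of hashes a device can work on in parallel is roughly bounded by its memory divided by that figure. A toy model — the 2 GiB GPU and the 8 MiB of on-die ASIC SRAM are hypothetical round numbers, not measurements of any real hardware:

```python
def scratchpad_bytes(n, r=1):
    # Per-hash scrypt working set: 128 * r * N bytes (ROMix scratchpad)
    return 128 * r * n

def max_parallel_hashes(memory_bytes, n, r=1):
    # Upper bound on concurrent scrypt instances that fit in the given memory
    return memory_bytes // scratchpad_bytes(n, r)

GPU_MEM  = 2 * 1024**3   # hypothetical 2 GiB graphics card
ASIC_MEM = 8 * 1024**2   # hypothetical 8 MiB of on-die SRAM

for n in (1024, 2048, 4096):   # LTC-style N, then two doublings
    print(n, max_parallel_hashes(GPU_MEM, n), max_parallel_hashes(ASIC_MEM, n))
# → 1024 16384 64
# → 2048 8192 32
# → 4096 4096 16
```

Doubling N halves both counts, but in this toy model the GPU keeps a ~256× headroom advantage at every N; real throughput also depends on memory bandwidth and compute, so treat this purely as an illustration of why big memory helps.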
So it's possible that even 2nd gen ASICs would be taking a backseat to GPUs even before the N-factor switch.
Even Nvidia cards are in the game with VTC, and I'm willing to bet they would hash way faster than an LTC-focused ASIC.