You keep talking as if the feature size for all of these chips was set in stone as soon as they started design work. In fact, if you read that post, it doesn't even sound like they had picked whether they were going to do 130 nm or 65 nm:
They signed an NDA with the fab and received the cell libraries in July. It's set in stone then. It still took more than two months to tape out, and until February before the chips were hashing. Note that these libraries are not only specific to a node size, but to a specific process at a specific fab.
Just out of curiosity, do you have any idea what people actually do with these cell libraries? I don't mean "design the chip"; I mean how they physically use them. Because it doesn't sound like you have any idea.
The evidence you're presenting is totally irrelevant to the actual claim you made.
The evidence covers 100% of all other Bitcoin ASICs. What's your evidence that it can be done substantially faster?
Again, all your claims are about how long it takes to go from the start of a design to finished silicon. None of them have to do with how late in that process they could have finalized the feature size. The initial choices were likely made for financial reasons and stuck with for the same reasons.
I don't know why you're hammering this point. You said you didn't know what you were talking about, and obviously you haven't learned anything new in the past hour.
I also asked TheSeven about this in IRC:
[12:41] HDL is fairly high-level and doesn't care much about the node
...
[12:41] however the synthesis, optimization and test of course isn't
...
[12:43] the time from deciding on a process node and receiving chips in quantity highly depends on: the fab, the process node, how much you're willing to pay, and how much effort you want to put into optimization
...
[12:44] if you have a deal with a university fab that gives you some spare space on bi-weekly wafer runs, that will move a lot quicker than if you're a low-priority customer on a shared wafer run of some fab
...
[12:44] synthesis is automated, but you typically need to do it a few times if you want to reach high performance, tweaking some parameters
See that? Synthesis is automated. The part that takes the HDL describing the logic and generates a GDSII file containing the layers is done by computer. If you want to generate at multiple node sizes, you can. All you would need to do is get cell libraries for several different technologies and synthesize in parallel. It would mean less overall optimization, but you could maintain feature-size flexibility right up until tapeout, depending on your financial situation, since smaller nodes cost more.
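To make the "synthesize in parallel" idea concrete, here's a minimal sketch of what driving a couple of runs could look like. Everything in it is hypothetical: `synth_tool`, `miner_core.v`, and the library paths are placeholders standing in for whatever fab-specific tools and files a real team would actually have under NDA. The only point it illustrates is that the same HDL gets mapped against different node/process cell libraries, and the runs are independent of each other.

```python
#!/usr/bin/env python3
"""Rough sketch only: one synthesis run per candidate cell library.

The tool name, flags, HDL file, and library paths are placeholders,
not a real vendor CLI. In practice this step would be something like a
commercial synthesis tool (or an open one such as Yosys) driven by a
script that reads the HDL, maps it to the target library, and writes
out a netlist for place & route.
"""

import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical standard-cell libraries for different nodes/processes at one fab.
CELL_LIBRARIES = {
    "130nm": "libs/fabX_130nm_generic.lib",
    "65nm":  "libs/fabX_65nm_generic.lib",
}

def synthesize(node: str, liberty_file: str) -> int:
    """Run the (placeholder) synthesis tool against one cell library."""
    cmd = [
        "synth_tool",               # placeholder binary name
        "--hdl", "miner_core.v",    # the same HDL source for every node
        "--liberty", liberty_file,  # node/process-specific cell library
        "--out", f"netlist_{node}.v",
    ]
    return subprocess.run(cmd).returncode

# Launch the runs in parallel; the HDL never changes, only the target library.
with ThreadPoolExecutor() as pool:
    futures = {node: pool.submit(synthesize, node, lib)
               for node, lib in CELL_LIBRARIES.items()}
    results = {node: fut.result() for node, fut in futures.items()}

print(results)  # e.g. {'130nm': 0, '65nm': 0} if both runs completed
```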
This is how you actually figure things out, by the way. If you don't know something, you do research and find out. You don't just randomly guess based on "common sense" and then argue that your random guess is correct without any evidence.