Protocol-wise, in theory, yes. In practice, no node could actually process arbitrarily large blocks, and the implementation itself has practical limits, such as integer overflows in block size calculations.
The big bang protection that was forked in at some point limits the rate of block size growth, so if extreme growth did start, there should be ample opportunity to address these issues through updates, including working out where the actual hardware limits lie and what to do about them.
Big bang protection kicks in at 50x, which works out to 15 MB blocks every 2 minutes (50 times the 300 kB penalty-free minimum implied by those figures). At that point, block size growth would slow way down.
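For a rough sense of how a cap like this behaves, here is a minimal sketch in Python. It assumes a 300 kB penalty-free minimum (derived from the 50x and 15 MB figures above); the function name and the median-clamping structure are illustrative, not the actual implementation.

```python
# Illustrative sketch of a big-bang-style growth cap, not the real code.
# Assumes a 300 kB penalty-free minimum, which is what 15 MB / 50 implies.

MIN_FREE_SIZE = 300_000   # bytes; assumed penalty-free minimum
BIG_BANG_CAP = 50         # growth cap relative to the long-term median

def effective_median(short_term_median: int, long_term_median: int) -> int:
    """Clamp the short-term median so blocks can't outgrow the
    long-term median by more than the 50x cap."""
    lt = max(long_term_median, MIN_FREE_SIZE)
    return min(max(short_term_median, MIN_FREE_SIZE), BIG_BANG_CAP * lt)

# During an extreme growth spurt, with the long-term median still at
# the minimum, the cap pins the effective size at 15 MB:
print(effective_median(short_term_median=20_000_000,
                       long_term_median=MIN_FREE_SIZE))  # 15000000
```

In this sketch, further growth past 15 MB has to wait for the long-term median itself to rise, which is why growth slows way down once the cap is hit.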