Bitcoin continues to function because it doesn't allow multiple Partitions, and because in the case where chaos would result from a fork, the centralization of mining is able to coordinate to set policy. But we also see the negative side of centralization: recently the Chinese miners who control roughly 67% of Bitcoin's network hashrate were able to veto any block size increase. And they lied about their justification, since an analysis that smooth and I did (mostly me) concluded that their excuse about slow connections across China's firewall is not a valid technical justification. Large blocks can be handled over their slow connection by putting a pool outside of China that communicates block hashes to the mining hardware inside China.
Hmm, you have a deep background in relativity, but somewhere along the way things go wrong. Bitcoin partitions all the time - that's the default for everything. Nodes only synchronize ex post, hence the block cycle.
Dude, I haven't written anything in this entire thread (unless you crop out the context, as you did!) that disagrees with the fact that Bitcoin's global consensus is probabilistic. My point is that the system converges on the Longest Chain Rule (LCR) and doesn't diverge. Duh! The distinction between convergence and divergence has been my entire point when comparing Satoshi's LCR PoW to other, divergent Partition-tolerance designs such as a DAG.
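To make the convergence point concrete, here's a minimal sketch in Scala of the selection rule I'm referring to (Block, ChainSelection, and the two forks are hypothetical stand-ins for illustration, not Bitcoin's actual data structures): every node that sees the same set of competing forks independently picks the same winner, which is why temporary forks converge rather than diverge.

  // Hypothetical stand-ins for illustration only.
  final case class Block(hash: String, prevHash: String, work: BigInt)

  object ChainSelection {
    def totalWork(chain: List[Block]): BigInt = chain.map(_.work).sum

    // Satoshi's rule as commonly stated: follow the fork with the most cumulative work.
    def selectChain(forks: List[List[Block]]): List[Block] = forks.maxBy(totalWork)

    def main(args: Array[String]): Unit = {
      // Two forks seen after a temporary partition heals; every node picks the same one.
      val forkA = List(Block("a1", "genesis", 100), Block("a2", "a1", 100))
      val forkB = List(Block("b1", "genesis", 100))
      assert(selectChain(List(forkA, forkB)) == forkA)
      println("all nodes converge on fork A")
    }
  }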
I request you quote me properly next time, so I don't have to go hunting for the post you quoted from. I fixed it for you above by inserting the proper quote and underlining the part you had written without attribution and a link. I presume you know how to press the "Quote" button on each post. Please use it. Respect my time. Don't create extra work for me.
I'd humbly suggest starting with some thorough research of the basics:
Passive-aggressively implying that I haven't studied the fundamentals is not humble.
* Computers are electronic machines with billions of components. How does such a machine achieve a consistent state? See Shannon and von Neumann and the early days of computing (maybe even Zuse).
* Partitions, blocks, DAGs, etc. - all of this generally confuses the most fundamental notions. After investigating this matter for a very long time, I can assure you that almost nobody understands this.
Humble
Blah blah blah. Make your point.
I can assure you I've deeply understood the point about the impossibility of absolute global consistency in open systems (and a perfectly closed system is useless, i.e. static). Go read my debates with dmbarbour (David Barbour) from circa 2010.
I'll give an example: in any computer language on any modern OS, you have the following piece of code:
declare variable X
set X to 10
if (X equals 10) do Z
Will the code do Z? Unfortunately, the answer in general is no, and it's very hard to know why. The answer: concurrency. A different thread might have changed X, and one needs to lock X safely.
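For concreteness, here is that hazard written out as a minimal Scala sketch (RaceSketch and doZ are placeholder names, not from any actual codebase): a second thread can overwrite x between the assignment and the check, and wrapping both sides in a shared lock is the fix alluded to above.

  object RaceSketch extends App {
    @volatile var x: Int = 0
    def doZ(): Unit = println("Z ran")

    // "a different thread might have changed X"
    val other = new Thread(() => { x = 99 })
    other.start()

    x = 10                            // set X to 10
    if (x == 10) doZ()                // may or may not run, depending on thread scheduling
    else println(s"Z skipped: x is now $x")
    other.join()

    // The fix: every thread that touches x must take the same lock.
    val lock = new Object
    lock.synchronized {
      x = 10
      if (x == 10) doZ()              // now guaranteed, provided every writer also locks
    }
  }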
Perhaps one day you will graduate to higher Computer Science concepts such as pure functional programming and asynchronous programming (e.g. Node.js or Scala Akka), which simulate multithreading safely on one thread using promises and continuations (sketched below). But you are correct to imply there is never a perfect global model of consistency. This is fundamentally due to the unbounded entropy of the universe, which is also reflected in the unbounded recursion of Turing completeness, which in turn is why the Halting problem is undecidable.
If you are still doing non-reentrant, impure programming with threads (and mutexes), or otherwise using threads in anything other than a Worker-threads mode, you are probably doing it wrong (in almost every case).
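To illustrate the single-threaded style I mean, here is a minimal sketch using plain Scala Futures pinned to a single-threaded executor (SingleThreadSketch and the setup are illustrative assumptions on my part, not Akka itself): because every callback runs to completion on the one thread, the check-then-act on x needs no lock, unlike the threaded example above.

  import java.util.concurrent.Executors
  import scala.concurrent.{Await, ExecutionContext, Future}
  import scala.concurrent.duration._

  object SingleThreadSketch extends App {
    // One thread runs every callback, like Node.js's event loop.
    val pool = Executors.newSingleThreadExecutor()
    implicit val loop: ExecutionContext = ExecutionContext.fromExecutor(pool)

    var x = 0

    // Two "concurrent" tasks, yet their bodies never interleave: each callback
    // runs to completion on the single thread before the next one is scheduled.
    val a = Future { x = 10; if (x == 10) println("Z runs, no lock needed") }
    val b = Future { x = 42 }

    Await.ready(Future.sequence(List(a, b)), 5.seconds)
    pool.shutdown()   // let the JVM exit
  }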
In other words, data or state in modern computing is based on memory locations. Programs always assume that everything is completely static, when in reality it is dynamic (OS and CPU caches on many levels). These are all human abstractions. The actual physical process of a computing machine is not uniform. In fact, it is amazing that such things can exist at all, since Heisenberg discovered it's impossible to determine even the most elementary properties of a particle with certainty. Shannon found that one can still build reliable systems from many unreliable parts (the magic of computing).
Higher-level abstractions and quantum decoherence. As a classical human being, I am not operating at the quantum scale.
With regard to your basic thesis, you're right and wrong at the same time. Total coordination is impossible even at the microscopic level. Bitcoin implements a new coordination mechanism, based on the Internet, previously unknown to mankind. It's certainly not perfect, but that notion leads nowhere anyway. The foundation of computing is how one treats unreliable physical parts to create reliable systems (imperfect things add up to something that is as reliable as necessary).
Who ever said "perfect"? I said probabilistic. The key distinction was convergence versus divergence, but that seems to have escaped you along the way.