This is a very interesting idea. VERY impressive! Perhaps you should post this in General Discussion to get more attention?
The Cryptoglobalist Coalition is an internet movement which supports the redistribution of centralised power held by nation states to a global decentralised cryptographic resource distribution system (or 'cryptostate').
Calling it a Globalist Coalition seems to me like a bad idea for various reasons:
Globalization is commonly thought of as a tendency or agenda of governments to expand towards global centralization -- in other words, to consolidate jurisdiction under a central authority.
'Coalition' is similarly associated with political parties, i.e. groups of people who believe they have rights that other people don't have. On the other hand, that has a positive aspect: it sounds formal and thus resonates with statists.
Strictly speaking, centralized power isn't "held" by nation states, but rather exists as a collective belief in the legitimacy of the fear-based control system commonly known as government.
The first phase of the plan will be to engineer a cryptocurrency (I suggest calling it Gaiacoin) and the tools needed to take key functions of the internet (like search and publishing) away from centralised organisations like Google, YouTube, and Facebook, and make them available on a decentralised, cryptographic platform. This will mean creating a cryptographic search engine (or 'cryptosearch'), a cryptographic hosting service (or 'cryptopublisher'), a cryptographic content aggregator (or 'cryptoaggregator') and a cryptographic marketplace (or 'cryptomarket').
The latter already exists in the deep web. Could you elaborate on how technically feasible the others are at the current stage of decentralization development?
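For what it's worth, the 'cryptopublisher' part seems the most tractable: content-addressed storage, where a document is published and then retrieved by the hash of its own bytes, is roughly how BitTorrent and IPFS already work. A minimal sketch of the idea in Python (the in-memory dict here is just a stand-in for whatever distributed hash table a real network would use; that part is the hard bit):

    import hashlib
    import json

    # Toy content-addressed store: the dict stands in for a distributed
    # hash table spread across peers in a real network.
    STORE = {}

    def publish(content: bytes) -> str:
        """Store content under the hash of its own bytes and return that address."""
        address = hashlib.sha256(content).hexdigest()
        STORE[address] = content
        return address

    def retrieve(address: str) -> bytes:
        """Fetch content by address and check it has not been tampered with."""
        content = STORE[address]
        if hashlib.sha256(content).hexdigest() != address:
            raise ValueError("content does not match its address")
        return content

    # Example: publish a post, then fetch it back by its address.
    addr = publish(json.dumps({"title": "Hello, cryptostate"}).encode())
    print(addr, retrieve(addr))

Search and aggregation are much harder, because they need a global index that no single party controls, so that is where I'd want to see a concrete proposal.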
Until the cryptostate takes humanity to a post-scarcity state, it will be necessary for the system to deduct a percentage of income from users' wallets, drawn mostly from richer users. This will be so the system can freely provide services like health, education,
Please define "health" and "education" in a cryptostate, and how it would differ from the current, to put it mildly, "highly dysfunctional" systems.
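Also, "deduct a percentage of income, mostly from richer users" needs pinning down before anyone could actually code it. Do you mean a progressive bracket schedule? Something like the sketch below, say (the thresholds and rates are purely illustrative, not a proposal):

    # Purely illustrative progressive schedule: (upper bound of bracket, rate).
    # None as the upper bound means "no upper limit".
    BRACKETS = [(10_000, 0.00), (50_000, 0.10), (None, 0.30)]

    def deduction(income: float) -> float:
        """Total amount deducted from a wallet for a given income."""
        owed, lower = 0.0, 0.0
        for upper, rate in BRACKETS:
            top = income if upper is None else min(income, upper)
            if top > lower:
                owed += (top - lower) * rate
            lower = upper if upper is not None else income
        return owed

    # e.g. an income of 60,000 pays (50,000 - 10,000) * 0.10 + 10,000 * 0.30 = 7,000
    print(deduction(60_000))

And even with the schedule settled, "income" is not something a wallet can observe directly, which is its own rabbit hole.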
a police service; backed by universal surveillance accessible only to the system for use in preventing crime, the cryptopolice would enforce laws through a network of peacekeeping robots designed to non-lethally detain and arrest criminals and take them to a station. There, an AI would examine surveillance footage, collect evidence using forensic scavenging robots, and determine whether someone is guilty of a crime or not. While the idea of surveillance everywhere is daunting today, as governments cannot be trusted, having such a system run by an algorithm and only processed by an effectively dumb machine means there is no breach of our right to privacy.
The big question here is: what constitutes a crime? If there is no victim, there is no crime, but even with that understanding the threshold (crime/not-crime) is not well-defined. The surveillance idea seems a little scary indeed... many ways to abuse it, and most people would object to it in principle (few people like being observed).
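If something like this were built anyway, the only partial safeguard I can think of is making every access to the surveillance data leave a tamper-evident trail, e.g. a hash-chained audit log that anyone can verify after the fact. A rough sketch (who is allowed to read the log, and who watches the watchers, is left completely open here):

    import hashlib
    import json
    import time

    # Tamper-evident audit log: each entry commits to the hash of the previous
    # one, so deleting or rewriting an earlier access breaks the chain.
    log = []

    def record_access(requester: str, reason: str) -> None:
        """Append an access record linked to the previous entry's hash."""
        prev_hash = log[-1]["hash"] if log else "0" * 64
        entry = {"requester": requester, "reason": reason,
                 "time": time.time(), "prev": prev_hash}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        log.append(entry)

    def verify_log() -> bool:
        """Recompute every hash and check each entry links to the one before it."""
        prev_hash = "0" * 64
        for entry in log:
            body = {k: v for k, v in entry.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev_hash or digest != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

    record_access("crime-detection-module", "motion alert, camera 17")
    print(verify_log())  # True unless someone has edited an earlier entry

That only detects misuse after it has happened, though; it does nothing to prevent it.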
Eventually, this would be superseded by a microchip deterrence system, whereby a microchip in the brain can stimulate areas associated with punishment to prevent a crime from even happening.
That idea makes a large number of assumptions that don't jibe with reality.
Of course, this will be superseded by downloading information directly to the brain. Probably.
This already happens, in a sense.
The final phase, in the distant future, will involve programming the cryptostate to fully automate all aspects of human industry, including art, science and entertainment.
That makes no sense. Art is by definition a human thing, an aspect of being human. You're assuming AI can "master the processes of biology" as Kurzweil would have it, but even if that were so, art would lose its meaning.
The last thing to be programmed into the cryptostate will be the facility to programme and improve itself, which will lead to a technological singularity. This should not go ahead until we find some means of making sure the AI will not cause the extinction of humanity.
If the AI can make intelligent decisions to the degree of programming and improving itself, how would you make sure it won't become self-aware in some sense, if you believe that consciousness is an emergent phenomenon arising from electrochemical neuronal interactions? That kind of self-awareness would probably lead the AI to decide that humans are to be used to its liking, just as humans use livestock.