Hi slush (and other readers)!
I reread your proposal (both here and in the Google docs). In my opinion you are proposing either (1) an internally inconsistent protocol or (2) a family of incompatible protocols held together by the hope of sharing some implementation details.
You wrote:

"Second issue is that I'm NOT proposing RPC over HTTP(S). I'll need to add some custom way how to support HTTP poll or HTTP push, because I *want* to provide services for this type of clients."

"because json encoder/decoder or curl/ajax is available everywhere in standard libraries"
The resultant Frankenstein monster (or family of monsters) is going to try to strangle its creator.
It is hard to argue with what you are proposing point by point because of the contradictory nature of the requirements. I'm going to group my response into the following points.
1) A protocol can be either RPC or bidirectional, but not both. The RPC paradigm (request-response, master-slave) is incompatible with a pair of peers exchanging asynchronous messages. The Bitcoin protocol itself has been asynchronous from its origin and cannot be squeezed into a master-slave architecture. It demands a bidirectional asynchronous protocol with finite state machines (FSMs) on both ends communicating over two flows of sequenced frames. A successful design will look very much like TCP on top of IP frames.
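To make point 1 concrete, here is a minimal sketch (my own illustration, not anything taken from your proposal) of the two sequenced flows I have in mind. Both peers run the same code, either side may send at any time, and a gap in the peer's sequence triggers an FSM transition rather than an RPC error:

    class Peer:
        """One endpoint of a symmetric, bidirectional link; there is no master."""

        def __init__(self, transport):
            self.transport = transport   # assumed to deliver whole frames, both ways
            self.tx_seq = 0              # sequence number of frames we have sent
            self.rx_seq = 0              # highest in-order frame seen from the peer

        def send(self, message):
            self.tx_seq += 1
            self.transport.write({'seq': self.tx_seq, 'ack': self.rx_seq, 'msg': message})

        def on_frame(self, frame):
            if frame['seq'] != self.rx_seq + 1:
                self.resynchronize(frame)    # state transition, like a TCP gap/retransmit
                return
            self.rx_seq = frame['seq']
            self.handle(frame['msg'])        # the handler may itself call send();
                                             # no pairing of requests to responses

        def resynchronize(self, frame):
            raise NotImplementedError        # hypothetical hook: request retransmission, etc.

        def handle(self, message):
            raise NotImplementedError        # the application-specific FSM lives here

Nothing above is tied to any particular encoding; the point is only that both directions carry their own sequence numbers and their own state.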
2) Your target market (low-end consumer-level devices) demands checksumming and framing at the application layer, precisely because cheap NAT gateways and cheap DSL/Cable/WLAN modems are known to mangle transport-level frames due to bugs (in the implementation of NAT) and excessive buffering (added to improve one-way file-transfer benchmarks).
If you think you can add a CRC later, you are going to lose by not detecting corruption early.
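As a sketch only (the exact checksum and header layout here are mine, purely for illustration), application-layer framing can be as small as a length prefix plus a CRC32 trailer, verified before any decoding takes place:

    import struct
    import zlib

    def frame(payload: bytes) -> bytes:
        # 4-byte little-endian length, payload, 4-byte CRC32 of the payload
        crc = zlib.crc32(payload) & 0xffffffff
        return struct.pack('<I', len(payload)) + payload + struct.pack('<I', crc)

    def unframe(buf: bytes) -> bytes:
        (length,) = struct.unpack_from('<I', buf, 0)
        payload = buf[4:4 + length]
        (crc,) = struct.unpack_from('<I', buf, 4 + length)
        if zlib.crc32(payload) & 0xffffffff != crc:
            raise ValueError('corrupted frame')   # caught here, not deep inside a parser
        return payload

Retrofitting something like this later means every already-deployed client has spent its life silently accepting mangled frames.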
3) In my experience JSON is probably close to being the least resilient encoding possible. I can't disclose proprietary data, but I have over a decade's worth of reliability statistics for various remote services (RPC-like) sold by organizations I have consulted for. The rough ranking, from the fewest errors to the most, is as follows:
3.1) ultra-lean character-based protocol similar to FIX, designed originally for MNP5 and V.42bis modems, currently used through a simple TCP/IP socket
3.2) SOAP (RPC over XML) with Content-Length, Content-MD5 and DTD verification enabled
3.3) SOAP and plain XML-RPC without the above strengthening options
3.4) JSON-RPC
3.5) RPC over e-mail with various human-readable encodings
All of the above are sold primarily to small businesses and individual traveling salespeople, which seems to be the target market you are planning to serve. We also have enterprise-oriented services using various MQ and *-remoting technologies, but the services offered aren't directly comparable.

I think that JSON could be an acceptable choice if, from the start, you demand strengthening with some form of Content-Length and Content-Checksum. JSON is also infamous for letting people easily make byte-endianness mistakes, very much like the current "getwork", which is neither big-endian nor little-endian. Yes, JSON saves a lot of bandwidth compared to XML. But I know of no presently available implementation that doesn't produce cryptic and hard-to-troubleshoot failure modes. It is a classic example of a "lean but very mean" design.
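For what it's worth, the strengthening I have in mind can stay entirely at the message level. The header names below deliberately echo the SOAP options from 3.2, but the exact format is only an assumption of mine, not a proposal:

    import hashlib
    import json

    def wrap(obj) -> bytes:
        # serialize, then prefix with the exact byte length and a digest of those bytes
        body = json.dumps(obj, sort_keys=True).encode('utf-8')
        header = 'Content-Length: %d\r\nContent-MD5: %s\r\n\r\n' % (
            len(body), hashlib.md5(body).hexdigest())
        return header.encode('ascii') + body

    def unwrap(data: bytes):
        head, _, body = data.partition(b'\r\n\r\n')
        fields = dict(line.split(': ', 1)
                      for line in head.decode('ascii').split('\r\n'))
        if len(body) != int(fields['Content-Length']):
            raise ValueError('truncated message')       # caught before the decoder runs
        if hashlib.md5(body).hexdigest() != fields['Content-MD5']:
            raise ValueError('checksum mismatch')
        return json.loads(body.decode('utf-8'))

Any hexadecimal hashes carried inside the JSON itself should also have their byte order documented explicitly, so the "getwork" confusion isn't repeated.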
4) You somehow read my earlier suggestions about IPsec as implying a high-end, large-volume target market. The reality is quite the opposite: Windows has supported IPsec since 2000, Linux has for a long time, the Netgear ProSafe family has several models in the $100-$200 range, and L2TP and PPTP are available for free on all iPhones, BlackBerries and Androids, and on many Nokias and other smartphones. The real hindrance is the HTTP(S)-uber-alles mindset, not the actual difficulty and cost of the implementation.
In summary I'd like to say that you wrote a very interesting and thought-provoking proposal. I just think that the range of targets you are hoping to cover is way too broad ($3 AVR processors, shared hosting plans, home computers, etc.).
I have my personal litmus test for any technological implementation in the Bitcoin domain: it has to support (and be tested for) chain reorganization. Preferably it should correctly retry the transactions upon a reorg; the absolute minimal implementation should correctly shut down with a clear error message and a defined way to restart operations. Any implementation that behaves like Tom Williams' Mybitcoin.org blame-it-on-the-chain-reorg is a failure. Thus far your proposals don't seem to encompass this essential feature of Bitcoin. If Electrum (and the associated protocols) aren't going to behave properly in the presence of chain reorganization events, then I withdraw my earlier opinion that Electrum has more potential than the Satoshi client.
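In code terms, and under my own assumptions about the data structures (none of this is taken from Electrum), the litmus test amounts to something like this: when a new block does not extend the known tip, roll back to the fork point and retry the affected transactions, or at the very least stop with an explicit error instead of carrying on:

    class WalletChainView:
        def __init__(self):
            self.chain = []          # list of (block_hash, prev_hash), oldest first
            self.confirmed = {}      # txid -> height at which we saw it confirm

        def on_block(self, block_hash, prev_hash, txids):
            if self.chain and prev_hash != self.chain[-1][0]:
                self.handle_reorg(prev_hash)
            self.chain.append((block_hash, prev_hash))
            height = len(self.chain) - 1
            for txid in txids:
                self.confirmed[txid] = height

        def handle_reorg(self, prev_hash):
            hashes = [h for h, _ in self.chain]
            if prev_hash not in hashes:
                # reorg deeper than our known history: stop cleanly, don't guess
                raise SystemExit('chain reorganization beyond known history; restart required')
            fork_height = hashes.index(prev_hash)
            del self.chain[fork_height + 1:]
            for txid, height in list(self.confirmed.items()):
                if height > fork_height:
                    del self.confirmed[txid]
                    self.retry(txid)     # re-broadcast; never blame "the reorg" and move on

        def retry(self, txid):
            raise NotImplementedError    # hypothetical hook for re-announcing a transaction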
Thank you again for your time and attention.