1) Hammer: RPC over HTTP(S) is a horrible transport for financial data.
2112, I think you're mixing a lot of things together. Firstly, you're talking about something like HFT, so you worry about latency, data consistency etc. I'm talking about a supporting network above the Bitcoin layer, where the consumers are mostly end-user machines or e-shop websites. 1000 ms of latency definitely isn't an issue here, unlike when trading on NASDAQ. I'm not designing a financial network in the classic sense of the word.
The second issue is that I'm NOT proposing RPC over HTTP(S). I'm proposing RPC over a varied set of transports, where HTTP(S) is only one possible way, more a fallback alternative for clients unable to communicate any other way. If you read my implementation, you'll see that I implemented a long-lived TCP socket as the default and main transport, NOT HTTP(S).
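To make the default transport concrete, here is a minimal sketch of a client for a long-lived TCP session carrying newline-delimited JSON-RPC messages. The framing (one JSON object per line), the method name and the host/port are illustrative assumptions, not taken from the actual implementation:

```python
import json
import socket

def rpc_call(sock, msg_id, method, params):
    """Send one JSON-RPC request over an already-open socket.
    Each message is a single JSON object terminated by a newline."""
    request = {"id": msg_id, "method": method, "params": params}
    sock.sendall(json.dumps(request).encode() + b"\n")

def read_message(reader):
    """Read one newline-delimited JSON message from a file-like reader."""
    line = reader.readline()
    if not line:
        raise ConnectionError("server closed the session")
    return json.loads(line)

# Hypothetical usage against a server at example.com:3333:
# sock = socket.create_connection(("example.com", 3333))
# reader = sock.makefile("rb")
# rpc_call(sock, 1, "server.version", [])
# response = read_message(reader)
```

The point is that the socket stays open for the whole session, so the server can also push notifications to the client at any time, something plain HTTP polling cannot do.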
The main problem is that a site cannot really distinguish sudden spikes in popularity from a distributed denial-of-service attack. The tools for DDoS defense and normal load balancing are popular and comparatively well understood, but they create more side-effect problems than they solve.
I'm sorry, but you are locked into the assumption that I'm designing JSON over HTTPS, so your comments miss the point.
Again, the proposed solution *uses* one TCP socket per "session", which is perfectly compatible with all DDoS mitigation tools. A site under attack *can* turn off the HTTP(S) layer and offer only the TCP socket for the duration of the problem. By the way, you're again mixing the transport layer with the application protocol. I don't see how the format of the application protocol (FIX, ZeroMQ or JSON) affects DDoS mitigation, as all the mentioned protocols are de-facto standards and every tool (probably) knows JSON, too.
In my experience the transport has to rely on (comparatively) long-lasting network sessions, and instead of lowest-common-denominator SSL it should use some other form of network security technology like IPsec or any available VPN implementation (but not SSL-VPNs). Please take advantage of the fact that your design involves two custom programs on both ends of the pipe.
Did you argue the same way in (circa) 1993 when they created HTTP? That users need custom programs on both ends? I'm proposing and implementing transports with long-lasting network sessions, using the widely used JSON format on the application layer; where did you read the opposite? Again, HTTP(S) is only a compatibility mode for browsers and other devices unable to handle sockets directly. I personally don't like HTTP(S) either, but I want to give developers freedom of choice. If they develop an application which works only over HTTP and the server provider closes the HTTP transport during a DDoS attack, hmm, that's their problem.
You do not need to use a protocol that was designed to support a dumb and full-of-holes web browser on the client side.
I'm sorry, but we're probably missing the whole point of this discussion. You're designing a high-security network protocol between two high-tech businesses using dedicated machines with any software setup imaginable on both sides, so VPNs and IPsec aren't a problem to run there. I'm designing a network used by end users, where one side can be a web browser or even a hardware wallet written in C on some $3 AVR processor.
JSON is essentially LISP s-expressions sweetened with JavaScript syntax. This is a great improvement over XML/SOAP, but the lack of transport-level error detection and packet boundaries weakens all implementations. All the transport-level problems show up as hard-to-troubleshoot "syntax errors" or just pass undetected.
I understand what you mean and I partially agree. I already thought about adding some CRC to the JSON payload, which should do the job. Actually it's not that important, because the chance that a packet breaks in such a way that it still forms a valid JSON message, only with different data inside, is really negligible. But (thanks to JSON's flexibility) a field containing a CRC of the message can be added without breaking backward compatibility, which is enough for me at this moment. When a CRC becomes necessary, it can be implemented.
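As a sketch of why this stays backward compatible: the checksum can be computed over the message without the checksum field and attached as an extra key, which old clients simply ignore. The field name "crc" and the use of CRC32 over a canonically serialized payload are my assumptions here, not part of any specification:

```python
import json
import zlib

def attach_crc(message: dict) -> dict:
    """Return a copy of the message with a CRC32 over its canonical
    JSON serialization added under a hypothetical "crc" key."""
    payload = json.dumps(message, sort_keys=True).encode()
    return {**message, "crc": zlib.crc32(payload)}

def verify_crc(message: dict) -> bool:
    """Check the "crc" field if present; messages without one are
    accepted, which is what keeps old clients working unchanged."""
    msg = dict(message)
    crc = msg.pop("crc", None)
    if crc is None:
        return True  # no checksum present -> backward compatible
    payload = json.dumps(msg, sort_keys=True).encode()
    return zlib.crc32(payload) == crc
```

Serializing with `sort_keys=True` matters: both sides must produce byte-identical JSON for the same message, otherwise the checksum comparison is meaningless.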
3) Pounding the wall: By committing to a rather primitive tool you also commit to solving many of the well-known problems of the RPC paradigm. By the time you correctly implement garbage collection of stale subscribers, reciprocal authentication of callbacks, etc., you'll understand why Microsoft delivered DCOM (Distributed COM) years after COM (Component Object Model). Please learn from Microsoft's mistakes and don't undertake an experimental re-verification of their money sinks. I'm not a big fan of DCOM, but it is a great example of how a full and efficient implementation of a simple RPC concept needs to be either very complex or very fragile. Also, when speaking of DCOM, please disregard the ActiveX GUI overlays on that protocol; they don't pertain here.
I think you're overcomplicating the task. By the way, there are no such issues as stale subscribers, because a subscriber "dies" once its session is over. There is no authentication, as the services are public. Besides, these are more implementation details of the server than of the protocol specification itself.
Your proposals of ZeroMQ or FIX sound cool, and you clearly understand the domain, but they don't answer the problems I'm trying to solve. They are only application protocols, and pretty complicated ones (think of that AVR client!); they're comparable to JSON (well, all three solutions have their own problems, each slightly different). Finally, even if I pick ZeroMQ, I'll need to add some custom way to support HTTP poll or HTTP push, because I *want* to provide services for this type of client.
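For clients that can only speak HTTP, the poll compatibility mode mentioned above can be sketched as a per-session queue on the server: messages that would have been pushed over the TCP socket are buffered and flushed as newline-delimited JSON on each poll. The class name and the exact framing are illustrative assumptions, not a spec:

```python
import json

class SessionQueue:
    """Buffers outgoing JSON messages for a client that can only
    poll over HTTP instead of holding a long-lived socket."""

    def __init__(self):
        self.pending = []

    def push(self, message: dict):
        """Queue a message that a socket client would receive immediately."""
        self.pending.append(message)

    def poll(self) -> str:
        """Return all buffered messages as newline-delimited JSON
        (the HTTP response body) and clear the buffer."""
        body = "\n".join(json.dumps(m) for m in self.pending)
        self.pending = []
        return body
```

The application-layer messages stay identical to the socket transport; only the delivery timing changes, which is exactly the trade-off a polling client accepts.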
Is there any real-world solution covering all the requirements I have for the final protocol? Something which I can use "as is"? If so, send me a link and I'll look into it.