The encoding you use for transaction proposals should be fine, and if not, there are plenty of other coding schemes that work well for serial or other pipes with in-band control.
It sounds like the pure-ASCII-ness of the tx proposals (TxDPs) will benefit me here, compared to coming up with a binary encoding scheme. It adds a little more bloat, but guarantees I won't accidentally send any bytes that collide with special character/command bytes. When you speak of XON and XOFF, I assume you mean that I would be "defining" such bytes myself: I pick the XON, XOFF, TX_SEPARATOR, etc., and define this stupidly-simple protocol using only bytes across pins 2 and 3. ...?
How do the buffers work exactly? If I do not execute a serialPort.read() or readline() in python, does the buffer accumulate the data being sent from the other system? Does it only accumulate once I create the serial port object? Perhaps I can clear the buffer the moment the user presses the Receive-Tx-Over-Serial button on the Sign-Transaction dialog. Is there a "danger" in the buffer getting too big?
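Roughly what I have in mind, assuming pySerial (the port path and handler name are just placeholders, not actual Armory code):

    import serial

    # Sketch only: open the port once, then clear stale data on demand.
    ser = serial.Serial('/dev/ttyUSB0', baudrate=9600, timeout=2)

    def on_receive_tx_clicked():
        # Throw away anything that queued up in the OS receive buffer
        # before the user pressed the button, so the next read starts
        # with the incoming TxDP rather than stale bytes.
        ser.reset_input_buffer()   # flushInput() on older pySerial versions
        return ser.read(4096)      # whatever arrives before the timeout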
There are also several hardware flow control lines: RTS, CTS, DSR, DTR and CD. Using them is better in some ways, but opens you up to strange cabling problems. Historically, it has been hard to get everything to line up right, because both sides and the cable all need to follow the same scheme, and serial ports aren't nearly as standardized as one might think. If you don't mind restricting yourself to only particular serial adapters/cables/devices, go for it. Just keep in mind that they were designed for communication between a terminal and a modem.
I'll stay away. I don't mind doing a little extra work myself to add simplicity to device pairs with different OSes.
Thanks for the reply. It has been extremely useful!
For XON and XOFF, you'd be better off using 0x11 and 0x13. ASCII doesn't actually define flow control codes, which is odd considering how complete it is in other areas, but ASCII does define four "Device Control" codes, and people usually use DC1 for XON and DC3 for XOFF. Pretty much every UNIX terminal responds to them too, which is how you can stop scrolling with CTRL-S and resume with CTRL-Q.
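If you're using pySerial on the Armory side, it can even watch for DC1/DC3 for you; something like this, where the port path is just an example:

    import serial

    XON  = b'\x11'   # ASCII DC1, conventionally used as XON
    XOFF = b'\x13'   # ASCII DC3, conventionally used as XOFF

    # pySerial will handle software flow control itself if you ask it to:
    ser = serial.Serial('/dev/ttyUSB0', 9600, xonxoff=True)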
There are other named control codes in ASCII that may or may not be useful to you. There is an enquiry code, an acknowledge code, a negative acknowledge code, an idle code, a whole set of separator codes (unit, record, group, file), and codes to indicate the start of a header, the start of the text, the end of the text, and the end of transmission. A bunch of them have special meaning in UNIX too: EOT (end of transmission) is CTRL-D, which hangs up your terminal and logs off; ETX (end of text) is CTRL-C, which breaks; and SUB (substitute) is CTRL-Z, which substitutes a new prompt for the running job.
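If you wanted to press those codes into service for framing the ASCII TxDP, it could look something like this sketch (the function names are made up, and it assumes the TxDP text contains no control bytes other than newlines):

    STX = b'\x02'   # start of text
    ETX = b'\x03'   # end of text
    EOT = b'\x04'   # end of transmission
    ACK = b'\x06'   # acknowledge
    NAK = b'\x15'   # negative acknowledge

    def frame_txdp(txdp_ascii):
        # Wrap the pure-ASCII TxDP in STX/ETX and finish with EOT.
        # Nothing in the payload can collide with the control codes
        # as long as it stays plain ASCII text.
        return STX + txdp_ascii.encode('ascii') + ETX + EOT

    def unframe_txdp(raw):
        start = raw.index(STX) + 1
        end   = raw.index(ETX, start)
        return raw[start:end].decode('ascii')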
Way back in the bad old days, the 8250 UART chip itself had only a pair of 8-bit buffers. To send, you'd write your byte to the Transmit Holding Register and then set a bit in the control register, then the UART would take over, generating the start signal, shifting each bit to the line, calculating the parity internally and signaling it (if set), and writing the stop signal (if set). When receiving, it would do the reverse: listen for the start signal, shift each bit in, check parity (if set) and check for the stop signal (if set), copy the shift register to the Receiver Buffer Register, then set a bit in the control register, and if you were really, really lucky, trigger an interrupt so that you didn't have to poll the damn thing all the time.
If a second byte came in before you had read the first one, one of them was just lost. I think it was the newer byte, but I don't remember. It's been a while. Sending had similar issues, but wasn't quite as bad, because the chip would copy the THR into the shift register, so you only had to wait for as long as that took before writing the next byte, instead of having to wait for up to 10 baud periods for it to shift out to the line driver.
Over time, better chips came out, like the 16450 and then the 16550, which added FIFO buffers so they could store up like 16 or 64 bytes at a time. The early FIFOs were buggy as shit too, if I recall, but still a huge improvement.
In a modern system, you should be able to ignore all of that. You should be able to call the read function in any given high level language and have it either return data, return null, or block until something comes along. The OS should handle the hardware and buffering properly so that your read calls return everything since either the last read or since the open call. Any computer should be fast enough not to have to worry about anything like that.
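With pySerial, for example, the timeout setting picks which of those behaviors you get (sketch only; port path made up):

    import serial

    # timeout=None -> read() blocks until the requested bytes arrive
    # timeout=0    -> read() returns immediately with whatever is buffered
    #                 (an empty bytes object, not None, if there is nothing)
    # timeout=2.0  -> read() waits at most 2 seconds
    ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=2.0)

    chunk   = ser.read(1024)    # up to 1024 bytes, fewer if the timeout hits
    line    = ser.readline()    # up to and including the next newline
    pending = ser.in_waiting    # bytes already sitting in the OS buffer
                                # (inWaiting() on older pySerial versions)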
For the relatively small messages that will be involved here, you shouldn't need to worry about the buffers growing unreasonably. It shouldn't take more than a few seconds to send a transaction request. At 9600 baud 8N1, which is really slow, each byte costs 10 bit times (start bit, 8 data bits, stop bit), so you can get out nearly a kilobyte every second (9600 / 10 = 960 bytes).
What you will have to do is respect flow control signals coming in from other devices. Your first implementation will probably be between two modern boxes running Armory, both of which should be able to handle maximum speed serial lines without any problems. Looks like newer USB serial adapters can go up to 921,600 bits per second, which is probably faster than the connectors on the null modem can physically handle without the signal turning to mush, but is still pathetically slow by computer I/O standards. But, at some point, someone is going to build a little Arduino or other embedded system that they will want to use as a secure offline physical wallet. Depending on the hardware they use, it could even be a bit-banging interface where the CPU literally spins in a loop raising and lowering an I/O line at the right times to generate the signals. If one of those sends 00010011 (XOFF), it might really mean it.
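If you ever do talk to one of those, the sending side needs to check for that byte between writes. Roughly like this sketch (pySerial with xonxoff=True would also do this for you; the port path and chunk size are arbitrary):

    import serial, time

    XON, XOFF = 0x11, 0x13
    ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=0)

    def send_respecting_xoff(payload, chunk_size=64):
        paused, sent = False, 0
        while sent < len(payload):
            # Look for flow-control bytes from the far end before each chunk.
            for b in ser.read(ser.in_waiting or 1):
                if b == XOFF:
                    paused = True      # the slow device asked us to wait
                elif b == XON:
                    paused = False     # safe to resume
            if paused:
                time.sleep(0.01)
                continue
            ser.write(payload[sent:sent + chunk_size])
            sent += chunk_size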