curl (the program) has a way of passing arguments through a file instead of the command line. Something like "curl -d @filename ...". I recall being able to easily pass megabyte-long JSON-RPC queries without a problem by writing them first into a temporary file. This wasn't anything Bitcoin-related, it was for closed-source load balancers from F5 Networks.
Your architecture certainly looks original, but I probably just don't understand your constraints well enough. popen() is just a wrapper around the pipe(), fork() and exec() calls; it seems like using them directly would make the whole thing easier to understand.
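From memory, popen(cmd, "r") boils down to roughly the following (an illustration only, not a drop-in replacement; a real popen also has to remember the child PID so that pclose can wait for it):

#include <cstdio>
#include <unistd.h>

// What popen(command, "r") amounts to: a pipe, a fork, and an exec of
// "/bin/sh -c command". The child's stdout is redirected into the pipe and
// the parent reads the other end through a FILE stream.
FILE *popen_read_sketch(const char *command) {
    int fds[2];
    if (pipe(fds) != 0) return nullptr;
    pid_t pid = fork();
    if (pid < 0) { close(fds[0]); close(fds[1]); return nullptr; }
    if (pid == 0) {                 // child: stdout goes into the pipe
        dup2(fds[1], STDOUT_FILENO);
        close(fds[0]);
        close(fds[1]);
        execl("/bin/sh", "sh", "-c", command, (char *)nullptr);
        _exit(127);                 // only reached if exec failed
    }
    close(fds[1]);                  // parent keeps the read end
    return fdopen(fds[0], "r");     // pclose() would later waitpid(pid, ...)
}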
I'm writing this on a decidedly non-POSIX tablet, so I can't even look up my old notes.
Ok thanks for the input.
Here are my architectural constraints. I want to keep the program single-threaded on purpose: it's far easier to develop, maintain, debug and understand a single-threaded program than a multi-threaded one. I use a main loop and epoll to listen for incoming TCP connections from localhost; the wallet manager has a CLI over TCP/IP. The program has to be self-contained, with only trivial dependencies, so I am not pulling in any big fat libraries and whatnot. I use SIGALRM to interrupt epoll_pwait, for example, so that the main loop of my program can maintain a stable FPS. That makes it very important to avoid the potentially indefinite blocking that some system calls can cause. The project has to run on popular Linux platforms, so I try to keep it POSIX compliant. I believe I can still reap the benefits of parallelism even though my program is single-threaded: I achieve it with the help of the OS, and I'd rather spawn a new, independent and isolated process than a new thread to perform tasks in parallel.
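For concreteness, the main-loop pattern looks roughly like this (a simplified sketch, not my actual code; the 100 ms frame period, maintain_fps and handle_event are just placeholders):

#include <sys/epoll.h>
#include <sys/time.h>
#include <signal.h>
#include <cerrno>

static volatile sig_atomic_t alarm_fired = 0;
static void on_alarm(int) { alarm_fired = 1; }

int main() {
    // Install the SIGALRM handler without SA_RESTART so it interrupts epoll_pwait.
    struct sigaction sa {};
    sa.sa_handler = on_alarm;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, nullptr);

    // Fire SIGALRM every 100 ms (placeholder frame rate).
    struct itimerval tv {{0, 100000}, {0, 100000}};
    setitimer(ITIMER_REAL, &tv, nullptr);

    // Block SIGALRM in normal operation; it is unblocked only inside epoll_pwait.
    sigset_t blocked, waitmask;
    sigemptyset(&blocked);
    sigaddset(&blocked, SIGALRM);
    sigprocmask(SIG_BLOCK, &blocked, &waitmask);
    sigdelset(&waitmask, SIGALRM);

    int epfd = epoll_create1(0);
    if (epfd < 0) return 1;
    // ... epoll_ctl(epfd, EPOLL_CTL_ADD, listening_socket, ...) would go here ...

    struct epoll_event events[16];
    for (;;) {
        int n = epoll_pwait(epfd, events, 16, -1, &waitmask);
        if (n < 0 && errno == EINTR) {   // interrupted by SIGALRM: one "frame"
            alarm_fired = 0;
            // maintain_fps();            // hypothetical per-frame housekeeping
            continue;
        }
        for (int i = 0; i < n; ++i) {
            // handle_event(events[i]);   // hypothetical connection handling
        }
    }
}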
The curl program can indeed get its request data and config parameters from stdin; I believe the filename would be "-" in that case (the minus sign indicates stdin). I was sort of hoping that popen gives me more isolation from the hassles that come with multithreading. For example, I assume popen does not expose me to file/socket descriptor leaks or to the signal-handling pitfalls of a multithreaded context. If I were to use the fork-and-pipe approach, I'd have to disable signal handlers in the child process, close the sockets and so on (it gets real messy real fast). So that's why I'm using popen (with & at the end of the command line). But if this is a fallacy, I'd be glad if you enlightened me about that.
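For reference, the stdin route I mean looks roughly like this when driven through popen in write mode (a simplified sketch, not my actual code; note it blocks in pclose until curl exits, which is exactly what the trailing & in the function below avoids):

#include <cstdio>
#include <string>

// Feed curl both its config and the request body through stdin. "w" means
// everything written to fp becomes the command's stdin; the config ends with
// "--data-binary @-" followed by the JSON, just like the cfg string built below.
bool curl_config_via_stdin(const std::string &cfg) {
    FILE *fp = popen("curl -s --config - > /dev/null 2>&1", "w");
    if (!fp) return false;
    fwrite(cfg.data(), 1, cfg.size(), fp);
    return pclose(fp) == 0;   // blocks until curl exits
}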
Below is the problematic function:
void TREASURER::bitcoin_rpc(const char *method, const nlohmann::json* params) {
    /*
     * Instead of making a blocking cURL request here we are spawning a child
     * process with popen so that we can carry on with the main program while
     * the request is being executed. When the child process finishes it will
     * connect back to the main server providing us the response from Bitcoin RPC.
     * This clever trick achieves asynchronous HTTP requests without using threads
     * in our main process.
     */
    if (manager->get_global("auth-cookie") == nullptr) {
        manager->bug("Unable to execute Bitcoin RPC '%s': cookie not found.", method);
        return;
    }

    nlohmann::json json;
    json["jsonrpc"] = "1.0";
    json["id"] = method;
    json["method"] = method;

    if (params) json["params"] = *params;
    else json["params"] = nlohmann::json::array();
    //std::cout << json.dump(4) << std::endl;

    std::string cfg;
    cfg.append("--url http://127.0.0.1:8332/\n");
    cfg.append("--max-time 10\n");
    cfg.append("-u ");
    cfg.append(manager->get_global("auth-cookie"));
    cfg.append(1, '\n');
    cfg.append("-H \"content-type: text/plain;\"\n");
    cfg.append("--data-binary @-\n");
    cfg.append(json.dump());

    std::string hex;
    str2hex(cfg.c_str(), &hex);

    std::string command = "printf \"%s\" \"";
    command.append(hex);
    command.append(1, '\"');
    command.append(" | xxd -p -r ");
    command.append(" | curl -s --config - ");
    command.append(" | xargs -0 printf 'su\nsend ");
    command.append(std::to_string(id));
    command.append(" %s\nexit\nexit\n'");
    command.append(" | netcat -q -1 localhost ");
    command.append(manager->get_tcp_port());
    command.append(" > /dev/null 2>/dev/null &");

    FILE *fp = popen(command.c_str(), "r"); // Open the command for reading.

    if (!fp) manager->bug("Unable to execute '%s'.\n", command.c_str());
    else {
        pclose(fp);
        manager->vlog("Bitcoin RPC ---> %s", method);
    }
}
The above code fails if the command is too long. Strangely, the maximum command length is not equal to ARG_MAX; it's roughly 30 times smaller than that.
If I could figure out how to programmatically get the real maximum command-line length popen can handle, then I could implement a fallback for really long Bitcoin RPC arguments (such as raw transaction hex). The fallback would first call mkfifo to create a named pipe in /tmp, then spawn a curl process reading from that fifo with popen (& at the end), and then write the Bitcoin RPC into the fifo from the main program, roughly as sketched below.
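Something along these lines is what I have in mind (an untested sketch with made-up function and parameter names: only the JSON-RPC body, the part that can get huge, goes through the fifo while the short options stay on the command line; the auth cookie and the netcat call-back from the function above are left out for brevity):

#include <cstdio>
#include <string>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

bool rpc_via_fifo(const std::string &rpc_body, const std::string &fifo_path) {
    if (mkfifo(fifo_path.c_str(), 0600) != 0) return false;

    // Background curl reading the POST body from the fifo ('&' keeps popen
    // from blocking, just like in the existing function).
    std::string command =
        "curl -s --max-time 10 --url http://127.0.0.1:8332/"
        " -H \"content-type: text/plain;\""
        " --data-binary @" + fifo_path +
        " > /dev/null 2>/dev/null &";
    FILE *fp = popen(command.c_str(), "r");
    if (!fp) { unlink(fifo_path.c_str()); return false; }
    pclose(fp); // returns immediately because of the trailing '&'

    // Open the write end; this blocks until curl opens the read end (a
    // non-blocking open with retry would protect the main loop if curl
    // never starts).
    int fd = open(fifo_path.c_str(), O_WRONLY);
    if (fd < 0) { unlink(fifo_path.c_str()); return false; }

    // Caveat: once the pipe buffer fills, write() blocks until curl drains
    // it; a fully non-blocking variant would use O_NONBLOCK plus epoll here.
    const char *p = rpc_body.data();
    size_t left = rpc_body.size();
    while (left > 0) {
        ssize_t n = write(fd, p, left);
        if (n <= 0) break;
        p += n;
        left -= static_cast<size_t>(n);
    }
    close(fd);
    unlink(fifo_path.c_str()); // the name can go once both ends are open
    return left == 0;
}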