Background processes in Docker containers can be a pain when it comes to logging, but they can't always be avoided. For instance, I was recently working on a container for a legacy CGI application at work, which required both a FastCGI wrapper and an Nginx webserver in the same container, with Nginx running as the foreground process and logging to the container's stdout. The CGI application does a lot of logging, and we didn't want to manage those logs separately from the Nginx logs. The CGI wrapper we were stuck with doesn't pass on stderr (and with CGI, stdout goes to the browser, so that's no use either), so we had to resort to more devious means.
Symlinking a log file to /dev/stdout (or /dev/stderr) doesn't work, because these are merely references to the current process's own stdout/stderr, and logging to a terminal device doesn't work either, because in production there won't be a terminal attached.
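To make that first point concrete, here's a small sketch (not from the setup described above) showing what /dev/stdout actually resolves to on a typical Linux system; the exact link targets may vary between distributions:

```python
import os

# On most Linux systems /dev/stdout is a symlink into the *current*
# process's own fd table, so whatever opens it just gets its own stdout.
print(os.readlink("/dev/stdout"))      # usually '/proc/self/fd/1'
print(os.readlink("/proc/self/fd/1"))  # wherever *this* process's fd 1 points
```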
So, I was stuck and gave up on the idea for a while, until I realized that on Linux a process can write to another process's stdout and stderr. There are 'files' in /proc/TARGET_PROCESS_PID/fd that represent that process's file descriptors, including stdout and stderr, and you can write to them just like any other file if you have the right permissions. A process can only write to these 'files' if it's running under the same user and group as the target process, and if that user is not root, you might also have to change the permissions of the 'files' further down the link chain. Of course, in my situation the master Nginx process was owned by root and the CGI scripts were running under an unprivileged user. However, I worked around this by writing to the stdout of an Nginx worker process, which is then picked up by the master process.
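As a rough illustration of the trick, here's a sketch rather than the exact code we used; the `find_nginx_worker_pid` and `log_to_container_stdout` helpers are just names made up for the example, and it assumes the script runs with permission to open the worker's fd:

```python
import glob

def find_nginx_worker_pid() -> int:
    """Hypothetical helper: find the PID of an Nginx worker process."""
    for cmdline_path in glob.glob("/proc/[0-9]*/cmdline"):
        with open(cmdline_path, "rb") as f:
            cmdline = f.read().replace(b"\0", b" ")
        if b"nginx: worker process" in cmdline:
            return int(cmdline_path.split("/")[2])
    raise RuntimeError("no nginx worker process found")

def log_to_container_stdout(message: str) -> None:
    # /proc/<pid>/fd/1 is the worker's stdout; inside the container it
    # points at the same place as the master's stdout, i.e. the Docker log.
    pid = find_nginx_worker_pid()
    with open(f"/proc/{pid}/fd/1", "a") as out:
        out.write(message.rstrip("\n") + "\n")

if __name__ == "__main__":
    log_to_container_stdout("hello from the CGI script")
```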
Unfortunately, due to the way we set up the container in the end, this solution hasn't found its way into production, but I hope it's useful to you.