 

Is it best practice to daemonize a process within docker?

Tags:

docker

Many best-practice guides emphasize making your process a daemon and having something watch it so it can be restarted on failure. This made sense for a while. A specific example is Sidekiq:

bundle exec sidekiq -d

However, as I build with Docker I've found myself simply executing the command in the foreground: if the process stops or exits abruptly, the entire Docker container goes away and a new one is automatically spun up, which covers the entire point of daemonizing a process and having something watch it (all STDOUT is sent to CloudWatch / Elasticsearch for monitoring).
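
For example, a minimal sketch of that pattern (the image name "myapp" is a placeholder) is to run sidekiq in the foreground as the container's main process and let a restart policy replace the container if it exits:

# Hypothetical sketch: run sidekiq in the foreground (no -d) as the container's
# main process; --restart tells the Docker engine to spin up a replacement
# container if the process exits with a failure.
docker run -d --restart=on-failure --name sidekiq-worker \
  myapp bundle exec sidekiq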

I feel this also tends to reinforce the idea of a single process per Docker container; daemonizing would, in my opinion, encourage a violation of that general standard.

Is there any best practice documentation on this even if you're running only a single process within the container?

Asked Oct 21 '25 03:10 by CogitoErgoSum

2 Answers

You don't daemonize a process inside a container.

The -d is usually seen in the docker run -d command, which uses a detached (not daemonized) mode: the Docker container runs in the background, completely detached from your current shell.
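
A quick illustration of the difference, with "myimage" as a placeholder image name:

# Attached: the container's stdout/stderr stream to your current shell.
docker run myimage

# Detached (-d): the same container runs in the background of your shell;
# the process inside it is still an ordinary foreground process, not a daemon.
docker run -d myimage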

For running multiple processes in a container, the one managing them in the background would be a supervisor process.
See "Use of Supervisor in docker" (or the more recent docker --init).

Answered Oct 23 '25 20:10 by VonC


Some relevant 12 Factor app recommendations:

  • An app is executed in the execution environment as one or more processes
  • Concurrency is implemented by running additional processes (rather than threads)

Website:

https://12factor.net/
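
As a loose illustration of the second recommendation with Docker tooling (assuming a Compose file with a hypothetical "worker" service), you scale by adding processes/containers rather than threads:

# Three single-process worker containers instead of one multi-threaded worker.
docker compose up -d --scale worker=3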

Docker was open sourced by a PaaS operator (dotCloud), so it's entirely possible the authors were influenced by this architectural recommendation. That would explain why Docker is designed to normally run a single process.

The thing to remember here is that a Docker container is not a virtual machine, although it's entirely possible to make it quack like one. In practice a Docker container is a jailed process running on the host server. Container orchestration engines like Kubernetes (or Mesos, or Docker Swarm mode) have features that ensure containers stay running, replacing them should the need arise.
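
For instance, a minimal Kubernetes Deployment (all names and images below are placeholders) declares how many copies of a container should exist, and the control plane replaces any that die:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myorg/myapp:latest
EOF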

Remember my mention of duck vocalization? :-) If you want your container to run multiple processes, then it's possible to run a supervisor process inside that keeps everything healthy and running (a container dies when all of its processes stop):

https://docs.docker.com/engine/admin/using_supervisord/
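
A minimal sketch of such a config (the program names and paths are hypothetical; see the linked docs for the real setup): supervisord runs in the foreground as PID 1 and keeps the two programs below alive:

cat > supervisord.conf <<'EOF'
[supervisord]
nodaemon=true                     ; keep supervisord itself in the foreground

[program:web]
command=/usr/local/bin/web-server ; hypothetical long-running process
autorestart=true

[program:worker]
command=bundle exec sidekiq       ; foreground sidekiq, no -d
autorestart=true
EOF
# The image's CMD/ENTRYPOINT then runs supervisord with this config, so the
# container stays up for as long as supervisord does.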

The ultimate expression of this VM envy would be LXD from Ubuntu, where an entire set of VM-style services gets bootstrapped within LXC containers:

https://www.ubuntu.com/cloud/lxd

In conclusion, is it a best practice? I think there is no clear answer. Personally I'd say no, for two reasons:

  1. I'm fixated on deploying 12 Factor compliant applications, and so am married to the single-process model.
  2. If I need to run two processes on the same set of data, then in Kubernetes I can run containers within the same Pod... meaning Kubernetes manages the processes (running as separate containers with a common data volume; a sketch follows below).

Clearly my reasons are implementation specific.
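
To make reason 2 concrete, here is a rough sketch (all names and images are placeholders) of two containers in one Pod sharing an emptyDir volume:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-worker
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: web
    image: myorg/web:latest
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: worker
    image: myorg/worker:latest
    volumeMounts:
    - name: shared-data
      mountPath: /data
EOF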

Answered Oct 23 '25 21:10 by Mark O'Connor


