Say I have a Dockerfile that will run a Ruby on Rails app:
FROM ruby:2.5.1
# - apt-get update, install nodejs, yarn, bundler, etc...
# - run yarn install, bundle install, etc...
# - create working directory and copy files
# ....
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
From my understanding, a container is an immutable running instance of an image and a set of runtime options (e.g. port mappings, volume mappings, networks, etc.).
So if I build and start a container from the above file, I'll get something that executes the default CMD above (rails server):
docker build -t myapp_web:latest .
docker create --name myapp_web -p 3000:3000 -v $PWD:/app -e RAILS_ENV='production' myapp_web:latest
docker start myapp_web
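(Aside: I understand the create/start pair above is equivalent to a single detached docker run, so this could also be written as:)
docker run -d --name myapp_web -p 3000:3000 -v $PWD:/app -e RAILS_ENV='production' myapp_web:latest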
Great, so now it's running rails server in a container that has a CONTAINER_ID.
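(For reference, I can check on it using the name I gave it rather than the ID; these are standard commands, nothing specific to this app:)
docker ps --filter name=myapp_web
docker logs -f myapp_web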
But let's say tomorrow I want to run rake db:migrate because I updated something. How do I do that? As far as I can tell:
1. I can't use docker exec to run it in that container, because db:migrate will fail while rails server is running.
2. If I stop the rails server (and therefore the container), I have to create a new container with the same runtime options but a different command (rake db:migrate), which creates a new CONTAINER_ID. And then, after that runs, I have to restart my original container that runs rails server.
Is #2 just something we have to live with? Each new rake task I run that requires rails server to be shut down will create a new container, and these pile up over time. Is there a more "proper" way to run this?
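(I know I can at least list the stopped ones and delete them by hand, e.g.:)
docker ps -a --filter status=exited
docker rm <CONTAINER_ID>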
Thanks!
EDIT: If #2 is the way to go, is there an easy way to create a new container from an existing container so I can copy over all the runtime configs and just change the command?
It is incredibly routine to stop, delete, and recreate containers. If you fixed a typo in a view template and want to update your system, the generally correct Docker path is to build a new image with the fix, stop and delete the old container, and start a new container based on the new image.
There are two tricks that can help with the scenario you describe. First, when you docker run a container with a --name, you can use that name for all subsequent Docker operations; you never need to know the hex container ID. Second, when you run a one-off command, you can add a --rm option so that the container deletes itself when it finishes.
So this workflow might look like:
# Build the new image
docker build -t myapp_web:latest .
# Stop and delete the old server
docker stop myapp_web
docker rm myapp_web
# Run the migration task
docker run --rm myapp_web:latest rake db:migrate
# Restart the server
docker run -d --name myapp_web -p 3000:3000 myapp_web:latest
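If your server also needs the volume and environment settings from the docker create in the question, repeat them on that last docker run; a sketch reusing those exact flags:
docker run -d --name myapp_web \
  -p 3000:3000 \
  -v $PWD:/app \
  -e RAILS_ENV='production' \
  myapp_web:latest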
You can also look at the docker system prune command to clean up unused containers and images. I second the recommendation of Docker Compose to encapsulate simple docker run options, but you can also write sequences of commands like this into a shell script instead of typing them out by hand repeatedly.
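For example, a minimal sketch of that shell-script idea (the file name deploy.sh is just a placeholder):
#!/bin/sh
# deploy.sh: rebuild the image, run pending migrations, restart the server
set -e
docker build -t myapp_web:latest .
docker stop myapp_web || true   # ignore the error if no old container exists
docker rm myapp_web || true
docker run --rm myapp_web:latest rake db:migrate
docker run -d --name myapp_web -p 3000:3000 myapp_web:latest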
I strongly recommend using docker-compose. In my codebase I name the docker-compose service for Rails 'web', and then when I want to run rails console, for instance, I do:
docker-compose run web bundle exec rails console
And you can use the entrypoint script to run rails server; that way you start the server by running:
docker-compose up
in the app's directory.
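For example, a minimal docker-compose.yml sketch for this app (the build context, command, and port mapping are assumptions based on the question, not a drop-in file):
version: '3'
services:
  web:
    build: .
    command: bundle exec rails server -b 0.0.0.0
    ports:
      - "3000:3000"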
More information on docker-compose here: https://docs.docker.com/compose/
More reasons to use Compose rather than starting Docker images manually:
- It provides a syntax for setting things like the exposed ports per image, instead of writing them out every time.
- It makes it easy to launch multiple images at once; e.g., I have containers in my compose file for rails, redis, postgres, sidekiq, etc. Sooner or later you'll want multiple images.
- It's easy to specify local or remote images, and you can use up to both build and start your stack, which makes it easy to get new developers up and running.