You've got a Node app: it has a web server and a microservice, it needs a database, and it uses Redis as a message queue for the microservice. Let me tell you how to Dockerize your app in ~2 minutes.
Why should I use Docker?
It took me a long time to really figure out the benefits of using Docker. Up until now I would spin up a VirtualBox VM and have my dev environment all contained within it.
But when I started building Node apps split over different microservices, sharing a Redis-backed queue manager and a MongoDB database, I realised I needed a simple way to start and stop this 'production-like' environment on my development machine. I also wanted the other developers on my team to be able to do the same.
Docker was the solution. It means I can create runnable versions of the whole stack, check the build and run configuration into my Git repository, then run everything with one simple command:

docker-compose up

Magic!
So how does Docker work?
Docker containers are basically a running version of your app. Each process is run in its own container. So Nginx has its own container. Your Express.js API server? In its own container. Each Node.js microservice? In their own containers. MongoDB? In its own container. You get the picture.
Containers are created from images. Not photographs, but a snapshot of a prebuilt environment. What environment you say? Well, it's just the essential programs needed to run your app. That's what keeps them lightweight.
For example, you can use an Ubuntu 16.04 image. But it's a cut-down version, with just the bare minimum to do what you need.
Need more than one instance of your microservice running? Then you run multiple containers. All your containers can run on a single computer, or you can spread them across many.
Now, your containers can talk to each other, but only if you allow it. You can get them to talk through your host, or create a private bridge network so they can talk directly to each other. You'll see that happening when we get to Docker Compose (basically managing multiple containers).
What you'll need
You just need to install Docker CE. CE being the Community Edition, the free version for you and me. Go download it here. This bit isn't rocket science.
The concept here is that you define how to build an image in a file named
Dockerfile. How odd, a file with no extension. Oh well. That's just the default name; you can call it whatever you want, but then you'd have to specify it on the command line every time.
Here's an example
Dockerfile for a single Express.js node app. The inline comments will tell you what's what. Feel free to stick it where you have your
package.json. I did.
```dockerfile
# Use a Node.js image from Docker Hub. 'carbon' is the codename for
# version 8 of Node. Basically it's a Debian container with Node installed.
FROM node:carbon

# Create the app directory; this is where our source code will live.
WORKDIR /usr/src/app

# We need to install our npm packages in the container image, not on our
# host, so copy package.json (and package-lock.json) first...
COPY package*.json ./

# ...then install the packages.
RUN npm install

# This is a config file my app uses, with some Docker-friendly defaults
# baked in for when it runs in a container.
COPY config.docker.json config.json

# Now copy the application source into a src folder. Customise this if
# you want; just don't copy node_modules, as npm install already built it.
COPY ./src /usr/src/app/src

# My Node app listens on port 8080. EXPOSE documents that; you still
# publish it to the host with -p when you run the container, so you can
# go to http://localhost:8080.
EXPOSE 8080

# Start the app by running `npm start`.
CMD ["npm", "start"]
```
Now you want to build your image by running the following from the folder your
Dockerfile is in:
docker build -t my-node-app .
Docker is pretty cool here, it will treat each command in your
Dockerfile as a separate step creating an intermediary image. This means if you change your
Dockerfile it only needs to run the later commands. This saves a lot of download and compile time.
-t my-node-app tags your image with its own name. It saves having to use things like the hash ID, which isn't exactly human readable.
Now your image is created. But where is it? You see, Docker has what's called a local registry. You can see all your images in this registry with:

docker image ls

Like git, you can push images to a remote registry. You can use Docker Hub, but you only get one private repository for free. A repository is basically an image, but you can have multiple versions of that image. So if your app has an API server, a web UI app and two microservices, that would be four repositories.
You can set up your own private registry if you want, but that's a topic for another day.
Just remember this:
- Registries contain Repositories
- Repositories contain multiple image versions of the same thing
- An image is used to run a container
- A container is basically a single running process
How do I run my docker images?
So we have our app, now let's run it.
docker run -p 8080:8080 my-node-app
This will run our app inside a container. Press
Ctrl+c to exit it.
-p 8080:8080 is basically saying, connect port
8080 on my host (left one) to port
8080 inside my Docker container. It basically proxies your host's port through to the container's network. If all works, you can go to http://localhost:8080 and see your app.
So how do I run MongoDB in a docker container and connect it to my node app?
Let's assume you've got all your node apps set up in Docker. Now you want to create your database and wire everything together.
Here's our setup:
- MongoDB listening on port 27017
- A Redis server used as a message queue for your microservices
- A node.js microservice that talks to Redis and does some processing
- An Express.js Node app listening on port 8080
What we will do is create a
docker-compose.yml file that will orchestrate the whole lot. There is one gotcha: your app must be able to wait for, or reconnect to, things like the database, because you can't really sequence the boot order of the containers. That's good practice anyway and will make your app more robust.
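Here's one way to sketch that wait-and-retry logic in Node. `connectWithRetry` is my own illustrative helper, not part of any driver; you'd pass it whatever promise-returning connect call your MongoDB or Redis client provides:

```javascript
// Retry an async connect function until it succeeds or we give up.
// Works for any client (MongoDB, Redis, ...) with a promise-based
// connect call.
async function connectWithRetry(connect, { retries = 10, delayMs = 1000 } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await connect();
    } catch (err) {
      // Out of attempts: surface the last error to the caller.
      if (attempt === retries) throw err;
      console.log(`Connect attempt ${attempt} failed, retrying in ${delayMs}ms`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

At startup you'd call it like `connectWithRetry(() => MongoClient.connect(url))`, so the app survives the database container coming up a few seconds later.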
Here's the docker-compose.yml file. I like to put this with my
package.json in my API server project.
```yaml
# This is the version of docker-compose the file is written for
version: "3.6"

# Let's define all our services (i.e. our containers to run)
services:

  # Create a Mongo database
  my-db-server:
    image: "mongo"
    container_name: "my-db-server"
    # We want to use an internal private network for all these containers
    networks:
      - my-private-network
    # Open the MongoDB port so I can connect and debug
    # from my host. You could skip this if you want.
    ports:
      - "27017:27017"

  # Now let's create a Redis server on the same network
  my-queue-server:
    image: "redis"
    container_name: "my-queue-server"
    networks:
      - my-private-network

  # Here's a microservice we could build with our own Dockerfile
  my-microservice:
    image: "my-microservice-docker-image"
    container_name: "my-microservice"
    networks:
      - my-private-network

  # And finally our main Express.js app, which we want to access
  # on our host on port 8080
  my-express-server:
    image: "my-express-server"
    container_name: "my-express-server"
    networks:
      - my-private-network
    ports:
      - "8080:8080"

# Now define our network as a bridge network so all our
# containers can see each other
networks:
  my-private-network:
    driver: bridge
```
Now before we run it, remember that
config.docker.json we built into our image? It's time to explain how it works.
You should be thinking right now: what connection address do my Express.js app and my microservice use to connect to MongoDB and Redis? The sweet thing with Docker networking is that, by joining the network, the service containers become hosts on the internal network. The hostname is the service name. Bonza!
So in the above example, you connect to Mongo in your Express app with
mongodb://my-db-server:27017. It doesn't have a username or password set up in this example.
You can then connect to Redis with host my-queue-server on the default port 6379.
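In practice, this is all the config.docker.json baked into the image needs to capture: the Compose service names stand in for hostnames. Here's a sketch of what it might hold as a Node module; the field names and the database name are my own, not a real schema:

```javascript
// What a Docker-flavoured config might boil down to: service names
// from docker-compose.yml instead of localhost. (Illustrative shape;
// 'myapp' is a hypothetical database name.)
const config = {
  mongoUrl: 'mongodb://my-db-server:27017/myapp',
  redis: { host: 'my-queue-server', port: 6379 },
};

// On the host (outside Docker) the same fields would point at
// localhost instead, e.g. mongodb://localhost:27017/myapp.
module.exports = config;
```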
Build your Docker images with these changes in your application's configuration files, then start your containers with:

docker-compose up
If all goes well, you can use your app by going to http://localhost:8080 in your browser.