Progress on blog rewrite

All right, this is where I say I can actually get something done.

Achievements for the blog project include:

  • APIs for log in/out, posts CRUD, comments CRUD, like/dislike
  • 93% coverage on APIs mentioned above
  • Using React-Redux to maximize data reuse and minimize the number of API calls
  • Using universal-cookie to store the logged-in state (okay, this might not deserve a standalone bullet point)
  • Using Docker (Dockerfile and docker-compose) to automate the deployment process.

Today, lucky for you, I’ve decided to talk about how docker-compose works in this project.

Docker is the company driving the container movement and the only container platform provider to address every application across the hybrid cloud.

^ From Docker’s self-introduction. What that means for me is that, with proper usage, I wouldn’t have to set up production machines with all the dependencies my project needs every time I want to deploy. Ideally, all I would have to do is write Dockerfiles and a docker-compose.yml, install Docker, and let Docker handle the rest.

In this blog project, with the backend and the frontend separated, the environment-level dependencies (required on the machine, not the npm ones) are:

  • backend:
    • MongoDB
    • Node/npm
  • frontend:
    • Node/npm (for building)
    • Nginx (for serving)

With these in mind, I was able to write a Dockerfile and a docker-compose.yml for the backend by following the documentation and random StackOverflow answers online:


FROM node:carbon

# Working directory inside the container (/app is assumed from the paths below)
WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .

RUN npm run build-server

# The script name was dropped in the original post; wait-for-it.sh is assumed
# from the wait-for-it tool discussed below (it is copied in by 'COPY . .')
RUN ["chmod", "+x", "/app/wait-for-it.sh"]

CMD ["node", "build/server.js"]


version: '3'
services:
  blog-api:
    build:
      context: ./
      dockerfile: Dockerfile
    restart: always
    depends_on:
      - mongodb
    environment:
      MONGO_URL: mongodb://mongodb:27017/blog
    ports:
      - "1717:1717"
    # wait-for-it.sh is assumed; the script name was dropped in the original post
    command: bash /app/wait-for-it.sh mongodb:27017 -- node build/server.js
  mongodb:
    image: mongo:latest
    restart: always

The Dockerfile specifies the config for the blog-api container, while the docker-compose.yml tells Docker how my blog-api container relates to the mongodb service container.

Several things to notice:

  • Each Docker container is like a VM by itself, so the WORKDIR is a directory inside the container, and when I do a ‘COPY . .’, it naturally copies from the current directory on the host to the current directory in the container.
  • Notice how I copied the package.json files first and ran npm install before copying anything else. The reason is that Docker caches each layer of an image and reuses it as long as the instruction and the files it depends on haven’t changed. Therefore, if I only change some API route file, I wouldn’t have to wait for the long npm install process again.
  • wait-for-it is a tool that waits for a host:port to accept connections before running a command. It retries automatically, which is very useful in this case. I could, alternatively, just rely on blog-api restarting on failure (restart: always), but wait-for-it has much less overhead.
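For reference, here’s a hedged sketch of how wait-for-it is typically obtained and invoked; the script name wait-for-it.sh and the repository path are the tool’s standard ones, not something shown in my files above:

```shell
# Download wait-for-it (a single bash script) into the project directory
curl -sLo wait-for-it.sh \
    https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh
chmod +x wait-for-it.sh

# Block until mongodb:27017 accepts TCP connections (retrying for up to
# 30 seconds), then exec the real command; --strict aborts on timeout
./wait-for-it.sh mongodb:27017 --timeout=30 --strict -- node build/server.js
```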

Later I added another Dockerfile for the frontend, which looks like this:

FROM nginx

RUN apt-get update

RUN apt-get install -y curl wget gnupg

# The URL was dropped in the original post; the NodeSource setup script for
# Node 8 (matching node:carbon) is assumed here
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -

RUN apt-get install -y nodejs


# /app is assumed from the cp commands below
WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .

RUN npm run build

RUN cp -a /app/dist/* /usr/share/nginx/html

RUN cp /app/nginx.conf /etc/nginx/

This image extends nginx, so the default CMD starts the nginx server. I need Node.js to build the static files, hence the few apt-get/curl lines. The last two lines copy the built static files into nginx’s serving directory and my config file into nginx’s config directory.
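As an aside, the same result can be had without shipping Node.js, apt packages, and node_modules inside the final image: Docker 17.05+ supports multi-stage builds, where one stage builds the static files and a clean nginx stage copies them in. A minimal sketch, untested against this project’s actual build scripts:

```dockerfile
# Stage 1: build the static files with Node
FROM node:carbon AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: serve them with a clean nginx image; only the built artifacts
# are copied over, so the final image contains no build tooling
FROM nginx
COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/
```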

With the frontend added, I added one more service to docker-compose.yml:

  # (the service name was not shown in the original post; 'blog-frontend' is assumed)
  blog-frontend:
    build:
      context: ./
      dockerfile: Dockerfile-frontend
    restart: always
    ports:
      - "80:80"

This simply adds my web-frontend container to docker-compose so that I wouldn’t have to start every container manually. Instead, I would only have to run docker-compose build and docker-compose up -d.
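The day-to-day workflow then boils down to a handful of standard docker-compose subcommands (blog-api being the backend service named above):

```shell
docker-compose build              # rebuild images whose Dockerfile/sources changed
docker-compose up -d              # start (or recreate) all services in the background
docker-compose logs -f blog-api   # follow the API server's logs
docker-compose down               # stop and remove all the containers
```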

I also added automatic seeding for the MongoDB database but I’m too lazy to paste the steps here again so screw you.

The following point is unrelated to Docker, but I spent some time on it and felt it would be interesting to include here: my nginx.conf file. Since I’m building the frontend with React’s single-page-serves-it-all pattern, I have to make sure the nginx server returns the index.html file no matter what the sub-URL path is. The only exception is when the client requests some JS or resource file. With this in mind:

server {
    listen 80;
    root /usr/share/nginx/html;
    location / {
        try_files $uri /index.html;
    }
}

It tries to find the file specified by the URI first, and falls back to index.html otherwise. 404s are handled on the frontend by my React application.
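Concretely, assuming the build produced index.html and bundle.js in the web root (file names are illustrative), requests map out like this:

```
GET /bundle.js  ->  $uri matches a real file  ->  nginx serves bundle.js
GET /posts/42   ->  no such file on disk      ->  nginx serves /index.html
                    (React Router then renders the /posts/42 view, or its 404 page)
```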

For the next step, I’ll be working on attachments to posts as a feature request from this person.
