Rank Images Site Built with React + Express + MongoDB

Not long ago I built a site that lets users rate images and produces a ranking. Yes, it looks similar to Zuckerberg’s notorious rank-the-faces site, but this project simply lets you rank whatever images you want. The content is not my concern.

Backend

MongoDB was chosen as the database for storing user info, images, and ratings. When my friends and I came up with the idea for this site, we also wanted to compare aesthetic preferences within our friend circle, so we decided to pre-generate a queue of image pairs for all users to rate; each user’s progress through the queue is stored as a field in the user model. The pairs are stored in a standalone collection.

User model:
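A minimal sketch (the exact fields in the repo may differ):

const mongoose = require('mongoose');

// The `progress` field tracks how far this user has gotten through the
// pre-generated queue of image pairs.
const UserSchema = new mongoose.Schema({
  name: String,
  progress: { type: Number, default: 0 },
}, { timestamps: true });

mongoose.model('User', UserSchema);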

Queue model:
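Again a minimal sketch (field names are illustrative): each document is one pre-generated pair of candidate images shown to every user, in a fixed order.

const mongoose = require('mongoose');

const QueueSchema = new mongoose.Schema({
  order: Number,
  candidates: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Image' }],
}, { timestamps: true });

mongoose.model('Queue', QueueSchema);

Pre-generating the pairs is then just enumerating every combination of images, roughly along these lines (a hypothetical seeding step, run once after the images are inserted and the models are registered):

async function seedQueue() {
  const Image = mongoose.model('Image');
  const Queue = mongoose.model('Queue');
  const images = await Image.find().exec();
  const pairs = [];
  for (let i = 0; i < images.length; i++) {
    for (let j = i + 1; j < images.length; j++) {
      pairs.push({ order: pairs.length, candidates: [images[i]._id, images[j]._id] });
    }
  }
  return Queue.insertMany(pairs);
}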

Rating and Image models are also created, but their structures are simple enough that I don’t need to include them here. The Rating model records which user voted for which candidate of a Queue entry; the Image model just stores the image data and file type. A vote property is added to the Image model to simplify the sorting process.

The API endpoint design is also straightforward. There are three sub-routers (a rough sketch of how they are mounted follows the list):

  1. image
    1. GET API for images
  2. rank
    1. GET the ranking of all images
  3. rate
    1. GET the next pair to rate, given a user id
    2. POST the user’s rating
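Wiring these up in Express looks roughly like this (the required file names are illustrative):

const express = require('express');
const router = new express.Router();

// Mount the three sub-routers under their respective path prefixes.
router.use('/image', require('./image'));
router.use('/rank', require('./rank'));
router.use('/rate', require('./rate'));

module.exports = router;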

There is no user login/signup. Every user is “invited” by having their information put directly into the database. They can then enter their unique ObjectId on the page to access the rating page. The ranking result page is public to all.

Frontend

The frontend is also very simple: a React Router handles a home page, a rating page, and a ranking result page (a rough sketch of the routes follows the list below).

  • The home page takes the user’s unique id and redirects to the rating page:
    • this.props.history.push(`/rate/${this.state.userid}`);
  • The rating page extracts the unique id from the URL and sends a request to the server for the pair to rate:
    • nextEntryForUser(this.props.match.params.userid)
    • nextEntryForUser calls the API endpoint and passes the userid for the Queue object.
  • The rank page sends a GET request to the server and displays all the images in order.
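Roughly, the routes look like this (component names and import paths are illustrative, not the actual files in the repo):

import React from 'react';
import { BrowserRouter, Route, Switch } from 'react-router-dom';
// Hypothetical page components; the real ones live elsewhere in the repo.
import Home from './Home';
import Rate from './Rate';
import Rank from './Rank';

// Home pushes to /rate/:userid, Rate reads the id from the URL params,
// and Rank just fetches and displays the ordered images.
const App = () => (
  <BrowserRouter>
    <Switch>
      <Route exact path="/" component={Home} />
      <Route path="/rate/:userid" component={Rate} />
      <Route path="/rank" component={Rank} />
    </Switch>
  </BrowserRouter>
);

export default App;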

The whole site is mobile responsive. I probably spent more time tuning the css than coding up and refactoring the components. -_-

I reused the Docker deployment mentioned in this article.

Video demo:

Takeaway/left to do

  1. data analysis on the ratings (our first incentive for building this)
  2. allow MongoDB to be connected remotely (this includes configuring the security settings so no one hacks us braindead-ly)
  3. find interesting and LEGAL images to rate on
  4. I really need to learn more about infrastructure stuff, which is my real job responsibility

The repository is here.

Image Uploading and Attaching for the Blog Project

Today I have been working on a feature that allows users to upload images as attachments and include them in Markdown format.

Here’s a demo of the complete feature:

The first decision I made was how to accept the upload. After several minutes of thinking with my pea brain, I decided to use multer to take file uploads from clients. multer stores each uploaded file in the destination directory under a randomly generated name, which avoids name collisions. It works well with Express in that it sets the ‘file’ property on req with useful properties such as the original filename, the file size, and the path to the stored file.

Without much further thinking (which later proved to be a mistake), I thought it would be natural to simply serve the directory that has all the uploaded files.

So I wrote these:

API for upload

const express = require('express');
const router = new express.Router();
const fs = require('fs');
const multer = require('multer');
const upload = multer({ dest: 'uploads/' });
const auth = require('../auth');

const { ALLOWED_EXTENSIONS, MAX_SIZE } = require('../config');

router.post('/', [auth.admin, upload.single('upload')], (req, res, next) => {
  const filename = req.file.originalname;
  const path = req.file.path;

  const splitArr = filename.split('.');
  if (splitArr.length === 1 || !ALLOWED_EXTENSIONS.includes(splitArr.pop().toLowerCase())) {
    removeFile(path);
    return res.status(403).json({ message: 'Forbidden file extension' });
  }

  if (req.file.size > MAX_SIZE) {
    removeFile(path);
    return res.status(403).json({ message: `File exceeds maximum allowed size: ${MAX_SIZE / 1000000} MB` });
  }

  res.json({ path: req.file.path });
});

function removeFile(path) {
  fs.unlink(path, err => {
    if (err) console.log(err);
  });
}

module.exports = router;

serve directory:

app.use('/uploads', express.static('uploads'));

frontend’s upload method

onUpload() {
  if (!this.file) {
    this.props.displayMessage('You haven\'t selected any file yet!');
    return;
  }

  const data = new FormData();
  data.set('upload', this.file);
  instance.post('/uploads', data, {
    headers: { Authorization: 'Bearer ' + this.props.token },
  })
    .then(response => {
      const files = this.state.files.slice();
      files.push(response.data.path);
      this.setState({
        files,
      });
      console.log(files);
    })
    .catch(err => {
      this.props.displayMessage(err.response.data.message);
    });
}

This worked fine. However, when I deployed some other minor changes, such as a progress bar for uploads and a one-click copy-path button, and hit refresh in my browser, the images were gone! I soon realized that Docker had created new containers because of the file changes, and files written inside the old containers are lost unless I back them up or mount a host directory.

That was when I decided to store all of the image files in MongoDB. This way, the images live in the same place as the post contents, which makes backups easy. It was also easy to implement because I already had code for other schemas.

With some copy pasta:

Schema for images:

const mongoose = require('mongoose');

const ImageSchema = new mongoose.Schema({
  data: Buffer,
  contentType: String,
}, { timestamps: true });

mongoose.model('Image', ImageSchema);

API handler for uploading and retrieving images:

const express = require('express');
const router = new express.Router();
const fs = require('fs');
const multer = require('multer');
const upload = multer({ dest: 'uploads/' });
const auth = require('../auth');
const mongoose = require('mongoose');
const Image = mongoose.model('Image');

const { ALLOWED_EXTENSIONS, MAX_SIZE } = require('../config');

// GET an uploaded image by its MongoDB id and send back the raw data with its content type.
router.get('/:id', (req, res, next) => {
  const id = req.params.id;
  Image.findById(id).exec()
    .then(image => {
      if (!image) {
        return res.sendStatus(404);
      }
      res.contentType(image.contentType);
      res.send(image.data);
    })
    .catch(err => {
      if (err.name === 'CastError') {
        return res.sendStatus(404);
      }
      next(err);
    });
});

// POST a new image: validate extension and size, save it to MongoDB, then delete the temp file.
router.post('/', [auth.admin, upload.single('upload')], (req, res, next) => {
  const filename = req.file.originalname;
  const path = req.file.path;

  const splitArr = filename.split('.');
  const extension = splitArr.pop().toLowerCase();
  if (!ALLOWED_EXTENSIONS.includes(extension)) {
    removeFile(path);
    return res.status(403).json({ message: 'Forbidden file extension' });
  }

  if (req.file.size > MAX_SIZE) {
    removeFile(path);
    return res.status(403).json({ message: `File exceeds maximum allowed size: ${MAX_SIZE / 1000000} MB` });
  }

  const image = new Image({
    data: fs.readFileSync(path),
    contentType: `image/${extension}`,
  });
  image.save()
    .then(saved => res.json({ path: `uploads/${saved._id}` }))
    .then(() => removeFile(path))
    .catch(next);
});

function removeFile(path) {
  fs.unlink(path, err => {
    if (err) console.log(err);
  });
}

module.exports = router;

I also had to catch ‘CastError’ in the GET handler because the stupid mongoose throws when the param cannot be cast into an ObjectId.

Basically, when a user uploads, I store the file in MongoDB, delete the temporary file from the filesystem, and return the ID of the document in the database. The ID can then be used with the GET API.

I’m also proud to say that this API endpoint is unit tested and (almost) fully covered. No, I do not wish to discuss the overall 0.2% coverage drop for the repo.

As I said above, I also added a progress bar, feedback for copying the path, an error prompt for invalid files, and some other UI features. The GitHub issue is now closed, and I just have to wait for the requester to come back online for my demo :).

Progress on blog rewrite

All right, this is where I say I can actually get something done.

Achievements for the blog project include:

  • APIs for log in/out, posts CRUD, comments CRUD, like/dislike
  • 93% coverage on APIs mentioned above
  • Using React-Redux to maximize data reuse and minimize the number of API calls
  • Using universal-cookie to store the logged-in state (okay, this might not deserve a standalone bullet point; a small sketch follows the list)
  • Using Docker (Dockerfile and docker-compose) to automate the deployment process.
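For the cookie part, the usage is a thin wrapper along these lines (the cookie key and helper names are illustrative, not the actual code in the repo):

import Cookies from 'universal-cookie';

const cookies = new Cookies();

// Persist the auth token after login so a page refresh keeps the user logged in.
export function saveToken(token) {
  cookies.set('token', token, { path: '/' });
}

export function loadToken() {
  return cookies.get('token');
}

export function clearToken() {
  cookies.remove('token', { path: '/' });
}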

Today, lucky for you, I’ve decided to talk about how docker-compose in this project works.

Docker is the company driving the container movement and the only container platform provider to address every application across the hybrid cloud.

^ From Docker’s self-introduction. What that means for me is that, with proper usage, I wouldn’t have to set up production machines with all the dependencies my project needs every time I want to deploy. Ideally all I would have to do is write Dockerfiles and a docker-compose.yml, install Docker, and let Docker handle the rest.

In this blog project, with the backend and the frontend separated, the dependencies (required on the environment, not the npm ones) are:

  • backend:
    • MongoDB
    • Node/npm
  • frontend:
    • Node/npm (for building)
    • Nginx (for serving)

With these in mind, I was able to write a Dockerfile and a docker-compose.yml for the backend by following the documentation and random StackOverflow answers online:

Dockerfile:

FROM node:carbon

WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .

RUN npm run build-server

EXPOSE 1717

RUN ["chmod", "+x", "/app/wait-for-it.sh"]

CMD ["node", "build/server.js"]

docker-compose.yml

version: '3'
services:
  blog-api:
    build:
      context: ./
      dockerfile: Dockerfile
    restart: always
    depends_on:
      - mongodb
    environment:
      MONGO_URL: mongodb://mongodb:27017/blog
    ports:
      - "1717:1717"
    command: bash /app/wait-for-it.sh mongodb:27017 -- node build/server.js
  mongodb:
    image: mongo:latest
    restart: always

The Dockerfile specifies the config for the blog-api container, while the docker-compose.yml tells Docker how my blog-api container relates to the mongodb service container.

Several things to notice:

  • Each Docker container is like a VM by itself, so the WORKDIR is the directory in the container, and when I do a ‘COPY . .’, naturally it copies from the current directory in the host to the current directory in the container.
  • Notice how I copied the package.json file first and ran npm install before copying anything else. The reason for this is that Docker uses a layered cache that reuses a previously built layer as long as the instruction and the files it depends on haven’t changed. Therefore, if I only change some API route file, I don’t have to wait for the long npm install again.
  • wait-for-it is a script that waits for a host:port to accept connections before running a command. It retries automatically, which is very useful here. I could, alternatively, just rely on blog-api restarting (restart: always) until MongoDB is up, but this tool has much less overhead.

Later I added another Dockerfile for the frontend, which looks like this:

FROM nginx

RUN apt-get update

RUN apt-get install -y curl wget gnupg

RUN curl -sL https://deb.nodesource.com/setup_8.x | bash

RUN apt-get install -y nodejs

WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .

RUN npm run build

RUN cp -a /app/dist/* /usr/share/nginx/html

RUN cp /app/nginx.conf /etc/nginx/

This image extends from nginx, so the default CMD starts up the nginx server. I need Node.js for building the static files, so I added those few lines to install it. The last two lines copy the static files into nginx’s serving directory and my config file into nginx’s config directory.

With the frontend added, I added one more service to docker-compose.yml:

web:
    build:
      context: ./
      dockerfile: Dockerfile-frontend
    restart: always
    ports:
      - "80:80"

This simply links my container for the web frontend to docker-compose so that I wouldn’t have to manually start up every container. Instead, I would only have to do docker-compose build and docker-compose up -d.

I also added automatic seeding for the MongoDB database but I’m too lazy to paste the steps here again so screw you.

The following point is unrelated to Docker, but I spent some time on it and felt it would be interesting to include here: my nginx.conf file. Since I’m building the frontend with the React single-page-serves-it-all pattern, I have to make sure that the nginx server returns the index.html file no matter what the sub-URL path is. The only exception is when the client is requesting some js or other resource file. With this in mind:

server {
    listen 80;
    root /usr/share/nginx/html;
    location / {
        try_files $uri /index.html;
    }
}

It tries to find the file specified in the URI first, and falls back to index.html otherwise. 404s are handled on the frontend by my React application.

For the next step, I’ll be working on attachments to posts as a feature request from this person.