By: [email protected]

Guide on using Docker with Django and NextJS (some NGinx in there for fun too)

Published 6/1/2022, 2:48:23 AM


Docker - Django + NextJS

Introduction

So recently I finally took the time to 'convert' this blog's backend to using Docker for development and hopefully production as well.

It runs on a Django server with a NextJS frontend, which is my most-used stack at the moment, so I thought it'd be a great exercise to nail down that workflow and 'perfect' my setup.

Why?

But first, why?

Well, more directly, I was facing a problem with Heroku and the way my NextJS app worked. TLDR: the routing to and from my app was weird, and Heroku doesn't support Nginx directly but does support it through containers - so I finally had a good reason to containerize the entire app :).

More generally, this relates to the idea of making this full stack application more isomorphic.

I've tried Docker before for development and while I definitely saw some benefits it really was not for me (and my hardware) at the time. I'm definitely excited to give it another shot with more complex projects to see if maybe it really does make the workflow a bit easier - particularly for a (primarily) solo developer like myself.

To put it briefly: for both development and production environments (depending on your hosting), Docker offers a really great option for simple, consistent environments.

Quick spoiler: I thought, and still think, that containers fall quite a bit short of that promise in terms of simplicity and ease of use - but Docker is undoubtedly an incredibly powerful tool and, as we'll see, very useful when you need it!

Docker Compose

Docker Logo

Please note, the above is not the actual Docker logo but rather their mascot Moby.

So, to jump right into the 'solution', I first need to explain that this will all be done using Docker Compose.

Simply put: Docker Compose is a way to run multiple Docker containers in tandem, letting them interact much more easily while keeping each Dockerfile as short and sweet as possible.

If you're reading this post I'm going to assume you have a pretty good handle on Docker, Compose, etc. however if you don't (or you're just curious) feel free to review the Docker Compose docs. The Docker docs in general are pretty solid in my opinion so definitely worth a read if you need a refresher on containers or any of that stuff.

We'll go through each section of the docker-compose file I made to get these services to all play nice.

NOTE: This solution is NOT production ready. It is not viable security wise and should be used only for development. I'll discuss what to do for production more later.

NextJS

The NextJS Dockerfile is the simplest and probably the best place to start:

# client/Dockerfile
FROM node:alpine


RUN mkdir /client
COPY . /client
WORKDIR /client
RUN npm install

CMD ["./entrypoint.sh"]

Key points:

  • FROM node:alpine: This is the Docker image we're using. In this case simply Alpine Linux with Node installed.
  • The next few lines are fairly straightforward: create a directory IN THE CONTAINER at /client, copy the current client folder into it, then install the dependencies.
  • CMD ["./entrypoint.sh"]: The 'CMD' keyword in Docker is special. It comes at the end of your file and specifies the command to run to start your container. In this case, I point it at a script in the same directory.
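One caveat with COPY . /client: it will happily copy your host's node_modules and .next build output into the image. A .dockerignore file next to the Dockerfile keeps those out - a minimal sketch (the exact entries are my assumption for a typical NextJS project, not from the actual repo):

```
# client/.dockerignore (hypothetical)
node_modules
.next
```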

What is the script you ask? Great question!

#!/bin/sh
npm run build && npm run start

Pretty straightforward. In all honesty you don't really need to put this in a standalone file; I do so as a design preference, primarily because it lets you reuse the entrypoint.sh script both in Docker and outside of it.

Django

The Django Dockerfile is also relatively straightforward:

# server/Dockerfile
FROM python:3.9.12-alpine


ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1


RUN apk add --no-cache --virtual .build-deps ca-certificates gcc postgresql-dev linux-headers musl-dev libffi-dev jpeg-dev zlib-dev python3-dev


WORKDIR /app/server
COPY Pipfile Pipfile.lock /app/server/
RUN pip install --upgrade pip
RUN pip install pipenv
RUN pipenv install --system --deploy --ignore-pipfile
COPY ./server /app/server/
ENV IN_DOCKER True


CMD ["./entrypoint.sh"]

Key points:

  • FROM python:3.9.12-alpine: Just like the client - this image is Alpine Linux with Python 3.9.12 preinstalled.
  • RUN apk add ...: 'RUN' tells Docker to execute a shell command while building the image. These are the Alpine Linux packages our Python dependencies need to build and run properly.
  • The next few lines are similar to the client: copy files and install packages. The main difference is that we're copying into the /app/server folder in the container.
  • ENV IN_DOCKER True: This sets the environment variable "IN_DOCKER" to "True" in the container. Completely optional; I just like having access to this in Django.
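For example, in settings.py you could branch on it to pick the right database host. A hypothetical sketch (the helper and the service name are my assumptions, not from the actual repo):

```python
import os

def db_host(env=None):
    """Pick the Postgres host: the Compose service name when running
    inside Docker, localhost otherwise. (Illustrative helper only.)"""
    env = os.environ if env is None else env
    in_docker = env.get("IN_DOCKER", "").lower() in ("1", "true", "yes")
    return "db-dans-backend" if in_docker else "localhost"

# e.g. DATABASES["default"]["HOST"] = db_host()
```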

NOTE: I keep my Pipfile in the directory above my Django application - this will come into play when we start playing with the Compose file.

As far as the entrypoint file:

#!/bin/sh
until cd /app/server
do
    echo "Waiting for server volume..."
done

until ./manage.py migrate
do
    echo "Waiting for db to be ready..."
    sleep 2
done

./manage.py collectstatic --noinput

gunicorn app_name.wsgi:application --bind 0.0.0.0:8000 --workers 4 --threads 4 --log-file=-

A bit more complex than the NextJS entrypoint file, but still simple enough. This script:

  • Waits until the /app/server directory is available
  • Runs migrations, retrying until the database is ready to accept connections
  • Runs collectstatic
  • Starts the Django app behind gunicorn
    • No, gunicorn is NOT necessary for development; I just think it makes things run a bit more consistently

NGinx

This one is the most fun! For those who don't know, NGinx is a web server you can use to serve your application.

You don't need it to run these frameworks or to use Docker this way, however it does help performance and generally makes your life easier. Moreover, most production configs will probably require you to use NGinx anyway.

The Dockerfile:

# /nginx/development/Dockerfile
FROM nginx:alpine


RUN rm /etc/nginx/conf.d/*
COPY nginx.conf /etc/nginx/conf.d/


EXPOSE 80
EXPOSE 443


CMD [ "nginx", "-g", "daemon off;" ]

Key points:

  • COPY nginx.conf ...: Copies the NGinx config file from the current directory into the container. More on this file in a second.
  • EXPOSE 80: Documents that this container listens on port 80 (the actual port publishing happens in the Compose file, but it's good practice for a web server).
  • CMD ...: Note the list ('exec') form - Docker prefers that you split a command into a list like this rather than writing it as a single shell string. This is the NGinx command to start the web server in the foreground.
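On that last point, the difference between the two CMD forms is worth knowing: in the exec (list) form, nginx runs as PID 1 and receives stop signals directly, while in the shell form it gets wrapped in sh -c. A quick comparison (Dockerfile fragment, not meant to run standalone):

```
# Exec form (preferred): nginx is PID 1 and receives SIGTERM directly on 'docker stop'
CMD ["nginx", "-g", "daemon off;"]

# Shell form: wrapped in 'sh -c', so the shell receives the signal instead of nginx
CMD nginx -g "daemon off;"
```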

The big thing with NGinx is the conf file:

# Upstream Server(s)
upstream client_upstream {
  server client_app_name:3000; # note the usage of the docker container name NOT 'localhost'
}

upstream server_upstream {
  server server_app_name:8000;
}

# Server config
server {
    # Defaults
    listen 80 default_server;
    server_name localhost;
    server_tokens off;
    client_max_body_size 10M;

    # Gzip compression
    gzip on;
    gzip_proxied any;
    gzip_comp_level 4;
    gzip_types text/css application/javascript image/svg+xml;

    # Proxy headers
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_cache_bypass $http_upgrade;

    # Client static files - these are NextJS Routes
    location /_next/static/ {
        proxy_pass http://client_upstream;
    }

    # Server static files - these are Django routes
    location ~* ^/(static|static-debug|media|media-debug)/ {
        proxy_pass http://server_upstream;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }

    # This is the route for Django APIs
    location /api {
        try_files $uri @proxy_api;
    }

    # This is the route for the Django Admin
    location /management {
        try_files $uri @proxy_api;
    }

    location @proxy_api {
        proxy_pass http://server_upstream;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }

    # Client proxy - catch all send back to the client
    location / {
        proxy_pass http://client_upstream;
    }
}

I'm not going to get too far into this since NGinx config is well beyond the scope of this post, but I left comments regarding some bigger points.
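To make the routing concrete, here's a small Python sketch that mirrors the location blocks above - given a request path, it returns which upstream NGinx would proxy to (illustrative only; it ignores the try_files check for real files on disk):

```python
import re

def route(path):
    """Which upstream handles this path, per the config above?"""
    if path.startswith("/_next/static/"):
        return "client"  # NextJS static assets
    if re.match(r"/(static|static-debug|media|media-debug)/", path, re.IGNORECASE):
        return "server"  # Django static/media files
    if path.startswith("/api") or path.startswith("/management"):
        return "server"  # Django APIs and admin
    return "client"      # everything else falls through to NextJS
```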

docker-compose-dev.yml

Last but not least, the Docker Compose file itself is:

# docker-compose-dev.yml
version: '3'
services:
  # CLIENT
  client-dans-backend:
    container_name: client-dans-backend
    restart: always
    env_file:
      - ./client/.env.local
    build:
      context: ./client
      dockerfile: Dockerfile
    depends_on:
      - server-dans-backend
    ports:
      - "3000:3000"
    links:
      - server-dans-backend
    entrypoint: /client/entrypoint.sh


  # SERVER
  server-dans-backend:
    container_name: server-dans-backend
    build:
      context: .  # declaring the current (root) dir as the context so we can access "Pipfile"
      dockerfile: server/Dockerfile
    env_file: # setting env file for local development
      - ./server/.env
    depends_on:
      - db-dans-backend
    ports:
      - "8000:8000"
    links:
      - db-dans-backend
    entrypoint: /app/server/entrypoint.sh


  # DATABASE
  db-dans-backend:
    container_name: db-dans-backend
    restart: always
    image: postgres:14-alpine
    environment:
      - POSTGRES_DB=dans_backend
      - POSTGRES_USER=dans_backend
      - POSTGRES_PASSWORD=dans_backend
      - POSTGRES_HOST=db-dans-backend
    ports:
      - "5432:5432"


  # NGINX SERVER
  nginx-dans-backend:
    container_name: nginx-dans-backend
    build:
      context: ./nginx/development
      dockerfile: Dockerfile
    ports:
      - "80:80"
    links:
      - server-dans-backend
      - client-dans-backend

I left comments in this file as well to make it shorter. TLDR: It links everything together!

Note the usage of '-' rather than '_' in the service names - this ensures they're valid hostnames when the containers contact each other over the Compose network.
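If you're curious why: DNS hostname labels only allow letters, digits, and hyphens, so a name with underscores may not resolve reliably. A quick sketch of that rule (simplified RFC 1123 style):

```python
import re

def is_valid_hostname_label(label):
    """Letters, digits, and hyphens only; no leading/trailing hyphen; max 63 chars."""
    return bool(re.fullmatch(r"(?!-)[A-Za-z0-9-]{1,63}(?<!-)", label))
```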

Run it!

To run your set-up simply run:

docker-compose up -d

All this does is start your Docker container(s) - the up command - and run them in a detached state - the -d flag. By default this uses whatever docker-compose.yml file is in your current directory.

To stop your container(s) simply run:

docker-compose down

Feel free to take advantage of the Docker Desktop UI as well especially while getting started!

NOTE that your compose file does not have to be called docker-compose.yml, however the vast majority of frameworks, tools, etc. will expect this. That is, you can call it something else (i.e., I use docker-compose-dev.yml), but just be aware you may run into some weird issues with certain tools.

For example, for Docker Compose itself you would have to call:

docker-compose -f docker-compose-dev.yml up

Conclusion

That's about it! Overall feels like a lot but pretty simple when you break it down. I didn't manage to get to the production config in this post, so I'll be sure to write up that soon since it's (kind of obviously) a bit more involved.

I can't really succinctly say how I feel about Docker. I do love it, and it's quite useful, but it is definitely a pain at times - and usually slower than you'd probably like.

Thankfully, I think by the time you find yourself needing Docker, you already know 95% of what you need to get the most out of it. That's because, I'd argue, Docker itself is relatively simple; the Linux/platform knowledge is a bit less so.

All said, when you need Docker or find a good fit for it, it can be truly incredibly useful - I think the issue in this post is a great example of that.

Thank you for reading!

Reference

If you'd like to view the files/basic structure, it's available on GitHub here

NOTE: There is a certain directory structure these Dockerfiles expect. I tried my best to document it within the repo however you may have to play with it a bit to fit the system you like.




Tags

Django

Docker

NextJS

NGinx

Python

TypeScript