Adam Johnston

Connecting to a PostgreSQL Docker container using Node & Slonik

So I recently started a new project (which I will totally see through to completion). In the course of reliving the joys of configuring a project from scratch, I reached the point where I was ready to hook up my web API to a database. It turned out to be a little more frustrating than I’d expected; after all, I’d done it once before about a year ago, so of course it should be fresh in my mind. Sarcasm aside, I’m writing this to hopefully solidify it in my brain and, failing that, at least have this as a reference.

Some context

Before jumping in I just wanted to add some context. I am approaching the architecture of this project in a micro-service-like way. The project is a single mono-repo broken into clients (UIs), services (APIs) and utils (shareable code). I think the benefit of this approach will be to keep different parts of the project isolated and individually deployable; however, it could always be bundled as a single image (I think so anyway, I guess I’ll find out).

Below is a snapshot of the rough shape of the project with some files not shown as they’re mostly configuration:

  clients
  services
    service A
  utils

Docker Compose configuration file

So in keeping with the idea of isolation, each service will have its own DB (again, only potentially, since each service could be configured to point to a single DB at deployment). I opted to use Docker Compose to easily grab a PostgreSQL image and spin it up when the service starts. Here’s a look at the docker-compose.yml as I have it right now. I’ll walk through and point out things that might be of interest if you’re not familiar with using Docker Compose.

version: '3.9'

services:
  db:
    image: 'postgres:14.1-alpine'
    container_name: service_a_db
    restart: always
    environment:
      POSTGRES_USER: service_a
      POSTGRES_PASSWORD: super_secret
    ports:
      - 5000:5432

Config breakdown

  • version
The version is optional since v1.27.0 of Docker Compose; however, I feel it doesn’t hurt to include it.

  • services
    This is a list of services you want to spin up using Docker. In my case I just want a single service, a DB, that service being the one I’ve named db. You can call yours Jim if you prefer.

  • image
This is the Docker image I want to create a container from. It’s an official image on Docker Hub for PostgreSQL. I’ve used the version tag so I can be certain that, as new versions of PostgreSQL are released, I am still developing and testing against the same version my production environment is using. alpine denotes that this is a minimal-sized image.

  • container_name
This is optional but it is handy to help find your container once it’s up and running. If you omit it you’ll be given a default name. To check this is set correctly you can run docker ps, which shows you a list of containers. You should see something like this:

    CONTAINER ID   IMAGE                  COMMAND                  CREATED        STATUS       PORTS                    NAMES
    51f30726c405   postgres:14.1-alpine   "docker-entrypoint.s…"   26 hours ago   Up 9 hours   0.0.0.0:5000->5432/tcp   service_a_db
  • restart
Even after reading the Docker Compose docs I’m honestly not sure why this should be set to always, but it’s set this way in the Docker Hub example, so I’ll go with what they suggest.

  • environment
    The environment allows for passing environment variables into the container. POSTGRES_PASSWORD is required by the official PostgreSQL image. For security reasons there is no root user when the DB starts. For my needs I’ve included a specific user with POSTGRES_USER. This also has the effect of creating a database of the same name. If this variable is omitted then postgres will be used instead.

  • ports
The ports key maps network ports inside the container to the outside world. By default, PostgreSQL runs on port 5432. In the case above I’ve mapped it to 5000, as each of my services will increment: Service A’s API will be on 8000 with its DB on 5000, while Service B will use 8001 and 5001. Not exposing these ports when I initially wrote the config led to an error, which you can read about in Troubleshooting.
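To make the mapping concrete, here’s a small sketch of my own (not from the post’s original code) showing how the values from docker-compose.yml combine into the connection string the API will use. The thing to notice is that the host-side port is the mapped 5000, not the container’s 5432:

```typescript
// How the docker-compose.yml values translate into a PostgreSQL
// connection string. Every value below comes from the config above.
const user = 'service_a'        // POSTGRES_USER
const password = 'super_secret' // POSTGRES_PASSWORD
const host = 'localhost'        // the container is published on the host
const hostPort = 5000           // left-hand side of the "5000:5432" mapping
const database = 'service_a'    // defaults to POSTGRES_USER when unset

const connectionString = `postgres://${user}:${password}@${host}:${hostPort}/${database}`
// → postgres://service_a:super_secret@localhost:5000/service_a
```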

Working with the DB service

Now that we’ve got our config we’re ready to get our DB started. To do this we can run the command docker-compose up. This command will pull the image (if it’s not already available locally), create a container and start it. You’ll get a bunch of output but ideally you’ll see a final log that looks something like the following:

service_a_db  | 2021-12-31 10:07:50.025 UTC [1] LOG:  database system is ready to accept connections

You can optionally run this in the background using the detach flag, -d, e.g. docker-compose up -d. While this frees up your terminal, you’ll not see the logs. At times you might want to see them, though, and you can do this with docker logs [CONTAINER_NAME] for a one-off look, or add the optional follow flag, -f, for realtime updates.

To stop the service you can run docker-compose down; add the -v flag if you also want to remove the data volumes.

If you want to mess around with the DB running in the container, there are two ways I do this. The first is to use Docker’s exec command to call psql: docker exec -it [CONTAINER_NAME] psql -U [POSTGRES_USER], e.g. docker exec -it service_a_db psql -U service_a. Alternatively, you can use the DB’s connection string, e.g. docker exec -it service_a_db psql postgres://service_a:super_secret@localhost:5432/service_a. This should then give you access to psql tooling.

Another approach is to access the container’s shell and then access psql with it. This offers you more access to the container environment beyond just psql. To do this use docker exec -it [CONTAINER_NAME] /bin/bash and then connect to the DB in the same way as above.

Connecting to DB in Node

Now that the service is running we can attempt to connect the Node API to the running instance. For this I use Slonik. First of all though we’ll want to add the DB’s connection string to the env variables for the API. I am using dotenv so I just need to edit my .env file to include it.

DB_CONNECTION_STRING=postgres://service_a:super_secret@localhost:5000/service_a
Notice that we’re using the port that we set in docker-compose.yml and not the one that PostgreSQL uses by default. This tripped me up a bit and I’ll cover it in Troubleshooting. Once the env variable is added, I access it using a config.ts which looks something like this:

import { MissingConnectionStringError } from './errors'

const { DB_CONNECTION_STRING: dbConnectionString } = process.env

if (!dbConnectionString) {
  throw new MissingConnectionStringError()
}

export const config = {
  dbConnectionString,
} as const

A couple of things to note here. I’ve created a custom error to be thrown if the env variable has not been loaded correctly. I then export a config object with the connection string so it can be accessed more easily and mark the config as const to avoid any unexpected attempts to overwrite the values within it. Using as const will mark each property of the object as readonly.
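The custom error itself isn’t shown in the post; here’s a minimal sketch of what errors.ts might look like. Only the class name comes from the import above; the message and everything else are my assumptions:

```typescript
// Hypothetical sketch of errors.ts; only the class name is taken from the
// import in config.ts, the rest is illustrative.
export class MissingConnectionStringError extends Error {
  constructor() {
    super('DB_CONNECTION_STRING is not set; check your .env file')
    this.name = 'MissingConnectionStringError'
  }
}
```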

Finally I have a db.ts which is where we’ll hook up Slonik to our DB and export a pool that will allow us to connect and query the DB. The pool can then be imported into our services to make queries against the DB.

import { createPool, type DatabasePool } from 'slonik'
import { config } from './config'

export const pool: DatabasePool = createPool(config.dbConnectionString)


Troubleshooting

Not exposing ports

The first time I did this I didn’t expose the ports in the Docker Compose config and my connection string used 5432, which resulted in a red herring of an error message. I would get the following error stack in my API:

  throw new ConnectionError(error.message);
ConnectionError: password authentication failed for user "service_a"
  at createConnection (/…/node_modules/slonik/src/factories/createConnection.ts:128:15)
  at processTicksAndRejections (node:internal/process/task_queues:96:5)

This led me down the path of wondering whether I had configured the environment variables for PostgreSQL correctly, but it was resolved by exposing the ports correctly.
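One way to rule out a missing port mapping before second-guessing your credentials is to check whether anything is listening on the host port at all. Here’s a small helper I’d sketch for that, using only Node’s built-in net module (my own illustration, not part of the original setup):

```typescript
import net from 'node:net'

// Resolves true if a TCP connection to host:port succeeds, false otherwise.
// A false result for localhost:5000 points at a missing `ports` mapping in
// docker-compose.yml rather than bad credentials.
export function canConnect(host: string, port: number, timeoutMs = 1000): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = net.connect({ host, port })
    const finish = (ok: boolean) => {
      socket.destroy()
      resolve(ok)
    }
    socket.setTimeout(timeoutMs, () => finish(false)) // nothing answered in time
    socket.once('connect', () => finish(true))        // something is listening
    socket.once('error', () => finish(false))         // e.g. ECONNREFUSED
  })
}
```

For example, if await canConnect('localhost', 5000) comes back false, the fix belongs in docker-compose.yml, not in your PostgreSQL environment variables.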