Docker for Devs – Start Simple

by Javier Treviño Saldaña

I’ve been wanting to use Docker in my projects. Being able to replicate my development environment on any modern machine sounds very promising. But every time I’ve tried Docker, I’ve hit some complex workflow that’s difficult to automate and ends up hindering my development flow.

For example, I tried it on a recent project that mixed Clojure and Node. Installing dependencies and getting the app running went well at first, but making any real changes required rebuilding containers, which resulted in a super slow feedback loop. I ended up installing everything directly on my computer.

So I’ve tried Docker in the past, following the official guides & other popular blog posts, without much success. This article documents my latest attempt, and this time it’s a deeper dive to actually understand it: reading the Docker manual instead of the “Quickstart” guides.

This time I’ll experiment with a simple web app using HTML/JavaScript/React. I’ll simplify things further by reducing the number of dependencies, so instead of using create-react-app to handle my dev workflow, I’ll shoot to understand the role of every piece I bring into the project.

My docker development environment should support:

  • (Re)installing dependencies, but cache them if they don’t change
  • (Re)running data migrations
  • (Re)running tests
  • (Re)running the application
  • Reloading (most) application changes instantly

The operations we need to run in production are different, so it makes sense that our Docker configuration will vary with the environment.

Getting Started– Choosing a React Architecture

When I use a framework I like visiting its documentation, even if I’ve used it before. As a framework evolves, its authors often advise new approaches to installing and using it. Let’s start with React. At the time of writing, the documentation suggests different options depending on your use case.

  • Adding React as a plain “script” tag
  • Plain “script” tag with JSX support
  • If you’re learning React try create-react-app
  • If you’re building a static content-oriented website, try X framework
  • If you’re integrating with an existing codebase, try <link to “More flexible toolchains”>

I really like this. Some frameworks ship with a ton of trash you never use. I’ll go with the last option (“integrating with an existing codebase”). I don’t have an existing codebase, but I don’t want React dictating how I structure my code, so this option seems to make the most sense for my goal.

More Flexible Toolchains:

  • X combines the power of W with the simplicity of presets.
  • Y is a fast, zero configuration web application bundler that works with React.
  • Z is a server-rendering framework that doesn’t require any configuration, but offers more flexibility than Q.

Upon closer inspection, I’m not familiar with those frameworks/toolchains. Knowing I can just include React in a script tag and add JSX support makes me wonder whether I need those libs at all.

There’s another option that seems good:

Creating a Toolchain from Scratch

A JavaScript build toolchain typically consists of:

  • A package manager, such as Yarn or npm. It lets you take advantage of a vast ecosystem of third-party packages, and easily install or update them.
  • A bundler, such as webpack or Parcel. It lets you write modular code and bundle it together into small packages to optimize load time.
  • A compiler such as Babel. It lets you write modern JavaScript code that still works in older browsers.

Now we’re talking. Those three pieces make a lot of sense. Most apps I’ve worked on need a package/dependency manager, and the bundler & compiler sound great for tackling the limitations and variability of JavaScript across browsers.
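
To make that concrete, here’s roughly what pulling in those pieces with npm could look like later on. The specific packages (webpack plus Babel and its React preset) are just one common combination from that ecosystem, not a decision we’re making yet:

$ npm install react react-dom
$ npm install --save-dev webpack webpack-cli babel-loader @babel/core @babel/preset-react

The package manager is the only piece we need right away; the bundler and compiler are ordinary dev dependencies we can add and configure once the basics work.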

Using Docker– (Don’t) Install Npm

We’ll need a dependency manager to download React; I’ll go with npm for no particular reason. npm ships with the official Node.js images on Docker Hub: https://hub.docker.com/_/node. I glanced at the available versions & options– I want something recent, but stable, so I’ll use the LTS tag (Long Term Support). Plus I’ll go with the alpine variant because it’s very lightweight compared to other images.

Alpine Linux is much smaller than most distribution base images (~5MB), and thus leads to much slimmer images in general.
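
If you want to see the difference yourself, pulling both variants and listing them side by side makes it obvious (exact sizes vary by version, so I won’t quote numbers, but the alpine one is dramatically smaller):

$ docker pull node:lts
$ docker pull node:lts-alpine
$ docker images node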

I’ve tried a couple Docker setups in the past, and ended up using Docker Compose or similar. I’ll start simpler this time. I read I can run npm init to create a package.json file to manage my dependencies. I’m hoping I can install React once I have that.

So the first challenge is how to execute npm init. Let’s look at the docker run command.

Docker runs processes in isolated containers. A container is a process which runs on a host. The host may be local or remote. When an operator executes docker run, the container process that runs is isolated in that it has its own file system, its own networking, and its own isolated process tree separate from the host.

Let’s poke this thing.

$ docker run node:lts-alpine

Downloads the image but doesn’t appear to do anything.

We can try running npm here, according to:

docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG…]

$ docker run node:lts-alpine npm

Usage: npm <command>

where <command> is one of:
    access, adduser, audit, bin, bugs, c, cache, ci, cit,
    clean-install, clean-install-test, completion, config,
    create, ddp, dedupe, deprecate, dist-tag, docs, doctor,
    edit, explore, fund, get, help, help-search, hook, i, init,
    install, install-ci-test, install-test, it, link, list, ln,
    login, logout, ls, org, outdated, owner, pack, ping, prefix,
    profile, prune, publish, rb, rebuild, repo, restart, root,
    run, run-script, s, se, search, set, shrinkwrap, star,
    stars, start, stop, t, team, test, token, tst, un,
    uninstall, unpublish, unstar, up, update, v, version, view,
    whoami

npm <command> -h  quick help on <command>
npm -l            display full usage info
npm help <term>   search for help on <term>
npm help npm      involved overview

Specify configs in the ini-formatted file:
    /root/.npmrc
or on the command line via: npm <command> --key value
Config info can be viewed via: npm help config

npm@6.13.4 /usr/local/lib/node_modules/npm

Cool! We’ve got access to npm. Now let’s try running npm init to create a package.json file to manage our dependencies, and then install React. One thing to keep in mind: when package.json is generated, it’ll probably end up inside the container.

$ docker run node:lts-alpine npm init

This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg>` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
package name:

The npm output above looks good, but there’s a problem. When it asked me for a package name, the process exited immediately.

From the docs:

For interactive processes (like a shell), you must use -i -t together in order to allocate a tty for the container process. -i -t is often written -it as you’ll see in later examples.

-i : Keep STDIN open even if not attached
-t : Allocate a pseudo-tty

I tried that out and it seemed to work. Now I can interact with the shell prompts:

$ docker run -it node:lts-alpine npm init

This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg>` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
package name: experiment
version: (1.0.0)
description:
entry point: (index.js) nothx
test command:
git repository:
keywords:
author: Javier Treviño Saldaña
license: (ISC)
About to write to /package.json:

{
  "name": "experiment",
  "version": "1.0.0",
  "description": "",
  "main": "nothx",
  "directories": {
    "lib": "lib"
  },
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Javier Treviño Saldaña",
  "license": "ISC"
}


Is this OK? (yes)

We made progress, but unfortunately after running this command I don’t see a package.json in my project dir. It must be in the container; let’s verify:

$ ls

$ docker run -it node:lts-alpine /bin/sh
/ @ ls
bin    dev    etc    home   lib    media  mnt    opt    proc   root   run    sbin   srv    sys    tmp    usr    var

Huh, it wasn’t in the container either. I ran docker container ls -a and noticed the container ID changes on every docker run. I’m not super familiar with Docker and this is one of the things I’ve been curious about– are containers deleted as soon as they exit?

From the docs:

By default a container’s file system persists even after the container exits. This makes debugging a lot easier (since you can inspect the final state) and you retain all your data by default. But if you are running short-term foreground processes, these container file systems can really pile up.

Wait, that’s confusing. So containers are not deleted, they should remain there, yet every time I execute docker run I get a brand new one. That’s the key: docker run always creates a new container, and the previous ones stick around in an “exited” state. I looked up how to list them:

$ docker ps -f "status=exited" | grep lts-alpine
CONTAINER ID        IMAGE                  CREATED             STATUS                           NAMES
9526f160df88        node:lts-alpine        2 minutes ago       Exited (0) 35 seconds ago        musing_chaplygin
59f1151e17be        node:lts-alpine        3 minutes ago       Exited (0) 3 minutes ago         quizzical_mestorf
b513661d8ad5        node:lts-alpine        4 minutes ago       Exited (1) 3 minutes ago         stupefied_nightingale

Our package.json file must be in one of those exited containers. Let’s verify for the sake of understanding. I suppose it’s not in the most recent one, because we just looked there, so I’ll spin up the next exited container.

docker container start [OPTIONS] CONTAINER [CONTAINER…]

$ docker container start 59f1151e17be
59f1151e17be

How do I run commands on it?

docker container exec [OPTIONS] CONTAINER COMMAND [ARG…]

$ docker container exec 59f1151e17be /bin/sh

That ran but returned immediately without a prompt (no interactive tty again), so let’s add -it:

$ docker container exec -it 59f1151e17be /bin/sh
/ @ ls
bin           etc           lib           mnt           package.json  root          sbin          sys           usr
dev           home          media         opt           proc          run           srv           tmp           var
/ @ %

$ docker stop 59f1151e17be

Found it. Alright, so that’s not ideal. In most projects I’ve worked on, keeping the list of dependencies under version control (e.g. git) is a must. If one thing has to live on my filesystem, I think it should be the source code… right? …we’ll explore that question another time. For now let’s try to get this dependency manager’s package.json file onto our host OS.
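
(Side note: the exited containers we’ve been accumulating can all be removed with a single command. docker container prune asks for confirmation and then deletes every stopped container, so only run it if you don’t need any of them anymore.)

$ docker container prune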

Docker can share files from the host OS with the containers it creates. I found “volumes” and “bind mounts”.

Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure of the host machine, volumes are completely managed by Docker. Volumes have several advantages over bind mounts […]

I’m gonna go ahead and ignore that general recommendation to use volumes. I don’t see the value just yet because we want package.json in our local dir/git repository. Doesn’t sound like the use case for volumes, but “mounting” a local dir in the container sounds about right.
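
For contrast, a named volume would look something like this (the volume name is made up and we won’t actually use it); the data lives wherever Docker manages it, not in our project directory:

$ docker volume create experiment_data
$ docker run --mount type=volume,source=experiment_data,destination=/data node:lts-alpine ls /data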

--mount: Consists of multiple key-value pairs, separated by commas and each consisting of a <key>=<value> tuple. The --mount syntax is more verbose than -v or --volume, but the order of the keys is not significant, and the value of the flag is easier to understand.

- The `type` of the mount, which can be bind, volume, or tmpfs. This topic discusses bind mounts, so the type is always bind.
- The `source` of the mount. For bind mounts, this is the path to the file or directory on the Docker daemon host. May be specified as source or src.
- The `destination` takes as its value the path where the file or directory is mounted in the container. May be specified as destination, dst, or target.

We typically see files like package.json in our project root, so let’s share this dir with the container. We’ll then want npm (running inside the container) to generate the package.json in that shared dir.

$ docker run \
  -it \
  --mount type=bind,source=$(pwd),destination=/home/app \
  node:lts-alpine \
  COMMAND

The docs (at the time of writing, early 2020) indicate we can mount using -v or --mount. They recommend --mount because, while more verbose, it’s easier to understand. In this case we’re mounting our current dir (pwd) to /home/app in the container (I chose /home/app arbitrarily, by the way; it doesn’t exist in the image and is created as the mount point).
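
For reference, the equivalent bind mount with the shorter -v syntax would look roughly like this; same effect, just terser:

$ docker run -it -v "$(pwd)":/home/app node:lts-alpine /bin/sh

I’ll stick with --mount in the commands below since it’s easier to read back later.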

Using the mount option we should be able to see our host OS project files in the container’s /home/app dir. Let’s verify that by running a shell in the container.

~/Code/experiment $ docker run \
  -it \
  --mount type=bind,source=$(pwd),destination=/home/app \
  node:lts-alpine \
  /bin/sh


/ @ ls
bin    dev    etc    home   lib    media  mnt    opt    proc   root   run    sbin   srv    sys    tmp    usr    var
/ @ cd /home/app
/home/app @ ls

We don’t have any files in our project root yet, so let’s open a new terminal tab in our host OS and create some:

~/Code/experiment $ ls

~/Code/experiment $ touch foo
~/Code/experiment $ ls
foo

Back in our container shell, list files again:

/home/app @ ls
foo
/home/app @ touch bar
/home/app @ ls
bar           foo

We should see the “bar” file we created within the container, in our host OS project dir:

~/Code/experiment $ ls
bar          foo

And we do see it, good.

Now, in order to create that package.json file in /home/app, we must tell Docker our “work dir” will be “/home/app” (via -w). The docker run command is getting quite complex; we’ll address that soon.
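
As a quick sanity check (a throwaway run, not part of the project), asking the container to print its working directory should give us /home/app:

~/Code/experiment $ docker run \
  --mount type=bind,source=$(pwd),destination=/home/app \
  -w=/home/app \
  node:lts-alpine \
  pwd
/home/app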

~/Code/experiment $ docker run \
  -it \
  --mount type=bind,source=$(pwd),destination=/home/app \
  -w=/home/app \
  node:lts-alpine \
  npm init

This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg>` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
package name: (app) experiment
version: (1.0.0)
description:
git repository:
keywords:
author: Javier Treviño Saldaña
license: (ISC)
About to write to /home/app/package.json:

{
  "name": "experiment",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Javier Treviño Saldaña",
  "license": "ISC",
  "description": ""
}


Is this OK? (yes)

The command exits and we’re back in our host OS. Let’s see if we have a package.json now:

~/Code/experiment $ cat package.json
{
  "name": "experiment",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Javier Treviño Saldaña",
  "license": "ISC",
  "description": ""
}

Sweet. We were able to run npm init in a container, and it modified our host OS. We’ve reached a good first milestone. Before moving on, let’s go ahead and make it easy to run npm.

I want to be able to run npm without remembering all those Docker details; normally you would just type npm init. One way to achieve that is a small wrapper script:

$ mkdir bin
$ vim bin/npm
#!/bin/sh
docker run \
  -it \
  --mount type=bind,source=$(pwd),destination=/home/app \
  -w=/home/app \
  node:lts-alpine \
  npm "$@"

Note the "$@", which passes all of the script’s arguments along to npm. The quotes are necessary so each argument expands as a separate word (i.e. command "$1" "$2" rather than command "$1 $2"). Let’s make it executable:

$ chmod +x bin/npm
$ bin/npm

Usage: npm <command>

where <command> is one of:
    access, adduser, audit, bin, bugs, c, cache, ci, cit,
    clean-install, clean-install-test, completion, config,
    create, ddp, dedupe, deprecate, dist-tag, docs, doctor,
    edit, explore, fund, get, help, help-search, hook, i, init,
    install, install-ci-test, install-test, it, link, list, ln,
    login, logout, ls, org, outdated, owner, pack, ping, prefix,
    profile, prune, publish, rb, rebuild, repo, restart, root,
    run, run-script, s, se, search, set, shrinkwrap, star,
    stars, start, stop, t, team, test, token, tst, un,
    uninstall, unpublish, unstar, up, update, v, version, view,
    whoami

npm <command> -h  quick help on <command>
npm -l            display full usage info
npm help <term>   search for help on <term>
npm help npm      involved overview

Specify configs in the ini-formatted file:
    /root/.npmrc
or on the command line via: npm <command> --key value
Config info can be viewed via: npm help config

npm@6.13.4 /usr/local/lib/node_modules/npm

Cool, still works. In the npm script we added above, we pass all the script arguments to npm so we can run “init” and whatnot.

$ bin/npm init

This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg>` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
package name: (app)
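
While we’re at it, here’s a tiny throwaway script (hypothetical, not part of the project) that shows why the quotes around "$@" matter: each argument stays intact, even when it contains spaces.

$ vim bin/args
#!/bin/sh
# Print each argument on its own line, wrapped in brackets
for arg in "$@"; do echo "[$arg]"; done

$ chmod +x bin/args
$ bin/args "hello world" foo
[hello world]
[foo]

With an unquoted $@ (or $*) the first argument would be split into hello and world.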

The last step I personally took, so I can type npm instead of bin/npm, was to add the project’s ./bin directory to my PATH variable. For those not familiar with it: when you run a command like npm or ruby, the PATH variable dictates which directories are searched for the executable. So we’ll give top priority to the local bin/ directory if it exists. You can add this to your .bashrc/.zshrc:

# Include project's ./bin dir in PATH
export PATH="./bin:$PATH"

I usually have a bin/ directory on every project, and the command above lets me run its executables easily.

Once you modify your .bashrc/.zshrc, open a new terminal window so this configuration takes effect, and you should be able to simply run (as long as you have bin/npm in this dir):

$ npm

Usage: npm <command>

where <command> is one of:
    access, adduser, audit, bin, bugs, c, cache, ci, cit,
    clean-install, clean-install-test, completion, config,
    create, ddp, dedupe, deprecate, dist-tag, docs, doctor,
    edit, explore, fund, get, help, help-search, hook, i, init,
    install, install-ci-test, install-test, it, link, list, ln,
    login, logout, ls, org, outdated, owner, pack, ping, prefix,
    profile, prune, publish, rb, rebuild, repo, restart, root,
    run, run-script, s, se, search, set, shrinkwrap, star,
    stars, start, stop, t, team, test, token, tst, un,
    uninstall, unpublish, unstar, up, update, v, version, view,
    whoami

npm <command> -h  quick help on <command>
npm -l            display full usage info
npm help <term>   search for help on <term>
npm help npm      involved overview

Specify configs in the ini-formatted file:
    /root/.npmrc
or on the command line via: npm <command> --key value
Config info can be viewed via: npm help config

npm@6.13.4 /usr/local/lib/node_modules/npm
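
If you ever want to confirm which npm the shell is picking up, command -v should point at the project script while you’re inside a directory that has one (your shell may print the relative or the absolute path):

$ command -v npm
./bin/npm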

Whew. That’s quite a lot. I do think it was worth it. There are some details to work out but we were able to get npm running and writing to our host OS without having to install it.

One thing that bothers me is the fact that new containers are created on every docker run. Seems like a waste for our use case. For now let’s add the --rm flag so the container is at least deleted once the command it runs completes. Although I gotta say, even with all this “creating new containers”, npm responds pretty fast. Makes me think it’s not an issue, at least not now.

$ cat bin/npm

#!/bin/sh
docker run \
  -it \
  --rm \
  --mount type=bind,source=$(pwd),destination=/home/app \
  -w=/home/app \
  node:lts-alpine \
  npm "$@"
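
To double-check that --rm is doing its job, run the wrapper once more and list exited containers again; nothing new should show up beyond the containers we created before adding the flag (or none at all, if you already pruned them):

$ bin/npm --version
6.13.4
$ docker ps -f "status=exited" | grep lts-alpine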

Ok, we got a package.json and npm working. Now what?

Well, I’m taking a coffee break. When you’re ready, keep reading: React - Docker for Development