
Dev Containers: The Complete Guide to Containerized Development Environments

Everything you need to know about Dev Containers: devcontainer.json, Features, lifecycle scripts, Docker Compose integration, the CLI, performance optimization, security hardening, and common gotchas.

Dev Containers let you define your entire development environment — tools, runtimes, extensions, settings — as code inside your repository. Every teammate, CI runner, and Codespace gets the exact same setup. No more "works on my machine", no more half-day onboarding, no more "did you install version X of Y?".

This guide covers everything from your first devcontainer.json to multi-service Docker Compose setups, performance tuning, security hardening, and the gotchas that will bite you if nobody warns you first.

How Dev Containers Work

A Dev Container is just a Docker container with extra metadata. Your editor (VS Code, JetBrains, the CLI) reads a devcontainer.json file, builds or pulls an image, starts the container, mounts your source code, and connects to it. Extensions and language servers run inside the container, so they have access to the exact toolchain you defined.

text
┌──────────────────────────────────────────────────────────┐
│  Your Machine (Host)                                     │
│                                                          │
│   VS Code / JetBrains / CLI                              │
│        │                                                 │
│        │  reads .devcontainer/devcontainer.json           │
│        │  builds image, starts container                  │
│        ▼                                                 │
│  ┌─────────────────────────────────────────────────────┐ │
│  │  Docker Container                                   │ │
│  │                                                     │ │
│  │   /workspaces/my-project  ◄── bind mount or volume  │ │
│  │                                                     │ │
│  │   Node 22 · Python 3.12 · Git · Extensions          │ │
│  │   Language servers · Linters · Debuggers             │ │
│  │                                                     │ │
│  │   Ports forwarded to localhost automatically         │ │
│  └─────────────────────────────────────────────────────┘ │
└──────────────────────────────────────────────────────────┘

The spec is open and maintained at containers.dev. It is not locked to VS Code — any tool can implement it. GitHub Codespaces, JetBrains Gateway, DevPod, and the standalone devcontainer CLI all support it.

Your First Dev Container

Create a .devcontainer/devcontainer.json at the root of your repo:

json
// .devcontainer/devcontainer.json
{
  "name": "My Project",
  "image": "mcr.microsoft.com/devcontainers/typescript-node:22",

  "features": {
    "ghcr.io/devcontainers/features/git:1": {}
  },

  "customizations": {
    "vscode": {
      "extensions": [
        "dbaeumer.vscode-eslint",
        "esbenp.prettier-vscode"
      ]
    }
  },

  "forwardPorts": [3000],

  "postCreateCommand": "npm install"
}

That's it. Open the folder in VS Code, accept the "Reopen in Container" prompt, and you have:

  • Node 22 and TypeScript ready to go
  • Git installed via the Features system
  • ESLint and Prettier auto-installed in the editor
  • Port 3000 forwarded to your host
  • npm install runs automatically after creation

You can also use VS Code's command palette: Dev Containers: Reopen in Container. Or from the terminal:

bash
# Using the standalone CLI
devcontainer up --workspace-folder .
devcontainer exec --workspace-folder . bash

The devcontainer.json Reference

There are three ways to define the base environment, and the properties available depend on which you pick:

Image-based (simplest)

Point to a pre-built image. Great for getting started fast.

json
{
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu"
}

Dockerfile-based (customizable)

Build from a Dockerfile when you need system packages or custom layers.

json
{
  "build": {
    "dockerfile": "Dockerfile",
    "context": "..",
    "args": { "NODE_VERSION": "22" },
    "target": "dev"
  }
}
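
A Dockerfile matching those build options might look like the sketch below; the stage names and packages are illustrative, not prescriptive:

```dockerfile
# .devcontainer/Dockerfile (sketch)
ARG NODE_VERSION=22
FROM node:${NODE_VERSION}-bookworm-slim AS base

# "dev" target: tooling a production image would not carry
FROM base AS dev
RUN apt-get update && apt-get install -y \
    git curl \
    && rm -rf /var/lib/apt/lists/*
```

The build.args value overrides NODE_VERSION, and build.target selects the dev stage.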

Docker Compose-based (multi-service)

For projects that need a database, cache, or other services alongside the dev environment.

json
{
  "dockerComposeFile": "docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}"
}

Key properties

  • name (string): display name in the UI
  • image (string): container image to use
  • build.dockerfile (string): path to the Dockerfile
  • features (object): Dev Container Features to install
  • forwardPorts (array): ports to forward to localhost
  • containerEnv (object): environment variables for the container
  • remoteEnv (object): environment variables for tooling only
  • remoteUser (string): user that tools run as (default: container user)
  • containerUser (string): user the container runs as
  • mounts (array): additional volume/bind mounts
  • runArgs (array): extra docker run arguments
  • customizations (object): editor-specific settings (extensions, settings)
  • shutdownAction (enum): what happens when you close the editor (none | stopContainer)
  • init (boolean): use a tini init process to handle zombie processes
  • privileged (boolean): run in privileged mode (avoid if possible)
  • capAdd (array): Linux capabilities to add (e.g. SYS_PTRACE for debugging)

Available variables

You can use these variables inside devcontainer.json values:

  • ${localEnv:VAR}: a host environment variable
  • ${containerEnv:VAR}: a container environment variable (valid in remoteEnv only)
  • ${localWorkspaceFolder}: full path to the folder on the host
  • ${localWorkspaceFolderBasename}: name of the folder on the host
  • ${containerWorkspaceFolder}: workspace path inside the container
  • ${devcontainerId}: a unique ID for the dev container
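
To see how these compose in practice, here is a hypothetical fragment that bind-mounts the host's ~/.ssh directory read-only and derives environment variables from the workspace (the paths and variable names are illustrative):

```json
{
  "mounts": [
    "source=${localEnv:HOME}/.ssh,target=/home/vscode/.ssh,type=bind,readonly"
  ],
  "containerEnv": {
    "PROJECT_NAME": "${localWorkspaceFolderBasename}"
  },
  "remoteEnv": {
    "PATH": "${containerEnv:PATH}:${containerWorkspaceFolder}/bin"
  }
}
```

Note that ${containerEnv:PATH} only resolves in remoteEnv, since the container must already exist for it to be read.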

Features: Composable Dev Tooling

Features are the killer feature of Dev Containers (pun intended). Instead of writing custom Dockerfiles for every tool you need, you compose pre-built, shareable units of installation logic. Need Node, Python, the AWS CLI, and Docker-in-Docker? Four lines:

json
"features": {
  "ghcr.io/devcontainers/features/node:1": { "version": "22" },
  "ghcr.io/devcontainers/features/python:1": { "version": "3.12" },
  "ghcr.io/devcontainers/features/aws-cli:1": {},
  "ghcr.io/devcontainers/features/docker-in-docker:2": {}
}

Each Feature is a small package containing:

  • devcontainer-feature.json — metadata, options, dependencies
  • install.sh — shell script that runs as root
  • Optional lifecycle hooks and additional files

How to reference Features

  • OCI registry (ghcr.io/devcontainers/features/node:1): the most common method; versioned and cached
  • HTTPS tarball (https://example.com/feature.tgz): self-hosted or private Features
  • Local path (./my-feature): developing or testing a custom Feature

Writing a custom Feature

When no community Feature exists for your tool, write your own. The minimum structure:

text
src/my-tool/
├── devcontainer-feature.json
└── install.sh
json
// devcontainer-feature.json
{
  "id": "my-tool",
  "version": "1.0.0",
  "name": "My Tool",
  "options": {
    "version": {
      "type": "string",
      "default": "latest",
      "description": "Version to install"
    }
  }
}
bash
#!/bin/bash
# install.sh — runs as root
set -e

VERSION=${VERSION:-"latest"}
echo "Installing my-tool $VERSION..."

curl -fsSL "https://example.com/my-tool/$VERSION/install" | bash

Option names become uppercase environment variables in install.sh. The version option becomes $VERSION.

Installation order

Features install in a dependency-resolved order. You can control this with:

  • dependsOn — hard dependency, must resolve first
  • installsAfter — soft hint for ordering
  • overrideFeatureInstallOrder — user override in devcontainer.json
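
For example, a hypothetical config that forces common-utils to install before everything else might look like this (note that overrideFeatureInstallOrder takes Feature IDs without version tags):

```json
{
  "features": {
    "ghcr.io/devcontainers/features/common-utils:2": {},
    "ghcr.io/devcontainers/features/node:1": {}
  },
  "overrideFeatureInstallOrder": [
    "ghcr.io/devcontainers/features/common-utils"
  ]
}
```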

Lifecycle Scripts

Dev Containers have six hooks that fire at different points. Understanding when each runs is critical — running the wrong command in the wrong hook is one of the most common mistakes.

text
Container lifecycle:

  initializeCommand          ← runs on HOST, before anything else
        │
        ▼
  ┌─ Container created ─┐
  │                      │
  │  onCreateCommand     │   ← first run only; good for install steps
  │        │             │
  │        ▼             │
  │  updateContentCommand│   ← runs when content changes (prebuilds)
  │        │             │
  │        ▼             │
  │  postCreateCommand   │   ← first run only; user context available
  │        │             │
  │        ▼             │
  │  postStartCommand    │   ← every start (including rebuilds)
  │        │             │
  │        ▼             │
  │  postAttachCommand   │   ← every time your editor attaches
  └──────────────────────┘

  • initializeCommand: on the host, before the container starts. Validate host prerequisites, generate .env files.
  • onCreateCommand: once, after container creation. Install dependencies, seed databases, build projects.
  • updateContentCommand: when new content is available. Incremental updates for prebuilt containers.
  • postCreateCommand: once, after the user is assigned. User-specific setup, secrets-dependent tasks.
  • postStartCommand: every container start. Start background services, watch processes.
  • postAttachCommand: every editor attach. Show welcome messages, restore terminal state.

Important: if any lifecycle script fails, all subsequent scripts are skipped. Keep install-heavy work in onCreateCommand and quick startup tasks in postStartCommand.

Command formats

Each hook accepts three formats:

json
// String — runs via /bin/sh
"postCreateCommand": "npm install && npm run build"

// Array — direct exec, no shell parsing
"postCreateCommand": ["npm", "install"]

// Object — parallel execution (each key runs concurrently)
"postCreateCommand": {
  "install": "npm install",
  "db": "npm run db:migrate",
  "cache": "npm run cache:warm"
}

The object format is especially useful for speeding up container creation — run independent setup tasks in parallel.

Docker Compose Integration

Real projects rarely run in isolation. You need a database, maybe Redis, maybe an S3-compatible store. Dev Containers integrate natively with Docker Compose so you can define your entire stack.

yaml
# .devcontainer/docker-compose.yml
services:
  app:
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile
    volumes:
      - ..:/workspaces/my-project:cached
    command: sleep infinity

  db:
    image: postgres:17
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine

volumes:
  pgdata:
json
// .devcontainer/devcontainer.json
{
  "name": "Full Stack",
  "dockerComposeFile": "docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/workspaces/my-project",

  "features": {
    "ghcr.io/devcontainers/features/node:1": { "version": "22" }
  },

  "forwardPorts": [3000, 5432, 6379],

  "postCreateCommand": "npm install"
}

Key details for Compose integration:

  • service tells the tool which container to connect to as your dev environment
  • runServices lets you start only a subset of services (defaults to all)
  • Use network_mode: service:db if you want your app and database on the same network (access via localhost)
  • The command: sleep infinity keeps the app container alive so the editor can attach
  • dockerComposeFile accepts an array to layer overrides: ["docker-compose.yml", "docker-compose.dev.yml"]
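
Putting a few of those options together, here is a sketch of a config that layers an override file and starts only the app and database (the file names are illustrative):

```json
{
  "dockerComposeFile": ["docker-compose.yml", "docker-compose.dev.yml"],
  "service": "app",
  "runServices": ["app", "db"],
  "workspaceFolder": "/workspaces/my-project",
  "shutdownAction": "stopCompose"
}
```

With shutdownAction set to stopCompose, closing the editor stops the whole stack rather than leaving the database running.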

The Dev Container CLI

You are not locked into VS Code. The open-source @devcontainers/cli lets you use Dev Containers from any terminal, in CI, or with any editor.

bash
# Install
npm install -g @devcontainers/cli

# Start a dev container
devcontainer up --workspace-folder .

# Run a command inside it
devcontainer exec --workspace-folder . npm test

# Pre-build the image (great for CI)
devcontainer build --workspace-folder .

# Read resolved configuration
devcontainer read-configuration --workspace-folder .

  • devcontainer up: build the image and start the container
  • devcontainer exec: run a command inside a running container
  • devcontainer build: pre-build the image without starting it
  • devcontainer run-user-commands: execute lifecycle scripts
  • devcontainer read-configuration: output the resolved config as JSON
  • devcontainer features test: test a Feature during development

CI/CD usage

Use the same Dev Container that your team develops in as your CI build environment. This eliminates environment drift between local dev and CI:

yaml
# GitHub Actions example
- uses: devcontainers/ci@v0.3
  with:
    runCmd: npm test
    push: never

The devcontainers/ci GitHub Action and Azure DevOps Task handle building and caching the container image for you.
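
A fuller workflow might look like the sketch below; the image name and registry are placeholders you would adapt:

```yaml
# .github/workflows/ci.yml (sketch)
name: CI
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: devcontainers/ci@v0.3
        with:
          imageName: ghcr.io/your-org/your-repo-devcontainer
          cacheFrom: ghcr.io/your-org/your-repo-devcontainer
          push: never
          runCmd: npm test
```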

Performance Optimization

Dev Containers can feel sluggish if you don't account for how Docker handles file I/O on macOS and Windows. The bottleneck is almost always bind mounts.

The problem

By default, your source code is bind-mounted from the host into the container. On Linux this is near-native speed. On macOS and Windows, Docker runs in a VM, and every file operation crosses the VM boundary. Operations like npm install can be 5-10x slower.

Solutions (pick one)

1. Clone in Container Volume (recommended)

Use VS Code's Dev Containers: Clone Repository in Container Volume command. Your code lives in a Docker named volume instead of a bind mount — full native speed, zero cross-VM overhead.

2. Named volumes for heavy directories

Keep the bind mount for source code but offload write-heavy directories:

json
"mounts": [
  "source=${localWorkspaceFolderBasename}-node_modules,target=${containerWorkspaceFolder}/node_modules,type=volume"
]
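
One caveat: Docker creates named volumes owned by root, so your non-root user may not be able to write to node_modules on first run. A common workaround (assuming a user named node with passwordless sudo, as in Microsoft's Node images) is to fix ownership in postCreateCommand, which runs from the workspace folder:

```json
{
  "postCreateCommand": "sudo chown node:node node_modules && npm install"
}
```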

3. WSL 2 filesystem on Windows

Clone your repo inside the WSL 2 filesystem (not /mnt/c/). Docker Desktop's WSL 2 backend can bind-mount from WSL at near-native speed.

Dockerfile layer optimization

If you use a custom Dockerfile, keep frequently changing layers at the bottom:

dockerfile
FROM node:22-bookworm-slim

# Rarely changes — cached
RUN apt-get update && apt-get install -y \
    git curl jq \
    && rm -rf /var/lib/apt/lists/*

# Changes when lockfile changes
COPY package.json package-lock.json ./
RUN npm ci

# Changes most often — last layer
COPY . .

Security Benefits

Dev Containers are not just a convenience — they are a meaningful security boundary.

Isolation from host

Your project's dependencies live inside the container. A compromised npm package cannot access your SSH keys, browser cookies, or other projects on your host filesystem.

No global installs

Nothing is installed globally on your machine. Different projects can use different versions of Node, Python, or any tool without conflicts or pollution.

Reproducible audit trail

The devcontainer.json and Dockerfile are checked into version control. You can audit exactly what tools and versions are in the environment at any point in history.

Controlled network access

You can restrict container networking to only the services it needs. No accidental exposure of development services to your local network.

Non-root by default

Microsoft's base images create a non-root user (vscode). Set remoteUser to avoid running your editor and tools as root.

Disposable environments

If something goes wrong — a bad install, corrupted state, suspicious behavior — rebuild from scratch in seconds. No need to debug your host.

Hardening tips

  • Set "remoteUser": "vscode" (or your non-root user) explicitly
  • Avoid "privileged": true unless you absolutely need it (e.g. Docker-in-Docker)
  • Use capAdd for specific capabilities instead of full privilege
  • Pin Feature versions (ghcr.io/devcontainers/features/node:1 not :latest)
  • Pin base image digests for maximum reproducibility: image: "mcr.microsoft.com/devcontainers/base@sha256:abc..."
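
Combined, a hardened config might look roughly like this (the digest is a placeholder):

```json
{
  "image": "mcr.microsoft.com/devcontainers/base@sha256:abc...",
  "remoteUser": "vscode",
  "capAdd": ["SYS_PTRACE"],
  "securityOpt": ["no-new-privileges:true"],
  "features": {
    "ghcr.io/devcontainers/features/node:1": { "version": "22" }
  }
}
```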

Best Practices & Gotchas

Best practices

  • Pin your base image version. Use node:22-bookworm not node:latest. Your future self will thank you when a major version bump doesn't break the team's environment overnight.
  • Use Features before Dockerfiles. If a Feature exists for what you need, prefer it over custom Dockerfile layers. Features are tested, versioned, and shared.
  • Keep postStartCommand fast. It runs on every start. Slow commands here make reconnecting painful. Put heavy work in onCreateCommand.
  • Use dotfiles for personal config. Don't put your shell theme in the shared devcontainer.json. VS Code supports a dotfiles.repository setting that clones your personal dotfiles into every container.
  • Commit .devcontainer/ to the repo. This is the whole point — everyone gets the same environment. Don't gitignore it.
  • Use parallel lifecycle commands. The object format for lifecycle hooks runs commands concurrently. Use it for independent setup steps to cut creation time.
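
For the dotfiles point, the relevant VS Code user settings look roughly like this (the repository and script names are placeholders):

```json
{
  "dotfiles.repository": "your-github-user/dotfiles",
  "dotfiles.targetPath": "~/dotfiles",
  "dotfiles.installCommand": "install.sh"
}
```

These are personal settings, so they belong in your user settings.json, not in the shared devcontainer.json.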

Gotchas that will bite you

Line endings on Windows: Git may report every file as changed if line endings differ. Fix with a .gitattributes file:
text
* text=auto eol=lf
*.{cmd,bat} text eol=crlf
Git credentials not available in container: Dev Containers automatically forward your host's Git credentials via the credential helper. If it's not working, check that git config --global credential.helper is set on your host. On macOS, ensure the keychain helper is configured.

SSH keys with passphrases: VS Code's Git operations may hang if your SSH key requires a passphrase. Run ssh-agent on your host and add your key; the agent is forwarded into the container.

Rebuild vs. Rebuild Without Cache: A regular rebuild reuses the Docker layer cache. If something feels stale, use Dev Containers: Rebuild Without Cache to start fresh.

Features add build time: Each Feature creates a Docker layer. The first build after adding a Feature will be slower; subsequent builds are cached. Don't add Features you don't use.

File watchers hit limits: Projects with many files (monorepos especially) can exhaust the inotify watch limit. This is a kernel setting shared with the host, so raise it on the host (or in the Docker Desktop VM), not inside the container image:
bash
# On a Linux host
sudo sysctl fs.inotify.max_user_watches=524288
echo "fs.inotify.max_user_watches=524288" | sudo tee -a /etc/sysctl.conf

Ports already in use: If port forwarding fails, another container or host process is using that port. Use forwardPorts with portsAttributes to label ports, and set "requireLocalPort": false so the tool can fall back to a different local port.

Putting It All Together

Here's a production-ready Dev Container config for a typical full-stack TypeScript project:

text
.devcontainer/
├── devcontainer.json
├── docker-compose.yml
└── Dockerfile
json
// .devcontainer/devcontainer.json
{
  "name": "My App",
  "dockerComposeFile": "docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/workspaces/my-app",

  "features": {
    "ghcr.io/devcontainers/features/github-cli:1": {},
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  },

  "customizations": {
    "vscode": {
      "extensions": [
        "dbaeumer.vscode-eslint",
        "esbenp.prettier-vscode",
        "biomejs.biome"
      ],
      "settings": {
        "editor.formatOnSave": true,
        "editor.defaultFormatter": "biomejs.biome"
      }
    }
  },

  "forwardPorts": [3000, 5432, 6379],
  "portsAttributes": {
    "3000": { "label": "App", "onAutoForward": "notify" },
    "5432": { "label": "Postgres", "onAutoForward": "silent" },
    "6379": { "label": "Redis", "onAutoForward": "silent" }
  },

  "remoteUser": "node",

  "postCreateCommand": {
    "install": "npm ci",
    "db": "npm run db:migrate"
  },

  "postStartCommand": "npm run dev"
}
dockerfile
# .devcontainer/Dockerfile
FROM node:22-bookworm-slim

RUN apt-get update && apt-get install -y \
    git curl \
    && rm -rf /var/lib/apt/lists/*
yaml
# .devcontainer/docker-compose.yml
services:
  app:
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile
    volumes:
      - ..:/workspaces/my-app:cached
      - node-modules:/workspaces/my-app/node_modules
    command: sleep infinity

  db:
    image: postgres:17
    restart: unless-stopped
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    restart: unless-stopped

volumes:
  node-modules:
  pgdata:

Clone the repo, open in VS Code, accept the container prompt, and you're coding with a full stack in under a minute. Every teammate gets the same Node version, the same extensions, the same database, the same everything.
