Dev Containers let you define your entire development environment — tools, runtimes, extensions, settings — as code inside your repository. Every teammate, CI runner, and Codespace gets the exact same setup. No more "works on my machine", no more half-day onboarding, no more "did you install version X of Y?".
This guide covers everything from your first devcontainer.json to multi-service Docker Compose setups, performance tuning, security hardening, and the gotchas that will bite you if nobody warns you first.
How Dev Containers Work
A Dev Container is just a Docker container with extra metadata. Your editor (VS Code, JetBrains, the CLI) reads a devcontainer.json file, builds or pulls an image, starts the container, mounts your source code, and connects to it. Extensions and language servers run inside the container, so they have access to the exact toolchain you defined.
┌──────────────────────────────────────────────────────────┐
│ Your Machine (Host) │
│ │
│ VS Code / JetBrains / CLI │
│ │ │
│ │ reads .devcontainer/devcontainer.json │
│ │ builds image, starts container │
│ ▼ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Docker Container │ │
│ │ │ │
│ │ /workspaces/my-project ◄── bind mount or volume │ │
│ │ │ │
│ │ Node 22 · Python 3.12 · Git · Extensions │ │
│ │ Language servers · Linters · Debuggers │ │
│ │ │ │
│ │ Ports forwarded to localhost automatically │ │
│ └─────────────────────────────────────────────────────┘ │
└──────────────────────────────────────────────────────────┘

The spec is open and maintained at containers.dev. It is not locked to VS Code — any tool can implement it. GitHub Codespaces, JetBrains Gateway, DevPod, and the standalone devcontainer CLI all support it.
Your First Dev Container
Create a .devcontainer/devcontainer.json at the root of your repo:
// .devcontainer/devcontainer.json
{
  "name": "My Project",
  "image": "mcr.microsoft.com/devcontainers/typescript-node:22",
  "features": {
    "ghcr.io/devcontainers/features/git:1": {}
  },
  "customizations": {
    "vscode": {
      "extensions": [
        "dbaeumer.vscode-eslint",
        "esbenp.prettier-vscode"
      ]
    }
  },
  "forwardPorts": [3000],
  "postCreateCommand": "npm install"
}

That's it. Open the folder in VS Code, accept the "Reopen in Container" prompt, and you have:
- Node 22 and TypeScript ready to go
- Git installed via the Features system
- ESLint and Prettier auto-installed in the editor
- Port 3000 forwarded to your host
- npm install runs automatically after creation
You can also use VS Code's command palette: Dev Containers: Reopen in Container. Or from the terminal:
# Using the standalone CLI
devcontainer up --workspace-folder .
devcontainer exec --workspace-folder . bash

The devcontainer.json Reference
There are three ways to define the base environment, and the properties available depend on which you pick:
Image-based (simplest)
Point to a pre-built image. Great for getting started fast.
{
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu"
}

Dockerfile-based (customizable)
Build from a Dockerfile when you need system packages or custom layers.
{
  "build": {
    "dockerfile": "Dockerfile",
    "context": "..",
    "args": { "NODE_VERSION": "22" },
    "target": "dev"
  }
}

Docker Compose-based (multi-service)
For projects that need a database, cache, or other services alongside the dev environment.
{
  "dockerComposeFile": "docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}"
}

Key properties
| Property | Type | Purpose |
|---|---|---|
| name | string | Display name in the UI |
| image | string | Container image to use |
| build.dockerfile | string | Path to Dockerfile |
| features | object | Dev Container Features to install |
| forwardPorts | array | Ports to forward to localhost |
| containerEnv | object | Environment variables for the container |
| remoteEnv | object | Environment variables for tooling only |
| remoteUser | string | User that tools run as (default: container user) |
| containerUser | string | User the container runs as |
| mounts | array | Additional volume/bind mounts |
| runArgs | array | Extra docker run arguments |
| customizations | object | Editor-specific settings (extensions, settings) |
| shutdownAction | enum | What happens when you close the editor (none, stopContainer, or stopCompose for Compose setups) |
| init | boolean | Use tini init to handle zombie processes |
| privileged | boolean | Run in privileged mode (avoid if possible) |
| capAdd | array | Linux capabilities to add (e.g. SYS_PTRACE for debugging) |
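A few of these in combination. Note the distinction: containerEnv bakes variables into every process in the container, remoteEnv only affects editor tooling and terminals, and capAdd grants a single capability instead of full privilege. The values below are illustrative:

```json
{
  "containerEnv": { "NODE_ENV": "development" },
  "remoteEnv": { "EDITOR": "code --wait" },
  "remoteUser": "node",
  "init": true,
  "capAdd": ["SYS_PTRACE"]
}
```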
Available variables
You can use these variables inside devcontainer.json values:
| Variable | Resolves to |
|---|---|
| ${localEnv:VAR} | Host environment variable |
| ${containerEnv:VAR} | Container environment variable (remoteEnv only) |
| ${localWorkspaceFolder} | Full path to the folder on the host |
| ${localWorkspaceFolderBasename} | Name of the folder on the host |
| ${containerWorkspaceFolder} | Workspace path inside the container |
| ${devcontainerId} | Unique ID for the dev container |
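These variables compose naturally inside mounts and environment blocks. A sketch with illustrative paths; ${containerEnv:PATH} in remoteEnv lets you append to the container's PATH rather than replace it:

```json
{
  "mounts": [
    "source=${localWorkspaceFolder}/.cache,target=/home/vscode/.cache,type=bind"
  ],
  "containerEnv": {
    "PROJECT_NAME": "${localWorkspaceFolderBasename}"
  },
  "remoteEnv": {
    "PATH": "${containerEnv:PATH}:/opt/custom-tools/bin"
  }
}
```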
Features: Composable Dev Tooling
Features are the killer feature of Dev Containers (pun intended). Instead of writing custom Dockerfiles for every tool you need, you compose pre-built, shareable units of installation logic. Need Node, Python, the AWS CLI, and Docker-in-Docker? Four lines:
"features": {
  "ghcr.io/devcontainers/features/node:1": { "version": "22" },
  "ghcr.io/devcontainers/features/python:1": { "version": "3.12" },
  "ghcr.io/devcontainers/features/aws-cli:1": {},
  "ghcr.io/devcontainers/features/docker-in-docker:2": {}
}

Each Feature is a small package containing:

- devcontainer-feature.json — metadata, options, dependencies
- install.sh — shell script that runs as root
- Optional lifecycle hooks and additional files
How to reference Features
| Method | Example | When to use |
|---|---|---|
| OCI registry | ghcr.io/devcontainers/features/node:1 | Most common — versioned, cached |
| HTTPS tarball | https://example.com/feature.tgz | Self-hosted or private features |
| Local path | ./my-feature | Developing or testing a custom feature |
Writing a custom Feature
When no community Feature exists for your tool, write your own. The minimum structure:
src/my-tool/
├── devcontainer-feature.json
└── install.sh

// devcontainer-feature.json
{
  "id": "my-tool",
  "version": "1.0.0",
  "name": "My Tool",
  "options": {
    "version": {
      "type": "string",
      "default": "latest",
      "description": "Version to install"
    }
  }
}

#!/bin/bash
# install.sh — runs as root
set -e
VERSION=${VERSION:-"latest"}
echo "Installing my-tool $VERSION..."
curl -fsSL "https://example.com/my-tool/$VERSION/install" | bash

Option names become uppercase environment variables in install.sh. The version option becomes $VERSION.
Installation order
Features install in a dependency-resolved order. You can control this with:
- dependsOn — hard dependency, must resolve first
- installsAfter — soft hint for ordering
- overrideFeatureInstallOrder — user override in devcontainer.json
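If the resolved order isn't what you need, force one from devcontainer.json. Entries are Feature IDs without version tags; the IDs here match the earlier example:

```json
{
  "overrideFeatureInstallOrder": [
    "ghcr.io/devcontainers/features/node",
    "ghcr.io/devcontainers/features/docker-in-docker"
  ]
}
```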
Lifecycle Scripts
Dev Containers have six hooks that fire at different points. Understanding when each runs is critical — running the wrong command in the wrong hook is one of the most common mistakes.
Container lifecycle:
initializeCommand ← runs on HOST, before anything else
│
▼
┌─ Container created ─┐
│ │
│ onCreateCommand │ ← first run only; good for install steps
│ │ │
│ ▼ │
│ updateContentCommand│ ← runs when content changes (prebuilds)
│ │ │
│ ▼ │
│ postCreateCommand │ ← first run only; user context available
│ │ │
│ ▼ │
│ postStartCommand │ ← every start (including rebuilds)
│ │ │
│ ▼ │
│ postAttachCommand │ ← every time your editor attaches
└──────────────────────┘

| Hook | Runs | Typical use |
|---|---|---|
| initializeCommand | On host before container starts | Validate host prerequisites, generate .env files |
| onCreateCommand | Once after container creation | Install dependencies, seed databases, build projects |
| updateContentCommand | When new content is available | Incremental updates for prebuilt containers |
| postCreateCommand | Once after user is assigned | User-specific setup, secrets-dependent tasks |
| postStartCommand | Every container start | Start background services, watch processes |
| postAttachCommand | Every editor attach | Show welcome messages, restore terminal state |
Important: if any lifecycle script fails, all subsequent scripts are skipped. Keep install-heavy work in onCreateCommand and quick startup tasks in postStartCommand.
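Put concretely, a split along those lines might look like this (the commands are illustrative):

```json
{
  "onCreateCommand": "npm ci && npm run build",
  "postStartCommand": "npm run db:wait-ready",
  "postAttachCommand": "echo 'Environment ready. Run npm run dev to start.'"
}
```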
Command formats
Each hook accepts three formats:
// String — runs via /bin/sh
"postCreateCommand": "npm install && npm run build"

// Array — direct exec, no shell parsing
"postCreateCommand": ["npm", "install"]

// Object — parallel execution (each key runs concurrently)
"postCreateCommand": {
  "install": "npm install",
  "db": "npm run db:migrate",
  "cache": "npm run cache:warm"
}

The object format is especially useful for speeding up container creation — run independent setup tasks in parallel.
Docker Compose Integration
Real projects rarely run in isolation. You need a database, maybe Redis, maybe an S3-compatible store. Dev Containers integrate natively with Docker Compose so you can define your entire stack.
# .devcontainer/docker-compose.yml
services:
  app:
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile
    volumes:
      - ..:/workspaces/my-project:cached
    command: sleep infinity
  db:
    image: postgres:17
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine

volumes:
  pgdata:

// .devcontainer/devcontainer.json
{
  "name": "Full Stack",
  "dockerComposeFile": "docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/workspaces/my-project",
  "features": {
    "ghcr.io/devcontainers/features/node:1": { "version": "22" }
  },
  "forwardPorts": [3000, 5432, 6379],
  "postCreateCommand": "npm install"
}

Key details for Compose integration:
- service tells the tool which container to connect to as your dev environment
- runServices lets you start only a subset of services (defaults to all)
- Use network_mode: service:db if you want your app and database on the same network (access via localhost)
- The command: sleep infinity keeps the app container alive so the editor can attach
- dockerComposeFile accepts an array to layer overrides: ["docker-compose.yml", "docker-compose.dev.yml"]
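For example, to attach to app, start only the database alongside it, and layer a local override file (the file names are illustrative):

```json
{
  "dockerComposeFile": ["docker-compose.yml", "docker-compose.dev.yml"],
  "service": "app",
  "runServices": ["app", "db"]
}
```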
The Dev Container CLI
You are not locked into VS Code. The open-source @devcontainers/cli lets you use Dev Containers from any terminal, in CI, or with any editor.
# Install
npm install -g @devcontainers/cli

# Start a dev container
devcontainer up --workspace-folder .

# Run a command inside it
devcontainer exec --workspace-folder . npm test

# Pre-build the image (great for CI)
devcontainer build --workspace-folder .

# Read resolved configuration
devcontainer read-configuration --workspace-folder .

| Command | Purpose |
|---|---|
| devcontainer up | Build image and start the container |
| devcontainer exec | Run a command inside a running container |
| devcontainer build | Pre-build the image without starting |
| devcontainer run-user-commands | Execute lifecycle scripts |
| devcontainer read-configuration | Output the resolved config as JSON |
| devcontainer features test | Test a Feature during development |
CI/CD usage
Use the same Dev Container that your team develops in as your CI build environment. This eliminates environment drift between local dev and CI:
# GitHub Actions example
- uses: devcontainers/ci@v0.3
  with:
    runCmd: npm test
    push: never

The devcontainers/ci GitHub Action and Azure DevOps Task handle building and caching the container image for you.
Performance Optimization
Dev Containers can feel sluggish if you don't account for how Docker handles file I/O on macOS and Windows. The bottleneck is almost always bind mounts.
The problem
By default, your source code is bind-mounted from the host into the container. On Linux this is near-native speed. On macOS and Windows, Docker runs in a VM, and every file operation crosses the VM boundary. Operations like npm install can be 5-10x slower.
Solutions (pick one)
1. Clone in Container Volume (recommended)
Use VS Code's Dev Containers: Clone Repository in Container Volume command. Your code lives in a Docker named volume instead of a bind mount — full native speed, zero cross-VM overhead.
2. Named volumes for heavy directories
Keep the bind mount for source code but offload write-heavy directories:
"mounts": [
  "source=${localWorkspaceFolderBasename}-node_modules,target=${containerWorkspaceFolder}/node_modules,type=volume"
]

3. WSL 2 filesystem on Windows
Clone your repo inside the WSL 2 filesystem (not /mnt/c/). Docker Desktop's WSL 2 backend can bind-mount from WSL at near-native speed.
Dockerfile layer optimization
If you use a custom Dockerfile, keep frequently changing layers at the bottom:
FROM node:22-bookworm-slim

# Rarely changes — cached
RUN apt-get update && apt-get install -y \
    git curl jq \
    && rm -rf /var/lib/apt/lists/*

# Changes when lockfile changes
COPY package.json package-lock.json ./
RUN npm ci

# Changes most often — last layer
COPY . .

Security Benefits
Dev Containers are not just a convenience — they are a meaningful security boundary.
Isolation from host
Your project's dependencies live inside the container. A compromised npm package cannot access your SSH keys, browser cookies, or other projects on your host filesystem.
No global installs
Nothing is installed globally on your machine. Different projects can use different versions of Node, Python, or any tool without conflicts or pollution.
Reproducible audit trail
The devcontainer.json and Dockerfile are checked into version control. You can audit exactly what tools and versions are in the environment at any point in history.
Controlled network access
You can restrict container networking to only the services it needs. No accidental exposure of development services to your local network.
Non-root by default
Microsoft's base images create a non-root user (vscode). Set remoteUser to avoid running your editor and tools as root.
Disposable environments
If something goes wrong — a bad install, corrupted state, suspicious behavior — rebuild from scratch in seconds. No need to debug your host.
Hardening tips
- Set "remoteUser": "vscode" (or your non-root user) explicitly
- Avoid "privileged": true unless you absolutely need it (e.g. Docker-in-Docker)
- Use capAdd for specific capabilities instead of full privilege
- Pin Feature versions (ghcr.io/devcontainers/features/node:1, not :latest)
- Pin base image digests for maximum reproducibility: "image": "mcr.microsoft.com/devcontainers/base@sha256:abc..."
Best Practices & Gotchas
Best practices
- Pin your base image version. Use node:22-bookworm, not node:latest. Your future self will thank you when a major version bump doesn't break the team's environment overnight.
- Use Features before Dockerfiles. If a Feature exists for what you need, prefer it over custom Dockerfile layers. Features are tested, versioned, and shared.
- Keep postStartCommand fast. It runs on every start. Slow commands here make reconnecting painful. Put heavy work in onCreateCommand.
- Use dotfiles for personal config. Don't put your shell theme in the shared devcontainer.json. VS Code supports a dotfiles.repository setting that clones your personal dotfiles into every container.
- Commit .devcontainer/ to the repo. This is the whole point — everyone gets the same environment. Don't gitignore it.
- Use parallel lifecycle commands. The object format for lifecycle hooks runs commands concurrently. Use it for independent setup steps to cut creation time.
Gotchas that will bite you
- Line endings on Windows. CRLF checkouts on the host fighting LF inside the Linux container cause phantom diffs. Add a .gitattributes file:

  * text=auto eol=lf
  *.{cmd,bat} text eol=crlf

- Git credentials. HTTPS credential forwarding works when git config --global credential.helper is set on your host. On macOS, ensure the keychain helper is configured.
- SSH keys. Run ssh-agent on your host and the agent will be forwarded into the container.
- File watcher limits. Large projects can exhaust inotify watches inside the container. Raise the limit, e.g. RUN echo "fs.inotify.max_user_watches=524288" >> /etc/sysctl.conf
- Port conflicts. Combine forwardPorts with portsAttributes to label ports and set "requireLocalPort": false to allow fallback.

Putting It All Together
Here's a production-ready Dev Container config for a typical full-stack TypeScript project:
.devcontainer/
├── devcontainer.json
├── docker-compose.yml
└── Dockerfile

// .devcontainer/devcontainer.json
{
  "name": "My App",
  "dockerComposeFile": "docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/workspaces/my-app",
  "features": {
    "ghcr.io/devcontainers/features/github-cli:1": {},
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  },
  "customizations": {
    "vscode": {
      "extensions": [
        "dbaeumer.vscode-eslint",
        "esbenp.prettier-vscode",
        "biomejs.biome"
      ],
      "settings": {
        "editor.formatOnSave": true,
        "editor.defaultFormatter": "biomejs.biome"
      }
    }
  },
  "forwardPorts": [3000, 5432, 6379],
  "portsAttributes": {
    "3000": { "label": "App", "onAutoForward": "notify" },
    "5432": { "label": "Postgres", "onAutoForward": "silent" },
    "6379": { "label": "Redis", "onAutoForward": "silent" }
  },
  "remoteUser": "node",
  "postCreateCommand": {
    "install": "npm ci",
    "db": "npm run db:migrate"
  },
  "postStartCommand": "npm run dev"
}

# .devcontainer/Dockerfile
FROM node:22-bookworm-slim
RUN apt-get update && apt-get install -y \
    git curl \
    && rm -rf /var/lib/apt/lists/*

# .devcontainer/docker-compose.yml
services:
  app:
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile
    volumes:
      - ..:/workspaces/my-app:cached
      - node-modules:/workspaces/my-app/node_modules
    command: sleep infinity
  db:
    image: postgres:17
    restart: unless-stopped
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
    restart: unless-stopped

volumes:
  node-modules:
  pgdata:

Clone the repo, open in VS Code, accept the container prompt, and you're coding with a full stack in under a minute. Every teammate gets the same Node version, the same extensions, the same database, the same everything.
References
- containers.dev — official Dev Container specification site
- devcontainer.json Reference — complete property reference for all scenarios
- Dev Container Features Reference — how Features work, how to author them
- VS Code Dev Containers Documentation — setup, usage, and configuration in VS Code
- devcontainers/spec — the open specification repository on GitHub
- devcontainers/cli — the reference CLI implementation
- Improve Dev Container Disk Performance — VS Code guide to bind mount alternatives and volume strategies
- Using Images, Dockerfiles, and Docker Compose — official guide to the three configuration approaches