env.dev

OpenCode

Open-source MIT-licensed terminal AI coding agent. BYO-model, MCP-native, signs in with your GitHub Copilot subscription.


Quick Install

# Official installer (macOS / Linux / WSL)
curl -fsSL https://opencode.ai/install | bash

# npm
npm install -g opencode-ai@latest

# Homebrew
brew install anomalyco/tap/opencode

OpenCode is an MIT-licensed terminal AI coding agent maintained by Anomaly (the org that grew out of SST), released under sst/opencode. As of 2026-05-04, the repo sits at roughly 154,000 GitHub stars, with the latest release v1.14.33 tagged on 2026-05-02 — multiple patch releases per day are normal. The pitch is straightforward: a Claude-Code-shaped agent that runs anywhere, talks to any model, and is not owned by any one foundation lab.

The strategic shift happened on 2026-01-16: a paid GitHub Copilot subscription now authenticates directly into OpenCode. The same tier you already pay for Copilot in VS Code now drives an autonomous terminal agent — at no extra cost, with no extra account. A week before that, on 2026-01-09, Anthropic blocked Pro/Max OAuth from third-party clients, so OpenCode users now authenticate via Copilot, an Anthropic API key, or one of OpenCode's own gateways. The model story changed; the agent stayed open-source.

TL;DR

  • MIT-licensed terminal AI agent at sst/opencode — ~154k stars, v1.14.33 (2026-05-02).
  • 75+ providers via models.dev — Claude, GPT, Gemini, xAI, DeepSeek, Qwen, plus any OpenAI-compatible endpoint (Ollama, vLLM, LM Studio).
  • Copilot integration since 2026-01-16 — Pro, Pro+, Business, and Enterprise subscriptions sign in via device-code flow.
  • Plan / Build modes + custom subagents — read-only planning mode, full-tool build mode, plus user-defined agents that compose MCP tools.
  • Project rules in AGENTS.md, falling back to CLAUDE.md if you migrated from Claude Code.

What is OpenCode?

OpenCode is a terminal-first AI coding agent shaped like Anthropic's Claude Code: it lives in your shell, reads files, runs commands, edits multiple files in a session, and asks for approval before destructive actions. The agent itself is open-source TypeScript with a Bubble Tea-style TUI; the TUI is a client to a local agent server, which means the same session can be driven from a different terminal, a desktop app, or — in beta — a mobile client. Same loop, different surface.

The deeper bet is decoupling. OpenCode treats the agent harness, the tool registry, and the model provider as three orthogonal things. The agent loop is one project. Tools live behind Model Context Protocol — the same MCP every other 2026 agent speaks. Models come from models.dev, a separately-maintained registry currently listing 75+ providers. The harness is the part you keep; the model is the part you swap.

Why is OpenCode growing so fast in 2026?

Three things happened in the same six-month window. First, Anthropic shipped Claude Code in March 2025, proved the terminal-agent shape worked, and made it the obvious target to clone. Second, GitHub Copilot wired itself into OpenCode in January 2026, instantly handing the open-source project a paid distribution channel. Third, Anthropic's own decision to block Pro/Max OAuth a week earlier created a pool of unhappy power-users who needed somewhere to go. OpenCode was already mid-flight; the timing did the rest.

The numbers reflect that. Per the OpenCode homepage as of May 2026: 850+ contributors, 11,000+ commits, self-reported 6.5M monthly developers. Treat the dev count as marketing — it is sourced only to opencode.ai itself — but the 154k stars and the daily release cadence are independently visible on GitHub Releases. Star velocity has been higher than Claude Code's for most of 2026.

How does OpenCode compare to Claude Code?

Same shape, different bet on lock-in. Claude Code is the polished, closed-source reference implementation that only talks to Anthropic; OpenCode is the open-source alternative that talks to anything. The right tool depends on which lock-in you mind more: a single foundation lab, or a community project with a smaller team behind it.

|                    | OpenCode                 | Claude Code          | Cursor agent          | Copilot agent          |
|--------------------|--------------------------|----------------------|-----------------------|------------------------|
| Source             | Open (MIT)               | Closed               | Closed                | Closed                 |
| BYO model          | Yes — 75+ providers      | No (Anthropic only)  | Limited (Cursor menu) | Limited (Copilot menu) |
| Terminal-native    | Yes — TUI + desktop      | Yes                  | No (IDE)              | Optional (gh-copilot)  |
| MCP support        | Yes (OAuth via DCR)      | Yes                  | Yes                   | Yes                    |
| Parallel subagents | Single-loop              | Yes (4.7+)           | Yes (Composer 2)      | Limited                |
| Native sandbox     | No (issue #4667)         | Permission prompts   | Background Agent VMs  | GitHub-side runners    |
| Pricing model      | Tool free; pay per token | $20–200/mo Anthropic | $20+/mo Cursor        | $10+/mo GitHub         |

For the broader landscape — IDE agents, builders, terminals — see the AI editor comparison and the 2026 LLM coding model comparison.

How do I install OpenCode?

The official installer is a curl-pipe-bash script — read it before you run it, like every curl-pipe-bash script — but every major package manager carries OpenCode now. Pick whichever one already manages your other CLIs.

install.sh — pick one
# Official installer (macOS / Linux / WSL)
curl -fsSL https://opencode.ai/install | bash

# npm (works anywhere Node is installed)
npm install -g opencode-ai@latest

# Homebrew (macOS / Linux)
brew install anomalyco/tap/opencode

# Windows native
scoop install opencode
choco install opencode

After installation, run opencode in any project to launch the TUI. The first run prints the path to the global config: ~/.config/opencode/ on Linux/macOS, %APPDATA%\opencode\ on Windows. Project-local config goes in opencode.json at the repo root, or opencode.jsonc if you want comments. Set OPENCODE_CONFIG to point elsewhere.
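Since opencode.jsonc accepts comments, a minimal project file can document itself. A hypothetical starting point — the top-level `model` key is an assumption here; the model ID follows the provider/model-id convention used elsewhere in this article:

```jsonc
{
  // Schema URL enables editor autocomplete and validation.
  "$schema": "https://opencode.ai/config.json",
  // provider/model-id, resolved against the models.dev registry.
  "model": "anthropic/claude-sonnet-4-6"
}
```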

How do I sign in with my Copilot subscription?

Inside the TUI, type /connect, pick GitHub Copilot, and paste the device code into the browser tab that opens. OpenCode stores the token at ~/.local/share/opencode/auth.json with file-mode 0600. Pro, Pro+, Business, and Enterprise subscriptions all work; Copilot Free does not authorise the API endpoints OpenCode needs.

opencode auth — non-interactive flows
# Add an Anthropic API key
$ opencode auth login
> Provider: Anthropic
> API key: sk-ant-...

# Add an OpenAI-compatible endpoint (Ollama, vLLM, LM Studio)
$ opencode auth login
> Provider: OpenAI-compatible
> Base URL: http://localhost:11434/v1
> API key: ollama

# List configured providers
$ opencode auth list

Anthropic Pro/Max OAuth was blocked on 2026-01-09 — that path no longer works in any third-party client. Bring an Anthropic API key, sign in with Copilot, or route through one of OpenCode's gateways (Zen, pay-as-you-go; Black, $20/$100/$200 monthly tiers).

What models can OpenCode use?

The provider list is delegated to models.dev — a community-maintained registry of model IDs, pricing, and context windows. As of May 2026 it carries Anthropic, OpenAI, Google, xAI, DeepSeek, Qwen, Mistral, plus self-hosted options. The model picker inside OpenCode reads this registry directly, so a new release on models.dev shows up in the TUI without an OpenCode upgrade.
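A client consuming such a registry can filter on the metadata it carries — a sketch with a simplified, hypothetical entry shape (the real models.dev schema may differ):

```typescript
// Hypothetical, simplified shape of a models.dev-style registry entry.
type ModelEntry = {
  id: string;
  provider: string;
  contextWindow: number; // tokens
};

// Tiny local snapshot standing in for the live registry.
const registry: ModelEntry[] = [
  { id: "claude-sonnet-4-6", provider: "anthropic", contextWindow: 200_000 },
  { id: "gemini-2.5-pro", provider: "google", contextWindow: 1_000_000 },
  { id: "qwen3:8b", provider: "ollama", contextWindow: 32_768 },
];

// Pick models whose context window can hold a whole-repo read pass.
function modelsForContext(entries: ModelEntry[], minTokens: number): string[] {
  return entries
    .filter((m) => m.contextWindow >= minTokens)
    .map((m) => m.id);
}

console.log(modelsForContext(registry, 500_000)); // only the 1M-context model qualifies
```

This is why "whole-repo read passes" in the table below map to the 1M-token model: the picker is just metadata filtering.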

| Use case                  | Why                                          | Model                        |
|---------------------------|----------------------------------------------|------------------------------|
| Daily-driver agentic work | Highest reliability under long sessions      | claude-sonnet-4-6            |
| Multi-hour autonomy       | Long-horizon refactors, full-file rewrites   | claude-opus-4-7              |
| Cost-sensitive throughput | ~12× cheaper input than Opus on same tasks   | gpt-5                        |
| Whole-repo read passes    | 1M-token context for cross-file analysis     | gemini-2.5-pro               |
| Local / air-gapped        | Ollama; raise num_ctx above 4K for tool calls | ollama/qwen3:8b             |
| Open-weights coding       | ~66% SWE-bench Verified, MIT-licensed        | deepseek-v3.1                |

Plan vs Build — the two modes

OpenCode ships two built-in agent modes, switched with a Tab keypress. Plan is read-only — edits and shell calls default to ask, so the model can investigate the codebase, draft a plan, and propose actions without surprising your file system. Build is the full-tools mode where the model edits files, runs the shell, and uses MCP servers without per-step prompts. Most sessions start in Plan; the user reads the plan, presses Tab, and lets it run.

On top of the two built-ins, OpenCode supports subagents — named agents with their own model, tool list, and system prompt, invoked from the parent agent or directly via slash commands. The two stock ones are general and explore (a cheap read-only agent for fanning out file searches). Custom subagents live in .opencode/agents/. The shape is closer to Claude Code's subagents than to Cursor 2.0's parallel Composer windows — a tree, not a fleet.
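A custom subagent in .opencode/agents/ might look like this — a hypothetical reviewer.md, assuming a Claude-Code-style markdown-with-frontmatter shape (the field names are an assumption, not confirmed syntax):

```markdown
---
description: Read-only reviewer; reports style issues, never edits
model: anthropic/claude-haiku-4-5-20251001
tools:
  edit: false
  bash: false
---
You review code. Read the files you are pointed at, list concrete
style or correctness issues, and propose fixes as diffs in prose.
Never write to the file system.
```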

Where do project rules live?

Project rules go in AGENTS.md at the repo root. Run /init in a fresh project and OpenCode generates one based on the codebase it reads. Global rules live in ~/.config/opencode/AGENTS.md — keep them small; this file is loaded into every conversation. Both are merged with extra files configured via the instructions array, which accepts globs and even remote URLs (5s timeout, cached locally).
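Kept to the recommended handful of bullets, an AGENTS.md might read like this (contents purely illustrative):

```markdown
# Rules

- pnpm, never npm; run `pnpm test` before claiming a task is done.
- All new code in packages/core is TypeScript strict mode.
- Database migrations go through `pnpm db:migrate`, never raw SQL files.
```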

If you migrated from Claude Code, OpenCode falls back to CLAUDE.md and ~/.claude/skills/ — the same project memory file you already wrote works as-is. The deeper context-discipline pattern is in context management for AI coding.

opencode.json — minimal project config
{
  "$schema": "https://opencode.ai/config.json",
  "agent": {
    "build": {
      "model": "anthropic/claude-sonnet-4-6"
    },
    "plan": {
      "model": "anthropic/claude-haiku-4-5-20251001"
    }
  },
  "instructions": [".cursor/rules/*.mdc", "docs/architecture/**/*.md"],
  "permissions": {
    "edit": "ask",
    "bash": { "rm *": "deny", "git push *": "ask", "*": "allow" }
  }
}

MCP — what tools can OpenCode actually use?

OpenCode is first-class MCP from day one. Local stdio servers, remote HTTP servers, and OAuth-protected MCP endpoints (via Dynamic Client Registration / RFC 7591) all work; tokens are stored at ~/.local/share/opencode/mcp-auth.json. Tool names are namespaced as <server>_<tool>, so the permission system can match wildcards per server. The configuration end of MCP is the same whether you target Claude Code, Cursor, or OpenCode — see Claude Code MCP servers for the JSON shape.

opencode.json — MCP servers
{
  "mcp": {
    "postgres": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "$DATABASE_URL"]
    },
    "exa": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "exa-mcp-server"],
      "env": { "EXA_API_KEY": "$EXA_API_KEY" }
    },
    "sentry": {
      "type": "remote",
      "url": "https://mcp.sentry.dev/sse"
    }
  }
}
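Because of the `<server>_<tool>` namespacing, a permission rule can scope to a single server — a hypothetical fragment, assuming tool-name globs are accepted wherever the permission system matches tools (the exact key shape is an assumption):

```json
{
  "permissions": {
    "postgres_*": "ask",
    "sentry_*": "allow"
  }
}
```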

When should you NOT use OpenCode?

  • You need long-running isolated VMs. OpenCode runs in your shell, not in a sandboxed cloud VM. Cursor 3 Background Agents and Devin both spin up isolated environments per task. OpenCode does not — there is an open feature request (#4667) but no native sandbox as of May 2026.
  • You want parallel agents fanning out on subtasks. Cursor 2.0 Composer 2 lets you fork three or four parallel agents in one window; Claude Code 4.7 added the same. OpenCode is a single agent loop with a subagent tree — the parent waits for the subagent.
  • You ship in a regulated environment requiring vendor SLAs. OpenCode is community-run; SOC 2 / HIPAA stories live with the model provider, not the agent. Claude Code, Copilot Enterprise, and Amazon Q Developer all have vendor-side compliance pages. OpenCode does not.
  • You hate curl-pipe-bash installers. The official installer is curl ... | bash. Use npm, Homebrew, Scoop, or Chocolatey to side-step it — all are first-class.
  • Your work is mostly tab-completion in an IDE. OpenCode has no inline completion; that is a different category. Pair it with GitHub Copilot or Cursor in the editor and reach for OpenCode for multi-file tasks.

Gotchas worth knowing before you commit

  • Ollama default context is 4K tokens — too small for tool calls. Raise num_ctx in the Modelfile to 32K+ before pointing OpenCode at a local model, or you will see truncated tool arguments and the agent will loop.
  • Multi-tool sessions burn context fast. The community GitHub MCP server is the most-cited offender — it returns large JSON responses. Allowlist only the tools you actually need, or reach for the explore subagent for read-heavy fan-out.
  • Provider rate-limit traps. Streaming + retries + tool calls trip Anthropic and OpenAI abuse heuristics more than chat UIs do. If you see 429s when Cursor and Claude.ai seem fine, throttle retry.maxAttempts in config and use a higher-tier API key.
  • Pricing-page churn. The hosted plans (Zen, Black) have changed tier names twice in 2026 already — issue #15872 tracks docs drift. Check the live pricing page before committing to a plan.
  • Process hangs on multi-GB stdin. Issue #731 — piping a large tarball into the TUI can hang the session. Use a tool call instead of pasting the file.
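The num_ctx gotcha above can be fixed with a standard Ollama Modelfile — qwen3:8b is just the example local model used elsewhere in this article:

```
FROM qwen3:8b
PARAMETER num_ctx 32768
```

Build it with `ollama create qwen3-32k -f Modelfile` and point OpenCode at the new tag instead of the default one.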

A pragmatic OpenCode setup

  1. Install via npm or Homebrew. Skip curl-pipe-bash unless you read it. Confirm with opencode --version.
  2. Sign in with Copilot first. If you have a paid Copilot subscription, /connect uses what you already pay for. Add an Anthropic API key second, for sessions where you specifically want Sonnet 4.6.
  3. Run /init in your project. Read the generated AGENTS.md. Trim it to five-to-ten bullets the model would otherwise re-discover every session.
  4. Add one MCP server, not five. Pick the database or the issue-tracker, not both. Watch how often the agent reaches for it before adding a second.
  5. Lock the permissions file. Set edit: ask until you trust the agent on this codebase. Deny rm * and git push --force unconditionally.
  6. Default to Plan mode. Switch to Build only after reading the plan. The model is patient; you should be too.
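Step 5 as config, reusing the permissions shape from the minimal opencode.json shown earlier (the `git push --force*` glob is an assumption about the pattern syntax):

```json
{
  "permissions": {
    "edit": "ask",
    "bash": {
      "rm *": "deny",
      "git push --force*": "deny",
      "*": "allow"
    }
  }
}
```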

Frequently Asked Questions

What is OpenCode?

OpenCode is an MIT-licensed terminal AI coding agent maintained at github.com/sst/opencode by the Anomaly organisation (originally SST). It runs in your shell, edits files, runs commands, and supports 75+ model providers via the models.dev registry — Claude, GPT, Gemini, DeepSeek, Qwen, and any OpenAI-compatible endpoint including local Ollama.

How does OpenCode compare to Claude Code?

Same shape, different lock-in. Both are terminal-first agents with MCP support, slash commands, and project rules in a markdown file. Claude Code is closed-source and Anthropic-only; OpenCode is open-source MIT and supports any provider. Claude Code 4.7 has parallel subagents and a more polished default experience; OpenCode wins on portability, model choice, and zero per-seat cost when you bring your own API key or Copilot subscription.

Does OpenCode work with my GitHub Copilot subscription?

Yes, since 2026-01-16 (per GitHub's official changelog). Pro, Pro+, Business, and Enterprise subscriptions all authenticate via /connect inside the OpenCode TUI using GitHub's device-code flow. Copilot Free does not authorise the API endpoints OpenCode needs. The token is stored at ~/.local/share/opencode/auth.json with file-mode 0600.

What models can OpenCode use?

OpenCode reads its model list from models.dev, which carries 75+ providers as of May 2026 — Anthropic Claude (Opus 4.7, Sonnet 4.6, Haiku 4.5), OpenAI (GPT-5), Google (Gemini 2.5 Pro), xAI, DeepSeek V3.1, Qwen3-Coder, plus any OpenAI-compatible endpoint such as Ollama, vLLM, or LM Studio. Anthropic Pro/Max OAuth was blocked on 2026-01-09; bring an API key, sign in with Copilot, or use OpenCode's Zen / Black gateways instead.

Is OpenCode free?

The agent itself is free and MIT-licensed — you only pay the underlying model provider for tokens. If you already pay for GitHub Copilot, /connect uses that subscription at no extra cost. OpenCode also offers optional hosted gateways (Zen, pay-as-you-go; Black, $20/$100/$200 monthly tiers) for users who want a single bill instead of provider keys.

Can OpenCode run autonomously?

Yes — it ships two built-in modes (Plan, read-only; Build, full tools) plus a custom-subagent system. The single-loop autonomy is closer to Claude Code than to Cursor 2.0 Composer or Devin: there are no native isolated VMs (open feature request #4667), so unattended runs should be sandboxed by you — Docker, a fresh user, or a dedicated machine — and the permissions block in opencode.json should deny rm * and git push --force.