WHAT IS OPEN CODE AI?
OpenCode AI (often just called OpenCode) is an open‑source AI coding agent that runs in your terminal, desktop, or IDE and connects to a variety of LLM providers to help you read, write, refactor, and debug code.
What it is
- OpenCode is a Go-based CLI/TUI application that embeds an AI assistant directly into your terminal workflow instead of a browser or proprietary IDE.
- It is fully open source (MIT-licensed) and designed as a general AI coding agent rather than a single-vendor copilot, letting you plug in OpenAI, Anthropic, Google Gemini, Groq, Together, local models, and more.
Core capabilities
- Multi-model coding assistant: interactive chat for generating code, explanations, tests, refactors, and debugging, with session history stored locally in SQLite.
- Deep project understanding: integrates Language Server Protocol (LSP) to scan your repo, understand types, imports, and dependencies, and reason about multi-file changes without copy/paste.
- Tooling & commands: provides slash commands like /init, /plan, /undo, /redo, and /share to analyze a project, plan changes, apply edits, and share sessions.
Architecture & integration
- Modular Go architecture with clear packages for CLI (Cobra), TUI (Bubble Tea), LLM providers, sessions, LSP, and tool integrations, making it extensible for custom workflows.
- Client–server design that can run locally or connect to remote environments, and can be wired to OpenAI-compatible endpoints such as Docker Model Runner or other self‑hosted gateways.
- Offers terminal UI, desktop app, and IDE extensions, so it can sit alongside your existing editor rather than replacing it.
Why people use it
- Vendor and model flexibility: switch between cloud and local models, or route via OpenAI-compatible endpoints, which is attractive for regulated or privacy‑sensitive work.
- Strong terminal-first UX: built for developers who live in tmux/zellij and want an AI collaborator with minimal context-juggling friction.
- Community-driven: actively developed on GitHub with a growing user base and star count, encouraging contributions, forks, and custom extensions.
Useful links and next prompts
- Home/docs: https://opencode.ai
- GitHub repo: https://github.com/opencode-ai/opencode
- Docker/private LLM guide: https://www.docker.com/blog/opencode-docker-model-runner-private-ai-coding/
If you want, a good next question to dig in could be: “Design an architecture to integrate OpenCode as a tenant-aware coding agent into a multi-tenant SaaS dev platform focused on regulated industries.”
OpenCode vs GitHub Copilot
| Feature | OpenCode | GitHub Copilot |
|---|---|---|
| Open source | MIT license, fully open | Proprietary |
| Model/provider lock-in | None – swap providers via config | Tied to OpenAI/Anthropic models behind Copilot API |
| Primary interface | Terminal TUI, desktop app, IDE extension | VS Code / JetBrains / Neovim extensions |
| Session persistence | Local SQLite sessions + undo/redo stack | Cloud-based, limited local history |
| Offline support | Yes – via LOCAL_ENDPOINT to any OpenAI-compatible server or local model | No – requires GitHub service |
| LSP integration | Built-in; diagnostics exposed to AI | Limited; relies on IDE's LSP |
| Pricing | Free (you pay your LLM provider) | Subscription ($10–$39/mo per seat) |
| MCP tool extensibility | Native MCP support for custom tool servers | Not supported |
Supported Models & Providers
OpenCode dynamically discovers models from each provider you configure.
| Provider | Example models |
|---|---|
| OpenAI | GPT-4.1, GPT-4.5 Preview, GPT-4o, O1/O3/O4 families |
| Anthropic | Claude 4 Sonnet/Opus, Claude 3.7 Sonnet, Claude 3.5 Sonnet/Haiku |
| Google (Gemini) | Gemini 2.5, 2.5 Flash, 2.0 Flash/Lite |
| Google VertexAI | Gemini 2.5, 2.5 Flash |
| AWS Bedrock | Claude 3.7 Sonnet |
| Azure OpenAI | GPT-4.1, GPT-4o, O1/O3, O4 Mini |
| Groq | Llama 4 Maverick/Scout, Deepseek R1, QWEN QWQ-32b, Llama 3.3 70b |
| GitHub Copilot | GPT-4o, Claude 3.7 Sonnet, Gemini 2.5 Pro, O3 Mini, O4 Mini |
| OpenRouter | Any model exposed via OpenRouter |
| Local / self-hosted | Any model behind an OpenAI-compatible endpoint (LOCAL_ENDPOINT) |
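As an illustration of how a provider from the table ends up usable, a minimal config sketch might look like this; the exact keys vary by OpenCode version, and the API-key values and model name are placeholders, not real credentials:

```json
{
  "providers": {
    "openai": { "apiKey": "sk-your-key-here" },
    "anthropic": { "apiKey": "sk-ant-your-key-here" }
  },
  "agents": {
    "coder": { "model": "anthropic.claude-3-5-sonnet" }
  }
}
```

Once a provider has credentials, its models show up in the model picker without further configuration.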
Installation on macOS / Linux
Recommended: Install script
curl -fsSL https://opencode.ai/install | bash
Homebrew (macOS & Linux)
brew install opencode-ai/tap/opencode
Go (if you have Go 1.24+)
go install github.com/opencode-ai/opencode@latest
Node / NPM
npm install -g opencode
Arch Linux (AUR)
paru -S opencode-ai-bin # or yay
Fully Offline with Local Models
Yes, OpenCode can run completely offline:
- Point LOCAL_ENDPOINT to an OpenAI-compatible server (e.g., LM Studio, Ollama, llama.cpp server, Docker Model Runner).
- Configure your agent model:
{
"agents": {
"coder": {
"model": "local.granite-3.3-2b-instruct@q8_0"
}
}
}
- No cloud API keys are required once the local endpoint is up.
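Before pointing OpenCode at a local endpoint, it helps to confirm the server actually speaks the OpenAI-compatible API by listing its /models route. The sketch below runs the jq filter against a canned response so it is self-contained; against a live server you would pipe `curl -s "$LOCAL_ENDPOINT/models"` into the same filter (the model names are just examples):

```shell
# Canned OpenAI-style /models response; a live server returns the same shape.
resp='{"object":"list","data":[{"id":"granite-3.3-2b-instruct"},{"id":"llama3.1"}]}'
# Extract just the model ids the server advertises.
echo "$resp" | jq -r '.data[].id'
```

If the ids print, the endpoint is compatible and the `local.<model>` reference in the config above should resolve.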
LSP & Editor Integration
OpenCode embeds an LSP client that can connect to any language server.
Configuration example
{
"lsp": {
"go": { "command": "gopls" },
"typescript": { "command": "typescript-language-server", "args": ["--stdio"] },
"rust": { "command": "rust-analyzer" }
}
}
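The servers named in that config must be on PATH for OpenCode to spawn them; a quick check (server names taken from the example above) is:

```shell
# Report which of the configured language servers are actually installed.
# `command -v` exits non-zero when the binary is not on PATH.
for srv in gopls typescript-language-server rust-analyzer; do
  if command -v "$srv" >/dev/null 2>&1; then
    echo "$srv: found"
  else
    echo "$srv: missing"
  fi
done
```

Any server reported missing will simply produce no diagnostics for that language.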
Capabilities exposed to the AI
- Diagnostics – the AI can call a diagnostics tool to retrieve errors/warnings from the language server and propose fixes.
- File watching – LSP servers are notified of edits so diagnostics stay current.
Currently OpenCode only surfaces diagnostics to the agent; the full LSP feature set (hover, go-to-def, completions) is implemented under the hood but not yet wired to the AI.
Editor / Desktop / IDE modes
- TUI – primary interface; runs in any terminal.
- Desktop app – Electron wrapper with the same TUI.
- IDE extensions – integrate with existing editors so the agent can run alongside your workflow.
Note: The original opencode-ai/opencode GitHub repo has been archived; development continues under the project name Crush by the Charm team.
Suggested next prompt
"Show me an architecture diagram for running OpenCode with a local LLM inside a secure, tenant-isolated Kubernetes namespace—targeting HIPAA workloads."
can it be decomposed away from the opencode CLI and integrated into any terminal CLI?
Yes, but not as a neat, documented SDK today; you would be effectively embedding or talking to its core “agent server” rather than dropping a small library into any arbitrary CLI.
How OpenCode is structured
- The codebase is modular: Cobra cmd/ for the CLI entrypoints, internal/app for core services, internal/llm for providers, internal/session for state, internal/lsp for language servers, and internal/tui for the UI.
- The CLI already separates a backend “server” (serve/web) from “frontends” (attach, TUI), so other frontends can, in principle, connect over HTTP and reuse the agent logic.
Practical decomposition options
- Sidecar server model:
  - Run opencode serve (or the equivalent in the Crush successor) as a long‑lived background process per project or per user.
  - Your own CLI then talks to that process over HTTP (or stdin/stdout if you wrap it) to create sessions, send prompts, and read responses, similar to how the attach subcommand works.
- Library / embedded model (harder):
  - Since the core is under internal/ and not exported as a Go module, you cannot cleanly go get it as a reusable library without forking or refactoring.
  - You could fork the repo and promote selected internal/* packages to a public API, then expose a thin RPC or gRPC surface that your own CLI calls, but this is a structural change on your side.
Integration into “any terminal CLI”
- If “any terminal CLI” is yours, the cleanest pattern is:
- Keep your UX and commands, and delegate “AI actions” to an opencode/crush sidecar via a stable protocol (HTTP/OpenAI‑style, or your own JSON‑RPC).
- This lets you share provider config, sessions, and LSP wiring with the upstream project while keeping your CLI focused on domain‑specific commands.
- If “any terminal CLI” includes 3rd‑party tools (git, kubectl, terraform, etc.), the realistic route is the same sidecar pattern: a thin wrapper around each tool that forwards its output to the agent as context over the same protocol.
Constraints and caveats
- There is no published “embedded agent SDK” with stable API guarantees; you are leaning on internal structures and CLI contracts that can change.
- For long‑term stability, treating the agent as a separate, versioned service with a narrow protocol is safer than trying to deeply embed the Go code into every CLI you own.
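For illustration only, a “narrow, versioned protocol” of the kind described could be as small as a single request envelope; every field name below is hypothetical, not an actual OpenCode/Crush API:

```json
{
  "protocolVersion": "1",
  "sessionId": "sess-123",
  "repo": "/path/to/repo",
  "command": "explain",
  "prompt": "Why does the build fail on CI?"
}
```

Versioning the envelope rather than the Go internals lets the sidecar evolve without breaking every consuming CLI.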
Focused next prompt you could use:
“Design an HTTP/JSON RPC interface for running OpenCode/Crush as a multi-tenant coding agent service that arbitrary CLIs can call with repo context and commands.”
How to run OpenCode as a standalone server without the CLI
- API endpoints OpenCode exposes for external CLIs
- Steps to embed OpenCode functionality into bash or zsh
- Using OpenCode MCP with custom terminal applications
- How to call OpenCode features via HTTP from another CLI tool
OpenCode can run as a standalone HTTP server and exposes a small, well-defined API that other CLIs can call; you can also layer MCP on top for richer tool integration.
Run as a standalone server
- Start a headless server (no TUI) with:
opencode serve --port 4096 --hostname 127.0.0.1
Optional flags: --mdns for discovery, --cors for extra origins.
- The server exposes an OpenAPI 3.1 spec endpoint and is designed for multiple clients (TUI, web UI, custom CLIs) to connect concurrently.
HTTP API endpoints for external CLIs
From the server docs (abbreviated to the most useful pieces):
- OpenAPI spec: GET /openapi.json – OpenAPI 3.1 document; used to generate SDKs (TypeScript, Go, etc.).
- Auth configuration: PUT /auth/:id – set provider credentials (e.g., openai, anthropic). Body must match the provider schema; returns a boolean.
- Sessions / chat (typical pattern) – endpoints (simplified; varies slightly by version) include:
  - POST /session – create a new session (model, repo path, config).
  - GET /session/:id – get session state.
  - POST /session/:id/message – send a message or command; stream or poll for responses.
- Events (SSE) – subscribe to a server-sent events stream for responses and tool activity.
Design-wise, the intended flow is: create session → send message → consume events/stream for output and tool activity.
Embed into bash/zsh
Minimal pattern: treat OpenCode as a local HTTP agent and wrap it with shell functions.
- Start the server in the background (e.g., in .bashrc or a tmux hook):
opencode serve --port 4096 --hostname 127.0.0.1 >/tmp/opencode.log 2>&1 &
- Shell helpers using curl (bash/zsh):
oc_session() {
  # jq builds the JSON body so paths containing quotes or spaces stay valid JSON
  curl -s -X POST http://127.0.0.1:4096/session \
    -H 'Content-Type: application/json' \
    -d "$(jq -n --arg cwd "$PWD" '{cwd: $cwd, model: "anthropic:claude-3-7-sonnet"}')" |
    jq -r '.id'
}
oc_ask() {
  local sid msg
  sid="${1:?session id required}"
  msg="${2:?message required}"
  # jq escapes quotes and newlines that would otherwise break the JSON body
  curl -s -X POST "http://127.0.0.1:4096/session/${sid}/message" \
    -H 'Content-Type: application/json' \
    -d "$(jq -n --arg text "$msg" '{text: $text}')"
}
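Messages routinely contain quotes and real newlines, which break naive string interpolation into a JSON body; `jq -n --arg` is one safe way to build the payload these helpers send:

```shell
# Multi-line message with embedded quotes.
msg='fix the "flaky" test
then rerun CI'
# jq escapes the quotes and turns real newlines into \n, yielding valid JSON.
jq -cn --arg text "$msg" '{text: $text}'
# → {"text":"fix the \"flaky\" test\nthen rerun CI"}
```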
- Convenience wrapper that includes context (e.g., git status or file list) before asking the agent:
oc_here() {
  local sid ctx
  sid="$(oc_session)"
  ctx="$(git status --short 2>/dev/null || ls)"
  # printf produces real newlines; a plain "\n" in double quotes stays literal
  oc_ask "$sid" "$(printf 'Here is the context:\n%s\n\nNow: %s' "$ctx" "$*")"
}
This lets you stay in bash/zsh and treat OpenCode as an “AI coprocessor” without touching its TUI.
Using MCP with custom terminal apps
- OpenCode supports MCP servers (local/remote) defined in its config under mcp, with support for type: "local" and commands that start MCP processes.
- Example MCP configuration (e.g., for a local tool server):
{
"mcp": {
"my_tools": {
"enabled": true,
"type": "local",
"command": "npx",
"args": ["@modelcontextprotocol/server-everything"],
"env": {
"MY_ENV_VAR": "value"
}
}
}
}
- Your custom terminal app does not need to speak MCP itself; it just talks to OpenCode’s HTTP API, and OpenCode handles MCP tool discovery and invocation based on prompts like “use the my_tools tool to …”.
Calling OpenCode via HTTP from another CLI
For a custom CLI (Go, Rust, TS, etc.), treat OpenCode as a local service:
- Implement a small adapter in your CLI. The high-level flow mirrors the API: create a session, send messages, and consume the event stream for output.
- For scripting languages, you can skip SDKs and just do raw HTTP, like:
curl -s http://127.0.0.1:4096/openapi.json | jq '.info'
or in Node:
const res = await fetch('http://127.0.0.1:4096/session', {
method: 'POST',
headers: { 'content-type': 'application/json' },
body: JSON.stringify({ cwd: process.cwd(), model: 'openai:gpt-4.1' }),
});
Focused next prompt you could use:
“Design a concrete HTTP session/message schema and Go client for treating OpenCode as a multi-tenant coding agent microservice behind my own CLI.”