
How to Use Claude Code (2026 Tutorial for Engineers)

Michael Isaac
Operator. 30 yrs in enterprise AI. 15 min read

I've used Claude Code as my primary coding harness through sustained production work; working code for the patterns in this tutorial will live in the agentinfra-examples companion repo as that build catches up. The official quickstart covers install and your first prompt. It does not tell you which auth path to pick, when subagents actually pay off, or what breaks when you wire MCP servers into a long-running session. This tutorial covers the parts the docs skip: install, auth choice, project bootstrap, MCP wiring, PreToolUse hooks, subagents, and the failure modes I have hit in my own setup.

TL;DR

Install Claude Code via npm install -g @anthropic-ai/claude-code, authenticate with a Pro or Max subscription, run it inside a git repo with a CLAUDE.md describing your conventions, and use /init to bootstrap that file. For real work, add a .mcp.json for tool servers, wire PreToolUse hooks for any destructive command, and reach for subagents only when parallelism or context isolation is needed. Stay on the picker's default for routine edits, and use /model to escalate to a heavier reasoning model when the agent shows concrete failure signals: repeated failed edits on the same file, plans that contradict the previous turn, excessive file churn without convergence, or an inability to hold architecture constraints across more than a few turns. Subscription billing is the right starting place for solo work where the spend pattern is unknown; metered per-token API billing is the right shape for team deployments with shared budgets. Current dollar figures live on Anthropic's pricing page.

What you'll need

  • A current Node.js LTS for the CLI install. Check the current install instructions in the Claude Code docs before installing, since the supported runtime floor has moved between releases. I run Node 22 LTS.
  • A terminal with TTY (iTerm2, Ghostty, Warp, or VS Code's integrated terminal) for interactive use. Claude Code is a TUI, so the everyday session expects a real terminal. Headless and automation modes are a separate surface; validate the current non-interactive flags against the docs on the day you wire CI or a parent agent around it.
  • A Claude account. Either a Pro or Max subscription (subscription billing, quota-based with rolling rate limits) or an Anthropic API key (metered per-token billing, better for team-wide deployment with shared budgets). I run Max because my daily usage tends to push past the lower tier's window, but that is a personal-workload call, not a general recommendation: on a heavy day my own session occasionally bumps the ceiling, and throughput on any given account turns on prompt size, tool-call volume, and how aggressively subagents are dispatched.
  • Git, working in a repo. Claude Code can run anywhere, but /init, hooks, and project-scoped MCP all assume a repo root.
  • Optional: Claude.ai login for session sync across the desktop/web/IDE surfaces.

Cost as of 2026-05-06: Pro and Max are subscription billing at two different tiers; API usage is metered billing per token on a separate invoice. Current dollar figures live on Anthropic's pricing page and should be checked there rather than trusted from this page. For solo work where the spend pattern is unknown, the lower subscription tier is the cheaper place to start; upgrading in the billing console does not lose config.

Step 1, Install the CLI

npm install -g @anthropic-ai/claude-code
claude --version

The package name is @anthropic-ai/claude-code and the binary is claude. If claude is not on your PATH after install, your npm global prefix is probably not in PATH; check with npm config get prefix and add the resulting bin/ directory.

What might surprise you: the CLI updates itself in the background once authenticated, and the install command does not pin a version. For reproducible installs on a build agent, install a specific version explicitly (npm install -g @anthropic-ai/claude-code@<version>) and confirm the current auto-update control in /config and the docs before a long unattended run. That knob has moved between point releases in my own setup.

Step 2, Authenticate

claude

The first run opens a browser for OAuth against your Claude account. If you are on a headless box, it prints a URL to copy.

Two auth modes exist:

  1. Subscription (Pro/Max), flat monthly cost, governed by rolling subscription usage limits. Read /usage and the current plan UI on the day you care, because the exact window and counter language has moved before.
  2. API key, set ANTHROPIC_API_KEY in your env, billed per token via the Anthropic Console.

Pick subscription for solo work where the spend pattern is still unknown. API mode is where I lean for team deployments because shared budgets, key rotation, audit posture, SSO, and cloud-provider routes like Bedrock or Vertex all matter more once multiple operators touch the same system. Before standardizing on API mode, validate the controls exposed on the actual plan: per-key spend limits, audit events, service-account custody, and the logging story if a cloud provider sits in front. The right billing surface is an operations decision, not a vibe.

Credential precedence between a logged-in subscription session and an ANTHROPIC_API_KEY env var has behaved differently across releases in my own setup. Before any long agentic run, confirm which credential the current session is billing against with /status or the current equivalent, then unset whichever env vars are not intended for that run.
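That pre-flight check is easy to script. The sketch below assumes you want the session to bill the subscription; ANTHROPIC_API_KEY is the documented variable, but the guard function itself is my own convention, not a Claude Code feature.

```shell
# Sketch: a pre-flight guard for runs meant to bill the subscription.
# Unsets a stray API key so the env var cannot win whatever the current
# credential-precedence behavior is.
ensure_subscription_billing() {
  if [ -n "${ANTHROPIC_API_KEY:-}" ]; then
    echo "warning: unsetting ANTHROPIC_API_KEY so this session bills the subscription" >&2
    unset ANTHROPIC_API_KEY
  fi
}

ensure_subscription_billing
echo "ANTHROPIC_API_KEY is now: ${ANTHROPIC_API_KEY:-<unset>}"
```

Run it in the shell that will launch claude, then still confirm with /status (or the current equivalent) which credential the session actually reports.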

Step 3, Bootstrap the project

In your repo:

cd ~/code/your-repo
claude
> /init

/init reads the codebase and writes a CLAUDE.md at the repo root. In my workflow this is the highest-leverage repo-local tuning file, alongside model choice, permission settings, hooks, and MCP config. Its contents load into every conversation in that repo, so the discipline is to keep it short, repo-specific, and focused on constraints that are not obvious from code. Mine stays brief on purpose: stack, run commands, conventions, and the explicit prohibitions. Anything longer starts paying token cost on every turn for context the agent could derive from the code itself.

What the /init output gets wrong on first pass:

  • It tends to over-document obvious project structure. Delete the parts derivable from package.json or a tree listing.
  • It rarely captures hard constraints. Add them deliberately: "do not edit migrations once merged", "tests must hit a real database", "this codebase ships to a regulated environment".

Per Anthropic's Claude Code best practices, CLAUDE.md files also load from ~/.claude/CLAUDE.md (user-global) and any parent directories. Layer them: global for personal style, project for repo conventions.
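For shape, here is the kind of trimmed CLAUDE.md I end up with. Every detail below describes an invented example project, not a recommendation of specific tools; the point is the density, not the contents.

```markdown
# CLAUDE.md (hypothetical project)

## Stack
- TypeScript, Node 22, pnpm. Postgres via Prisma.

## Commands
- `pnpm test` runs the full suite.
- `pnpm db:migrate` is forbidden in sessions; migrations are reviewed by hand.

## Hard constraints
- Do not edit files under prisma/migrations/ once merged.
- Tests must hit the real dockerized database, never mocks.
- This ships to a regulated environment: no new third-party deps without asking.
```

Everything here is either a command the agent cannot guess or a prohibition that would be expensive to learn by violation. Structure the agent can derive from the repo stays out.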

Step 4, Pick the right model for the task

/model

This opens a picker listing the model choices available to your account at the time you run it. Model lineups shift, default selections change between releases, and long-context variants come and go, so the picker is the source of truth, not this page.

My posture: start on whatever the default is. It handles most routine edits quickly in my workflow. Escalate to a heavier reasoning model when I notice the agent thrashing on an architectural question, an ambiguous bug, or a refactor that needs to hold a lot of files in mind at once. Drop to a smaller, faster model for trivial scripted work where latency matters more than depth.

The model switch is per-session. Close the session and you are back to the default. Check the Claude Code docs for the current model lineup before assuming any specific slug is still on the menu.

Step 5, Add project-scoped MCP servers

The Model Context Protocol lets Claude Code call external tools (databases, browsers, APIs) the same way it calls Read/Edit/Bash. Configure project-scoped MCP via a .mcp.json at the repo root:

{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp@latest"]
    },
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "${POSTGRES_MCP_URL}"
      ]
    }
  }
}

Treat the package names and flags above as a starting shape, not a stable contract. MCP server packages have churned: names get renamed, args change, and @latest quietly drags in behavior that did not exist when you wrote the config. Before committing a .mcp.json to a repo other people will clone, verify each server's current package name, accepted arguments, and connection-string format against that server's own README on the day you commit, and pin a version rather than tracking @latest for anything you want to behave the same way next month.
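The pinned shape mirrors the install advice from Step 1. In the sketch below, <pinned-version> is a placeholder in the document's own style; fill it with the release you actually validated, read off the package's changelog.

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp@<pinned-version>"]
    }
  }
}
```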

Then in a session:

/mcp

This lists configured servers and their connection state. You will see connected, failed, or pending. If failed, the error message in /mcp is usually a stderr dump from the server process; read it.

Note what the Postgres entry does NOT do: hardcode a connection string. .mcp.json is checked into git for the whole team, so anything inline ships to every clone, every fork, and every CI log that prints the file. Connection strings, API tokens, and /Users/me/.nvm/... absolute paths all belong in environment variables resolved at session start, or in user-scoped ~/.claude.json config that never leaves the operator's machine. The shared .mcp.json should describe which servers the project uses, not whose credentials run them.

The credentials themselves want the least privilege the task tolerates. An agent with postgres://app_owner@.../prod can drop tables on the first hallucinated DELETE. For most analysis, schema introspection, and query-shaping work, a read-only role scoped to the relevant schemas is enough. I keep a separate pg_agent_ro role with SELECT only; write-capable roles appear for migration sessions and come back out afterward.
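The read-only role is plain Postgres DDL. pg_agent_ro is my own naming, and the database and schema names below are placeholders; adapt all three, and rotate the password out of any committed file.

```sql
-- Sketch of a least-privilege agent role. "app_db" and "app" are placeholders.
CREATE ROLE pg_agent_ro LOGIN PASSWORD 'rotate-me';
GRANT CONNECT ON DATABASE app_db TO pg_agent_ro;
GRANT USAGE ON SCHEMA app TO pg_agent_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA app TO pg_agent_ro;
-- Cover tables created after this runs, too:
ALTER DEFAULT PRIVILEGES IN SCHEMA app GRANT SELECT ON TABLES TO pg_agent_ro;
```

The default-privileges line is the one people forget: without it, every new table silently escapes the grant.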

The Playwright MCP is the one I reach for most often: screenshots, clicks, browser state, no pretending a UI works because unit tests pass. The tradeoff is obvious. A browser MCP pointed at an everyday Chrome profile inherits logged-in sessions and can mutate real apps. Use a disposable profile, a non-production account, or a dedicated test environment. For database work, MCP beats psql in Bash because schema introspection is native and a read-only role caps the blast radius.

The MCP docs are at modelcontextprotocol.io, the canonical home for the protocol itself. User-scoped MCP (configured in ~/.claude.json) is shared across all your projects and is the right place for credentials and machine-specific binary paths; project-scoped .mcp.json is checked into git for the team and should stay free of both. Pick deliberately, and treat the project file as public.

Step 6, Wire hooks for anything destructive

Hooks are shell commands the harness runs around tool calls. The PreToolUse hook is the one to learn first because it can block a tool call before it executes.

The settings file location and hook schema have moved between Claude Code releases in my own setup, so confirm the current hooks docs before relying on the exact JSON below. The rough shape lives in ~/.claude/settings.json (user) or .claude/settings.json (project):

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/hooks/block-rm-rf.sh"
          }
        ]
      }
    ]
  }
}

And ~/.claude/hooks/block-rm-rf.sh:

#!/usr/bin/env bash
# TOY EXAMPLE. Pattern-matching a handful of literal substrings is not a
# production safety boundary. Real operators want structured command
# policy: tokenize the command, walk the AST or argv, and decide allow
# vs. block per-binary with explicit rules. See discussion below.
set -uo pipefail

if ! command -v jq >/dev/null 2>&1; then
  echo "hook misconfigured: jq not found on PATH; allowing tool call" >&2
  exit 0
fi

input=$(cat)
cmd=$(printf '%s' "$input" | jq -r '.tool_input.command // empty')

if [ -z "$cmd" ]; then
  exit 0
fi

if printf '%s' "$cmd" | grep -qE 'rm -rf|git push --force|git reset --hard'; then
  echo "Blocked dangerous command: $cmd" >&2
  exit 2
fi
exit 0

Be honest about what this script is and is not. It is a toy: the regex catches rm -rf, git push --force, and a couple of friends. It misses aliases, shell functions, heredocs, find . -delete, git push -f, and any command assembled at runtime. A senior operator would not treat literal grep -E over the raw command string as a reliable pre-execution control.

A production-grade PreToolUse hook wants structured command policy: parse argv, inspect the binary and flags, and decide allow or block against an explicit ruleset. Block recursive deletion, force-pushes, hard resets, and writes under known production roots. Make the parse-failure branch deliberate too.
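A minimal sketch of that shape, in bash for illustration. The tokenization here is still naive word-splitting, which mishandles quoting; a real implementation would use a proper parser and an externalized ruleset rather than inline case statements.

```shell
# Hypothetical argv-level policy check. decide() prints "allow" or "block";
# a real PreToolUse hook would exit 2 on block and pass the reason on stderr.
decide() {
  local -a argv
  read -r -a argv <<< "$1"   # naive tokenization; a real parser owns quoting
  local a
  case "${argv[0]:-}" in
    rm)
      for a in "${argv[@]:1}"; do
        # Catch combined short flags in either order, plus the long forms.
        case "$a" in -*r*f*|-*f*r*|--recursive|--force) echo block; return ;; esac
      done ;;
    git)
      case "${argv[1]:-}" in
        push)
          for a in "${argv[@]:2}"; do
            case "$a" in -f|--force|--force-with-lease) echo block; return ;; esac
          done ;;
        reset)
          for a in "${argv[@]:2}"; do
            case "$a" in --hard) echo block; return ;; esac
          done ;;
      esac ;;
  esac
  echo allow
}

decide 'git push -f origin main'   # block: catches the short flag the regex missed
decide 'rm -fr /tmp/scratch'       # block: flag order does not matter here
decide 'git status'                # allow
```

Per-binary dispatch is the design point: each dangerous command gets its own explicit rule, so adding coverage means adding a case, not widening a regex.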

Two policy choices matter. I dropped -e from set so an ordinary shell mistake does not become a silent denial. I also made the learning example fail open when jq is missing or the command is empty. For a real destructive-command hook in shared infrastructure, fail-closed is usually the better posture. Pick deliberately, test both branches with claude --debug, and document the choice in the script header.

Exit code 2 blocks the tool call and feeds the stderr message back to Claude, which then has to pick a different approach. I run hooks against any operation that crosses out of my local sandbox: shared infrastructure pushes, deletions, force-pushes, package publishes. The literal-grep version catches the obvious typo and the obvious hallucination, and that is genuinely better than nothing on a personal machine. It is not a security boundary. Anything that needs to be a security boundary lives behind a real policy engine, a least-privilege credential, or both.

What might surprise you: hooks fire as your shell user, not in any sandbox. A misconfigured hook is your responsibility. Test with claude --debug before trusting one, exercise both the allow and block paths, and confirm the script behaves the way the comments claim.

Step 7, Use subagents for parallelism, not for everything

Subagents are isolated Claude conversations launched from the parent session. The Agent tool dispatches them. They have their own context window, their own tool whitelist, and they return only their final message to the parent.

> Spawn an agent to audit every TypeScript file under src/api for missing return types,
  while I keep working on the migration.

Behind the scenes this calls the Agent tool. Subagents earn their cost when:

  • The task is genuinely independent (parallel test runs, parallel audits, parallel PR reviews).
  • The output is large and you want it summarized rather than dumped into your main context.
  • You need a different tool whitelist for safety reasons.

Subagents add token cost, latency, and serialization overhead when the parent session could do the task directly. Each spawned agent runs its own context window and round-trips its result back through the parent, so wrapping routine work in a subagent for politeness pays the coordination tax without buying parallelism or isolation. Default to direct work, and reach for subagents only when there is an actual reason.

Custom subagent definitions live in ~/.claude/agents/<name>.md (user) or .claude/agents/<name>.md (project). The frontmatter description field is what the parent matches against when it decides to dispatch. Vague descriptions get ignored; specific ones get triggered.
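The file shape is markdown with YAML frontmatter. The agent below is an invented example, and the frontmatter fields shown (name, description, tools) should be checked against the current subagent docs before copying.

```markdown
---
name: type-auditor
description: Audits TypeScript files for missing explicit return types on
  exported functions. Use for read-only type-coverage sweeps under src/.
tools: Read, Grep, Glob
---

You are a strict TypeScript reviewer. Scan the files you are given for
exported functions without explicit return types. Report file, line, and
suggested signature. Do not edit anything.
```

Note the tool list: a read-only whitelist is exactly the "different tool whitelist for safety reasons" case above, so an audit agent cannot drift into editing.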

Common errors

Error: command not found: claude

What it means: npm's global bin directory is not in your PATH. Fix:

npm config get prefix
# add <prefix>/bin to your PATH
echo 'export PATH="$(npm config get prefix)/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc

Error: MCP server failed to connect: spawn npx ENOENT

What it means: npx is not on PATH from Claude Code's spawn environment, which on macOS launchd-spawned shells often differs from your interactive shell. Fix: Use the absolute path to your node binary in .mcp.json:

{
  "mcpServers": {
    "playwright": {
      "command": "/Users/you/.nvm/versions/node/v22.10.0/bin/npx",
      "args": ["-y", "@playwright/mcp@latest"]
    }
  }
}

Error: rate_limit_error mid-session on a Pro/Max plan

What it means: the current subscription usage window has filled. Anthropic changes the exact limit language over time, and the old support URL I had for the policy now resolves to a missing page, so I would treat /usage and the current plan UI as the source of truth.

Fix: wait it out, or switch to a smaller/faster model via /model if that option is available on the current plan. Both stay inside the subscription billing surface.

Things nobody talks about

  1. Auto-update silently breaks reproducibility. The CLI updates itself between runs unless you disable it. A workflow that worked yesterday can behave differently today because the harness changed underneath you. If you script CI around Claude Code, pin to a specific version (npm install -g @anthropic-ai/claude-code@<version>) and confirm the current way to disable auto-update against the live Claude Code docs and the actual /config UI on the version you run, since the exact control has moved between point releases in my own setup. I have had hooks fire differently across two consecutive sessions because the matcher semantics shifted in a point release.

  2. The subscription usage budget is shared across surfaces. Claude Code sessions, Claude.ai web tabs, and the desktop app draw from the same Pro/Max budget. Running Claude Code in two terminals plus Claude.ai in a browser eats the allowance faster. The exact window language changes, so check /usage and the current plan UI instead of anchoring on an old support article.

  3. Treat long CLAUDE.md files as a context-cost risk. Anthropic's docs describe CLAUDE.md as project context loaded into the conversation, but the harness does not publicly document whether that content is summarized, cached, or re-sent verbatim each turn, and the answer can shift between point releases. The safe operator assumption: a multi-thousand-line CLAUDE.md is paying token cost somewhere on a long session, and you have no visibility into where. I keep mine under 200 lines and offload deep convention docs to docs/ files that the agent reads on demand. If your toolchain version documents the exact behavior, trust the docs over my heuristic.

  4. MCP servers run as long-lived child processes. They do not get cleaned up if Claude Code crashes. After a few crashed sessions, expect orphan npx processes burning CPU. pgrep -f mcp and kill is part of my recovery flow.

  5. Hooks see tool_input but not the full conversation. A hook cannot make decisions based on the user's intent, only on the literal tool args. For intent-aware blocking, use a SessionStart hook to inject policy into the system prompt instead of trying to detect bad-intent shell commands at PreToolUse time.

  6. Vendor lock-in is medium. The binary is closed, but the patterns transfer: MCP servers, CLAUDE.md, and hook scripts are plain files. Migrating loses the harness UX and subagent dispatcher; it does not erase the operating discipline.
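The recovery flow from point 4 is two commands. The pattern below is a guess at typical MCP server command lines; tighten it to match your own .mcp.json entries, and read the pgrep output before killing anything.

```shell
# List candidate orphaned MCP processes left behind by crashed sessions.
pgrep -fl 'modelcontextprotocol|playwright/mcp' || echo "no orphans found"

# After reviewing the list, kill by the same pattern:
# pkill -f 'modelcontextprotocol|playwright/mcp'   # left commented on purpose
```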

Conclusion

Use Claude Code when you want a TUI-native coding harness with first-class hooks, MCP, and subagents, and you are okay with a closed-source binary that auto-updates. Pick a subscription tier first, write a tight CLAUDE.md, add MCP for the tools you actually use, and put PreToolUse hooks on anything destructive before you trust the agent unattended. Reach for subagents only when the work is genuinely parallel.

If you are evaluating alternatives, the closest comparison points are Cursor (IDE-native, paid editor) and pi-mono (the harness I run for one of my production apps, fast loop, different ergonomics). I keep working code samples for the patterns above in the agentinfra-examples repo.

★ Insight ─────────────────────────────────────
Claude Code is fun because the loop is small enough to understand and sharp enough to matter. A terminal, a repo, a few guardrails, and suddenly the model is not a chat box off to the side. It is a weirdly capable junior engineer with shell access. That is the magic and the danger. The install takes minutes; the craft is learning where to put the rails.
─────────────────────────────────────────────────