Technical

Getting Started with Fleet in 5 Minutes

From install to your first autonomous agent fleet in under five minutes. One binary, one config file, one command. No Docker, no Node.js, no cloud account.

April 5, 2026 · 5 min read

This is the fastest path from "never heard of Fleet" to "agents running and coordinating on my repo." I'll keep it tight.

What you need

A Mac, Linux, or WSL2 terminal. An existing AI coding setup (Claude Code, Cursor, Copilot, whatever you're already using). Five minutes.

That's the whole list. No Docker, no Node.js, no package manager, no cloud account.

Install

curl -fsSL https://fleetctl.ai/install | sh

This downloads a single binary. The entire product is that binary. Confirm it worked:

fleet version

Initialize a fleet on your repo

cd into whatever project you want to try this on and run:

fleet init

Fleet creates a .fleet/ directory with a starter config. Open .fleet/config.yaml. You'll see something like this:

agents:
  - name: product-owner
    role: product-owner
    model: claude-sonnet      # ticket refinement doesn't need Opus
    subscriptions:
      - ticket_created
    routes_to:
      - frontend-dev
      - backend-dev

  - name: frontend-dev
    role: developer
    model: claude-opus         # complex implementation work
    department: engineering
    subscriptions:
      - ticket_ready

  - name: tech-lead
    role: tech-lead
    model: claude-sonnet       # review works well on Sonnet
    department: engineering
    subscriptions:
      - pr_needs_review

Three agents out of the box. The PO agent subscribes to new tickets, refines them, and routes them to the right developer agent. The developer agent wakes up when a refined ticket is ready. The tech lead agent wakes up when a PR needs review.

Notice the model config. Each agent runs the model appropriate to its job. Sonnet for ticket refinement and review. Opus for implementation. You can change these to whatever you want later.
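One thing to notice: the product-owner routes to both frontend-dev and backend-dev, but only frontend-dev is defined in the snippet above. A backend agent follows the same shape. This is a sketch mirroring the starter config's fields; your generated config may already include it:

```yaml
agents:
  - name: backend-dev
    role: developer
    model: claude-opus         # implementation work, same as frontend-dev
    department: engineering
    subscriptions:
      - ticket_ready           # wakes up when the PO marks a ticket ready
```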

Start the fleet

fleet start --all

Check what's running:

fleet status

Output:

Fleet: 3 agents running | 0 idle | 0 errors
  product-owner   running  (tmux: fleet-po)            model: sonnet
  frontend-dev    running  (tmux: fleet-frontend-dev)  model: opus
  tech-lead       running  (tmux: fleet-tech-lead)     model: sonnet

Each agent runs in a managed session. Fleet handles startup, shutdown, health checks, and restart-on-crash.

Turn on the reactive engine

This is the part that changes everything. Without the watcher, each agent only acts when you prompt it. The watcher makes the pipeline autonomous.

fleet watcher start --supervised

The watcher monitors GitHub labels, event subscriptions, and cron schedules. When a condition is met, it routes the event to the right agent.

Test it. Go to one of your GitHub issues and add the label "ready." Watch your terminal.

Fleet detects the label. The PO agent picks up the ticket, refines it with acceptance criteria and scope, and routes it to frontend-dev. The developer agent starts working. When it opens a PR, a pr_needs_review event fires and tech-lead picks it up.

Nobody prompted anything. Nobody read an email. Nobody manually assigned a reviewer. The event chain ran itself.

Check the audit log

fleet log

Timeline of every event, every agent action, every decision. Timestamps, agent names, event types, outcomes. This is the thing your engineering manager will care about. Remember where it is.

What to do next

You now have a working agent fleet with reactive coordination. A few things worth trying from here.

Browse the template library. Fleet ships with 136 pre-built configs. Run fleet templates list to see them. There's a full-stack dev team with PO routing, a DevOps pipeline crew, a security review squad, an SRE monitoring setup, and a lot more.

Add an SRE agent. Subscribe it to deployment_complete events. It monitors error rates after each deploy and can trigger a rollback if something spikes. Runs great on Haiku since it's making simple threshold decisions.
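In the same config shape, that SRE agent might look like this. The subscription name comes from the text above; the `role` value and other fields are assumptions mirroring the starter config:

```yaml
agents:
  - name: sre
    role: sre
    model: claude-haiku        # simple threshold decisions, per the note above
    department: engineering
    subscriptions:
      - deployment_complete    # fires after each deploy
```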

Turn on Brain. Fleet's scoring engine evaluates every agent run on six dimensions and flags anomalies. fleet brain start activates it.

Set up a pipeline. Multi-stage workflows with approval gates. fleet pipeline init scaffolds your first one.
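To make "multi-stage with approval gates" concrete, a pipeline could scaffold into something like the sketch below. This shape is illustrative, not the actual scaffold; run fleet pipeline init to see what Fleet generates:

```yaml
# Illustrative sketch -- fleet pipeline init generates the real scaffold
pipeline:
  name: feature-delivery
  stages:
    - name: refine
      agent: product-owner
    - name: implement
      agent: frontend-dev
    - name: review
      agent: tech-lead
      gate: human-approval     # pause here until a person signs off
```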

If you're evaluating Fleet for your team

The fastest way to build an internal case is to run Fleet on one repo for a week, then pull the fleet log, point to the event chain (ticket, PO refinement, agent implementation, automated review, merge), and show how the pipeline ran autonomously.

There's a page at fleetctl.ai/for-leaders that covers the business case without any code. Send it to your VP. It speaks their language.

Try Fleet

One binary. Five minutes. See every agent, coordinate every handoff, and keep a full audit trail of what your fleet did.