How Stitch stacks up: a deeper look at the local-first CI market

Most CI assistants want you to adopt their cloud, their monorepo, or their SDK. Stitch reads the pipeline you already wrote and runs it next to the agent you already own. Here is what that actually changes versus Gitar, Nx Cloud, and Dagger plus AI.

Every CI-adjacent tool launched in the last eighteen months arrives with the same pitch: connect your repo, hand over your pipeline, let our agent fix the failures. The shape of the ask is identical even when the products are not. Gitar wants your repo in their cloud. Nx Cloud wants your build graph reorganized into their monorepo. Dagger wants your YAML rewritten in their SDK. Each has a defensible reason. None of those reasons remove the migration tax from your team.

Stitch was built around a different observation. The developer already has the pipeline, already owns the machine, and increasingly already has an authenticated AI agent sitting in their terminal. The job is not to introduce a new control plane. The job is to connect the three things that already exist.

This post is about the parts of the comparison that do not fit on a feature matrix.

The market is split three ways

Once you actually look at the products, the AI-CI category is not one market. It is three:

  1. Cloud agent platforms (Gitar). Your repo moves into the vendor’s cloud, and the vendor’s agent fixes failures there.
  2. Monorepo build clouds (Nx Cloud). Your build graph is reorganized into their workspace so their runners and cache can use it.
  3. Pipeline-as-code SDKs (Dagger). Your YAML is rewritten against their SDK and executed in containers.

Stitch is none of these. It is a fourth category: a local-first verify loop that reads your existing CI config and runs it on your machine with whichever AI agent you already use.

Capability matrix

Capability                     | Stitch             | Gitar         | Nx Cloud              | Dagger + AI
-------------------------------|--------------------|---------------|-----------------------|-----------------------------
Uses your existing CI config   | Yes                | No            | No                    | No
Runs jobs locally              | Yes                | Cloud only    | Cloud + local         | Containers
Pluggable AI agent             | Any CLI agent      | Built-in only | Built-in only         | Built-in only
Requires new infra             | None               | SaaS account  | Nx workspace          | Dagger SDK
Native Claude Code integration | Ships with a skill | No            | No                    | No
Pricing                        | Free, MIT          | Paid plans    | Free OSS + paid Cloud | Free OSS engine + paid Cloud

The matrix understates what is actually happening. The interesting part is what each row implies for an engineering team three months in.

What “uses your existing CI config” actually means

If a tool reads .github/workflows/*.yml or .gitlab-ci.yml directly, your CI is still the source of truth. The pipeline you ship to production is the pipeline the verify loop runs. Drift is impossible by construction.

If a tool requires you to translate your pipeline into a new format (Dagger SDK, Nx project graph, a vendor’s SaaS YAML), you now own two pipelines. One you push to CI. One the local tool understands. Every change to either has to be mirrored. In practice, the mirror gets stale. In practice, the local tool slowly diverges from production. In practice, “passes locally” stops meaning “passes in CI.”

The migration is not the cost. The ongoing maintenance of two configs is the cost. Stitch sidesteps it by refusing to own the config.
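
To make “refusing to own the config” concrete, here is a minimal sketch of the mechanism in Python. It reads a GitHub Actions file with PyYAML and executes each run: step on the local machine. This is an illustration, not Stitch’s actual code: run_workflow_locally is a name invented for this post, and a real runner also needs uses: actions, matrices, services, and environment handling that the sketch skips.

```python
# A minimal sketch of the mechanism, not Stitch's implementation:
# the workflow file you already ship to CI stays the source of truth,
# and its steps run on the local machine. Requires PyYAML (pip install pyyaml).
import subprocess
import yaml

def run_workflow_locally(path=".github/workflows/ci.yml"):
    with open(path) as f:
        workflow = yaml.safe_load(f)

    for job_name, job in workflow.get("jobs", {}).items():
        print(f"== job: {job_name}")
        for step in job.get("steps", []):
            cmd = step.get("run")
            if cmd is None:
                continue  # `uses:` actions need real machinery; skipped here
            print(f"-- {step.get('name', cmd)}")
            # The same command CI runs, on the same repo, with no second config.
            subprocess.run(cmd, shell=True, check=True)

run_workflow_locally()
```

Because the file being parsed is the file CI executes, there is no second pipeline to keep in sync. That is the whole trick.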

Local execution is not just a latency story

Cloud-only tools advertise scale. The honest read is that they advertise scale because cloud is the only mode they offer. There is no local CLI you can run on a laptop without a network round trip. That has three consequences nobody puts on a sales page.

  1. Your code leaves the machine. Every iteration ships diffs to a SaaS backend. For regulated environments, that is a compliance review.
  2. You pay per minute of compute. A fast-feedback workflow on a slow runner gets expensive quickly.
  3. You inherit their availability. When the SaaS has an incident, your verify loop stops.

Local-first inverts all three. Stitch jobs run on the machine where the editor lives. Nothing leaves unless you configure a notification channel that sends it. There is no Stitch backend to be down.

“Pluggable AI agent” is the only honest answer

Gitar, Nx Cloud, and the AI features bolted onto Dagger Cloud all ship with a single proprietary agent. The choice is the vendor’s, the cost is the vendor’s bill, the model upgrade cadence is the vendor’s roadmap.

Stitch does not have an agent. It invokes the agent you already use: Claude Code, Codex, anything CLI-compatible. The credentials are yours. The model choice is yours. When Anthropic ships a faster Claude or OpenAI ships a cheaper Codex, you pick it up the same day, with no Stitch release required.
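
The loop is small enough to sketch. Assuming the agent exposes a non-interactive CLI (Claude Code’s claude -p is used below as an example; any CLI-compatible agent slots into agent_cmd), the shape is: run the job, capture the failure, hand it to the agent, re-run. verify_and_fix and its parameters are names invented for this post, not Stitch’s API.

```python
# A sketch of the verify loop with a pluggable agent; an illustration
# of the shape, not Stitch's implementation. `claude -p` is an assumption;
# substitute any agent with a non-interactive CLI.
import subprocess

def verify_and_fix(test_cmd, agent_cmd=("claude", "-p"), max_rounds=3):
    for _ in range(max_rounds):
        result = subprocess.run(
            test_cmd, shell=True, capture_output=True, text=True
        )
        if result.returncode == 0:
            return True  # the pipeline passes; nothing to fix
        # Hand the failure to the agent the developer already owns.
        # Credentials, model choice, and cost stay with the developer.
        prompt = (
            f"This command failed: {test_cmd}\n\n"
            f"Output:\n{result.stdout}{result.stderr}\n\n"
            "Fix the repository so the command passes."
        )
        subprocess.run([*agent_cmd, prompt], check=True)
    return False

verify_and_fix("pytest -q")
```

Swapping agents means changing one tuple, not migrating a pipeline.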

This matters because the AI agent market is moving faster than any CI vendor can ship. Locking the agent into the CI tool is a bet that the CI vendor will keep pace with frontier models. So far, none have.

Infrastructure cost is the real pricing question

The pricing row hides the more important number. “Free, MIT” for Stitch means the only spend is your existing AI agent subscription. A SaaS line item priced from $20 per user per month, an Nx Cloud plan, and a Dagger Cloud plan all mean per-seat bills that grow with the team, on top of the AI agent subscription.

For a five-person team already using Claude Code, Stitch adds zero. At the $20 floor, the competing tools add 5 × $20 × 12 = $1,200 a year, a four-figure annual line item before the verify loop runs once.

Where Stitch loses

Honest comparisons need this section. If the bottleneck is raw compute, a fleet of cloud runners beats a laptop. If the requirement is hermetic, containerized builds that reproduce identically everywhere, Dagger’s engine is built for exactly that. If leadership wants a central dashboard and org-wide control over the verify loop, the SaaS tools ship one and Stitch deliberately does not.

The matrix is not “Stitch wins everywhere.” The matrix is “Stitch wins where the problem is pre-push verification with an AI fix loop on the pipeline you already have.”

The bet

The bet underneath Stitch is that the developer’s machine is the right place for the verify loop, and the developer’s existing AI agent is the right place for the fixes. Every other tool in the category is making the opposite bet: the right place is the vendor’s cloud, with the vendor’s agent, on the vendor’s pricing.

If the bet is right, the local-first tools win on latency, cost, privacy, and agent freedom. If the bet is wrong, the cloud tools win on scale and central control. We think the bet is right because the agents are already on the developer’s machine. The hard part is no longer “where do we run the AI?” The hard part is “stop forcing the AI somewhere else.”
