The Gap Routines Close
Claude Code has always been something you run while you watch. You open a session, give it a prompt, watch it think, accept or reject its changes, and move on. Great for focused work. Not great for the kinds of tasks that need to run overnight, fire at 3 AM, or react to an event you did not see happen.
Anything scheduled or event-driven needed a second layer: a laptop left open, a VPS with cron, a CI runner, or some combination. Claude Code was the worker. Something else was the scheduler. That split is what Routines remove.
A routine is a saved Claude Code configuration that runs on Anthropic's cloud. A prompt, one or more GitHub repositories, a set of MCP connectors, and one or more triggers. Save it once, and it fires on its own every time a trigger matches. Laptops can close. VPSes can stay off. The work still happens.
Three Trigger Types, Combinable
A single routine can have any combination of three trigger types attached.
Schedule. Hourly, daily, weekdays, weekly, or a custom cron expression down to one-hour granularity. Times are entered in local time and converted automatically, so a routine scheduled for 9 AM runs at 9 AM in your local time, regardless of where the cloud infrastructure lives.
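The local-to-UTC conversion is easy to reason about with a concrete case. A minimal Python sketch (the timezone and date are illustrative, not anything the routine system exposes):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A routine entered as "9 AM" in New York local time, checked against a
# winter date (EST, UTC-5): the cloud scheduler fires at 14:00 UTC but the
# run still lands at 9 AM on the user's wall clock.
local_run = datetime(2026, 1, 15, 9, 0, tzinfo=ZoneInfo("America/New_York"))
utc_run = local_run.astimezone(timezone.utc)
print(utc_run.hour)  # 14
```

In summer the same routine would fire at 13:00 UTC instead, which is the point of entering times in local terms and letting the conversion happen automatically.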
API. Every routine can get a dedicated HTTPS endpoint and a bearer token. POST to the endpoint with the token in the Authorization header and the routine fires. Wire up monitoring tools, deploy pipelines, Stripe webhooks, internal admin tools, or anything else that can make an authenticated HTTP request.
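The calling side needs nothing beyond a standard HTTP client. A sketch of what that request might look like, using only the Python standard library; the endpoint URL, token, and payload shape below are placeholders, since the real values come from the routine's API trigger configuration:

```python
import json
import urllib.request

# Hypothetical values: copy the real endpoint and token from the
# routine's trigger settings at claude.ai/code/routines.
ENDPOINT = "https://api.anthropic.com/v1/routines/example/fire"
TOKEN = "rt_example_token"

# Whatever the caller puts in the body passes through to the run as context.
payload = {"context": "deploy finished for release v1.2.3"}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would fire the routine.
```

Anything that can construct this request, from a CI step to a Stripe webhook relay, can act as a trigger.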
GitHub event. The routine reacts to repository events. Pull request opened, push to a branch, issue labeled, workflow run completed, release published. Filters narrow it down further: author, title, base branch, labels, draft status, whether the PR came from a fork.
The three can stack. A pull request review routine can run nightly on a schedule, fire from a deploy script webhook, AND react to every new PR on a specific repo. Same prompt. Three entry points.
Creating a Routine
Three surfaces, all writing to the same account.
From the CLI:
/schedule daily backlog triage at 8am weekdays

Claude walks through the prompt, the repo, the connectors, and the trigger. Confirm, and the routine is saved to the account. It shows up at claude.ai/code/routines immediately.
From the web: claude.ai/code/routines has a "New routine" button with a form-driven wizard. Name, prompt, repositories, environment, connectors, trigger. Click create.
From the Desktop app: The new Claude Code Desktop app has a "New task" button with a "New remote task" option. That creates a routine. Picking "New local task" instead creates a local Desktop scheduled task, which is different (runs on the machine, not the cloud).
All three surfaces converge. A routine created in the CLI shows up on the web. A routine edited on the web can be triggered from the CLI with /schedule run.
Three Use Cases That Fit
The routines model works best for unattended, repeatable work with a clear outcome. Three patterns show up often.
Scheduled Data or Content Pipeline
A routine fires on a schedule (nightly, hourly, weekly) against a codebase that does a known job. Pull external data, process it, write results to a hosted database or a Slack channel, done. No server required. No cron file to maintain.
Fits: backlog triage, overnight reports, regular content generation, data enrichment passes, periodic security scans.
Automated Pull Request Review
A routine with a GitHub trigger on pull_request.opened runs a team's review checklist against every incoming PR. Inline comments for security, performance, style. A summary comment for human reviewers. Optional auto-approve for trivial changes.
Fits: open source maintainers, teams with consistent style guides, solo builders who want a second pair of eyes on their own work.
Alert-Driven Incident Response
A routine with an API trigger is called from a monitoring tool when an error threshold is crossed. The alert body passes through as context. The routine clones the relevant repo, reads the stack trace, correlates with recent commits, and opens a draft pull request with a proposed fix.
Fits: on-call workflows, deploy verification, Stripe or billing webhook triage, production alert first-response.
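Because the alert body passes through as run context, the monitoring side only needs to flatten its alert into text the routine can read. A sketch of that step; the alert field names are illustrative, not a schema the routine system defines:

```python
def build_alert_context(alert: dict) -> str:
    """Flatten a monitoring alert into the free-text context a routine
    receives alongside its saved prompt. Field names are illustrative."""
    return (
        f"Alert: {alert['title']}\n"
        f"Service: {alert['service']}\n"
        f"Error rate: {alert['error_rate']:.1%}\n"
        f"Stack trace:\n{alert['stack_trace']}"
    )

context = build_alert_context({
    "title": "5xx spike on checkout",
    "service": "payments-api",
    "error_rate": 0.073,
    "stack_trace": "KeyError: 'customer_id' at handlers/charge.py:42",
})
```

That string becomes the POST body to the routine's API endpoint, giving the run everything it needs to clone the repo and start correlating.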
What Makes a Routine Succeed
Routines run autonomously. No permission-mode picker. No approval prompts during a run. Whatever the routine can reach, it can do without asking. That changes how the prompt should be written.
Be explicit about success. The prompt has to say what "done" looks like. "Review the PR and add inline comments" is not enough. Better: "Review the PR against the checklist below. For each finding, leave an inline comment tagged [security], [performance], or [style]. If there are no findings, post a summary comment containing only 'LGTM'."
Scope what the routine can reach. Select only the repositories the routine actually needs. Enable branch push restrictions (the default, branches prefixed with claude/). Include only the MCP connectors the task actually uses. The smaller the blast radius, the safer unattended runs become.
Use environments intentionally. Each routine runs in a configured cloud environment that controls network access, environment variables, and setup scripts. Keep secrets in the environment variables, not the prompt. Keep the setup script fast so each run does not spend five minutes installing dependencies.
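Inside the run, reading secrets from the environment rather than the prompt also means a missing credential can fail fast instead of producing a half-finished run. A small sketch of that pattern; the variable name is hypothetical:

```python
import os

def require_env(name: str) -> str:
    """Read a secret from the routine's environment, failing loudly
    rather than letting the run proceed with a blank credential."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

# DB_URL is a hypothetical variable configured on the routine's
# environment, never embedded in the prompt text itself.
# db_url = require_env("DB_URL")
```

The prompt can then refer to "the database at $DB_URL" without ever containing the credential.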
Write for the worst case. The routine will run on a Tuesday when the input data is malformed, a connector is rate limited, or a tool returns an unexpected schema. A good prompt tells Claude what to do when the ideal path breaks.
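One way to handle this is a standing failure clause appended to every routine prompt. The wording below is purely illustrative, not a prescribed format:

```python
BASE_PROMPT = "Review the PR against the checklist below. ..."

# Illustrative failure-handling instructions appended to every routine
# prompt, so unattended runs degrade predictably instead of improvising.
FAILURE_CLAUSE = (
    "If the input data fails to parse, do not guess: post a summary "
    "comment describing the malformed field and stop. "
    "If a connector is rate limited or returns an unexpected schema, "
    "skip that step, note the skip in the summary, and continue."
)

prompt = f"{BASE_PROMPT}\n\n{FAILURE_CLAUSE}"
```

The specifics matter less than the principle: every branch of the failure tree should end in a visible artifact, never a silent exit.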
Four Gotchas That Bite First
Routines run as you. Anything the routine does through your GitHub identity or MCP connectors appears as you. Commits carry your name. Slack messages use your linked account. If a team wants separation, create a dedicated bot GitHub account and connect that identity instead of a personal one.
Daily run cap. Routines share subscription usage limits, and there is a per-account daily cap on how many routine runs can start. Check the current consumption at claude.ai/settings/usage. Plan around it or enable extra usage from Settings → Billing if the account runs close to the cap.
Research preview. Anthropic calls this a research preview. Request and response shapes may change. The API /fire endpoint ships behind an experimental-cc-routine-2026-04-01 beta header. The two most recent prior header versions continue to work during migrations, but anyone wiring production systems against this should budget for at least one schema update in the next six months.
No local file access. The routine clones the repo fresh at the start of each run and writes to /tmp during the run. Any laptop filesystem, VPS disk, or mounted volume is invisible to the routine. Cron jobs that depended on local state need a different storage layer (a hosted database, object storage, or a committed file in the repo itself).
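The simplest of those storage layers is a committed file in the repo itself: each run reads the state its predecessor pushed. A minimal sketch, assuming a JSON state file at a path of the team's choosing:

```python
import json
from pathlib import Path

# A state file committed in the repo; the path is an example.
STATE_FILE = Path("state/last_run.json")

def load_state() -> dict:
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"last_processed_id": 0}  # first-ever run

def save_state(state: dict) -> None:
    STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
    STATE_FILE.write_text(json.dumps(state, indent=2))
    # The routine would then commit and push this file on its
    # claude/-prefixed branch so the next run can read it.

state = load_state()
state["last_processed_id"] += 1
save_state(state)
```

For anything heavier than a cursor or a timestamp, a hosted database or object storage is the better fit; the repo-file pattern works because a fresh clone is the one piece of persistence every run is guaranteed.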
Who Routines Are Not For
Not every scheduled task belongs in a routine. Three categories stay where they are.
Workloads tied to specific hardware. Anything running against a local GPU, an NPU, or custom inference hardware. Cloud routines cannot reach that hardware. The work stays local.
Long-lived process state. Knowledge graphs with local databases, large indexed corpora, anything that expects files to persist between runs. Routines clone fresh and write to /tmp only.
Polling loops and always-open sockets. Bots that listen on a websocket, services that hold a TCP connection open, work that needs a long-lived process in memory. Routines are bursty by design: they fire on a trigger and exit.
The pattern to watch for: anything that needs specific hardware, local files, or a long-running process stays on whatever infrastructure it already lives on. Anything scheduled or event-driven that works against GitHub-hosted code is a candidate for a routine.
The Bigger Shift
Cron jobs and CI pipelines used to handle the "runs without me" layer. Routines do something different: they move the AI agent itself to the cloud, with the scheduler built in. The prompt is the job spec. The trigger is the entry point. Everything in between is handled.
The number of cases where "stand up a small VPS to run Claude on a cron" was the right answer shrinks sharply. For anything new, start with a routine and only reach for infrastructure if the routine cannot do the job. That flip, from infrastructure-first to routine-first, is the actual story.
Choosing the Right Tool for Each Workflow
Routines make a class of automation dramatically easier to stand up. They do not answer the bigger questions: which AI model should drive the routine, which connectors fit a team's stack, and which workflows are worth automating at all.
Matching a workflow to the right AI and the right tooling is its own skill. LLM Match Maker exists for exactly that question. If you are looking at the new Routines feature and wondering which tools to connect, which model to run, and which workflows to move first, that is the matching problem we solve.
Routines remove the scheduler problem. Picking what to schedule is the next decision. That is where we help.