Platform
Thirteen specialist agents. Three phases. One graph.
Every agent owns a slice of the delivery pipeline and hands off by typed contract. Pick a phase to dive into the agents that power it — or see all thirteen at a glance on the handoff graph below.
Typed contracts · Tier-matched cost · Audited tool use · Human-gated reviews
agents · handoff graph
plan · build · ship
The problem
A single "build-everything" AI is easy to prompt and impossible to trust. AlgorithmShift factors the work into thirteen narrow agents across three phases, each tuned to the cheapest model that can handle its job, each callable in isolation, each auditable. You pay for what you use and trust what you ship.
The three phases
Pick a phase to see its agents.
Plan · 4 agents
Capture intent, break it into tasks, gate with review.
- requirements
- tasks
- review
- master merge
Build · 6 agents
Design tokens, schema, pages, migrations, integrations, and debug.
- design
- schema
- pages
- migrations
- integration
- debug
Ship · 3 agents
Release notes, audit trail, master-spec rollup.
- tests
- security
- release notes
- 13 specialist agents · plan · build · ship
- 3 model tiers · Haiku · Sonnet · Opus · the cheapest that fits the job
- Typed contracts at every handoff · no freeform prompt chaining
- Audited · every tool call logged · SIEM-ready out of the box
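"Every tool call logged, SIEM-ready" could reduce to one structured record per invocation. A minimal sketch, assuming JSON-lines output; the function and field names here are illustrative, not the platform's actual schema:

```python
import json
from datetime import datetime, timezone

def audit_log(agent: str, tool: str, args: dict) -> str:
    # One append-only, machine-parseable record per tool call,
    # so a SIEM can ingest it without custom parsing.
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "tool": tool,
        "args": args,
    }
    return json.dumps(record)

line = audit_log("schema", "run_migration", {"table": "users"})
```

Each record carries the agent, the tool, and the arguments, so any call can be traced back to the agent that made it.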
FAQ
Common questions
Why multiple agents instead of one?
Specialisation lowers cost (cheap models handle narrow jobs), raises quality (each prompt is tuned to its task), and keeps the blast radius of a failure small. Parallelism is a side benefit — independent agents run concurrently.
Can I customise an agent's prompt?
System prompts are read-only for safety, but you can attach workspace-level context (glossary, voice samples, domain notes) that every agent threads into its own prompt. For fully custom behaviour, use the Agent Builder to author a new agent.
Do agents share memory?
No — agents communicate only via typed artifacts. This makes every handoff traceable and every agent replayable in isolation. If you need shared state, it lives in the workspace as a first-class artifact.
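A typed-artifact handoff might look like the following sketch. The `TaskList` shape and its `validate` method are illustrative assumptions, not the platform's actual contract API:

```python
from dataclasses import dataclass, field

# Hypothetical typed artifact: the only thing one agent hands the next.
@dataclass(frozen=True)
class TaskList:
    requirement_id: str
    tasks: list[str] = field(default_factory=list)

    def validate(self) -> None:
        # The receiving agent checks the contract before doing any work,
        # so a malformed handoff fails loudly instead of propagating.
        if not self.requirement_id:
            raise ValueError("artifact missing requirement_id")
        if not self.tasks:
            raise ValueError("artifact carries no tasks")

artifact = TaskList(requirement_id="REQ-1", tasks=["draft schema", "write tests"])
artifact.validate()  # raises if the contract is broken
```

Because the artifact is immutable and self-validating, any run can be replayed from the artifact alone.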
Which model powers each agent?
Most agents run Sonnet by default with Opus escalation on parse/validation failures. Cheap tasks (tasks, migrations, integration, review) run Haiku. Enterprise customers can route individual agents to their own hosted models.
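That routing policy reduces to a lookup plus an escalation rule. A minimal sketch, where the tier table is inferred from the answer above rather than taken from a published config:

```python
HAIKU, SONNET, OPUS = "haiku", "sonnet", "opus"

# Assumed tier table: cheap, narrow agents run Haiku; everything else Sonnet.
CHEAP_AGENTS = {"tasks", "migrations", "integration", "review"}

def pick_model(agent: str, parse_failures: int = 0) -> str:
    """Cheapest model that fits the job; escalate to Opus after a
    parse/validation failure."""
    if parse_failures > 0:
        return OPUS
    return HAIKU if agent in CHEAP_AGENTS else SONNET

pick_model("tasks")                     # "haiku"
pick_model("schema")                    # "sonnet"
pick_model("schema", parse_failures=1)  # "opus"
```

Escalation only triggers on failure, so the common path stays on the cheap tier.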
How are agent updates rolled out?
Every agent is versioned. Workflows pin agent versions so improvements don't surprise live pipelines — you opt in per workspace. Changelogs ship with each release.
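Pinning can be pictured as a per-workspace map from agent name to version; the point is that `resolve` returns the pin, not the latest release. All names and version numbers below are illustrative:

```python
# Published agent versions (illustrative).
LATEST = {"design": "2.1.0", "schema": "1.4.2"}

# What this workspace has opted into. An upstream release never
# changes a live pipeline until someone moves the pin.
workspace_pins = {"design": "2.0.3", "schema": "1.4.2"}

def resolve(agent: str) -> str:
    """Return the pinned version for this workspace."""
    return workspace_pins[agent]

resolve("design")  # "2.0.3", even though 2.1.0 has shipped
```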
See the graph run end-to-end.
Bring a real requirement. Watch the agents hand it off — plan to build to ship.