Platform

Orchestration
approve once, ship many.

A single approval fans out into a task graph: schema, design, pages, migrations, integrations, tests, security. The orchestrator parallelises what's independent, waits on what's blocked, retries what fails, and pauses on human review — atomically.
Task DAG · Parallel fanout · Atomic retries · Human-in-the-loop
acme / apps / customers / run 84c2
requirement approved → schema → design → customers/index → customers/[id] → admin/rules → release

The problem

Teams burn weeks on the coordination around the work — who's waiting on what, who approved which version, which step needs to re-run. AlgorithmShift turns all of that into a task graph the platform runs for you: atomic, auditable, and resumable. You approve intent; the platform delivers artifacts.
[01]

One approval expands into a task graph.

When a requirement is approved, the orchestrator reads its shape and emits every downstream task — one per page, one per schema entity, one per integration — with the dependency graph wired up.
  • Fanout is deterministic: same requirement → same task set
  • Task nodes carry agent assignment, input payload, output contract
  • Approval records are first-class nodes, not side-channels
requirement → task graph (yaml)
# one approved requirement — auto-fanout
requirement.approved: "Add customer health score"
  ↓
spec                 (approved)
├── schema           (extend customer entity)
│   └── migrations   (apply.sql + rollback.sql)
├── design           (tokens + health badge)
├── pages
│   ├── customers/index      (list + score column)
│   ├── customers/[id]       (detail + trend card)
│   └── admin/health-rules   (rule editor)
├── tests            (unit + integration)
└── security         (auth-bypass + PII scan)
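The deterministic fanout above can be sketched as a pure function over the approved requirement. Everything here — the `Task` type, `fanout`, the agent labels — is illustrative, not the platform's actual API:

```python
# Hypothetical sketch: one approved requirement expands into a fixed,
# ordered task set with depends_on wiring. Same input -> same task set.
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    id: str
    agent: str
    depends_on: tuple = ()

def fanout(requirement: dict) -> list[Task]:
    """Deterministic fanout: emit tasks in a stable order."""
    tasks = [
        Task("spec", "spec"),
        Task("schema", "schema", ("spec",)),
        Task("design", "design", ("spec",)),
    ]
    # One task per page; each page depends only on schema + design.
    pages = sorted(requirement["pages"])
    for page in pages:
        tasks.append(Task(f"pages/{page}", "pages", ("schema", "design")))
    # Tests gate on every page being produced.
    tasks.append(Task("tests", "tests", tuple(f"pages/{p}" for p in pages)))
    return tasks

req = {"title": "Add customer health score",
       "pages": ["customers/index", "customers/[id]", "admin/health-rules"]}
assert fanout(req) == fanout(req)  # byte-identical fanout on re-run
```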
[02]

Parallelism wherever the graph allows it.

Independent branches execute concurrently. Schema and design run in parallel because neither blocks the other. Three pages that depend only on schema + design fan out three ways at once.
  • Automatic parallelism from `depends_on` declarations
  • Per-app concurrency ceiling keeps model spend in check
  • Bottlenecks surface as long bars on the run timeline
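A minimal sketch of how parallel waves could fall out of `depends_on` declarations alone, with a concurrency ceiling applied per wave. The function and graph names are hypothetical:

```python
# Illustrative scheduler: derive concurrent "waves" from a depends_on map,
# never running more than `ceiling` tasks at once.
def waves(tasks: dict[str, set[str]], ceiling: int = 3) -> list[list[str]]:
    """tasks maps task id -> set of dependency ids."""
    done: set[str] = set()
    out: list[list[str]] = []
    while len(done) < len(tasks):
        # A task is ready when all of its dependencies are done.
        ready = sorted(t for t, deps in tasks.items()
                       if t not in done and deps <= done)
        if not ready:
            raise ValueError("cycle or unsatisfiable dependency")
        for i in range(0, len(ready), ceiling):
            batch = ready[i:i + ceiling]
            out.append(batch)
            done.update(batch)
    return out

graph = {
    "schema": set(), "design": set(),
    "pages/index": {"schema", "design"},
    "pages/[id]": {"schema", "design"},
    "pages/rules": {"schema", "design"},
    "tests": {"pages/index", "pages/[id]", "pages/rules"},
}
```

Schema and design land in the first wave, the three pages fan out together in the second, and tests wait for all of them.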

Run timeline — 6 tasks, 3 lanes

schema · done
design · done
pages/index · done
pages/[id] · retry
pages/rules · done
tests · blocked

Three pages ran in parallel. One failed, retried automatically — tests held until all green.

[03]

Idempotent task claims and atomic commits.

Each task claims its row, runs, and records the result in one transaction. A task interrupted by a crash mid-run reappears as unclaimed on restart — no duplicate pages, no half-written artifacts, no orphaned spend.
  • Orchestration keys guarantee at-most-once effects
  • Retry reclaims only the failed frontier
  • Replays are byte-identical when inputs match
task lifecycle (yaml)
# task claimed, executed, and recorded atomically
task: pages/customers/[id]
  attempt_1:
    started: 09:07:12
    agent: pages (sonnet)
    status: failed
    error: "schema-out-of-sync: health_score not in v17"
  retry_after: schema re-run
  attempt_2:
    started: 09:12:03
    status: applied
    artifact: iter_4f19/pages/customers/[id].tsx
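One way an at-most-once claim can work is a unique orchestration key enforced by the database, so a second claim of the same task in the same run is a recorded no-op. This sketch models it with SQLite; the table and column names are illustrative:

```python
# Idempotent task claim: the unique (task_id, run_id) key makes the
# claim itself the at-most-once guard. Schema is hypothetical.
import sqlite3

def claim(db: sqlite3.Connection, task_id: str, run_id: str) -> bool:
    """Claim a task exactly once per run; duplicates are no-ops."""
    try:
        with db:  # one transaction: claim + record commit atomically
            db.execute(
                "INSERT INTO claims(task_id, run_id) VALUES (?, ?)",
                (task_id, run_id))
        return True
    except sqlite3.IntegrityError:  # key already claimed
        return False

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE claims(task_id TEXT, run_id TEXT,"
           " UNIQUE(task_id, run_id))")
assert claim(db, "pages/customers/[id]", "84c2") is True
assert claim(db, "pages/customers/[id]", "84c2") is False  # no duplicate effect
```

A crash before the transaction commits leaves no claim row behind, so the task is simply picked up again on restart.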
[04]

Human review is a task, not a Slack thread.

When a step declares `approval: human`, the orchestrator pauses that branch, surfaces the artifact, and captures the reviewer decision into the same ledger. Nothing downstream runs until the review is resolved.
  • Reviewer + decision + rationale persisted as an artifact
  • Parallel branches not gated by the review keep running
  • Rejection routes feedback back into the agent's next retry
ship-a-page • run 84c2
  1. 09:03:14specapproved by alice
  2. 09:04:02schemaapplied
  3. 09:04:03designapplied (parallel)
  4. 09:07:21pages/[id]pending review
  5. 09:11:48reviewalice → approved
  6. 09:12:03testsapplied
[05]

Cross-environment awareness.

The same graph runs against dev, staging, or prod. Each environment carries its own run ledger so you can see at a glance which tasks are applied, pending, or blocked in every target — without stitching together three dashboards.
  • Per-env run history with commit linkage
  • Blocked-in-prod surfaces immediately in the dashboard
  • Replay a run from any environment to reproduce an issue
dev · 18/18 · in sync
staging · 16/18 · 2 pending
prod · 15/18 · awaiting release
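The per-environment view is just a rollup over each environment's run ledger. A sketch, assuming a flat list of ledger rows with illustrative field names:

```python
# Roll a per-env run ledger up into the "applied/total" view above.
from collections import Counter

def rollup(ledger: list[dict]) -> dict[str, str]:
    """ledger rows: {env, task, status}. Returns env -> 'applied/total'."""
    total = len({row["task"] for row in ledger})
    per_env: dict[str, Counter] = {}
    for row in ledger:
        per_env.setdefault(row["env"], Counter())[row["status"]] += 1
    return {env: f"{counts['applied']}/{total}"
            for env, counts in per_env.items()}

ledger = [
    {"env": "dev",     "task": "schema", "status": "applied"},
    {"env": "dev",     "task": "design", "status": "applied"},
    {"env": "staging", "task": "schema", "status": "applied"},
    {"env": "staging", "task": "design", "status": "pending"},
]
```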
[06]

Traceable, top to bottom.

Every task links back to the requirement that fanned it out, the approval that released it, the agent that produced it, and the commit it landed in. Walk the chain either way — requirement → code, or code → requirement.
  • Stable IDs across requirement → task → artifact → commit
  • One-click trace view in the app timeline
  • Exportable audit bundle per release
trace: iter_4f19/pages/customers/[id].tsx (log)
requirement: req_8a11  "Add customer health score"
  ↳ approval:   apr_c4d2  alice@acme  2026-04-18 09:03
  ↳ task:      task_91ef   pages agent (sonnet, 4.2k tok, $0.08)
    ↳ artifact: iter_4f19/pages/customers/[id].tsx
      ↳ commit:   feat: customer detail with health score
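With stable IDs, walking the chain in either direction is a pair of lookups over the same links. A sketch mirroring the trace above; the link data is taken from the log, the functions are illustrative:

```python
# Two-way traceability over stable IDs: the same links answer both
# "requirement -> code" and "code -> requirement".
links = [  # (parent_id, child_id), as in the trace log
    ("req_8a11", "apr_c4d2"),
    ("apr_c4d2", "task_91ef"),
    ("task_91ef", "iter_4f19/pages/customers/[id].tsx"),
]

forward = dict(links)                                   # requirement -> code
backward = {child: parent for parent, child in links}   # code -> requirement

def trace(start: str, index: dict[str, str]) -> list[str]:
    """Follow the chain from `start` until it runs out."""
    chain = [start]
    while chain[-1] in index:
        chain.append(index[chain[-1]])
    return chain

assert trace("req_8a11", forward)[-1] == "iter_4f19/pages/customers/[id].tsx"
assert trace("iter_4f19/pages/customers/[id].tsx", backward)[-1] == "req_8a11"
```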

Parallel · Independent tasks fan out · no manual scheduling

Atomic · Idempotent task commits · crash-safe retries

Pausable · Human review is first-class · blocks only what it must

Traceable · Every artifact → requirement · both directions

FAQ

Common questions

What happens if two people approve the same requirement?
The orchestrator claims the fanout step with a unique key. Only the first approval starts tasks; the second becomes a no-op recorded in the audit log.
Can I cancel a running graph?
Yes — cancel from the run view. In-flight tasks finish atomically (so no corruption); downstream tasks are marked cancelled. Partial progress is preserved, and you can resume selectively.
How does retry differ from replay?
Retry picks up the failed frontier with the existing inputs. Replay re-runs the whole graph from scratch against a specified environment — useful for reproducing prod issues in staging.
Do agents see each other's outputs?
Yes — declared via typed contracts. A downstream agent receives a parsed artifact reference, not raw prompt context, so upstream churn doesn't ripple unpredictably.

See the full platform in action.

Bring a real requirement. Watch it become a running app you can ship.