# Automation

## Overview
The automation system lets you define reusable tasks in your `asd.yaml` and run them with `asd run <task>`. Tasks are YAML-based step sequences that can start background services, wait for health checks, validate output, and clean up after themselves.

```bash
asd run dev    # Run the 'dev' task
asd run test   # Run the 'test' task
asd run        # List all available tasks
```

When you run `asd up` without arguments, it looks for tasks named `up`, `dev`, or `start`, in that order.
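For example, a minimal sketch of an `asd.yaml` with a task named `up`, so that a bare `asd up` finds it (the port and command are illustrative):

```yaml
automation:
  up:
    - run: "pnpm dev"
      background: true
    - waitFor: "http://127.0.0.1:5173"
```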
## Defining Tasks

Tasks live in the `automation:` section of your `asd.yaml`:
```yaml
automation:
  dev:
    - run: "pnpm dev"
      background: true
      skipWhen:
        port: 5173
    - waitFor: "http://127.0.0.1:5173"
  test:
    - run: "pnpm test"
      expect: "passed"
  build:
    - run: "pnpm build"
      environment:
        NODE_ENV: "production"
```

Each task is a list of steps executed in order. Steps can run commands, wait for conditions, or validate output.
## Step Properties

### Running Commands
| Property | Type | Description |
|---|---|---|
| `run` | string | Shell command to execute |
| `background` | boolean | Run as a daemon (stays alive after the step completes) |
| `backgroundTeardownCommand` | string | Custom cleanup command instead of SIGKILL when the task ends |
| `shell` | `"bash"` \| `"sh"` \| `"bun"` | Shell to use (default: `bash`) |
| `timeout` | number | Step timeout in milliseconds |
| `retries` | number | Number of retry attempts on failure |
| `retryDelayMs` | number | Delay between retries in milliseconds |
| `environment` | object | Environment variables scoped to this step only |
```yaml
automation:
  start-api:
    - run: "node server.js"
      background: true
      environment:
        PORT: "8080"
        NODE_ENV: "development"
      backgroundTeardownCommand: "pkill -f server.js"
```

Step-level environment variables are scoped to that step only; they do not leak into subsequent steps.
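The timeout and retry properties from the table above can be combined on a single step. A sketch (the migration script is hypothetical):

```yaml
automation:
  migrate:
    - run: "node migrate.js"   # hypothetical script
      shell: "sh"
      timeout: 60000       # fail the step after 60 seconds
      retries: 3           # re-run up to 3 times on failure
      retryDelayMs: 2000   # wait 2 seconds between attempts
```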
### Waiting for Services
| Property | Type | Description |
|---|---|---|
| `waitFor` | string | URL to poll until it responds (200 or 401 = success) |
| `waitForTunnel` | object | Poll a tunnel URL built from credentials |
| `sleep` | number | Fixed delay in milliseconds |
| `timeoutMs` | number | Maximum wait time for `waitFor` |
```yaml
automation:
  dev:
    - run: "pnpm dev"
      background: true
    - waitFor: "http://127.0.0.1:5173/health"
      timeoutMs: 30000
    - run: "echo 'Dev server is ready'"
```

`waitForTunnel` builds the URL from your tunnel credentials automatically:

```yaml
- waitForTunnel:
    prefix: "app"
    timeout: 10000
```

### Idempotent Steps with skipWhen
The `skipWhen` block makes steps idempotent: if the condition is already met, the step is skipped entirely.

```yaml
- run: "pnpm dev"
  background: true
  skipWhen:
    port: 5173   # Skip if port 5173 is already listening
```

All `skipWhen` conditions (the step is skipped if at least one matches):
| Condition | Example | Skips when |
|---|---|---|
| `port` | `port: 5173` | TCP port is listening |
| `process` | `process: "node server"` | Matching process is running |
| `file` | `file: ".ready"` | File exists |
| `http` | `http: { url: "...", status: 200 }` | HTTP endpoint responds |
| `command` | `command: "test -f .ready"` | Shell command exits 0 |
Port values support template macros: `port: "${APP_PORT}"`.
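A sketch of a templated `skipWhen` condition, assuming `APP_PORT` is already defined in the environment:

```yaml
- run: "pnpm dev"
  background: true
  skipWhen:
    port: "${APP_PORT}"   # skip if the templated port is already listening
```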
### Output Validation

Steps can validate their output using assertions:
```yaml
- run: "pnpm test"
  expect: "All tests passed"   # Required substring in output
  notIn: "FAIL"                # Must NOT appear in output

- run: "curl -s http://localhost:3000/health"
  expectRegex: "status.*ok"    # Regex match on output

- run: "curl -s http://localhost:3000/api/info"
  assertJson:
    jsonpath: "$.status"
    equals: "healthy"
    type: "string"
```

### File Assertions
```yaml
- run: "pnpm build"
- assertFile:
    path: "dist/index.js"
    contains: "export default"
    notContains: "console.log"
- assertNotExists:
    path: "dist/debug.log"
```

### Port and File Checks
```yaml
- portCheck:
    host: "127.0.0.1"
    port: 8080
    timeoutMs: 5000
- fileCheck:
    path: "artifacts/report.json"
    exists: true
    sizeGreaterThan: 1024
```

### HTTP Steps
Make HTTP requests and validate responses directly:
```yaml
- http:
    method: POST
    url: "http://localhost:3000/api/data"
    headers:
      Authorization: "Bearer ${API_TOKEN}"
      Content-Type: "application/json"
    body:
      key: "value"
    expectStatus: [200, 201]
    assertJson:
      jsonpath: "$.id"
      type: "string"
```

Wait for an HTTP endpoint to become available:
```yaml
- httpWaitFor:
    url: "http://localhost:3000/health"
    expectStatus: 200
    intervalMs: 500
```

### Conditional Workflows
Branch execution based on step results:
```yaml
- when:
    id: 10
    command: "pnpm lint"
    expect: "No errors"
    thenOnSuccess:
      - run: "echo 'Lint passed'"
    thenOnFail:
      - run: "echo 'Lint failed, running autofix'"
      - run: "pnpm lint --fix"
    finally:
      - run: "echo 'Lint check complete'"
```

Reference a previous `when` result without re-running it:
```yaml
- whenRef: 10
  on: success
  then:
    - run: "pnpm build"
```

## Template Macros in Automation
All string values in automation steps support template expansion.
### Random Values
```yaml
- run: "start-server --port ${RAND_PORT}"
  environment:
    TOKEN: "${getRandomString(charset='hex',length=32)}"
    SESSION_ID: "${getRandomString(charset='alnum',length=18)}"
```

Available charsets: `alnum`, `hex`, `alpha`, `safe`.
### Port Allocation
```yaml
- run: "start-server --port ${getRandomPort(name='SERVER_PORT')}"
  background: true
- waitFor: "http://127.0.0.1:${SERVER_PORT}"
```

`getRandomPort(name="VAR")` allocates a free port and exports it as an environment variable for subsequent steps. For multiple ports, use `${getRandomPorts(count=3,name="PORTS",sep=";")}`.
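A sketch allocating three ports at once (the `start-cluster` command is hypothetical; the variable name and separator are chosen for illustration):

```yaml
- run: "start-cluster --ports ${getRandomPorts(count=3,name='PORTS',sep=';')}"
  background: true
- run: "echo 'Allocated ports: ${PORTS}'"   # e.g. three ports joined with ';'
```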
### Built-in Variables

Every automation step has access to these pre-generated values:
| Variable | Description |
|---|---|
| `RAND_STRING` | Random alphanumeric (18 chars) |
| `RAND_HEX` | Random hex (12 chars) |
| `RAND_ALPHA` | Random letters (10 chars) |
| `RAND_INT` | Random integer string |
| `RAND_PORT` | Free TCP port |
The same macro literal resolves to the same value within a single task run; different literals produce independent values.
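For instance, a server and its health check can agree on a port by repeating the same literal (a sketch; `start-server` is a hypothetical command):

```yaml
- run: "start-server --port ${RAND_PORT}"   # hypothetical server command
  background: true
- waitFor: "http://127.0.0.1:${RAND_PORT}"  # same literal, same port
```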
## Interactive TUI Testing

The automation system supports testing interactive terminal applications through a pseudo-terminal (PTY) via `tmux`:
```yaml
- pty:
    open: true
    run: "asd net"
    size:
      cols: 120
      rows: 40
- pty:
    waitFor: "[Services]"
    timeoutMs: 5000
- pty:
    sendKeys: ["Tab", "Down", "Enter"]
- pty:
    waitForIdle:
      stableMs: 300
      timeoutMs: 5000
- pty:
    capturePane: "tmp/screen.txt"
- pty:
    exit: true
```

PTY testing uses `tmux` as the primary backend, with `script` as a fallback. This is used extensively in ASD's own integration test suite.
## Failure Behavior

When a step fails, ASD stops execution of the remaining steps in the task. The exit code and error output are displayed in the terminal.
- **Foreground steps** — a non-zero exit code is treated as a failure. If `expect`, `notIn`, or `expectRegex` assertions fail, the step also fails.
- **Background steps** — ASD starts the process and moves on. If the process crashes after startup, subsequent `waitFor` or `portCheck` steps detect the failure via timeout.
- **`retries`** — when set, ASD re-runs the failed step up to N times with `retryDelayMs` between attempts before giving up.
- **`when`/`thenOnFail`** — use conditional workflows to handle expected failures gracefully (e.g., run autofix after a lint failure).
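The background-crash case can be sketched like this (the port and commands are illustrative):

```yaml
- run: "node server.js"        # if this process crashes after startup...
  background: true
- waitFor: "http://127.0.0.1:3000/health"
  timeoutMs: 15000             # ...this step fails when the timeout elapses
```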
There is no automatic rollback. If a background step started successfully before a later step fails, that background process keeps running. Use `asd down` to stop all background processes.
## Logs

Background processes started by `asd run` write their output to `.asd/workspace/logs/`. Each process gets a log file named after the task step.
```bash
# View logs for a specific service
asd logs caddy
asd logs tunnel

# Or read the files directly
ls .asd/workspace/logs/
```

Foreground step output is printed directly to your terminal. Background step output is captured in log files so you can review it later.
## Stopping Background Tasks

Background tasks started by `asd run` continue running after the command exits. To stop them:
```bash
# Stop all services managed by ASD
asd down

# Stop a specific service in the network TUI
asd net stop <service-id>

# Or kill processes directly
pkill -f "pnpm dev"
```

If a step defines `backgroundTeardownCommand`, that command runs when `asd down` stops the task. Otherwise, ASD sends SIGTERM to the background process.
## CLI Options

```bash
asd run dev            # Run task
asd run dev --follow   # Follow logs after starting
asd run dev --json     # Output in JSON format
asd run                # List available tasks
```

## Related Guides
- Configuration Reference (`asd.yaml`) — full `asd.yaml` specification
- CLI Command Reference — all CLI commands
- Getting Started — first tunnel setup