diff --git a/CHANGELOG.md b/CHANGELOG.md index 99a73a7..43ff01f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -7,6 +7,84 @@ All notable changes to `@switchbot/openapi-cli` are documented in this file. The format is loosely based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/). This project follows [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## [3.2.1] - 2026-04-25 + +### Added — plan resource model, MCP risk profiles, rules safety primitives + +- `switchbot plan save [file]` — persist a validated plan to `~/.switchbot/plans/<planId>.json` + with status `pending`; prints the assigned `planId`. +- `switchbot plan list` — table of saved plans with status, creation time, and step count. +- `switchbot plan review <planId>` — show full step list and current status of a saved plan. +- `switchbot plan approve <planId>` — transition a `pending` plan to `approved`; required + before `plan execute` will run it. +- `switchbot plan execute <planId>` — execute an `approved` plan; marks it `executed` on + completion; all steps are recorded in the audit log with `planId` traceability. +- MCP `send_command` now returns a `riskProfile` field per action: `riskLevel`, `requiresConfirmation`, + `supportsDryRun`, `idempotencyHint`, and `recommendedMode` — computed from actual device type + and command at request time. +- Rules engine now enforces `maxFiringsPerHour` (sliding 3600 s count window), + `suppressIfAlreadyDesired` (skip `turnOn`/`turnOff` when the device's live state already matches), + and `hysteresis` / `requires_stable_for` (fire only after the trigger is continuously stable for + the specified duration). All three are validated in `rules lint`. + +### Fixed + +- `rules lint` now validates `hysteresis` / `requires_stable_for` duration syntax and warns + when `hysteresis` and `requires_stable_for` are both set. 
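The `maxFiringsPerHour` behavior described above amounts to a sliding count window over the trailing hour. A minimal illustrative model (names and structure are hypothetical, not the engine's actual implementation):

```typescript
// Hypothetical sketch of a sliding-window firing limiter: a rule may fire
// only while the number of fires in the trailing 3600 s stays under the
// configured maximum. Timestamps are injected so the logic is testable.
class FiringLimiter {
  private fires: number[] = []; // timestamps (ms) of recorded fires

  constructor(private maxPerHour: number, private windowMs = 3_600_000) {}

  /** Returns true and records the fire if the trailing window still has room. */
  tryFire(nowMs: number): boolean {
    // Drop fires that have slid out of the trailing window.
    this.fires = this.fires.filter((t) => nowMs - t < this.windowMs);
    if (this.fires.length >= this.maxPerHour) return false;
    this.fires.push(nowMs);
    return true;
  }
}
```

With `maxPerHour: 2`, a third fire inside the hour is throttled; once the earliest fire slides out of the window, firing is allowed again.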
+ +## [3.2.0] - 2026-04-25 + +### Added — daemon, upgrade-check, scenes validate/simulate, rules summary + +- `switchbot daemon start/stop/status` — runs `rules run` as a cross-platform detached + background process; PID written to `~/.switchbot/daemon.pid`; log at `daemon.log`. +- `switchbot upgrade-check` — fetches latest version from npm registry, compares semver, + and prints an upgrade command; exits 1 when a newer version is available. +- `switchbot scenes validate [sceneId...]` — confirms scene IDs exist in your account; + exits 1 if any are missing. +- `switchbot scenes simulate <sceneId>` — shows exactly what `scenes execute` would POST + without sending the request. +- `switchbot rules summary` — aggregates `rule-fire` audit entries per rule over a + configurable time window, printing a table of fires / throttled / errors. +- `switchbot rules last-fired` — shows the N most recent `rule-fire` / `rule-fire-dry` + audit entries, newest first; `--rule` filter supported. + +### Added — circuit breaker, health report, `--explain`, keychain hints + +- `CircuitBreaker` class in `utils/retry.ts`; module-level `apiCircuitBreaker` singleton + wired in `api/client.ts`. Opens after 5 consecutive 5xx/network failures; auto-probes + after 60 s. 4xx responses excluded from failure counting. +- `switchbot health` — reports quota usage, audit error rate, circuit-breaker state, and + process info in JSON or Prometheus text (`--prometheus`) format. +- `devices command --explain` — prints risk level, device type, and safety reason before + execution without sending the request. +- `config set-token` now prints a tip to run `switchbot auth keychain store` when + file-backend credentials are saved on a platform with a native keychain. +- `doctor` now includes a `keychain` check that warns when file-backend credentials are + in use and a native keychain is available. 
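The circuit-breaker semantics above (open after 5 consecutive 5xx/network failures, half-open probe after 60 s, 4xx excluded by the caller) can be sketched roughly as follows. This is an illustrative model only, not the `utils/retry.ts` implementation:

```typescript
// Rough model of the described circuit-breaker behavior. Time is passed in
// explicitly so the state transitions are deterministic and testable.
type CircuitState = 'closed' | 'open' | 'half-open';

class SimpleCircuitBreaker {
  private state: CircuitState = 'closed';
  private failures = 0;
  private openedAt = 0;

  constructor(
    private failureThreshold = 5,
    private resetTimeoutMs = 60_000,
  ) {}

  /** Throws while the circuit is open and the reset timeout has not elapsed. */
  checkAndAllow(nowMs: number): void {
    if (this.state === 'open') {
      if (nowMs - this.openedAt >= this.resetTimeoutMs) {
        this.state = 'half-open'; // allow a single probe request through
      } else {
        throw new Error('circuit open — failing fast');
      }
    }
  }

  recordSuccess(): void {
    this.failures = 0;
    this.state = 'closed';
  }

  /** Call only for 5xx / network errors — 4xx responses are excluded by the caller. */
  recordFailure(nowMs: number): void {
    this.failures += 1;
    if (this.failures >= this.failureThreshold) {
      this.state = 'open';
      this.openedAt = nowMs;
    }
  }
}
```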
+ +### Added — rules safety schema + conflict analyzer + +- Policy schema v0.2 extended with optional fields: `throttle.dedupe_window`, + `cooldown`, `requires_stable_for`. All validated by `rules lint`. +- `ThrottleGate.check()` now accepts `dedupeWindowMs` and returns `dedupedBy` field. +- `rules/conflict-analyzer.ts` — static analysis detecting opposing-action pairs, + high-frequency catch-all triggers without throttle, and destructive rule actions. +- `switchbot rules conflicts [path]` and `switchbot rules doctor [path]` subcommands. + +### Added — risk metadata, plan traceability + +- `agentSafetyTier` in capabilities output maps to `riskLevel` (`high`/`medium`/`low`), + `idempotencyHint`, and `recommendedMode`. +- `plan run` now generates a UUID `planId` and stamps it on every audit entry produced + during that run, enabling plan-to-audit traceability. + +### Added — stable error JSON, path diagnostics, doctor enhancements + +- `resolutionHint` and `candidateMatches` fields added to structured error output. +- `name-resolver.ts` now rejects ambiguous device names when multiple candidates match. +- `doctor` reports PATH discoverability and npm global bin reachability. + ## [3.0.0] - 2026-04-24 Major release — breaking changes, full feature parity across all branches. 
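The saved-plan lifecycle introduced in 3.2.1 (`pending` → `approved` → `executed`, with `plan execute` refusing unapproved plans) can be modeled as a small state machine. A hypothetical sketch, not the CLI's actual code:

```typescript
// Hypothetical model of the saved-plan lifecycle: `plan approve` moves
// pending → approved, and `plan execute` only runs approved plans.
type PlanStatus = 'pending' | 'approved' | 'executed';

interface SavedPlan {
  planId: string;
  status: PlanStatus;
}

function approve(plan: SavedPlan): SavedPlan {
  if (plan.status !== 'pending') {
    throw new Error(`cannot approve plan in status "${plan.status}"`);
  }
  return { ...plan, status: 'approved' };
}

function execute(plan: SavedPlan): SavedPlan {
  if (plan.status !== 'approved') {
    throw new Error(`plan ${plan.planId} is "${plan.status}" — approve it first`);
  }
  return { ...plan, status: 'executed' };
}
```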
diff --git a/README.md b/README.md index fdcbe68..fde8572 100644 --- a/README.md +++ b/README.md @@ -64,6 +64,8 @@ Under the hood every surface shares the same catalog, cache, and HMAC client — - [`plan`](#plan--declarative-batch-operations) - [`mcp`](#mcp--model-context-protocol-server) - [`doctor`](#doctor--self-check) + - [`health`](#health--runtime-health-report) + - [`upgrade-check`](#upgrade-check--version-check) - [`quota`](#quota--api-request-counter) - [`history`](#history--audit-log) - [`catalog`](#catalog--device-type-catalog) @@ -92,7 +94,7 @@ Under the hood every surface shares the same catalog, cache, and HMAC client — - 🎨 **Dual output modes** — colorized tables by default; `--json` passthrough for `jq` and scripting - 🔐 **Secure credentials** — HMAC-SHA256 signed requests; config file written with `0600`; env-var override for CI - 🔍 **Dry-run mode** — preview every mutating request before it hits the API -- 🧪 **Fully tested** — 1765 Vitest tests, mocked axios, zero network in CI +- 🧪 **Fully tested** — 1856 Vitest tests, mocked axios, zero network in CI - ⚡ **Shell completion** — Bash / Zsh / Fish / PowerShell ## Requirements @@ -275,7 +277,7 @@ executes for you. Supported triggers: **MQTT** (device events), Supported conditions: `time_between` (quiet hours) and `device_state` (live API check with per-tick dedup). Every fire is recorded in `~/.switchbot/audit.log`. `rules run` is long-running; use -`rules reload` to hot-reload policy without dropping listeners. +`daemon start` / `daemon reload` for the managed background mode. ```bash # 1. Author rules under `automation.rules`. See examples/policies/automation.yaml @@ -285,16 +287,37 @@ Supported conditions: `time_between` (quiet hours) and `device_state` switchbot rules lint # exit 0 valid, 1 error switchbot rules list --json | jq . # structured summary -# 3. Run the engine. --dry-run overrides every rule into audit-only mode; +# 3. 
Inspect a single rule in full detail (trigger, conditions, actions, +# cooldown, hysteresis, maxFiringsPerHour, suppressIfAlreadyDesired, last fired). +switchbot rules explain "motion on" +switchbot rules explain "motion on" --json + +# 4. Run the engine. --dry-run overrides every rule into audit-only mode; # --max-firings bounds a demo session. switchbot rules run --dry-run --max-firings 5 -# 4. Edit policy.yaml in another shell, then hot-reload without restart. -switchbot rules reload # SIGHUP on Unix, sentinel file on Windows +# 5. Edit policy.yaml in another shell, then hot-reload without restart. +switchbot daemon reload # managed daemon reload -# 5. Review recorded fires. +# 6. Review recorded fires. switchbot rules tail --follow # stream rule-* audit lines switchbot rules replay --since 1h --json # per-rule fires/dries/throttled/errors +switchbot rules summary # aggregate fires/errors per rule (24h window) +switchbot rules last-fired -n 20 # 20 most recent fire entries + +# 7. Conflict and health analysis. +switchbot rules conflicts # opposing actions, high-frequency MQTT, + # destructive commands, quiet-hours gaps +switchbot rules doctor --json # lint + conflicts combined; exit 0 when clean +``` + +When `quiet_hours` is configured in `policy.yaml`, `rules conflicts` additionally flags event-driven (MQTT / webhook) rules that lack a `time_between` condition — they would fire uninhibited during the quiet window. The hint in each finding includes a ready-to-paste `time_between` condition to add. 
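The quiet-hours check above amounts to a simple scan: flag every event-driven rule whose conditions contain no `time_between` entry. A minimal illustrative sketch (the rule shapes here are simplified, not the real policy schema):

```typescript
// Simplified sketch of the quiet-hours gap check: when quiet hours are
// configured, any MQTT/webhook-triggered rule without a `time_between`
// condition would fire during the quiet window and is flagged.
interface SimpleRule {
  name: string;
  trigger: 'mqtt' | 'webhook' | 'schedule';
  conditions: { type: string }[];
}

function quietHoursGaps(rules: SimpleRule[], quietHoursConfigured: boolean): string[] {
  if (!quietHoursConfigured) return [];
  return rules
    .filter((r) => r.trigger === 'mqtt' || r.trigger === 'webhook')
    .filter((r) => !r.conditions.some((c) => c.type === 'time_between'))
    .map((r) => r.name);
}
```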
+ +Webhook trigger token management: + +```bash +switchbot rules webhook-rotate-token # rotate the bearer token for webhook triggers +switchbot rules webhook-show-token # print current token (creates one if absent) ``` See [`docs/design/phase4-rules.md`](./docs/design/phase4-rules.md) for @@ -356,6 +379,12 @@ switchbot devices command ABC123 turnOn --dry-run switchbot config set-token # Save to ~/.switchbot/config.json switchbot config show # Print current source + masked secret switchbot config list-profiles # List saved profiles + +# Print (or write) the recommended AI-agent profile template +switchbot config agent-profile # print to stdout +switchbot config agent-profile --write # write to ~/.switchbot/profiles/agent.json (mode 0600) +switchbot config agent-profile --write --force # overwrite if it already exists +switchbot config agent-profile --json # structured JSON envelope ``` ### `devices` — list, status, control @@ -551,6 +580,10 @@ skipped devices appear under `summary.skipped` with `skippedReason:'offline'`. ```bash switchbot scenes list # Columns: sceneId, sceneName switchbot scenes execute <sceneId> + +# One-shot summary: risk profile, execution hint, estimated commands +switchbot scenes explain <sceneId> +switchbot scenes explain <sceneId> --json ``` ### `webhook` — receive device events over HTTP @@ -743,15 +776,18 @@ switchbot plan validate plan.json # Preview — mutations skipped, GETs still execute switchbot --dry-run plan run plan.json -# Run — pass --yes to allow destructive steps -switchbot plan run plan.json --yes +# Save / review / approve / execute for destructive plans +switchbot plan save plan.json +switchbot plan review <planId> +switchbot plan approve <planId> +switchbot plan execute <planId> switchbot plan run plan.json --continue-on-error # Run with per-step TTY confirmation for destructive steps (human-in-the-loop) switchbot plan run plan.json --require-approval ``` -A plan file is a JSON document with `version`, `description`, and a `steps` array of `command`, `scene`, or `wait` steps. 
Steps execute sequentially; a failed step stops the run unless `--continue-on-error` is set. `--require-approval` prompts for each destructive step individually, letting you approve or reject without re-running the whole plan. See [`docs/agent-guide.md`](./docs/agent-guide.md) for the full schema and agent integration patterns. +A plan file is a JSON document with `version`, `description`, and a `steps` array of `command`, `scene`, or `wait` steps. Steps execute sequentially; a failed step stops the run unless `--continue-on-error` is set. `plan run` is the preview/direct path, but destructive steps are blocked by default and should go through `plan save` → `plan review` → `plan approve` → `plan execute`. See [`docs/agent-guide.md`](./docs/agent-guide.md) for the full schema and agent integration patterns. ### `devices watch` — poll status @@ -792,6 +828,56 @@ switchbot doctor --json Runs local checks (Node version, credentials, profiles, catalog, cache, quota, clock, MQTT, policy, MCP) and exits 1 if any check fails. `warn` results exit 0. The MQTT check reports `ok` when REST credentials are configured (auto-provisioned on first use). Use this to diagnose connectivity or config issues before running automation. +`--json` output includes `maturityScore` (0–100) and `maturityLabel` (`production-ready` / `mostly-ready` / `needs-work` / `not-ready`) to give an at-a-glance readiness rating: + +```bash +switchbot doctor --json | jq '{score: .data.maturityScore, label: .data.maturityLabel}' +``` + +Pass `--fix --yes` to auto-apply safe fixes (e.g. clear stale cache entries) without a prompt. 
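The score-to-label mapping for the doctor report can be thought of as simple thresholding. The cut-off values below are illustrative assumptions for the sketch, not the CLI's actual thresholds:

```typescript
// Illustrative mapping from a 0-100 maturityScore to the doctor labels.
// The threshold values here are assumed for demonstration only.
type MaturityLabel = 'production-ready' | 'mostly-ready' | 'needs-work' | 'not-ready';

function maturityLabel(score: number): MaturityLabel {
  if (score >= 90) return 'production-ready';
  if (score >= 70) return 'mostly-ready';
  if (score >= 40) return 'needs-work';
  return 'not-ready';
}
```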
+ +### `health` — runtime health report + +```bash +# One-shot report: quota, audit error rate, circuit-breaker state +switchbot health check +switchbot health check --prometheus # Prometheus text format +switchbot health check --json + +# Start a long-running HTTP server with /healthz and /metrics +switchbot health serve # default port 3100, bind 127.0.0.1 +switchbot health serve --port 8080 +switchbot health serve --json # print {"status":"listening",...} on start +``` + +`/healthz` returns a JSON health report (HTTP 200 when `ok`/`degraded`, 503 when circuit is open). +`/metrics` returns Prometheus text metrics (`switchbot_quota_used_total`, `switchbot_circuit_open`, …). +Port conflicts are reported immediately with a clear hint to choose a different port via `--port`. + +### `upgrade-check` — version check + +```bash +switchbot upgrade-check # human output; exits 1 when update available +switchbot upgrade-check --json # structured JSON output +switchbot upgrade-check --timeout 5000 # custom registry timeout (ms) +``` + +Queries the npm registry for the latest published version and compares it against the running version. +`--json` output: + +```json +{ + "current": "3.2.1", + "latest": "4.0.0", + "upToDate": false, + "updateAvailable": true, + "breakingChange": true, + "installCommand": "npm install -g @switchbot/openapi-cli@4.0.0" +} +``` + +`breakingChange` is `true` when the latest major version is higher than the current — useful for agents or CI that need to distinguish breaking upgrades from patch releases. 
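The `breakingChange` flag reduces to a major-version comparison. A minimal sketch, assuming plain `x.y.z` version strings (a real implementation would use a full semver parser):

```typescript
// Sketch of the breakingChange computation: true when the latest published
// major version exceeds the currently running major version.
function isBreakingUpgrade(current: string, latest: string): boolean {
  const majorOf = (v: string) => Number(v.split('.')[0]);
  return majorOf(latest) > majorOf(current);
}
```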
+ ### `quota` — API request counter ```bash @@ -876,6 +962,11 @@ switchbot policy validate --no-snippet # plain error list, no source # Report the schema version the file declares switchbot policy migrate + +# Snapshot and restore the active policy +switchbot policy backup # write timestamped backup alongside policy file +switchbot policy backup --out ./backups/ # custom destination directory +switchbot policy restore # overwrite active policy from backup (auto-backups first) ``` Path resolution order: positional `[path]` > `SWITCHBOT_POLICY_PATH` env var > default policy path. @@ -1005,7 +1096,7 @@ npm install npm run dev -- # Run from TypeScript sources via tsx npm run build # Compile to dist/ -npm test # Run the Vitest suite (1765 tests) +npm test # Run the Vitest suite (1856 tests) npm run test:watch # Watch mode npm run test:coverage # Coverage report (v8, HTML + text) ``` @@ -1045,6 +1136,8 @@ src/ │ ├── webhook-listener.ts # HTTP listener (bearer token, localhost-only) │ ├── pid-file.ts # Hot-reload via SIGHUP or sentinel file │ ├── audit-query.ts # Audit log filtering + aggregation +│ ├── conflict-analyzer.ts # Static conflict detection (opposing actions, +│ │ # high-freq MQTT, destructive cmds, quiet-hours gaps) │ ├── suggest.ts # Heuristic-based rule YAML generation │ └── types.ts # Shared rule/trigger/condition/action types ├── status-sync/ @@ -1060,8 +1153,11 @@ src/ │ ├── device-meta.ts # `devices meta` — local aliases / hide flags │ ├── install.ts # `switchbot install` / `uninstall` │ ├── policy.ts # `policy validate/new/migrate/diff/add-rule` -│ ├── rules.ts # `rules suggest/lint/list/run/reload/tail/replay` +│ ├── rules.ts # `rules suggest/lint/list/explain/run/reload/tail/replay/ +│ │ # conflicts/doctor/summary/last-fired/webhook-*` │ ├── scenes.ts +│ ├── health.ts # `health check/serve` — report + HTTP endpoints +│ ├── upgrade-check.ts # `upgrade-check` — npm registry version check │ ├── status-sync.ts # `status-sync run/start/stop/status` │ ├── 
webhook.ts │ ├── watch.ts # `devices watch <deviceId>` @@ -1082,7 +1178,7 @@ src/ ├── format.ts # renderRows / filterFields / output-format dispatch ├── audit.ts # JSONL audit log writer └── quota.ts # Local daily-quota counter -tests/ # Vitest suite (1765 tests, mocked axios, no network) +tests/ # Vitest suite (1856 tests, mocked axios, no network) ``` ### Release flow diff --git a/docs/agent-guide.md b/docs/agent-guide.md index 2f969df..03086d9 100644 --- a/docs/agent-guide.md +++ b/docs/agent-guide.md @@ -82,7 +82,7 @@ Add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) | --- | --- | --- | | `list_devices` | Enumerate physical devices + IR remotes | read | | `get_device_status` | Live status for one device | read | -| `send_command` | Dispatch a built-in or customize command | action (destructive needs `confirm: true`) | +| `send_command` | Dispatch a built-in or customize command | action (destructive needs `confirm: true` and still defaults to reviewed execution) | | `list_scenes` | Enumerate saved manual scenes | read | | `run_scene` | Execute a saved manual scene | action | | `search_catalog` | Look up device type by name/alias | read | @@ -102,7 +102,7 @@ Add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) | `rules_suggest` | Draft automation rule YAML from intent | read | | `policy_add_rule` | Inject rule YAML into `automation.rules[]` with diff | action | -The MCP server refuses destructive commands (Smart Lock `unlock`, Garage Door `open`, etc.) unless the tool call includes `confirm: true`. The allowed list is the `destructive: true` commands in the catalog — `switchbot schema export | jq '[.data.types[].commands[] | select(.destructive)]'` shows every one. +The MCP server refuses destructive commands (Smart Lock `unlock`, Garage Door `open`, etc.) 
unless the tool call includes `confirm: true`, and the default safety profile still blocks direct destructive execution in favor of the reviewed CLI flow (`plan save` → `plan review` → `plan approve` → `plan execute`). The allowed list is the `destructive: true` commands in the catalog — `switchbot schema export | jq '[.data.types[].commands[] | select(.destructive)]'` shows every one. ### `get_device_history` — zero-cost state lookup @@ -187,7 +190,10 @@ output before running. ```bash cat plan.json | switchbot plan validate - # exit 2 on schema error cat plan.json | switchbot --dry-run plan run - # preview — mutations skipped -cat plan.json | switchbot plan run - --yes # allow destructive steps +cat plan.json | switchbot plan save - # reviewed destructive path +switchbot plan review <planId> +switchbot plan approve <planId> +switchbot plan execute <planId> cat plan.json | switchbot --json plan run - # machine-readable outcome ``` @@ -195,8 +198,9 @@ cat plan.json | switchbot --json plan run - # machine-readable outcome - Steps execute sequentially. A failed step stops the run (exit 1) unless you pass `--continue-on-error`. - `wait` uses `setTimeout`; `ms` is capped at 600 000 so a malformed plan can't hang the agent. -- Destructive commands are **skipped** (not failed) without `--yes`, so an agent that omits the flag gets a clean "needs confirmation" summary. -- `--require-approval` enables per-step TTY confirmation for destructive steps — approve with `y`, reject with any other key. Non-TTY environments (CI, pipes) auto-reject. Mutually exclusive with `--json`. `--yes` takes precedence. +- Destructive commands are **skipped** (not failed) without `--yes`, so an agent that omits the flag gets a clean preview summary. +- `plan run --yes` is reserved for explicit dev profiles. The default production path for destructive work is `plan save` → `plan review` → `plan approve` → `plan execute`. +- `--require-approval` enables per-step TTY confirmation for destructive steps during execution. 
Non-TTY environments (CI, pipes) auto-reject. - Every successful/failed step lands in `--audit-log` (see [Observability](#observability)). --- @@ -292,7 +296,7 @@ Use `switchbot doctor` to confirm the CLI is healthy before orchestrating anythi ## Safety rails -1. **Destructive-command guard**: Smart Lock `unlock`, Garage Door `open`, and anything else tagged `destructive: true` in the catalog **refuses to run** without `--yes` (or `confirm: true` in MCP, or explicit dev intent). There is no bypass flag for autonomous agents beyond `--yes` — that's by design. +1. **Destructive-command guard**: Smart Lock `unlock`, Garage Door `open`, and anything else tagged `destructive: true` in the catalog **refuses to run** directly in the default profile. Use the reviewed plan workflow by default; only explicit dev profiles may pair direct execution with `--yes` / `confirm: true`. 2. **Dry-run**: Global `--dry-run` short-circuits every mutating HTTP request. GETs still execute. Command names are validated against the device catalog — unknown commands exit 2 when the device type has a known catalog entry, as do commands on read-only sensors. Use it for any "what would this do?" flow before letting the agent commit. 3. **Quota**: The SwitchBot API has a per-account daily quota. `--retry-on-429 ` and `--backoff ` handle throttling; `~/.switchbot/quota.json` tracks daily counts. 4. **Audit log**: `--audit-log [path]` appends every mutating command (including dry-runs) to JSONL for post-hoc review. @@ -354,7 +358,7 @@ switchbot rules suggest --intent "..." | switchbot policy add-rule --dry-run switchbot rules suggest --intent "..." 
| switchbot policy add-rule --enable # Step 4: Lint and reload -switchbot rules lint && switchbot rules reload +switchbot rules lint && switchbot daemon reload ``` MCP agents use `rules_suggest` + `policy_add_rule` tools for the same @@ -374,7 +378,7 @@ After the user confirms the rule fires correctly: ```bash # Edit policy.yaml: set dry_run: false # Then reload: -switchbot rules lint && switchbot rules reload +switchbot rules lint && switchbot daemon reload ``` Use `switchbot rules replay --since 24h --json` regularly to surface misfires. diff --git a/docs/json-contract.md b/docs/json-contract.md index aa36adf..bb60eac 100644 --- a/docs/json-contract.md +++ b/docs/json-contract.md @@ -38,8 +38,11 @@ stdout. "error": { "code": 2, "kind": "usage" | "guard" | "api" | "runtime", + "subKind": "ambiguous-name-match" | "device-not-found" | "device-offline" | ... , "message": "human-readable description", "hint": "optional remediation string", + "resolutionHint": "structured remediation for agent/script consumers", + "candidateMatches": [{ "deviceId": "...", "name": "..." }], "context": { "optional, command-specific": true } } } @@ -48,8 +51,20 @@ stdout. - Both success and error envelopes are written to **stdout** so a single `cli --json ... | jq` pipe can decode either shape (SYS-1 contract). - `code` is the process exit code. `2` = usage / guard, `1` = runtime / api. +- `subKind` classifies the specific error within its `kind`. 
Known values: + - `ambiguous-name-match` — `--name` matched more than one device; check `candidateMatches` + - `device-not-found` — device ID does not exist in the account + - `device-offline` — device is BLE-only or hub is offline + - `command-not-supported` — device does not support the requested command + - `auth-failed` — invalid token or secret + - `quota-exceeded` — daily API quota reached + - `device-internal-error` — API code 190 generic device error +- `resolutionHint` is a machine-stable remediation string agents can surface + directly (e.g. "Narrow with --type / --room, or use the deviceId directly"). +- `candidateMatches` is populated on `ambiguous-name-match` errors; each + entry has at least `name` and one of `deviceId` / `sceneId`. - Additional fields may appear on specific error classes - (`retryable`, `retryAfterMs`, `transient`, `subKind`, `errorClass`). + (`retryable`, `retryAfterMs`, `transient`, `errorClass`). --- diff --git a/package-lock.json b/package-lock.json index 7f8cbc4..9da76ea 100644 --- a/package-lock.json +++ b/package-lock.json @@ -1,12 +1,12 @@ { "name": "@switchbot/openapi-cli", - "version": "3.0.0", + "version": "3.2.1", "lockfileVersion": 3, "requires": true, "packages": { "": { "name": "@switchbot/openapi-cli", - "version": "3.0.0", + "version": "3.2.1", "license": "MIT", "dependencies": { "@modelcontextprotocol/sdk": "^1.29.0", diff --git a/package.json b/package.json index 2e6f202..a371506 100644 --- a/package.json +++ b/package.json @@ -1,6 +1,6 @@ { "name": "@switchbot/openapi-cli", - "version": "3.0.0", + "version": "3.2.1", "description": "SwitchBot smart home CLI — control devices, run scenes, stream real-time events, and integrate AI agents via MCP. 
Full API v1.1 coverage.", "keywords": [ "switchbot", diff --git a/src/api/client.ts b/src/api/client.ts index 3395276..afc629f 100644 --- a/src/api/client.ts +++ b/src/api/client.ts @@ -15,7 +15,7 @@ import { getBackoffStrategy, isQuotaDisabled, } from '../utils/flags.js'; -import { nextRetryDelayMs, sleep } from '../utils/retry.js'; +import { nextRetryDelayMs, sleep, CircuitBreaker, CircuitOpenError } from '../utils/retry.js'; import { recordRequest, checkDailyCap } from '../utils/quota.js'; import { readProfileMeta } from '../config.js'; import { getActiveProfile } from '../lib/request-context.js'; @@ -48,6 +48,19 @@ export class DryRunSignal extends Error { } } +/** + * Module-level circuit breaker for the SwitchBot API. Shared across all + * client instances in the process. Opens after 5 consecutive 5xx / network + * errors; resets to half-open after 60 s. + * Exported for health-check inspection. + */ +export const apiCircuitBreaker = new CircuitBreaker('switchbot-api', { + failureThreshold: 5, + resetTimeoutMs: 60_000, +}); + +export { CircuitOpenError }; + type RetryableConfig = InternalAxiosRequestConfig & { __retryCount?: number }; export function createClient(): AxiosInstance { @@ -69,6 +82,9 @@ export function createClient(): AxiosInstance { // Inject auth headers; optionally log the request; short-circuit on --dry-run. client.interceptors.request.use((config: InternalAxiosRequestConfig) => { + // Circuit breaker check — fail fast when the API is consistently down. + apiCircuitBreaker.checkAndAllow(); + // Pre-flight cap check: refuse the call before it touches the network. if (dailyCap) { const check = checkDailyCap(dailyCap); @@ -130,6 +146,8 @@ export function createClient(): AxiosInstance { `API error code: ${data.statusCode}`; throw new ApiError(msg, data.statusCode); } + // Successful HTTP response — record for circuit breaker. 
+ + apiCircuitBreaker.recordSuccess(); return response; }, (error) => { @@ -156,6 +174,8 @@ export function createClient(): AxiosInstance { return sleep(delay).then(() => client.request(config)); } } + // Network-level failure — record for circuit breaker. + apiCircuitBreaker.recordFailure(); throw new ApiError( `Request timed out after ${getTimeout()}ms (override with --timeout <ms>)`, 0, @@ -215,6 +235,12 @@ export function createClient(): AxiosInstance { } } + // Record 5xx and network errors for circuit breaker. 4xx errors are + // expected business responses — don't count toward circuit threshold. + if (status === undefined || status >= 500) { + apiCircuitBreaker.recordFailure(); + } + // P8: quota already recorded in the request interceptor before // dispatch — no extra bookkeeping needed here on the error path. // Timeouts, DNS failures, 5xx, and exhausted retries all counted diff --git a/src/commands/batch.ts b/src/commands/batch.ts index 446b592..dfe433d 100644 --- a/src/commands/batch.ts +++ b/src/commands/batch.ts @@ -13,6 +13,7 @@ import { parseFilter, applyFilter, FilterSyntaxError } from '../utils/filter.js' import { isDryRun } from '../utils/flags.js'; import { DryRunSignal } from '../api/client.js'; import { getCachedTypeMap, getCachedDevice, loadStatusCache } from '../devices/cache.js'; +import { allowsDirectDestructiveExecution, destructiveExecutionHint } from '../lib/destructive-mode.js'; interface BatchStepTiming { startedAt: string; @@ -155,7 +156,7 @@ export function registerBatchCommand(devices: Command): void { .option('--stagger <ms>', 'Fixed delay between task starts in ms (default 0 = random 20-60ms jitter)', intArg('--stagger', { min: 0 }), '0') .option('--plan', '[DEPRECATED, use --emit-plan] With --dry-run: emit a plan JSON document instead of executing anything') .option('--emit-plan', 'With --dry-run: emit a plan JSON document instead of executing anything') - .option('--yes', 'Allow destructive commands (Smart Lock unlock, garage open, ...)') 
+ .option('--yes', 'Allow destructive commands only from an explicit dev profile') .option('--type <type>', '"command" (default) or "customize" for user-defined IR buttons', enumArg('--type', COMMAND_TYPES), 'command') .option('--stdin', 'Read deviceIds from stdin, one per line (same as trailing "-")') .option('--idempotency-key-prefix <prefix>', 'Client-supplied prefix for idempotency keys (key per device: <prefix>-<deviceId>). process-local 60s window; cache is per Node process (MCP session, batch run, plan run). Independent CLI invocations do not share cache.', stringArg('--idempotency-key-prefix')) @@ -191,7 +192,8 @@ Planning: Safety: Destructive commands (Smart Lock unlock, Garage Door Opener turnOn/turnOff, - Keypad createKey/deleteKey) are blocked by default. Pass --yes to override. + Keypad createKey/deleteKey) are blocked by default. Use the reviewed plan + flow instead of direct execution. --dry-run intercepts every POST and reports the intended calls without hitting the API. @@ -199,7 +201,7 @@ Examples: $ switchbot devices batch turnOff --filter 'type~=Light,family=home' $ switchbot devices batch turnOn --ids ID1,ID2,ID3 $ switchbot devices list --format=id --filter 'type=Bot' | switchbot devices batch toggle - - $ switchbot devices batch unlock --filter 'type=Smart Lock' --yes + $ switchbot devices batch unlock --filter 'type=Smart Lock' --dry-run --emit-plan `) .action( async ( @@ -315,11 +317,26 @@ code: 2, kind: 'guard', message: `Refusing to run destructive command "${cmd}" on ${blockedForDestructive.length} device(s) without --yes: ${deviceIds.join(', ')}`, - hint: 'Re-issue the call with --yes to proceed.', + hint: 'Re-issue the call with --yes only from an explicit dev profile, or use the reviewed plan flow.', context: { command: cmd, deviceIds }, }); } + if (blockedForDestructive.length > 0 && options.yes && !allowsDirectDestructiveExecution()) { + exitWithError({ + code: 2, + kind: 'guard', + message: `Direct destructive execution is disabled for batch command 
"${cmd}".`, + hint: destructiveExecutionHint(), + context: { + command: cmd, + blockedCount: blockedForDestructive.length, + deviceIds: blockedForDestructive.map((item) => item.deviceId), + requiredWorkflow: 'plan-approval', + }, + }); + } + // parameter may be a JSON object string; mirror the single-command action. let parsedParam: unknown = parameter ?? 'default'; if (parameter) { diff --git a/src/commands/capabilities.ts b/src/commands/capabilities.ts index 728ee28..9f29c1b 100644 --- a/src/commands/capabilities.ts +++ b/src/commands/capabilities.ts @@ -34,6 +34,9 @@ function countStatusQueries(entries: DeviceCatalogEntry[]): number { export type AgentSafetyTier = 'read' | 'action' | 'destructive'; export type Verifiability = 'local' | 'deviceConfirmed' | 'deviceDependent' | 'none'; +export type RiskLevel = 'low' | 'medium' | 'high'; +export type IdempotencyHint = 'safe' | 'caution' | 'non-idempotent'; +export type RecommendedMode = 'direct' | 'plan' | 'review-before-execute'; const AGENT_GUIDE = { safetyTiers: { @@ -41,6 +44,16 @@ const AGENT_GUIDE = { action: 'Mutates device or cloud state but is reversible and routine (turnOn, setColor).', destructive: 'Hard to reverse / physical-world side effects (unlock, garage open, delete key). Requires explicit user confirmation.', }, + riskLevels: { + low: 'Read-only or non-mutating. Safe to call autonomously.', + medium: 'Mutates state (action tier). Prefer `plan` workflow. Reversible.', + high: 'Destructive / hard-to-reverse. Must go through review-before-execute. Direct --yes execution is reserved for explicit dev profiles.', + }, + recommendedModes: { + direct: 'May be called directly without a plan step.', + plan: 'Prefer batching in a plan for traceability and dry-run support.', + 'review-before-execute': 'Must be reviewed/approved before execution. 
Use `plan save`, `plan review`, `plan approve`, then `plan execute`.', + }, verifiability: { local: 'Result is fully verifiable from the CLI return value itself.', deviceConfirmed: 'Device returns an ack with an observable state field.', @@ -61,55 +74,146 @@ interface CommandMeta { typicalLatencyMs: number; } +interface RiskMeta { + riskLevel: RiskLevel; + requiresConfirmation: boolean; + supportsDryRun: boolean; + idempotencyHint: IdempotencyHint; + recommendedMode: RecommendedMode; +} + +function deriveRiskMeta(meta: CommandMeta): RiskMeta { + const riskLevel: RiskLevel = meta.agentSafetyTier === 'destructive' ? 'high' + : meta.agentSafetyTier === 'action' ? 'medium' : 'low'; + return { + riskLevel, + requiresConfirmation: meta.agentSafetyTier === 'destructive', + supportsDryRun: meta.mutating, + idempotencyHint: meta.idempotencySupported ? 'safe' : meta.mutating ? 'non-idempotent' : 'safe', + recommendedMode: meta.agentSafetyTier === 'destructive' ? 'review-before-execute' + : meta.agentSafetyTier === 'action' ? 
'plan' : 'direct', + }; +} + +function meta( + mutating: boolean, + consumesQuota: boolean, + idempotencySupported: boolean, + agentSafetyTier: AgentSafetyTier, + verifiability: Verifiability, + typicalLatencyMs: number, +): CommandMeta { + return { mutating, consumesQuota, idempotencySupported, agentSafetyTier, verifiability, typicalLatencyMs }; +} + +const READ_LOCAL = meta(false, false, false, 'read', 'local', 20); +const READ_REMOTE = meta(false, true, false, 'read', 'local', 500); +const ACTION_LOCAL = meta(true, false, false, 'action', 'local', 20); +const ACTION_REMOTE = meta(true, true, false, 'action', 'deviceDependent', 900); +const ACTION_REMOTE_IDEMPOTENT = meta(true, true, true, 'action', 'deviceDependent', 900); +const DESTRUCTIVE_LOCAL = meta(true, false, false, 'destructive', 'local', 20); +const DESTRUCTIVE_REMOTE = meta(true, true, false, 'destructive', 'deviceDependent', 1200); +const READ_NONE = meta(false, false, false, 'read', 'none', 50); + const COMMAND_META: Record = { - // devices: reads - 'devices list': { mutating: false, consumesQuota: true, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 600 }, - 'devices status': { mutating: false, consumesQuota: true, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 500 }, - 'devices describe': { mutating: false, consumesQuota: true, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 600 }, - 'devices types': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 20 }, - 'devices commands': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 20 }, - 'devices watch': { mutating: false, consumesQuota: true, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 
500 }, - // devices meta (local metadata — no quota, no API call) - 'devices meta set': { mutating: true, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'action', verifiability: 'local', typicalLatencyMs: 5 }, - 'devices meta get': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 5 }, - 'devices meta list': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 5 }, - 'devices meta clear': { mutating: true, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'action', verifiability: 'local', typicalLatencyMs: 5 }, - // devices: actions - 'devices command': { mutating: true, consumesQuota: true, idempotencySupported: true, agentSafetyTier: 'action', verifiability: 'deviceDependent', typicalLatencyMs: 800 }, - 'devices batch': { mutating: true, consumesQuota: true, idempotencySupported: true, agentSafetyTier: 'action', verifiability: 'deviceDependent', typicalLatencyMs: 1200 }, - // scenes - 'scenes list': { mutating: false, consumesQuota: true, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 500 }, - 'scenes execute': { mutating: true, consumesQuota: true, idempotencySupported: false, agentSafetyTier: 'action', verifiability: 'deviceDependent', typicalLatencyMs: 1500 }, - 'scenes describe': { mutating: false, consumesQuota: true, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 500 }, - // webhook - 'webhook setup': { mutating: true, consumesQuota: true, idempotencySupported: false, agentSafetyTier: 'action', verifiability: 'local', typicalLatencyMs: 500 }, - 'webhook query': { mutating: false, consumesQuota: true, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 500 }, - 'webhook delete': { mutating: true, consumesQuota: 
true, idempotencySupported: false, agentSafetyTier: 'destructive', verifiability: 'local', typicalLatencyMs: 500 }, - // quota - 'quota status': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 10 }, - 'quota show': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 10 }, - 'quota reset': { mutating: true, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'action', verifiability: 'local', typicalLatencyMs: 10 }, - // doctor / schema / capabilities / catalog / config / cache / events / history / plan - 'doctor': { mutating: false, consumesQuota: true, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 900 }, - 'schema export': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 20 }, - 'capabilities': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 15 }, - 'catalog': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 15 }, - 'config set-token': { mutating: true, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'destructive', verifiability: 'local', typicalLatencyMs: 5 }, - 'config show': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 5 }, - 'config list-profiles': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 5 }, - 'cache status': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 5 }, - 'cache 
clear': { mutating: true, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'action', verifiability: 'local', typicalLatencyMs: 5 }, - 'events mqtt-tail': { mutating: false, consumesQuota: true, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 500 }, - 'history show': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 20 }, - 'history replay': { mutating: true, consumesQuota: true, idempotencySupported: true, agentSafetyTier: 'action', verifiability: 'deviceDependent', typicalLatencyMs: 1000 }, - 'history range': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 50 }, - 'history stats': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 20 }, - 'history aggregate': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 80 }, - 'plan run': { mutating: true, consumesQuota: true, idempotencySupported: true, agentSafetyTier: 'action', verifiability: 'deviceDependent', typicalLatencyMs: 2000 }, - 'plan validate': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 10 }, - 'plan schema': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 10 }, - 'completion': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 5 }, - 'mcp serve': { mutating: false, consumesQuota: false, idempotencySupported: false, agentSafetyTier: 'read', verifiability: 'local', typicalLatencyMs: 10 }, + 'agent-bootstrap': READ_LOCAL, + 'auth keychain 
describe': READ_LOCAL, + 'auth keychain get': READ_LOCAL, + 'auth keychain set': DESTRUCTIVE_LOCAL, + 'auth keychain delete': DESTRUCTIVE_LOCAL, + 'auth keychain migrate': DESTRUCTIVE_LOCAL, + 'cache show': READ_LOCAL, + 'cache clear': ACTION_LOCAL, + 'capabilities': READ_LOCAL, + 'catalog path': READ_LOCAL, + 'catalog show': READ_LOCAL, + 'catalog search': READ_LOCAL, + 'catalog diff': READ_LOCAL, + 'catalog refresh': ACTION_LOCAL, + 'completion': READ_LOCAL, + 'config set-token': DESTRUCTIVE_LOCAL, + 'config show': READ_LOCAL, + 'config list-profiles': READ_LOCAL, + 'config agent-profile': ACTION_LOCAL, + 'daemon start': ACTION_LOCAL, + 'daemon stop': ACTION_LOCAL, + 'daemon status': READ_LOCAL, + 'daemon reload': ACTION_LOCAL, + 'devices list': READ_REMOTE, + 'devices status': READ_REMOTE, + 'devices command': ACTION_REMOTE_IDEMPOTENT, + 'devices types': READ_LOCAL, + 'devices commands': READ_LOCAL, + 'devices describe': READ_REMOTE, + 'devices batch': ACTION_REMOTE_IDEMPOTENT, + 'devices watch': READ_REMOTE, + 'devices explain': READ_LOCAL, + 'devices expand': READ_LOCAL, + 'devices meta set': ACTION_LOCAL, + 'devices meta get': READ_LOCAL, + 'devices meta list': READ_LOCAL, + 'devices meta clear': ACTION_LOCAL, + 'doctor': READ_LOCAL, + 'events tail': READ_NONE, + 'events mqtt-tail': READ_REMOTE, + 'health check': READ_LOCAL, + 'health serve': READ_LOCAL, + 'history show': READ_LOCAL, + 'history replay': ACTION_REMOTE_IDEMPOTENT, + 'history range': READ_LOCAL, + 'history stats': READ_LOCAL, + 'history verify': READ_LOCAL, + 'history aggregate': READ_LOCAL, + 'install': ACTION_LOCAL, + 'mcp serve': READ_LOCAL, + 'plan schema': READ_LOCAL, + 'plan validate': READ_LOCAL, + 'plan suggest': READ_LOCAL, + 'plan run': ACTION_REMOTE_IDEMPOTENT, + 'plan save': ACTION_LOCAL, + 'plan list': READ_LOCAL, + 'plan review': READ_LOCAL, + 'plan approve': DESTRUCTIVE_LOCAL, + 'plan execute': DESTRUCTIVE_REMOTE, + 'policy validate': READ_LOCAL, + 'policy new': ACTION_LOCAL, + 
'policy migrate': ACTION_LOCAL, + 'policy diff': READ_LOCAL, + 'policy add-rule': ACTION_LOCAL, + 'policy backup': READ_LOCAL, + 'policy restore': DESTRUCTIVE_LOCAL, + 'quota status': READ_LOCAL, + 'quota reset': ACTION_LOCAL, + 'rules suggest': READ_LOCAL, + 'rules lint': READ_LOCAL, + 'rules list': READ_LOCAL, + 'rules run': ACTION_REMOTE, + 'rules reload': ACTION_LOCAL, + 'rules tail': READ_LOCAL, + 'rules replay': READ_LOCAL, + 'rules webhook-rotate-token': DESTRUCTIVE_LOCAL, + 'rules webhook-show-token': DESTRUCTIVE_LOCAL, + 'rules conflicts': READ_LOCAL, + 'rules doctor': READ_LOCAL, + 'rules summary': READ_LOCAL, + 'rules last-fired': READ_LOCAL, + 'schema export': READ_LOCAL, + 'scenes list': READ_REMOTE, + 'scenes execute': ACTION_REMOTE, + 'scenes describe': READ_REMOTE, + 'scenes validate': READ_REMOTE, + 'scenes simulate': READ_REMOTE, + 'scenes explain': READ_REMOTE, + 'status-sync run': ACTION_REMOTE, + 'status-sync start': ACTION_LOCAL, + 'status-sync stop': ACTION_LOCAL, + 'status-sync status': READ_LOCAL, + 'uninstall': ACTION_LOCAL, + 'upgrade-check': READ_REMOTE, + 'webhook setup': ACTION_REMOTE, + 'webhook query': READ_REMOTE, + 'webhook update': ACTION_REMOTE, + 'webhook delete': DESTRUCTIVE_REMOTE, }; function metaFor(command: string): CommandMeta | null { @@ -140,7 +244,7 @@ const IDEMPOTENCY_CONTRACT = { mcp: 'MCP send_command accepts the same idempotencyKey field with identical semantics.', }; -interface CompactLeaf { +export interface CompactLeaf { name: string; mutating: boolean; consumesQuota: boolean; @@ -148,6 +252,26 @@ interface CompactLeaf { agentSafetyTier: AgentSafetyTier; verifiability: Verifiability; typicalLatencyMs: number; + riskLevel: RiskLevel; + requiresConfirmation: boolean; + supportsDryRun: boolean; + idempotencyHint: IdempotencyHint; + recommendedMode: RecommendedMode; +} + +function enumerateLeafNames(program: Command, prefix = ''): string[] { + const out: string[] = []; + for (const cmd of program.commands) { + const 
full = prefix ? `${prefix} ${cmd.name()}` : cmd.name(); + if (cmd.commands.length === 0) out.push(full); + else out.push(...enumerateLeafNames(cmd, full)); + } + return out; +} + +function validateCommandMetaCoverage(program: Command): string[] { + const leaves = enumerateLeafNames(program); + return leaves.filter((leaf) => !COMMAND_META[leaf]).sort().map((leaf) => `missing:${leaf}`); } function enumerateLeaves(program: Command, prefix = ''): CompactLeaf[] { @@ -156,20 +280,8 @@ function enumerateLeaves(program: Command, prefix = ''): CompactLeaf[] { const full = prefix ? `${prefix} ${cmd.name()}` : cmd.name(); if (cmd.commands.length === 0) { const meta = metaFor(full); - if (meta) { - out.push({ name: full, ...meta }); - } else { - // Unknown leaf → default to read-safe with a warning flag so agents notice. - out.push({ - name: full, - mutating: false, - consumesQuota: false, - idempotencySupported: false, - agentSafetyTier: 'read', - verifiability: 'local', - typicalLatencyMs: 50, - }); - } + if (!meta) throw new Error(`capabilities metadata missing for leaf command "${full}"`); + out.push({ name: full, ...meta, ...deriveRiskMeta(meta) }); } else { out.push(...enumerateLeaves(cmd, full)); } @@ -196,6 +308,10 @@ export function registerCapabilitiesCommand(program: Command): void { .option('--surface ', 'Restrict surfaces block to one of: cli, mcp, plan, mqtt, all (default: all)', enumArg('--surface', SURFACES)) .option('--project ', 'Project top-level fields (e.g. 
--project identity,commands,agentGuide)', stringArg('--project')) .action((opts: { minimal?: boolean; compact?: boolean; used?: boolean; surface?: string; project?: string }) => { + const coverageIssues = validateCommandMetaCoverage(program); + if (coverageIssues.length > 0) { + throw new Error(`capabilities metadata coverage error: ${coverageIssues.join(', ')}`); + } const compact = Boolean(opts.minimal || opts.compact); const catalog = getEffectiveCatalog(); const leaves = enumerateLeaves(program); @@ -289,8 +405,10 @@ export function registerCapabilitiesCommand(program: Command): void { // Flat command → meta map keyed by full command path. Published in // addition to the tree (where every leaf `subcommands[*]` already // carries the same fields via spread) so agents can do O(1) lookup - // without walking the tree. - commandMeta: COMMAND_META, + // without walking the tree. Includes derived risk metadata fields. + commandMeta: Object.fromEntries( + Object.entries(COMMAND_META).map(([k, v]) => [k, { ...v, ...deriveRiskMeta(v) }]) + ), ...(globalFlags ? { globalFlags } : {}), catalog: { typeCount: catalog.length, diff --git a/src/commands/config.ts b/src/commands/config.ts index 97eab14..08d2884 100644 --- a/src/commands/config.ts +++ b/src/commands/config.ts @@ -1,5 +1,7 @@ import { Command } from 'commander'; import fs from 'node:fs'; +import os from 'node:os'; +import path from 'node:path'; import readline from 'node:readline'; import { execFileSync } from 'node:child_process'; import { stringArg } from '../utils/arg-parsers.js'; @@ -269,6 +271,26 @@ Files are written with mode 0600. 
 Profiles live under ~/.switchbot/profiles/
+
+Then use the profile:
+  $ switchbot --profile agent devices list
+`)
+    .action((opts: { write?: boolean; force?: boolean }) => {
+      const template = {
+        label: 'agent',
+        description: 'AI agent profile — conservative limits + audit enabled',
+        limits: {
+          dailyCap: 100,
+        },
+        defaults: {
+          auditLog: true,
+        },
+      };
+
+      if (opts.write) {
+        const dir = path.join(os.homedir(), '.switchbot', 'profiles');
+        const dest = path.join(dir, 'agent.json');
+        if (!opts.force && fs.existsSync(dest)) {
+          exitWithError({ code: 2, kind: 'usage', message: `Agent profile already exists: ${dest}. Use --force to overwrite.` });
+        }
+        if (!fs.existsSync(dir)) {
+          fs.mkdirSync(dir, { recursive: true, mode: 0o700 });
+        }
+        fs.writeFileSync(dest, JSON.stringify(template, null, 2), { mode: 0o600 });
+        if (isJsonMode()) {
+          printJson({ ok: true, path: dest, template });
+        } else {
+          console.log(`Agent profile written: ${dest}`);
+          console.log(`Next: switchbot --profile agent config set-token `);
+        }
+      } else {
+        if (isJsonMode()) {
+          printJson(template);
+        } else {
+          console.log(JSON.stringify(template, null, 2));
+          console.log('');
+          console.log('Write with: switchbot config agent-profile --write');
+        }
+      }
+    });
 }
diff --git a/src/commands/daemon.ts b/src/commands/daemon.ts
new file mode 100644
index 0000000..acde787
--- /dev/null
+++ b/src/commands/daemon.ts
@@ -0,0 +1,397 @@
+import { Command } from 'commander';
+import { spawn } from 'node:child_process';
+import fs from 'node:fs';
+import path from 'node:path';
+import { fileURLToPath } from 'node:url';
+import { isJsonMode, printJson, exitWithError } from '../utils/output.js';
+import { readPidFile, writePidFile, isPidAlive, getDefaultPidFilePaths, writeReloadSentinel, sighupSupported } from '../rules/pid-file.js';
+import { stringArg } from '../utils/arg-parsers.js';
+import chalk from 'chalk';
+import {
+  DAEMON_LOG_FILE,
+  DAEMON_PID_FILE,
+  DAEMON_STATE_FILE,
+  HEALTHZ_PID_FILE,
+  readDaemonState,
+  writeDaemonState,
+  type DaemonState,
+} from '../lib/daemon-state.js';
+
+interface DaemonRuntimeStatus {
+  status: 'running' | 'stopped';
+  pid: number | null;
+  pidFile: string;
+  logFile: string;
+  stateFile: string;
+  health: {
+    configured: boolean;
+    pid: number | null;
+    pidFile: string;
+    port: number | null;
+    running: boolean;
+    url: string | null;
+  };
+  lastReloadAt: string | null;
+  lastReloadStatus: 'ok' | 'failed' | null;
+  lastReloadMessage: string | null;
+  startedAt: string | null;
+  stoppedAt: string | null;
+}
+
+function readDaemonPid(): number | null {
+  return readPidFile(DAEMON_PID_FILE);
+}
+
+function readHealthPid(): number | null {
+  return readPidFile(HEALTHZ_PID_FILE);
+}
+
+function killIfAlive(pid: number | null, signal: NodeJS.Signals = 'SIGTERM'): void {
+  if (!pid) return;
+  if (!isPidAlive(pid)) return;
+  process.kill(pid, signal);
+}
+
+function buildHealthSummary(state: DaemonState | null) {
+  const healthPid = readHealthPid();
+  const healthRunning = healthPid !== null && isPidAlive(healthPid);
+  const port = state?.healthzPort ?? null;
+  return {
+    configured: port !== null,
+    pid: healthRunning ? healthPid : null,
+    pidFile: HEALTHZ_PID_FILE,
+    port,
+    running: healthRunning,
+    url: port !== null ? `http://127.0.0.1:${port}/healthz` : null,
+  };
+}
+
+function getDaemonStatus(): DaemonRuntimeStatus {
+  const state = readDaemonState();
+  const pid = readDaemonPid();
+  const running = pid !== null && isPidAlive(pid);
+  return {
+    status: running ? 'running' : 'stopped',
+    pid: running ? pid : null,
+    pidFile: DAEMON_PID_FILE,
+    logFile: DAEMON_LOG_FILE,
+    stateFile: DAEMON_STATE_FILE,
+    health: buildHealthSummary(state),
+    lastReloadAt: state?.lastReloadAt ?? null,
+    lastReloadStatus: state?.lastReloadStatus ?? null,
+    lastReloadMessage: state?.lastReloadMessage ?? null,
+    startedAt: state?.startedAt ?? null,
+    stoppedAt: state?.stoppedAt ?? null,
+  };
+}
+
+function persistState(partial: Partial<DaemonState>): DaemonState {
+  const previous = readDaemonState();
+  const next: DaemonState = {
+    ...(previous ?? {}),
+    status: partial.status ?? previous?.status ?? 'stopped',
+    pid: partial.pid ?? previous?.pid ?? null,
+    ...partial,
+    logFile: DAEMON_LOG_FILE,
+    pidFile: DAEMON_PID_FILE,
+    stateFile: DAEMON_STATE_FILE,
+  };
+  writeDaemonState(next);
+  return next;
+}
+
+function renderHumanStatus(status: DaemonRuntimeStatus): void {
+  if (status.status === 'running' && status.pid !== null) {
+    console.log(`${chalk.green('●')} Daemon is running (pid ${status.pid}).`);
+  } else {
+    console.log(`${chalk.grey('○')} Daemon is not running.`);
+  }
+  if (status.startedAt) console.log(`  Started: ${status.startedAt}`);
+  if (status.stoppedAt) console.log(`  Stopped: ${status.stoppedAt}`);
+  console.log(`  Log:   ${status.logFile}`);
+  console.log(`  PID:   ${status.pidFile}`);
+  console.log(`  State: ${status.stateFile}`);
+  if (status.lastReloadAt) {
+    console.log(`  Reload: ${status.lastReloadAt} (${status.lastReloadStatus ?? 'unknown'})`);
+    if (status.lastReloadMessage) console.log(`    ${status.lastReloadMessage}`);
+  }
+  if (status.health.configured) {
+    if (status.health.running) {
+      console.log(`${chalk.green('✓')} Health server running (pid ${status.health.pid})`);
+    } else {
+      console.log(`${chalk.yellow('!')} Health server configured but not running`);
+    }
+    if (status.health.url) console.log(`  Health: ${status.health.url}`);
+  }
+}
+
+export function registerDaemonCommand(program: Command): void {
+  const daemon = program
+    .command('daemon')
+    .description('Manage the background SwitchBot rules daemon and its health endpoint.')
+    .addHelpText('after', `
+The daemon runs \`switchbot rules run\` as a detached background process,
+tracks runtime metadata in ~/.switchbot/daemon.state.json, and can optionally
+co-launch a health endpoint via \`switchbot health serve\`.
+
+Subcommands:
+  start [--policy ]   Start the daemon (no-op if already running).
+  stop                Stop the daemon and any co-launched health server.
+  status              Report the daemon state, log path, and health summary.
+  reload              Trigger a hot reload for the running rules engine.
+
+The daemon reads the same policy file as \`switchbot rules run\`.
+`);
+
+  daemon
+    .command('start')
+    .description('Start the rules-engine daemon in the background.')
+    .option('--policy ', 'Policy file path (default: auto-detected)', stringArg('--policy'))
+    .option('--force', 'Restart even if the daemon appears to be running.')
+    .option('--healthz-port ', 'Also start a health HTTP server on this port (default: disabled).')
+    .action((opts: { policy?: string; force?: boolean; healthzPort?: string }) => {
+      const current = getDaemonStatus();
+      if (current.status === 'running' && !opts.force) {
+        if (isJsonMode()) {
+          printJson({ result: 'already-running', ...current });
+        } else {
+          console.log(`Daemon is already running (pid ${current.pid}). Use --force to restart.`);
+        }
+        return;
+      }
+
+      if (opts.force) {
+        try {
+          killIfAlive(current.pid);
+          killIfAlive(current.health.pid);
+        } catch {
+          // best effort
+        }
+      }
+
+      const thisFile = fileURLToPath(import.meta.url);
+      const cliEntry = path.resolve(path.dirname(thisFile), 'index.js');
+      const args = ['rules', 'run'];
+      if (opts.policy) args.push(opts.policy);
+
+      fs.mkdirSync(path.dirname(DAEMON_PID_FILE), { recursive: true, mode: 0o700 });
+      persistState({
+        status: 'starting',
+        pid: null,
+        startedAt: new Date().toISOString(),
+        stoppedAt: undefined,
+        failedAt: undefined,
+        failureReason: undefined,
+        healthzPort: opts.healthzPort ? Number.parseInt(opts.healthzPort, 10) : null,
+        healthzPid: null,
+        healthzPidFile: HEALTHZ_PID_FILE,
+      });
+
+      const logFd = fs.openSync(DAEMON_LOG_FILE, 'a');
+      const child = spawn(process.execPath, [cliEntry, ...args], {
+        detached: true,
+        stdio: ['ignore', logFd, logFd],
+        env: { ...process.env },
+      });
+      child.unref();
+      fs.closeSync(logFd);
+
+      const newPid = child.pid;
+      if (!newPid) {
+        persistState({
+          status: 'failed',
+          pid: null,
+          failedAt: new Date().toISOString(),
+          failureReason: 'Failed to spawn daemon process.',
+        });
+        exitWithError({ code: 1, kind: 'runtime', message: 'Failed to spawn daemon process.' });
+      }
+      writePidFile(DAEMON_PID_FILE, newPid);
+
+      let healthzPid: number | null = null;
+      let healthzPort: number | null = opts.healthzPort ? Number.parseInt(opts.healthzPort, 10) : null;
+      if (healthzPort !== null) {
+        const healthArgs = ['health', 'serve', '--port', String(healthzPort)];
+        const healthLogFd = fs.openSync(DAEMON_LOG_FILE, 'a');
+        const healthChild = spawn(process.execPath, [cliEntry, ...healthArgs], {
+          detached: true,
+          stdio: ['ignore', healthLogFd, healthLogFd],
+          env: { ...process.env },
+        });
+        healthChild.unref();
+        fs.closeSync(healthLogFd);
+        if (healthChild.pid) {
+          healthzPid = healthChild.pid;
+          writePidFile(HEALTHZ_PID_FILE, healthzPid);
+        }
+      }
+
+      persistState({
+        status: 'running',
+        pid: newPid,
+        startedAt: new Date().toISOString(),
+        stoppedAt: undefined,
+        failedAt: undefined,
+        failureReason: undefined,
+        healthzPort,
+        healthzPid,
+        healthzPidFile: HEALTHZ_PID_FILE,
+      });
+
+      const status = getDaemonStatus();
+      if (isJsonMode()) {
+        printJson({ result: 'started', ...status });
+      } else {
+        console.log(`${chalk.green('✓')} Daemon started (pid ${newPid})`);
+        console.log(`  Log:    ${DAEMON_LOG_FILE}`);
+        console.log(`  PID:    ${DAEMON_PID_FILE}`);
+        console.log(`  State:  ${DAEMON_STATE_FILE}`);
+        console.log(`  Reload: switchbot daemon reload`);
+        if (status.health.running && status.health.url) {
+          console.log(`${chalk.green('✓')} Health server started (pid ${status.health.pid})`);
+          console.log(`  Health: ${status.health.url}`);
+        }
+      }
+    });
+
+  daemon
+    .command('stop')
+    .description('Stop the background daemon by sending SIGTERM.')
+    .action(() => {
+      const status = getDaemonStatus();
+      if (status.status !== 'running' || status.pid === null) {
+        persistState({ status: 'stopped', pid: null, stoppedAt: new Date().toISOString() });
+        if (isJsonMode()) {
+          printJson({ result: 'not-running', ...getDaemonStatus() });
+        } else {
+          console.log(`No running daemon found (pid file: ${DAEMON_PID_FILE}).`);
+        }
+        return;
+      }
+      try {
+        killIfAlive(status.pid);
+        killIfAlive(status.health.pid);
+      } catch (err) {
+        persistState({
+          status: 'failed',
+          pid: status.pid,
+          failedAt: new Date().toISOString(),
+          failureReason: err instanceof Error ? err.message : String(err),
+        });
+        exitWithError({
+          code: 1,
+          kind: 'runtime',
+          message: `Failed to stop daemon (pid ${status.pid}): ${err instanceof Error ? err.message : String(err)}`,
+        });
+      }
+
+      try { fs.unlinkSync(DAEMON_PID_FILE); } catch { /* best effort */ }
+      try { fs.unlinkSync(HEALTHZ_PID_FILE); } catch { /* best effort */ }
+      persistState({
+        status: 'stopped',
+        pid: null,
+        healthzPid: null,
+        stoppedAt: new Date().toISOString(),
+      });
+
+      if (isJsonMode()) {
+        printJson({ result: 'stopped', ...getDaemonStatus() });
+      } else {
+        console.log(`${chalk.green('✓')} Daemon stopped (pid ${status.pid}).`);
+      }
+    });
+
+  daemon
+    .command('status')
+    .description('Report whether the daemon is currently running.')
+    .action(() => {
+      const status = getDaemonStatus();
+      if (isJsonMode()) {
+        printJson(status);
+        return;
+      }
+      renderHumanStatus(status);
+    });
+
+  daemon
+    .command('reload')
+    .description('Trigger a hot reload on the running rules-engine daemon.')
+    .action(() => {
+      const daemonStatus = getDaemonStatus();
+      if (daemonStatus.status !== 'running' || daemonStatus.pid === null) {
+        persistState({
+          status: 'failed',
+          failedAt: new Date().toISOString(),
+          failureReason: 'No running daemon to reload.',
+          lastReloadAt: new Date().toISOString(),
+          lastReloadStatus: 'failed',
+          lastReloadMessage: 'No running daemon to reload.',
+        });
+        exitWithError({
+          code: 2,
+          kind: 'usage',
+          message: `No running daemon found (pid file: ${DAEMON_PID_FILE}).`,
+        });
+      }
+
+      const pidPaths = getDefaultPidFilePaths();
+      const rulesPid = readPidFile(pidPaths.pidFile);
+      if (rulesPid === null || !isPidAlive(rulesPid)) {
+        persistState({
+          status: 'failed',
+          failedAt: new Date().toISOString(),
+          failureReason: 'Rules engine PID is missing or stale.',
+          lastReloadAt: new Date().toISOString(),
+          lastReloadStatus: 'failed',
+          lastReloadMessage: 'Rules engine PID is missing or stale.',
+        });
+        exitWithError({
+          code: 2,
+          kind: 'usage',
+          message: `No running rules engine found for daemon reload (pid file: ${pidPaths.pidFile}).`,
+        });
+      }
+
+      let method: 'SIGHUP' | 'sentinel';
+      try {
+        if (sighupSupported()) {
process.kill(rulesPid, 'SIGHUP'); + method = 'SIGHUP'; + } else { + writeReloadSentinel(pidPaths.reloadFile); + method = 'sentinel'; + } + } catch (err) { + const message = err instanceof Error ? err.message : String(err); + persistState({ + status: 'failed', + failedAt: new Date().toISOString(), + failureReason: message, + lastReloadAt: new Date().toISOString(), + lastReloadStatus: 'failed', + lastReloadMessage: message, + }); + exitWithError({ + code: 1, + kind: 'runtime', + message: `Failed to reload daemon: ${message}`, + }); + } + + persistState({ + status: 'running', + lastReloadAt: new Date().toISOString(), + lastReloadStatus: 'ok', + lastReloadMessage: method === 'SIGHUP' + ? `Sent SIGHUP to rules engine pid ${rulesPid}.` + : `Wrote reload sentinel ${pidPaths.reloadFile}.`, + }); + const status = getDaemonStatus(); + if (isJsonMode()) { + printJson({ result: 'reloaded', method, ...status }); + } else { + console.log(`${chalk.green('✓')} Reload requested via ${method}.`); + if (status.lastReloadMessage) console.log(` ${status.lastReloadMessage}`); + } + }); +} diff --git a/src/commands/devices.ts b/src/commands/devices.ts index 7e017a7..eeea126 100644 --- a/src/commands/devices.ts +++ b/src/commands/devices.ts @@ -34,6 +34,7 @@ import { registerDevicesMetaCommand } from './device-meta.js'; import { isDryRun } from '../utils/flags.js'; import { DryRunSignal } from '../api/client.js'; import { resolveField, resolveFieldList, listSupportedFieldInputs } from '../schema/field-aliases.js'; +import { allowsDirectDestructiveExecution, destructiveExecutionHint } from '../lib/destructive-mode.js'; const EXPAND_HINTS: Record = { 'Air Conditioner': { command: 'setAll', flags: '--temp 26 --mode cool --fan low --power on' }, @@ -383,7 +384,8 @@ Examples: .option('--name-category ', 'Narrow --name by category: physical|ir', enumArg('--name-category', ['physical', 'ir'] as const)) .option('--name-room ', 'Narrow --name by room name (substring match)', 
stringArg('--name-room')) .option('--type ', 'Command type: "command" for built-in commands (default), "customize" for user-defined IR buttons', enumArg('--type', COMMAND_TYPES), 'command') - .option('--yes', 'Confirm a destructive command (Smart Lock unlock, Garage open, …). --dry-run is always allowed without --yes.') + .option('--yes', 'Confirm a destructive command in an explicit dev profile. --dry-run is always allowed without --yes.') + .option('--explain', 'Print a human-readable summary of what this command would do (risk level, device type, idempotency) then exit without executing.') .option('--allow-unknown-device', 'Allow targeting a deviceId that is not in the local cache. By default unknown IDs exit 2 so --dry-run is a reliable pre-flight gate; use this flag for scripted pass-through.') .option('--skip-param-validation', 'Skip client-side parameter validation (escape hatch — prefer fixing the argument over using this).') .option('--idempotency-key ', 'Client-supplied key to dedupe retries. process-local 60s window; cache is per Node process (MCP session, batch run, plan run). Independent CLI invocations do not share cache.', stringArg('--idempotency-key')) @@ -421,8 +423,8 @@ Common errors: Safety: Destructive commands (Smart Lock unlock, Garage Door Opener turnOn/turnOff, - Keypad createKey/deleteKey, …) are blocked by default. Pass --yes to confirm, - or --dry-run to preview without sending. + Keypad createKey/deleteKey, …) are blocked by default. Use the reviewed plan + flow instead, or --dry-run to preview without sending. 
Examples: $ switchbot devices command ABC123 turnOn @@ -430,9 +432,9 @@ Examples: $ switchbot devices command ABC123 setAll "26,1,3,on" $ switchbot devices command ABC123 startClean '{"action":"sweep","param":{"fanLevel":2,"times":1}}' $ switchbot devices command ABC123 "MyButton" --type customize - $ switchbot devices command unlock --yes + $ switchbot devices command unlock --dry-run `) - .action(async (deviceIdArg: string | undefined, cmdArg: string | undefined, parameter: string | undefined, options: { name?: string; nameStrategy?: string; nameType?: string; nameCategory?: 'physical' | 'ir'; nameRoom?: string; type: string; yes?: boolean; allowUnknownDevice?: boolean; skipParamValidation?: boolean; idempotencyKey?: string }) => { + .action(async (deviceIdArg: string | undefined, cmdArg: string | undefined, parameter: string | undefined, options: { name?: string; nameStrategy?: string; nameType?: string; nameCategory?: 'physical' | 'ir'; nameRoom?: string; type: string; yes?: boolean; explain?: boolean; allowUnknownDevice?: boolean; skipParamValidation?: boolean; idempotencyKey?: string }) => { // Declared outside try so the DryRunSignal catch branch can reference them. let _deviceId: string | undefined; let _cmd: string | undefined; @@ -543,25 +545,73 @@ Examples: } const cachedForGuard = getCachedDevice(deviceId); - if ( - !options.yes && - !isDryRun() && - isDestructiveCommand(cachedForGuard?.type, cmd, options.type) - ) { + + // --explain: print intent + risk metadata without executing + if (options.explain) { + const isDestructive = isDestructiveCommand(cachedForGuard?.type, cmd, options.type); + const reason = getDestructiveReason(cachedForGuard?.type, cmd, options.type); + const riskLevel = isDestructive ? 'high' : options.type === 'command' ? 'medium' : 'low'; + const recommendedMode = isDestructive ? 
'review-before-execute' : 'direct';
+      if (isJsonMode()) {
+        printJson({
+          intent: `Send command "${cmd}" to device ${deviceId}`,
+          deviceType: cachedForGuard?.type ?? 'unknown',
+          deviceName: cachedForGuard?.name ?? null,
+          command: cmd,
+          parameter: parameter ?? null,
+          commandType: options.type,
+          riskLevel,
+          requiresConfirmation: isDestructive,
+          safetyReason: reason ?? null,
+          recommendedMode,
+          note: 'This is a dry explanation only — command was NOT executed.',
+        });
+      } else {
+        console.log(`Command: ${cmd} on device ${deviceId}`);
+        console.log(`Device type: ${cachedForGuard?.type ?? 'unknown'}${cachedForGuard?.name ? ` (${cachedForGuard.name})` : ''}`);
+        console.log(`Parameter: ${parameter ?? '(none)'}`);
+        console.log(`Risk level: ${riskLevel}`);
+        if (reason) console.log(`Safety reason: ${reason}`);
+        if (isDestructive) console.log(`Requires plan approval by default. ${destructiveExecutionHint()}`);
+        console.log('(not executed — remove --explain to run)');
+      }
+      process.exit(0);
+    }
+
+    const destructive = isDestructiveCommand(cachedForGuard?.type, cmd, options.type);
+    // Block destructive commands outright — even with --yes — unless the active
+    // profile explicitly allows direct destructive execution (matches the MCP
+    // send_command and plan_run guards).
+    if (!isDryRun() && destructive && !allowsDirectDestructiveExecution()) {
+      const typeLabel = cachedForGuard?.type ?? 'unknown';
+      const reason = getDestructiveReason(cachedForGuard?.type, cmd, options.type);
+      exitWithError({
+        kind: 'guard',
+        message: `Direct destructive execution disabled — destructive command "${cmd}" on ${typeLabel}.`,
+        hint: reason ? `${destructiveExecutionHint()} Reason: ${reason}` : destructiveExecutionHint(),
+        context: {
+          command: cmd,
+          deviceType: typeLabel,
+          deviceId,
+          directExecutionAllowed: false,
+          requiredWorkflow: 'plan-approval',
+          ...(reason ? { safetyReason: reason, destructiveReason: reason } : {}),
+        },
+      });
+    }
+
+    if (!options.yes && !isDryRun() && destructive) {
+      const typeLabel = cachedForGuard?.type ??
'unknown'; const reason = getDestructiveReason(cachedForGuard?.type, cmd, options.type); exitWithError({ kind: 'guard', message: `Refusing to run destructive command "${cmd}" on ${typeLabel} without --yes.`, hint: reason - ? `Re-run with --yes to confirm. Reason: ${reason}` - : 'Re-run with --yes to confirm, or --dry-run to preview without sending.', + ? `Re-run with --yes only from an explicit dev profile, or use the reviewed plan flow. Reason: ${reason}` + : `Re-run with --yes only from an explicit dev profile, use the reviewed plan flow, or --dry-run to preview without sending.`, context: { command: cmd, deviceType: typeLabel, deviceId, ...(reason ? { safetyReason: reason, destructiveReason: reason } : {}) }, }); } // Warn when --yes is given but the command is not destructive (no-op flag) - if (options.yes && !isDestructiveCommand(cachedForGuard?.type, cmd, options.type) && !isDryRun()) { + if (options.yes && !destructive && !isDryRun()) { console.error(`Note: --yes has no effect; "${cmd}" is not a destructive command.`); } diff --git a/src/commands/doctor.ts b/src/commands/doctor.ts index 60841b9..1e620ab 100644 --- a/src/commands/doctor.ts +++ b/src/commands/doctor.ts @@ -2,6 +2,7 @@ import { Command } from 'commander'; import fs from 'node:fs'; import os from 'node:os'; import path from 'node:path'; +import { execSync } from 'node:child_process'; import { printJson, isJsonMode, exitWithError } from '../utils/output.js'; import { getEffectiveCatalog } from '../devices/catalog.js'; import { configFilePath, listProfiles, readProfileMeta } from '../config.js'; @@ -19,6 +20,8 @@ import { import { validateLoadedPolicy } from '../policy/validate.js'; import { selectCredentialStore } from '../credentials/keychain.js'; import { getActiveProfile } from '../lib/request-context.js'; +import { readDaemonState } from '../lib/daemon-state.js'; +import { isPidAlive } from '../rules/pid-file.js'; interface Check { name: string; @@ -505,6 +508,46 @@ function checkPolicy(): 
Check {
   }
 }
 
+async function checkKeychain(): Promise<Check> {
+  try {
+    const { selectCredentialStore } = await import('../credentials/keychain.js');
+    const store = await selectCredentialStore();
+    const desc = store.describe();
+    const isNative = desc.backend !== 'file';
+    if (!isNative) {
+      // Credentials are on the file backend — warn and point at the native option.
+      return {
+        name: 'keychain',
+        status: 'warn',
+        detail: {
+          backend: desc.backend,
+          message: 'OS native keychain not in use — credentials stored in plain file (~/.switchbot/config.json). Consider switching to a keychain backend for better security.',
+          hint: process.platform === 'linux'
+            ? 'Install libsecret (secret-tool) for GNOME Keyring support.'
+            : process.platform === 'darwin'
+              ? 'macOS Keychain is available — re-run `switchbot config set-token` to store credentials there.'
+              : 'Windows Credential Manager is available — re-run `switchbot config set-token` to use it.',
+        },
+      };
+    }
+    return {
+      name: 'keychain',
+      status: 'ok',
+      detail: {
+        backend: desc.backend,
+        writable: desc.writable,
+        message: `Credentials stored in OS keychain (${desc.backend}).`,
+      },
+    };
+  } catch (err) {
+    return {
+      name: 'keychain',
+      status: 'warn',
+      detail: { message: `Keychain probe failed: ${err instanceof Error ? err.message : String(err)}` },
+    };
+  }
+}
+
 function checkNodeVersion(): Check {
   const major = Number(process.versions.node.split('.')[0]);
   if (Number.isFinite(major) && major < 18) {
@@ -513,6 +556,187 @@ function checkNodeVersion(): Check {
   return { name: 'node', status: 'ok', detail: `Node ${process.versions.node}` };
 }
 
+type ShellFlavor = 'powershell' | 'cmd' | 'bash' | 'zsh' | 'fish' | 'unknown';
+
+function detectShellFlavor(): ShellFlavor {
+  const shell = (process.env.SHELL ?? '').toLowerCase();
+  const comspec = (process.env.COMSPEC ??
'').toLowerCase();
+  if (shell.includes('pwsh') || shell.includes('powershell')) return 'powershell';
+  if (comspec.includes('powershell') || comspec.includes('pwsh')) return 'powershell';
+  if (comspec.endsWith('cmd.exe')) return 'cmd';
+  if (shell.endsWith('/fish') || shell === 'fish') return 'fish';
+  if (shell.endsWith('/zsh') || shell === 'zsh') return 'zsh';
+  if (shell.endsWith('/bash') || shell === 'bash') return 'bash';
+  return process.platform === 'win32' ? 'powershell' : 'unknown';
+}
+
+function buildPathFix(shell: ShellFlavor, missingSegment: string | null, npmBinDir: string | null): string {
+  if (!npmBinDir) {
+    return process.platform === 'win32'
+      ? 'Run: npm prefix -g and add that directory to your PATH.'
+      : 'Run: npm prefix -g and add <prefix>/bin to your PATH.';
+  }
+  // missingSegment is null when npmBinDir is already on PATH but the binary
+  // still did not resolve; fall back to npmBinDir so we never print "null".
+  const segment = missingSegment ?? npmBinDir;
+  switch (shell) {
+    case 'powershell':
+      return `$env:Path = "${segment};" + $env:Path  # persist via your PowerShell profile or System Properties`;
+    case 'cmd':
+      return `set PATH=${segment};%PATH% && setx PATH "${segment};%PATH%"`;
+    case 'fish':
+      return `fish_add_path "${segment}"`;
+    case 'zsh':
+      return `export PATH="${segment}:$PATH"  # add to ~/.zshrc`;
+    case 'bash':
+      return `export PATH="${segment}:$PATH"  # add to ~/.bashrc`;
+    default:
+      return `export PATH="${segment}:$PATH"`;
+  }
+}
+
+function checkPathDiscoverability(): Check {
+  // Detect whether the `switchbot` binary is reachable on PATH.
+  // This catches the common "npm install -g worked but PATH not updated" failure.
+  const isWindows = process.platform === 'win32';
+  const binaryName = isWindows ? 'switchbot.cmd' : 'switchbot';
+
+  // Find where npm puts global bins.
+  let npmBinDir: string | null = null;
+  try {
+    const prefix = execSync('npm prefix -g', { timeout: 4000, encoding: 'utf-8' }).trim();
+    npmBinDir = isWindows ? prefix : path.join(prefix, 'bin');
+  } catch {
+    // npm not on PATH or other error; fall through.
+ } + + // Check whether `switchbot` resolves via PATH. + let binaryOnPath = false; + let resolvedPath: string | null = null; + try { + const which = execSync( + isWindows ? `where ${binaryName}` : `which ${binaryName}`, + { timeout: 3000, encoding: 'utf-8' }, + ).trim().split(/\r?\n/)[0]; + if (which) { + binaryOnPath = true; + resolvedPath = which; + } + } catch { + binaryOnPath = false; + } + + if (binaryOnPath) { + return { + name: 'path', + status: 'ok', + detail: { + binaryOnPath: true, + resolvedPath, + npmBinDir, + message: `switchbot is reachable at ${resolvedPath}`, + }, + }; + } + + // Not on PATH — figure out what the user should add. + const currentPath = process.env.PATH ?? ''; + const missingSegment = npmBinDir && !currentPath.split(path.delimiter).includes(npmBinDir) + ? npmBinDir + : null; + const currentShell = detectShellFlavor(); + const shellFix = buildPathFix(currentShell, missingSegment, npmBinDir); + + return { + name: 'path', + status: 'warn', + detail: { + binaryOnPath: false, + resolvedPath: null, + npmBinDir, + missingPathSegment: missingSegment, + currentShell, + fix: shellFix, + message: `'switchbot' is not on PATH. ${shellFix}`, + }, + }; +} + +function checkDaemon(): Check { + const state = readDaemonState(); + if (!state) { + return { + name: 'daemon', + status: 'warn', + detail: { + present: false, + message: 'No daemon state file found. Start one with `switchbot daemon start` if you want long-running automation.', + }, + }; + } + const pid = state.pid; + const running = pid !== null && (pid === process.pid || isPidAlive(pid)); + return { + name: 'daemon', + status: running ? 'ok' : 'warn', + detail: { + present: true, + status: running ? 'running' : state.status, + pid: running ? pid : null, + stateFile: state.stateFile, + pidFile: state.pidFile, + logFile: state.logFile, + startedAt: state.startedAt ?? null, + stoppedAt: state.stoppedAt ?? null, + lastReloadAt: state.lastReloadAt ?? 
null,
+      lastReloadStatus: state.lastReloadStatus ?? null,
+      healthConfigured: typeof state.healthzPort === 'number',
+      healthzPort: state.healthzPort ?? null,
+      message: running
+        ? 'daemon running'
+        : 'daemon not running; use `switchbot daemon start` for long-running automation',
+    },
+  };
+}
+
+async function checkHealthEndpoint(): Promise<Check> {
+  const state = readDaemonState();
+  if (!state || typeof state.healthzPort !== 'number') {
+    return {
+      name: 'health',
+      status: 'warn',
+      detail: {
+        present: false,
+        message: 'No health endpoint configured. Start the daemon with `--healthz-port <port>` to enable it.',
+      },
+    };
+  }
+  const url = `http://127.0.0.1:${state.healthzPort}/healthz`;
+  try {
+    const res = await fetch(url);
+    const body = await res.json() as { data?: { overall?: string } };
+    const overall = body?.data?.overall ?? 'unknown';
+    return {
+      name: 'health',
+      status: res.ok && overall !== 'down' ? 'ok' : 'warn',
+      detail: {
+        present: true,
+        url,
+        httpStatus: res.status,
+        overall,
+        message: res.ok ? `health endpoint reachable at ${url}` : `health endpoint returned HTTP ${res.status}`,
+      },
+    };
+  } catch (err) {
+    return {
+      name: 'health',
+      status: 'warn',
+      detail: {
+        present: true,
+        url,
+        message: `health endpoint probe failed: ${err instanceof Error ? err.message : String(err)}`,
+      },
+    };
+  }
+}
+
 function checkMqtt(): Check {
   // MQTT credentials are auto-provisioned from the SwitchBot API using the
   // account's token+secret — no extra env vars needed.
Report availability
@@ -641,7 +865,9 @@ interface DoctorRunOpts {
 
 const CHECK_REGISTRY: CheckDef[] = [
   { name: 'node', description: 'Node.js version compatibility', run: () => checkNodeVersion() },
+  { name: 'path', description: 'switchbot binary reachable on PATH', run: () => checkPathDiscoverability() },
   { name: 'credentials', description: 'credentials file present and parseable', run: () => checkCredentials() },
+  { name: 'keychain', description: 'OS keychain backend availability and usage', run: () => checkKeychain() },
   { name: 'profiles', description: 'profile definitions valid', run: () => checkProfiles() },
   { name: 'catalog', description: 'catalog loads', run: () => checkCatalog() },
   { name: 'catalog-schema', description: 'catalog vs agent-bootstrap version aligned', run: () => checkCatalogSchema() },
@@ -656,6 +882,8 @@ const CHECK_REGISTRY: CheckDef[] = [
   { name: 'mcp', description: 'MCP server instantiable + tool count', run: () => checkMcp() },
   { name: 'policy', description: 'policy.yaml present + schema-valid (if configured)', run: () => checkPolicy() },
   { name: 'audit', description: 'recent command errors (last 24h)', run: () => checkAudit() },
+  { name: 'daemon', description: 'daemon state file + runtime status', run: () => checkDaemon() },
+  { name: 'health', description: 'health endpoint availability (daemon --healthz-port)', run: () => checkHealthEndpoint() },
 ];
 
 interface FixResult {
@@ -719,7 +947,7 @@ interface DoctorCliOptions {
 export function registerDoctorCommand(program: Command): void {
   program
     .command('doctor')
-    .description('Self-check the SwitchBot CLI setup: credentials, catalog, cache, quota, MQTT, MCP')
+    .description('Self-check the SwitchBot CLI setup: credentials, catalog, cache, quota, MQTT, daemon, health, MCP')
     .option('--section <names>', 'Comma-separated list of checks to run (see --list for names)')
     .option('--list', 'Print the registered check names and exit 0 without running any check')
     .option('--fix', 'Apply safe, reversible remediations for failing checks (e.g. clear stale cache)')
@@ -734,6 +962,7 @@ Examples:
   $ switchbot --json doctor | jq '.checks[] | select(.status != "ok")'
   $ switchbot doctor --list
   $ switchbot doctor --section credentials,mcp --json
+  $ switchbot doctor --section daemon,health --json
   $ switchbot doctor --probe --json
   $ switchbot doctor --fix --yes --json
 `)
@@ -788,6 +1017,15 @@
     const overallFail = summary.fail > 0;
     const overall: 'ok' | 'warn' | 'fail' = overallFail ? 'fail' : summary.warn > 0 ? 'warn' : 'ok';
 
+    const total = summary.ok + summary.warn + summary.fail;
+    const rawScore = total > 0 ? Math.round(((summary.ok + summary.warn * 0.5) / total) * 100) : 100;
+    const maturityScore = Math.min(100, Math.max(0, rawScore));
+    const maturityLabel: 'production-ready' | 'mostly-ready' | 'needs-work' | 'not-ready' =
+      maturityScore >= 90 ? 'production-ready'
+        : maturityScore >= 70 ? 'mostly-ready'
+          : maturityScore >= 40 ? 'needs-work'
+            : 'not-ready';
+
     let fixes: FixResult[] | undefined;
     if (opts.fix) {
       fixes = applyFixes(checks, Boolean(opts.yes));
@@ -802,6 +1040,8 @@
     const payload: Record<string, unknown> = {
       ok: overall === 'ok',
       overall,
+      maturityScore,
+      maturityLabel,
       generatedAt: new Date().toISOString(),
       schemaVersion: DOCTOR_SCHEMA_VERSION,
       summary,
diff --git a/src/commands/health.ts b/src/commands/health.ts
new file mode 100644
index 0000000..87839e8
--- /dev/null
+++ b/src/commands/health.ts
@@ -0,0 +1,120 @@
+import http from 'node:http';
+import { Command } from 'commander';
+import { printJson, isJsonMode, printTable, handleError } from '../utils/output.js';
+import { getHealthReport, toPrometheusText } from '../utils/health.js';
+import { intArg } from '../utils/arg-parsers.js';
+
+const HEALTHZ_SCHEMA_VERSION = '1.1';
+
+/**
+ * Create an HTTP request handler for the health endpoints. Exposed separately
+ * so integration tests can call it directly without binding a port.
+ */
+export function createHealthHandler(auditLogPath?: string): http.RequestListener {
+  return (req, res) => {
+    const url = (req.url ?? '/').split('?')[0];
+    if (url === '/healthz') {
+      const report = getHealthReport(auditLogPath);
+      const statusCode = report.overall === 'down' ? 503 : 200;
+      res.writeHead(statusCode, { 'Content-Type': 'application/json' });
+      res.end(JSON.stringify({ schemaVersion: HEALTHZ_SCHEMA_VERSION, data: report }));
+    } else if (url === '/metrics') {
+      const report = getHealthReport(auditLogPath);
+      res.writeHead(200, { 'Content-Type': 'text/plain; version=0.0.4; charset=utf-8' });
+      res.end(toPrometheusText(report));
+    } else {
+      res.writeHead(404, { 'Content-Type': 'application/json' });
+      res.end(JSON.stringify({ error: 'Not found', paths: ['/healthz', '/metrics'] }));
+    }
+  };
+}
+
+export function registerHealthCommand(program: Command): void {
+  const health = program
+    .command('health')
+    .description('Report process health: quota, audit error rate, circuit breaker state.');
+
+  health
+    .command('check')
+    .description('Print a one-shot health report.')
+    .option('--prometheus', 'Emit Prometheus text format.')
+    .option('--audit-log <path>', 'Audit log path (default: ~/.switchbot/audit.log).')
+    .action((opts: { prometheus?: boolean; auditLog?: string }) => {
+      const report = getHealthReport(opts.auditLog);
+      if (opts.prometheus) {
+        process.stdout.write(toPrometheusText(report));
+        return;
+      }
+      if (isJsonMode()) {
+        printJson(report);
+        return;
+      }
+      const statusEmoji = report.overall === 'ok' ? '✓' : report.overall === 'degraded' ? '⚠' : '✗';
+      console.log(`${statusEmoji} overall: ${report.overall} (${report.generatedAt})`);
+      console.log('');
+      printTable(
+        ['Component', 'Status', 'Detail'],
+        [
+          ['quota', report.quota.status,
+            `${report.quota.used}/${report.quota.limit} (${report.quota.percentUsed}% used, ${report.quota.remaining} remaining)`],
+          ['audit', report.audit.status,
+            report.audit.present ?
`${report.audit.recentErrors}/${report.audit.recentTotal} errors in 24h (${report.audit.errorRatePercent}%)`
+              : 'log not present'],
+          ['circuit', report.circuit.status,
+            `${report.circuit.name}: ${report.circuit.state} (failures: ${report.circuit.failures})`],
+          ['process', 'ok',
+            `pid ${report.process.pid} · uptime ${report.process.uptimeSeconds}s · mem ${report.process.memoryMb}MB`],
+        ],
+      );
+      if (report.overall !== 'ok') process.exit(1);
+    });
+
+  // switchbot health serve [--port <port>]
+  health
+    .command('serve')
+    .description('Start an HTTP server exposing /healthz (JSON) and /metrics (Prometheus).')
+    .option('--port <port>', 'Port to listen on.', intArg('--port'), '3100')
+    .option('--host <host>', 'Bind address.', '127.0.0.1')
+    .option('--audit-log <path>', 'Audit log path.')
+    .addHelpText('after', `
+Endpoints:
+  GET /healthz   JSON health report (HTTP 200 ok/degraded, 503 when circuit is open).
+  GET /metrics   Prometheus text metrics.
+
+Example:
+  $ switchbot health serve --port 3100
+  $ curl http://127.0.0.1:3100/healthz
+`)
+    .action((opts: { port: string; host: string; auditLog?: string }) => {
+      const port = parseInt(opts.port, 10);
+      const handler = createHealthHandler(opts.auditLog);
+      const server = http.createServer(handler);
+
+      server.on('error', (err: NodeJS.ErrnoException) => {
+        if (err.code === 'EADDRINUSE') {
+          handleError(Object.assign(new Error(`Port ${port} is already in use. Choose a different port with --port.`), { code: err.code }));
+        } else {
+          handleError(err);
+        }
+      });
+
+      server.listen(port, opts.host, () => {
+        const addr = server.address();
+        const boundPort = typeof addr === 'object' && addr !== null ?
addr.port : port; + if (isJsonMode()) { + printJson({ status: 'listening', host: opts.host, port: boundPort, endpoints: ['/healthz', '/metrics'] }); + } else { + console.log(`health server listening on ${opts.host}:${boundPort}`); + console.log(' GET /healthz — JSON health report'); + console.log(' GET /metrics — Prometheus text metrics'); + } + }); + + function shutdown() { + server.close(() => process.exit(0)); + } + process.on('SIGTERM', shutdown); + process.on('SIGINT', shutdown); + }); +} diff --git a/src/commands/mcp.ts b/src/commands/mcp.ts index 9dbdbf4..66fe6bc 100644 --- a/src/commands/mcp.ts +++ b/src/commands/mcp.ts @@ -58,6 +58,7 @@ import { planMigration } from '../policy/migrate.js'; import { suggestPlan } from './plan.js'; import { suggestRule } from '../rules/suggest.js'; import { addRuleToPolicyFile, AddRuleError } from '../policy/add-rule.js'; +import { allowsDirectDestructiveExecution, destructiveExecutionHint } from '../lib/destructive-mode.js'; import { writeFileSync } from 'node:fs'; import { readAudit, type AuditEntry } from '../utils/audit.js'; import { parseDurationToMs } from '../utils/flags.js'; @@ -198,6 +199,45 @@ function topNFromMap(counts: Map, n: number): Array<{ key: strin .map(([key, count]) => ({ key, count })); } +/** + * Compute per-action risk metadata from the device catalog. + * `idempotencyHint` is sourced from CommandSpec.idempotent when available; + * falls back to "safe" only for unknown commands on non-destructive paths. + */ +function buildRiskProfile( + typeName: string | undefined, + command: string, + commandType: string, + isDestructive: boolean, +): { + riskLevel: 'high' | 'medium' | 'low'; + requiresConfirmation: boolean; + supportsDryRun: true; + idempotencyHint: 'safe' | 'non-idempotent'; + recommendedMode: 'review-before-execute' | 'plan' | 'direct'; +} { + // Look up the catalog spec to get the authoritative idempotent flag. + let idempotencyHint: 'safe' | 'non-idempotent' = isDestructive ? 
'non-idempotent' : 'safe'; + if (typeName && commandType === 'command') { + const entry = findCatalogEntry(typeName); + const entries = Array.isArray(entry) ? entry : entry ? [entry] : []; + for (const e of entries) { + const spec = e.commands.find((c) => c.command === command); + if (spec !== undefined) { + idempotencyHint = spec.idempotent === true ? 'safe' : 'non-idempotent'; + break; + } + } + } + return { + riskLevel: isDestructive ? 'high' : commandType === 'command' ? 'medium' : 'low', + requiresConfirmation: isDestructive, + supportsDryRun: true, + idempotencyHint, + recommendedMode: isDestructive ? 'review-before-execute' : 'plan', + }; +} + export function createSwitchBotMcpServer(options?: { eventManager?: EventSubscriptionManager }): McpServer { const eventManager = options?.eventManager; const server = new McpServer( @@ -395,7 +435,7 @@ API docs: https://github.com/OpenWonderLabs/SwitchBotAPI`, { title: 'Send a control command to a device', description: - 'Execute a control command on a device (turnOn, setColor, startClean, unlock, openDoor, createKey, etc.). Destructive commands (Smart Lock unlock, Garage Door open, Keypad createKey/deleteKey) require confirm:true to proceed; otherwise rejected. Commands are validated offline against the device catalog. Use idempotencyKey to safely deduplicate retries within 60 seconds.', + 'Execute a control command on a device (turnOn, setColor, startClean, unlock, openDoor, createKey, etc.). Destructive commands require confirm:true and are still blocked in the default safety profile; use the reviewed plan workflow unless an explicit dev profile allows direct execution. Commands are validated offline against the device catalog. 
Use idempotencyKey to safely deduplicate retries within 60 seconds.', _meta: { agentSafetyTier: 'action' }, inputSchema: z.object({ deviceId: z.string().describe('Device ID from list_devices'), @@ -430,6 +470,18 @@ API docs: https://github.com/OpenWonderLabs/SwitchBotAPI`, command: z.string().optional(), deviceId: z.string().optional(), result: z.unknown().optional().describe('API response body from SwitchBot (absent on dryRun)'), + riskProfile: z + .object({ + riskLevel: z.enum(['high', 'medium', 'low']), + requiresConfirmation: z.boolean(), + supportsDryRun: z.literal(true), + idempotencyHint: z.enum(['safe', 'non-idempotent']), + recommendedMode: z.enum(['review-before-execute', 'plan', 'direct']), + }) + .optional() + .describe( + 'Device+command-specific risk metadata. riskLevel:"high" means confirm:true was required. Always present on dryRun responses so agents can preview risk before committing.', + ), verification: z .object({ verifiable: z.boolean(), @@ -511,7 +563,9 @@ API docs: https://github.com/OpenWonderLabs/SwitchBotAPI`, parameter: effectiveParameter ?? 'default', commandType: effectiveType, }; - const structured = { ok: true as const, dryRun: true as const, wouldSend }; + const dryIsDestructive = isDestructiveCommand(cached.type, effectiveCommand, effectiveType); + const dryRiskProfile = buildRiskProfile(cached.type, effectiveCommand, effectiveType, dryIsDestructive); + const structured = { ok: true as const, dryRun: true as const, riskProfile: dryRiskProfile, wouldSend }; return { content: [{ type: 'text', text: JSON.stringify(structured, null, 2) }], structuredContent: structured, @@ -534,7 +588,26 @@ API docs: https://github.com/OpenWonderLabs/SwitchBotAPI`, typeName = physical ? 
physical.deviceType : ir!.remoteType; } - if (isDestructiveCommand(typeName, effectiveCommand, effectiveType) && !confirm) { + const destructive = isDestructiveCommand(typeName, effectiveCommand, effectiveType); + if (destructive && !allowsDirectDestructiveExecution()) { + const reason = getDestructiveReason(typeName, effectiveCommand, effectiveType); + return mcpError( + 'guard', 3, + `Direct destructive execution is disabled for command "${effectiveCommand}" on device type "${typeName}".`, + { + hint: reason ? `${destructiveExecutionHint()} Reason: ${reason}` : destructiveExecutionHint(), + context: { + command: effectiveCommand, + deviceType: typeName, + directExecutionAllowed: false, + requiredWorkflow: 'plan-approval', + ...(reason ? { safetyReason: reason, destructiveReason: reason } : {}), + }, + }, + ); + } + + if (destructive && !confirm) { const reason = getDestructiveReason(typeName, effectiveCommand, effectiveType); const entry = typeName ? findCatalogEntry(typeName) : null; const spec = @@ -610,17 +683,20 @@ API docs: https://github.com/OpenWonderLabs/SwitchBotAPI`, return apiErrorToMcpError(err); } const isIr = getCachedDevice(deviceId)?.category === 'ir'; + const liveIsDestructive = destructive; + const riskProfile = buildRiskProfile(typeName, effectiveCommand, effectiveType, liveIsDestructive); const structured: { ok: true; command: string; deviceId: string; result: unknown; + riskProfile: typeof riskProfile; verification?: { verifiable: boolean; reason: string; suggestedFollowup: string; }; - } = { ok: true as const, command: effectiveCommand, deviceId, result }; + } = { ok: true as const, command: effectiveCommand, deviceId, result, riskProfile }; if (isIr) { structured.verification = { verifiable: false, @@ -1472,7 +1548,7 @@ API docs: https://github.com/OpenWonderLabs/SwitchBotAPI`, { title: 'Validate and execute a SwitchBot plan', description: - 'Execute a Plan JSON object (version 1.0). Destructive command steps are skipped unless yes=true. 
' + + 'Execute a Plan JSON object (version 1.0). Destructive command steps are skipped unless yes=true, and the default safety profile still refuses direct destructive execution in favor of the reviewed plan workflow. ' + 'Scene and wait steps run in order. Returns per-step results and a summary.', _meta: { agentSafetyTier: 'action' }, inputSchema: z.object({ @@ -1519,6 +1595,38 @@ API docs: https://github.com/OpenWonderLabs/SwitchBotAPI`, const continueOnError = continue_on_error === true; const allowDestructive = yes === true; + const destructiveSteps = validated.plan.steps + .map((step, index) => ({ step, index })) + .filter((entry): entry is { step: typeof validated.plan.steps[number] & { type: 'command' }; index: number } => entry.step.type === 'command') + .map(({ step, index }) => { + const resolvedDeviceId = resolveDeviceId(step.deviceId, step.deviceName); + const commandType = step.commandType ?? 'command'; + const deviceType = getCachedDevice(resolvedDeviceId)?.type; + return { + index: index + 1, + deviceId: resolvedDeviceId, + command: step.command, + commandType, + deviceType: deviceType ?? 
null, + destructive: isDestructiveCommand(deviceType, step.command, commandType), + }; + }) + .filter((step) => step.destructive); + if (allowDestructive && destructiveSteps.length > 0 && !allowsDirectDestructiveExecution()) { + return mcpError('guard', 3, 'Direct destructive execution is disabled for plan_run.', { + hint: destructiveExecutionHint(), + context: { + destructiveSteps: destructiveSteps.map((step) => ({ + step: step.index, + deviceId: step.deviceId, + deviceType: step.deviceType, + command: step.command, + commandType: step.commandType, + })), + requiredWorkflow: 'plan-approval', + }, + }); + } for (let i = 0; i < validated.plan.steps.length; i++) { const step = validated.plan.steps[i]; diff --git a/src/commands/plan.ts b/src/commands/plan.ts index e94b837..56c0ce2 100644 --- a/src/commands/plan.ts +++ b/src/commands/plan.ts @@ -1,12 +1,21 @@ import { Command } from 'commander'; import fs from 'node:fs'; import readline from 'node:readline'; -import { printJson, isJsonMode, handleError } from '../utils/output.js'; +import { randomUUID } from 'node:crypto'; +import { printJson, isJsonMode, handleError, exitWithError } from '../utils/output.js'; import { executeCommand, isDestructiveCommand } from '../lib/devices.js'; import { executeScene } from '../lib/scenes.js'; import { getCachedDevice } from '../devices/cache.js'; import { resolveDeviceId } from '../utils/name-resolver.js'; import { COMMAND_KEYWORDS } from '../lib/command-keywords.js'; +import { + savePlanRecord, + loadPlanRecord, + updatePlanRecord, + listPlanRecords, + PLANS_DIR, +} from '../lib/plan-store.js'; +import { allowsDirectDestructiveExecution, destructiveExecutionHint } from '../lib/destructive-mode.js'; export interface PlanCommandStep { type: 'command'; @@ -38,6 +47,21 @@ export interface Plan { steps: PlanStep[]; } +function findDestructivePlanSteps(plan: Plan): Array<{ index: number; deviceId: string; command: string; commandType: 'command' | 'customize'; deviceType: string | null 
}> { + const destructive: Array<{ index: number; deviceId: string; command: string; commandType: 'command' | 'customize'; deviceType: string | null }> = []; + for (let i = 0; i < plan.steps.length; i++) { + const step = plan.steps[i]; + if (step.type !== 'command') continue; + const resolvedDeviceId = resolveDeviceId(step.deviceId, step.deviceName); + const deviceType = getCachedDevice(resolvedDeviceId)?.type; + const commandType = step.commandType ?? 'command'; + if (isDestructiveCommand(deviceType, step.command, commandType)) { + destructive.push({ index: i + 1, deviceId: resolvedDeviceId, command: step.command, commandType, deviceType: deviceType ?? null }); + } + } + return destructive; +} + const PLAN_JSON_SCHEMA = { $schema: 'https://json-schema.org/draft/2020-12/schema', $id: 'https://switchbot.dev/plan-1.0.json', @@ -264,6 +288,89 @@ interface PlanRunResult { summary: { total: number; ok: number; error: number; skipped: number }; } +/** Shared plan-execution core used by both `plan run` and `plan execute`. */ +async function executePlanSteps( + plan: Plan, + planId: string, + options: { yes?: boolean; requireApproval?: boolean; continueOnError?: boolean }, +): Promise { + const out: PlanRunResult = { + plan, + results: [], + summary: { total: plan.steps.length, ok: 0, error: 0, skipped: 0 }, + }; + for (let i = 0; i < plan.steps.length; i++) { + const step = plan.steps[i]; + const idx = i + 1; + if (step.type === 'wait') { + await new Promise((r) => setTimeout(r, step.ms)); + out.results.push({ step: idx, type: 'wait', ms: step.ms, status: 'ok' }); + out.summary.ok++; + if (!isJsonMode()) console.log(` ${idx}. wait ${step.ms}ms`); + continue; + } + if (step.type === 'scene') { + try { + await executeScene(step.sceneId); + out.results.push({ step: idx, type: 'scene', sceneId: step.sceneId, status: 'ok' }); + out.summary.ok++; + if (!isJsonMode()) console.log(` ${idx}. ✓ scene ${step.sceneId}`); + } catch (err) { + const msg = err instanceof Error ? 
err.message : String(err); + out.results.push({ step: idx, type: 'scene', sceneId: step.sceneId, status: 'error', error: msg }); + out.summary.error++; + if (!isJsonMode()) console.log(` ${idx}. ✗ scene ${step.sceneId}: ${msg}`); + if (!options.continueOnError) break; + } + continue; + } + const resolvedDeviceId = resolveDeviceId(step.deviceId, step.deviceName); + const deviceType = getCachedDevice(resolvedDeviceId)?.type; + const commandType = step.commandType ?? 'command'; + const destructive = isDestructiveCommand(deviceType, step.command, commandType); + let approvalDecision: 'approved' | undefined; + if (destructive && !options.yes) { + if (options.requireApproval) { + const approved = await promptApproval(idx, step.command, resolvedDeviceId); + if (approved) { + approvalDecision = 'approved'; + } else { + out.results.push({ step: idx, type: 'command', deviceId: resolvedDeviceId, command: step.command, status: 'skipped', error: 'destructive — rejected at prompt', decision: 'rejected' }); + out.summary.skipped++; + if (!isJsonMode()) console.log(` ${idx}. ✗ skipped ${step.command} on ${resolvedDeviceId} (rejected)`); + if (!options.continueOnError) break; + continue; + } + } else { + out.results.push({ step: idx, type: 'command', deviceId: resolvedDeviceId, command: step.command, status: 'skipped', error: 'destructive — rerun with --yes' }); + out.summary.skipped++; + if (!isJsonMode()) console.log(` ${idx}. ⚠ skipped ${step.command} on ${resolvedDeviceId} (destructive — pass --yes)`); + if (!options.continueOnError) break; + continue; + } + } + try { + await executeCommand(resolvedDeviceId, step.command, step.parameter, commandType, undefined, { planId }); + out.results.push({ step: idx, type: 'command', deviceId: resolvedDeviceId, command: step.command, status: 'ok', ...(approvalDecision ? { decision: approvalDecision } : {}) }); + out.summary.ok++; + if (!isJsonMode()) console.log(` ${idx}. 
✓ ${step.command} on ${resolvedDeviceId}`);
+    } catch (err) {
+      if (err instanceof Error && err.name === 'DryRunSignal') {
+        out.results.push({ step: idx, type: 'command', deviceId: resolvedDeviceId, command: step.command, status: 'ok' });
+        out.summary.ok++;
+        if (!isJsonMode()) console.log(`  ${idx}. ◦ dry-run ${step.command} on ${resolvedDeviceId}`);
+        continue;
+      }
+      const msg = err instanceof Error ? err.message : String(err);
+      out.results.push({ step: idx, type: 'command', deviceId: resolvedDeviceId, command: step.command, status: 'error', error: msg });
+      out.summary.error++;
+      if (!isJsonMode()) console.log(`  ${idx}. ✗ ${step.command} on ${resolvedDeviceId}: ${msg}`);
+      if (!options.continueOnError) break;
+    }
+  }
+  return out;
+}
+
 export function registerPlanCommand(program: Command): void {
   const plan = program
     .command('plan')
@@ -282,7 +389,10 @@ Workflow:
   $ switchbot plan schema > plan.schema.json   # export the contract
   $ switchbot plan validate my-plan.json       # check shape without running
   $ switchbot --dry-run plan run my-plan.json  # preview (mutations skipped)
-  $ switchbot plan run my-plan.json --yes      # execute destructive steps
+  $ switchbot plan save my-plan.json           # store a reviewed plan
+  $ switchbot plan review <planId>
+  $ switchbot plan approve <planId>
+  $ switchbot plan execute <planId>
   $ cat plan.json | switchbot plan run -       # or stream via stdin
 `);
@@ -375,7 +485,7 @@ against the live API without executing any mutations.
   plan
     .command('run')
-    .description('Validate + execute a plan. Respects --dry-run; destructive steps require --yes')
+    .description('Validate + preview/execute a plan. Respects --dry-run; destructive steps require the reviewed plan flow by default')
     .argument('[file]', 'Path to plan.json, or "-" / omit to read stdin')
     .option('--yes', 'Authorize destructive commands (e.g.
Smart Lock unlock, Garage open)') .option('--require-approval', 'Prompt for confirmation before each destructive step (TTY only; mutually exclusive with --json)') @@ -405,135 +515,200 @@ against the live API without executing any mutations. } process.exit(2); } - - const out: PlanRunResult = { - plan: v.plan, - results: [], - summary: { total: v.plan.steps.length, ok: 0, error: 0, skipped: 0 }, - }; - - try { - for (let i = 0; i < v.plan.steps.length; i++) { - const step = v.plan.steps[i]; - const idx = i + 1; - if (step.type === 'wait') { - await new Promise((r) => setTimeout(r, step.ms)); - out.results.push({ step: idx, type: 'wait', ms: step.ms, status: 'ok' }); - out.summary.ok++; - if (!isJsonMode()) console.log(` ${idx}. wait ${step.ms}ms`); - continue; - } - if (step.type === 'scene') { - try { - await executeScene(step.sceneId); - out.results.push({ step: idx, type: 'scene', sceneId: step.sceneId, status: 'ok' }); - out.summary.ok++; - if (!isJsonMode()) console.log(` ${idx}. ✓ scene ${step.sceneId}`); - } catch (err) { - const msg = err instanceof Error ? err.message : String(err); - out.results.push({ step: idx, type: 'scene', sceneId: step.sceneId, status: 'error', error: msg }); - out.summary.error++; - if (!isJsonMode()) console.log(` ${idx}. ✗ scene ${step.sceneId}: ${msg}`); - if (!options.continueOnError) break; - } - continue; - } - // command - const resolvedDeviceId = resolveDeviceId(step.deviceId, step.deviceName); - const deviceType = getCachedDevice(resolvedDeviceId)?.type; - const commandType = step.commandType ?? 
'command'; - const destructive = isDestructiveCommand(deviceType, step.command, commandType); - let approvalDecision: 'approved' | undefined; - if (destructive && !options.yes) { - if (options.requireApproval) { - const approved = await promptApproval(idx, step.command, resolvedDeviceId); - if (approved) { - approvalDecision = 'approved'; - } else { - out.results.push({ - step: idx, - type: 'command', - deviceId: resolvedDeviceId, - command: step.command, - status: 'skipped', - error: 'destructive — rejected at prompt', - decision: 'rejected', - }); - out.summary.skipped++; - if (!isJsonMode()) - console.log(` ${idx}. ✗ skipped ${step.command} on ${resolvedDeviceId} (rejected)`); - if (!options.continueOnError) break; - continue; - } - } else { - out.results.push({ - step: idx, - type: 'command', - deviceId: resolvedDeviceId, - command: step.command, - status: 'skipped', - error: 'destructive — rerun with --yes', - }); - out.summary.skipped++; - if (!isJsonMode()) - console.log(` ${idx}. ⚠ skipped ${step.command} on ${resolvedDeviceId} (destructive — pass --yes)`); - if (!options.continueOnError) break; - continue; - } - } - try { - await executeCommand(resolvedDeviceId, step.command, step.parameter, commandType); - out.results.push({ - step: idx, - type: 'command', - deviceId: resolvedDeviceId, + const planId = randomUUID(); + const destructiveSteps = findDestructivePlanSteps(v.plan); + if (options.yes && destructiveSteps.length > 0 && !allowsDirectDestructiveExecution()) { + exitWithError({ + code: 2, + kind: 'guard', + message: `Direct destructive execution is disabled for plan run (${destructiveSteps.length} destructive step${destructiveSteps.length === 1 ? '' : 's'}).`, + hint: destructiveExecutionHint(), + context: { + planId, + destructiveSteps: destructiveSteps.map((step) => ({ + step: step.index, + deviceId: step.deviceId, + deviceType: step.deviceType, command: step.command, - status: 'ok', - ...(approvalDecision ? 
{ decision: approvalDecision } : {}), - }); - out.summary.ok++; - if (!isJsonMode()) - console.log(` ${idx}. ✓ ${step.command} on ${resolvedDeviceId}`); - } catch (err) { - if (err instanceof Error && err.name === 'DryRunSignal') { - out.results.push({ - step: idx, - type: 'command', - deviceId: resolvedDeviceId, - command: step.command, - status: 'ok', - }); - out.summary.ok++; - if (!isJsonMode()) - console.log(` ${idx}. ◦ dry-run ${step.command} on ${resolvedDeviceId}`); - continue; - } - const msg = err instanceof Error ? err.message : String(err); - out.results.push({ - step: idx, - type: 'command', - deviceId: resolvedDeviceId, - command: step.command, - status: 'error', - error: msg, - }); - out.summary.error++; - if (!isJsonMode()) - console.log(` ${idx}. ✗ ${step.command} on ${resolvedDeviceId}: ${msg}`); - if (!options.continueOnError) break; - } - } - + commandType: step.commandType, + })), + requiredWorkflow: 'plan-approval', + }, + }); + } + let out: PlanRunResult; + try { + out = await executePlanSteps(v.plan, planId, options); if (isJsonMode()) { - printJson({ ran: true, ...out }); + printJson({ ran: true, planId, ...out }); } else { const { ok, error, skipped, total } = out.summary; console.log(`\nsummary: ok=${ok} error=${error} skipped=${skipped} total=${total}`); } } catch (err) { handleError(err); + return; } if (out.summary.error > 0) process.exit(1); }, ); + + // ---- Plan resource-model subcommands (P0-3) -------------------------------- + + plan + .command('save') + .description('Save a plan JSON to ~/.switchbot/plans/ with status=pending (waiting for approval).') + .argument('[file]', 'Path to plan.json, or "-" / omit to read stdin') + .action(async (file: string | undefined) => { + let raw: unknown; + try { + raw = await readPlanSource(file); + } catch (err) { + handleError(err); + return; + } + const v = validatePlan(raw); + if (!v.ok) { + exitWithError({ + code: 2, kind: 'usage', + message: `Plan is invalid (${v.issues.length} 
issue${v.issues.length > 1 ? 's' : ''})`, + context: { issues: v.issues }, + }); + } + const record = savePlanRecord(v.plan); + if (isJsonMode()) { + printJson({ saved: true, planId: record.planId, status: record.status, createdAt: record.createdAt, plansDir: PLANS_DIR }); + } else { + console.log(`✓ Plan saved — planId: ${record.planId}`); + console.log(` Status: ${record.status}`); + console.log(` Path: ${PLANS_DIR}/${record.planId}.json`); + console.log(` Next: switchbot plan review ${record.planId}`); + console.log(` switchbot plan approve ${record.planId}`); + } + }); + + plan + .command('list') + .description('List saved plans in ~/.switchbot/plans/ with their approval status.') + .action(() => { + const records = listPlanRecords(); + if (isJsonMode()) { + printJson({ plans: records.map((r) => ({ planId: r.planId, status: r.status, createdAt: r.createdAt, approvedAt: r.approvedAt ?? null, executedAt: r.executedAt ?? null, description: r.plan.description ?? null })) }); + return; + } + if (records.length === 0) { + console.log('No saved plans. 
Use: switchbot plan save <file>');
+        return;
+      }
+      for (const r of records) {
+        const parts = [`${r.planId.slice(0, 8)}…`, r.status, r.createdAt.slice(0, 16)];
+        if (r.plan.description) parts.push(`"${r.plan.description}"`);
+        console.log(parts.join(' '));
+      }
+    });
+
+  plan
+    .command('review')
+    .description('Show the details of a saved plan (steps, status, approval history).')
+    .argument('<planId>', 'Plan UUID from "plan list"')
+    .action((planId: string) => {
+      const record = loadPlanRecord(planId);
+      if (!record) {
+        exitWithError({ code: 2, kind: 'usage', message: `Plan ${planId} not found in ${PLANS_DIR}` });
+      }
+      if (isJsonMode()) {
+        printJson(record);
+        return;
+      }
+      console.log(`planId: ${record.planId}`);
+      console.log(`status: ${record.status}`);
+      console.log(`createdAt: ${record.createdAt}`);
+      if (record.approvedAt) console.log(`approvedAt: ${record.approvedAt}`);
+      if (record.executedAt) console.log(`executedAt: ${record.executedAt}`);
+      if (record.plan.description) console.log(`description: ${record.plan.description}`);
+      console.log(`steps (${record.plan.steps.length}):`);
+      for (let i = 0; i < record.plan.steps.length; i++) {
+        const step = record.plan.steps[i];
+        if (step.type === 'command') {
+          const id = step.deviceId ?? step.deviceName ?? '?';
+          console.log(`  ${i + 1}. command ${step.command} on ${id}${step.note ? ` # ${step.note}` : ''}`);
+        } else if (step.type === 'scene') {
+          console.log(`  ${i + 1}. scene ${step.sceneId}${step.note ? ` # ${step.note}` : ''}`);
+        } else {
+          console.log(`  ${i + 1}.
wait ${step.ms}ms`);
+        }
+      }
+    });
+
+  plan
+    .command('approve')
+    .description('Approve a saved plan, allowing `plan execute` to run it.')
+    .argument('<planId>', 'Plan UUID from "plan list"')
+    .action((planId: string) => {
+      const record = loadPlanRecord(planId);
+      if (!record) {
+        exitWithError({ code: 2, kind: 'usage', message: `Plan ${planId} not found in ${PLANS_DIR}` });
+      }
+      if (record.status === 'executed') {
+        exitWithError({ code: 2, kind: 'guard', message: `Plan ${planId} has already been executed.` });
+      }
+      if (record.status === 'rejected') {
+        exitWithError({ code: 2, kind: 'guard', message: `Plan ${planId} was rejected. Save a new plan to start fresh.` });
+      }
+      // 'failed' plans may be re-approved and retried — intentionally no block here.
+      const updated = updatePlanRecord(planId, { status: 'approved', approvedAt: new Date().toISOString() });
+      if (isJsonMode()) {
+        printJson({ ok: true, planId: updated.planId, status: updated.status, approvedAt: updated.approvedAt });
+      } else {
+        console.log(`✓ Plan ${planId.slice(0, 8)}… approved.`);
+        console.log(`  Next: switchbot plan execute ${planId}`);
+      }
+    });
+
+  plan
+    .command('execute')
+    .description('Execute a pre-approved plan.
Only runs if status=approved; audit entries are tagged with planId.')
+    .argument('<planId>', 'Plan UUID from "plan list" (must be in approved status)')
+    .option('--yes', 'Deprecated no-op: approved plans already authorize destructive steps')
+    .option('--require-approval', 'Prompt for each destructive step (TTY only)')
+    .option('--continue-on-error', 'Keep running after a failed step')
+    .action(async (planId: string, options: { yes?: boolean; requireApproval?: boolean; continueOnError?: boolean }) => {
+      if (options.requireApproval && isJsonMode()) {
+        exitWithError({ code: 1, kind: 'usage', message: '--require-approval cannot be used with --json' });
+      }
+      const record = loadPlanRecord(planId);
+      if (!record) {
+        exitWithError({ code: 2, kind: 'usage', message: `Plan ${planId} not found in ${PLANS_DIR}` });
+      }
+      if (record.status !== 'approved') {
+        exitWithError({
+          code: 2, kind: 'guard',
+          message: `Plan ${planId.slice(0, 8)}… cannot be executed: status is "${record.status}", expected "approved".`,
+          hint: record.status === 'pending' ? `Run: switchbot plan approve ${planId}` : record.status === 'failed' ? `Re-run: switchbot plan approve ${planId}` : undefined,
+          context: { planId, status: record.status },
+        });
+      }
+      let out: PlanRunResult;
+      try {
+        out = await executePlanSteps(record.plan, planId, { ...options, yes: true });
+      } catch (err) {
+        handleError(err);
+        return;
+      }
+      const { ok, error, skipped } = out.summary;
+      const succeeded = error === 0 && skipped === 0;
+      const failureReason = succeeded ? undefined : [error > 0 ? `${error} error${error > 1 ? 's' : ''}` : null, skipped > 0 ?
`${skipped} skipped` : null].filter(Boolean).join(', '); + if (succeeded) { + updatePlanRecord(planId, { status: 'executed', executedAt: new Date().toISOString() }); + } else { + updatePlanRecord(planId, { status: 'failed', failedAt: new Date().toISOString(), failureReason }); + } + if (isJsonMode()) { + printJson({ ran: true, planId, succeeded, ...out }); + } else { + console.log(`\nsummary: ok=${ok} error=${error} skipped=${skipped} total=${out.summary.total}`); + if (!succeeded) console.error(`Plan marked as failed (${failureReason}). Re-run after fixing to retry.`); + } + if (!succeeded) process.exit(1); + }); } diff --git a/src/commands/policy.ts b/src/commands/policy.ts index 3944948..669a9c2 100644 --- a/src/commands/policy.ts +++ b/src/commands/policy.ts @@ -1,9 +1,9 @@ -import { readFileSync, writeFileSync, existsSync, mkdirSync } from 'node:fs'; -import { dirname } from 'node:path'; +import { readFileSync, writeFileSync, existsSync, mkdirSync, copyFileSync, statSync } from 'node:fs'; +import { dirname, resolve as resolvePath } from 'node:path'; import { fileURLToPath } from 'node:url'; import { Command } from 'commander'; import { parse as yamlParse } from 'yaml'; -import { printJson, emitJsonError, isJsonMode } from '../utils/output.js'; +import { printJson, emitJsonError, isJsonMode, exitWithError } from '../utils/output.js'; import { loadPolicyFile, resolvePolicyPath, @@ -493,6 +493,128 @@ Reads rule YAML from stdin. Combine with 'rules suggest' for a full pipeline: throw err; } }); + + // switchbot policy backup [file] + policy + .command('backup [file]') + .description('Copy the active policy to a backup file (default: .bak.yaml).') + .option('--force', 'Overwrite an existing backup file.') + .addHelpText('after', ` +Creates a point-in-time snapshot of the active policy file so it can be +restored if a migration or manual edit breaks things. + +Default backup path: same directory as the policy, with ".bak.yaml" suffix. 
+
+Examples:
+  $ switchbot policy backup
+  $ switchbot policy backup ./my-backup.yaml
+  $ switchbot policy backup --force
+`)
+    .action((fileArg: string | undefined, opts: { force?: boolean }) => {
+      const source = resolvePolicyPath({});
+      if (!existsSync(source)) {
+        exitPolicyError('file-not-found', `policy file not found: ${source}`, { path: source });
+      }
+
+      const dest = fileArg
+        ? resolvePath(fileArg)
+        : source.replace(/\.yaml$/, '.bak.yaml').replace(/\.yml$/, '.bak.yml');
+
+      if (!opts.force && existsSync(dest)) {
+        exitWithError({ code: 2, kind: 'usage', message: `Backup file already exists: ${dest}. Use --force to overwrite.` });
+      }
+
+      try {
+        copyFileSync(source, dest);
+      } catch (err) {
+        exitPolicyError('internal', `Failed to write backup: ${err instanceof Error ? err.message : String(err)}`, { source, dest });
+      }
+
+      const size = statSync(dest).size;
+      if (isJsonMode()) {
+        printJson({ ok: true, source, dest, sizeBytes: size });
+      } else {
+        console.log(`Backup written: ${dest} (${size} bytes)`);
+        console.log(`Restore with: switchbot policy restore ${dest}`);
+      }
+    });
+
+  // switchbot policy restore <file>
+  policy
+    .command('restore <file>')
+    .description('Restore a policy backup, validating it before applying.')
+    .option('--no-validate', 'Skip schema validation before restoring (use if migrating manually).')
+    .addHelpText('after', `
+Validates the backup file against the current schema before overwriting the
+active policy. A .pre-restore.bak.yaml snapshot of the current policy is
+automatically created before overwriting. Use --no-validate to skip
+validation (e.g. when restoring an older version for manual migration).
+ +Example: + $ switchbot policy restore ./policy.bak.yaml +`) + .action((fileArg: string, opts: { validate?: boolean }) => { + const source = resolvePath(fileArg); + if (!existsSync(source)) { + exitPolicyError('file-not-found', `restore source not found: ${source}`, { path: source }); + } + + // Validate before touching the active policy. + if (opts.validate !== false) { + let loaded; + try { + loaded = loadPolicyFile(source); + } catch (err) { + if (err instanceof PolicyFileNotFoundError) { + exitPolicyError('file-not-found', `restore source not found: ${err.policyPath}`, { path: err.policyPath }); + } + if (err instanceof PolicyYamlParseError) { + exitPolicyError('yaml-parse', `YAML parse error in ${err.policyPath}: ${err.message}`, { path: err.policyPath }); + } + throw err; + } + const vResult = validateLoadedPolicy(loaded); + if (!vResult.valid) { + const firstError = vResult.errors[0]?.message ?? 'schema validation failed'; + exitWithError({ + code: 1, kind: 'usage', + message: `Backup failed validation: ${firstError}`, + context: { errorCount: vResult.errors.length, hint: 'Use --no-validate to restore anyway.' }, + }); + } + } + + const dest = resolvePolicyPath({}); + const destDir = dirname(dest); + if (!existsSync(destDir)) { + mkdirSync(destDir, { recursive: true, mode: 0o700 }); + } + + // Take an auto-backup of the current policy before overwriting. + if (existsSync(dest)) { + const autoBackup = dest.replace(/\.yaml$/, '.pre-restore.bak.yaml').replace(/\.yml$/, '.pre-restore.bak.yml'); + try { + copyFileSync(dest, autoBackup); + } catch { + // best-effort — if it fails, proceed anyway + } + } + + try { + copyFileSync(source, dest); + } catch (err) { + exitPolicyError('internal', `Failed to restore: ${err instanceof Error ? 
err.message : String(err)}`, { source, dest });
+      }
+
+      const size = statSync(dest).size;
+      if (isJsonMode()) {
+        printJson({ ok: true, restored: dest, from: source, sizeBytes: size });
+      } else {
+        console.log(`Policy restored from: ${source}`);
+        console.log(`Active policy: ${dest} (${size} bytes)`);
+        console.log('Run `switchbot policy validate` to confirm the restored file is valid.');
+      }
+    });
 }
 
 function readStdinText(): Promise<string> {
diff --git a/src/commands/rules.ts b/src/commands/rules.ts
index 216200b..cd3b5b2 100644
--- a/src/commands/rules.ts
+++ b/src/commands/rules.ts
@@ -2,7 +2,7 @@ import { Command } from 'commander';
 import fs from 'node:fs';
 import os from 'node:os';
 import path from 'node:path';
-import { isJsonMode, printJson, exitWithError } from '../utils/output.js';
+import { isJsonMode, printJson, exitWithError, printTable } from '../utils/output.js';
 import {
   loadPolicyFile,
   resolvePolicyPath,
@@ -14,6 +14,11 @@ import { validateLoadedPolicy } from '../policy/validate.js';
 import type { AutomationBlock, Rule } from '../rules/types.js';
 import { isWebhookTrigger } from '../rules/types.js';
 import { lintRules, RulesEngine, type LintResult } from '../rules/engine.js';
+import {
+  analyzeConflicts,
+  type ConflictReport,
+  type QuietHours,
+} from '../rules/conflict-analyzer.js';
 import { tryLoadConfig } from '../config.js';
 import { fetchMqttCredential } from '../mqtt/credential.js';
 import { SwitchBotMqttClient } from '../mqtt/client.js';
@@ -45,6 +50,7 @@ interface LoadedAutomation {
   automation: AutomationBlock | null;
   aliases: Record<string, string>;
   schemaVersion?: string;
+  quietHours: QuietHours | null;
 }
 
 function loadAutomation(policyPathFlag: string | undefined): LoadedAutomation | null {
@@ -91,7 +97,12 @@ function loadAutomation(policyPathFlag: string | undefined): LoadedAutomation |
       if (typeof v === 'string') aliases[k] = v;
     }
   }
-  return { path, automation, aliases, schemaVersion: result.schemaVersion };
+  const rawQH = data.quiet_hours as { start?:
unknown; end?: unknown } | null | undefined; + const quietHours: QuietHours | null = + rawQH && typeof rawQH.start === 'string' && typeof rawQH.end === 'string' + ? { start: rawQH.start, end: rawQH.end } + : null; + return { path, automation, aliases, schemaVersion: result.schemaVersion, quietHours }; } function describeTrigger(rule: Rule): string { @@ -663,6 +674,210 @@ function registerSuggest(rules: Command): void { ); } +function formatConflictReport(report: ConflictReport): string { + const lines: string[] = []; + lines.push(`findings: ${report.findings.length} errors: ${report.counts.error} warnings: ${report.counts.warning} info: ${report.counts.info}`); + if (report.findings.length === 0) { + lines.push('No conflicts detected.'); + return lines.join('\n'); + } + for (const f of report.findings) { + lines.push(` [${f.severity}] ${f.code}: ${f.message}`); + if (f.hint) lines.push(` hint: ${f.hint}`); + } + return lines.join('\n'); +} + +function registerConflicts(rules: Command): void { + rules + .command('conflicts [path]') + .description('Detect conflicting or risky rule patterns (opposing actions, high-frequency catch-all, destructive commands).') + .action((pathArg: string | undefined) => { + const loaded = loadAutomation(pathArg); + if (!loaded) return; + const allRules = loaded.automation?.rules ?? []; + const report = analyzeConflicts(allRules, loaded.quietHours); + if (isJsonMode()) { + printJson({ + policyPath: loaded.path, + ruleCount: allRules.length, + ...report, + }); + } else { + console.log(formatConflictReport(report)); + } + process.exit(report.clean ? 0 : 1); + }); +} + +function registerDoctor(rules: Command): void { + rules + .command('doctor [path]') + .description('Combined health check: lint + conflict analysis + operational guidance.') + .action((pathArg: string | undefined) => { + const loaded = loadAutomation(pathArg); + if (!loaded) return; + const allRules = loaded.automation?.rules ?? 
[];
+      const lintResult = lintRules(loaded.automation);
+      const conflictReport = analyzeConflicts(allRules, loaded.quietHours);
+
+      const overall = lintResult.valid && conflictReport.clean;
+
+      if (isJsonMode()) {
+        printJson({
+          policyPath: loaded.path,
+          policySchemaVersion: loaded.schemaVersion,
+          automationEnabled: loaded.automation?.enabled === true,
+          overall,
+          lint: lintResult,
+          conflicts: conflictReport,
+        });
+      } else {
+        console.log('=== Lint ===');
+        console.log(formatLintHuman(lintResult, loaded.schemaVersion));
+        console.log('\n=== Conflicts ===');
+        console.log(formatConflictReport(conflictReport));
+        console.log(`\noverall: ${overall ? 'ok' : 'issues found'}`);
+      }
+      process.exit(overall ? 0 : 1);
+    });
+}
+
+function registerSummary(rules: Command): void {
+  rules
+    .command('summary')
+    .description('Aggregate rule-* audit entries per rule over a time window (fires, throttled, errors).')
+    .option('--file <path>', `Audit log path (default ${DEFAULT_AUDIT_PATH})`)
+    .option('--since <window>', 'Only entries newer than this window (default: 24h). E.g. 1h, 7d.')
+    .option('--rule <name>', 'Filter to a single rule name.')
+    .action((opts: { file?: string; since?: string; rule?: string }) => {
+      const file = opts.file ?? DEFAULT_AUDIT_PATH;
+      const entries = fs.existsSync(file) ? readAudit(file) : [];
+      const sinceMs = resolveSinceMs(opts.since ?? '24h');
+      const filtered = filterRuleAudits(entries, { sinceMs, ruleName: opts.rule });
+      const report = aggregateRuleAudits(filtered);
+      if (isJsonMode()) {
+        printJson({ file, window: opts.since ?? '24h', ruleFilter: opts.rule ?? null, ...report });
+        return;
+      }
+      console.log(`Rule summary (${opts.since ?? '24h'} window, ${report.total} entries)`);
+      if (report.summaries.length === 0) {
+        console.log('(no rule activity in this window)');
+        return;
+      }
+      printTable(
+        ['Rule', 'Trigger', 'Fires', 'Throttled', 'Errors', 'Error%', 'Last fired'],
+        report.summaries.map((s) => [
+          s.rule,
+          s.triggerSource ??
'-',
+          String(s.fires),
+          String(s.throttled),
+          String(s.errors),
+          `${(s.errorRate * 100).toFixed(1)}%`,
+          s.lastAt ?? '-',
+        ]),
+      );
+    });
+}
+
+function registerLastFired(rules: Command): void {
+  rules
+    .command('last-fired')
+    .description('Show the N most recently fired rule-fire entries from the audit log.')
+    .option('--file <path>', `Audit log path (default ${DEFAULT_AUDIT_PATH})`)
+    .option('--rule <name>', 'Filter to a single rule name.')
+    .option('-n <count>', 'Number of entries to show (default: 10).', (v) => Number.parseInt(v, 10))
+    .action((opts: { file?: string; rule?: string; n?: number }) => {
+      const file = opts.file ?? DEFAULT_AUDIT_PATH;
+      const n = opts.n ?? 10;
+      const entries = fs.existsSync(file) ? readAudit(file) : [];
+      const fires = filterRuleAudits(entries, {
+        ruleName: opts.rule,
+        kinds: ['rule-fire', 'rule-fire-dry'],
+      });
+      const recent = fires.slice(-n).reverse();
+      if (isJsonMode()) {
+        printJson({ file, ruleFilter: opts.rule ?? null, count: recent.length, entries: recent });
+        return;
+      }
+      if (recent.length === 0) {
+        console.log(`(no rule-fire entries in ${file}${opts.rule ? ` for rule "${opts.rule}"` : ''})`);
+        return;
+      }
+      for (const e of recent) {
+        const parts = [e.t, e.kind, e.rule?.name ?? '-'];
+        if (e.deviceId) parts.push(`device=${e.deviceId}`);
+        if (e.command) parts.push(`cmd=${e.command}`);
+        if (e.result) parts.push(`result=${e.result}`);
+        console.log(parts.join(' '));
+      }
+    });
+}
+
+function registerExplain(rules: Command): void {
+  rules
+    .command('explain <name> [path]')
+    .description('Show full detail for a named rule: trigger, conditions, actions, and last-fired time.')
+    .option('--file <path>', `Audit log path (default ${DEFAULT_AUDIT_PATH}).`)
+    .action((name: string, pathArg: string | undefined, opts: { file?: string }) => {
+      const loaded = loadAutomation(pathArg);
+      if (!loaded) return;
+      const allRules = loaded.automation?.rules ??
[]; + const rule = allRules.find((r) => r.name === name); + if (!rule) { + exitWithError({ + code: 1, + kind: 'usage', + message: `Rule "${name}" not found in policy.`, + extra: { + subKind: 'rule-not-found', + available: allRules.map((r) => r.name), + }, + }); + return; + } + + const auditFile = opts.file ?? DEFAULT_AUDIT_PATH; + const entries = fs.existsSync(auditFile) ? readAudit(auditFile) : []; + const fires = filterRuleAudits(entries, { ruleName: name, kinds: ['rule-fire', 'rule-fire-dry'] }); + const lastFired = fires.length > 0 ? fires[fires.length - 1].t : null; + + const detail = { + name: rule.name, + enabled: rule.enabled !== false, + trigger: describeTrigger(rule), + conditions: rule.conditions ?? [], + actions: rule.then, + cooldown: rule.cooldown ?? rule.throttle?.max_per ?? null, + hysteresis: rule.hysteresis ?? rule.requires_stable_for ?? null, + maxFiringsPerHour: rule.maxFiringsPerHour ?? null, + suppressIfAlreadyDesired: rule.suppressIfAlreadyDesired ?? false, + dryRun: rule.dry_run === true, + lastFired, + }; + + if (isJsonMode()) { + printJson(detail); + return; + } + + console.log(`name: ${detail.name}`); + console.log(`enabled: ${detail.enabled}`); + console.log(`trigger: ${detail.trigger}`); + console.log(`conditions: ${detail.conditions.length === 0 ? '(none)' : JSON.stringify(detail.conditions)}`); + console.log(`actions: ${detail.actions.length}`); + for (const a of detail.actions) { + console.log(` - ${a.command}${a.device ? ` [${a.device}]` : ''}${a.on_error ? ` on_error=${a.on_error}` : ''}`); + } + if (detail.cooldown) console.log(`cooldown: ${detail.cooldown}`); + if (detail.hysteresis) console.log(`hysteresis: ${detail.hysteresis}`); + if (detail.maxFiringsPerHour !== null) console.log(`maxFiringsPerHour: ${detail.maxFiringsPerHour}`); + if (detail.suppressIfAlreadyDesired) console.log(`suppressIfAlreadyDesired: true`); + if (detail.dryRun) console.log(`dry_run: true`); + console.log(`last fired: ${detail.lastFired ?? 
'(never)'}`);
+    });
+}
+
 export function registerRulesCommand(program: Command): void {
   const rules = program
     .command('rules')
@@ -677,11 +892,16 @@ Subcommands:
   suggest               Generate a candidate rule YAML from intent (heuristic, no LLM).
   lint [path]           Static-check rule definitions; no MQTT, no API calls.
   list [path]           Print a human/JSON summary of each rule's trigger + actions.
+  explain <name>        Show full detail for a rule: trigger, conditions, actions, last-fired.
   run [path]            Subscribe to MQTT (+ cron/webhook) and execute matching rules.
   reload                Hot-reload the running engine's policy (SIGHUP on Unix, pid-file sentinel on Windows).
   tail                  Stream rule-* entries from the audit log (--follow tails).
   replay                Per-rule aggregate: fires/dries/throttled/errors + window.
+  conflicts [path]      Detect conflicting or risky rule patterns.
+  doctor [path]         Combined health check: lint + conflict analysis + summary.
+  summary               Aggregate rule-fire counts per rule over a time window.
+  last-fired            Show the N most recently fired rule-fire audit entries.
   webhook-rotate-token  Rotate the bearer token used for webhook triggers.
   webhook-show-token    Print the current bearer token (creating one if absent).
@@ -699,10 +919,15 @@ Exit codes (lint):
   registerSuggest(rules);
   registerLint(rules);
   registerList(rules);
+  registerExplain(rules);
   registerRun(rules);
   registerReload(rules);
   registerTail(rules);
   registerReplay(rules);
+  registerConflicts(rules);
+  registerDoctor(rules);
+  registerSummary(rules);
+  registerLastFired(rules);
   registerWebhookRotateToken(rules);
   registerWebhookShowToken(rules);
 }
diff --git a/src/commands/scenes.ts b/src/commands/scenes.ts
index 81d8e95..e9618bc 100644
--- a/src/commands/scenes.ts
+++ b/src/commands/scenes.ts
@@ -128,4 +128,142 @@ Example:
       handleError(error);
     }
   });
+
+  // switchbot scenes validate [sceneId...]
+  scenes
+    .command('validate')
+    .description('Verify that one or more scenes exist.
If no IDs are given, validates all scenes are reachable.')
+    .argument('[sceneId...]', 'Scene IDs to validate (default: all scenes)')
+    .addHelpText('after', `
+Note: SwitchBot API v1.1 does not expose scene steps; validation only confirms
+the scene IDs exist in your account.
+
+Examples:
+  $ switchbot scenes validate
+  $ switchbot scenes validate T12345678 T87654321
+`)
+    .action(async (sceneIds: string[]) => {
+      try {
+        const sceneList = await fetchScenes();
+        const sceneMap = new Map(sceneList.map((s) => [s.sceneId, s.sceneName]));
+        const targets = sceneIds.length > 0 ? sceneIds : sceneList.map((s) => s.sceneId);
+        const results = targets.map((id) => ({
+          sceneId: id,
+          sceneName: sceneMap.get(id) ?? null,
+          valid: sceneMap.has(id),
+        }));
+        const allValid = results.every((r) => r.valid);
+        if (isJsonMode()) {
+          printJson({ ok: allValid, results });
+          if (!allValid) process.exit(1);
+          return;
+        }
+        for (const r of results) {
+          const icon = r.valid ? '✓' : '✗';
+          const label = r.valid ? r.sceneName! : '(not found)';
+          console.log(`${icon} ${r.sceneId} ${label}`);
+        }
+        if (!allValid) process.exit(1);
+      } catch (error) {
+        handleError(error);
+      }
+    });
+
+  // switchbot scenes simulate <sceneId>
+  scenes
+    .command('simulate')
+    .description('Show what `scenes execute` would do without actually executing the scene.')
+    .argument('<sceneId>', 'Scene ID from "scenes list"')
+    .addHelpText('after', `
+Note: SwitchBot API v1.1 does not expose scene step details. Simulation reports
+the scene name, confirms it exists, and shows the POST that would be issued.
+
+Example:
+  $ switchbot scenes simulate T12345678
+`)
+    .action(async (sceneId: string) => {
+      try {
+        const sceneList = await fetchScenes();
+        const found = sceneList.find((s) => s.sceneId === sceneId);
+        if (!found) {
+          throw new StructuredUsageError(`scene not found: ${sceneId}`, {
+            error: 'scene_not_found',
+            sceneId,
+            candidates: sceneList.map((s) => ({ sceneId: s.sceneId, sceneName: s.sceneName })),
+          });
+        }
+        const simulation = {
+          sceneId: found.sceneId,
+          sceneName: found.sceneName,
+          wouldSend: { method: 'POST', url: `/v1.1/scenes/${sceneId}/execute` },
+          note: 'SwitchBot API v1.1 does not expose individual scene steps.',
+        };
+        if (isJsonMode()) {
+          printJson({ simulated: true, ...simulation });
+          return;
+        }
+        console.log(`sceneId: ${simulation.sceneId}`);
+        console.log(`sceneName: ${simulation.sceneName}`);
+        console.log(`wouldSend: ${simulation.wouldSend.method} ${simulation.wouldSend.url}`);
+        console.log(`note: ${simulation.note}`);
+      } catch (error) {
+        handleError(error);
+      }
+    });
+
+  // switchbot scenes explain <sceneId>
+  scenes
+    .command('explain')
+    .description('Explain in plain language what a scene does and how to execute it safely.')
+    .argument('<sceneId>', 'Scene ID from "scenes list"')
+    .addHelpText('after', `
+Shows the scene name, action description, risk level, and the exact command to
+run. Unlike "simulate" (which shows raw HTTP detail), "explain" is aimed at a
+human or agent deciding whether to proceed.
+
+Note: SwitchBot API v1.1 does not expose scene step details; risk is reported
+as "low" because scenes only trigger pre-configured automations in the app.
+ +Example: + $ switchbot scenes explain T12345678 +`) + .action(async (sceneId: string) => { + try { + const sceneList = await fetchScenes(); + const found = sceneList.find((s) => s.sceneId === sceneId); + if (!found) { + throw new StructuredUsageError(`scene not found: ${sceneId}`, { + error: 'scene_not_found', + sceneId, + candidates: sceneList.map((s) => ({ sceneId: s.sceneId, sceneName: s.sceneName })), + }); + } + const explanation = { + sceneId: found.sceneId, + sceneName: found.sceneName, + action: `Trigger scene (POST /v1.1/scenes/${found.sceneId}/execute)`, + riskLevel: 'low' as const, + idempotent: null as boolean | null, + toExecute: `switchbot scenes execute ${found.sceneId}`, + dryRun: isDryRun(), + note: 'SwitchBot API v1.1 does not expose individual scene steps.', + }; + if (isJsonMode()) { + printJson(explanation); + return; + } + console.log(`sceneId: ${explanation.sceneId}`); + console.log(`sceneName: ${explanation.sceneName}`); + console.log(`action: ${explanation.action}`); + console.log(`riskLevel: ${explanation.riskLevel}`); + console.log(`idempotent: unknown (scene steps not exposed by API)`); + console.log(`toExecute: ${explanation.toExecute}`); + if (explanation.dryRun) { + console.log(`dryRun: true (global --dry-run is active; execute would be a no-op)`); + } + console.log(`note: ${explanation.note}`); + } catch (error) { + handleError(error); + } + }); } diff --git a/src/commands/upgrade-check.ts b/src/commands/upgrade-check.ts new file mode 100644 index 0000000..0b58a14 --- /dev/null +++ b/src/commands/upgrade-check.ts @@ -0,0 +1,87 @@ +import { Command } from 'commander'; +import { createRequire } from 'node:module'; +import https from 'node:https'; +import { isJsonMode, printJson } from '../utils/output.js'; +import chalk from 'chalk'; + +const require = createRequire(import.meta.url); +const { name: pkgName, version: currentVersion } = require('../../package.json') as { name: string; version: string }; + +function
fetchLatestVersion(packageName: string, timeoutMs = 8000): Promise<string> { + const encoded = packageName.replace('/', '%2F'); + const url = `https://registry.npmjs.org/${encoded}/latest`; + return new Promise<string>((resolve, reject) => { + const req = https.get(url, { timeout: timeoutMs }, (res) => { + const chunks: Buffer[] = []; + res.on('data', (c: Buffer) => chunks.push(c)); + res.on('end', () => { + try { + const body = JSON.parse(Buffer.concat(chunks).toString('utf-8')) as { version?: string }; + if (typeof body.version === 'string') resolve(body.version); + else reject(new Error('version field missing from registry response')); + } catch (err) { + reject(err); + } + }); + }); + req.on('timeout', () => { req.destroy(); reject(new Error(`registry request timed out after ${timeoutMs}ms`)); }); + req.on('error', reject); + }); +} + +function semverGt(a: string, b: string): boolean { + const numParts = (v: string) => v.replace(/-.*$/, '').split('.').map((n) => Number.parseInt(n, 10)); + const [aMaj, aMin, aPat] = numParts(a); + const [bMaj, bMin, bPat] = numParts(b); + if (aMaj !== bMaj) return aMaj > bMaj; + if (aMin !== bMin) return aMin > bMin; + if (aPat !== bPat) return aPat > bPat; + // Same numeric version: release (no prerelease) > prerelease + return !a.includes('-') && b.includes('-'); +} + +export function registerUpgradeCheckCommand(program: Command): void { + program + .command('upgrade-check') + .description('Check whether a newer version of this CLI is available on npm.') + .option('--timeout <ms>', 'Registry request timeout in milliseconds (default: 8000)', (v) => Number.parseInt(v, 10)) + .action(async (opts: { timeout?: number }) => { + let latestVersion: string; + try { + latestVersion = await fetchLatestVersion(pkgName, opts.timeout ?? 8000); + } catch (err) { + const msg = err instanceof Error ?
err.message : String(err); + if (isJsonMode()) { + printJson({ ok: false, error: msg, current: currentVersion }); + } else { + console.error(chalk.red(`upgrade-check failed: ${msg}`)); + } + process.exit(1); + } + + const upToDate = !semverGt(latestVersion, currentVersion); + const currentMajor = Number.parseInt(currentVersion.split('.')[0], 10); + const latestMajor = Number.parseInt(latestVersion.split('.')[0], 10); + const result = { + current: currentVersion, + latest: latestVersion, + upToDate, + updateAvailable: !upToDate, + breakingChange: latestMajor > currentMajor, + installCommand: upToDate ? null : `npm install -g ${pkgName}@${latestVersion}`, + }; + + if (isJsonMode()) { + printJson(result); + return; + } + + if (upToDate) { + console.log(`${chalk.green('✓')} You are running the latest version (${currentVersion}).`); + } else { + console.log(`${chalk.yellow('!')} Update available: ${chalk.bold(currentVersion)} → ${chalk.bold(latestVersion)}`); + console.log(` Run: ${chalk.cyan(`npm install -g ${pkgName}@${latestVersion}`)}`); + process.exit(1); + } + }); +} diff --git a/src/index.ts b/src/index.ts index c356397..25763c2 100644 --- a/src/index.ts +++ b/src/index.ts @@ -29,6 +29,9 @@ import { registerAuthCommand } from './commands/auth.js'; import { registerInstallCommand } from './commands/install.js'; import { registerUninstallCommand } from './commands/uninstall.js'; import { registerStatusSyncCommand } from './commands/status-sync.js'; +import { registerHealthCommand } from './commands/health.js'; +import { registerUpgradeCheckCommand } from './commands/upgrade-check.js'; +import { registerDaemonCommand } from './commands/daemon.js'; import { primeCredentials } from './credentials/prime.js'; import { getActiveProfile } from './lib/request-context.js'; @@ -54,6 +57,7 @@ const TOP_LEVEL_COMMANDS = [ 'config', 'devices', 'scenes', 'webhook', 'completion', 'mcp', 'quota', 'catalog', 'cache', 'events', 'doctor', 'schema', 'history', 'plan', 'capabilities', 
'agent-bootstrap', 'install', 'uninstall', 'status-sync', + 'health', 'upgrade-check', 'daemon', ] as const; const cacheModeArg = (value: string): string => { @@ -117,6 +121,9 @@ registerAuthCommand(program); registerInstallCommand(program); registerUninstallCommand(program); registerStatusSyncCommand(program); +registerHealthCommand(program); +registerUpgradeCheckCommand(program); +registerDaemonCommand(program); // Prime keychain-stored credentials before any command runs. This is a // best-effort probe: failures are silently swallowed inside primeCredentials, diff --git a/src/lib/daemon-state.ts b/src/lib/daemon-state.ts new file mode 100644 index 0000000..51b4931 --- /dev/null +++ b/src/lib/daemon-state.ts @@ -0,0 +1,72 @@ +import fs from 'node:fs'; +import os from 'node:os'; +import path from 'node:path'; + +function getStateDir(): string { + return path.join(os.homedir(), '.switchbot'); +} + +function getDaemonPidFile(): string { + return path.join(getStateDir(), 'daemon.pid'); +} + +function getDaemonLogFile(): string { + return path.join(getStateDir(), 'daemon.log'); +} + +function getDaemonStateFile(): string { + return path.join(getStateDir(), 'daemon.state.json'); +} + +function getHealthzPidFile(): string { + return path.join(getStateDir(), 'healthz.pid'); +} + +export const DAEMON_PID_FILE = getDaemonPidFile(); +export const DAEMON_LOG_FILE = getDaemonLogFile(); +export const DAEMON_STATE_FILE = getDaemonStateFile(); +export const HEALTHZ_PID_FILE = getHealthzPidFile(); + +export interface DaemonState { + status: 'starting' | 'running' | 'stopped' | 'failed'; + pid: number | null; + startedAt?: string; + stoppedAt?: string; + failedAt?: string; + failureReason?: string; + logFile: string; + pidFile: string; + stateFile: string; + healthzPid?: number | null; + healthzPort?: number | null; + healthzPidFile?: string; + lastReloadAt?: string; + lastReloadStatus?: 'ok' | 'failed'; + lastReloadMessage?: string; +} + +function ensureStateDir(): void { + 
fs.mkdirSync(getStateDir(), { recursive: true, mode: 0o700 }); +} + +export function writeDaemonState(state: DaemonState): void { + ensureStateDir(); + fs.writeFileSync(getDaemonStateFile(), JSON.stringify(state, null, 2), { mode: 0o600 }); +} + +export function readDaemonState(): DaemonState | null { + try { + const raw = fs.readFileSync(getDaemonStateFile(), 'utf-8'); + return JSON.parse(raw) as DaemonState; + } catch { + return null; + } +} + +export function removeDaemonState(): void { + try { + fs.unlinkSync(getDaemonStateFile()); + } catch { + // best effort + } +} diff --git a/src/lib/destructive-mode.ts b/src/lib/destructive-mode.ts new file mode 100644 index 0000000..cf4c75a --- /dev/null +++ b/src/lib/destructive-mode.ts @@ -0,0 +1,13 @@ +import { getActiveProfile } from './request-context.js'; + +const DIRECT_DESTRUCTIVE_PROFILES = new Set(['dev', 'development']); + +export function allowsDirectDestructiveExecution(profile = getActiveProfile()): boolean { + if (process.env.SWITCHBOT_ALLOW_DIRECT_DESTRUCTIVE === '1') return true; + if (!profile) return false; + return DIRECT_DESTRUCTIVE_PROFILES.has(profile.toLowerCase()); +} + +export function destructiveExecutionHint(): string { + return "Use 'switchbot plan save <file>' -> 'switchbot plan review <planId>' -> 'switchbot plan approve <planId>' -> 'switchbot plan execute <planId>' instead."; +} diff --git a/src/lib/devices.ts b/src/lib/devices.ts index a3cebb5..5e2bc48 100644 --- a/src/lib/devices.ts +++ b/src/lib/devices.ts @@ -160,7 +160,7 @@ export async function executeCommand( parameter: unknown, commandType: 'command' | 'customize', client?: AxiosInstance, - options?: { idempotencyKey?: string } + options?: { idempotencyKey?: string; planId?: string } ): Promise { const c = client ?? createClient(); const body = { @@ -176,6 +176,7 @@ export async function executeCommand( parameter, commandType, dryRun: isDryRun(), + ...(options?.planId ?
{ planId: options.planId } : {}), }; // Wrap in idempotency cache if key is provided diff --git a/src/lib/plan-store.ts b/src/lib/plan-store.ts new file mode 100644 index 0000000..2e94d21 --- /dev/null +++ b/src/lib/plan-store.ts @@ -0,0 +1,88 @@ +import fs from 'node:fs'; +import os from 'node:os'; +import path from 'node:path'; +import { randomUUID } from 'node:crypto'; +import type { Plan } from '../commands/plan.js'; + +export const PLANS_DIR = path.join(os.homedir(), '.switchbot', 'plans'); + +export type PlanStatus = 'pending' | 'approved' | 'rejected' | 'executed' | 'failed'; + +export interface PlanRecord { + planId: string; + createdAt: string; + status: PlanStatus; + approvedAt?: string; + executedAt?: string; + /** Set when status transitions to 'failed'. */ + failedAt?: string; + /** Summary of why the plan failed (e.g. "1 error, 2 skipped"). */ + failureReason?: string; + plan: Plan; +} + +function ensurePlansDir(): void { + fs.mkdirSync(PLANS_DIR, { recursive: true, mode: 0o700 }); +} + +function planPath(planId: string): string { + return path.join(PLANS_DIR, `${planId}.json`); +} + +const UUID_V4_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i; + +function assertValidPlanId(planId: string): void { + if (!UUID_V4_RE.test(planId)) { + throw new Error(`invalid planId: ${planId}`); + } +} + +export function savePlanRecord(plan: Plan): PlanRecord { + ensurePlansDir(); + const record: PlanRecord = { + planId: randomUUID(), + createdAt: new Date().toISOString(), + status: 'pending', + plan, + }; + fs.writeFileSync(planPath(record.planId), JSON.stringify(record, null, 2), { mode: 0o600 }); + return record; +} + +export function loadPlanRecord(planId: string): PlanRecord | null { + assertValidPlanId(planId); + try { + const raw = fs.readFileSync(planPath(planId), 'utf-8'); + return JSON.parse(raw) as PlanRecord; + } catch { + return null; + } +} + +export function updatePlanRecord(planId: string, updates: Partial<PlanRecord>): PlanRecord {
+ assertValidPlanId(planId); + const record = loadPlanRecord(planId); + if (!record) throw new Error(`Plan ${planId} not found in ${PLANS_DIR}`); + const updated = { ...record, ...updates }; + fs.writeFileSync(planPath(planId), JSON.stringify(updated, null, 2), { mode: 0o600 }); + return updated; +} + +export function listPlanRecords(): PlanRecord[] { + try { + if (!fs.existsSync(PLANS_DIR)) return []; + return fs + .readdirSync(PLANS_DIR) + .filter((f) => f.endsWith('.json')) + .flatMap((f) => { + try { + return [JSON.parse(fs.readFileSync(path.join(PLANS_DIR, f), 'utf-8')) as PlanRecord]; + } catch { + return []; + } + }) + .sort((a, b) => a.createdAt.localeCompare(b.createdAt)); + } catch { + return []; + } +} diff --git a/src/policy/schema/v0.2.json b/src/policy/schema/v0.2.json index 2451beb..58aa07e 100644 --- a/src/policy/schema/v0.2.json +++ b/src/policy/schema/v0.2.json @@ -130,10 +130,39 @@ "type": "string", "pattern": "^\\d+[smh]$", "description": "Minimum spacing between fires, e.g. \"10m\". Later triggers inside the window are suppressed and audited." + }, + "dedupe_window": { + "type": "string", + "pattern": "^\\d+[smh]$", + "description": "Deduplicate identical events arriving within this window after the last fire. Complements max_per: use when rapid sensor bursts should collapse into one action." } }, "required": ["max_per"] }, + "cooldown": { + "type": "string", + "pattern": "^\\d+[smh]$", + "description": "Shorthand cooldown — minimum pause after a rule fires before it can fire again. Equivalent to throttle.max_per but at rule level. Takes precedence over throttle.max_per when both are set." + }, + "requires_stable_for": { + "type": "string", + "pattern": "^\\d+[smh]$", + "description": "Hysteresis guard — the triggering device state must remain stable (unchanged) for this duration before the rule fires. Prevents action storms from transient sensor flicker. Validated at lint; enforcement is best-effort in the engine." 
+ }, + "hysteresis": { + "type": "string", + "pattern": "^\\d+[smh]$", + "description": "Alias for requires_stable_for with clearer semantics. Takes precedence when both are set. The triggering state must be continuously observed for this duration before the rule fires." + }, + "maxFiringsPerHour": { + "type": "integer", + "minimum": 1, + "description": "Maximum number of times this rule may fire in any rolling 60-minute window. Applied after cooldown/throttle checks. Useful as an absolute safety cap for high-frequency MQTT triggers." + }, + "suppressIfAlreadyDesired": { + "type": "boolean", + "description": "When true, skip the rule's actions if the target device's last-known cached state already matches the expected outcome of the command (e.g. skip turnOn if powerState is already 'on'). Best-effort: requires a warm device cache." + }, "dry_run": { "type": "boolean", "default": true, diff --git a/src/rules/conflict-analyzer.ts b/src/rules/conflict-analyzer.ts new file mode 100644 index 0000000..1f44a42 --- /dev/null +++ b/src/rules/conflict-analyzer.ts @@ -0,0 +1,221 @@ +/** + * Static conflict analysis for automation rules. + * + * Detects patterns that are technically valid but likely to cause + * operational problems: + * + * 1. Opposing-action pairs — same device, opposite commands (e.g. + * turnOn / turnOff), triggered by the same source within a short + * window and with no throttle on either rule. + * + * 2. High-frequency MQTT rules without throttle — rules that listen + * on `device.shadow` (catch-all) with no throttle can fire on + * every shadow push (up to once per second) and exhaust the daily + * API quota quickly. + * + * 3. Potentially-destructive action without quiet-hours protection — + * a rule that targets a destructive verb is technically blocked by + * the engine, but we can still flag it early so users don't get a + * surprise at runtime. + * + * Results are designed to be consumed by `rules doctor --json` and + * by CI pipelines. 
Each finding carries a `severity` so callers can + * decide how to gate on them. + */ + +import type { Rule, Condition } from './types.js'; +import { isTimeBetween, isAllCondition, isAnyCondition, isNotCondition } from './types.js'; +import { parseMaxPerMs } from './throttle.js'; +import { isDestructiveCommand } from './destructive.js'; + +export type ConflictSeverity = 'error' | 'warning' | 'info'; + +export interface ConflictFinding { + severity: ConflictSeverity; + code: string; + message: string; + /** Rule names involved in this finding. */ + rules: string[]; + hint?: string; +} + +export interface ConflictReport { + findings: ConflictFinding[]; + /** Count per severity level. */ + counts: Record<ConflictSeverity, number>; + /** True when there are no error-severity findings. */ + clean: boolean; +} + +/** Known opposing command pairs (order-independent). */ +const OPPOSING_PAIRS: Array<[string, string]> = [ + ['turnOn', 'turnOff'], + ['lock', 'unlock'], + ['open', 'close'], + ['openDoor', 'closeDoor'], + ['openCurtain', 'closeCurtain'], + ['turnOn', 'standby'], + ['brightnessUp', 'brightnessDown'], + ['volumeUp', 'volumeDown'], + ['fanSpeedUp', 'fanSpeedDown'], +]; + +function commandsAreOpposing(a: string, b: string): boolean { + for (const [x, y] of OPPOSING_PAIRS) { + if ((a === x && b === y) || (a === y && b === x)) return true; + } + return false; +} + +function extractDeviceFromAction(action: { command: string; device?: string }): string | null { + return action.device ?? null; +} + +function extractCommandVerb(command: string): string { + // command strings are like "devices command turnOn" — extract last token + const parts = command.trim().split(/\s+/); + return parts[parts.length - 1] ??
command; +} + +function effectiveCooldownMs(rule: Rule): number | null { + if (rule.cooldown) { + try { return parseMaxPerMs(rule.cooldown); } catch { return null; } + } + if (rule.throttle?.max_per) { + try { return parseMaxPerMs(rule.throttle.max_per); } catch { return null; } + } + return null; +} + +function hasTimeBetweenGuard(conditions: Condition[] | null | undefined): boolean { + if (!conditions) return false; + for (const c of conditions) { + if (isTimeBetween(c)) return true; + if (isAllCondition(c) && hasTimeBetweenGuard(c.all)) return true; + if (isAnyCondition(c) && hasTimeBetweenGuard(c.any)) return true; + if (isNotCondition(c) && hasTimeBetweenGuard([c.not])) return true; + } + return false; +} + +export interface QuietHours { + start: string; + end: string; +} + +export function analyzeConflicts(rules: Rule[], quietHours?: QuietHours | null): ConflictReport { + const findings: ConflictFinding[] = []; + const active = rules.filter((r) => r.enabled !== false); + + // 1. Opposing-action pairs on the same device + for (let i = 0; i < active.length; i++) { + for (let j = i + 1; j < active.length; j++) { + const a = active[i]; + const b = active[j]; + // Only flag when they share the same trigger source (otherwise they + // can't race each other in normal operation). + if (a.when.source !== b.when.source) continue; + + const cooldownA = effectiveCooldownMs(a); + const cooldownB = effectiveCooldownMs(b); + // If both rules have meaningful cooldowns (≥ 5 minutes), the risk is + // low — skip. + const bothThrottled = + cooldownA !== null && cooldownA >= 5 * 60_000 && + cooldownB !== null && cooldownB >= 5 * 60_000; + if (bothThrottled) continue; + + for (const actionA of a.then) { + for (const actionB of b.then) { + const deviceA = extractDeviceFromAction(actionA); + const deviceB = extractDeviceFromAction(actionB); + // Skip if devices can't be compared. 
+ if (!deviceA || !deviceB || deviceA !== deviceB) continue; + const verbA = extractCommandVerb(actionA.command); + const verbB = extractCommandVerb(actionB.command); + if (commandsAreOpposing(verbA, verbB)) { + const noThrottle = cooldownA === null || cooldownB === null; + findings.push({ + severity: noThrottle ? 'warning' : 'info', + code: 'opposing-actions', + message: `Rules "${a.name}" and "${b.name}" issue opposing commands (${verbA} / ${verbB}) on device "${deviceA}" via the same trigger source.`, + rules: [a.name, b.name], + hint: noThrottle + ? 'Add a "cooldown" or "throttle.max_per" to both rules to prevent rapid state oscillation.' + : undefined, + }); + } + } + } + } + } + + // 2. High-frequency MQTT catch-all rules without throttle + for (const rule of active) { + if (rule.when.source !== 'mqtt') continue; + const event = (rule.when as { event: string }).event; + const isHighFreq = event === 'device.shadow' || event === '*'; + if (!isHighFreq) continue; + const cooldown = effectiveCooldownMs(rule); + if (cooldown === null) { + findings.push({ + severity: 'warning', + code: 'high-frequency-no-throttle', + message: `Rule "${rule.name}" listens on "${event}" (high-frequency catch-all) with no throttle/cooldown. This can rapidly exhaust the daily API quota.`, + rules: [rule.name], + hint: 'Add "cooldown: 1m" or "throttle: { max_per: 1m }" to rate-limit this rule.', + }); + } else if (cooldown < 30_000) { + findings.push({ + severity: 'info', + code: 'high-frequency-low-throttle', + message: `Rule "${rule.name}" listens on "${event}" with a throttle under 30 s. Consider increasing to at least 1 m to protect API quota.`, + rules: [rule.name], + }); + } + } + + // 3. Destructive actions in rules (engine blocks these at runtime, but + // surface early with clear guidance). 
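Pass 1 above reduces opposing-action detection to an order-independent table lookup plus last-token verb extraction. A self-contained restatement of those two helpers (mirroring `commandsAreOpposing` and `extractCommandVerb` above, with the pair table trimmed for brevity):

```typescript
// Order-independent lookup over known opposing command pairs.
const PAIRS: Array<[string, string]> = [
  ['turnOn', 'turnOff'],
  ['lock', 'unlock'],
  ['open', 'close'],
];

function opposing(a: string, b: string): boolean {
  return PAIRS.some(([x, y]) => (a === x && b === y) || (a === y && b === x));
}

// Rule actions carry full command strings like "devices command turnOn";
// only the final token is compared.
function verbOf(command: string): string {
  const parts = command.trim().split(/\s+/);
  return parts[parts.length - 1] ?? command;
}
```

Two enabled rules sharing a trigger source produce an `opposing-actions` finding when `opposing(verbOf(a.command), verbOf(b.command))` holds for actions on the same device.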
+ for (const rule of active) { + for (let i = 0; i < rule.then.length; i++) { + const verb = extractCommandVerb(rule.then[i].command); + if (isDestructiveCommand(verb)) { + findings.push({ + severity: 'error', + code: 'destructive-action-in-rule', + message: `Rule "${rule.name}" then[${i}] contains destructive command "${verb}". The engine blocks this at runtime.`, + rules: [rule.name], + hint: 'Remove the destructive command or replace it with a non-destructive alternative.', + }); + } + } + } + + // 4. Event-driven rules with no time_between guard when quiet_hours is defined. + // Cron rules fire on an explicit schedule so their overlap with quiet hours + // requires schedule analysis — flag those separately when needed. + if (quietHours?.start && quietHours?.end) { + for (const rule of active) { + if (rule.when.source === 'cron') continue; + if (!hasTimeBetweenGuard(rule.conditions)) { + findings.push({ + severity: 'warning', + code: 'no-quiet-hours-guard', + message: `Rule "${rule.name}" (${rule.when.source} trigger) has no time_between condition and may fire during quiet hours (${quietHours.start}–${quietHours.end}).`, + rules: [rule.name], + hint: `Add a conditions entry "{ time_between: ['${quietHours.end}', '${quietHours.start}'] }" to block this rule during quiet hours.`, + }); + } + } + } + + const counts: Record<ConflictSeverity, number> = { error: 0, warning: 0, info: 0 }; + for (const f of findings) counts[f.severity]++; + + return { + findings, + counts, + clean: counts.error === 0, + }; +} diff --git a/src/rules/engine.ts b/src/rules/engine.ts index a08efcd..f549b4d 100644 --- a/src/rules/engine.ts +++ b/src/rules/engine.ts @@ -31,7 +31,7 @@ import { type DeviceStatusFetcher, } from './matcher.js'; import { ThrottleGate, parseMaxPerMs } from './throttle.js'; -import { executeRuleAction } from './action.js'; +import { executeRuleAction, parseRuleCommand } from './action.js'; import { CronScheduler } from './cron-scheduler.js'; import { WebhookListener, DEFAULT_WEBHOOK_PORT }
from './webhook-listener.js'; import { @@ -141,6 +141,88 @@ export function lintRules(automation: AutomationBlock | null | undefined): LintR message: `throttle.max_per "${r.throttle.max_per}" is not a valid duration.`, }); } + if (r.throttle.dedupe_window) { + try { + parseMaxPerMs(r.throttle.dedupe_window); + } catch { + issues.push({ + rule: r.name, + severity: 'error', + code: 'invalid-dedupe-window', + message: `throttle.dedupe_window "${r.throttle.dedupe_window}" is not a valid duration.`, + }); + } + } + } + + // cooldown field validation + if (r.cooldown) { + try { + parseMaxPerMs(r.cooldown); + } catch { + issues.push({ + rule: r.name, + severity: 'error', + code: 'invalid-cooldown', + message: `cooldown "${r.cooldown}" is not a valid duration (expected e.g. "10m", "1h").`, + }); + } + if (r.throttle?.max_per) { + issues.push({ + rule: r.name, + severity: 'warning', + code: 'cooldown-throttle-overlap', + message: `Both "cooldown" and "throttle.max_per" are set. "cooldown" takes precedence.`, + }); + } + } + + // requires_stable_for field validation + if (r.requires_stable_for) { + try { + parseMaxPerMs(r.requires_stable_for); + } catch { + issues.push({ + rule: r.name, + severity: 'error', + code: 'invalid-requires-stable-for', + message: `requires_stable_for "${r.requires_stable_for}" is not a valid duration.`, + }); + } + } + + // hysteresis field validation + if (r.hysteresis) { + try { + parseMaxPerMs(r.hysteresis); + } catch { + issues.push({ + rule: r.name, + severity: 'error', + code: 'invalid-hysteresis', + message: `hysteresis "${r.hysteresis}" is not a valid duration (expected e.g. "10m", "1h").`, + }); + } + if (r.requires_stable_for) { + issues.push({ + rule: r.name, + severity: 'warning', + code: 'hysteresis-requires-stable-overlap', + message: `Both "hysteresis" and "requires_stable_for" are set. 
"hysteresis" takes precedence.`, + }); + } + } + + // maxFiringsPerHour field validation + if (r.maxFiringsPerHour !== undefined) { + if (!Number.isInteger(r.maxFiringsPerHour) || r.maxFiringsPerHour < 1) { + issues.push({ + rule: r.name, + severity: 'error', + code: 'invalid-maxFiringsPerHour', + message: `maxFiringsPerHour must be a positive integer (got ${r.maxFiringsPerHour}).`, + }); + } } const enabled = r.enabled !== false; @@ -223,6 +305,8 @@ export class RulesEngine { private rules: Rule[]; private aliases: Record<string, string>; private readonly throttle = new ThrottleGate(); + /** hysteresis / requires_stable_for: tracks when each (rule::device) trigger was first seen. */ + private hysteresisFirstSeen = new Map<string, number>(); private unsubscribeMessage: (() => void) | null = null; private unsubscribeState: (() => void) | null = null; private cronScheduler: CronScheduler | null = null; @@ -460,6 +544,23 @@ export class RulesEngine { await this.enqueue(() => this.onMqttMessage(payload, { preParsed: true })); } + /** + * Dispatch a pre-built EngineEvent through all matching MQTT rules. + * Used by tests that need full control over the event timestamp (e.g. + * hysteresis tests that advance time manually). + */ + async ingestEventForTest(event: EngineEvent): Promise<void> { + for (const rule of this.rules) { + if (!isMqttTrigger(rule.when)) continue; + const resolvedFilter = rule.when.device + ? this.aliases[rule.when.device] ?? rule.when.device + : undefined; + if (!matchesMqttTrigger(rule.when, event, resolvedFilter)) continue; + this.stats.eventsProcessed++; + await this.enqueue(() => this.dispatchRule(rule, event)); + } + } + /** * Fire a cron rule directly without needing the scheduler/timers.
* Used by tests that want to exercise the dispatch pipeline without @@ -601,6 +702,13 @@ export class RulesEngine { fetchStatus, }); if (!cond.matched) { + // If conditions are not met, the trigger is no longer "continuously stable" — + // reset the hysteresis clock so intermittent satisfaction never accumulates. + const hasHysteresis = rule.hysteresis ?? rule.requires_stable_for; + if (hasHysteresis) { + const hysteresisKey = `${rule.name}::${event.deviceId ?? ''}`; + this.hysteresisFirstSeen.delete(hysteresisKey); + } if (cond.unsupported.length > 0) { writeAudit({ t: event.t.toISOString(), @@ -628,9 +736,13 @@ export class RulesEngine { return; } - const windowMs = rule.throttle ? parseMaxPerMs(rule.throttle.max_per) : null; + // cooldown takes precedence over throttle.max_per when both are set. + const effectiveMaxPerMs = rule.cooldown + ? parseMaxPerMs(rule.cooldown) + : rule.throttle ? parseMaxPerMs(rule.throttle.max_per) : null; + const dedupeWindowMs = rule.throttle?.dedupe_window ? parseMaxPerMs(rule.throttle.dedupe_window) : null; const throttleKey = event.deviceId; - const check = this.throttle.check(rule.name, windowMs, event.t.getTime(), throttleKey); + const check = this.throttle.check(rule.name, effectiveMaxPerMs, event.t.getTime(), throttleKey, dedupeWindowMs); if (!check.allowed) { this.stats.throttled++; writeAudit({ @@ -648,14 +760,90 @@ export class RulesEngine { matchedDevice: event.deviceId, fireId, reason: check.nextAllowedAt - ? `throttled — next allowed at ${new Date(check.nextAllowedAt).toISOString()}` - : 'throttled', + ? `${check.dedupedBy ?? 'throttled'} — next allowed at ${new Date(check.nextAllowedAt).toISOString()}` + : check.dedupedBy ?? 'throttled', }, }); this.opts.onFire?.({ ruleName: rule.name, fireId, status: 'throttled', deviceId: event.deviceId }); return; } + // hysteresis / requires_stable_for: require the trigger to have been continuously + // observed for the specified duration before firing. 
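In isolation, this stability window amounts to recording the first observation per (rule, device) key and firing only once the key has been continuously observed for the window. A simplified standalone model (the class name is illustrative; as the surrounding code shows, the real engine also resets the clock whenever the rule's conditions stop matching):

```typescript
// Simplified hysteresis gate: shouldFire() returns true only once the key
// has been observed continuously for at least windowMs. The first sighting
// arms the clock; a successful fire clears it so the next burst starts fresh.
class HysteresisGate {
  private firstSeen = new Map<string, number>();

  shouldFire(key: string, now: number, windowMs: number): boolean {
    const first = this.firstSeen.get(key);
    if (first === undefined) {
      this.firstSeen.set(key, now); // first observation: start the stability clock
      return false;
    }
    if (now - first < windowMs) return false; // not stable long enough yet
    this.firstSeen.delete(key); // stable: fire and reset
    return true;
  }

  // Called when conditions stop matching, so intermittent satisfaction
  // never accumulates toward the stability window.
  reset(key: string): void {
    this.firstSeen.delete(key);
  }
}
```

With `requires_stable_for: "10m"`, matching shadow pushes at t=0, t=5m, and t=12m would fire only on the third.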
+ const hysteresisMs = rule.hysteresis + ? parseMaxPerMs(rule.hysteresis) + : rule.requires_stable_for ? parseMaxPerMs(rule.requires_stable_for) : null; + if (hysteresisMs !== null) { + const hysteresisKey = `${rule.name}::${event.deviceId ?? ''}`; + const firstSeen = this.hysteresisFirstSeen.get(hysteresisKey); + const now = event.t.getTime(); + if (firstSeen === undefined) { + this.hysteresisFirstSeen.set(hysteresisKey, now); + writeAudit({ + t: event.t.toISOString(), kind: 'rule-throttled', + deviceId: event.deviceId ?? 'unknown', command: rule.then[0]?.command ?? '', + parameter: null, commandType: 'command', dryRun: true, result: 'ok', + rule: { name: rule.name, triggerSource: rule.when.source, matchedDevice: event.deviceId, fireId, reason: `hysteresis — first observation, waiting ${hysteresisMs}ms for stability` }, + }); + this.opts.onFire?.({ ruleName: rule.name, fireId, status: 'throttled', deviceId: event.deviceId, reason: 'hysteresis-first-seen' }); + return; + } + if (now - firstSeen < hysteresisMs) { + writeAudit({ + t: event.t.toISOString(), kind: 'rule-throttled', + deviceId: event.deviceId ?? 'unknown', command: rule.then[0]?.command ?? '', + parameter: null, commandType: 'command', dryRun: true, result: 'ok', + rule: { name: rule.name, triggerSource: rule.when.source, matchedDevice: event.deviceId, fireId, reason: `hysteresis — stable for ${now - firstSeen}ms of required ${hysteresisMs}ms` }, + }); + this.opts.onFire?.({ ruleName: rule.name, fireId, status: 'throttled', deviceId: event.deviceId, reason: 'hysteresis-not-stable' }); + return; + } + // Stable long enough — clear so the next trigger starts fresh. + this.hysteresisFirstSeen.delete(hysteresisKey); + } + + // maxFiringsPerHour: count-based rate cap over a rolling 1-hour window. 
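The cap described here can be modeled as a per-key timestamp log that is pruned to the window on every check (this mirrors `ThrottleGate.checkMaxFirings` in `throttle.ts`; the standalone class below is an illustrative sketch, not the module's API):

```typescript
// Sliding-window counter: keep fire timestamps per key, drop entries older
// than windowMs, and block once the surviving count reaches the cap.
class FireCounter {
  private times = new Map<string, number[]>();

  check(key: string, max: number, windowMs: number, now: number): { allowed: boolean; count: number } {
    const log = (this.times.get(key) ?? []).filter((t) => now - t < windowMs);
    this.times.set(key, log); // prune eagerly so the log stays bounded
    return { allowed: log.length < max, count: log.length };
  }

  record(key: string, now: number): void {
    const log = this.times.get(key) ?? [];
    log.push(now);
    this.times.set(key, log);
  }
}
```

Unlike `max_per` (which spaces fires apart), this shape allows bursts up to the cap and then blocks until old entries age out of the rolling window.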
+ if (rule.maxFiringsPerHour !== undefined) { + const countCheck = this.throttle.checkMaxFirings(rule.name, rule.maxFiringsPerHour, 3_600_000, event.t.getTime(), event.deviceId); + if (!countCheck.allowed) { + this.stats.throttled++; + writeAudit({ + t: event.t.toISOString(), kind: 'rule-throttled', + deviceId: event.deviceId ?? 'unknown', command: rule.then[0]?.command ?? '', + parameter: null, commandType: 'command', dryRun: true, result: 'ok', + rule: { name: rule.name, triggerSource: rule.when.source, matchedDevice: event.deviceId, fireId, reason: `maxFiringsPerHour — ${countCheck.count}/${countCheck.max} in last hour` }, + }); + this.opts.onFire?.({ ruleName: rule.name, fireId, status: 'throttled', deviceId: event.deviceId, reason: 'maxFiringsPerHour' }); + return; + } + } + + // suppressIfAlreadyDesired: skip if device's live state already matches the command outcome. + if (rule.suppressIfAlreadyDesired) { + const firstAction = rule.then[0]; + const parsed = firstAction ? parseRuleCommand(firstAction.command) : null; + const verb = parsed?.verb ?? firstAction?.command; + if ((verb === 'turnOn' || verb === 'turnOff') && (firstAction?.device || event.deviceId)) { + const targetId = firstAction?.device ? (this.aliases[firstAction.device] ?? firstAction.device) : event.deviceId!; + try { + const deviceStatus = await fetchStatus(targetId); + const powerState = deviceStatus['powerState'] as string | undefined; + if ((verb === 'turnOn' && powerState === 'on') || (verb === 'turnOff' && powerState === 'off')) { + writeAudit({ + t: event.t.toISOString(), kind: 'rule-throttled', + deviceId: targetId, command: verb ?? 
'', + parameter: null, commandType: 'command', dryRun: true, result: 'ok', + rule: { name: rule.name, triggerSource: rule.when.source, matchedDevice: event.deviceId, fireId, reason: `suppressIfAlreadyDesired — powerState already "${powerState}"` }, + }); + this.opts.onFire?.({ ruleName: rule.name, fireId, status: 'throttled', deviceId: event.deviceId, reason: 'already-desired' }); + return; + } + } catch { + // best-effort — if status fetch fails, proceed with the action + } + } + } + let fired = false; let allDry = true; for (const action of rule.then) { @@ -680,6 +868,9 @@ export class RulesEngine { if (fired) { if (allDry) this.stats.dryFires++; else this.stats.fires++; this.throttle.record(rule.name, event.t.getTime(), throttleKey); + if (rule.maxFiringsPerHour !== undefined) { + this.throttle.recordFire(rule.name, event.t.getTime(), throttleKey); + } this.opts.onFire?.({ ruleName: rule.name, fireId, status: allDry ? 'dry' : 'fired', deviceId: event.deviceId }); } } diff --git a/src/rules/throttle.ts b/src/rules/throttle.ts index 024f87b..dc15388 100644 --- a/src/rules/throttle.ts +++ b/src/rules/throttle.ts @@ -4,6 +4,8 @@ * Semantics: * - `max_per: "10m"` → a rule may fire at most once every 10 minutes * per (rule, deviceId) pair. + * - `dedupe_window: "5s"` → suppress fires whose key already fired + * within the window (collapses rapid sensor bursts into one action). * - Fires that would violate the window are **suppressed** (not * queued) and surface as `{ allowed: false, reason: 'throttled' }`. * - When a rule has no `throttle` block, `ThrottleGate.check` returns @@ -32,10 +34,20 @@ export interface ThrottleCheckResult { lastFiredAt?: number; /** When the window will reopen. */ nextAllowedAt?: number; + /** Whether this was blocked by the dedupe_window (vs max_per). 
*/ + dedupedBy?: 'dedupe_window' | 'max_per' | 'cooldown' | 'maxFiringsPerHour'; +} + +export interface MaxFiringsCheckResult { + allowed: boolean; + count: number; + max: number; } export class ThrottleGate { private lastFireAt = new Map<string, number>(); + /** Sliding-window fire-time log for count-based maxFiringsPerHour. */ + private fireTimes = new Map<string, number[]>(); private keyOf(ruleName: string, deviceId?: string): string { return deviceId ? `${ruleName}::${deviceId}` : ruleName; @@ -51,26 +63,58 @@ export class ThrottleGate { windowMs: number | null, now: number, deviceId?: string, + dedupeWindowMs?: number | null, ): ThrottleCheckResult { - if (windowMs === null || windowMs <= 0) return { allowed: true }; const key = this.keyOf(ruleName, deviceId); const last = this.lastFireAt.get(key); + + // dedupe_window check: suppress if last fire was within this (typically smaller) window + if (dedupeWindowMs !== null && dedupeWindowMs !== undefined && dedupeWindowMs > 0) { + if (last !== undefined) { + const dedupeEnd = last + dedupeWindowMs; + if (now < dedupeEnd) { + return { allowed: false, lastFiredAt: last, nextAllowedAt: dedupeEnd, dedupedBy: 'dedupe_window' }; + } + } + } + + // max_per / cooldown check + if (windowMs === null || windowMs <= 0) return { allowed: true, lastFiredAt: last }; if (last === undefined) return { allowed: true }; const earliest = last + windowMs; if (now >= earliest) return { allowed: true, lastFiredAt: last }; - return { allowed: false, lastFiredAt: last, nextAllowedAt: earliest }; + return { allowed: false, lastFiredAt: last, nextAllowedAt: earliest, dedupedBy: 'max_per' }; } record(ruleName: string, now: number, deviceId?: string): void { this.lastFireAt.set(this.keyOf(ruleName, deviceId), now); } + /** Count-based check: has the rule fired >= maxCount times in the last windowMs? 
*/ + checkMaxFirings(ruleName: string, maxCount: number, windowMs: number, now: number, deviceId?: string): MaxFiringsCheckResult { + const key = this.keyOf(ruleName, deviceId); + const times = (this.fireTimes.get(key) ?? []).filter((t) => now - t < windowMs); + this.fireTimes.set(key, times); + return { allowed: times.length < maxCount, count: times.length, max: maxCount }; + } + + /** Record a count-based fire (call alongside record()). */ + recordFire(ruleName: string, now: number, deviceId?: string): void { + const key = this.keyOf(ruleName, deviceId); + const times = this.fireTimes.get(key) ?? []; + times.push(now); + this.fireTimes.set(key, times); + } + /** Drop everything — used by engine.reload when a rule is removed. */ forget(ruleName: string): void { const prefix = `${ruleName}::`; for (const k of this.lastFireAt.keys()) { if (k === ruleName || k.startsWith(prefix)) this.lastFireAt.delete(k); } + for (const k of this.fireTimes.keys()) { + if (k === ruleName || k.startsWith(prefix)) this.fireTimes.delete(k); + } } /** @@ -85,6 +129,11 @@ export class ThrottleGate { const ruleName = sep === -1 ? k : k.slice(0, sep); if (!ruleNames.has(ruleName)) this.lastFireAt.delete(k); } + for (const k of this.fireTimes.keys()) { + const sep = k.indexOf('::'); + const ruleName = sep === -1 ? k : k.slice(0, sep); + if (!ruleNames.has(ruleName)) this.fireTimes.delete(k); + } } /** Test helper — exposes the underlying size. */ diff --git a/src/rules/types.ts b/src/rules/types.ts index f1e9ae3..087f5b8 100644 --- a/src/rules/types.ts +++ b/src/rules/types.ts @@ -80,6 +80,8 @@ export interface Action { export interface Throttle { max_per: string; + /** Deduplicate identical events arriving within this window after the last fire. */ + dedupe_window?: string; } export interface Rule { @@ -90,6 +92,16 @@ export interface Rule { then: Action[]; throttle?: Throttle | null; dry_run?: boolean; + /** Shorthand cooldown — equivalent to throttle.max_per. 
Takes precedence when both are set. */ + cooldown?: string; + /** Hysteresis guard — device state must remain stable for this duration before the rule fires. */ + requires_stable_for?: string; + /** Alias for requires_stable_for with clearer semantics (takes precedence when both are set). */ + hysteresis?: string; + /** Maximum times this rule may fire per rolling hour (count-based rate limit). */ + maxFiringsPerHour?: number; + /** Skip the action if the device's live state (fetched at fire time) already matches the desired command outcome. */ + suppressIfAlreadyDesired?: boolean; } export interface AutomationBlock { diff --git a/src/utils/audit.ts b/src/utils/audit.ts index e6603ba..15964c2 100644 --- a/src/utils/audit.ts +++ b/src/utils/audit.ts @@ -48,6 +48,8 @@ export interface AuditEntry { dryRun: boolean; result?: 'ok' | 'error'; error?: string; + /** When execution is initiated via `plan run`, the stable plan ID for traceability. */ + planId?: string; /** Present for rule-engine kinds; absent for direct CLI command entries. */ rule?: AuditRuleContext; } diff --git a/src/utils/health.ts b/src/utils/health.ts new file mode 100644 index 0000000..c4cdf04 --- /dev/null +++ b/src/utils/health.ts @@ -0,0 +1,152 @@ +/** + * Health report utilities — collects process, quota, audit, and circuit + * breaker state into a single snapshot suitable for /health-style checks + * and Prometheus-compatible metrics export. + * + * No side effects: reading is safe to call from any context. 
+ */ + +import fs from 'node:fs'; +import os from 'node:os'; +import path from 'node:path'; +import { todayUsage, DAILY_QUOTA } from './quota.js'; +import { readAudit } from './audit.js'; +import { apiCircuitBreaker } from '../api/client.js'; + +const DEFAULT_AUDIT_PATH = path.join(os.homedir(), '.switchbot', 'audit.log'); +const AUDIT_ERROR_WINDOW_MS = 24 * 60 * 60 * 1000; // 24h + +export interface QuotaHealth { + used: number; + limit: number; + percentUsed: number; + remaining: number; + status: 'ok' | 'warn' | 'critical'; +} + +export interface AuditHealth { + present: boolean; + recentErrors: number; + recentTotal: number; + errorRatePercent: number; + status: 'ok' | 'warn'; +} + +export interface CircuitHealth { + name: string; + state: 'closed' | 'open' | 'half-open'; + failures: number; + status: 'ok' | 'open'; +} + +export interface ProcessHealth { + pid: number; + uptimeSeconds: number; + platform: string; + nodeVersion: string; + memoryMb: number; +} + +export interface HealthReport { + generatedAt: string; + overall: 'ok' | 'degraded' | 'down'; + process: ProcessHealth; + quota: QuotaHealth; + audit: AuditHealth; + circuit: CircuitHealth; +} + +export function getHealthReport(auditPath = DEFAULT_AUDIT_PATH): HealthReport { + const now = new Date(); + + // Process info + const procHealth: ProcessHealth = { + pid: process.pid, + uptimeSeconds: Math.floor(process.uptime()), + platform: process.platform, + nodeVersion: process.version, + memoryMb: Math.round(process.memoryUsage().rss / 1024 / 1024), + }; + + // Quota + const { total: used } = todayUsage(now); + const pct = Math.round((used / DAILY_QUOTA) * 100); + const quotaHealth: QuotaHealth = { + used, + limit: DAILY_QUOTA, + percentUsed: pct, + remaining: Math.max(0, DAILY_QUOTA - used), + status: pct >= 90 ? 'critical' : pct >= 70 ? 
'warn' : 'ok', + }; + + // Audit error rate (last 24h) + let auditHealth: AuditHealth; + if (!fs.existsSync(auditPath)) { + auditHealth = { present: false, recentErrors: 0, recentTotal: 0, errorRatePercent: 0, status: 'ok' }; + } else { + const entries = readAudit(auditPath); + const windowStart = now.getTime() - AUDIT_ERROR_WINDOW_MS; + const recent = entries.filter((e) => new Date(e.t).getTime() >= windowStart); + const errors = recent.filter((e) => e.result === 'error').length; + const total = recent.length; + const errorRate = total > 0 ? Math.round((errors / total) * 100) : 0; + auditHealth = { + present: true, + recentErrors: errors, + recentTotal: total, + errorRatePercent: errorRate, + status: errorRate >= 30 ? 'warn' : 'ok', + }; + } + + // Circuit breaker + const cbStats = apiCircuitBreaker.getStats(); + const circuitHealth: CircuitHealth = { + name: apiCircuitBreaker.name, + state: cbStats.state, + failures: cbStats.failures, + status: cbStats.state === 'open' ? 'open' : 'ok', + }; + + // Overall + const degraded = + quotaHealth.status !== 'ok' || + auditHealth.status !== 'ok' || + circuitHealth.status !== 'ok'; + const down = circuitHealth.status === 'open'; + const overall: HealthReport['overall'] = down ? 'down' : degraded ? 'degraded' : 'ok'; + + return { + generatedAt: now.toISOString(), + overall, + process: procHealth, + quota: quotaHealth, + audit: auditHealth, + circuit: circuitHealth, + }; +} + +/** + * Render a minimal Prometheus-compatible text metrics export. + * Only includes the most actionable gauges. 
+ */ +export function toPrometheusText(report: HealthReport): string { + const lines: string[] = []; + const push = (name: string, value: number, help?: string) => { + if (help) lines.push(`# HELP ${name} ${help}`); + lines.push(`# TYPE ${name} gauge`); + lines.push(`${name} ${value}`); + }; + + push('switchbot_quota_used_total', report.quota.used, 'SwitchBot API requests used today'); + push('switchbot_quota_remaining', report.quota.remaining, 'SwitchBot API quota remaining today'); + push('switchbot_quota_percent_used', report.quota.percentUsed, 'SwitchBot API quota percent used today'); + push('switchbot_audit_recent_errors', report.audit.recentErrors, 'Audit log errors in the last 24h'); + push('switchbot_audit_error_rate_percent', report.audit.errorRatePercent, 'Audit error rate percent (last 24h)'); + push('switchbot_circuit_open', report.circuit.state === 'open' ? 1 : 0, 'API circuit breaker open (1=open, 0=closed/half-open)'); + push('switchbot_circuit_failures', report.circuit.failures, 'Consecutive API failures recorded by circuit breaker'); + push('switchbot_process_uptime_seconds', report.process.uptimeSeconds, 'Process uptime in seconds'); + push('switchbot_process_memory_mb', report.process.memoryMb, 'Process RSS memory usage in MB'); + + return lines.join('\n') + '\n'; +} diff --git a/src/utils/output.ts b/src/utils/output.ts index a3b6f13..c849451 100644 --- a/src/utils/output.ts +++ b/src/utils/output.ts @@ -212,14 +212,25 @@ export type ErrorSubKind = | 'auth-failed' | 'quota-exceeded' | 'device-internal-error' + | 'ambiguous-name-match' | 'unknown-api-error'; +export interface CandidateMatch { + deviceId?: string; + sceneId?: string; + name: string; +} + export interface ErrorPayload { code: number; kind: 'usage' | 'api' | 'runtime' | 'guard'; subKind?: ErrorSubKind; message: string; hint?: string; + /** Structured remediation hint for agent/script consumers. 
*/ + resolutionHint?: string; + /** Candidate matches when the error is caused by an ambiguous name query. */ + candidateMatches?: CandidateMatch[]; retryable?: boolean; context?: Record<string, unknown>; retryAfterMs?: number; @@ -256,9 +267,36 @@ export function buildErrorPayload(error: unknown): ErrorPayload { kind: 'usage', message: error.message, errorClass: 'usage', - transient: false + transient: false, }; - if (error.context) payload.context = error.context; + if (error.context) { + const ctx = error.context as Record<string, unknown>; + const { error: errorType, candidates, hint } = ctx; + if (errorType === 'ambiguous_name_match') { + payload.subKind = 'ambiguous-name-match'; + } + if (Array.isArray(candidates) && candidates.length > 0) { + const normalized = (candidates as unknown[]) + .map((c) => { + if (typeof c !== 'object' || c === null) return null; + const o = c as Record<string, unknown>; + const name = typeof o.name === 'string' ? o.name + : typeof o.sceneName === 'string' ? o.sceneName + : ''; + const match: CandidateMatch = { name }; + if (typeof o.deviceId === 'string') match.deviceId = o.deviceId; + if (typeof o.sceneId === 'string') match.sceneId = o.sceneId; + return match; + }) + .filter((c): c is CandidateMatch => c !== null && c.name.length > 0); + if (normalized.length > 0) payload.candidateMatches = normalized; + } + if (typeof hint === 'string') { + payload.resolutionHint = hint; + } + // Preserve full context for backward compatibility (including candidates / hint). 
+ payload.context = ctx; + } return payload; } if (error instanceof UsageError) { @@ -344,29 +382,45 @@ export function handleError(error: unknown): never { if (payload.kind === 'usage') { console.error(payload.message); - const ctx = payload.context; - if (ctx && Array.isArray(ctx.candidates) && ctx.candidates.length > 0) { - const names = ctx.candidates + if (Array.isArray(payload.candidateMatches) && payload.candidateMatches.length > 0) { + const names = payload.candidateMatches .map((c) => { - if (typeof c === 'string') return c; - if (c && typeof c === 'object') { - const o = c as Record<string, unknown>; - const name = typeof o.name === 'string' - ? o.name - : typeof o.sceneName === 'string' ? o.sceneName : undefined; - const id = typeof o.deviceId === 'string' - ? o.deviceId - : typeof o.sceneId === 'string' ? o.sceneId : typeof o.id === 'string' ? o.id : undefined; - if (name && id) return `${name} (${id})`; - return name ?? id ?? JSON.stringify(c); - } - return String(c); + const id = c.deviceId ?? c.sceneId; + if (c.name && id) return `${c.name} (${id})`; + return c.name ?? id ?? JSON.stringify(c); }) .slice(0, 6); console.error(`Did you mean: ${names.join(', ')}?`); + } else { + const ctx = payload.context; + if (ctx && Array.isArray(ctx.candidates) && ctx.candidates.length > 0) { + const names = (ctx.candidates as unknown[]) + .map((c) => { + if (typeof c === 'string') return c; + if (c && typeof c === 'object') { + const o = c as Record<string, unknown>; + const name = typeof o.name === 'string' + ? o.name + : typeof o.sceneName === 'string' ? o.sceneName : undefined; + const id = typeof o.deviceId === 'string' + ? o.deviceId + : typeof o.sceneId === 'string' ? o.sceneId : typeof o.id === 'string' ? o.id : undefined; + if (name && id) return `${name} (${id})`; + return name ?? id ?? 
JSON.stringify(c); + } + return String(c); + }) + .slice(0, 6); + console.error(`Did you mean: ${names.join(', ')}?`); + } } - if (ctx && typeof ctx.hint === 'string') { - console.error(ctx.hint); + if (payload.resolutionHint) { + console.error(payload.resolutionHint); + } else { + const ctx = payload.context; + if (ctx && typeof ctx.hint === 'string') { + console.error(ctx.hint); + } } process.exit(2); } diff --git a/src/utils/retry.ts b/src/utils/retry.ts index d5a3e98..3c5285e 100644 --- a/src/utils/retry.ts +++ b/src/utils/retry.ts @@ -8,6 +8,13 @@ * * If the server returns a `Retry-After` header we always prefer it over our * own backoff — the API explicitly told us when to come back. + * + * Circuit breaker: + * Prevents hammering a consistently-failing endpoint. Tracks consecutive + * failures; when `failureThreshold` is reached the circuit opens and + * subsequent calls fail immediately (with CircuitOpenError). After + * `resetTimeoutMs` the circuit enters half-open state: the next call is + * allowed as a probe — success closes it, failure re-opens it. */ export type BackoffStrategy = 'linear' | 'exponential'; @@ -65,3 +72,102 @@ export function nextRetryDelayMs( export function sleep(ms: number): Promise<void> { return new Promise((resolve) => setTimeout(resolve, ms)); } + +// --------------------------------------------------------------------------- +// Circuit breaker +// --------------------------------------------------------------------------- + +export type CircuitState = 'closed' | 'open' | 'half-open'; + +export interface CircuitBreakerOptions { + /** Number of consecutive failures before the circuit opens. Default: 5. */ + failureThreshold?: number; + /** Milliseconds to keep the circuit open before entering half-open. Default: 60_000. */ + resetTimeoutMs?: number; +} + +/** + * Thrown when a call is blocked because the circuit is open. 
+ */ +export class CircuitOpenError extends Error { + constructor( + public readonly circuitName: string, + public readonly nextAttemptMs: number, + ) { + super(`Circuit "${circuitName}" is open — too many recent failures. Next probe allowed in ${Math.ceil((nextAttemptMs - Date.now()) / 1000)}s.`); + this.name = 'CircuitOpenError'; + } +} + +export class CircuitBreaker { + private state: CircuitState = 'closed'; + private failures = 0; + private lastOpenedAt = 0; + private readonly failureThreshold: number; + private readonly resetTimeoutMs: number; + + constructor( + public readonly name: string, + opts: CircuitBreakerOptions = {}, + ) { + this.failureThreshold = opts.failureThreshold ?? 5; + this.resetTimeoutMs = opts.resetTimeoutMs ?? 60_000; + } + + getState(): CircuitState { + this._maybeHalfOpen(); + return this.state; + } + + getStats(): { state: CircuitState; failures: number; lastOpenedAt: number; nextProbeMs: number } { + this._maybeHalfOpen(); + return { + state: this.state, + failures: this.failures, + lastOpenedAt: this.lastOpenedAt, + nextProbeMs: this.state === 'open' ? this.lastOpenedAt + this.resetTimeoutMs : 0, + }; + } + + /** + * Check if a call is allowed. Throws `CircuitOpenError` when the circuit + * is open and the reset timeout hasn't elapsed. Call `recordSuccess()` or + * `recordFailure()` after the operation completes. + */ + checkAndAllow(): void { + this._maybeHalfOpen(); + if (this.state === 'open') { + throw new CircuitOpenError(this.name, this.lastOpenedAt + this.resetTimeoutMs); + } + // closed or half-open: allow the call + } + + recordSuccess(): void { + this.failures = 0; + this.state = 'closed'; + } + + recordFailure(): void { + this.failures++; + if (this.state === 'half-open' || this.failures >= this.failureThreshold) { + this.state = 'open'; + this.lastOpenedAt = Date.now(); + } + } + + /** Reset to closed — useful for testing or manual recovery. 
*/ + reset(): void { + this.state = 'closed'; + this.failures = 0; + this.lastOpenedAt = 0; + } + + private _maybeHalfOpen(): void { + if ( + this.state === 'open' && + Date.now() >= this.lastOpenedAt + this.resetTimeoutMs + ) { + this.state = 'half-open'; + } + } +} diff --git a/tests/api/client.test.ts b/tests/api/client.test.ts index 91627b7..a8dea97 100644 --- a/tests/api/client.test.ts +++ b/tests/api/client.test.ts @@ -65,10 +65,11 @@ vi.mock('../../src/utils/quota.js', () => ({ DAILY_QUOTA: 10_000, })); -import { createClient, ApiError, DryRunSignal } from '../../src/api/client.js'; +import { createClient, ApiError, DryRunSignal, apiCircuitBreaker } from '../../src/api/client.js'; describe('createClient', () => { beforeEach(() => { + apiCircuitBreaker.reset(); captured.request = null; captured.success = null; captured.failure = null; @@ -372,6 +373,7 @@ describe('createClient — 429 retry', () => { const originalArgv = process.argv; beforeEach(() => { + apiCircuitBreaker.reset(); captured.request = null; captured.success = null; captured.failure = null; diff --git a/tests/commands/capabilities.test.ts b/tests/commands/capabilities.test.ts index 272aeff..d485e20 100644 --- a/tests/commands/capabilities.test.ts +++ b/tests/commands/capabilities.test.ts @@ -29,10 +29,21 @@ function makeProgram(): Command { const history = p.command('history').description('Device history and aggregation'); history.command('aggregate').description('Aggregate device history'); - p.command('scenes').description('List and run scenes'); - p.command('schema').description('Export device catalog'); - p.command('mcp').description('Start MCP server'); - p.command('plan').description('Execute batch plans'); + const scenes = p.command('scenes').description('List and run scenes'); + scenes.command('list').description('List scenes'); + scenes.command('execute').description('Execute a scene'); + scenes.command('describe').description('Describe a scene'); + + const schema = 
p.command('schema').description('Export device catalog'); + schema.command('export').description('Export schema'); + + const mcp = p.command('mcp').description('Start MCP server'); + mcp.command('serve').description('Serve MCP'); + + const plan = p.command('plan').description('Execute batch plans'); + plan.command('schema').description('Plan schema'); + plan.command('validate').description('Validate a plan'); + plan.command('run').description('Run a plan'); return p; } @@ -247,6 +258,19 @@ describe('capabilities B3/B4', () => { expect(metaSet!.mutating).toBe(true); }); + it('fails closed when a leaf command is missing metadata', async () => { + const program = makeProgram(); + program.command('mystery').description('Unknown leaf'); + program.exitOverride(); + const chunks: string[] = []; + const logSpy = vi.spyOn(console, 'log').mockImplementation((...args: unknown[]) => { + chunks.push(args.map(String).join(' ')); + }); + registerCapabilitiesCommand(program); + await expect(program.parseAsync(['node', 'test', 'capabilities'])).rejects.toThrow(/missing:mystery/); + logSpy.mockRestore(); + }); + it('P15: resources catalog exposes scenes / webhooks / keys', async () => { const out = await runCapabilitiesWith([]); const resources = out.resources as Record<string, unknown>; diff --git a/tests/commands/config.test.ts b/tests/commands/config.test.ts index ecf56d9..2a75f7c 100644 --- a/tests/commands/config.test.ts +++ b/tests/commands/config.test.ts @@ -1,4 +1,4 @@ -import { describe, it, expect, vi, beforeEach } from 'vitest'; +import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'; import fs from 'node:fs'; import os from 'node:os'; import path from 'node:path'; @@ -154,4 +154,78 @@ describe('config command', () => { fs.rmSync(dir, { recursive: true, force: true }); }); }); + + describe('agent-profile', () => { + let tmpHome: string; + let homedirSpy: ReturnType<typeof vi.spyOn>; + + beforeEach(() => { + tmpHome = fs.mkdtempSync(path.join(os.tmpdir(), 'sbagent-')); + homedirSpy = 
vi.spyOn(os, 'homedir').mockReturnValue(tmpHome); + }); + + afterEach(() => { + homedirSpy.mockRestore(); + fs.rmSync(tmpHome, { recursive: true, force: true }); + }); + + it('emits the template as JSON without --write', async () => { + const res = await runCli(registerConfigCommand, ['--json', 'config', 'agent-profile']); + expect(res.exitCode).toBeNull(); + const out = JSON.parse(res.stdout.filter((l) => l.trim().startsWith('{')).join('')); + const tpl = out.data ?? out; + expect(tpl.label).toBe('agent'); + expect(tpl.limits.dailyCap).toBe(100); + expect(tpl.defaults.auditLog).toBe(true); + }); + + it('prints JSON to stdout in text mode without --write', async () => { + const res = await runCli(registerConfigCommand, ['config', 'agent-profile']); + expect(res.exitCode).toBeNull(); + const combined = res.stdout.join('\n'); + expect(combined).toContain('"label": "agent"'); + expect(combined).toContain('"dailyCap": 100'); + }); + + it('--write creates ~/.switchbot/profiles/agent.json with mode 0600', async () => { + const res = await runCli(registerConfigCommand, ['config', 'agent-profile', '--write']); + expect(res.exitCode).toBeNull(); + const dest = path.join(tmpHome, '.switchbot', 'profiles', 'agent.json'); + expect(fs.existsSync(dest)).toBe(true); + const parsed = JSON.parse(fs.readFileSync(dest, 'utf-8')); + expect(parsed.label).toBe('agent'); + expect(parsed.limits.dailyCap).toBe(100); + expect(res.stdout.join('\n')).toContain('agent.json'); + }); + + it('--write refuses to overwrite without --force (exit 2)', async () => { + const dest = path.join(tmpHome, '.switchbot', 'profiles', 'agent.json'); + fs.mkdirSync(path.dirname(dest), { recursive: true }); + fs.writeFileSync(dest, '{"original":true}', 'utf-8'); + + const res = await runCli(registerConfigCommand, ['config', 'agent-profile', '--write']); + expect(res.exitCode).toBe(2); + expect(JSON.parse(fs.readFileSync(dest, 'utf-8'))).toEqual({ original: true }); + }); + + it('--write --force overwrites an 
existing agent.json', async () => { + const dest = path.join(tmpHome, '.switchbot', 'profiles', 'agent.json'); + fs.mkdirSync(path.dirname(dest), { recursive: true }); + fs.writeFileSync(dest, '{"original":true}', 'utf-8'); + + const res = await runCli(registerConfigCommand, ['config', 'agent-profile', '--write', '--force']); + expect(res.exitCode).toBeNull(); + const parsed = JSON.parse(fs.readFileSync(dest, 'utf-8')); + expect(parsed.label).toBe('agent'); + }); + + it('--write --json returns ok:true with path and template', async () => { + const res = await runCli(registerConfigCommand, ['--json', 'config', 'agent-profile', '--write']); + expect(res.exitCode).toBeNull(); + const out = JSON.parse(res.stdout.filter((l) => l.trim().startsWith('{')).join('')); + expect(out.data.ok).toBe(true); + expect(typeof out.data.path).toBe('string'); + expect(out.data.template.label).toBe('agent'); + }); + }); }); diff --git a/tests/commands/devices.test.ts b/tests/commands/devices.test.ts index b630e82..739f6b6 100644 --- a/tests/commands/devices.test.ts +++ b/tests/commands/devices.test.ts @@ -2041,7 +2041,7 @@ describe('devices command', () => { expect(res.exitCode).toBe(2); expect(apiMock.__instance.post).not.toHaveBeenCalled(); expect(res.stderr.join('\n')).toMatch(/destructive command "unlock"/); - expect(res.stderr.join('\n')).toMatch(/--yes/); + expect(res.stderr.join('\n')).toMatch(/plan/); }); it('allows Smart Lock unlock when --yes is passed', async () => { diff --git a/tests/commands/doctor.test.ts b/tests/commands/doctor.test.ts index ea1eb0d..565cf5a 100644 --- a/tests/commands/doctor.test.ts +++ b/tests/commands/doctor.test.ts @@ -2,6 +2,15 @@ import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest'; import fs from 'node:fs'; import os from 'node:os'; import path from 'node:path'; +import { execSync } from 'node:child_process'; + +vi.mock('node:child_process', async (importOriginal) => { + const actual = await importOriginal(); + return { + 
...actual, + execSync: vi.fn(), + }; +}); import { registerDoctorCommand } from '../../src/commands/doctor.js'; import { runCli } from '../helpers/cli.js'; @@ -18,6 +27,9 @@ describe('doctor command', () => { // DEFAULT_POLICY_PATH is evaluated at module load time using the real homedir, // so mock the env var to keep tests isolated from the developer's real policy file. process.env.SWITCHBOT_POLICY_PATH = path.join(tmp, '.config', 'openclaw', 'switchbot', 'policy.yaml'); + process.env.SHELL = '/bin/bash'; + // Default: execSync throws (simulates binary not on PATH / npm not available) + vi.mocked(execSync).mockReset().mockImplementation(() => { throw new Error('not found'); }); }); afterEach(() => { homedirSpy.mockRestore(); @@ -528,4 +540,141 @@ describe('doctor command', () => { delete process.env.SWITCHBOT_POLICY_PATH; } }); + + // --------------------------------------------------------------------- + // P0-1: PATH / binary discoverability check + // --------------------------------------------------------------------- + it('path check: is present in the --list output', async () => { + const res = await runCli(registerDoctorCommand, ['--json', 'doctor', '--list']); + const payload = JSON.parse(res.stdout.filter((l) => l.trim().startsWith('{')).join('')); + const names = payload.data.checks.map((c: { name: string }) => c.name); + expect(names).toContain('path'); + }); + + it('path check: returns ok with binaryOnPath:true when binary is found', async () => { + vi.mocked(execSync).mockImplementation((cmd: unknown) => { + const c = String(cmd); + if (c.includes('npm prefix')) return '/usr/local\n' as never; + if (c.includes('which') || c.includes('where')) return '/usr/local/bin/switchbot\n' as never; + throw new Error('unexpected'); + }); + const res = await runCli(registerDoctorCommand, ['--json', 'doctor', '--section', 'path']); + const payload = JSON.parse(res.stdout.filter((l) => l.trim().startsWith('{')).join('')); + const check = 
payload.data.checks.find((c: { name: string }) => c.name === 'path');
+    expect(check).toBeDefined();
+    expect(check.status).toBe('ok');
+    expect(check.detail.binaryOnPath).toBe(true);
+    expect(typeof check.detail.resolvedPath).toBe('string');
+  });
+
+  it('path check: returns warn with fix hint when binary is not on PATH', async () => {
+    vi.mocked(execSync).mockImplementation((cmd: unknown) => {
+      const c = String(cmd);
+      if (c.includes('npm prefix')) return '/usr/local\n' as never;
+      throw new Error('not found');
+    });
+    const res = await runCli(registerDoctorCommand, ['--json', 'doctor', '--section', 'path']);
+    const payload = JSON.parse(res.stdout.filter((l) => l.trim().startsWith('{')).join(''));
+    const check = payload.data.checks.find((c: { name: string }) => c.name === 'path');
+    expect(check).toBeDefined();
+    expect(check.status).toBe('warn');
+    expect(check.detail.binaryOnPath).toBe(false);
+    expect(check.detail.resolvedPath).toBeNull();
+    expect(typeof check.detail.fix).toBe('string');
+    expect(['bash', 'cmd']).toContain(check.detail.currentShell);
+    expect(check.detail.fix).toMatch(/\.bashrc|export PATH|set PATH|setx PATH/);
+  });
+
+  it('path check: detail includes npmBinDir when npm prefix succeeds', async () => {
+    vi.mocked(execSync).mockImplementation((cmd: unknown) => {
+      const c = String(cmd);
+      if (c.includes('npm prefix')) return '/home/user/.npm-global\n' as never;
+      throw new Error('not found');
+    });
+    const res = await runCli(registerDoctorCommand, ['--json', 'doctor', '--section', 'path']);
+    const payload = JSON.parse(res.stdout.filter((l) => l.trim().startsWith('{')).join(''));
+    const check = payload.data.checks.find((c: { name: string }) => c.name === 'path');
+    expect(check.detail.npmBinDir).toBeTruthy();
+  });
+
+  it('path check: emits PowerShell-specific fix hints when SHELL indicates pwsh', async () => {
+    process.env.SHELL = 'pwsh';
+    vi.mocked(execSync).mockImplementation((cmd: unknown) => {
+      const c = String(cmd);
+      if (c.includes('npm prefix')) return '/usr/local\n' as never;
+      throw new Error('not found');
+    });
+    const res = await runCli(registerDoctorCommand, ['--json', 'doctor', '--section', 'path']);
+    const payload = JSON.parse(res.stdout.filter((l) => l.trim().startsWith('{')).join(''));
+    const check = payload.data.checks.find((c: { name: string }) => c.name === 'path');
+    expect(check.detail.currentShell).toBe('powershell');
+    expect(check.detail.fix).toMatch(/\$env:Path/);
+  });
+
+  it('daemon and health checks appear in --list output', async () => {
+    const res = await runCli(registerDoctorCommand, ['--json', 'doctor', '--list']);
+    const payload = JSON.parse(res.stdout.filter((l) => l.trim().startsWith('{')).join(''));
+    const names = payload.data.checks.map((c: { name: string }) => c.name);
+    expect(names).toContain('daemon');
+    expect(names).toContain('health');
+  });
+
+  it('daemon check reads daemon.state.json and reports running metadata', async () => {
+    const sbDir = path.join(tmp, '.switchbot');
+    fs.mkdirSync(sbDir, { recursive: true });
+    fs.writeFileSync(path.join(sbDir, 'daemon.pid'), `${process.pid}\n`);
+    fs.writeFileSync(
+      path.join(sbDir, 'daemon.state.json'),
+      JSON.stringify({
+        status: 'running',
+        pid: process.pid,
+        logFile: path.join(sbDir, 'daemon.log'),
+        pidFile: path.join(sbDir, 'daemon.pid'),
+        stateFile: path.join(sbDir, 'daemon.state.json'),
+        startedAt: '2026-04-25T00:00:00.000Z',
+        healthzPort: 3210,
+      }),
+    );
+    const res = await runCli(registerDoctorCommand, ['--json', 'doctor', '--section', 'daemon']);
+    const payload = JSON.parse(res.stdout.filter((l) => l.trim().startsWith('{')).join(''));
+    const daemon = payload.data.checks.find((c: { name: string }) => c.name === 'daemon');
+    expect(daemon.status).toBe('ok');
+    expect(daemon.detail.present).toBe(true);
+    expect(daemon.detail.pid).toBe(process.pid);
+    expect(daemon.detail.healthConfigured).toBe(true);
+  });
+
+  describe('maturity score', () => {
+    it('includes maturityScore (0–100) and maturityLabel in --json output', async () => {
+      process.env.SWITCHBOT_TOKEN = 't';
+      process.env.SWITCHBOT_SECRET = 's';
+      const res = await runCli(registerDoctorCommand, ['--json', 'doctor']);
+      const payload = JSON.parse(res.stdout.filter((l) => l.trim().startsWith('{')).join(''));
+      expect(typeof payload.data.maturityScore).toBe('number');
+      expect(payload.data.maturityScore).toBeGreaterThanOrEqual(0);
+      expect(payload.data.maturityScore).toBeLessThanOrEqual(100);
+      expect(['production-ready', 'mostly-ready', 'needs-work', 'not-ready']).toContain(
+        payload.data.maturityLabel,
+      );
+    });
+
+    it('maturityLabel is lower when credentials are missing (score < 100)', async () => {
+      // No creds → credentials:fail → score is reduced
+      const res = await runCli(registerDoctorCommand, ['--json', 'doctor']);
+      expect(res.exitCode).toBe(1);
+      const payload = JSON.parse(res.stdout.filter((l) => l.trim().startsWith('{')).join(''));
+      expect(payload.data.maturityScore).toBeLessThan(100);
+      expect(['production-ready', 'mostly-ready', 'needs-work', 'not-ready']).toContain(
+        payload.data.maturityLabel,
+      );
+    });
+
+    it('maturityScore is an integer', async () => {
+      process.env.SWITCHBOT_TOKEN = 't';
+      process.env.SWITCHBOT_SECRET = 's';
+      const res = await runCli(registerDoctorCommand, ['--json', 'doctor']);
+      const payload = JSON.parse(res.stdout.filter((l) => l.trim().startsWith('{')).join(''));
+      expect(Number.isInteger(payload.data.maturityScore)).toBe(true);
+    });
+  });
 });
diff --git a/tests/commands/health-serve.test.ts b/tests/commands/health-serve.test.ts
new file mode 100644
index 0000000..6ffc6d3
--- /dev/null
+++ b/tests/commands/health-serve.test.ts
@@ -0,0 +1,140 @@
+/**
+ * Integration tests for `health serve` HTTP endpoints.
+ * Uses createHealthHandler directly to avoid binding real ports during unit tests.
+ */
+import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
+import { createServer } from 'node:http';
+import type { Server } from 'node:http';
+
+// Mock the api/client module so getHealthReport doesn't need real config.
+vi.mock('../../src/api/client.js', () => ({
+  createClient: vi.fn(() => ({ get: vi.fn(), post: vi.fn() })),
+  ApiError: class ApiError extends Error {
+    constructor(message: string, public readonly code: number) {
+      super(message);
+      this.name = 'ApiError';
+    }
+  },
+  DryRunSignal: class DryRunSignal extends Error {
+    constructor(public readonly method: string, public readonly url: string) {
+      super('dry-run');
+      this.name = 'DryRunSignal';
+    }
+  },
+  apiCircuitBreaker: {
+    name: 'switchbot-api',
+    getStats: vi.fn(() => ({ state: 'closed', failures: 0, lastOpenedAt: 0, nextProbeMs: 0 })),
+    checkAndAllow: vi.fn(),
+    recordSuccess: vi.fn(),
+    recordFailure: vi.fn(),
+    reset: vi.fn(),
+    getState: vi.fn(() => 'closed'),
+  },
+  CircuitOpenError: class CircuitOpenError extends Error {
+    constructor(public readonly circuitName: string, public readonly nextAttemptMs: number) {
+      super('circuit open');
+      this.name = 'CircuitOpenError';
+    }
+  },
+}));
+
+import { createHealthHandler } from '../../src/commands/health.js';
+
+// ── helpers ──────────────────────────────────────────────────────────────
+function httpGet(port: number, path: string): Promise<{ status: number; body: string; contentType: string }> {
+  return new Promise((resolve, reject) => {
+    const { request } = require('node:http') as typeof import('node:http');
+    const req = request({ host: '127.0.0.1', port, path, method: 'GET' }, (res) => {
+      const chunks: Buffer[] = [];
+      res.on('data', (c: Buffer) => chunks.push(c));
+      res.on('end', () => resolve({
+        status: res.statusCode ?? 0,
+        body: Buffer.concat(chunks).toString('utf-8'),
+        contentType: String(res.headers['content-type'] ?? ''),
+      }));
+    });
+    req.on('error', reject);
+    req.end();
+  });
+}
+
+function startServer(): Promise<{ server: Server; port: number }> {
+  return new Promise((resolve, reject) => {
+    const handler = createHealthHandler();
+    const server = createServer(handler);
+    server.listen(0, '127.0.0.1', () => {
+      const addr = server.address();
+      if (!addr || typeof addr === 'string') { reject(new Error('unexpected address')); return; }
+      resolve({ server, port: addr.port });
+    });
+    server.on('error', reject);
+  });
+}
+
+// ── tests ────────────────────────────────────────────────────────────────
+describe('health serve HTTP endpoints', () => {
+  let server: Server;
+  let port: number;
+
+  beforeEach(async () => {
+    const started = await startServer();
+    server = started.server;
+    port = started.port;
+  });
+
+  afterEach(() => new Promise<void>((resolve) => server.close(() => resolve())));
+
+  it('GET /healthz returns 200 with JSON body including overall and schemaVersion', async () => {
+    const res = await httpGet(port, '/healthz');
+    expect(res.status).toBe(200);
+    expect(res.contentType).toContain('application/json');
+    const body = JSON.parse(res.body) as Record<string, unknown>;
+    expect(body.schemaVersion).toBeDefined();
+    const data = body.data as Record<string, unknown>;
+    expect(['ok', 'degraded', 'down']).toContain(data.overall);
+    expect(data.quota).toBeDefined();
+    expect(data.circuit).toBeDefined();
+    expect(data.process).toBeDefined();
+  });
+
+  it('GET /metrics returns 200 with Prometheus text', async () => {
+    const res = await httpGet(port, '/metrics');
+    expect(res.status).toBe(200);
+    expect(res.contentType).toContain('text/plain');
+    expect(res.body).toContain('switchbot_quota_used_total');
+    expect(res.body).toContain('switchbot_circuit_open');
+  });
+
+  it('GET /unknown returns 404 with JSON error', async () => {
+    const res = await httpGet(port, '/does-not-exist');
+    expect(res.status).toBe(404);
+    const body = JSON.parse(res.body) as Record<string, unknown>;
+    expect(body.paths).toBeDefined();
+  });
+});
+
+describe('health serve — port conflict', () => {
+  it('emits EADDRINUSE error when the port is already in use', async () => {
+    const blocker = await new Promise<{ server: Server; port: number }>((resolve, reject) => {
+      const s = createServer();
+      s.listen(0, '127.0.0.1', () => {
+        const addr = s.address();
+        if (!addr || typeof addr === 'string') { reject(new Error('unexpected')); return; }
+        resolve({ server: s, port: addr.port });
+      });
+      s.on('error', reject);
+    });
+
+    const { createHealthHandler } = await import('../../src/commands/health.js');
+    const conflictServer = createServer(createHealthHandler());
+    const err = await new Promise<NodeJS.ErrnoException>((resolve) => {
+      conflictServer.on('error', resolve as (e: Error) => void);
+      conflictServer.listen(blocker.port, '127.0.0.1');
+    });
+
+    await new Promise<void>((r) => blocker.server.close(() => r()));
+    expect(err.code).toBe('EADDRINUSE');
+  });
+});
diff --git a/tests/commands/mcp.test.ts b/tests/commands/mcp.test.ts
index cb1243a..f15ce6d 100644
--- a/tests/commands/mcp.test.ts
+++ b/tests/commands/mcp.test.ts
@@ -132,6 +132,7 @@ describe('mcp server', () => {
     cacheMock.map.clear();
     cacheMock.getCachedDevice.mockClear();
     cacheMock.updateCacheFromDeviceList.mockClear();
+    delete process.env.SWITCHBOT_ALLOW_DIRECT_DESTRUCTIVE;
   });
 
   it('exposes the twenty-one tools with titles and input schemas', async () => {
@@ -195,7 +196,24 @@ describe('mcp server', () => {
     expect(apiMock.__instance.post).not.toHaveBeenCalled();
   });
 
-  it('send_command allows destructive commands when confirm:true', async () => {
+  it('send_command rejects direct destructive commands in the default profile even with confirm:true', async () => {
+    cacheMock.map.set('LOCK1', { type: 'Smart Lock', name: 'Front Door', category: 'physical' });
+    const { client } = await pair();
+
+    const res = await client.callTool({
+      name: 'send_command',
+      arguments: { deviceId: 'LOCK1', command: 'unlock', confirm: true },
+    });
+
+    expect(res.isError).toBe(true);
+    const parsed = JSON.parse((res.content as Array<{ text: string }>)[0].text);
+    expect(parsed.error.kind).toBe('guard');
+    expect(parsed.error.hint).toMatch(/plan save|plan execute/);
+    expect(apiMock.__instance.post).not.toHaveBeenCalled();
+  });
+
+  it('send_command allows destructive commands with confirm:true when the direct-execution override is enabled', async () => {
+    process.env.SWITCHBOT_ALLOW_DIRECT_DESTRUCTIVE = '1';
     cacheMock.map.set('LOCK1', { type: 'Smart Lock', name: 'Front Door', category: 'physical' });
     apiMock.__instance.post.mockResolvedValueOnce({
       data: { statusCode: 100, body: { commandId: 'xyz' } },
@@ -209,9 +227,6 @@ describe('mcp server', () => {
 
     expect(res.isError).toBeFalsy();
     expect(apiMock.__instance.post).toHaveBeenCalledTimes(1);
-    const [url, body] = apiMock.__instance.post.mock.calls[0];
-    expect(url).toBe('/v1.1/devices/LOCK1/commands');
-    expect(body).toMatchObject({ command: 'unlock', commandType: 'command' });
   });
 
   it('send_command rejects an unknown command name before calling the API', async () => {
@@ -706,6 +721,27 @@ describe('mcp server', () => {
     expect(summary.error).toBe(0);
   });
 
+  it('plan_run rejects direct destructive execution when yes=true in the default profile', async () => {
+    cacheMock.map.set('LOCK1', { type: 'Smart Lock', name: 'Front Door', category: 'physical' });
+    const { client } = await pair();
+
+    const res = await client.callTool({
+      name: 'plan_run',
+      arguments: {
+        yes: true,
+        plan: {
+          version: '1.0',
+          steps: [{ type: 'command', deviceId: 'LOCK1', command: 'unlock' }],
+        },
+      },
+    });
+
+    expect(res.isError).toBe(true);
+    const parsed = JSON.parse((res.content as Array<{ text: string }>)[0].text);
+    expect(parsed.error.kind).toBe('guard');
+    expect(parsed.error.hint).toMatch(/plan save|plan execute/);
+  });
+
   it('audit_query filters entries by result', async () => {
     const tmp = fs.mkdtempSync(path.join(os.tmpdir(), 'sbmcp-audit-'));
     const auditPath = path.join(tmp, 'audit.log');
@@ -1023,4 +1059,61 @@ describe('mcp server', () => {
       expect(sc).toEqual(cliData);
     });
   });
+
+  // ── riskProfile.idempotencyHint ────────────────────────────────────────
+  describe('send_command riskProfile.idempotencyHint', () => {
+    type RiskProfile = { idempotencyHint: string; riskLevel: string; requiresConfirmation: boolean };
+    type DryRunResponse = { ok: boolean; dryRun: boolean; riskProfile: RiskProfile; wouldSend: unknown };
+
+    it('turnOn → idempotencyHint:safe (catalog says idempotent:true)', async () => {
+      cacheMock.map.set('BOT1', { type: 'Bot', name: 'Test Bot', category: 'physical' });
+      const { client } = await pair();
+      const res = await client.callTool({ name: 'send_command', arguments: { deviceId: 'BOT1', command: 'turnOn', dryRun: true } });
+      expect(res.isError).toBeFalsy();
+      const sc = res.structuredContent as DryRunResponse;
+      expect(sc.riskProfile.idempotencyHint).toBe('safe');
+    });
+
+    it('toggle → idempotencyHint:non-idempotent (catalog says idempotent:false)', async () => {
+      cacheMock.map.set('PLUG1', { type: 'Plug Mini (US)', name: 'Test Plug', category: 'physical' });
+      const { client } = await pair();
+      const res = await client.callTool({ name: 'send_command', arguments: { deviceId: 'PLUG1', command: 'toggle', dryRun: true } });
+      expect(res.isError).toBeFalsy();
+      const sc = res.structuredContent as DryRunResponse;
+      expect(sc.riskProfile.idempotencyHint).toBe('non-idempotent');
+    });
+
+    it('unlock (destructive, idempotent:true) → idempotencyHint:safe', async () => {
+      cacheMock.map.set('LOCK1', { type: 'Smart Lock', name: 'Front Door', category: 'physical' });
+      const { client } = await pair();
+      // unlock is destructive — live path needs confirm:true
+      const res = await client.callTool({ name: 'send_command', arguments: { deviceId: 'LOCK1', command: 'unlock', dryRun: true } });
+      expect(res.isError).toBeFalsy();
+      const sc = res.structuredContent as DryRunResponse;
+      // destructive but catalog marks idempotent:true → hint should be safe
+      expect(sc.riskProfile.idempotencyHint).toBe('safe');
+      // riskLevel must still be high (it is destructive)
+      expect(sc.riskProfile.riskLevel).toBe('high');
+    });
+
+    it('live path: turnOn → idempotencyHint:safe', async () => {
+      cacheMock.map.set('BOT1', { type: 'Bot', name: 'Test Bot', category: 'physical' });
+      apiMock.__instance.post.mockResolvedValue({ data: { statusCode: 100, body: {} } });
+      const { client } = await pair();
+      const res = await client.callTool({ name: 'send_command', arguments: { deviceId: 'BOT1', command: 'turnOn' } });
+      expect(res.isError).toBeFalsy();
+      const sc = res.structuredContent as { ok: boolean; riskProfile: RiskProfile };
+      expect(sc.riskProfile.idempotencyHint).toBe('safe');
+    });
+
+    it('live path: toggle → idempotencyHint:non-idempotent', async () => {
+      cacheMock.map.set('PLUG1', { type: 'Plug Mini (US)', name: 'Test Plug', category: 'physical' });
+      apiMock.__instance.post.mockResolvedValue({ data: { statusCode: 100, body: {} } });
+      const { client } = await pair();
+      const res = await client.callTool({ name: 'send_command', arguments: { deviceId: 'PLUG1', command: 'toggle' } });
+      expect(res.isError).toBeFalsy();
+      const sc = res.structuredContent as { ok: boolean; riskProfile: RiskProfile };
+      expect(sc.riskProfile.idempotencyHint).toBe('non-idempotent');
+    });
+  });
 });
diff --git a/tests/commands/plan-resource.test.ts b/tests/commands/plan-resource.test.ts
new file mode 100644
index 0000000..b269452
--- /dev/null
+++ b/tests/commands/plan-resource.test.ts
@@ -0,0 +1,288 @@
+/**
+ * Regression tests for the plan resource model (save/list/review/approve/execute)
+ * and the JSON error contract for plan subcommands.
+ *
+ * Bugs covered:
+ * 1. plan execute marks status=executed even on error/skipped steps.
+ * 2. plan approve/review/execute emit bare console.error instead of structured
+ *    error envelope under --json.
+ */
+
+import { describe, it, expect, vi, beforeEach } from 'vitest';
+import { randomUUID } from 'node:crypto';
+
+// ── api client mock ──────────────────────────────────────────────────────
+const apiMock = vi.hoisted(() => {
+  const instance = { get: vi.fn(), post: vi.fn() };
+  return {
+    createClient: vi.fn(() => instance),
+    __instance: instance,
+    DryRunSignal: class DryRunSignal extends Error {
+      constructor(public readonly method: string, public readonly url: string) {
+        super('dry-run');
+        this.name = 'DryRunSignal';
+      }
+    },
+  };
+});
+vi.mock('../../src/api/client.js', () => ({
+  createClient: apiMock.createClient,
+  ApiError: class ApiError extends Error {
+    constructor(message: string, public readonly code: number) {
+      super(message);
+      this.name = 'ApiError';
+    }
+  },
+  DryRunSignal: apiMock.DryRunSignal,
+}));
+
+// ── device cache mock ────────────────────────────────────────────────────
+const cacheMock = vi.hoisted(() => ({
+  map: new Map(),
+  getCachedDevice: vi.fn((id: string) => cacheMock.map.get(id) ?? null),
+  updateCacheFromDeviceList: vi.fn(),
+}));
+vi.mock('../../src/devices/cache.js', () => ({
+  getCachedDevice: cacheMock.getCachedDevice,
+  updateCacheFromDeviceList: cacheMock.updateCacheFromDeviceList,
+  loadCache: vi.fn(() => null),
+  clearCache: vi.fn(),
+  isListCacheFresh: vi.fn(() => false),
+  listCacheAgeMs: vi.fn(() => null),
+  getCachedStatus: vi.fn(() => null),
+  setCachedStatus: vi.fn(),
+  clearStatusCache: vi.fn(),
+  loadStatusCache: vi.fn(() => ({ entries: {} })),
+  describeCache: vi.fn(() => ({
+    list: { path: '', exists: false },
+    status: { path: '', exists: false, entryCount: 0 },
+  })),
+}));
+
+// ── flags mock ───────────────────────────────────────────────────────────
+const flagsMock = vi.hoisted(() => ({
+  dryRun: false,
+  isDryRun: vi.fn(() => flagsMock.dryRun),
+  isVerbose: vi.fn(() => false),
+  getTimeout: vi.fn(() => 30000),
+  getConfigPath: vi.fn(() => undefined),
+  getProfile: vi.fn(() => undefined),
+  getAuditLog: vi.fn(() => null),
+  getCacheMode: vi.fn(() => ({ listTtlMs: 0, statusTtlMs: 0 })),
+  getFormat: vi.fn(() => undefined),
+  getFields: vi.fn(() => undefined),
+}));
+vi.mock('../../src/utils/flags.js', () => flagsMock);
+
+// ── plan-store in-memory mock ────────────────────────────────────────────
+type PlanStatus = 'pending' | 'approved' | 'rejected' | 'executed' | 'failed';
+interface StoredPlan { version: string; steps: unknown[] }
+interface PlanRecord {
+  planId: string;
+  createdAt: string;
+  status: PlanStatus;
+  approvedAt?: string;
+  executedAt?: string;
+  failedAt?: string;
+  failureReason?: string;
+  plan: StoredPlan;
+}
+
+const planStore = vi.hoisted(() => {
+  const store = new Map<string, PlanRecord>();
+  return {
+    store,
+    PLANS_DIR: '/mock/plans',
+    savePlanRecord: vi.fn((plan: StoredPlan): PlanRecord => {
+      const record: PlanRecord = { planId: randomUUID(), createdAt: new Date().toISOString(), status: 'pending', plan };
+      store.set(record.planId, record);
+      return record;
+    }),
+    loadPlanRecord: vi.fn((planId: string): PlanRecord | null => store.get(planId) ?? null),
+    updatePlanRecord: vi.fn((planId: string, updates: Partial<PlanRecord>): PlanRecord => {
+      const r = store.get(planId);
+      if (!r) throw new Error(`Plan ${planId} not found`);
+      const updated = { ...r, ...updates };
+      store.set(planId, updated);
+      return updated;
+    }),
+    listPlanRecords: vi.fn((): PlanRecord[] => [...store.values()].sort((a, b) => a.createdAt.localeCompare(b.createdAt))),
+  };
+});
+vi.mock('../../src/lib/plan-store.js', () => planStore);
+
+import { registerPlanCommand } from '../../src/commands/plan.js';
+import { runCli } from '../helpers/cli.js';
+
+// ── helpers ──────────────────────────────────────────────────────────────
+function simplePlan(): StoredPlan {
+  return { version: '1.0', steps: [{ type: 'command', deviceId: 'BOT1', command: 'turnOn' }] };
+}
+
+function lockPlan(): StoredPlan {
+  return { version: '1.0', steps: [{ type: 'command', deviceId: 'LOCK1', command: 'unlock' }] };
+}
+
+function parseJsonOutput(stdout: string[]): Record<string, unknown> {
+  const line = stdout.find((l) => l.trim().startsWith('{'));
+  if (!line) throw new Error('No JSON line in stdout: ' + JSON.stringify(stdout));
+  return JSON.parse(line) as Record<string, unknown>;
+}
+
+function makeApproved(plan: StoredPlan): PlanRecord {
+  const r = planStore.savePlanRecord(plan);
+  return planStore.updatePlanRecord(r.planId, { status: 'approved', approvedAt: new Date().toISOString() });
+}
+
+// ── tests ────────────────────────────────────────────────────────────────
+describe('plan resource model', () => {
+  beforeEach(() => {
+    planStore.store.clear();
+    planStore.savePlanRecord.mockClear();
+    planStore.loadPlanRecord.mockClear();
+    planStore.updatePlanRecord.mockClear();
+    apiMock.__instance.post.mockReset();
+    cacheMock.map.clear();
+    flagsMock.dryRun = false;
+  });
+
+  // ── plan execute status machine ────────────────────────────────────────
+  describe('plan execute — status transitions', () => {
+    it('marks status=executed when all steps succeed', async () => {
+      const record = makeApproved(simplePlan());
+      apiMock.__instance.post.mockResolvedValue({ data: { statusCode: 100, body: {} } });
+
+      await runCli(registerPlanCommand, ['plan', 'execute', record.planId, '--yes']);
+
+      const updated = planStore.store.get(record.planId);
+      expect(updated?.status).toBe('executed');
+      expect(updated?.executedAt).toBeDefined();
+      expect(updated?.failedAt).toBeUndefined();
+    });
+
+    it('marks status=failed (not executed) when a step errors', async () => {
+      const record = makeApproved(simplePlan());
+      apiMock.__instance.post.mockRejectedValue(new Error('network error'));
+
+      const res = await runCli(registerPlanCommand, ['plan', 'execute', record.planId, '--yes', '--continue-on-error']);
+
+      expect(res.exitCode).toBe(1);
+      const updated = planStore.store.get(record.planId);
+      expect(updated?.status).toBe('failed');
+      expect(updated?.failedAt).toBeDefined();
+      expect(updated?.failureReason).toMatch(/error/);
+      expect(updated?.executedAt).toBeUndefined();
+    });
+
+    it('marks status=failed when a destructive step errors (plan execute forces yes:true)', async () => {
+      cacheMock.map.set('LOCK1', { type: 'Smart Lock', name: 'Front', category: 'physical' });
+      const record = makeApproved(lockPlan());
+
+      // No API mock for unlock — the step is attempted (plan execute forces yes:true)
+      // and fails because the mock returns undefined.
+      const res = await runCli(registerPlanCommand, ['plan', 'execute', record.planId]);
+
+      expect(res.exitCode).toBe(1);
+      const updated = planStore.store.get(record.planId);
+      expect(updated?.status).toBe('failed');
+      expect(updated?.failureReason).toMatch(/error/);
+    });
+
+    it('failed plan can be re-approved and retried', async () => {
+      const record = planStore.savePlanRecord(simplePlan());
+      planStore.updatePlanRecord(record.planId, {
+        status: 'failed',
+        failedAt: new Date().toISOString(),
+        failureReason: '1 error',
+      });
+
+      await runCli(registerPlanCommand, ['plan', 'approve', record.planId]);
+      expect(planStore.store.get(record.planId)?.status).toBe('approved');
+
+      apiMock.__instance.post.mockResolvedValue({ data: { statusCode: 100, body: {} } });
+      await runCli(registerPlanCommand, ['plan', 'execute', record.planId, '--yes']);
+      expect(planStore.store.get(record.planId)?.status).toBe('executed');
+    });
+
+    it('--json output includes succeeded:true on clean run', async () => {
+      const record = makeApproved(simplePlan());
+      apiMock.__instance.post.mockResolvedValue({ data: { statusCode: 100, body: {} } });
+
+      const res = await runCli(registerPlanCommand, ['--json', 'plan', 'execute', record.planId, '--yes']);
+      const out = (parseJsonOutput(res.stdout).data as Record<string, unknown>);
+      expect(out.succeeded).toBe(true);
+      expect(out.ran).toBe(true);
+    });
+
+    it('--json output includes succeeded:false on failed run', async () => {
+      const record = makeApproved(simplePlan());
+      apiMock.__instance.post.mockRejectedValue(new Error('boom'));
+
+      const res = await runCli(registerPlanCommand, ['--json', 'plan', 'execute', record.planId, '--yes', '--continue-on-error']);
+      const out = (parseJsonOutput(res.stdout).data as Record<string, unknown>);
+      expect(out.succeeded).toBe(false);
+      expect(out.ran).toBe(true);
+    });
+  });
+
+  // ── JSON error contract ──────────────────────────────────────────────
+  describe('plan subcommands — JSON error contract', () => {
+    it('plan review --json emits error envelope for unknown planId', async () => {
+      const res = await runCli(registerPlanCommand, ['--json', 'plan', 'review', 'no-such-plan']);
+      expect(res.exitCode).toBe(2);
+      const out = parseJsonOutput(res.stdout);
+      expect(out.error).toBeDefined();
+      expect((out.error as Record<string, unknown>).code).toBe(2);
+      expect((out.error as Record<string, unknown>).message).toMatch(/not found/i);
+    });
+
+    it('plan approve --json emits error envelope for already-executed plan', async () => {
+      const record = planStore.savePlanRecord(simplePlan());
+      planStore.updatePlanRecord(record.planId, { status: 'executed', executedAt: new Date().toISOString() });
+
+      const res = await runCli(registerPlanCommand, ['--json', 'plan', 'approve', record.planId]);
+      expect(res.exitCode).toBe(2);
+      const out = parseJsonOutput(res.stdout);
+      expect((out.error as Record<string, unknown>).message).toMatch(/already been executed/i);
+    });
+
+    it('plan approve --json emits error envelope for rejected plan', async () => {
+      const record = planStore.savePlanRecord(simplePlan());
+      planStore.updatePlanRecord(record.planId, { status: 'rejected' });
+
+      const res = await runCli(registerPlanCommand, ['--json', 'plan', 'approve', record.planId]);
+      expect(res.exitCode).toBe(2);
+      const out = parseJsonOutput(res.stdout);
+      expect((out.error as Record<string, unknown>).message).toMatch(/rejected/i);
+    });
+
+    it('plan execute --json emits error envelope for pending plan (not approved)', async () => {
+      const record = planStore.savePlanRecord(simplePlan());
+      // status is 'pending' by default
+
+      const res = await runCli(registerPlanCommand, ['--json', 'plan', 'execute', record.planId]);
+      expect(res.exitCode).toBe(2);
+      const out = parseJsonOutput(res.stdout);
+      const err = out.error as Record<string, unknown>;
+      expect(err.message).toMatch(/cannot be executed/i);
+      expect((err.context as Record<string, unknown>)?.status).toBe('pending');
+    });
+
+    it('plan execute --json emits error envelope when plan not found', async () => {
+      const res = await runCli(registerPlanCommand, ['--json', 'plan', 'execute', 'ghost-id']);
+      expect(res.exitCode).toBe(2);
+      const out = parseJsonOutput(res.stdout);
+      expect((out.error as Record<string, unknown>).code).toBe(2);
+    });
+
+    it('plan approve --json returns ok:true on success', async () => {
+      const record = planStore.savePlanRecord(simplePlan());
+
+      const res = await runCli(registerPlanCommand, ['--json', 'plan', 'approve', record.planId]);
+      expect(res.exitCode).not.toBe(2);
+      const out = parseJsonOutput(res.stdout);
+      expect((out.data as Record<string, unknown>).ok).toBe(true);
+      expect((out.data as Record<string, unknown>).status).toBe('approved');
+    });
+  });
+});
diff --git a/tests/commands/plan.test.ts b/tests/commands/plan.test.ts
index 08d5563..0108f94 100644
--- a/tests/commands/plan.test.ts
+++ b/tests/commands/plan.test.ts
@@ -75,6 +75,7 @@ describe('plan command', () => {
     apiMock.__instance.post.mockReset();
     cacheMock.map.clear();
     flagsMock.dryRun = false;
+    flagsMock.getProfile.mockReturnValue(undefined);
   });
 
   function writePlan(obj: unknown): string {
@@ -199,7 +200,20 @@ describe('plan command', () => {
     expect(res.stdout.join('\n')).toMatch(/skipped=1/);
   });
 
-  it('runs destructive commands when --yes is passed', async () => {
+  it('rejects direct destructive commands when --yes is passed outside a dev profile', async () => {
+    cacheMock.map.set('LOCK1', { type: 'Smart Lock', name: 'Front', category: 'physical' });
+    const file = writePlan({
+      version: '1.0',
+      steps: [{ type: 'command', deviceId: 'LOCK1', command: 'unlock' }],
+    });
+    const res = await runCli(registerPlanCommand, ['plan', 'run', file, '--yes']);
+    expect(apiMock.__instance.post).not.toHaveBeenCalled();
+    expect(res.exitCode).toBe(2);
+    expect(res.stderr.join('\n')).toMatch(/plan save|plan execute/);
+  });
+
+  it('allows direct destructive commands with --yes in a dev profile', async () => {
+    flagsMock.getProfile.mockReturnValue('dev');
     cacheMock.map.set('LOCK1', { type: 'Smart Lock', name: 'Front', category: 'physical' });
     const file = writePlan({
       version: '1.0',
diff --git a/tests/commands/policy.test.ts b/tests/commands/policy.test.ts
index 6986d90..1d57e03 100644
--- a/tests/commands/policy.test.ts
+++ b/tests/commands/policy.test.ts
@@ -379,4 +379,114 @@ describe('switchbot policy (commander surface)', () => {
       expect(stderr.join('\n')).toContain('policy file not found');
     });
   });
+
+  // ── policy backup / restore ────────────────────────────────────────────
+  describe('policy backup', () => {
+    let policyFile: string;
+
+    beforeEach(() => {
+      policyFile = path.join(tmpDir, 'policy.yaml');
+      fs.writeFileSync(policyFile, 'version: "0.2"\n', 'utf-8');
+      process.env.SWITCHBOT_POLICY_PATH = policyFile;
+    });
+
+    afterEach(() => {
+      delete process.env.SWITCHBOT_POLICY_PATH;
+    });
+
+    it('creates a .bak.yaml backup alongside the policy', () => {
+      const { stdout, exitCode } = runCli(['policy', 'backup']);
+      expect(exitCode).toBe(0);
+      const backupPath = policyFile.replace(/\.yaml$/, '.bak.yaml');
+      expect(fs.existsSync(backupPath)).toBe(true);
+      expect(fs.readFileSync(backupPath, 'utf-8')).toBe(fs.readFileSync(policyFile, 'utf-8'));
+      expect(stdout.join('\n')).toContain('Backup written');
+    });
+
+    it('writes backup to an explicit path', () => {
+      const dest = path.join(tmpDir, 'my-snapshot.yaml');
+      const { stdout, exitCode } = runCli(['policy', 'backup', dest]);
+      expect(exitCode).toBe(0);
+      expect(fs.existsSync(dest)).toBe(true);
+      expect(stdout.join('\n')).toContain(dest);
+    });
+
+    it('refuses to overwrite existing backup without --force', () => {
+      const backupPath = policyFile.replace(/\.yaml$/, '.bak.yaml');
+      fs.writeFileSync(backupPath, 'original\n', 'utf-8');
+
+      const { exitCode } = runCli(['policy', 'backup']);
+      expect(exitCode).toBe(2);
+      expect(fs.readFileSync(backupPath, 'utf-8')).toBe('original\n');
+    });
+
+    it('overwrites existing backup with --force', () => {
+      const backupPath = policyFile.replace(/\.yaml$/, '.bak.yaml');
+      fs.writeFileSync(backupPath, 'old\n', 'utf-8');
+
+      const { exitCode } = runCli(['policy', 'backup', '--force']);
+      expect(exitCode).toBe(0);
+      expect(fs.readFileSync(backupPath, 'utf-8')).not.toBe('old\n');
+    });
+
+    it('--json returns ok:true with source and dest', () => {
+      const { stdout, exitCode } = runCli(['--json', 'policy', 'backup']);
+      expect(exitCode).toBe(0);
+      const out = JSON.parse(stdout[0]) as { data: Record<string, unknown> };
+      expect(out.data.ok).toBe(true);
+      expect(typeof out.data.source).toBe('string');
+      expect(typeof out.data.dest).toBe('string');
+    });
+
+    it('exits 2 when the policy file does not exist', () => {
+      fs.unlinkSync(policyFile);
+      const { exitCode } = runCli(['policy', 'backup']);
+      expect(exitCode).toBe(2);
+    });
+  });
+
+  describe('policy restore', () => {
+    let policyFile: string;
+    let backupFile: string;
+
+    beforeEach(() => {
+      policyFile = path.join(tmpDir, 'policy.yaml');
+      backupFile = path.join(tmpDir, 'policy.bak.yaml');
+      // Write a valid v0.2 policy as the backup source.
+      fs.writeFileSync(backupFile, 'version: "0.2"\n', 'utf-8');
+      // Write a different active policy.
+      fs.writeFileSync(policyFile, 'version: "0.2"\n# original\n', 'utf-8');
+      process.env.SWITCHBOT_POLICY_PATH = policyFile;
+    });
+
+    afterEach(() => {
+      delete process.env.SWITCHBOT_POLICY_PATH;
+    });
+
+    it('restores the backup to the active policy path', () => {
+      const { stdout, exitCode } = runCli(['policy', 'restore', backupFile]);
+      expect(exitCode).toBe(0);
+      expect(fs.readFileSync(policyFile, 'utf-8')).toBe(fs.readFileSync(backupFile, 'utf-8'));
+      expect(stdout.join('\n')).toContain('Policy restored');
+    });
+
+    it('auto-creates a pre-restore backup of the existing policy', () => {
+      runCli(['policy', 'restore', backupFile]);
+      const autoBackup = policyFile.replace(/\.yaml$/, '.pre-restore.bak.yaml');
+      expect(fs.existsSync(autoBackup)).toBe(true);
+    });
+
+    it('exits 2 when the restore source does not exist', () => {
+      const { exitCode } = runCli(['policy', 'restore', path.join(tmpDir, 'missing.yaml')]);
+      expect(exitCode).toBe(2);
+    });
+
+    it('--json returns ok:true with restored path', () => {
+      const { stdout, exitCode } = runCli(['--json', 'policy', 'restore', backupFile]);
+      expect(exitCode).toBe(0);
+      const out = JSON.parse(stdout[0]) as { data: Record<string, unknown> };
+      expect(out.data.ok).toBe(true);
+      expect(out.data.restored).toBe(policyFile);
+    });
+  });
 });
diff --git a/tests/commands/rules.test.ts b/tests/commands/rules.test.ts
index 52426ee..91c8920 100644
--- a/tests/commands/rules.test.ts
+++ b/tests/commands/rules.test.ts
@@ -458,4 +458,88 @@ describe('switchbot rules (commander surface)', () => {
       expect(stdout.join('\n')).toMatch(/no rules recorded/);
     });
   });
+
+  describe('rules explain', () => {
+    const explainPolicy = [
+      'automation:',
+      '  enabled: true',
+      '  rules:',
+      '    - name: motion on',
+      '      when:',
+      '        source: mqtt',
+      '        event: motion.detected',
+      '      conditions:',
+      '        - time_between: ["22:00", "07:00"]',
+      '      then:',
+      '        - command: "devices command LAMP turnOn"',
+      '          device: LAMP',
+      '      cooldown: "5m"',
+      '      maxFiringsPerHour: 6',
+      '      suppressIfAlreadyDesired: true',
+      'aliases:',
+      '  "LAMP": "AA-BB-CC-DD-EE-01"',
+      '',
+    ].join('\n');
+
+    it('prints human-readable detail for a known rule', async () => {
+      const p = path.join(tmpDir, 'policy.yaml');
+      fs.writeFileSync(p, v02Policy(explainPolicy), 'utf-8');
+      const { stdout, exitCode } = await runCli(['rules', 'explain', 'motion on', p]);
+      expect(exitCode).toBe(0);
+      const out = stdout.join('\n');
+      expect(out).toContain('motion on');
+      expect(out).toContain('mqtt:motion.detected');
+      expect(out).toContain('5m');
+      expect(out).toContain('6');
+      expect(out).toContain('suppressIfAlreadyDesired');
+      expect(out).toContain('(never)');
+    });
+
+    it('emits a JSON envelope with all fields', async () => {
+      const p = path.join(tmpDir, 'policy.yaml');
+      fs.writeFileSync(p, v02Policy(explainPolicy), 'utf-8');
+      const { stdout, exitCode } = await runCli(['--json', 'rules', 'explain', 'motion on', p]);
+      expect(exitCode).toBe(0);
+      const body = JSON.parse(stdout[0]) as { data: Record<string, unknown> };
+      const d = body.data;
+      expect(d.name).toBe('motion on');
+      expect(d.enabled).toBe(true);
+      expect(d.trigger).toBe('mqtt:motion.detected');
+      expect(d.cooldown).toBe('5m');
+      expect(d.maxFiringsPerHour).toBe(6);
+      expect(d.suppressIfAlreadyDesired).toBe(true);
+      expect(d.lastFired).toBeNull();
+    });
+
+    it('exits 1 with usage error when rule name is not found', async () => {
+      const p = path.join(tmpDir, 'policy.yaml');
+      fs.writeFileSync(p, v02Policy(explainPolicy), 'utf-8');
+      const { exitCode } = await runCli(['rules', 'explain', 'no-such-rule', p]);
+      expect(exitCode).toBe(1);
+    });
+
+    it('reflects last-fired time from audit log', async () => {
+      const p = path.join(tmpDir, 'policy.yaml');
+      const auditFile = path.join(tmpDir, 'audit.log');
+      fs.writeFileSync(p, v02Policy(explainPolicy), 'utf-8');
+      const entry = {
+        t: '2026-04-25T08:00:00.000Z',
+        kind: 'rule-fire',
+        deviceId: 'LAMP',
+        command: 'turnOn',
+        parameter: null,
+        commandType: 'command',
+        dryRun: false,
+ result: 'ok', + rule: { name: 'motion on', triggerSource: 'mqtt', fireId: 'f-1' }, + }; + fs.writeFileSync(auditFile, JSON.stringify(entry) + '\n'); + const { stdout, exitCode } = await runCli([ + '--json', 'rules', 'explain', 'motion on', '--file', auditFile, p, + ]); + expect(exitCode).toBe(0); + const body = JSON.parse(stdout[0]) as { data: Record<string, unknown> }; + expect(body.data.lastFired).toBe('2026-04-25T08:00:00.000Z'); + }); + }); }); diff --git a/tests/commands/scenes.test.ts b/tests/commands/scenes.test.ts index ded14ef..aa1a672 100644 --- a/tests/commands/scenes.test.ts +++ b/tests/commands/scenes.test.ts @@ -254,4 +254,48 @@ describe('scenes command', () => { expect(err).not.toContain('Did you mean:'); }); }); + + describe('explain', () => { + function mockScenes() { + apiMock.__instance.get.mockResolvedValue({ + data: { + body: [ + { sceneId: 'S1', sceneName: 'Good Morning' }, + { sceneId: 'S2', sceneName: 'Movie Time' }, + ], + }, + }); + } + + it('--json returns explanation envelope with riskLevel and toExecute', async () => { + mockScenes(); + const res = await runCli(registerScenesCommand, ['--json', 'scenes', 'explain', 'S1']); + expect(res.exitCode).not.toBe(2); + const out = JSON.parse(res.stdout.find((l) => l.trim().startsWith('{'))!) 
as Record<string, unknown>; + expect((out.data as Record<string, unknown>).sceneId).toBe('S1'); + expect((out.data as Record<string, unknown>).sceneName).toBe('Good Morning'); + expect((out.data as Record<string, unknown>).riskLevel).toBe('low'); + expect((out.data as Record<string, unknown>).toExecute).toBe('switchbot scenes execute S1'); + expect((out.data as Record<string, unknown>).idempotent).toBeNull(); + }); + + it('plaintext output includes key explanation fields', async () => { + mockScenes(); + const res = await runCli(registerScenesCommand, ['scenes', 'explain', 'S1']); + expect(res.exitCode).not.toBe(2); + const out = res.stdout.join('\n'); + expect(out).toContain('Good Morning'); + expect(out).toContain('riskLevel'); + expect(out).toContain('toExecute'); + expect(out).toContain('scenes execute S1'); + }); + + it('--json emits error envelope for unknown sceneId', async () => { + mockScenes(); + const res = await runCli(registerScenesCommand, ['--json', 'scenes', 'explain', 'MISSING']); + expect(res.exitCode).toBe(2); + const out = JSON.parse(res.stdout.find((l) => l.trim().startsWith('{'))!) as Record<string, unknown>; + expect((out.error as Record<string, unknown>).message).toMatch(/scene not found/i); + }); + }); }); diff --git a/tests/commands/upgrade-check.test.ts b/tests/commands/upgrade-check.test.ts new file mode 100644 index 0000000..3090c7b --- /dev/null +++ b/tests/commands/upgrade-check.test.ts @@ -0,0 +1,59 @@ +import { describe, it, expect } from 'vitest'; + +// Mirror of semverGt in upgrade-check.ts — tests pin the contract. +// If the implementation changes, these tests should catch regressions. 
+function semverGt(a: string, b: string): boolean { + const numParts = (v: string) => v.replace(/-.*$/, '').split('.').map((n) => Number.parseInt(n, 10)); + const [aMaj, aMin, aPat] = numParts(a); + const [bMaj, bMin, bPat] = numParts(b); + if (aMaj !== bMaj) return aMaj > bMaj; + if (aMin !== bMin) return aMin > bMin; + if (aPat !== bPat) return aPat > bPat; + // Same numeric version: release (no prerelease) > prerelease + return !a.includes('-') && b.includes('-'); +} + +describe('semverGt (upgrade-check)', () => { + it('release > prerelease of same version', () => { + expect(semverGt('3.2.1', '3.2.1-rc.1')).toBe(true); + expect(semverGt('3.2.1', '3.2.1-beta.1')).toBe(true); + }); + + it('newer patch > older patch', () => { + expect(semverGt('3.2.2', '3.2.1')).toBe(true); + expect(semverGt('3.2.1', '3.2.2')).toBe(false); + }); + + it('same version is not gt', () => { + expect(semverGt('3.2.1', '3.2.1')).toBe(false); + }); + + it('newer minor > older minor', () => { + expect(semverGt('3.3.0', '3.2.9')).toBe(true); + }); + + it('newer major wins regardless of minor/patch', () => { + expect(semverGt('4.0.0', '3.99.99')).toBe(true); + }); +}); + +describe('breakingChange detection (upgrade-check)', () => { + function majorOf(v: string): number { + return Number.parseInt(v.split('.')[0], 10); + } + function isBreaking(latest: string, current: string): boolean { + return majorOf(latest) > majorOf(current); + } + + it('same major → no breaking change', () => { + expect(isBreaking('3.5.0', '3.2.1')).toBe(false); + }); + + it('major bump → breaking change', () => { + expect(isBreaking('4.0.0', '3.99.9')).toBe(true); + }); + + it('older latest → no breaking change', () => { + expect(isBreaking('2.0.0', '3.0.0')).toBe(false); + }); +}); diff --git a/tests/lib/plan-store.test.ts b/tests/lib/plan-store.test.ts new file mode 100644 index 0000000..b6d6797 --- /dev/null +++ b/tests/lib/plan-store.test.ts @@ -0,0 +1,18 @@ +import { describe, it, expect } from 'vitest'; +import { 
loadPlanRecord, updatePlanRecord } from '../../src/lib/plan-store.js'; + +describe('plan-store security', () => { + it('loadPlanRecord rejects non-UUID planId (path traversal guard)', () => { + expect(() => loadPlanRecord('../../etc/passwd')).toThrow(/invalid planId/i); + expect(() => loadPlanRecord('../other')).toThrow(/invalid planId/i); + expect(() => loadPlanRecord('not-a-uuid')).toThrow(/invalid planId/i); + }); + + it('updatePlanRecord rejects non-UUID planId', () => { + expect(() => updatePlanRecord('../../etc/passwd', {})).toThrow(/invalid planId/i); + }); + + it('loadPlanRecord accepts valid UUID v4 (returns null if file does not exist)', () => { + expect(loadPlanRecord('00000000-0000-4000-8000-000000000000')).toBeNull(); + }); +}); diff --git a/tests/rules/conflict-analyzer.test.ts b/tests/rules/conflict-analyzer.test.ts new file mode 100644 index 0000000..d3e5188 --- /dev/null +++ b/tests/rules/conflict-analyzer.test.ts @@ -0,0 +1,95 @@ +import { describe, it, expect } from 'vitest'; +import { analyzeConflicts } from '../../src/rules/conflict-analyzer.js'; +import type { Rule } from '../../src/rules/types.js'; + +function mqttRule(name: string, extra: Partial<Rule> = {}): Rule { + return { + name, + when: { source: 'mqtt', event: 'motion.detected' }, + then: [{ command: 'devices command LAMP turnOn', device: 'LAMP' }], + ...extra, + }; +} + +function webhookRule(name: string, extra: Partial<Rule> = {}): Rule { + return { + name, + when: { source: 'webhook', path: '/motion' }, + then: [{ command: 'devices command LAMP turnOn', device: 'LAMP' }], + ...extra, + }; +} + +function cronRule(name: string, extra: Partial<Rule> = {}): Rule { + return { + name, + when: { source: 'cron', schedule: '0 22 * * *' }, + then: [{ command: 'devices command LAMP turnOff', device: 'LAMP' }], + ...extra, + }; +} + +describe('analyzeConflicts — quiet-hours gap detection', () => { + const quietHours = { start: '22:00', end: '07:00' }; + + it('returns no findings when quietHours is not 
provided', () => { + const report = analyzeConflicts([mqttRule('r1')]); + const qhFindings = report.findings.filter((f) => f.code === 'no-quiet-hours-guard'); + expect(qhFindings).toHaveLength(0); + }); + + it('flags mqtt rule with no time_between condition when quietHours is set', () => { + const report = analyzeConflicts([mqttRule('r1')], quietHours); + const qhFindings = report.findings.filter((f) => f.code === 'no-quiet-hours-guard'); + expect(qhFindings).toHaveLength(1); + expect(qhFindings[0].rules).toContain('r1'); + expect(qhFindings[0].severity).toBe('warning'); + expect(qhFindings[0].message).toContain('22:00–07:00'); + }); + + it('flags webhook rule with no time_between condition when quietHours is set', () => { + const report = analyzeConflicts([webhookRule('wh1')], quietHours); + const qhFindings = report.findings.filter((f) => f.code === 'no-quiet-hours-guard'); + expect(qhFindings).toHaveLength(1); + expect(qhFindings[0].rules).toContain('wh1'); + }); + + it('does not flag a rule that has a top-level time_between condition', () => { + const r = mqttRule('guarded', { + conditions: [{ time_between: ['07:00', '22:00'] }], + }); + const report = analyzeConflicts([r], quietHours); + const qhFindings = report.findings.filter((f) => f.code === 'no-quiet-hours-guard'); + expect(qhFindings).toHaveLength(0); + }); + + it('does not flag a time_between inside an all-condition (nested guard)', () => { + const r = mqttRule('nested', { + conditions: [{ all: [{ time_between: ['07:00', '22:00'] }] }], + }); + const report = analyzeConflicts([r], quietHours); + const qhFindings = report.findings.filter((f) => f.code === 'no-quiet-hours-guard'); + expect(qhFindings).toHaveLength(0); + }); + + it('does not flag cron rules (schedule-driven, not event-driven)', () => { + const report = analyzeConflicts([cronRule('nightly')], quietHours); + const qhFindings = report.findings.filter((f) => f.code === 'no-quiet-hours-guard'); + expect(qhFindings).toHaveLength(0); + }); + + 
it('does not flag disabled rules', () => { + const r = mqttRule('disabled-r', { enabled: false }); + const report = analyzeConflicts([r], quietHours); + const qhFindings = report.findings.filter((f) => f.code === 'no-quiet-hours-guard'); + expect(qhFindings).toHaveLength(0); + }); + + it('hint contains an actionable time_between snippet', () => { + const report = analyzeConflicts([mqttRule('r1')], quietHours); + const f = report.findings.find((x) => x.code === 'no-quiet-hours-guard'); + expect(f?.hint).toContain('time_between'); + expect(f?.hint).toContain('07:00'); + expect(f?.hint).toContain('22:00'); + }); +}); diff --git a/tests/rules/engine.test.ts b/tests/rules/engine.test.ts index 9ce5da1..939df08 100644 --- a/tests/rules/engine.test.ts +++ b/tests/rules/engine.test.ts @@ -693,4 +693,280 @@ describe('RulesEngine.reload', () => { expect(result.changed).toBe(false); expect(result.errors.join(' ')).toMatch(/engine not running/); }); + + // ── hysteresis ───────────────────────────────────────────────────────── + describe('hysteresis / requires_stable_for', () => { + it('suppresses the first observation, fires after the window has elapsed', async () => { + const fires: EngineFireEntry[] = []; + const rule: Rule = { + name: 'stable-motion', + when: { source: 'mqtt', event: 'motion.detected' }, + then: [{ command: 'devices command turnOn', device: 'hallway lamp' }], + dry_run: true, + hysteresis: '5s', + }; + const engine = new RulesEngine({ + automation: automation([rule]), + aliases: { 'hallway lamp': 'AA-BB-CC' }, + mqttClient: mqtt as unknown as SwitchBotMqttClient, + mqttCredential: fakeCredential, + skipApiCall: true, + onFire: (e) => fires.push(e), + }); + await engine.start(); + + // first observation — should be throttled (first-seen) + const t0 = new Date(2026, 3, 22, 10, 0, 0); + await engine.ingestEventForTest({ source: 'mqtt', event: 'motion.detected', t: t0, deviceId: 'SENSOR1' }); + await engine.drainForTest(); + expect(fires.map((f) => 
f.status)).toContain('throttled'); + expect(fires.at(-1)?.reason).toMatch(/hysteresis/); + fires.length = 0; + + // second observation within window (3s later) — still throttled + const t1 = new Date(t0.getTime() + 3_000); + await engine.ingestEventForTest({ source: 'mqtt', event: 'motion.detected', t: t1, deviceId: 'SENSOR1' }); + await engine.drainForTest(); + expect(fires.map((f) => f.status)).toContain('throttled'); + fires.length = 0; + + // third observation after window (6s after start) — should fire + const t2 = new Date(t0.getTime() + 6_000); + await engine.ingestEventForTest({ source: 'mqtt', event: 'motion.detected', t: t2, deviceId: 'SENSOR1' }); + await engine.drainForTest(); + expect(fires.map((f) => f.status)).toContain('dry'); + + await engine.stop(); + }); + + it('resets the hysteresis clock when conditions become unmatched', async () => { + const fires: EngineFireEntry[] = []; + const rule: Rule = { + name: 'stable-with-cond', + when: { source: 'mqtt', event: 'motion.detected' }, + conditions: [{ device: 'hallway lamp', field: 'power', op: '==', value: 'on' }], + then: [{ command: 'devices command turnOn', device: 'hallway lamp' }], + dry_run: true, + hysteresis: '5s', + }; + + // Flip the condition on/off via a mutable holder. + let lampOn = true; + const engine = new RulesEngine({ + automation: automation([rule]), + aliases: { 'hallway lamp': 'AA-BB-CC' }, + mqttClient: mqtt as unknown as SwitchBotMqttClient, + mqttCredential: fakeCredential, + skipApiCall: true, + statusFetcher: async () => ({ power: lampOn ? 
'on' : 'off' }), + onFire: (e) => fires.push(e), + }); + await engine.start(); + + const base = new Date(2026, 3, 22, 10, 0, 0); + + // t=0: conditions pass — hysteresis starts + await engine.ingestEventForTest({ source: 'mqtt', event: 'motion.detected', t: base, deviceId: 'SENSOR1' }); + await engine.drainForTest(); + expect(fires.at(-1)?.status).toBe('throttled'); + fires.length = 0; + + // t=3s: conditions fail (lamp turns off) — clock must reset + lampOn = false; + await engine.ingestEventForTest({ source: 'mqtt', event: 'motion.detected', t: new Date(base.getTime() + 3_000), deviceId: 'SENSOR1' }); + await engine.drainForTest(); + expect(fires.at(-1)?.status).toBe('conditions-failed'); + fires.length = 0; + + // t=8s: conditions pass again — clock starts fresh from t=8s + lampOn = true; + await engine.ingestEventForTest({ source: 'mqtt', event: 'motion.detected', t: new Date(base.getTime() + 8_000), deviceId: 'SENSOR1' }); + await engine.drainForTest(); + // 8s > 5s from start but only 0s from reset — should be throttled again (first-seen) + expect(fires.at(-1)?.status).toBe('throttled'); + fires.length = 0; + + // t=14s: 6s after the second start — now stable long enough + await engine.ingestEventForTest({ source: 'mqtt', event: 'motion.detected', t: new Date(base.getTime() + 14_000), deviceId: 'SENSOR1' }); + await engine.drainForTest(); + expect(fires.at(-1)?.status).toBe('dry'); + + await engine.stop(); + }); + }); + + // ── maxFiringsPerHour ─────────────────────────────────────────────────── + describe('maxFiringsPerHour', () => { + it('allows fires up to the cap and throttles once the cap is reached', async () => { + const fires: EngineFireEntry[] = []; + const rule: Rule = { + name: 'capped-rule', + when: { source: 'mqtt', event: 'motion.detected' }, + then: [{ command: 'devices command turnOn', device: 'hallway lamp' }], + dry_run: true, + maxFiringsPerHour: 2, + }; + const engine = new RulesEngine({ + automation: automation([rule]), + aliases: { 
'hallway lamp': 'AA-BB-CC' }, + mqttClient: mqtt as unknown as SwitchBotMqttClient, + mqttCredential: fakeCredential, + skipApiCall: true, + onFire: (e) => fires.push(e), + }); + await engine.start(); + + const base = new Date(2026, 3, 22, 10, 0, 0); + + // First fire: allowed + await engine.ingestEventForTest({ source: 'mqtt', event: 'motion.detected', t: base, deviceId: 'S1' }); + await engine.drainForTest(); + expect(fires.at(-1)?.status).toBe('dry'); + fires.length = 0; + + // Second fire (5 min later): still allowed + await engine.ingestEventForTest({ source: 'mqtt', event: 'motion.detected', t: new Date(base.getTime() + 5 * 60_000), deviceId: 'S1' }); + await engine.drainForTest(); + expect(fires.at(-1)?.status).toBe('dry'); + fires.length = 0; + + // Third fire (10 min later): throttled — cap reached + await engine.ingestEventForTest({ source: 'mqtt', event: 'motion.detected', t: new Date(base.getTime() + 10 * 60_000), deviceId: 'S1' }); + await engine.drainForTest(); + expect(fires.at(-1)?.status).toBe('throttled'); + expect(fires.at(-1)?.reason).toMatch(/maxFiringsPerHour/); + + await engine.stop(); + }); + + it('resets the count after the 1-hour window slides', async () => { + const fires: EngineFireEntry[] = []; + const rule: Rule = { + name: 'hourly-reset', + when: { source: 'mqtt', event: 'motion.detected' }, + then: [{ command: 'devices command turnOn', device: 'hallway lamp' }], + dry_run: true, + maxFiringsPerHour: 1, + }; + const engine = new RulesEngine({ + automation: automation([rule]), + aliases: { 'hallway lamp': 'AA-BB-CC' }, + mqttClient: mqtt as unknown as SwitchBotMqttClient, + mqttCredential: fakeCredential, + skipApiCall: true, + onFire: (e) => fires.push(e), + }); + await engine.start(); + + const base = new Date(2026, 3, 22, 10, 0, 0); + + // Fire once — consumes the cap + await engine.ingestEventForTest({ source: 'mqtt', event: 'motion.detected', t: base, deviceId: 'S1' }); + await engine.drainForTest(); + 
expect(fires.at(-1)?.status).toBe('dry'); + fires.length = 0; + + // 30 min later — cap still consumed + await engine.ingestEventForTest({ source: 'mqtt', event: 'motion.detected', t: new Date(base.getTime() + 30 * 60_000), deviceId: 'S1' }); + await engine.drainForTest(); + expect(fires.at(-1)?.status).toBe('throttled'); + fires.length = 0; + + // 61 min later — window has slid past; first fire dropped out of count + await engine.ingestEventForTest({ source: 'mqtt', event: 'motion.detected', t: new Date(base.getTime() + 61 * 60_000), deviceId: 'S1' }); + await engine.drainForTest(); + expect(fires.at(-1)?.status).toBe('dry'); + + await engine.stop(); + }); + }); + + // ── suppressIfAlreadyDesired ──────────────────────────────────────────── + describe('suppressIfAlreadyDesired', () => { + it('suppresses turnOn when device is already on', async () => { + const fires: EngineFireEntry[] = []; + const rule: Rule = { + name: 'auto-on', + when: { source: 'mqtt', event: 'motion.detected' }, + then: [{ command: 'devices command turnOn', device: 'hallway lamp' }], + dry_run: true, + suppressIfAlreadyDesired: true, + }; + const engine = new RulesEngine({ + automation: automation([rule]), + aliases: { 'hallway lamp': 'LAMP-ID' }, + mqttClient: mqtt as unknown as SwitchBotMqttClient, + mqttCredential: fakeCredential, + skipApiCall: true, + statusFetcher: async () => ({ powerState: 'on' }), + onFire: (e) => fires.push(e), + }); + await engine.start(); + + await engine.ingestEventForTest({ source: 'mqtt', event: 'motion.detected', t: new Date(), deviceId: 'S1' }); + await engine.drainForTest(); + + expect(fires.at(-1)?.status).toBe('throttled'); + expect(fires.at(-1)?.reason).toMatch(/already-desired/); + + await engine.stop(); + }); + + it('allows turnOn when device is currently off', async () => { + const fires: EngineFireEntry[] = []; + const rule: Rule = { + name: 'auto-on', + when: { source: 'mqtt', event: 'motion.detected' }, + then: [{ command: 'devices command turnOn', 
device: 'hallway lamp' }], + dry_run: true, + suppressIfAlreadyDesired: true, + }; + const engine = new RulesEngine({ + automation: automation([rule]), + aliases: { 'hallway lamp': 'LAMP-ID' }, + mqttClient: mqtt as unknown as SwitchBotMqttClient, + mqttCredential: fakeCredential, + skipApiCall: true, + statusFetcher: async () => ({ powerState: 'off' }), + onFire: (e) => fires.push(e), + }); + await engine.start(); + + await engine.ingestEventForTest({ source: 'mqtt', event: 'motion.detected', t: new Date(), deviceId: 'S1' }); + await engine.drainForTest(); + + expect(fires.at(-1)?.status).toBe('dry'); + + await engine.stop(); + }); + + it('proceeds (best-effort) when statusFetcher rejects', async () => { + const fires: EngineFireEntry[] = []; + const rule: Rule = { + name: 'auto-on', + when: { source: 'mqtt', event: 'motion.detected' }, + then: [{ command: 'devices command turnOn', device: 'hallway lamp' }], + dry_run: true, + suppressIfAlreadyDesired: true, + }; + const engine = new RulesEngine({ + automation: automation([rule]), + aliases: { 'hallway lamp': 'LAMP-ID' }, + mqttClient: mqtt as unknown as SwitchBotMqttClient, + mqttCredential: fakeCredential, + skipApiCall: true, + statusFetcher: async () => { throw new Error('network error'); }, + onFire: (e) => fires.push(e), + }); + await engine.start(); + + await engine.ingestEventForTest({ source: 'mqtt', event: 'motion.detected', t: new Date(), deviceId: 'S1' }); + await engine.drainForTest(); + + // Status fetch failed — best-effort, so the rule should still fire + expect(fires.at(-1)?.status).toBe('dry'); + + await engine.stop(); + }); + }); }); diff --git a/tests/utils/retry.test.ts b/tests/utils/retry.test.ts index 01df3c6..e9d0c0c 100644 --- a/tests/utils/retry.test.ts +++ b/tests/utils/retry.test.ts @@ -3,6 +3,8 @@ import { parseRetryAfter, computeBackoff, nextRetryDelayMs, + CircuitBreaker, + CircuitOpenError, } from '../../src/utils/retry.js'; describe('parseRetryAfter', () => { @@ -79,3 +81,70 @@ 
describe('nextRetryDelayMs', () => { expect(nextRetryDelayMs(1, 'exponential', 'not-a-date')).toBe(2_000); }); }); + +describe('CircuitBreaker', () => { + it('starts closed and allows calls', () => { + const cb = new CircuitBreaker('test'); + expect(cb.getState()).toBe('closed'); + expect(() => cb.checkAndAllow()).not.toThrow(); + }); + + it('opens after failureThreshold consecutive failures', () => { + const cb = new CircuitBreaker('test', { failureThreshold: 3 }); + cb.recordFailure(); + cb.recordFailure(); + expect(cb.getState()).toBe('closed'); + cb.recordFailure(); + expect(cb.getState()).toBe('open'); + }); + + it('throws CircuitOpenError while open', () => { + const cb = new CircuitBreaker('api', { failureThreshold: 2 }); + cb.recordFailure(); + cb.recordFailure(); + expect(() => cb.checkAndAllow()).toThrow(CircuitOpenError); + expect(() => cb.checkAndAllow()).toThrow(/Circuit "api" is open/); + }); + + it('reset() returns to closed and clears the failure count', () => { + const cb = new CircuitBreaker('test', { failureThreshold: 2 }); + cb.recordFailure(); + cb.recordFailure(); + expect(cb.getState()).toBe('open'); + cb.reset(); + expect(cb.getState()).toBe('closed'); + expect(() => cb.checkAndAllow()).not.toThrow(); + }); + + it('enters half-open after resetTimeoutMs and allows one probe', () => { + const cb = new CircuitBreaker('test', { failureThreshold: 1, resetTimeoutMs: 0 }); + cb.recordFailure(); + // With resetTimeoutMs: 0 the transition to half-open happens on the very + // next getState() / checkAndAllow() call because Date.now() >= openedAt + 0. + expect(cb.getState()).toBe('half-open'); + expect(() => cb.checkAndAllow()).not.toThrow(); + }); + + it('re-opens on failure while half-open', () => { + const cb = new CircuitBreaker('test', { failureThreshold: 1, resetTimeoutMs: 0 }); + cb.recordFailure(); + // Force transition to half-open. + void cb.getState(); + // A failure during the probe re-opens the circuit. 
+ cb.recordFailure(); + // Next getState() with resetTimeoutMs: 0 will immediately half-open again, + // so check failures directly via getStats(). + const stats = cb.getStats(); + expect(stats.state === 'open' || stats.state === 'half-open').toBe(true); + expect(stats.failures).toBeGreaterThan(0); + }); + + it('closes fully on success while half-open', () => { + const cb = new CircuitBreaker('test', { failureThreshold: 1, resetTimeoutMs: 0 }); + cb.recordFailure(); + expect(cb.getState()).toBe('half-open'); + cb.recordSuccess(); + expect(cb.getState()).toBe('closed'); + expect(() => cb.checkAndAllow()).not.toThrow(); + }); +});
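For reference, the breaker behaviours these tests pin can be sketched as below. This is a minimal illustration consistent with the assertions above, not the shipped `src/utils/retry.ts`; the defaults (`failureThreshold: 5`, `resetTimeoutMs: 60_000`) are taken from the changelog entry, and the lazy open→half-open transition inside `getState()` is an assumption inferred from the `resetTimeoutMs: 0` tests.

```typescript
// Sketch of a CircuitBreaker satisfying the test contract above (assumed shape).
type CircuitState = 'closed' | 'open' | 'half-open';

class CircuitOpenError extends Error {
  constructor(name: string) {
    super(`Circuit "${name}" is open`);
    this.name = 'CircuitOpenError';
  }
}

class CircuitBreaker {
  private failures = 0;
  private state: CircuitState = 'closed';
  private openedAt = 0;

  constructor(
    private readonly label: string,
    private readonly opts: { failureThreshold?: number; resetTimeoutMs?: number } = {},
  ) {}

  private get failureThreshold(): number { return this.opts.failureThreshold ?? 5; }
  private get resetTimeoutMs(): number { return this.opts.resetTimeoutMs ?? 60_000; }

  getState(): CircuitState {
    // Lazily transition open -> half-open once the reset timeout has elapsed,
    // so no timer is needed; the next caller becomes the probe.
    if (this.state === 'open' && Date.now() >= this.openedAt + this.resetTimeoutMs) {
      this.state = 'half-open';
    }
    return this.state;
  }

  checkAndAllow(): void {
    if (this.getState() === 'open') throw new CircuitOpenError(this.label);
  }

  recordFailure(): void {
    this.failures += 1;
    // A failed probe while half-open, or hitting the threshold, opens the circuit.
    if (this.state === 'half-open' || this.failures >= this.failureThreshold) {
      this.state = 'open';
      this.openedAt = Date.now();
    }
  }

  recordSuccess(): void {
    this.failures = 0;
    this.state = 'closed';
  }

  reset(): void {
    this.recordSuccess();
  }

  getStats(): { state: CircuitState; failures: number } {
    return { state: this.getState(), failures: this.failures };
  }
}
```

Note the design choice the tests force: 4xx responses never reach `recordFailure()` (the client only counts 5xx/network errors), so a burst of client errors cannot trip the breaker.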