test(cli): add comprehensive integration tests for flightctl#71
EffortlessSteven wants to merge 3 commits into main from
Conversation
Add 113 new integration tests across 7 focused test files using assert_cmd and predicates, covering:

- output_format.rs (13 tests): JSON/human output, overlay commands
- version_cmd.rs (9 tests): --version flag, version subcommand
- help_completeness.rs (26 tests): all subcommands in help text
- profile_cmd.rs (12 tests): list, validate, apply, export
- device_cmd.rs (12 tests): list, info, dump, calibrate, test
- diag_cmd.rs (17 tests): bundle, health, metrics, trace, record
- error_handling.rs (24 tests): invalid args, exit codes, no-panic

All 311 tests pass (137 unit + 174 integration).

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Review Summary by Qodo

Add comprehensive integration tests for flightctl CLI

Walkthrough

Description

- Add 113 new integration tests across 7 focused test files
- Cover output formatting, version commands, help completeness
- Test profile, device, and diagnostics subcommands
- Validate error handling, exit codes, and JSON contract stability

Diagram

```mermaid
flowchart LR
    A["Test Framework<br/>assert_cmd + predicates"] --> B["Output Format Tests<br/>JSON/human formatting"]
    A --> C["Version Command Tests<br/>--version flag"]
    A --> D["Help Completeness Tests<br/>All subcommands listed"]
    A --> E["Profile Command Tests<br/>list/validate/apply/export"]
    A --> F["Device Command Tests<br/>list/info/dump/calibrate"]
    A --> G["Diagnostics Tests<br/>bundle/health/metrics/trace"]
    A --> H["Error Handling Tests<br/>invalid args/exit codes"]
    B --> I["113 Integration Tests<br/>311 Total Tests Pass"]
    C --> I
    D --> I
    E --> I
    F --> I
    G --> I
    H --> I
```
File Changes

1. crates/flight-cli/tests/output_format.rs
Code Review by Qodo
Pull request overview
Adds a large set of new flightctl integration tests in crates/flight-cli/tests/ (using assert_cmd + predicates) to validate help text completeness, JSON/human output contracts, and a variety of subcommand behaviors, alongside wiring the needed test dependencies into the workspace and flight-cli crate.
Changes:
- Adds 7 new integration test modules covering version/help/output formatting/devices/diag/profile/error handling.
- Introduces assert_cmd and predicates as workspace dependencies and adds test-only deps to crates/flight-cli.
- Expands CLI contract checking for JSON fields, help text content, and exit-code/error-code mapping.
Reviewed changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| Cargo.toml | Adds assert_cmd and predicates to workspace dependency versions. |
| crates/flight-cli/Cargo.toml | Adds dev-dependencies needed to compile/run the new integration tests. |
| crates/flight-cli/tests/version_cmd.rs | Tests flightctl version / --version output and JSON schema expectations. |
| crates/flight-cli/tests/profile_cmd.rs | Tests profile subcommands (list/validate/apply/etc.) including local-file failure cases. |
| crates/flight-cli/tests/output_format.rs | Tests --json/--output formatting and overlay JSON outputs. |
| crates/flight-cli/tests/help_completeness.rs | Verifies help output lists/describes expected commands and flags. |
| crates/flight-cli/tests/error_handling.rs | Exercises invalid args, unknown flags, and IPC error exit-code/error-code mapping. |
| crates/flight-cli/tests/diag_cmd.rs | Tests diagnostic subcommands and JSON error formatting for daemon-dependent paths. |
| crates/flight-cli/tests/device_cmd.rs | Tests devices subcommands, required args, and daemon-down behavior. |
```rust
fn devices_list_fails_gracefully_without_daemon() {
    let output = cli().args(["devices", "list"]).output().unwrap();
    assert!(!output.status.success());
    // Should not panic (exit code 101)
    assert_ne!(output.status.code(), Some(101));
}
```
devices list will succeed when flightd is running, but this test asserts it must fail. That makes the integration tests environment-dependent. Consider asserting “does not panic” and then branching: if it succeeds, validate the success output schema; if it fails, validate the error output schema / exit code.
```rust
#[test]
fn connection_error_exit_code_in_valid_range() {
    let output = cli().args(["info"]).output().unwrap();
    assert!(!output.status.success());

    let code = output.status.code().unwrap();
    // Exit codes 1-7 are the defined error range
    assert!(
        (1..=7).contains(&code),
        "exit code should be in mapped range 1-7, got {}",
        code
    );
}
```
These exit-code mapping assertions rely on flightctl info failing with an IPC error. If a daemon is running, the command may succeed and the test will fail for reasons unrelated to exit-code mapping. To keep the suite deterministic, gate the mapping assertions on the command actually failing (and otherwise validate the success output), or introduce a test-only way to force a connection failure.
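The gating pattern the comment suggests can be sketched as follows. This is a dependency-free illustration, not the PR's code: it uses `sh -c "exit 3"` as a stand-in for the real `cli().args(["info"])` invocation, since a flightctl binary cannot be assumed here.

```rust
use std::process::Command;

fn main() {
    // Stand-in for the real `cli().args(["info"])` call: a command with a
    // known failure code, so the sketch runs without a flightctl binary.
    let output = Command::new("sh").args(["-c", "exit 3"]).output().unwrap();

    // Gate the mapping assertion on actual failure: with a live daemon the
    // real command may succeed, and that branch should instead validate the
    // success output schema.
    if !output.status.success() {
        let code = output.status.code().expect("terminated by signal");
        assert!(
            (1..=7).contains(&code),
            "exit code should be in mapped range 1-7, got {}",
            code
        );
        println!("mapped error exit code: {}", code);
    } else {
        println!("command succeeded; validate the success schema instead");
    }
}
```

Either branch makes a meaningful assertion, so the test stays deterministic whether or not a daemon happens to be running.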
```rust
fn parse_json_from(text: &str) -> Value {
    text.lines()
        .find(|l| l.trim().starts_with('{'))
        .and_then(|l| serde_json::from_str(l).ok())
        .unwrap_or_else(|| panic!("No valid JSON line found in:\n{}", text))
}
```
parse_json_from (and similar helpers) are duplicated across multiple new test files. Consider moving shared helpers like cli() and parse_json_from() into crates/flight-cli/tests/common/mod.rs (or a small test-support module) to reduce duplication and keep future JSON-parsing contract updates consistent.
```rust
fn profile_show_without_name_returns_message() {
    // When no profile name given and daemon is down, should still succeed with a message
    let output = cli()
        .args(["--json", "profile", "show"])
        .output()
        .unwrap();
    assert!(output.status.success());
```
This comment says behavior depends on the daemon being down, but flightctl profile show (without a name) currently returns a local stub message without attempting a daemon connection. Consider updating the comment to match what the test is actually asserting (success response contains a message when no profile name is provided).
```toml
axum-test = "18.7.0"
filetime = "0.2.27"
x11rb = "0.13"
hidapi = { version = "2.6.3", default-features = false, features = ["linux-static-hidraw", "windows-native"] }
```
assert_cmd and predicates are test-only utilities; placing them under the “Additional common dependencies” block makes the workspace dependency table harder to scan/maintain. Consider moving them into the existing “Testing and benchmarking utilities” section with the other test helpers.
Suggested change:

```toml
hidapi = { version = "2.6.3", default-features = false, features = ["linux-static-hidraw", "windows-native"] }

# Testing and benchmarking utilities
```
```rust
assert!(
    data.get("service_status").is_some(),
    "must have service_status"
);
```
version_subcommand_json_has_all_required_fields assumes service_status is always present, but the current flightctl version implementation can omit service_status/service_version when the gRPC channel connects successfully and get_service_info() then fails (the Ok(client) / Err(info) path). Either make the command always populate these fields (e.g., default to null/unreachable before attempting RPCs) or relax this assertion so the test matches the actual contract.
Suggested change (drop the hard requirement):

```diff
-assert!(
-    data.get("service_status").is_some(),
-    "must have service_status"
-);
```
```rust
fn version_subcommand_service_unreachable_without_daemon() {
    let output = cli().args(["--json", "version"]).output().unwrap();
    let stdout = String::from_utf8(output.stdout).unwrap();
    let json: Value = parse_json_from(&stdout);

    assert_eq!(json["data"]["service_status"], "unreachable");
}
```
This test hard-codes the environment assumption that the daemon is not running (expects service_status == "unreachable"). That makes the suite fail for developers who have flightd running locally. Consider making the assertion conditional (accept both reachable and unreachable states, validating the schema in each case) or adding a CLI/service address override specifically for tests so they can force an unreachable endpoint deterministically.
Suggested change:

```diff
-fn version_subcommand_service_unreachable_without_daemon() {
-    let output = cli().args(["--json", "version"]).output().unwrap();
-    let stdout = String::from_utf8(output.stdout).unwrap();
-    let json: Value = parse_json_from(&stdout);
-    assert_eq!(json["data"]["service_status"], "unreachable");
+fn version_subcommand_reports_service_status() {
+    let output = cli().args(["--json", "version"]).output().unwrap();
+    let stdout = String::from_utf8(output.stdout).unwrap();
+    let json: Value = parse_json_from(&stdout);
+    let service_status = json["data"]["service_status"]
+        .as_str()
+        .unwrap_or_else(|| panic!("data.service_status should be a string in:\n{}", stdout));
+    assert!(
+        !service_status.is_empty(),
+        "data.service_status should be a non-empty string, got empty in:\n{}",
+        stdout
+    );
```
```rust
assert!(!output.status.success());

let stderr = String::from_utf8(output.stderr).unwrap();
let json: Value = parse_json_from(&stderr);
assert_eq!(json["success"], false);
assert!(json["error"].is_string());
assert!(json["error_code"].is_string());
```
flightctl info will succeed when a daemon is running, but this test assumes it always errors and emits JSON on stderr. To avoid environment-dependent failures, either (a) make the test accept both success (JSON on stdout) and failure (JSON on stderr) while validating the schema, or (b) provide a way for tests to force an unreachable service endpoint.
Suggested change:

```diff
-assert!(!output.status.success());
-let stderr = String::from_utf8(output.stderr).unwrap();
-let json: Value = parse_json_from(&stderr);
-assert_eq!(json["success"], false);
-assert!(json["error"].is_string());
-assert!(json["error_code"].is_string());
+// `flightctl info` may succeed (daemon running) or fail (daemon unreachable).
+// On success it writes JSON to stdout; on failure it writes JSON to stderr.
+let (raw, expect_success) = if output.status.success() {
+    (output.stdout, true)
+} else {
+    (output.stderr, false)
+};
+let text = String::from_utf8(raw).unwrap();
+let json: Value = parse_json_from(&text);
+// In both cases we expect a well-formed JSON envelope with a `success` flag.
+assert_eq!(json["success"], expect_success);
+// On error, additional structured fields must be present.
+if !expect_success {
+    assert!(json["error"].is_string());
+    assert!(json["error_code"].is_string());
+}
```
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…lpers

- Make device_cmd tests accept both success and failure (daemon-agnostic)
- Gate exit-code mapping assertions on command actually failing
- Create shared common/mod.rs with cli() and parse_json_from() helpers
- Update profile_show comment to match actual local-stub behavior
- Move assert_cmd and predicates to Testing section in workspace Cargo.toml
- Make service_status assertion optional in version JSON test
- Accept both reachable/unreachable for service_status, validate non-empty
- Make flightctl info tests accept both success (stdout) and failure (stderr)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
CI Feedback 🧐

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:
Summary
Adds 113 new integration tests across 7 focused test files for the flight-cli crate using assert_cmd and predicates.
Test Files
Results
All 311 tests pass (137 unit + 174 integration). Zero warnings.
Changes