spike test to work with catalog #4119

base: main
### `package.json` scripts

```json
"test": "concurrently \"pnpm:lint\" \"pnpm:test:*\" --names \"lint,test:\"",
"test-with-percy": "percy exec --parallel -- pnpm test:wait-for-servers",
"test:wait-for-servers": "./scripts/test-wait-for-servers.sh",
"test:catalog:runner": "./scripts/run-catalog-runner.sh --test-module ./tests/integration/catalog/modules/daily-report-dashboard.module.gts",
```

**Suggested change** (review): rename the script key:

```diff
-"test:catalog:runner": "./scripts/run-catalog-runner.sh --test-module ./tests/integration/catalog/modules/daily-report-dashboard.module.gts",
+"catalog:runner": "./scripts/run-catalog-runner.sh --test-module ./tests/integration/catalog/modules/daily-report-dashboard.module.gts",
```
### `packages/host/scripts/run-catalog-runner.sh` (new file)

```sh
#!/bin/sh

set -eu

TEST_MODULE=""
FILTER="Integration | Catalog | runner"
NO_RUN="false"

while [ "$#" -gt 0 ]; do
  case "$1" in
    --test-module)
      TEST_MODULE="${2:-}"
      shift 2
      ;;
    --filter)
      FILTER="${2:-}"
      shift 2
      ;;
    --no-run)
      NO_RUN="true"
      shift 1
      ;;
    *)
      echo "[catalog-runner] Unknown argument: $1" >&2
      exit 1
      ;;
  esac
done

if [ -z "$TEST_MODULE" ]; then
  echo "[catalog-runner] Missing --test-module <path-to-test-module.gts|ts|js>" >&2
  exit 1
fi

ROOT_DIR="$(pwd)"
SOURCE_PATH="$ROOT_DIR/$TEST_MODULE"
```

**Review comment** (Copilot, Mar 5, 2026, on lines +35 to +37):

Suggested change:

```diff
-ROOT_DIR="$(pwd)"
-SOURCE_PATH="$ROOT_DIR/$TEST_MODULE"
+SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
+ROOT_DIR="$SCRIPT_DIR/.."
+SOURCE_PATH="$ROOT_DIR/$TEST_MODULE"
```

The `run-catalog-runner.sh` script always writes the absolute path of the source file into the generated output (`// Source: $SOURCE_PATH`), and this generated file is committed to the repo. Each time a different developer runs the script, the source comment changes (e.g., from `/Users/tintinthong/github/boxel/...` to their own local path), creating a spurious diff. Consider using a project-relative path (e.g., by stripping `$ROOT_DIR/` from `$SOURCE_PATH`) so the generated comment stays consistent across environments.
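A minimal sketch of the stripping approach the comment suggests, using POSIX parameter expansion (the paths here are illustrative, not taken from any particular machine):

```shell
# Sketch of the suggested fix: strip "$ROOT_DIR/" from "$SOURCE_PATH"
# with POSIX ${var#pattern} expansion so the generated "// Source:"
# comment does not embed a developer-specific absolute path.
ROOT_DIR="/Users/someone/github/boxel/packages/host"
TEST_MODULE="./tests/integration/catalog/modules/daily-report-dashboard.module.gts"
SOURCE_PATH="$ROOT_DIR/$TEST_MODULE"

# Remove the leading "$ROOT_DIR/" prefix if present; otherwise the
# value is left unchanged.
REL_PATH="${SOURCE_PATH#"$ROOT_DIR"/}"
echo "// Source: $REL_PATH"
```

Because `${var#pattern}` is a no-op when the prefix does not match, this stays safe even if `TEST_MODULE` was already given as a path outside the repo root.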
### `tests/integration/catalog/catalog-runner-test.gts` (new file)

```ts
import { module, test } from 'qunit';

import { renderCard } from '../../helpers/render-component';

import runnerModule from './generated/test-module';
import { setupCatalogIsolatedCardTest } from './setup';

module('Integration | Catalog | runner', function (hooks) {
  setupCatalogIsolatedCardTest(hooks, { setupRealm: 'manual' });

  for (let caseDefinition of (runnerModule.cases ?? []) as any[]) {
    test(caseDefinition.id, async function (this: any, assert) {
      console.info(`[catalog-runner] START ${caseDefinition.id}`);

      let seed =
        typeof caseDefinition.seed === 'function'
          ? await caseDefinition.seed(this)
          : (caseDefinition.seed ?? {});
      await this.setupCatalogRealm(
        seed,
        `catalog-isolated:${caseDefinition.id}`,
      );

      let cardURL =
        typeof caseDefinition.cardURL === 'function'
          ? await caseDefinition.cardURL(this)
          : caseDefinition.cardURL;
      let format = caseDefinition.format ?? 'isolated';
      let card = await this.store.get(cardURL);
      await renderCard(this.loader, card as any, format);

      if (typeof caseDefinition.test !== 'function') {
        throw new Error(
          `Case "${caseDefinition.id}" is missing an async test(ctx, assert) function`,
        );
      }

      await caseDefinition.test(this, assert);

      console.info(`[catalog-runner] PASS ${caseDefinition.id}`);
    });
  }
});
```
### `tests/integration/catalog/README.md` (new file)

# catalog-test-runner

This runner lets you execute catalog integration tests by passing a test module file.

## How it works

1. Run `scripts/run-catalog-runner.sh` with `--test-module`.
2. The script copies your module into:
   - `tests/integration/catalog/generated/test-module.gts`
3. QUnit runs `tests/integration/catalog/catalog-runner-test.gts`.
4. The runner reads the `default export` from `generated/test-module.gts` and executes each case.

`setupCatalogIsolatedCardTest(..., { setupRealm: 'manual' })` is used so each case can seed its own realm contents via `this.setupCatalogRealm(seed, cacheKey)`.
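The copy step described above can be sketched as follows (a hypothetical stand-in using a temp directory so it runs anywhere; in the real flow `scripts/run-catalog-runner.sh` performs the copy against the repo paths):

```shell
# Hypothetical sketch of the copy step: the module you pass with
# --test-module is copied to the fixed generated path that the QUnit
# harness imports. A temp dir stands in for the repo root here.
WORKDIR="$(mktemp -d)"
mkdir -p "$WORKDIR/tests/integration/catalog/modules"
printf 'export default { cases: [] };\n' \
  > "$WORKDIR/tests/integration/catalog/modules/example.module.gts"

GENERATED="$WORKDIR/tests/integration/catalog/generated/test-module.gts"
mkdir -p "$(dirname "$GENERATED")"
cp "$WORKDIR/tests/integration/catalog/modules/example.module.gts" "$GENERATED"

# The harness then imports ./generated/test-module and iterates its cases.
cat "$GENERATED"
```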
## Script

Path:
- `packages/host/scripts/run-catalog-runner.sh`

Arguments:
- `--test-module <path>`: required; path to a `.gts` / `.ts` / `.js` module
- `--filter <qunit filter>`: optional; defaults to `Integration | Catalog | runner`
- `--no-run`: optional; only generate `generated/test-module.gts` without running tests
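Option values are read with POSIX default expansion; a small self-contained demo of the `"${2:-}"` pattern the script relies on:

```shell
# Demo of the "${2:-}" expansion used when parsing option values: it
# yields the option's value, or an empty string when a flag is passed
# with no value, so a later -z check can report a clear error instead
# of the script aborting on an unbound variable under `set -u`.
set -u
set -- --test-module   # simulate invoking the script with a bare flag
TEST_MODULE="${2:-}"
if [ -z "$TEST_MODULE" ]; then
  echo "missing value"
fi
```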
## Module API

Your test module must `export default` an object with `cases`:

```ts
export default {
  cases: [
    {
      id: 'case-name',
      format: 'isolated', // optional, defaults to 'isolated'
      seed: async (ctx) => ({
        'SomeCard/instance.json': {
          data: {
            type: 'card',
            attributes: {},
            meta: { adoptsFrom: { module: '...', name: '...' } },
          },
        },
      }),
      cardURL: (ctx) => `${ctx.testRealmURL}SomeCard/instance`,
      test: async (ctx, assert) => {
        // DOM assertions here
      },
    },
  ],
} as any;
```
### Case fields

- `id`: test name in QUnit output
- `seed`: object, or function returning realm contents to load before rendering
- `cardURL`: URL, or function returning the URL of the card instance to render
- `test(ctx, assert)`: assertions/actions to run after render
- `format`: render format (`isolated`, `fitted`, etc.)
## Logging

The runner logs per case:
- `[catalog-runner] START <case-id>`
- `[catalog-runner] PASS <case-id>`

These appear in browser logs and CI test output.
## Commands

From `packages/host`:

```bash
./scripts/run-catalog-runner.sh \
  --test-module ./tests/integration/catalog/modules/daily-report-dashboard.module.gts
```

or:

```bash
pnpm test:catalog:runner
```

Generate only:

```bash
./scripts/run-catalog-runner.sh \
  --test-module ./tests/integration/catalog/modules/daily-report-dashboard.module.gts \
  --no-run
```
## Browser URL

After the host test app is running:

- `http://localhost:4200/tests?filter=Integration%20%7C%20Catalog%20%7C%20runner`
- `http://localhost:4200/tests?filter=daily-report-dashboard`
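The encoded filter strings above can be produced mechanically; a minimal sketch that only percent-encodes the two characters involved (space and `|`):

```shell
# Build the QUnit filter URL for the runner's default filter by
# percent-encoding the space (%20) and pipe (%7C) characters.
# This minimal sed-based version handles only those two characters.
FILTER="Integration | Catalog | runner"
ENCODED=$(printf '%s' "$FILTER" | sed -e 's/ /%20/g' -e 's/|/%7C/g')
echo "http://localhost:4200/tests?filter=$ENCODED"
```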
### `tests/integration/catalog/generated/test-module.gts` (new file)

```ts
// Auto-generated by scripts/run-catalog-runner.sh
// Source: /Users/tintinthong/github/boxel/packages/host/./tests/integration/catalog/modules/daily-report-dashboard.module.gts

import { click, waitFor, waitUntil } from '@ember/test-helpers';

export default {
  cases: [
    {
      id: 'daily-report-dashboard',
      format: 'isolated',
      seed: async (ctx: any) => {
        let realm = ctx.catalogRealm as any;
        return {
          'Skill/daily-report-skill.json': {
            data: {
              type: 'card',
              attributes: {
                cardTitle: 'Daily Report Generation',
                cardDescription:
                  'Generates daily report output from activity logs',
                instructions:
                  'Generate a daily report from activity log cards.',
                commands: [
                  {
                    codeRef: {
                      module: '@cardstack/boxel-host/commands/write-text-file',
                      name: 'default',
                    },
                    requiresApproval: false,
                  },
                ],
              },
              meta: {
                adoptsFrom: {
                  module: 'https://cardstack.com/base/skill',
                  name: 'Skill',
                },
              },
            },
          },
          'PolicyManual/ops.json': {
            data: {
              type: 'card',
              attributes: {
                manualTitle: 'Ops Policy',
                activityLogCardType: {
                  module: `${realm.url}daily-report-dashboard/activity-log`,
                  name: 'ActivityLog',
                },
              },
              meta: {
                adoptsFrom: {
                  module: `${realm.url}daily-report-dashboard/policy-manual`,
                  name: 'PolicyManual',
                },
              },
            },
          },
          'DailyReportDashboard/ops.json': {
            data: {
              type: 'card',
              relationships: {
                policyManual: {
                  links: {
                    self: `${ctx.testRealmURL}PolicyManual/ops`,
                  },
                },
              },
              meta: {
                adoptsFrom: {
                  module: `${realm.url}daily-report-dashboard/daily-report-dashboard`,
                  name: 'DailyReportDashboard',
                },
              },
            },
          },
        };
      },
      cardURL: (ctx: any) => `${ctx.testRealmURL}DailyReportDashboard/ops`,
      test: async (_ctx: any, assert: any) => {
        await waitFor('.generate-report-button');
        assert
          .dom('.empty-state')
          .exists('dashboard starts with empty reports');
        await click('.generate-report-button');
        await waitUntil(
          () => Boolean(document.querySelector('.reports-grid')),
          {
            timeout: 10000,
          },
        );
        assert
          .dom('.empty-state')
          .doesNotExist('reports list replaces empty state');
        assert.dom('.reports-grid').exists('generated report is displayed');
      },
    },
  ],
} as any;
```
**Review comment:**

When `policyManualId` is `undefined` (i.e., `this.args.model.policyManual` is not set), the `dailyReportsQuery` produces `every: []`, which, according to the query engine, resolves to SQL `TRUE`. This means the query returns all `DailyReport` cards regardless of which policy manual they belong to. While the generate button is correctly disabled when `policyManual` is absent (`isGenerateDisabled`), the report list would still show all reports from any policy manual, which may be unintended behavior.

Consider conditionally skipping the `PrerenderedCardSearch` entirely when no `policyManual` is set, or using a filter that returns no results (e.g., by adding a non-matching constraint) rather than returning all reports.