6 changes: 4 additions & 2 deletions packages/catalog-realm/commands/generate-daily-report.gts
@@ -96,8 +96,10 @@ export class GenerateDailyReport extends Command<

let prompt =
'Generate daily report for the selected date from the attached activity log cards using the policy manual and update the attached daily report card';
let skillCardId = new URL('../Skill/daily-report-skill', import.meta.url)
.href;
let skillCardId = new URL(
'../daily-report-dashboard/Skill/daily-report-skill',
import.meta.url,
).href;
let useCommand = new UseAiAssistantCommand(this.commandContext);
await useCommand.execute({
roomId: 'new',
@@ -13,16 +13,31 @@ import { PolicyManual } from './policy-manual';
import { GenerateDailyReport } from '../commands/generate-daily-report';

class Isolated extends Component<typeof DailyReportDashboard> {
private get policyManualId(): string | undefined {
return this.args.model.policyManual?.id;
}

get isGenerateDisabled() {
return this._generateReport.isRunning || !this.args.model.policyManual;
}

get dailyReportsQuery(): Query {
let policyManualId = this.policyManualId;
return {
filter: {
on: {
module: new URL('./daily-report', import.meta.url).href,
name: 'DailyReport',
},
eq: {
'policyManual.id': this.args.model.policyManual!.id,
},
every: policyManualId
? [
{
eq: {
'policyManual.id': policyManualId,
},
},
]
: [],
Comment on lines +32 to +40

Copilot AI Mar 5, 2026
When policyManualId is undefined (i.e., this.args.model.policyManual is not set), the dailyReportsQuery produces every: [] which, according to the query engine, resolves to SQL TRUE. This means the query returns all DailyReport cards regardless of which policy manual they belong to. While the generate button is correctly disabled when policyManual is absent (isGenerateDisabled), the report list would still show all reports from any policy manual—which may be unintended behavior.

Consider conditionally skipping the PrerenderedCardSearch entirely when no policyManual is set, or using a filter that returns no results (e.g., by adding a non-matching constraint) rather than returning all reports.

},

sort: [
@@ -136,7 +151,7 @@ class Isolated extends Component<typeof DailyReportDashboard> {
</div>
<Button
class='generate-report-button'
@disabled={{this._generateReport.isRunning}}
@disabled={{this.isGenerateDisabled}}
{{on 'click' this.generateReport}}
>
{{#if this._generateReport.isRunning}}
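The review note about `every: []` resolving to SQL TRUE can be addressed by making the query fail closed. A minimal sketch, assuming the filter shape shown in the diff; the `QueryFilter` type and the sentinel value are illustrative, not the project's actual API:

```ts
// Sketch: build the dailyReportsQuery filter so that a missing
// policyManualId matches nothing, instead of `every: []`
// (which the query engine resolves to SQL TRUE, i.e. all reports).
type QueryFilter = {
  on: { module: string; name: string };
  every: Array<{ eq: Record<string, string> }>;
};

function dailyReportsFilter(
  moduleHref: string,
  policyManualId: string | undefined,
): QueryFilter {
  return {
    on: { module: moduleHref, name: 'DailyReport' },
    every: [
      {
        // Always emit one eq constraint; the sentinel (a hypothetical
        // non-matching value) can never equal a real card id.
        eq: { 'policyManual.id': policyManualId ?? '__no-policy-manual__' },
      },
    ],
  };
}
```

The alternative the comment suggests, skipping `PrerenderedCardSearch` entirely, avoids the sentinel but requires a template branch.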
1 change: 1 addition & 0 deletions packages/host/package.json
@@ -28,6 +28,7 @@
"test": "concurrently \"pnpm:lint\" \"pnpm:test:*\" --names \"lint,test:\"",
"test-with-percy": "percy exec --parallel -- pnpm test:wait-for-servers",
"test:wait-for-servers": "./scripts/test-wait-for-servers.sh",
"test:catalog:runner": "./scripts/run-catalog-runner.sh --test-module ./tests/integration/catalog/modules/daily-report-dashboard.module.gts",
Copilot AI Mar 5, 2026

The test:catalog:runner script name starts with test:, which means the main test script's pnpm:test:* glob pattern will include it when running pnpm test. This would cause the catalog runner to execute alongside test:wait-for-servers — but the catalog runner itself invokes ember test --filter "Integration | Catalog | runner", which expects an already-running Ember test server (unlike test:wait-for-servers which manages the server lifecycle). Running this script in parallel with the standard test flow is likely unintended and could cause failures.

Consider renaming the script to something that does not start with test: (e.g., catalog:runner or run:catalog:runner) to prevent it from being inadvertently included in the wildcard-expanded pnpm test run.

Suggested change
"test:catalog:runner": "./scripts/run-catalog-runner.sh --test-module ./tests/integration/catalog/modules/daily-report-dashboard.module.gts",
"catalog:runner": "./scripts/run-catalog-runner.sh --test-module ./tests/integration/catalog/modules/daily-report-dashboard.module.gts",

"ember-test-pre-built": "sleep 15 && ember exam --path ./dist --split $HOST_TEST_PARTITION_COUNT --partition $HOST_TEST_PARTITION --preserve-test-name",
"wait": "sleep 10000000"
},
62 changes: 62 additions & 0 deletions packages/host/scripts/run-catalog-runner.sh
@@ -0,0 +1,62 @@
#!/bin/sh

set -eu

TEST_MODULE=""
FILTER="Integration | Catalog | runner"
NO_RUN="false"

while [ "$#" -gt 0 ]; do
case "$1" in
--test-module)
TEST_MODULE="${2:-}"
shift 2
;;
--filter)
FILTER="${2:-}"
shift 2
;;
--no-run)
NO_RUN="true"
shift 1
;;
*)
echo "[catalog-runner] Unknown argument: $1" >&2
exit 1
;;
esac
done

if [ -z "$TEST_MODULE" ]; then
echo "[catalog-runner] Missing --test-module <path-to-test-module.gts|ts|js>" >&2
exit 1
fi

ROOT_DIR="$(pwd)"
SOURCE_PATH="$ROOT_DIR/$TEST_MODULE"

Comment on lines +35 to +37
Copilot AI Mar 5, 2026
The script assumes it is always run from the packages/host directory (using $(pwd) as ROOT_DIR). If it is ever run from the repository root or another directory, $ROOT_DIR/$TEST_MODULE and $ROOT_DIR/tests/integration/catalog/generated will point to incorrect locations, and the script will fail silently or overwrite unexpected paths. The script should either enforce a check that the working directory is correct, or compute the directory relative to the script's own location ($(dirname "$0")), which is the safer pattern for shell scripts.

Suggested change
ROOT_DIR="$(pwd)"
SOURCE_PATH="$ROOT_DIR/$TEST_MODULE"
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
ROOT_DIR="$SCRIPT_DIR/.."
SOURCE_PATH="$ROOT_DIR/$TEST_MODULE"

if [ ! -f "$SOURCE_PATH" ]; then
echo "[catalog-runner] Test module not found: $SOURCE_PATH" >&2
exit 1
fi

GENERATED_DIR="$ROOT_DIR/tests/integration/catalog/generated"
GENERATED_MODULE_PATH="$GENERATED_DIR/test-module.gts"

mkdir -p "$GENERATED_DIR"

{
echo "// Auto-generated by scripts/run-catalog-runner.sh"
echo "// Source: $SOURCE_PATH"
Copilot AI Mar 5, 2026

The run-catalog-runner.sh script always writes the absolute path of the source file into the generated output (// Source: $SOURCE_PATH), and this generated file is committed to the repo. Each time a different developer runs the script, the source comment will change (e.g., from /Users/tintinthong/github/boxel/... to their own local path), creating a spurious diff. Consider using a project-relative path (e.g., by stripping $ROOT_DIR/ from $SOURCE_PATH) so the generated comment is consistent across environments.

echo
cat "$SOURCE_PATH"
} > "$GENERATED_MODULE_PATH"

echo "[catalog-runner] Wrote $GENERATED_MODULE_PATH"

if [ "$NO_RUN" = "true" ]; then
exit 0
fi

echo "[catalog-runner] Running ember test --filter \"$FILTER\""
pnpm exec ember test --filter "$FILTER"
43 changes: 43 additions & 0 deletions packages/host/tests/integration/catalog/catalog-runner-test.gts
@@ -0,0 +1,43 @@
import { module, test } from 'qunit';

import { renderCard } from '../../helpers/render-component';

import runnerModule from './generated/test-module';
import { setupCatalogIsolatedCardTest } from './setup';

module('Integration | Catalog | runner', function (hooks) {
setupCatalogIsolatedCardTest(hooks, { setupRealm: 'manual' });

for (let caseDefinition of (runnerModule.cases ?? []) as any[]) {
test(caseDefinition.id, async function (this: any, assert) {
console.info(`[catalog-runner] START ${caseDefinition.id}`);

let seed =
typeof caseDefinition.seed === 'function'
? await caseDefinition.seed(this)
: (caseDefinition.seed ?? {});
await this.setupCatalogRealm(
seed,
`catalog-isolated:${caseDefinition.id}`,
);

let cardURL =
typeof caseDefinition.cardURL === 'function'
? await caseDefinition.cardURL(this)
: caseDefinition.cardURL;
let format = caseDefinition.format ?? 'isolated';
let card = await this.store.get(cardURL);
await renderCard(this.loader, card as any, format);

if (typeof caseDefinition.test !== 'function') {
throw new Error(
`Case "${caseDefinition.id}" is missing an async test(ctx, assert) function`,
);
}

await caseDefinition.test(this, assert);

console.info(`[catalog-runner] PASS ${caseDefinition.id}`);
});
}
});
97 changes: 97 additions & 0 deletions packages/host/tests/integration/catalog/catalog-test-runner.md
@@ -0,0 +1,97 @@
# catalog-test-runner

This runner lets you execute catalog integration tests by passing a test module file.

## How it works

1. Run `scripts/run-catalog-runner.sh` with `--test-module`.
2. The script copies your module into:
   - `tests/integration/catalog/generated/test-module.gts`
3. QUnit runs `tests/integration/catalog/catalog-runner-test.gts`.
4. The runner reads the `default export` from `generated/test-module.gts` and executes each case.

`setupCatalogIsolatedCardTest(..., { setupRealm: 'manual' })` is used so each case can seed its own realm contents via `this.setupCatalogRealm(seed, cacheKey)`.

## Script

Path:
- `packages/host/scripts/run-catalog-runner.sh`

Arguments:
- `--test-module <path>`: required path to `.gts` / `.ts` / `.js` module
- `--filter <qunit filter>`: optional, defaults to `Integration | Catalog | runner`
- `--no-run`: optional, only generate `generated/test-module.gts`

## Module API

Your test module must `export default` an object with `cases`:

```ts
export default {
cases: [
{
id: 'case-name',
format: 'isolated', // optional, defaults to 'isolated'
seed: async (ctx) => ({
'SomeCard/instance.json': {
data: {
type: 'card',
attributes: {},
meta: { adoptsFrom: { module: '...', name: '...' } },
},
},
}),
cardURL: (ctx) => `${ctx.testRealmURL}SomeCard/instance`,
test: async (ctx, assert) => {
// DOM assertions here
},
},
],
} as any;
```

### Case fields

- `id`: test name in QUnit output
- `seed`: object or function returning realm contents to load before rendering
- `cardURL`: URL or function returning URL of card instance to render
- `test(ctx, assert)`: assertions/actions after render
- `format`: render format (`isolated`, `fitted`, etc.)

## Logging

Runner logs per case:
- `[catalog-runner] START <case-id>`
- `[catalog-runner] PASS <case-id>`

These appear in browser logs and CI test output.

## Commands

From `packages/host`:

```bash
./scripts/run-catalog-runner.sh \
--test-module ./tests/integration/catalog/modules/daily-report-dashboard.module.gts
```

or:

```bash
pnpm test:catalog:runner
```

Generate only:

```bash
./scripts/run-catalog-runner.sh \
--test-module ./tests/integration/catalog/modules/daily-report-dashboard.module.gts \
--no-run
```

## Browser URL

After host test app is running:

- `http://localhost:4200/tests?filter=Integration%20%7C%20Catalog%20%7C%20runner`
- `http://localhost:4200/tests?filter=daily-report-dashboard`
99 changes: 99 additions & 0 deletions packages/host/tests/integration/catalog/generated/test-module.gts
@@ -0,0 +1,99 @@
// Auto-generated by scripts/run-catalog-runner.sh
// Source: /Users/tintinthong/github/boxel/packages/host/./tests/integration/catalog/modules/daily-report-dashboard.module.gts

import { click, waitFor, waitUntil } from '@ember/test-helpers';

export default {
cases: [
{
id: 'daily-report-dashboard',
format: 'isolated',
seed: async (ctx: any) => {
let realm = ctx.catalogRealm as any;
return {
'Skill/daily-report-skill.json': {
data: {
type: 'card',
attributes: {
cardTitle: 'Daily Report Generation',
cardDescription:
'Generates daily report output from activity logs',
instructions:
'Generate a daily report from activity log cards.',
commands: [
{
codeRef: {
module: '@cardstack/boxel-host/commands/write-text-file',
name: 'default',
},
requiresApproval: false,
},
],
},
meta: {
adoptsFrom: {
module: 'https://cardstack.com/base/skill',
name: 'Skill',
},
},
},
},
'PolicyManual/ops.json': {
data: {
type: 'card',
attributes: {
manualTitle: 'Ops Policy',
activityLogCardType: {
module: `${realm.url}daily-report-dashboard/activity-log`,

P1: Guard catalog runner seed against missing catalog realm

This seed path dereferences realm.url without checking whether ctx.catalogRealm exists, but our default host CI runs with SKIP_CATALOG=true (.github/workflows/ci-host.yaml lines 81 and 107), and in that mode catalogRealm is null (packages/host/config/environment.js lines 12 and 63-66). In that environment this case throws before realm setup completes, so Integration | Catalog | runner fails consistently in the standard CI configuration.


name: 'ActivityLog',
},
},
meta: {
adoptsFrom: {
module: `${realm.url}daily-report-dashboard/policy-manual`,
name: 'PolicyManual',
},
},
},
},
'DailyReportDashboard/ops.json': {
data: {
type: 'card',
relationships: {
policyManual: {
links: {
self: `${ctx.testRealmURL}PolicyManual/ops`,
},
},
},
meta: {
adoptsFrom: {
module: `${realm.url}daily-report-dashboard/daily-report-dashboard`,
name: 'DailyReportDashboard',
},
},
},
},
};
},
cardURL: (ctx: any) => `${ctx.testRealmURL}DailyReportDashboard/ops`,
test: async (_ctx: any, assert: any) => {
await waitFor('.generate-report-button');
assert
.dom('.empty-state')
.exists('dashboard starts with empty reports');
await click('.generate-report-button');
await waitUntil(
() => Boolean(document.querySelector('.reports-grid')),
{
timeout: 10000,
},
);
assert
.dom('.empty-state')
.doesNotExist('reports list replaces empty state');
assert.dom('.reports-grid').exists('generated report is displayed');
},
},
],
} as any;
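The P1 comment above flags that the seed dereferences `realm.url` without checking that `ctx.catalogRealm` exists. A minimal guard sketch, assuming the context shape used in the seed code (the function name and message are hypothetical):

```ts
// Sketch: fail fast with a clear message (or let callers skip the case)
// when the catalog realm is unavailable, e.g. under SKIP_CATALOG=true,
// instead of throwing a TypeError on `realm.url` mid-seed.
interface RunnerContext {
  catalogRealm?: { url: string } | null;
}

function requireCatalogRealm(ctx: RunnerContext): { url: string } {
  if (!ctx.catalogRealm) {
    throw new Error(
      '[catalog-runner] catalogRealm is not configured (is SKIP_CATALOG set?); ' +
        'skip this case or run with the catalog realm enabled',
    );
  }
  return ctx.catalogRealm;
}
```

A case's `seed` could then open with `let realm = requireCatalogRealm(ctx);`, so a misconfigured CI run names the real cause up front.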