📝 Walkthrough

Switches release persistence to use Release.UUID(), adds a legacy releases migration step during Store.Restore, enables DB-backed releases in manager/tests, and comments out several assertions in E2E tests related to jobs and release status checks.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Store as Store.Restore
    participant InMem as InMemoryReleasesRepo
    participant Releases as ActiveReleasesRepo
    participant Log as Logger
    Store->>InMem: ListLegacyReleases(ctx)
    InMem-->>Store: []legacyReleases
    loop for each legacyRelease
        Store->>Releases: Upsert(legacyRelease.UUID(), legacyRelease...)
        Releases-->>Store: UpsertResult / error
        alt error
            Store->>Log: Warn("failed to migrate legacy release", err)
        end
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/workspace-engine/test/e2e/engine_jobs_test.go`:
- Around line 929-939: Re-enable and restore the commented strict-mode assertion
block so the test verifies that strict mode blocks redeploy after a resource
re-add: uncomment the lines that call engine.Workspace().Jobs().GetPending() and
the subsequent comparison of len(pendingJobsAfter) against
expectedJobsAfterReAdd (variables pendingJobsAfter and expectedJobsAfterReAdd)
inside the same test function, and ensure the t.Logf/t.Fatalf behavior matches
the surrounding tests' style (use the same error messages and failure pattern
used elsewhere in engine_jobs_test.go) so the test structure remains consistent.
- Around line 845-853: The test disabled the post-readd assertion so it no
longer verifies pending job count after re-adding the resource; re-enable the
block that calls engine.Workspace().Jobs().GetPending(), set
expectedJobsAfterReAdd to 1, and assert len(pendingJobsAfter) ==
expectedJobsAfterReAdd (logging the actual jobs with t.Logf on mismatch and
failing with t.Fatalf) so the retry-policy path validates that exactly one job
remains after the resource is re-added.
In `@apps/workspace-engine/test/e2e/engine_policy_paused_versions_test.go`:
- Around line 138-145: Re-add the snapshot-status assertion to the
paused-version test so it validates that a release created from a snapshot still
reports ready even when the source version is paused: after retrieving the
release with engine.Workspace().Releases().Get(releaseID) check
release.Version.Status == oapi.DeploymentVersionStatusReady and call t.Errorf
with the original message if it is not; keep the commented rationale ("Release
version is a snapshot...") and use the same error text ("expected release
version status to remain ready (snapshot), got %s") so the test structure
matches other *_test.go files.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (7)
- apps/workspace-engine/pkg/workspace/store/repository/db/releases/mapper.go
- apps/workspace-engine/pkg/workspace/store/repository/db/releases/repo.go
- apps/workspace-engine/pkg/workspace/store/store.go
- apps/workspace-engine/svc/workspaceconsumer/consumer.go
- apps/workspace-engine/test/e2e/engine_jobs_test.go
- apps/workspace-engine/test/e2e/engine_policy_paused_versions_test.go
- apps/workspace-engine/test/integration/dbtest.go
```go
	// pendingJobsAfter := engine.Workspace().Jobs().GetPending()

	// // Expected: 1 job (new release target = fresh start)
	// expectedJobsAfterReAdd := 1
	// if len(pendingJobsAfter) != expectedJobsAfterReAdd {
	// 	t.Logf("Expected %d jobs after resource re-add, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
	// 	t.Logf("Jobs: %v", pendingJobsAfter)
	// 	t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
	// }
```
Restore the post-readd assertion in the retry-policy path.
This block is fully disabled, so the test no longer validates the expected job count after re-adding the resource and may pass on regressions.
Suggested fix

```diff
-	// pendingJobsAfter := engine.Workspace().Jobs().GetPending()
-
-	// // Expected: 1 job (new release target = fresh start)
-	// expectedJobsAfterReAdd := 1
-	// if len(pendingJobsAfter) != expectedJobsAfterReAdd {
-	// 	t.Logf("Expected %d jobs after resource re-add, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
-	// 	t.Logf("Jobs: %v", pendingJobsAfter)
-	// 	t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
-	// }
+	pendingJobsAfter := engine.Workspace().Jobs().GetPending()
+	expectedJobsAfterReAdd := 1
+	if len(pendingJobsAfter) != expectedJobsAfterReAdd {
+		t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
+	}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
	pendingJobsAfter := engine.Workspace().Jobs().GetPending()
	expectedJobsAfterReAdd := 1
	if len(pendingJobsAfter) != expectedJobsAfterReAdd {
		t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/workspace-engine/test/e2e/engine_jobs_test.go` around lines 845 - 853,
The test disabled the post-readd assertion so it no longer verifies pending job
count after re-adding the resource; re-enable the block that calls
engine.Workspace().Jobs().GetPending(), set expectedJobsAfterReAdd to 1, and
assert len(pendingJobsAfter) == expectedJobsAfterReAdd (logging the actual jobs
with t.Logf on mismatch and failing with t.Fatalf) so the retry-policy path
validates that exactly one job remains after the resource is re-added.
```diff
+	// // In strict mode (no policy), the cancelled job counts as an attempt
+	// // This blocks redeployment
+	// pendingJobsAfter := engine.Workspace().Jobs().GetPending()
-	// Expected: 0 jobs (cancelled job blocks in strict mode)
-	expectedJobsAfterReAdd := 0
-	if len(pendingJobsAfter) != expectedJobsAfterReAdd {
-		t.Logf("Expected %d jobs after resource re-add (strict mode blocks), got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
-		t.Logf("Jobs: %v", pendingJobsAfter)
-		t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
-	}
+	// // Expected: 0 jobs (cancelled job blocks in strict mode)
+	// expectedJobsAfterReAdd := 0
+	// if len(pendingJobsAfter) != expectedJobsAfterReAdd {
+	// 	t.Logf("Expected %d jobs after resource re-add (strict mode blocks), got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
+	// 	t.Logf("Jobs: %v", pendingJobsAfter)
+	// 	t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
+	// }
```
Strict-mode behavior assertion is currently disabled.
With this block commented out, the test no longer proves that strict mode blocks redeploy after resource re-add.
Suggested fix

```diff
-	// // In strict mode (no policy), the cancelled job counts as an attempt
-	// // This blocks redeployment
-	// pendingJobsAfter := engine.Workspace().Jobs().GetPending()
-
-	// // Expected: 0 jobs (cancelled job blocks in strict mode)
-	// expectedJobsAfterReAdd := 0
-	// if len(pendingJobsAfter) != expectedJobsAfterReAdd {
-	// 	t.Logf("Expected %d jobs after resource re-add (strict mode blocks), got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
-	// 	t.Logf("Jobs: %v", pendingJobsAfter)
-	// 	t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
-	// }
+	pendingJobsAfter := engine.Workspace().Jobs().GetPending()
+	expectedJobsAfterReAdd := 0
+	if len(pendingJobsAfter) != expectedJobsAfterReAdd {
+		t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
+	}
```

📝 Committable suggestion
```go
	pendingJobsAfter := engine.Workspace().Jobs().GetPending()
	expectedJobsAfterReAdd := 0
	if len(pendingJobsAfter) != expectedJobsAfterReAdd {
		t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/workspace-engine/test/e2e/engine_jobs_test.go` around lines 929 - 939,
Re-enable and restore the commented strict-mode assertion block so the test
verifies that strict mode blocks redeploy after a resource re-add: uncomment the
lines that call engine.Workspace().Jobs().GetPending() and the subsequent
comparison of len(pendingJobsAfter) against expectedJobsAfterReAdd (variables
pendingJobsAfter and expectedJobsAfterReAdd) inside the same test function, and
ensure the t.Logf/t.Fatalf behavior matches the surrounding tests' style (use
the same error messages and failure pattern used elsewhere in
engine_jobs_test.go) so the test structure remains consistent.
```diff
 	_, ok := engine.Workspace().Releases().Get(releaseID)
 	if !ok {
 		t.Errorf("expected release %s to still exist after version paused", releaseID)
 	}
 	// Release version is a snapshot - will still show "ready" even though source version is paused
-	if release.Version.Status != oapi.DeploymentVersionStatusReady {
-		t.Errorf("expected release version status to remain ready (snapshot), got %s", release.Version.Status)
-	}
+	// if release.Version.Status != oapi.DeploymentVersionStatusReady {
+	// 	t.Errorf("expected release version status to remain ready (snapshot), got %s", release.Version.Status)
+	// }
```
Snapshot-status verification was removed from the paused-version test.
The test now checks only release existence, but no longer validates the snapshot status behavior it describes.
Suggested fix

```diff
-	_, ok := engine.Workspace().Releases().Get(releaseID)
+	release, ok := engine.Workspace().Releases().Get(releaseID)
 	if !ok {
 		t.Errorf("expected release %s to still exist after version paused", releaseID)
 	}
-	// Release version is a snapshot - will still show "ready" even though source version is paused
-	// if release.Version.Status != oapi.DeploymentVersionStatusReady {
-	// 	t.Errorf("expected release version status to remain ready (snapshot), got %s", release.Version.Status)
-	// }
+	if release.Version.Status != oapi.DeploymentVersionStatusReady {
+		t.Errorf("expected release version status to remain ready (snapshot), got %s", release.Version.Status)
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/workspace-engine/test/e2e/engine_policy_paused_versions_test.go` around
lines 138 - 145, Re-add the snapshot-status assertion to the paused-version test
so it validates that a release created from a snapshot still reports ready even
when the source version is paused: after retrieving the release with
engine.Workspace().Releases().Get(releaseID) check release.Version.Status ==
oapi.DeploymentVersionStatusReady and call t.Errorf with the original message if
it is not; keep the commented rationale ("Release version is a snapshot...") and
use the same error text ("expected release version status to remain ready
(snapshot), got %s") so the test structure matches other *_test.go files.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/workspace-engine/test/e2e/engine_deployment_test.go`:
- Around line 836-843: Restore the disabled post-delete assertion in
TestEngine_DeploymentRemovalWithJobs by re-enabling the loop that fetches
pending jobs via engine.Workspace().Jobs().GetPending() and iterates over
pendingJobsAfter to ensure none of the job.Id values match any id in
jobsForDeployment; if a match is found call t.Fatalf with the existing message
("job %s for deleted deployment still exists") so the test fails on regression.
Ensure the variables jobsForDeployment and pendingJobsAfter are in scope and
handle empty/nil pendingJobsAfter gracefully (no-op) to match the surrounding
test structure.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
apps/workspace-engine/test/e2e/engine_deployment_test.go
```go
	// pendingJobsAfter := engine.Workspace().Jobs().GetPending()
	// for _, job := range pendingJobsAfter {
	// 	for _, jobID := range jobsForDeployment {
	// 		if job.Id == jobID {
	// 			t.Fatalf("job %s for deleted deployment still exists", jobID)
	// 		}
	// 	}
	// }
```
Restore the post-delete job cleanup assertion (currently disabled).
Lines 836-843 comment out the core expectation of TestEngine_DeploymentRemovalWithJobs. With this removed, the test no longer verifies that pending jobs linked to the deleted deployment are cleaned up, so regressions can slip through.
Proposed fix

```diff
-	// Verify jobs for this deployment are gone
-	// pendingJobsAfter := engine.Workspace().Jobs().GetPending()
-	// for _, job := range pendingJobsAfter {
-	// 	for _, jobID := range jobsForDeployment {
-	// 		if job.Id == jobID {
-	// 			t.Fatalf("job %s for deleted deployment still exists", jobID)
-	// 		}
-	// 	}
-	// }
+	// Verify jobs for this deployment are gone
+	pendingJobsAfter := engine.Workspace().Jobs().GetPending()
+	deletedJobIDs := make(map[string]struct{}, len(jobsForDeployment))
+	for _, id := range jobsForDeployment {
+		deletedJobIDs[id] = struct{}{}
+	}
+	for _, job := range pendingJobsAfter {
+		if _, exists := deletedJobIDs[job.Id]; exists {
+			t.Fatalf("job %s for deleted deployment still exists", job.Id)
+		}
+	}
```

As per coding guidelines "Follow the existing test structure used in *_test.go files".
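The proposed fix above builds a `map[string]struct{}` from the slice of job IDs so each membership check is O(1) instead of rescanning the slice in a nested loop. A minimal standalone sketch of that slice-to-set pattern (the names `buildIDSet`, `deleted`, and `pending` are illustrative, not from the codebase):

```go
package main

import "fmt"

// buildIDSet converts a slice of IDs into a set for O(1) membership checks.
// struct{} values occupy no memory, making this the idiomatic Go set type.
func buildIDSet(ids []string) map[string]struct{} {
	set := make(map[string]struct{}, len(ids))
	for _, id := range ids {
		set[id] = struct{}{}
	}
	return set
}

func main() {
	deleted := buildIDSet([]string{"job-1", "job-2"})
	pending := []string{"job-3", "job-2"}
	for _, id := range pending {
		// One map lookup per pending job, instead of a nested loop.
		if _, exists := deleted[id]; exists {
			fmt.Printf("job %s for deleted deployment still exists\n", id)
		}
	}
}
```

For the handful of jobs in a test the nested loop would also work; the map version mainly reads more clearly and scales if the job lists grow.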
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/workspace-engine/test/e2e/engine_deployment_test.go` around lines 836 -
843, Restore the disabled post-delete assertion in
TestEngine_DeploymentRemovalWithJobs by re-enabling the loop that fetches
pending jobs via engine.Workspace().Jobs().GetPending() and iterates over
pendingJobsAfter to ensure none of the job.Id values match any id in
jobsForDeployment; if a match is found call t.Fatalf with the existing message
("job %s for deleted deployment still exists") so the test fails on regression.
Ensure the variables jobsForDeployment and pendingJobsAfter are in scope and
handle empty/nil pendingJobsAfter gracefully (no-op) to match the surrounding
test structure.
♻️ Duplicate comments (1)
apps/workspace-engine/test/e2e/engine_deployment_test.go (1)
802-803: ⚠️ Potential issue | 🟠 Major

Restore the deleted-deployment job cleanup check (currently disabled).
At Lines 835-842, commenting out the assertion removes the key guarantee of this test: jobs tied to the deleted deployment must not remain pending. Since job-ID tracking was also removed around Line 803, this regression check is now effectively gone.
Proposed fix
```diff
 	// Count jobs for this deployment
 	deploymentJobs := 0
+	jobsForDeployment := make(map[string]struct{})
 	for _, job := range pendingJobs {
 		release, ok := engine.Workspace().Releases().Get(job.ReleaseId)
 		if !ok {
 			continue
 		}
 		if release.ReleaseTarget.DeploymentId == deploymentID {
 			deploymentJobs++
+			jobsForDeployment[job.Id] = struct{}{}
 			assert.NotNil(t, job.DispatchContext, "pending job should have DispatchContext")
 			assert.Equal(t, jobAgentID, job.DispatchContext.JobAgent.Id)
 			assert.NotNil(t, job.DispatchContext.Release)
 			assert.NotNil(t, job.DispatchContext.Deployment)
 			assert.Equal(t, deploymentID, job.DispatchContext.Deployment.Id)
 		}
 	}
@@
 	// Verify jobs for this deployment are gone
-	// pendingJobsAfter := engine.Workspace().Jobs().GetPending()
-	// for _, job := range pendingJobsAfter {
-	// 	for _, jobID := range jobsForDeployment {
-	// 		if job.Id == jobID {
-	// 			t.Fatalf("job %s for deleted deployment still exists", jobID)
-	// 		}
-	// 	}
-	// }
+	pendingJobsAfter := engine.Workspace().Jobs().GetPending()
+	for _, job := range pendingJobsAfter {
+		if _, exists := jobsForDeployment[job.Id]; exists {
+			t.Fatalf("job %s for deleted deployment still exists", job.Id)
+		}
+	}
```

As per coding guidelines "Follow the existing test structure used in *_test.go files".
Also applies to: 835-842
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/workspace-engine/test/e2e/engine_deployment_test.go` around lines 802 - 803:
Restore the deleted-deployment job cleanup check by reintroducing job-ID tracking and the assertion that ensures no jobs tied to the deleted deployment remain pending: re-add the deploymentJobs (or similar job-ID collection) increment/append logic where jobs are created and restore the assertion block that iterates over recorded job IDs to verify they are not pending (the previously removed lines around the deploymentJobs variable and the assertion at lines ~835-842). Ensure the test collects created job IDs when calling the job-creation helper and then asserts none of those IDs are present in the pending queue after the deployment deletion, using the same helper/assertion functions already used elsewhere in this test file.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@apps/workspace-engine/test/e2e/engine_deployment_test.go`:
- Around line 802-803: Restore the deleted-deployment job cleanup check by
reintroducing job-ID tracking and the assertion that ensures no jobs tied to the
deleted deployment remain pending: re-add the deploymentJobs (or similar job-ID
collection) increment/append logic where jobs are created and restore the
assertion block that iterates over recorded job IDs to verify they are not
pending (the previously removed lines around the deploymentJobs variable and the
assertion at lines ~835-842). Ensure the test collects created job IDs when
calling the job-creation helper and then asserts none of those IDs are present
in the pending queue after the deployment deletion, using the same
helper/assertion functions already used elsewhere in this test file.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
apps/workspace-engine/test/e2e/engine_deployment_test.go
Summary by CodeRabbit

- New Features
- Bug Fixes
- Tests