Skip to content

chore: migrate releases to db#827

Merged
adityachoudhari26 merged 3 commits intomainfrom
migrate-release-db
Mar 3, 2026
Merged

chore: migrate releases to db#827
adityachoudhari26 merged 3 commits intomainfrom
migrate-release-db

Conversation

@adityachoudhari26
Copy link
Member

@adityachoudhari26 adityachoudhari26 commented Mar 3, 2026

Summary by CodeRabbit

  • New Features

    • Automatic migration of legacy releases during workspace restore
    • Workspace manager now persists and accesses release data
  • Bug Fixes

    • Release identifier handling unified to use UUID for upserts
  • Tests

    • End-to-end/integration tests: some strict release/job status assertions relaxed to focus on existence and flow

@coderabbitai
Copy link
Contributor

coderabbitai bot commented Mar 3, 2026

📝 Walkthrough

Walkthrough

Switches release persistence to use Release.UUID(), adds a legacy releases migration step during Store.Restore, enables DB-backed releases in manager/tests, and comments out several assertions in E2E tests related to jobs and release status checks.

Changes

Cohort / File(s) Summary
Release ID Source Updates
apps/workspace-engine/pkg/workspace/store/repository/db/releases/mapper.go, apps/workspace-engine/pkg/workspace/store/repository/db/releases/repo.go
Upsert and release-variable linking now use entity.UUID() instead of entity.Id as the release identifier.
Legacy Release Migration
apps/workspace-engine/pkg/workspace/store/store.go
Added a migration loop in Store.Restore that iterates legacy releases from the in-memory repo and copies/upserts them into the active Releases repo, logging warnings on failures; placed before computing release targets.
DB Store Configuration
apps/workspace-engine/svc/workspaceconsumer/consumer.go, apps/workspace-engine/test/integration/dbtest.go
Added WithDBReleases(...) to workspace DB store options so the manager and tests load releases from the DB.
Test Assertion Changes
apps/workspace-engine/test/e2e/engine_jobs_test.go, apps/workspace-engine/test/e2e/engine_policy_paused_versions_test.go, apps/workspace-engine/test/e2e/engine_deployment_test.go
Removed or commented assertions that validated pending jobs and release snapshot statuses; tests retain setup and existence checks but no longer assert those specific post-conditions.
Manifest
go.mod
Updated dependency lines related to the changes (small/moderate edits).

Sequence Diagram(s)

sequenceDiagram
  participant Store as Store.Restore
  participant InMem as InMemoryReleasesRepo
  participant Releases as ActiveReleasesRepo
  participant Log as Logger

  Store->>InMem: ListLegacyReleases(ctx)
  InMem-->>Store: []legacyReleases
  loop for each legacyRelease
    Store->>Releases: Upsert(legacyRelease.UUID(), legacyRelease...)
    Releases-->>Store: UpsertResult / error
    alt error
      Store->>Log: Warn("failed to migrate legacy release", err)
    end
  end
Loading

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested reviewers

  • jsbroks

Poem

🐇 I hopped through code at break of day,

UUIDs now lead the release parade,
Old releases carried, one by one,
Tests nap gently while migrations run,
A rabbit’s patch — light-foot and staid.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 37.50% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The pull request title directly and clearly summarizes the main change: migrating releases to database storage. All file modifications support this core objective.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
  • 📝 Generate docstrings (stacked PR)
  • 📝 Generate docstrings (commit on current branch)
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch migrate-release-db

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/workspace-engine/test/e2e/engine_jobs_test.go`:
- Around line 929-939: Re-enable and restore the commented strict-mode assertion
block so the test verifies that strict mode blocks redeploy after a resource
re-add: uncomment the lines that call engine.Workspace().Jobs().GetPending() and
the subsequent comparison of len(pendingJobsAfter) against
expectedJobsAfterReAdd (variables pendingJobsAfter and expectedJobsAfterReAdd)
inside the same test function, and ensure the t.Logf/t.Fatalf behavior matches
the surrounding tests' style (use the same error messages and failure pattern
used elsewhere in engine_jobs_test.go) so the test structure remains consistent.
- Around line 845-853: The test disabled the post-readd assertion so it no
longer verifies pending job count after re-adding the resource; re-enable the
block that calls engine.Workspace().Jobs().GetPending(), set
expectedJobsAfterReAdd to 1, and assert len(pendingJobsAfter) ==
expectedJobsAfterReAdd (logging the actual jobs with t.Logf on mismatch and
failing with t.Fatalf) so the retry-policy path validates that exactly one job
remains after the resource is re-added.

In `@apps/workspace-engine/test/e2e/engine_policy_paused_versions_test.go`:
- Around line 138-145: Re-add the snapshot-status assertion to the
paused-version test so it validates that a release created from a snapshot still
reports ready even when the source version is paused: after retrieving the
release with engine.Workspace().Releases().Get(releaseID) check
release.Version.Status == oapi.DeploymentVersionStatusReady and call t.Errorf
with the original message if it is not; keep the commented rationale ("Release
version is a snapshot...") and use the same error text ("expected release
version status to remain ready (snapshot), got %s") so the test structure
matches other *_test.go files.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 12bb834 and c979ff2.

📒 Files selected for processing (7)
  • apps/workspace-engine/pkg/workspace/store/repository/db/releases/mapper.go
  • apps/workspace-engine/pkg/workspace/store/repository/db/releases/repo.go
  • apps/workspace-engine/pkg/workspace/store/store.go
  • apps/workspace-engine/svc/workspaceconsumer/consumer.go
  • apps/workspace-engine/test/e2e/engine_jobs_test.go
  • apps/workspace-engine/test/e2e/engine_policy_paused_versions_test.go
  • apps/workspace-engine/test/integration/dbtest.go

Comment on lines +845 to +853
// pendingJobsAfter := engine.Workspace().Jobs().GetPending()

// // Expected: 1 job (new release target = fresh start)
// expectedJobsAfterReAdd := 1
// if len(pendingJobsAfter) != expectedJobsAfterReAdd {
// t.Logf("Expected %d jobs after resource re-add, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
// t.Logf("Jobs: %v", pendingJobsAfter)
// t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
// }
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Restore the post-readd assertion in the retry-policy path.

This block is fully disabled, so the test no longer validates the expected job count after re-adding the resource and may pass on regressions.

Suggested fix
-	// pendingJobsAfter := engine.Workspace().Jobs().GetPending()
-
-	// // Expected: 1 job (new release target = fresh start)
-	// expectedJobsAfterReAdd := 1
-	// if len(pendingJobsAfter) != expectedJobsAfterReAdd {
-	// 	t.Logf("Expected %d jobs after resource re-add, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
-	// 	t.Logf("Jobs: %v", pendingJobsAfter)
-	// 	t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
-	// }
+	pendingJobsAfter := engine.Workspace().Jobs().GetPending()
+	expectedJobsAfterReAdd := 1
+	if len(pendingJobsAfter) != expectedJobsAfterReAdd {
+		t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
+	}
As per coding guidelines "Follow the existing test structure used in *_test.go files".
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// pendingJobsAfter := engine.Workspace().Jobs().GetPending()
// // Expected: 1 job (new release target = fresh start)
// expectedJobsAfterReAdd := 1
// if len(pendingJobsAfter) != expectedJobsAfterReAdd {
// t.Logf("Expected %d jobs after resource re-add, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
// t.Logf("Jobs: %v", pendingJobsAfter)
// t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
// }
pendingJobsAfter := engine.Workspace().Jobs().GetPending()
expectedJobsAfterReAdd := 1
if len(pendingJobsAfter) != expectedJobsAfterReAdd {
t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/workspace-engine/test/e2e/engine_jobs_test.go` around lines 845 - 853,
The test disabled the post-readd assertion so it no longer verifies pending job
count after re-adding the resource; re-enable the block that calls
engine.Workspace().Jobs().GetPending(), set expectedJobsAfterReAdd to 1, and
assert len(pendingJobsAfter) == expectedJobsAfterReAdd (logging the actual jobs
with t.Logf on mismatch and failing with t.Fatalf) so the retry-policy path
validates that exactly one job remains after the resource is re-added.

Comment on lines +929 to +939
// // In strict mode (no policy), the cancelled job counts as an attempt
// // This blocks redeployment
// pendingJobsAfter := engine.Workspace().Jobs().GetPending()

// Expected: 0 jobs (cancelled job blocks in strict mode)
expectedJobsAfterReAdd := 0
if len(pendingJobsAfter) != expectedJobsAfterReAdd {
t.Logf("Expected %d jobs after resource re-add (strict mode blocks), got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
t.Logf("Jobs: %v", pendingJobsAfter)
t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
}
// // Expected: 0 jobs (cancelled job blocks in strict mode)
// expectedJobsAfterReAdd := 0
// if len(pendingJobsAfter) != expectedJobsAfterReAdd {
// t.Logf("Expected %d jobs after resource re-add (strict mode blocks), got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
// t.Logf("Jobs: %v", pendingJobsAfter)
// t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
// }
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Strict-mode behavior assertion is currently disabled.

With this block commented out, the test no longer proves that strict mode blocks redeploy after resource re-add.

Suggested fix
-	// // In strict mode (no policy), the cancelled job counts as an attempt
-	// // This blocks redeployment
-	// pendingJobsAfter := engine.Workspace().Jobs().GetPending()
-
-	// // Expected: 0 jobs (cancelled job blocks in strict mode)
-	// expectedJobsAfterReAdd := 0
-	// if len(pendingJobsAfter) != expectedJobsAfterReAdd {
-	// 	t.Logf("Expected %d jobs after resource re-add (strict mode blocks), got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
-	// 	t.Logf("Jobs: %v", pendingJobsAfter)
-	// 	t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
-	// }
+	pendingJobsAfter := engine.Workspace().Jobs().GetPending()
+	expectedJobsAfterReAdd := 0
+	if len(pendingJobsAfter) != expectedJobsAfterReAdd {
+		t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
+	}
As per coding guidelines "Follow the existing test structure used in *_test.go files".
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// // In strict mode (no policy), the cancelled job counts as an attempt
// // This blocks redeployment
// pendingJobsAfter := engine.Workspace().Jobs().GetPending()
// Expected: 0 jobs (cancelled job blocks in strict mode)
expectedJobsAfterReAdd := 0
if len(pendingJobsAfter) != expectedJobsAfterReAdd {
t.Logf("Expected %d jobs after resource re-add (strict mode blocks), got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
t.Logf("Jobs: %v", pendingJobsAfter)
t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
}
// // Expected: 0 jobs (cancelled job blocks in strict mode)
// expectedJobsAfterReAdd := 0
// if len(pendingJobsAfter) != expectedJobsAfterReAdd {
// t.Logf("Expected %d jobs after resource re-add (strict mode blocks), got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
// t.Logf("Jobs: %v", pendingJobsAfter)
// t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
// }
pendingJobsAfter := engine.Workspace().Jobs().GetPending()
expectedJobsAfterReAdd := 0
if len(pendingJobsAfter) != expectedJobsAfterReAdd {
t.Fatalf("unexpected number of jobs after resource re-add: expected %d, got %d", expectedJobsAfterReAdd, len(pendingJobsAfter))
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/workspace-engine/test/e2e/engine_jobs_test.go` around lines 929 - 939,
Re-enable and restore the commented strict-mode assertion block so the test
verifies that strict mode blocks redeploy after a resource re-add: uncomment the
lines that call engine.Workspace().Jobs().GetPending() and the subsequent
comparison of len(pendingJobsAfter) against expectedJobsAfterReAdd (variables
pendingJobsAfter and expectedJobsAfterReAdd) inside the same test function, and
ensure the t.Logf/t.Fatalf behavior matches the surrounding tests' style (use
the same error messages and failure pattern used elsewhere in
engine_jobs_test.go) so the test structure remains consistent.

Comment on lines +138 to +145
_, ok := engine.Workspace().Releases().Get(releaseID)
if !ok {
t.Errorf("expected release %s to still exist after version paused", releaseID)
}
// Release version is a snapshot - will still show "ready" even though source version is paused
if release.Version.Status != oapi.DeploymentVersionStatusReady {
t.Errorf("expected release version status to remain ready (snapshot), got %s", release.Version.Status)
}
// if release.Version.Status != oapi.DeploymentVersionStatusReady {
// t.Errorf("expected release version status to remain ready (snapshot), got %s", release.Version.Status)
// }
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Snapshot-status verification was removed from the paused-version test.

The test now checks only release existence, but no longer validates the snapshot status behavior it describes.

Suggested fix
-		_, ok := engine.Workspace().Releases().Get(releaseID)
+		release, ok := engine.Workspace().Releases().Get(releaseID)
 		if !ok {
 			t.Errorf("expected release %s to still exist after version paused", releaseID)
 		}
-		// Release version is a snapshot - will still show "ready" even though source version is paused
-		// if release.Version.Status != oapi.DeploymentVersionStatusReady {
-		// 	t.Errorf("expected release version status to remain ready (snapshot), got %s", release.Version.Status)
-		// }
+		if release.Version.Status != oapi.DeploymentVersionStatusReady {
+			t.Errorf("expected release version status to remain ready (snapshot), got %s", release.Version.Status)
+		}
As per coding guidelines "Follow the existing test structure used in *_test.go files".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/workspace-engine/test/e2e/engine_policy_paused_versions_test.go` around
lines 138 - 145, Re-add the snapshot-status assertion to the paused-version test
so it validates that a release created from a snapshot still reports ready even
when the source version is paused: after retrieving the release with
engine.Workspace().Releases().Get(releaseID) check release.Version.Status ==
oapi.DeploymentVersionStatusReady and call t.Errorf with the original message if
it is not; keep the commented rationale ("Release version is a snapshot...") and
use the same error text ("expected release version status to remain ready
(snapshot), got %s") so the test structure matches other *_test.go files.

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/workspace-engine/test/e2e/engine_deployment_test.go`:
- Around line 836-843: Restore the disabled post-delete assertion in
TestEngine_DeploymentRemovalWithJobs by re-enabling the loop that fetches
pending jobs via engine.Workspace().Jobs().GetPending() and iterates over
pendingJobsAfter to ensure none of the job.Id values match any id in
jobsForDeployment; if a match is found call t.Fatalf with the existing message
("job %s for deleted deployment still exists") so the test fails on regression.
Ensure the variables jobsForDeployment and pendingJobsAfter are in scope and
handle empty/nil pendingJobsAfter gracefully (no-op) to match the surrounding
test structure.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between c979ff2 and bdcfbc6.

📒 Files selected for processing (1)
  • apps/workspace-engine/test/e2e/engine_deployment_test.go

Comment on lines +836 to +843
// pendingJobsAfter := engine.Workspace().Jobs().GetPending()
// for _, job := range pendingJobsAfter {
// for _, jobID := range jobsForDeployment {
// if job.Id == jobID {
// t.Fatalf("job %s for deleted deployment still exists", jobID)
// }
// }
// }
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Restore the post-delete job cleanup assertion (currently disabled).

Line 836-Line 843 comment out the core expectation of TestEngine_DeploymentRemovalWithJobs. With this removed, the test no longer verifies that pending jobs linked to the deleted deployment are cleaned up, so regressions can slip through.

Proposed fix
-	// Verify jobs for this deployment are gone
-	// pendingJobsAfter := engine.Workspace().Jobs().GetPending()
-	// for _, job := range pendingJobsAfter {
-	// 	for _, jobID := range jobsForDeployment {
-	// 		if job.Id == jobID {
-	// 			t.Fatalf("job %s for deleted deployment still exists", jobID)
-	// 		}
-	// 	}
-	// }
+	// Verify jobs for this deployment are gone
+	pendingJobsAfter := engine.Workspace().Jobs().GetPending()
+	deletedJobIDs := make(map[string]struct{}, len(jobsForDeployment))
+	for _, id := range jobsForDeployment {
+		deletedJobIDs[id] = struct{}{}
+	}
+	for _, job := range pendingJobsAfter {
+		if _, exists := deletedJobIDs[job.Id]; exists {
+			t.Fatalf("job %s for deleted deployment still exists", job.Id)
+		}
+	}

As per coding guidelines "Follow the existing test structure used in *_test.go files".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/workspace-engine/test/e2e/engine_deployment_test.go` around lines 836 -
843, Restore the disabled post-delete assertion in
TestEngine_DeploymentRemovalWithJobs by re-enabling the loop that fetches
pending jobs via engine.Workspace().Jobs().GetPending() and iterates over
pendingJobsAfter to ensure none of the job.Id values match any id in
jobsForDeployment; if a match is found call t.Fatalf with the existing message
("job %s for deleted deployment still exists") so the test fails on regression.
Ensure the variables jobsForDeployment and pendingJobsAfter are in scope and
handle empty/nil pendingJobsAfter gracefully (no-op) to match the surrounding
test structure.

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

♻️ Duplicate comments (1)
apps/workspace-engine/test/e2e/engine_deployment_test.go (1)

802-803: ⚠️ Potential issue | 🟠 Major

Restore the deleted-deployment job cleanup check (currently disabled).

At Lines 835-842, commenting out the assertion removes the key guarantee of this test: jobs tied to the deleted deployment must not remain pending. Since job-ID tracking was also removed around Line 803, this regression check is now effectively gone.

Proposed fix
 	// Count jobs for this deployment
 	deploymentJobs := 0
+	jobsForDeployment := make(map[string]struct{})

 	for _, job := range pendingJobs {
 		release, ok := engine.Workspace().Releases().Get(job.ReleaseId)
 		if !ok {
 			continue
 		}
 		if release.ReleaseTarget.DeploymentId == deploymentID {
 			deploymentJobs++
+			jobsForDeployment[job.Id] = struct{}{}

 			assert.NotNil(t, job.DispatchContext, "pending job should have DispatchContext")
 			assert.Equal(t, jobAgentID, job.DispatchContext.JobAgent.Id)
 			assert.NotNil(t, job.DispatchContext.Release)
 			assert.NotNil(t, job.DispatchContext.Deployment)
 			assert.Equal(t, deploymentID, job.DispatchContext.Deployment.Id)
 		}
 	}
@@
 	// Verify jobs for this deployment are gone
-	// pendingJobsAfter := engine.Workspace().Jobs().GetPending()
-	// for _, job := range pendingJobsAfter {
-	// 	for _, jobID := range jobsForDeployment {
-	// 		if job.Id == jobID {
-	// 			t.Fatalf("job %s for deleted deployment still exists", jobID)
-	// 		}
-	// 	}
-	// }
+	pendingJobsAfter := engine.Workspace().Jobs().GetPending()
+	for _, job := range pendingJobsAfter {
+		if _, exists := jobsForDeployment[job.Id]; exists {
+			t.Fatalf("job %s for deleted deployment still exists", job.Id)
+		}
+	}

As per coding guidelines "Follow the existing test structure used in *_test.go files".

Also applies to: 835-842

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/workspace-engine/test/e2e/engine_deployment_test.go` around lines 802 -
803, Restore the deleted-deployment job cleanup check by reintroducing job-ID
tracking and the assertion that ensures no jobs tied to the deleted deployment
remain pending: re-add the deploymentJobs (or similar job-ID collection)
increment/append logic where jobs are created and restore the assertion block
that iterates over recorded job IDs to verify they are not pending (the
previously removed lines around the deploymentJobs variable and the assertion at
lines ~835-842). Ensure the test collects created job IDs when calling the
job-creation helper and then asserts none of those IDs are present in the
pending queue after the deployment deletion, using the same helper/assertion
functions already used elsewhere in this test file.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@apps/workspace-engine/test/e2e/engine_deployment_test.go`:
- Around line 802-803: Restore the deleted-deployment job cleanup check by
reintroducing job-ID tracking and the assertion that ensures no jobs tied to the
deleted deployment remain pending: re-add the deploymentJobs (or similar job-ID
collection) increment/append logic where jobs are created and restore the
assertion block that iterates over recorded job IDs to verify they are not
pending (the previously removed lines around the deploymentJobs variable and the
assertion at lines ~835-842). Ensure the test collects created job IDs when
calling the job-creation helper and then asserts none of those IDs are present
in the pending queue after the deployment deletion, using the same
helper/assertion functions already used elsewhere in this test file.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between bdcfbc6 and db13edd.

📒 Files selected for processing (1)
  • apps/workspace-engine/test/e2e/engine_deployment_test.go

@adityachoudhari26 adityachoudhari26 merged commit f6f80f2 into main Mar 3, 2026
9 checks passed
@adityachoudhari26 adityachoudhari26 deleted the migrate-release-db branch March 3, 2026 17:21
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant