
migration-script-for-tenant-change#211

Open
MallanagoudaB wants to merge 3 commits into develop from tenantMigrationScript

Conversation

@MallanagoudaB
Collaborator

MallanagoudaB commented Feb 17, 2026

Summary by CodeRabbit

  • New Features
    • Added comprehensive tenant migration functionality that enables secure migration between tenant and organization configurations, with credential-based authentication and detailed migration progress logging and reporting.

@coderabbitai

coderabbitai bot commented Feb 18, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough

This PR adds a tenant migration system comprising a configuration file, an execution script, and an output log. The system performs a multi-phase MongoDB-based migration, mapping current tenant/organization entities and data to new identifiers while preserving relationships and metadata.

Changes

  • Tenant Migration System (src/migrations/tenantMigration/input.json, src/migrations/tenantMigration/migrate.js): Configuration file defining admin credentials and tenant/org mappings, paired with a multi-phase migration script that authenticates via the REST API, validates user roles, copies entity types and entities to new identifiers, remaps relationships, and generates summary statistics.
  • Migration Artifacts (src/migrations/tenantMigration/migration-output.txt): Log artifact documenting the step-by-step migration execution flow, including authentication, MongoDB connection, phase completions, and the final success confirmation.

Sequence Diagram

sequenceDiagram
    actor Admin as Admin/User
    participant Script as Migration Script
    participant API as REST API
    participant MongoDB as MongoDB
    participant FS as File System

    Admin->>Script: Trigger migration with input.json
    Script->>FS: Read input.json config
    Script->>API: POST login (credentials)
    API-->>Script: JWT token + userId, tenantCode
    Script->>MongoDB: Connect & verify target tenant empty
    
    Note over Script,MongoDB: Phase 1: Copy Entity Types
    Script->>MongoDB: Query currentTenantId entity types
    Script->>MongoDB: Insert to newTenantId (update IDs, metadata)
    MongoDB-->>Script: Entity type ID mappings
    
    Note over Script,MongoDB: Phase 2: Copy Entities
    Script->>MongoDB: Query currentTenantId entities
    Script->>MongoDB: Map entityTypeIds via Phase 1 mappings
    Script->>MongoDB: Insert to newTenantId (batch operations)
    
    Note over Script,MongoDB: Phase 3: Fix targetedEntityTypes
    Script->>MongoDB: Remap referenced entity type IDs
    MongoDB-->>Script: Confirm updates
    
    Note over Script,MongoDB: Phase 4: Fix Group References
    Script->>MongoDB: Build entity ID mappings (old → new)
    Script->>MongoDB: Bulk update group references
    MongoDB-->>Script: Completion status
    
    Script->>FS: Log summary & completion message
    Script-->>Admin: Migration success with statistics

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 Hop, skip, and a leap through data streams,
Four phases dancing through MongoDB dreams,
Tenants mapped and entities freed,
A migration journey of rabbit-speed,
From old domains to new frontiers we bound! 🌱

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check: ✅ Passed. Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'migration-script-for-tenant-change' clearly summarizes the main change, adding a migration script for tenant operations, and directly corresponds to the primary deliverable in the PR (the new migrate.js script and supporting files).

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch tenantMigrationScript

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@MallanagoudaB
Collaborator Author

@coderabbitai full review

@coderabbitai

coderabbitai bot commented Feb 19, 2026

✅ Actions performed

Full review triggered.

coderabbitai bot left a comment

Actionable comments posted: 9

🧹 Nitpick comments (2)
src/migrations/tenantMigration/input.json (1)

2-7: Consistent typos in JSON keys: "loginCredentails", "createrUserName", "createrPassword", "createrType".

These keys are referenced in migrate.js so they are consistent across files, but they contain misspellings ("Credentails" → "Credentials", "creater" → "creator"). Fixing now avoids confusion and prevents these typos from becoming a de-facto API contract.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/migrations/tenantMigration/input.json` around lines 2 - 7, The JSON keys
under the login object are misspelled and should be renamed to correct,
consistent keys: change "loginCredentails" → "loginCredentials" and each
"createrUserName", "createrPassword", "createrType" → "creatorUserName",
"creatorPassword", "creatorType"; update any references in migrate.js that
access those keys (search for "loginCredentails", "createrUserName",
"createrPassword", "createrType") to use the corrected names so the JSON schema
and code remain consistent.
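
A minimal sketch of the rename, assuming migrate.js loads the config with require and only these keys change (values are placeholders, not the real file):

    // input.json (corrected spellings, placeholder values)
    // {
    //     "loginCredentials": {
    //         "creatorUserName": "<admin-username>",
    //         "creatorPassword": "<admin-password>",
    //         "creatorType": "<required-role>"
    //     }
    // }

    // migrate.js (hypothetical accessor, mirroring the renamed keys)
    const { loginCredentials } = require('./input.json')
    const { creatorUserName, creatorPassword, creatorType } = loginCredentials
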
src/migrations/tenantMigration/migrate.js (1)

39-46: Output file is overwritten without warning on re-run.

Line 43 uses writeFileSync which silently truncates the previous migration log. If an operator needs to inspect a prior run's output after a failure, it's lost. Consider appending a timestamp to the filename or warning before overwrite.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/migrations/tenantMigration/migrate.js` around lines 39 - 46, The current
runMigration function uses fs.writeFileSync(OUTPUT_FILE, ...) which
unconditionally truncates the previous migration log; modify runMigration to
avoid silent overwrite by either renaming the existing OUTPUT_FILE (e.g., append
a timestamp/UUID) before writing or switching to an append strategy;
specifically, in the code paths around OUTPUT_FILE and fs.writeFileSync replace
the direct overwrite with a check using fs.existsSync(OUTPUT_FILE) and then
either move/rename the existing file (preserving it with a timestamp) or use
fs.appendFileSync to prepend a header with the current timestamp, ensuring the
original file is preserved rather than silently lost.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/migrations/tenantMigration/input.json`:
- Around line 1-14: This file contains plaintext secrets under the JSON keys
loginCredentails -> createrUserName / createrPassword; remove the real
credentials from src/migrations/tenantMigration/input.json, replace with
placeholder values (or a minimal template) and commit that as
input.example.json, add src/migrations/tenantMigration/input.json to .gitignore
so the real file is not tracked, and update any migration loader code to read
credentials from environment variables (dotenv) or a secure secret source
instead of reading createrUserName/createrPassword from the tracked JSON; ensure
you also purge the secret-containing file from history if it has already been
committed.
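
A sketch of reading the credentials from environment variables instead of the tracked JSON (dotenv-based; the variable names are illustrative, not an existing contract):

    require('dotenv').config()

    // Secrets come from the environment; the tracked input.example.json keeps
    // only non-secret fields such as the tenant/org mappings.
    const creatorUserName = process.env.MIGRATION_ADMIN_USERNAME
    const creatorPassword = process.env.MIGRATION_ADMIN_PASSWORD

    if (!creatorUserName || !creatorPassword) {
        throw new Error('Missing MIGRATION_ADMIN_USERNAME or MIGRATION_ADMIN_PASSWORD')
    }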

In `@src/migrations/tenantMigration/migrate.js`:
- Around line 134-135: The DB name extraction using MONGODB_URL.split('/').pop()
can include query parameters; change the extraction in migrate.js so
dbNameFromUrl strips query params before passing to client.db (e.g., split on
'/' then split the last segment on '?' and take [0]) or use a robust URL parse
that handles mongodb:// and mongodb+srv://; update the variable dbNameFromUrl
and ensure client.db(dbNameFromUrl) receives the cleaned database name.
- Around line 142-148: The current pre-check using existingTargetData (sum of
entityTypeCollection.countDocuments and entityCollection.countDocuments for
newTenantId) blocks retries after a partial failure because orphaned Phase 1/2
records remain; update migrate.js to either wrap the multi-phase operations in a
MongoDB transaction around the phases to guarantee atomic rollback, or add a CLI
flag (e.g., --cleanup or --force) handled in the migration entrypoint that, when
set, deletes all documents with tenantId === newTenantId from
entityTypeCollection and entityCollection before starting, and ensure the flag
is validated and logged; alternatively (minimum) add clear logging/documentation
in the migration command describing manual recovery steps to remove newTenantId
records so reruns are possible.
- Around line 22-23: The code reads BASE_URL and MONGODB_URL from process.env
without validation; add an explicit check after the constants (for BASE_URL and
MONGODB_URL in migrate.js) that verifies both are present and non-empty and if
not logs a clear error (naming the missing env var(s)) and exits with non-zero
status (throw Error or process.exit(1)) so the script fails fast with an
actionable message rather than producing downstream "undefined" errors.
- Around line 77-87: The extracted userId from loginResponse (const userId =
loginResponse.data?.result?.user?.id) is not validated and can be undefined for
other login shapes; add a guard like the token check: verify userId after
extraction and throw an Error (e.g., 'Login failed. userId missing.') if it's
falsy so subsequent code that uses userId for createdBy/updatedBy doesn't write
undefined values; update the migration flow near where token, decodedToken, and
log(`✅ Login successful. userId: ${userId}`) are handled to validate userId and
fail fast.
- Around line 214-247: The loop that iterates over cursor and calls
entityTypeCollection.findOne for each oldEntity (the block using
oldEntity/entityTypeId and newEntityType) creates an N+1 query; before the while
loop build an in-memory lookup map from
registryDetails.tenantMigrationReferenceId (from the Phase 1 results or
entityTypeCollection query) to the new entityType document or _id, then replace
the per-entity findOne with a lookup like map[oldEntity.entityTypeId.toString()]
to get newEntityType; do the same optimization for Phase 3 where
entityTypeCollection.findOne is called (Lines ~276–279) so both phases use the
prebuilt map instead of repeated DB queries.
- Around line 176-195: The batch insert builds newEntityTypes from
oldEntityTypes by spreading oldType and setting _id: undefined which the MongoDB
driver serializes to null; instead remove the _id key entirely so MongoDB
generates new ObjectIds: when mapping oldType (in the newEntityTypes creation),
delete the _id property from the object before inserting (or construct the new
object without including _id) and then call
entityTypeCollection.insertMany(newEntityTypes); apply the same fix to the
analogous Phase 2 mapping block (the second spot that spreads oldType and
attempts to unset _id before insertMany).
- Around line 379-384: The code currently throws when a group member oldId isn't
found in idMapping; instead, update the loop that iterates over oldIds to skip
unmapped IDs: when idMapping[oldId.toString()] is undefined, emit a warning
(e.g., logger.warn or console.warn) that includes the oldId and context (tenant
/ parent entity id) and continue to the next oldId rather than throwing; ensure
this change preserves the rest of the group mapping logic and downstream calls
(such as addSubEntityToParent) so cross-tenant references are simply ignored
during migration.
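
For the database-name extraction finding above, a minimal sketch (assumes MONGODB_URL is a standard mongodb:// or mongodb+srv:// connection string; a full URL parser would be more robust):

    // Strip any query string from the last path segment, e.g.
    // 'mongodb://host:27017/mydb?retryWrites=true' -> 'mydb'
    const dbNameFromUrl = MONGODB_URL.split('/').pop().split('?')[0]
    const db = client.db(dbNameFromUrl)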
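
For the partial-failure retry finding, one possible shape of the --cleanup flag (the flag name is an assumption; collection and variable names follow the prompt above):

    const shouldCleanup = process.argv.includes('--cleanup')

    if (existingTargetData > 0 && shouldCleanup) {
        log(`🧹 Removing ${existingTargetData} existing documents for tenant ${newTenantId}...`)
        await entityTypeCollection.deleteMany({ tenantId: newTenantId })
        await entityCollection.deleteMany({ tenantId: newTenantId })
    } else if (existingTargetData > 0) {
        throw new Error(
            `Target tenant ${newTenantId} already has data. Re-run with --cleanup to remove it first.`
        )
    }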
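
For the missing environment variable finding, a fail-fast check could look like this (sketch):

    const BASE_URL = process.env.BASE_URL
    const MONGODB_URL = process.env.MONGODB_URL

    // Fail with an actionable message instead of undefined downstream errors.
    const missing = [!BASE_URL && 'BASE_URL', !MONGODB_URL && 'MONGODB_URL'].filter(Boolean)
    if (missing.length) {
        console.error(`❌ Missing required environment variable(s): ${missing.join(', ')}`)
        process.exit(1)
    }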
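
For the userId finding, a guard mirroring the existing token check (sketch):

    const userId = loginResponse.data?.result?.user?.id
    if (!userId) {
        throw new Error('Login failed. userId missing in login response.')
    }
    log(`✅ Login successful. userId: ${userId}`)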
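
For the N+1 query finding, a sketch of the prebuilt lookup (assumes Phase 1 stores the old id in registryDetails.tenantMigrationReferenceId, as described above):

    // Build the map once before the Phase 2 loop.
    const entityTypeMap = {}
    const newTypesCursor = entityTypeCollection.find({ tenantId: newTenantId })
    while (await newTypesCursor.hasNext()) {
        const newType = await newTypesCursor.next()
        const oldId = newType.registryDetails?.tenantMigrationReferenceId
        if (oldId) entityTypeMap[oldId] = newType
    }

    // Inside the Phase 2 (and Phase 3) loops, replace the per-document findOne with:
    const newEntityType = entityTypeMap[oldEntity.entityTypeId.toString()]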
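
For the _id: undefined finding, a sketch of omitting the field instead (other per-document updates such as createdBy are elided here):

    // Dropping _id lets MongoDB assign fresh ObjectIds on insert.
    const newEntityTypes = oldEntityTypes.map(({ _id, ...rest }) => ({
        ...rest,
        tenantId: newTenantId,
        orgId: newOrgId,
    }))
    await entityTypeCollection.insertMany(newEntityTypes)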
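
For the unmapped group member finding, a skip-and-warn sketch (the context included in the log message is illustrative):

    const mappedIds = []
    for (const oldId of oldIds) {
        const newId = idMapping[oldId.toString()]
        if (!newId) {
            log(`⚠️ Skipping unmapped group member ${oldId} (source tenant ${currentTenantId})`)
            continue
        }
        mappedIds.push(newId)
    }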

In `@src/migrations/tenantMigration/migration-output.txt`:
- Around line 1-39: Remove the generated runtime artifact
src/migrations/tenantMigration/migration-output.txt from the commit and ensure
migrate.js’s output file is ignored going forward by adding the exact path
src/migrations/tenantMigration/migration-output.txt to .gitignore; locate where
migrate.js writes migration-output.txt (search for "migration-output.txt" or
writeFile calls in migrate.js) to confirm no other generated files need ignoring
and then remove the file from the repo (git rm --cached) before committing the
updated .gitignore.


Collaborator

VISHNUDAS-tunerlabs left a comment

Reviewed on Feb 20

@@ -0,0 +1,38 @@
==============================
Collaborator

@MallanagoudaB Should we push this file with the PR? I don't think so.

@@ -0,0 +1,14 @@
{
Collaborator

@MallanagoudaB we should include a detailed and well-structured README file as well.

if (!roleTitles.includes(loginCredentails.createrType)) {
throw new Error(
`Role validation failed. User does not have required role: "${
loginCredentails.createrType
Collaborator

@MallanagoudaB we should treat loginCredentails.createrType as case-insensitive.
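
A possible case-insensitive check, reusing the variables from the snippet above (a sketch, assuming roleTitles holds plain strings):

    const requiredRole = loginCredentails.createrType.toLowerCase()
    const hasRequiredRole = roleTitles.some((title) => title.toLowerCase() === requiredRole)
    if (!hasRequiredRole) {
        throw new Error(
            `Role validation failed. User does not have required role: "${loginCredentails.createrType}"`
        )
    }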


log(`✅ Connected to DB: ${dbNameFromUrl}\n`)

const existingTargetData =
Collaborator

@MallanagoudaB Why are we preventing migration if the new tenant already has data? Is it required?

_id: undefined,
tenantId: newTenantId,
orgId: newOrgId,
createdBy: userId,
Collaborator

@MallanagoudaB What happens if the user ID is not passed in the input? It is always optional, right?

'registryDetails.tenantMigrationReferenceId': oldEntity.entityTypeId.toString(),
})

if (!newEntityType) {
Collaborator

@MallanagoudaB If at any point some skipping happens due to missing data, please make sure we are logging it.


log('🚀 Phase 3 - Fixing targetedEntityTypes...')

const newEntitiesWithTargets = entityCollection.find({
Collaborator

@MallanagoudaB Why are you not fixing the targeted entity types in Phase 2? Please handle it there instead.

orgId: newOrgId,
entityTypeId: newEntityType._id,
entityType: newEntityType.name,
createdBy: userId,
Collaborator

@MallanagoudaB What if the creator ID is not passed? In that case you should keep the current data only.
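
For example (a sketch; oldEntity stands for the source document being copied, and userId may be undefined):

    // Fall back to the source document's audit fields when no creator id is supplied.
    createdBy: userId || oldEntity.createdBy,
    updatedBy: userId || oldEntity.updatedBy,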

* ==========================================================
*/

log('🚀 Phase 3 - Fixing targetedEntityTypes...')
Collaborator

@MallanagoudaB Phase 3 can be done within Phase 2; you are doing the work again, which is not needed.


const idMapping = {}

const mappingCursor = entityCollection.find(
Collaborator

@MallanagoudaB
Fetch entities where gps exists and is not empty, batch them, loop over each batch, and apply batching within the groups as well. The logic should be generic.
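
A generic batching sketch along these lines (the batch size, the groups filter, and processGroupBatch are assumptions, not the actual implementation):

    const BATCH_SIZE = 500 // illustrative

    const cursor = entityCollection.find(
        { tenantId: currentTenantId, groups: { $exists: true, $ne: {} } },
        { projection: { _id: 1, groups: 1 } }
    )

    let batch = []
    while (await cursor.hasNext()) {
        batch.push(await cursor.next())
        if (batch.length === BATCH_SIZE) {
            await processGroupBatch(batch) // hypothetical helper that remaps each entity's groups via idMapping
            batch = []
        }
    }
    if (batch.length) await processGroupBatch(batch)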
