diff --git a/.reuse/dep5 b/.reuse/dep5
deleted file mode 100644
index 632f58c..0000000
--- a/.reuse/dep5
+++ /dev/null
@@ -1,29 +0,0 @@
-Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
-Upstream-Name: commerce-migration-toolkit
-Upstream-Contact: cmt-technical-user@sap.com
-Source: https://github.com/sap-samples/commerce-migration-toolkit
-Disclaimer: The code in this project may include calls to APIs (“API Calls”) of
- SAP or third-party products or services developed outside of this project
- (“External Products”).
- “APIs” means application programming interfaces, as well as their respective
- specifications and implementing code that allows software to communicate with
- other software.
- API Calls to External Products are not licensed under the open source license
- that governs this project. The use of such API Calls and related External
- Products are subject to applicable additional agreements with the relevant
- provider of the External Products. In no event shall the open source license
- that governs this project grant any rights in or to any External Products,or
- alter, expand or supersede any terms of the applicable additional agreements.
- If you have a valid license agreement with SAP for the use of a particular SAP
- External Product, then you may make use of any API Calls included in this
- project’s code for that SAP External Product, subject to the terms of such
- license agreement. If you do not have a valid license agreement for the use of
- a particular SAP External Product, then you may only make use of any API Calls
- in this project for that SAP External Product for your internal, non-productive
- and non-commercial test and evaluation of such API Calls. Nothing herein grants
- you any rights to use or access any SAP External Product, or provide any third
- parties the right to use of access any SAP External Product, through API Calls.
-
-Files: *
-Copyright: 2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors
-License: Apache-2.0
diff --git a/README.md b/README.md
index c09b874..5520650 100644
--- a/README.md
+++ b/README.md
@@ -8,115 +8,9 @@
>
> Please refer to the new repository for more details: https://github.com/SAP/sap-commerce-db-sync
>
----
-The Commerce Migration Toolkit is a `self-service tool` that allows `SAP Customers / Partners` to migrate a `SAP Commerce on-premise installation to SAP Commerce Cloud (ccv2)`.
+
-The implementation of this tool is based on regular SAP Commerce extensions. Adding the extensions to your code base in the cloud subscription will provide you with the functionality to migrate the source database, and, paired with the self-service media process described on [this CXWorks article](https://www.sap.com/cxworks/article/508629017/migrate_to_sap_commerce_cloud_migrate_media_with_azcopy) allows to self-service the migration of an on-premise customer environment to their cloud subscription.
-
-# Getting started
-
-* [Prerequisites](commercemigration/resources/doc/prerequisites/PREREQUISITES-GUIDE.md) Carefully read the prerequisites and make sure you meet the requirements before you start the migration. Some of the prerequisites may require code adaptations or database cleanup tasks to prepare for the migration. Therefore, ensure you reserve sufficient time so that you can adhere to your project plan.
-* [Security Guide](commercemigration/resources/doc/security/SECURITY-GUIDE.md) A data migration typically features sensitive data and uses delicate system access. Make sure you have read the Security Guide before you proceed with any migration activities and thereby acknowledge the security recommendations stated in the guide.
-* [User Guide](commercemigration/resources/doc/user/USER-GUIDE.md) When ready to start the migration activities, follow the instructions in the User Guide to trigger the data migration.
-* [Performance Guide](commercemigration/resources/doc/performance/PERFORMANCE-GUIDE.md) Performance is crucial for any data migration, not only for large databases but also generally to reduce the time of the cut-over window. The performance guide explains the basic concept of performance tuning and also provides benchmarks that will give you an impression of how to estimate the cutover time window.
-* [Configuration Guide](commercemigration/resources/doc/configuration/CONFIGURATION-GUIDE.md) The extensions ship with a default configuration that may need to be adjusted depending on the desired behaviour. This guide explains how different features and behaviours can be configured.
-* [Developer Guide](commercemigration/resources/doc/developer/DEVELOPER-GUIDE.md) If you want to contribute please read this guide.
-* [Troubleshooting Guide](commercemigration/resources/doc/troubleshooting/TROUBLESHOOTING-GUIDE.md) A collection of common problems and how to tackle them.
-* [CMT Hints](commercemigration/resources/doc/user/CMT-Hints.md) A collection of good practices and performance problems with CMT.
-
-# System Landscape
-
-
-
-# Features Overview
-
-* Database Connectivity
- * Supported source databases: Oracle, MySQL, HANA, MSSQL
- * UI based connection validation
-* Schema Differences
- * UI based schema differences detector
- * Automated target schema adaption
- * Table creation / removal
- * Column creation / removal
- * Configurable behaviour
-* Data Copy
- * UI based copy trigger
- * Configurable target table truncation
- * Configurable index disabling
- * Read/write batching with configurable sizes
- * Copy parallelization
- * Cluster awareness
- * Column exclusions
- * Table exclusions/inclusions
- * Incremental mode (delta)
- * Custom tables
- * Staged approach using table prefix
-* Reporting / Audit
- * Automated reporting for schema changes
- * Automated reporting for copy processes
- * Stored on blob storage
- * Logging of all actions triggered from the UI
-
-
-# Compatibility
-
- * SAP Commerce (>=1811)
- * Tested with source databases:
- * MSSQL (2019)
- * MySQL (5.7)
- * Oracle (XE 11g)
- * HANA (express 2.0)
- * Target database (MSSQL)
-
-
-# Limitations
-
- * Database Features
- * The tool only copies over table data. Any other database features like 'views', stored procedures', 'synonyms' ... will be ignored.
- * Only the database vendors mentioned in the Compatibility section are supported
- * The target database must always be MSSQL (SQL Server)
-
-# Demo Video
-
-https://sapvideoa35699dc5.hana.ondemand.com/?entry_id=1_gxduwrl3
-
-# Get the Code and Upgradability
-
-For instructions on how to get this code please refer to the official GitHub documentation:
-
-https://docs.github.com/en/github/getting-started-with-github/using-github
-
-Typical steps:
-
-1. Clone or download the repository
-2. Move the extensions to your commerce installation (if not already there)
-3. Commit the extensions to your own git repository
-4. To upgrade to a new version repeat the steps above.
-
-The detailed installation and configuration steps can be found in the [User Guide](commercemigration/resources/doc/user/USER-GUIDE.md)
-
-
-Since the code is under your control then, this allows you to make changes to the extensions if needed.
-
-Alternative:
-
-Use git submodule and point to this repository. For more information refer to:
-
-https://git-scm.com/docs/git-submodule
-
-SAP Commerce Cloud supports git submodule and allows you to fetch the repository upon build time, so there is no need to copy the extensions to your own repository.
-To upgrade make sure your submodule points to the desired release / commit
-
-
-# How to Obtain Support
-
-This repository is provided "as-is"; no support is available.
-
-Find more information about SAP Commerce Cloud Setup on our [help site](https://help.sap.com/viewer/product/SAP_COMMERCE_CLOUD_PUBLIC_CLOUD/LATEST/en-US).
-
-With regards CMT, access to the database for customers is and will not be possible in the future and SAP does not provide any additional support on CMT in particular. Support can be bought as paid engagement from SAP Consulting only.
-
-# License
+## License
Copyright (c) 2021 SAP SE or an SAP affiliate company. All rights reserved. This project is licensed under the Apache Software License, version 2.0 except as noted otherwise in the [LICENSE file](LICENSES/Apache-2.0.txt).
diff --git a/REUSE.toml b/REUSE.toml
new file mode 100644
index 0000000..8fa20e8
--- /dev/null
+++ b/REUSE.toml
@@ -0,0 +1,11 @@
+version = 1
+SPDX-PackageName = "commerce-migration-toolkit"
+SPDX-PackageSupplier = "cmt-technical-user@sap.com"
+SPDX-PackageDownloadLocation = "https://github.com/sap-samples/commerce-migration-toolkit"
+SPDX-PackageComment = "The code in this project may include calls to APIs (“API Calls”) of\n SAP or third-party products or services developed outside of this project\n (“External Products”).\n “APIs” means application programming interfaces, as well as their respective\n specifications and implementing code that allows software to communicate with\n other software.\n API Calls to External Products are not licensed under the open source license\n that governs this project. The use of such API Calls and related External\n Products are subject to applicable additional agreements with the relevant\n provider of the External Products. In no event shall the open source license\n that governs this project grant any rights in or to any External Products, or\n alter, expand or supersede any terms of the applicable additional agreements.\n If you have a valid license agreement with SAP for the use of a particular SAP\n External Product, then you may make use of any API Calls included in this\n project’s code for that SAP External Product, subject to the terms of such\n license agreement. If you do not have a valid license agreement for the use of\n a particular SAP External Product, then you may only make use of any API Calls\n in this project for that SAP External Product for your internal, non-productive\n and non-commercial test and evaluation of such API Calls. Nothing herein grants\n you any rights to use or access any SAP External Product, or provide any third\n parties the right to use or access any SAP External Product, through API Calls."
+
+[[annotations]]
+path = "**"
+precedence = "aggregate"
+SPDX-FileCopyrightText = "2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors"
+SPDX-License-Identifier = "Apache-2.0"
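The new REUSE.toml supersedes the deleted .reuse/dep5; newer revisions of the REUSE specification deprecate the DEP5 format in favour of TOML metadata. A minimal sketch of how the new file can be sanity-checked (assuming Python 3.11+ for the stdlib tomllib module, run from the repository root; not part of this change):

```python
# Hypothetical check of the new REUSE metadata (not part of this change).
# Assumes Python 3.11+, whose standard library ships the tomllib parser.
import tomllib

with open("REUSE.toml", "rb") as f:  # tomllib requires a binary file handle
    meta = tomllib.load(f)

print(meta["SPDX-PackageName"])  # -> commerce-migration-toolkit
for annotation in meta.get("annotations", []):
    # Each [[annotations]] table maps a path glob to copyright/license info.
    print(annotation["path"], annotation["SPDX-License-Identifier"])
```

For an authoritative compliance check, the reference reuse tool can be run against the repository (reuse lint).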
diff --git a/commercemigration/.classpath b/commercemigration/.classpath
deleted file mode 100644
index 0c0dcb8..0000000
--- a/commercemigration/.classpath
+++ /dev/null
@@ -1,15 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/commercemigration/.project b/commercemigration/.project
deleted file mode 100644
index faca1a8..0000000
--- a/commercemigration/.project
+++ /dev/null
@@ -1,23 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<projectDescription>
-	<name>commercemigration</name>
-	<comment></comment>
-	<projects>
-	</projects>
-	<buildSpec>
-		<buildCommand>
-			<name>org.eclipse.jdt.core.javabuilder</name>
-			<arguments>
-			</arguments>
-		</buildCommand>
-		<buildCommand>
-			<name>org.springframework.ide.eclipse.core.springbuilder</name>
-			<arguments>
-			</arguments>
-		</buildCommand>
-	</buildSpec>
-	<natures>
-		<nature>org.springframework.ide.eclipse.core.springnature</nature>
-		<nature>org.eclipse.jdt.core.javanature</nature>
-	</natures>
-</projectDescription>
diff --git a/commercemigration/.settings/org.eclipse.jdt.core.prefs b/commercemigration/.settings/org.eclipse.jdt.core.prefs
deleted file mode 100644
index dc0b35c..0000000
--- a/commercemigration/.settings/org.eclipse.jdt.core.prefs
+++ /dev/null
@@ -1,394 +0,0 @@
-eclipse.preferences.version=1
-org.eclipse.jdt.core.codeComplete.argumentPrefixes=
-org.eclipse.jdt.core.codeComplete.argumentSuffixes=
-org.eclipse.jdt.core.codeComplete.fieldPrefixes=
-org.eclipse.jdt.core.codeComplete.fieldSuffixes=
-org.eclipse.jdt.core.codeComplete.localPrefixes=
-org.eclipse.jdt.core.codeComplete.localSuffixes=
-org.eclipse.jdt.core.codeComplete.staticFieldPrefixes=
-org.eclipse.jdt.core.codeComplete.staticFieldSuffixes=
-org.eclipse.jdt.core.compiler.annotation.inheritNullAnnotations=disabled
-org.eclipse.jdt.core.compiler.annotation.missingNonNullByDefaultAnnotation=ignore
-org.eclipse.jdt.core.compiler.annotation.nonnull=org.eclipse.jdt.annotation.NonNull
-org.eclipse.jdt.core.compiler.annotation.nonnull.secondary=
-org.eclipse.jdt.core.compiler.annotation.nonnullbydefault=org.eclipse.jdt.annotation.NonNullByDefault
-org.eclipse.jdt.core.compiler.annotation.nonnullbydefault.secondary=
-org.eclipse.jdt.core.compiler.annotation.nullable=org.eclipse.jdt.annotation.Nullable
-org.eclipse.jdt.core.compiler.annotation.nullable.secondary=
-org.eclipse.jdt.core.compiler.annotation.nullanalysis=disabled
-org.eclipse.jdt.core.compiler.codegen.inlineJsrBytecode=enabled
-org.eclipse.jdt.core.compiler.codegen.methodParameters=do not generate
-org.eclipse.jdt.core.compiler.codegen.targetPlatform=1.8
-org.eclipse.jdt.core.compiler.codegen.unusedLocal=preserve
-org.eclipse.jdt.core.compiler.compliance=1.8
-org.eclipse.jdt.core.compiler.debug.lineNumber=generate
-org.eclipse.jdt.core.compiler.debug.localVariable=generate
-org.eclipse.jdt.core.compiler.debug.sourceFile=generate
-org.eclipse.jdt.core.compiler.doc.comment.support=enabled
-org.eclipse.jdt.core.compiler.problem.annotationSuperInterface=ignore
-org.eclipse.jdt.core.compiler.problem.assertIdentifier=error
-org.eclipse.jdt.core.compiler.problem.autoboxing=ignore
-org.eclipse.jdt.core.compiler.problem.comparingIdentical=ignore
-org.eclipse.jdt.core.compiler.problem.deadCode=ignore
-org.eclipse.jdt.core.compiler.problem.deprecation=ignore
-org.eclipse.jdt.core.compiler.problem.deprecationInDeprecatedCode=enabled
-org.eclipse.jdt.core.compiler.problem.deprecationWhenOverridingDeprecatedMethod=enabled
-org.eclipse.jdt.core.compiler.problem.discouragedReference=ignore
-org.eclipse.jdt.core.compiler.problem.emptyStatement=ignore
-org.eclipse.jdt.core.compiler.problem.enumIdentifier=error
-org.eclipse.jdt.core.compiler.problem.explicitlyClosedAutoCloseable=ignore
-org.eclipse.jdt.core.compiler.problem.fallthroughCase=ignore
-org.eclipse.jdt.core.compiler.problem.fatalOptionalError=enabled
-org.eclipse.jdt.core.compiler.problem.fieldHiding=ignore
-org.eclipse.jdt.core.compiler.problem.finalParameterBound=ignore
-org.eclipse.jdt.core.compiler.problem.finallyBlockNotCompletingNormally=ignore
-org.eclipse.jdt.core.compiler.problem.forbiddenReference=ignore
-org.eclipse.jdt.core.compiler.problem.hiddenCatchBlock=ignore
-org.eclipse.jdt.core.compiler.problem.includeNullInfoFromAsserts=disabled
-org.eclipse.jdt.core.compiler.problem.incompatibleNonInheritedInterfaceMethod=ignore
-org.eclipse.jdt.core.compiler.problem.incompleteEnumSwitch=ignore
-org.eclipse.jdt.core.compiler.problem.indirectStaticAccess=ignore
-org.eclipse.jdt.core.compiler.problem.invalidJavadoc=ignore
-org.eclipse.jdt.core.compiler.problem.invalidJavadocTags=enabled
-org.eclipse.jdt.core.compiler.problem.invalidJavadocTagsDeprecatedRef=disabled
-org.eclipse.jdt.core.compiler.problem.invalidJavadocTagsNotVisibleRef=disabled
-org.eclipse.jdt.core.compiler.problem.invalidJavadocTagsVisibility=private
-org.eclipse.jdt.core.compiler.problem.localVariableHiding=ignore
-org.eclipse.jdt.core.compiler.problem.methodWithConstructorName=ignore
-org.eclipse.jdt.core.compiler.problem.missingDefaultCase=ignore
-org.eclipse.jdt.core.compiler.problem.missingDeprecatedAnnotation=ignore
-org.eclipse.jdt.core.compiler.problem.missingEnumCaseDespiteDefault=disabled
-org.eclipse.jdt.core.compiler.problem.missingHashCodeMethod=ignore
-org.eclipse.jdt.core.compiler.problem.missingJavadocComments=ignore
-org.eclipse.jdt.core.compiler.problem.missingJavadocCommentsOverriding=disabled
-org.eclipse.jdt.core.compiler.problem.missingJavadocCommentsVisibility=public
-org.eclipse.jdt.core.compiler.problem.missingJavadocTagDescription=return_tag
-org.eclipse.jdt.core.compiler.problem.missingJavadocTags=ignore
-org.eclipse.jdt.core.compiler.problem.missingJavadocTagsMethodTypeParameters=disabled
-org.eclipse.jdt.core.compiler.problem.missingJavadocTagsOverriding=disabled
-org.eclipse.jdt.core.compiler.problem.missingJavadocTagsVisibility=public
-org.eclipse.jdt.core.compiler.problem.missingOverrideAnnotation=ignore
-org.eclipse.jdt.core.compiler.problem.missingOverrideAnnotationForInterfaceMethodImplementation=enabled
-org.eclipse.jdt.core.compiler.problem.missingSerialVersion=ignore
-org.eclipse.jdt.core.compiler.problem.missingSynchronizedOnInheritedMethod=ignore
-org.eclipse.jdt.core.compiler.problem.noEffectAssignment=ignore
-org.eclipse.jdt.core.compiler.problem.noImplicitStringConversion=ignore
-org.eclipse.jdt.core.compiler.problem.nonExternalizedStringLiteral=ignore
-org.eclipse.jdt.core.compiler.problem.nonnullParameterAnnotationDropped=ignore
-org.eclipse.jdt.core.compiler.problem.nonnullTypeVariableFromLegacyInvocation=ignore
-org.eclipse.jdt.core.compiler.problem.nullAnnotationInferenceConflict=ignore
-org.eclipse.jdt.core.compiler.problem.nullReference=ignore
-org.eclipse.jdt.core.compiler.problem.nullSpecViolation=ignore
-org.eclipse.jdt.core.compiler.problem.nullUncheckedConversion=ignore
-org.eclipse.jdt.core.compiler.problem.overridingPackageDefaultMethod=ignore
-org.eclipse.jdt.core.compiler.problem.parameterAssignment=ignore
-org.eclipse.jdt.core.compiler.problem.pessimisticNullAnalysisForFreeTypeVariables=ignore
-org.eclipse.jdt.core.compiler.problem.possibleAccidentalBooleanAssignment=ignore
-org.eclipse.jdt.core.compiler.problem.potentialNullReference=ignore
-org.eclipse.jdt.core.compiler.problem.potentiallyUnclosedCloseable=ignore
-org.eclipse.jdt.core.compiler.problem.rawTypeReference=ignore
-org.eclipse.jdt.core.compiler.problem.redundantNullAnnotation=ignore
-org.eclipse.jdt.core.compiler.problem.redundantNullCheck=ignore
-org.eclipse.jdt.core.compiler.problem.redundantSpecificationOfTypeArguments=ignore
-org.eclipse.jdt.core.compiler.problem.redundantSuperinterface=ignore
-org.eclipse.jdt.core.compiler.problem.reportMethodCanBePotentiallyStatic=ignore
-org.eclipse.jdt.core.compiler.problem.reportMethodCanBeStatic=ignore
-org.eclipse.jdt.core.compiler.problem.specialParameterHidingField=disabled
-org.eclipse.jdt.core.compiler.problem.staticAccessReceiver=ignore
-org.eclipse.jdt.core.compiler.problem.suppressOptionalErrors=disabled
-org.eclipse.jdt.core.compiler.problem.suppressWarnings=enabled
-org.eclipse.jdt.core.compiler.problem.syntacticNullAnalysisForFields=disabled
-org.eclipse.jdt.core.compiler.problem.syntheticAccessEmulation=ignore
-org.eclipse.jdt.core.compiler.problem.typeParameterHiding=ignore
-org.eclipse.jdt.core.compiler.problem.unavoidableGenericTypeProblems=enabled
-org.eclipse.jdt.core.compiler.problem.uncheckedTypeOperation=ignore
-org.eclipse.jdt.core.compiler.problem.unclosedCloseable=ignore
-org.eclipse.jdt.core.compiler.problem.undocumentedEmptyBlock=ignore
-org.eclipse.jdt.core.compiler.problem.unhandledWarningToken=ignore
-org.eclipse.jdt.core.compiler.problem.unlikelyCollectionMethodArgumentType=ignore
-org.eclipse.jdt.core.compiler.problem.unlikelyCollectionMethodArgumentTypeStrict=disabled
-org.eclipse.jdt.core.compiler.problem.unlikelyEqualsArgumentType=ignore
-org.eclipse.jdt.core.compiler.problem.unnecessaryElse=ignore
-org.eclipse.jdt.core.compiler.problem.unnecessaryTypeCheck=ignore
-org.eclipse.jdt.core.compiler.problem.unqualifiedFieldAccess=ignore
-org.eclipse.jdt.core.compiler.problem.unsafeTypeOperation=ignore
-org.eclipse.jdt.core.compiler.problem.unusedDeclaredThrownException=ignore
-org.eclipse.jdt.core.compiler.problem.unusedDeclaredThrownExceptionExemptExceptionAndThrowable=enabled
-org.eclipse.jdt.core.compiler.problem.unusedDeclaredThrownExceptionIncludeDocCommentReference=enabled
-org.eclipse.jdt.core.compiler.problem.unusedDeclaredThrownExceptionWhenOverriding=disabled
-org.eclipse.jdt.core.compiler.problem.unusedExceptionParameter=ignore
-org.eclipse.jdt.core.compiler.problem.unusedImport=ignore
-org.eclipse.jdt.core.compiler.problem.unusedLabel=ignore
-org.eclipse.jdt.core.compiler.problem.unusedLocal=ignore
-org.eclipse.jdt.core.compiler.problem.unusedObjectAllocation=ignore
-org.eclipse.jdt.core.compiler.problem.unusedParameter=ignore
-org.eclipse.jdt.core.compiler.problem.unusedParameterIncludeDocCommentReference=enabled
-org.eclipse.jdt.core.compiler.problem.unusedParameterWhenImplementingAbstract=disabled
-org.eclipse.jdt.core.compiler.problem.unusedParameterWhenOverridingConcrete=disabled
-org.eclipse.jdt.core.compiler.problem.unusedPrivateMember=ignore
-org.eclipse.jdt.core.compiler.problem.unusedTypeParameter=ignore
-org.eclipse.jdt.core.compiler.problem.unusedWarningToken=ignore
-org.eclipse.jdt.core.compiler.problem.varargsArgumentNeedCast=ignore
-org.eclipse.jdt.core.compiler.source=1.8
-org.eclipse.jdt.core.compiler.taskCaseSensitive=enabled
-org.eclipse.jdt.core.compiler.taskPriorities=
-org.eclipse.jdt.core.compiler.taskTags=
-org.eclipse.jdt.core.formatter.align_type_members_on_columns=false
-org.eclipse.jdt.core.formatter.alignment_for_arguments_in_allocation_expression=16
-org.eclipse.jdt.core.formatter.alignment_for_arguments_in_enum_constant=16
-org.eclipse.jdt.core.formatter.alignment_for_arguments_in_explicit_constructor_call=16
-org.eclipse.jdt.core.formatter.alignment_for_arguments_in_method_invocation=16
-org.eclipse.jdt.core.formatter.alignment_for_arguments_in_qualified_allocation_expression=16
-org.eclipse.jdt.core.formatter.alignment_for_assignment=0
-org.eclipse.jdt.core.formatter.alignment_for_binary_expression=16
-org.eclipse.jdt.core.formatter.alignment_for_compact_if=16
-org.eclipse.jdt.core.formatter.alignment_for_conditional_expression=80
-org.eclipse.jdt.core.formatter.alignment_for_enum_constants=0
-org.eclipse.jdt.core.formatter.alignment_for_expressions_in_array_initializer=16
-org.eclipse.jdt.core.formatter.alignment_for_multiple_fields=16
-org.eclipse.jdt.core.formatter.alignment_for_parameters_in_constructor_declaration=16
-org.eclipse.jdt.core.formatter.alignment_for_parameters_in_method_declaration=16
-org.eclipse.jdt.core.formatter.alignment_for_selector_in_method_invocation=16
-org.eclipse.jdt.core.formatter.alignment_for_superclass_in_type_declaration=16
-org.eclipse.jdt.core.formatter.alignment_for_superinterfaces_in_enum_declaration=16
-org.eclipse.jdt.core.formatter.alignment_for_superinterfaces_in_type_declaration=16
-org.eclipse.jdt.core.formatter.alignment_for_throws_clause_in_constructor_declaration=16
-org.eclipse.jdt.core.formatter.alignment_for_throws_clause_in_method_declaration=16
-org.eclipse.jdt.core.formatter.blank_lines_after_imports=2
-org.eclipse.jdt.core.formatter.blank_lines_after_package=1
-org.eclipse.jdt.core.formatter.blank_lines_before_field=0
-org.eclipse.jdt.core.formatter.blank_lines_before_first_class_body_declaration=0
-org.eclipse.jdt.core.formatter.blank_lines_before_imports=1
-org.eclipse.jdt.core.formatter.blank_lines_before_member_type=1
-org.eclipse.jdt.core.formatter.blank_lines_before_method=1
-org.eclipse.jdt.core.formatter.blank_lines_before_new_chunk=1
-org.eclipse.jdt.core.formatter.blank_lines_before_package=0
-org.eclipse.jdt.core.formatter.blank_lines_between_import_groups=1
-org.eclipse.jdt.core.formatter.blank_lines_between_type_declarations=1
-org.eclipse.jdt.core.formatter.brace_position_for_annotation_type_declaration=next_line
-org.eclipse.jdt.core.formatter.brace_position_for_anonymous_type_declaration=next_line
-org.eclipse.jdt.core.formatter.brace_position_for_array_initializer=next_line
-org.eclipse.jdt.core.formatter.brace_position_for_block=next_line
-org.eclipse.jdt.core.formatter.brace_position_for_block_in_case=next_line
-org.eclipse.jdt.core.formatter.brace_position_for_constructor_declaration=next_line
-org.eclipse.jdt.core.formatter.brace_position_for_enum_constant=next_line
-org.eclipse.jdt.core.formatter.brace_position_for_enum_declaration=next_line
-org.eclipse.jdt.core.formatter.brace_position_for_method_declaration=next_line
-org.eclipse.jdt.core.formatter.brace_position_for_switch=next_line
-org.eclipse.jdt.core.formatter.brace_position_for_type_declaration=next_line
-org.eclipse.jdt.core.formatter.comment.clear_blank_lines=true
-org.eclipse.jdt.core.formatter.comment.clear_blank_lines_in_block_comment=false
-org.eclipse.jdt.core.formatter.comment.clear_blank_lines_in_javadoc_comment=false
-org.eclipse.jdt.core.formatter.comment.format_block_comments=true
-org.eclipse.jdt.core.formatter.comment.format_comments=true
-org.eclipse.jdt.core.formatter.comment.format_header=false
-org.eclipse.jdt.core.formatter.comment.format_html=true
-org.eclipse.jdt.core.formatter.comment.format_javadoc_comments=true
-org.eclipse.jdt.core.formatter.comment.format_line_comments=false
-org.eclipse.jdt.core.formatter.comment.format_source_code=true
-org.eclipse.jdt.core.formatter.comment.indent_parameter_description=true
-org.eclipse.jdt.core.formatter.comment.indent_root_tags=true
-org.eclipse.jdt.core.formatter.comment.insert_new_line_before_root_tags=insert
-org.eclipse.jdt.core.formatter.comment.insert_new_line_for_parameter=insert
-org.eclipse.jdt.core.formatter.comment.line_length=120
-org.eclipse.jdt.core.formatter.compact_else_if=true
-org.eclipse.jdt.core.formatter.continuation_indentation=2
-org.eclipse.jdt.core.formatter.continuation_indentation_for_array_initializer=2
-org.eclipse.jdt.core.formatter.format_guardian_clause_on_one_line=false
-org.eclipse.jdt.core.formatter.indent_body_declarations_compare_to_annotation_declaration_header=true
-org.eclipse.jdt.core.formatter.indent_body_declarations_compare_to_enum_constant_header=true
-org.eclipse.jdt.core.formatter.indent_body_declarations_compare_to_enum_declaration_header=true
-org.eclipse.jdt.core.formatter.indent_body_declarations_compare_to_type_header=true
-org.eclipse.jdt.core.formatter.indent_breaks_compare_to_cases=true
-org.eclipse.jdt.core.formatter.indent_empty_lines=false
-org.eclipse.jdt.core.formatter.indent_statements_compare_to_block=true
-org.eclipse.jdt.core.formatter.indent_statements_compare_to_body=true
-org.eclipse.jdt.core.formatter.indent_switchstatements_compare_to_cases=true
-org.eclipse.jdt.core.formatter.indent_switchstatements_compare_to_switch=true
-org.eclipse.jdt.core.formatter.indentation.size=3
-org.eclipse.jdt.core.formatter.insert_new_line_after_annotation=insert
-org.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_local_variable=insert
-org.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_member=insert
-org.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_parameter=do not insert
-org.eclipse.jdt.core.formatter.insert_new_line_after_opening_brace_in_array_initializer=do not insert
-org.eclipse.jdt.core.formatter.insert_new_line_at_end_of_file_if_missing=do not insert
-org.eclipse.jdt.core.formatter.insert_new_line_before_catch_in_try_statement=insert
-org.eclipse.jdt.core.formatter.insert_new_line_before_closing_brace_in_array_initializer=do not insert
-org.eclipse.jdt.core.formatter.insert_new_line_before_else_in_if_statement=insert
-org.eclipse.jdt.core.formatter.insert_new_line_before_finally_in_try_statement=insert
-org.eclipse.jdt.core.formatter.insert_new_line_before_while_in_do_statement=insert
-org.eclipse.jdt.core.formatter.insert_new_line_in_empty_annotation_declaration=insert
-org.eclipse.jdt.core.formatter.insert_new_line_in_empty_anonymous_type_declaration=do not insert
-org.eclipse.jdt.core.formatter.insert_new_line_in_empty_block=insert
-org.eclipse.jdt.core.formatter.insert_new_line_in_empty_enum_constant=insert
-org.eclipse.jdt.core.formatter.insert_new_line_in_empty_enum_declaration=insert
-org.eclipse.jdt.core.formatter.insert_new_line_in_empty_method_body=insert
-org.eclipse.jdt.core.formatter.insert_new_line_in_empty_type_declaration=insert
-org.eclipse.jdt.core.formatter.insert_space_after_and_in_type_parameter=insert
-org.eclipse.jdt.core.formatter.insert_space_after_assignment_operator=insert
-org.eclipse.jdt.core.formatter.insert_space_after_at_in_annotation=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_at_in_annotation_type_declaration=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_binary_operator=insert
-org.eclipse.jdt.core.formatter.insert_space_after_closing_angle_bracket_in_type_arguments=insert
-org.eclipse.jdt.core.formatter.insert_space_after_closing_angle_bracket_in_type_parameters=insert
-org.eclipse.jdt.core.formatter.insert_space_after_closing_brace_in_block=insert
-org.eclipse.jdt.core.formatter.insert_space_after_closing_paren_in_cast=insert
-org.eclipse.jdt.core.formatter.insert_space_after_colon_in_assert=insert
-org.eclipse.jdt.core.formatter.insert_space_after_colon_in_case=insert
-org.eclipse.jdt.core.formatter.insert_space_after_colon_in_conditional=insert
-org.eclipse.jdt.core.formatter.insert_space_after_colon_in_for=insert
-org.eclipse.jdt.core.formatter.insert_space_after_colon_in_labeled_statement=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_allocation_expression=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_annotation=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_array_initializer=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_constructor_declaration_parameters=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_constructor_declaration_throws=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_enum_constant_arguments=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_enum_declarations=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_explicitconstructorcall_arguments=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_for_increments=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_for_inits=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_method_declaration_parameters=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_method_declaration_throws=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_method_invocation_arguments=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_multiple_field_declarations=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_multiple_local_declarations=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_parameterized_type_reference=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_superinterfaces=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_type_arguments=insert
-org.eclipse.jdt.core.formatter.insert_space_after_comma_in_type_parameters=insert
-org.eclipse.jdt.core.formatter.insert_space_after_ellipsis=insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_angle_bracket_in_parameterized_type_reference=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_angle_bracket_in_type_arguments=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_angle_bracket_in_type_parameters=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_brace_in_array_initializer=insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_bracket_in_array_allocation_expression=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_bracket_in_array_reference=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_annotation=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_cast=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_catch=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_constructor_declaration=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_enum_constant=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_for=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_if=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_method_declaration=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_method_invocation=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_parenthesized_expression=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_switch=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_synchronized=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_while=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_postfix_operator=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_prefix_operator=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_question_in_conditional=insert
-org.eclipse.jdt.core.formatter.insert_space_after_question_in_wildcard=do not insert
-org.eclipse.jdt.core.formatter.insert_space_after_semicolon_in_for=insert
-org.eclipse.jdt.core.formatter.insert_space_after_unary_operator=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_and_in_type_parameter=insert
-org.eclipse.jdt.core.formatter.insert_space_before_assignment_operator=insert
-org.eclipse.jdt.core.formatter.insert_space_before_at_in_annotation_type_declaration=insert
-org.eclipse.jdt.core.formatter.insert_space_before_binary_operator=insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_angle_bracket_in_parameterized_type_reference=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_angle_bracket_in_type_arguments=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_angle_bracket_in_type_parameters=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_brace_in_array_initializer=insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_bracket_in_array_allocation_expression=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_bracket_in_array_reference=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_annotation=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_cast=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_catch=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_constructor_declaration=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_enum_constant=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_for=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_if=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_method_declaration=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_method_invocation=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_parenthesized_expression=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_switch=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_synchronized=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_while=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_colon_in_assert=insert
-org.eclipse.jdt.core.formatter.insert_space_before_colon_in_case=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_colon_in_conditional=insert
-org.eclipse.jdt.core.formatter.insert_space_before_colon_in_default=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_colon_in_for=insert
-org.eclipse.jdt.core.formatter.insert_space_before_colon_in_labeled_statement=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_allocation_expression=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_annotation=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_array_initializer=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_constructor_declaration_parameters=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_constructor_declaration_throws=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_enum_constant_arguments=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_enum_declarations=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_explicitconstructorcall_arguments=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_for_increments=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_for_inits=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_method_declaration_parameters=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_method_declaration_throws=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_method_invocation_arguments=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_multiple_field_declarations=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_multiple_local_declarations=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_parameterized_type_reference=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_superinterfaces=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_type_arguments=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_comma_in_type_parameters=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_ellipsis=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_angle_bracket_in_parameterized_type_reference=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_angle_bracket_in_type_arguments=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_angle_bracket_in_type_parameters=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_annotation_type_declaration=insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_anonymous_type_declaration=insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_array_initializer=insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_block=insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_constructor_declaration=insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_enum_constant=insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_enum_declaration=insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_method_declaration=insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_switch=insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_type_declaration=insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_bracket_in_array_allocation_expression=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_bracket_in_array_reference=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_bracket_in_array_type_reference=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_annotation=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_annotation_type_member_declaration=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_catch=insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_constructor_declaration=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_enum_constant=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_for=insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_if=insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_method_declaration=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_method_invocation=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_parenthesized_expression=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_switch=insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_synchronized=insert
-org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_while=insert
-org.eclipse.jdt.core.formatter.insert_space_before_parenthesized_expression_in_return=insert
-org.eclipse.jdt.core.formatter.insert_space_before_parenthesized_expression_in_throw=insert
-org.eclipse.jdt.core.formatter.insert_space_before_postfix_operator=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_prefix_operator=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_question_in_conditional=insert
-org.eclipse.jdt.core.formatter.insert_space_before_question_in_wildcard=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_semicolon=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_semicolon_in_for=do not insert
-org.eclipse.jdt.core.formatter.insert_space_before_unary_operator=do not insert
-org.eclipse.jdt.core.formatter.insert_space_between_brackets_in_array_type_reference=do not insert
-org.eclipse.jdt.core.formatter.insert_space_between_empty_braces_in_array_initializer=do not insert
-org.eclipse.jdt.core.formatter.insert_space_between_empty_brackets_in_array_allocation_expression=do not insert
-org.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_annotation_type_member_declaration=do not insert
-org.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_constructor_declaration=do not insert
-org.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_enum_constant=do not insert
-org.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_method_declaration=do not insert
-org.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_method_invocation=do not insert
-org.eclipse.jdt.core.formatter.keep_else_statement_on_same_line=false
-org.eclipse.jdt.core.formatter.keep_empty_array_initializer_on_one_line=true
-org.eclipse.jdt.core.formatter.keep_imple_if_on_one_line=false
-org.eclipse.jdt.core.formatter.keep_then_statement_on_same_line=false
-org.eclipse.jdt.core.formatter.lineSplit=130
-org.eclipse.jdt.core.formatter.never_indent_block_comments_on_first_column=false
-org.eclipse.jdt.core.formatter.never_indent_line_comments_on_first_column=false
-org.eclipse.jdt.core.formatter.number_of_blank_lines_at_beginning_of_method_body=0
-org.eclipse.jdt.core.formatter.number_of_empty_lines_to_preserve=50
-org.eclipse.jdt.core.formatter.put_empty_statement_on_new_line=true
-org.eclipse.jdt.core.formatter.tabulation.char=mixed
-org.eclipse.jdt.core.formatter.tabulation.size=3
-org.eclipse.jdt.core.formatter.use_tabs_only_for_leading_indentations=false
-org.eclipse.jdt.core.formatter.wrap_before_binary_operator=true
diff --git a/commercemigration/.settings/org.eclipse.jdt.ui.prefs b/commercemigration/.settings/org.eclipse.jdt.ui.prefs
deleted file mode 100644
index 50f3889..0000000
--- a/commercemigration/.settings/org.eclipse.jdt.ui.prefs
+++ /dev/null
@@ -1,75 +0,0 @@
-#Tue Feb 03 16:09:22 CET 2009
-comment_clear_blank_lines=true
-comment_format_comments=true
-comment_format_header=false
-comment_format_html=true
-comment_format_source_code=true
-comment_indent_parameter_description=true
-comment_indent_root_tags=true
-comment_line_length=160
-comment_new_line_for_parameter=false
-comment_separate_root_tags=true
-eclipse.preferences.version=1
-editor_save_participant_org.eclipse.jdt.ui.postsavelistener.cleanup=true
-formatter_settings_version=11
-org.eclipse.jdt.ui.exception.name=e
-org.eclipse.jdt.ui.gettersetter.use.is=true
-org.eclipse.jdt.ui.ignorelowercasenames=true
-org.eclipse.jdt.ui.importorder=de.hybris;java;javax;org;com;de;
-org.eclipse.jdt.ui.javadoc=true
-org.eclipse.jdt.ui.keywordthis=false
-org.eclipse.jdt.ui.ondemandthreshold=50
-org.eclipse.jdt.ui.overrideannotation=true
-org.eclipse.jdt.ui.staticondemandthreshold=50
-org.eclipse.jdt.ui.text.custom_code_templates=/**\n * @return the ${bare_field_name}\n *//**\n * @param ${param} the ${bare_field_name} to set\n *//**\n * \n *//*\n * [y] hybris Platform\n * \n * Copyright (c) 2000-${year} SAP SE\n * All rights reserved.\n * \n * This software is the confidential and proprietary information of SAP \n * Hybris ("Confidential Information"). You shall not disclose such \n * Confidential Information and shall use it only in accordance with the \n * terms of the license agreement you entered into with SAP Hybris.\n *//**\n *\n *//**\n * \n *//**\n * \n *//**\n * \n */${filecomment}\n${package_declaration}\n\n${typecomment}\n${type_declaration}\n\n\n\n// ${todo} Auto-generated catch block\n${exception_var}.printStackTrace();// ${todo} Auto-generated method stub\n${body_statement}${body_statement}\n// ${todo} Auto-generated constructor stubreturn ${field};${field} \= ${param};/**\n * @return the ${bare_field_name}\n *//**\n * @param ${param} the ${bare_field_name} to set\n *//**\n * \n *//**\n * \n *//**\n *\n *//**\n * \n *//**\n * \n *//**\n * \n */${filecomment}\n${package_declaration}\n\n${typecomment}\n${type_declaration}\n// ${todo} Auto-generated catch block\n${exception_var}.printStackTrace();// ${todo} Auto-generated function stub\n${body_statement}${body_statement}\n// ${todo} Auto-generated constructor stubreturn ${field};${field} \= ${param};
-sp_cleanup.add_default_serial_version_id=true
-sp_cleanup.add_generated_serial_version_id=false
-sp_cleanup.add_missing_annotations=true
-sp_cleanup.add_missing_deprecated_annotations=true
-sp_cleanup.add_missing_methods=false
-sp_cleanup.add_missing_nls_tags=false
-sp_cleanup.add_missing_override_annotations=true
-sp_cleanup.add_serial_version_id=false
-sp_cleanup.always_use_blocks=true
-sp_cleanup.always_use_parentheses_in_expressions=false
-sp_cleanup.always_use_this_for_non_static_field_access=false
-sp_cleanup.always_use_this_for_non_static_method_access=false
-sp_cleanup.convert_to_enhanced_for_loop=false
-sp_cleanup.correct_indentation=false
-sp_cleanup.format_source_code=true
-sp_cleanup.format_source_code_changes_only=false
-sp_cleanup.make_local_variable_final=true
-sp_cleanup.make_parameters_final=true
-sp_cleanup.make_private_fields_final=true
-sp_cleanup.make_type_abstract_if_missing_method=false
-sp_cleanup.make_variable_declarations_final=true
-sp_cleanup.never_use_blocks=false
-sp_cleanup.never_use_parentheses_in_expressions=true
-sp_cleanup.on_save_use_additional_actions=true
-sp_cleanup.organize_imports=true
-sp_cleanup.qualify_static_field_accesses_with_declaring_class=false
-sp_cleanup.qualify_static_member_accesses_through_instances_with_declaring_class=true
-sp_cleanup.qualify_static_member_accesses_through_subtypes_with_declaring_class=true
-sp_cleanup.qualify_static_member_accesses_with_declaring_class=false
-sp_cleanup.qualify_static_method_accesses_with_declaring_class=false
-sp_cleanup.remove_private_constructors=true
-sp_cleanup.remove_trailing_whitespaces=true
-sp_cleanup.remove_trailing_whitespaces_all=true
-sp_cleanup.remove_trailing_whitespaces_ignore_empty=false
-sp_cleanup.remove_unnecessary_casts=true
-sp_cleanup.remove_unnecessary_nls_tags=false
-sp_cleanup.remove_unused_imports=true
-sp_cleanup.remove_unused_local_variables=false
-sp_cleanup.remove_unused_private_fields=true
-sp_cleanup.remove_unused_private_members=false
-sp_cleanup.remove_unused_private_methods=true
-sp_cleanup.remove_unused_private_types=true
-sp_cleanup.sort_members=false
-sp_cleanup.sort_members_all=false
-sp_cleanup.use_blocks=true
-sp_cleanup.use_blocks_only_for_return_and_throw=false
-sp_cleanup.use_parentheses_in_expressions=false
-sp_cleanup.use_this_for_non_static_field_access=false
-sp_cleanup.use_this_for_non_static_field_access_only_if_necessary=true
-sp_cleanup.use_this_for_non_static_method_access=false
-sp_cleanup.use_this_for_non_static_method_access_only_if_necessary=true
diff --git a/commercemigration/.settings/org.springframework.ide.eclipse.beans.core.prefs b/commercemigration/.settings/org.springframework.ide.eclipse.beans.core.prefs
deleted file mode 100644
index 7f9d66e..0000000
--- a/commercemigration/.settings/org.springframework.ide.eclipse.beans.core.prefs
+++ /dev/null
@@ -1,3 +0,0 @@
-#Fri May 15 12:07:57 CEST 2009
-eclipse.preferences.version=1
-org.springframework.ide.eclipse.beans.core.ignoreMissingNamespaceHandler=false
diff --git a/commercemigration/.settings/org.springframework.ide.eclipse.core.prefs b/commercemigration/.settings/org.springframework.ide.eclipse.core.prefs
deleted file mode 100644
index de7d63d..0000000
--- a/commercemigration/.settings/org.springframework.ide.eclipse.core.prefs
+++ /dev/null
@@ -1,52 +0,0 @@
-eclipse.preferences.version=1
-org.springframework.ide.eclipse.core.builders.enable.aopreferencemodelbuilder=true
-org.springframework.ide.eclipse.core.builders.enable.beanmetadatabuilder=true
-org.springframework.ide.eclipse.core.enable.project.preferences=false
-org.springframework.ide.eclipse.core.validator.enable.org.springframework.ide.eclipse.beans.core.beansvalidator=true
-org.springframework.ide.eclipse.core.validator.enable.org.springframework.ide.eclipse.bestpractices.beansvalidator=false
-org.springframework.ide.eclipse.core.validator.enable.org.springframework.ide.eclipse.core.springvalidator=false
-org.springframework.ide.eclipse.core.validator.enable.org.springframework.ide.eclipse.data.core.datavalidator=true
-org.springframework.ide.eclipse.core.validator.enable.org.springframework.ide.eclipse.webflow.core.validator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.beans.core.autowire.autowire-org.springframework.ide.eclipse.beans.core.beansvalidator=false
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.beans.core.beanAlias-org.springframework.ide.eclipse.beans.core.beansvalidator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.beans.core.beanClass-org.springframework.ide.eclipse.beans.core.beansvalidator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.beans.core.beanConstructorArgument-org.springframework.ide.eclipse.beans.core.beansvalidator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.beans.core.beanDefinition-org.springframework.ide.eclipse.beans.core.beansvalidator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.beans.core.beanDefinitionHolder-org.springframework.ide.eclipse.beans.core.beansvalidator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.beans.core.beanFactory-org.springframework.ide.eclipse.beans.core.beansvalidator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.beans.core.beanInitDestroyMethod-org.springframework.ide.eclipse.beans.core.beansvalidator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.beans.core.beanProperty-org.springframework.ide.eclipse.beans.core.beansvalidator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.beans.core.beanReference-org.springframework.ide.eclipse.beans.core.beansvalidator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.beans.core.methodOverride-org.springframework.ide.eclipse.beans.core.beansvalidator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.beans.core.parsingProblems-org.springframework.ide.eclipse.beans.core.beansvalidator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.beans.core.requiredProperty-org.springframework.ide.eclipse.beans.core.beansvalidator=false
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.beans.core.toolAnnotation-org.springframework.ide.eclipse.beans.core.beansvalidator=false
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.bestpractices.com.springsource.sts.bestpractices.AvoidDriverManagerDataSource-org.springframework.ide.eclipse.bestpractices.beansvalidator=false
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.bestpractices.com.springsource.sts.bestpractices.ImportElementsAtTopRulee-org.springframework.ide.eclipse.bestpractices.beansvalidator=false
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.bestpractices.com.springsource.sts.bestpractices.ParentBeanSpecifiesAbstractClassRule-org.springframework.ide.eclipse.bestpractices.beansvalidator=false
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.bestpractices.com.springsource.sts.bestpractices.RefElementRule-org.springframework.ide.eclipse.bestpractices.beansvalidator=false
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.bestpractices.com.springsource.sts.bestpractices.TooManyBeansInFileRule-org.springframework.ide.eclipse.bestpractices.beansvalidator=false
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.bestpractices.com.springsource.sts.bestpractices.UnnecessaryValueElementRule-org.springframework.ide.eclipse.bestpractices.beansvalidator=false
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.bestpractices.com.springsource.sts.bestpractices.UseBeanInheritance-org.springframework.ide.eclipse.bestpractices.beansvalidator=false
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.bestpractices.legacyxmlusage.jndiobjectfactory-org.springframework.ide.eclipse.bestpractices.beansvalidator=false
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.core.springClasspath-org.springframework.ide.eclipse.core.springvalidator=false
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.data.core.invalidDerivedQuery-org.springframework.ide.eclipse.data.core.datavalidator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.data.core.invalidParameterType-org.springframework.ide.eclipse.data.core.datavalidator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.webflow.core.validation.action-org.springframework.ide.eclipse.webflow.core.validator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.webflow.core.validation.actionstate-org.springframework.ide.eclipse.webflow.core.validator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.webflow.core.validation.attribute-org.springframework.ide.eclipse.webflow.core.validator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.webflow.core.validation.attributemapper-org.springframework.ide.eclipse.webflow.core.validator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.webflow.core.validation.beanaction-org.springframework.ide.eclipse.webflow.core.validator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.webflow.core.validation.evaluationaction-org.springframework.ide.eclipse.webflow.core.validator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.webflow.core.validation.evaluationresult-org.springframework.ide.eclipse.webflow.core.validator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.webflow.core.validation.exceptionhandler-org.springframework.ide.eclipse.webflow.core.validator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.webflow.core.validation.import-org.springframework.ide.eclipse.webflow.core.validator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.webflow.core.validation.inputattribute-org.springframework.ide.eclipse.webflow.core.validator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.webflow.core.validation.mapping-org.springframework.ide.eclipse.webflow.core.validator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.webflow.core.validation.outputattribute-org.springframework.ide.eclipse.webflow.core.validator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.webflow.core.validation.set-org.springframework.ide.eclipse.webflow.core.validator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.webflow.core.validation.state-org.springframework.ide.eclipse.webflow.core.validator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.webflow.core.validation.subflowstate-org.springframework.ide.eclipse.webflow.core.validator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.webflow.core.validation.transition-org.springframework.ide.eclipse.webflow.core.validator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.webflow.core.validation.variable-org.springframework.ide.eclipse.webflow.core.validator=true
-org.springframework.ide.eclipse.core.validator.rule.enable.org.springframework.ide.eclipse.webflow.core.validation.webflowstate-org.springframework.ide.eclipse.webflow.core.validator=true
diff --git a/commercemigration/.springBeans b/commercemigration/.springBeans
deleted file mode 100644
index 56b1f14..0000000
--- a/commercemigration/.springBeans
+++ /dev/null
@@ -1,15 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<beansProjectDescription>
-	<version>1</version>
-	<pluginVersion></pluginVersion>
-	<configSuffixes>
-		<configSuffix>xml</configSuffix>
-	</configSuffixes>
-	<enableImports>false</enableImports>
-	<configs>
-		<config>resources/commercemigration-spring.xml</config>
-		<config>web/webroot/WEB-INF/commercemigration-web-spring.xml</config>
-	</configs>
-	<configSets>
-	</configSets>
-</beansProjectDescription>
diff --git a/commercemigration/buildcallbacks.xml b/commercemigration/buildcallbacks.xml
deleted file mode 100644
index b21172a..0000000
--- a/commercemigration/buildcallbacks.xml
+++ /dev/null
@@ -1,134 +0,0 @@
-<!-- Ant build callbacks; the XML markup was stripped during extraction. Recoverable fragments: -->
-<!-- echo: "PATCHING azurecloudserver.jar to enable configurable fake tenants in AzureCloudUtils" -->
-<!-- fail: "${ext.azurecloud.path}/bin/azurecloudserver.jar doesn't exist. Cannot patch AzureCloudUtils to enable fake tenants!" -->
diff --git a/commercemigration/dump.txt b/commercemigration/dump.txt
deleted file mode 100644
index e69de29..0000000
diff --git a/commercemigration/extensioninfo.xml b/commercemigration/extensioninfo.xml
deleted file mode 100644
index f73084d..0000000
--- a/commercemigration/extensioninfo.xml
+++ /dev/null
@@ -1,21 +0,0 @@
diff --git a/commercemigration/external-dependencies.xml b/commercemigration/external-dependencies.xml
deleted file mode 100644
index 4300661..0000000
--- a/commercemigration/external-dependencies.xml
+++ /dev/null
@@ -1,57 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-	<modelVersion>4.0.0</modelVersion>
-	<groupId>de.hybris.platform</groupId>
-	<artifactId>commercemigration</artifactId>
-	<version>6.7.0.0-RC19</version>
-	<packaging>jar</packaging>
-	<dependencies>
-		<dependency>
-			<groupId>com.google.code.gson</groupId>
-			<artifactId>gson</artifactId>
-			<version>2.8.6</version>
-		</dependency>
-		<dependency>
-			<groupId>com.google.guava</groupId>
-			<artifactId>guava</artifactId>
-			<version>28.0-jre</version>
-		</dependency>
-		<dependency>
-			<groupId>org.apache.commons</groupId>
-			<artifactId>commons-dbcp2</artifactId>
-			<version>2.7.0</version>
-		</dependency>
-		<dependency>
-			<groupId>com.microsoft.azure</groupId>
-			<artifactId>azure-storage</artifactId>
-			<version>8.1.0</version>
-		</dependency>
-		<dependency>
-			<groupId>com.zaxxer</groupId>
-			<artifactId>HikariCP</artifactId>
-			<version>3.4.5</version>
-		</dependency>
-		<dependency>
-			<groupId>com.github.freva</groupId>
-			<artifactId>ascii-table</artifactId>
-			<version>1.1.0</version>
-		</dependency>
-		<dependency>
-			<groupId>com.fasterxml.jackson.datatype</groupId>
-			<artifactId>jackson-datatype-jsr310</artifactId>
-			<version>2.12.3</version>
-		</dependency>
-	</dependencies>
-</project>
diff --git a/commercemigration/lib/.gitkeep b/commercemigration/lib/.gitkeep
deleted file mode 100644
index e69de29..0000000
diff --git a/commercemigration/project.properties b/commercemigration/project.properties
deleted file mode 100644
index 091cf2b..0000000
--- a/commercemigration/project.properties
+++ /dev/null
@@ -1,482 +0,0 @@
-# Specifies the location of the spring context file that is automatically added to the global platform application context.
-commercemigration.application-context=commercemigration-spring.xml
-installed.tenants=
-task.engine.loadonstartup=false
-solrfacetsearch.solrClientPool.checkInterval=0
-#backoffice.cockpitng.reset.scope=widgets,cockpitConfig
-#backoffice.cockpitng.reset.triggers=start,login
-##
-# Specifies the profile name of data source that serves as migration input
-#
-# @values name of the data source profile
-# @optional true
-##
-migration.input.profiles=source
-##
-# Specifies the profile name of data sources that serves as migration output
-#
-# @values name of the data source profile
-# @optional true
-##
-migration.output.profiles=target
-##
-# Specifies the driver class for the source jdbc connection
-#
-# @values any valid jdbc driver class
-# @optional false
-##
-migration.ds.source.db.driver=
-##
-# Specifies the url for the source jdbc connection
-#
-# @values any valid jdbc url
-# @optional false
-##
-migration.ds.source.db.url=
-##
-# Specifies the user name for the source jdbc connection
-#
-# @values any valid user name for the jdbc connection
-# @optional false
-##
-migration.ds.source.db.username=
-##
-# Specifies the password for the source jdbc connection
-#
-# @values any valid password for the jdbc connection
-# @optional false
-##
-migration.ds.source.db.password=
-##
-# Specifies the table prefix used on the source commerce database.
-# This may be relevant if a commerce installation was initialized using 'db.tableprefix'.
-#
-# @values any valid commerce database table prefix.
-# @optional true
-##
-migration.ds.source.db.tableprefix=
-##
-# Specifies the schema the respective commerce installation is deployed to.
-#
-# @values any valid schema name for the commerce installation
-# @optional false
-##
-migration.ds.source.db.schema=
-##
-# Specifies the name of the type system that should be taken into account
-#
-# @values any valid type system name
-# @optional true
-##
-migration.ds.source.db.typesystemname=DEFAULT
-##
-# Specifies the suffix which is used for the source typesystem
-#
-# @values the suffix used for the typesystem, e.g. 'attributedescriptors1' means the suffix is '1'
-# @optional true
-# @dependency migration.ds.source.db.typesystemname
-##
-migration.ds.source.db.typesystemsuffix=
-##
-# Specifies minimum amount of idle connections available in the source db pool
-#
-# @values integer value
-# @optional false
-##
-migration.ds.source.db.connection.pool.size.idle.min=${db.pool.minIdle}
-##
-# Specifies maximum amount of connections in the source db pool
-#
-# @values integer value
-# @optional false
-##
-migration.ds.source.db.connection.pool.size.active.max=${db.pool.maxActive}
-##
-# Specifies the driver class for the target jdbc connection
-#
-# @values any valid jdbc driver class
-# @optional false
-##
-migration.ds.target.db.driver=${db.driver}
-##
-# Specifies the url for the target jdbc connection
-#
-# @values any valid jdbc url
-# @optional false
-##
-migration.ds.target.db.url=${db.url}
-##
-# Specifies the user name for the target jdbc connection
-#
-# @values any valid user name for the jdbc connection
-# @optional false
-##
-migration.ds.target.db.username=${db.username}
-##
-# Specifies the password for the target jdbc connection
-#
-# @values any valid password for the jdbc connection
-# @optional false
-##
-migration.ds.target.db.password=${db.password}
-##
-# Specifies the table prefix used on the target commerce database.
-# This may be relevant if a commerce installation was initialized using 'db.tableprefix' / staged approach.
-#
-# @values any valid commerce database table prefix.
-# @optional true
-##
-migration.ds.target.db.tableprefix=${db.tableprefix}
-migration.ds.target.db.catalog=
-##
-# Specifies the schema the target commerce installation is deployed to.
-#
-# @values any valid schema name for the commerce installation
-# @optional false
-##
-migration.ds.target.db.schema=dbo
-##
-# Specifies the name of the type system that should be taken into account
-#
-# @values any valid type system name
-# @optional true
-##
-migration.ds.target.db.typesystemname=DEFAULT
-##
-# Specifies the suffix which is used for the target typesystem
-#
-# @values the suffix used for the typesystem, e.g. 'attributedescriptors1' means the suffix is '1'
-# @optional true
-# @dependency migration.ds.target.db.typesystemname
-##
-migration.ds.target.db.typesystemsuffix=
-##
-# Specifies minimum amount of idle connections available in the target db pool
-#
-# @values integer value
-# @optional false
-##
-migration.ds.target.db.connection.pool.size.idle.min=${db.pool.minIdle}
-##
-# Specifies maximum amount of connections in the target db pool
-#
-# @values integer value
-# @optional false
-##
-migration.ds.target.db.connection.pool.size.active.max=${db.pool.maxActive}
-##
-# When using the staged approach, multiple sets of commerce tables may exist (each having its own table prefix).
-# To prevent cluttering the db, this property specifies the maximum number of table sets that can exist;
-# if this number is exceeded, the schema migrator will complain and suggest a cleanup.
-#
-# @values integer value
-# @optional true
-##
-migration.ds.target.db.max.stage.migrations=5
-##
-# Specifies whether the data migration shall be triggered by the 'update running system' operation.
-#
-# @values true or false
-# @optional true
-##
-migration.trigger.updatesystem=false
-##
-# Globally enables / disables schema migration. If set to false, no schema changes will be applied.
-#
-# @values true or false
-# @optional true
-##
-migration.schema.enabled=true
-##
-# Specifies if tables which are missing in the target should be added by schema migration.
-#
-# @values true or false
-# @optional true
-# @dependency migration.schema.enabled
-##
-migration.schema.target.tables.add.enabled=true
-##
-# Specifies if extra tables in target (compared to source schema) should be removed by schema migration.
-#
-# @values true or false
-# @optional true
-# @dependency migration.schema.enabled
-##
-migration.schema.target.tables.remove.enabled=false
-##
-# Specifies if columns which are missing in the target tables should be added by schema migration.
-#
-# @values true or false
-# @optional true
-# @dependency migration.schema.enabled
-##
-migration.schema.target.columns.add.enabled=true
-##
-# Specifies if extra columns in target tables (compared to source schema) should be removed by schema migration.
-#
-# @values true or false
-# @optional true
-# @dependency migration.schema.enabled
-##
-migration.schema.target.columns.remove.enabled=true
-##
-# Specifies if the schema migrator should be automatically triggered before data copy process is started
-#
-# @values true or false
-# @optional true
-# @dependency migration.schema.enabled
-##
-migration.schema.autotrigger.enabled=false
-##
-# Specifies the number of rows to read per batch. This only affects tables which can be batched.
-#
-# @values integer value
-# @optional true
-##
-migration.data.reader.batchsize=1000
-##
-# Specifies if the target tables should be truncated before data is copied over.
-#
-# @values true or false
-# @optional true
-##
-migration.data.truncate.enabled=true
-##
-# If truncation of target tables is enabled, this property specifies tables that should be excluded from truncation.
-#
-# @values comma separated list of table names
-# @optional true
-# @dependency migration.data.truncate.enabled
-##
-migration.data.truncate.excluded=
-##
-# Specifies the number of threads used per table to write data to target.
-# Note that this value applies per table, so in total the number of threads will depend on
-# 'migration.data.maxparalleltablecopy'.
-# [total number of writer threads] = [migration.data.workers.writer.maxtasks] * [migration.data.maxparalleltablecopy]
-#
-# @values integer value
-# @optional true
-# @dependency migration.data.maxparalleltablecopy
-##
-migration.data.workers.writer.maxtasks=10
-##
-# Specifies the number of threads used per table to read data from source.
-# Note that this value applies per table, so in total the number of threads will depend on
-# 'migration.data.maxparalleltablecopy'.
-# [total number of reader threads] = [migration.data.workers.reader.maxtasks] * [migration.data.maxparalleltablecopy]
-#
-# @values integer value
-# @optional true
-# @dependency migration.data.maxparalleltablecopy
-##
-migration.data.workers.reader.maxtasks=3
-##
-# Specifies the number of retries in case a worker task fails.
-#
-# @values integer value
-# @optional true
-##
-migration.data.workers.retryattempts=0
-##
-# Specifies the number of tables that are copied over in parallel.
-#
-# @values integer value
-# @optional true
-##
-migration.data.maxparalleltablecopy=2
-##
-# If set to true, the migration will abort as soon as an error occurs.
-# If set to false, the migration will try to continue if the state of the runtime allows.
-#
-# @values true or false
-# @optional true
-##
-migration.data.failonerror.enabled=true
-##
-# Specifies the columns to be excluded
-#
-# @values migration.data.columns.excluded.[tablename]=[comma separated list of column names]
-# @optional true
-##
-migration.data.columns.excluded.attributedescriptors=
-##
-# Specifies the columns to be nullified. Whatever value there was will be replaced with NULL in the target column.
-#
-# @values migration.data.columns.nullify.[tablename]=[comma separated list of column names]
-# @optional true
-##
-migration.data.columns.nullify.attributedescriptors=
-##
-# If set to true, all indices in the target table will be removed before copying over the data.
-#
-# @values true or false
-# @optional true
-##
-migration.data.indices.drop.enabled=false
-##
-# If set to true, all indices in the target table will be disabled (NOT removed) before copying over the data.
-# After the data copy the indices will be enabled and rebuilt again.
-#
-# @values true or false
-# @optional true
-##
-migration.data.indices.disable.enabled=false
-##
-# If disabling of indices is enabled, this property specifies the tables that should be included.
-# If no tables specified, indices for all tables will be disabled.
-#
-# @values comma separated list of tables
-# @optional true
-# @dependency migration.data.indices.disable.enabled
-##
-migration.data.indices.disable.included=
-##
-# Flag to enable the migration of audit tables.
-#
-# @values true or false
-# @optional true
-##
-migration.data.tables.audit.enabled=true
-##
-# Specifies a list of custom tables to migrate. Custom tables are tables that are not part of the commerce type system.
-#
-# @values comma separated list of table names.
-# @optional true
-##
-migration.data.tables.custom=
-##
-# Tables to exclude from migration (use table names without prefix)
-#
-# @values comma separated list of table names.
-# @optional true
-##
-migration.data.tables.excluded=SYSTEMINIT,StoredHttpSessions
-##
-# Tables to include (use table names without prefix)
-#
-# @values comma separated list of table names.
-# @optional true
-##
-migration.data.tables.included=
-##
-# Run migration in the cluster (based on commerce cluster config). The 'HAC' node will be the primary one.
-# A scheduling algorithm decides which table will run on which node. Nodes are notified using cluster events.
-#
-# @values true or false
-# @optional true
-##
-migration.cluster.enabled=false
-##
-# If set to true, the migration will resume from where it stopped (either due to errors or cancellation).
-#
-# @values true or false
-# @optional true
-##
-migration.scheduler.resume.enabled=false
-##
-# If set to true, the migration will run in incremental mode. Only rows that were modified after a given timestamp
-# will be taken into account.
-#
-# @values true or false
-# @optional true
-##
-migration.data.incremental.enabled=false
-##
-# Only these tables will be taken into account for incremental migration.
-#
-# @values comma separated list of tables.
-# @optional true
-# @dependency migration.data.incremental.enabled
-##
-migration.data.incremental.tables=
-##
-# Only records created or modified after this timestamp will be copied.
-#
-# @values The timestamp in ISO-8601 ISO_ZONED_DATE_TIME format
-# @optional true
-# @dependency migration.data.incremental.enabled
-##
-migration.data.incremental.timestamp=
-##
-# Specifies the timeout of the data pipe.
-#
-# @values integer value
-# @optional true
-##
-migration.data.pipe.timeout=7200
-##
-# Specifies the capacity of the data pipe.
-#
-# @values integer value
-# @optional true
-##
-migration.data.pipe.capacity=100
-##
-# Specifies the timeout of the migration monitor.
-# If there was no activity for too long the migration will be marked as 'stalled' and aborted.
-#
-# @values integer value
-# @optional true
-##
-migration.stalled.timeout=7200
-##
-# Specifies blob storage connection string for storing reporting files.
-#
-# @values any azure blob storage connection string
-# @optional true
-##
-migration.data.report.connectionstring=${media.globalSettings.cloudAzureBlobStorageStrategy.connection}
-##
-# Specifies the properties that should be masked in HAC.
-#
-# @values any property key
-# @optional true
-##
-migration.properties.masked=migration.data.report.connectionstring,migration.ds.source.db.password,migration.ds.target.db.password
-##
-# Specifies the default locale used.
-#
-# @values any locale
-# @optional true
-##
-migration.locale.default=en-US
-##
-# If set to true, the JDBC queries run against the source and target data sources will be logged in the storage pointed to by the property {migration.data.report.connectionstring}
-#
-# @values true or false
-# @optional false
-##
-migration.log.sql=false
-##
-# Specifies how many entries the in-memory collection of JDBC log entries may hold before the collection
-# is flushed to the blob file storage associated with the JDBC store's data source and cleared to free memory.
-#
-# @values an integer number
-# @optional true
-##
-migration.log.sql.memory.flush.threshold.nbentries=10000000
-##
-# If set to true, the values of the parameters of the JDBC queries run against the source data source will be logged in the JDBC queries logs (migration.log.sql must be set to true to enable this type of logging). For security reasons, the tool never logs parameter values for the queries run against the target data source.
-#
-# @values true or false
-# @optional true
-##
-migration.log.sql.source.showparameters=true
-##
-# Specifies the name of the container where the tool will store migration-related files in the blob storage pointed to by the property {migration.data.report.connectionstring}
-#
-# @values any string
-# @optional true
-##
-migration.data.filestorage.container.name=migration
-# Enhanced Logging
-log4j2.appender.migrationAppender.type=Console
-log4j2.appender.migrationAppender.name=MigrationAppender
-log4j2.appender.migrationAppender.layout.type=PatternLayout
-log4j2.appender.migrationAppender.layout.pattern=%-5p [%t] [%c{1}] %X{migrationID,pipeline,clusterID} %m%n
-log4j2.logger.migrationToolkit.name=org.sap.commercemigration
-log4j2.logger.migrationToolkit.level=INFO
-log4j2.logger.migrationToolkit.appenderRef.migration.ref=MigrationAppender
-log4j2.logger.migrationToolkit.additivity=false
-
-
diff --git a/commercemigration/resources/commercemigration-beans.xml b/commercemigration/resources/commercemigration-beans.xml
deleted file mode 100644
index ecfb502..0000000
--- a/commercemigration/resources/commercemigration-beans.xml
+++ /dev/null
@@ -1,153 +0,0 @@
-<!-- Bean and enum definitions; the XML markup was stripped during extraction. Recoverable fragments: -->
-<!-- Migration progress states: RUNNING, PROCESSED, COMPLETED, ABORTED, STALLED -->
-<!-- Table name variants: "No prefix, no type system suffix"; "No prefix, with type system suffix"; -->
-<!-- "With prefix, with type system suffix"; "With prefix, with type system suffix, no additional suffix" (e.g. LP tables) -->
diff --git a/commercemigration/resources/commercemigration-items.xml b/commercemigration/resources/commercemigration-items.xml
deleted file mode 100644
index 93434a3..0000000
--- a/commercemigration/resources/commercemigration-items.xml
+++ /dev/null
@@ -1,20 +0,0 @@
diff --git a/commercemigration/resources/commercemigration-spring.xml b/commercemigration/resources/commercemigration-spring.xml
deleted file mode 100644
index 832dbfa..0000000
--- a/commercemigration/resources/commercemigration-spring.xml
+++ /dev/null
@@ -1,258 +0,0 @@
diff --git a/commercemigration/resources/commercemigration.build.number b/commercemigration/resources/commercemigration.build.number
deleted file mode 100644
index c219c33..0000000
--- a/commercemigration/resources/commercemigration.build.number
+++ /dev/null
@@ -1,9 +0,0 @@
-#Ant properties
-#Thu Dec 06 15:21:08 CET 2018
-builddate=20181206 1521
-description=commercemigration
-name=commercemigration
-releasedate=20180313 1012
-vendor=hybris
-version=6.7.0.9
-version.api=6.7.0
diff --git a/commercemigration/resources/commercemigration/dummy.txt b/commercemigration/resources/commercemigration/dummy.txt
deleted file mode 100644
index e69de29..0000000
diff --git a/commercemigration/resources/commercemigration/sap-hybris-platform.png b/commercemigration/resources/commercemigration/sap-hybris-platform.png
deleted file mode 100644
index 3984ada..0000000
Binary files a/commercemigration/resources/commercemigration/sap-hybris-platform.png and /dev/null differ
diff --git a/commercemigration/resources/doc/architecture_diagram.drawio b/commercemigration/resources/doc/architecture_diagram.drawio
deleted file mode 100644
index 765e3e9..0000000
--- a/commercemigration/resources/doc/architecture_diagram.drawio
+++ /dev/null
@@ -1 +0,0 @@
-7Vxdc5s4FP01eYwHEF9+jOMknd2kzWy63fapI4OM2QByhdzY/fWVQNiAhD82BpJZ0mkLFyHg3KN7D1ciF+A6Xt8RuFw8YB9FF4bmry/A9MIwdNMw2X/cssktjuXmhoCEvmi0MzyFv5AwasK6Cn2UVhpSjCMaLqtGDycJ8mjFBgnBL9VmcxxVr7qEAZIMTx6MZOs/oU8XudU1nJ39AwqDRXFl3R7nR2JYNBZPki6gj19KJnBzAa4JxjTfitfXKOLgFbjk5902HN3eGEEJPeaET/iPh2/wavPj658p+jH9+Pfyw82l6OUnjFbigcXN0k2BAOuFgc12JuwJltzoRXjFOp28LEKKnpbQ48YX5n9mW9A4Yns625yHUXSNI0yyfoAPkTv3eDeU4GdUOmJ7LprN2RFxM4hQtG58Sn2LHSMdwjGiZMOaiBMsR8At+GaK3Zed8+zCtig5zh4LIxSECbZd7zBlGwLWUyA+AuOA4BVHL2Vghklwj+b8trR9kAjCw1nRh3YqVE4NKqDLWAGgwErXW8PKkLB6+nwnwcXQSnzkC4gO0BBGYZCw7SgDtc7KucX/KFmZ/fAzcEJL9vxH2J/ETfELRXCGokechjTE/IIe8wNiJ02490IWUu5rDeLQ9/nZ2wZX4lYp5s8QRDBNxTOmz4h6i2KniCSaRBnr2FG0h6oyYQRBDKt3fgCJH9c4jhFhvucDwI64i2eEbQV862q5jBiwGd51EmXuOp4+kjcLZ80wpThWDl5hmYgWU5PZwpiljKt6p5n1i5IG2SG2F8YBw8xjwRgSyrbuECTfdcNds7+jZRLUGKmdnwaiG2BWw4bMCldBivbiqylz4uGzkg5oTVGSMjKkAxvOxQbdct4WHazD6XYrZAjmEfZEIaNplobmqpShafp0ci2njHn200poLroxqrnclr1guLIXQGtesCUvTCeNfvAhhSnFBB32RWsIGnYVQUshHFW6sTUEnWNlY3cK0VSo6a4VgCvB8vjXdFCIXSlEZ79C7J8f40EhdqAJGmjQoBBlVnQqCQpWDgqxFzZICrFvOhxRkHmPCvGAF+oKUfZCpwqxAP0tKcT9CNYVoilr7E4VoiEXQzpXiNv6qsBEHysUgOOOLBkXU2sLGHDCKyAL+IwzPl7NRBQEk5O0Y0VpNeqxmsL0YbrILqBvY0QxuQBexeCcEo3euqwl5kuFXtvG3YqvjNZ8NZZcg/wAFVkJE7rAAU5gdLOzljzEAdy1ucc8A2bGfxGlGzGdBFcUV73GUCSbr8I52c43vjMCdrE/XZePTjflvUdEQvb0PBHLrs4fhz/Df3EdAwKviIf2NRQ+pJAEaF+PwFaTgaCI6d2f1ftTOTY79YoQuCk1WOIwoWmp50duaI4IllXmyMHm+hjUOJXfwI5h2yd5ReSUS4bTmy8SEYeXyHZeIpuiVDkKGZ2+NRpyxhjeGs/+nnAgO9WSk2I2stvXBLnS1LdAPQCgXgif5vm6TgUqkIvAXx4/ShDiFY3ChMXEYgWHVpZI2p7Ith0i6sgmDbGqbsv8Fq8DvnBlBF9SMPq5TL4LTZxdoB7Fby3XAnywsXP8EO3idYKTqi6R4mdzlO1O/m0n+IsR5tgSQSy1VndbokiR/d+z/DtC8enHarvxm9Z2Zn3FiFsRa6e2b0ncyRXgYQ1JK+LutNIAUL1sdivzimLWIPPOJPMOMMCpzQP0XfkFcuW3Q0l3YFWNUxVvoO/yIpALtMNM6xuIo6ZiZULXcXRYhtdnHDUVczedxlFTrqf/n1U7eNsVWdOtBhBL21+SrbfvRLUDRUn2raZm0+k7Nctrwz6xyKo9EhSHKZJgK8HRlP8aKiZZdJ5A7znIhq9qxro5EFcqG4Yqr2uaq91KabRS+6kXakKcOqPQw0k6WjA4papLY6ZulARnYEj9RVdX5OiC4ZXZdauotpyfI3L58urXivAE/bSapR4Jl8p8vJ8gCiZJfpdIVHW7qJVJSyYmN7eGqjDHbzr/93tavvGjy23V2dauKGHbdUrIQcNSBA3QFiFM89VJuwLeGTO4dSCBnzNX20fm6nyh6Sty9eucZQ3OOsVZRq/OsgdnneCsXn0lFzWGmf8eihr1LwtUs70drxwfahpd1jRcrVrTUH060GlN491/NnYA73FtwKmqiF2u/x2+2uxmoI2N6ur73gdar18GHviKUgO1UaL4mvVMlR62u/vdH3kVbfcbVMDNbw==
\ No newline at end of file
diff --git a/commercemigration/resources/doc/concept_overview.png b/commercemigration/resources/doc/concept_overview.png
deleted file mode 100644
index 04beb35..0000000
Binary files a/commercemigration/resources/doc/concept_overview.png and /dev/null differ
diff --git a/commercemigration/resources/doc/configuration/CONFIGURATION-GUIDE.md b/commercemigration/resources/doc/configuration/CONFIGURATION-GUIDE.md
deleted file mode 100644
index d9ac21c..0000000
--- a/commercemigration/resources/doc/configuration/CONFIGURATION-GUIDE.md
+++ /dev/null
@@ -1,42 +0,0 @@
-# CMT - Configuration Guide
-
-## Configuration reference
-
-See the [Configuration Reference](CONFIGURATION-REFERENCE.md) for an overview of the configurable properties.
-
-## Configure incremental data migration
-
-For large tables, it often makes sense to copy the bulk of data before the cutover, and then only copy the rows that have changed in a given time frame. This helps to reduce the cutover window for production systems.
-To configure the incremental copy, set the following properties:
-```
-migration.data.incremental.enabled=
-migration.data.incremental.tables=
-migration.data.incremental.timestamp=
-migration.data.truncate.enabled=
-```
-example:
-```
-migration.data.incremental.enabled=true
-migration.data.incremental.tables=orders,orderentries
-migration.data.incremental.timestamp=2020-07-28T18:44:00+01:00[Europe/Zurich]
-migration.data.truncate.enabled=false
-```
-
-> **LIMITATION**: Tables must have the following columns: modifiedTS, PK. Furthermore, this is an incremental approach: only modified and inserted rows are taken into account. Deletions on the source side are not handled.
-
-The timestamp refers to whatever timezone the source database is using (make sure to include the timezone).
-
-During the migration, the data copy process uses an UPSERT command to make sure new records are inserted and modified records are updated. Also make sure truncation is disabled, as it is not desired for an incremental copy.
-
-Only tables configured for incremental will be taken into consideration, as long as they are not already excluded by the general filter properties. All other tables will be ignored.
-
-After the incremental migration you may have to migrate the numberseries table again, to ensure the PK generation will be aligned.
-For this, disable incremental mode and use the property migration.data.tables.included to only migrate that one table.
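-
-For example, a follow-up run for just that table could use:
-```
-migration.data.incremental.enabled=false
-migration.data.tables.included=numberseries
-```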
-
-## Configure logging
-
-Use the following property to configure the log level:
-
-log4j2.logger.migrationToolkit.level
-
-Default value is INFO.
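-
-For example, to get more verbose output while analyzing a migration run, set:
-```
-log4j2.logger.migrationToolkit.level=DEBUG
-```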
diff --git a/commercemigration/resources/doc/configuration/CONFIGURATION-REFERENCE.md b/commercemigration/resources/doc/configuration/CONFIGURATION-REFERENCE.md
deleted file mode 100644
index 5921b68..0000000
--- a/commercemigration/resources/doc/configuration/CONFIGURATION-REFERENCE.md
+++ /dev/null
@@ -1,81 +0,0 @@
-
-# CMT - Configuration Reference
-
-
-| Property | Description | Default | Values | Optional | Dependency |
-| --- | --- | --- | --- | --- | --- |
-| commercemigration.application-context | | commercemigration-spring.xml | | | |
-| installed.tenants | | | | | |
-| log4j2.appender.migrationAppender.layout.pattern | | %-5p [%t] [%c{1}] %X{migrationID,pipeline,clusterID} %m%n | | | |
-| log4j2.appender.migrationAppender.layout.type | | PatternLayout | | | |
-| log4j2.appender.migrationAppender.name | | MigrationAppender | | | |
-| log4j2.appender.migrationAppender.type | | Console | | | |
-| log4j2.logger.migrationToolkit.additivity | | false | | | |
-| log4j2.logger.migrationToolkit.appenderRef.migration.ref | | MigrationAppender | | | |
-| log4j2.logger.migrationToolkit.level | | INFO | | | |
-| log4j2.logger.migrationToolkit.name | | org.sap.commercemigration | | | |
-| migration.cluster.enabled | Run migration in the cluster (based on commerce cluster config). The 'HAC' node will be the primary one. A scheduling algorithm decides which table will run on which node. Nodes are notified using cluster events.| false | true or false | true | |
-| migration.data.columns.excluded.attributedescriptors | Specifies the columns to be excluded| | migration.data.columns.excluded.[tablename]=[comma separated list of column names] | true | |
-| migration.data.columns.nullify.attributedescriptors | Specifies the columns to be nullified. Whatever value there was will be replaced with NULL in the target column.| | migration.data.columns.nullify.[tablename]=[comma separated list of column names] | true | |
-| migration.data.failonerror.enabled | If set to true, the migration will abort as soon as an error occurs. If set to false, the migration will try to continue if the state of the runtime allows.| true | true or false | true | |
-| migration.data.filestorage.container.name | Specifies the name of the container where the tool will store migration-related files in the blob storage pointed to by the property {migration.data.report.connectionstring}| migration | any string | true | |
-| migration.data.incremental.enabled | If set to true, the migration will run in incremental mode. Only rows that were modified after a given timestamp will be taken into account.| false | true or false | true | |
-| migration.data.incremental.tables | Only these tables will be taken into account for incremental migration.| | comma separated list of tables. | true | migration.data.incremental.enabled |
-| migration.data.incremental.timestamp | Records created or modified after this timestamp will be copied only.| | The timestamp in ISO-8601 ISO_ZONED_DATE_TIME format | true | migration.data.incremental.enabled |
-| migration.data.indices.disable.enabled | If set to true, all indices in the target table will be disabled (NOT removed) before copying over the data. After the data copy the indices will be enabled and rebuilt again.| false | true or false | true | |
-| migration.data.indices.disable.included | If disabling of indices is enabled, this property specifies the tables that should be included. If no tables specified, indices for all tables will be disabled.| | comma separated list of tables | true | migration.data.indices.disable.enabled |
-| migration.data.indices.drop.enabled | If set to true, all indices in the target table will be removed before copying over the data.| false | true or false | true | |
-| migration.data.maxparalleltablecopy | Specifies the number of tables that are copied over in parallel.| 2 | integer value | true | |
-| migration.data.pipe.capacity | Specifies the capacity of the data pipe.| 100 | integer value | true | |
-| migration.data.pipe.timeout | Specifies the timeout of the data pipe.| 7200 | integer value | true | |
-| migration.data.reader.batchsize | Specifies the number of rows to read per batch. This only affects tables which can be batched.| 1000 | integer value | true | |
-| migration.data.report.connectionstring | Specifies blob storage connection string for storing reporting files.| ${media.globalSettings.cloudAzureBlobStorageStrategy.connection} | any azure blob storage connection string | true | |
-| migration.data.tables.audit.enabled | Flag to enable the migration of audit tables.| true | true or false | true | |
-| migration.data.tables.custom | Specifies a list of custom tables to migrate. Custom tables are tables that are not part of the commerce type system.| | comma separated list of table names. | true | |
-| migration.data.tables.excluded | Tables to exclude from migration (use table names without prefix)| SYSTEMINIT,StoredHttpSessions | comma separated list of table names. | true | |
-| migration.data.tables.included | Tables to include (use table names without prefix)| | comma separated list of table names. | true | |
-| migration.data.truncate.enabled | Specifies if the target tables should be truncated before data is copied over.| true | true or false | true | |
-| migration.data.truncate.excluded | If truncation of target tables is enabled, this property specifies tables that should be excluded from truncation.| | comma separated list of table names | true | migration.data.truncate.enabled |
-| migration.data.workers.reader.maxtasks | Specifies the number of threads used per table to read data from source. Note that this value applies per table, so in total the number of threads will depend on 'migration.data.maxparalleltablecopy'. [total number of reader threads] = [migration.data.workers.reader.maxtasks] * [migration.data.maxparalleltablecopy]| 3 | integer value | true | migration.data.maxparalleltablecopy |
-| migration.data.workers.retryattempts | Specifies the number of retries in case a worker task fails.| 0 | integer value | true | |
-| migration.data.workers.writer.maxtasks | Specifies the number of threads used per table to write data to target. Note that this value applies per table, so in total the number of threads will depend on 'migration.data.maxparalleltablecopy'. [total number of writer threads] = [migration.data.workers.writer.maxtasks] * [migration.data.maxparalleltablecopy]| 10 | integer value | true | migration.data.maxparalleltablecopy |
-| migration.ds.source.db.connection.pool.size.active.max | Specifies maximum amount of connections in the source db pool| ${db.pool.maxActive} | integer value | false | |
-| migration.ds.source.db.connection.pool.size.idle.min | Specifies minimum amount of idle connections available in the source db pool| ${db.pool.minIdle} | integer value | false | |
-| migration.ds.source.db.driver | Specifies the driver class for the source jdbc connection| | any valid jdbc driver class | false | |
-| migration.ds.source.db.password | Specifies the password for the source jdbc connection| | any valid password for the jdbc connection | false | |
-| migration.ds.source.db.schema | Specifies the schema the respective commerce installation is deployed to.| | any valid schema name for the commerce installation | false | |
-| migration.ds.source.db.tableprefix | Specifies the table prefix used on the source commerce database. This may be relevant if a commerce installation was initialized using 'db.tableprefix'.| | any valid commerce database table prefix. | true | |
-| migration.ds.source.db.typesystemname | Specifies the name of the type system that should be taken into account| DEFAULT | any valid type system name | true | |
-| migration.ds.source.db.typesystemsuffix | Specifies the suffix which is used for the source typesystem| | the suffix used for the typesystem, e.g. 'attributedescriptors1' means the suffix is '1' | true | migration.ds.source.db.typesystemname |
-| migration.ds.source.db.url | Specifies the url for the source jdbc connection| | any valid jdbc url | false | |
-| migration.ds.source.db.username | Specifies the user name for the source jdbc connection| | any valid user name for the jdbc connection | false | |
-| migration.ds.target.db.catalog | | | | | |
-| migration.ds.target.db.connection.pool.size.active.max | Specifies maximum amount of connections in the target db pool| ${db.pool.maxActive} | integer value | false | |
-| migration.ds.target.db.connection.pool.size.idle.min | Specifies minimum amount of idle connections available in the target db pool| ${db.pool.minIdle} | integer value | false | |
-| migration.ds.target.db.driver | Specifies the driver class for the target jdbc connection| ${db.driver} | any valid jdbc driver class | false | |
-| migration.ds.target.db.max.stage.migrations | When using the staged approach, multiple sets of commerce tables may exist (each having its own table prefix). To prevent cluttering the db, this property specifies the maximum number of table sets that can exist; if this number is exceeded, the schema migrator will complain and suggest a cleanup.| 5 | integer value | true | |
-| migration.ds.target.db.password | Specifies the password for the target jdbc connection| ${db.password} | any valid password for the jdbc connection | false | |
-| migration.ds.target.db.schema | Specifies the schema the target commerce installation is deployed to.| dbo | any valid schema name for the commerce installation | false | |
-| migration.ds.target.db.tableprefix | Specifies the table prefix used on the target commerce database. This may be relevant if a commerce installation was initialized using 'db.tableprefix' / staged approach.| ${db.tableprefix} | any valid commerce database table prefix. | true | |
-| migration.ds.target.db.typesystemname | Specifies the name of the type system that should be taken into account| DEFAULT | any valid type system name | true | |
-| migration.ds.target.db.typesystemsuffix | Specifies the suffix which is used for the target typesystem| | the suffix used for the typesystem, e.g. 'attributedescriptors1' means the suffix is '1' | true | migration.ds.target.db.typesystemname |
-| migration.ds.target.db.url | Specifies the url for the target jdbc connection| ${db.url} | any valid jdbc url | false | |
-| migration.ds.target.db.username | Specifies the user name for the target jdbc connection| ${db.username} | any valid user name for the jdbc connection | false | |
-| migration.input.profiles | Specifies the profile name of data source that serves as migration input| source | name of the data source profile | true | |
-| migration.locale.default | Specifies the default locale used.| en-US | any locale | true | |
-| migration.log.sql | If set to true, the JDBC queries run against the source and target data sources will be logged in the storage pointed to by the property {migration.data.report.connectionstring}| false | true or false | false | |
-| migration.log.sql.memory.flush.threshold.nbentries | Specifies how many entries the in-memory collection of JDBC log entries may hold before the collection is flushed to the blob file storage associated with the JDBC store's data source and cleared to free memory| 10000000 | an integer number | true | |
-| migration.log.sql.source.showparameters | If set to true, the values of the parameters of the JDBC queries run against the source data source will be logged in the JDBC queries logs (migration.log.sql must be set to true to enable this type of logging). For security reasons, the tool never logs parameter values for the queries run against the target datasource.| true | true or false | true | |
-| migration.output.profiles | Specifies the profile name of data sources that serves as migration output| target | name of the data source profile | true | |
-| migration.properties.masked | Specifies the properties that should be masked in HAC.| migration.data.report.connectionstring,migration.ds.source.db.password,migration.ds.target.db.password | any property key | true | |
-| migration.scheduler.resume.enabled | If set to true, the migration will resume from where it stopped (either due to errors or cancellation).| false | true or false | true | |
-| migration.schema.autotrigger.enabled | Specifies if the schema migrator should be automatically triggered before data copy process is started| false | true or false | true | migration.schema.enabled |
-| migration.schema.enabled | Globally enables / disables schema migration. If set to false, no schema changes will be applied.| true | true or false | true | |
-| migration.schema.target.columns.add.enabled | Specifies if columns which are missing in the target tables should be added by schema migration.| true | true or false | true | migration.schema.enabled |
-| migration.schema.target.columns.remove.enabled | Specifies if extra columns in target tables (compared to source schema) should be removed by schema migration.| true | true or false | true | migration.schema.enabled |
-| migration.schema.target.tables.add.enabled | Specifies if tables which are missing in the target should be added by schema migration.| true | true or false | true | migration.schema.enabled |
-| migration.schema.target.tables.remove.enabled | Specifies if extra tables in target (compared to source schema) should be removed by schema migration.| false | true or false | true | migration.schema.enabled |
-| migration.stalled.timeout | Specifies the timeout of the migration monitor. If there was no activity for too long the migration will be marked as 'stalled' and aborted.| 7200 | integer value | true | |
-| migration.trigger.updatesystem | Specifies whether the data migration shall be triggered by the 'update running system' operation.| false | true or false | true | |
-| solrfacetsearch.solrClientPool.checkInterval | | 0 | | | |
-| task.engine.loadonstartup | | false | | | |
diff --git a/commercemigration/resources/doc/developer/DEVELOPER-GUIDE.md b/commercemigration/resources/doc/developer/DEVELOPER-GUIDE.md
deleted file mode 100644
index c434eae..0000000
--- a/commercemigration/resources/doc/developer/DEVELOPER-GUIDE.md
+++ /dev/null
@@ -1,46 +0,0 @@
-# CMT - Developer Guide
-
-## Quick Start
-
-To install the Commerce Migration Toolkit, follow these steps:
-
-Add the following extensions to your localextensions.xml:
-```
-<extension name="commercemigration" />
-<extension name="commercemigrationhac" />
-```
-
-Make sure you add the source db driver to commercemigration/lib if necessary.
-
-Use the following sample configuration and add it to your local.properties file:
-
-```
-migration.ds.source.db.driver=com.mysql.jdbc.Driver
-migration.ds.source.db.url=jdbc:mysql://localhost:3600/localdev?useConfigs=maxPerformance&characterEncoding=utf8&useTimezone=true&serverTimezone=UTC&nullCatalogMeansCurrent=true
-migration.ds.source.db.username=[user]
-migration.ds.source.db.password=[password]
-migration.ds.source.db.tableprefix=
-migration.ds.source.db.schema=localdev
-
-migration.ds.target.db.driver=${db.driver}
-migration.ds.target.db.url=${db.url}
-migration.ds.target.db.username=${db.username}
-migration.ds.target.db.password=${db.password}
-migration.ds.target.db.tableprefix=${db.tableprefix}
-migration.ds.target.db.catalog=${db.catalog}
-migration.ds.target.db.schema=dbo
-
-```
-
-
-## Contributing to the Commerce Migration Toolkit
-
-To contribute to the Commerce Migration Toolkit, follow these steps:
-
-1. Fork this repository;
-2. Create a branch: `git checkout -b <branch_name>`;
-3. Make your changes and commit them: `git commit -m '<commit_message>'`;
-4. Push to the original branch: `git push origin <project_name>/<location>`;
-5. Create the pull request.
-
-Alternatively, see the GitHub documentation on [creating a pull request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request).
diff --git a/commercemigration/resources/doc/performance/PERFORMANCE-GUIDE.md b/commercemigration/resources/doc/performance/PERFORMANCE-GUIDE.md
deleted file mode 100644
index 8fa33f0..0000000
--- a/commercemigration/resources/doc/performance/PERFORMANCE-GUIDE.md
+++ /dev/null
@@ -1,160 +0,0 @@
-# CMT - Performance Guide
-
-
-## Benchmarks
-
-### AWS to SAP Commerce Cloud
-
-Source Database:
-
-* AWS MySQL: db.m6g.large
-* Tables: 974
-* Row Count: 158'855'795
-* Total Volume at source (incl. Indexes): 51 GB
-
-Results:
-
-| Tier | Mem | CPU | Duration | parTables | rWorkers | wWorkers | batchSize | disIdx | DB size at target |
-|------|-----|-----|----------|-----------|----------|----------|-----------|--------|-------------------|
-| S12 | 4GB | 2 | 2h11m | 2 | 5 | 15 | 2000 | TRUE | 72GB |
-| S12 | 4GB | 2 | 3h4m | 2 | 5 | 15 | 2000 | FALSE | 92GB |
-| S12 | 4GB | 2 | 2h59m | 2 | 5 | 15 | 4000 | FALSE | 92GB |
-| S12 | 6GB | 2 | 2h53m | 2 | 10 | 20 | 3000 | FALSE | 92GB |
-| S12 | 6GB | 2 | 2h09m | 2 | 5 | 15 | 3000 | TRUE | 72GB |
-| S12 | 6GB | 6 | 1h35m | 2 | 5 | 15 | 3000 | TRUE | 72GB |
-| S12 | 8GB | 6 | 1h30m | 2 | 10 | 30 | 3000 | TRUE | 75GB |
-
-> **NOTE**: DB size differs in source and target due to different storage concepts (indexes).
-
-## Technical Concept
-
-
-
-
-
-### Scheduler
-
-The table scheduler is responsible for triggering the copy process for each table.
-The set of tables the scheduler actually works with is based on the copy item provider and the respective filters configured. How many tables can be scheduled in parallel is determined by the following property:
-
-`migration.data.maxparalleltablecopy`
-
-
-
-### Reader Workers
-
-Each scheduled table will get a set of reader workers. The source table will be read using 'keyset/seek' pagination, if possible. For this, a unique key is identified (typically 'PK' or 'ID'), from which the parallel batches can be derived. If this is not possible, the readers fall back to offset pagination.
-Each reader worker uses its own db connection.
-How many reader workers a table can have is defined by the following property:
-
-`migration.data.workers.reader.maxtasks`
-
-The size of the batches each reader will query depends on the following property:
-
-`migration.data.reader.batchsize`
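-
-As an illustration, a single keyset/seek read batch could look like the following sketch (the table and column names, and the use of plain JDBC, are assumptions for illustration; this is not the tool's actual code):
-
-```
-import java.sql.Connection;
-import java.sql.PreparedStatement;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-
-// Seek past the last PK of the previous batch instead of paging with OFFSET
-ResultSet readNextBatch(Connection con, long lastPk, int batchSize) throws SQLException {
-    PreparedStatement ps = con.prepareStatement(
-            "SELECT * FROM orders WHERE PK > ? ORDER BY PK");
-    ps.setMaxRows(batchSize); // migration.data.reader.batchsize
-    ps.setLong(1, lastPk);
-    return ps.executeQuery();
-}
-```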
-
-### Blocking Pipe
-
-The batches read by the reader workers will be written to a blocking pipe as wrapped datasets.
-The pipe is a blocking queue that can be used to throttle the throughput and can be configured via:
-
-`migration.data.pipe.timeout`
-
-`migration.data.pipe.capacity`
-
-The pipe will throw an exception if it has been blocked for too long (for example, because the writers are too slow);
-the default timeout should normally be sufficient.
-If the pipe reaches its maximum capacity, it will block and wait until the writers free up space in it.
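-
-Conceptually, the pipe behaves like a bounded blocking queue; the following is a sketch under assumed types, not the tool's actual implementation:
-
-```
-import java.util.concurrent.ArrayBlockingQueue;
-import java.util.concurrent.BlockingQueue;
-import java.util.concurrent.TimeUnit;
-
-BlockingQueue<Object> pipe = new ArrayBlockingQueue<>(100); // migration.data.pipe.capacity
-
-// Blocks while the pipe is full and gives up after the configured timeout
-void publish(Object batch) throws InterruptedException {
-    if (!pipe.offer(batch, 7200, TimeUnit.SECONDS)) { // migration.data.pipe.timeout
-        throw new IllegalStateException("Pipe blocked for too long");
-    }
-}
-```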
-
-
-### Writer Workers
-
-The writers will read from the pipe until the pipe is sealed. Each dataset is then written to the database using prepared-statement batch inserts. Each writer batch uses its own db connection and transaction (one commit per batch). In case the batch insert fails, a rollback happens.
-How many writer workers a table can have is defined by the following property:
-
-`migration.data.workers.writer.maxtasks`
-
-The batch size for the writers is bound to the reader batch size.
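-
-A single writer batch could conceptually look like this sketch (again with assumed table and column names; this is not the tool's actual code):
-
-```
-import java.sql.Connection;
-import java.sql.PreparedStatement;
-import java.sql.SQLException;
-import java.util.List;
-
-// One transaction per batch: commit on success, rollback on failure
-void writeBatch(Connection con, List<Object[]> rows) throws SQLException {
-    con.setAutoCommit(false);
-    try (PreparedStatement ps = con.prepareStatement(
-            "INSERT INTO orders (PK, modifiedTS) VALUES (?, ?)")) {
-        for (Object[] row : rows) {
-            ps.setObject(1, row[0]);
-            ps.setObject(2, row[1]);
-            ps.addBatch();
-        }
-        ps.executeBatch();
-        con.commit();
-    } catch (SQLException e) {
-        con.rollback();
-        throw e;
-    }
-}
-```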
-
-## Performance Tuning
-
-### Scaling the Infrastructure
-
-In case the Customer needs to reduce the duration of the overall migration (for example, to achieve a performance goal for the data migration), a temporary upscale of the infrastructure in SAP Commerce Cloud can be requested by creating a [Support Ticket](https://help.sap.com/viewer/265a2902eb8d41a3bf79c5e5320785fa/latest/en-US/2d42670d4a07477f8a553c26e89dae3f.html).
-
-The following example shows how such a ticket could look. Make sure the environment name is mentioned, because multiple environments of the same type can exist in the same subscription. One ticket is needed per environment and upscale request. Please allow up to 3 days for the ticket to be processed.
-Please note that ALL the information in the ticket between curly brackets is mandatory, and SAP will communicate back to the Customer the confirmation and duration of the upscale. SAP will downscale the system to its original size at the end of the communicated window. That window depends on the information communicated by the Customer in the ticket.
-Please use the template below for the ticket creation, and attach the [filled-in questionnaire](template_for_scheduled_operational_activity.docx) from this same folder to the created ticket; in the questionnaire, please specify:
-1. All your customer data
-2. Reason for the request: Commerce Migration from on-premise, temporary upscale needed for a more performant data migration
-3. Start and End event times: planned duration for your data migration.
-
-```
-Dear SAP Support,
-
-we are planning a database migration of XX GB from our on-premise system into the SAP Commerce Cloud environment on <date>.
-We would like to request a system upscale to increase the performance of the data migration.
-We understand SAP will upscale our system for a pre-determined amount of time that will be communicated from SAP to us, and should the migration finish ahead of the communicated time window, we will notify SAP so the system can be brought back to its normal capacity in a timely manner.
-
-Thank you
-```
-
-### Degree of Parallelization
-
-In most cases there are many small tables and a few very large tables, so the duration of the overall migration mostly depends on these large tables. Increasing the number of parallel tables won't speed up large tables; instead, the number of workers should be increased. The workers determine how fast a single table can be migrated: the more workers there are, the more batches of a large table can be executed in parallel. Therefore, it makes sense to reduce the number of parallel tables to a rather low value to save resources on the infrastructure, and to use those resources for increased batch parallelisation within the large tables.
-
-How many workers should be set for both readers and writers depends on the power of the involved databases and the underlying infrastructure.
-Since reading is typically faster than writing, a ratio of 1:3 (3 writer workers for 1 reader worker) should be fine.
-Have a look at the benchmarks to see how far you can go with the parallelisation.
-Keep in mind that processing 2 tables in parallel already leads to `2 * rWorkers + 2 * wWorkers` threads / connections in total.
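-
-As a reference point, the 1h35m benchmark run above used settings along these lines:
-
-```
-migration.data.maxparalleltablecopy=2
-migration.data.workers.reader.maxtasks=5
-migration.data.workers.writer.maxtasks=15
-migration.data.reader.batchsize=3000
-migration.data.indices.disable.enabled=true
-```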
-
-
-### Memory & CPU
-
-By increasing the degree of parallelization you can easily overload the system, which may lead to out-of-memory errors.
-
-> **NOTE**: On SAP Commerce Cloud, an out of memory exception can sometimes be hidden. Typically you know you were running out of memory if the pod (backoffice) immediately shuts down or restarts without further notice (related to SAP Commerce Cloud health checks).
-
-Have a close look at the memory metrics and make sure memory stays in a healthy range throughout the copy process.
-To solve memory issues, either decrease the degree of parallelization or reduce the capacity of the data pipe.
-
-
-### DB Connections
-
-Some properties may impact each other, which can lead to side effects.
-
-Given:
-
-`migration.data.maxparalleltablecopy`
-
-`migration.data.workers.reader.maxtasks`
-
-`migration.data.workers.writer.maxtasks`
-
-`migration.ds.source.db.connection.pool.size.active.max`
-
-`migration.ds.target.db.connection.pool.size.active.max`
-
-
-The required number of database connections can be estimated as follows:
-
-`#[dbconnectionstarget] >= #[maxparalleltablecopy] * #[maxwritertasks]`
-
-`#[dbconnectionssource] >= #[maxparalleltablecopy] * #[maxreadertasks]`
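-
-For example, with `migration.data.maxparalleltablecopy=2` and the default 3 reader and 10 writer tasks, the pools need at least:
-
-```
-migration.ds.source.db.connection.pool.size.active.max=6
-migration.ds.target.db.connection.pool.size.active.max=20
-```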
-
-
-
-
-### Disabling Indexes
-
-Indexes can be a bottleneck when inserting batches.
-MS SQL Server offers a way to temporarily disable indexes during the copy process.
-This can be done using the property:
-
-`migration.data.indices.disable.enabled`
-
-This will disable the indexes on a table right before the copy starts. Once finished, they will be rebuilt.
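-
-For example, to disable indices only for the largest tables during the copy (the table names here are just for illustration):
-
-```
-migration.data.indices.disable.enabled=true
-migration.data.indices.disable.included=orders,orderentries
-```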
-
-> **NOTE**: Re-enabling the indexes itself may take quite some time for large tables and this may temporarily slow down and lock the copy process.
-
-> **NOTE**: Disabling the indexes can have the unwanted side effect that duplicate key inserts won't be detected and reported. Therefore only do this if you are sure that no duplicates are around.
diff --git a/commercemigration/resources/doc/performance/performance_architecture.png b/commercemigration/resources/doc/performance/performance_architecture.png
deleted file mode 100644
index 4164152..0000000
Binary files a/commercemigration/resources/doc/performance/performance_architecture.png and /dev/null differ
diff --git a/commercemigration/resources/doc/performance/template_for_scheduled_operational_activity.docx b/commercemigration/resources/doc/performance/template_for_scheduled_operational_activity.docx
deleted file mode 100644
index 8417a4c..0000000
Binary files a/commercemigration/resources/doc/performance/template_for_scheduled_operational_activity.docx and /dev/null differ
diff --git a/commercemigration/resources/doc/prerequisites/PREREQUISITES-GUIDE.md b/commercemigration/resources/doc/prerequisites/PREREQUISITES-GUIDE.md
deleted file mode 100644
index a4b7aea..0000000
--- a/commercemigration/resources/doc/prerequisites/PREREQUISITES-GUIDE.md
+++ /dev/null
@@ -1,22 +0,0 @@
-# CMT - Prerequisites Guide
-
-Carefully read the prerequisites and make sure you meet the requirements before you commence the migration. Some of the prerequisites may require code adaptations or database cleanup tasks to prepare for the migration, therefore make sure you reserve enough time so that you can adhere to your project plan.
-
-## Prerequisites
-
-Before you begin, ensure you have met the following requirements:
-
-* Your code base is compatible with the SAP Commerce version required by SAP Commerce Cloud (at minimum).
-* The code base is exactly the same in both target and source systems. It includes:
- * platform version
- * custom extensions
- * set of configured extensions
- * type system definition as specified in \*-items.xml
-* Database-specific attribute data types must be compatible with the target database
-* Orphaned-types cleanup has been performed in the source system. Data referencing deleted types has been removed.
-* The target system is in a state where it can be initialized and the data imported
-* The source system is updated with the same \*-items.xml as deployed on the target system (i.e. an update system has been performed)
-* The connectivity to the source database from SAP Commerce Cloud happens via a secured channel, such as the self-serviced VPN that can be created in SAP Commerce Cloud Portal. It is obligatory, and the customer's responsibility, to secure the data transmission
-* Old type systems have been deleted in the source system
-* A check for duplicates has been performed and existing duplicates in the source database have been removed
-* The task engine has been disabled in all target nodes (cronjob.timertask.loadonstartup=false)
diff --git a/commercemigration/resources/doc/security/SECURITY-GUIDE.md b/commercemigration/resources/doc/security/SECURITY-GUIDE.md
deleted file mode 100644
index 439fd17..0000000
--- a/commercemigration/resources/doc/security/SECURITY-GUIDE.md
+++ /dev/null
@@ -1,77 +0,0 @@
-# CMT - Security Guide
-
-Before you proceed, please make sure you acknowledge the security recommendations below:
-
-## VPN access to the source database is mandatory
- * The data transfer over a non-authenticated JDBC channel can lead to illegitimate access and undesired data leak or manipulation.
- * Therefore, access to the database through a VPN is mandatory to block unauthorised access.
- * To setup a VPN connection, use the VPN self-service functionality provided by SAP Commerce Cloud Portal.
-
-## Transmission of data over non-encrypted channel
-
- * It is mandatory to enforce TLS on the source DB server.
- * It is mandatory to enforce the usage of TLS v1.2 or v1.3, and to support only strong cipher suites.
-
-## Accounts and Credentials
-
- * Use a dedicated read only database user for the data migration on the source database. Don't forget to remove this user once the migration activities are finished.
- * Use a dedicated HAC account during the migration. Create the account on both the source and target system. Remove the account once the migration activities are finished.
- * The 'Users' table will be overwritten during the migration. Reset admin users' passwords after the migration.
-
-## System Availability
-
- * The data migration increases the load on the source infrastructure (database); therefore it is mandatory to stop the applications on the source environment.
- * This is especially true when several migrations run at the same time, so be sure to avoid running multiple migrations concurrently.
- * When using the staged approach, you could end up with many staged copies in the target database, which can impact the availability of the target database. Therefore the number of staged copies is limited by default (see property: `migration.ds.target.db.max.stage.migrations`)
-
-## Cleanup
-
-It is mandatory to leave the system in a clean state:
- * Remove the migration extensions after the migration. This applies to all environments once you have finished the migration activities, including the production environment.
- * Delete the tables that are resulting from the staged migrations and not required for the functioning of the application.
- * You may want to use the following to support cleanup: [Support Cleanup](../support/SUPPORT-GUIDE.md)
- * Be aware that it is ultimately your responsibility what data is stored in the target database.
-
-
-## Audit and Logging
-
-All actions triggered from CMT will be logged:
- * validate data source
- * preview schema migration
- * create schema script
- * execute schema script
- * run migration
- * stop migration
-
-The format is: `CMT Action: <action> - User: <user> - Time: <timestamp>`
-
-Example:
-
-```
-CMT Action: Data sources tab clicked - User:admin - Time:2021-03-10T10:27:29.675351
-CMT Action: Validate connections button clicked - User:admin - Time:2021-03-10T10:27:32.258041
-CMT Action: Validate connections button clicked - User:admin - Time:2021-03-10T10:27:36.223859
-CMT Action: Schema migration tab clicked - User:admin - Time:2021-03-10T10:27:38.188141
-CMT Action: Preview schema migration changes button clicked - User:admin - Time:2021-03-10T10:27:40.492816
-Starting preview of source and target db diff...
-....
-CMT Action: Data migration tab clicked - User:admin - Time:2021-03-10T10:28:31.993621
-CMT Action: Start data migration executed - User:admin - Time:2021-03-10T10:28:33.710384
-0/1 tables migrated. 0 failed. State: RUNNING
-173/173 processed. Completed in {223.6 ms}. Last Update: {2021-03-10T09:28:34.153}
-1/1 tables migrated. 0 failed. State: PROCESSED
-Migration finished (PROCESSED) in 00:00:00.296
-Migration finished on Node 0 with result false
-DefaultMigrationPostProcessor Finished
-Finished writing database migration report
-```
-
-## Reporting
-
-To be able to track past activities, the tool creates reports for the following actions:
-
- * SQL statements executed during schema migration (file name: timestamp of execution);
- * Summary of the migration copy process (file name: migration id)
-
-The reports are automatically written to the hotfolder blob storage ('migration' folder).
-Sensitive data (e.g. passwords) is not written to the reports.
diff --git a/commercemigration/resources/doc/support/SUPPORT-GUIDE.md b/commercemigration/resources/doc/support/SUPPORT-GUIDE.md
deleted file mode 100644
index 1eb5e51..0000000
--- a/commercemigration/resources/doc/support/SUPPORT-GUIDE.md
+++ /dev/null
@@ -1,79 +0,0 @@
-# CMT - Support Guide
-
-Here you can find some guidelines for members of the Cloud Support Team.
-
-## Staged migration approach
-
-To display a summary of the source and target schemas, you can utilize the following groovy script:
-
-``groovy/MigrationSummaryScript.groovy``
-
-Alternatively, copy the script from here:
-
-```
-package groovy
-
-import de.hybris.platform.util.Config
-import org.apache.commons.lang.StringUtils
-
-import java.util.stream.Collectors
-
-def result = generateMigrationSummary(migrationContext)
-println result
-return result
-
-def generateMigrationSummary(context) {
- StringBuilder sb = new StringBuilder();
- try {
- final String sourcePrefix = context.getDataSourceRepository().getDataSourceConfiguration().getTablePrefix();
- final String targetPrefix = context.getDataTargetRepository().getDataSourceConfiguration().getTablePrefix();
- final String dbPrefix = Config.getString("db.tableprefix", "");
- final Set sourceSet = context.getDataSourceRepository().getAllTableNames()
- .stream()
- .map({ tableName -> tableName.replace(sourcePrefix, "") })
- .collect(Collectors.toSet());
-
- final Set targetSet = context.getDataTargetRepository().getAllTableNames()
- sb.append("------------").append("\n")
- sb.append("All tables: ").append(sourceSet.size() + targetSet.size()).append("\n")
- sb.append("Source tables: ").append(sourceSet.size()).append("\n")
- sb.append("Target tables: ").append(targetSet.size()).append("\n")
- sb.append("------------").append("\n")
- sb.append("Source prefix: ").append(sourcePrefix).append("\n")
- sb.append("Target prefix: ").append(targetPrefix).append("\n")
- sb.append("DB prefix: ").append(dbPrefix).append("\n")
- sb.append("------------").append("\n")
- sb.append("Migration Type: ").append("\n")
- sb.append(StringUtils.isNotEmpty(dbPrefix) &&
- StringUtils.isNotEmpty(targetPrefix) && !StringUtils.equalsIgnoreCase(dbPrefix, targetPrefix) ? "STAGED" : "DIRECT").append("\n")
- sb.append("------------").append("\n")
- sb.append("Found prefixes:").append("\n")
-
- Map prefixes = new HashMap<>()
- targetSet.forEach({ tableName ->
- String srcTable = schemaDifferenceService.findCorrespondingSrcTable(sourceSet, tableName);
- String prefix = tableName.replace(srcTable, "");
- prefixes.put(prefix, targetSet.stream().filter({ e -> e.startsWith(prefix) }).count());
- });
- prefixes.forEach({ k, v -> sb.append("Prefix: ").append(k).append(" number of tables: ").append(v).append("\n") });
- sb.append("------------").append("\n");
-
- } catch (Exception e) {
- e.printStackTrace();
- }
- return sb.toString();
-}
-
-
-```
-
-It prints information as follows:
-* total number of all tables (source + target)
-* number of source tables
-* number of target tables
-* prefixes defined as property (source, target & current database schema)
-* all detected prefixes in target database with the number of tables
-
-You can use this information to remove staged tables generated during the data migration process.
-
-
diff --git a/commercemigration/resources/doc/support/support_groovy_preview.png b/commercemigration/resources/doc/support/support_groovy_preview.png
deleted file mode 100644
index 5b9582e..0000000
Binary files a/commercemigration/resources/doc/support/support_groovy_preview.png and /dev/null differ
diff --git a/commercemigration/resources/doc/troubleshooting/TROUBLESHOOTING-GUIDE.md b/commercemigration/resources/doc/troubleshooting/TROUBLESHOOTING-GUIDE.md
deleted file mode 100644
index 4380ef2..0000000
--- a/commercemigration/resources/doc/troubleshooting/TROUBLESHOOTING-GUIDE.md
+++ /dev/null
@@ -1,135 +0,0 @@
-# CMT - Troubleshooting Guide
-
-## Duplicate values for indexes
-
-Symptom:
-
-Pipeline aborts during copy process with message like:
-```
-FAILED! Reason: The CREATE UNIQUE INDEX statement terminated because a duplicate key was found for the object name 'dbo.cmtmedias' and the index name 'cmtcodeVersionIDX_30'. The duplicate key value is (DefaultCronJobFinishNotificationTemplate_de, ).
-```
-
-Solution:
-
-This can happen if you are using a case-sensitive collation on the source database, either at database level or at table/column level.
-The Commerce Cloud target database is case-insensitive by default and will treat values like 'ABC'/'abc' as equal during index creation.
-If possible, remove the duplicate rows before any migration activities. If this is not possible, consult Support.
-
-> **Note**: MySQL doesn't take NULL values into account for index checks. SQL Server does and will thus fail with duplicates.
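-
-To spot such case-insensitive duplicates on the source database up front, a check along the following lines can help (a sketch only; table and column names are illustrative):
-
-```
-SELECT LOWER(Code) AS normalizedvalue, COUNT(*) AS occurrences
-FROM medias
-GROUP BY LOWER(Code)
-HAVING COUNT(*) > 1;
-```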
-
-## Migration fails for unknown reason
-
-Symptom:
-
-If you were overloading the system for a longer period of time, one of the nodes may have been restarting in the background without notice.
-
-
-Solution:
-
-In any case, check the logs (Kibana).
-Check in Dynatrace whether a process crash log exists for the node.
-In case the process crashed, throttle the performance by changing the respective properties.
-
-
-## MySQL: xy table does not exist error
-
-Symptom:
-
-`java.sql.SQLSyntaxErrorException: Table '' doesn't exist`
-even though the table should exist.
-
-Solution:
-
-This is a changed behaviour in the 8.x driver compared to the previously used 5.x. In case there are multiple catalogs in the database, the driver distorts the reading of the table information...
-
-... add the url parameter
-
-`nullCatalogMeansCurrent=true`
-
-... to your JDBC connection URL and the error should disappear.
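-
-For example, the resulting source URL could look like this (host, port and schema are placeholders):
-
-```
-jdbc:mysql://[host]:[port]/[schema]?nullCatalogMeansCurrent=true
-```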
-
-## MySQL: java.sql.SQLException: HOUR_OF_DAY ...
-
-Symptom:
-
-
-```
-Caused by: java.sql.SQLException: HOUR_OF_DAY: 2 -> 3
-at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:129) ~[mysql-connector-java-8.0.19.jar:8.0.19]
-at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97) ~[mysql-connector-java-8.0.19.jar:8.0.19]
-at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:89) ~[mysql-connector-java-8.0.19.jar:8.0.19]
-at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:63) ~[mysql-connector-java-8.0.19.jar:8.0.19]
-at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:73) ~[mysql-connector-java-8.0.19.jar:8.0.19]
-at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:85) ~[mysql-connector-java-8.0.19.jar:8.0.19]
-at com.mysql.cj.jdbc.result.ResultSetImpl.getTimestamp(ResultSetImpl.java:903) ~[mysql-connector-java-8.0.19.jar:8.0.19]
-at com.mysql.cj.jdbc.result.ResultSetImpl.getObject(ResultSetImpl.java:1243) ~[mysql-connector-java-8.0.19.jar:8.0.19]
-```
-
-Solution:
-
-This is a known issue on MySQL when dealing with time/date objects. The workaround is to add...
-
-`&useTimezone=true&serverTimezone=UTC`
-
-...to your source connection string.
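-
-For example (host, port and schema are placeholders):
-
-```
-jdbc:mysql://[host]:[port]/[schema]?useTimezone=true&serverTimezone=UTC
-```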
-
-
-## Backoffice does not load
-
-Symptom:
-
-Backoffice does not load properly after the migration.
-
-Solution:
-
-- Use F4 mode (admin user) and reset the backoffice settings on the fly.
-- Reload the browser cache.
-
-## Proxy error in Hac
-
-Symptom:
-
-HAC throws / displays proxy errors when using migration features.
-
-Solution:
-
-Increase the default proxy timeout value in the Commerce Cloud Portal.
-This can be done on the edit view of the respective endpoint.
-
-## MSSQL: Boolean type
-
-The boolean type in MSSQL is a bit data type storing 0/1 values.
-If you were using queries containing TRUE/FALSE values, you may have to change or convert the queries in your code to use the bit values.
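-
-For example (table and column names are illustrative), a query like:
-
-```
-SELECT * FROM products WHERE approved = TRUE;
-```
-
-would need to become:
-
-```
-SELECT * FROM products WHERE approved = 1;
-```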
-
-## Sudden increase of memory
-
-Symptom:
-
-The memory consumption is more or less stable throughout the copy process, but then suddenly increases for certain table(s).
-
-Solution:
-
-If batching of reading and writing is not possible due to the definition of the source table, the copy process falls back to a non-batched mechanism.
-This requires loading the full table into memory at once which, depending on the table size, may lead to unhealthy memory consumption.
-For small tables this is typically not an issue, but for large tables it should be mitigated, for example by reviewing the indexes.
-
-## Some tables are copied over very slowly
-
-Symptom:
-
-While some tables are copied smoothly, others seem to suffer from low throughput.
-This may happen for the props table, for example.
-
-Solution:
-
-The copy process tries to apply batching for reading and writing where possible.
-For this, the source table is scanned for either a 'PK' column (normal Commerce table) or an 'ID' column (audit tables).
-Some tables don't have such a column, like the props table. In this case the copy process tries to identify the smallest unique (compound) index and uses it for batching.
-If a table is slow, check the following:
-- Does an ID or PK column exist?
-- Is the ID or PK column backed by a unique index?
-- Does any other unique index exist?
-
-If the smallest compound unique index consists of too many columns, the reading may impose high processing load on the source database due to the sort buffer running full.
-Depending on the source database, you may have to tweak some db settings to efficiently process the query.
-Alternatively you may have to think about adding a custom unique index manually.
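-
-As a sketch, and only after verifying that the chosen column combination is actually unique, such a custom index could look like this (table and column names are illustrative):
-
-```
-CREATE UNIQUE INDEX props_batch_idx ON props (ITEMPK, NAME);
-```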
diff --git a/commercemigration/resources/doc/user/CMT-Hints.md b/commercemigration/resources/doc/user/CMT-Hints.md
deleted file mode 100644
index b0138f4..0000000
--- a/commercemigration/resources/doc/user/CMT-Hints.md
+++ /dev/null
@@ -1,41 +0,0 @@
-# CMT hints
-## Common problems / recommendations
-Problems with validating the connection to the CCV1 DB:
-* Authentication problem - after a few bad login attempts you might lock your DB user, and you will have to ask the DBA to unlock it.
-* Authentication problem - sometimes the DBA generates a password containing a # sign, which is read as a comment marker so everything after it is ignored. I tried to escape that character but couldn't, so I think the best way is to ask the DBA to regenerate the password.
-* Missing schema in the connection string. By default, HANA tries to use the schema derived from the user, which would be dbmigration, but this schema is empty; add the parameter currentSchema=${migration.ds.source.db.schema} to the connection string to log into the correct schema. If you don't know your schema, it is usually the first 3 letters of the customer code plus the environment code; for example, for Distrelec the s1 schema would be dats, but sometimes there is also a number at the end, so it might be datd2.
-* Timeout problem - check whether you are using the NAT IP (it usually starts with 100). If you are using the proper IP, there might also be missing NATing on the CCV2 side or problems with the VPN.
-* Getting logged out of HAC during migration - I recommend using a separate browser for the migration and not clicking anything in it; a separate tab is not enough, because the browser can suspend the tab and you will be logged out.
-
-Further recommendations:
-* You can add the property log4j2.logger.migrationToolkit.level=DEBUG so you will also see all progress in Kibana.
-* Increase the batch size.
-* Scale up the backoffice aspect.
-* Shut down all aspects except backoffice. If you have running hotfolders, they might process some data during that time using the initialized type system and might break the type system.
-* Add properties for dropping indexes. It might help if you have some duplicates in the DB:
-  * migration.data.indices.drop.enabled=true
-  * migration.data.indices.disable.enabled=true
-
-During one migration I also saw "migration stalled" in the logs, even though the migration process was working properly - probably a CMT bug. I fixed that problem by adding the property migration.stalled.timeout=20000 (by default it is 2 h).
-It is good to check how much RAM HANA is using before the migration, because high memory usage can cause random problems and interrupt the migration; it can be worth restarting HANA.
-## Performance
-During our Distrelec project, we made several migrations with different configs, and finally achieved over 3 times better performance.
-
-For all migrations, we used 4000 DTU.
-
-Our first migration was before the DB refresh and took over 9 h, which for a 70 GB DB on Azure is a very bad time. It was performed using the default project.properties parameters and the default scaling of the backoffice aspect (2 CPU, 6 GB RAM, 2 pods). In this run our migration didn't use much DTU, which in the old ADF approach wasn't a problem.
-
-Our next migration was after a refresh from production, so the DB was much bigger (120 GB on Azure after migration), and yet it took only 3:30 h. In this run we changed a few parameters:
-
-* migration.data.workers.reader.maxtasks=10
-* migration.data.workers.writer.maxtasks=20
-* migration.data.reader.batchsize=4000
- We noticed that HANA has better performance with queries returning bigger result sets than small ones; I think the default is 1000.
-
-We also scaled up backoffice to 6 CPU / 8 GB RAM and left only 1 pod, and as a result we got almost 3 times better performance. I think 6 CPU and 8 GB RAM is the maximum we can set ourselves as a migration team; pods with more CPU and RAM weren't stable in my case. In this run we used a lot more DTU, a few times reaching 100% usage, so performance was much better.
-
-Our last run was with a similar DB (120 GB) and failed because of a lack of memory on the pod. So we created a ticket to scale up the backoffice aspect to the maximum values L1 support can give, and we got 8 cores / 16 GB RAM. This run also gave us 10 min better results and the migration was stable.
-
-
-
-My recommendation is to always scale up backoffice at least to the maximum we can get on model-t (12 CPU / 20 GB RAM), and for bigger DBs to request more resources for the migration window and scale down after the migration. CPU and RAM usage during a CMT migration is huge and has a very big impact on performance. You can also try increasing the batch size.
-
-What you can do after the first migration is to check the migration times of all tables and clarify with the customer whether tables that took long to copy can be cleaned up with some retention time.
diff --git a/commercemigration/resources/doc/user/USER-GUIDE.md b/commercemigration/resources/doc/user/USER-GUIDE.md
deleted file mode 100644
index 2085b06..0000000
--- a/commercemigration/resources/doc/user/USER-GUIDE.md
+++ /dev/null
@@ -1,198 +0,0 @@
-# CMT - User Guide
-
-The toolkit can be used with three different approaches:
-1. Staged copy approach: this method allows you to use the SAP Commerce prefix feature to create a separate staged copy of your tables in the database. This way, while migrating, you can preserve a full copy of the existing database in your SAP Commerce Cloud subscription and migrate the data on the staged tables. When you are satisfied with the data copied in the staged tables, you can then switch the prefixes to shift your SAP Commerce installation to the migrated data. The main difference from the direct copy approach is in the configuration and usage of the prefixes within the extensions, and in terms of the cleanup at the end of the migration;
-2. Direct copy approach: this method directly overwrites the data of your database in SAP Commerce Cloud;
-3. Incremental approach: can be used after each of the previous approaches, to incrementally migrate some selected data. Please check the [Configure incremental data migration](../configuration/CONFIGURATION-GUIDE.md) section
-
-You can see more details below at [How to choose the best approach for my migration](#How-to-choose-the-best-approach-for-my-migration)
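-
-As an illustration of the incremental approach, the configuration could look roughly like this (a sketch only; property names and values are assumptions here, so consult the configuration reference linked above for the exact keys):
-
-```
-migration.data.incremental.enabled=true
-migration.data.incremental.tables=orders,orderentries
-migration.data.incremental.timestamp=2021-03-10T10:00:00+00:00
-```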
-
-## Proxy Timeout
-
-Some operations in the admin console may take more time to execute than the default proxy timeout in SAP Commerce Cloud allows.
-To make sure you don't run into a proxy timeout exception, please adjust the value in your endpoint accordingly:
-
-
-
-A value between 10 and 20 minutes should be safe. However, it depends on the components and systems involved.
-
-> **IMPORTANT**: make sure to revert the value to either the default value or a value that suits your needs after completion of the migration.
-
-## Install the extensions
-
-To install the Commerce Migration Toolkit, follow these steps:
-
-Add the following extensions to your localextensions.xml:
-```
-<extension name="commercemigration" />
-<extension name="commercemigrationhac" />
-```
-
-> **NOTE**: For SAP Commerce Cloud make sure the extensions are actually being loaded by the manifest.json
-
-Make sure you add the source db driver to commercemigration/lib if necessary.
-
-## Configure the extensions
-Configure the extensions as needed in your local.properties. See the [Property Configuration Reference](../configuration/CONFIGURATION-REFERENCE.md).
-
-At a minimum, you have to configure the connection to your source database. Here is an example for MySQL:
-
-```
-migration.ds.source.db.driver=com.mysql.jdbc.Driver
-migration.ds.source.db.url=jdbc:mysql://[host]:3600/localdev?useConfigs=maxPerformance&characterEncoding=utf8&useTimezone=true&serverTimezone=UTC&nullCatalogMeansCurrent=true
-migration.ds.source.db.username=[username]
-migration.ds.source.db.password=[pw]
-migration.ds.source.db.tableprefix=
-migration.ds.source.db.schema=localdev
-```
-
-> **NOTE**: If you are not running in SAP Commerce Cloud (i.e. locally), make sure the target database is MSSQL.
-
-> **NOTE for HANA DB users:** The database schema name must be configured directly in the URL. You can reference the **migration.ds.source.db.schema** property from the URL as follows: `&currentSchema=${migration.ds.source.db.schema}`
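-
-For example, a HANA source URL could then look like this (host and port are placeholders):
-
-```
-migration.ds.source.db.url=jdbc:sap://[host]:[port]/?currentSchema=${migration.ds.source.db.schema}
-```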
-
-## Build and start the platform
-
-Build and start the on-premise SAP Commerce platform.
-
-For a local installation:
-```
-> ant all initialize && ./hybrisserver.sh
-```
-
-On SAP Commerce Cloud:
-
-* Trigger a build and deploy to the respective environment with initialization (if not yet done).
-
-For the staged copy approach, each table prefix requires an initialization first. Imagine the following example scenario:
-
-* Commerce runtime uses the prefix 'cc'
-* Data is being migrated to the prefix 'cmt'
-
-For this, the system has to be initialized twice:
-1. ```db.tableprefix = cc``` for first initialization
-2. ```db.tableprefix = cmt``` for second initialization
-
-Once finished, use the following properties to control which prefix is for the commerce runtime and which prefix is used to copy data to:
-```
-migration.ds.target.db.tableprefix = cmt
-db.tableprefix = cc
-```
-
-
-## Establish a secure connection
-It is mandatory to establish a secure connection between your on-premise and SAP Commerce Cloud environments. To do so, you can use the [self-service VPN feature from the SAP Cloud Portal](https://help.sap.com/viewer/0fa6bcf4736c46f78c248512391eb467/v2005/en-US/f63dfaed22d949ed9aadbb7835584586.html) and for more information you can check out [this cool CXWorks article](https://www.sap.com/cxworks/article/435960520/setting_up_a_vpn_for_sap_commerce_cloud).
-
-## Data Source Validation
-After having established your secure connectivity, validate the source and target database connections. For this, open the HAC and go to Migration->Data Sources.
-
-
-
-## Check Schema Differences
-Check if there are any schema differences. For this, open the HAC and go to Migration->Schema Migration. By clicking "Preview Schema Migration Changes" you will see a list of schema differences, if any.
-
-
-
-In case there are schema differences, switch to the right tab and generate the SQL script to adjust the target schema.
-
-
-
-Make sure to review the script carefully and execute it only once you are confident it is correct.
-
-## Copy Schema
-After you have analysed all the schema differences and understood what data you want to migrate, you can use the "Migrate Schema" button to modify the target SAP Commerce Cloud schema and make it equivalent to the source schema. Please note, this operation executes the following in the target schema:
-* Create tables
-* Add/drop columns to existing tables
-
-In the event of a staged copy approach, the system detects how many stage tables already exist in the target database. If this number exceeds the pre-defined config parameter value
-`migration.ds.target.db.max.stage.migrations` (by default set to 5),
-numerous queries will be generated that remove all tables and indexes corresponding to the identified staged tables. Please note that the current schema stays untouched so as not to disrupt your system.
-When the system no longer detects any stage tables, you should see queries creating the new tables respectively.
-
-> **NOTE**: no changes are made to the source database.
-
-## Start the Data Migration
-Start the data migration. For this, open the HAC and go to Migration->Data Migration. Click on "Copy Source Data" to start the migration.
-
-
-
-The migration progress will be displayed in the HAC. It also shows some useful performance metrics:
-* Current memory utilisation
-* Current cpu utilisation
-* Current DTU utilisation (if available)
-* Current source and target db pool consumption
-* Current I/O (rows read / written)
-
-Check the console output for further migration progress information, e.g.:
-```
-...
-INFO [hybrisHTTP7] [CustomClusterDatabaseCopyScheduler] Node {0}. Migration-ID {1814262805}. {mediaformatlp->mediaformatlp} finished in {513.6 ms}...
-INFO [hybrisHTTP7] [CustomClusterDatabaseCopyScheduler] Node {0}. Migration-ID {1814262805}. {bundletemplatestatus->bundletemplatestatus} finished in {440.8 ms}...
-INFO [hybrisHTTP7] [CustomClusterDatabaseCopyScheduler] Node {0}. Migration-ID {1814262805}. {cxusertosegment->cxusertosegment} finished in {1.644 s}...
-INFO [hybrisHTTP7] [CustomClusterDatabaseCopyScheduler] Node {0}. Migration-ID {1814262805}. {triggerscj->triggerscj} finished in {410.8 ms}...
-INFO [hybrisHTTP7] [CustomClusterDatabaseCopyScheduler] Node {0}. Migration-ID {1814262805}. {droolskiebase->droolskiebase} finished in {303.5 ms}...
-INFO [hybrisHTTP7] [CustomClusterDatabaseCopyScheduler] Node {0}. Migration-ID {1814262805}. {306 of 306} tables migrated...
-INFO [hybrisHTTP7] [CustomClusterDatabaseCopyScheduler] Node {0}. Migration-ID {1814262805}. Tables migration took {25.57 s}...
-```
-> **NOTE**: The process will only take into consideration the intersection of the source and target tables. Tables that are in the source schema, but not in the target schema (and vice versa), will be ignored.
-
-## Verify the migrated data
-Check the UI to verify that all tables have been copied successfully. All the logs from the migration will be available in the Kibana interface. At the end of the data copy, you can find a report in the HAC and there will be a button available, enabling you to download the report.
-
-
-
-## Start the Media Migration
-While you are migrating the database, use the process described in the [azcopy cxworks](https://www.sap.com/cxworks/article/508629017/migrate_to_sap_commerce_cloud_migrate_media_with_azcopy) article to migrate your medias.
-
-## Perform update running system
-After both the database and media migrations are completed, perform an update running system. Do not skip this step, as it is fundamental for SAP Commerce Cloud to function correctly.
-
-### Direct approach
-For a local installation:
-```
-> ant updatesystem
-```
-
-On SAP Commerce Cloud:
-
-Execute a deployment with data migration mode "Migrate Data" (equivalent to system update).
-
-### Staged approach
-
-If you were using the staged approach, simply navigate to your properties file and invert the two prefixes configured at the beginning:
-```
-migration.ds.target.db.tableprefix = cc
-db.tableprefix = cmt
-```
-Execute a deployment with data migration mode "Migrate Data" (equivalent to system update).
-
-If you want to remove the set of pre-existing tables, you can:
-* generate SQL schema scripts once again (the system will detect some staged tables, see property 'migration.ds.target.db.max.stage.migrations'). You can review such a script and run it by clicking the "Execute script" button;
-* open a ticket to request SAP to remove such tables.
-
-## Test the migrated data
-
-Execute thorough testing on the migrated environment to ensure data quality. The data is copied one-to-one from the source database, so some adjustments might be needed after the copy process. Examples of adjustments include data stored in the database that refers to particular parts of the infrastructure that might have changed in SAP Commerce Cloud (e.g. Solr references, Data Hub references, etc.). Passwords are also migrated one-to-one; if you have changed the default encryption key in your source system, please refer to the section "Key Management and Key Rotation" of [this guide](https://help.sap.com/viewer/d0224eca81e249cb821f2cdf45a82ace/2005/en-US/8b2c75c886691014bc12b8b532a96f58.html) to align it in SAP Commerce Cloud.
-
-## Finish the migration process
-
-After the migration is finished, the Commerce Migration Toolkit should be removed. To achieve this, perform the reverse of the steps for [installing the CMT](#install-the-extensions), i.e. remove the following extensions from your localextensions.xml:
-```
-<extension name="commercemigration" />
-<extension name="commercemigrationhac" />
-```
-
-> **NOTE**: Bear in mind that the **commercemigration** extension disables the task engine. If you want to keep the Commerce Migration Toolkit installed (e.g. because more test migrations are planned on this environment), remember to re-enable the task engine by setting the `task.engine.loadonstartup=true` property in the CCv2 Portal until the next migration attempt is performed.
-
-## How to choose the best approach for my migration
-
-Staged copy approach:
-* By having a separate migration prefix, the code base in SAP Commerce Cloud differs from the on-premise one, allowing you to be more flexible in terms of executing an upgrade and migration at the same time. This is only valid until you switch the prefixes, so be sure to have executed thorough testing before doing this;
-* The original prefix tables can always be used as a safe rollback in case of issues;
-* This approach is recommended to ensure that you do not lose any data in the target system.
-
-Direct copy approach:
-* This approach can be used when you are very sure about the successful execution of the data migration and are OK with potentially having to re-initialize the system in case of problems.
-
-Incremental approach:
-* This should be used when the requirement for a short cutover time is critical and, for the tables that are not selected as part of the incremental migration, it is acceptable to either introduce some data freeze or ignore data changes after the initial bulk copy;
-* This approach can be used on top of both the staged copy and direct copy approach.
diff --git a/commercemigration/resources/doc/user/hac_migrate_data.png b/commercemigration/resources/doc/user/hac_migrate_data.png
deleted file mode 100644
index 2c310a2..0000000
Binary files a/commercemigration/resources/doc/user/hac_migrate_data.png and /dev/null differ
diff --git a/commercemigration/resources/doc/user/hac_report.png b/commercemigration/resources/doc/user/hac_report.png
deleted file mode 100644
index 6833931..0000000
Binary files a/commercemigration/resources/doc/user/hac_report.png and /dev/null differ
diff --git a/commercemigration/resources/doc/user/hac_schema_diff_exec.png b/commercemigration/resources/doc/user/hac_schema_diff_exec.png
deleted file mode 100644
index d2b6195..0000000
Binary files a/commercemigration/resources/doc/user/hac_schema_diff_exec.png and /dev/null differ
diff --git a/commercemigration/resources/doc/user/hac_schema_diff_prev.png b/commercemigration/resources/doc/user/hac_schema_diff_prev.png
deleted file mode 100644
index 2491267..0000000
Binary files a/commercemigration/resources/doc/user/hac_schema_diff_prev.png and /dev/null differ
diff --git a/commercemigration/resources/doc/user/hac_validate_ds.png b/commercemigration/resources/doc/user/hac_validate_ds.png
deleted file mode 100644
index 3cd6b5f..0000000
Binary files a/commercemigration/resources/doc/user/hac_validate_ds.png and /dev/null differ
diff --git a/commercemigration/resources/doc/user/proxy_timeout.png b/commercemigration/resources/doc/user/proxy_timeout.png
deleted file mode 100644
index fb15190..0000000
Binary files a/commercemigration/resources/doc/user/proxy_timeout.png and /dev/null differ
diff --git a/commercemigration/resources/groovy/MigrationSummaryScript.groovy b/commercemigration/resources/groovy/MigrationSummaryScript.groovy
deleted file mode 100644
index 6c98e6a..0000000
--- a/commercemigration/resources/groovy/MigrationSummaryScript.groovy
+++ /dev/null
@@ -1,45 +0,0 @@
-package groovy
-
-import de.hybris.platform.util.Config
-import org.apache.commons.lang.StringUtils
-import java.util.stream.Collectors
-
-def result = generateMigrationSummary(migrationContext)
-return result
-
-def generateMigrationSummary(context) {
- StringBuilder sb = new StringBuilder();
- final String sourcePrefix = context.getDataSourceRepository().getDataSourceConfiguration().getTablePrefix();
- final String targetPrefix = context.getDataTargetRepository().getDataSourceConfiguration().getTablePrefix();
- final String dbPrefix = Config.getString("db.tableprefix", "");
- final Set sourceSet = context.getDataSourceRepository().getAllTableNames()
- .stream()
- .map({ tableName -> tableName.replace(sourcePrefix, "") })
- .collect(Collectors.toSet());
-
- final Set targetSet = context.getDataTargetRepository().getAllTableNames()
- sb.append("------------").append("\n")
- sb.append("All tables: ").append(sourceSet.size() + targetSet.size()).append("\n")
- sb.append("Source tables: ").append(sourceSet.size()).append("\n")
- sb.append("Target tables: ").append(targetSet.size()).append("\n")
- sb.append("------------").append("\n")
- sb.append("Source prefix: ").append(sourcePrefix).append("\n")
- sb.append("Target prefix: ").append(targetPrefix).append("\n")
- sb.append("DB prefix: ").append(dbPrefix).append("\n")
- sb.append("------------").append("\n")
- sb.append("Migration Type: ").append("\n")
- sb.append(StringUtils.isNotEmpty(dbPrefix) &&
- StringUtils.isNotEmpty(targetPrefix) && !StringUtils.equalsIgnoreCase(dbPrefix, targetPrefix) ? "STAGED" : "DIRECT").append("\n")
- sb.append("------------").append("\n")
- sb.append("Found staging prefixes:").append("\n")
-
- Set prefixes = schemaDifferenceService.findStagingPrefixes(context);
- Map prefixesCount = new HashMap<>()
- prefixes.forEach({ prefix ->
- prefixesCount.put(prefix, targetSet.stream().filter({ e -> e.startsWith(prefix) }).count());
- });
- prefixesCount.forEach({ k, v -> sb.append("Prefix: ").append(k).append(" number of tables: ").append(v).append("\n") });
- sb.append("------------").append("\n");
- return sb.toString();
-}
-
diff --git a/commercemigration/resources/localization/commercemigration-locales_de.properties b/commercemigration/resources/localization/commercemigration-locales_de.properties
deleted file mode 100644
index 5a6a9c3..0000000
--- a/commercemigration/resources/localization/commercemigration-locales_de.properties
+++ /dev/null
@@ -1,27 +0,0 @@
-# -----------------------------------------------------------------------
-# [y] hybris Platform
-#
-# Copyright (c) 2018 SAP SE or an SAP affiliate company. All rights reserved.
-#
-# This software is the confidential and proprietary information of SAP
-# ("Confidential Information"). You shall not disclose such Confidential
-# Information and shall use it only in accordance with the terms of the
-# license agreement you entered into with SAP.
-# -----------------------------------------------------------------------
-# put localizations of item types into this file
-# Note that you can also add special localizations which
-# can be retrieved with the
-#
-# ...tools.localization.Localization.getLocalizedString(...)
-#
-# methods.
-#
-# syntax for type localizations:
-#
-# type..name=XY
-# type...name=XY
-# type..description=XY
-# type...description=XY
-#
-# yourcustomlocalekey=value
-
diff --git a/commercemigration/resources/localization/commercemigration-locales_en.properties b/commercemigration/resources/localization/commercemigration-locales_en.properties
deleted file mode 100644
index 5a6a9c3..0000000
--- a/commercemigration/resources/localization/commercemigration-locales_en.properties
+++ /dev/null
@@ -1,27 +0,0 @@
-# -----------------------------------------------------------------------
-# [y] hybris Platform
-#
-# Copyright (c) 2018 SAP SE or an SAP affiliate company. All rights reserved.
-#
-# This software is the confidential and proprietary information of SAP
-# ("Confidential Information"). You shall not disclose such Confidential
-# Information and shall use it only in accordance with the terms of the
-# license agreement you entered into with SAP.
-# -----------------------------------------------------------------------
-# put localizations of item types into this file
-# Note that you can also add special localizations which
-# can be retrieved with the
-#
-# ...tools.localization.Localization.getLocalizedString(...)
-#
-# methods.
-#
-# syntax for type localizations:
-#
-# type..name=XY
-# type...name=XY
-# type..description=XY
-# type...description=XY
-#
-# yourcustomlocalekey=value
-
diff --git a/commercemigration/resources/localization/commercemigration-locales_es.properties b/commercemigration/resources/localization/commercemigration-locales_es.properties
deleted file mode 100644
index 5a6a9c3..0000000
--- a/commercemigration/resources/localization/commercemigration-locales_es.properties
+++ /dev/null
@@ -1,27 +0,0 @@
-# -----------------------------------------------------------------------
-# [y] hybris Platform
-#
-# Copyright (c) 2018 SAP SE or an SAP affiliate company. All rights reserved.
-#
-# This software is the confidential and proprietary information of SAP
-# ("Confidential Information"). You shall not disclose such Confidential
-# Information and shall use it only in accordance with the terms of the
-# license agreement you entered into with SAP.
-# -----------------------------------------------------------------------
-# put localizations of item types into this file
-# Note that you can also add special localizations which
-# can be retrieved with the
-#
-# ...tools.localization.Localization.getLocalizedString(...)
-#
-# methods.
-#
-# syntax for type localizations:
-#
-# type..name=XY
-# type...name=XY
-# type..description=XY
-# type...description=XY
-#
-# yourcustomlocalekey=value
-
diff --git a/commercemigration/resources/localization/commercemigration-locales_fr.properties b/commercemigration/resources/localization/commercemigration-locales_fr.properties
deleted file mode 100644
index 5a6a9c3..0000000
--- a/commercemigration/resources/localization/commercemigration-locales_fr.properties
+++ /dev/null
@@ -1,27 +0,0 @@
-# -----------------------------------------------------------------------
-# [y] hybris Platform
-#
-# Copyright (c) 2018 SAP SE or an SAP affiliate company. All rights reserved.
-#
-# This software is the confidential and proprietary information of SAP
-# ("Confidential Information"). You shall not disclose such Confidential
-# Information and shall use it only in accordance with the terms of the
-# license agreement you entered into with SAP.
-# -----------------------------------------------------------------------
-# put localizations of item types into this file
-# Note that you can also add special localizations which
-# can be retrieved with the
-#
-# ...tools.localization.Localization.getLocalizedString(...)
-#
-# methods.
-#
-# syntax for type localizations:
-#
-# type..name=XY
-# type...name=XY
-# type..description=XY
-# type...description=XY
-#
-# yourcustomlocalekey=value
-
diff --git a/commercemigration/resources/localization/commercemigration-locales_it.properties b/commercemigration/resources/localization/commercemigration-locales_it.properties
deleted file mode 100644
index 5a6a9c3..0000000
--- a/commercemigration/resources/localization/commercemigration-locales_it.properties
+++ /dev/null
@@ -1,27 +0,0 @@
-# -----------------------------------------------------------------------
-# [y] hybris Platform
-#
-# Copyright (c) 2018 SAP SE or an SAP affiliate company. All rights reserved.
-#
-# This software is the confidential and proprietary information of SAP
-# ("Confidential Information"). You shall not disclose such Confidential
-# Information and shall use it only in accordance with the terms of the
-# license agreement you entered into with SAP.
-# -----------------------------------------------------------------------
-# put localizations of item types into this file
-# Note that you can also add special localizations which
-# can be retrieved with the
-#
-# ...tools.localization.Localization.getLocalizedString(...)
-#
-# methods.
-#
-# syntax for type localizations:
-#
-# type..name=XY
-# type...name=XY
-# type..description=XY
-# type...description=XY
-#
-# yourcustomlocalekey=value
-
diff --git a/commercemigration/resources/localization/commercemigration-locales_ja.properties b/commercemigration/resources/localization/commercemigration-locales_ja.properties
deleted file mode 100644
index 5a6a9c3..0000000
--- a/commercemigration/resources/localization/commercemigration-locales_ja.properties
+++ /dev/null
@@ -1,27 +0,0 @@
-# -----------------------------------------------------------------------
-# [y] hybris Platform
-#
-# Copyright (c) 2018 SAP SE or an SAP affiliate company. All rights reserved.
-#
-# This software is the confidential and proprietary information of SAP
-# ("Confidential Information"). You shall not disclose such Confidential
-# Information and shall use it only in accordance with the terms of the
-# license agreement you entered into with SAP.
-# -----------------------------------------------------------------------
-# put localizations of item types into this file
-# Note that you can also add special localizations which
-# can be retrieved with the
-#
-# ...tools.localization.Localization.getLocalizedString(...)
-#
-# methods.
-#
-# syntax for type localizations:
-#
-# type..name=XY
-# type...name=XY
-# type..description=XY
-# type...description=XY
-#
-# yourcustomlocalekey=value
-
diff --git a/commercemigration/resources/localization/commercemigration-locales_ko.properties b/commercemigration/resources/localization/commercemigration-locales_ko.properties
deleted file mode 100644
index 5a6a9c3..0000000
--- a/commercemigration/resources/localization/commercemigration-locales_ko.properties
+++ /dev/null
@@ -1,27 +0,0 @@
-# -----------------------------------------------------------------------
-# [y] hybris Platform
-#
-# Copyright (c) 2018 SAP SE or an SAP affiliate company. All rights reserved.
-#
-# This software is the confidential and proprietary information of SAP
-# ("Confidential Information"). You shall not disclose such Confidential
-# Information and shall use it only in accordance with the terms of the
-# license agreement you entered into with SAP.
-# -----------------------------------------------------------------------
-# put localizations of item types into this file
-# Note that you can also add special localizations which
-# can be retrieved with the
-#
-# ...tools.localization.Localization.getLocalizedString(...)
-#
-# methods.
-#
-# syntax for type localizations:
-#
-# type..name=XY
-# type...name=XY
-# type..description=XY
-# type...description=XY
-#
-# yourcustomlocalekey=value
-
diff --git a/commercemigration/resources/localization/commercemigration-locales_pt.properties b/commercemigration/resources/localization/commercemigration-locales_pt.properties
deleted file mode 100644
index 5a6a9c3..0000000
--- a/commercemigration/resources/localization/commercemigration-locales_pt.properties
+++ /dev/null
@@ -1,27 +0,0 @@
-# -----------------------------------------------------------------------
-# [y] hybris Platform
-#
-# Copyright (c) 2018 SAP SE or an SAP affiliate company. All rights reserved.
-#
-# This software is the confidential and proprietary information of SAP
-# ("Confidential Information"). You shall not disclose such Confidential
-# Information and shall use it only in accordance with the terms of the
-# license agreement you entered into with SAP.
-# -----------------------------------------------------------------------
-# put localizations of item types into this file
-# Note that you can also add special localizations which
-# can be retrieved with the
-#
-# ...tools.localization.Localization.getLocalizedString(...)
-#
-# methods.
-#
-# syntax for type localizations:
-#
-# type..name=XY
-# type...name=XY
-# type..description=XY
-# type...description=XY
-#
-# yourcustomlocalekey=value
-
diff --git a/commercemigration/resources/localization/commercemigration-locales_ru.properties b/commercemigration/resources/localization/commercemigration-locales_ru.properties
deleted file mode 100644
index 5a6a9c3..0000000
--- a/commercemigration/resources/localization/commercemigration-locales_ru.properties
+++ /dev/null
@@ -1,27 +0,0 @@
-# -----------------------------------------------------------------------
-# [y] hybris Platform
-#
-# Copyright (c) 2018 SAP SE or an SAP affiliate company. All rights reserved.
-#
-# This software is the confidential and proprietary information of SAP
-# ("Confidential Information"). You shall not disclose such Confidential
-# Information and shall use it only in accordance with the terms of the
-# license agreement you entered into with SAP.
-# -----------------------------------------------------------------------
-# put localizations of item types into this file
-# Note that you can also add special localizations which
-# can be retrieved with the
-#
-# ...tools.localization.Localization.getLocalizedString(...)
-#
-# methods.
-#
-# syntax for type localizations:
-#
-# type..name=XY
-# type...name=XY
-# type..description=XY
-# type...description=XY
-#
-# yourcustomlocalekey=value
-
diff --git a/commercemigration/resources/localization/commercemigration-locales_zh.properties b/commercemigration/resources/localization/commercemigration-locales_zh.properties
deleted file mode 100644
index 5a6a9c3..0000000
--- a/commercemigration/resources/localization/commercemigration-locales_zh.properties
+++ /dev/null
@@ -1,27 +0,0 @@
-# -----------------------------------------------------------------------
-# [y] hybris Platform
-#
-# Copyright (c) 2018 SAP SE or an SAP affiliate company. All rights reserved.
-#
-# This software is the confidential and proprietary information of SAP
-# ("Confidential Information"). You shall not disclose such Confidential
-# Information and shall use it only in accordance with the terms of the
-# license agreement you entered into with SAP.
-# -----------------------------------------------------------------------
-# put localizations of item types into this file
-# Note that you can also add special localizations which
-# can be retrieved with the
-#
-# ...tools.localization.Localization.getLocalizedString(...)
-#
-# methods.
-#
-# syntax for type localizations:
-#
-# type..name=XY
-# type...name=XY
-# type..description=XY
-# type...description=XY
-#
-# yourcustomlocalekey=value
-
diff --git a/commercemigration/resources/sql/createSchedulerTables.sql b/commercemigration/resources/sql/createSchedulerTables.sql
deleted file mode 100644
index 2b74b48..0000000
--- a/commercemigration/resources/sql/createSchedulerTables.sql
+++ /dev/null
@@ -1,112 +0,0 @@
-
-DROP TABLE IF EXISTS MIGRATIONTOOLKIT_TABLECOPYTASKS;
-
-CREATE TABLE MIGRATIONTOOLKIT_TABLECOPYTASKS (
- targetnodeId int NOT NULL,
- migrationId NVARCHAR(255) NOT NULL,
- pipelinename NVARCHAR(255) NOT NULL,
- sourcetablename NVARCHAR(255) NOT NULL,
- targettablename NVARCHAR(255) NOT NULL,
- columnmap NVARCHAR(MAX) NULL,
- duration NVARCHAR (255) NULL,
- sourcerowcount int NOT NULL DEFAULT 0,
- targetrowcount int NOT NULL DEFAULT 0,
- failure char(1) NOT NULL DEFAULT '0',
- error NVARCHAR(MAX) NULL,
- published char(1) NOT NULL DEFAULT '0',
- truncated char(1) NOT NULL DEFAULT '0',
- lastupdate DATETIME2 NOT NULL DEFAULT '0001-01-01 00:00:00',
- avgwriterrowthroughput numeric(10,2) NULL DEFAULT 0,
- avgreaderrowthroughput numeric(10,2) NULL DEFAULT 0,
- copymethod NVARCHAR(255) NULL,
- keycolumns NVARCHAR(255) NULL,
- PRIMARY KEY (migrationid, targetnodeid, pipelinename)
-);
-
-DROP TABLE IF EXISTS MIGRATIONTOOLKIT_TABLECOPYBATCHES;
-
-CREATE TABLE MIGRATIONTOOLKIT_TABLECOPYBATCHES (
- migrationId NVARCHAR(255) NOT NULL,
- batchId int NOT NULL DEFAULT 0,
- pipelinename NVARCHAR(255) NOT NULL,
- lowerBoundary NVARCHAR(255) NOT NULL,
- upperBoundary NVARCHAR(255) NULL,
- PRIMARY KEY (migrationid, batchId, pipelinename)
-);
-
-DROP TABLE IF EXISTS MIGRATIONTOOLKIT_TABLECOPYSTATUS;
-
-CREATE TABLE MIGRATIONTOOLKIT_TABLECOPYSTATUS (
- migrationId NVARCHAR(255) NOT NULL,
- startAt datetime2 NOT NULL DEFAULT GETUTCDATE(),
- endAt datetime2,
- lastUpdate datetime2,
- total int NOT NULL DEFAULT 0,
- completed int NOT NULL DEFAULT 0,
- failed int NOT NULL DEFAULT 0,
- status NVARCHAR(255) NOT NULL DEFAULT 'RUNNING',
- PRIMARY KEY (migrationid)
-);
-
-IF OBJECT_ID ('MIGRATIONTOOLKIT_TABLECOPYSTATUS_Update','TR') IS NOT NULL
- DROP TRIGGER MIGRATIONTOOLKIT_TABLECOPYSTATUS_Update;
-
-CREATE TRIGGER MIGRATIONTOOLKIT_TABLECOPYSTATUS_Update
-ON MIGRATIONTOOLKIT_TABLECOPYTASKS
-AFTER INSERT, UPDATE
-AS
-BEGIN
- DECLARE @relevant_count integer = 0
- SET NOCOUNT ON
- -- latest update overall = latest update timestamp of updated tasks
- UPDATE s
- SET s.lastUpdate = t.latestUpdate
- FROM MIGRATIONTOOLKIT_TABLECOPYSTATUS s
- INNER JOIN (
- SELECT migrationId, MAX(lastUpdate) AS latestUpdate
- FROM inserted
- GROUP BY migrationId
- ) AS t
- ON s.migrationId = t.migrationId
-
- SELECT @relevant_count = COUNT(pipelinename)
- FROM inserted
- WHERE failure = '1'
- OR duration IS NOT NULL
-
- IF @relevant_count > 0
- BEGIN
- -- updated completed count when tasks completed
- UPDATE s
- SET s.completed = t.completed
- FROM MIGRATIONTOOLKIT_TABLECOPYSTATUS s
- INNER JOIN (
- SELECT migrationId, COUNT(pipelinename) AS completed
- FROM MIGRATIONTOOLKIT_TABLECOPYTASKS
- WHERE duration IS NOT NULL
- GROUP BY migrationId
- ) AS t
- ON s.migrationId = t.migrationId
- -- update failed count when tasks failed
- UPDATE s
- SET s.failed = t.failed
- FROM MIGRATIONTOOLKIT_TABLECOPYSTATUS s
- INNER JOIN (
- SELECT migrationId, COUNT(pipelinename) AS failed
- FROM MIGRATIONTOOLKIT_TABLECOPYTASKS
- WHERE failure = '1'
- GROUP BY migrationId
- ) AS t
- ON s.migrationId = t.migrationId
-
- UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS
- SET endAt = GETUTCDATE()
- WHERE total = completed
- AND endAt IS NULL
-
- UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS
- SET status = 'PROCESSED'
- WHERE status = 'RUNNING'
- AND total = completed
- END
-END;
diff --git a/commercemigration/src/de/hybris/platform/azure/media/AzureCloudUtils.java b/commercemigration/src/de/hybris/platform/azure/media/AzureCloudUtils.java
deleted file mode 100644
index c98caac..0000000
--- a/commercemigration/src/de/hybris/platform/azure/media/AzureCloudUtils.java
+++ /dev/null
@@ -1,74 +0,0 @@
-/*
- * Copyright: 2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors.
- * License: Apache-2.0
-*/
-package de.hybris.platform.azure.media;
-
-import de.hybris.platform.core.Registry;
-import de.hybris.platform.media.storage.MediaStorageConfigService;
-import de.hybris.platform.util.Config;
-import org.apache.commons.lang.StringUtils;
-
-public class AzureCloudUtils {
- private static final int MIN_AZURE_MEDIA_FOLDER_QUALIFIER_SIZE = 3;
- private static final int MAX_AZURE_MEDIA_FOLDER_QUALIFIER_SIZE = 63;
- private static final String AZURE_MEDIA_FOLDER_QUALIFIER_REGEX = "[a-z0-9-]+";
- private static final char HYPHEN = '-';
- private static final String DOUBLE_HYPHEN = "--";
-
- public AzureCloudUtils() {
- }
-
- public static String computeContainerAddress(MediaStorageConfigService.MediaFolderConfig config) {
- String configuredContainer = config.getParameter("containerAddress");
- String addressSuffix = StringUtils.isNotBlank(configuredContainer)
- ? configuredContainer
- : config.getFolderQualifier();
- String addressPrefix = getTenantPrefix();
- return toValidContainerName(addressPrefix + "-" + addressSuffix);
- }
-
- private static String toValidContainerName(String name) {
- return name.toLowerCase().replaceAll("[/. !?]", "").replace('_', '-');
- }
-
- private static String toValidPrefixName(String name) {
- return name.toLowerCase().replaceAll("[/. !?_-]", "");
- }
-
- private static String getTenantPrefix() {
- // return "sys-" +
- // Registry.getCurrentTenantNoFallback().getTenantID().toLowerCase();
- String defaultPrefix = Registry.getCurrentTenantNoFallback().getTenantID();
- String prefix = toValidPrefixName(Config.getString("db.tableprefix", defaultPrefix));
- return "sys-" + prefix.toLowerCase();
- }
-
- public static boolean hasValidMediaFolderName(final MediaStorageConfigService.MediaFolderConfig config) {
- final String containerAddress = computeContainerAddress(config);
- return hasValidLength(containerAddress) && hasValidFormat(containerAddress);
- }
-
- private static boolean hasValidLength(final String folderQualifier) {
- return folderQualifier.length() >= MIN_AZURE_MEDIA_FOLDER_QUALIFIER_SIZE
- && folderQualifier.length() <= MAX_AZURE_MEDIA_FOLDER_QUALIFIER_SIZE;
- }
-
- private static boolean hasValidFormat(final String folderQualifier) {
- if (!folderQualifier.matches(AZURE_MEDIA_FOLDER_QUALIFIER_REGEX)) {
- return false;
- }
-
- if (folderQualifier.contains(String.valueOf(HYPHEN))) {
- return hasHyphenValidFormat(folderQualifier);
- }
-
- return true;
- }
-
- private static boolean hasHyphenValidFormat(final String folderQualifier) {
- final char firstChar = folderQualifier.charAt(0);
- final char lastChar = folderQualifier.charAt(folderQualifier.length() - 1);
- return !folderQualifier.contains(DOUBLE_HYPHEN) && firstChar != HYPHEN && lastChar != HYPHEN;
- }
-}
diff --git a/commercemigration/src/de/hybris/platform/core/TenantPropertiesLoader.java b/commercemigration/src/de/hybris/platform/core/TenantPropertiesLoader.java
deleted file mode 100644
index 578cfbb..0000000
--- a/commercemigration/src/de/hybris/platform/core/TenantPropertiesLoader.java
+++ /dev/null
@@ -1,28 +0,0 @@
-/*
- * Copyright: 2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors.
- * License: Apache-2.0
-*/
-package de.hybris.platform.core;
-
-import de.hybris.bootstrap.ddl.PropertiesLoader;
-
-import java.util.Objects;
-
-public class TenantPropertiesLoader implements PropertiesLoader {
- private final Tenant tenant;
-
- public TenantPropertiesLoader(final Tenant tenant) {
- Objects.requireNonNull(tenant);
- this.tenant = tenant;
- }
-
- @Override
- public String getProperty(final String key) {
- return tenant.getConfig().getParameter(key);
- }
-
- @Override
- public String getProperty(final String key, final String defaultValue) {
- return tenant.getConfig().getString(key, defaultValue);
- }
-}
diff --git a/commercemigration/src/org/sap/commercemigration/CommercemigrationStandalone.java b/commercemigration/src/org/sap/commercemigration/CommercemigrationStandalone.java
deleted file mode 100644
index 59cc738..0000000
--- a/commercemigration/src/org/sap/commercemigration/CommercemigrationStandalone.java
+++ /dev/null
@@ -1,44 +0,0 @@
-/*
- * Copyright: 2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors.
- * License: Apache-2.0
-*/
-package org.sap.commercemigration;
-
-import de.hybris.platform.core.Registry;
-import de.hybris.platform.jalo.JaloSession;
-import de.hybris.platform.util.RedeployUtilities;
-import de.hybris.platform.util.Utilities;
-
-/**
- * Demonstration of how to write a standalone application that can be run
- * directly from within eclipse or from the commandline.
- * To run this from commandline, just use the following command:
- *
- * java -jar bootstrap/bin/ybootstrap.jar "new org.sap.commercemigration.CommercemigrationStandalone().run();"
- * From eclipse, just run as Java Application. Note that you may need
- * to add all other projects like ext-commerce, ext-pim to the Launch
- * configuration classpath.
- */
-public class CommercemigrationStandalone {
- /**
- * Main class to be able to run it directly as a java program.
- *
- * @param args
- * the arguments from commandline
- */
- public static void main(final String[] args) {
- new CommercemigrationStandalone().run();
- }
-
- public void run() {
- Registry.activateStandaloneMode();
- Registry.activateMasterTenant();
-
- final JaloSession jaloSession = JaloSession.getCurrentSession();
- System.out.println("Session ID: " + jaloSession.getSessionID()); // NOPMD
- System.out.println("User: " + jaloSession.getUser()); // NOPMD
- Utilities.printAppInfo();
-
- RedeployUtilities.shutdown();
- }
-}
diff --git a/commercemigration/src/org/sap/commercemigration/adapter/DataRepositoryAdapter.java b/commercemigration/src/org/sap/commercemigration/adapter/DataRepositoryAdapter.java
deleted file mode 100644
index ffb084e..0000000
--- a/commercemigration/src/org/sap/commercemigration/adapter/DataRepositoryAdapter.java
+++ /dev/null
@@ -1,25 +0,0 @@
-/*
- * Copyright: 2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors.
- * License: Apache-2.0
-*/
-package org.sap.commercemigration.adapter;
-
-import org.sap.commercemigration.MarkersQueryDefinition;
-import org.sap.commercemigration.OffsetQueryDefinition;
-import org.sap.commercemigration.SeekQueryDefinition;
-import org.sap.commercemigration.context.MigrationContext;
-import org.sap.commercemigration.dataset.DataSet;
-
-public interface DataRepositoryAdapter {
- long getRowCount(MigrationContext context, String table) throws Exception;
-
- DataSet getAll(MigrationContext context, String table) throws Exception;
-
- DataSet getBatchWithoutIdentifier(MigrationContext context, OffsetQueryDefinition queryDefinition) throws Exception;
-
- DataSet getBatchOrderedByColumn(MigrationContext context, SeekQueryDefinition queryDefinition) throws Exception;
-
- DataSet getBatchMarkersOrderedByColumn(MigrationContext context, MarkersQueryDefinition queryDefinition)
- throws Exception;
-
-}
diff --git a/commercemigration/src/org/sap/commercemigration/adapter/impl/ContextualDataRepositoryAdapter.java b/commercemigration/src/org/sap/commercemigration/adapter/impl/ContextualDataRepositoryAdapter.java
deleted file mode 100644
index c9a89db..0000000
--- a/commercemigration/src/org/sap/commercemigration/adapter/impl/ContextualDataRepositoryAdapter.java
+++ /dev/null
@@ -1,87 +0,0 @@
-/*
- * Copyright: 2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors.
- * License: Apache-2.0
-*/
-package org.sap.commercemigration.adapter.impl;
-
-import org.sap.commercemigration.MarkersQueryDefinition;
-import org.sap.commercemigration.OffsetQueryDefinition;
-import org.sap.commercemigration.SeekQueryDefinition;
-import org.sap.commercemigration.adapter.DataRepositoryAdapter;
-import org.sap.commercemigration.constants.CommercemigrationConstants;
-import org.sap.commercemigration.context.MigrationContext;
-import org.sap.commercemigration.dataset.DataSet;
-import org.sap.commercemigration.repository.DataRepository;
-
-import java.time.Instant;
-
-/**
- * Controls the way the repository is accessed by adapting the most common
- * reading operations based on the configured context
- */
-public class ContextualDataRepositoryAdapter implements DataRepositoryAdapter {
-
- private DataRepository repository;
-
- public ContextualDataRepositoryAdapter(DataRepository repository) {
- this.repository = repository;
- }
-
- @Override
- public long getRowCount(MigrationContext context, String table) throws Exception {
- if (context.isIncrementalModeEnabled()) {
- return repository.getRowCountModifiedAfter(table, getIncrementalTimestamp(context));
- } else {
- return repository.getRowCount(table);
- }
- }
-
- @Override
- public DataSet getAll(MigrationContext context, String table) throws Exception {
- if (context.isIncrementalModeEnabled()) {
- return repository.getAllModifiedAfter(table, getIncrementalTimestamp(context));
- } else {
- return repository.getAll(table);
- }
- }
-
- @Override
- public DataSet getBatchWithoutIdentifier(MigrationContext context, OffsetQueryDefinition queryDefinition)
- throws Exception {
- if (context.isIncrementalModeEnabled()) {
- return repository.getBatchWithoutIdentifier(queryDefinition, getIncrementalTimestamp(context));
- } else {
- return repository.getBatchWithoutIdentifier(queryDefinition);
- }
- }
-
- @Override
- public DataSet getBatchOrderedByColumn(MigrationContext context, SeekQueryDefinition queryDefinition)
- throws Exception {
- if (context.isIncrementalModeEnabled()) {
- return repository.getBatchOrderedByColumn(queryDefinition, getIncrementalTimestamp(context));
- } else {
- return repository.getBatchOrderedByColumn(queryDefinition);
- }
- }
-
- @Override
- public DataSet getBatchMarkersOrderedByColumn(MigrationContext context, MarkersQueryDefinition queryDefinition)
- throws Exception {
- if (context.isIncrementalModeEnabled()) {
- return repository.getBatchMarkersOrderedByColumn(queryDefinition, getIncrementalTimestamp(context));
- } else {
- return repository.getBatchMarkersOrderedByColumn(queryDefinition);
- }
- }
-
- private Instant getIncrementalTimestamp(MigrationContext context) {
- Instant incrementalTimestamp = context.getIncrementalTimestamp();
- if (incrementalTimestamp == null) {
- throw new IllegalStateException(
- "Timestamp cannot be null in incremental mode. Set a timestamp using the property "
- + CommercemigrationConstants.MIGRATION_DATA_INCREMENTAL_TIMESTAMP);
- }
- return incrementalTimestamp;
- }
-}
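This adapter is the single switch point for incremental runs: every read is routed to either the full-table or the modified-after variant of the repository call, and a missing timestamp fails fast with an `IllegalStateException`. A usage sketch, assuming a wired `DataRepository` and two `MigrationContext` instances (all names here are illustrative):

```java
// 'sourceRepository', 'fullContext' and 'incrementalContext' are assumed
// to be provided by the toolkit's Spring wiring.
DataRepositoryAdapter adapter = new ContextualDataRepositoryAdapter(sourceRepository);

// Full migration: delegates to repository.getRowCount("products")
long fullCount = adapter.getRowCount(fullContext, "products");

// Incremental migration: delegates to repository.getRowCountModifiedAfter(..),
// using the timestamp configured via MIGRATION_DATA_INCREMENTAL_TIMESTAMP.
long deltaCount = adapter.getRowCount(incrementalContext, "products");
```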
diff --git a/commercemigration/src/org/sap/commercemigration/concurrent/DataCopyMethod.java b/commercemigration/src/org/sap/commercemigration/concurrent/DataCopyMethod.java
deleted file mode 100644
index 4b96c25..0000000
--- a/commercemigration/src/org/sap/commercemigration/concurrent/DataCopyMethod.java
+++ /dev/null
@@ -1,9 +0,0 @@
-/*
- * Copyright: 2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors.
- * License: Apache-2.0
-*/
-package org.sap.commercemigration.concurrent;
-
-public enum DataCopyMethod {
- SEEK, OFFSET, DEFAULT
-}
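These values correspond to the three read strategies chosen in `DefaultDataPipeFactory.scheduleWorkers` further down in this diff. As a rough, hedged sketch, the SQL shape each strategy implies could look like the following; the real statements are dialect-specific and built by the repository implementations:

```java
// Hypothetical helper, for illustration only.
static String sqlFor(DataCopyMethod method, String table, String keyColumn, long batchSize) {
	switch (method) {
		case SEEK : // keyset pagination over an indexed column such as PK or ID
			return "SELECT * FROM " + table + " WHERE " + keyColumn + " >= ? AND " + keyColumn
					+ " < ? ORDER BY " + keyColumn;
		case OFFSET : // offset pagination ordered by the table's unique columns
			return "SELECT * FROM " + table + " ORDER BY " + keyColumn + " OFFSET ? ROWS FETCH NEXT "
					+ batchSize + " ROWS ONLY";
		default : // DEFAULT: a single, unbatched full-table read
			return "SELECT * FROM " + table;
	}
}
```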
diff --git a/commercemigration/src/org/sap/commercemigration/concurrent/DataPipe.java b/commercemigration/src/org/sap/commercemigration/concurrent/DataPipe.java
deleted file mode 100644
index 19b1ef4..0000000
--- a/commercemigration/src/org/sap/commercemigration/concurrent/DataPipe.java
+++ /dev/null
@@ -1,27 +0,0 @@
-/*
- * Copyright: 2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors.
- * License: Apache-2.0
-*/
-package org.sap.commercemigration.concurrent;
-
-import javax.annotation.concurrent.ThreadSafe;
-
-/**
- * Used to separate database reading and writing operations: after data has
- * been read from the source DB, the result is put into the pipe and can be
- * consumed asynchronously by the database writer later on.
- *
- * @param <T> the payload type transported through the pipe
- */
-@ThreadSafe
-public interface DataPipe<T> {
- void requestAbort(Exception e);
-
-	void put(MaybeFinished<T> value) throws Exception;
-
-	MaybeFinished<T> get() throws Exception;
-
- int size();
-
- int getWaitersCount();
-}
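A sketch of the intended producer/consumer handshake around this contract, using the `MaybeFinished` wrapper defined later in this diff; `pipe`, `dataSet`, and `writeBatch` are illustrative assumptions, not toolkit code:

```java
// Reader side (producer): publish batches, then signal completion.
pipe.put(MaybeFinished.of(dataSet));             // one batch of rows
pipe.put(MaybeFinished.finished(DataSet.EMPTY)); // no more batches

// Writer side (consumer): drain until the finished marker, fail fast on poison.
MaybeFinished<DataSet> next;
do {
	next = pipe.get();
	if (next.isPoison()) {
		throw new IllegalStateException("reader failed, aborting writer");
	}
	writeBatch(next.getValue()); // hypothetical write helper
} while (!next.isDone());
```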
diff --git a/commercemigration/src/org/sap/commercemigration/concurrent/DataPipeFactory.java b/commercemigration/src/org/sap/commercemigration/concurrent/DataPipeFactory.java
deleted file mode 100644
index cb8efc1..0000000
--- a/commercemigration/src/org/sap/commercemigration/concurrent/DataPipeFactory.java
+++ /dev/null
@@ -1,14 +0,0 @@
-/*
- * Copyright: 2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors.
- * License: Apache-2.0
-*/
-package org.sap.commercemigration.concurrent;
-
-import javax.annotation.concurrent.ThreadSafe;
-
-import org.sap.commercemigration.context.CopyContext;
-
-@ThreadSafe
-public interface DataPipeFactory<T> {
-	DataPipe<T> create(CopyContext context, CopyContext.DataCopyItem item) throws Exception;
-}
diff --git a/commercemigration/src/org/sap/commercemigration/concurrent/DataThreadPoolConfigBuilder.java b/commercemigration/src/org/sap/commercemigration/concurrent/DataThreadPoolConfigBuilder.java
deleted file mode 100644
index 8d7f29e..0000000
--- a/commercemigration/src/org/sap/commercemigration/concurrent/DataThreadPoolConfigBuilder.java
+++ /dev/null
@@ -1,26 +0,0 @@
-/*
- * Copyright: 2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors.
- * License: Apache-2.0
-*/
-package org.sap.commercemigration.concurrent;
-
-import org.sap.commercemigration.DataThreadPoolConfig;
-import org.sap.commercemigration.context.MigrationContext;
-
-public class DataThreadPoolConfigBuilder {
-
- private DataThreadPoolConfig config;
-
- public DataThreadPoolConfigBuilder(MigrationContext context) {
- config = new DataThreadPoolConfig();
- }
-
- public DataThreadPoolConfigBuilder withPoolSize(int poolSize) {
- config.setPoolSize(poolSize);
- return this;
- }
-
- public DataThreadPoolConfig build() {
- return config;
- }
-}
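Usage is a one-liner; this mirrors the call site in `DefaultDataPipeFactory.create(..)` further down in this diff (`migrationContext` is assumed to be in scope):

```java
// Builds the reader worker pool configuration from the migration context.
DataThreadPoolConfig readerPoolConfig = new DataThreadPoolConfigBuilder(migrationContext)
		.withPoolSize(migrationContext.getMaxParallelReaderWorkers())
		.build();
```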
diff --git a/commercemigration/src/org/sap/commercemigration/concurrent/DataThreadPoolFactory.java b/commercemigration/src/org/sap/commercemigration/concurrent/DataThreadPoolFactory.java
deleted file mode 100644
index fff6690..0000000
--- a/commercemigration/src/org/sap/commercemigration/concurrent/DataThreadPoolFactory.java
+++ /dev/null
@@ -1,17 +0,0 @@
-/*
- * Copyright: 2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors.
- * License: Apache-2.0
-*/
-package org.sap.commercemigration.concurrent;
-
-import org.sap.commercemigration.DataThreadPoolConfig;
-import org.sap.commercemigration.context.CopyContext;
-import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
-
-public interface DataThreadPoolFactory {
- ThreadPoolTaskExecutor create(CopyContext context, DataThreadPoolConfig config);
-
- void destroy(ThreadPoolTaskExecutor executor);
-
- DataThreadPoolMonitor getMonitor();
-}
diff --git a/commercemigration/src/org/sap/commercemigration/concurrent/DataThreadPoolMonitor.java b/commercemigration/src/org/sap/commercemigration/concurrent/DataThreadPoolMonitor.java
deleted file mode 100644
index 4102fb6..0000000
--- a/commercemigration/src/org/sap/commercemigration/concurrent/DataThreadPoolMonitor.java
+++ /dev/null
@@ -1,16 +0,0 @@
-/*
- * Copyright: 2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors.
- * License: Apache-2.0
-*/
-package org.sap.commercemigration.concurrent;
-
-import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
-
-public interface DataThreadPoolMonitor {
- void subscribe(ThreadPoolTaskExecutor executor);
- void unsubscribe(ThreadPoolTaskExecutor executor);
-
- int getActiveCount();
-
- int getMaxPoolSize();
-}
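One way to satisfy the monitor contract is to aggregate over all subscribed executors; the following is a minimal sketch under that assumption, not the toolkit's actual implementation:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public class SimpleDataThreadPoolMonitor implements DataThreadPoolMonitor {
	// thread-safe list: subscribe/unsubscribe may race with the getters
	private final List<ThreadPoolTaskExecutor> executors = new CopyOnWriteArrayList<>();

	@Override
	public void subscribe(ThreadPoolTaskExecutor executor) {
		executors.add(executor);
	}

	@Override
	public void unsubscribe(ThreadPoolTaskExecutor executor) {
		executors.remove(executor);
	}

	@Override
	public int getActiveCount() {
		// sum of currently active threads across all subscribed pools
		return executors.stream().mapToInt(ThreadPoolTaskExecutor::getActiveCount).sum();
	}

	@Override
	public int getMaxPoolSize() {
		return executors.stream().mapToInt(ThreadPoolTaskExecutor::getMaxPoolSize).sum();
	}
}
```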
diff --git a/commercemigration/src/org/sap/commercemigration/concurrent/DataWorkerExecutor.java b/commercemigration/src/org/sap/commercemigration/concurrent/DataWorkerExecutor.java
deleted file mode 100644
index 03c102d..0000000
--- a/commercemigration/src/org/sap/commercemigration/concurrent/DataWorkerExecutor.java
+++ /dev/null
@@ -1,15 +0,0 @@
-/*
- * Copyright: 2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors.
- * License: Apache-2.0
-*/
-package org.sap.commercemigration.concurrent;
-
-import java.util.concurrent.Callable;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.Future;
-
-public interface DataWorkerExecutor<T> {
-	Future<T> safelyExecute(Callable<T> callable) throws InterruptedException;
-
- void waitAndRethrowUncaughtExceptions() throws ExecutionException, InterruptedException;
-}
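The contract pairs asynchronous submission with deferred exception propagation: `safelyExecute` hands the `Callable` to the pool and records the resulting `Future`, and `waitAndRethrowUncaughtExceptions` later blocks on all recorded futures so worker failures surface on the coordinating thread. A sketch under those assumptions (`SketchDataWorkerExecutor` is illustrative, not the toolkit's `DefaultDataWorkerExecutor`):

```java
import java.util.Queue;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import org.springframework.core.task.AsyncTaskExecutor;

public class SketchDataWorkerExecutor<T> implements DataWorkerExecutor<T> {
	private final AsyncTaskExecutor executor;
	private final Queue<Future<T>> futures = new ConcurrentLinkedQueue<>();

	public SketchDataWorkerExecutor(AsyncTaskExecutor executor) {
		this.executor = executor;
	}

	@Override
	public Future<T> safelyExecute(Callable<T> callable) throws InterruptedException {
		Future<T> future = executor.submit(callable);
		futures.add(future); // remember it for the later rethrow pass
		return future;
	}

	@Override
	public void waitAndRethrowUncaughtExceptions() throws ExecutionException, InterruptedException {
		Future<T> future;
		while ((future = futures.poll()) != null) {
			future.get(); // blocks; rethrows worker failures as ExecutionException
		}
	}
}
```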
diff --git a/commercemigration/src/org/sap/commercemigration/concurrent/MDCTaskDecorator.java b/commercemigration/src/org/sap/commercemigration/concurrent/MDCTaskDecorator.java
deleted file mode 100644
index 3bc9c97..0000000
--- a/commercemigration/src/org/sap/commercemigration/concurrent/MDCTaskDecorator.java
+++ /dev/null
@@ -1,25 +0,0 @@
-/*
- * Copyright: 2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors.
- * License: Apache-2.0
-*/
-package org.sap.commercemigration.concurrent;
-
-import org.slf4j.MDC;
-import org.springframework.core.task.TaskDecorator;
-
-import java.util.Map;
-
-public class MDCTaskDecorator implements TaskDecorator {
- @Override
- public Runnable decorate(Runnable runnable) {
-		Map<String, String> contextMap = MDC.getCopyOfContextMap();
- return () -> {
- try {
- MDC.setContextMap(contextMap);
- runnable.run();
- } finally {
- MDC.clear();
- }
- };
- }
-}
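The decorator captures the submitting thread's MDC map and reinstates it inside the worker, so log lines from pool threads keep their per-pipeline context. Wiring it up uses Spring's standard `TaskDecorator` hook on `ThreadPoolTaskExecutor`; a hedged example (`LOG` is a hypothetical slf4j logger):

```java
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setTaskDecorator(new MDCTaskDecorator()); // propagate MDC to workers
executor.setCorePoolSize(4);
executor.initialize();

MDC.put("pipeline", "products"); // set on the submitting thread
executor.execute(() -> LOG.info("inherits pipeline={} from the caller", MDC.get("pipeline")));
```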
diff --git a/commercemigration/src/org/sap/commercemigration/concurrent/MaybeFinished.java b/commercemigration/src/org/sap/commercemigration/concurrent/MaybeFinished.java
deleted file mode 100644
index c9f4a1a..0000000
--- a/commercemigration/src/org/sap/commercemigration/concurrent/MaybeFinished.java
+++ /dev/null
@@ -1,48 +0,0 @@
-/*
- * Copyright: 2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors.
- * License: Apache-2.0
-*/
-package org.sap.commercemigration.concurrent;
-
-/**
- * MaybeFinished keeps track of the status of the data set that is currently
- * being processed: if all is well, the status will be done; if there is an
- * exception, it will be poison.
- *
- * @param <T> the payload type wrapped by this container
- */
-public final class MaybeFinished<T> {
- private final T value;
- private final boolean done;
- private final boolean poison;
-
- private MaybeFinished(T value, boolean done, boolean poison) {
- this.value = value;
- this.done = done;
- this.poison = poison;
- }
-
-	public static <T> MaybeFinished<T> of(T value) {
- return new MaybeFinished<>(value, false, false);
- }
-
-	public static <T> MaybeFinished<T> finished(T value) {
- return new MaybeFinished<>(value, true, false);
- }
-
-	public static <T> MaybeFinished<T> poison() {
- return new MaybeFinished<>(null, true, true);
- }
-
- public T getValue() {
- return value;
- }
-
- public boolean isDone() {
- return done;
- }
-
- public boolean isPoison() {
- return poison;
- }
-}
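The three factory methods pin the `(done, poison)` flag combinations; a compact illustration:

```java
// Each factory method fixes the state flags:
MaybeFinished<String> inFlight = MaybeFinished.of("batch");       // done=false, poison=false
MaybeFinished<String> last     = MaybeFinished.finished("batch"); // done=true,  poison=false
MaybeFinished<String> failure  = MaybeFinished.poison();          // done=true,  poison=true, value=null
```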
diff --git a/commercemigration/src/org/sap/commercemigration/concurrent/PipeAbortedException.java b/commercemigration/src/org/sap/commercemigration/concurrent/PipeAbortedException.java
deleted file mode 100644
index 6526a78..0000000
--- a/commercemigration/src/org/sap/commercemigration/concurrent/PipeAbortedException.java
+++ /dev/null
@@ -1,15 +0,0 @@
-/*
- * Copyright: 2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors.
- * License: Apache-2.0
-*/
-package org.sap.commercemigration.concurrent;
-
-public class PipeAbortedException extends Exception {
- public PipeAbortedException(String message) {
- super(message);
- }
-
- public PipeAbortedException(String message, Throwable cause) {
- super(message, cause);
- }
-}
diff --git a/commercemigration/src/org/sap/commercemigration/concurrent/impl/DefaultDataPipe.java b/commercemigration/src/org/sap/commercemigration/concurrent/impl/DefaultDataPipe.java
deleted file mode 100644
index 6d2c203..0000000
--- a/commercemigration/src/org/sap/commercemigration/concurrent/impl/DefaultDataPipe.java
+++ /dev/null
@@ -1,122 +0,0 @@
-/*
- * Copyright: 2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors.
- * License: Apache-2.0
-*/
-package org.sap.commercemigration.concurrent.impl;
-
-import org.sap.commercemigration.concurrent.DataPipe;
-import org.sap.commercemigration.concurrent.MaybeFinished;
-import org.sap.commercemigration.concurrent.PipeAbortedException;
-import org.sap.commercemigration.constants.CommercemigrationConstants;
-import org.sap.commercemigration.context.CopyContext;
-import org.sap.commercemigration.scheduler.DatabaseCopyScheduler;
-import org.sap.commercemigration.service.DatabaseCopyTaskRepository;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.util.concurrent.ArrayBlockingQueue;
-import java.util.concurrent.BlockingQueue;
-import java.util.concurrent.TimeUnit;
-import java.util.concurrent.atomic.AtomicInteger;
-import java.util.concurrent.atomic.AtomicReference;
-
-public class DefaultDataPipe<T> implements DataPipe<T> {
- private static final Logger LOG = LoggerFactory.getLogger(DefaultDataPipe.class);
-
-	private final BlockingQueue<MaybeFinished<T>> queue;
- private final int defaultTimeout;
-	private final AtomicReference<Exception> abortException = new AtomicReference<>();
- private final AtomicInteger size = new AtomicInteger();
- private final CopyContext context;
- private final CopyContext.DataCopyItem copyItem;
- private final DatabaseCopyTaskRepository taskRepository;
- private final DatabaseCopyScheduler scheduler;
-
- public DefaultDataPipe(DatabaseCopyScheduler scheduler, DatabaseCopyTaskRepository taskRepository,
- CopyContext context, CopyContext.DataCopyItem copyItem, int timeoutInSeconds, int capacity) {
- this.taskRepository = taskRepository;
- this.scheduler = scheduler;
- this.context = context;
- this.copyItem = copyItem;
- this.queue = new ArrayBlockingQueue<>(capacity);
- defaultTimeout = timeoutInSeconds;
- }
-
- @Override
- public void requestAbort(Exception cause) {
- if (this.abortException.compareAndSet(null, cause)) {
- if (context.getMigrationContext().isFailOnErrorEnabled()) {
- try {
- scheduler.abort(context);
- } catch (Exception ex) {
- LOG.warn("could not abort", ex);
- }
- }
- try {
- taskRepository.markTaskFailed(context, copyItem, cause);
- } catch (Exception e) {
- LOG.warn("could not update error status!", e);
- }
- try {
- flushPipe();
- } catch (Exception e) {
- LOG.warn("Could not flush pipe", e);
- }
- }
- }
-
- private void flushPipe() throws Exception {
- // make sure waiting queue offers can be flushed
- while (getWaitersCount() > 0) {
- queue.poll(defaultTimeout, TimeUnit.SECONDS);
- size.decrementAndGet();
- }
- queue.clear();
- }
-
- private boolean isAborted() throws Exception {
- if (this.abortException.get() == null && scheduler.isAborted(this.context)) {
- requestAbort(new PipeAbortedException("Migration aborted"));
- }
- return this.abortException.get() != null;
- }
-
- private void assertPipeNotAborted() throws Exception {
- if (isAborted()) {
- throw new PipeAbortedException("Pipe aborted", this.abortException.get());
- }
- }
-
- @Override
-	public void put(MaybeFinished<T> value) throws Exception {
- assertPipeNotAborted();
- size.incrementAndGet();
- if (!queue.offer(value, defaultTimeout, TimeUnit.SECONDS)) {
- throw new RuntimeException("cannot put new item in time");
- }
- }
-
- @Override
-	public MaybeFinished<T> get() throws Exception {
- assertPipeNotAborted();
-		MaybeFinished<T> element = queue.poll(defaultTimeout, TimeUnit.SECONDS);
- size.decrementAndGet();
- if (element == null) {
- throw new RuntimeException(String.format(
- "cannot get new item in time. Consider increasing the value of the property '%s' or '%s'",
- CommercemigrationConstants.MIGRATION_DATA_PIPE_TIMEOUT,
- CommercemigrationConstants.MIGRATION_DATA_PIPE_CAPACITY));
- }
- return element;
- }
-
- @Override
- public int size() {
- return size.get();
- }
-
- @Override
- public int getWaitersCount() {
- return size.get() - queue.size();
- }
-}
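The pipe is a bounded, timeout-guarded handoff: both `put` and `get` give up after `defaultTimeout` seconds, and `getWaitersCount()` works because `size` is incremented before `offer` and decremented after `poll`, so `size.get() - queue.size()` approximates the number of producers currently blocked in `offer`. The timeout mechanics in isolation, using only `java.util.concurrent` (a standalone demo, not toolkit code):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class BoundedPipeDemo {
	public static void main(String[] args) throws InterruptedException {
		// capacity plays the role of the pipe capacity property
		BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
		queue.offer("batch-1", 5, TimeUnit.SECONDS); // true: capacity available
		queue.offer("batch-2", 5, TimeUnit.SECONDS); // true: queue now full
		// A slow consumer makes the next offer time out, which DefaultDataPipe
		// surfaces as "cannot put new item in time":
		boolean accepted = queue.offer("batch-3", 1, TimeUnit.SECONDS);
		System.out.println(accepted);                        // false
		System.out.println(queue.poll(5, TimeUnit.SECONDS)); // batch-1 (frees one slot)
	}
}
```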
diff --git a/commercemigration/src/org/sap/commercemigration/concurrent/impl/DefaultDataPipeFactory.java b/commercemigration/src/org/sap/commercemigration/concurrent/impl/DefaultDataPipeFactory.java
deleted file mode 100644
index e3101dd..0000000
--- a/commercemigration/src/org/sap/commercemigration/concurrent/impl/DefaultDataPipeFactory.java
+++ /dev/null
@@ -1,228 +0,0 @@
-/*
- * Copyright: 2021 SAP SE or an SAP affiliate company and commerce-migration-toolkit contributors.
- * License: Apache-2.0
-*/
-package org.sap.commercemigration.concurrent.impl;
-
-import com.google.common.collect.Lists;
-import org.apache.commons.lang3.tuple.Pair;
-import org.fest.util.Collections;
-import org.sap.commercemigration.DataThreadPoolConfig;
-import org.sap.commercemigration.MarkersQueryDefinition;
-import org.sap.commercemigration.adapter.DataRepositoryAdapter;
-import org.sap.commercemigration.adapter.impl.ContextualDataRepositoryAdapter;
-import org.sap.commercemigration.concurrent.DataCopyMethod;
-import org.sap.commercemigration.concurrent.DataPipe;
-import org.sap.commercemigration.concurrent.DataPipeFactory;
-import org.sap.commercemigration.concurrent.DataThreadPoolConfigBuilder;
-import org.sap.commercemigration.concurrent.DataWorkerExecutor;
-import org.sap.commercemigration.concurrent.DataThreadPoolFactory;
-import org.sap.commercemigration.concurrent.MaybeFinished;
-import org.sap.commercemigration.concurrent.impl.task.BatchMarkerDataReaderTask;
-import org.sap.commercemigration.concurrent.impl.task.BatchOffsetDataReaderTask;
-import org.sap.commercemigration.concurrent.impl.task.DataReaderTask;
-import org.sap.commercemigration.concurrent.impl.task.DefaultDataReaderTask;
-import org.sap.commercemigration.concurrent.impl.task.PipeTaskContext;
-import org.sap.commercemigration.context.CopyContext;
-import org.sap.commercemigration.dataset.DataSet;
-import org.sap.commercemigration.performance.PerformanceCategory;
-import org.sap.commercemigration.performance.PerformanceRecorder;
-import org.sap.commercemigration.scheduler.DatabaseCopyScheduler;
-import org.sap.commercemigration.service.DatabaseCopyBatch;
-import org.sap.commercemigration.service.DatabaseCopyTaskRepository;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.springframework.core.task.AsyncTaskExecutor;
-import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
-
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Optional;
-import java.util.Set;
-import java.util.stream.Collectors;
-
-public class DefaultDataPipeFactory implements DataPipeFactory<DataSet> {
-
- private static final Logger LOG = LoggerFactory.getLogger(DefaultDataPipeFactory.class);
-
- private final DatabaseCopyTaskRepository taskRepository;
- private final DatabaseCopyScheduler scheduler;
- private final AsyncTaskExecutor executor;
- private final DataThreadPoolFactory dataReadWorkerPoolFactory;
-
- public DefaultDataPipeFactory(DatabaseCopyScheduler scheduler, DatabaseCopyTaskRepository taskRepository,
- AsyncTaskExecutor executor, DataThreadPoolFactory dataReadWorkerPoolFactory) {
- this.scheduler = scheduler;
- this.taskRepository = taskRepository;
- this.executor = executor;
- this.dataReadWorkerPoolFactory = dataReadWorkerPoolFactory;
- }
-
- @Override
-	public DataPipe<DataSet> create(CopyContext context, CopyContext.DataCopyItem item) throws Exception {
- int dataPipeTimeout = context.getMigrationContext().getDataPipeTimeout();
- int dataPipeCapacity = context.getMigrationContext().getDataPipeCapacity();
-		DataPipe<DataSet> pipe = new DefaultDataPipe<>(scheduler, taskRepository, context, item, dataPipeTimeout,
- dataPipeCapacity);
- DataThreadPoolConfig threadPoolConfig = new DataThreadPoolConfigBuilder(context.getMigrationContext())
- .withPoolSize(context.getMigrationContext().getMaxParallelReaderWorkers()).build();
- final ThreadPoolTaskExecutor taskExecutor = dataReadWorkerPoolFactory.create(context, threadPoolConfig);
-		DataWorkerExecutor<Boolean> workerExecutor = new DefaultDataWorkerExecutor<>(taskExecutor);
- try {
- executor.submit(() -> {
- try {
- scheduleWorkers(context, workerExecutor, pipe, item);
- workerExecutor.waitAndRethrowUncaughtExceptions();
- pipe.put(MaybeFinished.finished(DataSet.EMPTY));
- } catch (Exception e) {
- LOG.error("Error scheduling worker tasks ", e);
- try {
- pipe.put(MaybeFinished.poison());
- } catch (Exception p) {
- LOG.error("Cannot contaminate pipe ", p);
- }
- if (e instanceof InterruptedException) {
- Thread.currentThread().interrupt();
- }
- } finally {
- if (taskExecutor != null) {
- dataReadWorkerPoolFactory.destroy(taskExecutor);
- }
- }
- });
- } catch (Exception e) {
- throw new RuntimeException("Error invoking reader tasks ", e);
- }
- return pipe;
- }
-
-	private void scheduleWorkers(CopyContext context, DataWorkerExecutor<Boolean> workerExecutor,
-			DataPipe<DataSet> pipe, CopyContext.DataCopyItem copyItem) throws Exception {
- DataRepositoryAdapter dataRepositoryAdapter = new ContextualDataRepositoryAdapter(
- context.getMigrationContext().getDataSourceRepository());
- String table = copyItem.getSourceItem();
- long totalRows = copyItem.getRowCount();
- long pageSize = context.getMigrationContext().getReaderBatchSize();
- try {
- PerformanceRecorder recorder = context.getPerformanceProfiler().createRecorder(PerformanceCategory.DB_READ,
- table);
- recorder.start();
-
- PipeTaskContext pipeTaskContext = new PipeTaskContext(context, pipe, table, dataRepositoryAdapter, pageSize,
- recorder, taskRepository);
-
- String batchColumn = "";
- // help.sap.com/viewer/d0224eca81e249cb821f2cdf45a82ace/LATEST/en-US/08a27931a21441b59094c8a6aa2a880e.html
- if (context.getMigrationContext().getDataSourceRepository().isAuditTable(table) && context
- .getMigrationContext().getDataSourceRepository().getAllColumnNames(table).contains("ID")) {
- batchColumn = "ID";
- } else if (context.getMigrationContext().getDataSourceRepository().getAllColumnNames(table)
- .contains("PK")) {
- batchColumn = "PK";
- }
- LOG.debug("Using batchColumn: {}", batchColumn.isEmpty() ? "NONE" : batchColumn);
-
- if (batchColumn.isEmpty()) {
- // trying offset queries with unique index columns
-				Set<String> batchColumns;
- DataSet uniqueColumns = context.getMigrationContext().getDataSourceRepository().getUniqueColumns(table);
- if (uniqueColumns.isNotEmpty()) {
- if (uniqueColumns.getColumnCount() == 0) {
- throw new IllegalStateException(
- "Corrupt dataset retrieved. Dataset should have information about unique columns");
- }
- batchColumns = uniqueColumns.getAllResults().stream().map(row -> String.valueOf(row.get(0)))
- .collect(Collectors.toSet());
- taskRepository.updateTaskCopyMethod(context, copyItem, DataCopyMethod.OFFSET.toString());
- taskRepository.updateTaskKeyColumns(context, copyItem, batchColumns);
-
-					List<Long> batches = null;
- if (context.getMigrationContext().isSchedulerResumeEnabled()) {
-						Set<DatabaseCopyBatch> pendingBatchesForPipeline = taskRepository
- .findPendingBatchesForPipeline(context, copyItem);
- batches = pendingBatchesForPipeline.stream()
- .map(b -> Long.valueOf(b.getLowerBoundary().toString())).collect(Collectors.toList());
- taskRepository.resetPipelineBatches(context, copyItem);
- } else {
- batches = new ArrayList<>();
- for (long offset = 0; offset < totalRows; offset += pageSize) {
- batches.add(offset);
- }
- }
-
- for (int batchId = 0; batchId < batches.size(); batchId++) {
- long offset = batches.get(batchId);
- DataReaderTask dataReaderTask = new BatchOffsetDataReaderTask(pipeTaskContext, batchId, offset,
- batchColumns);
- taskRepository.scheduleBatch(context, copyItem, batchId, offset, offset + pageSize);
- workerExecutor.safelyExecute(dataReaderTask);
- }
- } else {
- // If no unique columns available to do batch sorting, fallback to read all
- LOG.warn(
- "Reading all rows at once without batching for table {}. Memory consumption might be negatively affected",
- table);
- taskRepository.updateTaskCopyMethod(context, copyItem, DataCopyMethod.DEFAULT.toString());
- if (context.getMigrationContext().isSchedulerResumeEnabled()) {
- taskRepository.resetPipelineBatches(context, copyItem);
- }
- taskRepository.scheduleBatch(context, copyItem, 0, 0, totalRows);
- DataReaderTask dataReaderTask = new DefaultDataReaderTask(pipeTaskContext);
- workerExecutor.safelyExecute(dataReaderTask);
- }
- } else {
- // do the pagination by value comparison
- taskRepository.updateTaskCopyMethod(context, copyItem, DataCopyMethod.SEEK.toString());
- taskRepository.updateTaskKeyColumns(context, copyItem, Lists.newArrayList(batchColumn));
-
-				List<List<Object>> batchMarkersList = null;
- if (context.getMigrationContext().isSchedulerResumeEnabled()) {
- batchMarkersList = new ArrayList<>();
-					Set<DatabaseCopyBatch> pendingBatchesForPipeline = taskRepository
- .findPendingBatchesForPipeline(context, copyItem);
- batchMarkersList.addAll(pendingBatchesForPipeline.stream()
- .map(b -> Collections.list(b.getLowerBoundary())).collect(Collectors.toList()));
- taskRepository.resetPipelineBatches(context, copyItem);
- } else {
- MarkersQueryDefinition queryDefinition = new MarkersQueryDefinition();
- queryDefinition.setTable(table);
- queryDefinition.setColumn(batchColumn);
- queryDefinition.setBatchSize(pageSize);
- DataSet batchMarkers = dataRepositoryAdapter
- .getBatchMarkersOrderedByColumn(context.getMigrationContext(), queryDefinition);
- batchMarkersList = batchMarkers.getAllResults();
- }
-
- for (int i = 0; i < batchMarkersList.size(); i++) {
- List