From 6af5c6e8c27a3e0ab3edd1e39141b5e36cb41054 Mon Sep 17 00:00:00 2001
From: Komh
Date: Tue, 13 Jan 2026 11:04:08 +0800
Subject: [PATCH] add solution: How_to_Migrate_VirtualMachine_From_VMware

Signed-off-by: Komh
---
 ...w_to_Migrate_VirtualMachine_From_VMware.md | 313 ++++++++++++++++++
 1 file changed, 313 insertions(+)
 create mode 100644 docs/en/solutions/How_to_Migrate_VirtualMachine_From_VMware.md

diff --git a/docs/en/solutions/How_to_Migrate_VirtualMachine_From_VMware.md b/docs/en/solutions/How_to_Migrate_VirtualMachine_From_VMware.md
new file mode 100644
index 0000000..d74709e
--- /dev/null
+++ b/docs/en/solutions/How_to_Migrate_VirtualMachine_From_VMware.md
@@ -0,0 +1,313 @@
---
kind:
  - How To
products:
  - Alauda Container Platform
ProductsVersion:
  - 4.2.x
---

# Migrating VMware Virtual Machines to Alauda Container Platform Virtualization

## Overview

This document describes how to migrate virtual machines from a VMware cluster to **Alauda Container Platform (ACP) Virtualization with KubeVirt** using the **Alauda Build of Forklift Operator**.

Forklift supports multiple source platforms, including VMware, OpenShift Virtualization (OCP), Red Hat Virtualization (RHV), OpenStack, and ACP itself. This guide focuses specifically on the workflow for migrating from VMware to ACP (the destination provider named `host`).

## Environment Information

Alauda Container Platform: >= 4.2.0

Forklift Version: v2.9.0-alauda.3

ESXi Version: >= 6.7.0

## Prerequisites

- **Alauda Container Platform Environment**: An available ACP cluster with virtualization enabled.
- **Operator Bundle**: The Alauda Build of Forklift Operator must be downloaded from the Alauda Cloud.
- **Network Plugins**: Multus must be installed (_Platform Management → Cluster Management → Cluster Plugins → Install Multus_).
- **VMware Environment**:
  - The ESXi hostname must be resolvable (via DNS or a CoreDNS override).
  - The SSH service must be enabled on the ESXi host.
  - VMware Tools must be installed in the guest VM.
- **Mechanism Note**: Forklift's migration pods use the ESXi hostname to construct the `V2V_libvirtURL` and connect over SSH via `esx://` to retrieve disk images.

## Terminology

Before proceeding, understand the following key concepts used in the migration process:

- **Provider**: Represents the source or destination virtualization platform (e.g., `vmware`, `ocp`, `rhv`, `openstack`, `acp`). A default destination provider named **host** is automatically created for the current ACP cluster.
- **StorageMap**: Maps storage classes used in the source environment to storage classes in the destination ACP cluster.
- **NetworkMap**: Maps source subnets/networks to destination subnets/networks.
- **Plan**: A migration plan describing which virtual machines to migrate. It references a `StorageMap` and a `NetworkMap`.
- **Migration**: Triggers the execution of a `Plan` and provides real-time status updates.

## Migration Procedure

The migration process is divided into the following steps:

1. Upload and deploy the operator
2. Deploy the Forklift Controller
3. Prepare the VDDK Init Image
4. Add the VMware Provider
5. Create Network and Storage Maps
6. Execute the Migration Plan
7. Post-Migration Configuration

### 1. Upload Forklift Operator Using Violet

Use the `violet` tool to upload the Forklift operator artifact to the platform.

```bash
export PLATFORM_URL=https://<platform-address>/
export PLATFORM_USER=<username>
export PLATFORM_PASSWORD=<password>

violet push <operator-bundle-package> \
  --platform-address $PLATFORM_URL \
  --platform-username $PLATFORM_USER \
  --platform-password $PLATFORM_PASSWORD
```

### 2. Deploy the Operator

1. Navigate to **Administrator → Marketplace → OperatorHub**.
2. Locate **forklift-operator**.
3. Click **Deploy**.

### 3. Create ForkliftController Instance

Create a `ForkliftController` resource to initialize the system.

1.
Navigate to **Deployed Operators → Resource Instances** under the Forklift Operator.
2. Create the `ForkliftController`.

Verify that all pods are running:

```bash
kubectl get pod -n konveyor-forklift
```

Expected pods include:

- `forklift-api`
- `forklift-controller`
- `forklift-operator`
- `forklift-validation`
- `forklift-volume-populator-controller`

_Note: A provider named **host** will be automatically created to represent the current ACP cluster, serving exclusively as a destination._

### 4. Prepare VDDK Init Image

The VMware Virtual Disk Development Kit (VDDK) is required for disk transfer.

1. Download the matching VMware VDDK Linux package from VMware.
2. Extract the package:
   ```bash
   tar xf VMware-vix-disklib-<version>.x86_64.tar.gz
   ```
3. Create a `Containerfile`:
   ```
   FROM registry.access.redhat.com/ubi8/ubi-minimal
   USER 1001
   COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
   RUN mkdir -p /opt
   ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
   ```
4. Build and push the image to your registry:
   ```bash
   podman build -t registry.example.com/kubev2v/vddk:<tag> .
   podman push registry.example.com/kubev2v/vddk:<tag>
   ```

### 5. Add VMware Provider

Create a secret containing the VMware credentials and register the Provider.
The `sdkEndpoint` setting defines how Forklift connects to the source environment: `vcenter` connects through a vCenter server that manages multiple hosts, while `esxi` connects directly to a single ESXi host.
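The value chosen for `sdkEndpoint` also determines which host the provider URL should point at. A small sketch of that relationship (both hostnames below are placeholders, not values from this environment):

```shell
# Pick the SDK endpoint type; the provider URL must target the matching host.
# vcenter.example.com and esxi01.example.com are placeholder hostnames.
SDK_ENDPOINT='esxi'

case "$SDK_ENDPOINT" in
  vcenter) VMWARE_URL="https://vcenter.example.com/sdk" ;;  # one vCenter managing many hosts
  esxi)    VMWARE_URL="https://esxi01.example.com/sdk"  ;;  # a single ESXi host
esac

echo "$VMWARE_URL"
```

With `esxi`, remember the prerequisites above: the ESXi hostname must resolve inside the cluster and SSH must be enabled on the host.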
```bash
export VMWARE_URL=https://<esxi-host>/sdk
export VMWARE_USER=<username>
export VMWARE_PASSWORD=<password>
export VDDKIMAGE=registry.example.com/kubev2v/vddk:8.0
export SDK_ENDPOINT='esxi'

# Create Secret
kubectl -n konveyor-forklift create secret generic vmware \
  --from-literal=url=$VMWARE_URL \
  --from-literal=user=$VMWARE_USER \
  --from-literal=password=$VMWARE_PASSWORD \
  --from-literal=insecureSkipVerify=true

kubectl label secret vmware -n konveyor-forklift \
  createdForProviderType=vsphere \
  createdForResourceType=providers

# Create Provider
kubectl apply -f - <<EOF
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vmware
  namespace: konveyor-forklift
spec:
  type: vsphere
  url: $VMWARE_URL
  secret:
    name: vmware
    namespace: konveyor-forklift
  settings:
    vddkInitImage: $VDDKIMAGE
    sdkEndpoint: $SDK_ENDPOINT
EOF

kubectl apply -f - <

kubectl label pvc -n $TARGET_NS $VM_PVC vm.cpaas.io/used-by=$VM_NAME
kubectl label pvc -n $TARGET_NS $VM_PVC vm.cpaas.io/reclaim-policy=Delete
```

Once labeled, the virtual disks will be visible in the VM details page on ACP.
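When a VM was migrated with several disks, the labeling step above can be scripted. A minimal sketch that only prints the commands for review; the namespace, VM name, and PVC names (`demo-ns`, `migrated-vm`, `*-disk-N`) are placeholder values:

```shell
# Hypothetical helper: emit the two label commands for each migrated disk PVC.
# demo-ns, migrated-vm and the disk names are placeholders.
TARGET_NS=demo-ns
VM_NAME=migrated-vm

cmds=""
for pvc in "${VM_NAME}-disk-0" "${VM_NAME}-disk-1"; do
  cmds="$cmds
kubectl label pvc -n $TARGET_NS $pvc vm.cpaas.io/used-by=$VM_NAME
kubectl label pvc -n $TARGET_NS $pvc vm.cpaas.io/reclaim-policy=Delete"
done

# Review the output before piping it to `sh`.
printf '%s\n' "$cmds"
```

The `vm.cpaas.io/used-by` label ties each disk to its VM so that it appears on the VM details page, while `vm.cpaas.io/reclaim-policy` sets the disk's reclaim behavior.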