# Add Konveyor deployment and usage guide #106

Base: `main`
**New file:** `docs/en/solutions/How_to_Use_Konveyor.md` (`@@ -0,0 +1,259 @@`)
---
kind:
  - Solution
products:
  - Alauda Application Services
ProductsVersion:
  - 4.x
---

# How to Deploy and Use Konveyor

## Overview

Konveyor is a CNCF (Cloud Native Computing Foundation) project that provides a modular platform for application modernization. It supports the entire modernization lifecycle: discovery, assessment, analysis, and execution. This guide covers deploying the Konveyor Hub (Tackle) platform and its core components.

## Prerequisites

- Kubernetes cluster with `kubectl` access
- StorageClass that supports the ReadWriteMany (RWX) access mode (a quick check is sketched below)
- StorageClass for ReadWriteOnce (RWO) volumes (for databases)
- (Optional) LoadBalancer or Ingress controller for external access
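Before creating anything, it can save time to confirm that suitable StorageClasses exist. A minimal sketch (the class name in the second command is a placeholder; substitute whatever the first command reports):

```bash
# List available StorageClasses and their provisioners
kubectl get storageclass

# Inspect a candidate class; whether it supports RWX depends on the
# provisioner (NFS- and CephFS-based provisioners typically do)
kubectl describe storageclass <your-rwx-class>
```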
## Install Konveyor Operator

Download the Konveyor Operator plugin from the [Alauda Cloud Console](https://cloud.alauda.io/) Marketplace, and install it using the [violet tool](https://docs.alauda.io/container_platform/4.1/configure/backup/install.html).
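Once the operator package is installed, a quick sanity check that the operator is up can help. A sketch, with the caveat that the `konveyor-tackle` namespace and the `tackle-operator` deployment name are assumptions; list the deployments first to confirm the actual names in your cluster:

```bash
# Confirm the operator Deployment exists and is available
kubectl get deployments -n konveyor-tackle

# Tail the operator logs if it is not becoming ready
# (replace tackle-operator with the name reported above)
kubectl logs deployment/tackle-operator -n konveyor-tackle --tail=50
```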
## Deploy Konveyor Hub (Tackle)

### Create Tackle Instance

Deploy the Tackle platform by creating a Tackle CR. The Tackle instance must be deployed in the same namespace as the konveyor-operator.

```bash
cat << EOF | kubectl create -f -
apiVersion: tackle.konveyor.io/v1alpha1
kind: Tackle
metadata:
  name: tackle
  namespace: konveyor-tackle
spec:
  feature_auth_required: true
  feature_isolate_namespace: true
  feature_analysis_archiver: true
  hub_database_volume_size: 5Gi
  hub_bucket_volume_size: 100Gi
  rwx_supported: true
  hub_bucket_storage_class: nfs  # Replace with your RWX StorageClass
  rwo_storage_class: sc-topolvm  # Replace with your RWO StorageClass
EOF
```
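After the CR is accepted, the operator reconciles it into the Hub components. A sketch for watching progress, assuming the CRD registers the singular `tackle` resource name (as the `tackle.konveyor.io/v1alpha1` API suggests):

```bash
# Confirm the custom resource was created
kubectl get tackle -n konveyor-tackle

# Watch the operator bring up the Hub components
kubectl get pods -n konveyor-tackle -w
```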
### Verify Deployment

Check the status of the pods in the `konveyor-tackle` namespace:

```bash
kubectl get pods -n konveyor-tackle
```

Ensure all pods are in the `Running` or `Completed` state before proceeding.
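To block until everything is ready rather than polling by eye, `kubectl wait` can be used. A sketch; the 10-minute timeout is an arbitrary choice, and pods of completed Jobs never report `Ready`, so check those separately if the command times out:

```bash
# Wait for all pods in the namespace to become Ready
kubectl wait pods --all --for=condition=Ready -n konveyor-tackle --timeout=600s
```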
> [!WARNING]
> The Tackle instance must be in the same namespace as the konveyor-operator. Otherwise, resources may not be cleaned up properly when deleted.

---
**Suggested change**

````diff
-> The Tackle instance must be in the same namespace as the konveyor-operator. Otherwise, resources may not be cleaned up properly when deleted.
+> The Tackle instance must be deployed in the same namespace as the `konveyor-operator`. If you deploy it in a different namespace, some resources created by the operator (such as PersistentVolumeClaims, ConfigMaps, Secrets, and ServiceAccounts) might not be automatically deleted when the Tackle custom resource is removed. In that case, you must manually clean up these resources in the affected namespaces, for example:
+>
+> ```bash
+> # Delete common resources labeled for the Tackle instance
+> kubectl delete pvc,configmap,secret,sa -l app.kubernetes.io/instance=tackle -n <namespace>
+>
+> # Verify that no Tackle-related resources remain
+> kubectl get all,pvc,configmap,secret,sa -n <namespace> | grep tackle || true
+> ```
````
---

**Copilot (AI) · Jan 14, 2026**
The documentation lacks input validation guidance for the Tackle CR specification values. Consider adding a note about valid ranges for volume sizes and consequences of setting values too low (e.g., minimum recommended sizes for hub_database_volume_size and hub_bucket_volume_size based on expected workload).
**Suggested change**

```diff
-| `spec.hub_database_volume_size` | `5Gi` | Size requested for Hub database volume |
-| `spec.hub_bucket_volume_size` | `100Gi` | Size requested for Hub bucket volume |
-| `spec.keycloak_database_data_volume_size` | `1Gi` | Size requested for Keycloak DB volume |
-| `spec.cache_data_volume_size` | `100Gi` | Size requested for Tackle Cache volume |
+| `spec.hub_database_volume_size` | `5Gi` | Size requested for Hub database volume (Kubernetes quantity, for example `5Gi`, `20Gi`). For production, allocate at least `20Gi` to avoid database out-of-space conditions. |
+| `spec.hub_bucket_volume_size` | `100Gi` | Size requested for Hub bucket volume (Kubernetes quantity). For typical workloads, allocate at least `50Gi`–`100Gi` depending on the expected number and size of analyses and reports. |
+| `spec.keycloak_database_data_volume_size` | `1Gi` | Size requested for Keycloak DB volume (Kubernetes quantity). Do not set below `1Gi` for stability. |
+| `spec.cache_data_volume_size` | `100Gi` | Size requested for Tackle Cache volume (Kubernetes quantity) used for analysis data and intermediate artifacts. |
 | `spec.cache_storage_class` | N/A | StorageClass requested for Tackle Cache volume |
 | `spec.hub_bucket_storage_class` | N/A | StorageClass requested for Tackle Hub Bucket volume (RWX) |
 | `spec.rwo_storage_class` | N/A | StorageClass requested for RWO database volumes |
+
+> [!NOTE]
+> All `*_volume_size` fields must use valid Kubernetes resource quantity values (for example, `1Gi`, `5Gi`, `20Gi`). Setting volumes too small can cause pods to fail with disk-full errors, prevent new analyses from being stored, or interrupt database migrations. When in doubt, start with the recommended minimums above and monitor actual usage to adjust sizes safely.
```
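Where monitoring shows a volume filling up, it can often be expanded in place. A sketch, assuming the StorageClass sets `allowVolumeExpansion: true`; the PVC name `tackle-hub-database` is a placeholder for whatever `kubectl get pvc` actually reports:

```bash
# Check current claims and capacities
kubectl get pvc -n konveyor-tackle

# Request a larger size on an existing claim (only shrinking is disallowed)
kubectl patch pvc tackle-hub-database -n konveyor-tackle \
  --type merge -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```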
---

**Copilot (AI) · Jan 14, 2026**
The configuration table shows 'N/A' as the default value for storage class fields, but the example YAML in lines 47-48 provides specific values ('nfs' and 'sc-topolvm'). This inconsistency could confuse users. Consider clarifying whether these fields are required or optional, and if optional, what the fallback behavior is when not specified.
**Suggested change**

```diff
-| `spec.hub_bucket_storage_class` | N/A | StorageClass requested for Tackle Hub Bucket volume (RWX) |
-| `spec.rwo_storage_class` | N/A | StorageClass requested for RWO database volumes |
+| `spec.hub_bucket_storage_class` | Required | StorageClass requested for Tackle Hub Bucket volume (RWX); no default is set |
+| `spec.rwo_storage_class` | Required | StorageClass requested for RWO database volumes; no default is set |
+
+Fields marked as **Required** have no default value and must be explicitly set in your Tackle `spec`.
```
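A server-side dry run is a cheap way to confirm a spec is accepted before creating anything. A sketch; `tackle.yaml` is a hypothetical file holding the CR, and whether a missing storage class is rejected at admission time or only surfaces later as Pending PVCs depends on the operator:

```bash
# Validate the manifest against the live API server without persisting it
kubectl create -f tackle.yaml --dry-run=server
```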
---

**Copilot (AI) · Jan 14, 2026**
The port-forward command uses 'services/tackle-ui' which may not work on all kubectl versions. The more standard and portable format is 'service/tackle-ui' (singular). Consider updating to ensure compatibility across different kubectl versions.
**Suggested change**

```diff
-kubectl -n konveyor-tackle port-forward services/tackle-ui 8080:8080
+kubectl -n konveyor-tackle port-forward service/tackle-ui 8080:8080
```
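For what it's worth, current kubectl releases accept both the plural and singular resource forms, so this reads as a consistency fix. Either way, a quick check that the forward is actually serving, run in a second terminal while the port-forward is active:

```bash
# Expect an HTTP status line from the Tackle UI via the forwarded port
curl -sI http://localhost:8080 | head -n 1
```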
---

**Use a stronger password in the example.**
The example password admin@123 is weak and commonly used. Consider suggesting a stronger password example to encourage better security practices.
**Suggested change**

```diff
-- Enter a new password (e.g., `admin@123`)
+- Enter a new password (e.g., `Admin@Tackle2024!`)
```
🤖 Prompt for AI Agents

```text
In @docs/en/solutions/How_to_Use_Konveyor.md at line 114: the example password
"Enter a new password (e.g., `admin@123`)" is weak; update that text to show a
stronger example (for instance: `Str0ng!Admin#2026`) and/or replace it with a
password pattern recommendation like "use a 12+ character password with upper
and lower case letters, numbers, and symbols" so the line referencing the
example password is replaced with a stronger example or a pattern
recommendation.
```
---

**Copilot (AI) · Jan 14, 2026**
The documentation suggests setting the Tackle admin password to a very weak example value (admin@123), which could easily be reused verbatim in production and allow trivial admin account compromise if exposed over the network. An attacker who can reach the Tackle UI could simply try this common pattern to gain full administrative access. Instead of providing a weak concrete password example, recommend instructing users to choose a strong, unique password and, if needed, describe password complexity requirements without giving a specific easily guessable value.
**Suggested change**

```diff
-- Enter a new password (e.g., `admin@123`)
+- Enter a strong, unique password that meets your organization's security policy (for example, at least 12 characters with a mix of letters, numbers, and symbols)
```
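Rather than inventing a memorable-but-weak value, the password can simply be generated; one common approach (any equivalent generator works):

```bash
# Generate a random 18-byte password, base64-encoded (about 24 characters)
openssl rand -base64 18
```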
---
🧩 Analysis chain

🌐 Web query: *Konveyor AI KAI supported LLM providers models latest documentation*

💡 Result: Konveyor AI (Kai) is model-agnostic and integrates with LLM providers via its MCP/adapter layer; there is no single fixed "supported models" list in the docs. Supported providers/models depend on the configured MCP/adapters (OpenAI, Anthropic, Google/Vertex, AWS Bedrock, Azure, and local Ollama/Groq-style adapters used in MCP ecosystems). See the Konveyor Kai docs and repo for the latest configuration details and examples:

- Konveyor AI / Kai overview & design (states the model-agnostic approach and integration points). [1]
- Kai GitHub repository (docs, README, and examples; configuration and adapters live in the repo). [2]
- Konveyor blog / release notes (latest Kai release and roadmap details). [3]

🏁 Scripts executed against the `alauda/knowledge` repository:

```bash
# Search for configuration or model-related files in the codebase
find . -type f \( -name "*.py" -o -name "*.yaml" -o -name "*.yml" -o -name "*.json" -o -name "*.md" \) \
  | xargs grep -l "kai_llm_provider\|kai_llm_model\|supported.*model\|supported.*provider" 2>/dev/null | head -20

# Read the full context around the table in the documentation file
cat -n ./docs/en/solutions/How_to_Use_Konveyor.md | sed -n '180,210p'

# Search for provider/model configuration in Python files or config files
find . -type f \( -name "*.py" -o -name "config*.py" -o -name "*provider*.py" -o -name "*model*.py" \) \
  -exec grep -l "openai\|azure_openai\|bedrock\|ollama\|anthropic\|groq\|google" {} \; 2>/dev/null | head -20

# Look for any adapter or configuration YAML files
find . -type f \( -name "*adapter*" -o -name "*provider*" \) \( -name "*.yaml" -o -name "*.yml" -o -name "*.json" \) 2>/dev/null | head -20

# Check for context before and after the table for any disclaimer about what the list represents
cat -n ./docs/en/solutions/How_to_Use_Konveyor.md | sed -n '186,205p'

# Search for any links to external Konveyor documentation or references
grep -n "konveyor\|kai\|provider\|model" ./docs/en/solutions/How_to_Use_Konveyor.md | grep -i "doc\|repo\|reference\|link\|href"
```
**Add clarification directing readers to official Konveyor documentation for the most current provider and model support.**
The table lists common provider-model combinations, but AI models and provider support evolve frequently. Add a note directing readers to the Konveyor Official Documentation for the latest supported configurations, or explicitly indicate whether this list is comprehensive or illustrative.
🤖 Prompt for AI Agents

```text
In @docs/en/solutions/How_to_Use_Konveyor.md around lines 190-201: the
"Supported Providers and Models" table is static and can become outdated; add a
short clarifying sentence under the "Supported Providers and Models" heading
stating that the table is illustrative (not exhaustive) and directing readers
to the Konveyor Official Documentation (https://konveyor.io/docs/konveyor/) for
the most current provider and model support; include the link text and ensure
the note is visible immediately after the table so readers know where to check
for updates.
```
---

**Copilot (AI) · Jan 14, 2026**
The secret creation commands expose sensitive API keys in command-line arguments, which may be logged in shell history. Consider documenting a more secure approach, such as reading from a file using '--from-file' or prompting for input interactively, or at minimum adding a warning about shell history exposure.
**Suggested change**

````diff
 ```bash
+# Recommended: avoid putting API keys directly on the command line, as they
+# may be recorded in shell history. Store them in a file and use --from-file.
 kubectl create secret generic kai-api-keys -n konveyor-tackle \
   --from-literal=OPENAI_API_BASE='https://api.openai.com/v1' \
-  --from-literal=OPENAI_API_KEY='<YOUR_OPENAI_KEY>'
+  --from-file=OPENAI_API_KEY=/path/to/file-containing-openai-key
 ```

 **For Google:**

 ```bash
+# Recommended: avoid putting API keys directly on the command line, as they
+# may be recorded in shell history. Store them in a file and use --from-file.
 kubectl create secret generic kai-api-keys -n konveyor-tackle \
-  --from-literal=GOOGLE_API_KEY='<YOUR_GOOGLE_API_KEY>'
+  --from-file=GOOGLE_API_KEY=/path/to/file-containing-google-api-key
 ```
````

---

**Avoid exposing API keys in shell command history.**
The current approach passes sensitive API keys directly as command-line arguments, which exposes them in shell history. Use --from-file or stdin to prevent credential leakage.
🔒 Recommended secure alternatives

**Option 1: Using environment variables and stdin**

````diff
-**For OpenAI:**
-
-```bash
-kubectl create secret generic kai-api-keys -n konveyor-tackle \
-  --from-literal=OPENAI_API_BASE='https://api.openai.com/v1' \
-  --from-literal=OPENAI_API_KEY='<YOUR_OPENAI_KEY>'
-```
-
-**For Google:**
-
-```bash
-kubectl create secret generic kai-api-keys -n konveyor-tackle \
-  --from-literal=GOOGLE_API_KEY='<YOUR_GOOGLE_API_KEY>'
-```
+**For OpenAI:**
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Secret
+metadata:
+  name: kai-api-keys
+  namespace: konveyor-tackle
+type: Opaque
+stringData:
+  OPENAI_API_BASE: 'https://api.openai.com/v1'
+  OPENAI_API_KEY: '<YOUR_OPENAI_KEY>'
+EOF
+```
+
+**For Google:**
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Secret
+metadata:
+  name: kai-api-keys
+  namespace: konveyor-tackle
+type: Opaque
+stringData:
+  GOOGLE_API_KEY: '<YOUR_GOOGLE_API_KEY>'
+EOF
+```
````

**Option 2: Using a file**

Create a file (e.g., `kai-secrets.env`) with the credentials, then:

```bash
kubectl create secret generic kai-api-keys -n konveyor-tackle \
  --from-env-file=kai-secrets.env
rm kai-secrets.env  # Remove after creation
```
🤖 Prompt for AI Agents

```text
In @docs/en/solutions/How_to_Use_Konveyor.md around lines 218-233: the docs
currently recommend using kubectl create secret ... --from-literal, which
exposes secrets in shell history; update the examples that reference the
kubectl create secret generic kai-api-keys commands to use a secure method such
as piping a YAML Secret manifest to kubectl apply (using stringData in the
manifest) or using --from-env-file and removing the temp file, and replace the
inline --from-literal OpenAI/Google examples with the cat <<EOF | kubectl apply
-f - YAML snippets or the --from-env-file workflow described in the comment so
credentials are not passed on the command line.
```
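Whichever variant lands in the docs, readers can verify the secret without echoing its values; `kubectl describe` prints key names and byte sizes but never the data itself:

```bash
# Confirm the secret exists and carries the expected keys
kubectl describe secret kai-api-keys -n konveyor-tackle
```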
---
🧩 Analysis chain

🌐 Web query: *Konveyor Crane project status 2026 community version*

💡 Result: short status (community Crane, Jan 14, 2026):

- Crane is no longer part of the Konveyor core toolset; the project was moved to a separate GitHub organization (migtools) as a community migration tool. [1]
- The community repo is active on GitHub (migtools/crane). The latest published release on the releases page is v0.0.5 (prebuilt assets and tags present), and the repo shows open issue/PR activity. For the absolute latest commits, issues, or new releases, check the repo directly. [2]

Sources:

- [1] Konveyor community update (project refocus / Crane moved out of core).
- [2] migtools/crane GitHub (repo + releases page, latest release tag v0.0.5).
**Update documentation to reflect current Crane status.**
The statement that the "Community version is currently inactive" is inaccurate. Crane is actively maintained in the migtools organization on GitHub with recent releases (v0.0.5) and ongoing development activity (open issues and pull requests). Remove the inactivity claim or update it to reflect that Crane is a separately-maintained community project that remains actively developed.
🤖 Prompt for AI Agents

```text
In @docs/en/solutions/How_to_Use_Konveyor.md at line 253: update the Crane
description line to remove the inaccurate "Community version is currently
inactive" claim and instead state its current status (e.g., "actively
maintained as a community project in the migtools organization with recent
releases and ongoing development"); modify the table entry containing
"**Crane**" to reflect this wording so the documentation accurately represents
Crane's active maintenance and development.
```
---
🧩 Analysis chain

🌐 Web query: *Konveyor operator installation Alauda Cloud Console*

💡 Result: install Konveyor on an Alauda-managed cluster by adding the Konveyor Operator to Alauda's OperatorHub (either from an OperatorHub catalog image or as a custom operator package) and then creating the Konveyor/Tackle CR.

🌐 Web query: *violet tool Alauda documentation operator installation*

💡 Result: how to install an Alauda Operator using the violet tool:

1. Download violet from Platform Management → Marketplace / Upload Packages → Download Packaging and Listing Tool (choose the binary for your OS). Make it executable and place it on your PATH. [1]
2. (Optional) On the machine where you will upload packages, verify/install violet.
3. Obtain the operator package (`.tgz` / bundle) from the Alauda Custom Portal / Marketplace (or vendor). [2]
4. Upload/push the operator package to your target cluster with violet:

   ```bash
   violet push <package>.tgz --platform-address https://<platform-address> \
     --platform-username <username> --platform-password <password> --clusters <clusters>
   ```

5. In the Alauda Web Console (Administrator view), go to Marketplace → OperatorHub. If the package status is Absent, upload via Marketplace → Upload Packages (Operator tab) or the web UI; if Ready, click Install and follow the operator-specific install settings. For upgrades, upload the new package and use OperatorHub's upgrade workflow. [2][3]

🌐 Web query: *Konveyor operator Alauda marketplace install*

💡 Result: yes; you can install the Konveyor (Tackle) operator from an OperatorHub-like marketplace on Alauda, for example:

```bash
kubectl create -f https://operatorhub.io/install/konveyor-0.2/konveyor-operator.yaml
```

(or follow the CatalogSource + OperatorGroup + Subscription method for the latest builds). [2]

Sources:

- [1] Alauda docs: CLI Tools (violet) (docs.alauda.io).
- [2] Alauda docs: Operator / Marketplace / Upload Packages (docs.alauda.io).
- [3] Alauda AI: operator upload example using violet (docs.alauda.io).
**Correct the documentation link for the violet tool.**

The link to https://docs.alauda.io/container_platform/4.1/configure/backup/install.html appears to point to backup configuration documentation rather than operator installation or violet tool documentation. It should reference the correct Alauda documentation for operator package upload/installation (e.g., Marketplace/OperatorHub or violet CLI tool documentation).

🤖 Prompt for AI Agents