Releases: kubescape/node-agent
Release v0.3.38
Summary by CodeRabbit
Release Notes
- Bug Fixes
- Improved rule evaluation error handling. When a rule fails to compile, evaluation now stops immediately instead of continuing to process remaining expressions, reducing unnecessary computation and preventing inconsistent results.
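As a rough illustration of the short-circuit behavior, here is a minimal cel-go-style sketch; evaluateRules and its signature are hypothetical, not the agent's actual rule engine:

```go
package rules

import (
	"fmt"

	"github.com/google/cel-go/cel"
)

// evaluateRules compiles and evaluates each rule expression against the input.
// If a rule fails to compile, it returns immediately instead of continuing
// with the remaining expressions.
func evaluateRules(env *cel.Env, exprs []string, input map[string]any) ([]bool, error) {
	results := make([]bool, 0, len(exprs))
	for _, expr := range exprs {
		ast, iss := env.Compile(expr)
		if iss != nil && iss.Err() != nil {
			// Stop right away: a broken rule should not leave partial,
			// inconsistent results behind.
			return nil, fmt.Errorf("compiling %q: %w", expr, iss.Err())
		}
		prg, err := env.Program(ast)
		if err != nil {
			return nil, fmt.Errorf("building program for %q: %w", expr, err)
		}
		out, _, err := prg.Eval(input)
		if err != nil {
			return nil, fmt.Errorf("evaluating %q: %w", expr, err)
		}
		matched, _ := out.Value().(bool)
		results = append(results, matched)
	}
	return results, nil
}
```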
Release v0.3.36
Summary by CodeRabbit
- Updates
- Gadget toolset renamed and bumped to v0.48.1; images moved to a new registry path.
- Performance
- Event data flow simplified to use direct deep-copies, removing pooling and clarifying ownership (a sketch of the ownership change follows this list).
- Features
- ECS runtime alert support and ECS metadata accessors added to events.
- Tests
- Tests enhanced to detect unexpected/extra fields in data sources.
- Chores
- Broad dependency version updates across modules.
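The ownership change can be pictured with a small hypothetical sketch (Event, DeepCopy, and dispatch are illustrative stand-ins, not the real types): instead of borrowing events from a pool that callers must return, each consumer receives its own deep copy and owns it outright.

```go
package events

// Event is a stand-in for the enriched event type flowing through the agent.
type Event struct {
	Name   string
	Labels map[string]string
}

// DeepCopy returns an independent copy, so the original can be reused or
// mutated without affecting consumers.
func (e *Event) DeepCopy() *Event {
	cp := &Event{Name: e.Name, Labels: make(map[string]string, len(e.Labels))}
	for k, v := range e.Labels {
		cp.Labels[k] = v
	}
	return cp
}

// dispatch gives every consumer its own copy; nobody has to "return" events
// to a pool, which is the ownership simplification described above.
func dispatch(ev *Event, consumers []chan<- *Event) {
	for _, c := range consumers {
		c <- ev.DeepCopy()
	}
}
```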
Release v0.3.33
Summary by CodeRabbit
- Bug Fixes
- Improved robustness of expression evaluation by caching failed compilations to avoid repeated work and noisy errors (a sketch of the caching idea follows this list).
- Added safeguards so failed or missing expressions are skipped safely and return empty results instead of causing failures.
- Improved logging for compilation/evaluation issues to aid diagnosis without affecting runtime behavior.
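Conceptually, the fix amounts to remembering compile failures alongside successes. Below is a minimal sketch of such a cache built on cel-go, with hypothetical names (compileCache, compileResult); the actual implementation may differ:

```go
package rules

import (
	"sync"

	"github.com/google/cel-go/cel"
)

// compileResult caches either a compiled program or the error from the first
// compilation attempt, so a bad expression is compiled (and logged) only once.
type compileResult struct {
	prg cel.Program
	err error
}

type compileCache struct {
	mu    sync.Mutex
	env   *cel.Env
	cache map[string]compileResult
}

func (c *compileCache) get(expr string) (cel.Program, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.cache == nil {
		c.cache = map[string]compileResult{}
	}
	if res, ok := c.cache[expr]; ok {
		// A cached failure short-circuits repeated compilation work.
		return res.prg, res.err
	}
	var res compileResult
	ast, iss := c.env.Compile(expr)
	if iss != nil && iss.Err() != nil {
		res.err = iss.Err()
	} else {
		res.prg, res.err = c.env.Program(ast)
	}
	c.cache[expr] = res
	return res.prg, res.err
}
```

Callers that receive a cached error can then skip the expression and return an empty result rather than failing.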
Release v0.3.32
Summary
Implement ClusterUID enrichment for runtime alerts by fetching the kube-system namespace UID and populating it in all RuntimeAlert structures.
Changes
Dependencies
- Updated armoapi-go to v0.0.672 (includes new ClusterUID field)
New Files
pkg/utils/clusteruid.go - Utility function to fetch the kube-system namespace UID
Modified Files
cmd/main.go - Fetch ClusterUID at startup and pass to exporters
pkg/exporters/exporters_bus.go - Update InitExporters to accept a clusterUID parameter
pkg/exporters/http_exporter.go - Store and populate ClusterUID in alerts
Implementation Details
- Startup Phase: After creating the Kubernetes client, the agent fetches the UID of the kube-system namespace using the new GetClusterUID utility function (a client-go sketch follows this list).
- Error Handling: If the namespace cannot be accessed (e.g., due to RBAC restrictions), a warning is logged and an empty string is returned. The agent continues operating normally with an empty ClusterUID field.
- Alert Enrichment: The ClusterUID is passed through the exporter chain and populated in:
  - RuntimeAlertK8sDetails.ClusterUID for all K8s alerts
  - HttpRuleAlert.SourcePodInfo.ClusterUID for HTTP rule alerts
- Backward Compatibility: The field uses omitempty, and existing functionality is not affected if ClusterUID is empty.
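A minimal sketch of what this lookup can look like with client-go, assuming the helper takes a context and a kubernetes.Interface; the actual function in pkg/utils/clusteruid.go may differ in signature and logging:

```go
package utils

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// GetClusterUID returns the UID of the kube-system namespace, which is stable
// for the lifetime of a cluster and serves as a de facto cluster identifier.
// On error (e.g. missing RBAC permissions) it logs a warning and returns "".
func GetClusterUID(ctx context.Context, client kubernetes.Interface) string {
	ns, err := client.CoreV1().Namespaces().Get(ctx, "kube-system", metav1.GetOptions{})
	if err != nil {
		log.Printf("warning: could not read kube-system namespace, ClusterUID will be empty: %v", err)
		return ""
	}
	return string(ns.UID)
}
```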
Testing
- ✅ Code compiles successfully
- ✅ Unit tests pass
- Manual testing needed: Deploy to test cluster and verify ClusterUID is populated
Related PRs
- armosec/armoapi-go#602 - Add ClusterUID field to RuntimeAlertK8sDetails
Next Steps
After this PR is merged and a new version is released:
- Update private-node-agent with new dependencies
- Update Helm charts with RBAC permissions (namespaces get/list)
RBAC Requirements
Note: For ClusterUID to be populated, the agent's ServiceAccount needs permissions to read namespaces:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list"]

This will be added to Helm charts in a separate PR.
Summary by CodeRabbit
- New Features
- Alerts (runtime and malware) now include a stable cluster UID so cluster context is preserved across emitted alerts.
- Agent obtains the cluster UID at startup and attaches it to exporter payloads before initialization.
- Tests
- Unit tests updated to validate the cluster UID is populated in exporter instances.
- Chores
- Dependency version bumped.
Release v0.3.31
Merge pull request #706 from kubescape/bump bump: update cel-go dependency to v0.26.1
Release v0.3.25
Summary
Add comprehensive unit tests for the Azure ResourceGroup parsing functionality that was merged in #697.
Test Coverage
- ✅ Tests for parseAzureResourceGroup with 9 test cases covering:
  - Valid Azure providerID formats from AKS
  - Case-insensitive matching (uppercase, mixed case)
  - Edge cases (no trailing path, empty strings, malformed IDs)
  - Non-Azure providerIDs
- ✅ Tests for enrichCloudMetadataForAzure with 5 test cases covering:
  - Successful enrichment from a valid providerID
  - Guard conditions (wrong provider, already-set ResourceGroup, nil metadata)
  - No change when the resourceGroups marker is missing
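For reference, the parsing exercised by these tests can be approximated as follows. This is a hypothetical re-implementation for illustration only; the real parseAzureResourceGroup in pkg/cloudmetadata may differ.

```go
package cloudmetadata

import "strings"

// parseResourceGroup extracts the resource group segment from an AKS
// providerID such as:
//   azure:///subscriptions/<sub-id>/resourceGroups/my-rg/providers/...
// Matching is case-insensitive; non-Azure or malformed IDs return "".
func parseResourceGroup(providerID string) string {
	const marker = "/resourcegroups/"
	idx := strings.Index(strings.ToLower(providerID), marker)
	if idx < 0 {
		return ""
	}
	rest := providerID[idx+len(marker):]
	if end := strings.IndexByte(rest, '/'); end >= 0 {
		rest = rest[:end] // drop any trailing path
	}
	return rest
}
```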
Test Results
All 14 test cases pass successfully:
=== RUN TestParseAzureResourceGroup
--- PASS: TestParseAzureResourceGroup (0.00s)
=== RUN TestEnrichCloudMetadataForAzure
--- PASS: TestEnrichCloudMetadataForAzure (0.00s)
PASS
ok github.com/kubescape/node-agent/pkg/cloudmetadata 0.047s
Related
- Follows up on #697 which added the Azure ResourceGroup enrichment functionality
Summary by CodeRabbit
- Tests
- Added unit tests for Azure resource group parsing from providerIDs, including validation of various formats and edge cases.
- Added unit tests for Azure cloud metadata enrichment, covering conditional data population and error handling scenarios.
Release v0.3.22
Summary by CodeRabbit
- Bug Fixes
- Improved field accessor retrieval to robustly handle nil receivers, invalid cached values, and type assertion failures, preventing potential application crashes.
- Enhanced caching logic with validation checks and strengthened fallback mechanisms to ensure reliable field access (a sketch of the pattern follows this list).
- Increased overall stability by eliminating edge cases that could cause unexpected behavior.
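The defensive pattern described above can be sketched roughly like this, using hypothetical names (FieldAccessor, fieldAccessorCache) rather than the actual types in the codebase:

```go
package fields

import "sync/atomic"

// FieldAccessor is a stand-in for the cached accessor type.
type FieldAccessor struct {
	// fields omitted in this sketch
}

type fieldAccessorCache struct {
	cached  atomic.Value // holds *FieldAccessor
	rebuild func() *FieldAccessor
}

// Get never panics: it guards against a nil receiver, an empty or
// wrongly-typed cache entry, and falls back to rebuilding the accessor.
func (c *fieldAccessorCache) Get() *FieldAccessor {
	if c == nil {
		return nil
	}
	if v := c.cached.Load(); v != nil {
		if fa, ok := v.(*FieldAccessor); ok && fa != nil {
			return fa
		}
	}
	if c.rebuild == nil {
		return nil
	}
	fa := c.rebuild()
	if fa != nil {
		c.cached.Store(fa)
	}
	return fa
}
```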
Release v0.3.20
Replace inner logic for plural forms with the shared k8s-interface package.
Summary by CodeRabbit
- Chores
- Updated github.com/kubescape/k8s-interface dependency from v0.0.201 to v0.0.202.
- Improved internal resource type handling consistency across sensor components.
Release v0.3.19
Overview
The host-scanner is a K8s daemonset that senses some basic information from a K8s node and exposes it in a K8s YAML-like format via HTTP handlers; it is run by Kubescape only for the duration of the Kubescape scanning process.
We want to merge host-scanner into node-agent and let the node-agent itself collect this information and send it to the K8s API server as new CRDs.
The motivation for this change is well explained in this Slack thread:
We're trying to reduce the footprint of the Kubescape Helm chart so it will be easier to install.
In addition, the current implementation requires the host-scanner to open a port for Kubescape to scrape the data, which is a security posture we want to avoid (a privileged pod with an open port is not ideal).
How to Test
As usual.
Related issues/PRs:
kubescape/helm-charts#773
kubescape/kubescape#1916
Summary by CodeRabbit
Release Notes
- New Features
- Host Sensor Manager: New system to periodically collect and report host diagnostics including OS release, kernel version, security configurations, open ports, running services, and network information. Data is stored as Kubernetes resources. Feature is configurable to enable/disable and customize collection frequency.
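A heavily simplified sketch of such a periodic collection loop, with hypothetical HostInfo, collect, and publish pieces standing in for the real host sensor manager (which covers many more fields and writes Kubernetes resources):

```go
package hostsensor

import (
	"context"
	"log"
	"os"
	"time"
)

// HostInfo is a trimmed, hypothetical stand-in for the collected data.
type HostInfo struct {
	KernelVersion string
}

// collect runs one probe as an example: the kernel version from /proc/version.
func collect() HostInfo {
	data, err := os.ReadFile("/proc/version")
	if err != nil {
		return HostInfo{}
	}
	return HostInfo{KernelVersion: string(data)}
}

// Run collects host data on the configured interval until the context is
// cancelled; publish would store the result as a Kubernetes resource.
func Run(ctx context.Context, interval time.Duration, publish func(HostInfo) error) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if err := publish(collect()); err != nil {
				log.Printf("host sensor: publish failed: %v", err)
			}
		}
	}
}
```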
Release v0.3.18
Summary by CodeRabbit
- New Features
- Enhanced cloud provider detection with support for additional cloud platforms (Alibaba, Oracle, OpenStack, Hetzner, Linode).
- Improved cloud metadata discovery with fallback mechanisms for better reliability (a sketch of the fallback chain follows this list).
- Chores
- Updated dependencies to latest versions for improved stability and compatibility.
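The fallback idea can be pictured as a simple detector chain; the types and function below are hypothetical stand-ins, not the actual node-agent code:

```go
package cloudmetadata

import "context"

// Metadata is a stand-in for the collected cloud metadata.
type Metadata struct {
	Provider string
	Region   string
}

// detector probes one source (node labels, provider ID, metadata endpoint, ...).
type detector func(ctx context.Context) (*Metadata, error)

// detect tries each detector in order and returns the first successful result,
// so a failure of one source falls back to the next.
func detect(ctx context.Context, detectors ...detector) *Metadata {
	for _, d := range detectors {
		if md, err := d(ctx); err == nil && md != nil {
			return md
		}
	}
	return nil
}
```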