Commitment API endpoints report ram, cores, instances, but CRs are managed only for ram #645
Conversation
Note — Reviews paused: It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior can be configured in the review settings.
📝 Walkthrough

Registers three resources per committed flavor group—RAM, CPU cores, and instances—replacing the previous single-resource model. Capacity, usage, service-info registration, resource naming, Nova server decoding, and tests were updated so RAM handles commitments while cores and instances are derived resources.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Nova as Nova API
    participant NovaClient as novaClient
    participant Commit as Commitments
    participant Calc as Capacity/Usage Calc
    participant Reporter as Report Store
    Nova->>NovaClient: List servers (include metadata, tags, image, volumes)
    NovaClient->>NovaClient: Decode into ServerDetail (Metadata, Tags, OSType)
    NovaClient->>Commit: Provide ServerDetails
    Commit->>Calc: Aggregate per-flavorGroup (ram, vcpus, instances) per AZ
    Calc->>Reporter: Emit three resources per flavorGroup (_ram, _cores, _instances)
    Reporter->>Commit: Persist/return ResourceUsageReport
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 2
🧹 Nitpick comments (1)
internal/scheduling/reservations/commitments/api_report_usage_test.go (1)
659-703: The new resource counters are still not asserted here.

This helper now finds VM details on `_instances`, but it still only verifies RAM usage. `_cores` is never checked, and `_instances.Usage` is not compared against the expected VM count, so regressions in either new counter would still pass this integration matrix. Consider extending `ExpectedResourceUsage` with expected cores/instances totals, or at least asserting `actualInstancesAZ.Usage == uint64(len(expectedAZ.VMs))`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/scheduling/reservations/commitments/api_report_usage_test.go` around lines 659 - 703, The test currently only verifies RAM usage and VM subresources but misses asserting the new counters (instances count and cores); update the test in api_report_usage_test.go to (a) assert actualInstancesAZ.Usage == uint64(len(expectedAZ.VMs)) using the existing expectedAZ.VMs, and (b) also check the corresponding cores resource (derive coresResourceName by replacing "_ram" with "_cores", fetch actualCoresResource and actualCoresAZ) and compare actualCoresAZ.Usage to an expected value (either add a Cores field to ExpectedResourceUsage/expectedAZ or assert against a derived value like sum of VM core counts if available); adjust ExpectedResourceUsage/expectedAZ types if you choose to store explicit cores/instances expectations and update all places that construct expectations accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/scheduling/reservations/commitments/capacity.go`:
- Around line 57-79: The code currently reuses azCapacity (from
c.calculateAZCapacity) for RAM, Cores and Instances causing unit mismatch for
ResourceNameCores and ResourceNameInstances; update the logic in the block that
builds report.Resources so that after computing azCapacity (RAM), you create
distinct PerAZ maps for cores and instances (e.g. derive new maps via functions
like convertRAMToCores or convertRAMToInstances or explicitly build
zero/placeholder maps) instead of passing the same azCapacity reference or using
c.copyAZCapacity(azCapacity); ensure you populate
report.Resources[ResourceNameCores(groupName)] and
report.Resources[ResourceNameInstances(groupName)] with PerAZ maps that reflect
their own units (or mark them non-capacity-bearing) and keep
ResourceNameRAM(groupName) using azCapacity.
In `@internal/scheduling/reservations/commitments/state.go`:
- Around line 49-70: getFlavorGroupNameFromResource currently drops the resource
dimension and returns only the group name, causing non-RAM resources to be
misinterpreted as RAM; change the function signature to return (flavorGroupName
string, suffix string, err error) and have it detect and return the matched
suffix (one of ResourceSuffixRAM, ResourceSuffixCores, ResourceSuffixInstances)
along with the name, keeping the existing prefix/suffix validation using
resourceNamePrefix, ResourceSuffixRAM, ResourceSuffixCores,
ResourceSuffixInstances; then update the caller FromCommitment to consume the
new suffix and explicitly reject any non-RAM commitments (e.g., check suffix ==
ResourceSuffixRAM and return an error otherwise) so CommitmentState (which
carries TotalMemoryBytes) only represents RAM-backed resources.
---
Nitpick comments:
In `@internal/scheduling/reservations/commitments/api_report_usage_test.go`:
- Around line 659-703: The test currently only verifies RAM usage and VM
subresources but misses asserting the new counters (instances count and cores);
update the test in api_report_usage_test.go to (a) assert
actualInstancesAZ.Usage == uint64(len(expectedAZ.VMs)) using the existing
expectedAZ.VMs, and (b) also check the corresponding cores resource (derive
coresResourceName by replacing "_ram" with "_cores", fetch actualCoresResource
and actualCoresAZ) and compare actualCoresAZ.Usage to an expected value (either
add a Cores field to ExpectedResourceUsage/expectedAZ or assert against a
derived value like sum of VM core counts if available); adjust
ExpectedResourceUsage/expectedAZ types if you choose to store explicit
cores/instances expectations and update all places that construct expectations
accordingly.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 7d9bfd42-1f5d-4109-b6f5-55ee5703d5fa
📒 Files selected for processing (9)
- internal/scheduling/reservations/commitments/api_info.go
- internal/scheduling/reservations/commitments/api_info_test.go
- internal/scheduling/reservations/commitments/api_report_capacity_test.go
- internal/scheduling/reservations/commitments/api_report_usage_test.go
- internal/scheduling/reservations/commitments/capacity.go
- internal/scheduling/reservations/commitments/state.go
- internal/scheduling/reservations/commitments/state_test.go
- internal/scheduling/reservations/commitments/usage.go
- internal/scheduling/reservations/commitments/usage_test.go
```diff
 // Calculate per-AZ capacity (placeholder: capacity=0 for all resources)
 azCapacity, err := c.calculateAZCapacity(ctx, groupName, groupData)
 if err != nil {
 	return liquid.ServiceCapacityReport{}, fmt.Errorf("failed to calculate capacity for %s: %w", groupName, err)
 }

-report.Resources[resourceName] = &liquid.ResourceCapacityReport{
+// === 1. RAM Resource ===
+ramResourceName := liquid.ResourceName(ResourceNameRAM(groupName))
+report.Resources[ramResourceName] = &liquid.ResourceCapacityReport{
 	PerAZ: azCapacity,
 }
+
+// === 2. Cores Resource ===
+coresResourceName := liquid.ResourceName(ResourceNameCores(groupName))
+report.Resources[coresResourceName] = &liquid.ResourceCapacityReport{
+	PerAZ: c.copyAZCapacity(azCapacity),
+}
+
+// === 3. Instances Resource ===
+instancesResourceName := liquid.ResourceName(ResourceNameInstances(groupName))
+report.Resources[instancesResourceName] = &liquid.ResourceCapacityReport{
+	PerAZ: c.copyAZCapacity(azCapacity),
+}
```
Don’t publish `_cores` and `_instances` by cloning the RAM-capacity map.

Line 72 and Line 78 reuse `azCapacity` verbatim, but those resources are published and consumed in different units than RAM: `_cores` usage is raw vCPU count and `_instances` usage is VM count. That makes the new capacity numbers incomparable with their corresponding usage numbers. Please derive a separate `PerAZ` map per resource (or keep the derived resources non-capacity-bearing until that conversion exists).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/scheduling/reservations/commitments/capacity.go` around lines 57 -
79, The code currently reuses azCapacity (from c.calculateAZCapacity) for RAM,
Cores and Instances causing unit mismatch for ResourceNameCores and
ResourceNameInstances; update the logic in the block that builds
report.Resources so that after computing azCapacity (RAM), you create distinct
PerAZ maps for cores and instances (e.g. derive new maps via functions like
convertRAMToCores or convertRAMToInstances or explicitly build zero/placeholder
maps) instead of passing the same azCapacity reference or using
c.copyAZCapacity(azCapacity); ensure you populate
report.Resources[ResourceNameCores(groupName)] and
report.Resources[ResourceNameInstances(groupName)] with PerAZ maps that reflect
their own units (or mark them non-capacity-bearing) and keep
ResourceNameRAM(groupName) using azCapacity.
```diff
 // getFlavorGroupNameFromResource extracts the flavor group name from a LIQUID resource name.
+// Supports all resource types: _ram, _cores, _instances
 func getFlavorGroupNameFromResource(resourceName string) (string, error) {
-	if !strings.HasPrefix(resourceName, resourceNamePrefix) || !strings.HasSuffix(resourceName, resourceNameSuffix) {
-		return "", fmt.Errorf("invalid resource name: %s", resourceName)
+	if !strings.HasPrefix(resourceName, resourceNamePrefix) {
+		return "", fmt.Errorf("invalid resource name: %s (missing prefix)", resourceName)
 	}
-	// Remove prefix and suffix
-	name := strings.TrimPrefix(resourceName, resourceNamePrefix)
-	name = strings.TrimSuffix(name, resourceNameSuffix)
-	// Validate that the extracted group name is not empty
-	if name == "" {
-		return "", fmt.Errorf("invalid resource name: %s (empty group name)", resourceName)
-	}
-	return name, nil
+	// Try each known suffix
+	for _, suffix := range []string{ResourceSuffixRAM, ResourceSuffixCores, ResourceSuffixInstances} {
+		if strings.HasSuffix(resourceName, suffix) {
+			// Remove prefix and suffix
+			name := strings.TrimPrefix(resourceName, resourceNamePrefix)
+			name = strings.TrimSuffix(name, suffix)
+			// Validate that the extracted group name is not empty
+			if name == "" {
+				return "", fmt.Errorf("invalid resource name: %s (empty group name)", resourceName)
+			}
+			return name, nil
+		}
+	}
+	return "", fmt.Errorf("invalid resource name: %s (unknown suffix)", resourceName)
 }
```
Don't drop the resource dimension here.

Line 57 now accepts `_cores` and `_instances`, but `CommitmentState` still only carries `TotalMemoryBytes`. `FromCommitment()` consumes this helper immediately, so those resources get silently reinterpreted as RAM-backed state and can produce the wrong reservation target once the API reports all three resources. Keep this boundary RAM-only, or return the matched suffix so callers can explicitly filter on `ResourceSuffixRAM`.

Preserve the suffix instead of discarding it:

```diff
-func getFlavorGroupNameFromResource(resourceName string) (string, error) {
+func getFlavorGroupNameFromResource(resourceName string) (string, string, error) {
 	if !strings.HasPrefix(resourceName, resourceNamePrefix) {
-		return "", fmt.Errorf("invalid resource name: %s (missing prefix)", resourceName)
+		return "", "", fmt.Errorf("invalid resource name: %s (missing prefix)", resourceName)
 	}
 	// Try each known suffix
 	for _, suffix := range []string{ResourceSuffixRAM, ResourceSuffixCores, ResourceSuffixInstances} {
 		if strings.HasSuffix(resourceName, suffix) {
 			// Remove prefix and suffix
 			name := strings.TrimPrefix(resourceName, resourceNamePrefix)
 			name = strings.TrimSuffix(name, suffix)
 			// Validate that the extracted group name is not empty
 			if name == "" {
-				return "", fmt.Errorf("invalid resource name: %s (empty group name)", resourceName)
+				return "", "", fmt.Errorf("invalid resource name: %s (empty group name)", resourceName)
 			}
-			return name, nil
+			return name, suffix, nil
 		}
 	}
-	return "", fmt.Errorf("invalid resource name: %s (unknown suffix)", resourceName)
+	return "", "", fmt.Errorf("invalid resource name: %s (unknown suffix)", resourceName)
 }
```

Then in the caller:

```go
flavorGroupName, suffix, err := getFlavorGroupNameFromResource(commitment.ResourceName)
if err != nil {
	return nil, err
}
if suffix != ResourceSuffixRAM {
	return nil, fmt.Errorf("unsupported commitment resource for reservation state: %s", commitment.ResourceName)
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/scheduling/reservations/commitments/state.go` around lines 49 - 70,
getFlavorGroupNameFromResource currently drops the resource dimension and
returns only the group name, causing non-RAM resources to be misinterpreted as
RAM; change the function signature to return (flavorGroupName string, suffix
string, err error) and have it detect and return the matched suffix (one of
ResourceSuffixRAM, ResourceSuffixCores, ResourceSuffixInstances) along with the
name, keeping the existing prefix/suffix validation using resourceNamePrefix,
ResourceSuffixRAM, ResourceSuffixCores, ResourceSuffixInstances; then update the
caller FromCommitment to consume the new suffix and explicitly reject any
non-RAM commitments (e.g., check suffix == ResourceSuffixRAM and return an error
otherwise) so CommitmentState (which carries TotalMemoryBytes) only represents
RAM-backed resources.
🧹 Nitpick comments (2)
internal/scheduling/reservations/commitments/api_report_usage_test.go (1)
670-673: Consider using a helper function for resource name derivation.

The string manipulation `resourceName[:len(resourceName)-4] + "_instances"` assumes all test resources end with `_ram` (exactly 4 characters). While this works for the current test cases, it could be made more robust.

♻️ Optional: Use strings.TrimSuffix for clarity

```diff
-// Derive the instances resource name from the ram resource name
-// hw_version_hana_1_ram -> hw_version_hana_1_instances
-instancesResourceName := resourceName[:len(resourceName)-4] + "_instances" // replace "_ram" with "_instances"
+// Derive the instances resource name from the ram resource name
+// hw_version_hana_1_ram -> hw_version_hana_1_instances
+instancesResourceName := strings.TrimSuffix(resourceName, "_ram") + "_instances"
```

This is minor since the test data is controlled, but `strings.TrimSuffix` makes the intent clearer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/scheduling/reservations/commitments/api_report_usage_test.go` around lines 670-673: Replace the ad-hoc slicing in the test that computes instancesResourceName from resourceName (the expression resourceName[:len(resourceName)-4] + "_instances" used before actualInstancesResource) with a small helper that uses strings.TrimSuffix to remove the "_ram" suffix and append "_instances" (e.g., deriveInstancesResourceName(name string) string { return strings.TrimSuffix(name, "_ram") + "_instances" }); update the test to call that helper where instancesResourceName is set so the intent is clearer and the code is robust to name length changes.

internal/scheduling/reservations/commitments/usage.go (1)
40-58: Consider using struct marshaling instead of manual map conversion.

The `flavorAttributes` and `subresourceAttributes` structs are defined with proper JSON tags but then manually converted to `map[string]any` in `buildVMAttributes` (lines 537-549). This is redundant and error-prone if fields are added later.

♻️ Optional: Use json.Marshal for attribute conversion
```diff
 func buildVMAttributes(vm VMUsageInfo, commitmentID string) map[string]any {
 	// Build metadata map (never nil for JSON)
 	metadata := vm.Metadata
 	if metadata == nil {
 		metadata = map[string]string{}
 	}
 	// Build tags slice (never nil for JSON)
 	tags := vm.Tags
 	if tags == nil {
 		tags = []string{}
 	}
 	attributes := subresourceAttributes{
 		Status:   vm.Status,
 		Metadata: metadata,
 		Tags:     tags,
 		Flavor: flavorAttributes{
 			Name:      vm.FlavorName,
 			VCPUs:     vm.VCPUs,
 			MemoryMiB: vm.MemoryMB,
 			DiskGiB:   vm.DiskGB,
 		},
 		OSType: "",
 	}
-	// Convert to map[string]any and add extra fields
-	result := map[string]any{
-		"status":   attributes.Status,
-		"metadata": attributes.Metadata,
-		"tags":     attributes.Tags,
-		"flavor": map[string]any{
-			"name":     attributes.Flavor.Name,
-			"vcpu":     attributes.Flavor.VCPUs,
-			"ram_mib":  attributes.Flavor.MemoryMiB,
-			"disk_gib": attributes.Flavor.DiskGiB,
-		},
-		"os_type": attributes.OSType,
-	}
+	// Marshal struct to JSON and back to map for dynamic field addition
+	attrBytes, _ := json.Marshal(attributes)
+	var result map[string]any
+	_ = json.Unmarshal(attrBytes, &result)
 	// Add commitment_id - nil for PAYG, string for committed
 	if commitmentID != "" {
 		result["commitment_id"] = commitmentID
 	} else {
 		result["commitment_id"] = nil
 	}
 	return result
 }
```

Alternatively, keep the manual conversion if the small performance overhead of marshal/unmarshal is a concern, but ensure the manual map stays in sync with the struct definitions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/scheduling/reservations/commitments/usage.go` around lines 40 - 58, The manual construction of a map in buildVMAttributes duplicates the fields defined on flavorAttributes and subresourceAttributes and risks drifting out of sync; replace the manual map conversion by marshaling the appropriate struct instance (flavorAttributes / subresourceAttributes) to JSON and then unmarshaling into map[string]any (or directly returning the marshaled JSON where appropriate) so the JSON tags are honored and new fields stay synchronized—update buildVMAttributes to construct the flavorAttributes/subresourceAttributes structs, json.Marshal them, then json.Unmarshal into the target map instead of manually setting keys.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@internal/scheduling/reservations/commitments/api_report_usage_test.go`:
- Around line 670-673: Replace the ad-hoc slicing in the test that computes
instancesResourceName from resourceName (the expression
resourceName[:len(resourceName)-4] + "_instances" used before
actualInstancesResource) with a small helper that uses strings.TrimSuffix to
remove the "_ram" suffix and append "_instances" (e.g.,
deriveInstancesResourceName(name string) string { return
strings.TrimSuffix(name, "_ram") + "_instances" }); update the test to call that
helper where instancesResourceName is set so the intent is clearer and the code
is robust to name length changes.
In `@internal/scheduling/reservations/commitments/usage.go`:
- Around line 40-58: The manual construction of a map in buildVMAttributes
duplicates the fields defined on flavorAttributes and subresourceAttributes and
risks drifting out of sync; replace the manual map conversion by marshaling the
appropriate struct instance (flavorAttributes / subresourceAttributes) to JSON
and then unmarshaling into map[string]any (or directly returning the marshaled
JSON where appropriate) so the JSON tags are honored and new fields stay
synchronized—update buildVMAttributes to construct the
flavorAttributes/subresourceAttributes structs, json.Marshal them, then
json.Unmarshal into the target map instead of manually setting keys.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: f9d010f2-acb5-47c4-b0ea-c1e60a9026b8
📒 Files selected for processing (4)
- internal/scheduling/nova/nova_client.go
- internal/scheduling/reservations/commitments/api_report_usage_test.go
- internal/scheduling/reservations/commitments/usage.go
- internal/scheduling/reservations/commitments/usage_test.go
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/scheduling/nova/nova_client.go`:
- Around line 119-126: Init currently swallows errors from
liquidapi.NewOSTypeProber (api.osTypeProber) causing all VMs to be billed with
os_type=""; change Init() to either propagate the error (return it) when
NewOSTypeProber fails, or set an explicit degraded-state signal on the nova
client (e.g., add a boolean field like osTypeProberInitialized or an error field
osTypeProberErr on the API struct) and ensure callers/billing code check that
flag and treat "prober unavailable" differently from "os genuinely unknown";
update the code paths that log or use os_type to consult this new flag/error and
log a clear, high-severity message when initialization failed instead of
silently continuing.
In `@internal/scheduling/reservations/commitments/usage.go`:
- Around line 524-550: CalculateUsage() is only computing
ServiceUsageReport.InfoVersion from the Knowledge CRD, so new Nova-derived
fields (metadata, tags, os_type) added in the subresourceAttributes/result map
(constructed in usage.go, e.g., Flavor/OSType/metadata/tags) are not reflected
in InfoVersion; update CalculateUsage() to include a deterministic hash/version
contribution for these Nova-derived fields (metadata, tags, os_type and flavor
attributes built into result) when computing InfoVersion (or alternatively
remove them from the info-versioned payload), ensuring the function that
computes InfoVersion (ServiceUsageReport.InfoVersion) incorporates the
serialized/normalized form of the result map or the specific fields so
downstream caches see a version bump on Nova-side changes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 4112554c-f583-4d6d-b66b-4c8f3af656eb
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (3)
- go.mod
- internal/scheduling/nova/nova_client.go
- internal/scheduling/reservations/commitments/usage.go
✅ Files skipped from review due to trivial changes (1)
- go.mod
♻️ Duplicate comments (1)
internal/scheduling/reservations/commitments/usage.go (1)
69-73: ⚠️ Potential issue | 🟠 Major

`InfoVersion` still misses Nova- and assignment-driven payload changes.

`InfoVersion` is still derived only from the knowledge CRD timestamp, but lines 504-522 now emit Nova-derived fields plus per-VM `commitment_id`. A Nova-only change or reservation reassignment can therefore leave the same version while the cached subresource payload changed. Please fold a deterministic hash of the emitted attributes into `InfoVersion` and normalize map/slice order before hashing.

Also applies to: 504-522
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/scheduling/reservations/commitments/usage.go` around lines 69 - 73, InfoVersion currently only uses knowledge.Get()'s LastContentChange Unix timestamp and ignores Nova-derived fields and per-VM commitment_id emitted later; compute a deterministic hash of the exact payload attributes you emit (the Nova-derived fields and each VM's commitment_id), normalizing map and slice order (sort map keys, sort slices/IDs) before serializing, then fold that hash into InfoVersion (e.g. compute sha256 of the normalized payload, reduce to an int64 via truncation/xor and combine with the existing infoVersion timestamp) so InfoVersion changes whenever those emitted attributes change; implement this near where InfoVersion is set (reference InfoVersion variable and knowledge.Get()) and ensure the normalization logic is used both when building the emitted payload (the code emitting Nova-derived fields/per-VM commitment_id) and when computing the hash.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@internal/scheduling/reservations/commitments/usage.go`:
- Around line 69-73: InfoVersion currently only uses knowledge.Get()'s
LastContentChange Unix timestamp and ignores Nova-derived fields and per-VM
commitment_id emitted later; compute a deterministic hash of the exact payload
attributes you emit (the Nova-derived fields and each VM's commitment_id),
normalizing map and slice order (sort map keys, sort slices/IDs) before
serializing, then fold that hash into InfoVersion (e.g. compute sha256 of the
normalized payload, reduce to an int64 via truncation/xor and combine with the
existing infoVersion timestamp) so InfoVersion changes whenever those emitted
attributes change; implement this near where InfoVersion is set (reference
InfoVersion variable and knowledge.Get()) and ensure the normalization logic is
used both when building the emitted payload (the code emitting Nova-derived
fields/per-VM commitment_id) and when computing the hash.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: b70478ca-3e55-4d5a-81bb-11d045391873
📒 Files selected for processing (2)
- internal/scheduling/reservations/commitments/api_change_commitments_test.go
- internal/scheduling/reservations/commitments/usage.go
Test Coverage Report 📊: 68.2%
No description provided.