
Conversation

@chideat (Collaborator) commented Sep 4, 2025

build(deps): update Go dependencies to latest versions
- Update Go version from 1.23.0 to 1.24.0
- Update cert-manager from v1.9.1 to v1.18.2
- Update Kubernetes dependencies to v0.32.0
- Update controller-runtime to v0.19.0
- Update various other dependencies to latest versions

Summary by CodeRabbit

  • Chores
    • Upgraded Go to 1.24 and broadly updated dependencies (Kubernetes libraries, Gateway API, YAML v3, logging/testing/tooling) for improved compatibility and support.
  • Bug Fixes
    • Aligned persistent volume claim resource handling to current Kubernetes types, improving storage provisioning compatibility.
    • Minor stability improvements in failover and sentinel routines.
  • Refactor
    • Adopted typed work queues in controller event handling for clearer, safer request processing.

chideat and others added 2 commits September 4, 2025 13:07
- Update Go version from 1.23.0 to 1.24.0
- Update cert-manager from v1.9.1 to v1.18.2
- Update Kubernetes dependencies to v0.32.0
- Update controller-runtime to v0.19.0
- Update various other dependencies to latest versions

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings September 4, 2025 05:16

coderabbitai bot commented Sep 4, 2025

Walkthrough

Bumps Go toolchain and numerous dependencies (notably Kubernetes and controller-runtime). Updates PVC spec to use VolumeResourceRequirements in two places. Adjusts controller watch CreateFunc signatures to typed rate-limiting queues. Adds explicit discard of slices.IndexFunc return values in three ops files. No functional control-flow changes.

Changes

| Cohort / File(s) | Summary of Changes |
| --- | --- |
| **Dependencies and tooling**<br>`go.mod` | Updated Go to 1.24.0 and upgraded a wide range of direct and indirect dependencies (Kubernetes libs, controller-runtime v0.19.0, testing/logging libs, gateway-api, yaml v3, etc.). No source code edits in this file beyond version bumps. |
| **PVC Resources type migration**<br>`internal/builder/clusterbuilder/statefulset.go`, `internal/controller/middleware/redis/redisfailover.go` | Switched `PersistentVolumeClaimSpec.Resources` from `corev1.ResourceRequirements` to `corev1.VolumeResourceRequirements`. No logic changes; `Requests` and related fields remain the same. |
| **Typed workqueue in controller watches**<br>`internal/controller/middleware/redis_controller.go` | Updated `CreateFunc` callback signatures to use `workqueue.TypedRateLimitingInterface[reconcile.Request]` instead of `workqueue.RateLimitingInterface`. Enqueue logic unchanged. |
| **Explicitly discarding IndexFunc result**<br>`internal/ops/failover/actor/actor_patch_labels.go`, `internal/ops/failover/engine.go`, `internal/ops/sentinel/actor/actor_heal_pod.go` | Assigned the `slices.IndexFunc` return value to the blank identifier (`_`) to satisfy linters; side-effect search callbacks unchanged. |

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

I thump my paw on versioned ground,
New go.mod carrots all around. 🥕
PVCs wear fresher threads,
Typed queues guide the recon threads,
While IndexFunc hops, result ignored—
Compile-time peace is thus restored.
Onward, I bound—dependencies adorned!

Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.2.2)

Error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/product/migration-guide for migration instructions
The command is terminated due to an error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/product/migration-guide for migration instructions

Copilot AI left a comment

Pull Request Overview

This PR updates Go dependencies to their latest versions, including a major Go version bump from 1.23.0 to 1.24.0. The update includes significant upgrades to Kubernetes dependencies (v0.32.0), controller-runtime (v0.19.0), and cert-manager (v1.18.2), along with numerous other dependency updates.

  • Updates Go from version 1.23.0 to 1.24.0
  • Updates Kubernetes ecosystem dependencies to v0.32.0
  • Adapts code to handle API changes in updated dependencies

Reviewed Changes

Copilot reviewed 7 out of 8 changed files in this pull request and generated no comments.

Show a summary per file

| File | Description |
| --- | --- |
| `go.mod` | Updates all Go dependencies to latest versions including Go 1.24.0 |
| `internal/ops/sentinel/actor/actor_heal_pod.go` | Adds blank identifier to handle unused return value from `slices.IndexFunc` |
| `internal/ops/failover/engine.go` | Adds blank identifier to handle unused return value from `slices.IndexFunc` |
| `internal/ops/failover/actor/actor_patch_labels.go` | Adds blank identifier to handle unused return value from `slices.IndexFunc` |
| `internal/controller/middleware/redis_controller.go` | Updates workqueue interface to use typed version for controller-runtime compatibility |
| `internal/controller/middleware/redis/redisfailover.go` | Updates PVC resource type from `ResourceRequirements` to `VolumeResourceRequirements` |
| `internal/builder/clusterbuilder/statefulset.go` | Updates PVC resource type from `ResourceRequirements` to `VolumeResourceRequirements` |


coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
internal/ops/failover/actor/actor_patch_labels.go (1)

66-71: Bug: missing return after RequeueWithError causes fall-through.

On error fetching master, the code logs and calls actor.RequeueWithError(err) but does not return, so execution proceeds with an invalid masterNode/masterAddr.

Apply this diff:

 func (a *actorPatchLabels) Do(ctx context.Context, val types.RedisInstance) *actor.ActorResult {
@@
   masterNode, err := inst.Monitor().Master(ctx)
   if err != nil {
     logger.Error(err, "get master failed")
-    actor.RequeueWithError(err)
+    return actor.RequeueWithError(err)
   }
internal/controller/middleware/redis/redisfailover.go (1)

304-317: Bug in resourceDiff: CPU limits never compared.

The last comparison repeats memory; CPU limit diffs won’t be detected and could suppress needed reconciles.

Apply this diff:

 func resourceDiff(r1 corev1.ResourceRequirements, r2 corev1.ResourceRequirements) bool {
   if result := r1.Requests.Cpu().Cmp(*r2.Requests.Cpu()); result != 0 {
     return true
   }
   if result := r1.Requests.Memory().Cmp(*r2.Requests.Memory()); result != 0 {
     return true
   }
   if result := r1.Limits.Memory().Cmp(*r2.Limits.Memory()); result != 0 {
     return true
   }
-  if result := r1.Limits.Memory().Cmp(*r2.Limits.Memory()); result != 0 {
+  if result := r1.Limits.Cpu().Cmp(*r2.Limits.Cpu()); result != 0 {
     return true
   }
   return false
 }
🧹 Nitpick comments (4)
internal/ops/failover/actor/actor_patch_labels.go (1)

79-87: Prefer a direct range lookup for clarity.

Using slices.IndexFunc solely for side effects is non-obvious. A simple range with early break is clearer and avoids the blank identifier.

Apply this diff:

-    var node redis.RedisNode
-    _ = slices.IndexFunc(inst.Nodes(), func(i redis.RedisNode) bool {
-      if i.GetName() == pod.GetName() {
-        node = i
-        return true
-      }
-      return false
-    })
+    var node redis.RedisNode
+    for _, n := range inst.Nodes() {
+      if n.GetName() == pod.GetName() {
+        node = n
+        break
+      }
+    }
internal/builder/clusterbuilder/statefulset.go (1)

27-29: Duplicate import alias for apps/v1.

Both appsv1 "k8s.io/api/apps/v1" and appv1 "k8s.io/api/apps/v1" are imported; only one alias is necessary.

Apply this diff:

-  appsv1 "k8s.io/api/apps/v1"
-  appv1 "k8s.io/api/apps/v1"
+  appsv1 "k8s.io/api/apps/v1"

and update the signature below to use appsv1:

-func IsStatefulsetChanged(newSts, sts *appv1.StatefulSet, logger logr.Logger) bool {
+func IsStatefulsetChanged(newSts, sts *appsv1.StatefulSet, logger logr.Logger) bool {
internal/ops/sentinel/actor/actor_heal_pod.go (1)

144-147: Fix typos in user-facing messages (“inconsist” → “inconsistent”).

These appear in events/logs; polish helps operators.

Apply this diff:

-            inst.SendEventf(corev1.EventTypeWarning, config.EventCleanResource,
-              "force delete pod with inconsist annotation %s", node.GetName())
+            inst.SendEventf(corev1.EventTypeWarning, config.EventCleanResource,
+              "force delete pod with inconsistent annotation %s", node.GetName())

Also applies to: 160-162

internal/ops/failover/engine.go (1)

298-301: LoadBalancer check should also consider Hostname, not just IP.

Sentinel path already checks both; align here to avoid false negatives on LB with hostname-only ingress.

Apply this diff:

-        if slices.IndexFunc(svc.Status.LoadBalancer.Ingress, func(i corev1.LoadBalancerIngress) bool {
-          return i.IP == announceIP
-        }) < 0 {
+        if slices.IndexFunc(svc.Status.LoadBalancer.Ingress, func(i corev1.LoadBalancerIngress) bool {
+          return i.IP == announceIP || i.Hostname == announceIP
+        }) < 0 {
           return actor.NewResult(CommandHealPod)
         }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 108fdf7 and 520a4bf.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (7)
  • go.mod (1 hunks)
  • internal/builder/clusterbuilder/statefulset.go (1 hunks)
  • internal/controller/middleware/redis/redisfailover.go (1 hunks)
  • internal/controller/middleware/redis_controller.go (2 hunks)
  • internal/ops/failover/actor/actor_patch_labels.go (1 hunks)
  • internal/ops/failover/engine.go (1 hunks)
  • internal/ops/sentinel/actor/actor_heal_pod.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
internal/ops/sentinel/actor/actor_heal_pod.go (1)
pkg/types/redis/redis.go (1)
  • RedisSentinelNode (113-138)
🔇 Additional comments (11)
internal/ops/failover/actor/actor_patch_labels.go (1)

81-87: Explicitly discarding IndexFunc result is correct for Go 1.24 toolchain.

This silences “unused result” linters while preserving the side-effect lookup. LGTM.
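
For reference, a minimal standalone sketch of the pattern (hypothetical `node` type and `findNode` helper, not the operator's `RedisNode` types); the range-loop alternative suggested in the nitpick above avoids the blank identifier entirely:

```go
// Side-effect lookup: slices.IndexFunc drives the search, its returned index
// is deliberately discarded with _, and the match is captured in the callback.
package main

import (
	"fmt"
	"slices"
)

type node struct{ name string }

func findNode(nodes []node, name string) (node, bool) {
	var found node
	ok := false
	_ = slices.IndexFunc(nodes, func(n node) bool {
		if n.name == name {
			found, ok = n, true
			return true
		}
		return false
	})
	return found, ok
}

func main() {
	pool := []node{{"redis-0"}, {"redis-1"}}
	if n, ok := findNode(pool, "redis-1"); ok {
		fmt.Println("found:", n.name)
	}
}
```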

internal/builder/clusterbuilder/statefulset.go (2)

321-326: PVC Resources: switch to VolumeResourceRequirements is correct for k8s v0.32.

Type update aligns with upstream API changes. No functional issues seen.


315-330: PVC specs are consistent
All PersistentVolumeClaimSpec instances use corev1.VolumeResourceRequirements; no corev1.ResourceRequirements were found in any PVC spec.
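
For context, a minimal sketch of a PVC spec under k8s.io/api v0.29+ (hypothetical claim name and size, not the repository's builder code), showing the field whose type changed:

```go
// Minimal sketch: PersistentVolumeClaimSpec.Resources is now
// corev1.VolumeResourceRequirements (previously corev1.ResourceRequirements).
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func redisDataPVC() corev1.PersistentVolumeClaim {
	return corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "redis-data"}, // hypothetical name
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			// Only the field's Go type changed; Requests/Limits keys are the same.
			Resources: corev1.VolumeResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}
}
```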

internal/ops/sentinel/actor/actor_heal_pod.go (1)

89-95: Explicitly ignoring IndexFunc return is fine.

Matches the pattern used elsewhere in this PR and unblocks builds with stricter linters.

internal/ops/failover/engine.go (1)

142-149: Discarding IndexFunc result explicitly is good.

Keeps the side-effect lookup while satisfying linters. No behavior change.

internal/controller/middleware/redis/redisfailover.go (1)

184-189: PVC Resources type update to VolumeResourceRequirements is correct.

Matches k8s API changes; no logic change.

internal/controller/middleware/redis_controller.go (2)

874-874: LGTM! Updated for typed rate-limiting interface.

The signature change from workqueue.RateLimitingInterface to workqueue.TypedRateLimitingInterface[reconcile.Request] is part of the controller-runtime upgrade to v0.19.0, providing better type safety for the queue operations.


891-891: LGTM! Updated for typed rate-limiting interface.

The signature change from workqueue.RateLimitingInterface to workqueue.TypedRateLimitingInterface[reconcile.Request] is part of the controller-runtime upgrade to v0.19.0, providing better type safety for the queue operations.
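
For context, a minimal sketch of the new callback shape under controller-runtime v0.19 (illustrative handler, not the repository's redis_controller.go code):

```go
// The watch callback now receives the generic
// workqueue.TypedRateLimitingInterface[reconcile.Request] instead of the
// old untyped workqueue.RateLimitingInterface.
package example

import (
	"context"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/util/workqueue"
	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// exampleCreateHandler enqueues a reconcile.Request for every created object.
var exampleCreateHandler = handler.Funcs{
	CreateFunc: func(ctx context.Context, e event.CreateEvent, q workqueue.TypedRateLimitingInterface[reconcile.Request]) {
		// The typed queue only accepts reconcile.Request values, so enqueueing
		// the wrong item type is now a compile-time error.
		q.Add(reconcile.Request{NamespacedName: types.NamespacedName{
			Namespace: e.Object.GetNamespace(),
			Name:      e.Object.GetName(),
		}})
	},
}
```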

go.mod (3)

3-3: LGTM! Go version upgrade to 1.24.0.

The Go version upgrade from 1.23.0 to 1.24.0 aligns with the PR objectives and brings the latest language features and improvements.
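
For illustration, a go.mod fragment with the relevant directives (the module path and require list are placeholders, not the project's actual go.mod):

```
module github.com/example/redis-operator // placeholder module path

go 1.24.0

require (
	k8s.io/api v0.32.0
	k8s.io/apimachinery v0.32.0
	k8s.io/client-go v0.32.0
	sigs.k8s.io/controller-runtime v0.19.0
)
```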


27-93: LGTM! Indirect dependencies updated consistently.

The indirect dependency updates appear consistent with the major dependency upgrades and align with the newer Go toolchain requirements.


6-24: No deprecated Kubernetes APIs detected; manual compatibility review advised

  • Scan found no v1beta1 or extensions/v1beta1 usages in our Go code.
  • Controller-runtime reconcile patterns (reconcile.Result{}) remain unchanged.
  • No direct cert-manager API imports or CRD versions detected in code.

Continue with a focused manual review of cert-manager CRD changes and any controller-runtime behavioural tweaks.
