Make NodeReadinessRule spec fields immutable to prevent security issues#164

Open
Priyankasaggu11929 wants to merge 2 commits intokubernetes-sigs:mainfrom
Priyankasaggu11929:add-immutability-constraints

Conversation

@Priyankasaggu11929
Member

Currently, NodeReadinessRule.spec fields (especially spec.taint) are mutable after creation. When these fields are modified in place, taints previously managed by the rule become orphaned on the nodes selected by the NRR.

(see Example 1 in the Testing section for this case in action)

For example, changing the taint effect from NoSchedule to NoExecute makes the NRC controller add a new taint to the node without removing the original one (cleanup today happens only on NRR deletion, not on in-place updates).

If a malicious or misconfigured update then reverts the rule back to NoSchedule, the NoExecute taint remains on the node but is no longer managed by the NRC controller.

This leads to significant workload disruption (unexpected pod evictions, permanent scheduling failures, etc.) until the orphaned taints are manually removed.

Without immutability, the NRC controller would have to track and reconcile previously managed taints across in-place updates, which significantly increases controller complexity and could introduce race conditions.

Proposed Fix

  • This PR makes the following NodeReadinessRule.spec fields immutable using CRD validation (x-kubernetes-validations):

    • spec.taint.key
    • spec.taint.value
    • spec.taint.effect
    • spec.nodeSelector
    • spec.conditions

    These fields collectively define the identity and scope of the taint managed by the NRC controller. Preventing them from being modified after creation ensures that the controller always maintains consistent ownership of the taints it manages.

    If a user needs to change these values, they must delete and recreate the NodeReadinessRule.

  • Additionally, this PR fixes an issue in the hasTaintBySpec function (see Example 2 in the Testing section) where the taint comparison ignored the value field, causing updates that changed only the taint value to be missed during NRC controller reconciliation.
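Field-level immutability with x-kubernetes-validations is typically expressed as a CEL transition rule comparing `self` to `oldSelf`. A minimal sketch (illustrative only; the exact rules and messages added by this PR may differ):

```yaml
# Sketch: immutability via a CEL transition rule (illustrative,
# not the exact schema change from this PR).
spec:
  properties:
    taint:
      type: object
      x-kubernetes-validations:
        - rule: "self == oldSelf"
          message: "spec.taint is immutable; delete and recreate the NodeReadinessRule to change it"
```

With such a rule in place, an in-place `kubectl edit` that modifies the field is rejected by the API server with the configured message, so no in-place mutation can reach the controller.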

Type of Change

/kind bug
/kind api-change

Testing

Example 1

Steps to reproduce how mutating the taint effect creates an orphaned taint and permanent scheduling failures.

// using security-agent-readiness examples from branch 
// https://github.com/kubernetes-sigs/node-readiness-controller/pull/154

❯ kind create cluster --config=examples/security-agent-readiness/kind-cluster-config.yaml
❯ kubectl get nodes security-agent-demo-worker -o json | jq .spec.taints
[
  {
    "effect": "NoSchedule",
    "key": "readiness.k8s.io/security-agent-ready",
    "value": "pending"
  }
]

// build and deploy NRC controller
❯ make docker-build IMG=controller:latest
❯ kind load docker-image controller:latest --name security-agent-demo
❯ make deploy IMG=controller:latest
❯ kubectl apply -f config/crd/bases/

// install npd (ignore the falco part for this experiment for now)
❯ USE_NPD=true ./examples/security-agent-readiness/setup-falco.sh

// apply nrr rule
❯ kubectl apply -f ~/node-readiness-controller/examples/security-agent-readiness/npd-variant/security-agent-readiness-rule-npd.yaml
❯ kubectl get nrr security-agent-readiness-rule-npd -o wide
NAME                                MODE         TAINT                                   AGE
security-agent-readiness-rule-npd   continuous   readiness.k8s.io/security-agent-ready   23s

// create a nginx deployment with matching tolerations of the above rule with effect `NoSchedule`
❯ kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-with-toleration
  labels:
    app: nginx-tolerant
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx-tolerant
  template:
    metadata:
      labels:
        app: nginx-tolerant
    spec:
      tolerations:
      - key: "readiness.k8s.io/security-agent-ready"
        operator: "Equal"
        value: "pending"
        effect: "NoSchedule"
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
EOF

// nginx-* pods are scheduled on the worker node
❯ kubectl get pods -o wide -n default
NAME                                     READY   STATUS    RESTARTS   AGE   IP           NODE                         NOMINATED NODE   READINESS GATES
nginx-with-toleration-84559fbfc4-cvjkx   1/1     Running   0          59s   10.244.1.2   security-agent-demo-worker   <none>           <none>
nginx-with-toleration-84559fbfc4-dgb89   1/1     Running   0          59s   10.244.1.6   security-agent-demo-worker   <none>           <none>
nginx-with-toleration-84559fbfc4-kbvph   1/1     Running   0          59s   10.244.1.4   security-agent-demo-worker   <none>           <none>
nginx-with-toleration-84559fbfc4-khktp   1/1     Running   0          59s   10.244.1.5   security-agent-demo-worker   <none>           <none>
nginx-with-toleration-84559fbfc4-kkwj8   1/1     Running   0          59s   10.244.1.3   security-agent-demo-worker   <none>           <none>

// now, edit the NRR rule with taint effect NoSchedule to NoExecute
❯ kubectl edit nrr security-agent-readiness-rule-npd 
❯ kubectl get nrr security-agent-readiness-rule-npd -o json | jq .spec.taint
{
  "effect": "NoExecute",
  "key": "readiness.k8s.io/security-agent-ready",
  "value": "pending"
}

// The NRC controller now adds a second `NoExecute` taint on the worker node (a TaintAdded event is emitted).
// The worker node now carries 2 taints: the original `NoSchedule` taint is not deleted
// by the controller on an in-place change, so it is now orphaned.
// All 5 nginx pods are evicted immediately (note the `TaintManagerEviction` entries in the event logs).

❯ kubectl get node security-agent-demo-worker -o json | jq .spec.taints
[
  {
    "effect": "NoSchedule",
    "key": "readiness.k8s.io/security-agent-ready",
    "value": "pending"
  },
  {
    "effect": "NoExecute",
    "key": "readiness.k8s.io/security-agent-ready",
    "value": "pending"
  }
]

❯ kubectl get events | grep -E "TaintManagerEviction"
7m37s       Normal    TaintManagerEviction      pod/nginx-with-toleration-84559fbfc4-cvjkx    Marking for deletion Pod default/nginx-with-toleration-84559fbfc4-cvjkx
7m37s       Normal    TaintManagerEviction      pod/nginx-with-toleration-84559fbfc4-dgb89    Marking for deletion Pod default/nginx-with-toleration-84559fbfc4-dgb89
7m37s       Normal    TaintManagerEviction      pod/nginx-with-toleration-84559fbfc4-kbvph    Marking for deletion Pod default/nginx-with-toleration-84559fbfc4-kbvph
7m37s       Normal    TaintManagerEviction      pod/nginx-with-toleration-84559fbfc4-khktp    Marking for deletion Pod default/nginx-with-toleration-84559fbfc4-khktp
7m37s       Normal    TaintManagerEviction      pod/nginx-with-toleration-84559fbfc4-kkwj8    Marking for deletion Pod default/nginx-with-toleration-84559fbfc4-kkwj8


// now, change the NRR taint effect back from `NoExecute` to `NoSchedule`.
// Since the `NoSchedule` taint is already present on the worker node,
// this change emits no new events (TaintAdded/TaintAdopted etc.).
// The node now has two taints with different effects (`NoExecute` + `NoSchedule`),
// but the more dangerous `NoExecute` taint is no longer managed by NRC and is orphaned,
// which causes further disruption.

❯ kubectl edit nrr security-agent-readiness-rule-npd 

❯ kubectl get nrr security-agent-readiness-rule-npd -o json | jq .spec.taint
{
  "effect": "NoSchedule",
  "key": "readiness.k8s.io/security-agent-ready",
  "value": "pending"
}

Below are the node events throughout the above testing steps:

❯ kubectl describe node security-agent-demo-worker 
Name:               security-agent-demo-worker
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=security-agent-demo-worker
                    kubernetes.io/os=linux
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 12 Mar 2026 17:41:41 +0530
Taints:             readiness.k8s.io/security-agent-ready=pending:NoExecute
                    readiness.k8s.io/security-agent-ready=pending:NoSchedule
Unschedulable:      false
...
  Type                      Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                      ------  -----------------                 ------------------                ------                       -------
  falco.org/FalcoNotReady   True    Thu, 12 Mar 2026 18:14:42 +0530   Thu, 12 Mar 2026 17:49:38 +0530   FalcoNotDeployed             Falco is not deployed or not responding on port 8765
  MemoryPressure            False   Thu, 12 Mar 2026 18:14:42 +0530   Thu, 12 Mar 2026 17:41:41 +0530   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure              False   Thu, 12 Mar 2026 18:14:42 +0530   Thu, 12 Mar 2026 17:41:41 +0530   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure               False   Thu, 12 Mar 2026 18:14:42 +0530   Thu, 12 Mar 2026 17:41:41 +0530   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                     True    Thu, 12 Mar 2026 18:14:42 +0530   Thu, 12 Mar 2026 17:41:55 +0530   KubeletReady                 kubelet is posting ready status
...
Non-terminated Pods:          (3 in total)
  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
  falco                       node-problem-detector-falco-rrcxm    20m (0%)      100m (0%)   64Mi (0%)        128Mi (0%)     29m
  kube-system                 kindnet-d252t                        100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      36m
  kube-system                 kube-proxy-t5mxr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         36m
...
Events:
  Type     Reason            Age   From                       Message
  ----     ------            ----  ----                       -------
  Normal   RegisteredNode    36m   node-controller            Node security-agent-demo-worker event: Registered Node security-agent-demo-worker in Controller
  Warning  FalcoNotDeployed  28m   falco-monitor              Node condition falco.org/FalcoNotReady is now: True, reason: FalcoNotDeployed, message: "Falco is not deployed or not responding on port 8765"
  Normal   TaintAdopted      24m   node-readiness-controller  Taint 'readiness.k8s.io/security-agent-ready:NoSchedule' is now managed by rule 'security-agent-readiness-rule-npd'

and

❯ kubectl get events
LAST SEEN   TYPE      REASON                    OBJECT                                        MESSAGE
118s        Warning   FailedScheduling          pod/nginx-with-toleration-84559fbfc4-5vmxb    0/2 nodes are available: 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
118s        Warning   FailedScheduling          pod/nginx-with-toleration-84559fbfc4-8wmmw    0/2 nodes are available: 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
118s        Warning   FailedScheduling          pod/nginx-with-toleration-84559fbfc4-9z7bt    0/2 nodes are available: 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
27m         Normal    Scheduled                 pod/nginx-with-toleration-84559fbfc4-cvjkx    Successfully assigned default/nginx-with-toleration-84559fbfc4-cvjkx to security-agent-demo-worker
27m         Normal    Pulling                   pod/nginx-with-toleration-84559fbfc4-cvjkx    Pulling image "nginx:latest"
27m         Normal    Pulled                    pod/nginx-with-toleration-84559fbfc4-cvjkx    Successfully pulled image "nginx:latest" in 14.85s (14.851s including waiting). Image size: 62960551 bytes.
27m         Normal    Created                   pod/nginx-with-toleration-84559fbfc4-cvjkx    Container created
27m         Normal    Started                   pod/nginx-with-toleration-84559fbfc4-cvjkx    Container started
22m         Normal    TaintManagerEviction      pod/nginx-with-toleration-84559fbfc4-cvjkx    Marking for deletion Pod default/nginx-with-toleration-84559fbfc4-cvjkx
22m         Normal    Killing                   pod/nginx-with-toleration-84559fbfc4-cvjkx    Stopping container nginx
27m         Normal    Scheduled                 pod/nginx-with-toleration-84559fbfc4-dgb89    Successfully assigned default/nginx-with-toleration-84559fbfc4-dgb89 to security-agent-demo-worker
27m         Normal    Pulling                   pod/nginx-with-toleration-84559fbfc4-dgb89    Pulling image "nginx:latest"
27m         Normal    Pulled                    pod/nginx-with-toleration-84559fbfc4-dgb89    Successfully pulled image "nginx:latest" in 1.543s (21.647s including waiting). Image size: 62960551 bytes.
27m         Normal    Created                   pod/nginx-with-toleration-84559fbfc4-dgb89    Container created
27m         Normal    Started                   pod/nginx-with-toleration-84559fbfc4-dgb89    Container started
22m         Normal    TaintManagerEviction      pod/nginx-with-toleration-84559fbfc4-dgb89    Marking for deletion Pod default/nginx-with-toleration-84559fbfc4-dgb89
22m         Normal    Killing                   pod/nginx-with-toleration-84559fbfc4-dgb89    Stopping container nginx
118s        Warning   FailedScheduling          pod/nginx-with-toleration-84559fbfc4-dl7tz    0/2 nodes are available: 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
27m         Normal    Scheduled                 pod/nginx-with-toleration-84559fbfc4-kbvph    Successfully assigned default/nginx-with-toleration-84559fbfc4-kbvph to security-agent-demo-worker
27m         Normal    Pulling                   pod/nginx-with-toleration-84559fbfc4-kbvph    Pulling image "nginx:latest"
27m         Normal    Pulled                    pod/nginx-with-toleration-84559fbfc4-kbvph    Successfully pulled image "nginx:latest" in 1.562s (18.185s including waiting). Image size: 62960551 bytes.
27m         Normal    Created                   pod/nginx-with-toleration-84559fbfc4-kbvph    Container created
27m         Normal    Started                   pod/nginx-with-toleration-84559fbfc4-kbvph    Container started
22m         Normal    TaintManagerEviction      pod/nginx-with-toleration-84559fbfc4-kbvph    Marking for deletion Pod default/nginx-with-toleration-84559fbfc4-kbvph
22m         Normal    Killing                   pod/nginx-with-toleration-84559fbfc4-kbvph    Stopping container nginx
27m         Normal    Scheduled                 pod/nginx-with-toleration-84559fbfc4-khktp    Successfully assigned default/nginx-with-toleration-84559fbfc4-khktp to security-agent-demo-worker
27m         Normal    Pulling                   pod/nginx-with-toleration-84559fbfc4-khktp    Pulling image "nginx:latest"
27m         Normal    Pulled                    pod/nginx-with-toleration-84559fbfc4-khktp    Successfully pulled image "nginx:latest" in 1.931s (20.113s including waiting). Image size: 62960551 bytes.
27m         Normal    Created                   pod/nginx-with-toleration-84559fbfc4-khktp    Container created
27m         Normal    Started                   pod/nginx-with-toleration-84559fbfc4-khktp    Container started
22m         Normal    TaintManagerEviction      pod/nginx-with-toleration-84559fbfc4-khktp    Marking for deletion Pod default/nginx-with-toleration-84559fbfc4-khktp
22m         Normal    Killing                   pod/nginx-with-toleration-84559fbfc4-khktp    Stopping container nginx
27m         Normal    Scheduled                 pod/nginx-with-toleration-84559fbfc4-kkwj8    Successfully assigned default/nginx-with-toleration-84559fbfc4-kkwj8 to security-agent-demo-worker
27m         Normal    Pulling                   pod/nginx-with-toleration-84559fbfc4-kkwj8    Pulling image "nginx:latest"
27m         Normal    Pulled                    pod/nginx-with-toleration-84559fbfc4-kkwj8    Successfully pulled image "nginx:latest" in 1.805s (16.648s including waiting). Image size: 62960551 bytes.
27m         Normal    Created                   pod/nginx-with-toleration-84559fbfc4-kkwj8    Container created
27m         Normal    Started                   pod/nginx-with-toleration-84559fbfc4-kkwj8    Container started
22m         Normal    TaintManagerEviction      pod/nginx-with-toleration-84559fbfc4-kkwj8    Marking for deletion Pod default/nginx-with-toleration-84559fbfc4-kkwj8
22m         Normal    Killing                   pod/nginx-with-toleration-84559fbfc4-kkwj8    Stopping container nginx
118s        Warning   FailedScheduling          pod/nginx-with-toleration-84559fbfc4-zhdmv    0/2 nodes are available: 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
27m         Normal    SuccessfulCreate          replicaset/nginx-with-toleration-84559fbfc4   Created pod: nginx-with-toleration-84559fbfc4-cvjkx
27m         Normal    SuccessfulCreate          replicaset/nginx-with-toleration-84559fbfc4   Created pod: nginx-with-toleration-84559fbfc4-kkwj8
27m         Normal    SuccessfulCreate          replicaset/nginx-with-toleration-84559fbfc4   Created pod: nginx-with-toleration-84559fbfc4-kbvph
27m         Normal    SuccessfulCreate          replicaset/nginx-with-toleration-84559fbfc4   Created pod: nginx-with-toleration-84559fbfc4-dgb89
27m         Normal    SuccessfulCreate          replicaset/nginx-with-toleration-84559fbfc4   Created pod: nginx-with-toleration-84559fbfc4-khktp
22m         Normal    SuccessfulCreate          replicaset/nginx-with-toleration-84559fbfc4   Created pod: nginx-with-toleration-84559fbfc4-dl7tz
22m         Normal    SuccessfulCreate          replicaset/nginx-with-toleration-84559fbfc4   Created pod: nginx-with-toleration-84559fbfc4-zhdmv
22m         Normal    SuccessfulCreate          replicaset/nginx-with-toleration-84559fbfc4   Created pod: nginx-with-toleration-84559fbfc4-9z7bt
22m         Normal    SuccessfulCreate          replicaset/nginx-with-toleration-84559fbfc4   Created pod: nginx-with-toleration-84559fbfc4-8wmmw
22m         Normal    SuccessfulCreate          replicaset/nginx-with-toleration-84559fbfc4   (combined from similar events): Created pod: nginx-with-toleration-84559fbfc4-5vmxb
27m         Normal    ScalingReplicaSet         deployment/nginx-with-toleration              Scaled up replica set nginx-with-toleration-84559fbfc4 from 0 to 5
38m         Normal    Starting                  node/security-agent-demo-control-plane        Starting kubelet.
38m         Normal    NodeHasSufficientMemory   node/security-agent-demo-control-plane        Node security-agent-demo-control-plane status is now: NodeHasSufficientMemory
38m         Normal    NodeHasNoDiskPressure     node/security-agent-demo-control-plane        Node security-agent-demo-control-plane status is now: NodeHasNoDiskPressure
38m         Normal    NodeHasSufficientPID      node/security-agent-demo-control-plane        Node security-agent-demo-control-plane status is now: NodeHasSufficientPID
38m         Normal    NodeAllocatableEnforced   node/security-agent-demo-control-plane        Updated Node Allocatable limit across pods
38m         Normal    Starting                  node/security-agent-demo-control-plane        Starting kubelet.
38m         Normal    NodeAllocatableEnforced   node/security-agent-demo-control-plane        Updated Node Allocatable limit across pods
37m         Normal    NodeHasSufficientMemory   node/security-agent-demo-control-plane        Node security-agent-demo-control-plane status is now: NodeHasSufficientMemory
37m         Normal    NodeHasNoDiskPressure     node/security-agent-demo-control-plane        Node security-agent-demo-control-plane status is now: NodeHasNoDiskPressure
37m         Normal    NodeHasSufficientPID      node/security-agent-demo-control-plane        Node security-agent-demo-control-plane status is now: NodeHasSufficientPID
37m         Normal    RegisteredNode            node/security-agent-demo-control-plane        Node security-agent-demo-control-plane event: Registered Node security-agent-demo-control-plane in Controller
37m         Normal    Starting                  node/security-agent-demo-control-plane        
37m         Normal    NodeReady                 node/security-agent-demo-control-plane        Node security-agent-demo-control-plane status is now: NodeReady
29m         Warning   FalcoNotDeployed          node/security-agent-demo-control-plane        Node condition falco.org/FalcoNotReady is now: True, reason: FalcoNotDeployed, message: "Falco is not deployed or not responding on port 8765"
28m         Normal    FalcoHealthy              node/security-agent-demo-control-plane        Node condition falco.org/FalcoNotReady is now: False, reason: FalcoHealthy, message: "Falco security monitoring is functional"
8m31s       Warning   FalcoNotDeployed          node/security-agent-demo-control-plane        Node condition falco.org/FalcoNotReady is now: True, reason: FalcoNotDeployed, message: "Falco is not deployed or not responding on port 8765"
8m11s       Normal    FalcoHealthy              node/security-agent-demo-control-plane        Node condition falco.org/FalcoNotReady is now: False, reason: FalcoHealthy, message: "Falco security monitoring is functional"
37m         Normal    Starting                  node/security-agent-demo-worker               Starting kubelet.
37m         Normal    NodeHasSufficientMemory   node/security-agent-demo-worker               Node security-agent-demo-worker status is now: NodeHasSufficientMemory
37m         Normal    NodeHasNoDiskPressure     node/security-agent-demo-worker               Node security-agent-demo-worker status is now: NodeHasNoDiskPressure
37m         Normal    NodeHasSufficientPID      node/security-agent-demo-worker               Node security-agent-demo-worker status is now: NodeHasSufficientPID
37m         Normal    NodeAllocatableEnforced   node/security-agent-demo-worker               Updated Node Allocatable limit across pods
37m         Normal    RegisteredNode            node/security-agent-demo-worker               Node security-agent-demo-worker event: Registered Node security-agent-demo-worker in Controller
37m         Normal    Starting                  node/security-agent-demo-worker               
37m         Normal    NodeReady                 node/security-agent-demo-worker               Node security-agent-demo-worker status is now: NodeReady
29m         Warning   FalcoNotDeployed          node/security-agent-demo-worker               Node condition falco.org/FalcoNotReady is now: True, reason: FalcoNotDeployed, message: "Falco is not deployed or not responding on port 8765"
25m         Normal    TaintAdopted              node/security-agent-demo-worker               Taint 'readiness.k8s.io/security-agent-ready:NoSchedule' is now managed by rule 'security-agent-readiness-rule-npd'
22m         Normal    TaintAdded                node/security-agent-demo-worker               Taint 'readiness.k8s.io/security-agent-ready:NoExecute' added by rule 'security-agent-readiness-rule-npd'

Example 2

Apart from the immutability issue above, the hasTaintBySpec function currently matches a taint on (key + effect) rather than (key + effect + value), so changing only the value does not trigger the NRC controller to reconcile and apply a new taint.

The controller therefore treats the following taints as equivalent:

  • readiness.k8s.io/security-agent-ready=pending:NoSchedule
  • readiness.k8s.io/security-agent-ready=ready:NoSchedule

This is fixed in the second commit, f8d7b38.
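The value-comparison fix can be sketched as follows. The function name follows the PR description, but the signature and the local Taint struct are simplifications (the real controller works with corev1.Taint), so treat this as an illustrative sketch rather than the exact code from the commit:

```go
package main

import "fmt"

// Taint mirrors the key/value/effect triple of a Kubernetes node taint
// (local struct for illustration; the real controller uses corev1.Taint).
type Taint struct {
	Key, Value, Effect string
}

// hasTaintBySpec reports whether taints already contains the taint
// described by spec. Before the fix the comparison covered only Key and
// Effect, so a value-only change (pending -> ready) was never detected;
// including Value in the comparison fixes that.
func hasTaintBySpec(taints []Taint, spec Taint) bool {
	for _, t := range taints {
		if t.Key == spec.Key && t.Effect == spec.Effect && t.Value == spec.Value {
			return true
		}
	}
	return false
}

func main() {
	nodeTaints := []Taint{
		{Key: "readiness.k8s.io/security-agent-ready", Value: "pending", Effect: "NoSchedule"},
	}
	// Same key+effect but a different value: must NOT count as present,
	// so the controller reconciles and applies the new taint.
	fmt.Println(hasTaintBySpec(nodeTaints, Taint{
		Key: "readiness.k8s.io/security-agent-ready", Value: "ready", Effect: "NoSchedule",
	})) // prints false
	// Exact key+value+effect match: already present, nothing to do.
	fmt.Println(hasTaintBySpec(nodeTaints, Taint{
		Key: "readiness.k8s.io/security-agent-ready", Value: "pending", Effect: "NoSchedule",
	})) // prints true
}
```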

❯ kubectl get nrr security-agent-readiness-rule-npd -o json | jq .spec.taint
{
  "effect": "NoSchedule",
  "key": "readiness.k8s.io/security-agent-ready",
  "value": "pending"
}

// change the taint value from `pending` to `ready`
❯ kubectl edit nrr security-agent-readiness-rule-npd 
❯ kubectl get nrr security-agent-readiness-rule-npd -o json | jq .spec.taint
{
  "effect": "NoSchedule",
  "key": "readiness.k8s.io/security-agent-ready",
  "value": "ready"
}

// the NRC controller didn't add a new taint for the `ready` value.
❯ kubectl get node security-agent-demo-worker -o json | jq .spec.taints
[
  {
    "effect": "NoSchedule",
    "key": "readiness.k8s.io/security-agent-ready",
    "value": "pending"
  },
  {
    "effect": "NoExecute",
    "key": "readiness.k8s.io/security-agent-ready",
    "value": "pending"
  }
]

Checklist

  • make test passes
  • make test-e2e passes
  • make lint passes
  • make verify passes

Does this PR introduce a user-facing change?

NodeReadinessRule taint fields (`spec.taint.key`, `spec.taint.value`, `spec.taint.effect`, `spec.nodeSelector`, `spec.conditions`) are now immutable. Users must recreate the rule to change these values.

@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API labels Mar 12, 2026
@k8s-ci-robot k8s-ci-robot requested a review from dchen1107 March 12, 2026 14:21
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: Priyankasaggu11929
Once this PR has been reviewed and has the lgtm label, please assign ajaysundark for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Mar 12, 2026
@netlify

netlify bot commented Mar 12, 2026

Deploy Preview for node-readiness-controller ready!

Name Link
🔨 Latest commit f8d7b38
🔍 Latest deploy log https://app.netlify.com/projects/node-readiness-controller/deploys/69b2cbd1fd870000080e45d2
😎 Deploy Preview https://deploy-preview-164--node-readiness-controller.netlify.app

@Priyankasaggu11929
Member Author

cc: @ajaysundark for review. Thanks!

@ajaysundark ajaysundark self-requested a review March 12, 2026 17:39