Integrate resource quotas with scale up #8835
base: master
Conversation
Hi @norbertcyran. Thanks for your PR. I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Force-pushed from 4433dbd to 5ff5949.
```go
func (o *ScaleUpOrchestrator) newQuotasTracker() (*resourcequotas.Tracker, error) {
	var nodes []*apiv1.Node
	nodeInfos, err := o.autoscalingCtx.ClusterSnapshot.ListNodeInfos()
	if err != nil {
		return nil, err
	}
	// Collect nodes from the cluster snapshot, skipping virtual kubelet nodes,
	// which should not be counted against resource quotas.
	for _, nodeInfo := range nodeInfos {
		node := nodeInfo.Node()
		if utils.IsVirtualKubeletNode(node) {
			continue
		}
		nodes = append(nodes, node)
	}
	return o.quotasTrackerFactory.NewQuotasTracker(o.autoscalingCtx, nodes)
}
```
That probably could be done better:
- Checking for virtual kubelet nodes could be done via `resourcequotas.NodeFilter`
- Passing all the nodes, including upcoming nodes, via the `nodes` parameter in `ScaleUp` (this way we could also remove some logic in `ScaleUp` around upcoming nodes, which are used for checking total node limits)

That would also make it easier to integrate with the other implementations of the Orchestrator (see the sketch below).

Note to reviewers: do you think it's worth the hassle?
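Not from the PR itself: a minimal sketch of how the `NodeFilter` idea above could look, assuming the filter is simply a `func(*apiv1.Node) bool` hook applied before the tracker is built. The actual `resourcequotas` API may define this differently (or not at all), and `applyFilters` / `excludeVirtualKubelet` are hypothetical helpers.

```go
package sketch

import (
	apiv1 "k8s.io/api/core/v1"
)

// NodeFilter is an assumed hook shape, not the actual resourcequotas type.
type NodeFilter func(node *apiv1.Node) bool

// excludeVirtualKubelet builds a filter that drops virtual kubelet nodes,
// moving the check out of newQuotasTracker. The isVirtualKubeletNode
// argument stands in for utils.IsVirtualKubeletNode.
func excludeVirtualKubelet(isVirtualKubeletNode func(*apiv1.Node) bool) NodeFilter {
	return func(node *apiv1.Node) bool {
		return !isVirtualKubeletNode(node)
	}
}

// applyFilters returns only the nodes accepted by every filter.
func applyFilters(nodes []*apiv1.Node, filters ...NodeFilter) []*apiv1.Node {
	var kept []*apiv1.Node
	for _, node := range nodes {
		accepted := true
		for _, f := range filters {
			if !f(node) {
				accepted = false
				break
			}
		}
		if accepted {
			kept = append(kept, node)
		}
	}
	return kept
}
```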
Force-pushed from 5ff5949 to 41a74a5.
/hold
Let's not merge it before #8834. The change is ready for review, though.
/assign towca BigDarkClown
Force-pushed from 41a74a5 to 658a76c.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: norbertcyran. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
Force-pushed from 658a76c to d5d93e4.
Force-pushed from d5d93e4 to ebcee0f.
What type of PR is this?
/kind feature
What this PR does / why we need it:
Integrates the new `resourcequotas` package with the scale up orchestrator. `resourcequotas` replaces the old resource management done by the `k8s.io/autoscaler/cluster-autoscaler/core/scaleup/resource` package.

Which issue(s) this PR fixes:
Part of #8703
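As a rough illustration of the integration described above — not the actual `resourcequotas` API, whose types and method names are assumptions here — the orchestrator could consult a tracker to cap a proposed scale-up before committing it:

```go
package sketch

import "fmt"

// quotaTracker is a hypothetical stand-in for resourcequotas.Tracker;
// the real interface may expose different methods.
type quotaTracker interface {
	// ApplyLimits is assumed to trim a proposed node-group delta so that
	// cluster-wide quotas (cores, memory, node count) are not exceeded.
	ApplyLimits(nodeGroupID string, proposedDelta int) (allowedDelta int, err error)
}

// capScaleUp reduces a proposed scale-up to whatever the quotas allow.
func capScaleUp(tracker quotaTracker, nodeGroupID string, proposedDelta int) (int, error) {
	allowed, err := tracker.ApplyLimits(nodeGroupID, proposedDelta)
	if err != nil {
		return 0, fmt.Errorf("checking resource quotas for %s: %w", nodeGroupID, err)
	}
	if allowed < proposedDelta {
		fmt.Printf("scale-up of %s capped from %d to %d nodes by resource quotas\n",
			nodeGroupID, proposedDelta, allowed)
	}
	return allowed, nil
}
```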
Special notes for your reviewer:
Please focus on possible regressions. I tested it manually on GKE and added some unit tests, but since the changes directly touch the most important CA functionality, let's ensure that we don't break anything here.
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: