Refactor configuration builders and add support for overriding the current k8s context #47
Conversation
@nefilim Hey there, sorry for the delay. Couldn't get to it until today. I've taken a quick look and I think I know where you are going with this. However, I'm not sure how you would use the new API. Could you please give me a snippet of the intended usage and what it would look like to change the config for a specific call/operation, so that we are on the same page?
Clicked the wrong button there 😅
hi @iabudiab - I'm not looking to make any changes that would allow for config changes on a per-call basis. I've only added support for specifying the context (as opposed to using the default current-context from the kubeconfig). Here is how I've used it in my project:

```swift
private func createClient(context: String, logger: Logger) throws -> KubernetesClient {
    logger.info("Creating k8s client for context: \(context)")
    guard let config = KubernetesClientConfig.initialize(context: context, logger: logger) else {
        throw KubernetesError.configurationError("Unable to initialize Kubernetes client - check kubeconfig")
    }
    return KubernetesClient(config: config)
}

let clientProd = try createClient(context: "context-prod", logger: logger)
let clientStage = try createClient(context: "context-stage", logger: logger)
```

The salient parts of my changes:
I think that
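For illustration, a minimal sketch (not the PR's actual implementation) of how an explicit context override could be resolved against a decoded kubeconfig; `NamedContext`, `MinimalKubeConfig` and `resolveContext` are simplified, hypothetical stand-ins:

```swift
// Simplified stand-ins for a decoded kubeconfig; not the real model types.
struct NamedContext {
    let name: String
    let cluster: String
    let user: String
}

struct MinimalKubeConfig {
    let currentContext: String?
    let contexts: [NamedContext]

    // Returns the requested context if present, otherwise falls back to
    // the kubeconfig's current-context.
    func resolveContext(named override: String?) -> NamedContext? {
        let target = override ?? currentContext
        return contexts.first { $0.name == target }
    }
}

// Usage: pick "context-prod" explicitly instead of the default current-context.
let config = MinimalKubeConfig(
    currentContext: "context-stage",
    contexts: [
        NamedContext(name: "context-prod", cluster: "prod", user: "prod-admin"),
        NamedContext(name: "context-stage", cluster: "stage", user: "stage-admin")
    ]
)
let selected = config.resolveContext(named: "context-prod")
```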
@nefilim Ah, I see, in #46 you've asked about specifying the context on instance and operation level, and somehow only the latter came to mind. Thanks very much for clarifying and providing more details. And thanks for the PR 👍. I think I'll merge this and take it from there (see below). Unless, of course, you want to continue working on it. Just let me know.

The points you've brought up make perfect sense and seem to be going in the right direction. However, I would like to decouple the Kubeconfig from the Client Config more and make them simpler. (This is the main reason why they were internal until now.) I was thinking something along these lines:

```swift
// - Loaders are extensions on KubeConfig
// - Loading a KubeConfig does just that => we get a KubeConfig struct
// - Also provide some shortcuts for default locations and env vars
let kubeconfig = KubeConfig.load(contentsOf: url)
let kubeconfig = KubeConfig.load(fromEnvironment: envVar)
let kubeconfig = KubeConfig.loadFromServiceAccount()

// - Then we can map a Kubeconfig to a ClientConfig
// - We could also specify some specific context to use
let clientConfig = KubernetesClientConfig(fromKubeconfig: kubeconfig)
let clientConfig = KubernetesClientConfig(fromKubeconfig: kubeconfig, context: "some context")

// Create instance
let client = KubernetesClient(config: clientConfig)
```

This way, your `createClient` would look something like this:

```swift
private func createClient(context: String, logger: Logger) throws -> KubernetesClient {
    logger.info("Creating k8s client for context: \(context)")
    guard
        let kubeconfig = KubeConfig.defaultLocalConfig(),
        let clientConfig = KubernetesClientConfig(fromKubeconfig: kubeconfig, context: context, logger: logger)
    else {
        throw KubernetesError.configurationError("Unable to initialize Kubernetes client - check kubeconfig")
    }
    return KubernetesClient(config: clientConfig)
}
```
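As a rough illustration of the loader-as-extension idea, here is a hedged sketch assuming the `KubeConfig` model is `Decodable` and a YAML decoder such as Yams is available; the `KubeConfig` struct below is a deliberately minimal stand-in, and the method names only mirror the snippet above rather than an existing API:

```swift
import Foundation
import Yams // assumption: Yams (or any YAML decoder) is available

// Minimal Decodable stand-in for the kubeconfig model.
struct KubeConfig: Decodable {
    let currentContext: String?

    enum CodingKeys: String, CodingKey {
        case currentContext = "current-context"
    }
}

extension KubeConfig {
    // Load and decode a kubeconfig file from a URL.
    static func load(contentsOf url: URL) throws -> KubeConfig {
        let contents = try String(contentsOf: url, encoding: .utf8)
        return try YAMLDecoder().decode(KubeConfig.self, from: contents)
    }

    // Shortcut for the conventional default location: ~/.kube/config
    static func defaultLocalConfig() throws -> KubeConfig {
        let url = FileManager.default.homeDirectoryForCurrentUser
            .appendingPathComponent(".kube")
            .appendingPathComponent("config")
        return try load(contentsOf: url)
    }
}
```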
hi @iabudiab - ah ok, sounds like a good idea to simplify by implementing KubeConfig extensions. I'm not sure the service account belongs in there though 🤔 KubeConfig models the kubeconfig file itself, and a service account isn't really part of that. That said, I haven't played around with it yet; let me try and see how easily a SA fits into a KubeConfig - maybe it's less of a mismatch than I'm thinking 😅
Ok, took a quick stab at the changes. I don't think it's possible to map a SA to a KubeConfig easily: a SA is context unaware, and I believe there's no standardized way to determine the context from the environment. The SA use case is typically from within a cluster, while a KubeConfig is typically used from outside the cluster. I think the code previously supported falling back from KubeConfig to SA - not sure if that's a common use case worth supporting?
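To illustrate that point: an in-cluster service account is just a token, a CA bundle and the API server address read from well-known locations, with no contexts at all, so it maps more naturally onto a client configuration than onto a KubeConfig. A minimal, hypothetical sketch (the `InClusterConfig` type is made up for illustration):

```swift
import Foundation

// Hypothetical container for the pieces an in-cluster service account provides.
struct InClusterConfig {
    let apiServerURL: URL
    let token: String
    let caCertificatePath: String
    let namespace: String
}

func loadInClusterConfig() throws -> InClusterConfig? {
    // Standard mount path and env vars inside a Kubernetes pod.
    let saPath = "/var/run/secrets/kubernetes.io/serviceaccount"
    let env = ProcessInfo.processInfo.environment

    guard
        let host = env["KUBERNETES_SERVICE_HOST"],
        let port = env["KUBERNETES_SERVICE_PORT"],
        let url = URL(string: "https://\(host):\(port)")
    else {
        return nil // not running inside a cluster
    }

    let token = try String(contentsOfFile: "\(saPath)/token", encoding: .utf8)
    let namespace = try String(contentsOfFile: "\(saPath)/namespace", encoding: .utf8)

    return InClusterConfig(
        apiServerURL: url,
        token: token.trimmingCharacters(in: .whitespacesAndNewlines),
        caCertificatePath: "\(saPath)/ca.crt",
        namespace: namespace.trimmingCharacters(in: .whitespacesAndNewlines)
    )
}
```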
Thanks a lot for all the work you've put into this 👍 I'll merge it soon.
thank you @iabudiab for merging! 🙏 I noticed there are a lot of warnings (treated as errors, it seems) on the main branch now. I'm not able to replicate any of these locally - I get no warnings or errors at all using the Swift 6.1.2 toolchain on macOS. What can I do to replicate these locally and try to address them? I'd love to get a release out so I can release my project in turn.
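One way to try to reproduce this locally, assuming the CI drives the build through SwiftPM with warnings promoted to errors (it may instead be a different toolchain or stricter concurrency checking), is to pass `-warnings-as-errors` to the compiler, e.g. `swift build -Xswiftc -warnings-as-errors`, or per target in a manifest as in this hedged sketch (package and target names are placeholders, not the project's real manifest):

```swift
// swift-tools-version:5.5
import PackageDescription

let package = Package(
    name: "Example", // placeholder name
    targets: [
        .target(
            name: "Example", // placeholder target
            swiftSettings: [
                // Promote all Swift warnings to errors for this target.
                .unsafeFlags(["-warnings-as-errors"])
            ]
        )
    ]
)
```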
Hey @nefilim, really sorry for not answering this whole time. I was away on vacation the last two weeks and just got back and read your comment. I'd seen the warnings/errors before going away, but didn't have the time to address them 😿 No worries though, I'll release a new version with those updates and fixes this week for sure.
Should address #46