87 changes: 87 additions & 0 deletions config/v1alpha1/types_cluster_monitoring.go
@@ -94,6 +94,11 @@ type ClusterMonitoringSpec struct {
// When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time.
// +optional
MetricsServerConfig MetricsServerConfig `json:"metricsServerConfig,omitempty,omitzero"`
// prometheusOperatorConfig is an optional field that can be used to configure the Prometheus Operator component.
// Specifically, it can configure how the Prometheus Operator instance is deployed, pod scheduling, and resource allocation.
// When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time.
// +optional
PrometheusOperatorConfig PrometheusOperatorConfig `json:"prometheusOperatorConfig,omitempty,omitzero"`
Contributor:

How does this configuration relate to the configuration proposed in #2463?

Contributor Author:

This one is for the Prometheus Operator; the other one is the Prometheus config. They are related, of course, but they have different configurations.

Contributor:

Is the Prometheus config used by the PrometheusOperator?

Would it make sense to co-locate the configurations under a top-level prometheus field?

Contributor Author:

Not directly. The Prometheus config is used by Prometheus. The Prometheus Operator manages Prometheus instances, Alertmanager, etc.

Contributor:

So what configures the Prometheus instances created by the Prometheus Operator to use the Prometheus Config?

Contributor Author:

CMO takes the PrometheusK8sConfig ConfigMap and creates a CR.
The Prometheus Operator takes that CR and configures Prometheus.

I can understand your idea, but the Prometheus Operator manages all of these components, and I think it's not a good idea to have all the fields inside PrometheusOperatorConfig.

A core feature of the Prometheus Operator is to monitor the Kubernetes API server for changes to specific objects and ensure that the current Prometheus deployments match these objects. The Operator acts on the following [Custom Resource Definitions (CRDs)](https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/):

  • Prometheus, which defines a desired Prometheus deployment.
  • PrometheusAgent, which defines a desired Prometheus deployment, but running in Agent mode.
  • Alertmanager, which defines a desired Alertmanager deployment.
  • ThanosRuler, which defines a desired Thanos Ruler deployment.
  • ServiceMonitor, which declaratively specifies how groups of Kubernetes services should be monitored. The Operator automatically generates Prometheus scrape configuration based on the current state of the objects in the API server.
  • PodMonitor, which declaratively specifies how groups of pods should be monitored. The Operator automatically generates Prometheus scrape configuration based on the current state of the objects in the API server.
  • Probe, which declaratively specifies how groups of ingresses or static targets should be monitored. The Operator automatically generates Prometheus scrape configuration based on the definition.
  • ScrapeConfig, which declaratively specifies scrape configurations to be added to Prometheus. This CustomResourceDefinition helps with scraping resources outside the Kubernetes cluster.
  • PrometheusRule, which defines a desired set of Prometheus alerting and/or recording rules. The Operator generates a rule file, which can be used by Prometheus instances.
  • AlertmanagerConfig, which declaratively specifies subsections of the Alertmanager configuration, allowing routing of alerts to custom receivers, and setting inhibit rules.
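
To make the flow described above concrete, here is a minimal sketch (not part of this PR) of the existing ConfigMap-based input that CMO reads today; the keys follow the current cluster-monitoring-config format and the values are illustrative:

# Illustrative only: the current ConfigMap input that CMO turns into a Prometheus CR.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 24h
      nodeSelector:
        kubernetes.io/os: linux

CMO reconciles this into a Prometheus custom resource in the openshift-monitoring namespace, and the Prometheus Operator then reconciles that CR into the actual Prometheus pods.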

Contributor:

So to make sure I am following along, the CMO will:

  • Deploy the PrometheusOperator based on the PrometheusOperatorConfig
  • Create Prometheus CRs using the configurations provided in PrometheusK8sConfig. Does this apply to all Prometheus CRs?

While these are two distinct things, they are both inherently related to how the CMO handles prometheus configuration on the cluster.

I can understand your idea, but the Prometheus Operator manages all of these components, and I think it's not a good idea to have all the fields inside PrometheusOperatorConfig.

I'm not suggesting that we put all the fields under PrometheusOperatorConfig, I'm suggesting we use a shared parent field named prometheus that can have sibling fields for configuring the Prometheus Operator itself and, separately, configuring the individual Prometheus instance configurations. This way, if you want to add additional configuration options related to prometheus in the future, you don't have to add another Prometheus* field.

Contributor Author (@marioferh, Sep 22, 2025):

  • Deploy the PrometheusOperator based on the PrometheusOperatorConfig
  • Create Prometheus CRs using the configurations provided in PrometheusK8sConfig. Does this apply to all Prometheus CRs?

Correct

I'm not suggesting that we put all the fields under PrometheusOperatorConfig, I'm suggesting we use a shared parent field named prometheus that can have sibling fields for configuring the Prometheus Operator itself and, separately, configuring the individual Prometheus instance configurations. This way, if you want to add additional configuration options related to prometheus in the future, you don't have to add another Prometheus* field.

But they are different things; they are related, but from my point of view, and given how CMO works, it makes no sense to nest them.
https://github.com/prometheus-operator/prometheus-operator
https://github.com/prometheus/prometheus

@danielmellado @simonpasquier any thoughts?

Contributor:

I would keep @marioferh's approach. While I understand @everettraven's concern about API organization, the reality is that prometheusOperatorConfig and prometheusK8sConfig are solving different problems, and IMHO having them under a shared parent would actually make things more confusing.

Think about it from an operator's perspective: when you're configuring prometheusOperatorConfig, you're basically saying "how should we deploy and run the Prometheus Operator pods themselves" - stuff like resource limits, node scheduling, log levels. But when you're dealing with prometheusK8sConfig, you're configuring "what should the actual Prometheus servers do" - scraping rules, storage, retention policies, etc. Again, I think mixing them together would be confusing.

Plus, we already have a working pattern with alertmanagerConfig and metricsServerConfig that users understand. Why break that consistency for a theoretical future problem?

If we do end up with too many prometheus fields later, I'm totally happy to revisit the structure, but I think that for now the separation actually makes the API clearer and more intuitive.

@simonpasquier wdyt?

Contributor:

If we do end up with too many prometheus fields later, I'm totally happy to revisit the structure

If you go with a distributed field approach, you have to maintain that essentially forever or go through a pretty painful process to refine the structure once you've promoted the API to v1.

Why break that consistency for a theoretical future problem?

I'm not sold that doing something like:

prometheusConfig:
  operator:
    ...
  servers:
    ...

breaks that consistency, but if you folks feel strongly that users will have a better experience with multiple prometheus*Config fields I won't block it.

If you don't think you'll ever have more than the two fields for the operator and servers respectively, this probably isn't that big of a deal.

Think about it from an operator's perspective: when you're configuring prometheusOperatorConfig, you're basically saying "how should we deploy and run the Prometheus Operator pods themselves" - stuff like resource limits, node scheduling, log levels. But when you're dealing with prometheusK8sConfig, you're configuring "what should the actual Prometheus servers do" - scraping rules, storage, retention policies, etc. Again, I think mixing them together would be confusing

I think the example above still considers this perspective and difference of field responsibilities. Except now you have a dedicated umbrella field that captures everything related to the configuration of the "Prometheus stack" (operator and the servers).

Again, if you folks feel strongly that this doesn't make sense and users would have a better experience with the currently implemented approach I won't stop it from being done, but it must be clear to users what each prometheus*Config field is responsible for and when they should/should not be specifying the fields.
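
To make the two shapes under discussion concrete, here is a hedged sketch of how the spec could look with the flat fields from this PR versus the nested umbrella field proposed above; the nested variant's inner field names are taken from the snippet earlier in the thread, and all values are illustrative:

# Flat layout, as implemented in this PR:
spec:
  prometheusOperatorConfig:
    logLevel: Info
  # a separate prometheusK8sConfig field (see #2463) would configure the Prometheus servers

# Nested layout, as sketched in the review comment above:
spec:
  prometheusConfig:
    operator:
      logLevel: Info
    servers:
      logLevel: Info

Either way, operator-level settings (how the Prometheus Operator pods run) stay separate from server-level settings (what the Prometheus instances do); the disagreement is only about whether they share a parent field.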

}

// UserDefinedMonitoring config for user-defined projects.
@@ -416,6 +421,88 @@ type MetricsServerConfig struct {
TopologySpreadConstraints []v1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"`
}

// PrometheusOperatorConfig provides configuration options for the Prometheus Operator instance
// Use this configuration to control how the Prometheus Operator instance is deployed, how it logs, and how its pods are scheduled.
// +kubebuilder:validation:MinProperties=1
type PrometheusOperatorConfig struct {
// logLevel defines the verbosity of logs emitted by Alertmanager.
// This field allows users to control the amount and severity of logs generated, which can be useful
// for debugging issues or reducing noise in production environments.
// Allowed values are Error, Warn, Info, and Debug.
// When set to Error, only errors will be logged.
// When set to Warn, both warnings and errors will be logged.
// When set to Info, general information, warnings, and errors will all be logged.
// When set to Debug, detailed debugging information will be logged.
// When omitted, this means no opinion and the platform is left to choose a reasonable default, that is subject to change over time.
// The current default value is `Info`.
// +optional
LogLevel LogLevel `json:"logLevel,omitempty"`
Comment on lines +428 to +439

⚠️ Potential issue | 🟡 Minor

Fix copy-paste error in logLevel documentation.

The godoc incorrectly states "logs emitted by Alertmanager" but this field configures the Prometheus Operator component.

📝 Suggested fix
-	// logLevel defines the verbosity of logs emitted by Alertmanager.
+	// logLevel defines the verbosity of logs emitted by the Prometheus Operator.

// nodeSelector defines the nodes on which the Pods are scheduled
// nodeSelector is optional.
//
// When omitted, this means the user has no opinion and the platform is left
// to choose reasonable defaults. These defaults are subject to change over time.
// The current default value is `kubernetes.io/os: linux`.
// When specified, resources must contain at least 1 entry and must not contain more than 10 entries.
// +optional
// +kubebuilder:validation:MinProperties=1
// +kubebuilder:validation:MaxProperties=10
NodeSelector map[string]string `json:"nodeSelector,omitempty"`
Comment on lines +440 to +450

⚠️ Potential issue | 🟡 Minor

Fix incorrect constraint reference in nodeSelector documentation.

Line 446 mentions "resources must contain at least 1 entry" but this is the nodeSelector field documentation. The constraint text should reference nodeSelector entries, not resources.

📝 Suggested fix
 	// The current default value is `kubernetes.io/os: linux`.
-	// When specified, resources must contain at least 1 entry and must not contain more than 10 entries.
+	// When specified, nodeSelector must contain at least 1 entry and must not contain more than 10 entries.

// resources defines the compute resource requests and limits for the KubeStateMetrics container.
// This includes CPU, memory and HugePages constraints to help control scheduling and resource usage.
// When not specified, defaults are used by the platform. Requests cannot exceed limits.
// This field is optional.
// More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
// This is a simplified API that maps to Kubernetes ResourceRequirements.
// The current default values are:
// resources:
// - name: cpu
// request: 4m
// limit: null
// - name: memory
// request: 40Mi
// limit: null
// When specified, resources must contain at least 1 entry and must not contain more than 10 entries.
// +optional
// +listType=map
// +listMapKey=name
// +kubebuilder:validation:MaxItems=10
// +kubebuilder:validation:MinItems=1
Resources []ContainerResource `json:"resources,omitempty"`
Comment on lines +451 to +471

⚠️ Potential issue | 🟡 Minor

Fix copy-paste error in resources documentation.

The godoc incorrectly references "KubeStateMetrics container" but this field configures resources for the Prometheus Operator container.

📝 Suggested fix
-	// resources defines the compute resource requests and limits for the KubeStateMetrics container.
+	// resources defines the compute resource requests and limits for the Prometheus Operator container.

// tolerations defines tolerations for the pods.
// tolerations is optional.
//
// When omitted, this means the user has no opinion and the platform is left
// to choose reasonable defaults. These defaults are subject to change over time.
// Defaults are empty/unset.
// Maximum length for this list is 10
// Minimum length for this list is 1
// +kubebuilder:validation:MaxItems=10
// +kubebuilder:validation:MinItems=1
// +listType=atomic
// +optional
Tolerations []v1.Toleration `json:"tolerations,omitempty"`
// topologySpreadConstraints defines rules for how Prometheus Operator Pods should be distributed
// across topology domains such as zones, nodes, or other user-defined labels.
// topologySpreadConstraints is optional.
// This helps improve high availability and resource efficiency by avoiding placing
// too many replicas in the same failure domain.
//
// When omitted, this means no opinion and the platform is left to choose a default, which is subject to change over time.
// This field maps directly to the `topologySpreadConstraints` field in the Pod spec.
// Default is empty list.
// Maximum length for this list is 10.
// Minimum length for this list is 1
// Entries must have unique topologyKey and whenUnsatisfiable pairs.
// +kubebuilder:validation:MaxItems=10
// +kubebuilder:validation:MinItems=1
// +listType=map
// +listMapKey=topologyKey
// +listMapKey=whenUnsatisfiable
// +optional
TopologySpreadConstraints []v1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"`
}
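
For reference, here is a hedged sketch of how the new PrometheusOperatorConfig fields might be set on a ClusterMonitoring resource; the apiVersion/kind pairing is an assumption based on the config/v1alpha1 package this diff lives in, and all values are illustrative:

# apiVersion/kind assumed from the config/v1alpha1 package; values are illustrative.
apiVersion: config.openshift.io/v1alpha1
kind: ClusterMonitoring
metadata:
  name: cluster
spec:
  prometheusOperatorConfig:
    logLevel: Debug
    nodeSelector:
      kubernetes.io/os: linux
    resources:
      - name: cpu
        request: 10m
      - name: memory
        request: 64Mi
    tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway

Note that the MinProperties/MinItems markers above mean an empty nodeSelector map or an empty resources list would be rejected; omit a field entirely to take the platform defaults.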

// AuditProfile defines the audit log level for the Metrics Server.
// +kubebuilder:validation:Enum=None;Metadata;Request;RequestResponse
type AuditProfile string