Duplicate ports warning #6284

Closed

Description

@sebhoss

What happened?

Description

I recently updated prometheus-operator and saw the following line in the logs of the operator:

level=warn ts=2024-02-05T06:00:35.226955216Z caller=klog.go:106 component=k8s_client_runtime func=Warning msg="spec.template.spec.containers[1].ports[0]: duplicate port definition with spec.template.spec.initContainers[0].ports[0]"

I wasn't sure which pod actually declared duplicate ports, so after a short search I noticed that all prometheus pods now have both an initContainer (init-config-reloader) as well as a regular container (config-reloader) running on port 8080. Since I'm not the one who declared those ports/containers, I'm not sure whether I should do something about this warning.
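For reference, the relevant part of the generated pod template looks roughly like the sketch below. This is an abridged reconstruction from the warning message and the container names above, not a verbatim dump; the port name and exact field layout are assumptions and may differ between operator versions:

```yaml
# Sketch of the operator-generated StatefulSet pod template (abridged).
# Both the init container and the sidecar declare containerPort 8080,
# which is what triggers the API server's duplicate-port warning.
spec:
  template:
    spec:
      initContainers:
        - name: init-config-reloader
          ports:
            - containerPort: 8080   # same port as the sidecar below
      containers:
        - name: config-reloader
          ports:
            - containerPort: 8080
```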

Steps to Reproduce

  • Deploy latest prometheus-operator
  • Deploy minimal Prometheus resource
  • Watch logs of operator

Expected Result

No warning should appear, or at least the warning should tell me which resource is affected and what to do about it.

Actual Result

The warning confuses me & I want to go home.

Prometheus Operator Version

v0.71.2

Kubernetes Version

v1.28.6

Kubernetes Cluster Type

kubeadm

How did you deploy Prometheus-Operator?

yaml manifests

Manifests

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: autoscaling
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: monitoring
  name: autoscaling
  namespace: prometheus
spec:
  additionalAlertManagerConfigs:
    key: prometheus-configuration
    name: alertmanager-configuration
    optional: false
  enableFeatures:
    - auto-gomaxprocs
  enableAdminAPI: true
  enforcedSampleLimit: 100
  evaluationInterval: 30s
  image: harbor.infra.run/dockerhub/prom/prometheus:v2.49.1
  imagePullSecrets:
    - name: harbor-infra-run-pull-secret
  nodeSelector:
    kubernetes.io/os: linux
  podMetadata:
    labels:
      app.kubernetes.io/component: prometheus
      app.kubernetes.io/instance: autoscaling
      app.kubernetes.io/name: prometheus
      app.kubernetes.io/part-of: monitoring
  podMonitorNamespaceSelector:
    matchLabels:
      autoscaling.prometheus.kube.infra.run/discovery: enabled
  podMonitorSelector:
    matchLabels:
      autoscaling.prometheus.kube.infra.run/discovery: enabled
  replicas: 2
  replicaExternalLabelName: prometheus_replica
  shards: 1
  retention: '6h'
  ruleNamespaceSelector:
    matchLabels:
      autoscaling.prometheus.kube.infra.run/discovery: enabled
  ruleSelector:
    matchLabels:
      autoscaling.prometheus.kube.infra.run/discovery: enabled
  scrapeInterval: 15s
  scrapeTimeout: 10s
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 2000
  serviceAccountName: prometheus-autoscaling
  serviceMonitorNamespaceSelector:
    matchLabels:
      autoscaling.prometheus.kube.infra.run/discovery: enabled
  serviceMonitorSelector:
    matchLabels:
      autoscaling.prometheus.kube.infra.run/discovery: enabled
  storage:
    volumeClaimTemplate:
      metadata:
        name: prometheus-autoscaling-data
        labels:
          app.kubernetes.io/component: prometheus
          app.kubernetes.io/instance: autoscaling
          app.kubernetes.io/name: prometheus
          app.kubernetes.io/part-of: monitoring
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: csi-rbd
  thanos:
    image: harbor.infra.run/quay/thanos/thanos:v0.34.0
  web:
    pageTitle: Prometheus Autoscaling

prometheus-operator log output

level=info ts=2024-02-05T06:00:34.810681Z caller=main.go:181 msg="Starting Prometheus Operator" version="(version=0.71.2, branch=refs/tags/v0.71.2, revision=af2014407bdc25c4fc2b26cd99c9655235ebdf88)"
level=info ts=2024-02-05T06:00:34.810751593Z caller=main.go:182 build_context="(go=go1.21.6, platform=linux/amd64, user=Action-Run-ID-7656327832, date=20240125-14:57:39, tags=unknown)"
level=info ts=2024-02-05T06:00:34.810767765Z caller=main.go:193 msg="namespaces filtering configuration " config="{allow_list=\"\",deny_list=\"\",prometheus_allow_list=\"\",alertmanager_allow_list=\"\",alertmanagerconfig_allow_list=\"\",thanosruler_allow_list=\"\"}"
level=info ts=2024-02-05T06:00:34.825852745Z caller=main.go:222 msg="connection established" cluster-version=v1.28.6
level=info ts=2024-02-05T06:00:34.855289664Z caller=operator.go:333 component=prometheus-controller msg="Kubernetes API capabilities" endpointslices=true
level=info ts=2024-02-05T06:00:34.874428002Z caller=operator.go:319 component=prometheusagent-controller msg="Kubernetes API capabilities" endpointslices=true
level=info ts=2024-02-05T06:00:34.893257626Z caller=server.go:298 msg="starting insecure server" address=[::]:8080
level=info ts=2024-02-05T06:00:34.992779203Z caller=operator.go:428 component=prometheusagent-controller msg="successfully synced all caches"
level=info ts=2024-02-05T06:00:34.994188604Z caller=operator.go:280 component=thanos-controller msg="successfully synced all caches"
level=info ts=2024-02-05T06:00:34.99522074Z caller=operator.go:542 component=thanos-controller key=prometheus/thanos-ruler-global msg="sync thanos-ruler"
level=info ts=2024-02-05T06:00:34.99545999Z caller=operator.go:311 component=alertmanager-controller msg="successfully synced all caches"
level=info ts=2024-02-05T06:00:34.999846973Z caller=operator.go:390 component=prometheus-controller msg="successfully synced all caches"
level=info ts=2024-02-05T06:00:35.003405968Z caller=operator.go:987 component=prometheus-controller key=prometheus/general msg="sync prometheus"
level=info ts=2024-02-05T06:00:35.164126655Z caller=operator.go:542 component=thanos-controller key=prometheus/thanos-ruler-global msg="sync thanos-ruler"
level=warn ts=2024-02-05T06:00:35.226955216Z caller=klog.go:106 component=k8s_client_runtime func=Warning msg="spec.template.spec.containers[1].ports[0]: duplicate port definition with spec.template.spec.initContainers[0].ports[0]"
level=warn ts=2024-02-05T06:00:35.309322232Z caller=klog.go:106 component=k8s_client_runtime func=Warning msg="spec.template.spec.containers[1].ports[0]: duplicate port definition with spec.template.spec.initContainers[0].ports[0]"
level=info ts=2024-02-05T06:00:35.319266257Z caller=operator.go:987 component=prometheus-controller key=prometheus/heavy msg="sync prometheus"
level=info ts=2024-02-05T06:00:35.344992244Z caller=operator.go:542 component=thanos-controller key=prometheus/thanos-ruler-global msg="sync thanos-ruler"
level=info ts=2024-02-05T06:00:35.428711029Z caller=operator.go:542 component=thanos-controller key=prometheus/thanos-ruler-global msg="sync thanos-ruler"
level=warn ts=2024-02-05T06:00:35.508559635Z caller=klog.go:106 component=k8s_client_runtime func=Warning msg="spec.template.spec.containers[1].ports[0]: duplicate port definition with spec.template.spec.initContainers[0].ports[0]"
level=info ts=2024-02-05T06:00:35.517673114Z caller=operator.go:987 component=prometheus-controller key=prometheus/autoscaling msg="sync prometheus"
level=warn ts=2024-02-05T06:00:35.677154596Z caller=klog.go:106 component=k8s_client_runtime func=Warning msg="spec.template.spec.containers[1].ports[0]: duplicate port definition with spec.template.spec.initContainers[0].ports[0]"
level=info ts=2024-02-05T06:00:35.685996793Z caller=operator.go:987 component=prometheus-controller key=prometheus/lernraum msg="sync prometheus"
level=warn ts=2024-02-05T06:00:35.862035321Z caller=klog.go:106 component=k8s_client_runtime func=Warning msg="spec.template.spec.containers[1].ports[0]: duplicate port definition with spec.template.spec.initContainers[0].ports[0]"
level=info ts=2024-02-05T06:00:35.866906713Z caller=operator.go:987 component=prometheus-controller key=prometheus/meta msg="sync prometheus"
level=warn ts=2024-02-05T06:00:36.048482343Z caller=klog.go:106 component=k8s_client_runtime func=Warning msg="spec.template.spec.containers[1].ports[0]: duplicate port definition with spec.template.spec.initContainers[0].ports[0]"

Anything else?

I'm not sure whether the warning also appeared in earlier versions.
