HiveMQ Platform Operator Upgrade Guides

The HiveMQ Platform Operator Upgrade Guides show you how to upgrade to the latest operator version. Each guide provides detailed information on upgrading your HiveMQ Platform Helm charts, HiveMQ Platform Operator, and HiveMQ Platform.

Upgrades of the HiveMQ Platform always trigger a rolling restart. Depending on the HiveMQ Platform Operator version from which you upgrade, the upgrade to HiveMQ Platform Operator version 1.2.x can also trigger a rolling restart. If a rolling restart is required, we recommend following the Upgrade the HiveMQ Platform Operator and HiveMQ Platform guide to ensure that all changes are applied in a single rolling restart.

When upgrading to HiveMQ Platform Operator 2.x or above, refer to the Migration Guide for HiveMQ Platform Operator 2.0.0.

When you upgrade from HiveMQ Platform Operator 1.x.y to 1.2.x or above, a rolling restart is triggered. Starting with HiveMQ Platform Operator 1.2.0, an improved StatefulSet configuration avoids most rolling restarts when the operator is updated.
When you migrate to the latest HiveMQ Platform Operator, the operator automatically updates the HiveMQ Platform Custom Resource Definition (CRD). The new CRD only adds new fields and is fully backward compatible.
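
If you want to confirm that the CRD was updated after the operator migration, you can inspect it with kubectl. A minimal sketch, assuming the CRD is named hivemq-platforms.hivemq.com (verify the actual name in your cluster with kubectl get crds):

```shell
# List the HiveMQ Platform CRD to confirm it is present
kubectl get crd hivemq-platforms.hivemq.com

# Show the full CRD definition, including any newly added fields
kubectl get crd hivemq-platforms.hivemq.com -o yaml
```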

Upgrade the HiveMQ Platform Operator

To upgrade the HiveMQ Platform Operator, use the following procedure:

  1. Refresh your local Helm repositories:

    helm repo update
  2. Make sure that the your-operator-values.yml file does not pin the operator version.
    If the operator version is configured in the file, update your-operator-values.yml to the appropriate new version as follows:

    # Container image configuration
    image:
      name: hivemq-platform-operator
      initImageName: hivemq-platform-operator-init
      tag: 2.0.0
  3. Upgrade your HiveMQ Platform Operator:

    helm upgrade --install your-hivemq-operator hivemq/hivemq-platform-operator -f your-operator-values.yml -n <namespace>
  4. Verify that the operator is running with the new version.

    The command helm list returns the APP VERSION of the HiveMQ Platform Operator.
    If the upgrade is successful, the expected new version is returned.
    For example, 2.0.0.

    helm list -n <namespace>

Upgrade the HiveMQ Platform Operator and HiveMQ Platform

To upgrade both the HiveMQ Platform Operator and the HiveMQ Platform, use the following procedure:

  1. Refresh your local Helm repositories:

    helm repo update
  2. Uninstall the running HiveMQ Platform Operator:

    helm uninstall your-hivemq-operator -n <namespace>

    If the HiveMQ Platform Operator upgrade triggers a rolling restart, this step ensures that both upgrades are performed in a single rolling restart.

  3. Upgrade your HiveMQ Platform:

    helm upgrade --install your-hivemq-platform hivemq/hivemq-platform -f your-platform-values.yml -n <namespace>
  4. Make sure that the your-operator-values.yml file does not pin the operator version.
    If the operator version is configured in the file, update your-operator-values.yml to the appropriate new version as follows:

    # Container image configuration
    image:
      name: hivemq-platform-operator
      initImageName: hivemq-platform-operator-init
      tag: 2.0.0
  5. Upgrade your HiveMQ Platform Operator:

    helm upgrade --install your-hivemq-operator hivemq/hivemq-platform-operator -f your-operator-values.yml -n <namespace>
  6. Verify that the operator is running with the new version.

    The command helm list returns the APP VERSION of the HiveMQ Platform Operator.
    If the upgrade is successful, the expected new version is returned.
    For example, 2.0.0.

    helm list -n <namespace>

Migration Guide for HiveMQ Platform Operator 2.0.0

The HiveMQ Platform Operator 2.0.0 changes the handling and inheritance of metadata (labels and annotations).

Overview of Changes

In HiveMQ Platform Operator 1.x versions:

  • All metadata configured on the custom resource is inherited by all managed Kubernetes resources (PodInfo ConfigMap, Role, RoleBinding, ServiceAccount, Services, StatefulSet, and HiveMQ Platform Pods).

  • In addition, the operator adds the hivemq-platform: <release-name> label for internal label selectors.

  • Custom metadata defined on the StatefulSet is also propagated to the StatefulSet Template and HiveMQ Platform Pods.

  • Changes to StatefulSet Template metadata do not trigger rolling restarts.

In HiveMQ Platform Operator 2.x versions:

  • Only four default app.kubernetes.io labels from the custom resource are inherited by all managed Kubernetes resources.

  • In addition, the operator still adds the hivemq-platform: <release-name> label for internal label selectors.

  • Other resource-specific metadata now applies to the intended resources only.

  • Changes to the StatefulSet Template metadata do trigger rolling restarts.
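
For reference, the four default labels that the 2.x operator inherits from the custom resource are the standard app.kubernetes.io labels (the values shown are examples taken from the comparison in this guide):

```yaml
# The four default labels inherited by every managed Kubernetes resource
app.kubernetes.io/instance: <release-name>
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: hivemq-platform
app.kubernetes.io/version: 4.x.y
```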

This change improves metadata isolation and provides more predictable behavior, but may require adjustments to your existing configurations.
Starting with HiveMQ Platform Operator 2.0.0, you can trigger a manual rolling restart by modifying StatefulSet Template metadata. For example, adding an arbitrary annotation is enough to trigger a rolling restart.
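
With the HiveMQ Platform Helm chart, such a manual restart can be triggered through the nodes section, which configures the StatefulSet Template metadata (the annotation key and value below are arbitrary examples):

```yaml
nodes:
  annotations:
    # Adding or changing any annotation here updates the StatefulSet Template
    # metadata and triggers a rolling restart of the HiveMQ Platform
    restarted-at: "2025-01-01T00:00:00Z"
```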

Migration Impact

When upgrading from operator 1.x to 2.x, you may experience:

  • Metadata changes on existing resources.

  • Potential rolling restart if StatefulSet Template metadata changes.

  • Need to reconfigure monitoring, logging, or other tools that rely on specific labels/annotations.
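
To find tooling that depends on the old inheritance behavior, you can query the cluster for the previously inherited labels. A sketch using the global-label from the comparison example in this guide:

```shell
# Under operator 1.x this selector matches the HiveMQ pods because the custom
# resource label was inherited; under operator 2.x it no longer matches them.
kubectl get pods -n <namespace> -l global-label=custom-resource

# The internal selector label added by the operator works in both versions.
kubectl get pods -n <namespace> -l hivemq-platform=<release-name>
```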

Before and After Comparison

This example illustrates how metadata inheritance works in both operator versions.

Example values-with-metadata.yaml file for the HiveMQ Platform Helm chart
nodes:
  # Annotations to add to the HiveMQ Pods
  annotations:
    sts-template-annotation: sts-template
  # Labels to add to the HiveMQ Pods
  labels:
    sts-template-label: sts-template

operator:
  # Annotations to add to the HiveMQ Platform custom resource.
  annotations:
    global-annotation: custom-resource
  # Labels to add to the HiveMQ Platform custom resource.
  labels:
    global-label: custom-resource

services:
  # MQTT service configuration
  - type: mqtt
    exposed: true
    containerPort: 1883
    port: 1883
    # Annotations to add to the service
    annotations:
      service-annotation: mqtt-service
    # Labels to add to the service
    labels:
      service-label: mqtt-service

Install the HiveMQ Platform via Helm:

helm install <release-name> hivemq/hivemq-platform -n <namespace> -f values-with-metadata.yaml

With a 1.x operator version, the following metadata is deployed for the services and the HiveMQ Platform pods.

Metadata of the created MQTT Service (HiveMQ Platform Operator 1.x)
labels:
  app.kubernetes.io/instance: <release-name>
  app.kubernetes.io/managed-by: Helm
  app.kubernetes.io/name: hivemq-platform
  app.kubernetes.io/version: 4.x.y
  global-label: custom-resource
  helm.sh/chart: hivemq-platform-0.x.y
  hivemq-platform: <release-name>
  service-label: mqtt-service
annotations:
  global-annotation: custom-resource
  meta.helm.sh/release-name: <release-name>
  meta.helm.sh/release-namespace: <namespace>
  service-annotation: mqtt-service
Metadata of the created HiveMQ Pod (HiveMQ Platform Operator 1.x)
labels:
  app.kubernetes.io/instance: <release-name>
  app.kubernetes.io/managed-by: Helm
  app.kubernetes.io/name: hivemq-platform
  app.kubernetes.io/version: 4.x.y
  global-label: custom-resource
  helm.sh/chart: hivemq-platform-0.x.y
  hivemq-platform: <release-name>
  service-label: mqtt-service
annotations:
  global-annotation: custom-resource
  meta.helm.sh/release-name: <release-name>
  meta.helm.sh/release-namespace: <namespace>
  service-annotation: mqtt-service

You can see that the global-annotation and global-label from the custom resource metadata are configured on the managed resources as well, and the Helm chart metadata is inherited. In addition, the resource-specific service metadata appears on the HiveMQ Pods.

Deploying the same HiveMQ Platform with a 2.x operator results in the following metadata for the services and the HiveMQ Platform pods.

Metadata of the created MQTT Service (HiveMQ Platform Operator 2.x)
labels:
  app.kubernetes.io/instance: <release-name>
  app.kubernetes.io/managed-by: Helm
  app.kubernetes.io/name: hivemq-platform
  app.kubernetes.io/version: 4.x.y
  hivemq-platform: <release-name>
  service-label: mqtt-service
annotations:
  service-annotation: mqtt-service
Metadata of the created HiveMQ Pod (HiveMQ Platform Operator 2.x)
labels:
  app.kubernetes.io/instance: <release-name>
  app.kubernetes.io/managed-by: Helm
  app.kubernetes.io/name: hivemq-platform
  app.kubernetes.io/version: 4.x.y
  hivemq-platform: <release-name>
  sts-template-label: sts-template
annotations:
  sts-template-annotation: sts-template

Only the four default app.kubernetes.io labels from the custom resource are inherited. In addition, the operator adds the hivemq-platform: <release-name> label that is used for internal label selectors. Any other metadata that is defined for a specific resource is configured on that resource only.
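
You can verify the applied metadata directly on the managed resources. A sketch that uses the hivemq-platform selector label described above:

```shell
# List the Services managed by the operator for this release
kubectl get services -n <namespace> -l hivemq-platform=<release-name>

# Print the labels of the HiveMQ pods to compare against the expected metadata
kubectl get pods -n <namespace> -l hivemq-platform=<release-name> \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.metadata.labels}{"\n"}{end}'
```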

Migration Steps

To ensure a smooth migration to HiveMQ Platform Operator 2.0.0:

  1. Test the upgrade in a non-production environment first

    Deploy your current configuration with the new operator version to identify any issues.

  2. Review your current metadata configuration

    Identify any dependencies on the old metadata inheritance behavior in:

    • Monitoring and alerting rules

    • Log aggregation configurations

    • Network policies

    • Custom tooling or scripts

  3. Update resource-specific metadata

    If you relied on global metadata inheritance, move those labels/annotations to the specific resource configurations where they are needed.

  4. Plan for rolling restarts

    StatefulSet Template metadata changes now trigger rolling restarts. During migration, review your metadata configuration to identify any changes that could cause unexpected rolling restarts. Pay particular attention to metadata that is not preserved by the backward compatibility feature (see the Backward Compatibility section below).

    If an extension restart operation is in progress during the migration, the operator also initiates a rolling restart. This is required because of technical changes in how the operator communicates with the HiveMQ Platform pods; the 1.x and 2.x communication mechanisms are not interoperable. All other reconciliation states proceed without triggering a forced rolling restart.
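
As an example of step 3 above, a label that was previously defined once on the custom resource and inherited everywhere must now be set explicitly on each resource that needs it (the team label below is a hypothetical example):

```yaml
nodes:
  # Labels that must be present on the HiveMQ Pods
  labels:
    team: messaging

services:
  - type: mqtt
    exposed: true
    containerPort: 1883
    port: 1883
    # Labels that must be present on the MQTT Service
    labels:
      team: messaging
```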

Backward Compatibility

To prevent an unexpected rolling restart during the operator migration, the operator maintains a set of well-known labels and annotations from common tools until the next rolling restart is triggered by a configuration change.

Well-known metadata preserved includes labels and annotations from:

  • Helm

  • kubectl

  • K9s

  • Lens

  • Argo CD

This ensures that existing tooling continues to work during the transition period.

We strongly recommend testing the operator upgrade on a test or staging system before applying it to production environments.