HiveMQ Legacy Operator to Platform Operator Migration Guide
Introduction
The HiveMQ Platform Operator for Kubernetes is the recommended Kubernetes operator for the HiveMQ Platform. This guide shows you how to migrate your HiveMQ Legacy Operator Helm chart configuration to the HiveMQ Platform Operator.
This migration guide is written for HiveMQ Platform Operator version 1.6.2 or higher.
The HiveMQ Platform Operator for Kubernetes represents a significant evolution from the HiveMQ Legacy Operator. Changes in the Kubernetes platform and the need to align with current best practices for cloud-native services necessitate this major upgrade. The HiveMQ Platform Operator offers many new features and improvements and ensures compatibility with all Kubernetes versions 1.24 and higher.
Upgrading from the HiveMQ Legacy Operator to the HiveMQ Platform Operator is a sizable change and cannot be treated as a simple drop-in replacement. Migration requires careful planning and, depending on the complexity of your architecture and environment, can require adjustments to ensure compatibility.
We strongly recommend that you carefully review the entire guide before you begin your migration process.
The HiveMQ Legacy Operator and the HiveMQ Platform Operator can be installed in the same Kubernetes cluster side by side without conflicts. This offers greater flexibility, enabling you to test and execute a smoother migration process.
High-level Operator Comparison
In brief, here are some of the key differences between the HiveMQ Legacy Operator and the HiveMQ Platform Operator for Kubernetes:
- The HiveMQ Platform Operator uses the HiveMQPlatform custom resource definition (CRD), whereas the HiveMQ Legacy Operator uses the HiveMQCluster custom resource definition. The HiveMQPlatform CRD is simpler and adapts to newly released Kubernetes features for StatefulSets and Services. Additionally, the new CRD allows parallel operation of the Legacy Operator and the Platform Operator in the same Kubernetes cluster during migration.
- The Legacy Operator Helm chart configures and installs the Legacy Operator and a HiveMQ cluster in a single Helm chart release. In contrast, the HiveMQ Platform Operator provides two Helm charts: one for the Platform Operator and one for the HiveMQ Platform cluster. The two-chart solution simplifies the installation and allows for easy deployment of multiple HiveMQ Platform clusters. This approach also enables the Platform Operator and the HiveMQ Platform clusters to be updated independently of each other.
- The HiveMQ Platform Operator exclusively supports the Kubernetes StatefulSet to deploy HiveMQ Platform clusters and follows Kubernetes best practices for stateful applications.
- Unlike the HiveMQ Legacy Operator, which requires a Kubernetes-specific container image (hivemq4:k8s-x.y.z), the HiveMQ Platform Operator leverages the standard HiveMQ container images (hivemq4:x.y.z) published on DockerHub. This approach simplifies the use of HiveMQ containers and makes container customization more straightforward.
- The HiveMQ Platform Operator uses the same base container image for the Operator and the HiveMQ Platform. This reduces the security surface of the images.
High-level Migration Process Overview
- Install the HiveMQ Platform Operator with the provided HiveMQ Platform Operator Helm chart, as described in the documentation. The HiveMQ Platform Operator Helm chart installs the HiveMQPlatform custom resource definition (CRD) and the HiveMQ Platform Operator itself. The HiveMQ Platform Operator Helm chart does not install a HiveMQ Platform cluster.
- Prepare the HiveMQ Platform cluster installation. Use your existing HiveMQ Legacy Operator Helm chart configuration files as the starting point to configure the HiveMQ Platform Helm chart. See the documentation for detailed instructions on how to configure your HiveMQ Platform Helm chart. Use the following Helm command to print out your existing HiveMQ Legacy Operator Helm chart custom values: helm get values <legacy-release-name>
- Collect and deploy all additional files that are needed, such as extension configuration files, license files, and keystore files, according to the guidelines in the HiveMQ Platform Helm chart documentation.
- Install your HiveMQ Platform cluster with the customized HiveMQ Platform Helm chart.
- Verify that the HiveMQ Platform cluster is correctly installed.
- Once verification of the new HiveMQ Platform cluster is complete, uninstall the HiveMQ Legacy Operator.
HiveMQ Legacy Operator to HiveMQ Platform Operator Migration Details
Install the HiveMQ Platform Operator
You can use the HiveMQ Platform Operator Helm chart to install the HiveMQ Platform Operator alongside the HiveMQ Legacy Operator. The Platform Operator uses the HiveMQPlatform custom resource definition and the Legacy Operator uses the existing HiveMQCluster custom resource definition.
Installing the Platform Operator is a separate step from installing a HiveMQ Platform cluster.
- Install or update the HiveMQ Helm chart repository to ensure you are using the latest version: helm repo add hivemq https://hivemq.github.io/helm-charts && helm repo update
- Install your HiveMQ Platform Operator with the HiveMQ Platform Operator Helm chart: helm install your-platform-operator hivemq/hivemq-platform-operator

The Platform Operator Helm chart is preconfigured with sensible default settings to give you a working installation. You can start with the default settings, or refer to the Platform Operator documentation to adjust your HiveMQ Platform Operator Helm chart as needed. At this point, your HiveMQ Platform Operator is running and ready to manage custom resources of type HiveMQPlatform.
Configure the HiveMQ Platform Helm Chart
After the HiveMQ Platform Operator is successfully installed, you are ready to configure and install your HiveMQ Platform with Helm.
For more detailed information on how to use our Helm chart, see the documentation.
Next, review your Legacy Operator Helm chart configuration and follow the comparisons below to create your HiveMQ Platform Helm chart configuration.
Configuration Changes in the Helm Charts
The next sections review the HiveMQ Legacy Operator configuration options and show how they are configured in the HiveMQ Platform Helm chart.
Only configuration options located in the Legacy Operator hivemq configuration section are discussed here.
To print out your HiveMQ Legacy Operator custom configuration, enter the following Helm command:
helm get values <legacy-release-name>
Resources, image options, and configurations
The following configuration options have moved:
Legacy Configuration | New Configuration | Description |
---|---|---|
| | The number of HiveMQ Platform nodes. |
| | The Kubernetes requests and limits for CPU. |
| | The Kubernetes requests and limits for memory. |
| | The Kubernetes requests and limits for storage. |
| | The HiveMQ-specific JVM configuration. |
| | The HiveMQ-specific log level. |
| | The specific environment variables to be added. |
The following configuration options are no longer supported:
- hivemq.cpuLimitRatio
- hivemq.memoryLimitRatio
- hivemq.ephemeralStorageLimitRatio
- hivemq.dnsSuffix

We recommend that you configure the Kubernetes resource requests and limits to the same value. This is the default configuration of the HiveMQ Platform Helm chart. If required, the Kubernetes limits can be overridden. For production use cases, overriding the Kubernetes limits is not recommended because it can lead to unpredictable resource consumption for the HiveMQ broker.
The hivemq.dnsSuffix option is no longer needed because the HiveMQ Platform Operator relies on the functional DNS service resolution of Kubernetes.
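The moved options above can be sketched in a Platform chart values file. The following is an illustrative sketch only: the values are examples, and the key names (nodes.replicaCount, nodes.resources, nodes.javaOpts, nodes.logLevel, nodes.env) should be verified against the HiveMQ Platform Helm chart documentation for your chart version.

```yaml
# Illustrative values.yaml sketch for the hivemq-platform chart
nodes:
  replicaCount: 3            # number of HiveMQ Platform nodes
  resources:
    cpu: 1000m               # requests and limits default to the same value
    memory: 2048M
  javaOpts: "-XX:MaxRAMPercentage=50"   # HiveMQ-specific JVM configuration (example)
  logLevel: INFO             # HiveMQ-specific log level
  env:
    - name: MY_CUSTOM_VAR    # hypothetical environment variable
      value: "example"
```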
The HiveMQ container image options can now be explicitly configured. The Legacy Operator used the image configuration option to set the container repository, the name, and the version tag. The HiveMQ Platform Helm chart separates each value:
image:
  repository: docker.io/hivemq
  name: hivemq4
  tag: 4.35.0
The HiveMQ Platform Operator supports all standard HiveMQ container images starting with HiveMQ version 4.19.0, including 4.28 (LTS).
The container images with a k8s- tag prefix that the Legacy Operator uses should not be used with the HiveMQ Platform Operator. For example, instead of the hivemq4:k8s-4.35.0 tag, use the standard hivemq4:4.35.0 container image tag.
By default, the HiveMQ Platform Helm chart uses the latest available HiveMQ Platform container image tag.
HiveMQ Configuration Changes
The HiveMQ Platform Helm chart uses Kubernetes ConfigMaps or Secrets to deploy the HiveMQ configuration and HiveMQ extension configurations. The Platform Operator mounts the ConfigMap as the config.xml and tracing.xml configuration files in the <HIVEMQ-HOME>/conf-k8s folder.
By default, the logback.xml in <HIVEMQ-HOME>/conf is used to configure logging. You can add an optional logback.xml to the ConfigMap to override the logging configuration. The HiveMQ Platform Operator automatically handles the setup of the configuration folder and the optional logback.xml file.
Review the documentation for details on configuration options.
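For illustration, a ConfigMap that carries a custom config.xml together with an optional logback.xml override could look like the following sketch. The ConfigMap name and the file contents are placeholders; how the chart references an existing ConfigMap is described in the documentation.

```yaml
# Hypothetical ConfigMap with the HiveMQ configuration and a logback.xml override
apiVersion: v1
kind: ConfigMap
metadata:
  name: hivemq-configuration          # placeholder name
data:
  config.xml: |
    <?xml version="1.0"?>
    <hivemq>
      <listeners>
        <tcp-listener>
          <port>1883</port>
          <bind-address>0.0.0.0</bind-address>
        </tcp-listener>
      </listeners>
    </hivemq>
  logback.xml: |
    <configuration>
      <!-- optional: overrides the default logging configuration -->
    </configuration>
```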
HiveMQ restrictions, security, and MQTT configurations
Legacy Configuration | New Configuration | Description |
---|---|---|
| | The individual options remain the same. |
| | The options listed change names; options not listed remain the same. |
| | The options listed change names; options not listed remain the same. |
| | Options to configure HiveMQ overload protection. |
| | Options to configure HiveMQ cluster replication. |
HiveMQ Extension Configuration Changes
Review the HiveMQ Platform documentation when migrating extension configurations from the Legacy Operator Helm chart configuration.
HiveMQ extension configuration files are now mounted to the standard conf/config.xml file location. Legacy configuration locations are not supported. For custom extensions that do not use this standard location, an additional volume mount is required.
Legacy Configuration | New Configuration | Description |
---|---|---|
| | Reference to an existing ConfigMap with the extension configuration. For HiveMQ Enterprise Extensions, the ConfigMap with the configuration must use the name |

The HiveMQ Platform Operator supports both ConfigMaps and Secrets for extension configuration files. Use extensions.configMapName for a ConfigMap and extensions.secretName for a Secret.

extensions:
  - name: hivemq-kafka-extension
    enabled: true
    supportsHotReload: true
    configMapName: "hivemq-kafka-configuration"
    extensionUri: preinstalled
The following extension configuration options are no longer supported:
- hivemq.extensions.static
- hivemq.extensions.updateStrategy
- hivemq.extensions.initialization

The static and updateStrategy extension options are obsolete. The HiveMQ Platform Operator restarts extensions automatically when the extension configuration changes. If the extension supports hot reload of the configuration file, the restart is skipped.
The initialization extension option is no longer supported. Previously, the most common use case for the initialization option was loading JDBC drivers in the Enterprise Security Extension. Since the Enterprise Security Extension now ships with default JDBC drivers, this option is no longer required. If you must use a custom JDBC driver, see this configuration example.
The HiveMQ Platform Operator adds support for the following extension configuration options:
- customizationURI to load custom transformers for the HiveMQ Kafka, Google Cloud Pub/Sub, Amazon Kinesis, and Security enterprise extensions.
- priority and startPriority to change the order in which the extensions are loaded.
Additional useful extension configuration examples are provided in the documentation.
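Building on the Kafka extension example above, the new options could be sketched as follows. The customization JAR URL and the priority values are placeholders, and the exact key casing (shown here as customizationUri) should be verified against the Platform Helm chart documentation for your chart version.

```yaml
extensions:
  - name: hivemq-kafka-extension
    enabled: true
    configMapName: "hivemq-kafka-configuration"
    extensionUri: preinstalled
    customizationUri: "https://example.com/kafka-customization.jar"  # placeholder URL
    priority: 1000        # controls the extension load order
    startPriority: 1000   # controls the extension start order
```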
Initialization Containers, Sidecar Containers, and Initialization Changes
Detailed information on how to configure init containers and sidecars with the HiveMQ Platform Helm chart can be found in the documentation.
Legacy Configuration | New Configuration | Description |
---|---|---|
| | Configures the init containers that run before the HiveMQ container starts. |
| | Configures the containers that run alongside the HiveMQ container. |
The following initialization configuration option is no longer supported:
- hivemq.initialization

The HiveMQ Platform Operator does not support initialization routines. Such routines were already deprecated with the Legacy Operator. Use additionalInitContainers or additionalContainers instead. Detailed information on how to configure init containers and sidecars can be found in the documentation.
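As an illustrative sketch, an init container and a sidecar container could be declared as follows. The container names, images, and commands are placeholders; where these lists live in the values file is described in the HiveMQ Platform Helm chart documentation.

```yaml
additionalInitContainers:
  - name: init-permissions            # hypothetical init container
    image: busybox:1.36
    command: ["sh", "-c", "echo 'init done'"]
additionalContainers:
  - name: log-forwarder               # hypothetical sidecar container
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]
```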
Configuration of Controller Templates (StatefulSet)
The HiveMQ Legacy Operator supports configurations to supply custom controller templates. The option to use custom controller templates is no longer available in the HiveMQ Platform Helm chart. The Platform Operator only supports Kubernetes StatefulSet as a deployment option. StatefulSet is the best fit in terms of Kubernetes controllers for HiveMQ. The HiveMQ Platform Helm chart provides several options to customize the StatefulSet template.
Legacy Configuration | New Configuration | Description |
---|---|---|
| | Configures additional pod labels. |
| | Configures additional pod annotations. |
| | Configures Kubernetes pod affinity options. |
| | Configures Kubernetes pod affinity options. |
| | Configures the pod security context. |
| | Configures the container security context. |
| | Configures additionalVolumes for the HiveMQ pod. |
| | Configures the persistent volumes to be mounted. |
The following controller template options are no longer supported:
- hivemq.controllerTemplate
- hivemq.priorityClassname
- hivemq.runtimeClassName
- hivemq.topologySpreadConstraints
- hivemq.operatorHints
- hivemq.customProperties
- hivemq.serviceAccountAnnotations
- hivemq.configMaps
- hivemq.secrets
- nameOverride
- namespaceOverride
- fullnameOverride
- generateLabels

The hivemq.additionalVolumeMounts configuration is now part of the additionalVolumes configuration.
If necessary, the entire StatefulSet configuration can be customized by overwriting the following configuration option:
helm upgrade --install <my-hivemq-platform> hivemq/hivemq-platform --set-file config.overrideStatefulSet=stateful-set-spec.yaml -n <namespace>
In such cases, special care must be taken to align the port settings between the HiveMQ configuration, the Kubernetes Service resources, and the StatefulSet resource. More information is available in the documentation.
Configuration of Kubernetes Services Changes
The Legacy Operator uses the hivemq.ports configuration to define Kubernetes services. The HiveMQ Platform Operator uses the services configuration instead. The services section provides all necessary options to configure TLS, the HiveMQ proxy protocol, a custom HiveMQ listener name, and additional Kubernetes service configuration options. You can also configure custom service names.
Legacy Configuration | New Configuration | Description |
---|---|---|
| | Configures the Kubernetes services. |
| | Configures the type of service. Available types are |
| | Configures additional service type information. Available types are ClusterIP, NodePort, and LoadBalancer. |
| | Configures the service name. |
| | Configures the service port. |
| | Configures the port on the container to be used. |
| | Configures whether the service is exposed. |
| | Configures annotations for the service. |
| | Configures labels for the service. |
| | Configures the sessionAffinity. Available values are ClientIP and None. |
| | Configures the Kubernetes externalTrafficPolicy. Available values are Cluster and Local. |
The following service configuration option is no longer supported:
- hivemq.ports.patch

The HiveMQ Platform Helm chart provides direct access to configure service annotations, labels, sessionAffinity, and the service type. There is no longer a need to use JSON patches. For more information, refer to the documentation.
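For illustration, a services section with an MQTT listener and an annotated Control Center service might be sketched as follows. The ports, annotation keys, and the exact structure of the sessionAffinity setting are examples and should be checked against the Platform Helm chart documentation for your chart version.

```yaml
services:
  - type: mqtt
    exposed: true
    containerPort: 1883
  - type: control-center
    exposed: true
    containerPort: 8080
    serviceType: LoadBalancer          # ClusterIP, NodePort, or LoadBalancer
    annotations:
      example.com/load-balancer: "internal"   # placeholder annotation
    sessionAffinity:
      type: ClientIP                   # ClientIP or None
```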
Monitoring Configuration Changes
The HiveMQ Platform Operator and HiveMQ Platform Helm charts include several options to support monitoring with Prometheus and Grafana. You can install the default Grafana dashboard and a Prometheus ServiceMonitor or overwrite the dashboard with your custom dashboard.
The following Legacy Operator option to install a Prometheus stack as a sub-dependency is no longer supported:
- monitoring.dedicated
See the documentation for detailed information on how to install your monitoring stack.