Configure Your HiveMQ Cluster
The following configuration descriptions assume that you are using Helm to manage your deployment of the HiveMQ Operator and HiveMQ Cluster custom resource on Kubernetes. You can also use a different method to create and manage your manifests manually.
When you use the default configuration of the HiveMQ Helm Chart to deploy the HiveMQ Kubernetes Operator, the operator automatically creates a HiveMQ Cluster custom resource on your Kubernetes cluster. For most use cases, you need to adjust some configuration settings. For more information, see our recommended settings.
Kubernetes version v1.21 deprecated Pod Security Policies, and Kubernetes version v1.25 removes them entirely.
To run the HiveMQ Kubernetes Operator on Kubernetes v1.25 or higher, you must disable the deprecated Pod Security Policy setting in the global section of your operator configuration before you upgrade:
global:
  rbac:
    pspEnabled: false
HiveMQ broker configuration validation
Since HiveMQ version 4.10, configuration validation of the HiveMQ broker prevents broker startup when invalid configuration values are detected.
The new configuration behavior can impact HiveMQ deployments that you manage with the HiveMQ Kubernetes Operator (if your configuration contains invalid settings).
If a configuration validation fails, we recommend that you check your hivemq.log and correct the invalid configuration values.
For more information, see HiveMQ Configuration Validation. HiveMQ also gives you the option to skip the configuration validation as follows:
hivemq:
  env:
    - name: "HIVEMQ_SKIP_CONFIG_VALIDATION"
      value: "true"
Basic Configuration
Set HiveMQ License File
1. Create a ConfigMap that contains your license file:
kubectl create configmap hivemq-license --from-file=/path/to/my-license.lic
2. Configure the mapping in your HiveMQ Cluster custom resource:
apiVersion: hivemq.com/v1
kind: HiveMQCluster
metadata:
name: hivemq-cluster1
spec:
configMaps:
- name: hivemq-license
path: /opt/hivemq/license
HiveMQ Extensions
Add and Remove Extensions
By default, your HiveMQ deployment includes all HiveMQ Enterprise Extensions, the Prometheus extension, the DNS discovery extension, and the HiveMQ allow-all extension.
Use the extensions field in your custom values.yaml file to enable the HiveMQ Enterprise Extensions or to install your custom extensions.
extensions:
- name: hivemq-enterprise-security-extension
extensionUri: preinstalled
enabled: false
configMap: ese-configuration
updateStrategy: serial
Property Name | Description
---|---
name | The name of the extension.
extensionUri | The URL where the extension is stored. For example, the HiveMQ Marketplace or a publicly available URL.
configMap | The name of the ConfigMap that stores the configuration files for your extension. For more information, see Configuration of Extensions.
enabled | Sets the desired state of the selected extension.
static | Defines whether the extension is restarted when the linked ConfigMap changes. The default setting is false.
initialization | An (idempotent) initialization script that runs when the extension is installed or updated. The default setting is an undefined string. If you edit the script, the script automatically re-executes.
updateStrategy | Defines whether updates to the extension are processed in series or in parallel. The default setting is serial.
If you want to add a custom extension, consider using a Continuous Deployment pipeline to release the extension to a cluster-internal object storage such as MinIO. You can link to public objects or point the extension URI to your artifact storage.
extensions:
- name: your-custom-extension
extensionUri: https://your-server/path/to/your-custom-extension.zip
enabled: true
configMap: your-custom-configuration
updateStrategy: serial
To add multiple extensions to your HiveMQ cluster, specify a list of extensions. Each extension must have a name, extension URI, and an enabled flag.
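For example, the following sketch enables one preinstalled HiveMQ Enterprise Extension and one custom extension in a single list (the custom extension name, URI, and ConfigMap are illustrative placeholders taken from the examples in this section):
extensions:
  - name: hivemq-enterprise-security-extension
    extensionUri: preinstalled
    enabled: true
    configMap: ese-configuration
    updateStrategy: serial
  - name: your-custom-extension
    extensionUri: https://your-server/path/to/your-custom-extension.zip
    enabled: true
    configMap: your-custom-configuration
    updateStrategy: serial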
To remove an extension from your HiveMQ deployment, remove the extension declaration in your custom values.yaml file. For more information, see Revise HiveMQ Cluster Configuration with Helm.
Enable / Disable Extensions at Runtime
Removing an extension usually leads to a rolling upgrade of your HiveMQ deployment.
Sometimes, it makes sense to disable an extension instead of removing it from the cluster.
To disable or enable HiveMQ extensions at runtime, change the enabled flag of the extension in your custom values.yaml file.
For more information, see Revise HiveMQ Cluster Configuration with Helm.
Extension Configuration with a ConfigMap
HiveMQ extensions are configured with configuration files.
To allow the HiveMQ Kubernetes Operator to manage the extension configuration files, you provide the extension configuration in a ConfigMap.
A ConfigMap is a Kubernetes API object that lets you store and share non-sensitive, unencrypted configuration information.
ConfigMaps allow you to decouple your configurations from your Pods and components, which helps keep your workloads portable.
Plain text values in your ConfigMaps are not encrypted. Do not use ConfigMaps for confidential information such as passwords, OAuth tokens, or SSH keys.
ConfigMaps provide a data section where you can store items (keys) and their values.
ConfigMaps cannot be added at runtime.
Adding, removing, or editing the configMap field initiates a rolling upgrade of the CR.
Create a ConfigMap
The following procedure shows you how to place the open-source message log extension into a ConfigMap that a HiveMQ Cluster configuration references.
1. Save the example ConfigMap YAML file to your local file system as myConfig.yaml.
apiVersion: v1
kind: ConfigMap
data:
mqttMessageLog.properties: |-
verbose=true
client-connect=false
metadata:
labels:
app: hivemq
name: config-extension
For HiveMQ Enterprise Extensions, the key name of the configuration must be the ID of the extension followed by the .xml suffix.
For example, hivemq-enterprise-security-extension.xml.
2. Apply the ConfigMap to your Kubernetes cluster:
kubectl apply -f myConfig.yaml
The resulting HiveMQ Cluster extension configuration references the ConfigMap that contains your extension configuration information:
- name: hivemq-mqtt-message-log-extension
configMap: config-extension
enabled: true
extensionUri: https://www.hivemq.com/releases/extensions/hivemq-mqtt-message-log-extension-1.1.0.zip
static: true
updateStrategy: serial
Each time you change the ConfigMap, the HiveMQ operator automatically initiates a rolling update of the extension configuration.
Enable Monitoring
The HiveMQ Kubernetes Operator provides seamless integration with the Prometheus Operator.
Use the monitoring
field to enable Prometheus and an associated Grafana dashboard:
monitoring:
enabled: true
dedicated: false
Field | Type | Default | Description
---|---|---|---
enabled | Boolean | false | Specifies whether the operator enables integration with your existing Prometheus monitoring solution.
dedicated | Boolean | false | Specifies whether the operator installs a full Prometheus monitoring solution that is preconfigured for monitoring Kubernetes and HiveMQ.
The default login credentials for the Grafana dashboard that is created are username admin and password prom-operator.
You must configure the serviceMonitorSelector of your Prometheus manifests to pick up the HiveMQ ServiceMonitor. Otherwise, Prometheus does not scrape the target.
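As an illustrative sketch only, a Prometheus custom resource that selects the HiveMQ ServiceMonitor by label could look like the following; the app: hivemq label is an assumption, so use whatever labels your HiveMQ ServiceMonitor actually carries:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceMonitorSelector:
    matchLabels:
      app: hivemq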
Currently, when you deploy a Prometheus operator with the HiveMQ Helm Chart, multiple skipping unknown hook: "crd-install" warnings are logged.
You can safely ignore these warnings.
Define Initialization Routines (deprecated)
You can pre-provision your HiveMQ container with init containers.
Use the initialization
field in your custom resource to append init containers to your HiveMQ deployment:
We recommend the use of initContainers instead of initialization routines because init containers offer more functionality.
hivemq:
initialization:
- name: init-plugin
args:
- wget https://www.hivemq.com/releases/extensions/hivemq-file-rbac-extension-4.0.0.zip;
unzip hivemq-file-rbac-extension-4.0.0.zip -d /hivemq-data/extensions
All init containers provide a volume mount on /hivemq-data.
The volume mount allows you to add files and folder structures that are recursively copied to the HiveMQ directory when the container starts.
To facilitate specification of basic script steps, the initialization field uses the default image busybox:latest and command ["/bin/sh", "-c"].
Ownership and file mode changes to the files in /hivemq-data can be overwritten upon startup.
Add Init Containers
If desired, you can add one or more specialized init containers that run before the containers in your HiveMQ pod. The init containers can contain utilities or setup scripts that are not present in an app image. For more information, see Init Containers.
To specify an init container for a HiveMQ pod, add the initContainers
field into the pod specification as an array of container items.
hivemq:
initContainers:
- name: init-cfg
image: busybox
command:
- /bin/sh
- "-c"
args:
- |
echo
The array in initContainers
specifies a list of initialization containers that belong to the pod.
Init containers are executed in order prior to containers being started.
If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy.
The name for an init container must be unique among all containers.
For more information, see Containers.
Each init container must exit successfully before the next container starts.
If an init container fails to start, it is retried according to the pod restartPolicy.
However, if your pod restartPolicy is set to Always
, the init containers use restartPolicy OnFailure
.
A pod cannot be Ready until all init containers have succeeded.
If the pod restarts or is restarted, all init containers must execute again.
Set Resource Limits and Requests
By default, your HiveMQ deployment sets sensible resource limits. To override the default resource limits, use the following fields:
Resource | Default | Description
---|---|---
cpuLimitRatio | 1 | The ratio of the CPU limit. For example, with cpu: 2, a ratio setting of 2 results in a CPU limit of 4.
cpu | 4 | The amount of CPU requested.
memoryLimitRatio | 1 | The ratio of the memory limit. This ratio is usually 1.
memory | 4096M | The amount of memory requested.
ephemeralStorageLimitRatio | 1 | The ratio of the ephemeral storage limit.
ephemeralStorage | 15Gi | The amount of ephemeral disk space requested. By default, the HiveMQ data folder uses this amount.
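For example, to request more memory while keeping the default limit ratios, you could override the values as follows (a minimal sketch that assumes the field names listed above):
hivemq:
  cpu: "4"
  cpuLimitRatio: 1
  memory: 8192M
  memoryLimitRatio: 1
  ephemeralStorage: 15Gi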
Configure HiveMQ Ports
In the root of the specification, you can use the ports
field to configure which ports are mapped to your pods.
To map a port, use the following fields:
Field | Description
---|---
name | The name of the port.
port | The port number that is exposed.
expose | Creates a service that points to the selected port. The naming schema of the service is hivemq-<cluster-name>-<port-name>.
patch | A list of strings with JSON patches that are applied to the resulting service when expose is set to true.
The default values for the ports
field are as follows:
hivemq:
ports:
- name: "mqtt"
port: 1883
expose: true
patch:
- '[{"op":"add","path":"/spec/selector/hivemq.com~1node-offline","value":"false"},{"op":"add","path":"/metadata/annotations","value":{"service.spec.externalTrafficPolicy":"Local"}}]'
# If you want Kubernetes to expose the MQTT port
# - '[{"op":"add","path":"/spec/type","value":"LoadBalancer"}]'
- name: "cc"
port: 8080
expose: true
patch:
- '[{"op":"add","path":"/spec/sessionAffinity","value":"ClientIP"}]'
# If you want Kubernetes to expose the HiveMQ control center via a load balancer.
# Warning: You should consider configuring proper security and TLS beforehand. Ingress may be a better option here.
# - '[{"op":"add","path":"/spec/type","value":"LoadBalancer"}]'
Configure Your HiveMQ Cluster
Field | Default | Description
---|---|---
clusterReplicaCount | 2 | The number of copies the cluster maintains for each piece of persistent data. A replica count of 2 = one original and one copy.
clusterOverloadProtection | true | Automatically reduces the rate of incoming messages from message-producing MQTT clients that significantly contribute to the overload of the cluster.
nodeCount | 3 | The number of cluster nodes in the HiveMQ cluster.
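For example, the following values configure a five-node cluster with a replica count of three and overload protection enabled (a sketch that assumes these fields sit in the hivemq section of your values file):
hivemq:
  nodeCount: 5
  clusterReplicaCount: 3
  clusterOverloadProtection: true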
For more information on high availability clustering with HiveMQ, see HiveMQ Clusters.
TLS Listener
This procedure shows you how to configure and verify a TLS listener. You can create the necessary server and client certificates and the corresponding Keystores with the keytool and OpenSSL command-line tools.
This sample procedure is not intended for production use. For more information, see TLS for your cloud-based MQTT broker.
1. Create a Keystore that contains a self-signed server certificate:
keytool -genkey -keyalg RSA -alias hivemq -keystore hivemq.jks -storepass changeme -validity 360 -keysize 2048
2. Create a Kubernetes secret from the Keystore:
kubectl create secret generic --from-file=hivemq.jks hivemq-jks
3. Edit your HiveMQ Cluster custom resource:
kubectl edit hivemq-cluster <my-cluster>
4. Use the secrets area to mount the Keystore into the configuration directory:
hivemq:
  secrets:
    - name: hivemq-jks
      path: /opt/hivemq/conf
5. Add a TLS listener that uses the mounted Keystore to the listenerConfiguration field:
<tls-tcp-listener>
    <port>8883</port>
    <bind-address>0.0.0.0</bind-address>
    <proxy-protocol>true</proxy-protocol>
    <tls>
        <keystore>
            <path>/opt/hivemq/conf/hivemq.jks</path>
            <password>changeme</password>
            <private-key-password>changeme</private-key-password>
        </keystore>
    </tls>
</tls-tcp-listener>
6. Adjust the mqtt port so that it corresponds to the new listener:
- expose: true
  name: mqtt
  patch:
    - '[{"op":"add","path":"/spec/selector/hivemq.com~1node-offline","value":"false"},{"op":"add","path":"/metadata/annotations","value":{"service.spec.externalTrafficPolicy":"Local"}}]'
  port: 8883
You must always specify a port named mqtt because this port is used for the liveness check of the resulting Pods.
7. Wait until the cluster state is RUNNING, then verify the listener:
kubectl port-forward svc/hivemq-hivemq-mqtt-tls 8883:8883
mqtt sub -p 8883 -t test --cafile server.pem -d
Define DNS Suffix
If desired, you can specify a cluster domain suffix to use for DNS discovery.
When no dnsSuffix is set, the default is svc.cluster.local.
hivemq:
dnsSuffix: svc.cluster.local
Add Pod Labels
You can specify labels for your HiveMQ pod templates as desired. Pod labels help you identify and organize your pods. Labels can be attached to objects at creation time and subsequently added and modified at any time.
hivemq:
podLabels:
test: "myTestLabel"
Add Pod Annotations
Pod annotations allow you to add non-identifying metadata to your HiveMQ pod templates. You can use annotations to provide useful information and context for yourself or your DevOps team.
hivemq:
podAnnotations:
my-informative-annotation: my-useful-value-1
Set Priority Class Name
If desired, you can specify a priority class name to set the priority to a HiveMQ pod template.
Kubernetes ships with two common priority classes that you can use to ensure that critical components are always scheduled first:
- system-cluster-critical is the highest possible priority.
- system-node-critical is the next highest priority.
hivemq:
priorityClassName: system-node-critical
To use other priority class names, you must create a PriorityClass
with the associated name.
For more information, see PriorityClass.
If you do not specify a priority class name, the HiveMQ Kubernetes Operator automatically sets the pod priority to your defined default priority. If no default priority is present, the operator sets the pod priority to zero.
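For example, a custom PriorityClass can be created with a standard Kubernetes manifest (the class name and value below are illustrative):
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: hivemq-high-priority
value: 1000000
globalDefault: false
description: "Use this priority class for HiveMQ broker pods."
You can then reference hivemq-high-priority in the priorityClassName field as shown above.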
Set Runtime Class Name
If desired, you can specify a runtime class name to reference a particular RuntimeClass
object in the underlying controller to run your HiveMQ pod templates.
For more information, see RuntimeClass.
The Kubernetes RuntimeClass feature is used to select the container runtime configuration.
Kubernetes uses the container runtime configuration to run the containers of a pod.
You can set different RuntimeClass
objects for your pods to provide a balance of performance versus security.
You can also use RuntimeClass
objects to run different pods with the same container runtime and different settings.
hivemq:
runtimeClassName: myclass
If no RuntimeClass
resource object matches the specified runtimeClassName
, the pod is not run.
If you do not set a runtimeClassName
or the value is empty, the HiveMQ Kubernetes Operator uses the default RuntimeHandler.
The default handler is equivalent to the behavior when the RuntimeClass feature is disabled.
Define Tolerations
If desired, you can apply tolerations to your HiveMQ pods that allow the pods to schedule onto nodes with matching taints.
In Kubernetes, taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. You can apply one or more taints to a node. For more information, see Taints and Tolerations.
Taints are applied to nodes and allow a node to repel specific pods.
You can put multiple taints on the same node.
Each taint has a key, value, and effect.
Tolerations are applied to pods and allow (but do not require) the pod to schedule onto nodes that have matching taints.
You can put multiple tolerations on the same pod.
A toleration matches a taint if the keys and effects are the same and one of the following operations applies:
- The operator field is set to Exists (in which case no value is specified).
- The operator field is set to Equal and the specified values match.
The way Kubernetes processes multiple taints and tolerations is similar to a filter. Kubernetes starts with all taints on the node, then ignores the taints for which the pod has a matching toleration. The remaining un-ignored taints have the indicated effects on the pod.
hivemq:
tolerations:
- key: "key1"
operator: "Equal"
value: "value1"
effect: "NoSchedule"
Field | Type | Description
---|---|---
effect | String | Specifies the taint effect to match. An empty field matches all taint effects. The possible values are NoSchedule, PreferNoSchedule, and NoExecute.
key | String | Specifies the taint key to which the toleration applies. An empty key with the operator Exists matches all taint keys.
operator | String | Represents the relationship of the key to the value. Valid operators are Exists and Equal. The default is Equal.
tolerationSeconds | Integer | When the effect is NoExecute, specifies how long the toleration tolerates the taint. If unset, the taint is tolerated forever.
value | String | Specifies the taint value to which the toleration matches. If the operator is Exists, the value must be empty.
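For reference, the toleration in the example above matches a taint that you can apply to a node with kubectl (the node name is a placeholder):
kubectl taint nodes <node-name> key1=value1:NoSchedule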
Additional Volumes
If desired, you can add further Kubernetes volumes to your HiveMQ pods.
The named volumes that you add to a pod can be accessed by all containers in the pod.
Kubernetes supports several types of volumes.
For more information, see Types of Volumes.
hivemq:
additionalVolumes:
- name: test-data1
emptyDir: {}
Make sure that a volume directory is already created in your container.
Additional Volume Mounts
When you add further Kubernetes volumes to your HiveMQ pods, you must also define how you want Kubernetes to mount the volume within the container.
hivemq:
additionalVolumeMounts:
- name: test-data1
mountPath: /cache
Field | Type | Description
---|---|---
mountPath | String | The path within the container at which the volume is mounted. The path must not contain colons.
mountPropagation | String | Defines how mounts are propagated from the host to the container and from the container to the host. The default setting is None.
name | String | The name of the mount. This name must match the name of a volume.
readOnly | Boolean | Defines whether the volume is mounted in the container as read-only.
subPath | String | Defines the path within the volume from which Kubernetes mounts the volume of the container. The default setting is "" (the volume root).
subPathExpr | String | Defines an expanded path within the volume from which Kubernetes mounts the volume of the container. This path behaves similarly to subPath, but environment variable references in the path are expanded.
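Putting both settings together, the following sketch (based on the two examples above) adds an emptyDir volume and mounts it at /cache in the HiveMQ container:
hivemq:
  additionalVolumes:
    - name: test-data1
      emptyDir: {}
  additionalVolumeMounts:
    - name: test-data1
      mountPath: /cache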
Add Topology Spread Constraints
If desired, you can define one or more pod topology spread constraints to control how Kubernetes schedules matching pods across the given nodes, zones, regions, or other user-defined topology domains of your Kubernetes cluster. For more information, see Pod Topology Spread Constraints.
hivemq:
topologySpreadConstraints:
- maxSkew: 1
topologyKey: node
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
foo: hivemq
When you define multiple topology spread constraints for a pod, the constraints are combined with a logical AND. The Kubernetes scheduler seeks a node for the incoming pod that satisfies all the constraints.
Field | Type | Description
---|---|---
maxSkew | Integer | Defines the degree to which pods can be unevenly distributed. When whenUnsatisfiable is DoNotSchedule, a maxSkew of 1 means the number of matching pods in any two topology domains can differ by at most 1.
topologyKey | String | Defines the key for the node labels.
labelSelector | String | A label selector that Kubernetes uses to find matching pods. Pods that match this label selector are counted to determine the number of pods in the corresponding topology domain.
whenUnsatisfiable | String | Specifies how Kubernetes handles pods that do not satisfy the spread constraint. The possible values are DoNotSchedule (default) and ScheduleAnyway.
Pod Security Configuration
Kubernetes provides security guidelines and standard controllers.
Starting with version 1.25, Kubernetes implements a new Pod Security Admission controller that enables new isolation levels by namespace or cluster-wide.
Before v1.25, Pod Security Policies were used.
To run the HiveMQ operator with Kubernetes v1.25, you must set the pspEnabled
tag in the global
rbac
setting of your operator configuration to false
.
To learn more on how to operate HiveMQ securely on Kubernetes, see Pod Security Context configuration and Configure a Security Context for a Pod or Container in the Kubernetes documentation.
Set Pod Security Context
For more information, see Configure a Security Context for a Pod or Container.
Some fields in PodSecurityContext are also present in SecurityContext.
Field values of SecurityContext take precedence over field values of PodSecurityContext.
hivemq:
podSecurityContext:
runAsUser: 10000
Field | Type | Description
---|---|---
fsGroup | Integer | A special supplemental group that applies to all containers in a pod. The format is int64. Some volume types allow Kubernetes to change the ownership of the volume so that it is owned by the pod. If fsGroup is unset, the kubelet does not modify the ownership and permissions of any volume.
fsGroupChangePolicy | String | Defines the behavior for changing ownership and permission of the volume before the volume is exposed inside a pod. This field only applies to volume types that support fsGroup-based ownership and permissions. The possible values are OnRootMismatch and Always.
runAsGroup | Integer | Specifies the primary group ID for all processes that run in any containers of the pod. The format is int64. The default primary group ID is root(0). The default HiveMQ image is built for the primary group ID root(0).
runAsNonRoot | Boolean | Specifies whether the container must run as a non-root user. When set to true, the kubelet validates the image at runtime and refuses to start the container if the image runs as root (UID 0).
runAsUser | Integer | The user ID (UID) to run the entrypoint of the container process. If unspecified, the default is the user specified in the image metadata.
seLinuxOptions | String | The SELinux context that is applied to all containers. If unspecified, the container runtime allocates a random SELinux context for each container.
seccompProfile | String | Defines the seccomp profile settings of a pod or container. Only one profile source can be set.
windowsOptions | String | The Windows-specific settings that are applied to all containers. If unspecified, the options defined in the container's SecurityContext are used.
Set Container Security Context
If desired, you can provide a custom security context for your HiveMQ containers.
A security context defines privilege and access control settings for a pod or container.
Security settings that you specify for a container apply only to the individual container.
Some security configuration fields are present in both SecurityContext
and PodSecurityContext
.
When both are set, the values in SecurityContext
take precedence.
When there is overlap, the settings you define in containerSecurityContext
override settings made at the pod level.
Container settings do not affect pod volumes.
hivemq:
containerSecurityContext:
runAsUser: 10000
Field | Type | Description
---|---|---
allowPrivilegeEscalation | Boolean | Controls whether a process can gain more privileges than its parent process. This bool directly controls whether the no_new_privs flag is set on the container process.
capabilities | String | Specifies POSIX capabilities to add or remove when the container runs.
privileged | Boolean | Specifies whether the container runs in privileged mode. When set to true, processes in the container are essentially equivalent to root on the host. The default setting is false.
procMount | String | Specifies the type of proc mount to use for the containers. The default is DefaultProcMount, which uses the container runtime defaults for read-only paths and masked paths. This property requires the ProcMountType feature flag to be enabled.
readOnlyRootFilesystem | Boolean | Specifies whether the container has a read-only root file system. The default setting is false.
runAsGroup | Integer | Specifies the group ID (GID) to run the entry point of the container process. The format is int64. If unset, the runtime default is used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
runAsNonRoot | Boolean | Specifies that the container must run as a non-root user. When set to true, the kubelet validates the image at runtime and refuses to start the container if the image runs as root (UID 0).
runAsUser | Integer | Specifies the user ID (UID) to run the entry point of the container process. The format is int64. If unset, defaults to the user specified in the image metadata. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
seLinuxOptions | String | Specifies the SELinux context that is applied to the container. If unspecified, the container runtime allocates a random SELinux context for each container. If SELinux options are provided at both the pod and container level, the container options override the pod options.
seccompProfile | String | Defines the seccomp profile settings of a pod or container. Only one profile source can be set. If seccomp options are provided at both the pod and container level, the container options override the pod options.
windowsOptions | String | The Windows-specific settings that are applied to the container. If unspecified, the options defined in the PodSecurityContext are used.
Volume Claim Templates
When you use StatefulSets, you can add volumeClassTemplates
to provide stable storage for your HiveMQ pods with persistent volumes.
The volumeClaimTemplates
is a list of claims that pods are allowed to reference.
The StatefulSet controller is responsible for mapping network identities to claims in a way that maintains the identity of a pod.
Every claim in this list must have at least one matching (by name) volumeMount
in one container in the template.
For more information, see Stable Storage and Persistent Volume Claims.
Claims listed in the volumeClaimTemplates list take precedence over any volumes in the template that have the same name.
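As a hedged sketch, assuming the volumeClaimTemplates list sits in the hivemq section and follows the standard Kubernetes PersistentVolumeClaim template schema (the claim name, storage class, size, and mount path are illustrative assumptions):
hivemq:
  volumeClaimTemplates:
    - metadata:
        name: hivemq-pvc
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: standard
        resources:
          requests:
            storage: 10Gi
  additionalVolumeMounts:
    - name: hivemq-pvc
      mountPath: /opt/hivemq/data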
Add Operator Hints
The operatorHints
section provides options for configuring operations logic such as surge node orchestration and persistent volume claim (PVC) clean-up.
To set operator hints, the following setting must be present in a ControllerTemplate: cluster-stateful-set.yaml.
hivemq:
operatorHints:
statefulSet:
surgeNode: true
surgeNodeCleanupPvc: true
Field | Value | Description
---|---|---
statefulSet | Boolean | Specifies properties that are relevant for deploying a StatefulSet, such as the surgeNode and surgeNodeCleanupPvc flags shown in the example above.
Add Sidecar Containers
In some cases, it can be useful to add a container that runs alongside the HiveMQ container in a pod to enhance or extend its functionality without changing the HiveMQ container itself.
A sidecar container is a second container that you can add to the pod definition.
Sidecars must be placed in the same pod as the main application container and share the same resources as the main container.
For more information, see Using Pods.
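The following is an illustrative sketch only; the sidecars field name and the container shown are assumptions based on the text above, not a confirmed part of the specification:
hivemq:
  sidecars:
    - name: my-sidecar
      image: busybox
      command:
        - /bin/sh
        - "-c"
      args:
        - while true; do echo "sidecar heartbeat"; sleep 60; done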
Configure HiveMQ
This section lists specific sections of the config.xml
that are represented in the HiveMQ Custom Resource Definition.
You must specify these parameters in the restrictions, MQTT, and security sections of your manifest:
hivemq:
mqtt:
maxQos: 1
If you need to edit the config.xml
of your deployment at a more granular level, use the configOverride
field.
To ensure HiveMQ can still interact with the operator correctly, start with the default value of the configOverride field when you make low-level changes to the configuration.
Restrictions
Field | Value | Description
---|---|---
maxClientIdLength | 65535 | The maximum number of characters HiveMQ accepts in an MQTT client ID.
maxTopicLength | 65535 | The maximum number of characters HiveMQ accepts in a topic string.
maxConnections | -1 | The maximum number of MQTT connections HiveMQ allows. A setting of -1 = unlimited.
incomingBandwidthThrottling | 0 | The maximum incoming traffic as bytes per second (b/s).
noConnectIdleTimeout | 10000 | The time in milliseconds that HiveMQ waits for the CONNECT message of a client before closing an open TCP socket.
For more information, see Restrictions.
MQTT Options
Field | Default | Description
---|---|---
sessionExpiryInterval | 4294967295 | The length of time in seconds that can pass after the client disconnects before the session expires.
messageExpiryMaxInterval | 4294967296 | The length of time in seconds that can pass after a message arrives at the broker until the message expires.
maxPacketSize | 268435460 | The maximum size, in bytes, of MQTT packets that the HiveMQ broker accepts.
serverReceiveMaximum | 10 | The maximum number of PUBLISH messages that are not yet acknowledged by the HiveMQ broker that each client can send.
keepaliveMax | 65535 | The maximum keepAlive value that the HiveMQ broker accepts from clients.
keepaliveAllowUnlimited | true | Allows connections from clients that send a CONNECT packet with a keepAlive=0 setting.
topicAliasEnabled | true | To reduce the packet size of PUBLISH messages, an alias can replace the topic. Topic aliases must be a number between 1 and 65535. 0 is not allowed.
topicAliasMaxPerClient | 5 | Limits the number of topic aliases per client.
subscriptionIdentifierEnabled | true | Associates an identifier with every topic filter in a SUBSCRIBE message.
wildcardSubscriptionEnabled | true | Defines whether the HiveMQ broker accepts subscriptions with a topic filter that uses wildcard characters.
sharedSubscriptionEnabled | true | Defines whether the HiveMQ broker supports shared subscriptions.
retainedMessagesEnabled | true | Defines whether the retained messages feature is enabled on the HiveMQ broker.
maxQos | 2 | Defines the maximum Quality of Service (QoS) level that can be used in MQTT PUBLISH messages.
queuedMessagesMaxQueueSize | 1000 | Limits the number of messages the HiveMQ broker queues per client.
queuedMessagesStrategy | discard | Defines how the HiveMQ broker handles new messages for a client when the queue of the client is full.
For more information, see MQTT Specific Configuration Options.
Security
Field | Value | Description
---|---|---
allowEmptyClientId | true | Allows the use of empty client IDs. If this is set to true, HiveMQ automatically generates a random client ID when the clientId of a CONNECT packet is empty.
payloadFormatValidation | false | Enables UTF-8 validation of UTF-8 PUBLISH payloads.
topicFormatValidation | true | Enables UTF-8 validation of topic names and client IDs.
allowRequestProblemInformation | true | Allows the client to request problem information. If this is set to false, no reason string and user property values are sent to clients.
controlCenterAuditLog | true | Enables audit logging for the HiveMQ Control Center.
For more information, see the Security Configuration section of MQTT Configuration.
Allow-all Extension
By default, the HiveMQ Docker image comes with the allow-all extension that permits all MQTT connections without requiring authentication.
Before you use HiveMQ in production, add an appropriate security extension and remove the HiveMQ allow-all extension.
To disable the extension, set the HIVEMQ_ALLOW_ALL_CLIENTS
environment variable to false:
hivemq:
env:
- name: HIVEMQ_ALLOW_ALL_CLIENTS
value: "false"
For more information, see Default Authentication Behaviour.
Use a HiveMQ Custom Image
Currently, the HiveMQ Operator renders the hivemqVersion
as the image tag.
hivemq:
hivemqVersion: latest
image: my-repo/hivemq-k8s-image
If necessary, you can also define imagePullPolicy.
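For example, a hedged sketch that assumes imagePullPolicy sits alongside image in the hivemq section (this placement is an assumption):
hivemq:
  hivemqVersion: latest
  image: my-repo/hivemq-k8s-image
  imagePullPolicy: IfNotPresent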
Specify Log Level
Use the logLevel
field to specify the log level for the root logger:
hivemq:
logLevel: INFO
Specify Custom Java Options
To specify Java flags such as GC options or network properties, use the javaOptions field.
The default value of the javaOptions field works well in most environments:
-XX:+UnlockExperimentalVMOptions -XX:InitialRAMPercentage=40 -XX:MaxRAMPercentage=50 -XX:MinRAMPercentage=30
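For example, to raise the maximum RAM percentage while keeping the other defaults, you could override the field as follows (the values shown are illustrative):
hivemq:
  javaOptions: >-
    -XX:+UnlockExperimentalVMOptions
    -XX:InitialRAMPercentage=40
    -XX:MaxRAMPercentage=60
    -XX:MinRAMPercentage=30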
Specify Custom Environment Variables
To append custom variables to the existing environment of the HiveMQ container, use the env
field:
hivemq:
env:
- name: TEST_VAR
value: FOO
It is also possible to specify environment variables directly from secret objects:
env:
- name: CLUSTER_KEYSTORE_KEY_PASS
valueFrom:
secretKeyRef:
key: keystore-key-pass
name: hivemq-cluster-tls-secrets
Use a Custom Controller Template
The HiveMQ Operator supports the use of custom controller templates to deploy HiveMQ. Custom templates make it possible to use controllers such as StatefulSet and DaemonSet for your cluster deployments.
The method is similar to how Helm templates are written, but instead of Go templates, the HiveMQ Operator uses a Jinja-like language (Jinjava).
The context provided to the template consists of the HiveMQCluster object (variable name spec) as well as some built-in templating functions.
hivemq:
controllerTemplate: "my-deployment.yaml"
Deployments can be YAML based (.yaml, .yml) or JSON based (.json).
Template Functions
The template context provides built-in functions for some common tasks:
- util:escapeJson(String): Escapes a given input string to be JSON compliant.
- util:indent(Integer, String): Indents the given multi-line input string for YAML templates.
- util:getPort(ClusterSpec, String): Returns the port object for the given port name.
- util:stringReplace(String, String, String): Runs replaceAll on the first argument and replaces the second argument string with the third argument string.
- util:render(ClusterSpec, String): Renders a given string with the same templating context as the template itself. For example, renders a custom property from the cluster specification.
Custom Variables
You can also specify additional properties on the HiveMQCluster specification:
hivemq:
customProperties:
myCustomProperty: "customValue"
This value can be used in custom templates.
For example, {{ spec.customProperties.myCustomProperty }} evaluates to customValue.
HiveMQ Custom Resource Patches
Use these files as a basis for your own custom resource file structures. The sample files include patches that you can use to update your HiveMQ Cluster deployment in various ways:
- Install and configure your extensions
- Configure your HiveMQ licenses
- Configure how ports are mapped and exposed
For more information, see Patch Kubernetes objects.
Configuration Override Patch
The example config-override.yaml
patch shows how you can override the default config.xml
template of your HiveMQ cluster custom resource.
The override is useful when you need to configure detailed parameters that are not included in the hivemqCluster.json
schema.
To demonstrate how block scalar strings are formatted for this kind of structure, the patch file applies the default template that is configured in the hivemqCluster.json
schema.
hivemq:
configOverride: |-
<?xml version="1.0"?>
<hivemq>
<listeners>
--LISTENER-CONFIGURATION--
</listeners>
<control-center>
<listeners>
<http>
<port>${HIVEMQ_CONTROL_CENTER_PORT}</port>
<bind-address>0.0.0.0</bind-address>
</http>
</listeners>
<users>
<user>
<name>${HIVEMQ_CONTROL_CENTER_USER}</name>
<password>${HIVEMQ_CONTROL_CENTER_PASSWORD}</password>
</user>
</users>
</control-center>
<cluster>
<transport>
--TRANSPORT_TYPE--
</transport>
<enabled>true</enabled>
<discovery>
<extension>
<reload-interval>${HIVEMQ_DNS_DISCOVERY_INTERVAL}</reload-interval>
</extension>
</discovery>
<replication>
<replica-count>${HIVEMQ_CLUSTER_REPLICA_COUNT}</replica-count>
</replication>
</cluster>
<overload-protection>
<enabled>${HIVEMQ_CLUSTER_OVERLOAD_PROTECTION}</enabled>
</overload-protection>
<restrictions>
<max-client-id-length>${HIVEMQ_MAX_CLIENT_ID_LENGTH}</max-client-id-length>
<max-topic-length>${HIVEMQ_MAX_TOPIC_LENGTH}</max-topic-length>
<max-connections>${HIVEMQ_MAX_CONNECTIONS}</max-connections>
<incoming-bandwidth-throttling>${HIVEMQ_INCOMING_BANDWIDTH_THROTTLING}</incoming-bandwidth-throttling>
<no-connect-idle-timeout>${HIVEMQ_NO_CONNECT_IDLE_TIMEOUT}</no-connect-idle-timeout>
</restrictions>
<mqtt>
<session-expiry>
<max-interval>${HIVEMQ_SESSION_EXPIRY_INTERVAL}</max-interval>
</session-expiry>
<packets>
<max-packet-size>${HIVEMQ_MAX_PACKET_SIZE}</max-packet-size>
</packets>
<receive-maximum>
<server-receive-maximum>${HIVEMQ_SERVER_RECEIVE_MAXIMUM}</server-receive-maximum>
</receive-maximum>
<keep-alive>
<max-keep-alive>${HIVEMQ_KEEPALIVE_MAX}</max-keep-alive>
<allow-unlimited>${HIVEMQ_KEEPALIVE_ALLOW_UNLIMITED}</allow-unlimited>
</keep-alive>
<topic-alias>
<enabled>${HIVEMQ_TOPIC_ALIAS_ENABLED}</enabled>
<max-per-client>${HIVEMQ_TOPIC_ALIAS_MAX_PER_CLIENT}</max-per-client>
</topic-alias>
<subscription-identifier>
<enabled>${HIVEMQ_SUBSCRIPTION_IDENTIFIER_ENABLED}</enabled>
</subscription-identifier>
<wildcard-subscriptions>
<enabled>${HIVEMQ_WILDCARD_SUBSCRIPTION_ENABLED}</enabled>
</wildcard-subscriptions>
<shared-subscriptions>
<enabled>${HIVEMQ_SHARED_SUBSCRIPTION_ENABLED}</enabled>
</shared-subscriptions>
<quality-of-service>
<max-qos>${HIVEMQ_MAX_QOS}</max-qos>
</quality-of-service>
<retained-messages>
<enabled>${HIVEMQ_RETAINED_MESSAGES_ENABLED}</enabled>
</retained-messages>
<queued-messages>
<max-queue-size>${HIVEMQ_QUEUED_MESSAGE_MAX_QUEUE_SIZE}</max-queue-size>
<strategy>${HIVEMQ_QUEUED_MESSAGE_STRATEGY}</strategy>
</queued-messages>
</mqtt>
<security>
<!-- Allows the use of empty client ids -->
<allow-empty-client-id>
<enabled>${HIVEMQ_ALLOW_EMPTY_CLIENT_ID}</enabled>
</allow-empty-client-id>
<!-- Configures validation for UTF-8 PUBLISH payloads -->
<payload-format-validation>
<enabled>${HIVEMQ_PAYLOAD_FORMAT_VALIDATION}</enabled>
</payload-format-validation>
<!-- Configures UTF-8 validation of topic names and client IDs -->
<utf8-validation>
<enabled>${HIVEMQ_TOPIC_FORMAT_VALIDATION}</enabled>
</utf8-validation>
<!-- Allows clients to request problem information -->
<allow-request-problem-information>
<enabled>${HIVEMQ_ALLOW_REQUEST_PROBLEM_INFORMATION}</enabled>
</allow-request-problem-information>
</security>
</hivemq>
To eliminate the need for any special formatting, you can also use a JSON patch. For more information, see JSON Patch.
Initialization Patch
The example initialization.yaml
patch shows how to use initialization routines.
This example shows how to install an extension.
However, you usually use the extensions field for this type of task.
hivemq:
initialization:
- name: init-kafka-plugin
args:
- |
# Setup extension
wget https://www.hivemq.com/releases/extensions/hivemq-kafka-extension-1.0.0.zip
unzip hivemq-kafka-extension-1.0.0.zip -d /hivemq-data/extensions
rm /hivemq-data/extensions/hivemq-kafka-extension/kafka-configuration.example.xml
chmod -R 777 /hivemq-data/extensions/hivemq-kafka-extension
HiveMQ Enterprise Extension for Kafka Patch
The example kafka.yaml
patch shows how to manage extensions.
For more information, see Kafka Extension Configuration.
hivemq:
extensions:
- name: hivemq-kafka-extension
extensionUri: https://www.hivemq.com/releases/extensions/hivemq-kafka-extension-1.1.0.zip
configMap: kafka-configuration
enabled: true
Before you apply the Kafka extension patch, you must create ConfigMaps for the configuration of the extension and your enterprise extension license.
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: hivemq
name: kafka-configuration
data:
kafka-configuration.xml: |-
<kafka-configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="kafka-extension.xsd">
<kafka-clusters>
<kafka-cluster>
<id>cluster01</id>
<bootstrap-servers>kafka.operator.svc.cluster.local:9071</bootstrap-servers>
<authentication>
<plain>
<username>test</username>
<password>test123</password>
</plain>
</authentication>
</kafka-cluster>
</kafka-clusters>
<mqtt-to-kafka-mappings>
<mqtt-to-kafka-mapping>
<id>mapping01</id>
<cluster-id>cluster01</cluster-id>
<mqtt-topic-filters>
<mqtt-topic-filter>mytopic/#</mqtt-topic-filter>
</mqtt-topic-filters>
<kafka-topic>my-kafka-topic</kafka-topic>
</mqtt-to-kafka-mapping>
</mqtt-to-kafka-mappings>
<kafka-to-mqtt-mappings>
<kafka-to-mqtt-mapping>
<id>mapping02</id>
<cluster-id>cluster01</cluster-id>
<kafka-topics>
<kafka-topic>first-kafka-topic</kafka-topic>
<kafka-topic>second-kafka-topic</kafka-topic>
<!-- Arbitrary number of Kafka topics -->
</kafka-topics>
</kafka-to-mqtt-mapping>
</kafka-to-mqtt-mappings>
</kafka-configuration>
apiVersion: v1
data:
hivemq.lic: |-
my-license-file
kafka-license.elic: |-
my-extension-license-file
kind: ConfigMap
metadata:
labels:
app: hivemq
name: hivemq-license
To apply the Kafka extension patch, after you create the necessary ConfigMaps, enter:
kubectl patch hmqc <cluster-name> --type=merge --patch "$(cat kafka.yaml)"
License Patch
The example license.yaml
shows how to install a license when you use the HiveMQ operator.
apiVersion: hivemq.com/v1
kind: HiveMQCluster
spec:
configMaps:
- name: hivemq-license
path: /opt/hivemq/license
To apply the license patch, enter the following command:
kubectl patch hmqc <cluster-name> --type=merge --patch "$(cat license.yaml)"
Before you apply the license patch, you must create a ConfigMap for the associated license. For more information, see the example ConfigMap for a Kafka extension license.
Listener Patch
The example listener-config.yaml
shows how to configure additional listeners.
This example uses the default listener with its templated environment variable, as well as an additional hardcoded listener on port 1884.
You can use this method to configure other types of listeners. For more information, see Listeners.
To directly reference a service on Kubernetes and use the correct port even if the load balancer port changes, you can use service port environment variables in this definition. For more information, see Kubernetes Environment Variables.
hivemq:
listenerConfiguration: >
<tcp-listener>
<port>${HIVEMQ_MQTT_PORT}</port>
<bind-address>0.0.0.0</bind-address>
</tcp-listener>
<tcp-listener>
<port>1884</port>
<bind-address>0.0.0.0</bind-address>
</tcp-listener>
To configure a TLS listener, you must provide the associated Keystore and Truststore in the configurations field.
Ports Patch
The example ports.yaml
shows how to configure additional ports.
When you apply this patch to a HiveMQ cluster that uses the default configuration, you add an API port and expose the port as a service.
hivemq:
ports:
# These are the default ports that get exposed if you don't override this field.
- name: mqtt
port: 1883
patch:
- '[{"op":"add","path":"/spec/selector/hivemq.com~1node-offline","value":"false"},{"op":"add","path":"/metadata/annotations","value":{"service.spec.externalTrafficPolicy":"Local"}}]'
- name: "cc"
port: 8080
expose: true
patch:
- '[{"op":"add","path":"/spec/sessionAffinity","value":"ClientIP"}]'
# If you want Kubernetes to expose the HiveMQ control center via load balancer.
# End of default ports
# If your extension exposes a custom REST API, you can expose the port as a service as follows:
# The service will be called "hivemq-<cluster-name>-<port-name>"
- name: my-api
port: 8082
expose: true