HiveMQ and Docker
Docker is a widely adopted open-source platform that simplifies the deployment and delivery of applications and services in containers. Docker integrates seamlessly with a wide range of tools and technologies, including orchestration platforms such as Kubernetes, continuous integration/delivery pipelines, and cloud infrastructure such as load balancers and block storage. For more information, see Get Started with Docker.
To help you streamline your development, deployment, and management efforts, HiveMQ provides a continuously updated Docker repository on Docker Hub.
HiveMQ Docker Hub Repository
A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.
The HiveMQ Docker Hub repository provides access to the latest HiveMQ DNS Discovery Image and HiveMQ Base Image as well as many previous container image versions. You can use these base images to create custom HiveMQ images.
Docker Hub Tags
The HiveMQ Docker Hub repository provides different versions of the HiveMQ images using tags.
The following tags are currently available:
Tag | Description |
---|---|
latest | Points to the latest version of the HiveMQ base image. |
dns-latest | Points to the latest version of the HiveMQ DNS discovery image. |
<version> | Base image that provides the indicated broker version (for example, 4.0.0). |
dns-<version> | DNS discovery image based on the selected base image version. |
For your convenience, a k8s-<version> tag is provided on Docker Hub to support legacy deployments of the HiveMQ Kubernetes Operator.
HiveMQ Docker Images on GitHub
The hivemq4-docker-images repository on GitHub provides the Dockerfile and context for the official HiveMQ Enterprise MQTT Broker Docker images hosted in the HiveMQ Docker Hub repository.
HiveMQ Base Image
The HiveMQ base image installs HiveMQ and optimizes the installation for execution as a container.
The base image can be used to build custom images or to run a Dockerized HiveMQ instance locally for testing purposes.
To specify an alternative image name when you build, use the TARGETIMAGE environment variable.
For example:
# replace myregistry/custom-hivemq:1.2.3 with the desired image name
# replace 4.28.0 with the desired HiveMQ version
TARGETIMAGE=myregistry/custom-hivemq:1.2.3 HIVEMQ_VERSION=4.28.0 ./build.sh
Run a Single HiveMQ Instance on Docker
To start a single HiveMQ instance and allow access to the MQTT port and the HiveMQ Control Center, get Docker and run the following command:
docker run -p 8080:8080 -p 1883:1883 hivemq/hivemq4:latest
The command creates a Dockerized local HiveMQ instance and allows you to connect to your HiveMQ broker on port 1883 and your HiveMQ Control Center on port 8080.
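If you have the Eclipse Mosquitto command-line clients installed (they are not part of the HiveMQ image), you can verify that the broker is reachable with a quick test publish; the topic and message are arbitrary examples:
mosquitto_pub -h localhost -p 1883 -t test/topic -m "hello"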
HiveMQ DNS Discovery Image
The HiveMQ DNS Cluster Discovery Extension is included in the HiveMQ DNS Discovery Image and optimized for orchestration software that provides a round-robin style A record.
The HiveMQ DNS discovery image is based on the HiveMQ base image.
We recommend using the HiveMQ DNS discovery image to run HiveMQ in a cluster.
Run a HiveMQ Cluster with Docker
To enable a HiveMQ cluster, the HiveMQ nodes must be able to find each other through cluster discovery.
For running HiveMQ in a cluster, we recommend our DNS discovery image, which has the HiveMQ DNS Discovery Extension built in. The image can be used with any container orchestration engine that supports service discovery with a round-robin A record.
A custom solution that supplies the A record can be used as well.
The extension is tailor-made for Dockerized, orchestrated deployments and is available as a free download from the official HiveMQ website. For information on how to use the HiveMQ DNS Discovery Image with different container orchestration solutions, see Docker Swarm and Kubernetes.
Other environments are also compatible, provided they support DNS discovery in some way.
Environment Variables
The following environment variables can be used to customize your discovery and broker configuration:
Environment Variable | Default Value | Description |
---|---|---|
HIVEMQ_DNS_DISCOVERY_ADDRESS | - | Address to get the A record that is used for cluster discovery |
HIVEMQ_DNS_DISCOVERY_INTERVAL | 31 | Interval in seconds after which to search for new nodes |
HIVEMQ_DNS_DISCOVERY_TIMEOUT | 30 | Wait time for DNS resolution in seconds |
HIVEMQ_CLUSTER_PORT | 8000 | Port used for cluster transport |
HIVEMQ_LICENSE | - | Base64-encoded license file to use for the broker |
HIVEMQ_BIND_ADDRESS | - | Cluster transport bind address (only necessary if the default policy of resolving the hostname fails) |
HIVEMQ_CONTROL_CENTER_USER | admin | Username for the Control Center login |
HIVEMQ_CONTROL_CENTER_PASSWORD | SHA256 hash of hivemq | Password for the Control Center login |
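For example, a minimal sketch of a local run that overrides some of these variables; the discovery address is a placeholder that only resolves inside an orchestrator that provides the corresponding A record:
docker run -p 1883:1883 -p 8080:8080 \
  -e HIVEMQ_DNS_DISCOVERY_ADDRESS=tasks.hivemq \
  -e HIVEMQ_DNS_DISCOVERY_INTERVAL=31 \
  -e HIVEMQ_CLUSTER_PORT=8000 \
  hivemq/hivemq4:dns-latest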
Deploy a HiveMQ Cluster with Docker Swarm
The following example shows how to create a containerized HiveMQ cluster with the HiveMQ DNS Discovery Image and Docker Swarm for local testing purposes.
We do not recommend using Docker Swarm in production.
Run the following command to start a single-node Docker Swarm cluster:
docker swarm init
Create an overlay network on which the cluster nodes can communicate:
docker network create -d overlay --attachable myNetwork
Create the HiveMQ service on the network with the current version of the HiveMQ DNS Discovery Image:
docker service create \
--replicas 3 --network myNetwork \
--env HIVEMQ_DNS_DISCOVERY_ADDRESS=tasks.hivemq \
--publish target=1883,published=1883 \
--publish target=8080,published=8080 \
--publish target=8000,published=8000,protocol=udp \
--name hivemq \
hivemq/hivemq4:dns-latest
This command creates a three-node cluster that forwards the MQTT port (1883) and HiveMQ Control Center (8080) port to the host network.
When you connect MQTT clients on port 1883, the connection is forwarded to any available cluster node.
With this configuration, the HiveMQ Control Center is only fully usable in a single-node cluster. In clusters with multiple nodes, sticky sessions for the HTTP requests cannot be maintained because the internal load balancer forwards requests in an alternating fashion.
For sticky sessions, the Docker Swarm Enterprise version is required.
Deploy a HiveMQ Cluster on Kubernetes
We highly recommend using the HiveMQ Platform Operator for Kubernetes to create and manage your HiveMQ deployments on Kubernetes. The operator provides a deeper integration of HiveMQ with Kubernetes.
On Kubernetes, an appropriate deployment configuration is necessary to utilize DNS discovery. A headless service provides a DNS record for the broker that can be used for discovery.
For more information on how to run a HiveMQ cluster with Docker and Kubernetes, see our How to run a HiveMQ cluster with Docker and Kubernetes blog post.
The following example shows the configuration for a HiveMQ cluster with three nodes that uses DNS discovery in a replication controller setup.
You must replace HIVEMQ_DNS_DISCOVERY_ADDRESS according to your Kubernetes namespace and configured domain.
apiVersion: v1
kind: ReplicationController
metadata:
name: hivemq-replica
spec:
replicas: 3
selector:
app: hivemq-cluster
template:
metadata:
name: hivemq-cluster
labels:
app: hivemq-cluster
spec:
containers:
- name: hivemq-pods
image: hivemq/hivemq4:dns-latest
ports:
- containerPort: 8080
protocol: TCP
name: web-ui
- containerPort: 1883
protocol: TCP
name: mqtt
env:
- name: HIVEMQ_DNS_DISCOVERY_ADDRESS
value: "hivemq-discovery.default.svc.cluster.local."
- name: HIVEMQ_DNS_DISCOVERY_TIMEOUT
value: "20"
- name: HIVEMQ_DNS_DISCOVERY_INTERVAL
value: "21"
readinessProbe:
tcpSocket:
port: 1883
initialDelaySeconds: 30
periodSeconds: 60
failureThreshold: 60
livenessProbe:
tcpSocket:
port: 1883
initialDelaySeconds: 30
periodSeconds: 60
failureThreshold: 60
---
kind: Service
apiVersion: v1
metadata:
name: hivemq-discovery
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
selector:
app: hivemq-cluster
ports:
- protocol: TCP
port: 1883
targetPort: 1883
clusterIP: None
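If you save the manifest above as, for example, hivemq-cluster.yaml (the file name is arbitrary), you can create the replication controller and the headless service with the following command:
kubectl create -f hivemq-cluster.yaml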
Access the HiveMQ Control Center on Kubernetes
To access the HiveMQ Control Center for a cluster that runs on Kubernetes, follow these steps:
Create a service that exposes the Control Center of the HiveMQ service. Use the following YAML definition (web.yaml):
kind: Service
apiVersion: v1
metadata:
name: hivemq-control-center
spec:
selector:
app: hivemq-cluster
ports:
- protocol: TCP
port: 8080
targetPort: 8080
sessionAffinity: ClientIP
type: LoadBalancer
To create the service, enter the following command:
kubectl create -f web.yaml
Depending on the provider of your Kubernetes environment, load balancers can be unavailable or additional configuration can be necessary to access the HiveMQ Control Center.
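If no load balancer is available in your environment, one possible workaround for local testing is to forward the Control Center port to your machine with kubectl, which bypasses the load balancer entirely:
kubectl port-forward svc/hivemq-control-center 8080:8080
You can then open the HiveMQ Control Center at http://localhost:8080.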
Connect External Clients on the MQTT Port on Kubernetes
To allow access to the MQTT port of a cluster that runs on Kubernetes, follow these steps:
Create a service that exposes the MQTT port with a load balancer. You can use the following YAML definition (mqtt.yaml):
kind: Service
apiVersion: v1
metadata:
name: hivemq-mqtt
spec:
selector:
    app: hivemq-cluster
ports:
- protocol: TCP
port: 1883
targetPort: 1883
type: LoadBalancer
To create the service, enter the following command:
kubectl create -f mqtt.yaml
You can now connect MQTT clients through the external endpoint of the load balancer on port 1883.
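To find the external endpoint, you can, for example, list the service and read the external IP that your provider assigns to the load balancer:
kubectl get service hivemq-mqtt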
For more information, see our HiveMQ and Kubernetes blog post.
Manage a HiveMQ Cluster with Docker
To scale your HiveMQ cluster up to 5 nodes, run the following command:
docker service scale hivemq=5
To remove your HiveMQ cluster, enter:
docker service rm hivemq
To read the logs for all HiveMQ nodes in real-time, enter:
docker service logs hivemq -f
To get the log for a single HiveMQ node, first list the service containers:
docker service ps hivemq
To print a specific log, enter:
docker service logs <id>
Replace <id> with the desired container ID from the service ps list.
Override the Cluster Bind Address
By default, the HiveMQ DNS Discovery image attempts to set the bind address to the ${HOSTNAME} of the container to ensure that HiveMQ binds the cluster connection to the correct interface and forms a cluster.
To override the default behavior, set any value for the HIVEMQ_BIND_ADDRESS environment variable. The broker then attempts to use the value that you set as the bind address.
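For example, a sketch that forces a specific bind address; 10.0.0.5 is a placeholder for an address that is actually assigned to your container:
docker run -e HIVEMQ_BIND_ADDRESS=10.0.0.5 -p 1883:1883 hivemq/hivemq4:dns-latest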
Add a HiveMQ License
To add your HiveMQ license to the Docker container, you must set the HIVEMQ_LICENSE environment variable of the container to the base64-encoded string of your license file.
To base64 encode your license file as a string, run the following command:
cat path/to/your/hivemq-license.lic | base64
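One possible way to pass the encoded string directly to a local container is shown in the following sketch; the tr -d '\n' removes the line breaks that some base64 implementations add, and the license path is a placeholder:
docker run -e HIVEMQ_LICENSE="$(cat path/to/your/hivemq-license.lic | base64 | tr -d '\n')" -p 1883:1883 -p 8080:8080 hivemq/hivemq4:latest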
Change User Credentials for the HiveMQ Control Center
The default HiveMQ Control Center login credentials are admin:hivemq.
To change these credentials, use the HIVEMQ_CONTROL_CENTER_USER and HIVEMQ_CONTROL_CENTER_PASSWORD environment variables.
Use a SHA256-hashed value of your desired password. For more information on how to generate the password hash, see Generate a SHA256 Password.
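For example, a sketch of a local run with custom credentials; the password value is a placeholder that you must replace with the hash you generate as described in Generate a SHA256 Password:
docker run -p 8080:8080 -p 1883:1883 \
  -e HIVEMQ_CONTROL_CENTER_USER=myadmin \
  -e HIVEMQ_CONTROL_CENTER_PASSWORD=<sha256-hash-of-your-password> \
  hivemq/hivemq4:latest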
Disable the HiveMQ Allow-all Extension
Since version 4.3, HiveMQ only allows MQTT clients to connect if a security extension is present.
For testing purposes, HiveMQ includes a hivemq-allow-all-extension that authorizes all MQTT clients to connect to HiveMQ.
Before you use HiveMQ in production, you must add an appropriate security extension and remove the hivemq-allow-all-extension.
You can download security extensions from the HiveMQ Marketplace or develop your own security extension.
HiveMQ Docker images come preconfigured with the hivemq-allow-all-extension by default.
To override this behavior, set the HIVEMQ_ALLOW_ALL_CLIENTS environment variable to false.
This will cause the entry point script to delete the extension on startup.
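For example, the following sketch starts a broker without the allow-all extension; keep in mind that clients cannot connect until you add a proper security extension:
docker run -e HIVEMQ_ALLOW_ALL_CLIENTS=false -p 1883:1883 hivemq/hivemq4:latest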
Set the Cluster Transport Type
By default, the HiveMQ DNS discovery image uses the User Datagram Protocol (UDP) for the cluster transport.
To use TCP as the transport type, set the HIVEMQ_CLUSTER_TRANSPORT_TYPE environment variable to TCP.
In general, we recommend using TCP for your cluster transport since TCP makes HiveMQ less susceptible to network splits under high network load.
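For example, in the Docker Swarm setup shown earlier, you could switch the cluster transport to TCP by adding the variable to the service (port publishing omitted for brevity):
docker service create \
  --replicas 3 --network myNetwork \
  --env HIVEMQ_DNS_DISCOVERY_ADDRESS=tasks.hivemq \
  --env HIVEMQ_CLUSTER_TRANSPORT_TYPE=TCP \
  --name hivemq \
  hivemq/hivemq4:dns-latest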
Create a Custom HiveMQ Docker Image
You can build your own image from the provided base image and utilize any of the provided HiveMQ versions.
ARG TAG=latest
# (1)
FROM hivemq/hivemq4:${TAG}
# (2)
ENV MY_CUSTOM_EXTENSION_ENV myvalue
# (3)
ENV HIVEMQ_CLUSTER_PORT 8000
# (4)
COPY --chmod=660 your-license.lic /opt/hivemq/license/your-license.lic
COPY --chmod=660 myconfig.xml /opt/hivemq/conf/config.xml
COPY myextension /opt/hivemq/extensions/myextension
COPY --chmod=755 myentrypoint.sh /opt/myentrypoint.sh
# (5)
ENTRYPOINT ["/opt/myentrypoint.sh"]
The specified custom HiveMQ Docker image does the following:
1. Uses the hivemq/hivemq4:latest image as a base, with a build argument that (optionally) specifies which base tag to use.
2. Defines an environment variable for the extension.
3. Defines an environment variable that is substituted in the HiveMQ configuration file during startup. For more information, see Using environment variables.
4. Copies required files such as a valid HiveMQ license file, a customized configuration, a custom extension folder, and a custom entry point to the corresponding folders and applies proper file permissions inside the container.
5. Optionally defines the entry point for the image. The definition of an entry point allows you to run additional commands or programs (for configuration purposes) before you start the actual HiveMQ instance.
To build the Docker file, enter the following command:
docker build --build-arg TAG=4.0.0 -t hivemq-extension .
The result is an image that is built on HiveMQ base image version 4.0.0 with the current directory as the build context. The finished image is tagged locally as hivemq-extension:latest.