HiveMQ and Docker
The popularity of Docker for deploying all kinds of applications and services in a containerized environment has increased steadily.
That’s not surprising since orchestration platforms such as Kubernetes, Docker Swarm, and Mesos continuously improve their functionality for managing containers.
Docker integrates easily with cloud-infrastructure services such as load balancers and block storage, as well as with enterprise-grade platforms for bare-metal installations.
This ease of integration makes building complex services from basic container images a breeze.
To help you streamline your development and deployment efforts, HiveMQ provides a continuously-updated Docker repository on Docker Hub.
HiveMQ DockerHub Repository
The HiveMQ team maintains the HiveMQ DockerHub repository. The repository provides a central location for container images of the current HiveMQ DNS Discovery Image and HiveMQ Base Image, as well as many previous versions. You can use these images to create Custom HiveMQ Images.
Tags
The following tags are currently available:
Tag | Description |
---|---|
latest | Points to the latest version of the HiveMQ base image |
dns-latest | Points to the latest version of the HiveMQ DNS discovery image |
<version> | Base image that provides the indicated version of the broker (for example, 4.0.0) |
dns-<version> | DNS discovery image based on the selected base image version (for example, dns-4.0.0) |
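For example, you can pull an image with any of the listed tags. The following commands use tags that appear elsewhere in this guide:
docker pull hivemq/hivemq4:latest
docker pull hivemq/hivemq4:dns-latest
docker pull hivemq/hivemq4:4.0.0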
HiveMQ DNS Discovery Image
The HiveMQ DNS Discovery Image comes pre-installed with the HiveMQ DNS Discovery Extension and is optimized for use with orchestration software that provides a round-robin style A record.
Run a HiveMQ Cluster in Docker
To enable a HiveMQ cluster, the HiveMQ nodes must be able to find each other through cluster discovery. HiveMQ offers a DNS Discovery Extension that leverages round-robin style A records to achieve cluster discovery. The extension is tailor-made for Dockerized, orchestrated deployments. For information on how to utilize the HiveMQ DNS Discovery Image with different container-orchestration solutions, see Docker Swarm and Kubernetes.
Docker Swarm HiveMQ Cluster
The following example shows how to create a containerized HiveMQ cluster with the HiveMQ DNS Discovery Image and Docker Swarm.
We do not recommend using Docker Swarm in production.
Run the following command to start a single-node Swarm cluster:
docker swarm init
Create an overlay network on which the cluster nodes can communicate:
docker network create -d overlay --attachable myNetwork
Create the HiveMQ service on the network with the current version of the HiveMQ DNS Discovery Image:
docker service create \
  --replicas 3 --network myNetwork \
  --env HIVEMQ_DNS_DISCOVERY_ADDRESS=tasks.hivemq \
  --publish target=1883,published=1883 \
  --publish target=8080,published=8080 \
  --publish target=8000,published=8000,protocol=udp \
  --name hivemq \
  hivemq/hivemq4:dns-latest
This procedure provides a three-node cluster that forwards the MQTT (1883) and Web UI (8080) ports to the host network.
When you connect MQTT clients on port 1883, the connection is forwarded to one of the cluster nodes.
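To verify the setup, you can publish a test message to the published MQTT port. This sketch assumes the Mosquitto command-line clients are installed on the Docker host:
mosquitto_pub -h localhost -p 1883 -t test/topic -m "hello hivemq"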
With this configuration, the HiveMQ Control Center is only usable in a single-node cluster. In clusters with multiple nodes, a sticky session for the HTTP requests cannot be upheld because the internal load balancer forwards requests in an alternating fashion. Sticky sessions require the Docker Swarm Enterprise version.
Managing the Cluster
To scale the cluster up to 5 nodes, run:
docker service scale hivemq=5
To remove the cluster, run:
docker service rm hivemq
To read the logs of all HiveMQ nodes in real time, use:
docker service logs hivemq -f
To get the log of a single node, first list the service containers:
docker service ps hivemq
Then print the log of the desired container:
docker service logs <id>
where <id> is the container ID listed by the docker service ps command.
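As a convenience, the two steps can be combined into a single shell command. This sketch follows the log of the first task that docker service ps lists:
docker service logs $(docker service ps hivemq -q | head -n 1) -f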
Kubernetes HiveMQ Cluster
Consider using the HiveMQ Kubernetes Operator instead. The operator provides a deeper integration of HiveMQ with Kubernetes.
On Kubernetes, an appropriate deployment configuration is necessary to utilize DNS discovery. A headless service provides a DNS record for the broker that can be used for discovery.
For more information on how to run a HiveMQ cluster with Docker and Kubernetes, we highly recommend the blog post How to run a HiveMQ cluster with Docker and Kubernetes.
The following example shows the configuration for a HiveMQ cluster with three nodes that uses DNS discovery in a replication controller setup.
You must replace HIVEMQ_DNS_DISCOVERY_ADDRESS according to your Kubernetes namespace and configured domain.
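The discovery address that a headless service provides generally follows the standard Kubernetes DNS pattern. The value used in the example below corresponds to the default namespace and the default cluster domain:
# pattern: <service-name>.<namespace>.svc.<cluster-domain>.
hivemq-discovery.default.svc.cluster.local.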
apiVersion: v1
kind: ReplicationController
metadata:
  name: hivemq-replica
spec:
  replicas: 3
  selector:
    app: hivemq-cluster
  template:
    metadata:
      name: hivemq-cluster
      labels:
        app: hivemq-cluster
    spec:
      containers:
      - name: hivemq-pods
        image: hivemq/hivemq4:dns-latest
        ports:
        - containerPort: 8080
          protocol: TCP
          name: web-ui
        - containerPort: 1883
          protocol: TCP
          name: mqtt
        env:
        - name: HIVEMQ_DNS_DISCOVERY_ADDRESS
          value: "hivemq-discovery.default.svc.cluster.local."
        - name: HIVEMQ_DNS_DISCOVERY_TIMEOUT
          value: "20"
        - name: HIVEMQ_DNS_DISCOVERY_INTERVAL
          value: "21"
        readinessProbe:
          tcpSocket:
            port: 1883
          initialDelaySeconds: 30
          periodSeconds: 60
          failureThreshold: 60
        livenessProbe:
          tcpSocket:
            port: 1883
          initialDelaySeconds: 30
          periodSeconds: 60
          failureThreshold: 60
---
kind: Service
apiVersion: v1
metadata:
  name: hivemq-discovery
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  selector:
    app: hivemq-cluster
  ports:
    - protocol: TCP
      port: 1883
      targetPort: 1883
  clusterIP: None
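Assuming you save the definition above in a file such as hivemq-cluster.yaml (the file name is arbitrary), you can create the resources and verify that all three pods come up:
kubectl create -f hivemq-cluster.yaml
kubectl get pods -l app=hivemq-cluster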
Accessing the Web UI
To access the HiveMQ Control Center for a cluster that runs on Kubernetes, follow these steps:
Create a service that exposes the Control Center of the HiveMQ service. Use the following YAML (web.yaml) definition:
kind: Service
apiVersion: v1
metadata:
  name: hivemq-web-ui
spec:
  selector:
    app: hivemq-cluster
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  sessionAffinity: ClientIP
  type: LoadBalancer
To create the service, enter the following command:
kubectl create -f web.yaml
Depending on the provider of your Kubernetes environment, load balancers can be unavailable, or additional configuration can be necessary to access the HiveMQ Control Center.
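If your provider supplies a load balancer, one way to find the endpoint of the Control Center is to read the external IP of the service and open port 8080 on that address in a browser:
# the EXTERNAL-IP column shows the load balancer endpoint
kubectl get service hivemq-web-ui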
Connecting External Clients on the MQTT Port
To allow access to the MQTT port of a cluster that runs on Kubernetes, follow these steps:
Create a service that exposes the MQTT port with a load balancer. You can use the following YAML (mqtt.yaml) definition:
kind: Service
apiVersion: v1
metadata:
  name: hivemq-mqtt
spec:
  selector:
    app: hivemq-cluster
  ports:
    - protocol: TCP
      port: 1883
      targetPort: 1883
  type: LoadBalancer
To create the service, enter the following command:
kubectl create -f mqtt.yaml
You can now connect MQTT clients through the external endpoint of the load balancer on port 1883.
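For example, you can look up the external endpoint of the load balancer and connect a test subscriber to it. This sketch assumes the Mosquitto command-line clients; replace <EXTERNAL-IP> with the address that kubectl reports:
kubectl get service hivemq-mqtt
mosquitto_sub -h <EXTERNAL-IP> -p 1883 -t test/topic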
For more information and resources, see the HiveMQ and Kubernetes blog post.
Environment Variables
The following environment variables are available for custom configuration of the HiveMQ DNS Discovery Docker container:
Environment Variable | Default Value | Description |
---|---|---|
HIVEMQ_DNS_DISCOVERY_ADDRESS | - | Address to get the A record that will be used for cluster discovery |
HIVEMQ_DNS_DISCOVERY_INTERVAL | 31 | Interval in seconds after which to search for new nodes |
HIVEMQ_DNS_DISCOVERY_TIMEOUT | 30 | Wait time for DNS resolution in seconds |
HIVEMQ_CLUSTER_PORT | 8000 | Port used for cluster transport |
HIVEMQ_LICENSE | - | base64-encoded license file to use for the broker |
HIVEMQ_BIND_ADDRESS | - | Cluster transport bind address; only necessary if the default policy (resolve hostname) fails |
HIVEMQ_CONTROL_CENTER_USER | admin | Username for the Control Center login |
HIVEMQ_CONTROL_CENTER_PASSWORD | SHA256 hash of the default password (hivemq) | Password for the Control Center login |
Adding a HiveMQ License
To add your HiveMQ license to the Docker container, you must set the HIVEMQ_LICENSE
environment variable of the container to the base64-encoded string of your license file.
To base64 encode your license file as a string, run the following command:
cat path/to/your/hivemq-license.lic | base64
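You can then pass the encoded string to the container when you start it. This is a sketch with a placeholder for the actual base64 output:
docker run -p 8080:8080 -p 1883:1883 \
  -e HIVEMQ_LICENSE=<base64-encoded-license-string> \
  hivemq/hivemq4:latest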
Changing User Credentials for the HiveMQ Control Center
By default, the HiveMQ Control Center login credentials are admin:hivemq.
To change these credentials, use the HIVEMQ_CONTROL_CENTER_USER
and HIVEMQ_CONTROL_CENTER_PASSWORD
environment variables.
Use a SHA256-hashed value of your desired password. For more information on how to generate the password hash, see Generate a SHA256 Password.
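For example, both variables can be set when the container starts. This sketch uses placeholder values; the password must be the SHA256 hash described in the linked documentation, not the plain-text password:
docker run -p 8080:8080 -p 1883:1883 \
  -e HIVEMQ_CONTROL_CENTER_USER=myuser \
  -e HIVEMQ_CONTROL_CENTER_PASSWORD=<sha256-password-hash> \
  hivemq/hivemq4:latest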
Overriding the Bind Address
By default, this image attempts to use the ${HOSTNAME} of the container as the bind address to ensure that HiveMQ binds the cluster connection to the correct interface and forms a cluster.
To override the default behavior, set any value for the HIVEMQ_BIND_ADDRESS environment variable.
The broker attempts to use the value that you set as the bind address.
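For example, to bind the cluster transport to a specific interface address (10.0.0.5 is a placeholder for an address in your container network):
docker run -p 1883:1883 \
  -e HIVEMQ_BIND_ADDRESS=10.0.0.5 \
  hivemq/hivemq4:dns-latest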
HiveMQ Base Image
The HiveMQ base image installs HiveMQ and optimizes the installation for execution as a container.
The base image is designed for building custom images or for running a Dockerized HiveMQ locally for testing purposes.
Run a HiveMQ Single Node in Docker
To start a single HiveMQ instance and allow access to the MQTT port and the HiveMQ Control Center, get Docker and run the following command:
docker run -p 8080:8080 -p 1883:1883 hivemq/hivemq4:latest
This command creates a Dockerized local HiveMQ instance and allows you to connect to the broker (1883) or the HiveMQ Control Center (8080) through the respective ports.
Creating a Custom HiveMQ Docker Image
You can build your own image from the provided base image and utilize any of the provided HiveMQ versions. The following example shows a Dockerfile with numbered steps that are explained below:
ARG TAG=latest
# (1)
FROM hivemq/hivemq4:${TAG}
# (2)
ENV MY_CUSTOM_EXTENSION_ENV myvalue
# (3)
ENV HIVEMQ_CLUSTER_PORT 8000
# (4)
COPY --chown=hivemq:hivemq your-license.lic /opt/hivemq/license/your-license.lic
COPY --chown=hivemq:hivemq myconfig.xml /opt/hivemq/conf/config.xml
COPY --chown=hivemq:hivemq myextension /opt/hivemq/extensions/myextension
COPY --chown=hivemq:hivemq myentrypoint.sh /opt/myentrypoint.sh
# (5)
RUN chmod +x /opt/myentrypoint.sh
# (6)
ENTRYPOINT ["/opt/myentrypoint.sh"]
This custom image does the following:
1. Uses the hivemq/hivemq4:latest image as a base, with a build argument that (optionally) specifies which base tag to use.
2. Defines an environment variable for the extension.
3. Defines an environment variable that is substituted in the HiveMQ configuration file on start up. For more information, see Using environment variables.
4. Copies required files such as a valid HiveMQ license file, a customized configuration, a custom extension folder, and a custom entry point to the corresponding folders, and applies proper ownership inside the container.
5. Sets the custom entry point as executable.
6. Defines the entry point for the image. This definition is optional, but it allows you to run additional commands or programs (for configuration purposes) before you start the actual HiveMQ instance. A sketch of such an entry point follows this list.
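The following is a minimal sketch of what myentrypoint.sh could look like. It assumes that the original entry point script of the base image is located at /opt/docker-entrypoint.sh; verify the actual path with docker inspect on the base image you use:
#!/usr/bin/env bash
set -e

# Run any custom pre-start configuration here,
# for example adjusting files in /opt/hivemq/conf.
echo "Running custom pre-start configuration..."

# Hand control over to the original entry point of the base image
# (path assumed; check it with: docker inspect hivemq/hivemq4:latest).
exec /opt/docker-entrypoint.sh "$@"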
To build the Dockerfile, enter the following command:
docker build --build-arg TAG=4.0.0 -t hivemq-extension .
The result is an image that is built on HiveMQ base image version 4.0.0, with the current path as the build context.
The finished image is tagged locally as hivemq-extension:latest.