HiveMQ and Docker
The popularity of Docker for deploying all kinds of applications and services in a containerized environment has been increasing exponentially.
That’s not surprising since orchestration platforms such as Kubernetes, Docker Swarm and Mesos keep improving functionality for managing containers.
The ability to integrate Docker with different cloud infrastructures (think load balancers, block storage) and enterprise-grade platforms for bare metal installations makes building complex services from basic container images a breeze.
Recognizing this growing popularity and importance, we introduced a continuously updated HiveMQ Docker repository on Docker Hub that can help you streamline your development and deployment efforts.
HiveMQ DockerHub Repository
The HiveMQ DockerHub repository is maintained by the HiveMQ team and provides a single location for select past and current versions of two container images: the HiveMQ DNS Discovery Image and the HiveMQ Base Image, which can be used to create custom HiveMQ images.
Tags
The following table lists all currently available tags:
Tag | Description |
---|---|
latest | This tag will always point to the latest version of the HiveMQ base image |
dns-latest | This tag will always point to the latest version of the HiveMQ DNS discovery image |
<version> | Base image providing the given version of the broker (e.g. 3.4.1) |
dns-<version> | DNS discovery image based on the given version base image |
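For example, to pull a specific DNS discovery version (the version number here is purely illustrative):
docker pull hivemq/hivemq3:dns-3.4.1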
HiveMQ DNS Discovery Image
The HiveMQ DNS Discovery Image comes with the HiveMQ DNS Discovery Plugin pre-installed and is optimized for use with orchestration software that provides a round-robin A record DNS service.
Run a HiveMQ Cluster in Docker
To form a HiveMQ cluster, the HiveMQ nodes must be able to find each other via cluster discovery. We introduced the DNS Discovery Plugin, which leverages a round-robin A record DNS service to achieve cluster discovery and is tailor-made for dockerized, orchestrated deployments. The upcoming sections show how to use the HiveMQ DNS Discovery Image with Docker Swarm and Kubernetes, respectively.
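To see what such a DNS service returns, you can query the round-robin A record yourself once a deployment is running. A minimal sketch using the Docker Swarm setup from the next section (tasks.hivemq and myNetwork are the names used there; busybox is simply a convenient image that ships nslookup):
# resolve the service name on the overlay network; one A record per running task
docker run --rm --network myNetwork busybox nslookup tasks.hivemq
This record set is exactly what the discovery plugin polls to find cluster members.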
Docker Swarm HiveMQ Cluster
The following illustrates how to create a containerized HiveMQ Cluster, using the HiveMQ DNS Discovery Image and Docker Swarm.
We do not recommend using Docker Swarm in production.
Start a single node Swarm cluster by running:
docker swarm init
Create an overlay network for the cluster nodes to communicate on:
docker network create -d overlay --attachable myNetwork
Create the HiveMQ service on the network, using the latest HiveMQ DNS Discovery Image:
docker service create \
  --replicas 3 --network myNetwork \
  --env HIVEMQ_DNS_DISCOVERY_ADDRESS=tasks.hivemq \
  --publish target=1883,published=1883 \
  --publish target=8080,published=8080 \
  --publish target=8000,published=8000,protocol=udp \
  --name hivemq \
  hivemq/hivemq3:dns-latest
This will provide a 3-node cluster with the MQTT (1883) and Web UI (8080) ports forwarded to the host network.
This means you can connect MQTT clients on port 1883. The connection will be forwarded to any of the cluster nodes.
The HiveMQ Web UI can be used in a single-node cluster. In clusters with multiple nodes, a sticky session for the HTTP requests cannot be upheld with this configuration, because the internal load balancer forwards requests in an alternating fashion. To use sticky sessions, the Docker Swarm Enterprise version is required.
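To verify that the cluster accepts connections, any MQTT client works. A quick smoke test with the Mosquitto command-line clients, assuming they are installed on the host:
# subscribe in the background, then publish to the same topic
mosquitto_sub -h localhost -p 1883 -t test &
mosquitto_pub -h localhost -p 1883 -t test -m "hello hivemq"
The published message should be printed by the subscriber, regardless of which cluster node each client was routed to.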
Managing the Cluster
To scale the cluster up to 5 nodes, run:
docker service scale hivemq=5
To remove the cluster, run:
docker service rm hivemq
To read the logs for all HiveMQ nodes in real time, use:
docker service logs hivemq -f
To get the log for a single node, first list the service's tasks:
docker service ps hivemq
Then print the log of a single task:
docker service logs <id>
where <id> is the ID listed in the output of the service ps command.
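Both steps can also be combined into a convenience one-liner (not part of the original instructions); docker service logs accepts a task ID directly, so the following tails the log of the first listed task:
docker service logs $(docker service ps hivemq -q | head -n 1) -f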
Kubernetes HiveMQ Cluster
On Kubernetes, an appropriate deployment configuration is necessary to utilize DNS discovery. A headless service will provide a DNS record for the broker that can be used for discovery.
Following is an example configuration for a HiveMQ cluster with 3 nodes using DNS discovery in a replication controller setup.
You need to replace HIVEMQ_DNS_DISCOVERY_ADDRESS according to your Kubernetes namespace and configured domain.
apiVersion: v1
kind: ReplicationController
metadata:
  name: hivemq-replica
spec:
  replicas: 3
  selector:
    app: hivemq-cluster
  template:
    metadata:
      name: hivemq-cluster
      labels:
        app: hivemq-cluster
    spec:
      containers:
      - name: hivemq-pods
        image: hivemq/hivemq3:dns-latest
        ports:
        - containerPort: 8080
          protocol: TCP
          name: web-ui
        - containerPort: 1883
          protocol: TCP
          name: mqtt
        env:
        - name: HIVEMQ_DNS_DISCOVERY_ADDRESS
          value: "hivemq-discovery.default.svc.cluster.local."
        - name: HIVEMQ_DNS_DISCOVERY_TIMEOUT
          value: "20"
        - name: HIVEMQ_DNS_DISCOVERY_INTERVAL
          value: "21"
        readinessProbe:
          tcpSocket:
            port: 1883
          initialDelaySeconds: 30
          periodSeconds: 60
          failureThreshold: 60
        livenessProbe:
          tcpSocket:
            port: 1883
          initialDelaySeconds: 30
          periodSeconds: 60
          failureThreshold: 60
---
kind: Service
apiVersion: v1
metadata:
  name: hivemq-discovery
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  selector:
    app: hivemq-cluster
  ports:
    - protocol: TCP
      port: 1883
      targetPort: 1883
  clusterIP: None
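Assuming you saved the configuration above as hivemq-k8s.yaml (the file name is arbitrary), you can create both objects and verify that the headless service resolves from inside the cluster:
kubectl create -f hivemq-k8s.yaml
# the discovery record should list one A record per ready pod
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup hivemq-discovery.default.svc.cluster.local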
Accessing the Web UI
To access the HiveMQ Web UI for a cluster running on Kubernetes, follow these steps:
Create a service exposing the Web UI of the HiveMQ service. Use the following YAML (web.yaml) definition:
kind: Service
apiVersion: v1
metadata:
  name: hivemq-web-ui
spec:
  selector:
    app: hivemq-cluster
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  sessionAffinity: ClientIP
  type: LoadBalancer
Create the service using kubectl create -f web.yaml
Depending on your Kubernetes environment provider, load balancers might not be available, or additional configuration may be necessary to access the Web UI.
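If no load balancer is available, kubectl port-forward offers a simple alternative for ad-hoc access to the service defined above:
kubectl port-forward service/hivemq-web-ui 8080:8080
The Web UI is then reachable at http://localhost:8080 for as long as the command runs.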
Connecting external clients on the MQTT port
To allow access for the MQTT port of a cluster running on Kubernetes, follow these steps:
Create a service exposing the MQTT port using a load balancer. You can use the following YAML (mqtt.yaml) definition:
kind: Service
apiVersion: v1
metadata:
  name: hivemq-mqtt
spec:
  selector:
    app: hivemq-cluster
  ports:
    - protocol: TCP
      port: 1883
      targetPort: 1883
  type: LoadBalancer
Create the service using kubectl create -f mqtt.yaml
You can now connect MQTT clients via the load balancer's external endpoint on port 1883.
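The external endpoint can be read from the service once the load balancer has been provisioned, for example:
# the EXTERNAL-IP column shows the endpoint to connect to
kubectl get service hivemq-mqtt
mosquitto_pub -h <EXTERNAL-IP> -p 1883 -t test -m "hello"
The mosquitto_pub call is only a sketch; substitute <EXTERNAL-IP> with the address from the output above and use whatever MQTT client you prefer.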
Please check out our HiveMQ and Kubernetes blog post for more information and resources.
Environment Variables
The following table lists all available environment variables that can be used for custom configuration of the HiveMQ DNS Discovery Docker container:
Environment Variable | Default Value | Description |
---|---|---|
HIVEMQ_DNS_DISCOVERY_ADDRESS | - | Address to get the A record that will be used for cluster discovery |
HIVEMQ_DNS_DISCOVERY_INTERVAL | 31 | Interval in seconds after which to search for new nodes |
HIVEMQ_DNS_DISCOVERY_TIMEOUT | 30 | Wait time for DNS resolution in seconds |
HIVEMQ_CLUSTER_PORT | 8000 | Port used for cluster transport |
HIVEMQ_LICENSE | - | base64-encoded license file to use for the broker |
HIVEMQ_BIND_ADDRESS | - | Cluster transport bind address; only necessary if the default policy (resolve hostname) fails |
HIVEMQ_WEB_UI_USER | admin | Username for the Web UI login |
HIVEMQ_WEB_UI_PASSWORD | SHA256 hash of hivemq | Password for the Web UI login |
Adding a HiveMQ License
To add your HiveMQ license to the Docker container, set the container's environment variable HIVEMQ_LICENSE to the base64-encoded string of your license file.
To base64 encode your license file as a string, run this command:
cat path/to/your/hivemq-license.lic | base64
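Putting it together, a single-node run with the license passed in could look like this (the file path is illustrative; on GNU coreutils, base64 -w 0 avoids line wrapping in the encoded output):
docker run -p 1883:1883 -p 8080:8080 \
  -e HIVEMQ_LICENSE="$(base64 -w 0 path/to/your/hivemq-license.lic)" \
  hivemq/hivemq3:latest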
Changing User Credentials for the HiveMQ Web UI
By default, the HiveMQ Web UI login credentials are admin:hivemq. If you wish to change those credentials, you can use the environment variables HIVEMQ_WEB_UI_USER and HIVEMQ_WEB_UI_PASSWORD.
A SHA256-hashed value of your desired password is expected. See Generate a SHA256 Password to read more about how to generate the password hash.
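On most Linux systems, a SHA256 hash can be produced with sha256sum; this is only a sketch, so consult the linked guide for the exact format HiveMQ expects:
# prints the hash followed by a dash; use only the hex part
echo -n "your-secret-password" | sha256sum
The resulting hash is then passed via -e HIVEMQ_WEB_UI_PASSWORD=<hash> when starting the container.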
Overriding the Bind Address
By default, this image attempts to set the bind address using the container's ${HOSTNAME} to ensure that HiveMQ binds the cluster connection to the correct interface, so a cluster can be formed.
This behavior can be overridden by setting any value for the environment variable HIVEMQ_BIND_ADDRESS. The broker will then attempt to use the given value as the bind address instead.
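For example, to bind the cluster transport to a specific interface address (the address shown is illustrative):
docker run -e HIVEMQ_BIND_ADDRESS=10.0.0.5 hivemq/hivemq3:dns-latest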
HiveMQ Base Image
The HiveMQ base image installs and optimizes the HiveMQ installation for execution as a container.
It is meant to be used to build custom images or to run a dockerized HiveMQ locally for testing purposes.
Run a HiveMQ Single Node in Docker
To start a single HiveMQ instance and allow access to the MQTT port as well as the Web UI, get Docker and run the following command:
docker run -p 8080:8080 -p 1883:1883 hivemq/hivemq3:latest
You now have a dockerized local HiveMQ and can connect to the broker (1883) or the Web UI (8080) via the respective ports.
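A quick way to check that the broker is up, assuming curl is available:
curl -I http://localhost:8080
Once HiveMQ has finished starting, this returns the HTTP response headers of the Web UI.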
Creating a custom HiveMQ Docker Image
You can build your own image from the provided base image and use any of the provided HiveMQ versions. Here is an example Dockerfile; the numbered markers are explained below:
# (1)
ARG TAG=latest
FROM hivemq/hivemq3:${TAG}
# (2)
ENV MY_CUSTOM_PLUGIN_ENV myvalue
# (3)
ENV HIVEMQ_CLUSTER_PORT 8000
# (4)
COPY --chown=hivemq:hivemq your-license.lic /opt/hivemq/license/your-license.lic
COPY --chown=hivemq:hivemq myconfig.xml /opt/hivemq/conf/config.xml
COPY --chown=hivemq:hivemq myplugin.jar /opt/hivemq/plugins/myplugin.jar
COPY --chown=hivemq:hivemq myentrypoint.sh /opt/myentrypoint.sh
# (5)
RUN chmod +x /opt/myentrypoint.sh
# (6)
ENTRYPOINT ["/opt/myentrypoint.sh"]
This custom image:
1. Uses the hivemq/hivemq3:latest image as a base, with a build argument that (optionally) specifies which base tag to use.
2. Defines an environment variable for the plugin.
3. Defines an environment variable that is substituted in the HiveMQ configuration file on start up. For details, see Using environment variables for configuration.
4. Copies required files such as a valid HiveMQ license file, a customized configuration, a custom plugin file, and a custom entry point to the corresponding folders, and applies proper ownership inside the container.
5. Sets the custom entry point as executable.
6. Defines the entry point for the image. This definition is optional, but it allows you to run additional commands or programs (for configuration purposes) before you start the actual HiveMQ instance.
To build an image from this Dockerfile, run:
docker build --build-arg TAG=3.4.2 -t hivemq-myplugin .
This builds an image based on version 3.4.2 of the HiveMQ base image, using the current path as the build context. The finished image is tagged locally as hivemq-myplugin:latest.
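You can then run the custom image just like the base image:
docker run -p 1883:1883 -p 8080:8080 hivemq-myplugin:latest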