System Requirements (HiveMQ 3.4)

This documentation is for the HiveMQ 3.4 legacy version. For up-to-date information on the current version of HiveMQ, please switch to the latest version of our HiveMQ Platform documentation and update your bookmarks as needed.

HiveMQ is a high-performance MQTT broker designed to run on server hardware. While HiveMQ also runs on embedded devices, its full potential is unleashed on server hardware.

Supported Operating Systems

  • Production: Linux is the only supported operating system for production environments. CentOS 7 or another RHEL-based distribution is recommended.

  • Development: Windows, Mac OS X or Linux.

Minimum Hardware Requirements

  • At least 4GB of RAM.

  • 4 or more CPUs.

  • 100GB or more of free disk space.
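
To check whether a host meets these minimums on Linux, the standard command-line tools can be used. A quick sketch follows; the mount point in the disk check is only an example and should be adjusted to wherever HiveMQ stores its data:

free -h
nproc
df -h /opt

free -h reports the installed RAM, nproc the number of available CPU cores, and df -h the free disk space on the given partition.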

Environment

  • Production: An Oracle JRE or OpenJDK JRE, version 7 or newer, is required.

  • Development: An Oracle JDK or OpenJDK JDK, version 7 or newer, is recommended.

If you use JRE 9, please make sure it is updated to the latest available release.
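
To check which Java runtime is installed, run:

java -version

The output includes the Java version; make sure it reports version 7 or newer.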

System resources

HiveMQ scales with your system resources: if you scale up to more CPUs and RAM, HiveMQ delivers higher throughput and lower latencies. The performance of the persistence features is bound by the I/O performance of the underlying system.
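
Because the persistence features are I/O bound, it can be useful to measure the disk performance of a production host before load testing. One possible way is the iostat tool from the sysstat package, assuming it is installed; the exact column layout varies between versions:

iostat -x 1 5

This prints extended device statistics five times at one-second intervals. Sustained high utilisation or long wait times point to a disk bottleneck.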

Linux Configuration Optimizations

The following sections describe how to optimize your Linux configuration.

Open file limit

When HiveMQ is running on Linux, make sure that the maximum number of files the HiveMQ process may open is sufficient. An easy way to do this is to add the following lines to the /etc/security/limits.conf file:

hivemq  hard    nofile  1000000
hivemq  soft    nofile  1000000
root    hard    nofile  1000000
root    soft    nofile  1000000
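
These limits assume that HiveMQ runs as the user hivemq; adjust the user name if your installation differs. Because limits.conf is evaluated at login, the new limit only applies to sessions started afterwards. The effective limit can then be checked like this:

su - hivemq -c 'ulimit -n'

The command should now report 1000000.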

Tweaking TCP

On systems with many concurrent connections, it may also be necessary to allow the system to open more sockets and to tweak some TCP settings. To do this, add the following lines to the /etc/sysctl.conf file:

# The time, in seconds, an orphaned connection remains in the FIN-WAIT-2 state before it is aborted at the local end.
net.ipv4.tcp_fin_timeout = 30

# The maximum file handles that can be allocated.
fs.file-max = 5097152

# Enable fast recycling of sockets in the TIME-WAIT state.
net.ipv4.tcp_tw_recycle = 1

# Allow TIME-WAIT sockets to be reused for new connections when it is safe from a protocol viewpoint.
net.ipv4.tcp_tw_reuse = 1

# The default size of receive buffers used by sockets.
net.core.rmem_default = 524288

# The default size of send buffers used by sockets.
net.core.wmem_default = 524288

# The maximum size of receive buffers used by sockets.
net.core.rmem_max = 67108864

# The maximum size of send buffers used by sockets.
net.core.wmem_max = 67108864

# The size of the receive buffer for each TCP connection. (min, default, max)
net.ipv4.tcp_rmem = 4096 87380 16777216

# The size of the send buffer for each TCP connection. (min, default, max)
net.ipv4.tcp_wmem = 4096 65536 16777216

For the changes to take effect, run sysctl -p as root or restart the system.
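
To verify that a particular setting is active, individual values can be queried with sysctl, for example:

sysctl net.ipv4.tcp_tw_reuse

This prints the currently active value of the parameter.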

Time Synchronisation

For features like Time-To-Live (TTL), it is necessary that all cluster nodes share the same consistent, accurate time. In the case of TTL, a time de-synchronisation could lead to inconsistent data, as messages could be marked as expired on one node while they are still valid on another.

There are different solutions for this synchronisation problem. One possible solution is NTP, the Network Time Protocol, which uses external sources such as atomic clocks or NTP servers to obtain accurate time. On Unix operating systems, NTP is implemented by the ntpd daemon.
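
On a CentOS 7 node, for example, ntpd can be installed and enabled as follows; the package and service names may differ on other distributions:

yum install ntp
systemctl enable ntpd
systemctl start ntpd

Once the daemon is running, ntpq -p lists the configured time sources and shows whether the node is synchronised.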