HiveMQ is a high-performance MQTT broker designed to run on server hardware. While HiveMQ also runs on embedded devices, its full potential is unleashed on server hardware.
Production: Linux is the only supported operating system for production environments. CentOS 7 or other RHEL-based distributions are recommended.
Development: Windows, Mac OS X or Linux.
Production: Oracle JRE or OpenJDK JRE, version 7 or newer, is required.
Development: Oracle JDK or OpenJDK JDK, version 7 or newer, is recommended.
When JRE 9 is used, please update to the latest version.
System resources
HiveMQ scales with your system resources. If you scale up to more CPUs and RAM, HiveMQ delivers higher throughput and lower latencies. The performance of the persistence features is bound to the I/O performance of the underlying system.
The following chapters describe how to optimize your Linux configuration.
If HiveMQ is running on a Linux OS, please make sure that the maximum number of files the HiveMQ process may open is sufficient.
An easy way to do this is to add the following lines to the /etc/security/limits.conf file:
hivemq hard nofile 1000000
hivemq soft nofile 1000000
root hard nofile 1000000
root soft nofile 1000000
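The new limits apply only to sessions started after the change. As a quick sanity check, the current limit can be read back from the shell, or from a running process via procfs (the pgrep pattern below is an assumption; adjust it to how HiveMQ is started on your system):

```shell
# Soft limit on open files for the current shell session.
ulimit -n

# Limit of a running process, read from procfs (PID lookup is hypothetical;
# adapt the pattern to your HiveMQ start script):
# grep "Max open files" /proc/$(pgrep -f hivemq | head -n 1)/limits
```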
On systems with many connections it may also be necessary to allow the system to open more sockets and to tweak some TCP settings.
To do this, add the following lines to the /etc/sysctl.conf file:
# The time, in seconds, a connection stays in the FIN-WAIT-2 state before it is aborted.
net.ipv4.tcp_fin_timeout = 30
# The maximum number of file handles that can be allocated.
fs.file-max = 5097152
# Enable fast recycling of TIME-WAIT sockets.
# (Caution: known to break clients behind NAT; removed entirely in Linux 4.12.)
net.ipv4.tcp_tw_recycle = 1
# Allow TIME-WAIT sockets to be reused for new connections when it is safe from the protocol viewpoint.
net.ipv4.tcp_tw_reuse = 1
# The default size of receive buffers used by sockets.
net.core.rmem_default = 524288
# The default size of send buffers used by sockets.
net.core.wmem_default = 524288
# The maximum size of receive buffers used by sockets.
net.core.rmem_max = 67108864
# The maximum size of send buffers used by sockets.
net.core.wmem_max = 67108864
# The size of the receive buffer for each TCP connection (min, default, max).
net.ipv4.tcp_rmem = 4096 87380 16777216
# The size of the send buffer for each TCP connection (min, default, max).
net.ipv4.tcp_wmem = 4096 65536 16777216
For the changes to take effect, run sysctl -p or restart the system.
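To confirm that the new values are active, each key can be read back individually; every sysctl key is mirrored as a file under /proc/sys, with the dots replaced by slashes:

```shell
# Read back two of the settings from above via procfs.
cat /proc/sys/net/ipv4/tcp_fin_timeout
cat /proc/sys/net/core/rmem_max

# Equivalent, if the sysctl utility is installed:
# sysctl net.ipv4.tcp_fin_timeout net.core.rmem_max
```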
For features like Time-To-Live (TTL), all cluster nodes must share a consistent, accurate time. With TTL, clock de-synchronisation could lead to inconsistent data, as messages could be marked as expired on one node while they are still valid on another.
There are different solutions to this synchronisation problem. One is NTP, the Network Time Protocol, which uses external sources such as atomic clocks or NTP servers to obtain an accurate time. On Unix operating systems, NTP is implemented by the ntpd daemon.
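A minimal ntpd configuration might look like the sketch below. The server names are the public NTP pool, used here only as placeholder examples; what matters for a cluster is that all nodes reference the same set of time sources.

```
# /etc/ntp.conf (excerpt) — a sketch, not a complete configuration.
# Pool servers are examples; prefer sources close to your cluster.
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
# Remember measured clock drift across restarts.
driftfile /var/lib/ntp/drift
```

After starting ntpd, ntpq -p lists the configured peers and shows which source the daemon is currently synchronised to.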