Logging
HiveMQ implements a powerful Logback logging system that helps you monitor, diagnose, and troubleshoot your applications. The default HiveMQ logging configuration is suitable for most use cases. For information on how to change the default behavior, see Adjust Logging. For information on how to generate machine-readable structured log files that can be easily shipped to the log management tool of your choice, see Machine-Readable Log Files.
HiveMQ writes all log data to the log folder of your HiveMQ installation. The current log file is named hivemq.log. The standard HiveMQ logging configuration uses a log rolling policy that archives the log file every day at midnight.
HiveMQ archives the daily log files with the name hivemq.$DATE.log and stores each of the archived log files for 30 days.
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${hivemq.log.folder}/hivemq.log</file>
<append>true</append>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- daily rollover -->
<fileNamePattern>${hivemq.log.folder}/hivemq.%d{yyyy-MM-dd}.log</fileNamePattern>
<!-- keep 30 days' worth of history -->
<maxHistory>30</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%-30(%d %level)- %msg%n%ex</pattern>
</encoder>
</appender>
By default, HiveMQ deletes archived log files after 30 days. If you want to retain your log files longer, adjust the <maxHistory> setting or back up the files manually.
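For example, a minimal sketch of the rolling policy from the appender above with retention extended to 90 days (an illustrative value; choose one that fits your retention policy):
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <fileNamePattern>${hivemq.log.folder}/hivemq.%d{yyyy-MM-dd}.log</fileNamePattern>
    <!-- keep 90 days' worth of archived log files instead of the default 30 -->
    <maxHistory>90</maxHistory>
</rollingPolicy>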
log folder content:
hivemq.log
hivemq.2018-05-11.log
hivemq.2018-05-12.log
hivemq.2018-05-13.log
event.log
event-1.log.gz
event-2.log.gz
event-3.log.gz
event-4.log.gz
To protect against distributed denial-of-service (DDoS) attacks, the default setting of HiveMQ does not log malicious client behavior. DDoS attacks can overload your system with superfluous log entries. If you want HiveMQ to log these entries, you can set your log level to DEBUG. HiveMQ logs many entries on the DEBUG level. When you select the DEBUG log level, we recommend that you monitor the size of your log file frequently.
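For example, a minimal sketch, assuming the FILE and CONSOLE appenders of the default HiveMQ logback.xml, that raises the overall log level to DEBUG:
<!-- illustrative only: DEBUG produces large log files, so monitor disk usage -->
<root level="DEBUG">
    <appender-ref ref="FILE"/>
    <appender-ref ref="CONSOLE"/>
</root>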
HiveMQ stores information for key events in a separate log file. For more information, see Event Log.
Logging Levels
The HiveMQ logging subsystem uses the standard Logback Java logging framework. Logback offers a variety of features that let you tailor your HiveMQ logging to meet your individual needs.
The Logback library defines five log levels that you can select from:

- TRACE: Logs highly detailed information about a wide range of HiveMQ behaviors. For example, TRACE - Metrics timer [com.hivemq.information.executor.idle] added. The TRACE level logs the largest amount of information and is not recommended for production environments.
- DEBUG: Logs information about significant, normal, and insignificant HiveMQ behaviors. For example, DEBUG - Setting shared subscriptions enabled to true. The DEBUG level logs a large amount of information and is not recommended for production environments.
- INFO: Logs information about runtime events of interest such as HiveMQ startup or shutdown. For example, INFO - Shutting down extension system or INFO - Started TCP Listener on address 0.0.0.0 and on port 1883. INFO is the default log level of HiveMQ.
- WARN: Logs information about undesirable behaviors of HiveMQ that have not yet limited execution of normal operations. For example, WARN - The configured maximum qos (3) does not exist. It was set to (2) instead.
- ERROR: Logs information about unexpected conditions and tasks that HiveMQ is unable to complete successfully. For example, ERROR - Could not read the configuration file /opt/hivemq/config.xml. Using default config.
Each log level has a corresponding logging method: trace(), debug(), info(), warn(), error().
If no log level is assigned, the logger inherits the level of its closest ancestor. The root logger defaults to DEBUG.
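A minimal sketch of level inheritance, using hypothetical logger names for illustration:
<configuration>
    <!-- explicit level: this logger and its children log at TRACE -->
    <logger name="com.example.parent" level="TRACE"/>
    <!-- a logger named com.example.parent.child has no level assigned,
         so it inherits TRACE from its closest ancestor -->
    <root level="INFO">
        <appender-ref ref="FILE"/>
    </root>
</configuration>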
By default, HiveMQ scans the logback.xml file in the conf folder for changes every 60 seconds. HiveMQ applies the changes that you make to the logback.xml file during runtime. You do not need to restart HiveMQ.
To change the frequency with which HiveMQ checks the logback.xml file, adjust the scanPeriod setting. To turn off automatic scanning, set scan="false".
<configuration scan="true" scanPeriod="60 seconds">
If you disable automatic scanning, you must restart HiveMQ to apply changes that you make to the logback.xml file.
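For example, to disable scanning entirely, set the scan attribute on the configuration element:
<configuration scan="false">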
Time-based Policy with Total Size Limit
To limit the total combined size of your log files as well as the number of days the files are stored, add a <totalSizeCap> property to your logging configuration.
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${hivemq.log.folder}/hivemq.log</file>
<append>true</append>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- daily rollover -->
<fileNamePattern>${hivemq.log.folder}/hivemq.%d{yyyy-MM-dd}.log</fileNamePattern>
<!-- keep 30 days' worth of history -->
<maxHistory>30</maxHistory>
<!-- maximum combined log file size is 30GB -->
<totalSizeCap>30GB</totalSizeCap>
</rollingPolicy>
<encoder>
<pattern>%-30(%d %level)- %msg%n%ex</pattern>
</encoder>
</appender>
Size and Time-based Policy with Total Size Limit
To archive your log files by date, limit the size of each log file, and set an overall limit on the combined size of your log files, use a SizeAndTimeBasedRollingPolicy in your logging configuration. This configuration is useful for a variety of use cases. For example, if your post-processing tools impose size limits on log files.
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
...
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<!-- rollover daily -->
<fileNamePattern>mylog-%d{yyyy-MM-dd}.%i.txt</fileNamePattern>
<!-- maximum size of a single log file is 100MB -->
<maxFileSize>100MB</maxFileSize>
<!-- keep 30 days worth of history -->
<maxHistory>30</maxHistory>
<!-- maximum combined log file size is 30GB -->
<totalSizeCap>30GB</totalSizeCap>
</rollingPolicy>
<encoder>
<pattern>%msg%n</pattern>
</encoder>
</appender>
The %i token in <fileNamePattern> is mandatory when you use a SizeAndTimeBasedRollingPolicy.
If you only want to limit the combined size of log archives and do not need to limit the size of the individual log files, you can simply add a totalSizeCap property to your TimeBasedRollingPolicy.
Event Log
In addition to the information that HiveMQ provides in the hivemq.log file, HiveMQ records key events to a separate event.log file.
To record events, you must use the DEBUG log level.
The HiveMQ event log provides information for the following events: client connect, client disconnect, dropped message, and client session expiry.
The current event log file is named event.log. The standard HiveMQ logging configuration uses a log rolling policy that archives the event log file when the size of the file reaches 100 MB.
HiveMQ archives event log files with the name event-$COUNT.log.gz. A maximum of 5 event log files can be archived at the same time. When the limit is reached, HiveMQ deletes the oldest file.
event.log configuration
<appender name="EVENT-FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${hivemq.log.folder}/event.log</file>
    <append>true</append>
    <encoder>
        <pattern>%-24(%d)- %msg%n%ex</pattern>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
        <fileNamePattern>${hivemq.log.folder}/event-%i.log.gz</fileNamePattern>
        <minIndex>1</minIndex>
        <maxIndex>5</maxIndex>
    </rollingPolicy>
    <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
        <maxFileSize>100MB</maxFileSize>
    </triggeringPolicy>
</appender>
Client Connect
When an MQTT client successfully connects to the broker, HiveMQ logs an entry similar to the following statement:
2020-08-27 09:26:32,244 - Client ID: testClient, IP: 127.0.0.1, Clean Start: true, Session Expiry: 0 connected.
Client Disconnect
In MQTT 5, DISCONNECT packets can include an optional disconnect reason string. When an MQTT 5 client provides a reason string in the DISCONNECT packet, the HiveMQ log entry includes the reason string.
The log entry for a client disconnect that lacks a reason string is similar to the following example:
2018-08-14 08:41:37,576 - Client ID: testClient, IP:127.0.0.1 disconnected gracefully.
The log entry for an MQTT 5 client that provides a disconnect reason string is similar to this example:
2018-08-14 08:44:19,172 - Client ID: testClient, IP:127.0.0.1 disconnected gracefully. Reason given by client: This is my reason string
Dropped Message
In MQTT, dropped messages are messages that the MQTT broker does not publish. Dropped messages can occur for various reasons. For more information, see Dropped Messages.
The log entry for a dropped message is similar to this example:
2020-08-27 09:37:12,661 - Outgoing publish message was dropped. Receiving client: subscriber1, topic: test, qos: 1, reason: The client message queue is full.
Client Session Expiry
The session expiry defines the length of time in seconds that can pass after the client disconnects until the session of the client expires. If a client with the same client ID reconnects before the defined length of time elapses, the session expiry timer resets. For more information, see Session Expiry.
When HiveMQ removes an expired client session from its persistence, HiveMQ logs an entry similar to the following statement:
2018-08-09 17:39:39,776 - Client ID: subscriber1 session has expired at 2018-08-09 17:39:26. All persistent data for this client has been removed.
The timestamp at the beginning of the expiry log entry shows when HiveMQ removed the expired session from the persistence and created the log output. The second timestamp shows the moment when the session expired.
Machine-Readable Log Files
Machine-readable log files give you the ability to use tools that can automatically parse, process, and analyze the abundant information that your HiveMQ logs contain. Centralized log management and analytics solutions such as Elastic Stack, Loki, Splunk, and Datadog can help you gain the visibility you need to ensure that your applications are available and performant at all times. These types of log management software rely on log files that are structured in a standardized log format. Structured logs use a defined format to add information to logs and make it easier to interact with and filter the content of the log files in various ways.
JSON Logging
HiveMQ gives you the option to generate all of your HiveMQ log files and HiveMQ Enterprise Extension log files as structured logs in the highly machine-readable JSON (JavaScript Object Notation) data format. Logging to JSON is a widely used standard for log management and monitoring. JSON-formatted data can be parsed in nearly all programming languages and is simple to read and write for both humans and machines. Additionally, you can easily enrich your machine-readable JSON HiveMQ log files with additional content and metadata. The flexibility of the JSON format allows you to create field-rich databases that fit your individual use cases. For example, you can quickly parse specific data from the ERROR log level to expedite your troubleshooting.
JSON log files are richer than other log files and take up more space. When logging to JSON, ensure that you have adequate storage space and an appropriate log rotation and archiving policy in place.
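The JSON logging configuration shown in the next section uses the net.logstash.logback LogstashEncoder. With that encoder, you can optionally attach static metadata to every log entry. A minimal sketch, using illustrative field names and values:
<encoder class="net.logstash.logback.encoder.LogstashEncoder">
    <!-- hypothetical example: adds these key-value pairs to every JSON log entry -->
    <customFields>{"environment":"production","datacenter":"eu-central"}</customFields>
</encoder>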
JSON Logging Configuration
To configure JSON logging for all of your HiveMQ log files, update the logback.xml file in the conf folder of your HiveMQ instance as follows:

- Go to the conf folder of your HiveMQ home directory and rename your current logback.xml file to logback-standard.xml.
- To use the example logback.xml file and enable JSON logging, copy the logback.xml file that is provided in the conf/examples/logging/json subfolder of your HiveMQ instance to the conf folder of your HiveMQ home directory.
<configuration scan="true" scanPeriod="60 seconds">

    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <!-- the console output is NOT formatted as JSON -->
        <encoder>
            <pattern>%-30(%d %level)- %msg%n%ex</pattern>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${hivemq.log.folder}/hivemq.log</file>
        <append>true</append>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- daily rollover -->
            <fileNamePattern>${hivemq.log.folder}/hivemq.%d{yyyy-MM-dd}.log</fileNamePattern>
            <!-- keep 30 days' worth of history -->
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <!-- this encoder handles the JSON encoding -->
        <encoder name="JSON-ENCODER" class="net.logstash.logback.encoder.LogstashEncoder">
            <timeZone>UTC</timeZone>
            <timestampPattern>yyyy-MM-dd'T'HH:mm:ss.SSS'Z'</timestampPattern>
            <fieldNames>
                <timestamp>time</timestamp>
                <logger>[ignore]</logger>
                <version>[ignore]</version>
                <levelValue>[ignore]</levelValue>
                <thread>thread</thread>
            </fieldNames>
        </encoder>
    </appender>

    <appender name="MIGRATIONS-FILE" class="ch.qos.logback.core.FileAppender">
        <file>${hivemq.log.folder}/migration.log</file>
        <append>true</append>
        <!-- this encoder handles the JSON encoding -->
        <encoder name="JSON-ENCODER" class="net.logstash.logback.encoder.LogstashEncoder">
            <timeZone>UTC</timeZone>
            <timestampPattern>yyyy-MM-dd'T'HH:mm:ss.SSS'Z'</timestampPattern>
            <fieldNames>
                <timestamp>time</timestamp>
                <logger>[ignore]</logger>
                <version>[ignore]</version>
                <levelValue>[ignore]</levelValue>
                <thread>thread</thread>
            </fieldNames>
        </encoder>
    </appender>

    <appender name="EVENT-FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${hivemq.log.folder}/event.log</file>
        <append>true</append>
        <!-- this encoder handles the JSON encoding -->
        <encoder name="JSON-ENCODER" class="net.logstash.logback.encoder.LogstashEncoder">
            <timeZone>UTC</timeZone>
            <timestampPattern>yyyy-MM-dd'T'HH:mm:ss.SSS'Z'</timestampPattern>
            <fieldNames>
                <timestamp>time</timestamp>
                <logger>[ignore]</logger>
                <version>[ignore]</version>
                <levelValue>[ignore]</levelValue>
                <thread>thread</thread>
            </fieldNames>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
            <fileNamePattern>${hivemq.log.folder}/event-%i.log.gz</fileNamePattern>
            <minIndex>1</minIndex>
            <maxIndex>5</maxIndex>
        </rollingPolicy>
        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
            <maxFileSize>100MB</maxFileSize>
        </triggeringPolicy>
    </appender>

    <logger name="event.client-connected" level="DEBUG" additivity="false">
        <appender-ref ref="EVENT-FILE"/>
    </logger>

    <logger name="event.client-disconnected" level="DEBUG" additivity="false">
        <appender-ref ref="EVENT-FILE"/>
    </logger>

    <logger name="event.message-dropped" level="DEBUG" additivity="false">
        <appender-ref ref="EVENT-FILE"/>
    </logger>

    <logger name="event.client-session-expired" level="DEBUG" additivity="false">
        <appender-ref ref="EVENT-FILE"/>
    </logger>

    <logger name="migrations" level="DEBUG" additivity="false">
        <appender-ref ref="MIGRATIONS-FILE"/>
    </logger>

    <root level="${HIVEMQ_LOG_LEVEL:-INFO}">
        <appender-ref ref="FILE"/>
        <appender-ref ref="CONSOLE"/>
    </root>

    <logger name="jetbrains.exodus" level="WARN"/>
    <logger name="org.eclipse.jetty" level="ERROR"/>
    <logger name="com.google.common.util.concurrent.Futures.CombinedFuture" level="OFF"/>
    <logger name="oshi" level="ERROR"/>
    <logger name="org.jgroups" level="INFO"/>

</configuration>
Available HiveMQ Event Log Fields for JSON Logging
When JSON logging is configured in the logback.xml file of your HiveMQ configuration, the event log statements for HiveMQ and all HiveMQ Enterprise Extensions are enriched with additional fields of contextual information.
The time, event, hivemqId, and message fields are provided for every event. All other fields are provided when the information is applicable for the particular event. Fields that are not applicable within the current context are not printed to the log file.
Field | Description
---|---
time | The UTC timestamp of the event in RFC 3339 format
event | The type of event
hivemqId | The ID of the HiveMQ node on which the event occurred
message | A short description of the event
clientId | The MQTT client identifier of the client
topic | The MQTT topic
reason | A reason string with information on why the event occurred
ip | The IP address of the MQTT client
sessionExpiryInterval | The session expiry interval that is set for the MQTT client
reasonCode | The MQTT reason code
qos | The MQTT quality of service level
cleanStart | Indicates whether the clean start flag is set for the session
consumerId | The extension consumer ID
expiredAt | The UTC timestamp in RFC 3339 format when the client session expired
gracefulDisconnect | Indicates whether the client disconnect was graceful or ungraceful
{
"time": "2015-05-28T14:07:17Z",
"event": "OUTBOUND_PUBLISH_DROPPED",
"hivemqId": "35yIM",
"clientId": "my-receiving-client",
"topic": "some-busy-topic",
"qos": 2,
"reason": "The client message queue is full",
"message": "Outgoing publish message was dropped. Receiving client: my-receiving-client, topic: some-busy-topic, qos: 2, reason: The client message queue is full."
}
{
"time": "2015-05-28T14:07:17Z",
"event": "INBOUND_PUBLISH_DROPPED",
"hivemqId": "35yIM",
"clientId": "backend-client",
"topic": "my/topic",
"qos": 1,
"reason": "Extension prevented onward delivery of inbound PUBLISH",
"message": "Incoming publish message was dropped. Receiving client: backend-client, topic: my/topic, qos: 1, reason: Extension prevented onward delivery of inbound PUBLISH."
}
{
"time": "2015-05-28T14:07:17Z",
"event": "OUTBOUND_PUBLISH_DROPPED_SHARED",
"hivemqId": "35yIM",
"topic": "$share/group/my/topic",
"qos": 1,
"reason": "The shared subscription message queue is full",
"message": "Outgoing publish message was dropped. Receiving shared subscription: $share/group, topic: my/topic, qos: 0, reason: The shared subscription message queue is full."
}
{
"time": "2015-05-28T14:07:17Z",
"event": "OUTBOUND_PUBLISH_DROPPED_CONSUMER",
"hivemqId": "35yIM",
"consumerId": "kafka-consumer-1",
"topic": "my/topic",
"qos": 1,
"reason": "The consumer message queue is full",
"message": "Outgoing publish message was dropped. Receiving consumer: {}, topic: my/topic, qos: 0, reason: The shared subscription message queue is full."
}
{
"time": "2015-05-28T14:07:17Z",
"event": "OUTBOUND_PACKET_DROPPED",
"hivemqId": "35yIM",
"clientId": "backend-client",
"reason": "Maximum packet size exceeded, size: 512 bytes, max: 500 bytes.",
"message": "Outgoing MQTT packet was dropped. Receiving client: backend-client, messageType: PUBACK, reason: Maximum packet size exceeded, size: 512 bytes, max: 500 bytes."
}
{
"time": "2015-05-28T14:07:17Z",
"event": "CLIENT_CONNECTED",
"hivemqId": "35yIM",
"clientId": "backend-client",
"ip": "127.0.0.1",
"sessionExpiryInterval": 1234,
"cleanStart": true,
"message": "Client ID: backend-client, IP: 127.0.0.1, Clean Start: true, Session Expiry: 1234 connected."
}
{
"time": "2015-05-28T14:07:17Z",
"event": "CLIENT_DISCONNECTED",
"hivemqId": "35yIM",
"clientId": "backend-client",
"ip": "127.0.0.1",
"gracefulDisconnect": true,
"reason": "Reauthenticate after 24h",
"message": "Client ID: backend-client, IP: 127.0.0.1 disconnected gracefully. Reason given by client: Reauthenticate after 24h"
}
{
"time": "2015-05-28T14:07:17Z",
"event": "CLIENT_DISCONNECTED",
"hivemqId": "35yIM",
"ip": "127.0.0.1",
"gracefulDisconnect": false,
"message": "Client ID: UNKNOWN, IP: 127.0.0.1 disconnected gracefully."
}
{
"time": "2015-05-28T14:07:17Z",
"event": "CLIENT_DISCONNECTED_BY_SERVER",
"hivemqId": "35yIM",
"clientId": "backend-client",
"ip": "127.0.0.1",
"reason": "Another client connected with the same clientId",
"message": "Client ID: backend-client, IP: 127.0.0.1 was disconnected. reason: Another client connected with the same clientId."
}
{
"time": "2015-05-28T14:07:17Z",
"event": "INBOUND_AUTH",
"hivemqId": "35yIM",
"clientId": "backend-client",
"ip": "127.0.0.1",
"reasonCode": "CONTINUE_AUTHENTICATION",
"message": "Sent AUTH to Client ID: backend-client, IP: 127.0.0.1, reason code: CONTINUE_AUTHENTICATION."
}
{
"time": "2015-05-28T14:07:17Z",
"event": "OUTBOUND_AUTH",
"hivemqId": "35yIM",
"clientId": "backend-client",
"ip": "127.0.0.1",
"reasonCode": "SUCCESS",
"message": "Received AUTH from Client ID: backend-client, IP: 127.0.0.1, reason code: SUCCESS."
}
{
"time": "2015-05-28T14:07:17Z",
"event": "CLIENT_SESSION_EXPIRED",
"hivemqId": "35yIM",
"clientId": "backend-client",
"expiredAt": "2015-05-28T10:00:00Z",
"message": "Client ID: backend-client session has expired at 2015-05-28 10:00:00. All persistent data for this client has been removed."
}
We use pretty printing for the example log statements to add line breaks and indentations for increased human readability. The actual log files are not formatted in this manner.
Available HiveMQ and HiveMQ Migration Log Fields for JSON Logging
When JSON logging is configured in the logback.xml file of your HiveMQ configuration, the hivemq.log and migration.log statements for HiveMQ and all HiveMQ Enterprise Extensions are enriched with additional contextual attributes.
The time, level, hivemqId, and message fields are provided for every event. All other fields are provided when the information is applicable for the particular event. Fields that are not applicable within the current context are not printed to the log file.
Field | Description
---|---
time | The UTC timestamp of the log entry in RFC 3339 format.
level | The log level of the entry. For example, ERROR, WARN, INFO, DEBUG, or TRACE.
hivemqId | The ID of the HiveMQ node on which the event occurred.
message | A short description of the event.
clientId | The MQTT client identifier of the client.
topic | The MQTT topic.
reason | A reason string with information on why the event occurred.
ip | The IP address of the MQTT client.
{
"time": "2015-05-28T14:07:17Z",
"level": "INFO",
"thread": "cluster-join-exec-1",
"message": "Finished cluster replication successfully in 4113ms.",
"hivemqId": "35yIM"
}
{
"time": "2015-05-28T14:07:17Z",
"level": "INFO",
"hivemqId": "35yIM",
"thread": "cluster-join-exec-2",
"message": "Cluster nodes found by discovery: [35yIM|19] (4) [35yIM, P272c, GzYss, Wcie9]."
}
{
"time": "2015-05-28T14:07:17Z",
"level": "DEBUG",
"hivemqId": "35yIM",
"thread": "netty-loop-1",
"message": "Request client takeover for client sub at node Wcie9.",
"clientId": "sub"
}
{
"time": "2015-05-28T14:07:17Z",
"level": "DEBUG",
"hivemqId": "35yIM",
"thread": "netty-loop-1",
"message": "Disconnecting already connected client with id sub because another client connects with that id",
"clientId": "sub"
}
{
"time": "2015-05-28T14:07:17Z",
"level": "DEBUG",
"hivemqId": "35yIM",
"thread": "netty-loop-2",
"message": "A client (IP: 2.182.17.98) sent other message before CONNECT. Disconnecting client.",
"ip": "2.182.17.98"
}
{
"time": "2015-05-28T14:07:17Z",
"level": "ERROR",
"hivemqId": "35yIM",
"thread": "netty-loop-2",
"message": "Something failed unexpectedly, reason: null",
"stackTrace": "java.lang.NullPointerException: null\n\tat mqtt5.MiniMain.main(MiniMain.java:9)"
}
{
"time": "2015-05-28T14:07:17Z",
"level": "DEBUG",
"hivemqId": "35yIM",
"thread": "netty-loop-2",
"message": "Original exception",
"stackTrace": "java.lang.NullPointerException: null\n\tat mqtt5.MiniMain.main(MiniMain.java:9)"
}
JSON Logging Use Cases
JSON Logging gives HiveMQ administrators the ability to collect and ship logs to a central location where the log messages can be converted into meaningful data that is easier to search and analyze.
For example, with the help of JSON logging and a log management tool of your choice, you can gain immediate insights into unexpected behavior of specific clients. The structured log format allows you to search for individual clients and observe all events that happened with the client (connects, disconnects, and so on). The same is true for monitoring security concerns. If you notice that your authentication system is under heavy load, you can quickly query targeted information for authentication activity from your event logs.
Extension Log Files
By default, HiveMQ logs information to the hivemq.log file for each HiveMQ extension that you run. The additional entries can make it more difficult to evaluate information in the log output. The use of separate log files for your HiveMQ extensions can make it easier for you to efficiently monitor and troubleshoot your applications.
Add Extension Log File
The following example shows you how to create a separate log file for the HiveMQ File RBAC Extension. The steps are similar for all HiveMQ extensions.
Prerequisites

- An installed version of the HiveMQ File RBAC Extension. For more information, see File RBAC Extension and the File RBAC Extension GitHub repository.

Once you have successfully installed the RBAC extension, a log entry similar to the following statement confirms your installation:
INFO - Extension "File Role Based Access Control Extension" version 4.0.0 started successfully.
Extension Log File Configuration
To create a separate log file for the extension, you must add an appropriately defined appender to the logback.xml of your HiveMQ configuration.
The additional appender and logger configurations specify the output file, rolling policy, log pattern, and logger that are used to log extension information.
The following code example places a log file named file-rbac.log in the log folder of your HiveMQ installation.
The rollingPolicy in the example archives the log file every day at midnight, compresses the file, and stores each log file for 30 days.
Example appender for RBAC extension log file configuration
<appender name="FILE-RBAC" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <!-- name and location of the log file -->
    <file>${hivemq.home}/log/file-rbac.log</file>
    <append>true</append>
    <encoder>
        <pattern>%-30(%d [%thread]) - %level - %logger{32} - %msg%n%ex</pattern>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <!-- daily rollover -->
        <fileNamePattern>${hivemq.home}/log/file-rbac.%d{yyyy-MM-dd}.log.gz</fileNamePattern>
        <!-- keep 30 days' worth of history -->
        <maxHistory>30</maxHistory>
    </rollingPolicy>
</appender>
The following example logger configuration adds the required logger.
Example RBAC extension logger configuration
<logger name="com.hivemq.extensions.rbac" level="INFO" additivity="false">
<appender-ref ref="FILE-RBAC"/>
</logger>
In the logger configuration, the logger name must match the package name of the associated extension and the appender-ref must match the name of the defined appender.
Extension Log File Verification
To verify creation of the extension log file, go to the log folder of your HiveMQ installation. Based on the scanPeriod that is set in the logback.xml file in your conf folder, HiveMQ detects the changes and updates your configuration. If you have disabled automatic scanning (scan="false"), you must restart HiveMQ to apply the changes.
Look for a new file named file-rbac.log in the log directory.
Open the file-rbac.log file and confirm that an extension start entry or other INFO entries are visible.
Syslog
The HiveMQ logging subsystem can log to a Syslog server. Use of a Syslog server allows you to consolidate log files from multiple HiveMQ broker nodes into a single log file. A configurable prefix in the log statements ensures that each statement can be associated with the HiveMQ node on which it was created. The unified log view simplifies the management and analysis of your logs and makes it easier to debug large HiveMQ cluster deployments.
For more information, see A Quick guide to Syslog and HiveMQ.
To activate Syslog, add the following configuration to your logback.xml file.
Replace $IP-Address with the address of your Syslog server, and replace X with the identifier of the node.
<appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
// IP-Address of your syslog server
<syslogHost>$IP-Address</syslogHost>
<facility>user</facility>
// replace X with the node identifier
<suffixPattern>[nodeX] %-30(%d %level)- %msg%n%ex</suffixPattern>
</appender>
<root level="DEBUG">
<appender-ref ref="SYSLOG" />
</root>