Monitoring
System monitoring is an essential part of every production-software deployment. Monitoring your MQTT brokers is vital, especially in clustered environments. HiveMQ is designed to make it easy to enable different kinds of monitoring. When you run HiveMQ in critical infrastructure, we strongly recommend using an appropriate monitoring application.
The high-performance HiveMQ metrics subsystem allows you to monitor relevant metrics in low-latency, high-throughput environments without reducing system performance.
JMX
HiveMQ provides extensive support for Java Management Extensions (JMX) to monitor the internals of HiveMQ and the Java Virtual Machine (JVM). JMX is a proven industry standard for Java monitoring, and many external tools support JMX natively or via extensions. The HiveMQ core distribution ships with the HiveMQ JVM Metrics Plugin and the HiveMQ JMX Plugin. If you prefer not to use JMX for monitoring, simply delete the JMX plugin from your plugin directory.
Be sure to set all JMX options properly. For more information, see the JMX documentation.
The JVM Metrics Plugin adds several additional metrics about the JVM on which HiveMQ runs. These metrics provide useful insights into memory usage and other runtime characteristics of the JVM.
The JMX Plugin enables JMX monitoring for HiveMQ. When the plugin is loaded, you can use JMX monitoring tools such as JConsole to get statistics and insights for your HiveMQ deployment.
Configuration
If you run HiveMQ behind NAT (Network Address Translation), you need to set some additional options:
```
JAVA_OPTS="$JAVA_OPTS -Djava.rmi.server.hostname=<PUBLIC_IP>"
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.rmi.port=9010"
```
This configuration allows you to connect with JConsole using <PUBLIC_IP>:9010.
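If you want to read the exposed MBeans programmatically rather than through JConsole, the standard javax.management API can connect to the same endpoint. The following is only a minimal sketch; the host is a placeholder for your `<PUBLIC_IP>`:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import java.util.Set;

public class JmxConnectExample {
    public static void main(String[] args) throws Exception {
        // Same endpoint that JConsole would use: <PUBLIC_IP>:9010 (placeholder host)
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://203.0.113.10:9010/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // List all registered MBeans to discover the available HiveMQ metric names
            Set<ObjectName> names = connection.queryNames(null, null);
            names.forEach(System.out::println);
        }
    }
}
```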
MBeans
When JMX is activated, the following MBeans (managed Java objects) are available for monitoring:
MBean Name | Description |
---|---|
|
The HiveMQ metrics and statistics. A list of all available metrics is provided in the Available Metrics section below. |
|
Statistics and metrics about used native memory. |
|
All information about the Java Virtual Machine. |
|
Actions that can be started via JMX. |
Graphite
Graphite is a graphing system for monitoring and displaying statistics from different data sources. Graphite is highly scalable and useful when you need to monitor one or more HiveMQ cluster nodes, for example when the built-in JMX monitoring is not sufficient for your use case or when you want to preserve statistics history.
Graphite Server
We strongly recommend that you install Graphite on a different server than HiveMQ.
Graphite monitoring is not part of the HiveMQ installation. A free plugin that enables HiveMQ to report to Graphite is available for download on GitHub.
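As an illustration of how Graphite reporting for this style of metrics generally works, the sketch below uses the Dropwizard Metrics GraphiteReporter. This is an assumption for illustration, not the plugin's actual implementation; the Graphite hostname, port, prefix, and reporting interval are placeholders you would adapt:

```java
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.graphite.Graphite;
import com.codahale.metrics.graphite.GraphiteReporter;

import java.net.InetSocketAddress;
import java.util.concurrent.TimeUnit;

public class GraphiteReportingSketch {
    public static void main(String[] args) {
        MetricRegistry registry = new MetricRegistry();

        // Placeholder Graphite endpoint; the plugin's own configuration keys may differ
        Graphite graphite = new Graphite(new InetSocketAddress("graphite.example.com", 2003));

        GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                .prefixedWith("hivemq.node1")          // prefix to distinguish cluster nodes
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build(graphite);

        // Push all registered metrics to Graphite once per minute
        reporter.start(1, TimeUnit.MINUTES);
    }
}
```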
Maintenance
Maintenance actions are often resource intensive and should therefore not be started shortly after HiveMQ instances have been added to or removed from the cluster. Before you start a maintenance action, make sure that the actual cluster size matches the expected cluster size; a JMX invocation sketch follows the table below.
Maintenance actions | Description |
---|---|
|
Clean up outdated subscriptions that were added during network splits. |
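A JMX client can trigger such a maintenance action programmatically. In the following sketch, the MBean object name and operation name are hypothetical placeholders (look up the real names in JConsole first); only the javax.management calls themselves are standard:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class MaintenanceActionSketch {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://hivemq.example.com:9010/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();

            // Hypothetical object and operation names -- look up the real ones in JConsole first
            ObjectName actions = new ObjectName("com.example.hivemq:type=MaintenanceActions");
            connection.invoke(actions, "cleanUpOutdatedSubscriptions", new Object[0], new String[0]);
        }
    }
}
```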
$SYS Topic
The free and open-source HiveMQ SYS Topic Plugin enables HiveMQ to support a special MQTT topic tree called the $SYS topic. MQTT clients can subscribe to this topic and receive system information and statistics from the HiveMQ MQTT broker. Once the plugin is installed, the following $SYS subtopics are exposed (a subscription example follows the table):
Topic | Description |
---|---|
|
The currently connected clients. |
|
The clients which are not connected and have a persistent session on the broker. |
|
The maximum number of active clients which were connected simultaneously. |
|
The total count of connected and disconnected (with a persistent session) clients. |
|
The total bytes received. |
|
The total bytes sent. |
|
The moving average of the number of CONNECT packets received by the broker during the last minute. |
|
The moving average of the number of CONNECT packets received by the broker during the last 5 minutes. |
|
The moving average of the number of CONNECT packets received by the broker during the last 15 minutes. |
|
The moving average of the number of all types of MQTT messages received by the broker during the last minute. |
|
The moving average of the number of all types of MQTT messages received by the broker during the last 5 minutes. |
|
The moving average of the number of all types of MQTT messages received by the broker during the last 15 minutes. |
|
The moving average of the number of all types of MQTT messages sent by the broker during the last minute. |
|
The moving average of the number of all types of MQTT messages sent by the broker during the last 5 minutes. |
|
The moving average of the number of all types of MQTT messages sent by the broker during the last 15 minutes. |
|
The moving average of the number of MQTT PUBLISH messages dropped by the broker during the last minute. |
|
The moving average of the number of MQTT PUBLISH messages dropped by the broker during the last 5 minutes. |
|
The moving average of the number of MQTT PUBLISH messages dropped by the broker during the last 15 minutes. |
|
The moving average of the number of MQTT PUBLISH messages received by the broker during the last minute. |
|
The moving average of the number of MQTT PUBLISH messages received by the broker during the last 5 minutes. |
|
The moving average of the number of MQTT PUBLISH messages received by the broker during the last 15 minutes. |
|
The moving average of the number of MQTT PUBLISH messages sent by the broker during the last minute. |
|
The moving average of the number of MQTT PUBLISH messages sent by the broker during the last 5 minutes. |
|
The moving average of the number of MQTT PUBLISH messages sent by the broker during the last 15 minutes. |
|
The total number of MQTT PUBLISH messages that have been dropped due to inflight/queuing limits. |
|
The total MQTT PUBLISH messages received. |
|
The total MQTT PUBLISH messages sent. |
|
The total MQTT messages received. |
|
The total number of retained messages. |
|
The total MQTT messages sent. |
|
The total count of subscriptions. |
|
The current time on the broker. Only published on subscription. |
|
The uptime of the broker in seconds. Only published on subscription. |
|
The HiveMQ version. Only published on subscription. |
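Any MQTT client can read these values. A minimal subscription sketch, assuming the Eclipse Paho Java client and a broker on localhost (both assumptions for illustration):

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class SysTopicSubscriber {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://localhost:1883", "sys-monitor", new MemoryPersistence());

        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(true);
        client.connect(options);

        // Subscribe to the whole $SYS tree with QoS 0 and print every update
        client.subscribe("$SYS/#", 0, (topic, message) ->
                System.out.println(topic + " = " + new String(message.getPayload())));
    }
}
```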
$SYS Topic standard
There is no official standard that defines which $SYS topics must exist.
However, broker vendors maintain a consensus list of commonly available $SYS topics.
These special topics are not always fully interoperable between MQTT brokers, and clients should not rely on them.
It is not possible for any client to publish to the $SYS topic or one of its subtopics. These values are published exclusively by HiveMQ.
While $SYS topics are a good fit for broker monitoring in a trusted environment, we recommend not using $SYS topics in production and instead relying on a more sophisticated monitoring solution.
Available Metrics
Metric Types
Five different types of metrics are available. The following table shows all metric types; a usage sketch follows the table:
Metric Type | Description |
---|---|
Gauge | A gauge returns a simple value at the point in time the metric is requested. |
Counter | A counter is a simple incrementing and decrementing number. |
Histogram | A histogram measures the distribution of values in a stream of data and allows you to measure the min, mean, max, standard deviation, and quantiles of the values. |
Meter | A meter measures the rate at which a set of events occurs. Meters measure the mean rate as well as 1-, 5-, and 15-minute moving averages of events. |
Timer | A timer is basically a histogram of the duration of a type of event combined with a meter of the rate of its occurrence. It captures rate and duration information. |
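These metric types match the types found in the Dropwizard Metrics library; the sketch below (with made-up metric names, assuming that library purely for illustration) shows how each type behaves:

```java
import com.codahale.metrics.Counter;
import com.codahale.metrics.Gauge;
import com.codahale.metrics.Histogram;
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

public class MetricTypesSketch {
    public static void main(String[] args) {
        MetricRegistry registry = new MetricRegistry();

        // Gauge: returns a value at the point in time it is requested
        registry.register("example.queue.size", (Gauge<Integer>) () -> 42);

        // Counter: a simple incrementing and decrementing number
        Counter connections = registry.counter("example.connections.current");
        connections.inc();
        connections.dec();

        // Histogram: distribution of values (min, mean, max, stddev, quantiles)
        Histogram payloadSizes = registry.histogram("example.payload.size");
        payloadSizes.update(512);

        // Meter: rate of events (mean, 1-, 5-, and 15-minute moving averages)
        Meter messagesReceived = registry.meter("example.messages.received");
        messagesReceived.mark();

        // Timer: histogram of durations plus a meter of the rate of occurrence
        Timer publishTimer = registry.timer("example.publish.processing");
        try (Timer.Context ignored = publishTimer.time()) {
            // measured work goes here
        }
    }
}
```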
The following table lists metrics that are available for monitoring HiveMQ regardless of whether the HiveMQ server instance runs in single mode or as part of a cluster:
Metric | Type | Description |
---|---|---|
|
|
Cache statistic capturing the average load penalty of the payload persistence cache |
|
|
Cache statistic capturing the eviction count of the payload persistence cache |
|
|
Cache statistic capturing the hit count of the payload persistence cache |
|
|
Cache statistic capturing the hit rate of the payload persistence cache |
|
|
Cache statistic capturing the load count of the payload persistence cache |
|
|
Cache statistic capturing the load exception count of the payload persistence cache |
|
|
Cache statistic capturing the load exception rate of the payload persistence cache |
|
|
Cache statistic capturing the load success count of the payload persistence cache |
|
|
Cache statistic capturing the miss count of the payload persistence cache |
|
|
Cache statistic capturing the miss rate of the payload persistence cache |
|
|
Cache statistic capturing the request count of the payload persistence cache |
|
|
Cache statistic capturing the total load time of the payload persistence cache |
|
|
Cache statistic capturing the average load penalty of the shared subscription cache |
|
|
Cache statistic capturing the eviction count of the shared subscription cache |
|
|
Cache statistic capturing the hit count of the shared subscription cache |
|
|
Cache statistic capturing the hit rate of the shared subscription cache |
|
|
Cache statistic capturing the load count of the shared subscription cache |
|
|
Cache statistic capturing the load exception count of the shared subscription cache |
|
|
Cache statistic capturing the load exception rate of the shared subscription cache |
|
|
Cache statistic capturing the load success count of the shared subscription cache |
|
|
Cache statistic capturing the miss count of the shared subscription cache |
|
|
Cache statistic capturing the miss rate of the shared subscription cache |
|
|
Cache statistic capturing the request count of the shared subscription cache |
|
|
Cache statistic capturing the total load time of the shared subscription cache |
|
|
Measures the current rate of completed CallbackExecutor jobs |
|
|
Captures metrics about the job durations for jobs submitted to the CallbackExecutor |
|
|
Measures how many CallbackExecutor jobs are running at the moment |
|
|
Measures the current rate of submitted jobs to the CallbackExecutor |
|
|
Counts the number of clients whose inflight queue is at least half-full |
|
|
The number of retry attempts that are processed to resolve the name of a node |
|
|
Captures the current number of nodes in the cluster |
|
|
Measures the time spent waiting for cluster topology changes |
|
|
Currently used direct memory in bytes |
|
|
Measures the rate of inconsequential exceptions thrown during the socket life cycle |
|
|
Counts every connection that was closed because the client failed to send a PINGREQ message within the keep-alive interval |
|
|
Measures the rate of logging statements of all levels |
|
|
Measures the rate of logging statements in DEBUG level |
|
|
Measures the rate of logging statements in ERROR level |
|
|
Measures the rate of logging statements in INFO level |
|
|
Measures the rate of logging statements in TRACE level |
|
|
Measures the rate of logging statements in WARN level |
|
|
Counts every dropped message. |
|
|
Counts the messages that have been dropped because the in flight window was full |
|
|
Counts the messages that have been dropped due to internal errors |
|
|
Counts the messages that have been dropped because the client disconnected and has no persistent session |
|
|
Counts the messages with QoS 0 that have been dropped because the client socket was not writeable |
|
|
Counts the messages with QoS 0 that have been dropped because the queue for the client wasn’t empty |
|
|
Counts the messages that have been dropped because the client session message queue was full |
|
|
Measures the current rate of dropped messages. |
|
|
Counts every incoming MQTT CONNECT message |
|
|
Measures the current rate of incoming MQTT CONNECT messages |
|
|
Counts every incoming MQTT DISCONNECT message |
|
|
Measures the current rate of incoming MQTT DISCONNECT messages |
|
|
Counts every incoming MQTT PINGREQ message |
|
|
Measures the current rate of incoming MQTT PINGREQ messages |
|
|
Counts every incoming MQTT PUBACK message |
|
|
Measures the current rate of incoming MQTT PUBACK messages |
|
|
Counts every incoming MQTT PUBCOMP message |
|
|
Measures the current rate of incoming MQTT PUBCOMP messages |
|
|
Measures the distribution of incoming MQTT message size (including MQTT packet headers) |
|
|
Counts every incoming MQTT PUBLISH message |
|
|
Measures the current rate of incoming MQTT PUBLISH messages |
|
|
Counts every incoming MQTT PUBREC message |
|
|
Measures the current rate of incoming MQTT PUBREC messages |
|
|
Counts every incoming MQTT PUBREL message |
|
|
Measures the current rate of incoming MQTT PUBREL messages |
|
|
Counts every incoming MQTT SUBSCRIBE message |
|
|
Measures the current rate of incoming MQTT SUBSCRIBE messages |
|
|
Measures the size distribution of incoming MQTT messages (including MQTT packet headers) |
|
|
Counts every incoming MQTT message |
|
|
Measures the current rate of incoming MQTT messages |
|
|
Counts every incoming MQTT UNSUBSCRIBE message |
|
|
Measures the current rate of incoming MQTT UNSUBSCRIBE messages |
|
|
Counts every outgoing MQTT CONNACK message |
|
|
Measures the current rate of outgoing MQTT CONNACK messages |
|
|
Counts every outgoing MQTT PINGRESP message |
|
|
Measures the current rate of outgoing MQTT PINGRESP messages |
|
|
Counts every outgoing MQTT PUBACK message |
|
|
Measures the current rate of outgoing MQTT PUBACK messages |
|
|
Counts every outgoing MQTT PUBCOMP message |
|
|
Measures the current rate of outgoing MQTT PUBCOMP messages |
|
|
Measures the size distribution of outgoing MQTT messages (including MQTT packet headers) |
|
|
Counts every outgoing MQTT PUBLISH message |
|
|
Measures the current rate of outgoing MQTT PUBLISH messages |
|
|
Counts every outgoing MQTT PUBREC message |
|
|
Measures the current rate of outgoing MQTT PUBREC messages |
|
|
Counts every outgoing MQTT PUBREL message |
|
|
Measures the current rate of outgoing MQTT PUBREL messages |
|
|
Counts every outgoing MQTT SUBACK message |
|
|
Measures the current rate of outgoing MQTT SUBACK messages |
|
|
Measures the size distribution of outgoing MQTT messages (including MQTT packet headers) |
|
|
Counts every outgoing MQTT message |
|
|
Measures the current rate of outgoing MQTT messages |
|
|
Counts every outgoing MQTT UNSUBACK message |
|
|
Measures the current rate of outgoing MQTT UNSUBACK messages |
|
|
Measures the current rate of resent PUBLISH messages (QoS > 0) |
|
|
Measures the current rate of resent PUBREL messages (QoS = 2) |
|
|
The current amount of retained messages |
|
|
Metrics about the mean payload-size of retained messages in bytes |
|
|
The current rate of newly retained messages |
|
|
The number of bytes read during the last 5 seconds |
|
|
The total number of bytes read |
|
|
The number of bytes written during the last 5 seconds |
|
|
The total number of bytes written |
|
|
The current total number of active MQTT connections |
|
|
The mean total number of active MQTT connections |
|
|
Counts clients which disconnected after sending a DISCONNECT message |
|
|
Counts clients which disconnected without sending a DISCONNECT message |
|
|
Counts all clients which disconnected from HiveMQ (= graceful + ungraceful) |
|
|
Measures the rate of completed tasks submitted to the scheduler in charge of the cleanup of the persistence payload |
|
|
Captures metrics about the job durations for jobs submitted to the scheduler in charge of the cleanup of the persistence payload |
|
|
Counts tasks that are currently running in the scheduler in charge of the cleanup of the persistence payload |
|
|
Metrics about the tasks that have been scheduled to run only once in the scheduler in charge of the cleanup of the persistence payload |
|
|
Counts the periodic tasks that ran longer than their allowed time frame in the scheduler in charge of the cleanup of the persistence payload |
|
|
Metrics about the percentage of their allowed time frame that periodic tasks used while running the cleanup of the persistence payload |
|
|
Metrics about the tasks that have been scheduled to run repetitively in the scheduler in charge of the cleanup of the persistence payload |
|
|
Metrics about the tasks that have been submitted to the scheduler in charge of the cleanup of the persistence payload |
|
|
Measures the rate of completed tasks submitted to the persistence executor |
|
|
Captures metrics about the job durations for jobs submitted to the persistence executor |
|
|
Counts tasks that are currently running in the persistence executor |
|
|
Metrics about the tasks that have been submitted to the scheduler responsible for persistence |
|
|
Measures the rate of completed tasks submitted to the scheduler responsible for persistence |
|
|
Captures metrics about the job durations for jobs submitted to the scheduler responsible for persistence |
|
|
Counts tasks that are currently running in the scheduler responsible for persistence |
|
|
Metrics about the tasks that have been scheduled to run once in the scheduler responsible for persistence |
|
|
Counts the periodic tasks that ran longer than their allowed time frame in the scheduler responsible for persistence |
|
|
Metrics about the percentage of their allowed time frame that periodic tasks used in the scheduler responsible for persistence |
|
|
Metrics about the tasks that have been scheduled to run repetitively in the scheduler responsible for persistence |
|
|
Metrics about the tasks that have been submitted to the scheduler responsible for persistence |
|
|
Current amount of disk I/O tasks that are enqueued by the client session persistence |
|
|
Measures the mean execution time (in nanoseconds) of client session disk I/O tasks |
|
|
Current amount of single writer task queues that are not empty |
|
|
Current amount of disk I/O tasks that are enqueued by the outgoing message flow persistence |
|
|
Measures the mean execution time (in nanoseconds) of outgoing message flow disk I/O tasks |
|
|
Current count of loops that all single writer threads have done without executing a task |
|
|
Current amount of disk I/O tasks that are enqueued by the queued messages persistence |
|
|
Measures the mean execution time (in nanoseconds) of queued messages disk I/O tasks |
|
|
Current amount of tasks that are enqueued by the request event bus |
|
|
Measures the mean execution time (in nanoseconds) of request event bus tasks |
|
|
Current amount of disk I/O tasks that are enqueued by the retained message persistence |
|
|
Measures the mean execution time (in nanoseconds) of retained message disk I/O tasks |
|
|
Current amount of threads that are executing disk I/O tasks |
|
|
Current amount of disk I/O tasks that are enqueued by the subscription persistence |
|
|
Measures the mean execution time (in nanoseconds) of subscription disk I/O tasks |
|
|
Current amount of disk I/O tasks that are enqueued by all persistence executors |
|
|
Holds the current amount of payloads stored in the payload persistence |
|
|
Holds the current amount of payloads stored in the payload persistence, that can be removed by the cleanup |
|
|
Metrics about the AfterLoginCallback |
|
|
Metrics about the AfterLoginCallback |
|
|
Metrics about the OnAuthenticationCallback |
|
|
Metrics about the OnAuthorizationCallback |
|
|
Metrics about the OnConnackSend Callback |
|
|
Metrics about the OnConnectCallback |
|
|
Metrics about the OnDisconnectCallback |
|
|
Metrics about the OnInsufficientPermissionDisconnectCallback |
|
|
Metrics about the OnInsufficientPermissionDisconnectCallback |
|
|
Metrics about the OnPingCallback |
|
|
Metrics about the OnPubackReceived Callback |
|
|
Metrics about the OnPubackSend Callback |
|
|
Metrics about the OnPubcompReceived Callback |
|
|
Metrics about the OnPubcompSend Callback |
|
|
Metrics about the OnPublishReceivedCallback |
|
|
Metrics about the OnPublishSend Callback |
|
|
Metrics about the OnPubrecReceived Callback |
|
|
Metrics about the OnPubrecSend Callback |
|
|
Metrics about the OnPubrelReceived Callback |
|
|
Metrics about the OnPubrelSend Callback |
|
|
Metrics about the RestrictionsAfterLoginCallback |
|
|
Metrics about the OnSubackSend Callback |
|
|
Metrics about the OnSubscribeCallback |
|
|
Metrics about the OnTopicSubscriptionCallback |
|
|
Metrics about the OnUnsubackSend Callback |
|
|
Metrics about the OnUnsubscribeReceivedCallback |
|
|
Measures the rate of completed tasks submitted to the plugin executor |
|
|
Measures the rate of completed tasks submitted to the scheduler responsible for plugins |
|
|
Counts tasks that are currently running in the scheduler responsible for plugins |
|
|
Metrics about the tasks that have been scheduled to run once in the scheduler responsible for plugins |
|
|
Counts the periodic tasks that ran longer than their allowed time frame in the scheduler responsible for plugins |
|
|
Metrics about the percentage of their allowed time frame that periodic tasks used in the scheduler responsible for plugins |
|
|
Metrics about the tasks that have been scheduled to run repetitively in the scheduler responsible for plugins |
|
|
Metrics about the tasks that have been submitted to the scheduler responsible for plugins |
|
|
Measures the rate of messages put into the publish queue |
|
|
Measures the current count of messages in the publish queue |
|
|
Measures the current count of stored sessions, including the sessions of both online and offline clients |
|
|
Measures the current count of active persistent sessions (= Online MQTT clients which are connected with cleanSession=false). |
|
|
Measures the rate of completed tasks submitted to the single-writer executor |
|
|
Measures the rate of completed tasks submitted to the scheduler responsible for single-writer |
|
|
Counts tasks that are currently running in the scheduler responsible for single-writer |
|
|
Metrics about the tasks that have been submitted to the scheduler responsible for single-writer |
|
|
Measures the current count of subscriptions on the broker |
Metric | Type | Description |
---|---|---|
|
|
Maximum allowed amount of file descriptors as seen by the JVM |
|
|
Amount of open file descriptors as seen by the JVM |
|
|
Current amount of free physical memory in bytes |
|
|
Total amount of physical memory (bytes) available |
|
|
Current CPU usage for the JVM process (0.0 idle – 1.0 full CPU usage) |
|
|
Total amount of CPU time the JVM process has used up to this point (in nanoseconds) |
|
|
Current amount of free swap space in bytes |
|
|
Total amount of swap space available in bytes |
|
|
Current CPU usage for the whole system (0.0 idle – 1.0 full CPU usage) |
|
|
OS Uptime in seconds |
|
|
The amount of actual physical memory, in bytes |
|
|
The amount of physical memory currently available, in bytes. |
|
|
The current memory committed to the paging/swap file(s), in bytes |
|
|
The current size of the paging/swap file(s), in bytes. |
|
|
Amount of currently open file descriptors |
|
|
Maximum allowed amount of file descriptors |
|
|
Percentage of time that the CPU or CPUs were idle and the system did not have an outstanding disk I/O request |
|
|
Percentage of time that the CPU or CPUs were idle during which the system had an outstanding disk I/O request. |
|
|
Percentage of time that the CPU used to service hardware IRQs |
|
|
Percentage of CPU utilization that occurred while executing at the user level with nice priority |
|
|
Percentage of time that the CPU used to service soft IRQs |
|
|
Percentage of time that the hypervisor dedicated to other guests in the system. |
|
|
Percentage of CPU utilization that occurred while executing at the system level (kernel) |
|
|
Percentage of CPU utilization that occurred while executing at the user level (application) |
|
|
Percentage of total CPU utilization for convenience (not idle, calculated as sum of usage values) |
|
|
Number of threads of the HiveMQ process as seen by the OS |
|
|
Amount of milliseconds the HiveMQ process has executed in user mode as seen by the OS |
|
|
Amount of milliseconds the HiveMQ process has executed in kernel/system mode as seen by the OS |
|
|
Virtual Memory Size (VSZ) in bytes. It includes all memory that the HiveMQ process can access, including memory that is swapped out and memory that is from shared libraries. |
|
|
Resident Set Size (RSS) in bytes. It is used to show how much memory is allocated to the HiveMQ process and is in RAM. It does not include memory that is swapped out. It does include memory from shared libraries as long as the pages from those libraries are actually in memory. It does include all stack and heap memory. |
|
|
Number of bytes the HiveMQ process has written to disk |
|
|
Number of bytes the HiveMQ process has read from disk |
|
|
Bytes received by the network interface |
|
|
Bytes sent by the network interface |
|
|
Packets sent by the network interface |
|
|
Packets received by the network interface |
|
|
Input errors for the network interface |
|
|
Output errors for the network interface |
|
|
Total size of disk with name |
|
|
Amount of read operations for disk with name |
|
|
Amount of write operations for disk with name |
|
|
Amount of bytes read from disk with name |
|
|
Amount of bytes written to disk with name |
The following table lists metrics that are available only when the HiveMQ server instance runs as part of a cluster.
Metric | Type | Description |
---|---|---|
|
|
Provides measurements for every class that has made at least one SEND request (each class gets its own metric) |
Monitoring dropped messages
In a healthy MQTT environment, messages should never be dropped, especially if your MQTT clients rely on receiving important messages.
We highly recommend monitoring whether messages are dropped and whether the average number of queued messages reaches a critical level.
The following HiveMQ metrics can be used to monitor the drop rate (and total count) of messages and the average queued-message usage in the system; a polling sketch follows the table.
Metric | Description |
---|---|
|
The rate of dropped messages per second. This metric also exposes a total counter of dropped messages. |
|
The number of clients whose inflight queue is at least half-full. |
To prevent message drops, we recommend that the average number of queued messages does not exceed 50% of the maximum number of queued messages for a prolonged period.
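As referenced above, one simple way to watch for dropped messages is to poll the corresponding counter over JMX and alert when it increases. The MBean object name below is a placeholder, and Count is only the typical attribute name for counter MBeans; verify both in JConsole:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class DroppedMessageWatcher {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://hivemq.example.com:9010/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();

            // Placeholder object name -- use JConsole to find the real dropped-message metric
            ObjectName dropped = new ObjectName("metrics:name=com.example.messages.dropped.count");

            long previous = 0;
            while (true) {
                long current = ((Number) connection.getAttribute(dropped, "Count")).longValue();
                if (current > previous) {
                    System.err.println("Messages dropped since last check: " + (current - previous));
                }
                previous = current;
                Thread.sleep(60_000); // poll once per minute
            }
        }
    }
}
```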
Monitoring of Plugins
With the powerful HiveMQ Plugin System, it is possible to add integration plugins for virtually anything you can imagine. A common pitfall when writing plugins is that a plugin blocks HiveMQ threads in some way, which can dramatically decrease the overall performance of the installation. Fortunately, HiveMQ offers a way to monitor the execution time of specific plugin callbacks.
The following metrics can be monitored (for example, with JMX) for plugin callback execution times; a JMX read sketch follows the attribute table at the end of this section:
Metric Name | Description |
---|---|
|
Metrics about the AfterLoginCallback |
|
Metrics about the AfterLoginCallback |
|
Metrics about the OnAuthenticationCallback |
|
Metrics about the OnAuthorizationCallback |
|
Metrics about the OnConnackSend Callback |
|
Metrics about the OnConnectCallback |
|
Metrics about the OnDisconnectCallback |
|
Metrics about the OnInsufficientPermissionDisconnect Callback |
|
Metrics about the OnInsufficientPermissionDisconnect Callback |
|
Metrics about the OnPubackReceived Callback |
|
Metrics about the OnPubackSend Callback |
|
Metrics about the OnPubcompReceived Callback |
|
Metrics about the OnPubcompSend Callback |
|
Metrics about the OnPublishReceivedCallback |
|
Metrics about the OnPublishSend Callback |
|
Metrics about the OnPubrecReceived Callback |
|
Metrics about the OnPubrecSend Callback |
|
Metrics about the OnPubrelReceived Callback |
|
Metrics about the OnPubrelSend Callback |
|
Metrics about the RestrictionsAfterLoginCallback |
|
Metrics about the OnSubackSend Callback |
|
Metrics about the OnSubscribeCallback |
|
Metrics about the OnUnsubackSend Callback |
|
Metrics about the OnUnsubscribeReceivedCallback |
For each of these metrics, the following details are available:
Attribute Name | Description |
---|---|
|
The 50th percentile for callback execution times |
|
The 75th percentile for callback execution times |
|
The 95th percentile for callback execution times |
|
The 98th percentile for callback execution times |
|
The 99th percentile for callback execution times |
|
The 99.9th percentile for callback execution times |
|
The mean for callback execution times |
|
The standard deviation of callback execution times |
|
The total count of callback executions |
|
The average rate (events/s) of callback executions in the last 15 minutes |
|
The average rate (events/s) of callback executions in the last 5 minutes |
|
The average rate (events/s) of callback executions in the last minute |
|
The mean rate (events/s) of callback executions |
|
The maximum rate (events/s) of callback executions |
|
The minimum rate (events/s) of callback executions |
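As referenced above, these attributes can be read over JMX with the standard javax.management API. In the following sketch, the object name is a placeholder and the attribute names (99thPercentile, OneMinuteRate) are typical timer attribute names that should be verified in JConsole:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CallbackTimingReader {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://hivemq.example.com:9010/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();

            // Placeholder object name -- locate the real callback timer MBean in JConsole first
            ObjectName timer = new ObjectName("metrics:name=com.example.plugin.callbacks.OnPublishReceivedCallback");

            // Typical attribute names for timer MBeans; verify them against your HiveMQ version
            Object p99 = connection.getAttribute(timer, "99thPercentile");
            Object rate = connection.getAttribute(timer, "OneMinuteRate");
            System.out.println("99th percentile execution time: " + p99);
            System.out.println("Executions per second (1-minute average): " + rate);
        }
    }
}
```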