4.19.x to 4.20.x Migration Guide
This is a minor HiveMQ upgrade. HiveMQ 4.20 is a drop-in replacement for HiveMQ 4.19.x.
You can learn more about all the new features HiveMQ 4.20 introduces in our release blogpost.
HiveMQ is prepackaged with multiple HiveMQ Enterprise Extensions (disabled), the open-source MQTT CLI tool, and the HiveMQ Swarm load-testing tool (both located in the tools
folder of your HiveMQ installation).
Starting with the HiveMQ 4.9 LTS release, HiveMQ provides enhanced version compatibility for all HiveMQ releases.
For more information, see HiveMQ Rolling Upgrade Policy and our Introducing Flexible MQTT Platform Upgrades with HiveMQ blog post.
When you migrate from one HiveMQ version to another, review the upgrade information for each version between your current HiveMQ version and the target HiveMQ version. Note changes that are relevant to your use case and adjust your configuration as needed.
Upgrade a HiveMQ Cluster
Rolling upgrades are supported, and it is possible to run HiveMQ version 4.19 and version 4.20 simultaneously in the same cluster. By default, the HiveMQ cluster enables all new cluster features when all nodes are upgraded to the new version. No manual intervention is required.
Please follow the instructions in our user guide to ensure a seamless and successful rolling upgrade.
For more information, see HiveMQ Clusters.
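The following is a minimal sketch of the per-node loop of a rolling upgrade, assuming a Linux installation that runs HiveMQ as a systemd service named hivemq and keeps the installation under /opt/hivemq; the service name, paths, and health check are assumptions you should adapt to your deployment.
# Rolling-upgrade sketch for one node; repeat node by node and only continue
# once the upgraded node has rejoined the cluster (assumed service name and paths).
sudo systemctl stop hivemq                                  # take the node out of the cluster
sudo cp -r /opt/hivemq/conf /opt/hivemq-conf-4.19-backup    # keep the current configuration
# ...install the HiveMQ 4.20 package or unpack the 4.20 zip and switch /opt/hivemq to it...
sudo systemctl start hivemq                                 # start the upgraded node
# Check the cluster size in the Control Center or your monitoring before upgrading the next node.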
Upgrade a Single-node HiveMQ Instance
- Create a backup of the entire HiveMQ 4.19.x installation folder from which you want to migrate.
- Install HiveMQ 4.20 as described in the HiveMQ Installation Guide.
- Migrate the contents of the configuration file from your old HiveMQ 4.19.x installation.
- To migrate your persistent data, copy everything from the data folder of your backup to the data folder of the new HiveMQ 4.20 installation.
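Condensed as a shell sketch, assuming a zip-based installation under /opt and that your existing config.xml needs no manual adjustments for 4.20 (otherwise merge the configuration by hand), the four steps could look like this; all paths and archive names are assumptions.
# Single-node migration sketch (assumed install locations and archive name).
cp -r /opt/hivemq-4.19.0 /opt/hivemq-4.19.0-backup                                 # back up the old installation
unzip hivemq-4.20.0.zip -d /opt/                                                   # install HiveMQ 4.20
cp /opt/hivemq-4.19.0-backup/conf/config.xml /opt/hivemq-4.20.0/conf/config.xml    # carry over the configuration
cp -r /opt/hivemq-4.19.0-backup/data/* /opt/hivemq-4.20.0/data/                    # copy the persistent data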
Configuration File Changes
HiveMQ 4.20 makes some changes to configuration files.
Starting with HiveMQ 4.20, the HiveMQ Data Hub is enabled by default and is configured with a different XML tag name.
If you have used Data Hub (previously called Data Governance Hub) prior to HiveMQ 4.20, check the Data Hub Config Change section for additional information.
HiveMQ prevents startup if your configuration file contains invalid values. For more information, see Configuration Validation.
Persistent Data Migration
When you migrate, HiveMQ 4.20 automatically updates the file storage formats of all the data that you copied into your new data folder.
To migrate the persistent data, you must copy everything in the data folder of the previous HiveMQ 4.19.x installation to the data folder of your new HiveMQ 4.20 installation.
cp -r /opt/hivemq-4.19.0/data/* /opt/hivemq-4.20.0/data/
The first time you start HiveMQ 4.20, the file storage formats of the persistent data from your previous installation are automatically updated in the new persistent storage.
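If HiveMQ runs under a dedicated service user, the copied files may also need the correct ownership before the first start; a one-line sketch, assuming a hivemq user and group and the default install path:
# Assumed user/group and install path; adjust to your installation.
chown -R hivemq:hivemq /opt/hivemq-4.20.0/data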
Configuration and Default Behavior Change for HiveMQ Data Hub
In HiveMQ 4.20, HiveMQ Data Governance Hub is renamed to HiveMQ Data Hub.
The XML tags in the HiveMQ config.xml file reflect the new name and have a new default behavior.
Starting with HiveMQ 4.20, the HiveMQ Data Hub data-validation and behavior-validation features are enabled by default.
To disable the Data Hub, you must set the enabled tags for data-validation and behavior-validation in the data-hub section of your HiveMQ config.xml file to false.
Example config.xml file:
<hivemq>
    <data-hub>
        <data-validation>
            <enabled>false</enabled>
        </data-validation>
        <behavior-validation>
            <enabled>false</enabled>
        </behavior-validation>
    </data-hub>
</hivemq>
New Control Center Permissions
In HiveMQ 4.20, the HiveMQ Control Center adds the option to configure granular permissions for managing Data Hub schemas and policies.
If you currently use the HiveMQ Enterprise Security Extension to restrict access to your control center and you want specific users to continue to have access to such information, you must grant the new permissions to these users.
You can assign one or more of the new permissions to the appropriate users or roles as desired.
| Permission | Description | Additional permissions |
| --- | --- | --- |
| HIVEMQ_VIEW_PAGE_SCHEMAS_LIST | Allowed to view the schema list | |
| HIVEMQ_VIEW_PAGE_SCHEMA_DETAIL | Allowed to view schema details | |
| HIVEMQ_EDIT_DATA_SCHEMA | Allowed to create/edit schemas | |
| HIVEMQ_VIEW_PAGE_DATA_POLICIES_LIST | Allowed to view the data policy list | HIVEMQ_VIEW_DATA_TOPIC |
| HIVEMQ_VIEW_PAGE_BEHAVIOR_POLICIES_LIST | Allowed to view the behavior policy list | HIVEMQ_VIEW_DATA_CLIENT_ID |
| HIVEMQ_VIEW_PAGE_DATA_POLICY_DETAIL | Allowed to view data policy details | HIVEMQ_VIEW_DATA_CLIENT_ID, HIVEMQ_VIEW_DATA_TOPIC |
| HIVEMQ_VIEW_PAGE_BEHAVIOR_POLICY_DETAIL | Allowed to view behavior policy details | HIVEMQ_VIEW_DATA_CLIENT_ID, HIVEMQ_VIEW_DATA_TOPIC |
| HIVEMQ_VIEW_PAGE_DATA_HUB_CHARTS | Allowed to view Data Hub charts | HIVEMQ_VIEW_DATA_CLIENT_ID |
| HIVEMQ_EDIT_DATA_DATA_POLICY | Allowed to create/edit data policies | HIVEMQ_VIEW_DATA_TOPIC |
| HIVEMQ_EDIT_DATA_BEHAVIOR_POLICY | Allowed to create/edit behavior policies | HIVEMQ_VIEW_DATA_CLIENT_ID, HIVEMQ_VIEW_DATA_TOPIC |
Some new permissions rely on the presence of other permissions.
Permissions required for full usability are listed in the Additional permissions column.
For example, when you grant the HIVEMQ_EDIT_DATA_BEHAVIOR_POLICY permission, you also need to grant HIVEMQ_VIEW_DATA_CLIENT_ID and HIVEMQ_VIEW_DATA_TOPIC to ensure proper usability.
For more information, see Control Center Access Control Permissions.
Changes in the OpenAPI Integration for the Data Hub
HiveMQ 4.20 introduces changes to the REST API tags for the Data Hub.
The changes in the tags of the REST resources for Data Hub result in different names in the generated classes.
If you currently use the OpenAPI integration for the Data Hub to generate code, you must adjust your code to accommodate the changes.
The most significant change is to replace /data-governance-hub/ with /data-hub/.
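If you keep OpenAPI specifications or generated client code under version control, a bulk search-and-replace covers most of this rename; the following one-liner is a sketch that assumes a Unix-like environment and a src directory, so review the resulting diff before committing.
# Sketch: rename the Data Hub path/tag prefix in checked-in files (directory name is an assumption).
grep -rl '/data-governance-hub/' src/ | xargs sed -i 's|/data-governance-hub/|/data-hub/|g'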
For more information, see HiveMQ REST API.
Renamed Policy Functions with Namespace in the HiveMQ Data Hub
HiveMQ 4.20 renames one existing policy function with the namespace as a prefix:
- UserProperties.add changes to Mqtt.UserProperties.add
The new naming helps organize functions by their purpose for easier policy management. As new functions are added, each function will be prefixed with the appropriate namespace.
Data Policy Default Behavior for Failed Validations
HiveMQ 4.20 comes with an updated default behavior in the Data Hub data policy engine.
In previous versions, the policy engine automatically dropped the message when a validation failed. Now, we have changed the drop behavior to make the handling of dropped messages more explicit and easier to understand.
In HiveMQ 4.20, we introduce a new `Mqtt.drop` function that can drop any MQTT message. You can even drop valid messages for debugging purposes. For more information, see Functions.
To achieve the same behavior as before HiveMQ 4.20, onFailure actions need an extra Mqtt.drop at the end of the action pipeline if no other terminal function is provided.
Policy Migration
If you have existing policies that contain the outdated UserProperties.add function or require an additional Mqtt.drop, use the following procedure to migrate the affected policies to the new Mqtt.UserProperties.add function name:
- To get all existing policies in the broker, enter:
  curl -X GET http://localhost:8888/api/v1/data-validation/policies
- Change all functionId fields in affected policies to use the new function name (Mqtt.UserProperties.add) and add an additional Mqtt.drop to the end of the onFailure action pipeline.
- Delete each of the outdated policies one by one with the following command:
  curl -X DELETE -H "Content-Type: application/json" http://localhost:8888/api/v1/data-validation/policies/{policyId}
- Re-upload your newly revised policies one by one with the following command (a sketch of a possible policy.json follows this list):
  curl -X POST --data @policy.json -H "Content-Type: application/json" http://localhost:8888/api/v1/data-validation/policies
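As a concrete example, a migrated policy.json that keeps the pre-4.20 drop behavior could look like the sketch below; the policy id, topic filter, schema id, operation ids, and user property are made-up values, and the exact policy fields should be verified against the Data Hub documentation for your version before uploading.
# Sketch of a migrated data policy: renamed functionId plus a terminal Mqtt.drop in onFailure.
cat > policy.json <<'EOF'
{
  "id": "sensor-data-policy",
  "matching": { "topicFilter": "sensors/#" },
  "validation": {
    "validators": [
      {
        "type": "schema",
        "arguments": {
          "strategy": "ALL_OF",
          "schemas": [ { "schemaId": "sensor-schema", "version": "latest" } ]
        }
      }
    ]
  },
  "onFailure": {
    "pipeline": [
      {
        "id": "mark-invalid",
        "functionId": "Mqtt.UserProperties.add",
        "arguments": { "name": "validation", "value": "failed" }
      },
      {
        "id": "drop-invalid",
        "functionId": "Mqtt.drop",
        "arguments": {}
      }
    ]
  }
}
EOF
curl -X POST --data @policy.json -H "Content-Type: application/json" http://localhost:8888/api/v1/data-validation/policies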
Changed Metric Names for Data Hub
HiveMQ 4.20 renames the Data Hub metrics to match the new product name: data-governance-hub changes to data-hub.
For more information, see Data Hub Metrics.
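Dashboards and alerts that reference the old metric names need the same rename; a generic sketch, assuming your exported definitions live in a local dashboards directory:
# Sketch: update exported dashboard or alert definitions (file location is an assumption).
sed -i 's/data-governance-hub/data-hub/g' dashboards/*.json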