4.17.x to 4.18.x Migration Guide
This is a minor HiveMQ upgrade. HiveMQ 4.18 is a drop-in replacement for HiveMQ 4.17.x.
You can learn more about all the new features HiveMQ 4.18 introduces in our release blog post.
HiveMQ 4.18 is prepackaged with multiple HiveMQ Enterprise Extensions (disabled), the open-source MQTT CLI tool, and the HiveMQ Swarm load-testing tool (both located in the tools folder of your HiveMQ installation).
Upgrade a HiveMQ Cluster
Rolling upgrades are supported, and it is possible to run HiveMQ version 4.17 and version 4.18 simultaneously in the same cluster. By default, the HiveMQ cluster enables all new cluster features when all nodes are upgraded to the new version. No manual intervention is required.
Please follow the instructions in our user guide to ensure a seamless and successful rolling upgrade.
For more information, see HiveMQ Clusters.
Upgrade a Single-node HiveMQ Instance
- Create a backup of the entire HiveMQ 4.17.x installation folder from which you want to migrate (see the example command after this list).
- Install HiveMQ 4.18 as described in the HiveMQ Installation Guide.
- Migrate the contents of the configuration file from your old HiveMQ 4.17.x installation.
- To migrate your persistent data, copy everything from the data folder of your backup to the data folder of the new HiveMQ 4.18 installation.
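For example, assuming the same installation paths used in the data migration example later in this guide, the backup from the first step could be created like this (the target folder name is only an example):
cp -r /opt/hivemq-4.17.0 /opt/hivemq-4.17.0-backup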
Configuration File Changes
You can upgrade from HiveMQ 4.17.x to HiveMQ 4.18 without making changes to your configuration file.
Since version 4.10.0, HiveMQ prevents startup if your configuration file contains invalid values. For more information, see Changed Validation Behavior for the HiveMQ Configuration File and Configuration Validation.
Persistent Data Migration
When you migrate, HiveMQ 4.18 automatically updates the file storage formats of all the data that you copied into your new data folder.
To migrate the persistent data, you must copy everything in the data folder of the previous HiveMQ 4.17.x installation to the data folder of your new HiveMQ 4.18 installation.
cp -r /opt/hivemq-4.17.0/data/* /opt/hivemq-4.18.0/data/
The first time you start HiveMQ 4.18, the file storage formats of the persistent data from your previous installation are automatically updated in the new persistent storage.
Changes in the OpenAPI Integration for the Data Hub
HiveMQ 4.18 introduces changes to the REST API tags for the Data Hub.
The changes in the tags of the REST resources for Data Hub result in different names in the generated classes.
If you currently use the OpenAPI integration for the Data Hub to generate code, you must adjust your code to accommodate the changes.
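If you generate client code with the OpenAPI Generator, for example, one approach is to regenerate the client against the HiveMQ 4.18 API specification and then update any references to the renamed, tag-derived classes. The specification file name and output directory below are placeholders, not fixed paths:
# Example only: regenerate a Java client from the HiveMQ 4.18 OpenAPI specification
openapi-generator-cli generate -i hivemq-openapi-4.18.yaml -g java -o generated-rest-client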
For more information, see HiveMQ REST API.
New Licensing Model for the HiveMQ Data Hub
Starting with HiveMQ 4.17, the HiveMQ Data Hub introduced a flexible new licensing model that uses a dedicated license.
The new model makes it easier to handle Data Hub licenses because a Data Hub license can be added to any deployment without touching your HiveMQ broker license.
If you currently use the closed beta version of the HiveMQ Data Hub, the special HiveMQ broker license (file ending .lic) is no longer valid for use with the Data Hub.
All participants in the HiveMQ Data Hub closed beta must update to the new .plic license file.
If you are a member of the closed beta and have not yet received a replacement license file from your account representative, contact our sales team for assistance.
Additionally, starting with HiveMQ version 4.17, a free version of the Data Hub is included in the HiveMQ Platform bundle. The free mode of the Data Hub allows you to create one policy and gives you access to limited functionality.
The licensed versions provide all available functionality and allow you to create different numbers of policies according to the terms of the license.
All dedicated Data Hub licenses use the file extension .plic, as shown in the following example folder structure:
└─ <HiveMQ folder>
├── README.txt
├── audit
├── backup
├── bin
├── conf
├── data
├── extensions
├── license
│ ├── broker.lic
│ └── data-hub.plic
├── log
├── third-party-licenses
└── tools
New Variable Notation Handling in the HiveMQ Data Hub Interpolation Engine
Further action is only required if you need to migrate an existing HiveMQ Data Hub 4.15 policy to HiveMQ Data Hub version 4.16 or higher.
To align the behavior with other parts of the platform, the HiveMQ Data Hub supports only dollar + single curly bracket notation for policy variables: ${}.
Starting with HiveMQ 4.16, all variables that were previously denoted as $variable must be denoted as ${variable}.
Since HiveMQ 4.16, the HiveMQ Data Hub interpolation engine no longer interpolates variables that use the old $-prefixed notation.
The HiveMQ Data Hub currently offers four predefined policy variables:
Variable | Type | Old notation example | New notation example |
---|---|---|---|
clientId | String | $clientId | ${clientId} |
topic | String | $topic | ${topic} |
policyId | String | $policyId | ${policyId} |
validationResult | String | $validationResult | ${validationResult} |
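The following example policy uses the new ${} notation in the System.log messages of its onSuccess and onFailure pipelines: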
{
"id": "policy1",
"matching": {
"topicFilter": "topic/+"
},
"validation": {
"validators": [
{
"type": "schema",
"arguments": {
"strategy": "ALL_OF",
"schemas": [
{
"schemaId": "schema1",
"version": "latest"
}
]
}
}
]
},
"onSuccess": {
"pipeline": [
{
"id": "logOperationSuccess",
"functionId": "System.log",
"arguments": {
"level": "DEBUG",
"message": "${clientId} sent a publish on topic '${topic}' with result '${validationResult}'"
}
}
]
},
"onFailure": {
"pipeline": [
{
"id": "logOperationFailure",
"functionId": "System.log",
"arguments": {
"level": "WARN",
"message": "${clientId} sent an invalid publish on topic '${topic}' with result '${validationResult}'"
}
}
]
}
}
To include uninterpolated variables prefixed with dollar + curly bracket in a policy, escape the variable with a backslash: \${topic}.
Policy Migration
Use the following procedure to migrate existing policies that contain $-prefixed variables to the new notation (a scripted sketch of these steps follows the list):
- To get all existing policies in the broker, enter:
  curl -X GET http://localhost:8888/api/v1/data-validation/policies
- Change all variables in affected policies to use the new notation as described above.
- Delete each affected policy with the following command:
  curl -X DELETE -H "Content-Type: application/json" http://localhost:8888/api/v1/data-validation/policies/{policyId}
- Re-upload each newly migrated policy with the following command:
  curl -X POST --data @policy.json -H "Content-Type: application/json" http://localhost:8888/api/v1/data-validation/policies
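The following shell sketch shows how these steps could be scripted for a single policy file. It assumes GNU sed and jq are available, the HiveMQ REST API is reachable at http://localhost:8888, and the policy is stored locally in policy.json; adjust the names and paths to your deployment.
# Sketch only: migrate one policy file from $variable to ${variable} notation
API="http://localhost:8888/api/v1/data-validation/policies"
POLICY_FILE="policy.json"
# Read the policy id from the file (requires jq)
POLICY_ID=$(jq -r '.id' "$POLICY_FILE")
# Rewrite the four predefined variables to the new ${} notation (requires GNU sed)
sed -i -E 's/\$(clientId|topic|policyId|validationResult)/${\1}/g' "$POLICY_FILE"
# Delete the outdated policy, then upload the migrated version
curl -X DELETE -H "Content-Type: application/json" "$API/$POLICY_ID"
curl -X POST --data @"$POLICY_FILE" -H "Content-Type: application/json" "$API"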
New Unknown Variable Behavior in the HiveMQ Data Hub Interpolation Engine
HiveMQ automatically checks for the presence of unknown variables.
Starting with HiveMQ 4.16, if an unknown variable is present in a data validation policy, the policy is not created and an error is returned.
Previously, unknown variables in a data validation policy were ignored and not interpolated.
Currently, the HiveMQ Data Hub recognizes only the following four predefined variables as known variables:
- clientId
- topic
- policyId
- validationResult
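For example, a pipeline step like the following would now be rejected when you attempt to create the policy, because ${deviceName} is not one of the predefined variables (the variable name is purely illustrative):
{
  "id": "logUnknownVariable",
  "functionId": "System.log",
  "arguments": {
    "level": "DEBUG",
    "message": "Received a publish from ${deviceName}"
  }
}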
Renamed Policy Functions with Namespace in the HiveMQ Data Hub
HiveMQ 4.16 renames two existing policy functions with the namespace as a prefix:
- log changes to System.log
- to changes to Delivery.redirectTo
The new naming helps organize functions by their purpose for easier policy management.
As new functions are added, each function will be prefixed with the appropriate namespace. For example, the new Metrics.Counter.increment function in HiveMQ 4.16 follows this convention. In addition, the new naming convention can also accommodate multi-level namespaces.
To see how renamed functions are used in a policy, check our example policy.
Policy Migration
If you have existing policies that contain the outdated log and to functions, use the following procedure to migrate the affected policies to the new System.log and Delivery.redirectTo function names (a scripted sketch of the rename step follows the list):
- To get all existing policies in the broker, enter:
  curl -X GET http://localhost:8888/api/v1/data-validation/policies
- Change all functionId fields in affected policies to use the new function names (System.log and Delivery.redirectTo).
- Delete each outdated policy with the following command:
  curl -X DELETE -H "Content-Type: application/json" http://localhost:8888/api/v1/data-validation/policies/{policyId}
- Re-upload each newly revised policy with the following command:
  curl -X POST --data @policy.json -H "Content-Type: application/json" http://localhost:8888/api/v1/data-validation/policies
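As a sketch of the rename step, the outdated functionId values in a policy file could be rewritten with jq before you re-upload the policy. This assumes jq is installed and the policy is stored locally in policy.json:
# Sketch only: rename outdated functionIds in all pipeline steps of a policy file
jq '(.onSuccess.pipeline[]?.functionId, .onFailure.pipeline[]?.functionId) |=
      (if . == "log" then "System.log"
       elif . == "to" then "Delivery.redirectTo"
       else . end)' policy.json > policy-migrated.json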
Schema Version Support in the HiveMQ Data Hub
HiveMQ version 4.16 introduces the ability to associate different versions of a schema with the same schemaId.
Your HiveMQ system now automatically assigns a version number when you create or update a schema.
As a result, you must specify a version for all schemas in your policy configurations:
- To specify a particular version of a schema, enter the associated version number. For example, version:"1" or version:"2".
- To specify that your HiveMQ system automatically uses the most recent available version of the schema, enter version:"latest".
For policies created prior to version 4.16, HiveMQ automatically assigns the schema version "latest". Once you upgrade to HiveMQ 4.16, we highly recommend that you update all your policy files, including policies stored in your GitHub repository, and that you use the HiveMQ REST API to update all your policies in the HiveMQ Data Hub.
Schema versioning is a new feature of HiveMQ 4.16. If you are currently testing HiveMQ Data Hub version 4.15 and want to do a rolling upgrade from HiveMQ 4.15 to 4.16, do not attempt to add versions to your existing schemas until the upgrade to version 4.16 is complete.
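The following example policy pins one schema to version 1 and uses the latest available version of another: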
{
"id": "policy1",
"matching": {
"topicFilter": "topic/+"
},
"validation": {
"validators": [
{
"type": "schema",
"arguments": {
"strategy": "ALL_OF",
"schemas": [
{
"schemaId": "schema1",
"version": "1"
},
{
"schemaId": "schema2",
"version": "latest"
}
]
}
}
]
}
}
Changed Location for HiveMQ Enterprise Extension Configuration Files
To increase consistency and ease of use, HiveMQ 4.15 introduced a unified default configuration file location for all HiveMQ Enterprise extensions.
Since HiveMQ 4.15, the configuration location for each enterprise extension is conf/config.xml:
- HiveMQ Enterprise Extension for Kafka: HIVEMQ_HOME/extensions/hivemq-kafka-extension/conf/config.xml
- HiveMQ Enterprise Security Extension: HIVEMQ_HOME/extensions/hivemq-enterprise-security-extension/conf/config.xml
- HiveMQ Enterprise Extension for Amazon Kinesis: HIVEMQ_HOME/extensions/hivemq-amazon-kinesis-extension/conf/config.xml
- HiveMQ Enterprise Distributed Tracing Extension: HIVEMQ_HOME/extensions/hivemq-distributed-tracing-extension/conf/config.xml
- HiveMQ Enterprise Bridge Extension: HIVEMQ_HOME/extensions/hivemq-bridge-extension/conf/config.xml
- HiveMQ Enterprise Extension for Google Cloud Pub/Sub: HIVEMQ_HOME/extensions/hivemq-google-cloud-pubsub-extension/conf/config.xml
└─ <HiveMQ folder>
├── README.txt
├── audit
├── backup
├── bin
├── conf
├── data
├── extensions
│ ├── hivemq-<enterprise-extension-name>
│ │ ├── conf
│ │ │ ├── config.xml (needs to be added by the user)
│ │ │ ├── config.xsd
│ │ │ └── examples
│ │ │ └── config.xml (an example configuration)
│ │ ├── <enterprise-extension-name>.jar
│ │ ├── hivemq-extension.xml
│ │ └── third-party-licenses
│ │
├── license
├── log
├── third-party-licenses
└── tools
The previously used configuration file locations are still supported but may be deprecated in future versions. We recommend that you move your existing configurations to the new location.
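For example, an existing configuration could be moved to the new default location as follows. The example uses the Kafka extension; the old configuration file name differs per extension and is only a placeholder here, and $HIVEMQ_HOME stands for your HiveMQ installation folder:
# Example only: create the new conf folder and move the old configuration into it
mkdir -p "$HIVEMQ_HOME"/extensions/hivemq-kafka-extension/conf
mv "$HIVEMQ_HOME"/extensions/hivemq-kafka-extension/<old-config-file>.xml "$HIVEMQ_HOME"/extensions/hivemq-kafka-extension/conf/config.xml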
Changed Validation Behavior for the HiveMQ Configuration File
Since HiveMQ 4.10, HiveMQ automatically validates the values entered in your config.xml file during startup.
HiveMQ 4.10 added an XML Schema Definition (XSD) file that defines the structure HiveMQ uses for validation.
The XSD specifies the elements and attributes that can be used in a HiveMQ configuration.
If HiveMQ detects one or more invalid values in the configuration file, your HiveMQ deployment does not start.
When a startup fails due to an invalid configuration, HiveMQ logs detailed information about each configuration error to the hivemq.log file.
For more information, see Configuration Validation.
To continue the HiveMQ startup, we recommend that you review your log statements and enter valid configuration values where indicated.
In HiveMQ versions 4.9.x and lower, invalid configuration values detected during validation do not prevent HiveMQ startup. In these versions, when an invalid value is detected during startup, HiveMQ automatically uses the HiveMQ default configuration values, continues the startup, and logs a warning.
It is possible to revert to the configuration validation behavior of HiveMQ versions 4.9.x and lower.
You can enable the legacy startup validation behavior in two ways:
- Add a line with the following Java system property to your bin/run.sh file:
  JAVA_OPTS="$JAVA_OPTS -Dhivemq.config.skip-validation=true"
-or-
- Provide the following environment variable:
  export HIVEMQ_SKIP_CONFIG_VALIDATION=true
For more information, see Change Configuration Validation Behavior.
The new configuration validation of the HiveMQ broker can impact HiveMQ deployments that you manage with the HiveMQ Kubernetes Operator. For more information, see HiveMQ Kubernetes Operator - Configure Your HiveMQ Cluster.