Kubernetes Distributions
A Kubernetes distribution is a packaged version of Kubernetes that includes additional features, tools, integrations, and support. There are a large number of certified Kubernetes service providers and certified Kubernetes distributions. For more information, see Kubernetes partners.
The setup of Kubernetes partner offerings varies. Individual configuration can be required to meet vendor-specific requirements for the container runtime, network configuration, and more.
Depending on the distribution, different vendor-specific steps can be required to use the HiveMQ Platform Operator in a particular environment.
The following examples provide step-by-step procedures for a few popular Kubernetes distributions.
Installation on Red Hat OpenShift Container Platform
OpenShift is a family of containerization software products developed by Red Hat that uses Kubernetes as the underlying container orchestration engine.
OpenShift Container Platform is a hybrid cloud platform that simplifies the deployment, management, and scaling of containerized applications.
The tools, services, and automation that OpenShift Container Platform provides help organizations adopt containerization at scale in a secure and manageable way.
Requirements
-
See the HiveMQ Operator Requirements.
-
Running OpenShift Container Platform cluster version 4.14.x or higher.
-
An OpenShift user account with cluster administration privileges.
-
Installed OpenShift CLI.
-
Installed jq CLI helper command.
Preparation
-
Log in to your OpenShift Container Platform cluster with the following CLI command:
oc login -u <your-user> https://<your-custom-openshift-console-url>:<your-port>
-
Verify that your user account has cluster-admin privileges:
oc get clusterrolebindings -o json | jq '.items[] | select(.metadata.name=="<your-user>")'
The output shows the matching cluster role binding resource with the cluster-admin cluster role bound to <your-user>.
-
Verify that the Kubernetes control plane is running:
oc cluster-info
-
Verify your Kubernetes cluster nodes are available:
oc get nodes
-
Make sure to add or update your local Helm repository:
helm repo add hivemq https://hivemq.github.io/helm-charts
helm repo update
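Optionally, verify that the HiveMQ charts are available from the repository before you continue. This is a quick sanity check, not a required step:
helm search repo hivemq
The output should include the hivemq/hivemq-platform-operator and hivemq/hivemq-platform charts.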
Installation
The installation on OpenShift requires a user account with cluster administration privileges. Administrative rights are needed because some of the required Kubernetes resources are cluster-wide in scope. Examples are the HiveMQ Platform custom resource definition and the OpenShift cluster role and cluster role bindings that the Platform Operator uses to manage HiveMQ Platforms across different namespaces.
Additionally, OpenShift by default prevents processes in pods from running as the root user.
During the creation of a namespace (project), OpenShift assigns a User ID (UID) range, a supplemental group ID (GID) range, and unique SELinux MCS labels to the namespace.
By default, fsGroup has no range defined; instead, it is equal to the minimum value of the openshift.io/sa.scc.supplemental-groups annotation.
OpenShift assigns container processes root group (GID=0) permissions and a random user ID (UID).
When a pod is deployed into the namespace, OpenShift assigns the first UID and GID from the defined namespace range to the pod.
Any attempt to assign a UID outside the assigned range causes the pod to fail or requires special privileges.
The HiveMQ Platform Operator and the HiveMQ Platform must run with a non-root user (UID) from the range defined for the namespace.
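For illustration, the relevant namespace annotations look similar to the following. The ranges shown are placeholder values; your cluster assigns its own:
metadata:
  annotations:
    openshift.io/sa.scc.supplemental-groups: 1000680000/10000
    openshift.io/sa.scc.uid-range: 1000680000/10000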
-
Create a new namespace (project) to install the HiveMQ Platform Operator:
oc new-project <hivemq-platform-operator-namespace>
-
Review the range of valid UIDs and GIDs defined in your <hivemq-platform-operator-namespace>:
oc describe namespace <hivemq-platform-operator-namespace>
Check your defined ranges in the annotations field of your namespace.
UIDs are defined in the openshift.io/sa.scc.uid-range annotation field.
Supplemental GIDs are defined in the openshift.io/sa.scc.supplemental-groups annotation field.
-
Configure your HiveMQ Platform Operator’s podSecurityContext in your operator-values.yaml file (a complete sketch of the values files follows this procedure):
podSecurityContext:
  enabled: true
  runAsNonRoot: true
Omit the runAsUser configuration so that OpenShift automatically assigns a random UID from the defined range of UIDs.
Alternatively, you can define the runAsUser configuration with a UID from the defined range of UIDs.
The GID on OpenShift is always the root group (GID=0). Therefore, runAsGroup must not be defined in the configuration.
-
Install the HiveMQ Platform Operator in your namespace:
helm upgrade --install <your-hivemq-operator> hivemq/hivemq-platform-operator -f operator-values.yaml -n <hivemq-platform-operator-namespace>
-
Create an additional namespace (project) on OpenShift for the HiveMQ Platform:
oc new-project <hivemq-platform-namespace>
-
Review the range of valid UIDs and GIDs defined in your <hivemq-platform-namespace>:
oc describe namespace <hivemq-platform-namespace>
Check your defined ranges in the annotations field of your namespace.
UIDs are defined in the openshift.io/sa.scc.uid-range annotation field.
Supplemental GIDs are defined in the openshift.io/sa.scc.supplemental-groups annotation field.
-
Configure your HiveMQ Platform’s podSecurityContext in your platform-values.yaml file (see the sketch of the values files after this procedure):
nodes:
  podSecurityContext:
    enabled: true
    runAsNonRoot: true
Omit the runAsUser configuration so that OpenShift automatically assigns a random UID from the defined range of UIDs.
Alternatively, you can define the runAsUser configuration with a UID from the defined range of UIDs.
The GID on OpenShift is always the root group (GID=0). Therefore, runAsGroup must not be defined in the configuration.
-
Install the HiveMQ Platform in your namespace:
helm upgrade --install <your-hivemq-platform> hivemq/hivemq-platform -f platform-values.yaml -n <hivemq-platform-namespace>
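For reference, the two values files used in the install commands of this procedure can be as minimal as the following sketches. The file names match the commands above, and the security settings mirror the snippets shown in the steps (no runAsUser and no runAsGroup, so that OpenShift assigns the UID and keeps the root group):
# operator-values.yaml (minimal sketch)
podSecurityContext:
  enabled: true
  runAsNonRoot: true
# platform-values.yaml (minimal sketch)
nodes:
  podSecurityContext:
    enabled: true
    runAsNonRoot: true
After the pods are running, you can optionally confirm the assigned UID and the root group from inside a container. This check assumes that the container image provides the id utility:
oc exec <your-hivemq-pod> -n <hivemq-platform-namespace> -- id
The output should show a UID from the namespace-defined range and gid=0(root).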
Installation on Azure Kubernetes Service (AKS)
Azure Kubernetes Service (AKS) is a managed container orchestration service provided by Microsoft Azure. AKS allows users to deploy, manage, and scale containerized applications using Kubernetes, without the need to manage the underlying infrastructure. For organizations that use Microsoft Azure as their cloud provider, AKS is a popular choice to simplify the process of creating, configuring, and scaling Kubernetes clusters.
Requirements
-
See the HiveMQ Operator Requirements.
-
Running AKS cluster version 1.27.x or higher.
-
Installed Azure CLI.
Preparation
-
Create an Azure Resource Group:
az group create --name <resource-name> --location <azure-region>
-
Create an AKS cluster with 3 nodes:
az aks create -g <resource-name> -n <cluster-name> --enable-managed-identity --node-count 3 --enable-addons monitoring --generate-ssh-keys
-
Add access credentials of the cluster to your local Kubernetes context:
az aks get-credentials --resource-group <resource-name> --name <cluster-name>
-
Check the status of your Kubernetes cluster:
az aks show --name <cluster-name> --resource-group <resource-name>
az aks nodepool list --cluster-name <cluster-name> --resource-group <resource-name> --output table
-
Get the list of available Kubernetes nodes:
kubectl get nodes
-
Make sure to add or update your local Helm repository:
helm repo add hivemq https://hivemq.github.io/helm-charts
helm repo update
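The installation commands in the next section pass -n <namespace>. If that namespace does not exist yet, create it first, or add the --create-namespace flag to the helm install command:
kubectl create namespace <namespace>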
Installation
-
Install the HiveMQ Platform Operator:
helm install <your-operator> hivemq/hivemq-platform-operator -n <namespace>
-
Customize the HiveMQ Platform and add a load balancer in your customized-values.yaml file:
services:
  - type: mqtt
    exposed: true
    containerPort: 1883
    serviceType: LoadBalancer
  - type: control-center
    exposed: true
    containerPort: 8080
    port: 80
    serviceType: LoadBalancer
-
Install the HiveMQ Platform:
helm install <your-hivemq-platform> hivemq/hivemq-platform -n <namespace> -f customized-values.yaml
-
Test your HiveMQ Platform:
helm test <your-hivemq-platform> -n <namespace>
-
List your HiveMQ services and their external access URLs:
kubectl get service -n <namespace>
NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
hivemq-platform-cc-80   LoadBalancer   10.0.104.134   **.***.***.**   80:30668/TCP   114m
The service is now reachable via the external IP address.
Make sure that network traffic is allowed.
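To verify external MQTT connectivity, you can publish a test message to the external IP of the MQTT load balancer service. The following is an illustrative check with the Mosquitto command-line client; any MQTT client works, and the placeholder must be replaced with the EXTERNAL-IP of your MQTT service:
mosquitto_pub -h <external-ip-of-mqtt-service> -p 1883 -t test/topic -m hello
The HiveMQ Control Center is reachable in a browser at http://<external-ip-of-control-center-service>.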
Installation on Amazon Elastic Kubernetes Service (EKS)
Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service provided by Amazon Web Services (AWS) that eliminates the need to install, operate, and maintain your own Kubernetes clusters.
For organizations that use AWS as their cloud provider, Amazon EKS is a popular choice to simplify the process of creating, configuring, and scaling Kubernetes clusters.
Requirements
-
See the HiveMQ Operator Requirements.
-
Running EKS cluster version 1.24.x or higher.
-
Installed AWS CLI.
Preparation
-
Create an EKS cluster with 3 nodes:
For more information, see AWS create cluster and AWS Creating a managed node group. An example using the eksctl CLI is shown after this preparation list.
-
Add the access credentials of the cluster to your local Kubernetes context:
aws eks update-kubeconfig --region <region> --name <cluster-name>
-
Check the status of your Kubernetes cluster:
aws eks describe-cluster --name <cluster-name> --region <region> --query 'cluster.{Endpoint:endpoint,Status:status}'
-
View the list of available Kubernetes nodes:
kubectl get nodes
-
Make sure to add or update your local Helm repository:
helm repo add hivemq https://hivemq.github.io/helm-charts
helm repo update
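As an alternative to the console and CLI steps linked in the first preparation step, you can also create the cluster with the eksctl CLI. This is a minimal sketch; adjust the node count, region, and any other options to your environment:
eksctl create cluster --name <cluster-name> --region <region> --nodes 3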
Installation
-
Install the HiveMQ Platform Operator:
helm install <your-operator> hivemq/hivemq-platform-operator -n <namespace>
-
Customize the HiveMQ Platform and add a load balancer in your customized-values.yaml file:
services:
  - type: mqtt
    exposed: true
    containerPort: 1883
    serviceType: LoadBalancer
  - type: control-center
    exposed: true
    containerPort: 8080
    serviceType: LoadBalancer
-
Install the HiveMQ Platform:
helm install <your-hivemq-platform> hivemq/hivemq-platform -n <namespace> -f customized-values.yaml
-
Test your HiveMQ Platform:
helm test <your-hivemq-platform> -n <namespace>
-
List your HiveMQ services and their external access URLs:
kubectl get service -n <namespace>
-
Check the output of the command:
NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
hivemq-platform-cc-80   LoadBalancer   10.0.104.134   **.***.***.**   80:30668/TCP   114m
-
The service is now reachable via the external IP address.
Make sure that network traffic is allowed.
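As a final check before connecting clients, confirm that the HiveMQ Platform pods are running:
kubectl get pods -n <namespace>
All HiveMQ pods should report a Running status.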