
Setup Kamaji on Azure

This guide will lead you through the process of creating a working Kamaji setup on MS Azure.

The material here is relatively dense. We strongly encourage you to dedicate time to walk through these instructions, with a mind to learning. We do NOT provide any "one-click" deployment here. However, once you've understood the components involved it is encouraged that you build suitable, auditable GitOps deployment processes around your final infrastructure.

The guide requires:

  • a bootstrap machine
  • a Kubernetes cluster to run the Admin and Tenant Control Planes
  • an arbitrary number of machines to host Tenants' workloads


Prepare the bootstrap workspace

On the bootstrap machine, clone the repo and prepare the workspace directory:

git clone https://github.com/clastix/kamaji
cd kamaji/deploy

We assume you have installed on the bootstrap machine:

  • Azure CLI (az)
  • kubectl
  • helm
  • jq
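
As a quick sanity check, you can verify the tools are available on the bootstrap machine:

az version
kubectl version --client
helm version
jq --version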

Make sure you have a valid Azure subscription, and log in to Azure:

az login

az account set --subscription "MySubscription"
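
You can confirm the active subscription before proceeding:

az account show --output table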

Access Management Cluster

In Kamaji, a Management Cluster is a regular Kubernetes cluster which hosts zero to many Tenant Cluster Control Planes. The Management Cluster acts as a cockpit for all the Tenant clusters and implements monitoring, logging, and governance of the whole Kamaji setup, including all Tenant Clusters. For this guide, we're going to use an instance of Azure Kubernetes Service (AKS) as the Management Cluster.

Throughout the following instructions, shell variables are used to indicate values that you should adjust to your own Azure environment:

source kamaji-azure.env
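
As a reference, a kamaji-azure.env along the following lines is enough for the rest of this guide; every value below is illustrative and must be adapted to your own environment:

export KAMAJI_RG=kamaji-rg
export KAMAJI_REGION=westeurope
export KAMAJI_CLUSTER=kamaji
export KAMAJI_VNET_NAME=kamaji-vnet
export KAMAJI_VNET_ADDRESS=10.0.0.0/16
export KAMAJI_SUBNET_NAME=kamaji-subnet
export KAMAJI_SUBNET_ADDRESS=10.0.1.0/24

export TENANT_NAMESPACE=tenants
export TENANT_NAME=tenant-00
export TENANT_DOMAIN=${KAMAJI_REGION}.cloudapp.azure.com
export TENANT_VERSION=v1.25.0
export TENANT_PORT=6443
export TENANT_PROXY_PORT=8132
export TENANT_POD_CIDR=10.36.0.0/16
export TENANT_SVC_CIDR=10.96.0.0/16
export TENANT_SUBNET_NAME=tenant-00-subnet
export TENANT_SUBNET_ADDRESS=10.0.2.0/24
export TENANT_VMSS=tenant-00-vmss
export TENANT_VM_IMAGE=Ubuntu2204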

az group create \
  --name $KAMAJI_RG \
  --location $KAMAJI_REGION

az network vnet create \
  --resource-group $KAMAJI_RG \
  --name $KAMAJI_VNET_NAME \
  --location $KAMAJI_REGION \
  --address-prefix $KAMAJI_VNET_ADDRESS

az network vnet subnet create \
  --resource-group $KAMAJI_RG \
  --vnet-name $KAMAJI_VNET_NAME \
  --name $KAMAJI_SUBNET_NAME \
  --address-prefixes $KAMAJI_SUBNET_ADDRESS

KAMAJI_SUBNET_ID=$(az network vnet subnet show \
  --resource-group ${KAMAJI_RG} \
  --vnet-name ${KAMAJI_VNET_NAME} \
  --name ${KAMAJI_SUBNET_NAME} \
  --query id --output tsv)

az aks create \
  --resource-group $KAMAJI_RG \
  --name $KAMAJI_CLUSTER \
  --location $KAMAJI_REGION \
  --vnet-subnet-id $KAMAJI_SUBNET_ID \
  --zones 1 2 3 \
  --node-count 3 \
  --nodepool-name $KAMAJI_CLUSTER

Once the cluster formation succeeds, get credentials to access the cluster as admin:

az aks get-credentials \
  --resource-group $KAMAJI_RG \
  --name $KAMAJI_CLUSTER

And check that you can access it:

kubectl cluster-info

Install Cert Manager

Kamaji takes advantage of dynamic admission control, such as validating and mutating webhook configurations. These webhooks are secured by TLS, and the certificates are managed by cert-manager, making it a prerequisite that must be installed:

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.11.0 \
  --set installCRDs=true
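
Before moving on, you can wait for cert-manager to be up and running (the deployment name is the one created by the chart above):

kubectl -n cert-manager rollout status deployment/cert-manager
kubectl -n cert-manager get pods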

Install Kamaji Controller

Installing Kamaji via Helm charts is the preferred way. The Kamaji controller needs to access a Datastore in order to save the data of the tenants' clusters. The Kamaji Helm Chart provides the installation of a basic unmanaged etcd as datastore, out of the box.

Install Kamaji with helm using an unmanaged etcd as default datastore:

helm repo add clastix https://clastix.github.io/charts
helm repo update
helm install kamaji clastix/kamaji -n kamaji-system --create-namespace
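
Once the release is installed, check that the Kamaji controller is running and that the default datastore has been created (names assume a default installation of the chart):

kubectl -n kamaji-system get pods
kubectl get datastores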

A managed datastore is highly recommended in production

The kamaji-etcd project provides the code to set up a multi-tenant etcd running as a StatefulSet made of three replicas. Optionally, Kamaji offers support for a more robust storage system, such as MySQL, PostgreSQL, or a NATS-compatible database, thanks to the native kine integration.
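
As a sketch only, assuming the clastix/kamaji-etcd chart published in the repository added above, a dedicated multi-tenant etcd could be installed as follows; refer to the kamaji-etcd documentation for the supported values and for wiring it up as a Kamaji DataStore:

helm install kamaji-etcd clastix/kamaji-etcd -n kamaji-system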

Create Tenant Cluster

Tenant Control Plane

With Kamaji on AKS, the tenant control plane is accessible:

  • by tenant worker nodes, through an internal loadbalancer
  • by the tenant admin user, through an external loadbalancer responding to https://${TENANT_NAME}.${TENANT_DOMAIN}:443

Create a tenant control plane, for example, saving the manifest as ${TENANT_NAMESPACE}-${TENANT_NAME}-tcp.yaml:

apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: ${TENANT_NAME}
  namespace: ${TENANT_NAMESPACE}
  labels:
    tenant.clastix.io: ${TENANT_NAME}
spec:
  dataStore: default
  controlPlane:
    deployment:
      replicas: 3
      extraArgs:
        apiServer: []
        controllerManager: []
        scheduler: []
      resources:
        apiServer:
          requests:
            cpu: 250m
            memory: 512Mi
          limits: {}
        controllerManager:
          requests:
            cpu: 125m
            memory: 256Mi
          limits: {}
        scheduler:
          requests:
            cpu: 125m
            memory: 256Mi
          limits: {}
    service:
      additionalMetadata:
        annotations:
          service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      serviceType: LoadBalancer
  kubernetes:
    version: ${TENANT_VERSION}
    kubelet:
      cgroupfs: systemd
    admissionControllers:
      - ResourceQuota
      - LimitRanger
  networkProfile:
    port: ${TENANT_PORT}
    certSANs:
    - ${TENANT_NAME}.${TENANT_DOMAIN}
    serviceCidr: ${TENANT_SVC_CIDR}
    podCidr: ${TENANT_POD_CIDR}
  addons:
    coreDNS: {}
    kubeProxy: {}
    konnectivity:
      server:
        port: ${TENANT_PROXY_PORT}
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits: {}
---
apiVersion: v1
kind: Service
metadata:
  name: ${TENANT_NAME}-public
  namespace: ${TENANT_NAMESPACE}
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: ${TENANT_NAME}
spec:
  ports:
  - port: 443
    protocol: TCP
    targetPort: ${TENANT_PORT}
  selector:
    kamaji.clastix.io/name: ${TENANT_NAME}
  type: LoadBalancer

kubectl -n ${TENANT_NAMESPACE} apply -f ${TENANT_NAMESPACE}-${TENANT_NAME}-tcp.yaml
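
After applying the manifest, wait for the TenantControlPlane resource to report a Ready status (printed columns may vary between Kamaji versions):

kubectl -n ${TENANT_NAMESPACE} get tenantcontrolplanes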

Make sure:

  • the annotation service.beta.kubernetes.io/azure-load-balancer-internal: "true" is set on the tcp service. It tells Azure to expose the service through an internal loadbalancer.

  • the annotation service.beta.kubernetes.io/azure-dns-label-name: ${TENANT_NAME} is set on the public loadbalancer service. It tells Azure to expose the Tenant Control Plane with the public domain name: ${TENANT_NAME}.${TENANT_DOMAIN}.
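
As a quick verification, list the tenant services in the Management Cluster and check that the tcp service received a private address while the public one received a public address:

kubectl -n ${TENANT_NAMESPACE} get svc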

Working with Tenant Control Plane

Check the access to the Tenant Control Plane:

curl -k https://${TENANT_NAME}.${KAMAJI_REGION}.cloudapp.azure.com/healthz
curl -k https://${TENANT_NAME}.${KAMAJI_REGION}.cloudapp.azure.com/version

Let's retrieve the kubeconfig in order to work with it:

kubectl get secrets -n ${TENANT_NAMESPACE} ${TENANT_NAME}-admin-kubeconfig -o json \
  | jq -r '.data["admin.conf"]' \
  | base64 --decode \
  > ${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig

kubectl --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig config \
  set-cluster ${TENANT_NAME} \
  --server https://${TENANT_NAME}.${KAMAJI_REGION}.cloudapp.azure.com

and let's check it out:

kubectl --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig get svc

default       kubernetes   ClusterIP    <none>        443/TCP     6m

Check out how the Tenant Control Plane advertises itself:

kubectl --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig get ep

NAME         ENDPOINTS           AGE
kubernetes   57m

Join worker nodes

The Tenant Control Plane is made of pods running in the Kamaji Management Cluster. At this point, the Tenant Cluster has no worker nodes. So, the next step is to join some worker nodes to the Tenant Control Plane.

Kamaji does not provide any helper for the creation of tenant worker nodes; instead, it leverages the Cluster API. This allows you to create the Tenant Clusters, including worker nodes, in a completely declarative way. Currently, a Cluster API ControlPlane provider for Azure is not yet available: check the roadmap on the official repository.

An alternative approach to create and join worker nodes in Azure is to manually create the VMs, turn them into Kubernetes worker nodes, and then join them through the kubeadm command.

Create an Azure Virtual Machine Scale Set to host worker nodes

az network vnet subnet create \
   --resource-group $KAMAJI_RG \
   --vnet-name $KAMAJI_VNET_NAME \
   --name $TENANT_SUBNET_NAME \
   --address-prefixes $TENANT_SUBNET_ADDRESS

az vmss create \
   --name $TENANT_VMSS \
   --resource-group $KAMAJI_RG \
   --image $TENANT_VM_IMAGE \
   --vnet-name $KAMAJI_VNET_NAME \
   --subnet $TENANT_SUBNET_NAME \
   --computer-name-prefix $TENANT_NAME- \
   --load-balancer "" \
   --instance-count 0

az vmss update \
   --resource-group $KAMAJI_RG \
   --name $TENANT_VMSS \
   --set virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].enableIPForwarding=true

az vmss scale \
   --resource-group $KAMAJI_RG \
   --name $TENANT_VMSS \
   --new-capacity 3

Once all the machines are ready, follow the related documentation, as sketched after this list, in order to:

  • install containerd as container runtime
  • install crictl, the command line for working with containerd
  • install kubectl, kubelet, and kubeadm in the desired version
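
A minimal sketch of those steps on Ubuntu nodes follows; the package names, and the Kubernetes apt repository setup omitted here, are assumptions to adapt to your distribution and to the desired ${TENANT_VERSION}:

# Illustrative only: run on every worker node
# Container runtime
sudo apt-get update
sudo apt-get install -y containerd
sudo systemctl enable --now containerd

# Kernel and sysctl prerequisites for Kubernetes networking
sudo modprobe br_netfilter
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/99-kubernetes.conf
sudo sysctl --system

# kubelet, kubeadm, kubectl and crictl (cri-tools) from the Kubernetes
# package repository, pinned to the tenant version
sudo apt-get install -y kubelet kubeadm kubectl cri-tools
sudo apt-mark hold kubelet kubeadm kubectl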

After the installation is complete on all the nodes, store the join command in a variable:

TENANT_ADDR=$(kubectl -n ${TENANT_NAMESPACE} get svc ${TENANT_NAME} -o json | jq -r '.status.loadBalancer.ingress[0].ip')
JOIN_CMD=$(echo "sudo kubeadm join ${TENANT_ADDR}:6443 ")$(kubeadm --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig token create --print-join-command | cut -d" " -f4-)
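
It is worth printing both values before moving on, to catch an empty address or token early:

echo ${TENANT_ADDR}
echo ${JOIN_CMD}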

Use a loop to run the join command on each of the nodes:

VMIDS=($(az vmss list-instances \
   --resource-group $KAMAJI_RG \
   --name $TENANT_VMSS \
   --query [].instanceId \
   --output tsv))

for i in ${!VMIDS[@]}; do
  az vmss run-command create \
      --name join-tenant-control-plane \
      --vmss-name $TENANT_VMSS \
      --resource-group $KAMAJI_RG \
      --instance-id ${VMIDS[$i]} \
      --script "${JOIN_CMD}"
done

Checking the nodes:

kubectl --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig get nodes

NAME               STATUS     ROLES    AGE    VERSION
tenant-00-000000   NotReady   <none>   112s   v1.25.0
tenant-00-000002   NotReady   <none>   92s    v1.25.0
tenant-00-000003   NotReady   <none>   71s    v1.25.0

The cluster needs a CNI plugin to get the nodes ready. In this guide, we are going to install Calico, but feel free to use another CNI of your choice.

Download the latest stable Calico manifest:

# Pick the Calico release matching your cluster; v3.24.1 is used here as an example
curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml

As per the Calico documentation, Calico in VXLAN mode is supported on Azure, while IPIP packets are blocked by the Azure network fabric. Make sure you edit the manifest above and set the following variables:

  • CLUSTER_TYPE="k8s"
  • CALICO_IPV4POOL_IPIP="Never"
  • CALICO_IPV4POOL_VXLAN="Always"

Apply to the Tenant Cluster:

kubectl --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig apply -f calico.yaml
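
You can follow the CNI rollout from the tenant side; the label below is the one used by the upstream Calico manifest:

kubectl --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig -n kube-system get pods -l k8s-app=calico-node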

And after a while, the nodes will be ready:

kubectl --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig get nodes 

NAME               STATUS   ROLES    AGE     VERSION
tenant-00-000000   Ready    <none>   3m38s   v1.25.0
tenant-00-000002   Ready    <none>   3m18s   v1.25.0
tenant-00-000003   Ready    <none>   2m57s   v1.25.0


Cleanup

To get rid of the Kamaji infrastructure, remove the resource group:

az group delete --name $KAMAJI_RG --yes --no-wait

That's all folks!