Headroom Lab

To demonstrate the effect of headroom on Pod scheduling time, we are going to create two VNGs: one with manual headroom capacity and one without. We will then run two new Deployments on the cluster, each directed to a different VNG. The Deployment directed to the VNG with headroom capacity will be scheduled much sooner, if not immediately, while the other will remain Pending for a while.


Follow the same VNG creation process as before; this time the VNG will be created with some manual headroom. Specify example-3-1 as the Name of the new VNG, and under the Node Selection section specify the following Node Labels (formatted as key: value):

  • env: ocean-workshop
  • example: 3-1

Scroll down and open the Advanced section, and find the Headroom configurations area.

Specify the following Headroom capacity and click the Save button.

Reserve: 3
CPU: 100
Memory: 256
GPU: 0
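Assuming the console uses the usual Ocean units (CPU in millicores, memory in MiB) and that Reserve is the number of headroom units, this configuration keeps three spare 100m/256Mi units available. A quick sketch of the total capacity reserved:

```shell
# Total spare capacity this headroom keeps available.
# Assumptions: CPU is in millicores, memory is in MiB, and Reserve is the
# number of headroom units, each of size CPU_PER_UNIT / MEM_PER_UNIT.
UNITS=3            # Reserve
CPU_PER_UNIT=100   # millicores per unit
MEM_PER_UNIT=256   # MiB per unit
echo "Reserved CPU:    $((UNITS * CPU_PER_UNIT))m"    # 300m
echo "Reserved memory: $((UNITS * MEM_PER_UNIT))Mi"   # 768Mi
```

Ocean keeps this capacity free on running nodes so that newly created Pods can be scheduled without waiting for a scale-up.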

Now let’s create another VNG, this one without any Headroom capacity.
Specify example-3-2 as the Name of the new VNG, and under the Node Selection section specify the following Node Labels (formatted as key: value):

  • env: ocean-workshop
  • example: 3-2

Create a file headroom-example.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-3-1
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-dev
        image: nginx
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
      nodeSelector:
        example: 3-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-3-2
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-dev
        image: nginx
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
      nodeSelector:
        example: 3-2

Allow a few minutes for the headroom to be created within the cluster before applying this file, so that the full benefit is visible.
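While waiting, one way to confirm the headroom has taken effect is to look for a node that Ocean launched for the example-3-1 VNG before any workload exists. This is a sketch that assumes kubectl is pointed at the workshop cluster and that the VNG's Node Labels propagate to its nodes:

```shell
# Headroom should make Ocean launch a node for the example-3-1 VNG before
# any Pod is deployed; the example-3-2 VNG should have no nodes yet.
# (Assumes the VNG node labels from the steps above end up on the nodes.)
kubectl get nodes -l example=3-1
kubectl get nodes -l example=3-2
```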

The first Deployment is directed to the VNG carrying the example: 3-1 node label, and the second to the VNG carrying the example: 3-2 label.

Apply the file to the cluster: run kubectl apply -f headroom-example.yaml

➜  kubectl apply -f headroom-example.yaml                                                                                                                                                                                                          
deployment.apps/example-3-1 created
deployment.apps/example-3-2 created

Run kubectl get pods,nodes. We'll now see that the Pods from the first Deployment (example-3-1) are scheduled and running, while the Pods from the second Deployment (example-3-2) are still Pending:

➜  kubectl get pods,nodes                                                                                                                                                                                                                   
NAME                               READY   STATUS    RESTARTS   AGE
pod/example-2-1-df4d44b8c-6m4c7    0/1     Pending   0          89m
pod/example-2-2-87c894f4d-pl88p    0/1     Pending   0          89m
pod/example-3-1-5b698dd-5n2wh      1/1     Running   0          11s
pod/example-3-1-5b698dd-cj5gp      1/1     Running   0          11s
pod/example-3-1-5b698dd-nj2tc      1/1     Running   0          11s
pod/example-3-2-8479878fc5-78qvt   0/1     Pending   0          7s
pod/example-3-2-8479878fc5-8426m   0/1     Pending   0          7s
pod/example-3-2-8479878fc5-vkcx5   0/1     Pending   0          7s

NAME                                        STATUS     ROLES   AGE     VERSION
node/aks-agentpool-46800815-vmss000000      Ready      agent   64d     v1.24.15
node/aks-omnp42b1a1b7-24932048-vmss000000   Ready      agent   2m58s   v1.25.11
node/aks-omnp9785780e-16929516-vmss000000   Ready      agent   2m26s   v1.25.11
node/aks-omnpd070f13c-31297240-vmss000000   Ready      agent   16m     v1.25.11
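Meanwhile, the Pending Pods can tell us what they are waiting for. As a sketch (again assuming kubectl is pointed at the workshop cluster), their events should show FailedScheduling until Ocean scales up a node matching the example: 3-2 selector:

```shell
# The Pending example-3-2 Pods should report FailedScheduling events until
# Ocean provisions a node matching their example: 3-2 nodeSelector.
kubectl get events --field-selector reason=FailedScheduling
```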

A couple of minutes later, we can see that the Pods from the second Deployment have also been scheduled:

➜  kubectl get pods,nodes
NAME                               READY   STATUS    RESTARTS   AGE
pod/example-2-1-df4d44b8c-6m4c7    0/1     Pending   0          89m
pod/example-2-2-87c894f4d-pl88p    0/1     Pending   0          89m
pod/example-3-1-5b698dd-5n2wh      1/1     Running   0          5m27s
pod/example-3-1-5b698dd-cj5gp      1/1     Running   0          5m27s
pod/example-3-1-5b698dd-nj2tc      1/1     Running   0          5m27s
pod/example-3-2-8479878fc5-78qvt   1/1     Running   0          2m17s
pod/example-3-2-8479878fc5-8426m   1/1     Running   0          2m17s
pod/example-3-2-8479878fc5-vkcx5   1/1     Running   0          2m17s

NAME                                        STATUS     ROLES   AGE     VERSION
node/aks-agentpool-46800815-vmss000000      Ready      agent   64d     v1.24.15
node/aks-omnp42b1a1b7-24932048-vmss000000   Ready      agent   2m58s   v1.25.11
node/aks-omnp9785780e-16929516-vmss000000   Ready      agent   2m26s   v1.25.11
node/aks-omnpd070f13c-31297240-vmss000000   Ready      agent   16m     v1.25.11
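The AGE columns in the two snapshots let us put a rough number on the difference. The example-3-1 Pods were Running within about 11 seconds of creation, while the example-3-2 Pods (created roughly 4 seconds later, per the first snapshot) had been Running for only 2m17s by the second snapshot. A back-of-the-envelope sketch of their Pending time:

```shell
# Approximate Pending duration for the example-3-2 Pods, derived from the
# snapshot ages above: created ~4s after example-3-1's Pods (AGE 11s vs 7s),
# and Running for 2m17s when example-3-1's Pods showed AGE 5m27s.
AGE_3_2=$((5*60 + 27 - 4))    # seconds since the example-3-2 Pods were created
RUNNING_FOR=$((2*60 + 17))    # seconds they had been Running at the snapshot
echo "example-3-2 Pods spent ~$((AGE_3_2 - RUNNING_FOR))s Pending"   # ~186s
```

In other words, the Deployment without headroom waited roughly three minutes for a node, while the one backed by headroom was scheduled almost immediately.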