Network Cost Lab

To illustrate how the Ocean Network Costs section of Cost Analysis can help with allocating costs and troubleshooting networking, we will create (2) groups of deployments:

Group 1 (default routing behavior)

  • (12) nginx pods hosting a large .mp4 file
  • (12) replicas ‘pulling’ the large file from the nginx pods
  • associated service(s)

Group 2 (Topology Aware Routing enabled)

  • (12) nginx pods hosting a large .mp4 file
  • (12) replicas ‘pulling’ the large file from the nginx pods
  • associated service(s)

At this point, Ocean Network Cost should already be installed and enabled (see the previous section).

Follow these instructions to see Network Cost in action, and to learn an effective way to reduce network cost while still benefiting from Ocean's default behavior of spreading replicas across AZs.

  • We will use pre-compiled images in this lab; details of their contents are at the end.
  • We assume you have (3) nodes running in your cluster, spread across (3) AZs (a quick check follows this list).
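
To confirm, you can list your nodes together with their zone labels (EKS applies the standard topology.kubernetes.io/zone label automatically):

kubectl get nodes -L topology.kubernetes.io/zone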

Create (2) namespaces for visibility:

kubectl create namespace spread
kubectl create namespace spread-hint

Group 1 >

Copy/paste the following to create a file called ‘nginx-spread.yaml’

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-spread
  labels:
    app: nginx-spread
spec:
  replicas: 12
  selector:
    matchLabels:
      app: nginx-spread
  template:
    metadata:
      labels:
        app: nginx-spread
    spec:
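      # Spread the (12) replicas evenly across zones: no zone may exceed
      # another by more than maxSkew (1) pod; otherwise the pod stays
      # Pending (DoNotSchedule)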
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: nginx-spread
      containers:
      - name: nginxspread
        image: public.ecr.aws/t8y7s3k5/nginx-large:latest
        ports:
        - containerPort: 80

Copy/paste the following to create a file called ‘nginx-spread-service.yaml’

apiVersion: v1
kind: Service
metadata:
  name: nginx-spread-service
  labels:
    app: nginx-spread
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx-spread

Run the following commands on your existing EKS cluster to create the deployment and associated service above:

kubectl apply -f nginx-spread.yaml --namespace=spread
kubectl apply -f nginx-spread-service.yaml --namespace=spread

You should now see (12) pods running evenly across the (3) nodes in your cluster.
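
To verify, list the pods along with the nodes they were scheduled on:

kubectl get pods --namespace=spread -o wide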

We will now deploy (12) pods that will ‘pull’ the large file from the previously deployed nginx pods.

Copy/paste the following to create a file called ‘fetch-spread.yaml’

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fetch-spread
spec:
  replicas: 12
  selector:
    matchLabels:
      app: fetch-spread
  template:
    metadata:
      labels:
        app: fetch-spread
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: fetch-spread
      containers:
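      # Pre-compiled image that repeatedly pulls the large file from the
      # nginx pods via the Service (image contents are described at the end)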
      - name: fetchspread
        image: public.ecr.aws/t8y7s3k5/fetch-spread:latest

Run the following command on your existing EKS cluster to create the deployment above:

kubectl apply -f fetch-spread.yaml --namespace=spread
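
If you want to confirm the fetchers are generating traffic, you can tail the logs of one of the pods (this assumes the pre-compiled fetch image writes its download activity to stdout):

kubectl logs deployment/fetch-spread --namespace=spread --tail=5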

Within an hour or so you will see the Network Cost column populated, and over time the ‘spread’ namespace will show ‘Inter-AZ’ traffic and the associated cost.

Group 2 >

In this section we will explore an option for avoiding pods communicating across AZs, which incurs Inter-AZ transfer costs. We will take advantage of ‘Topology Aware Routing’; details can be found in the Kubernetes documentation.

Copy/paste the following to create a file called ‘nginx-spread-hint.yaml’

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-spread-hint
  labels:
    app: nginx-spread-hint
spec:
  replicas: 12
  selector:
    matchLabels:
      app: nginx-spread-hint
  template:
    metadata:
      labels:
        app: nginx-spread-hint
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: nginx-spread-hint
      containers:
      - name: nginxspread
        image: public.ecr.aws/t8y7s3k5/nginx-large:latest
        ports:
        - containerPort: 80

Copy/paste the following to create a file called ‘nginx-spread-hint-service.yaml’

apiVersion: v1
kind: Service
metadata:
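  # Asks Kubernetes to populate zone hints on the Service's EndpointSlices
  # so that kube-proxy prefers endpoints in the client's own zone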
  annotations:
    service.kubernetes.io/topology-aware-hints: auto
  name: nginx-spread-hint-service
  labels:
    app: nginx-spread-hint
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx-spread-hint
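
Note: the topology-aware-hints annotation was deprecated in Kubernetes v1.27 in favor of service.kubernetes.io/topology-mode. If your cluster runs v1.27 or later, the equivalent annotation would look like this:

metadata:
  annotations:
    service.kubernetes.io/topology-mode: Auto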

Run the following commands on your existing EKS cluster to create the deployment and associated service above:

kubectl apply -f nginx-spread-hint.yaml --namespace=spread-hint
kubectl apply -f nginx-spread-hint-service.yaml --namespace=spread-hint

You should now see (12) pods running evenly across the (3) nodes in your cluster.
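
You can also check whether zone hints were actually applied by inspecting the Service’s EndpointSlices; hints appear as forZones entries (Kubernetes may withhold hints if, for example, allocatable CPU is too unbalanced across zones):

kubectl get endpointslices --namespace=spread-hint \
  -l kubernetes.io/service-name=nginx-spread-hint-service \
  -o yaml | grep -A 2 "hints:"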

We will now deploy (12) pods that will ‘pull’ the large file from the previously deployed nginx pods.

Copy/paste the following to create a file called ‘fetch-spread-hint.yaml’

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fetch-spread-hint
spec:
  replicas: 12
  selector:
    matchLabels:
      app: fetch-spread-hint
  template:
    metadata:
      labels:
        app: fetch-spread-hint
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: fetch-spread-hint
      containers:
      - name: fetchspread
        image: public.ecr.aws/t8y7s3k5/fetch-spread-hint:latest

Run the following command on your existing EKS cluster to create the deployment above:

kubectl apply -f fetch-spread-hint.yaml --namespace=spread-hint

In an hour or so you will see the ‘spread-hint’ namespace populated as well; with Topology Aware Routing in place, it should show little to no ‘Inter-AZ’ traffic compared to the ‘spread’ namespace.

Note the differences in ‘Inter-AZ’ traffic and cost between the (2) groups.