Advanced Workshop - Prometheus

In this workshop, we will incorporate the use of Verification Providers and Templates. We will provide sample templates for Prometheus, but you can modify our examples to use the APM tool of your choice (Prometheus, Datadog, New Relic, CloudWatch).

For today’s workshop, we will walk through how to deploy Prometheus into your cluster or, if you already have Prometheus installed, how to retrieve the necessary information for the Ocean CD Verification Provider. Let’s dive into installing Prometheus.

Deploy Prometheus

First, we are going to install Prometheus. Run the following commands:

kubectl create namespace prometheus

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

helm install prometheus prometheus-community/prometheus \
    --namespace prometheus \
    --set alertmanager.persistentVolume.storageClass="gp2" \
    --set server.persistentVolume.storageClass="gp2"

Make note of the Prometheus endpoint in the Helm output (you will need it later). It should look similar to the following:

The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-server.prometheus.svc.cluster.local

We will use this DNS name shortly in the Verification Provider template. First, let’s check that the Prometheus components deployed as expected:

kubectl get all -n prometheus

You should see output similar to the following, with all resources Ready and Available:

NAME                                                 READY   STATUS    RESTARTS   AGE
pod/prometheus-alertmanager-868f8db8c4-67j2x         2/2     Running   0          78s
pod/prometheus-kube-state-metrics-6df5d44568-c4tkn   1/1     Running   0          78s
pod/prometheus-node-exporter-dh6f4                   1/1     Running   0          78s
pod/prometheus-node-exporter-v8rd8                   1/1     Running   0          78s
pod/prometheus-node-exporter-vcbjq                   1/1     Running   0          78s
pod/prometheus-pushgateway-759689fbc6-hvjjm          1/1     Running   0          78s
pod/prometheus-server-546c64d959-qxbzd               2/2     Running   0          78s

NAME                                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/prometheus-alertmanager         ClusterIP   10.100.38.47     <none>        80/TCP     78s
service/prometheus-kube-state-metrics   ClusterIP   10.100.165.139   <none>        8080/TCP   78s
service/prometheus-node-exporter        ClusterIP   None             <none>        9100/TCP   78s
service/prometheus-pushgateway          ClusterIP   10.100.150.237   <none>        9091/TCP   78s
service/prometheus-server               ClusterIP   10.100.209.224   <none>        80/TCP     78s

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/prometheus-node-exporter   3         3         3       3            3           <none>          78s

NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/prometheus-alertmanager         1/1     1            1           78s
deployment.apps/prometheus-kube-state-metrics   1/1     1            1           78s
deployment.apps/prometheus-pushgateway          1/1     1            1           78s
deployment.apps/prometheus-server               1/1     1            1           78s

NAME                                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/prometheus-alertmanager-868f8db8c4         1         1         1       78s
replicaset.apps/prometheus-kube-state-metrics-6df5d44568   1         1         1       78s
replicaset.apps/prometheus-pushgateway-759689fbc6          1         1         1       78s
replicaset.apps/prometheus-server-546c64d959               1         1         1       78s

Ocean CD Entities

For this next section, we will modify our end-to-end YAML to make sure our Ocean CD entities tie into the k8s resources we created in the last section.

Let’s start by creating a local file on your machine named oceancd-entity.yaml:


kind: "VerificationProvider"
name: "prometheus-vp"
clusterIds:
  - "oceancd-demo"
prometheus:
  address: "http://prometheus-server.prometheus.svc.cluster.local:80"

---

kind: verificationTemplate
name: oceancd-workshop-vt
metrics:
- name: My-first-metric
  interval: 5s
  count: 10
  failureCondition: result[0] >= 100
  failureLimit: 5
  provider:
    prometheus:
      query: sum(container_cpu_usage_seconds_total{namespace="oceancd-workshop"})

---

kind: Strategy
name: "oceancd-workshop"
canary: 
  backgroundVerification: 
    templateNames: 
      - "oceancd-workshop-vt"
  steps: 
    - name: "My-first-phase"
      setWeight: 20
      verification: 
        templateNames: 
          - "oceancd-workshop-vt"
    - name: "second-phase"
      setWeight: 40
      verification: 
        templateNames: 
          - "oceancd-workshop-vt"
    - name: "third-phase"
      setWeight: 80
      verification: 
        templateNames: 
          - "oceancd-workshop-vt"
      pause: 
        duration: 1m


---

kind: RolloutSpec
name: "OceanCD-Rolloutspec-1"
spotDeployment: 
  clusterId: "oceancd-demo"
  namespace: "oceancd-workshop"
  name: "nginx-deployment"
strategy: 
  name: "oceancd-workshop"
traffic: 
  canaryService: "rollouts-demo-canary"
  stableService: "rollouts-demo-stable"
failurePolicy: 
  action: abort
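Before moving on, it may help to see how the verificationTemplate’s failureCondition and failureLimit interact: the query runs every 5 seconds up to 10 times, a measurement fails when result[0] >= 100, and the verification fails once 5 measurements have failed. The shell sketch below walks hypothetical query results through that counting logic; the RESULT values are invented for illustration and are not real metrics:

```shell
# Sketch of the failureCondition / failureLimit counting logic.
# The RESULT values are hypothetical query samples, not real metrics.
FAILURE_LIMIT=5
FAILURES=0
STATUS="passed"
for RESULT in 42 97 103 110 95 120 130 99 101 105; do
  if [ "$RESULT" -ge 100 ]; then                 # failureCondition: result[0] >= 100
    FAILURES=$((FAILURES + 1))
  fi
  if [ "$FAILURES" -ge "$FAILURE_LIMIT" ]; then  # failureLimit: 5
    STATUS="failed"
    break
  fi
done
echo "verification ${STATUS} (${FAILURES} failing measurements)"
# → verification failed (5 failing measurements)
```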

Before applying this YAML file to the cluster, we need to verify two items:

  1. clusterIds - This is referenced in both the Verification Provider and the Rollout Spec. If the name of the cluster you created is not “oceancd-demo”, you will need to update it.

  2. Verification Provider address - If you followed the tutorial above for deploying Prometheus, you should not need to change the address listed in the YAML. To confirm the address to use in the Verification Provider, run the following command:

 kubectl get svc -n prometheus 

We can then construct the address using the following naming convention:

<svc name>.<ns name>.svc.cluster.local:<port>
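For example, a quick shell sketch assembling the address from the prometheus-server service details (the values are the workshop defaults from the install above; substitute your own if they differ):

```shell
# Build the Verification Provider address from the service details.
# These values come from the `kubectl get svc -n prometheus` output.
SVC_NAME="prometheus-server"
NAMESPACE="prometheus"
PORT="80"
ADDRESS="http://${SVC_NAME}.${NAMESPACE}.svc.cluster.local:${PORT}"
echo "${ADDRESS}"
# → http://prometheus-server.prometheus.svc.cluster.local:80
```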

From the service output above, the full address would be:

address: "http://prometheus-server.prometheus.svc.cluster.local:80"
  3. Run the command:
oceancd apply -f oceancd-entity.yaml

Make sure the address listed in your Verification Provider includes http:// and the port, as shown above.
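As a quick sanity check before applying, you can verify both pieces are present. This is just a sketch; the grep pattern and the address value (taken from this workshop) are illustrative:

```shell
# Check that the address includes the http:// scheme and an explicit port.
ADDRESS="http://prometheus-server.prometheus.svc.cluster.local:80"
if printf '%s' "$ADDRESS" | grep -Eq '^http://[^:]+:[0-9]+$'; then
  RESULT="address format OK"
else
  RESULT="address is missing http:// or the port"
fi
echo "$RESULT"
# → address format OK
```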

Trigger a Rollout

Now that we have created all of the k8s resources and Ocean CD entities, let’s test it! Go back to the first YAML file you created (k8s-e2e.yaml) and update the image version to the following:

image: public.ecr.aws/nginx/nginx:1.23

Now save that file and apply it:

kubectl apply -f k8s-e2e.yaml

This will trigger your first rollout. You should now see all the pieces coming together:

  • Production-ready canary deployment
  • Verification-driven rollouts - tied into your actual Prometheus metrics
  • Spot Deployment
  • Automated rollbacks in the event of a failure

Now let’s make some changes to the underlying strategy. Change the Ocean CD strategy within oceancd-entity.yaml to the following:

kind: Strategy
name: "oceancd-workshop"
canary: 
  backgroundVerification: 
    templateNames: 
      - "oceancd-workshop-vt"
  steps: 
    - name: "My-first-phase"
      setWeight: 20
      verification: 
        templateNames: 
          - "oceancd-workshop-vt"
      pause: 
        duration: 
    - name: "second-phase"
      setWeight: 40
      verification: 
        templateNames: 
          - "oceancd-workshop-vt"
    - name: "third-phase"
      setWeight: 80
      verification: 
        templateNames: 
          - "oceancd-workshop-vt"
      pause: 
        duration: 1m

This inserts a pause with an unspecified duration into the strategy. This means that before completing the first phase of the rollout, Ocean CD will require a manual approval to proceed to the next phase. This lets you simulate the scenario of a senior developer needing to manually promote a deployment to the next phase, a commonly requested feature among our customers.
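For reference, the only difference from the earlier strategy is the empty pause on the first step:

```yaml
- name: "My-first-phase"
  setWeight: 20
  verification:
    templateNames:
      - "oceancd-workshop-vt"
  pause:
    duration:
```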

  1. Run the command:
oceancd apply -f oceancd-entity.yaml

Trigger a Rollout

Now that we have modified the strategy, let’s test it. Go back to the k8s-e2e.yaml file and update the image version to the following:

image: public.ecr.aws/nginx/nginx:1-perl

Now save that file and apply it:

kubectl apply -f k8s-e2e.yaml

Now if we review the rollout within the Ocean CD console, we will see the UI prompting us to manually promote the rollout to the next phase. Once we click Promote, it will follow the same strategy as before.

Conclusion

This concludes the content for the advanced workshop. We encourage you to continue making changes to your Ocean CD entities and testing the capabilities of Ocean CD. Some additional scenarios to test:

  • Change Prometheus queries - incorporate arguments into your query
  • Change Strategy - Include manual approval processes at different phases
  • Create another Spot Deployment - Reuse the same strategy
  • Make a change that will intentionally fail your rollout and see the automated rollback
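As a starting point for the first scenario, a modified metric that watches the CPU usage rate rather than the raw counter might look like the following sketch (the query window and threshold are illustrative, not tuned values):

```yaml
metrics:
- name: cpu-usage-rate
  interval: 5s
  count: 10
  failureCondition: result[0] >= 0.5
  failureLimit: 5
  provider:
    prometheus:
      query: sum(rate(container_cpu_usage_seconds_total{namespace="oceancd-workshop"}[1m]))
```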

Windows Users

At this time, the Ocean CD CLI is only supported on macOS and Linux operating systems. If you are a Windows user, we are working to provide these same templates in JSON format, and you can easily run them in Postman by following the link below:

Run in Postman