
Kubernetes cheatsheet

1. Create a Cluster

There are many tools you can choose for creating a Kubernetes cluster. kubeadm is part of the official Kubernetes project and is designed to be a minimalistic, composable, and extensible way to bootstrap a Kubernetes cluster. k3s is a lightweight, standalone distribution of Kubernetes. It is designed to be a fully compliant Kubernetes distribution while being optimized for resource-constrained environments.

  • If you need more control and are comfortable with manual configurations, kubeadm might be a better fit.
  • If you prefer a simpler, all-in-one solution with less manual intervention, k3s would be a better choice.

2. Deploy an App

Once you have a running Kubernetes cluster, you can deploy your applications on top of it. Your applications must be packaged into one of the supported container formats before deployment.

You can use the Kubernetes command-line interface, kubectl, to create and manage a Deployment. kubectl uses the Kubernetes API to interact with the cluster.

The Deployment instructs Kubernetes how to create and update instances of your application. Once you’ve created a Deployment, the Kubernetes control plane schedules the application instances included in that Deployment to run on individual Nodes in the cluster.
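As a sketch, a minimal Deployment manifest might look like the following (the name, label, image, and replica count are placeholder values, not from any specific application):

```yaml
# deployment.yaml -- a minimal Deployment (name, labels, and image are placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2                  # desired number of Pod instances
  selector:
    matchLabels:
      app: my-app              # must match the Pod template's labels
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25      # any container image your registry serves
        ports:
        - containerPort: 8080
```

You would apply this with kubectl apply -f deployment.yaml; the control plane then schedules the requested Pods onto available Nodes.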

  • Deploy your application on Kubernetes: kubectl create deployment <DeploymentName> --image=<Image>
  • List your deployments: kubectl get deployments
  • Check nodes in the cluster: kubectl get nodes
  • Forward communications into the cluster-wide, private network: kubectl proxy

3. Explore your App

To get information about deployed applications and their environments, the most common operations are:

  • kubectl get – list resources
  • kubectl describe – show detailed information about a resource
  • kubectl logs – print the logs from a container in a pod
  • kubectl exec – execute a command on a container in a pod
  • Check the existing Pods: kubectl get pods
  • Check what containers are inside a Pod and which images were used to build those containers: kubectl describe pods
  • Check the output of your application (through the kubectl proxy): curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/
  • Check the container logs: kubectl logs $POD_NAME

4. Expose your App to the public

A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them. Services enable a loose coupling between dependent Pods. A Service is defined using YAML or JSON.

Each Pod in a Kubernetes cluster has a unique IP address, even Pods on the same Node. However, these IPs are only reachable inside the cluster, so a Service is needed to expose the Pods to the outside world.

  • ClusterIP (default) – Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
  • NodePort – Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
  • LoadBalancer – Creates an external load balancer in the current cloud and assigns a fixed, external IP to the Service. Superset of NodePort.
  • ExternalName – Maps the Service to the contents of the externalName field (e.g., by returning a CNAME record with its value). No proxying of any kind is set up.
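A NodePort Service defined in YAML could look roughly like this (names, ports, and labels are illustrative placeholders):

```yaml
# service.yaml -- a NodePort Service (names, ports, and labels are placeholders)
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort          # reachable from outside via <NodeIP>:<NodePort>
  selector:
    app: my-app           # routes traffic to Pods carrying this label
  ports:
  - port: 8080            # the Service's cluster-internal port
    targetPort: 8080      # the containerPort on the Pods
    nodePort: 30080       # optional; allocated from 30000-32767 if omitted
```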
  • List the current Services from the cluster: kubectl get services
  • Expose a deployment to external traffic with NodePort: kubectl expose deployment <DeploymentName> --type="NodePort" --port 8080
  • Check the assigned IP address and port: kubectl get svc <ServiceName>
  • Confirm your App is still running: curl <NodeIP>:<NodePort>

5. Scale your App

Scaling in Kubernetes refers to the ability to dynamically adjust the number of running instances (Pods) of a given application or service based on the demand or workload. Kubernetes provides two primary types of scaling: horizontal scaling (scaling out) and vertical scaling (scaling up).

  • Scaling out: You can change the number of replicas in a deployment to achieve the scaling out.
  • Scaling up: You can change the number of computing resources to achieve the scaling up. It may require restarting the Pod to apply resource changes.
  • Scale the deployment to 5 replicas (you can change the number): kubectl scale deployment <DeploymentName> --replicas=5
  • After scaling, check the number of Pods: kubectl get pods -o wide
  • Check the info of the Service, which is load-balancing the traffic: kubectl describe services <ServiceName>
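For scaling up, the resource change happens in the Deployment's Pod template. A hedged sketch, with illustrative values:

```yaml
# Excerpt of a Deployment's Pod template -- raising CPU/memory ("scaling up").
# Values are illustrative; applying a change here restarts the Pods.
spec:
  template:
    spec:
      containers:
      - name: my-app
        resources:
          requests:
            cpu: "500m"       # guaranteed minimum for scheduling
            memory: "256Mi"
          limits:
            cpu: "1"          # hard ceiling
            memory: "512Mi"
```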

6. Update your App

In Kubernetes, rolling updates refer to the process of gradually updating or rolling out a new version of an application across the pods in a deployment.

The goal is to minimize downtime and ensure a smooth transition from the old version to the new one. Rolling updates follow a phased approach, where pods are replaced one at a time, ensuring that the application remains available during the update.


Steps in a Rolling Update

  1. Modify the deployment configuration file (e.g., deployment.yaml) to include the new version of the container image or any other desired changes.
  2. Apply the updated configuration to trigger the rolling update: kubectl apply -f updated-deployment.yaml
  3. Monitor the status of the deployments: kubectl get deployments
  4. Check the rollout status: kubectl rollout status deployment <DeploymentName>
  5. If issues are detected during the rolling update, roll back to the previous version: kubectl rollout undo deployment <DeploymentName>
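The pace of the phased replacement can be tuned in the Deployment spec. A minimal sketch of the relevant fields (values are illustrative):

```yaml
# Excerpt of a Deployment spec -- tuning how the rolling update proceeds.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod down at a time
      maxSurge: 1         # at most one extra Pod above the desired count
```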