Optimizing Kubernetes Pod Management: Part 1
Introduction
When working with Kubernetes (K8s), creating pods manually is discouraged. Instead, it is recommended to use the following K8s components/resources to manage pods effectively:
ReplicationController
ReplicaSet
Deployment
Selectors and labels are fundamental concepts in Kubernetes that help identify and group resources. They play a crucial role in various aspects of resource management and communication within a Kubernetes cluster.
Labels in Kubernetes
Labels are key-value pairs associated with Kubernetes resources. They are used to identify and categorize resources, allowing you to organize and group them based on different characteristics. Labels are applied to resources in the metadata section of their YAML configurations.
Here's an example of labels applied to a Pod:
apiVersion: v1
kind: Pod
metadata:
  name: webapp-pod
  labels:                      # Labels attached to the pod
    app: webapp                # Label "app" with value "webapp"
    environment: production    # Label "environment" with value "production"
spec:
  containers:
    - name: webapp-container
      image: my-webapp-image:latest
In this example:
The app label has the value webapp, categorizing the Pod as part of the "webapp" application.
The environment label has the value production, indicating that this Pod is meant for a production environment.
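Labels become useful the moment you start filtering on them. Here is a quick sketch of the label-related kubectl commands, assuming the webapp-pod above has been created (the tier=frontend label is only an illustration):
# List pods together with their labels
kubectl get pods --show-labels
# List only pods labelled app=webapp (an equality-based selector on the CLI)
kubectl get pods -l app=webapp
# Add or update a label on a running pod
kubectl label pod webapp-pod tier=frontend --overwrite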
Selectors in Kubernetes
Selectors are used to filter and select resources based on their labels. They enable you to perform targeted operations or resource selections within your cluster.
Types of Selectors
Equality-Based Selector: Selects resources where labels match specific values exactly.
Example: Select all Pods with the label app=webapp.
# Define a Service with an equality-based label selector
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:          # Selector to filter pods
    app: webapp      # Select pods with the label "app" set to "webapp"
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Set-Based Selector: Selects resources based on sets of conditions.
Match Labels: Selects resources whose labels match all of the specified values.
Example: Select all Pods with the labels app=webapp and environment=production.
selector:
  matchLabels:                 # Using a matchLabels selector
    app: webapp                # Select pods with the label "app" set to "webapp"
    environment: production    # Select pods with the label "environment" set to "production"
Match Expressions: Selects resources using more expressive conditions, with operators such as In, NotIn, Exists, and DoesNotExist.
Example: Select all Pods whose environment label is dev or staging and whose app label is not old-app.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
spec:
  replicas: 3
  selector:
    matchExpressions:          # Using set-based matchExpressions selectors
      - {key: environment, operator: In, values: [dev, staging]}
      - {key: app, operator: NotIn, values: [old-app]}
  template:
    metadata:
      labels:
        environment: dev       # Label applied to pods managed by this ReplicaSet
    spec:
      containers:
        - name: my-container
          image: my-image
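The same set-based conditions can also be tried out ad hoc on the command line; a small sketch, assuming pods carrying these labels already exist:
# Set-based selector on the CLI: environment is dev or staging
kubectl get pods -l 'environment in (dev,staging)'
# Combine conditions: environment in the set AND app is not old-app
kubectl get pods -l 'environment in (dev,staging),app notin (old-app)'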
ReplicationController (RC)
RC is a K8s component for creating and managing pods.
Ensures a specified number of pods are always running for your application.
Manages the pod lifecycle, creating new pods if needed (a quick self-healing check is sketched after the command list below).
Allows scaling up and down the pod count.
Save the following manifest as rc.yml.
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: javawebapprc
spec:
  replicas: 3
  selector:
    app: javawebapp
  template:
    metadata:
      name: javawebapppod
      labels:
        app: javawebapp
    spec:
      containers:
        - name: javawebappcontainer
          image: pankaj1998/maven-web-app
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: javawebappsvc
spec:
  type: NodePort
  selector:
    app: javawebapp
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30785
...
Commands for RC:
# Delete all resources
kubectl delete all --all
# Apply the RC configuration from rc.yml
kubectl apply -f rc.yml
# Get a list of pods with details
kubectl get pods -o wide
# Get a list of services
kubectl get svc
# Get a list of replication controllers
kubectl get rc
# Scale the ReplicationController to 5 replicas
kubectl scale rc javawebapprc --replicas 5
# Scale the ReplicationController back to 3 replicas
kubectl scale rc javawebapprc --replicas 3
# Delete the ReplicationController
kubectl delete rc javawebapprc
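To see the self-healing behaviour mentioned above, run the following while the ReplicationController is still present (a rough sketch; <pod-name> stands for one of the pod names listed, which will differ in your cluster):
# Note one of the javawebapprc pod names
kubectl get pods
# Delete that pod; the RC notices the shortfall
kubectl delete pod <pod-name>
# A replacement pod is created to keep 3 replicas running
kubectl get pods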
ReplicaSet
A next-gen replacement for ReplicationController with similar functions.
Maintains a specified number of pods.
Supports set-based selectors for pod management.
Save the following manifest as rs.yml.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: javawebapprs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: javawebapp
  template:
    metadata:
      name: javawebapppod
      labels:
        app: javawebapp
    spec:
      containers:
        - name: javawebappcontainer
          image: pankaj1998/maven-web-app
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: javawebappsvc
spec:
  type: NodePort
  selector:
    app: javawebapp
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30785
...
Commands for ReplicaSet:
# Delete all resources
kubectl delete all --all
# Apply the ReplicaSet configuration from rs.yml
kubectl apply -f rs.yml
# Get a list of pods with details
kubectl get pods -o wide
# Get a list of services
kubectl get svc
# Get a list of ReplicaSets
kubectl get rs
# Scale the ReplicaSet to 5 replicas
kubectl scale rs javawebapprs --replicas 5
# Scale the ReplicaSet back to 3 replicas
kubectl scale rs javawebapprs --replicas 3
# Delete the ReplicaSet
kubectl delete rs javawebapprs
ReplicationController vs ReplicaSet
Here's a table highlighting the key differences between ReplicationController and ReplicaSet in Kubernetes:
Aspect | ReplicationController | ReplicaSet
--- | --- | ---
API Version | apiVersion: v1 | apiVersion: apps/v1
Recommended | Only for legacy applications | Yes (recommended)
Pod Selector | Equality-based (labels) | Set-based (matchLabels or matchExpressions)
Scale Pods | Supported | Supported
Selectors | Simple label selectors | Supports complex label selectors with matchLabels and matchExpressions
Pod Lifecycle Management | Manages pod lifecycle | Manages pod lifecycle
Rolling Updates | Only via the legacy kubectl rolling-update command | Not directly; rolling updates are performed by a Deployment that manages ReplicaSets
Recommended Use Case | Legacy deployments | Modern deployments
Controller Termination | Graceful termination | Graceful termination
Please note that while ReplicationController can still be used for legacy applications, ReplicaSet is the recommended choice for modern deployments because of its more advanced label selection capabilities and because Deployments, which provide rolling updates, are built on top of ReplicaSets.
Deployment
The recommended approach for deploying applications in K8s.
Allows scaling pods up/down and supports rollouts/rollbacks (rollout commands are sketched after the command list below).
Enables zero-downtime deployments.
Deployment Strategies
Recreate: Deletes existing pods and creates new ones (downtime).
Rolling Update: Replaces pods one by one.
The default strategy is Rolling Update.
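A rolling update can be tuned with maxSurge and maxUnavailable. Here is a minimal sketch of such a strategy block; the values shown are illustrative and not taken from the manifest below:
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1           # At most 1 extra pod above the desired replica count during the update
    maxUnavailable: 0     # Never take a pod down before its replacement is ready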
Save the following manifest as deployment.yml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: javawebappdeployment
spec:
  replicas: 2
  strategy:
    type: Recreate        # Change to RollingUpdate for a rolling update
  selector:
    matchLabels:
      app: javawebapp
  template:
    metadata:
      name: javawebapppod
      labels:
        app: javawebapp
    spec:
      containers:
        - name: javawebappcontainer
          image: pankaj1998/maven-web-app
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: javawebappsvc
spec:
  type: NodePort
  selector:
    app: javawebapp
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30785
...
Commands for Deployment:
# Delete all resources
kubectl delete all --all
# Apply the Deployment configuration from deployment.yml
kubectl apply -f deployment.yml
# Get a list of pods with details
kubectl get pods -o wide
# Get a list of services
kubectl get svc
# Get a list of deployments
kubectl get deployment
# Scale the Deployment to 5 replicas
kubectl scale deployment javawebappdeployment --replicas 5
# Scale the Deployment back down to 2 replicas
kubectl scale deployment javawebappdeployment --replicas 2
# Delete the Deployment
kubectl delete deployment javawebappdeployment
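While the Deployment is still running (that is, before the delete command above), the rollouts and rollbacks mentioned earlier can be driven with the kubectl rollout subcommands; a short sketch:
# Watch a rolling update make progress
kubectl rollout status deployment javawebappdeployment
# List previous revisions of the Deployment
kubectl rollout history deployment javawebappdeployment
# Roll back to the previous revision
kubectl rollout undo deployment javawebappdeployment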
Blue-Green Deployment
An approach to deploying applications with less risk, zero downtime, easy rollbacks, and a seamless user experience.
I am providing the GitHub URL here: https://github.com/Pankaj-Surya/maven-web-app/tree/main/blue-green-deployment
Clone the repository above, then follow the steps below.
Steps for Blue-Green Deployment:
1. Create blue-deployment.yml.
2. Expose the Blue Pods using live-service.yml (a rough sketch of both manifests appears right after this list).
3. Access the application using the Live Service (Blue Pods).
4. Modify the code, build it, create a Docker image, and push it to Docker Hub.
5. Create green-deployment.yml with the latest image.
6. Expose the Green Pods using pre-prod-service.yml.
7. Access the application using the Pre-Prod Service (Green Pods).
8. Note the Live Service URL and the Pre-Prod Service URL.
9. Access both URLs to see the difference.
10. Modify the selector in live-service from v1 to v2 and apply it using kubectl.
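The actual blue/green manifests live in the GitHub repository linked above, so here is only a minimal sketch of what blue-deployment.yml and live-service.yml might look like. The labels app: java-web-app and version: v1/v2, the Service name javawebapplivesvc, and the node port 30785 come from the live Service shown later; the Deployment name javawebapp-blue and the replica count are illustrative assumptions:
# blue-deployment.yml (illustrative sketch, not the repository's exact manifest)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: javawebapp-blue              # hypothetical name for the blue Deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-web-app
      version: v1
  template:
    metadata:
      labels:
        app: java-web-app
        version: v1                  # the "blue" version label
    spec:
      containers:
        - name: javawebappcontainer
          image: pankaj1998/maven-web-app    # currently live image
          ports:
            - containerPort: 8080
---
# live-service.yml (illustrative sketch)
apiVersion: v1
kind: Service
metadata:
  name: javawebapplivesvc
spec:
  type: NodePort
  selector:
    app: java-web-app
    version: v1                      # initially routes traffic to the blue pods
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30785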
Live Service URL: http://3.100.202.11:30785/java-web-app/
Pre-Prod Service URL: http://3.100.202.11:31785/java-web-app/
Assuming you have already created blue-deployment.yml, live-service.yml, green-deployment.yml, and pre-pod-service.yml, let's proceed with the deployment and service updates:
Initial Blue-Green Deployment:
# Apply the Blue Deployment
kubectl apply -f blue-deployment.yml
# Apply the Live Service
kubectl apply -f live-service.yml
# Find the URL for the Live Service
minikube service javawebapplivesvc --url
# Apply the Green Deployment
kubectl apply -f green-deployment.yml
# Apply the Pre-Prod Service
kubectl apply -f pre-pod-service.yml
# Find the URL for the Pre-Prod Service
minikube service javawebappprepodsvc --url
Switching Green Deployment to Live:
Update the live-service.yml file to change the selector from version: v1 to version: v2. You can edit the YAML file manually or use a text editor.
apiVersion: v1
kind: Service
metadata:
  name: javawebapplivesvc
spec:
  type: NodePort
  selector:
    app: java-web-app
    version: v2        # Change from 'v1' to 'v2'
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30785
Apply the Updated Live Service:
After making the change, apply the updated live-service.yml to modify the service selector.
kubectl apply -f live-service.yml
Now, your "green" deployment with version: v2
pods should be the one serving traffic through the javawebapplivesvc
service. The blue deployment can be considered the backup or a previous version, and you can switch between them as needed by updating the selector in the live-service.yml
file.
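To confirm that the switch actually took effect, a quick check (a sketch; the IPs listed will be specific to your cluster):
# The Service's endpoints should now be the IPs of the green (v2) pods
kubectl get endpoints javawebapplivesvc
# Compare them with the pod IPs of the green deployment
kubectl get pods -l version=v2 -o wide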