🚀 1️⃣ Types of Deployment Strategies in EKS / Kubernetes
(Kubernetes supports rolling and recreate natively; canary and blue-green are implemented with patterns/tools.)
✅ Rolling Update (Default)
What it is: Pods are replaced gradually → new pods come up while old pods are terminated step by step.
How it works:
Controlled by maxUnavailable and maxSurge (see the manifest sketch after this section).
Use case: Default for stateless microservices.
Pros:
- zero downtime possible
- simple
- built-in
- resource efficient
Cons:
- mixed old/new versions during rollout
- risky if backward compatibility not maintained
Interview line: Rolling update is my default for stateless services with backward-compatible releases.
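As a sketch, a Deployment with an explicit rolling strategy (the name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-api              # placeholder name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                # at most 1 extra pod above the desired count during rollout
      maxUnavailable: 0          # never drop below the desired count -> zero-downtime rollout
  selector:
    matchLabels:
      app: payment-api
  template:
    metadata:
      labels:
        app: payment-api
    spec:
      containers:
        - name: app
          image: payment-api:v2  # placeholder image
```

With maxUnavailable: 0, the rollout only advances as new pods pass their readiness probe, which is why readiness probes (section 3) matter for zero downtime.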
🔵🟢 Blue-Green Deployment
What it is: Two full environments: blue (old) and green (new). Traffic switches all at once.
How in EKS: Two Deployments + a Service selector switch or LB switch (see the Service sketch after this section).
Use case:
- high-risk fintech releases
- schema-impacting changes
- need instant rollback
Pros:
- instant rollback
- no mixed versions
- clean validation
Cons:
- double resource cost
- environment sync needed
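A minimal sketch of the selector-switch pattern: run two Deployments labeled version: blue and version: green, and point one Service at the live color (names and labels are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: payment-api
spec:
  selector:
    app: payment-api
    version: blue        # flip to "green" to switch all traffic at once; flip back for instant rollback
  ports:
    - port: 80
      targetPort: 8080
```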
🐤 Canary Deployment
What it is: Release to a small % of traffic first → observe → expand.
How in EKS (an Argo Rollouts sketch follows this section):
- Argo Rollouts
- service mesh
- weighted LB routing
- ingress weighted rules
Use case:
- critical APIs
- user-facing flows
- performance-risk releases
Pros:
- lowest risk
- metrics-driven rollout
- early failure detection
Cons:
- tooling needed
- routing complexity
Senior interview line: For customer-facing fintech APIs, I prefer canary with automated metric checks.
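A hedged Argo Rollouts sketch (assumes the Argo Rollouts controller is installed; weights and pause durations are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: payment-api
spec:
  replicas: 5
  strategy:
    canary:
      steps:
        - setWeight: 10              # start with ~10% of traffic on the new version
        - pause: {duration: 5m}      # observe metrics before expanding
        - setWeight: 50
        - pause: {duration: 10m}     # promotion to 100% follows the last step
  selector:
    matchLabels:
      app: payment-api
  template:
    metadata:
      labels:
        app: payment-api
    spec:
      containers:
        - name: app
          image: payment-api:v2      # placeholder image
```

Without a mesh or weighted ingress, Rollouts approximates the weight by adjusting pod counts; an AnalysisTemplate can automate the metric checks mentioned above.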
💥 Recreate
What it is: All old pods are killed → then new pods are created.
Use case:
- stateful singleton
- batch job
- incompatible versions
Cons:
- downtime guaranteed
- rarely used for APIs
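The switch is a one-field change on the Deployment; a minimal sketch (name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker        # placeholder name
spec:
  replicas: 1
  strategy:
    type: Recreate          # all old pods terminate before any new pod starts -> downtime window
  selector:
    matchLabels:
      app: batch-worker
  template:
    metadata:
      labels:
        app: batch-worker
    spec:
      containers:
        - name: worker
          image: batch-worker:v2   # placeholder image
```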
🛡️ 2️⃣ Pod Disruption Budget (PDB)
✅ What It Is
PDB defines the minimum number of pods that must remain available during voluntary disruptions.
✅ Protects Against
- node drain
- cluster upgrade
- autoscaler scale-down
- manual eviction
✅ Example Use Case
If a service has 5 replicas and needs at least 3 alive, set minAvailable: 3 so an upgrade won't evict below 3 pods.
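As a manifest, a minimal sketch (name and selector are placeholders):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: payment-api-pdb
spec:
  minAvailable: 3          # voluntary evictions are refused if they would drop below 3 ready pods
  selector:
    matchLabels:
      app: payment-api
```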
⚠️ Important Interview Point
PDB does NOT protect against:
- pod crash
- node crash
- OOM kill
Only voluntary disruptions.
❌ Common Mistake
Setting minAvailable equal to the replica count → no voluntary eviction is ever allowed, which blocks node drains and upgrades completely.
🩺 3️⃣ Readiness vs Liveness Probe
Interviewers love this.
✅ Readiness Probe → "Can I receive traffic?"
If it fails → the pod is removed from Service endpoints.
Use case:
- app started but DB not ready
- warmup phase
- dependency checks
Does NOT restart pod.
✅ Liveness Probe → "Should I restart the pod?"
If it fails → kubelet restarts the container.
Use case:
- deadlock
- stuck thread
- hung process
🧠 Senior Interview Line
Readiness controls traffic flow; liveness controls container restart.
❌ Common Mistake
Using the same endpoint for both → restart loops.
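A minimal container-spec sketch with distinct endpoints (/ready and /healthz are assumed app endpoints):

```yaml
containers:
  - name: app
    image: payment-api:v2       # placeholder image
    readinessProbe:
      httpGet:
        path: /ready            # checks dependencies (DB, warmup); assumed endpoint
        port: 8080
      periodSeconds: 5
      failureThreshold: 3       # failure only removes the pod from Service endpoints
    livenessProbe:
      httpGet:
        path: /healthz          # cheap "process is alive" check; assumed endpoint
        port: 8080
      initialDelaySeconds: 15   # let the app start before restart checks begin
      periodSeconds: 10
      failureThreshold: 3       # failure triggers a container restart by kubelet
```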
❤️ 4️⃣ Pod Affinity vs Anti-Affinity
✅ Pod Affinity
Schedule pods together.
Use case:
- app + cache
- tightly coupled services
- low latency pairing
🚫 Pod Anti-Affinity
Schedule pods apart.
Use case:
- spread replicas across nodes/AZ
- high availability
Example: Don't place replicas of the same app on the same node.
⚠️ Tradeoff
Anti-affinity can leave pods unschedulable if the cluster is small.
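A pod-spec sketch that spreads replicas across nodes; the preferred form shown here avoids the unschedulable-pod tradeoff on small clusters (labels are illustrative):

```yaml
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: payment-api
            topologyKey: kubernetes.io/hostname   # spread across nodes; use topology.kubernetes.io/zone for AZ spread
```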
🔥 5️⃣ Node Selector vs Node Affinity
✅ Node Selector (Simple)
Match an exact label:
nodeSelector:
  disktype: ssd
Simple but rigid.
✅ Node Affinity (Advanced)
Supports:
- required rules
- preferred rules
- expressions
More flexible (see the sketch after this section).
🧠 Interview Line
Node selector is exact match; node affinity supports expressive scheduling logic.
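A pod-spec sketch combining a required rule and a preferred rule (label keys and values are assumptions):

```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd", "nvme"]        # hard rule, but expressive (In, NotIn, Exists, ...)
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 50
          preference:
            matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values: ["us-east-1a"]         # soft preference; the scheduler tries but won't block
```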
☣️ 6️⃣ Taints & Tolerations
✅ What They Do
Taints repel pods. Tolerations allow pods onto tainted nodes.
✅ Use Cases
- dedicated GPU nodes
- system node pools
- fintech sensitive workloads
- spot node pools
Example: Only ML pods tolerate the GPU taint (sketched below).
⚠️ Senior Note
Taint = repel; affinity = attract.
Say it that way; interviewers like it.
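A sketch of the GPU example; the taint key and value are assumptions:

```yaml
# Taint the node first: kubectl taint nodes gpu-node-1 gpu=true:NoSchedule
# Then only pods carrying the matching toleration can land there:
spec:
  tolerations:
    - key: "gpu"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"   # without this toleration, the taint repels the pod
  nodeSelector:
    gpu: "true"              # a toleration only permits; a selector/affinity actually attracts
```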
📈 7️⃣ Kubernetes Autoscaling Types
🔼 HPA (Horizontal Pod Autoscaler)
✅ Scales
Number of pods.
Based on:
- CPU
- memory
- custom metrics
Best for stateless services.
✅ Pros
- fast
- common
- safe
- works with CA/Karpenter
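A minimal autoscaling/v2 sketch targeting CPU (thresholds and replica bounds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: payment-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: payment-api        # placeholder target
  minReplicas: 3             # availability floor (pairs well with the PDB above)
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU crosses 70%
```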
📏 VPA (Vertical Pod Autoscaler)
✅ Scales
Pod resource requests/limits.
⚠️ Important
Usually restarts pods to apply new requests. Not used with HPA on the same resource metric.
Best for:
- stateful
- batch
- memory-bound apps
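A hedged sketch; VPA is a separate add-on (it does not ship with Kubernetes), and the target name is a placeholder:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: batch-worker-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: batch-worker     # placeholder target
  updatePolicy:
    updateMode: "Auto"     # evicts pods to apply new requests; "Off" gives recommendations only
```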
🧱 Cluster Autoscaler (or Karpenter)
✅ Scales
Nodes, not pods.
Adds/removes nodes based on pending pods.
✅ Works With
HPA → creates pods
CA/Karpenter → adds nodes
🧠 Interview Line
HPA scales pods, Cluster Autoscaler scales nodes; they work together.
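For the node side, a hedged Karpenter NodePool sketch (field names vary across Karpenter versions; this follows the karpenter.sh/v1 shape and assumes an EC2NodeClass named default already exists):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand", "spot"]   # let Karpenter choose capacity for pending pods
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default                     # assumed EC2NodeClass
  limits:
    cpu: "100"                            # cap on total provisioned CPU
```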
🧠 Senior One-Shot Summary Answer
If the interviewer asks an open-ended question:
For safe deployments I use rolling by default, canary or blue-green for high-risk releases. PDB protects availability during voluntary disruption. Readiness controls traffic, liveness controls restart. Affinity/anti-affinity and taints/tolerations control placement and isolation. For scaling, HPA handles pod count, VPA handles resource sizing, and Cluster Autoscaler or Karpenter handles node capacity.