Kubernetes has become the de facto standard for container orchestration — every major cloud provider supports it, and Kubernetes skills are in high demand across DevOps, cloud engineering, and backend development roles. Yet for beginners, the learning curve feels steep: clusters, pods, services, YAML files everywhere.
This tutorial cuts through the complexity. You will go from zero Kubernetes knowledge to deploying a real application on a live cluster — and it will not cost you a cent. No credit card, no complex setup, just follow along and you will have a working deployment by the end.
Why Learn Kubernetes in 2026?
Kubernetes is not just a buzzword — it is the infrastructure backbone for companies of every size, from startups running a handful of microservices to enterprises managing tens of thousands of containers. Here is why it matters right now:
- Job market demand: Kubernetes appears in over 70% of DevOps job postings. Companies actively seek engineers who can deploy, manage, and troubleshoot K8s clusters.
- Cloud-native standard: Every major cloud — AWS (EKS), Google Cloud (GKE), Azure (AKS) — offers managed Kubernetes. Learning it once lets you work across all of them.
- Self-healing and auto-scaling: Kubernetes automatically restarts failed containers, replaces crashed Pods, and scales your application up or down based on traffic.
- Declarative infrastructure: You describe what you want in YAML, and Kubernetes makes it happen. No manual server configuration, no drift.
Kubernetes Explained — No Jargon
Think of Kubernetes as a container orchestration platform — a system that decides where to run your containers, keeps them healthy, connects them together, and scales them when demand changes. If Docker is the engine that runs a single container, Kubernetes is the fleet manager that runs hundreds.
Here is the mental model:
- You write a YAML file describing what you want (e.g., “3 replicas of my web app”).
- Kubernetes reads that file, finds servers to run the containers on, starts them, monitors them, and fixes things if they break.
- You sit back and let Kubernetes handle the operational complexity.
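This watch-and-reconcile behavior is the heart of Kubernetes. As an illustration only (this is not real Kubernetes code, just the idea), the control loop can be sketched like this:

```javascript
// Illustrative sketch of the Kubernetes reconcile loop -- not real K8s code.
// "desired" is what your YAML declares; "actual" is what the cluster observes.
function reconcile(desired, actual) {
  const actions = [];
  // Too few Pods running? Schedule replacements.
  for (let i = actual.runningPods; i < desired.replicas; i++) {
    actions.push(`start pod ${i + 1} with image ${desired.image}`);
  }
  // Too many? Scale down.
  for (let i = actual.runningPods; i > desired.replicas; i--) {
    actions.push(`stop pod ${i}`);
  }
  return actions; // an empty array means actual state already matches desired state
}

// Example: you asked for 3 replicas, and one Pod just crashed.
console.log(reconcile({ replicas: 3, image: "my-web-app:v1" }, { runningPods: 2 }));
// -> [ 'start pod 3 with image my-web-app:v1' ]
```

The real control plane runs loops like this continuously, which is why a deleted Pod reappears within seconds (you will see this live in the self-healing demo below).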
Core Concepts: Pods, Nodes, Clusters
Before you touch a terminal, understand these three building blocks:
Pods
A Pod is the smallest deployable unit in Kubernetes. It wraps one or more containers that share the same network and storage. Most Pods run a single container, but sidecar patterns (like adding a logging agent alongside your app) are common in production.
Nodes
A Node is a single machine (virtual or physical) in the cluster. Each Node runs the Kubelet agent, which communicates with the control plane and ensures the assigned Pods are running. Nodes come in two flavors: control-plane nodes (the brain) and worker nodes (the muscle).
Clusters
A Cluster is the full environment — one or more control-plane nodes plus a set of worker nodes. The control plane schedules workloads, monitors health, and manages networking. Worker nodes execute the actual containers.
Visual hierarchy: Cluster → Nodes → Pods → Containers.
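To make that hierarchy concrete, here is the same relationship expressed as plain nested data (illustrative only; the node and Pod names are made up):

```javascript
// The Cluster -> Nodes -> Pods -> Containers hierarchy as plain data (illustrative).
const cluster = {
  name: "learning-cluster",
  nodes: [
    {
      name: "control-plane-1", role: "control-plane",
      pods: [{ name: "kube-apiserver", containers: ["apiserver"] }],
    },
    {
      name: "worker-1", role: "worker",
      pods: [
        { name: "web-abc12", containers: ["app"] },
        { name: "web-def34", containers: ["app", "log-agent"] }, // sidecar pattern
      ],
    },
  ],
};

// Count every container in the cluster by walking the hierarchy.
const totalContainers = cluster.nodes
  .flatMap((node) => node.pods)
  .reduce((sum, pod) => sum + pod.containers.length, 0);

console.log(totalContainers); // -> 4
```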
Free Kubernetes Environments — No Credit Card
You do not need to pay to learn Kubernetes. Here are the best free platforms covered in the video:
| Platform | What You Get | Best For |
|---|---|---|
| Killercoda | Instant playground, no signup | Quick experiments, following tutorials |
| Play with K8s | Free cluster for 4 hours | Hands-on labs |
| Google Cloud (GKE) | $300 credits for 90 days | Production-like experience |
| Oracle Cloud (OKE) | Always-free tier | Long-term learning without expiry |
| Civo | $250 credits | Fast K3s clusters |
| Linode | $100 credits | Simple setup |
For the fastest start, open killercoda.com/playgrounds in your browser and you get a live Kubernetes cluster in seconds — no account needed.
The Demo App — What You Will Deploy
The tutorial deploys a real Node.js application from the companion repository shazforiot/k8s-beginner-project. The app is intentionally simple — a web server that responds on /, /health, /ready, and /info endpoints — but it is packaged with production-grade best practices:
- Non-root container user
- Resource limits (CPU and memory)
- Liveness and readiness probes
- Rolling update strategy
- Graceful shutdown handling
- Multi-arch Docker build (amd64 + arm64)
The Dockerfile
```dockerfile
FROM node:20-alpine

RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

WORKDIR /app
COPY app/package*.json ./
COPY app/index.js ./
RUN chown -R nodejs:nodejs /app

USER nodejs
ENV NODE_ENV=production
ENV PORT=3000
EXPOSE 3000

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD node -e "require('http').get('http://localhost:3000/health', (r) => process.exit(r.statusCode === 200 ? 0 : 1))"

CMD ["node", "index.js"]
```
Notice the security hardening: the container runs as `nodejs` (UID 1001), not root. The `HEALTHCHECK` instruction lets Docker itself monitor the app's health. (Kubernetes ignores this instruction and relies on its own liveness and readiness probes, defined later, but it is useful when running the image with plain `docker run`.)
Step 1 — Create a Namespace
Namespaces provide logical isolation within a cluster. Think of them as folders that group related resources together. For this project, create a namespace called k8s-demo:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: k8s-demo
  labels:
    app: k8s-demo
    environment: development
```
Apply it:

```bash
kubectl apply -f k8s/namespace.yaml
```
Step 2 — Create a ConfigMap
A ConfigMap stores non-sensitive configuration data separately from your container image. This lets you change configuration without rebuilding the Docker image:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: k8s-demo
data:
  APP_NAME: "K8s Demo App"
  APP_VERSION: "1.0.0"
  LOG_LEVEL: "info"
```
Apply it:

```bash
kubectl apply -f k8s/configmap.yaml
```
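Because the Deployment in the next step injects this ConfigMap with `envFrom`, its keys surface inside the container as ordinary environment variables. A sketch of how the Node.js app might consume them (illustrative; the demo app's actual code may differ):

```javascript
// ConfigMap keys injected via envFrom show up as ordinary environment
// variables; the app reads them like any other env var, with safe defaults.
function loadConfig(env = process.env) {
  return {
    appName: env.APP_NAME || "unnamed-app",
    appVersion: env.APP_VERSION || "0.0.0",
    logLevel: env.LOG_LEVEL || "info",
  };
}

console.log(loadConfig({ APP_NAME: "K8s Demo App", APP_VERSION: "1.0.0", LOG_LEVEL: "info" }));
// -> { appName: 'K8s Demo App', appVersion: '1.0.0', logLevel: 'info' }
```

Changing a value in the ConfigMap and restarting the Pods updates the app's configuration with no image rebuild.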
Step 3 — Deploy Your Application
The Deployment is the most important resource. It defines how many Pod replicas to run, which container image to use, health checks, resource limits, and the update strategy. Here is the full deployment YAML used in the tutorial:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-demo-app
  namespace: k8s-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-demo
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: k8s-demo
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        fsGroup: 1001
      containers:
        - name: app
          image: ghcr.io/shazforiot/k8s-beginner-project:main
          ports:
            - containerPort: 3000
          envFrom:
            - configMapRef:
                name: app-config
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "100m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
```
Key details to understand:
- `replicas: 3` — Kubernetes ensures 3 Pods are always running. If one crashes, a replacement starts automatically.
- `strategy: RollingUpdate` — When you update the image, Kubernetes replaces Pods one at a time with zero downtime (`maxUnavailable: 0`).
- `livenessProbe` — If `/health` fails, Kubernetes restarts the container.
- `readinessProbe` — If `/ready` fails, Kubernetes removes the Pod from the Service load balancer without restarting it.
- `resources` — Requests are the minimum guaranteed resources; limits are the ceiling. Setting both is a best practice to prevent noisy-neighbor problems.
Apply it:

```bash
kubectl apply -f k8s/deployment.yaml
```

Check that the Pods are running:

```bash
kubectl get pods -n k8s-demo -o wide
```
Step 4 — Expose Your App with a Service
Pods have temporary IP addresses that change when they are recreated. A Service provides a stable IP and DNS name, and load-balances traffic across all healthy Pods:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-demo-service
  namespace: k8s-demo
spec:
  type: LoadBalancer
  selector:
    app: k8s-demo
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 3000
```
The `type: LoadBalancer` setting provisions an external IP address (on cloud providers) so your app is reachable from the internet. On local setups like Docker Desktop, run `kubectl get svc -n k8s-demo` to find the access URL; if no external IP ever appears, `kubectl port-forward` is a reliable fallback for local testing.
Apply it:

```bash
kubectl apply -f k8s/service.yaml
```
Step 5 — Scaling and Self-Healing Demo
One of Kubernetes’ most impressive features is its ability to scale and self-heal automatically.
Manual Scaling
To increase replicas on the fly:

```bash
kubectl scale deployment k8s-demo-app -n k8s-demo --replicas=5
```

Watch the new Pods spin up in real time:

```bash
kubectl get pods -n k8s-demo -w
```
Auto-Scaling with HPA
The HorizontalPodAutoscaler (HPA) automatically adjusts the number of replicas based on CPU and memory utilization:
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: k8s-demo-hpa
  namespace: k8s-demo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: k8s-demo-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```
This configuration tells Kubernetes: keep between 2 and 10 replicas, and scale up when average CPU utilization exceeds 70% of the requested CPU or memory exceeds 80%. Scale-downs are smoothed by the HPA's default 5-minute stabilization window (tunable via `behavior.scaleDown.stabilizationWindowSeconds`). Note that the HPA needs the Metrics Server installed in the cluster to read CPU and memory usage.
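Under the hood, the HPA picks a replica count with a simple ratio: desired = ceil(currentReplicas × observedUtilization / targetUtilization), clamped to the min/max bounds. A small sketch of that arithmetic (illustrative, not the controller's actual code):

```javascript
// The HPA's core scaling rule (simplified): scale replicas in proportion to
// how far observed utilization is from the target, clamped to [min, max].
function desiredReplicas(current, observedUtil, targetUtil, min, max) {
  const raw = Math.ceil(current * (observedUtil / targetUtil));
  return Math.min(max, Math.max(min, raw));
}

// 3 replicas running at 140% of the 70% CPU target -> double to 6 replicas.
console.log(desiredReplicas(3, 140, 70, 2, 10)); // -> 6
// Load drops to 20%: the math says 1, but minReplicas keeps it at 2.
console.log(desiredReplicas(3, 20, 70, 2, 10)); // -> 2
```

With two metrics configured (CPU and memory), the HPA computes this for each and takes the larger result.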
Self-Healing in Action
Delete a Pod manually to see Kubernetes replace it automatically:

```bash
kubectl delete pod <pod-name> -n k8s-demo
kubectl get pods -n k8s-demo -w
```
Within seconds, a new Pod is scheduled to replace the deleted one. This is the desired state loop — Kubernetes continuously reconciles the actual state with the desired state you defined in your YAML.
Step 6 — Ingress (Optional Advanced Feature)
For production setups, an Ingress resource provides HTTP/HTTPS routing, host-based routing, and TLS termination:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-demo-ingress
  namespace: k8s-demo
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx
  rules:
    - host: k8s-demo.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: k8s-demo-service
                port:
                  number: 80
```
Note: Ingress requires an Ingress Controller (such as NGINX or Traefik) installed in your cluster, and to test the `k8s-demo.local` host locally you will need an entry for it in your `/etc/hosts` file. Ingress is optional for learning but essential for production.
Essential kubectl Commands Cheat Sheet
These are the commands you will use daily:
| Command | What It Does |
|---|---|
| `kubectl get pods -n k8s-demo` | List all Pods |
| `kubectl get pods -n k8s-demo -o wide` | List Pods with extra details (node, IP) |
| `kubectl get pods -n k8s-demo -w` | Watch Pods in real time |
| `kubectl describe pod <name> -n k8s-demo` | Show events, status, and config |
| `kubectl logs -f <pod-name> -n k8s-demo` | Stream Pod logs |
| `kubectl exec -it <pod-name> -n k8s-demo -- sh` | Shell into a running Pod |
| `kubectl apply -f file.yaml` | Create or update a resource |
| `kubectl delete -f file.yaml` | Delete a resource |
| `kubectl scale deployment <name> --replicas=N` | Manually scale replicas |
| `kubectl get all -n k8s-demo` | List all resources in the namespace |
| `kubectl delete namespace k8s-demo` | Delete the namespace and everything in it |
CI/CD Pipeline — From Push to Deployed
The companion repository includes a complete GitHub Actions CI/CD pipeline that automates the entire workflow:
- Build & Test — Runs the Node.js build and tests on every push.
- Docker Build — Builds a multi-arch image (amd64 + arm64) and pushes it to GitHub Container Registry.
- Security Scan — Runs Trivy vulnerability scanning on the image.
- Deploy — Automatically deploys the new image to your Kubernetes cluster on pushes to `main`.
To set up the pipeline for your own cluster:
- Base64-encode your kubeconfig: `cat ~/.kube/config | base64`
- Add it as a GitHub secret named `KUBE_CONFIG`
- Push to the `main` branch — the pipeline handles the rest
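One portability note on the encoding step: GNU `base64` wraps its output at 76 characters by default, and a wrapped value pasted into a GitHub secret can break the pipeline. Disabling wrapping with `-w0` is safer (that flag is GNU coreutils; macOS's `base64` never wraps and reads input via `-i`). The snippet below demonstrates a safe round trip on a sample file; swap in `~/.kube/config` for the real thing:

```shell
# Demonstrate safe round-trip encoding on a sample file; substitute
# ~/.kube/config for the real pipeline. -w0 (GNU coreutils) disables wrapping.
printf 'apiVersion: v1\nkind: Config\n' > sample-kubeconfig
base64 -w0 sample-kubeconfig > kubeconfig.b64

# Sanity check: decoding must reproduce the original byte for byte.
base64 -d kubeconfig.b64 | diff - sample-kubeconfig && echo "round-trip OK"
```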
Beginner Mistakes to Avoid
- Not setting resource limits. Without `resources.requests` and `resources.limits`, a single greedy Pod can starve others. Always set both.
- Running containers as root. Set `runAsNonRoot: true` and `readOnlyRootFilesystem: true` in your security context. This is a hard requirement in most production clusters.
- Skipping health checks. Without liveness and readiness probes, Kubernetes cannot detect or recover from application failures.
- Hardcoding configuration in images. Use ConfigMaps for non-sensitive values and Secrets for credentials. Rebuilding an image just to change a log level is wasteful.
- Ignoring namespaces. Deploying everything to `default` creates clutter and makes cleanup harder. Use namespaces to organize resources logically.
Resources and Next Steps
- Full source code and YAML files: github.com/shazforiot/k8s-beginner-project
- Killercoda (instant playground): killercoda.com/playgrounds
- Play with K8s: labs.play-with-k8s.com
- Google Cloud Free Tier: cloud.google.com/free
- Oracle Cloud Free Tier: oracle.com/cloud/free
This is Part 1 of the Kubernetes series. Upcoming parts cover:
- Part 2: Kubernetes + Docker Compose
- Part 3: CI/CD with GitHub Actions
- Part 4: Production-ready setup
Frequently Asked Questions
Can I learn Kubernetes for free?
Yes. Platforms like Killercoda offer instant Kubernetes playgrounds with no signup or credit card. Cloud providers like Google Cloud ($300 credits for 90 days), Oracle Cloud (always-free tier), Civo ($250 credits), and Linode ($100 credits) also let you run real clusters at no cost. This tutorial shows you how to use all of them.
What is the difference between a Pod and a Deployment in Kubernetes?
A Pod is the smallest deployable unit in Kubernetes — it runs one or more containers. A Deployment is a higher-level controller that manages Pods. It ensures a specified number of Pod replicas are running, handles rolling updates, and self-heals when Pods fail. You almost always create Deployments, not standalone Pods.
Do I need to know Docker before learning Kubernetes?
Having basic Docker knowledge helps — understanding what a container image is and how to build one. However, you do not need to be a Docker expert. This tutorial includes a simple Dockerfile and walks you through building and deploying it step by step.
What kubectl commands should beginners know first?
Start with these: `kubectl get pods`, `kubectl get services`, `kubectl apply -f file.yaml`, `kubectl describe pod <name>`, `kubectl logs <pod-name>`, `kubectl scale deployment <name> --replicas=N`, and `kubectl delete -f file.yaml`. These cover 90% of daily operations.
How does Kubernetes self-healing work?
When you define a Deployment with 3 replicas, Kubernetes continuously monitors the actual state. If a Pod crashes or the node it runs on fails, the Kubernetes controller detects the mismatch and automatically creates a replacement Pod on a healthy node. This loop — observed state vs. desired state — is the core of Kubernetes self-healing.
Video Chapters — Quick Navigation
- 0:00 — Why learn Kubernetes in 2026?
- 1:30 — Kubernetes explained (no jargon)
- 4:00 — Core concepts: Pods, Nodes, Clusters
- 7:00 — Setting up FREE environment
- 10:00 — Your first deployment (hands-on)
- 14:00 — Exposing your app to the internet
- 17:00 — Scaling & self-healing demo
- 20:00 — Next steps & resources