Alright. Let’s break this down properly — no fluff.
You’re asking how Role, ClusterRole, RoleBinding, ClusterRoleBinding, and ServiceAccount actually work together. Most people memorize definitions. That’s useless. You need to understand the control flow.
🔥 The Core Model (RBAC in Kubernetes)
Kubernetes RBAC answers one question:
“Can this identity perform this action on this resource in this namespace?”
There are 3 moving parts:
- Who? → `User` or `ServiceAccount`
- What permissions? → `Role` or `ClusterRole`
- Who gets those permissions? → `RoleBinding` or `ClusterRoleBinding`
Think of it like this:
- Role / ClusterRole = Permission policy
- Binding = Attach policy to identity
- ServiceAccount = Identity for pods
🧠 First: Understand Scope
This is where people mess up.
| Resource | Scope |
|---|---|
| Role | Namespace-scoped |
| RoleBinding | Namespace-scoped |
| ClusterRole | Cluster-wide |
| ClusterRoleBinding | Cluster-wide |
Now let’s walk through real scenarios.
✅ Scenario 1 — App Pod Needs to Read Secrets in Its Own Namespace
Problem:
Your backend pod needs to read Kubernetes Secrets in dev namespace.
Step 1: Create ServiceAccount
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backend-sa
  namespace: dev
```
This is the identity your pod will use.
Step 2: Create Role (namespace scoped)
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
```
This says: in namespace `dev`, allow reading secrets.
Step 3: Bind the Role to ServiceAccount
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bind-secret-reader
  namespace: dev
subjects:
- kind: ServiceAccount
  name: backend-sa
  namespace: dev
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```
Now `backend-sa` can read secrets in the `dev` namespace only.
Step 4: Attach SA to Pod
```yaml
spec:
  serviceAccountName: backend-sa
```
Result:
- Pod can read secrets in `dev`
- Cannot read secrets in `prod`
- Cannot list pods in `kube-system`
Good. Principle of least privilege.
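Putting the pod side together, a minimal manifest using this ServiceAccount could look like this (the pod name and image are placeholders, not from any real deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: backend            # placeholder name
  namespace: dev
spec:
  serviceAccountName: backend-sa   # pod authenticates to the API as backend-sa
  containers:
  - name: backend
    image: backend:1.0             # placeholder image
```

The ServiceAccount token is automatically mounted into the pod, and any API call the pod makes is evaluated against the `secret-reader` Role.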
✅ Scenario 2 — Dev Team Can Manage Pods Only in Dev Namespace
You don’t want devs touching prod.
Step 1: Create Role in dev
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-manager
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "create", "delete"]
```
Step 2: RoleBinding for a User
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bind-dev-user
  namespace: dev
subjects:
- kind: User
  name: [email protected]
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-manager
  apiGroup: rbac.authorization.k8s.io
```
Result:
- Dev user can manage pods in dev namespace
- Cannot touch prod
- Cannot view cluster-wide resources
This is namespace isolation done properly.
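In practice you usually bind to a group rather than to individual users, so onboarding doesn't require new RBAC objects. A sketch, assuming your authentication layer maps developers into a group named `dev-team` (the group and binding names here are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bind-dev-team            # illustrative name
  namespace: dev
subjects:
- kind: Group
  name: dev-team                 # assumes your authenticator emits this group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-manager
  apiGroup: rbac.authorization.k8s.io
```

New team members get access by joining the group; the RBAC objects never change.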
✅ Scenario 3 — Monitoring Tool Needs to Read Pods Across ALL Namespaces
Namespace Role won’t work.
You need cluster-level permission.
Step 1: Create ClusterRole
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader-cluster
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```
This is a cluster-wide permission.
Step 2: Bind It to ServiceAccount
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bind-monitoring
subjects:
- kind: ServiceAccount
  name: monitoring-sa
  namespace: monitoring
roleRef:
  kind: ClusterRole
  name: pod-reader-cluster
  apiGroup: rbac.authorization.k8s.io
```
Result:
Monitoring pod can read pods in:
- dev
- prod
- kube-system
- everywhere
⚠️ Important: RoleBinding Can Bind ClusterRole
This confuses people.
You can bind a ClusterRole using a RoleBinding.
Example:
```yaml
kind: RoleBinding
roleRef:
  kind: ClusterRole
```
What happens?
👉 Permissions are still limited to that RoleBinding's namespace.
The ClusterRole defines the rules; the RoleBinding defines the scope.
This is powerful and often misunderstood.
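For example, reusing the `pod-reader-cluster` ClusterRole from Scenario 3, a RoleBinding like this (the binding name is illustrative) grants pod-reading only inside `dev` — the same rules can be bound into other namespaces without duplicating the Role:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bind-pod-reader-dev      # illustrative name
  namespace: dev                 # the binding's namespace limits the scope
subjects:
- kind: ServiceAccount
  name: backend-sa
  namespace: dev
roleRef:
  kind: ClusterRole              # rules come from the ClusterRole...
  name: pod-reader-cluster
  apiGroup: rbac.authorization.k8s.io
```

...but they apply only in `dev`. This is the standard way to define a permission set once and grant it namespace by namespace.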
✅ Scenario 4 — Admin Needs Full Cluster Control
Use the built-in ClusterRole `cluster-admin`. Bind it:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bind-cluster-admin       # illustrative name
subjects:
- kind: User
  name: [email protected]
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```
Now they own everything.
Be very careful with this.
💣 Common Mistakes I See (Especially in EKS)
Since you're working in EKS, this matters.
❌ Mistake 1: Giving cluster-admin to every ServiceAccount
Lazy and dangerous.
❌ Mistake 2: Not Understanding IRSA vs RBAC
IAM controls AWS API access. RBAC controls Kubernetes API access.
Different layers.
❌ Mistake 3: Using ClusterRole When Role Is Enough
If your app runs in one namespace, don’t use cluster-wide permissions.
🧩 How It All Connects (Execution Flow)
When a pod makes an API request:
1. It uses its ServiceAccount token.
2. The Kubernetes API server checks:
   - Is there a RoleBinding or ClusterRoleBinding for this subject?
   - Does the referenced Role/ClusterRole allow this verb on this resource?
   - Is the namespace scope valid?
3. If yes → allowed.
4. If no → forbidden.
You can test:
```
kubectl auth can-i get pods --as=system:serviceaccount:dev:backend-sa -n dev
```
🎯 Mental Model Summary
| Component | Think of it as |
|---|---|
| ServiceAccount | Identity |
| Role | Namespace permissions |
| ClusterRole | Cluster-wide permissions |
| RoleBinding | Attach permissions in one namespace |
| ClusterRoleBinding | Attach permissions cluster-wide |
If You Want to Think Like a Senior Engineer
Ask yourself:
- Does this workload need namespace or cluster access?
- Is this human or machine identity?
- Can I reduce permissions further?
- Can I isolate by namespace instead of cluster role?
If you want, next I can:
- Explain this in EKS context (aws-auth, Access Entries, IAM mapping)
- Show a production-grade RBAC design pattern
- Or walk through a broken setup and debug it