EKS
ServiceAccount

Alright. Let’s break this down properly — no fluff.

You’re asking how Role, ClusterRole, RoleBinding, ClusterRoleBinding, and ServiceAccount actually work together. Most people memorize definitions. That’s useless. You need to understand the control flow.


🔥 The Core Model (RBAC in Kubernetes)

Kubernetes RBAC answers one question:

“Can this identity perform this action on this resource in this namespace?”

There are 3 moving parts:

  1. Who? → User or ServiceAccount
  2. What permissions? → Role or ClusterRole
  3. Who gets those permissions? → RoleBinding or ClusterRoleBinding

Think of it like this:

  • Role / ClusterRole = Permission policy
  • Binding = Attach policy to identity
  • ServiceAccount = Identity for pods

🧠 First: Understand Scope

This is where people mess up.

Resource → Scope:

  • Role: Namespace-scoped
  • RoleBinding: Namespace-scoped
  • ClusterRole: Cluster-wide
  • ClusterRoleBinding: Cluster-wide

Now let’s walk through real scenarios.


✅ Scenario 1 — App Pod Needs to Read Secrets in Its Own Namespace

Problem:

Your backend pod needs to read Kubernetes Secrets in the dev namespace.


Step 1: Create ServiceAccount

apiVersion: v1
kind: ServiceAccount
metadata:
  name: backend-sa
  namespace: dev

This is the identity your pod will use.


Step 2: Create Role (namespace scoped)

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]

This says:

In namespace dev, allow reading secrets.


Step 3: Bind the Role to ServiceAccount

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bind-secret-reader
  namespace: dev
subjects:
- kind: ServiceAccount
  name: backend-sa
  namespace: dev
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io

Now:

backend-sa can read secrets in dev namespace only.


Step 4: Attach SA to Pod

spec:
  serviceAccountName: backend-sa
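
In a full manifest, that looks like the sketch below (the pod name and image are placeholders, not from the original setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: backend                    # placeholder name
  namespace: dev
spec:
  serviceAccountName: backend-sa   # the identity from Step 1
  containers:
  - name: app
    image: nginx:1.25              # placeholder image
```

Kubernetes mounts a token for backend-sa into the container at /var/run/secrets/kubernetes.io/serviceaccount/, and that token is what the API server authenticates.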

Result:

  • Pod can read secrets in dev
  • Cannot read secrets in prod
  • Cannot list pods in kube-system

Good. Principle of least privilege.


✅ Scenario 2 — Dev Team Can Manage Pods Only in Dev Namespace

You don’t want devs touching prod.


Step 1: Create Role in dev

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-manager
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "create", "delete"]

Step 2: RoleBinding for a User

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bind-dev-user
  namespace: dev
subjects:
- kind: User
  name: [email protected]
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-manager
  apiGroup: rbac.authorization.k8s.io

Result:

  • Dev user can manage pods in dev namespace
  • Cannot touch prod
  • Cannot view cluster-wide resources

This is namespace isolation done properly.


✅ Scenario 3 — Monitoring Tool Needs to Read Pods Across ALL Namespaces

A namespaced Role won’t work here.

You need cluster-level permission.


Step 1: Create ClusterRole

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader-cluster
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]

This is cluster-wide permission.


Step 2: Bind It to ServiceAccount

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bind-monitoring
subjects:
- kind: ServiceAccount
  name: monitoring-sa
  namespace: monitoring
roleRef:
  kind: ClusterRole
  name: pod-reader-cluster
  apiGroup: rbac.authorization.k8s.io

Result:

Monitoring pod can read pods in:

  • dev
  • prod
  • kube-system
  • everywhere

⚠️ Important: RoleBinding Can Bind ClusterRole

This confuses people.

You can bind a ClusterRole using a RoleBinding.

Example:

kind: RoleBinding

With:

roleRef:
  kind: ClusterRole

What happens?

👉 Permissions are still limited to that namespace.

ClusterRole defines rules. RoleBinding defines scope.

This is powerful and often misunderstood.
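
A concrete sketch, reusing the pod-reader-cluster ClusterRole from Scenario 3 (the binding name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding               # namespace-scoped binding...
metadata:
  name: read-pods-dev-only      # illustrative name
  namespace: dev                # ...so dev is the effective scope
subjects:
- kind: ServiceAccount
  name: backend-sa
  namespace: dev
roleRef:
  kind: ClusterRole             # ...of a cluster-wide rule set
  name: pod-reader-cluster
  apiGroup: rbac.authorization.k8s.io
```

Result: backend-sa can read pods in dev, and only dev, even though the rules live in a ClusterRole. This is the standard way to define permissions once and reuse them across many namespaces.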


✅ Scenario 4 — Admin Needs Full Cluster Control

Use built-in ClusterRole:

cluster-admin

Bind it:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bind-cluster-admin
subjects:
- kind: User
  name: [email protected]
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

Now they own everything.

Be very careful with this.


💣 Common Mistakes I See (Especially in EKS)

Since you're working in EKS, this matters.

❌ Mistake 1: Giving cluster-admin to every ServiceAccount

Lazy and dangerous.

❌ Mistake 2: Not Understanding IRSA vs RBAC

IAM controls AWS API access. RBAC controls Kubernetes API access.

Different layers.
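
With IRSA, the link between the two layers is an annotation on the ServiceAccount (the account ID and role name below are placeholders):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backend-sa
  namespace: dev
  annotations:
    # placeholder account ID and role name
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/backend-app-role
```

RBAC still decides what backend-sa can do inside the cluster; the IAM role decides what it can do against AWS APIs like S3 or Secrets Manager.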

❌ Mistake 3: Using ClusterRole When Role Is Enough

If your app runs in one namespace, don’t use cluster-wide permissions.


🧩 How It All Connects (Execution Flow)

When a Pod makes an API request:

  1. It presents its ServiceAccount token

  2. The Kubernetes API server checks:

    • Is there a RoleBinding or ClusterRoleBinding for this identity?
    • Does the referenced Role/ClusterRole allow this verb on this resource?
    • Is the namespace scope valid?
  3. If yes → allowed

  4. If no → forbidden

You can test:

kubectl auth can-i get secrets --as=system:serviceaccount:dev:backend-sa -n dev
kubectl auth can-i get secrets --as=system:serviceaccount:dev:backend-sa -n prod

🎯 Mental Model Summary

Component → think of it as:

  • ServiceAccount: Identity
  • Role: Namespace permissions
  • ClusterRole: Cluster-wide permissions
  • RoleBinding: Attach permissions in one namespace
  • ClusterRoleBinding: Attach permissions cluster-wide

If You Want to Think Like a Senior Engineer

Ask yourself:

  • Does this workload need namespace or cluster access?
  • Is this human or machine identity?
  • Can I reduce permissions further?
  • Can I isolate by namespace instead of cluster role?

If you want, next I can:

  • Explain this in EKS context (aws-auth, Access Entries, IAM mapping)
  • Show a production-grade RBAC design pattern
  • Or walk through a broken setup and debug it

Your move.

