
🧱 Terraform Debugging — Interview Round (15 High-Value Cases)


🔧 Q1 — Terraform plan shows destroy/recreate for many resources unexpectedly

Answer: First I stop and do not apply. I check what changed: usually for_each keys, a count index shift, or a variable value change. Then I run terraform plan -out=tfplan and inspect the diff carefully with terraform show tfplan. Often the cause is a list-order change feeding count, or a renamed resource block.
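A common structural fix, sketched below with hypothetical bucket names, is to key resources with for_each instead of count, so removing one item no longer shifts every later address:

```hcl
variable "bucket_names" {
  type    = list(string)
  default = ["logs", "assets", "backups"] # hypothetical names
}

# With count, deleting "logs" shifts every later index and forces
# destroy/recreate of the rest. for_each keys each instance by name,
# so addresses stay stable when the list changes.
resource "aws_s3_bucket" "this" {
  for_each = toset(var.bucket_names)
  bucket   = each.value
}
```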


🔧 Q2 — Resource exists in AWS but Terraform wants to create it again

Answer: The state is missing the mapping. I verify with terraform state list. If the resource is missing, I use terraform import to attach the existing resource to state. After importing, I adjust the config to match the real settings and run plan again until it shows no changes.
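On Terraform 1.5+ the same thing can be done declaratively with an import block, which is reviewed as part of plan/apply instead of mutating state directly. A minimal sketch with a hypothetical bucket:

```hcl
# Terraform 1.5+: declarative import, shown in the plan before apply.
import {
  to = aws_s3_bucket.logs          # address the config will use
  id = "my-existing-logs-bucket"   # hypothetical real bucket name
}

resource "aws_s3_bucket" "logs" {
  bucket = "my-existing-logs-bucket"
}
```

terraform plan -generate-config-out=generated.tf can also draft the matching resource block for you, which helps when the real settings are unknown.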


🔧 Q3 — Terraform says resource will be destroyed but config still present

Answer: Likely the resource block name or module path changed. Terraform tracks resources by address, not by cloud name, so I check whether the resource was renamed or moved between modules. The fix is terraform state mv (or a moved block), not letting Terraform recreate it.
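Since Terraform 1.1, a rename can also be recorded directly in config with a moved block; the addresses here are hypothetical:

```hcl
# Terraform 1.1+: declares that the old and new address are the same
# object, so plan shows a move instead of destroy/create.
moved {
  from = aws_instance.web
  to   = aws_instance.frontend
}
```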


🔧 Q4 — Apply fails with “Error acquiring state lock”

Answer: Another run is holding the lock, or a previous run crashed without releasing it. With an S3+DynamoDB backend, I check the lock table entry and verify no apply is actually running. Only after confirming that do I run terraform force-unlock <LOCK_ID>.
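For reference, a minimal S3 backend with DynamoDB locking looks roughly like this; the bucket and table names are hypothetical:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tf-state"           # hypothetical bucket
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"       # table holding the lock row
    encrypt        = true
  }
}
```

The LockID item in that DynamoDB table is what force-unlock clears; inspecting it shows who holds the lock and since when.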


🔧 Q5 — Terraform apply hangs for long time on one resource

Answer: I enable debug logs with TF_LOG=TRACE terraform apply and check whether the provider is polling an API state like "creating". Then I verify the resource status in the cloud console. Often it's a provider-side timeout or a dependency wait.


🔧 Q6 — Plan always shows changes even without code updates

Answer: Usually unordered attributes, timestamps, or an external controller modifying the resource. I inspect the diff fields. If the churn is non-critical, I use lifecycle ignore_changes, but only after confirming the attribute is safe to ignore.
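A sketch of the ignore_changes escape hatch, assuming a hypothetical external scanner that rewrites one tag on every run:

```hcl
# Hypothetical: an external tool updates the LastScanned tag daily,
# so Terraform is told not to treat that one tag as drift.
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # hypothetical AMI
  instance_type = "t3.micro"

  lifecycle {
    ignore_changes = [tags["LastScanned"]]
  }
}
```

Keep the ignore list as narrow as possible; ignoring a whole attribute hides real drift too.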


🔧 Q7 — Module variable not taking new value

Answer: I check variable precedence; from lowest to highest it is environment variables (TF_VAR_name), terraform.tfvars, *.auto.tfvars, then -var and -var-file on the CLI, with later flags winning. I run terraform console to inspect the resolved value. Often the wrong tfvars file isn't being passed, or a default is silently winning.


🔧 Q8 — Terraform fails saying dependency cycle detected

Answer: Two resources reference each other, directly or indirectly. Terraform's dependency graph cannot resolve cycles, so I break the cycle, for example with a data source, a standalone rule resource, or by splitting the resource.
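A classic AWS case, sketched with hypothetical names: two security groups that reference each other in inline rules form a cycle, but standalone aws_security_group_rule resources break it because the rules depend on both groups rather than the groups depending on each other:

```hcl
# Inline ingress blocks referencing each other's group would be a
# cycle. Standalone rule resources keep the graph acyclic: both
# groups are created first, then the rules that connect them.
resource "aws_security_group" "app" {
  name   = "app"       # hypothetical
  vpc_id = var.vpc_id
}

resource "aws_security_group" "db" {
  name   = "db"        # hypothetical
  vpc_id = var.vpc_id
}

resource "aws_security_group_rule" "db_from_app" {
  type                     = "ingress"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  security_group_id        = aws_security_group.db.id
  source_security_group_id = aws_security_group.app.id
}
```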


🔧 Q9 — Provider authentication works locally but fails in CI

Answer: I check how credentials are passed in CI: environment variables, role assumption, or a profile. Terraform doesn't auto-load my local ~/.aws profiles in CI. I print the (masked) environment and test provider auth separately with a standalone CLI call.
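One common CI pattern is to have the AWS provider assume a deploy role explicitly instead of relying on local profiles; a sketch with a hypothetical role ARN:

```hcl
provider "aws" {
  region = "us-east-1"

  # The CI runner's base credentials come from env vars
  # (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY, or OIDC);
  # the provider then assumes the deploy role itself.
  assume_role {
    role_arn     = "arn:aws:iam::123456789012:role/ci-deploy" # hypothetical
    session_name = "terraform-ci"
  }
}
```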


🔧 Q10 — Terraform deleted and recreated resources after small tag change

Answer: Some resources treat an attribute change as ForceNew, and that behavior can differ between provider versions. I check the provider schema and the version changelog, then pin the provider version so behavior doesn't shift between runs.
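Pinning lives in required_providers; the version numbers below are hypothetical, the point is constraining upgrades to a known range:

```hcl
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # hypothetical pin: any 5.x, never a 6.x jump
    }
  }
}
```

Committing the .terraform.lock.hcl file pins the exact version across machines as well.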


🔧 Q11 — Remote state data source returning old values

Answer: The remote state hasn't been refreshed, usually because the upstream stack hasn't been applied yet. I run apply in the source stack first. Remember that remote state reads the stored outputs, not live infrastructure.
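For context, this is what the consuming side looks like; names are hypothetical, and the key point is in the comment:

```hcl
# Reads outputs from the upstream stack's stored state file; values
# are only as fresh as that stack's last apply, not the live infra.
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-tf-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

# Usage: data.terraform_remote_state.network.outputs.vpc_id
```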


🔧 Q12 — After refactoring module, Terraform wants to recreate everything

Answer: Because the resource addresses changed. I use terraform state mv to map old addresses to the new module path. Refactoring must include a state migration, not just a file move.
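On Terraform 1.1+, the migration can live in the config itself as a moved block, so every state that applies the refactored config performs the move automatically; addresses here are hypothetical:

```hcl
# Records that the root-module instance now lives inside a module.
# Each workspace that applies this config migrates its own state,
# with no manual terraform state mv per environment.
moved {
  from = aws_instance.web
  to   = module.compute.aws_instance.web
}
```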


🔧 Q13 — Terraform shows “known after apply” causing downstream errors

Answer: The value depends on an attribute generated at apply time. I redesign the dependency so the value isn't needed at plan time, or split the apply into two stages (for example with -target, used sparingly).


🔧 Q14 — Apply fails due to API rate limit errors

Answer: I reduce parallelism with terraform apply -parallelism=5 and check the provider's retry configuration. Large parallel applies can exceed API quotas; split the stack if needed.
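On the AWS provider specifically, retry behavior can also be tuned in the provider block; a sketch with hypothetical values:

```hcl
provider "aws" {
  region      = "us-east-1"
  max_retries = 10          # cap client-side retries on throttling
  # retry_mode = "adaptive" # available on newer 5.x provider
  #                         # versions; adds client-side rate limiting
}
```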


🔧 Q15 — State and real infra totally out of sync

Answer: I run terraform plan -refresh-only to see the drift. If it's heavy, I import missing resources and remove stale entries with terraform state rm. Worst case, I rebuild the state via imports. Never blindly destroy and reapply production.


