
How to Recover from a Partial Terraform Deployment on GCP: A Snowplow Quick Start Guide

By Snowplow Team
July 16, 2024

Q: What was the issue during Snowplow Quick Start setup on GCP?

A user attempting to deploy Snowplow on GCP using Terraform encountered the following sequence of problems:

  1. Ran terraform apply before enabling the Cloud SQL Admin API, causing the deployment to fail.

  2. Enabled the API and re-ran terraform apply, but still hit a failure.

  3. After waiting (as suggested), the user tried again, but this time the error said the Cloud SQL instance already existed.

  4. Attempting to recover, they deleted the Terraform state files (terraform.tfstate, terraform.tfstate.backup).

  5. Subsequent Terraform runs now threw errors stating that many GCP resources (load balancers, service accounts, firewalls, etc.) already existed.

Q: Why did this happen?

Terraform uses a state file to track the resources it manages. When a deployment fails midway (for example, because a required API hasn't been enabled), some GCP resources may already have been created, and the state file is the only record Terraform has of them. If that state is missing or corrupted, Terraform loses track of those resources.

Deleting the tfstate file effectively tells Terraform, “I’ve never created anything,” while the cloud provider (GCP) still holds those resources. This mismatch leads to conflicts and duplication errors.

Q: What’s the correct way to recover from a failed Terraform deployment?

Instead of deleting the state file, use Terraform’s built-in tools to import existing resources or refresh the state.

Step 1: Restore the original state file (if possible)

If you still have the terraform.tfstate.backup, try restoring it:

cp terraform.tfstate.backup terraform.tfstate
terraform plan

This will restore Terraform’s knowledge of the partially created resources.
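
As a quick sanity check after restoring the backup, you can list the resources Terraform now tracks (the exact resource addresses depend on the Snowplow modules you deployed):

terraform state list

If the list is empty or much shorter than expected, the backup predates the failed run and you'll need the import approach in Step 3.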

Step 2: Run terraform refresh to sync state with actual resources

If you restored your state file, run:

terraform refresh

This updates the state file to match what actually exists in GCP. It won't create or destroy anything, but it ensures Terraform is aware of the resources that are already there.
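
On Terraform 0.15.4 and later, terraform refresh is deprecated in favor of refresh-only mode, which performs the same reconciliation but shows you the detected changes before writing them to state:

terraform plan -refresh-only
terraform apply -refresh-only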

Step 3: Manually import resources if state is lost

If your state files are gone, use terraform import to bring existing resources back under Terraform control.

Example:

terraform import google_sql_database_instance.my_instance projects/PROJECT_ID/instances/INSTANCE_ID

Do this for each resource mentioned in the error messages (service accounts, load balancers, etc.).

Terraform will then recognize them and won’t try to recreate them.
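
In the Quick Start, most resources are created inside Terraform modules, so the import address usually needs a module prefix. The addresses below are hypothetical placeholders, not the exact names used by the Snowplow modules; check your .tf files, the module source, or the terraform plan output for the real resource addresses:

# Hypothetical module path and resource names; adjust to match your configuration
terraform import module.pipeline.google_service_account.collector projects/PROJECT_ID/serviceAccounts/SA_EMAIL
terraform import module.pipeline.google_compute_firewall.collector_ingress projects/PROJECT_ID/global/firewalls/FIREWALL_NAME

After each import, re-run terraform plan; once it no longer proposes to create resources that already exist, it is safe to apply.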

Q: Can Terraform be told to "skip" existing resources?

No. Terraform does not natively support skipping the creation of existing resources if they're not in the state file. This is by design, as Terraform treats the state as the source of truth.

That said, the terraform import process exists exactly for this use case — to bridge Terraform’s state with pre-existing infrastructure.
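
Relatedly, if you are on Terraform 1.5 or later, you can declare imports in configuration instead of running terraform import for each resource; Terraform then performs the import on the next plan/apply. A minimal sketch with placeholder values (not part of the Snowplow modules):

# Hypothetical import block added to your configuration
import {
  to = google_sql_database_instance.my_instance
  id = "projects/PROJECT_ID/instances/INSTANCE_ID"
}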

Q: Are there Snowplow-specific best practices for Terraform deployments?

Yes — here are some tips tailored to Snowplow deployments on GCP:

✅ Always enable required APIs (like sqladmin.googleapis.com) before running terraform apply (see the sketch after this list).

✅ Use a remote backend (like Google Cloud Storage) for storing Terraform state. This prevents accidental deletions and enables team collaboration.

✅ Protect the state file: it can contain sensitive values, so prefer a remote backend with state locking (the GCS backend locks state automatically) over committing terraform.tfstate to version control.

✅ Use terraform plan to preview changes before applying them — and avoid blind re-runs.

✅ If deploying iteratively, prefer terraform apply -replace=ADDRESS (the successor to terraform taint) or terraform destroy -target=ADDRESS to reset individual resources.
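
A minimal sketch covering the first two tips (the bucket name and prefix are placeholders, not values prescribed by the Snowplow Quick Start):

# Enable the Cloud SQL Admin API (plus any others the Quick Start docs list) before the first apply
gcloud services enable sqladmin.googleapis.com

Then, in your Terraform configuration, point the backend at a GCS bucket instead of local disk:

terraform {
  backend "gcs" {
    bucket = "my-terraform-state-bucket"  # placeholder: create this bucket first
    prefix = "snowplow/pipeline"          # placeholder: path within the bucket
  }
}

The GCS backend also locks the state during operations, which covers the locking tip above.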

Q: What should I do next if I’m stuck?

If Terraform is too tangled and you've already deleted state files, consider these options:

Option 1: Clean Slate

Manually delete the resources from GCP (use caution), then re-run terraform apply. This can be time-consuming and error-prone but may be easier than importing dozens of resources.
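
If you go this route, the gcloud CLI helps you find and remove the leftovers. The commands below are illustrative (INSTANCE_ID is a placeholder); double-check each resource before deleting it:

# Find and remove the partially created Cloud SQL instance
gcloud sql instances list
gcloud sql instances delete INSTANCE_ID

# Similar list commands exist for the other conflicting resources
gcloud compute firewall-rules list
gcloud iam service-accounts list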

Option 2: Rebuild the State

Use terraform import to manually import existing resources. This gives you the most control but requires familiarity with Terraform resource names and identifiers.

Option 3: Reach Out to Snowplow

If you're stuck, don't hesitate to contact Snowplow or explore our professional services. We’ve helped many teams recover and stabilize infrastructure after deployment hiccups.

Final Thoughts

Deploying Snowplow on GCP with Terraform is powerful but requires understanding how Terraform manages state. Avoid deleting .tfstate files unless you are absolutely certain you want to reset the infrastructure from scratch.

Pro tip: Using Snowplow with tools like Terraform, dbt, and BigQuery allows for robust, scalable, and highly customizable event data pipelines. Just remember to treat your infrastructure as code — and your state file as gold.
