Terraform Import in 2026: The Complete Guide Including the New import Block
Every infrastructure team hits this wall eventually. The AWS account already has hundreds of resources — VPCs, security groups, RDS clusters, S3 buckets — that predate any Terraform adoption. Someone spun them up through the console during a prototype phase, or you inherited them from a team that didn’t use IaC. Now you want Terraform managing them, and the question is how to get there without blowing anything up.
That’s what terraform import is for. The mechanics have changed significantly since Terraform 1.5 introduced declarative import blocks, and with OpenTofu now a credible alternative to the commercial product, it’s worth walking through all the approaches in one place.
The brownfield problem
“Brownfield” infrastructure means resources that exist outside Terraform’s state. Terraform doesn’t know about them, so it can’t track changes, and if you tried to run terraform plan against a fresh configuration, it would try to create duplicates of everything. The import workflow solves this by recording existing resources into the state file so that Terraform treats them as under management.
The workflow looks simple from the outside: tell Terraform a resource exists, write config that describes it, reconcile any drift, and you’re done. In practice the reconciliation step is where most teams spend their time — more on that later.
The old CLI approach
The original terraform import command has been around since Terraform 0.7. It works, and it’s not going away. The syntax is:
terraform import <resource_type>.<resource_name> <resource_id>
For an S3 bucket named my-app-assets-prod:
terraform import aws_s3_bucket.assets my-app-assets-prod
Before running that command you need a matching resource block in your configuration, otherwise Terraform has nowhere to record the state:
resource "aws_s3_bucket" "assets" {
  bucket = "my-app-assets-prod"
}
The import command writes to the state file. It does not write HCL. After the import you run terraform plan and Terraform shows you every attribute that differs between your stub resource block and what actually exists in AWS. You then update the HCL until the plan shows no changes.
For one or two resources this is manageable. For fifty security groups across three environments, it becomes a grind. Each resource requires its own import command, and you have to manually reverse-engineer the HCL from the AWS console or CLI output. The CLI approach also doesn’t compose well with pipelines — you can’t run terraform import inside an apply job and have it be idempotent.
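When teams do stick with the CLI for a batch, they usually script the command generation rather than typing each import by hand. A minimal sketch; the resource addresses and IDs below are hypothetical, and you'd review the emitted commands before piping them to a shell:

```shell
# Hypothetical helper: emit one `terraform import` command per
# "address id" pair. Review the output, then run it (or pipe to sh).
while read -r address id; do
  printf 'terraform import %s %s\n' "$address" "$id"
done <<'EOF'
aws_security_group.web_prod sg-0abc123def456789
aws_security_group.db_prod  sg-0def456abc789012
EOF
```

Each emitted line is a ready-to-run import command, so the remaining manual work is writing the matching resource stubs.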
The import block (Terraform 1.5+)
Terraform 1.5 (released June 2023) added declarative import blocks. You write the import alongside your resource configuration, run a normal plan-and-apply cycle, and Terraform handles the state write during apply. This is the approach to use for any new import work.
import {
  to = aws_s3_bucket.assets
  id = "my-app-assets-prod"
}

resource "aws_s3_bucket" "assets" {
  bucket = "my-app-assets-prod"
}
The import block lives in any .tf file — a dedicated imports.tf is a clean convention. During terraform plan, Terraform shows the import as a planned operation. During terraform apply, it writes the resource to state before processing any changes.
Here’s a more complete example importing three related resources — an S3 bucket, an EC2 instance, and a security group that the instance uses:
import {
  to = aws_s3_bucket.app_data
  id = "my-app-data-prod"
}

import {
  to = aws_instance.app_server
  id = "i-0abc123def456789"
}

import {
  to = aws_security_group.app_sg
  id = "sg-0abc123def456789"
}

resource "aws_s3_bucket" "app_data" {
  bucket = "my-app-data-prod"

  tags = {
    Environment = "prod"
    ManagedBy   = "terraform"
  }
}

resource "aws_instance" "app_server" {
  ami                    = "ami-0c02fb55956c7d316"
  instance_type          = "t3.medium"
  vpc_security_group_ids = [aws_security_group.app_sg.id]

  tags = {
    Name        = "app-server-prod"
    Environment = "prod"
  }
}

resource "aws_security_group" "app_sg" {
  name        = "app-sg-prod"
  description = "Application security group"
  vpc_id      = "vpc-0abc123def456789"

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
After apply succeeds, the import blocks can be removed. The resources are in state and will be managed normally from that point forward. This is important: import blocks are one-time operations. Leave them in permanently and you’ll get errors on subsequent applies because Terraform will try to import a resource that’s already in state.
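For larger batches, Terraform 1.7 and later also allow for_each on import blocks, which replaces a stack of near-identical blocks with a single map-driven one. A sketch, assuming the bucket names below are hypothetical:

```hcl
# Requires Terraform >= 1.7 (for_each on import blocks).
# Bucket names are illustrative placeholders.
locals {
  buckets = {
    assets = "my-app-assets-prod"
    logs   = "my-app-logs-prod"
  }
}

import {
  for_each = local.buckets
  to       = aws_s3_bucket.managed[each.key]
  id       = each.value
}

resource "aws_s3_bucket" "managed" {
  for_each = local.buckets
  bucket   = each.value
}
```

As with single import blocks, the for_each variant is removed after a successful apply.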
generate-config-out: skipping the HCL stub
The part that makes the import block genuinely powerful for brownfield work is terraform plan -generate-config-out. This flag tells Terraform to write HCL for all resources referenced by import blocks, instead of requiring you to write it yourself.
terraform plan -generate-config-out=generated.tf
You write the import blocks, leave out the resource blocks entirely, run the plan command, and Terraform generates generated.tf with the full resource configuration pulled from the provider’s read logic. The output is verbose — every attribute, even ones you’d normally omit because they have sensible defaults — but it’s a starting point you can edit down.
# Workflow for a new batch of imports
# 1. Write only the import blocks in imports.tf
# 2. Generate config
terraform plan -generate-config-out=generated.tf
# 3. Review and clean up generated.tf
# Remove read-only computed attributes that shouldn't be set in config
# Remove attributes that match defaults you don't need to be explicit about
# 4. Run a clean plan to verify no unexpected changes
terraform plan
# 5. Apply
terraform apply
# 6. Remove the import blocks from imports.tf
One gotcha: the generated config includes computed attributes like id, arn, and owner_id that shouldn’t appear in resource blocks. Terraform flags these during the subsequent plan. You’ll need to remove them manually. For an S3 bucket the generated output might include bucket_domain_name, bucket_regional_domain_name, and hosted_zone_id — all of these are computed and should be deleted from your config before the real plan.
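To make that cleanup concrete, here is a sketch of the before-and-after for an S3 bucket. The generated attribute values are illustrative, not real tool output:

```hcl
# Sketch of generated output (abridged; values illustrative):
#
# resource "aws_s3_bucket" "assets" {
#   bucket                      = "my-app-assets-prod"
#   bucket_domain_name          = "my-app-assets-prod.s3.amazonaws.com"           # computed
#   bucket_regional_domain_name = "my-app-assets-prod.s3.us-east-1.amazonaws.com" # computed
#   hosted_zone_id              = "Z3AQBSTGFYJSTF"                                # computed
# }

# After cleanup: keep only the arguments you intend to manage.
resource "aws_s3_bucket" "assets" {
  bucket = "my-app-assets-prod"
}
```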
Drift after import
Getting resources into state is only half the problem. Once they’re there, Terraform will detect any difference between your HCL and the actual resource configuration and offer to change it. This is called drift, and it’s where imports go wrong.
The most common scenario: you import an EC2 instance, write a stub resource block, run the plan, and Terraform wants to change the user_data, iam_instance_profile, or network interface configuration because you didn’t capture those in the HCL. If you apply without reviewing carefully, you’re modifying production resources.
A safer pattern for large brownfield imports:
# After importing, generate a plan and save it
terraform plan -out=review.tfplan
# Inspect every proposed change carefully
terraform show review.tfplan
# Only apply when you've confirmed the plan matches your expectations
terraform apply review.tfplan
For resources where drift is acceptable or expected (like an EC2 instance with a manually updated user_data), you can use lifecycle blocks to tell Terraform to ignore specific attributes:
resource "aws_instance" "app_server" {
  ami           = "ami-0c02fb55956c7d316"
  instance_type = "t3.medium"

  lifecycle {
    ignore_changes = [user_data, tags["LastModified"]]
  }
}
Use ignore_changes sparingly. It’s a way to acknowledge that something is managed outside Terraform, not a way to paper over config you didn’t bother to write correctly.
Partial imports and state corruption
Terraform import is not transactional. If you’re importing a batch of resources and the process fails halfway through — network timeout, API rate limit, misconfigured credentials — you’ll have some resources in state and some not. The state file won’t be corrupted in the data-loss sense, but it will be inconsistent with your configuration.
Before any import session, back up the state:
# If using local state
cp terraform.tfstate terraform.tfstate.backup
# If using S3 backend with versioning enabled (you should always have this)
# The backup is automatic — just note the current version ID before proceeding
aws s3api head-object \
--bucket my-tfstate-bucket \
--key path/to/terraform.tfstate \
--query 'VersionId'
For the S3 backend setup and remote state best practices, see our post on Terraform State.
If something goes wrong and you need to remove a partially imported resource from state without touching the real infrastructure:
terraform state rm aws_instance.app_server
This removes the resource from state without destroying it in AWS.
Import in GitLab CI pipelines
The import block approach composes better with CI than the CLI command, but there’s still a challenge: import is a one-time operation, and your pipeline runs repeatedly. You need a way to run the import on the first execution and skip it on subsequent ones.
The cleanest pattern is a separate pipeline stage that only runs when a specific variable is set:
# .gitlab-ci.yml
stages:
  - validate
  - import
  - plan
  - apply

variables:
  TF_ROOT: ${CI_PROJECT_DIR}/infrastructure
  TF_STATE_NAME: production

.terraform-base:
  image: registry.gitlab.com/gitlab-org/terraform-images/stable:latest
  before_script:
    - cd ${TF_ROOT}
    - terraform init
        -backend-config="address=${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${TF_STATE_NAME}"
        -backend-config="lock_address=${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${TF_STATE_NAME}/lock"
        -backend-config="unlock_address=${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${TF_STATE_NAME}/lock"
        -backend-config="username=gitlab-ci-token"
        -backend-config="password=${CI_JOB_TOKEN}"
        -backend-config="lock_method=POST"
        -backend-config="unlock_method=DELETE"
        -backend-config="retry_wait_min=5"

terraform-import:
  extends: .terraform-base
  stage: import
  script:
    # Import blocks are processed during a normal apply
    - terraform apply -auto-approve
  rules:
    # Only run this job when explicitly triggered with RUN_IMPORT=true
    - if: '$RUN_IMPORT == "true"'
  # Keep import blocks in a separate file, check them in temporarily,
  # then remove them and re-commit after a successful import.
For a more complete GitLab CI + Terraform setup, the GitLab CI + Terraform IaC Pipeline post covers the full pipeline configuration including state backend, merge request plans, and apply gates. The Run Terraform from GitLab CI post has the authentication and credentials setup.
The critical discipline: after the import apply succeeds, remove the import blocks from your .tf files and commit. Otherwise the next pipeline run will fail trying to import a resource that’s already in state.
An alternative worth considering for large imports is running the import locally with the same backend configuration as your CI pipeline. This gives you faster iteration on the HCL cleanup step without burning pipeline minutes, and the state changes are immediately visible to CI because they share the remote backend.
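One way to wire that up is through the http backend's standard TF_HTTP_* environment variables, which let a local terraform init attach to GitLab-managed state without any backend-config flags. The project ID, username, and token below are placeholders you must substitute:

```shell
# Point a local terraform at the same GitLab-managed state the pipeline uses.
# All values are placeholders: substitute your project ID, state name,
# GitLab username, and a personal access token with API scope.
export PROJECT_ID="12345678"
export TF_STATE_NAME="production"
export TF_HTTP_ADDRESS="https://gitlab.com/api/v4/projects/${PROJECT_ID}/terraform/state/${TF_STATE_NAME}"
export TF_HTTP_LOCK_ADDRESS="${TF_HTTP_ADDRESS}/lock"
export TF_HTTP_UNLOCK_ADDRESS="${TF_HTTP_ADDRESS}/lock"
export TF_HTTP_LOCK_METHOD="POST"
export TF_HTTP_UNLOCK_METHOD="DELETE"
export TF_HTTP_USERNAME="my-gitlab-username"
export TF_HTTP_PASSWORD="my-personal-access-token"
```

After exporting these, terraform init, plan, and apply operate against the same remote state your CI jobs use, so an import completed locally is immediately visible to the pipeline.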
OpenTofu compatibility
The import block syntax and generate-config-out both work identically in OpenTofu. OpenTofu forked from the last MPL-licensed Terraform codebase (the 1.5.x line) and has maintained full compatibility with the import features. If your organization has moved to OpenTofu following the 2023 license change, the import workflow described here applies without modification; just replace the terraform binary with tofu.
# OpenTofu import — same syntax, same behavior
tofu plan -generate-config-out=generated.tf
tofu apply
The Terraform vs OpenTofu 2026 post covers the feature divergence and migration considerations in more depth if you’re evaluating which binary to standardize on.
A note on modules
The import block works with resources inside modules, which was a significant limitation of the original CLI approach. The to argument accepts module paths:
import {
  to = module.networking.aws_vpc.main
  id = "vpc-0abc123def456789"
}
This makes it practical to import resources into an existing module structure rather than importing them at the root level and refactoring later.
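If you do end up importing at the root and refactoring afterward, a moved block (available since Terraform 1.1) records the address change in state without destroying and recreating the resource. A sketch with hypothetical addresses:

```hcl
# Hypothetical refactor: resource previously at the root,
# now defined inside module.networking.
moved {
  from = aws_vpc.main
  to   = module.networking.aws_vpc.main
}
```

Like import blocks, moved blocks can be deleted once the apply that processes them has succeeded.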
Wrapping up
The tactical summary: use import blocks for any new import work, use generate-config-out to avoid writing HCL stubs from scratch, always review the plan before applying, back up state before starting, and remove import blocks after a successful apply. The old CLI command is still there if you need it for scripting or automation contexts where declarative blocks don’t fit, but for interactive brownfield onboarding the block approach is substantially less error-prone.
The hardest part of importing at scale isn’t the syntax — it’s the drift reconciliation. Budget time for the post-import plan review. Resources that have been hand-managed for months will have configuration that doesn’t match any reasonable HCL stub, and sorting out what to codify versus what to ignore takes judgment.