S3
Stores the state as a given key in a given bucket on Amazon S3. This backend also supports state locking and consistency checking via DynamoDB, which can be enabled by setting the dynamodb_table field to an existing DynamoDB table name. A single DynamoDB table can be used to lock multiple remote state files. Terraform generates key names that include the values of the bucket and key variables.
Warning! It is highly recommended that you enable Bucket Versioning on the S3 bucket to allow for state recovery in the case of accidental deletions and human error.
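If the bucket is itself managed with Terraform's AWS provider (typically in a separate configuration from the one using this backend), versioning can be enabled with resources along the lines of the following sketch; the resource names are illustrative:
resource "aws_s3_bucket" "state" {
  bucket = "mybucket"
}

resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.state.id

  versioning_configuration {
    status = "Enabled"
  }
}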
Example Configuration
terraform {
backend "s3" {
bucket = "mybucket"
key = "path/to/my/key"
region = "us-east-1"
}
}
This assumes we have a bucket created called mybucket. The Terraform state is written to the key path/to/my/key.
Note that for the access credentials we recommend using a partial configuration.
State Storage
The S3 backend stores state data in an S3 object at the path set by the key parameter in the S3 bucket indicated by the bucket parameter. Using the example shown above, the state would be stored at the path path/to/my/key in the bucket mybucket.
When using workspaces, the state for the default workspace is stored at the location described above. Other workspaces are stored using the path <workspace_key_prefix>/<workspace_name>/<key>. The default workspace key prefix is env: and it can be configured using the parameter workspace_key_prefix. Using the example above, the state for the workspace development would be stored at the path env:/development/path/to/my/key.
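For example, with the configuration above, creating a development workspace produces a second state object alongside the default one; the S3 URIs in the comments below illustrate the resulting paths:
$ terraform workspace new development
# state for "default":     s3://mybucket/path/to/my/key
# state for "development": s3://mybucket/env:/development/path/to/my/key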
Permissions Required
S3 Bucket Permissions
When not using workspaces (or when only using the default workspace), Terraform will need the following AWS IAM permissions on the target backend bucket:
- s3:ListBucket on arn:aws:s3:::mybucket. At a minimum, this must be able to list the path where the state is stored.
- s3:GetObject on arn:aws:s3:::mybucket/path/to/my/key and arn:aws:s3:::mybucket/path/to/my/key.tflock
- s3:PutObject on arn:aws:s3:::mybucket/path/to/my/key and arn:aws:s3:::mybucket/path/to/my/key.tflock

Note: s3:DeleteObject is not needed, as Terraform will not delete the state storage.
This is seen in the following AWS IAM Statement:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::mybucket"
},
{
"Effect": "Allow",
"Action": ["s3:GetObject", "s3:PutObject"],
"Resource": [
"arn:aws:s3:::mybucket/path/to/my/key",
"arn:aws:s3:::mybucket/path/to/my/key.tflock"
]
}
]
}
When using workspaces, Terraform will also need permissions to create, list, read, update, and delete the workspace state storage:
- s3:ListBucket on arn:aws:s3:::mybucket. At a minimum, this must be able to list the path where the default workspace is stored as well as the other workspaces.
- s3:GetObject on arn:aws:s3:::mybucket/path/to/my/key, arn:aws:s3:::mybucket/<workspace_key_prefix>/*/path/to/my/key and arn:aws:s3:::mybucket/<workspace_key_prefix>/*/path/to/my/key.tflock
- s3:PutObject on arn:aws:s3:::mybucket/path/to/my/key, arn:aws:s3:::mybucket/<workspace_key_prefix>/*/path/to/my/key and arn:aws:s3:::mybucket/<workspace_key_prefix>/*/path/to/my/key.tflock
- s3:DeleteObject on arn:aws:s3:::mybucket/<workspace_key_prefix>/*/path/to/my/key and arn:aws:s3:::mybucket/<workspace_key_prefix>/*/path/to/my/key.tflock
Note: AWS can control access to S3 buckets with either IAM policies attached to users/groups/roles (like the example above) or resource policies attached to buckets (which look similar but also require a Principal to indicate which entity has those permissions). For more details, see Amazon's documentation about S3 access control.
DynamoDB Table Permissions
If you are using state locking, Terraform will need the following AWS IAM permissions on the DynamoDB table (arn:aws:dynamodb:::table/mytable):
- dynamodb:DescribeTable
- dynamodb:GetItem
- dynamodb:PutItem
- dynamodb:DeleteItem

This is seen in the following AWS IAM Statement:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:DescribeTable",
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:DeleteItem"
],
"Resource": "arn:aws:dynamodb:*:*:table/mytable"
}
]
}
Data Source Configuration
To make use of the S3 remote state in another configuration,
use the terraform_remote_state
data source.
data "terraform_remote_state" "network" {
backend = "s3"
config = {
bucket = "terraform-state-prod"
key = "network/terraform.tfstate"
region = "us-east-1"
}
}
The terraform_remote_state
data source will return all of the root module
outputs defined in the referenced remote state (but not any outputs from
nested modules unless they are explicitly output again in the root). An
example output might look like:
data.terraform_remote_state.network:
id = 2016-10-29 01:57:59.780010914 +0000 UTC
addresses.# = 2
addresses.0 = 52.207.220.222
addresses.1 = 54.196.78.166
backend = s3
config.% = 3
config.bucket = terraform-state-prod
config.key = network/terraform.tfstate
config.region = us-east-1
elb_address = web-elb-790251200.us-east-1.elb.amazonaws.com
public_subnet_id = subnet-1e05dd33
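Those outputs can then be referenced from the consuming configuration. A minimal sketch, assuming the referenced state exposes the public_subnet_id output shown in the listing above (the AMI ID and instance type are illustrative):
resource "aws_instance" "web" {
  ami           = "ami-12345678"   # illustrative AMI ID
  instance_type = "t3.micro"

  # Read an output exposed by the referenced remote state.
  subnet_id = data.terraform_remote_state.network.outputs.public_subnet_id
}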
Configuration
This backend requires the configuration of the AWS Region and S3 state storage. Other configuration, such as enabling DynamoDB state locking, is optional.
Credentials and Shared Configuration
Warning: We recommend using environment variables to supply credentials and other sensitive data. If you use -backend-config
or hardcode these values directly in your configuration, Terraform will include these values in both the .terraform
subdirectory and in plan files. Refer to Credentials and Sensitive Data for details.
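One way to follow this recommendation is to leave credentials out of the backend block entirely and supply them through the environment. A minimal sketch, with illustrative bucket and key values:
# Credentials come from the environment rather than the configuration, e.g.:
#   export AWS_ACCESS_KEY_ID="..."
#   export AWS_SECRET_ACCESS_KEY="..."

terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}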
The following configuration is required:
- region - (Required) AWS Region of the S3 Bucket and DynamoDB Table (if used). This can also be sourced from the AWS_DEFAULT_REGION and AWS_REGION environment variables.
The following configuration is optional:
- use_lockfile - (Experimental, Optional) Whether to use a lockfile for locking the state file. Defaults to false.
- access_key - (Optional) AWS access key. If configured, must also configure secret_key. This can also be sourced from the AWS_ACCESS_KEY_ID environment variable, AWS shared credentials file (e.g. ~/.aws/credentials), or AWS shared configuration file (e.g. ~/.aws/config).
- allowed_account_ids - (Optional) List of allowed AWS account IDs to prevent potential destruction of a live environment. Conflicts with forbidden_account_ids.
- custom_ca_bundle - (Optional) File containing custom root and intermediate certificates. Can also be set using the AWS_CA_BUNDLE environment variable. Setting ca_bundle in the shared config file is not supported.
- ec2_metadata_service_endpoint - (Optional) Custom endpoint URL for the EC2 Instance Metadata Service (IMDS) API. Can also be set with the AWS_EC2_METADATA_SERVICE_ENDPOINT environment variable.
- ec2_metadata_service_endpoint_mode - (Optional) Mode to use in communicating with the metadata service. Valid values are IPv4 and IPv6. Can also be set with the AWS_EC2_METADATA_SERVICE_ENDPOINT_MODE environment variable.
- forbidden_account_ids - (Optional) List of forbidden AWS account IDs to prevent potential destruction of a live environment. Conflicts with allowed_account_ids.
- http_proxy - (Optional) URL of a proxy to use for HTTP requests when accessing the AWS API. Can also be set using the HTTP_PROXY or http_proxy environment variables.
- https_proxy - (Optional) URL of a proxy to use for HTTPS requests when accessing the AWS API. Can also be set using the HTTPS_PROXY or https_proxy environment variables.
- iam_endpoint - (Optional, Deprecated) Custom endpoint URL for the AWS Identity and Access Management (IAM) API. Use endpoints.iam instead.
- insecure - (Optional) Whether to explicitly allow the backend to perform "insecure" SSL requests. If omitted, the default value is false.
- no_proxy - (Optional) Comma-separated list of hosts that should not use HTTP or HTTPS proxies. Each value can be one of:
  - A domain name
  - An IP address
  - A CIDR address
  - An asterisk (*), to indicate that no proxying should be performed
  Domain name and IP address values can also include a port number. Can also be set using the NO_PROXY or no_proxy environment variables.
- max_retries - (Optional) The maximum number of times an AWS API request is retried on retryable failure. Defaults to 5.
- profile - (Optional) Name of AWS profile in AWS shared credentials file (e.g. ~/.aws/credentials) or AWS shared configuration file (e.g. ~/.aws/config) to use for credentials and/or configuration. This can also be sourced from the AWS_PROFILE environment variable.
- retry_mode - (Optional) Specifies how retries are attempted. Valid values are standard and adaptive. Can also be configured using the AWS_RETRY_MODE environment variable or the shared config file parameter retry_mode.
- secret_key - (Optional) AWS secret access key. If configured, must also configure access_key. This can also be sourced from the AWS_SECRET_ACCESS_KEY environment variable, AWS shared credentials file (e.g. ~/.aws/credentials), or AWS shared configuration file (e.g. ~/.aws/config).
- shared_config_files - (Optional) List of paths to AWS shared configuration files. Defaults to ~/.aws/config.
- shared_credentials_file - (Optional, Deprecated, use shared_credentials_files instead) Path to the AWS shared credentials file. Defaults to ~/.aws/credentials.
- shared_credentials_files - (Optional) List of paths to AWS shared credentials files. Defaults to ~/.aws/credentials.
- skip_credentials_validation - (Optional) Skip credentials validation via the STS API. Useful for testing and for AWS API implementations that do not have STS available.
- skip_region_validation - (Optional) Skip validation of the provided region name.
- skip_requesting_account_id - (Optional) Whether to skip requesting the account ID. Useful for AWS API implementations that do not have the IAM, STS API, or metadata API.
- skip_metadata_api_check - (Optional) Skip usage of the EC2 Metadata API.
- skip_s3_checksum - (Optional) Do not include checksum when uploading S3 Objects. Useful for some S3-Compatible APIs.
- sts_endpoint - (Optional, Deprecated) Custom endpoint URL for the AWS Security Token Service (STS) API. Use endpoints.sts instead.
- sts_region - (Optional) AWS region for STS. If unset, AWS will use the same region for STS as other non-STS operations.
- token - (Optional) Multi-Factor Authentication (MFA) token. This can also be sourced from the AWS_SESSION_TOKEN environment variable.
- use_dualstack_endpoint - (Optional) Force the backend to resolve endpoints with DualStack capability. Can also be set with the AWS_USE_DUALSTACK_ENDPOINT environment variable or in a shared config file (use_dualstack_endpoint).
- use_fips_endpoint - (Optional) Force the backend to resolve endpoints with FIPS capability. Can also be set with the AWS_USE_FIPS_ENDPOINT environment variable or in a shared config file (use_fips_endpoint).
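As an illustration, a backend block that authenticates with a named profile and guards against applying to the wrong account might look like the following sketch; the profile name and account ID are illustrative:
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"

    profile             = "terraform-admin"   # illustrative profile name
    allowed_account_ids = ["123456789012"]    # illustrative account ID
  }
}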
Overriding AWS API endpoints
The optional argument endpoints contains the following arguments:
- dynamodb - (Optional) Custom endpoint URL for the AWS DynamoDB API. This can also be sourced from the environment variable AWS_ENDPOINT_URL_DYNAMODB or the deprecated environment variable AWS_DYNAMODB_ENDPOINT.
- iam - (Optional) Custom endpoint URL for the AWS IAM API. This can also be sourced from the environment variable AWS_ENDPOINT_URL_IAM or the deprecated environment variable AWS_IAM_ENDPOINT.
- s3 - (Optional) Custom endpoint URL for the AWS S3 API. This can also be sourced from the environment variable AWS_ENDPOINT_URL_S3 or the deprecated environment variable AWS_S3_ENDPOINT.
- sso - (Optional) Custom endpoint URL for the AWS IAM Identity Center (formerly known as AWS SSO) API. This can also be sourced from the environment variable AWS_ENDPOINT_URL_SSO.
- sts - (Optional) Custom endpoint URL for the AWS STS API. This can also be sourced from the environment variable AWS_ENDPOINT_URL_STS or the deprecated environment variable AWS_STS_ENDPOINT.
The environment variable AWS_ENDPOINT_URL
can be used to set a base endpoint URL for all services.
Endpoints can also be overridden using the AWS shared configuration file.
Setting the parameter endpoint_url
on a profile will set that endpoint for all services.
To set endpoints for specific services, create a services
section and set the endpoint_url
parameters for each desired service.
Endpoints set for specific services will override the base endpoint configured in the profile.
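For example, pointing only the S3 API at an alternative endpoint through the backend's endpoints argument might look like the following sketch; the endpoint URL is illustrative:
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"

    endpoints = {
      s3 = "https://s3.example.internal"   # illustrative custom endpoint
    }
  }
}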
Assume Role Configuration
The argument assume_role contains the following arguments:
- role_arn - (Required) Amazon Resource Name (ARN) of the IAM Role to assume.
- duration - (Optional) The duration individual credentials will be valid. Credentials are automatically renewed up to the maximum defined by the AWS account. Specified using the format <hours>h<minutes>m<seconds>s with any unit being optional. For example, an hour and a half can be specified as 1h30m or 90m. Must be between 15 minutes (15m) and 12 hours (12h).
- external_id - (Optional) External identifier to use when assuming the role.
- policy - (Optional) IAM Policy JSON describing further restricting permissions for the IAM Role being assumed.
- policy_arns - (Optional) Set of Amazon Resource Names (ARNs) of IAM Policies describing further restricting permissions for the IAM Role being assumed.
- session_name - (Optional) Session name to use when assuming the role.
- source_identity - (Optional) Source identity specified by the principal assuming the role.
- tags - (Optional) Map of assume role session tags.
- transitive_tag_keys - (Optional) Set of assume role session tag keys to pass to any subsequent sessions.

Multiple assume_role values can be specified, and the roles will be assumed in order.
terraform {
backend "s3" {
bucket = "example-bucket"
key = "path/to/state"
region = "us-east-1"
assume_role = {
role_arn = "arn:aws:iam::PRODUCTION-ACCOUNT-ID:role/Terraform"
}
}
}
Assume Role With Web Identity Configuration
The following assume_role_with_web_identity configuration block is optional:
- role_arn - (Required) Amazon Resource Name (ARN) of the IAM Role to assume. Can also be set with the AWS_ROLE_ARN environment variable.
- duration - (Optional) The duration individual credentials will be valid. Credentials are automatically renewed up to the maximum defined by the AWS account. Specified using the format <hours>h<minutes>m<seconds>s with any unit being optional. For example, an hour and a half can be specified as 1h30m or 90m. Must be between 15 minutes (15m) and 12 hours (12h).
- policy - (Optional) IAM Policy JSON describing further restricting permissions for the IAM Role being assumed.
- policy_arns - (Optional) Set of Amazon Resource Names (ARNs) of IAM Policies describing further restricting permissions for the IAM Role being assumed.
- session_name - (Optional) Session name to use when assuming the role. Can also be set with the AWS_ROLE_SESSION_NAME environment variable.
- web_identity_token - (Optional) The value of a web identity token from an OpenID Connect (OIDC) or OAuth provider. One of web_identity_token or web_identity_token_file is required.
- web_identity_token_file - (Optional) File containing a web identity token from an OpenID Connect (OIDC) or OAuth provider. One of web_identity_token_file or web_identity_token is required. Can also be set with the AWS_WEB_IDENTITY_TOKEN_FILE environment variable.
terraform {
backend "s3" {
bucket = "example-bucket"
key = "path/to/state"
region = "us-east-1"
assume_role_with_web_identity = {
role_arn = "arn:aws:iam::PRODUCTION-ACCOUNT-ID:role/Terraform"
web_identity_token = "<token value>"
}
}
}
S3 State Storage
The following configuration is required:
- bucket - (Required) Name of the S3 Bucket.
- key - (Required) Path to the state file inside the S3 Bucket. When using a non-default workspace, the state path will be <workspace_key_prefix>/<workspace_name>/<key> (see also the workspace_key_prefix configuration).
The following configuration is optional:
- acl - (Optional) Canned ACL to be applied to the state and lock files.
- encrypt - (Optional) Enable server side encryption of the state and lock files.
- endpoint - (Optional, Deprecated) Custom endpoint URL for the AWS S3 API. Use endpoints.s3 instead.
- force_path_style - (Optional, Deprecated) Enable path-style S3 URLs (https://<HOST>/<BUCKET> instead of https://<BUCKET>.<HOST>).
- kms_key_id - (Optional) Amazon Resource Name (ARN) of a Key Management Service (KMS) Key to use for encrypting the state and lock files. Note that if this value is specified, Terraform will need kms:Encrypt, kms:Decrypt and kms:GenerateDataKey permissions on this KMS key.
- sse_customer_key - (Optional) The key to use for encrypting state and lock files with Server-Side Encryption with Customer-Provided Keys (SSE-C). This is the base64-encoded value of the key, which must decode to 256 bits. This can also be sourced from the AWS_SSE_CUSTOMER_KEY environment variable, which is recommended due to the sensitivity of the value. Setting it inside a Terraform file will cause it to be persisted to disk in terraform.tfstate.
- use_path_style - (Optional) Enable path-style S3 URLs (https://<HOST>/<BUCKET> instead of https://<BUCKET>.<HOST>).
- workspace_key_prefix - (Optional) Prefix applied to the state path inside the bucket. This is only relevant when using a non-default workspace. Defaults to env:.
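For example, enabling server-side encryption of the state with a customer-managed KMS key might look like the following sketch; the key ARN is illustrative:
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"

    encrypt    = true
    kms_key_id = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"  # illustrative key ARN
  }
}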
State Locking
State locking is an opt-in feature of the S3 backend.
Locking can be enabled via an S3 "lockfile" (introduced as experimental in Terraform 1.10) or DynamoDB. To support migration from older versions of Terraform which only support DynamoDB-based locking, the S3 and DynamoDB arguments below can be configured simultaneously. In a future minor version the DynamoDB locking mechanism will be removed.
To enable S3 state locking, use the following optional argument:
- use_lockfile - (Optional, Experimental) Whether to use a lockfile for locking the state file. Defaults to false.
To enable DynamoDB state locking, use the following optional arguments:
- dynamodb_endpoint - (Optional, Deprecated) Custom endpoint URL for the AWS DynamoDB API. Use endpoints.dynamodb instead.
- dynamodb_table - (Optional) Name of DynamoDB Table to use for state locking and consistency. The table must have a partition key named LockID with type of String.
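As a sketch, the following enables both locking mechanisms during a migration window and shows a matching DynamoDB table definition with the required LockID partition key. The table and resource names are illustrative, and the table would typically be managed in a separate configuration from the one that uses it for locking:
terraform {
  backend "s3" {
    bucket         = "mybucket"
    key            = "path/to/my/key"
    region         = "us-east-1"

    use_lockfile   = true             # S3 lockfile (experimental)
    dynamodb_table = "terraform-lock" # illustrative table name
  }
}

resource "aws_dynamodb_table" "terraform_lock" {
  name         = "terraform-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}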
Multi-account AWS Architecture
A common architectural pattern is for an organization to use a number of separate AWS accounts to isolate different teams and environments. For example, a "staging" system will often be deployed into a separate AWS account than its corresponding "production" system, to minimize the risk of the staging environment affecting production infrastructure, whether via rate limiting, misconfigured access controls, or other unintended interactions.
The S3 backend can be used in a number of different ways that make different tradeoffs between convenience, security, and isolation in such an organization. This section describes one such approach that aims to find a good compromise between these tradeoffs, allowing use of Terraform's workspaces feature to switch conveniently between multiple isolated deployments of the same configuration.
Use this section as a starting-point for your approach, but note that you will probably need to make adjustments for the unique standards and regulations that apply to your organization. You will also need to make some adjustments to this approach to account for existing practices within your organization, if for example other tools have previously been used to manage infrastructure.
Terraform is an administrative tool that manages your infrastructure, and so ideally the infrastructure that is used by Terraform should exist outside of the infrastructure that Terraform manages. This can be achieved by creating a separate administrative AWS account which contains the user accounts used by human operators and any infrastructure and tools used to manage the other accounts. Isolating shared administrative tools from your main environments has a number of advantages, such as avoiding accidentally damaging the administrative infrastructure while changing the target infrastructure, and reducing the risk that an attacker might abuse production infrastructure to gain access to the (usually more privileged) administrative infrastructure.
Administrative Account Setup
Your administrative AWS account will contain at least the following items:
- One or more IAM users for system administrators that will log in to maintain infrastructure in the other accounts.
- Optionally, one or more IAM groups to differentiate between different groups of users that have different levels of access to the other AWS accounts.
- An S3 bucket that will contain the Terraform state files for each workspace.
- A DynamoDB table that will be used for locking to prevent concurrent operations on a single workspace.
Provide the S3 bucket name and DynamoDB table name to Terraform within the
S3 backend configuration using the bucket
and dynamodb_table
arguments
respectively, and configure a suitable workspace_key_prefix
to contain
the states of the various workspaces that will subsequently be created for
this configuration.
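Putting those pieces together, the backend configuration in the administrative account might look like the following sketch; all names are illustrative:
terraform {
  backend "s3" {
    bucket               = "myorg-terraform-states"   # illustrative bucket name
    key                  = "myapp/terraform.tfstate"  # illustrative key
    region               = "us-east-1"
    dynamodb_table       = "myorg-terraform-locks"    # illustrative table name
    workspace_key_prefix = "myapp"                    # illustrative prefix
  }
}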
Environment Account Setup
For the sake of this section, the term "environment account" refers to one of the accounts whose contents are managed by Terraform, separate from the administrative account described above.
Your environment accounts will eventually contain your own product-specific infrastructure. Along with this it must contain one or more IAM roles that grant sufficient access for Terraform to perform the desired management tasks.
Delegating Access
Each Administrator will run Terraform using credentials for their IAM user in the administrative account. IAM Role Delegation is used to grant these users access to the roles created in each environment account.
Full details on role delegation are covered in the AWS documentation linked above. The most important details are:
- Each role's Assume Role Policy must grant access to the administrative AWS account, which creates a trust relationship with the administrative AWS account so that its users may assume the role.
- The users or groups within the administrative account must also have a policy that creates the converse relationship, allowing these users or groups to assume that role.
Since the purpose of the administrative account is only to host tools for managing other accounts, it is useful to give the administrative accounts restricted access only to the specific operations needed to assume the environment account role and access the Terraform state. By blocking all other access, you remove the risk that user error will lead to staging or production resources being created in the administrative account by mistake.
When configuring Terraform, use either environment variables or the standard
credentials file ~/.aws/credentials
to provide the administrator user's
IAM credentials within the administrative account to both the S3 backend and
to Terraform's AWS provider.
Use conditional configuration to pass a different assume_role
value to
the AWS provider depending on the selected workspace. For example:
variable "workspace_iam_roles" {
default = {
staging = "arn:aws:iam::STAGING-ACCOUNT-ID:role/Terraform"
production = "arn:aws:iam::PRODUCTION-ACCOUNT-ID:role/Terraform"
}
}
provider "aws" {
# No credentials explicitly set here because they come from either the
# environment or the global credentials file.
assume_role {
role_arn = var.workspace_iam_roles[terraform.workspace]
}
}
If workspace IAM roles are centrally managed and shared across many separate
Terraform configurations, the role ARNs could also be obtained via a data
source such as terraform_remote_state
to avoid repeating these values.
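A sketch of that approach, assuming a hypothetical shared "iam" state that exposes a workspace_iam_roles output map (the bucket and key are illustrative):
data "terraform_remote_state" "iam" {
  backend = "s3"
  config = {
    bucket = "myorg-terraform-states"   # illustrative bucket name
    key    = "iam/terraform.tfstate"    # illustrative key
    region = "us-east-1"
  }
}

provider "aws" {
  assume_role {
    # Look up the role ARN for the current workspace from the shared state.
    role_arn = data.terraform_remote_state.iam.outputs.workspace_iam_roles[terraform.workspace]
  }
}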
Creating and Selecting Workspaces
With the necessary objects created and the backend configured, run
terraform init
to initialize the backend and establish an initial workspace
called "default". This workspace will not be used, but is created automatically
by Terraform as a convenience for users who are not using the workspaces
feature.
Create a workspace corresponding to each key given in the workspace_iam_roles
variable value above:
$ terraform workspace new staging
Created and switched to workspace "staging"!
...
$ terraform workspace new production
Created and switched to workspace "production"!
...
Due to the assume_role
setting in the AWS provider configuration, any
management operations for AWS resources will be performed via the configured
role in the appropriate environment AWS account. The backend operations, such
as reading and writing the state from S3, will be performed directly as the
administrator's own user within the administrative account.
$ terraform workspace select staging
$ terraform apply
...
Running Terraform in Amazon EC2
Teams that make extensive use of Terraform for infrastructure management often run Terraform in automation to ensure a consistent operating environment and to limit access to the various secrets and other sensitive information that Terraform configurations tend to require.
When running Terraform in an automation tool running on an Amazon EC2 instance, consider running this instance in the administrative account and using an instance profile in place of the various administrator IAM users suggested above. An IAM instance profile can also be granted cross-account delegation access via an IAM policy, giving this instance the access it needs to run Terraform.
To isolate access to different environment accounts, use a separate EC2 instance for each target account so that its access can be limited only to the single account.
Similar approaches can be taken with equivalent features in other AWS compute services, such as ECS.
Protecting Access to Workspace State
In a simple implementation of the pattern described in the prior sections, all users have access to read and write states for all workspaces. In many cases it is desirable to apply more precise access constraints to the Terraform state objects in S3, so that for example only trusted administrators are allowed to modify the production state, or to control reading of a state that contains sensitive information.
Amazon S3 supports fine-grained access control on a per-object-path basis using IAM policy. A full description of S3's access control mechanism is beyond the scope of this guide, but an example IAM policy granting access to only a single state object within an S3 bucket is shown below:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::example-bucket",
"Condition": {
"StringEquals": {
"s3:prefix": "path/to/state"
}
}
},
{
"Effect": "Allow",
"Action": ["s3:GetObject", "s3:PutObject"],
"Resource": "arn:aws:s3:::example-bucket/myapp/production/tfstate"
}
]
}
The example backend configuration below documents the corresponding bucket
and key
arguments:
terraform {
backend "s3" {
bucket = "example-bucket"
key = "path/to/state"
region = "us-east-1"
}
}
Refer to the AWS documentation on S3 access control for more details.
DynamoDB does not assign a separate resource ARN to each key in a table, but you can write more precise policies for a DynamoDB table using an IAM Condition
element.
For example, you can use the dynamodb:LeadingKeys
condition key to match on the partition key values that the S3 backend will use:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:DeleteItem",
"dynamodb:GetItem",
"dynamodb:PutItem"
],
"Resource": "arn:aws:dynamodb:us-east-1:12341234:table/example-table",
"Condition": {
"ForAllValues:StringEquals": {
"dynamodb:LeadingKeys": [
"example-bucket/path/to/state",
"example-bucket/path/to/state-md5"
]
}
}
}
]
}
Note that DynamoDB ARNs are regional and account-specific, unlike S3 bucket ARNs, so you must also specify the correct region and AWS account ID for your DynamoDB table in the Resource
element.
The example backend configuration below documents the corresponding arguments:
terraform {
backend "s3" {
bucket = "example-bucket"
key = "path/to/state"
region = "us-east-1"
dynamodb_table = "example-table"
}
}
Refer to the AWS documentation on DynamoDB fine-grained locking for more details.
Configuring Custom User-Agent Information
Note this feature is optional and only available in Terraform v0.13.1+.
By default, the underlying AWS client used by the Terraform AWS Provider creates requests with User-Agent headers including information about Terraform and AWS Go SDK versions. To provide additional information in the User-Agent headers, the TF_APPEND_USER_AGENT
environment variable can be set and its value will be directly added to HTTP requests. e.g.
$ export TF_APPEND_USER_AGENT="JenkinsAgent/i-12345678 BuildID/1234 (Optional Extra Information)"
Support for "S3 Compatible" Storage Providers
Support for S3 Compatible storage providers is offered as “best effort”.
HashiCorp only tests the s3
backend against Amazon S3, so cannot offer any guarantees when using an alternate provider.
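As an illustration, configuring the backend against a hypothetical S3-compatible service typically combines a custom S3 endpoint with path-style addressing and several of the skip_* options described above; the endpoint and bucket name are illustrative:
terraform {
  backend "s3" {
    bucket = "terraform-state"
    key    = "path/to/my/key"
    region = "us-east-1"   # still required, even if the provider ignores it

    endpoints = {
      s3 = "https://objects.example.com"   # illustrative S3-compatible endpoint
    }

    use_path_style              = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
    skip_s3_checksum            = true
  }
}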