In projects with more than one contributor it is important to have a remote backend for Terraform, so that the tfstate (the representation of the real resources) is not lost and does not suffer race conditions when two or more people apply modifications.
To have a remote backend you need storage for the state plus a locking mechanism; in AWS the best approach is to use S3 for the state and DynamoDB for the lock.
The first step is to create the state and lock resources themselves, before the backend that uses them is configured.
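A minimal sketch of those resources, assuming AWS provider version 4 or later (the bucket and table names are placeholders you would replace with your own):

```hcl
# S3 bucket that will hold the state file (name must be globally unique)
resource "aws_s3_bucket" "terraform_state" {
  bucket = "my-terraform-state-example"
}

# Keep old versions of the state so mistakes can be recovered
resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

# DynamoDB table used by Terraform for state locking
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```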
Now you can deploy those resources with `terraform apply`. After they are created successfully, you can replace the local backend with the remote one.
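A sketch of the backend configuration, reusing the placeholder names from above:

```hcl
terraform {
  backend "s3" {
    # Must match the bucket and table created in the previous step
    bucket         = "my-terraform-state-example"
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
```

After adding this block, run `terraform init` again; Terraform will detect the new backend and offer to copy the existing local state into the S3 bucket.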
Errors can happen while managing infrastructure with Terraform, so you need strategies to experiment with new approaches and to protect production environments. Terraform provides a few of them:
Terraform can create different workspaces that isolate resources: each workspace has its own state, so nothing you do in one workspace affects another.
You can see available workspaces by running:
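```sh
terraform workspace list
```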
And to switch between them (here `example` is a placeholder workspace name):
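```sh
terraform workspace new example      # create the placeholder workspace if it does not exist yet
terraform workspace select example   # switch to it
```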
A second approach is to isolate environments through the file layout:

- Put the configuration files for each environment in different folders.
- Configure a different backend for each environment.
- Use different authentication mechanisms and access controls, e.g. a separate AWS account with a separate S3 bucket as backend.

To avoid deployment errors it is recommended to separate further by components, such as vpc, services and data-storage.
Each component is organized into three files:
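A sketch of how such a layout might look; the environment and component folders follow the text above, and the three files per component are assumed to be variables.tf, main.tf and outputs.tf:

```
stage/
  vpc/
    variables.tf   # input variables of the component
    main.tf        # resources of the component
    outputs.tf     # output variables of the component
  services/
  data-storage/
prod/
  vpc/
  services/
  data-storage/
```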
You can define passwords in AWS Secrets Manager and access them in Terraform.
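A minimal sketch of reading a database password from Secrets Manager, assuming a secret named db-password already exists (all names here are placeholders):

```hcl
# Read the secret by name
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "db-password"
}

resource "aws_db_instance" "example" {
  identifier          = "example-db"
  engine              = "mysql"
  instance_class      = "db.t3.micro"
  allocated_storage   = 10
  username            = "admin"
  skip_final_snapshot = true

  # Use the secret value instead of hard-coding the password
  password = data.aws_secretsmanager_secret_version.db_password.secret_string
}
```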
If you have different components, each one with its own state and backend, how do you pass values such as the database address to a web service? For this you use terraform_remote_state.
The first step is to add output variables to the component whose values you want to share.
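For example, a data-storage component could expose the database address and port (the resource name example matches the sketch above):

```hcl
output "address" {
  value       = aws_db_instance.example.address
  description = "Endpoint to connect to the database"
}

output "port" {
  value       = aws_db_instance.example.port
  description = "Port the database is listening on"
}
```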
When you run `terraform apply`, those output values are stored in the Terraform state, and other components can then read them.
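A sketch of reading those outputs from another component, assuming the placeholder bucket from earlier and a stage/data-storage state key:

```hcl
data "terraform_remote_state" "db" {
  backend = "s3"

  config = {
    # Bucket of the remote backend and the state key of the data-storage component
    bucket = "my-terraform-state-example"
    key    = "stage/data-storage/terraform.tfstate"
    region = "us-east-1"
  }
}

# The outputs of that component are available as read-only attributes
locals {
  db_address = data.terraform_remote_state.db.outputs.address
  db_port    = data.terraform_remote_state.db.outputs.port
}
```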