This post explores ways to structure your Terraform configuration when it is used to deploy infrastructure across multiple cloud accounts, for multiple customers, & for multiple environments of each app involved: development, staging & production. One prime example where this might be very useful to you is if you build a multi-tenant SaaS application.
The key advantage we’re looking to gain here is to allow Terraform to maintain separate “state” for every environment of every app in every account for every customer. This is akin to maintaining an always up-to-date dynamic inventory of your infrastructure, which simplifies so many things down the line.
Terraform works on a folder level: all Terraform configuration files in the directory where you run Terraform are processed together to create the infrastructure. Going by this principle, it’s pretty straightforward that simply separating out customer, account or environment-specific configuration files into separate directories is enough to maintain independent states for each of them. The end result will look something like this:
$ tree -a --dirsfirst
.
├── components
│   ├── application.tf
│   ├── common.tf
│   └── global.tf
├── modules
│   ├── module1
│   ├── module2
│   └── module3
├── production
│   ├── customer1
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   ├── customer2
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   └── global
│       ├── common.tf -> ../../components/common.tf
│       ├── global.tf -> ../../components/global.tf
│       └── terraform.tfvars
├── staging
│   ├── customer1
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   ├── customer2
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   └── global
│       ├── common.tf -> ../../components/common.tf
│       ├── global.tf -> ../../components/global.tf
│       └── terraform.tfvars
├── apply.sh
├── destroy.sh
└── plan.sh
All your resources live in the configuration files in the components directory, & symlinks in every customer-specific directory point to them. This way, you don’t duplicate code but still achieve the directory separation required for state separation! The terraform.tfvars file in each directory determines the specific parameters of each deployment.
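For illustration, one customer’s terraform.tfvars might look like the sketch below. The variable names here are hypothetical; they must match variables declared in the shared component files:

```hcl
# staging/customer1/terraform.tfvars — hypothetical variable names
customer_name = "customer1"
environment   = "staging"
aws_region    = "eu-west-1"
instance_type = "t3.small"
```

The same application.tf is then parameterised differently per customer & per environment purely through these values.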
You don’t run the Terraform CLI directly anywhere, neither in the root nor in the child directories. You only run the shell scripts at the root, which in turn cd into every directory & run the intended Terraform CLI command.
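A minimal sketch of such a wrapper script, assuming every directory containing a terraform.tfvars is one deployable unit. The TF_CMD override is a hypothetical convenience so the loop can be dry-run (e.g. TF_CMD=echo sh plan.sh):

```shell
#!/usr/bin/env sh
# plan.sh / apply.sh / destroy.sh — one loop, parameterised by action.
set -eu

TF_CMD="${TF_CMD:-terraform}"   # override with TF_CMD=echo for a dry run

# Walk the tree, find every deployable directory & run Terraform there.
run_all() {
  action="$1"                   # plan | apply | destroy
  find . -name terraform.tfvars | sort | while read -r tfvars; do
    dir=$(dirname "$tfvars")
    echo "==> terraform $action in $dir"
    ( cd "$dir" && "$TF_CMD" init -input=false >/dev/null && "$TF_CMD" "$action" )
  done
}

run_all "${1:-plan}"
```

Because each invocation runs from inside a customer/environment directory, Terraform keeps a separate state for each one automatically.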
The advantage of this approach is the easy visibility of the entire code & the customer/environment separation mechanism to every developer who opens the code repository. The downside is the verbosity introduced by the numerous files & directories. To overcome this, you could do away with all the environment/customer directories & symlinks, & instead generate them using a script during a “build” stage, before running Terraform on them. This does make things a bit obscure for developers new to the codebase, but compacts the code to a large extent. If you do this, you might end up with something like this:
/tf-infra
├── _global
│   └── global
│       ├── README.md
│       ├── main.tf
│       ├── outputs.tf
│       ├── terraform.tfvars
│       └── variables.tf
└── staging
    └── eu-west-1
        ├── saas
        │   ├── _template
        │   │   └── dynamic.tf.tpl
        │   ├── customer1
        │   │   ├── auto-generated.tf
        │   │   └── terraform.tfvars
        │   ├── customer2
        │   │   ├── auto-generated.tf
        │   │   └── terraform.tfvars
...
In this case, you need two sets of scripts. The first uses some templating mechanism (even sed will do) to generate the additional files & directories by rewriting the “template” config files. See this GitHub comment for such a script. The next script to run is the same as above, the one that cds into every directory & actually runs Terraform.
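The sed-based generation step could be sketched as follows. The __CUSTOMER__ placeholder in dynamic.tf.tpl is a hypothetical convention, & the customer names are passed as arguments:

```shell
#!/usr/bin/env sh
# build.sh — render one directory per customer from the template.
set -eu

# generate <path-to-dynamic.tf.tpl> <customer>...
# Writes an auto-generated.tf per customer, alongside the _template dir.
generate() {
  template="$1"; shift
  base=$(dirname "$template")/..    # e.g. staging/eu-west-1/saas
  for customer in "$@"; do
    mkdir -p "$base/$customer"
    sed "s/__CUSTOMER__/$customer/g" "$template" \
      > "$base/$customer/auto-generated.tf"
  done
}
```

Running generate staging/eu-west-1/saas/_template/dynamic.tf.tpl customer1 customer2 would recreate the customer directories shown above, ready for the Terraform-running script to iterate over.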