Nowadays most applications run in the cloud. At Metosin we use AWS extensively as our cloud provider. To show how we define AWS infrastructures at Metosin, we have implemented an AWS demonstration using Terraform. In this blog post we describe the implementation decisions behind this demonstration.
Terraform is a widely used Infrastructure as Code (IaC) tool. Using Terraform one can define cloud resources declaratively for all major cloud providers. This is a huge benefit compared to cloud providers' native tools - you just need to learn one IaC tool and you are able to use it to create infrastructure in AWS, Azure, or GCP.
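To illustrate the declarative style, here is a minimal sketch of a Terraform resource (the bucket name and region are hypothetical): you describe the desired end state, and Terraform figures out what to create, change, or destroy.

```hcl
provider "aws" {
  region = "eu-west-1"   # hypothetical region
}

# Declares that this S3 bucket should exist; Terraform
# creates it on apply and tracks it in the state.
resource "aws_s3_bucket" "assets" {
  bucket = "my-demo-assets"   # hypothetical bucket name
}
```

Running `terraform plan` shows the changes Terraform would make, and `terraform apply` performs them.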
The picture below depicts the network architecture of the demonstration.
The demonstration creates an AWS infrastructure comprising a Virtual Private Cloud (VPC), an Elastic Container Service (ECS) cluster, and a Relational Database Service (RDS) instance, plus a dummy application that can be deployed to ECS.
The web application is packaged into a Docker image via the Jib support in the pack.alpha library. The Elastic Container Service uses the Fargate launch type, which removes the need to provision virtual machines for running Docker containers.
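In Terraform, choosing Fargate boils down to the `launch_type` of the ECS service and a task definition with `awsvpc` networking. A hedged sketch (all names, the image URL, and the variables are hypothetical placeholders, not the demo's actual code):

```hcl
resource "aws_ecs_task_definition" "app" {
  family                   = "demo-app"     # hypothetical name
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"       # required for Fargate
  cpu                      = 256
  memory                   = 512
  container_definitions = jsonencode([{
    name         = "app"
    image        = "123456789012.dkr.ecr.eu-west-1.amazonaws.com/demo-app:latest"
    portMappings = [{ containerPort = 8080 }]
  }])
}

resource "aws_ecs_service" "app" {
  name            = "demo-app"
  cluster         = aws_ecs_cluster.demo.id        # assumed cluster resource
  task_definition = aws_ecs_task_definition.app.arn
  launch_type     = "FARGATE"                      # no EC2 instances to manage
  desired_count   = 1

  network_configuration {
    subnets = var.private_subnet_ids               # assumed variable
  }
}
```

With `launch_type = "FARGATE"`, AWS provisions the compute for each task, so there is no EC2 capacity to manage.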
The Fargate runtime is connected to a private subnet (which does not have a direct route to the public internet). An RDS/PostgreSQL database instance is also attached to the private subnet. An Application Load Balancer (ALB) is deployed to a public subnet, in order to expose the application to the public internet. A bastion host module is also provided for SSH tunneling.
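The "only the ALB can reach the application" property of this topology is typically enforced with security groups. A minimal sketch (resource names and the port are assumptions for illustration):

```hcl
# The ALB in the public subnet accepts HTTP from the internet.
resource "aws_security_group" "alb" {
  vpc_id = var.vpc_id   # assumed variable

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# The application in the private subnet accepts traffic
# only from the ALB's security group, not from the internet.
resource "aws_security_group" "app" {
  vpc_id = var.vpc_id

  ingress {
    from_port       = 8080
    to_port         = 8080
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }
}
```

Referencing the ALB's security group as the ingress source (instead of a CIDR block) keeps the rule correct even as ALB nodes change IP addresses.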
There are three one-time setup tasks:
- Terraform and AWS command line interface (cli) installations.
- Terraform backend. One needs to create a Terraform backend - we actually use Terraform itself to configure the backend; see the terraform-backend directory.
- Sops installation and RDS secret. More about Sops in the next chapter.
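Once the backend resources exist, each module points at them with a `backend` block. A sketch of what such a configuration can look like (bucket, key, table, and alias names are hypothetical):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"     # hypothetical state bucket
    key            = "vpc/terraform.tfstate"  # one key per module/state
    region         = "eu-west-1"
    encrypt        = true
    kms_key_id     = "alias/terraform-state"  # KMS key for at-rest encryption
    dynamodb_table = "terraform-lock"         # state locking
  }
}
```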
When using tools such as Ansible, it is common to keep secrets under version control as files encrypted with Ansible Vault. Ansible Vault is handy, since decryption works quite seamlessly and the Vault password can even be kept in a separate service. The Vault password file itself can also be an executable capable of fetching the shared secrets.
For Terraform there isn't yet a similar de facto solution. The Terraform state should also be guarded, since sensitive data related to resources (such as RDS master passwords) is stored directly in the resource state.
In this demo we chose the sops tool, together with a Terraform provider for sops, to store secrets as encrypted files in version control. The provider is then used to decrypt the secrets and pass them to the resources. The Terraform state file itself is stored in an S3 bucket with public access disabled and a bucket policy that requires a KMS key for encryption at rest.
Sops seems like a very valuable tool: it allows the use of multiple keys for encryption (e.g. PGP and all major cloud KMS services), and encrypting individual JSON/YAML values yields more diff-friendly changes. See the Sops Motivation section for more background.
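Using the carlpett/sops provider, a sops-encrypted file can be read as a data source and its decrypted values referenced like any other attribute. A sketch (the file name and secret key are hypothetical):

```hcl
terraform {
  required_providers {
    sops = {
      source = "carlpett/sops"
    }
  }
}

# Decrypts the sops-encrypted file at plan/apply time.
data "sops_file" "db_secrets" {
  source_file = "secrets.enc.json"   # hypothetical encrypted file
}

resource "aws_db_instance" "app" {
  # ... engine, instance class, etc. omitted ...
  password = data.sops_file.db_secrets.data["rds_master_password"]
}
```

Only the encrypted file lives in version control; decryption happens locally via the keys configured in sops (e.g. a KMS key).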
When designing the Terraform solution we considered various options. One option would have been to create everything in one Terraform state. This would have made the initial setup simpler, but we wanted more freedom to modify individual parts of the infrastructure, so we divided the overall infrastructure into module-based Terraform states. The rationale is that the demonstration is used internally at Metosin for educational purposes but also as a baseline for new AWS projects - we therefore needed to find a good balance between simplicity and flexibility.
For educational purposes we have provided a utility script, apply-all.sh, that creates the whole infrastructure. It is just a helper to make Terraform learners' first experience more pleasant. In real life you would develop individual modules independently by running `terraform plan` and `terraform apply` in the module's directory. The demonstration thus provides both an easy ramp-up for learners and the modularity and flexibility needed for real-life projects.
If you want to create the modules one by one, follow the dependency graph to find the order in which they need to be created, and follow the instructions in each module's README file. Basically you just need to source the terraform-init file and then run the standard Terraform commands.
Learning is one of the key success factors at Metosin - the authors of this blog post therefore got an internal educational budget for implementing this demonstration and template. But it doesn't stop here. Metosin plans to use the cloud-busting repo as a sandbox for experimenting with technologies we use and are interested in - all employees will be invited to implement cloud-related demonstrations. So stay tuned: you will most probably see demonstrations of immutable databases like Datomic and Crux, analytics on AWS, and perhaps even other cloud providers.
Implementing this AWS ECS demonstration was an interesting exercise. The requirement was to implement a Terraform example for learning purposes that could also serve as a baseline for real projects - even though these can be seen as conflicting requirements, we think we were able to find the right balance.
Kari Marttila & Kimmo Koskinen