A couple of weeks ago I took it upon myself to create a new Terraform data source. Overall, it was an extremely pleasant experience; I learned a lot and wandered a little outside my comfort zone. Here are my thoughts.
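For context, here is what consuming a data source looks like from the configuration side. This sketch uses the stock aws_ami data source purely as an illustration (it is not the data source this post is about):

```hcl
# Look up the most recent Ubuntu 20.04 AMI published by Canonical.
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical's AWS account ID

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}

# The attributes the data source resolves can then feed other resources.
resource "aws_instance" "example" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.small"
}
```

A data source reads existing information at plan time rather than creating anything, which is what makes it a good fit for lookups like this.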
The Problem

When designing and building some infrastructure in AWS, I discovered a potential problem. When defining an Auto Scaling group, you are required to provide a min_size, max_size, and desired_capacity. min_size and max_size are self-explanatory; desired_capacity is the number of instances the group tries to keep running at any given moment.
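A minimal sketch of how the three arguments sit together in a group definition (the names, values, and referenced variables are illustrative):

```hcl
resource "aws_autoscaling_group" "web" {
  name                 = "web-asg"
  min_size             = 2  # never scale below two instances
  max_size             = 10 # hard ceiling for scale-out
  desired_capacity     = 4  # the count the group tries to maintain right now
  vpc_zone_identifier  = var.private_subnet_ids
  launch_configuration = aws_launch_configuration.web.name
}
```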

After spinning up a pool of t2.small servers and running some load tests, it became clear the instances were severely underpowered. Easy enough: I shut down an instance, changed its type to m5.large, and started it up again. As usual, I opened a ping to the machine so I could see the second it came back online.
I waited.
And waited…
…
OK this isn’t right.
What happened? It turns out that newer instance types such as the m5 family require Amazon's Elastic Network Adapter (ENA), a special network driver shipped as a kernel module. If the image you are running does not include it, the instance boots but never comes back onto the network.

While Terraform makes a lot of infrastructure deployment easy, sometimes it can feel a little like trying to thread a needle while wearing oven mitts. The idea is simple; the execution is hard. One of the principles that hasn't found its way into HCL is the idea of conditionals.
Let’s say you’re trying to write a service module. The meat of this particular class of service is the same whether it is exposed publicly or privately.
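The usual workaround is to abuse count as a boolean. A minimal sketch, assuming a hypothetical var.public flag and a var.public_subnet_ids list: the module creates an internet-facing load balancer only when the flag is set.

```hcl
variable "public" {
  description = "Expose the service publicly"
  type        = bool
  default     = false
}

# Created only when var.public is true; count = 0 skips the resource entirely.
resource "aws_lb" "external" {
  count    = var.public ? 1 : 0
  name     = "service-external"
  internal = false
  subnets  = var.public_subnet_ids
}
```

The cost of this trick is that the resource becomes a list, so anything referencing it must index it, e.g. aws_lb.external[0].dns_name.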

In the science (fiction) community, terraforming is the act of taking a planet, moon, or other celestial body and making it suitable for human life. While this is a simplistic definition (more detail can be found in the Wikipedia article on terraforming), the name is quite suitable.
In the context of IT, Terraform is an infrastructure management tool. It is stateful, which means it maintains its own record of how everything is supposed to look.