Terraforming Cloudflare: in quest of the optimal setup

This is a guest post by Dimitris Koutsourelis and Alexis Dimitriadis, working for the Security Team at Workable, a company that makes software to help companies find and hire great people.


This post is about our introductory journey into the infrastructure-as-code practice: managing Cloudflare configuration in a declarative, version-controlled way. We'd like to share the experience we gained along the way: our pain points, the limitations we faced, the different approaches we took, and parts of our solution and experimentation.

Terraform world

Terraform is a great tool that fulfills our requirements; fortunately, Cloudflare maintains its own provider, which lets us manage its service configuration hassle-free.

On top of that, Terragrunt is a thin wrapper that provides extra commands and functionality for keeping Terraform configurations DRY and managing remote state.
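To illustrate the "DRY remote state" point, a root-level terragrunt.hcl along these lines generates the backend configuration once for every child module (the bucket name, key layout and region below are placeholders, not our real values):

```hcl
# Root terragrunt.hcl -- generates the S3 backend block once for all modules.
# Bucket, key and region are illustrative placeholders.
remote_state {
  backend = "s3"
  config = {
    bucket = "example-terraform-state"
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "us-east-1"
  }
}
```

Child modules then only need an `include` block pointing at this file instead of each repeating the backend configuration.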

The combination of both leads to a more modular and re-usable structure for Cloudflare resources (configuration), by utilizing terraform and terragrunt modules.

We've chosen to use the latest version of both tools (Terraform v0.12 & Terragrunt v0.19, respectively) and upgrade constantly to take advantage of valuable new features and functionality which, at the time of writing, remove important limitations.

Workable context

Our setup includes multiple domains that are grouped into two distinct Cloudflare organisations: production & staging. Our environments have their own purposes and technical requirements (i.e.: QA, development, sandbox and production), which translate to slightly different sets of Cloudflare zone configuration.

Our approach

Our main goal was to have a modular setup with the ability to manage any configuration for any zone, while keeping code repetition to a minimum. This is more complex than it sounds; we repeatedly changed our Terraform folder structure - and other technical aspects - during development. The following sections illustrate the set of alternatives we went through along the way, along with their pros & cons.


Terraform configuration is based on the project's directory structure, so this is the place to start.

Instead of retaining the Cloudflare organisation structure (production & staging as root-level directories containing the zones that belong to each organisation), we decided to group zones that share common configuration under the same directory. This keeps the code DRY and the setup consistent and readable.

On the downside, this structure adds an extra layer of complexity, as two different sets of credentials need to be handled conditionally, and two state files (at the environments/ root level) must be managed and isolated using workspaces.
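A minimal sketch of what the conditional credential handling can look like (the variable names are ours and purely illustrative): the provider block selects the right set of credentials based on the active workspace:

```hcl
# Pick Cloudflare credentials per workspace; variable names are illustrative.
provider "cloudflare" {
  email   = terraform.workspace == "production" ? var.prod_email : var.staging_email
  api_key = terraform.workspace == "production" ? var.prod_api_key : var.staging_api_key
}
```

With `terraform workspace select production` (or `staging`), the same module then runs against the matching organisation and keeps its state isolated per workspace.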

On top of that, we used Terraform modules to keep sets of configuration shared across zone groups in a single place.
Terraform modules repository

│    ├── firewall/
│    ├── zone_settings/
│    └── [...]

Terragrunt modules repository

│    ├── [...]
│    ├── dev/
│    ├── qa/
│    ├── demo/
│        ├── zone-8/ (production)
│            └── terragrunt.hcl
│        ├── zone-9/ (staging)
│            └── terragrunt.hcl
│        └── config.tfvars
│    ├── config.tfvars
│    ├── secrets.tfvars
│    └── terragrunt.hcl

The Terragrunt modules tree gives us flexibility, since we are able to apply configuration at the zone, zone-group, or organisation level (which is in line with Cloudflare's configuration capabilities - i.e.: custom error pages can also be configured at the organisation level).
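For example, a zone-level terragrunt.hcl (the module source path and zone name below are hypothetical) inherits the parent configuration and only sets what is specific to that zone:

```hcl
# demo/zone-8/terragrunt.hcl -- inherits settings from the parent folders;
# the source path and inputs are illustrative placeholders.
include {
  path = find_in_parent_folders()
}

terraform {
  source = "git::ssh://git@example.com/foo/modules//zone_settings"
}

inputs = {
  zone_name = "zone-8.example.com"
}
```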

Resource types

We decided to implement Terraform resources in different ways, to cover our requirements more efficiently.

1. Static resource

The first thought that came to mind was having one or multiple .tf files implementing all the resources with hardcoded values assigned to each attribute. It's simple and straightforward, but can have a high maintenance cost if it leads to copy/pasting code between environments.

So, common settings seem to be a good use case; we chose to implement access_rules Terraform resources accordingly:

resource "cloudflare_access_rule" "no_17" {
  notes = "this is a description"
  mode  = "blacklist"
  configuration = {
    target = "ip"
    value  = "x.x.x.x"
  }
}
2. Parametrized resources

Our next step was to add variables to gain flexibility. This is useful when a few attributes of a shared resource configuration differ between multiple zones. Most of the configuration remains the same (as described above), and the variable instantiation is added in the Terraform module, while the values are fed through the Terragrunt module as input variables or as entries inside .tfvars files. The zone_settings_override resource was implemented accordingly:


resource "cloudflare_zone_settings_override" "zone_settings" {
  zone_id = var.zone_id
  settings {
    always_online    = "on"
    always_use_https = "on"
    browser_check    = var.browser_check
    mobile_redirect {
      mobile_subdomain = var.mobile_redirect_subdomain
      status           = var.mobile_redirect_status
      strip_uri        = var.mobile_redirect_uri
    }
    waf        = "on"
    webp       = "off"
    websockets = "on"
  }
}


module "zone_settings" {
  source        = "[email protected]:foo/modules/zone_settings"
  zone_name     = var.zone_name
  browser_check = var.zone_settings_browser_check
}


#zone settings
zone_settings_browser_check = "off"
3. Dynamic resource

At that point, we thought a more interesting approach would be to create generic resource templates to manage all instances of a given resource in one place. A template is implemented as a Terraform module and creates each resource dynamically, based on its input: data fed through the Terragrunt modules (/environments in our case) or entries in the .tfvars files.

We chose to implement the account_member resource this way.

variable "users" {
  description = "map of users - roles"
  type        = map(list(string))
}

variable "member_roles" {
  description = "account role ids"
  type        = map(string)
}


resource "cloudflare_account_member" "account_member" {
  for_each      = var.users
  email_address = each.key
  role_ids      = [for role in each.value : lookup(var.member_roles, role)]
  lifecycle {
    prevent_destroy = true
  }
}
We feed the template with a map of users, where each member is assigned a list of roles. To make the code more readable, we mapped users to role names instead of role ids:

member_roles = {
  admin       = "000013091sds0193jdskd01d1dsdjhsd1"
  admin_ro    = "0000ds81hd131bdsjd813hh173hds8adh"
  analytics   = "0000hdsa8137djahd81y37318hshdsjhd"
  super_admin = "00001534sd1a2123781j5gj18gj511321"
}

users = {
  "[email protected]"  = ["super_admin"]
  "[email protected]"  = ["analytics", "audit_logs", "cache_purge", "cf_workers"]
  "[email protected]"  = ["cf_stream"]
  "[email protected]" = ["cf_stream"]
}

Another interesting case we dealt with was the rate_limit resource; the variable declaration (a list of objects) & implementation go as follows:

variable "rate_limits" {
  description = "list of rate limits"
  default     = []
  type = list(object({
    disabled    = bool,
    threshold   = number,
    description = string,
    period      = number,
    match = object({
      request = object({
        url_pattern = map(string),
        schemes     = list(string),
        methods     = list(string)
      }),
      response = object({
        statuses       = list(number),
        origin_traffic = bool
      })
    }),
    action = object({
      mode    = string,
      timeout = number
    })
  }))
}


locals {
  rate_limits = concat(var.common_rate_limits, var.unique_rate_limits)
}

data "cloudflare_zones" "zone" {
  filter {
    name   = var.zone_name
    status = "active"
    paused = false
  }
}

resource "cloudflare_rate_limit" "rate_limit" {
  count       = length(var.rate_limits)
  zone_id     = lookup(data.cloudflare_zones.zone.zones[0], "id")
  disabled    = var.rate_limits[count.index].disabled
  threshold   = var.rate_limits[count.index].threshold
  description = var.rate_limits[count.index].description
  period      = var.rate_limits[count.index].period
  match {
    request {
      url_pattern = local.url_patterns[count.index]
      schemes     = var.rate_limits[count.index].match.request.schemes
      methods     = var.rate_limits[count.index].match.request.methods
    }
    response {
      statuses       = var.rate_limits[count.index].match.response.statuses
      origin_traffic = var.rate_limits[count.index].match.response.origin_traffic
    }
  }
  action {
    mode    = var.rate_limits[count.index].action.mode
    timeout = var.rate_limits[count.index].action.timeout
  }
}


common_rate_limits = [
  {
    disabled    = false
    threshold   = 50
    description = "sample description"
    period      = 60
    match = {
      request = {
        url_pattern = {
          "subdomain" = "foo"
          "path"      = "/api/v1/bar"
        }
        schemes = ["_ALL_",]
        methods = ["GET", "POST",]
      }
      response = {
        statuses       = []
        origin_traffic = true
      }
    }
    action = {
      mode    = "simulate"
      timeout = 3600
    }
  }
]

The biggest advantage of this approach is that all common rate_limit rules live in one place, and each environment can include its own rules in its .tfvars. Joining the two lists (common and unique rules) with Terraform's built-in concat() function achieves exactly that:

locals {
  rate_limits = concat(var.common_rate_limits, var.unique_rate_limits)
}

There is, however, a drawback: .tfvars files can only contain static values. Since all url attributes - which include the zone name itself - have to be set explicitly in each environment's data, every time a url needs to change, the value has to be copied across all environments and the zone name adjusted to match each one.

The solution we came up with to make the zone name dynamic was to split the url attribute into 3 parts: subdomain, domain and path. This works well for the .tfvars, but the added complexity of handling the new variables is non-negligible. The corresponding code illustrates the issue:

locals {
  rate_limits  = concat(var.common_rate_limits, var.unique_rate_limits)
  url_patterns = [for rate_limit in local.rate_limits : "${lookup(rate_limit.match.request.url_pattern, "subdomain", null) != null ? "${lookup(rate_limit.match.request.url_pattern, "subdomain")}." : ""}${lookup(rate_limit.match.request.url_pattern, "domain", null) != null ? lookup(rate_limit.match.request.url_pattern, "domain") : var.zone_name}${lookup(rate_limit.match.request.url_pattern, "path", null) != null ? lookup(rate_limit.match.request.url_pattern, "path") : ""}"]
}

Readability vs functionality: although flexibility is increased and code duplication is reduced, the url transformations have an impact on the code's readability and ease of debugging (it took us several minutes to spot a typo). You can imagine this gets even worse if you attempt to implement a more complex resource this way (such as page_rule, which is a list of maps with four url attributes).

The underlying issue here is that, at the time we were implementing our resources, we had to choose maps over objects because maps allow attributes to be omitted, using the lookup() function with default values. This is a requirement for certain resources such as page_rules, where only some attributes need to be defined (and others ignored).
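As a small illustration of that trade-off (the attribute names below are hypothetical, not from our actual modules), a map-typed variable lets lookup() fall back to a default whenever a key is omitted:

```hcl
# A map lets callers omit attributes entirely; names here are illustrative.
variable "actions" {
  type    = map(string)
  default = {}
}

locals {
  # lookup() returns the third argument when the key is absent, so a caller
  # that only sets cache_level still gets a value for ssl.
  ssl         = lookup(var.actions, "ssl", "off")
  cache_level = lookup(var.actions, "cache_level", "standard")
}
```

With a strict object type, every declared attribute had to be supplied, which is exactly what made maps the pragmatic choice at the time.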

In the end, the context will determine if more complex resources can be implemented with dynamic resources.

4. Sequential resources

The Cloudflare page_rule resource has a peculiarity that differentiates it from other resource types: the priority attribute.
When a page rule is applied, it gets a unique id and a priority number corresponding to the order in which it was submitted. Although the Cloudflare API and Terraform provider give the ability to explicitly specify the priority, there is a catch.

Terraform doesn't respect the order of resources inside a .tf file (not even in a for_each loop!); each resource is picked up in arbitrary order and applied to the provider. So, if page_rule priority is important - as in our case - the submission order counts. The solution is to lock the sequence in which the resources are created via the depends_on meta-attribute:

resource "cloudflare_page_rule" "no_3" {
  depends_on = [cloudflare_page_rule.no_2]
  zone_id    = lookup(data.cloudflare_zones.zone.zones[0], "id")
  target     = "www.${var.zone_name}/foo"
  status     = "active"
  priority   = 3
  actions {
    forwarding_url {
      status_code = 301
      url         = "https://www.${var.zone_name}"
    }
  }
}

resource "cloudflare_page_rule" "no_2" {
  depends_on = [cloudflare_page_rule.no_1]
  zone_id    = lookup(data.cloudflare_zones.zone.zones[0], "id")
  target     = "www.${var.zone_name}/lala*"
  status     = "active"
  priority   = 2
  actions {
    ssl                  = "flexible"
    cache_level          = "simplified"
    resolve_override     = "bar.${var.zone_name}"
    host_header_override = ""
  }
}

resource "cloudflare_page_rule" "no_1" {
  zone_id  = lookup(data.cloudflare_zones.zone.zones[0], "id")
  target   = "*.${var.zone_name}/foo/*"
  status   = "active"
  priority = 1
  actions {
    forwarding_url {
      status_code = 301
      url         = "https://foo.${var.zone_name}/$1/$2"
    }
  }
}

So we had to go with a more static resource configuration, because the depends_on attribute only accepts static values (not values computed dynamically at runtime).


After changing our minds several times along the way about the Terraform structure and other technical details, we believe there isn't a single best solution. It all comes down to the requirements and to keeping a balance between complexity and simplicity. In our case, a mixed approach is a good middle ground.

Terraform is evolving quickly, but at this point it lacks some common coding capabilities. Over-engineering is a trap that's easy to fall into (and we did, too many times). Keep it simple and as DRY as possible. :)
