
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Fri, 03 Apr 2026 17:07:34 GMT</lastBuildDate>
        <item>
            <title><![CDATA[A one-line Kubernetes fix that saved 600 hours a year]]></title>
            <link>https://blog.cloudflare.com/one-line-kubernetes-fix-saved-600-hours-a-year/</link>
            <pubDate>Thu, 26 Mar 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ When we investigated why our Atlantis instance took 30 minutes to restart, we discovered a bottleneck in how Kubernetes handles volume permissions. By adjusting the fsGroupChangePolicy, we reduced restart times to 30 seconds. ]]></description>
            <content:encoded><![CDATA[ <p>Every time we restarted Atlantis, the tool we use to plan and apply Terraform changes, we’d be stuck for 30 minutes waiting for it to come back up. No plans, no applies, no infrastructure changes for any repository managed by Atlantis. With roughly 100 restarts a month for credential rotations and onboarding, that added up to over <b>50 hours of blocked engineering time every month</b>, and paged the on-call engineer every time.</p><p>This was ultimately caused by a safe default in Kubernetes that had silently become a bottleneck as the persistent volume used by Atlantis grew to millions of files. Here’s how we tracked it down and fixed it with a one-line change.</p>
    <div>
      <h3>Mysteriously slow restarts</h3>
      <a href="#mysteriously-slow-restarts">
        
      </a>
    </div>
    <p>We manage dozens of Terraform projects with GitLab merge requests (MRs) using <a href="https://www.runatlantis.io/"><u>Atlantis</u></a>, which handles planning and applying. It enforces locking to ensure that only one MR can modify a project at a time. </p><p>It runs on Kubernetes as a singleton StatefulSet and relies on a Kubernetes PersistentVolume (PV) to keep track of repository state on disk. Whenever a Terraform project needs to be onboarded or offboarded, or credentials used by Terraform are updated, we have to restart Atlantis to pick up those changes — a process that can take 30 minutes.</p><p>The slow restart became impossible to ignore when we recently ran out of inodes on the persistent storage used by Atlantis, forcing us to restart it to resize the volume. Inodes are consumed by each file and directory entry on disk, and the number available to a filesystem is determined by parameters passed when creating it. The Ceph persistent storage implementation provided by our Kubernetes platform does not expose a way to pass flags to <code>mkfs</code>, so we’re at the mercy of default values: growing the filesystem is the only way to grow available inodes, and resizing a PV requires a pod restart. </p><p>We talked about extending the alert window, but that would just mask the problem and delay our response to actual issues. Instead, we decided to investigate exactly why it was taking so long.</p>
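    <p>As an aside, you can see how close a filesystem is to inode exhaustion with <code>df -i</code>. A quick sketch (the path here is illustrative; point it at wherever your PV is mounted):</p>

```shell
# Show inode capacity and usage for the filesystem backing a path.
# Replace / with your PV's mount path inside the pod or on the node.
df -i /
```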
    <div>
      <h3>Bad behavior</h3>
      <a href="#bad-behavior">
        
      </a>
    </div>
    <p>When we were asked to do a rolling restart of Atlantis to pick up a change to the secrets it uses, we would run <code>kubectl rollout restart statefulset atlantis</code>, which would gracefully terminate the existing Atlantis pod before spinning up a new one. The new pod would appear almost immediately, but looking at it would show:</p>
            <pre><code>$ kubectl get pod atlantis-0
NAME         READY   STATUS     RESTARTS   AGE
atlantis-0   0/1     Init:0/1   0          30m
</code></pre>
            <p>...so what gives? Naturally, the first thing to check would be events for that pod. It's waiting around for an init container to run, so maybe the pod events would illuminate why?</p>
            <pre><code>$ kubectl events --for=pod/atlantis-0
LAST SEEN   TYPE      REASON                   OBJECT                   MESSAGE
30m         Normal    Killing                  Pod/atlantis-0   Stopping container atlantis-server
30m         Normal    Scheduled                Pod/atlantis-0   Successfully assigned atlantis/atlantis-0 to 36com1167.cfops.net
22s         Normal    Pulling                  Pod/atlantis-0   Pulling image "oci.example.com/git-sync/master:v4.1.0"
22s         Normal    Pulled                   Pod/atlantis-0   Successfully pulled image "oci.example.com/git-sync/master:v4.1.0" in 632ms (632ms including waiting). Image size: 58518579 bytes.</code></pre>
            <p>That looks almost normal... but what's taking so long between scheduling the pod and actually starting to pull the image for the init container? Unfortunately, that was all the data we had to go on from Kubernetes itself. But surely there <i>had</i> to be something more that could tell us why it's taking so long to actually start running the pod.</p>
    <div>
      <h3>Going deeper</h3>
      <a href="#going-deeper">
        
      </a>
    </div>
    <p>In Kubernetes, a component called <code>kubelet</code> that runs on each node is responsible for coordinating pod creation, mounting persistent volumes, and many other things. From my time on our Kubernetes team, I know that <code>kubelet</code> runs as a systemd service and so its logs should be available to us in Kibana. Since the pod has been scheduled, we know the host name we're interested in, and the log messages from <code>kubelet</code> include the associated object, so we could filter for <code>atlantis</code> to narrow down the log messages to anything we found interesting.</p><p>We were able to observe the Atlantis PV being mounted shortly after the pod was scheduled. We also observed all the secret volumes mount without issue. However, there was still a big unexplained gap in the logs. We saw:</p>
            <pre><code>[operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-94b75052-8d70-4c67-993a-9238613f3b99\" (UniqueName: \"kubernetes.io/csi/rook-ceph-nvme.rbd.csi.ceph.com^0001-000e-rook-ceph-nvme-0000000000000002-a6163184-670f-422b-a135-a1246dba4695\") pod \"atlantis-0\" (UID: \"83089f13-2d9b-46ed-a4d3-cba885f9f48a\") device mount path \"/state/var/lib/kubelet/plugins/kubernetes.io/csi/rook-ceph-nvme.rbd.csi.ceph.com/d42dcb508f87fa241a49c4f589c03d80de2f720a87e36932aedc4c07840e2dfc/globalmount\"" pod="atlantis/atlantis-0"
[pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[atlantis-storage], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="atlantis/atlantis-0" podUID="83089f13-2d9b-46ed-a4d3-cba885f9f48a"
[util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="atlantis/atlantis-0"</code></pre>
            <p>The last two messages looped several times until eventually we observed the pod actually start up properly.</p><p>So <code>kubelet</code> thinks that the pod is otherwise ready to go, but it's not starting it and something's timing out.</p>
    <div>
      <h3>The missing piece</h3>
      <a href="#the-missing-piece">
        
      </a>
    </div>
    <p>The lowest-level logs we had on the pod didn't show us what was going on. What else do we have to look at? Well, the last message before it hangs is the PV being mounted onto the node. Ordinarily, if the PV has issues mounting (e.g. due to still being stuck mounted on another node), that will bubble up as an event. But something's still going on here, and the only thing we have left to drill down on is the PV itself. So I plug that into Kibana, since the PV name is unique enough to make a good search term... and immediately something jumps out:</p>
            <pre><code>[volume_linux.go:49] Setting volume ownership for /state/var/lib/kubelet/pods/83089f13-2d9b-46ed-a4d3-cba885f9f48a/volumes/kubernetes.io~csi/pvc-94b75052-8d70-4c67-993a-9238613f3b99/mount and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699</code></pre>
            <p>Remember how I said at the beginning we'd just run out of inodes? In other words, we have a <i>lot</i> of files on this PV. When the PV is mounted, <code>kubelet</code> is running <code>chgrp -R</code> to recursively change the group on every file and folder across this filesystem. No wonder it was taking so long — that's a ton of entries to traverse even on fast flash storage!</p><p>The pod's <code>spec.securityContext</code> included <code>fsGroup: 1</code>, which ensures that processes running under GID 1 can access files on the volume. Atlantis runs as a non-root user, so without this setting it wouldn’t have permission to read or write to the PV. The way Kubernetes enforces this is by recursively updating ownership on the entire PV <i>every time it's mounted</i>.</p>
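            <p>To get a feel for the cost, here’s a toy illustration (not our production setup): create a few thousand files in a scratch directory, then time a recursive group change, which is the same class of work <code>kubelet</code> performs on every mount when <code>fsGroup</code> is set:</p>

```shell
# Create a throwaway directory with many small files, then time a
# recursive chgrp over it (using our own current group, so no privileges
# are needed). kubelet does the equivalent across the entire PV.
dir=$(mktemp -d)
seq 1 2000 | sed "s|^|$dir/f|" | xargs touch
count=$(find "$dir" -type f | wc -l)
time chgrp -R "$(id -gn)" "$dir"
rm -rf "$dir"
echo "changed group on $count files"
```

<p>Even a few thousand entries take measurable time; scale that traversal up to millions of files on a PV and a 30-minute stall stops being mysterious.</p>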
    <div>
      <h3>The fix</h3>
      <a href="#the-fix">
        
      </a>
    </div>
    <p>Fixing this was heroically... boring. Since version 1.20, Kubernetes has supported an additional field on <code>pod.spec.securityContext</code> called <code>fsGroupChangePolicy</code>. This field defaults to <code>Always</code>, which leads to the exact behavior we see here. It has another option, <code>OnRootMismatch</code>, to only change permissions if the root directory of the PV doesn't have the right permissions. One caveat: <code>OnRootMismatch</code> only inspects the PV's root directory, so if anything creates files with the wrong group ownership, Kubernetes will no longer correct them on mount. Make sure you know exactly how files are created on your PV before setting it. We checked to make sure that nothing should be changing the group on anything in the PV, and then set that field: </p>
            <pre><code>spec:
  template:
    spec:
      securityContext:
        fsGroupChangePolicy: OnRootMismatch</code></pre>
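            <p>For context, here’s roughly how that field sits alongside the rest of a pod’s <code>securityContext</code>. This is a hedged sketch, not our exact manifest; the UID shown is illustrative, though the text above does mention <code>fsGroup: 1</code>:</p>
            <pre><code>spec:
  template:
    spec:
      securityContext:
        runAsUser: 100        # illustrative non-root UID
        fsGroup: 1            # grants GID 1 access to files on the volume
        fsGroupChangePolicy: OnRootMismatch</code></pre>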
            <p>Now, it takes about 30 seconds to restart Atlantis, down from 30 minutes when we started.</p><p>Default Kubernetes settings are sensible for small volumes, but they can become bottlenecks as data grows. For us, this one-line change to <code>fsGroupChangePolicy</code> reclaimed nearly 50 hours of blocked engineering time per month. This was time our teams had been spending waiting for infrastructure changes to go through, and time that our on-call engineers had been spending responding to false alarms. That’s roughly 600 hours a year returned to productive work, from a fix that took longer to diagnose than to deploy.</p><p>If you’re running workloads with large persistent volumes, it’s worth checking whether recursive permission changes like this are silently eating your restart time. Audit your <code>securityContext</code> settings, especially <code>fsGroup</code> and <code>fsGroupChangePolicy</code>. <code>OnRootMismatch</code> has been available since v1.20.</p><p>Not every fix is heroic or complex, and it’s usually worth asking “why does the system behave this way?”</p><p>If debugging infrastructure problems at scale sounds interesting, <a href="https://cloudflare.com/careers"><u>we’re hiring</u></a>. Come join us on the <a href="https://community.cloudflare.com/"><u>Cloudflare Community</u></a> or our <a href="https://discord.cloudflare.com/"><u>Discord</u></a> to talk shop.</p> ]]></content:encoded>
            <category><![CDATA[Kubernetes]]></category>
            <category><![CDATA[Terraform]]></category>
            <category><![CDATA[Platform Engineering]]></category>
            <category><![CDATA[Infrastructure]]></category>
            <category><![CDATA[SRE]]></category>
            <guid isPermaLink="false">6bSk27AUeu3Ja7pTySyy0t</guid>
            <dc:creator>Braxton Schafer</dc:creator>
        </item>
        <item>
            <title><![CDATA[Shifting left at enterprise scale: how we manage Cloudflare with Infrastructure as Code]]></title>
            <link>https://blog.cloudflare.com/shift-left-enterprise-scale/</link>
            <pubDate>Tue, 09 Dec 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare has shifted to Infrastructure as Code and policy enforcement to manage internal Cloudflare accounts. This new architecture uses Terraform, custom tooling, and Open Policy Agent to enforce security baselines and increase engineering velocity. ]]></description>
            <content:encoded><![CDATA[ <p>The Cloudflare platform is a critical system for Cloudflare itself. We are our own Customer Zero – using our products to secure and optimize our own services. </p><p>Within our security division, a dedicated Customer Zero team uses its unique position to provide a constant, high-fidelity feedback loop to product and engineering that drives continuous improvement of our products. And we do this at a global scale — where a single misconfiguration can propagate across our edge in seconds. If you've ever hesitated before pushing a change to production, sweating because you know one small mistake could lock every employee out of a critical application or take down a production service, you know the feeling. The risk of unintended consequences is real, and it keeps us up at night.</p><p>This presents an interesting challenge: How do we ensure hundreds of internal production Cloudflare accounts are secured consistently while minimizing human error?</p><p>While the Cloudflare dashboard is excellent for <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> and analytics, manually clicking through hundreds of accounts to ensure security settings are identical is a recipe for mistakes. To keep our sanity and our security intact, we stopped treating our configurations as manual point-and-click tasks and started treating them like code. We adopted “shift left” principles to move security checks to the earliest stages of development. </p><p>This wasn't an abstract corporate goal for us. It was a survival mechanism to catch errors before they caused an incident, and it required a fundamental change in our governance architecture.</p>
    <div>
      <h2>What Shift Left means to us</h2>
      <a href="#what-shift-left-means-to-us">
        
      </a>
    </div>
    <p>"Shifting left" refers to moving validation steps earlier in the software development lifecycle (SDLC). In practice, this means integrating testing, security audits, and policy compliance checks directly into the <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">continuous integration and continuous deployment (CI/CD) pipeline</a>. By catching issues or misconfigurations at the merge request stage, we identify issues when the cost of remediation is lowest, rather than discovering them after deployment.</p><p>When we think about applying shift left principles at Cloudflare, four key principles stand out:</p><ul><li><p><b>Consistency</b>: Configurations must be easily copied and reused across accounts.</p></li><li><p><b>Scalability</b>: Large changes can be applied rapidly across multiple accounts.</p></li><li><p><b>Observability</b>: Configurations must be auditable by anyone for current state, accuracy, and security.</p></li><li><p><b>Governance</b>: Guardrails must be proactive — enforced before deployment to avoid incidents.</p></li></ul>
    <div>
      <h2>A production IaC operating model</h2>
      <a href="#a-production-iac-operating-model">
        
      </a>
    </div>
    <p>To support this model, we transitioned all production accounts to being managed with Infrastructure as Code (IaC). Every modification is tracked, tied to a user, commit, and an internal ticket. Teams still use the dashboard for analytics and insights, but critical production changes are all done in code.</p><p>This model ensures every change is peer-reviewed, and policies, though set by the security team, are implemented by the owning engineering teams themselves.</p><p>This setup is grounded in two major technologies: <a href="https://developer.hashicorp.com/terraform"><u>Terraform</u></a> and a custom CI/CD pipeline.</p>
    <div>
      <h2>Our enterprise IaC stack</h2>
      <a href="#our-enterprise-iac-stack">
        
      </a>
    </div>
    <p>We chose Terraform for its mature open-source ecosystem, strong community support, and deep integration with Policy as Code tooling. Furthermore, using the <a href="https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs"><u>Cloudflare Terraform Provider</u></a> internally allows us to actively <a href="https://blog.cloudflare.com/tag/dogfooding/"><u>dogfood</u></a> the experience and improve it for our customers.</p><p>To manage the scale of hundreds of accounts and around 30 merge requests per day, our CI/CD pipeline runs on <a href="https://www.runatlantis.io/"><u>Atlantis</u></a>, integrated with <a href="https://about.gitlab.com/"><u>GitLab</u></a>. We also use a custom Go program, tfstate-butler, that acts as a broker to securely store state files.</p><p>tfstate-butler operates as an HTTP backend for Terraform. The primary design driver was security: it ensures unique encryption keys per state file to limit the blast radius of any potential compromise. </p><p>All internal account configurations are defined in a centralized <a href="https://developers.cloudflare.com/pages/configuration/monorepos/"><u>monorepo</u></a>. Individual teams own and deploy their specific configurations and are the designated code owners for their sections of this centralized repository, ensuring accountability. To read more about this configuration, check out <a href="https://blog.cloudflare.com/terraforming-cloudflare-at-cloudflare/"><u>How Cloudflare uses Terraform to manage Cloudflare</u></a>.</p>
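    <p>Because tfstate-butler speaks Terraform’s standard <code>http</code> backend protocol, a project points at it with an ordinary backend block. The configuration below is a hedged sketch: the hostname and paths are made-up placeholders, since tfstate-butler’s actual endpoints are internal:</p>
            <pre><code>terraform {
  backend "http" {
    # Placeholder address; the real broker endpoint is internal.
    address        = "https://tfstate-butler.example.internal/state/my-project"
    lock_address   = "https://tfstate-butler.example.internal/state/my-project/lock"
    unlock_address = "https://tfstate-butler.example.internal/state/my-project/lock"
  }
}</code></pre>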
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/17DDCeUrkEeWqtqpIPZoV4/114b63e0b8c408843b14c447ed27ed97/image1.png" />
          </figure><p><sup>Infrastructure as Code Data Flow Diagram</sup></p>
    <div>
      <h2>Baselines and Policy as Code</h2>
      <a href="#baselines-and-policy-as-code">
        
      </a>
    </div>
    <p>The entire shift left strategy hinges on establishing a strong security baseline for all internal production Cloudflare accounts. The baseline is a collection of security policies that are defined in code (Policy as Code). This baseline is not merely a set of guidelines but rather a required security configuration we enforce across the platform — e.g., maximum session length, required logs, specific WAF configurations, etc. </p><p>This setup is where policy enforcement shifts from manual audits to automated gates. We use the <a href="https://www.openpolicyagent.org/"><u>Open Policy Agent (OPA)</u></a> framework and its policy language, <a href="https://www.openpolicyagent.org/docs/policy-language#what-is-rego"><u>Rego</u></a>, via the Atlantis Conftest Policy Checking feature.</p>
    <div>
      <h2>Defining policies as code</h2>
      <a href="#defining-policies-as-code">
        
      </a>
    </div>
    <p>Rego policies define the specific security requirements that make up the baseline for all Cloudflare provider resources. We currently maintain approximately 50 policies.</p><p>For example, here is a Rego policy that validates that only @cloudflare.com emails are used in an access policy: </p>
            <pre><code># validate no use of non-cloudflare email
warn contains reason if {
    r := tfplan.resource_changes[_]
    r.mode == "managed"
    r.type == "cloudflare_access_policy"

    include := r.change.after.include[_]
    email_address := include.email[_]
    not endswith(email_address, "@cloudflare.com")

    reason := sprintf("%-40s :: only @cloudflare.com emails are allowed", [r.address])
}
warn contains reason if {
    r := tfplan.resource_changes[_]
    r.mode == "managed"
    r.type == "cloudflare_access_policy"

    require := r.change.after.require[_]
    email_address := require.email[_]
    not endswith(email_address, "@cloudflare.com")

    reason := sprintf("%-40s :: only @cloudflare.com emails are allowed", [r.address])
}</code></pre>
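            <p>Stripped of the Terraform-plan plumbing, the core of that policy is a suffix check on each email address. A minimal Go sketch of the same predicate (purely illustrative — the real enforcement lives in Rego and runs via conftest):</p>

```go
package main

import (
	"fmt"
	"strings"
)

// allowedEmail mirrors the policy's core check: the address must end
// with the corporate domain, matching Rego's endswith().
func allowedEmail(addr string) bool {
	return strings.HasSuffix(addr, "@cloudflare.com")
}

func main() {
	for _, e := range []string{"alice@cloudflare.com", "bob@example.com"} {
		fmt.Printf("%s -> allowed=%v\n", e, allowedEmail(e))
	}
}
```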
            
    <div>
      <h2>Enforcing the baseline</h2>
      <a href="#enforcing-the-baseline">
        
      </a>
    </div>
    <p>The policy check runs on every merge request (MR), ensuring configurations are compliant <i>before</i> deployment. Policy check output is shown directly in the GitLab MR comment thread. </p><p>Policy enforcement operates in two modes:</p><ol><li><p><b>Warning:</b> Leaves a comment on the MR, but allows the merge.</p></li><li><p><b>Deny:</b> Blocks the deployment outright.</p></li></ol><p>If the policy check determines the configuration being applied in the MR deviates from the baseline, the output will return which resources are out of compliance.</p><p>The example below shows an output from a policy check identifying 3 discrepancies in a merge request:</p>
            <pre><code>WARN - cloudflare_zero_trust_access_application.app_saas_xxx :: "session_duration" must be less than or equal to 10h

WARN - cloudflare_zero_trust_access_application.app_saas_xxx_pay_per_crawl :: "session_duration" must be less than or equal to 10h

WARN - cloudflare_zero_trust_access_application.app_saas_ms :: you must have at least one require statement of auth_method = "swk"

41 tests, 38 passed, 3 warnings, 0 failures, 0 exceptions</code></pre>
            
    <div>
      <h2>Handling policy exceptions</h2>
      <a href="#handling-policy-exceptions">
        
      </a>
    </div>
    <p>We understand that exceptions are necessary, but they must be managed with the same rigor as the policy itself. When a team requires an exception, they submit a request via Jira.</p><p>Once approved by the Customer Zero team, the exception is formalized by submitting a pull request to the central exceptions.rego repository. Exceptions can be made at various levels:</p><ul><li><p><b>Account:</b> Exclude account_x from policy_y.</p></li><li><p><b>Resource Category</b>: Exclude all resource_a’s in account_x from policy_y.</p></li><li><p><b>Specific Resource: </b>Exclude resource_a_1 in account_x from policy_y.</p></li></ul><p>This example shows a session length exception for five specific applications under two separate Cloudflare accounts: </p>
            <pre><code>{
    "exception_type": "session_length",
    "exceptions": [
        {
            "account_id": "1xxxx",
            "tf_addresses": [
                "cloudflare_access_application.app_identity_access_denied",
                "cloudflare_access_application.enforcing_ext_auth_worker_bypass",
                "cloudflare_access_application.enforcing_ext_auth_worker_bypass_dev"
            ]
        },
        {
            "account_id": "2xxxx",
            "tf_addresses": [
                "cloudflare_access_application.extra_wildcard_application",
                "cloudflare_access_application.wildcard"
            ]
        }
    ]
}</code></pre>
            
    <div>
      <h2>Challenges and lessons learned</h2>
      <a href="#challenges-and-lessons-learned">
        
      </a>
    </div>
    <p>Our journey wasn't without obstacles. We had years of clickops (manual changes made directly in the dashboard) scattered across hundreds of accounts. Trying to import the existing chaos into a strict Infrastructure as Code system felt like trying to change the tires on a moving car. Importing resources remains an ongoing process to this day.</p><p>We also ran into limitations of our own tools. We found edge cases in the Cloudflare Terraform provider that only appear when you try to manage infrastructure at this scale. These weren't just minor speed bumps. They were hard lessons on the necessity of eating our own dogfood, so we could build even better solutions.</p><p>That friction clarified exactly what we were up against, leading us to three hard-earned lessons.</p>
    <div>
      <h2>Lesson 1: high barriers to entry stall adoption </h2>
      <a href="#lesson-1-high-barriers-to-entry-stall-adoption">
        
      </a>
    </div>
    <p>The first hurdle for any large-scale IaC rollout is onboarding existing, manually configured resources. We gave teams two options: manually creating Terraform resources and import blocks, or using <a href="https://github.com/cloudflare/cf-terraforming"><u>cf-terraforming</u></a>.</p><p>We quickly discovered that Terraform fluency varies across teams, and the learning curve for manually importing existing resources proved to be much steeper than we anticipated.</p><p>Luckily, the cf-terraforming command-line utility uses the Cloudflare API to automatically generate the necessary Terraform code and import statements, significantly accelerating the migration process. </p><p>We also formed an internal community where experienced engineers could guide teams through the nuances of the provider and help unblock complex imports.</p>
    <div>
      <h2>Lesson 2: drift happens </h2>
      <a href="#lesson-2-drift-happens">
        
      </a>
    </div>
    <p>We also had to tackle configuration drift, which occurs when the IaC process is bypassed to expedite urgent changes. While making edits directly in the dashboard is faster during an incident, it leaves the Terraform state out of sync with reality.</p><p>We implemented a custom drift detection service that constantly compares the state defined by Terraform with the actual deployed state via the Cloudflare API. When drift is detected, an automated system creates an internal ticket and assigns it to the owning team with varying Service Level Agreements (SLAs) for remediation. </p>
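    <p>The heart of such a service is a comparison between desired and actual configuration. As a toy sketch of the idea in Go (our real service and its data model are internal; the key/value representation here is a simplification):</p>

```go
package main

import "fmt"

// detectDrift compares desired settings (from Terraform state) with actual
// settings (fetched from the API) and returns the keys that differ or are
// missing. A deliberately simplified model of drift detection.
func detectDrift(desired, actual map[string]string) []string {
	var drifted []string
	for key, want := range desired {
		if got, ok := actual[key]; !ok || got != want {
			drifted = append(drifted, key)
		}
	}
	return drifted
}

func main() {
	desired := map[string]string{"session_duration": "10h", "proxied": "true"}
	actual := map[string]string{"session_duration": "24h", "proxied": "true"}
	fmt.Println(detectDrift(desired, actual)) // prints [session_duration]
}
```

<p>In practice, each drifted key would feed the ticketing automation described above rather than just being printed.</p>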
    <div>
      <h2>Lesson 3: automation is key</h2>
      <a href="#lesson-3-automation-is-key">
        
      </a>
    </div>
    <p>Cloudflare innovates quickly, so our set of products and APIs is ever-growing. Unfortunately, that meant that our Terraform provider was often behind in terms of feature parity with the product. </p><p>We solved that issue with the release of our <a href="https://blog.cloudflare.com/automatically-generating-cloudflares-terraform-provider/"><u>v5 provider</u></a>, which automatically generates the Terraform provider based on the OpenAPI specification. This transition wasn’t without bumps as we hardened our approach to code generation, but this approach ensures that the API and Terraform stay in sync, reducing the chance of capability drift. </p>
    <div>
      <h2>The core lesson: proactive &gt; reactive</h2>
      <a href="#the-core-lesson-proactive-reactive">
        
      </a>
    </div>
    <p>By centralizing our security baselines, mandating peer reviews, and enforcing policies before any change hits production, we minimize the possibility of configuration errors, accidental deletions, or policy violations. The architecture not only helps to prevent manual mistakes, but actually increases engineering velocity because teams are confident their changes are compliant. </p><p>The key lesson from our work with Customer Zero is this: While the Cloudflare dashboard is excellent for day-to-day operations, achieving enterprise-level scale and consistent governance requires a different approach. When you treat your Cloudflare configurations as living code, you can scale securely and confidently. </p><p>Have thoughts on Infrastructure as Code? Keep the conversation going and share your experiences over at <a href="http://community.cloudflare.com"><u>community.cloudflare.com</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Infrastructure as Code]]></category>
            <category><![CDATA[Customer Zero]]></category>
            <category><![CDATA[Terraform]]></category>
            <category><![CDATA[Dogfooding]]></category>
            <guid isPermaLink="false">6Kx2Ob32avdygaasok3ZCr</guid>
            <dc:creator>Chase Catelli</dc:creator>
            <dc:creator>Ryan Pesek</dc:creator>
            <dc:creator>Derek Pitts</dc:creator>
        </item>
        <item>
            <title><![CDATA[Automatically generating Cloudflare’s Terraform provider]]></title>
            <link>https://blog.cloudflare.com/automatically-generating-cloudflares-terraform-provider/</link>
            <pubDate>Tue, 24 Sep 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ The Cloudflare Terraform provider used to be manually maintained. With the help of our existing OpenAPI code generation pipeline, we’re now automatically generating the provider for better  ]]></description>
            <content:encoded><![CDATA[ <p>In November 2022, we announced the transition to <a href="https://blog.cloudflare.com/open-api-transition/"><u>OpenAPI Schemas for the Cloudflare API</u></a>. Back then, we had an audacious goal to make the OpenAPI schemas the source of truth for our SDK ecosystem and reference documentation. During 2024’s Developer Week, we backed this up by <a href="https://blog.cloudflare.com/workers-production-safety/"><u>announcing that our SDK libraries are now automatically generated</u></a> from these OpenAPI schemas. Today, we’re excited to announce the latest pieces of the ecosystem to now be automatically generated — the Terraform provider and API reference documentation.</p><p>This means that the moment a new feature or attribute is added to our products and the team documents it, you’ll be able to see how it’s meant to be used across our SDK ecosystem <i>and</i> make use of it immediately. No more delays. No more lacking coverage of API endpoints.</p><p>You can find the new documentation site at <a href="https://developers.cloudflare.com/api-next/"><u>https://developers.cloudflare.com/api-next/</u></a>, and you can try the preview release candidate of the Terraform provider by <a href="https://registry.terraform.io/providers/cloudflare/cloudflare/5.0.0-alpha1"><u>installing 5.0.0-alpha1</u></a>.</p>
    <div>
      <h2>Why Terraform? </h2>
      <a href="#why-terraform">
        
      </a>
    </div>
    <p>For anyone who is unfamiliar with <a href="https://www.terraform.io/"><u>Terraform</u></a>, it is a tool for managing your infrastructure as code, much like you would with your application code. Many of our customers (big and small) rely on Terraform to orchestrate their infrastructure in a technology-agnostic way. Under the hood, it is essentially an HTTP client with lifecycle management built in, which means it makes use of our publicly documented APIs in a way that understands how to create, read, update and delete for the life of the resource. </p>
    <div>
      <h2>Keeping Terraform updated — the old way</h2>
      <a href="#keeping-terraform-updated-the-old-way">
        
      </a>
    </div>
    <p>Historically, Cloudflare has manually maintained a Terraform provider, but since the provider internals require their own unique way of doing things, responsibility for maintenance and support has landed on the shoulders of a handful of individuals. The service teams always had difficulties keeping up with the number of changes, due to the amount of cognitive overhead required to ship a single change in the provider. In order for a team to get a change to the provider, it took a minimum of 3 pull requests (4 if you were adding support to <a href="https://github.com/cloudflare/cf-terraforming"><u>cf-terraforming</u></a>).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6spvs4QAkY7BXLNfABDSQs/838f9b224838cd174376eb413cce7848/image6.png" />
          </figure><p>Even with the 4 pull requests completed, the process didn’t guarantee coverage of all available attributes, which meant small yet important details could be forgotten and not exposed to customers, causing frustration when trying to configure a resource.</p><p>To address this, our Terraform provider needed to rely on the same OpenAPI schemas that the rest of our SDK ecosystem was <a href="https://blog.cloudflare.com/lessons-from-building-an-automated-sdk-pipeline/"><u>already benefiting from</u></a>.</p>
    <div>
      <h2>Updating Terraform automatically</h2>
      <a href="#updating-terraform-automatically">
        
      </a>
    </div>
    <p>The thing that differentiates Terraform from our SDKs is that it manages the lifecycle of resources. With that comes a new range of problems related to known values and managing differences in the request and response payloads. Let’s compare the two different approaches of creating a new DNS record and fetching it back.</p><p>With our Go SDK:</p>
            <pre><code>// Create the new record
record, _ := client.DNS.Records.New(context.TODO(), dns.RecordNewParams{
	ZoneID: cloudflare.F("023e105f4ecef8ad9ca31a8372d0c353"),
	Record: dns.RecordParam{
		Name:    cloudflare.String("@"),
		Type:    cloudflare.String("CNAME"),
		Content: cloudflare.String("example.com"),
	},
})


// Wasteful fetch, but shows the point
client.DNS.Records.Get(
	context.Background(),
	record.ID,
	dns.RecordGetParams{
		ZoneID: cloudflare.String("023e105f4ecef8ad9ca31a8372d0c353"),
	},
)
</code></pre>
            <p>And with Terraform:</p>
            <pre><code>resource "cloudflare_dns_record" "example" {
  zone_id = "023e105f4ecef8ad9ca31a8372d0c353"
  name    = "@"
  content = "example.com"
  type    = "CNAME"
}</code></pre>
            <p>On the surface, the Terraform approach looks simpler, and it is: the complexity of knowing how to create a new resource and maintain changes to it is handled for you. The catch is that for Terraform to offer this abstraction and data guarantee, all values must be known at apply time. Even if you’re not using the <code>proxied</code> value, Terraform needs to know what it is in order to save it in the state file and manage that attribute going forward. The error below is what Terraform operators commonly see from providers when a value isn’t known at apply time.</p>
            <pre><code>Error: Provider produced inconsistent result after apply

When applying changes to example_thing.foo, provider "provider[\"registry.terraform.io/example/example\"]"
produced an unexpected new value: .foo: was null, but now cty.StringVal("").</code></pre>
            <p>Whereas when using the SDKs, if you don’t need a field, you just omit it and never need to worry about maintaining known values.</p><p>Tackling this for our OpenAPI schemas was no small feat. Since introducing Terraform generation support, the quality of our schemas has improved by an order of magnitude. Now we are explicitly calling out all default values that are present, variable response properties based on the request payload, and any server-side computed attributes. All of this means a better experience for anyone that interacts with our APIs.</p>
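<p>In OpenAPI terms, that means spelling out what used to be implicit. A hypothetical schema fragment (illustrative, not a real Cloudflare schema) showing an explicit default and a server-side computed attribute:</p>

```yaml
# A writable boolean with an explicit default: the Terraform generator can
# fill in a known value even when the operator omits the attribute.
proxied:
  type: boolean
  default: false

# A server-side computed attribute, marked read-only so the generator knows
# its value only exists after the API call returns.
id:
  type: string
  readOnly: true
```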
    <div>
      <h3>Making the jump from terraform-plugin-sdk to terraform-plugin-framework</h3>
      <a href="#making-the-jump-from-terraform-plugin-sdk-to-terraform-plugin-framework">
        
      </a>
    </div>
    <p>To build a Terraform provider and expose resources or data sources to operators, you need two main things: a provider server and a provider.</p><p>The provider server takes care of exposing a <a href="https://github.com/hashicorp/terraform/blob/main/docs/plugin-protocol/README.md"><u>gRPC server</u></a> that Terraform core (via the CLI) uses to communicate when managing resources or reading data sources from the operator provided configuration.</p><p>The provider is responsible for wrapping the resources and data sources, communicating with the remote services, and managing the state file. To do this, you either rely on the <a href="https://github.com/hashicorp/terraform-plugin-sdk"><u>terraform-plugin-sdk</u></a> (commonly referred to as SDKv2) or <a href="https://github.com/hashicorp/terraform-plugin-framework"><u>terraform-plugin-framework</u></a>, which includes all the interfaces and methods provided by Terraform in order to manage the internals correctly. The decision as to which plugin you use depends on the age of your provider. SDKv2 has been around longer and is what most Terraform providers use, but due to the age and complexity, it has many core unresolved issues that must remain in order to facilitate backwards compatibility for those who rely on it. 
<code>terraform-plugin-framework</code> is the new version that, while lacking the breadth of features SDKv2 has, provides a more Go-like approach to building providers and addresses many of the underlying bugs in SDKv2.</p><p><i>(For a deeper comparison between SDKv2 and the framework, you can check out a </i><a href="https://www.youtube.com/watch?v=4P69E44mJGo"><i><u>conversation between myself and John Bristowe from Octopus Deploy</u></i></a><i>.)</i></p><p>The majority of the Cloudflare Terraform provider is built using SDKv2, but at the beginning of 2023, we <a href="https://github.com/cloudflare/terraform-provider-cloudflare/pull/2170"><u>took the plunge to multiplex</u></a> and offer both in our provider. To understand why this was needed, we have to understand a little about SDKv2. The way SDKv2 is structured isn't really conducive to representing null or "unset" values consistently and reliably. You can use the <a href="https://pkg.go.dev/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema#ResourceData.GetRawConfig"><u>experimental ResourceData.GetRawConfig</u></a> to check whether the value is set, null, or unknown in the config, but writing it back as null isn't really supported.</p><p>This caveat first popped up for us when the Edge Rules Engine (Rulesets) started onboarding new services and those services needed to support API responses that contained booleans in an unset (or missing), <code>true</code>, or <code>false</code> state each with their own reasoning and purpose. While this isn’t a conventional API design at Cloudflare, it is a valid way to do things that we should be able to work with. However, as mentioned above, the SDKv2 provider couldn't. This is because when a value isn't present in the response or read into state, it gets a Go-compatible zero value for the default. 
This showed up as the inability to unset values after they had been written to state as false values (and vice versa).</p><p>The only way to reliably support all three states of those boolean values was to migrate to the <code>terraform-plugin-framework</code>, which has the <a href="https://github.com/hashicorp/terraform-plugin-framework/blob/main/types/bool_value.go"><u>correct implementation of writing back unset values</u></a>.</p><p>Once we started adding more functionality using <code>terraform-plugin-framework</code> in the old provider, it was clear that it offered a better developer experience, so we <a href="https://github.com/cloudflare/terraform-provider-cloudflare/pull/2871"><u>added a ratchet</u></a> to prevent further SDKv2 usage and get ahead of anyone unknowingly setting themselves up to hit this issue.</p><p>When we decided that we would be automatically generating the Terraform provider, it was only fitting that we also brought all the resources over to be based on the <code>terraform-plugin-framework</code> and leave the issues from SDKv2 behind for good. This did complicate the migration: with the improved internals came changes to major components, like the schema and <a href="https://en.wikipedia.org/wiki/Create,_read,_update_and_delete"><u>CRUD operations</u></a>, that we needed to familiarize ourselves with. However, it has been a worthwhile investment: we’ve future-proofed the foundations of the provider and now make fewer compromises on a great Terraform experience due to buggy legacy internals.</p>
    <div>
      <h3>Iteratively finding bugs </h3>
      <a href="#iteratively-finding-bugs">
        
      </a>
    </div>
    <p>One of the common struggles with code generation pipelines is that unless you have existing tools that implement your new thing, it’s hard to know if it works or is reasonable to use. Sure, you can also generate your tests to exercise the new thing, but if there is a bug in the pipeline, you are very likely to not see it as a bug as you will be generating test assertions that show the bug is expected behavior.</p><p>One of the essential feedback loops we have had is the existing acceptance test suite. All resources within the existing provider had a mix of regression and functionality tests. Best of all, as the test suite is creating and managing real resources, it was very easy to know whether the outcome was a working implementation or not by looking at the HTTP traffic to see whether the API calls were accepted by the remote endpoints. Getting the test suite ported over was only a matter of copying over all the existing tests and checking for any type assertion differences (such as list to single nested list) before kicking off a test run to determine whether the resource was working correctly.</p><p>While the centralized schema pipeline was a huge quality of life improvement for having schema fixes propagate to the whole ecosystem almost instantly, it couldn’t help us solve the largest hurdle, which was surfacing bugs that hide other bugs. 
This was time-consuming because when fixing a problem in Terraform, you can hit an error in three places:</p><ol><li><p>Before any API calls are made, Terraform performs logical schema validation, and when it encounters validation errors, it immediately halts.</p></li><li><p>If any API call fails, it stops at the CRUD operation and returns the diagnostics, immediately halting.</p></li><li><p>After the CRUD operation has run, Terraform checks that all values are known.</p></li></ol><p>That means that if we hit a bug at step 1 and fixed it, there was no guarantee that two more weren’t waiting for us at steps 2 and 3. And if we found and fixed a bug in step 2, the next round of testing could surface a new bug back in step 1.</p><p>There is no silver bullet here; our workaround was to notice patterns of problems in the schema behaviors and apply CI lint rules to the OpenAPI schemas before they entered the code generation pipeline. Taking this approach incrementally cut down the number of bugs in steps 1 and 2 until we were largely only dealing with errors from step 3.</p>
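<p>To make the lint idea concrete, here is a hypothetical rule in that spirit (the types and the rule itself are illustrative, not the real pipeline): flag any writable boolean property that doesn’t declare an explicit default, since that is exactly the kind of schema gap that later surfaces as an unknown value in step 3:</p>

```go
package main

import "fmt"

// Property is a tiny stand-in for an OpenAPI schema property
// (the real pipeline operates on full OpenAPI documents).
type Property struct {
	Name       string
	Type       string
	HasDefault bool
	ReadOnly   bool
}

// lintMissingDefaults is a hypothetical CI rule: every writable boolean
// must declare a default, so the Terraform generator always has a known
// value at apply time. It returns one message per violation.
func lintMissingDefaults(props []Property) []string {
	var errs []string
	for _, p := range props {
		if p.Type == "boolean" && !p.ReadOnly && !p.HasDefault {
			errs = append(errs, fmt.Sprintf("property %q: boolean without an explicit default", p.Name))
		}
	}
	return errs
}

func main() {
	props := []Property{
		{Name: "proxied", Type: "boolean"},                  // fails the rule
		{Name: "paused", Type: "boolean", HasDefault: true}, // passes
		{Name: "id", Type: "string", ReadOnly: true},        // not a writable boolean
	}
	for _, e := range lintMissingDefaults(props) {
		fmt.Println(e)
	}
}
```

Running a check like this in CI moves the failure from an opaque apply-time error to a reviewable schema diff.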
    <div>
      <h3>A more reusable approach to model and struct conversion </h3>
      <a href="#a-more-reusable-approach-to-model-and-struct-conversion">
        
      </a>
    </div>
    <p>Within Terraform provider CRUD operations, it is fairly common to see boilerplate like the following:</p>
            <pre><code>var plan ThingModel
diags := req.Plan.Get(ctx, &amp;plan)
resp.Diagnostics.Append(diags...)
if resp.Diagnostics.HasError() {
	return
}

out, err := r.client.UpdateThingModel(ctx, client.ThingModelRequest{
	AttrA: plan.AttrA.ValueString(),
	AttrB: plan.AttrB.ValueString(),
	AttrC: plan.AttrC.ValueString(),
})
if err != nil {
	resp.Diagnostics.AddError(
		"Error updating project Thing",
		"Could not update Thing, unexpected error: "+err.Error(),
	)
	return
}

result := convertResponseToThingModel(out)
tflog.Info(ctx, "created thing", map[string]interface{}{
	"attr_a": result.AttrA.ValueString(),
	"attr_b": result.AttrB.ValueString(),
	"attr_c": result.AttrC.ValueString(),
})

diags = resp.State.Set(ctx, result)
resp.Diagnostics.Append(diags...)
if resp.Diagnostics.HasError() {
	return
}</code></pre>
            <p>At a high level:</p><ul><li><p>We fetch the proposed updates (known as a plan) using <code>req.Plan.Get()</code></p></li><li><p>Perform the update API call with the new values</p></li><li><p>Manipulate the data from a Go type into a Terraform model (<code>convertResponseToThingModel</code>)</p></li><li><p>Set the state by calling <code>resp.State.Set()</code></p></li></ul><p>Initially, this doesn’t seem too problematic. However, the third step, where we manipulate the Go type into the Terraform model, quickly becomes cumbersome, error-prone, and complex, because every resource needs to do this to swap between the API types and the associated Terraform models.</p><p>To avoid generating more complex code than needed, one of the improvements featured in our provider is that all CRUD methods use unified <code>apijson.Marshal</code>, <code>apijson.Unmarshal</code>, and <code>apijson.UnmarshalComputed</code> methods that solve this problem by centralizing the conversion and handling logic based on the struct tags.</p>
            <pre><code>var data *ThingModel

resp.Diagnostics.Append(req.Plan.Get(ctx, &amp;data)...)
if resp.Diagnostics.HasError() {
	return
}

dataBytes, err := apijson.Marshal(data)
if err != nil {
	resp.Diagnostics.AddError("failed to serialize http request", err.Error())
	return
}
res := new(http.Response)
env := ThingResultEnvelope{*data}
_, err = r.client.Thing.Update(
	// ...
)
if err != nil {
	resp.Diagnostics.AddError("failed to make http request", err.Error())
	return
}

bytes, _ := io.ReadAll(res.Body)
err = apijson.UnmarshalComputed(bytes, &amp;env)
if err != nil {
	resp.Diagnostics.AddError("failed to deserialize http request", err.Error())
	return
}
data = &amp;env.Result

resp.Diagnostics.Append(resp.State.Set(ctx, &amp;data)...)</code></pre>
            <p>Instead of needing to generate hundreds of instances of type-to-model converter methods, we can instead decorate the Terraform model with the correct tags and handle marshaling and unmarshaling of the data consistently. It’s a minor change to the code that in the long run makes the generation more reusable and readable. As an added benefit, this approach is great for bug fixing as once you identify a bug with a particular type of field, fixing that in the unified interface fixes it for other occurrences you may not yet have found.</p>
    <div>
      <h2>But wait, there’s more (docs)!</h2>
      <a href="#but-wait-theres-more-docs">
        
      </a>
    </div>
    <p>To top off our OpenAPI schema usage, we’re tightening the SDK integration with our <a href="https://developers.cloudflare.com/api-next/"><u>new API documentation site</u></a>. It’s using the same pipeline we’ve invested in for the last two years while addressing some of the common usage issues.</p>
    <div>
      <h3>SDK aware </h3>
      <a href="#sdk-aware">
        
      </a>
    </div>
    <p>If you’ve used our API documentation site, you know we give you examples of interacting with the API using command line tools like curl. This is a great starting point, but if you’re using one of the SDK libraries, you need to do the mental gymnastics to convert it to the method or type definition you want to use. Now that we’re using the same pipeline to generate the SDKs <b>and</b> the documentation, we’re solving that by providing examples in all the libraries you <i>could</i> use — not just curl.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2SNCehksc30kXXQvVKYC47/a3a6071be64d006a2da9b2e615d143ae/image2.png" />
            
            </figure><p><sup><i>Example using cURL to fetch all zones.</i></sup></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/50PeyK8oOLb51mCLF4ikds/764db96a24232b611ec88d5ff8f8844f/image4.png" />
            
            </figure><p><sup><i>Example using the TypeScript library to fetch all zones.</i></sup></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5rQn6OY3R1yi5iot1oxti4/09cf62ea46ede21d1541b5012497efdb/image5.png" />
            
            </figure><p><sup><i>Example using the Python library to fetch all zones.</i></sup></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Na9y9ta3fLBMEAvJK4uaH/41ecf061a5a088f4bdb313d70b173a9a/image7.png" />
            
            </figure><p><sup><i>Example using the Go library to fetch all zones.</i></sup></p><p>With this improvement, we also remember your language selection, so if you’ve chosen to view the documentation using our TypeScript library and keep clicking around, we keep showing you TypeScript examples until you switch languages.</p><p>Best of all, when we introduce new attributes to existing endpoints or add SDK languages, this documentation site is automatically kept in sync with the pipeline. It is no longer a huge effort to keep it all up to date.</p>
    <div>
      <h3>Faster and more efficient rendering</h3>
      <a href="#faster-and-more-efficient-rendering">
        
      </a>
    </div>
    <p>A problem we’ve always struggled with is the sheer number of API endpoints and how to represent them. As of this post, we have 1,330 endpoints, and each of those endpoints has a request payload, a response payload, and multiple types associated with it. When it comes to rendering this much information, the solutions we’ve used in the past have had to make tradeoffs to make parts of the representation work.</p><p>This next iteration of the API documentation site addresses this in a couple of ways:</p><ul><li><p>It's implemented as a modern React application that pairs an interactive client-side experience with static pre-rendered content, resulting in a quick initial load and fast navigation. (Yes, it even works without JavaScript enabled!)</p></li><li><p>It fetches the underlying data incrementally as you navigate.</p></li></ul><p>By solving this foundational issue, we’ve unlocked other planned improvements to the documentation site and SDK ecosystem that improve the user experience without making the tradeoffs we’ve needed to in the past.</p>
    <div>
      <h3>Permissions</h3>
      <a href="#permissions">
        
      </a>
    </div>
    <p>One of the most requested features to be re-implemented in the documentation site has been minimum required permissions for API endpoints. A previous iteration of the site had this available. However, unknown to most who used it, the values were manually maintained and were regularly incorrect, causing support tickets to be raised and frustration for users.</p><p>Inside Cloudflare's identity and access management system, the question “what do I need to access this endpoint?” doesn’t have a simple answer. In the normal flow of a request to the control plane, two different systems each provide part of the answer, which are then combined to give you the full picture. As we couldn’t initially automate this as part of the OpenAPI pipeline, we opted to leave it out rather than have it be incorrect with no way of verifying it.</p><p>Fast-forward to today, and we’re excited to say endpoint permissions are back! We built new tooling that abstracts answering this question in a way that we can integrate into our code generation pipeline, so all endpoints automatically get this information. Much like the rest of the code generation platform, it is focused on having service teams own and maintain high quality schemas that can be reused, with value adds introduced without any work on their part.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/641gSS5MLQpCvEANYXcVK6/447cf0b873ecb60fdbbc415df0424363/image3.png" />
            
            </figure>
    <div>
      <h2>Stop waiting for updates</h2>
      <a href="#stop-waiting-for-updates">
        
      </a>
    </div>
    <p>With these announcements, we’re putting an end to waiting for updates to land in the SDK ecosystem. These improvements let us ship new attributes and endpoints across the ecosystem the moment teams document them. So what are you waiting for? Check out the <a href="https://registry.terraform.io/providers/cloudflare/cloudflare/5.0.0-alpha1"><u>Terraform provider</u></a> and <a href="https://developers.cloudflare.com/api-next/"><u>API documentation site</u></a> today.</p>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[SDK]]></category>
            <category><![CDATA[Terraform]]></category>
            <category><![CDATA[Open API]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">1M8zVthnUiMpJpGylQuptu</guid>
            <dc:creator>Jacob Bednarz</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Cloudflare uses Terraform to manage Cloudflare]]></title>
            <link>https://blog.cloudflare.com/terraforming-cloudflare-at-cloudflare/</link>
            <pubDate>Thu, 17 Nov 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare uses the Cloudflare Terraform provider extensively to make changes to our internal accounts as easy as opening a pull request. ]]></description>
            <content:encoded><![CDATA[ <p>Configuration management is far from a solved problem. As organizations scale beyond a handful of administrators, having a secure, auditable, and self-service way of updating system settings becomes invaluable. Managing a Cloudflare account is no different. With <a href="https://www.cloudflare.com">dozens of products</a> and <a href="https://api.cloudflare.com/">hundreds of API endpoints</a>, keeping track of current configuration and making bulk updates across multiple zones can be a challenge. While the Cloudflare Dashboard is great for analytics and feature exploration, any changes that could potentially impact users really should get a code review before being applied!</p><p>This is where <a href="https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs">Cloudflare's Terraform provider</a> can come in handy. Built as a layer on top of the <a href="https://github.com/cloudflare/cloudflare-go">cloudflare-go</a> library, the provider allows users to interface with the Cloudflare API using stateful <a href="https://developer.hashicorp.com/terraform">Terraform</a> resource declarations. Not only do we actively support this provider for customers, we make extensive use of it internally! In this post, we hope to provide some best practices we've learned about managing complex Cloudflare configurations in Terraform.</p>
    <div>
      <h2>Why Terraform</h2>
      <a href="#why-terraform">
        
      </a>
    </div>
    <p>Unsurprisingly, we find Cloudflare's products to be pretty useful for securing and enhancing the performance of services we deploy internally. We use <a href="https://www.cloudflare.com/dns/">DNS</a>, <a href="https://www.cloudflare.com/waf/">WAF</a>, <a href="https://www.cloudflare.com/products/zero-trust/">Zero Trust</a>, <a href="https://www.cloudflare.com/products/zero-trust/email-security/">Email Security</a>, <a href="https://www.cloudflare.com/developer-platform-hub/">Workers</a>, and all manner of <a href="https://www.cloudflare.com/whats-new/">experimental new features</a> throughout the company. This <a href="https://en.wikipedia.org/wiki/Eating_your_own_dog_food">dog-fooding</a> allows us to battle-harden the services we provide to users and feed our desired features back to the product teams all while running the backend of Cloudflare. But, as Cloudflare grew, so did the complexity and importance of our configuration.</p><p>When we were a much smaller company, we only had a handful of accounts with designated administrators making changes on behalf of their colleagues. However, over time this handful of accounts grew into hundreds with each managed by separate teams. Independent accounts are useful in that they allow service-owners to make modifications that can't impact others, but it comes with overhead.</p><p>We faced the challenge of ensuring consistent security policies, up-to-date account memberships, and change visibility. While our  accounts were still administered by kind human stewards, we had numerous instances of account members not being removed after they transferred to a different team. While this never became a security incident, it demonstrated the shortcomings of manually provisioning account memberships. In the case of a production service migration, the administrator executing the change would often hop on a video call and ask for others to triple-check an IP address, ruleset, or access policy update. 
It was an era of looking through the audit logs to see what broke a service.</p><p>We wanted to make it easier for developers and users to make the changes they wanted without having to reach out to an administrator. Defining our configuration in code using Terraform has allowed us to keep tabs on the complexity of configuration while improving visibility and change management practices. By dogfooding the Cloudflare Terraform provider, we've been able to ensure:</p><ul><li><p>Modifications to accounts are peer reviewed by the team that owns an account.</p></li><li><p>Each change is tied to a user, commit, and a ticket explaining the rationale for the change.</p></li><li><p>API Tokens are tied to service accounts rather than individual human users, meaning they survive team changes and offboarding.</p></li><li><p>Account configuration can be audited by anyone at the company for current state, accuracy, and security without needing to add everyone as a member of every account.</p></li><li><p>Large changes, such as <a href="/how-cloudflare-implemented-fido2-and-zero-trust/">enforcing hard keys</a>, can be done rapidly, even in a single pull request.</p></li><li><p>Configuration can be easily copied and reused across accounts to promote best practices and speed up development.</p></li><li><p>We can use and iterate on our awesome provider and provide a better experience to other users (shoutout in particular to <a href="https://github.com/jacobbednarz">Jacob</a>!).</p></li></ul>
    <div>
      <h2>Terraform in CI/CD</h2>
      <a href="#terraform-in-ci-cd">
        
      </a>
    </div>
    <p><a href="https://github.com/hashicorp/terraform">Terraform</a> has a fairly mature open source ecosystem, built from years of running-in-production experience. Thus, there are a number of ways to make interacting with the system feel as comfortable to developers as git. One of these tools is <a href="https://www.runatlantis.io/">Atlantis</a>.</p><p>Atlantis acts as <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">continuous integration/continuous delivery (CI/CD)</a> for Terraform, fitting neatly into version control workflows and giving visibility into the changes being deployed in each code change. We use Atlantis to display Terraform plans (effectively a diff in configuration) within pull requests and apply the changes after the pull request has been approved. Having all the output from the Terraform provider in the comments of a pull request means there's no need to fiddle with the state locally or worry about where a state lock is coming from. Using Terraform CI/CD like this makes configuration management approachable to developers and non-technical folks alike.</p><p>In this example pull request, I'm adding a user to the cloudflare-cool-account (see the code in the next section). Once the PR is opened, Bitbucket posts a webhook to Atlantis, telling it to run a <code>terraform plan</code> using this branch. The resulting comment is placed in the pull request. Notice that this pull request can't be applied or merged yet, as it doesn't have an approval! Once the pull request is approved, I would comment <code>atlantis apply</code>, wait for Atlantis to post a comment containing the output of the command, and merge the pull request if that output looks correct.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6OqbJjxYbT2Dw4qpvFltng/d672644824a4808946b7f71d5d2fb3b9/image4-25.png" />
            
            </figure><p>Our Terraforming Cloudflare architecture consists of a monorepo with one directory (and tfstate) for each internally-owned Cloudflare account. This keeps all of our Cloudflare configuration centralized for easier oversight while remaining neatly organized.</p><p>A future release (<a href="https://github.com/cloudflare/terraform-provider-cloudflare/issues/1646">as of this writing</a>) will make it possible to manage multiple Cloudflare accounts in the same tfstate, but we've found that accounts in our use generally map fairly neatly onto teams. Teams can be configured as <a href="https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners">CODEOWNERS</a> for a given directory and be tagged on any pull requests to that account. With teams owning separate accounts and each account having a separate tfstate, it's rare for pull requests to get stuck waiting for a lock on the tfstate. Team-account-sized states remain relatively small, meaning that they also build quickly. Later on, we'll share some of the other optimizations we've made to keep the repo user-friendly.</p><p>Each of our Terraform states, given that they <a href="https://developer.hashicorp.com/terraform/language/settings/backends/configuration#credentials-and-sensitive-data">include secrets (including the API key!)</a>, is stored encrypted in an internal datastore. When a pull request is opened, Atlantis reaches out to a piece of middleware (that we may open source once it's cleaned up a bit) that retrieves and decrypts the state for processing. Once the pull request is applied, the state is encrypted and put away again.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2WMBQIoo8yaKC6RLz75E1T/c29b2d834e0f580ca5912733a7a0e51e/image2-44.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/76bxQk0hlFPFOHRyscuObV/23e258f576ab1e9b0c171c3fb5ee0ada/image5-14.png" />
            
            </figure><p>We execute a daily Terraform apply across all tfstates to capture any unintended config drift and rotate certificates when they approach expiration. This prevents unrelated changes from popping up in pull request diffs and causing confusion. While we could run more frequent state applies to ensure Terraform remains firmly up to date, once-a-day rectification strikes a balance between code enforcement and avoiding state locks while users are running Terraform plans in pull requests.</p><p>One of the problems that we encountered during our transition to Terraform is that folks were in the habit of making updates to configuration in the Dashboard and were still able to edit settings there. Thus, we didn't always have a single source of truth for our configuration in code. It also meant the change would get mysteriously (to them) reverted the next day! So that's why I'm excited to share a new Zero Trust Dashboard toggle that we've been turning on for our accounts internally: API/Terraform read-only mode.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Ks2CZ0OxPwNbV5Kal95Tj/ce12fabf21a7fe81c67690f4fc4cf4ef/image1-59.png" />
            
            </figure><p>Easily one of my favorite new features</p><p>With this button, we're able to politely prevent manual changes to your Cloudflare account’s Zero Trust configuration without removing permissions from the set of users who can fix settings manually in a break-glass emergency scenario. <a href="https://api.cloudflare.com/#zero-trust-organization-update-your-zero-trust-organization">Check out how you can enable this setting in your Zero Trust organization</a>.</p>
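<p>For reference, enabling the toggle via the API looks roughly like the following. The <code>is_ui_read_only</code> field name is our reading of the linked endpoint's documentation; verify it (and any other fields the update endpoint requires) against the current API reference before relying on it:</p>
<pre><code>curl -X PUT "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/access/organizations" \
  -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"is_ui_read_only": true}'</code></pre>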
    <div>
      <h2>Slick Snippets and Terraforming Recommendations</h2>
      <a href="#slick-snippets-and-terraforming-recommendations">
        
      </a>
    </div>
    <p>As our Terraform repository has matured, we've refined how we define Cloudflare resources in code. By finding a sweet spot between code reuse and readability, we've been able to minimize operational overhead and generally let users get their work done. Here's a couple of useful snippets that have been particularly valuable to us.</p>
    <div>
      <h3>Account Membership</h3>
      <a href="#account-membership">
        
      </a>
    </div>
    <p>This allows for defining a fairly straightforward mapping of user emails to account privileges without code duplication or complex modules. We pull the list of human-friendly names of <a href="https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/data-sources/account_roles">account roles</a> from the API to show <a href="https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/resources/account_member">user</a> permission assignments at a glance. Note: <a href="https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/resources/account_member#status">status</a> is a new argument that allows for accounts to be added without sending an email to the user; perfect for when an organization is using SSO. (Thanks <a href="https://github.com/patrobinson">patrobinson</a> for the <a href="https://github.com/cloudflare/terraform-provider-cloudflare/issues/1654">feature request</a> and <a href="https://github.com/markblackman">mblackman</a> for the <a href="https://github.com/cloudflare/terraform-provider-cloudflare/pull/1920">PR</a>!)</p>
<pre><code>variables.tf
---
data "cloudflare_account_roles" "my_account" {
  account_id = var.account_id
}

locals {
  roles = {
    for role in data.cloudflare_account_roles.my_account.roles :
    role.name =&gt; role
  }
}

members.tf
---
locals {
  users = {
    emerson = {
      roles = [
        local.roles["Administrator"].id
      ]
    }
    lucian = {
      roles = [
        local.roles["Super Administrator - All Privileges"].id
      ]
    }
    walruto = {
      roles = [
        local.roles["Audit Logs Viewer"].id,
        local.roles["Cloudflare Access"].id,
        local.roles["DNS"].id
      ]
    }
  }
}

resource "cloudflare_account_member" "account_member" {
  for_each      = local.users
  account_id    = var.account_id
  email_address = "${each.key}@cloudflare.com"
  role_ids      = each.value.roles
  status        = "accepted"
}</code></pre>
            
    <div>
      <h3>Defining Auto-Refreshing Access Service Tokens</h3>
      <a href="#defining-auto-refreshing-access-service-tokens">
        
      </a>
    </div>
<p>The <a href="https://github.com/cloudflare/terraform-provider-cloudflare/issues/1866">GitHub issue</a> and <a href="https://github.com/cloudflare/terraform-provider-cloudflare/pull/1872">provider change</a> that enabled automatic Access service token refreshes actually came from a need inside Cloudflare. Here's how we ended up implementing it. We begin by defining a set of services that need to connect to our hostnames that are protected by Access. Each of these <a href="https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/resources/access_service_token">tokens</a> is created and stored in a secret key-value store. Next, we reference those access tokens by ID in the target <a href="https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/resources/access_policy">Access policies</a>. Once this has run, the service owner or the service itself can retrieve the credentials from the data store. (Note: we're using Vault here, but any storage provider could be used in its place.)</p>
<pre><code>tokens.tf
---
locals {
  service_tokens = toset([
    "customer-service",   # TICKET-120
    "full-service",       # TICKET-128
    "quality-of-service", # TICKET-420
    "room-service"        # TICKET-927
  ])
}

resource "cloudflare_access_service_token" "token" {
  for_each             = local.service_tokens
  account_id           = var.account_id
  name                 = each.key
  min_days_for_renewal = 30
}

resource "vault_generic_secret" "access_service_token" {
  for_each     = local.service_tokens
  path         = "kv/secrets/${each.key}/access_service_token"
  disable_read = true

  data_json = jsonencode({
    client_id     = cloudflare_access_service_token.token[each.key].client_id,
    client_secret = cloudflare_access_service_token.token[each.key].client_secret
  })
}

super_cool_hostname.tf
---
resource "cloudflare_access_application" "super_cool_hostname" {
  account_id = var.account_id
  name       = "Super Cool Hostname"
  domain     = "supercool.hostname.tld"
}

resource "cloudflare_access_policy" "super_cool_hostname_service_access" {
  application_id = cloudflare_access_application.super_cool_hostname.id
  zone_id        = data.cloudflare_zone.hostname_tld.id
  name           = "TICKET-927 Allow Room Service"
  decision       = "non_identity"
  precedence     = 1
  include {
    service_token = [cloudflare_access_service_token.token["room-service"].id]
  }
}</code></pre>
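<p>Once the apply completes, a service owner (or the service itself) could fetch the stored credentials with something like the following, assuming the Vault KV mount and path from the snippet above:</p>
<pre><code>vault kv get -field=client_id     kv/secrets/room-service/access_service_token
vault kv get -field=client_secret kv/secrets/room-service/access_service_token</code></pre>
<p>Note that <code>disable_read</code> only stops Terraform itself from reading the secret back into state on refresh; consumers with the right Vault policy can still read it.</p>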
            
    <div>
      <h3>mTLS (Authenticated Origin Pulls) certificate creation and rotation</h3>
      <a href="#mtls-authenticated-origin-pulls-certificate-creation-and-rotation">
        
      </a>
    </div>
<p>To further defense-in-depth objectives, we've been rolling out mTLS throughout our internal systems. One of the places where we can take advantage of our Terraform provider is in defining <a href="/protecting-the-origin-with-tls-authenticated-origin-pulls/">AOP (Authenticated Origin Pulls)</a> certificates to lock down the Cloudflare-edge-to-origin connection. Anyone who has <a href="https://www.cloudflare.com/application-services/solutions/certificate-lifecycle-management/">managed certificates</a> of any kind can speak to the headaches they can cause. Having certificate configurations in Terraform takes out the manual work of rotation and expiration.</p><p>In this example we're defining <a href="https://api.cloudflare.com/#per-hostname-authenticated-origin-pull-properties">hostname-level AOP</a> as opposed to <a href="https://api.cloudflare.com/#zone-level-authenticated-origin-pulls-properties">zone-level AOP</a>. We start by cutting a certificate for each hostname. Once again we're using Vault for certificate creation, but other backends could be used just as well. This certificate is created with a (not-shown) 30-day expiration but is set to renew automatically. This means that once the time-to-expiration is equal to <a href="https://registry.terraform.io/providers/hashicorp/vault/latest/docs/resources/pki_secret_backend_cert#min_seconds_remaining">min_seconds_remaining</a>, the resource will be automatically tainted and replaced on the next Terraform run. We like to give this automation plenty of room before expiration to account for holiday seasons and to avoid paging humans when expiration alerts fire at seven days out.</p><p>For the rest of this snippet, the <a href="https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/resources/authenticated_origin_pulls_certificate">certificate is uploaded to Cloudflare</a> and the ID from that upload is then <a href="https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/resources/authenticated_origin_pulls">placed in the AOP configuration</a> for the given hostname. The <a href="https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle#create_before_destroy">create_before_destroy</a> meta-argument ensures that the replacement certificate is uploaded successfully before we remove the certificate that's currently in place.</p>
<pre><code>locals {
  hostnames = toset([
    "supercool.hostname.tld",
    "thatsafinelooking.hostname.tld"
  ])
}

resource "vault_pki_secret_backend_cert" "vault_cert" {
  for_each              = local.hostnames
  backend               = "pki-aop"
  name                  = "default"
  auto_renew            = true
  common_name           = "${each.key}.aop.pki.vault.cfdata.org"
  min_seconds_remaining = 864000 # renew when there are 10 days left before expiration
}

resource "cloudflare_authenticated_origin_pulls_certificate" "aop_cert" {
  for_each = local.hostnames
  zone_id  = data.cloudflare_zone.hostname_tld.id
  type     = "per-hostname"

  certificate = vault_pki_secret_backend_cert.vault_cert[each.key].certificate
  private_key = vault_pki_secret_backend_cert.vault_cert[each.key].private_key

  lifecycle {
    create_before_destroy = true
  }
}

resource "cloudflare_authenticated_origin_pulls" "aop_config" {
  for_each                                = local.hostnames
  zone_id                                 = data.cloudflare_zone.hostname_tld.id
  authenticated_origin_pulls_certificate  = cloudflare_authenticated_origin_pulls_certificate.aop_cert[each.key].id
  hostname                                = each.key
  enabled                                 = true
}</code></pre>
            
    <div>
      <h3>Terraform recommendations</h3>
      <a href="#terraform-recommendations">
        
      </a>
    </div>
    <p>The comfortable automation that we've achieved thus far did not come without some hair-pulling. Below are a few of the learnings that have allowed us to maintain the repository as a side project run by two engineers (shoutout <a href="https://github.com/dhaynespls">David</a>).</p>
    <div>
      <h4><b>Store your state somewhere safe</b></h4>
      <a href="#store-your-state-somewhere-safe">
        
      </a>
    </div>
<p>It feels worth repeating that the tfstate <a href="https://developer.hashicorp.com/terraform/language/settings/backends/configuration#credentials-and-sensitive-data"><b>contains secrets</b></a>, <b>including any API keys you're using with providers</b>, and <b>the default location of the tfstate is the current working directory.</b> It's very easy to accidentally commit this to source control. By defining a <a href="https://developer.hashicorp.com/terraform/language/settings/backends/configuration">backend</a>, the state can be stored with a cloud storage provider, <a href="https://developer.hashicorp.com/terraform/language/settings/backends/local">in a secure location on a filesystem</a>, <a href="https://developer.hashicorp.com/terraform/language/settings/backends/pg">in a database</a>, or even <a href="https://mirio.dev/2022/09/18/implementing-a-terraform-state-backend/">Cloudflare Workers</a>! Wherever the state is stored, make sure it is encrypted.</p>
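<p>As one hedged example (bucket name and key are placeholders), an S3 backend with encryption at rest enabled looks like this:</p>
<pre><code>terraform {
  backend "s3" {
    bucket  = "example-terraform-states"      # placeholder bucket name
    key     = "cloudflare/account-a.tfstate"  # one state per account
    region  = "us-east-1"
    encrypt = true                            # server-side encryption for the state object
  }
}</code></pre>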
    <div>
      <h5><b>Choose simplicity, avoid modules</b></h5>
      <a href="#choose-simplicity-avoid-modules">
        
      </a>
    </div>
<p><a href="https://developer.hashicorp.com/terraform/language/modules">Modules</a> are intended to reduce code repetition for well-defined chunks of systems such as "I want three clusters of whizz-bangs in locations A, C, and F." If cloud computing were like <a href="https://wiki.factorio.com/Blueprint">Factorio</a>, this would be amazing. However, financial, technical, and physical constraints mean subtle differences in systems develop over time, such as "I want fewer whizz-bangs in C and the whizz-bangs in F should get a different network topology." In Terraform, the implementation logic of these requirements is moved into the module code. <a href="https://github.com/hashicorp/hcl">HCL</a> is absolutely not the place to write decipherable conditionals. While module versioning prevents having to make every change backwards-compatible, keeping module usage up-to-date becomes another chore for repository maintainers.</p><p>An understandable codebase is a user-friendly codebase. It's rare that a deeply cryptic error will return from a misconfigured resource definition. Conversely, modules, especially custom ones, can lead users on a head-scratching adventure. This kind of system can't scale with confused users.</p><p>A few well-designed <a href="https://developer.hashicorp.com/terraform/language/meta-arguments/for_each">for_each</a> loops (we're obviously fans) can achieve similar objectives as modules without the complexity. It's fine to use plain old resources too! Especially when there are more than a handful of varying arguments, it's more valuable for the configuration to be clear than to be clever. For example: an <a href="https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/resources/account_member">account_member</a> resource makes sense in a for_each loop, but a <a href="https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/resources/page_rule">page_rule</a> probably doesn't.</p>
    <div>
      <h5><b>Keep tfstates small</b></h5>
      <a href="#keep-tfstates-small">
        
      </a>
    </div>
<p>Maintaining quick pull-request-to-plan turnaround keeps Terraform from feeling like a burden on users' time. Furthermore, if a plan takes 30 minutes to run, a rollback in the case of an issue would also take 30 minutes! This is another reason we use the one-tfstate-per-account model described above.</p><p>However, after noticing slow-downs coming from the large number of AOP certificate configurations in a big zone, we moved that code to a separate tfstate. We were able to make this change because AOP configuration is fairly self-contained. To ensure there would be no fighting between the states, we kept the API token permissions for each tfstate mutually exclusive of each other. Our Atlantis Terraform plans typically finish in under five minutes. If it feels impossible to get a tfstate's plan time down to a reasonable duration, it may be worth considering a different tool for that bit of configuration management.</p>
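<p>For anyone attempting a similar split, <code>terraform state mv</code> can move resources between local state files. The resource address and paths below are illustrative only, and both states should be backed up first:</p>
<pre><code># Move the AOP certificate resources (all for_each instances) into a
# separate state file managed by its own root module.
terraform state mv \
  -state=terraform.tfstate \
  -state-out=../aop/terraform.tfstate \
  'cloudflare_authenticated_origin_pulls_certificate.aop_cert' \
  'cloudflare_authenticated_origin_pulls_certificate.aop_cert'</code></pre>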
    <div>
<h5><b>Know when to use a different tool</b></h5>
      <a href="#know-when-to-use-a-different-tool">
        
      </a>
    </div>
    <p>Terraform isn't a panacea. We generally don't use Terraform to manage DNS records, for example. We use <a href="/improving-the-resiliency-of-our-infrastructure-dns-zone/">OctoDNS</a> which integrates more neatly into our infrastructure automation. DNS records can quickly add up to long state-rendering times and are often dynamically generated from systems that Terraform doesn't know about. To avoid conflicts, there should only ever be one system publishing changes to DNS records.</p><p>We also haven't figured out a maintainable way of managing Workers scripts in Terraform. When a .js script in the Terraform directory changes, Terraform isn't aware of it. This means a change needs to occur somewhere else in a .tf file before the plan diff is generated. It likely isn't an unsolvable issue, but doesn't seem particularly worth cramming into Terraform when there are better options for Worker management like <a href="https://developers.cloudflare.com/workers/wrangler/">Wrangler</a>.</p>
    <div>
      <h2>Looking forward</h2>
      <a href="#looking-forward">
        
      </a>
    </div>
    <p>We're continuing to invest in the Cloudflare Terraforming experience both for our own use and for the benefit of our users. With the provider, we hope to offer a comfortable and scalable method of interacting with Cloudflare products. Hopefully this post has presented some useful suggestions to anyone interested in adopting Cloudflare-configuration-as-code. Don't hesitate to reach out on the <a href="https://github.com/cloudflare/terraform-provider-cloudflare">GitHub project</a> for troubleshooting, bug reports, or feature requests. For more in depth documentation on using Terraform to manage your Cloudflare account, <a href="https://developers.cloudflare.com/terraform/">read on here</a>. And if you don't have a Cloudflare account already, <a href="https://dash.cloudflare.com/sign-up/teams">click here</a> to get started.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Terraform]]></category>
            <guid isPermaLink="false">13LOJFjSYZuMAEcqDWYnk0</guid>
            <dc:creator>Michael Wolf</dc:creator>
            <dc:creator>David Haynes</dc:creator>
        </item>
        <item>
            <title><![CDATA[Automating Cloudflare Tunnel with Terraform]]></title>
            <link>https://blog.cloudflare.com/automating-cloudflare-tunnel-with-terraform/</link>
            <pubDate>Fri, 14 May 2021 13:00:00 GMT</pubDate>
            <description><![CDATA[ An overview on how to use Terraform to automatically deploy Named Tunnels into your infrastructure with Cloudflare. 
 ]]></description>
<content:encoded><![CDATA[ <p>Cloudflare Tunnel allows you to connect applications securely and quickly to Cloudflare’s edge. With Cloudflare Tunnel, teams can expose anything to the world, from internal subnets to containers, in a secure and fast way. Thanks to recent developments with our <a href="https://github.com/cloudflare/terraform-provider-cloudflare/issues/603">Terraform provider</a> and the advent of <a href="/argo-tunnels-that-live-forever/">Named Tunnels</a> it’s never been easier to spin up.</p>
    <div>
      <h3>Classic Tunnels to Named Tunnels</h3>
      <a href="#classic-tunnels-to-named-tunnels">
        
      </a>
    </div>
    <p>Historically, the biggest limitation to using Cloudflare Tunnel at scale was that the process to create a tunnel was manual. A user needed to download the binary for their OS, install/compile it, and then run the command <code>cloudflared tunnel login</code>. This would open a browser to their Cloudflare account so they could download a <code>cert.pem</code> file to authenticate their tunnel against Cloudflare’s edge with their account.</p><p>With the jump to Named Tunnels and a supported <a href="https://api.cloudflare.com/#argo-tunnel-create-argo-tunnel">API endpoint</a> Cloudflare users can automate this manual process. Named Tunnels also moved to allow a <code>.json</code> file for the origin side tunnel credentials instead of (or with) the <code>cert.pem</code> file. It has been a dream of mine since joining Cloudflare to write a Cloudflare Tunnel as code, along with my instance/application, and deploy it while I go walk my dog. Tooling should be easy to deploy and robust to use. That dream is now a reality and my dog could not be happier.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3gSV6JJ9YIcaT8Mb0Jd3M3/7903db595fad3dcf6d88732542562f99/image3-2.png" />
            
            </figure>
    <div>
      <h3>Okay, so what?</h3>
      <a href="#okay-so-what">
        
      </a>
    </div>
<p>The ability to dynamically generate a tunnel and tie it into back-end applications brings several benefits to users, including: putting more of their Cloudflare config in code, auto-scaling resources, dynamically spinning up resources such as bastion servers for secure logins, and saving time by avoiding manually generating and maintaining tunnels.</p><p>Tunnels also allow traffic to connect securely into Cloudflare’s edge for <i>only</i> the particular account they are affiliated with. In a world where IPs are increasingly ephemeral, tunnels allow for a modern approach to tying your application(s) into Cloudflare. Putting automation around tunnels allows teams to incorporate them into their existing <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">CI/CD (continuous integration/continuous delivery) pipelines</a>.</p><p>Most importantly, the spin up of an environment securely tied into Cloudflare can be achieved with some Terraform config and then by running <code>terraform apply</code>. I can then go take my pup on an adventure while my environment kicks off.</p>
    <div>
      <h3>Why Terraform?</h3>
      <a href="#why-terraform">
        
      </a>
    </div>
<p>While there are numerous Infrastructure as Code tools out there, Terraform has an actively maintained <a href="https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs">Cloudflare provider</a>. This is not to say that the same functionality cannot be re-created by making use of the API endpoint with a tool of your choice; the overarching concepts here should translate quite nicely. Using Terraform we can deploy Cloudflare resources, origin resources, and configure our server all with one tool. Let’s see what setting that up looks like.</p>
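<p>For completeness, a minimal provider setup for the provider version used in this post might look like the following; the token variable name is just an example.</p>
<pre><code>terraform {
  required_version = "&gt;= 0.13"
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~&gt; 2.0"
    }
  }
}

provider "cloudflare" {
  # Example variable; supply it via terraform.tfvars or the
  # CLOUDFLARE_API_TOKEN environment variable instead of hard-coding it.
  api_token = var.cloudflare_api_token
}</code></pre>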
    <div>
      <h3>Terraform Config</h3>
      <a href="#terraform-config">
        
      </a>
    </div>
<p>The technical bits of this will cover how to set up an automated Named Tunnel that will proxy traffic to a Google compute instance (GCP), which is my backend for this example. These concepts should be the same regardless of where you host your applications, from an on-prem location to a multi-cloud solution.</p><p>With <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/configuration/ingress">Cloudflare Tunnel’s Ingress Rules</a>, we can use a single tunnel to proxy traffic to a number of local services. In our case we will tie into a Docker container running <a href="https://httpbin.org/">HTTPbin</a> and the local SSH daemon. These endpoints are being used to represent a standard login protocol (such as SSH or RDP) and an example web application (HTTPbin). We can even take it a step further by applying a <a href="https://www.cloudflare.com/teams/access/">Zero Trust framework with Cloudflare Access</a> over the SSH hostname.</p><p>The version of Terraform used in this example is 0.15.0. Please refer to the provider documentation when using the Cloudflare Terraform provider. Tunnels are compatible with Terraform version 0.13+.</p>
            <pre><code>cdlg at cloudflare in ~/Documents/terraform/blog on master
$ terraform --version
Terraform v0.15.0
on darwin_amd64
+ provider registry.terraform.io/cloudflare/cloudflare v2.18.0
+ provider registry.terraform.io/hashicorp/google v3.56.0
+ provider registry.terraform.io/hashicorp/random v3.0.1
+ provider registry.terraform.io/hashicorp/template v2.2.0</code></pre>
            <p>Here is what the Terraform hierarchy looks like for this setup.</p>
            <pre><code>cdlg at cloudflare in ~/Documents/terraform/blog on master
$ tree .
.
├── README.md
├── access.tf
├── argo.tf
├── bootstrap.tf
├── instance.tf
├── server.tpl
├── terraform.tfstate
├── terraform.tfstate.backup
├── terraform.tfvars
├── terraform.tfvars.example
├── test.plan
└── versions.tf

0 directories, 12 files</code></pre>
<p>We can ignore the files <code>README.md</code> and <code>terraform.tfvars.example</code> for now. The files ending in <code>.tf</code> are where our Terraform configuration lives. Each file is dedicated to a specific purpose. For example, the <code>instance.tf</code> file only contains the scope of the GCP server resources used with this deployment and the affiliated DNS records pointing to the tunnel on it.</p>
            <pre><code>cdlg at cloudflare in ~/Documents/terraform/blog on master
$ cat instance.tf
# Instance information
data "google_compute_image" "image" {
  family  = "ubuntu-minimal-1804-lts"
  project = "ubuntu-os-cloud"
}

resource "google_compute_instance" "origin" {
  name         = "test"
  machine_type = var.machine_type
  zone         = var.zone
  tags         = ["no-ssh"]

  boot_disk {
    initialize_params {
      image = data.google_compute_image.image.self_link
    }
  }

  network_interface {
    network = "default"
    access_config {
      // Ephemeral IP
    }
  }
  // Optional config to make instance ephemeral
  scheduling {
    preemptible       = true
    automatic_restart = false
  }

  metadata_startup_script = templatefile("./server.tpl",
    {
      web_zone    = var.cloudflare_zone,
      account     = var.cloudflare_account_id,
      tunnel_id   = cloudflare_argo_tunnel.auto_tunnel.id,
      tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name,
      secret      = random_id.argo_secret.b64_std
    })
}



# DNS settings to CNAME to tunnel target
resource "cloudflare_record" "http_app" {
  zone_id = var.cloudflare_zone_id
  name    = var.cloudflare_zone
  value   = "${cloudflare_argo_tunnel.auto_tunnel.id}.cfargotunnel.com"
  type    = "CNAME"
  proxied = true
}

resource "cloudflare_record" "ssh_app" {
  zone_id = var.cloudflare_zone_id
  name    = "ssh"
  value   = "${cloudflare_argo_tunnel.auto_tunnel.id}.cfargotunnel.com"
  type    = "CNAME"
  proxied = true
}</code></pre>
<p>This is a personal preference — if desired, the entire Terraform config could be put into one file. One thing to note is the usage of variables throughout the files. For example, the value of <code>var.cloudflare_zone</code> is populated with the value provided to it from the <code>terraform.tfvars</code> file. This allows the configuration to be used as a template with other deployments. The only change that would be necessary is updating the relevant variables, such as in the <code>terraform.tfvars</code> file, when re-using the configuration.</p><p>When variables are supplied through a file such as <code>terraform.tfvars</code> (as opposed to environment variables), it is very important that the file is excluded from version control. With git this is accomplished with a <code>.gitignore</code> file. Before running this example, the <code>terraform.tfvars.example</code> file is copied to <code>terraform.tfvars</code> within the same directory and filled in as needed. The <code>.gitignore</code> file is told to ignore any file named <code>terraform.tfvars</code> to exempt the actual variables from version tracking.</p>
            <pre><code>cdlg at cloudflare in ~/Documents/terraform/blog on master
$ cat .gitignore
# Local .terraform directories
**/.terraform/*

# .tfstate files
*.tfstate
*.tfstate.*

# Crash log files
crash.log

# Ignore any .tfvars files that are generated automatically for each Terraform run. Most
# .tfvars files are managed as part of configuration and so should be included in
# version control.
#
# example.tfvars
terraform.tfvars

# Ignore override files as they are usually used to override resources locally and so
# are not checked in
override.tf
override.tf.json
*_override.tf
*_override.tf.json

# Include override files you do wish to add to version control using negated pattern
#
# !example_override.tf

# Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan
# example: *tfplan*
*tfplan*
*.plan*
*lock*</code></pre>
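<p>A filled-in <code>terraform.tfvars</code> covering the variables used throughout this example might look like the following (all values are placeholders):</p>
<pre><code># terraform.tfvars — never commit this file
cloudflare_zone       = "example.com"
cloudflare_zone_id    = "0123456789abcdef0123456789abcdef"
cloudflare_account_id = "0123456789abcdef0123456789abcdef"
cloudflare_email      = "user@example.com"
machine_type          = "f1-micro"
zone                  = "us-central1-a"</code></pre>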
            <p>Now to the fun stuff! To create a Cloudflare Tunnel in Terraform we only need to set the following resources in our Terraform config (this is what populates the <code>argo.tf</code> file).</p>
            <pre><code>resource "random_id" "argo_secret" {
  byte_length = 35
}

resource "cloudflare_argo_tunnel" "auto_tunnel" {
  account_id = var.cloudflare_account_id
  name       = "zero_trust_ssh_http"
  secret     = random_id.argo_secret.b64_std
}</code></pre>
<p>That’s it.</p><p>Technically you could get away with just the <code>cloudflare_argo_tunnel</code> resource, but using the <code>random_id</code> resource helps with not having to hard-code the secret for the tunnel. Instead, we can dynamically generate a secret for our tunnel each time we run Terraform.</p><p>Let’s break down what is happening in the <code>cloudflare_argo_tunnel</code> resource: we are passing the Cloudflare account ID (via the <code>var.cloudflare_account_id</code> variable), a name for our tunnel, and the dynamically generated secret for the tunnel, which is pulled from the <code>random_id</code> resource. Tunnels expect the secret to be in base64 standard encoding and at least 32 characters long.</p><p>Using Named Tunnels now gives customers a UUID (universally unique identifier) target to tie their applications to. These endpoints are routed off an internal domain to Cloudflare and can only be used with zones in your account, as mentioned earlier. This means that one tunnel can proxy multiple applications for various zones in your account, thanks to Cloudflare Tunnel Ingress Rules.</p><p>Now that we have a target for our services, we can create a tunnel/applications in the GCP instance. Terraform has a feature called a <a href="https://www.terraform.io/docs/language/functions/templatefile.html">templatefile function</a> that allows you to pass input variables as local variables (i.e. what the server can use to configure things) to an argument called <code>metadata_startup_script</code>.</p>
            <pre><code>resource "google_compute_instance" "origin" {
...
  metadata_startup_script = templatefile("./server.tpl", 
    {
      web_zone    = var.cloudflare_zone,
      account     = var.cloudflare_account_id,
      tunnel_id   = cloudflare_argo_tunnel.auto_tunnel.id,
      tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name,
      secret      = random_id.argo_secret.b64_std
    })
}</code></pre>
<p>This abbreviated section of the <code>google_compute_instance</code> resource shows a templatefile using 5 variables passed to the file located at <code>./server.tpl</code>. The file <code>server.tpl</code> is a bash script within the local directory that will configure the newly created GCP instance.</p><p>As indicated earlier, Named Tunnels can make use of a JSON credentials file instead of the historic use of a <code>cert.pem</code> file. By using a templatefile function pointing to a bash script (or cloud-init, etc…) we can dynamically generate the fields that populate both the <code>cert.json</code> file and the <code>config.yml</code> file used for Ingress Rules on the server/host. Then the bash script can install <code>cloudflared</code> as <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/run-tunnel/run-as-service">a system service</a>, so it is persistent (i.e., it can come back up after the machine is rebooted). Here is an example of this.</p>
            <pre><code>wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-amd64.deb
sudo dpkg -i cloudflared-stable-linux-amd64.deb
mkdir ~/.cloudflared
touch ~/.cloudflared/cert.json
touch ~/.cloudflared/config.yml
cat &gt; ~/.cloudflared/cert.json &lt;&lt; "EOF"
{
    "AccountTag"   : "${account}",
    "TunnelID"     : "${tunnel_id}",
    "TunnelName"   : "${tunnel_name}",
    "TunnelSecret" : "${secret}"
}
EOF
cat &gt; ~/.cloudflared/config.yml &lt;&lt; "EOF"
tunnel: ${tunnel_id}
credentials-file: /etc/cloudflared/cert.json
logfile: /var/log/cloudflared.log
loglevel: info

ingress:
  - hostname: ${web_zone}
    service: http://localhost:8080
  - hostname: ssh.${web_zone}
    service: ssh://localhost:22
  - hostname: "*"
    service: hello-world
EOF

sudo cloudflared service install
sudo cp -via ~/.cloudflared/cert.json /etc/cloudflared/

cd /tmp
sudo docker-compose up -d &amp;&amp; sudo service cloudflared start</code></pre>
<p>In this example, a <a href="https://tldp.org/LDP/abs/html/here-docs.html">heredoc</a> is used to fill in the variable fields for the <code>cert.json</code> file, and another heredoc fills in the <code>config.yml</code> (Ingress Rules) file with the variables we set in Terraform. Taking a quick look at the <code>cert.json</code> file, we can see that the Account ID is provided, which ties the tunnel to your specific account. The UUID of the tunnel is then passed in, along with the name that was assigned in the tunnel’s name argument. Lastly, the 35-character secret is passed to the tunnel. These are the parameters necessary to get our tunnel spun up against Cloudflare’s edge.</p><p>The <code>config.yml</code> file is where we set up the Ingress Rules for the Cloudflare Tunnel. The first few lines tell the tunnel which UUID to attach to, where the credentials are on the OS, and where the tunnel should write logs. The log level of <code>info</code> is good for general use, but for troubleshooting <code>debug</code> may be needed.</p><p>Next, the first <code>hostname:</code> line says that any requests bound for that particular hostname should be proxied to the service (HTTPbin) running on <code>localhost</code> port 8080. Following that, the SSH target is defined and will proxy requests to the local SSH port. The next hostname is interesting in that it contains a wildcard character. This allows other zones or hostnames on the account to point to the tunnel without being explicitly defined in the Ingress Rules. The service that responds to these requests is a built-in <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/configuration/ingress#supported-protocols">hello world service</a> the tunnel provides.</p><p>Pretty neat, but what else can we do? We can block all <a href="https://developers.cloudflare.com/cloudflare-one/faq/tunnel/#how-can-origin-servers-be-secured-when-using-tunnel">inbound networking</a> to the server and instead use Cloudflare Tunnel to proxy the connections to Cloudflare’s edge. To safeguard the SSH hostname, <a href="https://developers.cloudflare.com/cloudflare-one/applications/configure-apps/self-hosted-apps/">an Access policy</a> can be applied over it.</p>
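To make the heredoc mechanism concrete, here is a standalone sketch you can run anywhere. Everything in it is illustrative: the tunnel UUID and hostnames are placeholders, and the ingress rules only approximate the `config.yml` described above.

```shell
# Standalone heredoc sketch: an unquoted delimiter (EOF) lets the shell
# interpolate variables into the generated file, just as the Terraform
# template does for cert.json and config.yml.
TUNNEL_UUID="6ff42ae2-765d-4adf-8112-31c55c1551ef"  # placeholder UUID
ZONE="targetdomain.com"                             # placeholder zone

cat > /tmp/config.yml <<EOF
tunnel: ${TUNNEL_UUID}
credentials-file: /etc/cloudflared/cert.json
logfile: /var/log/cloudflared.log
loglevel: info

ingress:
  - hostname: httpbin.${ZONE}
    service: http://localhost:8080
  - hostname: ssh.${ZONE}
    service: ssh://localhost:22
  - hostname: "*"
    service: hello_world
EOF

# The variables are now baked into the file
grep -E "tunnel:|hostname:" /tmp/config.yml
```

Quoting the delimiter (`<<'EOF'`) would instead suppress interpolation and write the `${...}` text literally, which is why the Terraform templates that need variable substitution use the unquoted form.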
    <div>
      <h3>SSH and Zero Trust</h3>
      <a href="#ssh-and-zero-trust">
        
      </a>
    </div>
    <p>The Access team has several tutorials on how to <a href="https://developers.cloudflare.com/cloudflare-one/api-terraform/access-with-terraform">tie your policies into Terraform</a>. Using this as a guide we can create the Access related Terraform resources for the SSH endpoint.</p>
            <pre><code># Access policy to apply zero trust policy over SSH endpoint
resource "cloudflare_access_application" "ssh_app" {
  zone_id          = var.cloudflare_zone_id
  name             = "Access protection for ssh.${var.cloudflare_zone}"
  domain           = "ssh.${var.cloudflare_zone}"
  session_duration = "1h"
}

resource "cloudflare_access_policy" "ssh_policy" {
  application_id = cloudflare_access_application.ssh_app.id
  zone_id        = var.cloudflare_zone_id
  name           = "Example Policy for ssh.${var.cloudflare_zone}"
  precedence     = "1"
  decision       = "allow"

  include {
    email = [var.cloudflare_email]
  }
}</code></pre>
<p>In the above <code>cloudflare_access_application</code> resource, the <code>var.cloudflare_zone_id</code> variable pulls in the Cloudflare Zone’s ID from the value provided in the <code>terraform.tfvars</code> file, and the Zone Name is dynamically populated at runtime wherever <code>var.cloudflare_zone</code> appears. We also limit the scope of this access policy to <code>ssh.targetdomain.com</code> using the <code>domain</code> argument in the <code>cloudflare_access_application</code> resource.</p><p>In the <code>cloudflare_access_policy</code> resource, we take the information provided by the <code>cloudflare_access_application</code> resource called <code>ssh_app</code> and apply it as an active policy. The scope of who is allowed to log into this endpoint is the user’s email, as provided by the <code>var.cloudflare_email</code> variable.</p>
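For readers assembling this themselves, the variables referenced above would be supplied in a `terraform.tfvars` file along the lines of the following sketch. The values are placeholders, and the zone ID shown is not real; this assumes matching `variable` blocks are declared elsewhere in the configuration.

```hcl
# Hypothetical terraform.tfvars values; substitute your own zone, zone ID,
# and the email address that should pass the Access policy.
cloudflare_zone    = "targetdomain.com"
cloudflare_zone_id = "0da42c8d2132a9ddaf714f9e7c920711"
cloudflare_email   = "user@example.com"
```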
    <div>
      <h3>Terraform Spin up and SSH Connection</h3>
      <a href="#terraform-spin-up-and-ssh-connection">
        
      </a>
    </div>
    <p>Now to connect to this SSH endpoint. First we need to spin up our environment. This can be done with <code>terraform plan</code> and then <code>terraform apply</code>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7t9hVHmNwlWQASoIFnboLt/9efaaf728a3a432e73afff6c666544f6/image5-2.png" />
            
            </figure><p>On my workstation I have <code>cloudflared</code> installed and updated my SSH config to proxy traffic for this SSH endpoint through <code>cloudflared</code>.</p>
            <pre><code>cdlg at cloudflare in ~
$ cloudflared --version
cloudflared version 2021.4.0 (built 2021-04-07-2111 UTC)

cdlg at cloudflare in ~
$ grep -A2 'ssh.chrisdlg.com' ~/.ssh/config
Host ssh.chrisdlg.com
    IdentityFile /Users/cdlg/.ssh/google_compute_engine
    ProxyCommand /usr/local/bin/cloudflared access ssh --hostname %h</code></pre>
            <p>I can then SSH with my local user on the remote machine (cdlg) at the SSH hostname (ssh.chrisdlg.com). The instance of cloudflared running on my workstation will then proxy this request.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1BcyIgXbVE2wu6rRq6pZgJ/8690b09b9e60f28a7fac1377942b7bc4/image4-3.png" />
            
            </figure><p>This will open a new tab in my current browser and direct me to the Cloudflare Access application recently created with Terraform. Earlier in the Access resource we set the Cloudflare user as denoted by the <code>var.cloudflare_email</code> variable as the criteria for the Access policy. If the correct email address is provided the user will receive an email similar to the following.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Z4k965ZDENHc6gTFX0AZx/6914c9f98fdab2960ce873491966471d/image1-3.png" />
            
</figure><p>Following the link or providing the pin on the previously opened tab will complete the authentication. Hitting ‘approve’ tells Cloudflare Access that the user should be allowed through for the length of the <code>session_duration</code> argument in the <code>cloudflare_access_application</code> resource. Navigating back to the terminal, we can see that we are now on the server.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4zogkUR1oIkrVNTAXAMlnl/35dfd66180b79030974a4df03154978a/image6.png" />
            
            </figure><p>If we check the server’s authentication log we can see that connections from the tunnel are coming in via <code>localhost (127.0.0.1)</code>. This allows us to lock down external network access on the SSH port of the server.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1HQdBMj9vWK7HiKP6VXwcI/383ba96ae8d91fd2eec69bf43a41e895/image2-2.png" />
            
            </figure><p>The full config of this deployment can be viewed <a href="https://github.com/cloudflare/argo-tunnel-examples/tree/master/terraform-zerotrust-ssh-http-gcp">here</a>.</p><p>The roadmap for Cloudflare Tunnels is bright. Hopefully this walkthrough provided some quick context on what you can achieve with Cloudflare Tunnels and Cloudflare. Personally my dog is quite happy that I have more time to take him on walks. We’re very excited to see what you build with Cloudflare Tunnels and Cloudflare!</p> ]]></content:encoded>
            <category><![CDATA[Terraform]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <guid isPermaLink="false">62yhRjZfOrEwEMIQAf61Am</guid>
            <dc:creator>Chris De La Garza</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s Partnership with HashiCorp and Bootstrapping Terraform with Cf-Terraforming]]></title>
            <link>https://blog.cloudflare.com/cloudflares-partnership-with-hashicorp-and-bootstrapping-terraform-with-cf-terraforming/</link>
            <pubDate>Sat, 17 Apr 2021 13:00:00 GMT</pubDate>
            <description><![CDATA[ Learn more about Cloudflare and Hashicorp’s partnership and about our new release for our Terraform bootstrapping tool - cf-terraforming. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare and HashiCorp have been technology partners since 2018, and in that time Cloudflare’s integration with HashiCorp’s technology has deepened, especially with <a href="https://www.terraform.io/">Terraform</a>, HashiCorp’s infrastructure-as-code product. Today we are announcing a major update to our Terraform bootstrapping tool, <a href="https://github.com/cloudflare/cf-terraforming">cf-terraforming</a>. In this blog, I recap the history of our partnership, the <a href="https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs?utm_source=WEBSITE&amp;utm_medium=CLOUDFLARE&amp;utm_offer=ARTICLE_PAGE&amp;utm_content=BLOG">HashiCorp Terraform Verified Provider for Cloudflare</a>, and how getting started with Terraform for Cloudflare developers is easier than ever before with the new version of cf-terraforming.</p>
    <div>
      <h2>Cloudflare and HashiCorp</h2>
      <a href="#cloudflare-and-hashicorp">
        
      </a>
    </div>
    <p>Members of the open source community wrote and supported the first version of Cloudflare's Terraform provider. Eventually our customers began to bring up Terraform in conversations more often. Because of customer demand, we started supporting and developing the Terraform provider ourselves. You can read the initial v1.0 announcement for the provider <a href="/getting-started-with-terraform-and-cloudflare-part-1/">here</a>. Soon after, Cloudflare’s Terraform provider became ‘verified’ and we began working with HashiCorp to provide a high quality experience for developers.</p><p>HashiCorp Terraform allows developers to control their infrastructure-as-code through a standard configuration language, HashiCorp Configuration Language (HCL). It works across a myriad of different types of infrastructure including cloud service providers, containers, virtual machines, bare metal, etc. Terraform makes it easy for developers to follow best practices when interacting with SaaS, PaaS, and other service provider APIs that set up infrastructure. Like developers already do with software code, they can store infrastructure configuration as code in git, manage changes through code reviews, and track versions and commit history over time. Terraform also makes it easier to roll back changes if developers discover issues after a deployment.</p><blockquote><p><i>Our developers love the power of Cloudflare + Terraform for infrastructure provisioning. IT operations teams want a platform that provides complete visibility and control. IT teams want a platform that is easy to use, does not have a steep learning curve, and provides levers to control resources much better. 
Cloudflare + Terraform platform provides just that.</i><b>– Dmitry Zadorojnii, Chief Technology Officer, Autodoc GmbH</b></p></blockquote><p>Since the 1.0 release of Cloudflare’s Terraform provider, Cloudflare has continued to build out the capabilities exposed in the provider while HashiCorp has expanded its ecosystem by developing additional features like the <a href="https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs">Terraform Registry</a>. This helps developers find documentation on Cloudflare’s Terraform provider or any others. Terraform itself has also matured greatly–Terraform’s v0.12 release had many changes, outlined <a href="https://www.hashicorp.com/resources/a-2nd-tour-of-terraform-0-12">here</a>.</p><blockquote><p><i>“Leveraging the power of Terraform, you can codify your Cloudflare configuration. Codification enables version control and automation, increasing productivity and reducing human error. We are pleased to have Cloudflare as a technology partner and look forward to our ongoing collaboration.”</i><b>– Asvin Ramesh, Director, Technology Alliances, HashiCorp</b></p></blockquote>
    <div>
      <h3>Getting started with Cloudflare’s Terraform Provider</h3>
      <a href="#getting-started-with-cloudflares-terraform-provider">
        
      </a>
    </div>
    <p>Here are some great resources for developers looking to better understand how to get started using Terraform with Cloudflare:</p><ul><li><p><a href="https://developers.cloudflare.com/terraform/">Cloudflare’s Developer Docs for Terraform</a></p></li><li><p><a href="https://learn.hashicorp.com/tutorials/terraform/cloudflare-static-website?utm_source=WEBSITE&amp;utm_medium=CLOUDFLARE&amp;utm_offer=ARTICLE_PAGE&amp;utm_content=BLOG">HashiCorp Tutorial: Host a Static Website with S3 and Cloudflare</a></p></li><li><p><a href="https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs?utm_source=WEBSITE&amp;utm_medium=CLOUDFLARE&amp;utm_offer=ARTICLE_PAGE&amp;utm_content=BLOG">Cloudflare Terraform Provider Documentation</a></p></li><li><p><a href="https://github.com/cloudflare/terraform-provider-cloudflare">Github repo for Cloudflare’s Terraform Provider</a></p></li></ul>
    <div>
      <h3>Bootstrapping Terraform configuration with Cf-Terraforming</h3>
      <a href="#bootstrapping-terraform-configuration-with-cf-terraforming">
        
      </a>
    </div>
    <p>We released the first version of <a href="https://github.com/cloudflare/cf-terraforming">cf-terraforming</a> in early 2019 in this <a href="/introducing-cf-terraform/">blog post</a>. Since then we have learned a few lessons about building and maintaining such a tool. In this section I’ll recap why we built the tool in the first place, the lessons learned over the last two years, and what is new and improved in the version we are launching today.</p>
    <div>
      <h3>Why <code>terraform-ing</code> and why would a developer need this tool?</h3>
      <a href="#why-terraform-ing-and-why-would-a-developer-need-this-tool">
        
      </a>
    </div>
    <p>The name for the cf-terraforming library comes from another tool created by dtan4 on GitHub: <a href="https://github.com/dtan4/terraforming">https://github.com/dtan4/terraforming</a>. The original tool allowed users to generate <a href="https://www.terraform.io/docs/language/files/index.html">tfconfig</a> and <a href="https://www.terraform.io/docs/language/state/index.html">tfstate</a> files for existing AWS resources. This made it much easier for existing AWS customers to begin using Terraform. Terraform generally expects to be authoritative about the configuration it manages. Effectively, it expects that you only make changes to that config through Terraform and not anywhere else, like an <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">API</a> or a dashboard. If you were an existing customer of AWS (or Cloudflare) and already had everything configured via API or a UI, this posed a challenge: How do I quickly and correctly get all of my existing config into Terraform so that it can be the authoritative source of truth? For AWS resources, before dtan4’s terraforming you had to manually write the tfconfig for every object and then run import commands to generate the corresponding tfstate. For sizable deployments, this could be nigh impossible.</p><p>cf-terraforming served to solve the same problem for Cloudflare services. I had many conversations with customers who had been using Cloudflare for years and who were interested in migrating control of their Cloudflare configuration to Terraform. Cf-terraforming gave them a way to quickly convert all of their existing Cloudflare usage into tfconfig and tfstate.</p><blockquote><p><a href="/introducing-cf-terraform/"><i>cf-terraforming</i></a><i> was one of the enabling technologies that we used to bootstrap Cloudflare into our Terraform Git-ops workflow. We had thousands of records to port, import, and manage, and doing this by hand would have been an arduous and error-prone task. 
Using cf-terraforming to generate our initial set of resources allowed our engineers to submit Cloudflare changes, enabling our product engineers to be infrastructure operators.</i>– <b>Sean Chittenden, Infrastructure, DoorDash</b></p></blockquote>
    <div>
      <h3>What we have learned</h3>
      <a href="#what-we-have-learned">
        
      </a>
    </div>
    <p>After having cf-terraforming available for some time, we’ve learned quite a bit about the challenges in managing such a tool.</p>
    <div>
      <h4>Duplication of effort when resources change</h4>
      <a href="#duplication-of-effort-when-resources-change">
        
      </a>
    </div>
    <p>When Cloudflare releases new services or features today, that typically means new or incremental changes to Cloudflare’s APIs. This in turn means updates to our Terraform provider. Since our Terraform provider is a Go program, before we can update it we first have to update the <a href="https://github.com/cloudflare/cloudflare-go">cloudflare-go</a> library. Depending on the change, this can be a couple of lines in each repo or extensive changes to both. Once we launched cf-terraforming, we had a third library that needed synchronous changes alongside the provider and the Go library. Missing a change meant that if someone tried to use cf-terraforming, they might end up with incomplete config or state, which would not work.</p>
    <div>
      <h4>Impact of changes to Terraform for the tool</h4>
      <a href="#impact-of-changes-to-terraform-for-the-tool">
        
      </a>
    </div>
    <p>Not only did our own API changes create additional work, but changes to Terraform itself could mean changes for cf-terraforming. The Terraform 0.12 release was a massive update that required a lot of careful testing and coordination with our provider. It also meant changes in HCL and in provider interactions that cf-terraforming had to account for. Such a massive one-time hit was very difficult to absorb, and we struggled to ensure compatibility.</p>
    <div>
      <h4>TF State management</h4>
      <a href="#tf-state-management">
        
      </a>
    </div>
    <p>The ability to have cf-terraforming generate a tfstate file was both incredibly important and also experimental. In general, a developer never needs to concern themselves with what is in the tfstate file; they only need to know that it contains the actual state of those resources, so that references in the config can be resolved and managed correctly. We opened up that black box, which meant we were involved in state file implementation details that we ultimately shouldn’t have been.</p><p>Given these lessons, we looked at how we could update cf-terraforming to alleviate these problems and provide a better tool for both customers and ourselves. After some prototyping to validate our ideas, we came up with a new model that has been productized and is now available for customers.</p>
    <div>
      <h3>What’s new in cf-terraforming</h3>
      <a href="#whats-new-in-cf-terraforming">
        
      </a>
    </div>
    <p>Today we are launching a new version of cf-terraforming that improves upon our previous work. This new version makes it easier for us to support new resources or changes to existing resources and simplifies the workflow for developers looking to bootstrap their Cloudflare Terraform configuration.</p>
    <div>
      <h3>Simplified management</h3>
      <a href="#simplified-management">
        
      </a>
    </div>
    <p>Instead of hand-crafting the generation of both the tfconfig and tfstate for each of the 48 or so resources supported in the Terraform provider, we now leverage Terraform’s own capabilities to auto-generate more of what’s needed for similar resource types. HashiCorp has a great Go library called <a href="https://github.com/hashicorp/terraform-exec">terraform-exec</a> that provides powerful out-of-the-box capabilities we can take advantage of. Using terraform-exec we get access to <code>terraform providers schema -json</code>, which gives us the JSON schema of our provider. We use this to auto-generate the fields we need to populate from the API. In many cases the API response fields map one-to-one with the JSON schema, which allows us to automatically populate the tfconfig. In other cases some small tweaks are necessary, which still saves a lot of time when initially supporting a resource and lowers the burden for future changes. Through this method, if the Terraform provider changes for any reason, we can build new versions of cf-terraforming that fetch the new schema via terraform-exec instead of making many manual code changes to the config generation.</p><p>For tfstate, we simplify our approach by outputting the full set of <a href="https://www.terraform.io/docs/cli/import/index.html">terraform import</a> calls that would need to be run for those resources instead of attempting to generate the tfstate definition itself. This removes virtually any need for future library changes, since the import commands do not change if Cloudflare’s API or provider changes.</p>
    <div>
      <h3>How to use the new cf-terraforming</h3>
      <a href="#how-to-use-the-new-cf-terraforming">
        
      </a>
    </div>
    <p>With that, let’s look at the new cf-terraforming in action. For this walkthrough let’s assume we have an existing zone on Cloudflare with DNS records and firewall rules configured. We want to start managing this zone in Terraform, but we don’t want to have to define all of our configuration by hand.</p><p>Our goal is to have a ".tf" file with the DNS records resources and firewall rules along with filter resources AND for Terraform to be aware of the equivalent state for those resources. Our inputs are the zone we already have created in Cloudflare, and our tool is the cf-terraforming library. If you are following along at home, you will need <a href="https://learn.hashicorp.com/tutorials/terraform/install-cli">terraform installed</a> and at least <a href="https://golang.org/doc/install">Go v1.12.x installed</a>.</p>
    <div>
      <h4>Getting the environment setup</h4>
      <a href="#getting-the-environment-setup">
        
      </a>
    </div>
    <p>Before we can use cf-terraforming or the provider, we need an API token. I’ll briefly go through the steps here, but for a more in-depth walkthrough see the <a href="https://developers.cloudflare.com/api/tokens/create">API developer docs</a>. On the Cloudflare dashboard we generate an API token <a href="https://dash.cloudflare.com/profile/api-tokens">here</a> with the following setup:</p><p><b>Permissions</b></p><ul><li><p>Zone : DNS : Read</p></li><li><p>Zone : Firewall Services : Read</p></li></ul><p><b>Zone Resources:</b> garrettgalow.party (my zone, but this should be your own)</p><p><b>TTL:</b> Valid until 2021-03-30 00:00:00Z</p><p>Note: I set an expiration date on the token so that when I inevitably forget about it, the token will expire and reduce the risk of exposure. This is optional, but it’s a good practice when creating tokens you only need for a short period of time, especially if they have edit access.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/QhxR9kePwOQ8nKV4XmBBS/d30c2ea051f799f4fbb38467aa6da00c/cf-tf_token_border-1.png" />
            
            </figure><p>API Token summary from the Cloudflare Dashboard</p><p>Now we set the API Token we created as an environment variable so that both Terraform and cf-terraforming can access it for any commands (and so I don’t have to remove it from code examples).</p>
            <pre><code>$ export CLOUDFLARE_API_TOKEN=&lt;token_secret&gt;</code></pre>
            <p>Terraform requires us to have a folder to hold our Terraform configuration and state. For that we create a folder for our use case and create a <code>cloudflare.tf</code> config file with a provider definition for Cloudflare so Terraform knows we will be using the Cloudflare provider.</p>
            <pre><code>mkdir terraforming_test
cd terraforming_test
cat &gt; cloudflare.tf &lt;&lt;'EOF'
terraform {
    required_providers {
        cloudflare = {
            source = "cloudflare/cloudflare"
        }
    }
}

provider "cloudflare" {
# api_token  = ""  ## Commented out as we are using an environment var
}
EOF</code></pre>
            <p>Here is the content of our <code>cloudflare.tf</code> file if you would rather copy and paste it into your text editor of choice:</p>
            <pre><code>terraform {
    required_providers {
        cloudflare = {
            source = "cloudflare/cloudflare"
        }
    }
}

provider "cloudflare" {
# api_token  = ""  ## Commented out as we are using an environment var
}</code></pre>
            <p>We call <code>terraform init</code> to ensure Terraform is fully initialized and has the Cloudflare provider installed. At the time of writing this blog post, this is what <code>terraform -v</code> gives me for version info. We recommend that you use the latest versions of both Terraform and the Cloudflare provider.</p>
            <pre><code>$ terraform -v
Terraform v0.14.10
+ provider registry.terraform.io/cloudflare/cloudflare v2.19.2</code></pre>
            <p>And finally we install cf-terraforming with the following command:</p>
            <pre><code>$ GO111MODULE=on go get -u github.com/cloudflare/cf-terraforming/...</code></pre>
            <p>If you’re using <a href="https://brew.sh/">Homebrew</a> on MacOS, this can be simplified to:</p>
            <pre><code>$ brew tap cloudflare/cloudflare
$ brew install --cask cloudflare/cloudflare/cf-terraforming</code></pre>
            
    <div>
      <h4>Using cf-terraforming to generate Terraform configuration</h4>
      <a href="#using-cf-terraforming-to-generate-terraform-configuration">
        
      </a>
    </div>
    <p>We are now ready to start generating a Terraform config. To begin, we run cf-terraforming to generate the first blocks of config for the DNS record resources and append it to the <code>cloudflare.tf</code> file we previously created.</p>
            <pre><code>cf-terraforming generate --resource-type cloudflare_record --zone &lt;zone_id&gt; &gt;&gt; cloudflare.tf</code></pre>
            <p>Breaking this command down:</p><p><code>generate</code> is the command that will produce a valid HCL config of resources</p><p><code>--resource-type</code> specifies the Terraform resource name that we want to generate an HCL config for. You can only generate configuration for one resource at a time. In this example we are using <code>cloudflare_record</code></p><p><code>--zone</code> specifies the Cloudflare zone ID we wish to fetch all the DNS records for so cf-terraforming can create the appropriate API calls</p><p>Example:</p>
            <pre><code>$ cf-terraforming generate --resource-type cloudflare_record --zone 9c2f972575d986b99fa03c7bbfaab414 &gt;&gt; cloudflare.tf
$</code></pre>
            <p>On success, the command returns with no output to the console. If you want to see the output before adding it to the config file, run the command without <code>&gt;&gt; cloudflare.tf</code> and it will print to the console.</p><p>Here is partial output in my case when it is not appended to the config file:</p>
            <pre><code>$ cf-terraforming generate --resource-type cloudflare_record --zone 9c2f972575d986b99fa03c7bbfaab414
resource "cloudflare_record" "terraform_managed_resource_db185030f44e358e1c2162a9ecda7253" {
name = "api"
proxied = true
ttl = 1
type = "A"
value = "x.x.x.x"
zone_id = "9c2f972575d986b99fa03c7bbfaab414"
}
resource "cloudflare_record" "terraform_managed_resource_e908d014ebef5011d5981b3ba961a011" {
...</code></pre>
            <p>The output resources are given standardized names of “terraform_managed_resource_&lt;resource_id&gt;”. Because the resource id is included in the name, the object names between the config we just exported and the state we will import will always be consistent. This is necessary to ensure Terraform knows which config belongs to which state.</p><p>After generating the DNS record resources, we now do the same for both firewall rules and filters.</p>
            <pre><code>cf-terraforming generate --resource-type cloudflare_firewall_rule --zone &lt;zone_id&gt; &gt;&gt; cloudflare.tf
cf-terraforming generate --resource-type cloudflare_filter --zone &lt;zone_id&gt; &gt;&gt; cloudflare.tf</code></pre>
            <p>Example:</p>
            <pre><code>$ cf-terraforming generate --resource-type cloudflare_firewall_rule --zone 9c2f972575d986b99fa03c7bbfaab414 &gt;&gt; cloudflare.tf
$ cf-terraforming generate --resource-type cloudflare_filter --zone 9c2f972575d986b99fa03c7bbfaab414 &gt;&gt; cloudflare.tf
$</code></pre>
            
    <div>
      <h4>Using cf-terraforming to import Terraform state</h4>
      <a href="#using-cf-terraforming-to-import-terraform-state">
        
      </a>
    </div>
    <p>Before we can ask Terraform to verify the config, we need to import the state so that Terraform does not attempt to create new objects but instead reuses the existing objects we already have in Cloudflare.</p><p>Similar to what we did with the generate command, we use the import command to generate <code>terraform import</code> commands.</p>
            <pre><code>cf-terraforming import --resource-type cloudflare_record --zone &lt;zone_id&gt;</code></pre>
            <p>Breaking this command down:</p><p><code>import</code> is the command that will produce valid <code>terraform import</code> commands that we can then run</p><p><code>--resource-type</code> (same as the generate command) specifies the Terraform resource name that we want to create import commands for. You can only use one resource at a time. In this example we are using <code>cloudflare_record</code></p><p><code>--zone</code> (same as the generate command) specifies the Cloudflare zone ID we wish to fetch all the DNS records for so cf-terraforming can populate the commands with the appropriate API calls</p><p>And an example with output:</p>
            <pre><code>$ cf-terraforming import --resource-type cloudflare_record --zone 9c2f972575d986b99fa03c7bbfaab414
terraform import cloudflare_record.terraform_managed_resource_db185030f44e358e1c2162a9ecda7253 9c2f972575d986b99fa03c7bbfaab414/db185030f44e358e1c2162a9ecda7253
terraform import cloudflare_record.terraform_managed_resource_e908d014ebef5011d5981b3ba961a011 9c2f972575d986b99fa03c7bbfaab414/e908d014ebef5011d5981b3ba961a011
terraform import cloudflare_record.terraform_managed_resource_3f62e6950a5e0889a14cf5b913e87699 9c2f972575d986b99fa03c7bbfaab414/3f62e6950a5e0889a14cf5b913e87699
terraform import cloudflare_record.terraform_managed_resource_47581f47852ad2ba61df90b15933903d 9c2f972575d986b99fa03c7bbfaab414/47581f47852ad2ba61df90b15933903d
$</code></pre>
            <p>The output of this will be ready-to-use <code>terraform import</code> commands. Running each generated <code>terraform import</code> command leverages existing Cloudflare Terraform provider functionality to import the resource state into Terraform’s <code>terraform.tfstate</code> file. This removes the tedium of pulling all the appropriate resource IDs from Cloudflare’s API and then formatting these commands one by one. The order of operations, config first and then state, is important, as Terraform expects configuration for these resources to exist in the <code>.tf</code> file before the state is imported.</p><p>Note: Be careful when you actually import these resources, though, as from that point on any subsequent Terraform action like plan or apply will expect these resources to be there. Removing the state is possible but requires manually editing the <code>terraform.tfstate</code> file; Terraform does keep a local backup in case you make a mistake.</p><p>Now we actually run these <code>terraform import</code> commands to import the state. Below is what that looks like for a single resource.</p>
            <pre><code>$ terraform import cloudflare_record.terraform_managed_resource_47581f47852ad2ba61df90b15933903d 9c2f972575d986b99fa03c7bbfaab414/47581f47852ad2ba61df90b15933903d
cloudflare_record.terraform_managed_resource_47581f47852ad2ba61df90b15933903d: Importing from ID "9c2f972575d986b99fa03c7bbfaab414/47581f47852ad2ba61df90b15933903d"...
cloudflare_record.terraform_managed_resource_47581f47852ad2ba61df90b15933903d: Import prepared!
Prepared cloudflare_record for import
cloudflare_record.terraform_managed_resource_47581f47852ad2ba61df90b15933903d: Refreshing state... [id=47581f47852ad2ba61df90b15933903d]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.</code></pre>
            <p>With <code>cloudflare_record</code> imported, now we do the same for the firewall_rules and filters.</p>
            <pre><code>cf-terraforming import --resource-type cloudflare_firewall_rule --zone &lt;zone_id&gt;
cf-terraforming import --resource-type cloudflare_filter --zone &lt;zone_id&gt;</code></pre>
            <p>Shown with output:</p>
            <pre><code>$ cf-terraforming import --resource-type cloudflare_firewall_rule --zone 9c2f972575d986b99fa03c7bbfaab414
terraform import cloudflare_firewall_rule.terraform_managed_resource_0de909f3229341a2b8214737903f2caf 9c2f972575d986b99fa03c7bbfaab414/0de909f3229341a2b8214737903f2caf
terraform import cloudflare_firewall_rule.terraform_managed_resource_0c722eb85e1c47dcac83b5824bad4a7c 9c2f972575d986b99fa03c7bbfaab414/0c722eb85e1c47dcac83b5824bad4a7c
$ cf-terraforming import --resource-type cloudflare_filter --zone 9c2f972575d986b99fa03c7bbfaab414
terraform import cloudflare_filter.terraform_managed_resource_ee048570bb874972bbb6557f7529e094 9c2f972575d986b99fa03c7bbfaab414/ee048570bb874972bbb6557f7529e094
terraform import cloudflare_filter.terraform_managed_resource_1bb6cd50e2534a64a9ec698fd841ffc5 9c2f972575d986b99fa03c7bbfaab414/1bb6cd50e2534a64a9ec698fd841ffc5
$</code></pre>
            <p>As with <code>cloudflare_record</code>, we run these <code>terraform import</code> commands to ensure all the state is successfully imported.</p>
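Because cf-terraforming prints ready-to-run commands, you can also capture them in a file, review them, and execute them in bulk rather than pasting each one. A minimal sketch of that workflow, with <code>echo</code> standing in for cf-terraforming so the example is self-contained:

```shell
# Sketch of the bulk-import workflow; echo stands in for cf-terraforming
# here so the example runs anywhere. In practice you would redirect the
# output of `cf-terraforming import ...` into import_commands.sh instead.
echo 'echo "terraform import cloudflare_record.example <zone_id>/<record_id>"' > import_commands.sh

# Review the generated commands, then execute them in one pass.
sh import_commands.sh > import_log.txt
cat import_log.txt
```

Reviewing the file before executing it gives you a chance to drop any resources you do not want Terraform to manage.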
    <div>
      <h4>Verifying everything is correct</h4>
      <a href="#verifying-everything-is-correct">
        
      </a>
    </div>
    <p>Now that we have both the configuration and state in place, we run <code>terraform plan</code> so Terraform can verify that everything is in place. If all goes well, you will be greeted with the following “nothing to do” message:</p>
            <pre><code>No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.</code></pre>
            <p>You can now begin managing these resources in Terraform. To bring more resources under Terraform management, follow the same steps for each resource type. You can find which resources are supported in the <a href="https://github.com/cloudflare/cf-terraforming">README</a>. We will add support for additional resources over time, but if there are specific ones you are looking for, please create a GitHub issue or upvote any existing ones.</p>
    <div>
      <h3>It has never been easier to get started with Cloudflare + Terraform</h3>
      <a href="#it-has-never-been-easier-to-get-started-with-cloudflare-terraform">
        
      </a>
    </div>
    <p>Whether you are an existing Cloudflare customer and have been curious about Terraform or you are looking to expand your infrastructure-as-code to include Cloudflare’s services, you have everything you need to get building with Terraform, the Cloudflare provider, and cf-terraforming. For questions, comments, or feature requests for either the <a href="https://github.com/cloudflare/terraform-provider-cloudflare">provider</a> or <a href="https://github.com/cloudflare/cf-terraforming">cf-terraforming</a>, see the respective GitHub repos.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[HashiCorp]]></category>
            <category><![CDATA[Terraform]]></category>
            <guid isPermaLink="false">te8kR7mJGYwAQ5oZ6HoyE</guid>
            <dc:creator>Garrett Galow</dc:creator>
        </item>
        <item>
            <title><![CDATA[Stream Firewall Events directly to your SIEM]]></title>
            <link>https://blog.cloudflare.com/stream-firewall-events-directly-to-your-siem/</link>
            <pubDate>Fri, 24 Apr 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ As of today, customers using Cloudflare Logs can create Logpush jobs that send only Firewall Events. These events arrive much faster than our existing HTTP requests logs: they are typically delivered to your logging platform within 60 seconds of sending the response to the client. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>The highest trafficked sites using Cloudflare receive billions of requests per day. But only about 5% of those requests typically trigger security rules, whether they be “managed” rules such as our <a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/">WAF</a> and DDoS protections, or custom rules such as those configured by customers using our powerful Firewall Rules and Rate Limiting engines.</p><p>When enforcement is taken on a request that interrupts the flow of malicious traffic, a <a href="/updates-to-firewall-analytics/#event-based-logging">Firewall Event is logged with detail</a> about the request including which rule triggered us to take action and what action we took, e.g., challenged or blocked outright.</p><p>Previously, if you wanted to ingest all of these events into your <a href="https://www.cloudflare.com/learning/security/what-is-siem/">SIEM</a> or logging platform, you had to take the whole firehose of requests—good and bad—and then filter them client side. If you’re paying by the log line or scaling your own storage solution, this cost can add up quickly. And if you have a security team monitoring logs, they’re being sent a lot of extraneous data to sift through before determining what needs their attention most.</p><p>As of today, customers using Cloudflare Logs can create <a href="https://developers.cloudflare.com/logs/about">Logpush jobs</a> that send only Firewall Events. These events arrive much faster than our existing HTTP requests logs: they are typically delivered to your logging platform within 60 seconds of sending the response to the client.</p><p>In this post we’ll show you how to use Terraform and Sumo Logic, an <a href="https://developers.cloudflare.com/logs/analytics-integrations/">analytics integration partner</a>, to get this logging set up live in just a few minutes.</p>
    <div>
      <h2>Process overview</h2>
      <a href="#process-overview">
        
      </a>
    </div>
    <p>The steps below take you through the process of configuring Cloudflare Logs to push security events directly to your logging platform. For purposes of this tutorial, we’ve chosen Sumo Logic as our log destination, but you’re free to use any of our <a href="https://developers.cloudflare.com/logs/analytics-integrations/">analytics partners</a>, or any logging platform that can read from cloud storage such as <a href="https://developers.cloudflare.com/logs/logpush/aws-s3/">AWS S3</a>, <a href="https://developers.cloudflare.com/logs/logpush/azure/">Azure Blob Storage</a>, or <a href="https://developers.cloudflare.com/logs/logpush/google-cloud-storage/">Google Cloud Storage</a>.</p><p>To configure Sumo Logic and Cloudflare we make use of Terraform, a popular Infrastructure-as-Code tool from HashiCorp. If you’re new to Terraform, see <a href="/getting-started-with-terraform-and-cloudflare-part-1/">Getting started with Terraform and Cloudflare</a> for a guided walkthrough with best practice recommendations such as how to version and store your configuration in git for easy rollback.</p><p>Once the infrastructure is in place, you’ll send a malicious request towards your site to trigger the Cloudflare Web Application Firewall, and watch as the Firewall Events generated by that request show up in Sumo Logic about a minute later.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7HQ0Ky3j85kh2abquFBJf7/e0ee2a5d4d613bf597c4298e4be73467/image2-18.png" />
            
            </figure>
    <div>
      <h2>Prerequisites</h2>
      <a href="#prerequisites">
        
      </a>
    </div>
    
    <div>
      <h3>Install Terraform and Go</h3>
      <a href="#install-terraform-and-go">
        
      </a>
    </div>
    <p>First you’ll need to install Terraform. See our Developer Docs for <a href="https://developers.cloudflare.com/terraform/installing/">instructions</a>.</p><p>Next you’ll need to install Go. The easiest way to do so on macOS is with <a href="https://brew.sh/">Homebrew</a>:</p>
            <pre><code>$ brew install golang
$ export GOPATH=$HOME/go
$ mkdir $GOPATH</code></pre>
            <p><a href="https://golang.org/">Go</a> is required because the Sumo Logic Terraform Provider is a "community" plugin, which means it has to be built and installed manually rather than automatically through the Terraform Registry, as will happen later for the Cloudflare Terraform Provider.</p>
    <div>
      <h3>Install the Sumo Logic Terraform Provider Module</h3>
      <a href="#install-the-sumo-logic-terraform-provider-module">
        
      </a>
    </div>
    <p>The official installation instructions for installing the Sumo Logic provider can be found on their <a href="https://github.com/SumoLogic/sumologic-terraform-provider">GitHub Project page</a>, but here are my notes:</p>
            <pre><code>$ mkdir -p $GOPATH/src/github.com/terraform-providers &amp;&amp; cd $_
$ git clone https://github.com/SumoLogic/sumologic-terraform-provider.git
$ cd sumologic-terraform-provider
$ make install</code></pre>
            
    <div>
      <h2>Prepare Sumo Logic to receive Cloudflare Logs</h2>
      <a href="#prepare-sumo-logic-to-receive-cloudflare-logs">
        
      </a>
    </div>
    
    <div>
      <h3>Install Sumo Logic livetail utility</h3>
      <a href="#install-sumo-logic-livetail-utility">
        
      </a>
    </div>
    <p>While not strictly necessary, the <a href="https://help.sumologic.com/05Search/Live-Tail/Live-Tail-CLI">livetail tool</a> from Sumo Logic makes it easy to grab the Cloudflare Logs challenge token we’ll need in a minute, and also to view the fruits of your labor: seeing a Firewall Event appear in Sumo Logic shortly after the malicious request hit the edge.</p><p>On macOS:</p>
            <pre><code>$ brew cask install livetail
...
==&gt; Verifying SHA-256 checksum for Cask 'livetail'.
==&gt; Installing Cask livetail
==&gt; Linking Binary 'livetail' to '/usr/local/bin/livetail'.
🍺  livetail was successfully installed!</code></pre>
            
    <div>
      <h3>Generate Sumo Logic Access Key</h3>
      <a href="#generate-sumo-logic-access-key">
        
      </a>
    </div>
    <p>This step assumes you already have a Sumo Logic account. If not, you can sign up for a free trial <a href="https://www.sumologic.com/sign-up/">here</a>.</p><ol><li><p>Browse to <code>https://service.$ENV.sumologic.com/ui/#/security/access-keys</code> where <code>$ENV</code> should be replaced by <a href="http://help.sumologic.com/Send_Data/Collector_Management_API/Sumo_Logic_Endpoints">the environment</a> you chose on signup.</p></li><li><p>Click the "+ Add Access Key" button, give it a name, and click "Create Key"</p></li><li><p>In the next step you'll save the Access ID and Access Key that are provided as environment variables, so don’t close this modal until you do.</p></li></ol>
    <div>
      <h3>Generate Cloudflare Scoped API Token</h3>
      <a href="#generate-cloudflare-scoped-api-token">
        
      </a>
    </div>
    <ol><li><p>Log in to the <a href="https://dash.cloudflare.com/">Cloudflare Dashboard</a></p></li><li><p>Click on the profile icon in the top-right corner and then select "My Profile"</p></li><li><p>Select "API Tokens" from the nav bar and click "Create Token"</p></li><li><p>Click the "Get started" button next to the "Create Custom Token" label</p></li></ol><p>On the Create Custom Token screen:</p><ol><li><p>Provide a token name, e.g., "Logpush - Firewall Events"</p></li><li><p>Under Permissions, change Account to Zone, and then select Logs and Edit, respectively, in the two drop-downs to the right</p></li><li><p>Optionally, change Zone Resources and IP Address Filtering to restrict access for this token to specific zones or from specific IPs</p></li></ol><p>Click "Continue to summary" and then "Create token" on the next screen. Save the token somewhere secure, e.g., your password manager, as it'll be needed in just a minute.</p>
    <div>
      <h3>Set environment variables</h3>
      <a href="#set-environment-variables">
        
      </a>
    </div>
    <p>Rather than add sensitive credentials to source files (that may get submitted to your source code repository), we'll set environment variables and have the Terraform modules read from them.</p>
            <pre><code>$ export CLOUDFLARE_API_TOKEN="&lt;your scoped cloudflare API token&gt;"
$ export CF_ZONE_ID="&lt;tag of zone you wish to send logs for&gt;"</code></pre>
            <p>We'll also need your Sumo Logic environment, Access ID, and Access Key:</p>
            <pre><code>$ export SUMOLOGIC_ENVIRONMENT="eu"
$ export SUMOLOGIC_ACCESSID="&lt;access id from previous step&gt;"
$ export SUMOLOGIC_ACCESSKEY="&lt;access key from previous step&gt;"</code></pre>
            
    <div>
      <h3>Create the Sumo Logic Collector and HTTP Source</h3>
      <a href="#create-the-sumo-logic-collector-and-http-source">
        
      </a>
    </div>
    <p>We'll create a directory to store our Terraform project in and build it up as we go:</p>
            <pre><code>$ mkdir -p ~/src/fwevents &amp;&amp; cd $_</code></pre>
            <p>Then we'll create the Collector and HTTP source that will store and provide Firewall Events logs to Sumo Logic:</p>
            <pre><code>$ cat &lt;&lt;'EOF' | tee main.tf
##################
### SUMO LOGIC ###
##################
provider "sumologic" {
    environment = var.sumo_environment
    access_id = var.sumo_access_id
}

resource "sumologic_collector" "collector" {
    name = "CloudflareLogCollector"
    timezone = "Etc/UTC"
}

resource "sumologic_http_source" "http_source" {
    name = "firewall-events-source"
    collector_id = sumologic_collector.collector.id
    timezone = "Etc/UTC"
}
EOF</code></pre>
            <p>Then we'll create a variables file so Terraform has credentials to communicate with Sumo Logic:</p>
            <pre><code>$ cat &lt;&lt;EOF | tee variables.tf
##################
### SUMO LOGIC ###
##################
variable "sumo_environment" {
    default = "$SUMOLOGIC_ENVIRONMENT"
}

variable "sumo_access_id" {
    default = "$SUMOLOGIC_ACCESSID"
}
EOF</code></pre>
            <p>With our Sumo Logic configuration set, we’ll initialize Terraform with <code>terraform init</code> and then preview what changes Terraform is going to make by running <code>terraform plan</code>:</p>
            <pre><code>$ terraform init

Initializing the backend...

Initializing provider plugins...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.</code></pre>
            
            <pre><code>$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # sumologic_collector.collector will be created
  + resource "sumologic_collector" "collector" {
      + destroy        = true
      + id             = (known after apply)
      + lookup_by_name = false
      + name           = "CloudflareLogCollector"
      + timezone       = "Etc/UTC"
    }

  # sumologic_http_source.http_source will be created
  + resource "sumologic_http_source" "http_source" {
      + automatic_date_parsing       = true
      + collector_id                 = (known after apply)
      + cutoff_timestamp             = 0
      + destroy                      = true
      + force_timezone               = false
      + id                           = (known after apply)
      + lookup_by_name               = false
      + message_per_request          = false
      + multiline_processing_enabled = true
      + name                         = "firewall-events-source"
      + timezone                     = "Etc/UTC"
      + url                          = (known after apply)
      + use_autoline_matching        = true
    }

Plan: 2 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.</code></pre>
            <p>Assuming everything looks good, let’s execute the plan:</p>
            <pre><code>$ terraform apply -auto-approve
sumologic_collector.collector: Creating...
sumologic_collector.collector: Creation complete after 3s [id=108448215]
sumologic_http_source.http_source: Creating...
sumologic_http_source.http_source: Creation complete after 0s [id=150364538]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.</code></pre>
            <p>Success! At this point you could log into the Sumo Logic web interface and confirm that your Collector and HTTP Source were created successfully.</p>
    <div>
      <h2>Create a Cloudflare Logpush Job</h2>
      <a href="#create-a-cloudflare-logpush-job">
        
      </a>
    </div>
    <p>Before we start sending logs to your collector, you need to demonstrate the ability to read from it. This validation step prevents accidental (or intentional) misconfigurations from overrunning your logs.</p>
    <div>
      <h3>Tail the Sumo Logic Collector and await the challenge token</h3>
      <a href="#tail-the-sumo-logic-collector-and-await-the-challenge-token">
        
      </a>
    </div>
    <p>In a new shell window—you should keep the current one with your environment variables set for use with Terraform—we'll start tailing Sumo Logic for events sent from the <code>firewall-events-source</code> HTTP source.</p><p>The first time that you run livetail you'll need to specify your <a href="https://help.sumologic.com/APIs/General-API-Information/Sumo-Logic-Endpoints-and-Firewall-Security">Sumo Logic Environment</a>, Access ID and Access Key, but these values will be stored in the working directory for subsequent runs:</p>
            <pre><code>$ livetail _source=firewall-events-source
### Welcome to Sumo Logic Live Tail Command Line Interface ###
1 US1
2 US2
3 EU
4 AU
5 DE
6 FED
7 JP
8 CA
Please select Sumo Logic environment: 
See http://help.sumologic.com/Send_Data/Collector_Management_API/Sumo_Logic_Endpoints to choose the correct environment. 3
### Authenticating ###
Please enter your Access ID: &lt;access id&gt;
Please enter your Access Key &lt;access key&gt;
### Starting Live Tail session ###</code></pre>
            
    <div>
      <h3>Request and receive challenge token</h3>
      <a href="#request-and-receive-challenge-token">
        
      </a>
    </div>
    <p>Before requesting a challenge token, we need to figure out where Cloudflare should send logs.</p><p>We do this by asking Terraform for the receiver URL of the recently created HTTP source. Note that we modify the URL returned slightly as Cloudflare Logs expects <code>sumo://</code> rather than <code>https://</code>.</p>
            <pre><code>$ export SUMO_RECEIVER_URL=$(terraform state show sumologic_http_source.http_source | grep url | awk '{print $3}' | sed -e 's/https:/sumo:/; s/"//g')

$ echo $SUMO_RECEIVER_URL
sumo://endpoint1.collection.eu.sumologic.com/receiver/v1/http/&lt;redacted&gt;</code></pre>
            <p>With URL in hand, we can now request the token.</p>
            <pre><code>$ curl -sXPOST -H "Content-Type: application/json" -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" -d '{"destination_conf":"'''$SUMO_RECEIVER_URL'''"}' https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/logpush/ownership

{"errors":[],"messages":[],"result":{"filename":"ownership-challenge-bb2912e0.txt","message":"","valid":true},"success":true}</code></pre>
            <p>Back in the other window where your livetail is running you should see something like this:</p>
            <pre><code>{"content":"eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4R0NNIiwidHlwIjoiSldUIn0..WQhkW_EfxVy8p0BQ.oO6YEvfYFMHCTEd6D8MbmyjJqcrASDLRvHFTbZ5yUTMqBf1oniPNzo9Mn3ZzgTdayKg_jk0Gg-mBpdeqNI8LJFtUzzgTGU-aN1-haQlzmHVksEQdqawX7EZu2yiePT5QVk8RUsMRgloa76WANQbKghx1yivTZ3TGj8WquZELgnsiiQSvHqdFjAsiUJ0g73L962rDMJPG91cHuDqgfXWwSUqPsjVk88pmvGEEH4AMdKIol0EOc-7JIAWFBhcqmnv0uAXVOH5uXHHe_YNZ8PNLfYZXkw1xQlVDwH52wRC93ohIxg.pHAeaOGC8ALwLOXqxpXJgQ","filename":"ownership-challenge-bb2912e0.txt"}</code></pre>
            <p>Copy the content value from above into an environment variable, as you'll need it in a minute to create the job:</p>
            <pre><code>$ export LOGPUSH_CHALLENGE_TOKEN="&lt;content value&gt;"</code></pre>
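Rather than copying the value by hand, you can cut it out of the JSON line with the same sed approach used above for the receiver URL. <code>SAMPLE</code> below is a stand-in for the line livetail printed:

```shell
# Sketch: extract the "content" field from the livetail JSON line.
# SAMPLE is a placeholder; in practice, paste in the real livetail output.
SAMPLE='{"content":"example-token","filename":"ownership-challenge-bb2912e0.txt"}'
export LOGPUSH_CHALLENGE_TOKEN=$(echo "$SAMPLE" | sed -e 's/.*"content":"//' -e 's/".*//')
echo "$LOGPUSH_CHALLENGE_TOKEN"
```

This works because the token itself is a JWT-style string that contains no double quotes.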
            
    <div>
      <h3>Create the Logpush job using the challenge token</h3>
      <a href="#create-the-logpush-job-using-the-challenge-token">
        
      </a>
    </div>
    <p>With challenge token in hand, we'll use Terraform to create the job.</p><p>First you’ll want to choose the log fields that should be sent to Sumo Logic. You can enumerate the list by querying the dataset:</p>
            <pre><code>$ curl -sXGET -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/logpush/datasets/firewall_events/fields | jq .
{
  "errors": [],
  "messages": [],
  "result": {
    "Action": "string; the code of the first-class action the Cloudflare Firewall took on this request",
    "ClientASN": "int; the ASN number of the visitor",
    "ClientASNDescription": "string; the ASN of the visitor as string",
    "ClientCountryName": "string; country from which request originated",
    "ClientIP": "string; the visitor's IP address (IPv4 or IPv6)",
    "ClientIPClass": "string; the classification of the visitor's IP address, possible values are: unknown | clean | badHost | searchEngine | whitelist | greylist | monitoringService | securityScanner | noRecord | scan | backupService | mobilePlatform | tor",
    "ClientRefererHost": "string; the referer host",
    "ClientRefererPath": "string; the referer path requested by visitor",
    "ClientRefererQuery": "string; the referer query-string was requested by the visitor",
    "ClientRefererScheme": "string; the referer url scheme requested by the visitor",
    "ClientRequestHTTPHost": "string; the HTTP hostname requested by the visitor",
    "ClientRequestHTTPMethodName": "string; the HTTP method used by the visitor",
    "ClientRequestHTTPProtocol": "string; the version of HTTP protocol requested by the visitor",
    "ClientRequestPath": "string; the path requested by visitor",
    "ClientRequestQuery": "string; the query-string was requested by the visitor",
    "ClientRequestScheme": "string; the url scheme requested by the visitor",
    "Datetime": "int or string; the date and time the event occurred at the edge",
    "EdgeColoName": "string; the airport code of the Cloudflare datacenter that served this request",
    "EdgeResponseStatus": "int; HTTP response status code returned to browser",
    "Kind": "string; the kind of event, currently only possible values are: firewall",
    "MatchIndex": "int; rules match index in the chain",
    "Metadata": "object; additional product-specific information. Metadata is organized in key:value pairs. Key and Value formats can vary by Cloudflare security product and can change over time",
    "OriginResponseStatus": "int; HTTP origin response status code returned to browser",
    "OriginatorRayName": "string; the RayId of the request that issued the challenge/jschallenge",
    "RayName": "string; the RayId of the request",
    "RuleId": "string; the Cloudflare security product-specific RuleId triggered by this request",
    "Source": "string; the Cloudflare security product triggered by this request",
    "UserAgent": "string; visitor's user-agent string"
  },
  "success": true
}</code></pre>
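To see just the field names, which is handy when assembling the fields list for the job, you can ask jq for the keys of the <code>result</code> object. A sketch, with the sample response trimmed to three fields for brevity:

```shell
# Sketch: list only the field names from the dataset response.
# SAMPLE is a trimmed stand-in for the full API response shown above.
SAMPLE='{"errors":[],"messages":[],"result":{"Action":"...","ClientIP":"...","RayName":"..."},"success":true}'
echo "$SAMPLE" | jq -r '.result | keys[]'
```

Pipe the real curl command into the same jq filter to get the complete list.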
            <p>Then you’ll append your Cloudflare configuration to the <code>main.tf</code> file:</p>
            <pre><code>$ cat &lt;&lt;EOF | tee -a main.tf

##################
### CLOUDFLARE ###
##################
provider "cloudflare" {
  version = "~&gt; 2.0"
}

resource "cloudflare_logpush_job" "firewall_events_job" {
  name = "fwevents-logpush-job"
  zone_id = var.cf_zone_id
  enabled = true
  dataset = "firewall_events"
  logpull_options = "fields=RayName,Source,RuleId,Action,EdgeResponseStatus,Datetime,EdgeColoName,ClientIP,ClientCountryName,ClientASNDescription,UserAgent,ClientRequestHTTPMethodName,ClientRequestHTTPHost,ClientRequestPath&amp;timestamps=rfc3339"
  destination_conf = replace(sumologic_http_source.http_source.url,"https:","sumo:")
  ownership_challenge = "$LOGPUSH_CHALLENGE_TOKEN"
}
EOF</code></pre>
            <p>And add to the <code>variables.tf</code> file:</p>
            <pre><code>$ cat &lt;&lt;EOF | tee -a variables.tf

##################
### CLOUDFLARE ###
##################
variable "cf_zone_id" {
  default = "$CF_ZONE_ID"
}</code></pre>
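The <code>fields=</code> value in <code>logpull_options</code> is a single comma-separated string, which is easy to typo. One way to keep it manageable (a sketch, not part of the original setup) is to hold the fields in a list and join them:

```shell
# Sketch: assemble the logpull_options string from a space-separated list
# so the commas never have to be placed by hand.
FIELDS="RayName Source RuleId Action EdgeResponseStatus Datetime EdgeColoName ClientIP"
LOGPULL_OPTIONS="fields=$(echo $FIELDS | tr ' ' ',')&timestamps=rfc3339"
echo "$LOGPULL_OPTIONS"
```

You could then substitute the resulting string into the heredoc above instead of typing the list inline.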
            <p>Next we re-run <code>terraform init</code> to install the latest Cloudflare Terraform Provider Module. You’ll need to make sure you have at least version 2.6.0 as this is the version in which we added Logpush job support:</p>
            <pre><code>$ terraform init

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "cloudflare" (terraform-providers/cloudflare) 2.6.0...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.</code></pre>
    <p>With the updated provider installed, we review the plan and then apply it:</p>
            <pre><code>$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

sumologic_collector.collector: Refreshing state... [id=108448215]
sumologic_http_source.http_source: Refreshing state... [id=150364538]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # cloudflare_logpush_job.firewall_events_job will be created
  + resource "cloudflare_logpush_job" "firewall_events_job" {
      + dataset             = "firewall_events"
      + destination_conf    = "sumo://endpoint1.collection.eu.sumologic.com/receiver/v1/http/(redacted)"
      + enabled             = true
      + id                  = (known after apply)
      + logpull_options     = "fields=RayName,Source,RuleId,Action,EdgeResponseStatus,Datetime,EdgeColoName,ClientIP,ClientCountryName,ClientASNDescription,UserAgent,ClientRequestHTTPMethodName,ClientRequestHTTPHost,ClientRequestPath&amp;timestamps=rfc3339"
      + name                = "fwevents-logpush-job"
      + ownership_challenge = "(redacted)"
      + zone_id             = "(redacted)"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.</code></pre>
            
            <pre><code>$ terraform apply --auto-approve
sumologic_collector.collector: Refreshing state... [id=108448215]
sumologic_http_source.http_source: Refreshing state... [id=150364538]
cloudflare_logpush_job.firewall_events_job: Creating...
cloudflare_logpush_job.firewall_events_job: Creation complete after 3s [id=13746]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.</code></pre>
            <p>Success! Last step is to test your setup.</p>
    <div>
      <h2>Testing your setup by sending a malicious request</h2>
      <a href="#testing-your-setup-by-sending-a-malicious-request">
        
      </a>
    </div>
    <p>The following step assumes that you have the Cloudflare WAF turned on. Alternatively, you can create a Firewall Rule to match your request and generate a Firewall Event that way.</p><p>First make sure that livetail is running as described earlier:</p>
            <pre><code>$ livetail "_source=firewall-events-source"
### Authenticating ###
### Starting Live Tail session ###</code></pre>
            <p>Then, in a browser, make the following request: <code>https://example.com/&lt;script&gt;alert()&lt;/script&gt;</code>. You should see the following returned:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4FOw7aQFIaaERMSbi2bRH3/3ca02fb0fa377956d28b444f4c5b2034/sqli-upinatoms.png" />
            
            </figure><p>And a few moments later in livetail:</p>
            <pre><code>{"RayName":"58830d3f9945bc36","Source":"waf","RuleId":"958052","Action":"log","EdgeColoName":"LHR","ClientIP":"203.0.113.69","ClientCountryName":"gb","ClientASNDescription":"NTL","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","ClientRequestHTTPMethodName":"GET","ClientRequestHTTPHost":"upinatoms.com"}
{"RayName":"58830d3f9945bc36","Source":"waf","RuleId":"958051","Action":"log","EdgeColoName":"LHR","ClientIP":"203.0.113.69","ClientCountryName":"gb","ClientASNDescription":"NTL","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","ClientRequestHTTPMethodName":"GET","ClientRequestHTTPHost":"upinatoms.com"}
{"RayName":"58830d3f9945bc36","Source":"waf","RuleId":"973300","Action":"log","EdgeColoName":"LHR","ClientIP":"203.0.113.69","ClientCountryName":"gb","ClientASNDescription":"NTL","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","ClientRequestHTTPMethodName":"GET","ClientRequestHTTPHost":"upinatoms.com"}
{"RayName":"58830d3f9945bc36","Source":"waf","RuleId":"973307","Action":"log","EdgeColoName":"LHR","ClientIP":"203.0.113.69","ClientCountryName":"gb","ClientASNDescription":"NTL","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","ClientRequestHTTPMethodName":"GET","ClientRequestHTTPHost":"upinatoms.com"}
{"RayName":"58830d3f9945bc36","Source":"waf","RuleId":"973331","Action":"log","EdgeColoName":"LHR","ClientIP":"203.0.113.69","ClientCountryName":"gb","ClientASNDescription":"NTL","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","ClientRequestHTTPMethodName":"GET","ClientRequestHTTPHost":"upinatoms.com"}
{"RayName":"58830d3f9945bc36","Source":"waf","RuleId":"981176","Action":"drop","EdgeColoName":"LHR","ClientIP":"203.0.113.69","ClientCountryName":"gb","ClientASNDescription":"NTL","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","ClientRequestHTTPMethodName":"GET","ClientRequestHTTPHost":"upinatoms.com"}</code></pre>
            <p>Note that for this one malicious request, Cloudflare Logs actually sent six separate Firewall Events to Sumo Logic. The reason is that this single request triggered six different Managed Rules: #958051, 958052, 973300, 973307, 973331, and 981176.</p>
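You can also trigger the event from the command line instead of a browser. A browser percent-encodes characters like <code>&lt;</code> and <code>&gt;</code> before sending the request, so with curl it is safest to supply an encoded form of the payload yourself. A sketch, with example.com standing in for your own zone:

```shell
# Sketch: the XSS probe with the payload in one valid percent-encoded form;
# example.com is a placeholder for a zone you control behind Cloudflare.
PAYLOAD="%3Cscript%3Ealert()%3C%2Fscript%3E"
URL="https://example.com/$PAYLOAD"
echo "$URL"
# Then, for example:
#   curl -sk -o /dev/null -w "%{http_code}\n" "$URL"
```

As with the browser request, the matching Firewall Events should appear in livetail about a minute later.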
    <div>
      <h2>Seeing it all in action</h2>
      <a href="#seeing-it-all-in-action">
        
      </a>
    </div>
    <p>Here's a demo of launching <code>livetail</code>, making a malicious request in a browser, and then seeing the result sent from the Cloudflare Logpush job:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5KtWEvKCr6QrbGKWZGzQ3a/1142fa4d2dddb6fd7cd7278d3273e0ee/fwevents-sumo-demo.gif" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Firewall]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Terraform]]></category>
            <category><![CDATA[SIEM]]></category>
            <guid isPermaLink="false">4JdLdzVCAsQGdNwMq7VgCa</guid>
            <dc:creator>Patrick R. Donahue</dc:creator>
        </item>
        <item>
            <title><![CDATA[Terraforming Cloudflare: in quest of the optimal setup]]></title>
            <link>https://blog.cloudflare.com/terraforming-cloudflare/</link>
            <pubDate>Wed, 09 Oct 2019 15:00:00 GMT</pubDate>
            <description><![CDATA[ This post is about our introductory journey into the infrastructure-as-code practice: managing Cloudflare configuration in a declarative and version-controlled way. ]]></description>
            <content:encoded><![CDATA[ <p><i>This is a guest post by Dimitris Koutsourelis and Alexis Dimitriadis, working for the Security Team at </i><a href="https://www.workable.com"><i>Workable</i></a><i>, a company that makes software to help companies find and hire great people.</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/37cT4OaneCnyyBgWYOprbJ/b0b143f2cbc1b9dd55ccf7bb76f7fd17/Image_20191002_222359-1-1.png" />
            
            </figure>
    <div>
      <h2>Overview</h2>
      <a href="#overview">
        
      </a>
    </div>
    <p>This post is about our introductory journey into the infrastructure-as-code practice: managing Cloudflare configuration in a declarative and version-controlled way. We'd like to share the experience we gained during this process: our pain points, the limitations we faced, and the different approaches we took, along with parts of our solution and experiments.</p>
    <div>
      <h2>Terraform world</h2>
      <a href="#terraform-world">
        
      </a>
    </div>
    <p><a href="https://www.terraform.io/intro/index.html">Terraform</a> is a great tool that fulfills our requirements, and fortunately, Cloudflare maintains its own <a href="https://www.terraform.io/docs/providers/cloudflare/index.html">provider</a> that allows us to manage its service configuration hassle-free.</p><p>On top of that, <a href="https://github.com/gruntwork-io/terragrunt">Terragrunt</a> is a thin wrapper that provides extra commands and functionality for keeping Terraform configurations DRY and managing remote state.</p><p>The combination of the two leads to a more modular and re-usable structure for Cloudflare <a href="https://www.terraform.io/docs/configuration/resources.html">resources</a> (configuration), by utilizing <a href="https://www.terraform.io/docs/configuration/modules.html">terraform</a> and <a href="https://github.com/gruntwork-io/terragrunt-infrastructure-modules-example">terragrunt</a> modules.</p><p>We chose to use the latest version of both tools (<a href="https://www.hashicorp.com/blog/terraform-0-1-2-preview">Terraform v0.12</a> &amp; <a href="https://github.com/gruntwork-io/terragrunt/releases/tag/v0.19.0">Terragrunt v0.19</a>, respectively) and constantly upgrade to take advantage of valuable new features and functionality which, at this point in time, remove important limitations.</p>
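    <p>To make the wrapper idea concrete, here is a rough sketch of how a Terragrunt file can point at a shared Terraform module and feed it inputs (the module source, zone name, and variable names below are illustrative, not our actual repositories):</p>
            <pre><code># environments/qa/terragrunt.hcl (illustrative)
terraform {
  source = "git::git@github.com:foo/modules.git//zone_settings"
}

include {
  path = find_in_parent_folders()
}

inputs = {
  zone_name     = "qa.example.com"
  browser_check = "off"
}</code></pre>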
    <div>
      <h2>Workable context</h2>
      <a href="#workable-context">
        
      </a>
    </div>
    <p>Our setup includes multiple domains that are grouped in two distinct Cloudflare organisations: production &amp; staging. Our environments have their own purposes and technical requirements (e.g. QA, development, sandbox, and production), which translate to slightly different sets of Cloudflare zone configuration.</p>
    <div>
      <h2>Our approach</h2>
      <a href="#our-approach">
        
      </a>
    </div>
    <p>Our main goal was to have a modular setup with the ability to manage any configuration for any zone, while keeping code repetition to a minimum. This is more complex than it sounds; we repeatedly changed our Terraform folder structure - and other technical aspects - during the development period. The following sections illustrate the alternatives we explored along the way, along with their pros &amp; cons.</p>
    <div>
      <h3>Structure</h3>
      <a href="#structure">
        
      </a>
    </div>
    <p>Terraform configuration is based on the project's directory structure, so this is the place to start.</p><p>Instead of retaining the Cloudflare organisation structure (production &amp; staging as root-level directories containing the zones that belong in each organisation), our decision was to group zones that share common configuration under the same directory. This helps keep the code DRY and the setup consistent and readable.</p><p>On the downside, this structure adds an extra layer of complexity, as two different sets of credentials need to be handled conditionally, and two state files (at the <i>environments/</i> root level) must be managed and isolated using <a href="https://www.terraform.io/docs/state/workspaces.html">workspaces</a>.</p><p>On top of that, we used Terraform modules to keep sets of configuration common across zone groups in a single place.</p><p>Terraform modules repository</p>
            <pre><code>modules/
│    ├── firewall/
│        ├── main.tf
│        ├── variables.tf
│    ├── zone_settings/
│        ├── main.tf
│        ├── variables.tf
│    └── [...]  
└──</code></pre>
            <p>Terragrunt modules repository</p>
            <pre><code>environments/
│    ├── [...]
│    ├── dev/
│    ├── qa/
│    ├── demo/
│        ├── zone-8/ (production)
│            └── terragrunt.hcl
│        ├── zone-9/ (staging)
│            └── terragrunt.hcl
│        ├── config.tfvars
│        ├── main.tf
│        └── variables.tf
│    ├── config.tfvars
│    ├── secrets.tfvars
│    ├── main.tf
│    ├── variables.tf
│    └── terragrunt.hcl
└──</code></pre>
            <p>The Terragrunt modules tree gives flexibility, since we are able to apply configuration at the zone, zone-group, or organisation level (which is in line with Cloudflare configuration capabilities - i.e. custom error pages can also be configured at the organisation level).</p>
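            <p>The two isolated state files mentioned earlier are kept apart with Terraform workspaces; handling them amounts to a couple of commands (sketched here for our production/staging split):</p>
            <pre><code>$ terraform workspace new production
$ terraform workspace new staging
$ terraform workspace select staging   # switch before touching staging zones
$ terraform workspace list</code></pre>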
    <div>
      <h3>Resource types</h3>
      <a href="#resource-types">
        
      </a>
    </div>
    <p>We decided to implement Terraform resources in different ways, to cover our requirements more efficiently.</p>
    <div>
      <h5>1. Static resource</h5>
      <a href="#1-static-resource">
        
      </a>
    </div>
    <p>The first thought that came to mind was having one or multiple <i>.tf</i> files implementing all the resources with hardcoded values assigned to each attribute. It's simple and straightforward, but can have a high maintenance cost if it leads to code copy/paste between environments.</p><p>So, common settings seem to be a good use case; we chose to implement the <i>access_rules</i> Terraform resources accordingly:</p><p>modules/access_rules/main.tf</p>
            <pre><code>resource "cloudflare_access_rule" "no_17" {
  notes   = "this is a description"
  mode    = "blacklist"
  configuration = {
    target  = "ip"
    value   = "x.x.x.x"
  }
}
[...]</code></pre>
            
    <div>
      <h5>2. Parametrized resources</h5>
      <a href="#2-parametrized-resources">
        
      </a>
    </div>
    <p>Our next step was to add variables to gain flexibility. This is useful when a few attributes of a shared resource configuration differ between multiple zones. Most of the configuration remains the same (as described above) and the variable instantiation is added in the Terraform module, while the values are fed through the Terragrunt module as input variables or as entries inside <i>.tfvars</i> files. The <i>zone_settings_override</i> resource was implemented accordingly:</p><p>modules/zone_settings/main.tf</p>
            <pre><code>resource "cloudflare_zone_settings_override" "zone_settings" {
  zone_id = var.zone_id
  settings {
    always_online       = "on"
    always_use_https    = "on"
    [...]
    browser_check       = var.browser_check
    mobile_redirect {
      mobile_subdomain  = var.mobile_redirect_subdomain
      status            = var.mobile_redirect_status
      strip_uri         = var.mobile_redirect_uri
    }
    
    [...]
    waf                 = "on"
    webp                = "off"
    websockets          = "on"
  }
}</code></pre>
            <p>environments/qa/main.tf</p>
            <pre><code>module "zone_settings" {
  source        = "git@github.com:foo/modules/zone_settings"
  zone_name     = var.zone_name
  browser_check = var.zone_settings_browser_check
  [...]
}</code></pre>
            <p>environments/qa/config.tfvars</p>
            <pre><code>#zone settings
zone_settings_browser_check = "off"
[...]</code></pre>
            
    <div>
      <h5>3. Dynamic resource</h5>
      <a href="#3-dynamic-resource">
        
      </a>
    </div>
    <p>At that point, we thought that a more interesting approach would be to create generic resource templates to manage all instances of a given resource in one place. A template is implemented as a Terraform module and creates each resource dynamically, based on its input: data fed through the Terragrunt modules (<i>/environments</i> in our case), or entries in the <i>.tfvars</i> files.</p><p>We chose to implement the <i>account_member</i> resource this way.</p><p>modules/account_members/variables.tf</p>
            <pre><code>variable "users" {
  description   = "map of users - roles"
  type          = map(list(string))
}
variable "member_roles" {
  description   = "account role ids"
  type          = map(string)
}</code></pre>
            <p>modules/account_members/main.tf</p>
            <pre><code>resource "cloudflare_account_member" "account_member" {
 for_each          = var.users
 email_address     = each.key
 role_ids          = [for role in each.value : lookup(var.member_roles, role)]
 lifecycle {
   prevent_destroy = true
 }
}</code></pre>
            <p>We feed the template with a map of users, where each member is assigned a list of roles. To make the code more readable, we mapped users to role names instead of role ids:</p><p>environments/config.tfvars</p>
            <pre><code>member_roles = {
  admin       = "000013091sds0193jdskd01d1dsdjhsd1"
  admin_ro    = "0000ds81hd131bdsjd813hh173hds8adh"
  analytics   = "0000hdsa8137djahd81y37318hshdsjhd"
  [...]
  super_admin = "00001534sd1a2123781j5gj18gj511321"
}
users = {
  "user1@workable.com"  = ["super_admin"]
  "user2@workable.com"  = ["analytics", "audit_logs", "cache_purge", "cf_workers"]
  "user3@workable.com"  = ["cf_stream"]
  [...]
  "robot1@workable.com" = ["cf_stream"]
}</code></pre>
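            <p>Wiring the template into an environment is then a single module block (the source path below is illustrative):</p>
            <pre><code>module "account_members" {
  source        = "git@github.com:foo/modules/account_members"
  users         = var.users
  member_roles  = var.member_roles
}</code></pre>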
            <p>Another interesting case we dealt with was the <i>rate_limit</i> resource; the variable declaration (a list of objects) &amp; implementation go as follows:</p><p>modules/rate_limit/variables.tf</p>
            <pre><code>variable "rate_limits" {
  description   = "list of rate limits"
  default       = []
 
  type          = list(object(
  {
    disabled    = bool,
    threshold   = number,
    description = string,
    period      = number,
    
    match       = object({
      request   = object({
        url_pattern     = map(string),
        schemes         = list(string),
        methods         = list(string)
      }),
      response          = object({
        statuses        = list(number),
        origin_traffic  = bool
      })
    }),
    action      = object({
      mode      = string,
      timeout   = number
    })
  }))
}</code></pre>
            <p>modules/rate_limit/main.tf</p>
            <pre><code>locals {
 […]
}
data "cloudflare_zones" "zone" {
  filter {
    name    = var.zone_name
    status  = "active"
    paused  = false
  }
}
resource "cloudflare_rate_limit" "rate_limit" {
  count         = length(var.rate_limits)
  zone_id       =  lookup(data.cloudflare_zones.zone.zones[0], "id")
  disabled      = var.rate_limits[count.index].disabled
  threshold     = var.rate_limits[count.index].threshold
  description   = var.rate_limits[count.index].description
  period        = var.rate_limits[count.index].period
  
  match {
    request {
      url_pattern     = local.url_patterns[count.index]
      schemes         = var.rate_limits[count.index].match.request.schemes
      methods         = var.rate_limits[count.index].match.request.methods
    }
    response {
      statuses        = var.rate_limits[count.index].match.response.statuses
      origin_traffic  = var.rate_limits[count.index].match.response.origin_traffic
    }
  }
  action {
    mode        = var.rate_limits[count.index].action.mode
    timeout     = var.rate_limits[count.index].action.timeout
  }
}</code></pre>
            <p>environments/qa/rate_limit.tfvars</p>
            <pre><code>common_rate_limits = [
{
    #1
    disabled      = false
    threshold     = 50
    description   = "sample description"
    period        = 60
   
   match  = {
      request   = {
        url_pattern  = {
          "subdomain"   = "foo"
          "path"        = "/api/v1/bar"
        }
        schemes         = [ "_ALL_", ]
        methods         = [ "GET", "POST", ]
      }
      response  = {
        statuses        = []
        origin_traffic  = true
      }
    }
    action  = {
      mode      = "simulate"
      timeout   = 3600
    }
  },
  [...]
]</code></pre>
            <p>The biggest advantage of this approach is that all common <i>rate_limit</i> rules are in one place, and each environment can include its own rules in its <i>.tfvars</i>. Combining the two lists (common and unique rules) with Terraform's built-in <code>concat()</code> function achieves a two-layer join. So we wanted to give it a try:</p>
            <pre><code>locals {
  rate_limits  = concat(var.common_rate_limits, var.unique_rate_limits)
}</code></pre>
            <p>There is, however, a drawback: <i>.tfvars</i> files can only contain static values. Since all <i>url</i> attributes - which include the zone name itself - have to be set explicitly in the data of each environment, every time a url needs to change, the value has to be copied across all environments with the zone name adjusted to match each one.</p><p>The solution we came up with, in order to make the zone name dynamic, was to split the <i>url</i> attribute into 3 parts: subdomain, domain and path. This is effective for the <i>.tfvars</i>, but the added complexity of handling the new variables is non-negligible. The corresponding code illustrates the issue:</p><p>modules/rate_limit/main.tf</p>
            <pre><code>locals {
  rate_limits   = concat(var.common_rate_limits, var.unique_rate_limits)
  url_patterns  = [for rate_limit in local.rate_limits : "${lookup(rate_limit.match.request.url_pattern, "subdomain", null) != null ? "${lookup(rate_limit.match.request.url_pattern, "subdomain")}." : ""}${lookup(rate_limit.match.request.url_pattern, "domain", null) != null ? lookup(rate_limit.match.request.url_pattern, "domain") : var.zone_name}${lookup(rate_limit.match.request.url_pattern, "path", null) != null ? lookup(rate_limit.match.request.url_pattern, "path") : ""}"]
}</code></pre>
            <p><i>Readability vs. functionality</i>: although flexibility is increased and code duplication is reduced, the url transformations have an impact on the code's readability and ease of debugging (it took us several minutes to spot a typo). You can imagine this gets even worse if you attempt to implement a more complex resource (such as <i>page_rule</i>, which is a list of maps with four <i>url</i> attributes).</p><p>The underlying issue here is that, at the point we were implementing our resources, we had to choose maps over objects due to their ability to omit attributes by using the <code>lookup()</code> function with default values. This is a requirement for certain resources such as <i>page_rules</i>: only certain attributes need to be defined (and others ignored).</p><p>In the end, the context will determine whether more complex resources can be implemented as dynamic resources.</p>
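            <p>The maps-over-objects trade-off comes down to <code>lookup()</code> accepting a default value, which lets an attribute simply be omitted from a map; a minimal illustration:</p>
            <pre><code>locals {
  url_pattern = { "path" = "/api/v1/bar" }                  # no "subdomain" key
  subdomain   = lookup(local.url_pattern, "subdomain", "")  # falls back to ""
}</code></pre>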
    <div>
      <h5>4. Sequential resources</h5>
      <a href="#4-sequential-resources">
        
      </a>
    </div>
    <p>The Cloudflare page rule resource has a specific peculiarity that differentiates it from other types of resources: the <i>priority</i> attribute. When a page rule is applied, it gets a unique id and a priority number which corresponds to the order in which it was submitted. Although the Cloudflare API and Terraform provider give the ability to explicitly specify the priority, there is a catch.</p><p>Terraform doesn't respect the order of resources inside a <i>.tf</i> file (even in a <i>for_each</i> loop!); each resource is randomly picked up and then applied to the provider. So, if page rule priority is important - as in our case - the submission order counts. The solution is to lock the sequence in which the resources are created through the <i>depends_on</i> meta-attribute:</p>
            <pre><code>resource "cloudflare_page_rule" "no_3" {
  depends_on  = [cloudflare_page_rule.no_2]
  zone_id     = lookup(data.cloudflare_zones.zone.zones[0], "id")
  target      = "www.${var.zone_name}/foo"
  status      = "active"
  priority    = 3
  actions {
    forwarding_url {
      status_code    = 301
      url            = "https://www.${var.zone_name}"
    }
  }
}
resource "cloudflare_page_rule" "no_2" {
  depends_on  = [cloudflare_page_rule.no_1]
  zone_id     = lookup(data.cloudflare_zones.zone.zones[0], "id")
  target      = "www.${var.zone_name}/lala*"
  status      = "active"
  priority    = 2
  actions {
    ssl                     = "flexible"
    cache_level             = "simplified"
    resolve_override        = "bar.${var.zone_name}"
    host_header_override    = "new.domain.com"
  }
}
resource "cloudflare_page_rule" "no_1" {
  zone_id   = lookup(data.cloudflare_zones.zone.zones[0], "id")
  target    = "*.${var.zone_name}/foo/*"
  status    = "active"
  priority  = 1
  actions {
    forwarding_url {
      status_code     = 301
      url             = "https://foo.${var.zone_name}/$1/$2"
    }
  }
}</code></pre>
            <p>So we had to go with a more static resource configuration, because the <i>depends_on</i> attribute only takes static values (not values computed dynamically at runtime).</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>After changing our minds several times along the way on Terraform structure and other technical details, we believe that there isn't a single best solution. It all comes down to the requirements and keeping a balance between complexity and simplicity. In our case, a mixed approach is a good middle ground.</p><p>Terraform is evolving quickly, but at this point it lacks some common coding capabilities. So over-engineering can be a trap (which we fell into too many times). Keep it simple and as DRY as possible. :)</p> ]]></content:encoded>
            <category><![CDATA[Terraform]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">69I7Vonk88NVPPXGckf6rA</guid>
            <dc:creator>Guest Author</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Cf-Terraforming]]></title>
            <link>https://blog.cloudflare.com/introducing-cf-terraform/</link>
            <pubDate>Fri, 15 Feb 2019 20:02:04 GMT</pubDate>
            <description><![CDATA[ Ever since we implemented support for configuring Cloudflare via Terraform, we’ve been steadily expanding the set of features and services you can manage via this popular open-source tool.  ]]></description>
            <content:encoded><![CDATA[ <p>Ever since <a href="/getting-started-with-terraform-and-cloudflare-part-1/">we implemented support for configuring Cloudflare via Terraform</a>, we’ve been steadily expanding the set of features and services you can manage via this popular open-source tool.</p><p>If you're unfamiliar with how Terraform works with Cloudflare, check out <a href="https://developers.cloudflare.com/terraform/">our developer docs</a>.</p><p>We are Terraform users ourselves, and we believe in the stability and reproducibility that can be achieved by defining your infrastructure as code.</p>
    <div>
      <h2>What is Terraform?</h2>
      <a href="#what-is-terraform">
        
      </a>
    </div>
    <p><a href="https://www.terraform.io/">Terraform</a> is an open-source tool that allows you to describe your infrastructure and cloud services (think virtual machines, servers, databases, network configurations, Cloudflare API resources, and more) as human-readable configurations.</p><p>Once you’ve done this, you can run the Terraform command-line tool and it will figure out the difference between your desired state and your current state, and make the API calls in the background necessary to reconcile the two.</p><p>Unlike other solutions, Terraform does not require you to run software on your hosts, and instead of spending time manually configuring machines, creating DNS records, and specifying Page Rules, you can simply run:</p>
            <pre><code>$ terraform apply</code></pre>
            <p>and the state described in your configuration files will be built for you.</p>
    <div>
      <h2>Enter Cloudflare Terraforming</h2>
      <a href="#enter-cloudflare-terraforming">
        
      </a>
    </div>
    <p>Terraform is a tremendous time-saver once you have your configuration files in place, but what do you do if you’re already a Cloudflare user and you need to convert your particular setup, records, resources and rules into Terraform config files in the first place?</p><p>Today, we’re excited to share a <a href="https://github.com/cloudflare/cf-terraforming">new open-source utility</a> to make the migration of even complex Cloudflare configurations into Terraform simple and fast.</p><p>It’s called <a href="https://github.com/cloudflare/cf-terraforming">cf-terraforming</a> and it downloads your Cloudflare setup, meaning everything you’ve defined via the Cloudflare dashboard and API, into Terraform-compliant configuration files in a few commands.</p>
    <div>
      <h2>Getting up and running quickly</h2>
      <a href="#getting-up-and-running-quickly">
        
      </a>
    </div>
    <p>Cf-terraforming is open-source and <a href="https://github.com/cloudflare/cf-terraforming">available on GitHub now</a>. You need a working <a href="https://golang.org/doc/install">Golang installation</a> and a <a href="https://dash.cloudflare.com/sign-up">Cloudflare account</a> with some resources defined. That’s it!</p><p>Let’s first install cf-terraforming, while also pulling down all dependencies and updating them as necessary:</p>
            <pre><code>$ go get -u github.com/cloudflare/cf-terraforming/...</code></pre>
            <p>Cf-terraforming is a command line tool that you invoke with your Cloudflare credentials, some zone information and the resource type that you want to export. The output is a valid Terraform configuration file describing your resources.</p><p>To use cf-terraforming, first <a href="https://support.cloudflare.com/hc/en-us/articles/200167836-Where-do-I-find-my-Cloudflare-API-key-">get your API key</a> and Account ID from the Cloudflare dashboard. You can find your Account ID at the bottom right of the overview page for any zone in your account, along with a quick link to get your API key. You can store your key and Account ID in environment variables to make it easier to work with the tool:</p>
            <pre><code>export CLOUDFLARE_TOKEN="&lt;your-key&gt;"
export CLOUDFLARE_EMAIL="&lt;your-email&gt;"
export CLOUDFLARE_ACCT_ID="&lt;your-id&gt;"</code></pre>
            <p>Cf-terraforming can create configuration files for any of the resources currently available in <a href="https://www.terraform.io/docs/providers/cloudflare/index.html">the official Cloudflare Terraform provider</a>, but sometimes it’s also handy to export individual resources as needed.</p><p>Let’s say you’re migrating your Cloudflare configuration to Terraform and you want to describe your Spectrum applications. You simply call cf-terraforming with your credentials, zone, and the spectrum_application command, like so:</p>
            <pre><code>go run cmd/cf-terraforming/main.go --email $CLOUDFLARE_EMAIL --key $CLOUDFLARE_TOKEN --account $CLOUDFLARE_ACCT_ID spectrum_application</code></pre>
            <p>Cf-terraforming will contact the Cloudflare API on your behalf and define your resources in a format that Terraform understands:</p>
            <pre><code>resource "cloudflare_spectrum_application" "1150bed3f45247b99f7db9696fffa17cbx9" {
  protocol = "tcp/8000"
  dns = {
    type = "CNAME"
    name = "example.com"
  }
  ip_firewall = "true"
  tls = "off"
  origin_direct = [ "tcp://37.241.37.138:8000", ]
}</code></pre>
            <p>You can redirect the output to a file and then start working with Terraform. First, ensure you are in the cf-terraforming directory, then run:</p>
            <pre><code>go run cmd/cf-terraforming/main.go --email $CLOUDFLARE_EMAIL --key $CLOUDFLARE_TOKEN --account $CLOUDFLARE_ACCT_ID spectrum_application &gt; my_spectrum_applications.tf </code></pre>
            <p>The same goes for Zones, DNS records, Workers scripts and routes, security policies and more.</p>
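            <p>From there, the generated files feed into the standard Terraform workflow; a typical next step (run from the directory holding the generated <i>.tf</i> files) is:</p>
            <pre><code>$ terraform init
$ terraform plan</code></pre>
            <p>Reviewing the plan first is worthwhile: Terraform treats resources it holds no state for as new, so confirm the diff matches your expectations before applying.</p>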
    <div>
      <h2>Download all Cloudflare resources</h2>
      <a href="#download-all-cloudflare-resources">
        
      </a>
    </div>
    <p>Use the <b>all</b> command to download everything and convert it into Terraform config.</p>
            <pre><code>go run cmd/cf-terraforming/main.go --email $CLOUDFLARE_EMAIL --key $CLOUDFLARE_TOKEN --account $CLOUDFLARE_ACCT_ID all</code></pre>
            
    <div>
      <h2>Which resources are supported?</h2>
      <a href="#which-resources-are-supported">
        
      </a>
    </div>
    <p>Currently, <a href="https://github.com/cloudflare/cf-terraforming">cf-terraforming</a> supports every resource type that you can manage via the official <a href="https://www.terraform.io/docs/providers/cloudflare/index.html">Cloudflare Terraform provider</a>:</p><ul><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/access_application.html">access_application</a></p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/access_rule.html">access_rule</a></p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/access_policy.html">access_policy</a></p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/account_member.html">account_member</a></p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/custom_pages.html">custom_pages</a></p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/filter.html">filter</a></p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/firewall_rule.html">firewall_rule</a></p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/load_balancer.html">load_balancer</a></p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/load_balancer_pool.html">load_balancer_pool</a></p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/load_balancer_monitor.html">load_balancer_monitor</a></p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/page_rule.html">page_rule</a></p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/rate_limit.html">rate_limit</a></p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/record.html">record</a></p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/spectrum_application.html">spectrum_application</a></p></li><li><p><a 
href="https://www.terraform.io/docs/providers/cloudflare/r/waf_rule.html">waf_rule</a></p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/worker_route.html">worker_route</a></p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/worker_script.html">worker_script</a></p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/zone.html">zone</a></p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/zone_lockdown.html">zone_lockdown</a></p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/r/zone_settings_override.html">zone_settings_override</a></p></li></ul>
    <div>
      <h2>Get involved</h2>
      <a href="#get-involved">
        
      </a>
    </div>
    <p>We’re looking for feedback and any issues you might encounter while getting up and running with cf-terraforming. Please open any issues against <a href="https://github.com/cloudflare/cf-terraforming">the GitHub repo</a>.</p><p>Cf-terraforming is open-source, so if you want to get involved feel free to pick up an open issue or make a pull request.</p>
    <div>
      <h2>Looking forward</h2>
      <a href="#looking-forward">
        
      </a>
    </div>
    <p>We’ll continue to expand <a href="https://www.terraform.io/docs/providers/cloudflare/index.html">the set of Cloudflare resources that you can manage via Terraform</a>, and that you can export via cf-terraforming. Be sure to keep an eye on the <a href="https://github.com/cloudflare/cf-terraforming">cf-terraforming repo</a> for updates.</p> ]]></content:encoded>
            <category><![CDATA[Terraform]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">2H5PM4gKSBPg3AesoilW5y</guid>
            <dc:creator>Zack Proser</dc:creator>
        </item>
        <item>
            <title><![CDATA[Deploy Workers using Terraform]]></title>
            <link>https://blog.cloudflare.com/deploy-workers-using-terraform/</link>
            <pubDate>Thu, 13 Sep 2018 16:03:15 GMT</pubDate>
            <description><![CDATA[ Today we're excited to announce that Cloudflare Workers are now supported in the Cloudflare Terraform Provider.  ]]></description>
            <content:encoded><![CDATA[ <p>Today we're excited to announce that Cloudflare Workers are now supported in the <a href="https://www.terraform.io/docs/providers/cloudflare/index.html">Cloudflare Terraform Provider</a>.</p><p><a href="https://www.terraform.io/">Terraform</a> is a fantastic tool for configuring your infrastructure. Traditionally if you wanted to spin up, tear down or update some of your infrastructure you would have to click around on a website or make some API calls, which is prone to human error. With Terraform, you define your infrastructure in simple, declarative configuration files and let Terraform figure out how to make the API calls for you. This also lets you treat your infrastructure like your code. You can check your Terraform configuration files into version control and integrate them into your normal software development workflow.</p><p>Terraform integrates with <a href="https://www.terraform.io/docs/providers/">many infrastructure providers</a> including Cloudflare. If you'd like to read more about setting up Terraform with Cloudflare, check out <a href="/getting-started-with-terraform-and-cloudflare-part-1/">Getting started with Terraform and Cloudflare</a>. In this post, I'm going to focus specifically on how to integrate <a href="https://www.cloudflare.com/products/cloudflare-workers/">Cloudflare Workers</a> with Terraform.</p><p>In this example we're going to create <a href="https://partyparrot.business">partyparrot.business</a>, and we're going to serve the whole site out of a worker without any origin server. We're starting from scratch here, but if you're already using Cloudflare workers and want to migrate to managing your workers with Terraform, you'll need to import your existing script and routes so that Terraform knows about them. See the "Importing your existing workers" section at the end.</p>
    <div>
      <h3>Prerequisites</h3>
      <a href="#prerequisites">
        
      </a>
    </div>
    <ul><li><p><a href="https://www.terraform.io/intro/getting-started/install.html">Install Terraform</a></p></li><li><p>Provide your Cloudflare credentials via environment variables</p><ul><li><p>Set <code>CLOUDFLARE_EMAIL</code> to your email address</p></li><li><p>Set <code>CLOUDFLARE_TOKEN</code> to your <a href="https://support.cloudflare.com/hc/en-us/articles/200167836-Where-do-I-find-my-Cloudflare-API-key-">Cloudflare API key</a></p></li><li><p>If you're on an Enterprise plan and want to use multiple scripts, you'll also need to set <code>CLOUDFLARE_ORG_ID</code> to your account ID. You can find your account ID by using the <a href="https://api.cloudflare.com/#accounts-list-accounts">List Accounts API</a></p></li></ul></li></ul>
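    <p>For example, in a Unix shell you might export these variables before running Terraform (the values below are placeholders):</p>
            <pre><code>export CLOUDFLARE_EMAIL="you@example.com"
export CLOUDFLARE_TOKEN="your-api-key"
# Only needed on Enterprise plans using multiple scripts:
export CLOUDFLARE_ORG_ID="your-account-id"</code></pre>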
    <div>
      <h3>Create the Terraform config file</h3>
      <a href="#create-the-terraform-config-file">
        
      </a>
    </div>
    <p>Create a file with any name and give it a <code>.tf</code> file extension. This is where we'll define our Terraform resources. In this file, first we'll need to set up the Cloudflare provider:</p>
            <pre><code>provider "cloudflare" {}</code></pre>
            <p>You could define your credentials in this file, but in general it's better to use environment variables so that you can check the configuration file into version control without including any private data.</p><p>Next we're going to create a variable named <code>zone</code>. One of the benefits of defining the zone in a variable as opposed to hard-coding it is that you can set up a separate staging domain and use the same Terraform configuration as your production domain. See the <a href="https://www.terraform.io/docs/configuration/variables.html">documentation</a> for more information on working with variables.</p>
            <pre><code>variable "zone" {
  default = "partyparrot.business"
}</code></pre>
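            <p>With the zone in a variable, you could point the same configuration at a staging domain by overriding the variable on the command line (the staging domain below is illustrative):</p>
            <pre><code>$ terraform apply -var "zone=staging.partyparrot.business"</code></pre>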
            
    <div>
      <h3>Setting up the worker script</h3>
      <a href="#setting-up-the-worker-script">
        
      </a>
    </div>
    <p>Now let's write our worker script. If you're looking for inspiration, check out some <a href="https://developers.cloudflare.com/workers/recipes/">Worker recipes</a>. For this example, I'll use <a href="https://gist.github.com/jRiest/7893cf10c550057ce1ff53f270683e1c#file-party_parrot_worker-js">this script</a> and name it <code>party_parrot_worker.js</code>.</p><p>Next we need to add a <code>cloudflare_worker_script</code> resource to our Terraform config and reference the script file. Open your <code>.tf</code> file and add the following:</p>
            <pre><code>resource "cloudflare_worker_script" "main_script" {
  zone = "${var.zone}"
  content = "${file("party_parrot_worker.js")}"
}</code></pre>
            <p>If you're new to Terraform, check out the <a href="https://www.terraform.io/docs/configuration/resources.html">Terraform Resource documentation</a> to learn more about this schema. Here we provide two parameters: <code>zone</code>, which references the variable we defined earlier, and <code>content</code>, which references the file we just created.</p><p><b>NOTE:</b> The <a href="https://www.cloudflare.com/plans/enterprise/">Cloudflare Enterprise plan</a> supports using multiple (named) scripts. To use named scripts, the parameters are slightly different. Remove the <code>zone</code> parameter since named scripts are not tied to a particular zone and instead add a <code>name</code> parameter to define the name of the script. See <a href="https://www.terraform.io/docs/providers/cloudflare/r/worker_script.html">the cloudflare_worker_script documentation</a> for an example.</p>
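            <p>As a sketch, on the Enterprise plan a named script resource might look like the following (the script name is illustrative):</p>
            <pre><code>resource "cloudflare_worker_script" "main_script" {
  name    = "party-parrot"
  content = "${file("party_parrot_worker.js")}"
}</code></pre>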
    <div>
      <h3>Setting up the worker routes</h3>
      <a href="#setting-up-the-worker-routes">
        
      </a>
    </div>
    <p>In order for the worker to start handling traffic, we'll also need to define at least one route. To do so, add a <code>cloudflare_worker_route</code> resource to the Terraform config.</p>
            <pre><code>resource "cloudflare_worker_route" "catch_all_route" {
  zone = "${var.zone}"
  pattern = "*${var.zone}/*"
  enabled = true
  depends_on = ["cloudflare_worker_script.main_script"]
}</code></pre>
            <p>Just like with the script resource, the <code>zone</code> parameter references the variable we defined earlier.</p><p>The <code>pattern</code> parameter defines which requests should be sent to the worker. In this example we use the route pattern <code>*partyparrot.business/*</code>, which will match all traffic. If, however, you only want your worker to handle a subset of requests to your zone, you can define a more specific pattern like <code>mysubdomain.example.com/*</code> or <code>*example.com/mypath*</code>. More information on route patterns is available <a href="https://developers.cloudflare.com/workers/api/route-matching/">here</a>.</p><p>The <code>enabled</code> parameter specifies that requests that match the pattern <b>should</b> run the worker. Alternatively, you can set <code>enabled</code> to <code>false</code>, which would mean that any requests that match the pattern <b>should not</b> run the worker. You can create multiple route patterns, and more-specific route patterns apply before less-specific route patterns. For example, you could create one route pattern like <code>example.com/assets/*</code> and set <code>enabled = false</code>, then create another pattern like <code>*example.com*</code> and set <code>enabled = true</code>. This would enable the worker for all traffic <i>except</i> for requests that match <code>example.com/assets/*</code>.</p><p>Finally, we set <code>depends_on</code> to point to the script resource we created above. In general, Terraform will try to create resources in parallel, but you may get an error if you try to create a route before you have a script. By using the <code>depends_on</code> parameter, Terraform will know to create the script first before creating the route.</p><p><b>NOTE:</b> As with the script resource, some of the parameters are different if you're on the Enterprise plan and using multiple scripts. 
Remove the <code>enabled</code> parameter and instead set <code>script_name = "${cloudflare_worker_script.your_script_resource.name}"</code> to specify which script the route should run. By directly referencing the script resource using this syntax, Terraform already knows that the route depends on the script, so you can also remove the <code>depends_on</code> parameter. You can see more details in <a href="https://www.terraform.io/docs/providers/cloudflare/r/worker_route.html">the cloudflare_worker_route documentation</a>.</p>
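    <p>The assets example described above could be sketched as two route resources (the <code>example.com</code> patterns and resource names are illustrative):</p>
            <pre><code>resource "cloudflare_worker_route" "assets_route" {
  zone = "example.com"
  pattern = "example.com/assets/*"
  enabled = false
  depends_on = ["cloudflare_worker_script.main_script"]
}

resource "cloudflare_worker_route" "default_route" {
  zone = "example.com"
  pattern = "*example.com*"
  enabled = true
  depends_on = ["cloudflare_worker_script.main_script"]
}</code></pre>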
    <div>
      <h3>Applying the Terraform config</h3>
      <a href="#applying-the-terraform-config">
        
      </a>
    </div>
    <p>Now that we've defined our script and route resources in the config file, we're ready to deploy! To initialize Terraform, run <code>terraform init</code></p>
            <pre><code>$ terraform init

Initializing provider plugins...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.</code></pre>
            <p>Now to deploy the changes, run <code>terraform apply</code>. Terraform will show you a preview of the changes it will make.</p>
            <pre><code>$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + cloudflare_worker_route.catch_all_route
      id:           &lt;computed&gt;
      enabled:      "true"
      multi_script: &lt;computed&gt;
      pattern:      "*partyparrot.business/*"
      zone:         "partyparrot.business"
      zone_id:      &lt;computed&gt;

  + cloudflare_worker_script.main_script
      id:           &lt;computed&gt;
      content:      "...omitted for brevity..."
      zone:         "partyparrot.business"
      zone_id:      &lt;computed&gt;


Plan: 2 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:</code></pre>
            <p>If everything looks good, type <code>yes</code> and press return to apply the changes.</p>
            <pre><code>cloudflare_worker_script.main_script: Creating...
  content: "" =&gt; "...omitted for brevity..."
  zone:    "" =&gt; "partyparrot.business"
  zone_id: "" =&gt; "&lt;computed&gt;"
cloudflare_worker_script.main_script: Creation complete after 1s (ID: zone:partyparrot.business)
cloudflare_worker_route.catch_all_route: Creating...
  enabled:      "" =&gt; "true"
  multi_script: "" =&gt; "&lt;computed&gt;"
  pattern:      "" =&gt; "*partyparrot.business/*"
  zone:         "" =&gt; "partyparrot.business"
  zone_id:      "" =&gt; "&lt;computed&gt;"
cloudflare_worker_route.catch_all_route: Creation complete after 0s (ID: af595c1bb7cd4d1698c4d6cbcb364662)

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.</code></pre>
            <p>Congratulations, your worker script and route are deployed! You can see the example script running at <a href="https://partyparrot.business/">partyparrot.business</a>.</p><p>As you make changes to your script or Terraform config, you can run <code>terraform apply</code> again and Terraform will figure out what's changed and deploy any updates.</p>
    <div>
      <h3>Importing your existing workers</h3>
      <a href="#importing-your-existing-workers">
        
      </a>
    </div>
    <p>If you're already using Cloudflare Workers but want to start managing them via Terraform, you'll need to let Terraform know about your existing configuration so it knows how to apply changes going forward.</p><p>First you’ll need to create your <code>.tf</code> file and add <code>cloudflare_worker_script</code> and <code>cloudflare_worker_route</code> resources for all of your existing scripts and routes.</p><p>Next you'll need to individually run the appropriate <code>terraform import ...</code> command for each script and route resource. The import command takes two arguments:</p><ul><li><p>the identifier of the resource that you defined in your <code>.tf</code> file (ex: <code>cloudflare_worker_script.main_script</code> or <code>cloudflare_worker_route.catch_all_route</code>)</p></li><li><p>an ID that's used to look up the resource from the Cloudflare API. See the <a href="https://www.terraform.io/docs/providers/cloudflare/r/worker_script.html">cloudflare_worker_script</a> and <a href="https://www.terraform.io/docs/providers/cloudflare/r/worker_route.html">cloudflare_worker_route</a> documentation for more information.</p></li></ul>
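    <p>As a sketch, the commands for this post's resources might look like the following, reusing the IDs shown in the apply output earlier; check the documentation linked above for the exact ID format, since yours will differ:</p>
            <pre><code>$ terraform import cloudflare_worker_script.main_script zone:partyparrot.business
$ terraform import cloudflare_worker_route.catch_all_route af595c1bb7cd4d1698c4d6cbcb364662</code></pre>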
    <div>
      <h3>Wrapping up</h3>
      <a href="#wrapping-up">
        
      </a>
    </div>
    <p>The complete script and Terraform configuration file for this example are hosted <a href="https://gist.github.com/jRiest/7893cf10c550057ce1ff53f270683e1c">on GitHub</a>.</p><p>Whether you're already using Cloudflare Workers or just getting started, Terraform can be a great way to manage your Workers configuration. If you're interested in learning more, here are a few useful links:</p><ul><li><p><a href="https://developers.cloudflare.com/workers/">Cloudflare Workers documentation</a></p></li><li><p><a href="https://developers.cloudflare.com/terraform/">Cloudflare Terraform Provider documentation</a></p></li><li><p><a href="/tag/workers/">More Workers blog posts</a></p></li><li><p><a href="/tag/terraform/">More Terraform blog posts</a></p></li><li><p><a href="https://github.com/terraform-providers/terraform-provider-cloudflare">terraform-provider-cloudflare source</a></p></li></ul> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Terraform]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">5QupFF8DVLR5KBikEm5App</guid>
            <dc:creator>Jake Riesterer</dc:creator>
        </item>
        <item>
            <title><![CDATA[Getting started with Terraform and Cloudflare (Part 2 of 2)]]></title>
            <link>https://blog.cloudflare.com/getting-started-with-terraform-and-cloudflare-part-2/</link>
            <pubDate>Mon, 30 Apr 2018 16:20:45 GMT</pubDate>
            <description><![CDATA[ Continue exploring Terraform with Cloudflare by enabling load balancing, creating page rules, and rolling back changes. ]]></description>
            <content:encoded><![CDATA[ <p><i> In </i><a href="/getting-started-with-terraform-and-cloudflare-part-1/"><i>Part 1 of Getting Started with Terraform</i></a><i>, we explained how Terraform lets developers store Cloudflare configuration in their own source code repository, institute change management processes that include code review, track their configuration versions and history over time, and easily roll back changes as needed.</i></p><p><i>We covered </i><a href="/getting-started-with-terraform-and-cloudflare-part-1#installingterraform"><i>installing Terraform</i></a><i>, </i><a href="/getting-started-with-terraform-and-cloudflare-part-1#helloworld"><i>provider initialization</i></a><i>, </i><a href="/getting-started-with-terraform-and-cloudflare-part-1#trackingchangehistory"><i>storing configuration in git</i></a><i>, </i><a href="/getting-started-with-terraform-and-cloudflare-part-1#applyingzonesettings"><i>applying zone settings</i></a><i>, and </i><a href="/getting-started-with-terraform-and-cloudflare-part-1#managingratelimits"><i>managing rate limits</i></a><i>. This post continues the Cloudflare Terraform provider walkthrough with examples of </i><a href="#addingloadbalancing"><i>load balancing</i></a><i>, </i><a href="#usingpagerules"><i>page rules</i></a><i>, </i><a href="#reviewingandrollingbackchanges"><i>reviewing and rolling back configuration</i></a><i>, and </i><a href="#importingexistingstateandconfiguration"><i>importing state</i></a><i>.</i></p>
    <div>
      <h3>Reviewing the current configuration</h3>
      <a href="#reviewing-the-current-configuration">
        
      </a>
    </div>
    <p>Before we build on Part 1, let's quickly review what we configured in that post. Because our configuration is in git, we can easily view the current configuration and change history that got us to this point.</p>
            <pre><code>$ git log
commit e1c38cf6f4230a48114ce7b747b77d6435d4646c
Author: Me
Date:   Mon Apr 9 12:34:44 2018 -0700

    Step 4 - Update /login rate limit rule from 'simulate' to 'ban'.

commit 0f7e499c70bf5994b5d89120e0449b8545ffdd24
Author: Me
Date:   Mon Apr 9 12:22:43 2018 -0700

    Step 4 - Add rate limiting rule to protect /login.

commit d540600b942cbd89d03db52211698d331f7bd6d7
Author: Me
Date:   Sun Apr 8 22:21:27 2018 -0700

    Step 3 - Enable TLS 1.3, Always Use HTTPS, and SSL Strict mode.

commit 494c6d61b918fce337ca4c0725c9bbc01e00f0b7
Author: Me
Date:   Sun Apr 8 19:58:56 2018 -0700

    Step 2 - Ignore terraform plugin directory and state file.

commit 5acea176050463418f6ac1029674c152e3056bc6
Author: Me
Date:   Sun Apr 8 19:52:13 2018 -0700

    Step 2 - Initial commit with webserver definition.</code></pre>
            <p>We'll get into more detail about reviewing and rolling back to prior versions of configuration <a href="#reviewingandrollingbackchanges">later in this post</a>, but for now let's review the current version.</p><p>In lines 1-4 below, we configured the Cloudflare Terraform provider. Initially we stored our email address and API key in the <code>cloudflare.tf</code> file, but for security reasons we removed them before committing to a git repository.</p><p>In lines 6-8, we define a <code>variable</code> that can be interpolated into <code>resource</code> definitions. Terraform can be used to mass-configure multiple zones through the use of variables, as we'll explore in a future post.</p><p>Lines 10-16 tell Cloudflare to create a DNS <code>A</code> record for <code>www.${var.domain}</code> using IP address <code>203.0.113.10</code>. Later in this post, we'll explore adding a second web server and load balancing between the two origins.</p><p>Lines 18-26 apply zone-wide settings and lines 28-54 define a rate limiting rule to protect against credential stuffing and other brute force attacks.</p>
            <pre><code>$ cat -n cloudflare.tf 
     1	provider "cloudflare" {
     2	  # email pulled from $CLOUDFLARE_EMAIL
     3	  # token pulled from $CLOUDFLARE_TOKEN
     4	}
     5	
     6	variable "domain" {
     7	  default = "example.com"
     8	}
     9	
    10	resource "cloudflare_record" "www" {
    11	  domain  = "${var.domain}"
    12	  name    = "www"
    13	  value   = "203.0.113.10"
    14	  type    = "A"
    15	  proxied = true
    16	}
    17	
    18	resource "cloudflare_zone_settings_override" "example-com-settings" {
    19	  name = "${var.domain}"
    20	
    21	  settings {
    22	    tls_1_3 = "on"
    23	    automatic_https_rewrites = "on"
    24	    ssl = "strict"
    25	  }
    26	}
    27	
    28	resource "cloudflare_rate_limit" "login-limit" {
    29	  zone = "${var.domain}"
    30	
    31	  threshold = 5
    32	  period = 60
    33	  match {
    34	    request {
    35	      url_pattern = "${var.domain}/login"
    36	      schemes = ["HTTP", "HTTPS"]
    37	      methods = ["POST"]
    38	    }
    39	    response {
    40	      statuses = [401, 403]
    41	      origin_traffic = true
    42	    }
    43	  }
    44	  action {
    45	    mode = "ban"
    46	    timeout = 300
    47	    response {
    48	      content_type = "text/plain"
    49	      body = "You have failed to login 5 times in a 60 second period and will be blocked from attempting to login again for the next 5 minutes."
    50	    }
    51	  }
    52	  disabled = false
    53	  description = "Block failed login attempts (5 in 1 min) for 5 minutes."
    54	}</code></pre>
            
    <div>
      <h3>Adding load balancing</h3>
      <a href="#adding-load-balancing">
        
      </a>
    </div>
    <p>Thanks to the <a href="/getting-started-with-terraform-and-cloudflare-part-1#managingratelimits">rate limiting set up in part 1</a>, our login page is protected against credential brute force attacks. Now it's time to focus on performance and reliability. Imagine organic traffic has grown for your web server, and this traffic is increasingly global. It’s time to spread these requests to your origin over multiple data centers.</p><p>Below we'll add a second origin for some basic round-robin load distribution, and then use the <a href="https://www.cloudflare.com/load-balancing/">Cloudflare Load Balancing</a> product to fail traffic over as needed. We'll then enhance our load balancing configuration through the use of "geo steering" to serve results from an origin server that is geographically closest to your end users.</p>
    <div>
      <h4>1. Add another DNS record for <code>www</code></h4>
      <a href="#1-add-another-dns-record-for-www">
        
      </a>
    </div>
    <p>To get started, we'll add a DNS record for a second web server, which is located in Asia. The IP address for this server is <code>198.51.100.15</code>.</p>
            <pre><code>$ git checkout -b step5-loadbalance
Switched to a new branch 'step5-loadbalance'

$ cat &gt;&gt; cloudflare.tf &lt;&lt;'EOF'
resource "cloudflare_record" "www-asia" {
  domain  = "${var.domain}"
  name    = "www"
  value   = "198.51.100.15"
  type    = "A"
  proxied = true
}
EOF</code></pre>
            <p>Note that while the resource name is different (Terraform resources of the same type must be uniquely named), the DNS name, i.e., what your customers will type in their browser, is the same: "www".</p>
    <div>
      <h4>2. Preview and merge the changes</h4>
      <a href="#2-preview-and-merge-the-changes">
        
      </a>
    </div>
    <p>Below we'll check the <code>terraform plan</code>, merge and apply the changes.</p>
            <pre><code>$ terraform plan | grep -v "&lt;computed&gt;"
...
Terraform will perform the following actions:

  + cloudflare_record.www-asia
      domain:      "example.com"
      name:        "www"
      proxied:     "true"
      type:        "A"
      value:       "198.51.100.15"


Plan: 1 to add, 0 to change, 0 to destroy.</code></pre>
            
            <pre><code>$ git add cloudflare.tf
$ git commit -m "Step 5 - Add additional 'www' DNS record for Asia data center."
[step5-loadbalance 6761a4f] Step 5 - Add additional 'www' DNS record for Asia data center.
 1 file changed, 7 insertions(+)

$ git checkout master
Switched to branch 'master'

$ git merge step5-loadbalance 
Updating e1c38cf..6761a4f
Fast-forward
 cloudflare.tf | 7 +++++++
 1 file changed, 7 insertions(+)</code></pre>
            
    <div>
      <h4>3. Apply and verify the changes</h4>
      <a href="#3-apply-and-verify-the-changes">
        
      </a>
    </div>
    <p>Let's add the second DNS record for <a href="http://www.example.com">www.example.com</a>:</p>
            <pre><code>$ terraform apply --auto-approve
...
cloudflare_record.www-asia: Creating...
  created_on:  "" =&gt; "&lt;computed&gt;"
  domain:      "" =&gt; "example.com"
  hostname:    "" =&gt; "&lt;computed&gt;"
  metadata.%:  "" =&gt; "&lt;computed&gt;"
  modified_on: "" =&gt; "&lt;computed&gt;"
  name:        "" =&gt; "www"
  proxiable:   "" =&gt; "&lt;computed&gt;"
  proxied:     "" =&gt; "true"
  ttl:         "" =&gt; "&lt;computed&gt;"
  type:        "" =&gt; "A"
  value:       "" =&gt; "198.51.100.15"
  zone_id:     "" =&gt; "&lt;computed&gt;"
cloudflare_record.www-asia: Creation complete after 1s (ID: fda39d8c9bf909132e82a36bab992864)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.</code></pre>
            <p>With the second DNS record in place, let's try making some requests to see where the traffic is served from:</p>
            <pre><code>$ curl https://www.example.com
Hello, this is 203.0.113.10!

$ curl https://www.example.com
Hello, this is 203.0.113.10!

$ curl https://www.example.com
Hello, this is 198.51.100.15!

$ curl https://www.example.com
Hello, this is 203.0.113.10!</code></pre>
            <p>As you can see above, there is no discernible pattern for which origin receives the request. When Cloudflare connects to an origin with multiple DNS records, one of the IP addresses is selected at random. If both of these IPs are in the same data center and sessions can be shared (i.e., it doesn't matter if the same user hops between origin servers), this may work fine. However, for anything more complicated such as origins in different geographies or active health checks, you're going to want to use Cloudflare's Load Balancing product.</p>
    <div>
      <h4>4. Switch to using Cloudflare Load Balancing</h4>
      <a href="#4-switch-to-using-cloudflare-load-balancing">
        
      </a>
    </div>
    <p>Before proceeding, make sure that your account is enabled for Load Balancing. If you're on an Enterprise plan, you should ask your Customer Success Manager to do this; otherwise, you can subscribe to Load Balancing within the Cloudflare Dashboard.</p><p>As described in the <a href="https://support.cloudflare.com/hc/en-us/articles/115000081911-Tutorial-How-to-Set-Up-Load-Balancing-Intelligent-Failover-on-Cloudflare">load balancing tutorial</a> on the Cloudflare Support site, you will need to do five things:</p><blockquote><p>i. Create a monitor to run health checks against your origin servers.
ii. Create a pool of one or more origin servers that will receive load balanced traffic.
iii. Create a load balancer with an external hostname, e.g., www.example.com, and one or more pools.
iv. Preview and merge the changes.
v. Test the changes.</p></blockquote>
    <div>
      <h5>i. Define and create the health check ("monitor")</h5>
      <a href="#i-define-and-create-the-health-check-monitor">
        
      </a>
    </div>
    <p>To monitor our origins we're going to create a basic health check that makes a GET request to each origin on the URL <a href="https://www.example.com">https://www.example.com</a>. If the origin returns the 200/OK status code (with a response body containing "alive") within 5 seconds, we'll consider it healthy. If it fails to do so three (3) times in a row, we'll consider it unhealthy. This health check will be run once per minute from several regions, and send an email notification to you@example.com if any failures are detected.</p>
            <pre><code>$ git checkout step5-loadbalance
Switched to branch 'step5-loadbalance'

$ cat &gt;&gt; cloudflare.tf &lt;&lt;'EOF'
resource "cloudflare_load_balancer_monitor" "get-root-https" {
  expected_body = "alive"
  expected_codes = "200"
  method = "GET"
  timeout = 5
  path = "/"
  interval = 60
  retries = 2
  description = "GET / over HTTPS - expect 200"
}
EOF</code></pre>
            
    <div>
      <h5>ii. Define and create the pool of origins</h5>
      <a href="#ii-define-and-create-the-pool-of-origins">
        
      </a>
    </div>
    <p>We will call our pool "www-servers" and add two origins to it: <code>www-us</code> (<code>203.0.113.10</code>) and <code>www-asia</code> (<code>198.51.100.15</code>). For now, we'll skip any sort of <a href="https://support.cloudflare.com/hc/en-us/articles/115000540888-Load-Balancing-Geographic-Regions">geo routing</a>.</p><p>Note that we reference the monitor we added in the last step. When applying this configuration, Terraform will figure out that it first needs to create the monitor so that it can look up the ID and provide it to the pool we wish to create.</p>
            <pre><code>$ cat &gt;&gt; cloudflare.tf &lt;&lt;'EOF'
resource "cloudflare_load_balancer_pool" "www-servers" {
  name = "www-servers"
  monitor = "${cloudflare_load_balancer_monitor.get-root-https.id}"
  origins {
    name = "www-us"
    address = "203.0.113.10"
  }
  origins {
    name = "www-asia"
    address = "198.51.100.15"
  }
  description = "www origins"
  enabled = true
  minimum_origins = 1
  notification_email = "you@example.com"
  check_regions = ["WNAM", "ENAM", "WEU", "EEU", "SEAS", "NEAS"]
}
EOF</code></pre>
            
    <div>
      <h5>iii. Define and create the load balancer</h5>
      <a href="#iii-define-and-create-the-load-balancer">
        
      </a>
    </div>
    <p>Note that when you create a load balancer (LB), it will <a href="https://support.cloudflare.com/hc/en-us/articles/115004954407-How-Does-a-Load-Balancer-Interact-with-Existing-DNS-Records-">replace any existing DNS records with the same name</a>. For example, when we create the "<a href="http://www.example.com">www.example.com</a>" LB below, it will supersede the two www DNS records that you have previously defined. One benefit of leaving these DNS records in place is that if you temporarily disable load balancing, connections to this hostname will still be possible as shown above.</p>
            <pre><code>$ cat &gt;&gt; cloudflare.tf &lt;&lt;'EOF'
resource "cloudflare_load_balancer" "www-lb" {
  zone = "example.com"
  name = "www-lb"
  default_pool_ids = ["${cloudflare_load_balancer_pool.www-servers.id}"]
  fallback_pool_id = "${cloudflare_load_balancer_pool.www-servers.id}"
  description = "example load balancer"
  proxied = true
}
EOF</code></pre>
            
    <div>
      <h5>iv. Preview and merge the changes</h5>
      <a href="#iv-preview-and-merge-the-changes">
        
      </a>
    </div>
    <p>As usual, we take a look at the proposed plan before we apply any changes:</p>
            <pre><code>$ terraform plan
...
Terraform will perform the following actions:

  + cloudflare_load_balancer.www-lb
      id:                         &lt;computed&gt;
      created_on:                 &lt;computed&gt;
      default_pool_ids.#:         &lt;computed&gt;
      description:                "example load balancer"
      fallback_pool_id:           "${cloudflare_load_balancer_pool.www-servers.id}"
      modified_on:                &lt;computed&gt;
      name:                       "www-lb"
      pop_pools.#:                &lt;computed&gt;
      proxied:                    "true"
      region_pools.#:             &lt;computed&gt;
      ttl:                        &lt;computed&gt;
      zone:                       "example.com"
      zone_id:                    &lt;computed&gt;

  + cloudflare_load_balancer_monitor.get-root-https
      id:                         &lt;computed&gt;
      created_on:                 &lt;computed&gt;
      description:                "GET / over HTTPS - expect 200"
      expected_body:              "alive"
      expected_codes:             "200"
      interval:                   "60"
      method:                     "GET"
      modified_on:                &lt;computed&gt;
      path:                       "/"
      retries:                    "2"
      timeout:                    "5"
      type:                       "http"

  + cloudflare_load_balancer_pool.www-servers
      id:                         &lt;computed&gt;
      check_regions.#:            "6"
      check_regions.1151265357:   "SEAS"
      check_regions.1997072153:   "WEU"
      check_regions.2367191053:   "EEU"
      check_regions.2826842289:   "ENAM"
      check_regions.2992567379:   "WNAM"
      check_regions.3706632574:   "NEAS"
      created_on:                 &lt;computed&gt;
      description:                "www origins"
      enabled:                    "true"
      minimum_origins:            "1"
      modified_on:                &lt;computed&gt;
      monitor:                    "${cloudflare_load_balancer_monitor.get-root-https.id}"
      name:                       "www-servers"
      notification_email:         "you@example.com"
      origins.#:                  "2"
      origins.3039426352.address: "198.51.100.15"
      origins.3039426352.enabled: "true"
      origins.3039426352.name:    "www-asia"
      origins.4241861547.address: "203.0.113.10"
      origins.4241861547.enabled: "true"
      origins.4241861547.name:    "www-us"


Plan: 3 to add, 0 to change, 0 to destroy.</code></pre>
            <p>The plan looks good so let's go ahead, merge it in, and apply it.</p>
            <pre><code>$ git add cloudflare.tf
$ git commit -m "Step 5 - Create load balancer (LB) monitor, LB pool, and LB."
[step5-loadbalance bc9aa9a] Step 5 - Create load balancer (LB) monitor, LB pool, and LB.
 1 file changed, 35 insertions(+)

$ terraform apply --auto-approve
...
cloudflare_load_balancer_monitor.get-root-https: Creating...
  created_on:     "" =&gt; "&lt;computed&gt;"
  description:    "" =&gt; "GET / over HTTPS - expect 200"
  expected_body:  "" =&gt; "alive"
  expected_codes: "" =&gt; "200"
  interval:       "" =&gt; "60"
  method:         "" =&gt; "GET"
  modified_on:    "" =&gt; "&lt;computed&gt;"
  path:           "" =&gt; "/"
  retries:        "" =&gt; "2"
  timeout:        "" =&gt; "5"
  type:           "" =&gt; "http"
cloudflare_load_balancer_monitor.get-root-https: Creation complete after 1s (ID: 4238142473fcd48e89ef1964be72e3e0)
cloudflare_load_balancer_pool.www-servers: Creating...
  check_regions.#:            "" =&gt; "6"
  check_regions.1151265357:   "" =&gt; "SEAS"
  check_regions.1997072153:   "" =&gt; "WEU"
  check_regions.2367191053:   "" =&gt; "EEU"
  check_regions.2826842289:   "" =&gt; "ENAM"
  check_regions.2992567379:   "" =&gt; "WNAM"
  check_regions.3706632574:   "" =&gt; "NEAS"
  created_on:                 "" =&gt; "&lt;computed&gt;"
  description:                "" =&gt; "www origins"
  enabled:                    "" =&gt; "true"
  minimum_origins:            "" =&gt; "1"
  modified_on:                "" =&gt; "&lt;computed&gt;"
  monitor:                    "" =&gt; "4238142473fcd48e89ef1964be72e3e0"
  name:                       "" =&gt; "www-servers"
  notification_email:         "" =&gt; "you@example.com"
  origins.#:                  "" =&gt; "2"
  origins.3039426352.address: "" =&gt; "198.51.100.15"
  origins.3039426352.enabled: "" =&gt; "true"
  origins.3039426352.name:    "" =&gt; "www-asia"
  origins.4241861547.address: "" =&gt; "203.0.113.10"
  origins.4241861547.enabled: "" =&gt; "true"
  origins.4241861547.name:    "" =&gt; "www-us"
cloudflare_load_balancer_pool.www-servers: Creation complete after 0s (ID: 906d2a7521634783f4a96c062eeecc6d)
cloudflare_load_balancer.www-lb: Creating...
  created_on:         "" =&gt; "&lt;computed&gt;"
  default_pool_ids.#: "" =&gt; "1"
  default_pool_ids.0: "" =&gt; "906d2a7521634783f4a96c062eeecc6d"
  description:        "" =&gt; "example load balancer"
  fallback_pool_id:   "" =&gt; "906d2a7521634783f4a96c062eeecc6d"
  modified_on:        "" =&gt; "&lt;computed&gt;"
  name:               "" =&gt; "www-lb"
  pop_pools.#:        "" =&gt; "&lt;computed&gt;"
  proxied:            "" =&gt; "true"
  region_pools.#:     "" =&gt; "&lt;computed&gt;"
  ttl:                "" =&gt; "&lt;computed&gt;"
  zone:               "" =&gt; "example.com"
  zone_id:            "" =&gt; "&lt;computed&gt;"
cloudflare_load_balancer.www-lb: Creation complete after 1s (ID: cb94f53f150e5c1a65a07e43c5d4cac4)

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.</code></pre>
            
    <div>
      <h5>iv. Test the changes</h5>
      <a href="#iv-test-the-changes">
        
      </a>
    </div>
    <p>With load balancing in place, let's run those curl requests again to see where the traffic is served from:</p>
            <pre><code>$ for i in {1..4}; do curl https://www.example.com &amp;&amp; sleep 5; done
Hello, this is 198.51.100.15!

Hello, this is 203.0.113.10!

Hello, this is 198.51.100.15!

Hello, this is 203.0.113.10!</code></pre>
            <p>Great, we're now seeing each request load balanced evenly across the two origins we defined.</p>
    <div>
      <h3>Using page rules</h3>
      <a href="#using-page-rules">
        
      </a>
    </div>
    <p>Earlier we configured zone settings that apply to all of example.com. Now we're going to add an exception to these settings by using <a href="https://www.cloudflare.com/features-page-rules/">Page Rules</a>.</p><p>Specifically, we're going to increase the security level for a URL we know is expensive to render (and cannot be cached): <a href="https://www.example.com/expensive-db-call">https://www.example.com/expensive-db-call</a>. Additionally, we're going to add a redirect from the previous URL we used to host this page.</p>
    <div>
      <h4>1. Create a new branch and append the page rule</h4>
      <a href="#1-create-a-new-branch-and-append-the-page-rule">
        
      </a>
    </div>
    <p>As usual, we'll create a new branch and append our configuration.</p>
            <pre><code>$ git checkout -b step6-pagerule
Switched to a new branch 'step6-pagerule'

$ cat &gt;&gt; cloudflare.tf &lt;&lt;'EOF'
resource "cloudflare_page_rule" "increase-security-on-expensive-page" {
  zone = "${var.domain}"
  target = "www.${var.domain}/expensive-db-call"
  priority = 10

  actions = {
    security_level = "under_attack",
  }
}

resource "cloudflare_page_rule" "redirect-to-new-db-page" {
  zone = "${var.domain}"
  target = "www.${var.domain}/old-location.php"
  priority = 10

  actions = {
    forwarding_url {
      url = "https://www.${var.domain}/expensive-db-call"
      status_code = 301
    }
  }
}
EOF</code></pre>
            
    <div>
      <h4>2. Preview and merge the changes</h4>
      <a href="#2-preview-and-merge-the-changes">
        
      </a>
    </div>
    <p>You know the drill: preview the changes Terraform is going to make and then merge them into the master branch.</p>
            <pre><code>$ terraform plan
...
Terraform will perform the following actions:

  + cloudflare_page_rule.increase-security-on-expensive-page
      id:                                     &lt;computed&gt;
      actions.#:                              "1"
      actions.0.always_use_https:             "false"
      actions.0.disable_apps:                 "false"
      actions.0.disable_performance:          "false"
      actions.0.disable_security:             "false"
      actions.0.security_level:               "under_attack"
      priority:                               "10"
      status:                                 "active"
      target:                                 "www.example.com/expensive-db-call"
      zone:                                   "example.com"
      zone_id:                                &lt;computed&gt;

  + cloudflare_page_rule.redirect-to-new-db-page
      id:                                     &lt;computed&gt;
      actions.#:                              "1"
      actions.0.always_use_https:             "false"
      actions.0.disable_apps:                 "false"
      actions.0.disable_performance:          "false"
      actions.0.disable_security:             "false"
      actions.0.forwarding_url.#:             "1"
      actions.0.forwarding_url.0.status_code: "301"
      actions.0.forwarding_url.0.url:         "https://www.example.com/expensive-db-call"
      priority:                               "10"
      status:                                 "active"
      target:                                 "www.example.com/old-location.php"
      zone:                                   "example.com"
      zone_id:                                &lt;computed&gt;


Plan: 2 to add, 0 to change, 0 to destroy.</code></pre>
            
            <pre><code>$ git add cloudflare.tf

$ git commit -m "Step 6 - Add two Page Rules."
[step6-pagerule d4fec16] Step 6 - Add two Page Rules.
 1 file changed, 23 insertions(+)

$ git checkout master
Switched to branch 'master'

$ git merge step6-pagerule 
Updating 7a2ac34..d4fec16
Fast-forward
 cloudflare.tf | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)</code></pre>
    <div>
      <h4>3. Apply and verify the changes</h4>
      <a href="#3-apply-and-verify-the-changes">
        
      </a>
    </div>
    <p>First we'll test requesting the (now missing) old location of the expensive-to-render page.</p>
            
            <pre><code>$ curl -vso /dev/null https://www.example.com/old-location.php 2&gt;&amp;1 | grep "&lt; HTTP\|Location"
&lt; HTTP/1.1 404 Not Found</code></pre>
            <p>As expected, it can't be found. Let's apply the Page Rules, including the redirect that should fix this error.</p>
            <pre><code>$ terraform apply --auto-approve
...
cloudflare_page_rule.redirect-to-new-db-page: Creating...
  actions.#:                              "0" =&gt; "1"
  actions.0.always_use_https:             "" =&gt; "false"
  actions.0.disable_apps:                 "" =&gt; "false"
  actions.0.disable_performance:          "" =&gt; "false"
  actions.0.disable_security:             "" =&gt; "false"
  actions.0.forwarding_url.#:             "0" =&gt; "1"
  actions.0.forwarding_url.0.status_code: "" =&gt; "301"
  actions.0.forwarding_url.0.url:         "" =&gt; "https://www.example.com/expensive-db-call"
  priority:                               "" =&gt; "10"
  status:                                 "" =&gt; "active"
  target:                                 "" =&gt; "www.example.com/old-location.php"
  zone:                                   "" =&gt; "example.com"
  zone_id:                                "" =&gt; "&lt;computed&gt;"
cloudflare_page_rule.increase-security-on-expensive-page: Creating...
  actions.#:                     "0" =&gt; "1"
  actions.0.always_use_https:    "" =&gt; "false"
  actions.0.disable_apps:        "" =&gt; "false"
  actions.0.disable_performance: "" =&gt; "false"
  actions.0.disable_security:    "" =&gt; "false"
  actions.0.security_level:      "" =&gt; "under_attack"
  priority:                      "" =&gt; "10"
  status:                        "" =&gt; "active"
  target:                        "" =&gt; "www.example.com/expensive-db-call"
  zone:                          "" =&gt; "example.com"
  zone_id:                       "" =&gt; "&lt;computed&gt;"
cloudflare_page_rule.redirect-to-new-db-page: Creation complete after 3s (ID: c5c40ff2dc12416b5fe4d0541980c591)
cloudflare_page_rule.increase-security-on-expensive-page: Creation complete after 6s (ID: 1c13fdb84710c4cc8b11daf7ffcca449)

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.</code></pre>
            <p>With the Page Rules in place, let's try that call again, along with the <a href="/introducing-im-under-attack-mode/">I'm Under Attack Mode</a> test:</p>
            <pre><code>$ curl -vso /dev/null https://www.example.com/old-location.php 2&gt;&amp;1 | grep "&lt; HTTP\|Location"
&lt; HTTP/1.1 301 Moved Permanently
&lt; Location: https://www.example.com/expensive-db-call

$ curl -vso /dev/null https://www.example.com/expensive-db-call 2&gt;&amp;1 | grep "&lt; HTTP"
&lt; HTTP/1.1 503 Service Temporarily Unavailable</code></pre>
            <p>Great, both work as expected! In the first case, the Cloudflare edge responds with a <code>301</code> redirecting the browser to the new location. In the second case, it initially responds with a <code>503</code>, consistent with I'm Under Attack mode.</p>
    <div>
      <h3>Reviewing and rolling back changes</h3>
      <a href="#reviewing-and-rolling-back-changes">
        
      </a>
    </div>
    <p>We've come a long way! Now it's time to tear it all down. Well, maybe just part of it.</p><p>Sometimes when you deploy configuration changes you later determine that they need to be rolled back. You could be performance testing a new configuration and want to revert to your previous configuration when done testing. Or maybe you fat-fingered an IP address and brought your entire site down (#hugops).</p><p>Either way, if you've determined you want to revert your configuration, all you need to do is check the desired branch out and ask Terraform to move your Cloudflare settings back in time. And note that if you accidentally brought your site down you should consider establishing a good strategy for peer reviewing pull requests (rather than merging directly to primary, as I do here for brevity)!</p>
    <div>
      <h4>1. Reviewing your configuration history</h4>
      <a href="#1-reviewing-your-configuration-history">
        
      </a>
    </div>
    <p>Before we figure out how far back in time we want to roll back, let's take a look at our (git) versioned history.</p>
            <pre><code>$ git log
commit d4fec164581bec44684a4d59bb80aec1f1da5a6e
Author: Me
Date:   Wed Apr 18 22:04:52 2018 -0700

    Step 6 - Add two Page Rules.

commit bc9aa9a465a4c8d6deeaa0491814c9f364e9aa8a
Author: Me
Date:   Sun Apr 15 23:58:35 2018 -0700

    Step 5 - Create load balancer (LB) monitor, LB pool, and LB.

commit 6761a4f754e77322629ba4e90a90a3defa1fd4b6
Author: Me
Date:   Wed Apr 11 11:20:25 2018 -0700

    Step 5 - Add additional 'www' DNS record for Asia data center.

commit e1c38cf6f4230a48114ce7b747b77d6435d4646c
Author: Me
Date:   Mon Apr 9 12:34:44 2018 -0700

    Step 4 - Update /login rate limit rule from 'simulate' to 'ban'.

commit 0f7e499c70bf5994b5d89120e0449b8545ffdd24
Author: Me
Date:   Mon Apr 9 12:22:43 2018 -0700

    Step 4 - Add rate limiting rule to protect /login.

commit d540600b942cbd89d03db52211698d331f7bd6d7
Author: Me
Date:   Sun Apr 8 22:21:27 2018 -0700

    Step 3 - Enable TLS 1.3, Always Use HTTPS, and SSL Strict mode.

commit 494c6d61b918fce337ca4c0725c9bbc01e00f0b7
Author: Me
Date:   Sun Apr 8 19:58:56 2018 -0700

    Step 2 - Ignore terraform plugin directory and state file.

commit 5acea176050463418f6ac1029674c152e3056bc6
Author: Me
Date:   Sun Apr 8 19:52:13 2018 -0700

    Step 2 - Initial commit with webserver definition.</code></pre>
            <p>Another nice benefit of storing your Cloudflare configuration in git is that you can see who made the change, as well as who reviewed and approved the change (assuming you're peer reviewing pull requests).</p>
            
    <div>
      <h4>2. Examining specific historical changes</h4>
      <a href="#2-examining-specific-historical-changes">
        
      </a>
    </div>
    <p>To begin with, let's see what the last change we made was.</p>
            <pre><code>$ git show
commit d4fec164581bec44684a4d59bb80aec1f1da5a6e
Author: Me
Date:   Wed Apr 18 22:04:52 2018 -0700

    Step 6 - Add two Page Rules.

diff --git a/cloudflare.tf b/cloudflare.tf
index 0b39450..ef11d8a 100644
--- a/cloudflare.tf
+++ b/cloudflare.tf
@@ -94,3 +94,26 @@ resource "cloudflare_load_balancer" "www-lb" {
   description = "example load balancer"
   proxied = true
 }
+
+resource "cloudflare_page_rule" "increase-security-on-expensive-page" {
+  zone = "${var.domain}"
+  target = "www.${var.domain}/expensive-db-call"
+  priority = 10
+
+  actions = {
+    security_level = "under_attack",
+  }
+}
+
+resource "cloudflare_page_rule" "redirect-to-new-db-page" {
+  zone = "${var.domain}"
+  target = "www.${var.domain}/old-location.php"
+  priority = 10
+
+  actions = {
+    forwarding_url {
+      url = "https://www.${var.domain}/expensive-db-call"
+      status_code = 301
+    }
+  }
+}</code></pre>
            <p>Now let's look at the past few changes:</p>
            <pre><code>$ git log -p -3

... 
// page rule config from above
...

commit bc9aa9a465a4c8d6deeaa0491814c9f364e9aa8a
Author: Me
Date:   Sun Apr 15 23:58:35 2018 -0700

    Step 5 - Create load balancer (LB) monitor, LB pool, and LB.

diff --git a/cloudflare.tf b/cloudflare.tf
index b92cb6f..195b646 100644
--- a/cloudflare.tf
+++ b/cloudflare.tf
@@ -59,3 +59,38 @@ resource "cloudflare_record" "www-asia" {
   type    = "A"
   proxied = true
 }
+resource "cloudflare_load_balancer_monitor" "get-root-https" {
+  expected_body = "alive"
+  expected_codes = "200"
+  method = "GET"
+  timeout = 5
+  path = "/"
+  interval = 60
+  retries = 2
+  description = "GET / over HTTPS - expect 200"
+}
+resource "cloudflare_load_balancer_pool" "www-servers" {
+  name = "www-servers"
+  monitor = "${cloudflare_load_balancer_monitor.get-root-https.id}"
+  check_regions = ["WNAM", "ENAM", "WEU", "EEU", "SEAS", "NEAS"]
+  origins {
+    name = "www-us"
+    address = "203.0.113.10"
+  }
+  origins {
+    name = "www-asia"
+    address = "198.51.100.15"
+  }
+  description = "www origins"
+  enabled = true
+  minimum_origins = 1
+  notification_email = "you@example.com"
+}
+resource "cloudflare_load_balancer" "www-lb" {
+  zone = "${var.domain}"
+  name = "www-lb"
+  default_pool_ids = ["${cloudflare_load_balancer_pool.www-servers.id}"]
+  fallback_pool_id = "${cloudflare_load_balancer_pool.www-servers.id}"
+  description = "example load balancer"
+  proxied = true
+}

commit 6761a4f754e77322629ba4e90a90a3defa1fd4b6
Author: Me
Date:   Wed Apr 11 11:20:25 2018 -0700

    Step 5 - Add additional 'www' DNS record for Asia data center.

diff --git a/cloudflare.tf b/cloudflare.tf
index 9f25a0c..b92cb6f 100644
--- a/cloudflare.tf
+++ b/cloudflare.tf
@@ -52,3 +52,10 @@ resource "cloudflare_rate_limit" "login-limit" {
   disabled = false
   description = "Block failed login attempts (5 in 1 min) for 5 minutes."
 }
+resource "cloudflare_record" "www-asia" {
+  domain  = "${var.domain}"
+  name    = "www"
+  value   = "198.51.100.15"
+  type    = "A"
+  proxied = true
+}</code></pre>
            
    <div>
      <h4>3. Redeploying the previous configuration</h4>
      <a href="#3-redeploying-the-previous-configuration">
        
      </a>
    </div>
    <p>Imagine that shortly after we <a href="#using-page-rules">deployed the Page Rules</a>, we got a call from the Product team that manages this page: "The URL was only being used by one customer and is no longer needed, so let's drop the security setting and the redirect."</p><p>While you could always edit the config file directly and delete those entries, it's easier to let git do it for us. To begin with, let's ask git to revert the last commit (without rewriting history).</p>
    <div>
      <h5>i. Revert the branch to the previous commit</h5>
      <a href="#i-revert-the-branch-to-the-previous-commit">
        
      </a>
    </div>
    
            <pre><code>$ git revert HEAD~1..HEAD
[master f9a6f7d] Revert "Step 6 - Add two Page Rules."
 1 file changed, 23 deletions(-)

$ git log -2
commit f9a6f7db72ea1437e146050a5e7556052ecc9a1a
Author: Me
Date:   Wed Apr 18 23:28:09 2018 -0700

    Revert "Step 6 - Add two Page Rules."
    
    This reverts commit d4fec164581bec44684a4d59bb80aec1f1da5a6e.

commit d4fec164581bec44684a4d59bb80aec1f1da5a6e
Author: Me
Date:   Wed Apr 18 22:04:52 2018 -0700

    Step 6 - Add two Page Rules.</code></pre>
            
    <div>
      <h5>ii. Preview the changes</h5>
      <a href="#ii-preview-the-changes">
        
      </a>
    </div>
    <p>As expected, Terraform is indicating it will remove the two Page Rules we just created.</p>
            <pre><code>$ terraform plan
...
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  - cloudflare_page_rule.increase-security-on-expensive-page

  - cloudflare_page_rule.redirect-to-new-db-page


Plan: 0 to add, 0 to change, 2 to destroy.</code></pre>
            
    <div>
      <h5>iii. Apply the changes</h5>
      <a href="#iii-apply-the-changes">
        
      </a>
    </div>
    <p>The changes look good, so let's ask Terraform to roll our Cloudflare configuration back.</p>
            <pre><code>$ terraform apply --auto-approve
...
cloudflare_page_rule.redirect-to-new-db-page: Destroying... (ID: c5c40ff2dc12416b5fe4d0541980c591)
cloudflare_page_rule.increase-security-on-expensive-page: Destroying... (ID: 1c13fdb84710c4cc8b11daf7ffcca449)
cloudflare_page_rule.increase-security-on-expensive-page: Destruction complete after 0s
cloudflare_page_rule.redirect-to-new-db-page: Destruction complete after 1s

Apply complete! Resources: 0 added, 0 changed, 2 destroyed.</code></pre>
            <p>Two resources destroyed, as expected. We've rolled back to the previous version.</p>
    <div>
      <h3>Importing existing state and configuration</h3>
      <a href="#importing-existing-state-and-configuration">
        
      </a>
    </div>
    <p>An important point to understand about Terraform is that it is only able to manage configuration that it created, or was explicitly told about after the fact. The reason for this limitation is that Terraform relies on a local <a href="https://www.terraform.io/docs/state/">state file</a> that maps the resource names defined in your configuration file, e.g., <code>cloudflare_load_balancer.www-lb</code> to the IDs generated by Cloudflare's API.</p><p>When Terraform makes calls to Cloudflare's API to create new resources, it persists those IDs to a state file; by default, the <code>terraform.tfstate</code> file in your directory is used, but this can be a remote location as will be discussed in a future blog post. These IDs are later looked up and refreshed when you call <code>terraform plan</code> and <code>terraform apply</code>.</p><p>If you've configured Cloudflare through other means, e.g., by logging into the Cloudflare Dashboard or making <code>curl</code> calls to <code>api.cloudflare.com</code>, Terraform does not (yet) have these resource IDs in the state file. To manage this preexisting configuration you will need to first i) reproduce the configuration in your config file and; ii) import resources one-by-one by providing their IDs and resource names.</p>
    <div>
      <h4>1. Reviewing your current state file</h4>
      <a href="#1-reviewing-your-current-state-file">
        
      </a>
    </div>
    <p>Before importing resources created by other means, let's take a look at how an existing DNS record is represented in the state file.</p>
            <pre><code>$ cat terraform.tfstate | jq '.modules[].resources["cloudflare_record.www"]'
{
  "type": "cloudflare_record",
  "depends_on": [],
  "primary": {
    "id": "c38d3103767284e7cd14d5dad3ab8669",
    "attributes": {
      "created_on": "2018-04-08T00:37:33.76321Z",
      "data.%": "0",
      "domain": "example.com",
      "hostname": "www.example.com",
      "id": "c38d3103767284e7cd14d5dad3ab8669",
      "metadata.%": "2",
      "metadata.auto_added": "false",
      "metadata.managed_by_apps": "false",
      "modified_on": "2018-04-08T00:37:33.76321Z",
      "name": "www",
      "priority": "0",
      "proxiable": "true",
      "proxied": "true",
      "ttl": "1",
      "type": "A",
      "value": "203.0.113.10",
      "zone_id": "e2e6491340be87a3726f91fc4148b126"
    },
    "meta": {
      "schema_version": "1"
    },
    "tainted": false
  },
  "deposed": [],
  "provider": "provider.cloudflare"
}</code></pre>
            <p>As shown in the above JSON, the <code>cloudflare_record</code> resource named "www" has a unique ID of <code>c38d3103767284e7cd14d5dad3ab8669</code>. This ID is what gets interpolated into the API call that Terraform makes to Cloudflare to pull the latest configuration during the plan stage, e.g.,</p>
            <pre><code>GET https://api.cloudflare.com/client/v4/zones/:zone_id/dns_records/c38d3103767284e7cd14d5dad3ab8669</code></pre>
            
    <div>
      <h4>2. Importing existing Cloudflare resources</h4>
      <a href="#2-importing-existing-cloudflare-resources">
        
      </a>
    </div>
    <p>To import an existing record, e.g., another DNS record, you need two things:</p><ol><li><p>The unique identifier that Cloudflare uses to identify the record</p></li><li><p>The resource name to which you wish to map this identifier</p></li></ol>
    <div>
      <h5>i. Download IDs and configuration from api.cloudflare.com</h5>
      <a href="#i-download-ids-and-configuration-from-api-cloudflare-com">
        
      </a>
    </div>
    <p>We start by making an API call to Cloudflare to enumerate the DNS records in our account. The output below has been filtered to show only MX records, as these are what we'll be importing.</p>
            <pre><code>$ curl https://api.cloudflare.com/client/v4/zones/$EXAMPLE_COM_ZONEID/dns_records \
       -H "X-Auth-Email: you@example.com" -H "X-Auth-Key: $CF_API_KEY" \
       -H "Content-Type: application/json" | jq .

{
  "result": [
    {
      "id": "8ea8c36c8530ee01068c65c0ddc4379b",
      "type": "MX",
      "name": "example.com",
      "content": "alt1.aspmx.l.google.com",
      "proxiable": false,
      "proxied": false,
      "ttl": 1,
      "priority": 15,
      "locked": false,
      "zone_id": "e2e6491340be87a3726f91fc4148b126",
      "zone_name": "example.com",
      "modified_on": "2016-11-06T01:11:50.163221Z",
      "created_on": "2016-11-06T01:11:50.163221Z",
      "meta": {
        "auto_added": false,
        "managed_by_apps": false
      }
    },
    {
      "id": "ad0e9ff2479b13c5fbde77a02ea6fa2c",
      "type": "MX",
      "name": "example.com",
      "content": "alt2.aspmx.l.google.com",
      "proxiable": false,
      "proxied": false,
      "ttl": 1,
      "priority": 15,
      "locked": false,
      "zone_id": "e2e6491340be87a3726f91fc4148b126",
      "zone_name": "example.com",
      "modified_on": "2016-11-06T01:12:00.915649Z",
      "created_on": "2016-11-06T01:12:00.915649Z",
      "meta": {
        "auto_added": false,
        "managed_by_apps": false
      }
    },
    {
      "id": "ad6ee69519cd02a0155a56b6d64c278a",
      "type": "MX",
      "name": "example.com",
      "content": "alt3.aspmx.l.google.com",
      "proxiable": false,
      "proxied": false,
      "ttl": 1,
      "priority": 20,
      "locked": false,
      "zone_id": "e2e6491340be87a3726f91fc4148b126",
      "zone_name": "example.com",
      "modified_on": "2016-11-06T01:12:12.899684Z",
      "created_on": "2016-11-06T01:12:12.899684Z",
      "meta": {
        "auto_added": false,
        "managed_by_apps": false
      }
    },
    {
      "id": "baf6655f33738b7fd902630858878206",
      "type": "MX",
      "name": "example.com",
      "content": "alt4.aspmx.l.google.com",
      "proxiable": false,
      "proxied": false,
      "ttl": 1,
      "priority": 20,
      "locked": false,
      "zone_id": "e2e6491340be87a3726f91fc4148b126",
      "zone_name": "example.com",
      "modified_on": "2016-11-06T01:12:22.599272Z",
      "created_on": "2016-11-06T01:12:22.599272Z",
      "meta": {
        "auto_added": false,
        "managed_by_apps": false
      }
    },
    {
      "id": "a96d72b3c6afe3077f9e9c677fb0a556",
      "type": "MX",
      "name": "example.com",
      "content": "aspmx.lo.google.com",
      "proxiable": false,
      "proxied": false,
      "ttl": 1,
      "priority": 10,
      "locked": false,
      "zone_id": "e2e6491340be87a3726f91fc4148b126",
      "zone_name": "example.com",
      "modified_on": "2016-11-06T01:11:27.700386Z",
      "created_on": "2016-11-06T01:11:27.700386Z",
      "meta": {
        "auto_added": false,
        "managed_by_apps": false
      }
    },

    ...
  ]
}</code></pre>
            
    <div>
      <h5>ii. Create Terraform configuration for existing records</h5>
      <a href="#ii-create-terraform-configuration-for-existing-records">
        
      </a>
    </div>
    <p>In the previous step, we found 5 MX records that we wish to add.</p><table><tr><td><p><b>ID</b></p></td><td><p><b>Priority</b></p></td><td><p><b>Content</b></p></td></tr><tr><td><p>a96d72b3c6afe3077f9e9c677fb0a556</p></td><td><p>10</p></td><td><p>aspmx.lo.google.com</p></td></tr><tr><td><p>8ea8c36c8530ee01068c65c0ddc4379b</p></td><td><p>15</p></td><td><p>alt1.aspmx.l.google.com</p></td></tr><tr><td><p>ad0e9ff2479b13c5fbde77a02ea6fa2c</p></td><td><p>15</p></td><td><p>alt2.aspmx.l.google.com</p></td></tr><tr><td><p>ad6ee69519cd02a0155a56b6d64c278a</p></td><td><p>20</p></td><td><p>alt3.aspmx.l.google.com</p></td></tr><tr><td><p>baf6655f33738b7fd902630858878206</p></td><td><p>20</p></td><td><p>alt4.aspmx.l.google.com</p></td></tr></table><p>Before importing, we need to create Terraform configuration and give each record a unique name that can be referenced during the import.</p>
            <pre><code>$ cat &gt;&gt; cloudflare.tf &lt;&lt;'EOF'
resource "cloudflare_record" "mx-10" {
  domain  = "${var.domain}"
  name    = "${var.domain}"
  value   = "aspmx.lo.google.com"
  type    = "MX"
  priority = "10"
}
resource "cloudflare_record" "mx-15-1" {
  domain  = "${var.domain}"
  name    = "${var.domain}"
  value   = "alt1.aspmx.l.google.com"
  type    = "MX"
  priority = "15"
}
resource "cloudflare_record" "mx-15-2" {
  domain  = "${var.domain}"
  name    = "${var.domain}"
  value   = "alt2.aspmx.l.google.com"
  type    = "MX"
  priority = "15"
}
resource "cloudflare_record" "mx-20-1" {
  domain  = "${var.domain}"
  name    = "${var.domain}"
  value   = "alt3.aspmx.l.google.com"
  type    = "MX"
  priority = "20"
}
resource "cloudflare_record" "mx-20-2" {
  domain  = "${var.domain}"
  name    = "${var.domain}"
  value   = "alt4.aspmx.l.google.com"
  type    = "MX"
  priority = "20"
}
EOF</code></pre>
            
    <div>
      <h5>iii. Import resources into Terraform state</h5>
      <a href="#iii-import-resources-into-terraform-state">
        
      </a>
    </div>
    <p>Before we import the records, let's look at what would happen if we ran a <code>terraform apply</code>.</p>
            <pre><code>$ terraform plan | grep Plan
Plan: 5 to add, 0 to change, 0 to destroy.</code></pre>
            <p>Terraform does not know that these records already exist on Cloudflare, so until the import completes it will attempt to create them as new records. Below we import them one-by-one, specifying the name of the resource and the <code>zoneName/resourceID</code> returned by api.cloudflare.com.</p>
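            <p>Each import pairs the Terraform resource name with the zone name and record ID, joined by a slash. The general shape for a DNS record in this version of the provider is:</p>

```
terraform import cloudflare_record.<resource-name> <zoneName>/<recordID>
```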
            <pre><code>$ terraform import cloudflare_record.mx-10 example.com/a96d72b3c6afe3077f9e9c677fb0a556
cloudflare_record.mx-10: Importing from ID "example.com/a96d72b3c6afe3077f9e9c677fb0a556"...
cloudflare_record.mx-10: Import complete!
  Imported cloudflare_record (ID: a96d72b3c6afe3077f9e9c677fb0a556)
cloudflare_record.mx-10: Refreshing state... (ID: a96d72b3c6afe3077f9e9c677fb0a556)

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

$ terraform import cloudflare_record.mx-15-1 example.com/8ea8c36c8530ee01068c65c0ddc4379b
cloudflare_record.mx-15-1: Importing from ID "example.com/8ea8c36c8530ee01068c65c0ddc4379b"...
cloudflare_record.mx-15-1: Import complete!
  Imported cloudflare_record (ID: 8ea8c36c8530ee01068c65c0ddc4379b)
cloudflare_record.mx-15-1: Refreshing state... (ID: 8ea8c36c8530ee01068c65c0ddc4379b)

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

$ terraform import cloudflare_record.mx-15-2 example.com/ad0e9ff2479b13c5fbde77a02ea6fa2c
cloudflare_record.mx-15-2: Importing from ID "example.com/ad0e9ff2479b13c5fbde77a02ea6fa2c"...
cloudflare_record.mx-15-2: Import complete!
  Imported cloudflare_record (ID: ad0e9ff2479b13c5fbde77a02ea6fa2c)
cloudflare_record.mx-15-2: Refreshing state... (ID: ad0e9ff2479b13c5fbde77a02ea6fa2c)

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

$ terraform import cloudflare_record.mx-20-1 example.com/ad6ee69519cd02a0155a56b6d64c278a
cloudflare_record.mx-20-1: Importing from ID "example.com/ad6ee69519cd02a0155a56b6d64c278a"...
cloudflare_record.mx-20-1: Import complete!
  Imported cloudflare_record (ID: ad6ee69519cd02a0155a56b6d64c278a)
cloudflare_record.mx-20-1: Refreshing state... (ID: ad6ee69519cd02a0155a56b6d64c278a)

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

$ terraform import cloudflare_record.mx-20-2 example.com/baf6655f33738b7fd902630858878206
cloudflare_record.mx-20-2: Importing from ID "example.com/baf6655f33738b7fd902630858878206"...
cloudflare_record.mx-20-2: Import complete!
  Imported cloudflare_record (ID: baf6655f33738b7fd902630858878206)
cloudflare_record.mx-20-2: Refreshing state... (ID: baf6655f33738b7fd902630858878206)

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.</code></pre>
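            <p>Typing each import by hand gets tedious if you have many records. As a minimal sketch (using the same resource names and record IDs as above), a small POSIX shell loop can generate the commands for you to review and then pipe to <code>sh</code>:</p>

```shell
#!/bin/sh
# Sketch: generate the five `terraform import` commands above from a
# name/ID table instead of typing them one by one. Review the output,
# then pipe it to `sh` to execute the imports.
cmds=$(while read -r resource id; do
  printf 'terraform import cloudflare_record.%s example.com/%s\n' "$resource" "$id"
done <<'EOF'
mx-10 a96d72b3c6afe3077f9e9c677fb0a556
mx-15-1 8ea8c36c8530ee01068c65c0ddc4379b
mx-15-2 ad0e9ff2479b13c5fbde77a02ea6fa2c
mx-20-1 ad6ee69519cd02a0155a56b6d64c278a
mx-20-2 baf6655f33738b7fd902630858878206
EOF
)
printf '%s\n' "$cmds"
```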
            <p>Now when we run <code>terraform plan</code> it no longer wants to (re-)create the above records.</p>
            <pre><code>$ terraform plan | grep changes
No changes. Infrastructure is up-to-date.</code></pre>
            
    <div>
      <h3>Wrapping up</h3>
      <a href="#wrapping-up">
        
      </a>
    </div>
    <p>That's it for today! We covered the <a href="#adding-load-balancing">Load Balancing</a> and <a href="#using-page-rules">Page Rules</a> resources, as well as demonstrated how to <a href="#reviewing-and-rolling-back-changes">review and roll back configuration changes</a>, and <a href="#importing-existing-state-and-configuration">import state</a>.</p><p>Stay tuned for future Terraform blog posts, where we plan to show how to manage state effectively in a group setting, go multi-cloud with Terraform, and much more.</p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Terraform]]></category>
            <category><![CDATA[Load Balancing]]></category>
            <category><![CDATA[Page Rules]]></category>
            <category><![CDATA[HashiCorp]]></category>
            <category><![CDATA[Programming]]></category>
            <guid isPermaLink="false">1MMQdxQ3AKGQ3JVvoMYFgB</guid>
            <dc:creator>Patrick R. Donahue</dc:creator>
        </item>
        <item>
            <title><![CDATA[Getting started with Terraform and Cloudflare (Part 1 of 2)]]></title>
            <link>https://blog.cloudflare.com/getting-started-with-terraform-and-cloudflare-part-1/</link>
            <pubDate>Fri, 27 Apr 2018 20:18:10 GMT</pubDate>
            <description><![CDATA[ Write code to manage your Cloudflare configuration using Terraform, and store it in your source code repository of choice for versioned history and rollback. ]]></description>
            <content:encoded><![CDATA[ <p><i>You can read Part 2 of Getting Started with Terraform </i><a href="/getting-started-with-terraform-and-cloudflare-part-2/"><i>here</i></a><i>.</i></p><p>As a Product Manager at Cloudflare, I spend quite a bit of my time talking to customers. One of the most common topics I'm asked about is configuration management. Developers want to know how they can write code to manage their Cloudflare config, without interacting with our APIs or UI directly.</p><p>Following best practices in software development, they want to store configuration in their own source code repository (be it <a href="https://github.com">GitHub</a> or otherwise), institute a change management process that includes code review, and be able to track their configuration versions and history over time. Additionally, they want the ability to quickly and easily roll back changes when required.</p><p>When I first spoke with our engineering teams about these requirements, they gave me the best answer a Product Manager could hope to hear: there's already an open source tool out there that does all of that (and more), with a strong community and plugin system to boot—it's called <a href="https://terraform.io">Terraform</a>.</p><p>This blog post is about getting started using Terraform with Cloudflare and the new version 1.0 of our Terraform provider. A "provider" is simply a plugin that knows how to talk to a specific set of APIs—in this case, Cloudflare, but there are also providers available for AWS, Azure, Google Cloud, Kubernetes, VMware, and <a href="https://www.terraform.io/docs/providers/">many more services</a>. Today's release extends our existing provider that previously only supported DNS records with support for Zone Settings, Rate Limiting, Load Balancing, and Page Rules.</p>
    <div>
      <h3>Before and after Terraform</h3>
      <a href="#before-and-after-terraform">
        
      </a>
    </div>
    <p>Before we jump into some real-world examples of using Terraform with Cloudflare, here is a set of diagrams that depicts the paradigm shift.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2VfXSQWtmvcXHDMG27XO5T/018f57def84f3a2d66f33d29a8a8727d/before-terraform-_3x.png" />
            
            </figure><p>Before Terraform, you needed to learn how to use the configuration interfaces or <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">APIs</a> of each cloud and edge provider, e.g., Google Cloud and Cloudflare below. Additionally, your ability to store your configuration in your own source code control system depends on vendor-specific configuration export mechanisms (which may or may not exist).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1PfbFLbgtZrfToqvJNr4SD/7b9be75ae4d633f6e54ab7ecdc9f3b5f/with-terraform-_3x-2.png" />
            
            </figure><p>With Terraform, you can store and version your configuration in GitHub (or your source code control system of choice). Once you learn Terraform's configuration syntax, you don't need to bother learning how to use providers' UIs or APIs—you just tell Terraform what you want and it figures out the rest.</p>
    <div>
      <h3>Installing Terraform</h3>
      <a href="#installing-terraform">
        
      </a>
    </div>
    <p>The installation process for Terraform is extremely simple, as it ships as a single binary file. Official instructions for installing Terraform can be found <a href="https://www.terraform.io/intro/getting-started/install.html">here</a>; for the purposes of this example, we'll show how to do so on macOS using <a href="https://brew.sh/">Homebrew</a>:</p>
            <pre><code>$ brew install terraform
==&gt; Downloading https://homebrew.bintray.com/bottles/terraform-0.11.7.sierra.bottle.tar.gz
######################################################################## 100.0%
==&gt; Pouring terraform-0.11.7.sierra.bottle.tar.gz
🍺  /usr/local/Cellar/terraform/0.11.7: 6 files, 80.2MB

$ terraform version
Terraform v0.11.7</code></pre>
            <p>The following instructions are adapted from the <a href="https://developers.cloudflare.com/terraform/">Cloudflare Developers - Terraform documentation</a> site, which includes a <a href="https://developers.cloudflare.com/terraform/tutorial/">full tutorial</a> and coverage of <a href="https://developers.cloudflare.com/terraform/advanced-topics/">advanced topics</a>.</p><p>If you're interested in seeing how to use a specific Terraform resource or technique, click on one of the following anchor links:</p><ul><li><p><a href="#installingterraform">Installing Terraform</a></p></li><li><p><a href="#helloworld">Hello, world!</a></p></li><li><p><a href="#trackingchangehistory">Tracking Change History</a></p></li><li><p><a href="#applyingzonesettings">Applying Zone Settings</a></p></li><li><p><a href="#managingratelimits">Managing Rate Limits</a></p></li><li><p>Load Balancing Resource (next post!)</p></li><li><p>Page Rules Resource (next post!)</p></li><li><p>Reviewing and Rolling Back Configuration (next post!)</p></li><li><p>Importing Existing State and Configuration (next post!)</p></li></ul>
    <div>
      <h3>Hello, world!</h3>
      <a href="#hello-world">
        
      </a>
    </div>
    <p>Now that Terraform is installed, it's time to start using it. Let's assume you have a web server for your domain that's accessible on <code>203.0.113.10</code>. You just signed up your domain, <code>example.com</code>, on Cloudflare and want to manage everything with Terraform.</p>
    <div>
      <h4>1. Define your first Terraform config file</h4>
      <a href="#1-define-your-first-terraform-config-file">
        
      </a>
    </div>
    <p>First we'll create an initial Terraform config file. Any files ending in <code>.tf</code> will be processed by Terraform. As your configuration gets more complex, you'll want to split the config into separate files and modules, but for now we'll proceed with a single file:</p>
            <pre><code>$ cat &gt; cloudflare.tf &lt;&lt;'EOF'
provider "cloudflare" {
  email = "you@example.com"
  token = "your-api-key"
}

variable "domain" {
  default = "example.com"
}

resource "cloudflare_record" "www" {
  domain  = "${var.domain}"
  name    = "www"
  value   = "203.0.113.10"
  type    = "A"
  proxied = true
}
EOF</code></pre>
            
    <div>
      <h4>2. Initialize Terraform and the Cloudflare provider</h4>
      <a href="#2-initialize-terraform-and-the-cloudflare-provider">
        
      </a>
    </div>
    <p>Now that you've created your basic configuration in HCL (HashiCorp Configuration Language, named for HashiCorp, the maker of Terraform), let's initialize Terraform and ask it to apply the configuration to Cloudflare.</p>
            <pre><code>$ terraform init

Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "cloudflare" (1.0.0)...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.cloudflare: version = "~&gt; 1.0"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
</code></pre>
            <p>When you run <code>terraform init</code>, any required plugins, such as the Cloudflare Terraform provider, are automatically downloaded and saved locally to a <code>.terraform</code> directory:</p>
            <pre><code>$ find .terraform/
.terraform/
.terraform//plugins
.terraform//plugins/darwin_amd64
.terraform//plugins/darwin_amd64/lock.json
.terraform//plugins/darwin_amd64/terraform-provider-cloudflare_v1.0.0_x4</code></pre>
            
    <div>
      <h4>3. Review the execution plan</h4>
      <a href="#3-review-the-execution-plan">
        
      </a>
    </div>
    <p>With the Cloudflare provider installed, let's ask Terraform to show the changes it's planning to make to your Cloudflare account so you can confirm it matches the configuration you intended:</p>
            <pre><code>$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + cloudflare_record.www
      id:          &lt;computed&gt;
      created_on:  &lt;computed&gt;
      domain:      "example.com"
      hostname:    &lt;computed&gt;
      metadata.%:  &lt;computed&gt;
      modified_on: &lt;computed&gt;
      name:        "www"
      proxiable:   &lt;computed&gt;
      proxied:     "true"
      ttl:         &lt;computed&gt;
      type:        "A"
      value:       "203.0.113.10"
      zone_id:     &lt;computed&gt;


Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.</code></pre>
            <p>As you can see in the above "execution plan", Terraform is going to create a new DNS record, as requested. Values that you've explicitly specified are displayed, e.g., the value of the <code>A</code> record (<code>203.0.113.10</code>), while values that are derived from other API calls (e.g., looking up the <code>zone_id</code>) or returned after the object is created are displayed as <code>&lt;computed&gt;</code>.</p>
    <div>
      <h4>4. Applying Your Changes</h4>
      <a href="#4-applying-your-changes">
        
      </a>
    </div>
    <p>The plan command is important, as it allows you to preview the changes for accuracy before actually making them. Once you're comfortable with the execution plan, it's time to apply it:</p>
            <pre><code>$ terraform apply --auto-approve
cloudflare_record.www: Creating...
  created_on:  "" =&gt; "&lt;computed&gt;"
  domain:      "" =&gt; "example.com"
  hostname:    "" =&gt; "&lt;computed&gt;"
  metadata.%:  "" =&gt; "&lt;computed&gt;"
  modified_on: "" =&gt; "&lt;computed&gt;"
  name:        "" =&gt; "www"
  proxiable:   "" =&gt; "&lt;computed&gt;"
  proxied:     "" =&gt; "true"
  ttl:         "" =&gt; "&lt;computed&gt;"
  type:        "" =&gt; "A"
  value:       "" =&gt; "203.0.113.10"
  zone_id:     "" =&gt; "&lt;computed&gt;"
cloudflare_record.www: Creation complete after 1s (ID: c38d3103767284e7cd14d5dad3ab8668)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
</code></pre>
            <p>Note that I specified <code>--auto-approve</code> on the command line for briefer output; without this flag, Terraform will show you the output of <code>terraform plan</code> and then ask for confirmation before applying it.</p>
    <div>
      <h4>Verify the results</h4>
      <a href="#verify-the-results">
        
      </a>
    </div>
    <p>Logging back into the Cloudflare Dashboard and selecting the DNS tab, I can see the record that was created by Terraform:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3eWL2qsaFsPrzVG9d3A4tD/9a054125bc8170e171aa654cc0dfd5b0/Verify-DNS.png" />
            
            </figure><p>If you'd like to see the full results returned from the API call (including the default values that you didn't specify but let Terraform compute), you can run <code>terraform show</code>:</p>
            <pre><code>$ terraform show
cloudflare_record.www:
  id = c38d3103767284e7cd14d5dad3ab8668
  created_on = 2018-04-08T00:37:33.76321Z
  data.% = 0
  domain = example.com
  hostname = www.example.com
  metadata.% = 2
  metadata.auto_added = false
  metadata.managed_by_apps = false
  modified_on = 2018-04-08T00:37:33.76321Z
  name = www
  priority = 0
  proxiable = true
  proxied = true
  ttl = 1
  type = A
  value = 203.0.113.10
  zone_id = e2e6391340be87a3726f91fc4148b122</code></pre>
            
            <pre><code>$ curl https://www.example.com
Hello, this is 203.0.113.10!</code></pre>
            
    <div>
      <h3>Tracking change history</h3>
      <a href="#tracking-change-history">
        
      </a>
    </div>
    <p>In the <code>terraform apply</code> step above, you created and applied some basic Cloudflare configuration. Terraform was able to apply this configuration to your account because you provided your email address and API token at the top of the cloudflare.tf file:</p>
            <pre><code>$ head -n4 cloudflare.tf 
provider "cloudflare" {
  email = "you@example.com"
  token = "your-api-key"
}</code></pre>
            <p>We're now going to store your configuration in GitHub where it can be tracked, peer-reviewed, and rolled back to as needed. But before we do so, we're going to remove your credentials from the Terraform config file so it doesn't get committed to a repository.</p>
    <div>
      <h4>1. Use environment variables for authentication</h4>
      <a href="#1-use-environment-variables-for-authentication">
        
      </a>
    </div>
    <p>As a good security practice we need to remove your Cloudflare credentials from anything that will be committed to a repository. The Cloudflare Terraform provider supports reading these values from the <code>CLOUDFLARE_EMAIL</code> and <code>CLOUDFLARE_TOKEN</code> environment variables, so all we need to do is:</p>
            <pre><code>$ sed -ie 's/^.*email =.*$/  # email pulled from $CLOUDFLARE_EMAIL/' cloudflare.tf
$ sed -ie 's/^.*token =.*$/  # token pulled from $CLOUDFLARE_TOKEN/' cloudflare.tf

$ head -n4 cloudflare.tf 
provider "cloudflare" {
  # email pulled from $CLOUDFLARE_EMAIL
  # token pulled from $CLOUDFLARE_TOKEN
}

$ export CLOUDFLARE_EMAIL=you@example.com
$ export CLOUDFLARE_TOKEN=your-api-key</code></pre>
            <p>Note that you need to leave the empty provider definition in the file, so that Terraform knows to install the Cloudflare plugin.</p><p>After completing the above step, it's a good idea to make sure that you can still authenticate to Cloudflare. By running <code>terraform plan</code> we can get Terraform to pull the current state (which requires a valid email and API key):</p>
            <pre><code>$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

cloudflare_record.www: Refreshing state... (ID: c38d3102767284e7ca14d5dad3ab8b69)

------------------------------------------------------------------------

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.</code></pre>
    <div>
      <h4>2. Store your configuration in GitHub</h4>
      <a href="#2-store-your-configuration-in-github">
        
      </a>
    </div>
    <p>Now that credentials have been removed, it's time to initialize a git repository with your Cloudflare configuration and then push it to GitHub.</p><p>First we'll create the GitHub repository to store the config. This can be done via the GitHub UI or with a simple API call:</p>
            <pre><code>$ export GITHUB_USER=your-github-user
$ export GITHUB_TOKEN=your-github-token

$ export GITHUB_URL=$(curl -sSXPOST https://api.github.com/user/repos?access_token=$GITHUB_TOKEN -H 'Content-Type: application/json' \
-d '{"name": "cf-config", "private":"true"}' 2&gt;/dev/null | jq -r .ssh_url)

$ echo $GITHUB_URL
git@github.com:$GITHUB_USER/cf-config.git</code></pre>
            <p>Now we'll initialize a git repository and make our first commit:</p>
            <pre><code>$ git init
Initialized empty Git repository in $HOME/cf-config/.git/

$ git remote add origin $GITHUB_URL
$ git add cloudflare.tf

$ git commit -m "Step 2 - Initial commit with webserver definition."
[master (root-commit) 5acea17] Step 2 - Initial commit with webserver definition.
 1 file changed, 16 insertions(+)
 create mode 100644 cloudflare.tf</code></pre>
            <p>An astute reader may have noticed that we did not commit the <code>.terraform</code> directory, nor did we commit the <code>terraform.tfstate</code> file. The former was not committed because this repository may be used on a different architecture, and the plugins contained in this directory are built for the system on which <code>terraform init</code> was run. The latter was not committed because i) it may eventually contain sensitive strings, and ii) it is not a good way to keep state in sync, as HashiCorp explains.</p><p>To prevent git from bugging us about these files, let's add them to a new <code>.gitignore</code> file, commit it, and push everything to GitHub:</p>
            <pre><code>$ cat &gt; .gitignore &lt;&lt;'EOF'
.terraform/
terraform.tfstate*
EOF

$ git add .gitignore

$ git commit -m "Step 2 - Ignore terraform plugin directory and state file."
[master 494c6d6] Step 2 - Ignore terraform plugin directory and state file.
 1 file changed, 2 insertions(+)
 create mode 100644 .gitignore

$ git push
Counting objects: 6, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (6/6), 762 bytes | 0 bytes/s, done.
Total 6 (delta 0), reused 0 (delta 0)
To git@github.com:$GITHUB_USER/cf-config.git
 * [new branch]      master -&gt; master</code></pre>
            
    <div>
      <h3>Applying zone settings</h3>
      <a href="#applying-zone-settings">
        
      </a>
    </div>
    <p>Now that you've got a basic website proxied through Cloudflare, it's time to use Terraform to adjust some additional settings on your zone. Below we'll configure some optional HTTPS settings, and then push the updated configuration to GitHub for posterity.</p><p>We'll use a new git branch for the changes, and then merge it into master before applying. On a team, you might consider using this step as an opportunity for others to review your change before merging and deploying it. Or you may integrate Terraform into your <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">CI/CD system</a> to perform tests automatically using another Cloudflare domain.</p>
    <div>
      <h4>1. Create a new branch and append the new zone settings</h4>
      <a href="#1-create-a-new-branch-and-append-the-new-zone-settings">
        
      </a>
    </div>
    <p>Here, we modify the Terraform configuration to enable the following settings: <a href="https://www.cloudflare.com/learning-resources/tls-1-3/">TLS 1.3</a>, <a href="/how-to-make-your-site-https-only/">Always Use HTTPS</a>, <a href="/introducing-strict-ssl-protecting-against-a-man-in-the-middle-attack-on-origin-traffic/">Strict SSL mode</a>, and the <a href="https://www.cloudflare.com/waf/">Cloudflare WAF</a>. Strict mode requires a valid SSL certificate on your origin, so be sure to use the <a href="/cloudflare-ca-encryption-origin/">Cloudflare Origin CA</a> to generate one.</p>
            <pre><code>$ git checkout -b step3-https
Switched to a new branch 'step3-https'

$ cat &gt;&gt; cloudflare.tf &lt;&lt;'EOF'

resource "cloudflare_zone_settings_override" "example-com-settings" {
  name = "${var.domain}"

  settings {
    tls_1_3 = "on"
    automatic_https_rewrites = "on"
    ssl = "strict"
    waf = "on"
  }
}
EOF</code></pre>
            
    <div>
      <h4>2. Preview and merge the changes</h4>
      <a href="#2-preview-and-merge-the-changes">
        
      </a>
    </div>
    <p>Let's take a look at what Terraform is proposing before we apply it. We filter the <code>terraform plan</code> output to ignore values that will be "computed" (in this case, settings that will be left at their default values). For brevity from here on out, we'll omit some extraneous Terraform output; if you'd like to see the output exactly as run, please see the <a href="https://developers.cloudflare.com/terraform/tutorial/">full tutorial</a>.</p>
            <pre><code>$ terraform plan | grep -v "&lt;computed&gt;"
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

cloudflare_record.www: Refreshing state... (ID: c38d3103767284e7cd14d5dad3ab8668)

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + cloudflare_zone_settings_override.example-com-settings
      name:                                   "example.com"
      settings.#:                             "1"
      settings.0.automatic_https_rewrites:    "on"
      settings.0.ssl:                         "strict"
      settings.0.tls_1_3:                     "on"
      settings.0.waf:                         "on"


Plan: 1 to add, 0 to change, 0 to destroy.</code></pre>
            <p>The proposed changes look good, so we'll merge them into master and then apply them with <code>terraform apply</code>. When working on a team, you may want to require pull requests and use this opportunity to peer review any proposed configuration changes.</p>
            <pre><code>$ git add cloudflare.tf
$ git commit -m "Step 3 - Enable TLS 1.3, Always Use HTTPS, and SSL Strict mode."
[step3-https d540600] Step 3 - Enable TLS 1.3, Always Use HTTPS, and SSL Strict mode.
 1 file changed, 11 insertions(+)

$ git checkout master
Switched to branch 'master'

$ git merge step3-https
Updating d26f40b..d540600
Fast-forward
 cloudflare.tf | 11 +++++++++++
 1 file changed, 11 insertions(+)

$ git push
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 501 bytes | 0 bytes/s, done.
Total 3 (delta 1), reused 0 (delta 0)
remote: Resolving deltas: 100% (1/1), completed with 1 local object.
To git@github.com:$GITHUB_USER/cf-config.git
   d26f40b..d540600  master -&gt; master</code></pre>
            
    <div>
      <h4>3. Apply and verify the changes</h4>
      <a href="#3-apply-and-verify-the-changes">
        
      </a>
    </div>
    <p>Before applying the changes, let's see if we can connect with TLS 1.3. Hint: we should <i>not</i> be able to with default settings. If you want to follow along with this test, you'll need to compile curl against BoringSSL.</p>
            <pre><code>$ curl -v --tlsv1.3 https://www.example.com 2&gt;&amp;1 | grep "SSL connection\|error"
* error:1000042e:SSL routines:OPENSSL_internal:TLSV1_ALERT_PROTOCOL_VERSION
curl: (35) error:1000042e:SSL routines:OPENSSL_internal:TLSV1_ALERT_PROTOCOL_VERSION</code></pre>
            <p>As shown above, we receive an error, as TLS 1.3 is not yet enabled on your zone. Let's enable it by running <code>terraform apply</code> and try again:</p>
            <pre><code>$ terraform apply --auto-approve
cloudflare_record.www: Refreshing state... (ID: c38d3103767284e7cd14d5dad3ab8668)
cloudflare_zone_settings_override.example-com-settings: Creating...
  initial_settings.#:                     "" =&gt; "&lt;computed&gt;"
  initial_settings_read_at:               "" =&gt; "&lt;computed&gt;"
  name:                                   "" =&gt; "example.com"
  readonly_settings.#:                    "" =&gt; "&lt;computed&gt;"
  settings.#:                             "" =&gt; "1"
  settings.0.advanced_ddos:               "" =&gt; "&lt;computed&gt;"
  settings.0.always_online:               "" =&gt; "&lt;computed&gt;"
  settings.0.always_use_https:            "" =&gt; "&lt;computed&gt;"
  settings.0.automatic_https_rewrites:    "" =&gt; "on"
  settings.0.brotli:                      "" =&gt; "&lt;computed&gt;"
  settings.0.browser_cache_ttl:           "" =&gt; "&lt;computed&gt;"
  settings.0.browser_check:               "" =&gt; "&lt;computed&gt;"
  settings.0.cache_level:                 "" =&gt; "&lt;computed&gt;"
  settings.0.challenge_ttl:               "" =&gt; "&lt;computed&gt;"
  settings.0.cname_flattening:            "" =&gt; "&lt;computed&gt;"
  settings.0.development_mode:            "" =&gt; "&lt;computed&gt;"
  settings.0.edge_cache_ttl:              "" =&gt; "&lt;computed&gt;"
  settings.0.email_obfuscation:           "" =&gt; "&lt;computed&gt;"
  settings.0.hotlink_protection:          "" =&gt; "&lt;computed&gt;"
  settings.0.http2:                       "" =&gt; "&lt;computed&gt;"
  settings.0.ip_geolocation:              "" =&gt; "&lt;computed&gt;"
  settings.0.ipv6:                        "" =&gt; "&lt;computed&gt;"
  settings.0.max_upload:                  "" =&gt; "&lt;computed&gt;"
  settings.0.minify.#:                    "" =&gt; "&lt;computed&gt;"
  settings.0.mirage:                      "" =&gt; "&lt;computed&gt;"
  settings.0.mobile_redirect.#:           "" =&gt; "&lt;computed&gt;"
  settings.0.opportunistic_encryption:    "" =&gt; "&lt;computed&gt;"
  settings.0.origin_error_page_pass_thru: "" =&gt; "&lt;computed&gt;"
  settings.0.polish:                      "" =&gt; "&lt;computed&gt;"
  settings.0.prefetch_preload:            "" =&gt; "&lt;computed&gt;"
  settings.0.privacy_pass:                "" =&gt; "&lt;computed&gt;"
  settings.0.pseudo_ipv4:                 "" =&gt; "&lt;computed&gt;"
  settings.0.response_buffering:          "" =&gt; "&lt;computed&gt;"
  settings.0.rocket_loader:               "" =&gt; "&lt;computed&gt;"
  settings.0.security_header.#:           "" =&gt; "&lt;computed&gt;"
  settings.0.security_level:              "" =&gt; "&lt;computed&gt;"
  settings.0.server_side_exclude:         "" =&gt; "&lt;computed&gt;"
  settings.0.sha1_support:                "" =&gt; "&lt;computed&gt;"
  settings.0.sort_query_string_for_cache: "" =&gt; "&lt;computed&gt;"
  settings.0.ssl:                         "" =&gt; "strict"
  settings.0.tls_1_2_only:                "" =&gt; "&lt;computed&gt;"
  settings.0.tls_1_3:                     "" =&gt; "on"
  settings.0.tls_client_auth:             "" =&gt; "&lt;computed&gt;"
  settings.0.true_client_ip_header:       "" =&gt; "&lt;computed&gt;"
  settings.0.waf:                         "" =&gt; "on"
  settings.0.webp:                        "" =&gt; "&lt;computed&gt;"
  settings.0.websockets:                  "" =&gt; "&lt;computed&gt;"
  zone_status:                            "" =&gt; "&lt;computed&gt;"
  zone_type:                              "" =&gt; "&lt;computed&gt;"
cloudflare_zone_settings_override.example-com-settings: Creation complete after 2s (ID: e2e6491340be87a3726f91fc4148b125)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.</code></pre>
            <p>Now we can try the same command as above, and see that it succeeds. Niiice, TLS 1.3!</p>
            <pre><code>$ curl -v --tlsv1.3 https://www.example.com 2&gt;&amp;1 | grep "SSL connection\|error"
* SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256</code></pre>
            
    <div>
      <h3>Managing rate limits</h3>
      <a href="#managing-rate-limits">
        
      </a>
    </div>
    <p><i>Before proceeding, make sure that your account is enabled for Rate Limiting. If you're on an Enterprise plan, you should ask your Customer Success Manager to do this; otherwise, you can subscribe to Rate Limiting within the Cloudflare Dashboard.</i></p><p>With our zone settings locked down, and our site starting to get some more attention, it's unfortunately begun attracting some of the less scrupulous characters on the Internet. Our server access logs show attempts to brute force our login page at <code>https://www.example.com/login</code>. Let's see what we can do with Cloudflare's <a href="https://www.cloudflare.com/application-services/products/rate-limiting/">rate limiting product</a> to put a stop to these efforts.</p>
    <div>
      <h4>1. Create a new branch and append the rate limiting settings</h4>
      <a href="#1-create-a-new-branch-and-append-the-rate-limiting-settings">
        
      </a>
    </div>
    <p>After creating a new branch we specify the rate limiting rule:</p>
            <pre><code>$ git checkout -b step4-ratelimit
Switched to a new branch 'step4-ratelimit'

$ cat &gt;&gt; cloudflare.tf &lt;&lt;'EOF'
resource "cloudflare_rate_limit" "login-limit" {
  zone = "${var.domain}"

  threshold = 5
  period = 60
  match {
    request {
      url_pattern = "${var.domain}/login"
      schemes = ["HTTP", "HTTPS"]
      methods = ["POST"]
    }
    response {
      statuses = [401, 403]
      origin_traffic = true
    }
  }
  action {
    mode = "simulate"
    timeout = 300
    response {
      content_type = "text/plain"
      body = "You have failed to login 5 times in a 60 second period and will be blocked from attempting to login again for the next 5 minutes."
    }
  }
  disabled = false
  description = "Block failed login attempts (5 in 1 min) for 5 minutes."
}
EOF</code></pre>
            <p>This rule is a bit more complex than the zone settings rule, so let's break it down:</p>
            <pre><code>resource "cloudflare_rate_limit" "login-limit" {
  zone = "${var.domain}"

  threshold = 5
  period = 60</code></pre>
            <p>The threshold is an integer count of how many times an event (defined by the match block below) has to be detected in the period before the rule takes action. The period is measured in seconds, so the above rule says to take action if the match fires 5 times in 60 seconds.</p>
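            <p>To build some intuition for how <code>threshold</code> and <code>period</code> interact, here's a toy sliding-window counter in Python. This is purely illustrative and says nothing about how Cloudflare's edge actually implements rate limiting:</p>

```python
import time
from collections import defaultdict, deque

class WindowCounter:
    """Toy sliding-window counter: fires once `threshold` matching
    events are seen within `period` seconds for a given client.
    (Illustrative only; not Cloudflare's implementation.)"""

    def __init__(self, threshold=5, period=60):
        self.threshold = threshold
        self.period = period
        self.events = defaultdict(deque)  # client -> timestamps of matches

    def record_match(self, client, now=None):
        """Record one matching request; return True when the rule fires."""
        now = time.monotonic() if now is None else now
        q = self.events[client]
        q.append(now)
        while q and now - q[0] > self.period:  # forget stale matches
            q.popleft()
        return len(q) >= self.threshold
```

<p>Five matches inside a 60-second window trip the rule; five failed logins spread 100 seconds apart never do, because older events age out of the window before the count reaches the threshold.</p>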
            <pre><code>  match {
    request {
      url_pattern = "${var.domain}/login"
      schemes = ["HTTP", "HTTPS"]
      methods = ["POST"]
    }
    response {
      statuses = [401, 403]
      origin_traffic = true
    }
  }</code></pre>
            <p>The match block tells the Cloudflare edge what to be on the lookout for, i.e., HTTP or HTTPS POST requests to <code>https://www.example.com/login</code>. We further restrict the match to HTTP <code>401/Unauthorized</code> or <code>403/Forbidden</code> response codes returned from the origin.</p>
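            <p>One way to think about the match block (again, just a mental model, not the edge implementation) is as a predicate over the request and the origin's response:</p>

```python
def rule_matches(request, response):
    """Toy predicate for the match block above (illustrative only):
    the rule counts HTTP/HTTPS POSTs to the login URL that the
    origin answered with a 401 or 403."""
    return (
        request["scheme"] in ("HTTP", "HTTPS")
        and request["method"] == "POST"
        and request["url"] == "example.com/login"
        and response["status"] in (401, 403)
    )
```

<p>Only requests for which this predicate is true count toward the threshold; a successful login (200) or a GET of the login page never does.</p>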
            <pre><code>  action {
    mode = "simulate"
    timeout = 300
    response {
      content_type = "text/plain"
      body = "You have failed to login 5 times in a 60 second period and will be blocked from attempting to login again for the next 5 minutes."
    }
  }
  disabled = false
  description = "Block failed login attempts (5 in 1 min) for 5 minutes."
}</code></pre>
            <p>After matching traffic, we set the action for our edge to take. When testing, it's a good idea to set the mode to <code>simulate</code> and review logs before taking enforcement action (see below). The timeout field here indicates that we want to enforce this action for 300 seconds (5 minutes), and the response block indicates what should be sent back to the caller that tripped the rate limit.</p>
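            <p>The timeout semantics can be sketched the same way: once a client trips the rule, the action lasts for <code>timeout</code> seconds, and in <code>simulate</code> mode it is logged but never enforced. This is a toy model for intuition only (the enforcing mode name <code>"ban"</code> here is just a stand-in for a non-simulate mode):</p>

```python
class RateLimitAction:
    """Toy model of the action block: after a client trips the rule,
    enforcement lasts `timeout` seconds; `simulate` mode never enforces.
    (Illustrative only; not Cloudflare's actual edge behavior.)"""

    def __init__(self, mode="simulate", timeout=300):
        self.mode = mode
        self.timeout = timeout
        self.blocked_until = {}  # client -> timestamp when enforcement ends

    def trip(self, client, now):
        """Called when the client exceeds the threshold within the period."""
        self.blocked_until[client] = now + self.timeout

    def is_enforced(self, client, now):
        if self.mode == "simulate":
            return False  # log the would-have-blocked event, allow through
        return now < self.blocked_until.get(client, float("-inf"))
```

<p>With the rule above (<code>timeout = 300</code>), a tripped client stays blocked for exactly five minutes in an enforcing mode and is never blocked at all while we're still in <code>simulate</code>.</p>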
    <div>
      <h4>2. Preview and merge the changes</h4>
    </div>
    <p>As usual, we take a look at the proposed plan before we apply any changes:</p>
            <pre><code>$ terraform plan
...
Terraform will perform the following actions:

  + cloudflare_rate_limit.login-limit
      id:                                     &lt;computed&gt;
      action.#:                               "1"
      action.0.mode:                          "simulate"
      action.0.response.#:                    "1"
      action.0.response.0.body:               "You have failed to login 5 times in a 60 second period and will be blocked from attempting to login again for the next 5 minutes."
      action.0.response.0.content_type:       "text/plain"
      action.0.timeout:                       "300"
      description:                            "Block failed login attempts (5 in 1 min) for 5 minutes."
      disabled:                               "false"
      match.#:                                "1"
      match.0.request.#:                      "1"
      match.0.request.0.methods.#:            "1"
      match.0.request.0.methods.1012961568:   "POST"
      match.0.request.0.schemes.#:            "2"
      match.0.request.0.schemes.2328579708:   "HTTP"
      match.0.request.0.schemes.2534674783:   "HTTPS"
      match.0.request.0.url_pattern:          "www.example.com/login"
      match.0.response.#:                     "1"
      match.0.response.0.origin_traffic:      "true"
      match.0.response.0.statuses.#:          "2"
      match.0.response.0.statuses.1057413486: "403"
      match.0.response.0.statuses.221297644:  "401"
      period:                                 "60"
      threshold:                              "5"
      zone:                                   "example.com"
      zone_id:                                &lt;computed&gt;


Plan: 1 to add, 0 to change, 0 to destroy.</code></pre>
            <p>The plan looks good so let's go ahead, merge it in, and apply it.</p>
            <pre><code>$ git add cloudflare.tf
$ git commit -m "Step 4 - Add rate limiting rule to protect /login."
[step4-ratelimit 0f7e499] Step 4 - Add rate limiting rule to protect /login.
 1 file changed, 28 insertions(+)

$ git checkout master
Switched to branch 'master'

$ git merge step4-ratelimit
Updating 321c2bd..0f7e499
Fast-forward
 cloudflare.tf | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

$ terraform apply --auto-approve
cloudflare_record.www: Refreshing state... (ID: c38d3103767284e7cd14d5dad3ab8668)
cloudflare_zone_settings_override.example-com-settings: Refreshing state... (ID: e2e6491340be87a3726f91fc4148b125)
cloudflare_rate_limit.login-limit: Creating...
  action.#:                               "" =&gt; "1"
  action.0.mode:                          "" =&gt; "simulate"
  action.0.response.#:                    "" =&gt; "1"
  action.0.response.0.body:               "" =&gt; "You have failed to login 5 times in a 60 second period and will be blocked from attempting to login again for the next 5 minutes."
  action.0.response.0.content_type:       "" =&gt; "text/plain"
  action.0.timeout:                       "" =&gt; "300"
  description:                            "" =&gt; "Block failed login attempts (5 in 1 min) for 5 minutes."
  disabled:                               "" =&gt; "false"
  match.#:                                "" =&gt; "1"
  match.0.request.#:                      "" =&gt; "1"
  match.0.request.0.methods.#:            "" =&gt; "1"
  match.0.request.0.methods.1012961568:   "" =&gt; "POST"
  match.0.request.0.schemes.#:            "" =&gt; "2"
  match.0.request.0.schemes.2328579708:   "" =&gt; "HTTP"
  match.0.request.0.schemes.2534674783:   "" =&gt; "HTTPS"
  match.0.request.0.url_pattern:          "" =&gt; "www.example.com/login"
  match.0.response.#:                     "" =&gt; "1"
  match.0.response.0.origin_traffic:      "" =&gt; "true"
  match.0.response.0.statuses.#:          "" =&gt; "2"
  match.0.response.0.statuses.1057413486: "" =&gt; "403"
  match.0.response.0.statuses.221297644:  "" =&gt; "401"
  period:                                 "" =&gt; "60"
  threshold:                              "" =&gt; "5"
  zone:                                   "" =&gt; "example.com"
  zone_id:                                "" =&gt; "&lt;computed&gt;"
cloudflare_rate_limit.login-limit: Creation complete after 1s (ID: 8d518c5d6e63406a9466d83cb8675bb6)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.</code></pre>
            <p>Note that if you haven't purchased rate limiting yet, you will see the following error when attempting to apply the new rule:</p>
            <pre><code>Error: Error applying plan:

1 error(s) occurred:

* cloudflare_rate_limit.login-limit: 1 error(s) occurred:

* cloudflare_rate_limit.login-limit: error creating rate limit for zone: error from makeRequest: HTTP status 400: content "{\n  \"result\": null,\n  \"success\": false,\n  \"errors\": [\n    {\n      \"code\": 10021,\n      \"message\": \"ratelimit.api.not_entitled.account\"\n    }\n  ],\n  \"messages\": []\n}\n"</code></pre>
            
    <div>
      <h4>3. Update the rule to ban (not just simulate)</h4>
    </div>
    <p>After confirming in your logs that the rule is triggering as expected (but not yet enforcing), it's time to switch the mode from simulate to ban:</p>
            <pre><code>$ git checkout step4-ratelimit
$ sed -i.bak -e 's/simulate/ban/' cloudflare.tf

$ git diff
diff --git a/cloudflare.tf b/cloudflare.tf
index ed5157c..9f25a0c 100644
--- a/cloudflare.tf
+++ b/cloudflare.tf
@@ -42,7 +42,7 @@ resource "cloudflare_rate_limit" "login-limit" {
     }
   }
   action {
-    mode = "simulate"
+    mode = "ban"
     timeout = 300
     response {
       content_type = "text/plain"</code></pre>
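            <p>Editing the file with <code>sed</code> works for a one-off change, but if you expect to flip between simulate and ban regularly, one option is to drive the mode from a variable instead. The variable name below is our own illustration, not part of the original configuration:</p>
            <pre><code>variable "ratelimit_mode" {
  # Start in "simulate"; switch to enforcement without editing the file:
  #   terraform apply -var ratelimit_mode=ban
  default = "simulate"
}

resource "cloudflare_rate_limit" "login-limit" {
  # ... zone, threshold, period, and match as before ...
  action {
    mode = "${var.ratelimit_mode}"
    timeout = 300
  }
}</code></pre>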
            
            <pre><code>$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

cloudflare_zone_settings_override.example-com-settings: Refreshing state... (ID: e2e6491340be87a3726f91fc4148b125)
cloudflare_rate_limit.login-limit: Refreshing state... (ID: 8d518c5d6e63406a9466d83cb8675bb6)
cloudflare_record.www: Refreshing state... (ID: c38d3103767284e7cd14d5dad3ab8668)

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ cloudflare_rate_limit.login-limit
      action.0.mode: "simulate" =&gt; "ban"

Plan: 0 to add, 1 to change, 0 to destroy.</code></pre>
            
    <div>
      <h4>4. Merge and deploy the updated rule, then push config to GitHub</h4>
    </div>
    <p>As before, we commit the change, merge it into master, and push the configuration to GitHub before applying:</p>
            <pre><code>$ git add cloudflare.tf

$ git commit -m "Step 4 - Update /login rate limit rule from 'simulate' to 'ban'."
[step4-ratelimit e1c38cf] Step 4 - Update /login rate limit rule from 'simulate' to 'ban'.
 1 file changed, 1 insertion(+), 1 deletion(-)

$ git checkout master &amp;&amp; git merge step4-ratelimit &amp;&amp; git push
Switched to branch 'master'
Updating 0f7e499..e1c38cf
Fast-forward
 cloudflare.tf | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 361 bytes | 0 bytes/s, done.
Total 3 (delta 1), reused 0 (delta 0)
remote: Resolving deltas: 100% (1/1), completed with 1 local object.
To git@github.com:$GITHUB_USER/cf-config.git
   0f7e499..e1c38cf  master -&gt; master</code></pre>
            
            <pre><code>$ terraform apply --auto-approve
cloudflare_rate_limit.login-limit: Refreshing state... (ID: 8d518c5d6e63406a9466d83cb8675bb6)
cloudflare_record.www: Refreshing state... (ID: c38d3103767284e7cd14d5dad3ab8668)
cloudflare_zone_settings_override.example-com-settings: Refreshing state... (ID: e2e6491340be87a3726f91fc4148b125)
cloudflare_rate_limit.login-limit: Modifying... (ID: 8d518c5d6e63406a9466d83cb8675bb6)
  action.0.mode: "simulate" =&gt; "ban"
cloudflare_rate_limit.login-limit: Modifications complete after 0s (ID: 8d518c5d6e63406a9466d83cb8675bb6)

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.</code></pre>
            
    <div>
      <h4>5. Confirm the rule works as expected</h4>
    </div>
    <p>This step is optional, but it's a good way to demonstrate that the rule is working as expected (note the final <code>429</code> response):</p>
            <pre><code>$ for i in {1..6}; do curl -XPOST -d '{"username": "foo", "password": "bar"}' -vso /dev/null https://www.example.com/login 2&gt;&amp;1 | grep "&lt; HTTP"; sleep 1; done
&lt; HTTP/1.1 401 Unauthorized
&lt; HTTP/1.1 401 Unauthorized
&lt; HTTP/1.1 401 Unauthorized
&lt; HTTP/1.1 401 Unauthorized
&lt; HTTP/1.1 401 Unauthorized
&lt; HTTP/1.1 429 Too Many Requests</code></pre>
            
    <div>
      <h3>Wrapping up</h3>
    </div>
    <p>That's it for today! Stay tuned next week for <a href="/getting-started-with-terraform-and-cloudflare-part-2/">part 2 of this post</a>, where we continue the tour through the following resources and techniques:</p><ul><li><p>Load Balancing Resource</p></li><li><p>Page Rules Resource</p></li><li><p>Reviewing and Rolling Back Changes</p></li><li><p>Importing Existing State and Configuration</p></li></ul><p></p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Terraform]]></category>
            <category><![CDATA[HashiCorp]]></category>
            <category><![CDATA[Programming]]></category>
            <guid isPermaLink="false">5fWuZ4Wj15WUKG0kNZnWUk</guid>
            <dc:creator>Patrick R. Donahue</dc:creator>
        </item>
    </channel>
</rss>