You’ve had a chance to build a Cloudflare Worker. You’ve tried KV Storage and have a great use case for your Worker. You’ve even demonstrated its usefulness to your product or organization. Now you need to go from writing a single file in the Cloudflare Dashboard UI Editor to source-controlled code with multiple environments, deployed using your favorite CI tool.
Fortunately, we have a powerful and flexible API for managing your Workers. You can customize your deployment to your heart’s content. Our blog has already featured many tools made possible by that API.
These tools make deployments easier to configure, but they still take time to manage. The Serverless Framework Cloudflare Workers plugin removes that deployment overhead so you can spend more time working on your application and less on your deployment.
Focus on your application
Here at Cloudflare, we’ve been working to rebuild our Access product to run entirely on Workers. The move will allow Access to take advantage of the resiliency, performance, and flexibility of Workers. We’ll publish a more detailed post about that migration once complete, but the experience required that we retool some of our process to match our existing development experience as much as possible.
To us this meant:
Git
Easily deploy
Different environments
Unit Testing
CI Integration
TypeScript/Multiple Files
Everything Must Be Automated
The Cloudflare Access team looked at three options for automating all of these tools in our pipeline. All of the options will work and could be right for you, but custom scripting can be a chore to maintain and Terraform lacked some of the extensibility we needed.
Custom Scripting
Terraform
Serverless Framework
We decided on the Serverless Framework. It provided a tool to mirror our existing process as closely as possible without too much DevOps overhead. Serverless is extremely simple and doesn’t interfere with the application code. You can get a project set up and deployed in seconds. It’s obviously less work than writing your own custom management scripts. But it also requires less boilerplate than Terraform because the Serverless Framework is designed for the “serverless” niche. However, if you are already using Terraform to manage other Cloudflare products, Terraform might be the best fit.
Walkthrough
Everything for the project happens in a YAML file called serverless.yml. Let’s go through the features of the configuration file.
To get started, we need to install serverless from npm and generate a new project.
npm install serverless -g
serverless create --template cloudflare-workers --path myproject
cd myproject
npm install
If you are an enterprise client, you want to use the cloudflare-workers-enterprise template as it will set up more than one worker (but don’t worry, you can add more to any template). Also, I’ll touch on this later, but if you want to write your workers in Rust, use the cloudflare-workers-rust template.
You should now have a project that feels familiar, ready to be added to your favorite source control. The project should contain a serverless.yml file like the following.
service:
  name: hello-world

provider:
  name: cloudflare
  config:
    accountId: CLOUDFLARE_ACCOUNT_ID
    zoneId: CLOUDFLARE_ZONE_ID

plugins:
  - serverless-cloudflare-workers

functions:
  hello:
    name: hello
    script: helloWorld # there must be a file called helloWorld.js
    events:
      - http:
          url: example.com/hello/*
          method: GET
          headers:
            foo: bar
            x-client-data: value
The service block simply contains the name of your service. This will be used in your Worker script names if you do not override them.
Under provider, name must be ‘cloudflare’ and you need to add your account and zone IDs. You can find them in the Cloudflare Dashboard.
The plugins section adds the Cloudflare-specific code.
Now for the good part: functions. Each block under functions is a Worker script.
name: (optional) If left blank it will be STAGE-service.name-script.identifier. If I removed name from this file and deployed in production stage, the script would be named production-hello-world-hello.
script: the relative path to the JavaScript file containing the Worker script. I like to organize mine in a folder called handlers.
events: Currently, Workers only support HTTP events; we call these routes. The example provided says that GET https://example.com/hello/ will cause this Worker to execute. The headers block is for testing invocations.
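For reference, the helloWorld.js that the template generates is a standard Workers script. A minimal version looks something like this (your generated file may differ slightly):

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // Respond to any request matching the configured route
  return new Response('Hello worker!', {
    headers: { 'content-type': 'text/plain' },
  })
}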
At this point you can deploy your worker!
CLOUDFLARE_AUTH_EMAIL=you@example.com CLOUDFLARE_AUTH_KEY=XXXXXXXX serverless deploy
Deploying this way is very easy, but it doesn’t address our requirements. Luckily, there are just a few simple modifications to make.
Maturing our YAML File
Here’s a more complex YAML file.
service:
  name: hello-world

package:
  exclude:
    - node_modules/**
  excludeDevDependencies: false

custom:
  defaultStage: development
  deployVars: ${file(./config/deploy.${self:provider.stage}.yml)}
  kv: &kv
    - variable: MYUSERS
      namespace: users

provider:
  name: cloudflare
  stage: ${opt:stage, self:custom.defaultStage}
  config:
    accountId: ${env:CLOUDFLARE_ACCOUNT_ID}
    zoneId: ${env:CLOUDFLARE_ZONE_ID}

plugins:
  - serverless-cloudflare-workers

functions:
  hello:
    name: ${self:provider.stage}-hello
    script: handlers/hello
    webpack: true
    environment:
      MY_ENV_VAR: ${self:custom.deployVars.env_var_value}
      SENTRY_KEY: ${self:custom.deployVars.sentry_key}
    resources:
      kv: *kv
    events:
      - http:
          url: "${self:custom.deployVars.SUBDOMAIN}.mydomain.com/hello"
          method: GET
      - http:
          url: "${self:custom.deployVars.SUBDOMAIN}.mydomain.com/alsohello*"
          method: GET
We’ve added a custom section where we can define variables to use later in the file.
defaultStage: We set this to development so that forgetting to pass a stage doesn’t trigger a production deploy. Combined with the stage option under provider we can set the stage for deployment.
deployVars: We use this custom variable to load another YAML file dependent on the stage. This lets us have different values for different stages. In development, this line loads the file ./config/deploy.development.yml. Here’s an example file:
env_var_value: true
sentry_key: XXXXX
SUBDOMAIN: dev
kv: Here we are showing off a feature of YAML. If you give a block an anchor using ‘&’, you can reference it later as an alias. This is very handy in a multi-script account. We could have named this anchor anything, but we are naming it kv since it holds our Workers KV storage settings to be used in our function below.
Inside of the kv block we're creating a namespace and binding it to a variable available in your Worker. It will ensure that the namespace “users” exists and is bound to MYUSERS.
kv: &kv
  - variable: MYUSERS
    namespace: users
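Once deployed, that binding makes MYUSERS available as a global inside the Worker, so the script can read and write the namespace directly. A minimal sketch of that usage (the key and value here are just illustrative):

async function handleRequest(request) {
  // MYUSERS is the KV namespace bound in the function's resources block
  let user = await MYUSERS.get('some-user-id') // returns null if the key is missing
  if (user === null) {
    await MYUSERS.put('some-user-id', JSON.stringify({ visits: 1 }))
    user = JSON.stringify({ visits: 1 })
  }
  return new Response(user, { headers: { 'content-type': 'application/json' } })
}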
provider: The only new part of the provider block is stage.
stage: ${opt:stage, self:custom.defaultStage}
This line sets stage to either the command-line option or custom.defaultStage if opt:stage is blank. When we deploy, we pass --stage=production to serverless deploy.
Now under our function we have added webpack, resources, and environment.
webpack: If set to true, the plugin will simply bundle each handler into a single file for deployment. It can also take a string representing a path to a webpack config file so you can customize the bundling. This is how we add TypeScript support to our projects.
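For illustration, a webpack config for TypeScript handlers might look something like the sketch below. The file name, the ts-loader dependency, and the entry/output paths are assumptions for this example; point the webpack option at whatever config file you actually use.

// webpack.config.js -- illustrative sketch only; adjust paths to your project
const path = require('path')

module.exports = {
  target: 'webworker', // Workers run in a V8 isolate similar to a Service Worker
  mode: 'production',
  entry: './handlers/hello.ts',
  resolve: { extensions: ['.ts', '.js'] },
  module: {
    rules: [{ test: /\.ts$/, loader: 'ts-loader' }],
  },
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'hello.js',
  },
}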
resources: This block is used to automate resource creation. In resources we're linking back to the kv block we created earlier.
Side note: If you would like to include WASM bindings in your project, it can be done in a very similar way to how we included Workers KV. For more information on WASM, see the documentation.
environment: This is the butter for the bread that is managing configuration for different stages. Here we can specify values to bind to variables to use in worker scripts. Combined with YAML magic, we can store our values in the aforementioned config files so that we deploy different values in different stages. With environments, we can easily tie into our CI tool. The CI tool has our deploy.production.yml. We simply run the following command from within our CI.
sls deploy --stage=production
Finally, I added a route to demonstrate that a script can be executed on multiple routes.
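Since both http events point at the same script, the Worker itself decides how to respond for each path. A rough sketch of that pattern:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const url = new URL(request.url)

  // Both routes in the YAML above execute this one script,
  // so branch on the path to give each route its own behavior.
  if (url.pathname.startsWith('/alsohello')) {
    return new Response('also hello')
  }
  return new Response('hello')
}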
At this point I’ve covered (or at least hinted at) everything on our original list except Unit Testing. There are a few ways to do this.
We have a previous blog post about Unit Testing that covers using cloudworker, a great tool built by Dollar Shave Club.
My team opted to use the classic Node frameworks Mocha and Sinon. Because we are using TypeScript, we can build for Node or build for V8. You can also make Mocha work for non-TypeScript projects if you use an experimental flag that adds import/export support to Node:
--experimental-modules
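As a rough illustration of that setup, a Mocha and Sinon test might look like the sketch below. The handler here takes its dependencies as arguments and returns a plain object so it can run in Node without the Workers runtime; the function and file names are made up for this example.

// test/handler.test.js -- illustrative sketch only
const assert = require('assert')
const sinon = require('sinon')

// Handler under test: dependencies are passed in so they are easy to stub
async function handleRequest(request, { fetch }) {
  const upstream = await fetch('https://example.com/api')
  return { status: upstream.status }
}

describe('handleRequest', () => {
  it('passes through the upstream status', async () => {
    const fetchStub = sinon.stub().resolves({ status: 204 })
    const result = await handleRequest({}, { fetch: fetchStub })
    assert.strictEqual(result.status, 204)
    assert.ok(fetchStub.calledOnceWith('https://example.com/api'))
  })
})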
We’re excited about moving more and more of our services to Cloudflare Workers, and the Serverless Framework makes that easier to do. If you’d like to learn even more or get involved with the project, see us on github.com. For additional information on using Serverless Framework with Cloudflare Workers, check out our documentation on the Serverless Framework.