
Serverless Performance: Cloudflare Workers, Lambda and Lambda@Edge

2018-07-02

3 min read

A few months ago we released a new way for people to run serverless JavaScript called Cloudflare Workers. We believe Workers is the fastest way to execute serverless functions.

If it is truly the fastest, and it is comparable in price, it should be how every team deploys all of their serverless infrastructure. So I set out to see just how fast Worker execution is and prove it.

tl;dr Workers is much faster than Lambda and Lambda@Edge:

[Chart: percentage of requests faster than a given response time, for Workers, Lambda, and Lambda@Edge]

This is a chart showing what percentage of requests to each service were faster than a given number of ms. It is based on thousands of tests from all around the world, evenly sampled over the past 12 hours. At the 95th percentile, Workers is 441% faster than a Lambda function, and 192% faster than Lambda@Edge.  
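If you'd like to build a similar chart from your own measurements, here's a rough sketch of the underlying math (this isn't the tooling used for this post, just an illustration):

```javascript
// Given raw response times in milliseconds, compute what percentage of
// requests finished faster than a threshold, and read off a percentile.
function percentFasterThan(timesMs, thresholdMs) {
  const faster = timesMs.filter(t => t < thresholdMs).length;
  return (faster / timesMs.length) * 100;
}

function percentile(timesMs, p) {
  const sorted = [...timesMs].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

// e.g. percentile(workerTimes, 95) gives the 95th percentile response time.
```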

The functions being tested simply return the current time. All three scripts are available on GitHub. The testing is being done by a service called Catchpoint, which has hundreds of testing locations around the world.
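For reference, the Worker version is roughly this (a minimal sketch, not a verbatim copy of the script on GitHub; Workers used the Service Worker API at the time):

```javascript
// Respond to every request with the current time, as plain text.
addEventListener('fetch', event => {
  event.respondWith(
    new Response(Date.now().toString(), {
      headers: { 'content-type': 'text/plain' },
    })
  );
});
```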

The Gold Coast

This is every test run in the last hour, with results over 1500ms filtered out:

[Scatter plot: every test run in the last hour, results over 1500ms filtered out]

You can immediately see that Worker results are tightly clustered around the x-axis, while Lambda and Lambda@Edge show much worse performance.

To be fair, comparing my Lambda, which only runs in us-east-1 (Northern Virginia, USA), to a global service like Workers is a little unfair. The effect becomes even more clear if I only show tests run in Australia. Sydney is 9,735 miles (53 light-ms) from our instance in us-east-1. It becomes pretty clear how miserable the experience would be for visitors down south:

[Scatter plot: tests run from Australia]

Since we only run one instance of each test from Australia every five minutes, that's not a conclusive amount of data, so let's look at the percentile distribution for the past 24 hours:

[Chart: percentile distribution of response times from Australia over the past 24 hours]

The 50th percentile speed for Workers is 13ms, well faster than a packet could even get halfway to Virginia. At the 95th percentile you're looking at 882ms for Lambda, 216ms for Lambda@Edge, and 40ms for Workers. A full 5% of your users will be waiting almost a second for the simplest possible Lambda response, making it impossible to build a responsive application.

Hometurf

As we said, Workers has quite the advantage of being deployed everywhere, compared to Lambda, which lives in a single region (Lambda@Edge has less of an excuse). We believe Worker performance should be great everywhere though, so let's look a little closer to our Lambda instance. First, all the tests in North America:

[Scatter plot: all tests run in North America]

There are, amazingly, visitors who will be waiting over two seconds for a Lambda response:

[Scatter plot: North American tests showing Lambda responses over two seconds]

Most of that delay is DNS, however (Route 53?). Just showing the time spent waiting for a response (ignoring DNS and connection time) tells a similar story (filtering out points over 300ms):

[Scatter plot: response wait time only, excluding DNS and connection time, points over 300ms filtered out]
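If you want to approximate that DNS / connect / wait split in your own tests, here's a rough Node.js sketch (hypothetical; it's not how Catchpoint measures, and the hostname is a placeholder):

```javascript
// Time one HTTPS request and split it into DNS lookup, TLS connect,
// and time spent waiting for the first byte of the response.
const https = require('https');

function timeRequest(hostname, path = '/') {
  const t = { start: Date.now() };
  const req = https.get({ hostname, path, agent: false }, res => {
    res.once('data', () => { t.firstByte = Date.now(); });
    res.on('end', () => {
      console.log({
        dns: t.lookup - t.start,
        connect: t.connect - t.lookup,
        wait: t.firstByte - t.connect,
        total: Date.now() - t.start,
      });
    });
  });
  req.on('socket', socket => {
    socket.on('lookup', () => { t.lookup = Date.now(); });
    socket.on('secureConnect', () => { t.connect = Date.now(); });
  });
}

timeRequest('example.com'); // replace with the endpoint you're testing
```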

It's true that Cloudflare has many more points of presence than Lambda@Edge, but not all of this is explained by geographic distribution. To prove it, let's look at the testing location closest to my Northern Virginia-based Lambda function: Washington, DC. Again, looking at the last 24 hours:

[Chart: percentile distribution for tests run from Washington, DC over the last 24 hours]

With no geographic explanation, the 95th percentile for Workers is 126% faster than Lambda and 65% faster than Lambda@Edge. I find this incredible. Please feel free to play with the chart yourself.

How?!

How is this possible? I have some guesses. Workers is built on V8 isolates, which are significantly faster to spin up (under 5ms) than a full Node.js process and have a tenth of the memory overhead. The effect of having to wait for new processes to start is very obvious when you look at the difference in speed for the first request which hits a new Lambda@Edge function:

[Chart: response time of the first request to a new Lambda@Edge function versus subsequent requests]
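You can see this effect yourself by hitting a freshly deployed function a few times in a row; the first request pays the cold-start cost. A rough sketch (the URL is a placeholder; this needs Node 18+ or a browser console for the global fetch):

```javascript
// Send a handful of sequential requests and log each round-trip time.
// Against a brand new Lambda@Edge deployment the first one is typically
// much slower than the rest.
async function measure(url, count = 5) {
  for (let i = 0; i < count; i++) {
    const start = Date.now();
    await fetch(url);
    console.log(`request ${i + 1}: ${Date.now() - start}ms`);
  }
}

measure('https://example.com/'); // replace with your function's URL
```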

Workers has also been carefully architected to avoid moving memory and blocking wherever it can be avoided, complete with our own optimized implementations of the JavaScript APIs. Finally, Workers runs on the same thousands of machines which serve Cloudflare traffic around the world, benefiting from over half a decade of experience pushing the limits of our hardware.

This post has been somewhat self-congratulatory, and I apologize for that. We certainly still have a lot to build and a lot we can still do to improve our performance. It was originally going to be about the power of running your functions distributed all around the world, instead of at a single region. What I'm left with though is the belief that Workers is faster, period.

Here is a full chart for the last hour. I've also exported a CSV of all the data for the past 12 hours for you to explore.

Please reproduce the tests I've done and share your results in the comments here or on Hacker News. If I've missed anything, we want to hear about it.

I'll also be sharing a price comparison between the various systems soon. Please subscribe to our blog if you'd like to be notified.

Finally, please try Workers!

