
JAMstack podcast episode: Listen to Cloudflare's Kenton Varda speak about originless code

2018-09-15

14 min read

JAMstack Radio is a show all about the JAMstack, a new way to build fast & secure apps or websites. In the most recent episode, the host, Brian Douglas, met with Kenton Varda, tech lead for Cloudflare Workers and founder of Sandstorm.io, to discuss some of the infinite uses for running code at the edge.

Listen to what Kenton had to say about serverless technology in this twenty-two-minute podcast here.

Here's the transcript of the podcast as well:

Brian Douglas: Welcome to another installment of JAMstack Radio. In the room I've got Kenton Varda from Cloudflare.

Kenton Varda: Thanks for having me.

Brian: Thanks for coming all the way across San Francisco to chat with me in person. I'm curious who Kenton is, but I'm also curious what Cloudflare is. Can you answer both questions? Let's start with, "Who is Kenton?"

Kenton: I'm an engineer. I'm the architect of Cloudflare Workers. In a past life I worked for Google for several years. I was once known as the "protocol buffers person"; I was the one who open-sourced that. And I founded a company called Sandstorm that was later acquired by Cloudflare.

Brian: I'm familiar. I remember Sandstorm. Well, I remember the name and I vaguely remember that the acquisition happened. Interesting. You founded Sandstorm, you said?

Kenton: Yep. Jade Wang and I.

Brian: OK, yeah. I know Jade. I met Jade not too long ago. But that's a lot of inside baseball. How about Cloudflare? Now you're a part of Cloudflare. What is that thing?

Kenton: We have computers in 151 locations today, and we're rapidly expanding toward thousands of locations in the future. We let you put that network in front of your website in order to provide a few things. One is that it's a large HTTP cache. We can cache your static content at the "edge," as we call it: the locations close to the end user, so that they can receive that content quickly.

We have a web application firewall that blocks malicious traffic, and we have DDoS protection. We have absorbed the largest DDoS attacks in the world without any trouble, plus a whole bunch of other features. There's a long list of things that are implemented as a proxy in these locations before the requests go to your "origin server," as we call it.

Brian: OK, cool. And you folks have all these locations, do you own servers? Are you building these things out?

Kenton: Yes. We build the hardware and send it to a variety of different types of locations. Sometimes it's ISPs that want to have our machines there so they can serve their customers faster and use less bandwidth upstream. Sometimes it's data centers. It's a variety.

Brian: Cool. So, I'm curious. Your background is in a lot of infrastructure, too? Coming from Sandstorm, and now Cloudflare?

Kenton: Yep. And at Google. I've always done a little infrastructure. Search infrastructure at Google, access control infrastructure, key management. Lots of things.

Brian: Which makes sense, given that you mentioned you're now "principal architect."

Kenton: We don't have titles.

Brian: You don't have titles? OK. I was going to ask if there is a Vice Principal Architect at all.

Kenton: I've taken to calling myself the architect of Cloudflare Workers, descriptively. When Sandstorm was acquired by Cloudflare in March of 2017, we came in and I was told, "We'd like to find a way to let people run code on our servers securely and quickly. But we don't know how to do it. What do you think?" And so I started that project, and built it out, and exactly a year after I joined we launched it on March 13 of this year.

Brian: Nice! Congratulations. This is the exact reason why I had you come on. Because Cloudflare Workers is something that I was aware of in the alpha or beta phase when it first was mentioned. I played around with the trial.

I want you to explain Cloudflare Workers, but before you do that I want to explain what I did, which is very trivial. Workers sit on the edge, and I made a Worker to change the word "cloud" to "butt."

Kenton: Classic.

Brian: It's very classic because the site that I was testing it on was Cloudflare.com. Whatever it replaced was pretty hilarious. I showed everybody in Slack, and then I moved on and never thought of it, until recently. Could you explain what Cloudflare Workers are?

Kenton: You were working with the preview, which lets you see what your Worker would do to any random site. But normally you'd run these on your own site. A Cloudflare Worker is a piece of JavaScript that you write that can receive HTTP requests destined for your domain, but receives them on Cloudflare's servers at the edge, close to the end user.

It can run arbitrary code there. It can forward the request on to your origin, or it can decide to respond directly, or you can even make a variety of outbound requests to third-party APIs and do whatever you want.

HTTP in, HTTP out, arbitrary code in between.
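To make that concrete, here is a minimal sketch of what a Worker looks like, using the Service Worker-style API the platform exposes. The `/hello` path and the handler name are just illustrative, not something from the conversation; the point is simply that a request comes in, your code runs, and a response goes out.

```javascript
// Minimal Worker sketch: HTTP in, HTTP out, arbitrary code in between.
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const url = new URL(request.url);

  // Respond directly from the edge for one illustrative path...
  if (url.pathname === '/hello') {
    return new Response('Hello from the edge!', {
      headers: { 'content-type': 'text/plain' },
    });
  }

  // ...and pass everything else through to the origin unchanged.
  return fetch(request);
}
```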

Brian: Are there limitations to the JavaScript? Because when you say you can run JavaScript on the edge, on Cloudflare's servers, this sounds dangerous. But you also touched on the security aspect of it, and people want that assurance. How do you solve that problem?

Kenton: Right. This is the reason why it is JavaScript. We have a lot of customers and they all want to run their code in every location of ours. We need to make sure that we can run lots and lots of different scripts but not allow them to interfere with each other. Each one has to be securely sandboxed.

There are a lot of technologies out there for doing that. But the one that has received by far the most scrutiny, and the most real-world battle testing over the years, is the V8 JavaScript engine from Google Chrome. We took that and embedded it in a new server environment written from scratch in C++.

We didn't use Node.js, because Node.js is not a sandbox; it's not intended for this scenario. So we built something new. The JavaScript runs in a normal JavaScript sandbox, and it is limited to an API that only lets it receive HTTP requests and send HTTP requests to the internet. It does not allow it to see the local file system or interfere with anything else that might be running on that machine.

Brian: OK. Can we talk about use cases for Cloudflare Workers? What would somebody build, besides somebody like myself who spent all that time writing a joke app? Or joke Worker, rather. What are some use cases where you can run JavaScript at the edge?

Kenton: Well, it's arbitrary code, so there are infinite use cases. But I can tell you some common ones.

Some people just need to do some silly rewrite of some headers because it's easier to push something to Cloudflare than it is to update their own origin servers.

When you write the script and you submit it through the Cloudflare UI, it is deployed globally in 30 seconds. That's it. Boom, it's done. So that's an easy way to get things done, but it's the less interesting use case.
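As a sketch of that first, simple use case, a header-rewriting Worker only needs a few lines. The specific headers touched here are made-up examples, not anything from the conversation:

```javascript
// Sketch: rewrite response headers at the edge instead of updating the origin.
addEventListener('fetch', event => {
  event.respondWith(rewriteHeaders(event.request));
});

async function rewriteHeaders(request) {
  const originResponse = await fetch(request);

  // Copy the response so its headers become mutable, then adjust them.
  const response = new Response(originResponse.body, originResponse);
  response.headers.set('X-Frame-Options', 'DENY'); // example header to add
  response.headers.delete('X-Powered-By');         // example header to remove
  return response;
}
```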

More interesting: you can do things like route requests. Say you're hosting your website out of S3 or Google Cloud Storage. You can write a Worker that fetches the content from there and then serves it as your website, without actually having an origin server.
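A rough sketch of that idea, assuming a public bucket; the bucket URL is a placeholder, and a real setup would also handle authentication, caching headers, and error pages:

```javascript
// Sketch: serve a static site straight out of a storage bucket, with no origin server.
const BUCKET = 'https://storage.googleapis.com/example-bucket'; // placeholder bucket URL

addEventListener('fetch', event => {
  event.respondWith(serveFromBucket(event.request));
});

async function serveFromBucket(request) {
  const url = new URL(request.url);
  const path = url.pathname === '/' ? '/index.html' : url.pathname;

  // Fetch the object from the bucket and hand it straight back to the visitor.
  return fetch(BUCKET + path);
}
```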

Another thing people like to do is optimize their usage of Cloudflare's cache. Historically, an HTTP cache is a very fixed-function thing. You can't serve cached content and also have it be personalized. So say you're on a news site, and people have to log in because it's paid content, and you want to display the site to them, but at the top you want to say, "Hi. You're logged in as..." whoever.

Your content on a news site is very cacheable. But all of a sudden it can't be cached anymore, because you're personalizing it. Well, you can do that personalization in a Worker after the content has already come out of the cache at the edge, and therefore serve your site much faster and use much less bandwidth.
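A loose sketch of that pattern: let the page come out of Cloudflare's cache, then inject the greeting at the edge. The cookie name and the `<!--user-->` placeholder in the HTML are invented for illustration:

```javascript
// Sketch: personalize cached content at the edge, after it comes out of the cache.
addEventListener('fetch', event => {
  event.respondWith(personalize(event.request));
});

async function personalize(request) {
  // This fetch can be satisfied from Cloudflare's cache for every visitor.
  const cachedResponse = await fetch(request);
  const page = await cachedResponse.text();

  // Hypothetical session cookie; a real site would validate it properly.
  const cookies = request.headers.get('Cookie') || '';
  const match = cookies.match(/username=([^;]+)/);
  const greeting = match ? `Hi. You're logged in as ${match[1]}` : 'Hi, guest';

  // Swap an invented placeholder in the cached HTML for the personalized greeting.
  return new Response(page.replace('<!--user-->', greeting), {
    headers: { 'content-type': 'text/html; charset=utf-8' },
  });
}
```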

But going beyond that, we've had people do HTML template rendering directly at the edge based on API requests. That will save a lot of bandwidth.

Brian: That's a common use case for Apache servers, where you'd take the cookie and check to see who you are, where you came from, and maybe even your location, and then decide what to render based on the user. It sounds like something super complicated that was done very heavily with servers, and now you can just do it on Cloudflare's side.

Kenton: Or A/B testing. That's another thing that doesn't play well with caches, because you're serving different people different content for the same URL. You can implement that in a Worker now and you can take advantage of the cache.
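A sketch of how A/B testing could look at the edge, under the assumption that each variant lives at its own cacheable path and the assignment is remembered in a cookie; the paths and cookie name are made up:

```javascript
// Sketch: A/B test at the edge while keeping both variants cacheable.
addEventListener('fetch', event => {
  event.respondWith(abTest(event.request));
});

async function abTest(request) {
  const cookies = request.headers.get('Cookie') || '';
  let group = (cookies.match(/ab-group=(control|test)/) || [])[1];
  const isNewVisitor = !group;

  // First visit: assign a group at random.
  if (isNewVisitor) {
    group = Math.random() < 0.5 ? 'control' : 'test';
  }

  // Each group maps to its own URL, so both variants stay independently cacheable.
  const url = new URL(request.url);
  url.pathname = (group === 'test' ? '/variant-b' : '/variant-a') + url.pathname;
  const response = await fetch(new Request(url.toString(), request));

  if (!isNewVisitor) return response;

  // Remember the assignment so the visitor sees a consistent variant next time.
  const withCookie = new Response(response.body, response);
  withCookie.headers.append('Set-Cookie', `ab-group=${group}; Path=/`);
  return withCookie;
}
```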

Brian: Before we started recording I mentioned I had seen a talk at Apollo from the product manager for the team, whose name escapes me--

Kenton: Jonathan Bruce.

Brian: Jonathan, yes. He went through a couple of use case examples, and A/B testing was one of them as well. It's nice to see a lot of this work move away from the servers. Not that servers are trivial, but it sounds like an easier approach.

Kenton: Yes. Speaking of Apollo, over time we're seeing these use cases get more and more complicated. People started out doing very simple things. But Apollo is a great case. They've taken Apollo Server, their gateway for GraphQL.

Your GraphQL queries go in, and then it federates them out to your REST endpoints behind that.

They've managed to run the whole thing in a Worker, which means it can now run on Cloudflare's "edge" and take advantage of the cache. Previously, GraphQL queries generally weren't cacheable, because they're all POST requests, and each one is often a little bit different and not canonicalized. Now you can fix that with code running at the edge.
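Apollo's actual edge implementation isn't described in the episode, but the general idea Kenton mentions, canonicalizing queries so the cache can work, could be sketched roughly like this. It assumes the Workers Cache API (`caches.default`) is available to you, and it skips everything a real implementation would need (variable normalization, cache expiry, handling mutations):

```javascript
// Loose sketch: make GraphQL POSTs cacheable by canonicalizing each query
// into a stable cache key and only hitting the origin on a miss.
addEventListener('fetch', event => {
  event.respondWith(cacheGraphql(event.request));
});

async function cacheGraphql(request) {
  if (request.method !== 'POST') return fetch(request);

  // Canonicalize: re-serialize the body so equivalent queries produce the same bytes.
  const body = await request.json();
  const canonical = JSON.stringify({ query: body.query, variables: body.variables });

  // Hash the canonical body into a synthetic, cacheable key.
  const digest = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(canonical));
  const hash = [...new Uint8Array(digest)].map(b => b.toString(16).padStart(2, '0')).join('');
  const cacheKey = new Request(new URL('/__graphql-cache/' + hash, request.url).toString());

  const cache = caches.default;
  const cached = await cache.match(cacheKey);
  if (cached) return cached;

  // Cache miss: forward the canonicalized POST to the origin and store the answer.
  const response = await fetch(request.url, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: canonical,
  });
  await cache.put(cacheKey, response.clone());
  return response;
}
```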

Brian: Sorry to zoom back, because I have a lot of experience with Apollo and GraphQL as well. Is that something Apollo is doing themselves? Or is this the preferred way for them to cache GraphQL queries when people are using Apollo Server?

Kenton: They are working on a version of Apollo Server that runs at the edge.

Brian: OK.

Kenton: It's not released yet. But soon.

Brian: I need to have them on for a follow-up conversation. They've been on this podcast quite a few episodes ago, so I definitely want to have them talk more about what they're doing on the server side, which is really cool.

You mentioned Google and you mentioned your experience, so it sounds like you've been working on the web and within servers for a while. I'm curious if we could take time to zoom out. You're working on Cloudflare Workers. What's your thought on where the web's going moving forward?

The reason we do this podcast, JAMstack Radio, which is JavaScript, APIs, and Markup, is because I personally think there's a shift of a lot of the processing and a lot of the work moving towards the front end.

I would consider having Cloudflare own Workers as something I don't have to worry about; I don't have to deal with it, it's an API. Do you see a shift of a lot of major companies using something like a Worker instead of running their own servers going forward?

Kenton: Serverless has been a popular term lately.

Brian: Very popular.

Kenton: People have realized that it's a lot of effort to maintain a server, and that effort isn't really going toward anything useful. You would prefer to just be writing the code that's specific to your application, and not thinking about, "How do I initialize my server?" or, "What dependencies do I need to build in here?" So there has been this shift towards something called serverless.

My colleague at Cloudflare, Zack Bloom, likes to talk about going a step beyond that, to what we call "originless." In serverless, like with Amazon Lambda, you still choose a location in the world where your function runs.

You're no longer managing individual servers, but you still have an origin. Typically us-east-1 in Virginia. What I would like to see is that you don't think about where your code runs at all.

You write code and it just runs everywhere. That's what we call "originless," and that's what Cloudflare does.

Because when you deploy code to Cloudflare, you do not choose which of our 151 locations it runs in. It is deployed to all of them, and it will run in whichever one receives the request from the user, whichever one is closest to the user.

And it's not just about being close to the user. If you have code that interacts heavily with a particular API, say the Stripe API or the Twilio API, it would be great if that code could automatically run next to the servers that are implementing that API, without you having to think about that.

I should not have to decide that my servers are going to run in Virginia when they're talking to people, and I don't even know where those people are. So that's where I'd like to see it go. People have been talking a lot about "edge" compute lately. We consider Workers to be an "edge" compute platform.

But it's funny, because Peter Levine at Andreessen Horowitz said, "The cloud is dead. The new thing is edge compute." But it seems to me that this idea that your code just runs everywhere is what the cloud was always supposed to be in the first place. That's what the metaphor meant. It's not in a specific place, it's everywhere. To me, we're finally getting there.

Brian: "Originless." I'm not sure if it's going to pick up as much steam as serverless. I think that one's got really good feet inside the marketing jaws of tech. But I like "originless," I like the fact that you can ship code and not have to worry about it. Me personally, I'm a tinkerer.

I ship a lot of JavaScript as of late, in the last couple of years. I don't want to have to deal with the problems and the headaches of trying to manage my own thing. For example, I just cloned a project that happened to have MySQL as a dependency. I had to brew install that thing, and for whatever reason it still didn't work. Dependencies just weren't jiving together. It was a Ruby project, so they weren't jiving together.

But I shouldn't have to think about this as someone coming in four years later down the road trying to commit to this project. I just want to ship code.

I like the fact that I don't have to worry about things like caching and managing my headers and stuff like that, if I can tap into all that tech talent at Cloudflare, and the people who are building these cool projects to handle that for me, and pay a small fee, hopefully. Actually, that's a good question. Cloudflare Workers, is it an add-on feature once I have a Cloudflare account? How do I get access to this?

Kenton: It's available to all Cloudflare accounts. The pricing is 50 cents per million requests handled, with a minimum of $5 per month. You pay $5, you get your first 10 million requests, then it's 50 cents per million after that.

Brian: OK. I'm really excited about the idea of Cloudflare Workers. There are a lot of ideas that could be built. Are there any getting-started guides or tutorials people can use to get their feet wet with Cloudflare Workers?

Kenton: If you go to Developers.Cloudflare.com, or if you just go to CloudflareWorkers.com there's the preview service. The "fiddle," we call it. It's kind of like JSFiddle. You write some code, that's a Worker, and then you see in real time its effect on any web page that you choose. You can go there and try it out. You don't need a Cloudflare account and you don't need to sign in.

Brian: Oh, very cool. Awesome. I'm going to hopefully tinker with that again and build something a little nicer than what I already touched with the Workers. Excited to try that out. Curious, is there anything else Cloudflare is working on in the upcoming future? I know you're probably super focused on Cloudflare Workers so you don't really have the whole roadmap.

Kenton: Well, we're working on lots of things. My next goal with Workers is to introduce some storage. There's not a whole lot specific that I can say about that yet, but the challenge is interesting, because we have a network of, as I said, over 150 locations today, and in the next few years we expect to grow that exponentially. We expect to have machines in every cell tower, more or less. And we want a storage system that can actually take advantage of that.

If you've built a service on Cloudflare and you store data for users that they interact with, each user's data should live at the Cloudflare location that's closest to that user, so that they can interact with it with minimal latency. But there aren't a lot of storage technologies out there that can automatically scale to hundreds of nodes, much less thousands of nodes, today. It's a new and interesting challenge that I'm working on.

Brian: Cool. Exciting. I'm probably going to keep an eye on the Cloudflare blog, hopefully you're keeping that up to date. I look forward to whenever that gets launched or previewed. I'm going to transition us to JAM picks. I think we had a really awesome conversation about Cloudflare Workers.

These are going to be JAM picks, anything that keeps you jamming, keeps you going. Music picks, we've had a lot of those in the past. Food and tech picks as well. But I will go first.

My pick is Pinterest, which sounds very weird to say out loud. We had Zach on here talking about Pinterest and how they're trying to shift towards more of a male demographic, which, listeners, if you didn't know, I identify as male. I've been using Pinterest mainly because I'm expecting a child.

I'm not picking a bunch of baby stuff and putting it on a board. I'm picking a lot of recipes. I find that on Pinterest, if I type in something I have in my cabinet, I can get a bunch of recipes for that one ingredient, and it's been super useful. Because I have some leave that I'm going to be taking.

So I want to be Mr. Mom, hopefully. I'm going to try to achieve that status and do a lot of cooking. I've been setting myself up to do a lot of Pinterest boarding. I'm not even sure if that's a thing, if that's what they call it.

My other pick is going to be meal planning. I really like cooking. I work from home a lot, so I really like leveraging the idea of cooking. On top of that, I'm going to wrap in one more pick: I'm definitely going to be trying out Cloudflare Workers. I've been using Dropbox Paper, so I have a list of all the side project ideas I want to ship, and some coding goals for that time.

So those are my three picks. Kenton, hopefully I stalled long enough that you have decided the things that you are jamming on.

Kenton: I commute up here from Palo Alto on Caltrain every day, which means I get a lot of time to play video games on my Nintendo Switch.

Brian: Oh, nice.

Kenton: And one of my favorites lately is an indie game called Celeste. It's what I would call an agility platformer. Lots of jumping off walls, boosting and 2D side scrolling. It is a lot of fun.

Brian: I have follow up questions about that. How long have you had your Switch?

Kenton: Since sometime last year. Probably about a year ago.

Brian: OK. So you're earlier on the bandwagon. No, actually you were probably a year into it. I'm curious, how do you enjoy the controller? Do you always play it connected? Or do you separate the controller?

Kenton: Yeah, I play it connected, because I'm on the train, so I have to hold it.

Brian: I just think those little things that come off, those little joypad joystick things are just a little too small for my taste.

Kenton: I do have that problem. My hands get sore, especially from this game Celeste. I had a callus on my thumb when I finished playing it.

Brian: That's hard core.

Kenton: It's intense.

Brian: Either it's hard core or you have a super long commute.

Kenton: It's about 45 minutes. And then I'll say, I just got back from vacation last week, where I flew to my hometown of Minneapolis, and I just have to talk up the amazing park system and bike trail system there. Because all I did all week was bike around. There are hundreds of miles of paved, dedicated bike trails. You don't have to go on streets, and it's just amazing and beautiful in the summer.

Brian: Nice. I didn't know that about Minneapolis. I know the whole, is it like, "10,000 lakes?"

Kenton: "Land of 10,000 Lakes" is Minnesota. It's probably more like 100,000 lakes. There's a lot of lakes.

Brian: And at least a few bike trails.

Kenton: Yes.

Brian: Awesome. Well, Kenton, thanks for coming on to talk about Cloudflare Workers and the awesome city of Minneapolis. Listeners, keep spreading the jam.
