At Cloudflare, we are constantly looking for ways to improve the development experience for Workers and make it the most convenient platform for writing serverless code.
As some of you might have already noticed - whether from our public release notes, on cloudflareworkers.com, or in your Cloudflare Workers dashboard - there was recently a small but important change in the look of the inspector.
But before we get into what it is, let's take a look at the standard example on cloudflareworkers.com:
The example worker code featured here acts as a transparent proxy, printing requests and responses to the console.
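For reference, a minimal version of such a worker - written against the classic service-worker-style Workers API - looks something like this (a sketch, not necessarily the exact code on cloudflareworkers.com):

```js
// Proxy every incoming request straight to the origin,
// logging the request and response objects along the way.
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  console.log('Request:', request)
  const response = await fetch(request)
  console.log('Response:', response)
  return response
}
```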
Until now, when debugging Workers, all you could see from client-side devtools was the interaction between your browser and the Cloudflare Workers runtime. However, as in most other server-side runtimes, the interaction between your code and the actual origin was hidden.
This is where console.log comes in. Although not the most convenient tool, printing random things out is a fairly popular debugging technique.
Unfortunately, its default output doesn't help much with debugging network interactions. If you try to expand either the request or the response object, all you see is a bunch of lazy accessors:
You could expand them one by one to get some properties back, but when it comes to important parts like headers, even that doesn't help much:
So, since the launch of Workers, what we have been able to suggest instead are certain JavaScript tricks to convert headers to a more readable format:
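For example, one such trick (assuming a request object is in scope) is to spread the Headers iterator into a plain object or array before logging it:

```js
// Headers is iterable, yielding [name, value] pairs, so it can be
// converted into something the console prints in full:
console.log('Request headers:', Object.fromEntries(request.headers))

// Or, on runtimes without Object.fromEntries, as an array of pairs:
console.log('Request headers:', [...request.headers])
```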
This works somewhat better, but doesn't scale well, especially if you're trying to debug complex interactions between various requests on a page and subrequests coming from a worker. So we thought: how can we do better?
If you're familiar with Chrome DevTools, you might have noticed that we were already offering a trimmed-down version of it in our UI, with basic Console and Sources panels. The obvious solution was: why not expose the existing Network panel in addition to these? And we did just* that.
* Unfortunately, this is easier said than done. If you're already familiar with the Network tab and are interested in the technical implementation details, feel free to skip the next section.
What can you do with the new panel?
You should be able to use most of the features available in the regular Chrome DevTools Network panel, but instead of inspecting the interaction between your browser and Cloudflare (which is as much as browser devtools can give you), you can now peek into the interaction between your Worker and the origin as well.
This means you're able to view request and response headers, including both those internal to your worker and the ones provided by Cloudflare:
Check the original response to verify content modifications:
Same goes for raw responses:
You can also check the time it took the worker to reach and get data from your website:
However, note that timings from the debugging service will differ from production timings in different locations, so it makes sense to compare them only against other requests on the same page, or against the same request as you keep iterating on your Worker's code.
You can view the initiator of each request - this might come in handy if your worker contains complex routing handled by different paths, or if you simply want to check which requests on the page were intercepted and re-issued at all:
Basic features like filtering by type of content also work:
And, finally, you can copy or even export subrequests as HAR for further inspection:
How did we do this?
So far we have been using a built-in mode of the inspector that was specifically designed with JavaScript-only targets in mind. This allows it to avoid loading most of the components that would require a real (Chromium-based) browser backend, and instead leaves just the core that can be integrated directly with V8 in any embedder, whether it's Node.js or, in our case, Cloudflare Workers.
Luckily, the DevTools Protocol itself is pretty well documented - chromedevtools.github.io/devtools-protocol/ - to facilitate third-party implementors.
While the protocol is commonly used from the client side (for editor integrations), there are some third-party implementations of the server side too, even for non-JavaScript targets like Lua, Go and ClojureScript, as well as for system-wide network debugging on both desktop and mobile: github.com/ChromeDevTools/awesome-chrome-devtools.
So there is nothing preventing us from providing our own implementation of the Network domain that would give a native DevTools experience.
On the Workers backend side, we are already in charge of the network stack, which means we have access to all the information we need to report, and we can wrap all the request/response handlers in our own hooks that send it back to the inspector.
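Conceptually, the wrapping looks something like the sketch below. The real hooks live in the runtime itself, so this is purely illustrative - inspectorSend is a hypothetical stand-in for the channel that delivers events to the connected frontend, and the event field sets are abridged:

```js
// Hypothetical stand-in for the inspector event channel.
function inspectorSend(method, params) {
  console.log(JSON.stringify({ method, params }))
}

let nextRequestId = 1

// Wrap the origin fetch so every subrequest is reported
// to the inspector as Network domain events.
async function instrumentedFetch(request) {
  const requestId = String(nextRequestId++)
  inspectorSend('Network.requestWillBeSent', {
    requestId,
    request: {
      url: request.url,
      method: request.method,
      headers: Object.fromEntries(request.headers),
    },
    timestamp: Date.now() / 1000,
  })
  const response = await fetch(request)
  inspectorSend('Network.responseReceived', {
    requestId,
    response: {
      url: response.url,
      status: response.status,
      headers: Object.fromEntries(response.headers),
    },
    timestamp: Date.now() / 1000,
  })
  return response
}
```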
Communication between the inspector and the debugger backend happens over WebSockets. So far, we have just been receiving messages and passing them pretty much directly to V8 as-is. However, if we want to handle Network messages ourselves, that's not going to work anymore: we need to actually parse the messages.
To do that in a standard way, V8 provides some build scripts to generate protocol handlers for any given list of domains. While these are used by Chromium, they require quite a bit of configuration and custom glue for different levels of message serialisation, deserialisation and error handling.
On the other hand, the protocol used for communication is essentially just JSON-RPC, and capnproto, which we're already using in other places behind the scenes, provides JSON (de)serialisation support, so it was easier to reuse that than to build a separate glue layer for V8.
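For a sense of what's on the wire, here is roughly what such a message exchange looks like (the id and requestId values here are made up):

```js
// The frontend asks for a response body; the id is used
// to match the eventual reply to this request.
const request = {
  id: 42,
  method: 'Network.getResponseBody',
  params: { requestId: '7' },
}

// The backend replies with the same id.
const reply = {
  id: 42,
  result: { body: '<html>...</html>', base64Encoded: false },
}
```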
For example, to provide bindings for [Runtime.CallFrame](https://chromedevtools.github.io/devtools-protocol/tot/Runtime/#type-CallFrame), we just need to define a capnp structure like this:
struct CallFrame {
  # Stack entry for runtime errors and assertions.
  functionName @0 :Text;   # JavaScript function name.
  scriptId @1 :ScriptId;   # JavaScript script id.
  url @2 :Text;            # JavaScript script name or url.
  lineNumber @3 :Int32;    # JavaScript script line number (0-based).
  columnNumber @4 :Int32;  # JavaScript script column number (0-based).
}
Okay, so by combining these two, we can now parse and handle the supported Network inspector messages ourselves and pass the rest through to V8 as usual.
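In JavaScript pseudocode (the actual dispatcher is C++, and handleNetworkMessage and v8Session are hypothetical names), the flow looks roughly like this:

```js
// Handle Network.* messages ourselves; forward everything else
// to V8's built-in inspector untouched.
function onInspectorMessage(rawMessage) {
  const message = JSON.parse(rawMessage)
  if (message.method && message.method.startsWith('Network.')) {
    handleNetworkMessage(message)                  // our own implementation
  } else {
    v8Session.dispatchProtocolMessage(rawMessage)  // V8 built-ins
  }
}
```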
Next, we needed to make some changes to the frontend. Wait, you might ask, wasn't the entire point of these changes to speak the same protocol the frontend already does? That's true, but there are other challenges.
First of all, because the Network tab was designed to be used in a browser, it relies on various components that are irrelevant to us and, if pulled in as-is, would not only make the frontend code larger, but also require extra backend support. Some of them are used for cross-tab integration (e.g. with the Profiler), but some are part of the Network tab itself - for example, it doesn't make much sense to offer request blocking or mobile throttling when debugging server-side code. So we had some manual untangling to do here.
Another interesting challenge was handling response bodies. Normally, when you click on a request in the browser's Network tab and ask to see its response body, the devtools frontend sends a [Network.getResponseBody](https://chromedevtools.github.io/devtools-protocol/tot/Network/#method-getResponseBody) message to the browser backend, and the browser sends the stored body back.
What this means is that, as long as the Network tab is active, the browser has to store all of the responses for all of the requests from the page in memory, without knowing which of them will actually be requested in the future. Such lazy handling makes perfect sense for local or even remote Chrome debugging, where you are commonly fully in charge of both sides.
However, for us it wouldn't be ideal to store all of these responses from all of our users in memory on the debugging backend. After some back and forth on different solutions, we decided to deviate from the protocol and instead send original response bodies to the inspector frontend as they come through, letting the frontend store them. This might not seem ideal either, since it sends potentially unneeded data over the network during debugging sessions, but these tradeoffs make more sense for a shared debugging backend.
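To illustrate the idea, here is a hedged sketch of what the frontend side of that deviation could look like; the extra body field on the responseReceived event and the function names are our illustration, not the actual frontend code:

```js
// Bodies arrive eagerly with the response events, so they can be
// cached locally instead of fetched from the backend on demand.
const responseBodies = new Map()

function onResponseReceived(params) {
  if (params.body !== undefined) {
    responseBodies.set(params.requestId, params.body)
  }
}

// Network.getResponseBody can now be answered without a round-trip.
function getResponseBody(requestId) {
  return { body: responseBodies.get(requestId), base64Encoded: false }
}
```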
There were various smaller challenges and bug fixes to be made and upstreamed, but let them stay behind the scenes.
Is this feature useful to you? What other features would help you to debug and develop workers more efficiently? Or maybe you would like to work on Workers and tooling yourself?
Let us know!
P.S.: If you’re looking for a fun personal project for the holidays, this could be your chance to try out Workers, and play around with our new tools.