Cloudflare has long used machine learning for bot detection, anomaly detection, customer support and business intelligence. Internally, we have a cluster of GPUs used for model training and inference.
For even longer we’ve been running code “at the edge” in more than 200 cities worldwide. Initially, that was code that we wrote and any customization was done through our UI or API. About seven years ago we started deploying custom code, written in Lua, for our enterprise customers.
But it’s quite obvious that using a language that isn’t widely understood, and going through an account executive to get code written, isn’t a viable solution, so four years ago we announced Cloudflare Workers. Workers allows anyone, on any plan, to write code that gets deployed to our edge network, in the language they choose.
After launching Workers we added storage through Workers KV, since programs need data as well as algorithms. And we’ve continued to add to the Workers platform with Workers Unbound, Durable Objects, Jurisdictional Restrictions and more.
But many of today’s applications need access to the latest machine learning and deep learning methods. Those applications need three things: to scale easily, to securely handle models and to use familiar tools.
To help anyone build AI-based applications, Cloudflare is extending the Workers platform to include support for NVIDIA GPUs and TensorFlow. Soon you’ll be able to build AI-based applications that run across the Cloudflare network using pre-built or custom models for inference.
NVIDIA + TensorFlow
For many years we’ve looked for appropriate hardware to run on our edge network to enable AI-powered applications. We’ve examined a wide variety of dedicated AI accelerator chips, as well as approaches that use clever encoding of the models to make them run efficiently on CPUs.
Ultimately, we decided that, given our large footprint of servers in more than 200 cities worldwide and the strong support for NVIDIA GPUs in AI toolkits, the best solution was to deploy NVIDIA GPUs to enable AI-based applications. Today we announced that we are partnering with NVIDIA to bring AI to our global edge network.
Previously, machine learning models were deployed on expensive centralized servers or through cloud services that limited them to “regions” around the world. Cloudflare and NVIDIA are putting machine learning at the edge, within milliseconds of the global online population, enabling high-performance, low-latency AI to be deployed by anyone.
Because the models themselves remain in Cloudflare’s data centers, developers can deploy custom models without putting them on end user devices where they might risk being stolen. And because we’ll be deploying models across our large network, scaling becomes trivial and built in.
TensorFlow has become one of the de facto standard libraries and toolsets for building and running AI models. Cloudflare’s edge AI will use TensorFlow, allowing developers to train and optimize models using familiar tools and tests before deploying them to our edge network.
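As a rough sketch of that workflow (the training data, model architecture, and export path below are made up, and the exact packaging steps for deployment to the edge aren’t described in this post), here’s how a Keras model might be trained and then exported as a standard TensorFlow SavedModel, the kind of self-contained artifact a serving system can load:

```python
import tensorflow as tf

# A trivial classifier trained on random data stands in for a real training run.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

features = tf.random.normal((256, 32))
labels = tf.random.uniform((256,), maxval=3, dtype=tf.int32)
model.fit(features, labels, epochs=2, verbose=0)

# Export as a SavedModel: a directory holding the graph and weights that
# TensorFlow serving tooling can load for inference.
tf.saved_model.save(model, "export/my_model/1")
```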
In addition to building your own models using TensorFlow, we plan to offer pre-trained models for tasks such as image labeling/object recognition, text detection, and more.
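To give a flavor of what image labeling with an off-the-shelf TensorFlow model looks like today, here’s a minimal sketch using the publicly available MobileNetV2 network pre-trained on ImageNet; the local image path is a placeholder, and the pre-trained models we offer may differ:

```python
import tensorflow as tf

# Load a publicly available image classifier pre-trained on ImageNet.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Read a local image (hypothetical path), resize it to the 224x224 input the
# model expects, then scale pixel values to [-1, 1] as MobileNetV2 requires.
image = tf.image.decode_jpeg(tf.io.read_file("food.jpg"), channels=3)
image = tf.image.resize(image, (224, 224))
batch = tf.expand_dims(image, 0) / 127.5 - 1.0

# Run inference and print the top three ImageNet labels with confidences.
predictions = model.predict(batch)
top3 = tf.keras.applications.mobilenet_v2.decode_predictions(predictions, top=3)[0]
for _, label, score in top3:
    print(f"{label}: {score:.3f}")
```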
Nata Or Not
As a demonstration of a real AI-based application running on Cloudflare’s infrastructure, the team in the Cloudflare Lisbon office built a website: nataornot.com. Upload a picture of food and it’ll tell you whether it’s one of Portugal’s delicious pasteis de nata (an egg custard tart pastry dusted with cinnamon) or not.
The code uses a TensorFlow model, trained on thousands of pictures of pasteis de nata and other foods, running on a Cloudflare server with an NVIDIA A100 Tensor Core GPU. If you want to build your own pastel de nata recognizer, we’ve open sourced the TensorFlow model here.
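The open sourced model is the authoritative reference, but a typical way to build this kind of binary classifier is transfer learning on top of a pre-trained backbone. Here’s a minimal sketch along those lines; the directory layout, backbone choice, and hyperparameters are illustrative assumptions, not the actual training code:

```python
import tensorflow as tf

# Assumed (hypothetical) layout: photos/nata/*.jpg and photos/other/*.jpg.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "photos", label_mode="binary", image_size=(224, 224), batch_size=32)

# Frozen ImageNet backbone plus a small trainable head that outputs P(nata).
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)  # scale to [-1, 1]
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```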
If you simply want to know whether the picture of food you have is a pastel de nata or not, visit nataornot.com.
Going Forward
If you are interested in Workers AI, you can try it here: https://developers.cloudflare.com/workers-ai/get-started/
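For a taste of what calling a hosted model can look like, here’s a hypothetical sketch that sends an image to a Workers AI image-classification model over Cloudflare’s REST API from Python. The account details, model name, and payload format are assumptions, so treat the linked get-started guide as the source of truth.

```python
import requests

# Assumptions (not from this post): a Cloudflare account ID, an API token with
# Workers AI permission, and the name of a hosted image-classification model.
ACCOUNT_ID = "your_account_id"
API_TOKEN = "your_api_token"
MODEL = "@cf/microsoft/resnet-50"  # hypothetical choice of hosted model

url = f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/{MODEL}"

# Send the raw image bytes and print the returned labels and confidences.
with open("food.jpg", "rb") as f:
    response = requests.post(
        url,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        data=f.read(),
    )
response.raise_for_status()
print(response.json())
```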