
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Mon, 06 Apr 2026 18:36:32 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Evaluating image segmentation models for background removal for Images]]></title>
            <link>https://blog.cloudflare.com/background-removal/</link>
            <pubDate>Thu, 28 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ An inside look at how the Images team compared dichotomous image segmentation models to identify and isolate subjects in an image from the background. ]]></description>
            <content:encoded><![CDATA[ <p>Last week, we wrote about <a href="https://blog.cloudflare.com/ai-face-cropping-for-images/"><u>face cropping for Images</u></a>, which runs an open-source face detection model in <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> to automatically crop images of people at scale.</p><p>It wasn’t too long ago when deploying AI workloads was prohibitively complex. Real-time inference previously required specialized (and costly) hardware, and we didn’t always have standard abstractions for deployment. We also didn’t always have Workers AI to enable developers — including ourselves — to ship AI features without this additional overhead.</p><p>And whether you’re skeptical or celebratory of AI, you’ve likely seen its explosive progression. New benchmark-breaking computational models are released every week. We now expect a fairly high degree of accuracy — the more important differentiators are how well a model fits within a product’s infrastructure and what developers do with its predictions.</p><p>This week, we’re introducing <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/#segment">background removal for Images</a>. This feature runs a dichotomous image segmentation model on Workers AI to isolate subjects in an image from their backgrounds. We took a controlled, deliberate approach to testing models for efficiency and accuracy.</p><p>Here’s how we evaluated various image segmentation models to develop background removal.</p>
    <div>
      <h2>A primer on image segmentation</h2>
      <a href="#a-primer-on-image-segmentation">
        
      </a>
    </div>
    <p>In computer vision, image segmentation is the process of splitting an image into meaningful parts.</p><p>Segmentation models produce a mask that assigns each pixel to a specific category. This differs from detection models, which don’t classify every pixel but instead mark regions of interest. A face detection model, such as the one that informs <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/#gravity"><u>face cropping</u></a>, draws bounding boxes based on where it thinks there are faces. (If you’re curious, <a href="https://blog.cloudflare.com/ai-face-cropping-for-images/#from-pixels-to-people"><u>our post on face cropping</u></a> discusses how we use these bounding boxes to perform crop and zoom operations.)</p><p>Salient object detection is a type of segmentation that highlights the parts of an image that most stand out. Most salient detection models create a binary mask that categorizes the most prominent (or salient) pixels as the “foreground” and all other pixels as the “background”. In contrast, a multi-class mask considers the broader context and labels each pixel as one of several possible classes, like “dog” or “chair”. These multi-class masks are the basis of content analysis models, which distinguish which pixels belong to specific objects or types of objects.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/qV2QVZYEdqdigCTuqBuHu/cf4873dddf3b30503aac6643ded1a5ab/image3.png" />
          </figure><p><sub>In this photograph of my dog, a detection model predicts that a bounding box contains a dog; a segmentation model predicts that some pixels belong to a dog, while all other pixels don’t.</sub></p><p>For our use case, we needed a model that could produce a soft saliency mask, which predicts how strongly each pixel belongs to either the foreground (objects of interest) or the background. That is, each pixel is assigned a value on a scale of 0–255, where 0 is completely transparent and 255 is fully opaque. Most background pixels are labeled at (or near) 0; foreground pixels may vary in opacity, depending on their degree of saliency.</p><p>In principle, a background removal feature must be able to accurately predict saliency across a broad range of contexts. For example, e-commerce and retail vendors want to display all products on a uniform, white background; in creative and image editing applications, developers want to enable users to create stickers and cutouts from uploaded content, including images of people or avatars.</p><p>In our research, we focused primarily on the following four image segmentation models:</p><ul><li><p><a href="https://arxiv.org/abs/2005.09007"><b><u>U</u></b><b><u><sup>2</sup></u></b><b><u>-Net (U Square Net)</u></b></a>: Trained on the largest saliency dataset (<a href="https://saliencydetection.net/duts/"><u>DUTS-TR</u></a>) of 10,553 images, which were then horizontally flipped to reach a total of 21,106 training images.</p></li><li><p><a href="https://arxiv.org/abs/2203.03041"><b><u>IS-Net (Intermediate Supervision Network)</u></b></a>: A novel, two-step approach from the authors of U2-Net; this model produces cleaner boundaries for images with noisy, cluttered backgrounds.</p></li><li><p><a href="https://arxiv.org/abs/2401.03407"><b><u>BiRefNet (Bilateral Reference Network)</u></b></a>: Specifically designed to segment complex and high-resolution images with accuracy by checking that the small details 
match the big picture.</p></li><li><p><a href="https://arxiv.org/abs/2304.02643"><b><u>SAM (Segment Anything Model)</u></b></a>: Developed by Meta to allow segmentation by providing prompts and input points.</p></li></ul><p>Different scales of information allow computational models to build a holistic view of an image. Global context considers the overall shape of objects and how areas of pixels relate to the entire image, while local context traces fine details like edges, corners, and textures. If local context focuses on the trees and their leaves, then global context represents the entire forest.</p><p><a href="https://github.com/xuebinqin/U-2-Net"><u>U</u><u><sup>2</sup></u><u>-Net</u></a> extracts information using a multi-scale approach, where it analyzes an image at different zoom levels, then combines its predictions in a single step. The model analyzes global and local context at the same time, so it works well on images with multiple objects of varying sizes.</p><p><a href="https://github.com/xuebinqin/DIS"><u>IS-Net</u></a> introduces a new, two-step strategy called intermediate supervision. First, the model separates the foreground from the background, identifying potential areas that likely belong to objects of interest — all other pixels are labeled as the background. Second, it refines the boundaries of the highlighted objects to produce a final pixel-level mask.</p><p>The initial suppression of the background results in cleaner, more precise edges, as the segmentation focuses only on the highlighted objects of interest and is less likely to mistakenly include background pixels in the final mask. This model especially excels when dealing with complex images with cluttered backgrounds.</p><p>Both models output their predictions in a single direction for scale. 
U<sup>2</sup>-Net interprets the global and local context in one pass, while IS-Net begins with the global context, then focuses on the local context.</p><p>In contrast, <a href="https://github.com/ZhengPeng7/BiRefNet"><u>BiRefNet</u></a> refines its predictions over multiple passes, moving in both contextual directions. Like IS-Net, it initially creates a map that roughly highlights the salient object, then traces the finer details. However, BiRefNet moves from global to local context, then from local context back to global. In other words, after refining the edges of the object, it feeds the output back to the large-scale view. This way, the model can check that the small-scale details align with the broader image structure, providing higher accuracy on high-resolution images.</p><p>U<sup>2</sup>-Net, IS-Net, and BiRefNet are exclusively saliency detection models, producing masks that distinguish foreground pixels from background pixels. However, <a href="https://github.com/facebookresearch/segment-anything"><u>SAM</u></a> was designed to be more extensible and general; its primary goal is to segment any object based on specified inputs, not only salient objects. This means that the model can also be used to create multi-class masks that label various objects within an image, even if they aren’t its primary focus.</p>
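<p>To make the mask format concrete, here is a minimal sketch (illustrative only, not our production pipeline) of how a soft saliency mask maps onto per-pixel transparency. It treats an image as a flat list of RGB tuples and the mask as a parallel list of 0–255 values:</p>

```python
def apply_mask(pixels, mask):
    """Attach each saliency value (0-255) to its RGB pixel as an alpha channel.

    pixels: list of (r, g, b) tuples; mask: parallel list of 0-255 values,
    where 0 renders fully transparent and 255 fully opaque.
    """
    return [(r, g, b, a) for (r, g, b), a in zip(pixels, mask)]

# Two pixels: the first is salient (kept opaque), the second is background
# (made fully transparent).
rgba = apply_mask([(200, 120, 80), (10, 10, 10)], [255, 0])
print(rgba)  # [(200, 120, 80, 255), (10, 10, 10, 0)]
```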
    <div>
      <h2>How we measure segmentation accuracy</h2>
      <a href="#how-we-measure-segmentation-accuracy">
        
      </a>
    </div>
    <p>In most saliency datasets, the actual location of the object is known as the ground-truth area. These regions are typically defined by human annotators, who manually trace objects of interest in each image. This provides a reliable reference to evaluate model predictions.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6wAV8lQcsZHosKoFyEIce1/495b3d70960027b795ec1a62f2d46a59/BLOG-2928_3.png" />
          </figure><p><sub>Photograph by </sub><a href="https://www.linkedin.com/in/fang-allen"><sub><u>Allen Fang</u></sub></a></p><p>Each model outputs a predicted area (where it thinks the foreground pixels are), which can be compared against the ground-truth area (where the foreground pixels actually are).</p><p>Models are evaluated for segmentation accuracy based on common metrics like Intersection over Union, Dice coefficient, and pixel accuracy. Each score takes a slightly different approach to quantify the alignment between the predicted and ground-truth areas (“P” and “G”, respectively, in the formulas below).</p>
    <div>
      <h3>Intersection over Union</h3>
      <a href="#intersection-over-union">
        
      </a>
    </div>
    <p>Intersection over Union (IoU), also called the Jaccard index, measures how well the predicted area matches the true object. That is, it compares the number of foreground pixels shared by the predicted and ground-truth masks against the total area covered by either mask. Mathematically, IoU is written as:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6zVQSLlKaFuVUQrDcAlf0Y/4254010745caf0d207d8f8e8181f4c9c/BLOG-2928_4.png" />
          </figure><p><sub>Jaccard formula</sub></p><p>The formula divides the intersection (P∩G), or the pixels where the predicted and ground-truth areas overlap, by the union (P∪G), or the total area of pixels that belong to either area, counting the overlapping pixels only once.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7KFLB15btpCQuKuTqakBjp/91e78ec6d565e3723c5d76b3a65a441d/unnamed__23_.png" />
          </figure><p>IoU produces a score between 0 and 1. A higher value indicates a closer overlap between the predicted and ground-truth areas. A perfect match, although rare, would score 1, while a smaller overlapping area brings the score closer to 0.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/oe82x3rPo8XoNnwG3KBRy/22f591adb6ab27b3ad05f91b13eddff7/BLOG-2928_6.png" />
          </figure>
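<p>As a concrete reference, IoU can be computed in a few lines. The sketch below is an illustration rather than the evaluation code we used, and it assumes binary masks represented as flat lists of 0/1 pixel labels:</p>

```python
def iou(pred, truth):
    """Intersection over Union (Jaccard index) for binary masks,
    given as flat lists of 0/1 pixel labels."""
    intersection = sum(p & g for p, g in zip(pred, truth))  # P∩G
    union = sum(p | g for p, g in zip(pred, truth))         # P∪G
    return intersection / union if union else 1.0

# The prediction overlaps 3 of the 4 true foreground pixels and adds
# 1 false foreground pixel: intersection = 3, union = 5.
pred  = [1, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 1, 1, 0]
print(iou(pred, truth))  # 0.6
```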
    <div>
      <h3>Dice coefficient</h3>
      <a href="#dice-coefficient">
        
      </a>
    </div>
    <p>The Dice coefficient, also called the Sørensen–Dice index, similarly compares how well the model’s prediction matches reality, but is much more forgiving than the IoU score. It gives more weight to the shared pixels between the predicted and actual foreground, even if the areas differ in size. Mathematically, the Dice coefficient is written as:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4UiJUJrjagwkmQNvdkiPC3/e17eaa8f22f57114a91f1e58fc3a76fb/BLOG-2928_7.png" />
          </figure><p><sub>Sørensen–Dice formula</sub></p><p>The formula divides twice the intersection (P∩G) by the sum of pixels in both predicted and ground-truth areas (P+G), counting any overlapping pixels twice.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7vcFBAoRJ9wpyAt8m4Sn7x/8b1962de717701ff348e90ec8b86286e/BLOG-2928_8.png" />
          </figure><p>Like IoU, the Dice coefficient also produces a value between 0 and 1, indicating a more accurate match as it approaches 1.</p>
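<p>Sketched the same way, with binary masks assumed to be flat lists of 0/1 pixel labels (illustrative only):</p>

```python
def dice(pred, truth):
    """Sørensen-Dice coefficient for binary masks as flat 0/1 lists."""
    intersection = sum(p & g for p, g in zip(pred, truth))  # P∩G
    total = sum(pred) + sum(truth)                          # P + G
    return 2 * intersection / total if total else 1.0

# 3 shared foreground pixels, 4 predicted + 4 true foreground pixels:
# Dice = 2*3 / (4+4) = 0.75, higher than the 0.6 IoU for the same masks.
pred  = [1, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 1, 1, 0]
print(dice(pred, truth))  # 0.75
```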
    <div>
      <h3>Pixel accuracy</h3>
      <a href="#pixel-accuracy">
        
      </a>
    </div>
    <p>Pixel accuracy measures the percentage of pixels that were correctly labeled as either the foreground or the background. Mathematically, pixel accuracy is written as:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/40HkiVe1a2i1dSguDk1TxO/990e49cd4d40a4eaa29078948bc9d7e8/unnamed__24_.png" />
          </figure><p><sub>Pixel accuracy formula</sub></p><p>The formula divides the number of correctly predicted pixels by the total number of pixels in the image.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1GX83EmXBSLhGlHvGFLqnn/f65fbd110f4b1d201f7585723ced0f34/image10.png" />
          </figure><p>The total area of correctly predicted pixels is the sum of foreground and background pixels that accurately match the ground-truth areas.</p><p>The correctly predicted foreground is the intersection of the predicted and ground-truth areas (P∩G). The inverse of the predicted area (P’, or 1–P) represents the pixels that the model identifies as the background; the inverse of the ground-truth area (G’, or 1–G) represents the actual boundaries of the background. When these two inverted areas overlap (P’∩G’, or (1–P)∩(1–G)), this intersection is the correctly predicted background.</p>
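<p>Following the formulation above, a minimal sketch (illustrative only, with binary masks as flat 0/1 lists) that counts the correctly predicted foreground (P∩G) and background (P′∩G′) explicitly:</p>

```python
def pixel_accuracy(pred, truth):
    """Fraction of correctly labeled pixels for binary masks as 0/1 lists."""
    correct_fg = sum(p & g for p, g in zip(pred, truth))              # P∩G
    correct_bg = sum((1 - p) & (1 - g) for p, g in zip(pred, truth))  # P'∩G'
    return (correct_fg + correct_bg) / len(truth)

# 1 correct foreground pixel and 2 correct background pixels out of 4:
pred  = [1, 1, 0, 0]
truth = [1, 0, 0, 0]
print(pixel_accuracy(pred, truth))  # 0.75
```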
    <div>
      <h2>Interpreting the metrics</h2>
      <a href="#interpreting-the-metrics">
        
      </a>
    </div>
    <p>Of the three metrics, IoU is the most conservative measure of segmentation accuracy. Small mistakes, such as including extra background pixels in the predicted foreground, reduce the score noticeably. This metric is most valuable for applications that require precise boundaries, such as autonomous driving systems.</p><p>Meanwhile, the Dice coefficient rewards the overlapping pixels more heavily, and consequently tends to be higher than the IoU score for the same prediction. In model evaluations, this metric is favored over IoU when it’s more important to capture the object than to penalize mistakes. For example, in medical imaging, the risk of missing a true positive substantially outweighs the inconvenience of flagging a false positive.</p><p>In the context of background removal, we favored the IoU score and Dice coefficient over pixel accuracy. Pixel accuracy can be misleading, especially when processing an image where background pixels make up the majority of the image.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7K8TWmRLdJNIza43UoXhD8/c9a42ed7074ce975afd8f7e783db5849/BLOG-2928_11.png" />
          </figure><p>For example, consider an image with 900 background pixels and 100 foreground pixels. A model that correctly predicts only 5 foreground pixels — 5% of all foreground pixels — will score deceptively high in pixel accuracy. Intuitively, we’d likely say that this model performed poorly. However, assuming all 900 background pixels were correctly predicted, the model maintains 90.5% pixel accuracy, despite missing the subject almost entirely.</p>
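<p>This arithmetic is easy to check directly. The short sketch below (illustrative only) reproduces the example and computes all three metrics for the same prediction:</p>

```python
# A 1,000-pixel image: 100 true foreground pixels, of which the model
# finds only 5 (with no false positives).
truth = [1] * 100 + [0] * 900
pred  = [1] * 5   + [0] * 995

correct = sum(p == g for p, g in zip(pred, truth))      # 5 fg + 900 bg
intersection = sum(p & g for p, g in zip(pred, truth))  # P∩G = 5
union = sum(p | g for p, g in zip(pred, truth))          # P∪G = 100

pixel_acc = correct / len(truth)                         # 905/1000 = 0.905
iou_score = intersection / union                         # 5/100 = 0.05
dice_score = 2 * intersection / (sum(pred) + sum(truth)) # 10/105 ≈ 0.095
print(pixel_acc, iou_score, dice_score)
```

<p>Pixel accuracy looks respectable, while IoU and the Dice coefficient correctly flag the prediction as poor.</p>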
    <div>
      <h2>Pixels, predictions, and patterns</h2>
      <a href="#pixels-predictions-and-patterns">
        
      </a>
    </div>
    <p>To determine the most suitable model for the Images API, we performed a series of tests using the open-source <a href="https://github.com/danielgatis/rembg"><u>rembg</u></a> library, which combines all relevant models in a single interface.</p><p>Each model was tasked with outputting a prediction mask to label foreground versus background pixels. We pulled images from two saliency datasets: <a href="https://huggingface.co/datasets/schirrmacher/humans"><b><u>Humans</u></b></a> contains over 7,000 images of people with varying skin tones, clothing, and hairstyles, while <a href="https://xuebinqin.github.io/dis/index.html#overview"><b><u>DIS5K</u></b></a> (version 1.5) spans a vast range of objects and scenes. If a model had variants pre-trained on specific types of segmentation (e.g. clothes, humans), then we repeated the tests for the generalized model and each variant.</p><p>Our experiments were executed on a GPU with 23 GB VRAM to mirror realistic hardware constraints, similar to the environment where we already run a face detection model. We also replicated the same tests on a larger GPU instance with 94 GB VRAM; this served as an upper-bound reference point to benchmark potential speed gains if additional compute were available. Cloudflare typically reserves larger GPUs for more compute-intensive <a href="https://developers.cloudflare.com/workers-ai/models/"><u>AI workloads</u></a> — we viewed these tests more as an exploration for comparison than as a production scenario.</p><p>During our analysis, we started to see key trends emerge:</p><p>On the smaller GPU, inference times were generally faster for lightweight models like U<sup>2</sup>-Net (176 MB) and IS-Net (179 MB). The average speeds across both datasets were 307 milliseconds for U<sup>2</sup>-Net and 351 milliseconds for IS-Net. 
On the opposite end, BiRefNet (973 MB) had noticeably slower output times, averaging 821 milliseconds across its two generalized variants.</p><p>BiRefNet ran 2.4 times faster on the larger GPU, reducing its average inference time to 351 milliseconds — comparable to the other models, despite its larger size. In contrast, the lighter models did not show any notable speed gain with additional compute, suggesting that scaling hardware configurations primarily benefits heavier models. In <a href="https://blog.cloudflare.com/background-removal/#appendix-1-inference-time-in-milliseconds">Appendix 1</a> (“Inference Time in Milliseconds”), we compare speed across models and GPU instances.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/55Tk0RjbvoffPVQT85UJQe/ca1f2280768495f3be52425e642fdd25/BLOG-2928_12.png" />
          </figure><p>We also observed distinct patterns when comparing model performance across the two saliency datasets. Most notably, all models ran faster on the Humans dataset, where images of people tend to be single-subject and relatively uniform. The DIS5K dataset, in contrast, includes images with higher complexity — that is, images with cluttered backgrounds or multiple objects of varying scales.</p><p>Slower predictions suggest a relationship between visual complexity and the computation needed to identify the important parts of an image. In other words, datasets with simpler, well-separated objects can be analyzed more quickly, while complex scenes require more computation to generate accurate masks.</p><p>Similarly, complexity challenges accuracy as much as it does efficiency. In our tests, all models demonstrated higher segmentation accuracy with the Humans dataset. In <a href="https://blog.cloudflare.com/background-removal/#appendix-2-measures-of-model-accuracy">Appendix 2</a> (“Measures of Model Accuracy”), we present our results for segmentation accuracy across both datasets.</p><p>Specialized variants scored slightly higher in accuracy than their generalized counterparts. But in broad, practical applications, selecting a specialized model for every input isn’t realistic, at least for our initial beta version. We favored general-purpose models that can produce accurate predictions without prior classification. For this reason, we excluded SAM — while powerful in its intended use cases, SAM is designed to work with additional inputs. On unprompted segmentation tasks, it produced lower accuracy scores (and much higher inference times) than the other models we tested.</p><p>All BiRefNet variants showed greater accuracy than the other models. The generalized variants (<code>-general</code> and <code>-dis</code>) were just as accurate as the more specialized variants like <code>-portrait</code>. 
The <code>birefnet-general</code> variant, in particular, achieved a high IoU score of 0.87 and Dice coefficient of 0.92, averaged across both datasets.</p><p>In contrast, the generalized U<sup>2</sup>-Net model showed high accuracy on the Humans dataset, reaching an IoU score of 0.89 and a Dice coefficient of 0.94, but received a low IoU score of 0.39 and Dice coefficient of 0.52 on the DIS5K dataset. The <code>isnet-general-use</code> model performed substantially better, obtaining an average IoU score of 0.82 and Dice coefficient of 0.89 across both datasets.</p><p>We also examined whether models could interpret both the global and local context of an image. In some scenarios, the U<sup>2</sup>-Net and IS-Net models captured the overall gist of an image, but couldn’t accurately trace fine edges. We designed one test to measure how well each model could isolate bicycle wheels; for variety, we included images with both indoor and outdoor backgrounds. Lower-scoring models, while correctly labeling the area surrounding the wheel, struggled with the pixels between the thin spokes and produced prediction masks that included these background pixels.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6mzRTqXhZRk0GuzwuIRu4p/b251aa4f3dbeecc11dbba931623607e5/BLOG-2928_13.png" />
          </figure><p><sub>Photograph by </sub><a href="https://unsplash.com/photos/person-near-bike-p6OU_gENRL0"><sub><u>Yomex Owo on Unsplash</u></sub></a></p><p>In other scenarios, the models showed the opposite limitation: they produced masks with clean edges, but failed to identify the focus of the image. We ran another test using a photograph of a gray T-shirt against black gym flooring. Both the generalized U<sup>2</sup>-Net and IS-Net models labeled only the logo as the salient object, creating a mask that omitted the rest of the shirt entirely.</p><p>Meanwhile, the BiRefNet model achieved high accuracy across both types of tests. Its architecture passes information bidirectionally, allowing details at the pixel level to be informed by the larger scene (and vice versa). In practice, this means that BiRefNet interprets how fine-grained edges fit into the broader object. For our beta version, we opted to use the BiRefNet model to drive decisions for background removal.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/741GSfhMn8MPykb6NkWUJV/1ef5006aea8f67a4faeec73862d97ced/BLOG-2928_14.png" />
          </figure><p><sub>Unlike lower scoring models, the BiRefNet model understood that the entire shirt is the true subject of the image.</sub></p>
    <div>
      <h2>Applying background removal with the Images API</h2>
      <a href="#applying-background-removal-with-the-images-api">
        
      </a>
    </div>
    <p>The Images API now supports <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/#segment">automatic background removal</a> for <a href="https://developers.cloudflare.com/images/upload-images/"><u>hosted</u></a> and <a href="https://developers.cloudflare.com/images/transform-images/"><u>remote</u></a> images. This feature is available in open beta to all Cloudflare users on <a href="https://developers.cloudflare.com/images/pricing/"><u>Free and Paid plans</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3iglNDllwEMvg6ygDvTRNc/a354422efd166cb3b48ee10995e78aa4/unnamed__25_.png" />
          </figure><p>Use the <code>segment</code> parameter when optimizing an image through a <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/"><u>specially-formatted Images URL</u></a> or a <a href="https://developers.cloudflare.com/images/transform-images/transform-via-workers/"><u>worker</u></a>, and Cloudflare will isolate the subject of your image and convert the background into transparent pixels. This can be combined with <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/"><u>other optimization operations</u></a>, as shown in the transformation URL below: </p>
            <pre><code>example.com/cdn-cgi/image/gravity=face,zoom=0.5,segment=foreground,background=white/image.png</code></pre>
            <p>This request will:</p><ul><li><p>Crop the image toward the <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/#gravity"><u>detected face</u></a>.</p></li><li><p>Isolate the subject in the image, replacing the background with transparent pixels.</p></li><li><p><a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/#background"><u>Fill the transparent pixels</u></a> with a solid white color (<code>#FFFFFF</code>).</p></li></ul><p>You can also <a href="https://developers.cloudflare.com/images/transform-images/bindings/"><u>bind the Images API</u></a> to your worker to build programmatic workflows that give more fine-grained control over how images will be optimized. To demonstrate how this works, I made a <a href="https://studio.yaydeanna.workers.dev/"><u>simple image editing app</u></a> for creating cutouts and overlays, built entirely on Images and <a href="https://developers.cloudflare.com/workers/"><u>Workers</u></a>. This can be used to create images <a href="https://studio.yaydeanna.workers.dev/?order=0%2C1%2C2&amp;i0=icecream&amp;vertEdge0=bottom&amp;vertVal0=0&amp;horEdge0=left&amp;h0=400&amp;bg0=1&amp;i1=pete&amp;vertEdge1=top&amp;horEdge1=left&amp;h1=700&amp;bg1=1&amp;i2=iceland&amp;vertEdge2=top&amp;horEdge2=left"><u>like the one below</u></a>. Here, we apply background removal to isolate the dog and ice cream cone, then overlay them on a landscape image.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Z6t9ov1t3fbbQojbYbGDh/961cef0f06780bfd8c088772a7add796/image11.png" />
          </figure><p><sub>Photographs by </sub><a href="https://www.pexels.com/@guyjoben/"><sub><u>Guy Hurst</u></sub></a><sub> (landscape), </sub><a href="https://www.pexels.com/@oskar-gackowski-2150870625/"><sub><u>Oskar Gackowski</u></sub></a><sub> (ice cream), and me (dog)</sub></p><p>Here is a snippet that you can use to overlay images in a worker:</p>
            <pre><code>export default {
  async fetch(request, env) {
    const baseURL = "{image-url}";
    const overlayURL = "{image-url}";

    // Fetch responses from both image URLs
    const [base, overlay] = await Promise.all([fetch(baseURL), fetch(overlayURL)]);

    return (
      await env.IMAGES
        .input(base.body)
        .draw(
          env.IMAGES.input(overlay.body)
            .transform({ segment: "foreground" }), // Optimize the overlay image
          { top: 0 } // Position the overlay
        )
        .output({ format: "image/webp" })
    ).response();
  }
};</code></pre>
            <p>Background removal is another step in our ongoing effort to enable developers to build interactive and imaginative products. These features are an iterative process, and we’ll continue to refine our approach even further. We’re looking forward to sharing our progress with you.</p><p>Read more about applying background removal in our <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/#segment"><u>documentation</u></a>.</p>
    <div>
      <h3>Appendix 1: Inference Time in Milliseconds</h3>
      <a href="#appendix-1-inference-time-in-milliseconds">
        
      </a>
    </div>
    
    <div>
      <h4>23 GB VRAM GPU</h4>
      <a href="#23-gb-vram-gpu">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2e97UAIgglJ3kP3ozm8lZT/6a44de14aa5179071eb7bbb3c8f31feb/BLOG-2928_17.png" />
          </figure>
    <div>
      <h4>94 GB VRAM GPU</h4>
      <a href="#94-gb-vram-gpu">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2viOyCtbzsloUAvY8kXPJV/378feb50a1dd822d7c848133fbac6a3f/BLOG-2928_18.png" />
          </figure>
    <div>
      <h3>Appendix 2: Measures of Model Accuracy</h3>
      <a href="#appendix-2-measures-of-model-accuracy">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2G9hwnFrlT4eF2isWyaEjk/d3418df56dff686c27f46d96fc86c37f/BLOG-2928_19.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Image Optimization]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">q17H7D8gSkyNAPELuTHl9</guid>
            <dc:creator>Deanna Lam</dc:creator>
            <dc:creator>Diretnan Domnan</dc:creator>
        </item>
        <item>
            <title><![CDATA[How we built AI face cropping for Images]]></title>
            <link>https://blog.cloudflare.com/ai-face-cropping-for-images/</link>
            <pubDate>Wed, 20 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ AI face cropping for Images automatically crops around faces in an image. Here’s how we built this feature on Workers AI to scale for general availability. ]]></description>
            <content:encoded><![CDATA[ <p>During Developer Week 2024, we introduced <a href="https://blog.cloudflare.com/whats-next-for-cloudflare-media/"><u>AI face cropping in private beta</u></a>. This feature automatically crops images around detected faces, and marks the first release in our upcoming suite of AI image manipulation capabilities.</p><p><a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/#gravity"><u>AI face cropping</u></a> is now available in <a href="https://developers.cloudflare.com/images/"><u>Images</u></a> for everyone. To bring this feature to general availability, we moved our CPU-based prototype to a GPU-based implementation in Workers AI, enabling us to address a number of technical challenges, including memory leaks that could hamper large-scale use.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1uwRmMEA9LSDoeZgMbYjcM/71b941d57605b0a5286f6f0ccc7dd5e9/1.png" />
          </figure><p><sup><i>Photograph by </i></sup><a href="https://unsplash.com/photos/woman-in-black-cardigan-standing-beside-pink-flowers-UO-82DJ3rcc"><sup><i><u>Suad Kamardeen (@suadkamardeen) on Unsplash</u></i></sup></a></p>
    <div>
      <h2>Turning raw images into production-ready assets</h2>
      <a href="#turning-raw-images-into-production-ready-assets">
        
      </a>
    </div>
    <p>We developed face cropping with two particular use cases in mind:</p><p><b>Social media platforms and AI chatbots.</b> We observed a lot of traffic from customers who use Images to turn unedited images of people into smaller profile pictures in neat, fixed shapes.</p><p><b>E-commerce platforms.</b> The same product photo might appear in a grid of thumbnails on a gallery page, then again on an individual product page with a larger view. The following example illustrates how cropping can change the emphasis from the model’s shirt to their sunglasses.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/zj35mxsUccGShpq5YGHAD/7ffb7b2f8c517be06e2bab6f42aa9a06/2.png" />
          </figure><p><sup><i>Photograph by </i></sup><a href="https://unsplash.com/photos/a-man-wearing-sunglasses-IJozQuMbo3M"><sup><i><u>Media Modifier (@mediamodifier) on Unsplash</u></i></sup></a></p><p>When handling high volumes of media content, preparing images for production can be tedious. With Images, you don’t need to manually generate and store multiple versions of the same image. Instead, we serve copies of each image, each optimized to your specifications, while you continue to <a href="https://developers.cloudflare.com/images/upload-images/"><u>store only the original image</u></a>.</p>
    <div>
      <h2>Crop everything, everywhere, all at once</h2>
      <a href="#crop-everything-everywhere-all-at-once">
        
      </a>
    </div>
    <p>Cloudflare provides a <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/"><u>library of parameters</u></a> to manipulate how an image is served to the end user. For example, you can crop an image to a square by setting its <code>width</code> and <code>height</code> dimensions to 100x100.</p><p>By default, images are cropped toward the center coordinates of the original image. The <code>gravity</code> parameter can affect how an image gets cropped by changing its focal point. You can specify coordinates to use as the focal point of an image or allow Cloudflare to automatically determine a new focal point.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/78jngtcSCwB80JcZgDsW4d/44bcaf9aa61c6c5c66eb281fba91a472/3.png" />
          </figure><p><sup><i>The gravity parameter is useful when cropping images with off-centered subjects. Photograph by </i></sup><a href="https://unsplash.com/photos/selective-focus-photography-of-pink-petaled-flower-EfhCUc_fjrU"><sup><i><u>Andrew Small (@andsmall) on Unsplash</u></i></sup></a></p><p>The <code>gravity=auto</code> option uses a saliency algorithm to pick the most optimal focal point of an image. Saliency detection identifies the parts of an image that are most visually important; the cropping operation is then applied toward this region of interest. Our algorithm analyzes images using visual cues such as color, luminance, and texture, but doesn’t consider context within an image. While this setting works well on images with inanimate objects like plants and skyscrapers, it doesn’t reliably account for subjects as contextually meaningful as people’s faces.</p><p>And yet, images of people comprise the majority of bandwidth usage for many applications, such as an AI chatbot platform that uses Images to serve over 45 million unique transformations each month. This presented an opportunity for us to improve how developers can optimize images of people.</p><p>AI face cropping can be performed by using the <code>gravity=face</code> option, which automatically detects which pixels represent the face (or faces) and uses this information to crop the image. You can also affect how closely the image is cropped toward the face; the <code>zoom</code> parameter controls the threshold for how much of the surrounding area around the face will be included in the image.</p><p>We carefully designed our model pipeline with privacy and confidentiality top of mind. This feature doesn’t support facial identification or recognition. In other words, when you optimize with Cloudflare, we’ll never know that two different images depict the same person, or identify the specific people in a given image. 
Instead, AI face cropping with Images is intentionally limited to face detection, or identifying the pixels that represent a human face.</p>
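As a concrete sketch, these options compose into a transformation URL. The `/cdn-cgi/image/<options>/<source>` path shape follows Cloudflare's transform-via-URL convention linked above; the origin, image path, and the `transformUrl` helper here are hypothetical illustration, not part of the product API:

```javascript
// Build a transformation URL from an options object. The
// /cdn-cgi/image/<options>/<source> shape follows the transform-via-URL
// convention; example.com and the image path are placeholder values.
function transformUrl(origin, options, sourcePath) {
  const opts = Object.entries(options)
    .map(([key, value]) => `${key}=${value}`)
    .join(",");
  return `${origin}/cdn-cgi/image/${opts}/${sourcePath}`;
}

// Crop to a 100x100 square toward the default center focal point:
transformUrl("https://example.com", { width: 100, height: 100 }, "team/photo.jpg");
// → "https://example.com/cdn-cgi/image/width=100,height=100/team/photo.jpg"

// Crop toward detected faces instead, zooming halfway out from the bounding box:
transformUrl(
  "https://example.com",
  { width: 100, height: 100, gravity: "face", zoom: 0.5 },
  "team/photo.jpg"
);
```

The helper only concatenates options in insertion order; the actual set of supported parameters is defined in the transformation docs.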
    <div>
      <h2>From pixels to people</h2>
      <a href="#from-pixels-to-people">
        
      </a>
    </div>
    <p>Our first step was to select an open-source model that met our requirements. Behind the scenes, our AI face cropping uses <a href="https://github.com/serengil/retinaface"><u>RetinaFace</u></a>, a convolutional neural network model that detects human faces in images.</p><p>A <a href="https://www.cloudflare.com/learning/ai/what-is-neural-network/"><u>neural network</u></a> is a type of machine learning process that loosely resembles how the human brain works. A basic neural network has three parts: an input layer, one or more hidden layers, and an output layer. Nodes in each layer form an interconnected network to transmit and process data, where each input node is connected to nodes in the next layer.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6bm2f6z6XoV7KncKSXTTmG/01f9fa9da23a3fb792883d90f180780c/4.png" />
          </figure><p><sup><i>A fully connected layer passes data from one layer to the next.</i></sup></p><p>Data enters through the input layer, where it is analyzed before being passed to the first hidden layer. All of the computation is done in the hidden layers, where a result is eventually delivered through the output layer.</p><p>A convolutional neural network (CNN) mirrors how humans look at things. When we look at other people, we start with abstract features, like the outline of their body, before we process specific features, like the color of their eyes or the shape of their lips.</p><p>Similarly, a CNN processes an image piece-by-piece before delivering the final result. Earlier layers look for abstract features like edges and colors and lines; subsequent layers become more complex and are each responsible for identifying the various features that comprise a human face. The last fully connected layer combines all categorized features to produce one final classification of the entire image. In other words, if an image contains all of the individual features that define a human face (e.g. eyes, nose), then the CNN concludes that the image contains a human face.</p><p>We needed a model that could determine whether an image depicts a person (image classification), as well as exactly where they are in the image (object detection). 
When selecting a model, some factors we considered were:</p><ul><li><p><b>Performance on the </b><a href="http://shuoyang1213.me/WIDERFACE/index.html"><b><u>WIDERFACE</u></b></a><b> dataset.</b> This is the state-of-the-art face detection benchmark dataset, which contains 32,203 images of 393,703 labeled faces with a high degree of variability in scale, pose, and occlusion.</p></li><li><p><b>Speed (in frames per second).</b> Most of our image optimization requests occur on delivery (rather than before an image gets uploaded to storage), so we prioritized performance for end-user delivery.</p></li><li><p><b>Model size.</b> Smaller models run more efficiently.</p></li><li><p><b>Quality.</b> The performance boost from smaller models often comes at the cost of quality; the key is balancing speed with results.</p></li></ul><p>Our initial test sample contained 500 images with varying factors like the number of faces in the image, face size, lighting, sharpness, and angle. We tested various models, including <a href="https://github.com/hollance/BlazeFace-PyTorch"><u>BlazeFace</u></a>, <a href="https://arxiv.org/abs/1311.2524"><u>R-CNN</u></a> (and its successors <a href="https://arxiv.org/abs/1504.08083"><u>Fast R-CNN</u></a> and <a href="https://arxiv.org/abs/1506.01497"><u>Faster R-CNN</u></a>), <a href="https://github.com/serengil/retinaface"><u>RetinaFace</u></a>, and <a href="https://arxiv.org/abs/1506.02640"><u>YOLO</u></a> (You Only Look Once).</p><p>Two-stage detectors like R-CNN and its successors propose potential object locations in an image, then identify objects in those regions of interest. One-stage detectors like BlazeFace, RetinaFace, and YOLO predict object locations and classes in a single pass. In our research, we observed that two-stage detector methods provided higher accuracy, but performed too slowly to be practical for real traffic.
On the other hand, one-stage detector methods were efficient and performant while still highly accurate.</p><p>Ultimately, we selected RetinaFace, which showed the highest precision of 99.4% and performed faster than other models with comparable values. We found that RetinaFace delivered strong results even with images containing multiple blurry faces:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5cCiUcM7S7f1XRo5e8f1L5/696dde02a2de76e176f49f99fc784e11/5.png" />
          </figure><p><sup><i>Photograph by </i></sup><a href="https://unsplash.com/photos/people-in-green-life-vest-on-water-during-daytime-1Ltm4zrGSVg"><sup><i><u>Anne Nygård (@polarmermaid) on Unsplash</u></i></sup></a></p><p><a href="https://www.cloudflare.com/learning/ai/inference-vs-training/"><u>Inference</u></a>—the process of using trained models to make decisions—can be computationally demanding, especially with very large images. To maintain efficiency, we set a maximum size limit of 1024x1024 pixels when sending images to the model.</p><p>We pass images within these dimensions directly to the model for analysis. But if either width or height dimension exceeds 1024 pixels, then we instead create an inference image to send to the model; this is a smaller copy that retains the same aspect ratio as the original image and does not exceed 1024 pixels in either dimension. For example, a 125x2000 image will be downscaled to 64x1024. Creating this resized, temporary version reduces the amount of data that the model needs to analyze, enabling faster processing.</p><p>The model returns all of the bounding boxes, or the regions within an image that define the detected faces. From there, we construct a new, outer bounding box that encompasses all of the individual boxes, calculating its <code>top-left</code> and <code>bottom-right</code> points based on the boxes that are closest to the top, left, bottom, and right edges of the image.</p><p>The <code>top-left</code> point uses the <code>x</code> coordinate from the left-most box and the <code>y</code> coordinate from the top-most box. Similarly, the <code>bottom-right</code> point uses the <code>x</code> coordinate from the right-most box and the <code>y</code> coordinate from the bottom-most box. These coordinates can be taken from the same bounding boxes; if a single box is closest to both the top and left edges, then we would use its top-left corner as the <code>top-left</code> point of the outer bounding box.</p>
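The downscaling and box-merging steps described above can be sketched as two small pure functions. This is a simplified illustration, not the production implementation, and the `{left, top, right, bottom}` box shape is an assumed representation:

```javascript
const MAX_DIM = 1024; // size limit for images sent to the model

// Downscale dimensions so neither exceeds MAX_DIM, preserving aspect ratio.
function inferenceSize(width, height) {
  if (width <= MAX_DIM && height <= MAX_DIM) return { width, height };
  const scale = MAX_DIM / Math.max(width, height);
  return { width: Math.round(width * scale), height: Math.round(height * scale) };
}

// Merge per-face boxes into one outer bounding box: the left-most and
// top-most edges give the top-left point; the right-most and bottom-most
// edges give the bottom-right point.
function outerBox(boxes) {
  return {
    left: Math.min(...boxes.map((b) => b.left)),
    top: Math.min(...boxes.map((b) => b.top)),
    right: Math.max(...boxes.map((b) => b.right)),
    bottom: Math.max(...boxes.map((b) => b.bottom)),
  };
}

inferenceSize(125, 2000); // → { width: 64, height: 1024 }, matching the example above
```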
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2vchhlXYoakCiy7S2MglHb/6e5b3c1a36c5fa20cd45122a0a966777/6.png" />
          </figure><p><sup><i>AI face cropping identifies regions that represent faces, then determines an outer bounding box and focal point based on the top-most, left-most, right-most, and bottom-most bounding boxes.</i></sup></p><p>Once we define the outer bounding box, we use its center coordinates as the focal point when cropping the image. From our experiments, we found that this produced better and more balanced results for images with multiple faces compared to other methods, like establishing the new focal point around the largest detected face.</p><p>The cropped image area is calculated based on the dimensions of the outer bounding box (“d”) and a specified zoom level (“z”) in the formula (1 ÷ z) × d. The <code>zoom</code> parameter accepts floating-point values between 0 and 1, where we crop the image to the bounding box when <code>zoom=1</code> and include more of the area around the box as <code>zoom</code> trends toward <code>0</code>.</p><p>Consider an original image that is 2048x2048. First, we create an inference image that is 1024x1024 to meet our size limits for face detection. Second, we define the outer bounding box using the model’s predictions—we’ll use 100x500 for this example. At <code>zoom=0.5</code>, our formula generates a crop area that is twice as large as the bounding box, with new width (“w”) and height (“h”) dimensions of 200x1000:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5KHvyiB2EYWJ7ZGLKm5q99/7fd39d99324b0fce9148fa1d861cc7fa/7.png" />
          </figure><p>We also apply a <code>min</code> function that chooses the smaller number between the input dimensions and the calculated dimensions, ensuring that the new width and height never exceed the dimensions of the image itself. In other words, if you try to zoom out too much, then we use the full width or height of the image instead of defining a crop area that will extend beyond the edge of the image. For example, at <code>zoom=0.25</code>, our formula yields an initial crop area of 400x2000. Here, since the calculated height (2000) is larger than the input height (1024), we use the input height to set the crop area to 400x1024.</p><p>Finally, we need to scale the crop area back to the size of the original image. This applies only when a smaller inference image is created.</p><p>We initially downscaled the original 2048x2048 image by a factor of 2 to create the 1024x1024 inference image. This means that we need to multiply the dimensions of the crop area—400x1024 in our latest example—by 2 to produce our final result: a cropped image that is 800x2048.</p>
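Putting the formula, the min clamp, and the rescale together, here is a minimal sketch of the whole calculation. It assumes a single uniform scale factor between the inference image and the original (as in the square 2048x2048 example) and is illustrative, not the production code:

```javascript
// Crop area from the outer bounding box and zoom level: (1 / z) × d per
// dimension, clamped to the inference image's edges, then scaled back to
// original-image coordinates (scaleBack = original size / inference size).
function cropArea(boxWidth, boxHeight, inferWidth, inferHeight, zoom, scaleBack) {
  const width = Math.min(inferWidth, boxWidth / zoom);
  const height = Math.min(inferHeight, boxHeight / zoom);
  return { width: width * scaleBack, height: height * scaleBack };
}

// The example above: a 100x500 box on a 1024x1024 inference image that was
// downscaled from a 2048x2048 original (scaleBack = 2), at zoom=0.25.
cropArea(100, 500, 1024, 1024, 0.25, 2); // → { width: 800, height: 2048 }
```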
    <div>
      <h2>The architecture behind the earliest build</h2>
      <a href="#the-architecture-behind-the-earliest-build">
        
      </a>
    </div>
    <p>In the beta version, we rewrote the model using <a href="https://github.com/tensorflow/rust"><u>TensorFlow Rust</u></a> to make it compatible with our existing Rust-based stack. All of the computations for inference—where the model classifies and locates human faces—were executed on CPUs within our network.</p><p>Initially, this worked well and we saw near-realtime results.</p><p>However, the underlying limitations of our implementation became apparent when we started receiving consistent alerts that our underlying Images service was nearing its limits for memory usage. The increased memory usage didn’t line up with any recent deployments around this time, but a hunch led us to discover that the face cropping compute time graph had an uptick that matched the uptick in memory usage. Further tracing confirmed that AI face cropping was at the root of the problem.</p><p>When a service runs out of memory, it terminates its processes to free up memory and prevent the system from crashing. Since CPU-based implementations share RAM with other processes, this can potentially cause errors for other image optimization operations. In response, we switched our memory allocator from <a href="https://github.com/iromise/glibc"><u>glibc malloc</u></a> to <a href="https://github.com/jemalloc/jemalloc"><u>jemalloc</u></a>. This allowed us to use less memory at runtime, saving about 20 TiB of RAM globally. We also started culling the number of face cropping requests to limit CPU usage.</p><p>At this point, AI face cropping was already limited to our own internal uses and a small number of beta customers. These steps only temporarily reduced our memory consumption. They weren’t sufficient for handling global traffic, so we looked toward a more scalable design for long-term use.</p>
    <div>
      <h2>Doing more with less (memory)</h2>
      <a href="#doing-more-with-less-memory">
        
      </a>
    </div>
    <p>With memory usage alerts looming in the distance, it became clear that we needed to move to a GPU-based approach.</p><p>Unlike with CPUs, a GPU-based implementation avoids contention with other processes because memory access is typically dedicated and managed more tightly. We partnered with the <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> team, who created a framework for internal teams to integrate payloads into their model catalog for GPU access.</p><p>Some Workers AI models have their own standalone containers; this isn’t practical for every model, as routing traffic to multiple containers can be expensive. When using a GPU through Workers AI, the data needs to travel over the network, which can introduce latency. This is where model size is especially relevant, as network transport overhead becomes more noticeable with larger models.</p><p>To address this, Workers AI wraps smaller models in a single container and utilizes a latency-sensitive routing algorithm to identify the best instance to serve each payload. This means that models can be offloaded when there is no traffic.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4MGV5N7dnK9H5DleDm1gsf/d9888c3e6f1f9057d69509560967abfd/8.png" />
          </figure><p><sup><i>A scheduler is used to optimize how—and when—models in the same container interact with GPUs.</i></sup></p><p>RetinaFace runs on 1 GB of VRAM on the smallest GPU; it’s small enough that it can be hot swapped at runtime alongside similarly sized models. If there is a call for the RetinaFace model, then the Python code will be loaded into the environment and executed.</p><p>As expected, we saw a significant drop in memory usage after we moved the feature to Workers AI. Now, each instance of our Images service consumes about 150 MiB of memory.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/n8UuvaiShes8W19fn1d4Y/d73d3e5f7e030c5da602af8dba54ed18/9.png" />
          </figure><p>With this new approach, memory leaks pose less concern to the overall availability of our service. Workers AI executes models within containers, so they can be terminated and restarted as needed without impacting other processes. Since face cropping runs separately from our Images service, restarting it won’t halt our other image optimization operations.</p>
    <div>
      <h2>Applying AI face cropping to our blog</h2>
      <a href="#applying-ai-face-cropping-to-our-blog">
        
      </a>
    </div>
    <p>As part of our beta launch, we updated the <a href="https://blog.cloudflare.com/"><u>Cloudflare blog</u></a> to apply AI face cropping on author images.</p><p>Authors can submit their own images, which appear as circular profile pictures in both the main blog feed and individual blog posts. By default, CSS centers images within their containers, making off-centered head positions more obvious. When two profile pictures include different amounts of negative space, this can also lead to a visual imbalance where authors’ faces appear at different scales:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7K6ENlSUrQj3StJYbke65l/63ff7b00802377f8e7913907aef36dd8/10.png" />
          </figure><p><sup><i>AI face cropping makes posts with multiple authors appear more balanced.</i></sup></p><p>In the example above, Austin’s original image is cropped tightly around his face. On the other hand, Taylor’s original image includes his torso and a larger margin of the background. As a result, Austin’s face appears larger and closer to the center than Taylor’s does. After we applied AI face cropping to profile pictures on the blog, their faces appear more similar in size, creating more balance and cohesion on their co-authored post.</p>
    <div>
      <h2>A new era of image editing, now in Images</h2>
      <a href="#a-new-era-of-image-editing-now-in-images">
        
      </a>
    </div>
    <p>Many developers already use Images to build scalable media pipelines. Our goal is to accelerate image workflows by automating rote, manual tasks.</p><p>For the Images team, this is only the beginning. We plan to release new AI capabilities, including features like background removal and generative upscale. You can try AI face cropping for free by <a href="https://dash.cloudflare.com/?to=/:account/images/delivery-zones"><u>enabling transformations in the Images dashboard</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <category><![CDATA[Image Optimization]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">5j8iAw1mBIHhVkaj0UcbSZ</guid>
            <dc:creator>Deanna Lam</dc:creator>
            <dc:creator>Diretnan Domnan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Improve your media pipelines with the Images binding for Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/improve-your-media-pipelines-with-the-images-binding-for-cloudflare-workers/</link>
            <pubDate>Thu, 03 Apr 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Media-rich applications require image and video pipelines that integrate seamlessly with the rest of your technology stack. Here’s how the Images binding enables you to build more flexible workflows. ]]></description>
            <content:encoded><![CDATA[ <p>When building a full-stack application, many developers spend a surprising amount of time trying to make sure that the various services they use can communicate and interact with each other. Media-rich applications require image and video pipelines that can integrate seamlessly with the rest of your technology stack.</p><p>With this in mind, we’re excited to introduce the <a href="https://developers.cloudflare.com/images/transform-images/bindings"><u>Images binding</u></a>, a way to connect the <a href="https://developers.cloudflare.com/images/transform-images/transform-via-workers/"><u>Images API</u></a> directly to your <a href="https://developers.cloudflare.com/workers/"><u>Worker</u></a> and enable new, programmatic workflows. The binding removes unnecessary friction from application development by allowing you to transform, overlay, and encode images within the Cloudflare Developer Platform ecosystem.</p><p>In this post, we’ll explain how the Images binding works, as well as the decisions behind <a href="https://developers.cloudflare.com/workers/local-development/"><u>local development support</u></a>. We’ll also walk through an example app that watermarks and encodes a user-uploaded image, then uploads the output directly to an <a href="https://developers.cloudflare.com/r2/"><u>R2</u></a> bucket.</p>
    <div>
      <h2>The challenges of <code>fetch()</code></h2>
      <a href="#the-challenges-of-fetch">
        
      </a>
    </div>
    <p><a href="https://developers.cloudflare.com/images/"><u>Cloudflare Images</u></a> was designed to help developers build scalable, cost-effective, and reliable image pipelines. You can deliver multiple copies of an image — each resized, manipulated, and encoded based on your needs. Only the original image needs to be stored; different versions are generated dynamically, or as requested by a user’s browser, then subsequently served from cache.</p><p>With Images, you have the flexibility to <a href="https://developers.cloudflare.com/images/transform-images/"><u>transform images</u></a> that are stored outside the Images product. Previously, the Images API was based on the <code>fetch()</code> method, which posed three challenges for developers:</p><p>First, when transforming a remote image, the original image must be retrieved from a URL. This isn’t applicable for every scenario, like resizing and compressing images as users upload them from their local machine to your app. We wanted to extend the Images API to broader use cases where images might not be accessible from a URL.</p><p>Second, the optimization operation — the changes you want to make to an image, like resizing it — is coupled with the delivery operation. If you wanted to crop an image, watermark it, then resize the watermarked image, then you’d need to serve one transformation to the browser, retrieve the output URL, and transform it again. This adds overhead to your code, and can be tedious and inefficient to maintain. Decoupling these operations means that you no longer need to manage multiple requests for consecutive transformations.</p><p>Third, optimization parameters — the way that you specify how an image should be manipulated — follow a fixed order. For example, cropping is performed before resizing. 
It’s difficult to build a flow that doesn’t align with the established hierarchy — like resizing first, then cropping — without a lot of time, trial, and effort.</p><p>But complex workflows shouldn’t require complex logic. In February, we <a href="https://developers.cloudflare.com/changelog/2025-02-21-images-bindings-in-workers/"><u>released the Images binding in Workers</u></a> to make the development experience more accessible, intuitive, and user-friendly. The binding helps you work more productively by simplifying how you connect the Images API to your Worker and providing more fine-grained control over how images are optimized.</p>
    <div>
      <h2>Extending the Images workflow</h2>
      <a href="#extending-the-images-workflow">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/424FXX9vM9cYlIfLMGUk5Z/e2db32589a3ded75801909ab4611747a/image1.png" />
          </figure><p><sup><i>Since optimization parameters follow a fixed order, we’d need to output the image to resize it after watermarking. The binding eliminates this step.</i></sup></p><p><a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/"><u>Bindings</u></a> connect your Workers to external resources on the Developer Platform, allowing you to manage interactions between services in a few lines of code. When you bind the Images API to your Worker, you can create more flexible, programmatic workflows to transform, resize, and encode your images — without requiring them to be accessible from a URL.</p><p>Within a Worker, the Images binding supports the following functions:</p><ul><li><p><code>.transform()</code>: Accepts optimization parameters that specify how an image should be manipulated</p></li><li><p><code>.draw()</code>: Overlays an image over the original image. The overlaid image can be optimized through a child <code>transform()</code> function.</p></li><li><p><code>.output()</code>: Defines the output format for the transformed image.</p></li><li><p><code>.info()</code>: Outputs information about the original image, like its format, file size, and dimensions.</p></li></ul>
    <div>
      <h2>The life of a binding request</h2>
      <a href="#the-life-of-a-binding-request">
        
      </a>
    </div>
    <p>At a high level, a binding works by establishing a communication channel between a Worker and the binding’s backend services.</p><p>To do this, the Workers runtime needs to know exactly which objects to construct when the Worker is instantiated. Our control plane layer translates between a given Worker’s code and each binding’s backend services. When a developer runs <code>wrangler deploy</code>, any invoked bindings are converted into a dependency graph. This describes the objects and their dependencies that will be injected into the <code>env</code> of the Worker when it runs. Then, the runtime loads the graph, builds the objects, and runs the Worker.</p><p>In most cases, the binding makes a remote procedure call to the backend services of the binding. The mechanism that makes this call must be constructed and injected into the binding object; for Images, this is implemented as a JavaScript wrapper object that makes HTTP calls to the Images API.</p><p>These calls contain the sequence of operations that are required to build the final image, represented as a tree structure. Each <code>.transform()</code> function adds a new node to the tree, describing the operations that should be performed on the image. The <code>.draw()</code> function adds a subtree, where child <code>.transform()</code> functions create additional nodes that represent the operations required to build the overlay image. When <code>.output()</code> is called, the tree is flattened into a list of operations; this list, along with the input image itself, is sent to the backend of the Images binding.</p><p>For example, let’s say we had the following commands:</p>
            <pre><code>env.IMAGES.input(image)
  .transform({rotate:90})
  .draw(
    env.IMAGES.input(watermark)
      .transform({width:32})
  )
  .transform({blur:5})
  .output({format:"image/png"})</code></pre>
            <p>Put together, the request would look something like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/495j0HjS1lIxaKY7Dnyf67/bd80e9a4bf277313e90ade13df2f9870/image2.png" />
          </figure><p>To communicate with the backend, we chose to send multipart forms. Each binding request is inherently expensive, as it can involve decoding, transforming, and encoding. Binary formats may offer slightly lower overhead per request, but given the bulk of the work in each request is the image processing itself, any gains would be nominal. Instead, we stuck with a well-supported, safe approach that our team had successfully implemented in the past.</p>
    <div>
      <h2>Meeting developers where they are</h2>
      <a href="#meeting-developers-where-they-are">
        
      </a>
    </div>
    <p>Beyond the core capabilities of the binding, we knew that we needed to consider the entire developer lifecycle. The ability to test, debug, and iterate is a crucial part of the development process.</p><p>Developers won’t use what they can’t test; they need to be able to validate exactly how image optimization will affect the user experience and performance of their application. That’s why we made the Images binding available in local development without incurring any usage charges.</p><p>As we scoped out this feature, we reached a crossroads with how we wanted the binding to work when developing locally. At first, we considered making requests to our production backend services for both unit and end-to-end testing. This would require open-sourcing the components of the binding and building them for all Wrangler-supported platforms and Node versions.</p><p>Instead, we focused our efforts on targeting individual use cases by providing two different methods. In <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler</u></a>, Cloudflare’s command-line tool, developers can choose between an online and offline mode of the Images binding. The online mode makes requests to the real Images API; this requires Internet access and authentication to the Cloudflare API. Meanwhile, the offline mode sends requests to a lower-fidelity <a href="https://testing.googleblog.com/2013/06/testing-on-toilet-fake-your-way-to.html"><u>fake</u></a>, which is a mock API implementation that supports a limited subset of features. This is primarily used for <a href="https://developers.cloudflare.com/workers/testing/vitest-integration/"><u>unit tests</u></a>, as it doesn’t require Internet access or authentication. By default, <code>wrangler dev</code> uses the online mode, mirroring the same version that Cloudflare runs in production.</p>
    <div>
      <h2>See the binding in action</h2>
      <a href="#see-the-binding-in-action">
        
      </a>
    </div>
    <p>Let’s look at an <a href="https://developers.cloudflare.com/images/tutorials/optimize-user-uploaded-image/"><u>example app</u></a> that transforms a user-uploaded image, then uploads it directly to an R2 bucket.</p><p>To start, we <a href="https://developers.cloudflare.com/learning-paths/workers/get-started/first-worker/"><u>created a Worker application</u></a> and configured our <code>wrangler.toml</code> file to add the Images, R2, and assets bindings:</p>
            <pre><code>[images]
binding = "IMAGES"

[[r2_buckets]]
binding = "R2"
bucket_name = "&lt;BUCKET&gt;"

[assets]
directory = "./&lt;DIRECTORY&gt;"
binding = "ASSETS"</code></pre>
            <p>In our Worker project, the assets directory contains the image that we want to use as our watermark.</p><p>Our frontend has a <code>&lt;form&gt;</code> element that accepts image uploads:</p>
            <pre><code>const html = `
&lt;!DOCTYPE html&gt;
        &lt;html&gt;
          &lt;head&gt;
            &lt;meta charset="UTF-8"&gt;
            &lt;title&gt;Upload Image&lt;/title&gt;
          &lt;/head&gt;
          &lt;body&gt;
            &lt;h1&gt;Upload an image&lt;/h1&gt;
            &lt;form method="POST" enctype="multipart/form-data"&gt;
              &lt;input type="file" name="image" accept="image/*" required /&gt;
              &lt;button type="submit"&gt;Upload&lt;/button&gt;
            &lt;/form&gt;
          &lt;/body&gt;
        &lt;/html&gt;
`;

export default {
  async fetch(request, env) {
    if (request.method === "GET") {
      return new Response(html, { headers: { "Content-Type": "text/html" } });
    }
    if (request.method === "POST") {
      // This is called when the user submits the form
    }
  }
};</code></pre>
<p>Next, we set up our Worker to handle the optimization.</p><p>The user will upload images directly through the browser; since there isn’t an existing image URL, we won’t be able to use <code>fetch()</code> to get the uploaded image. Instead, we can transform the uploaded image directly, operating on its body as a stream of bytes.</p><p>Once we’ve read the image, we can manipulate it. Here, we apply our watermark and encode the image to AVIF before uploading the transformed image to our R2 bucket: </p>
<pre><code>function assetUrl(request, path) {
  const url = new URL(request.url);
  url.pathname = path;
  return url;
}

export default {
  async fetch(request, env) {
    if (request.method === "GET") {
      return new Response(html, { headers: { "Content-Type": "text/html" } });
    }
    if (request.method === "POST") {
      try {
        // Parse form data
        const formData = await request.formData();
        const file = formData.get("image");
        if (!file || typeof file.arrayBuffer !== "function") {
          return new Response("No image file provided", { status: 400 });
        }

        // Read uploaded image as array buffer
        const fileBuffer = await file.arrayBuffer();

        // Fetch the image to use as a watermark
        const watermarkStream = (await env.ASSETS.fetch(assetUrl(request, "watermark.png"))).body;

        // Apply watermark and convert to AVIF
        const imageResponse = (
          await env.IMAGES.input(fileBuffer)
            // Draw the watermark on top of the image
            .draw(
              env.IMAGES.input(watermarkStream)
                .transform({ width: 100, height: 100 }),
              { bottom: 10, right: 10, opacity: 0.75 }
            )
            // Output the final image as AVIF
            .output({ format: "image/avif" })
        ).response();

        // Add a timestamp to the file name
        const fileName = `image-${Date.now()}.avif`;

        // Upload to R2
        await env.R2.put(fileName, imageResponse.body);

        return new Response(`Image uploaded successfully as ${fileName}`, { status: 200 });
      } catch (err) {
        console.log(err.message);
        return new Response("Image processing failed", { status: 500 });
      }
    }
    return new Response("Method not allowed", { status: 405 });
  }
};</code></pre>
            <p>We’ve also created a <a href="https://developers.cloudflare.com/images/examples/"><u>gallery</u></a> in our documentation to demonstrate ways that you can use the Images binding. For example, you can <a href="https://developers.cloudflare.com/images/examples/transcode-from-workers-ai"><u>transcode images from Workers AI</u></a> or <a href="https://developers.cloudflare.com/images/examples/watermark-from-kv"><u>draw a watermark from KV</u></a> on an image that is stored in R2.</p><p>Looking ahead, the Images binding unlocks many exciting possibilities to seamlessly transform and manipulate images directly in Workers. We aim to create an even deeper connection between all the primitives that developers use to build AI and full-stack applications.</p><p>Have some feedback for this release? Let us know in the <a href="https://community.cloudflare.com/c/developers/images/63"><u>Community</u></a> forum.</p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <category><![CDATA[Image Optimization]]></category>
            <guid isPermaLink="false">PKC5RU7wcrNRfwoLnBjZX</guid>
            <dc:creator>Deanna Lam</dc:creator>
            <dc:creator>Nick Skehin</dc:creator>
        </item>
        <item>
            <title><![CDATA[What’s new with Cloudflare Media: updates for Calls, Stream, and Images]]></title>
            <link>https://blog.cloudflare.com/whats-next-for-cloudflare-media/</link>
            <pubDate>Thu, 04 Apr 2024 13:00:40 GMT</pubDate>
            <description><![CDATA[ With Cloudflare Calls in open beta, you can build real-time, serverless video and audio applications. Cloudflare Stream lets your viewers instantly clip from ongoing streams ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Our customers use Cloudflare Calls, Stream, and Images to build live, interactive, and real-time experiences for their users. We want to reduce friction by making it easier to get data into our products. This also means providing transparent pricing, so customers can be confident that costs make economic sense for their business, especially as they scale.</p><p>Today, we’re introducing four new improvements to help you build media applications with Cloudflare:</p><ul><li><p>Cloudflare Calls is in open beta with transparent pricing</p></li><li><p>Cloudflare Stream has a Live Clipping API to let your viewers instantly clip from ongoing streams</p></li><li><p>Cloudflare Images has a pre-built upload widget that you can embed in your application to accept uploads from your users</p></li><li><p>Cloudflare Images lets you crop and resize images of people at scale with automatic face cropping</p></li></ul>
    <div>
      <h3>Build real-time video and audio applications with Cloudflare Calls</h3>
      <a href="#build-real-time-video-and-audio-applications-with-cloudflare-calls">
        
      </a>
    </div>
    <p>Cloudflare Calls is now in open beta, and you can activate it from your dashboard. Your usage will be free until May 15, 2024. Starting May 15, 2024, customers with a Calls subscription will receive the first terabyte each month for free, with any usage beyond that charged at $0.05 per real-time gigabyte. Additionally, there are no charges for inbound traffic to Cloudflare.</p><p>To get started, read the <a href="https://developers.cloudflare.com/calls/">developer documentation for Cloudflare Calls</a>.</p>
    <div>
      <h3>Live Instant Clipping: create clips from live streams and recordings</h3>
      <a href="#live-instant-clipping-create-clips-from-live-streams-and-recordings">
        
      </a>
    </div>
<p>Live broadcasts often include short bursts of highly engaging content within a longer stream. Creators and viewers alike enjoy being able to make a “clip” of these moments to share across multiple channels. Being able to generate that clip rapidly enables our customers to offer instant replays, showcase key pieces of recordings, and build audiences on social media in real time.</p><p>Today, <a href="https://www.cloudflare.com/products/cloudflare-stream/">Cloudflare Stream</a> is launching Live Instant Clipping in open beta for all customers. With the new Live Clipping API, you can let your viewers instantly clip and share moments from an ongoing stream, without re-encoding the video.</p><p>When planning this feature, we considered a typical user flow for generating clips from live events. Consider users watching a stream of a video game: something wild happens and users want to save and share a clip of it to social media. What will they do?</p><p>First, they’ll need to be able to review the preceding few minutes of the broadcast, so they know what to clip. Next, they need to select a start time and clip duration or end time, possibly as a visualization on a timeline or by scrubbing the video player. Finally, the clip must be available quickly in a way that can be replayed or shared across multiple platforms, even after the original broadcast has ended.</p><p>That ideal user flow implies some heavy lifting in the background. We now offer a manifest to preview recent live content in a rolling window, and we provide the timing information in that response to determine the start and end times of the requested clip relative to the whole broadcast. Finally, on request, we will generate that clip on the fly as a standalone video file for easy sharing as well as an HLS manifest for embedding into players.</p><p>Live Instant Clipping is available in beta to all customers starting today! 
Live clips are free to make; they do not count toward storage quotas, and playback is billed just like minutes of video delivered. To get started, check out the <a href="https://developers.cloudflare.com/stream/stream-live/live-instant-clipping/">Live Clipping API in developer documentation</a>.</p>
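The timing bookkeeping described above can be sketched as a small helper (the function and field names here are illustrative, not the Live Clipping API's): given the broadcast start time and the viewer's selection inside the preview window, compute the clip's offset and duration relative to the whole broadcast.

```javascript
// Sketch: translate a viewer's selection inside the rolling preview window
// into an offset/duration pair relative to the whole broadcast.
// Inputs are epoch milliseconds; the names are illustrative, not the API's.
function clipTiming(broadcastStartMs, selectionStartMs, selectionEndMs) {
  if (!(selectionEndMs > selectionStartMs)) {
    throw new Error("clip end must be after clip start");
  }
  return {
    startTimeSeconds: Math.floor((selectionStartMs - broadcastStartMs) / 1000),
    durationSeconds: Math.ceil((selectionEndMs - selectionStartMs) / 1000),
  };
}
```

For a broadcast that started at t=0, selecting the span from 90s to 120s yields an offset of 90 seconds and a duration of 30 seconds, which identifies the clip unambiguously even after the rolling preview window has moved on.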
    <div>
      <h3>Integrate Cloudflare Images into your application with only a few lines of code</h3>
      <a href="#integrate-cloudflare-images-into-your-application-with-only-a-few-lines-of-code">
        
      </a>
    </div>
    <p>Building applications with user-uploaded images is even easier with the upload widget, a pre-built, interactive UI that lets users upload images directly into your Cloudflare Images account.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1MVN5ibd1UGnokaEm7f1Vq/8efedb285ec93d52867d78ca63cb454b/image3-9.png" />
            
            </figure><p>Many developers use <a href="https://www.cloudflare.com/developer-platform/cloudflare-images/">Cloudflare Images</a> as an end-to-end image management solution to support applications that center around user-generated content, from AI photo editors to social media platforms. Our APIs connect the frontend experience – where users upload their images – to the storage, optimization, and delivery operations in the backend.</p><p>But building an application can take time. Our team saw a huge opportunity to take away as much extra work as possible, and we wanted to provide off-the-shelf integration to speed up the development process.</p><p>With the upload widget, you can seamlessly integrate Cloudflare Images into your application within minutes. The widget can be integrated in two ways: by embedding a script into a static HTML page or by installing a package that works with your favorite framework. We provide a ready-made Worker template that you can deploy directly to your account to connect your frontend application with Cloudflare Images and authorize users to upload through the widget.</p><p>To try out the upload widget, <a href="https://forms.gle/vBu47y3638k8fkGF8">sign up for our closed beta</a>.</p>
    <div>
      <h3>Optimize images of people with automatic face cropping for Cloudflare Images</h3>
      <a href="#optimize-images-of-people-with-automatic-face-cropping-for-cloudflare-images">
        
      </a>
    </div>
    <p>Cloudflare Images lets you dynamically manipulate images in different aspect ratios and dimensions for various use cases. With face cropping for Cloudflare Images, you can now crop and resize images of people’s faces at scale. For example, if you’re building a social media application, you can apply automatic face cropping to generate profile picture thumbnails from user-uploaded images.</p><p>Our existing gravity parameter uses saliency detection to set the focal point of an image based on the most visually interesting pixels, which determines how the image will be cropped. We expanded this feature by using a machine learning model called RetinaFace, which classifies images that have human faces. We’re also introducing a new zoom parameter that you can combine with face cropping to specify how closely an image should be cropped toward the face.</p><p>To apply face cropping to your image optimization, <a href="https://forms.gle/2bPbuijRoqGi6Qn36">sign up for our closed beta</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6JFNk182dDZHu0sxIySMC5/d3821e2f911b7e31bb411addcc10bdb6/image2-10.png" />
            
            </figure><p><i>Photo by</i> <a href="https://unsplash.com/@eyeforebony?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><i>Eye for Ebony</i></a> <i>on</i> <a href="https://unsplash.com/photos/photo-of-woman-wearing-purple-lipstick-and-black-crew-neck-shirt-vYpbBtkDhNE?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><i>Unsplash</i></a></p>
            <pre><code>https://example.com/cdn-cgi/image/fit=crop,width=500,height=500,gravity=face,zoom=0.6/https://example.com/images/picture.jpg</code></pre>
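The options segment in the URL above is a comma-separated list of key=value pairs. As an illustration, a small helper (ours, not part of the product) can assemble such URLs from an options object:

```javascript
// Sketch: build a /cdn-cgi/image/ transformation URL from an options object.
// The comma-separated option syntax matches the example above; the helper
// itself is illustrative.
function imageUrl(zone, options, sourceUrl) {
  const opts = Object.entries(options)
    .map(([key, value]) => `${key}=${value}`)
    .join(",");
  return `${zone}/cdn-cgi/image/${opts}/${sourceUrl}`;
}
```

Calling it with `{ fit: "crop", width: 500, height: 500, gravity: "face", zoom: 0.6 }` reproduces the example URL above.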
            
    <div>
      <h3>Meet the Media team over Discord</h3>
      <a href="#meet-the-media-team-over-discord">
        
      </a>
    </div>
    <p>As we’re working to build the next set of media tools, we’d love to hear what you’re building for your users. Come <a href="https://discord.gg/cloudflaredev">say hi to us on Discord</a>. You can also learn more by visiting our developer documentation for <a href="https://developers.cloudflare.com/calls/">Calls</a>, <a href="https://developers.cloudflare.com/stream/">Stream</a>, and <a href="https://developers.cloudflare.com/images/">Images</a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Cloudflare Stream]]></category>
            <category><![CDATA[Live Streaming]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <category><![CDATA[Image Optimization]]></category>
            <category><![CDATA[Image Resizing]]></category>
            <category><![CDATA[Image Storage]]></category>
            <category><![CDATA[Cloudflare Calls]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">4fOMOrJU6Bg9JNkRAThc7c</guid>
            <dc:creator>Deanna Lam</dc:creator>
            <dc:creator>Taylor Smith</dc:creator>
            <dc:creator>Zaid Farooqui</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Images introduces AVIF, Blur and Bundle with Stream]]></title>
            <link>https://blog.cloudflare.com/images-avif-blur-bundle/</link>
            <pubDate>Thu, 18 Nov 2021 14:00:10 GMT</pubDate>
            <description><![CDATA[ Two months ago we launched Cloudflare Images for everyone and we are amazed about the adoption and the feedback we received. Today we are announcing AVIF and Blur support for Cloudflare Images and give you a preview of the upcoming functionality. ]]></description>
<content:encoded><![CDATA[ <p>Two months ago we <a href="/announcing-cloudflare-images/">launched</a> Cloudflare Images for everyone, and we have been amazed by the adoption and feedback we received.</p><p>Let’s start with some numbers:</p><p>More than <b>70 million</b> images delivered per day on average in the week of November 5 to 12.</p><p>More than <b>1.5 million</b> images have been uploaded so far, growing faster every day.</p><p>But we are just getting started, and today we are happy to announce the release of our most requested features. First, AVIF support for Images: converting as many images as possible to <b>AVIF</b> results in highly compressed, quickly delivered images without compromising on quality.</p><p>Second, we introduce <b>blur</b>. Blurring an image, combined with the already supported protection of private images via <a href="https://developers.cloudflare.com/images/cloudflare-images/serve-images/serve-private-images-using-signed-url-tokens">signed URLs</a>, makes Cloudflare Images a great solution for previews of paid content.</p><p>For many of our customers, it is important to be able to serve images from their <b>own domain</b>, not only via imagedelivery.net. Here we show an easy way to do this using a custom Worker or a special URL.</p><p>Last but not least, we announce the launch of new, attractively priced <b>bundles</b> for both Cloudflare Images and Stream.</p>
    <div>
      <h3>Images supports AVIF</h3>
      <a href="#images-supports-avif">
        
      </a>
    </div>
<p>We <a href="/generate-avif-images-with-image-resizing/">announced support</a> for the new AVIF image format in our Image Resizing product last year.</p><p>Last month, we added AVIF support to Cloudflare Images. AVIF compresses images significantly better than older-generation formats such as WebP and JPEG. Today, the AVIF image format is supported in both Chrome and Firefox. <a href="https://caniuse.com/avif">Globally, almost 70%</a> of users have a web browser that supports AVIF.</p>
    <div>
      <h3>What is AVIF</h3>
      <a href="#what-is-avif">
        
      </a>
    </div>
    <p>As we <a href="/generate-avif-images-with-image-resizing/#what-is-avif">explained previously</a>, AVIF is a combination of the HEIF ISO standard, and a royalty-free AV1 codec by <a href="https://aomedia.org/">Mozilla, Xiph, Google, Cisco, and many others</a>.</p><p>“Currently, JPEG is the most popular image format on the web. It's doing remarkably well for its age, and it will likely remain popular for years to come thanks to its excellent compatibility. There have been many previous attempts at replacing JPEG, such as JPEG 2000, JPEG XR, and WebP. However, these formats offered only modest compression improvements and didn't always beat JPEG on image quality. Compression and image quality in <a href="https://netflixtechblog.com/avif-for-next-generation-image-coding-b1d75675fe4">AVIF is better than in all of them, and by a wide margin</a>.”<sup>1</sup></p>
    <div>
      <h3>How Cloudflare Images supports AVIF</h3>
      <a href="#how-cloudflare-images-supports-avif">
        
      </a>
    </div>
    <p>As a reminder, <a href="/building-cloudflare-images-in-rust-and-cloudflare-workers/#image-delivery">image delivery</a> is done through the Cloudflare managed imagedelivery.net domain. It is powered by Cloudflare Workers. We have the following logic to request the AVIF format based on the Accept HTTP request header:</p>
            <pre><code>const WEBP_ACCEPT_HEADER = /image\/webp/i;
const AVIF_ACCEPT_HEADER = /image\/avif/i;

addEventListener("fetch", (event) =&gt; {
  event.respondWith(handleRequest(event));
});

async function handleRequest(event) {
  const request = event.request;
  const url = new URL(request.url);
  
  const headers = new Headers(request.headers);

  const accept = headers.get("accept");

  let format = undefined;

  if (WEBP_ACCEPT_HEADER.test(accept)) {
    format = "webp";
  }

  if (AVIF_ACCEPT_HEADER.test(accept)) {
    format = "avif";
  }

  const resizingReq = new Request(url, {
    headers,
    cf: {
      image: { format }, // plus any other resizing options
    },
  });

  return fetch(resizingReq);
}</code></pre>
<p>Based on the Accept header, the logic in the Worker detects whether the WebP or AVIF format can be served. The request is then passed to Image Resizing. If the image is available in the Cloudflare cache, it is served immediately; otherwise, the image is resized, transformed, and cached. This approach ensures that clients without AVIF support still receive images in WebP or JPEG format.</p><p>The benefit of the Cloudflare Images product is that we added AVIF support without customers needing to change a single line of code on their side.</p><p>Transforming an image to AVIF is compute-intensive but yields a significant reduction in file size. We always weigh the costs and benefits when deciding which format to serve.</p><p>It is worth noting that, at the moment, all conversions to WebP and AVIF happen at request time during image delivery. We will add the ability to convert images at upload time in the future.</p>
    <div>
      <h3>Introducing Blur</h3>
      <a href="#introducing-blur">
        
      </a>
    </div>
<p>One of the most requested features for Images and Image Resizing was support for blur. We recently added blur support both via the <a href="https://developers.cloudflare.com/images/image-resizing/url-format"><u>URL format</u></a> and <a href="https://developers.cloudflare.com/images/image-resizing/resize-with-workers"><u>with Cloudflare Workers</u></a>.</p><p>Cloudflare Images uses variants. When you create a variant, you can define properties including the variant name, width, height, and whether the variant should be publicly accessible. Blur will be available as a new option for variants via the <a href="https://api.cloudflare.com/#cloudflare-images-variants-create-a-variant"><u>variant API</u></a>:</p>
            <pre><code>curl -X POST "https://api.cloudflare.com/client/v4/accounts/9a7806061c88ada191ed06f989cc3dac/images/v1/variants" \
     -H "Authorization: Bearer &lt;api_token&gt;" \
     -H "Content-Type: application/json" \
     --data '{"id":"blur","options":{"metadata":"none","blur":20},"neverRequireSignedURLs":true}'</code></pre>
<p>One use case for blur with Cloudflare Images is controlling access to premium content.</p><p>The customer uploads an image that requires an access token:</p>
            <pre><code>curl -X POST "https://api.cloudflare.com/client/v4/accounts/9a7806061c88ada191ed06f989cc3dac/images/v1" \
     -H "Authorization: Bearer &lt;api_token&gt;" \
     --form 'file=@./&lt;file_name&gt;' \
     --form 'requireSignedURLs=true'</code></pre>
            <p>Using the variant we defined via API we can fetch the image without providing a signature:</p><img src="https://imagedelivery.net/r1xBEzoDl4p34DP7QLrECw/dfc72df8-863f-46e3-7bba-a21f9795e401/blur20" /><p>To access the protected image a <a href="https://developers.cloudflare.com/images/cloudflare-images/serve-images/serve-private-images-using-signed-url-tokens">valid signed URL</a> will be required:</p><img src="https://imagedelivery.net/r1xBEzoDl4p34DP7QLrECw/dfc72df8-863f-46e3-7bba-a21f9795e401/public?sig=d67d49055d652b8fb2575b3ec11f0e1a8fae3932d3e516d381e49e498dd4a96e" />
<p><i>Lava lamps in the Cloudflare lobby. Courtesy of <a href="https://twitter.com/mahtin/status/888251632550424577">@mahtin</a></i></p><p>The combination of image blurring and restricted access to images can be integrated into many scenarios and provides a powerful tool set for content publishers.</p><p>The functionality to define a variant with a blur option is coming soon to the Cloudflare dashboard.</p>
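To illustrate the general shape of URL signing (this is a sketch only, not the exact Cloudflare Images token algorithm; consult the signed-URL documentation for the real scheme), a signature is typically an HMAC over the image path, appended as a query parameter:

```javascript
import { createHmac } from "node:crypto";

// Illustrative only: sign an image path by appending an HMAC-SHA256 hex
// digest as the `sig` query parameter. The exact token scheme Cloudflare
// Images expects is defined in its signed-URL documentation.
function signImagePath(key, path) {
  const sig = createHmac("sha256", key).update(path).digest("hex");
  return `${path}?sig=${sig}`;
}
```

Because the signature is derived from a secret key, the origin can verify it and reject requests whose `sig` does not match, which is what makes expiring or per-user access control possible.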
    <div>
      <h3>Serving images from custom domains</h3>
      <a href="#serving-images-from-custom-domains">
        
      </a>
    </div>
<p>One important use case for Cloudflare Images customers is serving images from custom domains. This can improve latency and loading performance by avoiding an additional TLS negotiation on the client. Using Cloudflare Workers, customers can add this functionality today with the following example:</p>
            <pre><code>const IMAGE_DELIVERY_HOST = "https://imagedelivery.net";

addEventListener("fetch", (event) =&gt; {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const url = new URL(request.url);
  const { pathname, search } = url;

  const destinationURL = IMAGE_DELIVERY_HOST + pathname + search;
  return fetch(new Request(destinationURL));
}</code></pre>
<p>For simplicity, the Worker script proxies requests from the domain where it’s deployed to imagedelivery.net. We assume the same format as for Cloudflare Images URLs:</p>
            <pre><code>https://&lt;customdomain.net&gt;/&lt;encoded account id&gt;/&lt;image id&gt;/&lt;variant name&gt;</code></pre>
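A Worker serving this format needs to pull the three segments out of the path before rewriting the request. A minimal sketch (the helper name and return shape are ours):

```javascript
// Sketch: split a custom-domain image path into its three components.
// Returns null when the path doesn't have the expected shape.
function parseImagePath(pathname) {
  const parts = pathname.split("/").filter(Boolean);
  if (parts.length !== 3) return null;
  const [accountId, imageId, variantName] = parts;
  return { accountId, imageId, variantName };
}
```

Rejecting malformed paths up front (the `null` case) lets the Worker return a 404 early instead of forwarding junk requests to imagedelivery.net.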
<p>The Worker could be adjusted to fit customer needs, for example:</p><ul><li><p>Serving images from a specific path on the domain, e.g. /images/</p></li><li><p>Populating the account ID or variant name automatically</p></li><li><p>Mapping Cloudflare Images to custom URLs altogether</p></li></ul><p>For customers who just want the simplicity of serving Cloudflare Images from their domains on Cloudflare, we will be adding the ability to serve Cloudflare Images using the following format:</p>
            <pre><code>https://&lt;customdomain.net&gt;/cdn-cgi/imagedelivery/&lt;encrypted_account_id&gt;/&lt;_image_id&gt;/&lt;variant_name&gt;</code></pre>
<p>Image delivery will be supported from all customer domains under the same Cloudflare account where the Cloudflare Images subscription is activated. This will be available to all Cloudflare Images customers before the holidays.</p>
    <div>
      <h3>Images and Stream Bundle</h3>
      <a href="#images-and-stream-bundle">
        
      </a>
    </div>
<p>Creator platforms, eCommerce sites, and many other products have one thing in common: an easy, affordable way to upload, store, and deliver images and videos is vital.</p><p>We teamed up with the Stream team to create a set of bundles that make it super easy to get started with your product.</p><p>The Starter bundle is perfect for experimenting and a first MVP. At just $10 per month, it is 50% cheaper than the unbundled option and includes enough to get started:</p><ul><li><p>Stream: 1,000 stored minutes and 5,000 minutes served</p></li><li><p>Images: 100,000 stored images and 500,000 images served</p></li></ul><p>For larger and fast-scaling applications, we have the Creator Bundle at $50 per month, which saves over 60% compared to the unbundled products. It includes everything to start scaling:</p><ul><li><p>Stream: 10,000 stored minutes and 50,000 minutes served</p></li><li><p>Images: 500,000 stored images and 1,000,000 images served</p></li></ul><img src="https://imagedelivery.net/r1xBEzoDl4p34DP7QLrECw/fb149b8a-8d93-494d-74da-0a88b8ffd600/public" /><p>These new bundles will be available to all customers from the end of November.</p>
    <div>
      <h3>What’s next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
<p>We are not stopping here, and we already have the next features for Images lined up. One of them is Images Analytics. Great analytics are vital to any product, so we will be introducing analytics functionality for Cloudflare Images, enabling all customers to keep track of their images and their usage.</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
<hr /><p><sup>1</sup><a href="/generate-avif-images-with-image-resizing/#what-is-avif">http://blog.cloudflare.com/generate-avif-images-with-image-resizing/#what-is-avif</a></p> ]]></content:encoded>
            <category><![CDATA[Full Stack Week]]></category>
            <category><![CDATA[Image Resizing]]></category>
            <category><![CDATA[Image Optimization]]></category>
            <guid isPermaLink="false">5fZMmfFfa85XpTYFNaU5S2</guid>
            <dc:creator>Marc Lamik</dc:creator>
            <dc:creator>Yevgen Safronov</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Images Now Available to Everyone]]></title>
            <link>https://blog.cloudflare.com/announcing-cloudflare-images/</link>
            <pubDate>Wed, 15 Sep 2021 13:02:00 GMT</pubDate>
            <description><![CDATA[ Today, we are launching Cloudflare Images for all customers. Images provides a single product to store, resize and serve images. We built Cloudflare Images, so customers of all sizes can build a scalable and affordable image pipeline with a fraction of the effort. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today, we are launching Cloudflare Images for all customers. Images is a single product that stores, resizes, optimizes and serves images. We built <a href="https://www.cloudflare.com/products/cloudflare-images/">Cloudflare Images</a> so customers of all sizes can build a scalable and affordable image pipeline in minutes.</p>
    <div>
      <h2>Store images efficiently</h2>
      <a href="#store-images-efficiently">
        
      </a>
    </div>
    <p>Many legacy image pipelines are architected to take an image and create multiple copies of it to account for different sizes and formats. These copies are then stored in a storage bucket and delivered using a <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDN</a>. This architecture can be hard to maintain and adds infrastructure cost in unpredictable ways.</p><p>With Cloudflare Images, you don’t need to worry about <i>creating</i> and <i>storing</i> multiple versions of the same image in different sizes and formats. Cloudflare Images makes a clear distinction between your stored images and the variants. Once you upload an image, you can apply any defined variant to the uploaded image. The variants and different formats don’t count towards your stored images quota.</p><p>This means that when a user uploads a picture that you need to resize in three different ways and serve in two different formats, you pay for <i>one</i> stored image instead of seven different images (the original, plus three variants for each of the two formats.)</p>
    <div>
      <h2>Built-in access control</h2>
      <a href="#built-in-access-control">
        
      </a>
    </div>
    <p>Every image that is uploaded to Cloudflare Images can be marked private, so it can only be accessed using an expiring signed URL token. This is ideal for use cases like membership sites that sell premium content.</p><p>Signed URLs give you the flexibility to validate if someone is a paying member using your custom logic and only give them access to the set of images they have paid for.</p>
    <div>
      <h2>Eliminate egress costs</h2>
      <a href="#eliminate-egress-costs">
        
      </a>
    </div>
<p>Egress cost is the cost of getting your data out of a storage provider. The most common case is serving an image from storage: you pay for the bits transmitted, every single time that same image is displayed. It is easy to overlook this cost when doing a cost-benefit analysis between different solutions, but egress costs add up rapidly, and it is not uncommon for customers to pay their storage provider a <i>very large multiple</i> of their total storage cost in egress.</p><p>When you use a multi-vendor solution for your image pipeline, you might use vendor A for storage, vendor B for resizing the images, and vendor C for delivering the images. At face value, this solution might appear cheaper because you think “<i>we’ve picked the most affordable option for each piece of our image pipeline</i>.” But in this setup, the resizing service (vendor B) and the CDN (vendor C) still need to request images from vendor A.</p><p>With Cloudflare Images, you never have to worry about egress costs because the images are stored, optimized, and delivered by a single product. You will see only two line items on your bill for Cloudflare Images: you pay $5/month for every 100,000 stored images and $1 per 100,000 delivered images. There are no additional resizing, compute, or egress costs.</p>
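With only those two line items, estimating a monthly bill is simple arithmetic. A rough sketch, assuming strictly linear pricing (actual billing granularity and rounding may differ):

```javascript
// Rough monthly cost estimate from the two Images line items:
// $5 per 100,000 stored images, $1 per 100,000 delivered images.
// Assumes linear pricing; actual billing rounding may differ.
function estimateImagesCost(storedImages, deliveredImages) {
  return (storedImages / 100000) * 5 + (deliveredImages / 100000) * 1;
}
```

For example, storing 100,000 images and delivering 500,000 of them in a month comes out to $5 + $5 = $10 under this estimate.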
    <div>
      <h2>Uploading Images</h2>
      <a href="#uploading-images">
        
      </a>
    </div>
    <p>Cloudflare Images offers multiple ways to upload your images. We accept all the common file formats, including JPEG, GIF and WebP. Each image uploaded to Images can be up to 10 MB.</p><p>If you only have a few images or simply want a taste of the product, you can use the <a href="https://dash.cloudflare.com/?to=/:account/images">Images Dashboard</a>. Simply drag and drop one or more images:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/69R4SCilBZZo0HZXoxcQcb/2ea6038f76a3076a892505638092892b/image1-10.png" />
            
            </figure><p><i>Cloudflare Images Dashboard</i></p><p>If you have an app that lets your users upload images, you can use the Direct Creator Uploads feature of Cloudflare Images.</p><p>The Direct Creator Uploads API lets you request single-use tokens. These one-time upload URLs can be used by your app to upload your users’ submissions without exposing your API Key or Token. Here is an example cURL request that returns a one-time upload URL:</p>
            <pre><code>curl --request POST \
  --url https://api.cloudflare.com/client/v4/accounts/:account_id/images/v1/direct_upload \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer :token'</code></pre>
            <p>If the call is successful, you’ll receive a response that looks like this:</p>
            <pre><code>{
  "result": {
    "id": "2cdc28f0-017a-49c4-9ed7-87056c839c2",
    "uploadURL": "https://upload.imagedelivery.net/2cdc28f0-017a-49c4-9ed7-87056c839c2"
  },
  "result_info": null,
  "success": true,
  "errors": [],
  "messages": []
}</code></pre>
            <p>Your client-side app can now upload the image directly to the <code>uploadURL</code> without exposing your account credentials to the client.</p>
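<p>On the server side, the only piece of this response your client needs is the <code>uploadURL</code>. A minimal Python sketch of extracting it from the response body above:</p>

```python
import json

# The JSON body returned by the direct_upload endpoint (from the example above).
response_body = '''{
  "result": {
    "id": "2cdc28f0-017a-49c4-9ed7-87056c839c2",
    "uploadURL": "https://upload.imagedelivery.net/2cdc28f0-017a-49c4-9ed7-87056c839c2"
  },
  "result_info": null,
  "success": true,
  "errors": [],
  "messages": []
}'''

payload = json.loads(response_body)
assert payload["success"] and not payload["errors"]
upload_url = payload["result"]["uploadURL"]  # hand only this to the client
```

<p>Your API token never leaves the server; the client sees only the single-use URL.</p>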
    <div>
      <h2>Resizing with Variants</h2>
      <a href="#resizing-with-variants">
        
      </a>
    </div>
    <p>Cloudflare Images lets you define variants and apply them to your uploaded images. You can define up to 20 different variants to support different use cases. Each variant has properties including the width and height of resized images.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4VVAvrN7YhsBzMG8AtBcQf/c615f98838d50a88d26537fe3f31f6a0/image4-11.png" />
            
            </figure><p><i>Configure variants in Cloudflare Images</i></p><p>You can also configure the fit property to describe how the width and height dimensions should be interpreted.</p><table><thead><tr><th>Fit Option</th><th>Behavior</th></tr></thead><tbody><tr><td>Scale Down</td><td>Image will be shrunk in size to fully fit within the given width or height, but won’t be enlarged.</td></tr><tr><td>Contain</td><td>Image will be resized (shrunk or enlarged) to be as large as possible within the given width or height while preserving the aspect ratio.</td></tr><tr><td>Cover</td><td>Image will be resized to exactly fill the entire area specified by width and height, and will be cropped if necessary.</td></tr><tr><td>Crop</td><td>Image will be shrunk and cropped to fit within the area specified by width and height. The image won’t be enlarged. For images smaller than the given dimensions, it’s the same as scale-down. For images larger than the given dimensions, it’s the same as cover.</td></tr><tr><td>Pad</td><td>Image will be resized (shrunk or enlarged) to be as large as possible within the given width or height while preserving the aspect ratio, and the extra area will be filled with a background color (white by default).</td></tr></tbody></table><p>We plan to add more properties to give you maximum flexibility. If there is a particular property you’d love to see, <a href="https://docs.google.com/forms/d/1UmltETYpHIt0C9cZkr607ofcNUt53jgqk18Dh8vHxyU">let us know</a>.</p><p>Once you define your variants, you can begin using them with any image. From the Dashboard, simply click on Variants to quickly preview how any image would be rendered using each of your variants.</p>
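<p>The fit options described above can be summarized as a small dimension calculator. The following Python sketch is an illustrative approximation of each mode's output dimensions, not the service's exact resizing logic:</p>

```python
def fitted_size(src_w: int, src_h: int, box_w: int, box_h: int, fit: str):
    # Illustrative approximation of each fit mode's output dimensions.
    shrink = min(box_w / src_w, box_h / src_h)
    if fit == "contain":
        # as large as possible inside the box, aspect ratio preserved
        return round(src_w * shrink), round(src_h * shrink)
    if fit == "scale-down":
        s = min(shrink, 1.0)  # like contain, but never enlarge
        return round(src_w * s), round(src_h * s)
    if fit == "cover":
        return box_w, box_h  # fills the box exactly, cropping any overflow
    if fit == "pad":
        # contain-fitted picture on a canvas padded out to the full box
        return box_w, box_h
    if fit == "crop":
        # scale-down for smaller images, cover-like for larger ones
        if src_w <= box_w and src_h <= box_h:
            return src_w, src_h
        return box_w, box_h
    raise ValueError(f"unknown fit: {fit}")

# A 2000x1000 source in a 400x400 box:
fitted_size(2000, 1000, 400, 400, "contain")  # (400, 200)
fitted_size(2000, 1000, 400, 400, "cover")    # (400, 400), sides cropped
```

<p>The practical distinction: contain and scale-down never crop, cover and crop guarantee the box is filled, and pad guarantees the box dimensions without cropping by filling the leftover area with a background color.</p>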
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1IOsQ2YJ2gIoUUFu6vKNcX/2b279f4a21f1f315956685b5f329fcf9/image2-18.png" />
            
            </figure><p><i>Previewing variants in Cloudflare Images</i></p>
    <div>
      <h2>Optimized image delivery</h2>
      <a href="#optimized-image-delivery">
        
      </a>
    </div>
    <p>Once you’ve uploaded your first image, you will see the Image Delivery URL in your Images Dashboard:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5HAG3KwRzFyPM0ons8zdWs/6c71517b29b09416a46501104be5199e/image5-9.png" />
            
            </figure><p><i>Serving images with Cloudflare Images</i></p><p>A typical Image Delivery URL looks like this:</p><p><a href="https://imagedelivery.net/ZWd9g1K7eljCn_KDTu_OWA/:image_id/:variant_name"><code>https://imagedelivery.net/ZWd9g1K7eljCn_KDTu_OWA/:image_id/:variant_name</code></a></p><p>You can use this URL template to form the final URL that returns any image and variant combination.</p><p>When a client requests an image, Cloudflare Images will pick the optimal format between WebP, PNG, JPEG and GIF. The format served to the eyeball is determined by client headers and the image type. Cloudflare Images will soon support AVIF, offering further compression. One of the best parts of using Cloudflare Images is that when we add support for newer formats such as AVIF, you will get the upside without needing to make any changes to your codebase.</p>
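<p>Filling in the URL template is straightforward string composition. A minimal sketch, using the account hash from the example above and a hypothetical variant named <code>thumbnail</code>:</p>

```python
def delivery_url(account_hash: str, image_id: str, variant_name: str) -> str:
    # Template: https://imagedelivery.net/:account_hash/:image_id/:variant_name
    return f"https://imagedelivery.net/{account_hash}/{image_id}/{variant_name}"

# "thumbnail" is an illustrative variant name; the image id is taken
# from the direct upload example earlier in the post.
url = delivery_url("ZWd9g1K7eljCn_KDTu_OWA",
                   "2cdc28f0-017a-49c4-9ed7-87056c839c2",
                   "thumbnail")
```

<p>Because the variant name is part of the path, switching a page from one rendition to another is a template change in your markup, with no re-upload or re-processing step.</p>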
    <div>
      <h2>What’s next</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>All Cloudflare customers can <a href="https://dash.cloudflare.com/?to=/:account/images">sign up to use Cloudflare Images</a> this week. We built Cloudflare Images for developers. Check out the <a href="https://developers.cloudflare.com/images/">Cloudflare Images developer docs</a> for examples of implementing common use-cases such as letting your users upload images directly to Images and using signed URLs to implement access control.</p><p>We’re just getting started with Cloudflare Images. Here are some of the features we plan to support soon:</p><ul><li><p>AVIF support for even smaller file sizes and faster load times.</p></li><li><p>Variants that add a blur effect to your images.</p></li><li><p>Analytics to better understand your use of Images.</p></li></ul>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
     ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Image Resizing]]></category>
            <category><![CDATA[Image Optimization]]></category>
            <guid isPermaLink="false">46UovjCfOrGDiglTVokHXJ</guid>
            <dc:creator>Zaid Farooqui</dc:creator>
        </item>
    </channel>
</rss>