
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built and the technologies we use, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Fri, 10 Apr 2026 03:21:35 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Dynamically optimize, clip, and resize video from any origin with Media Transformations]]></title>
            <link>https://blog.cloudflare.com/media-transformations-for-video-open-beta/</link>
            <pubDate>Fri, 07 Mar 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ With Cloudflare Stream’s new Media Transformations, content owners can resize, crop, clip, and optimize short-form video, all without migrating storage.  ]]></description>
            <content:encoded><![CDATA[ <p>Today, we are thrilled to announce Media Transformations, a new service that brings the magic of <a href="https://developers.cloudflare.com/images/transform-images/"><u>Image Transformations</u></a> to short-form video files wherever they are stored.</p><p>Since 2018, Cloudflare Stream has offered a managed video pipeline that empowers customers to serve rich video experiences at global scale easily, in multiple formats and quality levels. Sometimes, the greatest friction to getting started isn't even about video, but rather the thought of migrating all those files. Customers want a simpler solution that retains their current storage strategy to deliver small, optimized MP4 files. Now you can do that with Media Transformations.</p>
    <div>
      <h3>Short videos, big volume</h3>
      <a href="#short-videos-big-volume">
        
      </a>
    </div>
    <p>For customers with a huge volume of short video, such as generative AI output, e-commerce product videos, social media clips, or short marketing content, uploading those assets to Stream is not always practical. Furthermore, Stream’s key features like adaptive bitrate encoding and HLS packaging offer diminishing returns on short content or small files.</p><p>Instead, content like this should be fetched from our customers' existing storage like R2 or S3 directly, optimized by Cloudflare quickly, and delivered efficiently as small MP4 files. Cloudflare Images customers reading this will note that this sounds just like their existing Image Transformation workflows. Starting today, the same workflow can be applied to your short-form videos.</p>
    <div>
      <h3>What’s in a video?</h3>
      <a href="#whats-in-a-video">
        
      </a>
    </div>
    <p>The distinction between video and images online can sometimes be blurry. Consider an animated GIF: is that an image or a video? (They're usually smaller as MP4s anyway!) As a practical example, consider a selection of product images for a new jacket on an e-commerce site. You want a consumer to know how it looks, but also how it flows. So perhaps the first "image" in that carousel is actually a video of a model simply putting the jacket on. Media Transformations empowers customers to optimize the product video and images with similar tools and identical infrastructure.</p>
    <div>
      <h3>How to get started</h3>
      <a href="#how-to-get-started">
        
      </a>
    </div>
    <p>Any website that is already enabled for Image Transformations is now enabled for Media Transformations. To enable a new zone, navigate to “Transformations” under Stream (or Images), locate your zone in the list, and click Enable. Enabling and disabling a zone for transformations affects both Images and Media transformations.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5hltjlyKF43oV8gTvjr9vF/d904229983fbe9484b08763e22dcac8b/image3.png" />
          </figure><p>After enabling Media Transformations on a website, it is simple to construct a URL that transforms a video. The pattern is similar to Image Transformations, but uses the <code>media</code> endpoint instead of the <code>image</code> endpoint:</p>
            <pre><code>https://example.com/cdn-cgi/media/&lt;OPTIONS&gt;/&lt;SOURCE-VIDEO&gt;</code></pre>
            <p>The <code>&lt;OPTIONS&gt;</code> portion of the URL is a comma-separated <a href="https://developers.cloudflare.com/stream/transform-videos/"><u>list of flags</u></a> written as <code>key=value</code>. A few noteworthy flags:</p><ul><li><p><code>mode</code> can be <code>video</code> (the default) to output a video, <code>frame</code> to pull a still image of a single frame, or even <code>spritesheet</code> to generate an image with multiple frames, which is useful for seek previews or storyboarding.</p></li><li><p><code>time</code> specifies the exact start time in the input video from which to extract a frame or start a clip.</p></li><li><p><code>duration</code> specifies the length of the output video, to make a clip shorter than the original.</p></li><li><p><code>fit</code>, together with <code>height</code> and <code>width</code>, allows resizing and cropping the output video or frame.</p></li><li><p>Setting <code>audio</code> to <code>false</code> removes the sound in the output video.</p></li></ul><p>The <code>&lt;SOURCE-VIDEO&gt;</code> is a full URL to a source file, or a root-relative path if the origin is on the same zone as the transformation request.</p><p>A full list of supported options, examples, and troubleshooting information is <a href="https://developers.cloudflare.com/stream/transform-videos/"><u>available in DevDocs</u></a>.</p>
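Because the options are just URL path segments, they are easy to assemble in application code. As a rough illustration, here is a hypothetical helper (the <code>buildMediaUrl</code> function and its option typing are a sketch based only on the flags listed above, not an official SDK):

```typescript
// Hypothetical helper: assemble a /cdn-cgi/media/<OPTIONS>/<SOURCE-VIDEO>
// URL from an options object. This is a sketch based on the flags above,
// not an official SDK; check DevDocs for the authoritative option list.
type MediaOptions = {
  mode?: "video" | "frame" | "spritesheet";
  time?: string;      // e.g. "3s"
  duration?: string;  // e.g. "10s"
  width?: number;
  height?: number;
  fit?: string;       // e.g. "cover"
  audio?: boolean;
};

function buildMediaUrl(zone: string, source: string, opts: MediaOptions): string {
  // Options are serialized as comma-separated key=value pairs.
  const flags = Object.entries(opts)
    .map(([key, value]) => `${key}=${value}`)
    .join(",");
  return `https://${zone}/cdn-cgi/media/${flags}/${source}`;
}

const url = buildMediaUrl("example.com", "https://example.com/videos/clip.mp4", {
  mode: "video",
  duration: "10s",
  width: 480,
  audio: false,
});
// "https://example.com/cdn-cgi/media/mode=video,duration=10s,width=480,audio=false/https://example.com/videos/clip.mp4"
```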
    <div>
      <h3>A few examples</h3>
      <a href="#a-few-examples">
        
      </a>
    </div>
    <p>I used my phone to take this video of the <a href="https://blog.cloudflare.com/harnessing-office-chaos/"><u>randomness mobile</u></a> in Cloudflare’s Austin Office and put it in an R2 bucket. Of course, it is possible to embed the original video file from R2 directly:</p>  


<p>That video file is almost 30 MB. A more efficient choice would be to resize the video to the width of this blog post template, so let’s apply a width adjustment in the options portion of the URL:</p>
            <pre><code>https://example.com/cdn-cgi/media/width=760/https://pub-d9fcbc1abcd244c1821f38b99017347f.r2.dev/aus-mobile.mp4</code></pre>
            <p>That will deliver the same video, resized and optimized:</p>


<p>Not only is this video the right size for its container, it’s also now less than 4 MB. That’s a big bandwidth savings for visitors.</p><p>As I recorded the video, the lobby was pretty quiet, but there was someone talking in the distance. If we wanted to use this video as a background, we should remove the audio, shorten it, and perhaps crop it vertically. All of these options can be combined, comma-separated, in the options portion of the URL:</p>
            <pre><code>https://example.com/cdn-cgi/media/mode=video,duration=10s,width=480,height=720,fit=cover,audio=false/https://pub-d9fcbc1abcd244c1821f38b99017347f.r2.dev/aus-mobile.mp4</code></pre>
            <p>The result:</p>


<p>If this were a product video, we might want a small thumbnail to add to the carousel of images so shoppers can click to zoom in and see it move. Use the “frame” mode and a “time” to generate a static image from a single point in the video. The same size and fit options apply:</p>
            <pre><code>https://example.com/cdn-cgi/media/mode=frame,time=3s,width=120,height=120,fit=cover/https://pub-d9fcbc1abcd244c1821f38b99017347f.r2.dev/aus-mobile.mp4</code></pre>
            <p>Which generates this optimized image:</p> 
<img src="https://blog.cloudflare.com/cdn-cgi/media/mode=frame,time=3s,width=120,height=120,fit=cover/https://pub-d9fcbc1abcd244c1821f38b99017347f.r2.dev/aus-mobile.mp4" /><p>Try it out yourself using our video or one of your own: </p><ul><li><p>Enable transformations on your website/zone and use the endpoint: <code>https://[your-site]/cdn-cgi/media/</code></p></li><li><p>Mobile video: <a href="https://pub-d9fcbc1abcd244c1821f38b99017347f.r2.dev/aus-mobile.mp4"><u>https://pub-d9fcbc1abcd244c1821f38b99017347f.r2.dev/aus-mobile.mp4</u></a> </p></li><li><p>Check out the <a href="https://stream-video-transformer.kristianfreeman.com/"><u>Media Transformation URL Generator</u></a> from Kristian Freeman on our Developer Relations team, which he built using the <a href="https://streamlit.io/"><u>Streamlit</u></a> Python framework on Workers.</p></li></ul>
    <div>
      <h3>Input limits</h3>
      <a href="#input-limits">
        
      </a>
    </div>
    <p>We are eager to start supporting real customer content, and we will right-size our input limits with our early adopters. To start:</p><ul><li><p>Video files must be smaller than 40 megabytes.</p></li><li><p>Files must be MP4s and should be H.264 encoded.</p></li><li><p>Videos and images generated with Media Transformations will be cached. However, in our initial beta, the original content will not be cached, which means regenerating a variant will result in a request to the origin.</p></li></ul>
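These limits can also be checked before a transformation URL ever reaches a visitor. As a small sketch (the header names are standard HTTP response headers you might read from a HEAD request to the origin; treating "megabytes" as MiB here is our assumption):

```typescript
// Sketch: pre-flight check of a source video against the beta limits above,
// using headers from a HEAD request to the origin. The 40 MB figure mirrors
// the limit stated above; interpreting it as MiB is an assumption.
const MAX_BYTES = 40 * 1024 * 1024;

function isEligibleSource(headers: Record<string, string>): boolean {
  const size = Number(headers["content-length"] ?? Number.POSITIVE_INFINITY);
  const type = headers["content-type"] ?? "";
  // Must be an MP4 smaller than 40 MB.
  return size < MAX_BYTES && type.startsWith("video/mp4");
}
```

A source that fails these checks would otherwise surface as an error response at transformation time.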
    <div>
      <h3>How it works</h3>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>Unlike Stream, Media Transformations receives requests on a customer’s own website. Internally, however, these requests are passed to the same <a href="https://blog.cloudflare.com/behind-the-scenes-with-stream-live-cloudflares-live-streaming-service/"><u>On-the-Fly Encoder (“OTFE”) platform that Stream Live uses</u></a>. To achieve this, the Stream team built modules that run on our servers to act as entry points for these requests.</p><p>These entry points perform some initial validation on the URL formatting and flags before building a request to Stream’s own Delivery Worker, which in turn calls OTFE’s set of transformation handlers. The original asset is fetched from the <i>customer’s</i> origin, validated for size and type, and passed to the same OTFE methods responsible for manipulating and optimizing <a href="https://developers.cloudflare.com/stream/viewing-videos/displaying-thumbnails/"><u>video or still frame thumbnails</u></a> for videos uploaded to Stream. These tools do a final inspection of the media type and encoding for compatibility, then generate the requested variant. If any errors were raised along the way, an HTTP error response will be generated using <a href="https://developers.cloudflare.com/images/reference/troubleshooting/#error-responses-from-resizing"><u>similar error codes</u></a> to Image Transformations. When successful, the result is cached for future use and delivered to the requestor as a single file. Even for new or uncached requests, all of this operates much faster than the video’s play time.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7wfYn8FLcgzgIdLT6NFeq3/f6c51134363231ffed964300cb9992b0/flowchart.png" />
          </figure>
    <div>
      <h3>What it costs</h3>
      <a href="#what-it-costs">
        
      </a>
    </div>
    <p>Media Transformations will be free for all customers while in beta. We expect the beta period to extend into Q3 2025, and after that, Media Transformations will use the same subscriptions and billing mechanics as Image Transformations — including a free allocation for all websites/zones. Generating a still frame (single image) from a video counts as 1 transformation. Generating an optimized video is billed as 1 transformation <i>per second of the output video.</i> Each unique transformation is only billed once per month. All Media and Image Transformations cost $0.50 per 1,000 monthly unique transformation operations, with a free monthly allocation of 5,000.</p><p>Using this post as an example, recall the two transformed videos and one transformed image above — the big original doesn’t count because it wasn’t transformed. The first video (showing blog post width) was 15 seconds of output. The second video (silent vertical clip) was 10 seconds of output. The preview square is a still frame. These three operations would count as 26 transformations — and they would only bill once per month, regardless of how many visitors this page receives.</p>
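The arithmetic above can be expressed as a quick sketch (the counting rules and prices are those stated above; the function names are ours):

```typescript
// Sketch of the billing mechanics described above: a still frame counts as
// 1 transformation, a video as 1 transformation per second of output, and
// each unique transformation bills once per month.
type Operation = { kind: "frame" } | { kind: "video"; outputSeconds: number };

function countTransformations(ops: Operation[]): number {
  return ops.reduce(
    (sum, op) => sum + (op.kind === "frame" ? 1 : op.outputSeconds),
    0
  );
}

// $0.50 per 1,000 unique transformations, after a free allocation of 5,000.
function monthlyCost(uniqueTransformations: number): number {
  const billable = Math.max(0, uniqueTransformations - 5000);
  return (billable / 1000) * 0.5;
}

// The three examples in this post: 15 s video + 10 s video + 1 frame.
const total = countTransformations([
  { kind: "video", outputSeconds: 15 },
  { kind: "video", outputSeconds: 10 },
  { kind: "frame" },
]);
// total === 26, well inside the free allocation, so monthlyCost(total) === 0.
```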
    <div>
      <h3>Looking ahead</h3>
      <a href="#looking-ahead">
        
      </a>
    </div>
    <p>Our short-term focus will be on right-sizing input limits based on real customer usage, as well as adding a caching layer for origin fetches to reduce any egress fees our customers may be facing from other storage providers. Looking further ahead, we intend to bring Images and Media Transformations closer together to simplify the developer experience, unify the features, and streamline enablement: Cloudflare’s Media Transformations will optimize your images and video, quickly and easily, wherever you need them.</p><p>Try it for yourself today using our sample asset above, or get started by enabling Transformations on a zone in your account and uploading a short file to R2, both of which offer a free tier to get you going.</p>
            <category><![CDATA[Cloudflare Media Platform]]></category>
            <category><![CDATA[Cloudflare Stream]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Video]]></category>
            <guid isPermaLink="false">2KCsgqrpHOVpCClqBBPnYM</guid>
            <dc:creator>Taylor Smith</dc:creator>
            <dc:creator>Mickie Betz</dc:creator>
            <dc:creator>Ben Krebsbach</dc:creator>
        </item>
        <item>
            <title><![CDATA[Un experimento rápido: translating Cloudflare Stream captions with Workers AI]]></title>
            <link>https://blog.cloudflare.com/un-experimento-rapido-translating-cloudflare-stream-captions-with-workers-ai/</link>
            <pubDate>Tue, 24 Dec 2024 14:00:00 GMT</pubDate>
            <description><![CDATA[ How I used Workers AI to translate Cloudflare Stream’s auto-generated captions and what I learned along the way. ]]></description>
            <content:encoded><![CDATA[ <div>
  
</div>
<p></p><p><a href="https://www.cloudflare.com/products/cloudflare-stream"><u>Cloudflare Stream</u></a> launched AI-powered <a href="https://blog.cloudflare.com/stream-automatic-captions-with-ai"><u>automated captions</u></a> to transcribe English in on-demand videos in March 2024. Customers' immediate next questions were about other languages — both <i>transcribing</i> audio from other languages, and <i>translating</i> captions to make subtitles for other languages. As the Stream Product Manager, I've thought a lot about how we might tackle these, but I wondered…</p><p><b>What if I just translated a generated </b><a href="https://en.wikipedia.org/wiki/WebVTT"><b><u>VTT</u></b></a><b> (caption file)? Can we do that?</b> I hoped to use <a href="https://www.cloudflare.com/developer-platform/products/workers-ai/"><u>Workers AI</u></a> to conduct a quick experiment to learn more about the problem space, challenges we may find, and what platform capabilities we can leverage.</p><p>There is a <a href="https://github.com/elizabethsiegle/cfworkers-ai-translate"><u>sample translator demo</u></a> in Workers documentation that uses the “<a href="https://developers.cloudflare.com/workers-ai/models/m2m100-1.2b/"><u>m2m100-1.2b</u></a>” Many-to-Many multilingual translation model to translate short input strings. I decided to start there and try using it to translate some of the English captions in my Stream library into Spanish.</p>
    <div>
      <h2>Selecting test content</h2>
      <a href="#selecting-test-content">
        
      </a>
    </div>
    <p>I started with my <a href="https://customer-eq7kiuol0tk9chox.cloudflarestream.com/13297d6aa7c112b771c8d25d16fd3155/iframe?defaultTextTrack=en"><u>short demo video announcing</u></a> the transcription feature. I wanted a Worker that could read the VTT captions file from Stream, isolate the text content, and run it through the model as-is.</p><p>The first step was parsing the input. A VTT file is a text file that contains a sequence of numbered “cues,” each with a number, a start and end time, and text content. </p>
            <pre><code>WEBVTT
X-TIMESTAMP-MAP=LOCAL:00:00:00.000,MPEGTS:900000
 
1
00:00:00.000 --&gt; 00:00:02.580
Good morning, I'm Taylor Smith,
 
2
00:00:02.580 --&gt; 00:00:03.520
the Product Manager for Cloudflare
 
3
00:00:03.520 --&gt; 00:00:04.460
Stream. This is a quick
 
4
00:00:04.460 --&gt; 00:00:06.040
demo of our AI-powered automatic
 
5
00:00:06.040 --&gt; 00:00:07.580
subtitles feature. These subtitles
 
6
00:00:07.580 --&gt; 00:00:09.420
were generated with Cloudflare WorkersAI
 
7
00:00:09.420 --&gt; 00:00:10.860
and the Whisper Model,
 
8
00:00:10.860 --&gt; 00:00:12.020
not handwritten, and it took
 
9
00:00:12.020 --&gt; 00:00:13.940
just a few seconds.</code></pre>
            
    <div>
      <h2>Parsing the input</h2>
      <a href="#parsing-the-input">
        
      </a>
    </div>
    <p>I started with a simple Worker that would fetch the VTT from Stream directly, run it through a <a href="https://github.com/tsmith512/vtt-translate/blob/trunk/src/index.ts#L54"><u>function I wrote to deconstruct the cues</u></a>, and return the timestamps and original text in an easier-to-review format.</p>
            <pre><code>export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise&lt;Response&gt; {
    // Step One: Get our input.
    const input = await fetch(PLACEHOLDER_VTT_URL)
      .then(res =&gt; res.text());
 
    // Step Two: Parse the VTT file and get the text
    const captions = vttToCues(input);
 
    // Done: Return what we have.
    return new Response(captions.map(c =&gt;
      (`#${c.number}: ${c.start} --&gt; ${c.end}: ${c.content.toString()}`)
    ).join('\n'));
  },
};</code></pre>
            <p>That returned this text:</p>
            <pre><code>#1: 0 --&gt; 2.58: Good morning, I'm Taylor Smith,
#2: 2.58 --&gt; 3.52: the Product Manager for Cloudflare
#3: 3.52 --&gt; 4.46: Stream. This is a quick
#4: 4.46 --&gt; 6.04: demo of our AI-powered automatic
#5: 6.04 --&gt; 7.58: subtitles feature. These subtitles
#6: 7.58 --&gt; 9.42: were generated with Cloudflare WorkersAI
#7: 9.42 --&gt; 10.86: and the Whisper Model,
#8: 10.86 --&gt; 12.02: not handwritten, and it took
#9: 12.02 --&gt; 13.94: just a few seconds.</code></pre>
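The real <code>vttToCues()</code> is linked above; a minimal version that handles only the simple cues shown here (and nothing more of the WebVTT spec) might look like this:

```typescript
// Minimal sketch of a cue parser along the lines of the vttToCues() function
// linked above. The real implementation is in the linked repo; this version
// handles only simple cues like the ones in this post.
interface Cue {
  number: number;
  start: number; // seconds
  end: number;   // seconds
  content: string;
}

// "00:00:02.580" -> 2.58
function timestampToSeconds(ts: string): number {
  const [hh, mm, ss] = ts.split(":");
  return Number(hh) * 3600 + Number(mm) * 60 + Number(ss);
}

function vttToCues(vtt: string): Cue[] {
  const cues: Cue[] = [];
  // Cues are separated by blank lines; the first block is the WEBVTT header.
  for (const block of vtt.split(/\r?\n\s*\r?\n/)) {
    const lines = block.trim().split(/\r?\n/);
    if (lines.length < 3 || !lines[1].includes("-->")) continue;
    const [start, end] = lines[1]
      .split("-->")
      .map((t) => timestampToSeconds(t.trim()));
    cues.push({
      number: Number(lines[0]),
      start,
      end,
      content: lines.slice(2).join(" "),
    });
  }
  return cues;
}
```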
            
    <div>
      <h2>AI-ify</h2>
      <a href="#ai-ify">
        
      </a>
    </div>
    <p>As a proof of concept, I adapted a snippet from the demo into my Worker. In the example, the target language and input text are extracted from the user’s request. In my experiment, I decided to hardcode the languages. Also, I had an array of input objects, one for each cue, not just a string. After interpreting the caption input <i>but before returning a response</i>, I used a map callback to parallelize all the <code>AI.run()</code> calls to translate each cue, so they could execute asynchronously and in-place, then awaited them all to resolve. Ultimately, the AI inference call itself is the simplest part of the script.</p>
            <pre><code>await Promise.all(captions.map(async (q) =&gt; {
  const translation = await env.AI.run(
    "@cf/meta/m2m100-1.2b",
    {
      text: q.content,
      source_lang: "en",
      target_lang: "es",
    }
  );
 
  q.content = translation?.translated_text ?? q.content;
}));</code></pre>
            <p>Then the script returns the translated output in the format from before.</p><p>Of course, this is not a scalable or error-tolerant approach for production use, because it doesn’t account for rate limiting, failures, or larger workloads. But for a few minutes of tinkering, it taught me a lot.</p>
            <pre><code>#1: 0 --&gt; 2.58: Buen día, soy Taylor Smith.
#2: 2.58 --&gt; 3.52: El gerente de producto de Cloudflare
#3: 3.52 --&gt; 4.46: Rápido, esto es rápido
#4: 4.46 --&gt; 6.04: La demostración de nuestro automático AI-powered
#5: 6.04 --&gt; 7.58: Los subtítulos, estos subtítulos
#6: 7.58 --&gt; 9.42: Generado con Cloudflare WorkersAI
#7: 9.42 --&gt; 10.86: y el modelo de susurro,
#8: 10.86 --&gt; 12.02: No se escribió, y se tomó
#9: 12.02 --&gt; 13.94: Sólo unos segundos.</code></pre>
            <p>A few immediate observations: first, these results came back surprisingly quickly and the Workers AI code worked on the first try! Second, evaluating the quality of translation results is going to depend on having team members with expertise in those languages. Because — third, as a novice Spanish speaker, I can tell this output has some issues.</p><p>Cues 1 and 2 are okay, but 3 is not (“Fast, this is fast” from “[Cloudflare] Stream. This is a quick…”). Cues 5 through 9 had several idiomatic and grammatical issues, too. I theorized that this is because Stream splits the English captions into groups of 4 or 5 words to make them easy to <i>read</i> quickly in the overlay. But that also means sentences and grammatical constructs are interrupted. When those fragments go to the translation model, there isn’t enough context.</p>
    <div>
      <h2>Consolidating sentences</h2>
      <a href="#consolidating-sentences">
        
      </a>
    </div>
    <p>I speculated that reconstructing sentences would be the most effective way to improve translation quality, so I made that the one problem I attempted to solve within this exploration. I added a rough <a href="https://github.com/tsmith512/vtt-translate/blob/trunk/src/index.ts#L132C7-L218"><u>pre-processor</u></a> in the Worker that tries to merge caption cues together and then splits them at sentence boundaries instead. In the process, it also adjusts the timing of the resulting cues to cover the same approximate timeframe.</p><p>Looking at each cue in order:</p>
            <pre><code>// Break this cue up by sentence-ending punctuation.
const sentences = thisCue.content.split(/(?&lt;=[.?!]+)/g);

// Cut here? We have one fragment and it has a sentence terminator.
const cut = sentences.length === 1 &amp;&amp; thisCue.content.match(/[.?!]/);</code></pre>
            <p>But if there’s a cue that splits into multiple sentences, cut it up and split the timing. Leave the final fragment to roll into the next cue:</p>
            <pre><code>else if (sentences.length &gt; 1) {
  // Save the last fragment for later
  const nextContent = sentences.pop();

  // Put holdover content and all-but-last fragment into the content
  newContent += ' ' + sentences.join(' ');

  const thisLength = (thisCue.end - thisCue.start) / 2;

  result.push({
    number: newNumber,
    start: newStart,
    end: thisCue.start + (thisLength / 2), // End this cue early
    content: newContent,
  });

  // … then treat the next cue as a holdover
  cueLength = 1;
  newContent = nextContent;
  // Start the next consolidated cue halfway into this cue's original duration
  newStart = thisCue.start + (thisLength / 2) + 0.001;
  // Set the next consolidated cue's number to this cue's number
  newNumber = thisCue.number;
}</code></pre>
            <p>Applying that to the input, it generates sentence-grouped output, visualized here in green:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1MzmQ0KAJBntBrqgwGAqTd/035d044fc9e70c9933c1406074de52b9/image2.png" />
          </figure><p>There are only 3 “new” cues, and each starts at the beginning of a sentence. The consolidated cues are longer and might be harder to read when overlaid on a video, but they are complete grammatical units:</p>
            <pre><code>#1: 0 --&gt; 3.755:  Good morning, I'm Taylor Smith, the Product Manager for Cloudflare Stream.
#3: 3.756 --&gt; 6.425:  This is a quick demo of our AI-powered automatic subtitles feature.
#5: 6.426 --&gt; 12.5:  These subtitles were generated with Cloudflare Workers AI and the Whisper Model, not handwritten, and it took just a few seconds.</code></pre>
            <p>Translating this “prepared” input the same way as before:</p>
            <pre><code>#1: 0 --&gt; 3.755: Buen día, soy Taylor Smith, el gerente de producto de Cloudflare Stream.
#3: 3.756 --&gt; 6.425: Esta es una demostración rápida de nuestra función de subtítulos automáticos alimentados por IA.
#5: 6.426 --&gt; 12.5: Estos subtítulos fueron generados con Cloudflare WorkersAI y el Modelo Whisper, no escritos a mano, y solo tomó unos segundos.</code></pre>
            <p>¡Mucho mejor! [Much better!]</p>
    <div>
      <h2>Re-exporting to VTT</h2>
      <a href="#re-exporting-to-vtt">
        
      </a>
    </div>
    <p>To use these translated captions on a video, they need to be <a href="https://github.com/tsmith512/vtt-translate/blob/trunk/src/index.ts#L228-L238"><u>formatted back into a VTT</u></a> with renumbered cues and properly formatted timestamps. Ultimately, the solution should <a href="https://developers.cloudflare.com/stream/edit-videos/adding-captions/#upload-a-file"><u>automatically upload them back to Stream</u></a>, too, but that is an established process, so I set it aside as out of scope. The final VTT result from my Worker is this:</p>
            <pre><code>WEBVTT
 
1
00:00:00.000 --&gt; 00:00:03.754
Buen día, soy Taylor Smith, el gerente de producto de Cloudflare Stream.
 
2
00:00:03.755 --&gt; 00:00:06.424
Esta es una demostración rápida de nuestra función de subtítulos automáticos alimentados por IA.
 
3
00:00:06.426 --&gt; 00:00:12.500
Estos subtítulos fueron generados con Cloudflare WorkersAI y el Modelo Whisper, no escritos a mano, y solo tomó unos segundos.</code></pre>
            <p>I saved it to a file locally and, using the Cloudflare Dashboard, I added it to the video which you may have noticed embedded at the top of this post! Captions can also be <a href="https://developers.cloudflare.com/stream/edit-videos/adding-captions/#upload-a-file"><u>uploaded via the API</u></a>.</p>
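The renumbering and timestamp formatting in that export step can be sketched as follows (a hypothetical minimal version; the real code is linked above):

```typescript
// Sketch of the VTT re-export step linked above: renumber cues and format
// second-based times back into WebVTT "HH:MM:SS.mmm" timestamps.
interface Cue {
  number: number;
  start: number; // seconds
  end: number;   // seconds
  content: string;
}

function secondsToTimestamp(seconds: number): string {
  // Work in whole milliseconds to dodge floating-point drift.
  const ms = Math.round(seconds * 1000);
  const hh = Math.floor(ms / 3600000);
  const mm = Math.floor((ms % 3600000) / 60000);
  const ss = Math.floor((ms % 60000) / 1000);
  const mmm = ms % 1000;
  const pad = (n: number, w: number) => String(n).padStart(w, "0");
  return `${pad(hh, 2)}:${pad(mm, 2)}:${pad(ss, 2)}.${pad(mmm, 3)}`;
}

function cuesToVtt(cues: Cue[]): string {
  const body = cues
    .map(
      (cue, i) =>
        // Renumber sequentially; consolidation may have left gaps.
        `${i + 1}\n${secondsToTimestamp(cue.start)} --> ${secondsToTimestamp(cue.end)}\n${cue.content}`
    )
    .join("\n\n");
  return `WEBVTT\n\n${body}`;
}
```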
    <div>
      <h2>More testing and what I learned</h2>
      <a href="#more-testing-and-what-i-learned">
        
      </a>
    </div>
    <p>I tested this script on a variety of videos from many sources, including short social media clips, 30-minute video diaries, and even a few clips with some specialized vocabulary. Ultimately, I was surprised at the level of prototype I was able to build on my first afternoon with Workers AI. The translation results were very promising! In the process, I learned a few key things that I will be bringing back to product planning for Stream:</p><p><b>We have the tools.</b> Workers AI has a model called "<a href="https://developers.cloudflare.com/workers-ai/models/m2m100-1.2b/"><u>m2m100-1.2b</u></a>" from Hugging Face that can do text translations between many languages. We can use it to translate the plain text cues from VTT files — whether we generate them or they are user-supplied. We’ll keep an eye out for new models as they are added, too.</p><p><b>Quality is prone to a "copy-of-a-copy" effect.</b> When auto-translating captions that were auto-transcribed, issues that impact the English transcription have a huge downstream impact on the translation. Editing the source transcription improves quality <i>a lot</i>.</p><p><b>Good grammar and punctuation count.</b> Translations are significantly improved if the source content is grammatically correct and punctuated properly. Punctuation is often missing when captions are auto-generated, but not always. I would like to learn more about how to predict that, and whether there are ways we can increase punctuation in the output of transcription jobs. My cue consolidator experiment returns giant walls of text if there’s no punctuation on the input.</p><p><b>Translate full sentences when possible.</b> We split our transcriptions into cues of about 5 words for several reasons. However, this produces lower quality output when translated because it breaks grammatical constructs. Translation results are better with full sentences or at least complete fragments. 
This is doable, but easier said than done, particularly as we look toward support for additional input languages that use punctuation differently.</p><p><b>We will have blind spots when evaluating quality.</b> Everyone on our team was able to adequately evaluate English <i>transcriptions</i>. Sanity-checking the quality of <i>translations</i> will require team members who are familiar with those languages. We state disclaimers about transcription quality and offer tips to improve it, but at least we know what we're looking at. For translations, we may not know how far off we are in many cases. How many readers of this article objected to the first translation sample above?</p><p><b>Clear UI and API design will be important for these related but distinct workflows.</b> There are two different flows being requested by Stream customers: "My audio is in English, please make translated subtitles" alongside "My audio is in another language, please transcribe captions as-is." We will need to carefully consider how we shape user-facing interactions to make it really clear to a user what they are asking us to do.</p><p><b>Workers AI is really easy to use.</b> Sheepishly, I will admit: although I read Stream's code for the transcription feature, this was the first time I've ever used Workers AI on my own, and it was definitely the easiest part of this experiment!</p><p>Finally, as a product manager, it is important I remain focused on the outcome. From a certain point of view, this experiment is a bit of an <a href="https://en.wikipedia.org/wiki/XY_problem"><u>XY Problem</u></a>. The <i>need</i> is "I have audio in one language and I want subtitles in another." Are there other avenues worth looking into besides "transcribe to captions, then restructure and translate those captions?" Quite possibly. 
But this experiment with Workers AI helped me identify some potential challenges to plan for and opportunities to get excited about!</p><p>I’ve cleaned up and shared the sample code I used in this experiment at <a href="https://github.com/tsmith512/vtt-translate/"><u>https://github.com/tsmith512/vtt-translate/</u></a>. Try it out and share your experience!</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Stream]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Workers AI]]></category>
            <guid isPermaLink="false">6OAfYNDjjJBccE1gFIVrnu</guid>
            <dc:creator>Taylor Smith</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Stream Generated Captions, powered by Workers AI]]></title>
            <link>https://blog.cloudflare.com/stream-automatic-captions-with-ai/</link>
            <pubDate>Thu, 20 Jun 2024 13:00:29 GMT</pubDate>
            <description><![CDATA[ With one click, users can now generate video captions effortlessly using Stream’s newest feature: AI-generated captions for on-demand videos and recordings of live streams ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ZNOCnu9XDN7qU6CIDuysv/e297849f3d6bae5663bfae0866febbfe/image3-6.png" />
            
            </figure><p>With one click, customers can now generate video captions effortlessly using Stream’s newest feature: AI-generated captions for on-demand videos and recordings of live streams. As part of Cloudflare’s mission to help build a better Internet, this feature is available to all Stream customers at no additional cost.</p><p>This solution is designed for simplicity, eliminating the need for third-party transcription services and complex workflows. For videos lacking accessibility features like captions, manual transcription can be time-consuming and impractical, especially for large video libraries. Traditionally, it has involved specialized services, sometimes even dedicated teams, to transcribe audio and deliver the text along with video, so it can be displayed during playback. As captions become more widely expected for a variety of reasons, including ethical obligation, legal compliance, and changing audience preferences, we wanted to relieve this burden.</p><p>With <a href="https://www.cloudflare.com/products/cloudflare-stream/">Stream’s integrated solution</a>, the caption generation process is seamlessly integrated into your existing video management workflow, saving time and resources. Regardless of when you uploaded a video, you can easily add automatic captions to enhance accessibility. Captions can now be generated within the Cloudflare Dashboard or via an API request, all within the familiar and unified Stream platform.</p><p>This feature is designed with utmost consideration for privacy and data protection. Unlike other third-party transcription services that may share content with external entities, your data remains securely within Cloudflare's ecosystem throughout the caption generation process. Cloudflare does not utilize your content for model training purposes. For more information about data protection, review <a href="https://developers.cloudflare.com/workers-ai/privacy/">Your Data and Workers AI</a>.</p>
    <div>
      <h2>Getting Started</h2>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>Starting June 20, 2024, this beta is available to all Stream customers, as well as subscribers of the Pro and Business plans, which include 100 minutes of video storage.</p><p>To get started, upload a video to Stream (from the Cloudflare <a href="https://dash.cloudflare.com/?to=/:account/stream">Dashboard</a> or via <a href="https://developers.cloudflare.com/stream/uploading-videos/upload-video-file/">API</a>).</p><div>
  
</div><p>Next, navigate to the "Captions" tab on the video, click “Add Captions,” then select the language and “Generate captions with AI.” Finally, click save, and within a few minutes, the new captions will be visible in the captions manager and automatically available in the player, too. Captions can also be <a href="https://developers.cloudflare.com/stream/edit-videos/adding-captions/">generated via the API</a>.</p><p>When captions are ready, the Stream player is automatically updated to offer them to viewers. The HLS and DASH manifests are also updated so third-party players that support text tracks can display them as well.</p><p>On-demand videos and recordings of live streams, regardless of when they were created, are supported. While in beta, only English captions can be generated, and videos must be shorter than 2 hours. The quality of the transcription is best on videos with clear speech and minimal background noise.</p><p>We've been pleased with how well the AI model transcribes different types of content during our tests. That said, there are times when the results aren't perfect, and another method might work better for some use cases. It's important to check whether the accuracy of the generated captions is right for your needs.</p>
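<p>For API-based workflows, the request to kick off caption generation can be sketched as below. This is illustrative TypeScript, not an official client: the account ID, video UID, and token are placeholders, and the endpoint shape should be confirmed against the Stream captions documentation.</p>

```typescript
// Build the caption-generation endpoint for a Stream video.
// Path shape follows the Stream "adding captions" docs; all values here are placeholders.
function captionsGenerateUrl(accountId: string, videoUid: string, language: string): string {
  return `https://api.cloudflare.com/client/v4/accounts/${accountId}/stream/${videoUid}/captions/${language}/generate`;
}

// Describe the POST request that starts AI caption generation.
// Pass this to fetch() or your HTTP client of choice with a real API token.
function captionsGenerateRequest(accountId: string, videoUid: string, token: string) {
  return {
    url: captionsGenerateUrl(accountId, videoUid, "en"),
    method: "POST" as const,
    headers: { Authorization: `Bearer ${token}` },
  };
}
```

<p>Generation is asynchronous, so after the request succeeds, check the video’s captions list (in the dashboard or via API) until the new track is ready.</p>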
    <div>
      <h2>Technical Details</h2>
      <a href="#technical-details">
        
      </a>
    </div>
    
    <div>
      <h3>Built using Workers AI</h3>
      <a href="#built-using-workers-ai">
        
      </a>
    </div>
    <p>The Stream engineering team built this new feature using <a href="https://developers.cloudflare.com/workers-ai/">Workers AI</a>, allowing us to access the <a href="https://developers.cloudflare.com/workers-ai/models/whisper/">Whisper</a> model – an open source Automatic Speech Recognition model – with a single API call. Using Workers AI radically simplified AI model deployment, integration, and scaling with an out-of-the-box solution. It eliminated the need for our team to handle infrastructure complexities, enabling us to focus solely on building the automated captions feature.</p><p>Writing software that utilizes an AI model can involve several challenges. First, there’s the difficulty of configuring the appropriate hardware infrastructure. AI models require substantial computational resources to run efficiently, typically on specialized hardware like GPUs, which can be expensive and complex to manage. There’s also the daunting task of deploying AI models at scale, which involves balancing workload distribution, minimizing latency, optimizing throughput, and maintaining high availability. Not only does Workers AI solve the pain of managing underlying infrastructure, it also automatically scales as needed.</p><p>Using Workers AI transformed a daunting task into a Worker that transcribes audio files in fewer than 30 lines of code.</p>
            <pre><code>import { Ai } from '@cloudflare/ai'

export interface Env {
  AI: any
}

export type AiVTTOutput = {
  vtt?: string
}

export default {
  async fetch(request: Request, env: Env) {
    // The request body is the raw audio to transcribe
    const blob = await request.arrayBuffer()

    const ai = new Ai(env.AI)
    const input = {
      audio: [...new Uint8Array(blob)],
    }

    try {
      // Run the Whisper speech-to-text model on the audio
      const response: AiVTTOutput = (await ai.run(
        '@cf/openai/whisper-tiny-en',
        input
      )) as any
      // Return the transcript as WebVTT captions
      return Response.json({ vtt: response.vtt })
    } catch (e) {
      const errMsg =
        e instanceof Error
          ? `${e.name}\n${e.message}\n${e.stack}`
          : 'unknown error type'
      return new Response(errMsg, {
        status: 500,
        statusText: 'Internal error',
      })
    }
  },
}</code></pre>
            
    <div>
      <h3>Quickly captioning videos at scale</h3>
      <a href="#quickly-captioning-videos-at-scale">
        
      </a>
    </div>
    <p>The Stream team wanted to ensure this feature is fast and performant at scale, which required engineering work to process a high volume of videos regardless of duration.</p><p>First, our team needed to pre-process the audio prior to running <a href="https://www.cloudflare.com/learning/ai/inference-vs-training/">AI inference</a> to ensure the input is compatible with Whisper’s format and requirements.</p><p>There is a wide spectrum of variability in video content, from a short grainy video filmed on a phone to a multi-hour high-quality Hollywood-produced movie. Videos may be silent or contain an action-driven cacophony. Also, Stream’s on-demand videos include recordings of live streams, which are packaged differently from videos uploaded as whole files. With this variability, the audio inputs are stored in an array of different container formats, with different durations and file sizes. We ensured our audio files were properly formatted to be compatible with Whisper’s requirements.</p><p>One aspect of pre-processing is ensuring files are a sensible duration for optimized inference. Whisper has a “sweet spot” of 30 seconds for the duration of audio files for transcription. As they note in this <a href="https://github.com/openai/whisper/discussions/1118">Github discussion</a>: “<i>Too short, and you’d lack surrounding context. You’d cut sentences more often. A lot of sentences would cease to make sense. Too long, and you’ll need larger and larger models to contain the complexity of the meaning you want the model to keep track of.</i>” Fortunately, Stream already splits videos into smaller segments to ensure fast delivery during playback on the web. We wrote functionality to concatenate those small segments into 30-second batches before sending them to Workers AI.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4hAKps3VE8xMCHQX99y7th/28e47409eb8ed6e794683c878e4c0c32/image1-17.png" />
            
            </figure><p>To optimize processing speed, our team parallelized as many operations as possible. By concurrently creating the 30-second audio batches and sending requests to Workers AI, we take full advantage of the scalability of the Workers AI platform. Doing this greatly reduces the time it takes to generate captions, but adds some additional complexity. Because we are sending requests to Workers AI in parallel, transcription responses may arrive out-of-order. For example, if a video is one minute in duration, the request to generate captions for the second 30 seconds of a video may complete before the request for the first 30 seconds of the video. The captions need to be sequential to align with the video, so our team had to maintain an understanding of the audio batch order to ensure our final combined <a href="https://developer.mozilla.org/en-US/docs/Web/API/WebVTT_API">WebVTT caption file</a> is properly synced with the video. We sort the incoming Workers AI responses and re-order timestamps for a final accurate transcript.</p><p>The end result is the ability to generate captions for longer videos quickly and efficiently at scale.</p>
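<p>The re-ordering step described above can be illustrated with a minimal TypeScript sketch (not Stream’s production code): sort batch transcripts by their position in the audio, then re-base each cue’s timestamps by 30 seconds per batch before concatenating them into one WebVTT file.</p>

```typescript
type BatchResult = { index: number; vtt: string };

// Shift one "HH:MM:SS.mmm" WebVTT timestamp forward by a number of seconds.
function shiftTimestamp(ts: string, offsetSeconds: number): string {
  const [h, m, s] = ts.split(":");
  const total = Number(h) * 3600 + Number(m) * 60 + Number(s) + offsetSeconds;
  const hh = String(Math.floor(total / 3600)).padStart(2, "0");
  const mm = String(Math.floor((total % 3600) / 60)).padStart(2, "0");
  const ss = (total % 60).toFixed(3).padStart(6, "0");
  return `${hh}:${mm}:${ss}`;
}

// Re-order out-of-order batch transcripts and re-base their cue timings,
// assuming each batch covers a fixed 30-second window of the source audio.
function mergeVtt(results: BatchResult[], batchSeconds = 30): string {
  const cues = results
    .sort((a, b) => a.index - b.index)
    .flatMap(({ index, vtt }) =>
      vtt
        .split(/\n\n+/) // WebVTT cue blocks are blank-line separated
        .filter((block) => block.includes("-->")) // drop each batch's WEBVTT header
        .map((block) =>
          block.replace(/\d{2}:\d{2}:\d{2}\.\d{3}/g, (ts) =>
            shiftTimestamp(ts, index * batchSeconds)
          )
        )
    );
  return ["WEBVTT", ...cues].join("\n\n");
}
```

<p>In the real pipeline, the batch index is known when the request is dispatched, so responses can be merged as they arrive rather than waiting for all of them.</p>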
    <div>
      <h2>Try it now</h2>
      <a href="#try-it-now">
        
      </a>
    </div>
    <p>We are excited to bring this feature to open beta for all of our subscribers as well as <a href="https://www.cloudflare.com/plans/pro/">Pro</a> and <a href="https://www.cloudflare.com/plans/business/">Business</a> plan customers today! Get started by <a href="https://dash.cloudflare.com/?to=/:account/stream">uploading a video to Stream</a>. Review <a href="https://developers.cloudflare.com/stream/edit-videos/adding-captions/">our documentation</a> for tutorials and current beta limitations. Up next, we will be focused on adding more languages and supporting longer videos.</p> ]]></content:encoded>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Stream]]></category>
            <guid isPermaLink="false">79pYyuiPCqjfXnecuHIhTK</guid>
            <dc:creator>Mickie Betz</dc:creator>
            <dc:creator>Ben Krebsbach</dc:creator>
            <dc:creator>Taylor Smith</dc:creator>
        </item>
        <item>
            <title><![CDATA[What’s new with Cloudflare Media: updates for Calls, Stream, and Images]]></title>
            <link>https://blog.cloudflare.com/whats-next-for-cloudflare-media/</link>
            <pubDate>Thu, 04 Apr 2024 13:00:40 GMT</pubDate>
            <description><![CDATA[ With Cloudflare Calls in open beta, you can build real-time, serverless video and audio applications. Cloudflare Stream lets your viewers instantly clip from ongoing streams ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Our customers use Cloudflare Calls, Stream, and Images to build live, interactive, and real-time experiences for their users. We want to reduce friction by making it easier to get data into our products. This also means providing transparent pricing, so customers can be confident that costs make economic sense for their business, especially as they scale.</p><p>Today, we’re introducing four new improvements to help you build media applications with Cloudflare:</p><ul><li><p>Cloudflare Calls is in open beta with transparent pricing</p></li><li><p>Cloudflare Stream has a Live Clipping API to let your viewers instantly clip from ongoing streams</p></li><li><p>Cloudflare Images has a pre-built upload widget that you can embed in your application to accept uploads from your users</p></li><li><p>Cloudflare Images lets you crop and resize images of people at scale with automatic face cropping</p></li></ul>
    <div>
      <h3>Build real-time video and audio applications with Cloudflare Calls</h3>
      <a href="#build-real-time-video-and-audio-applications-with-cloudflare-calls">
        
      </a>
    </div>
    <p>Cloudflare Calls is now in open beta, and you can activate it from your dashboard. Your usage will be free until May 15, 2024. Starting May 15, 2024, customers with a Calls subscription will receive the first terabyte each month for free, with any usage beyond that charged at $0.05 per real-time gigabyte. Additionally, there are no charges for inbound traffic to Cloudflare.</p><p>To get started, read the <a href="https://developers.cloudflare.com/calls/">developer documentation for Cloudflare Calls</a>.</p>
    <div>
      <h3>Live Instant Clipping: create clips from live streams and recordings</h3>
      <a href="#live-instant-clipping-create-clips-from-live-streams-and-recordings">
        
      </a>
    </div>
    <p>Live broadcasts often include short bursts of highly engaging content within a longer stream. Creators and viewers alike enjoy being able to make a “clip” of these moments to share across multiple channels. Being able to generate that clip rapidly enables our customers to offer instant replays, showcase key pieces of recordings, and build audiences on social media in real time.</p><p>Today, <a href="https://www.cloudflare.com/products/cloudflare-stream/">Cloudflare Stream</a> is launching Live Instant Clipping in open beta for all customers. With the new Live Clipping API, you can let your viewers instantly clip and share moments from an ongoing stream, without re-encoding the video.</p><p>When planning this feature, we considered a typical user flow for generating clips from live events. Consider users watching a stream of a video game: something wild happens and users want to save and share a clip of it to social media. What will they do?</p><p>First, they’ll need to be able to review the preceding few minutes of the broadcast, so they know what to clip. Next, they need to select a start time and clip duration or end time, possibly as a visualization on a timeline or by scrubbing the video player. Finally, the clip must be available quickly in a way that can be replayed or shared across multiple platforms, even after the original broadcast has ended.</p><p>That ideal user flow implies some heavy lifting in the background. We now offer a manifest to preview recent live content in a rolling window, and we provide the timing information in that response to determine the start and end times of the requested clip relative to the whole broadcast. Finally, on request, we generate that clip on the fly as a standalone video file for easy sharing, as well as an HLS manifest for embedding into players.</p><p>Live Instant Clipping is available in beta to all customers starting today! Live clips are free to make; they do not count toward storage quotas, and playback is billed just like minutes of video delivered. To get started, check out the <a href="https://developers.cloudflare.com/stream/stream-live/live-instant-clipping/">Live Clipping API in developer documentation</a>.</p>
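<p>To make the timing step concrete, here is a small TypeScript sketch of the client-side math: translating a point the viewer picks on the rolling preview window into a start time and duration relative to the whole broadcast. The field names are illustrative, not the Live Clipping API’s actual schema.</p>

```typescript
// Hypothetical shape of the timing info returned with the preview manifest:
// where the rolling window begins, as an offset into the broadcast.
type PreviewWindow = { windowStartSeconds: number };

// Translate a start point the viewer picked on the preview timeline
// into a (start, duration) pair relative to the whole broadcast.
function clipTiming(win: PreviewWindow, pickedOffsetSeconds: number, durationSeconds: number) {
  return {
    startTimeSeconds: win.windowStartSeconds + pickedOffsetSeconds,
    durationSeconds,
  };
}
```

<p>The resulting pair is what a client would submit when requesting the standalone clip or its HLS manifest.</p>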
    <div>
      <h3>Integrate Cloudflare Images into your application with only a few lines of code</h3>
      <a href="#integrate-cloudflare-images-into-your-application-with-only-a-few-lines-of-code">
        
      </a>
    </div>
    <p>Building applications with user-uploaded images is even easier with the upload widget, a pre-built, interactive UI that lets users upload images directly into your Cloudflare Images account.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1MVN5ibd1UGnokaEm7f1Vq/8efedb285ec93d52867d78ca63cb454b/image3-9.png" />
            
            </figure><p>Many developers use <a href="https://www.cloudflare.com/developer-platform/cloudflare-images/">Cloudflare Images</a> as an end-to-end image management solution to support applications that center around user-generated content, from AI photo editors to social media platforms. Our APIs connect the frontend experience – where users upload their images – to the storage, optimization, and delivery operations in the backend.</p><p>But building an application can take time. Our team saw a huge opportunity to take away as much extra work as possible, and we wanted to provide off-the-shelf integration to speed up the development process.</p><p>With the upload widget, you can seamlessly integrate Cloudflare Images into your application within minutes. The widget can be integrated in two ways: by embedding a script into a static HTML page or by installing a package that works with your favorite framework. We provide a ready-made Worker template that you can deploy directly to your account to connect your frontend application with Cloudflare Images and authorize users to upload through the widget.</p><p>To try out the upload widget, <a href="https://forms.gle/vBu47y3638k8fkGF8">sign up for our closed beta</a>.</p>
    <div>
      <h3>Optimize images of people with automatic face cropping for Cloudflare Images</h3>
      <a href="#optimize-images-of-people-with-automatic-face-cropping-for-cloudflare-images">
        
      </a>
    </div>
    <p>Cloudflare Images lets you dynamically manipulate images in different aspect ratios and dimensions for various use cases. With face cropping for Cloudflare Images, you can now crop and resize images of people’s faces at scale. For example, if you’re building a social media application, you can apply automatic face cropping to generate profile picture thumbnails from user-uploaded images.</p><p>Our existing gravity parameter uses saliency detection to set the focal point of an image based on the most visually interesting pixels, which determines how the image will be cropped. We expanded this feature by using a machine learning model called RetinaFace, which classifies images that have human faces. We’re also introducing a new zoom parameter that you can combine with face cropping to specify how closely an image should be cropped toward the face.</p><p>To apply face cropping to your image optimization, <a href="https://forms.gle/2bPbuijRoqGi6Qn36">sign up for our closed beta</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6JFNk182dDZHu0sxIySMC5/d3821e2f911b7e31bb411addcc10bdb6/image2-10.png" />
            
            </figure><p><i>Photo by</i> <a href="https://unsplash.com/@eyeforebony?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><i>Eye for Ebony</i></a> <i>on</i> <a href="https://unsplash.com/photos/photo-of-woman-wearing-purple-lipstick-and-black-crew-neck-shirt-vYpbBtkDhNE?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><i>Unsplash</i></a></p>
            <pre><code>https://example.com/cdn-cgi/image/fit=crop,width=500,height=500,gravity=face,zoom=0.6/https://example.com/images/picture.jpg</code></pre>
            
    <div>
      <h3>Meet the Media team over Discord</h3>
      <a href="#meet-the-media-team-over-discord">
        
      </a>
    </div>
    <p>As we’re working to build the next set of media tools, we’d love to hear what you’re building for your users. Come <a href="https://discord.gg/cloudflaredev">say hi to us on Discord</a>. You can also learn more by visiting our developer documentation for <a href="https://developers.cloudflare.com/calls/">Calls</a>, <a href="https://developers.cloudflare.com/stream/">Stream</a>, and <a href="https://developers.cloudflare.com/images/">Images</a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Cloudflare Stream]]></category>
            <category><![CDATA[Live Streaming]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <category><![CDATA[Image Optimization]]></category>
            <category><![CDATA[Image Resizing]]></category>
            <category><![CDATA[Image Storage]]></category>
            <category><![CDATA[Cloudflare Calls]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">4fOMOrJU6Bg9JNkRAThc7c</guid>
            <dc:creator>Deanna Lam</dc:creator>
            <dc:creator>Taylor Smith</dc:creator>
            <dc:creator>Zaid Farooqui</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Stream Low-Latency HLS support now in Open Beta]]></title>
            <link>https://blog.cloudflare.com/cloudflare-stream-low-latency-hls-open-beta/</link>
            <pubDate>Mon, 25 Sep 2023 13:00:29 GMT</pubDate>
            <description><![CDATA[ Cloudflare Stream’s LL-HLS support enters open beta today. You can deliver video to your audience faster, reducing the latency a viewer may experience on their player to as little as 3 seconds ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Stream Live lets users easily scale their live-streaming apps and websites to millions of creators and concurrent viewers while focusing on the content rather than the infrastructure — Stream manages codecs, protocols, and bit rate automatically.</p><p>For <a href="/recapping-speed-week-2023/">Speed Week</a> this year, we introduced a <a href="/low-latency-hls-support-for-cloudflare-stream/">closed beta of Low-Latency HTTP Live Streaming</a> (LL-HLS), which builds upon the high-quality, feature-rich HTTP Live Streaming (HLS) protocol. Lower latency brings creators even closer to their viewers, empowering customers to build more interactive features like chat and enabling the use of live-streaming in more time-sensitive applications like live e-learning, sports, gaming, and events.</p><p>Today, in celebration of Birthday Week, we’re opening this beta to all customers with even lower latency. With LL-HLS, you can deliver video to your audience faster, reducing the latency a viewer may experience on their player to as little as three seconds. <a href="https://www.cloudflare.com/developer-platform/solutions/live-streaming/">Low Latency streaming</a> is priced the same way, too: $1 per 1,000 minutes delivered, with zero extra charges for encoding or bandwidth.</p>
    <div>
      <h3>Broadcast with latency as low as three seconds.</h3>
      <a href="#broadcast-with-latency-as-low-as-three-seconds">
        
      </a>
    </div>
    <p>LL-HLS is an extension of the <a href="https://www.cloudflare.com/learning/video/what-is-http-live-streaming/">HLS standard</a> that allows us to reduce glass-to-glass latency — the time between something happening on the broadcast end and a user seeing it on their screen. That includes factors like network conditions and transcoding for HLS and adaptive bitrates. We also include client-side buffering in our understanding of latency because we know the experience is driven by what a user sees, not when a byte is delivered into a buffer. Depending on encoder and player settings, broadcasters' content can be playing on viewers' screens in less than three seconds.</p><div>
  
</div><p><i>On the left,</i> <a href="https://obsproject.com/"><i>OBS Studio</i></a> <i>broadcasting from my personal computer to Cloudflare Stream. On the right, watching this livestream using our own built-in player playing LL-HLS with three second latency!</i></p>
    <div>
      <h3>Same pricing, lower latency. Encoding is always free.</h3>
      <a href="#same-pricing-lower-latency-encoding-is-always-free">
        
      </a>
    </div>
    <p>Our addition of LL-HLS support builds on all the best parts of Stream, including simple, predictable pricing. You never have to pay for ingress (broadcasting to us), compute (encoding), or egress. This allows you to stream with peace of mind, knowing there are no surprise fees and no need to trade quality for cost. Regardless of bitrate or resolution, Stream costs $1 per 1,000 minutes of video delivered and $5 per 1,000 minutes of video stored, billed monthly.</p><p>Stream also provides both a built-in web player and HLS/DASH manifests to use in a compatible player of your choosing. This enables you or your users to go live using the same protocols and tools that broadcasters big and small use to go live to YouTube or Twitch, but gives you full control over access and presentation of live streams. We also provide access control with signed URLs and hotlinking prevention measures to protect your content.</p>
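<p>That pricing is simple enough to estimate in one line. As a quick sanity check, a hypothetical helper (illustrative only, not an official calculator):</p>

```typescript
// Estimate a monthly Stream bill from the pricing above:
// $1 per 1,000 minutes delivered, $5 per 1,000 minutes stored.
// Ingress, encoding, and egress are not billed.
function monthlyStreamCostUsd(minutesDelivered: number, minutesStored: number): number {
  return (minutesDelivered / 1000) * 1 + (minutesStored / 1000) * 5;
}
```

<p>For example, delivering 10,000 minutes from a 2,000-minute library would come to $20 for the month.</p>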
    <div>
      <h3>Powered by the strength of the network</h3>
      <a href="#powered-by-the-strength-of-the-network">
        
      </a>
    </div>
    <p>And of course, Stream is powered by Cloudflare's global network for fast delivery worldwide, with points of presence within 50ms of 95% of the Internet-connected population, a key factor in our quest to slash latency. We ingest live video close to broadcasters and move it rapidly through Cloudflare’s network. We run encoders on-demand and generate player manifests as close to viewers as possible.</p>
    <div>
      <h3>Getting started with LL-HLS</h3>
      <a href="#getting-started-with-ll-hls">
        
      </a>
    </div>
    <p>Getting started with Stream Live only takes a few minutes, and by using Live <i>Outputs</i> for restreaming, you can even test it without changing your existing infrastructure. First, create or update a Live Input in the Cloudflare dashboard. While in beta, Live Inputs will have an option to enable LL-HLS called “Low-Latency HLS Support.” Activate this toggle to enable the new pipeline.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3LHUfSgkCln8UFZy12CtHF/17d72972159b165364d5db31c069702f/image1-9.png" />
            
</figure><p>Stream will automatically provide the RTMPS and SRT endpoints to broadcast your feed to us, just as before. For the best results, we recommend the following broadcast settings:</p><ul><li><p>Codec: h264</p></li><li><p>GOP size / keyframe interval: 1 second</p></li></ul><p>Optionally, configure a Live Output to point to your existing video ingest endpoint via RTMPS or SRT to test Stream while rebroadcasting to an existing workflow or infrastructure.</p><p>Alongside those RTMPS and SRT endpoints, the dashboard also provides an HTML embed for our built-in player.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1zFDLgJ0iwEuh0Brat0TXz/da52d22ae28234ecd4390cf4dc518f4d/image3-7-1.png" />
            
            </figure><p>This connection information can be added easily to a broadcast application like OBS to start streaming immediately:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Gn4GlXNvamzD1qnN5vEi2/c503a9c7280c6bc8f88a1bfadf43b876/image2-7.png" />
            
            </figure><p>During the beta, our built-in player will automatically attempt to use low-latency for any enabled Live Input, falling back to regular HLS otherwise. If LL-HLS is being used, you’ll see “Low Latency” noted in the player.</p><p>During this phase of the beta, we are most closely focused on using <a href="https://obsproject.com/">OBS</a> to broadcast and Stream’s built-in player to watch. However, you may test the LL-HLS manifest in a player of your own by appending <code>?protocol=llhls</code> to the end of the HLS manifest URL. This flag may change in the future and is not yet ready for production usage; <a href="https://developers.cloudflare.com/stream/changelog/">watch for changes in DevDocs</a>.</p>
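<p>If you test in a player of your own, a tiny helper like this can append the beta flag to a manifest URL. A sketch only: remember the flag is beta-only and may change.</p>

```typescript
// Append the beta-only LL-HLS flag to a Stream HLS manifest URL.
// The ?protocol=llhls parameter may change before general availability,
// so don't bake this into production players yet.
function llhlsManifestUrl(hlsManifestUrl: string): string {
  const separator = hlsManifestUrl.includes("?") ? "&" : "?";
  return `${hlsManifestUrl}${separator}protocol=llhls`;
}
```

<p>Feed the resulting URL to any HLS player that supports low-latency playback; the built-in Stream player handles this switch for you.</p>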
    <div>
      <h3>Sign up today</h3>
      <a href="#sign-up-today">
        
      </a>
    </div>
    <p>Low-Latency HLS is Stream Live’s latest tool to bring your creators and audiences together. All new and existing Stream subscriptions are eligible for the LL-HLS open beta today, with no pricing changes or contract requirements, all part of building the fastest, simplest serverless live-streaming platform. Join our beta to start test-driving Low-Latency HLS!</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Stream]]></category>
            <category><![CDATA[Live Streaming]]></category>
            <category><![CDATA[Restreaming]]></category>
            <category><![CDATA[Video]]></category>
            <category><![CDATA[Latency]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">4s1ozw7fIyXQZnMYrnEnX4</guid>
            <dc:creator>Taylor Smith</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing scheduled deletion for Cloudflare Stream]]></title>
            <link>https://blog.cloudflare.com/introducing-scheduled-deletion-for-cloudflare-stream/</link>
            <pubDate>Fri, 11 Aug 2023 13:00:45 GMT</pubDate>
            <description><![CDATA[ Easily manage storage with scheduled deletion for Cloudflare Stream, available for live recordings and on-demand video ]]></description>
            <content:encoded><![CDATA[ <p>Designed with developers in mind, Cloudflare Stream provides a seamless, integrated workflow that simplifies video streaming for creators and platforms alike. As customers build with features like <a href="/stream-live-ga/">Stream Live</a> and <a href="/stream-creator-management/">creator management</a>, they have been looking for ways to streamline storage management.</p><p>Today, August 11, 2023, Cloudflare Stream is introducing scheduled deletion to easily manage video lifecycles from the Stream dashboard or our API, saving time and reducing storage-related costs. Whether you need to retain recordings from a live stream for only a limited time or preserve direct creator videos for a set duration, scheduled deletion simplifies storage management.</p>
    <div>
      <h2>Stream scheduled deletion</h2>
      <a href="#stream-scheduled-deletion">
        
      </a>
    </div>
    <p>Scheduled deletion allows developers to automatically remove on-demand videos and live recordings from their library at a specified time. Live inputs can be set up with a deletion rule, ensuring that all recordings from the input will have a scheduled deletion date upon completion of the stream.</p><p>Let’s see how it works in those two configurations.</p>
    <div>
      <h2>Getting started with scheduled deletion for on-demand videos</h2>
      <a href="#getting-started-with-scheduled-deletion-for-on-demand-videos">
        
      </a>
    </div>
    <p>Whether you run a learning platform where students can upload videos for review, a platform that allows gamers to share clips of their gameplay, or anything in between, scheduled deletion can help manage storage and ensure you only keep the videos that you need. Scheduled deletion can be applied to both new and existing on-demand videos, as well as recordings from completed live streams. This feature lets you set the exact date and time at which a video will be deleted. These dates can be applied in the Cloudflare dashboard or via the Cloudflare API.</p>
    <div>
      <h3>Cloudflare dashboard</h3>
      <a href="#cloudflare-dashboard">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3D1lNclKMneZTtlGbY2rO9/b917c24ab6e96bc00f78e0f0c9dbde77/Screenshot-2023-08-11-at-12.49.57.png" />
            
            </figure><ol><li><p>From the Cloudflare dashboard, select <b>Videos</b> under <b>Stream</b></p></li><li><p>Select a video</p></li><li><p>Select <b>Automatically Delete Video</b></p></li><li><p>Specify a desired date and time to delete the video</p></li><li><p>Click <b>Submit</b> to save the changes</p></li></ol>
    <div>
      <h3>Cloudflare API</h3>
      <a href="#cloudflare-api">
        
      </a>
    </div>
    <p>The Stream API can also be used to set the scheduled deletion property on new or existing videos. In this example, we’ll create a direct creator upload that will be deleted on December 31, 2023:</p>
            <pre><code>curl -X POST \
-H 'Authorization: Bearer &lt;BEARER_TOKEN&gt;' \
-d '{ "maxDurationSeconds": 10, "scheduledDeletion": "2023-12-31T12:34:56Z" }' \
https://api.cloudflare.com/client/v4/accounts/&lt;ACCOUNT_ID&gt;/stream/direct_upload </code></pre>
            <p>For more information on direct creator uploads and how to set scheduled deletion in our API, refer to <a href="https://developers.cloudflare.com/api/">the documentation</a>.</p>
    <div>
      <h2>Getting started with automated deletion for Live Input recordings</h2>
      <a href="#getting-started-with-automated-deletion-for-live-input-recordings">
        
      </a>
    </div>
    <p>We love how recordings from live streams allow those who may have missed the stream to catch up, but these recordings aren’t always needed forever. Scheduled recording deletion is a policy that can be configured for new or existing live inputs. Once configured, the recordings of all future streams on that input will have a scheduled deletion date calculated when the recording is available. Setting this retention policy can be done from the Cloudflare dashboard or via API operations to create or edit Live Inputs:</p>
    <div>
      <h3>Cloudflare Dashboard</h3>
      <a href="#cloudflare-dashboard">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2b60PHplltSmxKQQQ3LERW/431a214e7daa2431a198e7f138f26d6c/image3-3.png" />
            
            </figure><ol><li><p>From the Cloudflare dashboard, select <b>Live Inputs</b> under <b>Stream</b></p></li><li><p>Select <b>Create Live Input</b> or an existing live input</p></li><li><p>Select <b>Automatically Delete Recordings</b></p></li><li><p>Specify a number of days after which new recordings should be deleted</p></li><li><p>Click <b>Submit</b> to save the rule or create the new live input</p></li></ol>
    <div>
      <h3>Cloudflare API</h3>
      <a href="#cloudflare-api">
        
      </a>
    </div>
    <p>The Stream API makes it easy to add a deletion policy to new or existing inputs. Here is an example API request to create a live input with recordings that will expire after 30 days:</p>
            <pre><code>curl -X POST \
-H 'Authorization: Bearer &lt;BEARER_TOKEN&gt;' \
-H 'Content-Type: application/json' \
-d '{ "recording": {"mode": "automatic"}, "deleteRecordingAfterDays": 30 }' \
https://api.cloudflare.com/client/v4/accounts/&lt;ACCOUNT_ID&gt;/stream/live_inputs</code></pre>
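            <p>An existing live input can be given the same policy with an update request. A sketch, assuming a live input UID in the illustrative placeholder <code>&lt;LIVE_INPUT_UID&gt;</code> (an update may replace the input's configuration, so include any fields you want to keep, such as the recording mode):</p>
            <pre><code>curl -X PUT \
-H 'Authorization: Bearer &lt;BEARER_TOKEN&gt;' \
-H 'Content-Type: application/json' \
-d '{ "recording": {"mode": "automatic"}, "deleteRecordingAfterDays": 45 }' \
https://api.cloudflare.com/client/v4/accounts/&lt;ACCOUNT_ID&gt;/stream/live_inputs/&lt;LIVE_INPUT_UID&gt;</code></pre>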
            <p>For more information on live inputs and how to configure deletion policies in our API, refer to <a href="https://developers.cloudflare.com/api/">the documentation</a>.</p>
    <div>
      <h2>Try out scheduled deletion today</h2>
      <a href="#try-out-scheduled-deletion-today">
        
      </a>
    </div>
    <p>Scheduled deletion is now available to all Cloudflare Stream customers. Try it out now and join our <a href="https://discord.gg/cloudflaredev">Discord community</a> to let us know what you think! To learn more, check out our <a href="https://developers.cloudflare.com/stream/">developer docs</a>. Stay tuned for more exciting Cloudflare Stream updates in the future.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Stream]]></category>
            <category><![CDATA[Video]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Internship Experience]]></category>
            <guid isPermaLink="false">64WSD7SbRipiXRnZ0xoHFt</guid>
            <dc:creator>Austin Christiansen</dc:creator>
            <dc:creator>Taylor Smith</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Low-Latency HLS Support for Cloudflare Stream]]></title>
            <link>https://blog.cloudflare.com/low-latency-hls-support-for-cloudflare-stream/</link>
            <pubDate>Mon, 19 Jun 2023 13:00:57 GMT</pubDate>
            <description><![CDATA[ Broadcast live to websites and applications with less than 10 second latency with Low-Latency HTTP Live Streaming (LL-HLS), now in beta with Cloudflare Stream ]]></description>
            <content:encoded><![CDATA[ <p>Stream Live lets users easily scale their live streaming apps and websites to millions of creators and concurrent viewers without having to worry about bandwidth costs or purchasing hardware for real-time encoding at scale. Users can focus on the content rather than the infrastructure — Stream Live takes care of the codecs, protocols, and bitrates automatically. When we launched Stream Live last year, we focused on bringing high-quality, feature-rich streaming to websites and applications with HTTP Live Streaming (HLS).</p><p>Today, we're excited to introduce support for <a href="https://www.cloudflare.com/developer-platform/solutions/live-streaming/"><i>Low-Latency</i> HTTP Live Streaming</a> (LL-HLS) in a closed beta, offering you an even faster streaming experience. LL-HLS will reduce the latency a viewer may experience on their player from highs of around 30 seconds to less than 10 seconds in many cases. Lower latency brings creators even closer to their viewers, empowering customers to build more interactive features like Q&amp;A or chat and enabling the use of live streaming in more time-sensitive applications like sports, gaming, and live events.</p>
    <div>
      <h3>Broadcast with less than 10-second latency</h3>
      <a href="#broadcast-with-less-than-10-second-latency">
        
      </a>
    </div>
    <p>LL-HLS is an extension of HLS and allows us to reduce <i>glass-to-glass latency</i> — the time between something happening on the broadcast end and a user seeing it on their screen. This includes everything from broadcaster encoding to client-side buffering because we know the experience is driven by what a user sees, not when a byte is delivered into a buffer. Depending on encoder and player settings, broadcasters' content can be playing on viewers' screens in less than ten seconds.</p><p>Our addition of LL-HLS support builds on all the best parts of Stream, including simple, predictable pricing. You never have to pay for ingest (broadcasting to us), compute (encoding), or egress. It costs $5 per 1,000 minutes of video stored per month and $1 per 1,000 minutes of video viewed per month. This allows you to stream with peace of mind, knowing there are no surprise fees.</p><p>Other platforms tack on live recordings as a separate add-on feature, and those recordings only become available minutes or even hours after a live stream ends. With Cloudflare Stream, live segments are automatically recorded and immediately available for on-demand playback.</p><p>Stream also provides both a built-in web player and HLS manifests to use in a compatible player of your choosing. This enables you or your users to go live using the same protocols and tools that broadcasters big and small use to go live to YouTube or Twitch, but gives you full control over access and presentation of live streams.</p><p>We also provide <a href="https://www.cloudflare.com/learning/access-management/what-is-access-control/">access control</a> with signed URLs, allowing you to protect your content and share it only with certain users. This lets you restrict access so that only logged-in members can watch a particular video, or let users watch your video only for a limited time period. And of course, Stream is powered by Cloudflare's global network for fast delivery worldwide, with points of presence within 50ms of 95% of the Internet-connected population.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/r4I8NvvEb90cCrXRJtGLf/d3e66922eae5e29e7d5b4cd56f66c0ed/image2-8.png" />
            
            </figure><p>Left: Broadcasting to Stream Live using OBS. Right: Watching that same stream. Note the five-second difference in the NIST clock between the source and the playback.</p><p>Powering the LL-HLS experience involved making several improvements to our underlying infrastructure. One of the largest challenges we encountered was that our existing architecture involved a pipeline with multiple buffers as long as the keyframe interval. This meant Stream Live would introduce a delay of up to five times the keyframe interval. To resolve this, we simplified a portion of our pipeline — now, we work with individual frames rather than whole keyframe intervals, but without giving up the economies of scale our approach to encoding provides. This decoupling of keyframe interval and internal buffer duration lets us dramatically reduce latency in HLS, with a maximum of twice the keyframe interval.</p>
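            <p>To make the buffering arithmetic concrete, here is a quick sketch using the 2-second keyframe interval recommended for the beta:</p>
            <pre><code># Keyframe interval (GOP size) in seconds
KEYFRAME_INTERVAL=2
# Old pipeline: buffers of up to five keyframe intervals
OLD_MAX_DELAY=$((5 * KEYFRAME_INTERVAL))
# New pipeline: internal buffering of at most two keyframe intervals
NEW_MAX_DELAY=$((2 * KEYFRAME_INTERVAL))
echo "old: ${OLD_MAX_DELAY}s, new: ${NEW_MAX_DELAY}s"</code></pre>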
    <div>
      <h3>Getting started with the LL-HLS beta</h3>
      <a href="#getting-started-with-the-ll-hls-beta">
        
      </a>
    </div>
    <p>As we prepare to ship this new functionality, we're <a href="https://docs.google.com/forms/d/e/1FAIpQLSeZ2NBuAXC75aDJllhVA0itW0TZ1w4s48TvFm-eP7R1h9Hc9g/viewform?usp=sf_link">looking for beta testers</a> to help us test non-production workloads. To participate in the beta, your application should be configured with these settings:</p><ul><li><p>H.264 video codec</p></li><li><p>Constant bitrate</p></li><li><p>Keyframe interval (GOP size) of 2s</p></li><li><p>No B Frames</p></li><li><p>Using the Stream built-in player</p></li></ul><p>Getting started with Stream Live only takes a few minutes. Create a Live Input in the Cloudflare dashboard, then Stream will automatically provide RTMPS and SRT endpoints to broadcast your feed to us as well as an HTML embed for our built-in player and the HLS manifest for a custom player.</p>
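            <p>For testing outside OBS, an <code>ffmpeg</code> push matching these settings might look like the sketch below. The RTMPS URL and stream key come from your Live Input; the input file, frame rate, and bitrate here are illustrative assumptions (at an assumed 30 fps, a GOP of 60 frames gives the 2-second keyframe interval, and <code>-bf 0</code> disables B-frames):</p>
            <pre><code># Illustrative sketch: push a local file to a Stream Live input over RTMPS
ffmpeg -re -i input.mp4 \
  -c:v libx264 -preset veryfast -bf 0 \
  -g 60 -keyint_min 60 \
  -b:v 3000k -minrate 3000k -maxrate 3000k -bufsize 6000k \
  -c:a aac -b:a 128k \
  -f flv 'rtmps://live.cloudflare.com:443/live/&lt;STREAM_KEY&gt;'</code></pre>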
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5luGmhp7Otff4f8T8Bjwoh/3952e7bcb4998a416ad5e8c4e49b189c/image4-6.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/43x3NlJtNgaUAeJnmj3RD7/ee6e9533024c16ecf775813a56d06a2b/image3-7.png" />
            
            </figure><p>This connection information can be added easily to a broadcast application like OBS to start streaming immediately:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/78aLkxTHD6JIknv5Cm8yVw/446ce2661c935560d3e68052b26d284f/image1-9.png" />
            
            </figure><p>Customers in the LL-HLS beta will need to make a minor adjustment to the built-in player embed code, but there are no changes to Live Input configuration, dashboard interface, API, or existing functionality.</p>
    <div>
      <h3>Sign up today</h3>
      <a href="#sign-up-today">
        
      </a>
    </div>
    <p>LL-HLS is Stream Live’s latest tool to bring your creators and audiences together. After the beta period, this feature will be generally available to all new and existing Stream subscriptions with no pricing changes or contract requirements — all part of building the fastest, simplest serverless live streaming platform. <a href="https://docs.google.com/forms/d/e/1FAIpQLSeZ2NBuAXC75aDJllhVA0itW0TZ1w4s48TvFm-eP7R1h9Hc9g/viewform?usp=sf_link">Join our beta</a> to start test-driving Low-Latency HLS!</p>
     ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Live Streaming]]></category>
            <category><![CDATA[Video]]></category>
            <guid isPermaLink="false">1AGHbDsMyWLMDqABUpgMo4</guid>
            <dc:creator>Taylor Smith</dc:creator>
        </item>
    </channel>
</rss>