
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 02:37:07 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Cloudflare Calls: millions of cascading trees all the way down]]></title>
            <link>https://blog.cloudflare.com/cloudflare-calls-anycast-webrtc/</link>
            <pubDate>Thu, 04 Apr 2024 13:00:07 GMT</pubDate>
            <description><![CDATA[ Cloudflare Calls is a serverless SFU and TURN service running at Cloudflare’s edge. It’s now in open beta and costs $0.05 per real-time GB. It’s 100% anycast WebRTC. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Following its initial <a href="/announcing-cloudflare-calls">announcement</a> in September 2022, Cloudflare Calls is now in open beta and available in your <a href="https://dash.cloudflare.com/?to=/:account/calls">Cloudflare Dashboard</a>. Cloudflare Calls lets developers build real-time audio/video apps using <a href="https://webrtc.org/">WebRTC</a>, and it abstracts away the complexity by turning the Cloudflare network into a singular SFU. In this post, we dig into how we make this possible.</p>
    <div>
      <h2>WebRTC growing pains</h2>
      <a href="#webrtc-growing-pains">
        
      </a>
    </div>
    <p>WebRTC is the <a href="https://caniuse.com/webtransport">only</a> way to send UDP traffic out of a web browser – everything else uses TCP.</p><p>As a developer, you need a UDP-based transport layer for applications demanding low latency and real-time feedback, such as audio/video conferencing and interactive gaming. This is because unlike WebSocket and other TCP-based solutions, UDP is not subject to head-of-line blocking, <a href="/the-quicening">a</a> <a href="/a-primer-on-proxies">frequent</a> <a href="/stream-now-supports-srt-as-a-drop-in-replacement-for-rtmp">topic</a> on the Cloudflare Blog.</p><p>When building a new video conferencing app, you typically start with a peer-to-peer web application using WebRTC, where clients exchange data directly. This approach is efficient for small-scale demos, but scalability issues arise as the number of participants increases. This is because the total amount of data exchanged grows quadratically with the number of participants: each client needs to send a copy of its data to each of the n-1 other clients.</p><p>Selective Forwarding Units (SFUs) play a pivotal role in scaling WebRTC applications. An SFU functions by receiving multiple media or data flows from participants and deciding which streams should be forwarded to other participants, thus acting as a media stream routing hub. This mechanism significantly reduces bandwidth requirements and improves scalability by managing stream distribution based on network conditions and participant needs. Even though <a href="https://arstechnica.com/information-technology/2012/05/skype-replaces-p2p-supernodes-with-linux-boxes-hosted-by-microsoft/">it hasn’t always been this way</a> since video calling on computers first became popular, SFUs are now usually found in the cloud, rather than on clients’ home computers, because of the superior connectivity offered in a data center.</p>
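To make the scaling difference concrete, here is a small sketch of our own (not part of Calls) comparing the number of outbound media streams in a full-mesh topology versus an SFU topology:

```javascript
// In a full mesh, every client uploads a copy of its media to each of the
// n-1 other participants, so the total number of upstream streams grows
// quadratically. With an SFU, each client uploads exactly once; the SFU
// handles fan-out to the other participants.
function meshUpstreamStreams(participants) {
  return participants * (participants - 1);
}

function sfuUpstreamStreams(participants) {
  return participants; // one upload per client, regardless of room size
}

// A 4-person call is manageable either way, but at 20 participants a mesh
// needs 380 upstream streams while an SFU still needs only 20.
console.log(meshUpstreamStreams(4), sfuUpstreamStreams(4));   // 12 4
console.log(meshUpstreamStreams(20), sfuUpstreamStreams(20)); // 380 20
```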
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1eHksyrQ7iybPx8c9iTqOq/f795cf163c06ea3cd89a74f82fc50f7d/P2P-vs.-SFU.png" />
            
            </figure><p>A modern audio/video application thus quickly becomes complicated with the addition of this server side element. Since all clients connect to this central SFU server, there are numerous things to consider when you’re architecting and scaling a real-time application:</p><ul><li><p>How close is the SFU server location(s) to the end user clients, how is a client assigned to a server?</p></li><li><p>Where is the SFU hosted, and if it’s hosted in the cloud, what are the egress costs from VMs?</p></li><li><p>How many participants can fit in a “room”? Are all participants sending and receiving data? With cameras on? Audio only?</p></li><li><p>Some SFUs require the use of custom SDKs. Which platforms do these run on and are they compatible with the application you’re trying to build?</p></li><li><p>Monitoring/reliability/other issues that come with running infrastructure</p></li></ul><p>Some of these concerns, and the complexity of WebRTC infrastructure in general, has made the community look in <a href="https://datatracker.ietf.org/group/moq/about/">different</a> directions. However, it is clear that in 2024, WebRTC is alive and well with plenty of new and old uses. AI startups build characters that converse in real time, cars leverage WebRTC to stream live footage of their cameras to smartphones, and video conferencing tools are going strong.</p><p>WebRTC has been interesting to us for a while. Cloudflare Stream implemented <a href="/webrtc-whip-whep-cloudflare-stream">WHIP and WHEP</a> WebRTC video streaming protocols in 2022, which remain the lowest latency way to broadcast video. OBS Studio <a href="https://github.com/obsproject/obs-studio/commit/851a8c216e14617fb523951839f3bdb240e85141">implemented</a> WHIP broadcasting support as have a variety of <a href="https://softvelum.com/nimble/webrtc/">software</a> and <a href="https://www.ospreyvideo.com/talon-encoders">hardware</a> vendors alongside Cloudflare. 
In late 2022, we launched <a href="/announcing-cloudflare-calls">Cloudflare Calls</a> in closed beta. When we blogged about it back then, we were very impressed with how WebRTC fared, and spoke to many customers about their pain points as well as creative ideas the existing browser APIs can foster. We also saw other WebRTC-based apps like <a href="https://www.nytimes.com/2021/02/15/business/clubhouse.html">Clubhouse</a> rise in popularity and <a href="https://blog.x.com/en_us/topics/product/2021/spaces-is-here">Twitter Spaces</a> play a role in popular culture. Today, we see real-time applications of a different sort. Many AI projects <a href="https://blog.character.ai/new-feature-announcement-character-group-chat/">have impressive demos</a> with voice/video interactions. All of these apps are built with the same WebRTC APIs and system architectures.</p><p>We are confident that Cloudflare Calls is a new kind of WebRTC infrastructure you should try. When we set out to build Cloudflare Calls, we had a few ideas that we weren’t sure would work, but were worth trying:</p><ul><li><p>Build every WebRTC component on Anycast with a single IP address for DTLS, ICE, STUN, SRTP, SCTP, etc.</p></li><li><p>Don’t force an SDK – WebRTC APIs by themselves are enough, and allow for the most novel uses to shine, because best developers always find ways to hit the limits of SDKs.</p></li><li><p>Deploy in all <a href="https://www.cloudflare.com/network">310+ cities</a> Cloudflare operates in – use every Cloudflare server, not just a subset</p></li><li><p>Exchange <a href="https://developers.cloudflare.com/calls/https-api/">offer and answer over HTTP</a> between Cloudflare and the WebRTC client. This way there is only a single PeerConnection to manage.</p></li></ul><p>Now we know this is all possible, because we made it happen, and we think it’s the best experience a developer can get with pure WebRTC.</p>
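As a sketch of that last idea, exchanging the offer and answer over HTTP means signaling reduces to an ordinary request/response pair. The helper below only constructs the request; the endpoint path and JSON body shape are illustrative assumptions of ours, not the documented Calls API:

```javascript
// Build an HTTP request carrying a WebRTC SDP offer to a signaling endpoint,
// expecting an SDP answer in the response. In a browser, `offerSdp` would
// come from RTCPeerConnection.createOffer(); here it is just a string.
// NOTE: the URL and JSON shape below are hypothetical placeholders.
function buildOfferRequest(appId, offerSdp) {
  return {
    url: `https://example.com/apps/${appId}/sessions/new`, // placeholder
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        sessionDescription: { type: "offer", sdp: offerSdp },
      }),
    },
  };
}

const req = buildOfferRequest("my-app", "v=0\r\n...");
// In the browser: const answer = await fetch(req.url, req.options);
```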
    <div>
      <h2>Is Cloudflare Calls a real SFU?</h2>
      <a href="#is-cloudflare-calls-a-real-sfu">
        
      </a>
    </div>
    <p>Cloudflare is in the business of having computers in numerous places. Historically, our core competency was operating a caching HTTP reverse proxy, and we are <a href="/network-performance-update-security-week-2024">very good</a> at this. With Cloudflare Calls, we asked ourselves “how can we build a large distributed system that brings together our global network to form one giant <i>stateful</i> system that feels like a single machine?”</p><p>When using Calls, every PeerConnection automatically connects to the closest Cloudflare data center instead of a single server. Rather than connecting every client that needs to communicate with each other to a single server, anycast spreads out connections as much as possible to minimize the last mile latency between your client and Cloudflare introduced by your ISP.</p><p>It’s good to minimize last mile latency because after the data enters Cloudflare’s control, the underlying media can be managed carefully and routed through the Cloudflare <a href="/250-cities-is-just-the-start">backbone</a>. This is crucial for WebRTC applications where millisecond delays can significantly impact user experience. To give you a sense of the latency between Cloudflare’s data centers and end users, about 95% of the Internet-connected population is within 50ms of a Cloudflare data center. As I write this, I am about 20ms away, but in the past, I have been lucky enough to be connected to a <i>great</i> home Wi-Fi network less than 1ms away in Manhattan. “But you are just one user!” you might be thinking, so here is a chart from <a href="https://radar.cloudflare.com/quality/">Cloudflare Radar</a> showing recent global latency measurements:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5r4NGPsXDGl4e0FdhghTzC/533446ea771a5a0d0436c7646282fead/pasted-image-0-1.png" />
            
            </figure><p>This setup creates more opportunities for lost packets to be recovered through retransmissions closer to users, and more opportunities to adjust bandwidth quickly.</p>
    <div>
      <h2>Eliminating SFU region selection</h2>
      <a href="#eliminating-sfu-region-selection">
        
      </a>
    </div>
    <p>A traditional challenge in WebRTC infrastructure involves the manual selection of Selective Forwarding Units (SFUs) based on geographic location to minimize latency. Some systems solve this problem by selecting a location for the SFU after the first user joins the “room”. This makes routing inefficient when the rest of the participants in the conversation are clustered elsewhere. The anycast architecture of Calls eliminates this issue. When a client initiates a connection, <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/">BGP</a> dynamically determines the closest data center. Each selected server becomes responsible only for the PeerConnections of the clients closest to it.</p><p>One might see that this is actually a simpler way of managing servers, as there is no need to maintain a layer of WebRTC load balancing for traffic or CPU capacity between servers. However, anycast has its own challenges, and we couldn’t take a laissez-faire approach.</p>
    <div>
      <h2>Steps to establishing a PeerConnection</h2>
      <a href="#steps-to-establishing-a-peerconnection">
        
      </a>
    </div>
    <p>One of the challenging parts of assigning a server to a client PeerConnection is supporting dual-stack networking for backwards compatibility with clients that only support the old version of the Internet Protocol, IPv4.</p><p>Cloudflare Calls uses a single IP address per protocol, and our L4 <a href="/unimog-cloudflares-edge-load-balancer">load balancer</a> directs packets to a single server per client by hashing the 4-tuple {client IP, client port, destination IP, destination port}. This means that a dual-stack client’s <a href="https://webrtcforthecurious.com/docs/03-connecting/#connectivity-checks">ICE connectivity check</a> packets arrive at two different servers: one for IPv4 and one for IPv6.</p><p>ICE is not the only protocol used for WebRTC; STUN and TURN are also used for connectivity establishment. The actual media bits are encrypted using DTLS, which carries most of the data during a session.</p><p>DTLS packets don’t have any identifiers in them that would indicate they belong to a specific connection (unlike QUIC’s <a href="https://datatracker.ietf.org/doc/html/rfc9000">connection ID</a> field), so every server should be able to handle DTLS packets and get the necessary certificates to be able to decrypt them for processing. DTLS encryption is negotiated at the <a href="https://webrtcforthecurious.com/docs/02-signaling/#what-is-the-session-description-protocol-sdp">SDP layer</a> using the HTTPS API.</p><p>The HTTPS API for Calls also lands on a different server than DTLS and ICE connectivity checks. Since DTLS packets need information from the SDP exchanged using the HTTPS API, and ICE connectivity checks depend on the HTTPS API for userFragment and password fields in the connectivity check packets, it would be very useful for all of these to be available in one server. Yet in our setup, they’re not.</p>
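A toy model of the 4-tuple hashing (our own simplified illustration; the real load balancer is Unimog and uses a different hash) shows why a dual-stack client's IPv4 and IPv6 packets can land on different servers:

```javascript
// Simplified stand-in for an L4 load balancer: hash the 4-tuple
// {client IP, client port, destination IP, destination port} and pick a
// server. The hash below is a toy (djb2-style), not what Unimog uses.
function pickServer(clientIp, clientPort, destIp, destPort, serverCount) {
  const key = `${clientIp}|${clientPort}|${destIp}|${destPort}`;
  let h = 5381;
  for (const ch of key) h = ((h * 33) ^ ch.charCodeAt(0)) >>> 0;
  return h % serverCount;
}

// The same client checking connectivity over IPv4 and IPv6 presents two
// different 4-tuples, so the two packets may hash to two different servers.
const v4Server = pickServer("198.51.100.7", 50000, "192.0.2.1", 3478, 100);
const v6Server = pickServer("2001:db8::7", 50000, "2001:db8:f::1", 3478, 100);
console.log(v4Server, v6Server);
```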
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/44z2v70arwk48CPkKgJXKq/bd7611bc51a989c0662665e374ed9a50/Signaling.png" />
            
            </figure><p>Fippo and Gustavo of WebRTCHacks <a href="https://webrtchacks.com/how-cloudflare-glares-at-webrtc-with-whip-and-whep/">complained (graciously, it should be noted)</a> about slow replies to ICE connectivity checks in their great article as they were digging into our WHIP implementation right around our announcement in 2022:</p><blockquote><p>Looking at the Wireshark dumps we see a surprisingly large amount of time pass between the first STUN request and the first STUN response – it was 1.8 seconds in the screenshot below.</p><p>In other tests, it was shorter, but still 600ms long.</p><p>After that, the DTLS packets do not get an immediate response, requiring multiple attempts. This ultimately leads to a call setup time of almost three seconds – way above the global average of 800ms <a href="https://medium.com/@fippo/how-long-does-the-dtls-handshake-take-86718dd966bf">Fippo has measured previously</a> (for the complete handshake, 200ms for the DTLS handshake). For Cloudflare with their extensive network, we expected this to be way below that average.</p></blockquote><p>What Gustavo and Fippo were observing was the cost of different parts of the WebRTC negotiation landing on different servers. Since Cloudflare Calls unbundles the WebRTC protocol to make the entire network act like a single computer, at this critical moment, we need to form consensus across the network. We form consensus by configuring every server to handle any incoming PeerConnection just in time. When a packet arrives, if the server doesn’t know about it, it quickly learns about the negotiated parameters from another server, such as the ufrag and the DTLS fingerprint from the SDP, and responds with the appropriate response.</p>
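The just-in-time learning step can be sketched as follows. This is our own simplified model: an in-memory Map stands in for whatever distributed store Cloudflare actually uses, and the function names are ours:

```javascript
// Each server keeps a local cache of per-connection parameters (the ICE
// ufrag/password and the DTLS fingerprint from the SDP exchange). When a
// packet arrives for an unknown connection, the server fetches the
// negotiated parameters from a shared store before responding.
const sharedStore = new Map(); // ufrag -> negotiated params (toy stand-in)

// Called once by whichever server handled the HTTPS API offer/answer.
function registerSession(ufrag, params) {
  sharedStore.set(ufrag, params);
}

// Called by any server that receives a STUN/DTLS packet for `ufrag`.
function handlePacket(localCache, ufrag) {
  if (!localCache.has(ufrag)) {
    const params = sharedStore.get(ufrag); // learn just in time
    if (!params) return { action: "drop" }; // unknown connection
    localCache.set(ufrag, params);
  }
  return { action: "respond", params: localCache.get(ufrag) };
}

registerSession("abcd", { icePwd: "secret", dtlsFingerprint: "AA:BB" });
const cacheOnSomeServer = new Map(); // this server never saw the HTTPS API call
console.log(handlePacket(cacheOnSomeServer, "abcd").action); // "respond"
```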
    <div>
      <h2>Getting faster</h2>
      <a href="#getting-faster">
        
      </a>
    </div>
    <p>Even though we've sped up the process of forming consensus across the Cloudflare network, any delays incurred can still have weird side effects. For example, up until a few months ago, delays of a few hundred milliseconds caused slow connections in Chrome.</p><p>A connectivity check packet delayed by a few hundred milliseconds signals to Chrome that this is a high latency network, even though every other STUN message after that was replied to in less than 5-10ms. Chrome thus delays sending a USE-CANDIDATE attribute in the responses for a few seconds, degrading the user experience.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1sRoG86lCdvzcKJKEQof4D/8fa91aa893417bd1876d71fef8b52db7/image4-8.png" />
            
            </figure><p>Fortunately, Chrome also <a href="https://bugs.chromium.org/p/webrtc/issues/detail?id=3661">sends</a> DTLS ClientHello before USE-CANDIDATE (behavior we’ve seen only on Chrome), so to help speed up Chrome, Calls uses DTLS packets in place of STUN packets with USE-CANDIDATE attributes.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Xhno0lLZDftcAHXLz9Btj/c5b7fad53f9bd3005b5f759067b4847c/image1-5.png" />
            
            </figure><p>After solving this issue with Chrome, PeerConnections globally now take about 100-250ms to get connected. This includes all consensus management, STUN packets, and a complete DTLS handshake.</p>
    <div>
      <h2>Sessions and Tracks are the building blocks of Cloudflare’s SFU, not rooms</h2>
      <a href="#sessions-and-tracks-are-the-building-blocks-of-cloudflares-sfu-not-rooms">
        
      </a>
    </div>
    <p>Once a PeerConnection is established to Cloudflare, we call this a Session. Many media Tracks or DataChannels can be published using a single Session, which returns a unique ID for each. These can then be subscribed to from any other PeerConnection anywhere in the world using that unique ID. Tracks can be published or subscribed to at any time during the lifecycle of the PeerConnection.</p><p>In the background, Cloudflare takes care of scaling through a fan-out architecture with cascading trees that are unique per track. This structure works by creating a hierarchy of nodes where the root node distributes the stream to intermediate nodes, which then fan out to end-users. This significantly reduces the bandwidth required at the source and ensures scalability by distributing the load across the network. This simple but powerful architecture allows developers to build anything from 1:1 video calls to large 1:many or many:many broadcasting scenarios with Calls.</p>
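The Session/Track model can be illustrated with a small in-memory sketch of our own (a simplification, not the Calls API): publishing a track returns a unique ID, and any other session anywhere can subscribe using that ID alone.

```javascript
// Minimal model of the Sessions-and-Tracks building blocks: a registry maps
// globally unique track IDs to their source, so any session can subscribe
// without a shared "room" object. IDs here are simple counters for clarity.
let nextId = 0;
const trackRegistry = new Map();

function publishTrack(sessionId, kind) {
  const trackId = `track-${nextId++}`;
  trackRegistry.set(trackId, { sessionId, kind });
  return trackId; // hand this ID to whoever should subscribe
}

function subscribeTrack(trackId) {
  const source = trackRegistry.get(trackId);
  if (!source) throw new Error(`unknown track: ${trackId}`);
  return source; // in Calls, media would now flow via the per-track fan-out tree
}

// A publisher in one session, a subscriber in a completely different one:
const id = publishTrack("session-alice", "video");
console.log(subscribeTrack(id).kind); // "video"
```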
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2yHByC3CWSsXH4ljdDsFqH/6f1d625c9e6b4e921b8fdb54a30cf843/Fan-out-architecutre.png" />
            
            </figure><p>There is no “room” concept in Cloudflare Calls. Each client can add as many tracks into a PeerConnection as they’d like. The limit is the bandwidth available between Cloudflare and the client, and in practice the client side is the bottleneck. The signaling or the concept of a “room” is left to the application developer, who can choose to pull as many tracks as they’d like from the tracks they have pushed elsewhere into a PeerConnection. This allows developers to move participants into breakout rooms and then back into a plenary room, and then 1:1 rooms while keeping the same PeerConnection and MediaTracks active.</p><p>Cloudflare offers an unopinionated approach to bandwidth management, allowing for greater control in customizing logic to suit your business needs. There is no active bandwidth management or restriction on the number of tracks. The <a href="https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection/getStats">WebRTC Stats API</a> provides a standardized way to access data on packet loss and possible congestion, enabling you to incorporate client-side logic based on this information. For instance, if poor Wi-Fi connectivity leads to degraded service, your front-end could inform the user through a notice and automatically reduce the number of video tracks for that client.</p>
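As an example of such client-side bandwidth logic, the decision function below works on numbers you could sample from RTCPeerConnection.getStats(); the 5% loss threshold is an arbitrary choice of ours for illustration:

```javascript
// Decide whether to shed video tracks based on observed packet loss.
// `packetsLost` and `packetsReceived` correspond to fields reported by the
// WebRTC Stats API for inbound RTP streams; the threshold is illustrative.
function shouldReduceVideoTracks(packetsLost, packetsReceived, threshold = 0.05) {
  const total = packetsLost + packetsReceived;
  if (total === 0) return false; // no data yet, don't react
  return packetsLost / total > threshold;
}

// 80 lost out of 1000 total is an 8% loss rate: time to shed video tracks.
console.log(shouldReduceVideoTracks(80, 920)); // true
console.log(shouldReduceVideoTracks(10, 990)); // false
```

In the browser, you would feed this from periodic `getStats()` samples and, when it returns true, show a notice and unsubscribe from some video tracks.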
    <div>
      <h2>“NACK shield” at the edge</h2>
      <a href="#nack-shield-at-the-edge">
        
      </a>
    </div>
    <p>The Internet can’t guarantee timely and orderly delivery of packets, leading to the necessity of retransmission mechanisms, particularly in protocols like TCP. This ensures data eventually reaches its destination, despite possible delays. Real-time systems, however, need special consideration of these delays. A packet that is delayed past its deadline for rendering on the screen is worthless, but a packet that is lost can be recovered if it can be retransmitted within a very short period of time, on the order of milliseconds. This is where NACKs come into play.</p><p>A WebRTC client receiving data constantly checks for packet loss. When one or more packets don’t arrive at the expected time, or a sequence number discontinuity is seen in the receiving buffer, a special NACK packet is sent back to the source to ask for a packet retransmission.</p><p>In a peer-to-peer topology, the source of the data has to handle retransmissions itself for every participant that sends a NACK. When an SFU is used, the SFU could send NACKs back to the source, or keep a complex buffer for each client to handle retransmissions.</p><p>This gets more complicated with Cloudflare Calls, since both the publisher and the subscriber connect to Cloudflare, likely to different servers and probably in different locations. In addition, there may be other Cloudflare data centers in the middle, either through <a href="/argo-v2">Argo</a>, or just as part of scaling to many subscribers on the same track.</p><p>It is common for SFUs to backpropagate NACK packets to the source, losing valuable time that could be used to recover packets. Calls goes beyond this and can handle NACK packets in the location closest to the user, which decreases overall latency. The lower latency gives the packet a better chance of being recovered in time, compared to a centralized SFU or no NACK handling at all.</p><p>Since there may be a number of Cloudflare data centers between clients, packet loss within the Cloudflare network is also possible. We handle this by generating NACK packets within the network: at each hop the packets take, the receiving end can generate its own NACKs. Lost packets are then either recovered from a nearby buffer or the NACK is backpropagated toward the publisher.</p>
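The receiver-side check that triggers a NACK can be sketched like this. It is a simplified model of ours: real implementations also handle reordering windows and pack the NACKs into RTCP feedback messages:

```javascript
// Given the last RTP sequence number seen and a newly arrived one, return
// the sequence numbers that were skipped and should be NACKed.
// RTP sequence numbers are 16-bit and wrap around at 65536.
function missingSeqNums(lastSeq, newSeq) {
  const missing = [];
  let seq = (lastSeq + 1) % 65536;
  // Walk forward from the last seen packet to the new one; anything in
  // between did not arrive on time. Cap the walk to avoid huge bursts.
  for (let i = 0; seq !== newSeq && i < 1000; i++) {
    missing.push(seq);
    seq = (seq + 1) % 65536;
  }
  return missing;
}

console.log(missingSeqNums(10, 11));   // [] : in order, nothing to NACK
console.log(missingSeqNums(10, 14));   // [11, 12, 13]
console.log(missingSeqNums(65534, 1)); // [65535, 0] : wraparound handled
```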
    <div>
      <h2>Cloudflare Calls does TURN over Anycast too</h2>
      <a href="#cloudflare-calls-does-turn-over-anycast-too">
        
      </a>
    </div>
    <p>Separately from the SFU, Calls also offers a TURN service. TURN servers act as relay points for traffic between WebRTC clients like the browser and SFUs, particularly in scenarios where <a href="https://webrtcforthecurious.com/docs/03-connecting/#turn">direct communication is obstructed</a> by NATs or firewalls. TURN maintains an allocation of public IP addresses and ports for each session, ensuring connectivity even in restrictive network environments.</p><p>Cloudflare Calls’ TURN service supports a few ports to help with misbehaving middleboxes and firewalls:</p><ul><li><p>TURN-over-UDP over port 3478 (standard), and also port 53</p></li><li><p>TURN-over-TCP over ports 3478 and 80</p></li><li><p>TURN-over-TLS over ports 5349 and 443</p></li></ul><p>TURN works the same way as Calls: it is available over anycast and always connects to the closest data center.</p>
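In a browser, that port list maps onto an RTCPeerConnection iceServers configuration along these lines. The hostname and credentials below are placeholders of ours, not real endpoints; real credentials come from your TURN provisioning:

```javascript
// ICE server list covering the transports and ports described above.
// turn.example.com and the credentials are placeholders, not real endpoints.
const iceServers = [
  {
    urls: [
      "turn:turn.example.com:3478?transport=udp",  // TURN-over-UDP, standard port
      "turn:turn.example.com:53?transport=udp",    // UDP fallback on port 53
      "turn:turn.example.com:3478?transport=tcp",  // TURN-over-TCP
      "turn:turn.example.com:80?transport=tcp",    // TCP fallback on port 80
      "turns:turn.example.com:5349?transport=tcp", // TURN-over-TLS
      "turns:turn.example.com:443?transport=tcp",  // TLS fallback on port 443
    ],
    username: "placeholder-username",
    credential: "placeholder-credential",
  },
];
// In the browser: new RTCPeerConnection({ iceServers });
```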
    <div>
      <h2>Pricing and how to get started</h2>
      <a href="#pricing-and-how-to-get-started">
        
      </a>
    </div>
    <p>Cloudflare Calls is now in open beta and available in your <a href="https://dash.cloudflare.com/?to=/:account/calls">Cloudflare Dashboard</a>. Depending on your use case, you can set up an SFU application and/or a TURN service with only a few clicks.</p><p>To kick off its open beta phase, Calls is available at no cost for a limited time. Starting May 15, 2024, customers will receive the first terabyte each month for free, with any usage beyond that charged at $0.05 per real-time gigabyte. Beta customers will be provided at least 30 days to upgrade from the free beta to a paid subscription. Additionally, there are no charges for in-bound traffic to Cloudflare. For volume pricing, talk to your account manager.</p><p>Cloudflare Calls is ideal if you are building new WebRTC apps. If you have existing SFUs or TURN infrastructure, you may still consider using Calls alongside your existing infrastructure. Building a bridge to Calls from other places is not difficult as Cloudflare Calls supports standard WebRTC APIs and acts like just another WebRTC peer.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6aUAwCF6AWChLdjGL00wno/5cb404a2ebb199ea980fba92e0a57298/image6-2.png" />
            
            </figure><p>We understand that getting started with a new platform is difficult, so we’re also open sourcing our internal video conferencing app, Orange Meets. Orange Meets supports small and large conference calls by maintaining room state in Workers Durable Objects. It has screen sharing, client-side noise-canceling, and background blur. It is written with TypeScript and React and is <a href="https://github.com/cloudflare/orange">available on GitHub</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7k2EN8juPmF4nzC090wEiP/85261afe40dd5695165729525bf2824a/image5-3.png" />
            
            </figure>
    <div>
      <h2>We’re hiring</h2>
      <a href="#were-hiring">
        
      </a>
    </div>
    <p>We think the current state of Cloudflare Calls enables many use cases. Calls already supports publishing and subscribing to media tracks and DataChannels. Soon, it will support features like simulcasting.</p><p>But we’re just scratching the surface and there is so much more to build on top of this foundation.</p><p>If you are passionate about WebRTC (and <a href="https://datatracker.ietf.org/group/moq/about/">other</a> real-time protocols!!), the Media Platform team building the Calls product at Cloudflare is <a href="https://boards.greenhouse.io/cloudflare/jobs/5709759?gh_jid=5709759">hiring</a> and would love to talk to you.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Calls]]></category>
            <category><![CDATA[WebRTC]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">5PAERmEc4TrWEUDjtsd9dU</guid>
            <dc:creator>Renan Dincer</dc:creator>
            <dc:creator>Rachel Chen</dc:creator>
            <dc:creator>Felipe Astroza Araya</dc:creator>
            <dc:creator>Kevin Kipp</dc:creator>
            <dc:creator>Kazi Najib</dc:creator>
        </item>
        <item>
            <title><![CDATA[WebRTC live streaming to unlimited viewers, with sub-second latency]]></title>
            <link>https://blog.cloudflare.com/webrtc-whip-whep-cloudflare-stream/</link>
            <pubDate>Tue, 27 Sep 2022 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Stream now supports live streaming over WebRTC to unlimited concurrent viewers, using open standards WHIP and WHEP, with zero dependencies or client SDKs necessary. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Creators and broadcasters expect to be able to go live from anywhere, on any device. Viewers expect “live” to mean “real-time”. The protocols that power most live streams are unable to meet these growing expectations.</p><p>In talking to developers building live streaming into their apps and websites, we’ve heard near universal frustration with the limitations of existing live streaming technologies. Developers in 2022 rightly expect to be able to deliver low latency to viewers, broadcast reliably, and use web standards rather than old protocols that date back to the era of Flash.</p><p>Today, we’re excited to announce in open beta that Cloudflare Stream now supports live video streaming over WebRTC, with sub-second latency, to unlimited concurrent viewers. This is a new feature of Cloudflare Stream, and you can start using it right now in the Cloudflare Dashboard — read the <a href="https://developers.cloudflare.com/stream/webrtc-beta/">docs</a> to get started.</p><p>WebRTC with Cloudflare Stream leapfrogs existing tools and protocols, exclusively uses open standards with zero dependency on a specific SDK, and empowers any developer to build both low latency live streaming and playback into their website or app.</p>
    <div>
      <h3>The status quo of streaming live video is broken</h3>
      <a href="#the-status-quo-of-streaming-live-video-is-broken">
        
      </a>
    </div>
    <p>The status quo of streaming live video has high latency, depends on archaic protocols and is incompatible with the way developers build apps and websites. A reasonable person’s expectations of what the Internet should be able to deliver in 2022 are simply unmet by the dominant set of protocols carried over from past eras.</p><p><b>Viewers</b> increasingly expect “live” to mean “real-time”. People want to place bets on sports broadcasts in real-time, interact and ask questions to presenters in real-time, and never feel behind their friends at a live event.</p><p>In practice, the HLS and DASH standards used to deliver video have 10+ seconds of latency. LL-HLS and LL-DASH bring this down to closer to 5 seconds, but only as a hack on top of the existing protocol that delivers segments of video in individual HTTP requests. Sending mini video clips over TCP simply cannot deliver video in real-time. HLS and DASH are here to stay, but aren’t the future of real-time live video.</p><p><b>Creators and broadcasters</b> expect to be able to go live from anywhere, on any device.</p><p>In practice, people creating live content are stuck with a limited set of native apps, and can’t go live using RTMP from a web browser. Because it’s built on top of TCP, the RTMP broadcasting protocol struggles under even the slightest network disruption, making it a poor or often unworkable option when broadcasting from mobile networks. 
RTMP, originally built for use with Adobe Flash Player, was <a href="https://rtmp.veriskope.com/pdf/rtmp_specification_1.0.pdf">last updated in 2012</a>, and while Stream supports the <a href="/magic-hdmi-cable/">newer SRT protocol</a>, creators need an option that works natively on the web and can more easily be integrated in native apps.</p><p><b>Developers</b> expect to be able to build using standard APIs that are built into web browsers and native apps.</p><p>In practice, RTMP can’t be used from a web browser, and creating a native app that supports RTMP broadcasting typically requires diving into lower-level programming languages like C and Rust. Only those with expertise in both live video protocols and these languages have full access to the tools needed to create novel live streaming client applications.</p>
    <div>
      <h3>We’re solving this by using new open WebRTC standards: WHIP and WHEP</h3>
      <a href="#were-solving-this-by-using-new-open-webrtc-standards-whip-and-whep">
        
      </a>
    </div>
    <p>WebRTC is the real-time communications protocol, supported across all web browsers, that powers video calling services like Zoom and Google Meet. Since inception it’s been designed for real-time, ultra low-latency communications.</p><p>While WebRTC is well established, for most of its history it’s lacked standards for:</p><ul><li><p><b>Ingestion</b> — how broadcasters should <b><i>send</i></b> media content (akin to RTMP today)</p></li><li><p><b>Egress</b> — how viewers request and <b><i>receive</i></b> media content (akin to DASH or HLS today)</p></li></ul><p>As a result, developers have had to implement this on their own, and client applications on both sides are often tightly coupled to provider-specific implementations. Developers we talk to often express frustration, having sunk months of engineering work into building around a specific vendor’s SDK, unable to switch without a significant rewrite of their client apps.</p><p>At Cloudflare, our mission is broader — we’re helping to build a better Internet. Today we’re launching not just a new feature of Cloudflare Stream, but a vote of confidence in new WebRTC standards for both ingestion and egress. We think you should be able to start using Stream without feeling locked into an SDK or implementation specific to Cloudflare, and we’re committed to using open standards whenever possible.</p><p>For ingestion, <a href="https://www.ietf.org/archive/id/draft-ietf-wish-whip-03.html">WHIP</a> is an IETF draft on the Standards Track, with many applications already successfully using it in production. For delivery (egress), <a href="https://www.ietf.org/id/draft-murillo-whep-00.html">WHEP</a> is an IETF draft with broad agreement. Combined, they provide a standardized end-to-end way to broadcast one-to-many over WebRTC at scale.</p><p><b>Cloudflare Stream is the first cloud service to let you both broadcast using WHIP and playback using WHEP — no vendor-specific SDK needed.</b> Here’s how it works:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/19Hq5GYMLCWmxGoKQBiifO/0cd900cb0c07cfac25f14c2486a3cb77/image2-44.png" />
            
            </figure><p>Cloudflare Stream is already built on top of the Cloudflare developer platform, using Workers and Durable Objects running on Cloudflare’s global network, within 50ms of 95% of the world’s Internet-connected population.</p><p>Our WebRTC implementation extends this to relay WebRTC video through our network. Broadcasters stream video using WHIP to the point of presence closest to their location, which tells the Durable Object where the live stream can be found. Viewers request streaming video from the point of presence closest to them, which asks the Durable Object where to find the stream, and video is routed through Cloudflare’s network, all with sub-second latency.</p><p>Using Durable Objects, we achieve this with zero centralized state. And just like the rest of Cloudflare Stream, you never have to think about regions, both in terms of pricing and product development.</p><p>While existing ultra low-latency streaming providers charge significantly more to stream over WebRTC, because Stream runs on Cloudflare’s global network, we’re able to offer WebRTC streaming at the same price as delivering video over HLS or DASH. We don’t think you should be penalized with higher pricing when choosing which technology to rely on to stream live video. Once generally available, WebRTC streaming will cost $1 per 1000 minutes of video delivered, just like the rest of Stream.</p>
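<p>To make that coordination step concrete, here is a minimal sketch (plain JavaScript with hypothetical names; this is not Cloudflare’s actual Durable Object code) of the role the Durable Object plays: remember which point of presence a broadcast entered the network at, and tell each viewer’s point of presence where to relay the media from.</p>

```javascript
// Illustrative sketch only: one object per live stream records which
// point of presence (PoP) the broadcaster's WHIP session reached, and
// answers viewers' lookups. All names here are hypothetical.
class StreamLocator {
  constructor() {
    this.ingestPop = null; // PoP code where the broadcaster connected
  }

  // Called when a WHIP broadcast arrives at a PoP.
  registerBroadcast(popCode) {
    this.ingestPop = popCode;
  }

  // Called when a viewer's WHEP request arrives at (possibly another) PoP.
  // Returns where that PoP should pull the stream from.
  resolveForViewer(viewerPop) {
    if (this.ingestPop === null) throw new Error("stream is not live");
    // Same PoP as the broadcaster: serve locally. Otherwise, relay
    // across the network from the ingest PoP.
    return viewerPop === this.ingestPop
      ? { relay: false, from: viewerPop }
      : { relay: true, from: this.ingestPop };
  }
}

const locator = new StreamLocator();
locator.registerBroadcast("SJC"); // broadcaster reached San Jose
console.log(locator.resolveForViewer("LHR")); // relay from SJC to London
```

Because each stream’s state lives in exactly one such object, no centralized database is needed, which is the property the Durable Objects design gives Stream here.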
    <div>
      <h3>What does sub-second latency let you build?</h3>
      <a href="#what-does-sub-second-latency-let-you-build">
        
      </a>
    </div>
    <p>Ultra low latency unlocks interactivity within your website or app, removing the time delay between creators, in-person attendees, and those watching remotely.</p><p>Developers we talk to are building everything from live sports betting, to live auctions, to live viewer Q&amp;A and even real-time collaboration in video post-production. Even streams without in-app interactivity can benefit from real-time — no sports fan wants to get a text from their friend at the game that ruins the moment, before they’ve had a chance to watch the final play. Whether you’re bringing an existing app or have a new idea in mind, we’re excited to see what you build.</p>
    <div>
      <h3>If you can write JavaScript, you can let your users go live from their browser</h3>
      <a href="#if-you-can-write-javascript-you-can-let-your-users-go-live-from-their-browser">
        
      </a>
    </div>
    <p>While hobbyist and professional creators might take the time to download and learn how to use an application like <a href="https://obsproject.com/">OBS Studio</a>, most Internet users won’t get past the friction of installing new tools and copying RTMP keys from one tool to another. To empower more people to go live, they need to be able to broadcast from within your website or app, just by enabling access to the camera and microphone.</p><p>Cloudflare Stream with WebRTC lets you build live streaming into your app as a front-end developer, without any special knowledge of video protocols. And our approach, using the WHIP and WHEP open standards, means you can do this with zero dependencies, using 100% your own code that you control.</p>
    <div>
      <h3>Go live from a web browser with just a few lines of code</h3>
      <a href="#go-live-from-a-web-browser-with-just-a-few-lines-of-code">
        
      </a>
    </div>
    <p>You can go live right now, from your web browser, by creating a live input in the <a href="https://dash.cloudflare.com/?to=/:account/stream/inputs">Cloudflare Stream dashboard</a>, and pasting a URL into the example linked below.</p><p>Read the <a href="https://developers.cloudflare.com/stream/webrtc-beta/">docs</a> or <a href="https://workers.new/stream/webrtc">run the example code below in your browser using Stackblitz</a>.</p>
            <pre><code>&lt;video id="input-video" autoplay autoplay muted&gt;&lt;/video&gt;</code></pre>
            
            <pre><code>import WHIPClient from "./WHIPClient.js";

const url = "&lt;WEBRTC_URL_FROM_YOUR_LIVE_INPUT&gt;";
const videoElement = document.getElementById("input-video");
const client = new WHIPClient(url, videoElement);</code></pre>
            <p>This example uses an example WHIP client, written in just 100 lines of JavaScript, using APIs that are native to web browsers, with zero dependencies. But because WHIP is an open standard, you can use any WHIP client you choose. Support for WHIP is growing across the video streaming industry — it has recently been added to <a href="https://gstreamer.freedesktop.org/">GStreamer</a>, and one of the authors of the WHIP specification has written a <a href="https://github.com/medooze/whip-js">JavaScript client implementation</a>. We intend to support the full <a href="https://www.ietf.org/archive/id/draft-ietf-wish-whip-03.html">WHIP specification</a>, including support for <a href="https://www.rfc-editor.org/rfc/rfc8838">Trickle ICE</a> for fast NAT traversal.</p>
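<p>Under the hood, WHIP signaling is a single HTTP exchange: the client POSTs its SDP offer to the endpoint with a <code>Content-Type</code> of <code>application/sdp</code>, and a <code>201 Created</code> response carries the SDP answer, with a <code>Location</code> header identifying the session. Here is a sketch of that flow in browser JavaScript; the function names are ours, and error handling and Trickle ICE are omitted for brevity.</p>

```javascript
// Build the single HTTP request WHIP signaling consists of: the SDP
// offer goes in the body, typed as application/sdp.
function buildWhipRequest(endpoint, sdpOffer) {
  return {
    url: endpoint,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/sdp" },
      body: sdpOffer,
    },
  };
}

// Sketch of a publish: create a local offer, POST it to the WHIP
// endpoint, and apply the server's answer. `endpoint` is the WebRTC
// URL from your live input.
async function whipPublish(endpoint, peerConnection) {
  const offer = await peerConnection.createOffer();
  await peerConnection.setLocalDescription(offer);

  const { url, init } = buildWhipRequest(endpoint, offer.sdp);
  const response = await fetch(url, init);
  if (response.status !== 201) {
    throw new Error(`WHIP publish failed with status ${response.status}`);
  }

  // The 201 response body is the server's SDP answer; the Location
  // header names the session resource (DELETE it to stop broadcasting).
  const answerSdp = await response.text();
  await peerConnection.setRemoteDescription({ type: "answer", sdp: answerSdp });
  return response.headers.get("Location");
}
```

This is the whole reason no SDK is required: any client that can make one fetch call and drive an <code>RTCPeerConnection</code> can publish.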
    <div>
      <h3>Play a live stream in a browser, with sub-second latency, no SDK required</h3>
      <a href="#play-a-live-stream-in-a-browser-with-sub-second-latency-no-sdk-required">
        
      </a>
    </div>
    <p>Once you’ve started streaming, copy the playback URL from the live input you just created, and paste it into the example linked below.</p><p>Read the <a href="https://developers.cloudflare.com/stream/webrtc-beta/">docs</a> or <a href="https://workers.new/stream/webrtc">run the example code below in your browser using Stackblitz</a>.</p>
            <pre><code>&lt;video id="playback" controls autoplay muted&gt;&lt;/video&gt;</code></pre>
            
            <pre><code>import WHEPClient from './WHEPClient.js';
const url = "&lt;WEBRTC_PLAYBACK_URL_FROM_YOUR_LIVE_INPUT&gt;";
const videoElement = document.getElementById("playback");
const client = new WHEPClient(url, videoElement);</code></pre>
            <p>Just like the WHIP example before, this one uses an example WHEP client we’ve written that has zero dependencies. WHEP is an earlier IETF draft than WHIP, <a href="https://www.ietf.org/id/draft-murillo-whep-00.html">published in July of this year</a>, but adoption is moving quickly. People in the community have already written open-source client implementations in both <a href="https://github.com/medooze/whip-js/blob/main/whep.js">JavaScript</a> and <a href="https://github.com/meetecho/simple-whep-client">C</a>, with more to come.</p>
    <div>
      <h3>Start experimenting with real-time live video, in open beta today</h3>
      <a href="#start-experimenting-with-real-time-live-video-in-open-beta-today">
        
      </a>
    </div>
    <p>WebRTC streaming is in open beta today, ready for you to use as an integrated feature of <a href="https://www.cloudflare.com/products/cloudflare-stream/">Cloudflare Stream</a>. Once Generally Available, WebRTC streaming will be priced like the rest of Cloudflare Stream, based on minutes of video delivered and minutes stored.</p><p><a href="https://developers.cloudflare.com/stream/webrtc-beta/">Read the docs</a> to get started.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Stream]]></category>
            <category><![CDATA[Video]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">5PQXX1PxT5vsDahi24H7Cn</guid>
            <dc:creator>Kyle Boutette</dc:creator>
            <dc:creator>Kenny Luong</dc:creator>
            <dc:creator>Brendan Irvine-Broque</dc:creator>
            <dc:creator>Jacob Curtis</dc:creator>
            <dc:creator>Rachel Chen</dc:creator>
            <dc:creator>Felipe Astroza Araya</dc:creator>
            <dc:creator>Renan Dincer</dc:creator>
        </item>
        <item>
            <title><![CDATA[Add Watermarks to your Cloudflare Stream Video Uploads]]></title>
            <link>https://blog.cloudflare.com/add-watermarks-to-your-cloudflare-stream-video-uploads/</link>
            <pubDate>Fri, 11 Sep 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ The Watermark API lets you add a watermark to a video at the time of uploading. It consists of two new features to the Stream API. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Since the launch of Cloudflare Stream, our customers have been asking for a programmatic way to add watermarks to their videos. We built the Watermarks API to support a wide range of use cases: from customers who simply want to tell Stream “<i>can you put this watermark image to the top right of my video?</i>” to customers with more detailed asks such as “<i>can you put this watermark image in a way it doesn’t take up more than 10% of the original video and with 20% opacity?”</i> All that and more is now available at no additional cost through the Watermarks API.</p>
    <div>
      <h2>What is Cloudflare Stream?</h2>
      <a href="#what-is-cloudflare-stream">
        
      </a>
    </div>
    <p>Cloudflare Stream provides out-of-the-box video infrastructure so developers can bring their app ideas to market faster. While building a video streaming app, developers must ask themselves questions like:</p><ul><li><p>“<i>Where do we store the videos affordably?</i>”</p></li><li><p>“<i>How do we encode the videos to support users with varying Internet speeds?</i>”</p></li><li><p>“<i>How do we maintain our video pipeline in the long term?</i>”</p></li></ul><p>Cloudflare Stream is a single product that handles video encoding, storage, delivery and presentation (with the Stream Player). Stream lets developers launch their ideas faster while having the confidence the video infrastructure will scale with their app’s growth.</p>
    <div>
      <h2>How the Watermark API works</h2>
      <a href="#how-the-watermark-api-works">
        
      </a>
    </div>
    <p>The Watermark API lets you add a watermark to a video at the time of uploading. It consists of two new features to the Stream API:</p><ul><li><p>A new <b>/stream/watermarks</b> endpoint that lets you create watermark profiles and returns a uid, a unique identifier for each watermark profile</p></li><li><p>Support for a <b>watermark</b> object containing the <b>uid</b> of the watermark profile that can be passed at the time of upload</p></li></ul>
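<p>If you prefer to script these calls rather than use curl, the two features map to two small request builders. This is an illustrative sketch: the endpoints and payload shapes match the curl examples in the steps below, but the helper names are ours, and authentication headers are omitted for brevity.</p>

```javascript
// Illustrative request builders for the two Watermark API features.
// Helper names are hypothetical; auth headers (x-auth-email/x-auth-key)
// are omitted and would be added to `init.headers` in real use.
const API = "https://api.cloudflare.com/client/v4";

// Feature 1: create a watermark profile; the response carries its uid.
function watermarkProfileRequest(accountId, profile) {
  return {
    url: `${API}/accounts/${accountId}/stream/watermarks`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(profile),
    },
  };
}

// Feature 2: upload a video by URL, referencing the profile's uid.
function copyWithWatermarkRequest(accountId, videoUrl, watermarkUid) {
  return {
    url: `${API}/accounts/${accountId}/stream/copy`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ url: videoUrl, watermark: { uid: watermarkUid } }),
    },
  };
}

// Usage (placeholder account ID): pass the result to fetch(url, init).
const req = watermarkProfileRequest("YOUR_ACCOUNT_ID", {
  url: "https://example.com/logo.png",
  name: "Logo",
  opacity: 0.5,
  padding: 0.05,
  scale: 0.1,
  position: "lowerLeft",
});
```

The key design point is that the profile is created once and its <b>uid</b> is reused on every upload, so you never re-send the watermark image itself.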
    <div>
      <h3>Step 1: Creating a Watermark Profile</h3>
      <a href="#step-1-creating-a-watermark-profile">
        
      </a>
    </div>
    <p>A watermark profile describes the nature of the watermark, including the image to use as a watermark and properties such as its positioning, padding and scale.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1gC4TBzYR8k50DMEMwCAvB/d89c30b075f05bb80adf697fc6450f18/image3-5.png" />
            
            </figure><p>In this example, we are going to create a watermark profile that places the Cloudflare logo to the lower left of the video:</p>
            <pre><code>curl --request POST \
  --url https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/stream/watermarks \
  --header 'content-type: application/json' \
  --header "x-auth-email: $CLOUDFLARE_EMAIL" \
  --header "x-auth-key: $CLOUDFLARE_KEY" \
  --data '{
  "url": "https://storage.googleapis.com/zaid-test/Watermarks%20Demo/cf-icon.png",
  "name": "Cloudflare Icon",
  "opacity": 0.5,
  "padding": 0.05,
  "scale": 0.1,
  "position": "lowerLeft"
}'</code></pre>
            <p>The response contains information about the watermark profile, including a <b>uid</b> that we will use in the next step:</p>
            <pre><code>{
  "result": {
    "uid": "a85d289c2e3f82701103620d16cd2408",
    "size": 9165,
    "height": 504,
    "width": 600,
    "created": "2020-09-03T20:43:56.337486Z",
    "downloadedFrom": "REDACTED_VIDEO_URL",
    "name": "Cloudflare Icon",
    "opacity": 0.5,
    "padding": 0.05,
    "scale": 0.1,
    "position": "lowerLeft"
  },
  "success": true,
  "errors": [],
  "messages": []
}</code></pre>
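<p>To build intuition for what <b>scale</b> and <b>padding</b> mean, here is some illustrative placement math. We assume both are fractions of the video’s width and that the watermark keeps its aspect ratio; these are our assumptions for the sketch, so check the Watermark API docs for the exact semantics Stream uses.</p>

```javascript
// Rough placement math for a corner watermark, as an illustration only.
// Assumptions (ours, not verified against Stream's renderer): `scale`
// and `padding` are fractions of the video's width, aspect ratio is
// preserved, and `position` is one of the four corner values.
function placeWatermark(video, mark, { scale, padding, position }) {
  const w = video.width * scale;             // scaled watermark width
  const h = w * (mark.height / mark.width);  // preserve aspect ratio
  const pad = video.width * padding;         // padding in pixels
  const x = position.endsWith("Left") ? pad : video.width - pad - w;
  const y = position.startsWith("lower") ? video.height - pad - h : pad;
  return { w, h, x, y };
}

// The 600x504 Cloudflare icon from the response above, on a 1080p video:
const box = placeWatermark(
  { width: 1920, height: 1080 },
  { width: 600, height: 504 },
  { scale: 0.1, padding: 0.05, position: "lowerLeft" }
);
// Under these assumptions the icon renders about 192px wide, roughly
// 96px in from the left and bottom edges.
```

In other words, because everything is relative, the same profile produces a proportionally identical watermark across every rendition Stream encodes.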
            
    <div>
      <h3>Step 2: Apply the Watermark</h3>
      <a href="#step-2-apply-the-watermark">
        
      </a>
    </div>
    <p>We’ve created the watermark and are ready to use it. Below is a screengrab from the Built For This commercial. It contains no watermark:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6z0eJjFKqSUiIVkmA5jjfz/219c6ba9b833701f4eaea33ce9c7eb17/image1-2.png" />
            
            </figure><p>We are going to upload the commercial and request Stream to add the logo from the previous step as a watermark:</p>
            <pre><code>curl --request POST \
  --url https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/stream/copy \
  --header 'content-type: application/json' \
  --header "x-auth-email: $EMAIL" \
  --header "x-auth-key: $AUTH_KEY" \
  --data '{
  "url": "https://storage.googleapis.com/zaid-test/Watermarks%20Demo/The%20Internet%20was%20BuiltForThis.mp4",
  "watermark": {
    "uid": "a85d289c2e3f82701103620d16cd2408"
  }
}'</code></pre>
            
    <div>
      <h3>Step 3: Your video, now with a watermark!</h3>
      <a href="#step-3-your-video-now-with-a-watermark">
        
      </a>
    </div>
    <p>You’re done! The uploaded video now plays with the watermark from your profile applied.</p>
    <div>
      <h2>What’s next</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Read the detailed <a href="https://developers.cloudflare.com/stream/watermarks">Watermark API docs</a> covering different use cases.</p><p>In future iterations, we plan to add support for animated watermarks. Additionally, we want to add Watermark support to the Stream Dashboard so you have a UI to manage and add watermarks.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Stream]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">5kZ1ZhPezSltbszSKhtWqw</guid>
            <dc:creator>Rachel Chen</dc:creator>
        </item>
    </channel>
</rss>