
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 07:39:10 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Cloudflare is the best place to build realtime voice agents]]></title>
            <link>https://blog.cloudflare.com/cloudflare-realtime-voice-ai/</link>
            <pubDate>Fri, 29 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Today, we're excited to announce new capabilities that make it easier than ever to build real-time, voice-enabled AI applications on Cloudflare's global network. ]]></description>
            <content:encoded><![CDATA[ <p>The way we interact with AI is fundamentally changing. While text-based interfaces like ChatGPT have shown us what's possible, they are only the beginning in terms of interaction. Humans communicate not only by texting, but also by talking — we show things, we interrupt and clarify in real time. Voice AI brings these natural interaction patterns to our applications.</p><p>Today, we're excited to announce new capabilities that make it easier than ever to build real-time, voice-enabled AI applications on Cloudflare's global network. These new features create a complete platform for developers building the next generation of conversational AI experiences, or can serve as building blocks for more advanced AI agents running across platforms.</p><p>We're launching:</p><ul><li><p><b>Cloudflare Realtime Agents</b> - A runtime for orchestrating voice AI pipelines at the edge</p></li><li><p><b>Pipe raw WebRTC audio as PCM in Workers</b> - You can now connect WebRTC audio directly to your AI models or to existing media pipelines</p></li><li><p><b>Workers AI WebSocket support</b> - Realtime AI inference with models like PipeCat's smart-turn-v2</p></li><li><p><b>Deepgram on Workers AI</b> - Speech-to-text and text-to-speech running in over 330 cities worldwide</p></li></ul>
    <div>
      <h2>Why realtime AI matters now</h2>
      <a href="#why-realtime-ai-matters-now">
        
      </a>
    </div>
    <p>Today, building voice AI applications is hard. You need to coordinate multiple services such as speech-to-text, language models, and text-to-speech, all while managing complex audio pipelines, handling interruptions, and keeping latency low enough for natural conversation.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/a2D2mbrkDnb0tRo5466DN/8a4643e52a5f23b6948f1d15671140ac/image4.jpg" />
          </figure><p>Building production voice AI requires orchestrating a complex symphony of technologies. You need low-latency speech recognition, intelligent language models that understand context and can handle interruptions, and natural-sounding voice synthesis, and all of it needs to happen in under 800 milliseconds — the threshold where conversation feels natural rather than stilted. This latency budget is unforgiving. Every millisecond counts: 40ms for microphone input, 300ms for transcription, 400ms for LLM inference, 150ms for text-to-speech. Any additional latency from poor infrastructure choices or distant servers transforms a delightful experience into a frustrating one.</p><p>That's why we're building real-time AI tools: we want to make real-time voice AI as easy to deploy as a static website. We're also witnessing a critical inflection point where conversational AI moves from experimental demos to production-ready systems that can scale globally. If you’re already a developer in the real-time AI ecosystem, we want to provide the best building blocks, leveraging the 330+ datacenters Cloudflare has built to get you the lowest latency.</p>
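<p>To make the budget concrete, here is a quick tally of the per-stage figures above (the stage names below are illustrative, not part of any API). The stages alone consume roughly the entire conversational budget, which is why the infrastructure underneath can afford to add almost nothing on top:</p>

```python
# Per-stage latency figures quoted above (stage names are illustrative).
BUDGET_MS = {
    "microphone_input": 40,
    "transcription": 300,
    "llm_inference": 400,
    "text_to_speech": 150,
}

def total_latency_ms(stages: dict) -> int:
    """Tally how much of the conversational budget the pipeline itself uses."""
    return sum(stages.values())

print(total_latency_ms(BUDGET_MS))  # 890
```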
    <div>
      <h2>Introducing Cloudflare Realtime Agents</h2>
      <a href="#introducing-cloudflare-realtime-agents">
        
      </a>
    </div>
    <p>Cloudflare Realtime Agents is a simple runtime for orchestrating voice AI pipelines that run on our global network, as close to your users as possible. Instead of managing complex infrastructure yourself, you can focus on building great conversational experiences.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1QcKOdouzGYP8DecqqSzM8/022a33e9b7bcbcbd0461fa83df39b1ba/image1.png" />
          </figure>
    <div>
      <h3>How it works</h3>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>When a user connects to your voice AI application, here's what happens:</p><ol><li><p><b>WebRTC connection</b> - Audio from the user's device is streamed to the nearest Cloudflare location via WebRTC, using the Cloudflare RealtimeKit mobile or web SDKs</p></li><li><p><b>AI pipeline orchestration</b> - Your pre-configured pipeline runs: speech-to-text → LLM → text-to-speech, with support for interruption detection and turn-taking</p></li><li><p><b>Your configured runtime options/callbacks/tools run</b></p></li><li><p><b>Response delivery</b> - Generated audio streams back to the user with minimal latency</p></li></ol><p>The magic is in how we've designed this as composable building blocks. You're not locked into a rigid pipeline — you can configure data flows, add tee and join operations, and control exactly how your AI agent behaves.</p><p>Take a look at the <code>MyTextHandler</code> function from the above diagram, for example. It’s just a function that takes in text and returns text, inserted after speech-to-text and before text-to-speech:</p>
            <pre><code>class MyTextHandler extends TextComponent {
	env: Env;

	constructor(env: Env) {
		super();
		this.env = env;
	}

	async onTranscript(text: string) {
		const { response } = await this.env.AI.run('@cf/meta/llama-3.1-8b-instruct', {
			prompt: "You are a Wikipedia bot, answer the user query: " + text,
		});
		this.speak(response!);
	}
}</code></pre>
            <p>Your agent is a JavaScript class that extends RealtimeAgent, where you initialize a pipeline consisting of various text-to-speech, speech-to-text, text-to-text, and even speech-to-speech transformations.</p>
            <pre><code>export class MyAgent extends RealtimeAgent&lt;Env&gt; {
	constructor(ctx: DurableObjectState, env: Env) {
		super(ctx, env);
	}

	async init(agentId: string, meetingId: string, authToken: string, workerUrl: string, accountId: string, apiToken: string) {
		// Construct your text processor for generating responses to text
		const textHandler = new MyTextHandler(this.env);
		// Construct a Meeting object to join the RTK meeting
		const transport = new RealtimeKitTransport(meetingId, authToken, [
			{
				media_kind: 'audio',
				stream_kind: 'microphone',
			},
		]);
		const { meeting } = transport;

		// Construct a pipeline to take in meeting audio, transcribe it using
		// Deepgram, and pass our generated responses through ElevenLabs to
		// be spoken in the meeting
		await this.initPipeline(
			[transport, new DeepgramSTT(this.env.DEEPGRAM_API_KEY), textHandler, new ElevenLabsTTS(this.env.ELEVENLABS_API_KEY), transport],
			agentId,
			workerUrl,
			accountId,
			apiToken,
		);

		// The RTK meeting object is accessible to us, so we can register handlers
		// on various events like participant joins/leaves, chat, etc.
		// This is optional
		meeting.participants.joined.on('participantJoined', (participant) =&gt; {
			textHandler.speak(`Participant Joined ${participant.name}`);
		});
		meeting.participants.joined.on('participantLeft', (participant) =&gt; {
			textHandler.speak(`Participant Left ${participant.name}`);
		});

		// Make sure to actually join the meeting after registering all handlers
		await meeting.rtkMeeting.join();
	}

	async deinit() {
		// Add any other cleanup logic required
		await this.deinitPipeline();
	}
}</code></pre>
            <p>View a full example in the <a href="https://developers.cloudflare.com/realtime/agents/getting-started/"><u>developer docs</u></a> and get your own Realtime Agent running. View <a href="https://dash.cloudflare.com/?to=/:account/realtime/agents"><u>Realtime Agents</u></a> on your dashboard.</p>
    <div>
      <h3>Built for flexibility</h3>
      <a href="#built-for-flexibility">
        
      </a>
    </div>
    <p>What makes Realtime Agents powerful is its flexibility:</p><ul><li><p><b>Many AI provider options</b> - Use the models on Workers AI, OpenAI, Anthropic, or any provider through AI Gateway</p></li><li><p><b>Multiple input/output modes</b> - Accept audio and/or text and respond with audio and/or text</p></li><li><p><b>Stateful coordination</b> - Maintain context across the conversation without managing complex state yourself</p></li><li><p><b>Speed and flexibility</b> - Use <a href="https://realtime.cloudflare.com"><u>RealtimeKit</u></a> to manage WebRTC sessions and UI for faster development, or, for full control over your stack, connect directly using any standard WebRTC client or raw WebSockets</p></li><li><p><b>Integrate</b> with the <a href="https://developers.cloudflare.com/agents/"><u>Cloudflare Agents SDK</u></a></p></li></ul><p>During the open beta starting today, the Cloudflare Realtime Agents runtime is free to use and works with various AI models:</p><ul><li><p>Speech and Audio: Integration with platforms like ElevenLabs and Deepgram.</p></li><li><p>LLM Inference: Flexible options to use large language models through Cloudflare Workers AI and AI Gateway, connect to third-party models like OpenAI, Gemini, Grok, Claude, or bring your own custom models.</p></li></ul>
    <div>
      <h2>Pipe raw WebRTC audio as PCM in Workers</h2>
      <a href="#pipe-raw-webrtc-audio-as-pcm-in-workers">
        
      </a>
    </div>
    <p>For developers who need the most flexibility with their applications beyond Realtime Agents, we're exposing the raw WebRTC audio pipeline directly to Workers.</p><p>WebRTC audio in Workers works by leveraging Cloudflare’s Realtime SFU, which converts WebRTC audio from the Opus codec to PCM and streams it to any WebSocket endpoint you specify. This means you can use Workers to implement:</p><ul><li><p><b>Live transcription</b> - Stream audio from a video call directly to a transcription service</p></li><li><p><b>Custom AI pipelines</b> - Send audio to AI models without setting up complex infrastructure</p></li><li><p><b>Recording and processing</b> - Save, audit, or analyze audio streams in real time</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2b15xG5EfUiNYLtH8cNRTh/116f1e195cada59a61874c74ee499159/image2.png" />
          </figure>
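<p>On the receiving end of that WebSocket you work with plain PCM bytes. As a minimal sketch (the sample format here, signed 16-bit little-endian, is an assumption for illustration; check the SFU documentation for the exact format of your stream), here is how a PCM frame could be decoded into normalized float samples before handing it to a model:</p>

```python
import struct

def pcm16le_to_floats(frame: bytes) -> list:
    """Decode signed 16-bit little-endian PCM bytes into floats in [-1.0, 1.0)."""
    count = len(frame) // 2
    samples = struct.unpack(f"<{count}h", frame[: count * 2])
    return [s / 32768.0 for s in samples]

# Example: a two-sample frame containing silence and the most negative sample
frame = struct.pack("<2h", 0, -32768)
print(pcm16le_to_floats(frame))  # [0.0, -1.0]
```

<p>From there, the float samples can be buffered, resampled, or streamed onward to any inference endpoint.</p>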
    <div>
      <h3>WebSockets vs WebRTC for voice AI</h3>
      <a href="#websockets-vs-webrtc-for-voice-ai">
        
      </a>
    </div>
    <p>WebSockets and WebRTC can handle audio for AI services, but they work best in different situations. WebSockets are perfect for server-to-server communication and work fine when you don't need super-fast responses, making them great for testing and experimenting. However, if you're building an app where users need real-time conversations with low delay, WebRTC is the better choice.</p><p>WebRTC has several advantages that make it superior for live audio streaming. It uses UDP instead of TCP, which prevents audio delays caused by lost packets holding up the entire stream (<a href="https://blog.cloudflare.com/the-road-to-quic/#head-of-line-blocking"><u>head-of-line blocking</u></a> is a common topic discussed on this blog). The Opus audio codec in WebRTC automatically adjusts to network conditions and can handle packet loss gracefully. WebRTC also includes built-in features like echo cancellation and noise reduction that WebSockets would require you to build separately.</p><p>With this feature, you can use WebRTC for client-to-server communication and leverage Cloudflare to convert it to familiar WebSockets for server-to-server communication and backend processing.</p>
    <div>
      <h3>The power of Workers + WebRTC</h3>
      <a href="#the-power-of-workers-webrtc">
        
      </a>
    </div>
    <p>When WebRTC audio gets converted to WebSockets, you get PCM audio at the original sample rate, and from there, you can run any task in or out of the Cloudflare developer platform:</p><ul><li><p>Resample audio and send it to different AI providers</p></li><li><p>Run WebAssembly-based audio processing</p></li><li><p>Build complex applications with <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a>, <a href="https://developers.cloudflare.com/durable-objects/api/alarms/"><u>Alarms</u></a>, and other Workers primitives</p></li><li><p>Deploy containerized processing pipelines with <a href="https://developers.cloudflare.com/containers/"><u>Workers Containers</u></a></p></li></ul><p>The WebSocket works bidirectionally, so data sent back on the WebSocket becomes available as a WebRTC track on the Realtime SFU, ready to be consumed within WebRTC.</p><p>To illustrate this setup, we’ve made a simple <a href="https://github.com/cloudflare/realtime-examples/tree/main/tts-ws"><u>WebRTC application demo</u></a> that uses the ElevenLabs API for text-to-speech.</p><p>Visit the <a href="https://developers.cloudflare.com/realtime/sfu/"><u>Realtime SFU developer docs</u></a> to learn how to get started.</p>
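<p>Resampling is often the only preprocessing needed before forwarding audio to a provider that expects a specific rate (16 kHz is common for speech models). The linear-interpolation resampler below is a sketch for illustration only; a production pipeline would typically use a proper filtering resampler to avoid aliasing:</p>

```python
def resample_linear(samples: list, src_rate: int, dst_rate: int) -> list:
    """Resample audio by linear interpolation between neighboring samples."""
    if src_rate == dst_rate or not samples:
        return list(samples)
    ratio = src_rate / dst_rate
    out_len = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(out_len):
        pos = i * ratio          # fractional position in the source buffer
        idx = int(pos)
        frac = pos - idx
        nxt = samples[min(idx + 1, len(samples) - 1)]
        out.append(samples[idx] * (1 - frac) + nxt * frac)
    return out

# Downsample a 48 kHz buffer to 16 kHz: one output sample per three inputs.
print(resample_linear([0.0, 0.3, 0.6, 0.9, 0.6, 0.3], 48000, 16000))  # [0.0, 0.9]
```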
    <div>
      <h2>Realtime AI inference with WebSockets</h2>
      <a href="#realtime-ai-inference-with-websockets">
        
      </a>
    </div>
    <p>WebSockets provide the backbone of real-time AI pipelines because they are a low-latency, bidirectional primitive with ubiquitous support in developer tooling, especially for server-to-server communication. Although HTTP works great for many use cases like chat or batch inference, real-time voice AI needs persistent, low-latency connections when talking to AI inference servers. To support your real-time AI workloads, Workers AI now supports WebSocket connections for select models.</p>
    <div>
      <h3>Launching with PipeCat SmartTurn V2</h3>
      <a href="#launching-with-pipecat-smartturn-v2">
        
      </a>
    </div>
    <p>The first model with WebSocket support is PipeCat's <a href="https://developers.cloudflare.com/workers-ai/models/smart-turn-v2/"><u>smart-turn-v2</u></a> turn detection model — a critical component for natural conversation. Turn detection models determine when a speaker has finished talking and it's appropriate for the AI to respond. Getting this right is the difference between an AI that constantly interrupts and one that feels natural to talk to.</p><p>Below is an example of how to call smart-turn-v2 running on Workers AI.</p>
            <pre><code>"""
Cloudflare AI WebSocket Inference - With PipeCat's smart-turn-v2
"""

import asyncio
import websockets
import json
import numpy as np

# Configuration
ACCOUNT_ID = "your-account-id"
API_TOKEN = "your-api-token"
MODEL = "@cf/pipecat-ai/smart-turn-v2"

# WebSocket endpoint
WEBSOCKET_URL = f"wss://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/{MODEL}?dtype=uint8"

async def run_inference(audio_data: bytes) -&gt; dict:
    async with websockets.connect(
        WEBSOCKET_URL,
        additional_headers={
            "Authorization": f"Bearer {API_TOKEN}"
        }
    ) as websocket:
        await websocket.send(audio_data)
        
        response = await websocket.recv()
        result = json.loads(response)
        
        # Response format: {'is_complete': True, 'probability': 0.87}
        return result

def generate_test_audio():
    # Generate random noise and clip to the valid uint8 range before casting,
    # so out-of-range values don't wrap around
    noise = np.random.normal(128, 20, 8192)
    noise = np.clip(noise, 0, 255).astype(np.uint8)

    return noise

async def demonstrate_inference():
    # Generate test audio
    noise = generate_test_audio()
    
    try:
        print("\nTesting noise...")
        noise_result = await run_inference(noise.tobytes())
        print(f"Noise result: {noise_result}")
        
    except Exception as e:
        print(f"Error: {e}")

if __name__ == "__main__":
    asyncio.run(demonstrate_inference())</code></pre>
            
    <div>
      <h2>Deepgram in Workers AI</h2>
      <a href="#deepgram-in-workers-ai">
        
      </a>
    </div>
    <p>On Wednesday, we announced that Deepgram's speech-to-text and text-to-speech models are available on Workers AI, running in Cloudflare locations worldwide. This means:</p><ul><li><p><b>Lower latency</b> - Speech recognition happens at the edge, close to users, running in the same network as Workers</p></li><li><p><b>WebRTC audio processing</b> without leaving the Cloudflare network</p></li><li><p><b>State-of-the-art audio ML models</b> - powerful, capable, and fast audio models, available directly through Workers AI</p></li><li><p><b>Global scale</b> - leverages Cloudflare’s global network in 330+ cities automatically</p></li></ul><p>Deepgram is a popular choice for voice AI applications. By building your voice AI systems on the Cloudflare platform, you get access to powerful models and the lowest-latency infrastructure to give your application a natural, responsive experience.</p>
    <div>
      <h3>Interested in other realtime AI models running on Cloudflare?</h3>
      <a href="#interested-in-other-realtime-ai-models-running-on-cloudflare">
        
      </a>
    </div>
    <p>If you're developing AI models for real-time applications, we want to run them on Cloudflare's network. Whether you have proprietary models or need ultra-low latency inference at scale with open-source models, reach out to us.</p>
    <div>
      <h2>Get started today</h2>
      <a href="#get-started-today">
        
      </a>
    </div>
    <p>All of these features are available now:</p><ul><li><p><b>Cloudflare Realtime Agents</b> - <a href="https://developers.cloudflare.com/realtime/agents/getting-started/"><u>Start testing in beta</u></a></p></li><li><p><b>WebRTC audio as PCM in Workers</b> - <a href="https://developers.cloudflare.com/realtime/sfu/"><u>Read the documentation</u></a> and integrate with your applications</p></li><li><p><b>Workers AI WebSocket support</b> - Try out PipeCat’s <a href="https://developers.cloudflare.com/workers-ai/models/smart-turn-v2/"><u>smart-turn-v2</u></a> model</p></li><li><p><a href="https://blog.cloudflare.com/workers-ai-partner-models/"><b><u>Deepgram on Workers AI</u></b></a> - Available now at <a href="https://developers.cloudflare.com/workers-ai/models/aura-1/"><u>@cf/deepgram/aura-1</u></a> and <a href="https://developers.cloudflare.com/workers-ai/models/nova-3/"><u>@cf/deepgram/nova-3</u></a></p></li></ul><p>Want to pick the brains of the engineers who built this? Join them for technical deep dives, live demos, and Q&amp;A at Cloudflare Connect in Las Vegas. Explore the <a href="https://events.cloudflare.com/connect/2025/"><u>full schedule and register</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6wpPvADZYXKpbuqXcJWGfn/0c93500141d1f8dd443c04e5e3d69155/image3.png" />
          </figure><p>
</p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">4AaIT3iiPV1cfuh2FxoUgq</guid>
            <dc:creator>Renan Dincer</dc:creator>
        </item>
        <item>
            <title><![CDATA[MoQ: Refactoring the Internet's real-time media stack]]></title>
            <link>https://blog.cloudflare.com/moq/</link>
            <pubDate>Fri, 22 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Media over QUIC (MoQ) is a new IETF standard that resolves the long-standing conflict between latency and scale, creating a single foundation for sub-second, interactive streaming at a global scale. ]]></description>
            <content:encoded><![CDATA[ <p>For over two decades, we've built real-time communication on the Internet using a patchwork of specialized tools. RTMP gave us ingest. <a href="https://www.cloudflare.com/learning/video/what-is-http-live-streaming/"><u>HLS</u></a> and <a href="https://www.mpeg.org/standards/MPEG-DASH/"><u>DASH</u></a> gave us scale. WebRTC gave us interactivity. Each solved a specific problem for its time, and together they power the global streaming ecosystem we rely on today.</p><p>But using them together in 2025 feels like building a modern application with tools from different eras. The seams are starting to show—in complexity, in latency, and in the flexibility needed for the next generation of applications, from sub-second live auctions to massive interactive events. We're often forced to make painful trade-offs between latency, scale, and operational complexity.</p><p>Today Cloudflare is launching the first Media over QUIC (MoQ) relay network, running on every Cloudflare server in datacenters in 330+ cities. MoQ is an open protocol being developed at the <a href="https://www.ietf.org/"><u>IETF</u></a> by engineers from across the industry—not a proprietary Cloudflare technology. MoQ combines the low-latency interactivity of WebRTC, the scalability of HLS/DASH, and the simplicity of a single architecture, all built on a modern transport layer. We're joining Meta, Google, Cisco, and others in building implementations that work seamlessly together, creating a shared foundation for the next generation of real-time applications on the Internet.</p>
    <div>
      <h3><b>An evolutionary ladder of compromise</b></h3>
      <a href="#an-evolutionary-ladder-of-compromise">
        
      </a>
    </div>
    <p>To understand the promise of MoQ, we first have to appreciate the history that led us here—a journey defined by a series of architectural compromises where solving one problem inevitably created another.</p><p><b>The RTMP era: Conquering latency, compromising on scale</b></p><p>In the early 2000s, <b>RTMP (Real-Time Messaging Protocol)</b> was a breakthrough. It solved the frustrating "download and wait" experience of early video playback on the web by creating a persistent, stateful TCP connection between a <a href="https://en.wikipedia.org/wiki/Adobe_Flash"><u>Flash</u></a> client and a server. This enabled low-latency streaming (2-5 seconds), powering the first wave of live platforms like <a href="http://justin.tv"><u>Justin.tv</u></a> (which later became Twitch).</p><p>But its strength was its weakness. That stateful connection, which had to be maintained for every viewer, was architecturally hostile to scale. It required expensive, specialized media servers and couldn't use the commodity HTTP-based <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/"><u>Content Delivery Networks (CDNs)</u></a> that were beginning to power the rest of the web. Its reliance on TCP also meant that a single lost packet could freeze the entire stream—a phenomenon known as <a href="https://blog.cloudflare.com/the-road-to-quic/#head-of-line-blocking"><u>head-of-line blocking</u></a>—creating jarring latency spikes. The industry retained RTMP for the "first mile" from the camera to servers (ingest), but a new solution was needed for the "last mile" from servers to your screen (delivery).</p><p><b>The HLS &amp; DASH era: Solving for scale, compromising on latency</b></p><p>The catalyst for the next era was the iPhone's rejection of Flash. In response, Apple created <a href="https://www.cloudflare.com/learning/video/what-is-http-live-streaming/"><b><u>HLS (HTTP Live Streaming)</u></b></a>. 
HLS and its open-standard counterpart, <b>MPEG-DASH</b>, abandoned stateful connections and treated video as a sequence of small, static files delivered over standard HTTP.</p><p>This enabled much greater scalability. By moving to the interoperable open standard of HTTP for the underlying transport, video could now be distributed by any web server and cached by global CDNs, allowing platforms to reach millions of viewers reliably and relatively inexpensively. The compromise? A <i>significant</i> trade-off in latency. To ensure smooth playback, players needed to buffer at least three video segments before starting. With segment durations of 6-10 seconds, this baked 15-30 seconds of latency directly into the architecture.</p><p>While extensions like <a href="https://developer.apple.com/documentation/http-live-streaming/enabling-low-latency-http-live-streaming-hls"><u>Low-Latency HLS (LL-HLS)</u></a> have more recently emerged to achieve latencies in the 3-second range, they remain complex patches <a href="https://blog.cloudflare.com/the-road-to-quic/#head-of-line-blocking"><u>fighting against the protocol's fundamental design</u></a>. These extensions introduce a layer of stateful, real-time communication—using clever workarounds like holding playlist requests open—that ultimately strains the stateless request-response model central to HTTP's scalability and composability.</p><p><b>The WebRTC Era: Conquering conversational latency, compromising on architecture</b></p><p>In parallel, <b>WebRTC (Web Real-Time Communication)</b> emerged to solve a different problem: plugin-free, two-way conversational video with sub-500ms latency within a browser. It worked by creating direct peer-to-peer (P2P) media paths, removing central servers from the equation.</p><p>But this P2P model is fundamentally at odds with broadcast scale. 
<a href="https://blog.cloudflare.com/cloudflare-calls-anycast-webrtc/#webrtc-growing-pains"><u>In a mesh network, the number of connections grows quadratically with each new participant</u></a> (the "N-squared problem"). For more than a handful of users, the model collapses under the weight of its own complexity. To work around this, the industry developed server-based topologies like the Selective Forwarding Unit (SFU) and Multipoint Control Unit (MCU). These are effective but require building what is essentially a <a href="https://blog.cloudflare.com/cloudflare-calls-anycast-webrtc/#is-cloudflare-calls-a-real-sfu"><u>private, stateful, real-time CDN</u></a>—a complex and expensive undertaking that is not standardized across infrastructure providers.</p><p>This journey has left us with a fragmented landscape of specialized, non-interoperable silos, forcing developers to stitch together multiple protocols and accept a painful three-way tension between <b>latency, scale, and complexity</b>.</p>
    <div>
      <h3><b>Introducing MoQ</b></h3>
      <a href="#introducing-moq">
        
      </a>
    </div>
    <p>This is the context into which Media over QUIC (MoQ) emerges. It's not just another protocol; it's a new design philosophy built from the ground up to resolve this historical trilemma. Born out of an open, community-driven effort at the IETF, <u>MoQ aims to be a foundational Internet technology, not a proprietary product</u>.</p><p>Its promise is to unify the disparate worlds of streaming by delivering:</p><ol><li><p><b>Sub-second latency at broadcast scale:</b> Combining the latency of WebRTC with the scale of HLS/DASH and the simplicity of RTMP.</p></li><li><p><b>Architectural simplicity:</b> Creating a single, flexible protocol for ingest, distribution, and interactive use cases, eliminating the need to transcode between different technologies.</p></li><li><p><b>Transport efficiency:</b> Building on <a href="https://blog.cloudflare.com/the-road-to-quic/"><u>QUIC</u></a>, a <a href="https://www.cloudflare.com/learning/ddos/glossary/user-datagram-protocol-udp/"><u>UDP</u></a>-based protocol, to eliminate bottlenecks like TCP <a href="https://blog.cloudflare.com/the-road-to-quic/#head-of-line-blocking"><u>head-of-line blocking</u></a>.</p></li></ol><p>The initial focus was "Media" over QUIC, but the core concepts—named tracks of timed, ordered, but independent data—are so flexible that the working group is now simply calling the protocol "MoQ." The name reflects the power of the abstraction: it's a generic transport for any real-time data that needs to be delivered efficiently and at scale.</p><p>MoQ is now generic enough to serve as a general data fan-out, or pub/sub, system for everything from audio/video (high-bandwidth data) to sports score updates (low-bandwidth data).</p>
    <div>
      <h3><b>A deep dive into the MoQ protocol stack</b></h3>
      <a href="#a-deep-dive-into-the-moq-protocol-stack">
        
      </a>
    </div>
    <p>MoQ's elegance comes from solving the right problem at the right layer. Let's build up from the foundation to see how it achieves sub-second latency at scale.</p><p>The choice of QUIC as MoQ's foundation isn't arbitrary—it addresses issues that have plagued streaming protocols for decades.</p><p>By building on <b>QUIC</b> (the transport protocol that also powers <a href="https://www.cloudflare.com/learning/performance/what-is-http3/"><u>HTTP/3</u></a>), MoQ solves some key streaming problems:</p><ul><li><p><b>No head-of-line blocking:</b> Unlike TCP where one lost packet blocks everything behind it, QUIC streams are independent. A lost packet on one stream (e.g., an audio track) doesn't block another (e.g., the main video track). This alone eliminates the stuttering that plagued RTMP.</p></li><li><p><b>Connection migration:</b> When your device switches from Wi-Fi to cellular mid-stream, the connection seamlessly migrates without interruption—no rebuffering, no reconnection.</p></li><li><p><b>Fast connection establishment:</b> QUIC's <a href="https://blog.cloudflare.com/even-faster-connection-establishment-with-quic-0-rtt-resumption/"><u>0-RTT resumption</u></a> means returning viewers can start playing instantly.</p></li><li><p><b>Baked-in, mandatory encryption:</b> All QUIC connections are encrypted by default with <a href="https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/"><u>TLS 1.3</u></a>.</p></li></ul>
    <div>
      <h4>The core innovation: Publish/subscribe for media</h4>
      <a href="#the-core-innovation-publish-subscribe-for-media">
        
      </a>
    </div>
    <p>With QUIC solving transport issues, MoQ introduces its key innovation: treating media as subscribable tracks in a publish/subscribe system. But unlike traditional pub/sub, this is designed specifically for real-time media at CDN scale.</p><p>Instead of complex session management (WebRTC) or file-based chunking (HLS), <b>MoQ lets publishers announce named tracks of media that subscribers can request</b>. A relay network handles the distribution without needing to understand the media itself.</p>
    <div>
      <h4>How MoQ organizes media: The data model</h4>
      <a href="#how-moq-organizes-media-the-data-model">
        
      </a>
    </div>
    <p>Before we see how media flows through the network, let's understand how MoQ structures it. MoQ organizes data in a hierarchy:</p><ul><li><p><b>Tracks</b>: Named streams of media, like "video-1080p" or "audio-english". Subscribers request specific tracks by name.</p></li><li><p><b>Groups</b>: Independently decodable chunks of a track. For video, this typically means a GOP (Group of Pictures) starting with a keyframe. New subscribers can join at any Group boundary.</p></li><li><p><b>Objects</b>: The actual packets sent on the wire. Each Object belongs to a Track and has a position within a Group.</p></li></ul><p>This simple hierarchy enables two capabilities:</p><ol><li><p>Subscribers can start playback at <b>Group</b> boundaries without waiting for the next keyframe</p></li><li><p>Relays can forward <b>Objects</b> without parsing or understanding the media format</p></li></ol>
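<p>The hierarchy is easy to picture with a toy model. The Python types below are purely illustrative (they are not the MoQT wire format); they show how a subscriber arriving mid-stream can start at the latest Group boundary instead of waiting for the next keyframe:</p>

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class MoqObject:
    group_id: int   # which Group this Object belongs to
    object_id: int  # position within the Group
    payload: bytes  # opaque to relays: codec data, or anything else

@dataclass
class Track:
    name: str  # e.g. "video-1080p"; subscribers request tracks by name
    groups: dict = field(default_factory=dict)

    def publish(self, obj: MoqObject) -> None:
        self.groups.setdefault(obj.group_id, []).append(obj)

    def join_point(self) -> int:
        """A late subscriber starts at the most recent Group boundary."""
        return max(self.groups)

track = Track("video-1080p")
for gid in (0, 1, 2):  # three GOPs, each starting with a keyframe
    track.publish(MoqObject(gid, 0, b"keyframe"))
    track.publish(MoqObject(gid, 1, b"delta"))
print(track.join_point())  # 2
```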
    <div>
      <h5>The network architecture: From publisher to subscriber</h5>
      <a href="#the-network-architecture-from-publisher-to-subscriber">
        
      </a>
    </div>
    <p>MoQ’s network components are also simple:</p><ul><li><p><b>Publishers</b>: Announce track namespaces and send Objects</p></li><li><p><b>Subscribers</b>: Request specific tracks by name</p></li><li><p><b>Relays</b>: Connect publishers to subscribers by forwarding immutable Objects without parsing or <a href="https://www.cloudflare.com/learning/video/video-encoding-formats/"><u>transcoding</u></a> the media</p></li></ul><p>A Relay acts as a subscriber to receive tracks from upstream (like the original publisher) and simultaneously acts as a publisher to forward those same tracks downstream. This model is the key to MoQ's scalability: one upstream subscription can fan out to serve thousands of downstream viewers.</p>
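    <p>That fan-out behavior can be sketched as follows; this is an illustrative model, not Cloudflare's relay code:</p>
             <pre><code>use std::collections::HashMap;

/// One relay: a subscriber upstream, a publisher downstream.
#[derive(Default)]
struct Relay {
    /// track name -&gt; downstream subscriber ids
    subscribers: HashMap&lt;String, Vec&lt;u32&gt;&gt;,
    /// tracks this relay has already subscribed to upstream
    upstream: Vec&lt;String&gt;,
}

impl Relay {
    /// Registers a downstream subscriber; returns true only when a new
    /// upstream subscription is needed. Later viewers reuse it.
    fn subscribe(&amp;mut self, track: &amp;str, subscriber_id: u32) -&gt; bool {
        self.subscribers.entry(track.to_string()).or_default().push(subscriber_id);
        if self.upstream.iter().any(|t| t.as_str() == track) {
            false
        } else {
            self.upstream.push(track.to_string());
            true
        }
    }

    /// Forwards an immutable Object downstream without parsing or
    /// transcoding it; returns the number of deliveries.
    fn forward(&amp;self, track: &amp;str, _object: &amp;[u8]) -&gt; usize {
        self.subscribers.get(track).map_or(0, |subs| subs.len())
    }
}

fn main() {
    let mut relay = Relay::default();
    assert!(relay.subscribe("video-1080p", 1));  // first viewer: go upstream
    assert!(!relay.subscribe("video-1080p", 2)); // later viewers reuse it
    assert!(!relay.subscribe("video-1080p", 3));
    // One Object received from upstream fans out to all three viewers.
    assert_eq!(relay.forward("video-1080p", b"opaque object bytes"), 3);
}</code></pre>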
    <div>
      <h5>The MoQ Stack</h5>
      <a href="#the-moq-stack">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4g2MroH24otkzH3LQsFWZe/84ca43ad6c1c933ac395bf4ac767c584/image1.png" />
          </figure><p>MoQ's architecture can be understood as three distinct layers, each with a clear job:</p><ol><li><p><b>The Transport Foundation (QUIC or WebTransport):</b> This is the modern foundation upon which everything is built. MoQT can run directly over raw <b>QUIC</b>, which is ideal for native applications, or over <b>WebTransport</b>, which is required for use in a web browser. Crucially, the<a href="https://www.ietf.org/archive/id/draft-ietf-webtrans-http3-02.html"> <u>WebTransport protocol</u></a> and its corresponding<a href="https://w3c.github.io/webtransport/"> <u>W3C browser API</u></a> make QUIC's multiplexed reliable streams and unreliable datagrams directly accessible to browser applications. This is a game-changer. Protocols like <a href="https://blog.cloudflare.com/stream-now-supports-srt-as-a-drop-in-replacement-for-rtmp/"><u>SRT</u></a> may be efficient, but their lack of native browser support relegates them to ingest-only roles. WebTransport gives MoQ first-class citizenship on the web, making it suitable for both ingest and massive-scale distribution directly to clients.</p></li><li><p><b>The MoQT Layer:</b> Sitting on top of QUIC (or WebTransport), the MoQT layer provides the signaling and structure for a publish-subscribe system. This is the primary focus of the IETF working group. It defines the core control messages—like ANNOUNCE and SUBSCRIBE—and the basic data model we just covered. MoQT itself is intentionally spartan; it doesn't know or care if the data it's moving is <a href="https://www.cloudflare.com/learning/video/what-is-h264-avc/"><u>H.264</u></a> video, Opus audio, or game state updates.</p></li><li><p><b>The Streaming Format Layer:</b> This is where media-specific logic lives. A streaming format defines things like manifests, codec metadata, and packaging rules.
 <a href="https://datatracker.ietf.org/doc/draft-ietf-moq-warp/"><b><u>WARP</u></b></a> is one such format being developed alongside MoQT at the IETF, but it isn't the only one. Another standards body, like DASH-IF, could define a <a href="https://www.iso.org/standard/85623.html"><u>CMAF</u></a>-based streaming format over MoQT. A company that controls both original publisher and end subscriber can develop its own proprietary streaming format to experiment with new codecs or delivery mechanisms without being constrained by the transport protocol.</p></li></ol><p>This separation of layers is why different organizations can build interoperable implementations while still innovating at the streaming format layer.</p>
    <div>
      <h4>End-to-End Data Flow</h4>
      <a href="#end-to-end-data-flow">
        
      </a>
    </div>
    <p>Now that we understand the architecture and the data model, let's walk through how these pieces come together to deliver a stream. The protocol is flexible, but a typical broadcast flow relies on the <code>ANNOUNCE</code> and <code>SUBSCRIBE </code>messages to establish a data path from a publisher to a subscriber through the relay network.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2iRTJFdtCjIOcyg7ezoYgJ/e303ea8d1eb438328b60fdb28be47e84/image2.png" />
          </figure><p>Here is a step-by-step breakdown of what happens in this flow:</p><ol><li><p><b>Initiating Connections:</b> The process begins when the endpoints, acting as clients, connect to the relay network. The Original Publisher initiates a connection with its nearest relay (we'll call it Relay A). Separately, an End Subscriber initiates a connection with its own local relay (Relay B). These endpoints perform a <code>SETUP</code> handshake with their respective relays to establish a MoQ session and declare supported parameters.</p></li><li><p><b>Announcing a Namespace:</b> To make its content discoverable, the Publisher sends an <code>ANNOUNCE</code> message to Relay A. This message declares that the publisher is the authoritative source for a given <b>track namespace</b>. Relay A receives this and registers in a shared control plane (a conceptual database) that it is now a source for this namespace within the network.</p></li><li><p><b>Subscribing to a Track:</b> When the End Subscriber wants to receive media, it sends a <code>SUBSCRIBE</code> message to its relay, Relay B. This message is a request for a specific <b>track name</b> within a specific <b>track namespace</b>.</p></li><li><p><b>Connecting the Relays:</b> Relay B receives the <code>SUBSCRIBE</code> request and queries the control plane. It looks up the requested namespace and discovers that Relay A is the source. Relay B then initiates a session with Relay A (if it doesn't already have one) and forwards the <code>SUBSCRIBE</code> request upstream.</p></li><li><p><b>Completing the Path and Forwarding Objects:</b> Relay A, having received the subscription request from Relay B, forwards it to the Original Publisher. With the full path now established, the Publisher begins sending the <code>Objects</code> for the requested track. The Objects flow from the Publisher to Relay A, which forwards them to Relay B, which in turn forwards them to the End Subscriber. 
If another subscriber connects to Relay B and requests the same track, Relay B can immediately start sending them the Objects without needing to create a new upstream subscription.</p></li></ol>
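    <p>Conceptually, steps 2 and 4 reduce to a lookup keyed by namespace. A minimal sketch (glossing over how the real control plane is distributed and kept consistent):</p>
             <pre><code>use std::collections::HashMap;

/// The "conceptual database" relays share: which relay is the source
/// for which track namespace.
#[derive(Default)]
struct ControlPlane {
    /// track namespace -&gt; relay that ANNOUNCEd it
    sources: HashMap&lt;String, String&gt;,
}

impl ControlPlane {
    /// Step 2: a relay records that it is a source for a namespace.
    fn announce(&amp;mut self, namespace: &amp;str, relay: &amp;str) {
        self.sources.insert(namespace.to_string(), relay.to_string());
    }

    /// Step 4: a relay with no local source looks up where to forward
    /// a SUBSCRIBE for this namespace.
    fn route_subscribe(&amp;self, namespace: &amp;str) -&gt; Option&lt;&amp;str&gt; {
        self.sources.get(namespace).map(String::as_str)
    }
}

fn main() {
    let mut control_plane = ControlPlane::default();
    // The Original Publisher announces via Relay A...
    control_plane.announce("live/broadcast-1", "relay-a");
    // ...so Relay B knows to forward the SUBSCRIBE upstream to Relay A.
    assert_eq!(control_plane.route_subscribe("live/broadcast-1"), Some("relay-a"));
    assert_eq!(control_plane.route_subscribe("live/unknown"), None);
}</code></pre>
             <p>A production control plane must also be distributed and strongly consistent; here a single in-memory map stands in for it.</p>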
    <div>
      <h5>An Alternative Flow: The <code>PUBLISH</code> Model</h5>
      <a href="#an-alternative-flow-the-publish-model">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6KJYU1eWNyuSZEHNYonDDn/3898003d5a7f5904787c7ef009b22fe0/image3.png" />
          </figure><p>More recent drafts of the MoQ specification have introduced an alternative, push-based model using a <code>PUBLISH</code> message. In this flow, a publisher can effectively ask for permission to send a track's objects to a relay <i>without</i> waiting for a <code>SUBSCRIBE </code>request. The publisher sends a <code>PUBLISH</code> message, and the relay's <code>PUBLISH_OK</code> response indicates whether it will accept the objects. This is particularly useful for ingest scenarios, where a publisher wants to send its stream to an entry point in the network immediately, ensuring the media is available the instant the first subscriber connects.</p>
    <div>
      <h4>Advanced capabilities: Prioritization and congestion control</h4>
      <a href="#advanced-capabilities-prioritization-and-congestion-control">
        
      </a>
    </div>
    <p>MoQ’s benefits really shine when networks get congested. MoQ includes mechanisms for handling the reality of network traffic. One such mechanism is Subgroups.</p><p><b>Subgroups</b> are subdivisions within a Group that effectively map directly to the underlying QUIC streams. All Objects within the same Subgroup are generally sent on the same QUIC stream, guaranteeing their delivery order. Subgroup numbering also presents an opportunity to encode prioritization: within a Group, lower-numbered Subgroups are considered higher priority. </p><p>This enables intelligent quality degradation, especially with layered codecs (e.g. SVC):</p><ul><li><p><b>Subgroup 0</b>: Base video layer (360p) - must deliver</p></li><li><p><b>Subgroup 1</b>: Enhancement to 720p - deliver if bandwidth allows</p></li><li><p><b>Subgroup 2</b>: Enhancement to 1080p - first to drop under congestion</p></li></ul><p>When a relay detects congestion, it can drop Objects from higher-numbered Subgroups, preserving the base layer. Viewers see reduced quality instead of buffering.</p><p>The MoQ specification defines a scheduling algorithm that determines the order for all objects that are "ready to send." When a relay has multiple objects ready, it prioritizes them first by <b>group order</b> (ascending or descending) and then, within a group, by <b>subgroup id</b>. Our implementation supports the <b>group order</b> preference, which can be useful for low-latency broadcasts. If a viewer falls behind and its subscription uses descending group order, the relay prioritizes sending Objects from the newest "live" Group, potentially canceling unsent Objects from older Groups. This can help viewers catch up to the live edge quickly, a highly desirable feature for many interactive streaming use cases. The optimal strategies for using these features to improve QoE for specific use cases are still an open research question. 
We invite developers and researchers to use our network to experiment and help find the answers.</p>
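    <p>A simplified sketch (not the draft's exact scheduling algorithm) shows the two policies at work: descending group order for the send queue, and dropping high-numbered Subgroups under congestion:</p>
             <pre><code>#[derive(Debug, PartialEq)]
struct ReadyObject {
    group: u64,
    subgroup: u64, // 0 = base layer; higher numbers = enhancement layers
    object: u64,
}

/// Orders the send queue for a live broadcast: newest Group first
/// (descending group order), then higher-priority (lower-numbered)
/// Subgroups within each Group.
fn schedule(queue: &amp;mut Vec&lt;ReadyObject&gt;) {
    queue.sort_by(|a, b| b.group.cmp(&amp;a.group).then(a.subgroup.cmp(&amp;b.subgroup)));
}

/// Under congestion, sheds enhancement layers: keep only Subgroups up
/// to `max_subgroup` (0 keeps just the base layer).
fn shed_load(queue: &amp;mut Vec&lt;ReadyObject&gt;, max_subgroup: u64) {
    queue.retain(|o| o.subgroup &lt;= max_subgroup);
}

fn main() {
    let mut queue = vec![
        ReadyObject { group: 10, subgroup: 2, object: 4 }, // 1080p enhancement
        ReadyObject { group: 11, subgroup: 0, object: 0 }, // newest base layer
        ReadyObject { group: 10, subgroup: 0, object: 4 },
    ];
    schedule(&amp;mut queue);
    // The newest Group's base layer goes out first...
    assert_eq!(queue[0], ReadyObject { group: 11, subgroup: 0, object: 0 });
    // ...and congestion drops the enhancement layers, never the base.
    shed_load(&amp;mut queue, 0);
    assert!(queue.iter().all(|o| o.subgroup == 0));
}</code></pre>
             <p>A real relay makes these decisions per subscription and in concert with QUIC stream priorities; the point is that Subgroup numbering gives it enough information to degrade quality gracefully without parsing media.</p>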
    <div>
      <h3><b>Implementation: building the Cloudflare MoQ relay</b></h3>
      <a href="#implementation-building-the-cloudflare-moq-relay">
        
      </a>
    </div>
    <p>Theory is one thing; implementation is another. To validate the protocol and understand its real-world challenges, we've been building one of the first global MoQ relay networks. Cloudflare's network, which places compute and logic at the edge, is very well suited for this.</p><p>Our architecture connects the abstract concepts of MoQ to the Cloudflare stack. In our deep dive, we mentioned that when a publisher <code>ANNOUNCE</code>s a namespace, relays need to register this availability in a "shared control plane" so that <code>SUBSCRIBE</code> requests can be routed correctly. For this critical piece of state management, we use <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a>.</p><p>When a publisher announces a new namespace to a relay in, say, London, that relay uses a Durable Object—our strongly consistent, single-threaded storage solution—to record that this namespace is now available at that specific location. When a subscriber in Paris wants a track from that namespace, the network can query this distributed state to find the nearest source and route the <code>SUBSCRIBE</code> request accordingly. This architecture builds upon the technology we developed for Cloudflare's real-time services and provides a solution to the challenge of state management at a global scale.</p>
    <div>
      <h4>An Evolving Specification</h4>
      <a href="#an-evolving-specification">
        
      </a>
    </div>
    <p>Building on a new protocol in the open means implementing against a moving target. To get MoQ into the hands of the community, we made a deliberate trade-off: our current relay implementation is based on a <b>subset of the features defined in </b><a href="https://www.ietf.org/archive/id/draft-ietf-moq-transport-07.html"><b><u>draft-ietf-moq-transport-07</u></b></a>. This version became a de facto target for interoperability among several open-source projects, and pausing there allowed us to put effort towards other aspects of deploying our relay network.</p><p>This draft of the protocol makes a distinction between accessing "past" and "future" content. <code><b>SUBSCRIBE</b></code> is used to receive <b>future</b> objects for a track as they arrive—like tuning into a live broadcast to get everything from that moment forward. In contrast, <code><b>FETCH</b></code> provides a mechanism for accessing <b>past</b> content that a relay may already have in its cache—like asking for a recording of a song that just played.</p><p>Both are part of the same specification, but for the most pressing low-latency use cases, a performant implementation of <code>SUBSCRIBE</code> is what matters most. For that reason, we have focused our initial efforts there and have not yet implemented <code>FETCH</code>.</p><p>This is where our roadmap is flexible and where the community can have a direct impact. Do you need <code>FETCH</code> to build on-demand or catch-up functionality? Or is more complete support for the prioritization features within <code>SUBSCRIBE</code> more critical for your use case? The feedback we receive from early developers will help us decide what to build next.</p><p>As always, we will announce updates and changes to our implementation on our <a href="https://developers.cloudflare.com/moq"><u>developer docs pages</u></a> as development continues.</p>
    <div>
      <h3>Kick the tires on the future</h3>
      <a href="#kick-the-tires-on-the-future">
        
      </a>
    </div>
    <p>We believe in building in the open and in interoperability across the community. MoQ is not a Cloudflare technology but a foundational Internet technology. To that end, the first demo client we’re presenting is an open source, community example.</p><p><b>You can access the demo here: </b><a href="https://moq.dev/publish/"><b><u>https://moq.dev/publish/</u></b></a></p><p>Even though this is a preview release, we are running MoQ relays at Cloudflare’s full scale, as we do for every production service. This means every server that is part of the Cloudflare network in more than 330 cities is now a MoQ relay.</p><p>We invite you to experience the "wow" moment of near-instant, sub-second streaming latency that MoQ enables. How would you use a protocol that offers the speed of a video call with the scale of a global broadcast?</p>
    <div>
      <h3><b>Interoperability</b></h3>
      <a href="#interoperability">
        
      </a>
    </div>
    <p>We’ve been working with others in the IETF WG community and beyond on interoperability of publishers, players and other parts of the MoQ ecosystem. So far, we’ve tested with:</p><ul><li><p>Luke Curley’s <a href="https://moq.dev"><u>moq.dev</u></a></p></li><li><p>Lorenzo Miniero’s <a href="https://github.com/meetecho/imquic"><u>imquic</u></a></p></li><li><p>Meta’s <a href="https://github.com/facebookexperimental/moxygen"><u>Moxygen</u></a> </p></li><li><p><a href="https://github.com/englishm/moq-rs"><u>moq-rs</u></a></p></li><li><p><a href="https://github.com/englishm/moq-js"><u>moq-js</u></a></p></li><li><p><a href="https://norsk.video/"><u>Norsk</u></a></p></li><li><p><a href="https://vindral.com/"><u>Vindral</u></a></p></li></ul>
    <div>
      <h3>The Road Ahead</h3>
      <a href="#the-road-ahead">
        
      </a>
    </div>
    <p>The Internet's media stack is being refactored. For two decades, we've been forced to choose between latency, scale, and complexity. The compromises we made solved some problems, but also led to a fragmented ecosystem.</p><p>MoQ represents a promising new foundation—a chance to unify the silos and build the next generation of real-time applications on a scalable protocol. We're committed to helping build this foundation in the open, and we're just getting started.</p><p>MoQ is a realistic way forward: built on QUIC for future-proofing, easier to understand than WebRTC, and, unlike RTMP, compatible with browsers.</p><p>The protocol is evolving, the implementations are maturing, and the community is growing. Whether you're building the next generation of live streaming, exploring real-time collaboration, or pushing the boundaries of interactive media, consider whether MoQ may provide the foundation you need.</p>
    <div>
      <h3>Availability and pricing</h3>
      <a href="#availability-and-pricing">
        
      </a>
    </div>
    <p>We want developers to start building with MoQ today. To make that possible, MoQ at Cloudflare is in tech preview, which means it's available free of charge for testing (at any scale). Visit our <a href="https://developers.cloudflare.com/moq/"><u>developer homepage </u></a>for updates and potential breaking changes.</p><p>Indie developers and large enterprises alike ask about pricing early in their adoption of new technologies. We will be transparent and clear about MoQ pricing. In general availability, self-serve customers should expect to pay 5 cents/GB outbound with no cost for traffic sent towards Cloudflare. </p><p>Enterprise customers can expect pricing in line with regular media delivery, competitive with incumbent protocols. This means that if you’re already using Cloudflare for media delivery, you should not be wary of adopting new technologies because of cost. We will support you.</p><p>If you’re interested in partnering with Cloudflare to adopt the protocol early or contribute to its development, please reach out to us at <a href="mailto:moq@cloudflare.com"><u>moq@cloudflare.com</u></a>! Engineers excited about the future of the Internet are standing by.</p>
    <div>
      <h3>Get involved:</h3>
      <a href="#get-involved">
        
      </a>
    </div>
    <ul><li><p><b>Try the demo:</b> <a href="https://moq.dev/publish/"><u>https://moq.dev/publish/</u></a></p></li><li><p><b>Read the Internet draft:</b> <a href="https://datatracker.ietf.org/doc/draft-ietf-moq-transport/"><u>https://datatracker.ietf.org/doc/draft-ietf-moq-transport/</u></a></p></li><li><p><b>Contribute</b> to the protocol’s development: <a href="https://datatracker.ietf.org/group/moq/documents/"><u>https://datatracker.ietf.org/group/moq/documents/</u></a></p></li><li><p><b>Visit </b>our developer homepage: <a href="https://developers.cloudflare.com/moq/"><u>https://developers.cloudflare.com/moq/</u></a></p></li></ul><p></p> ]]></content:encoded>
            <category><![CDATA[Video]]></category>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[Live Streaming]]></category>
            <category><![CDATA[WebRTC]]></category>
            <category><![CDATA[IETF]]></category>
            <category><![CDATA[Standards]]></category>
            <guid isPermaLink="false">2XgF5NjmAy3cqybLPkpMFu</guid>
            <dc:creator>Mike English</dc:creator>
            <dc:creator>Renan Dincer</dc:creator>
        </item>
        <item>
            <title><![CDATA[Orange Me2eets: We made an end-to-end encrypted video calling app and it was easy]]></title>
            <link>https://blog.cloudflare.com/orange-me2eets-we-made-an-end-to-end-encrypted-video-calling-app-and-it-was/</link>
            <pubDate>Thu, 26 Jun 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Orange Meets, our open-source video calling web application, now supports end-to-end encryption using the MLS protocol with continuous group key agreement. ]]></description>
            <content:encoded><![CDATA[ <p>Developing a new video conferencing application often begins with a peer-to-peer setup using <a href="https://webrtc.org/"><u>WebRTC</u></a>, facilitating direct data exchange between clients. While effective for small demonstrations, this method runs into scalability hurdles as participants are added. Each client's transmission load grows in proportion to the number of users, because every client must send its data to each of the other n-1 clients.</p><p>Selective Forwarding Units (SFUs) are essential to scaling video conferencing applications. Essentially a media stream routing hub, an SFU receives media and data flows from participants and intelligently determines which streams to forward. By strategically distributing media based on network conditions and participant needs, this mechanism minimizes bandwidth usage and greatly enhances scalability. Nearly every video conferencing application today uses SFUs.</p><p>In 2024, we announced <a href="https://blog.cloudflare.com/cloudflare-calls-anycast-webrtc/"><u>Cloudflare Realtime</u></a> (then called Cloudflare Calls), our suite of WebRTC products, and we also released <a href="https://github.com/cloudflare/orange"><u>Orange Meets</u></a>, an open source video chat application built on top of our SFU.</p><p>We also realized that use of an SFU often comes with a privacy cost, as there is now a centralized hub that could see and listen to all the media contents, even though its sole job is to forward media bytes between clients as a data plane.</p><p>We believe end-to-end encryption should be the industry standard for secure communication, and that’s why today we’re excited to share that we’ve implemented and open sourced end-to-end encryption in Orange Meets. Our generic implementation is client-only, so it can be used with any WebRTC infrastructure. 
Finally, our new <i>designated committer</i> distributed algorithm was verified with a bounded model checker to confirm that it handles edge cases gracefully.</p>
    <div>
      <h2>End-to-end encryption for video conferencing is different than for text messaging</h2>
      <a href="#end-to-end-encryption-for-video-conferencing-is-different-than-for-text-messaging">
        
      </a>
    </div>
    <p>End-to-end encryption describes a secure communication channel whereby only the intended participants can read, see, or listen to the contents of the conversation, not anybody else. WhatsApp and iMessage, for example, are end-to-end-encrypted, which means that the companies that operate those apps or any other infrastructure can’t see the contents of your messages. </p><p>Whereas encrypted group chats are usually long-lived, highly asynchronous, and low bandwidth sessions, video and audio calls are short-lived, highly synchronous, and require high bandwidth. This difference comes with plenty of interesting tradeoffs, which influenced the design of our system.</p><p>We had to consider how factors like the ephemeral nature of calls, compared to the persistent nature of group text messages, also influenced the way we designed E2EE for Orange Meets. In chat messages, users must be able to decrypt messages sent to them while they were offline (e.g. while taking a flight). This is not a problem for real-time communication.</p><p>The bandwidth limitations around audio/video communication and the use of an SFU prevented us from using some of the E2EE technologies already available for text messages. Apple’s iMessage, for example, encrypts a message N-1 times for an N-user group chat. We can't encrypt the video for each recipient, as that could saturate the upload capacity of Internet connections as well as slow down the client. Media has to be encrypted once and decrypted by each client while preserving secrecy around only the current participants of the call.</p>
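    <p>Some back-of-the-envelope arithmetic makes the bandwidth point concrete (the 2 Mbps figure is an assumed, illustrative bitrate, not a measurement):</p>
             <pre><code>/// Upload bitrate if each frame must be encrypted separately for each of
/// the other n-1 participants (pairwise, iMessage-style).
fn per_recipient_upload_mbps(stream_mbps: f64, participants: u32) -&gt; f64 {
    stream_mbps * (participants - 1) as f64
}

/// Upload bitrate with one shared group key: encrypt once, and the SFU
/// fans the same ciphertext out to every participant.
fn encrypt_once_upload_mbps(stream_mbps: f64) -&gt; f64 {
    stream_mbps
}

fn main() {
    let video_mbps = 2.0; // assumed webcam bitrate for illustration
    // A 30-person call would need 58 Mbps of upload per client pairwise...
    assert_eq!(per_recipient_upload_mbps(video_mbps, 30), 58.0);
    // ...but stays at 2 Mbps when media is encrypted once for the group.
    assert_eq!(encrypt_once_upload_mbps(video_mbps), 2.0);
}</code></pre>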
    <div>
      <h2>Messaging Layer Security (MLS)</h2>
      <a href="#messaging-layer-security-mls">
        
      </a>
    </div>
    <p>Around the same time we were working on Orange Meets, we saw a lot of excitement around new apps being built with <a href="https://messaginglayersecurity.rocks/"><u>Messaging Layer Security</u></a> (MLS), an IETF-standardized protocol that describes how you can do a group key exchange in order to establish end-to-end-encryption for group communication. </p><p>Previously, the only way to achieve these properties was to essentially run your own fork of the <a href="https://signal.org/docs/"><u>Signal protocol</u></a>, which itself is more of a living protocol than a solidified standard. Since MLS is standardized, we’ve now seen multiple high-quality implementations appear, and we’re able to use them to achieve Signal-level security with far less effort.</p><p>Implementing MLS here wasn’t easy: it required a moderate amount of client modification, and the development and verification of an encrypted room-joining protocol. Nonetheless, we’re excited to be pioneering a standards-based approach that any customer can run on our network, and to share more details about how our implementation works. </p><p>We did not have to make any changes to the SFU to get end-to-end encryption working. Cloudflare’s SFU doesn’t care about the contents of the data forwarded on our data plane and whether it’s encrypted or not.</p>
    <div>
      <h2>Orange Meets: the basics </h2>
      <a href="#orange-meets-the-basics">
        
      </a>
    </div>
    <p>Orange Meets is a video calling application built on <a href="https://workers.cloudflare.com/"><u>Cloudflare Workers</u></a> that uses the <a href="https://developers.cloudflare.com/realtime/calls-vs-sfus/"><u>Cloudflare Realtime SFU service</u></a> as the data plane. The roles played by the three main entities in the application are as follows:</p><ul><li><p>The <i>user</i> is a participant in the video call. They connect to the Orange Meets server and SFU, described below.</p></li><li><p>The <i>Orange Meets Server </i>is a simple service on a Cloudflare Worker that runs the small-scale coordination logic of Orange Meets, which is concerned with which user is in which video call — called a <i>room </i>— and what the state of the room is. Whenever something in the room changes, like a participant joining or leaving, or someone muting themselves, the app server broadcasts the change to all room participants. You can use any backend server for this component; we just chose Cloudflare Workers for its convenience.</p></li><li><p>Cloudflare Realtime <i>Selective Forwarding Unit</i> (SFU) is a service that Cloudflare runs, which takes everyone’s audio and video and broadcasts it to everyone else. These connections are potentially lossy, using UDP for transmission. This is done because a dropped video frame from five seconds ago is not very important in the context of a video call, and so should not be re-sent, as it would be in a TCP connection.</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/61htaksggj580PqX02XoVB/3b0f1ed34ee681e41b2009257fdc8525/image2.png" />
          </figure><p><sup><i>The network topology of Orange Meets</i></sup></p><p>Next, we have to define what we mean by end-to-end encryption in the context of video chat.</p>
    <div>
      <h2>End-to-end encrypting Orange Meets </h2>
      <a href="#end-to-end-encrypting-orange-meets">
        
      </a>
    </div>
    <p>The most immediate way to end-to-end encrypt Orange Meets is to simply have the initial users agree on a symmetric encryption/decryption key at the beginning of a call, and just encrypt every video frame using that key. This is sufficient to hide calls from Cloudflare’s SFU. Some source-encrypted video conferencing implementations, such as <a href="https://jitsi.org/e2ee-in-jitsi/"><u>Jitsi Meet</u></a>, work this way.</p><p>The issue, however, is that kicking a malicious user from a call does not invalidate their key, since the keys are negotiated just once. A joining user learns the key that was used to encrypt video from before they joined. These failures are more formally referred to as failures of <i>post-compromise security</i> and <i>perfect forward secrecy</i>. When a protocol successfully implements these in a group setting, we call the protocol a <b>continuous group key agreement protocol</b>.</p><p>Fortunately for us, MLS is a continuous group key agreement protocol that works out of the box, and the nice folks at <a href="https://phnx.im/"><u>Phoenix R&amp;D</u></a> and <a href="https://cryspen.com/"><u>Cryspen</u></a> have a well-documented <a href="https://github.com/openmls/openmls/tree/main"><u>open-source Rust implementation</u></a> of most of the MLS protocol. </p><p>All we needed to do was write an MLS client and compile it to WASM, so we could decrypt video streams in-browser. We’re using WASM since that’s one way of running Rust code in the browser. If you’re running a video conferencing application on a desktop or mobile native environment, there are other MLS implementations in your preferred programming language.</p><p>Our setup for encryption is as follows:</p><p><b>Make a web worker for encryption.</b> We wrote a web worker in Rust that accepts a WebRTC video stream, broken into individual frames, and encrypts each frame. This code is quite simple, as it’s just an MLS encryption:</p>
            <pre><code>group.create_message(
	&amp;self.mls_provider,
	self.my_signing_keys.as_ref()?,
	frame,
)</code></pre>
            <p><b>Postprocess outgoing audio/video.</b> We take our normal stream and, using some newer features of the <a href="https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API"><u>WebRTC API</u></a>, add a transform step to it. This transform step simply sends the stream to the worker:</p>
            <pre><code>const senderStreams = sender.createEncodedStreams()
const { readable, writable } = senderStreams
this.worker.postMessage(
	{
    	    type: 'encryptStream',
    	    in: readable,
    	    out: writable,
	},
	[readable, writable]
)</code></pre>
            <p>And the same for decryption:</p>
            <pre><code>const receiverStreams = receiver.createEncodedStreams()
const { readable, writable } = receiverStreams
this.worker.postMessage(
	{
    	    type: 'decryptStream',
    	    in: readable,
    	    out: writable,
	},
	[readable, writable]
)</code></pre>
            <p>Once we do this for both audio and video streams, we’re done.</p>
    <div>
      <h2>Handling different codec behaviors</h2>
      <a href="#handling-different-codec-behaviors">
        
      </a>
    </div>
    <p>The streams are now encrypted before sending and decrypted before rendering, but the browser doesn’t know this. To the browser, the stream is still an ordinary video or audio stream. This can cause errors to occur in the browser’s depacketizing logic, which expects to see certain bytes in certain places, depending on the codec. This results in some extremely cypherpunk artifacts every dozen seconds or so:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/72baLJkLPZPdbjHjGVxSU5/2ea34b02826aacc2b23086b463a4938f/image3.png" />
          </figure><p>Fortunately, this exact issue was discovered by engineers at Discord, who handily documented it in their <a href="https://github.com/discord/dave-protocol/blob/main/protocol.md"><u>DAVE</u></a> E2EE videocalling protocol. For the VP8 codec, which we use by default, the solution is simple: split off the first 1–10 bytes of each packet, and send them unencrypted:</p>
            <pre><code>fn split_vp8_header(frame: &amp;[u8]) -&gt; Option&lt;(&amp;[u8], &amp;[u8])&gt; {
    // If this is a keyframe, keep 10 bytes unencrypted. Otherwise, 1 is enough
    let is_keyframe = *frame.first()? &gt;&gt; 7 == 0;
    let unencrypted_prefix_size = if is_keyframe { 10 } else { 1 };
    frame.split_at_checked(unencrypted_prefix_size)
}</code></pre>
            <p>These bytes are not particularly important to encrypt, since they only contain versioning info, whether or not this frame is a keyframe, some constants, and the width and height of the video.</p><p>And that’s truly it for the stream encryption part! The only thing remaining is to figure out how we will let new users join a room.</p>
    <div>
      <h2>“Join my Orange Meet” </h2>
      <a href="#join-my-orange-meet">
        
      </a>
    </div>
    <p>Usually, the only way to join the call is to click a link. And since the protocol is encrypted, a joining user needs to have some cryptographic information in order to decrypt any messages. How do they receive this information, though? There are a few options.</p><p>DAVE does it by using an MLS feature called <i>external proposals</i>. In short, the Discord server registers itself as an <i>external sender</i>, i.e., a party that can send administrative messages to the group, but cannot receive any. When a user wants to join a room, they provide their own cryptographic material, called a <i>key package</i>, and the server constructs and sends an MLS <a href="https://www.rfc-editor.org/rfc/rfc9420.html#section-12.1.8"><u>External Add message</u></a> to the group to let them know about the new user joining. Eventually, a group member will <i>commit</i> this External Add, sending the joiner a <i>Welcome</i> message containing all information necessary to send and receive video.
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1gQm3r3Bai8Rks4M82JuSh/87ff851a12505f5c17c241e3f1eade6a/image4.png" />
          </figure><p><sup><i>A user joining a group via MLS external proposals. Recall that the Orange Meets app server functions as a broadcast channel for the whole group. We consider a group of 3 members. We write member #2 as the one committing to the proposal, but this can be done by any member. Member #2 also sends a Commit message to the other members, but we omit this for space.</i></sup></p><p>This is a perfectly viable way to implement room joining, but it would require us to extend the Orange Meets server logic to have some concept of MLS. Since part of our goal is to keep things as simple as possible, we would like to do all our cryptography client-side.</p><p>So instead we do what we call the <i>designated committer</i> algorithm. When a user joins a group, they send their cryptographic material to one group member, the <i>designated committer</i>, who then constructs and sends the Add message to the rest of the group. Similarly, when notified of a user’s exit, the designated committer constructs and sends a Remove message to the rest of the group. With this setup, the server’s job remains nothing more than broadcasting messages! It’s quite simple, too: the full implementation of the designated committer state machine comes out to <a href="https://github.com/cloudflare/orange/blob/66e80d6d9146e2aedd4668e581810c0ee6aeb4a0/rust-mls-worker/src/mls_ops.rs#L90-L446"><u>300 lines of Rust</u></a>, including the MLS boilerplate, and it’s about as efficient as the external-sender approach.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3k3U7kFcYTwY81XzSrggt8/c27945dec311f251493826542704d370/image1.png" />
          </figure><p><sup><i>A user joining a group via the designated committer algorithm.</i></sup></p><p>One cool property of the designated committer algorithm is that something like this isn’t possible in a text group chat setting, since any given user (in particular, the designated committer) may be offline for an arbitrary period of time. Our method works because it leverages the fact that video calls are an inherently synchronous medium.</p>
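<p>As a sketch, the committer rule can be captured in a few lines of Rust. This is illustrative code with placeholder types (the names here are ours, not from the Orange Meets codebase, where the real state machine lives in <i>rust-mls-worker/src/mls_ops.rs</i>):</p>

```rust
// Illustrative sketch of the designated-committer rule, with placeholder types.

#[derive(Debug)]
enum PendingOp {
    Add(String),   // a joiner's key package (placeholder type)
    Remove(usize), // index of a member who left
}

struct Member {
    my_index: usize,
    alive: Vec<bool>,        // shared view of which group indices are still connected
    pending: Vec<PendingOp>, // Adds/Removes not yet committed
}

impl Member {
    /// The designated committer is the alive member with the smallest index.
    fn is_designated_committer(&self) -> bool {
        self.alive.iter().position(|&a| a) == Some(self.my_index)
    }

    /// Every member records pending ops, but only the designated committer
    /// commits them, so a successor can pick up where a crashed committer
    /// left off.
    fn on_event(&mut self, op: PendingOp) -> Vec<PendingOp> {
        self.pending.push(op);
        if self.is_designated_committer() {
            std::mem::take(&mut self.pending) // "commit": send Add/Remove/Welcome
        } else {
            Vec::new()
        }
    }
}
```

<p>Because every member tracks the pending operations, whichever member becomes the designated committer next can commit anything its predecessor left behind.</p>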
    <div>
      <h3>Verifying the Designated Committer Algorithm with TLA<sup>+</sup></h3>
      <a href="#verifying-the-designated-committer-algorithm-with-tla">
        
      </a>
    </div>
    <p>The designated committer algorithm is a pretty neat simplification, but it comes with some non-trivial edge cases that we need to make sure we handle, such as:</p><ul><li><p><i>How do we make sure there is only one designated committer at a time?</i> The designated committer is the alive user with the smallest index in the MLS group state, which all users share.</p></li><li><p><i>What happens if the designated committer exits?</i> Then the next user will take its place. Every user keeps track of pending Adds and Removes, so it can continue where the previous designated committer left off.</p></li><li><p><i>If a user has not caught up to all messages, could they think they’re the designated committer?</i> No: before assuming the role, a user must first believe that every prior eligible designated committer has disconnected.</p></li></ul><p>To make extra sure that this algorithm was correct, we formally modeled it and put it through the <a href="https://lamport.azurewebsites.net/tla/high-level-view.html"><u>TLA</u><u><sup>+</sup></u></a> model checker. To our surprise, it caught some low-level bugs! In particular, it found that, if the designated committer dies while adding a user, the protocol does not recover. We fixed these by breaking up MLS operations and enforcing a strict ordering on messages locally (e.g., a Welcome is always sent before its corresponding Add).</p><p>You can find an explainer, lessons learned, and the full <a href="https://learntla.com/core/index.html"><u>PlusCal</u></a> program (a high-level language that compiles to TLA<sup>+</sup>) <a href="https://github.com/cloudflareresearch/orange-e2ee-model-check"><u>here</u></a>. The caveat, as with any use of a bounded model checker, is that the checking is, well, bounded. We verified that no invalid protocol states are possible in a group of up to five users. We think this is good evidence that the protocol is correct for an arbitrary number of users. Because there are only two distinct roles in the protocol (designated committer and other group member), any weird behavior ought to be reproducible with two or three users, max.</p>
    <div>
      <h2>Preventing Monster-in-the-Middle attacks</h2>
      <a href="#preventing-monster-in-the-middle-attacks">
        
      </a>
    </div>
    <p>One important concern to address in any end-to-end encryption setup is how to prevent the service provider from replacing users’ key packages with their own. If the Orange Meets app server did this, and colluded with a malicious SFU to decrypt and re-encrypt video frames on the fly, then the SFU could see all the video sent through the network, and nobody would know.</p><p>To resolve this, like DAVE, we include a <i>safety number</i> in the corner of the screen for all calls. This number uniquely represents the cryptographic state of the group. If you check out-of-band (e.g., in a Signal group chat) that everyone agrees on the safety number, then you can be sure nobody’s key material has been secretly replaced.</p><p>In fact, you could also read the safety number aloud in the video call itself, but doing this is not provably secure. Reading a safety number aloud is an <i>in-band verification</i> mechanism, i.e., one where a party authenticates a channel within that channel. If a malicious app server colluding with a malicious SFU were able to construct believable video and audio of the user reading the safety number aloud, it could bypass this safety mechanism. So if your threat model includes adversaries that are able to break into a Worker and Cloudflare’s SFU, and simultaneously generate real-time deep-fakes, you should use out-of-band verification 😄.</p>
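<p>To illustrate, a safety number can be any short digest of the group’s shared cryptographic state that humans can compare aloud or over text. Here is a hedged sketch; it uses Rust’s non-cryptographic <i>DefaultHasher</i> purely as a stand-in for a real cryptographic hash (such as SHA-256 over MLS group state), so that the example runs without external crates:</p>

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Sketch: derive a short, human-comparable safety number from shared group
// state. A real implementation would hash cryptographic state with a real
// hash function; DefaultHasher here is only a stand-in.
fn safety_number(group_state: &[u8]) -> String {
    let mut h = DefaultHasher::new();
    group_state.hash(&mut h);
    // Truncate to a short decimal string that users can read to each other.
    format!("{:05}", h.finish() % 100_000)
}
```

<p>If everyone’s view of the group state is identical, everyone derives the same number; if the server swapped in its own key material for anyone, the derived numbers diverge.</p>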
    <div>
      <h2>Future work</h2>
      <a href="#future-work">
        
      </a>
    </div>
    <p>There are some areas we could improve on:</p><ul><li><p>There is another attack vector for a malicious app server: it is possible to simply serve users malicious JavaScript. This problem, more generally called the <a href="https://web.archive.org/web/20200731144044/https://www.nccgroup.com/us/about-us/newsroom-and-events/blog/2011/august/javascript-cryptography-considered-harmful/"><u>JavaScript Cryptography Problem</u></a>, affects any in-browser application where the client wants to hide data from the server. Fortunately, we are working on a standard to address this, called <a href="https://github.com/beurdouche/explainers/blob/main/waict-explainer.md"><u>Web Application Manifest Consistency, Integrity, and Transparency</u></a>. In short, like our <a href="https://blog.cloudflare.com/key-transparency/"><u>Code Verify</u></a> solution for WhatsApp, this would allow every website to commit to the JavaScript it serves, and have a third party create an auditable log of the code. With transparency, malicious JavaScript can still be distributed, but at least now there is a log that records the code.</p></li><li><p>We can make out-of-band authentication easier by placing trust in an identity provider. Using <a href="https://www.bastionzero.com/openpubkey"><u>OpenPubkey</u></a>, it would be possible for a user to get the identity provider to sign their cryptographic material, and then present that. Then all the users would check the signature before using the material. Transparency would also help here to ensure no signatures were made in secret.</p></li></ul>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>We built end-to-end encryption into the Orange Meets video chat app without a lot of engineering time, and by modifying just the client code. To do so, we built a WASM (compiled from Rust) <a href="https://github.com/cloudflare/orange/blob/e2ee/rust-mls-worker"><u>service worker</u></a> that sets up an <a href="https://www.rfc-editor.org/rfc/rfc9420.html"><u>MLS</u></a> group and does stream encryption and decryption. We also designed a new joining protocol for groups, called the <i>designated committer algorithm</i>, and <a href="https://github.com/cloudflareresearch/orange-e2ee-model-check"><u>formally modeled it in TLA</u><u><sup>+</sup></u></a>. We left comments marking all kinds of optimizations still to be done, so please send us a PR if you’re so inclined!</p><p>Try using Orange Meets with E2EE enabled at <a href="https://e2ee.orange.cloudflare.dev/"><u>e2ee.orange.cloudflare.dev</u></a>, or deploy your own instance using the <a href="https://github.com/cloudflare/orange"><u>open source repository</u></a> on GitHub.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Video]]></category>
            <category><![CDATA[Cloudflare Realtime]]></category>
            <guid isPermaLink="false">6X6FQzpKaqVyTLVk7rw6xm</guid>
            <dc:creator>Michael Rosenberg</dc:creator>
            <dc:creator>Kevin Kipp</dc:creator>
            <dc:creator>Renan Dincer</dc:creator>
            <dc:creator>Felipe Astroza Araya</dc:creator>
            <dc:creator>Mari Galicer</dc:creator>
        </item>
        <item>
            <title><![CDATA[TURN and anycast: making peer connections work globally]]></title>
            <link>https://blog.cloudflare.com/webrtc-turn-using-anycast/</link>
            <pubDate>Wed, 25 Sep 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ TURN servers relay media and data between devices when direct P2P connections are blocked or fail. Cloudflare Calls' TURN server uses anycast to eliminate the need to think about regions or scaling. ]]></description>
            <content:encoded><![CDATA[ <p>A <a href="https://www.cloudflare.com/learning/video/turn-server/"><u>TURN server</u></a> helps maintain connections during video calls when local networking conditions prevent participants from connecting directly to other participants. It acts as an intermediary, passing data between users when their networks block direct communication. TURN servers ensure that peer-to-peer calls go smoothly, even in less-than-ideal network conditions.</p><p>When building their own TURN infrastructure, developers often have to answer a few critical questions:</p><ol><li><p>“How do we build and maintain a mesh network that achieves near-zero latency to all our users?”</p></li><li><p>“Where should we spin up our servers?”</p></li><li><p>“Can we auto-scale reliably to be cost-efficient without hurting performance?”
</p></li></ol><p>In April, we launched Cloudflare Calls TURN in <a href="https://blog.cloudflare.com/cloudflare-calls-anycast-webrtc/"><u>open beta</u></a> to help answer these questions. Starting today, <a href="https://developers.cloudflare.com/calls/turn/"><u>Cloudflare Calls’ TURN service</u></a> is now generally available to all Cloudflare accounts. Our TURN server works on our anycast network, which helps deliver the global coverage and near-zero latency required by real-time applications.</p>
    <div>
      <h2>TURN solves connectivity and privacy problems for real-time apps</h2>
      <a href="#turn-solves-connectivity-and-privacy-problems-for-real-time-apps">
        
      </a>
    </div>
    <p>When Internet Protocol version 4 (IPv4, <a href="https://datatracker.ietf.org/doc/html/rfc791"><u>RFC 791</u></a>) was designed back in 1981, it was assumed that the 32-bit address space was big enough for all computers to be able to connect to each other. When IPv4 was created, billions of people didn’t have smartphones in their pockets and the idea of the Internet of Things didn’t exist yet. It didn’t take long for companies, ISPs, and even entire countries to realize they didn’t have enough IPv4 address space to meet their needs.</p>
    <div>
      <h3>NATs are unpredictable</h3>
      <a href="#nats-are-unpredictable">
        
      </a>
    </div>
    <p>Fortunately, you can have multiple devices share the same IP address because the most common protocols that run on top of IP are TCP and UDP, both of which support up to 65,535 port numbers. (Think of port numbers on an IP address as extensions behind a single phone number.) To solve this problem of IP scarcity, network engineers developed a way to share a single IP address across multiple devices by exploiting the port numbers. This is called Network Address Translation (NAT), and it is a process through which your router knows which packets to send to your smartphone versus your laptop or other devices, all of which are connecting to the public Internet through the IP address assigned to the router.</p><p>In a typical NAT setup, when a device sends a packet to the Internet, the NAT assigns a random, unused port to track it, keeping a forwarding table to map the device to the port. This allows NAT to direct responses back to the correct device, even if the source IP address and port vary across different destinations. The system works as long as the internal device initiates the connection and waits for the response.</p><p>However, real-time apps like video or audio calls are more challenging with NAT. Since NATs don't reveal how they assign ports, devices can't pre-communicate where to send responses, making it difficult to establish reliable connections. Earlier solutions like STUN (<a href="https://datatracker.ietf.org/doc/html/rfc3489"><u>RFC 3489</u></a>) couldn't fully solve this, which gave rise to the TURN protocol.</p><p>TURN predictably relays traffic between devices while ensuring minimal delay, which is crucial for real-time communication where even a second of lag can disrupt the experience.</p>
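<p>The forwarding-table mechanism can be sketched in a few lines of Rust. This is an illustrative model only, not a real NAT implementation:</p>

```rust
use std::collections::HashMap;

type Addr = (u32, u16); // (IPv4 address as a u32, port)

// Minimal sketch of a NAT forwarding table.
struct Nat {
    next_port: u16,
    // (internal source, remote destination) -> external port
    mappings: HashMap<(Addr, Addr), u16>,
}

impl Nat {
    // Outbound packet: assign (or reuse) an external port for this flow.
    fn outbound(&mut self, src: Addr, dst: Addr) -> u16 {
        if let Some(&p) = self.mappings.get(&(src, dst)) {
            return p;
        }
        let p = self.next_port;
        self.next_port += 1;
        self.mappings.insert((src, dst), p);
        p
    }

    // Inbound packet: forwarded only if the internal device already opened
    // this flow; unsolicited packets have no mapping and are dropped.
    fn inbound(&self, ext_port: u16, from: Addr) -> Option<Addr> {
        self.mappings
            .iter()
            .find(|&(&(_, dst), &p)| p == ext_port && dst == from)
            .map(|(&(src, _), _)| src)
    }
}
```

<p>Note that the inbound lookup only succeeds for the exact remote endpoint the device first contacted, which is precisely why unsolicited peer-to-peer packets get dropped and why real-time apps need STUN and TURN.</p>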
    <div>
      <h3>ICE to determine if a relay server is needed</h3>
      <a href="#ice-to-determine-if-a-relay-server-is-needed">
        
      </a>
    </div>
    <p>The <a href="https://datatracker.ietf.org/doc/html/rfc8445"><u>ICE (Interactive Connectivity Establishment) protocol</u></a> was designed to find the fastest communication path between devices. It works by testing multiple routes and choosing the one with the least delay. ICE determines whether a TURN server is needed to relay the connection when a direct peer-to-peer path cannot be established or is not performant enough.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/71IX3n5RLM24rwRhwrpM2E/1ef5ecbf98cc85a46e385f333d6cb90c/image3.png" />
            
            </figure><p><sup><i>How two peers (A and B) try to connect directly by sharing their public and local IP addresses using the ICE protocol. If the direct connection fails, both peers use the TURN server to relay their connection and communicate with each other.</i></sup></p><p>While ICE is designed to find the most efficient connection path between peers, it can inadvertently expose sensitive information, creating privacy concerns. During the ICE process, endpoints exchange a list of all possible network addresses, including local IP addresses, NAT IP addresses, and TURN server addresses. This comprehensive sharing of network details can reveal information about a user's network topology, potentially exposing their approximate geographic location or details about their local network setup.</p><p>The "brute force" nature of ICE, where it attempts connections on all possible paths, can create distinctive network traffic patterns that sophisticated observers might use to infer the use of specific applications or communication protocols. </p>
    <div>
      <h2>TURN solves privacy problems</h2>
      <a href="#turn-solves-privacy-problems">
        
      </a>
    </div>
    <p>The threat from exposing sensitive information while using real-time applications is especially important for people that use end-to-end encrypted messaging apps for sensitive information — for example, journalists who need to communicate with unknown sources without revealing their location.</p><p>With Cloudflare TURN in place, traffic is proxied through Cloudflare, preventing either party in the call from seeing client IP addresses or associated metadata. Cloudflare simply forwards the calls to their intended recipients, but never inspects the contents — the underlying call data is always end-to-end encrypted. This masking of network traffic is an added layer of privacy.</p><p>Cloudflare is a trusted third-party when it comes to operating these types of services: we have experience operating privacy-preserving proxies at scale for our <a href="https://blog.cloudflare.com/1111-warp-better-vpn/"><u>Consumer WARP</u></a> product, <a href="https://blog.cloudflare.com/icloud-private-relay/"><u>Apple’s Private Relay</u></a>, and <a href="https://blog.cloudflare.com/cloudflare-now-powering-microsoft-edge-secure-network/"><u>Microsoft Edge’s Secure Network</u></a>, preserving end-user privacy without sacrificing performance.  </p>
    <div>
      <h2>Cloudflare’s TURN is the fastest because of Anycast</h2>
      <a href="#cloudflares-turn-is-the-fastest-because-of-anycast">
        
      </a>
    </div>
    <p>Many real-time communication services run their own TURN servers on a commercial cloud provider because they don’t want to leave a certain percentage of their customers with non-working communication. This results in additional costs for DevOps, egress bandwidth, etc. And honestly, just deploying and running a TURN server, like <a href="https://github.com/coturn/coturn"><u>CoTURN</u></a>, in a VPS isn’t an interesting project for most engineers.</p><p>Because using a TURN relay adds extra delay for the packets to travel between the peers, the relays should be located as close as possible to the peers. Cloudflare’s TURN service avoids all these headaches by simply running in all of the <a href="https://www.cloudflare.com/network"><u>330 cities where Cloudflare has data centers</u></a>. And any time Cloudflare adds another city, the TURN service automatically becomes available there as well. </p>
    <div>
      <h3>Anycast is the perfect network topology for TURN</h3>
      <a href="#anycast-is-the-perfect-network-topology-for-turn">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/learning/cdn/glossary/anycast-network/"><u>Anycast</u></a> is a network addressing and routing methodology in which a single IP address is shared by multiple servers in different locations. When a client sends a request to an anycast address, the network automatically routes the request via <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/"><u>BGP</u></a> to the topologically nearest server. This is in contrast to unicast, where each destination has a unique IP address. Anycast allows multiple servers to have the same IP address, and enables clients to automatically connect to a server close to them. This is similar to emergency phone networks (911, 112, etc.) which connect you to the closest emergency communications center in your area.</p><p>Anycast allows for lower latency because of the sheer number of locations available around the world. Approximately 95% of the Internet-connected population globally is within approximately 50ms of a Cloudflare location. For real-time communication applications that use TURN, this proximity leads to improved call quality and user experience.</p>
    <div>
      <h3>Auto-scaling and inherently global</h3>
      <a href="#auto-scaling-and-inherently-global">
        
      </a>
    </div>
    <p>Running TURN over anycast allows for better scalability and global distribution. By naturally distributing load across multiple servers based on network topology, this setup helps balance traffic and improve performance. When you use Cloudflare’s TURN service, you don’t need to manage a list of servers for different parts of the world. And you don’t need to write custom scaling logic to scale VMs up or down based on your traffic.  </p><p>Anycast allows TURN to use fewer IP addresses, making it easier to allowlist in restrictive networks. Stateless protocols like DNS over UDP work well with anycast. This includes stateless STUN binding requests used to determine a system's external IP address behind a NAT.</p><p>However, stateful protocols over UDP, like QUIC or TURN, are more challenging with anycast. QUIC handles this better due to its stable connection ID, which load balancers can use to consistently route traffic. However, TURN/STUN lacks a similar connection ID. So when a TURN client sends requests to the Cloudflare TURN service, the <a href="https://blog.cloudflare.com/unimog-cloudflares-edge-load-balancer/"><u>Unimog load balancer</u></a> ensures that all its requests get routed to the same server within a data center. The challenges for the communication between a client on the Internet and Cloudflare services listening on an anycast IP address have been described <a href="https://blog.cloudflare.com/tag/loadbalancing/"><u>multiple times before</u></a>.</p>
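<p>The routing property described above can be sketched as hashing a flow’s 4-tuple to pick a server, so that every packet of the same TURN allocation lands on the same machine. This is only an illustration of the idea, not how Unimog actually works:</p>

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative sketch: deterministically map a client flow to one server in a
// data center by hashing its 4-tuple. Without a QUIC-style connection ID, the
// flow tuple is what keeps a TURN client's packets pinned to one machine.
fn pick_server(src_ip: u32, src_port: u16, dst_ip: u32, dst_port: u16, n_servers: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (src_ip, src_port, dst_ip, dst_port).hash(&mut h);
    h.finish() % n_servers
}
```

<p>As long as the client keeps the same source address and port, it hashes to the same server; if it changes networks mid-call, the flow lands elsewhere, which is one reason stateful protocols over UDP are harder with anycast.</p>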
    <div>
      <h3>How does Cloudflare's TURN server receive packets?</h3>
      <a href="#how-does-cloudflares-turn-server-receive-packets">
        
      </a>
    </div>
    <p>TURN servers act as relay points to help connect clients. This process involves two types of connections: the client-server connection and the third-party connection (relayed address).</p><p>The client-server connection uses <a href="https://developers.cloudflare.com/calls/turn/#_top"><u>published</u></a> IP and port information to communicate with TURN clients using anycast.</p><p>For the relayed address, using anycast poses a challenge. The TURN protocol requires that packets reach the specific Cloudflare server handling the client connection. If we used anycast for relay addresses, packets might not arrive at the correct data center or server.
</p><p>One alternative is to use unicast addresses for relay candidates. However, this approach has drawbacks, including making servers vulnerable to attacks and requiring many IP addresses.</p><p>To solve these issues, we've developed a middle-ground solution, previously discussed in “<a href="https://blog.cloudflare.com/cloudflare-servers-dont-own-ips-anymore/"><u>Cloudflare servers don't own IPs anymore – so how do they connect to the Internet?</u></a>”. We use anycast addresses but add extra handling for packets that reach incorrect servers. If a packet arrives at the wrong Cloudflare location, we forward it over our backbone to the correct data center, rather than sending it back over the public Internet.</p><p>This approach not only resolves routing issues but also improves TURN connection speed. Packets meant for the relay address enter the Cloudflare network as close to the sender as possible, optimizing the routing process.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2JxesKGit6hdK0NzaEduEk/5b248cbc27293b9dc1ccb1a5b5f7b615/image1.png" />
            
            </figure><p><sup><i>In this non-ideal setup, a TURN client connects to Cloudflare using Anycast, while a direct client uses Unicast, which would expose the TURN server to potential DDoS attacks.</i></sup></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/CI9Q5qMDC7xXcifH5l898/975e7f52416e0b737c250433aa68ee82/image2.png" />
            
            </figure><p><sup><i>The optimized setup uses Anycast for all TURN clients, allowing for dynamic load distribution across Cloudflare's globally distributed TURN servers.</i></sup></p>
    <div>
      <h2>Try Cloudflare Calls TURN today</h2>
      <a href="#try-cloudflare-calls-turn-today">
        
      </a>
    </div>
    <p>The new TURN feature of Cloudflare Calls addresses critical challenges in real-time communication:</p><ul><li><p><b>Connectivity</b>: By solving NAT traversal issues, TURN ensures reliable connections even in complex network environments.</p></li><li><p><b>Privacy</b>: Acting as an intermediary, TURN enhances user privacy by masking IP addresses and network details.</p></li><li><p><b>Performance</b>: Leveraging Cloudflare's global anycast network, our TURN service offers unparalleled speed and near-zero latency.</p></li><li><p><b>Scalability</b>: With presence in over 330 cities, Cloudflare Calls TURN grows with your needs.</p></li></ul><p>Cloudflare Calls TURN service is billed on a usage basis. It is available to self-serve and Enterprise customers alike. There is no cost for the first 1,000 GB (one terabyte) of Cloudflare Calls usage each month. It costs five cents per GB after your first terabyte of usage on self-serve. Volume pricing is available for Enterprise customers through your account team.</p><p>Switching TURN providers is likely as simple as changing a single configuration in your real-time app. To get started with Cloudflare’s TURN service, create a TURN app from your <a href="https://dash.cloudflare.com/?to=/:account/calls"><u>Cloudflare Calls Dashboard</u></a> or read the <a href="https://developers.cloudflare.com/calls/turn/"><u>Developer Docs</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Calls]]></category>
            <category><![CDATA[Anycast]]></category>
            <category><![CDATA[Load Balancing]]></category>
            <category><![CDATA[WebRTC]]></category>
            <category><![CDATA[TURN]]></category>
            <guid isPermaLink="false">EkJICbovEPPuOSElg8poy</guid>
            <dc:creator>Nils Ohlmeier</dc:creator>
            <dc:creator>Renan Dincer</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Calls: millions of cascading trees all the way down]]></title>
            <link>https://blog.cloudflare.com/cloudflare-calls-anycast-webrtc/</link>
            <pubDate>Thu, 04 Apr 2024 13:00:07 GMT</pubDate>
            <description><![CDATA[ Cloudflare Calls is a serverless SFU and TURN service running at Cloudflare’s edge. It’s now in open beta and costs $0.05/ real-time GB. It’s 100% anycast WebRTC ]]></description>
            <content:encoded><![CDATA[ <p>Following its initial <a href="/announcing-cloudflare-calls">announcement</a> in September 2022, Cloudflare Calls is now in open beta and available in your <a href="https://dash.cloudflare.com/?to=/:account/calls">Cloudflare Dashboard</a>. Cloudflare Calls lets developers build real-time audio/video apps using <a href="https://webrtc.org/">WebRTC</a>, and it abstracts away the complexity by turning the Cloudflare network into a singular SFU. In this post, we dig into how we make this possible.</p>
    <div>
      <h2>WebRTC growing pains</h2>
      <a href="#webrtc-growing-pains">
        
      </a>
    </div>
    <p>WebRTC is the <a href="https://caniuse.com/webtransport">only</a> way to send UDP traffic out of a web browser – everything else uses TCP.</p><p>As a developer, you need a UDP-based transport layer for applications demanding low latency and real-time feedback, such as audio/video conferencing and interactive gaming. This is because unlike WebSocket and other TCP-based solutions, UDP is not subject to head-of-line blocking, <a href="/the-quicening">a</a> <a href="/a-primer-on-proxies">frequent</a> <a href="/stream-now-supports-srt-as-a-drop-in-replacement-for-rtmp">topic</a> on the Cloudflare Blog.</p><p>When building a new video conferencing app, you typically start with a peer-to-peer web application using WebRTC, where clients exchange data directly. This approach is efficient for small-scale demos, but scalability issues arise as the number of participants increases. This is because the amount of data each client must transmit grows substantially, and the total number of streams grows quadratically with the number of participants, as each client needs to send data to the n-1 other clients.</p><p>Selective Forwarding Units (SFUs) play a pivotal role in scaling WebRTC applications. An SFU functions by receiving multiple media or data flows from participants and deciding which streams should be forwarded to other participants, thus acting as a media stream routing hub. This mechanism significantly reduces bandwidth requirements and improves scalability by managing stream distribution based on network conditions and participant needs. Even though <a href="https://arstechnica.com/information-technology/2012/05/skype-replaces-p2p-supernodes-with-linux-boxes-hosted-by-microsoft/">it hasn’t always been this way</a> since video calling on computers first became popular, SFUs are usually found in the cloud, rather than on clients’ home computers, because of the superior connectivity offered in a data center.</p>
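<p>To put rough numbers on the scaling problem, compare the total upload streams needed in a full mesh versus through an SFU (an illustrative sketch):</p>

```rust
// Why full meshes stop scaling: total upload streams, mesh vs. SFU.
fn mesh_upload_streams(n: u64) -> u64 {
    n * (n - 1) // every client uploads to each of the n-1 others
}

fn sfu_upload_streams(n: u64) -> u64 {
    n // every client uploads once, to the SFU
}
```

<p>With 10 participants, a full mesh needs 90 upload streams in total, while an SFU needs only 10.</p>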
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1eHksyrQ7iybPx8c9iTqOq/f795cf163c06ea3cd89a74f82fc50f7d/P2P-vs.-SFU.png" />
            
            </figure><p>A modern audio/video application thus quickly becomes complicated with the addition of this server side element. Since all clients connect to this central SFU server, there are numerous things to consider when you’re architecting and scaling a real-time application:</p><ul><li><p>How close is the SFU server location(s) to the end user clients, how is a client assigned to a server?</p></li><li><p>Where is the SFU hosted, and if it’s hosted in the cloud, what are the egress costs from VMs?</p></li><li><p>How many participants can fit in a “room”? Are all participants sending and receiving data? With cameras on? Audio only?</p></li><li><p>Some SFUs require the use of custom SDKs. Which platforms do these run on and are they compatible with the application you’re trying to build?</p></li><li><p>Monitoring/reliability/other issues that come with running infrastructure</p></li></ul><p>Some of these concerns, and the complexity of WebRTC infrastructure in general, has made the community look in <a href="https://datatracker.ietf.org/group/moq/about/">different</a> directions. However, it is clear that in 2024, WebRTC is alive and well with plenty of new and old uses. AI startups build characters that converse in real time, cars leverage WebRTC to stream live footage of their cameras to smartphones, and video conferencing tools are going strong.</p><p>WebRTC has been interesting to us for a while. Cloudflare Stream implemented <a href="/webrtc-whip-whep-cloudflare-stream">WHIP and WHEP</a> WebRTC video streaming protocols in 2022, which remain the lowest latency way to broadcast video. OBS Studio <a href="https://github.com/obsproject/obs-studio/commit/851a8c216e14617fb523951839f3bdb240e85141">implemented</a> WHIP broadcasting support as have a variety of <a href="https://softvelum.com/nimble/webrtc/">software</a> and <a href="https://www.ospreyvideo.com/talon-encoders">hardware</a> vendors alongside Cloudflare. 
In late 2022, we launched <a href="/announcing-cloudflare-calls">Cloudflare Calls</a> in closed beta. When we blogged about it back then, we were very impressed with how WebRTC fared, and spoke to many customers about their pain points as well as creative ideas the existing browser APIs can foster. We also saw other WebRTC-based apps like <a href="https://www.nytimes.com/2021/02/15/business/clubhouse.html">Clubhouse</a> rise in popularity and <a href="https://blog.x.com/en_us/topics/product/2021/spaces-is-here">Twitter Spaces</a> play a role in popular culture. Today, we see real-time applications of a different sort. Many AI projects <a href="https://blog.character.ai/new-feature-announcement-character-group-chat/">have impressive demos</a> with voice/video interactions. All of these apps are built with the same WebRTC APIs and system architectures.</p><p>We are confident that Cloudflare Calls is a new kind of WebRTC infrastructure you should try. When we set out to build Cloudflare Calls, we had a few ideas that we weren’t sure would work, but were worth trying:</p><ul><li><p>Build every WebRTC component on Anycast with a single IP address for DTLS, ICE, STUN, SRTP, SCTP, etc.</p></li><li><p>Don’t force an SDK – WebRTC APIs by themselves are enough, and allow for the most novel uses to shine, because the best developers always find ways to hit the limits of SDKs.</p></li><li><p>Deploy in all <a href="https://www.cloudflare.com/network">310+ cities</a> Cloudflare operates in – use every Cloudflare server, not just a subset.</p></li><li><p>Exchange <a href="https://developers.cloudflare.com/calls/https-api/">offer and answer over HTTP</a> between Cloudflare and the WebRTC client. This way there is only a single PeerConnection to manage.</p></li></ul><p>Now we know this is all possible, because we made it happen, and we think it’s the best experience a developer can get with pure WebRTC.</p>
    <div>
      <h2>Is Cloudflare Calls a real SFU?</h2>
      <a href="#is-cloudflare-calls-a-real-sfu">
        
      </a>
    </div>
    <p>Cloudflare is in the business of having computers in numerous places. Historically, our core competency was operating a caching HTTP reverse proxy, and we are <a href="/network-performance-update-security-week-2024">very good</a> at this. With Cloudflare Calls, we asked ourselves “how can we build a large distributed system that brings together our global network to form one giant <i>stateful</i> system that feels like a single machine?”</p><p>When using Calls, every PeerConnection automatically connects to the closest Cloudflare data center instead of a single server. Rather than connecting every client that needs to communicate with each other to a single server, anycast spreads out connections as much as possible to minimize the last-mile latency sourced from your ISP between your client and Cloudflare.</p><p>It’s good to minimize last-mile latency because once the data enters Cloudflare’s control, the underlying media can be managed carefully and routed through the Cloudflare <a href="/250-cities-is-just-the-start">backbone</a>. This is crucial for WebRTC applications where millisecond delays can significantly impact user experience. To give you a sense of the latency between Cloudflare’s data centers and end users, about 95% of the Internet-connected population is within 50ms of a Cloudflare data center. As I write this, I am about 20ms away, but in the past, I have been lucky enough to be connected to a <i>great</i> home Wi-Fi network less than 1ms away in Manhattan. “But you are just one user!” you might be thinking, so here is a chart from <a href="https://radar.cloudflare.com/quality/">Cloudflare Radar</a> showing recent global latency measurements:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5r4NGPsXDGl4e0FdhghTzC/533446ea771a5a0d0436c7646282fead/pasted-image-0-1.png" />
            
            </figure><p>This setup creates more opportunities for lost packets to be answered with retransmissions closer to users, and more opportunities for bandwidth adjustments.</p>
    <div>
      <h2>Eliminating SFU region selection</h2>
      <a href="#eliminating-sfu-region-selection">
        
      </a>
    </div>
    <p>A traditional challenge in WebRTC infrastructure involves the manual selection of Selective Forwarding Units (SFUs) based on geographic location to minimize latency. Some systems solve this problem by selecting a location for the SFU after the first user joins the “room”. This makes routing inefficient when the rest of the participants in the conversation are clustered elsewhere. The anycast architecture of Calls eliminates this issue. When a client initiates a connection, <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/">BGP</a> dynamically determines the closest data center. Each selected server only becomes responsible for the PeerConnections of the clients closest to it.</p><p>One might see this as a simpler way of managing servers, since there is no need to maintain a WebRTC load-balancing layer for traffic or CPU capacity between servers. However, anycast has its own challenges, and we couldn’t take a laissez-faire approach.</p>
    <div>
      <h2>Steps to establishing a PeerConnection</h2>
      <a href="#steps-to-establishing-a-peerconnection">
        
      </a>
    </div>
    <p>One of the challenging parts of assigning a server to a client PeerConnection is supporting dual-stack networking for backwards compatibility with clients that only support the older version of the Internet Protocol, IPv4.</p><p>Cloudflare Calls uses a single IP address per protocol, and our L4 <a href="/unimog-cloudflares-edge-load-balancer">load balancer</a> directs packets to a single server per client by hashing the 4-tuple {client IP, client port, destination IP, destination port}. This means that a dual-stack client’s <a href="https://webrtcforthecurious.com/docs/03-connecting/#connectivity-checks">ICE connectivity check</a> packets arrive at two different servers: one for IPv4 and one for IPv6.</p><p>ICE is not the only protocol used for WebRTC; there are also STUN and TURN for connectivity establishment. The actual media bits are encrypted using DTLS, which carries most of the data during a session.</p><p>DTLS packets don’t have any identifiers in them that would indicate they belong to a specific connection (unlike QUIC’s <a href="https://datatracker.ietf.org/doc/html/rfc9000">connection ID</a> field), so every server should be able to handle DTLS packets and obtain the necessary certificates to decrypt them for processing. DTLS encryption is negotiated at the <a href="https://webrtcforthecurious.com/docs/02-signaling/#what-is-the-session-description-protocol-sdp">SDP layer</a> using the HTTPS API.</p><p>The HTTPS API for Calls also lands on a different server than DTLS and ICE connectivity checks. Since DTLS packets need information from the SDP exchanged using the HTTPS API, and ICE connectivity checks depend on the HTTPS API for the userFragment and password fields in the connectivity check packets, it would be very useful for all of these to be available on one server. Yet in our setup, they’re not.</p>
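<p>To make the 4-tuple routing concrete, here is a toy sketch. The hash function and server count are illustrative assumptions; Unimog, the real load balancer, is far more sophisticated. It shows why a dual-stack client's IPv4 and IPv6 packets can land on different machines: the client IP in the tuple differs between the two protocols.</p>

```typescript
// Toy sketch of 4-tuple flow hashing at an L4 load balancer.
// Assumption: FNV-1a over the concatenated tuple; the real scheme differs.
type FourTuple = {
  clientIp: string;   // differs between a client's IPv4 and IPv6 paths
  clientPort: number;
  destIp: string;
  destPort: number;
};

function serverFor(tuple: FourTuple, serverCount: number): number {
  const key = `${tuple.clientIp}|${tuple.clientPort}|${tuple.destIp}|${tuple.destPort}`;
  let hash = 2166136261; // FNV-1a offset basis
  for (let i = 0; i < key.length; i++) {
    hash ^= key.charCodeAt(i);
    hash = Math.imul(hash, 16777619); // FNV prime
  }
  return (hash >>> 0) % serverCount; // deterministic per flow
}

// The same client over IPv4 and IPv6 hashes independently, so the two
// protocols will usually be pinned to different servers:
const viaV4 = serverFor({ clientIp: "203.0.113.7", clientPort: 51000, destIp: "198.51.100.1", destPort: 3478 }, 128);
const viaV6 = serverFor({ clientIp: "2001:db8::7", clientPort: 51000, destIp: "2001:db8:100::1", destPort: 3478 }, 128);
```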
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/44z2v70arwk48CPkKgJXKq/bd7611bc51a989c0662665e374ed9a50/Signaling.png" />
            
            </figure><p>Fippo and Gustavo of WebRTCHacks <a href="https://webrtchacks.com/how-cloudflare-glares-at-webrtc-with-whip-and-whep/">gracefully complained</a> about slow replies to ICE connectivity checks in their great article as they were digging into our WHIP implementation right around our announcement in 2022:</p><blockquote><p>Looking at the Wireshark dumps we see a surprisingly large amount of time pass between the first STUN request and the first STUN response – it was 1.8 seconds in the screenshot below.</p><p>In other tests, it was shorter, but still 600ms long.</p><p>After that, the DTLS packets do not get an immediate response, requiring multiple attempts. This ultimately leads to a call setup time of almost three seconds – way above the global average of 800ms <a href="https://medium.com/@fippo/how-long-does-the-dtls-handshake-take-86718dd966bf">Fippo has measured previously</a> (for the complete handshake, 200ms for the DTLS handshake). For Cloudflare with their extensive network, we expected this to be way below that average.</p></blockquote><p>Gustavo and Fippo observed the effects of different parts of the WebRTC negotiation landing on different servers. Since Cloudflare Calls unbundles the WebRTC protocol to make the entire network act like a single computer, at this critical moment, we need to form consensus across the network. We form consensus by configuring every server to handle any incoming PeerConnection just in time. When a packet arrives, if the server doesn’t know about it, it quickly learns the negotiated parameters from another server, such as the ufrag and the DTLS fingerprint from the SDP, and sends the appropriate response.</p>
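<p>The "learn just in time" flow can be sketched as follows. Everything here is illustrative: the <code>lookup</code> function stands in for whatever cross-data-center mechanism Calls actually uses, which this post does not detail.</p>

```typescript
// Sketch: a server answering ICE connectivity checks for connections it has
// never seen. On a cache miss it fetches the negotiated parameters (ICE
// password, DTLS fingerprint) from elsewhere in the network, then caches them.
interface NegotiatedParams {
  icePassword: string;     // needed to authenticate STUN responses
  dtlsFingerprint: string; // needed to validate the DTLS handshake
}

class EdgeServer {
  private cache = new Map<string, NegotiatedParams>();

  // `lookup` stands in for asking another server / shared network state.
  constructor(private lookup: (ufrag: string) => NegotiatedParams | undefined) {}

  // Handle an ICE connectivity check addressed to `ufrag`; returns the
  // password used to compute the response's MESSAGE-INTEGRITY, if known.
  handleConnectivityCheck(ufrag: string): string | undefined {
    let params = this.cache.get(ufrag);
    if (!params) {
      params = this.lookup(ufrag);               // slow path, taken once
      if (params) this.cache.set(ufrag, params); // fast path afterwards
    }
    return params?.icePassword;
  }
}
```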
    <div>
      <h2>Getting faster</h2>
      <a href="#getting-faster">
        
      </a>
    </div>
    <p>Even though we've sped up the process of forming consensus across the Cloudflare network, any delays incurred can still have odd side effects. For example, up until a few months ago, delays of a few hundred milliseconds caused slow connections in Chrome.</p><p>A connectivity check packet delayed by a few hundred milliseconds signals to Chrome that this is a high-latency network, even though every STUN message after that was replied to in less than 5-10ms. Chrome thus delays sending a USE-CANDIDATE attribute for a few seconds, degrading the user experience.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1sRoG86lCdvzcKJKEQof4D/8fa91aa893417bd1876d71fef8b52db7/image4-8.png" />
            
            </figure><p>Fortunately, Chrome also <a href="https://bugs.chromium.org/p/webrtc/issues/detail?id=3661">sends</a> the DTLS ClientHello before USE-CANDIDATE (behavior we’ve seen only on Chrome), so to help speed up Chrome, Calls treats incoming DTLS packets as if they were STUN packets with USE-CANDIDATE attributes.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Xhno0lLZDftcAHXLz9Btj/c5b7fad53f9bd3005b5f759067b4847c/image1-5.png" />
            
            </figure><p>After solving this issue with Chrome, PeerConnections globally now take about 100-250ms to get connected. This includes all consensus management, STUN packets, and a complete DTLS handshake.</p>
    <div>
      <h2>Sessions and Tracks are the building blocks of Cloudflare’s SFU, not rooms</h2>
      <a href="#sessions-and-tracks-are-the-building-blocks-of-cloudflares-sfu-not-rooms">
        
      </a>
    </div>
    <p>Once a PeerConnection is established to Cloudflare, we call this a Session. Many media Tracks or DataChannels can be published using a single Session, and each one is assigned a unique ID. These can then be subscribed to over any other PeerConnection anywhere around the world using that unique ID. Tracks can be published or subscribed to at any time during the lifecycle of the PeerConnection.</p><p>In the background, Cloudflare takes care of scaling through a fan-out architecture with cascading trees that are unique per track. This structure works by creating a hierarchy of nodes where the root node distributes the stream to intermediate nodes, which then fan out to end users. This significantly reduces the bandwidth required at the source and ensures scalability by distributing the load across the network. This simple but powerful architecture allows developers to build anything from 1:1 video calls to large 1:many or many:many broadcasting scenarios with Calls.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2yHByC3CWSsXH4ljdDsFqH/6f1d625c9e6b4e921b8fdb54a30cf843/Fan-out-architecutre.png" />
            
            </figure><p>There is no “room” concept in Cloudflare Calls. Each client can add as many tracks into a PeerConnection as they’d like. The limit is the bandwidth available between Cloudflare and the client, which in practice is limited on the client side. The signaling and the concept of a “room” are left to the application developer, who can choose to pull as many tracks as they’d like from the tracks they have pushed elsewhere into a PeerConnection. This allows developers to move participants into breakout rooms, back into a plenary room, and then into 1:1 rooms, all while keeping the same PeerConnection and MediaTracks active.</p><p>Cloudflare offers an unopinionated approach to bandwidth management, allowing for greater control in customizing logic to suit your business needs. There is no active bandwidth management or restriction on the number of tracks. The <a href="https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection/getStats">WebRTC Stats API</a> provides a standardized way to access data on packet loss and possible congestion, enabling you to incorporate client-side logic based on this information. For instance, if poor Wi-Fi connectivity leads to degraded service, your front end could inform the user through a notice and automatically reduce the number of video tracks for that client.</p>
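<p>As one example of such client-side logic, here is a possible policy with made-up thresholds; Calls does not prescribe any of this. In a browser, the input would come from the "inbound-rtp" entries of <code>RTCPeerConnection.getStats()</code>.</p>

```typescript
// Hypothetical client-side bandwidth policy driven by WebRTC stats.
// The thresholds are illustrative assumptions, not recommendations.
interface InboundRtpStats {
  kind: "audio" | "video";
  packetsReceived: number;
  packetsLost: number;
}

// Decide how many subscribed video tracks to keep, given observed loss.
function videoTracksToKeep(stats: InboundRtpStats[], current: number): number {
  const video = stats.filter((s) => s.kind === "video");
  const received = video.reduce((n, s) => n + s.packetsReceived, 0);
  const lost = video.reduce((n, s) => n + s.packetsLost, 0);
  const lossRate = lost / Math.max(1, received + lost);
  if (lossRate > 0.10) return Math.max(1, Math.floor(current / 2)); // heavy loss: halve
  if (lossRate > 0.03) return Math.max(1, current - 1);             // mild loss: shed one
  return current;                                                   // healthy: keep all
}
```

<p>The application stays in control: it can notify the user, unsubscribe from tracks, or do nothing at all.</p>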
    <div>
      <h2>“NACK shield” at the edge</h2>
      <a href="#nack-shield-at-the-edge">
        
      </a>
    </div>
    <p>The Internet can't guarantee timely and orderly delivery of packets, which makes retransmission mechanisms necessary, particularly in protocols like TCP. This ensures data eventually reaches its destination, despite possible delays. Real-time systems, however, need special consideration of these delays. A packet that is delayed past its deadline for rendering on the screen is worthless, but a packet that is lost can be recovered if it can be retransmitted within a very short period of time, on the order of milliseconds. This is where NACKs come into play.</p><p>A WebRTC client receiving data constantly checks for packet loss. When one or more packets don’t arrive at the expected time, or a sequence number discontinuity is seen in the receiving buffer, a special NACK packet is sent back to the source to ask for a packet retransmission.</p><p>In a peer-to-peer topology, the source of the data has to handle retransmissions for every participant that sends a NACK. When an SFU is used, the SFU can either send NACKs back to the source, or keep a buffer for each client to handle retransmissions itself.</p><p>This gets more complicated with Cloudflare Calls, since both the publisher and the subscriber connect to Cloudflare, likely to different servers and probably in different locations. In addition, there may be other Cloudflare data centers in the middle, either through <a href="/argo-v2">Argo</a>, or just as part of scaling to many subscribers on the same track.</p><p>It is common for SFUs to backpropagate NACK packets to the source, losing valuable time to recover packets. Calls goes beyond this and can handle NACK packets in the location closest to the user, which decreases overall latency. 
The latency advantage gives the packet a better chance of being recovered than with a centralized SFU or no NACK handling at all.</p><p>Since there may be several Cloudflare data centers between clients, packet loss within the Cloudflare network is also possible. We handle this by generating NACK packets within the network: at each hop, the receiving end can generate NACKs, and the lost packets are then either recovered at that hop or the NACKs are backpropagated to the publisher for recovery.</p>
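<p>The receiver-side trigger for a NACK is simply a gap in RTP sequence numbers. A simplified sketch (a real implementation handles 16-bit sequence number wraparound and reordering windows, and packs the result into RFC 4585 PID/BLP pairs; this toy does neither):</p>

```typescript
// Toy sketch: find the sequence numbers a receiver would NACK.
// Simplification: ignores 16-bit wraparound and in-flight reordering.
function missingSeqNums(received: number[]): number[] {
  const sorted = [...received].sort((a, b) => a - b);
  const missing: number[] = [];
  for (let i = 1; i < sorted.length; i++) {
    // any gap between consecutive received packets is a retransmission candidate
    for (let seq = sorted[i - 1] + 1; seq < sorted[i]; seq++) {
      missing.push(seq);
    }
  }
  return missing;
}
```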
    <div>
      <h2>Cloudflare Calls does TURN over Anycast too</h2>
      <a href="#cloudflare-calls-does-turn-over-anycast-too">
        
      </a>
    </div>
    <p>Separately from the SFU, Calls also offers a TURN service. TURN servers act as relay points for traffic between WebRTC clients like the browser and SFUs, particularly in scenarios where <a href="https://webrtcforthecurious.com/docs/03-connecting/#turn">direct communication is obstructed</a> by NATs or firewalls. TURN maintains an allocation of public IP addresses and ports for each session, ensuring connectivity even in restrictive network environments.</p><p>Cloudflare Calls’ TURN service supports a few ports to help with misbehaving middleboxes and firewalls:</p><ul><li><p>TURN-over-UDP on port 3478 (standard), and also port 53</p></li><li><p>TURN-over-TCP on ports 3478 and 80</p></li><li><p>TURN-over-TLS on ports 5349 and 443</p></li></ul><p>TURN works the same way as Calls: it is available over anycast and always connects to the closest data center.</p>
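<p>On the client, using those ports is a matter of listing them in the <code>iceServers</code> of a standard <code>RTCPeerConnection</code> configuration. The hostname and credentials below are placeholders, not real Calls values:</p>

```typescript
// Hypothetical RTCConfiguration covering the TURN ports listed above.
// "turn.example.com" and the credentials are placeholders.
const rtcConfig = {
  iceServers: [
    {
      urls: [
        "turn:turn.example.com:3478?transport=udp", // standard TURN-over-UDP
        "turn:turn.example.com:53?transport=udp",   // DNS port, for strict networks
        "turn:turn.example.com:3478?transport=tcp", // TCP fallback
        "turn:turn.example.com:80?transport=tcp",   // HTTP port
        "turns:turn.example.com:5349",              // TURN over TLS
        "turns:turn.example.com:443",               // HTTPS port, most permissive
      ],
      username: "placeholder-username",
      credential: "placeholder-credential",
    },
  ],
};
// In a browser: const pc = new RTCPeerConnection(rtcConfig);
```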
    <div>
      <h2>Pricing and how to get started</h2>
      <a href="#pricing-and-how-to-get-started">
        
      </a>
    </div>
    <p>Cloudflare Calls is now in open beta and available in your <a href="https://dash.cloudflare.com/?to=/:account/calls">Cloudflare Dashboard</a>. Depending on your use case, you can set up an SFU application and/or a TURN service with only a few clicks.</p><p>To kick off its open beta phase, Calls is available at no cost for a limited time. Starting May 15, 2024, customers will receive the first terabyte each month for free, with any usage beyond that charged at $0.05 per real-time gigabyte. Beta customers will be given at least 30 days to upgrade from the free beta to a paid subscription. Additionally, there are no charges for inbound traffic to Cloudflare. For volume pricing, talk to your account manager.</p><p>Cloudflare Calls is ideal if you are building new WebRTC apps. If you have existing SFU or TURN infrastructure, you may still consider using Calls alongside it. Building a bridge to Calls from other places is not difficult, as Cloudflare Calls supports standard WebRTC APIs and acts like just another WebRTC peer.</p>
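<p>Worked out as plain arithmetic (assuming 1 TB is billed as 1,000 GB; check your invoice for the exact accounting), the pricing above looks like this:</p>

```typescript
// Hypothetical monthly cost under the pricing described above.
// Assumption: 1 TB = 1,000 GB for billing purposes.
function monthlyCostUSD(gbUsed: number): number {
  const freeGB = 1000;    // first terabyte each month is free
  const ratePerGB = 0.05; // $0.05 per real-time gigabyte beyond that
  return Math.max(0, gbUsed - freeGB) * ratePerGB;
}
// e.g. 3,000 GB in a month: 1,000 GB free + 2,000 GB * $0.05 = $100
```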
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6aUAwCF6AWChLdjGL00wno/5cb404a2ebb199ea980fba92e0a57298/image6-2.png" />
            
            </figure><p>We understand that getting started with a new platform is difficult, so we’re also open sourcing our internal video conferencing app, Orange Meets. Orange Meets supports small and large conference calls by maintaining room state in Workers Durable Objects. It has screen sharing, client-side noise-canceling, and background blur. It is written with TypeScript and React and is <a href="https://github.com/cloudflare/orange">available on GitHub</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7k2EN8juPmF4nzC090wEiP/85261afe40dd5695165729525bf2824a/image5-3.png" />
            
            </figure>
    <div>
      <h2>We’re hiring</h2>
      <a href="#were-hiring">
        
      </a>
    </div>
    <p>We think the current state of Cloudflare Calls enables many use cases. Calls already supports publishing and subscribing to media tracks and DataChannels. Soon, it will support features like simulcasting.</p><p>But we’re just scratching the surface, and there is so much more to build on top of this foundation.</p><p>If you are passionate about WebRTC (and <a href="https://datatracker.ietf.org/group/moq/about/">other</a> real-time protocols!), the Media Platform team building the Calls product at Cloudflare is <a href="https://boards.greenhouse.io/cloudflare/jobs/5709759?gh_jid=5709759">hiring</a> and would love to talk to you.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Calls]]></category>
            <category><![CDATA[WebRTC]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">5PAERmEc4TrWEUDjtsd9dU</guid>
            <dc:creator>Renan Dincer</dc:creator>
            <dc:creator>Rachel Chen</dc:creator>
            <dc:creator>Felipe Astroza Araya</dc:creator>
            <dc:creator>Kevin Kipp</dc:creator>
            <dc:creator>Kazi Najib</dc:creator>
        </item>
        <item>
            <title><![CDATA[Bringing the best live video experience to Cloudflare Stream with AV1]]></title>
            <link>https://blog.cloudflare.com/av1-cloudflare-stream-beta/</link>
            <pubDate>Wed, 05 Oct 2022 17:08:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Stream now supports the AV1 codec for live video in open beta, unlocking live-streaming at higher resolution, with lower bandwidth ]]></description>
            <content:encoded><![CDATA[ <p>Consumer hardware is pushing the limits of consumers’ bandwidth.</p><p>VR headsets support 5760 x 3840 resolution — 22.1 million pixels <i>per frame</i> of video. Nearly all new TVs and smartphones sold today now support 4K — 8.8 million pixels per frame. It’s now normal for most people on a subway to be casually streaming video on their phone, even as they pass through a tunnel. People expect all of this to just work, and get frustrated when it doesn’t.</p><p>Consumer Internet bandwidth hasn’t kept up. Even advanced mobile carriers still limit streaming video resolution to prevent network congestion. Many mobile users still have to monitor and limit their mobile data usage. Higher Internet speeds require expensive infrastructure upgrades, and 30% of Americans still say they often have problems <a href="https://www.pewresearch.org/internet/2021/06/03/mobile-technology-and-home-broadband-2021/">simply connecting to the Internet at home</a>.</p><p>We talk to developers every day who are pushing up against these limits, trying to deliver the highest quality streaming video without buffering or jitter, challenged by viewers’ expectations and bandwidth. Developers building live video experiences hit these limits the hardest — buffering doesn’t just delay video playback, it can cause the viewer to get out of sync with the live event. Buffering can cause a sports fan to miss a key moment as playback suddenly skips ahead, or find out in a text message about the outcome of the final play, before they’ve had a chance to watch.</p><p>Today we’re announcing a big step towards breaking the ceiling of these limits — support in <a href="https://www.cloudflare.com/products/cloudflare-stream/">Cloudflare Stream</a> for the <a href="https://aomedia.org/av1-features/">AV1</a> codec for live videos and their recordings, available today to all Cloudflare Stream customers in open beta. 
Read <a href="https://developers.cloudflare.com/stream/viewing-videos/av1-playback/">the docs</a> to get started, or <a href="https://cool-sf-videos.pages.dev/">watch an AV1 video</a> from Cloudflare Stream in your web browser. AV1 is an open and royalty-free video codec that uses <a href="https://engineering.fb.com/2018/04/10/video-engineering/av1-beats-x264-and-libvpx-vp9-in-practical-use-case/">46% less bandwidth than H.264</a>, the most commonly used video codec on the web today.</p>
    <div>
      <h3>What is AV1, and how does it improve live video streaming?</h3>
      <a href="#what-is-av1-and-how-does-it-improve-live-video-streaming">
        
      </a>
    </div>
    <p>Every piece of information that travels across the Internet, from web pages to photos, requires data to be transmitted between two computers. A single character usually takes one byte, so a two-page letter would be 3600 bytes, or 3.6 kilobytes, of data transferred.</p><p>One pixel in a photo takes 3 bytes, one each for red, green and blue in the pixel. A 4K photo has 8,294,400 pixels, so it would take 24,883,200 bytes, or about 25 megabytes. A video is like a photo that changes 30 times a second, which would make almost 45 gigabytes per minute. That’s a lot!</p><p>To reduce the amount of bandwidth needed to stream video, before video is sent to your device, it is compressed using a codec. When your device receives video, it decodes this into the pixels displayed on your screen. These codecs are essential to both streaming and storing video.</p><p>Video compression codecs combine multiple advanced techniques, and are able to compress video to one percent of the original size, with your eyes barely noticing a difference. This also makes video codecs computationally intensive and hard to run. Smartphones, laptops and TVs have specific media decoding hardware, separate from the main CPU, optimized to decode specific protocols quickly, using the minimum amount of battery life and power.</p><p>Every few years, as researchers invent more efficient compression techniques, standards bodies release new codecs that take advantage of these improvements. Each generation of improvements in compression technology increases the requirements for computers that run them. With higher requirements, new chips are made available with increased compute capacity. These new chips allow your device to display higher quality video while using less bandwidth.</p><p>AV1 takes advantage of recent advances in compute to deliver video with dramatically fewer bytes, even compared to other relatively recent video codecs like VP9 and HEVC.</p>
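<p>The raw-video arithmetic above, worked out explicitly:</p>

```typescript
// Size of raw (uncompressed) video, from the figures above.
const bytesPerPixel = 3;                              // one byte each for R, G, B
const width = 3840, height = 2160;                    // 4K UHD
const pixelsPerFrame = width * height;                // 8,294,400 pixels
const bytesPerFrame = pixelsPerFrame * bytesPerPixel; // 24,883,200 bytes, about 25 MB
const fps = 30;
const bytesPerMinute = bytesPerFrame * fps * 60;      // about 44.8 GB per minute, raw
```

<p>Compressing that to roughly one percent of its size is what makes streaming practical at all.</p>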
    <div>
      <h3>AV1 leverages the power of new smartphone chips</h3>
      <a href="#av1-leverages-the-power-of-new-smartphone-chips">
        
      </a>
    </div>
    <p>One of the biggest developments of the past few years has been the rise of custom chip designs for smartphones. Much of what’s driven the development of these chips is the need for advanced on-device image and video processing, as companies compete on the basis of which smartphone has the best camera.</p><p>This means the phones we carry around have an incredible amount of compute power. One way to think about AV1 is that it shifts work from the network to the viewer’s device. AV1 is fewer bytes over the wire, but computationally harder to decode than prior formats. When AV1 was first announced in 2018, it was dismissed by some as too slow to encode and decode, but smartphone chips have become radically faster in the past four years, more quickly than many saw coming.</p><p>AV1 hardware decoding is already built into the latest Google Pixel smartphones as part of the Tensor chip design. The <a href="https://semiconductor.samsung.com/processor/mobile-processor/exynos-2200/">Samsung Exynos 2200</a> and <a href="https://www.mediatek.com/products/smartphones-2/dimensity-1000-series">MediaTek Dimensity 1000 SoC</a> mobile chipsets both support hardware accelerated AV1 decoding. It appears that Google will <a href="https://android.googlesource.com/platform/cts/+/9203e0379bbb8991cdfee39e2a894d236bfaca8e?cf_target_id=EB3A10F16F1C7B0D8AE3D87D702DDC4A">require</a> that all devices that support Android 14 support decoding AV1. And AVPlayer, the media playback API built into iOS and tvOS, now <a href="https://developer.apple.com/documentation/coremedia/1564239-video_codec_constants/kcmvideocodectype_av1">includes an option for AV1</a>, which hints at future support. It’s clear that the industry is heading towards hardware-accelerated AV1 decoding in the most popular consumer devices.</p><p>With hardware decoding comes battery life savings — essential for both today’s smartphones and tomorrow’s VR headsets. 
For example, a Google Pixel 6 with AV1 hardware decoding uses only minimal battery and CPU to decode and play our <a href="https://cool-sf-videos.pages.dev/">test video</a>:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1vbKm1xvDV6JE2Z9Jkn0rO/b0809152c1eb4f589c098530f254f223/image1-5.png" />
            
            </figure>
    <div>
      <h3>AV1 encoding requires even more compute power</h3>
      <a href="#av1-encoding-requires-even-more-compute-power">
        
      </a>
    </div>
    <p>Just as decoding is significantly harder for end-user devices, it is also significantly harder to encode video using AV1. When AV1 was announced in 2018, many doubted whether hardware would be able to encode it efficiently enough for the codec to be adopted quickly.</p><p>To demonstrate this, we encoded the 4K rendering of <a href="https://peach.blender.org/">Big Buck Bunny</a> (a classic among video engineers!) into AV1, using an AMD EPYC 7642 48-Core Processor with 256 GB RAM. This CPU continues to be a workhorse of our compute fleet, as we have <a href="/an-epyc-trip-to-rome-amd-is-cloudflares-10th-generation-edge-server-cpu/">written about previously</a>. We used the following command to re-encode the video, <a href="https://trac.ffmpeg.org/wiki/Encode/AV1">based on the example in the ffmpeg AV1 documentation</a>:</p><p><code>ffmpeg -i bbb_sunflower_2160p_30fps_normal.mp4 -c:v libaom-av1 -crf 30 -b:v 0 -strict -2 av1_test.mkv</code></p><p>Using a single core, encoding just two seconds of video at 30fps took over 30 minutes. Even if all 48 cores were used to encode, it would take at minimum over 43 seconds to encode just two seconds of video. Live encoding using only CPUs would require over 20 servers running at full capacity.</p><p>Special-purpose AV1 software encoders like <a href="https://github.com/xiph/rav1e">rav1e</a> and <a href="https://gitlab.com/AOMediaCodec/SVT-AV1">SVT-AV1</a> that run on general-purpose CPUs can encode somewhat faster than <a href="https://trac.ffmpeg.org/wiki/Encode/AV1">libaom-av1</a> with ffmpeg, but still consume a huge amount of compute power to encode AV1 in real time, requiring multiple servers running at full capacity in many scenarios.</p>
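<p>The "over 20 servers" figure follows directly from that measurement, taking the 43-second figure at face value:</p>

```typescript
// How far from real time the 48-core software encode is.
const wallClockSeconds = 43; // time for all 48 cores to encode the clip
const videoSeconds = 2;      // length of the encoded clip
const realtimeFactor = wallClockSeconds / videoSeconds; // 21.5x slower than real time
const serversNeeded = Math.ceil(realtimeFactor);        // ~22 such machines for one live stream
```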
    <div>
      <h3>Cloudflare Stream encodes your video to AV1 in real-time</h3>
      <a href="#cloudflare-stream-encodes-your-video-to-av1-in-real-time">
        
      </a>
    </div>
    <p>At Cloudflare, we control both the hardware and software on our network. So to solve the CPU constraint, we’ve installed dedicated AV1 hardware encoders, designed specifically to encode AV1 at blazing fast speeds. This end-to-end control is what lets us encode your video to AV1 in real time. This is entirely out of reach for most public cloud customers, including the video infrastructure providers who depend on them for compute power.</p><p>Encoding in real time means you can use AV1 for live video streaming, where saving bandwidth matters most. With a pre-recorded video, the client video player can fetch future segments of video well in advance, relying on a buffer that can be many tens of seconds long. With live video, buffering is constrained by latency — it’s not possible to build up a large buffer when viewing a live stream. There is less margin for error with live streaming, and every byte saved means that if a viewer’s connection is interrupted, it takes less time to recover before the buffer is empty.</p>
    <div>
      <h3>Stream lets you support AV1 with no additional work</h3>
      <a href="#stream-lets-you-support-av1-with-no-additional-work">
        
      </a>
    </div>
    <p>AV1 has a chicken-and-egg dilemma. And we’re helping solve it.</p><p>Companies with large video libraries often re-encode their entire content library to a new codec before using it. But AV1 is so computationally intensive that re-encoding whole libraries has been cost-prohibitive. Companies have to choose specific videos to re-encode, and guess ahead of time which content will be most viewed. This is particularly challenging for apps with user-generated content, where content can suddenly go viral, and viewer patterns are hard to anticipate.</p><p>This has slowed down the adoption of AV1 — content providers wait for more devices to support AV1, and device manufacturers wait for more content to use AV1. Which will come first?</p><p>With Cloudflare Stream there is no need to manually trigger re-encoding, re-upload video, or manage the bulk encoding of a large video library. This is a unique approach that is made possible by integrating encoding and delivery into a single product — it is not possible to encode on-demand using the old way of encoding first, and then pointing a CDN at a bucket of pre-encoded files.</p><p>We think this approach can accelerate the adoption of AV1. Consider a video app with millions of minutes of user-generated video. Most videos will never be watched again. In the old model, developers would have to spend huge sums of money to encode upfront, or pick and choose which videos to re-encode. With Stream, we can help anyone incrementally adopt AV1, without re-encoding upfront. As we work towards making AV1 Generally Available, we’ll be working to make supporting AV1 simple and painless, even for videos already uploaded to Stream, with no special configuration necessary.</p>
    <div>
      <h3>Open, royalty-free, and widely supported</h3>
      <a href="#open-royalty-free-and-widely-supported">
        
      </a>
    </div>
    <p>At Cloudflare, we are committed to open standards and <a href="/tag/patent-troll/">fighting patent trolls</a>. While there are multiple competing options for new video codecs, we chose to support AV1 first in part because it is open source and has royalty-free licensing.</p><p>Other codecs force device manufacturers to pay royalty fees in order to adopt their standard in consumer hardware, and their backers have been quick to file lawsuits against competing video codecs. The group behind the open and royalty-free VP8 and VP9 codecs has been pushing back against this model for more than a decade, and AV1 is the successor to these codecs, with support from all the <a href="https://aomedia.org/membership/members/">biggest technology companies</a>, both software and hardware. Beyond its technical accomplishments, AV1 is a clear message from the industry that the future of video encoding should be open, royalty-free, and free from patent litigation.</p>
    <div>
      <h3>Try AV1 right now with <b><i>your</i></b> live stream or live recording</h3>
      <a href="#try-av1-right-now-with-your-live-stream-or-live-recording">
        
      </a>
    </div>
    <p>Support for AV1 is currently in open beta. You can try using AV1 on your own live video with Cloudflare Stream right now — just add the <code>?betaCodecSuggestion=av1</code> query parameter to the HLS or DASH manifest URL for any live stream or live recording created after October 1st in Cloudflare Stream. <a href="https://developers.cloudflare.com/stream/viewing-videos/av1-playback/">Read the docs</a> to get started. If you don’t yet have a Cloudflare account, you can sign up <a href="https://dash.cloudflare.com/sign-up/stream">here</a> and start using Cloudflare Stream in just a few minutes.</p><p>We also have a recording of a live video, encoded using AV1, that you can watch <a href="https://cool-sf-videos.pages.dev/">here</a>. Note that Safari does not yet support AV1.</p><p>We encourage you to try AV1 with your test streams, and we’d love your feedback. Join our <a href="https://discord.com/invite/cloudflaredev/">Discord channel</a> and tell us what you’re building, and what kinds of video you’re interested in using AV1 with. We’d love to hear from you!</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Stream]]></category>
            <category><![CDATA[Video]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">1VBAndDQEb6dfu5nhqoTvb</guid>
            <dc:creator>Renan Dincer</dc:creator>
            <dc:creator>Brendan Irvine-Broque</dc:creator>
            <dc:creator>Chris Howells</dc:creator>
            <dc:creator>Ryan Schachte</dc:creator>
        </item>
        <item>
            <title><![CDATA[WebRTC live streaming to unlimited viewers, with sub-second latency]]></title>
            <link>https://blog.cloudflare.com/webrtc-whip-whep-cloudflare-stream/</link>
            <pubDate>Tue, 27 Sep 2022 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Stream now supports live streaming over WebRTC to unlimited concurrent viewers, using open standards WHIP and WHEP, with zero dependencies or client SDKs necessary. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Creators and broadcasters expect to be able to go live from anywhere, on any device. Viewers expect “live” to mean “real-time”. The protocols that power most live streams are unable to meet these growing expectations.</p><p>In talking to developers building live streaming into their apps and websites, we’ve heard near universal frustration with the limitations of existing live streaming technologies. Developers in 2022 rightly expect to be able to deliver low latency to viewers, broadcast reliably, and use web standards rather than old protocols that date back to the era of Flash.</p><p>Today, we’re excited to announce in open beta that Cloudflare Stream now supports live video streaming over WebRTC, with sub-second latency, to unlimited concurrent viewers. This is a new feature of Cloudflare Stream, and you can start using it right now in the Cloudflare Dashboard — read the <a href="https://developers.cloudflare.com/stream/webrtc-beta/">docs</a> to get started.</p><p>WebRTC with Cloudflare Stream leapfrogs existing tools and protocols, exclusively uses open standards with zero dependency on a specific SDK, and empowers any developer to build both low latency live streaming and playback into their website or app.</p>
    <div>
      <h3>The status quo of streaming live video is broken</h3>
      <a href="#the-status-quo-of-streaming-live-video-is-broken">
        
      </a>
    </div>
    <p>The status quo of streaming live video has high latency, depends on archaic protocols, and is incompatible with the way developers build apps and websites. A reasonable person’s expectations of what the Internet should be able to deliver in 2022 are simply unmet by the dominant set of protocols carried over from past eras.</p><p><b>Viewers</b> increasingly expect “live” to mean “real-time”. People want to place bets on sports broadcasts in real-time, interact with presenters and ask questions in real-time, and never feel behind their friends at a live event.</p><p>In practice, the HLS and DASH standards used to deliver video have 10+ seconds of latency. LL-HLS and LL-DASH bring this down to closer to 5 seconds, but only as a hack on top of the existing protocol that delivers segments of video in individual HTTP requests. Sending mini video clips over TCP simply cannot deliver video in real-time. HLS and DASH are here to stay, but aren’t the future of real-time live video.</p><p><b>Creators and broadcasters</b> expect to be able to go live from anywhere, on any device.</p><p>In practice, people creating live content are stuck with a limited set of native apps, and can’t go live using RTMP from a web browser. Because it’s built on top of TCP, the RTMP broadcasting protocol struggles under even the slightest network disruption, making it a poor or often unworkable option when broadcasting from mobile networks. 
RTMP, originally built for use with Adobe Flash Player, was <a href="https://rtmp.veriskope.com/pdf/rtmp_specification_1.0.pdf">last updated in 2012</a>, and while Stream supports the <a href="/magic-hdmi-cable/">newer SRT protocol</a>, creators need an option that works natively on the web and can more easily be integrated in native apps.</p><p><b>Developers</b> expect to be able to build using standard APIs that are built into web browsers and native apps.</p><p>In practice, RTMP can’t be used from a web browser, and creating a native app that supports RTMP broadcasting typically requires diving into lower-level programming languages like C and Rust. Only those with expertise in both live video protocols and these languages have full access to the tools needed to create novel live streaming client applications.</p>
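<p>A rough rule of thumb for why segment-based delivery is slow: players typically stay a few segments behind the live edge, so latency scales with segment duration. A hedged sketch (the segment durations and buffer depth below are illustrative, not measurements):</p>

```javascript
// Approximate live latency for segmented HTTP delivery: players commonly
// buffer ~3 segments (or LL-HLS partial segments) behind the live edge.
function approxHlsLatencySeconds(segmentSeconds, segmentsBuffered = 3) {
  return segmentSeconds * segmentsBuffered;
}

const classicHls = approxHlsLatencySeconds(6); // 6s segments -> ~18s behind live
const llHls = approxHlsLatencySeconds(2);      // 2s parts    -> ~6s behind live
```

Even with very short parts, each hop still waits for a whole chunk over TCP, which is why this approach bottoms out at seconds rather than milliseconds.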
    <div>
      <h3>We’re solving this by using new open WebRTC standards: WHIP and WHEP</h3>
      <a href="#were-solving-this-by-using-new-open-webrtc-standards-whip-and-whep">
        
      </a>
    </div>
    <p>WebRTC is the real-time communications protocol, supported across all web browsers, that powers video calling services like Zoom and Google Meet. Since inception it’s been designed for real-time, ultra low-latency communications.</p><p>While WebRTC is well established, for most of its history it’s lacked standards for:</p><ul><li><p><b>Ingestion</b> — how broadcasters should <b><i>send</i></b> media content (akin to RTMP today)</p></li><li><p><b>Egress</b> — how viewers request and <b><i>receive</i></b> media content (akin to DASH or HLS today)</p></li></ul><p>As a result, developers have had to implement this on their own, and client applications on both sides are often tightly coupled to provider-specific implementations. Developers we talk to often express frustration, having sunk months of engineering work into building around a specific vendor’s SDK, unable to switch without a significant rewrite of their client apps.</p><p>At Cloudflare, our mission is broader — we’re helping to build a better Internet. Today we’re launching not just a new feature of Cloudflare Stream, but a vote of confidence in new WebRTC standards for both ingestion and egress. We think you should be able to start using Stream without feeling locked into an SDK or implementation specific to Cloudflare, and we’re committed to using open standards whenever possible.</p><p>For ingestion, <a href="https://www.ietf.org/archive/id/draft-ietf-wish-whip-03.html">WHIP</a> is an IETF draft on the Standards Track, with many applications already successfully using it in production. For delivery (egress), <a href="https://www.ietf.org/id/draft-murillo-whep-00.html">WHEP</a> is an IETF draft with broad agreement. Combined, they provide a standardized end-to-end way to broadcast one-to-many over WebRTC at scale.</p><p><b>Cloudflare Stream is the first cloud service to let you both broadcast using WHIP and playback using WHEP — no vendor-specific SDK needed.</b> Here’s how it works:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/19Hq5GYMLCWmxGoKQBiifO/0cd900cb0c07cfac25f14c2486a3cb77/image2-44.png" />
            
            </figure><p>Cloudflare Stream is already built on top of the Cloudflare developer platform, using Workers and Durable Objects running on Cloudflare’s global network, within 50ms of 95% of the world’s Internet-connected population.</p><p>Our WebRTC implementation extends this to relay WebRTC video through our network. Broadcasters stream video using WHIP to the point of presence closest to their location, which tells the Durable Object where the live stream can be found. Viewers request streaming video from the point of presence closest to them, which asks the Durable Object where to find the stream, and video is routed through Cloudflare’s network, all with sub-second latency.</p><p>Using Durable Objects, we achieve this with zero centralized state. And just like the rest of Cloudflare Stream, you never have to think about regions, both in terms of pricing and product development.</p><p>While existing ultra low-latency streaming providers charge significantly more to stream over WebRTC, because Stream runs on Cloudflare’s global network, we’re able to offer WebRTC streaming at the same price as delivering video over HLS or DASH. We don’t think you should be penalized with higher pricing when choosing which technology to rely on to stream live video. Once generally available, WebRTC streaming will cost $1 per 1000 minutes of video delivered, just like the rest of Stream.</p>
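<p>As a conceptual sketch only (not Cloudflare’s actual implementation), the routing described above can be modeled as a registry, playing the role of the Durable Object, that records which location a stream entered the network at. The location codes and stream IDs below are made up:</p>

```javascript
// Simplified in-memory model of the lookup: one registry per stream
// records the ingest location; viewers' local locations consult it to
// learn where to pull the stream from.
class StreamRegistry {
  constructor() {
    this.homes = new Map();
  }
  // Broadcaster connects via WHIP to its nearest location and registers.
  register(streamId, ingestLocation) {
    this.homes.set(streamId, ingestLocation);
  }
  // A viewer's nearest location asks where the stream can be found.
  lookup(streamId) {
    return this.homes.get(streamId);
  }
}

const registry = new StreamRegistry();
registry.register("live-abc123", "SJC"); // broadcaster near San Francisco
const source = registry.lookup("live-abc123"); // viewer's location pulls from "SJC"
```

The real system distributes this state via Durable Objects, so no central database is consulted on the hot path.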
    <div>
      <h3>What does sub-second latency let you build?</h3>
      <a href="#what-does-sub-second-latency-let-you-build">
        
      </a>
    </div>
    <p>Ultra low latency unlocks interactivity within your website or app, removing the time delay between creators, in-person attendees, and those watching remotely.</p><p>Developers we talk to are building everything from live sports betting, to live auctions, to live viewer Q&amp;A and even real-time collaboration in video post-production. Even streams without in-app interactivity can benefit from real-time — no sports fan wants to get a text from their friend at the game that ruins the moment, before they’ve had a chance to watch the final play. Whether you’re bringing an existing app or have a new idea in mind, we’re excited to see what you build.</p>
    <div>
      <h3>If you can write JavaScript, you can let your users go live from their browser</h3>
      <a href="#if-you-can-write-javascript-you-can-let-your-users-go-live-from-their-browser">
        
      </a>
    </div>
    <p>While hobbyist and professional creators might take the time to download and learn how to use an application like <a href="https://obsproject.com/">OBS Studio</a>, most Internet users won’t get past this friction of new tools, and copying RTMP keys from one tool to another. To empower more people to go live, they need to be able to broadcast from within your website or app, just by enabling access to the camera and microphone.</p><p>Cloudflare Stream with WebRTC lets you build live streaming into your app as a front-end developer, without any special knowledge of video protocols. And our approach, using the WHIP and WHEP open standards, means you can do this with zero dependencies, with 100% your code that you control.</p>
    <div>
      <h3>Go live from a web browser with just a few lines of code</h3>
      <a href="#go-live-from-a-web-browser-with-just-a-few-lines-of-code">
        
      </a>
    </div>
    <p>You can go live right now, from your web browser, by creating a live input in the <a href="https://dash.cloudflare.com/?to=/:account/stream/inputs">Cloudflare Stream dashboard</a>, and pasting a URL into the example linked below.</p><p>Read the <a href="https://developers.cloudflare.com/stream/webrtc-beta/">docs</a> or <a href="https://workers.new/stream/webrtc">run the example code below in your browser using Stackblitz</a>.</p>
            <pre><code>&lt;video id="input-video" autoplay autoplay muted&gt;&lt;/video&gt;</code></pre>
            
            <pre><code>import WHIPClient from "./WHIPClient.js";

const url = "&lt;WEBRTC_URL_FROM_YOUR_LIVE_INPUT&gt;";
const videoElement = document.getElementById("input-video");
const client = new WHIPClient(url, videoElement);</code></pre>
            <p>This example uses a sample WHIP client, written in just 100 lines of JavaScript, using APIs that are native to web browsers, with zero dependencies. But because WHIP is an open standard, you can use any WHIP client you choose. Support for WHIP is growing across the video streaming industry — it has recently been added to <a href="https://gstreamer.freedesktop.org/">GStreamer</a>, and one of the authors of the WHIP specification has written a <a href="https://github.com/medooze/whip-js">JavaScript client implementation</a>. We intend to support the full <a href="https://www.ietf.org/archive/id/draft-ietf-wish-whip-03.html">WHIP specification</a>, including support for <a href="https://www.rfc-editor.org/rfc/rfc8838">Trickle ICE</a> for fast NAT traversal.</p>
    <div>
      <h3>Play a live stream in a browser, with sub-second latency, no SDK required</h3>
      <a href="#play-a-live-stream-in-a-browser-with-sub-second-latency-no-sdk-required">
        
      </a>
    </div>
    <p>Once you’ve started streaming, copy the playback URL from the live input you just created, and paste it into the example linked below.</p><p>Read the <a href="https://developers.cloudflare.com/stream/webrtc-beta/">docs</a> or <a href="https://workers.new/stream/webrtc">run the example code below in your browser using Stackblitz</a>.</p>
            <pre><code>&lt;video id="playback" controls autoplay muted&gt;&lt;/video&gt;</code></pre>
            
            <pre><code>import WHEPClient from './WHEPClient.js';
const url = "&lt;WEBRTC_PLAYBACK_URL_FROM_YOUR_LIVE_INPUT&gt;";
const videoElement = document.getElementById("playback");
const client = new WHEPClient(url, videoElement);</code></pre>
            <p>Just like the WHIP example before, this one uses a sample WHEP client we’ve written that has zero dependencies. WHEP is an earlier IETF draft than WHIP, <a href="https://www.ietf.org/id/draft-murillo-whep-00.html">published in July of this year</a>, but adoption is moving quickly. People in the community have already written open-source client implementations in both <a href="https://github.com/medooze/whip-js/blob/main/whep.js">JavaScript</a> and <a href="https://github.com/meetecho/simple-whep-client">C</a>, with more to come.</p>
    <div>
      <h3>Start experimenting with real-time live video, in open beta today</h3>
      <a href="#start-experimenting-with-real-time-live-video-in-open-beta-today">
        
      </a>
    </div>
    <p>WebRTC streaming is in open beta today, ready for you to use as an integrated feature of <a href="https://www.cloudflare.com/products/cloudflare-stream/">Cloudflare Stream</a>. Once Generally Available, WebRTC streaming will be priced like the rest of Cloudflare Stream, based on minutes of video delivered and minutes stored.</p><p><a href="https://developers.cloudflare.com/stream/webrtc-beta/">Read the docs</a> to get started.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Stream]]></category>
            <category><![CDATA[Video]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">5PQXX1PxT5vsDahi24H7Cn</guid>
            <dc:creator>Kyle Boutette</dc:creator>
            <dc:creator>Kenny Luong</dc:creator>
            <dc:creator>Brendan Irvine-Broque</dc:creator>
            <dc:creator>Jacob Curtis</dc:creator>
            <dc:creator>Rachel Chen</dc:creator>
            <dc:creator>Felipe Astroza Araya</dc:creator>
            <dc:creator>Renan Dincer</dc:creator>
        </item>
        <item>
            <title><![CDATA[Build real-time video and audio apps on the world’s most interconnected network]]></title>
            <link>https://blog.cloudflare.com/announcing-cloudflare-calls/</link>
            <pubDate>Tue, 27 Sep 2022 13:00:00 GMT</pubDate>
            <description><![CDATA[ We are announcing Cloudflare Calls, a new product that lets developers build real-time audio and video apps ]]></description>
            <content:encoded><![CDATA[ <p></p><p>In the last two years, there has been a rapid rise in real-time apps that help groups of people get together virtually with near-zero latency. User expectations have also increased: your users expect real-time video and audio features to work flawlessly. We found that developers building real-time apps want to spend less time building and maintaining low-level infrastructure. Developers also told us they want to spend more time building features that truly make their idea special.</p><p>So today, we are announcing a new product that lets developers build real-time audio/video apps. Cloudflare Calls exposes a set of APIs that allows you to build things like:</p><ul><li><p>A video conferencing app with a custom UI</p></li><li><p>An interactive conversation where the moderators can invite select audience members “on stage” as speakers</p></li><li><p>A privacy-first group workout app where only the instructor can view all the participants while the participants can only view the instructor</p></li><li><p>Remote 'fireside chats' where one or multiple people can have a video call with an audience of 10,000+ people in real time (&lt;100ms delay)</p></li></ul><p>The protocol that makes all this possible is WebRTC. And Cloudflare Calls is the product that abstracts away the complexity by turning the Cloudflare network into a “super peer,” helping you build reliable and secure real-time experiences.</p>
    <div>
      <h3>What is WebRTC?</h3>
      <a href="#what-is-webrtc">
        
      </a>
    </div>
    <p>WebRTC is a peer-to-peer protocol that enables two or more users’ devices to talk to each other <i>directly</i> and without leaving the browser. In a native implementation, peer-to-peer typically works well for 1:1 calls with only two participants. But as you add additional participants, it is common for participants to experience reliability issues, including video freezes and participants getting out of sync. Why? Because as the number of participants increases, the coordination overhead between users’ devices also increases. Each participant needs to send media to every other participant, so the number of streams on the wire grows quadratically with the number of participants.</p><p>A selective forwarding unit (SFU) solves this problem. An SFU is a system that connects users with each other in real-time apps by intelligently managing and routing video and audio data between the participants. Apps that use an SFU reduce the data capacity required from each user because each user doesn’t have to send data to every other user. An SFU becomes a required part of a real-time application when the application needs to determine who is currently speaking, or to send video at the appropriate resolution when WebRTC simulcast is used.</p>
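<p>The difference can be sketched with a little arithmetic. In a full mesh, every participant sends its media to every other participant; with an SFU, each device sends one copy upstream and receives the rest downstream:</p>

```javascript
// Stream counts for an n-way call.
// Full mesh: each of n participants sends to the other n-1.
function meshStreams(n) {
  return n * (n - 1); // total streams on the wire
}

// SFU: each participant sends 1 copy up and receives n-1 down.
function sfuStreamsPerParticipant(n) {
  return 1 + (n - 1);
}

meshStreams(5);               // 20 total streams, 4 outgoing per device
sfuStreamsPerParticipant(5);  // 5 streams per device, only 1 outgoing
```

Upstream bandwidth is usually the scarce resource on residential connections, so cutting outgoing copies from n-1 to 1 is what makes larger calls viable.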
    <div>
      <h3>Beyond SFUs</h3>
      <a href="#beyond-sfus">
        
      </a>
    </div>
    <p>The centralized nature of an SFU is also its weakness. A centralized WebRTC server needs a region, which means that it will be slow in most parts of the world for most users while being fast for only a few select regions.</p><p>Typically, SFUs are built on public clouds. They consume a lot of bandwidth by both receiving and sending high resolution media to many devices. And they come with significant devops overhead requiring your team to manually configure regions and scalability.</p><p>We realized that merely offering an SFU-as-a-service wouldn’t solve the problem of cost and bandwidth efficiency.</p>
    <div>
      <h3>Biggest WebRTC server in the world</h3>
      <a href="#biggest-webrtc-server-in-the-world">
        
      </a>
    </div>
    <p>When you are on a five-person video call powered by a classic WebRTC implementation, each person’s device talks directly with the others. In WebRTC parlance, each of the five participants is called a <i>peer.</i> And the reliability of the five-person call will only be as good as the reliability of the person (or peer) with the weakest Internet connection.</p><p>We built Calls with a simple premise: <i>“What if Cloudflare could act as a WebRTC peer?”</i> Calls is a “super peer”, or a “giant server that spans the whole world”, that allows applications to be built beyond the limitations of the lowest common denominator peer or a centralized SFU. Developers can focus on the strength of their app instead of trying to compensate for the weaknesses of the weakest peer in a p2p topology.</p><p>Calls does not use the traditional SFU topology where every participant connects to a centralized server in a single location. Instead, each participant connects to their local Cloudflare data center. When another participant wants to retrieve that media, the datacenter that is home to the original media stream is found and the tracks are forwarded between datacenters automatically. If two participants are physically close, their media does not travel around the world to a centralized region; instead, they use the same datacenter, greatly reducing latency and improving reliability.</p><p>Calls is a configurable, global, regionless WebRTC server that is the size of Cloudflare's ever-growing network. The WebRTC protocol enables peers to send and receive <i>media tracks.</i> When you are on a video call, your computer is typically sending <i>two</i> tracks: one that contains the audio of you speaking and another that contains the video stream from your camera. Calls implements the WebRTC <a href="https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection">RTCPeerConnection</a> API across the Cloudflare network, where users can push media tracks. Calls also exposes an API where other media tracks can be requested within the same Peer Connection context.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/55LIoL4VrFNTx2wTimkShk/473c3054a27deac146389dc237c45a08/image2-41.png" />
            
            </figure><p>Cloudflare Calls is a good fit if you operate your own WebRTC server, such as Janus or MediaSoup. It can replace existing deployments of these tools, especially in cases where you have clients connecting globally to a single, centralized deployment.</p>
    <div>
      <h2>Region: Earth</h2>
      <a href="#region-earth">
        
      </a>
    </div>
    <p>Building and maintaining your own real-time infrastructure comes with unique architecture and scaling challenges. It requires you to answer and constantly revise your answers to thorny questions such as <i>“which regions do we support?”</i>, “<i>how many users do we need to justify spinning up more infrastructure in yet another cloud region?</i>”, <i>“how do we scale for unplanned spikes in usage?”</i> and <i>“how do we not lose money during low-usage hours of our infrastructure?”</i> when you run your own WebRTC server infrastructure.</p><p>Cloudflare Calls eliminates the need to answer these questions. Calls uses <a href="https://www.cloudflare.com/learning/cdn/glossary/anycast-network/">anycast</a> for every connection, so every packet is always routed to the closest Cloudflare location. It is global by nature: your users are automatically served from a location close to them. Calls scales with your use and your team doesn’t have to build its own auto-scaling logic.</p><p>Calls runs on every Cloudflare location and every single Cloudflare server. Because the Cloudflare network is within 10 milliseconds of 90% of the world’s population, it does not add any noticeable latency.</p>
    <div>
      <h2>Answer “where’s the problem?”, only faster</h2>
      <a href="#answer-wheres-the-problem-only-faster">
        
      </a>
    </div>
    <p>When we talk to customers with existing WebRTC workloads, there is one consistent theme: customers wish it were easier to troubleshoot issues. When a group of people are talking over a video call, the stakes are much higher when users experience issues. When a web page fails to load, it is common for users to simply retry after a few minutes. When a video call is disrupted, it is often the end of the call.</p><p>Cloudflare Calls’ focus on observability will help customers get to the bottom of issues faster. Because Calls is built on Cloudflare’s infrastructure, we have end-to-end visibility from all layers of the OSI model.</p><p>Calls provides a server-side view of the <a href="https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_Statistics_API">WebRTC Statistics API</a>, so you can drill into issues with each Peer Connection and the flow of media within it, without depending only on data sent from clients. We chose this because the Statistics API is a standardized place where developers are used to getting information about their experience. It is the same API available in browsers, and you might already be using it today to gain insight into the performance of your WebRTC connections.</p>
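<p>To illustrate the kind of data the Statistics API exposes, here is a hedged sketch that summarizes packet loss from report objects shaped like those a browser returns from <code>RTCPeerConnection.getStats()</code>; the report values below are mocked, not taken from a live connection:</p>

```javascript
// getStats() yields a map of typed reports; "inbound-rtp" reports carry
// packetsReceived and packetsLost counters per incoming media stream.
function packetLossRatio(reports) {
  const inbound = reports.filter((r) => r.type === "inbound-rtp");
  const received = inbound.reduce((sum, r) => sum + r.packetsReceived, 0);
  const lost = inbound.reduce((sum, r) => sum + r.packetsLost, 0);
  return lost / (lost + received);
}

const mockReports = [
  { type: "inbound-rtp", kind: "video", packetsReceived: 980, packetsLost: 20 },
  { type: "candidate-pair", state: "succeeded" }, // other report types are ignored
];

const ratio = packetLossRatio(mockReports); // 0.02, i.e. 2% loss
```

A server-side view of the same counters lets you see whether loss happened on the broadcaster's leg or the viewer's leg, which client-only telemetry cannot distinguish.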
    <div>
      <h3>Privacy and security at the core</h3>
      <a href="#privacy-and-security-at-the-core">
        
      </a>
    </div>
    <p>Calls eliminates the need for participants to share information such as their IP address with each other. Let’s say you are building an app that connects therapists and patients via video calls. With a traditional WebRTC implementation, both the patient’s and therapist’s devices would talk directly with each other, leading to exposure of potentially sensitive data such as the IP address. This exposure can leave your users vulnerable to denial-of-service attacks.</p><p>When using Calls, you are still using WebRTC, but the individual participants are connecting to the Cloudflare network. If four people are on a video call powered by Cloudflare Calls, each of the four participants' devices will be talking only with the Cloudflare network. To your end users, the experience will feel just like a peer-to-peer call, only with added security and privacy upside.</p><p>Finally, all video and audio traffic that passes through Cloudflare Calls is encrypted by default. Calls leverages existing Cloudflare products including Argo to route the video and audio content in a secure and efficient manner. The Calls API enables granular controls that cannot be implemented with vanilla WebRTC alone. When you build using Calls, you are only limited by your imagination, not the technology.</p>
    <div>
      <h3>What’s next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We’re releasing Cloudflare Calls in closed beta today. To try out Cloudflare Calls, <a href="https://www.cloudflare.com/cloudflare-calls-signup-page">request an invitation</a> and check your inbox in the coming weeks. Calls will be free during the beta period. We're looking to work with early customers who want to take Calls from beta to general availability with us. If you are building a real-time video app today, having challenges scaling traditional WebRTC infrastructure, or just have a great idea you want to explore, <a href="https://www.cloudflare.com/cloudflare-calls-signup-page">leave a comment</a> when you are requesting an invitation, and we’ll reach out.</p>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[WebRTC]]></category>
            <category><![CDATA[Video]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">4PEbQjLrYQwj2Hj7O4b3ah</guid>
            <dc:creator>Zaid Farooqui</dc:creator>
            <dc:creator>Renan Dincer</dc:creator>
        </item>
        <item>
            <title><![CDATA[Stream with sub-second latency is like a magical HDMI cable to the cloud]]></title>
            <link>https://blog.cloudflare.com/magic-hdmi-cable/</link>
            <pubDate>Fri, 13 May 2022 12:59:12 GMT</pubDate>
            <description><![CDATA[ Starting today, in open beta, Cloudflare Stream supports video playback with sub-second latency over SRT or RTMPS at scale ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Starting today, in open beta, <a href="https://www.cloudflare.com/products/cloudflare-stream/">Cloudflare Stream</a> supports video playback with sub-second latency over SRT or RTMPS at scale. Just like <a href="https://www.cloudflare.com/learning/video/what-is-http-live-streaming/">HLS</a> and DASH formats, playback over RTMPS and SRT costs $1 per 1,000 minutes delivered regardless of video encoding settings used.</p><p>Stream is like a magic HDMI cable to the cloud. You can easily connect a video stream and display it from as many screens as you want wherever you want around the world.</p>
    <div>
      <h3>What do we mean by sub-second?</h3>
      <a href="#what-do-we-mean-by-sub-second">
        
      </a>
    </div>
    <p>Video latency is the time it takes from when a camera sees something happen live to when viewers of a broadcast see the same thing happen via their screen. Although we like to think what’s on TV is happening in the studio and in your living room at the same time, this is not the case. Often, cable TV takes five seconds to reach your home.</p><p>On the Internet, the range of latencies across different services varies widely from multiple minutes down to a few seconds or less. Live streaming technologies like HLS and DASH, used by the most common video streaming websites, typically offer 10 to 30 seconds of latency, and this is what you can achieve with <a href="/stream-live/">Stream Live</a> today. However, this range does not feel natural for quite a few use cases where the viewers interact with the broadcasters. Imagine a text chat next to an esports live stream or a Q&amp;A session in a remote webinar. These new ways of interacting with the broadcast won’t work with the typical latencies that the industry is used to. You need one to two seconds <i>at most</i> to achieve the feeling that the viewer is in the same room as the broadcaster.</p><p>We expect Cloudflare Stream to deliver sub-second latencies reliably in most parts of the world by routing the video as much as possible within the Cloudflare network. For example, when you’re sending video from San Francisco on your Comcast home connection, the video travels directly to the nearest point where Comcast and Cloudflare connect, for example, San Jose. Whenever a viewer joins, say from Austin, the viewer connects to the Cloudflare location in Dallas, which then establishes a connection using the Cloudflare backbone to San Jose. This setup avoids unreliable long-distance connections and allows Cloudflare to monitor the reliability and latency of the video all the way from the broadcaster’s last mile to the viewer’s last mile.</p>
    <div>
      <h3>Serverless, dynamic topology</h3>
      <a href="#serverless-dynamic-topology">
        
      </a>
    </div>
    <p>With Cloudflare Stream, the latency of content from the source to the destination is purely dependent on the physical distance between them: with no centralized routing, each Cloudflare location talks to other Cloudflare locations and shares the video among themselves. This results in the minimum possible latency regardless of the locale you are broadcasting from.</p><p>We’ve tested about 500ms of glass-to-glass latency from San Francisco to London, both from and to residential networks. If both the broadcaster and the viewers were in California, this number would be lower, simply because there is less distance for the signal to travel at the speed of light. An early tester was able to achieve 300ms of latency by broadcasting using OBS via RTMPS to Cloudflare Stream and pulling down that content over SRT using ffplay.</p>
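<p>A back-of-the-envelope check on that number (the distance and fiber speed below are rough assumptions): propagation alone accounts for only a small slice of the glass-to-glass figure; the rest is capture, encoding, buffering, and decoding.</p>

```javascript
// Light in optical fiber travels at roughly two-thirds of c.
const SF_TO_LONDON_KM = 8600;    // approximate great-circle distance
const FIBER_KM_PER_SEC = 200000; // ~2/3 the speed of light

// One-way propagation delay, in milliseconds.
const oneWayMs = (SF_TO_LONDON_KM / FIBER_KM_PER_SEC) * 1000; // ~43 ms
```

So of the ~500ms measured glass-to-glass, well under a tenth is unavoidable physics; the engineering work is in shrinking everything else.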
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2X9Jnr7Cj20i62zOz2AS6u/7a672eb97ecd0991a3ad7adc937738a1/RTMPS.png" />
            
            </figure><p>Any server in the Cloudflare Anycast network can receive and publish low-latency video, which means that you're automatically broadcasting to the nearest server with no configuration necessary. To minimize latency and avoid network congestion, we route video traffic between broadcaster and audience servers using the same network telemetry as <a href="https://www.cloudflare.com/products/argo-smart-routing/">Argo</a>.</p><p>On top of this, we construct a dynamic distribution topology, unique to the stream, which grows to meet the capacity needs of the broadcast. We’re just getting started with low-latency video, and we will continue to focus on latency and playback reliability as our real-time video features grow.</p>
    <div>
      <h3>An HDMI cable to the cloud</h3>
      <a href="#an-hdmi-cable-to-the-cloud">
        
      </a>
    </div>
    <p>Most video on the Internet is delivered over HTTP, the same protocol your browser uses to load websites. This has many advantages, such as easy interoperability across viewer devices. Maybe more importantly, HTTP can use existing infrastructure, like caches, which reduces the cost of video delivery.</p><p>Using HTTP has a cost in latency, as it is not a protocol built to deliver video. There have been many attempts to deliver low-latency video over HTTP, with some reducing latency to a few seconds, but none reach the levels achievable by protocols designed with video in mind. WebRTC and video delivery over QUIC have the potential to reduce latency further, but face inconsistent support across platforms today.</p><p>Video-oriented protocols, such as RTMPS and SRT, side-step some of the challenges above but often require custom client libraries and are not available in modern web browsers. While we support low-latency video over RTMPS and SRT today, we are actively exploring other delivery protocols.</p><p>There’s no silver bullet – yet. Our goal is to make video delivery as easy as possible by supporting the set of protocols that enables our customers to meet their unique and creative needs. Today that can mean receiving RTMPS and delivering low-latency SRT, or ingesting SRT while publishing HLS. In the future, that may include ingesting WebRTC or publishing over QUIC, HTTP/3 or WebTransport. There are many interesting technologies on the <a href="https://grnh.se/4bdb03661us">horizon</a>.</p><p>We’re excited to see new use cases emerge as low-latency video becomes easier to integrate and less costly to manage. A remote cycling instructor can ask her students to slow down in response to an increase in heart rate; an esports league can effortlessly repeat their live feed to remote broadcasters to provide timely, localized commentary while interacting with their audience.</p>
    <div>
      <h3>Creative uses of low latency video</h3>
      <a href="#creative-uses-of-low-latency-video">
        
      </a>
    </div>
    <p>The viewer experience at events like concerts or sporting matches can be augmented with live video delivered in real time to participants’ phones. This way they can experience the event as it happens and see the goal scored or the details of what’s happening on the stage.</p><p>In big cities, you can often hear neighbors cheering a goal across the street before you see it scored on your own screen. This mismatch disappears when every video screen shows the same content at the same time.</p><p>In esports games, large company meetings or conferences, presenters or commentators can react in real time to comments in chat. The delay between a fan making a comment and seeing the reaction on the video stream can be eliminated.</p><p>Online exercise bikes can provide even more relevant and timely feedback from live instructors, adding to the sense of community developed while riding them.</p><p>Participants in esports streams can switch from passive viewers to active live participants easily, as there is no delay in the broadcast.</p><p>Security cameras can be monitored from anywhere in the world without having to open ports or set up centralized servers to receive and relay video.</p>
    <div>
      <h3>Getting Started</h3>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>Get started by using your existing inputs on Cloudflare Stream. Without the need to reconnect, they are available instantly for playback via the RTMPS/SRT playback URLs.</p><p>If you don’t have any inputs on Stream, <a href="https://dash.cloudflare.com/sign-up/stream">sign up</a> for $5/mo. You will get the ability to push live video, broadcast, record and now pull video with sub-second latency.</p><p>You will need a program like FFmpeg or OBS to push video. For playback, you can use VLC for RTMPS and FFplay for SRT. To integrate into your native app, you can use FFmpeg wrappers such as <a href="https://github.com/tanersener/ffmpeg-kit">ffmpeg-kit</a> for iOS.</p><p>RTMPS and SRT playback work with the recently launched <a href="/bring-your-own-ingest-domain-to-stream-live/">custom domain support</a>, so you can use the domain of your choice and keep your branding.</p> ]]></content:encoded>
            <category><![CDATA[Platform Week]]></category>
            <category><![CDATA[Cloudflare Stream]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Video]]></category>
            <guid isPermaLink="false">2LSEcbl8qZ4zZNAc3VAgBP</guid>
            <dc:creator>J. Scott Miller</dc:creator>
            <dc:creator>Renan Dincer</dc:creator>
        </item>
        <item>
            <title><![CDATA[Stream now supports SRT as a drop-in replacement for RTMP]]></title>
            <link>https://blog.cloudflare.com/stream-now-supports-srt-as-a-drop-in-replacement-for-rtmp/</link>
            <pubDate>Thu, 10 Mar 2022 18:00:00 GMT</pubDate>
            <description><![CDATA[ RTMP is not the protocol to carry us into the future, so Cloudflare Stream now supports SRT wherever you would use RTMP. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1hEBY4WtbgFMKdGe11oVGp/4716bdd4d2882483707f9d1245a67425/rtr3vWNrtk3X2HSUuSSIHvNud8K6WeNHokdbx231vi9-TIC87GeD6QJ6dMR0vlSJF4qFzv-heJHGXFzhVDygIdHsm3Jh0kGw47QyVx8vtmxPVShb8YsE7CqBRaWJ.png" />
            
            </figure><p>SRT is a modern live video transport protocol. It features many improvements over RTMP, the incumbent video ingest protocol, such as lower latency and better resilience against unpredictable network conditions on the public Internet. SRT supports newer video codecs and makes it easier to use accessibility features such as captions and multiple audio tracks. While RTMP development has been abandoned since at least 2012, SRT is maintained by an active community of developers.</p><p>We don’t see RTMP use declining anytime soon, but we can give authors of new broadcasting software, as well as video streaming platforms, an alternative.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/lsYVSFjz5nlK7TSllp2Y8/2ac5c0004a982682f30f7abd147159a2/Screen-Shot-2022-03-10-at-12.05.57-PM.png" />
            
            </figure><p>Starting today, in open beta, you can use <a href="/restream-with-stream-connect/">Stream Connect</a> as a gateway to translate SRT to RTMP or RTMP to SRT with your existing applications. This way, you can get the last-mile reliability benefits of SRT and can continue to use the RTMP service of your choice. It’s priced at $1 per 1,000 minutes, regardless of video encoding parameters.</p><p>You can also use SRT to go live on <a href="/stream-live/">Stream Live</a>, our end-to-end live streaming service to get HLS and DASH manifest URLs from your SRT input, and do simulcasting to multiple platforms whether you use SRT or RTMP.</p><p>Stream’s SRT and RTMP implementation supports adding or removing RTMP or SRT outputs without having to restart the source stream, scales to tens of thousands of concurrent video streams per customer and runs on every Cloudflare server in every Cloudflare location around the world.</p>
    <div>
      <h3>Go live like it’s 2022</h3>
      <a href="#go-live-like-its-2022">
        
      </a>
    </div>
    <p>When we first started developing live video features on Cloudflare Stream early last year, we had to decide whether to reimplement an old and unmaintained protocol, RTMP, or focus on the future and start off fresh with a modern protocol. If we launched with RTMP, we would get instant compatibility with existing clients but would give up features that would greatly improve performance and reliability. Reimplementing RTMP would also mean we’d have to handle the complicated state machine that powers it, demux the FLV container, parse AMF and even write a server that sends the text “Genuine Adobe Flash Media Server 001” as part of the RTMP handshake.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5LxAJbuV3Vvdt0f0otJGeR/99c1e75da5da943291e08c2b83f6e515/pasted-image-0--3-.png" />
            
            </figure><p>Even though there were a few new protocols to evaluate and choose from in this project, the dominance of RTMP was still overwhelming. We decided to implement RTMP but really don’t want anybody else to do it again.</p>
    <div>
      <h3>Eliminate head of line blocking</h3>
      <a href="#eliminate-head-of-line-blocking">
        
      </a>
    </div>
    <p>A common weakness of TCP for low-latency video transfer is head-of-line blocking. Imagine a camera app sending video to a live streaming server. The camera puts every captured frame into packets and sends them over a reliable TCP connection. Regardless of the diverse Internet infrastructure it may pass through, TCP makes sure all packets get delivered in order (so that your video frames don’t jump around) and reliably (so you don’t see parts of a frame missing). However, this type of connection comes at a cost. If a <i>single</i> packet is dropped or lost somewhere in the network between the two endpoints, as often happens on mobile or Wi-Fi connections, the entire TCP connection is brought to a halt while the lost packet is found and re-transmitted. This means that if one frame is suddenly missing, <i>everything</i> after the lost video frame needs to wait. This is known as head-of-line blocking.</p><p>RTMP experiences head-of-line blocking because it uses a TCP connection. Since SRT is a UDP-based protocol, it does not. SRT features packet recovery that is aware of the low-latency and high-reliability requirements of video. Similar to <a href="https://www.rfc-editor.org/rfc/rfc9000.html">QUIC</a>, it achieves this by implementing its own logic for a reliable connection on top of UDP, rather than relying on TCP.</p><p>SRT solves this problem by waiting only a little while, because it knows that losing a single frame won’t be noticeable to the end viewer in the majority of cases. The video moves on if the frame is not re-transmitted right away. SRT really shines when the broadcaster is streaming with less-than-stellar Internet connectivity. Using SRT means fewer buffering events, lower latency and a better overall viewing experience for your viewers.</p>
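<p>The trade-off described above is tunable. As an illustrative sketch (the hostname and window size below are placeholders, not Cloudflare endpoints), libsrt exposes the recovery window as a <code>latency</code> parameter on the URI:</p>

```shell
# Sketch: tuning SRT's recovery window. Hostname and values are placeholders.
# In a libsrt URI, "latency" is the retransmission window in milliseconds:
SRT_URL="srt://ingest.example.com:778?latency=200"
echo "$SRT_URL"
# A 200 ms window means a packet not recovered within roughly 200 ms is
# skipped, so one lost packet never stalls everything behind it (unlike TCP).
# Push with FFmpeg (uncomment against a real endpoint; note that FFmpeg's
# own srt "latency" option is expressed in microseconds, not milliseconds):
# ffmpeg -re -i input.mp4 -c copy -f mpegts "$SRT_URL"
```

<p>A smaller window favors liveness over completeness; a larger one tolerates more loss at the cost of delay, so it should be sized to the network's round-trip time.</p>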
    <div>
      <h3>RTMP to SRT and SRT to RTMP</h3>
      <a href="#rtmp-to-srt-and-srt-to-rtmp">
        
      </a>
    </div>
    <p>Comparing SRT and RTMP today may not seem that useful to the most pragmatic app developers. Perhaps it’s just another protocol that does the same thing for you. It’s important to remember that even if there isn’t a big improvement for you today, tomorrow there will be new video use cases that benefit from a UDP-based protocol that avoids head-of-line blocking, supports forward error correction, and supports modern codecs beyond H.264 for high-resolution video.</p><p>Switching protocols requires effort from both software that sends video and software that receives video. This is a frustrating chicken-or-the-egg problem: a video streaming service won’t implement a protocol not in use, and clients won’t implement a protocol not supported by streaming services.</p><p>Starting today, you can use Stream Connect to translate between protocols for you and deprecate RTMP without having to wait for video platforms to catch up. This way, you can use your favorite live video streaming service with the protocol of your choice.</p><p>Stream is useful if you’re a live streaming platform, too! You can start using SRT while maintaining compatibility with existing RTMP clients. When creating a video service, you can have Stream Connect terminate RTMP for you and send SRT to your intended destination instead.</p><p>SRT is already implemented in software like FFmpeg and OBS. Here’s how to get it working from OBS:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Xtq8EpQEYPGJ1ApiQpSvm/0fafdc93d03b1ba0ad6320fa26b39601/Screen-Shot-2022-03-10-at-12.05.39-PM.png" />
            
            </figure><p>Get started by signing up for Cloudflare Stream and adding a live input.</p>
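<p>From the command line, FFmpeg can serve as a stand-in for OBS. A minimal sketch, assuming a live input already exists (the hostnames and stream key below are placeholders; copy the real ingest URLs from the Stream dashboard):</p>

```shell
# Placeholder URLs; substitute the ingest URLs shown in your Stream dashboard.
SRT_INGEST="srt://live.example.com:778?passphrase=STREAM_KEY"
RTMPS_INGEST="rtmps://live.example.com:443/live/STREAM_KEY"
echo "$SRT_INGEST"

# Push a local file as a live source over SRT (MPEG-TS container):
# ffmpeg -re -i input.mp4 -c:v libx264 -c:a aac -f mpegts "$SRT_INGEST"

# Or push the same source over RTMPS (FLV container) from clients
# that don't support SRT yet:
# ffmpeg -re -i input.mp4 -c:v libx264 -c:a aac -f flv "$RTMPS_INGEST"
```

<p>The only differences on the sending side are the URL scheme and the container format; the encoding flags are the same either way.</p>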
    <div>
      <h3>Protocol-agnostic Live Streaming</h3>
      <a href="#protocol-agnostic-live-streaming">
        
      </a>
    </div>
    <p>We’re working on adding support for more media protocols in addition to RTMP and SRT. What would you like to see next? Let us know! If this post vibes with you, come work with the engineers building with <a href="https://boards.greenhouse.io/cloudflare/jobs/2953274?gh_jid=2953274">video</a> and <a href="https://boards.greenhouse.io/cloudflare/jobs/3523616?gh_jid=3523616">more</a> at Cloudflare!</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Stream]]></category>
            <category><![CDATA[Video]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">4q8tySaWaNx4DZiOCDRg0R</guid>
            <dc:creator>Renan Dincer</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Stream now supports NFTs]]></title>
            <link>https://blog.cloudflare.com/cloudflare-stream-now-supports-nfts/</link>
            <pubDate>Fri, 16 Apr 2021 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today, we’re introducing a new API that takes an ERC-721 token ID and contract address and sets it on a video so every video on Stream can be represented with an NFT.  ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare Stream has been helping creators publish their videos online without having to think about video quality, device compatibility, storage buckets or digging through FFmpeg documentation. These creators want to be able to claim ownership of their works and assert control over how that ownership claim is transferred. Increasingly, many of those creators are looking to Non-Fungible Tokens (NFTs).</p><p>NFTs are a special type of smart contract that allows provable ownership of the contract on the blockchain. Some call NFTs collectibles because, like coins or stamps, collectors who enjoy them buy, sell and trade them. Collectors keep track of NFTs on the Ethereum blockchain, which acts as a shared source of truth of all the activity.</p><p>Today, we’re introducing a new API that takes an <a href="https://eips.ethereum.org/EIPS/eip-721">ERC-721</a> token ID and contract address and sets them on a video, so every video on Stream can be represented with an NFT.</p><pre><code>curl -X POST -H "Authorization: Bearer $AUTH_TOKEN" --data '{"contract":"0x57f1887a8bf19b14fc0d912b9b2acc9af147ea85","token":"5"}' https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/stream/$VIDEO_ID/nft</code></pre><p>Once set, these values cannot be changed, so be sure to set them to an NFT you own! If you set a video you own to an NFT you don’t own, the owner of the NFT can claim your video and delete it. As the owner of the linked NFT changes, admin rights on videos can be transferred between users without the involvement of the original user who uploaded the video.</p>
    <div>
      <h2>How can I represent a video with a NFT?</h2>
      <a href="#how-can-i-represent-a-video-with-a-nft">
        
      </a>
    </div>
    <p>Stream doesn’t create NFTs for you; however, you can use online platforms like <a href="https://opensea.io/">OpenSea</a> and <a href="https://foundation.app/">Foundation</a> or a command line tool like <a href="https://docs.ipfs.io/how-to/mint-nfts-with-ipfs/#minty">Minty</a> to create them. These tools don't actually host your video; they only allow you to create an NFT on the blockchain. For example, if you shot a cool video and want to turn it into an NFT, you'd use a third-party service to get a token ID and contract address. Starting today, you can associate that token ID and contract address with your video on Stream.</p><p>You should consider storing a copy of the video you shot in a storage system like IPFS or Filecoin, as they can help with long-term storage for the NFT. The URL in your NFT metadata should point to a decentralized storage system. (We’re working on making a copy of Stream videos available automatically in IPFS to facilitate long-term storage of videos.)</p><p>You can use Stream with your NFT to store high-quality or longer versions of the video that may cost too much to store and distribute through decentralized storage. Stream offers accessibility features such as captions, compatibility with mobile and desktop devices and automatic quality switching.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/mkV46CgxM3bixzqaRxvVL/ed17d9893f43eee9a49704ff360b7118/image2-18.png" />
            
            </figure>
    <div>
      <h2>Why NFTs?</h2>
      <a href="#why-nfts">
        
      </a>
    </div>
    <p>NFTs allow for unlimited options for unique business models around media: videos can be owned by an entity with many owners (made into a crypto coin with voting rights proportional to ownership), crowdfunded (for example, as done for the <a href="https://j.mirror.xyz/uVGCCwwm3k341lPpxaJmHTZROESVse9Pe_rmbiuUAC0">$ESSAY</a>) or simply traded (like <a href="https://www.cryptokitties.co/">CryptoKitties</a> or <a href="https://www.nbatopshot.com/">NBA Top Shot</a>).</p><p>Digital ownership is not new (domain names are a good example that has existed for decades) but it has never been this popular. We’ve been watching the explosive growth of the NFT community closely. If you heard of <a href="https://www.cryptokitties.co/">CryptoKitties</a> a while back or <a href="https://www.bbc.com/news/technology-56371912">visited</a> <a href="https://www.businessinsider.com/nft-artists-make-millions-traditional-galleries-adapt-cryptoart-2021-3?op=1">any</a> <a href="https://www.cnn.com/style/article/beeple-first-nft-artwork-at-auction-sale-result/index.html">news</a> <a href="https://www.npr.org/2021/03/05/974089381/whats-an-nft-and-why-are-people-paying-millions-to-buy-them">website</a> recently, you’ve probably heard of the vibrant community forming around digital ownership of art and the growth of creators who are able to monetize their work. By permitting creators to embed their NFT claim to ownership into their video on Cloudflare Stream, we hope this will be a helpful step as acceptance of the process for minting, validating, and utilizing NFTs moves forward.</p><p>If you visit one of the many NFT marketplaces, you will see that most tokens are either images or short animations, yet there are no films. We think this is partly because it’s hard to host videos on the Internet and get them to work on every device and connection speed. Storage technologies such as IPFS are not yet fit for video streaming and require work on the creators’ part to make accessibility features such as captions work.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2cuFcwOubFQgZixD8qxjNd/34efa81dcd5a573308930db26c22e87a/image4-7.png" />
            
            </figure><blockquote><p>We've been building OpenSea since the inception of NFTs in 2017. NFTs provide a brand-new format for media with new and exciting properties, including scarcity, provable ownership, and interoperability between platforms. But turning videos into NFTs has always been a difficult task to scale. Cloudflare is well-positioned to help here, and is stepping up to this problem by adding NFT support to Cloudflare Stream, allowing video creators to produce and globally distribute high-quality video NFTs. We're excited for the new applications this will enable!- <b>Alex Atallah, CTO, Opensea.io</b></p></blockquote>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Z3CupElW1GwXPhjoSkj42/16b965c7b278fb7f1710001a71ad5885/image1-25.png" />
            
            </figure><blockquote><p>Dapper Labs is excited that Cloudflare continues to make it easier for folks to adopt Blockchain, the most democratizing technology since the creation of the Internet itself. Cloudflare Stream creates new opportunities for creators to showcase and monetize their work using NFTs.- <b>Dieter Shirley, CTO at Dapper Labs and creator of ERC-721</b></p></blockquote>
    <div>
      <h2>A model for decentralized ownership</h2>
      <a href="#a-model-for-decentralized-ownership">
        
      </a>
    </div>
    <p>By tagging a token ID and contract on Stream videos, you make it possible to transfer admin rights on a video. If a change in ownership of the ERC-721 token happens on the Ethereum blockchain, the new owner of the token may claim control over the video on Cloudflare Stream by proving they own the token associated with the video. Once the video has been claimed via the API call, the video may be deleted or transferred to the new owner’s Cloudflare Stream account.</p><p>Here’s what the transfer API call looks like:</p>
            <pre><code>curl -X POST -H "Authorization: $AUTH_TOKEN" --data '{"signature": "0xcac010d5ac8ef1f325c14fa0e7df5bb3a1920a205d827bd727e96b32b918875c26d8635dd778e0a1fa77f2cb547259e05fb40f3cb5c2673b77d77e372cc22f091b", "contract":"0x57f1887a8bf19b14fc0d912b9b2acc9af147ea85","token":"5"}' https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/stream/nft/claim</code></pre>
            <p>The signature is the following string, signed with the wallet that owns the NFT:</p><p><code>“I claim ownership of NFT #$TOKEN_ID under contract $CONTRACT and this message expires at $EXPIRY_TIME”</code></p><p><code>$EXPIRY_TIME</code> is an RFC 3339-formatted time in the near future.</p><p><a href="https://etherscan.io/verifySig/2482">See an example signature</a></p>
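<p>As a sketch, the exact message string can be assembled like this before handing it to your wallet or Ethereum library for signing (the expiry timestamp is illustrative, and the signing step itself is not shown):</p>

```shell
# Build the message the NFT owner's wallet must sign. Values are taken from
# the examples above; EXPIRY_TIME is illustrative and should in practice be
# a few minutes in the future.
TOKEN_ID="5"
CONTRACT="0x57f1887a8bf19b14fc0d912b9b2acc9af147ea85"
EXPIRY_TIME="2021-04-16T14:00:00Z"
MESSAGE="I claim ownership of NFT #${TOKEN_ID} under contract ${CONTRACT} and this message expires at ${EXPIRY_TIME}"
echo "$MESSAGE"
```

<p>The resulting string must match the template character for character, since the signature is verified against exactly this message.</p>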
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6pW0D7LWla3eH6nZuWWfRL/ebed1b27cd0810883b439b0153946691/image3-10.png" />
            
            </figure><p>Cloudflare Stream doesn’t do anything complicated besides storing a token ID and contract address pair for each video in our database. When an NFT is set, we verify compatibility with ERC-721 using <a href="https://developers.cloudflare.com/distributed-web/ethereum-gateway">Cloudflare’s Ethereum Gateway</a>. When ownership on an NFT associated with a video is claimed, we update control over the video.</p><p>After making this API call, the video claimed shows up in the new account where the new owner of the NFT can change settings for the video, restrict views, add captions: anything that the original user who uploaded the video can do.</p>
    <div>
      <h2>NFTs on the video player</h2>
      <a href="#nfts-on-the-video-player">
        
      </a>
    </div>
    <div></div><p>Big Buck Bunny by Blender Foundation (Creative Commons Attribution 3.0 license) associated here with an example NFT.</p><p>An important part of digital ownership is linking back to the contract and token that governs the ownership. If a video on Stream is linked to an NFT, the Stream video player will show a link to the token and contract.</p>
    <div>
      <h2>We’re hiring</h2>
      <a href="#were-hiring">
        
      </a>
    </div>
    <p>We’re working on more ways to support creators and platforms that enable creators to do their best work. If you’re interested in working on technology for creators, Stream team at Cloudflare is hiring both in <a href="https://boards.greenhouse.io/cloudflare/jobs/2953274?gh_jid=2953274">the United States</a> and in <a href="https://boards.greenhouse.io/cloudflare/jobs/2953354?gh_jid=2953354">Europe</a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Stream]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">UGg43Y84LiTn0PyhYp9Oo</guid>
            <dc:creator>Renan Dincer</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing the New Cloudflare Apps]]></title>
            <link>https://blog.cloudflare.com/cloudflare-apps-2/</link>
            <pubDate>Tue, 27 Jun 2017 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today we’re excited to announce the next generation of Cloudflare Apps. Cloudflare Apps is an open platform of tools to build a high quality website. ]]></description>
            <content:encoded><![CDATA[ <p>Today we’re excited to announce the next generation of <a href="https://www.cloudflare.com/apps?utm_campaign=Apps_Launch&amp;utm_medium=post&amp;utm_source=blog&amp;utm_content=announce">Cloudflare Apps</a>. Cloudflare Apps is an open platform of tools to build a high quality website. It’s a place where every website owner can select from a vast catalog of Apps which can improve their websites and internet properties in every way imaginable. Selected apps can be previewed and installed instantly with just a few clicks, giving every website owner the power of technical expertise, and every developer the platform only Cloudflare can provide.</p><p>Apps can modify content and layout on the page they’re installed on, communicate with external services and dramatically improve websites. Imagine Google Analytics, YouTube videos, in-page chat tools, widgets, themes and every other business which can be built by improving websites. All of these and more can be done with Cloudflare Apps.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6BLeQ19fUc7PcMlUPUZ8EA/37e91c7219de872724ab4816ec030c26/Screen-Shot-2017-06-27-at-9.02.41-AM.png" />
            
            </figure><p>Cloudflare Apps makes it possible for a developer in her basement to build the next great new tool and get it on a million websites overnight. With Cloudflare Apps, even the smallest teams can get massive distribution for their apps on the web so that the best products win. With your help we will make it possible for developers like you to build a new kind of business.</p><p>Apps makes it possible for the more than six million Internet properties on Cloudflare’s network to take advantage of what you can build. Even non-technical users can preview and install apps, opening up a whole new massive audience to SaaS software companies and independent developers. Unlike other solutions, Apps get served from the site’s original domain, allowing you to get all the performance benefits of HTTP/2, TCP Pipelining and the Cloudflare edge.</p><p>We’re working with Oracle, Spotify, Pinterest, Zendesk and more great companies for launch. We can’t wait to see what apps you will create.</p>
    <div>
      <h2><a href="https://www.cloudflare.com/apps/developer/docs/getting-started">Develop an App ›</a></h2>
      <a href="#">
        
      </a>
    </div>
    
    <div>
      <h3>Live Preview</h3>
      <a href="#live-preview">
        
      </a>
    </div>
    <p>Before an app is installed we present the user with a live preview of the app to allow them to see what it will look like on their site. Using this preview they can customize it and ensure that it works how they wish. During the preview, users can also customize colors, placement and other options defined by the developer. As users change options in an app the preview gets updated. Even better, supporting this preview often requires no additional work beyond what’s required to build your app.</p>
            <figure>
            <a href="https://www.cloudflare.com/apps/welcome-bar/install">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/483dNgElrOJsuRuKad4NNl/67245ff68a5c235999603e949cb58df8/preview.png" />
            </a>
            </figure><p>Live Preview is not just limited to Cloudflare users; as a developer you can use it to <a href="https://www.cloudflare.com/apps/DsgJlv2AYmyW/install?initialUrl=https%3A%2F%2Fwww.nytimes.com%2F&amp;options=%7B%7D">show off</a> your apps to any user on any website.</p>
    <div>
      <h3>Logins and Registrations using OAuth</h3>
      <a href="#logins-and-registrations-using-oauth">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/apps/developer/docs/writing-your-app/managing-accounts">OAuth support</a> makes it easy to allow users to log into or register with your service without leaving the installation page. That means users don’t have to copy-paste API keys or embed codes from your service anymore. We’ve found that allowing users to register accounts greatly increases the likelihood that a user who previews an app will install it.</p>
    <div>
      <h3>Bidirectional Webhooks</h3>
      <a href="#bidirectional-webhooks">
        
      </a>
    </div>
    <p>Like webhooks you might be used to, Cloudflare Apps supports hooks which allow you to be notified when your users preview or install your app. Even better, our hooks allow developers to modify installations as they happen. When we fire a hook, you can respond with the changes we should make to that user’s installation experience or their installation’s options. This allows you to tailor every user's installation experience to their specific account and records. For example, Google Analytics allows users to select from their analytics accounts while installing, and Cover Message allows users to choose which of their Mailchimp or Constant Contact lists they’d like new leads sent to.</p>
    <div>
      <h3>Selling Apps</h3>
      <a href="#selling-apps">
        
      </a>
    </div>
    <p>In the world of mobile apps it’s possible to make money as an independent developer, building apps and selling them. On the web, it’s much harder for a developer to build a business. Growing a sales and marketing team is time consuming and distracting, and ultimately results in the team with the biggest budget getting the most customers.</p><p>Cloudflare Apps allows anyone to not just build an app which gets installed onto websites, but also to sell it. You can charge any amount you’d like for your app, all recurring monthly. You can even offer multiple plans for users of different sizes or who need different features. Providing recurring revenue to app developers makes it possible for them to make a living building their apps, and create sustainable businesses in the process.</p>
            <figure>
            <a href="https://www.cloudflare.com/apps/">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2k74O0Zi8dyf0Lipj6KkNN/9a2c294e83bacb9308a4f8c121af1a0d/screencapture-cloudflare-apps-list-1498534531569.png" />
            </a>
            </figure>
    <div>
      <h3>Building an App</h3>
      <a href="#building-an-app">
        
      </a>
    </div>
    <p>For Cloudflare Apps we've built a new <a href="https://www.cloudflare.com/apps/developer/docs/getting-started">Documentation site</a> including example apps and screencasts.</p><p>We have also built an <a href="https://www.cloudflare.com/apps/developer/app-creator">App Creator</a> to allow developers to start developing their apps right away without having to set up a development environment. The Creator tool watches your app’s files on your computer and updates the app’s preview as you make changes live.</p>
    <div>
      <h2><a href="https://www.cloudflare.com/apps/developer/docs/getting-started">Develop an App ›</a></h2>
      <a href="#">
        
      </a>
    </div>
     ]]></content:encoded>
            <category><![CDATA[Cloudflare Apps]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">4YddVW6puzW6RK6dEYwgYT</guid>
            <dc:creator>Renan Dincer</dc:creator>
        </item>
    </channel>
</rss>