
Introducing Cloudflare Network Interconnect

2020-08-04

8 min read
This post is also available in 简体中文 and 日本語.

Today we’re excited to announce Cloudflare Network Interconnect (CNI). CNI allows our customers to interconnect branch and HQ locations directly with Cloudflare wherever they are, bringing Cloudflare’s full suite of network functions to their physical network edge. Using CNI to interconnect provides security, reliability, and performance benefits vs. using the public Internet to connect to Cloudflare. And because of Cloudflare’s global network reach, connecting to our network is straightforward no matter where on the planet your infrastructure and employees are.

At its most basic level, an interconnect is a link between two networks. Today, we’re offering customers the following options to interconnect with Cloudflare’s network:

  • Via a private network interconnect (PNI). A physical cable (or a virtual “pseudo-wire”; more on that later) that connects two networks.

  • Over an Internet Exchange (IX). A common switch fabric where multiple Internet Service Providers (ISPs) and Internet networks can interconnect with each other.

To use a real-world analogy: over the years, Cloudflare has built a network of highways across the Internet to handle all our customers' traffic. We’re now providing dedicated on-ramps for our customers’ on-prem networks to get onto those highways.

Why interconnect with Cloudflare?

CNI provides more reliable, faster, and more private connectivity between your infrastructure and Cloudflare’s. This delivers benefits across our product suite. Here are some examples of specific products and how you can combine them with CNI:

  • Cloudflare Access: Cloudflare Access replaces corporate VPNs with Cloudflare’s network. Instead of placing internal tools on a private network, teams deploy them in any environment, including hybrid or multi-cloud models, and secure them consistently with Cloudflare’s network. CNI allows you to bring your own MPLS network to meet ours, so your employees can connect to your network securely and quickly no matter where they are.

  • CDN: Cloudflare’s CDN places content closer to visitors, improving site speed while minimizing origin load. CNI improves cache fill performance and reduces costs.

  • Magic Transit: Magic Transit protects datacenter and branch networks from unwanted and malicious traffic. Pairing Magic Transit with CNI decreases jitter, drives throughput improvements, and further hardens infrastructure against attack.

  • Cloudflare Workers: Workers is Cloudflare’s serverless compute platform. Integrating with CNI provides a secure connection to serverless cloud compute that does not traverse the public Internet, allowing customers to use Cloudflare’s unique set of Workers services with tighter network performance tolerances.

Let’s talk more about how CNI delivers these benefits.

Improving performance through interconnection

CNI is a great way to boost performance for many existing Cloudflare products. By using CNI to interconnect with Cloudflare wherever a customer’s origin infrastructure is, customers get increased performance and security at lower cost than using public transit providers.

CNI makes things faster

As an example of the performance improvements network interconnects can deliver for Cloudflare customers, consider an HTTP application workload which flows through Cloudflare’s CDN and WAF. Many of our customers rely on our CDN to make their HTTP applications more responsive.

Cloudflare caches content very close to end users to provide the best performance possible. But, if content is not in cache, Cloudflare edge PoPs must contact the origin server to retrieve cacheable content. This can be slow, and places more load on an origin server compared to serving directly from cache.

With CNI, these origin pulls can be completed over a dedicated link, improving throughput and reducing the overall time needed for origin pulls. Using Argo Tiered Cache, customers can manage tiered cache topologies and specify upstream cache tiers that correspond with locations where network interconnects are in place. Using Tiered Cache in this fashion lowers origin load and increases cache hit rates, thereby improving performance and reducing origin infrastructure costs.
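To make the idea concrete, here’s a minimal sketch (illustrative only, not Cloudflare’s actual cache implementation) of how a tiered topology keeps origin pulls down: lower tiers ask an upstream tier on a miss, and only the upstream tier, sitting where the interconnect is, ever pulls from the origin.

```python
# Minimal sketch of tiered-cache origin pulls (illustrative only; not
# Cloudflare's implementation). The upstream tier is assumed to sit in a
# location with a CNI link to the origin, so origin pulls use the dedicated
# interconnect instead of the public Internet.

class CacheTier:
    def __init__(self, name, upstream=None, origin_fetch=None):
        self.name = name
        self.store = {}                   # very simplified in-memory cache
        self.upstream = upstream          # next tier to ask on a miss
        self.origin_fetch = origin_fetch  # only the top tier talks to the origin

    def get(self, key):
        if key in self.store:
            return self.store[key]          # cache hit, served locally
        if self.upstream is not None:
            value = self.upstream.get(key)  # miss: ask the upstream tier
        else:
            value = self.origin_fetch(key)  # top tier pulls over the interconnect
        self.store[key] = value             # fill local cache for next time
        return value

# Hypothetical usage: one origin pull ends up serving later requests from both tiers.
origin_pulls = []
def fetch_from_origin(key):
    origin_pulls.append(key)
    return f"content-for-{key}"

upper = CacheTier("upper-tier-near-origin", origin_fetch=fetch_from_origin)
lower = CacheTier("lower-tier-near-users", upstream=upper)

lower.get("/index.html")   # miss at both tiers -> exactly one origin pull
lower.get("/index.html")   # hit at the lower tier, no origin pull
print(len(origin_pulls))   # 1
```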

Here’s anonymized and sampled data from a real Cloudflare customer who recently provisioned interconnections between our network and theirs to further improve performance. Heavy users of our CDN, they were able to shave off precious milliseconds from their origin round trip time (RTT) by adding PNIs in multiple locations.

As an example, their 90th percentile round trip time in Warsaw, Poland decreased by 6.5ms as a result of provisioning a private network interconnect (from 7.5ms to 1ms), a performance win of 87%! The jitter (variation in delay of received packets) on the link decreased from 82.9ms to 0.3ms, which speaks to the dedicated, reliable nature of the link. CNI helps deliver reliable and performant network connectivity to your customers and employees.
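For reference, the 87% figure is simply the relative change in the p90 round trip time quoted above:

```python
# Back-of-the-envelope check of the numbers quoted above.
before_p90_ms = 7.5   # p90 origin RTT before the PNI
after_p90_ms = 1.0    # p90 origin RTT after the PNI

improvement = (before_p90_ms - after_p90_ms) / before_p90_ms
print(f"p90 RTT reduced by {before_p90_ms - after_p90_ms:.1f} ms "
      f"({improvement:.0%})")   # ~87%
```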

Enhanced security through private connectivity

Customers with large on-premise networks want to move to the cloud: it’s cheaper, with less hassle, overhead, and maintenance. However, customers also want to preserve their existing security and threat models.

Traditionally, CIOs trying to connect their IP networks to the Internet do so in two steps:

  1. Source connectivity to the Internet from transit providers (ISPs).

  2. Purchase, operate, and maintain network function specific hardware appliances. Think hardware load balancers, firewalls, DDoS mitigation equipment, WAN optimization, and more.

CNI allows CIOs to provision security services on Cloudflare and connect their existing networks to Cloudflare in a way that bypasses the public Internet.  Because Cloudflare integrates with on-premise networks and the cloud, customers can enforce security policies across both networks and create a consistent, secure boundary.

CNI increases cloud and network security by providing a private, dedicated link to the Cloudflare network. Since this link is reserved exclusively for the customer that provisions it, the customer’s traffic is isolated and private.

CNI + Magic Transit: Removing public Internet exposure

To use a product-specific example: through CNI’s integration with Magic Transit, customers can take advantage of private connectivity to minimize exposure of their network to the public Internet.

Magic Transit attracts customers’ IP traffic to our data centers by advertising their IP addresses from our edge via BGP. When traffic arrives, it’s filtered and sent along to customers’ data centers. Before CNI, all Magic Transit traffic was sent from Cloudflare to customers via Generic Routing Encapsulation (GRE) tunnels over the Internet. Because GRE endpoints are publicly routable, there is some risk these endpoints could be discovered and attacked, bypassing Cloudflare’s DDoS mitigation and security tools.
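To illustrate that exposure, here’s a small sketch using scapy (assuming it’s installed; all addresses are documentation examples, not real endpoints). The outer header of a GRE-over-Internet packet carries a publicly routable endpoint address that anyone on the Internet can send traffic to:

```python
# Illustration of why GRE-over-Internet exposes an endpoint: the outer IP
# header carries a publicly routable address. Uses scapy; addresses are
# example/documentation ranges only.
from scapy.all import IP, GRE, TCP

cloudflare_gre_endpoint = "203.0.113.20"  # example address, publicly routable
customer_gre_endpoint = "198.51.100.10"   # example address, publicly routable

# Cleaned traffic leaving Cloudflare for the customer, wrapped in GRE:
packet = (
    IP(src=cloudflare_gre_endpoint, dst=customer_gre_endpoint)  # outer header
    / GRE()
    / IP(src="192.0.2.1", dst="10.0.0.5")  # inner header: the customer's traffic
    / TCP(dport=443)
)
packet.show()  # the outer destination is reachable from the whole Internet
```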

Using CNI removes this exposure to the Internet. Advantages of using CNI with Magic Transit include:

  • Reduced threat exposure. Although there are many steps companies can take to increase network security, some risk-sensitive organizations prefer not to expose endpoints to the public Internet at all. CNI allows Cloudflare to absorb that risk and forward only clean traffic (via Magic Transit) through a truly private interface.

  • Increased reliability. Traffic traveling over the public Internet is subject to factors outside of your control, including latency and packet loss on intermediate networks. Removing steps between Cloudflare’s network and yours means that after Magic Transit processes traffic, it’s forwarded directly and reliably to your network.

  • Simplified configuration. Soon, Magic Transit + CNI customers will have the option to skip making MSS (maximum segment size) changes when onboarding, a step that’s required for GRE-over-Internet and can be challenging for customers who need to consider their downstream customers’ MSS as well (e.g. service providers); the arithmetic is sketched below.
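As a rough illustration of why those MSS changes are needed for GRE-over-Internet (the exact values to configure depend on your network and on Cloudflare’s onboarding guidance), the encapsulation overhead eats into a standard 1500-byte MTU:

```python
# Rough MSS arithmetic for GRE-over-Internet (illustrative; follow Cloudflare's
# onboarding documentation for the values to actually configure).
ETHERNET_MTU = 1500
OUTER_IP_HEADER = 20   # outer IPv4 header added by the tunnel
GRE_HEADER = 4         # basic GRE header, no optional fields
INNER_IP_HEADER = 20   # the customer's own IPv4 header
TCP_HEADER = 20        # TCP header without options

tunnel_mtu = ETHERNET_MTU - OUTER_IP_HEADER - GRE_HEADER   # 1476
tcp_mss = tunnel_mtu - INNER_IP_HEADER - TCP_HEADER        # 1436

print(f"GRE tunnel MTU: {tunnel_mtu}, clamped TCP MSS: {tcp_mss}")
# Over a CNI link there is no GRE tunnel in the path, so this clamping step
# can be skipped.
```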

Example deployment: Penguin Corp uses Cloudflare for Teams, Magic Transit, and CNI to protect branch and core networks, and employees.

Imagine Penguin Corp, a hypothetical company, has a fully connected private MPLS network.  Maintaining their network is difficult and they have a dedicated team of network engineers to do this.  They are currently paying a lot of money to run their own private cloud. To minimize costs, they limit their network egress points to two worldwide.  This creates a major performance problem for their users, whose bits have to travel a long way to accomplish basic tasks while still traversing Penguin’s network boundary.

SASE (Secure Access Service Edge) models look attractive to them because they can, in theory, move away from their traditional MPLS network and towards the cloud. SASE deployments provide firewall, DDoS mitigation, and encryption services at the network edge, and bring security as a service to any cloud deployment.

CNI allows Penguin to use Cloudflare as their true network edge, hermetically sealing their branch office locations and datacenters from the Internet. Penguin can adapt to a SASE-like model while keeping exposure to the public Internet at zero. Penguin establishes PNIs with Cloudflare from their branch office in San Jose to Cloudflare’s San Jose location to take advantage of Cloudflare for Teams, and from their core colocation facility in Austin to Cloudflare’s Dallas location to use Magic Transit to protect their core networks.

Like Magic Transit, Cloudflare for Teams replaces traditional security hardware on-premise with Cloudflare’s global network. Customers who relied on VPN appliances to reach internal applications can instead connect securely through Cloudflare Access. Organizations maintaining physical web gateway boxes can send Internet-bound traffic to Cloudflare Gateway for filtering and logging.

Cloudflare for Teams services run in every Cloudflare data center, bringing filtering and authentication closer to your users and locations to avoid compromising performance. CNI improves that even further with a direct connection from your offices to Cloudflare. With a simple configuration change, all branch traffic reaches Cloudflare’s edge where Cloudflare for Teams policies can be applied. The link improves speed and reliability for users and replaces the need to backhaul traffic to centralized filtering appliances.

Once interconnected this way, Penguin’s network and employees realize two benefits:

  1. They get to use Cloudflare’s full set of security services without having to provision expensive and centralized physical or virtualized network appliances.

  2. Their security and performance services are running across Cloudflare’s global network in over 200 cities. This brings performance and usability improvements for users by putting security functions closer to them.

Scalable, global, and flexible interconnection options

CNI offers a big benefit to customers because it allows them to take advantage of our global footprint spanning 200+ cities: their branch office and datacenter infrastructure can connect to Cloudflare wherever they are.

This matters for two reasons: our globally distributed network makes it easier to interconnect locally, no matter where a customer’s branches and core infrastructure are, and it allows a globally distributed workforce to interact with our edge network with low latency and improved performance.

Customers don’t have to worry about securely expanding their network footprint: that’s our job.

To this point, global companies need to interconnect at many points around the world. Cloudflare Network Interconnect is priced for global network scale: Cloudflare doesn't charge anything for enterprise customers to provision CNI. Customers may need to pay for access to an interconnection platform or a datacenter cross-connect. We’ll work with you and any other parties involved to make the ordering and provisioning process as smooth as possible.

In other words, CNI’s pricing is designed to accommodate complicated enterprise network topologies and modern IT budgets.

How to interconnect

Customers can interconnect with Cloudflare in one of three ways: over a private network interconnect (PNI), over an IX, or through one of our interconnection platform partners. We have worked closely with our global partners to meet our customers where they are and connect however they want.

Private Network Interconnects

Private Network Interconnects are available at any of our listed private peering facilities. Getting a physical connection to Cloudflare is easy: specify where you want to connect, port speeds, and target VLANs. From there, we’ll authorize it, you’ll place the order, and we’ll do the rest. Customers should choose PNI as their connectivity option if they want higher throughput than a virtual connection or a connection over an IX, or want to eliminate as many intermediaries from an interconnect as possible.
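For illustration only, the information a PNI request boils down to is a handful of fields; the names below are hypothetical and don’t correspond to an actual Cloudflare API or form:

```python
# Hypothetical sketch of the information a PNI request involves; field names
# and values are illustrative only, not an actual Cloudflare API.
from dataclasses import dataclass

@dataclass
class PNIRequest:
    facility: str         # a listed Cloudflare private peering facility
    port_speed_gbps: int  # e.g. 10 or 100
    vlan: int             # target VLAN for the interconnect

request = PNIRequest(facility="Example facility, San Jose", port_speed_gbps=10, vlan=100)
print(request)
```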

Internet Exchanges

Customers who want to use existing Internet Exchanges can interconnect with us at any of the 235+ Internet Exchanges we participate in. To connect with Cloudflare via an Internet Exchange, follow the IX’s instructions to connect, and Cloudflare will spin up our side of the connection.  Customers should choose Internet Exchanges as their connectivity option if they are either already peered at an IX, or they want to interconnect in a place where an interconnection platform isn’t present.

Interconnection Platform Partners

Cloudflare is proud to be partnering with Equinix, Megaport, PCCW ConsoleConnect, PacketFabric, and Zayo to provide you with easy ways to virtually connect with us in any of the partner-supported locations. Customers should choose to connect with an interconnection platform if they are already using these providers or want a quick and easy way to onboard onto a secure cloud experience.

If you’re interested in learning more, please see this blog post about all the different ways you can interconnect. For all of the interconnect methodologies described above, the BGP session establishment and IP routing are the same. The only thing that is different is the physical way in which we interconnect with other networks.
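Whatever the physical option, the logical setup is the same BGP session. Here’s a minimal sketch of the parameters involved (the customer ASN, addresses, and prefixes are examples, not values we assign):

```python
# Minimal sketch of the BGP session parameters an interconnect involves.
# The customer ASN, addresses, and prefixes are examples; Cloudflare's public
# ASN is 13335.
bgp_session = {
    "local_asn": 64512,               # example private ASN for the customer side
    "peer_asn": 13335,                # Cloudflare
    "local_ip": "192.0.2.2",          # customer side of the point-to-point link
    "peer_ip": "192.0.2.1",           # Cloudflare side of the link
    "announced_prefixes": ["198.51.100.0/24"],  # routes advertised to Cloudflare
}
print(bgp_session)
```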

How do I find the best places to interconnect?

Our product page for CNI includes tools to better understand the right places for your network to interconnect with ours. Customers can use this data to figure out the optimal places to interconnect for the best connectivity with other cloud providers and ISPs in general.
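Cloudflare’s interconnection presence is also publicly listed in PeeringDB. As a quick sketch (assuming the requests library is installed and using PeeringDB’s public API), you can list the Internet Exchanges where Cloudflare (AS13335) has a presence:

```python
# Sketch: list Internet Exchanges where Cloudflare (AS13335) has a presence,
# using PeeringDB's public API (assumes the `requests` library is installed).
import requests

resp = requests.get(
    "https://www.peeringdb.com/api/netixlan",
    params={"asn": 13335},
    timeout=10,
)
resp.raise_for_status()

ix_names = sorted({entry["name"] for entry in resp.json()["data"]})
for name in ix_names:
    print(name)
```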

What’s the difference between CNI and peering?

Peering and CNI use similar mechanisms and technical implementations behind the scenes.

We have had an open peering policy for years with any network and will continue to abide by that policy: it allows us to help build a better Internet for everyone by interconnecting networks together, making the Internet more reliable. Traditional networks use interconnect/peering to drive better performance for their customers and connectivity while driving down costs. With CNI, we are opening up our infrastructure to extend the same benefits to our customers as well.

How do I learn more?

CNI provides customers with better performance, reliability, scalability, and security than using the public Internet. A customer can interconnect with Cloudflare in any of our physical locations today, getting dedicated links to Cloudflare that deliver security benefits and more stable latency, jitter, and available bandwidth through each interconnection point.

Contact our enterprise sales team about adding Cloudflare Network Interconnect to your existing offerings.

