We recently announced Argo Tunnel which allows you to deploy your applications anywhere, even if your webserver is sitting behind a NAT or firewall. Now, with support for load balancing, you can spread the traffic across your tunnels.
A Quick Argo Tunnel Recap
Argo Tunnel allows you to expose your web server to the internet without having to open holes in your firewall or set up dedicated routes. Your servers stay safe inside your infrastructure. All you need to do is install cloudflared (our open source agent) and point it to your server. cloudflared will establish secure connections to our global network and securely forward requests to your service. Since cloudflared initiates the connection, you don't need to open a hole in your firewall or create a complex routing policy. Think of it as a lightweight GRE tunnel from Cloudflare to your server.
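For example, exposing a web server running locally might look something like this (the hostname and port here are placeholders; cloudflared must already be installed and authorized for your zone):

```shell
# Illustrative invocation: tunnel traffic for a hostname you control
# to a web server listening on localhost port 8080.
cloudflared --hostname app.example.com --url http://localhost:8080
```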
Tunnels and Load Balancers
CC BY-NC-ND 2.0 image by Carey Lyons
If you are running a simple service as a proof of concept or for local development, a single Argo Tunnel can be enough. For real-world deployments though, you almost always want multiple instances of your service running on separate machines, in different availability zones, or even in different countries. Cloudflare’s distributed Load Balancing can now transparently balance traffic across however many Argo Tunnel instances you choose to create. Together these provide failure tolerance and, when combined with our geo-routing capabilities, improved performance around the world.
Want more performance in Australia? Just spin up more instances. Want to save money on the weekends? Just turn them off. Leave your firewalls closed and let Argo Tunnel handle the service discovery and routing for you.
On accounts with Load Balancing enabled, when you launch cloudflared to expose your web service, you can specify the load balancer pool you want to attach to, and we take care of the rest:
cloudflared --lb-pool my_lb_pool --hostname myshinyservice.example.com --url http://localhost:8080
In the example above we'll take care of:
Creating the DNS entry for your new service (myshinyservice.example.com).
Creating the Load Balancer (myshinyservice), if it doesn't exist.
Creating the Load Balancer Pool (my_lb_pool), if it doesn't exist.
Opening a tunnel and adding it to the pool.
Proxying all traffic from myshinyservice.example.com all the way to your server running on your localhost on port 8080.
Removing the tunnel from the pool when you shut down cloudflared.
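The same flags can also live in cloudflared's configuration file, so the command line stays short. A sketch, assuming the default config location (~/.cloudflared/config.yml) and the same example names as above; the key names mirror the command-line flags:

```yaml
# Hypothetical config-file equivalent of the flags shown above.
hostname: myshinyservice.example.com
url: http://localhost:8080
lb-pool: my_lb_pool
```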
If you run the same command from another machine running another instance of your server, it will automatically join the pool and start sharing the load. You can run a load-balanced web service across multiple servers with a single command. You don't even need to log in to the Cloudflare UI.
Load Balancer Features
Now that you're running a resilient, scalable web service, you'll probably want to delve into the other features Cloudflare Load Balancing has to offer. Go to the Traffic page and take a look at your newly minted Load Balancer. From there you can configure health checks, health check policies, routing policies, and a fallback pool in case your service is down.
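Health checks can also be managed through the Load Balancing API rather than the UI. A sketch using curl against the monitors endpoint; the email, API key, health-check path, and interval here are illustrative values you would replace with your own:

```shell
# Create an HTTP health monitor for your pools (illustrative values;
# substitute your own credentials and a real health-check path).
curl -X POST "https://api.cloudflare.com/client/v4/user/load_balancers/monitors" \
  -H "X-Auth-Email: you@example.com" \
  -H "X-Auth-Key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  --data '{"type":"http","method":"GET","path":"/health","expected_codes":"200","interval":60}'
```

Monitors created this way can then be attached to the pool that your tunnels joined, so unhealthy origins are taken out of rotation automatically.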
Try it Out
Head over to your dashboard and make sure you have Argo (Traffic->Argo->Tiered Caching + Smart Routing) and Load Balancer (Traffic->Load Balancing) enabled. Start with the Argo Tunnel Quickstart Guide and run cloudflared with the --lb-pool option, just like we did in the example above. At the moment we limit our non-Enterprise customers to just a handful of origins, but expect that limitation to be removed in the near future. For now, play away!