
How To Minikube + Cloudflare

07/08/2018

7 min read

The following is a guest blog post by Nathan Franzen, Software Engineer at StackPointCloud. StackPointCloud is the creator of Stackpoint.io, the leading multi-cloud management platform for cloud native workloads. They are the developers of the Cloudflare Ingress Controller for Kubernetes.

Deploying Applications on Minikube with Argo Tunnels

This article assumes basic knowledge of Kubernetes. If you're not familiar with Kubernetes, visit https://kubernetes.io/docs/tutorials/kubernetes-basics/ to learn the basics.

Minikube is a tool which allows you to run a Kubernetes cluster locally. It’s a great way not only to experiment with Kubernetes, but also to try out deploying services through a reverse tunnel.

At Cloudflare, we've created a product called Argo Tunnel which allows you to host services through a tunnel using Cloudflare as your edge. Tunnels provide a way to expose your services to the internet by creating a connection to Cloudflare's edge and routing your traffic over it. Since your service is creating its own outbound connection to the edge, you don’t have to open ports, configure a firewall, or even have a public IP address for your service. All traffic flows through Cloudflare, blocking attacks and intrusion attempts before they ever make it to you, completely securing your origin.

Deploying your service to more locations around the world is as simple as spinning up more containers. Wherever in the world a container is running, traffic that arrives through the Ingress Controller will reach it. Tunnels make it simpler to maintain robust security while deploying across multiple regions or cloud providers.

Usually, Minikube applications need to be ported to a production Kubernetes setup before they can be deployed publicly, but with Argo Tunnel you can easily make a locally-running Minikube instance publicly available, which makes it a great way to try out both Kubernetes and Argo Tunnel. In this example, we’ll create a simple microservice that returns data when given a key, deploy it into Minikube, and start up the Argo Tunnel machinery to expose it to the Internet.

Getting Started with an Application API

We'll start by creating a web service in Python using Flask. We'll write a simple application to represent a small piece of an API in just a few lines of code. The complete application, secret_token.py, is simply:

from flask import Flask, jsonify, abort

app = Flask(__name__)

@app.route('/api/v1/token/<key>', methods=['GET'])
def token(key):
    # Static test data standing in for a real token store
    test_data = {
        "e8990ab9be26": "3OX9+p39QLIvE6+x/YK4DxWWCFi/D+c7g99c14oNB8g=",
        "b01323031589": "wBvlo9G7Wqxsb2P9YS=",
    }
    secret = test_data.get(key)
    if secret is None:
        abort(404)
    return jsonify({"key": key, "token": secret})

This tiny service will simply respond to a GET request with some secret data, given a key.
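Before containerizing it, you can sanity-check the service with Flask's built-in development server. This is just a quick local check, not part of the deployment flow, and assumes Flask is installed on your machine:

$ pip install flask
$ FLASK_APP=secret_token.py flask run --port 8000
$ curl http://127.0.0.1:8000/api/v1/token/e8990ab9be26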

Using Docker

We’ll take the next step toward deployment and package our application into a portable Docker image with a Dockerfile:

FROM python:alpine3.7
RUN pip install flask gunicorn
COPY secret_token.py .
CMD gunicorn -b 0.0.0.0:8000 secret_token:app

This defines a Docker image, the blueprint for the containers Minikube will run.
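If you have Docker running locally, you can optionally build and smoke-test the image outside Minikube first; the container serves on port 8000, as set in the Dockerfile's CMD:

$ docker build -t myrepo/secret_token .
$ docker run --rm -p 8000:8000 myrepo/secret_token
$ curl http://localhost:8000/api/v1/token/e8990ab9be26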

Deploying into Minikube

If you don't have Minikube installed, install it here: https://kubernetes.io/docs/tasks/tools/install-minikube/

Usually, we would build the Docker image with our Docker daemon and push it to a repository where the cluster can access it. With Minikube, however, that’s a round-trip we don’t need. We can share the Minikube Docker daemon with the Docker build process and avoid pushing to a cloud repository:

$ eval $(minikube docker-env)  
$ docker build -t myrepo/secret_token .

The image is now present on the Minikube VM where Kubernetes is running.
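To confirm the image landed in Minikube's Docker daemon rather than your local one, list the images in the same shell where you ran the eval:

$ docker images | grep secret_token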

In a production Kubernetes system, we might spend a good deal of time going over the details of the deployment and service manifests, but kubectl run provides a simple way to get the basic app up and running. We add the --image-pull-policy flag to make sure that Kubernetes doesn’t first try to pull the image remotely from Docker Hub.

$ kubectl run token --image myrepo/secret_token --expose --port 8000 --image-pull-policy=IfNotPresent --replicas=3

We now have a Kubernetes deployment running three replicas of containers built from our image, and an associated service exposing port 8000. Save the two manifests locally into files:

$ kubectl get deployment token --export -o yaml > deployment.yaml
$ kubectl get svc token --export -o yaml > service.yaml

We’ll be able to edit these files to make changes to our cluster configuration.
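Before editing anything, you can confirm the rollout completed and that three pods carry the run=token label that kubectl run applied:

$ kubectl rollout status deployment/token
$ kubectl get pods -l run=token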

For local testing, let's change that service so that it exposes a NodePort, which maps the service to a port on the Minikube VM. Replace the spec in the service.yaml file with:

spec:
  ports:
  - nodePort: 32080
    port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    run: token
  sessionAffinity: None
  type: NodePort

And apply the change to our cluster:

$ kubectl apply -f service.yaml

Now, we can test locally with curl, reaching the service via the NodePort on the Minikube VM:

$ minikube start
$ export MINIKUBE_IP=$(minikube ip)
$ curl http://$MINIKUBE_IP:32080/api/v1/token/b01323031589
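If the key is found, the service responds with the matching record; exact formatting may vary slightly by Flask version:

{"key": "b01323031589", "token": "wBvlo9G7Wqxsb2P9YS="}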

Using Cloudflare’s Argo Tunnel

The NodePort setup is fine for testing the application locally, but if we want to share this service with others or better simulate how it will work in the real world, we need to expose it to the internet. In most cases, this means running in a cloud environment and configuring a load balancer, or setting up an NGINX ingress controller along with its network rules and routing. The Cloudflare Argo Tunnel Ingress Controller allows us to route almost anything to a Cloudflare domain, including services running inside of Minikube.

In a Kubernetes cluster, an ingress is an object that describes how we want our service exposed on the internet, and an ingress controller is the process that actually exposes it. To install the Cloudflare Ingress Controller, you’ll need to have a Cloudflare domain and an Argo Tunnel certificate, configured with the cloudflared application.

kubectl run was fine for quickly installing the test application, but for more complex installations, helm is a great tool, and is used to package the Cloudflare agent. Once you have the helm client installed, a simple helm init will configure Minikube to work with it. The chart for the ingress controller is found at the trusted-charts public repository and can be installed directly from there.
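For Helm 2, that initialization is a single command, which installs the Tiller server-side component into the Minikube cluster:

$ helm init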

Cloudflared Configuration

Cloudflared is the end of the tunnel that runs on your machine, proxying traffic between the tunnel and your origin server. If you don't have it installed already, complete quickstart instructions for the cloudflared application can be found at https://developers.cloudflare.com/argo-tunnel/quickstart/quickstart/
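In short: install the cloudflared binary, then authenticate it against your domain. The login command opens a browser window and writes the origin certificate to ~/.cloudflared/cert.pem, which the Helm values below read:

$ cloudflared login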

Installing the Controller with Helm

Now we will run some commands that define the repository that holds our chart and override a few default values:

$ helm repo add trusted-charts http://trusted-charts.stackpoint.io/ 
$ DOMAIN=anthopleura.net  
$ CERT_B64=$(base64 -w0 ~/.cloudflared/cert.pem 2>/dev/null || base64 ~/.cloudflared/cert.pem)
$ NS="default"  
$ USE_RBAC=true
$ NAME=cloudflare

$ helm install --name $NAME --namespace $NS \
   --set rbac.install=$USE_RBAC \
   --set secret.install=true \
   --set secret.domain=$DOMAIN,secret.certificate_b64=$CERT_B64 \
   trusted-charts/argo-ingress

This installation configures the cloudflare-argo-ingress controller; any service we expose will get its own tunnel from the cluster to the Cloudflare edge.
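You can confirm the release installed cleanly before moving on:

$ helm status $NAME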

Exposing Our Application with an Ingress

We'll need to write an ingress definition. Create a file called warp-controller.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: argo-tunnel
  name: token
  namespace: default
spec:
  rules:
  - host: token.anthopleura.net
    http:
      paths:
      - backend:
          serviceName: token
          servicePort: 8000

And apply the definition:
$ kubectl apply -f warp-controller.yaml
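You can verify that the ingress object exists and has been picked up by the controller:

$ kubectl get ingress token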

Examining the Deployment

$ kubectl get pod

Should print something like:

NAME                                      READY   STATUS    RESTARTS   AGE
cloudflare-argo-ingress-6b886994b-52fsl   1/1     Running   0          34s
token-766cd8dd4c-bmksw                    1/1     Running   0          2m
token-766cd8dd4c-l8gkw                    1/1     Running   0          2m
token-766cd8dd4c-p2phg                    1/1     Running   0          2m

The output shows the three token pods and the cloudflare-argo-ingress pod. Examine the logs from the argo pod to see the activity of the ingress controller:

$ kubectl logs cloudflare-argo-ingress-6b886994b-52fsl

The controller watches the cluster for creation of ingresses, services and pods.

The endpoint is now live; requesting https://token.anthopleura.net/api/v1/token/e8990ab9be26 returns:

{  
	"key": "e8990ab9be26",  
	"token": "3OX9+p39QLIvE6+x/YK4DxWWCFi/D+c7g99c14oNB8g="  
}

Now this small piece of an API is available publicly on the Internet for testing. Obviously you don’t want to serve production traffic from a Minikube instance, but it’s certainly handy for sharing preliminary work across development teams.

Under the Analytics tab, the Cloudflare dashboard will show some general statistics about the requests to your zone.

Routing and Relationships

A quick sketch of the routing in the Kubernetes cluster and from the Cloudflare network:

The ingress controller pods provide the Argo Tunnels that connect the pods containing your application to the Internet through Cloudflare's edge.

Going Further with Cloudflare Load Balancers

This demo exposes a service through a single Argo Tunnel. If your Cloudflare account has load balancing enabled, you can instead route traffic through a load balancer and a pool of tunnels by adding the annotation argo.cloudflare.com/lb-pool=token to the ingress, as sketched below. For details of load balancer routing and weighting, please refer to the Cloudflare docs.
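For example, the metadata of the ingress above would gain one annotation, keeping the ingress class annotation that is already there:

metadata:
  annotations:
    kubernetes.io/ingress.class: argo-tunnel
    argo.cloudflare.com/lb-pool: token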

If you do use load balancing, it is possible to run multiple instances of the ingress controller: when installing from the helm chart, set the value replicaCount to two or more to get multiple instances of the controller in the Minikube cluster, as shown below. This configuration is useful for high availability within a single cluster. Load balancing can also be used to spread traffic across multiple clusters, with different argo ingress controllers connecting to the same load-balancing pool.
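A minimal sketch of raising the replica count on the existing Helm release, assuming the chart exposes replicaCount as described above:

$ helm upgrade $NAME trusted-charts/argo-ingress \
   --reuse-values --set replicaCount=2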

With two ingress controllers, the Cloudflare UI will show a pool named token.anthopleura.net with two origins, each using a tunnel ID as its origin address.
