A container identity bootstrapping tool

2017-07-03

7 min read

Everybody has secrets. Software developers have many. Often these secrets—API tokens, TLS private keys, database passwords, SSH keys, and other sensitive data—are needed to make a service run properly and interact securely with other services. Today we’re sharing a tool that we built at Cloudflare to securely distribute secrets to our Dockerized production applications: PAL.

PAL is available on GitHub: github.com/cloudflare/pal

Although PAL is not currently under active development, we have found it a useful tool and we think the community will benefit from its source being available. We believe that it's better to open source this tool and allow others to use the code than leave it hidden from view and unmaintained.

Secrets in production

CC BY 2.0 image by Personal Creations

How do you get these secrets to your services? If you’re the only developer, or one of a few on a project, you might put the secrets with your source code in your version control system. But if you store the secrets in plain text alongside your code, everyone with access to your source repository can read them and use them for nefarious purposes (for example, stealing an API token and pretending to be an authorized client). Furthermore, distributed version control systems like Git will download a copy of the secret everywhere a repository is cloned, regardless of whether it’s needed there, and will keep that copy in the commit history forever. For a company where many people (including people who work on unrelated systems) have access to source control, this just isn’t an option.

Another idea is to keep your secrets in a secure place and then embed them into your application artifacts (binaries, containers, packages, etc.) at build time. This can be awkward for modern CI/CD workflows because it results in multiple parallel sets of artifacts for different environments (e.g. production, staging, development). Once artifacts contain secrets, they become secret themselves, and you have to restrict access to these “armed” packages after they’re built. Consider the discovery last year that the source code of Twitter’s Vine service was available in public Docker repositories. Not only was the source code for the service leaked, but the API keys that allow Vine to interact with other services were also exposed. Vine paid out over $10,000 when they were notified about the issue.

A more advanced technique to manage and deploy secrets is to use a secret management service. Secret management services can be used to create, store and rotate secrets as well as distribute them to applications. The secret management service acts as a gatekeeper, allowing access to some secrets for some applications as prescribed by an access control policy. An application that wants to gain access to a secret authenticates itself to the secret manager, the secret manager checks permissions to see if the application is authorized, and if authorized, sends the secret. There are many options to choose from, including Vault, Keywhiz, Knox, Secretary, dssss and even Docker’s own secret management service.

Secret managers are a good solution as long as an identity/authorization system is already in place. However, since most authentication systems require the client to already possess a secret, this presents a chicken-and-egg problem.

Identity Bootstrapping

Once we have verified a service’s identity, we can make access control decisions about what that service can access. Therefore, the real problem we must solve is the problem of bootstrapping service identity.

This problem has many solutions when services are tightly bound to individual machines: for example, we can install host-level credentials on each machine or even use a machine’s hardware to identify it, and virtual machine platforms like Amazon AWS have machine-based APIs for host-level identity, such as IAM and KMS. Containerized services have a much more fluid lifecycle: instances may appear on many machines and may come and go over time. Furthermore, any number of trusted and untrusted containers might be running on the same host at the same time. So what we need instead is an identity that belongs to a service, not to a machine.

Every application needs an ID to prove to the bouncer that it’s on the guest list for Club Secret.

Bootstrapping the identity of a service that lives in a container is not a solved problem, and most of the existing solutions are deeply integrated into a container orchestration platform (Kubernetes, Docker Swarm, Mesos, etc.). We ran into the problem of container identity bootstrapping ourselves and wanted something that worked with our current application deployment stack (Docker/Mesos/Marathon) but wasn’t locked to a given orchestration platform.

Enter PAL

We use Docker containers to deploy many services across a shared, general-purpose Mesos cluster. To solve the service identity bootstrapping problem in our Docker environment, we developed PAL, which stands for Permissive Action Link, a security device that prevents the unauthorized arming of nuclear weapons. PAL makes sure secrets are only available in production, and only when jobs are authorized to run.

PAL makes it possible to keep only encrypted secrets in the configuration for a service while ensuring that those secrets can only be decrypted by authorized service instances in an approved environment (say, a production or staging environment). If those credentials serve to identify the service requesting access to secrets, PAL becomes a container identity bootstrapping solution that you can easily deploy a secret manager on top of.

How it works

The model for PAL is that secrets are supplied in encrypted form, either embedded in containers or provided as runtime configuration for jobs running in an orchestration framework such as Apache Mesos.

PAL allows secrets to be decrypted at runtime, after the service’s identity has been established. These credentials could enable authenticated inter-service communication, which would let you keep service secrets in a central repository such as HashiCorp’s Vault, Keywhiz, or others. The credentials could also be used to issue service-level credentials (such as certificates for an internal PKI). Without PAL, you would have to distribute the identity credentials that these tools themselves require throughout your infrastructure.

PAL consists of two components: a small in-container initialization tool, pal, that requests secret decryption and installs the decrypted secrets, and a daemon called pald that runs on every node in the cluster. pal and pald communicate with each other via a Unix socket. The pal tool is set as each job’s entrypoint, and it sends pald the encrypted secrets. pald then identifies the process making the request and determines whether it is allowed to access the requested secret. If so, it decrypts the secret on behalf of the job and returns the plaintext to pal, which installs the plaintext within the calling job’s container as either an environment variable or a file.
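To make that flow concrete, here is a minimal sketch in Go of what the pal side of the exchange could look like. The socket path, JSON request/response format, and environment variable names are illustrative assumptions rather than PAL’s actual wire protocol; the real tool lives in the repository linked above.

```go
// Hypothetical sketch of the pal entrypoint: read an encrypted secret from
// the job's configuration, ask pald to decrypt it over a Unix socket, install
// the plaintext, then exec the job's real command.
package main

import (
	"encoding/json"
	"log"
	"net"
	"os"
	"syscall"
)

type decryptRequest struct {
	Ciphertext string `json:"ciphertext"` // encrypted secret from the job's config
}

type decryptResponse struct {
	Plaintext string `json:"plaintext"`
	Error     string `json:"error,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: pal <command> [args...]")
	}

	// The encrypted secret arrives via the job's configuration, e.g. an
	// environment variable set by Marathon/Mesos (name is hypothetical).
	ciphertext := os.Getenv("PAL_SECRET_DB_PASSWORD")

	// Connect to pald over a host-mounted Unix socket (path assumed).
	conn, err := net.Dial("unix", "/var/run/pald.sock")
	if err != nil {
		log.Fatalf("connecting to pald: %v", err)
	}
	defer conn.Close()

	// Ask pald to decrypt on our behalf; pald authenticates the caller via
	// the socket's peer credentials before deciding whether to comply.
	if err := json.NewEncoder(conn).Encode(decryptRequest{Ciphertext: ciphertext}); err != nil {
		log.Fatalf("sending request: %v", err)
	}
	var resp decryptResponse
	if err := json.NewDecoder(conn).Decode(&resp); err != nil {
		log.Fatalf("reading response: %v", err)
	}
	if resp.Error != "" {
		log.Fatalf("pald refused to decrypt: %s", resp.Error)
	}

	// Install the plaintext as an environment variable, then exec the real
	// entrypoint so the container runs as usual.
	os.Setenv("DB_PASSWORD", resp.Plaintext)
	if err := syscall.Exec(os.Args[1], os.Args[1:], os.Environ()); err != nil {
		log.Fatalf("exec %s: %v", os.Args[1], err)
	}
}
```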

PAL currently supports two methods of encryption—PGP and Red October—but it can be extended to support more.

PAL-PGP

PGP is a popular form of encryption that has been around since the early 90s. PAL allows you to use secrets that are encrypted with PGP keys that are installed on the host. The current version of PAL does not apply policies at a per-key level (e.g. only containers with Docker Label A can use key 1), but it could easily be extended to do so.
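As a rough illustration, decryption in PGP mode might look something like the following on the pald side, using Go’s golang.org/x/crypto/openpgp package. The keyring path and file names are assumptions, and real host keys may be passphrase-protected; this is a sketch, not PAL’s implementation.

```go
// Sketch of PGP-mode decryption: decrypt a secret with a private keyring
// installed on the host (the /etc/pal/keyring.gpg path is hypothetical).
package main

import (
	"io/ioutil"
	"log"
	"os"

	"golang.org/x/crypto/openpgp"
)

// decryptPGP decrypts a (binary, non-armored) PGP message with the host's keys.
func decryptPGP(ciphertextPath, keyringPath string) ([]byte, error) {
	kr, err := os.Open(keyringPath)
	if err != nil {
		return nil, err
	}
	defer kr.Close()

	keyring, err := openpgp.ReadKeyRing(kr)
	if err != nil {
		return nil, err
	}

	ct, err := os.Open(ciphertextPath)
	if err != nil {
		return nil, err
	}
	defer ct.Close()

	// nil prompt: this assumes the host's private keys are not passphrase-protected.
	md, err := openpgp.ReadMessage(ct, keyring, nil, nil)
	if err != nil {
		return nil, err
	}
	return ioutil.ReadAll(md.UnverifiedBody)
}

func main() {
	plaintext, err := decryptPGP("secret.gpg", "/etc/pal/keyring.gpg")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("decrypted %d bytes", len(plaintext))
}
```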

PAL-Red October

The Red October mode is used for secrets that are very high value and need to be managed manually or with multi-person control. We open sourced Red October back in 2013. It has the nice feature of being able to encrypt a secret in such a way that multiple people are required to authorize the decryption.

In a typical PAL-RO setup, each machine in your cluster is provisioned with a Red October account. Before a container is scheduled to run, the secret owners delegate the ability to decrypt the secret to the host on which the container is going to run. When the container starts, pal calls pald, which uses the machine’s Red October credentials to decrypt the secret via a call to the Red October server. Delegations can be limited to a window of time or a number of decryptions. Once the delegations expire or are used up, Red October has no way to decrypt the secret.
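For a sense of what that call might look like, here is a rough sketch of pald asking a Red October server to decrypt with the machine’s delegated account. The /decrypt endpoint path and field names are assumptions about Red October’s JSON API; see the cloudflare/redoctober repository for the authoritative interface.

```go
// Rough sketch: ask a Red October server to decrypt a blob with this
// machine's delegated account. Endpoint and field names are assumptions.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

type roDecryptRequest struct {
	Name     string `json:"Name"`     // this machine's Red October account
	Password string `json:"Password"` // the account's password
	Data     []byte `json:"Data"`     // ciphertext (base64-encoded by encoding/json)
}

func redOctoberDecrypt(server string, req roDecryptRequest) (map[string]interface{}, error) {
	body, err := json.Marshal(req)
	if err != nil {
		return nil, err
	}
	resp, err := http.Post(server+"/decrypt", "application/json", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var out map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	// Decryption only succeeds while the owners' delegations to this machine
	// are still live; otherwise the server reports a non-ok status.
	if out["Status"] != "ok" {
		return nil, fmt.Errorf("red october: %v", out["Status"])
	}
	return out, nil
}

func main() {
	req := roDecryptRequest{Name: "host-42", Password: "machine-password", Data: []byte("placeholder ciphertext")}
	if _, err := redOctoberDecrypt("https://redoctober.internal:8080", req); err != nil {
		log.Fatal(err)
	}
}
```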

These two modes have been invaluable for protecting high-value secrets, where Red October provides additional oversight and control. For lower-value secrets, PGP provides a non-interactive experience that works well with ephemeral containers.

Authorization details

An important part of a secret management tool is ensuring that only authorized entities can decrypt a secret. PAL enables you to control which containers can decrypt a secret by leveraging existing code signing infrastructure. Both secrets and containers can be given optional labels that PAL will respect. Labels define which containers can access which secrets: a container must carry the label of any secret it accesses. Labels are named references to security policies. An example label could be “production-team-secret”, which denotes that a secret should conform to the production team’s secret policy. Labels bind ciphertexts to an authorization to decrypt. These authorizations allow you to use PAL to control when and in what environment secrets can be decrypted.
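The label check itself is conceptually simple. The following is an illustrative sketch (the types and function names are hypothetical, not PAL’s internal API): a container may receive a secret only if it carries that secret’s label.

```go
// Illustrative label check: a container may decrypt a secret only if it
// carries that secret's label. An unlabeled secret is open to any caller.
package main

import "fmt"

func authorized(containerLabels []string, secretLabel string) bool {
	if secretLabel == "" {
		return true
	}
	for _, l := range containerLabels {
		if l == secretLabel {
			return true
		}
	}
	return false
}

func main() {
	labels := []string{"production-team-secret", "web-frontend"}
	fmt.Println(authorized(labels, "production-team-secret")) // true
	fmt.Println(authorized(labels, "payments-secret"))        // false
}
```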

By opening the Unix socket with the option SO_PASSCRED, we enable pald to obtain the process-level credentials (uid, pid, and gid) of the caller for each request. These credentials can then be used to identify containers and assign them a label. Labels allow PAL to consult a predefined policy and authorize containers to receive secrets. To get the list of labels on a container, pald uses the process id (pid) of the calling process to look up its cgroups (by reading and parsing /proc/<pid>/cgroup). The names of the cgroups contain the Docker container id, which we can use to get container metadata via Docker’s inspect call. This container metadata carries the list of labels assigned by the Docker LABEL directive at build time.
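A condensed sketch of that identification path is below. For brevity it reads the peer’s credentials with the SO_PEERCRED socket option rather than SO_PASSCRED ancillary messages, and it stops at extracting the container ID; the socket path and error handling are simplified assumptions.

```go
// Sketch: accept a connection on a Unix socket, read the peer's credentials,
// and map the calling pid to a Docker container ID via /proc/<pid>/cgroup.
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net"
	"regexp"

	"golang.org/x/sys/unix"
)

// dockerID matches the 64-hex-character container ID that Docker places in a
// process's cgroup paths.
var dockerID = regexp.MustCompile(`[0-9a-f]{64}`)

// containerIDForPid parses /proc/<pid>/cgroup and returns the Docker
// container ID the process belongs to, if any.
func containerIDForPid(pid int32) (string, error) {
	data, err := ioutil.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	if id := dockerID.Find(data); id != nil {
		return string(id), nil
	}
	return "", fmt.Errorf("pid %d does not appear to run in a Docker container", pid)
}

func main() {
	l, err := net.Listen("unix", "/var/run/pald.sock") // path assumed
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := l.Accept()
		if err != nil {
			log.Fatal(err)
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Print(err)
			conn.Close()
			continue
		}
		// Ask the kernel for the peer's pid/uid/gid.
		cred, err := unix.GetsockoptUcred(int(f.Fd()), unix.SOL_SOCKET, unix.SO_PEERCRED)
		f.Close()
		if err != nil {
			log.Print(err)
			conn.Close()
			continue
		}
		if id, err := containerIDForPid(cred.Pid); err != nil {
			log.Print(err)
		} else {
			// pald would now look up this container's labels via Docker's
			// inspect API and check them against the secret's label.
			log.Printf("request from pid %d in container %s", cred.Pid, id)
		}
		conn.Close()
	}
}
```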

Containers and their labels must be bound together using code integrity tools. PAL supports using Docker’s Notary, which confirms that a specific container hash maps to specific metadata like a container’s name and label.

PAL’s present and future

PAL represents one solution for identity bootstrapping in our environment. Other service identity bootstrapping tools bootstrap at the host level or are highly coupled to their environment. AWS IAM, for example, only works at the level of virtual machines running on AWS; Kubernetes secrets and Docker secrets management only work in Kubernetes and Docker Swarm, respectively. While we’ve developed PAL to work alongside Mesos, we designed it to be used as a service identity mechanism for many other environments simply by plugging in new ways for PAL to read identities from service artifacts (containers, packages, or binaries).

Recall the issue where Vine disclosed their source code in their Docker containers on a public repository, Docker Hub. With PAL, Vine could have kept their API keys (or even the entire codebase) encrypted in the container, published that safe version of the container to Docker Hub, and decrypted the code at container startup in their particular production environment.

Using PAL, you can give your trusted containers an identity that allows them to safely receive secrets only in production, without the risks associated with other secret distribution methods. This identity can be a secret like a cryptographic key, allowing your service to decrypt its sensitive configuration, or it could be a credential that allows it to access sensitive services such as secret managers or CAs. PAL solves a key bootstrapping problem for service identity, making it simple to run trusted and untrusted containers side-by-side while ensuring that your secrets are safe.

Credits

PAL was created by Joshua Kroll, Daniel Dao, and Ben Burkert, with design prototyping by Nick Sullivan. This post was adapted from an internal blog post by Joshua Kroll and presentations I gave at O’Reilly Security Amsterdam 2016 and BSides Las Vegas.
