Managing .env files and keeping them in sync across environments is painful when done manually.
Doesn't it seem odd that Slack is still a common way for .env files to be synced between team members? Shouldn't we be concerned that syntax errors from .env file edits are so prevalent that dotenv linting tools are needed?
It's time to bring the DevOps principles of automation and ephemeral resources to managing environment variables and .env files. This post is a compilation of Doppler's best tips and tricks to do just that and is broken into three sections:
- Centralized Environment Variable Management
- Dynamically Injected Environment Variables
- Dynamically Created Ephemeral .env Files
Although the examples are Doppler-centric, the goal is to give you fresh ideas for improving app config and secrets automation in your workplace. We've got lots to cover, so let's dive in!
1. Centralized Environment Variable Management
It's simply not possible to automate the syncing of environment variables across teams, hosting platforms, and development environments without a centralized source of truth at the organization level.
Modern platforms such as Heroku and Vercel provide built-in environment variable storage, but unless all of your applications are hosted on a single platform, they can only function as a source of truth for individual applications. And if you're using cloud-based virtual machines such as those from DigitalOcean or AWS EC2, you're on your own to figure out environment variable storage and access.
With the exception of Vercel's development scoped environment variables, first-class local development support is missing from every modern platform and cloud provider, explaining why so many teams still rely on .env files, even if not used in production. We know how crucial it is for local environments to closely mirror production, but it seems we're willing to make tradeoffs when it comes to environment variables.
In the past, secrets managers such as HashiCorp Vault were seen as the solution. But replacing the simplicity of environment variables with complex SDKs often resulted in siloed secrets and teams going rogue by managing environment variables their own way. Cloud secrets managers also didn't improve the local development story.
Essentially, we need a new way of managing environment variables that reflects the needs of modern application development.
Why we need SecretOps
SecretOps is designed for multi-cloud deployments and combines the strengths of traditional solutions while addressing their weaknesses. As a starting point, a SecretOps platform must:
- Centralize the storage and management of secrets
- Provide flexible and secure environment variable injection for every platform and cloud
- Provide a first-class local development experience
- Increase developer productivity through secrets automation workflows
- Decrease the complexity of secrets management
Using Doppler as an example, we're tackling these requirements by providing:
- A fully-hosted solution with collaborative secrets management workflows and fine-grained access controls, all from a single centralized dashboard.
- A CLI for injecting environment variables and ephemeral .env files in any environment, from local development to CI/CD, Docker, Kubernetes, virtual machines, etc.
- Integrations that sync secrets to platforms such as Vercel, Netlify, Heroku, and GitHub Secrets, so applications don't have to integrate with Doppler directly.
Doppler's operating model is that managing secrets should be centralized, but fetching and syncing secrets should be tailored to every customer's needs. For example, many of our customers enjoy the Doppler dashboard's superior features and developer experience but sync secrets to Azure Key Vault so production secrets access remains as is.
Our vision for SecretOps is constantly evolving. Our goal is to share, inspire, and help move our industry forward with new ideas that take secrets automation to the next level.
2. Dynamically Injected Environment Variables
Using Doppler to illustrate, let's look at several methods for environment variable injection.
Platform Injected Environment Variables
Having a platform or infrastructure tool inject environment variables into your application is the best solution, as you can then say goodbye to .env files altogether.
While that removes the need for .env files on select platforms, additional tooling is needed for virtual machines, local development, and Kubernetes, just to name a few.
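For example, on a platform like Heroku, environment variables live as config vars and are injected into the application's environment at runtime. A minimal sketch (the variable names and values below are placeholders):
heroku config:set DB_CONNECTION_URL="postgres://user:pass@host:5432/db" AUTH_TOKEN="example-token"
heroku config    # list the config vars currently set for the app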
CLI Application Runner
This method uses a CLI to run your application, injecting environment variables directly into the application process.
Here is how the Doppler CLI can be used to inject environment variables into a Node.js application:
doppler run -- npm start
You can also use a script:
doppler run -- ./launch-app.sh
Create a long-running background process in a virtual machine:
nohup doppler run -- npm start >/dev/null 2>&1 &
Or use the Doppler CLI inside a Docker container:
…
# Install Doppler CLI
RUN curl -Ls --tlsv1.2 --proto "=https" --retry 3 https://cli.doppler.com/install.sh | sh
CMD ["doppler", "run", "--", "npm", "start"]
A CLI application runner with environment variable injection should have the following properties:
- Be open source
- Support every major operating system
- Be installable via package managers
- Correctly pass signals to the application (e.g., SIGINT) so it can terminate gracefully (a quick check is sketched below)
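One way to sanity-check signal forwarding is to run a command that traps SIGINT and then signal the runner's process rather than the child; if the trap fires, the runner is forwarding correctly. A minimal sketch (the sleep duration and echoed text are arbitrary):
doppler run -- bash -c 'trap "echo caught SIGINT" INT; sleep 60 & wait' &
RUNNER_PID=$!
sleep 1                      # give the runner a moment to start the child process
kill -INT "$RUNNER_PID"      # signal the runner, not the child
wait "$RUNNER_PID"           # "caught SIGINT" confirms the signal was forwarded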
The doppler run command is just one way of accessing secrets, and you can find more examples in our CLI Secrets Access Guide.
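For instance, fetching a single value for use in a script looks something like this (the secret name is a placeholder):
doppler secrets get DB_CONNECTION_URL --plain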
Docker Container Environment Variables
This method injects environment variables into a Docker container at runtime, removing the temptation of embedding an .env file in the Docker image (yes, it happens) and avoiding the need for the host to create and mount an .env file inside the container.
The Doppler CLI pipes secrets in .env file format to the Docker CLI, which reads the output as a file thanks to process substitution:
docker run \
--env-file <(doppler secrets download --no-file --format docker) \
my-awesome-app
Here we get the benefits of .env file configuration but without an .env file ever touching the disk.
You can see other use cases in our docker-examples GitHub repository.
Docker Compose Environment Variables
Docker Compose differs from Docker as it accesses environment variables from the host when docker compose up is run.
Docker Compose sensibly requires you to define which environment variables to pass through to each service, as variables such as $PATH are host-specific:
version: '3.9'
services:
  web:
    image: my-app
    # Host environment variables passed through to container
    environment:
      - AUTH_TOKEN
      - DB_CONNECTION_URL
Because of how Docker Compose accesses environment variables, we can use the Doppler CLI as an application runner:
doppler run -- docker compose up
Other Docker Compose use cases can be found in our docker-examples GitHub repository.
Kubernetes Environment Variables
Kubernetes provides excellent support for injecting environment variables into containers using Key-Value pairs stored in a Kubernetes secret.
Doppler provides two options for syncing secrets to Kubernetes:
- Doppler CLI Created Secrets
- Doppler Kubernetes Operator Managed Secrets
Doppler CLI Created Secrets
The first step is to create a generic Kubernetes secret (the first argument being the secret's name) where, just like with Docker, secrets in .env file format are piped to kubectl, which reads the output as a file:
kubectl create secret generic my-app-secret \
  --from-env-file <(doppler secrets download --no-file --format docker)
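Re-running that exact command fails once the secret exists. A common kubectl idiom (not Doppler-specific) makes it idempotent, so the same command also updates the secret after values change in Doppler:
kubectl create secret generic my-app-secret \
  --from-env-file <(doppler secrets download --no-file --format docker) \
  --save-config --dry-run=client -o yaml | kubectl apply -f -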
Doppler Kubernetes Operator Managed Secrets
Our Kubernetes Operator is designed to scale and fully automate secrets syncing from Doppler to Kubernetes. Once installed and configured, it creates and updates Kubernetes secrets as soon as they change in Doppler, with support for automatically reloading deployments when the secrets they consume change.
As it's a more advanced solution that requires Kubernetes cluster administration experience, we won't be covering it here but check out our Kubernetes Operator documentation to learn more.
Kubernetes Deployments and Environment Variables
Injecting environment variables into a Deployment from the Key-Value pairs in a Kubernetes secret is done using the envFrom property of a container spec:
...
spec:
  containers:
    - name: awesome-app
      envFrom:
        - secretRef:
            name: my-app-secret # Kubernetes secret name
…
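As a quick check that the variables made it into the container, something like the following should work (the Deployment name awesome-app is hypothetical, as the manifest above is truncated):
kubectl exec deploy/awesome-app -- printenv    # the secret's keys should appear in the output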
Hopefully, you've learned some new tricks for environment variable injection! Now let's move on to .env files.
3. Dynamically Created Ephemeral .env Files
Environment variable injection is always preferred, but sometimes, an .env file is the only workable solution.
Protective measures such as locking down file ownership and permissions and heavily restricting shell access should go without saying. But the risk of .env files existing on the file system indefinitely has always been a concern, and it's why we've been hesitant to recommend .env file usage in the past.
But thanks to the Doppler CLI, we can now mount ephemeral .env files that are automatically cleaned up when the application exits. Imagine not having to worry about anyone in your company accidentally committing an .env file again!
One of the most popular use cases is for PHP developers building Laravel applications:
doppler run --mount .env -- php artisan serve
The file extension is used to automatically set the format (JSON format is also supported):
doppler run --mount secrets.json -- npm start
You can set the format if the file extension doesn't map to a known type:
doppler run --mount app.config --mount-format json -- npm start
To increase security, you can also restrict the number of reads:
doppler run --mount .env --mount-max-reads 1 --command="php artisan config:cache && php artisan serve"
If you're wondering what happens to the mounted file if the Doppler process is force killed, its file contents will appear to vanish instantly. This is because the mounted file isn't a regular file at all, but a Unix named-pipe. If you've ever heard the phrase “everything is a file in Unix”, you now have a better understanding of what that means.
Named pipes are designed for inter-process communication while still using the file system as the access point. In this case, it's a client-server model, where your application is effectively sending a read request to the Doppler CLI. In the event the Doppler CLI is force killed, the .env file (named pipe) will still exist, but because no process is attached to it, requests to read the file will simply hang.
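If you want to see this behavior for yourself, here's a minimal named-pipe demo that's independent of Doppler (the file name and contents are arbitrary):
mkfifo demo.env                       # create a named pipe that presents as a regular-looking file
echo "API_KEY=example" > demo.env &   # a writer process "serves" the contents
cat demo.env                          # a reader receives them; with no writer attached, this read would hang
rm demo.env                           # removing the pipe cleans it up from the file system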
It's also named pipes that enable us to restrict the number of reads using the --mount-max-reads option, as once the limit is exceeded, the CLI simply removes the named pipe from the file system.
Summary
I hope you take away some new automation ideas to bring back to your team so you can spend less time updating .env files and more time shipping software.