Jan 05, 2025
Securing environment variables in production
5 min read
It is very common for apps to use environment variables for configuration. I do it a lot, and I love it: it is simple, requires zero setup, and it works. For relatively complex products, I usually reach for HashiCorp Vault (with Consul) or AWS Secrets Manager instead.

But for the simpler projects I build over at my studio, setting up and maintaining Vault + Consul would definitely be overkill, and since I do not use AWS, Secrets Manager is not an option either. Environment variables fit my use case perfectly here, but they are tricky in a few ways:
- It is easy to mess them up.
- They are automatically inherited by every child process you spawn (see the snippet below).
- They are visible in the process list (e.g. `ps e`) and in `/proc/<pid>/environ`.
- Many other well-documented reasons why plain environment variables are risky.
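To make the inheritance point concrete, here is a quick shell demo (the `DATABASE_URL` value is made up):

```bash
export DATABASE_URL="postgres://user:hunter2@localhost/db"

# Any child process inherits the full environment, secret included:
sh -c 'printenv DATABASE_URL'

# And the environment of a running process can be read out of /proc:
sleep 60 &
tr '\0' '\n' < /proc/$!/environ | grep DATABASE_URL
```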
Using Infisical
Last year, I started using Infisical for secrets management for the apps at the studio. It is easy to set up, hosted, and relatively cheap, among other things.
It is essentially a true open-source alternative to Vault with a first-class web UI. You can read a comparison between the two at https://infisical.com/infisical-vs-hashicorp-vault
The first step, obviously, is to sign up, set up your project, and enter your secrets. You can also self-host if you choose not to use the SaaS version.

Below is what my config for Sdump looks like:

[screenshot of the Sdump project's secrets in the Infisical dashboard]
Setting up the systemd service
The next step is to create a simple systemd service. You can add this to `/lib/systemd/system/service_name.service` (`service_name` is hypothetical here):
```ini
[Unit]
Description=Backend API

[Service]
# Run the binary as a child of `infisical run` so secrets are injected
# into it, and only it, as environment variables.
ExecStart=infisical run --projectId 6544f-XXXX-YYYY --env=prod -- /usr/local/bin/binary http
User=root
Group=root
UMask=007
# Holds the short-lived INFISICAL_TOKEN written at deploy time.
EnvironmentFile=/etc/infisical/infisical.env

[Install]
WantedBy=multi-user.target
```
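After saving the unit file, reload systemd and enable the service. Note that the very first start will fail until `/etc/infisical/infisical.env` exists; the deploy script later in this post takes care of writing it.

```bash
sudo systemctl daemon-reload
sudo systemctl enable --now service_name
```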
One thing you must have noticed is that instead of running the binary directly, we are now running it as a subprocess of `infisical run`. What the `run` command does is fetch your secrets and inject them into the subprocess as environment values, so only that subprocess can access them. This essentially gets rid of a whole class of issues with keeping `.env` files around.
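You can see this from a shell; `DATABASE_URL` is a hypothetical secret name here:

```bash
# The secret exists inside the subprocess...
infisical run --projectId 6544f-XXXX-YYYY --env=prod -- printenv DATABASE_URL

# ...but it never touches the parent shell:
printenv DATABASE_URL   # prints nothing
```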
Is this secure?
Another thing avid readers must have noticed is that `infisical run` just works, so if anyone SSHes into the server, they would have access to the Infisical token and could fetch the secrets themselves. At that point, what is the difference between this and just storing the `.env` file on the machine directly?
The difference is that only temporary tokens are used. Take a look at `EnvironmentFile`: the temporary token is stored there, and systemd makes the values from that file available to the `infisical` process. The `infisical run` process then uses that token to authenticate its request and fetch the secrets.
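Concretely, after a deploy, the file pointed to by `EnvironmentFile` contains a single short-lived token (the value below is made up):

```bash
# /etc/infisical/infisical.env -- written by the deploy script below
INFISICAL_TOKEN=eyJhbGciOi...   # expires shortly after issuance (mine: 60s)
```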
The way I set this up and automate it for my projects is with GitHub Actions, as below:
```yaml
name: Deploy to Linode

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install golang-migrate
        run: |
          curl -L https://github.com/golang-migrate/migrate/releases/download/v4.18.1/migrate.linux-amd64.tar.gz | tar xvz
          sudo mv migrate /usr/local/bin/migrate

      - uses: actions/setup-go@v5

      - name: Build app
        run: go build -o fotion cmd/server/main.go

      - name: SCP to Linode instance (binary)
        uses: appleboy/scp-action@master
        with:
          host: ${{ secrets.IP_ADDRESS }}
          username: "root"
          key: ${{ secrets.SSH_KEY }}
          port: 22
          source: "fotion"
          target: "/root"

      - name: Run migrations
        run: |
          migrate -database ${{ secrets.POSTGRES_ROOT_DSN }}/fotion -path ./datastore/postgres/migrations up

      - name: Restart fotion systemd service
        uses: appleboy/ssh-action@master
        env:
          INFISICAL_CLIENT_ID: ${{ secrets.INFISICAL_CLIENT_ID }}
          INFISICAL_CLIENT_SECRET: ${{ secrets.INFISICAL_CLIENT_SECRET }}
        with:
          host: ${{ secrets.IP_ADDRESS }}
          username: "root"
          key: ${{ secrets.SSH_KEY }}
          port: 22
          script_path: deploy/linode.sh
          envs: INFISICAL_CLIENT_ID,INFISICAL_CLIENT_SECRET
```
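The workflow assumes those repository secrets already exist. If you use the GitHub CLI, setting them up looks roughly like this (all values here are placeholders):

```bash
gh secret set INFISICAL_CLIENT_ID --body "machine-identity-client-id"
gh secret set INFISICAL_CLIENT_SECRET --body "machine-identity-client-secret"
gh secret set IP_ADDRESS --body "203.0.113.10"
gh secret set SSH_KEY < ~/.ssh/deploy_key
gh secret set POSTGRES_ROOT_DSN --body "postgres://user:pass@host:5432"
```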
Then create a new file called `deploy/linode.sh`, or anything else (but update the GitHub Actions workflow with the correct path). The file should contain the following:
```bash
#!/bin/bash

## This script fetches a temporary token from Infisical.
## The token is safe to store on disk since it is only valid for a short
## window. You can configure this from the Infisical dashboard; mine is
## set to 60 seconds.

# Check that the required environment variables are set
if [ -z "$INFISICAL_CLIENT_ID" ]; then
  echo "Error: INFISICAL_CLIENT_ID is not set"
  exit 1
fi

if [ -z "$INFISICAL_CLIENT_SECRET" ]; then
  echo "Error: INFISICAL_CLIENT_SECRET is not set"
  exit 1
fi

## Create the directory referenced by `EnvironmentFile` in the systemd service
mkdir -p /etc/infisical

## Fetch the temporary token from Infisical
INFISICAL_TOKEN=$(infisical login --method=universal-auth --client-id="${INFISICAL_CLIENT_ID}" --client-secret="${INFISICAL_CLIENT_SECRET}" --silent --plain)

if [ -z "$INFISICAL_TOKEN" ]; then
  echo "Error: Failed to obtain INFISICAL_TOKEN"
  exit 1
fi

## Write the temporary token to the file so that the next time the service
## restarts, `infisical run` can use the new token.
## (`echo ... | sudo tee` rather than `sudo echo ... >`, so the redirection
## itself also runs with elevated privileges.)
echo "INFISICAL_TOKEN=$INFISICAL_TOKEN" | sudo tee /etc/infisical/infisical.env >/dev/null
sudo chmod 600 /etc/infisical/infisical.env

## Stop the service so the binary can be replaced
sudo systemctl stop fotion

## Move the binary that was scp'd earlier to the correct location
sudo mv /root/fotion /usr/local/bin/fotion

## Restart so the service picks up the new binary and token
sudo systemctl restart fotion
sudo systemctl status fotion
```
You can also use this to run your deployments end-to-end :))

The only downside is that you cannot just run `systemctl restart service_name` on the box, since the token in `EnvironmentFile` will likely have expired by then. You have to go through GitHub Actions, which can be as simple as re-running the last successful build.
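If you use the GitHub CLI, that re-run can look like this (the workflow name comes from the YAML above; `<run-id>` is whatever the first command prints):

```bash
# Find the most recent run of the deploy workflow...
gh run list --workflow "Deploy to Linode" --limit 1

# ...then re-run it to mint a fresh token and restart the service
gh run rerun <run-id>
```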