Infisical in Dokploy: how to manage secrets without putting them in environment variables
Plain-text environment variables are convenient until they stop being safe. Here's how we deployed Infisical as a self-hosted secrets manager inside Dokploy, and how we connected our applications so they can read credentials in an encrypted and auditable way.

Every project you deploy has secrets. Database passwords, API keys, authentication tokens. And when you're managing several services from a platform like Dokploy, the usual thing is to dump all of that into the panel's environment variables and move on. It works, it's fast, and it seems good enough.
Until you stop and think about what that actually means. Those variables are stored as plain text in Dokploy's database. Anyone with access to the panel can see them. There's no record of who accessed what or when. And if you need to rotate a key, you have to go service by service updating values by hand.
This article explains how we set up a self-hosted secrets manager with Infisical, integrated inside Dokploy as just another service, and how we connected our applications so they can read secrets safely without exposing them in environment variables.
The real problem with plain-text environment variables
It's not that environment variables are inherently bad. They're the standard way to configure Docker containers, and most PaaS platforms use them as their main mechanism. The problem starts when you use them to store sensitive information without any extra layer of protection.
In a typical setup with Dokploy (or any similar platform), secrets are exposed in several places. In the platform's internal database, which stores them unencrypted. In the output of docker inspect, which shows all container variables. In the process inside the container, accessible through /proc/*/environ. And in the admin panel itself, visible to any user with access.
When you have a single project with two or three variables, the risk is manageable. But when you're managing ten or fifteen services, each with its own credentials, the exposure surface grows and management turns into an operational problem.
Why Infisical and not another tool
There are several options for secrets management. HashiCorp Vault is the industry reference, but its operational complexity is considerable for a small team or a solo developer. AWS Secrets Manager and similar tools tie you to a specific cloud provider. SOPS with age is elegant for encrypting files in repositories, but it doesn't offer centralized management or an audit log.
Infisical sits in a middle ground that fits our case well. It's open-source, you can deploy it with a simple docker-compose, it has a full web interface for managing secrets by project and environment, it offers AES-256-GCM encryption at rest, an audit log for every access, and a CLI that lets you inject secrets into containers without changing the application's code.
And most importantly for our setup, it deploys as just another service inside Dokploy, shares the same Docker network as the rest of the applications, and doesn't need external infrastructure.
The deployment inside Dokploy
Infisical needs three components. Its own backend (a Node.js application), a PostgreSQL database, and a Redis instance. All three are packaged in a docker-compose that gets deployed as an independent service inside a Dokploy project.
The decision to deploy it as a separate compose inside the same infrastructure project, and not as part of an existing stack, is deliberate. Infisical is the source of truth for secrets, so its lifecycle should be independent. If you need to redeploy another service in the same project, you don't want Infisical restarting as a side effect.
```yaml
services:
  infisical:
    image: infisical/infisical:v0.159.1
    container_name: infisical-backend
    restart: unless-stopped
    environment:
      - NODE_ENV=production
      - ENCRYPTION_KEY=${ENCRYPTION_KEY}
      - AUTH_SECRET=${AUTH_SECRET}
      - DB_CONNECTION_URI=postgres://infisical:${DB_PASSWORD}@infisical-db:5432/infisical
      - REDIS_URL=redis://infisical-redis:6379
      - SITE_URL=${SITE_URL}
      - TELEMETRY_ENABLED=false
    depends_on:
      infisical-db:
        condition: service_healthy
      infisical-redis:
        condition: service_started
    networks:
      - dokploy-network

  infisical-db:
    image: postgres:17-alpine
    container_name: infisical-db
    restart: unless-stopped
    environment:
      - POSTGRES_USER=infisical
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=infisical
    volumes:
      - /etc/dokploy/infisical/postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U infisical"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - dokploy-network

  infisical-redis:
    image: redis:8-alpine
    container_name: infisical-redis
    restart: unless-stopped
    volumes:
      - /etc/dokploy/infisical/redis:/data
    networks:
      - dokploy-network

networks:
  dokploy-network:
    external: true
```

The compose connects all three services to dokploy-network, the external network Dokploy uses for all its containers. This lets any application deployed in Dokploy talk to Infisical internally without exposing ports to the outside world.
The compose environment variables (ENCRYPTION_KEY, AUTH_SECRET, DB_PASSWORD) are the only bootstrap credentials Infisical needs. Once it's up, every other secret is managed from its web interface.
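Generating those three bootstrap values takes one short script. This is a minimal sketch: the SITE_URL value is a placeholder for wherever you expose Infisical, and the key and password formats anticipate the traps described in the next section.

```shell
#!/bin/sh
# Sketch: generate the three bootstrap credentials and write them to .env.
# SITE_URL is a placeholder; adjust it to wherever you expose Infisical.
set -eu

# Exactly 16 bytes, hex-encoded: the key format Infisical accepts
ENCRYPTION_KEY=$(openssl rand -hex 16)

# No length constraint here, so plain base64 is fine
AUTH_SECRET=$(openssl rand -base64 32)

# Strictly alphanumeric so it survives interpolation into the Postgres URI
DB_PASSWORD=$(openssl rand -base64 48 | tr -dc 'a-zA-Z0-9' | head -c 32)

cat > .env <<EOF
ENCRYPTION_KEY=${ENCRYPTION_KEY}
AUTH_SECRET=${AUTH_SECRET}
DB_PASSWORD=${DB_PASSWORD}
SITE_URL=https://secrets.example.com
EOF

chmod 600 .env
```

In Dokploy you'd paste these values into the service's environment tab rather than shipping a .env file, but the generation commands are the same.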
Traps we ran into along the way
The deployment wasn't completely straightforward. We hit three issues, and knowing about them upfront would have saved us time. As a complement, in Docker supply chain security we cover the other side of the coin.
The format of ENCRYPTION_KEY matters a lot. Infisical uses createCipheriv internally, which requires an AES key with an exact length. If you generate the key with openssl rand -base64 32 you get 44 characters, which isn't a valid length for AES. The correct format is openssl rand -hex 16, which produces 32 hexadecimal characters representing exactly 16 bytes.
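The difference between the two formats is easy to see by checking the lengths; these commands are just an illustration, not something Infisical itself runs:

```shell
# 32 random bytes, base64-encoded: a 44-character string, not a valid AES key length
B64_KEY=$(openssl rand -base64 32)

# 16 random bytes, hex-encoded: 32 characters, the format Infisical expects
HEX_KEY=$(openssl rand -hex 16)

echo "base64 length: ${#B64_KEY}"   # 44
echo "hex length:    ${#HEX_KEY}"   # 32
```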
Passwords with special characters break connection URIs. If your database password contains +, = or / (common in base64 strings), when it's interpolated into a URI like postgres://user:password@host/db the result gets corrupted. The + is interpreted as a space and parsing fails without a clear error message. The fix is to generate strictly alphanumeric passwords for any service whose credential will go inside a URI.
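A sketch of the generator we settled on, assuming 32 alphanumeric characters is enough entropy for your threat model:

```shell
# Over-generate base64 entropy, strip everything that isn't alphanumeric,
# and truncate to 32 characters; the result is URI-safe by construction
DB_PASSWORD=$(openssl rand -base64 64 | tr -dc 'a-zA-Z0-9' | head -c 32)

echo "$DB_PASSWORD"
```

The same trick applies to any credential that will end up inside a connection string, not just the PostgreSQL one.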
If a migration fails halfway through, you need to clean up before retrying. Infisical runs database migrations on first startup. If one fails, as happened to us because of the key length issue, the database is left in an inconsistent state. Redeploying without cleaning the PostgreSQL data doesn't fix anything because the migrations control table thinks some of them already ran. The fix is to delete the volume data and let it start from scratch.
Connecting the applications
Once Infisical is running, the flow for connecting an application has two parts. The setup in the Infisical interface, and the changes to the application's Dockerfile.
In Infisical
First you create a project and add the secrets in the corresponding environment (production, staging, or development). Then you create a Machine Identity at the organization level, which is the equivalent of a service account. You assign it Universal Auth authentication, which generates a Client ID and Client Secret pair. Finally, you give it read-only access to the project.
The Machine Identity is what lets the application authenticate against Infisical programmatically, without human intervention and with the minimum privileges it needs.
In the Dockerfile
The application needs the Infisical CLI installed in the Docker image. The CLI handles authentication, fetches the secrets, and passes them as environment variables to the main process. The application doesn't know Infisical exists, it just receives the environment variables like always. If you're interested in the approach we adopted, in environment variables in E2E scripts we explain it in detail.
```dockerfile
# Install the Infisical CLI in the runtime stage
RUN apk add --no-cache curl bash && \
    curl -1sLf 'https://dl.cloudsmith.io/public/infisical/infisical-cli/setup.alpine.sh' | bash && \
    apk add --no-cache infisical

# The CMD authenticates first, then injects the secrets
CMD ["sh", "-c", "export INFISICAL_TOKEN=$(infisical login \
    --method=universal-auth \
    --client-id=$INFISICAL_UNIVERSAL_AUTH_CLIENT_ID \
    --client-secret=$INFISICAL_UNIVERSAL_AUTH_CLIENT_SECRET \
    --domain=$INFISICAL_API_URL --plain) && \
    infisical run --token $INFISICAL_TOKEN \
    --projectId $INFISICAL_PROJECT_ID \
    --env prod --domain $INFISICAL_API_URL \
    -- node server.js"]
```

An important detail: infisical run doesn't authenticate on its own from the CLI environment variables, at least in the version we used. You need an explicit infisical login --method=universal-auth first, capture the token, and pass it to infisical run with the --token flag. If you call infisical run directly, the CLI tries to open a browser for an interactive login, which fails inside a container.
The environment variables in Dokploy
Where the application used to have ten or fifteen environment variables holding real secrets (API keys, passwords, external service tokens), it now has exactly four variables that point to Infisical.
```
INFISICAL_UNIVERSAL_AUTH_CLIENT_ID=...
INFISICAL_UNIVERSAL_AUTH_CLIENT_SECRET=...
INFISICAL_PROJECT_ID=...
INFISICAL_API_URL=http://infisical-backend:8080
```

The internal URL http://infisical-backend:8080 works because both containers are on the same Docker network. The traffic never leaves the server.
What you gain with this change
The fair question is whether all of this setup is worth it compared to just putting the variables in the Dokploy panel and moving on. The answer depends on your context, but these are the concrete benefits you get.
Encryption at rest. Secrets in Infisical are encrypted with AES-256-GCM. In Dokploy environment variables, they're plain text in a SQLite database.
Audit log. Every time an application reads a secret, it's recorded. Who accessed it, which project, which environment, and when. With environment variables, you have no visibility into who saw what.
Instant revocation. If you suspect credentials have been compromised, you revoke the Machine Identity with one click and the application loses access immediately. With environment variables, you'd have to change each individual secret in every affected service.
Centralized management. If you need to rotate the password for a database shared by three services, you change it in one place and all three pick it up on the next startup. Without Infisical, you have to update the variable in three different places and redeploy each one.
Least privilege. Each application has its own Machine Identity with read-only access to its specific project. If someone gets an application's Infisical credentials, they can only read that project's secrets, not the secrets for your whole infrastructure.
The chicken-and-egg problem
There's an obvious paradox in this approach. We're using a secrets manager to avoid having secrets in environment variables, but the credentials used to access Infisical are still sitting in Dokploy environment variables.
That's a fair point, but the difference is substantial. Before, you had N real secrets exposed directly: API keys, database passwords, external service tokens. Now you have a single pair of credentials that contains no real secret, has read-only access to one specific project, can be revoked instantly, and leaves a trace of every use in the audit log.
You always need some bootstrap credential somewhere. The key is that this credential should have the least privilege possible, be revocable, and leave traceability behind. Moving real secrets into an encrypted system with access control is a meaningful improvement, even if the bootstrap mechanism is still an environment variable.
Gradual migration strategy
You don't need to migrate every application all at once. In fact, I recommend not doing that. The approach that has worked best for us is to pick a simple application as a pilot, migrate its secrets to Infisical, verify that everything works properly for a few days, and then expand to the rest of the services progressively.
In our case, we started with a static portfolio that only had three environment variables. It was simple enough to diagnose problems quickly, and representative enough to validate the full flow.
Once we confirmed that secret injection was working properly in development, staging, and production, we migrated the more critical services following the same pattern.
What we learned
Deploying a self-hosted secrets manager isn't hard if you pick the right tool. Infisical with Docker Compose is up in fifteen minutes. What takes longer is deciding on the migration strategy, understanding the CLI nuances, and fixing the small formatting issues that only show up when you connect real pieces together.
If you're managing several services on a VPS with Dokploy, or any similar platform, having a centralized secrets manager saves you operational time in the medium term and significantly reduces the exposure surface of your credentials. It's not a tool you need from day one, but it's one of those infrastructure improvements that, once it's in place, makes you wonder why you didn't install it sooner. If you want to go deeper, in infrastructure with Dokploy we cover it in detail.
Plain-text secrets in environment variables are the norm in most self-hosted deployments. They work until they stop working, and when they stop working it's usually because someone saw something they shouldn't have seen. A secrets manager doesn't remove every risk, but it turns access to sensitive information into something encrypted, auditable, and revocable. And that's a big step up from a plain-text field in an admin panel.
Cover photo by FlyD on Unsplash.
Another post in the Homelab security series. You're coming from Supply chain, Dockerfile, npm, and checksums and continuing with Hadolint and Dockle, static analysis for your Dockerfiles.

Jose, author of the blog
QA Engineer. I write out loud about automation, AI and software architecture. If something here helped you, write to me and tell me about it.