How we set up the infrastructure with Dokploy (and why we left Vercel)
One VPS, Docker, Traefik, and Dokploy. That's how we host the blog and ten more projects. Why we left Vercel, why we picked Dokploy over Coolify, and what we gained and lost along the way.

For a year and a half, this blog lived on Vercel. The workflow was convenient: push to main, automatic build, edge deploy, SSL certificate included. No containers to configure, no servers to maintain, no Dockerfiles to write. It worked and didn't cause any trouble.
Until we started needing things a PaaS doesn't let you do. A SQLite database that persists on disk. Our own S3 storage service. Security headers with full control. And above all, the ability to deploy ten projects on the same VPS without paying for ten separate subscriptions.
This article explains how we set up the current infrastructure with Dokploy, why we chose it over Coolify, and what we gained (and lost) by leaving Vercel.
Why we left Vercel
It wasn't an ideological decision. Vercel was, and still is, an excellent product. For static frontend projects or stateless Next.js apps, it's hard to beat. But our case got more complicated over time.
The blog uses SQLite as its database, with a .db file that needs to persist on disk between deploys. Vercel doesn't offer persistent filesystem storage. The alternative is using an external database (PlanetScale, Turso, Neon), which adds network latency to every query and one more dependency to manage. With local SQLite, read latency is basically zero.
Then more projects showed up. JMO Labs, ScamDetector, a portfolio, internal tools. On Vercel, each one was a separate project with its own usage limits. The bill grew linearly with every new service.
And finally, control. Vercel abstracts the infrastructure so you don't have to think about it. That's an advantage until you do need to think about it. Custom security headers, custom certificates, internal networks between services, cron jobs, persistent volumes. Every one of those needs required a workaround or simply had no solution inside the platform.
Coolify vs Dokploy: the decision
Once we decided we needed a self-hosted PaaS, the two serious options were Coolify and Dokploy.
Coolify is the more mature project. It has a large community, support for multiple languages and frameworks, builds with Nixpacks, and an interface with more features out of the box. If you're coming from Heroku or Vercel and want something that feels as close as possible, Coolify is the obvious choice.
Dokploy is more minimal. It uses Docker Compose and Docker Swarm as its base, doesn't reinvent the build system (you use standard Dockerfiles), and its interface is simpler and more direct. It doesn't have as many predefined integrations, but what it does, it does in a predictable way.
We picked Dokploy for three specific reasons.
First, native Docker. In Coolify, the Nixpacks build system sometimes produces images with unexpected behavior. With Dokploy, you write your own Dockerfile and know exactly what's inside your image. If something breaks, the problem is in your Dockerfile, not in some intermediate abstraction layer.
Second, Docker Compose as a first-class citizen. If you already have a docker-compose.yml for local development, deploying it on Dokploy is just copying the contents into its panel and hitting deploy. Coolify supports compose too, but Dokploy is built around that idea.
And third, lower operational complexity. Dokploy uses fewer server resources and has fewer moving parts. On a VPS with 12 GB of RAM running ten services, every megabyte of admin panel overhead matters.
It's not that Coolify is worse. For larger teams or more varied needs (builds in multiple languages without Docker, managed databases, predefined integrations), Coolify may be the better option. For our case, one developer with Docker-first projects, Dokploy is a better fit.
The current stack
Everything runs on a Debian VPS, managed by Dokploy. The architecture has these pieces.
Dokploy as the management panel. It deploys applications, manages domains, configures SSL, and exposes logs for each service. Under the hood it uses Docker Swarm to orchestrate containers and Traefik as the reverse proxy. As a practical example, we applied this setup to a real project in deploying OpenClaw with Docker and Dokploy.
Traefik handles HTTP/HTTPS routing, SSL termination with Let's Encrypt, and load balancing. Each application registers its domains in Traefik automatically through Docker labels.
SQLite as the database for the blog and other projects that need local persistence. The .db file lives in a persistent bind mount that survives redeploys and container updates.
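As an illustrative sketch of how those two pieces fit together, a Compose fragment with the Traefik routing labels and the persistent bind mount could look like this (service name, host path, domain, and certresolver name are assumptions, not the blog's actual config):

```yaml
# Hypothetical sketch — service name, host path and domain are assumptions.
services:
  blog:
    build: .
    volumes:
      # The SQLite .db file lives on the host and survives redeploys
      - /srv/blog/data:/app/data
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.blog.rule=Host(`example.com`)"
      - "traefik.http.routers.blog.tls.certresolver=letsencrypt"
      - "traefik.http.services.blog.loadbalancer.server.port=3000"
```

In Dokploy you rarely write these labels by hand; the panel generates them when you assign a domain to the application.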
PostgreSQL as the relational database for projects that need more advanced features (RLS, SQL functions, complex relationships). Depending on the project, it's used through InsForge (our self-hosted BaaS) or as an independent instance managed directly in Dokploy.
MinIO as S3-compatible storage. It runs as an independent service inside an infrastructure project in Dokploy, accessible from s3.josemanuelortega.me. Cover images and uploaded files go there.
Infisical as the centralized secrets manager. Also inside the same infrastructure project, it shares the Docker network with the rest of the services. Containers authenticate internally through http://infisical-backend:8080 without exposing traffic to the outside. If you want to go deeper, we cover it in detail in secrets management with Infisical.
Multi-stage Docker: from code to container
The blog's Dockerfile uses a three-stage pipeline. It isn't an optimization gimmick: each stage has a clear purpose.
The first stage installs the dependencies. It installs pnpm with a pinned version, copies the lockfile, and runs pnpm install --frozen-lockfile --ignore-scripts. The --ignore-scripts flag blocks postinstall scripts from all dependencies, closing the supply chain vector we explained in an earlier article. After that, we only rebuild the native modules we explicitly need.
```dockerfile
FROM node:22-alpine AS deps
ARG PNPM_VERSION=10.32.0
RUN corepack enable && corepack prepare pnpm@${PNPM_VERSION} --activate
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile --ignore-scripts && \
    pnpm rebuild better-sqlite3
```

The second stage is the build. It copies the dependencies from the previous stage, creates a temporary database so Next.js can pre-render the static pages, runs the migrations, and builds the application in standalone mode.
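A builder stage along those lines might look like this. It is a sketch under assumptions: the temporary database path, the DATABASE_PATH variable, and the migration script name are illustrative, not the blog's actual files.

```dockerfile
# Hypothetical sketch of the build stage; env var and script names
# are assumptions.
FROM node:22-alpine AS builder
ARG PNPM_VERSION=10.32.0
RUN corepack enable && corepack prepare pnpm@${PNPM_VERSION} --activate
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Temporary DB so Next.js can pre-render static pages during the build
ENV DATABASE_PATH=/tmp/build.db
RUN node drizzle/migrate.cjs && pnpm build
```

The key point is that the build stage never touches the production database; it migrates a throwaway file and discards it with the stage.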
The third stage produces the final image, the runner. It copies only what's strictly needed from the build: the Next.js standalone bundle, the static assets, the SQL migrations, and the startup scripts. It removes npm, corepack, and yarn from the runtime because they aren't needed in production. The application runs as an unprivileged user (nextjs:1001).
```dockerfile
FROM node:22-alpine AS runner
# Strip npm, corepack, yarn — not needed at runtime
RUN rm -rf /usr/local/lib/node_modules/npm \
    /usr/local/lib/node_modules/corepack \
    /usr/local/bin/npm /usr/local/bin/npx \
    /usr/local/bin/corepack
RUN addgroup --system --gid 1001 nodejs && \
    adduser --system --uid 1001 nextjs
USER nextjs
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
COPY --from=builder --chown=nextjs:nodejs /app/public ./public
```

The result is a clean production image, with no development tools, no unnecessary dependencies, and a user that can't escalate privileges.
What happens when you git push
The deployment flow is deliberately simple. No GitHub Actions, no external CI pipelines, no remote image registries.
1. You run `git push origin main` from your local machine.
2. GitHub sends a webhook to Dokploy.
3. Dokploy clones the repository, runs `docker build` with the multi-stage Dockerfile, and stores the image locally.
4. Docker Swarm updates the service with the new image. The old container shuts down and the new one starts.
The entrypoint runs three steps before starting the application: SQLite WAL checkpoint, pending Drizzle ORM migrations, and startup of the Node.js server.
```sh
#!/bin/sh
node .docker/wal-checkpoint.cjs   # SQLite journal cleanup (WAL checkpoint)
node drizzle/migrate.cjs          # Run pending migrations
exec "$@"                         # node server.js
```

The whole process takes between two and four minutes, depending on whether Docker can reuse cached layers. If only application code changed (not dependencies), the pnpm install stage is skipped completely thanks to Docker's layer cache.
Storage: MinIO as self-hosted S3
The blog's cover images and uploaded files are stored in MinIO, an S3 API-compatible storage service running on the same server.
The setup is a docker-compose file inside the infrastructure project in Dokploy. MinIO exposes port 9000 for the S3 API and 9001 for the admin console. Traefik routes each one to its corresponding domain with automatic SSL. We explain the full infrastructure behind it in the blog's technical stack.
The blog-media bucket has a public read policy. Images are served directly from https://s3.josemanuelortega.me/blog-media/posts/{slug}/cover.jpg without going through the application. Next.js optimizes them in the Image component using the remotePatterns configuration.
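The corresponding entry in next.config.ts might look like this. It's a sketch: only the hostname comes from the article, the pathname pattern is an assumption.

```typescript
// Fragment of next.config.ts — pathname pattern is an assumption.
const nextConfig = {
  images: {
    remotePatterns: [
      {
        protocol: "https",
        hostname: "s3.josemanuelortega.me",
        pathname: "/blog-media/**",
      },
    ],
  },
};

export default nextConfig;
```

Without this allowlist, next/image refuses to optimize remote URLs, so it has to be kept in sync with where the media actually lives.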
The alternative would be using Cloudflare R2, AWS S3, or a similar service. The advantage of self-hosted MinIO is that there are no egress costs, no request limits, and the data lives on your server. The downside is that you don't get a global CDN, and if the server goes down, storage goes down with it.
Security: what a PaaS doesn't let you control
One of the reasons for the move was having full control over the security posture. In Vercel you can add some headers, but not all of them, and CSP configuration has practical limitations. With Dokploy, the security configuration lives in next.config.ts and is deployed as part of the code.
These are the headers we apply to all routes except the static content ones.
```typescript
const securityHeaders = [
  { key: "X-Content-Type-Options", value: "nosniff" },
  { key: "X-Frame-Options", value: "DENY" },
  { key: "Referrer-Policy", value: "strict-origin-when-cross-origin" },
  { key: "Cross-Origin-Opener-Policy", value: "same-origin" },
  {
    key: "Strict-Transport-Security",
    value: "max-age=63072000; includeSubDomains; preload",
  },
  { key: "Content-Security-Policy", value: csp },
];
```

The CSP is strict. It only allows scripts from the site's own domain and the analytics service. Frames only from YouTube and Vimeo for video embeds. External connections only to OpenRouter for the AI assistant and to analytics. object-src 'none' and base-uri 'self' by default.
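As a rough sketch of how that csp string could be assembled from the directives described above (the analytics origin is a placeholder assumption, not the real service):

```typescript
// Hypothetical reconstruction of the CSP described above; the
// analytics origin is a placeholder assumption.
const analytics = "https://analytics.example.com";

const csp = [
  "default-src 'self'",
  `script-src 'self' ${analytics}`,
  "frame-src https://www.youtube.com https://player.vimeo.com",
  `connect-src 'self' https://openrouter.ai ${analytics}`,
  "object-src 'none'",
  "base-uri 'self'",
].join("; ");
```

Building the policy as an array keeps each directive reviewable in a diff, which matters when the whole security posture lives in version control.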
On top of the HTTP hardening, the Dockerfile applies supply chain measures that we covered in detail in an earlier article: checksum verification for downloaded binaries, blocking npm scripts during installation, and removing the package manager from the final image.
The admin panel authentication uses Better Auth with passwords of at least 12 characters, support for 2FA with TOTP, and secure cookies (httpOnly, SameSite=strict, Secure in production). Sessions expire after 7 days with silent refresh every 24 hours. There's rate limiting on the authentication endpoints and on the admin API.
What we gained and what we lost
Being honest about the trade-offs matters just as much as explaining the benefits. This is the balance after months with this setup.
What we gained
We have full control over the infrastructure. Every configuration decision is in versioned code. The security headers, the CSP policy, the Dockerfile, the entrypoint, the migrations. If something breaks, the git history has the answer.
The cost is predictable. One VPS with 12 GB of RAM hosts the blog, MinIO, Infisical, and several other projects. The monthly cost is fixed, regardless of traffic or the number of deploys. On Vercel, the bill scaled with every project and every serverless function.
SQLite runs on disk. Reads with microsecond latency. No network connections to external databases. No cold starts. The .db file is backed up daily with a cron job that rotates the last seven copies.
Services talk over an internal network. The blog talks to MinIO and Infisical through the internal Docker network. The traffic never leaves the server. No need to manage firewalls or IP allowlists for connections between services.
What we lost
We lost edge computing. Vercel deploys your application to hundreds of geographically distributed nodes. With Dokploy, everything runs on one server in one location. If a user is far from the server, network latency is higher. For a blog with a mostly Spanish audience, that's not a real problem. For a global service, it would be.
Configuration-free deploys are gone too. On Vercel, deploying a Next.js app is connecting a repository and pushing. With Dokploy you need a Dockerfile, an entrypoint, configured bind mounts, environment variables in the panel, and enough Docker knowledge to diagnose problems.
There is no automatic scaling. If tomorrow an article goes viral and traffic jumps tenfold, Vercel scales automatically. Our VPS doesn't. We'd have to scale vertically (more RAM, more CPU) or put a CDN in front. It's a risk we've accepted for the traffic volume we handle.
Uptime is now our responsibility. If the server goes down at three in the morning, nobody brings it back automatically except Docker's watchdog. On Vercel, their infrastructure team handles that. Here, it's on us.
What we learned
A self-hosted PaaS isn't cheaper in time. It's cheaper in money, yes. But the time you save on billing, you spend on setup, maintenance, and debugging problems that simply don't exist on Vercel. The trade only makes sense if you enjoy the process or if control is a real requirement, not just a whim.
Docker Compose turned out to be the best contract between development and production. If your docker-compose.yml works locally, it works in Dokploy. No environment surprises, no "it works on my machine". The container is the deployment unit, period.
SQLite bind mounts need attention. The directory has to exist before the first deploy, with the correct permissions (1001:1001). If the permissions are wrong, the container starts, the migration fails silently, and the application has no database. It's a problem that only happens once, but it's confusing the first time you see it.
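A minimal host-side sketch of that one-time preparation (the host path is an assumption; the chown only succeeds as root):

```shell
#!/bin/sh
# Hypothetical one-time prep for the SQLite bind-mount directory.
# The host path is an assumption; run as root so chown can succeed.
prepare_bind_mount() {
  dir=${1:-/srv/blog/data}
  mkdir -p "$dir"
  # 1001:1001 matches the unprivileged nextjs user inside the container
  chown 1001:1001 "$dir" 2>/dev/null || echo "warning: chown needs root" >&2
  chmod 700 "$dir"
}
```

Running it before the first deploy means the container's first migration finds a writable directory instead of failing silently.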
Keep Traefik out of your day-to-day concerns. Dokploy configures Traefik automatically for each domain. Don't try to customize its configuration by hand unless you have a very specific reason. Every time we touched Traefik directly, something broke in a creative way.
Set up automatic backups from day one. Not after something breaks. Our cron job runs a WAL checkpoint, copies the database, and rotates the last seven copies. It takes five minutes to set up and it's the difference between losing one article and losing everything.
Next on the horizon is the visible face of all this infra. An e-ink screen next to the monitor with the aggregated status of the VPS, fed by the same timers that send Telegram alerts when something breaks. Being able to see at a glance that everything is still green without unlocking my phone.
Migrating from a managed PaaS to your own infrastructure isn't an upgrade or a downgrade. It's a change in model. You trade convenience for control, automatic scaling for predictable cost, and abstractions for explicit decisions. The key is knowing what you're buying with each option, and making sure what you buy is what you actually need.

Jose, author of the blog
QA Engineer. I write out loud about automation, AI and software architecture. If something here helped you, write to me and tell me about it.