Why I put my whole VPS behind Cloudflare

My VPS had been responding directly to the internet for a year and a half. Traefik was serving port 443, Let's Encrypt was validating over HTTP-01 on port 80, and I had a few dozen apps hanging off two DNS zones. It worked fine, but every time I opened the access logs I saw the same thing: bursts of requests to /wp-admin, /.env, /.git/config, login attempts against /xmlrpc.php, and bots trying WordPress CVEs on projects that don't have a single line of PHP.
I decided to put Cloudflare in front of everything. Not just as a CDN, but as a real proxy, with Authenticated Origin Pulls enabled and a firewall on the VPS that rejects any connection not coming from CF's official ranges. The idea is simple: my origin shouldn't talk to anyone except Cloudflare.
This is the first post in a series of seven where I walk through the whole migration. How I switched certificates to DNS-01, how I configured AOP, how I locked down the kernel ports, how I set up a monitor that automatically reinjects the token when Dokploy deletes it, what WAF rules I use on the Free plan, and what silent bug I found in my own apps when I put Cloudflare in the middle.
The previous state
Before the migration, the VPS looked like this: one public IP, ports 80 and 443 open to the whole internet, and Traefik managing certificates with Let's Encrypt over HTTP-01. Anyone could connect directly to the hostname or the IP, and the access log was a daily chronicle of internet noise.
The apps had their own protection layer. Rate limiting by IP, CrowdSec with its bouncer in Traefik, reasonable security headers, basic auth where it made sense. But all of those defenses assumed a model where the attacker reaches the origin. What was missing was a first layer that filtered traffic much earlier, ideally outside my infrastructure, and hid the VPS IP.
What I wanted to get from Cloudflare
My threat model isn't paranoid. I'm not expecting a sophisticated targeted attack; I'm expecting the constant noise of the open internet and the occasional traffic spike that saturates the provider's line. I had four goals when I put Cloudflare in the middle.
Hide the origin IP so scanners can't bypass Cloudflare or launch a volumetric DDoS against the VPS.
Filter hostile traffic before it reaches my VPS, with WAF rules that block known scanner paths, geoblocking for countries I never get legitimate traffic from, and rate limiting on sensitive endpoints.
Authenticate traffic to the origin with a client cert signed by Cloudflare's CA. If someone somehow discovers the VPS IP and points at it with a valid Host header, Traefik drops the connection during the TLS handshake because they aren't presenting the client cert.
Keep everything within Cloudflare's Free plan. No paying for Pro or ACM, treating the limits of Free as design constraints.
What I didn't want was to couple myself to Cloudflare so tightly that I couldn't leave. The apps still don't know they're behind CF, the certificates are still Let's Encrypt on my VPS, the data is still in my infrastructure. Cloudflare is a smart proxy in front, not a replacement for the stack.
The four defense layers I set up
The result is a layered architecture, each layer with a clear purpose and all of them working together.
Internet
   |
   v
[Cloudflare edge]       Free WAF, Universal SSL, geoblocking
   |
   v
[Kernel firewall]       iptables + ipset whitelist of CF IPs on 80/443
   |
   v
[Traefik with AOP]      tls.options requires a client cert signed by CF's CA
   |
   v
[Apps with rate-limit]  each app with its own anti-abuse logic

Layer 1, the Cloudflare edge
The first filter lives outside my VPS. Cloudflare presents the public certificate to the visitor, validates the host, applies the WAF rules (geoblocking, scanner paths, rate limiting), and forwards the request to the origin. Any request blocked at this layer never touches my server; it doesn't even burn CPU on the TLS handshake.
Layer 2, the kernel firewall
Even if CF is the legitimate path, someone could still discover the VPS IP and connect directly. The second layer is a systemd script that downloads the official Cloudflare ranges every night and loads them into an ipset. The iptables rules in the DOCKER-USER chain accept traffic on 80 and 443 only from those ranges, everything else gets dropped. Port 3000 for the Dokploy UI is also closed to the outside but open between containers.
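A minimal sketch of what that nightly refresh can look like, assuming the public interface is ens3 (as in my setup), IPv4 only, and placeholder paths; the real script also handles IPv6 and error cases:

```shell
#!/bin/sh
# Refresh the Cloudflare IPv4 ranges into an ipset and allow 80/443 only
# from them. Run nightly from a systemd timer; requires root.
set -eu

curl -fsS https://www.cloudflare.com/ips-v4 -o /tmp/cf-ips-v4

ipset create cloudflare4 hash:net -exist
ipset flush cloudflare4
while read -r net; do
  ipset add cloudflare4 "$net"
done < /tmp/cf-ips-v4

# Accept 80/443 only from the set, then drop the rest. Both rules are scoped
# to the public interface with -i ens3, so container-to-container traffic
# inside the Docker bridges is untouched.
iptables -C DOCKER-USER -i ens3 -p tcp -m multiport --dports 80,443 \
  -m set --match-set cloudflare4 src -j ACCEPT 2>/dev/null || \
  iptables -I DOCKER-USER -i ens3 -p tcp -m multiport --dports 80,443 \
    -m set --match-set cloudflare4 src -j ACCEPT
iptables -C DOCKER-USER -i ens3 -p tcp -m multiport --dports 80,443 -j DROP 2>/dev/null || \
  iptables -A DOCKER-USER -i ens3 -p tcp -m multiport --dports 80,443 -j DROP
```

The `-C` checks make the script idempotent, so the timer can run it every night without stacking duplicate rules.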
Layer 3, Traefik with Authenticated Origin Pulls
If the firewall failed or I disabled it by mistake, Traefik is still there. I configured a tls.options called cloudflare-aop with clientAuthType: RequireAndVerifyClientCert and Cloudflare's public CA as a trusted file. Any TLS handshake that doesn't present a client cert signed by that CA ends in an SSL error before it ever reaches the app. Only Cloudflare presents that cert, so only Cloudflare can start TLS against my routers.
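In Traefik's dynamic configuration that option looks roughly like this (a sketch: the file path is an assumption, and the PEM is Cloudflare's published origin-pull CA certificate):

```yaml
# Dynamic config (sketch). The mount path is a placeholder; the PEM file is
# Cloudflare's Authenticated Origin Pulls CA, downloaded from their docs.
tls:
  options:
    cloudflare-aop:
      clientAuth:
        caFiles:
          - /certs/origin-pull-ca.pem
        clientAuthType: RequireAndVerifyClientCert
```

Each HTTPS router then opts in by referencing the option, e.g. `tls.options: cloudflare-aop@file` on a label or in the router definition.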
Layer 4, the apps
Each app still keeps its own anti-abuse logic. Rate limiting by IP, owner verification by hash, log integrity with HMAC, whatever each case needs. The difference is that now the client IP comes from the cf-connecting-ip header, not from x-real-ip. That has its own story, and it gets the last post in the series, because silently breaking it was one of the subtlest side effects of putting Cloudflare in the middle.
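A minimal sketch of the adjustment, assuming an Express-style request object (the helper name is mine, not from any of my apps):

```javascript
// Behind Cloudflare the real client IP arrives in cf-connecting-ip, while
// x-real-ip / x-forwarded-for now carry Cloudflare edge addresses. This
// helper prefers the CF header and falls back when CF is not in front.
function clientIp(req) {
  return (
    req.headers["cf-connecting-ip"] || // set by Cloudflare at the edge
    req.headers["x-real-ip"] ||        // what Traefik set pre-migration
    req.socket?.remoteAddress ||       // direct connection, last resort
    "unknown"
  );
}

module.exports = { clientIp };
```

Every rate limiter or audit log keyed on the old header needs this change, otherwise it starts counting Cloudflare's edge IPs as "the client".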
How certificate renewal works afterward
If the firewall closes port 80 to everything except Cloudflare, Let's Encrypt can't validate domains over HTTP-01. The fix is to switch Traefik's resolver to DNS-01, which validates by creating a TXT record in the DNS zone through the Cloudflare API. The token gets injected as an environment variable into the Traefik container and all routers move to the new resolver.
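In Traefik's static configuration the switch looks roughly like this (a sketch: the resolver name, email, and storage path are placeholders):

```yaml
# Static config (sketch). Traefik's ACME client reads the Cloudflare API
# token from the CF_DNS_API_TOKEN environment variable in the container.
certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com           # placeholder
      storage: /letsencrypt/acme.json  # placeholder
      dnsChallenge:
        provider: cloudflare
```

With this in place, port 80 can stay closed to everything except Cloudflare: validation happens through a TXT record, not an inbound HTTP request.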
That brings one operational detail: if Dokploy updates itself, it recreates the dokploy-traefik container without the environment variable, and renewals start failing silently. You'd only notice 60 to 90 days later, when the certs expired. To avoid that, I set up a systemd timer that checks every hour whether the token is still inside the container and, if it isn't, reinjects it automatically by reading it from a local file and notifying me through ntfy. It's a small piece, but it covers one of the most realistic failure modes in this setup.
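The check itself can be as small as this (a sketch: the container name comes from Dokploy, the ntfy topic is a placeholder, and the actual reinjection depends on how Dokploy launches Traefik, so it is left to the script this check hands off to):

```shell
#!/bin/sh
# Run hourly from a systemd timer: verify the Cloudflare token is still set
# inside the dokploy-traefik container and alert via ntfy when it is gone.
set -eu

if docker exec dokploy-traefik printenv CF_DNS_API_TOKEN >/dev/null 2>&1; then
  exit 0  # token present, renewals keep working
fi

# Token gone (Dokploy recreated the container): notify, then let the
# reinjection step read the token back from its local file.
curl -fsS -d "CF_DNS_API_TOKEN missing from dokploy-traefik, reinjecting" \
  https://ntfy.sh/my-infra-alerts  # topic name is a placeholder
```

`docker exec ... printenv` exits non-zero both when the variable is unset and when the container is down, so either condition triggers the alert.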
Series roadmap
In the next posts I'll go through each piece in enough detail that you can replicate it.
This post, motivation and architecture.
DNS-01 and AOP in Traefik, the two main changes in the reverse proxy and the right order when you do both in the same session.
Firewall by Cloudflare IPs, the systemd script, the ipset, and the lesson from the DROPs without -i ens3 that broke container-to-container traffic for me.
Cloudflare token monitor, the hourly timer that detects when Dokploy deletes the env var and puts it back on its own.
WAF, cache, and hardening on Cloudflare Free, the exact recipes I used without paying for a higher plan.
The silent cf-connecting-ip bug, what breaks in your apps when you put CF in front of Traefik and how to fix it in any Node project.
Cloudflare Tunnel and Access to take admin panels off the Internet, a later post where I take Dokploy, Infisical, and Umami off public DNS and put them behind identity.
Outside the seven posts in the series, I've also spent the last few weeks thinking about putting the aggregated status of all this infra on an e-ink screen next to my monitor, powered by the same timers that already notify me. That's the visible part I'm still missing.
The plan isn't to sell Cloudflare; it's to document the path of someone who had an exposed VPS and decided to put one more layer in front of it with a weekend engineer's judgment, no budget, and all the code and runbooks under his own control. If I decide to leave CF tomorrow, the rollback for each layer is documented.

Jose, author of the blog
QA Engineer. I write out loud about automation, AI and software architecture. If something here helped you, write to me and tell me about it.
What did you think? What would you add? Every comment sharpens the next post.