Cloudflare Tunnel and Access to get admin panels off the Internet
I moved the VPS admin panels (Umami, Infisical, and Dokploy) behind Cloudflare Tunnel with Access. Ports 80 and 443 now serve only public apps; the panels no longer resolve to the origin or expose their own login to the Internet. A minimal compose, Bypass policies for critical webhooks, and a catch-all with email OTP.

With the zone already pretty well locked down at the edge (WAF, rate limiting, AOP, firewall restricted to CF IPs), there was still one piece I didn't like: the admin panels. The blog admin lives behind the Better Auth session, which gives me peace of mind, but the rest (the Dokploy panel, the Infisical panel, the Umami analytics dashboard) were all public subdomains in Cloudflare, with Traefik routing to a service serving a login page. Each of them authenticates properly, but the principle of minimum exposure says otherwise: an admin panel shouldn't even respond to an HTTP request from an anonymous visitor.
The obvious option was Cloudflare Tunnel with Cloudflare Access in front of it. The tunnel makes the VPS open an outbound connection to the Cloudflare edge, so the panel doesn't even need DNS pointing to the origin. Before serving a single byte, Access requires identity. If you're not authenticated, you don't even reach the panel's login page.
I set it up on April 27, 2026 with three staged pilots, first Umami, then Infisical, and finally Dokploy. This post covers the pattern and the gotchas, which are less obvious than they look, especially when the panel also receives external webhooks.
Why Tunnel and not "open the port and put a VPN in front"
The alternatives I ruled out.
- WireGuard to the VPS, perfect for technical use but annoying when you're on your phone visiting a friend and just want to check a dashboard. And if the VPS connection changes IP, you have to touch the peers.
- A commercial VPN like Tailscale, just as good, but it means one more agent on the VPS and shifting trust to a different third party from the one I'm already using for everything else. Since I already have Cloudflare in front of the whole zone, adding Zero Trust from the same account keeps the model together instead of spreading it out.
- WAF rules with an IP allow-list, fragile, my IP changes, and it doesn't protect the first byte (CF still touches the origin).
- Basic Auth in Traefik in front of the panel, enough for some cases, but not for external webhooks from GitHub or Telegram that also go through that host. And identity is still just a shared secret, not proper authentication.
I cover the bigger picture of tunnels, mesh, and reverse proxy (when each one makes sense and when they're not substitutes but different pieces) in SSH vs Cloudflare Tunnel vs Pangolin vs Tailscale vs Headscale vs WireGuard, what I use for what.
Cloudflare Tunnel with Access checks four boxes at once. The origin accepts no inbound connection at all for those hosts (there literally is no public A record, DNS resolves to an internal CF CNAME). Identity is validated with a proper IdP (on Free, email OTP is enough for me). Sessions are centralized and revocable. And everything is managed from the same dashboard I already use for the rest of the zone.
One piece, one compose with cloudflared
The nice trick with Tunnel is that you need exactly one cloudflared container on the VPS, no matter how many panels you route later. Each panel is added from the dashboard as a Public Hostname inside the same tunnel. For any service in the orchestrator to be a target, the cloudflared container has to be on the internal Docker network (in my case dokploy-network) and reference the service by name.
The compose lives as its own entry inside the orchestrator's infrastructure project, slug infrastructure-cf-tunnel, and it's tiny.
```yaml
services:
  cloudflared:
    image: cloudflare/cloudflared:2026.3.0
    restart: unless-stopped
    command: tunnel --no-autoupdate run
    environment:
      TUNNEL_TOKEN: ${TUNNEL_TOKEN}
    networks:
      - dokploy-network

networks:
  dokploy-network:
    external: true
```

The TUNNEL_TOKEN is generated in Zero Trust > Networks > Tunnels when you create the tunnel, and injected as an environment variable from the orchestrator UI. The image is pinned to a version (not latest), and --no-autoupdate stops the binary from updating itself without my control.
I left the tunnel name in CF as dokploy-vps-admin, which makes its purpose clear and keeps it separate from any other tunnel I may create in the future for a different use case.
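Before adding any hostname, it's worth confirming the connector actually registered against the edge. A minimal sketch (the helper name is mine; "Registered tunnel connection" is the line cloudflared logs for each of the four edge connections a healthy tunnel holds):

```shell
# count_tunnel_connections: counts healthy edge connections in a cloudflared log dump.
# A healthy tunnel typically registers 4 connections.
count_tunnel_connections() {
  grep -c "Registered tunnel connection" || true
}

# Usage (service name as in the compose above):
# docker compose logs cloudflared 2>&1 | count_tunnel_connections
```

If the count is 0, the token is wrong or outbound traffic to the edge is blocked, and nothing else in this post will work yet.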
Adding a new panel, step by step
Once the tunnel exists, adding a new panel is always the same routine.
- Public Hostname in the tunnel. In Zero Trust > Networks > Tunnels > dokploy-vps-admin > Public Hostname > Add a public hostname, you set the subdomain (analytics, secretos, dokploy...), the domain (tu-dominio.com), the type (HTTP), and as the URL the service name on the internal network plus its internal port. For Umami that's umami:3000, for Infisical infisical-backend:8080, for Dokploy dokploy:3000. CF automatically creates a CNAME in the zone pointing to the tunnel.
- Delete the old A or CNAME if it existed. This is critical and there's a gotcha here that I explain below. If the host already went through Traefik, you have to remove the pre-existing DNS record before saving the Public Hostname, or it will stay in error.
- Create the Access Applications. This is the important part: it's not enough to have one rule asking for login everywhere. For panels that receive external webhooks (GitHub, Telegram, OAuth callbacks, healthchecks), you need to allow Access bypass on those specific paths. Otherwise your orchestrator's own webhooks will get the CF OTP form as the response and everything breaks quietly.
- Validate with curl that the bypass routes still respond and that the catch-all redirects to Access.
- Cut over from Traefik to the tunnel, which is effectively done the moment DNS changes to the tunnel CNAME. The Traefik labels in the panel compose stay there as a safety net. After 24 to 48 hours with no incidents, I remove them with a compose edit.
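The curl validation in that routine can be made mechanical instead of eyeballed. A small helper, assuming `curl -sI` output on stdin (the function name is mine):

```shell
# http_status: extracts the status code from `curl -sI` output on stdin,
# stripping the trailing \r that HTTP header lines carry.
http_status() {
  awk 'tolower($0) ~ /^http\// { code = $2; sub(/\r$/, "", code) } END { print code }'
}

# Usage: a bypass path should be a 200, the catch-all a 302:
# [ "$(curl -sI "https://analytics.tu-dominio.com/script.js" | http_status)" = "200" ]
# [ "$(curl -sI "https://dokploy.tu-dominio.com/" | http_status)" = "302" ]
```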
The Access Applications order is the hardest part
An Application in Access is basically the pair (host + path) → policy. The same URL can have several Applications, one per path, evaluated from most specific to least. The rule that worked for me is always the same: create the Bypass rules for specific paths first, and the catch-all that protects the rest last. If you do the catch-all first and the bypass rules later, the external webhooks break during those minutes and the orchestrator's automatic deploy gets stuck.
Each Bypass is a Self-Hosted Application with the Bypass + Everyone policy. The catch-all is another Self-Hosted Application with an empty path and the Allow + Emails = [email protected] policy.
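A toy model of that evaluation helps make the ordering concrete. This is not how Access is configured, just a sketch of the "most specific match wins" behavior, using the Dokploy paths as an example:

```shell
# decide_policy: toy model of Access evaluation for one host. The first
# (most specific) match wins, which is why the Bypass entries must exist
# before the catch-all during a cutover: remove them and every path falls
# through to allow-with-otp, including webhooks.
decide_policy() {
  case "$1" in
    /api/deploy*)    echo "bypass" ;;
    /api/providers*) echo "bypass" ;;
    /api/health*)    echo "bypass" ;;
    *)               echo "allow-with-otp" ;;
  esac
}
```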
Case 1, Umami (blog analytics)
Umami serves a private dashboard, but the public script /script.js and the tracking endpoint /api/send need to be accessible to all visitors of my sites, without OTP, obviously. Two Bypass apps and one catch-all.
- Bypass analytics.tu-dominio.com/script.js, Bypass + Everyone policy.
- Bypass analytics.tu-dominio.com/api/send, Bypass + Everyone policy.
- Catch-all analytics.tu-dominio.com/*, Allow + Email [email protected].
Case 2, Infisical (secrets manager)
Infisical is the cleanest case. The apps consuming secrets (CV, Blog, ScamDetector...) don't call the public subdomain, they call the internal Docker network, http://infisical-backend:8080. That route never goes out to the Internet or through the tunnel, so for Infisical only the catch-all is needed.
- Catch-all secretos.tu-dominio.com/*, Allow + Email.
If your setup calls Infisical from outside the VPS, this changes (you'd need a specific Service Token exposed by a different API), but all my integrations are intra-host.
Case 3, Dokploy (orchestrator)
The most delicate one. Dokploy's panel is what I want to expose the least, but it's also what receives the most legitimate external webhooks. Three Bypass apps and one catch-all.
- Bypass /api/deploy. This covers three different routes under it: /api/deploy/github (GitHub App webhook, triggers deploys for projects with auto-deploy), /api/deploy/[refreshToken] (custom webhooks per application), and /api/deploy/compose/[refreshToken] (custom webhooks per compose). I have 8 projects with autoDeploy: true; breaking this path means breaking all automatic deploys.
- Bypass /api/providers. OAuth callbacks from GitHub, Gitea, and GitLab, plus the /api/providers/github/webhook webhook. If you put this behind OTP, the OAuth flow fails, because GitHub won't solve a Cloudflare Access challenge.
- Bypass /api/health. External healthchecks. No point asking them for OTP.
- Catch-all /*, Allow + Email [email protected], 24-hour session, One-time PIN IdP.
I created the three Bypass rules before touching DNS for the catch-all, so during the cutover the webhooks were already accounted for. Then I confirmed that deploying a project with autoDeploy still worked with a test commit, and that GitHub OAuth opened correctly.
Identity Provider, email OTP is enough for one person
The Zero Trust Free plan comes with One-time PIN by email out of the box. It doesn't require configuration, doesn't need an external IdP, and for one person with a single authorized email it's perfectly enough. A 6-digit code arrives by email, you enter it, and you're in.
I set the session to 24 hours. It's a reasonable balance, during a workday I don't get asked for OTP every five minutes, and the next day it asks for identity again. With just 1 person on the account, the Free plan is more than enough for me (it allows up to 50 users).
The Apply instant auth toggle that shows up in the UI doesn't add anything when you only have one active IdP, because in that case CF applies instant auth implicitly and the form jumps straight to the email field without an intermediate screen. I left it OFF on all three panels and it works the same. I'd only turn it on if I add Google SSO in the future and want to force one default option.
Verification, curl with a cache buster
The test I run after each migration is always the same, one curl pass that distinguishes Bypass from catch-all. The ?nocache=$(date +%s) parameter matters, it keeps an intermediate cache from returning an old 200 and making you think the Access rule isn't active.
```shell
# Bypass paths, should return 200 straight from the origin
curl -sI "https://analytics.tu-dominio.com/script.js?nocache=$(date +%s)"
curl -sI "https://dokploy.tu-dominio.com/api/health?nocache=$(date +%s)"

# Catch-all, should return a 302 to Access
curl -sI "https://dokploy.tu-dominio.com/?nocache=$(date +%s)"
# Expected: location: https://tu-cuenta.cloudflareaccess.com/...
```
If the catch-all doesn't return a 302 to cloudflareaccess.com and instead serves the panel HTML, something didn't apply. The most common cause is the Applications order being wrong, or the bypass having a path copied incorrectly.
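That check can be scripted so it fails loudly instead of depending on reading headers. A sketch (the helper name is mine; it reads `curl -sI` output on stdin):

```shell
# expect_access_redirect: succeeds only if the response is a 301/302 whose
# location header points at cloudflareaccess.com, i.e. the catch-all is in force.
expect_access_redirect() {
  awk 'tolower($0) ~ /^http\// && $2 ~ /^30[12]/ { s = 1 }
       tolower($0) ~ /^location:.*cloudflareaccess\.com/ { l = 1 }
       END { exit !(s && l) }'
}

# Usage:
# curl -sI "https://dokploy.tu-dominio.com/?nocache=$(date +%s)" \
#   | expect_access_redirect || echo "catch-all NOT active"
```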
The pre-existing DNS gotcha
This cost me five minutes of confusion the first time. If the host already has an A or CNAME record in the zone (because it used to go through Traefik), when you save the Public Hostname from the tunnel dashboard CF throws a rather unclear error saying the record already exists. The fix is to go to DNS > Records and delete the old record before saving the Public Hostname. Once you save it, CF automatically creates the CNAME to the tunnel.
The detail to remember is that for those few seconds the host has no DNS. For non-critical hosts, no big deal. For something like Dokploy where webhooks may be arriving all the time, it's better to do it in a quiet window or, even better, create the Bypass apps first with a nonsense test path so you know you have the Access flow order right before moving the real DNS.
What about webhooks from external services
The other point worth paying attention to is that external webhooks still arrive at the Cloudflare edge, just through a CNAME to the tunnel. In other words, GitHub calls https://dokploy.tu-dominio.com/api/deploy/github, CF resolves that to a tunnel endpoint, the tunnel carries the request to the internal service, the service responds, the tunnel returns the response to CF, and CF responds to GitHub. Zero open ports on the VPS. Zero published VPS IP. And zero exposed login.
The path is evaluated against the Access Applications; it matches the /api/deploy Bypass with its Bypass + Everyone policy, so the request goes through without any identity check. The Bypass applies regardless of HTTP method: GitHub sends a POST with its own headers and HMAC signature, and the internal service validates that signature afterwards. Cloudflare Access plays no part in the webhook's own authentication. (If that internal service does rate limiting or validates identity by client IP, keep in mind the silent cf-connecting-ip bug I covered in the previous post: webhooks travel through CF edge IPs, and the right header is cf-connecting-ip.)
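What that origin-side validation looks like for GitHub is an HMAC-SHA256 over the raw body, compared against the X-Hub-Signature-256 header. A sketch with openssl (the helper and secret are illustrative; this is the general GitHub scheme, not necessarily Dokploy's exact implementation):

```shell
# verify_github_sig: recompute the HMAC of the raw webhook body with the shared
# secret and compare it against the X-Hub-Signature-256 header value.
verify_github_sig() {
  secret="$1"; body="$2"; header="$3"
  expected="sha256=$(printf '%s' "$body" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')"
  [ "$expected" = "$header" ]
}
```

This is exactly why a Bypass on /api/deploy is safe enough for me: skipping Access doesn't mean skipping authentication, it just moves it to the layer that was already doing it.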
Rollback, two scenarios
As long as the Traefik labels are still in the panel compose and the tunnel is active in parallel, rollback is trivial.
- Before removing the labels, just restore the A record in DNS and traffic goes back to Traefik in less than 60 seconds. The tunnel is still there but unused until DNS points back to its CNAME again.
- After removing the labels, you need to redeploy the compose with the labels added back and restore DNS. Five minutes at most.
- If Access blocks by mistake, delete the catch-all app from the Zero Trust dashboard. The panel will be reachable without OTP for a few minutes while I diagnose it, which is better than locking myself out of the panel I'm trying to fix.
That's why I leave phase 6 (cutover from Traefik) at 24 to 48 hours, the labels are an inert safety net that gets in the way of nothing while the tunnel works, and it gives me a fast rollback during the observation window.
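The first scenario (restore the A record) can even be pre-scripted against the Cloudflare DNS API, so the 60-second rollback doesn't depend on clicking through the dashboard under pressure. A sketch with placeholder zone ID, token, and origin IP:

```shell
# a_record_payload: builds the JSON body to recreate the proxied A record via the
# Cloudflare DNS API. ZONE_ID, CF_API_TOKEN, and the IP below are placeholders.
a_record_payload() {
  printf '{"type":"A","name":"%s","content":"%s","proxied":true,"ttl":1}' "$1" "$2"
}

# Usage:
# curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records" \
#   -H "Authorization: Bearer $CF_API_TOKEN" -H "Content-Type: application/json" \
#   --data "$(a_record_payload dokploy 203.0.113.10)"
```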
Cost and limits of the Free plan
Zero. Zero Trust Free supports up to 50 users, with email OTP included. cloudflared is outbound only, so there is no tunnel cost either; that used to be a limitation of the old Argo Tunnel, and CF removed it years ago. The tunnel is also resilient to VPS IP changes: if I change hosting provider tomorrow and the VPS comes up with a different public IP, the panels stay reachable, because the tunnel connects outbound to the CF edge and doesn't depend on the origin's IP.
The only limit I do watch is active Applications per account, on Free it's 50. Right now I use 8 (3 catch-alls + 5 Bypass apps), so it's not tight.
What's next
The panel I still haven't migrated is n8n. It's the most delicate of all because its webhooks don't come from one specific provider; they come from any integration pointing to /webhook/* or /webhook-test/*. A broad Bypass on those prefixes plus a catch-all for the rest is the same pattern, but I want to do an inventory before moving that piece, in case one of my flows uses a non-standard path.
For the home lab, where traffic is more personal and I don't feel like sending it through a third party's edge, I'm not extending Cloudflare there, I'm going with Pangolin as the self-hosted piece. Same outbound tunnel model with identity in front, but with the whole data path under my control.
That wraps up the VPS migration series behind Cloudflare. There are still operational bits left (watching IPs banned in CrowdSec through Telegram, automating CF token rotation, having a second VPS as a remote worker) but they belong to their own story.

Jose, author of the blog
QA Engineer. I write out loud about automation, AI and software architecture. If something here helped you, write to me and tell me about it.