Pangolin to open up the home lab to the Internet without depending on Cloudflare
After putting the VPS admin panels behind Cloudflare Tunnel and Access, it's time to deal with the other side of the network, the home lab. I don't want Home Assistant, the NAS, the cameras, and the Raspberry going through Cloudflare's edge every time I open the app on my phone. Pangolin is the self-hosted piece that does the same thing (zero open ports, identity before the first byte) but keeps the entire data path inside my own infrastructure.

I just wrapped up the migration of the VPS admin panels behind Cloudflare Tunnel and Access, and now my head is on the other side of the network, the home lab. I've got Home Assistant orchestrating lights, blinds, and the thermostat, the Synology NAS, the camera panels, a Raspberry running Pi-hole, and a couple more services that only I use (and sometimes my family). Right now all of that lives on the LAN and I get to it through a VPN on the router when I'm away. It works, but the VPN on the router is the most rigid part of the setup, and I'm pretty sure the VPS model (zero open ports, identity before the first byte) is what I want at home too.
The obvious next step would be setting up another Cloudflare Tunnel for the home lab. I ruled that out after thinking it over for a couple of evenings, and today's post is about why, what I'm putting in its place, and how it fits with what I already have on the VPS.
Why Cloudflare's edge doesn't fit my home setup
For the VPS admin panels, Cloudflare is the obvious choice. The zone was already on CF, the apps I'm protecting are work tools, and the traffic going through them belongs to a single user, me. But the home lab is a different kind of thing.
- The traffic is personal, and sometimes private. A call from Home Assistant because someone turned on the light, a stream from the front door camera, the NAS panel with photo thumbnails. I don't love the idea of every one of those bytes going out to a third party's edge, even if that third party never reads them. Routing my private traffic simply isn't a problem I need them to solve.
- The home lab has non-HTTP protocols waiting in line. RTSP from the cameras, MQTT, SSH to the Raspberry, SMB to the NAS. Cloudflare Tunnel is great for HTTP/HTTPS, but once you start pushing plain TCP through it, you're into Spectrum, paid plans, and no longer dealing with the simple solution it was for the panels.
- I want to decouple dependencies. If CF has a serious incident tomorrow and my blog goes down, I'd rather my heating at home doesn't go down with it. One piece for work, another for home.
- And one less technical point, the home lab is exactly the place where learning a self-hosted alternative is worth it. If I set it up in work production and it breaks on Monday, I learn the wrong lesson. If I set it up at home and the thermostat stops responding on a Saturday afternoon, I use the native client and fix it on Sunday.
The piece I'm going to use is Pangolin, an open source project that does pretty much the same thing as Cloudflare Tunnel + Access, but keeps the full data path inside machines I control.
What Pangolin is (and what each piece does)
Pangolin isn't one thing, it's a stack. The useful part is understanding what each piece does before you touch the compose file, because after that everything clicks into place pretty quickly.
- Pangolin is the control plane. It exposes the dashboard where you define sites, resources, users, and rules. A web app with its own database.
- Gerbil is the WireGuard server side. It listens on UDP/51820, validates handshakes, and routes packets to and from clients. It shares a network namespace with Traefik, so ports 80 and 443 show up here too.
- Traefik is the usual reverse proxy, with a couple of plugins to talk to Pangolin (dynamic config over HTTP) and issue Let's Encrypt certificates automatically. Pangolin renders the route config from its API.
- Newt is the client on the other end, a small user-space binary (it doesn't need root or a kernel module) that opens the outbound connection to Gerbil and acts as a TCP/UDP proxy toward internal services. It's the equivalent of `cloudflared` in the Cloudflare model.
The mental model is exactly the same one I already know from Cloudflare. The home lab never has an open port. Newt establishes an outbound session to Gerbil over WireGuard. Pangolin/Traefik accept the public HTTPS traffic on the VPS, route it to the right site through the tunnel, and the response comes back the same way. The difference is that I run Pangolin myself on a small VPS I control, and I put Newt inside the home network.
Sites and resources, the data model
Pangolin has two abstractions worth getting straight before you start clicking around.
A site is a remote location where a Newt connects from. In my case I'm going to have a single site, "home", with one Newt. If I later set up a Newt at my parents' place to keep an eye on their NAS, that'll be another site. The separation is useful because it lets you reuse internal names. 192.168.1.10 on my LAN is not 192.168.1.10 on theirs, but the dashboard knows which site each resource belongs to.
A resource is what you actually publish, the pair of "public subdomain" plus "internal destination". Each one has its own access policy, its own SSL, rules, headers, and targets. It's the equivalent of a Cloudflare Access Application, except here routing and auth live in the same piece.
For the home lab I'm going to start with five resources, one per panel.
- casa.midominio.com, pointing to Home Assistant at http://homeassistant.local:8123.
- nas.midominio.com, pointing to the Synology DSM panel.
- camaras.midominio.com, pointing to Frigate (the NVR I run on top of the cameras).
- pi.midominio.com, pointing to the Pi-hole panel on the Raspberry.
- printer.midominio.com, pointing to the 3D printer panel, which I still don't use much but don't feel like exposing to the Internet behind basic auth.
I manage the public subdomain with my DNS provider, pointing it at the IP of the VPS where Pangolin runs. Pangolin gets the Let's Encrypt certificates automatically through Traefik, so each panel ends up with HTTPS without me having to touch anything after that.
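To make the site scoping concrete, here's a toy sketch of the sites/resources split. This is my own illustration, not Pangolin's actual schema or API; the parents' subdomain and the DSM port are made up for the example.

```python
# Toy model of Pangolin's sites/resources split (illustrative only, not
# the real schema). A resource maps a public subdomain to an internal
# target, and that target is resolved inside the site's own LAN, which
# is why the same private IP can safely repeat across sites.

resources = {
    # (site, public subdomain) -> internal target
    ("home",    "casa.midominio.com"):        "http://homeassistant.local:8123",
    ("home",    "nas.midominio.com"):         "http://192.168.1.10:5000",  # DSM port is illustrative
    ("parents", "nas-abuelos.midominio.com"): "http://192.168.1.10:5000",  # same IP, different LAN
}

def target_for(site: str, host: str) -> str:
    """Resolve the internal target the way the dashboard does: per site."""
    return resources[(site, host)]

print(target_for("home", "nas.midominio.com"))         # reached via the "home" Newt
print(target_for("parents", "nas-abuelos.midominio.com"))  # reached via the parents' Newt
```

Same 192.168.1.10 on both LANs, no clash, because the lookup key always includes the site.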
How I'm setting it up
The deployment has two sides.
On one side, a small VPS with a public IP where the Pangolin stack runs. It does only this, so it doesn't get mixed with the work VPS (where the blog, Dokploy, and the rest live). A minimal instance from any cheap provider is more than enough; the VPS only relays traffic through the tunnel, the services themselves keep running at home.
The Pangolin compose file follows the pattern from the official docs, three coordinated services.
```yaml
services:
  pangolin:
    image: fosrl/pangolin:1.x.y
    container_name: pangolin
    restart: unless-stopped
    volumes:
      - ./config:/app/config
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3001/api/v1/"]
      interval: 3s
      timeout: 3s
      retries: 15

  gerbil:
    image: fosrl/gerbil:1.x.y
    container_name: gerbil
    restart: unless-stopped
    depends_on:
      pangolin:
        condition: service_healthy
    command:
      - --reachableAt=http://gerbil:3004
      - --generateAndSaveKeyTo=/var/config/key
      - --remoteConfig=http://pangolin:3001/api/v1/
    volumes:
      - ./config/:/var/config
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    ports:
      - 51820:51820/udp
      - 21820:21820/udp
      - 443:443
      - 80:80

  traefik:
    image: traefik:v3.4.0
    container_name: traefik
    restart: unless-stopped
    network_mode: service:gerbil
    depends_on:
      pangolin:
        condition: service_healthy
    command:
      - --configFile=/etc/traefik/traefik_config.yml
    volumes:
      - ./config/traefik:/etc/traefik:ro
      - ./config/letsencrypt:/letsencrypt
      - ./config/traefik/logs:/var/log/traefik

networks:
  default:
    driver: bridge
    name: pangolin
```
The detail I missed at first is that Traefik uses network_mode: service:gerbil. In other words, it isn't a service with its own ports, it shares Gerbil's network namespace. That's why 80 and 443 show up under Gerbil, not Traefik. It's the only clean way to have public HTTPS traffic and WireGuard traffic coexist without convoluted iptables rules.
The images are pinned to versions, never latest. It's the same rule I apply to cloudflared on the other side, updates happen when I decide with a conscious PR, not at 03:00 without me noticing.
On the other side, at home, Newt runs in a dedicated docker compose on one of the machines on the LAN (the Raspberry is perfectly fine for this, it uses basically nothing).
```yaml
services:
  newt:
    image: fosrl/newt:1.x.y
    container_name: newt
    restart: unless-stopped
    environment:
      - PANGOLIN_ENDPOINT=https://pangolin.midominio.com
      - NEWT_ID=${NEWT_ID}
      - NEWT_SECRET=${NEWT_SECRET}
```
NEWT_ID and NEWT_SECRET are generated by Pangolin when you create the site in its dashboard. I inject the sensitive part from my secrets manager (I already wrote on this blog about how I put Infisical in front of projects, and I use the same pattern here). If I want to be stricter, Pangolin lets you pass the config as a Compose Secret instead of variables, which is nice for a Raspberry that's physically exposed in the kitchen.
Newt doesn't need to run on the same machine as the target service. Once the session is open, any IP or name reachable from Newt's LAN is reachable as a target. For HTTP panels that's transparent, the camera listens on its LAN IP and the dashboard resource points to that IP. It's only when some service is running internally on Newt's own localhost that I need to think about where I place it.
Identity provider and rules, the Access equivalent
This is where Pangolin has the nicest edge over Cloudflare. Access in CF is a piece attached to the tunnel but conceptually separate, with its own dashboard, its Applications evaluated in order, and its Bypass rules. In Pangolin, auth is per resource and configured in the same form where you define the destination.
Each resource has an auth block, and the policy can take a few forms.
- Pangolin's own users, local users with their password and optional TOTP MFA. That's what I'm going to use, because I have a single admin user and a couple of occasional guests.
- OIDC, toward a self-hosted Authentik or Keycloak. It supports any IdP that speaks OpenID Connect with standard configuration (`client_id`, `client_secret`, `authorization_url`, `token_url`). Longer term I might set up Authentik as the single IdP for the house, but to get started Pangolin's native auth is enough.
- Access rules per resource, evaluated with JMESPath expressions over the IdP claims. The docs have a nice example, assigning users to a Pangolin organization if their `groups` claim contains `"home-lab"`, via `contains(groups, 'home-lab')`. For a family setup it's exactly what I want, one rule like "anyone with an email from the domain can get into the NAS panel to see photos" and another like "only I can get into Home Assistant".
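Translated to plain Python, the two family rules read like this. This is only a sketch of the logic, not Pangolin's evaluator; the claim names and the domain are illustrative, and the real thing is expressed as JMESPath over the IdP claims.

```python
# Sketch of the two access rules as plain membership checks.
# "groups" and "email" are typical OIDC claim names, assumed here.

def allow_home_assistant(claims: dict) -> bool:
    """Only members of the home-lab group, i.e. contains(groups, 'home-lab')."""
    return "home-lab" in claims.get("groups", [])

def allow_nas(claims: dict, domain: str = "midominio.com") -> bool:
    """Anyone with an email from the family domain can see the NAS photos."""
    return claims.get("email", "").endswith("@" + domain)

admin = {"email": "jose@midominio.com", "groups": ["home-lab", "admins"]}
guest = {"email": "familia@midominio.com", "groups": []}

print(allow_home_assistant(admin), allow_nas(admin))  # True True
print(allow_home_assistant(guest), allow_nas(guest))  # False True
```

The guest gets the photos but not the thermostat, which is exactly the split I want.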
The equivalent of Cloudflare's "Bypass" in Pangolin is simply removing auth from a resource (or, if it's a specific endpoint, declaring a separate resource for that route without auth). For the home lab I don't expect any of those, all the panels are private. But if later I want to expose a public Home Assistant webhook for some external integration, the pattern is the same one I learned with Access, a specific resource before the general one.
Practical comparison with Cloudflare Tunnel and Access
I think of it as a mental table, because it took me a while to decide which one I want for what.
- Day-to-day operations. Cloudflare wins, polished dashboard and it keeps running by itself all day. Pangolin is behind, but not by much, and it forces me to understand better what's going on, which I appreciate at home.
- Traffic privacy. Pangolin wins, the bytes never leave the tunnel until they reach my VPS. CF inevitably terminates public TLS at its edge.
- Cost. CF is zero (Zero Trust Free plan). Pangolin needs a small VPS somewhere, we're talking 5 to 10 € a month depending on the provider. Not much, but not zero.
- Protocols. Pangolin wins, it supports native HTTP, TCP, and UDP under the same model. CF Tunnel handles HTTP/HTTPS very well and pushes you toward Spectrum for plain TCP.
- Resilience during outages. CF wins for small outages (it's Cloudflare, hard to beat that uptime), Pangolin wins for serious ones (if CF has a bad day, my thermostat doesn't care).
- Learning curve. CF wins easily, Pangolin asks for an afternoon reading the docs carefully. Still acceptable though, this isn't Kubernetes.
- Maintenance. CF is zero, Pangolin gets updates every few weeks and I'm the one deciding when to bump the image. It's work, but it's work I'm already doing for Dokploy and the other composes.
My conclusion is that the answer isn't "one kills the other", it's splitting the work. Work and VPS admin panels behind Cloudflare. Home lab and personal traffic behind Pangolin. Each one where it makes the most sense.
If you're interested in the full picture (including when an SSH tunnel is still the best option, where Tailscale fits in all this, and when Headscale makes sense), it's in SSH vs Cloudflare Tunnel vs Pangolin vs Tailscale vs Headscale vs WireGuard, what I use for what.
Backups and recovery, the detail I don't want to forget
This is one of those pieces I don't worry about with CF because CF handles it, but with Pangolin I do. The VPS where the stack runs has a ./config volume with the database, Gerbil's WireGuard keys, and the cached Let's Encrypt certificates. If that VPS dies without a backup, losing the certificates isn't a big deal (Let's Encrypt can issue them again), but losing the database is: that's where the users, resources, and site state live.
The plan is restic pointing to an external bucket (not the same NAS I'm exposing, obviously) with daily snapshots and 30-day retention. It's the same backup script I already use on the work VPS, so only the list of paths changes.
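As a sketch, the crontab entry on the Pangolin VPS would look something like this. The repository URL, password file, and config path are placeholders; the restic flags are the real ones for a daily run with 30-day retention.

```
# crontab fragment (placeholders: repo URL, password file, config path)
RESTIC_REPOSITORY=s3:https://s3.example.com/pangolin-backup
RESTIC_PASSWORD_FILE=/root/.restic-pass
30 3 * * * restic backup /opt/pangolin/config --tag pangolin && restic forget --tag pangolin --keep-daily 30 --prune
```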
The second risk, and this one really is part of the model, is that if Newt at home dies without me noticing, the panels stop responding from the outside. The VPS still responds, but with a 502. I cover that with an external check (uptime-kuma running somewhere else) that alerts me through ntfy if the domain returns 502 for more than five minutes in a row. If the Raspberry running Newt hangs, I get the push before I even notice because opening Home Assistant failed.
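uptime-kuma already implements this kind of rule, but the logic I want is easy to state in code. A sketch, where the threshold and status codes are my own choices, not uptime-kuma's defaults:

```python
# Alert rule sketch: fire once after N consecutive failed checks.
# With one check per minute, threshold=5 matches "502 for more than
# five minutes in a row". Status 0 stands for "no response at all".

class TunnelMonitor:
    def __init__(self, threshold: int = 5):
        self.threshold = threshold  # consecutive failures before alerting
        self.failures = 0

    def record(self, status: int) -> bool:
        """Feed one check result; return True exactly when the alert fires."""
        if status == 502 or status == 0:  # Newt down -> the VPS answers 502
            self.failures += 1
        else:
            self.failures = 0
        return self.failures == self.threshold  # fire only on the Nth failure

monitor = TunnelMonitor()
checks = [200, 200, 502, 502, 502, 502, 502, 502]
print([monitor.record(s) for s in checks])
# [False, False, False, False, False, False, True, False]
```

Firing exactly once on the Nth failure avoids a push notification every minute while the tunnel is down.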
Phased rollout plan
The home lab works right now, and I don't want to break it in one afternoon. The idea is to cut it over floor by floor, the same way I did with the VPS panels.
- VPS and stack. I bring up Pangolin + Gerbil + Traefik with a test domain, register a dummy site and a Newt on my laptop, and confirm that the whole flow (registration, tunnel, certificate, resource, login) works before I touch the home network.
- Newt at home. I set it up on the Raspberry with docker compose and check outbound UDP/51820 connectivity to the VPS. If the home router filters outgoing UDP in some weird way (it shouldn't, but it happens), Newt gives a clear error.
- First resource, Pi-hole. It's the least critical panel, if it goes down nothing happens. I expose it, test from my phone over mobile data, and validate that the OTP arrives.
- Home Assistant. The most sensitive one for me because I open it several times a day. I expose it with auth + MFA and keep the HA mobile app using the Pangolin endpoint as its external URL. If all goes well, the next day I disable the VPN on the router so Pangolin becomes the only path.
- NAS and cameras. Only after HA has gone a week without incidents.
- Decommission the router VPN. Only when all five panels are stable and monitoring hasn't yelled in two weeks.
Rollback at any step is immediate, I just shut down the Newt container at home and re-enable the router VPN. Since Pangolin doesn't touch the internal network (it doesn't change routes, it doesn't change local DNS), if it goes down it doesn't affect LAN use.
What's next
The next free weekend, phase 1 goes up: bring up the VPS and leave the flow verified with a test Newt. Once I have the first real panel running, I'll write the follow-up, "Pangolin in home production, what changed from the plan", because that's where you actually learn, in the gap between the README and reality.
There are a couple of pieces I'm not covering today because I want to see them with real data. One is exposing non-HTTP protocols (RTSP from the cameras to a viewer outside the LAN, SSH to the Raspberry from my phone), because that's where Pangolin competes with Tailscale on its own turf and the decision depends on how it feels day to day. The other is Authentik as the single IdP for the whole house, which starts to make sense once I have more than five resources and start inviting family members into some of the panels (the NAS photos, for example). Both of those will get their own post.

Jose, author of the blog
QA Engineer. I write out loud about automation, AI and software architecture. If something here helped you, write to me and tell me about it.