WAF, cache, and hardening with Cloudflare Free, without losing your mind
The five WAF Custom Rules I use on every zone, rate limiting (with the catch that you can't use `ip.src` on Free), conservative HSTS, the minimum sensible TLS version, a one-year Cache Rule for hashed Vite bundles, and why I ruled out caching Next.js 16 endpoints on Free.

With Traefik and AOP, an IP-based firewall, and token monitoring already in place, half the value of having Cloudflare in front is still untapped. This post covers the Cloudflare side: WAF rules that filter hostile traffic before it reaches the origin, rate limiting for sensitive endpoints, zone hardening (SSL/TLS, HSTS, Bot Fight Mode), and a couple of Cache Rules that drastically cut traffic to the VPS for projects with hashed bundles.
Everything here fits within the Free plan. CF Pro removes several limitations and makes some rules simpler, but the goal was to see how far you can get without paying. Spoiler: pretty far.
Free plan limitations you should be clear about
Before opening the dashboard, here are the constraints that shape the design.
- 5 Custom Rules per zone. There's no room to experiment; every slot matters.
- 1 Rate Limit Rule per zone.
- Rate Limit Rules on Free don't allow the `ip.src` field in the expression, neither directly nor with `not (ip.src in $lista)`. That means you can't exclude your own IP from the rate limit inside the rule. The workaround is a Skip Custom Rule that runs first and exempts your IP from the rest of the rules.
- The only Rate Limit action on Free is Block. You don't get Managed Challenge, JS Challenge, or Log only.
- In Custom Rules (not Rate Limit), `ip.src in $lista` does work. The asymmetry isn't intuitive, but that's how it is.
An account-level IP List for your trusted IP
Before touching the zone, under Manage Account > Configurations > Lists, I created an IP List called `trusted_ips` with my home IP. It's an account-level list, so I can reference it from any zone as `$trusted_ips`. If my IP changes, I edit it in one place and all zones update.
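For repeatable setups, the same list can be created through the Cloudflare API (the account-level Lists endpoints) instead of clicking through the dashboard. Here's a dry-run sketch: the account ID, token, and IP are placeholders, and the script only prints the requests it would send rather than calling the API.

```shell
#!/bin/sh
# Dry-run sketch: create the account-level trusted_ips IP List via the
# Cloudflare Lists API. ACCOUNT_ID and HOME_IP are placeholders; nothing
# is actually sent, the script just prints the requests.
ACCOUNT_ID="${ACCOUNT_ID:-your-account-id}"
HOME_IP="${HOME_IP:-203.0.113.7}"

# 1) Create the list (kind "ip"). The real call would look like:
#    curl -X POST "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/rules/lists" \
#         -H "Authorization: Bearer <API_TOKEN>" -H "Content-Type: application/json" \
#         --data "$create_payload"
create_payload='{"name":"trusted_ips","kind":"ip","description":"my trusted IPs"}'
echo "POST /accounts/$ACCOUNT_ID/rules/lists -> $create_payload"

# 2) Add the home IP as an item. The list ID comes back in the create response:
#    curl -X POST ".../accounts/$ACCOUNT_ID/rules/lists/<LIST_ID>/items" ...
items_payload="[{\"ip\":\"$HOME_IP\",\"comment\":\"home\"}]"
echo "POST /accounts/$ACCOUNT_ID/rules/lists/LIST_ID/items -> $items_payload"
```

The endpoints shown are the documented Lists routes; the payloads carry only the minimal fields, so adjust descriptions and comments to taste.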
The five WAF recipes
Recipe D, Skip for trusted IPs (slot 1, always first)
This rule runs before anything else and exempts my IP from the rest of the WAF and the rate limit. Without it, when I'm testing against my own APIs I lock myself out within seconds.
- Expression: `ip.src in $trusted_ips`
- Action: Skip.
- Components to skip: check at least one (the UI throws an error if you leave everything unchecked). I usually check Remaining custom rules, Rate limiting rules, and Super Bot Fight Mode.
- Order: first.
It takes up a slot, but in return I can be as aggressive as I want with the rest of the rules without risking shooting myself in the foot when I'm working from home.
Recipe A, country geoblock
- Expression: `(ip.src.country in {"RU" "CN" "KP" "IR" "BY"}) and not (ip.src in $trusted_ips)`
- Action: Managed Challenge.
A short, conservative list: countries I objectively don't expect legitimate traffic from on personal projects, and where I get a disproportionate amount of scanner traffic. Managed Challenge instead of Block leaves the door open for a human visitor (if they can solve the challenge) without opening it to automated bots. The `not (ip.src in $trusted_ips)` clause is a safety net: Recipe D already does the skip, but I prefer the extra layer.
Recipe E, block scanner paths
- Expression: `(starts_with(http.request.uri.path, "/wp-")) or (http.request.uri.path contains "/.env") or (http.request.uri.path contains "/.git/") or (http.request.uri.path contains "/phpmyadmin") or (http.request.uri.path eq "/xmlrpc.php") or (http.request.uri.path contains "/.aws/") or (http.request.uri.path contains "/.ssh/") or (http.request.uri.path contains "/.DS_Store")`
- Action: Block.
None of my projects serves these paths. Any request to them is always hostile, so there's no point counting them toward a rate limit. An instant Block is better than giving an attacker 5 tries before cutting them off.
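Before burning a slot on the rule, I like to sanity-check which paths the expression would catch. Here's a local sketch that mirrors Recipe E's matching with shell patterns — my approximation of the rule's semantics, not Cloudflare's engine:

```shell
#!/bin/sh
# Local approximation of Recipe E's match logic using shell patterns:
# starts_with -> prefix glob, contains -> substring glob, eq -> exact match.
is_hostile_path() {
  case "$1" in
    /wp-*) return 0 ;;                                  # starts_with "/wp-"
    *"/.env"*|*"/.git/"*|*"/phpmyadmin"*) return 0 ;;   # contains
    *"/.aws/"*|*"/.ssh/"*|*"/.DS_Store"*) return 0 ;;   # contains
    /xmlrpc.php) return 0 ;;                            # eq
    *) return 1 ;;                                      # would pass through
  esac
}

for p in /wp-login.php /app/.env /xmlrpc.php /blog/post-1 /assets/app.js; do
  if is_hostile_path "$p"; then echo "BLOCK $p"; else echo "PASS  $p"; fi
done
```

Feeding it your own access-log paths is a cheap way to spot false positives before the rule goes live.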
Recipe B, rate limit on /admin with write methods
This recipe uses the only Rate Limit Rule available on Free. It limits attempts against my blog's admin endpoint and, while we're at it, against the classic scanner paths from Recipe E. Recipe E already blocks those outright, so this is intentional duplication: it matters on zones where E isn't deployed.
- Expression: `(http.request.uri.path eq "/admin" and http.request.method in {"POST" "PUT"}) or (starts_with(http.request.uri.path, "/wp-")) or (http.request.uri.path contains "/.env") or (http.request.uri.path contains "/phpmyadmin") or (http.request.uri.path eq "/xmlrpc.php")`
- Characteristics: IP. Threshold: 5 requests in 10 seconds. Action: Block for 10 seconds.
The `http.request.method in {"POST" "PUT"}` condition keeps normal admin panel browsing (the SPA's GET requests) from consuming the counter. Only login or modification attempts count.
On zones with several subprojects (mine has lots of different APIs), Recipe B isn't the best option and I prefer a more generic rate limit like `starts_with(http.request.uri.path, "/api/")`. Since there's only one slot, you choose based on the zone's profile.
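To reason about what a burst of requests actually experiences under the 5-in-10-seconds threshold, here's a toy fixed-window simulation. It is not Cloudflare's actual counting algorithm, just a sketch of the arithmetic:

```shell
#!/bin/sh
# Toy fixed-window rate limiter: 5 requests allowed per 10-second window.
# Takes request timestamps (in seconds) as arguments and reports which
# requests would be blocked. A sketch, not Cloudflare's implementation.
check_window() {
  limit=5; window=10
  count=0; start=-1; n=0
  for t in "$@"; do
    n=$((n + 1))
    # open a new window when the previous one has expired
    if [ "$start" -lt 0 ] || [ $((t - start)) -ge "$window" ]; then
      start=$t; count=0
    fi
    count=$((count + 1))
    if [ "$count" -gt "$limit" ]; then
      echo "request $n at t=${t}s: BLOCKED"
    else
      echo "request $n at t=${t}s: allowed"
    fi
  done
}

# Six rapid POSTs within one window: the sixth one is blocked.
check_window 0 1 2 3 4 5
```

Spacing the sixth request past the window (e.g. at t=15s) lets it through again, which is exactly the behavior a brute-forcer trips over and a human admin rarely notices.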
Zone hardening
Security > Configuration
- Bot Fight Mode: ON.
- Browser Integrity Check: ON.
- Security Level: Medium.
- Challenge Passage: 30 minutes (the default).
- Privacy Pass Support: ON, if it shows up.
SSL/TLS
- Always Use HTTPS: ON.
- Automatic HTTPS Rewrites: ON.
- Minimum TLS Version: TLS 1.2. I don't raise it to 1.3. Clients that support 1.3 already negotiate 1.3 even with a minimum of 1.2 (negotiation picks the highest version both ends support), while raising the minimum to 1.3 kicks out legitimate clients with older stacks (old curl builds, provider webhooks on slow-to-update stacks, monitors). I check again after a few months; if Analytics > SSL/TLS shows 0% of traffic on 1.2, then I'll raise it.
- TLS 1.3 (a separate toggle): ON.
- Opportunistic Encryption: ON.
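The minimum-version claim is easy to convince yourself of with a toy model of version negotiation: both sides land on the highest version they share, and the handshake only fails when the client can't reach the server's minimum. Versions are encoded as 12/13 for TLS 1.2/1.3; this is a sketch of the selection logic, not a TLS implementation.

```shell
#!/bin/sh
# Toy model of TLS version selection. Usage: negotiate CLIENT_MAX SERVER_MIN SERVER_MAX
# Prints the negotiated version, or FAIL when the client's best version
# is below the server's configured minimum.
negotiate() {
  client_max="$1"; server_min="$2"; server_max="$3"
  best=$client_max
  if [ "$server_max" -lt "$best" ]; then best=$server_max; fi
  if [ "$best" -lt "$server_min" ]; then
    echo "FAIL (client too old)"
  else
    echo "TLS 1.${best#1}"
  fi
}

negotiate 13 12 13   # modern client, minimum 1.2 -> still gets TLS 1.3
negotiate 12 12 13   # older client,  minimum 1.2 -> TLS 1.2, stays connected
negotiate 12 13 13   # older client,  minimum 1.3 -> locked out
```

Against a live zone you can confirm the same thing with `openssl s_client -connect host:443 -tls1_2` versus `-tls1_3` and checking the negotiated protocol in the output.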
HSTS, conservative
This is the most delicate hardening decision, because HSTS is effectively irreversible. If you enable HSTS with includeSubDomains and some future subdomain has a temporary TLS issue, visitors who already cached the header won't be able to get in, and there's no way to bypass the browser error. And if you add preload, getting out of the browser-coded list takes months.
My initial setup is deliberately cautious.
- Status: On.
- Max Age: 6 months.
- Include subdomains: OFF.
- Preload: OFF.
- No-Sniff Header: ON.
After 1 to 2 months without incidents, I consider expanding to includeSubDomains. I'd only turn on Preload with total operational confidence. My rule of thumb: strict HSTS is earned, not chosen.
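When reviewing a zone later, I find it handy to decode what an HSTS header actually commits browsers to. A small parser sketch; the sample value is roughly what a 6-month, no-subdomains, no-preload config emits, assuming 6 months is expressed as 15768000 seconds:

```shell
#!/bin/sh
# Decode a Strict-Transport-Security header value: how long browsers will
# pin HTTPS, and whether subdomains/preload are committed. A review helper.
parse_hsts() {
  h="$1"
  ma=$(printf '%s' "$h" | sed -n 's/.*max-age=\([0-9]*\).*/\1/p')
  [ -n "$ma" ] || ma=0
  echo "max-age: $ma seconds (~$((ma / 86400)) days)"
  case "$h" in *includeSubDomains*) echo "includeSubDomains: yes" ;; *) echo "includeSubDomains: no" ;; esac
  case "$h" in *preload*) echo "preload: yes" ;; *) echo "preload: no" ;; esac
}

# Roughly what the conservative 6-month setting emits (15768000 s ~ 182 days):
parse_hsts "max-age=15768000"
```

Pipe it the real header from `curl -sI https://your-host | grep -i strict-transport-security` to audit what's actually being served.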
Cache Rules, what CF Free does cache well
One of the things that changes the experience the most (and the bandwidth going to my VPS) is Cache Rules for static assets. With modern hashed bundles (Vite, Webpack, Next.js), the filenames already include a content hash, so any change also changes the name and there's no risk of serving stale content. The natural conclusion is cache long, no regrets.
My origin (Express serving the dist/ folder from Vite) was sending `Cache-Control: public, max-age=14400` (4 hours). That's reasonable but conservative; those assets can be cached much longer. A single Cache Rule covers the Vite projects I host.
- Expression: `(http.host in {"e2e.mi-zona.com" "otra-app.mi-zona.com"}) and starts_with(http.request.uri.path, "/assets/")`
- Settings:
  - Eligibility: Eligible for cache.
  - Edge TTL: Ignore origin Cache-Control and use this TTL, 1 year (31536000 s).
  - Browser TTL: Override origin, 1 year.
Result: `/assets/index-HASH.js` practically never touches my VPS again after the first request. And since the HASH changes on every deploy, there's no risk of serving old versions.
Heuristic for safely adding Cache Rules
- Run `curl -sI https://<host>/path-to-a-static-asset` twice.
- If the second response is already `cf-cache-status: HIT` with a reasonable `cache-control`, don't touch it.
- If you see MISS→HIT with a short `max-age` and the file has a hash in its name (`foo-HASH.js`), overriding the TTL to 1 year is safe.
- If the file does not have a hash (`styles.css`, `app.js`), don't raise the TTL. Any future deploy would serve stale content for the whole cache window.
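The decision table above can be folded into a tiny helper that applies the "is it content-hashed?" test. The hash detection here is my own rough approximation (a dash followed by at least four alphanumerics before the extension), so treat it as a sketch:

```shell
#!/bin/sh
# Apply the heuristic: a 1-year edge TTL is only safe when the filename is
# content-hashed, so every deploy changes the URL. The hash test is a rough
# approximation: a "-" followed by 4+ alphanumerics before the extension.
safe_to_override() {
  path="$1"; origin_cc="$2"
  case "$path" in
    *-[0-9a-zA-Z][0-9a-zA-Z][0-9a-zA-Z][0-9a-zA-Z]*.*)
      echo "SAFE: $path looks content-hashed, override TTL to 1 year" ;;
    *)
      echo "UNSAFE: $path has no hash, keep origin '$origin_cc'" ;;
  esac
}

safe_to_override "/assets/index-B3kP9xQz.js" "public, max-age=14400"
safe_to_override "/styles.css" "public, max-age=14400"
```

A false "SAFE" here is the expensive mistake (stale content for a year), so when in doubt, inspect the actual build output rather than trusting the pattern.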
What I ruled out: caching Next.js 16 on CF Free
I tried four approaches to cache public endpoints on my blog (Next.js 16 App Router). None of them gave stable HITs. The reason is that Next.js 16 automatically injects the header `Vary: rsc, next-router-state-tree, next-router-prefetch, next-router-segment-prefetch` into every response, and CF Free only treats `Vary: Accept-Encoding` as cacheable. Anything else turns the response into `cf-cache-status: DYNAMIC`, with no cache.
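The cacheability rule is mechanical enough to check locally: only an absent Vary header, or exactly `Accept-Encoding`, keeps the response eligible. A sketch of that check, based on my reading of the behavior described above, not an official CF function:

```shell
#!/bin/sh
# Sketch of CF Free's Vary rule as described above: a response stays
# cacheable only with no Vary header or exactly "Accept-Encoding".
vary_cacheable() {
  v=$(printf '%s' "$1" | tr 'A-Z' 'a-z' | tr -d ' ')
  [ -z "$v" ] || [ "$v" = "accept-encoding" ]
}

for v in "" "Accept-Encoding" "rsc, next-router-state-tree"; do
  if vary_cacheable "$v"; then
    echo "CACHEABLE  Vary='$v'"
  else
    echo "DYNAMIC    Vary='$v'"
  fi
done
```

The third case is what every Next.js 16 App Router response looks like, which is why nothing below ever produced a stable HIT.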
The approaches I tested and ruled out.
- Next.js middleware that rewrites the `Vary` header to `Accept-Encoding`. Next.js injects its own Vary afterward, so two headers arrive and CF sticks with the stricter one.
- Setting Vary in the Route Handler: same problem.
- A Response Header Transform Rule in CF that removes `Vary`: it seems to apply after the cache decision.
- A Cloudflare Worker (Free) with `caches.default`: the strip works, but the Cache API starts failing and the Worker ends up falling back to the origin with no cache.
Realistic solutions: move up to CF Pro for a Custom Cache Key that ignores Vary, put a proxy in front of Next.js that rewrites the header before it reaches CF, or wait for a Next.js version that allows opting out of the RSC Vary (it doesn't exist today). I documented it and dropped that path. Not every battle is worth fighting.
How to apply all this to a new zone, in order
The order of operations avoids windows where you can lock yourself out.
- Make sure your IP is in the `trusted_ips` IP List (account level).
- Create the rules in this order: first D (Skip), then A (geoblock), then E (block scanner paths). Then choose B or a generic rate limit based on the zone's profile.
- Apply Security Settings (Bot Fight Mode, Browser Integrity Check, Security Level).
- Apply SSL/TLS (Always HTTPS, Min TLS 1.2, conservative HSTS).
- Validate from your normal IP: everything should load.
- Validate from an IP outside `trusted_ips` (mobile data): a GET to `/.env` should return 403.
UI traps
- Skip without checkboxes: the UI throws the error `action parameters are required for the skip action`. You have to check at least one component to skip.
- Persistent red banner: the Skip error can still show up on later screens even when it no longer applies. Close it with the X and, if the current form is valid, Deploy works.
- Rate Limit with `ip.src`: it doesn't work on Free. If you try it, you get the literal error `not entitled, the use of field ip.src is not allowed`.
- Rule created in a DNS-only zone: you can preconfigure it, but it won't apply until some record switches to the orange cloud.
What you'd get with CF Pro or higher
If I ever move up to Pro, these are the things that change.
- Managed Challenge in Rate Limits (better UX for legitimate users).
- `ip.src in $trusted_ips` directly in Rate Limit expressions (removes the need for Recipe D).
- More Rate Limit Rules per zone, so you can separate `/admin` from `/api/` into different rules.
- Cloudflare Managed Rules (the managed OWASP Core Rule Set).
- Custom Cache Key to get around the Next.js RSC `Vary` problem.
What's next
With this, the Cloudflare side is covered. The edge filters, caches what it can, and tightens up TLS. The last piece in the series is something you don't see until it happens, a silent bug that shows up in any app that depends on the client IP when you put Cloudflare in front of Traefik. I cover it in the final post.

Jose, author of the blog
QA Engineer. I write out loud about automation, AI and software architecture. If something here helped you, write to me and tell me about it.