How we verify that nobody is tampering with this blog's posts
Our posts live in a SQLite database. If someone gets access to it, they can change any article without leaving a trace. We built an external verifier with SHA-256 hashes and Ed25519 signatures that watches integrity from a second server.

All the posts on this blog live in a SQLite table. A content field with HTML, a title, a slug, some categories. That's it. If someone gets access to that database, whether through a leaked backup, compromised credentials, or a server vulnerability, they can open the file, change the content of any article, and nobody would know.
I'm not talking about defacing the homepage with a message in red. I'm talking about something subtler, like changing a technical recommendation, altering a number in an article about investing, or inserting a malicious link into a tutorial. The kind of modification that goes unnoticed because the post still looks legitimate.
That led us to build an integrity verification system that works from outside the blog. A second server that periodically downloads the content of every published post, calculates a cryptographic hash, and compares it with the one it had last time. If something changed without us editing it, it raises an alert. From a security perspective, I go deeper into this in blog security, SEO, and performance.
Why the hash can't live in the blog
The first idea that comes to mind is simple. Every time you publish a post, you calculate a SHA-256 hash of the content and store it in an extra database column. Before serving the post, you verify that the hash matches. Problem solved.
Except it isn't.
If the attacker has access to the database, they can modify both the content and the hash. They update the HTML, recalculate the SHA-256, and save both. Your internal verification will say everything is fine because the hash of the new content matches the new hash. It's like putting a lock on a door and leaving the key taped to the frame.
Integrity only works if the source of truth lives outside the system you're trying to protect. That's why the hash has to live somewhere else. In our case, on a second VPS that doesn't share infrastructure with the blog.
Two requests and that's it
The blog exposes two endpoints protected with a read-only API key (INTEGRITY_API_KEY), separate from the admin key.
The first one, GET /api/integrity/manifest, returns the list of published posts with their slug and a timestamp called contentUpdatedAt. The second, GET /api/integrity/post, returns the raw data for all published posts in a single response, with title, HTML content, excerpt, cover image, categories, and tags.
The verifier makes exactly two HTTP requests per run, no matter how many posts there are. First the manifest to see what's published, then the bulk endpoint to fetch the data used to calculate the hashes. No pagination, no rate limiting getting in the way, no complexity.
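As a sketch, those two requests might look like this in TypeScript. The endpoint paths and the contentUpdatedAt field come from this post; the Authorization header name and the exact response shapes are assumptions. The fetchFn parameter is there so you can pass Node 18+'s global fetch or a stub.

```typescript
// Two requests per run, regardless of post count.
type ManifestEntry = { slug: string; contentUpdatedAt: string };
type FetchLike = (url: string, init?: { headers?: Record<string, string> }) =>
  Promise<{ json(): Promise<unknown> }>;

async function fetchBlogData(baseUrl: string, apiKey: string, fetchFn: FetchLike) {
  // Read-only integrity key, separate from the admin key
  const headers = { Authorization: `Bearer ${apiKey}` };

  // Request 1: slugs plus contentUpdatedAt for every published post
  const manifest = (await (
    await fetchFn(`${baseUrl}/api/integrity/manifest`, { headers })
  ).json()) as ManifestEntry[];

  // Request 2: the full data for all published posts, in bulk
  const posts = await (
    await fetchFn(`${baseUrl}/api/integrity/post`, { headers })
  ).json();

  return { manifest, posts };
}
```

No pagination and no per-post requests: the cost of a run stays constant as the blog grows.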
The canonical hash
For two independent systems to calculate the same hash from the same data, you need a canonical representation. You can't hash the JSON exactly as it comes from the API, because key order, whitespace, or a null field versus an empty string would produce different hashes for the same content.
Our canonicalization function takes seven fields from each post and serializes them in the same order every time, with the same normalization rules.
```typescript
function canonicalize(post: PostData): string {
  return JSON.stringify({
    title: (post.title ?? "").trim(),
    slug: post.slug,
    content: post.content ?? "",
    excerpt: (post.excerpt ?? "").trim(),
    coverImage: post.coverImage ?? "",
    categories: [...post.categories].sort(),
    tags: [...post.tags].sort(),
  });
}
```

Null values are turned into empty strings. Titles and excerpts are trimmed. Categories and tags are sorted alphabetically by slug so the insertion order in the database doesn't affect the result. The string is deterministic and always produces the same SHA-256 for the same visible content.
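Given that canonical string, the hash itself is one line with Node's built-in crypto module. This sketch repeats the canonicalize function so it runs standalone; the PostData type is an assumption matching the fields the function uses.

```typescript
import { createHash } from "node:crypto";

type PostData = {
  title: string | null;
  slug: string;
  content: string | null;
  excerpt: string | null;
  coverImage: string | null;
  categories: string[];
  tags: string[];
};

function canonicalize(post: PostData): string {
  return JSON.stringify({
    title: (post.title ?? "").trim(),
    slug: post.slug,
    content: post.content ?? "",
    excerpt: (post.excerpt ?? "").trim(),
    coverImage: post.coverImage ?? "",
    categories: [...post.categories].sort(),
    tags: [...post.tags].sort(),
  });
}

// SHA-256 over the canonical string, hex-encoded
function contentHash(post: PostData): string {
  return createHash("sha256").update(canonicalize(post), "utf8").digest("hex");
}
```

Two posts that differ only in null-versus-empty fields or in category order produce the same digest, which is exactly what lets two independent systems agree.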
We don't include fields like readingTime, publishedAt, or seriesId because they're operational metadata. A change in computed reading time or the order within a series isn't a change to the content a reader sees. Including them would generate false positives every time we reorganize a series or fix an internal calculation.
Telling legitimate edits apart from tampering
When you edit a post from the admin panel, the blog updates a field called contentUpdatedAt. This timestamp only changes when visible fields like the title, content, excerpt, or image are modified. If someone edits the database directly with a SQL UPDATE, that field does not get updated because the change doesn't go through the application logic.
The verifier uses this signal to classify each change. If the hash differs and contentUpdatedAt changed too, it's a legitimate edit and the verifier updates its baseline without raising an alarm. But if the hash differs and contentUpdatedAt stays the same, someone changed the content without going through the normal flow. That triggers a critical alert.
A sophisticated attacker could also update contentUpdatedAt if they know it exists and understand how it works. That's why the verifier implements a random audit that checks 20% of posts at random on every run, even if their timestamp hasn't changed. After five cycles a given post has been audited at least once with probability around 67% (1 − 0.8⁵), and with a run every eight hours that climbs past 99% within about a week.
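The classification logic boils down to a small decision function. This is an illustrative sketch, not the verifier's actual code: the Baseline shape and the function names are assumptions based on what this post describes.

```typescript
type Baseline = { slug: string; contentHash: string; contentUpdatedAt: string };
type Verdict = "unchanged" | "legitimate_edit" | "TAMPERING_ALERT";

// currentHash is the SHA-256 of the freshly canonicalized post content.
function classify(baseline: Baseline, currentHash: string, currentUpdatedAt: string): Verdict {
  if (currentHash === baseline.contentHash) return "unchanged";
  // Hash differs: did the change go through the application?
  return currentUpdatedAt !== baseline.contentUpdatedAt
    ? "legitimate_edit"   // baseline gets refreshed, no alarm
    : "TAMPERING_ALERT";  // content changed without touching contentUpdatedAt
}

// Random audit: verify a ~20% sample even when timestamps are unchanged,
// so an attacker who also forges contentUpdatedAt is still caught eventually.
function pickAuditSample<T>(posts: T[], ratio = 0.2): T[] {
  return posts.filter(() => Math.random() < ratio);
}
```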
Ed25519 signatures over baselines
The verifier stores its baselines in a local SQLite database on the second server. Each entry contains the post slug, the expected SHA-256 hash, the timestamp of the last verification, and the canonicalized data so it can generate a diff if something changes.
But that local database is also an attack vector. If someone compromises both servers, both the blog and the verifier, they could modify the post content and at the same time change the hash stored in the baseline. Verification would still pass.
To make that scenario harder, each baseline is signed with an Ed25519 private key that is generated automatically the first time the verifier runs. Before trusting a stored baseline, the verifier verifies the signature. If someone tampered with the verifier's database without having the private key, the signature won't validate and an alert is raised.
```typescript
// When saving a baseline
const sig = signHash(contentHash);
// Ed25519 signature of the hash, stored next to the baseline

// Before comparing
if (!verifySignature(baseline.contentHash, baseline.signature)) {
  // ALERT: baselines.db has been tampered with
}
```

The private key is stored with 400 permissions, read-only for the process owner. It isn't shared, it isn't pushed to any repository, and it never leaves the verifier server.
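The signHash and verifySignature helpers above belong to the verifier; a minimal version of them can be built on Node's native Ed25519 support. In this sketch the key pair lives in memory for brevity, whereas the real verifier persists the private key to disk with mode 0o400 on first run.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Generated once; the real verifier persists the private key (mode 0o400).
const { privateKey, publicKey } = generateKeyPairSync("ed25519");

// Sign the hex SHA-256 of a post's canonical content.
// For Ed25519, Node's one-shot sign/verify take null as the algorithm.
function signHash(contentHash: string): string {
  return sign(null, Buffer.from(contentHash, "utf8"), privateKey).toString("base64");
}

function verifySignature(contentHash: string, signature: string): boolean {
  return verify(null, Buffer.from(contentHash, "utf8"), publicKey, Buffer.from(signature, "base64"));
}
```

Without the private key, an attacker who rewrites a baseline's hash cannot produce a signature that validates, which is the whole point of the second layer.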
On-demand verification with an autonomous agent
The cron job every eight hours is fine for routine detection, but sometimes you want to verify right now. In our case, we have an agent with autonomous capabilities called Claudio running on the same VPS as the verifier. It's a bot connected to Telegram with access to system tools, able to read files, run commands, and inspect logs.
We gave it a context file (AGENT.md) that describes what the verifier is, where it's installed, and which commands it can run. When you ask it for a verification, it doesn't need detailed instructions, it already knows it has to go to /opt/blog-verifier and run node dist/index.js --force.
What really changes things is that you talk to it in natural language. You don't need to remember paths, flags, or the exact syntax of each command. You tell it "run a forced verification" and it does it. You ask "when was the last verification" and it checks the log. If you see a line in the output you don't understand, you paste it in and it explains it. It's like having someone on call who understands the system and answers right away.


And all of this from Telegram on your phone, anywhere. On the couch, on the train, having a coffee. You don't need to open a laptop, connect over SSH, or remember which server each thing lives on. You send a message, it replies with the status, and you get on with your life.
That turns a passive security tool into something you actively check every time you deploy a change, restore a backup, or just want to sleep easy.
And you're not limited to your phone. Telegram works the same on a desktop, on a laptop, or in the web version in your browser. It doesn't matter where you are or what device you have handy, the conversation with the agent is the same and the context stays there. You ask it something from your phone while you're on the subway, get home, and continue from your laptop. Integrity verification stops being something you run when you happen to remember and becomes something you check whenever you feel like it.


The full flow
This is how a typical verifier run works.

1. It downloads the blog manifest in one HTTP request.
2. It compares the contentUpdatedAt timestamps with the stored baselines to decide which posts to verify.
3. It downloads all posts in bulk with a second request.
4. For each post that needs verification, it canonicalizes the data and calculates the SHA-256.
5. It verifies the Ed25519 signature of the stored baseline.
6. It compares the current hash with the baseline. If it matches, all good. If it differs and contentUpdatedAt changed, it's a legitimate edit and it updates the baseline. If it differs without a timestamp change, critical alert.
7. It detects posts that disappeared from the manifest, either because they were deleted or unpublished.
8. It records everything in a verification log and, if there are critical alerts, notifies us via Telegram.
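One cycle of that flow can be condensed into a sketch. Every type and name here is illustrative, not the verifier's real code: it assumes the posts have already been fetched and hashed, and that each stored baseline's Ed25519 signature check has been recorded as a boolean.

```typescript
type StoredBaseline = { hash: string; updatedAt: string; signatureValid: boolean };
type PostState = { slug: string; hash: string; updatedAt: string };

// One verifier cycle. Mutates the baseline map and returns critical alerts.
function runOnce(posts: PostState[], baselines: Map<string, StoredBaseline>): string[] {
  const alerts: string[] = [];
  const seen = new Set<string>();
  for (const post of posts) {
    seen.add(post.slug);
    const base = baselines.get(post.slug);
    if (!base) {
      // New post: record a fresh baseline
      baselines.set(post.slug, { hash: post.hash, updatedAt: post.updatedAt, signatureValid: true });
      continue;
    }
    if (!base.signatureValid) {
      alerts.push(`${post.slug}: baseline signature invalid, baselines.db may be tampered`);
      continue;
    }
    if (post.hash === base.hash) continue; // unchanged
    if (post.updatedAt !== base.updatedAt) {
      // Legitimate edit: refresh the baseline silently
      baselines.set(post.slug, { ...base, hash: post.hash, updatedAt: post.updatedAt });
    } else {
      alerts.push(`${post.slug}: content changed without contentUpdatedAt`);
    }
  }
  // Posts that disappeared from the manifest (deleted or unpublished)
  for (const slug of baselines.keys()) {
    if (!seen.has(slug)) alerts.push(`${slug}: missing from manifest`);
  }
  return alerts;
}
```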
The result is a system that protects content integrity with three independent layers. SHA-256 hashes calculated outside the blog, the contentUpdatedAt signal to separate legitimate edits from direct tampering, and Ed25519 signatures that protect the verifier's own database. On this technique, I go deeper in digital signatures and steganography for traceability.
Security isn't a product you install, it's a habit you build. Periodically verifying that your content hasn't been altered is as basic as making backups, and hardly anyone does it.
Another entry in the Building this blog series. You come from Security, SEO, and performance and continue with Voice narration with AI.

Jose, author of the blog
QA Engineer. I write out loud about automation, AI and software architecture. If something here helped you, write to me and tell me about it.