This feature is currently in beta. Make sure you have set up the Tusk Drift CLI and SDK before enabling coverage.
Tusk Drift can collect code coverage during test replay, showing which lines of your service code each trace test exercises. Coverage works with Node.js and Python.

Enable Coverage

Add coverage.enabled: true to your .tusk/config.yaml:
coverage:
  enabled: true
Coverage is automatically collected during suite validation runs (triggered by --validate-suite-if-default-branch in CI, which the Drift GitHub Action sets by default). No changes to your CI pipeline are needed. PR branches run tests normally, without coverage overhead.

Filter files

Exclude files from coverage reports using glob patterns:
coverage:
  enabled: true
  exclude:
    - "**/migrations/**"
    - "**/generated/**"
For monorepos, use include to restrict coverage to your service’s code:
coverage:
  enabled: true
  include:
    - "backend/src/**"
Patterns support ** for recursive directory matching:
  Pattern                         Matches
  **/migrations/**                Any file in any migrations/ directory
  backend/src/**                  All files under backend/src/
  **/*.test.ts                    Any .test.ts file
  backend/src/db/migrations/**    A specific subdirectory
Paths are relative to the git root (e.g., backend/src/db/migrations/...). Use **/migrations/** rather than migrations/** to match correctly.
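include and exclude can be combined: include narrows coverage to your service first, then exclude carves out code within it. A sketch for a monorepo (the paths here are illustrative, not required names):

```yaml
coverage:
  enabled: true
  # Restrict coverage to this service's source tree (example path)
  include:
    - "backend/src/**"
  # Then drop generated code and tests anywhere under it
  exclude:
    - "**/migrations/**"
    - "**/generated/**"
    - "**/*.test.ts"
```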

Test locally

Use --show-coverage to see coverage output during local development:
tusk drift run --cloud --show-coverage --print
To export coverage data for Codecov or VS Code:
tusk drift run --cloud --show-coverage --coverage-output coverage.lcov --print

Reading the Output

After tests complete with --show-coverage, you’ll see:
📊 Coverage: 85.9% lines (55/64), 42.9% branches (6/14) across 2 files

  Per-file:
    server.js                                 85.2% (52/61)
    tuskDriftInit.js                         100.0% (3/3)

  Per-test:
    GET /api/random-user                     4 lines across 1 files
    GET /api/weather-activity                13 lines across 1 files
In the interactive TUI (without --print), coverage appears in the service logs panel (aggregate) and each test’s log panel (per-test detail).

CI/CD Integration

Automatic (via config)

With coverage.enabled: true, coverage data is automatically collected during validation runs. No CI changes needed:
# Your existing GitHub Action, no changes required
- uses: Use-Tusk/drift-action@v1
  with:
    api-key: ${{ secrets.TUSK_API_KEY }}

Exporting to Codecov

Add --coverage-output to export LCOV for third-party tools:
- name: Run Drift tests with coverage export
  run: tusk drift run --cloud --coverage-output coverage.lcov --print --ci --validate-suite-if-default-branch

- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v4
  with:
    files: coverage.lcov
    flags: drift-api-tests
What’s included in coverage output:
  • In-suite tests: Always included, even if they fail. A failing test still exercises code paths.
  • Draft tests: Excluded from the file. Draft coverage data is uploaded to Tusk Cloud for promotion decisions.
  • After promotion: The Tusk Cloud dashboard will include newly promoted tests. The LCOV catches up on the next validation run.

Language Notes

  • Uses V8’s built-in precise coverage; no external dependencies are needed
  • TypeScript source maps are handled automatically when compiling to JavaScript (sourceMap: true in tsconfig.json is required for tsc, swc, and esbuild)
  • Node.js --experimental-strip-types works out of the box (no source maps needed — V8 runs .ts directly)
  • Tested with: CJS (require), ESM (import), tsc, ts-node, ts-node-dev, swc, esbuild (compile mode), --experimental-strip-types
  • If using tsc, run a clean build to avoid stale artifacts: rm -rf dist && tsc
  • Near-zero performance overhead
  • Single-process mode required — Node.js cluster module and PM2 cluster mode are not supported during coverage runs
  • Bundlers (webpack, esbuild bundle mode, Rollup, Vite) should work if source maps are produced, but are not yet fully tested. Contact us if you use a bundler and want to confirm compatibility.
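For TypeScript builds, the one setting coverage depends on is sourceMap: true. A minimal tsconfig.json for a tsc build might look like this (compiler options other than sourceMap are illustrative; keep whatever your project already uses):

```json
{
  "compilerOptions": {
    "outDir": "dist",
    "sourceMap": true,
    "module": "commonjs",
    "target": "es2022"
  }
}
```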

Docker Compose

If your service runs in Docker Compose, add coverage env vars to your docker-compose.tusk-override.yml:
services:
  your-service:
    environment:
      - TUSK_DRIFT_MODE=${TUSK_DRIFT_MODE:-}
      - TUSK_MOCK_HOST=${TUSK_MOCK_HOST:-host.docker.internal}
      - TUSK_MOCK_PORT=${TUSK_MOCK_PORT:-9001}
      - TUSK_COVERAGE=${TUSK_COVERAGE:-}
      - NODE_V8_COVERAGE=/tmp/tusk-v8-coverage
NODE_V8_COVERAGE must be a fixed container path (not ${NODE_V8_COVERAGE:-}) because the CLI creates a host temp directory that doesn’t exist inside the container.
Add strip_path_prefix to convert container paths to repo-relative paths:
coverage:
  enabled: true
  strip_path_prefix: "/app"    # match your docker-compose volume mount (e.g., - .:/app)
Without this, coverage paths will be container-absolute (e.g., /app/src/views.py). Set the value to whatever path your project root is mounted at in the container.

Export Formats

LCOV (default)

tusk drift run --cloud --coverage-output coverage.lcov --print
Compatible with Codecov, Coveralls, SonarQube, VS Code, and most coverage tools.

JSON

tusk drift run --cloud --coverage-output coverage.json --print
Includes line-level data, branch data, per-test detail, and aggregate summary.

Troubleshooting

If no coverage data appears:
  1. Check that TUSK_COVERAGE reaches the service. For Docker, ensure it’s in docker-compose.tusk-override.yml.
  2. For Node.js Docker, ensure NODE_V8_COVERAGE is set to a writable container path (e.g., /tmp/tusk-v8-coverage).
  3. For Python, ensure coverage is installed (pip install coverage).
  4. Run with --show-coverage --print locally to verify coverage works before CI.

Why don’t the per-test numbers add up to the aggregate?

The aggregate includes startup coverage (module loading, decorator execution, DI registration), which runs before any test request. This matches how standard coverage tools like Istanbul and NYC work. The per-test breakdown shows only lines exercised by each individual test request, excluding startup.

Why do some files show 100% coverage?

Type definitions, barrel exports (index.ts), and config files are fully executed during module loading, since their code is entirely top-level.

Why do coverage paths start with a container path like /app?

Add strip_path_prefix to your coverage config. Set it to the container mount point from your docker-compose.yaml volumes (e.g., with - .:/app, use strip_path_prefix: "/app").

Why does coverage report compiled .js paths instead of my TypeScript sources?

This happens when compiling TypeScript to JavaScript (via tsc, swc, or esbuild) without source maps. Ensure sourceMap: true is set in your tsconfig.json and that source map files (.js.map) are present alongside the compiled output. If using tsc, run a clean build: rm -rf dist && tsc. If using --experimental-strip-types, this doesn’t apply; paths are already .ts.

Which lines count toward the coverage denominator?

Before any tests run, the CLI takes a “baseline” snapshot that captures every line the runtime considers executable. For Node.js, this is every line V8 compiled (functions, statements, branches). For Python, this is every line coverage.py’s compiler identifies as executable. Comments, blank lines, and structural syntax (closing braces, etc.) are not counted. The denominator starts with the files loaded at startup, and expands to include any additional files loaded during test execution (e.g., lazily-imported modules).
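As a rough illustration of the “executable lines” idea (this is not coverage.py’s or V8’s actual analysis, just a sketch using Python’s standard ast module), you can approximate executable lines by collecting the starting line of every statement node; comments and blank lines never appear:

```python
import ast

def executable_lines(source: str) -> set[int]:
    """Approximate the executable line numbers in a Python source string
    by collecting the first line of every statement node.
    Illustrative only; real coverage tools are more precise."""
    tree = ast.parse(source)
    return {node.lineno for node in ast.walk(tree) if isinstance(node, ast.stmt)}

source = '''\
# a comment: not executable
x = 1

def handler():
    return x
'''
print(sorted(executable_lines(source)))  # → [2, 4, 5]
```

The comment (line 1) and the blank line (line 3) are excluded; the assignment, the def line, and the return all count, which mirrors how the baseline denominator skips non-executable lines.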

Do tests run concurrently during coverage runs?

No. Coverage forces concurrency to 1, overriding any --concurrency flag or test_execution.concurrency config. Per-test coverage relies on counter resets between tests; running tests concurrently would mix coverage data between tests, making per-test attribution impossible.

How should I interpret the coverage numbers?

Drift coverage measures which lines of your service code are exercised by API trace tests. A few things to keep in mind when interpreting the numbers:
  • Tusk selects traces that are representative of how your API is actually used, rather than optimizing for maximum code coverage. Coverage therefore maps directly to the code paths your users depend on, which is often more useful than a high percentage from synthetic tests.
  • Coverage tracks code exercised through API requests. Background jobs, cron tasks, CLI commands, and other non-API code paths won’t appear in the results. This is by design: Drift tests verify API behavior.
  • Branch coverage depends on input variety. A handler with an if/elif chain will only show coverage for the branch arms that the recorded traffic actually triggered. As more diverse traffic is recorded and added to the test suite, branch coverage naturally increases.
  • Coverage numbers will grow over time as more traces are added to the suite and as traffic covers more code paths.