This feature is currently in beta and available to select customers. The exact CLI commands, API shapes, and workflow details may evolve during the beta period.

Overview

Tusk’s unit test generation experience is primarily designed around pull requests, merge requests, and the Tusk web app. The CLI and agent workflow beta extends that experience to local developer environments. With this beta, developers and coding agents (e.g., Claude Code or Codex) can work with Tusk-generated test results in a more direct, machine-friendly way. This is especially useful for teams that already use local coding agents and want those agents to help review, refine, and incorporate Tusk’s output into their workflows.

Use Cases

The beta is designed around three use cases:
  1. Retrieve Tusk test generation results — fetch the latest run for a branch and pull its generated test scenarios and code, in JSON.
  2. Let local agents iterate on tests — use Tusk’s scenarios as a starting point and refine them locally without waiting on another remote cycle.
  3. Send feedback back to Tusk — report which scenarios were useful, which were incorporated, and what should improve in future runs.
This is a good fit for teams that want tighter control over how Tusk-generated tests land in their codebase — especially teams already using coding agents like Claude Code.

Agent Skills

Tusk ships a ready-to-use agent skill, tusk-unit-tests, so your coding agent can pull up the tests Tusk generated, help you decide which ones are worth keeping, and apply them to your branch.
npx skills add Use-Tusk/tusk-skills --skill tusk-unit-tests
Once installed, ask your agent something like “review the Tusk tests for this branch” and it will fetch the latest run, check which tests still apply to your current code, and walk you through adopting the ones worth keeping. Teams building their own agent tooling can model it after tusk-unit-tests or wire their own skill on top of the CLI primitives below.

Example Workflow

1. Tusk generates unit tests for a PR or MR

Tusk continues to generate tests in the cloud as part of its normal pull request or merge request workflow.

2. A developer or coding agent retrieves the latest run

The CLI returns machine-readable output so a local agent can inspect the run and its generated scenarios.

3. The agent reviews and applies selected tests

The agent surfaces which tests are worth keeping, the developer confirms the selection, and the tests land in the local checkout ready to run.

4. Feedback is sent back to Tusk

The developer or agent reports which scenarios were useful and which tests were incorporated, so Tusk can improve future runs.

Principles

We are designing this workflow around a few principles:
  • Agent-friendly by default: outputs should be easy to parse and consume programmatically
  • Local iteration when it matters: developers and agents should be able to refine tests without waiting on a full remote cycle
  • Clear ownership: Tusk suggests and generates, while the developer or local agent decides what to keep
  • Feedback loops back into the product: beta feedback should directly improve scenario quality and incorporation UX

Example CLI Workflow

Below is an example of the kind of workflow we are targeting in the beta.

1. Retrieve the latest Tusk run for a branch

tusk unit latest-run --repo acme/payments-service --branch feature/add-refund-guardrails
{
  "latest": {
    "run_id": "tccr_123",
    "status": "completed",
    "status_detail": null,
    "run_type": "commit_check",
    "created_at": "2026-04-12T18:04:11.421Z",
    "commit_sha": "abc123def456",
    "branch": "feature/add-refund-guardrails"
  },
  "history": [
    {
      "run_id": "tccr_122",
      "status": "completed",
      "run_type": "commit_check",
      "run_type_label": "Initial run for this commit",
      "commit_sha": "7fb2d6ec47ab2ac6",
      "retry_feedback": null,
      "created_at": "2026-04-12T17:31:02.118Z",
      "test_count": 3
    }
  ],
  "next_steps": [
    "Get full run details: `tusk unit get-run tccr_123`",
    "Apply diffs: `tusk unit get-diffs tccr_123 | jq -r '.files[].diff' | git apply`"
  ]
}
Status values: in_progress, completed, cancelled, skipped, error.
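An agent consuming this output usually needs just the ID of the latest completed run. A minimal sketch of that check, assuming jq is installed; the `resp` variable below is an abridged stand-in for the live latest-run output, not a real CLI call:

```shell
# Abridged stand-in for: tusk unit latest-run --repo ... --branch ...
resp='{"latest":{"run_id":"tccr_123","status":"completed","branch":"feature/add-refund-guardrails"}}'

# Emit the run ID only when the run finished; print nothing for
# in_progress, cancelled, skipped, or error runs
run_id=$(printf '%s' "$resp" | jq -r 'select(.latest.status == "completed") | .latest.run_id')
echo "$run_id"
```

If `run_id` comes back empty, an agent can fall back to the `history` array or simply wait and retry.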

2. Fetch the full run details

tusk unit get-run tccr_123
{
  "run_id": "tccr_123",
  "status": "completed",
  "status_detail": null,
  "run_type": "commit_check",
  "created_at": "2026-04-12T18:04:11.421Z",
  "commit_sha": "abc123def456",
  "branch": "feature/add-refund-guardrails",
  "repo": "acme/payments-service",
  "base_branch": "main",
  "base_commit_sha": "def789abc012",
  "run_description": "...",
  "webapp_url": "https://app.usetusk.ai/app/testing-commit-check/tccr_123?client=...",
  "test_scenarios": [
    {
      "scenario_id": "ts_1",
      "is_passing": false,
      "file_path": "src/refunds/refundService.ts",
      "symbol_name": "validateRefundEligibility",
      "scenario_description": "Reject refunds when the original charge is already disputed",
      "test_file_path": "src/refunds/__tests__/refundService.test.ts"
    },
    {
      "scenario_id": "ts_2",
      "is_passing": true,
      "file_path": "src/refunds/refundService.ts",
      "symbol_name": "validateRefundEligibility",
      "scenario_description": "Allow partial refunds that stay within the remaining refundable amount",
      "test_file_path": "src/refunds/__tests__/refundService.test.ts"
    }
  ],
  "coverage_gains": null,
  "next_steps": [
    "Review a test scenario: `tusk unit get-scenario --run-id tccr_123 --scenario-id <scenario_id>`",
    "Apply all diffs: `tusk unit get-diffs tccr_123 | jq -r '.files[].diff' | git apply`"
  ]
}
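The `is_passing` flag makes it easy for an agent to triage a run before touching any files. A small sketch, assuming jq is installed; the `run` variable is an abridged stand-in for the get-run output above:

```shell
# Abridged stand-in for: tusk unit get-run tccr_123
run='{"test_scenarios":[
  {"scenario_id":"ts_1","is_passing":false,"scenario_description":"Reject refunds when the original charge is already disputed"},
  {"scenario_id":"ts_2","is_passing":true,"scenario_description":"Allow partial refunds that stay within the remaining refundable amount"}]}'

# List scenarios that are not yet passing, so the agent inspects those first
failing=$(printf '%s' "$run" | jq -r '.test_scenarios[] | select(.is_passing | not) | .scenario_id')
echo "$failing"
```

Each ID printed here can then be fed to `tusk unit get-scenario` for a closer look at the test code and output.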

3. Fetch one test scenario

tusk unit get-scenario --run-id tccr_123 --scenario-id ts_1
{
  "scenario_id": "ts_1",
  "is_passing": false,
  "file_path": "src/refunds/refundService.ts",
  "symbol_name": "validateRefundEligibility",
  "scenario_description": "Reject refunds when the original charge is already disputed",
  "test_file_path": "src/refunds/__tests__/refundService.test.ts",
  "test_code": "it('rejects refunds for disputed charges', async () => { /* ... */ })",
  "test_output": "Expected refund request to be rejected when charge.disputeStatus = 'open'",
  "has_error": false,
  "original_run_id": null
}

4. Pull the generated test changes locally

tusk unit get-diffs tccr_123
{
  "files": [
    {
      "file_path": "src/refunds/__tests__/refundService.test.ts",
      "file_type": "test_file",
      "diff": "@@ -42,6 +42,18 @@\n describe('validateRefundEligibility', () => {\n+  it('rejects refunds for disputed charges', async () => {\n+    /* ... */\n+  })\n })",
      "scenario_ids": ["ts_1"]
    }
  ]
}
file_type is one of test_file | source_file | auxiliary_file | unknown. scenario_ids correlates each file diff back to the scenarios that contributed to it — useful when a user only wants to adopt a subset.

Returning the generated test changes as file-level unified diffs is often the safest workflow for local development: a developer or coding agent can review the proposed changes, apply them selectively, and resolve conflicts against local state when needed. Pipe the output through jq and git apply for a one-shot adoption:
tusk unit get-diffs tccr_123 | jq -r '.files[].diff' | git apply
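For partial adoption, the `scenario_ids` field lets you keep only the diffs contributed by the scenarios you chose. A sketch of that filter, assuming jq is installed; the `diffs` variable and its file entries are hypothetical stand-ins for real get-diffs output:

```shell
# Hypothetical stand-in for: tusk unit get-diffs tccr_123
diffs='{"files":[
  {"file_path":"a.test.ts","diff":"diff-a","scenario_ids":["ts_1"]},
  {"file_path":"b.test.ts","diff":"diff-b","scenario_ids":["ts_2"]}]}'

# Keep only diffs whose scenario_ids include the scenario we want to adopt;
# jq's index() returns null when the ID is absent, so select() drops that file
want="ts_1"
selected=$(printf '%s' "$diffs" | jq -r --arg id "$want" \
  '.files[] | select(.scenario_ids | index($id)) | .diff')
echo "$selected"
```

In a real workflow the `selected` output would be piped to `git apply` instead of echoed.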

5. Provide feedback back to Tusk

tusk unit feedback --run-id tccr_123 --file - <<'EOF'
{
  "scenarios": [
    {
      "scenario_id": "ts_1",
      "positive_feedback": ["covers_critical_path"],
      "applied_locally": true,
      "comment": "We kept this test and expanded the assertions locally."
    },
    {
      "scenario_id": "ts_2",
      "negative_feedback": ["no_value"],
      "applied_locally": false,
      "comment": "Too trivial to be worth keeping."
    }
  ]
}
EOF
Allowed positive_feedback values: covers_critical_path, valid_edge_case, caught_a_bug, other. Allowed negative_feedback values: incorrect_business_assumption, duplicates_existing_test, no_value, incorrect_assertion, poor_coding_practice, other.

For a run that went in the wrong overall direction (wrong mocks, wrong symbols, wrong strategy), submit run-level guidance and trigger a fresh run in the same call with --retry:
tusk unit feedback --run-id tccr_123 --retry --file - <<'EOF'
{
  "run_feedback": {
    "comment": "The run targeted the right files, but the mocks do not match the real service contracts. Use simpler setup assumptions and focus on externally observable behavior."
  }
}
EOF
Prefer small local edits when generated tests are mostly correct. Use --retry only when the fixes are too broad to make locally. This feedback helps Tusk improve future test generation, and teams that want more direct control can also manage repo-specific instructions on the Customization page.
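Since the feedback endpoint only accepts the enumerated values listed above, an agent assembling a payload programmatically may want to validate it first. A sketch of one way to do that, assuming jq is installed; the payload and the validation step are illustrative, not part of the CLI:

```shell
# The allowed negative_feedback values from the documentation above
allowed_neg='["incorrect_business_assumption","duplicates_existing_test","no_value","incorrect_assertion","poor_coding_practice","other"]'

# A feedback payload an agent is about to submit (illustrative)
payload='{"scenarios":[{"scenario_id":"ts_2","negative_feedback":["no_value"],"applied_locally":false}]}'

# Check every negative_feedback entry against the allowed list before
# piping the payload to `tusk unit feedback --file -`
ok=$(printf '%s' "$payload" | jq --argjson allowed "$allowed_neg" \
  '[.scenarios[] | (.negative_feedback // [])[] | IN($allowed[])] | all')
echo "$ok"
```

A `false` here means the payload contains a value the API would reject, so the agent can correct it before making the call.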

Questions?

Contact support@usetusk.ai. We are actively looking for feedback from early design partners as we shape this workflow.