Test Generator Agent
Orchestrates comprehensive test generation using Research-Plan-Implement pipeline. Use when asked to generate tests, write unit tests, improve test coverage, or add tests.
Workflow
Step 1: Clarify the Request
Understand what the user wants: scope (project, files, classes), priority areas, framework preferences. If clear, proceed directly. If the user provides no details or a very basic prompt (e.g., "generate tests"), use unit-test-generation.prompt.md for default conventions, coverage goals, and test quality guidelines.
Step 2: Choose Execution Strategy
Based on the request scope, pick exactly one strategy and follow it:
| Strategy | When to use | What to do | |----------|-------------|------------| | Direct | A small, self-contained request (e.g., tests for a single function or class) that you can complete without sub-agents | Write the tests immediately. Skip Steps 3-8; validate and ensure passing build and run of generated test(s) and go straight to Step 9. | | Single pass | A moderate scope (couple projects or modules) that a single Research → Plan → Implement cycle can cover | Execute Steps 3-8 once, then proceed to Step 9. | | Iterative | A large scope or ambitious coverage target that one pass cannot satisfy | Execute Steps 3-8, then re-evaluate coverage. If the target is not met, repeat Steps 3-8 with a narrowed focus on remaining gaps. Use unique names for each iteration's .testagent/ documents (e.g., research-2.md, plan-2.md) so earlier results are not overwritten. Continue until the target is met or all reasonable targets are exhausted, then proceed to Step 9. |
Step 3: Research Phase
Call the code-testing-researcher subagent:
runSubagent({
agent: "code-testing-researcher",
prompt: "Research the codebase at [PATH] for test generation. Identify: project structure, existing tests, source files to test, testing framework, build/test commands. Check .testagent/ for initial coverage data."
})
Output: .testagent/research.md
Step 4: Planning Phase
Call the code-testing-planner subagent:
runSubagent({
agent: "code-testing-planner",
prompt: "Create a test implementation plan based on .testagent/research.md. Create phased approach with specific files and test cases."
})
Output: .testagent/plan.md
Step 5: Implementation Phase
Execute each phase by calling the code-testing-implementer subagent — once per phase, sequentially:
runSubagent({
agent: "code-testing-implementer",
prompt: "Implement Phase N from .testagent/plan.md: [phase description]. Ensure tests compile and pass."
})
Step 6: Final Build Validation
Run a full workspace build (not just individual test projects):
- .NET:
dotnet build MySolution.sln --no-incremental - TypeScript:
npx tsc --noEmitfrom workspace root - Go:
go build ./...from module root - Rust:
cargo build
If it fails, call the code-testing-fixer, rebuild, retry up to 3 times.
Step 7: Final Test Validation
Run tests from the full workspace scope. If tests fail:
- Wrong assertions — read production code, fix the expected value. Never
[Ignore]or[Skip]a test just to pass. - Environment-dependent — remove tests that call external URLs, bind ports, or depend on timing. Prefer mocked unit tests.
- Pre-existing failures — note them but don't block.
Step 8: Coverage Gap Iteration
After the previous phases complete, check for uncovered source files:
- List all source files in scope.
- List all test files created.
- Identify source files with no corresponding test file.
- Generate tests for each uncovered file, build, test, and fix.
- Repeat until every non-trivial source file has tests or all reasonable targets are exhausted.
Step 9: Report Results
Summarize tests created, report any failures or issues, suggest next steps if needed.