Add polyglot test agent
skills/polyglot-test-agent/SKILL.md (new file, 161 lines)
@@ -0,0 +1,161 @@
---
name: polyglot-test-agent
description: 'Generates comprehensive, workable unit tests for any programming language using a multi-agent pipeline. Use when asked to generate tests, write unit tests, improve test coverage, add test coverage, create test files, or test a codebase. Supports C#, TypeScript, JavaScript, Python, Go, Rust, Java, and more. Orchestrates research, planning, and implementation phases to produce tests that compile, pass, and follow project conventions.'
---

# Polyglot Test Generation Skill

An AI-powered skill that generates comprehensive, workable unit tests for any programming language using a coordinated multi-agent pipeline.

## When to Use This Skill

Use this skill when you need to:
- Generate unit tests for an entire project or specific files
- Improve test coverage for existing codebases
- Create test files that follow project conventions
- Write tests that actually compile and pass
- Add tests for new features or untested code

## How It Works

This skill coordinates multiple specialized agents in a **Research → Plan → Implement** pipeline:

### Pipeline Overview

```
┌─────────────────────────────────────────────────────────────┐
│                        TEST GENERATOR                       │
│        Coordinates the full pipeline and manages state      │
└─────────────────────┬───────────────────────────────────────┘
                      │
        ┌─────────────┼───────────────┐
        ▼             ▼               ▼
  ┌───────────┐ ┌───────────┐ ┌───────────────┐
  │ RESEARCHER│ │  PLANNER  │ │  IMPLEMENTER  │
  │           │ │           │ │               │
  │ Analyzes  │→│ Creates   │→│ Writes tests  │
  │ codebase  │ │ phased    │ │ per phase     │
  │           │ │ plan      │ │               │
  └───────────┘ └───────────┘ └───────┬───────┘
                                      │
                 ┌──────────┬─────────┼─────────┐
                 ▼          ▼         ▼         ▼
            ┌─────────┐ ┌───────┐ ┌───────┐ ┌───────┐
            │ BUILDER │ │TESTER │ │ FIXER │ │LINTER │
            │         │ │       │ │       │ │       │
            │ Compiles│ │ Runs  │ │ Fixes │ │Formats│
            │ code    │ │ tests │ │ errors│ │ code  │
            └─────────┘ └───────┘ └───────┘ └───────┘
```

## Step-by-Step Instructions

### Step 1: Determine the User Request

Make sure you understand what the user is asking for and the scope of the request.

When the user does not express strong requirements for test style, coverage goals, or conventions, source the guidelines from [unit-test-generation.prompt.md](unit-test-generation.prompt.md). This prompt provides best practices for discovering conventions, parameterization strategies, coverage goals (aim for 80%), and language-specific patterns.

### Step 2: Invoke the Test Generator

Start by calling the `polyglot-test-generator` agent with your test generation request:

```
Generate unit tests for [path or description of what to test], following the [unit-test-generation.prompt.md](unit-test-generation.prompt.md) guidelines
```

The Test Generator will manage the entire pipeline automatically.

### Step 3: Research Phase (Automatic)

The `polyglot-test-researcher` agent analyzes your codebase to understand:
- **Language & Framework**: Detects C#, TypeScript, Python, Go, Rust, Java, etc.
- **Testing Framework**: Identifies MSTest, xUnit, Jest, pytest, go test, etc.
- **Project Structure**: Maps source files, existing tests, and dependencies
- **Build Commands**: Discovers how to build and test the project

Output: `.testagent/research.md`

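To make the research step concrete, here is a minimal sketch of how a test framework can be inferred from marker files in a project root. This is purely illustrative: the researcher agent's actual detection logic is not documented here, and the file-to-framework mappings below are common conventions rather than an authoritative list.

```python
# Illustrative sketch only: infers candidate test frameworks from marker files.
# The researcher agent's real detection logic is internal; these mappings are
# common conventions, not an exhaustive or authoritative list.
from pathlib import Path

FRAMEWORK_MARKERS = {
    "pytest.ini": "pytest",
    "jest.config.js": "Jest",
    "go.mod": "go test",
    "Cargo.toml": "cargo test",
    "pom.xml": "JUnit",
}

def guess_test_frameworks(root: str) -> list[str]:
    """Return candidate frameworks based on marker files in the project root."""
    root_path = Path(root)
    return [fw for marker, fw in FRAMEWORK_MARKERS.items() if (root_path / marker).exists()]
```
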
### Step 4: Planning Phase (Automatic)

The `polyglot-test-planner` agent creates a structured implementation plan:
- Groups files into logical phases (2-5 phases typical)
- Prioritizes by complexity and dependencies
- Specifies test cases for each file
- Defines success criteria per phase

Output: `.testagent/plan.md`

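One way to picture the grouping step is the sketch below: sort files by an estimated complexity score and split them into a handful of near-equal phases. This is an assumption for illustration only; the planner's actual strategy is not specified in this document.

```python
# Illustrative sketch only: groups source files into 2-5 phases, simplest
# first. The planner agent's real grouping strategy is not specified here.
def group_into_phases(files: list[str], complexity: dict[str, int], max_phases: int = 5) -> list[list[str]]:
    """Sort files by estimated complexity and split them into near-equal phases."""
    ordered = sorted(files, key=lambda f: complexity.get(f, 0))
    n_phases = max(2, min(max_phases, (len(ordered) + 4) // 5))  # aim for ~5 files per phase
    size = -(-len(ordered) // n_phases)  # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]
```
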
### Step 5: Implementation Phase (Automatic)

The `polyglot-test-implementer` agent executes each phase sequentially:

1. **Read** source files to understand the API
2. **Write** test files following project patterns
3. **Build** using the `polyglot-test-builder` subagent to verify compilation
4. **Test** using the `polyglot-test-tester` subagent to verify tests pass
5. **Fix** using the `polyglot-test-fixer` subagent if errors occur
6. **Lint** using the `polyglot-test-linter` subagent for code formatting

Each phase completes before the next begins, ensuring incremental progress.

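The control flow of this per-phase loop can be sketched as follows, with the subagents modeled as plain callables. This is a minimal sketch of the behavior described above, not the generator's actual implementation, and the retry budget is an assumption.

```python
# Illustrative control-flow sketch only: the subagents (build, test, fix, lint)
# are modeled as plain callables, and the retry budget is an assumption.
from typing import Callable

def run_phase(
    build: Callable[[], bool],
    test: Callable[[], bool],
    fix: Callable[[], None],
    lint: Callable[[], None],
    max_fix_attempts: int = 3,  # assumed; the skill does not specify a budget
) -> bool:
    """Build and run tests; on failure, invoke the fixer and retry, then lint."""
    for _ in range(max_fix_attempts):
        if build() and test():
            lint()        # format the now-passing test files
            return True   # phase is green; the next phase may begin
        fix()             # resolve build/test errors before retrying
    return False          # phase did not converge within the retry budget
```
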
### Coverage Types
- **Happy path**: Valid inputs produce expected outputs
- **Edge cases**: Empty values, boundaries, special characters
- **Error cases**: Invalid inputs, null handling, exceptions

## State Management

All pipeline state is stored in the `.testagent/` folder:

| File | Purpose |
|------|---------|
| `.testagent/research.md` | Codebase analysis results |
| `.testagent/plan.md` | Phased implementation plan |
| `.testagent/status.md` | Progress tracking (optional) |

## Examples

### Example 1: Full Project Testing
```
Generate unit tests for my Calculator project at C:\src\Calculator
```

### Example 2: Specific File Testing
```
Generate unit tests for src/services/UserService.ts
```

### Example 3: Targeted Coverage
```
Add tests for the authentication module with focus on edge cases
```

## Agent Reference

| Agent | Purpose | Tools |
|-------|---------|-------|
| `polyglot-test-generator` | Coordinates pipeline | runCommands, codebase, editFiles, search, runSubagent |
| `polyglot-test-researcher` | Analyzes codebase | runCommands, codebase, editFiles, search, fetch, runSubagent |
| `polyglot-test-planner` | Creates test plan | codebase, editFiles, search, runSubagent |
| `polyglot-test-implementer` | Writes test files | runCommands, codebase, editFiles, search, runSubagent |
| `polyglot-test-builder` | Compiles code | runCommands, codebase, search |
| `polyglot-test-tester` | Runs tests | runCommands, codebase, search |
| `polyglot-test-fixer` | Fixes errors | runCommands, codebase, editFiles, search |
| `polyglot-test-linter` | Formats code | runCommands, codebase, search |

## Requirements

- Project must have a build/test system configured
- Testing framework should be installed (or installable)
- VS Code with the GitHub Copilot extension

## Troubleshooting

### Tests don't compile
The `polyglot-test-fixer` agent will attempt to resolve compilation errors. Check `.testagent/plan.md` for the expected test structure.

### Tests fail
Review the test output and adjust test expectations. Some tests may require mocking dependencies.

### Wrong testing framework detected
Specify your preferred framework in the initial request: "Generate Jest tests for..."

skills/polyglot-test-agent/unit-test-generation.prompt.md (new file, 157 lines)
@@ -0,0 +1,157 @@
---
agent: 'agent'
description: 'Best practices and guidelines for generating comprehensive, parameterized unit tests with 80% code coverage across any programming language'
model: 'Claude Sonnet 4.5'
---

# Unit Test Generation Prompt

You are an expert code generation assistant specialized in writing concise, effective, and logical unit tests. You carefully analyze the provided source code, identify important edge cases and potential bugs, and produce minimal yet comprehensive, high-quality unit tests that follow best practices and cover the entire scope of the code under test. Aim for 80% code coverage.

## Discover and Follow Conventions

Before generating tests, analyze the codebase to understand existing conventions:

- **Location**: Where test projects and test files are placed
- **Naming**: Namespace, class, and method naming patterns
- **Frameworks**: Testing, mocking, and assertion frameworks used
- **Harnesses**: Preexisting setups, base classes, or testing utilities
- **Guidelines**: Testing or coding guidelines in instruction files, README, or docs

If you identify a strong pattern, follow it unless the user explicitly requests otherwise. If no pattern exists and there's no user guidance, use your best judgment.

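As an illustration, one simple discovery heuristic is to count test files per common naming pattern; the dominant pattern suggests the convention new tests should follow. The glob list below is an assumption, not an exhaustive inventory.

```python
# Illustrative heuristic only: counts test files per common naming pattern.
# The dominant pattern suggests the convention new tests should follow.
from pathlib import Path

TEST_PATTERNS = [
    "test_*.py", "*_test.py",   # pytest styles
    "*.test.ts", "*.spec.ts",   # Jest/Vitest styles
    "*Tests.cs", "*Test.java",  # MSTest/xUnit and JUnit styles
]

def discover_naming_convention(root: str) -> dict[str, int]:
    """Return nonzero match counts per pattern under the project root."""
    counts = {p: sum(1 for _ in Path(root).rglob(p)) for p in TEST_PATTERNS}
    return {p: n for p, n in counts.items() if n > 0}
```
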
## Test Generation Requirements

Generate concise, parameterized, and effective unit tests using discovered conventions.

- **Prefer mocking** over generating one-off testing types
- **Prefer unit tests** over integration tests, unless integration tests are clearly needed and can run locally
- **Traverse code thoroughly** to ensure high coverage (80%+) of the entire scope

### Key Testing Goals

| Goal | Description |
|------|-------------|
| **Minimal but Comprehensive** | Avoid redundant tests |
| **Logical Coverage** | Focus on meaningful edge cases, domain-specific inputs, boundary values, and bug-revealing scenarios |
| **Core Logic Focus** | Test positive cases and actual execution logic; avoid low-value tests for language features |
| **Balanced Coverage** | Don't let negative/edge cases outnumber tests of actual logic |
| **Best Practices** | Use the Arrange-Act-Assert pattern and proper naming (`Method_Condition_ExpectedResult`) |
| **Buildable & Complete** | Tests must compile, run, and contain no hallucinated or missing logic |

## Parameterization

- Prefer parameterized tests (e.g., `[DataRow]`, `[Theory]`, `@pytest.mark.parametrize`) over multiple similar methods
- Combine logically related test cases into a single parameterized method
- Never generate multiple tests with identical logic that differ only by input values

## Analysis Before Generation

Before writing tests:

1. **Analyze** the code line by line to understand what each section does
2. **Document** all parameters, their purposes, constraints, and valid/invalid ranges
3. **Identify** potential edge cases and error conditions
4. **Describe** expected behavior under different input conditions
5. **Note** dependencies that need mocking
6. **Consider** concurrency, resource management, or special conditions
7. **Identify** domain-specific validation or business rules

Apply this analysis to the **entire** code scope, not just a portion.

## Coverage Types

| Type | Examples |
|------|----------|
| **Happy Path** | Valid inputs produce expected outputs |
| **Edge Cases** | Empty values, boundaries, special characters, zero/negative numbers |
| **Error Cases** | Invalid inputs, null handling, exceptions, timeouts |
| **State Transitions** | Before/after operations, initialization, cleanup |

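The language-specific examples below cover the happy-path and error rows; for the state-transition row, a test might look like this pytest sketch. The `Counter` class is invented here purely for illustration.

```python
# Hypothetical state-transition test: asserts on state before and after each
# operation. The Counter class is invented for illustration.
class Counter:
    def __init__(self) -> None:
        self.value = 0

    def increment(self) -> None:
        self.value += 1

    def reset(self) -> None:
        self.value = 0

def test_increment_then_reset_returns_to_initial_state():
    sut = Counter()
    assert sut.value == 0   # initial state
    sut.increment()
    assert sut.value == 1   # state after the operation
    sut.reset()
    assert sut.value == 0   # cleanup restores the initial state
```
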
## Language-Specific Examples

### C# (MSTest)

```csharp
[TestClass]
public sealed class CalculatorTests
{
    private readonly Calculator _sut = new();

    [TestMethod]
    [DataRow(2, 3, 5, DisplayName = "Positive numbers")]
    [DataRow(-1, 1, 0, DisplayName = "Negative and positive")]
    [DataRow(0, 0, 0, DisplayName = "Zeros")]
    public void Add_ValidInputs_ReturnsSum(int a, int b, int expected)
    {
        // Act
        var result = _sut.Add(a, b);

        // Assert
        Assert.AreEqual(expected, result);
    }

    [TestMethod]
    public void Divide_ByZero_ThrowsDivideByZeroException()
    {
        // Act & Assert
        Assert.ThrowsException<DivideByZeroException>(() => _sut.Divide(10, 0));
    }
}
```

### TypeScript (Jest)

```typescript
describe('Calculator', () => {
  let sut: Calculator;

  beforeEach(() => {
    sut = new Calculator();
  });

  it.each([
    [2, 3, 5],
    [-1, 1, 0],
    [0, 0, 0],
  ])('add(%i, %i) returns %i', (a, b, expected) => {
    expect(sut.add(a, b)).toBe(expected);
  });

  it('divide by zero throws error', () => {
    expect(() => sut.divide(10, 0)).toThrow('Division by zero');
  });
});
```

### Python (pytest)

```python
import pytest
from calculator import Calculator

class TestCalculator:
    @pytest.fixture
    def sut(self):
        return Calculator()

    @pytest.mark.parametrize("a,b,expected", [
        (2, 3, 5),
        (-1, 1, 0),
        (0, 0, 0),
    ])
    def test_add_valid_inputs_returns_sum(self, sut, a, b, expected):
        assert sut.add(a, b) == expected

    def test_divide_by_zero_raises_error(self, sut):
        with pytest.raises(ZeroDivisionError):
            sut.divide(10, 0)
```

## Output Requirements

- Tests must be **complete and buildable** with no placeholder code
- Follow the **exact conventions** discovered in the target codebase
- Include **appropriate imports** and setup code
- Add **brief comments** explaining non-obvious test purposes
- Place tests in the **correct location** following project structure