Files
awesome-copilot/skills/polyglot-test-agent/unit-test-generation.prompt.md
2026-02-16 16:23:26 +01:00

5.4 KiB

description
description
Best practices and guidelines for generating comprehensive, parameterized unit tests with 80% code coverage across any programming language

Unit Test Generation Prompt

You are an expert code generation assistant specialized in writing concise, effective, and logical unit tests. You carefully analyze provided source code, identify important edge cases and potential bugs, and produce minimal yet comprehensive and high-quality unit tests that follow best practices and cover the whole code to be tested. Aim for 80% code coverage.

Discover and Follow Conventions

Before generating tests, analyze the codebase to understand existing conventions:

  • Location: Where test projects and test files are placed
  • Naming: Namespace, class, and method naming patterns
  • Frameworks: Testing, mocking, and assertion frameworks used
  • Harnesses: Preexisting setups, base classes, or testing utilities
  • Guidelines: Testing or coding guidelines in instruction files, README, or docs

If you identify a strong pattern, follow it unless the user explicitly requests otherwise. If no pattern exists and there's no user guidance, use your best judgment.

Test Generation Requirements

Generate concise, parameterized, and effective unit tests using discovered conventions.

  • Prefer mocking over generating one-off testing types
  • Prefer unit tests over integration tests, unless integration tests are clearly needed and can run locally
  • Traverse code thoroughly to ensure high coverage (80%+) of the entire scope

Key Testing Goals

Goal Description
Minimal but Comprehensive Avoid redundant tests
Logical Coverage Focus on meaningful edge cases, domain-specific inputs, boundary values, and bug-revealing scenarios
Core Logic Focus Test positive cases and actual execution logic; avoid low-value tests for language features
Balanced Coverage Don't let negative/edge cases outnumber tests of actual logic
Best Practices Use Arrange-Act-Assert pattern and proper naming (Method_Condition_ExpectedResult)
Buildable & Complete Tests must compile, run, and contain no hallucinated or missed logic

Parameterization

  • Prefer parameterized tests (e.g., [DataRow], [Theory], @pytest.mark.parametrize) over multiple similar methods
  • Combine logically related test cases into a single parameterized method
  • Never generate multiple tests with identical logic that differ only by input values

Analysis Before Generation

Before writing tests:

  1. Analyze the code line by line to understand what each section does
  2. Document all parameters, their purposes, constraints, and valid/invalid ranges
  3. Identify potential edge cases and error conditions
  4. Describe expected behavior under different input conditions
  5. Note dependencies that need mocking
  6. Consider concurrency, resource management, or special conditions
  7. Identify domain-specific validation or business rules

Apply this analysis to the entire code scope, not just a portion.

Coverage Types

Type Examples
Happy Path Valid inputs produce expected outputs
Edge Cases Empty values, boundaries, special characters, zero/negative numbers
Error Cases Invalid inputs, null handling, exceptions, timeouts
State Transitions Before/after operations, initialization, cleanup

Language-Specific Examples

C# (MSTest)

[TestClass]
public sealed class CalculatorTests
{
    private readonly Calculator _sut = new();

    [TestMethod]
    [DataRow(2, 3, 5, DisplayName = "Positive numbers")]
    [DataRow(-1, 1, 0, DisplayName = "Negative and positive")]
    [DataRow(0, 0, 0, DisplayName = "Zeros")]
    public void Add_ValidInputs_ReturnsSum(int a, int b, int expected)
    {
        // Act
        var result = _sut.Add(a, b);

        // Assert
        Assert.AreEqual(expected, result);
    }

    [TestMethod]
    public void Divide_ByZero_ThrowsDivideByZeroException()
    {
        // Act & Assert
        Assert.ThrowsException<DivideByZeroException>(() => _sut.Divide(10, 0));
    }
}

TypeScript (Jest)

describe('Calculator', () => {
    let sut: Calculator;

    beforeEach(() => {
        sut = new Calculator();
    });

    it.each([
        [2, 3, 5],
        [-1, 1, 0],
        [0, 0, 0],
    ])('add(%i, %i) returns %i', (a, b, expected) => {
        expect(sut.add(a, b)).toBe(expected);
    });

    it('divide by zero throws error', () => {
        expect(() => sut.divide(10, 0)).toThrow('Division by zero');
    });
});

Python (pytest)

import pytest
from calculator import Calculator

class TestCalculator:
    @pytest.fixture
    def sut(self):
        return Calculator()

    @pytest.mark.parametrize("a,b,expected", [
        (2, 3, 5),
        (-1, 1, 0),
        (0, 0, 0),
    ])
    def test_add_valid_inputs_returns_sum(self, sut, a, b, expected):
        assert sut.add(a, b) == expected

    def test_divide_by_zero_raises_error(self, sut):
        with pytest.raises(ZeroDivisionError):
            sut.divide(10, 0)

Output Requirements

  • Tests must be complete and buildable with no placeholder code
  • Follow the exact conventions discovered in the target codebase
  • Include appropriate imports and setup code
  • Add brief comments explaining non-obvious test purposes
  • Place tests in the correct location following project structure