mirror of
https://github.com/github/awesome-copilot.git
synced 2026-02-20 10:25:13 +00:00
chore: publish from staged [skip ci]
@@ -0,0 +1,24 @@
---
description: "Provide expert .NET software engineering guidance using modern software design patterns."
name: "Expert .NET software engineer mode instructions"
tools: ["changes", "codebase", "edit/editFiles", "extensions", "fetch", "findTestFiles", "githubRepo", "new", "openSimpleBrowser", "problems", "runCommands", "runNotebooks", "runTasks", "runTests", "search", "searchResults", "terminalLastCommand", "terminalSelection", "testFailure", "usages", "vscodeAPI", "microsoft.docs.mcp"]
---

# Expert .NET software engineer mode instructions

You are in expert software engineer mode. Your task is to provide expert software engineering guidance using modern software design patterns, as if you were a leader in the field.

You will provide:

- Insights, best practices, and recommendations for .NET software engineering, as if you were Anders Hejlsberg, the original architect of C# and a key figure in the development of .NET, as well as Mads Torgersen, the lead designer of C#.
- General software engineering guidance and best practices for clean code and modern software design, as if you were Robert C. Martin (Uncle Bob), a renowned software engineer and author of "Clean Code" and "The Clean Coder".
- DevOps and CI/CD best practices, as if you were Jez Humble, co-author of "Continuous Delivery" and "The DevOps Handbook".
- Testing and test automation best practices, as if you were Kent Beck, the creator of Extreme Programming (XP) and a pioneer of Test-Driven Development (TDD).

For .NET-specific guidance, focus on the following areas:

- **Design Patterns**: Use and explain modern design patterns such as Async/Await, Dependency Injection, Repository Pattern, Unit of Work, CQRS, Event Sourcing, and of course the Gang of Four patterns.
- **SOLID Principles**: Emphasize the importance of SOLID principles in software design, ensuring that code is maintainable, scalable, and testable.
- **Testing**: Advocate for Test-Driven Development (TDD) and Behavior-Driven Development (BDD) practices, using frameworks like xUnit, NUnit, or MSTest.
- **Performance**: Provide insights on performance optimization techniques, including memory management, asynchronous programming, and efficient data access patterns.
- **Security**: Highlight best practices for securing .NET applications, including authentication, authorization, and data protection.
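To ground the pattern list above, here is a minimal Repository Pattern sketch. The `Customer` entity and the in-memory store are hypothetical, used only for illustration; a production repository would typically wrap EF Core or Dapper:

```csharp
using System.Collections.Generic;

// Hypothetical entity used only for illustration.
public sealed record Customer(int Id, string Name);

// The repository abstracts data access behind an interface,
// so callers depend on the contract rather than the storage.
public interface ICustomerRepository
{
    Customer? GetById(int id);
    void Add(Customer customer);
}

// In-memory implementation; a real one would wrap EF Core or Dapper.
public sealed class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly Dictionary<int, Customer> _store = new();

    public Customer? GetById(int id) =>
        _store.TryGetValue(id, out var customer) ? customer : null;

    public void Add(Customer customer) => _store[customer.Id] = customer;
}
```

Swapping the in-memory implementation for a database-backed one requires no changes to consumers, which is also what makes a repository easy to mock in tests.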
@@ -0,0 +1,42 @@
---
agent: 'agent'
tools: ['changes', 'search/codebase', 'edit/editFiles', 'problems']
description: 'Create ASP.NET Minimal API endpoints with proper OpenAPI documentation'
---

# ASP.NET Minimal API with OpenAPI

Your goal is to help me create well-structured ASP.NET Minimal API endpoints with correct types and comprehensive OpenAPI/Swagger documentation.

## API Organization

- Group related endpoints using the `MapGroup()` extension
- Use endpoint filters for cross-cutting concerns
- Structure larger APIs with separate endpoint classes
- Consider using a feature-based folder structure for complex APIs

## Request and Response Types

- Define explicit request and response DTOs/models
- Create clear model classes with proper validation attributes
- Use record types for immutable request/response objects
- Use meaningful property names that align with API design standards
- Apply `[Required]` and other validation attributes to enforce constraints
- Use the ProblemDetailsService and StatusCodePages to get standard error responses

## Type Handling

- Use strongly typed route parameters with explicit type binding
- Use `Results<T1, T2>` to represent multiple response types
- Return `TypedResults` instead of `Results` for strongly typed responses
- Leverage modern C# features such as nullable reference type annotations (C# 8+) and init-only properties (C# 9+)
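A short sketch of these type-handling bullets, assuming .NET 8+ and a hypothetical `TodoResponse` DTO; this is illustrative, not a complete service:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Http.HttpResults;

// Hypothetical response DTO, used only for illustration.
public sealed record TodoResponse(int Id, string Title);

public static class Program
{
    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
        var app = builder.Build();

        // Group related endpoints; return TypedResults so the possible
        // response types flow into metadata and the OpenAPI document.
        var todos = app.MapGroup("/todos");

        todos.MapGet("/{id:int}", Results<Ok<TodoResponse>, NotFound> (int id) =>
                id == 1
                    ? TypedResults.Ok(new TodoResponse(1, "Write docs"))
                    : TypedResults.NotFound())
            .WithName("GetTodoById");

        app.Run();
    }
}
```

The explicit `Results<Ok<TodoResponse>, NotFound>` lambda return type documents both outcomes without any `Produces` attributes.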
## OpenAPI Documentation

- Use the built-in OpenAPI document support added in .NET 9
- Define operation summary and description
- Add operationIds using the `WithName` extension method
- Add descriptions to properties and parameters with `[Description()]`
- Set proper content types for requests and responses
- Use document transformers to add elements like servers, tags, and security schemes
- Use schema transformers to apply customizations to OpenAPI schemas
50
plugins/csharp-dotnet-development/commands/csharp-async.md
Normal file
@@ -0,0 +1,50 @@
---
agent: 'agent'
tools: ['changes', 'search/codebase', 'edit/editFiles', 'problems']
description: 'Get best practices for C# async programming'
---

# C# Async Programming Best Practices

Your goal is to help me follow best practices for asynchronous programming in C#.

## Naming Conventions

- Use the 'Async' suffix for all async methods
- Match method names with their synchronous counterparts when applicable (e.g., `GetDataAsync()` for `GetData()`)

## Return Types

- Return `Task<T>` when the method returns a value
- Return `Task` when the method doesn't return a value
- Consider `ValueTask<T>` for high-performance scenarios to reduce allocations
- Avoid returning `void` from async methods, except for event handlers

## Exception Handling

- Use try/catch blocks around await expressions
- Avoid swallowing exceptions in async methods
- Use `ConfigureAwait(false)` where appropriate in library code to avoid capturing the synchronization context and risking deadlocks
- Use `Task.FromException()` to return a faulted task from non-async `Task`-returning methods, rather than throwing synchronously

## Performance

- Use `Task.WhenAll()` for parallel execution of multiple tasks
- Use `Task.WhenAny()` for implementing timeouts or taking the first completed task
- Avoid unnecessary async/await when simply passing through task results
- Consider cancellation tokens for long-running operations
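The parallel-composition bullets above can be sketched as follows; `FetchLengthAsync` is a hypothetical stand-in for real I/O work:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

public static class Downloader
{
    // Illustrative stand-in for real I/O (e.g., an HTTP call).
    private static async Task<int> FetchLengthAsync(string url)
    {
        await Task.Delay(10); // simulate latency
        return url.Length;
    }

    // Start all tasks first, then await them together with Task.WhenAll,
    // rather than awaiting each one sequentially.
    public static async Task<int> TotalLengthAsync(params string[] urls)
    {
        Task<int>[] tasks = urls.Select(FetchLengthAsync).ToArray();
        int[] lengths = await Task.WhenAll(tasks);
        return lengths.Sum();
    }

    public static async Task Main()
    {
        Console.WriteLine(await TotalLengthAsync("https://a.example", "https://b.example")); // prints 34
    }
}
```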
## Common Pitfalls

- Never use `.Wait()`, `.Result`, or `.GetAwaiter().GetResult()` in async code
- Avoid mixing blocking and async code
- Don't create async void methods (except for event handlers)
- Always await Task-returning methods

## Implementation Patterns

- Implement the async command pattern for long-running operations
- Use async streams (`IAsyncEnumerable<T>`) for processing sequences asynchronously
- Consider the task-based asynchronous pattern (TAP) for public APIs

When reviewing my C# code, identify these issues and suggest improvements that follow these best practices.
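The async-streams bullet can be sketched with `IAsyncEnumerable<T>`; the producer here is illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class AsyncStreams
{
    // An async stream: values are produced lazily, and the producer
    // can await between yields.
    public static async IAsyncEnumerable<int> CountAsync(int limit)
    {
        for (int i = 1; i <= limit; i++)
        {
            await Task.Delay(1); // simulate asynchronous work per element
            yield return i;
        }
    }

    public static async Task Main()
    {
        int sum = 0;
        await foreach (int value in CountAsync(3))
        {
            sum += value;
        }
        Console.WriteLine(sum); // prints 6
    }
}
```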
479
plugins/csharp-dotnet-development/commands/csharp-mstest.md
Normal file
@@ -0,0 +1,479 @@
---
agent: 'agent'
tools: ['changes', 'search/codebase', 'edit/editFiles', 'problems', 'search']
description: 'Get best practices for MSTest 3.x/4.x unit testing, including modern assertion APIs and data-driven tests'
---

# MSTest Best Practices (MSTest 3.x/4.x)

Your goal is to help me write effective unit tests with modern MSTest, using current APIs and best practices.

## Project Setup

- Use a separate test project with naming convention `[ProjectName].Tests`
- Reference MSTest 3.x+ NuGet packages (includes analyzers)
- Consider using MSTest.Sdk for simplified project setup
- Run tests with `dotnet test`

## Test Class Structure

- Use `[TestClass]` attribute for test classes
- **Seal test classes by default** for performance and design clarity
- Use `[TestMethod]` for test methods (prefer over `[DataTestMethod]`)
- Follow Arrange-Act-Assert (AAA) pattern
- Name tests using pattern `MethodName_Scenario_ExpectedBehavior`

```csharp
[TestClass]
public sealed class CalculatorTests
{
    [TestMethod]
    public void Add_TwoPositiveNumbers_ReturnsSum()
    {
        // Arrange
        var calculator = new Calculator();

        // Act
        var result = calculator.Add(2, 3);

        // Assert
        Assert.AreEqual(5, result);
    }
}
```

## Test Lifecycle

- **Prefer constructors over `[TestInitialize]`** - enables `readonly` fields and follows standard C# patterns
- Use `[TestCleanup]` for cleanup that must run even if the test fails
- Combine a constructor with async `[TestInitialize]` when async setup is needed

```csharp
[TestClass]
public sealed class ServiceTests
{
    private readonly MyService _service; // readonly enabled by constructor

    public ServiceTests()
    {
        _service = new MyService();
    }

    [TestInitialize]
    public async Task InitAsync()
    {
        // Use for async initialization only
        await _service.WarmupAsync();
    }

    [TestCleanup]
    public void Cleanup() => _service.Reset();
}
```

### Execution Order

1. **Assembly Initialization** - `[AssemblyInitialize]` (once per test assembly)
2. **Class Initialization** - `[ClassInitialize]` (once per test class)
3. **Test Initialization** (for every test method):
   1. Constructor
   2. Set `TestContext` property
   3. `[TestInitialize]`
4. **Test Execution** - test method runs
5. **Test Cleanup** (for every test method):
   1. `[TestCleanup]`
   2. `DisposeAsync` (if implemented)
   3. `Dispose` (if implemented)
6. **Class Cleanup** - `[ClassCleanup]` (once per test class)
7. **Assembly Cleanup** - `[AssemblyCleanup]` (once per test assembly)

## Modern Assertion APIs

MSTest provides three assertion classes: `Assert`, `StringAssert`, and `CollectionAssert`.

### Assert Class - Core Assertions

```csharp
// Equality
Assert.AreEqual(expected, actual);
Assert.AreNotEqual(notExpected, actual);
Assert.AreSame(expectedObject, actualObject); // Reference equality
Assert.AreNotSame(notExpectedObject, actualObject);

// Null checks
Assert.IsNull(value);
Assert.IsNotNull(value);

// Boolean
Assert.IsTrue(condition);
Assert.IsFalse(condition);

// Fail/Inconclusive
Assert.Fail("Test failed due to...");
Assert.Inconclusive("Test cannot be completed because...");
```

### Exception Testing (Prefer over `[ExpectedException]`)

```csharp
// Assert.Throws - matches TException or derived types
var argEx = Assert.Throws<ArgumentException>(() => Method(null));
Assert.AreEqual("Value cannot be null.", argEx.Message);

// Assert.ThrowsExactly - matches the exact type only
var opEx = Assert.ThrowsExactly<InvalidOperationException>(() => Method());

// Async versions
var httpEx = await Assert.ThrowsAsync<HttpRequestException>(async () => await client.GetAsync(url));
var asyncOpEx = await Assert.ThrowsExactlyAsync<InvalidOperationException>(async () => await Method());
```

### Collection Assertions (Assert class)

```csharp
Assert.Contains(expectedItem, collection);
Assert.DoesNotContain(unexpectedItem, collection);
Assert.ContainsSingle(collection); // exactly one element
Assert.HasCount(5, collection);
Assert.IsEmpty(collection);
Assert.IsNotEmpty(collection);
```

### String Assertions (Assert class)

```csharp
Assert.Contains("expected", actualString);
Assert.StartsWith("prefix", actualString);
Assert.EndsWith("suffix", actualString);
Assert.DoesNotStartWith("prefix", actualString);
Assert.DoesNotEndWith("suffix", actualString);
Assert.MatchesRegex(@"\d{3}-\d{4}", phoneNumber);
Assert.DoesNotMatchRegex(@"\d+", textOnly);
```

### Comparison Assertions

```csharp
Assert.IsGreaterThan(lowerBound, actual);
Assert.IsGreaterThanOrEqualTo(lowerBound, actual);
Assert.IsLessThan(upperBound, actual);
Assert.IsLessThanOrEqualTo(upperBound, actual);
Assert.IsInRange(actual, low, high);
Assert.IsPositive(number);
Assert.IsNegative(number);
```

### Type Assertions

```csharp
// MSTest 3.x - uses out parameter
Assert.IsInstanceOfType<MyClass>(obj, out var typed);
typed.DoSomething();

// MSTest 4.x - returns the typed result directly
var instance = Assert.IsInstanceOfType<MyClass>(obj);
instance.DoSomething();

Assert.IsNotInstanceOfType<WrongType>(obj);
```

### Assert.That (MSTest 4.0+)

```csharp
Assert.That(result.Count > 0); // Auto-captures the expression in the failure message
```

### StringAssert Class

> **Note:** Prefer `Assert` class equivalents when available (e.g., `Assert.Contains("expected", actual)` over `StringAssert.Contains(actual, "expected")`).

```csharp
StringAssert.Contains(actualString, "expected");
StringAssert.StartsWith(actualString, "prefix");
StringAssert.EndsWith(actualString, "suffix");
StringAssert.Matches(actualString, new Regex(@"\d{3}-\d{4}"));
StringAssert.DoesNotMatch(actualString, new Regex(@"\d+"));
```

### CollectionAssert Class

> **Note:** Prefer `Assert` class equivalents when available (e.g., `Assert.Contains`).

```csharp
// Containment
CollectionAssert.Contains(collection, expectedItem);
CollectionAssert.DoesNotContain(collection, unexpectedItem);

// Equality (same elements, same order)
CollectionAssert.AreEqual(expectedCollection, actualCollection);
CollectionAssert.AreNotEqual(unexpectedCollection, actualCollection);

// Equivalence (same elements, any order)
CollectionAssert.AreEquivalent(expectedCollection, actualCollection);
CollectionAssert.AreNotEquivalent(unexpectedCollection, actualCollection);

// Subset checks
CollectionAssert.IsSubsetOf(subset, superset);
CollectionAssert.IsNotSubsetOf(notSubset, collection);

// Element validation
CollectionAssert.AllItemsAreInstancesOfType(collection, typeof(MyClass));
CollectionAssert.AllItemsAreNotNull(collection);
CollectionAssert.AllItemsAreUnique(collection);
```

## Data-Driven Tests

### DataRow

```csharp
[TestMethod]
[DataRow(1, 2, 3)]
[DataRow(0, 0, 0, DisplayName = "Zeros")]
[DataRow(-1, 1, 0, IgnoreMessage = "Known issue #123")] // MSTest 3.8+
public void Add_ReturnsSum(int a, int b, int expected)
{
    Assert.AreEqual(expected, Calculator.Add(a, b));
}
```

### DynamicData

The data source can return any of the following types:

- `IEnumerable<(T1, T2, ...)>` (ValueTuple) - **preferred**, provides type safety (MSTest 3.7+)
- `IEnumerable<Tuple<T1, T2, ...>>` - provides type safety
- `IEnumerable<TestDataRow>` - provides type safety plus control over test metadata (display name, categories)
- `IEnumerable<object[]>` - **least preferred**, no type safety

> **Note:** When creating new test data methods, prefer `ValueTuple` or `TestDataRow` over `IEnumerable<object[]>`. The `object[]` approach provides no compile-time type checking and can lead to runtime errors from type mismatches.

```csharp
[TestMethod]
[DynamicData(nameof(TestData))]
public void DynamicTest(int a, int b, int expected)
{
    Assert.AreEqual(expected, Calculator.Add(a, b));
}

// ValueTuple - preferred (MSTest 3.7+)
public static IEnumerable<(int a, int b, int expected)> TestData =>
[
    (1, 2, 3),
    (0, 0, 0),
];

// TestDataRow - when you need custom display names or metadata
public static IEnumerable<TestDataRow<(int a, int b, int expected)>> TestDataWithMetadata =>
[
    new((1, 2, 3)) { DisplayName = "Positive numbers" },
    new((0, 0, 0)) { DisplayName = "Zeros" },
    new((-1, 1, 0)) { DisplayName = "Mixed signs", IgnoreMessage = "Known issue #123" },
];

// IEnumerable<object[]> - avoid for new code (no type safety)
public static IEnumerable<object[]> LegacyTestData =>
[
    [1, 2, 3],
    [0, 0, 0],
];
```

## TestContext

The `TestContext` class provides test run information, cancellation support, and output methods.
See [TestContext documentation](https://learn.microsoft.com/dotnet/core/testing/unit-testing-mstest-writing-tests-testcontext) for a complete reference.

### Accessing TestContext

```csharp
// Property (MSTest suppresses CS8618 - don't use nullable or = null!)
public TestContext TestContext { get; set; }

// Constructor injection (MSTest 3.6+) - preferred for immutability
[TestClass]
public sealed class MyTests
{
    private readonly TestContext _testContext;

    public MyTests(TestContext testContext)
    {
        _testContext = testContext;
    }
}

// Static methods receive it as a parameter
[ClassInitialize]
public static void ClassInit(TestContext context) { }

// Optional for cleanup methods (MSTest 3.6+)
[ClassCleanup]
public static void ClassCleanup(TestContext context) { }

[AssemblyCleanup]
public static void AssemblyCleanup(TestContext context) { }
```

### Cancellation Token

Always use `TestContext.CancellationToken` for cooperative cancellation with `[Timeout]`:

```csharp
[TestMethod]
[Timeout(5000)]
public async Task LongRunningTest()
{
    await _httpClient.GetAsync(url, TestContext.CancellationToken);
}
```

### Test Run Properties

```csharp
TestContext.TestName            // Current test method name
TestContext.TestDisplayName     // Display name (3.7+)
TestContext.CurrentTestOutcome  // Pass/Fail/InProgress
TestContext.TestData            // Parameterized test data (3.7+, in TestInitialize/Cleanup)
TestContext.TestException       // Exception if the test failed (3.7+, in TestCleanup)
TestContext.DeploymentDirectory // Directory with deployment items
```

### Output and Result Files

```csharp
// Write to test output (useful for debugging)
TestContext.WriteLine("Processing item {0}", itemId);

// Attach files to test results (logs, screenshots)
TestContext.AddResultFile(screenshotPath);

// Store/retrieve data across test methods
TestContext.Properties["SharedKey"] = computedValue;
```

## Advanced Features

### Retry for Flaky Tests (MSTest 3.9+)

```csharp
[TestMethod]
[Retry(3)]
public void FlakyTest() { }
```

### Conditional Execution (MSTest 3.10+)

Skip or run tests based on OS or CI environment:

```csharp
// OS-specific tests
[TestMethod]
[OSCondition(OperatingSystems.Windows)]
public void WindowsOnlyTest() { }

[TestMethod]
[OSCondition(OperatingSystems.Linux | OperatingSystems.MacOS)]
public void UnixOnlyTest() { }

[TestMethod]
[OSCondition(ConditionMode.Exclude, OperatingSystems.Windows)]
public void SkipOnWindowsTest() { }

// CI environment tests
[TestMethod]
[CICondition] // Runs only in CI (default: ConditionMode.Include)
public void CIOnlyTest() { }

[TestMethod]
[CICondition(ConditionMode.Exclude)] // Skips in CI, runs locally
public void LocalOnlyTest() { }
```

### Parallelization

```csharp
// Assembly level
[assembly: Parallelize(Workers = 4, Scope = ExecutionScope.MethodLevel)]

// Disable for a specific class
[TestClass]
[DoNotParallelize]
public sealed class SequentialTests { }
```

### Work Item Traceability (MSTest 3.8+)

Link tests to work items for traceability in test reports:

```csharp
// Azure DevOps work items
[TestMethod]
[WorkItem(12345)] // Links to work item #12345
public void Feature_Scenario_ExpectedBehavior() { }

// Multiple work items
[TestMethod]
[WorkItem(12345)]
[WorkItem(67890)]
public void Feature_CoversMultipleRequirements() { }

// GitHub issues (MSTest 3.8+)
[TestMethod]
[GitHubWorkItem("https://github.com/owner/repo/issues/42")]
public void BugFix_Issue42_IsResolved() { }
```

Work item associations appear in test results and can be used for:

- Tracing test coverage to requirements
- Linking bug fixes to regression tests
- Generating traceability reports in CI/CD pipelines

## Common Mistakes to Avoid

```csharp
// ❌ Wrong argument order
Assert.AreEqual(actual, expected);
// ✅ Correct
Assert.AreEqual(expected, actual);

// ❌ Using ExpectedException (obsolete)
[ExpectedException(typeof(ArgumentException))]
// ✅ Use Assert.Throws
Assert.Throws<ArgumentException>(() => Method());

// ❌ Using LINQ Single() - unclear exception
var item = items.Single();
// ✅ Use ContainsSingle - better failure message
var item = Assert.ContainsSingle(items);

// ❌ Hard cast - unclear exception
var handler = (MyHandler)result;
// ✅ Type assertion - shows actual type on failure
var handler = Assert.IsInstanceOfType<MyHandler>(result);

// ❌ Ignoring cancellation token
await client.GetAsync(url, CancellationToken.None);
// ✅ Flow test cancellation
await client.GetAsync(url, TestContext.CancellationToken);

// ❌ Making TestContext nullable - leads to unnecessary null checks
public TestContext? TestContext { get; set; }
// ❌ Using null! - MSTest already suppresses CS8618 for this property
public TestContext TestContext { get; set; } = null!;
// ✅ Declare without nullable or initializer - MSTest handles the warning
public TestContext TestContext { get; set; }
```

## Test Organization

- Group tests by feature or component
- Use `[TestCategory("Category")]` for filtering
- Use `[TestProperty("Name", "Value")]` for custom metadata (e.g., `[TestProperty("Bug", "12345")]`)
- Use `[Priority(1)]` for critical tests
- Enable relevant MSTest analyzers (e.g., MSTEST0020 for constructor preference)

## Mocking and Isolation

- Use Moq or NSubstitute for mocking dependencies
- Use interfaces to facilitate mocking
- Mock dependencies to isolate units under test
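A minimal sketch of isolating a unit behind an interface. All names here are hypothetical, and a hand-written fake stands in where Moq or NSubstitute would normally generate the test double:

```csharp
using System;

// Hypothetical dependency and unit under test.
public interface IClock
{
    DateTime UtcNow { get; }
}

public sealed class GreetingService
{
    private readonly IClock _clock;

    public GreetingService(IClock clock) => _clock = clock;

    public string Greet() => _clock.UtcNow.Hour < 12 ? "Good morning" : "Good afternoon";
}

// Hand-written fake; a mocking library would generate this for you.
public sealed class FakeClock : IClock
{
    public DateTime UtcNow { get; set; }
}
```

Because the service depends on `IClock` rather than `DateTime.UtcNow` directly, a test can pin the time and assert deterministically.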
72
plugins/csharp-dotnet-development/commands/csharp-nunit.md
Normal file
@@ -0,0 +1,72 @@
---
agent: 'agent'
tools: ['changes', 'search/codebase', 'edit/editFiles', 'problems', 'search']
description: 'Get best practices for NUnit unit testing, including data-driven tests'
---

# NUnit Best Practices

Your goal is to help me write effective unit tests with NUnit, covering both standard and data-driven testing approaches.

## Project Setup

- Use a separate test project with naming convention `[ProjectName].Tests`
- Reference Microsoft.NET.Test.Sdk, NUnit, and NUnit3TestAdapter packages
- Create test classes that match the classes being tested (e.g., `CalculatorTests` for `Calculator`)
- Use .NET SDK test commands: `dotnet test` for running tests

## Test Structure

- Apply `[TestFixture]` attribute to test classes
- Use `[Test]` attribute for test methods
- Follow the Arrange-Act-Assert (AAA) pattern
- Name tests using the pattern `MethodName_Scenario_ExpectedBehavior`
- Use `[SetUp]` and `[TearDown]` for per-test setup and teardown
- Use `[OneTimeSetUp]` and `[OneTimeTearDown]` for per-class setup and teardown
- Use `[SetUpFixture]` for assembly-level setup and teardown

## Standard Tests

- Keep tests focused on a single behavior
- Avoid testing multiple behaviors in one test method
- Use clear assertions that express intent
- Include only the assertions needed to verify the test case
- Make tests independent and idempotent (can run in any order)
- Avoid test interdependencies

## Data-Driven Tests

- Use `[TestCase]` for inline test data
- Use `[TestCaseSource]` for programmatically generated test data
- Use `[Values]` for simple parameter combinations
- Use `[ValueSource]` for property or method-based data sources
- Use `[Random]` for random numeric test values
- Use `[Range]` for sequential numeric test values
- Use `[Combinatorial]` or `[Pairwise]` for combining multiple parameters
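A small sketch combining `[TestCase]` with the constraint model, assuming the NUnit package is referenced; the inline addition stands in for a hypothetical `Calculator.Add`:

```csharp
using NUnit.Framework;

[TestFixture]
public sealed class CalculatorTests
{
    [TestCase(1, 2, 3)]
    [TestCase(0, 0, 0)]
    [TestCase(-1, 1, 0)]
    public void Add_Values_ReturnsSum(int a, int b, int expected)
    {
        var result = a + b; // stand-in for Calculator.Add(a, b)

        // Constraint model (the preferred NUnit style)
        Assert.That(result, Is.EqualTo(expected));
    }
}
```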

## Assertions

- Use `Assert.That` with the constraint model (preferred NUnit style)
- Use constraints like `Is.EqualTo`, `Is.SameAs`, `Contains.Item`
- Use `Assert.AreEqual` for simple value equality (classic style)
- Use `CollectionAssert` for collection comparisons
- Use `StringAssert` for string-specific assertions
- Use `Assert.Throws<T>` or `Assert.ThrowsAsync<T>` to test exceptions
- Use descriptive messages in assertions for clarity on failure

## Mocking and Isolation

- Consider using Moq or NSubstitute alongside NUnit
- Mock dependencies to isolate units under test
- Use interfaces to facilitate mocking
- Consider using a DI container for complex test setups

## Test Organization

- Group tests by feature or component
- Use categories with `[Category("CategoryName")]`
- Use `[Order]` to control test execution order when necessary
- Use `[Author("DeveloperName")]` to indicate ownership
- Use `[Description]` to provide additional test information
- Consider `[Explicit]` for tests that shouldn't run automatically
- Use `[Ignore("Reason")]` to temporarily skip tests
101
plugins/csharp-dotnet-development/commands/csharp-tunit.md
Normal file
@@ -0,0 +1,101 @@
---
agent: 'agent'
tools: ['changes', 'search/codebase', 'edit/editFiles', 'problems', 'search']
description: 'Get best practices for TUnit unit testing, including data-driven tests'
---

# TUnit Best Practices

Your goal is to help me write effective unit tests with TUnit, covering both standard and data-driven testing approaches.

## Project Setup

- Use a separate test project with naming convention `[ProjectName].Tests`
- Reference the TUnit package and TUnit.Assertions for fluent assertions
- Create test classes that match the classes being tested (e.g., `CalculatorTests` for `Calculator`)
- Use .NET SDK test commands: `dotnet test` for running tests
- TUnit requires .NET 8.0 or higher

## Test Structure

- No test class attribute is required (like xUnit; unlike NUnit's `[TestFixture]`)
- Use the `[Test]` attribute for test methods (not `[Fact]` as in xUnit)
- Follow the Arrange-Act-Assert (AAA) pattern
- Name tests using the pattern `MethodName_Scenario_ExpectedBehavior`
- Use lifecycle hooks: `[Before(Test)]` for setup and `[After(Test)]` for teardown
- Use `[Before(Class)]` and `[After(Class)]` for shared context between tests in a class
- Use `[Before(Assembly)]` and `[After(Assembly)]` for shared context across test classes
- TUnit supports advanced lifecycle hooks such as `[Before(TestSession)]` and `[After(TestSession)]`

## Standard Tests

- Keep tests focused on a single behavior
- Avoid testing multiple behaviors in one test method
- Use TUnit's fluent assertion syntax with `await Assert.That()`
- Include only the assertions needed to verify the test case
- Make tests independent and idempotent (can run in any order)
- Avoid test interdependencies (use the `[DependsOn]` attribute if needed)

## Data-Driven Tests

- Use the `[Arguments]` attribute for inline test data (equivalent to xUnit's `[InlineData]`)
- Use `[MethodData]` for method-based test data (equivalent to xUnit's `[MemberData]`)
- Use `[ClassData]` for class-based test data
- Create custom data sources by implementing `ITestDataSource`
- Use meaningful parameter names in data-driven tests
- Multiple `[Arguments]` attributes can be applied to the same test method

## Assertions

- Use `await Assert.That(value).IsEqualTo(expected)` for value equality
- Use `await Assert.That(value).IsSameReferenceAs(expected)` for reference equality
- Use `await Assert.That(value).IsTrue()` or `await Assert.That(value).IsFalse()` for boolean conditions
- Use `await Assert.That(collection).Contains(item)` or `await Assert.That(collection).DoesNotContain(item)` for collections
- Use `await Assert.That(value).Matches(pattern)` for regex pattern matching
- Use `await Assert.That(action).Throws<TException>()` or `await Assert.That(asyncAction).ThrowsAsync<TException>()` to test exceptions
- Chain assertions with the `.And` operator: `await Assert.That(value).IsNotNull().And.IsEqualTo(expected)`
- Use the `.Or` operator for alternative conditions: `await Assert.That(value).IsEqualTo(1).Or.IsEqualTo(2)`
- Use `.Within(tolerance)` for DateTime and numeric comparisons with tolerance
- All assertions are asynchronous and must be awaited
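A sketch of a data-driven TUnit test with an awaited fluent assertion, assuming the TUnit and TUnit.Assertions packages are referenced (exact namespaces vary by TUnit version, and the inline addition stands in for a hypothetical `Calculator.Add`):

```csharp
using System.Threading.Tasks;
using TUnit.Assertions;
using TUnit.Assertions.Extensions;
using TUnit.Core;

public class CalculatorTests
{
    [Test]
    [Arguments(1, 2, 3)]
    [Arguments(0, 0, 0)]
    public async Task Add_Values_ReturnsSum(int a, int b, int expected)
    {
        var result = a + b; // stand-in for Calculator.Add(a, b)

        // TUnit assertions are awaited; chain further checks with .And
        await Assert.That(result).IsEqualTo(expected);
    }
}
```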
|

## Advanced Features

- Use `[Repeat(n)]` to repeat tests multiple times
- Use `[Retry(n)]` for automatic retry on failure
- Use `[ParallelLimit<T>]` to control parallel execution limits
- Use `[Skip("reason")]` to skip tests conditionally
- Use `[DependsOn(nameof(OtherTest))]` to create test dependencies
- Use `[Timeout(milliseconds)]` to set test timeouts
- Create custom attributes by extending TUnit's base attributes
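Several of these attributes typically appear together on flaky or ordered tests; a sketch (test names and the skip reason are illustrative):

```csharp
// A flaky integration test: retried up to 3 times, 5-second timeout,
// and only run after the login test has passed
[Test]
[Retry(3)]
[Timeout(5_000)]
[DependsOn(nameof(Login_Succeeds))]
public async Task Dashboard_Loads_After_Login() { /* ... */ }

// Repeated runs, currently skipped with a reason (placeholder text)
[Test]
[Repeat(10)]
[Skip("Disabled until the underlying race condition is fixed")]
public async Task Known_Flaky_Scenario() { /* ... */ }

[Test]
public async Task Login_Succeeds() { /* ... */ }
```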

## Test Organization

- Group tests by feature or component
- Use `[Category("CategoryName")]` for test categorization
- Use `[DisplayName("Custom Test Name")]` for custom test names
- Consider using `TestContext` for test diagnostics and information
- Use conditional attributes like custom `[WindowsOnly]` for platform-specific tests
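A short sketch of categorized, named tests (the helper method is illustrative and not part of TUnit; `TestContext` usage is omitted since its exact surface is not covered in this guide):

```csharp
public class CheckoutTests
{
    [Test]
    [Category("Integration")]
    [DisplayName("Checkout calculates tax for EU orders")]
    public async Task Checkout_CalculatesTax_ForEuOrders()
    {
        await Assert.That(CalculateTax(100m, "DE")).IsEqualTo(19m);
    }

    // Illustrative helper for the example, not a TUnit API
    private static decimal CalculateTax(decimal net, string country)
        => country == "DE" ? net * 0.19m : 0m;
}
```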

## Performance and Parallel Execution

- TUnit runs tests in parallel by default (unlike xUnit, which requires explicit configuration)
- Use `[NotInParallel]` to disable parallel execution for specific tests
- Use `[ParallelLimit<T>]` with custom limit classes to control concurrency
- Tests within the same class run sequentially by default
- Use `[Repeat(n)]` with `[ParallelLimit<T>]` for load-testing scenarios
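A sketch of a custom limit class combined with `[Repeat]` for a simple load test (the `IParallelLimit` member shown is an assumption based on this guide's description; check the TUnit docs for the exact interface):

```csharp
// Custom parallel limit: at most 4 of these tests run concurrently
public class Max4Limit : IParallelLimit
{
    public int Limit => 4;
}

[Test]
[Repeat(100)]
[ParallelLimit<Max4Limit>]
public async Task Load_Test_Endpoint() { /* ... */ }

// Opt a single test out of parallel execution entirely
[Test]
[NotInParallel]
public async Task Mutates_Shared_State() { /* ... */ }
```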

## Migration from xUnit

- Replace `[Fact]` with `[Test]`
- Replace `[Theory]` with `[Test]` and use `[Arguments]` for data
- Replace `[InlineData]` with `[Arguments]`
- Replace `[MemberData]` with `[MethodData]`
- Replace `Assert.Equal` with `await Assert.That(actual).IsEqualTo(expected)`
- Replace `Assert.True` with `await Assert.That(condition).IsTrue()`
- Replace `Assert.Throws<T>` with `await Assert.That(action).Throws<T>()`
- Replace constructor setup and `IDisposable` teardown with `[Before(Test)]`/`[After(Test)]`
- Replace `IClassFixture<T>` with `[Before(Class)]`/`[After(Class)]`
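The data-driven replacements above, shown side by side (test body is illustrative):

```csharp
// xUnit
[Theory]
[InlineData(2, 2, 4)]
public void Add_Works(int a, int b, int sum)
    => Assert.Equal(sum, a + b);

// TUnit equivalent
[Test]
[Arguments(2, 2, 4)]
public async Task Add_Works(int a, int b, int sum)
    => await Assert.That(a + b).IsEqualTo(sum);
```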

**Why TUnit over xUnit?**

TUnit offers a modern, fast, and flexible testing experience with advanced features not present in xUnit, such as asynchronous assertions, more refined lifecycle hooks, and improved data-driven testing capabilities. TUnit's fluent assertions provide clearer and more expressive test validation, making it especially suitable for complex .NET projects.
plugins/csharp-dotnet-development/commands/csharp-xunit.md (new file)

@@ -0,0 +1,69 @@
---
agent: 'agent'
tools: ['changes', 'search/codebase', 'edit/editFiles', 'problems', 'search']
description: 'Get best practices for XUnit unit testing, including data-driven tests'
---

# XUnit Best Practices

Your goal is to help me write effective unit tests with XUnit, covering both standard and data-driven testing approaches.

## Project Setup

- Use a separate test project with the naming convention `[ProjectName].Tests`
- Reference the `Microsoft.NET.Test.Sdk`, `xunit`, and `xunit.runner.visualstudio` packages
- Create test classes that match the classes being tested (e.g., `CalculatorTests` for `Calculator`)
- Run tests with the .NET SDK command `dotnet test`

## Test Structure

- No test class attributes are required (unlike MSTest/NUnit)
- Use fact-based tests with the `[Fact]` attribute for simple tests
- Follow the Arrange-Act-Assert (AAA) pattern
- Name tests using the pattern `MethodName_Scenario_ExpectedBehavior`
- Use the constructor for setup and `IDisposable.Dispose()` for teardown
- Use `IClassFixture<T>` for shared context between tests in a class
- Use `ICollectionFixture<T>` for shared context between multiple test classes
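A sketch putting the structure conventions together: constructor/`Dispose` for per-test setup and teardown, plus a class fixture for shared context (`Calculator` and the fixture contents are illustrative):

```csharp
public class DatabaseFixture : IDisposable
{
    public DatabaseFixture() { /* expensive shared setup, once per class */ }
    public void Dispose() { /* shared teardown */ }
}

public class CalculatorTests : IClassFixture<DatabaseFixture>, IDisposable
{
    private readonly DatabaseFixture _fixture;

    public CalculatorTests(DatabaseFixture fixture)
    {
        _fixture = fixture;   // shared context injected by xUnit
        // per-test setup runs here (constructor runs before every test)
    }

    [Fact]
    public void Add_TwoPositiveNumbers_ReturnsSum()
    {
        // Arrange
        var left = 2;
        var right = 3;

        // Act
        var result = left + right;

        // Assert
        Assert.Equal(5, result);
    }

    public void Dispose()
    {
        // per-test teardown runs after every test
    }
}
```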

## Standard Tests

- Keep tests focused on a single behavior
- Avoid testing multiple behaviors in one test method
- Use clear assertions that express intent
- Include only the assertions needed to verify the test case
- Make tests independent and idempotent (can run in any order)
- Avoid test interdependencies

## Data-Driven Tests

- Use `[Theory]` combined with data source attributes
- Use `[InlineData]` for inline test data
- Use `[MemberData]` for method-based test data
- Use `[ClassData]` for class-based test data
- Create custom data attributes by implementing `DataAttribute`
- Use meaningful parameter names in data-driven tests
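A sketch of `[InlineData]` and `[MemberData]` in use (class and member names are illustrative):

```csharp
public class MathTests
{
    [Theory]
    [InlineData(1, 2, 3)]
    [InlineData(-4, 4, 0)]
    public void Add_ReturnsExpectedSum(int left, int right, int expectedSum)
        => Assert.Equal(expectedSum, left + right);

    // Member-based data source referenced by name
    public static IEnumerable<object[]> MultiplyCases =>
        new List<object[]>
        {
            new object[] { 2, 3, 6 },
            new object[] { 0, 5, 0 },
        };

    [Theory]
    [MemberData(nameof(MultiplyCases))]
    public void Multiply_ReturnsExpectedProduct(int left, int right, int expectedProduct)
        => Assert.Equal(expectedProduct, left * right);
}
```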

## Assertions

- Use `Assert.Equal` for value equality
- Use `Assert.Same` for reference equality
- Use `Assert.True`/`Assert.False` for boolean conditions
- Use `Assert.Contains`/`Assert.DoesNotContain` for collections
- Use `Assert.Matches`/`Assert.DoesNotMatch` for regex pattern matching
- Use `Assert.Throws<T>` or `await Assert.ThrowsAsync<T>` to test exceptions
- Consider a fluent assertion library (e.g., FluentAssertions) for more readable assertions
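The core assertion helpers in one place (values are illustrative):

```csharp
[Fact]
public void Assertion_Examples()
{
    Assert.Equal(4, 2 + 2);                       // value equality

    var list = new List<int> { 1, 2, 3 };
    Assert.Contains(2, list);                     // collection membership
    Assert.DoesNotContain(9, list);

    Assert.Matches(@"^\d{3}$", "123");            // regex match

    Assert.Throws<DivideByZeroException>(() =>    // expected exception
    {
        var zero = 0;
        _ = 1 / zero;
    });
}
```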

## Mocking and Isolation

- Consider using Moq or NSubstitute alongside XUnit
- Mock dependencies to isolate units under test
- Use interfaces to facilitate mocking
- Consider using a DI container for complex test setups
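A sketch using Moq against an interface (the `IPriceService` interface and `Cart` class are hypothetical stand-ins for the unit under test):

```csharp
public interface IPriceService
{
    decimal GetPrice(string sku);
}

public class CartTests
{
    [Fact]
    public void Total_Uses_PriceService()
    {
        // Arrange: mock the dependency instead of hitting a real service
        var priceService = new Mock<IPriceService>();
        priceService.Setup(p => p.GetPrice("ABC")).Returns(9.99m);

        var cart = new Cart(priceService.Object);   // Cart is illustrative
        cart.Add("ABC");

        // Act + Assert
        Assert.Equal(9.99m, cart.Total());
        priceService.Verify(p => p.GetPrice("ABC"), Times.Once);
    }
}
```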

## Test Organization

- Group tests by feature or component
- Use `[Trait("Category", "CategoryName")]` for categorization
- Use collection fixtures to group tests with shared dependencies
- Consider output helpers (`ITestOutputHelper`) for test diagnostics
- Skip tests conditionally with `Skip = "reason"` in fact/theory attributes
@@ -0,0 +1,84 @@
---
agent: 'agent'
description: 'Ensure .NET/C# code meets best practices for the solution/project.'
---

# .NET/C# Best Practices

Your task is to ensure .NET/C# code in ${selection} meets the best practices specific to this solution/project. This includes:

## Documentation & Structure

- Create comprehensive XML documentation comments for all public classes, interfaces, methods, and properties
- Include parameter and return value descriptions in XML comments
- Follow the established namespace structure: `{Core|Console|App|Service}.{Feature}`

## Design Patterns & Architecture

- Use primary constructor syntax for dependency injection (e.g., `public class MyClass(IDependency dependency)`)
- Implement the Command Handler pattern with generic base classes (e.g., `CommandHandler<TOptions>`)
- Use interface segregation with clear naming conventions (prefix interfaces with `I`)
- Follow the Factory pattern for complex object creation
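A sketch of primary constructor injection plus a generic command-handler base class (the interfaces and type names are illustrative, not from this solution):

```csharp
// Primary constructor injection (C# 12)
public class OrderService(IOrderRepository repository, ILogger<OrderService> logger)
    : IOrderService
{
    public async Task<Order?> GetAsync(int id)
    {
        logger.LogInformation("Fetching order {OrderId}", id);
        return await repository.FindAsync(id);
    }
}

// Generic command-handler base class in the spirit of CommandHandler<TOptions>
public abstract class CommandHandler<TOptions>
{
    // Derived handlers implement the command logic for their options type
    public abstract Task<int> HandleAsync(TOptions options, CancellationToken ct);
}
```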

## Dependency Injection & Services

- Use constructor dependency injection with null checks via `ArgumentNullException`
- Register services with appropriate lifetimes (Singleton, Scoped, Transient)
- Use `Microsoft.Extensions.DependencyInjection` patterns
- Implement service interfaces for testability
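The three lifetimes in a minimal registration sketch (the service interfaces and implementations are illustrative):

```csharp
var services = new ServiceCollection();

services.AddSingleton<IClock, SystemClock>();         // one instance for the app lifetime
services.AddScoped<IOrderService, OrderService>();    // one instance per request/scope
services.AddTransient<IEmailBuilder, EmailBuilder>(); // new instance per resolution

using var provider = services.BuildServiceProvider();
var orders = provider.GetRequiredService<IOrderService>();
```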

## Resource Management & Localization

- Use `ResourceManager` for localized messages and error strings
- Separate `LogMessages` and `ErrorMessages` resource files
- Access resources via `_resourceManager.GetString("MessageKey")`

## Async/Await Patterns

- Use async/await for all I/O operations and long-running tasks
- Return `Task` or `Task<T>` from async methods
- Use `ConfigureAwait(false)` where appropriate
- Handle async exceptions properly
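These points combined in a sketch of library-style async I/O (the `ReportService` class is illustrative):

```csharp
public class ReportService(HttpClient httpClient)
{
    // Return Task<T>; await the I/O; ConfigureAwait(false) in library code
    public async Task<string> DownloadReportAsync(Uri uri, CancellationToken ct)
    {
        try
        {
            using var response = await httpClient
                .GetAsync(uri, ct)
                .ConfigureAwait(false);
            response.EnsureSuccessStatusCode();
            return await response.Content
                .ReadAsStringAsync(ct)
                .ConfigureAwait(false);
        }
        catch (HttpRequestException ex)
        {
            // Wrap the expected failure mode with a descriptive message
            throw new InvalidOperationException($"Report download failed: {uri}", ex);
        }
    }
}
```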

## Testing Standards

- Use the MSTest framework with FluentAssertions for assertions
- Follow the AAA pattern (Arrange, Act, Assert)
- Use Moq for mocking dependencies
- Test both success and failure scenarios
- Include null parameter validation tests

## Configuration & Settings

- Use strongly-typed configuration classes with data annotations
- Implement validation attributes (`Required`, `NotEmptyOrWhitespace`)
- Use `IConfiguration` binding for settings
- Support `appsettings.json` configuration files
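A sketch of a strongly-typed, annotated options class bound from configuration (`NotEmptyOrWhitespace` from the list above is a custom attribute not shown here; the `Storage` section name and `builder` host variable are assumptions):

```csharp
public sealed class StorageOptions
{
    [Required]
    public string ConnectionString { get; set; } = string.Empty;

    [Range(1, 128)]
    public int MaxConnections { get; set; } = 16;
}

// Binding with validation at startup, using the options pattern
builder.Services
    .AddOptions<StorageOptions>()
    .Bind(builder.Configuration.GetSection("Storage"))
    .ValidateDataAnnotations()
    .ValidateOnStart();
```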

## Semantic Kernel & AI Integration

- Use `Microsoft.SemanticKernel` for AI operations
- Implement proper kernel configuration and service registration
- Handle AI model settings (ChatCompletion, Embedding, etc.)
- Use structured output patterns for reliable AI responses

## Error Handling & Logging

- Use structured logging with `Microsoft.Extensions.Logging`
- Include scoped logging with meaningful context
- Throw specific exceptions with descriptive messages
- Use try-catch blocks for expected failure scenarios
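Structured logging with a scope carrying context, in a sketch (the `PaymentProcessor` class is illustrative):

```csharp
public class PaymentProcessor(ILogger<PaymentProcessor> logger)
{
    public void Process(string orderId, decimal amount)
    {
        // The scope attaches OrderId to every log entry written inside it
        using (logger.BeginScope(new Dictionary<string, object> { ["OrderId"] = orderId }))
        {
            // Structured placeholder, not string interpolation
            logger.LogInformation("Processing payment of {Amount}", amount);

            if (amount <= 0)
                throw new ArgumentOutOfRangeException(nameof(amount),
                    "Payment amount must be positive.");
        }
    }
}
```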

## Performance & Security

- Use C# 12+ features and .NET 8 optimizations where applicable
- Implement proper input validation and sanitization
- Use parameterized queries for database operations
- Follow secure coding practices for AI/ML operations

## Code Quality

- Ensure SOLID principles compliance
- Avoid code duplication through base classes and utilities
- Use meaningful names that reflect domain concepts
- Keep methods focused and cohesive
- Implement proper disposal patterns for resources
plugins/csharp-dotnet-development/commands/dotnet-upgrade.md (new file)

@@ -0,0 +1,115 @@
---
name: ".NET Upgrade Analysis Prompts"
description: "Ready-to-use prompts for comprehensive .NET framework upgrade analysis and execution"
---

# Project Discovery & Assessment

- name: "Project Classification Analysis"
  prompt: "Identify all projects in the solution and classify them by type (`.NET Framework`, `.NET Core`, `.NET Standard`). Analyze each `.csproj` for its current `TargetFramework` and SDK usage."

- name: "Dependency Compatibility Review"
  prompt: "Review external and internal dependencies for framework compatibility. Determine the upgrade complexity based on dependency graph depth."

- name: "Legacy Package Detection"
  prompt: "Identify legacy `packages.config` projects needing migration to `PackageReference` format."

# Upgrade Strategy & Sequencing

- name: "Project Upgrade Ordering"
  prompt: "Recommend a project upgrade order from least to most dependent components. Suggest how to isolate class library upgrades before API or Azure Function migrations."

- name: "Incremental Strategy Planning"
  prompt: "Propose an incremental upgrade strategy with rollback checkpoints. Evaluate the use of **Upgrade Assistant** or **manual upgrades** based on project structure."

- name: "Progress Tracking Setup"
  prompt: "Generate an upgrade checklist for tracking build, test, and deployment readiness across all projects."

# Framework Targeting & Code Adjustments

- name: "Target Framework Selection"
  prompt: "Suggest the correct `TargetFramework` for each project (e.g., `net8.0`). Review and update deprecated SDK or build configurations."

- name: "Code Modernization Analysis"
  prompt: "Identify code patterns needing modernization (e.g., `WebHostBuilder` → `HostBuilder`). Suggest replacements for deprecated .NET APIs and third-party libraries."

- name: "Async Pattern Conversion"
  prompt: "Recommend conversion of synchronous calls to async where appropriate for improved performance and scalability."

# NuGet & Dependency Management

- name: "Package Compatibility Analysis"
  prompt: "Analyze outdated or incompatible NuGet packages and suggest compatible versions. Identify third-party libraries that lack .NET 8 support and provide migration paths."

- name: "Shared Dependency Strategy"
  prompt: "Recommend strategies for handling shared dependency upgrades across projects. Evaluate usage of legacy packages and suggest alternatives in Microsoft-supported namespaces."

- name: "Transitive Dependency Review"
  prompt: "Review transitive dependencies and potential version conflicts after upgrade. Suggest resolution strategies for dependency conflicts."

# CI/CD & Build Pipeline Updates

- name: "Pipeline Configuration Analysis"
  prompt: "Analyze YAML build definitions for SDK version pinning and recommend updates. Suggest modifications for `UseDotNet@2` and `NuGetToolInstaller` tasks."

- name: "Build Pipeline Modernization"
  prompt: "Generate updated build pipeline snippets for .NET 8 migration. Recommend validation builds on feature branches before merging to main."

- name: "CI Automation Enhancement"
  prompt: "Identify opportunities to automate test and build verification in CI pipelines. Suggest strategies for continuous integration validation."

# Testing & Validation

- name: "Build Validation Strategy"
  prompt: "Propose validation checks to ensure the upgraded solution builds and runs successfully. Recommend automated test execution for unit and integration suites post-upgrade."

- name: "Service Integration Verification"
  prompt: "Generate validation steps to verify logging, telemetry, and service connectivity. Suggest strategies for verifying backward compatibility and runtime behavior."

- name: "Deployment Readiness Check"
  prompt: "Recommend UAT deployment verification steps before production rollout. Create comprehensive testing scenarios for upgraded components."

# Breaking Change Analysis

- name: "API Deprecation Detection"
  prompt: "Identify deprecated APIs or removed namespaces between target versions. Suggest automated scanning using `.NET Upgrade Assistant` and API Analyzer."

- name: "API Replacement Strategy"
  prompt: "Recommend replacement APIs or libraries for known breaking areas. Review configuration changes such as `Startup.cs` → `Program.cs` refactoring."

- name: "Regression Testing Focus"
  prompt: "Suggest regression testing scenarios focused on upgraded API endpoints or services. Create test plans for critical functionality validation."

# Version Control & Commit Strategy

- name: "Branching Strategy Planning"
  prompt: "Recommend branching strategy for safe upgrade with rollback capability. Generate commit templates for partial and complete project upgrades."

- name: "PR Structure Optimization"
  prompt: "Suggest best practices for creating structured PRs (`Upgrade to .NET [Version]`). Identify tagging strategies for PRs involving breaking changes."

- name: "Code Review Guidelines"
  prompt: "Recommend peer review focus areas (build, test, and dependency validation). Create checklists for effective upgrade reviews."

# Documentation & Communication

- name: "Upgrade Documentation Strategy"
  prompt: "Suggest how to document each project's framework change in the PR. Propose automated release note generation summarizing upgrades and test results."

- name: "Stakeholder Communication"
  prompt: "Recommend communicating version upgrades and migration timelines to consumers. Generate documentation templates for dependency updates and validation results."

- name: "Progress Tracking Systems"
  prompt: "Suggest maintaining an upgrade summary dashboard or markdown checklist. Create templates for tracking upgrade progress across multiple projects."

# Tools & Automation

- name: "Upgrade Tool Selection"
  prompt: "Recommend when and how to use: `.NET Upgrade Assistant`, `dotnet list package --outdated`, `dotnet migrate`, and `graph.json` dependency visualization."

- name: "Analysis Script Generation"
  prompt: "Generate scripts or prompts for analyzing dependency graphs before upgrading. Propose AI-assisted prompts for Copilot to identify upgrade issues automatically."

- name: "Multi-Repository Validation"
  prompt: "Suggest how to validate automation output across multiple repositories. Create standardized validation workflows for enterprise-scale upgrades."
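The tooling prompts above reference a few concrete commands; a sketch of how they are typically invoked (the project path is illustrative, and the .NET SDK plus the `upgrade-assistant` global tool must be installed):

```shell
# List NuGet packages with newer versions available
dotnet list package --outdated

# Include transitive dependencies to spot version conflicts early
dotnet list package --include-transitive

# Install and run the .NET Upgrade Assistant as a global tool
dotnet tool install -g upgrade-assistant
upgrade-assistant upgrade ./src/MyApp/MyApp.csproj
```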

# Final Validation & Delivery

- name: "Final Solution Validation"
  prompt: "Generate validation steps to confirm the final upgraded solution passes all validation checks. Suggest production deployment verification steps post-upgrade."

- name: "Deployment Readiness Confirmation"
  prompt: "Recommend generating final test results and build artifacts. Create a checklist summarizing completion across projects (builds/tests/deployment)."

- name: "Release Documentation"
  prompt: "Generate a release note summarizing framework changes and CI/CD updates. Create comprehensive upgrade summary documentation."

---