Mirror of https://github.com/github/awesome-copilot.git (synced 2026-02-23 11:55:12 +00:00)

Commit: Initial setup of repo with prompts and instructions

prompts/aspnet-minimal-api-openapi.prompt.md (new file, 41 lines)

---
mode: "agent"
tools: ["codebase", "terminalCommand"]
description: "Create ASP.NET Minimal API endpoints with proper OpenAPI documentation"
---

Your goal is to help me create well-structured ASP.NET Minimal API endpoints with correct types and comprehensive OpenAPI/Swagger documentation.

## API Organization

- Group related endpoints using the `MapGroup()` extension method
- Use endpoint filters for cross-cutting concerns
- Structure larger APIs with separate endpoint classes
- Consider a feature-based folder structure for complex APIs

## Request and Response Types

- Define explicit request and response DTOs/models
- Create clear model classes with proper validation attributes
- Use record types for immutable request/response objects
- Use meaningful property names that align with API design standards
- Apply `[Required]` and other validation attributes to enforce constraints
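The request/response guidelines above can be sketched with a small pair of DTOs. The type and property names here are illustrative, not from the original:

```csharp
using System.ComponentModel.DataAnnotations;

// Immutable request DTO as a record, with validation attributes
// enforcing constraints at the model-binding layer.
public record CreateProductRequest(
    [property: Required, StringLength(100)] string Name,
    [property: Range(0.01, 10000)] decimal Price);

// Explicit response DTO so the wire contract stays decoupled
// from internal entity types.
public record ProductResponse(int Id, string Name, decimal Price);
```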
## Type Handling

- Use strongly-typed route parameters with explicit type binding
- Apply proper parameter binding with `[FromBody]`, `[FromRoute]`, and `[FromQuery]`
- Use `Results<T1, T2>` to represent multiple response types
- Return `TypedResults` instead of `Results` for strongly-typed responses
- Leverage modern C# features such as nullable reference types and init-only properties
## OpenAPI / Swagger Documentation

- Add explicit OpenAPI operation details with `.WithOpenApi()`
- Define the operation summary and description
- Document response types with `.Produces<T>(statusCode)`
- Document request bodies with `.Accepts<T>(contentType)`
- Set proper content types for requests and responses
- Include examples using `SwaggerRequestExampleAttribute`
- Document authentication requirements with `.RequireAuthorization()`
- Use XML documentation comments for descriptive API documentation
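A minimal sketch putting the grouping, typing, and documentation guidelines together. The route, service interface (`IProductService`), and DTO names are hypothetical; with a `Results<...>` return type much of the response metadata is inferred, and the `.Produces` calls make it explicit:

```csharp
var group = app.MapGroup("/products").WithTags("Products");

// TypedResults plus Results<T1, T2> makes both outcomes part of the
// endpoint signature, so OpenAPI metadata can be inferred from it.
group.MapGet("/{id:int}",
        async Task<Results<Ok<ProductResponse>, NotFound>> (int id, IProductService service) =>
            await service.FindAsync(id) is { } product
                ? TypedResults.Ok(product)
                : TypedResults.NotFound())
    .WithOpenApi(op =>
    {
        op.Summary = "Get a product by id";
        op.Description = "Returns the product, or 404 if it does not exist.";
        return op;
    })
    .Produces<ProductResponse>(StatusCodes.Status200OK)
    .Produces(StatusCodes.Status404NotFound);
```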

prompts/comment-code-generate-a-tutorial.prompt.md (new file, 21 lines)

Transform this Python script into a polished, beginner-friendly project by refactoring the code, adding clear instructional comments, and generating a complete markdown tutorial.

1. **Refactor the code**

   - Apply standard Python best practices
   - Ensure the code follows the PEP 8 style guide
   - Rename unclear variables and functions if needed for clarity

1. **Add comments throughout the code**

   - Use a beginner-friendly, instructional tone
   - Explain what each part of the code is doing and why it's important
   - Focus on the logic and reasoning, not just syntax
   - Avoid redundant or superficial comments

1. **Generate a tutorial as a `README.md` file**

   Include the following sections:

   - **Project Overview:** What the script does and why it's useful
   - **Setup Instructions:** Prerequisites, dependencies, and how to run the script
   - **How It Works:** A breakdown of the code logic based on the comments
   - **Example Usage:** A code snippet showing how to use it
   - **Sample Output:** (Optional) Include if the script returns visible results
   - Use clear, readable Markdown formatting

prompts/csharp-async.prompt.md (new file, 50 lines)

---
mode: "agent"
tools: ["codebase", "terminalCommand"]
description: "Get best practices for C# async programming"
---

# C# Async Programming Best Practices

Your goal is to help me follow best practices for asynchronous programming in C#.

## Naming Conventions

- Use the `Async` suffix for all async methods
- Match method names with their synchronous counterparts when applicable (e.g., `GetDataAsync()` for `GetData()`)

## Return Types

- Return `Task<T>` when the method returns a value
- Return `Task` when the method doesn't return a value
- Consider `ValueTask<T>` for high-performance scenarios to reduce allocations
- Avoid returning `void` from async methods except for event handlers
## Exception Handling

- Use try/catch blocks around await expressions
- Avoid swallowing exceptions in async methods
- Use `ConfigureAwait(false)` in library code to prevent deadlocks when callers block on the result
- Use `Task.FromException()` to return a faulted task from synchronous `Task`-returning methods instead of throwing before the task is created

## Performance

- Use `Task.WhenAll()` for parallel execution of multiple tasks
- Use `Task.WhenAny()` for implementing timeouts or taking the first completed task
- Avoid unnecessary async/await when simply passing through task results
- Consider cancellation tokens for long-running operations
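As a sketch of the parallel-execution and cancellation points above (the class name and URLs are illustrative):

```csharp
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class CatalogClient
{
    private readonly HttpClient _http = new();

    // Start both requests first, then await them together with
    // Task.WhenAll so they run concurrently instead of sequentially.
    public async Task<(string Products, string Prices)> LoadAsync(CancellationToken ct)
    {
        Task<string> productsTask = _http.GetStringAsync("https://example.com/products", ct);
        Task<string> pricesTask = _http.GetStringAsync("https://example.com/prices", ct);

        await Task.WhenAll(productsTask, pricesTask);
        return (await productsTask, await pricesTask);
    }
}
```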
## Common Pitfalls

- Never use `.Wait()`, `.Result`, or `.GetAwaiter().GetResult()` in async code
- Avoid mixing blocking and async code
- Don't create `async void` methods (except for event handlers)
- Always await Task-returning methods

## Implementation Patterns

- Implement the async command pattern for long-running operations
- Use async streams (`IAsyncEnumerable<T>`) for processing sequences asynchronously
- Consider the task-based asynchronous pattern (TAP) for public APIs
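The async-streams point can be sketched as follows; the producer here is a stand-in for real asynchronous I/O:

```csharp
using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;

public static class ValueSource
{
    // Produce a sequence asynchronously; consumers await each element
    // as it becomes available instead of waiting for the whole list.
    public static async IAsyncEnumerable<int> ReadValuesAsync(
        [EnumeratorCancellation] CancellationToken ct = default)
    {
        for (var i = 0; i < 3; i++)
        {
            await Task.Delay(100, ct); // stand-in for real async I/O
            yield return i;
        }
    }
}

// Consumption:
// await foreach (var value in ValueSource.ReadValuesAsync(ct)) { ... }
```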
When reviewing my C# code, identify these issues and suggest improvements that follow these best practices.

prompts/csharp-mstest.prompt.md (new file, 68 lines)

---
mode: "agent"
tools: ["codebase", "terminalCommand"]
description: "Get best practices for MSTest unit testing, including data-driven tests"
---

# MSTest Best Practices

Your goal is to help me write effective unit tests with MSTest, covering both standard and data-driven testing approaches.

## Project Setup

- Use a separate test project with the naming convention `[ProjectName].Tests`
- Reference the Microsoft.NET.Test.Sdk, MSTest.TestAdapter, and MSTest.TestFramework packages
- Create test classes that match the classes being tested (e.g., `CalculatorTests` for `Calculator`)
- Run tests with the .NET SDK command `dotnet test`

## Test Structure

- Use the `[TestClass]` attribute for test classes
- Use the `[TestMethod]` attribute for test methods
- Follow the Arrange-Act-Assert (AAA) pattern
- Name tests using the pattern `MethodName_Scenario_ExpectedBehavior`
- Use `[TestInitialize]` and `[TestCleanup]` for per-test setup and teardown
- Use `[ClassInitialize]` and `[ClassCleanup]` for per-class setup and teardown
- Use `[AssemblyInitialize]` and `[AssemblyCleanup]` for assembly-level setup and teardown
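The structure guidelines above can be sketched in one small test class. `Calculator` is the conventional example class; its inline definition here is illustrative:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Calculator
{
    public int Add(int a, int b) => a + b;
}

[TestClass]
public class CalculatorTests
{
    private Calculator _calculator = null!;

    [TestInitialize]
    public void Setup() => _calculator = new Calculator();

    [TestMethod]
    public void Add_TwoPositiveNumbers_ReturnsSum()
    {
        // Arrange (shared setup lives in TestInitialize)

        // Act
        var result = _calculator.Add(2, 3);

        // Assert
        Assert.AreEqual(5, result, "Add should return the arithmetic sum.");
    }
}
```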
## Standard Tests

- Keep tests focused on a single behavior
- Avoid testing multiple behaviors in one test method
- Use clear assertions that express intent
- Include only the assertions needed to verify the test case
- Make tests independent and idempotent (able to run in any order)
- Avoid test interdependencies

## Data-Driven Tests

- Use `[DataTestMethod]` combined with data source attributes
- Use `[DataRow]` for inline test data
- Use `[DynamicData]` for programmatically generated test data
- Use `[TestProperty]` to add metadata to tests
- Consider `[DataSource]` for external data sources such as CSV files
- Use meaningful parameter names in data-driven tests
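A minimal data-driven sketch using `[DataTestMethod]` with inline `[DataRow]` data; the `Calculator` class under test is the conventional example and defined here only for illustration:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Calculator
{
    public int Add(int a, int b) => a + b;
}

[TestClass]
public class CalculatorDataDrivenTests
{
    [DataTestMethod]
    [DataRow(2, 3, 5)]
    [DataRow(-1, 1, 0)]
    [DataRow(0, 0, 0)]
    public void Add_VariousInputs_ReturnsExpectedSum(int left, int right, int expectedSum)
    {
        // Meaningful parameter names document what each DataRow value means.
        var result = new Calculator().Add(left, right);
        Assert.AreEqual(expectedSum, result);
    }
}
```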
## Assertions

- Use `Assert.AreEqual` for value equality
- Use `Assert.AreSame` for reference equality
- Use `Assert.IsTrue`/`Assert.IsFalse` for boolean conditions
- Use `CollectionAssert` for collection comparisons
- Use `StringAssert` for string-specific assertions
- Use `Assert.ThrowsException<T>` to test exceptions
- Keep assertions simple and provide a failure message for clarity

## Mocking and Isolation

- Consider using Moq or NSubstitute alongside MSTest
- Mock dependencies to isolate units under test
- Use interfaces to facilitate mocking
- Consider using a DI container for complex test setups

## Test Organization

- Group tests by feature or component
- Use test categories with `[TestCategory("Category")]`
- Use test priorities with `[Priority(1)]` for critical tests
- Use `[Owner("DeveloperName")]` to indicate ownership

prompts/csharp-nunit.prompt.md (new file, 72 lines)

---
mode: "agent"
tools: ["codebase", "terminalCommand"]
description: "Get best practices for NUnit unit testing, including data-driven tests"
---

# NUnit Best Practices

Your goal is to help me write effective unit tests with NUnit, covering both standard and data-driven testing approaches.

## Project Setup

- Use a separate test project with the naming convention `[ProjectName].Tests`
- Reference the Microsoft.NET.Test.Sdk, NUnit, and NUnit3TestAdapter packages
- Create test classes that match the classes being tested (e.g., `CalculatorTests` for `Calculator`)
- Run tests with the .NET SDK command `dotnet test`

## Test Structure

- Apply the `[TestFixture]` attribute to test classes
- Use the `[Test]` attribute for test methods
- Follow the Arrange-Act-Assert (AAA) pattern
- Name tests using the pattern `MethodName_Scenario_ExpectedBehavior`
- Use `[SetUp]` and `[TearDown]` for per-test setup and teardown
- Use `[OneTimeSetUp]` and `[OneTimeTearDown]` for per-class setup and teardown
- Use `[SetUpFixture]` for namespace- or assembly-level setup and teardown
## Standard Tests

- Keep tests focused on a single behavior
- Avoid testing multiple behaviors in one test method
- Use clear assertions that express intent
- Include only the assertions needed to verify the test case
- Make tests independent and idempotent (able to run in any order)
- Avoid test interdependencies

## Data-Driven Tests

- Use `[TestCase]` for inline test data
- Use `[TestCaseSource]` for programmatically generated test data
- Use `[Values]` for simple parameter combinations
- Use `[ValueSource]` for property- or method-based data sources
- Use `[Random]` for random numeric test values
- Use `[Range]` for sequential numeric test values
- Use `[Combinatorial]` or `[Pairwise]` to combine multiple parameters
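A minimal sketch combining `[TestCase]` inline data with the constraint-model assertion style; the `Calculator` class is the conventional example, defined inline only for illustration:

```csharp
using NUnit.Framework;

public class Calculator
{
    public int Add(int a, int b) => a + b;
}

[TestFixture]
public class CalculatorTests
{
    // Inline data with ExpectedResult lets NUnit assert on the return value.
    [TestCase(2, 3, ExpectedResult = 5)]
    [TestCase(-1, 1, ExpectedResult = 0)]
    public int Add_VariousInputs_ReturnsSum(int left, int right)
        => new Calculator().Add(left, right);

    [Test]
    public void Add_TwoNumbers_ReturnsSum()
    {
        var result = new Calculator().Add(2, 2);

        // Constraint model: Assert.That plus an Is.* constraint.
        Assert.That(result, Is.EqualTo(4), "Add should return the arithmetic sum.");
    }
}
```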
## Assertions

- Use `Assert.That` with the constraint model (the preferred NUnit style)
- Use constraints like `Is.EqualTo`, `Is.SameAs`, and `Contains.Item`
- Use `Assert.AreEqual` for simple value equality (classic style)
- Use `CollectionAssert` for collection comparisons
- Use `StringAssert` for string-specific assertions
- Use `Assert.Throws<T>` or `Assert.ThrowsAsync<T>` to test exceptions
- Use descriptive messages in assertions for clarity on failure

## Mocking and Isolation

- Consider using Moq or NSubstitute alongside NUnit
- Mock dependencies to isolate units under test
- Use interfaces to facilitate mocking
- Consider using a DI container for complex test setups

## Test Organization

- Group tests by feature or component
- Use categories with `[Category("CategoryName")]`
- Use `[Order]` to control test execution order when necessary
- Use `[Author("DeveloperName")]` to indicate ownership
- Use `[Description]` to provide additional test information
- Consider `[Explicit]` for tests that shouldn't run automatically
- Use `[Ignore("Reason")]` to temporarily skip tests

prompts/csharp-xunit.prompt.md (new file, 69 lines)

---
mode: "agent"
tools: ["codebase", "terminalCommand"]
description: "Get best practices for XUnit unit testing, including data-driven tests"
---

# XUnit Best Practices

Your goal is to help me write effective unit tests with XUnit, covering both standard and data-driven testing approaches.

## Project Setup

- Use a separate test project with the naming convention `[ProjectName].Tests`
- Reference the Microsoft.NET.Test.Sdk, xunit, and xunit.runner.visualstudio packages
- Create test classes that match the classes being tested (e.g., `CalculatorTests` for `Calculator`)
- Run tests with the .NET SDK command `dotnet test`

## Test Structure

- No test class attributes are required (unlike MSTest/NUnit)
- Use fact-based tests with the `[Fact]` attribute for simple tests
- Follow the Arrange-Act-Assert (AAA) pattern
- Name tests using the pattern `MethodName_Scenario_ExpectedBehavior`
- Use the constructor for setup and `IDisposable.Dispose()` for teardown
- Use `IClassFixture<T>` for shared context between tests in a class
- Use `ICollectionFixture<T>` for shared context between multiple test classes
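The constructor/`Dispose` pattern above can be sketched as follows; `Calculator` is the conventional example class, defined inline only for illustration:

```csharp
using System;
using Xunit;

public class Calculator
{
    public int Add(int a, int b) => a + b;
}

public class CalculatorTests : IDisposable
{
    private readonly Calculator _calculator;

    // xUnit creates a fresh instance of the test class for every test,
    // so the constructor plays the setup role.
    public CalculatorTests() => _calculator = new Calculator();

    // Dispose plays the teardown role.
    public void Dispose()
    {
        // Release any resources acquired in the constructor here.
    }

    [Fact]
    public void Add_TwoPositiveNumbers_ReturnsSum()
    {
        var result = _calculator.Add(2, 3);
        Assert.Equal(5, result);
    }
}
```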
## Standard Tests

- Keep tests focused on a single behavior
- Avoid testing multiple behaviors in one test method
- Use clear assertions that express intent
- Include only the assertions needed to verify the test case
- Make tests independent and idempotent (able to run in any order)
- Avoid test interdependencies

## Data-Driven Tests

- Use `[Theory]` combined with data source attributes
- Use `[InlineData]` for inline test data
- Use `[MemberData]` for method-based test data
- Use `[ClassData]` for class-based test data
- Create custom data attributes by implementing `DataAttribute`
- Use meaningful parameter names in data-driven tests
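A minimal `[Theory]`/`[InlineData]` sketch; as above, the inline `Calculator` definition is illustrative:

```csharp
using Xunit;

public class Calculator
{
    public int Add(int a, int b) => a + b;
}

public class CalculatorTheoryTests
{
    [Theory]
    [InlineData(2, 3, 5)]
    [InlineData(-1, 1, 0)]
    [InlineData(0, 0, 0)]
    public void Add_VariousInputs_ReturnsExpectedSum(int left, int right, int expectedSum)
    {
        // Meaningful parameter names document what each InlineData value means.
        Assert.Equal(expectedSum, new Calculator().Add(left, right));
    }
}
```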
## Assertions

- Use `Assert.Equal` for value equality
- Use `Assert.Same` for reference equality
- Use `Assert.True`/`Assert.False` for boolean conditions
- Use `Assert.Contains`/`Assert.DoesNotContain` for collections
- Use `Assert.Matches`/`Assert.DoesNotMatch` for regex pattern matching
- Use `Assert.Throws<T>` or `await Assert.ThrowsAsync<T>` to test exceptions
- Consider the FluentAssertions library for more readable assertions

## Mocking and Isolation

- Consider using Moq or NSubstitute alongside XUnit
- Mock dependencies to isolate units under test
- Use interfaces to facilitate mocking
- Consider using a DI container for complex test setups

## Test Organization

- Group tests by feature or component
- Use `[Trait("Category", "CategoryName")]` for categorization
- Use collection fixtures to group tests with shared dependencies
- Consider output helpers (`ITestOutputHelper`) for test diagnostics
- Skip tests conditionally with `Skip = "reason"` in fact/theory attributes

prompts/ef-core.prompt.md (new file, 76 lines)

---
mode: "agent"
tools: ["codebase", "terminalCommand"]
description: "Get best practices for Entity Framework Core"
---

# Entity Framework Core Best Practices

Your goal is to help me follow best practices when working with Entity Framework Core.

## Data Context Design

- Keep `DbContext` classes focused and cohesive
- Use constructor injection for configuration options
- Override `OnModelCreating` for fluent API configuration
- Separate entity configurations using `IEntityTypeConfiguration<T>`
- Consider the `DbContextFactory` pattern for console apps or tests

## Entity Design

- Use meaningful primary keys (consider natural vs. surrogate keys)
- Implement proper relationships (one-to-one, one-to-many, many-to-many)
- Use data annotations or the fluent API for constraints and validations
- Implement appropriate navigation properties
- Consider owned entity types for value objects

## Performance

- Use `AsNoTracking()` for read-only queries
- Implement pagination for large result sets with `Skip()` and `Take()`
- Use `Include()` to eager-load related entities when needed
- Consider projection (`Select`) to retrieve only the required fields
- Use compiled queries for frequently executed queries
- Avoid N+1 query problems by properly including related data
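Several of the performance points above combine naturally in one query. This is a sketch under assumed names: `AppDbContext` with a `Products` set, an `IsActive` flag, and a `ProductSummary` record are all hypothetical:

```csharp
using Microsoft.EntityFrameworkCore;

// Read-only, paginated query: no change tracking, projection to a
// narrow DTO, and Skip/Take for pagination.
public async Task<List<ProductSummary>> GetActiveProductsPageAsync(
    AppDbContext db, int pageIndex, int pageSize, CancellationToken ct)
{
    return await db.Products
        .AsNoTracking()                                // read-only: skip change tracking
        .Where(p => p.IsActive)
        .OrderBy(p => p.Name)                          // stable order before paging
        .Select(p => new ProductSummary(p.Id, p.Name)) // fetch only required fields
        .Skip(pageIndex * pageSize)
        .Take(pageSize)
        .ToListAsync(ct);
}
```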
## Migrations

- Create small, focused migrations
- Name migrations descriptively
- Verify migration SQL scripts before applying them to production
- Consider migration bundles for deployment
- Add data seeding through migrations when appropriate

## Querying

- Use `IQueryable` judiciously and understand when queries execute
- Prefer strongly-typed LINQ queries over raw SQL
- Use appropriate query operators (`Where`, `OrderBy`, `GroupBy`)
- Consider database functions for complex operations
- Implement the specification pattern for reusable queries

## Change Tracking & Saving

- Use appropriate change tracking strategies
- Batch your `SaveChanges()` calls
- Implement concurrency control for multi-user scenarios
- Consider using transactions for multiple operations
- Use appropriate `DbContext` lifetimes (scoped for web apps)

## Security

- Avoid SQL injection by using parameterized queries
- Implement appropriate data access permissions
- Be careful with raw SQL queries
- Consider data encryption for sensitive information
- Use migrations to manage database user permissions

## Testing

- Use the in-memory database provider for unit tests
- Create separate testing contexts with SQLite for integration tests
- Mock `DbContext` and `DbSet` for pure unit tests
- Test migrations in isolated environments
- Consider snapshot testing for model changes
When reviewing my EF Core code, identify issues and suggest improvements that follow these best practices.

prompts/gen-specs-as-issues.prompt.md (new file, 160 lines)

# Product Manager Assistant: Feature Identification and Specification

This workflow guides you through a systematic approach to identify missing features, prioritize them, and create detailed specifications for implementation.

## 1. Project Understanding Phase

- Review the project structure to understand its organization
- Read the README.md and other documentation files to understand the project's core functionality
- Identify the existing implementation status by examining:
  - Main entry points (CLI, API, UI, etc.)
  - Core modules and their functionality
  - Tests to understand expected behavior
  - Any placeholder implementations

**Guiding Questions:**

- What is the primary purpose of this project?
- What user problems does it solve?
- What patterns exist in the current implementation?
- Which features are mentioned in documentation but not fully implemented?

## 2. Gap Analysis Phase

- Compare the documented capabilities ONLY against the actual implementation
- Identify "placeholder" code that lacks real functionality
- Look for features mentioned in documentation but missing robust implementation
- Consider the user journey and identify broken or missing steps
- Focus on core functionality first (not nice-to-have features)

**Output Creation:**

- Create a list of potential missing features (5-7 items)
- For each feature, note:
  - Current implementation status
  - References in documentation
  - Impact on user experience if missing

## 3. Prioritization Phase

- Apply a score to each identified gap:

**Scoring Matrix (1-5 scale):**

- User Impact: How many users benefit?
- Strategic Alignment: Does it fit the core mission?
- Implementation Feasibility: How complex is it technically?
- Resource Requirements: How much development effort is needed?
- Risk Level: What are the potential negative impacts?

**Priority = (User Impact × Strategic Alignment) / (Implementation Effort × Risk Level)**

**Output Creation:**

- Present the top 3 highest-priority missing features based on the scoring
- For each, provide:
  - Feature name
  - Current status
  - Impact if not implemented
  - Dependencies on other features
## 4. Specification Development Phase

- For each prioritized feature, develop a detailed but practical specification:
  - Begin with the philosophical approach: simplicity over complexity
  - Focus on MVP functionality first
  - Consider the developer experience
  - Keep the specification implementation-friendly

**For Each Feature Specification:**

1. **Overview & Scope**
   - What problem does it solve?
   - What's included and what's explicitly excluded?

2. **Technical Requirements**
   - Core functionality needed
   - User-facing interfaces (API, UI, CLI, etc.)
   - Integration points with existing code

3. **Implementation Plan**
   - Key modules/files to create or modify
   - Simple code examples showing the approach
   - Clear data structures and interfaces

4. **Acceptance Criteria**
   - How will we know when it's done?
   - What specific functionality must work?
   - What tests should pass?

## 5. GitHub Issue Creation Phase

- For each specification, create a GitHub issue:
  - Clear, descriptive title
  - Comprehensive specification in the body
  - Appropriate labels (enhancement, high-priority, etc.)
  - Explicitly mention MVP philosophy where relevant

**Issue Template Structure:**

# [Feature Name]

## Overview
[Brief description of the feature and its purpose]

## Scope
[What's included and what's explicitly excluded]

## Technical Requirements
[Specific technical needs and constraints]

## Implementation Plan
[Step-by-step approach with simple code examples]

## Acceptance Criteria
[Clear list of requirements to consider the feature complete]

## Priority
[Justification for prioritization]

## Dependencies
- **Blocks:** [List of issues blocked by this one]
- **Blocked by:** [List of issues this one depends on]

## Implementation Size
- **Estimated effort:** [Small/Medium/Large]
- **Sub-issues:** [Links to sub-issues if this is a parent issue]
## 5.5 Work Distribution Optimization

- **Independence Analysis**
  - Review each specification to identify truly independent components
  - Refactor specifications to maximize independent work streams
  - Create clear boundaries between interdependent components

- **Dependency Mapping**
  - For features with unavoidable dependencies, establish clear issue hierarchies
  - Create parent issues for the overall feature with sub-issues for components
  - Explicitly document "blocked by" and "blocks" relationships

- **Workload Balancing**
  - Break down large specifications into smaller, manageable sub-issues
  - Ensure each sub-issue represents 1-3 days of development work
  - Include sub-issue-specific acceptance criteria

**Implementation Guidelines:**

- Use GitHub issue linking syntax to create explicit relationships
- Add labels to indicate dependency status (e.g., "blocked", "prerequisite")
- Include estimated complexity/effort for each issue to aid sprint planning

## 6. Final Review Phase

- Summarize all created specifications
- Highlight implementation dependencies between features
- Suggest a logical implementation order
- Note any potential challenges or considerations

Remember throughout this process:

- Favor simplicity over complexity
- Start with minimal viable implementations that work
- Focus on the developer experience
- Build a foundation that can be extended later
- Consider the open-source community and contribution model

This workflow embodies our approach to specifying and prioritizing features, and should help ensure that software projects evolve in a thoughtful, user-centered way.

prompts/javascript-typescript-jest.prompt.md (new file, 43 lines)

---
description: "Best practices for writing JavaScript/TypeScript tests using Jest, including mocking strategies, test structure, and common patterns."
---

### Test Structure

- Name test files with a `.test.ts` or `.test.js` suffix
- Place test files next to the code they test or in a dedicated `__tests__` directory
- Use descriptive test names that explain the expected behavior
- Use nested `describe` blocks to organize related tests
- Follow the pattern: `describe('Component/Function/Class', () => { it('should do something', () => {}) })`

### Effective Mocking

- Mock external dependencies (APIs, databases, etc.) to isolate your tests
- Use `jest.mock()` for module-level mocks
- Use `jest.spyOn()` for specific function mocks
- Use `mockImplementation()` or `mockReturnValue()` to define mock behavior
- Reset mocks between tests with `jest.resetAllMocks()` in `afterEach`

### Testing Async Code

- Always return promises or use async/await syntax in tests
- Use the `resolves`/`rejects` matchers for promises
- Set appropriate timeouts for slow tests with `jest.setTimeout()`

### Snapshot Testing

- Use snapshot tests for UI components or complex objects that change infrequently
- Keep snapshots small and focused
- Review snapshot changes carefully before committing

### Testing React Components

- Use React Testing Library over Enzyme for testing components
- Test user behavior and component accessibility
- Query elements by accessibility roles, labels, or text content
- Use `userEvent` over `fireEvent` for more realistic user interactions

### Common Jest Matchers

- Basic: `expect(value).toBe(expected)`, `expect(value).toEqual(expected)`
- Truthiness: `expect(value).toBeTruthy()`, `expect(value).toBeFalsy()`
- Numbers: `expect(value).toBeGreaterThan(3)`, `expect(value).toBeLessThanOrEqual(3)`
- Strings: `expect(value).toMatch(/pattern/)`, `expect(value).toContain('substring')`
- Arrays: `expect(array).toContain(item)`, `expect(array).toHaveLength(3)`
- Objects: `expect(object).toHaveProperty('key', value)`
- Exceptions: `expect(fn).toThrow()`, `expect(fn).toThrow(Error)`
- Mock functions: `expect(mockFn).toHaveBeenCalled()`, `expect(mockFn).toHaveBeenCalledWith(arg1, arg2)`

prompts/multi-stage-dockerfile.prompt.md (new file, 47 lines)

---
mode: "agent"
tools: ["codebase"]
description: "Create optimized multi-stage Dockerfiles for any language or framework"
---

Your goal is to help me create efficient multi-stage Dockerfiles that follow best practices, resulting in smaller, more secure container images.

## Multi-Stage Structure

- Use a builder stage for compilation, dependency installation, and other build-time operations
- Use a separate runtime stage that only includes what's needed to run the application
- Copy only the necessary artifacts from the builder stage to the runtime stage
- Use meaningful stage names with the `AS` keyword (e.g., `FROM node:18 AS builder`)
- Place stages in logical order: dependencies → build → test → runtime

## Base Images

- Start with official, minimal base images when possible
- Specify exact version tags to ensure reproducible builds (e.g., `python:3.11-slim`, not just `python`)
- Consider distroless images for runtime stages where appropriate
- Use Alpine-based images for smaller footprints when compatible with your application
- Ensure the runtime image has only the minimal necessary dependencies

## Layer Optimization

- Organize commands to maximize layer caching
- Place commands that change frequently (like code changes) after commands that change less frequently (like dependency installation)
- Use `.dockerignore` to prevent unnecessary files from being included in the build context
- Combine related `RUN` commands with `&&` to reduce layer count
- Consider using `COPY --chown` to set file ownership in one step

## Security Practices

- Avoid running containers as root; use the `USER` instruction to specify a non-root user
- Remove build tools and unnecessary packages from the final image
- Scan the final image for vulnerabilities
- Set restrictive file permissions
- Use multi-stage builds to avoid including build secrets in the final image

## Performance Considerations

- Use build arguments for configuration that might change between environments
- Leverage the build cache efficiently by ordering layers from least to most frequently changing
- Consider parallelization in build steps when possible
- Set appropriate environment variables, such as `NODE_ENV=production`, to optimize runtime behavior
- Add healthchecks appropriate to the application type with the `HEALTHCHECK` instruction