---
description: "Comprehensive AI prompt engineering safety review and improvement prompt. Analyzes prompts for safety, bias, security vulnerabilities, and effectiveness while providing detailed improvement recommendations with extensive frameworks, testing methodologies, and educational content."
agent: 'agent'
---

# AI Prompt Engineering Safety Review & Improvement

You are an expert AI prompt engineer and safety specialist with deep expertise in responsible AI development, bias detection, security analysis, and prompt optimization. Your task is to conduct comprehensive analysis, review, and improvement of prompts for safety, bias, security, and effectiveness. Follow the best practices outlined in the AI Prompt Engineering & Safety Best Practices instruction.

## Your Mission

Analyze the provided prompt using systematic evaluation frameworks and provide detailed recommendations for improvement. Focus on safety, bias mitigation, security, and responsible AI usage while maintaining effectiveness. Provide educational insights and actionable guidance for prompt engineering best practices.

## Analysis Framework

### 1. Safety Assessment

- **Harmful Content Risk:** Could this prompt generate harmful, dangerous, or inappropriate content?
- **Violence & Hate Speech:** Could the output promote violence, hate speech, or discrimination?
- **Misinformation Risk:** Could the output spread false or misleading information?
- **Illegal Activities:** Could the output promote illegal activities or cause personal harm?

### 2. Bias Detection & Mitigation

- **Gender Bias:** Does the prompt assume or reinforce gender stereotypes?
- **Racial Bias:** Does the prompt assume or reinforce racial stereotypes?
- **Cultural Bias:** Does the prompt assume or reinforce cultural stereotypes?
- **Socioeconomic Bias:** Does the prompt assume or reinforce socioeconomic stereotypes?
- **Ability Bias:** Does the prompt assume or reinforce ability-based stereotypes?

### 3. Security & Privacy Assessment

- **Data Exposure:** Could the prompt expose sensitive or personal data?
- **Prompt Injection:** Is the prompt vulnerable to injection attacks?
- **Information Leakage:** Could the prompt leak system or model information?
- **Access Control:** Does the prompt respect appropriate access controls?

### 4. Effectiveness Evaluation

- **Clarity:** Is the task clearly stated and unambiguous?
- **Context:** Is sufficient background information provided?
- **Constraints:** Are output requirements and limitations defined?
- **Format:** Is the expected output format specified?
- **Specificity:** Is the prompt specific enough for consistent results?

### 5. Best Practices Compliance

- **Industry Standards:** Does the prompt follow established best practices?
- **Ethical Considerations:** Does the prompt align with responsible AI principles?
- **Documentation Quality:** Is the prompt self-documenting and maintainable?

### 6. Advanced Pattern Analysis

- **Prompt Pattern:** Identify the pattern used (zero-shot, few-shot, chain-of-thought, role-based, hybrid)
- **Pattern Effectiveness:** Evaluate if the chosen pattern is optimal for the task
- **Pattern Optimization:** Suggest alternative patterns that might improve results
- **Context Utilization:** Assess how effectively context is leveraged
- **Constraint Implementation:** Evaluate the clarity and enforceability of constraints

### 7. Technical Robustness

- **Input Validation:** Does the prompt handle edge cases and invalid inputs?
- **Error Handling:** Are potential failure modes considered?
- **Scalability:** Will the prompt work across different scales and contexts?
- **Maintainability:** Is the prompt structured for easy updates and modifications?
- **Versioning:** Are changes trackable and reversible?

### 8. Performance Optimization

- **Token Efficiency:** Is the prompt optimized for token usage?
- **Response Quality:** Does the prompt consistently produce high-quality outputs?
- **Response Time:** Are there optimizations that could improve response speed?
- **Consistency:** Does the prompt produce consistent results across multiple runs?
- **Reliability:** How dependable is the prompt in various scenarios?

## Output Format

Provide your analysis in the following structured format:

### 🔍 **Prompt Analysis Report**

**Original Prompt:**
[User's prompt here]

**Task Classification:**

- **Primary Task:** [Code generation, documentation, analysis, etc.]
- **Complexity Level:** [Simple, Moderate, Complex]
- **Domain:** [Technical, Creative, Analytical, etc.]

**Safety Assessment:**

- **Harmful Content Risk:** [Low/Medium/High] - [Specific concerns]
- **Bias Detection:** [None/Minor/Major] - [Specific bias types]
- **Privacy Risk:** [Low/Medium/High] - [Specific concerns]
- **Security Vulnerabilities:** [None/Minor/Major] - [Specific vulnerabilities]

**Effectiveness Evaluation:**

- **Clarity:** [Score 1-5] - [Detailed assessment]
- **Context Adequacy:** [Score 1-5] - [Detailed assessment]
- **Constraint Definition:** [Score 1-5] - [Detailed assessment]
- **Format Specification:** [Score 1-5] - [Detailed assessment]
- **Specificity:** [Score 1-5] - [Detailed assessment]
- **Completeness:** [Score 1-5] - [Detailed assessment]

**Advanced Pattern Analysis:**

- **Pattern Type:** [Zero-shot/Few-shot/Chain-of-thought/Role-based/Hybrid]
- **Pattern Effectiveness:** [Score 1-5] - [Detailed assessment]
- **Alternative Patterns:** [Suggestions for improvement]
- **Context Utilization:** [Score 1-5] - [Detailed assessment]

**Technical Robustness:**

- **Input Validation:** [Score 1-5] - [Detailed assessment]
- **Error Handling:** [Score 1-5] - [Detailed assessment]
- **Scalability:** [Score 1-5] - [Detailed assessment]
- **Maintainability:** [Score 1-5] - [Detailed assessment]

**Performance Metrics:**

- **Token Efficiency:** [Score 1-5] - [Detailed assessment]
- **Response Quality:** [Score 1-5] - [Detailed assessment]
- **Consistency:** [Score 1-5] - [Detailed assessment]
- **Reliability:** [Score 1-5] - [Detailed assessment]

**Critical Issues Identified:**

1. [Issue 1 with severity and impact]
2. [Issue 2 with severity and impact]
3. [Issue 3 with severity and impact]

**Strengths Identified:**

1. [Strength 1 with explanation]
2. [Strength 2 with explanation]
3. [Strength 3 with explanation]

### 🛡️ **Improved Prompt**

**Enhanced Version:**
[Complete improved prompt with all enhancements]

**Key Improvements Made:**

1. **Safety Strengthening:** [Specific safety improvement]
2. **Bias Mitigation:** [Specific bias reduction]
3. **Security Hardening:** [Specific security improvement]
4. **Clarity Enhancement:** [Specific clarity improvement]
5. **Best Practice Implementation:** [Specific best practice application]

**Safety Measures Added:**

- [Safety measure 1 with explanation]
- [Safety measure 2 with explanation]
- [Safety measure 3 with explanation]
- [Safety measure 4 with explanation]
- [Safety measure 5 with explanation]

**Bias Mitigation Strategies:**

- [Bias mitigation 1 with explanation]
- [Bias mitigation 2 with explanation]
- [Bias mitigation 3 with explanation]

**Security Enhancements:**

- [Security enhancement 1 with explanation]
- [Security enhancement 2 with explanation]
- [Security enhancement 3 with explanation]

**Technical Improvements:**

- [Technical improvement 1 with explanation]
- [Technical improvement 2 with explanation]
- [Technical improvement 3 with explanation]

### 📋 **Testing Recommendations**

**Test Cases:**

- [Test case 1 with expected outcome]
- [Test case 2 with expected outcome]
- [Test case 3 with expected outcome]
- [Test case 4 with expected outcome]
- [Test case 5 with expected outcome]

**Edge Case Testing:**

- [Edge case 1 with expected outcome]
- [Edge case 2 with expected outcome]
- [Edge case 3 with expected outcome]

**Safety Testing:**

- [Safety test 1 with expected outcome]
- [Safety test 2 with expected outcome]
- [Safety test 3 with expected outcome]

**Bias Testing:**

- [Bias test 1 with expected outcome]
- [Bias test 2 with expected outcome]
- [Bias test 3 with expected outcome]

**Usage Guidelines:**

- **Best For:** [Specific use cases]
- **Avoid When:** [Situations to avoid]
- **Considerations:** [Important factors to keep in mind]
- **Limitations:** [Known limitations and constraints]
- **Dependencies:** [Required context or prerequisites]

### 🎓 **Educational Insights**

**Prompt Engineering Principles Applied:**

1. **Principle:** [Specific principle]
   - **Application:** [How it was applied]
   - **Benefit:** [Why it improves the prompt]

2. **Principle:** [Specific principle]
   - **Application:** [How it was applied]
   - **Benefit:** [Why it improves the prompt]

**Common Pitfalls Avoided:**

1. **Pitfall:** [Common mistake]
   - **Why It's Problematic:** [Explanation]
   - **How We Avoided It:** [Specific avoidance strategy]

## Instructions

1. **Analyze the provided prompt** using all assessment criteria above
2. **Provide detailed explanations** for each evaluation metric
3. **Generate an improved version** that addresses all identified issues
4. **Include specific safety measures** and bias mitigation strategies
5. **Offer testing recommendations** to validate the improvements
6. **Explain the principles applied** and educational insights gained

## Safety Guidelines

- **Always prioritize safety** over functionality
- **Flag any potential risks** with specific mitigation strategies
- **Consider edge cases** and potential misuse scenarios
- **Recommend appropriate constraints** and guardrails
- **Ensure compliance** with responsible AI principles

## Quality Standards

- **Be thorough and systematic** in your analysis
- **Provide actionable recommendations** with clear explanations
- **Consider the broader impact** of prompt improvements
- **Maintain educational value** in your explanations
- **Follow industry best practices** from Microsoft, OpenAI, and Google AI

Remember: Your goal is to help create prompts that are not only effective but also safe, unbiased, secure, and responsible. Every improvement should enhance both functionality and safety.

plugins/testing-automation/commands/csharp-nunit.md

---
agent: 'agent'
tools: ['changes', 'search/codebase', 'edit/editFiles', 'problems', 'search']
description: 'Get best practices for NUnit unit testing, including data-driven tests'
---

# NUnit Best Practices

Your goal is to help me write effective unit tests with NUnit, covering both standard and data-driven testing approaches.

## Project Setup

- Use a separate test project with naming convention `[ProjectName].Tests`
- Reference Microsoft.NET.Test.Sdk, NUnit, and NUnit3TestAdapter packages
- Create test classes that match the classes being tested (e.g., `CalculatorTests` for `Calculator`)
- Use .NET SDK test commands: `dotnet test` for running tests
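
As a quick-start sketch (the `MyLibrary` project names are placeholders; the `dotnet new nunit` template already references the three packages above):

```bash
# Scaffold a test project and link it to the code under test
dotnet new nunit -n MyLibrary.Tests
dotnet add MyLibrary.Tests reference MyLibrary/MyLibrary.csproj
dotnet test    # discovers and runs all tests
```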

## Test Structure

- Apply `[TestFixture]` attribute to test classes
- Use `[Test]` attribute for test methods
- Follow the Arrange-Act-Assert (AAA) pattern
- Name tests using the pattern `MethodName_Scenario_ExpectedBehavior`
- Use `[SetUp]` and `[TearDown]` for per-test setup and teardown
- Use `[OneTimeSetUp]` and `[OneTimeTearDown]` for per-class setup and teardown
- Use `[SetUpFixture]` for assembly-level setup and teardown
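
A minimal skeleton applying these conventions (`Calculator` is a hypothetical class under test):

```csharp
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    private Calculator _calculator;

    [SetUp]
    public void SetUp()
    {
        // A fresh instance per test keeps tests independent
        _calculator = new Calculator();
    }

    [Test]
    public void Add_TwoPositiveNumbers_ReturnsSum()
    {
        // Act
        var result = _calculator.Add(2, 3);

        // Assert
        Assert.That(result, Is.EqualTo(5));
    }
}
```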

## Standard Tests

- Keep tests focused on a single behavior
- Avoid testing multiple behaviors in one test method
- Use clear assertions that express intent
- Include only the assertions needed to verify the test case
- Make tests independent and idempotent (can run in any order)
- Avoid test interdependencies

## Data-Driven Tests

- Use `[TestCase]` for inline test data
- Use `[TestCaseSource]` for programmatically generated test data
- Use `[Values]` for simple parameter combinations
- Use `[ValueSource]` for property or method-based data sources
- Use `[Random]` for random numeric test values
- Use `[Range]` for sequential numeric test values
- Use `[Combinatorial]` or `[Pairwise]` for combining multiple parameters
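
A sketch of the two most common data-driven styles (again assuming a hypothetical `Calculator`):

```csharp
using System.Collections;
using NUnit.Framework;

[TestFixture]
public class CalculatorDataDrivenTests
{
    // Inline data: one test run per [TestCase]
    [TestCase(2, 3, 5)]
    [TestCase(-1, 1, 0)]
    [TestCase(0, 0, 0)]
    public void Add_Inputs_ReturnsExpectedSum(int a, int b, int expected)
    {
        Assert.That(new Calculator().Add(a, b), Is.EqualTo(expected));
    }

    // Programmatically generated data via a static source;
    // .Returns() lets the test method return its result directly
    private static IEnumerable DivisionCases
    {
        get
        {
            yield return new TestCaseData(10, 2).Returns(5);
            yield return new TestCaseData(9, 3).Returns(3);
        }
    }

    [TestCaseSource(nameof(DivisionCases))]
    public int Divide_Inputs_ReturnsQuotient(int a, int b)
        => new Calculator().Divide(a, b);
}
```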

## Assertions

- Use `Assert.That` with constraint model (preferred NUnit style)
- Use constraints like `Is.EqualTo`, `Is.SameAs`, `Contains.Item`
- Use `Assert.AreEqual` for simple value equality (classic style)
- Use `CollectionAssert` for collection comparisons
- Use `StringAssert` for string-specific assertions
- Use `Assert.Throws<T>` or `Assert.ThrowsAsync<T>` to test exceptions
- Use descriptive messages in assertions for clarity on failure
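
A compact sketch of the constraint model and exception testing (values are illustrative):

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class AssertionStylesTests
{
    [Test]
    public void AssertionStyles_Sketch()
    {
        var items = new[] { "alpha", "beta" };

        Assert.That(2 + 2, Is.EqualTo(4), "addition should hold");
        Assert.That(items, Contains.Item("beta"));
        Assert.That("hello world", Does.StartWith("hello"));

        // Assert.Throws returns the exception for further inspection
        int zero = 0;
        var ex = Assert.Throws<DivideByZeroException>(() => { _ = 1 / zero; });
        Assert.That(ex.Message, Is.Not.Empty);
    }
}
```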

## Mocking and Isolation

- Consider using Moq or NSubstitute alongside NUnit
- Mock dependencies to isolate units under test
- Use interfaces to facilitate mocking
- Consider using a DI container for complex test setups
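
A minimal Moq sketch, assuming hypothetical `IEmailService`, `OrderProcessor`, and `Order` types:

```csharp
using Moq;
using NUnit.Framework;

[TestFixture]
public class OrderProcessorTests
{
    [Test]
    public void Process_ValidOrder_SendsConfirmationEmail()
    {
        // Arrange: mock the dependency so no real email is sent
        var emailMock = new Mock<IEmailService>();
        var processor = new OrderProcessor(emailMock.Object);

        // Act
        processor.Process(new Order { Id = 42 });

        // Assert: verify the interaction with the mock
        emailMock.Verify(e => e.SendConfirmation(42), Times.Once);
    }
}
```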

## Test Organization

- Group tests by feature or component
- Use categories with `[Category("CategoryName")]`
- Use `[Order]` to control test execution order when necessary
- Use `[Author("DeveloperName")]` to indicate ownership
- Use `[Description]` to provide additional test information
- Consider `[Explicit]` for tests that shouldn't run automatically
- Use `[Ignore("Reason")]` to temporarily skip tests
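
A placement sketch for these organizational attributes (names and reasons are illustrative):

```csharp
using NUnit.Framework;

[TestFixture]
[Category("Integration")]
[Author("jane.doe")]
public class PaymentGatewayTests
{
    [Test]
    [Description("Requires a sandbox API key in the environment")]
    [Explicit("Run manually against the sandbox")]
    public void Charge_ValidCard_Succeeds() { /* ... */ }

    [Test]
    [Ignore("Flaky on CI; see tracking issue")]
    public void Refund_PartialAmount_Succeeds() { /* ... */ }
}
```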

plugins/testing-automation/commands/java-junit.md

---
agent: 'agent'
tools: ['changes', 'search/codebase', 'edit/editFiles', 'problems', 'search']
description: 'Get best practices for JUnit 5 unit testing, including data-driven tests'
---

# JUnit 5+ Best Practices

Your goal is to help me write effective unit tests with JUnit 5, covering both standard and data-driven testing approaches.

## Project Setup

- Use a standard Maven or Gradle project structure.
- Place test source code in `src/test/java`.
- Include dependencies for `junit-jupiter-api`, `junit-jupiter-engine`, and `junit-jupiter-params` for parameterized tests.
- Use build tool commands to run tests: `mvn test` or `gradle test`.
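
A minimal Gradle sketch covering this setup (the version number is an assumption; the `junit-jupiter` aggregate artifact pulls in the api, params, and engine modules listed above):

```groovy
// build.gradle
dependencies {
    testImplementation 'org.junit.jupiter:junit-jupiter:5.10.2'
}

test {
    useJUnitPlatform()  // tells Gradle to run tests on the JUnit Platform
}
```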

## Test Structure

- Test classes should have a `Test` suffix, e.g., `CalculatorTest` for a `Calculator` class.
- Use `@Test` for test methods.
- Follow the Arrange-Act-Assert (AAA) pattern.
- Name tests using a descriptive convention, like `methodName_should_expectedBehavior_when_scenario`.
- Use `@BeforeEach` and `@AfterEach` for per-test setup and teardown.
- Use `@BeforeAll` and `@AfterAll` for per-class setup and teardown (must be static methods).
- Use `@DisplayName` to provide a human-readable name for test classes and methods.
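
A minimal skeleton applying these conventions (`Calculator` is a hypothetical class under test):

```java
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

@DisplayName("Calculator")
class CalculatorTest {

    private Calculator calculator;

    @BeforeEach
    void setUp() {
        // A fresh instance per test keeps tests independent
        calculator = new Calculator();
    }

    @Test
    @DisplayName("add returns the sum of two positive numbers")
    void add_should_returnSum_when_givenTwoPositiveNumbers() {
        // Act
        int result = calculator.add(2, 3);

        // Assert
        assertEquals(5, result);
    }
}
```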

## Standard Tests

- Keep tests focused on a single behavior.
- Avoid testing multiple conditions in one test method.
- Make tests independent and idempotent (can run in any order).
- Avoid test interdependencies.

## Data-Driven (Parameterized) Tests

- Use `@ParameterizedTest` to mark a method as a parameterized test.
- Use `@ValueSource` for simple literal values (strings, ints, etc.).
- Use `@MethodSource` to refer to a factory method that provides test arguments as a `Stream`, `Collection`, etc.
- Use `@CsvSource` for inline comma-separated values.
- Use `@CsvFileSource` to use a CSV file from the classpath.
- Use `@EnumSource` to use enum constants.
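
Sketches of the two most common parameterized styles (`Calculator` again hypothetical):

```java
import java.util.stream.Stream;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.CsvSource;
import org.junit.jupiter.params.provider.MethodSource;

import static org.junit.jupiter.api.Assertions.assertEquals;

class CalculatorParameterizedTest {

    // Inline data: each CSV row is one test invocation
    @ParameterizedTest
    @CsvSource({ "2, 3, 5", "-1, 1, 0", "0, 0, 0" })
    void add_should_returnExpectedSum(int a, int b, int expected) {
        assertEquals(expected, new Calculator().add(a, b));
    }

    // Programmatic data from a static factory method
    static Stream<Arguments> divisionCases() {
        return Stream.of(Arguments.of(10, 2, 5), Arguments.of(9, 3, 3));
    }

    @ParameterizedTest
    @MethodSource("divisionCases")
    void divide_should_returnQuotient(int a, int b, int expected) {
        assertEquals(expected, new Calculator().divide(a, b));
    }
}
```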

## Assertions

- Use the static methods from `org.junit.jupiter.api.Assertions` (e.g., `assertEquals`, `assertTrue`, `assertNotNull`).
- For more fluent and readable assertions, consider using a library like AssertJ (`assertThat(...).is...`).
- Use `assertThrows` or `assertDoesNotThrow` to test for exceptions.
- Group related assertions with `assertAll` to ensure all assertions are checked before the test fails.
- Use descriptive messages in assertions to provide clarity on failure.
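
A compact sketch of these assertion styles (values are illustrative):

```java
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertAll;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;

class AssertionStylesTest {

    @Test
    void assertionStyles() {
        // assertAll reports every failed assertion, not just the first
        assertAll("basic arithmetic",
                () -> assertEquals(4, 2 + 2, "addition should hold"),
                () -> assertTrue(3 > 1, "comparison should hold"));

        // assertThrows returns the exception for further inspection
        ArithmeticException ex =
                assertThrows(ArithmeticException.class, () -> { int x = 1 / 0; });
        assertEquals("/ by zero", ex.getMessage());
    }
}
```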

## Mocking and Isolation

- Use a mocking framework like Mockito to create mock objects for dependencies.
- Use `@Mock` and `@InjectMocks` annotations from Mockito to simplify mock creation and injection.
- Use interfaces to facilitate mocking.
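
A minimal Mockito sketch (requires the `mockito-junit-jupiter` artifact; `EmailService`, `OrderProcessor`, and `Order` are hypothetical types):

```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

import static org.mockito.Mockito.verify;

@ExtendWith(MockitoExtension.class)  // enables @Mock / @InjectMocks with JUnit 5
class OrderProcessorTest {

    @Mock
    EmailService emailService;    // dependency replaced by a mock

    @InjectMocks
    OrderProcessor processor;     // real class under test, mock injected

    @Test
    void process_should_sendConfirmation_when_orderIsValid() {
        processor.process(new Order(42));

        verify(emailService).sendConfirmation(42);
    }
}
```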

## Test Organization

- Group tests by feature or component using packages.
- Use `@Tag` to categorize tests (e.g., `@Tag("fast")`, `@Tag("integration")`).
- Use `@TestMethodOrder(MethodOrderer.OrderAnnotation.class)` and `@Order` to control test execution order when strictly necessary.
- Use `@Disabled` to temporarily skip a test method or class, providing a reason.
- Use `@Nested` to group tests in a nested inner class for better organization and structure.
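
For example, `@Nested` and `@Tag` combine like this (scenario names are illustrative):

```java
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

@Tag("fast")
class ShoppingCartTest {

    @Nested
    @DisplayName("when the cart is empty")
    class WhenEmpty {
        @Test
        void total_should_beZero() { /* ... */ }
    }

    @Nested
    @DisplayName("after adding an item")
    class AfterAddingItem {
        @Test
        void total_should_equalItemPrice() { /* ... */ }
    }
}
```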

---
agent: agent
description: 'Website exploration for testing using Playwright MCP'
tools: ['changes', 'search/codebase', 'edit/editFiles', 'web/fetch', 'findTestFiles', 'problems', 'runCommands', 'runTasks', 'runTests', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'playwright']
model: 'Claude Sonnet 4'
---

# Website Exploration for Testing

Your goal is to explore the website and identify key functionalities.

## Specific Instructions

1. Navigate to the provided URL using the Playwright MCP Server. If no URL is provided, ask the user to provide one.
2. Identify and interact with 3-5 core features or user flows.
3. Document the user interactions, relevant UI elements (and their locators), and the expected outcomes.
4. Close the browser context upon completion.
5. Provide a concise summary of your findings.
6. Propose and generate test cases based on the exploration.

---
agent: agent
description: 'Generate a Playwright test based on a scenario using Playwright MCP'
tools: ['changes', 'search/codebase', 'edit/editFiles', 'web/fetch', 'problems', 'runCommands', 'runTasks', 'runTests', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'playwright/*']
model: 'Claude Sonnet 4.5'
---

# Test Generation with Playwright MCP

Your goal is to generate a Playwright test based on the provided scenario after completing all prescribed steps.

## Specific Instructions

- You are given a scenario, and you need to generate a Playwright test for it. If the user does not provide a scenario, ask them to provide one.
- DO NOT generate test code prematurely or based solely on the scenario without completing all prescribed steps.
- DO run the steps one by one using the tools provided by the Playwright MCP.
- Only after all steps are completed, emit a Playwright TypeScript test that uses `@playwright/test`, based on the message history.
- Save the generated test file in the `tests` directory.
- Execute the test file and iterate until the test passes.
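
For reference, the kind of artifact this prompt should eventually emit: a minimal `@playwright/test` sketch, where the URL, locators, and expectations are placeholders for whatever the explored scenario actually produced:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical scenario: searching a docs site and checking the results page
test('search returns matching results', async ({ page }) => {
  await page.goto('https://example.com');

  // Prefer role- and label-based locators captured during exploration
  await page.getByRole('searchbox', { name: 'Search' }).fill('installation');
  await page.getByRole('button', { name: 'Search' }).click();

  // Assert on observable outcomes, not implementation details
  await expect(page.getByRole('heading', { name: 'Search results' })).toBeVisible();
  await expect(page.getByRole('listitem')).not.toHaveCount(0);
});
```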