mirror of
https://github.com/github/awesome-copilot.git
synced 2026-04-11 18:55:55 +00:00
Revert "fetch -> web/fetch for everything"
This reverts commit ca790b1716.
@@ -1,7 +1,7 @@
 ---
 agent: 'agent'
 description: 'Add educational comments to the file specified, or prompt asking for file to comment if one is not provided.'
-tools: ['edit/editFiles', 'web/fetch', 'todos']
+tools: ['edit/editFiles', 'fetch', 'todos']
 ---

 # Add Educational Comments
@@ -83,7 +83,7 @@ You are an expert educator and technical writer. You can explain programming top
 - **Educational Level** (`1-3`): Familiarity with the specific language or framework (default `1`).
 - **Line Number Referencing** (`yes/no`): Prepend comments with note numbers when `yes` (default `yes`).
 - **Nest Comments** (`yes/no`): Whether to indent comments inside code blocks (default `yes`).
-- **web/fetch List**: Optional URLs for authoritative references.
+- **Fetch List**: Optional URLs for authoritative references.

 If a configurable element is missing, use the default value. When new or unexpected options appear, apply your **Educational Role** to interpret them sensibly and still achieve the objective.

@@ -97,7 +97,7 @@ If a configurable element is missing, use the default value. When new or unexpec
 - Educational Level = 1
 - Line Number Referencing = yes
 - Nest Comments = yes
-- web/fetch List:
+- Fetch List:
 - <https://peps.python.org/pep-0263/>

 ## Examples

@@ -25,7 +25,7 @@ ${FOCUS_ON_EXTENSIBILITY=true|false} <!-- Emphasize extension points and pattern
 - Package dependencies and import statements
 - Framework-specific patterns and conventions
 - Build and deployment configurations" : "Focus on ${PROJECT_TYPE} specific patterns and practices"}
-
+
 - ${ARCHITECTURE_PATTERN == "Auto-detect" ? "Determine the architectural pattern(s) by analyzing:
 - Folder organization and namespacing
 - Dependency flow and component boundaries
@@ -131,7 +131,7 @@ Document implementation patterns for cross-cutting concerns:
 ### 9. Technology-Specific Architectural Patterns
 ${PROJECT_TYPE == "Auto-detect" ? "For each detected technology stack, document specific architectural patterns:" : `Document ${PROJECT_TYPE}-specific architectural patterns:`}

-${(PROJECT_TYPE == ".NET" || PROJECT_TYPE == "Auto-detect") ?
+${(PROJECT_TYPE == ".NET" || PROJECT_TYPE == "Auto-detect") ?
 "#### .NET Architectural Patterns (if detected)
 - Host and application model implementation
 - Middleware pipeline organization
@@ -140,7 +140,7 @@ ${(PROJECT_TYPE == ".NET" || PROJECT_TYPE == "Auto-detect") ?
 - API implementation patterns (controllers, minimal APIs, etc.)
 - Dependency injection container configuration" : ""}

-${(PROJECT_TYPE == "Java" || PROJECT_TYPE == "Auto-detect") ?
+${(PROJECT_TYPE == "Java" || PROJECT_TYPE == "Auto-detect") ?
 "#### Java Architectural Patterns (if detected)
 - Application container and bootstrap process
 - Dependency injection framework usage (Spring, CDI, etc.)
@@ -149,16 +149,16 @@ ${(PROJECT_TYPE == "Java" || PROJECT_TYPE == "Auto-detect") ?
 - ORM configuration and usage patterns
 - Service implementation patterns" : ""}

-${(PROJECT_TYPE == "React" || PROJECT_TYPE == "Auto-detect") ?
+${(PROJECT_TYPE == "React" || PROJECT_TYPE == "Auto-detect") ?
 "#### React Architectural Patterns (if detected)
 - Component composition and reuse strategies
 - State management architecture
 - Side effect handling patterns
 - Routing and navigation approach
-- Data web/fetching and caching patterns
+- Data fetching and caching patterns
 - Rendering optimization strategies" : ""}

-${(PROJECT_TYPE == "Angular" || PROJECT_TYPE == "Auto-detect") ?
+${(PROJECT_TYPE == "Angular" || PROJECT_TYPE == "Auto-detect") ?
 "#### Angular Architectural Patterns (if detected)
 - Module organization strategy
 - Component hierarchy design
@@ -167,7 +167,7 @@ ${(PROJECT_TYPE == "Angular" || PROJECT_TYPE == "Auto-detect") ?
 - Reactive programming patterns
 - Route guard implementation" : ""}

-${(PROJECT_TYPE == "Python" || PROJECT_TYPE == "Auto-detect") ?
+${(PROJECT_TYPE == "Python" || PROJECT_TYPE == "Auto-detect") ?
 "#### Python Architectural Patterns (if detected)
 - Module organization approach
 - Dependency management strategy
@@ -176,7 +176,7 @@ ${(PROJECT_TYPE == "Python" || PROJECT_TYPE == "Auto-detect") ?
 - Asynchronous programming approach" : ""}

 ### 10. Implementation Patterns
-${INCLUDES_IMPLEMENTATION_PATTERNS ?
+${INCLUDES_IMPLEMENTATION_PATTERNS ?
 "Document concrete implementation patterns for key architectural components:

 - **Interface Design Patterns**:
@@ -225,7 +225,7 @@ ${INCLUDES_IMPLEMENTATION_PATTERNS ?
 - Note cloud service integration patterns

 ### 13. Extension and Evolution Patterns
-${FOCUS_ON_EXTENSIBILITY ?
+${FOCUS_ON_EXTENSIBILITY ?
 "Provide detailed guidance for extending the architecture:

 - **Feature Addition Patterns**:
@@ -246,7 +246,7 @@ ${FOCUS_ON_EXTENSIBILITY ?
 - Anti-corruption layer patterns
 - Service facade implementation" : "Document key extension points in the architecture."}

-${INCLUDES_CODE_EXAMPLES ?
+${INCLUDES_CODE_EXAMPLES ?
 "### 14. Architectural Pattern Examples
 Extract representative code examples that illustrate key architectural patterns:

@@ -267,7 +267,7 @@ Extract representative code examples that illustrate key architectural patterns:

 Include enough context with each example to show the pattern clearly, but keep examples concise and focused on architectural concepts." : ""}

-${INCLUDES_DECISION_RECORDS ?
+${INCLUDES_DECISION_RECORDS ?
 "### 15. Architectural Decision Records
 Document key architectural decisions evident in the codebase:

@@ -1,7 +1,7 @@
 ---
 agent: agent
 description: 'Convert a text-based document to markdown following instructions from prompt, or if a documented option is passed, follow the instructions for that option.'
-tools: ['edit', 'edit/editFiles', 'web/fetch', 'runCommands', 'search', 'search/readFile', 'search/textSearch']
+tools: ['edit', 'edit/editFiles', 'fetch', 'runCommands', 'search', 'search/readFile', 'search/textSearch']
 ---

 # Convert Plaintext Documentation to Markdown
@@ -35,7 +35,7 @@ converted

 This prompt can be used with several parameters and options. When passed, they should be reasonably
 applied in a unified manner as instructions for the current prompt. When putting together instructions
-or a script to make a current conversion, if parameters and options are unclear, use #tool:web/fetch to
+or a script to make a current conversion, if parameters and options are unclear, use #tool:fetch to
 retrieve the URLs in the **Reference** section.

 ```bash
@@ -355,9 +355,9 @@ and options provided

 ### Reference

-- #web/fetch → https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax
-- #web/fetch → https://www.markdownguide.org/extended-syntax/
-- #web/fetch → https://learn.microsoft.com/en-us/azure/devops/project/wiki/markdown-guidance?view=azure-devops
+- #fetch → https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax
+- #fetch → https://www.markdownguide.org/extended-syntax/
+- #fetch → https://learn.microsoft.com/en-us/azure/devops/project/wiki/markdown-guidance?view=azure-devops

 > [!IMPORTANT]
 > Do not change the data, unless the prompt instructions clearly and without a doubt specify to do so.

@@ -197,7 +197,7 @@ A JSON representation showing 5-10 representative documents for the container
 "email": "john@example.com"
 },
 {
-"id": "order_456",
+"id": "order_456",
 "partitionKey": "user_123",
 "type": "order",
 "userId": "user_123",
@@ -254,7 +254,7 @@ A JSON representation showing 5-10 representative documents for the container
 [Explain the overall trade-offs made and optimizations used as well as why - such as the examples below]

 - **Aggregate Design**: Kept Orders and OrderItems together due to 95% access correlation - trades document size for query performance
-- **Denormalization**: Duplicated user name in Order document to avoid cross-partition lookup - trades storage for performance
+- **Denormalization**: Duplicated user name in Order document to avoid cross-partition lookup - trades storage for performance
 - **Normalization**: Kept User as separate document type from Orders due to low access correlation (15%) - optimizes update costs
 - **Indexing Strategy**: Used selective indexing instead of automatic to balance cost vs additional query needs
 - **Multi-Document Containers**: Used multi-document containers for [access_pattern] to enable transactional consistency
@@ -290,7 +290,7 @@ A JSON representation showing 5-10 representative documents for the container
 - ALWAYS update cosmosdb_requirements.md after each user response with new information
 - ALWAYS treat design considerations in modeling file as evolving thoughts, not final decisions
 - ALWAYS consider Multi-Document Containers when entities have 30-70% access correlation
-- ALWAYS consider Hierarchical Partition Keys as alternative to synthetic keys if initial design recommends synthetic keys
+- ALWAYS consider Hierarchical Partition Keys as alternative to synthetic keys if initial design recommends synthetic keys
 - ALWAYS consider data binning for massive scale workloads of uniformed events and batch type writes workloads to optimize size and RU costs
 - **ALWAYS calculate costs accurately** - use realistic document sizes and include all overhead
 - **ALWAYS present final clean comparison** rather than multiple confusing iterations
@@ -343,7 +343,7 @@ In aggregate-oriented design, Azure Cosmos DB NoSQL offers multiple levels of ag
 Multiple entities combined into a single Cosmos DB document. This provides:

 • Atomic updates across all data in the aggregate
-• Single point read retrieval for all data. Make sure to reference the document by id and partition key via API (example `ReadItemAsync<Order>(id: "order0103", partitionKey: new PartitionKey("TimS1234"));` instead of using a query with `SELECT * FROM c WHERE c.id = "order0103" AND c.partitionKey = "TimS1234"` for point reads examples)
+• Single point read retrieval for all data. Make sure to reference the document by id and partition key via API (example `ReadItemAsync<Order>(id: "order0103", partitionKey: new PartitionKey("TimS1234"));` instead of using a query with `SELECT * FROM c WHERE c.id = "order0103" AND c.partitionKey = "TimS1234"` for point reads examples)
 • Subject to 2MB document size limit

 When designing aggregates, consider both levels based on your requirements.
@@ -375,7 +375,7 @@ When designing aggregates, consider both levels based on your requirements.
 • **Cross-partition overhead**: Each physical partition adds ~2.5 RU base cost to cross-partition queries
 • **Massive scale implications**: 100+ physical partitions make cross-partition queries extremely expensive and not scalable.
 • Index overhead: Every indexed property consumes storage and write RUs
-• Update patterns: Frequent updates to indexed properties or full Document replace increase RU costs (and the bigger Document size, bigger the impact of update RU increase)
+• Update patterns: Frequent updates to indexed properties or full Document replace increase RU costs (and the bigger Document size, bigger the impact of update RU increase)

 ## Core Design Philosophy

@@ -439,7 +439,7 @@ One-to-One: Store the related ID in both documents
 ```json
 // Users container
 { "id": "user_123", "partitionKey": "user_123", "profileId": "profile_456" }
-// Profiles container
+// Profiles container
 { "id": "profile_456", "partitionKey": "profile_456", "userId": "user_123" }
 ```

@@ -463,10 +463,10 @@ Frequently accessed attributes: Denormalize sparingly

 ```json
 // Orders document
-{
-"id": "order_789",
-"partitionKey": "user_123",
-"customerId": "user_123",
+{
+"id": "order_789",
+"partitionKey": "user_123",
+"customerId": "user_123",
 "customerName": "John Doe" // Include customer name to avoid lookup
 }
 ```
@@ -493,7 +493,7 @@ When deciding aggregate boundaries, use this decision framework:
 Step 1: Analyze Access Correlation

 • 90% accessed together → Strong single document aggregate candidate
-• 50-90% accessed together → Multi-document container aggregate candidate
+• 50-90% accessed together → Multi-document container aggregate candidate
 • <50% accessed together → Separate aggregates/containers

 Step 2: Check Constraints
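Step 1's correlation thresholds in the hunk above reduce to a small decision helper. A hypothetical sketch for illustration only (the function name and return labels are assumptions, not part of the mirrored prompt file):

```python
def choose_aggregate_design(access_correlation: float) -> str:
    """Map an access-correlation ratio (0.0-1.0) to an aggregate design,
    per the Step 1 thresholds: >=90% -> single document, 50-90% ->
    multi-document container, <50% -> separate aggregates/containers."""
    if access_correlation >= 0.9:
        return "single-document aggregate"
    if access_correlation >= 0.5:
        return "multi-document container aggregate"
    return "separate aggregates/containers"

print(choose_aggregate_design(0.95))  # single-document aggregate
print(choose_aggregate_design(0.15))  # separate aggregates/containers
```

The 90% and 50% boundaries are treated as inclusive here; the source text does not pin down the edge cases.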
@@ -514,8 +514,8 @@ Based on Steps 1 & 2, select:
 Order + OrderItems:

 Access Analysis:
-• web/fetch order without items: 5% (just checking status)
-• web/fetch order with all items: 95% (normal flow)
+• Fetch order without items: 5% (just checking status)
+• Fetch order with all items: 95% (normal flow)
 • Update patterns: Items rarely change independently
 • Combined size: ~50KB average, max 200KB

@@ -587,7 +587,7 @@ Index overhead increases RU costs and storage. It occurs when documents have man
 When making aggregate design decisions:

 • Calculate read cost = frequency × RUs per operation
-• Calculate write cost = frequency × RUs per operation
+• Calculate write cost = frequency × RUs per operation
 • Total cost = Σ(read costs) + Σ(write costs)
 • Choose the design with lower total cost

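The cost formula in the hunk above (total = Σ read costs + Σ write costs, each cost = frequency × RUs per operation) reduces to a one-liner. A hypothetical sketch:

```python
def total_ru_cost(operations):
    """Sum frequency (requests/s) x RU-per-operation over all read and
    write operations, per the formula above."""
    return sum(freq * ru for freq, ru in operations)

# 1000 reads/s at 2 RU each plus 100 writes/s at 10 RU each
print(total_ru_cost([(1000, 2), (100, 10)]))  # 3000
```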
@@ -623,7 +623,7 @@ When facing massive write volumes, **data binning/chunking** can reduce write op
 ```json
 {
 "id": "chunk_001",
-"partitionKey": "account_test_chunk_001",
+"partitionKey": "account_test_chunk_001",
 "chunkId": 1,
 "records": [
 { "recordId": 1, "data": "..." },
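The chunk document shape above implies a simple binning step on the write path. A hypothetical sketch (the helper name and zero-padded chunk-id format are assumptions):

```python
def bin_records(records, chunk_size):
    """Group individual event records into chunk documents of up to
    chunk_size records each, mirroring the chunk document shape above."""
    chunks = []
    for i in range(0, len(records), chunk_size):
        number = i // chunk_size + 1
        chunks.append({
            "id": f"chunk_{number:03d}",
            "chunkId": number,
            "records": records[i:i + chunk_size],
        })
    return chunks

chunks = bin_records([{"recordId": n} for n in range(1, 251)], 100)
print(len(chunks))  # 3
```

Each chunk becomes one write, so 250 raw events cost 3 document writes instead of 250.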
@@ -660,7 +660,7 @@ When multiple entity types are frequently accessed together, group them in the s
 [
 {
 "id": "user_123",
-"partitionKey": "user_123",
+"partitionKey": "user_123",
 "type": "user",
 "name": "John Doe",
 "email": "john@example.com"
@@ -668,7 +668,7 @@ When multiple entity types are frequently accessed together, group them in the s
 {
 "id": "order_456",
 "partitionKey": "user_123",
-"type": "order",
+"type": "order",
 "userId": "user_123",
 "amount": 99.99
 }
@@ -705,7 +705,7 @@ Promoting to Single Document Aggregate
 When multi-document analysis reveals:

 • Access correlation higher than initially thought (>90%)
-• All documents always web/fetched together
+• All documents always fetched together
 • Combined size remains bounded
 • Would benefit from atomic updates

@@ -728,7 +728,7 @@ Example analysis:

 Product + Reviews Aggregate Analysis:
 - Access pattern: View product details (no reviews) - 70%
-- Access pattern: View product with reviews - 30%
+- Access pattern: View product with reviews - 30%
 - Update frequency: Products daily, Reviews hourly
 - Average sizes: Product 5KB, Reviews 200KB total
 - Decision: Multi-document container - low access correlation + size concerns + update mismatch
@@ -741,7 +741,7 @@ Short-circuit denormalization involves duplicating a property from a related ent
 2. The duplicated property is mostly immutable or application can accept stale values
 3. The property is small enough and won't significantly impact RU consumption

-Example: In an e-commerce application, you can duplicate the ProductName from the Product document into each OrderItem document, so that web/fetching order items doesn't require additional queries to retrieve product names.
+Example: In an e-commerce application, you can duplicate the ProductName from the Product document into each OrderItem document, so that fetching order items doesn't require additional queries to retrieve product names.

 ### Identifying relationship

@@ -788,14 +788,14 @@ StudentCourseLessons container:
 "type": "student"
 },
 {
-"id": "course_456",
+"id": "course_456",
 "partitionKey": "student_123",
 "type": "course",
 "courseId": "course_456"
 },
 {
 "id": "lesson_789",
-"partitionKey": "student_123",
+"partitionKey": "student_123",
 "type": "lesson",
 "courseId": "course_456",
 "lessonId": "lesson_789"
@@ -818,7 +818,7 @@ TenantData container:
 ```json
 {
 "id": "record_123",
-"partitionKey": "tenant_456_customer_789",
+"partitionKey": "tenant_456_customer_789",
 "tenantId": "tenant_456",
 "customerId": "customer_789"
 }
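The synthetic partition key in the document above is just a concatenation of the two IDs. A minimal sketch (hypothetical helper, assuming `_` as the separator shown in the example value):

```python
def synthetic_partition_key(tenant_id: str, customer_id: str) -> str:
    """Join tenant and customer IDs into one synthetic partition key,
    matching the "tenant_456_customer_789" value above."""
    return f"{tenant_id}_{customer_id}"

print(synthetic_partition_key("tenant_456", "customer_789"))  # tenant_456_customer_789
```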
@@ -877,20 +877,20 @@ Azure Cosmos DB doesn't enforce unique constraints beyond the id+partitionKey co
 function createUserWithUniqueEmail(userData) {
 var context = getContext();
 var container = context.getCollection();
-
+
 // Check if email already exists
 var query = `SELECT * FROM c WHERE c.email = "${userData.email}"`;
-
+
 var isAccepted = container.queryDocuments(
 container.getSelfLink(),
 query,
 function(err, documents) {
 if (err) throw new Error('Error querying documents: ' + err.message);
-
+
 if (documents.length > 0) {
 throw new Error('Email already exists');
 }
-
+
 // Email is unique, create the user
 var isAccepted = container.createDocument(
 container.getSelfLink(),
@@ -900,11 +900,11 @@ function createUserWithUniqueEmail(userData) {
 context.getResponse().setBody(document);
 }
 );
-
+
 if (!isAccepted) throw new Error('The query was not accepted by the server.');
 }
 );
-
+
 if (!isAccepted) throw new Error('The query was not accepted by the server.');
 }
 ```
@@ -929,7 +929,7 @@ Hierarchical Partition Keys provide natural query boundaries using multiple fiel
 {
 "partitionKey": {
 "version": 2,
-"kind": "MultiHash",
+"kind": "MultiHash",
 "paths": ["/accountId", "/testId", "/chunkId"]
 }
 }
@@ -944,7 +944,7 @@ Hierarchical Partition Keys provide natural query boundaries using multiple fiel
 - Data has natural hierarchy (tenant → user → document)
 - Frequent prefix-based queries
 - Want to eliminate synthetic partition key complexity
-- Apply only for Cosmos NoSQL API
+- Apply only for Cosmos NoSQL API

 **Trade-offs**:
 - Requires dedicated tier (not available on serverless)
@@ -963,7 +963,7 @@ Implementation: Add a shard suffix using hash-based or time-based calculation:
 // Hash-based sharding
 partitionKey = originalKey + "_" + (hash(identifier) % shardCount)

-// Time-based sharding
+// Time-based sharding
 partitionKey = originalKey + "_" + (currentHour % shardCount)
 ```

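The two pseudocode formulas in the hunk above can be made concrete. A sketch under the assumption that any stable hash function will do (SHA-256 here; the original does not specify one):

```python
import hashlib

def hash_shard_key(original_key: str, identifier: str, shard_count: int) -> str:
    # Hash-based sharding: stable hash of the identifier, modulo the shard count
    digest = int(hashlib.sha256(identifier.encode()).hexdigest(), 16)
    return f"{original_key}_{digest % shard_count}"

def time_shard_key(original_key: str, current_hour: int, shard_count: int) -> str:
    # Time-based sharding: current hour, modulo the shard count
    return f"{original_key}_{current_hour % shard_count}"

print(time_shard_key("2024-07-09", 17, 15))  # 2024-07-09_2
```

Note the trade-off: sharding spreads hot-partition writes, but reads that span shards must fan out across all shard_count suffixes.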
@@ -993,7 +993,7 @@ EventLog container (problematic):
 • Result: Limited to 10,000 RU/s regardless of total container throughput

 Sharded solution:
-• Partition Key: date + "_" + shard_id (e.g., "2024-07-09_4")
+• Partition Key: date + "_" + shard_id (e.g., "2024-07-09_4")
 • Shard calculation: shard_id = hash(event_id) % 15
 • Result: Distributes daily events across 15 partitions

@@ -1002,7 +1002,7 @@ Sharded solution:
 When aggregate boundaries conflict with update patterns, prioritize based on RU cost impact:

 Example: Order Processing System
-• Read pattern: Always web/fetch order with all items (1000 RPS)
+• Read pattern: Always fetch order with all items (1000 RPS)
 • Update pattern: Individual item status updates (100 RPS)

 Option 1 - Combined aggregate (single document):
@@ -1010,7 +1010,7 @@ Option 1 - Combined aggregate (single document):
 - Write cost: 100 RPS × 10 RU (rewrite entire order) = 1000 RU/s

 Option 2 - Separate items (multi-document):
-- Read cost: 1000 RPS × 5 RU (query multiple items) = 5000 RU/s
+- Read cost: 1000 RPS × 5 RU (query multiple items) = 5000 RU/s
 - Write cost: 100 RPS × 10 RU (update single item) = 1000 RU/s

 Decision: Option 1 better due to significantly lower read costs despite same write costs
@@ -1029,7 +1029,7 @@ Example: Session tokens with 24-hour expiration
 {
 "id": "sess_abc123",
 "partitionKey": "user_456",
-"userId": "user_456",
+"userId": "user_456",
 "createdAt": "2024-01-01T12:00:00Z",
 "ttl": 86400
 }

@@ -1,7 +1,7 @@
 ---
 agent: 'agent'
 description: 'Create an Architectural Decision Record (ADR) document for AI-optimized decision documentation.'
-tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'web/fetch', 'githubRepo', 'openSimpleBrowser', 'problems', 'runTasks', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
+tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'fetch', 'githubRepo', 'openSimpleBrowser', 'problems', 'runTasks', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
 ---
 # Create Architectural Decision Record

@@ -1,7 +1,7 @@
 ---
 agent: 'agent'
 description: 'Create a formal specification for an existing GitHub Actions CI/CD workflow, optimized for AI consumption and workflow maintenance.'
-tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'web/fetch', 'findTestFiles', 'githubRepo', 'new', 'openSimpleBrowser', 'problems', 'runCommands', 'runInTerminal2', 'runNotebooks', 'runTasks', 'runTests', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI', 'microsoft.docs.mcp', 'github', 'Microsoft Docs']
+tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'fetch', 'findTestFiles', 'githubRepo', 'new', 'openSimpleBrowser', 'problems', 'runCommands', 'runInTerminal2', 'runNotebooks', 'runTasks', 'runTests', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI', 'microsoft.docs.mcp', 'github', 'Microsoft Docs']
 ---
 # Create GitHub Actions Workflow Specification

@@ -45,10 +45,10 @@ graph TD
 B --> C[Job 2]
 C --> D[Job 3]
 D --> E[End]
-
+
 B --> F[Parallel Job]
 F --> D
-
+
 style A fill:#e1f5fe
 style E fill:#e8f5e8
 ```
@@ -259,7 +259,7 @@ graph TD
 subgraph "Build Phase"
 A[Lint] --> B[Test] --> C[Build]
 end
-subgraph "Deploy Phase"
+subgraph "Deploy Phase"
 D[Staging] --> E[Production]
 end
 C --> D

@@ -1,7 +1,7 @@
 ---
 agent: 'agent'
 description: 'Create a new implementation plan file for new features, refactoring existing code or upgrading packages, design, architecture or infrastructure.'
-tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'web/fetch', 'githubRepo', 'openSimpleBrowser', 'problems', 'runTasks', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
+tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'fetch', 'githubRepo', 'openSimpleBrowser', 'problems', 'runTasks', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
 ---
 # Create Implementation Plan

@@ -1,7 +1,7 @@
 ---
 agent: 'agent'
 description: 'Create an llms.txt file from scratch based on repository structure following the llms.txt specification at https://llmstxt.org/'
-tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'web/fetch', 'githubRepo', 'openSimpleBrowser', 'problems', 'runTasks', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
+tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'fetch', 'githubRepo', 'openSimpleBrowser', 'problems', 'runTasks', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
 ---
 # Create LLMs.txt File from Repository Structure

@@ -1,7 +1,7 @@
 ---
 agent: 'agent'
 description: 'Create comprehensive, standardized documentation for object-oriented components following industry best practices and architectural documentation standards.'
-tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'web/fetch', 'githubRepo', 'openSimpleBrowser', 'problems', 'runTasks', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
+tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'fetch', 'githubRepo', 'openSimpleBrowser', 'problems', 'runTasks', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
 ---
 # Generate Standard OO Component Documentation

@@ -1,7 +1,7 @@
 ---
 agent: 'agent'
 description: 'Create a new specification file for the solution, optimized for Generative AI consumption.'
-tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'web/fetch', 'githubRepo', 'openSimpleBrowser', 'problems', 'runTasks', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
+tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'fetch', 'githubRepo', 'openSimpleBrowser', 'problems', 'runTasks', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
 ---
 # Create Specification

@@ -1,7 +1,7 @@
 ---
 agent: 'agent'
 description: 'Create time-boxed technical spike documents for researching and resolving critical development decisions before implementation.'
-tools: ['runCommands', 'runTasks', 'edit', 'search', 'extensions', 'usages', 'vscodeAPI', 'think', 'problems', 'changes', 'testFailure', 'openSimpleBrowser', 'web/fetch', 'githubRepo', 'todos', 'Microsoft Docs', 'search']
+tools: ['runCommands', 'runTasks', 'edit', 'search', 'extensions', 'usages', 'vscodeAPI', 'think', 'problems', 'changes', 'testFailure', 'openSimpleBrowser', 'fetch', 'githubRepo', 'todos', 'Microsoft Docs', 'search']
 ---

 # Create Technical Spike Document
@@ -203,7 +203,7 @@ Use descriptive, kebab-case names that indicate the category and specific unknow

 ### Phase 1: Information Gathering

-1. **Search existing documentation** using search/web/fetch tools
+1. **Search existing documentation** using search/fetch tools
 2. **Analyze codebase** for existing patterns and constraints
 3. **Research external resources** (APIs, libraries, examples)

@@ -222,7 +222,7 @@ Use descriptive, kebab-case names that indicate the category and specific unknow
 ## Tools Usage

 - **search/searchResults:** Research existing solutions and documentation
-- **web/fetch/githubRepo:** Analyze external APIs, libraries, and examples
+- **fetch/githubRepo:** Analyze external APIs, libraries, and examples
 - **codebase:** Understand existing system constraints and patterns
 - **runTasks:** Execute prototypes and validation tests
 - **editFiles:** Update research progress and findings

@@ -1,7 +1,7 @@
 ---
 agent: 'agent'
 description: 'Create a tldr page from documentation URLs and command examples, requiring both URL and command name.'
-tools: ['edit/createFile', 'web/fetch']
+tools: ['edit/createFile', 'fetch']
 ---

 # Create TLDR Page
@@ -25,9 +25,9 @@ clear, example-driven command references.

 * **Command** - The name of the command or tool (e.g., `git`, `nmcli`, `distrobox-create`)
 * **URL** - Link to authoritative upstream documentation
-- If one or more URLs are passed without a preceding `#web/fetch`, apply #tool:web/fetch to the first URL
+- If one or more URLs are passed without a preceding `#fetch`, apply #tool:fetch to the first URL
 - If ${file} is provided in lieu of a URL, and ${file} has a relevant URL to **command**, then use
-the data from the file as if web/fetched from the URL; use the URL extracted from the file when
+the data from the file as if fetched from the URL; use the URL extracted from the file when
 creating the `tldr` page
 - If more than one URL is in the file, prompt for which URL should be used for the `tldr` page

@@ -48,7 +48,7 @@ the command.
 ### Syntax

 ```bash
-/create-tldr-page #web/fetch <URL> <command> [text data] [context file]
+/create-tldr-page #fetch <URL> <command> [text data] [context file]
 ```

 ### Error Handling
@@ -64,7 +64,7 @@ the command.
 **Agent**

 ```text
-I'll web/fetch the URL and analyze the documentation.
+I'll fetch the URL and analyze the documentation.
 From the data extracted, I assume the command is `some-command`. Is this correct? (yes/no)
 ```

@@ -123,7 +123,7 @@ Use this template structure when creating tldr pages:

 ### Reference Examples

-You MAY web/fetch these example tldr pages to understand the proper format and style:
+You MAY fetch these example tldr pages to understand the proper format and style:

 * [git](https://raw.githubusercontent.com/jhauga/tldr/refs/heads/main/pages/common/git.md)
 * [distrobox-create](https://raw.githubusercontent.com/jhauga/tldr/refs/heads/main/pages/linux/distrobox-create.md)
@@ -134,7 +134,7 @@ You MAY web/fetch these example tldr pages to understand the proper format and s
|
||||
**User**
|
||||
|
||||
```bash
|
||||
/create-tldr-page #web/fetch https://git-scm.com/docs/git git
|
||||
/create-tldr-page #fetch https://git-scm.com/docs/git git
|
||||
```
|
||||
|
||||
**Agent**
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
---
|
||||
agent: 'agent'
|
||||
tools: ['edit/editFiles', 'search', 'web/web/fetch']
|
||||
tools: ['edit/editFiles', 'search', 'web/fetch']
|
||||
description: 'Diátaxis Documentation Expert. An expert technical writer specializing in creating high-quality software documentation, guided by the principles and structure of the Diátaxis technical documentation authoring framework.'
|
||||
---
|
||||
|
||||
|
||||
@@ -7,34 +7,34 @@ agent: 'agent'

## Configuration Variables

${PROJECT_TYPE="Auto-detect|.NET|Java|React|Angular|Python|Node.js|Flutter|Other"}
${PROJECT_TYPE="Auto-detect|.NET|Java|React|Angular|Python|Node.js|Flutter|Other"}
<!-- Select primary technology -->

${INCLUDES_MICROSERVICES="Auto-detect|true|false"}
${INCLUDES_MICROSERVICES="Auto-detect|true|false"}
<!-- Is this a microservices architecture? -->

${INCLUDES_FRONTEND="Auto-detect|true|false"}
${INCLUDES_FRONTEND="Auto-detect|true|false"}
<!-- Does project include frontend components? -->

${IS_MONOREPO="Auto-detect|true|false"}
${IS_MONOREPO="Auto-detect|true|false"}
<!-- Is this a monorepo with multiple projects? -->

${VISUALIZATION_STYLE="ASCII|Markdown List|Table"}
${VISUALIZATION_STYLE="ASCII|Markdown List|Table"}
<!-- How to visualize the structure -->

${DEPTH_LEVEL=1-5}
${DEPTH_LEVEL=1-5}
<!-- How many levels of folders to document in detail -->

${INCLUDE_FILE_COUNTS=true|false}
${INCLUDE_FILE_COUNTS=true|false}
<!-- Include file count statistics -->

${INCLUDE_GENERATED_FOLDERS=true|false}
${INCLUDE_GENERATED_FOLDERS=true|false}
<!-- Include auto-generated folders -->

${INCLUDE_FILE_PATTERNS=true|false}
${INCLUDE_FILE_PATTERNS=true|false}
<!-- Document file naming/location patterns -->

${INCLUDE_TEMPLATES=true|false}
${INCLUDE_TEMPLATES=true|false}
<!-- Include file/folder templates for new features -->

## Generated Prompt
@@ -43,7 +43,7 @@ ${INCLUDE_TEMPLATES=true|false}

### Initial Auto-detection Phase

${PROJECT_TYPE == "Auto-detect" ?
${PROJECT_TYPE == "Auto-detect" ?
"Begin by scanning the folder structure for key files that identify the project type:
- Look for solution/project files (.sln, .csproj, .fsproj, .vbproj) to identify .NET projects
- Check for build files (pom.xml, build.gradle, settings.gradle) for Java projects
@@ -51,17 +51,17 @@ ${PROJECT_TYPE == "Auto-detect" ?
- Look for specific framework files (angular.json, react-scripts entries, next.config.js)
- Check for Python project identifiers (requirements.txt, setup.py, pyproject.toml)
- Examine mobile app identifiers (pubspec.yaml, android/ios folders)
- Note all technology signatures found and their versions" :
- Note all technology signatures found and their versions" :
"Focus analysis on ${PROJECT_TYPE} project structure"}

${IS_MONOREPO == "Auto-detect" ?
${IS_MONOREPO == "Auto-detect" ?
"Determine if this is a monorepo by looking for:
- Multiple distinct projects with their own configuration files
- Workspace configuration files (lerna.json, nx.json, turborepo.json, etc.)
- Cross-project references and shared dependency patterns
- Root-level orchestration scripts and configuration" : ""}

${INCLUDES_MICROSERVICES == "Auto-detect" ?
${INCLUDES_MICROSERVICES == "Auto-detect" ?
"Check for microservices architecture indicators:
- Multiple service directories with similar/repeated structures
- Service-specific Dockerfiles or deployment configurations
@@ -70,7 +70,7 @@ ${INCLUDES_MICROSERVICES == "Auto-detect" ?
- API gateway configuration files
- Shared libraries or utilities across services" : ""}

${INCLUDES_FRONTEND == "Auto-detect" ?
${INCLUDES_FRONTEND == "Auto-detect" ?
"Identify frontend components by looking for:
- Web asset directories (wwwroot, public, dist, static)
- UI framework files (components, modules, pages)
@@ -87,40 +87,40 @@ Provide a high-level overview of the ${PROJECT_TYPE == "Auto-detect" ? "detected
- Note any structural patterns that repeat throughout the codebase
- Document the rationale behind the structure where it can be inferred

${IS_MONOREPO == "Auto-detect" ?
"If detected as a monorepo, explain how the monorepo is organized and the relationship between projects." :
${IS_MONOREPO == "Auto-detect" ?
"If detected as a monorepo, explain how the monorepo is organized and the relationship between projects." :
IS_MONOREPO ? "Explain how the monorepo is organized and the relationship between projects." : ""}

${INCLUDES_MICROSERVICES == "Auto-detect" ?
"If microservices are detected, describe how they are structured and organized." :
${INCLUDES_MICROSERVICES == "Auto-detect" ?
"If microservices are detected, describe how they are structured and organized." :
INCLUDES_MICROSERVICES ? "Describe how the microservices are structured and organized." : ""}

### 2. Directory Visualization

${VISUALIZATION_STYLE == "ASCII" ?
${VISUALIZATION_STYLE == "ASCII" ?
"Create an ASCII tree representation of the folder hierarchy to depth level ${DEPTH_LEVEL}." : ""}

${VISUALIZATION_STYLE == "Markdown List" ?
${VISUALIZATION_STYLE == "Markdown List" ?
"Use nested markdown lists to represent the folder hierarchy to depth level ${DEPTH_LEVEL}." : ""}

${VISUALIZATION_STYLE == "Table" ?
${VISUALIZATION_STYLE == "Table" ?
"Create a table with columns for Path, Purpose, Content Types, and Conventions." : ""}

${INCLUDE_GENERATED_FOLDERS ?
"Include all folders including generated ones." :
${INCLUDE_GENERATED_FOLDERS ?
"Include all folders including generated ones." :
"Exclude auto-generated folders like bin/, obj/, node_modules/, etc."}

### 3. Key Directory Analysis

Document each significant directory's purpose, contents, and patterns:

${PROJECT_TYPE == "Auto-detect" ?
${PROJECT_TYPE == "Auto-detect" ?
"For each detected technology, analyze directory structures based on observed usage patterns:" : ""}

${(PROJECT_TYPE == ".NET" || PROJECT_TYPE == "Auto-detect") ?
${(PROJECT_TYPE == ".NET" || PROJECT_TYPE == "Auto-detect") ?
"#### .NET Project Structure (if detected)

- **Solution Organization**:
- **Solution Organization**:
- How projects are grouped and related
- Solution folder organization patterns
- Multi-targeting project patterns
@@ -149,7 +149,7 @@ ${(PROJECT_TYPE == ".NET" || PROJECT_TYPE == "Auto-detect") ?
- Test categories and organization
- Test data and mock locations" : ""}

${(PROJECT_TYPE == "React" || PROJECT_TYPE == "Angular" || PROJECT_TYPE == "Auto-detect") ?
${(PROJECT_TYPE == "React" || PROJECT_TYPE == "Angular" || PROJECT_TYPE == "Auto-detect") ?
"#### UI Project Structure (if detected)

- **Component Organization**:
@@ -170,13 +170,13 @@ ${(PROJECT_TYPE == "React" || PROJECT_TYPE == "Angular" || PROJECT_TYPE == "Auto
- **API Integration**:
- API client organization
- Service layer structure
- Data web/fetching patterns
- Data fetching patterns

- **Asset Management**:
- Static resource organization
- Image/media file structure
- Font and icon organization


- **Style Organization**:
- CSS/SCSS file structure
- Theme organization
@@ -184,36 +184,36 @@ ${(PROJECT_TYPE == "React" || PROJECT_TYPE == "Angular" || PROJECT_TYPE == "Auto

### 4. File Placement Patterns

${INCLUDE_FILE_PATTERNS ?
${INCLUDE_FILE_PATTERNS ?
"Document the patterns that determine where different types of files should be placed:

- **Configuration Files**:
- Locations for different types of configuration
- Environment-specific configuration patterns


- **Model/Entity Definitions**:
- Where domain models are defined
- Data transfer object (DTO) locations
- Schema definition locations


- **Business Logic**:
- Service implementation locations
- Business rule organization
- Utility and helper function placement


- **Interface Definitions**:
- Where interfaces and abstractions are defined
- How interfaces are grouped and organized


- **Test Files**:
- Unit test location patterns
- Integration test placement
- Test utility and mock locations


- **Documentation Files**:
- API documentation placement
- Internal documentation organization
- README file distribution" :
- README file distribution" :
"Document where key file types are located in the project."}

### 5. Naming and Organization Conventions
@@ -223,12 +223,12 @@ Document the naming and organizational conventions observed across the project:
- Case conventions (PascalCase, camelCase, kebab-case)
- Prefix and suffix patterns
- Type indicators in filenames


- **Folder Naming Patterns**:
- Naming conventions for different folder types
- Hierarchical naming patterns
- Grouping and categorization conventions


- **Namespace/Module Patterns**:
- How namespaces/modules map to folder structure
- Import/using statement organization
@@ -252,13 +252,13 @@ Provide guidance for navigating and working with the codebase structure:
- How to extend existing functionality
- Where to place new tests
- Configuration modification locations


- **Dependency Patterns**:
- How dependencies flow between folders
- Import/reference patterns
- Dependency injection registration locations

${INCLUDE_FILE_COUNTS ?
${INCLUDE_FILE_COUNTS ?
"- **Content Statistics**:
- Files per directory analysis
- Code distribution metrics
@@ -271,12 +271,12 @@ Document the build process and output organization:
- Build script locations and purposes
- Build pipeline organization
- Build task definitions


- **Output Structure**:
- Compiled/built output locations
- Output organization patterns
- Distribution package structure


- **Environment-Specific Builds**:
- Development vs. production differences
- Environment configuration strategies
@@ -284,7 +284,7 @@ Document the build process and output organization:

### 8. Technology-Specific Organization

${(PROJECT_TYPE == ".NET" || PROJECT_TYPE == "Auto-detect") ?
${(PROJECT_TYPE == ".NET" || PROJECT_TYPE == "Auto-detect") ?
"#### .NET-Specific Structure Patterns (if detected)

- **Project File Organization**:
@@ -292,53 +292,53 @@ ${(PROJECT_TYPE == ".NET" || PROJECT_TYPE == "Auto-detect") ?
- Target framework configuration
- Property group organization
- Item group patterns


- **Assembly Organization**:
- Assembly naming patterns
- Multi-assembly architecture
- Assembly reference patterns


- **Resource Organization**:
- Embedded resource patterns
- Localization file structure
- Static web asset organization


- **Package Management**:
- NuGet configuration locations
- Package reference organization
- Package version management" : ""}

${(PROJECT_TYPE == "Java" || PROJECT_TYPE == "Auto-detect") ?
${(PROJECT_TYPE == "Java" || PROJECT_TYPE == "Auto-detect") ?
"#### Java-Specific Structure Patterns (if detected)

- **Package Hierarchy**:
- Package naming and nesting conventions
- Domain vs. technical packages
- Visibility and access patterns


- **Build Tool Organization**:
- Maven/Gradle structure patterns
- Module organization
- Plugin configuration patterns


- **Resource Organization**:
- Resource folder structures
- Environment-specific resources
- Properties file organization" : ""}

${(PROJECT_TYPE == "Node.js" || PROJECT_TYPE == "Auto-detect") ?
${(PROJECT_TYPE == "Node.js" || PROJECT_TYPE == "Auto-detect") ?
"#### Node.js-Specific Structure Patterns (if detected)

- **Module Organization**:
- CommonJS vs. ESM organization
- Internal module patterns
- Third-party dependency management


- **Script Organization**:
- npm/yarn script definition patterns
- Utility script locations
- Development tool scripts


- **Configuration Management**:
- Configuration file locations
- Environment variable management
@@ -351,18 +351,18 @@ Document how the project structure is designed to be extended:
- How to add new modules/features while maintaining conventions
- Plugin/extension folder patterns
- Customization directory structures


- **Scalability Patterns**:
- How the structure scales for larger features
- Approach for breaking down large modules
- Code splitting strategies


- **Refactoring Patterns**:
- Common refactoring approaches observed
- How structural changes are managed
- Incremental reorganization patterns

${INCLUDE_TEMPLATES ?
${INCLUDE_TEMPLATES ?
"### 10. Structure Templates

Provide templates for creating new components that follow project conventions:
@@ -371,17 +371,17 @@ Provide templates for creating new components that follow project conventions:
- Folder structure for adding a complete feature
- Required file types and their locations
- Naming patterns to follow


- **New Component Template**:
- Directory structure for a typical component
- Essential files to include
- Integration points with existing structure


- **New Service Template**:
- Structure for adding a new service
- Interface and implementation placement
- Configuration and registration patterns


- **New Test Structure**:
- Folder structure for test projects/files
- Test file organization templates
@@ -395,7 +395,7 @@ Document how the project structure is maintained and enforced:
- Tools/scripts that enforce structure
- Build checks for structural compliance
- Linting rules related to structure


- **Documentation Practices**:
- How structural changes are documented
- Where architectural decisions are recorded
@@ -1,7 +1,7 @@
---
agent: 'agent'
model: Claude Sonnet 4
tools: ['edit', 'githubRepo', 'changes', 'problems', 'search', 'runCommands', 'web/fetch']
tools: ['edit', 'githubRepo', 'changes', 'problems', 'search', 'runCommands', 'fetch']
description: 'Set up complete GitHub Copilot configuration for a new project based on technology stack'
---

@@ -58,7 +58,7 @@ Create Coding Agent workflow file:
- `copilot-setup-steps.yml` - GitHub Actions workflow for Coding Agent environment setup

**CRITICAL**: The workflow MUST follow this exact structure:
- Job name MUST be `copilot-setup-steps`
- Job name MUST be `copilot-setup-steps`
- Include proper triggers (workflow_dispatch, push, pull_request on the workflow file)
- Set appropriate permissions (minimum required)
- Customize steps based on the technology stack provided
@@ -67,9 +67,9 @@ Create Coding Agent workflow file:

For each file, follow these principles:

**MANDATORY FIRST STEP**: Always use the web/fetch tool to research existing patterns before creating any content:
1. **web/fetch from awesome-copilot collections**: https://github.com/github/awesome-copilot/blob/main/docs/README.collections.md
2. **web/fetch specific instruction files**: https://raw.githubusercontent.com/github/awesome-copilot/main/instructions/[relevant-file].instructions.md
**MANDATORY FIRST STEP**: Always use the fetch tool to research existing patterns before creating any content:
1. **Fetch from awesome-copilot collections**: https://github.com/github/awesome-copilot/blob/main/docs/README.collections.md
2. **Fetch specific instruction files**: https://raw.githubusercontent.com/github/awesome-copilot/main/instructions/[relevant-file].instructions.md
3. **Check for existing patterns** that match the technology stack

**Primary Approach**: Reference and adapt existing instructions from awesome-copilot repository:
@@ -127,7 +127,7 @@ description: "Java Spring Boot development standards"
- ✅ **"Use TypeScript strict mode for better type safety"**
- ✅ **"Follow the repository's established error handling patterns"**

**Research Strategy with web/fetch tool:**
**Research Strategy with fetch tool:**
1. **Check awesome-copilot first** - Always start here for ALL file types
2. **Look for exact tech stack matches** (e.g., React, Node.js, Spring Boot)
3. **Look for general matches** (e.g., frontend chatmodes, testing prompts, review modes)
@@ -135,15 +135,15 @@ description: "Java Spring Boot development standards"
5. **Adapt community examples** to project needs
6. **Only create custom content** if nothing relevant exists

**web/fetch these awesome-copilot directories:**
**Fetch these awesome-copilot directories:**
- **Instructions**: https://github.com/github/awesome-copilot/tree/main/instructions
- **Prompts**: https://github.com/github/awesome-copilot/tree/main/prompts
- **Prompts**: https://github.com/github/awesome-copilot/tree/main/prompts
- **Chat Modes**: https://github.com/github/awesome-copilot/tree/main/chatmodes
- **Collections**: https://github.com/github/awesome-copilot/blob/main/docs/README.collections.md

**Awesome-Copilot Collections to Check:**
- **Frontend Web Development**: React, Angular, Vue, TypeScript, CSS frameworks
- **C# .NET Development**: Testing, documentation, and best practices
- **C# .NET Development**: Testing, documentation, and best practices
- **Java Development**: Spring Boot, Quarkus, testing, documentation
- **Database Development**: PostgreSQL, SQL Server, and general database best practices
- **Azure Development**: Infrastructure as Code, serverless functions
@@ -237,7 +237,7 @@ Requirements for the form:
```yaml
---
description: Generate an implementation plan for new features or refactoring existing code.
tools: ['codebase', 'web/fetch', 'findTestFiles', 'githubRepo', 'search', 'usages']
tools: ['codebase', 'fetch', 'findTestFiles', 'githubRepo', 'search', 'usages']
model: Claude Sonnet 4
---
# Planning mode instructions

@@ -44,7 +44,7 @@ Your goal is to help me write high-quality Spring Boot applications by following

- **Spring Data JPA:** Use Spring Data JPA repositories by extending `JpaRepository` or `CrudRepository` for standard database operations.
- **Custom Queries:** For complex queries, use `@Query` or the JPA Criteria API.
- **Projections:** Use DTO projections to web/fetch only the necessary data from the database.
- **Projections:** Use DTO projections to fetch only the necessary data from the database.

## Logging


@@ -146,7 +146,7 @@ The MCP server must provide:

### Tool Selection
When importing from MCP:
1. web/fetch available tools from server
1. Fetch available tools from server
2. Select specific tools to include (for security/simplicity)
3. Tool definitions are auto-generated in ai-plugin.json

@@ -299,7 +299,7 @@ Then generate:
- Ensure mcp.json points to correct server
- Verify tools were selected during import
- Check ai-plugin.json has correct function definitions
- Re-web/fetch actions from MCP if server changed
- Re-fetch actions from MCP if server changed

### Agent Not Understanding Queries
- Review instructions in declarativeAgent.json
@@ -307,4 +307,4 @@ Then generate:
- Verify response_semantics extract correct data
- Test with more specific queries

````
````
@@ -3,7 +3,7 @@ description: "Analyze chatmode or prompt files and recommend optimal AI models b
agent: "agent"
tools:
- "search/codebase"
- "web/fetch"
- "fetch"
- "context7/*"
model: Auto (copilot)
---
@@ -103,7 +103,7 @@ Identify the primary task category based on content analysis:

Based on `tools` in frontmatter and body instructions:

- **Read-only tools** (search, web/fetch, usages, githubRepo): Lower complexity, faster models suitable
- **Read-only tools** (search, fetch, usages, githubRepo): Lower complexity, faster models suitable
- **Write operations** (edit/editFiles, new): Moderate complexity, accuracy important
- **Execution tools** (runCommands, runTests, runTasks): Validation needs, iterative approach
- **Advanced tools** (context7/\*, sequential-thinking/\*): Complex reasoning, premium models beneficial
@@ -262,13 +262,13 @@ Verify model capabilities align with specified tools:

- If tools include `context7/*` or `sequential-thinking/*`: Recommend advanced reasoning models (Claude Sonnet 4.5, GPT-5, Claude Opus 4.1)
- If tools include vision-related references: Ensure model supports images (flag if GPT-5 Codex, Claude Sonnet 4, or mini models selected)
- If tools are read-only (search, web/fetch): Suggest cost-effective models (GPT-5 mini, Grok Code Fast 1)
- If tools are read-only (search, fetch): Suggest cost-effective models (GPT-5 mini, Grok Code Fast 1)

### 5. Context7 Integration for Up-to-Date Information

**Leverage Context7 for Model Documentation**:

When uncertainty exists about current model capabilities, use Context7 to web/fetch latest information:
When uncertainty exists about current model capabilities, use Context7 to fetch latest information:

```markdown
**Verification with Context7**:
@@ -568,7 +568,7 @@ If file specifies a deprecated model:
### Example 4: Free Tier User with Planning Mode

**File**: `plan.agent.md`
**Content**: "Research and planning mode with read-only tools (search, web/fetch, githubRepo)"
**Content**: "Research and planning mode with read-only tools (search, fetch, githubRepo)"
**Subscription**: Free (2K completions + 50 chat requests/month, 0x models only)
**Recommendation**: GPT-4.1 (0x, balanced, included in Free tier)
**Alternative**: GPT-5 mini (0x, faster but less context)

@@ -1,7 +1,7 @@
---
agent: agent
description: 'Website exploration for testing using Playwright MCP'
tools: ['changes', 'search/codebase', 'edit/editFiles', 'web/fetch', 'findTestFiles', 'problems', 'runCommands', 'runTasks', 'runTests', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'playwright']
tools: ['changes', 'search/codebase', 'edit/editFiles', 'fetch', 'findTestFiles', 'problems', 'runCommands', 'runTasks', 'runTests', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'playwright']
model: 'Claude Sonnet 4'
---


@@ -1,7 +1,7 @@
---
agent: agent
description: 'Generate a Playwright test based on a scenario using Playwright MCP'
tools: ['changes', 'search/codebase', 'edit/editFiles', 'web/fetch', 'problems', 'runCommands', 'runTasks', 'runTests', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'playwright/*']
tools: ['changes', 'search/codebase', 'edit/editFiles', 'fetch', 'problems', 'runCommands', 'runTasks', 'runTests', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'playwright/*']
model: 'Claude Sonnet 4.5'
---

@@ -24,7 +24,7 @@ CREATE TABLE events (
CREATE INDEX idx_events_data_gin ON events USING gin(data);

-- JSONB containment and path queries
SELECT * FROM events
SELECT * FROM events
WHERE data @> '{"type": "login"}'
AND data #>> '{user,role}' = 'admin';

@@ -53,7 +53,7 @@ SELECT array_agg(DISTINCT category) FROM posts, unnest(categories) as category;
### Window Functions & Analytics
```sql
-- Advanced window functions
SELECT
SELECT
product_id,
sale_date,
amount,
@@ -79,19 +79,19 @@ CREATE TABLE documents (
);

-- Update search vector
UPDATE documents
UPDATE documents
SET search_vector = to_tsvector('english', title || ' ' || content);

-- GIN index for search performance
CREATE INDEX idx_documents_search ON documents USING gin(search_vector);

-- Search queries
SELECT * FROM documents
SELECT * FROM documents
WHERE search_vector @@ plainto_tsquery('english', 'postgresql database');

-- Ranking results
SELECT *, ts_rank(search_vector, plainto_tsquery('postgresql')) as rank
FROM documents
FROM documents
WHERE search_vector @@ plainto_tsquery('postgresql')
ORDER BY rank DESC;
```
@@ -101,7 +101,7 @@ ORDER BY rank DESC;
### Query Optimization
```sql
-- EXPLAIN ANALYZE for performance analysis
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT u.name, COUNT(o.id) as order_count
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
@@ -111,8 +111,8 @@ GROUP BY u.id, u.name;
-- Identify slow queries from pg_stat_statements
SELECT query, calls, total_time, mean_time, rows,
100.0 * shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0) AS hit_percent
FROM pg_stat_statements
ORDER BY total_time DESC
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
```

@@ -134,13 +134,13 @@ CREATE INDEX idx_orders_covering ON orders(user_id, status) INCLUDE (total, crea
### Connection & Memory Management
```sql
-- Check connection usage
SELECT count(*) as connections, state
FROM pg_stat_activity
SELECT count(*) as connections, state
FROM pg_stat_activity
GROUP BY state;

-- Monitor memory usage
SELECT name, setting, unit
FROM pg_settings
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem');
```

@@ -159,7 +159,7 @@ CREATE TYPE address_type AS (
CREATE TYPE order_status AS ENUM ('pending', 'processing', 'shipped', 'delivered', 'cancelled');

-- Use domains for data validation
CREATE DOMAIN email_address AS TEXT
CREATE DOMAIN email_address AS TEXT
CHECK (VALUE ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$');

-- Table using custom types
@@ -182,12 +182,12 @@ CREATE TABLE reservations (
);

-- Range queries
SELECT * FROM reservations
SELECT * FROM reservations
WHERE reservation_period && tstzrange('2024-07-20', '2024-07-25');

-- Exclude overlapping ranges
ALTER TABLE reservations
ADD CONSTRAINT no_overlap
ALTER TABLE reservations
ADD CONSTRAINT no_overlap
EXCLUDE USING gist (room_id WITH =, reservation_period WITH &&);
```
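As an editorial aside (not part of the reverted file): the `&&` overlap test that the `EXCLUDE` constraint above enforces can be sketched in plain Python, assuming PostgreSQL's default half-open `[start, end)` range bounds:

```python
from datetime import date

def ranges_overlap(a_start, a_end, b_start, b_end):
    # Mirrors PostgreSQL's && operator on half-open ranges [start, end):
    # two ranges overlap when each one starts before the other ends.
    return a_start < b_end and b_start < a_end

# A July 20-25 reservation overlaps a July 24-28 one...
print(ranges_overlap(date(2024, 7, 20), date(2024, 7, 25),
                     date(2024, 7, 24), date(2024, 7, 28)))   # True
# ...but not one starting exactly where it ends (half-open bounds).
print(ranges_overlap(date(2024, 7, 20), date(2024, 7, 25),
                     date(2024, 7, 25), date(2024, 7, 28)))   # False
```

This is why the `EXCLUDE USING gist` constraint rejects double bookings for the same `room_id` but allows back-to-back reservations.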

@@ -203,7 +203,7 @@ CREATE TABLE locations (
);

-- Geometric queries
SELECT name FROM locations
SELECT name FROM locations
WHERE coordinates <-> point(40.7128, -74.0060) < 10; -- Within 10 units

-- GiST index for geometric data
@@ -235,12 +235,12 @@ SELECT pg_size_pretty(pg_database_size(current_database())) as db_size;
-- Table and index sizes
SELECT schemaname, tablename,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) as size
FROM pg_tables
FROM pg_tables
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;

-- Index usage statistics
SELECT schemaname, tablename, indexname, idx_scan, idx_tup_read, idx_tup_web/fetch
FROM pg_stat_user_indexes
SELECT schemaname, tablename, indexname, idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
WHERE idx_scan = 0; -- Unused indexes
```

@@ -258,13 +258,13 @@ WHERE idx_scan = 0; -- Unused indexes
```sql
-- Identify slow queries
SELECT query, calls, total_time, mean_time, rows
FROM pg_stat_statements
ORDER BY total_time DESC
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;

-- Check index usage
SELECT schemaname, tablename, indexname, idx_scan, idx_tup_read, idx_tup_web/fetch
FROM pg_stat_user_indexes
SELECT schemaname, tablename, indexname, idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
WHERE idx_scan = 0;
```

@@ -282,27 +282,27 @@ WHERE idx_scan = 0;
SELECT * FROM products ORDER BY id OFFSET 10000 LIMIT 20;

-- ✅ GOOD: Cursor-based pagination
SELECT * FROM products
WHERE id > $last_id
ORDER BY id
SELECT * FROM products
WHERE id > $last_id
ORDER BY id
LIMIT 20;
```
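As an editorial aside (not part of the reverted file): the cursor-based pattern above is driven from the application by threading the last seen `id` into the next query. A minimal sketch, using SQLite to stand in for PostgreSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO products (id, name) VALUES (?, ?)",
                 [(i, f"product-{i}") for i in range(1, 101)])

def fetch_page(conn, last_id, page_size=20):
    # Keyset pagination: seek past the last id seen instead of using OFFSET,
    # so the primary-key index is used and cost stays flat on deep pages.
    return conn.execute(
        "SELECT id, name FROM products WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size),
    ).fetchall()

page1 = fetch_page(conn, 0)
page2 = fetch_page(conn, page1[-1][0])  # pass the last id of the previous page
print(page1[0][0], page2[0][0])  # 1 21
```

Unlike `OFFSET`, this also stays consistent when rows are inserted or deleted between page requests.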

### Aggregation
```sql
-- ❌ BAD: Inefficient grouping
SELECT user_id, COUNT(*)
FROM orders
WHERE order_date >= '2024-01-01'
SELECT user_id, COUNT(*)
FROM orders
WHERE order_date >= '2024-01-01'
GROUP BY user_id;

-- ✅ GOOD: Optimized with partial index
CREATE INDEX idx_orders_recent ON orders(user_id)
CREATE INDEX idx_orders_recent ON orders(user_id)
WHERE order_date >= '2024-01-01';

SELECT user_id, COUNT(*)
FROM orders
WHERE order_date >= '2024-01-01'
SELECT user_id, COUNT(*)
FROM orders
WHERE order_date >= '2024-01-01'
GROUP BY user_id;
```
|
||||
|
||||
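The partial-index pattern in this hunk is also portable: SQLite supports partial indexes too, so it can be sketched with stdlib tooling. The `orders` rows below are hypothetical, chosen only to make the grouping observable.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, order_date TEXT)"
)
conn.executemany(
    "INSERT INTO orders (user_id, order_date) VALUES (?, ?)",
    [(1, "2023-12-31"), (1, "2024-02-01"), (2, "2024-03-05"), (2, "2024-04-09")],
)

# Partial index: only rows matching the WHERE clause are indexed, which keeps
# the index small when queries always filter on recent orders.
conn.execute(
    "CREATE INDEX idx_orders_recent ON orders(user_id) "
    "WHERE order_date >= '2024-01-01'"
)

counts = dict(conn.execute(
    "SELECT user_id, COUNT(*) FROM orders "
    "WHERE order_date >= '2024-01-01' GROUP BY user_id"
).fetchall())
```

The pre-2024 row is excluded both from the index and from the aggregate, mirroring the intent of the PostgreSQL original.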
@@ -377,7 +377,7 @@ CREATE INDEX idx_table_column ON table(column);
### Window Functions
```sql
-- Running totals and rankings
SELECT
SELECT
    product_id,
    order_date,
    amount,
@@ -391,11 +391,11 @@ FROM sales;
-- Recursive queries for hierarchical data
WITH RECURSIVE category_tree AS (
    SELECT id, name, parent_id, 1 as level
    FROM categories
    FROM categories
    WHERE parent_id IS NULL

    UNION ALL

    SELECT c.id, c.name, c.parent_id, ct.level + 1
    FROM categories c
    JOIN category_tree ct ON c.parent_id = ct.id

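The recursive CTE in this hunk runs unchanged on SQLite as well, so it can be demonstrated with a minimal runnable sketch. The `categories` rows are invented for illustration; only the query shape comes from the snippet above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT, parent_id INTEGER)"
)
conn.executemany(
    "INSERT INTO categories (id, name, parent_id) VALUES (?, ?, ?)",
    [(1, "electronics", None), (2, "laptops", 1), (3, "gaming laptops", 2)],
)

# Walk the hierarchy root-first, tracking depth, as in the snippet above:
# the anchor member selects the roots, the recursive member joins children.
rows = conn.execute("""
    WITH RECURSIVE category_tree AS (
        SELECT id, name, parent_id, 1 AS level
        FROM categories
        WHERE parent_id IS NULL

        UNION ALL

        SELECT c.id, c.name, c.parent_id, ct.level + 1
        FROM categories c
        JOIN category_tree ct ON c.parent_id = ct.id
    )
    SELECT name, level FROM category_tree ORDER BY level
""").fetchall()
```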
@@ -8,7 +8,7 @@ description: 'Guide users through creating high-quality GitHub Copilot prompts w

You are an expert prompt engineer specializing in GitHub Copilot prompt development with deep knowledge of:
- Prompt engineering best practices and patterns
- VS Code Copilot customization capabilities
- VS Code Copilot customization capabilities
- Effective persona design and task specification
- Tool integration and front matter configuration
- Output format optimization for AI consumption
@@ -62,7 +62,7 @@ I will ask you targeted questions to gather all necessary information. After col
Which tools does this prompt need? Common options include:
- **File Operations**: `codebase`, `editFiles`, `search`, `problems`
- **Execution**: `runCommands`, `runTasks`, `runTests`, `terminalLastCommand`
- **External**: `web/fetch`, `githubRepo`, `openSimpleBrowser`
- **External**: `fetch`, `githubRepo`, `openSimpleBrowser`
- **Specialized**: `playwright`, `usages`, `vscodeAPI`, `extensions`
- **Analysis**: `changes`, `findTestFiles`, `testFailure`, `searchResults`

@@ -82,7 +82,7 @@ Which tools does this prompt need? Common options include:
Based on analysis of existing prompts, I will ensure your prompt includes:

✅ **Clear Structure**: Well-organized sections with logical flow
✅ **Specific Instructions**: Actionable, unambiguous directions
✅ **Specific Instructions**: Actionable, unambiguous directions
✅ **Proper Context**: All necessary information for task completion
✅ **Tool Integration**: Appropriate tool selection for the task
✅ **Error Handling**: Guidance for edge cases and failures
@@ -116,7 +116,7 @@ model: "[only if specific model required]"
## [Instructions Section]
[Step-by-step instructions following established patterns]

## [Context/Input Section]
## [Context/Input Section]
[Variable usage and context requirements]

## [Output Section]
@@ -128,7 +128,7 @@ model: "[only if specific model required]"

The generated prompt will follow patterns observed in high-quality prompts like:
- **Comprehensive blueprints** (architecture-blueprint-generator)
- **Structured specifications** (create-github-action-workflow-specification)
- **Structured specifications** (create-github-action-workflow-specification)
- **Best practice guides** (dotnet-best-practices, csharp-xunit)
- **Implementation plans** (create-implementation-plan)
- **Code generation** (playwright-generate-test)

@@ -67,7 +67,7 @@ For the entire project described in the master plan, research and gather:
- Testing strategies

4. **Official Documentation:**
   - web/fetch official docs for all major libraries/frameworks
   - Fetch official docs for all major libraries/frameworks
   - Document APIs, syntax, parameters
   - Note version-specific details
   - Record known limitations and gotchas

@@ -1,7 +1,7 @@
---
agent: "agent"
description: "Suggest relevant GitHub Copilot Custom Agents files from the awesome-copilot repository based on current repository context and chat history, avoiding duplicates with existing custom agents in this repository."
tools: ["edit", "search", "runCommands", "runTasks", "changes", "testFailure", "openSimpleBrowser", "web/fetch", "githubRepo", "todos"]
tools: ["edit", "search", "runCommands", "runTasks", "changes", "testFailure", "openSimpleBrowser", "fetch", "githubRepo", "todos"]
---

# Suggest Awesome GitHub Copilot Custom Agents
@@ -10,7 +10,7 @@ Analyze current repository context and suggest relevant Custom Agents files from

## Process

1. **web/fetch Available Custom Agents**: Extract Custom Agents list and descriptions from [awesome-copilot README.agents.md](https://github.com/github/awesome-copilot/blob/main/docs/README.agents.md). Must use `web/fetch` tool.
1. **Fetch Available Custom Agents**: Extract Custom Agents list and descriptions from [awesome-copilot README.agents.md](https://github.com/github/awesome-copilot/blob/main/docs/README.agents.md). Must use `fetch` tool.
2. **Scan Local Custom Agents**: Discover existing custom agent files in `.github/agents/` folder
3. **Extract Descriptions**: Read front matter from local custom agent files to get descriptions
4. **Analyze Context**: Review chat history, repository files, and current project needs
@@ -20,7 +20,7 @@ Analyze current repository context and suggest relevant Custom Agents files from
8. **Validate**: Ensure suggested agents would add value not already covered by existing agents
9. **Output**: Provide structured table with suggestions, descriptions, and links to both awesome-copilot custom agents and similar local custom agents
**AWAIT** user request to proceed with installation of specific custom agents. DO NOT INSTALL UNLESS DIRECTED TO DO SO.
10. **Download Assets**: For requested agents, automatically download and install individual agents to `.github/agents/` folder. Do NOT adjust content of the files. Use `#todos` tool to track progress. Prioritize use of `#web/fetch` tool to download assets, but may use `curl` using `#runInTerminal` tool to ensure all content is retrieved.
10. **Download Assets**: For requested agents, automatically download and install individual agents to `.github/agents/` folder. Do NOT adjust content of the files. Use `#todos` tool to track progress. Prioritize use of `#fetch` tool to download assets, but may use `curl` using `#runInTerminal` tool to ensure all content is retrieved.

## Context Analysis Criteria

@@ -1,7 +1,7 @@
---
agent: 'agent'
description: 'Suggest relevant GitHub Copilot Custom Chat Modes files from the awesome-copilot repository based on current repository context and chat history, avoiding duplicates with existing custom chat modes in this repository.'
tools: ['edit', 'search', 'runCommands', 'runTasks', 'think', 'changes', 'testFailure', 'openSimpleBrowser', 'web/fetch', 'githubRepo', 'todos', 'search']
tools: ['edit', 'search', 'runCommands', 'runTasks', 'think', 'changes', 'testFailure', 'openSimpleBrowser', 'fetch', 'githubRepo', 'todos', 'search']
---

# Suggest Awesome GitHub Copilot Custom Chat Modes
@@ -10,7 +10,7 @@ Analyze current repository context and suggest relevant Custom Chat Modes files

## Process

1. **web/fetch Available Custom Chat Modes**: Extract Custom Chat Modes list and descriptions from [awesome-copilot README.chatmodes.md](https://github.com/github/awesome-copilot/blob/main/docs/README.chatmodes.md). Must use `#web/fetch` tool.
1. **Fetch Available Custom Chat Modes**: Extract Custom Chat Modes list and descriptions from [awesome-copilot README.chatmodes.md](https://github.com/github/awesome-copilot/blob/main/docs/README.chatmodes.md). Must use `#fetch` tool.
2. **Scan Local Custom Chat Modes**: Discover existing custom chat mode files in `.github/agents/` folder
3. **Extract Descriptions**: Read front matter from local custom chat mode files to get descriptions
4. **Analyze Context**: Review chat history, repository files, and current project needs
@@ -20,7 +20,7 @@ Analyze current repository context and suggest relevant Custom Chat Modes files
8. **Validate**: Ensure suggested chatmodes would add value not already covered by existing chatmodes
9. **Output**: Provide structured table with suggestions, descriptions, and links to both awesome-copilot custom chat modes and similar local custom chat modes
**AWAIT** user request to proceed with installation of specific custom chat modes. DO NOT INSTALL UNLESS DIRECTED TO DO SO.
10. **Download Assets**: For requested chat modes, automatically download and install individual chat modes to `.github/agents/` folder. Do NOT adjust content of the files. Use `#todos` tool to track progress. Prioritize use of `#web/fetch` tool to download assets, but may use `curl` using `#runInTerminal` tool to ensure all content is retrieved.
10. **Download Assets**: For requested chat modes, automatically download and install individual chat modes to `.github/agents/` folder. Do NOT adjust content of the files. Use `#todos` tool to track progress. Prioritize use of `#fetch` tool to download assets, but may use `curl` using `#runInTerminal` tool to ensure all content is retrieved.

## Context Analysis Criteria

@@ -1,7 +1,7 @@
---
agent: 'agent'
description: 'Suggest relevant GitHub Copilot collections from the awesome-copilot repository based on current repository context and chat history, providing automatic download and installation of collection assets.'
tools: ['edit', 'search', 'runCommands', 'runTasks', 'think', 'changes', 'testFailure', 'openSimpleBrowser', 'web/fetch', 'githubRepo', 'todos', 'search']
tools: ['edit', 'search', 'runCommands', 'runTasks', 'think', 'changes', 'testFailure', 'openSimpleBrowser', 'fetch', 'githubRepo', 'todos', 'search']
---
# Suggest Awesome GitHub Copilot Collections

@@ -9,7 +9,7 @@ Analyze current repository context and suggest relevant collections from the [Gi

## Process

1. **web/fetch Available Collections**: Extract collection list and descriptions from [awesome-copilot README.collections.md](https://github.com/github/awesome-copilot/blob/main/docs/README.collections.md). Must use `#web/fetch` tool.
1. **Fetch Available Collections**: Extract collection list and descriptions from [awesome-copilot README.collections.md](https://github.com/github/awesome-copilot/blob/main/docs/README.collections.md). Must use `#fetch` tool.
2. **Scan Local Assets**: Discover existing prompt files in `prompts/`, instruction files in `instructions/`, and chat modes in `agents/` folders
3. **Extract Local Descriptions**: Read front matter from local asset files to understand existing capabilities
4. **Analyze Repository Context**: Review chat history, repository files, programming languages, frameworks, and current project needs
@@ -18,7 +18,7 @@ Analyze current repository context and suggest relevant collections from the [Gi
7. **Present Collection Options**: Display relevant collections with descriptions, item counts, and rationale for suggestion
8. **Provide Usage Guidance**: Explain how the installed collection enhances the development workflow
**AWAIT** user request to proceed with installation of specific collections. DO NOT INSTALL UNLESS DIRECTED TO DO SO.
9. **Download Assets**: For requested collections, automatically download and install each individual asset (prompts, instructions, chat modes) to appropriate directories. Do NOT adjust content of the files. Prioritize use of `#web/fetch` tool to download assets, but may use `curl` using `#runInTerminal` tool to ensure all content is retrieved.
9. **Download Assets**: For requested collections, automatically download and install each individual asset (prompts, instructions, chat modes) to appropriate directories. Do NOT adjust content of the files. Prioritize use of `#fetch` tool to download assets, but may use `curl` using `#runInTerminal` tool to ensure all content is retrieved.

## Context Analysis Criteria

@@ -90,7 +90,7 @@ For each suggested collection, break down individual assets:

When user confirms a collection installation:

1. **web/fetch Collection Manifest**: Get collection YAML from awesome-copilot repository
1. **Fetch Collection Manifest**: Get collection YAML from awesome-copilot repository
2. **Download Individual Assets**: For each item in collection:
   - Download raw file content from GitHub
   - Validate file format and front matter structure
@@ -104,7 +104,7 @@ When user confirms a collection installation:

## Requirements

- Use `web/fetch` tool to get collections data from awesome-copilot repository
- Use `fetch` tool to get collections data from awesome-copilot repository
- Use `githubRepo` tool to get individual asset content for download
- Scan local file system for existing assets in `prompts/`, `instructions/`, and `agents/` directories
- Read YAML front matter from local asset files to extract descriptions and capabilities
@@ -120,7 +120,7 @@ When user confirms a collection installation:
## Collection Installation Workflow

1. **User Confirms Collection**: User selects specific collection(s) for installation
2. **web/fetch Collection Manifest**: Download YAML manifest from awesome-copilot repository
2. **Fetch Collection Manifest**: Download YAML manifest from awesome-copilot repository
3. **Asset Download Loop**: For each asset in collection:
   - Download raw content from GitHub repository
   - Validate file format and structure

@@ -1,7 +1,7 @@
---
agent: 'agent'
description: 'Suggest relevant GitHub Copilot instruction files from the awesome-copilot repository based on current repository context and chat history, avoiding duplicates with existing instructions in this repository.'
tools: ['edit', 'search', 'runCommands', 'runTasks', 'think', 'changes', 'testFailure', 'openSimpleBrowser', 'web/fetch', 'githubRepo', 'todos', 'search']
tools: ['edit', 'search', 'runCommands', 'runTasks', 'think', 'changes', 'testFailure', 'openSimpleBrowser', 'fetch', 'githubRepo', 'todos', 'search']
---
# Suggest Awesome GitHub Copilot Instructions

@@ -9,7 +9,7 @@ Analyze current repository context and suggest relevant copilot-instruction file

## Process

1. **web/fetch Available Instructions**: Extract instruction list and descriptions from [awesome-copilot README.instructions.md](https://github.com/github/awesome-copilot/blob/main/docs/README.instructions.md). Must use `#web/fetch` tool.
1. **Fetch Available Instructions**: Extract instruction list and descriptions from [awesome-copilot README.instructions.md](https://github.com/github/awesome-copilot/blob/main/docs/README.instructions.md). Must use `#fetch` tool.
2. **Scan Local Instructions**: Discover existing instruction files in `.github/instructions/` folder
3. **Extract Descriptions**: Read front matter from local instruction files to get descriptions and `applyTo` patterns
4. **Analyze Context**: Review chat history, repository files, and current project needs
@@ -19,7 +19,7 @@ Analyze current repository context and suggest relevant copilot-instruction file
8. **Validate**: Ensure suggested instructions would add value not already covered by existing instructions
9. **Output**: Provide structured table with suggestions, descriptions, and links to both awesome-copilot instructions and similar local instructions
**AWAIT** user request to proceed with installation of specific instructions. DO NOT INSTALL UNLESS DIRECTED TO DO SO.
10. **Download Assets**: For requested instructions, automatically download and install individual instructions to `.github/instructions/` folder. Do NOT adjust content of the files. Use `#todos` tool to track progress. Prioritize use of `#web/fetch` tool to download assets, but may use `curl` using `#runInTerminal` tool to ensure all content is retrieved.
10. **Download Assets**: For requested instructions, automatically download and install individual instructions to `.github/instructions/` folder. Do NOT adjust content of the files. Use `#todos` tool to track progress. Prioritize use of `#fetch` tool to download assets, but may use `curl` using `#runInTerminal` tool to ensure all content is retrieved.

## Context Analysis Criteria

@@ -1,7 +1,7 @@
---
agent: 'agent'
description: 'Suggest relevant GitHub Copilot prompt files from the awesome-copilot repository based on current repository context and chat history, avoiding duplicates with existing prompts in this repository.'
tools: ['edit', 'search', 'runCommands', 'runTasks', 'think', 'changes', 'testFailure', 'openSimpleBrowser', 'web/fetch', 'githubRepo', 'todos', 'search']
tools: ['edit', 'search', 'runCommands', 'runTasks', 'think', 'changes', 'testFailure', 'openSimpleBrowser', 'fetch', 'githubRepo', 'todos', 'search']
---
# Suggest Awesome GitHub Copilot Prompts

@@ -9,7 +9,7 @@ Analyze current repository context and suggest relevant prompt files from the [G

## Process

1. **web/fetch Available Prompts**: Extract prompt list and descriptions from [awesome-copilot README.prompts.md](https://github.com/github/awesome-copilot/blob/main/docs/README.prompts.md). Must use `#web/fetch` tool.
1. **Fetch Available Prompts**: Extract prompt list and descriptions from [awesome-copilot README.prompts.md](https://github.com/github/awesome-copilot/blob/main/docs/README.prompts.md). Must use `#fetch` tool.
2. **Scan Local Prompts**: Discover existing prompt files in `.github/prompts/` folder
3. **Extract Descriptions**: Read front matter from local prompt files to get descriptions
4. **Analyze Context**: Review chat history, repository files, and current project needs
@@ -19,7 +19,7 @@ Analyze current repository context and suggest relevant prompt files from the [G
8. **Validate**: Ensure suggested prompts would add value not already covered by existing prompts
9. **Output**: Provide structured table with suggestions, descriptions, and links to both awesome-copilot prompts and similar local prompts
**AWAIT** user request to proceed with installation of specific instructions. DO NOT INSTALL UNLESS DIRECTED TO DO SO.
10. **Download Assets**: For requested instructions, automatically download and install individual instructions to `.github/prompts/` folder. Do NOT adjust content of the files. Use `#todos` tool to track progress. Prioritize use of `#web/fetch` tool to download assets, but may use `curl` using `#runInTerminal` tool to ensure all content is retrieved.
10. **Download Assets**: For requested instructions, automatically download and install individual instructions to `.github/prompts/` folder. Do NOT adjust content of the files. Use `#todos` tool to track progress. Prioritize use of `#fetch` tool to download assets, but may use `curl` using `#runInTerminal` tool to ensure all content is retrieved.

## Context Analysis Criteria

@@ -1,7 +1,7 @@
---
agent: 'agent'
description: 'Create tldr summaries for GitHub Copilot files (prompts, agents, instructions, collections), MCP servers, or documentation from URLs and queries.'
tools: ['web/fetch', 'search/readFile', 'search', 'search/textSearch']
tools: ['fetch', 'search/readFile', 'search', 'search/textSearch']
model: 'claude-sonnet-4'
---

@@ -50,7 +50,7 @@ message specified in the Error Handling section.
  create tldr summaries for the first 5 and list the remaining files
- Recognize file type by extension and use appropriate invocation syntax in examples
* **URL** - Link to Copilot file, MCP server documentation, or Copilot documentation
  - If one or more URLs are passed without `#web/fetch`, you MUST apply the web/fetch tool to all URLs
  - If one or more URLs are passed without `#fetch`, you MUST apply the fetch tool to all URLs
  - If more than one URL (up to 5), you MUST create a `tldr` for each. If more than 5, you MUST create
    tldr summaries for the first 5 and list the remaining URLs
* **Text data/query** - Raw text about Copilot features, MCP servers, or usage questions will be
@@ -91,21 +91,21 @@ resolve to:

2. **Search strategy**:
   - For workspace files: Use search tools to find matching files in ${workspaceFolder}
   - For GitHub awesome-copilot: web/fetch raw content from https://raw.githubusercontent.com/github/awesome-copilot/refs/heads/main/
   - For documentation: Use web/fetch tool with the most relevant URL from above
   - For GitHub awesome-copilot: Fetch raw content from https://raw.githubusercontent.com/github/awesome-copilot/refs/heads/main/
   - For documentation: Use fetch tool with the most relevant URL from above

3. **web/fetch content**:
3. **Fetch content**:
   - Workspace files: Read using file tools
   - GitHub awesome-copilot files: web/fetch using raw.githubusercontent.com URLs
   - Documentation URLs: web/fetch using web/fetch tool
   - GitHub awesome-copilot files: Fetch using raw.githubusercontent.com URLs
   - Documentation URLs: Fetch using fetch tool

4. **Evaluate and respond**:
   - Use the web/fetched content as the reference for completing the request
   - Use the fetched content as the reference for completing the request
   - Adapt response verbosity based on chat context

### Unambiguous Queries

If the user **DOES** provide a specific URL or file, skip searching and web/fetch/read that directly.
If the user **DOES** provide a specific URL or file, skip searching and fetch/read that directly.

### Optional

@@ -124,7 +124,7 @@ If the user **DOES** provide a specific URL or file, skip searching and web/fetc
/tldr-prompt #file:{{name.collections.md}}

# With URLs
/tldr-prompt #web/fetch {{https://example.com/docs}}
/tldr-prompt #fetch {{https://example.com/docs}}

# AMBIGUOUS QUERIES
/tldr-prompt "{{topic or question}}"
@@ -149,7 +149,7 @@ Error: Missing required input.

You MUST provide one of the following:
1. A Copilot file: /tldr-prompt #file:{{name.prompt.md | name.agent.md | name.instructions.md | name.collections.md}}
2. A URL: /tldr-prompt #web/fetch {{https://example.com/docs}}
2. A URL: /tldr-prompt #fetch {{https://example.com/docs}}
3. A search query: /tldr-prompt "{{topic}}" (e.g., "MCP servers", "inline chat", "chat tools")

Please retry with one of these inputs.
@@ -184,7 +184,7 @@ Found:
- https://github.com/github/awesome-copilot/blob/main/prompts/java-docs.prompt.md
- https://github.com/github/awesome-copilot/blob/main/prompts/java-junit.prompt.md

Now let me web/fetch the raw content:
Now let me fetch the raw content:
- https://raw.githubusercontent.com/github/awesome-copilot/refs/heads/main/prompts/java-docs.prompt.md
- https://raw.githubusercontent.com/github/awesome-copilot/refs/heads/main/prompts/java-junit.prompt.md

@@ -212,13 +212,13 @@ I'll read the file typescript-mcp-server-generator.prompt.md and create a tldr s
**User**

```bash
/tldr-prompt "How do MCP servers work?" #web/fetch https://code.visualstudio.com/docs/copilot/customization/mcp-servers
/tldr-prompt "How do MCP servers work?" #fetch https://code.visualstudio.com/docs/copilot/customization/mcp-servers
```

**Agent**

```text
I'll web/fetch the MCP server documentation from https://code.visualstudio.com/docs/copilot/customization/mcp-servers
I'll fetch the MCP server documentation from https://code.visualstudio.com/docs/copilot/customization/mcp-servers
and create a tldr summary of how MCP servers work.
```

@@ -232,10 +232,10 @@ message from Error Handling section
- Determine file type (.prompt.md, .agent.md, .instructions.md, .collections.md)
- Recognize if query is about MCP servers, inline chat, chat view, or general Copilot features
- Note if you're in inline chat (Ctrl+I) or chat view context
3. **web/fetch Content**:
3. **Fetch Content**:
   - For files: Read the file(s) using available file tools
   - For URLs: web/fetch content using `#tool:web/fetch`
   - For queries: Apply URL Resolver strategy to find and web/fetch relevant content
   - For URLs: Fetch content using `#tool:fetch`
   - For queries: Apply URL Resolver strategy to find and fetch relevant content
4. **Analyze Content**: Extract the file's/documentation's purpose, key parameters, and primary use
   cases
5. **Generate tldr**: Create summary using the template format below with correct invocation syntax

@@ -1,7 +1,7 @@
---
agent: 'agent'
description: 'Update Azure Verified Modules (AVM) to latest versions in Bicep files.'
tools: ['search/codebase', 'think', 'changes', 'web/fetch', 'search/searchResults', 'todos', 'edit/editFiles', 'search', 'runCommands', 'bicepschema', 'azure_get_schema_for_Bicep']
tools: ['search/codebase', 'think', 'changes', 'fetch', 'search/searchResults', 'todos', 'edit/editFiles', 'search', 'runCommands', 'bicepschema', 'azure_get_schema_for_Bicep']
---
# Update Azure Verified Modules in Bicep Files

@@ -11,16 +11,16 @@ Update Bicep file `${file}` to use latest Azure Verified Module (AVM) versions.

1. **Scan**: Extract AVM modules and current versions from `${file}`
1. **Identify**: List all unique AVM modules used by matching `avm/res/{service}/{resource}` using `#search` tool
1. **Check**: Use `#web/fetch` tool to get latest version of each AVM module from MCR: `https://mcr.microsoft.com/v2/bicep/avm/res/{service}/{resource}/tags/list`
1. **Check**: Use `#fetch` tool to get latest version of each AVM module from MCR: `https://mcr.microsoft.com/v2/bicep/avm/res/{service}/{resource}/tags/list`
1. **Compare**: Parse semantic versions to identify AVM modules needing update
1. **Review**: For breaking changes, use `#web/fetch` tool to get docs from: `https://github.com/Azure/bicep-registry-modules/tree/main/avm/res/{service}/{resource}`
1. **Review**: For breaking changes, use `#fetch` tool to get docs from: `https://github.com/Azure/bicep-registry-modules/tree/main/avm/res/{service}/{resource}`
1. **Update**: Apply version updates and parameter changes using `#editFiles` tool
1. **Validate**: Run `bicep lint` and `bicep build` using `#runCommands` tool to ensure compliance.
1. **Output**: Summarize changes in a table format with summary of updates below.

## Tool Usage

Always use tools `#search`, `#searchResults`,`#web/fetch`, `#editFiles`, `#runCommands`, `#todos` if available. Avoid writing code to perform tasks.
Always use tools `#search`, `#searchResults`,`#fetch`, `#editFiles`, `#runCommands`, `#todos` if available. Avoid writing code to perform tasks.

## Breaking Change Policy

@@ -1,7 +1,7 @@
|
||||
---
|
||||
agent: 'agent'
|
||||
description: 'Update an existing implementation plan file with new or update requirements to provide new features, refactoring existing code or upgrading packages, design, architecture or infrastructure.'
|
||||
tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'web/fetch', 'githubRepo', 'openSimpleBrowser', 'problems', 'runTasks', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
|
||||
tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'fetch', 'githubRepo', 'openSimpleBrowser', 'problems', 'runTasks', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
|
||||
---
|
||||
# Update Implementation Plan
|
||||
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
---
|
||||
agent: 'agent'
|
||||
description: 'Update the llms.txt file in the root folder to reflect changes in documentation or specifications following the llms.txt specification at https://llmstxt.org/'
|
||||
tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'web/fetch', 'githubRepo', 'openSimpleBrowser', 'problems', 'runTasks', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
|
||||
tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'fetch', 'githubRepo', 'openSimpleBrowser', 'problems', 'runTasks', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
|
||||
---
|
||||
# Update LLMs.txt File
|
||||
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
---
|
||||
agent: 'agent'
|
||||
description: 'Update a markdown file section with an index/table of files from a specified folder.'
|
||||
tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'web/fetch', 'findTestFiles', 'githubRepo', 'openSimpleBrowser', 'problems', 'runCommands', 'runTasks', 'runTests', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
|
||||
tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'fetch', 'findTestFiles', 'githubRepo', 'openSimpleBrowser', 'problems', 'runCommands', 'runTasks', 'runTests', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
|
||||
---
|
||||
# Update Markdown File Index
|
||||
|
||||
|
||||
@@ -1,7 +1,7 @@
---
agent: 'agent'
description: 'Update existing object-oriented component documentation following industry best practices and architectural documentation standards.'
tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'web/fetch', 'githubRepo', 'openSimpleBrowser', 'problems', 'runTasks', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'fetch', 'githubRepo', 'openSimpleBrowser', 'problems', 'runTasks', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
---

# Update Standard OO Component Documentation

@@ -1,7 +1,7 @@
---
agent: 'agent'
description: 'Update an existing specification file for the solution, optimized for Generative AI consumption based on new requirements or updates to any existing code.'
tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'web/fetch', 'githubRepo', 'openSimpleBrowser', 'problems', 'runTasks', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
tools: ['changes', 'search/codebase', 'edit/editFiles', 'extensions', 'fetch', 'githubRepo', 'openSimpleBrowser', 'problems', 'runTasks', 'search', 'search/searchResults', 'runCommands/terminalLastCommand', 'runCommands/terminalSelection', 'testFailure', 'usages', 'vscodeAPI']
---

# Update Specification

@@ -1,7 +1,7 @@
---
agent: "agent"
description: "Write a coding standards document for a project using the coding styles from the file(s) and/or folder(s) passed as arguments in the prompt."
tools: ['createFile', 'editFiles', 'web/fetch', 'githubRepo', 'search', 'testFailure']
tools: ['createFile', 'editFiles', 'fetch', 'githubRepo', 'search', 'testFailure']
---

# Write Coding Standards From File

@@ -10,7 +10,7 @@ Use the existing syntax of the file(s) to establish the standards and style guid
## Rules and Configuration

Below is a set of quasi-configuration `boolean` and `string[]` variables. Conditions for handling `true`, or other values for each variable are under the level two heading `## Variable and Parameter Configuration Conditions`.

Parameters for the prompt have a text definition. There is one required parameter **`${fileName}`**, and several optional parameters **`${folderName}`**, **`${instructions}`**, and any **`[configVariableAsParameter]`**.

@@ -21,7 +21,7 @@ Parameters for the prompt have a text definition. There is one required paramete
* addToREADMEInsertions = ["atBegin", "middle", "beforeEnd", "bestFitUsingContext"];
  - Default to **beforeEnd**.
* createNewFile = true;
* web/fetchStyleURL = true;
* fetchStyleURL = true;
* findInconsistencies = true;
* fixInconsistencies = true;
* newFileName = ["CONTRIBUTING.md", "STYLE.md", "CODE_OF_CONDUCT.md", "CODING_STANDARDS.md", "DEVELOPING.md", "CONTRIBUTION_GUIDE.md", "GUIDELINES.md", "PROJECT_STANDARDS.md", "BEST_PRACTICES.md", "HACKING.md"];
@@ -87,10 +87,10 @@ If any of the variable names are passed to prompt as-is, or as a similar but cle
* Create a new file using the value, or one of the possible values, from `${newFileName}`.
* If true, toggle both `${outputSpecToPrompt}` and `${addToREADME}` to false.

### `${web/fetchStyleURL} == true`
### `${fetchStyleURL} == true`

* Additionally use the data web/fetched from the links nested under level three heading `### web/fetch Links` as context for creating standards, specifications, and styling data for the new file, prompt, or `README.md`.
* For each relevant item in `### web/fetch Links`, run `#web/fetch ${item}`.
* Additionally use the data fetched from the links nested under level three heading `### Fetch Links` as context for creating standards, specifications, and styling data for the new file, prompt, or `README.md`.
* For each relevant item in `### Fetch Links`, run `#fetch ${item}`.

### `${findInconsistencies} == true`

@@ -132,11 +132,11 @@ If any of the variable names are passed to prompt as-is, or as a similar but cle
* Use the custom prompt, instructions, template, or other data passed as guiding template when composing the data for coding standards.

## **if** `${web/fetchStyleURL} == true`
## **if** `${fetchStyleURL} == true`

Depending on the programming language, for each link in list below, run `#web/fetch (URL)`, if programming language is `${fileName} == [<Language> Style Guide]`.
Depending on the programming language, for each link in list below, run `#fetch (URL)`, if programming language is `${fileName} == [<Language> Style Guide]`.

### web/fetch Links
### Fetch Links

- [C Style Guide](https://users.ece.cmu.edu/~eno/coding/CCodingStandard.html)
- [C# Style Guide](https://learn.microsoft.com/en-us/dotnet/csharp/fundamentals/coding-style/coding-conventions)
@@ -223,7 +223,7 @@ Depending on the programming language, for each link in list below, run `#web/fe
# Style Guide

This document defines the style and conventions used in this project.
All contributions should follow these rules unless otherwise noted.

## 1. General Code Style
@@ -311,7 +311,7 @@ Depending on the programming language, for each link in list below, run `#web/fe
## 8. Changes to This Guide

Style evolves.
Propose improvements by opening an issue or sending a patch updating this document.
```