---
description: 'Guidelines for creating custom agent files for GitHub Copilot'
applyTo: '**/*.agent.md'
---
# Custom Agent File Guidelines
Instructions for creating effective and maintainable custom agent files that provide specialized expertise for specific development tasks in GitHub Copilot.
## Project Context

- Target audience: Developers creating custom agents for GitHub Copilot
- File format: Markdown with YAML frontmatter
- File naming convention: lowercase with hyphens (e.g., `test-specialist.agent.md`)
- Location: `.github/agents/` directory (repository-level) or `agents/` directory (organization/enterprise-level)
- Purpose: Define specialized agents with tailored expertise, tools, and instructions for specific tasks
- Official documentation: https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/create-custom-agents
## Required Frontmatter

Every agent file begins with YAML frontmatter. Only `description` is strictly required; the other fields shown here are optional but commonly used:
```yaml
---
description: 'Brief description of the agent purpose and capabilities'
name: 'Agent Display Name'
tools: ['read', 'edit', 'search']
model: 'Claude Sonnet 4.5'
target: 'vscode'
infer: true
---
```
## Core Frontmatter Properties

### description (REQUIRED)

- Single-quoted string clearly stating the agent's purpose and domain expertise
- Should be concise (50-150 characters) and actionable
- Example: `'Focuses on test coverage, quality, and testing best practices'`
### name (OPTIONAL)

- Display name for the agent in the UI
- If omitted, defaults to the filename (without `.md` or `.agent.md`)
- Use title case and be descriptive
- Example: `'Testing Specialist'`
### tools (OPTIONAL)

- List of tool names or aliases the agent can use
- Supports comma-separated string or YAML array format
- If omitted, the agent has access to all available tools
- See the "Tool Configuration" section below for details
### model (STRONGLY RECOMMENDED)

- Specifies which AI model the agent should use
- Supported in VS Code, JetBrains IDEs, Eclipse, and Xcode
- Examples: `'Claude Sonnet 4.5'`, `'gpt-4'`, `'gpt-4o'`
- Choose based on agent complexity and required capabilities
### target (OPTIONAL)

- Specifies the target environment: `'vscode'` or `'github-copilot'`
- If omitted, the agent is available in both environments
- Use when an agent has environment-specific features
### infer (OPTIONAL)

- Boolean controlling whether Copilot can automatically select this agent based on context
- Default: `true` if omitted
- Set to `false` to require manual agent selection (see the sketch below)
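
A minimal sketch of an agent that opts out of automatic selection (the `description` and `name` values are illustrative, not from any real agent):

```yaml
---
description: 'Runs database migrations; invoke deliberately, not by inference'  # hypothetical agent
name: 'Migration Runner'
infer: false  # Copilot never auto-selects this agent; users must pick it manually
---
```
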
### metadata (OPTIONAL, GitHub.com only)

- Object with name-value pairs for annotating the agent
- Example: `metadata: { category: 'testing', version: '1.0' }`
- Not supported in VS Code
### mcp-servers (OPTIONAL, Organization/Enterprise only)

- Configures MCP servers available only to this agent
- Only supported for organization/enterprise-level agents
- See the "MCP Server Configuration" section below
### handoffs (OPTIONAL, VS Code only)

- Enables guided sequential workflows that transition between agents with suggested next steps
- List of handoff configurations, each specifying a target agent and an optional prompt
- After a chat response completes, handoff buttons appear that let users move to the next agent
- Only supported in VS Code (version 1.106+)
- See the "Handoffs Configuration" section below for details
## Handoffs Configuration
Handoffs enable you to create guided sequential workflows that transition seamlessly between custom agents. This is useful for orchestrating multi-step development workflows where users can review and approve each step before moving to the next one.
### Common Handoff Patterns
- Planning → Implementation: Generate a plan in a planning agent, then hand off to an implementation agent to start coding
- Implementation → Review: Complete implementation, then switch to a code review agent to check for quality and security issues
- Write Failing Tests → Write Passing Tests: Generate failing tests, then hand off to implement the code that makes those tests pass
- Research → Documentation: Research a topic, then transition to a documentation agent to write guides
### Handoff Frontmatter Structure

Define handoffs in the agent file's YAML frontmatter using the `handoffs` field:
```yaml
---
description: 'Brief description of the agent'
name: 'Agent Name'
tools: ['search', 'read']
handoffs:
  - label: Start Implementation
    agent: implementation
    prompt: 'Now implement the plan outlined above.'
    send: false
  - label: Code Review
    agent: code-review
    prompt: 'Please review the implementation for quality and security issues.'
    send: false
---
```
### Handoff Properties

Each handoff in the list supports the following properties (`label` and `agent` are required):

| Property | Type | Required | Description |
|---|---|---|---|
| `label` | string | Yes | The display text shown on the handoff button in the chat interface |
| `agent` | string | Yes | The target agent identifier to switch to (name or filename without `.agent.md`) |
| `prompt` | string | No | The prompt text to pre-fill in the target agent's chat input |
| `send` | boolean | No | If `true`, automatically submits the prompt to the target agent (default: `false`) |
### Handoff Behavior

- Button Display: Handoff buttons appear as interactive suggestions after a chat response completes
- Context Preservation: When users select a handoff button, they switch to the target agent with the conversation context maintained
- Pre-filled Prompt: If a `prompt` is specified, it appears pre-filled in the target agent's chat input
- Manual vs. Auto: When `send: false`, users must review and manually send the pre-filled prompt; when `send: true`, the prompt is submitted automatically (see the sketch below)
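
As an illustration of the `send` flag, a hedged sketch of a handoff that submits its prompt automatically (the target agent name is hypothetical):

```yaml
handoffs:
  - label: Run Tests
    agent: test-runner  # hypothetical target agent
    prompt: 'Run the test suite against the changes above and report any failures.'
    send: true  # prompt is submitted immediately; no manual review step
```
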
### Handoff Configuration Guidelines

#### When to Use Handoffs
- Multi-step workflows: Breaking down complex tasks across specialized agents
- Quality gates: Ensuring review steps between implementation phases
- Guided processes: Directing users through a structured development process
- Skill transitions: Moving from planning/design to implementation/testing specialists
#### Best Practices

- **Clear Labels**: Use action-oriented labels that clearly indicate the next step
  - ✅ Good: "Start Implementation", "Review for Security", "Write Tests"
  - ❌ Avoid: "Next", "Go to agent", "Do something"
- **Relevant Prompts**: Provide context-aware prompts that reference the completed work
  - ✅ Good: `'Now implement the plan outlined above.'`
  - ❌ Avoid: Generic prompts without context
- **Selective Use**: Don't create handoffs to every possible agent; focus on logical workflow transitions
  - Limit to the 2-3 most relevant next steps per agent
  - Only add handoffs for agents that naturally follow in the workflow
- **Agent Dependencies**: Ensure target agents exist before creating handoffs
  - Handoffs to non-existent agents are silently ignored
  - Test handoffs to verify they work as expected
- **Prompt Content**: Keep prompts concise and actionable
  - Refer to work from the current agent without duplicating content
  - Provide any context the target agent might need
### Example: Complete Workflow

Here is an example of three agents whose handoffs create a complete workflow.

Planning Agent (`planner.agent.md`):
```markdown
---
description: 'Generate an implementation plan for new features or refactoring'
name: 'Planner'
tools: ['search', 'read']
handoffs:
  - label: Implement Plan
    agent: implementer
    prompt: 'Implement the plan outlined above.'
    send: false
---

# Planner Agent

You are a planning specialist. Your task is to:

1. Analyze the requirements
2. Break down the work into logical steps
3. Generate a detailed implementation plan
4. Identify testing requirements

Do not write any code - focus only on planning.
```
Implementation Agent (`implementer.agent.md`):

```markdown
---
description: 'Implement code based on a plan or specification'
name: 'Implementer'
tools: ['read', 'edit', 'search', 'execute']
handoffs:
  - label: Review Implementation
    agent: reviewer
    prompt: 'Please review this implementation for code quality, security, and adherence to best practices.'
    send: false
---

# Implementer Agent

You are an implementation specialist. Your task is to:

1. Follow the provided plan or specification
2. Write clean, maintainable code
3. Include appropriate comments and documentation
4. Follow project coding standards

Implement the solution completely and thoroughly.
```
Review Agent (`reviewer.agent.md`):

```markdown
---
description: 'Review code for quality, security, and best practices'
name: 'Reviewer'
tools: ['read', 'search']
handoffs:
  - label: Back to Planning
    agent: planner
    prompt: 'Review the feedback above and determine if a new plan is needed.'
    send: false
---

# Code Review Agent

You are a code review specialist. Your task is to:

1. Check code quality and maintainability
2. Identify security issues and vulnerabilities
3. Verify adherence to project standards
4. Suggest improvements

Provide constructive feedback on the implementation.
```
This workflow allows a developer to:
- Start with the Planner agent to create a detailed plan
- Hand off to the Implementer agent to write code based on the plan
- Hand off to the Reviewer agent to check the implementation
- Optionally hand off back to planning if significant issues are found
### Version Compatibility
- VS Code: Handoffs are supported in VS Code 1.106 and later
- GitHub.com: Not currently supported; agent transition workflows use different mechanisms
- Other IDEs: Limited or no support; focus on VS Code implementations for maximum compatibility
## Tool Configuration

### Tool Specification Strategies

Enable all tools (default):

```yaml
# Omit the tools property entirely, or use:
tools: ['*']
```

Enable specific tools:

```yaml
tools: ['read', 'edit', 'search', 'execute']
```

Enable MCP server tools:

```yaml
tools: ['read', 'edit', 'github/*', 'playwright/navigate']
```

Disable all tools:

```yaml
tools: []
```
### Standard Tool Aliases

All aliases are case-insensitive:

| Alias | Alternative Names | Category | Description |
|---|---|---|---|
| `execute` | `shell`, `Bash`, `powershell` | Shell execution | Execute commands in the appropriate shell |
| `read` | `Read`, `NotebookRead`, `view` | File reading | Read file contents |
| `edit` | `Edit`, `MultiEdit`, `Write`, `NotebookEdit` | File editing | Edit and modify files |
| `search` | `Grep`, `Glob`, `search` | Code search | Search for files or text in files |
| `agent` | `custom-agent`, `Task` | Agent invocation | Invoke other custom agents |
| `web` | `WebSearch`, `WebFetch` | Web access | Fetch web content and search |
| `todo` | `TodoWrite` | Task management | Create and manage task lists (VS Code only) |
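
Because aliases are case-insensitive and alternative names map to the same tools, the following two declarations should be equivalent (a sketch based on the alias table above):

```yaml
tools: ['read', 'edit', 'execute']
tools: ['Read', 'Edit', 'Bash']  # alternative names resolving to the same three tools
```
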
### Built-in MCP Server Tools

GitHub MCP Server:

```yaml
tools: ['github/*'] # All GitHub tools
tools: ['github/get_file_contents', 'github/search_repositories'] # Specific tools
```

- All read-only tools available by default
- Token scoped to the source repository

Playwright MCP Server:

```yaml
tools: ['playwright/*'] # All Playwright tools
tools: ['playwright/navigate', 'playwright/screenshot'] # Specific tools
```

- Configured to access localhost only
- Useful for browser automation and testing
### Tool Selection Best Practices

- Principle of Least Privilege: Only enable the tools necessary for the agent's purpose
- Security: Limit `execute` access unless it is explicitly required
- Focus: Fewer tools mean a clearer agent purpose and better performance
- Documentation: For complex configurations, comment on why specific tools are required (see the sketch below)
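
A hedged sketch of a least-privilege configuration for a documentation-writing agent, with comments recording why each tool is needed (the agent and tool choices are illustrative):

```yaml
# Documentation agent: reads and searches source, writes docs
tools:
  - read    # inspect the source files being documented
  - search  # locate APIs and usage sites
  - edit    # write the generated documentation files
# 'execute' and 'web' are deliberately omitted: this agent never runs
# commands or fetches remote content
```
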
## Sub-Agent Invocation (Agent Orchestration)

Agents can invoke other agents using `runSubagent` to orchestrate multi-step workflows.

### How It Works

Include `agent` in the tools list to enable sub-agent invocation:

```yaml
tools: ['read', 'edit', 'search', 'agent']
```

Then invoke other agents with `runSubagent`:
```js
const result = await runSubagent({
  description: 'What this step does',
  prompt: `You are the [Specialist] specialist.

Context:
- Parameter: ${parameterValue}
- Input: ${inputPath}
- Output: ${outputPath}

Task:
1. Do the specific work
2. Write results to the output location
3. Return a summary of completion`
});
```
### Basic Pattern

Structure each sub-agent call with:

- `description`: A clear one-line purpose for the sub-agent invocation
- `prompt`: Detailed instructions with substituted variables
The prompt should include:
- Who the sub-agent is (specialist role)
- What context it needs (parameters, paths)
- What to do (concrete tasks)
- Where to write output
- What to return (summary)
### Example: Multi-Step Processing

```js
// Step 1: Process data
const processing = await runSubagent({
  description: 'Transform raw input data',
  prompt: `You are the Data Processor specialist.

Project: ${projectName}
Input: ${basePath}/raw/
Output: ${basePath}/processed/

Task:
1. Read all files from the input directory
2. Apply transformations
3. Write processed files to the output directory
4. Create a summary: ${basePath}/processed/summary.md

Return: Number of files processed and any issues found`
});

// Step 2: Analyze (depends on Step 1)
const analysis = await runSubagent({
  description: 'Analyze processed data',
  prompt: `You are the Data Analyst specialist.

Project: ${projectName}
Input: ${basePath}/processed/
Output: ${basePath}/analysis/

Task:
1. Read processed files from the input directory
2. Generate an analysis report
3. Write it to: ${basePath}/analysis/report.md

Return: Key findings and identified patterns`
});
```
### Key Points

- Pass variables in prompts: Use `${variableName}` for all dynamic values
- Keep prompts focused: Give each sub-agent a clear, specific task
- Return summaries: Each sub-agent should report what it accomplished
- Sequential execution: Use `await` to maintain order when steps depend on each other
- Error handling: Check results before proceeding to dependent steps (see the sketch below)
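
A minimal sketch of the error-handling point, assuming each sub-agent is asked to prefix its returned summary with `FAILED:` on failure (that convention is an assumption for illustration, not a built-in behavior):

```js
// Step 1: process data; the prompt asks the sub-agent to report failures explicitly
const processing = await runSubagent({
  description: 'Transform raw input data',
  prompt: `You are the Data Processor specialist.
Return: a summary of files processed, or "FAILED: <reason>" on error`
});

// Check the returned summary before running the dependent step
if (String(processing).startsWith('FAILED')) {
  // Stop early instead of analyzing incomplete output
  throw new Error(`Processing step failed: ${processing}`);
}

// Step 2 runs only after Step 1 reported success
const analysis = await runSubagent({
  description: 'Analyze processed data',
  prompt: `You are the Data Analyst specialist.
Analyze the processed output and report key findings.`
});
```
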
### ⚠️ Tool Availability Requirement

Critical: If a sub-agent requires specific tools (e.g., `edit`, `execute`, `search`), the orchestrator must include those tools in its own `tools` list. Sub-agents cannot access tools that aren't available to their parent orchestrator.

Example:

```yaml
# If your sub-agents need to edit files, execute commands, or search code
tools: ['read', 'edit', 'search', 'execute', 'agent']
```

The orchestrator's tool permissions act as a ceiling for all invoked sub-agents. Plan your tool list carefully to ensure all sub-agents have the tools they need.
### ⚠️ Important Limitation

Sub-agent orchestration is NOT suitable for large-scale data processing. Avoid using `runSubagent` when:
- Processing hundreds or thousands of files
- Handling large datasets
- Performing bulk transformations on big codebases
- Orchestrating more than 5-10 sequential steps
Each sub-agent call adds latency and context overhead. For high-volume processing, implement logic directly in a single agent instead. Use orchestration only for coordinating specialized tasks on focused, manageable datasets.
## Agent Prompt Structure
The markdown content below the frontmatter defines the agent's behavior, expertise, and instructions. Well-structured prompts typically include:
- Agent Identity and Role: Who the agent is and its primary role
- Core Responsibilities: What specific tasks the agent performs
- Approach and Methodology: How the agent works to accomplish tasks
- Guidelines and Constraints: What to do/avoid and quality standards
- Output Expectations: Expected output format and quality
### Prompt Writing Best Practices
- Be Specific and Direct: Use imperative mood ("Analyze", "Generate"); avoid vague terms
- Define Boundaries: Clearly state scope limits and constraints
- Include Context: Explain domain expertise and reference relevant frameworks
- Focus on Behavior: Describe how the agent should think and work
- Use Structured Format: Headers, bullets, and lists make prompts scannable
## Variable Definition and Extraction
Agents can define dynamic parameters to extract values from user input and use them throughout the agent's behavior and sub-agent communications. This enables flexible, context-aware agents that adapt to user-provided data.
### When to Use Variables
Use variables when:
- Agent behavior depends on user input
- Need to pass dynamic values to sub-agents
- Want to make agents reusable across different contexts
- Require parameterized workflows
- Need to track or reference user-provided context
Examples:
- Extract project name from user prompt
- Capture certification name for pipeline processing
- Identify file paths or directories
- Extract configuration options
- Parse feature names or module identifiers
### Variable Declaration Pattern

Define a parameters section early in the agent prompt to document expected parameters:

```markdown
# Agent Name

## Dynamic Parameters

- **Parameter Name**: Description and usage
- **Another Parameter**: How it's extracted and used

## Your Mission

Process [PARAMETER_NAME] to accomplish [task].
```
### Variable Extraction Methods

#### 1. Explicit User Input

Ask the user to provide the variable if it is not detected in the prompt:

```markdown
## Your Mission

Process the project by analyzing your codebase.

### Step 1: Identify Project

If no project name is provided, **ASK THE USER** for:

- Project name or identifier
- Base path or directory location
- Configuration type (if applicable)

Use this information to contextualize all subsequent tasks.
```
#### 2. Implicit Extraction from Prompt

Automatically extract variables from the user's natural-language input:

```js
// Example: Extract the certification name from user input
const userInput = "Process My Certification";

// Extract key information
const certificationName = extractCertificationName(userInput);
// Result: "My Certification"

const basePath = `certifications/${certificationName}`;
// Result: "certifications/My Certification"
```
#### 3. Contextual Variable Resolution

Use file context or workspace information to derive variables:

```markdown
## Variable Resolution Strategy

1. **From User Prompt**: First, look for explicit mentions in user input
2. **From File Context**: Check the current file name or path
3. **From Workspace**: Use the workspace folder or active project
4. **From Settings**: Reference configuration files
5. **Ask User**: If all else fails, request the missing information
```
### Using Variables in Agent Prompts

#### Variable Substitution in Instructions

Use template variables in agent prompts to make them dynamic:

```markdown
# Agent Name

## Dynamic Parameters

- **Project Name**: ${projectName}
- **Base Path**: ${basePath}
- **Output Directory**: ${outputDir}

## Your Mission

Process the **${projectName}** project located at `${basePath}`.

## Process Steps

1. Read input from: `${basePath}/input/`
2. Process files according to the project configuration
3. Write results to: `${outputDir}/`
4. Generate a summary report

## Quality Standards

- Maintain project-specific coding standards for **${projectName}**
- Follow the directory structure: `${basePath}/[structure]`
```
#### Passing Variables to Sub-Agents

When invoking a sub-agent, pass all context through template variables in the prompt:

```js
// Extract and prepare variables
const basePath = `projects/${projectName}`;
const inputPath = `${basePath}/src/`;
const outputPath = `${basePath}/docs/`;

// Pass to the sub-agent with all variables substituted
const result = await runSubagent({
  description: 'Generate project documentation',
  prompt: `You are the Documentation specialist.

Project: ${projectName}
Input: ${inputPath}
Output: ${outputPath}

Task:
1. Read source files from ${inputPath}
2. Generate comprehensive documentation
3. Write it to ${outputPath}/index.md
4. Include code examples and usage guides

Return: Summary of documentation generated (file count, word count)`
});
```
The sub-agent receives all necessary context embedded in the prompt. Variables are resolved before sending the prompt, so the sub-agent works with concrete paths and values, not variable placeholders.
### Real-World Example: Code Review Orchestrator

An example of a simple orchestrator that validates code through multiple specialized agents:

```js
async function reviewCodePipeline(repositoryName, prNumber) {
  const basePath = `projects/${repositoryName}/pr-${prNumber}`;

  // Step 1: Security Review
  const security = await runSubagent({
    description: 'Scan for security vulnerabilities',
    prompt: `You are the Security Reviewer specialist.

Repository: ${repositoryName}
PR: ${prNumber}
Code: ${basePath}/changes/

Task:
1. Scan code for OWASP Top 10 vulnerabilities
2. Check for injection attacks and auth flaws
3. Write findings to ${basePath}/security-review.md

Return: List of critical, high, and medium issues found`
  });

  // Step 2: Test Coverage Check
  const coverage = await runSubagent({
    description: 'Verify test coverage for changes',
    prompt: `You are the Test Coverage specialist.

Repository: ${repositoryName}
PR: ${prNumber}
Changes: ${basePath}/changes/

Task:
1. Analyze code coverage for modified files
2. Identify untested critical paths
3. Write a report to ${basePath}/coverage-report.md

Return: Current coverage percentage and gaps`
  });

  // Step 3: Aggregate Results
  const finalReport = await runSubagent({
    description: 'Compile all review findings',
    prompt: `You are the Review Aggregator specialist.

Repository: ${repositoryName}
Reports: ${basePath}/*.md

Task:
1. Read all review reports from ${basePath}/
2. Synthesize findings into a single report
3. Determine the overall verdict (APPROVE/NEEDS_FIXES/BLOCK)
4. Write it to ${basePath}/final-review.md

Return: Final verdict and executive summary`
  });

  return finalReport;
}
```
This pattern applies to any orchestration scenario: extract variables, call sub-agents with clear context, await results.
### Variable Best Practices

#### 1. Clear Documentation

Always document which variables are expected:

```markdown
## Required Variables

- **projectName**: The name of the project (string, required)
- **basePath**: Root directory for project files (path, required)

## Optional Variables

- **mode**: Processing mode - quick/standard/detailed (enum, default: standard)
- **outputFormat**: Output format - markdown/json/html (enum, default: markdown)

## Derived Variables

- **outputDir**: Automatically set to ${basePath}/output
- **logFile**: Automatically set to ${basePath}/.log.md
```
#### 2. Consistent Naming

Use consistent variable naming conventions:

```js
// Good: Clear, descriptive naming
const variables = {
  projectName,       // What project to work on
  basePath,          // Where project files are located
  outputDirectory,   // Where to save results
  processingMode,    // How to process (detail level)
  configurationPath  // Where config files are
};

// Avoid: Ambiguous or inconsistent
const bad_variables = {
  name,   // Too generic
  path,   // Unclear which path
  mode,   // Too short
  config  // Too vague
};
```
#### 3. Validation and Constraints

Document valid values and constraints:

```markdown
## Variable Constraints

**projectName**:
- Type: string (alphanumeric, hyphens, underscores allowed)
- Length: 1-100 characters
- Required: yes
- Pattern: `/^[a-zA-Z0-9_-]+$/`

**processingMode**:
- Type: enum
- Valid values: "quick" (< 5 min), "standard" (5-15 min), "detailed" (15+ min)
- Default: "standard"
- Required: no
```
## MCP Server Configuration (Organization/Enterprise Only)

MCP servers extend agent capabilities with additional tools. They are only supported for organization- and enterprise-level agents.

### Configuration Format

```yaml
---
name: my-custom-agent
description: 'Agent with MCP integration'
tools: ['read', 'edit', 'custom-mcp/tool-1']
mcp-servers:
  custom-mcp:
    type: 'local'
    command: 'some-command'
    args: ['--arg1', '--arg2']
    tools: ["*"]
    env:
      ENV_VAR_NAME: ${{ secrets.API_KEY }}
---
```
### MCP Server Properties

- `type`: Server type (`'local'` or `'stdio'`)
- `command`: Command to start the MCP server
- `args`: Array of command arguments
- `tools`: Tools to enable from this server (`["*"]` for all)
- `env`: Environment variables (supports secrets)
### Environment Variables and Secrets

Secrets must be configured in the repository settings under the "copilot" environment.

Supported syntax:

```yaml
env:
  # Environment variable only
  VAR_NAME: COPILOT_MCP_ENV_VAR_VALUE

  # Variable with header
  VAR_NAME: $COPILOT_MCP_ENV_VAR_VALUE
  VAR_NAME: ${COPILOT_MCP_ENV_VAR_VALUE}

  # GitHub Actions-style (YAML only)
  VAR_NAME: ${{ secrets.COPILOT_MCP_ENV_VAR_VALUE }}
  VAR_NAME: ${{ var.COPILOT_MCP_ENV_VAR_VALUE }}
```
## File Organization and Naming

### Repository-Level Agents

- Location: `.github/agents/`
- Scope: Available only in the specific repository
- Access: Uses repository-configured MCP servers

### Organization/Enterprise-Level Agents

- Location: `.github-private/agents/` (then move to the `agents/` root)
- Scope: Available across all repositories in the organization/enterprise
- Access: Can configure dedicated MCP servers
### Naming Conventions

- Use lowercase with hyphens: `test-specialist.agent.md`
- The name should reflect the agent's purpose
- The filename becomes the default agent name (if `name` is not specified)
- Allowed characters: `.`, `-`, `_`, `a-z`, `A-Z`, `0-9`
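
For example (the second path is a deliberately invalid counter-example):

```text
.github/agents/test-specialist.agent.md    ✅ lowercase with hyphens
.github/agents/Test Specialist.agent.md    ❌ spaces are not allowed characters
```
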
## Agent Processing and Behavior

### Versioning

- Versions are based on Git commit SHAs for the agent file
- Create branches/tags for different agent versions
- Agents are instantiated using the latest version for the repository/branch
- PR interactions use the same agent version for consistency
### Name Conflicts

Priority (highest to lowest):

1. Repository-level agent
2. Organization-level agent
3. Enterprise-level agent

Lower-level (more specific) configurations override higher-level ones with the same name.
Tool Processing
toolslist filters available tools (built-in and MCP)- No tools specified = all tools enabled
- Empty list (
[]) = all tools disabled - Specific list = only those tools enabled
- Unrecognized tool names are ignored (allows environment-specific tools)
### MCP Server Processing Order

1. Out-of-the-box MCP servers (e.g., GitHub MCP)
2. Custom agent MCP configuration (org/enterprise only)
3. Repository-level MCP configurations

Each level can override settings from the previous levels.
## Agent Creation Checklist

### Frontmatter

- `description` field present and descriptive (50-150 chars)
- `description` wrapped in single quotes
- `name` specified (optional but recommended)
- `tools` configured appropriately (or intentionally omitted)
- `model` specified for optimal performance
- `target` set if environment-specific
- `infer` set to `false` if manual selection is required
### Prompt Content
- Clear agent identity and role defined
- Core responsibilities listed explicitly
- Approach and methodology explained
- Guidelines and constraints specified
- Output expectations documented
- Examples provided where helpful
- Instructions are specific and actionable
- Scope and boundaries clearly defined
- Total content under 30,000 characters
### File Structure

- Filename follows the lowercase-with-hyphens convention
- File placed in the correct directory (`.github/agents/` or `agents/`)
- Filename uses only allowed characters
- File extension is `.agent.md`
### Quality Assurance
- Agent purpose is unique and not duplicative
- Tools are minimal and necessary
- Instructions are clear and unambiguous
- Agent has been tested with representative tasks
- Documentation references are current
- Security considerations addressed (if applicable)
## Common Agent Patterns

### Testing Specialist

- Purpose: Focus on test coverage and quality
- Tools: All tools (for comprehensive test creation)
- Approach: Analyze, identify gaps, write tests, avoid production code changes

### Implementation Planner

- Purpose: Create detailed technical plans and specifications
- Tools: Limited to `['read', 'search', 'edit']`
- Approach: Analyze requirements, create documentation, avoid implementation

### Code Reviewer

- Purpose: Review code quality and provide feedback
- Tools: `['read', 'search']` only
- Approach: Analyze, suggest improvements, no direct modifications (see the full example after this list)

### Refactoring Specialist

- Purpose: Improve code structure and maintainability
- Tools: `['read', 'search', 'edit']`
- Approach: Analyze patterns, propose refactorings, implement safely

### Security Auditor

- Purpose: Identify security issues and vulnerabilities
- Tools: `['read', 'search', 'web']`
- Approach: Scan code, check against OWASP, report findings
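
As a concrete illustration, a hedged sketch of the Code Reviewer pattern written out as a full agent file (the prompt wording is illustrative, not canonical):

```markdown
---
description: 'Review code quality and provide feedback without modifying files'
name: 'Code Reviewer'
tools: ['read', 'search']
---

# Code Reviewer Agent

You are a code review specialist.

## Core Responsibilities

1. Analyze code quality and maintainability
2. Suggest improvements with concrete examples

## Constraints

- Do not modify any files; provide feedback only
```
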
## Common Mistakes to Avoid

### Frontmatter Errors

- ❌ Missing `description` field
- ❌ Description not wrapped in quotes
- ❌ Invalid tool names used without checking the documentation
- ❌ Incorrect YAML syntax (indentation, quotes)

### Tool Configuration Issues

- ❌ Granting excessive tool access unnecessarily
- ❌ Missing required tools for the agent's purpose
- ❌ Not using tool aliases consistently
- ❌ Forgetting the MCP server namespace (`server-name/tool`)
### Prompt Content Problems
- ❌ Vague, ambiguous instructions
- ❌ Conflicting or contradictory guidelines
- ❌ Lack of clear scope definition
- ❌ Missing output expectations
- ❌ Overly verbose instructions (exceeding character limits)
- ❌ No examples or context for complex tasks
### Organizational Issues
- ❌ Filename doesn't reflect agent purpose
- ❌ Wrong directory (confusing repo vs org level)
- ❌ Using spaces or special characters in filename
- ❌ Duplicate agent names causing conflicts
## Testing and Validation

### Manual Testing

1. Create the agent file with proper frontmatter
2. Reload VS Code or refresh GitHub.com
3. Select the agent from the dropdown in Copilot Chat
4. Test with representative user queries
5. Verify tool access works as expected
6. Confirm output meets expectations
### Integration Testing
- Test agent with different file types in scope
- Verify MCP server connectivity (if configured)
- Check agent behavior with missing context
- Test error handling and edge cases
- Validate agent switching and handoffs
### Quality Checks
- Run through agent creation checklist
- Review against common mistakes list
- Compare with example agents in repository
- Get peer review for complex agents
- Document any special configuration needs
## Additional Resources

### Official Documentation

- Create custom agents: https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/create-custom-agents

### Community Resources

### Related Files

- Prompt Files Guidelines - for creating prompt files
- Instructions Guidelines - for creating instruction files
## Version Compatibility Notes

### GitHub.com (Coding Agent)

- ✅ Fully supports all standard frontmatter properties
- ✅ Repository and org/enterprise-level agents
- ✅ MCP server configuration (org/enterprise)
- ❌ Does not support the `model`, `argument-hint`, or `handoffs` properties

### VS Code / JetBrains / Eclipse / Xcode

- ✅ Supports the `model` property for AI model selection
- ✅ Supports the `argument-hint` and `handoffs` properties
- ✅ User-profile and workspace-level agents
- ❌ Cannot configure MCP servers at the repository level
- ⚠️ Some properties may behave differently

When creating agents for multiple environments, focus on common properties and test in all target environments. Use the `target` property to create environment-specific agents when necessary.