Mirror of https://github.com/github/awesome-copilot.git, synced 2026-03-12 04:05:12 +00:00
docs: add 4 new Learning Hub articles for agents, MCP, hooks, coding agent
Add four high-priority articles identified by gap analysis against nishanil/copilot-guide:

- Building Custom Agents: personas, tools, MCP integration, patterns
- Understanding MCP Servers: what MCP is, configuration, agent usage
- Automating with Hooks: lifecycle events, hooks.json, practical examples
- Using the Copilot Coding Agent: setup steps, issue assignment, PR workflow

Update index.astro fundamentalsOrder to include all 10 articles.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
319
website/src/content/learning-hub/automating-with-hooks.md
Normal file
@@ -0,0 +1,319 @@
---
title: 'Automating with Hooks'
description: 'Learn how to use hooks to automate lifecycle events like formatting, linting, and governance checks during Copilot agent sessions.'
authors:
- GitHub Copilot Learning Hub Team
lastUpdated: '2026-02-26'
estimatedReadingTime: '8 minutes'
tags:
- hooks
- automation
- fundamentals
relatedArticles:
- ./building-custom-agents.md
- ./what-are-agents-skills-instructions.md
prerequisites:
- Basic understanding of GitHub Copilot agents
---

Hooks let you run automated scripts at key moments during a Copilot agent session—when a session starts, when the user submits a prompt, or when the agent is about to commit code. They're the glue between Copilot's AI capabilities and your team's existing tooling: linters, formatters, governance scanners, and notification systems.

This article explains how hooks work, how to configure them, and practical patterns for common automation needs.

## What Are Hooks?

Hooks are shell commands or scripts that run automatically in response to lifecycle events during a Copilot agent session. They execute outside the AI model—they're deterministic, repeatable, and under your full control.

**Key characteristics**:

- Hooks run as shell commands on the user's machine
- They execute synchronously—the agent waits for them to complete
- They can block actions (e.g., prevent commits that fail linting)
- They're defined in a `hooks.json` configuration file
- They can include bundled scripts for complex logic

### When to Use Hooks vs Other Customizations

| Use Case | Best Tool |
|----------|-----------|
| Run a linter after every code change | **Hook** |
| Teach Copilot your coding standards | **Instruction** |
| Automate a multi-step workflow | **Skill** or **Agent** |
| Scan prompts for sensitive data | **Hook** |
| Format code before committing | **Hook** |
| Generate tests for new code | **Skill** |

Hooks are ideal for **deterministic automation** that must happen reliably—things you don't want to depend on the AI remembering to do.

## Anatomy of a Hook

Each hook in this repository is a folder containing:

```
hooks/
└── my-hook/
    ├── README.md    # Documentation with frontmatter
    ├── hooks.json   # Hook configuration
    └── scripts/     # Optional bundled scripts
        └── check.sh
```

### README.md

The README provides metadata and documentation:

```markdown
---
name: 'Auto Format'
description: 'Automatically formats code using project formatters before commits'
tags: ['formatting', 'code-quality']
---

# Auto Format

Runs your project's configured formatter (Prettier, Black, gofmt, etc.)
automatically before the agent commits changes.

## Setup

1. Ensure your formatter is installed and configured
2. Copy the hooks.json to your `.copilot/hooks.json`
3. Adjust the formatter command for your project
```

### hooks.json

The configuration defines which events trigger which commands:

```json
{
  "version": 1,
  "hooks": {
    "copilotAgentCommit": [
      {
        "type": "command",
        "bash": "npx prettier --write .",
        "cwd": ".",
        "timeoutSec": 30
      }
    ]
  }
}
```

## Hook Events

Hooks can trigger on several lifecycle events:

| Event | When It Fires | Common Use Cases |
|-------|---------------|------------------|
| `sessionStart` | Agent session begins | Initialize logging, check prerequisites |
| `sessionEnd` | Agent session ends | Clean up temp files, send notifications |
| `userPromptSubmitted` | User sends a message | Scan for sensitive data, log for governance |
| `copilotAgentCommit` | Agent is about to commit | Format code, run linters, validate changes |

### Event Configuration

Each hook entry supports these fields:

```json
{
  "type": "command",
  "bash": "./scripts/my-check.sh",
  "cwd": ".",
  "timeoutSec": 10,
  "env": {
    "CUSTOM_VAR": "value"
  }
}
```

**type**: Always `"command"` for shell-based hooks.

**bash**: The command or script to execute. Can be inline or reference a script file.

**cwd**: Working directory for the command. Use `"."` for the repository root.

**timeoutSec**: Maximum execution time in seconds. The hook is killed if it exceeds this limit.

**env**: Additional environment variables passed to the command.
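To make the `bash` and `env` wiring concrete, here is a minimal sketch of what a script like `my-check.sh` could look like. Only the `CUSTOM_VAR` name comes from the configuration above; the script body, including the tool check, is an illustrative placeholder rather than a documented contract:

```shell
#!/usr/bin/env bash
# Minimal sketch of scripts/my-check.sh. CUSTOM_VAR arrives via the hook's
# "env" block; everything else here is an illustrative placeholder.
set -euo pipefail

run_check() {
  echo "running check (CUSTOM_VAR=${CUSTOM_VAR:-unset})"
  # Returning non-zero from a hook blocks the triggering action.
  if ! command -v sh >/dev/null 2>&1; then
    echo "required tool missing" >&2   # placeholder requirement check
    return 1
  fi
  echo "check passed"
}

run_check
```

Because the script exits zero, the triggering action proceeds; change the placeholder check to a real one (linter, scanner) and a failure will block it.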

## Practical Examples

### Auto-Format Before Commit

Ensure all code is formatted before the agent commits:

```json
{
  "version": 1,
  "hooks": {
    "copilotAgentCommit": [
      {
        "type": "command",
        "bash": "npx prettier --write . && git add -A",
        "cwd": ".",
        "timeoutSec": 30
      }
    ]
  }
}
```

### Lint Check Before Commit

Run ESLint and block the commit if there are errors:

```json
{
  "version": 1,
  "hooks": {
    "copilotAgentCommit": [
      {
        "type": "command",
        "bash": "npx eslint . --max-warnings 0",
        "cwd": ".",
        "timeoutSec": 60
      }
    ]
  }
}
```

If the lint command exits with a non-zero status, the commit is blocked.

### Governance Audit

Scan user prompts for potential security threats and log session activity:

```json
{
  "version": 1,
  "hooks": {
    "sessionStart": [
      {
        "type": "command",
        "bash": ".github/hooks/governance-audit/audit-session-start.sh",
        "cwd": ".",
        "timeoutSec": 5
      }
    ],
    "userPromptSubmitted": [
      {
        "type": "command",
        "bash": ".github/hooks/governance-audit/audit-prompt.sh",
        "cwd": ".",
        "env": {
          "GOVERNANCE_LEVEL": "standard",
          "BLOCK_ON_THREAT": "false"
        },
        "timeoutSec": 10
      }
    ],
    "sessionEnd": [
      {
        "type": "command",
        "bash": ".github/hooks/governance-audit/audit-session-end.sh",
        "cwd": ".",
        "timeoutSec": 5
      }
    ]
  }
}
```

This pattern is useful for enterprise environments that need to audit AI interactions for compliance.
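As a sketch of what a script like `audit-prompt.sh` could do, the snippet below appends a timestamped line to a log and honors the `GOVERNANCE_LEVEL` and `BLOCK_ON_THREAT` variables from the configuration above. The log path and the threat-detection placeholder are assumptions, not a documented contract:

```shell
#!/usr/bin/env bash
# Illustrative sketch of an audit-prompt.sh style script; the log location
# and the detection logic are placeholders, not a documented contract.
set -euo pipefail

LOG_FILE="${LOG_FILE:-/tmp/copilot-audit.log}"
level="${GOVERNANCE_LEVEL:-standard}"

# Record that a prompt was submitted, with a UTC timestamp.
printf '%s level=%s prompt-received\n' "$(date -u +%FT%TZ)" "$level" >> "$LOG_FILE"

threat_found=false  # placeholder for a real scan of the prompt text
if [ "${BLOCK_ON_THREAT:-false}" = "true" ] && [ "$threat_found" = "true" ]; then
  exit 1  # non-zero exit blocks the prompt from being processed
fi
echo "audit logged"
```

With `BLOCK_ON_THREAT` set to `"false"` (as in the config above), the script only logs; flipping it to `"true"` turns detections into blocking failures.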

### Notification on Session End

Send a Slack or Teams notification when an agent session completes:

```json
{
  "version": 1,
  "hooks": {
    "sessionEnd": [
      {
        "type": "command",
        "bash": "curl -X POST \"$SLACK_WEBHOOK_URL\" -H 'Content-Type: application/json' -d '{\"text\": \"Copilot agent session completed\"}'",
        "cwd": ".",
        "env": {
          "SLACK_WEBHOOK_URL": "${input:slackWebhook}"
        },
        "timeoutSec": 5
      }
    ]
  }
}
```

## Writing Hook Scripts

For complex logic, use bundled scripts instead of inline bash commands:

```bash
#!/usr/bin/env bash
# scripts/pre-commit-check.sh
set -euo pipefail

echo "Running pre-commit checks..."

# Format code
npx prettier --write .

# Run linter
npx eslint . --fix

# Run type checker
npx tsc --noEmit

# Stage any formatting changes
git add -A

echo "Pre-commit checks passed ✅"
```

**Tips for hook scripts**:

- Use `set -euo pipefail` to fail fast on errors
- Keep scripts focused—one responsibility per script
- Make scripts executable: `chmod +x scripts/pre-commit-check.sh`
- Test scripts manually before adding them to hooks.json
- Use reasonable timeouts—formatting a large codebase may need 30+ seconds
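Testing a script by hand before wiring it into `hooks.json` takes only a few commands. The snippet below shows that workflow with a trivial stand-in script; the `/tmp` path and the script's contents are placeholders:

```shell
# Write a trivial hook script, mark it executable, and run it once by hand
# before referencing it from hooks.json. Path and contents are placeholders.
cat > /tmp/pre-commit-check.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
echo "checks passed"
EOF
chmod +x /tmp/pre-commit-check.sh

/tmp/pre-commit-check.sh   # exit status 0 means the hook would allow the action
```

If the manual run behaves as expected (output and exit status), the same invocation can go into the `bash` field of a hook entry.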

## Best Practices

- **Keep hooks fast**: Hooks run synchronously, so slow hooks delay the agent. Set tight timeouts and optimize scripts.
- **Use non-zero exit codes to block**: If a hook exits with a non-zero code, the triggering action is blocked. Use this for must-pass checks.
- **Bundle scripts in the hook folder**: Keep related scripts alongside the hooks.json for portability.
- **Document setup requirements**: If hooks depend on tools being installed (Prettier, ESLint), document this in the README.
- **Test locally first**: Run hook scripts manually before relying on them in agent sessions.
- **Layer hooks, don't overload**: Use multiple hook entries for independent checks rather than one monolithic script.

## Common Questions

**Q: Where do I put hooks.json?**

A: Place it at `.copilot/hooks.json` in your repository root. This makes hooks available to all team members.

**Q: Can hooks access the user's prompt text?**

A: Yes, for `userPromptSubmitted` events the prompt content is available to the hook script via environment variables. See the [GitHub Copilot hooks documentation](https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/use-hooks) for details.

**Q: What happens if a hook times out?**

A: The hook is terminated and the agent continues. Set `timeoutSec` appropriately for your scripts.

**Q: Can I have multiple hooks for the same event?**

A: Yes. Hooks for the same event run in the order they appear in the array. If any hook fails (non-zero exit), subsequent hooks for that event may be skipped.

**Q: Do hooks work with the Copilot coding agent?**

A: Yes. Hooks are especially valuable with the coding agent because they provide deterministic guardrails for autonomous operations. See [Using the Copilot Coding Agent](../learning-hub/using-copilot-coding-agent/) for details.

## Next Steps

- **Explore Examples**: Browse the [Hooks Directory](../hooks/) for ready-to-use hook configurations
- **Build Agents**: [Building Custom Agents](../learning-hub/building-custom-agents/) — Create agents that complement hooks
- **Automate Further**: [Using the Copilot Coding Agent](../learning-hub/using-copilot-coding-agent/) — Run hooks in autonomous agent sessions

---
303
website/src/content/learning-hub/building-custom-agents.md
Normal file
@@ -0,0 +1,303 @@
---
title: 'Building Custom Agents'
description: 'Learn how to create specialized GitHub Copilot agents with custom personas, tool integrations, and domain expertise.'
authors:
- GitHub Copilot Learning Hub Team
lastUpdated: '2026-02-26'
estimatedReadingTime: '10 minutes'
tags:
- agents
- customization
- fundamentals
relatedArticles:
- ./what-are-agents-skills-instructions.md
- ./creating-effective-skills.md
- ./understanding-mcp-servers.md
prerequisites:
- Basic understanding of GitHub Copilot chat
- Familiarity with agents, skills, and instructions
---

Custom agents are specialized assistants that give GitHub Copilot a focused persona, specific tool access, and domain expertise. Unlike instructions (which apply passively) or skills (which handle individual tasks), agents define a complete working style—they shape how Copilot thinks, what tools it reaches for, and how it communicates throughout an entire session.

This article shows you how to design, structure, and deploy effective agents for your team's workflows.

## What Are Custom Agents?

Custom agents are Markdown files (`*.agent.md`) that configure GitHub Copilot with:

- **A persona**: The expertise, tone, and working style the agent adopts
- **Tool access**: Which built-in tools and MCP servers the agent can use
- **Guardrails**: Boundaries and conventions the agent follows
- **A model preference**: Which AI model powers the agent (optional but recommended)

When a user selects a custom agent in VS Code or assigns it to an issue via the Copilot coding agent, the agent's configuration shapes the entire interaction.

**Key Points**:

- Agents persist across a conversation—they maintain their persona and context
- Agents can invoke tools, run commands, search codebases, and interact with MCP servers
- Multiple agents can coexist in a repository, each serving different workflows
- Agents are stored in `.github/agents/` and are shared with the entire team

### How Agents Differ from Other Customizations

**Agents vs Instructions**:

- Agents are explicitly selected; instructions apply automatically to matching files
- Agents define a complete persona; instructions provide passive background context
- Use agents for interactive workflows; use instructions for coding standards

**Agents vs Skills**:

- Agents are persistent personas; skills are single-task capabilities
- Agents can invoke skills during a conversation
- Use agents for complex multi-step workflows; use skills for focused, repeatable tasks

## Anatomy of an Agent

Every agent file has two parts: YAML frontmatter and Markdown instructions.

### Frontmatter Fields

```yaml
---
name: 'Security Reviewer'
description: 'Expert security auditor that reviews code for OWASP vulnerabilities, authentication flaws, and supply chain risks'
model: Claude Sonnet 4
tools: ['codebase', 'terminal', 'github']
---
```

**name** (recommended): A human-readable display name for the agent.

**description** (required): A clear summary of what the agent does. This is shown in the agent picker and helps users find the right agent.

**model** (recommended): The AI model that powers the agent. Choose based on the complexity of the task—use more capable models for nuanced reasoning.

**tools** (recommended): An array of built-in tools and MCP servers the agent can access. Common tools include:

| Tool | Purpose |
|------|---------|
| `codebase` | Search and analyze code across the repository |
| `terminal` | Run shell commands |
| `github` | Interact with GitHub APIs (issues, PRs, etc.) |
| `fetch` | Make HTTP requests to external APIs |
| `edit` | Modify files in the workspace |

For MCP server tools, reference them by server name (e.g., `postgres`, `docker`). See [Understanding MCP Servers](../learning-hub/understanding-mcp-servers/) for details.

### Agent Instructions

After the frontmatter, write Markdown instructions that define the agent's behavior. Structure these clearly:

````markdown
---
name: 'API Design Reviewer'
description: 'Reviews API designs for consistency, RESTful patterns, and team conventions'
model: Claude Sonnet 4
tools: ['codebase', 'github']
---

# API Design Reviewer

You are an expert API designer who reviews endpoints, schemas, and contracts for consistency and best practices.

## Your Expertise

- RESTful API design patterns
- OpenAPI/Swagger specification
- Versioning strategies
- Error response standards
- Pagination and filtering patterns

## Review Checklist

When reviewing API changes:

1. **Naming**: Verify endpoints use plural nouns, consistent casing
2. **HTTP Methods**: Confirm correct verb usage (GET for reads, POST for creates)
3. **Status Codes**: Check appropriate codes (201 for creation, 404 for not found)
4. **Error Responses**: Ensure structured error objects with codes and messages
5. **Pagination**: Verify cursor-based pagination for list endpoints
6. **Versioning**: Confirm API version is specified in the path or header

## Output Format

Present findings as:
- 🔴 **Breaking**: Changes that break existing clients
- 🟡 **Warning**: Patterns that should be improved
- 🟢 **Good**: Patterns that follow our conventions
````

## Design Patterns

### The Domain Expert

Create agents with deep knowledge of a specific technology:

```markdown
---
name: 'Terraform Expert'
description: 'Infrastructure-as-code specialist for Terraform on Azure with security-first defaults'
model: Claude Sonnet 4
tools: ['codebase', 'terminal']
---

You are an expert in Terraform and Azure infrastructure.

## Principles

- Security-first: always enable encryption, disable public access by default
- Use variables for all configurable values—never hardcode
- Apply consistent tagging strategy across all resources
- Follow Azure naming conventions: {env}-{project}-{resource-type}
- Include diagnostic settings for all resources that support them
```

### The Workflow Automator

Create agents that execute multi-step processes:

```markdown
---
name: 'Release Manager'
description: 'Automates release preparation including changelog generation, version bumping, and tag creation'
model: Claude Sonnet 4
tools: ['codebase', 'terminal', 'github']
---

You are a release manager who automates the release process.

## Workflow

1. Analyze commits since last release using conventional commit format
2. Determine version bump (major/minor/patch) based on commit types
3. Generate changelog from commit messages
4. Update version in package.json / pyproject.toml
5. Create a release summary for the PR description

## Rules

- Never skip the changelog step
- Always verify the test suite passes before proceeding
- Ask for confirmation before creating tags or releases
```

### The Quality Gate

Create agents that enforce standards:

```markdown
---
name: 'Accessibility Auditor'
description: 'Reviews UI components for WCAG 2.1 AA compliance and accessibility best practices'
model: Claude Sonnet 4
tools: ['codebase']
---

You are an accessibility expert who reviews UI components for WCAG compliance.

## Audit Areas

- Semantic HTML structure
- ARIA attributes and roles
- Keyboard navigation support
- Color contrast ratios (minimum 4.5:1 for text)
- Screen reader compatibility
- Focus management in dynamic content

## When Reviewing

- Check every interactive element has an accessible name
- Verify form inputs have associated labels
- Ensure images have meaningful alt text (or empty alt for decorative)
- Test that all functionality is keyboard-accessible
```

## Connecting Agents to MCP Servers

Agents become significantly more powerful when connected to external tools via MCP servers. Reference MCP tools in the `tools` array:

```yaml
---
name: 'Database Administrator'
description: 'Expert DBA for PostgreSQL performance tuning, query optimization, and schema design'
tools: ['codebase', 'terminal', 'postgres-mcp']
---
```

The agent can then query your database, analyze query plans, and suggest optimizations—all within the conversation. For setup details, see [Understanding MCP Servers](../learning-hub/understanding-mcp-servers/).

## Best Practices

### Writing Effective Agent Personas

- **Be specific about expertise**: "Expert in React 18+ with TypeScript" beats "Frontend developer"
- **Define the working style**: Should the agent ask clarifying questions or make assumptions? Should it be concise or thorough?
- **Include guardrails**: What should the agent never do? ("Never modify production configuration files directly")
- **Provide examples**: Show the output format you expect (review comments, code patterns, etc.)

### Choosing the Right Model

| Scenario | Recommended Model |
|----------|-------------------|
| Complex reasoning, security review | Claude Sonnet 4 or higher |
| Code generation, refactoring | GPT-4.1 |
| Quick analysis, simple tasks | Claude Haiku or GPT-4.1-mini |
| Large codebase understanding | Models with larger context windows |

### Organizing Agents in Your Repository

```
.github/
└── agents/
    ├── security-reviewer.agent.md
    ├── api-designer.agent.md
    ├── terraform-expert.agent.md
    └── release-manager.agent.md
```

Keep agents focused—one persona per file. If you find an agent trying to do too many things, split it into multiple agents or extract common tasks into skills that agents can invoke.
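A quick way to see which agents a repository defines is to list each `*.agent.md` file alongside the `name` from its frontmatter. This is a small convenience sketch, not tooling the repo provides; the demo path and the grep/sed parsing are assumptions:

```shell
# Create a demo agents directory with one agent file, then list display names
# by pulling the frontmatter "name" field out of each *.agent.md file.
mkdir -p /tmp/demo/.github/agents
printf -- "---\nname: 'Security Reviewer'\n---\n" > /tmp/demo/.github/agents/security-reviewer.agent.md

for f in /tmp/demo/.github/agents/*.agent.md; do
  name=$(grep -m1 "^name:" "$f" | sed "s/^name:[[:space:]]*//; s/'//g")
  echo "$(basename "$f"): $name"
done
```

Run against a real repository, point the loop at `.github/agents/` to get a one-line-per-agent inventory.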

## Common Questions

**Q: How do I select a custom agent?**

A: In VS Code, open Copilot Chat and use the agent picker dropdown at the top of the chat panel. Your custom agents appear alongside built-in options. You can also `@mention` an agent by name.

**Q: Can agents use skills?**

A: Yes. Agents can discover and invoke skills during a conversation based on the user's intent. Skills extend what an agent can do without bloating the agent's own instructions.

**Q: How many agents should a repository have?**

A: Start with 2–3 agents for your most common workflows. Add more as patterns emerge. Typical teams have 3–8 agents covering areas like code review, infrastructure, testing, and documentation.

**Q: Can I use an agent with the Copilot coding agent?**

A: Yes. When you assign an issue to Copilot, you can specify which agent should handle it. The agent's persona and tool access apply to the autonomous coding session. See [Using the Copilot Coding Agent](../learning-hub/using-copilot-coding-agent/) for details.

**Q: Should agents include code examples?**

A: Yes, when defining output format or coding patterns. Show what you expect the agent to produce—review formats, code structure, commit message style, etc.

## Common Pitfalls

- ❌ **Too broad**: "You are a software engineer" — no focus or guardrails
  ✅ **Instead**: Define specific expertise, review criteria, and output format

- ❌ **No tools specified**: Agent can't search code or run commands
  ✅ **Instead**: Declare the tools the agent needs in frontmatter

- ❌ **Conflicting with instructions**: Agent says "use tabs" but instructions say "use spaces"
  ✅ **Instead**: Agents should complement instructions, not contradict them

- ❌ **Monolithic agent**: One agent that handles security, testing, docs, and deployment
  ✅ **Instead**: Create focused agents and let them invoke shared skills

## Next Steps

- **Explore Repository Examples**: Browse the [Agents Directory](../agents/) for production agent definitions
- **Connect External Tools**: [Understanding MCP Servers](../learning-hub/understanding-mcp-servers/) — Give agents access to databases, APIs, and more
- **Automate with Coding Agent**: [Using the Copilot Coding Agent](../learning-hub/using-copilot-coding-agent/) — Run agents autonomously on issues
- **Add Reusable Tasks**: [Creating Effective Skills](../learning-hub/creating-effective-skills/) — Build tasks agents can discover and invoke

---
223
website/src/content/learning-hub/understanding-mcp-servers.md
Normal file
@@ -0,0 +1,223 @@
---
title: 'Understanding MCP Servers'
description: 'Learn how Model Context Protocol servers extend GitHub Copilot with access to external tools, databases, and APIs.'
authors:
- GitHub Copilot Learning Hub Team
lastUpdated: '2026-02-26'
estimatedReadingTime: '8 minutes'
tags:
- mcp
- tools
- fundamentals
relatedArticles:
- ./building-custom-agents.md
- ./what-are-agents-skills-instructions.md
prerequisites:
- Basic understanding of GitHub Copilot agents
---

GitHub Copilot's built-in tools—code search, file editing, terminal access—cover a wide range of tasks. But real-world workflows often need access to external systems: databases, cloud APIs, monitoring dashboards, or internal services. That's where MCP servers come in.

This article explains what MCP is, how to configure servers, and how agents use them to accomplish tasks that would otherwise require context-switching.

## What Is MCP?

The **Model Context Protocol (MCP)** is an open standard for connecting AI assistants to external data sources and tools. An MCP server is a lightweight process that exposes capabilities—called **tools**—that Copilot can invoke during a conversation.

Think of MCP servers as bridges:

```
GitHub Copilot ←→ MCP Server ←→ External System
                   (bridge)     (database, API, etc.)
```

**Key characteristics**:

- MCP is an open protocol, not specific to GitHub Copilot—it works across AI tools
- Servers run locally on your machine or in a container
- Each server exposes one or more tools with defined inputs and outputs
- Agents and users can invoke MCP tools naturally during conversation

### Built-in vs MCP Tools

GitHub Copilot provides several **built-in tools** that are always available:

| Built-in Tool | What It Does |
|--------------|--------------|
| `codebase` | Search and analyze code across the repository |
| `terminal` | Run shell commands in the integrated terminal |
| `edit` | Create and modify files in the workspace |
| `fetch` | Make HTTP requests to URLs |
| `search` | Search across workspace files |
| `github` | Interact with GitHub APIs |

**MCP tools** extend this with external capabilities:

| MCP Server Example | What It Adds |
|-------------------|--------------|
| PostgreSQL server | Query databases, inspect schemas, analyze query plans |
| Docker server | Manage containers, inspect logs, deploy services |
| Sentry server | Fetch error reports, analyze crash data |
| Figma server | Read design tokens, component specs |

## Configuring MCP Servers

MCP servers are configured per-workspace in `.vscode/mcp.json`:

```json
{
  "servers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "DATABASE_URL": "postgresql://user:pass@localhost:5432/mydb"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./docs"]
    }
  }
}
```
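As a quick way to try this layout, the snippet below scaffolds a workspace with a filesystem-only `mcp.json` (no credentials involved). The `/tmp` path is purely for illustration; in a real project the file lives at `.vscode/mcp.json` in the workspace root:

```shell
# Scaffold a demo workspace with a minimal .vscode/mcp.json (filesystem
# server only, so no secrets are needed). The /tmp path is illustrative.
mkdir -p /tmp/demo-ws/.vscode
cat > /tmp/demo-ws/.vscode/mcp.json <<'EOF'
{
  "servers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./docs"]
    }
  }
}
EOF

grep -q '"servers"' /tmp/demo-ws/.vscode/mcp.json && echo "mcp.json written"
```

Open the workspace in VS Code afterwards and the configured server should appear among the available MCP servers.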

### Configuration Fields

**command**: The executable to run the MCP server (e.g., `npx`, `python`, `docker`).

**args**: Arguments passed to the command. Most MCP servers are distributed as npm packages and can be run with `npx -y`.

**env**: Environment variables passed to the server process. Use these for connection strings, API keys, and configuration—never hardcode secrets in the JSON file.

### Common MCP Server Configurations

**PostgreSQL** — Query databases and inspect schemas:

```json
{
  "postgres": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-postgres"],
    "env": {
      "DATABASE_URL": "${input:databaseUrl}"
    }
  }
}
```

**GitHub** — Extended GitHub API access:

```json
{
  "github": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-github"],
    "env": {
      "GITHUB_TOKEN": "${input:githubToken}"
    }
  }
}
```

**Filesystem** — Controlled access to specific directories:

```json
{
  "filesystem": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "./data", "./config"]
  }
}
```

> **Security tip**: Use `${input:variableName}` for sensitive values. VS Code will prompt for these at runtime rather than storing them in the file.

## How Agents Use MCP Tools

When an agent declares an MCP server in its `tools` array, Copilot can invoke that server's capabilities during conversation:

```yaml
---
name: 'Database Administrator'
description: 'Expert DBA for PostgreSQL performance tuning and schema design'
tools: ['codebase', 'terminal', 'postgres']
---
```

With this configuration, the agent can:

- Run SQL queries to inspect table structures
- Analyze query execution plans
- Suggest index optimizations based on actual data
- Compare schema changes against the live database
### Example Conversation
|
||||
|
||||
```
|
||||
User: The users page is loading slowly. Can you figure out why?
|
||||
|
||||
Agent: Let me check the query that powers the users page.
|
||||
[Searches codebase for user listing query]
|
||||
[Runs EXPLAIN ANALYZE via postgres MCP server]
|
||||
|
||||
I found the issue. The query on user_profiles is doing a sequential scan
|
||||
on 2.4M rows. Here's what I recommend:
|
||||
|
||||
CREATE INDEX idx_user_profiles_active ON user_profiles (is_active)
|
||||
WHERE is_active = true;
|
||||
|
||||
This should reduce the query time from ~3.2s to ~15ms based on the
|
||||
current data distribution.
|
||||
```
|
||||
|
||||
Without the MCP server, the agent would have to guess at database structure and performance characteristics. With it, the agent works with real data.

## Finding MCP Servers

The MCP ecosystem is growing rapidly. Here are key resources:

- **[Official MCP Servers](https://github.com/modelcontextprotocol/servers)**: Reference implementations for common services (PostgreSQL, Slack, Google Drive, etc.)
- **[MCP Specification](https://spec.modelcontextprotocol.io/)**: The protocol specification for building your own servers
- **[Awesome MCP Servers](https://github.com/punkpeye/awesome-mcp-servers)**: Community-curated list of MCP servers

### Building Your Own MCP Server

If your team has internal tools or proprietary APIs, you can build custom MCP servers. The protocol supports three main capability types:

| Capability | Description | Example |
|-----------|-------------|---------|
| **Tools** | Functions the AI can invoke | `query_database`, `deploy_service` |
| **Resources** | Data the AI can read | Database schemas, API docs |
| **Prompts** | Pre-built conversation templates | Common troubleshooting flows |

MCP server SDKs are available in [Python](https://github.com/modelcontextprotocol/python-sdk), [TypeScript](https://github.com/modelcontextprotocol/typescript-sdk), and other languages. Browse the [Agents Directory](../agents/) for examples of agents built around MCP server expertise.
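
Under the hood, every capability is exchanged as JSON-RPC 2.0 messages. A tool invocation, for instance, arrives at your server as a `tools/call` request (message shapes follow the MCP specification; the tool name and arguments below are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "query_database",
    "arguments": { "sql": "SELECT count(*) FROM users" }
  }
}
```

The server replies with a `result` whose `content` array holds typed items such as `{ "type": "text", "text": "42" }`, and `tools/list` lets the client discover available tools along with their JSON Schema input definitions.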

## Best Practices

- **Principle of least privilege**: Only give MCP servers the minimum access they need. Use read-only database connections for analysis agents.
- **Keep secrets out of config files**: Use `${input:variableName}` for API keys and connection strings, or load from environment variables.
- **Document your servers**: Add comments or a README explaining which MCP servers your project uses and why.
- **Version control carefully**: Commit `.vscode/mcp.json` for shared server configurations, but use `.gitignore` for any files containing credentials.
- **Test server connectivity**: Verify MCP servers start correctly before relying on them in agent workflows.

## Common Questions

**Q: Do MCP servers run in the cloud?**

A: No, MCP servers typically run locally on your machine as child processes. They're started automatically when needed and stopped when the session ends.

**Q: Can I use MCP servers without custom agents?**

A: Yes. Once configured in `.vscode/mcp.json`, MCP tools are available in any Copilot Chat session. Custom agents simply make it easier to pre-select the right tools for a workflow.

**Q: Are MCP servers secure?**

A: MCP servers run with the same permissions as your user account. Follow least-privilege principles: use read-only database connections, scope API tokens narrowly, and review server code before trusting it.

**Q: How many MCP servers can I configure?**

A: There's no hard limit, but each server is a running process. Configure only the servers you actively use. Most projects use 1–3 servers.

## Next Steps

- **Build Agents**: [Building Custom Agents](../learning-hub/building-custom-agents/) — Create agents that leverage MCP tools
- **Explore Examples**: Browse the [Agents Directory](../agents/) for agents built around MCP server integrations
- **Protocol Deep Dive**: [MCP Specification](https://spec.modelcontextprotocol.io/) — Learn the protocol details for building your own servers

---

277
website/src/content/learning-hub/using-copilot-coding-agent.md
Normal file

@@ -0,0 +1,277 @@

---
title: 'Using the Copilot Coding Agent'
description: 'Learn how to use the GitHub Copilot coding agent to autonomously work on issues, generate pull requests, and automate development tasks.'
authors:
- GitHub Copilot Learning Hub Team
lastUpdated: '2026-02-26'
estimatedReadingTime: '9 minutes'
tags:
- coding-agent
- automation
- agentic
relatedArticles:
- ./building-custom-agents.md
- ./automating-with-hooks.md
prerequisites:
- Understanding of GitHub Copilot agents
- Repository with GitHub Copilot enabled
---

The Copilot coding agent is an autonomous agent that can work on GitHub issues without continuous human guidance. You assign it an issue, it spins up a cloud environment, writes code, runs tests, and opens a pull request—all while you focus on other work. Think of it as a junior developer who never sleeps, handles well-defined tasks, and always asks for review.

This article explains how the coding agent works, how to set it up, and best practices for getting the most out of autonomous coding sessions.

## How It Works

The coding agent follows a straightforward workflow:

```
1. You assign an issue to Copilot (or @mention it)
        ↓
2. Copilot spins up a cloud dev environment
        ↓
3. It reads the issue, your instructions, and your codebase
        ↓
4. It plans and implements a solution
        ↓
5. It runs tests and validates the changes
        ↓
6. It opens a pull request for your review
```

The agent works in its own branch, in an isolated environment. It can't merge code or deploy—it always produces a PR that a human must review and approve.

**Key characteristics**:
- Runs in a secure, sandboxed cloud environment
- Uses your repository's instructions, agents, and skills for context
- Executes hooks (linting, formatting) automatically
- Creates a PR with a summary of what it did and why
- Supports iterating—you can comment on the PR and the agent will refine

## Setting Up the Environment

The coding agent needs to know how to set up your project. Define this in `.github/copilot-setup-steps.yml`:

```yaml
# .github/copilot-setup-steps.yml
steps:
  - name: Install dependencies
    run: npm ci

  - name: Build the project
    run: npm run build

  - name: Verify tests pass
    run: npm test
```

### What to Include

Think of this file as bootstrapping instructions for a new developer joining the project:

**Language runtimes**: If your project needs a specific Node.js, Python, or Go version, install it here.

**Dependencies**: Install all project dependencies (`npm ci`, `pip install -r requirements.txt`, `bundle install`).

**Build step**: Compile the project if needed, so the agent can verify its changes build successfully.

**Test command**: Run the test suite so the agent can validate its changes don't break existing functionality.

**Example for a Python project**:
```yaml
steps:
  - name: Set up Python
    uses: actions/setup-python@v5
    with:
      python-version: '3.12'

  - name: Install dependencies
    run: pip install -r requirements.txt

  - name: Run tests
    run: pytest
```

**Example for a multi-language project**:
```yaml
steps:
  - name: Install Node.js dependencies
    run: npm ci

  - name: Install Python dependencies
    run: pip install -r requirements.txt

  - name: Build frontend
    run: npm run build

  - name: Run all tests
    run: npm test && pytest
```

## Assigning Work to the Coding Agent

There are several ways to trigger the coding agent:

### From a GitHub Issue

1. Create a well-described issue with clear acceptance criteria
2. Assign the issue to **Copilot** (it appears as an assignee option)
3. The agent starts working within minutes

### From a Comment

On any issue, comment:
```
@copilot work on this
```

Or provide more specific direction:
```
@copilot implement the user avatar upload feature described above.
Use the existing FileUpload component and S3 service.
```

### Specifying an Agent

You can direct the coding agent to use a specific custom agent:

```
@copilot use the terraform-expert agent to implement this infrastructure change
```

The agent will adopt the persona, tools, and guardrails defined in that agent file.

## Writing Effective Issues for the Coding Agent

The coding agent is only as good as the issue it receives. Well-structured issues lead to better results.

### Good Issue Structure

```markdown
## Summary
Add a rate limiter to the /api/login endpoint to prevent brute force attacks.

## Requirements
- Limit to 5 attempts per IP address per 15-minute window
- Return HTTP 429 with a Retry-After header when limit is exceeded
- Use the existing Redis cache for rate tracking
- Log rate limit violations to our security audit log

## Acceptance Criteria
- [ ] Rate limiter middleware is applied to POST /api/login
- [ ] Tests cover: normal login, rate limit hit, rate limit reset
- [ ] Existing login tests continue to pass

## Context
- Rate limiter utility exists at src/middleware/rate-limiter.ts
- Redis client is configured in src/config/redis.ts
- Security audit logger is at src/utils/security-logger.ts
```

### Tips for Better Results

- **Be specific**: "Add input validation" is vague. "Validate email format and password length (8+ chars) on the registration endpoint" is actionable.
- **Point to existing code**: Reference files, utilities, and patterns the agent should use.
- **Define done**: List acceptance criteria or test cases that verify the work is complete.
- **Scope appropriately**: Single-feature issues work best. Break large features into smaller issues.
- **Include constraints**: If there are things the agent should NOT do ("don't modify the database schema"), say so explicitly.

## Working with the Pull Request

When the coding agent finishes, it opens a PR with:

- A description of changes and the reasoning behind them
- File-by-file summaries of what changed
- References back to the original issue

### Reviewing the PR

Review coding agent PRs like any other:

1. **Read the summary**: Understand what the agent did and why
2. **Check the diff**: Verify the implementation matches your expectations
3. **Run tests locally**: Confirm tests pass in your environment
4. **Leave comments**: If something needs to change, comment on the PR

### Iterating with Comments

If the PR needs adjustments, comment directly:

```
@copilot the rate limiter should use a sliding window, not a fixed window.
Also, add a test for the Retry-After header value.
```

The agent will read your feedback, make changes, and push new commits to the same PR.

## Hooks and the Coding Agent

Hooks are especially valuable with the coding agent because they provide deterministic guardrails for autonomous work:

- **`copilotAgentCommit`**: Format code, run linters, and validate changes before every commit the agent makes
- **`sessionStart`**: Log the start of autonomous sessions for governance
- **`sessionEnd`**: Send notifications when the agent finishes

See [Automating with Hooks](../learning-hub/automating-with-hooks/) for configuration details.
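
Wiring these events up amounts to mapping each lifecycle event to one or more shell commands in your hooks configuration. As a rough sketch only — the event names come from the list above, but treat the file shape and script paths as hypothetical and consult the hooks article for the authoritative schema:

```json
{
  "hooks": {
    "copilotAgentCommit": [
      { "command": "npx prettier --write . && npx eslint . --fix" }
    ],
    "sessionStart": [
      { "command": "./scripts/log-agent-session.sh start" }
    ],
    "sessionEnd": [
      { "command": "./scripts/notify-team.sh" }
    ]
  }
}
```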

## Best Practices

### Setting Up for Success

- **Invest in `copilot-setup-steps.yml`**: A reliable setup means the agent can build and test confidently. If tests are flaky, the agent will struggle.
- **Add comprehensive instructions**: The agent reads your `.github/instructions/` files. The more context you provide about patterns and conventions, the better the output.
- **Define hooks for formatting**: Hooks ensure the agent's code meets your style requirements automatically, reducing review friction.

### Choosing the Right Tasks

The coding agent excels at:
- ✅ Well-defined feature implementations with clear acceptance criteria
- ✅ Bug fixes with reproducible steps
- ✅ Adding tests to existing code
- ✅ Refactoring with specific goals (extract function, rename, etc.)
- ✅ Documentation updates based on code changes

It's less suited for:
- ❌ Ambiguous design decisions that need team discussion
- ❌ Large architectural changes spanning many files
- ❌ Tasks requiring access to external systems not in the dev environment
- ❌ Performance optimization without clear metrics

### Security Considerations

- The coding agent works in an isolated environment—it can't access your local machine
- It can only modify code in its branch—it can't push to main or deploy
- All changes go through PR review before merging
- Use hooks to enforce security scanning on every commit
- Scope repository permissions appropriately

## Common Questions

**Q: How long does the coding agent take?**

A: Typically 5–30 minutes depending on the complexity of the task and the size of the codebase. You'll receive a notification when the PR is ready.

**Q: Can I use the coding agent with private repositories?**

A: Yes. The coding agent works with both public and private repositories where GitHub Copilot is enabled.

**Q: What if the agent gets stuck?**

A: The agent has built-in timeouts. If it can't make progress, it will open a PR with what it has and explain what it couldn't resolve. You can then comment with guidance or take over manually.

**Q: Can I assign multiple issues at once?**

A: Yes. The coding agent can work on multiple issues in parallel, each in its own branch. Use Mission Control on GitHub.com to track all active agent sessions.

**Q: Does the coding agent use my custom agents?**

A: Yes. You can specify which agent to use when assigning work. The coding agent adopts that agent's persona, tools, and guardrails for the session.

## Next Steps

- **Set Up Your Environment**: Create `.github/copilot-setup-steps.yml` for your project
- **Add Guardrails**: [Automating with Hooks](../learning-hub/automating-with-hooks/) — Ensure code quality in autonomous sessions
- **Build Custom Agents**: [Building Custom Agents](../learning-hub/building-custom-agents/) — Create specialized agents for the coding agent to use
- **Explore Configuration**: [Copilot Configuration Basics](../learning-hub/copilot-configuration-basics/) — Set up repository-level customizations

---

@@ -11,6 +11,10 @@ const fundamentalsOrder = [
   'copilot-configuration-basics',
   'defining-custom-instructions',
   'creating-effective-skills',
+  'building-custom-agents',
+  'understanding-mcp-servers',
+  'automating-with-hooks',
+  'using-copilot-coding-agent',
   'before-after-customization-examples',
 ];