chore: publish from staged

This commit is contained in:
github-actions[bot]
2026-03-06 10:20:53 +00:00
parent fca5de1f6a
commit 9ef54533f0
245 changed files with 37420 additions and 203 deletions

---
name: az-cost-optimize
description: 'Analyze Azure resources used in the app (IaC files and/or resources in a target resource group) and optimize costs - creating GitHub issues for identified optimizations.'
---
# Azure Cost Optimize
This workflow analyzes Infrastructure-as-Code (IaC) files and Azure resources to generate cost optimization recommendations. It creates individual GitHub issues for each optimization opportunity plus one EPIC issue to coordinate implementation, enabling efficient tracking and execution of cost savings initiatives.
## Prerequisites
- Azure MCP server configured and authenticated
- GitHub MCP server configured and authenticated
- Target GitHub repository identified
- Azure resources deployed (IaC files optional but helpful)
- Prefer Azure MCP tools (`azmcp-*`) over direct Azure CLI when available
## Workflow Steps
### Step 1: Get Azure Best Practices
**Action**: Retrieve cost optimization best practices before analysis
**Tools**: Azure MCP best practices tool
**Process**:
1. **Load Best Practices**:
- Execute `azmcp-bestpractices-get` to retrieve the latest Azure optimization guidelines. These may not cover every scenario but provide a foundation.
- Use these practices to inform subsequent analysis and recommendations as much as possible
- Reference best practices in optimization recommendations, either from the MCP tool output or general Azure documentation
### Step 2: Discover Azure Infrastructure
**Action**: Dynamically discover and analyze Azure resources and configurations
**Tools**: Azure MCP tools + Azure CLI fallback + Local file system access
**Process**:
1. **Resource Discovery**:
- Execute `azmcp-subscription-list` to find available subscriptions
- Execute `azmcp-group-list --subscription <subscription-id>` to find resource groups
- Get a list of all resources in the relevant group(s):
- Use `az resource list --subscription <id> --resource-group <name>`
- For each resource type, use MCP tools first if possible, then CLI fallback:
- `azmcp-cosmos-account-list --subscription <id>` - Cosmos DB accounts
- `azmcp-storage-account-list --subscription <id>` - Storage accounts
- `azmcp-monitor-workspace-list --subscription <id>` - Log Analytics workspaces
- `azmcp-keyvault-key-list` - Key Vaults
- `az webapp list` - Web Apps (fallback - no MCP tool available)
- `az appservice plan list` - App Service Plans (fallback)
- `az functionapp list` - Function Apps (fallback)
- `az sql server list` - SQL Servers (fallback)
- `az redis list` - Redis Cache (fallback)
- ... and so on for other resource types
2. **IaC Detection**:
- Use `file_search` to scan for IaC files: "**/*.bicep", "**/*.tf", "**/main.json", "**/*template*.json"
- Parse resource definitions to understand intended configurations
- Compare against discovered resources to identify discrepancies
- Note presence of IaC files for implementation recommendations later on
- Do NOT use any files from the repository other than IaC files; other files are NOT a source of truth.
- If you do not find IaC files, then STOP and report no IaC files found to the user.
3. **Configuration Analysis**:
- Extract current SKUs, tiers, and settings for each resource
- Identify resource relationships and dependencies
- Map resource utilization patterns where available
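
The CLI fallback in the discovery step can be sketched as follows. This is an illustrative sketch, assuming the `az` CLI is installed and authenticated; `group_by_type` is a hypothetical helper for routing each resource type to the right MCP tool or CLI command:

```python
import json
import subprocess

def list_resources(subscription_id: str, resource_group: str) -> list[dict]:
    """Fallback discovery via the Azure CLI when no MCP tool covers a type."""
    result = subprocess.run(
        ["az", "resource", "list",
         "--subscription", subscription_id,
         "--resource-group", resource_group,
         "--output", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def group_by_type(resources: list[dict]) -> dict[str, list[str]]:
    """Bucket resource names by type so each type can be routed to the right tool."""
    grouped: dict[str, list[str]] = {}
    for r in resources:
        grouped.setdefault(r["type"], []).append(r["name"])
    return grouped
```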
### Step 3: Collect Usage Metrics & Validate Current Costs
**Action**: Gather utilization data AND verify actual resource costs
**Tools**: Azure MCP monitoring tools + Azure CLI
**Process**:
1. **Find Monitoring Sources**:
- Use `azmcp-monitor-workspace-list --subscription <id>` to find Log Analytics workspaces
- Use `azmcp-monitor-table-list --subscription <id> --workspace <name> --table-type "CustomLog"` to discover available data
2. **Execute Usage Queries**:
- Use `azmcp-monitor-log-query` with these predefined queries:
- Query: "recent" for recent activity patterns
- Query: "errors" for error-level logs indicating issues
- For custom analysis, use KQL queries:
```kql
// CPU utilization for App Services
AppServiceAppLogs
| where TimeGenerated > ago(7d)
| summarize avg(CpuTime) by Resource, bin(TimeGenerated, 1h)
// Cosmos DB RU consumption
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DOCUMENTDB"
| where TimeGenerated > ago(7d)
| summarize avg(RequestCharge) by Resource
// Storage account access patterns
StorageBlobLogs
| where TimeGenerated > ago(7d)
| summarize RequestCount=count() by AccountName, bin(TimeGenerated, 1d)
```
3. **Calculate Baseline Metrics**:
- CPU/Memory utilization averages
- Database throughput patterns
- Storage access frequency
- Function execution rates
4. **VALIDATE CURRENT COSTS**:
- Using the SKU/tier configurations discovered in Step 2
- Look up current Azure pricing at https://azure.microsoft.com/pricing/ or use `az billing` commands
- Document: Resource → Current SKU → Estimated monthly cost
- Calculate realistic current monthly total before proceeding to recommendations
### Step 4: Generate Cost Optimization Recommendations
**Action**: Analyze resources to identify optimization opportunities
**Tools**: Local analysis using collected data
**Process**:
1. **Apply Optimization Patterns** based on resource types found:
**Compute Optimizations**:
- App Service Plans: Right-size based on CPU/memory usage
- Function Apps: Premium → Consumption plan for low usage
- Virtual Machines: Scale down oversized instances
**Database Optimizations**:
- Cosmos DB:
- Provisioned → Serverless for variable workloads
- Right-size RU/s based on actual usage
- SQL Database: Right-size service tiers based on DTU usage
**Storage Optimizations**:
- Implement lifecycle policies (Hot → Cool → Archive)
- Consolidate redundant storage accounts
- Right-size storage tiers based on access patterns
**Infrastructure Optimizations**:
- Remove unused/redundant resources
- Implement auto-scaling where beneficial
- Schedule non-production environments
2. **Calculate Evidence-Based Savings**:
- Current validated cost → Target cost = Savings
- Document pricing source for both current and target configurations
3. **Calculate Priority Score** for each recommendation:
```
Priority Score = (Value Score × Monthly Savings) / (Risk Score × Implementation Days)
High Priority: Score > 20
Medium Priority: Score 5-20
Low Priority: Score < 5
```
4. **Validate Recommendations**:
- Ensure Azure CLI commands are accurate
- Verify estimated savings calculations
- Assess implementation risks and prerequisites
- Ensure all savings calculations have supporting evidence
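
The scoring formula above can be sketched directly; the band thresholds follow the High/Medium/Low cutoffs given in step 3:

```python
def priority_score(value: float, monthly_savings: float,
                   risk: float, implementation_days: float) -> float:
    """Priority Score = (Value × Monthly Savings) / (Risk × Implementation Days)."""
    return (value * monthly_savings) / (risk * implementation_days)

def priority_band(score: float) -> str:
    """Map a score to the High/Medium/Low bands defined above."""
    if score > 20:
        return "High"
    if score >= 5:
        return "Medium"
    return "Low"
```

For example, a change worth $100/month with value 8/10, risk 4/10, and 2 implementation days scores `(8 × 100) / (4 × 2) = 100`, landing in the High band.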
### Step 5: User Confirmation
**Action**: Present summary and get approval before creating GitHub issues
**Process**:
1. **Display Optimization Summary**:
```
🎯 Azure Cost Optimization Summary
📊 Analysis Results:
• Total Resources Analyzed: X
• Current Monthly Cost: $X
• Potential Monthly Savings: $Y
• Optimization Opportunities: Z
• High Priority Items: N
🏆 Recommendations:
1. [Resource]: [Current SKU] → [Target SKU] = $X/month savings - [Risk Level] | [Implementation Effort]
2. [Resource]: [Current Config] → [Target Config] = $Y/month savings - [Risk Level] | [Implementation Effort]
3. [Resource]: [Current Config] → [Target Config] = $Z/month savings - [Risk Level] | [Implementation Effort]
... and so on
💡 This will create:
• Z individual GitHub issues (one per optimization)
• 1 EPIC issue to coordinate implementation
❓ Proceed with creating GitHub issues? (y/n)
```
2. **Wait for User Confirmation**: Only proceed if user confirms
### Step 6: Create Individual Optimization Issues
**Action**: Create separate GitHub issues for each optimization opportunity. Label them with "cost-optimization" (green) and "azure" (blue).
**MCP Tools Required**: `create_issue` for each recommendation
**Process**:
1. **Create Individual Issues** using this template:
**Title Format**: `[COST-OPT] [Resource Type] - [Brief Description] - $X/month savings`
**Body Template**:
```markdown
## 💰 Cost Optimization: [Brief Title]
**Monthly Savings**: $X | **Risk Level**: [Low/Medium/High] | **Implementation Effort**: X days
### 📋 Description
[Clear explanation of the optimization and why it's needed]
### 🔧 Implementation
**IaC Files Detected**: [Yes/No - based on file_search results]
```bash
# If IaC files found: Show IaC modifications + deployment
# File: infrastructure/bicep/modules/app-service.bicep
# Change: sku.name: 'S3' → 'B2'
az deployment group create --resource-group [rg] --template-file infrastructure/bicep/main.bicep
# If no IaC files: Direct Azure CLI commands + warning
# ⚠️ No IaC files found. If they exist elsewhere, modify those instead.
az appservice plan update --name [plan] --sku B2
```
### 📊 Evidence
- Current Configuration: [details]
- Usage Pattern: [evidence from monitoring data]
- Cost Impact: $X/month → $Y/month
- Best Practice Alignment: [reference to Azure best practices if applicable]
### ✅ Validation Steps
- [ ] Test in non-production environment
- [ ] Verify no performance degradation
- [ ] Confirm cost reduction in Azure Cost Management
- [ ] Update monitoring and alerts if needed
### ⚠️ Risks & Considerations
- [Risk 1 and mitigation]
- [Risk 2 and mitigation]
**Priority Score**: X | **Value**: X/10 | **Risk**: X/10
```
### Step 7: Create EPIC Coordinating Issue
**Action**: Create a master issue to track all optimization work. Label it with "cost-optimization" (green), "azure" (blue), and "epic" (purple).
**MCP Tools Required**: `create_issue` for EPIC
**Note about Mermaid diagrams**: Verify that the Mermaid syntax is valid, and follow accessibility guidelines (styling, colors, etc.) when creating the diagrams.
**Process**:
1. **Create EPIC Issue**:
**Title**: `[EPIC] Azure Cost Optimization Initiative - $X/month potential savings`
**Body Template**:
```markdown
# 🎯 Azure Cost Optimization EPIC
**Total Potential Savings**: $X/month | **Implementation Timeline**: X weeks
## 📊 Executive Summary
- **Resources Analyzed**: X
- **Optimization Opportunities**: Y
- **Total Monthly Savings Potential**: $X
- **High Priority Items**: N
## 🏗️ Current Architecture Overview
```mermaid
graph TB
subgraph "Resource Group: [name]"
[Generated architecture diagram showing current resources and costs]
end
```
## 📋 Implementation Tracking
### 🚀 High Priority (Implement First)
- [ ] #[issue-number]: [Title] - $X/month savings
- [ ] #[issue-number]: [Title] - $X/month savings
### ⚡ Medium Priority
- [ ] #[issue-number]: [Title] - $X/month savings
- [ ] #[issue-number]: [Title] - $X/month savings
### 🔄 Low Priority (Nice to Have)
- [ ] #[issue-number]: [Title] - $X/month savings
## 📈 Progress Tracking
- **Completed**: 0 of Y optimizations
- **Savings Realized**: $0 of $X/month
- **Implementation Status**: Not Started
## 🎯 Success Criteria
- [ ] All high-priority optimizations implemented
- [ ] >80% of estimated savings realized
- [ ] No performance degradation observed
- [ ] Cost monitoring dashboard updated
## 📝 Notes
- Review and update this EPIC as issues are completed
- Monitor actual vs. estimated savings
- Consider scheduling regular cost optimization reviews
```
## Error Handling
- **Cost Validation**: If savings estimates lack supporting evidence or seem inconsistent with Azure pricing, re-verify configurations and pricing sources before proceeding
- **Azure Authentication Failure**: Provide manual Azure CLI setup steps
- **No Resources Found**: Create informational issue about Azure resource deployment
- **GitHub Creation Failure**: Output formatted recommendations to console
- **Insufficient Usage Data**: Note limitations and provide configuration-based recommendations only
## Success Criteria
- ✅ All cost estimates verified against actual resource configurations and Azure pricing
- ✅ Individual issues created for each optimization (trackable and assignable)
- ✅ EPIC issue provides comprehensive coordination and tracking
- ✅ All recommendations include specific, executable Azure CLI commands
- ✅ Priority scoring enables ROI-focused implementation
- ✅ Architecture diagram accurately represents current state
- ✅ User confirmation prevents unwanted issue creation

---
name: azure-pricing
description: 'Fetches real-time Azure retail pricing using the Azure Retail Prices API (prices.azure.com) and estimates Copilot Studio agent credit consumption. Use when the user asks about the cost of any Azure service, wants to compare SKU prices, needs pricing data for a cost estimate, mentions Azure pricing, Azure costs, Azure billing, or asks about Copilot Studio pricing, Copilot Credits, or agent usage estimation. Covers compute, storage, networking, databases, AI, Copilot Studio, and all other Azure service families.'
compatibility: Requires internet access to prices.azure.com and learn.microsoft.com. No authentication needed.
metadata:
  author: anthonychu
  version: "1.2"
---
# Azure Pricing Skill
Use this skill to retrieve real-time Azure retail pricing data from the public Azure Retail Prices API. No authentication is required.
## When to Use This Skill
- User asks about the cost of an Azure service (e.g., "How much does a D4s v5 VM cost?")
- User wants to compare pricing across regions or SKUs
- User needs a cost estimate for a workload or architecture
- User mentions Azure pricing, Azure costs, or Azure billing
- User asks about reserved instance vs. pay-as-you-go pricing
- User wants to know about savings plans or spot pricing
## API Endpoint
```
GET https://prices.azure.com/api/retail/prices?api-version=2023-01-01-preview
```
Append `$filter` as a query parameter using OData filter syntax. Always use `api-version=2023-01-01-preview` to ensure savings plan data is included.
## Step-by-step Instructions
If anything is unclear about the user's request, ask clarifying questions to identify the correct filter fields and values before calling the API.
1. **Identify filter fields** from the user's request (service name, region, SKU, price type).
2. **Resolve the region**: the API requires `armRegionName` values in lowercase with no spaces (e.g. "East US" → `eastus`, "West Europe" → `westeurope`, "Southeast Asia" → `southeastasia`). See [references/REGIONS.md](references/REGIONS.md) for a complete list.
3. **Build the filter string** using the fields below and fetch the URL.
4. **Parse the `Items` array** from the JSON response. Each item contains price and metadata.
5. **Follow pagination** via `NextPageLink` if you need more than the first 1000 results (rarely needed).
6. **Calculate cost estimates** using the formulas in [references/COST-ESTIMATOR.md](references/COST-ESTIMATOR.md) to produce monthly/annual estimates.
7. **Present results** in a clear summary table with service, SKU, region, unit price, and monthly/annual estimates.
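
The fetch loop in steps 3-5 can be sketched as below. This is a minimal sketch using only the Python standard library; `max_pages` is a safety cap I've added, not part of the API:

```python
import json
import urllib.parse
import urllib.request

API = "https://prices.azure.com/api/retail/prices"
API_VERSION = "2023-01-01-preview"  # required for savingsPlan data

def build_url(odata_filter: str) -> str:
    """Compose the retail prices URL with a URL-encoded OData $filter."""
    encoded = urllib.parse.quote(odata_filter)  # spaces -> %20, quotes -> %27
    return f"{API}?api-version={API_VERSION}&$filter={encoded}"

def fetch_all(odata_filter: str, max_pages: int = 5) -> list[dict]:
    """Collect Items across pages, following NextPageLink until exhausted."""
    url, items = build_url(odata_filter), []
    for _ in range(max_pages):
        with urllib.request.urlopen(url) as resp:
            page = json.load(resp)
        items.extend(page["Items"])
        url = page.get("NextPageLink")
        if not url:
            break
    return items
```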
## Filterable Fields
| Field | Type | Example |
|---|---|---|
| `serviceName` | string (exact, case-sensitive) | `'Functions'`, `'Virtual Machines'`, `'Storage'` |
| `serviceFamily` | string (exact, case-sensitive) | `'Compute'`, `'Storage'`, `'Databases'`, `'AI + Machine Learning'` |
| `armRegionName` | string (exact, lowercase) | `'eastus'`, `'westeurope'`, `'southeastasia'` |
| `armSkuName` | string (exact) | `'Standard_D4s_v5'`, `'Standard_LRS'` |
| `skuName` | string (contains supported) | `'D4s v5'` |
| `priceType` | string | `'Consumption'`, `'Reservation'`, `'DevTestConsumption'` |
| `meterName` | string (contains supported) | `'Spot'` |
Use `eq` for equality, `and` to combine, and `contains(field, 'value')` for partial matches.
## Example Filter Strings
```
# All consumption prices for Functions in East US
serviceName eq 'Functions' and armRegionName eq 'eastus' and priceType eq 'Consumption'
# D4s v5 VMs in West Europe (consumption only)
armSkuName eq 'Standard_D4s_v5' and armRegionName eq 'westeurope' and priceType eq 'Consumption'
# All storage prices in a region
serviceName eq 'Storage' and armRegionName eq 'eastus'
# Spot pricing for a specific SKU
armSkuName eq 'Standard_D4s_v5' and contains(meterName, 'Spot') and armRegionName eq 'eastus'
# 1-year reservation pricing
serviceName eq 'Virtual Machines' and priceType eq 'Reservation' and armRegionName eq 'eastus'
# Azure AI / OpenAI pricing (now under Foundry Models)
serviceName eq 'Foundry Models' and armRegionName eq 'eastus' and priceType eq 'Consumption'
# Azure Cosmos DB pricing
serviceName eq 'Azure Cosmos DB' and armRegionName eq 'eastus' and priceType eq 'Consumption'
```
## Full Example Fetch URL
```
https://prices.azure.com/api/retail/prices?api-version=2023-01-01-preview&$filter=serviceName eq 'Functions' and armRegionName eq 'eastus' and priceType eq 'Consumption'
```
URL-encode spaces as `%20` and quotes as `%27` when constructing the URL.
## Key Response Fields
```json
{
"Items": [
{
"retailPrice": 0.000016,
"unitPrice": 0.000016,
"currencyCode": "USD",
"unitOfMeasure": "1 Execution",
"serviceName": "Functions",
"skuName": "Premium",
"armRegionName": "eastus",
"meterName": "vCPU Duration",
"productName": "Functions",
"priceType": "Consumption",
"isPrimaryMeterRegion": true,
"savingsPlan": [
{ "unitPrice": 0.000012, "term": "1 Year" },
{ "unitPrice": 0.000010, "term": "3 Years" }
]
}
],
"NextPageLink": null,
"Count": 1
}
```
Only use items where `isPrimaryMeterRegion` is `true` unless the user specifically asks for non-primary meters.
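
A small filter sketch for applying that rule to a parsed `Items` array:

```python
def primary_items(items: list[dict]) -> list[dict]:
    """Keep only primary-meter-region items, per the guidance above."""
    return [item for item in items if item.get("isPrimaryMeterRegion")]
```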
## Supported serviceFamily Values
`Analytics`, `Compute`, `Containers`, `Data`, `Databases`, `Developer Tools`, `Integration`, `Internet of Things`, `Management and Governance`, `Networking`, `Security`, `Storage`, `Web`, `AI + Machine Learning`
## Tips
- `serviceName` values are case-sensitive. When unsure, filter by `serviceFamily` first to discover valid `serviceName` values in the results.
- If results are empty, try broadening the filter (e.g., remove `priceType` or region constraints first).
- Prices are always in USD unless `currencyCode` is specified in the request.
- For savings plan prices, look for the `savingsPlan` array on each item (only in `2023-01-01-preview`).
- See [references/SERVICE-NAMES.md](references/SERVICE-NAMES.md) for a catalog of common service names and their correct casing.
- See [references/COST-ESTIMATOR.md](references/COST-ESTIMATOR.md) for cost estimation formulas and patterns.
- See [references/COPILOT-STUDIO-RATES.md](references/COPILOT-STUDIO-RATES.md) for Copilot Studio billing rates and estimation formulas.
## Troubleshooting
| Issue | Solution |
|-------|----------|
| Empty results | Broaden the filter — remove `priceType` or `armRegionName` first |
| Wrong service name | Use `serviceFamily` filter to discover valid `serviceName` values |
| Missing savings plan data | Ensure `api-version=2023-01-01-preview` is in the URL |
| URL errors | Check URL encoding — spaces as `%20`, quotes as `%27` |
| Too many results | Add more filter fields (region, SKU, priceType) to narrow down |
---
# Copilot Studio Agent Usage Estimation
Use this section when the user asks about Copilot Studio pricing, Copilot Credits, or agent usage costs.
## When to Use This Section
- User asks about Copilot Studio pricing or costs
- User asks about Copilot Credits or agent credit consumption
- User wants to estimate monthly costs for a Copilot Studio agent
- User mentions agent usage estimation or the Copilot Studio estimator
- User asks how much an agent will cost to run
## Key Facts
- **1 Copilot Credit = $0.01 USD**
- Credits are pooled across the entire tenant
- Employee-facing agents with M365 Copilot licensed users get classic answers, generative answers, and tenant graph grounding at zero cost
- Overage enforcement triggers at 125% of prepaid capacity
## Step-by-step Estimation
1. **Gather inputs** from the user: agent type (employee/customer), number of users, interactions/month, knowledge %, tenant graph %, tool usage per session.
2. **Fetch live billing rates** — use the built-in web fetch tool to download the latest rates from the source URLs listed below. This ensures the estimate always uses the most current Microsoft pricing.
3. **Parse the fetched content** to extract the current billing rates table (credits per feature type).
4. **Calculate the estimate** using the rates and formulas from the fetched content:
- `total_sessions = users × interactions_per_month`
- Knowledge credits: apply tenant graph grounding rate, generative answer rate, and classic answer rate
- Agent tools credits: apply agent action rate per tool call
- Agent flow credits: apply flow rate per 100 actions
- Prompt modifier credits: apply basic/standard/premium rates per 10 responses
5. **Present results** in a clear table with breakdown by category, total credits, and estimated USD cost.
## Source URLs to Fetch
When answering Copilot Studio pricing questions, fetch the latest content from these URLs to use as context:
| URL | Content |
|---|---|
| https://learn.microsoft.com/en-us/microsoft-copilot-studio/requirements-messages-management | Billing rates table, billing examples, overage enforcement rules |
| https://learn.microsoft.com/en-us/microsoft-copilot-studio/billing-licensing | Licensing options, M365 Copilot inclusions, prepaid vs pay-as-you-go |
Fetch at least the first URL (billing rates) before calculating. The second URL provides supplementary context for licensing questions.
See [references/COPILOT-STUDIO-RATES.md](references/COPILOT-STUDIO-RATES.md) for a cached snapshot of rates, formulas, and billing examples (use as fallback if web fetch is unavailable).

---
# Copilot Studio — Billing Rates & Estimation
> Source: [Billing rates and management](https://learn.microsoft.com/en-us/microsoft-copilot-studio/requirements-messages-management)
> Estimator: [Microsoft agent usage estimator](https://microsoft.github.io/copilot-studio-estimator/)
> Licensing Guide: [Copilot Studio Licensing Guide](https://go.microsoft.com/fwlink/?linkid=2320995)
## Copilot Credit Rate
**1 Copilot Credit = $0.01 USD**
## Billing Rates (cached snapshot — last updated March 2026)
**IMPORTANT: Always prefer fetching live rates from the source URLs below. Use this table only as a fallback if web fetch is unavailable.**
| Feature | Rate | Unit |
|---|---|---|
| Classic answer | 1 | per response |
| Generative answer | 2 | per response |
| Agent action | 5 | per action (triggers, deep reasoning, topic transitions, computer use) |
| Tenant graph grounding | 10 | per message |
| Agent flow actions | 13 | per 100 flow actions |
| Text & gen AI tools (basic) | 1 | per 10 responses |
| Text & gen AI tools (standard) | 15 | per 10 responses |
| Text & gen AI tools (premium) | 100 | per 10 responses |
| Content processing tools | 8 | per page |
### Notes
- **Classic answers**: Predefined, manually authored responses. Static — don't change unless updated by the maker.
- **Generative answers**: Dynamically generated using AI models (GPTs). Adapt based on context and knowledge sources.
- **Tenant graph grounding**: RAG over tenant-wide Microsoft Graph, including external data via connectors. Optional per agent.
- **Agent actions**: Steps like triggers, deep reasoning, topic transitions visible in the activity map. Includes Computer-Using Agents.
- **Text & gen AI tools**: Prompt tools embedded in agents. Three tiers (basic/standard/premium) based on the underlying language model.
- **Agent flow actions**: Predefined flow action sequences executed without agent reasoning/orchestration at each step.
### Reasoning Model Billing
When using a reasoning-capable model:
```
Total cost = feature rate for operation + text & gen AI tools (premium) per 10 responses
```
Example: A generative answer using a reasoning model costs **2 credits** (generative answer) **+ 10 credits** (premium per response, prorated from 100/10).
## Estimation Formula
### Inputs
| Parameter | Description |
|---|---|
| `users` | Number of end users |
| `interactions_per_month` | Average interactions per user per month |
| `knowledge_pct` | % of responses from knowledge sources (0-100) |
| `tenant_graph_pct` | Of knowledge responses, % using tenant graph grounding (0-100) |
| `tool_prompt` | Average Prompt tool calls per session |
| `tool_agent_flow` | Average Agent flow calls per session |
| `tool_computer_use` | Average Computer use calls per session |
| `tool_custom_connector` | Average Custom connector calls per session |
| `tool_mcp` | Average MCP (Model Context Protocol) calls per session |
| `tool_rest_api` | Average REST API calls per session |
| `prompts_basic` | Average basic AI prompt uses per session |
| `prompts_standard` | Average standard AI prompt uses per session |
| `prompts_premium` | Average premium AI prompt uses per session |
### Calculation
```
total_sessions = users × interactions_per_month
── Knowledge Credits ──
tenant_graph_credits = total_sessions × (knowledge_pct/100) × (tenant_graph_pct/100) × 10
generative_answer_credits = total_sessions × (knowledge_pct/100) × (1 - tenant_graph_pct/100) × 2
classic_answer_credits = total_sessions × (1 - knowledge_pct/100) × 1
── Agent Tools Credits ──
tool_calls = total_sessions × (tool_prompt + tool_computer_use + tool_custom_connector + tool_mcp + tool_rest_api)
tool_credits = tool_calls × 5
── Agent Flow Credits ──
flow_calls = total_sessions × tool_agent_flow
flow_credits = ceil(flow_calls / 100) × 13
── Prompt Modifier Credits ──
basic_credits = ceil(total_sessions × prompts_basic / 10) × 1
standard_credits = ceil(total_sessions × prompts_standard / 10) × 15
premium_credits = ceil(total_sessions × prompts_premium / 10) × 100
── Total ──
total_credits = knowledge + tools + flows + prompts
cost_usd = total_credits × 0.01
```
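
The calculation above can be sketched as one function. This is a simplified sketch: the per-tool inputs are collapsed into a single `tool_calls_per_session` total, and the rates come from the cached snapshot table (prefer live rates from the source URLs):

```python
import math

RATES = {  # cached snapshot rates; prefer live rates from the source URLs
    "classic": 1, "generative": 2, "agent_action": 5,
    "tenant_graph": 10, "flow_per_100": 13,
    "prompt_basic": 1, "prompt_standard": 15, "prompt_premium": 100,
}

def estimate_credits(users, interactions_per_month, knowledge_pct, tenant_graph_pct,
                     tool_calls_per_session=0.0, flow_calls_per_session=0.0,
                     prompts_basic=0.0, prompts_standard=0.0, prompts_premium=0.0):
    """Return (total_credits, cost_usd) per the estimation formula above."""
    s = users * interactions_per_month          # total_sessions
    k, g = knowledge_pct / 100, tenant_graph_pct / 100
    knowledge = (s * k * g * RATES["tenant_graph"]
                 + s * k * (1 - g) * RATES["generative"]
                 + s * (1 - k) * RATES["classic"])
    tools = s * tool_calls_per_session * RATES["agent_action"]
    flows = math.ceil(s * flow_calls_per_session / 100) * RATES["flow_per_100"]
    prompts = (math.ceil(s * prompts_basic / 10) * RATES["prompt_basic"]
               + math.ceil(s * prompts_standard / 10) * RATES["prompt_standard"]
               + math.ceil(s * prompts_premium / 10) * RATES["prompt_premium"])
    total = knowledge + tools + flows + prompts
    return total, total * 0.01                  # 1 credit = $0.01
```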
## Billing Examples (from Microsoft Docs)
### Customer Support Agent
- 4 classic answers + 2 generative answers per session
- 900 customers/day
- **Daily**: `[(4×1) + (2×2)] × 900 = 7,200 credits`
- **Monthly (30d)**: ~216,000 credits = **~$2,160**
### Sales Performance Agent (Tenant Graph Grounded)
- 4 generative answers + 4 tenant graph grounded responses per session
- 100 unlicensed users
- **Daily**: `[(4×2) + (4×10)] × 100 = 4,800 credits`
- **Monthly (30d)**: ~144,000 credits = **~$1,440**
### Order Processing Agent
- 4 action calls per trigger (autonomous)
- **Per trigger**: `4 × 5 = 20 credits`
## Employee vs Customer Agent Types
| Agent Type | Included with M365 Copilot? |
|---|---|
| Employee-facing (BtoE) | Classic answers, generative answers, and tenant graph grounding are included at zero cost when the user has a Microsoft 365 Copilot license |
| Customer/partner-facing | All usage is billed normally |
## Overage Enforcement
- Triggered at **125%** of prepaid capacity
- Custom agents are disabled (ongoing conversations continue)
- Email notification sent to tenant admin
- Resolution: reallocate capacity, purchase more, or enable pay-as-you-go
## Live Source URLs
For the latest rates, fetch content from these pages:
- [Billing rates and management](https://learn.microsoft.com/en-us/microsoft-copilot-studio/requirements-messages-management)
- [Copilot Studio licensing](https://learn.microsoft.com/en-us/microsoft-copilot-studio/billing-licensing)
- [Copilot Studio Licensing Guide (PDF)](https://go.microsoft.com/fwlink/?linkid=2320995)

---
# Cost Estimator Reference
Formulas and patterns for converting Azure unit prices into monthly and annual cost estimates.
## Standard Time-Based Calculations
### Hours per Month
Azure uses **730 hours/month** as the standard billing period (365 days × 24 hours / 12 months).
```
Monthly Cost = Unit Price per Hour × 730
Annual Cost = Monthly Cost × 12
```
### Common Multipliers
| Period | Hours | Calculation |
|--------|-------|-------------|
| 1 Hour | 1 | Unit price |
| 1 Day | 24 | Unit price × 24 |
| 1 Week | 168 | Unit price × 168 |
| 1 Month | 730 | Unit price × 730 |
| 1 Year | 8,760 | Unit price × 8,760 |
## Service-Specific Formulas
### Virtual Machines (Compute)
```
Monthly Cost = hourly price × 730
```
For VMs that run only business hours (8h/day, 22 days/month):
```
Monthly Cost = hourly price × 176
```
### Azure Functions
```
Execution Cost = price per execution × number of executions
Compute Cost = price per GB-s × (memory in GB × execution time in seconds × number of executions)
Total Monthly = Execution Cost + Compute Cost
```
Free grant: 1M executions and 400,000 GB-s per month.
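
The Functions formula, with the free grant applied, can be sketched as below. The default prices are illustrative assumptions; look up current rates via the Retail Prices API:

```python
def functions_monthly_cost(executions: int, avg_duration_s: float, memory_gb: float,
                           price_per_million_exec: float = 0.20,    # assumed rate
                           price_per_gb_s: float = 0.000016) -> float:  # assumed rate
    """Consumption-plan estimate after the monthly free grant."""
    FREE_EXECUTIONS, FREE_GB_S = 1_000_000, 400_000
    gb_s = memory_gb * avg_duration_s * executions
    exec_cost = max(executions - FREE_EXECUTIONS, 0) / 1_000_000 * price_per_million_exec
    compute_cost = max(gb_s - FREE_GB_S, 0) * price_per_gb_s
    return exec_cost + compute_cost
```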
### Azure Blob Storage
```
Storage Cost = price per GB × storage in GB
Transaction Cost = price per 10,000 ops × (operations / 10,000)
Egress Cost = price per GB × egress in GB
Total Monthly = Storage Cost + Transaction Cost + Egress Cost
```
### Azure Cosmos DB
#### Provisioned Throughput
```
Monthly Cost = (RU/s / 100) × price per 100 RU/s × 730
```
#### Serverless
```
Monthly Cost = (total RUs consumed / 1,000,000) × price per 1M RUs
```
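
Both Cosmos DB models in one sketch; the default unit prices are illustrative assumptions, not current list prices:

```python
def cosmos_provisioned_monthly(ru_per_s: int,
                               price_per_100_ru_s: float = 0.008) -> float:  # assumed rate
    """Provisioned throughput billed per 100 RU/s per hour, 730 hours/month."""
    return (ru_per_s / 100) * price_per_100_ru_s * 730

def cosmos_serverless_monthly(total_rus: float,
                              price_per_million_rus: float = 0.25) -> float:  # assumed rate
    """Serverless billed only on RUs actually consumed."""
    return (total_rus / 1_000_000) * price_per_million_rus
```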
### Azure SQL Database
#### DTU Model
```
Monthly Cost = price per DTU × DTUs × 730
```
#### vCore Model
```
Monthly Cost = vCore price × vCores × 730 + storage price per GB × storage GB
```
### Azure Kubernetes Service (AKS)
```
Monthly Cost = node VM price × 730 × number of nodes
```
The control plane is free on the Free tier; the Standard tier adds a per-cluster hourly uptime charge.
### Azure App Service
```
Monthly Cost = plan price × 730 (for hourly-priced plans)
```
Or flat monthly price for fixed-tier plans.
### Azure OpenAI
```
Monthly Cost = (input tokens / 1000) × input price per 1K tokens
+ (output tokens / 1000) × output price per 1K tokens
```
## Reservation vs. Pay-As-You-Go Comparison
When presenting pricing options, always show the comparison:
```
| Pricing Model | Monthly Cost | Annual Cost | Savings vs. PAYG |
|---------------|-------------|-------------|------------------|
| Pay-As-You-Go | $X | $Y | — |
| 1-Year Reserved | $A | $B | Z% |
| 3-Year Reserved | $C | $D | W% |
| Savings Plan (1yr) | $E | $F | V% |
| Savings Plan (3yr) | $G | $H | U% |
| Spot (if available) | $I | N/A | T% |
```
Savings percentage formula:
```
Savings % = ((PAYG Price - Reserved Price) / PAYG Price) × 100
```
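
As a one-liner for the table above, rounded to two decimals for display:

```python
def savings_pct(payg: float, discounted: float) -> float:
    """Savings % = ((PAYG Price - Discounted Price) / PAYG Price) x 100."""
    return round((payg - discounted) / payg * 100, 2)
```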
## Cost Summary Table Template
Always present results in this format:
```markdown
| Service | SKU | Region | Unit Price | Unit | Monthly Est. | Annual Est. |
|---------|-----|--------|-----------|------|-------------|-------------|
| Virtual Machines | Standard_D4s_v5 | East US | $0.192/hr | 1 Hour | $140.16 | $1,681.92 |
```
## Tips
- Always clarify the **usage pattern** before estimating (24/7 vs. business hours vs. sporadic).
- For **storage**, ask about expected data volume and access patterns.
- For **databases**, ask about throughput requirements (RU/s, DTUs, or vCores).
- For **serverless** services, ask about expected invocation count and duration.
- Round to 2 decimal places for display.
- Note that prices are in **USD** unless otherwise specified.

---
# Azure Region Names Reference
The Azure Retail Prices API requires `armRegionName` values in lowercase with no spaces. Use this table to map common region names to their API values.
## Region Mapping
| Display Name | armRegionName |
|-------------|---------------|
| East US | `eastus` |
| East US 2 | `eastus2` |
| Central US | `centralus` |
| North Central US | `northcentralus` |
| South Central US | `southcentralus` |
| West Central US | `westcentralus` |
| West US | `westus` |
| West US 2 | `westus2` |
| West US 3 | `westus3` |
| Canada Central | `canadacentral` |
| Canada East | `canadaeast` |
| Brazil South | `brazilsouth` |
| North Europe | `northeurope` |
| West Europe | `westeurope` |
| UK South | `uksouth` |
| UK West | `ukwest` |
| France Central | `francecentral` |
| France South | `francesouth` |
| Germany West Central | `germanywestcentral` |
| Germany North | `germanynorth` |
| Switzerland North | `switzerlandnorth` |
| Switzerland West | `switzerlandwest` |
| Norway East | `norwayeast` |
| Norway West | `norwaywest` |
| Sweden Central | `swedencentral` |
| Italy North | `italynorth` |
| Poland Central | `polandcentral` |
| Spain Central | `spaincentral` |
| East Asia | `eastasia` |
| Southeast Asia | `southeastasia` |
| Japan East | `japaneast` |
| Japan West | `japanwest` |
| Australia East | `australiaeast` |
| Australia Southeast | `australiasoutheast` |
| Australia Central | `australiacentral` |
| Korea Central | `koreacentral` |
| Korea South | `koreasouth` |
| Central India | `centralindia` |
| South India | `southindia` |
| West India | `westindia` |
| UAE North | `uaenorth` |
| UAE Central | `uaecentral` |
| South Africa North | `southafricanorth` |
| South Africa West | `southafricawest` |
| Qatar Central | `qatarcentral` |
## Conversion Rules
1. Remove all spaces
2. Convert to lowercase
3. Examples:
- "East US" → `eastus`
- "West Europe" → `westeurope`
- "Southeast Asia" → `southeastasia`
- "South Central US" → `southcentralus`
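The conversion rules are mechanical enough to script; a minimal sketch:

```python
def to_arm_region(display_name: str) -> str:
    """Apply the conversion rules: remove all spaces, convert to lowercase."""
    return display_name.replace(" ", "").lower()

print(to_arm_region("East US"))           # → eastus
print(to_arm_region("South Central US"))  # → southcentralus
```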
## Common Aliases
Users may refer to regions informally. Map these to the correct `armRegionName`:
| User Says | Maps To |
|-----------|---------|
| "US East", "Virginia" | `eastus` |
| "US West", "California" | `westus` |
| "Europe", "EU" | `westeurope` (default) |
| "UK", "London" | `uksouth` |
| "Asia", "Singapore" | `southeastasia` |
| "Japan", "Tokyo" | `japaneast` |
| "Australia", "Sydney" | `australiaeast` |
| "India", "Mumbai" | `centralindia` |
| "Korea", "Seoul" | `koreacentral` |
| "Brazil", "São Paulo" | `brazilsouth` |
| "Canada", "Toronto" | `canadacentral` |
| "Germany", "Frankfurt" | `germanywestcentral` |
| "France", "Paris" | `francecentral` |
| "Sweden", "Stockholm" | `swedencentral` |
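In practice the alias table can back a small lookup that falls through to the mechanical conversion; a sketch with a few entries (extend with the full table as needed):

```python
# Partial alias map; keys are lowercased user input
ALIASES = {
    "virginia": "eastus",
    "london": "uksouth",
    "tokyo": "japaneast",
}

def resolve_region(user_input: str) -> str:
    """Try the alias table first, then fall back to remove-spaces + lowercase."""
    key = user_input.strip().lower()
    return ALIASES.get(key, key.replace(" ", ""))

print(resolve_region("London"))     # → uksouth
print(resolve_region("West US 2"))  # → westus2
```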

# Azure Service Names Reference
The `serviceName` field in the Azure Retail Prices API is **case-sensitive**. Use this reference to find the exact service name to use in filters.
## Compute
| Service | `serviceName` Value |
|---------|-------------------|
| Virtual Machines | `Virtual Machines` |
| Azure Functions | `Functions` |
| Azure App Service | `Azure App Service` |
| Azure Container Apps | `Azure Container Apps` |
| Azure Container Instances | `Container Instances` |
| Azure Kubernetes Service | `Azure Kubernetes Service` |
| Azure Batch | `Azure Batch` |
| Azure Spring Apps | `Azure Spring Apps` |
| Azure VMware Solution | `Azure VMware Solution` |
## Storage
| Service | `serviceName` Value |
|---------|-------------------|
| Azure Storage (Blob, Files, Queues, Tables) | `Storage` |
| Azure NetApp Files | `Azure NetApp Files` |
| Azure Backup | `Backup` |
| Azure Data Box | `Data Box` |
> **Note**: Blob Storage, Files, Disk Storage, and Data Lake Storage are all under the single `Storage` service name. Use `meterName` or `productName` to distinguish between them (e.g., `contains(meterName, 'Blob')`).
## Databases
| Service | `serviceName` Value |
|---------|-------------------|
| Azure Cosmos DB | `Azure Cosmos DB` |
| Azure SQL Database | `SQL Database` |
| Azure SQL Managed Instance | `SQL Managed Instance` |
| Azure Database for PostgreSQL | `Azure Database for PostgreSQL` |
| Azure Database for MySQL | `Azure Database for MySQL` |
| Azure Cache for Redis | `Redis Cache` |
## AI + Machine Learning
| Service | `serviceName` Value |
|---------|-------------------|
| Azure AI Foundry Models (incl. OpenAI) | `Foundry Models` |
| Azure AI Foundry Tools | `Foundry Tools` |
| Azure Machine Learning | `Azure Machine Learning` |
| Azure Cognitive Search (AI Search) | `Azure Cognitive Search` |
| Azure Bot Service | `Azure Bot Service` |
> **Note**: Azure OpenAI pricing is now under `Foundry Models`. Use `contains(productName, 'OpenAI')` or `contains(meterName, 'GPT')` to filter for OpenAI-specific models.
## Networking
| Service | `serviceName` Value |
|---------|-------------------|
| Azure Load Balancer | `Load Balancer` |
| Azure Application Gateway | `Application Gateway` |
| Azure Front Door | `Azure Front Door Service` |
| Azure CDN | `Azure CDN` |
| Azure DNS | `Azure DNS` |
| Azure Virtual Network | `Virtual Network` |
| Azure VPN Gateway | `VPN Gateway` |
| Azure ExpressRoute | `ExpressRoute` |
| Azure Firewall | `Azure Firewall` |
## Analytics
| Service | `serviceName` Value |
|---------|-------------------|
| Azure Synapse Analytics | `Azure Synapse Analytics` |
| Azure Data Factory | `Azure Data Factory v2` |
| Azure Stream Analytics | `Azure Stream Analytics` |
| Azure Databricks | `Azure Databricks` |
| Azure Event Hubs | `Event Hubs` |
## Integration
| Service | `serviceName` Value |
|---------|-------------------|
| Azure Service Bus | `Service Bus` |
| Azure Logic Apps | `Logic Apps` |
| Azure API Management | `API Management` |
| Azure Event Grid | `Event Grid` |
## Management & Monitoring
| Service | `serviceName` Value |
|---------|-------------------|
| Azure Monitor | `Azure Monitor` |
| Azure Log Analytics | `Log Analytics` |
| Azure Key Vault | `Key Vault` |
| Azure Backup | `Backup` |
## Web
| Service | `serviceName` Value |
|---------|-------------------|
| Azure Static Web Apps | `Azure Static Web Apps` |
| Azure SignalR | `Azure SignalR Service` |
## Tips
- If you're unsure about a service name, **filter by `serviceFamily` first** to discover valid `serviceName` values in the response.
- Example: `serviceFamily eq 'Databases' and armRegionName eq 'eastus'` will return all database service names.
- Some services have multiple `serviceName` entries for different tiers or generations.
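For instance, a discovery query against the Retail Prices API can be assembled like this (a sketch; `https://prices.azure.com/api/retail/prices` is the documented public endpoint, and the `$filter` syntax follows the example above):

```python
from urllib.parse import urlencode

BASE = "https://prices.azure.com/api/retail/prices"

def discovery_url(service_family: str, region: str) -> str:
    """URL that lists all serviceName values in a serviceFamily for one region."""
    flt = f"serviceFamily eq '{service_family}' and armRegionName eq '{region}'"
    return f"{BASE}?{urlencode({'$filter': flt})}"

print(discovery_url("Databases", "eastus"))
```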

---
name: azure-resource-health-diagnose
description: 'Analyze Azure resource health, diagnose issues from logs and telemetry, and create a remediation plan for identified problems.'
---
# Azure Resource Health & Issue Diagnosis
This workflow analyzes a specific Azure resource to assess its health status, diagnose potential issues using logs and telemetry data, and develop a comprehensive remediation plan for any problems discovered.
## Prerequisites
- Azure MCP server configured and authenticated
- Target Azure resource identified (name and optionally resource group/subscription)
- Resource must be deployed and running to generate logs/telemetry
- Prefer Azure MCP tools (`azmcp-*`) over direct Azure CLI when available
## Workflow Steps
### Step 1: Get Azure Best Practices
**Action**: Retrieve diagnostic and troubleshooting best practices
**Tools**: Azure MCP best practices tool
**Process**:
1. **Load Best Practices**:
- Execute the Azure best practices tool to retrieve diagnostic guidelines
- Focus on health monitoring, log analysis, and issue resolution patterns
- Use these practices to inform diagnostic approach and remediation recommendations
### Step 2: Resource Discovery & Identification
**Action**: Locate and identify the target Azure resource
**Tools**: Azure MCP tools + Azure CLI fallback
**Process**:
1. **Resource Lookup**:
- If only resource name provided: Search across subscriptions using `azmcp-subscription-list`
- Use `az resource list --name <resource-name>` to find matching resources
- If multiple matches found, prompt user to specify subscription/resource group
- Gather detailed resource information:
- Resource type and current status
- Location, tags, and configuration
- Associated services and dependencies
2. **Resource Type Detection**:
- Identify resource type to determine appropriate diagnostic approach:
- **Web Apps/Function Apps**: Application logs, performance metrics, dependency tracking
- **Virtual Machines**: System logs, performance counters, boot diagnostics
- **Cosmos DB**: Request metrics, throttling, partition statistics
- **Storage Accounts**: Access logs, performance metrics, availability
- **SQL Database**: Query performance, connection logs, resource utilization
- **Application Insights**: Application telemetry, exceptions, dependencies
- **Key Vault**: Access logs, certificate status, secret usage
- **Service Bus**: Message metrics, dead letter queues, throughput
### Step 3: Health Status Assessment
**Action**: Evaluate current resource health and availability
**Tools**: Azure MCP monitoring tools + Azure CLI
**Process**:
1. **Basic Health Check**:
- Check resource provisioning state and operational status
- Verify service availability and responsiveness
- Review recent deployment or configuration changes
- Assess current resource utilization (CPU, memory, storage, etc.)
2. **Service-Specific Health Indicators**:
- **Web Apps**: HTTP response codes, response times, uptime
- **Databases**: Connection success rate, query performance, deadlocks
- **Storage**: Availability percentage, request success rate, latency
- **VMs**: Boot diagnostics, guest OS metrics, network connectivity
- **Functions**: Execution success rate, duration, error frequency
### Step 4: Log & Telemetry Analysis
**Action**: Analyze logs and telemetry to identify issues and patterns
**Tools**: Azure MCP monitoring tools for Log Analytics queries
**Process**:
1. **Find Monitoring Sources**:
- Use `azmcp-monitor-workspace-list` to identify Log Analytics workspaces
- Locate Application Insights instances associated with the resource
- Identify relevant log tables using `azmcp-monitor-table-list`
2. **Execute Diagnostic Queries**:
Use `azmcp-monitor-log-query` with targeted KQL queries based on resource type:
**General Error Analysis**:
```kql
// Recent errors and exceptions
union isfuzzy=true
AzureDiagnostics,
AppServiceHTTPLogs,
AppServiceAppLogs,
AzureActivity
| where TimeGenerated > ago(24h)
| where Level == "Error" or ResultType != "Success"
| summarize ErrorCount=count() by Resource, ResultType, bin(TimeGenerated, 1h)
| order by TimeGenerated desc
```
**Performance Analysis**:
```kql
// Performance degradation patterns
Perf
| where TimeGenerated > ago(7d)
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize avg(CounterValue) by Computer, bin(TimeGenerated, 1h)
| where avg_CounterValue > 80
```
**Application-Specific Queries**:
```kql
// Application Insights - Failed requests
requests
| where timestamp > ago(24h)
| where success == false
| summarize FailureCount=count() by resultCode, bin(timestamp, 1h)
| order by timestamp desc
// Database - Connection failures
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.SQL"
| where Category == "SQLSecurityAuditEvents"
| where action_name_s == "CONNECTION_FAILED"
| summarize ConnectionFailures=count() by bin(TimeGenerated, 1h)
```
3. **Pattern Recognition**:
- Identify recurring error patterns or anomalies
- Correlate errors with deployment times or configuration changes
- Analyze performance trends and degradation patterns
- Look for dependency failures or external service issues
### Step 5: Issue Classification & Root Cause Analysis
**Action**: Categorize identified issues and determine root causes
**Process**:
1. **Issue Classification**:
- **Critical**: Service unavailable, data loss, security breaches
- **High**: Performance degradation, intermittent failures, high error rates
- **Medium**: Warnings, suboptimal configuration, minor performance issues
- **Low**: Informational alerts, optimization opportunities
2. **Root Cause Analysis**:
- **Configuration Issues**: Incorrect settings, missing dependencies
- **Resource Constraints**: CPU/memory/disk limitations, throttling
- **Network Issues**: Connectivity problems, DNS resolution, firewall rules
- **Application Issues**: Code bugs, memory leaks, inefficient queries
- **External Dependencies**: Third-party service failures, API limits
- **Security Issues**: Authentication failures, certificate expiration
3. **Impact Assessment**:
- Determine business impact and affected users/systems
- Evaluate data integrity and security implications
- Assess recovery time objectives and priorities
### Step 6: Generate Remediation Plan
**Action**: Create a comprehensive plan to address identified issues
**Process**:
1. **Immediate Actions** (Critical issues):
- Emergency fixes to restore service availability
- Temporary workarounds to mitigate impact
- Escalation procedures for complex issues
2. **Short-term Fixes** (High/Medium issues):
- Configuration adjustments and resource scaling
- Application updates and patches
- Monitoring and alerting improvements
3. **Long-term Improvements** (All issues):
- Architectural changes for better resilience
- Preventive measures and monitoring enhancements
- Documentation and process improvements
4. **Implementation Steps**:
- Prioritized action items with specific Azure CLI commands
- Testing and validation procedures
- Rollback plans for each change
- Monitoring to verify issue resolution
### Step 7: User Confirmation & Report Generation
**Action**: Present findings and get approval for remediation actions
**Process**:
1. **Display Health Assessment Summary**:
```
🏥 Azure Resource Health Assessment
📊 Resource Overview:
• Resource: [Name] ([Type])
• Status: [Healthy/Warning/Critical]
• Location: [Region]
• Last Analyzed: [Timestamp]
🚨 Issues Identified:
• Critical: X issues requiring immediate attention
• High: Y issues affecting performance/reliability
• Medium: Z issues for optimization
• Low: N informational items
🔍 Top Issues:
1. [Issue Type]: [Description] - Impact: [High/Medium/Low]
2. [Issue Type]: [Description] - Impact: [High/Medium/Low]
3. [Issue Type]: [Description] - Impact: [High/Medium/Low]
🛠️ Remediation Plan:
• Immediate Actions: X items
• Short-term Fixes: Y items
• Long-term Improvements: Z items
• Estimated Resolution Time: [Timeline]
❓ Proceed with detailed remediation plan? (y/n)
```
2. **Generate Detailed Report**:
```markdown
# Azure Resource Health Report: [Resource Name]
**Generated**: [Timestamp]
**Resource**: [Full Resource ID]
**Overall Health**: [Status with color indicator]
## 🔍 Executive Summary
[Brief overview of health status and key findings]
## 📊 Health Metrics
- **Availability**: X% over last 24h
- **Performance**: [Average response time/throughput]
- **Error Rate**: X% over last 24h
- **Resource Utilization**: [CPU/Memory/Storage percentages]
## 🚨 Issues Identified
### Critical Issues
- **[Issue 1]**: [Description]
- **Root Cause**: [Analysis]
- **Impact**: [Business impact]
- **Immediate Action**: [Required steps]
### High Priority Issues
- **[Issue 2]**: [Description]
- **Root Cause**: [Analysis]
- **Impact**: [Performance/reliability impact]
- **Recommended Fix**: [Solution steps]
## 🛠️ Remediation Plan
### Phase 1: Immediate Actions (0-2 hours)

    # Critical fixes to restore service
    [Azure CLI commands with explanations]

### Phase 2: Short-term Fixes (2-24 hours)

    # Performance and reliability improvements
    [Azure CLI commands with explanations]

### Phase 3: Long-term Improvements (1-4 weeks)

    # Architectural and preventive measures
    [Azure CLI commands and configuration changes]
## 📈 Monitoring Recommendations
- **Alerts to Configure**: [List of recommended alerts]
- **Dashboards to Create**: [Monitoring dashboard suggestions]
- **Regular Health Checks**: [Recommended frequency and scope]
## ✅ Validation Steps
- [ ] Verify issue resolution through logs
- [ ] Confirm performance improvements
- [ ] Test application functionality
- [ ] Update monitoring and alerting
- [ ] Document lessons learned
## 📝 Prevention Measures
- [Recommendations to prevent similar issues]
- [Process improvements]
- [Monitoring enhancements]
```
## Error Handling
- **Resource Not Found**: Provide guidance on resource name/location specification
- **Authentication Issues**: Guide user through Azure authentication setup
- **Insufficient Permissions**: List required RBAC roles for resource access
- **No Logs Available**: Suggest enabling diagnostic settings and waiting for data
- **Query Timeouts**: Break down analysis into smaller time windows
- **Service-Specific Issues**: Provide generic health assessment with limitations noted
## Success Criteria
- ✅ Resource health status accurately assessed
- ✅ All significant issues identified and categorized
- ✅ Root cause analysis completed for major problems
- ✅ Actionable remediation plan with specific steps provided
- ✅ Monitoring and prevention recommendations included
- ✅ Clear prioritization of issues by business impact
- ✅ Implementation steps include validation and rollback procedures

---
name: import-infrastructure-as-code
description: 'Import existing Azure resources into Terraform using Azure CLI discovery and Azure Verified Modules (AVM). Use when asked to reverse-engineer live Azure infrastructure, generate Infrastructure as Code from existing subscriptions/resource groups/resource IDs, map dependencies, derive exact import addresses from downloaded module source, prevent configuration drift, and produce AVM-based Terraform files ready for validation and planning across any Azure resource type.'
---
# Import Infrastructure as Code (Azure -> Terraform with AVM)
Convert existing Azure infrastructure into maintainable Terraform code using discovery data and Azure Verified Modules.
## When to Use This Skill
Use this skill when the user asks to:
- Import existing Azure resources into Terraform
- Generate IaC from live Azure environments
- Handle any Azure resource type supported by AVM (and document justified non-AVM fallbacks)
- Recreate infrastructure from a subscription or resource group
- Map dependencies between discovered Azure resources
- Use AVM modules instead of handwritten `azurerm_*` resources
## Prerequisites
- Azure CLI installed and authenticated (`az login`)
- Access to the target subscription or resource group
- Terraform CLI installed
- Network access to Terraform Registry and AVM index sources
## Inputs
| Parameter | Required | Default | Description |
|---|---|---|---|
| `subscription-id` | No | Active CLI context | Azure subscription used for subscription-scope discovery and context setting |
| `resource-group-name` | No | None | Azure resource group used for resource-group-scope discovery |
| `resource-id` | No | None | One or more Azure ARM resource IDs used for specific-resource-scope discovery |
At least one of `subscription-id`, `resource-group-name`, or `resource-id` is required.
## Step-by-Step Workflows
### 1) Collect Required Scope (Mandatory)
Request one of these scopes before running discovery commands:
- Subscription scope: `<subscription-id>`
- Resource group scope: `<resource-group-name>`
- Specific resources scope: one or more `<resource-id>` values
Scope handling rules:
- Treat Azure ARM resource IDs (for example `/subscriptions/.../providers/...`) as cloud resource identifiers, not local file system paths.
- Use resource IDs only with Azure CLI `--ids` arguments (for example `az resource show --ids <resource-id>`).
- Never pass resource IDs to file-reading commands (`cat`, `ls`, `read_file`, glob searches) unless the user explicitly says they are local file paths.
- If the user already provided one valid scope, do not ask for additional scope inputs unless required by a failing command.
- Do not ask follow-up questions that can be answered from already-provided scope values.
If scope is missing, ask for it explicitly and stop.
### 2) Authenticate and Set Context
Run only the commands required for the selected scope.
For subscription scope:
```bash
az login
az account set --subscription <subscription-id>
az account show --query "{subscriptionId:id, name:name, tenantId:tenantId}" -o json
```
Expected output: JSON object with `subscriptionId`, `name`, and `tenantId`.
For resource group or specific resource scope, `az login` is still required but `az account set` is optional if the active context is already correct.
When using specific resource scope, prefer direct `--ids`-based commands first and avoid extra discovery prompts for subscription or resource group unless needed for a concrete command.
### 3) Run Discovery Commands
Discover resources using the selected scope. Fetch all the information needed for accurate Terraform generation.
```bash
# Subscription scope
az resource list --subscription <subscription-id> -o json
# Resource group scope
az resource list --resource-group <resource-group-name> -o json
# Specific resource scope
az resource show --ids <resource-id-1> <resource-id-2> ... -o json
```
Expected output: JSON object or array containing Azure resource metadata (`id`, `type`, `name`, `location`, `tags`, `properties`).
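Before mapping dependencies, the discovery JSON can be summarized by resource type; a minimal sketch (the sample records are hypothetical, shaped like `az resource list -o json` output with fields abbreviated):

```python
import json
from collections import Counter

# Hypothetical sample shaped like `az resource list -o json` output
sample = json.loads("""[
  {"id": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachines/vm1",
   "type": "Microsoft.Compute/virtualMachines", "name": "vm1", "location": "eastus"},
  {"id": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Network/virtualNetworks/vnet1",
   "type": "Microsoft.Network/virtualNetworks", "name": "vnet1", "location": "eastus"}
]""")

# Count resources per ARM type to plan which AVM modules are needed
by_type = Counter(r["type"] for r in sample)
print(dict(by_type))
```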
### 4) Resolve Dependencies Before Code Generation
Parse exported JSON and map:
- Parent-child relationships (for example: NIC -> Subnet -> VNet)
- Cross-resource references in `properties`
- Ordering for Terraform creation
IMPORTANT: Generate the following documentation and save it to a docs folder in the root of the project.
- `exported-resources.json` with all discovered resources and their metadata, including dependencies and references.
- `EXPORTED-ARCHITECTURE.md` file with a human-readable architecture overview based on the discovered resources and their relationships.
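Cross-resource references usually appear as full ARM IDs nested inside `properties`; a sketch of pulling them out to build dependency edges (the regex and the sample NIC record are illustrative assumptions):

```python
import json
import re

# ARM resource IDs embedded in properties look like /subscriptions/.../providers/...
ARM_ID = re.compile(r"/subscriptions/[^\"']+?/providers/[^\"'\s]+")

def find_references(resource: dict) -> set[str]:
    """Collect ARM IDs referenced anywhere in a resource's properties blob."""
    blob = json.dumps(resource.get("properties", {}))
    return {m.group(0) for m in ARM_ID.finditer(blob)}

# Hypothetical NIC record: its ipConfigurations reference a subnet by ID
nic = {
    "id": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Network/networkInterfaces/nic1",
    "properties": {"ipConfigurations": [{"properties": {
        "subnet": {"id": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Network/virtualNetworks/vnet1/subnets/default"}}}]},
}
print(find_references(nic))  # the NIC depends on the subnet (NIC -> Subnet -> VNet)
```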
### 5) Select Azure Verified Modules (Required)
Use the latest AVM version for each resource type.
### Terraform Registry
- Search for "avm" + resource name
- Filter by "Partner" tag to find official AVM modules
- Example: Search "avm storage account" → filter by Partner
### Official AVM Index
> **Note:** The following links always point to the latest version of the CSV files on the main branch. As intended, this means the files may change over time. If you require a point-in-time version, consider using a specific release tag in the URL.
- **Terraform Resource Modules**: `https://raw.githubusercontent.com/Azure/Azure-Verified-Modules/refs/heads/main/docs/static/module-indexes/TerraformResourceModules.csv`
- **Terraform Pattern Modules**: `https://raw.githubusercontent.com/Azure/Azure-Verified-Modules/refs/heads/main/docs/static/module-indexes/TerraformPatternModules.csv`
- **Terraform Utility Modules**: `https://raw.githubusercontent.com/Azure/Azure-Verified-Modules/refs/heads/main/docs/static/module-indexes/TerraformUtilityModules.csv`
### Individual Module Information
Use the `web` tool or another suitable MCP method to get module information when it is not available locally in the `.terraform` folder.
Use AVM sources:
- Registry: `https://registry.terraform.io/modules/Azure/<module>/azurerm/latest`
- GitHub: `https://github.com/Azure/terraform-azurerm-avm-res-<service>-<resource>`
Prefer AVM modules over handwritten `azurerm_*` resources when an AVM module exists.
When fetching module information from GitHub repositories, the README.md file in the root of the repository typically contains all detailed information about the module, for example: https://raw.githubusercontent.com/Azure/terraform-azurerm-avm-res-<service>-<resource>/refs/heads/main/README.md
### 5a) Read the Module README Before Writing Any Code (Mandatory)
**This step is not optional.** Before writing a single line of HCL for a module, fetch and
read the full README for that module. Do not rely on knowledge of the raw `azurerm` provider
or prior experience with other AVM modules.
For each selected AVM module, fetch its README:
```text
https://raw.githubusercontent.com/Azure/terraform-azurerm-avm-res-<service>-<resource>/refs/heads/main/README.md
```
Or if the module is already downloaded after `terraform init`:
```bash
cat .terraform/modules/<module_key>/README.md
```
From the README, extract and record **before writing code**:
1. **Required Inputs** — every input the module requires. Any child resource listed here
(NICs, extensions, subnets, public IPs) is managed **inside** the module. Do **not**
create standalone module blocks for those resources.
2. **Optional Inputs** — the exact Terraform variable names and their declared `type`.
Do not assume they match the raw `azurerm` provider argument names or block shapes.
3. **Usage examples** — check what resource group identifier is used (`parent_id` vs
`resource_group_name`), how child resources are expressed (inline map vs separate module),
and what syntax each input expects.
#### Apply module rules as patterns, not assumptions
Use the lessons below as examples of the *type* of mismatch that often causes imports to fail.
Do not assume these exact names apply to every AVM module. Always verify each selected module's
README and `variables.tf`.
**`avm-res-compute-virtualmachine` (any version)**
- `network_interfaces` is a **Required Input**. NICs are owned by the VM module. Never
create standalone `avm-res-network-networkinterface` modules alongside a VM module —
define every NIC inline under `network_interfaces`.
- TrustedLaunch is expressed through the top-level booleans `secure_boot_enabled = true`
and `vtpm_enabled = true`. The `security_type` argument exists only under `os_disk` for
Confidential VM disk encryption and must not be used for TrustedLaunch.
- `boot_diagnostics` is a `bool`, not an object. Use `boot_diagnostics = true`; use the
separate `boot_diagnostics_storage_account_uri` variable if a storage URI is needed.
- Extensions are managed inside the module via the `extensions` map. Do not create
standalone extension resources.
**`avm-res-network-virtualnetwork` (any version)**
- This module is backed by the AzAPI provider, not `azurerm`. Use `parent_id` (the full
resource group resource ID string) to specify the resource group, not `resource_group_name`.
- Every example in the README shows `parent_id`; none show `resource_group_name`.
Generalized takeaway for all AVM modules:
- Determine child resource ownership from **Required Inputs** before creating sibling modules.
- Determine accepted variable names and types from **Optional Inputs** and `variables.tf`.
- Determine identifier style and input shape from README usage examples.
- Do not infer argument names from raw `azurerm_*` resources.
### 6) Generate Terraform Files
### Before Writing Import Blocks — Inspect Module Source (Mandatory)
After `terraform init` downloads the modules, inspect each module's source files to determine
the exact Terraform resource addresses before writing any `import {}` blocks. Never write
import addresses from memory.
#### Step A — Identify the provider and resource label
```bash
grep "^resource" .terraform/modules/<module_key>/main*.tf
```
This reveals whether the module uses `azurerm_*` or `azapi_resource` labels. For example,
`avm-res-network-virtualnetwork` exposes `azapi_resource "vnet"`, not
`azurerm_virtual_network "this"`.
#### Step B — Identify child modules and nested paths
```bash
grep "^module" .terraform/modules/<module_key>/main*.tf
```
If child resources are managed in a sub-module (subnets, extensions, etc.), the import
address must include every intermediate module label:
```text
module.<root_module_key>.module.<child_module_key>["<map_key>"].<resource_type>.<label>[<index>]
```
#### Step C — Check for `count` vs `for_each`
```bash
grep -n "count\|for_each" .terraform/modules/<module_key>/main*.tf
```
Any resource using `count` requires an index in the import address. When `count = 1` (e.g.,
conditional Linux vs Windows selection), the address must end with `[0]`. Resources using
`for_each` use string keys, not numeric indexes.
#### Known import address patterns (examples from lessons learned)
These are examples only. Use them as templates for reasoning, then derive the exact addresses
from the downloaded source code for the modules in your current import.
| Resource | Correct import `to` address pattern |
|---|---|
| AzAPI-backed VNet | `module.<vnet_key>.azapi_resource.vnet` |
| Subnet (nested, count-based) | `module.<vnet_key>.module.subnet["<subnet_name>"].azapi_resource.subnet[0]` |
| Linux VM (count-based) | `module.<vm_key>.azurerm_linux_virtual_machine.this[0]` |
| VM NIC | `module.<vm_key>.azurerm_network_interface.virtualmachine_network_interfaces["<nic_key>"]` |
| VM extension (default deploy_sequence=5) | `module.<vm_key>.module.extension["<ext_name>"].azurerm_virtual_machine_extension.this` |
| VM extension (deploy_sequence=14) | `module.<vm_key>.module.extension_<n>["<ext_name>"].azurerm_virtual_machine_extension.this` |
| NSG-NIC association | `module.<vm_key>.azurerm_network_interface_security_group_association.this["<nic_key>-<nsg_key>"]` |
Produce:
- `providers.tf` with `azurerm` provider and required version constraints
- `main.tf` with AVM module blocks and explicit dependencies
- `variables.tf` for environment-specific values
- `outputs.tf` for key IDs and endpoints
- `terraform.tfvars.example` with placeholder values
### Diff Live Properties Against Module Defaults (Mandatory)
After writing the initial configuration, compare every non-zero property of each discovered
live resource against the default value declared in the corresponding AVM module's
`variables.tf`. Any property where the live value differs from the module default must be
set explicitly in the Terraform configuration.
Pay particular attention to the following property categories, which are common sources
of silent configuration drift:
- **Timeout values** (e.g., Public IP `idle_timeout_in_minutes` defaults to `4`; live
deployments often use `30`)
- **Network policy flags** (e.g., subnet `private_endpoint_network_policies` defaults to
`"Enabled"`; existing subnets often have `"Disabled"`)
- **SKU and allocation** (e.g., Public IP `sku`, `allocation_method`)
- **Availability zones** (e.g., VM zone, Public IP zone)
- **Redundancy and replication** settings on storage and database resources
Retrieve full live properties with explicit `az` commands, for example:
```bash
az network public-ip show --ids <resource_id> --query "{idleTimeout:idleTimeoutInMinutes, sku:sku.name, zones:zones}" -o json
az network vnet subnet show --ids <resource_id> --query "{privateEndpointPolicies:privateEndpointNetworkPolicies, delegation:delegations}" -o json
```
Do not rely solely on `az resource list` output, which may omit nested or computed properties.
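The comparison itself is simple once both sides are collected (live values via `az ... show`, defaults from the module's `variables.tf`); a sketch with hypothetical values, using the property names from the examples above:

```python
# Hypothetical values: module defaults from variables.tf vs. live Azure state
module_defaults = {
    "idle_timeout_in_minutes": 4,
    "private_endpoint_network_policies": "Enabled",
}
live_values = {
    "idle_timeout_in_minutes": 30,
    "private_endpoint_network_policies": "Disabled",
}

# Any property whose live value differs from the default must be set explicitly
drift = {k: {"live": live_values[k], "default": v}
         for k, v in module_defaults.items()
         if live_values.get(k) != v}

for prop, vals in drift.items():
    print(f"{prop}: set explicitly to {vals['live']!r} (module default {vals['default']!r})")
```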
Pin module versions explicitly:
```hcl
module "example" {
source = "Azure/<module>/azurerm"
version = "<latest-compatible-version>"
}
```
### 7) Validate Generated Code
Run:
```bash
terraform init
terraform fmt -recursive
terraform validate
terraform plan
```
Expected output: no syntax errors, no validation errors, and a plan that matches discovered infrastructure intent.
## Troubleshooting
| Problem | Likely Cause | Action |
|---|---|---|
| `az` command fails with authorization errors | Wrong tenant/subscription or missing RBAC role | Re-run `az login`, verify subscription context, confirm required permissions |
| Discovery output is empty | Incorrect scope or no resources in scope | Re-check scope input and run scoped list/show command again |
| No AVM module found for a resource type | Resource type not yet covered by AVM | Use native `azurerm_*` resource for that type and document the gap |
| `terraform validate` fails | Missing variables or unresolved dependencies | Add required variables and explicit dependencies, then re-run validation |
| Unknown argument or variable not found in module | AVM variable name differs from `azurerm` provider argument name | Read the module README `variables.tf` or Optional Inputs section for the correct name |
| Import block fails — resource not found at address | Wrong provider label (`azurerm_` vs `azapi_`), missing sub-module path, or missing `[0]` index | Run `grep "^resource" .terraform/modules/<key>/main*.tf` and `grep "^module"` to find exact address |
| `terraform plan` shows unexpected `~ update` on imported resource | Live value differs from AVM module default | Fetch live property with `az <resource> show`, compare to module default, add explicit value |
| Child-resource module gives "provider configuration not present" | Child resources declared as standalone modules even though parent module owns them | Check Required Inputs in README, remove incorrect standalone modules, and model child resources using the parent module's documented input structure |
| Nested child resource import fails with "resource not found" | Missing intermediate module path, wrong map key, or missing index | Inspect module blocks and `count`/`for_each` in source; build full nested import address including all module segments and required key/index |
| Tool tries to read ARM resource ID as file path or asks repeated scope questions | Resource ID not treated as `--ids` input, or agent did not trust already-provided scope | Treat ARM IDs strictly as cloud identifiers, use `az ... --ids ...`, and stop re-prompting once one valid scope is present |
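The grep-based address discovery referenced in the table rows above can be sketched against a mock module tree. The module key `avm_vm` and the file contents are hypothetical stand-ins for whatever `terraform init` actually downloads:

```shell
# Sketch: discover real import addresses from a downloaded module's source.
# The directory layout mimics what `terraform init` places under .terraform/modules/.
mkdir -p .terraform/modules/avm_vm
cat > .terraform/modules/avm_vm/main.tf <<'EOF'
resource "azapi_resource" "this" {
  type = "Microsoft.Compute/virtualMachines@2024-03-01"
}

module "nic" {
  source   = "./modules/network-interface"
  for_each = var.network_interfaces
}
EOF

# Top-level resources: reveals the provider prefix (azapi_, not azurerm_)
# and the resource label to use in the import address.
grep '^resource' .terraform/modules/avm_vm/main.tf

# Nested sub-modules: reveals extra module path segments and whether
# for_each keys (or count indexes) belong in the address.
grep '^module' .terraform/modules/avm_vm/main.tf
```

The first grep shows whether the module uses `azurerm_` or `azapi_` resources; the second exposes sub-module labels that must appear as additional `module.<label>` segments in the import address.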
## Response Contract
When returning results, provide:
1. Scope used (subscription, resource group, or resource IDs)
2. Discovery files created
3. Resource types detected
4. AVM modules selected with versions
5. Terraform files generated or updated
6. Validation command results
7. Open gaps requiring user input (if any)
## Execution Rules for the Agent
- Do not continue if scope is missing.
- Do not claim successful import without listing discovered files and validation output.
- Do not skip dependency mapping before generating Terraform.
- Prefer AVM modules first; justify each non-AVM fallback explicitly.
- **Read the README for every AVM module before writing code.** Required Inputs identify
which child resources the module owns. Optional Inputs document exact variable names and
types. Usage examples show provider-specific conventions (`parent_id` vs
`resource_group_name`). Skipping the README is the single most common cause of
code errors in AVM-based imports.
- **Never assume NIC, extension, or public IP resources are standalone.** For
any AVM module, treat child resources as parent-owned unless the README explicitly indicates
a separate module is required. Check Required Inputs before creating sibling modules.
- **Never write import addresses from memory.** After `terraform init`, grep the downloaded
module source to discover the actual provider (`azurerm` vs `azapi`), resource labels,
sub-module nesting, and `count` vs `for_each` usage before writing any `import {}` block.
- **Never treat ARM resource IDs as file paths.** Resource IDs belong in Azure CLI `--ids`
arguments and API queries, not file IO tools. Only read local files when a real workspace
path is provided.
- **Minimize prompts when scope is already known.** If subscription, resource group, or
specific resource IDs are already provided, proceed with commands directly and only ask a
follow-up when a command fails due to missing required context.
- **Do not declare the import complete until `terraform plan` shows 0 destroys and 0
unwanted changes.** Telemetry `+ create` resources are acceptable. Any `~ update` or
`- destroy` on real infrastructure resources must be resolved.
## References
- [Azure Verified Modules index (Terraform)](https://github.com/Azure/Azure-Verified-Modules/tree/main/docs/static/module-indexes)
- [Terraform AVM Registry namespace](https://registry.terraform.io/namespaces/Azure)