mirror of
https://github.com/github/awesome-copilot.git
synced 2026-04-13 03:35:55 +00:00
feat: add FlowStudio monitoring + governance skills, update debug + build + mcp (#1304)
- **New skill: flowstudio-power-automate-monitoring** — flow health, failure rates, maker inventory, Power Apps, environment/connection counts via FlowStudio MCP cached store tools.
- **New skill: flowstudio-power-automate-governance** — 10 CoE-aligned governance workflows: compliance review, orphan detection, archive scoring, connector audit, notification management, classification/tagging, maker offboarding, security review, environment governance, governance dashboard.
- **Updated flowstudio-power-automate-debug** — purely live API tools (no store dependencies), mandatory action-output inspection step, resubmit clarified as working for ALL trigger types.
- **Updated flowstudio-power-automate-build** — Step 1 uses list_live_flows (not list_store_flows) for the duplicate check, resubmit-first testing.
- **Updated flowstudio-power-automate-mcp** — store tool catalog, response shapes verified against real API calls, set_store_flow_state shape fix.
- Plugin version bumped to 2.0.0, all 5 skills listed in plugin.json.
- Generated docs regenerated via npm start.

All response shapes verified against real FlowStudio MCP API calls. All 10 governance workflows validated with real tenant data.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@@ -2,11 +2,20 @@
 name: flowstudio-power-automate-build
 description: >-
   Build, scaffold, and deploy Power Automate cloud flows using the FlowStudio
-  MCP server. Load this skill when asked to: create a flow, build a new flow,
+  MCP server. Your agent constructs flow definitions, wires connections, deploys,
+  and tests — all via MCP without opening the portal.
+  Load this skill when asked to: create a flow, build a new flow,
   deploy a flow definition, scaffold a Power Automate workflow, construct a flow
   JSON, update an existing flow's actions, patch a flow definition, add actions
   to a flow, wire up connections, or generate a workflow definition from scratch.
   Requires a FlowStudio MCP subscription — see https://mcp.flowstudio.app
+metadata:
+  openclaw:
+    requires:
+      env:
+        - FLOWSTUDIO_MCP_TOKEN
+      primaryEnv: FLOWSTUDIO_MCP_TOKEN
+  homepage: https://mcp.flowstudio.app
 ---

 # Build & Deploy Power Automate Flows with FlowStudio MCP
@@ -64,14 +73,15 @@ ENV = "<environment-id>"  # e.g. Default-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
 Always look before you build to avoid duplicates:

 ```python
-results = mcp("list_store_flows",
-              environmentName=ENV, searchTerm="My New Flow")
+results = mcp("list_live_flows", environmentName=ENV)

-# list_store_flows returns a direct array (no wrapper object)
-if len(results) > 0:
+# list_live_flows returns { "flows": [...] }
+matches = [f for f in results["flows"]
+           if "My New Flow".lower() in f["displayName"].lower()]
+
+if len(matches) > 0:
     # Flow exists — modify rather than create
-    # id format is "envId.flowId" — split to get the flow UUID
-    FLOW_ID = results[0]["id"].split(".", 1)[1]
+    FLOW_ID = matches[0]["id"]  # plain UUID from list_live_flows
     print(f"Existing flow: {FLOW_ID}")
     defn = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)
 else:
@@ -182,7 +192,7 @@ for connector in connectors_needed:
 > connection_references = ref_flow["properties"]["connectionReferences"]
 > ```

-See the `power-automate-mcp` skill's **connection-references.md** reference
+See the `flowstudio-power-automate-mcp` skill's **connection-references.md** reference
 for the full connection reference structure.

 ---
@@ -278,6 +288,8 @@ check = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)

 # Confirm state
 print("State:", check["properties"]["state"])  # Should be "Started"
+# If state is "Stopped", use set_live_flow_state — NOT update_live_flow
+# mcp("set_live_flow_state", environmentName=ENV, flowName=FLOW_ID, state="Started")

 # Confirm the action we added is there
 acts = check["properties"]["definition"]["actions"]
@@ -294,38 +306,45 @@ print("Actions:", list(acts.keys()))
 > flow will do and wait for explicit approval before calling `trigger_live_flow`
 > or `resubmit_live_flow_run`.

-### Updated flows (have prior runs)
+### Updated flows (have prior runs) — ANY trigger type

-The fastest path — resubmit the most recent run:
+> **Use `resubmit_live_flow_run` first.** It works for EVERY trigger type —
+> Recurrence, SharePoint, connector webhooks, Button, and HTTP. It replays
+> the original trigger payload. Do NOT ask the user to manually trigger the
+> flow or wait for the next scheduled run.

 ```python
 runs = mcp("get_live_flow_runs", environmentName=ENV, flowName=FLOW_ID, top=1)
 if runs:
+    # Works for Recurrence, SharePoint, connector triggers — not just HTTP
     result = mcp("resubmit_live_flow_run",
                  environmentName=ENV, flowName=FLOW_ID, runName=runs[0]["name"])
-    print(result)
+    print(result)  # {"resubmitted": true, "triggerName": "..."}
 ```

-### Flows already using an HTTP trigger
+### HTTP-triggered flows — custom test payload

-Fire directly with a test payload:
+Only use `trigger_live_flow` when you need to send a **different** payload
+than the original run. For verifying a fix, `resubmit_live_flow_run` is
+better because it uses the exact data that caused the failure.

 ```python
 schema = mcp("get_live_flow_http_schema",
              environmentName=ENV, flowName=FLOW_ID)
-print("Expected body:", schema.get("triggerSchema"))
+print("Expected body:", schema.get("requestSchema"))

 result = mcp("trigger_live_flow",
              environmentName=ENV, flowName=FLOW_ID,
              body={"name": "Test", "value": 1})
-print(f"Status: {result['status']}")
+print(f"Status: {result['responseStatus']}")
 ```

 ### Brand-new non-HTTP flows (Recurrence, connector triggers, etc.)

-A brand-new Recurrence or connector-triggered flow has no runs to resubmit
-and no HTTP endpoint to call. **Deploy with a temporary HTTP trigger first,
-test the actions, then swap to the production trigger.**
+A brand-new Recurrence or connector-triggered flow has **no prior runs** to
+resubmit and no HTTP endpoint to call. This is the ONLY scenario where you
+need the temporary HTTP trigger approach below. **Deploy with a temporary
+HTTP trigger first, test the actions, then swap to the production trigger.**

 #### 7a — Save the real trigger, deploy with a temporary HTTP trigger
@@ -384,7 +403,7 @@ if run["status"] == "Failed":
     root = err["failedActions"][-1]
     print(f"Root cause: {root['actionName']} → {root.get('code')}")
     # Debug and fix the definition before proceeding
-    # See power-automate-debug skill for full diagnosis workflow
+    # See flowstudio-power-automate-debug skill for full diagnosis workflow
 ```

 #### 7c — Swap to the production trigger
@@ -428,7 +447,7 @@ else:
 | `union(old_data, new_data)` | Old values override new (first-wins) | Use `union(new_data, old_data)` |
 | `split()` on potentially-null string | `InvalidTemplate` crash | Wrap with `coalesce(field, '')` |
 | Checking `result["error"]` exists | Always present; true error is `!= null` | Use `result.get("error") is not None` |
-| Flow deployed but state is "Stopped" | Flow won't run on schedule | Check connection auth; re-enable |
+| Flow deployed but state is "Stopped" | Flow won't run on schedule | Call `set_live_flow_state` with `state: "Started"` — do **not** use `update_live_flow` for state changes |
+| Teams "Chat with Flow bot" recipient as object | 400 `GraphUserDetailNotFound` | Use plain string with trailing semicolon (see below) |

 ### Teams `PostMessageToConversation` — Recipient Formats
@@ -2,11 +2,20 @@
 name: flowstudio-power-automate-debug
 description: >-
   Debug failing Power Automate cloud flows using the FlowStudio MCP server.
+  The Graph API only shows top-level status codes. This skill gives your agent
+  action-level inputs and outputs to find the actual root cause.
   Load this skill when asked to: debug a flow, investigate a failed run, why is
   this flow failing, inspect action outputs, find the root cause of a flow error,
   fix a broken Power Automate flow, diagnose a timeout, trace a DynamicOperationRequestFailure,
   check connector auth errors, read error details from a run, or troubleshoot
   expression failures. Requires a FlowStudio MCP subscription — see https://mcp.flowstudio.app
+metadata:
+  openclaw:
+    requires:
+      env:
+        - FLOWSTUDIO_MCP_TOKEN
+      primaryEnv: FLOWSTUDIO_MCP_TOKEN
+  homepage: https://mcp.flowstudio.app
 ---

 # Power Automate Debugging with FlowStudio MCP
@@ -14,6 +23,10 @@ description: >-
 A step-by-step diagnostic process for investigating failing Power Automate
 cloud flows through the FlowStudio MCP server.

+> **Real debugging examples**: [Expression error in child flow](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/fix-expression-error.md) |
+> [Data entry, not a flow bug](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/data-not-flow.md) |
+> [Null value crashes child flow](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/null-child-flow.md)
+
 **Prerequisite**: A FlowStudio MCP server must be reachable with a valid JWT.
 See the `flowstudio-power-automate-mcp` skill for connection setup.
 Subscribe at https://mcp.flowstudio.app
@@ -59,46 +72,6 @@ ENV = "<environment-id>"  # e.g. Default-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

 ---

-## FlowStudio for Teams: Fast-Path Diagnosis (Skip Steps 2–4)
-
-If you have a FlowStudio for Teams subscription, `get_store_flow_errors`
-returns per-run failure data including action names and remediation hints
-in a single call — no need to walk through live API steps.
-
-```python
-# Quick failure summary
-summary = mcp("get_store_flow_summary", environmentName=ENV, flowName=FLOW_ID)
-# {"totalRuns": 100, "failRuns": 10, "failRate": 0.1,
-#  "averageDurationSeconds": 29.4, "maxDurationSeconds": 158.9,
-#  "firstFailRunRemediation": "<hint or null>"}
-print(f"Fail rate: {summary['failRate']:.0%} over {summary['totalRuns']} runs")
-
-# Per-run error details (requires active monitoring to be configured)
-errors = mcp("get_store_flow_errors", environmentName=ENV, flowName=FLOW_ID)
-if errors:
-    for r in errors[:3]:
-        print(r["startTime"], "|", r.get("failedActions"), "|", r.get("remediationHint"))
-    # If errors confirms the failing action → jump to Step 6 (apply fix)
-else:
-    # Store doesn't have run-level detail for this flow — use live tools (Steps 2–5)
-    pass
-```
-
-For the full governance record (description, complexity, tier, connector list):
-```python
-record = mcp("get_store_flow", environmentName=ENV, flowName=FLOW_ID)
-# {"displayName": "My Flow", "state": "Started",
-#  "runPeriodTotal": 100, "runPeriodFailRate": 0.1, "runPeriodFails": 10,
-#  "runPeriodDurationAverage": 29410.8,   ← milliseconds
-#  "runError": "{\"code\": \"EACCES\", ...}",   ← JSON string, parse it
-#  "description": "...", "tier": "Premium", "complexity": "{...}"}
-if record.get("runError"):
-    last_err = json.loads(record["runError"])
-    print("Last run error:", last_err)
-```
-
 ## Step 1 — Locate the Flow

 ```python
@@ -134,6 +107,13 @@ RUN_ID = next(r["name"] for r in runs if r["status"] == "Failed")

 ## Step 3 — Get the Top-Level Error

+> **CRITICAL**: `get_live_flow_run_error` tells you **which** action failed.
+> `get_live_flow_run_action_outputs` tells you **why**. You must call BOTH.
+> Never stop at the error alone — error codes like `ActionFailed`,
+> `NotSpecified`, and `InternalServerError` are generic wrappers. The actual
+> root cause (wrong field, null value, HTTP 500 body, stack trace) is only
+> visible in the action's inputs and outputs.
+
 ```python
 err = mcp("get_live_flow_run_error",
           environmentName=ENV, flowName=FLOW_ID, runName=RUN_ID)
@@ -164,7 +144,86 @@ print(f"Root action: {root['actionName']} → code: {root.get('code')}")

 ---

-## Step 4 — Read the Flow Definition
+## Step 4 — Inspect the Failing Action's Inputs and Outputs
+
+> **This is the most important step.** `get_live_flow_run_error` only gives
+> you a generic error code. The actual error detail — HTTP status codes,
+> response bodies, stack traces, null values — lives in the action's runtime
+> inputs and outputs. **Always inspect the failing action immediately after
+> identifying it.**
+
+```python
+# Get the root failing action's full inputs and outputs
+root_action = err["failedActions"][-1]["actionName"]
+detail = mcp("get_live_flow_run_action_outputs",
+             environmentName=ENV,
+             flowName=FLOW_ID,
+             runName=RUN_ID,
+             actionName=root_action)
+
+out = detail[0] if detail else {}
+print(f"Action: {out.get('actionName')}")
+print(f"Status: {out.get('status')}")
+
+# For HTTP actions, the real error is in outputs.body
+if isinstance(out.get("outputs"), dict):
+    status_code = out["outputs"].get("statusCode")
+    body = out["outputs"].get("body", {})
+    print(f"HTTP {status_code}")
+    print(json.dumps(body, indent=2)[:500])
+
+    # Error bodies are often nested JSON strings — parse them
+    if isinstance(body, dict) and "error" in body:
+        err_detail = body["error"]
+        if isinstance(err_detail, str):
+            err_detail = json.loads(err_detail)
+        print(f"Error: {err_detail.get('message', err_detail)}")
+
+# For expression errors, the error is in the error field
+if out.get("error"):
+    print(f"Error: {out['error']}")
+
+# Also check inputs — they show what expression/URL/body was used
+if out.get("inputs"):
+    print(f"Inputs: {json.dumps(out['inputs'], indent=2)[:500]}")
+```
+
+### What the action outputs reveal (that error codes don't)
+
+| Error code from `get_live_flow_run_error` | What `get_live_flow_run_action_outputs` reveals |
+|---|---|
+| `ActionFailed` | Which nested action actually failed and its HTTP response |
+| `NotSpecified` | The HTTP status code + response body with the real error |
+| `InternalServerError` | The server's error message, stack trace, or API error JSON |
+| `InvalidTemplate` | The exact expression that failed and the null/wrong-type value |
+| `BadRequest` | The request body that was sent and why the server rejected it |
+
+### Example: HTTP action returning 500
+
+```
+Error code: "InternalServerError"   ← this tells you nothing
+
+Action outputs reveal:
+  HTTP 500
+  body: {"error": "Cannot read properties of undefined (reading 'toLowerCase')
+         at getClientParamsFromConnectionString (storage.js:20)"}
+  ← THIS tells you the Azure Function crashed because a connection string is undefined
+```
+
+### Example: Expression error on null
+
+```
+Error code: "BadRequest"   ← generic
+
+Action outputs reveal:
+  inputs:  "body('HTTP_GetTokenFromStore')?['token']?['access_token']"
+  outputs: ""   ← empty string, the path resolved to null
+  ← THIS tells you the response shape changed — token is at body.access_token, not body.token.access_token
+```
+
+---
+
+## Step 5 — Read the Flow Definition

 ```python
 defn = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)
@@ -177,41 +236,48 @@ to understand what data it expects.

 ---

-## Step 5 — Inspect Action Outputs (Walk Back from Failure)
+## Step 6 — Walk Back from the Failure

-For each action **leading up to** the failure, inspect its runtime output:
+When the failing action's inputs reference upstream actions, inspect those
+too. Walk backward through the chain until you find the source of the
+bad data:

 ```python
-for action_name in ["Compose_WeekEnd", "HTTP_Get_Data", "Parse_JSON"]:
+# Inspect multiple actions leading up to the failure
+for action_name in [root_action, "Compose_WeekEnd", "HTTP_Get_Data"]:
     result = mcp("get_live_flow_run_action_outputs",
                  environmentName=ENV,
                  flowName=FLOW_ID,
                  runName=RUN_ID,
                  actionName=action_name)
+    # Returns an array — single-element when actionName is provided
     out = result[0] if result else {}
-    print(action_name, out.get("status"))
-    print(json.dumps(out.get("outputs", {}), indent=2)[:500])
+    print(f"\n--- {action_name} ({out.get('status')}) ---")
+    print(f"Inputs: {json.dumps(out.get('inputs', ''), indent=2)[:300]}")
+    print(f"Outputs: {json.dumps(out.get('outputs', ''), indent=2)[:300]}")
 ```

 > ⚠️ Output payloads from array-processing actions can be very large.
 > Always slice (e.g. `[:500]`) before printing.

+> **Tip**: Omit `actionName` to get ALL actions in a single call.
+> This returns every action's inputs/outputs — useful when you're not sure
+> which upstream action produced the bad data. But use a 120 s+ timeout, as
+> the response can be very large.
+
 ---

-## Step 6 — Pinpoint the Root Cause
+## Step 7 — Pinpoint the Root Cause

 ### Expression Errors (e.g. `split` on null)
 If the error mentions `InvalidTemplate` or a function name:
 1. Find the action in the definition
 2. Check what upstream action/expression it reads
-3. Inspect that upstream action's output for null / missing fields
+3. **Inspect that upstream action's output** for null / missing fields

 ```python
 # Example: action uses split(item()?['Name'], ' ')
 # → null Name in the source data
 result = mcp("get_live_flow_run_action_outputs", ..., actionName="Compose_Names")
+# Returns a single-element array; index [0] to get the action object
+if not result:
+    print("No outputs returned for Compose_Names")
+    names = []
@@ -223,9 +289,20 @@ print(f"{len(nulls)} records with null Name")

 ### Wrong Field Path
 Expression `triggerBody()?['fieldName']` returns null → `fieldName` is wrong.
-Check the trigger output shape with:
+**Inspect the trigger output** to see the actual field names:
 ```python
-mcp("get_live_flow_run_action_outputs", ..., actionName="<trigger-action-name>")
+result = mcp("get_live_flow_run_action_outputs", ..., actionName="<trigger-action-name>")
+print(json.dumps(result[0].get("outputs"), indent=2)[:500])
 ```

+### HTTP Actions Returning Errors
+The error code says `InternalServerError` or `NotSpecified` — **always inspect
+the action outputs** to get the actual HTTP status and response body:
+```python
+result = mcp("get_live_flow_run_action_outputs", ..., actionName="HTTP_Get_Data")
+out = result[0]
+print(f"HTTP {out['outputs']['statusCode']}")
+print(json.dumps(out['outputs']['body'], indent=2)[:500])
+```
+
 ### Connection / Auth Failures
@@ -234,7 +311,7 @@ service account running the flow. Cannot fix via API; fix in PA designer.

 ---

-## Step 7 — Apply the Fix
+## Step 8 — Apply the Fix

 **For expression/data issues**:
 ```python
@@ -260,13 +337,23 @@ print(result.get("error"))  # None = success

 ---

-## Step 8 — Verify the Fix
+## Step 9 — Verify the Fix

+> **Use `resubmit_live_flow_run` to test ANY flow — not just HTTP triggers.**
+> `resubmit_live_flow_run` replays a previous run using its original trigger
+> payload. This works for **every trigger type**: Recurrence, SharePoint
+> "When an item is created", connector webhooks, Button triggers, and HTTP
+> triggers. You do NOT need to ask the user to manually trigger the flow or
+> wait for the next scheduled run.
+>
+> The only case where `resubmit` is not available is a **brand-new flow that
+> has never run** — it has no prior run to replay.

 ```python
-# Resubmit the failed run
+# Resubmit the failed run — works for ANY trigger type
 resubmit = mcp("resubmit_live_flow_run",
                environmentName=ENV, flowName=FLOW_ID, runName=RUN_ID)
-print(resubmit)
+print(resubmit)  # {"resubmitted": true, "triggerName": "..."}

 # Wait ~30 s then check
 import time; time.sleep(30)
@@ -274,16 +361,26 @@ new_runs = mcp("get_live_flow_runs", environmentName=ENV, flowName=FLOW_ID, top=
 print(new_runs[0]["status"])  # Succeeded = done
 ```

-### Testing HTTP-Triggered Flows
+### When to use resubmit vs trigger

-For flows with a `Request` (HTTP) trigger, use `trigger_live_flow` instead
-of `resubmit_live_flow_run` to test with custom payloads:
+| Scenario | Use | Why |
+|---|---|---|
+| **Testing a fix** on any flow | `resubmit_live_flow_run` | Replays the exact trigger payload that caused the failure — best way to verify |
+| Recurrence / scheduled flow | `resubmit_live_flow_run` | Cannot be triggered on demand any other way |
+| SharePoint / connector trigger | `resubmit_live_flow_run` | Cannot be triggered without creating a real SP item |
+| HTTP trigger with **custom** test payload | `trigger_live_flow` | When you need to send different data than the original run |
+| Brand-new flow, never run | `trigger_live_flow` (HTTP only) | No prior run exists to resubmit |
+
+### Testing HTTP-Triggered Flows with custom payloads
+
+For flows with a `Request` (HTTP) trigger, use `trigger_live_flow` when you
+need to send a **different** payload than the original run:

 ```python
 # First inspect what the trigger expects
 schema = mcp("get_live_flow_http_schema",
              environmentName=ENV, flowName=FLOW_ID)
-print("Expected body schema:", schema.get("triggerSchema"))
+print("Expected body schema:", schema.get("requestSchema"))
+print("Response schemas:", schema.get("responseSchemas"))

 # Trigger with a test payload
@@ -291,7 +388,7 @@ result = mcp("trigger_live_flow",
              environmentName=ENV,
              flowName=FLOW_ID,
              body={"name": "Test User", "value": 42})
-print(f"Status: {result['status']}, Body: {result.get('body')}")
+print(f"Status: {result['responseStatus']}, Body: {result.get('responseBody')}")
 ```

 > `trigger_live_flow` handles AAD-authenticated triggers automatically.
@@ -301,13 +398,19 @@ print(f"Status: {result['status']}, Body: {result.get('body')}")

 ## Quick-Reference Diagnostic Decision Tree

-| Symptom | First Tool to Call | What to Look For |
-|---|---|---|
-| Flow shows as Failed | `get_live_flow_run_error` | `failedActions[-1]["actionName"]` = root cause |
-| Expression crash | `get_live_flow_run_action_outputs` on prior action | null / wrong-type fields in output body |
-| Flow never starts | `get_live_flow` | check `properties.state` = "Started" |
-| Action returns wrong data | `get_live_flow_run_action_outputs` | actual output body vs expected |
-| Fix applied but still fails | `get_live_flow_runs` after resubmit | new run `status` field |
+| Symptom | First Tool | Then ALWAYS Call | What to Look For |
+|---|---|---|---|
+| Flow shows as Failed | `get_live_flow_run_error` | `get_live_flow_run_action_outputs` on the failing action | HTTP status + response body in `outputs` |
+| Error code is generic (`ActionFailed`, `NotSpecified`) | — | `get_live_flow_run_action_outputs` | The `outputs.body` contains the real error message, stack trace, or API error |
+| HTTP action returns 500 | — | `get_live_flow_run_action_outputs` | `outputs.statusCode` + `outputs.body` with server error detail |
+| Expression crash | — | `get_live_flow_run_action_outputs` on prior action | null / wrong-type fields in output body |
+| Flow never starts | `get_live_flow` | — | check `properties.state` = "Started" |
+| Action returns wrong data | `get_live_flow_run_action_outputs` | — | actual output body vs expected |
+| Fix applied but still fails | `get_live_flow_runs` after resubmit | — | new run `status` field |
+
+> **Rule: never diagnose from error codes alone.** `get_live_flow_run_error`
+> identifies the failing action. `get_live_flow_run_action_outputs` reveals
+> the actual cause. Always call both.

 ---
504  skills/flowstudio-power-automate-governance/SKILL.md  (new file)
@@ -0,0 +1,504 @@
---
name: flowstudio-power-automate-governance
description: >-
  Govern Power Automate flows and Power Apps at scale using the FlowStudio MCP
  cached store. Classify flows by business impact, detect orphaned resources,
  audit connector usage, enforce compliance standards, manage notification rules,
  and compute governance scores — all without Dataverse or the CoE Starter Kit.
  Load this skill when asked to: tag or classify flows, set business impact,
  assign ownership, detect orphans, audit connectors, check compliance, compute
  archive scores, manage notification rules, run a governance review, generate
  a compliance report, offboard a maker, or any task that involves writing
  governance metadata to flows. Requires a FlowStudio for Teams or MCP Pro+
  subscription — see https://mcp.flowstudio.app
metadata:
  openclaw:
    requires:
      env:
        - FLOWSTUDIO_MCP_TOKEN
      primaryEnv: FLOWSTUDIO_MCP_TOKEN
  homepage: https://mcp.flowstudio.app
---

# Power Automate Governance with FlowStudio MCP

Classify, tag, and govern Power Automate flows at scale through the FlowStudio
MCP **cached store** — without Dataverse, without the CoE Starter Kit, and
without the Power Automate portal.

This skill uses `update_store_flow` to write governance metadata and the
monitoring tools (`list_store_flows`, `get_store_flow`, `list_store_makers`,
etc.) to read tenant state. For monitoring and health-check workflows, see
the `flowstudio-power-automate-monitoring` skill.

> **Start every session with `tools/list`** to confirm tool names and parameters.
> This skill covers workflows and patterns — things `tools/list` cannot tell you.
> If this document disagrees with `tools/list` or a real API response, the API wins.

---
## Critical: How to Extract Flow IDs

`list_store_flows` returns `id` in format `<environmentId>.<flowId>`. **You must split
on the first `.`** to get `environmentName` and `flowName` for all other tools:

```
id              = "Default-<envGuid>.<flowGuid>"
environmentName = "Default-<envGuid>"   (everything before the first ".")
flowName        = "<flowGuid>"          (everything after the first ".")
```

Also: skip entries that have no `displayName` or have `state=Deleted` —
these are sparse records or flows that no longer exist in Power Automate.
If a deleted flow has `monitor=true`, suggest disabling monitoring
(`update_store_flow` with `monitor=false`) to free up a monitoring slot
(standard plan includes 20).
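The split rule above can be sketched in Python. The `split_store_id` helper and the example ids are illustrative, not part of the FlowStudio API:

```python
def split_store_id(store_id: str) -> tuple[str, str]:
    # Split on the FIRST "." only — the GUID segments contain dashes,
    # not dots, but partition() keeps the rule explicit and never over-splits.
    environment_name, _, flow_name = store_id.partition(".")
    return environment_name, flow_name

env, flow = split_store_id(
    "Default-1111aaaa-22bb-33cc-44dd-555555eeeeee.abcd1234-0000-1111-2222-333344445555")
print(env)   # Default-1111aaaa-22bb-33cc-44dd-555555eeeeee
print(flow)  # abcd1234-0000-1111-2222-333344445555
```

Pass `env` as `environmentName` and `flow` as `flowName` to the other store tools.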

---

## The Write Tool: `update_store_flow`

`update_store_flow` writes governance metadata to the **Flow Studio cache
only** — it does NOT modify the flow in Power Automate. These fields are
not visible via `get_live_flow` or the PA portal. They exist only in the
Flow Studio store and are used by Flow Studio's scanning pipeline and
notification rules.

This means:

- `ownerTeam` / `supportEmail` — sets who Flow Studio considers the
  governance contact. Does NOT change the actual PA flow owner.
- `rule_notify_email` — sets who receives Flow Studio failure/missing-run
  notifications. Does NOT change Microsoft's built-in flow failure alerts.
- `monitor` / `critical` / `businessImpact` — Flow Studio classification
  only. Power Automate has no equivalent fields.

Merge semantics — only fields you provide are updated. Returns the full
updated record (same shape as `get_store_flow`).

Required parameters: `environmentName`, `flowName`. All other fields optional.

### Settable Fields

| Field | Type | Purpose |
|---|---|---|
| `monitor` | bool | Enable run-level scanning (standard plan: 20 flows included) |
| `rule_notify_onfail` | bool | Send email notification on any failed run |
| `rule_notify_onmissingdays` | number | Send notification when flow hasn't run in N days (0 = disabled) |
| `rule_notify_email` | string | Comma-separated notification recipients |
| `description` | string | What the flow does |
| `tags` | string | Classification tags (also auto-extracted from description `#hashtags`) |
| `businessImpact` | string | Low / Medium / High / Critical |
| `businessJustification` | string | Why the flow exists, what process it automates |
| `businessValue` | string | Business value statement |
| `ownerTeam` | string | Accountable team |
| `ownerBusinessUnit` | string | Business unit |
| `supportGroup` | string | Support escalation group |
| `supportEmail` | string | Support contact email |
| `critical` | bool | Designate as business-critical |
| `tier` | string | Standard or Premium |
| `security` | string | Security classification or notes |

> **Caution with `security`:** The `security` field on `get_store_flow`
> contains structured JSON (e.g. `{"triggerRequestAuthenticationType":"All"}`).
> Writing a plain string like `"reviewed"` will overwrite this. To mark a
> flow as security-reviewed, use `tags` instead.
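The merge semantics can be illustrated without a live call — a sketch only, assuming the store behaves like a field-wise dict update (the record values are invented):

```python
def merge_update(record: dict, **fields) -> dict:
    # Apply only the fields the caller actually provided;
    # unset fields keep their current store values.
    updated = dict(record)
    updated.update(fields)
    return updated

record = {"displayName": "Invoice Sync", "monitor": False, "critical": False}
after = merge_update(record, monitor=True, businessImpact="High")
print(after["monitor"], after["businessImpact"])  # True High
print(after["critical"])  # False — an untouched field keeps its value
```

This is why a compliance pass can safely fill in one missing field at a time without re-sending the whole record.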

---

## Governance Workflows
|
||||
|
||||
### 1. Compliance Detail Review
|
||||
|
||||
Identify flows missing required governance metadata — the equivalent of
|
||||
the CoE Starter Kit's Developer Compliance Center.
|
||||
|
||||
```
|
||||
1. Ask the user which compliance fields they require
|
||||
(or use their organization's existing governance policy)
|
||||
2. list_store_flows
|
||||
3. For each flow (skip entries without displayName or state=Deleted):
|
||||
- Split id → environmentName, flowName
|
||||
- get_store_flow(environmentName, flowName)
|
||||
- Check which required fields are missing or empty
|
||||
4. Report non-compliant flows with missing fields listed
|
||||
5. For each non-compliant flow:
|
||||
- Ask the user for values
|
||||
- update_store_flow(environmentName, flowName, ...provided fields)
|
||||
```
|
||||
|
||||
**Fields available for compliance checks:**
|
||||
|
||||
| Field | Example policy |
|
||||
|---|---|
|
||||
| `description` | Every flow should be documented |
|
||||
| `businessImpact` | Classify as Low / Medium / High / Critical |
|
||||
| `businessJustification` | Required for High/Critical impact flows |
|
||||
| `ownerTeam` | Every flow should have an accountable team |
|
||||
| `supportEmail` | Required for production flows |
|
||||
| `monitor` | Required for critical flows (note: standard plan includes 20 monitored flows) |
|
||||
| `rule_notify_onfail` | Recommended for monitored flows |
|
||||
| `critical` | Designate business-critical flows |
|
||||
|
||||
> Each organization defines their own compliance rules. The fields above are
|
||||
> suggestions based on common Power Platform governance patterns (CoE Starter
|
||||
> Kit). Ask the user what their requirements are before flagging flows as
|
||||
> non-compliant.
|
||||
>
|
||||
> **Tip:** Flows created or updated via MCP already have `description`
|
||||
> (auto-appended by `update_live_flow`). Flows created manually in the
|
||||
> Power Automate portal are the ones most likely missing governance metadata.
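
The "missing or empty" check in step 3 can be sketched as a small pure function. The required-field list below is just an example policy; substitute whatever the user's organization requires:

```python
# Example policy only; ask the user which fields their org requires.
REQUIRED_FIELDS = ["description", "businessImpact", "ownerTeam", "supportEmail"]


def missing_fields(record, required=REQUIRED_FIELDS):
    """Return the required governance fields that are absent or blank
    on a get_store_flow record."""
    return [f for f in required if not str(record.get(f) or "").strip()]


flow = {"displayName": "Invoice sync", "businessImpact": "High", "ownerTeam": ""}
print(missing_fields(flow))  # → ['description', 'ownerTeam', 'supportEmail']
```

A flow is compliant when the returned list is empty; otherwise the list is exactly what step 4 reports.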

### 2. Orphaned Resource Detection

Find flows owned by deleted or disabled Azure AD accounts.

```
1. list_store_makers
2. Filter where deleted=true AND ownerFlowCount > 0
   Note: deleted makers have NO displayName/mail — record their id (AAD OID)
3. list_store_flows → collect all flows
4. For each flow (skip entries without displayName or state=Deleted):
   - Split id → environmentName, flowName
   - get_store_flow(environmentName, flowName)
   - Parse owners: json.loads(record["owners"])
   - Check if any owner principalId matches an orphaned maker id
5. Report orphaned flows: maker id, flow name, flow state
6. For each orphaned flow:
   - Reassign governance: update_store_flow(environmentName, flowName,
       ownerTeam="NewTeam", supportEmail="new-owner@contoso.com")
   - Or decommission: set_store_flow_state(environmentName, flowName,
       state="Stopped")
```

> `update_store_flow` updates governance metadata in the cache only. To
> transfer actual PA ownership, an admin must use the Power Platform admin
> center or PowerShell.
>
> **Note:** Many orphaned flows are system-generated (created by
> `DataverseSystemUser` accounts for SLA monitoring, knowledge articles,
> etc.). These were never built by a person — consider tagging them
> rather than reassigning.
>
> **Coverage:** This workflow searches the cached store only, not the
> live PA API. Flows created after the last scan won't appear.
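
The cross-reference in steps 2 and 4 can be sketched as follows, assuming maker records and `get_store_flow` records shaped as documented (`owners` is a JSON string of objects with `principalId`):

```python
import json


def orphaned_flows(flow_records, makers):
    """Flag flows whose owners include a deleted maker that still has flows."""
    orphan_ids = {
        m["id"] for m in makers
        if m.get("deleted") and m.get("ownerFlowCount", 0) > 0
    }
    flagged = []
    for record in flow_records:
        owners = json.loads(record.get("owners") or "[]")
        if any(o.get("principalId") in orphan_ids for o in owners):
            flagged.append((record["id"], record.get("state")))
    return flagged
```

The returned `(flow id, state)` pairs are what step 5 reports before the user decides between reassignment and decommissioning.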

### 3. Archive Score Calculation

Compute an inactivity score (0-7) per flow to identify safe cleanup
candidates. Aligns with the CoE Starter Kit's archive scoring.

```
1. list_store_flows
2. For each flow (skip entries without displayName or state=Deleted):
   - Split id → environmentName, flowName
   - get_store_flow(environmentName, flowName)
3. Compute archive score (0-7), add 1 point for each:
   +1 lastModifiedTime within 24 hours of createdTime
   +1 displayName contains "test", "demo", "copy", "temp", or "backup"
      (case-insensitive)
   +1 createdTime is more than 12 months ago
   +1 state is "Stopped" or "Suspended"
   +1 json.loads(owners) is empty array []
   +1 runPeriodTotal = 0 (never ran or no recent runs)
   +1 parse json.loads(complexity) → actions < 5
4. Classify:
   Score 5-7: Recommend archive — report to user for confirmation
   Score 3-4: Flag for review →
      Read existing tags from get_store_flow response, append #archive-review
      update_store_flow(environmentName, flowName, tags="<existing> #archive-review")
   Score 0-2: Active, no action
5. For user-confirmed archives:
   set_store_flow_state(environmentName, flowName, state="Stopped")
   Read existing tags, append #archived
   update_store_flow(environmentName, flowName, tags="<existing> #archived")
```

> **What "archive" means:** Power Automate has no native archive feature.
> Archiving via MCP means: (1) stop the flow so it can't run, and
> (2) tag it `#archived` so it's discoverable for future cleanup.
> Actual deletion requires the Power Automate portal or admin PowerShell
> — it cannot be done via MCP tools.
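
The seven criteria above translate directly into a scoring function. This is a sketch over a `get_store_flow` record as documented (`owners` and `complexity` are JSON strings, timestamps are ISO 8601 with a trailing `Z`):

```python
import json
from datetime import datetime, timedelta, timezone

KEYWORDS = ("test", "demo", "copy", "temp", "backup")


def archive_score(record, now=None):
    """Score a get_store_flow record 0-7; higher means safer to archive."""
    now = now or datetime.now(timezone.utc)
    created = datetime.fromisoformat(record["createdTime"].replace("Z", "+00:00"))
    modified = datetime.fromisoformat(record["lastModifiedTime"].replace("Z", "+00:00"))
    score = 0
    score += (modified - created) <= timedelta(hours=24)        # never touched again
    score += any(k in record["displayName"].lower() for k in KEYWORDS)
    score += (now - created) > timedelta(days=365)              # older than 12 months
    score += record["state"] in ("Stopped", "Suspended")
    score += json.loads(record.get("owners") or "[]") == []
    score += record.get("runPeriodTotal", 0) == 0
    score += json.loads(record.get("complexity") or "{}").get("actions", 0) < 5
    return score
```

Each boolean adds one point, so the result lands on the 0-7 scale that step 4 classifies.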

### 4. Connector Audit

Audit which connectors are in use across monitored flows. Useful for DLP
impact analysis and premium license planning.

```
1. list_store_flows(monitor=true)
   (scope to monitored flows — auditing all 1000+ flows is expensive)
2. For each flow (skip entries without displayName or state=Deleted):
   - Split id → environmentName, flowName
   - get_store_flow(environmentName, flowName)
   - Parse connections: json.loads(record["connections"])
     Returns array of objects with apiName, apiId, connectionName
   - Note the flow-level tier field ("Standard" or "Premium")
3. Build connector inventory:
   - Which apiNames are used and by how many flows
   - Which flows have tier="Premium" (premium connector detected)
   - Which flows use HTTP connectors (apiName contains "http")
   - Which flows use custom connectors (non-shared_ prefix apiNames)
4. Report inventory to user
   - For DLP analysis: user provides their DLP policy connector groups,
     agent cross-references against the inventory
```

> **Scope to monitored flows.** Each flow requires a `get_store_flow` call
> to read the `connections` JSON. Standard plans have ~20 monitored flows —
> manageable. Auditing all flows in a large tenant (1000+) would be very
> expensive in API calls.
>
> **`list_store_connections`** returns connection instances (who created
> which connection) but NOT connector types per flow. Use it for connection
> counts per environment, not for the connector audit.
>
> DLP policy definitions are not available via MCP. The agent builds the
> connector inventory; the user provides the DLP classification to
> cross-reference against.
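
The inventory in step 3 is a counting pass over the parsed `connections` arrays. A minimal sketch, assuming records shaped as documented:

```python
import json
from collections import Counter


def connector_inventory(flow_records):
    """Count how many flows use each connector (one count per flow, even if
    a flow holds several connections to the same connector), and collect
    the flows whose tier field indicates a premium connector."""
    usage = Counter()
    premium_flows = []
    for record in flow_records:
        apis = {c["apiName"] for c in json.loads(record.get("connections") or "[]")}
        usage.update(apis)
        if record.get("tier") == "Premium":
            premium_flows.append(record["id"])
    return usage, premium_flows
```

Deduplicating `apiName` per flow first means `usage["shared_teams"]` answers "how many flows use Teams", not "how many Teams connections exist".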

### 5. Notification Rule Management

Configure monitoring and alerting for flows at scale.

```
Enable failure alerts on all critical flows:
1. list_store_flows(monitor=true)
2. For each flow (skip entries without displayName or state=Deleted):
   - Split id → environmentName, flowName
   - get_store_flow(environmentName, flowName)
   - If critical=true AND rule_notify_onfail is not true:
       update_store_flow(environmentName, flowName,
         rule_notify_onfail=true,
         rule_notify_email="oncall@contoso.com")
   - If NO flows have critical=true: this is a governance finding.
     Recommend the user designate their most important flows as critical
     using update_store_flow(critical=true) before configuring alerts.

Enable missing-run detection for scheduled flows:
1. list_store_flows(monitor=true)
2. For each flow where triggerType="Recurrence" (available on list response):
   - Skip flows with state="Stopped" or "Suspended" (not expected to run)
   - Split id → environmentName, flowName
   - get_store_flow(environmentName, flowName)
   - If rule_notify_onmissingdays is 0 or not set:
       update_store_flow(environmentName, flowName,
         rule_notify_onmissingdays=2)
```

> `critical`, `rule_notify_onfail`, and `rule_notify_onmissingdays` are only
> available from `get_store_flow`, not from `list_store_flows`. The list call
> pre-filters to monitored flows; the detail call checks the notification fields.
>
> **Monitoring limit:** The standard plan (FlowStudio for Teams / MCP Pro+)
> includes 20 monitored flows. Before bulk-enabling `monitor=true`, check
> how many flows are already monitored:
> `len(list_store_flows(monitor=true))`
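
The "critical but no failure alert" pass can be sketched as a planner that emits the `update_store_flow` payloads to apply. The recipient address is a placeholder; the record shape follows `get_store_flow` as documented:

```python
def plan_onfail_updates(detail_records, recipients="oncall@contoso.com"):
    """From get_store_flow records, plan update_store_flow calls for
    critical flows that have no failure alert configured yet."""
    updates = []
    for record in detail_records:
        if record.get("critical") and not record.get("rule_notify_onfail"):
            # id is "<environmentId>.<flowId>"; split on the first dot.
            env, flow = record["id"].split(".", 1)
            updates.append({
                "environmentName": env,
                "flowName": flow,
                "rule_notify_onfail": True,
                "rule_notify_email": recipients,
            })
    return updates
```

Planning first and applying afterwards keeps the write phase reviewable by the user before any notification rule changes.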

### 6. Classification and Tagging

Bulk-classify flows by connector type, business function, or risk level.

```
Auto-tag by connector:
1. list_store_flows
2. For each flow (skip entries without displayName or state=Deleted):
   - Split id → environmentName, flowName
   - get_store_flow(environmentName, flowName)
   - Parse connections: json.loads(record["connections"])
   - Build tags from apiName values:
       shared_sharepointonline → #sharepoint
       shared_teams → #teams
       shared_office365 → #email
       Custom connectors → #custom-connector
       HTTP-related connectors → #http-external
   - Read existing tags from get_store_flow response, append new tags
   - update_store_flow(environmentName, flowName,
       tags="<existing tags> #sharepoint #teams")
```

> **Two tag systems:** Tags shown in `list_store_flows` are auto-extracted
> from the flow's `description` field (e.g. a maker writes `#operations` in
> the PA portal description). Tags set via `update_store_flow(tags=...)`
> write to a separate field in the Azure Table cache. They are independent —
> writing store tags does not touch the description, and editing the
> description in the portal does not affect store tags.
>
> **Tag merge:** `update_store_flow(tags=...)` overwrites the store tags
> field. To avoid losing tags from other workflows, read the current store
> tags from `get_store_flow` first, append new ones, then write back.
>
> `get_store_flow` already has a `tier` field (Standard/Premium) computed
> by the scanning pipeline. Only use `update_store_flow(tier=...)` if you
> need to override it.
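
Because `update_store_flow(tags=...)` overwrites the stored value, the read-append-write step deserves a helper. A minimal sketch, assuming store tags are a space-separated string:

```python
def merge_tags(existing, new_tags):
    """Append tags to the store tags string, preserving order and
    dropping duplicates, so no earlier workflow's tags are lost."""
    seen = []
    for tag in (existing or "").split() + list(new_tags):
        if tag not in seen:
            seen.append(tag)
    return " ".join(seen)


print(merge_tags("#operations #sensitive", ["#sharepoint", "#operations"]))
# → #operations #sensitive #sharepoint
```

Pass the result as the `tags` value on the next `update_store_flow` call.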

### 7. Maker Offboarding

When an employee leaves, identify their flows and apps, and reassign
Flow Studio governance contacts and notification recipients.

```
1. get_store_maker(makerKey="<departing-user-aad-oid>")
   → check ownerFlowCount, ownerAppCount, deleted status
2. list_store_flows → collect all flows
3. For each flow (skip entries without displayName or state=Deleted):
   - Split id → environmentName, flowName
   - get_store_flow(environmentName, flowName)
   - Parse owners: json.loads(record["owners"])
   - If any principalId matches the departing user's OID → flag
4. list_store_power_apps → filter where ownerId matches the OID
5. For each flagged flow:
   - Check runPeriodTotal and runLast — is it still active?
   - If keeping:
       update_store_flow(environmentName, flowName,
         ownerTeam="NewTeam", supportEmail="new-owner@contoso.com")
   - If decommissioning:
       set_store_flow_state(environmentName, flowName, state="Stopped")
       Read existing tags, append #decommissioned
       update_store_flow(environmentName, flowName, tags="<existing> #decommissioned")
6. Report: flows reassigned, flows stopped, apps needing manual reassignment
```

> **What "reassign" means here:** `update_store_flow` changes who Flow
> Studio considers the governance contact and who receives Flow Studio
> notifications. It does NOT transfer the actual Power Automate flow
> ownership — that requires the Power Platform admin center or PowerShell.
> Also update `rule_notify_email` so failure notifications go to the new
> team instead of the departing employee's email.
>
> Power Apps ownership cannot be changed via MCP tools. Report them for
> manual reassignment in the Power Apps admin center.
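
The keep-vs-decommission decision in step 5 can be sketched as a rule of thumb over `runPeriodTotal` and `runLast`. The 90-day idle threshold is an assumption; adjust it to the organization's retention policy:

```python
from datetime import datetime, timedelta, timezone


def offboarding_action(record, idle_days=90, now=None):
    """Suggest keep vs decommission for a flagged flow: flows that never
    ran, or have been idle longer than idle_days, are decommission
    candidates; everything else should be reassigned."""
    now = now or datetime.now(timezone.utc)
    if record.get("runPeriodTotal", 0) == 0:
        return "decommission"
    run_last = record.get("runLast")
    if run_last:
        last = datetime.fromisoformat(run_last.replace("Z", "+00:00"))
        if now - last > timedelta(days=idle_days):
            return "decommission"
    return "reassign"
```

Treat the output as a suggestion to show the user, not an automatic action.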

### 8. Security Review

Review flows for potential security concerns using cached store data.

```
1. list_store_flows(monitor=true)
2. For each flow (skip entries without displayName or state=Deleted):
   - Split id → environmentName, flowName
   - get_store_flow(environmentName, flowName)
   - Parse security: json.loads(record["security"])
   - Parse connections: json.loads(record["connections"])
   - Read sharingType directly (top-level field, NOT inside security JSON)
3. Report findings to user for review
4. For reviewed flows:
   Read existing tags, append #security-reviewed
   update_store_flow(environmentName, flowName, tags="<existing> #security-reviewed")
   Do NOT overwrite the security field — it contains structured auth data
```

**Fields available for security review:**

| Field | Where | What it tells you |
|---|---|---|
| `security.triggerRequestAuthenticationType` | security JSON | `"All"` = HTTP trigger accepts unauthenticated requests |
| `sharingType` | top-level | `"Coauthor"` = shared with co-authors for editing |
| `connections` | connections JSON | Which connectors the flow uses (check for HTTP, custom) |
| `referencedResources` | JSON string | SharePoint sites, Teams channels, external URLs the flow accesses |
| `tier` | top-level | `"Premium"` = uses premium connectors |

> Each organization decides what constitutes a security concern. For example,
> an unauthenticated HTTP trigger is expected for webhook receivers (Stripe,
> GitHub) but may be a risk for internal flows. Review findings in context
> before flagging.
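
The per-flow inspection in step 2 can be sketched as a findings collector over the fields in the table above. The finding strings are illustrative; the field names and values come from the documented record shape:

```python
import json


def security_findings(record):
    """Collect review-worthy observations from one get_store_flow record."""
    findings = []
    security = json.loads(record.get("security") or "{}")
    if security.get("triggerRequestAuthenticationType") == "All":
        findings.append("HTTP trigger accepts unauthenticated requests")
    if record.get("sharingType") == "Coauthor":
        findings.append("shared with co-authors for editing")
    for conn in json.loads(record.get("connections") or "[]"):
        if "http" in conn.get("apiName", "").lower():
            findings.append("uses HTTP connector: " + conn["apiName"])
    return findings
```

An empty list means nothing stood out; a non-empty list is what step 3 surfaces for human review in context.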

### 9. Environment Governance

Audit environments for compliance and sprawl.

```
1. list_store_environments
   Skip entries without displayName (tenant-level metadata rows)
2. Flag:
   - Developer environments (sku="Developer") — should be limited
   - Non-managed environments (isManagedEnvironment=false) — less governance
   - Note: isAdmin=false means the current service account lacks admin
     access to that environment, not that the environment has no admin
3. list_store_flows → group by environmentName
   - Flow count per environment
   - Failure rate analysis: runPeriodFailRate is on the list response —
     no need for per-flow get_store_flow calls
4. list_store_connections → group by environmentName
   - Connection count per environment
```
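
Step 3's grouping works entirely from the cheap list response, since `runPeriodFailRate` is already on each entry. A sketch, assuming the documented `list_store_flows` shape:

```python
from collections import defaultdict


def flows_per_environment(list_entries):
    """Group list_store_flows entries by environment: flow count and mean
    failure rate, skipping sparse and deleted entries as the workflow does."""
    groups = defaultdict(list)
    for entry in list_entries:
        if entry.get("displayName") and entry.get("state") != "Deleted":
            groups[entry["environmentName"]].append(entry.get("runPeriodFailRate") or 0.0)
    return {
        env: {"flows": len(rates), "avgFailRate": sum(rates) / len(rates)}
        for env, rates in groups.items()
    }
```

No `get_store_flow` calls are needed, which is what makes this audit cheap even in large tenants.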

### 10. Governance Dashboard

Generate a tenant-wide governance summary.

```
Efficient metrics (list calls only):
1. total_flows = len(list_store_flows())
2. monitored = len(list_store_flows(monitor=true))
3. with_onfail = len(list_store_flows(rule_notify_onfail=true))
4. makers = list_store_makers()
   → active = count where deleted=false
   → orphan_count = count where deleted=true AND ownerFlowCount > 0
5. apps = list_store_power_apps()
   → widely_shared = count where sharedUsersCount > 3
6. envs = list_store_environments() → count, group by sku
7. conns = list_store_connections() → count

Compute from list data:
- Monitoring %: monitored / total_flows
- Notification %: with_onfail / monitored
- Orphan count: from step 4
- High-risk count: flows with runPeriodFailRate > 0.2 (on list response)

Detailed metrics (require get_store_flow per flow — expensive for large tenants):
- Compliance %: flows with businessImpact set / total active flows
- Undocumented count: flows without description
- Tier breakdown: group by tier field

For detailed metrics, iterate all flows in a single pass:
For each flow from list_store_flows (skip sparse entries):
   Split id → environmentName, flowName
   get_store_flow(environmentName, flowName)
   → accumulate businessImpact, description, tier
```
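
The "compute from list data" arithmetic can be sketched as one small function; the 0.2 fail-rate threshold mirrors the high-risk cutoff above and is a tunable assumption:

```python
def dashboard_metrics(total_flows, monitored, with_onfail, list_entries,
                      risk_threshold=0.2):
    """Compute the cheap dashboard ratios from list-call results alone,
    guarding against division by zero on empty tenants."""
    return {
        "monitoring_pct": monitored / total_flows if total_flows else 0.0,
        "notification_pct": with_onfail / monitored if monitored else 0.0,
        "high_risk": sum(
            1 for e in list_entries
            if (e.get("runPeriodFailRate") or 0.0) > risk_threshold
        ),
    }
```

Only the detailed metrics (compliance %, undocumented count, tier breakdown) require the expensive per-flow `get_store_flow` pass.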

---

## Field Reference: `get_store_flow` Fields Used in Governance

All fields below are confirmed present on the `get_store_flow` response.
Fields marked with `*` are also available on `list_store_flows` (cheaper).

| Field | Type | Governance use |
|---|---|---|
| `displayName` * | string | Archive score (test/demo name detection) |
| `state` * | string | Archive score, lifecycle management |
| `tier` | string | License audit (Standard vs Premium) |
| `monitor` * | bool | Is this flow being actively monitored? |
| `critical` | bool | Business-critical designation (settable via update_store_flow) |
| `businessImpact` | string | Compliance classification |
| `businessJustification` | string | Compliance attestation |
| `ownerTeam` | string | Ownership accountability |
| `supportEmail` | string | Escalation contact |
| `rule_notify_onfail` | bool | Failure alerting configured? |
| `rule_notify_onmissingdays` | number | SLA monitoring configured? |
| `rule_notify_email` | string | Alert recipients |
| `description` | string | Documentation completeness |
| `tags` | string | Classification — `list_store_flows` shows description-extracted hashtags only; store tags written by `update_store_flow` require `get_store_flow` to read back |
| `runPeriodTotal` * | number | Activity level |
| `runPeriodFailRate` * | number | Health status |
| `runLast` | ISO string | Last run timestamp |
| `scanned` | ISO string | Data freshness |
| `deleted` | bool | Lifecycle tracking |
| `createdTime` * | ISO string | Archive score (age) |
| `lastModifiedTime` * | ISO string | Archive score (staleness) |
| `owners` | JSON string | Orphan detection, ownership audit — parse with json.loads() |
| `connections` | JSON string | Connector audit, tier — parse with json.loads() |
| `complexity` | JSON string | Archive score (simplicity) — parse with json.loads() |
| `security` | JSON string | Auth type audit — parse with json.loads(), contains `triggerRequestAuthenticationType` |
| `sharingType` | string | Oversharing detection (top-level, NOT inside security) |
| `referencedResources` | JSON string | URL audit — parse with json.loads() |
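
Since several fields above are JSON strings that may also be missing or empty on sparse records, a defensive parsing helper keeps every workflow from repeating the same guard. A sketch:

```python
import json


def parse_json_field(record, field, default):
    """Safely parse one of the JSON-string fields (owners, connections,
    complexity, security, referencedResources) from a get_store_flow
    record, falling back to a caller-supplied default."""
    raw = record.get(field)
    if not raw:
        return default
    try:
        return json.loads(raw)
    except (TypeError, json.JSONDecodeError):
        return default
```

Use `[]` as the default for array-valued fields (`owners`, `connections`) and `{}` for object-valued ones (`complexity`, `security`).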

---

## Related Skills

- `flowstudio-power-automate-monitoring` — Health checks, failure rates, inventory (read-only)
- `flowstudio-power-automate-mcp` — Core connection setup, live tool reference
- `flowstudio-power-automate-debug` — Deep diagnosis with action-level inputs/outputs
- `flowstudio-power-automate-build` — Build and deploy flow definitions

@@ -1,13 +1,22 @@
---
name: flowstudio-power-automate-mcp
description: >-
  Connect to and operate Power Automate cloud flows via a FlowStudio MCP server.
  Give your AI agent the same visibility you have in the Power Automate portal — plus
  a bit more. The Graph API only returns top-level run status. Flow Studio MCP exposes
  action-level inputs, outputs, loop iterations, and nested child flow failures.
  Use when asked to: list flows, read a flow definition, check run history, inspect
  action outputs, resubmit a run, cancel a running flow, view connections, get a
  trigger URL, validate a definition, monitor flow health, or any task that requires
  talking to the Power Automate API through an MCP tool. Also use for Power Platform
  environment discovery and connection management. Requires a FlowStudio MCP
  subscription or compatible server — see https://mcp.flowstudio.app
metadata:
  openclaw:
    requires:
      env:
        - FLOWSTUDIO_MCP_TOKEN
      primaryEnv: FLOWSTUDIO_MCP_TOKEN
    homepage: https://mcp.flowstudio.app
---

# Power Automate via FlowStudio MCP

@@ -16,6 +25,10 @@ This skill lets AI agents read, monitor, and operate Microsoft Power Automate
cloud flows programmatically through a **FlowStudio MCP server** — no browser,
no UI, no manual steps.

> **Real debugging examples**: [Expression error in child flow](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/fix-expression-error.md) |
> [Data entry, not a flow bug](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/data-not-flow.md) |
> [Null value crashes child flow](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/null-child-flow.md)

> **Requires:** A [FlowStudio](https://mcp.flowstudio.app) MCP subscription (or
> compatible Power Automate MCP server). You will need:
> - MCP endpoint: `https://mcp.flowstudio.app/mcp` (same for all subscribers)

@@ -445,6 +458,6 @@ print(new_runs[0]["status"]) # Succeeded = done

## More Capabilities

-For **diagnosing failing flows** end-to-end → load the `power-automate-debug` skill.
+For **diagnosing failing flows** end-to-end → load the `flowstudio-power-automate-debug` skill.

-For **building and deploying new flows** → load the `power-automate-build` skill.
+For **building and deploying new flows** → load the `flowstudio-power-automate-build` skill.

@@ -3,7 +3,7 @@
Compact lookup for recognising action types returned by `get_live_flow`.
Use this to **read and understand** existing flow definitions.

-> For full copy-paste construction patterns, see the `power-automate-build` skill.
+> For full copy-paste construction patterns, see the `flowstudio-power-automate-build` skill.

---

@@ -138,7 +138,7 @@ Response: **direct array** (no wrapper).
]
```

-> **`id` format**: `envId.flowId` --- split on the first `.` to extract the flow UUID:
+> **`id` format**: `<environmentId>.<flowId>` --- split on the first `.` to extract the flow UUID:
> `flow_id = item["id"].split(".", 1)[1]`

### `get_store_flow`
@@ -146,7 +146,7 @@ Response: **direct array** (no wrapper).
Response: single flow metadata from cache (selected fields).
```json
{
-  "id": "envId.flowId",
+  "id": "<environmentId>.<flowId>",
  "displayName": "My Flow",
  "state": "Started",
  "triggerType": "Recurrence",
@@ -204,7 +204,7 @@ Response:
```json
{
  "created": false,
-  "flowKey": "envId.flowId",
+  "flowKey": "<environmentId>.<flowId>",
  "updated": ["definition", "connectionReferences"],
  "displayName": "My Flow",
  "state": "Started",
@@ -353,17 +353,69 @@ Response keys: `flowKey`, `triggerName`, `triggerUrl`, `requiresAadAuth`, `authT

> **Only works for `Request` (HTTP) triggers.** Returns an error for Recurrence
> and other trigger types: `"only HTTP Request triggers can be invoked via this tool"`.
> `Button`-kind triggers return `ListCallbackUrlOperationBlocked`.
>
> `responseStatus` + `responseBody` contain the flow's Response action output.
> AAD-authenticated triggers are handled automatically.
>
> **Content-type note**: The body is sent as `application/octet-stream` (raw),
> not `application/json`. Flows with a trigger schema that has `required` fields
> will reject the request with `InvalidRequestContent` (400) because PA validates
> `Content-Type` before parsing against the schema. Flows without a schema, or
> flows designed to accept raw input (e.g. Baker-pattern flows that parse the body
> internally), will work fine. The flow receives the JSON as base64-encoded
> `$content` with `$content-type: application/octet-stream`.

---

## Flow State Management

### `set_live_flow_state`

Start or stop a Power Automate flow via the live PA API. Does **not** require
a Power Clarity workspace — works for any flow the impersonated account can access.
Reads the current state first and only issues the start/stop call if a change is
actually needed.

Parameters: `environmentName`, `flowName`, `state` (`"Started"` | `"Stopped"`) — all required.

Response:
```json
{
  "flowName": "6321ab25-7eb0-42df-b977-e97d34bcb272",
  "environmentName": "Default-26e65220-...",
  "requestedState": "Started",
  "actualState": "Started"
}
```

> **Use this tool** — not `update_live_flow` — to start or stop a flow.
> `update_live_flow` only changes displayName/definition; the PA API ignores
> state passed through that endpoint.

### `set_store_flow_state`

-Start or stop a flow. Pass `state: "Started"` or `state: "Stopped"`.
+Start or stop a flow via the live PA API **and** persist the updated state back
+to the Power Clarity cache. Same parameters as `set_live_flow_state` but requires
+a Power Clarity workspace.

Response (different shape from `set_live_flow_state`):
```json
{
  "flowKey": "<environmentId>.<flowId>",
  "requestedState": "Stopped",
  "currentState": "Stopped",
  "flow": { /* full gFlows record, same shape as get_store_flow */ }
}
```

> Prefer `set_live_flow_state` when you only need to toggle state — it's
> simpler and has no subscription requirement.
>
> Use `set_store_flow_state` when you need the cache updated immediately
> (without waiting for the next daily scan) AND want the full updated
> governance record back in the same call — useful for workflows that
> stop a flow and immediately tag or inspect it.

---

@@ -424,6 +476,8 @@ Non-obvious behaviors discovered through real API usage. These are things
- `error` key is **always present** in response --- `null` means success.
  Do NOT check `if "error" in result`; check `result.get("error") is not None`.
- On create, `created` = new flow GUID (string). On update, `created` = `false`.
+- **Cannot change flow state.** Only updates displayName, definition, and
+  connectionReferences. Use `set_live_flow_state` to start/stop a flow.

### `trigger_live_flow`
- **Only works for HTTP Request triggers.** Returns error for Recurrence, connector,
399 skills/flowstudio-power-automate-monitoring/SKILL.md (Normal file)
@@ -0,0 +1,399 @@
|
||||
---
|
||||
name: flowstudio-power-automate-monitoring
|
||||
description: >-
|
||||
Monitor Power Automate flow health, track failure rates, and inventory tenant
|
||||
assets using the FlowStudio MCP cached store. The live API only returns
|
||||
top-level run status. Store tools surface aggregated stats, per-run failure
|
||||
details with remediation hints, maker activity, and Power Apps inventory —
|
||||
all from a fast cache with no rate-limit pressure on the PA API.
|
||||
Load this skill when asked to: check flow health, find failing flows, get
|
||||
failure rates, review error trends, list all flows with monitoring enabled,
|
||||
check who built a flow, find inactive makers, inventory Power Apps, see
|
||||
environment or connection counts, get a flow summary, or any tenant-wide
|
||||
health overview. Requires a FlowStudio for Teams or MCP Pro+ subscription —
|
||||
see https://mcp.flowstudio.app
|
||||
metadata:
|
||||
openclaw:
|
||||
requires:
|
||||
env:
|
||||
- FLOWSTUDIO_MCP_TOKEN
|
||||
primaryEnv: FLOWSTUDIO_MCP_TOKEN
|
||||
homepage: https://mcp.flowstudio.app
|
||||
---
|
||||
|
||||
# Power Automate Monitoring with FlowStudio MCP
|
||||
|
||||
Monitor flow health, track failure rates, and inventory tenant assets through
|
||||
the FlowStudio MCP **cached store** — fast reads, no PA API rate limits, and
|
||||
enriched with governance metadata and remediation hints.
|
||||
|
||||
> **Requires:** A [FlowStudio for Teams or MCP Pro+](https://mcp.flowstudio.app)
|
||||
> subscription.
|
||||
>
|
||||
> **Start every session with `tools/list`** to confirm tool names and parameters.
|
||||
> This skill covers response shapes, behavioral notes, and workflow patterns —
|
||||
> things `tools/list` cannot tell you. If this document disagrees with
|
||||
> `tools/list` or a real API response, the API wins.
|
||||
|
||||
---
|
||||
|
||||
## How Monitoring Works
|
||||
|
||||
Flow Studio scans the Power Automate API daily for each subscriber and caches
|
||||
the results. There are two levels:
|
||||
|
||||
- **All flows** get metadata scanned: definition, connections, owners, trigger
|
||||
type, and aggregate run statistics (`runPeriodTotal`, `runPeriodFailRate`,
|
||||
etc.). Environments, apps, connections, and makers are also scanned.
|
||||
- **Monitored flows** (`monitor: true`) additionally get per-run detail:
|
||||
individual run records with status, duration, failed action names, and
|
||||
remediation hints. This is what populates `get_store_flow_runs`,
|
||||
`get_store_flow_errors`, and `get_store_flow_summary`.
|
||||
|
||||
**Data freshness:** Check the `scanned` field on `get_store_flow` to see when
|
||||
a flow was last scanned. If stale, the scanning pipeline may not be running.
|
||||
|
||||
**Enabling monitoring:** Set `monitor: true` via `update_store_flow` or the
|
||||
Flow Studio for Teams app
|
||||
([how to select flows](https://learn.flowstudio.app/teams-monitoring)).
|
||||
|
||||
**Designating critical flows:** Use `update_store_flow` with `critical=true`
|
||||
on business-critical flows. This enables the governance skill's notification
|
||||
rule management to auto-configure failure alerts on critical flows.

---

## Tools

| Tool | Purpose |
|---|---|
| `list_store_flows` | List flows with failure rates and monitoring filters |
| `get_store_flow` | Full cached record: run stats, owners, tier, connections, definition |
| `get_store_flow_summary` | Aggregated run stats: success/fail rate, avg/max duration |
| `get_store_flow_runs` | Per-run history with duration, status, failed actions, remediation |
| `get_store_flow_errors` | Failed-only runs with action names and remediation hints |
| `get_store_flow_trigger_url` | Trigger URL from cache (instant, no PA API call) |
| `set_store_flow_state` | Start or stop a flow and sync state back to cache |
| `update_store_flow` | Set monitor flag, notification rules, tags, governance metadata |
| `list_store_environments` | All Power Platform environments |
| `list_store_connections` | All connections |
| `list_store_makers` | All makers (citizen developers) |
| `get_store_maker` | Maker detail: flow/app counts, licenses, account status |
| `list_store_power_apps` | All Power Apps canvas apps |

---

## Store vs Live

| Question | Use Store | Use Live |
|---|---|---|
| How many flows are failing? | `list_store_flows` | — |
| What's the fail rate over 30 days? | `get_store_flow_summary` | — |
| Show error history for a flow | `get_store_flow_errors` | — |
| Who built this flow? | `get_store_flow` → parse `owners` | — |
| Read the full flow definition | `get_store_flow` has it (JSON string) | `get_live_flow` (structured) |
| Inspect action inputs/outputs from a run | — | `get_live_flow_run_action_outputs` |
| Resubmit a failed run | — | `resubmit_live_flow_run` |

> Store tools answer "what happened?" and "how healthy is it?"
> Live tools answer "what exactly went wrong?" and "fix it now."

> If `get_store_flow_runs`, `get_store_flow_errors`, or `get_store_flow_summary`
> return empty results, check: (1) is `monitor: true` on the flow? and
> (2) is the `scanned` field recent? Use `get_store_flow` to verify both.

---

## Response Shapes

### `list_store_flows`

Direct array. Filters: `monitor` (bool), `rule_notify_onfail` (bool),
`rule_notify_onmissingdays` (bool).

```json
[
  {
    "id": "Default-<envGuid>.<flowGuid>",
    "displayName": "Stripe subscription updated",
    "state": "Started",
    "triggerType": "Request",
    "triggerUrl": "https://...",
    "tags": ["#operations", "#sensitive"],
    "environmentName": "Default-26e65220-...",
    "monitor": true,
    "runPeriodFailRate": 0.012,
    "runPeriodTotal": 82,
    "createdTime": "2025-06-24T01:20:53Z",
    "lastModifiedTime": "2025-06-24T03:51:03Z"
  }
]
```

> `id` format: `Default-<envGuid>.<flowGuid>`. Split on the first `.` to get
> `environmentName` and `flowName`.
>
> `triggerUrl` and `tags` are optional. Some entries are sparse (just `id` +
> `monitor`) — skip entries without `displayName`.
>
> Tags on `list_store_flows` are auto-extracted from the flow's `description`
> field (maker hashtags like `#operations`). Tags written via
> `update_store_flow(tags=...)` are stored separately and only visible on
> `get_store_flow` — they do NOT appear in the list response.
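
A minimal sketch of consuming this list, assuming the response has already been fetched into a Python list of dicts (`parse_flow_id` and `usable_flows` are hypothetical helper names, not FlowStudio tools):

```python
def parse_flow_id(flow_id: str) -> tuple[str, str]:
    """Split 'Default-<envGuid>.<flowGuid>' on the FIRST '.' into
    (environmentName, flowName)."""
    environment_name, _, flow_name = flow_id.partition(".")
    return environment_name, flow_name

def usable_flows(flows: list[dict]) -> list[dict]:
    """Drop the sparse entries (just id + monitor) that lack a displayName."""
    return [f for f in flows if f.get("displayName")]
```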

### `get_store_flow`

Full cached record. Key fields:

| Category | Fields |
|---|---|
| Identity | `name`, `displayName`, `environmentName`, `state`, `triggerType`, `triggerKind`, `tier`, `sharingType` |
| Run stats | `runPeriodTotal`, `runPeriodFails`, `runPeriodSuccess`, `runPeriodFailRate`, `runPeriodSuccessRate`, `runPeriodDurationAverage`/`Max`/`Min` (milliseconds), `runTotal`, `runFails`, `runFirst`, `runLast`, `runToday` |
| Governance | `monitor` (bool), `rule_notify_onfail` (bool), `rule_notify_onmissingdays` (number), `rule_notify_email` (string), `log_notify_onfail` (ISO), `description`, `tags` |
| Freshness | `scanned` (ISO), `nextScan` (ISO) |
| Lifecycle | `deleted` (bool), `deletedTime` (ISO) |
| JSON strings | `actions`, `connections`, `owners`, `complexity`, `definition`, `createdBy`, `security`, `triggers`, `referencedResources`, `runError` — all require `json.loads()` to parse |

> Duration fields (`runPeriodDurationAverage`, `Max`, `Min`) are in
> **milliseconds**. Divide by 1000 for seconds.
>
> `runError` contains the last run error as a JSON string. Parse it:
> `json.loads(record["runError"])` — returns `{}` when no error.
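
Putting the two notes above together, a sketch that parses the JSON-string fields and derives seconds from the millisecond durations. `hydrate` is a hypothetical helper, and the expanded `Max`/`Min` field names (`runPeriodDurationMax`, `runPeriodDurationMin`) are an assumption from the abbreviated table entry.

```python
import json

JSON_STRING_FIELDS = ("actions", "connections", "owners", "complexity",
                      "definition", "createdBy", "security", "triggers",
                      "referencedResources", "runError")

def hydrate(record: dict) -> dict:
    """Return a copy of a get_store_flow record with JSON-string fields
    parsed and millisecond durations mirrored as *Seconds fields."""
    out = dict(record)
    for field in JSON_STRING_FIELDS:
        if isinstance(out.get(field), str):
            out[field] = json.loads(out[field])
    for field in ("runPeriodDurationAverage", "runPeriodDurationMax",
                  "runPeriodDurationMin"):  # assumed full names
        if isinstance(out.get(field), (int, float)):
            out[field + "Seconds"] = out[field] / 1000
    return out
```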

### `get_store_flow_summary`

Aggregated stats over a time window (default: last 7 days).

```json
{
  "flowKey": "Default-<envGuid>.<flowGuid>",
  "windowStart": null,
  "windowEnd": null,
  "totalRuns": 82,
  "successRuns": 81,
  "failRuns": 1,
  "successRate": 0.988,
  "failRate": 0.012,
  "averageDurationSeconds": 2.877,
  "maxDurationSeconds": 9.433,
  "firstFailRunRemediation": null,
  "firstFailRunUrl": null
}
```

> Returns all zeros when no run data exists for this flow in the window.
> Use `startTime` and `endTime` (ISO 8601) parameters to change the window.
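
Building that window is easy to get wrong with naive datetimes; a sketch (`window_params` is a hypothetical helper that produces the two ISO 8601 arguments):

```python
from datetime import datetime, timedelta, timezone

def window_params(days: int = 30) -> dict:
    """startTime/endTime arguments (ISO 8601, UTC) for a trailing window."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    return {"startTime": start.isoformat(), "endTime": end.isoformat()}
```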

### `get_store_flow_runs` / `get_store_flow_errors`

Direct array. `get_store_flow_errors` filters to `status=Failed` only.
Parameters: `startTime`, `endTime`, `status` (array: `["Failed"]`,
`["Succeeded"]`, etc.).

> Both return `[]` when no run data exists.

### `get_store_flow_trigger_url`

```json
{
  "flowKey": "Default-<envGuid>.<flowGuid>",
  "displayName": "Stripe subscription updated",
  "triggerType": "Request",
  "triggerKind": "Http",
  "triggerUrl": "https://..."
}
```

> `triggerUrl` is null for non-HTTP triggers.

### `set_store_flow_state`

Calls the live PA API, then syncs state to the cache and returns the
full updated record.

```json
{
  "flowKey": "Default-<envGuid>.<flowGuid>",
  "requestedState": "Stopped",
  "currentState": "Stopped",
  "flow": { /* full gFlows record, same shape as get_store_flow */ }
}
```

> The embedded `flow` object reflects the new state immediately — no
> follow-up `get_store_flow` call needed. Useful for governance workflows
> that stop a flow and then read its tags/monitor/owner metadata in the
> same turn.
>
> Functionally equivalent to `set_live_flow_state` for changing state,
> but `set_live_flow_state` only returns `{flowName, environmentName,
> requestedState, actualState}` and doesn't sync the cache. Prefer
> `set_live_flow_state` when you only need to toggle state and don't
> care about cache freshness.

### `update_store_flow`

Updates governance metadata. Only provided fields are updated (merge).
Returns the full updated record (same shape as `get_store_flow`).

Settable fields: `monitor` (bool), `rule_notify_onfail` (bool),
`rule_notify_onmissingdays` (number, 0=disabled),
`rule_notify_email` (comma-separated), `description`, `tags`,
`businessImpact`, `businessJustification`, `businessValue`,
`ownerTeam`, `ownerBusinessUnit`, `supportGroup`, `supportEmail`,
`critical` (bool), `tier`, `security`.
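
A sketch of assembling the arguments for the "enable monitoring" workflow later in this skill (`monitoring_update` is a hypothetical helper; pass the resulting dict as the tool's named arguments):

```python
def monitoring_update(emails: list[str], missing_days: int = 0) -> dict:
    """Arguments for update_store_flow that enable monitoring and
    failure alerts. Merge semantics: untouched fields stay as-is."""
    return {
        "monitor": True,
        "rule_notify_onfail": True,
        "rule_notify_email": ",".join(emails),      # comma-separated string
        "rule_notify_onmissingdays": missing_days,  # 0 = disabled
    }
```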

### `list_store_environments`

Direct array.

```json
[
  {
    "id": "Default-26e65220-...",
    "displayName": "Flow Studio (default)",
    "sku": "Default",
    "type": "NotSpecified",
    "location": "australia",
    "isDefault": true,
    "isAdmin": true,
    "isManagedEnvironment": false,
    "createdTime": "2017-01-18T01:06:46Z"
  }
]
```

> `sku` values: `Default`, `Production`, `Developer`, `Sandbox`, `Teams`.

### `list_store_connections`

Direct array. Can be very large (1500+ items).

```json
[
  {
    "id": "<environmentId>.<connectionId>",
    "displayName": "user@contoso.com",
    "createdBy": "{\"id\":\"...\",\"displayName\":\"...\",\"email\":\"...\"}",
    "environmentName": "...",
    "statuses": "[{\"status\":\"Connected\"}]"
  }
]
```

> `createdBy` and `statuses` are **JSON strings** — parse with `json.loads()`.
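
For example, a connection-health pass over the list might look like this. A sketch: `broken_connections` is a hypothetical helper, and treating "any parsed status equals `Connected`" as healthy is an assumed definition.

```python
import json

def broken_connections(connections: list[dict]) -> list[str]:
    """displayNames of connections with no 'Connected' status entry.
    `statuses` arrives as a JSON string and may be absent."""
    broken = []
    for conn in connections:
        statuses = json.loads(conn.get("statuses") or "[]")
        if not any(s.get("status") == "Connected" for s in statuses):
            broken.append(conn.get("displayName", conn.get("id", "?")))
    return broken
```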

### `list_store_makers`

Direct array.

```json
[
  {
    "id": "09dbe02f-...",
    "displayName": "Catherine Han",
    "mail": "catherine.han@flowstudio.app",
    "deleted": false,
    "ownerFlowCount": 199,
    "ownerAppCount": 209,
    "userIsServicePrinciple": false
  }
]
```

> Deleted makers have `deleted: true` and no `displayName`/`mail` fields.

### `get_store_maker`

Full maker record. Key fields: `displayName`, `mail`, `userPrincipalName`,
`ownerFlowCount`, `ownerAppCount`, `accountEnabled`, `deleted`, `country`,
`firstFlow`, `firstFlowCreatedTime`, `lastFlowCreatedTime`,
`firstPowerApp`, `lastPowerAppCreatedTime`,
`licenses` (JSON string of M365 SKUs).

### `list_store_power_apps`

Direct array.

```json
[
  {
    "id": "<environmentId>.<appId>",
    "displayName": "My App",
    "environmentName": "...",
    "ownerId": "09dbe02f-...",
    "ownerName": "Catherine Han",
    "appType": "Canvas",
    "sharedUsersCount": 0,
    "createdTime": "2023-08-18T01:06:22Z",
    "lastModifiedTime": "2023-08-18T01:06:22Z",
    "lastPublishTime": "2023-08-18T01:06:22Z"
  }
]
```

---

## Common Workflows

### Find unhealthy flows

```
1. list_store_flows
2. Filter where runPeriodFailRate > 0.1 and runPeriodTotal >= 5
3. Sort by runPeriodFailRate descending
4. For each: get_store_flow for full detail
```
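
Steps 2 and 3 of the recipe above, sketched over an already-fetched `list_store_flows` result (`unhealthy_flows` is a hypothetical helper name):

```python
def unhealthy_flows(flows: list[dict], min_runs: int = 5,
                    fail_threshold: float = 0.1) -> list[dict]:
    """Flows with enough run volume to matter and a fail rate over
    the threshold, worst first."""
    candidates = [
        f for f in flows
        if f.get("runPeriodTotal", 0) >= min_runs
        and f.get("runPeriodFailRate", 0) > fail_threshold
    ]
    return sorted(candidates,
                  key=lambda f: f["runPeriodFailRate"], reverse=True)
```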

### Check a specific flow's health

```
1. get_store_flow → check scanned (freshness), runPeriodFailRate, runPeriodTotal
2. get_store_flow_summary → aggregated stats with optional time window
3. get_store_flow_errors → per-run failure detail with remediation hints
4. If deeper diagnosis needed → switch to live tools:
   get_live_flow_runs → get_live_flow_run_action_outputs
```

### Enable monitoring on a flow

```
1. update_store_flow with monitor=true
2. Optionally set rule_notify_onfail=true, rule_notify_email="user@domain.com"
3. Run data will appear after the next daily scan
```

### Daily health check

```
1. list_store_flows
2. Flag flows with runPeriodFailRate > 0.2 and runPeriodTotal >= 3
3. Flag monitored flows with state="Stopped" (may indicate auto-suspension)
4. For critical failures → get_store_flow_errors for remediation hints
```

### Maker audit

```
1. list_store_makers
2. Identify deleted accounts still owning flows (deleted=true, ownerFlowCount > 0)
3. get_store_maker for full detail on specific users
```
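
Step 2 of the maker audit as a filter (a sketch; `orphaned_makers` is a hypothetical helper name):

```python
def orphaned_makers(makers: list[dict]) -> list[dict]:
    """Deleted accounts that still own flows: candidates for
    ownership reassignment."""
    return [m for m in makers
            if m.get("deleted") and m.get("ownerFlowCount", 0) > 0]
```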

### Inventory

```
1. list_store_environments → environment count, SKUs, locations
2. list_store_flows → flow count by state, trigger type, fail rate
3. list_store_power_apps → app count, owners, sharing
4. list_store_connections → connection count per environment
```
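
The four calls above roll up into a headline summary like this (a sketch over already-fetched lists; `inventory_summary` is a hypothetical helper name):

```python
from collections import Counter

def inventory_summary(environments: list[dict], flows: list[dict],
                      apps: list[dict], connections: list[dict]) -> dict:
    """Headline counts from the four list_store_* results."""
    return {
        "environments": len(environments),
        "flows_by_state": dict(Counter(f.get("state", "Unknown")
                                       for f in flows)),
        "apps": len(apps),
        "connections_per_environment": dict(
            Counter(c.get("environmentName", "?") for c in connections)),
    }
```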

---

## Related Skills

- `power-automate-mcp` — Core connection setup, live tool reference
- `power-automate-debug` — Deep diagnosis with action-level inputs/outputs (live API)
- `power-automate-build` — Build and deploy flow definitions
- `power-automate-governance` — Governance metadata, tagging, notification rules, CoE patterns