chore: publish from staged

This commit is contained in:
github-actions[bot]
2026-04-09 06:26:21 +00:00
parent 017f31f495
commit a68b190031
467 changed files with 97527 additions and 276 deletions


@@ -19,10 +19,10 @@
     "governance"
   ],
   "skills": [
-    "./skills/flowstudio-power-automate-mcp/",
-    "./skills/flowstudio-power-automate-debug/",
-    "./skills/flowstudio-power-automate-build/",
-    "./skills/flowstudio-power-automate-monitoring/",
-    "./skills/flowstudio-power-automate-governance/"
+    "./skills/flowstudio-power-automate-mcp",
+    "./skills/flowstudio-power-automate-debug",
+    "./skills/flowstudio-power-automate-build",
+    "./skills/flowstudio-power-automate-monitoring",
+    "./skills/flowstudio-power-automate-governance"
   ]
 }


@@ -0,0 +1,479 @@
---
name: flowstudio-power-automate-build
description: >-
  Build, scaffold, and deploy Power Automate cloud flows using the FlowStudio
  MCP server. Your agent constructs flow definitions, wires connections, deploys,
  and tests — all via MCP without opening the portal.
  Load this skill when asked to: create a flow, build a new flow,
  deploy a flow definition, scaffold a Power Automate workflow, construct a flow
  JSON, update an existing flow's actions, patch a flow definition, add actions
  to a flow, wire up connections, or generate a workflow definition from scratch.
  Requires a FlowStudio MCP subscription — see https://mcp.flowstudio.app
metadata:
  openclaw:
    requires:
      env:
        - FLOWSTUDIO_MCP_TOKEN
    primaryEnv: FLOWSTUDIO_MCP_TOKEN
    homepage: https://mcp.flowstudio.app
---
# Build & Deploy Power Automate Flows with FlowStudio MCP
Step-by-step guide for constructing and deploying Power Automate cloud flows
programmatically through the FlowStudio MCP server.
**Prerequisite**: A FlowStudio MCP server must be reachable with a valid JWT.
See the `flowstudio-power-automate-mcp` skill for connection setup.
Subscribe at https://mcp.flowstudio.app

---
## Source of Truth
> **Always call `tools/list` first** to confirm available tool names and their
> parameter schemas. Tool names and parameters may change between server versions.
> This skill covers response shapes, behavioral notes, and build patterns —
> things `tools/list` cannot tell you. If this document disagrees with `tools/list`
> or a real API response, the API wins.
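
For example, a raw `tools/list` request uses the same endpoint and headers as the `tools/call` helper in the next section. This is a sketch assuming the standard MCP JSON-RPC shape — verify the response structure against your server:

```python
import json, urllib.request

MCP_URL = "https://mcp.flowstudio.app/mcp"
MCP_TOKEN = "<YOUR_JWT_TOKEN>"

# Standard MCP discovery request body.
PAYLOAD = json.dumps({"jsonrpc": "2.0", "id": 1,
                      "method": "tools/list", "params": {}}).encode()

def list_tools():
    req = urllib.request.Request(MCP_URL, data=PAYLOAD,
        headers={"x-api-key": MCP_TOKEN, "Content-Type": "application/json"})
    raw = json.loads(urllib.request.urlopen(req, timeout=60).read())
    # Map tool name -> parameter schema for quick lookup before each call.
    return {t["name"]: t.get("inputSchema") for t in raw["result"]["tools"]}
```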
---
## Python Helper
```python
import json, urllib.error, urllib.request

MCP_URL = "https://mcp.flowstudio.app/mcp"
MCP_TOKEN = "<YOUR_JWT_TOKEN>"

def mcp(tool, **kwargs):
    payload = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                          "params": {"name": tool, "arguments": kwargs}}).encode()
    req = urllib.request.Request(MCP_URL, data=payload,
                                 headers={"x-api-key": MCP_TOKEN,
                                          "Content-Type": "application/json",
                                          "User-Agent": "FlowStudio-MCP/1.0"})
    try:
        resp = urllib.request.urlopen(req, timeout=120)
    except urllib.error.HTTPError as e:
        body = e.read().decode("utf-8", errors="replace")
        raise RuntimeError(f"MCP HTTP {e.code}: {body[:200]}") from e
    raw = json.loads(resp.read())
    if "error" in raw:
        raise RuntimeError(f"MCP error: {json.dumps(raw['error'])}")
    return json.loads(raw["result"]["content"][0]["text"])

ENV = "<environment-id>"  # e.g. Default-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```
---
## Step 1 — Safety Check: Does the Flow Already Exist?
Always look before you build to avoid duplicates:
```python
results = mcp("list_live_flows", environmentName=ENV)
# list_live_flows returns { "flows": [...] }
matches = [f for f in results["flows"]
           if "My New Flow".lower() in f["displayName"].lower()]
if matches:
    # Flow exists — modify rather than create
    FLOW_ID = matches[0]["id"]  # plain UUID from list_live_flows
    print(f"Existing flow: {FLOW_ID}")
    defn = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)
else:
    print("Flow not found — building from scratch")
    FLOW_ID = None
```
---
## Step 2 — Obtain Connection References
Every connector action needs a `connectionName` that points to a key in the
flow's `connectionReferences` map. That key links to an authenticated connection
in the environment.
> **MANDATORY**: You MUST call `list_live_connections` first — do NOT ask the
> user for connection names or GUIDs. The API returns the exact values you need.
> Only prompt the user if the API confirms that required connections are missing.
### 2a — Always call `list_live_connections` first
```python
conns = mcp("list_live_connections", environmentName=ENV)

# Filter to connected (authenticated) connections only
active = [c for c in conns["connections"]
          if c["statuses"][0]["status"] == "Connected"]

# Build a lookup: connectorName → connectionName (id)
conn_map = {}
for c in active:
    conn_map[c["connectorName"]] = c["id"]

print(f"Found {len(active)} active connections")
print("Available connectors:", list(conn_map.keys()))
```
### 2b — Determine which connectors the flow needs
Based on the flow you are building, identify which connectors are required.
Common connector API names:
| Connector | API name |
|---|---|
| SharePoint | `shared_sharepointonline` |
| Outlook / Office 365 | `shared_office365` |
| Teams | `shared_teams` |
| Approvals | `shared_approvals` |
| OneDrive for Business | `shared_onedriveforbusiness` |
| Excel Online (Business) | `shared_excelonlinebusiness` |
| Dataverse | `shared_commondataserviceforapps` |
| Microsoft Forms | `shared_microsoftforms` |
> **Flows that need NO connections** (e.g. Recurrence + Compose + HTTP only)
> can skip the rest of Step 2 — omit `connectionReferences` from the deploy call.
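
As a sketch, a hypothetical Recurrence + Compose flow like the following deploys cleanly with no `connectionReferences` at all (trigger and action names are illustrative):

```python
# Connection-free flow: Recurrence trigger + Compose action only,
# so connectionReferences can be omitted from the deploy call entirely.
definition = {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {
        "Every_Morning": {
            "type": "Recurrence",
            "recurrence": {"frequency": "Day", "interval": 1}
        }
    },
    "actions": {
        "Build_Message": {
            "type": "Compose",
            "runAfter": {},
            "inputs": "Daily heartbeat"
        }
    }
}
```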
### 2c — If connections are missing, guide the user
```python
connectors_needed = ["shared_sharepointonline", "shared_office365"]  # adjust per flow
missing = [c for c in connectors_needed if c not in conn_map]

if not missing:
    print("✅ All required connections are available — proceeding to build")
else:
    # ── STOP: connections must be created interactively ──
    # Connections require OAuth consent in a browser — no API can create them.
    print("⚠️ The following connectors have no active connection in this environment:")
    for c in missing:
        friendly = c.replace("shared_", "").replace("onlinebusiness", " Online (Business)")
        print(f"  {friendly} (API name: {c})")
    print()
    print("Please create the missing connections:")
    print("  1. Open https://make.powerautomate.com/connections")
    print("  2. Select the correct environment from the top-right picker")
    print("  3. Click '+ New connection' for each missing connector listed above")
    print("  4. Sign in and authorize when prompted")
    print("  5. Tell me when done — I will re-check and continue building")
    # DO NOT proceed to Step 3 until the user confirms.
    # After user confirms, re-run Step 2a to refresh conn_map.
```
### 2d — Build the connectionReferences block
Only execute this after 2c confirms no missing connectors:
```python
connection_references = {}
for connector in connectors_needed:
    connection_references[connector] = {
        "connectionName": conn_map[connector],  # the GUID from list_live_connections
        "source": "Invoker",
        "id": f"/providers/Microsoft.PowerApps/apis/{connector}"
    }
```
> **IMPORTANT — `host.connectionName` in actions**: When building actions in
> Step 3, set `host.connectionName` to the **key** from this map (e.g.
> `shared_teams`), NOT the connection GUID. The GUID only goes inside the
> `connectionReferences` entry. The engine matches the action's
> `host.connectionName` to the key to find the right connection.
> **Alternative** — if you already have a flow using the same connectors,
> you can extract `connectionReferences` from its definition:
> ```python
> ref_flow = mcp("get_live_flow", environmentName=ENV, flowName="<existing-flow-id>")
> connection_references = ref_flow["properties"]["connectionReferences"]
> ```
See the `flowstudio-power-automate-mcp` skill's **connection-references.md** reference
for the full connection reference structure.

---
## Step 3 — Build the Flow Definition
Construct the definition object. See [flow-schema.md](references/flow-schema.md)
for the full schema and these action pattern references for copy-paste templates:
- [action-patterns-core.md](references/action-patterns-core.md) — Variables, control flow, expressions
- [action-patterns-data.md](references/action-patterns-data.md) — Array transforms, HTTP, parsing
- [action-patterns-connectors.md](references/action-patterns-connectors.md) — SharePoint, Outlook, Teams, Approvals
```python
definition = {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": { ... },  # see trigger-types.md / build-patterns.md
    "actions": { ... }    # see ACTION-PATTERNS-*.md / build-patterns.md
}
```
> See [build-patterns.md](references/build-patterns.md) for complete, ready-to-use
> flow definitions covering Recurrence+SharePoint+Teams, HTTP triggers, and more.
---
## Step 4 — Deploy (Create or Update)
`update_live_flow` handles both creation and updates in a single tool.
### Create a new flow (no existing flow)
Omit `flowName` — the server generates a new GUID and creates via PUT:
```python
result = mcp("update_live_flow",
             environmentName=ENV,
             # flowName omitted → creates a new flow
             definition=definition,
             connectionReferences=connection_references,
             displayName="Overdue Invoice Notifications",
             description="Weekly SharePoint → Teams notification flow, built by agent")
if result.get("error") is not None:
    print("Create failed:", result["error"])
else:
    # Capture the new flow ID for subsequent steps
    FLOW_ID = result["created"]
    print(f"✅ Flow created: {FLOW_ID}")
```
### Update an existing flow
Provide `flowName` to PATCH:
```python
result = mcp("update_live_flow",
             environmentName=ENV,
             flowName=FLOW_ID,
             definition=definition,
             connectionReferences=connection_references,
             displayName="My Updated Flow",
             description="Updated by agent on " + __import__('datetime').datetime.utcnow().isoformat())
if result.get("error") is not None:
    print("Update failed:", result["error"])
else:
    print("Update succeeded:", result)
```
> ⚠️ `update_live_flow` always returns an `error` key.
> `null` (Python `None`) means success — do not treat the presence of the key as failure.
>
> ⚠️ `description` is required for both create and update.
### Common deployment errors
| Error message (contains) | Cause | Fix |
|---|---|---|
| `missing from connectionReferences` | An action's `host.connectionName` references a key that doesn't exist in the `connectionReferences` map | Ensure `host.connectionName` uses the **key** from `connectionReferences` (e.g. `shared_teams`), not the raw GUID |
| `ConnectionAuthorizationFailed` / 403 | The connection GUID belongs to another user or is not authorized | Re-run Step 2a and use a connection owned by the current `x-api-key` user |
| `InvalidTemplate` / `InvalidDefinition` | Syntax error in the definition JSON | Check `runAfter` chains, expression syntax, and action type spelling |
| `ConnectionNotConfigured` | A connector action exists but the connection GUID is invalid or expired | Re-check `list_live_connections` for a fresh GUID |
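
If you want to surface these hints programmatically, a small lookup over the table above could look like this (hint text paraphrased from the table; matching on message substrings is an assumption about how much of the error you have):

```python
# Map a deployment error message substring to its likely fix.
DEPLOY_HINTS = [
    ("missing from connectionReferences",
     "Use the connectionReferences KEY (e.g. shared_teams) in host.connectionName, not the GUID."),
    ("ConnectionAuthorizationFailed",
     "Re-run list_live_connections and use a connection owned by the current x-api-key user."),
    ("InvalidTemplate",
     "Check runAfter chains, expression syntax, and action type spelling."),
    ("InvalidDefinition",
     "Check runAfter chains, expression syntax, and action type spelling."),
    ("ConnectionNotConfigured",
     "Re-check list_live_connections for a fresh connection GUID."),
]

def deploy_hint(error_message):
    for needle, hint in DEPLOY_HINTS:
        if needle.lower() in error_message.lower():
            return hint
    return "Unrecognized error — inspect the full response."
```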
---
## Step 5 — Verify the Deployment
```python
check = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)
# Confirm state
print("State:", check["properties"]["state"]) # Should be "Started"
# If state is "Stopped", use set_live_flow_state — NOT update_live_flow
# mcp("set_live_flow_state", environmentName=ENV, flowName=FLOW_ID, state="Started")
# Confirm the action we added is there
acts = check["properties"]["definition"]["actions"]
print("Actions:", list(acts.keys()))
```
---
## Step 6 — Test the Flow
> **MANDATORY**: Before triggering any test run, **ask the user for confirmation**.
> Running a flow has real side effects — it may send emails, post Teams messages,
> write to SharePoint, start approvals, or call external APIs. Explain what the
> flow will do and wait for explicit approval before calling `trigger_live_flow`
> or `resubmit_live_flow_run`.
### Updated flows (have prior runs) — ANY trigger type
> **Use `resubmit_live_flow_run` first.** It works for EVERY trigger type —
> Recurrence, SharePoint, connector webhooks, Button, and HTTP. It replays
> the original trigger payload. Do NOT ask the user to manually trigger the
> flow or wait for the next scheduled run.
```python
runs = mcp("get_live_flow_runs", environmentName=ENV, flowName=FLOW_ID, top=1)
if runs:
    # Works for Recurrence, SharePoint, connector triggers — not just HTTP
    result = mcp("resubmit_live_flow_run",
                 environmentName=ENV, flowName=FLOW_ID, runName=runs[0]["name"])
    print(result)  # {"resubmitted": true, "triggerName": "..."}
```
### HTTP-triggered flows — custom test payload
Only use `trigger_live_flow` when you need to send a **different** payload
than the original run. For verifying a fix, `resubmit_live_flow_run` is
better because it uses the exact data that caused the failure.
```python
schema = mcp("get_live_flow_http_schema",
             environmentName=ENV, flowName=FLOW_ID)
print("Expected body:", schema.get("requestSchema"))

result = mcp("trigger_live_flow",
             environmentName=ENV, flowName=FLOW_ID,
             body={"name": "Test", "value": 1})
print(f"Status: {result['responseStatus']}")
```
### Brand-new non-HTTP flows (Recurrence, connector triggers, etc.)
A brand-new Recurrence or connector-triggered flow has **no prior runs** to
resubmit and no HTTP endpoint to call. This is the ONLY scenario where you
need the temporary HTTP trigger approach below. **Deploy with a temporary
HTTP trigger first, test the actions, then swap to the production trigger.**
#### 6a — Save the real trigger, deploy with a temporary HTTP trigger
```python
# Save the production trigger you built in Step 3
production_trigger = definition["triggers"]

# Replace with a temporary HTTP trigger
definition["triggers"] = {
    "manual": {
        "type": "Request",
        "kind": "Http",
        "inputs": {
            "schema": {}
        }
    }
}

# Deploy (create or update) with the temp trigger
result = mcp("update_live_flow",
             environmentName=ENV,
             flowName=FLOW_ID,  # omit if creating new
             definition=definition,
             connectionReferences=connection_references,
             displayName="Overdue Invoice Notifications",
             description="Deployed with temp HTTP trigger for testing")
if result.get("error") is not None:
    print("Deploy failed:", result["error"])
else:
    if not FLOW_ID:
        FLOW_ID = result["created"]
    print(f"✅ Deployed with temp HTTP trigger: {FLOW_ID}")
```
#### 6b — Fire the flow and check the result
```python
# Trigger the flow
test = mcp("trigger_live_flow",
           environmentName=ENV, flowName=FLOW_ID)
print(f"Trigger response status: {test['status']}")

# Wait for the run to complete
import time; time.sleep(15)

# Check the run result
runs = mcp("get_live_flow_runs",
           environmentName=ENV, flowName=FLOW_ID, top=1)
run = runs[0]
print(f"Run {run['name']}: {run['status']}")

if run["status"] == "Failed":
    err = mcp("get_live_flow_run_error",
              environmentName=ENV, flowName=FLOW_ID, runName=run["name"])
    root = err["failedActions"][-1]
    print(f"Root cause: {root['actionName']} — {root.get('code')}")
    # Debug and fix the definition before proceeding
    # See flowstudio-power-automate-debug skill for full diagnosis workflow
```
#### 6c — Swap to the production trigger
Once the test run succeeds, replace the temporary HTTP trigger with the real one:
```python
# Restore the production trigger
definition["triggers"] = production_trigger

result = mcp("update_live_flow",
             environmentName=ENV,
             flowName=FLOW_ID,
             definition=definition,
             connectionReferences=connection_references,
             description="Swapped to production trigger after successful test")
if result.get("error") is not None:
    print("Trigger swap failed:", result["error"])
else:
    print("✅ Production trigger deployed — flow is live")
```
> **Why this works**: The trigger is just the entry point — the actions are
> identical regardless of how the flow starts. Testing via HTTP trigger
> exercises all the same Compose, SharePoint, Teams, etc. actions.
>
> **Connector triggers** (e.g. "When an item is created in SharePoint"):
> If actions reference `triggerBody()` or `triggerOutputs()`, pass a
> representative test payload in `trigger_live_flow`'s `body` parameter
> that matches the shape the connector trigger would produce.
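
For instance, a hypothetical test payload for a SharePoint "item created" trigger might look like this — the field names are illustrative and must be checked against a real run's trigger outputs for your list:

```python
# Illustrative only — field names must match what YOUR connector trigger
# actually emits; inspect a real run's trigger outputs to be sure.
sample_trigger_body = {
    "ID": 42,
    "Title": "Overdue invoice 42",
    "Status": {"Value": "Active"},      # choice columns typically nest under "Value"
    "Modified": "2026-04-01T09:00:00Z",
}
# mcp("trigger_live_flow", environmentName=ENV, flowName=FLOW_ID,
#     body=sample_trigger_body)
```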
---
## Gotchas
| Mistake | Consequence | Prevention |
|---|---|---|
| Missing `connectionReferences` in deploy | 400 "Supply connectionReferences" | Always call `list_live_connections` first |
| `"operationOptions"` missing on Foreach | Parallel execution, race conditions on writes | Always add `"Sequential"` |
| `union(old_data, new_data)` | Old values override new (first-wins) | Use `union(new_data, old_data)` |
| `split()` on potentially-null string | `InvalidTemplate` crash | Wrap with `coalesce(field, '')` |
| Checking `result["error"]` exists | Always present; true error is `!= null` | Use `result.get("error") is not None` |
| Flow deployed but state is "Stopped" | Flow won't run on schedule | Call `set_live_flow_state` with `state: "Started"` — do **not** use `update_live_flow` for state changes |
| Teams "Chat with Flow bot" recipient as object | 400 `GraphUserDetailNotFound` | Use plain string with trailing semicolon (see below) |
### Teams `PostMessageToConversation` — Recipient Formats
The `body/recipient` parameter format depends on the `location` value:
| Location | `body/recipient` format | Example |
|---|---|---|
| **Chat with Flow bot** | Plain email string with **trailing semicolon** | `"user@contoso.com;"` |
| **Channel** | Object with `groupId` and `channelId` | `{"groupId": "...", "channelId": "..."}` |
> **Common mistake**: passing `{"to": "user@contoso.com"}` for "Chat with Flow bot"
> returns a 400 `GraphUserDetailNotFound` error. The API expects a plain string.
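
The recipient rule can be captured in a small helper (hypothetical, for illustration only — the format rules come from the table above):

```python
def teams_recipient(location, *, email=None, group_id=None, channel_id=None):
    """Build the body/recipient value for PostMessageToConversation."""
    if location == "Chat with Flow bot":
        return f"{email};"  # plain string with trailing semicolon — NOT an object
    if location == "Channel":
        return {"groupId": group_id, "channelId": channel_id}
    raise ValueError(f"Unsupported location: {location}")
```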
---
## Reference Files
- [flow-schema.md](references/flow-schema.md) — Full flow definition JSON schema
- [trigger-types.md](references/trigger-types.md) — Trigger type templates
- [action-patterns-core.md](references/action-patterns-core.md) — Variables, control flow, expressions
- [action-patterns-data.md](references/action-patterns-data.md) — Array transforms, HTTP, parsing
- [action-patterns-connectors.md](references/action-patterns-connectors.md) — SharePoint, Outlook, Teams, Approvals
- [build-patterns.md](references/build-patterns.md) — Complete flow definition templates (Recurrence+SP+Teams, HTTP trigger)
## Related Skills
- `flowstudio-power-automate-mcp` — Core connection setup and tool reference
- `flowstudio-power-automate-debug` — Debug failing flows after deployment


@@ -0,0 +1,542 @@
# FlowStudio MCP — Action Patterns: Connectors
SharePoint, Outlook, Teams, and Approvals connector action patterns.
> All examples assume `"runAfter"` is set appropriately.
> Replace `<connectionName>` with the **key** you used in `connectionReferences`
> (e.g. `shared_sharepointonline`, `shared_teams`). This is NOT the connection
> GUID — it is the logical reference name that links the action to its entry in
> the `connectionReferences` map.
---
## SharePoint
### SharePoint — Get Items
```json
"Get_SP_Items": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "connectionName": "<connectionName>",
      "operationId": "GetItems"
    },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "table": "MyList",
      "$filter": "Status eq 'Active'",
      "$top": 500
    }
  }
}
```
Result reference: `@outputs('Get_SP_Items')?['body/value']`
> **Dynamic OData filter with string interpolation**: inject a runtime value
> directly into the `$filter` string using `@{...}` syntax:
> ```
> "$filter": "Title eq '@{outputs('ConfirmationCode')}'"
> ```
> Note the single-quotes inside double-quotes — correct OData string literal
> syntax. Avoids a separate variable action.
> **Pagination for large lists**: by default, GetItems stops at `$top`. To auto-paginate
> beyond that, enable the pagination policy on the action. In the flow definition this
> appears as:
> ```json
> "paginationPolicy": { "minimumItemCount": 10000 }
> ```
> Set `minimumItemCount` to the maximum number of items you expect. The connector will
> keep fetching pages until that count is reached or the list is exhausted. Without this,
> flows silently return a capped result on lists with >5,000 items.
---
### SharePoint — Get Item (Single Row by ID)
```json
"Get_SP_Item": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "connectionName": "<connectionName>",
      "operationId": "GetItem"
    },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "table": "MyList",
      "id": "@triggerBody()?['ID']"
    }
  }
}
```
Result reference: `@body('Get_SP_Item')?['FieldName']`
> Use `GetItem` (not `GetItems` with a filter) when you already have the ID.
> Re-fetching after a trigger gives you the **current** row state, not the
> snapshot captured at trigger time — important if another process may have
> modified the item since the flow started.
---
### SharePoint — Create Item
```json
"Create_SP_Item": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "connectionName": "<connectionName>",
      "operationId": "PostItem"
    },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "table": "MyList",
      "item/Title": "@variables('myTitle')",
      "item/Status": "Active"
    }
  }
}
```
---
### SharePoint — Update Item
```json
"Update_SP_Item": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "connectionName": "<connectionName>",
      "operationId": "PatchItem"
    },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "table": "MyList",
      "id": "@item()?['ID']",
      "item/Status": "Processed"
    }
  }
}
```
---
### SharePoint — File Upsert (Create or Overwrite in Document Library)
SharePoint's `CreateFile` fails if the file already exists. To upsert (create or overwrite)
without a prior existence check, use `GetFileMetadataByPath` on **both Succeeded and Failed**
from `CreateFile` — if create failed because the file exists, the metadata call still
returns its ID, which `UpdateFile` can then overwrite:
```json
"Create_File": {
  "type": "OpenApiConnection",
  "inputs": {
    "host": { "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
              "connectionName": "<connectionName>", "operationId": "CreateFile" },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "folderPath": "/My Library/Subfolder",
      "name": "@{variables('filename')}",
      "body": "@outputs('Compose_File_Content')"
    }
  }
},
"Get_File_Metadata_By_Path": {
  "type": "OpenApiConnection",
  "runAfter": { "Create_File": ["Succeeded", "Failed"] },
  "inputs": {
    "host": { "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
              "connectionName": "<connectionName>", "operationId": "GetFileMetadataByPath" },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "path": "/My Library/Subfolder/@{variables('filename')}"
    }
  }
},
"Update_File": {
  "type": "OpenApiConnection",
  "runAfter": { "Get_File_Metadata_By_Path": ["Succeeded", "Skipped"] },
  "inputs": {
    "host": { "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
              "connectionName": "<connectionName>", "operationId": "UpdateFile" },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "id": "@outputs('Get_File_Metadata_By_Path')?['body/{Identifier}']",
      "body": "@outputs('Compose_File_Content')"
    }
  }
}
```
> If `Create_File` succeeds, `Get_File_Metadata_By_Path` still runs (its `runAfter`
> accepts both `Succeeded` and `Failed`) and `Update_File` harmlessly overwrites the
> file just created. If `Create_File` fails because the file exists, the metadata call
> retrieves the existing file's ID and `Update_File` overwrites it. Either way you end
> with the latest content. (`Update_File` accepting `Skipped` is defensive, in case the
> metadata step is skipped by an enclosing scope.)
>
> **Document library system properties** — when iterating a file library result (e.g.
> from `ListFolder` or `GetFilesV2`), use curly-brace property names to access
> SharePoint's built-in file metadata. These are different from list field names:
> ```
> @item()?['{Name}'] — filename without path (e.g. "report.csv")
> @item()?['{FilenameWithExtension}'] — same as {Name} in most connectors
> @item()?['{Identifier}'] — internal file ID for use in UpdateFile/DeleteFile
> @item()?['{FullPath}'] — full server-relative path
> @item()?['{IsFolder}'] — boolean, true for folder entries
> ```
---
### SharePoint — GetItemChanges Column Gate
When a SharePoint "item modified" trigger fires, it doesn't tell you WHICH
column changed. Use `GetItemChanges` to get per-column change flags, then gate
downstream logic on specific columns:
```json
"Get_Changes": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "connectionName": "<connectionName>",
      "operationId": "GetItemChanges"
    },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "table": "<list-guid>",
      "id": "@triggerBody()?['ID']",
      "since": "@triggerBody()?['Modified']",
      "includeDrafts": false
    }
  }
}
```
Gate on a specific column:
```json
"expression": {
  "and": [{
    "equals": [
      "@body('Get_Changes')?['Column']?['hasChanged']",
      true
    ]
  }]
}
```
> **New-item detection:** On the very first modification (version 1.0),
> `GetItemChanges` may report no prior version. Check
> `@equals(triggerBody()?['OData__UIVersionString'], '1.0')` to detect
> newly created items and skip change-gate logic for those.
---
### SharePoint — REST MERGE via HttpRequest
For cross-list updates or advanced operations not supported by the standard
Update Item connector (e.g., updating a list in a different site), use the
SharePoint REST API via the `HttpRequest` operation:
```json
"Update_Cross_List_Item": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "connectionName": "<connectionName>",
      "operationId": "HttpRequest"
    },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/target-site",
      "parameters/method": "POST",
      "parameters/uri": "/_api/web/lists(guid'<list-guid>')/items(@{variables('ItemId')})",
      "parameters/headers": {
        "Accept": "application/json;odata=nometadata",
        "Content-Type": "application/json;odata=nometadata",
        "X-HTTP-Method": "MERGE",
        "IF-MATCH": "*"
      },
      "parameters/body": "{ \"Title\": \"@{variables('NewTitle')}\", \"Status\": \"@{variables('NewStatus')}\" }"
    }
  }
}
```
> **Key headers:**
> - `X-HTTP-Method: MERGE` — tells SharePoint to do a partial update (PATCH semantics)
> - `IF-MATCH: *` — overwrites regardless of current ETag (no conflict check)
>
> The `HttpRequest` operation reuses the existing SharePoint connection — no extra
> authentication needed. Use this when the standard Update Item connector can't
> reach the target list (different site collection, or you need raw REST control).
---
### SharePoint — File as JSON Database (Read + Parse)
Use a SharePoint document library JSON file as a queryable "database" of
last-known-state records. A separate process (e.g., Power BI dataflow) maintains
the file; the flow downloads and filters it for before/after comparisons.
```json
"Get_File": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "connectionName": "<connectionName>",
      "operationId": "GetFileContent"
    },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "id": "%252fShared%2bDocuments%252fdata.json",
      "inferContentType": false
    }
  }
},
"Parse_JSON_File": {
  "type": "Compose",
  "runAfter": { "Get_File": ["Succeeded"] },
  "inputs": "@json(decodeBase64(body('Get_File')?['$content']))"
},
"Find_Record": {
  "type": "Query",
  "runAfter": { "Parse_JSON_File": ["Succeeded"] },
  "inputs": {
    "from": "@outputs('Parse_JSON_File')",
    "where": "@equals(item()?['id'], variables('RecordId'))"
  }
}
```
> **Decode chain:** `GetFileContent` returns base64-encoded content in
> `body(...)?['$content']`. Apply `decodeBase64()` then `json()` to get a
> usable array. `Filter Array` then acts as a WHERE clause.
>
> **When to use:** When you need a lightweight "before" snapshot to detect field
> changes from a webhook payload (the "after" state). Simpler than maintaining
> a full SharePoint list mirror — works well for up to ~10K records.
>
> **File path encoding:** In the `id` parameter, SharePoint URL-encodes paths
> twice. Spaces become `%2b` (plus sign), slashes become `%252f`.
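
This double encoding can be reproduced with Python's standard library — one `quote_plus` pass (spaces → `+`, slashes → `%2f`), then a plain percent-encode of the result:

```python
import urllib.parse

def sp_file_id(path):
    # First pass: spaces -> "+", slashes -> "%2F".
    # Second pass: percent-encode those escapes again ("%" -> "%25", "+" -> "%2B").
    return urllib.parse.quote(urllib.parse.quote_plus(path))

print(sp_file_id("/Shared Documents/data.json"))
# → %252FShared%2BDocuments%252Fdata.json
```

This matches the `id` value in the example above (hex case differs, which SharePoint treats as equivalent).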
---
## Outlook
### Outlook — Send Email
```json
"Send_Email": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_office365",
      "connectionName": "<connectionName>",
      "operationId": "SendEmailV2"
    },
    "parameters": {
      "emailMessage/To": "recipient@contoso.com",
      "emailMessage/Subject": "Automated notification",
      "emailMessage/Body": "<p>@{outputs('Compose_Message')}</p>",
      "emailMessage/IsHtml": true
    }
  }
}
```
---
### Outlook — Get Emails (Read Template from Folder)
```json
"Get_Email_Template": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_office365",
      "connectionName": "<connectionName>",
      "operationId": "GetEmailsV3"
    },
    "parameters": {
      "folderPath": "Id::<outlook-folder-id>",
      "fetchOnlyUnread": false,
      "includeAttachments": false,
      "top": 1,
      "importance": "Any",
      "fetchOnlyWithAttachment": false,
      "subjectFilter": "My Email Template Subject"
    }
  }
}
```
Access subject and body:
```
@first(outputs('Get_Email_Template')?['body/value'])?['subject']
@first(outputs('Get_Email_Template')?['body/value'])?['body']
```
> **Outlook-as-CMS pattern**: store a template email in a dedicated Outlook folder.
> Set `fetchOnlyUnread: false` so the template persists after first use.
> Non-technical users can update subject and body by editing that email —
> no flow changes required. Pass subject and body directly into `SendEmailV2`.
>
> To get a folder ID: in Outlook on the web, right-click the folder → open in
> new tab — the folder GUID is in the URL. Prefix it with `Id::` in `folderPath`.
---
## Teams
### Teams — Post Message
```json
"Post_Teams_Message": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_teams",
      "connectionName": "<connectionName>",
      "operationId": "PostMessageToConversation"
    },
    "parameters": {
      "poster": "Flow bot",
      "location": "Channel",
      "body/recipient": {
        "groupId": "<team-id>",
        "channelId": "<channel-id>"
      },
      "body/messageBody": "@outputs('Compose_Message')"
    }
  }
}
```
#### Variant: Group Chat (1:1 or Multi-Person)
To post to a group chat instead of a channel, use `"location": "Group chat"` with
a thread ID as the recipient:
```json
"Post_To_Group_Chat": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_teams",
      "connectionName": "<connectionName>",
      "operationId": "PostMessageToConversation"
    },
    "parameters": {
      "poster": "Flow bot",
      "location": "Group chat",
      "body/recipient": "19:<thread-hash>@thread.v2",
      "body/messageBody": "@outputs('Compose_Message')"
    }
  }
}
```
For 1:1 ("Chat with Flow bot"), use `"location": "Chat with Flow bot"` and set
`body/recipient` to the user's email address.
> **Active-user gate:** When sending notifications in a loop, check the recipient's
> Azure AD account is enabled before posting — avoids failed deliveries to departed
> staff:
> ```json
> "Check_User_Active": {
>   "type": "OpenApiConnection",
>   "inputs": {
>     "host": { "apiId": "/providers/Microsoft.PowerApps/apis/shared_office365users",
>               "operationId": "UserProfile_V2" },
>     "parameters": { "id": "@{item()?['Email']}" }
>   }
> }
> ```
> Then gate: `@equals(body('Check_User_Active')?['accountEnabled'], true)`
---
## Approvals
### Split Approval (Create → Wait)
The standard "Start and wait for an approval" is a single blocking action.
For more control (e.g., posting the approval link in Teams, or adding a timeout
scope), split it into two actions: `CreateAnApproval` (fire-and-forget) then
`WaitForAnApproval` (webhook pause).
```json
"Create_Approval": {
"type": "OpenApiConnection",
"runAfter": {},
"inputs": {
"host": {
"apiId": "/providers/Microsoft.PowerApps/apis/shared_approvals",
"connectionName": "<connectionName>",
"operationId": "CreateAnApproval"
},
"parameters": {
"approvalType": "CustomResponse/Result",
"ApprovalCreationInput/title": "Review: @{variables('ItemTitle')}",
"ApprovalCreationInput/assignedTo": "approver@contoso.com",
"ApprovalCreationInput/details": "Please review and select an option.",
"ApprovalCreationInput/responseOptions": ["Approve", "Reject", "Defer"],
"ApprovalCreationInput/enableNotifications": true,
"ApprovalCreationInput/enableReassignment": true
}
}
},
"Wait_For_Approval": {
"type": "OpenApiConnectionWebhook",
"runAfter": { "Create_Approval": ["Succeeded"] },
"inputs": {
"host": {
"apiId": "/providers/Microsoft.PowerApps/apis/shared_approvals",
"connectionName": "<connectionName>",
"operationId": "WaitForAnApproval"
},
"parameters": {
"approvalName": "@body('Create_Approval')?['name']"
}
}
}
```
> **`approvalType` options:**
> - `"Approve/Reject - First to respond"` — binary, first responder wins
> - `"Approve/Reject - Everyone must approve"` — requires all assignees
> - `"CustomResponse/Result"` — define your own response buttons
>
> After `Wait_For_Approval`, read the outcome:
> ```
> @body('Wait_For_Approval')?['outcome'] → "Approve", "Reject", or custom
> @body('Wait_For_Approval')?['responses'][0]?['responder']?['displayName']
> @body('Wait_For_Approval')?['responses'][0]?['comments']
> ```
>
> The split pattern lets you insert actions between create and wait — e.g.,
> posting the approval link to Teams, starting a timeout scope, or logging
> the pending approval to a tracking list.
---
# FlowStudio MCP — Action Patterns: Core
Variables, control flow, and expression patterns for Power Automate flow definitions.
> All examples assume `"runAfter"` is set appropriately.
> Replace `<connectionName>` with the **key** you used in your `connectionReferences` map
> (e.g. `shared_teams`, `shared_office365`) — NOT the connection GUID.
---
## Data & Variables
### Compose (Store a Value)
```json
"Compose_My_Value": {
"type": "Compose",
"runAfter": {},
"inputs": "@variables('myVar')"
}
```
Reference: `@outputs('Compose_My_Value')`
---
### Initialize Variable
```json
"Init_Counter": {
"type": "InitializeVariable",
"runAfter": {},
"inputs": {
"variables": [{
"name": "counter",
"type": "Integer",
"value": 0
}]
}
}
```
Types: `"Integer"`, `"Float"`, `"Boolean"`, `"String"`, `"Array"`, `"Object"`
---
### Set Variable
```json
"Set_Counter": {
"type": "SetVariable",
"runAfter": {},
"inputs": {
"name": "counter",
"value": "@add(variables('counter'), 1)"
}
}
```
---
### Append to Array Variable
```json
"Collect_Item": {
"type": "AppendToArrayVariable",
"runAfter": {},
"inputs": {
"name": "resultArray",
"value": "@item()"
}
}
```
---
### Increment Variable
```json
"Increment_Counter": {
"type": "IncrementVariable",
"runAfter": {},
"inputs": {
"name": "counter",
"value": 1
}
}
```
> Use `IncrementVariable` (not `SetVariable` with `add()`) for counters inside loops —
> it increments atomically, avoiding read-modify-write races when iterations run in
> parallel or the variable is read elsewhere in the same iteration. `value` can be any
> integer expression, e.g. `@mul(item()?['Interval'], 60)` to advance a Unix timestamp
> cursor by N minutes.
---
## Control Flow
### Condition (If/Else)
```json
"Check_Status": {
"type": "If",
"runAfter": {},
"expression": {
"and": [{ "equals": ["@item()?['Status']", "Active"] }]
},
"actions": {
"Handle_Active": {
"type": "Compose",
"runAfter": {},
"inputs": "Active user: @{item()?['Name']}"
}
},
"else": {
"actions": {
"Handle_Inactive": {
"type": "Compose",
"runAfter": {},
"inputs": "Inactive user"
}
}
}
}
```
Comparison operators: `equals`, `not`, `greater`, `greaterOrEquals`, `less`, `lessOrEquals`, `contains`
Logical: `and: [...]`, `or: [...]`
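Multiple comparisons combine under one logical operator. For example, an `or` mixing an equality check with a numeric threshold (the `Status` and `Age` field names are illustrative):
```json
"Check_Priority": {
  "type": "If",
  "runAfter": {},
  "expression": {
    "or": [
      { "equals": ["@item()?['Status']", "Escalated"] },
      { "greater": ["@item()?['Age']", 30] }
    ]
  },
  "actions": {
    "Flag_Item": { "type": "Compose", "runAfter": {}, "inputs": "needs attention" }
  },
  "else": { "actions": {} }
}
```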
---
### Switch
```json
"Route_By_Type": {
"type": "Switch",
"runAfter": {},
"expression": "@triggerBody()?['type']",
"cases": {
"Case_Email": {
"case": "email",
"actions": { "Process_Email": { "type": "Compose", "runAfter": {}, "inputs": "email" } }
},
"Case_Teams": {
"case": "teams",
"actions": { "Process_Teams": { "type": "Compose", "runAfter": {}, "inputs": "teams" } }
}
},
"default": {
"actions": { "Unknown_Type": { "type": "Compose", "runAfter": {}, "inputs": "unknown" } }
}
}
```
---
### Scope (Grouping / Try-Catch)
Wrap related actions in a Scope to give them a shared name, collapse them in the
designer, and — most importantly — handle their errors as a unit.
```json
"Scope_Get_Customer": {
"type": "Scope",
"runAfter": {},
"actions": {
"HTTP_Get_Customer": {
"type": "Http",
"runAfter": {},
"inputs": {
"method": "GET",
"uri": "https://api.example.com/customers/@{variables('customerId')}"
}
},
"Compose_Email": {
"type": "Compose",
"runAfter": { "HTTP_Get_Customer": ["Succeeded"] },
"inputs": "@outputs('HTTP_Get_Customer')?['body/email']"
}
}
},
"Handle_Scope_Error": {
"type": "Compose",
"runAfter": { "Scope_Get_Customer": ["Failed", "TimedOut"] },
"inputs": "Scope failed: @{result('Scope_Get_Customer')?[0]?['error']?['message']}"
}
```
> Reference scope results: `@result('Scope_Get_Customer')` returns an array of action
> outcomes. Use `runAfter: {"MyScope": ["Failed", "TimedOut"]}` on a follow-up action
> to create try/catch semantics without a Terminate.
---
### Foreach (Sequential)
```json
"Process_Each_Item": {
"type": "Foreach",
"runAfter": {},
"foreach": "@outputs('Get_Items')?['body/value']",
"operationOptions": "Sequential",
"actions": {
"Handle_Item": {
"type": "Compose",
"runAfter": {},
"inputs": "@item()?['Title']"
}
}
}
```
> Always include `"operationOptions": "Sequential"` unless parallel is intentional.
---
### Foreach (Parallel with Concurrency Limit)
```json
"Process_Each_Item_Parallel": {
"type": "Foreach",
"runAfter": {},
"foreach": "@body('Get_SP_Items')?['value']",
"runtimeConfiguration": {
"concurrency": {
"repetitions": 20
}
},
"actions": {
"HTTP_Upsert": {
"type": "Http",
"runAfter": {},
"inputs": {
"method": "POST",
"uri": "https://api.example.com/contacts/@{item()?['Email']}"
}
}
}
}
```
> Set `repetitions` to control how many items are processed simultaneously.
> Practical values: `5–10` for external API calls (respect rate limits),
> `20–50` for internal/fast operations.
> Omit `runtimeConfiguration.concurrency` entirely for the platform default
> (currently 50). Do NOT use `"operationOptions": "Sequential"` and concurrency together.
---
### Wait (Delay)
```json
"Delay_10_Minutes": {
"type": "Wait",
"runAfter": {},
"inputs": {
"interval": {
"count": 10,
"unit": "Minute"
}
}
}
```
Valid `unit` values: `"Second"`, `"Minute"`, `"Hour"`, `"Day"`
> Use a Delay + re-fetch as a deduplication guard: wait for any competing process
> to complete, then re-read the record before acting. This avoids double-processing
> when multiple triggers or manual edits can race on the same item.
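A sketch of that guard against a SharePoint item, assuming a `Status` column that a competing process flips once it claims the record (operation parameters are placeholders):
```json
"Delay_Before_Recheck": {
  "type": "Wait",
  "runAfter": {},
  "inputs": { "interval": { "count": 2, "unit": "Minute" } }
},
"Refetch_Item": {
  "type": "OpenApiConnection",
  "runAfter": { "Delay_Before_Recheck": ["Succeeded"] },
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "connectionName": "<connectionName>",
      "operationId": "GetItem"
    },
    "parameters": { "dataset": "<site-url>", "table": "<list-id>", "id": "@triggerBody()?['ID']" }
  }
},
"Check_Still_Unprocessed": {
  "type": "If",
  "runAfter": { "Refetch_Item": ["Succeeded"] },
  "expression": { "equals": ["@outputs('Refetch_Item')?['body/Status']", "Pending"] },
  "actions": { "...": "proceed only if the item is still pending" },
  "else": { "actions": {} }
}
```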
---
### Terminate (Success or Failure)
```json
"Terminate_Success": {
"type": "Terminate",
"runAfter": {},
"inputs": {
"runStatus": "Succeeded"
}
},
"Terminate_Failure": {
"type": "Terminate",
"runAfter": { "Risky_Action": ["Failed"] },
"inputs": {
"runStatus": "Failed",
"runError": {
"code": "StepFailed",
"message": "@{outputs('Get_Error_Message')}"
}
}
}
```
---
### Do Until (Loop Until Condition)
Repeats a block of actions until an exit condition becomes true.
Use when the number of iterations is not known upfront (e.g. paginating an API,
walking a time range, polling until a status changes).
```json
"Do_Until_Done": {
"type": "Until",
"runAfter": {},
"expression": "@greaterOrEquals(variables('cursor'), variables('endValue'))",
"limit": {
"count": 5000,
"timeout": "PT5H"
},
"actions": {
"Do_Work": {
"type": "Compose",
"runAfter": {},
"inputs": "@variables('cursor')"
},
"Advance_Cursor": {
"type": "IncrementVariable",
"runAfter": { "Do_Work": ["Succeeded"] },
"inputs": {
"name": "cursor",
"value": 1
}
}
}
}
```
> Always set `limit.count` and `limit.timeout` explicitly — the platform defaults are
> low (60 iterations, 1 hour). For time-range walkers use `limit.count: 5000` and
> `limit.timeout: "PT5H"` (ISO 8601 duration).
>
> The exit condition is evaluated **before** each iteration. Initialise your cursor
> variable before the loop so the condition can evaluate correctly on the first pass.
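For example, initialising the cursor ahead of the `Do_Until_Done` loop above so the first condition check sees a real value:
```json
"Init_Cursor": {
  "type": "InitializeVariable",
  "runAfter": {},
  "inputs": {
    "variables": [{ "name": "cursor", "type": "Integer", "value": 0 }]
  }
}
```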
---
### Async Polling with RequestId Correlation
When an API starts a long-running job asynchronously (e.g. Power BI dataset refresh,
report generation, batch export), the trigger call returns a request ID. Capture it
from the **response header**, then poll a status endpoint filtering by that exact ID:
```json
"Start_Job": {
"type": "Http",
"inputs": { "method": "POST", "uri": "https://api.example.com/jobs" }
},
"Capture_Request_ID": {
"type": "Compose",
"runAfter": { "Start_Job": ["Succeeded"] },
"inputs": "@outputs('Start_Job')?['headers/X-Request-Id']"
},
"Initialize_Status": {
"type": "InitializeVariable",
"inputs": { "variables": [{ "name": "jobStatus", "type": "String", "value": "Running" }] }
},
"Poll_Until_Done": {
"type": "Until",
"expression": "@not(equals(variables('jobStatus'), 'Running'))",
"limit": { "count": 60, "timeout": "PT30M" },
"actions": {
"Delay": { "type": "Wait", "inputs": { "interval": { "count": 20, "unit": "Second" } } },
"Get_History": {
"type": "Http",
"runAfter": { "Delay": ["Succeeded"] },
"inputs": { "method": "GET", "uri": "https://api.example.com/jobs/history" }
},
"Filter_This_Job": {
"type": "Query",
"runAfter": { "Get_History": ["Succeeded"] },
"inputs": {
"from": "@outputs('Get_History')?['body/items']",
"where": "@equals(item()?['requestId'], outputs('Capture_Request_ID'))"
}
},
"Set_Status": {
"type": "SetVariable",
"runAfter": { "Filter_This_Job": ["Succeeded"] },
"inputs": {
"name": "jobStatus",
"value": "@first(body('Filter_This_Job'))?['status']"
}
}
}
},
"Handle_Failure": {
"type": "If",
"runAfter": { "Poll_Until_Done": ["Succeeded"] },
"expression": { "equals": ["@variables('jobStatus')", "Failed"] },
"actions": { "Terminate_Failed": { "type": "Terminate", "inputs": { "runStatus": "Failed" } } },
"else": { "actions": {} }
}
```
Access response headers: `@outputs('Start_Job')?['headers/X-Request-Id']`
> **Status variable initialisation**: set a sentinel value (`"Running"`, `"Unknown"`) before
> the loop. The exit condition tests for any value other than the sentinel.
> This way an empty poll result (job not yet in history) leaves the variable unchanged
> and the loop continues — it doesn't accidentally exit on null.
>
> **Filter before extracting**: always `Filter Array` the history to your specific
> request ID before calling `first()`. History endpoints return all jobs; without
> filtering, status from a different concurrent job can corrupt your poll.
---
### runAfter Fallback (Failed → Alternative Action)
Route to a fallback action when a primary action fails — without a Condition block.
Simply set `runAfter` on the fallback to accept `["Failed"]` from the primary:
```json
"HTTP_Get_Hi_Res": {
"type": "Http",
"runAfter": {},
"inputs": { "method": "GET", "uri": "https://api.example.com/data?resolution=hi-res" }
},
"HTTP_Get_Low_Res": {
"type": "Http",
"runAfter": { "HTTP_Get_Hi_Res": ["Failed"] },
"inputs": { "method": "GET", "uri": "https://api.example.com/data?resolution=low-res" }
}
```
> Actions that follow can use `runAfter` accepting both `["Succeeded", "Skipped"]` to
> handle either path — see **Fan-In Join Gate** below.
---
### Fan-In Join Gate (Merge Two Mutually Exclusive Branches)
When two branches are mutually exclusive (only one can succeed per run), use a single
downstream action that accepts `["Succeeded", "Skipped"]` from **both** branches.
The gate fires exactly once regardless of which branch ran:
```json
"Increment_Count": {
"type": "IncrementVariable",
"runAfter": {
"Update_Hi_Res_Metadata": ["Succeeded", "Skipped"],
"Update_Low_Res_Metadata": ["Succeeded", "Skipped"]
},
"inputs": { "name": "LoopCount", "value": 1 }
}
```
> This avoids duplicating the downstream action in each branch. The key insight:
> whichever branch was skipped reports `Skipped` — the gate accepts that state and
> fires once. Only works cleanly when the two branches are truly mutually exclusive
> (e.g. one is `runAfter: [...Failed]` of the other).
---
## Expressions
### Common Expression Patterns
```
Null-safe field access: @item()?['FieldName']
Null guard: @coalesce(item()?['Name'], 'Unknown')
String format: @{variables('firstName')} @{variables('lastName')}
Date today: @utcNow()
Formatted date: @formatDateTime(utcNow(), 'dd/MM/yyyy')
Add days: @addDays(utcNow(), 7)
Array length: @length(variables('myArray'))
Filter array: Use the "Filter array" action (no inline filter expression exists in PA)
Union (new wins): @union(body('New_Data'), outputs('Old_Data'))
Sort: @sort(variables('myArray'), 'Date')
Unix timestamp → date: @formatDateTime(addSeconds('1970-01-01', triggerBody()?['created']), 'yyyy-MM-dd')
Date → Unix milliseconds: @div(sub(ticks(startOfDay(item()?['Created'])), ticks(formatDateTime('1970-01-01Z','o'))), 10000)
Date → Unix seconds: @div(sub(ticks(item()?['Start']), ticks('1970-01-01T00:00:00Z')), 10000000)
Unix seconds → datetime: @addSeconds('1970-01-01T00:00:00Z', int(variables('Unix')))
Coalesce as no-else: @coalesce(outputs('Optional_Step'), outputs('Default_Step'))
Flow elapsed minutes: @div(float(sub(ticks(utcNow()), ticks(outputs('Flow_Start')))), 600000000)
HH:mm time string: @formatDateTime(outputs('Local_Datetime'), 'HH:mm')
Response header: @outputs('HTTP_Action')?['headers/X-Request-Id']
Array max (by field): @reverse(sort(body('Select_Items'), 'Date'))[0]
Integer day span: @int(split(dateDifference(outputs('Start'), outputs('End')), '.')[0])
ISO week number: @div(add(dayofyear(addDays(subtractFromTime(date, sub(dayofweek(date),1), 'Day'), 3)), 6), 7)
Join errors to string: @if(equals(length(variables('Errors')),0), null, concat(join(variables('Errors'),', '),' not found.'))
Normalize before compare: @replace(coalesce(outputs('Value'),''),'_',' ')
Robust non-empty check: @greater(length(trim(coalesce(string(outputs('Val')), ''))), 0)
```
### Newlines in Expressions
> **`\n` does NOT produce a newline inside Power Automate expressions.** It is
> treated as a literal backslash + `n` and will either appear verbatim or cause
> a validation error.
Use `decodeUriComponent('%0a')` wherever you need a newline character:
```
Newline (LF): decodeUriComponent('%0a')
CRLF: decodeUriComponent('%0d%0a')
```
Example — multi-line Teams or email body via `concat()`:
```json
"Compose_Message": {
"type": "Compose",
"inputs": "@concat('Hi ', outputs('Get_User')?['body/displayName'], ',', decodeUriComponent('%0a%0a'), 'Your report is ready.', decodeUriComponent('%0a'), '- The Team')"
}
```
Example — `join()` with newline separator:
```json
"Compose_List": {
"type": "Compose",
"inputs": "@join(body('Select_Names'), decodeUriComponent('%0a'))"
}
```
> This is the only reliable way to embed newlines in dynamically built strings
> in Power Automate flow definitions (confirmed against Logic Apps runtime).
---
### Sum an array (XPath trick)
Power Automate has no native `sum()` function. Use XPath on XML instead:
```json
"Prepare_For_Sum": {
"type": "Compose",
"runAfter": {},
"inputs": { "root": { "numbers": "@body('Select_Amounts')" } }
},
"Sum": {
"type": "Compose",
"runAfter": { "Prepare_For_Sum": ["Succeeded"] },
"inputs": "@xpath(xml(outputs('Prepare_For_Sum')), 'sum(/root/numbers)')"
}
```
`Select_Amounts` must output a flat array of numbers (use a **Select** action to extract a single numeric field first). The result is a number you can use directly in conditions or calculations.
> This is the only way to aggregate (sum/min/max) an array without a loop in Power Automate.
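A sketch of the upstream `Select_Amounts` step — the bare-expression `select` form projects a flat array of values, which is what the XPath sum expects (`Filter_Active_Rows` and the `Amount` field are assumptions):
```json
"Select_Amounts": {
  "type": "Select",
  "runAfter": {},
  "inputs": {
    "from": "@body('Filter_Active_Rows')",
    "select": "@item()?['Amount']"
  }
}
```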
---
# FlowStudio MCP — Action Patterns: Data Transforms
Array operations, HTTP calls, parsing, and data transformation patterns.
> All examples assume `"runAfter"` is set appropriately.
> `<connectionName>` is the **key** in `connectionReferences` (e.g. `shared_sharepointonline`), not the GUID.
> The GUID goes in the map value's `connectionName` property.
---
## Array Operations
### Select (Reshape / Project an Array)
Transforms each item in an array, keeping only the columns you need or renaming them.
Avoids carrying large objects through the rest of the flow.
```json
"Select_Needed_Columns": {
"type": "Select",
"runAfter": {},
"inputs": {
"from": "@outputs('HTTP_Get_Subscriptions')?['body/data']",
"select": {
"id": "@item()?['id']",
"status": "@item()?['status']",
"trial_end": "@item()?['trial_end']",
"cancel_at": "@item()?['cancel_at']",
"interval": "@item()?['plan']?['interval']"
}
}
}
```
Result reference: `@body('Select_Needed_Columns')` — returns a direct array of reshaped objects.
> Use Select before looping or filtering to reduce payload size and simplify
> downstream expressions. Works on any array — SP results, HTTP responses, variables.
>
> **Tips:**
> - **Single-to-array coercion:** When an API returns a single object but you need
> Select (which requires an array), wrap it: `@array(body('Get_Employee')?['data'])`.
> The output is a 1-element array — access results via `?[0]?['field']`.
> - **Null-normalize optional fields:** Use `@if(empty(item()?['field']), null, item()?['field'])`
> on every optional field to normalize empty strings, missing properties, and empty
> objects to explicit `null`. Ensures consistent downstream `@equals(..., @null)` checks.
> - **Flatten nested objects:** Project nested properties into flat fields:
> ```
> "manager_name": "@if(empty(item()?['manager']?['name']), null, item()?['manager']?['name'])"
> ```
> This enables direct field-level comparison with a flat schema from another source.
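The single-to-array coercion and null-normalization combine like this in practice (the `Get_Employee` action, its `data` property, and the field names are assumptions):
```json
"Select_Employee_Fields": {
  "type": "Select",
  "runAfter": {},
  "inputs": {
    "from": "@array(body('Get_Employee')?['data'])",
    "select": {
      "id": "@item()?['id']",
      "manager_name": "@if(empty(item()?['manager']?['name']), null, item()?['manager']?['name'])"
    }
  }
}
```
Access the single result via `@body('Select_Employee_Fields')?[0]?['id']`.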
---
### Filter Array (Query)
Filters an array to items matching a condition. Use the action form (not the `filter()`
expression) for complex multi-condition logic — it's clearer and easier to maintain.
```json
"Filter_Active_Subscriptions": {
"type": "Query",
"runAfter": {},
"inputs": {
"from": "@body('Select_Needed_Columns')",
"where": "@and(or(equals(item().status, 'trialing'), equals(item().status, 'active')), equals(item().cancel_at, null))"
}
}
```
Result reference: `@body('Filter_Active_Subscriptions')` — direct filtered array.
> Tip: run multiple Filter Array actions on the same source array to create
> named buckets (e.g. active, being-canceled, fully-canceled), then use
> `coalesce(first(body('Filter_A')), first(body('Filter_B')), ...)` to pick
> the highest-priority match without any loops.
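A sketch of the bucket-priority pick, assuming three Filter Array actions named as shown (highest priority first):
```json
"Pick_Best_Subscription": {
  "type": "Compose",
  "runAfter": {
    "Filter_Active": ["Succeeded"],
    "Filter_Being_Canceled": ["Succeeded"],
    "Filter_Fully_Canceled": ["Succeeded"]
  },
  "inputs": "@coalesce(first(body('Filter_Active')), first(body('Filter_Being_Canceled')), first(body('Filter_Fully_Canceled')))"
}
```
`first()` on an empty bucket returns `null`, so `coalesce()` falls through to the next bucket in priority order.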
---
### Create CSV Table (Array → CSV String)
Converts an array of objects into a CSV-formatted string — no connector call, no code.
Use after a `Select` or `Filter Array` to export data or pass it to a file-write action.
```json
"Create_CSV": {
"type": "Table",
"runAfter": {},
"inputs": {
"from": "@body('Select_Output_Columns')",
"format": "CSV"
}
}
```
Result reference: `@body('Create_CSV')` — a plain string with header row + data rows.
```json
// Custom column order / renamed headers:
"Create_CSV_Custom": {
"type": "Table",
"inputs": {
"from": "@body('Select_Output_Columns')",
"format": "CSV",
"columns": [
{ "header": "Date", "value": "@item()?['transactionDate']" },
{ "header": "Amount", "value": "@item()?['amount']" },
{ "header": "Description", "value": "@item()?['description']" }
]
}
}
```
> Without `columns`, headers are taken from the object property names in the source array.
> With `columns`, you control header names and column order explicitly.
>
> The output is a raw string. Write it to a file with `CreateFile` or `UpdateFile`
> (set `body` to `@body('Create_CSV')`), or store in a variable with `SetVariable`.
>
> If source data came from Power BI's `ExecuteDatasetQuery`, column names will be
> wrapped in square brackets (e.g. `[Amount]`). Strip them before writing:
> `@replace(replace(body('Create_CSV'),'[',''),']','')`
---
### range() + Select for Array Generation
`range(0, N)` produces an integer sequence `[0, 1, 2, …, N-1]`. Pipe it through
a Select action to generate date series, index grids, or any computed array
without a loop:
```json
// Generate 14 consecutive dates starting from a base date
"Generate_Date_Series": {
"type": "Select",
"inputs": {
"from": "@range(0, 14)",
"select": "@addDays(outputs('Base_Date'), item(), 'yyyy-MM-dd')"
}
}
```
Result: `@body('Generate_Date_Series')` → `["2025-01-06", "2025-01-07", …, "2025-01-19"]`
```json
// Flatten a 2D array (rows × cols) into 1D using arithmetic indexing
"Flatten_Grid": {
"type": "Select",
"inputs": {
"from": "@range(0, mul(length(outputs('Rows')), length(outputs('Cols'))))",
"select": {
"row": "@outputs('Rows')[div(item(), length(outputs('Cols')))]",
"col": "@outputs('Cols')[mod(item(), length(outputs('Cols')))]"
}
}
}
```
> `range()` is zero-based. The Cartesian product pattern above uses `div(i, cols)`
> for the row index and `mod(i, cols)` for the column index — equivalent to a
> nested for-loop flattened into a single pass. Useful for generating time-slot ×
> date grids, shift × location assignments, etc.
---
### Dynamic Dictionary via json(concat(join()))
When you need O(1) key→value lookups at runtime and Power Automate has no native
dictionary type, build one from an array using Select + join + json:
```json
"Build_Key_Value_Pairs": {
"type": "Select",
"inputs": {
"from": "@body('Get_Lookup_Items')?['value']",
"select": "@concat('\"', item()?['Key'], '\":\"', item()?['Value'], '\"')"
}
},
"Assemble_Dictionary": {
"type": "Compose",
"inputs": "@json(concat('{', join(body('Build_Key_Value_Pairs'), ','), '}'))"
}
```
Lookup: `@outputs('Assemble_Dictionary')?['myKey']`
```json
// Practical example: date → rate-code lookup for business rules
"Build_Holiday_Rates": {
"type": "Select",
"inputs": {
"from": "@body('Get_Holidays')?['value']",
"select": "@concat('\"', formatDateTime(item()?['Date'], 'yyyy-MM-dd'), '\":\"', item()?['RateCode'], '\"')"
}
},
"Holiday_Dict": {
"type": "Compose",
"inputs": "@json(concat('{', join(body('Build_Holiday_Rates'), ','), '}'))"
}
```
Then inside a loop: `@coalesce(outputs('Holiday_Dict')?[item()?['Date']], 'Standard')`
> The `json(concat('{', join(...), '}'))` pattern works for string values. For numeric
> or boolean values, omit the inner escaped quotes around the value portion.
> Keys must be unique — duplicate keys silently overwrite earlier ones.
> This replaces deeply nested `if(equals(key,'A'),'X', if(equals(key,'B'),'Y', ...))` chains.
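For numeric values, emit the value portion unquoted — `string()` renders the number into the JSON text without surrounding quotes (the `Get_Rates` action and field names are illustrative):
```json
"Build_Numeric_Pairs": {
  "type": "Select",
  "inputs": {
    "from": "@body('Get_Rates')?['value']",
    "select": "@concat('\"', item()?['Code'], '\":', string(item()?['Multiplier']))"
  }
}
```
After `@json(concat('{', join(body('Build_Numeric_Pairs'), ','), '}'))`, lookups return actual numbers usable directly in `mul()` or comparisons.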
---
### union() for Changed-Field Detection
When you need to find records where *any* of several fields has changed, run one
`Filter Array` per field and `union()` the results. This avoids a complex
multi-condition filter and produces a clean deduplicated set:
```json
"Filter_Name_Changed": {
"type": "Query",
"inputs": { "from": "@body('Existing_Records')",
"where": "@not(equals(item()?['name'], item()?['dest_name']))" }
},
"Filter_Status_Changed": {
"type": "Query",
"inputs": { "from": "@body('Existing_Records')",
"where": "@not(equals(item()?['status'], item()?['dest_status']))" }
},
"All_Changed": {
"type": "Compose",
"inputs": "@union(body('Filter_Name_Changed'), body('Filter_Status_Changed'))"
}
```
Reference: `@outputs('All_Changed')` — deduplicated array of rows where anything changed.
> `union()` deduplicates by object identity, so a row that changed in both fields
> appears once. Add more `Filter_*_Changed` inputs to `union()` as needed:
> `@union(body('F1'), body('F2'), body('F3'))`
---
### File-Content Change Gate
Before running expensive processing on a file or blob, compare its current content
to a stored baseline. Skip entirely if nothing has changed — makes sync flows
idempotent and safe to re-run or schedule aggressively.
```json
"Get_File_From_Source": { ... },
"Get_Stored_Baseline": { ... },
"Condition_File_Changed": {
"type": "If",
"expression": {
"not": {
"equals": [
"@base64(body('Get_File_From_Source'))",
"@body('Get_Stored_Baseline')"
]
}
},
"actions": {
"Update_Baseline": { "...": "overwrite stored copy with new content" },
"Process_File": { "...": "all expensive work goes here" }
},
"else": { "actions": {} }
}
```
> Store the baseline as a file in SharePoint or blob storage — `base64()`-encode the
> live content before comparing so binary and text files are handled uniformly.
> Write the new baseline **before** processing so a re-run after a partial failure
> does not re-process the same file again.
---
### Set-Join for Sync (Update Detection without Nested Loops)
When syncing a source collection into a destination (e.g. API response → SharePoint list,
CSV → database), avoid nested `Apply to each` loops to find changed records.
Instead, **project flat key arrays** and use `contains()` to perform set operations —
zero nested loops, and the final loop only touches changed items.
**Full insert/update/delete sync pattern:**
```json
// Step 1 — Project a flat key array from the DESTINATION (e.g. SharePoint)
"Select_Dest_Keys": {
"type": "Select",
"inputs": {
"from": "@outputs('Get_Dest_Items')?['body/value']",
"select": "@item()?['Title']"
}
}
// → ["KEY1", "KEY2", "KEY3", ...]
// Step 2 — INSERT: source rows whose key is NOT in destination
"Filter_To_Insert": {
"type": "Query",
"inputs": {
"from": "@body('Source_Array')",
"where": "@not(contains(body('Select_Dest_Keys'), item()?['key']))"
}
}
// → Apply to each Filter_To_Insert → CreateItem
// Step 3 — INNER JOIN: source rows that exist in destination
"Filter_Already_Exists": {
"type": "Query",
"inputs": {
"from": "@body('Source_Array')",
"where": "@contains(body('Select_Dest_Keys'), item()?['key'])"
}
}
// Step 4 — UPDATE: one Filter per tracked field, then union them
"Filter_Field1_Changed": {
"type": "Query",
"inputs": {
"from": "@body('Filter_Already_Exists')",
"where": "@not(equals(item()?['field1'], item()?['dest_field1']))"
}
}
"Filter_Field2_Changed": {
"type": "Query",
"inputs": {
"from": "@body('Filter_Already_Exists')",
"where": "@not(equals(item()?['field2'], item()?['dest_field2']))"
}
}
"Union_Changed": {
"type": "Compose",
"inputs": "@union(body('Filter_Field1_Changed'), body('Filter_Field2_Changed'))"
}
// → rows where ANY tracked field differs
// Step 5 — Resolve destination IDs for changed rows (no nested loop)
"Select_Changed_Keys": {
"type": "Select",
"inputs": { "from": "@outputs('Union_Changed')", "select": "@item()?['key']" }
}
"Filter_Dest_Items_To_Update": {
"type": "Query",
"inputs": {
"from": "@outputs('Get_Dest_Items')?['body/value']",
"where": "@contains(body('Select_Changed_Keys'), item()?['Title'])"
}
}
// Step 6 — Single loop over changed items only
"Apply_to_each_Update": {
"type": "Foreach",
"foreach": "@body('Filter_Dest_Items_To_Update')",
"actions": {
"Get_Source_Row": {
"type": "Query",
"inputs": {
"from": "@outputs('Union_Changed')",
"where": "@equals(item()?['key'], items('Apply_to_each_Update')?['Title'])"
}
},
"Update_Item": {
"...": "...",
"id": "@items('Apply_to_each_Update')?['ID']",
"item/field1": "@first(body('Get_Source_Row'))?['field1']"
}
}
}
// Step 7 — DELETE: destination keys NOT in source
"Select_Source_Keys": {
"type": "Select",
"inputs": { "from": "@body('Source_Array')", "select": "@item()?['key']" }
}
"Filter_To_Delete": {
"type": "Query",
"inputs": {
"from": "@outputs('Get_Dest_Items')?['body/value']",
"where": "@not(contains(body('Select_Source_Keys'), item()?['Title']))"
}
}
// → Apply to each Filter_To_Delete → DeleteItem
```
> **Why this beats nested loops**: the naive approach (for each dest item, scan source)
> is O(n × m) and hits Power Automate's 100k-action run limit fast on large lists.
> This pattern is O(n + m): one pass to build key arrays, one pass per filter.
> The update loop in Step 6 only iterates *changed* records — often a tiny fraction
> of the full collection. Run Steps 2/4/7 in **parallel Scopes** for further speed.
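Parallel Scopes here are simply sibling Scope actions whose `runAfter` points at their respective key-projection steps — both start as soon as their inputs exist (scope contents elided with the document's placeholder style):
```json
"Scope_Inserts": {
  "type": "Scope",
  "runAfter": { "Select_Dest_Keys": ["Succeeded"] },
  "actions": { "...": "Filter_To_Insert → Apply to each → CreateItem" }
},
"Scope_Deletes": {
  "type": "Scope",
  "runAfter": { "Select_Source_Keys": ["Succeeded"] },
  "actions": { "...": "Filter_To_Delete → Apply to each → DeleteItem" }
}
```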
---
### First-or-Null Single-Row Lookup
Use `first()` on the result array to extract one record without a loop.
Then null-check the output to guard downstream actions.
```json
"Get_First_Match": {
"type": "Compose",
"runAfter": { "Get_SP_Items": ["Succeeded"] },
"inputs": "@first(outputs('Get_SP_Items')?['body/value'])"
}
```
In a Condition, test for no-match with the **`@null` literal** (not `empty()`):
```json
"Condition": {
"type": "If",
"expression": {
"not": {
"equals": [
"@outputs('Get_First_Match')",
"@null"
]
}
}
}
```
Access fields on the matched row: `@outputs('Get_First_Match')?['FieldName']`
> Use this instead of `Apply to each` when you only need one matching record.
> `first()` on an empty array returns `null`; `empty()` is for arrays/strings,
> not scalars — using it on a `first()` result causes a runtime error.
---
## HTTP & Parsing
### HTTP Action (External API)
```json
"Call_External_API": {
"type": "Http",
"runAfter": {},
"inputs": {
"method": "POST",
"uri": "https://api.example.com/endpoint",
"headers": {
"Content-Type": "application/json",
"Authorization": "Bearer @{variables('apiToken')}"
},
"body": {
"data": "@outputs('Compose_Payload')"
},
"retryPolicy": {
"type": "Fixed",
"count": 3,
"interval": "PT10S"
}
}
}
```
Response reference: `@outputs('Call_External_API')?['body']`
#### Variant: ActiveDirectoryOAuth (Service-to-Service)
For calling APIs that require Azure AD client-credentials (e.g., Microsoft Graph),
use in-line OAuth instead of a Bearer token variable:
```json
"Call_Graph_API": {
"type": "Http",
"runAfter": {},
"inputs": {
"method": "GET",
"uri": "https://graph.microsoft.com/v1.0/users?$search=\"employeeId:@{variables('Code')}\"&$select=id,displayName",
"headers": {
"Content-Type": "application/json",
"ConsistencyLevel": "eventual"
},
"authentication": {
"type": "ActiveDirectoryOAuth",
"authority": "https://login.microsoftonline.com",
"tenant": "<tenant-id>",
"audience": "https://graph.microsoft.com",
"clientId": "<app-registration-id>",
"secret": "@parameters('graphClientSecret')"
}
}
}
```
> **When to use:** Calling Microsoft Graph, Azure Resource Manager, or any
> Azure AD-protected API from a flow without a premium connector.
>
> The `authentication` block handles the entire OAuth client-credentials flow
> transparently — no manual token acquisition step needed.
>
> `ConsistencyLevel: eventual` is required for Graph `$search` queries.
> Without it, `$search` returns 400.
>
> For PATCH/PUT writes, the same `authentication` block works — just change
> `method` and add a `body`.
>
> ⚠️ **Never hardcode `secret` inline.** Use `@parameters('graphClientSecret')`
> and declare it in the flow's `parameters` block (type `securestring`). This
> prevents the secret from appearing in run history or being readable via
> `get_live_flow`. Declare the parameter like:
> ```json
> "parameters": {
> "graphClientSecret": { "type": "securestring", "defaultValue": "" }
> }
> ```
> Then pass the real value via the flow's connections or environment variables
> — never commit it to source control.
---
### HTTP Response (Return to Caller)
Used in HTTP-triggered flows to send a structured reply back to the caller.
Must run before the flow times out (default 2 min for synchronous HTTP).
```json
"Response": {
"type": "Response",
"runAfter": {},
"inputs": {
"statusCode": 200,
"headers": {
"Content-Type": "application/json"
},
"body": {
"status": "success",
"message": "@{outputs('Compose_Result')}"
}
}
}
```
> **PowerApps / low-code caller pattern**: always return `statusCode: 200` with a
> `status` field in the body (`"success"` / `"error"`). PowerApps HTTP actions
> do not handle non-2xx responses gracefully — the caller should inspect
> `body.status` rather than the HTTP status code.
>
> Use multiple Response actions — one per branch — so each path returns
> an appropriate message. Only one will execute per run.
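An error-branch Response under that convention might look like this, assuming the main work sits in a Scope named `Scope_Main` (the message expression reuses the `result()` pattern from the Scope section):
```json
"Response_Error": {
  "type": "Response",
  "runAfter": { "Scope_Main": ["Failed", "TimedOut"] },
  "inputs": {
    "statusCode": 200,
    "headers": { "Content-Type": "application/json" },
    "body": {
      "status": "error",
      "message": "@{result('Scope_Main')?[0]?['error']?['message']}"
    }
  }
}
```
The caller checks `body.status`, so this branch still returns HTTP 200.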
---
### Child Flow Call (Parent→Child via HTTP POST)
Power Automate supports parent→child orchestration by calling a child flow's
HTTP trigger URL directly. The parent sends an HTTP POST and blocks until the
child returns a `Response` action. The child flow uses a `manual` (Request) trigger.
```json
// PARENT — call child flow and wait for its response
"Call_Child_Flow": {
"type": "Http",
"inputs": {
"method": "POST",
"uri": "https://prod-XX.australiasoutheast.logic.azure.com:443/workflows/<workflowId>/triggers/manual/paths/invoke?api-version=2016-06-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<SAS>",
"headers": { "Content-Type": "application/json" },
"body": {
"ID": "@triggerBody()?['ID']",
"WeekEnd": "@triggerBody()?['WeekEnd']",
"Payload": "@variables('dataArray')"
},
"retryPolicy": { "type": "none" }
},
"operationOptions": "DisableAsyncPattern",
"runtimeConfiguration": {
"contentTransfer": { "transferMode": "Chunked" }
},
"limit": { "timeout": "PT2H" }
}
```
```json
// CHILD — manual trigger receives the JSON body
// (trigger definition)
"manual": {
"type": "Request",
"kind": "Http",
"inputs": {
"schema": {
"type": "object",
"properties": {
"ID": { "type": "string" },
"WeekEnd": { "type": "string" },
"Payload": { "type": "array" }
}
}
}
}
// CHILD — return result to parent
"Response_Success": {
"type": "Response",
"inputs": {
"statusCode": 200,
"headers": { "Content-Type": "application/json" },
"body": { "Result": "Success", "Count": "@length(variables('processed'))" }
}
}
```
> **`retryPolicy: none`** — critical on the parent's HTTP call. Without it, a child
> flow timeout triggers retries, spawning duplicate child runs.
>
> **`DisableAsyncPattern`** — prevents the parent from treating a 202 Accepted as
> completion. The parent will block until the child sends its `Response`.
>
> **`transferMode: Chunked`** — enable when passing large arrays (>100 KB) to the child;
> avoids request-size limits.
>
> **`limit.timeout: PT2H`** — raise the default 2-minute HTTP timeout for long-running
> children. Max is PT24H.
>
> The child flow's trigger URL contains a SAS token (`sig=...`) that authenticates
> the call. Copy it from the child flow's trigger properties panel. The URL changes
> if the trigger is deleted and re-created.
---
### Parse JSON
```json
"Parse_Response": {
"type": "ParseJson",
"runAfter": {},
"inputs": {
"content": "@outputs('Call_External_API')?['body']",
"schema": {
"type": "object",
"properties": {
"id": { "type": "integer" },
"name": { "type": "string" },
"items": {
"type": "array",
"items": { "type": "object" }
}
}
}
}
}
```
Access parsed values: `@body('Parse_Response')?['name']`
---
### Manual CSV → JSON (No Premium Action)
Parse a raw CSV string into an array of objects using only built-in expressions.
Avoids the premium "Parse CSV" connector action.
```json
"Delimiter": {
"type": "Compose",
"inputs": ","
},
"Strip_Quotes": {
"type": "Compose",
"inputs": "@replace(body('Get_File_Content'), '\"', '')"
},
"Detect_Line_Ending": {
"type": "Compose",
"inputs": "@if(equals(indexOf(outputs('Strip_Quotes'), decodeUriComponent('%0D%0A')), -1), if(equals(indexOf(outputs('Strip_Quotes'), decodeUriComponent('%0A')), -1), decodeUriComponent('%0D'), decodeUriComponent('%0A')), decodeUriComponent('%0D%0A'))"
},
"Headers": {
"type": "Compose",
"inputs": "@split(first(split(outputs('Strip_Quotes'), outputs('Detect_Line_Ending'))), outputs('Delimiter'))"
},
"Data_Rows": {
"type": "Compose",
"inputs": "@skip(split(outputs('Strip_Quotes'), outputs('Detect_Line_Ending')), 1)"
},
"Select_CSV_Body": {
"type": "Select",
"inputs": {
"from": "@outputs('Data_Rows')",
"select": {
"@{outputs('Headers')[0]}": "@split(item(), outputs('Delimiter'))[0]",
"@{outputs('Headers')[1]}": "@split(item(), outputs('Delimiter'))[1]",
"@{outputs('Headers')[2]}": "@split(item(), outputs('Delimiter'))[2]"
}
}
},
"Filter_Empty_Rows": {
"type": "Query",
"inputs": {
"from": "@body('Select_CSV_Body')",
"where": "@not(equals(item()?[outputs('Headers')[0]], null))"
}
}
```
Result: `@body('Filter_Empty_Rows')` — array of objects with header names as keys.
> **`Detect_Line_Ending`** handles CRLF (Windows), LF (Unix), and CR (old Mac) automatically
> using `indexOf()` with `decodeUriComponent('%0D%0A' / '%0A' / '%0D')`.
>
> **Dynamic key names in `Select`**: `@{outputs('Headers')[0]}` as a JSON key in a
> `Select` shape sets the output property name at runtime from the header row —
> this works as long as the expression is in `@{...}` interpolation syntax.
>
> **Columns with embedded commas**: if field values can contain the delimiter,
> use `length(split(row, ','))` in a Switch to detect the column count and manually
> reassemble the split fragments: `@concat(split(item(),',')[1],',',split(item(),',')[2])`
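The expression chain above can be sanity-checked locally with this Python sketch — it is not part of the flow, it just mirrors what each Compose/Select/Query step computes (note it also drops rows whose first column is an empty string, which is how a blank trailing line behaves after the split):

```python
# Mirrors the Strip_Quotes → Detect_Line_Ending → Headers/Data_Rows → Select → Query chain
# so the flow expressions can be checked against a sample CSV before deploying.
def csv_to_objects(raw: str, delimiter: str = ",") -> list[dict]:
    text = raw.replace('"', "")  # Strip_Quotes
    # Detect_Line_Ending: prefer CRLF, then LF, then CR
    if "\r\n" in text:
        ending = "\r\n"
    elif "\n" in text:
        ending = "\n"
    else:
        ending = "\r"
    lines = text.split(ending)
    headers = lines[0].split(delimiter)   # Headers
    rows = lines[1:]                      # Data_Rows (skip the header row)
    objects = [dict(zip(headers, row.split(delimiter))) for row in rows]  # Select_CSV_Body
    # Filter_Empty_Rows: drop rows whose first column is missing/empty
    return [obj for obj in objects if obj.get(headers[0])]

sample = "Name,Age,City\r\nAlice,30,Sydney\r\nBob,25,Taipei\r\n"
print(csv_to_objects(sample))
# [{'Name': 'Alice', 'Age': '30', 'City': 'Sydney'}, {'Name': 'Bob', 'Age': '25', 'City': 'Taipei'}]
```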
---
### ConvertTimeZone (Built-in, No Connector)
Converts a timestamp between timezones with no API call or connector licence cost.
Format string `"g"` produces short locale date+time (`M/d/yyyy h:mm tt`).
```json
"Convert_to_Local_Time": {
"type": "Expression",
"kind": "ConvertTimeZone",
"runAfter": {},
"inputs": {
"baseTime": "@{outputs('UTC_Timestamp')}",
"sourceTimeZone": "UTC",
"destinationTimeZone": "Taipei Standard Time",
"formatString": "g"
}
}
```
Result reference: `@body('Convert_to_Local_Time')`, **not** `outputs()`, unlike most actions.
Common `formatString` values: `"g"` (short), `"f"` (full), `"yyyy-MM-dd"`, `"HH:mm"`
Common timezone strings: `"UTC"`, `"AUS Eastern Standard Time"`, `"Taipei Standard Time"`,
`"Singapore Standard Time"`, `"GMT Standard Time"`
> This is `type: Expression, kind: ConvertTimeZone` — a built-in Logic Apps action,
> not a connector. No connection reference needed. Reference the output via
> `body()` (not `outputs()`), otherwise the expression returns null.
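To check the conversion locally, Python's `zoneinfo` gives the same result — assuming the standard CLDR mapping of the Windows ID `"Taipei Standard Time"` to the IANA zone `"Asia/Taipei"`:

```python
# Verify what ConvertTimeZone should produce for a given UTC timestamp.
# Windows timezone IDs map to IANA names via the CLDR windowsZones table:
#   "Taipei Standard Time" → "Asia/Taipei" (UTC+8, no DST)
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

utc_ts = datetime(2026, 1, 1, 8, 0, tzinfo=timezone.utc)
local = utc_ts.astimezone(ZoneInfo("Asia/Taipei"))
print(local.strftime("%Y-%m-%d %H:%M"))  # 2026-01-01 16:00
```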
# Common Build Patterns
Complete flow definition templates ready to copy and customize.
---
## Pattern: Recurrence + SharePoint list read + Teams notification
```json
{
"triggers": {
"Recurrence": {
"type": "Recurrence",
"recurrence": { "frequency": "Day", "interval": 1,
"startTime": "2026-01-01T08:00:00Z",
"timeZone": "AUS Eastern Standard Time" }
}
},
"actions": {
"Get_SP_Items": {
"type": "OpenApiConnection",
"runAfter": {},
"inputs": {
"host": {
"apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
"connectionName": "shared_sharepointonline",
"operationId": "GetItems"
},
"parameters": {
"dataset": "https://mytenant.sharepoint.com/sites/mysite",
"table": "MyList",
"$filter": "Status eq 'Active'",
"$top": 500
}
}
},
"Apply_To_Each": {
"type": "Foreach",
"runAfter": { "Get_SP_Items": ["Succeeded"] },
"foreach": "@outputs('Get_SP_Items')?['body/value']",
"actions": {
"Post_Teams_Message": {
"type": "OpenApiConnection",
"runAfter": {},
"inputs": {
"host": {
"apiId": "/providers/Microsoft.PowerApps/apis/shared_teams",
"connectionName": "shared_teams",
"operationId": "PostMessageToConversation"
},
"parameters": {
"poster": "Flow bot",
"location": "Channel",
"body/recipient": {
"groupId": "<team-id>",
"channelId": "<channel-id>"
},
"body/messageBody": "Item: @{items('Apply_To_Each')?['Title']}"
}
}
}
},
"operationOptions": "Sequential"
}
}
}
```
---
## Pattern: HTTP trigger (webhook / Power App call)
```json
{
"triggers": {
"manual": {
"type": "Request",
"kind": "Http",
"inputs": {
"schema": {
"type": "object",
"properties": {
"name": { "type": "string" },
"value": { "type": "number" }
}
}
}
}
},
"actions": {
"Compose_Response": {
"type": "Compose",
"runAfter": {},
"inputs": "Received: @{triggerBody()?['name']} = @{triggerBody()?['value']}"
},
"Response": {
"type": "Response",
"runAfter": { "Compose_Response": ["Succeeded"] },
"inputs": {
"statusCode": 200,
"body": { "status": "ok", "message": "@{outputs('Compose_Response')}" }
}
}
}
}
```
Access body values: `@triggerBody()?['name']`
# FlowStudio MCP — Flow Definition Schema
The full JSON structure expected by `update_live_flow` (and returned by `get_live_flow`).
---
## Top-Level Shape
```json
{
"$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"$connections": {
"defaultValue": {},
"type": "Object"
}
},
"triggers": {
"<TriggerName>": { ... }
},
"actions": {
"<ActionName>": { ... }
},
"outputs": {}
}
```
---
## `triggers`
Exactly one trigger per flow definition. The key name is arbitrary but
conventional names are used (e.g. `Recurrence`, `manual`, `When_a_new_email_arrives`).
See [trigger-types.md](trigger-types.md) for all trigger templates.
---
## `actions`
Dictionary of action definitions keyed by unique action name.
Key names may not contain spaces — use underscores.
Each action must include:
- `type` — action type identifier
- `runAfter` — map of upstream action names → status conditions array
- `inputs` — action-specific input configuration
See [action-patterns-core.md](action-patterns-core.md), [action-patterns-data.md](action-patterns-data.md),
and [action-patterns-connectors.md](action-patterns-connectors.md) for templates.
### Optional Action Properties
Beyond the required `type`, `runAfter`, and `inputs`, actions can include:
| Property | Purpose |
|---|---|
| `runtimeConfiguration` | Pagination, concurrency, secure data, chunked transfer |
| `operationOptions` | `"Sequential"` for Foreach, `"DisableAsyncPattern"` for HTTP |
| `limit` | Timeout override (e.g. `{"timeout": "PT2H"}`) |
#### `runtimeConfiguration` Variants
**Pagination** (SharePoint Get Items with large lists):
```json
"runtimeConfiguration": {
"paginationPolicy": {
"minimumItemCount": 5000
}
}
```
> Without this, Get Items silently caps at 256 results. Set `minimumItemCount`
> to the maximum rows you expect. Required for any SharePoint list over 256 items.
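Retrofitting pagination onto an existing flow is a plain dictionary edit on the definition returned by `get_live_flow`, made before sending it back with `update_live_flow` — sketched here on a hand-built stand-in definition:

```python
# Sketch: add a paginationPolicy to an existing Get Items action in a
# definition dict (the same shape get_live_flow returns under
# properties.definition) before resubmitting it via update_live_flow.
definition = {
    "actions": {
        "Get_SP_Items": {
            "type": "OpenApiConnection",
            "runAfter": {},
            "inputs": {"parameters": {"$top": 5000}},
        }
    }
}

action = definition["actions"]["Get_SP_Items"]
action.setdefault("runtimeConfiguration", {})["paginationPolicy"] = {
    "minimumItemCount": 5000
}
print(action["runtimeConfiguration"])
# {'paginationPolicy': {'minimumItemCount': 5000}}
```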
**Concurrency** (parallel Foreach):
```json
"runtimeConfiguration": {
"concurrency": {
"repetitions": 20
}
}
```
**Secure inputs/outputs** (mask values in run history):
```json
"runtimeConfiguration": {
"secureData": {
"properties": ["inputs", "outputs"]
}
}
```
> Use on actions that handle credentials, tokens, or PII. Masked values show
> as `"<redacted>"` in the flow run history UI and API responses.
**Chunked transfer** (large HTTP payloads):
```json
"runtimeConfiguration": {
"contentTransfer": {
"transferMode": "Chunked"
}
}
```
> Enable on HTTP actions sending or receiving bodies >100 KB (e.g. parent→child
> flow calls with large arrays).
---
## `runAfter` Rules
The first action in a branch has `"runAfter": {}` (empty — runs after trigger).
Subsequent actions declare their dependency:
```json
"My_Action": {
"runAfter": {
"Previous_Action": ["Succeeded"]
}
}
```
Multiple upstream dependencies:
```json
"runAfter": {
"Action_A": ["Succeeded"],
"Action_B": ["Succeeded", "Skipped"]
}
```
Error-handling action (runs when upstream failed):
```json
"Log_Error": {
"runAfter": {
"Risky_Action": ["Failed"]
}
}
```
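These rules lend themselves to a quick pre-deploy check (a local sketch, not a FlowStudio tool): every `runAfter` key must name an action defined at the same nesting level.

```python
# Sketch: verify every runAfter entry references an action defined as a sibling.
def check_run_after(actions: dict) -> list[str]:
    problems = []
    for name, action in actions.items():
        for upstream in action.get("runAfter", {}):
            if upstream not in actions:
                problems.append(f"{name} runs after unknown action '{upstream}'")
        # Recurse into Scope/Foreach-style nested action dictionaries
        if "actions" in action:
            problems += check_run_after(action["actions"])
    return problems

actions = {
    "Step_One": {"type": "Compose", "runAfter": {}, "inputs": "a"},
    "Step_Two": {"type": "Compose", "runAfter": {"Step_On": ["Succeeded"]}, "inputs": "b"},
}
print(check_run_after(actions))  # ["Step_Two runs after unknown action 'Step_On'"]
```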
---
## `parameters` (Flow-Level Input Parameters)
Optional. Define reusable values at the flow level:
```json
"parameters": {
"listName": {
"type": "string",
"defaultValue": "MyList"
},
"maxItems": {
"type": "integer",
"defaultValue": 100
}
}
```
Reference: `@parameters('listName')` in expression strings.
---
## `outputs`
Rarely used in cloud flows. Leave as `{}` unless the flow is called
as a child flow and needs to return values.
For child flows that return data:
```json
"outputs": {
"resultData": {
"type": "object",
"value": "@outputs('Compose_Result')"
}
}
```
---
## Scoped Actions (Inside Scope Block)
Actions that need to be grouped for error handling or clarity:
```json
"Scope_Main_Process": {
"type": "Scope",
"runAfter": {},
"actions": {
"Step_One": { ... },
"Step_Two": { "runAfter": { "Step_One": ["Succeeded"] }, ... }
}
}
```
---
## Full Minimal Example
```json
{
"$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
"contentVersion": "1.0.0.0",
"triggers": {
"Recurrence": {
"type": "Recurrence",
"recurrence": {
"frequency": "Week",
"interval": 1,
"schedule": { "weekDays": ["Monday"] },
"startTime": "2026-01-05T09:00:00Z",
"timeZone": "AUS Eastern Standard Time"
}
}
},
"actions": {
"Compose_Greeting": {
"type": "Compose",
"runAfter": {},
"inputs": "Good Monday!"
}
},
"outputs": {}
}
```
# FlowStudio MCP — Trigger Types
Copy-paste trigger definitions for Power Automate flow definitions.
---
## Recurrence
Run on a schedule.
```json
"Recurrence": {
"type": "Recurrence",
"recurrence": {
"frequency": "Day",
"interval": 1,
"startTime": "2026-01-01T08:00:00Z",
"timeZone": "AUS Eastern Standard Time"
}
}
```
Weekly on specific days:
```json
"Recurrence": {
"type": "Recurrence",
"recurrence": {
"frequency": "Week",
"interval": 1,
"schedule": {
"weekDays": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
},
"startTime": "2026-01-05T09:00:00Z",
"timeZone": "AUS Eastern Standard Time"
}
}
```
Common `timeZone` values:
- `"AUS Eastern Standard Time"` — Sydney/Melbourne (UTC+10/+11)
- `"UTC"` — Universal time
- `"E. Australia Standard Time"` — Brisbane (UTC+10 no DST)
- `"New Zealand Standard Time"` — Auckland (UTC+12/+13)
- `"Pacific Standard Time"` — Los Angeles (UTC-8/-7)
- `"GMT Standard Time"` — London (UTC+0/+1)
---
## Manual (HTTP Request / Power Apps)
Receive an HTTP POST with a JSON body.
```json
"manual": {
"type": "Request",
"kind": "Http",
"inputs": {
"schema": {
"type": "object",
"properties": {
"name": { "type": "string" },
"value": { "type": "integer" }
},
"required": ["name"]
}
}
}
```
Access values: `@triggerBody()?['name']`
Trigger URL available after saving: `@listCallbackUrl()`
#### No-Schema Variant (Accept Arbitrary JSON)
When the incoming payload structure is unknown or varies, omit the schema
to accept any valid JSON body without validation:
```json
"manual": {
"type": "Request",
"kind": "Http",
"inputs": {
"schema": {}
}
}
```
Access any field dynamically: `@triggerBody()?['anyField']`
> Use this for external webhooks (Stripe, GitHub, Employment Hero, etc.) where the
> payload shape may change or is not fully documented. The flow accepts any
> JSON without returning 400 for unexpected properties.
---
## Automated (SharePoint Item Created)
```json
"When_an_item_is_created": {
"type": "OpenApiConnectionNotification",
"inputs": {
"host": {
"apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
"connectionName": "<connectionName>",
"operationId": "OnNewItem"
},
"parameters": {
"dataset": "https://mytenant.sharepoint.com/sites/mysite",
"table": "MyList"
},
"subscribe": {
"body": { "notificationUrl": "@listCallbackUrl()" },
"queries": {
"dataset": "https://mytenant.sharepoint.com/sites/mysite",
"table": "MyList"
}
}
}
}
```
Access trigger data: `@triggerBody()?['ID']`, `@triggerBody()?['Title']`, etc.
---
## Automated (SharePoint Item Modified)
```json
"When_an_existing_item_is_modified": {
"type": "OpenApiConnectionNotification",
"inputs": {
"host": {
"apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
"connectionName": "<connectionName>",
"operationId": "OnUpdatedItem"
},
"parameters": {
"dataset": "https://mytenant.sharepoint.com/sites/mysite",
"table": "MyList"
},
"subscribe": {
"body": { "notificationUrl": "@listCallbackUrl()" },
"queries": {
"dataset": "https://mytenant.sharepoint.com/sites/mysite",
"table": "MyList"
}
}
}
}
```
---
## Automated (Outlook: When New Email Arrives)
```json
"When_a_new_email_arrives": {
"type": "OpenApiConnectionNotification",
"inputs": {
"host": {
"apiId": "/providers/Microsoft.PowerApps/apis/shared_office365",
"connectionName": "<connectionName>",
"operationId": "OnNewEmail"
},
"parameters": {
"folderId": "Inbox",
"to": "monitored@contoso.com",
"isHTML": true
},
"subscribe": {
"body": { "notificationUrl": "@listCallbackUrl()" }
}
}
}
```
---
## Child Flow (Called by Another Flow)
```json
"manual": {
"type": "Request",
"kind": "Button",
"inputs": {
"schema": {
"type": "object",
"properties": {
"items": {
"type": "array",
"items": { "type": "object" }
}
}
}
}
}
```
Access parent-supplied data: `@triggerBody()?['items']`
To return data to the parent, add a `Response` action:
```json
"Respond_to_Parent": {
"type": "Response",
"runAfter": { "Compose_Result": ["Succeeded"] },
"inputs": {
"statusCode": 200,
"body": "@outputs('Compose_Result')"
}
}
```
---
name: flowstudio-power-automate-debug
description: >-
Debug failing Power Automate cloud flows using the FlowStudio MCP server.
The Graph API only shows top-level status codes. This skill gives your agent
action-level inputs and outputs to find the actual root cause.
Load this skill when asked to: debug a flow, investigate a failed run, why is
this flow failing, inspect action outputs, find the root cause of a flow error,
fix a broken Power Automate flow, diagnose a timeout, trace a DynamicOperationRequestFailure,
check connector auth errors, read error details from a run, or troubleshoot
expression failures. Requires a FlowStudio MCP subscription — see https://mcp.flowstudio.app
metadata:
openclaw:
requires:
env:
- FLOWSTUDIO_MCP_TOKEN
primaryEnv: FLOWSTUDIO_MCP_TOKEN
homepage: https://mcp.flowstudio.app
---
# Power Automate Debugging with FlowStudio MCP
A step-by-step diagnostic process for investigating failing Power Automate
cloud flows through the FlowStudio MCP server.
> **Real debugging examples**: [Expression error in child flow](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/fix-expression-error.md) |
> [Data entry, not a flow bug](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/data-not-flow.md) |
> [Null value crashes child flow](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/null-child-flow.md)
**Prerequisite**: A FlowStudio MCP server must be reachable with a valid JWT.
See the `flowstudio-power-automate-mcp` skill for connection setup.
Subscribe at https://mcp.flowstudio.app
---
## Source of Truth
> **Always call `tools/list` first** to confirm available tool names and their
> parameter schemas. Tool names and parameters may change between server versions.
> This skill covers response shapes, behavioral notes, and diagnostic patterns —
> things `tools/list` cannot tell you. If this document disagrees with `tools/list`
> or a real API response, the API wins.
---
## Python Helper
```python
import json, urllib.request
MCP_URL = "https://mcp.flowstudio.app/mcp"
MCP_TOKEN = "<YOUR_JWT_TOKEN>"
def mcp(tool, **kwargs):
payload = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
"params": {"name": tool, "arguments": kwargs}}).encode()
req = urllib.request.Request(MCP_URL, data=payload,
headers={"x-api-key": MCP_TOKEN, "Content-Type": "application/json",
"User-Agent": "FlowStudio-MCP/1.0"})
try:
resp = urllib.request.urlopen(req, timeout=120)
except urllib.error.HTTPError as e:
body = e.read().decode("utf-8", errors="replace")
raise RuntimeError(f"MCP HTTP {e.code}: {body[:200]}") from e
raw = json.loads(resp.read())
if "error" in raw:
raise RuntimeError(f"MCP error: {json.dumps(raw['error'])}")
return json.loads(raw["result"]["content"][0]["text"])
ENV = "<environment-id>" # e.g. Default-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```
---
## Step 1 — Locate the Flow
```python
result = mcp("list_live_flows", environmentName=ENV)
# Returns a wrapper object: {mode, flows, totalCount, error}
target = next(f for f in result["flows"] if "My Flow Name" in f["displayName"])
FLOW_ID = target["id"] # plain UUID — use directly as flowName
print(FLOW_ID)
```
---
## Step 2 — Find the Failing Run
```python
runs = mcp("get_live_flow_runs", environmentName=ENV, flowName=FLOW_ID, top=5)
# Returns direct array (newest first):
# [{"name": "08584296068667933411438594643CU15",
# "status": "Failed",
# "startTime": "2026-02-25T06:13:38.6910688Z",
# "endTime": "2026-02-25T06:15:24.1995008Z",
# "triggerName": "manual",
# "error": {"code": "ActionFailed", "message": "An action failed..."}},
# {"name": "...", "status": "Succeeded", "error": null, ...}]
for r in runs:
print(r["name"], r["status"], r["startTime"])
RUN_ID = next(r["name"] for r in runs if r["status"] == "Failed")
```
---
## Step 3 — Get the Top-Level Error
> **CRITICAL**: `get_live_flow_run_error` tells you **which** action failed.
> `get_live_flow_run_action_outputs` tells you **why**. You must call BOTH.
> Never stop at the error alone — error codes like `ActionFailed`,
> `NotSpecified`, and `InternalServerError` are generic wrappers. The actual
> root cause (wrong field, null value, HTTP 500 body, stack trace) is only
> visible in the action's inputs and outputs.
```python
err = mcp("get_live_flow_run_error",
environmentName=ENV, flowName=FLOW_ID, runName=RUN_ID)
# Returns:
# {
# "runName": "08584296068667933411438594643CU15",
# "failedActions": [
# {"actionName": "Apply_to_each_prepare_workers", "status": "Failed",
# "error": {"code": "ActionFailed", "message": "An action failed..."},
# "startTime": "...", "endTime": "..."},
# {"actionName": "HTTP_find_AD_User_by_Name", "status": "Failed",
# "code": "NotSpecified", "startTime": "...", "endTime": "..."}
# ],
# "allActions": [
# {"actionName": "Apply_to_each", "status": "Skipped"},
# {"actionName": "Compose_WeekEnd", "status": "Succeeded"},
# ...
# ]
# }
# failedActions is ordered outer-to-inner. The ROOT cause is the LAST entry:
root = err["failedActions"][-1]
print(f"Root action: {root['actionName']} → code: {root.get('code')}")
# allActions shows every action's status — useful for spotting what was Skipped
# See common-errors.md to decode the error code.
```
---
## Step 4 — Inspect the Failing Action's Inputs and Outputs
> **This is the most important step.** `get_live_flow_run_error` only gives
> you a generic error code. The actual error detail — HTTP status codes,
> response bodies, stack traces, null values — lives in the action's runtime
> inputs and outputs. **Always inspect the failing action immediately after
> identifying it.**
```python
# Get the root failing action's full inputs and outputs
root_action = err["failedActions"][-1]["actionName"]
detail = mcp("get_live_flow_run_action_outputs",
environmentName=ENV,
flowName=FLOW_ID,
runName=RUN_ID,
actionName=root_action)
out = detail[0] if detail else {}
print(f"Action: {out.get('actionName')}")
print(f"Status: {out.get('status')}")
# For HTTP actions, the real error is in outputs.body
if isinstance(out.get("outputs"), dict):
status_code = out["outputs"].get("statusCode")
body = out["outputs"].get("body", {})
print(f"HTTP {status_code}")
print(json.dumps(body, indent=2)[:500])
# Error bodies are often nested JSON strings — parse them
if isinstance(body, dict) and "error" in body:
err_detail = body["error"]
if isinstance(err_detail, str):
err_detail = json.loads(err_detail)
print(f"Error: {err_detail.get('message', err_detail)}")
# For expression errors, the error is in the error field
if out.get("error"):
print(f"Error: {out['error']}")
# Also check inputs — they show what expression/URL/body was used
if out.get("inputs"):
print(f"Inputs: {json.dumps(out['inputs'], indent=2)[:500]}")
```
### What the action outputs reveal (that error codes don't)
| Error code from `get_live_flow_run_error` | What `get_live_flow_run_action_outputs` reveals |
|---|---|
| `ActionFailed` | Which nested action actually failed and its HTTP response |
| `NotSpecified` | The HTTP status code + response body with the real error |
| `InternalServerError` | The server's error message, stack trace, or API error JSON |
| `InvalidTemplate` | The exact expression that failed and the null/wrong-type value |
| `BadRequest` | The request body that was sent and why the server rejected it |
### Example: HTTP action returning 500
```
Error code: "InternalServerError" ← this tells you nothing
Action outputs reveal:
HTTP 500
body: {"error": "Cannot read properties of undefined (reading 'toLowerCase')
at getClientParamsFromConnectionString (storage.js:20)"}
← THIS tells you the Azure Function crashed because a connection string is undefined
```
### Example: Expression error on null
```
Error code: "BadRequest" ← generic
Action outputs reveal:
inputs: "body('HTTP_GetTokenFromStore')?['token']?['access_token']"
outputs: "" ← empty string, the path resolved to null
← THIS tells you the response shape changed — token is at body.access_token, not body.token.access_token
```
---
## Step 5 — Read the Flow Definition
```python
defn = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)
actions = defn["properties"]["definition"]["actions"]
print(list(actions.keys()))
```
Find the failing action in the definition. Inspect its `inputs` expression
to understand what data it expects.
---
## Step 6 — Walk Back from the Failure
When the failing action's inputs reference upstream actions, inspect those
too. Walk backward through the chain until you find the source of the
bad data:
```python
# Inspect multiple actions leading up to the failure
for action_name in [root_action, "Compose_WeekEnd", "HTTP_Get_Data"]:
result = mcp("get_live_flow_run_action_outputs",
environmentName=ENV,
flowName=FLOW_ID,
runName=RUN_ID,
actionName=action_name)
out = result[0] if result else {}
print(f"\n--- {action_name} ({out.get('status')}) ---")
print(f"Inputs: {json.dumps(out.get('inputs', ''), indent=2)[:300]}")
print(f"Outputs: {json.dumps(out.get('outputs', ''), indent=2)[:300]}")
```
> ⚠️ Output payloads from array-processing actions can be very large.
> Always slice (e.g. `[:500]`) before printing.
> **Tip**: Omit `actionName` to get ALL actions in a single call.
> This returns every action's inputs/outputs — useful when you're not sure
> which upstream action produced the bad data. But use 120s+ timeout as
> the response can be very large.
---
## Step 7 — Pinpoint the Root Cause
### Expression Errors (e.g. `split` on null)
If the error mentions `InvalidTemplate` or a function name:
1. Find the action in the definition
2. Check what upstream action/expression it reads
3. **Inspect that upstream action's output** for null / missing fields
```python
# Example: action uses split(item()?['Name'], ' ')
# → null Name in the source data
result = mcp("get_live_flow_run_action_outputs", ..., actionName="Compose_Names")
if not result:
print("No outputs returned for Compose_Names")
names = []
else:
names = result[0].get("outputs", {}).get("body") or []
nulls = [x for x in names if x.get("Name") is None]
print(f"{len(nulls)} records with null Name")
```
### Wrong Field Path
Expression `triggerBody()?['fieldName']` returns null → `fieldName` is wrong.
**Inspect the trigger output** to see the actual field names:
```python
result = mcp("get_live_flow_run_action_outputs", ..., actionName="<trigger-action-name>")
print(json.dumps(result[0].get("outputs"), indent=2)[:500])
```
### HTTP Actions Returning Errors
The error code says `InternalServerError` or `NotSpecified` — **always inspect
the action outputs** to get the actual HTTP status and response body:
```python
result = mcp("get_live_flow_run_action_outputs", ..., actionName="HTTP_Get_Data")
out = result[0]
print(f"HTTP {out['outputs']['statusCode']}")
print(json.dumps(out['outputs']['body'], indent=2)[:500])
```
### Connection / Auth Failures
Look for `ConnectionAuthorizationFailed` — the connection owner must match the
service account running the flow. Cannot fix via API; fix in PA designer.
---
## Step 8 — Apply the Fix
**For expression/data issues**:
```python
defn = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)
acts = defn["properties"]["definition"]["actions"]
# Example: fix split on potentially-null Name
acts["Compose_Names"]["inputs"] = \
"@coalesce(item()?['Name'], 'Unknown')"
conn_refs = defn["properties"]["connectionReferences"]
result = mcp("update_live_flow",
environmentName=ENV,
flowName=FLOW_ID,
definition=defn["properties"]["definition"],
connectionReferences=conn_refs)
print(result.get("error")) # None = success
```
> ⚠️ `update_live_flow` always returns an `error` key.
> A value of `null` (Python `None`) means success.
---
## Step 9 — Verify the Fix
> **Use `resubmit_live_flow_run` to test ANY flow — not just HTTP triggers.**
> `resubmit_live_flow_run` replays a previous run using its original trigger
> payload. This works for **every trigger type**: Recurrence, SharePoint
> "When an item is created", connector webhooks, Button triggers, and HTTP
> triggers. You do NOT need to ask the user to manually trigger the flow or
> wait for the next scheduled run.
>
> The only case where `resubmit` is not available is a **brand-new flow that
> has never run** — it has no prior run to replay.
```python
# Resubmit the failed run — works for ANY trigger type
resubmit = mcp("resubmit_live_flow_run",
environmentName=ENV, flowName=FLOW_ID, runName=RUN_ID)
print(resubmit) # {"resubmitted": true, "triggerName": "..."}
# Wait ~30 s then check
import time; time.sleep(30)
new_runs = mcp("get_live_flow_runs", environmentName=ENV, flowName=FLOW_ID, top=3)
print(new_runs[0]["status"]) # Succeeded = done
```
### When to use resubmit vs trigger
| Scenario | Use | Why |
|---|---|---|
| **Testing a fix** on any flow | `resubmit_live_flow_run` | Replays the exact trigger payload that caused the failure — best way to verify |
| Recurrence / scheduled flow | `resubmit_live_flow_run` | Cannot be triggered on demand any other way |
| SharePoint / connector trigger | `resubmit_live_flow_run` | Cannot be triggered without creating a real SP item |
| HTTP trigger with **custom** test payload | `trigger_live_flow` | When you need to send different data than the original run |
| Brand-new flow, never run | `trigger_live_flow` (HTTP only) | No prior run exists to resubmit |
### Testing HTTP-Triggered Flows with custom payloads
For flows with a `Request` (HTTP) trigger, use `trigger_live_flow` when you
need to send a **different** payload than the original run:
```python
# First inspect what the trigger expects
schema = mcp("get_live_flow_http_schema",
environmentName=ENV, flowName=FLOW_ID)
print("Expected body schema:", schema.get("requestSchema"))
print("Response schemas:", schema.get("responseSchemas"))
# Trigger with a test payload
result = mcp("trigger_live_flow",
environmentName=ENV,
flowName=FLOW_ID,
body={"name": "Test User", "value": 42})
print(f"Status: {result['responseStatus']}, Body: {result.get('responseBody')}")
```
> `trigger_live_flow` handles AAD-authenticated triggers automatically.
> Only works for flows with a `Request` (HTTP) trigger type.
---
## Quick-Reference Diagnostic Decision Tree
| Symptom | First Tool | Then ALWAYS Call | What to Look For |
|---|---|---|---|
| Flow shows as Failed | `get_live_flow_run_error` | `get_live_flow_run_action_outputs` on the failing action | HTTP status + response body in `outputs` |
| Error code is generic (`ActionFailed`, `NotSpecified`) | — | `get_live_flow_run_action_outputs` | The `outputs.body` contains the real error message, stack trace, or API error |
| HTTP action returns 500 | — | `get_live_flow_run_action_outputs` | `outputs.statusCode` + `outputs.body` with server error detail |
| Expression crash | — | `get_live_flow_run_action_outputs` on prior action | null / wrong-type fields in output body |
| Flow never starts | `get_live_flow` | — | check `properties.state` = "Started" |
| Action returns wrong data | `get_live_flow_run_action_outputs` | — | actual output body vs expected |
| Fix applied but still fails | `get_live_flow_runs` after resubmit | — | new run `status` field |
> **Rule: never diagnose from error codes alone.** `get_live_flow_run_error`
> identifies the failing action. `get_live_flow_run_action_outputs` reveals
> the actual cause. Always call both.
---
## Reference Files
- [common-errors.md](references/common-errors.md) — Error codes, likely causes, and fixes
- [debug-workflow.md](references/debug-workflow.md) — Full decision tree for complex failures
## Related Skills
- `flowstudio-power-automate-mcp` — Core connection setup and operation reference
- `flowstudio-power-automate-build` — Build and deploy new flows


@@ -0,0 +1,188 @@
# FlowStudio MCP — Common Power Automate Errors
Reference for error codes, likely causes, and recommended fixes when debugging
Power Automate flows via the FlowStudio MCP server.
---
## Expression / Template Errors
### `InvalidTemplate` — Function Applied to Null
**Full message pattern**: `"Unable to process template language expressions... function 'split' expects its first argument 'text' to be of type string"`
**Root cause**: An expression like `@split(item()?['Name'], ' ')` received a null value.
**Diagnosis**:
1. Note the action name in the error message
2. Call `get_live_flow_run_action_outputs` on the action that produces the array
3. Find items where `Name` (or the referenced field) is `null`
**Fixes**:
```
Before: @split(item()?['Name'], ' ')
After: @split(coalesce(item()?['Name'], ''), ' ')
Or guard the whole foreach body with a condition:
expression: "@not(empty(item()?['Name']))"
```
---
### `InvalidTemplate` — Wrong Expression Path
**Full message pattern**: `"Unable to process template language expressions... 'triggerBody()?['FieldName']' is of type 'Null'"`
**Root cause**: The field name in the expression doesn't match the actual payload schema.
**Diagnosis**:
```python
# Check trigger output shape
mcp("get_live_flow_run_action_outputs",
environmentName=ENV, flowName=FLOW_ID, runName=RUN_ID,
actionName="<trigger-name>")
# Compare actual keys vs expression
```
**Fix**: Update expression to use the correct key name. Common mismatches:
- `triggerBody()?['body']` vs `triggerBody()?['Body']` (case-sensitive)
- `triggerBody()?['Subject']` vs `triggerOutputs()?['body/Subject']`
---
### `InvalidTemplate` — Type Mismatch
**Full message pattern**: `"... expected type 'Array' but got type 'Object'"`
**Root cause**: Passing an object where the expression expects an array (e.g. a single-item HTTP response vs. a list response).
**Fix**:
```
Before: @outputs('HTTP')?['body']
After: @outputs('HTTP')?['body/value'] ← for OData list responses
@createArray(outputs('HTTP')?['body']) ← wrap single object in array
```
---
## Connection / Auth Errors
### `ConnectionAuthorizationFailed`
**Full message**: `"The API connection ... is not authorized."`
**Root cause**: The connection referenced in the flow is owned by a different
user/service account than the one whose JWT is being used.
**Diagnosis**: Check `properties.connectionReferences` — the `connectionName` GUID
identifies the owner. This cannot be fixed via the API.
**Fix options**:
1. Open flow in Power Automate designer → re-authenticate the connection
2. Use a connection owned by the service account whose token you hold
3. Share the connection with the service account in PA admin
---
### `InvalidConnectionCredentials`
**Root cause**: The underlying OAuth token for the connection has expired or
the user's credentials changed.
**Fix**: Owner must sign in to Power Automate and refresh the connection.
---
## HTTP Action Errors
### `ActionFailed` — HTTP 4xx/5xx
**Full message pattern**: `"An HTTP request to... failed with status code '400'"`
**Diagnosis**:
```python
actions_out = mcp("get_live_flow_run_action_outputs", ..., actionName="HTTP_My_Call")
item = actions_out[0] # first entry in the returned array
print(item["outputs"]["statusCode"]) # 400, 401, 403, 500...
print(item["outputs"]["body"]) # error details from target API
```
**Common causes**:
- 401 — missing or expired auth header
- 403 — permission denied on target resource
- 404 — wrong URL / resource deleted
- 400 — malformed JSON body (check expression that builds the body)
---
### `ActionFailed` — HTTP Timeout
**Root cause**: Target endpoint did not respond within the connector's timeout
(default 90 s for HTTP action).
**Fix**: Add retry policy to the HTTP action, or split the payload into smaller
batches to reduce per-request processing time.
---
## Control Flow Errors
### `ActionSkipped` Instead of Running
**Root cause**: The `runAfter` condition wasn't met. E.g. an action set to
`runAfter: { "Prev": ["Succeeded"] }` won't run if `Prev` failed or was skipped.
**Diagnosis**: Check the preceding action's status. An action deliberately skipped
(e.g. inside a false branch) is working as designed; an unexpected skip indicates a logic gap.
**Fix**: Add `"Failed"` or `"Skipped"` to the `runAfter` status array if the
action should run on those outcomes too.
---
### Foreach Runs in Wrong Order / Race Condition
**Root cause**: `Foreach` without `operationOptions: "Sequential"` runs
iterations in parallel, causing write conflicts or undefined ordering.
**Fix**: Add `"operationOptions": "Sequential"` to the Foreach action.
---
## Update / Deploy Errors
### `update_live_flow` Returns No-Op
**Symptom**: `result["updated"]` is an empty list, or `result["created"]` is empty.
**Likely cause**: Passing the wrong parameter name. The required key is `definition`
(an object), not `flowDefinition` or `body`.
---
### `update_live_flow` — `"Supply connectionReferences"`
**Root cause**: The definition contains `OpenApiConnection` or
`OpenApiConnectionWebhook` actions but `connectionReferences` was not passed.
**Fix**: Fetch the existing connection references with `get_live_flow` and pass
them as the `connectionReferences` argument.
---
## Data Logic Errors
### `union()` Overriding Correct Records with Nulls
**Symptom**: After merging two arrays, some records have null fields that existed
in one of the source arrays.
**Root cause**: `union(old_data, new_data)` keeps the first occurrence on conflicts,
so `old_data` values override `new_data` for matching records.
**Fix**: Swap argument order: `union(new_data, old_data)`
```
Before: @sort(union(outputs('Old_Array'), body('New_Array')), 'Date')
After: @sort(union(body('New_Array'), outputs('Old_Array')), 'Date')
```


@@ -0,0 +1,157 @@
# FlowStudio MCP — Debug Workflow
End-to-end decision tree for diagnosing Power Automate flow failures.
---
## Top-Level Decision Tree
```
Flow is failing
├── Flow never starts / no runs appear
│ └── ► Check flow State: get_live_flow → properties.state
│ ├── "Stopped" → flow is disabled; enable in PA designer
│ └── "Started" + no runs → trigger condition not met (check trigger config)
├── Flow run shows "Failed"
│ ├── Step A: get_live_flow_run_error → read error.code + error.message
│ │
│ ├── error.code = "InvalidTemplate"
│ │ └── ► Expression error (null value, wrong type, bad path)
│ │ └── See: Expression Error Workflow below
│ │
│ ├── error.code = "ConnectionAuthorizationFailed"
│ │ └── ► Connection owned by different user; fix in PA designer
│ │
│ ├── error.code = "ActionFailed" + message mentions HTTP
│ │ └── ► See: HTTP Action Workflow below
│ │
│ └── Unknown / generic error
│ └── ► Walk actions backwards (Step B below)
└── Flow Succeeds but output is wrong
└── ► Inspect intermediate actions with get_live_flow_run_action_outputs
└── See: Data Quality Workflow below
```
---
## Expression Error Workflow
```
InvalidTemplate error
├── 1. Read error.message — identifies the action name and function
├── 2. Get flow definition: get_live_flow
│ └── Find that action in definition["actions"][action_name]["inputs"]
│ └── Identify what upstream value the expression reads
├── 3. get_live_flow_run_action_outputs for the action BEFORE the failing one
│ └── Look for null / wrong type in that action's output
│ ├── Null string field → wrap with coalesce(): @coalesce(field, '')
│ ├── Null object → add empty check condition before the action
│ └── Wrong field name → correct the key (case-sensitive)
└── 4. Apply fix with update_live_flow, then resubmit
```
---
## HTTP Action Workflow
```
ActionFailed on HTTP action
├── 1. get_live_flow_run_action_outputs on the HTTP action
│ └── Read: outputs.statusCode, outputs.body
├── statusCode = 401
│ └── ► Auth header missing or expired OAuth token
│ Check: action inputs.authentication block
├── statusCode = 403
│ └── ► Insufficient permission on target resource
│ Check: service principal / user has access
├── statusCode = 400
│ └── ► Malformed request body
│ Check: action inputs.body expression; parse errors often in nested JSON
├── statusCode = 404
│ └── ► Wrong URL or resource deleted/renamed
│ Check: action inputs.uri expression
└── statusCode = 500 / timeout
└── ► Target system error; retry policy may help
Add: "retryPolicy": {"type": "Fixed", "count": 3, "interval": "PT10S"}
```
---
## Data Quality Workflow
```
Flow succeeds but output data is wrong
├── 1. Identify the first "wrong" output — which action produces it?
├── 2. get_live_flow_run_action_outputs on that action
│ └── Compare actual output body vs expected
├── Source array has nulls / unexpected values
│ ├── Check the trigger data — get_live_flow_run_action_outputs on trigger
│ └── Trace forward action by action until the value corrupts
├── Merge/union has wrong values
│ └── Check union argument order:
│ union(NEW, old) = new wins ✓
│ union(OLD, new) = old wins ← common bug
├── Foreach output missing items
│ ├── Check foreach condition — filter may be too strict
│ └── Check if parallel foreach caused race condition (add Sequential)
└── Date/time values wrong timezone
└── Use convertTimeZone() — utcNow() is always UTC
```
---
## Walk-Back Analysis (Unknown Failure)
When the error message doesn't clearly name a root cause:
```python
# 1. Get all action names from definition
defn = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)
actions = list(defn["properties"]["definition"]["actions"].keys())
# 2. Check status of each action in the failed run
for action in actions:
actions_out = mcp("get_live_flow_run_action_outputs",
environmentName=ENV, flowName=FLOW_ID, runName=RUN_ID,
actionName=action)
# Returns an array of action objects
item = actions_out[0] if actions_out else {}
status = item.get("status", "unknown")
print(f"{action}: {status}")
# 3. Find the boundary between Succeeded and Failed/Skipped
# The first Failed action is likely the root cause (unless skipped by design)
```
Actions inside Foreach / Condition branches may appear nested —
check the parent action first to confirm the branch ran at all.
---
## Post-Fix Verification Checklist
1. `update_live_flow` returns `error: null` — definition accepted
2. `resubmit_live_flow_run` confirms new run started
3. Wait for run completion (poll `get_live_flow_runs` every 15 s)
4. Confirm new run `status = "Succeeded"`
5. If flow has downstream consumers (child flows, emails, SharePoint writes),
spot-check those too
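Step 3's polling loop can be sketched as follows; the run-fetching callable is injected so the loop works with whatever client wrapper you use (the names here are illustrative, not part of the MCP API):

```python
import time

def wait_for_run(fetch_runs, run_name: str, timeout_s: int = 300, poll_s: int = 15) -> str:
    """Poll until the run leaves Running/Waiting; return its final status.

    fetch_runs is a callable returning get_live_flow_runs-shaped records
    (a list of dicts with "name" and "status"), injected for testability.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        statuses = {r["name"]: r["status"] for r in fetch_runs()}
        status = statuses.get(run_name, "Unknown")
        if status not in ("Running", "Waiting"):
            return status
        time.sleep(poll_s)
    return "TimedOut"
```

In a live session, `fetch_runs` would wrap the `get_live_flow_runs` MCP call.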


@@ -0,0 +1,504 @@
---
name: flowstudio-power-automate-governance
description: >-
Govern Power Automate flows and Power Apps at scale using the FlowStudio MCP
cached store. Classify flows by business impact, detect orphaned resources,
audit connector usage, enforce compliance standards, manage notification rules,
and compute governance scores — all without Dataverse or the CoE Starter Kit.
Load this skill when asked to: tag or classify flows, set business impact,
assign ownership, detect orphans, audit connectors, check compliance, compute
archive scores, manage notification rules, run a governance review, generate
a compliance report, offboard a maker, or any task that involves writing
governance metadata to flows. Requires a FlowStudio for Teams or MCP Pro+
subscription — see https://mcp.flowstudio.app
metadata:
openclaw:
requires:
env:
- FLOWSTUDIO_MCP_TOKEN
primaryEnv: FLOWSTUDIO_MCP_TOKEN
homepage: https://mcp.flowstudio.app
---
# Power Automate Governance with FlowStudio MCP
Classify, tag, and govern Power Automate flows at scale through the FlowStudio
MCP **cached store** — without Dataverse, without the CoE Starter Kit, and
without the Power Automate portal.
This skill uses `update_store_flow` to write governance metadata and the
monitoring tools (`list_store_flows`, `get_store_flow`, `list_store_makers`,
etc.) to read tenant state. For monitoring and health-check workflows, see
the `flowstudio-power-automate-monitoring` skill.
> **Start every session with `tools/list`** to confirm tool names and parameters.
> This skill covers workflows and patterns — things `tools/list` cannot tell you.
> If this document disagrees with `tools/list` or a real API response, the API wins.
---
## Critical: How to Extract Flow IDs
`list_store_flows` returns `id` in format `<environmentId>.<flowId>`. **You must split
on the first `.`** to get `environmentName` and `flowName` for all other tools:
```
id = "Default-<envGuid>.<flowGuid>"
environmentName = "Default-<envGuid>" (everything before first ".")
flowName = "<flowGuid>" (everything after first ".")
```
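A minimal Python sketch of the split (the sample id value below is illustrative):

```python
def split_store_id(record_id: str) -> tuple:
    """Split a list_store_flows `id` into (environmentName, flowName).

    partition() splits on the FIRST "." only; environment names such as
    "Default-<guid>" contain hyphens but never dots, so this is safe.
    """
    environment_name, sep, flow_name = record_id.partition(".")
    if not sep:
        raise ValueError(f"unexpected store id: {record_id!r}")
    return environment_name, flow_name

env, flow = split_store_id("Default-0a1b2c3d.4e5f6789")
print(env, flow)  # Default-0a1b2c3d 4e5f6789
```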
Also: skip entries that have no `displayName` or have `state=Deleted`; these are
sparse records or flows that no longer exist in Power Automate.
If a deleted flow has `monitor=true`, suggest disabling monitoring
(`update_store_flow` with `monitor=false`) to free up a monitoring slot
(standard plan includes 20).
---
## The Write Tool: `update_store_flow`
`update_store_flow` writes governance metadata to the **Flow Studio cache
only** — it does NOT modify the flow in Power Automate. These fields are
not visible via `get_live_flow` or the PA portal. They exist only in the
Flow Studio store and are used by Flow Studio's scanning pipeline and
notification rules.
This means:
- `ownerTeam` / `supportEmail` — sets who Flow Studio considers the
governance contact. Does NOT change the actual PA flow owner.
- `rule_notify_email` — sets who receives Flow Studio failure/missing-run
notifications. Does NOT change Microsoft's built-in flow failure alerts.
- `monitor` / `critical` / `businessImpact` — Flow Studio classification
only. Power Automate has no equivalent fields.
Merge semantics — only fields you provide are updated. Returns the full
updated record (same shape as `get_store_flow`).
Required parameters: `environmentName`, `flowName`. All other fields optional.
### Settable Fields
| Field | Type | Purpose |
|---|---|---|
| `monitor` | bool | Enable run-level scanning (standard plan: 20 flows included) |
| `rule_notify_onfail` | bool | Send email notification on any failed run |
| `rule_notify_onmissingdays` | number | Send notification when flow hasn't run in N days (0 = disabled) |
| `rule_notify_email` | string | Comma-separated notification recipients |
| `description` | string | What the flow does |
| `tags` | string | Classification tags (also auto-extracted from description `#hashtags`) |
| `businessImpact` | string | Low / Medium / High / Critical |
| `businessJustification` | string | Why the flow exists, what process it automates |
| `businessValue` | string | Business value statement |
| `ownerTeam` | string | Accountable team |
| `ownerBusinessUnit` | string | Business unit |
| `supportGroup` | string | Support escalation group |
| `supportEmail` | string | Support contact email |
| `critical` | bool | Designate as business-critical |
| `tier` | string | Standard or Premium |
| `security` | string | Security classification or notes |
> **Caution with `security`:** The `security` field on `get_store_flow`
> contains structured JSON (e.g. `{"triggerRequestAuthenticationType":"All"}`).
> Writing a plain string like `"reviewed"` will overwrite this. To mark a
> flow as security-reviewed, use `tags` instead.
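
The merge behavior can be illustrated with a local simulation (the `mcp` function below is a toy stand-in, not the real client; only the fields passed change, and the full record comes back):

```python
_STORE = {("Default-env", "flow-1"): {"monitor": True, "tags": "#finance"}}

def mcp(tool, *, environmentName, flowName, **fields):
    """Toy stand-in for the real MCP client call, illustrating merge
    semantics: only the fields provided are updated; the full updated
    record is returned, same shape as get_store_flow."""
    record = _STORE[(environmentName, flowName)]
    record.update(fields)  # merge, not replace
    return dict(record)

result = mcp("update_store_flow",
             environmentName="Default-env", flowName="flow-1",
             businessImpact="High", ownerTeam="Finance Ops")
# monitor and tags were not passed, so they survive the update
```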
---
## Governance Workflows
### 1. Compliance Detail Review
Identify flows missing required governance metadata — the equivalent of
the CoE Starter Kit's Developer Compliance Center.
```
1. Ask the user which compliance fields they require
(or use their organization's existing governance policy)
2. list_store_flows
3. For each flow (skip entries without displayName or state=Deleted):
- Split id → environmentName, flowName
- get_store_flow(environmentName, flowName)
- Check which required fields are missing or empty
4. Report non-compliant flows with missing fields listed
5. For each non-compliant flow:
- Ask the user for values
- update_store_flow(environmentName, flowName, ...provided fields)
```
**Fields available for compliance checks:**
| Field | Example policy |
|---|---|
| `description` | Every flow should be documented |
| `businessImpact` | Classify as Low / Medium / High / Critical |
| `businessJustification` | Required for High/Critical impact flows |
| `ownerTeam` | Every flow should have an accountable team |
| `supportEmail` | Required for production flows |
| `monitor` | Required for critical flows (note: standard plan includes 20 monitored flows) |
| `rule_notify_onfail` | Recommended for monitored flows |
| `critical` | Designate business-critical flows |
> Each organization defines their own compliance rules. The fields above are
> suggestions based on common Power Platform governance patterns (CoE Starter
> Kit). Ask the user what their requirements are before flagging flows as
> non-compliant.
>
> **Tip:** Flows created or updated via MCP already have `description`
> (auto-appended by `update_live_flow`). Flows created manually in the
> Power Automate portal are the ones most likely missing governance metadata.
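
Step 3's field check is a one-liner once the record is fetched (a sketch; the required-field list is an example policy, not a mandate):

```python
REQUIRED_FIELDS = ["description", "businessImpact", "ownerTeam", "supportEmail"]

def missing_governance_fields(record: dict, required=REQUIRED_FIELDS) -> list:
    """Return required fields that are absent or empty on a get_store_flow record."""
    return [field for field in required if not record.get(field)]

record = {"displayName": "Invoice sync", "description": "", "businessImpact": "High"}
print(missing_governance_fields(record))
# ['description', 'ownerTeam', 'supportEmail']
```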
### 2. Orphaned Resource Detection
Find flows owned by deleted or disabled Azure AD accounts.
```
1. list_store_makers
2. Filter where deleted=true AND ownerFlowCount > 0
Note: deleted makers have NO displayName/mail — record their id (AAD OID)
3. list_store_flows → collect all flows
4. For each flow (skip entries without displayName or state=Deleted):
- Split id → environmentName, flowName
- get_store_flow(environmentName, flowName)
- Parse owners: json.loads(record["owners"])
- Check if any owner principalId matches an orphaned maker id
5. Report orphaned flows: maker id, flow name, flow state
6. For each orphaned flow:
- Reassign governance: update_store_flow(environmentName, flowName,
ownerTeam="NewTeam", supportEmail="new-owner@contoso.com")
- Or decommission: set_store_flow_state(environmentName, flowName,
state="Stopped")
```
> `update_store_flow` updates governance metadata in the cache only. To
> transfer actual PA ownership, an admin must use the Power Platform admin
> center or PowerShell.
>
> **Note:** Many orphaned flows are system-generated (created by
> `DataverseSystemUser` accounts for SLA monitoring, knowledge articles,
> etc.). These were never built by a person — consider tagging them
> rather than reassigning.
>
> **Coverage:** This workflow searches the cached store only, not the
> live PA API. Flows created after the last scan won't appear.
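
The owner matching in steps 4-5 reduces to a small check over already-fetched records (assumes the `owners` JSON contains objects with a `principalId` key, per the field reference in this skill):

```python
import json

def is_orphaned(record: dict, orphan_ids: set) -> bool:
    """True if any owner of a get_store_flow record is a deleted maker."""
    owners = json.loads(record.get("owners") or "[]")
    return any(o.get("principalId") in orphan_ids for o in owners)

# orphan_ids collected from list_store_makers (deleted=true, ownerFlowCount > 0)
orphans = {"aad-oid-departed"}
rec = {"owners": '[{"principalId": "aad-oid-departed", "type": "User"}]'}
print(is_orphaned(rec, orphans))  # True
```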
### 3. Archive Score Calculation
Compute an inactivity score (0-7) per flow to identify safe cleanup
candidates. Aligns with the CoE Starter Kit's archive scoring.
```
1. list_store_flows
2. For each flow (skip entries without displayName or state=Deleted):
- Split id → environmentName, flowName
- get_store_flow(environmentName, flowName)
3. Compute archive score (0-7), add 1 point for each:
+1 lastModifiedTime within 24 hours of createdTime
+1 displayName contains "test", "demo", "copy", "temp", or "backup"
(case-insensitive)
+1 createdTime is more than 12 months ago
+1 state is "Stopped" or "Suspended"
+1 json.loads(owners) is empty array []
+1 runPeriodTotal = 0 (never ran or no recent runs)
+1 parse json.loads(complexity) → actions < 5
4. Classify:
Score 5-7: Recommend archive — report to user for confirmation
Score 3-4: Flag for review →
Read existing tags from get_store_flow response, append #archive-review
update_store_flow(environmentName, flowName, tags="<existing> #archive-review")
Score 0-2: Active, no action
5. For user-confirmed archives:
set_store_flow_state(environmentName, flowName, state="Stopped")
Read existing tags, append #archived
update_store_flow(environmentName, flowName, tags="<existing> #archived")
```
> **What "archive" means:** Power Automate has no native archive feature.
> Archiving via MCP means: (1) stop the flow so it can't run, and
> (2) tag it `#archived` so it's discoverable for future cleanup.
> Actual deletion requires the Power Automate portal or admin PowerShell
> — it cannot be done via MCP tools.
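
The scoring rubric above can be sketched as a pure function over a `get_store_flow` record (the `complexity` JSON is assumed to expose an `actions` count; verify against your tenant's actual response):

```python
import json
from datetime import datetime, timedelta, timezone

STALE_NAME_HINTS = ("test", "demo", "copy", "temp", "backup")

def _ts(value: str) -> datetime:
    # Power Automate timestamps end in "Z"; normalize for fromisoformat.
    return datetime.fromisoformat(value.replace("Z", "+00:00"))

def archive_score(record: dict, now: datetime) -> int:
    """One point per inactivity signal, 0-7, as listed above."""
    created = _ts(record["createdTime"])
    modified = _ts(record["lastModifiedTime"])
    score = 0
    if modified - created <= timedelta(hours=24):
        score += 1
    if any(h in record.get("displayName", "").lower() for h in STALE_NAME_HINTS):
        score += 1
    if now - created > timedelta(days=365):
        score += 1
    if record.get("state") in ("Stopped", "Suspended"):
        score += 1
    if json.loads(record.get("owners") or "[]") == []:
        score += 1
    if not record.get("runPeriodTotal"):
        score += 1
    if json.loads(record.get("complexity") or "{}").get("actions", 0) < 5:
        score += 1
    return score
```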
### 4. Connector Audit
Audit which connectors are in use across monitored flows. Useful for DLP
impact analysis and premium license planning.
```
1. list_store_flows(monitor=true)
(scope to monitored flows — auditing all 1000+ flows is expensive)
2. For each flow (skip entries without displayName or state=Deleted):
- Split id → environmentName, flowName
- get_store_flow(environmentName, flowName)
- Parse connections: json.loads(record["connections"])
Returns array of objects with apiName, apiId, connectionName
- Note the flow-level tier field ("Standard" or "Premium")
3. Build connector inventory:
- Which apiNames are used and by how many flows
- Which flows have tier="Premium" (premium connector detected)
- Which flows use HTTP connectors (apiName contains "http")
- Which flows use custom connectors (non-shared_ prefix apiNames)
4. Report inventory to user
- For DLP analysis: user provides their DLP policy connector groups,
agent cross-references against the inventory
```
> **Scope to monitored flows.** Each flow requires a `get_store_flow` call
> to read the `connections` JSON. Standard plans have ~20 monitored flows —
> manageable. Auditing all flows in a large tenant (1000+) would be very
> expensive in API calls.
>
> **`list_store_connections`** returns connection instances (who created
> which connection) but NOT connector types per flow. Use it for connection
> counts per environment, not for the connector audit.
>
> DLP policy definitions are not available via MCP. The agent builds the
> connector inventory; the user provides the DLP classification to
> cross-reference against.
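
Step 3's inventory is a counting pass over the parsed `connections` JSON (a sketch; each connector is counted once per flow):

```python
import json
from collections import Counter

def connector_inventory(records: list) -> Counter:
    """Count how many flows use each connector apiName (get_store_flow records)."""
    inventory = Counter()
    for rec in records:
        conns = json.loads(rec.get("connections") or "[]")
        inventory.update({c["apiName"] for c in conns})  # set: once per flow
    return inventory

flows = [
    {"connections": '[{"apiName": "shared_sharepointonline"}, {"apiName": "shared_teams"}]'},
    {"connections": '[{"apiName": "shared_sharepointonline"}]'},
]
print(connector_inventory(flows))
# Counter({'shared_sharepointonline': 2, 'shared_teams': 1})
```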
### 5. Notification Rule Management
Configure monitoring and alerting for flows at scale.
```
Enable failure alerts on all critical flows:
1. list_store_flows(monitor=true)
2. For each flow (skip entries without displayName or state=Deleted):
- Split id → environmentName, flowName
- get_store_flow(environmentName, flowName)
- If critical=true AND rule_notify_onfail is not true:
update_store_flow(environmentName, flowName,
rule_notify_onfail=true,
rule_notify_email="oncall@contoso.com")
- If NO flows have critical=true: this is a governance finding.
Recommend the user designate their most important flows as critical
using update_store_flow(critical=true) before configuring alerts.
Enable missing-run detection for scheduled flows:
1. list_store_flows(monitor=true)
2. For each flow where triggerType="Recurrence" (available on list response):
- Skip flows with state="Stopped" or "Suspended" (not expected to run)
- Split id → environmentName, flowName
- get_store_flow(environmentName, flowName)
- If rule_notify_onmissingdays is 0 or not set:
update_store_flow(environmentName, flowName,
rule_notify_onmissingdays=2)
```
> `critical`, `rule_notify_onfail`, and `rule_notify_onmissingdays` are only
> available from `get_store_flow`, not from `list_store_flows`. The list call
> pre-filters to monitored flows; the detail call checks the notification fields.
>
> **Monitoring limit:** The standard plan (FlowStudio for Teams / MCP Pro+)
> includes 20 monitored flows. Before bulk-enabling `monitor=true`, check
> how many flows are already monitored:
> `len(list_store_flows(monitor=true))`
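
The critical-flow filter in the first workflow reduces to a small predicate over `get_store_flow` records (a sketch):

```python
def needs_failure_alert(record: dict) -> bool:
    """Critical, monitored flows whose on-fail notification is not yet enabled."""
    return (record.get("critical") is True
            and record.get("monitor") is True
            and record.get("rule_notify_onfail") is not True)

records = [
    {"critical": True, "monitor": True, "rule_notify_onfail": False},  # needs alert
    {"critical": True, "monitor": True, "rule_notify_onfail": True},   # already set
    {"critical": False, "monitor": True},                              # not critical
]
print(sum(needs_failure_alert(r) for r in records))  # 1
```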
### 6. Classification and Tagging
Bulk-classify flows by connector type, business function, or risk level.
```
Auto-tag by connector:
1. list_store_flows
2. For each flow (skip entries without displayName or state=Deleted):
- Split id → environmentName, flowName
- get_store_flow(environmentName, flowName)
- Parse connections: json.loads(record["connections"])
- Build tags from apiName values:
shared_sharepointonline → #sharepoint
shared_teams → #teams
shared_office365 → #email
Custom connectors → #custom-connector
HTTP-related connectors → #http-external
- Read existing tags from get_store_flow response, append new tags
- update_store_flow(environmentName, flowName,
tags="<existing tags> #sharepoint #teams")
```
> **Two tag systems:** Tags shown in `list_store_flows` are auto-extracted
> from the flow's `description` field (e.g. a maker writes `#operations` in
> the PA portal description). Tags set via `update_store_flow(tags=...)`
> write to a separate field in the Azure Table cache. They are independent —
> writing store tags does not touch the description, and editing the
> description in the portal does not affect store tags.
>
> **Tag merge:** `update_store_flow(tags=...)` overwrites the store tags
> field. To avoid losing tags from other workflows, read the current store
> tags from `get_store_flow` first, append new ones, then write back.
>
> `get_store_flow` already has a `tier` field (Standard/Premium) computed
> by the scanning pipeline. Only use `update_store_flow(tier=...)` if you
> need to override it.
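
A read-merge-write helper for store tags (a sketch; tags are assumed to be space-separated hashtags, as in the examples above):

```python
def merge_tags(existing: str, new_tags: list) -> str:
    """Append hashtags without duplicating or dropping existing store tags.

    update_store_flow(tags=...) overwrites the store tags field, so read the
    current value from get_store_flow, merge, then write the result back.
    """
    merged = existing.split() if existing else []
    for tag in new_tags:
        if tag not in merged:
            merged.append(tag)
    return " ".join(merged)

print(merge_tags("#finance #sharepoint", ["#sharepoint", "#archive-review"]))
# #finance #sharepoint #archive-review
```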
### 7. Maker Offboarding
When an employee leaves, identify their flows and apps, and reassign
Flow Studio governance contacts and notification recipients.
```
1. get_store_maker(makerKey="<departing-user-aad-oid>")
→ check ownerFlowCount, ownerAppCount, deleted status
2. list_store_flows → collect all flows
3. For each flow (skip entries without displayName or state=Deleted):
- Split id → environmentName, flowName
- get_store_flow(environmentName, flowName)
- Parse owners: json.loads(record["owners"])
- If any principalId matches the departing user's OID → flag
4. list_store_power_apps → filter where ownerId matches the OID
5. For each flagged flow:
- Check runPeriodTotal and runLast — is it still active?
- If keeping:
update_store_flow(environmentName, flowName,
ownerTeam="NewTeam", supportEmail="new-owner@contoso.com")
- If decommissioning:
set_store_flow_state(environmentName, flowName, state="Stopped")
Read existing tags, append #decommissioned
update_store_flow(environmentName, flowName, tags="<existing> #decommissioned")
6. Report: flows reassigned, flows stopped, apps needing manual reassignment
```
> **What "reassign" means here:** `update_store_flow` changes who Flow
> Studio considers the governance contact and who receives Flow Studio
> notifications. It does NOT transfer the actual Power Automate flow
> ownership — that requires the Power Platform admin center or PowerShell.
> Also update `rule_notify_email` so failure notifications go to the new
> team instead of the departing employee's email.
>
> Power Apps ownership cannot be changed via MCP tools. Report them for
> manual reassignment in the Power Apps admin center.
### 8. Security Review
Review flows for potential security concerns using cached store data.
```
1. list_store_flows(monitor=true)
2. For each flow (skip entries without displayName or state=Deleted):
- Split id → environmentName, flowName
- get_store_flow(environmentName, flowName)
- Parse security: json.loads(record["security"])
- Parse connections: json.loads(record["connections"])
- Read sharingType directly (top-level field, NOT inside security JSON)
3. Report findings to user for review
4. For reviewed flows:
Read existing tags, append #security-reviewed
update_store_flow(environmentName, flowName, tags="<existing> #security-reviewed")
Do NOT overwrite the security field — it contains structured auth data
```
**Fields available for security review:**
| Field | Where | What it tells you |
|---|---|---|
| `security.triggerRequestAuthenticationType` | security JSON | `"All"` = HTTP trigger accepts unauthenticated requests |
| `sharingType` | top-level | `"Coauthor"` = shared with co-authors for editing |
| `connections` | connections JSON | Which connectors the flow uses (check for HTTP, custom) |
| `referencedResources` | JSON string | SharePoint sites, Teams channels, external URLs the flow accesses |
| `tier` | top-level | `"Premium"` = uses premium connectors |
> Each organization decides what constitutes a security concern. For example,
> an unauthenticated HTTP trigger is expected for webhook receivers (Stripe,
> GitHub) but may be a risk for internal flows. Review findings in context
> before flagging.
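
The checks above can be sketched as a findings function over a `get_store_flow` record (heuristics only; review each finding in context):

```python
import json

def security_findings(record: dict) -> list:
    """Heuristic review items from a get_store_flow record."""
    findings = []
    security = json.loads(record.get("security") or "{}")
    if security.get("triggerRequestAuthenticationType") == "All":
        findings.append("HTTP trigger accepts unauthenticated requests")
    if record.get("sharingType") == "Coauthor":  # top-level, not inside security
        findings.append("shared with co-authors for editing")
    if record.get("tier") == "Premium":
        findings.append("uses premium connectors")
    return findings

rec = {"security": '{"triggerRequestAuthenticationType": "All"}',
       "sharingType": "Coauthor", "tier": "Premium"}
print(len(security_findings(rec)))  # 3
```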
### 9. Environment Governance
Audit environments for compliance and sprawl.
```
1. list_store_environments
Skip entries without displayName (tenant-level metadata rows)
2. Flag:
- Developer environments (sku="Developer") — should be limited
- Non-managed environments (isManagedEnvironment=false) — less governance
- Note: isAdmin=false means the current service account lacks admin
access to that environment, not that the environment has no admin
3. list_store_flows → group by environmentName
- Flow count per environment
- Failure rate analysis: runPeriodFailRate is on the list response —
no need for per-flow get_store_flow calls
4. list_store_connections → group by environmentName
- Connection count per environment
```
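Step 3's grouping works directly off `list_store_flows` entries (a sketch reusing the first-dot split rule):

```python
from collections import Counter

def flows_per_environment(list_records: list) -> Counter:
    """Group list_store_flows entries by environment, skipping sparse records."""
    return Counter(
        rec["id"].partition(".")[0]
        for rec in list_records
        if rec.get("displayName") and rec.get("state") != "Deleted"
    )

sample = [
    {"id": "Default-env1.f1", "displayName": "A", "state": "Started"},
    {"id": "Default-env1.f2", "displayName": "B", "state": "Started"},
    {"id": "Default-env2.f3", "displayName": "C", "state": "Stopped"},
    {"id": "Default-env2.f4"},  # sparse record: skipped
]
print(flows_per_environment(sample))
# Counter({'Default-env1': 2, 'Default-env2': 1})
```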
### 10. Governance Dashboard
Generate a tenant-wide governance summary.
```
Efficient metrics (list calls only):
1. total_flows = len(list_store_flows())
2. monitored = len(list_store_flows(monitor=true))
3. with_onfail = len(list_store_flows(rule_notify_onfail=true))
4. makers = list_store_makers()
→ active = count where deleted=false
→ orphan_count = count where deleted=true AND ownerFlowCount > 0
5. apps = list_store_power_apps()
→ widely_shared = count where sharedUsersCount > 3
6. envs = list_store_environments() → count, group by sku
7. conns = list_store_connections() → count
Compute from list data:
- Monitoring %: monitored / total_flows
- Notification %: with_onfail / monitored
- Orphan count: from step 4
- High-risk count: flows with runPeriodFailRate > 0.2 (on list response)
Detailed metrics (require get_store_flow per flow — expensive for large tenants):
- Compliance %: flows with businessImpact set / total active flows
- Undocumented count: flows without description
- Tier breakdown: group by tier field
For detailed metrics, iterate all flows in a single pass:
For each flow from list_store_flows (skip sparse entries):
Split id → environmentName, flowName
get_store_flow(environmentName, flowName)
→ accumulate businessImpact, description, tier
```
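The detailed-metrics accumulation can be sketched over already-fetched `get_store_flow` records:

```python
def governance_summary(detail_records: list) -> dict:
    """Aggregate the expensive per-flow metrics in one pass."""
    total = len(detail_records)
    classified = sum(1 for r in detail_records if r.get("businessImpact"))
    documented = sum(1 for r in detail_records if r.get("description"))
    return {
        "compliance_pct": round(100 * classified / total, 1) if total else 0.0,
        "undocumented": total - documented,
    }

records = [{"businessImpact": "High", "description": "x"}, {"description": "y"}, {}]
print(governance_summary(records))
# {'compliance_pct': 33.3, 'undocumented': 1}
```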
---
## Field Reference: `get_store_flow` Fields Used in Governance
All fields below are confirmed present on the `get_store_flow` response.
Fields marked with `*` are also available on `list_store_flows` (cheaper).
| Field | Type | Governance use |
|---|---|---|
| `displayName` * | string | Archive score (test/demo name detection) |
| `state` * | string | Archive score, lifecycle management |
| `tier` | string | License audit (Standard vs Premium) |
| `monitor` * | bool | Is this flow being actively monitored? |
| `critical` | bool | Business-critical designation (settable via update_store_flow) |
| `businessImpact` | string | Compliance classification |
| `businessJustification` | string | Compliance attestation |
| `ownerTeam` | string | Ownership accountability |
| `supportEmail` | string | Escalation contact |
| `rule_notify_onfail` | bool | Failure alerting configured? |
| `rule_notify_onmissingdays` | number | SLA monitoring configured? |
| `rule_notify_email` | string | Alert recipients |
| `description` | string | Documentation completeness |
| `tags` | string | Classification — `list_store_flows` shows description-extracted hashtags only; store tags written by `update_store_flow` require `get_store_flow` to read back |
| `runPeriodTotal` * | number | Activity level |
| `runPeriodFailRate` * | number | Health status |
| `runLast` | ISO string | Last run timestamp |
| `scanned` | ISO string | Data freshness |
| `deleted` | bool | Lifecycle tracking |
| `createdTime` * | ISO string | Archive score (age) |
| `lastModifiedTime` * | ISO string | Archive score (staleness) |
| `owners` | JSON string | Orphan detection, ownership audit — parse with json.loads() |
| `connections` | JSON string | Connector audit, tier — parse with json.loads() |
| `complexity` | JSON string | Archive score (simplicity) — parse with json.loads() |
| `security` | JSON string | Auth type audit — parse with json.loads(), contains `triggerRequestAuthenticationType` |
| `sharingType` | string | Oversharing detection (top-level, NOT inside security) |
| `referencedResources` | JSON string | URL audit — parse with json.loads() |
---
## Related Skills
- `flowstudio-power-automate-monitoring` — Health checks, failure rates, inventory (read-only)
- `flowstudio-power-automate-mcp` — Core connection setup, live tool reference
- `flowstudio-power-automate-debug` — Deep diagnosis with action-level inputs/outputs
- `flowstudio-power-automate-build` — Build and deploy flow definitions
@@ -0,0 +1,463 @@
---
name: flowstudio-power-automate-mcp
description: >-
Give your AI agent the same visibility you have in the Power Automate portal — plus
a bit more. The Graph API only returns top-level run status. Flow Studio MCP exposes
action-level inputs, outputs, loop iterations, and nested child flow failures.
Use when asked to: list flows, read a flow definition, check run history, inspect
action outputs, resubmit a run, cancel a running flow, view connections, get a
trigger URL, validate a definition, monitor flow health, or any task that requires
talking to the Power Automate API through an MCP tool. Also use for Power Platform
environment discovery and connection management. Requires a FlowStudio MCP
subscription or compatible server — see https://mcp.flowstudio.app
metadata:
openclaw:
requires:
env:
- FLOWSTUDIO_MCP_TOKEN
primaryEnv: FLOWSTUDIO_MCP_TOKEN
homepage: https://mcp.flowstudio.app
---
# Power Automate via FlowStudio MCP
This skill lets AI agents read, monitor, and operate Microsoft Power Automate
cloud flows programmatically through a **FlowStudio MCP server** — no browser,
no UI, no manual steps.
> **Real debugging examples**: [Expression error in child flow](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/fix-expression-error.md) |
> [Data entry, not a flow bug](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/data-not-flow.md) |
> [Null value crashes child flow](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/null-child-flow.md)
> **Requires:** A [FlowStudio](https://mcp.flowstudio.app) MCP subscription (or
> compatible Power Automate MCP server). You will need:
> - MCP endpoint: `https://mcp.flowstudio.app/mcp` (same for all subscribers)
> - API key / JWT token (`x-api-key` header — NOT Bearer)
> - Power Platform environment name (e.g. `Default-<tenant-guid>`)
---
## Source of Truth
| Priority | Source | Covers |
|----------|--------|--------|
| 1 | **Real API response** | Always trust what the server actually returns |
| 2 | **`tools/list`** | Tool names, parameter names, types, required flags |
| 3 | **SKILL docs & reference files** | Response shapes, behavioral notes, workflow recipes |
> **Start every new session with `tools/list`.**
> It returns the authoritative, up-to-date schema for every tool — parameter names,
> types, and required flags. The SKILL docs cover what `tools/list` cannot tell you:
> response shapes, non-obvious behaviors, and end-to-end workflow patterns.
>
> If any documentation disagrees with `tools/list` or a real API response,
> the API wins.
---
## Recommended Language: Python or Node.js
All examples in this skill and the companion build / debug skills use **Python
with `urllib.request`** (stdlib — no `pip install` needed). **Node.js** is an
equally valid choice: `fetch` is built-in from Node 18+, JSON handling is
native, and the async/await model maps cleanly onto the request-response pattern
of MCP tool calls — making it a natural fit for teams already working in a
JavaScript/TypeScript stack.
| Language | Verdict | Notes |
|---|---|---|
| **Python** | ✅ Recommended | Clean JSON handling, no escaping issues, all skill examples use it |
| **Node.js (≥ 18)** | ✅ Recommended | Native `fetch` + `JSON.stringify`/`JSON.parse`; async/await fits MCP call patterns well; no extra packages needed |
| PowerShell | ⚠️ Avoid for flow operations | `ConvertTo-Json -Depth` silently truncates nested definitions; quoting and escaping break complex payloads. Acceptable for a quick `tools/list` discovery call but not for building or updating flows. |
| cURL / Bash | ⚠️ Possible but fragile | Shell-escaping nested JSON is error-prone; no native JSON parser |
> **TL;DR — use the Core MCP Helper (Python or Node.js) below.** Both handle
> JSON-RPC framing, auth, and response parsing in a single reusable function.
---
## What You Can Do
FlowStudio MCP has two access tiers. **FlowStudio for Teams** subscribers get
both the fast Azure-table store (cached snapshot data + governance metadata) and
full live Power Automate API access. **MCP-only subscribers** get the live tools —
more than enough to build, debug, and operate flows.
### Live Tools — Available to All MCP Subscribers
| Tool | What it does |
|---|---|
| `list_live_flows` | List flows in an environment directly from the PA API (always current) |
| `list_live_environments` | List all Power Platform environments visible to the service account |
| `list_live_connections` | List all connections in an environment from the PA API |
| `get_live_flow` | Fetch the complete flow definition (triggers, actions, parameters) |
| `get_live_flow_http_schema` | Inspect the JSON body schema and response schemas of an HTTP-triggered flow |
| `get_live_flow_trigger_url` | Get the current signed callback URL for an HTTP-triggered flow |
| `trigger_live_flow` | POST to an HTTP-triggered flow's callback URL (AAD auth handled automatically) |
| `update_live_flow` | Create a new flow or patch an existing definition in one call |
| `add_live_flow_to_solution` | Migrate a non-solution flow into a solution |
| `get_live_flow_runs` | List recent run history with status, start/end times, and errors |
| `get_live_flow_run_error` | Get structured error details (per-action) for a failed run |
| `get_live_flow_run_action_outputs` | Inspect inputs/outputs of any action (or every foreach iteration) in a run |
| `resubmit_live_flow_run` | Re-run a failed or cancelled run using its original trigger payload |
| `cancel_live_flow_run` | Cancel a currently running flow execution |
### Store Tools — FlowStudio for Teams Subscribers Only
These tools read from (and write to) the FlowStudio Azure table — a monitored
snapshot of your tenant's flows enriched with governance metadata and run statistics.
| Tool | What it does |
|---|---|
| `list_store_flows` | Search flows from the cache with governance flags, run failure rates, and owner metadata |
| `get_store_flow` | Get full cached details for a single flow including run stats and governance fields |
| `get_store_flow_trigger_url` | Get the trigger URL from the cache (instant, no PA API call) |
| `get_store_flow_runs` | Cached run history for the last N days with duration and remediation hints |
| `get_store_flow_errors` | Cached failed-only runs with failed action names and remediation hints |
| `get_store_flow_summary` | Aggregated stats: success rate, failure count, avg/max duration |
| `set_store_flow_state` | Start or stop a flow via the PA API and sync the result back to the store |
| `update_store_flow` | Update governance metadata (description, tags, monitor flag, notification rules, business impact) |
| `list_store_environments` | List all environments from the cache |
| `list_store_makers` | List all makers (citizen developers) from the cache |
| `get_store_maker` | Get a maker's flow/app counts and account status |
| `list_store_power_apps` | List all Power Apps canvas apps from the cache |
| `list_store_connections` | List all Power Platform connections from the cache |
---
## Which Tool Tier to Call First
| Task | Tool | Notes |
|---|---|---|
| List flows | `list_live_flows` | Always current — calls PA API directly |
| Read a definition | `get_live_flow` | Always fetched live — not cached |
| Debug a failure | `get_live_flow_runs` → `get_live_flow_run_error` | Use live run data |
> ⚠️ **`list_live_flows` returns a wrapper object** with a `flows` array — access via `result["flows"]`.
> Store tools (`list_store_flows`, `get_store_flow`, etc.) are available to **FlowStudio for Teams** subscribers and provide cached governance metadata. Use live tools when in doubt — they work for all subscription tiers.
---
## Step 0 — Discover Available Tools
Always start by calling `tools/list` to confirm the server is reachable and see
exactly which tool names are available (names may vary by server version):
```python
import json, urllib.request
TOKEN = "<YOUR_JWT_TOKEN>"
MCP = "https://mcp.flowstudio.app/mcp"
def mcp_raw(method, params=None, cid=1):
payload = {"jsonrpc": "2.0", "method": method, "id": cid}
if params:
payload["params"] = params
req = urllib.request.Request(MCP, data=json.dumps(payload).encode(),
headers={"x-api-key": TOKEN, "Content-Type": "application/json",
"User-Agent": "FlowStudio-MCP/1.0"})
try:
resp = urllib.request.urlopen(req, timeout=30)
except urllib.error.HTTPError as e:
raise RuntimeError(f"MCP HTTP {e.code} — check token and endpoint") from e
return json.loads(resp.read())
raw = mcp_raw("tools/list")
if "error" in raw:
print("ERROR:", raw["error"]); raise SystemExit(1)
for t in raw["result"]["tools"]:
print(t["name"], "", t["description"][:60])
```
---
## Core MCP Helper (Python)
Use this helper throughout all subsequent operations:
```python
import json, urllib.request
TOKEN = "<YOUR_JWT_TOKEN>"
MCP = "https://mcp.flowstudio.app/mcp"
def mcp(tool, args, cid=1):
payload = {"jsonrpc": "2.0", "method": "tools/call", "id": cid,
"params": {"name": tool, "arguments": args}}
req = urllib.request.Request(MCP, data=json.dumps(payload).encode(),
headers={"x-api-key": TOKEN, "Content-Type": "application/json",
"User-Agent": "FlowStudio-MCP/1.0"})
try:
resp = urllib.request.urlopen(req, timeout=120)
except urllib.error.HTTPError as e:
body = e.read().decode("utf-8", errors="replace")
raise RuntimeError(f"MCP HTTP {e.code}: {body[:200]}") from e
raw = json.loads(resp.read())
if "error" in raw:
raise RuntimeError(f"MCP error: {json.dumps(raw['error'])}")
text = raw["result"]["content"][0]["text"]
return json.loads(text)
```
> **Common auth errors:**
> - HTTP 401/403 → token is missing, expired, or malformed. Get a fresh JWT from [mcp.flowstudio.app](https://mcp.flowstudio.app).
> - HTTP 400 → malformed JSON-RPC payload. Check `Content-Type: application/json` and body structure.
> - `MCP error: {"code": -32602, ...}` → wrong or missing tool arguments.
---
## Core MCP Helper (Node.js)
Equivalent helper for Node.js 18+ (built-in `fetch` — no packages required):
```js
const TOKEN = "<YOUR_JWT_TOKEN>";
const MCP = "https://mcp.flowstudio.app/mcp";
async function mcp(tool, args, cid = 1) {
const payload = {
jsonrpc: "2.0",
method: "tools/call",
id: cid,
params: { name: tool, arguments: args },
};
const res = await fetch(MCP, {
method: "POST",
headers: {
"x-api-key": TOKEN,
"Content-Type": "application/json",
"User-Agent": "FlowStudio-MCP/1.0",
},
body: JSON.stringify(payload),
});
if (!res.ok) {
const body = await res.text();
throw new Error(`MCP HTTP ${res.status}: ${body.slice(0, 200)}`);
}
const raw = await res.json();
if (raw.error) throw new Error(`MCP error: ${JSON.stringify(raw.error)}`);
return JSON.parse(raw.result.content[0].text);
}
```
> Requires Node.js 18+. For older Node, replace `fetch` with `https.request`
> from the stdlib or install `node-fetch`.
---
## List Flows
```python
ENV = "Default-<tenant-guid>"
result = mcp("list_live_flows", {"environmentName": ENV})
# Returns wrapper object:
# {"mode": "owner", "flows": [{"id": "0757041a-...", "displayName": "My Flow",
# "state": "Started", "triggerType": "Request", ...}], "totalCount": 42, "error": null}
for f in result["flows"]:
FLOW_ID = f["id"] # plain UUID — use directly as flowName
print(FLOW_ID, "|", f["displayName"], "|", f["state"])
```
---
## Read a Flow Definition
```python
FLOW = "<flow-uuid>"
flow = mcp("get_live_flow", {"environmentName": ENV, "flowName": FLOW})
# Display name and state
print(flow["properties"]["displayName"])
print(flow["properties"]["state"])
# List all action names
actions = flow["properties"]["definition"]["actions"]
print("Actions:", list(actions.keys()))
# Inspect one action's expression
print(actions["Compose_Filter"]["inputs"])
```
---
## Check Run History
```python
# Most recent runs (newest first)
runs = mcp("get_live_flow_runs", {"environmentName": ENV, "flowName": FLOW, "top": 5})
# Returns direct array:
# [{"name": "08584296068667933411438594643CU15",
# "status": "Failed",
# "startTime": "2026-02-25T06:13:38.6910688Z",
# "endTime": "2026-02-25T06:15:24.1995008Z",
# "triggerName": "manual",
# "error": {"code": "ActionFailed", "message": "An action failed..."}},
# {"name": "08584296028664130474944675379CU26",
# "status": "Succeeded", "error": null, ...}]
for r in runs:
print(r["name"], r["status"])
# Get the name of the first failed run
run_id = next((r["name"] for r in runs if r["status"] == "Failed"), None)
```
---
## Inspect an Action's Output
```python
run_id = runs[0]["name"]
out = mcp("get_live_flow_run_action_outputs", {
"environmentName": ENV,
"flowName": FLOW,
"runName": run_id,
"actionName": "Get_Customer_Record" # exact action name from the definition
})
print(json.dumps(out, indent=2))
```
---
## Get a Run's Error
```python
err = mcp("get_live_flow_run_error", {
"environmentName": ENV,
"flowName": FLOW,
"runName": run_id
})
# Returns:
# {"runName": "08584296068...",
# "failedActions": [
# {"actionName": "HTTP_find_AD_User_by_Name", "status": "Failed",
# "code": "NotSpecified", "startTime": "...", "endTime": "..."},
# {"actionName": "Scope_prepare_workers", "status": "Failed",
# "error": {"code": "ActionFailed", "message": "An action failed..."}}
# ],
# "allActions": [
# {"actionName": "Apply_to_each", "status": "Skipped"},
# {"actionName": "Compose_WeekEnd", "status": "Succeeded"},
# ...
# ]}
# The ROOT cause is usually the deepest entry in failedActions:
root = err["failedActions"][-1]
print(f"Root failure: {root['actionName']}{root['code']}")
```
---
## Resubmit a Run
```python
result = mcp("resubmit_live_flow_run", {
"environmentName": ENV,
"flowName": FLOW,
"runName": run_id
})
print(result) # {"resubmitted": true, "triggerName": "..."}
```
---
## Cancel a Running Run
```python
mcp("cancel_live_flow_run", {
"environmentName": ENV,
"flowName": FLOW,
"runName": run_id
})
```
> ⚠️ **Do NOT cancel a run that shows `Running` because it is waiting for an
> adaptive card response.** That status is normal — the flow is paused waiting
> for a human to respond in Teams. Cancelling it will discard the pending card.
---
## Full Round-Trip Example — Debug and Fix a Failing Flow
```python
# ── 1. Find the flow ─────────────────────────────────────────────────────
result = mcp("list_live_flows", {"environmentName": ENV})
target = next(f for f in result["flows"] if "My Flow Name" in f["displayName"])
FLOW_ID = target["id"]
# ── 2. Get the most recent failed run ────────────────────────────────────
runs = mcp("get_live_flow_runs", {"environmentName": ENV, "flowName": FLOW_ID, "top": 5})
# [{"name": "08584296068...", "status": "Failed", ...}, ...]
RUN_ID = next(r["name"] for r in runs if r["status"] == "Failed")
# ── 3. Get per-action failure breakdown ──────────────────────────────────
err = mcp("get_live_flow_run_error", {"environmentName": ENV, "flowName": FLOW_ID, "runName": RUN_ID})
# {"failedActions": [{"actionName": "HTTP_find_AD_User_by_Name", "code": "NotSpecified",...}], ...}
root_action = err["failedActions"][-1]["actionName"]
print(f"Root failure: {root_action}")
# ── 4. Read the definition and inspect the failing action's expression ───
defn = mcp("get_live_flow", {"environmentName": ENV, "flowName": FLOW_ID})
acts = defn["properties"]["definition"]["actions"]
print("Failing action inputs:", acts[root_action]["inputs"])
# ── 5. Inspect the prior action's output to find the null ────────────────
out = mcp("get_live_flow_run_action_outputs", {
"environmentName": ENV, "flowName": FLOW_ID,
"runName": RUN_ID, "actionName": "Compose_Names"
})
nulls = [x for x in out.get("body", []) if x.get("Name") is None]
print(f"{len(nulls)} records with null Name")
# ── 6. Apply the fix ─────────────────────────────────────────────────────
acts[root_action]["inputs"]["parameters"]["searchName"] = \
"@coalesce(item()?['Name'], '')"
conn_refs = defn["properties"]["connectionReferences"]
result = mcp("update_live_flow", {
"environmentName": ENV, "flowName": FLOW_ID,
"definition": defn["properties"]["definition"],
"connectionReferences": conn_refs
})
assert result.get("error") is None, f"Deploy failed: {result['error']}"
# ⚠️ error key is always present — only fail if it is NOT None
# ── 7. Resubmit and verify ───────────────────────────────────────────────
mcp("resubmit_live_flow_run", {"environmentName": ENV, "flowName": FLOW_ID, "runName": RUN_ID})
import time; time.sleep(30)
new_runs = mcp("get_live_flow_runs", {"environmentName": ENV, "flowName": FLOW_ID, "top": 1})
print(new_runs[0]["status"]) # Succeeded = done
```
---
## Auth & Connection Notes
| Field | Value |
|---|---|
| Auth header | `x-api-key: <JWT>` — **not** `Authorization: Bearer` |
| Token format | Plain JWT — do not strip, alter, or prefix it |
| Timeout | Use ≥ 120 s for `get_live_flow_run_action_outputs` (large outputs) |
| Environment name | `Default-<tenant-guid>` (find it via `list_live_environments` or `list_live_flows` response) |
---
## Reference Files
- [MCP-BOOTSTRAP.md](references/MCP-BOOTSTRAP.md) — endpoint, auth, request/response format (read this first)
- [tool-reference.md](references/tool-reference.md) — response shapes and behavioral notes (parameters are in `tools/list`)
- [action-types.md](references/action-types.md) — Power Automate action type patterns
- [connection-references.md](references/connection-references.md) — connector reference guide
---
## More Capabilities
For **diagnosing failing flows** end-to-end → load the `flowstudio-power-automate-debug` skill.
For **building and deploying new flows** → load the `flowstudio-power-automate-build` skill.
@@ -0,0 +1,53 @@
# MCP Bootstrap — Quick Reference
Everything an agent needs to start calling the FlowStudio MCP server.
```
Endpoint: https://mcp.flowstudio.app/mcp
Protocol: JSON-RPC 2.0 over HTTP POST
Transport: Streamable HTTP — single POST per request, no SSE, no WebSocket
Auth: x-api-key header with JWT token (NOT Bearer)
```
## Required Headers
```
Content-Type: application/json
x-api-key: <token>
User-Agent: FlowStudio-MCP/1.0 ← required, or Cloudflare blocks you
```
## Step 1 — Discover Tools
```json
POST {"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}
```
Returns all tools with names, descriptions, and input schemas.
Free — not counted against plan limits.
## Step 2 — Call a Tool
```json
POST {"jsonrpc":"2.0","id":1,"method":"tools/call",
"params":{"name":"<tool_name>","arguments":{...}}}
```
## Response Shape
```
Success → {"result":{"content":[{"type":"text","text":"<JSON string>"}]}}
Error → {"result":{"content":[{"type":"text","text":"{\"error\":{...}}"}]}}
```
Always parse `result.content[0].text` as JSON to get the actual data.
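The double-parse can live in one small helper. A minimal sketch (the envelope below is hand-built to match the shape above, not a live response):

```python
import json

def unwrap(envelope):
    """Parse result.content[0].text, then fail on a non-null error field."""
    body = json.loads(envelope["result"]["content"][0]["text"])
    if isinstance(body, dict) and body.get("error"):
        raise RuntimeError(f"tool error: {body['error']}")
    return body

sample = {"result": {"content": [
    {"type": "text", "text": '{"flows": [], "error": null}'}]}}
print(unwrap(sample))  # {'flows': [], 'error': None}
```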
## Key Tips
- Tool results are JSON strings inside the text field — **double-parse needed**
- `"error"` field in parsed body: `null` = success, object = failure
- `environmentName` is required for most tools, but **not** for:
`list_live_environments`, `list_live_connections`, `list_store_flows`,
`list_store_environments`, `list_store_makers`, `get_store_maker`,
`list_store_power_apps`, `list_store_connections`
- When in doubt, check the `required` array in each tool's schema from `tools/list`
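The `required` check can be automated. A sketch against the `tools/list` shape (the `inputSchema` key follows the MCP spec; verify it against a real response from your server):

```python
def required_params(tools_result, tool_name):
    """Look up one tool in a tools/list result and return its
    required argument names (empty list if none declared)."""
    for tool in tools_result["tools"]:
        if tool["name"] == tool_name:
            return tool.get("inputSchema", {}).get("required", [])
    raise KeyError(f"unknown tool: {tool_name}")

# Hand-built sample shaped like a tools/list result
sample = {"tools": [{"name": "get_live_flow",
                     "inputSchema": {"required": ["environmentName", "flowName"]}}]}
print(required_params(sample, "get_live_flow"))  # ['environmentName', 'flowName']
```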
@@ -0,0 +1,79 @@
# FlowStudio MCP — Action Types Reference
Compact lookup for recognising action types returned by `get_live_flow`.
Use this to **read and understand** existing flow definitions.
> For full copy-paste construction patterns, see the `flowstudio-power-automate-build` skill.
---
## How to Read a Flow Definition
Every action has `"type"`, `"runAfter"`, and `"inputs"`. The `runAfter` object
declares dependencies: `{"Previous": ["Succeeded"]}`. Valid statuses:
`Succeeded`, `Failed`, `Skipped`, `TimedOut`.
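A small walker makes that structure concrete. This sketch recurses into nested `actions` maps and `else` branches but not `Switch` `cases`; the demo definition is invented:

```python
def walk_actions(actions, depth=0):
    """Flatten an actions map into indented 'name [type] after [deps]'
    rows, recursing into nested action containers."""
    rows = []
    for name, act in actions.items():
        deps = sorted(act.get("runAfter", {}))
        rows.append("  " * depth + f"{name} [{act.get('type')}] after {deps}")
        for nested in (act.get("actions"), act.get("else", {}).get("actions")):
            if nested:
                rows.extend(walk_actions(nested, depth + 1))
    return rows

demo = {"A": {"type": "Compose", "runAfter": {}},
        "B": {"type": "Scope", "runAfter": {"A": ["Succeeded"]},
              "actions": {"C": {"type": "Http", "runAfter": {}}}}}
for row in walk_actions(demo):
    print(row)
```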
---
## Action Type Quick Reference
| Type | Purpose | Key fields to inspect | Output reference |
|---|---|---|---|
| `Compose` | Store/transform a value | `inputs` (any expression) | `outputs('Name')` |
| `InitializeVariable` | Declare a variable | `inputs.variables[].{name, type, value}` | `variables('name')` |
| `SetVariable` | Update a variable | `inputs.{name, value}` | `variables('name')` |
| `IncrementVariable` | Increment a numeric variable | `inputs.{name, value}` | `variables('name')` |
| `AppendToArrayVariable` | Push to an array variable | `inputs.{name, value}` | `variables('name')` |
| `If` | Conditional branch | `expression.and/or`, `actions`, `else.actions` | — |
| `Switch` | Multi-way branch | `expression`, `cases.{case, actions}`, `default` | — |
| `Foreach` | Loop over array | `foreach`, `actions`, `operationOptions` | `item()` / `items('Name')` |
| `Until` | Loop until condition | `expression`, `limit.{count, timeout}`, `actions` | — |
| `Wait` | Delay | `inputs.interval.{count, unit}` | — |
| `Scope` | Group / try-catch | `actions` (nested action map) | `result('Name')` |
| `Terminate` | End run | `inputs.{runStatus, runError}` | — |
| `OpenApiConnection` | Connector call (SP, Outlook, Teams…) | `inputs.host.{apiId, connectionName, operationId}`, `inputs.parameters` | `outputs('Name')?['body/...']` |
| `OpenApiConnectionWebhook` | Webhook wait (approvals, adaptive cards) | same as above | `body('Name')?['...']` |
| `Http` | External HTTP call | `inputs.{method, uri, headers, body}` | `outputs('Name')?['body']` |
| `Response` | Return to HTTP caller | `inputs.{statusCode, headers, body}` | — |
| `Query` | Filter array | `inputs.{from, where}` | `body('Name')` (filtered array) |
| `Select` | Reshape/project array | `inputs.{from, select}` | `body('Name')` (projected array) |
| `Table` | Array → CSV/HTML string | `inputs.{from, format, columns}` | `body('Name')` (string) |
| `ParseJson` | Parse JSON with schema | `inputs.{content, schema}` | `body('Name')?['field']` |
| `Expression` | Built-in function (e.g. ConvertTimeZone) | `kind`, `inputs` | `body('Name')` |
---
## Connector Identification
When you see `type: OpenApiConnection`, identify the connector from `host.apiId`:
| apiId suffix | Connector |
|---|---|
| `shared_sharepointonline` | SharePoint |
| `shared_office365` | Outlook / Office 365 |
| `shared_teams` | Microsoft Teams |
| `shared_approvals` | Approvals |
| `shared_office365users` | Office 365 Users |
| `shared_flowmanagement` | Flow Management |
The `operationId` tells you the specific operation (e.g. `GetItems`, `SendEmailV2`,
`PostMessageToConversation`). The `connectionName` maps to a GUID in
`properties.connectionReferences`.
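Putting identification together: a sketch that inventories every connector call in a definition (demo data is invented; `Switch` `cases` branches are not walked):

```python
def connectors_used(definition):
    """Map apiId suffix → set of operationIds for every
    OpenApiConnection(-Webhook) action, including nested containers."""
    found = {}
    def visit(actions):
        for act in actions.values():
            if str(act.get("type", "")).startswith("OpenApiConnection"):
                host = act.get("inputs", {}).get("host", {})
                api = host.get("apiId", "").rsplit("/", 1)[-1]
                found.setdefault(api, set()).add(host.get("operationId"))
            for nested in (act.get("actions"),
                           act.get("else", {}).get("actions")):
                if nested:
                    visit(nested)
    visit(definition.get("actions", {}))
    return found

demo = {"actions": {
    "Get_items": {"type": "OpenApiConnection", "inputs": {"host": {
        "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
        "operationId": "GetItems",
        "connectionName": "shared_sharepointonline"}}}}}
print(connectors_used(demo))  # {'shared_sharepointonline': {'GetItems'}}
```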
---
## Common Expressions (Reading Cheat Sheet)
| Expression | Meaning |
|---|---|
| `@outputs('X')?['body/value']` | Array result from connector action X |
| `@body('X')` | Direct body of action X (Query, Select, ParseJson) |
| `@item()?['Field']` | Current loop item's field |
| `@triggerBody()?['Field']` | Trigger payload field |
| `@variables('name')` | Variable value |
| `@coalesce(a, b)` | First non-null of a, b |
| `@first(array)` | First element (null if empty) |
| `@length(array)` | Array count |
| `@empty(value)` | True if null/empty string/empty array |
| `@union(a, b)` | Merge arrays — **first wins** on duplicates |
| `@result('Scope')` | Array of action outcomes inside a Scope |
@@ -0,0 +1,115 @@
# FlowStudio MCP — Connection References
Connection references wire a flow's connector actions to real authenticated
connections in the Power Platform. They are required whenever you call
`update_live_flow` with a definition that uses connector actions.
---
## Structure in a Flow Definition
```json
{
"properties": {
"definition": { ... },
"connectionReferences": {
"shared_sharepointonline": {
"connectionName": "shared-sharepointonl-62599557c-1f33-4aec-b4c0-a6e4afcae3be",
"id": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
"displayName": "SharePoint"
},
"shared_office365": {
"connectionName": "shared-office365-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"id": "/providers/Microsoft.PowerApps/apis/shared_office365",
"displayName": "Office 365 Outlook"
}
}
}
}
```
Keys are **logical reference names** (e.g. `shared_sharepointonline`).
These match the `connectionName` field inside each action's `host` block.
---
## Finding Connection GUIDs
Call `get_live_flow` on **any existing flow** that uses the same connection
and copy the `connectionReferences` block. The GUID after the connector prefix is
the connection instance owned by the authenticating user.
```python
flow = mcp("get_live_flow", {"environmentName": ENV, "flowName": EXISTING_FLOW_ID})
conn_refs = flow["properties"]["connectionReferences"]
# conn_refs["shared_sharepointonline"]["connectionName"]
# → "shared-sharepointonl-62599557c-1f33-4aec-b4c0-a6e4afcae3be"
```
> ⚠️ Connection references are **user-scoped**. If a connection is owned
> by another account, `update_live_flow` will return 403
> `ConnectionAuthorizationFailed`. You must use a connection belonging to
> the account whose token is in the `x-api-key` header.
---
## Passing `connectionReferences` to `update_live_flow`
```python
result = mcp("update_live_flow", {
    "environmentName": ENV,
    "flowName": FLOW_ID,
    "definition": modified_definition,
    "connectionReferences": {
        "shared_sharepointonline": {
            "connectionName": "shared-sharepointonl-62599557c-1f33-4aec-b4c0-a6e4afcae3be",
            "id": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline"
        }
    }
})
```
Only include connections that the definition actually uses.
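One way to honour that rule is to prune a block copied from an existing flow down to what the definition actually references. A sketch (demo data is invented; `Switch` `cases` branches are not walked):

```python
def prune_connection_references(definition, conn_refs):
    """Keep only the connectionReferences whose logical names appear
    as host.connectionName somewhere in the definition's actions."""
    used = set()
    def visit(actions):
        for act in actions.values():
            name = act.get("inputs", {}).get("host", {}).get("connectionName")
            if name:
                used.add(name)
            for nested in (act.get("actions"),
                           act.get("else", {}).get("actions")):
                if nested:
                    visit(nested)
    visit(definition.get("actions", {}))
    return {k: v for k, v in conn_refs.items() if k in used}

defn = {"actions": {"Send": {"type": "OpenApiConnection",
        "inputs": {"host": {"connectionName": "shared_office365"}}}}}
refs = {"shared_office365": {"connectionName": "shared-office365-guid"},
        "shared_teams": {"connectionName": "shared-teams-guid"}}
print(list(prune_connection_references(defn, refs)))  # ['shared_office365']
```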
---
## Common Connector API IDs
| Service | API ID |
|---|---|
| SharePoint Online | `/providers/Microsoft.PowerApps/apis/shared_sharepointonline` |
| Office 365 Outlook | `/providers/Microsoft.PowerApps/apis/shared_office365` |
| Microsoft Teams | `/providers/Microsoft.PowerApps/apis/shared_teams` |
| OneDrive for Business | `/providers/Microsoft.PowerApps/apis/shared_onedriveforbusiness` |
| Azure AD | `/providers/Microsoft.PowerApps/apis/shared_azuread` |
| HTTP with Azure AD | `/providers/Microsoft.PowerApps/apis/shared_webcontents` |
| SQL Server | `/providers/Microsoft.PowerApps/apis/shared_sql` |
| Dataverse | `/providers/Microsoft.PowerApps/apis/shared_commondataserviceforapps` |
| Azure Blob Storage | `/providers/Microsoft.PowerApps/apis/shared_azureblob` |
| Approvals | `/providers/Microsoft.PowerApps/apis/shared_approvals` |
| Office 365 Users | `/providers/Microsoft.PowerApps/apis/shared_office365users` |
| Flow Management | `/providers/Microsoft.PowerApps/apis/shared_flowmanagement` |
---
## Teams Adaptive Card Dual-Connection Requirement
Flows that send adaptive cards **and** post follow-up messages require two
separate Teams connections:
```json
"connectionReferences": {
"shared_teams": {
"connectionName": "shared-teams-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"id": "/providers/Microsoft.PowerApps/apis/shared_teams"
},
"shared_teams_1": {
"connectionName": "shared-teams-yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy",
"id": "/providers/Microsoft.PowerApps/apis/shared_teams"
}
}
```
Both can point to the **same underlying Teams account** but must be registered
as two distinct connection references. The webhook (`OpenApiConnectionWebhook`)
uses `shared_teams` and subsequent message actions use `shared_teams_1`.
@@ -0,0 +1,499 @@
# FlowStudio MCP — Tool Response Catalog
Response shapes and behavioral notes for the FlowStudio Power Automate MCP server.
> **For tool names and parameters**: Always call `tools/list` on the server.
> It returns the authoritative, up-to-date schema for every tool.
> This document covers what `tools/list` does NOT tell you: **response shapes**
> and **non-obvious behaviors** discovered through real usage.
---
## Source of Truth
| Priority | Source | Covers |
|----------|--------|--------|
| 1 | **Real API response** | Always trust what the server actually returns |
| 2 | **`tools/list`** | Tool names, parameter names, types, required flags |
| 3 | **This document** | Response shapes, behavioral notes, gotchas |
> If this document disagrees with `tools/list` or real API behavior,
> the API wins. Update this document accordingly.
---
## Environment & Tenant Discovery
### `list_live_environments`
Response: direct array of environments.
```json
[
{
"id": "Default-26e65220-5561-46ef-9783-ce5f20489241",
"displayName": "FlowStudio (default)",
"sku": "Production",
"location": "australia",
"state": "Enabled",
"isDefault": true,
"isAdmin": true,
"isMember": true,
"createdTime": "2023-08-18T00:41:05Z"
}
]
```
> Use the `id` value as `environmentName` in all other tools.
### `list_store_environments`
Same shape as `list_live_environments` but read from cache (faster).
---
## Connection Discovery
### `list_live_connections`
Response: wrapper object with `connections` array.
```json
{
"connections": [
{
"id": "shared-office365-9f9d2c8e-55f1-49c9-9f9c-1c45d1fbbdce",
"displayName": "user@contoso.com",
"connectorName": "shared_office365",
"createdBy": "User Name",
"statuses": [{"status": "Connected"}],
"createdTime": "2024-03-12T21:23:55.206815Z"
}
],
"totalCount": 56,
"error": null
}
```
> **Key field**: `id` is the `connectionName` value used in `connectionReferences`.
>
> **Key field**: `connectorName` maps to apiId:
> `"/providers/Microsoft.PowerApps/apis/" + connectorName`
>
> Filter by status: `statuses[0].status == "Connected"`.
>
> **Note**: `tools/list` marks `environmentName` as optional, but the server
> returns `MissingEnvironmentFilter` (HTTP 400) if you omit it. Always pass
> `environmentName`.
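A sketch for turning a `list_live_connections` result into a ready-to-use `connectionReferences` entry (the sample mirrors the shape above; the helper name is ours, not a server tool):

```python
def connection_reference_for(result, connector_name):
    """Build a connectionReferences entry from a list_live_connections
    result: first instance of the connector whose status is Connected."""
    for c in result["connections"]:
        status = (c.get("statuses") or [{}])[0].get("status")
        if c["connectorName"] == connector_name and status == "Connected":
            return {connector_name: {
                "connectionName": c["id"],
                "id": "/providers/Microsoft.PowerApps/apis/" + connector_name,
            }}
    raise LookupError(f"no Connected {connector_name} connection")

sample = {"connections": [
    {"id": "shared-office365-9f9d2c8e-55f1-49c9-9f9c-1c45d1fbbdce",
     "connectorName": "shared_office365",
     "statuses": [{"status": "Connected"}]}]}
ref = connection_reference_for(sample, "shared_office365")
print(ref)
```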
### `list_store_connections`
Same connection data from cache.
---
## Flow Discovery & Listing
### `list_live_flows`
Response: wrapper object with `flows` array.
```json
{
"mode": "owner",
"flows": [
{
"id": "0757041a-8ef2-cf74-ef06-06881916f371",
"displayName": "My Flow",
"state": "Started",
"triggerType": "Request",
"triggerKind": "Http",
"createdTime": "2023-08-18T01:18:17Z",
"lastModifiedTime": "2023-08-18T12:47:42Z",
"owners": "<aad-object-id>",
"definitionAvailable": true
}
],
"totalCount": 100,
"error": null
}
```
> Access via `result["flows"]`. `id` is a plain UUID — use directly as `flowName`.
>
> `mode` indicates the access scope used (`"owner"` or `"admin"`).
### `list_store_flows`
Response: **direct array** (no wrapper).
```json
[
{
"id": "3991358a-f603-e49d-b1ed-a9e4f72e2dcb.0757041a-8ef2-cf74-ef06-06881916f371",
"displayName": "Admin | Sync Template v3 (Solutions)",
"state": "Started",
"triggerType": "OpenApiConnectionWebhook",
"environmentName": "3991358a-f603-e49d-b1ed-a9e4f72e2dcb",
"runPeriodTotal": 100,
"createdTime": "2023-08-18T01:18:17Z",
"lastModifiedTime": "2023-08-18T12:47:42Z"
}
]
```
> **`id` format**: `<environmentId>.<flowId>` — split on the first `.` to extract the flow UUID:
> `flow_id = item["id"].split(".", 1)[1]`
### `get_store_flow`
Response: single flow metadata from cache (selected fields).
```json
{
"id": "<environmentId>.<flowId>",
"displayName": "My Flow",
"state": "Started",
"triggerType": "Recurrence",
"runPeriodTotal": 100,
"runPeriodFailRate": 0.1,
"runPeriodSuccessRate": 0.9,
"runPeriodFails": 10,
"runPeriodSuccess": 90,
"runPeriodDurationAverage": 29410.8,
"runPeriodDurationMax": 158900.0,
"runError": "{\"code\": \"EACCES\", ...}",
"description": "Flow description",
"tier": "Premium",
"complexity": "{...}",
"actions": 42,
"connections": ["sharepointonline", "office365"],
"owners": ["user@contoso.com"],
"createdBy": "user@contoso.com"
}
```
> `runPeriodDurationAverage` / `runPeriodDurationMax` are in **milliseconds** (divide by 1000).
> `runError` is a **JSON string** — parse with `json.loads()`.
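A normalisation sketch for those two gotchas (sample values are taken from the shape above):

```python
import json

def summarize_store_flow(flow):
    """Convert get_store_flow durations from ms to seconds and parse
    the runError JSON string when present."""
    out = {
        "avg_s": flow["runPeriodDurationAverage"] / 1000,
        "max_s": flow["runPeriodDurationMax"] / 1000,
        "fail_rate": flow.get("runPeriodFailRate"),
    }
    raw = flow.get("runError")
    out["last_error"] = json.loads(raw) if raw else None
    return out

demo = {"runPeriodDurationAverage": 29410.8, "runPeriodDurationMax": 158900.0,
        "runPeriodFailRate": 0.1, "runError": '{"code": "EACCES"}'}
print(summarize_store_flow(demo))
```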
---
## Flow Definition (Live API)
### `get_live_flow`
Response: full flow definition from PA API.
```json
{
"name": "<flow-guid>",
"properties": {
"displayName": "My Flow",
"state": "Started",
"definition": {
"triggers": { "..." },
"actions": { "..." },
"parameters": { "..." }
},
"connectionReferences": { "..." }
}
}
```
### `update_live_flow`
**Create mode**: Omit `flowName` — creates a new flow. `definition` and `displayName` required.
**Update mode**: Provide `flowName` — PATCHes existing flow.
Response:
```json
{
"created": false,
"flowKey": "<environmentId>.<flowId>",
"updated": ["definition", "connectionReferences"],
"displayName": "My Flow",
"state": "Started",
"definition": { "...full definition..." },
"error": null
}
```
> `error` is **always present** but may be `null`. Check `result.get("error") is not None`.
>
> On create: `created` is the new flow GUID (string). On update: `created` is `false`.
>
> `description` is **always required** (create and update).
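The `error`/`created` semantics above are easy to get wrong, so a small response checker helps. A sketch over the documented response shape (the helper name is an assumption, not part of the API):

```python
def check_update_result(result: dict) -> str:
    """Interpret an update_live_flow response.

    Returns the new flow GUID on create, the flowKey on update; raises on error.
    """
    # "error" is always present; null means success. Never test membership alone.
    if result.get("error") is not None:
        raise RuntimeError(f"update_live_flow failed: {result['error']}")
    created = result.get("created")
    # On create, `created` holds the new flow GUID (a string); on update it is False.
    if isinstance(created, str):
        return created
    return result["flowKey"]

flow_key = check_update_result({
    "created": False,
    "flowKey": "env-guid.flow-guid",
    "updated": ["definition"],
    "error": None,
})
```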
### `add_live_flow_to_solution`
Migrates a non-solution flow into a solution. Returns error if already in a solution.
---
## Run History & Monitoring
### `get_live_flow_runs`
Response: direct array of runs (newest first).
```json
[{
"name": "<run-id>",
"status": "Succeeded|Failed|Running|Cancelled",
"startTime": "2026-02-25T06:13:38Z",
"endTime": "2026-02-25T06:14:02Z",
"triggerName": "Recurrence",
"error": null
}]
```
> `top` defaults to **30** and auto-paginates for higher values. Set `top: 300`
> for 24-hour coverage on flows running every 5 minutes.
>
> Run ID field is **`name`** (not `runName`). Use this value as the `runName`
> parameter in other tools.
### `get_live_flow_run_error`
Response: structured error breakdown for a failed run.
```json
{
"runName": "08584296068667933411438594643CU15",
"failedActions": [
{
"actionName": "Apply_to_each_prepare_workers",
"status": "Failed",
"error": {"code": "ActionFailed", "message": "An action failed."},
"code": "ActionFailed",
"startTime": "2026-02-25T06:13:52Z",
"endTime": "2026-02-25T06:15:24Z"
},
{
"actionName": "HTTP_find_AD_User_by_Name",
"status": "Failed",
"code": "NotSpecified",
"startTime": "2026-02-25T06:14:01Z",
"endTime": "2026-02-25T06:14:05Z"
}
],
"allActions": [
{"actionName": "Apply_to_each", "status": "Skipped"},
{"actionName": "Compose_WeekEnd", "status": "Succeeded"},
{"actionName": "HTTP_find_AD_User_by_Name", "status": "Failed"}
]
}
```
> `failedActions` is ordered outer-to-inner --- the **last entry is the root cause**.
> Use `failedActions[-1]["actionName"]` as the starting point for diagnosis.
### `get_live_flow_run_action_outputs`
Response: array of action detail objects.
```json
[
{
"actionName": "Compose_WeekEnd_now",
"status": "Succeeded",
"startTime": "2026-02-25T06:13:52Z",
"endTime": "2026-02-25T06:13:52Z",
"error": null,
"inputs": "Mon, 25 Feb 2026 06:13:52 GMT",
"outputs": "Mon, 25 Feb 2026 06:13:52 GMT"
}
]
```
> **`actionName` is optional**: omit it to return ALL actions in the run;
> provide it to return a single-element array for that action only.
>
> Outputs can be very large (50 MB+) for bulk-data actions. Use 120s+ timeout.
---
## Run Control
### `resubmit_live_flow_run`
Response: `{ flowKey, resubmitted: true, runName, triggerName }`
### `cancel_live_flow_run`
Cancels a `Running` flow run.
> Do NOT cancel runs waiting for an adaptive card response --- status `Running`
> is normal while a Teams card is awaiting user input.
---
## HTTP Trigger Tools
### `get_live_flow_http_schema`
Response keys:
```
flowKey - Flow GUID
displayName - Flow display name
triggerName - Trigger action name (e.g. "manual")
triggerType - Trigger type (e.g. "Request")
triggerKind - Trigger kind (e.g. "Http")
requestMethod - HTTP method (e.g. "POST")
relativePath - Relative path configured on the trigger (if any)
requestSchema - JSON schema the trigger expects as POST body
requestHeaders - Headers the trigger expects
responseSchemas - Array of JSON schemas defined on Response action(s)
responseSchemaCount - Number of Response actions that define output schemas
```
> The request body schema is in `requestSchema` (not `triggerSchema`).
### `get_live_flow_trigger_url`
Returns the signed callback URL for HTTP-triggered flows. Response includes
`flowKey`, `triggerName`, `triggerType`, `triggerKind`, `triggerMethod`, `triggerUrl`.
### `trigger_live_flow`
Response keys: `flowKey`, `triggerName`, `triggerUrl`, `requiresAadAuth`, `authType`,
`responseStatus`, `responseBody`.
> **Only works for `Request` (HTTP) triggers.** Returns an error for Recurrence
> and other trigger types: `"only HTTP Request triggers can be invoked via this tool"`.
> `Button`-kind triggers return `ListCallbackUrlOperationBlocked`.
>
> `responseStatus` + `responseBody` contain the flow's Response action output.
> AAD-authenticated triggers are handled automatically.
>
> **Content-type note**: The body is sent as `application/octet-stream` (raw),
> not `application/json`. Flows with a trigger schema that has `required` fields
> will reject the request with `InvalidRequestContent` (400) because PA validates
> `Content-Type` before parsing against the schema. Flows without a schema, or
> flows designed to accept raw input (e.g. Baker-pattern flows that parse the body
> internally), will work fine. The flow receives the JSON as base64-encoded
> `$content` with `$content-type: application/octet-stream`.
---
## Flow State Management
### `set_live_flow_state`
Start or stop a Power Automate flow via the live PA API. Does **not** require
a Power Clarity workspace — works for any flow the impersonated account can access.
Reads the current state first and only issues the start/stop call if a change is
actually needed.
Parameters: `environmentName`, `flowName`, `state` (`"Started"` | `"Stopped"`) — all required.
Response:
```json
{
"flowName": "6321ab25-7eb0-42df-b977-e97d34bcb272",
"environmentName": "Default-26e65220-...",
"requestedState": "Started",
"actualState": "Started"
}
```
> **Use this tool** — not `update_live_flow` — to start or stop a flow.
> `update_live_flow` only changes displayName/definition; the PA API ignores
> state passed through that endpoint.
### `set_store_flow_state`
Start or stop a flow via the live PA API **and** persist the updated state back
to the Power Clarity cache. Same parameters as `set_live_flow_state` but requires
a Power Clarity workspace.
Response (different shape from `set_live_flow_state`):
```json
{
"flowKey": "<environmentId>.<flowId>",
"requestedState": "Stopped",
"currentState": "Stopped",
"flow": { /* full gFlows record, same shape as get_store_flow */ }
}
```
> Prefer `set_live_flow_state` when you only need to toggle state — it's
> simpler and has no subscription requirement.
>
> Use `set_store_flow_state` when you need the cache updated immediately
> (without waiting for the next daily scan) AND want the full updated
> governance record back in the same call — useful for workflows that
> stop a flow and immediately tag or inspect it.
---
## Store Tools --- FlowStudio for Teams Only
### `get_store_flow_summary`
Response: aggregated run statistics.
```json
{
"totalRuns": 100,
"failRuns": 10,
"failRate": 0.1,
"averageDurationSeconds": 29.4,
"maxDurationSeconds": 158.9,
"firstFailRunRemediation": "<hint or null>"
}
```
### `get_store_flow_runs`
Cached run history for the last N days with duration and remediation hints.
### `get_store_flow_errors`
Cached failed-only runs with failed action names and remediation hints.
### `get_store_flow_trigger_url`
Trigger URL from cache (instant, no PA API call).
### `update_store_flow`
Update governance metadata (description, tags, monitor flag, notification rules, business impact).
### `list_store_makers` / `get_store_maker`
Maker (citizen developer) discovery and detail.
### `list_store_power_apps`
List all Power Apps canvas apps from the cache.
---
## Behavioral Notes
Non-obvious behaviors discovered through real API usage. These are things
`tools/list` cannot tell you.
### `get_live_flow_run_action_outputs`
- **`actionName` is optional**: omit to get all actions, provide to get one.
This changes the response from N elements to 1 element (still an array).
- Outputs can be 50 MB+ for bulk-data actions --- always use 120s+ timeout.
### `update_live_flow`
- `description` is **always required** (create and update modes).
- `error` key is **always present** in response --- `null` means success.
Do NOT check `if "error" in result`; check `result.get("error") is not None`.
- On create, `created` = new flow GUID (string). On update, `created` = `false`.
- **Cannot change flow state.** Only updates displayName, definition, and
connectionReferences. Use `set_live_flow_state` to start/stop a flow.
### `trigger_live_flow`
- **Only works for HTTP Request triggers.** Returns error for Recurrence, connector,
and other trigger types.
- AAD-authenticated triggers are handled automatically (impersonated Bearer token).
### `get_live_flow_runs`
- `top` defaults to **30** with automatic pagination for higher values.
- Run ID field is `name`, not `runName`. Use this value as `runName` in other tools.
- Runs are returned newest-first.
### Teams `PostMessageToConversation` (via `update_live_flow`)
- **"Chat with Flow bot"**: `body/recipient` = `"user@domain.com;"` (string with trailing semicolon).
- **"Channel"**: `body/recipient` = `{"groupId": "...", "channelId": "..."}` (object).
- `poster`: `"Flow bot"` for Workflows bot identity, `"User"` for user identity.
### `list_live_connections`
- `id` is the value you need for `connectionName` in `connectionReferences`.
- `connectorName` maps to apiId: `"/providers/Microsoft.PowerApps/apis/" + connectorName`.

---
name: flowstudio-power-automate-monitoring
description: >-
Monitor Power Automate flow health, track failure rates, and inventory tenant
assets using the FlowStudio MCP cached store. The live API only returns
top-level run status. Store tools surface aggregated stats, per-run failure
details with remediation hints, maker activity, and Power Apps inventory —
all from a fast cache with no rate-limit pressure on the PA API.
Load this skill when asked to: check flow health, find failing flows, get
failure rates, review error trends, list all flows with monitoring enabled,
check who built a flow, find inactive makers, inventory Power Apps, see
environment or connection counts, get a flow summary, or any tenant-wide
health overview. Requires a FlowStudio for Teams or MCP Pro+ subscription —
see https://mcp.flowstudio.app
metadata:
openclaw:
requires:
env:
- FLOWSTUDIO_MCP_TOKEN
primaryEnv: FLOWSTUDIO_MCP_TOKEN
homepage: https://mcp.flowstudio.app
---
# Power Automate Monitoring with FlowStudio MCP
Monitor flow health, track failure rates, and inventory tenant assets through
the FlowStudio MCP **cached store** — fast reads, no PA API rate limits, and
enriched with governance metadata and remediation hints.
> **Requires:** A [FlowStudio for Teams or MCP Pro+](https://mcp.flowstudio.app)
> subscription.
>
> **Start every session with `tools/list`** to confirm tool names and parameters.
> This skill covers response shapes, behavioral notes, and workflow patterns —
> things `tools/list` cannot tell you. If this document disagrees with
> `tools/list` or a real API response, the API wins.
---
## How Monitoring Works
Flow Studio scans the Power Automate API daily for each subscriber and caches
the results. There are two levels:
- **All flows** get metadata scanned: definition, connections, owners, trigger
type, and aggregate run statistics (`runPeriodTotal`, `runPeriodFailRate`,
etc.). Environments, apps, connections, and makers are also scanned.
- **Monitored flows** (`monitor: true`) additionally get per-run detail:
individual run records with status, duration, failed action names, and
remediation hints. This is what populates `get_store_flow_runs`,
`get_store_flow_errors`, and `get_store_flow_summary`.
**Data freshness:** Check the `scanned` field on `get_store_flow` to see when
a flow was last scanned. If stale, the scanning pipeline may not be running.
**Enabling monitoring:** Set `monitor: true` via `update_store_flow` or the
Flow Studio for Teams app
([how to select flows](https://learn.flowstudio.app/teams-monitoring)).
**Designating critical flows:** Use `update_store_flow` with `critical=true`
on business-critical flows. This enables the governance skill's notification
rule management to auto-configure failure alerts on critical flows.
---
## Tools
| Tool | Purpose |
|---|---|
| `list_store_flows` | List flows with failure rates and monitoring filters |
| `get_store_flow` | Full cached record: run stats, owners, tier, connections, definition |
| `get_store_flow_summary` | Aggregated run stats: success/fail rate, avg/max duration |
| `get_store_flow_runs` | Per-run history with duration, status, failed actions, remediation |
| `get_store_flow_errors` | Failed-only runs with action names and remediation hints |
| `get_store_flow_trigger_url` | Trigger URL from cache (instant, no PA API call) |
| `set_store_flow_state` | Start or stop a flow and sync state back to cache |
| `update_store_flow` | Set monitor flag, notification rules, tags, governance metadata |
| `list_store_environments` | All Power Platform environments |
| `list_store_connections` | All connections |
| `list_store_makers` | All makers (citizen developers) |
| `get_store_maker` | Maker detail: flow/app counts, licenses, account status |
| `list_store_power_apps` | All Power Apps canvas apps |
---
## Store vs Live
| Question | Use Store | Use Live |
|---|---|---|
| How many flows are failing? | `list_store_flows` | — |
| What's the fail rate over 30 days? | `get_store_flow_summary` | — |
| Show error history for a flow | `get_store_flow_errors` | — |
| Who built this flow? | `get_store_flow` → parse `owners` | — |
| Read the full flow definition | `get_store_flow` has it (JSON string) | `get_live_flow` (structured) |
| Inspect action inputs/outputs from a run | — | `get_live_flow_run_action_outputs` |
| Resubmit a failed run | — | `resubmit_live_flow_run` |
> Store tools answer "what happened?" and "how healthy is it?"
> Live tools answer "what exactly went wrong?" and "fix it now."
> If `get_store_flow_runs`, `get_store_flow_errors`, or `get_store_flow_summary`
> return empty results, check: (1) is `monitor: true` on the flow? and
> (2) is the `scanned` field recent? Use `get_store_flow` to verify both.
---
## Response Shapes
### `list_store_flows`
Direct array. Filters: `monitor` (bool), `rule_notify_onfail` (bool),
`rule_notify_onmissingdays` (bool).
```json
[
{
"id": "Default-<envGuid>.<flowGuid>",
"displayName": "Stripe subscription updated",
"state": "Started",
"triggerType": "Request",
"triggerUrl": "https://...",
"tags": ["#operations", "#sensitive"],
"environmentName": "Default-26e65220-...",
"monitor": true,
"runPeriodFailRate": 0.012,
"runPeriodTotal": 82,
"createdTime": "2025-06-24T01:20:53Z",
"lastModifiedTime": "2025-06-24T03:51:03Z"
}
]
```
> `id` format: `Default-<envGuid>.<flowGuid>`. Split on first `.` to get
> `environmentName` and `flowName`.
>
> `triggerUrl` and `tags` are optional. Some entries are sparse (just `id` +
> `monitor`) — skip entries without `displayName`.
>
> Tags on `list_store_flows` are auto-extracted from the flow's `description`
> field (maker hashtags like `#operations`). Tags written via
> `update_store_flow(tags=...)` are stored separately and only visible on
> `get_store_flow` — they do NOT appear in the list response.
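The id-splitting and sparse-entry rules above can be combined into one filter pass. A sketch based on the response shape shown (the helper name and output keys are illustrative):

```python
def usable_flows(items: list) -> list:
    """Split cached flow ids and drop sparse entries.

    `id` is `<environmentName>.<flowGuid>`; split on the FIRST dot only,
    since the flow GUID portion may itself vary in shape.
    """
    flows = []
    for item in items:
        if "displayName" not in item:  # sparse entries carry only id + monitor
            continue
        env, _, flow_guid = item["id"].partition(".")
        flows.append({"environmentName": env, "flowName": flow_guid, **item})
    return flows

flows = usable_flows([
    {"id": "Default-26e65220-aaaa.3991358a-f603", "displayName": "Stripe sub", "monitor": True},
    {"id": "Default-26e65220-aaaa.deadbeef", "monitor": False},
])
```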
### `get_store_flow`
Full cached record. Key fields:
| Category | Fields |
|---|---|
| Identity | `name`, `displayName`, `environmentName`, `state`, `triggerType`, `triggerKind`, `tier`, `sharingType` |
| Run stats | `runPeriodTotal`, `runPeriodFails`, `runPeriodSuccess`, `runPeriodFailRate`, `runPeriodSuccessRate`, `runPeriodDurationAverage`/`Max`/`Min` (milliseconds), `runTotal`, `runFails`, `runFirst`, `runLast`, `runToday` |
| Governance | `monitor` (bool), `rule_notify_onfail` (bool), `rule_notify_onmissingdays` (number), `rule_notify_email` (string), `log_notify_onfail` (ISO), `description`, `tags` |
| Freshness | `scanned` (ISO), `nextScan` (ISO) |
| Lifecycle | `deleted` (bool), `deletedTime` (ISO) |
| JSON strings | `actions`, `connections`, `owners`, `complexity`, `definition`, `createdBy`, `security`, `triggers`, `referencedResources`, `runError` — all require `json.loads()` to parse |
> Duration fields (`runPeriodDurationAverage`, `Max`, `Min`) are in
> **milliseconds**. Divide by 1000 for seconds.
>
> `runError` contains the last run error as a JSON string. Parse it:
> `json.loads(record["runError"])` — returns `{}` when no error.
### `get_store_flow_summary`
Aggregated stats over a time window (default: last 7 days).
```json
{
"flowKey": "Default-<envGuid>.<flowGuid>",
"windowStart": null,
"windowEnd": null,
"totalRuns": 82,
"successRuns": 81,
"failRuns": 1,
"successRate": 0.988,
"failRate": 0.012,
"averageDurationSeconds": 2.877,
"maxDurationSeconds": 9.433,
"firstFailRunRemediation": null,
"firstFailRunUrl": null
}
```
> Returns all zeros when no run data exists for this flow in the window.
> Use `startTime` and `endTime` (ISO 8601) parameters to change the window.
### `get_store_flow_runs` / `get_store_flow_errors`
Direct array. `get_store_flow_errors` filters to `status=Failed` only.
Parameters: `startTime`, `endTime`, `status` (array: `["Failed"]`,
`["Succeeded"]`, etc.).
> Both return `[]` when no run data exists.
### `get_store_flow_trigger_url`
```json
{
"flowKey": "Default-<envGuid>.<flowGuid>",
"displayName": "Stripe subscription updated",
"triggerType": "Request",
"triggerKind": "Http",
"triggerUrl": "https://..."
}
```
> `triggerUrl` is null for non-HTTP triggers.
### `set_store_flow_state`
Calls the live PA API then syncs state to the cache and returns the
full updated record.
```json
{
"flowKey": "Default-<envGuid>.<flowGuid>",
"requestedState": "Stopped",
"currentState": "Stopped",
"flow": { /* full gFlows record, same shape as get_store_flow */ }
}
```
> The embedded `flow` object reflects the new state immediately — no
> follow-up `get_store_flow` call needed. Useful for governance workflows
> that stop a flow and then read its tags/monitor/owner metadata in the
> same turn.
>
> Functionally equivalent to `set_live_flow_state` for changing state,
> but `set_live_flow_state` only returns `{flowName, environmentName,
> requestedState, actualState}` and doesn't sync the cache. Prefer
> `set_live_flow_state` when you only need to toggle state and don't
> care about cache freshness.
### `update_store_flow`
Updates governance metadata. Only provided fields are updated (merge).
Returns the full updated record (same shape as `get_store_flow`).
Settable fields: `monitor` (bool), `rule_notify_onfail` (bool),
`rule_notify_onmissingdays` (number, 0=disabled),
`rule_notify_email` (comma-separated), `description`, `tags`,
`businessImpact`, `businessJustification`, `businessValue`,
`ownerTeam`, `ownerBusinessUnit`, `supportGroup`, `supportEmail`,
`critical` (bool), `tier`, `security`.
### `list_store_environments`
Direct array.
```json
[
{
"id": "Default-26e65220-...",
"displayName": "Flow Studio (default)",
"sku": "Default",
"type": "NotSpecified",
"location": "australia",
"isDefault": true,
"isAdmin": true,
"isManagedEnvironment": false,
"createdTime": "2017-01-18T01:06:46Z"
}
]
```
> `sku` values: `Default`, `Production`, `Developer`, `Sandbox`, `Teams`.
### `list_store_connections`
Direct array. Can be very large (1500+ items).
```json
[
{
"id": "<environmentId>.<connectionId>",
"displayName": "user@contoso.com",
"createdBy": "{\"id\":\"...\",\"displayName\":\"...\",\"email\":\"...\"}",
"environmentName": "...",
"statuses": "[{\"status\":\"Connected\"}]"
}
]
```
> `createdBy` and `statuses` are **JSON strings** — parse with `json.loads()`.
### `list_store_makers`
Direct array.
```json
[
{
"id": "09dbe02f-...",
"displayName": "Catherine Han",
"mail": "catherine.han@flowstudio.app",
"deleted": false,
"ownerFlowCount": 199,
"ownerAppCount": 209,
"userIsServicePrinciple": false
}
]
```
> Deleted makers have `deleted: true` and no `displayName`/`mail` fields.
### `get_store_maker`
Full maker record. Key fields: `displayName`, `mail`, `userPrincipalName`,
`ownerFlowCount`, `ownerAppCount`, `accountEnabled`, `deleted`, `country`,
`firstFlow`, `firstFlowCreatedTime`, `lastFlowCreatedTime`,
`firstPowerApp`, `lastPowerAppCreatedTime`,
`licenses` (JSON string of M365 SKUs).
### `list_store_power_apps`
Direct array.
```json
[
{
"id": "<environmentId>.<appId>",
"displayName": "My App",
"environmentName": "...",
"ownerId": "09dbe02f-...",
"ownerName": "Catherine Han",
"appType": "Canvas",
"sharedUsersCount": 0,
"createdTime": "2023-08-18T01:06:22Z",
"lastModifiedTime": "2023-08-18T01:06:22Z",
"lastPublishTime": "2023-08-18T01:06:22Z"
}
]
```
---
## Common Workflows
### Find unhealthy flows
```
1. list_store_flows
2. Filter where runPeriodFailRate > 0.1 and runPeriodTotal >= 5
3. Sort by runPeriodFailRate descending
4. For each: get_store_flow for full detail
```
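The filter-and-sort in steps 2-3 can be sketched as follows; the thresholds are illustrative defaults, not prescribed by the API:

```python
def unhealthy(flows: list, min_runs: int = 5, max_fail_rate: float = 0.1) -> list:
    """Rank flows by failure rate over the list_store_flows shape above.

    Flows with too few runs are skipped to avoid noisy rates.
    """
    bad = [
        f for f in flows
        if f.get("runPeriodTotal", 0) >= min_runs
        and f.get("runPeriodFailRate", 0) > max_fail_rate
    ]
    return sorted(bad, key=lambda f: f["runPeriodFailRate"], reverse=True)

worst = unhealthy([
    {"displayName": "A", "runPeriodTotal": 82, "runPeriodFailRate": 0.012},
    {"displayName": "B", "runPeriodTotal": 40, "runPeriodFailRate": 0.35},
    {"displayName": "C", "runPeriodTotal": 2, "runPeriodFailRate": 1.0},
])
```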
### Check a specific flow's health
```
1. get_store_flow → check scanned (freshness), runPeriodFailRate, runPeriodTotal
2. get_store_flow_summary → aggregated stats with optional time window
3. get_store_flow_errors → per-run failure detail with remediation hints
4. If deeper diagnosis needed → switch to live tools:
get_live_flow_runs → get_live_flow_run_action_outputs
```
### Enable monitoring on a flow
```
1. update_store_flow with monitor=true
2. Optionally set rule_notify_onfail=true, rule_notify_email="user@domain.com"
3. Run data will appear after the next daily scan
```
### Daily health check
```
1. list_store_flows
2. Flag flows with runPeriodFailRate > 0.2 and runPeriodTotal >= 3
3. Flag monitored flows with state="Stopped" (may indicate auto-suspension)
4. For critical failures → get_store_flow_errors for remediation hints
```
### Maker audit
```
1. list_store_makers
2. Identify deleted accounts still owning flows (deleted=true, ownerFlowCount > 0)
3. get_store_maker for full detail on specific users
```
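Step 2 of the audit — deleted accounts still owning flows — is a simple predicate over the `list_store_makers` array. A sketch under the field names shown above:

```python
def orphaned_makers(makers: list) -> list:
    """Find deleted accounts that still own flows (candidates for reassignment)."""
    return [
        m for m in makers
        if m.get("deleted") and m.get("ownerFlowCount", 0) > 0
    ]

orphans = orphaned_makers([
    {"id": "a", "deleted": False, "ownerFlowCount": 199},
    {"id": "b", "deleted": True, "ownerFlowCount": 12},
    {"id": "c", "deleted": True, "ownerFlowCount": 0},
])
```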
### Inventory
```
1. list_store_environments → environment count, SKUs, locations
2. list_store_flows → flow count by state, trigger type, fail rate
3. list_store_power_apps → app count, owners, sharing
4. list_store_connections → connection count per environment
```
---
## Related Skills
- `flowstudio-power-automate-mcp` — Core connection setup, live tool reference
- `flowstudio-power-automate-debug` — Deep diagnosis with action-level inputs/outputs (live API)
- `flowstudio-power-automate-build` — Build and deploy flow definitions
- `flowstudio-power-automate-governance` — Governance metadata, tagging, notification rules, CoE patterns