mirror of
https://github.com/github/awesome-copilot.git
synced 2026-03-12 12:15:12 +00:00
chore: publish from staged
@@ -17,8 +17,8 @@
     "workflow-automation"
   ],
   "skills": [
-    "./skills/flowstudio-power-automate-mcp/",
-    "./skills/flowstudio-power-automate-debug/",
-    "./skills/flowstudio-power-automate-build/"
+    "./skills/flowstudio-power-automate-mcp",
+    "./skills/flowstudio-power-automate-debug",
+    "./skills/flowstudio-power-automate-build"
   ]
 }

@@ -0,0 +1,460 @@
---
name: flowstudio-power-automate-build
description: >-
  Build, scaffold, and deploy Power Automate cloud flows using the FlowStudio
  MCP server. Load this skill when asked to: create a flow, build a new flow,
  deploy a flow definition, scaffold a Power Automate workflow, construct a flow
  JSON, update an existing flow's actions, patch a flow definition, add actions
  to a flow, wire up connections, or generate a workflow definition from scratch.
  Requires a FlowStudio MCP subscription — see https://mcp.flowstudio.app
---

# Build & Deploy Power Automate Flows with FlowStudio MCP

Step-by-step guide for constructing and deploying Power Automate cloud flows
programmatically through the FlowStudio MCP server.

**Prerequisite**: A FlowStudio MCP server must be reachable with a valid JWT.
See the `flowstudio-power-automate-mcp` skill for connection setup.
Subscribe at https://mcp.flowstudio.app

---

## Source of Truth

> **Always call `tools/list` first** to confirm available tool names and their
> parameter schemas. Tool names and parameters may change between server versions.
> This skill covers response shapes, behavioral notes, and build patterns —
> things `tools/list` cannot tell you. If this document disagrees with `tools/list`
> or a real API response, the API wins.

---

## Python Helper

```python
import json, urllib.error, urllib.request

MCP_URL = "https://mcp.flowstudio.app/mcp"
MCP_TOKEN = "<YOUR_JWT_TOKEN>"

def mcp(tool, **kwargs):
    """Call an MCP tool and return its parsed JSON payload."""
    payload = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                          "params": {"name": tool, "arguments": kwargs}}).encode()
    req = urllib.request.Request(MCP_URL, data=payload,
                                 headers={"x-api-key": MCP_TOKEN,
                                          "Content-Type": "application/json",
                                          "User-Agent": "FlowStudio-MCP/1.0"})
    try:
        resp = urllib.request.urlopen(req, timeout=120)
    except urllib.error.HTTPError as e:
        body = e.read().decode("utf-8", errors="replace")
        raise RuntimeError(f"MCP HTTP {e.code}: {body[:200]}") from e
    raw = json.loads(resp.read())
    if "error" in raw:
        raise RuntimeError(f"MCP error: {json.dumps(raw['error'])}")
    return json.loads(raw["result"]["content"][0]["text"])

ENV = "<environment-id>"  # e.g. Default-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```

---
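Per the Source of Truth note, confirm tool names before calling anything. A minimal sketch of the `tools/list` request: it reuses the endpoint and `x-api-key` header of the `mcp()` helper, and the envelope is standard JSON-RPC; treat the exact response shape as something to verify against your server.

```python
import json

def tools_list_payload():
    # tools/list takes no "params" entry, only the JSON-RPC envelope
    return json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})

# POST this body to MCP_URL with the same headers the mcp() helper uses;
# the response's result["tools"] lists each tool's name and inputSchema.
print(tools_list_payload())
```

Cross-check every tool name and parameter used below against that listing before relying on it.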

## Step 1 — Safety Check: Does the Flow Already Exist?

Always look before you build to avoid duplicates:

```python
results = mcp("list_store_flows",
              environmentName=ENV, searchTerm="My New Flow")

# list_store_flows returns a direct array (no wrapper object)
if len(results) > 0:
    # Flow exists — modify rather than create
    # id format is "envId.flowId" — split to get the flow UUID
    FLOW_ID = results[0]["id"].split(".", 1)[1]
    print(f"Existing flow: {FLOW_ID}")
    defn = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)
else:
    print("Flow not found — building from scratch")
    FLOW_ID = None
```

---

## Step 2 — Obtain Connection References

Every connector action needs a `connectionName` that points to a key in the
flow's `connectionReferences` map. That key links to an authenticated connection
in the environment.

> **MANDATORY**: You MUST call `list_live_connections` first — do NOT ask the
> user for connection names or GUIDs. The API returns the exact values you need.
> Only prompt the user if the API confirms that required connections are missing.

### 2a — Always call `list_live_connections` first

```python
conns = mcp("list_live_connections", environmentName=ENV)

# Filter to connected (authenticated) connections only
active = [c for c in conns["connections"]
          if c["statuses"][0]["status"] == "Connected"]

# Build a lookup: connectorName → connectionName (id)
conn_map = {}
for c in active:
    conn_map[c["connectorName"]] = c["id"]

print(f"Found {len(active)} active connections")
print("Available connectors:", list(conn_map.keys()))
```

### 2b — Determine which connectors the flow needs

Based on the flow you are building, identify which connectors are required.
Common connector API names:

| Connector | API name |
|---|---|
| SharePoint | `shared_sharepointonline` |
| Outlook / Office 365 | `shared_office365` |
| Teams | `shared_teams` |
| Approvals | `shared_approvals` |
| OneDrive for Business | `shared_onedriveforbusiness` |
| Excel Online (Business) | `shared_excelonlinebusiness` |
| Dataverse | `shared_commondataserviceforapps` |
| Microsoft Forms | `shared_microsoftforms` |

> **Flows that need NO connections** (e.g. Recurrence + Compose + HTTP only)
> can skip the rest of Step 2 — omit `connectionReferences` from the deploy call.

### 2c — If connections are missing, guide the user

```python
connectors_needed = ["shared_sharepointonline", "shared_office365"]  # adjust per flow

missing = [c for c in connectors_needed if c not in conn_map]

if not missing:
    print("✅ All required connections are available — proceeding to build")
else:
    # ── STOP: connections must be created interactively ──
    # Connections require OAuth consent in a browser — no API can create them.
    print("⚠️ The following connectors have no active connection in this environment:")
    for c in missing:
        friendly = c.replace("shared_", "").replace("onlinebusiness", " Online (Business)")
        print(f"  • {friendly} (API name: {c})")
    print()
    print("Please create the missing connections:")
    print("  1. Open https://make.powerautomate.com/connections")
    print("  2. Select the correct environment from the top-right picker")
    print("  3. Click '+ New connection' for each missing connector listed above")
    print("  4. Sign in and authorize when prompted")
    print("  5. Tell me when done — I will re-check and continue building")
    # DO NOT proceed to Step 3 until the user confirms.
    # After user confirms, re-run Step 2a to refresh conn_map.
```

### 2d — Build the connectionReferences block

Only execute this after 2c confirms no missing connectors:

```python
connection_references = {}
for connector in connectors_needed:
    connection_references[connector] = {
        "connectionName": conn_map[connector],  # the GUID from list_live_connections
        "source": "Invoker",
        "id": f"/providers/Microsoft.PowerApps/apis/{connector}"
    }
```

> **IMPORTANT — `host.connectionName` in actions**: When building actions in
> Step 3, set `host.connectionName` to the **key** from this map (e.g.
> `shared_teams`), NOT the connection GUID. The GUID only goes inside the
> `connectionReferences` entry. The engine matches the action's
> `host.connectionName` to the key to find the right connection.
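To make the key-vs-GUID distinction concrete, a minimal sketch; the GUID-like value and the action below are illustrative placeholders, not real API output.

```python
# Hypothetical values for illustration: the action references the KEY,
# while the map entry holds the actual connection id from list_live_connections.
connection_references = {
    "shared_teams": {                             # <- the key actions refer to
        "connectionName": "shared-teams-abc123",  # <- GUID-like id from the API
        "source": "Invoker",
        "id": "/providers/Microsoft.PowerApps/apis/shared_teams",
    }
}

post_action_host = {
    "apiId": "/providers/Microsoft.PowerApps/apis/shared_teams",
    "connectionName": "shared_teams",  # the key, NOT the GUID
    "operationId": "PostMessageToConversation",
}

# The engine resolves the action's connectionName through the map:
resolved = connection_references[post_action_host["connectionName"]]["connectionName"]
print(resolved)  # shared-teams-abc123
```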

> **Alternative** — if you already have a flow using the same connectors,
> you can extract `connectionReferences` from its definition:
> ```python
> ref_flow = mcp("get_live_flow", environmentName=ENV, flowName="<existing-flow-id>")
> connection_references = ref_flow["properties"]["connectionReferences"]
> ```

See the `flowstudio-power-automate-mcp` skill's **connection-references.md** reference
for the full connection reference structure.

---

## Step 3 — Build the Flow Definition

Construct the definition object. See [flow-schema.md](references/flow-schema.md)
for the full schema and these action pattern references for copy-paste templates:

- [action-patterns-core.md](references/action-patterns-core.md) — Variables, control flow, expressions
- [action-patterns-data.md](references/action-patterns-data.md) — Array transforms, HTTP, parsing
- [action-patterns-connectors.md](references/action-patterns-connectors.md) — SharePoint, Outlook, Teams, Approvals

```python
definition = {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": { ... },  # see trigger-types.md / build-patterns.md
    "actions": { ... }    # see action-patterns-*.md / build-patterns.md
}
```

> See [build-patterns.md](references/build-patterns.md) for complete, ready-to-use
> flow definitions covering Recurrence+SharePoint+Teams, HTTP triggers, and more.

---

## Step 4 — Deploy (Create or Update)

`update_live_flow` handles both creation and updates in a single tool.

### Create a new flow (no existing flow)

Omit `flowName` — the server generates a new GUID and creates via PUT:

```python
result = mcp("update_live_flow",
             environmentName=ENV,
             # flowName omitted → creates a new flow
             definition=definition,
             connectionReferences=connection_references,
             displayName="Overdue Invoice Notifications",
             description="Weekly SharePoint → Teams notification flow, built by agent")

if result.get("error") is not None:
    print("Create failed:", result["error"])
else:
    # Capture the new flow ID for subsequent steps
    FLOW_ID = result["created"]
    print(f"✅ Flow created: {FLOW_ID}")
```

### Update an existing flow

Provide `flowName` to PATCH:

```python
from datetime import datetime, timezone

result = mcp("update_live_flow",
             environmentName=ENV,
             flowName=FLOW_ID,
             definition=definition,
             connectionReferences=connection_references,
             displayName="My Updated Flow",
             description="Updated by agent on " + datetime.now(timezone.utc).isoformat())

if result.get("error") is not None:
    print("Update failed:", result["error"])
else:
    print("Update succeeded:", result)
```

> ⚠️ `update_live_flow` always returns an `error` key.
> `null` (Python `None`) means success — do not treat the presence of the key as failure.
>
> ⚠️ `description` is required for both create and update.

### Common deployment errors

| Error message (contains) | Cause | Fix |
|---|---|---|
| `missing from connectionReferences` | An action's `host.connectionName` references a key that doesn't exist in the `connectionReferences` map | Ensure `host.connectionName` uses the **key** from `connectionReferences` (e.g. `shared_teams`), not the raw GUID |
| `ConnectionAuthorizationFailed` / 403 | The connection GUID belongs to another user or is not authorized | Re-run Step 2a and use a connection owned by the current `x-api-key` user |
| `InvalidTemplate` / `InvalidDefinition` | Syntax error in the definition JSON | Check `runAfter` chains, expression syntax, and action type spelling |
| `ConnectionNotConfigured` | A connector action exists but the connection GUID is invalid or expired | Re-check `list_live_connections` for a fresh GUID |
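The table above can be folded into a small triage helper. A sketch: the substring matches and advice strings are paraphrased from the table, not server output, and the helper name is hypothetical.

```python
# Paraphrased from the deployment-error table above (illustrative, not exhaustive).
FIXES = {
    "missing from connectionReferences": "use the connectionReferences KEY in host.connectionName, not the GUID",
    "ConnectionAuthorizationFailed": "re-run list_live_connections and use a connection owned by this user",
    "InvalidTemplate": "check runAfter chains, expression syntax, and action type spelling",
    "InvalidDefinition": "check runAfter chains, expression syntax, and action type spelling",
    "ConnectionNotConfigured": "re-check list_live_connections for a fresh connection GUID",
}

def suggest_fix(error_message):
    # Return the first matching remediation hint from the table
    for needle, fix in FIXES.items():
        if needle in error_message:
            return fix
    return "unrecognized error — inspect the full message"

print(suggest_fix("Action 'Post_Message' is missing from connectionReferences"))
```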

---

## Step 5 — Verify the Deployment

```python
check = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)

# Confirm state
print("State:", check["properties"]["state"])  # Should be "Started"

# Confirm the action we added is there
acts = check["properties"]["definition"]["actions"]
print("Actions:", list(acts.keys()))
```

---

## Step 6 — Test the Flow

> **MANDATORY**: Before triggering any test run, **ask the user for confirmation**.
> Running a flow has real side effects — it may send emails, post Teams messages,
> write to SharePoint, start approvals, or call external APIs. Explain what the
> flow will do and wait for explicit approval before calling `trigger_live_flow`
> or `resubmit_live_flow_run`.

### Updated flows (have prior runs)

The fastest path — resubmit the most recent run:

```python
runs = mcp("get_live_flow_runs", environmentName=ENV, flowName=FLOW_ID, top=1)
if runs:
    result = mcp("resubmit_live_flow_run",
                 environmentName=ENV, flowName=FLOW_ID, runName=runs[0]["name"])
    print(result)
```

### Flows already using an HTTP trigger

Fire directly with a test payload:

```python
schema = mcp("get_live_flow_http_schema",
             environmentName=ENV, flowName=FLOW_ID)
print("Expected body:", schema.get("triggerSchema"))

result = mcp("trigger_live_flow",
             environmentName=ENV, flowName=FLOW_ID,
             body={"name": "Test", "value": 1})
print(f"Status: {result['status']}")
```

### Brand-new non-HTTP flows (Recurrence, connector triggers, etc.)

A brand-new Recurrence or connector-triggered flow has no runs to resubmit
and no HTTP endpoint to call. **Deploy with a temporary HTTP trigger first,
test the actions, then swap to the production trigger.**

#### 6a — Save the real trigger, deploy with a temporary HTTP trigger

```python
# Save the production trigger you built in Step 3
production_trigger = definition["triggers"]

# Replace with a temporary HTTP trigger
definition["triggers"] = {
    "manual": {
        "type": "Request",
        "kind": "Http",
        "inputs": {
            "schema": {}
        }
    }
}

# Deploy (create or update) with the temp trigger
result = mcp("update_live_flow",
             environmentName=ENV,
             flowName=FLOW_ID,  # omit if creating new
             definition=definition,
             connectionReferences=connection_references,
             displayName="Overdue Invoice Notifications",
             description="Deployed with temp HTTP trigger for testing")

if result.get("error") is not None:
    print("Deploy failed:", result["error"])
else:
    if not FLOW_ID:
        FLOW_ID = result["created"]
    print(f"✅ Deployed with temp HTTP trigger: {FLOW_ID}")
```

#### 6b — Fire the flow and check the result

```python
import time

# Trigger the flow
test = mcp("trigger_live_flow",
           environmentName=ENV, flowName=FLOW_ID)
print(f"Trigger response status: {test['status']}")

# Wait for the run to complete
time.sleep(15)

# Check the run result
runs = mcp("get_live_flow_runs",
           environmentName=ENV, flowName=FLOW_ID, top=1)
run = runs[0]
print(f"Run {run['name']}: {run['status']}")

if run["status"] == "Failed":
    err = mcp("get_live_flow_run_error",
              environmentName=ENV, flowName=FLOW_ID, runName=run["name"])
    root = err["failedActions"][-1]
    print(f"Root cause: {root['actionName']} → {root.get('code')}")
    # Debug and fix the definition before proceeding
    # See the flowstudio-power-automate-debug skill for the full diagnosis workflow
```

#### 6c — Swap to the production trigger

Once the test run succeeds, replace the temporary HTTP trigger with the real one:

```python
# Restore the production trigger
definition["triggers"] = production_trigger

result = mcp("update_live_flow",
             environmentName=ENV,
             flowName=FLOW_ID,
             definition=definition,
             connectionReferences=connection_references,
             description="Swapped to production trigger after successful test")

if result.get("error") is not None:
    print("Trigger swap failed:", result["error"])
else:
    print("✅ Production trigger deployed — flow is live")
```

> **Why this works**: The trigger is just the entry point — the actions are
> identical regardless of how the flow starts. Testing via HTTP trigger
> exercises all the same Compose, SharePoint, Teams, etc. actions.
>
> **Connector triggers** (e.g. "When an item is created in SharePoint"):
> If actions reference `triggerBody()` or `triggerOutputs()`, pass a
> representative test payload in `trigger_live_flow`'s `body` parameter
> that matches the shape the connector trigger would produce.

---

## Gotchas

| Mistake | Consequence | Prevention |
|---|---|---|
| Missing `connectionReferences` in deploy | 400 "Supply connectionReferences" | Always call `list_live_connections` first |
| `"operationOptions"` missing on Foreach | Parallel execution, race conditions on writes | Always add `"Sequential"` |
| `union(old_data, new_data)` | Old values override new (first-wins) | Use `union(new_data, old_data)` |
| `split()` on potentially-null string | `InvalidTemplate` crash | Wrap with `coalesce(field, '')` |
| Checking `result["error"]` exists | Always present; true error is `!= null` | Use `result.get("error") is not None` |
| Flow deployed but state is "Stopped" | Flow won't run on schedule | Check connection auth; re-enable |
| Teams "Chat with Flow bot" recipient as object | 400 `GraphUserDetailNotFound` | Use plain string with trailing semicolon (see below) |
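The `union()` row is easy to get backwards. A Python analogy of the first-wins merge the table describes (this is only an illustration of that claim; `union()` itself is a Power Automate expression, so verify the ordering against a real run):

```python
def first_wins_union(a, b):
    # keys already present in `a` are NOT overwritten by `b`
    merged = dict(a)
    for k, v in b.items():
        merged.setdefault(k, v)
    return merged

old = {"status": "stale", "owner": "alice"}
new = {"status": "fresh"}

print(first_wins_union(old, new)["status"])  # stale — old wins (the gotcha)
print(first_wins_union(new, old)["status"])  # fresh — new wins (the fix)
```

Hence the table's advice: put the data you want to win as the first argument.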

### Teams `PostMessageToConversation` — Recipient Formats

The `body/recipient` parameter format depends on the `location` value:

| Location | `body/recipient` format | Example |
|---|---|---|
| **Chat with Flow bot** | Plain email string with **trailing semicolon** | `"user@contoso.com;"` |
| **Channel** | Object with `groupId` and `channelId` | `{"groupId": "...", "channelId": "..."}` |

> **Common mistake**: passing `{"to": "user@contoso.com"}` for "Chat with Flow bot"
> returns a 400 `GraphUserDetailNotFound` error. The API expects a plain string.
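A small sketch encoding the table above; the helper name is hypothetical, and only the two recipient shapes come from the table.

```python
def teams_recipient(location, *, email=None, group_id=None, channel_id=None):
    """Build body/recipient for Teams PostMessageToConversation (per the table above)."""
    if location == "Chat with Flow bot":
        # plain string, with the required trailing semicolon
        return f"{email};"
    if location == "Channel":
        # object form with group and channel IDs
        return {"groupId": group_id, "channelId": channel_id}
    raise ValueError(f"unsupported location: {location}")

print(teams_recipient("Chat with Flow bot", email="user@contoso.com"))  # user@contoso.com;
```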

---

## Reference Files

- [flow-schema.md](references/flow-schema.md) — Full flow definition JSON schema
- [trigger-types.md](references/trigger-types.md) — Trigger type templates
- [action-patterns-core.md](references/action-patterns-core.md) — Variables, control flow, expressions
- [action-patterns-data.md](references/action-patterns-data.md) — Array transforms, HTTP, parsing
- [action-patterns-connectors.md](references/action-patterns-connectors.md) — SharePoint, Outlook, Teams, Approvals
- [build-patterns.md](references/build-patterns.md) — Complete flow definition templates (Recurrence+SP+Teams, HTTP trigger)

## Related Skills

- `flowstudio-power-automate-mcp` — Core connection setup and tool reference
- `flowstudio-power-automate-debug` — Debug failing flows after deployment

@@ -0,0 +1,542 @@
# FlowStudio MCP — Action Patterns: Connectors

SharePoint, Outlook, Teams, and Approvals connector action patterns.

> All examples assume `"runAfter"` is set appropriately.
> Replace `<connectionName>` with the **key** you used in `connectionReferences`
> (e.g. `shared_sharepointonline`, `shared_teams`). This is NOT the connection
> GUID — it is the logical reference name that links the action to its entry in
> the `connectionReferences` map.

---

## SharePoint

### SharePoint — Get Items

```json
"Get_SP_Items": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "connectionName": "<connectionName>",
      "operationId": "GetItems"
    },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "table": "MyList",
      "$filter": "Status eq 'Active'",
      "$top": 500
    }
  }
}
```

Result reference: `@outputs('Get_SP_Items')?['body/value']`

> **Dynamic OData filter with string interpolation**: inject a runtime value
> directly into the `$filter` string using `@{...}` syntax:
> ```
> "$filter": "Title eq '@{outputs('ConfirmationCode')}'"
> ```
> Note the single-quotes inside double-quotes — correct OData string literal
> syntax. Avoids a separate variable action.
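When interpolating arbitrary text this way, remember that OData string literals escape an embedded single quote by doubling it; a sketch of building such a literal safely (the helper name is illustrative):

```python
def odata_str(value):
    # OData string literal: wrap in single quotes, double any embedded quotes
    return "'" + value.replace("'", "''") + "'"

flt = "Title eq " + odata_str("O'Brien")
print(flt)  # Title eq 'O''Brien'
```

Without the doubling, a value containing a quote would break the `$filter` expression at runtime.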

> **Pagination for large lists**: by default, GetItems stops at `$top`. To auto-paginate
> beyond that, enable the pagination policy on the action. In the flow definition this
> appears as:
> ```json
> "paginationPolicy": { "minimumItemCount": 10000 }
> ```
> Set `minimumItemCount` to the maximum number of items you expect. The connector will
> keep fetching pages until that count is reached or the list is exhausted. Without this,
> flows silently return a capped result on lists with more than 5,000 items.

---

### SharePoint — Get Item (Single Row by ID)

```json
"Get_SP_Item": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "connectionName": "<connectionName>",
      "operationId": "GetItem"
    },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "table": "MyList",
      "id": "@triggerBody()?['ID']"
    }
  }
}
```

Result reference: `@body('Get_SP_Item')?['FieldName']`

> Use `GetItem` (not `GetItems` with a filter) when you already have the ID.
> Re-fetching after a trigger gives you the **current** row state, not the
> snapshot captured at trigger time — important if another process may have
> modified the item since the flow started.

---

### SharePoint — Create Item

```json
"Create_SP_Item": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "connectionName": "<connectionName>",
      "operationId": "PostItem"
    },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "table": "MyList",
      "item/Title": "@variables('myTitle')",
      "item/Status": "Active"
    }
  }
}
```

---

### SharePoint — Update Item

```json
"Update_SP_Item": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "connectionName": "<connectionName>",
      "operationId": "PatchItem"
    },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "table": "MyList",
      "id": "@item()?['ID']",
      "item/Status": "Processed"
    }
  }
}
```

---

### SharePoint — File Upsert (Create or Overwrite in Document Library)

SharePoint's `CreateFile` fails if the file already exists. To upsert (create or overwrite)
without a prior existence check, run `GetFileMetadataByPath` after **both Succeeded and Failed**
outcomes of `CreateFile` — if the create failed because the file exists, the metadata call
still returns its ID, which `UpdateFile` can then overwrite:

```json
"Create_File": {
  "type": "OpenApiConnection",
  "inputs": {
    "host": { "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
              "connectionName": "<connectionName>", "operationId": "CreateFile" },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "folderPath": "/My Library/Subfolder",
      "name": "@{variables('filename')}",
      "body": "@outputs('Compose_File_Content')"
    }
  }
},
"Get_File_Metadata_By_Path": {
  "type": "OpenApiConnection",
  "runAfter": { "Create_File": ["Succeeded", "Failed"] },
  "inputs": {
    "host": { "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
              "connectionName": "<connectionName>", "operationId": "GetFileMetadataByPath" },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "path": "/My Library/Subfolder/@{variables('filename')}"
    }
  }
},
"Update_File": {
  "type": "OpenApiConnection",
  "runAfter": { "Get_File_Metadata_By_Path": ["Succeeded", "Skipped"] },
  "inputs": {
    "host": { "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
              "connectionName": "<connectionName>", "operationId": "UpdateFile" },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "id": "@outputs('Get_File_Metadata_By_Path')?['body/{Identifier}']",
      "body": "@outputs('Compose_File_Content')"
    }
  }
}
```

> Whether `Create_File` succeeds or fails, `Get_File_Metadata_By_Path` runs (its
> `runAfter` accepts both outcomes) and resolves the file's ID — either the
> freshly created file or the pre-existing one that blocked the create.
> `Update_File` then overwrites it (it also accepts `Skipped` defensively), so
> either way you end with the latest content.
>
> **Document library system properties** — when iterating a file library result (e.g.
> from `ListFolder` or `GetFilesV2`), use curly-brace property names to access
> SharePoint's built-in file metadata. These are different from list field names:
> ```
> @item()?['{Name}']                  — filename without path (e.g. "report.csv")
> @item()?['{FilenameWithExtension}'] — same as {Name} in most connectors
> @item()?['{Identifier}']            — internal file ID for use in UpdateFile/DeleteFile
> @item()?['{FullPath}']              — full server-relative path
> @item()?['{IsFolder}']              — boolean, true for folder entries
> ```

---

### SharePoint — GetItemChanges Column Gate

When a SharePoint "item modified" trigger fires, it doesn't tell you WHICH
column changed. Use `GetItemChanges` to get per-column change flags, then gate
downstream logic on specific columns:

```json
"Get_Changes": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "connectionName": "<connectionName>",
      "operationId": "GetItemChanges"
    },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "table": "<list-guid>",
      "id": "@triggerBody()?['ID']",
      "since": "@triggerBody()?['Modified']",
      "includeDrafts": false
    }
  }
}
```

Gate on a specific column:

```json
"expression": {
  "and": [{
    "equals": [
      "@body('Get_Changes')?['Column']?['hasChanged']",
      true
    ]
  }]
}
```

> **New-item detection:** On the very first modification (version 1.0),
> `GetItemChanges` may report no prior version. Check
> `@equals(triggerBody()?['OData__UIVersionString'], '1.0')` to detect
> newly created items and skip change-gate logic for those.

---

### SharePoint — REST MERGE via HttpRequest

For cross-list updates or advanced operations not supported by the standard
Update Item connector (e.g., updating a list in a different site), use the
SharePoint REST API via the `HttpRequest` operation:

```json
"Update_Cross_List_Item": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "connectionName": "<connectionName>",
      "operationId": "HttpRequest"
    },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/target-site",
      "parameters/method": "POST",
      "parameters/uri": "/_api/web/lists(guid'<list-guid>')/items(@{variables('ItemId')})",
      "parameters/headers": {
        "Accept": "application/json;odata=nometadata",
        "Content-Type": "application/json;odata=nometadata",
        "X-HTTP-Method": "MERGE",
        "IF-MATCH": "*"
      },
      "parameters/body": "{ \"Title\": \"@{variables('NewTitle')}\", \"Status\": \"@{variables('NewStatus')}\" }"
    }
  }
}
```

> **Key headers:**
> - `X-HTTP-Method: MERGE` — tells SharePoint to do a partial update (PATCH semantics)
> - `IF-MATCH: *` — overwrites regardless of current ETag (no conflict check)
>
> The `HttpRequest` operation reuses the existing SharePoint connection — no extra
> authentication needed. Use this when the standard Update Item connector can't
> reach the target list (different site collection, or you need raw REST control).

---

### SharePoint — File as JSON Database (Read + Parse)

Use a SharePoint document library JSON file as a queryable "database" of
last-known-state records. A separate process (e.g., a Power BI dataflow) maintains
the file; the flow downloads and filters it for before/after comparisons.

```json
"Get_File": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "connectionName": "<connectionName>",
      "operationId": "GetFileContent"
    },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "id": "%252fShared%2bDocuments%252fdata.json",
      "inferContentType": false
    }
  }
},
"Parse_JSON_File": {
  "type": "Compose",
  "runAfter": { "Get_File": ["Succeeded"] },
  "inputs": "@json(decodeBase64(body('Get_File')?['$content']))"
},
"Find_Record": {
  "type": "Query",
  "runAfter": { "Parse_JSON_File": ["Succeeded"] },
  "inputs": {
    "from": "@outputs('Parse_JSON_File')",
    "where": "@equals(item()?['id'], variables('RecordId'))"
  }
}
```

> **Decode chain:** `GetFileContent` returns base64-encoded content in
> `body(...)?['$content']`. Apply `decodeBase64()` then `json()` to get a
> usable array. `Filter Array` then acts as a WHERE clause.
>
> **When to use:** when you need a lightweight "before" snapshot to detect field
> changes from a webhook payload (the "after" state). Simpler than maintaining
> a full SharePoint list mirror — works well for up to ~10K records.
>
> **File path encoding:** in the `id` parameter, SharePoint URL-encodes paths
> twice: a space becomes `+` and then `%2b`; a slash becomes `%2f` and then `%252f`.
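
The double encoding can be reproduced off-platform to precompute the `id` value. A minimal Python sketch, using only the standard library (percent-escape hex case differs from the example above, which is insignificant per RFC 3986):

```python
from urllib.parse import quote_plus, quote

def sharepoint_file_id(path: str) -> str:
    """Double-encode a server-relative file path for the SharePoint `id` parameter."""
    once = quote_plus(path)       # first pass: '/' -> %2F, ' ' -> +
    return quote(once, safe="")   # second pass: '%' -> %25, '+' -> %2B

# Case-insensitively equal to "%252fShared%2bDocuments%252fdata.json"
print(sharepoint_file_id("/Shared Documents/data.json"))
```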

---

## Outlook

### Outlook — Send Email

```json
"Send_Email": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_office365",
      "connectionName": "<connectionName>",
      "operationId": "SendEmailV2"
    },
    "parameters": {
      "emailMessage/To": "recipient@contoso.com",
      "emailMessage/Subject": "Automated notification",
      "emailMessage/Body": "<p>@{outputs('Compose_Message')}</p>",
      "emailMessage/IsHtml": true
    }
  }
}
```

---

### Outlook — Get Emails (Read Template from Folder)

```json
"Get_Email_Template": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_office365",
      "connectionName": "<connectionName>",
      "operationId": "GetEmailsV3"
    },
    "parameters": {
      "folderPath": "Id::<outlook-folder-id>",
      "fetchOnlyUnread": false,
      "includeAttachments": false,
      "top": 1,
      "importance": "Any",
      "fetchOnlyWithAttachment": false,
      "subjectFilter": "My Email Template Subject"
    }
  }
}
```

Access subject and body:

```
@first(outputs('Get_Email_Template')?['body/value'])?['subject']
@first(outputs('Get_Email_Template')?['body/value'])?['body']
```

> **Outlook-as-CMS pattern:** store a template email in a dedicated Outlook folder.
> Set `fetchOnlyUnread: false` so the template persists after first use.
> Non-technical users can update subject and body by editing that email —
> no flow changes required. Pass subject and body directly into `SendEmailV2`.
>
> To get a folder ID: in Outlook on the web, right-click the folder → open in
> new tab — the folder GUID is in the URL. Prefix it with `Id::` in `folderPath`.

---

## Teams

### Teams — Post Message

```json
"Post_Teams_Message": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_teams",
      "connectionName": "<connectionName>",
      "operationId": "PostMessageToConversation"
    },
    "parameters": {
      "poster": "Flow bot",
      "location": "Channel",
      "body/recipient": {
        "groupId": "<team-id>",
        "channelId": "<channel-id>"
      },
      "body/messageBody": "@outputs('Compose_Message')"
    }
  }
}
```

#### Variant: Group Chat (1:1 or Multi-Person)

To post to a group chat instead of a channel, use `"location": "Group chat"` with
a thread ID as the recipient:

```json
"Post_To_Group_Chat": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_teams",
      "connectionName": "<connectionName>",
      "operationId": "PostMessageToConversation"
    },
    "parameters": {
      "poster": "Flow bot",
      "location": "Group chat",
      "body/recipient": "19:<thread-hash>@thread.v2",
      "body/messageBody": "@outputs('Compose_Message')"
    }
  }
}
```

For 1:1 ("Chat with Flow bot"), use `"location": "Chat with Flow bot"` and set
`body/recipient` to the user's email address.

> **Active-user gate:** when sending notifications in a loop, check that the recipient's
> Azure AD account is enabled before posting — this avoids failed deliveries to departed
> staff:
> ```json
> "Check_User_Active": {
>   "type": "OpenApiConnection",
>   "inputs": {
>     "host": { "apiId": "/providers/Microsoft.PowerApps/apis/shared_office365users",
>               "operationId": "UserProfile_V2" },
>     "parameters": { "id": "@{item()?['Email']}" }
>   }
> }
> ```
> Then gate: `@equals(body('Check_User_Active')?['accountEnabled'], true)`

---

## Approvals

### Split Approval (Create → Wait)

The standard "Start and wait for an approval" is a single blocking action.
For more control (e.g., posting the approval link in Teams, or adding a timeout
scope), split it into two actions: `CreateAnApproval` (fire-and-forget) then
`WaitForAnApproval` (webhook pause).

```json
"Create_Approval": {
  "type": "OpenApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_approvals",
      "connectionName": "<connectionName>",
      "operationId": "CreateAnApproval"
    },
    "parameters": {
      "approvalType": "CustomResponse/Result",
      "ApprovalCreationInput/title": "Review: @{variables('ItemTitle')}",
      "ApprovalCreationInput/assignedTo": "approver@contoso.com",
      "ApprovalCreationInput/details": "Please review and select an option.",
      "ApprovalCreationInput/responseOptions": ["Approve", "Reject", "Defer"],
      "ApprovalCreationInput/enableNotifications": true,
      "ApprovalCreationInput/enableReassignment": true
    }
  }
},
"Wait_For_Approval": {
  "type": "OpenApiConnectionWebhook",
  "runAfter": { "Create_Approval": ["Succeeded"] },
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_approvals",
      "connectionName": "<connectionName>",
      "operationId": "WaitForAnApproval"
    },
    "parameters": {
      "approvalName": "@body('Create_Approval')?['name']"
    }
  }
}
```

> **`approvalType` options:**
> - `"Approve/Reject - First to respond"` — binary, first responder wins
> - `"Approve/Reject - Everyone must approve"` — requires all assignees
> - `"CustomResponse/Result"` — define your own response buttons
>
> After `Wait_For_Approval`, read the outcome:
> ```
> @body('Wait_For_Approval')?['outcome'] → "Approve", "Reject", or a custom option
> @body('Wait_For_Approval')?['responses'][0]?['responder']?['displayName']
> @body('Wait_For_Approval')?['responses'][0]?['comments']
> ```
>
> The split pattern lets you insert actions between create and wait — e.g.,
> posting the approval link to Teams, starting a timeout scope, or logging
> the pending approval to a tracking list.

@@ -0,0 +1,542 @@

# FlowStudio MCP — Action Patterns: Core

Variables, control flow, and expression patterns for Power Automate flow definitions.

> All examples assume `"runAfter"` is set appropriately.
> Replace `<connectionName>` with the **key** you used in your `connectionReferences` map
> (e.g. `shared_teams`, `shared_office365`) — NOT the connection GUID.

---

## Data & Variables

### Compose (Store a Value)

```json
"Compose_My_Value": {
  "type": "Compose",
  "runAfter": {},
  "inputs": "@variables('myVar')"
}
```

Reference: `@outputs('Compose_My_Value')`

---

### Initialize Variable

```json
"Init_Counter": {
  "type": "InitializeVariable",
  "runAfter": {},
  "inputs": {
    "variables": [{
      "name": "counter",
      "type": "Integer",
      "value": 0
    }]
  }
}
```

Types: `"Integer"`, `"Float"`, `"Boolean"`, `"String"`, `"Array"`, `"Object"`

---

### Set Variable

```json
"Set_Counter": {
  "type": "SetVariable",
  "runAfter": {},
  "inputs": {
    "name": "counter",
    "value": "@add(variables('counter'), 1)"
  }
}
```

---

### Append to Array Variable

```json
"Collect_Item": {
  "type": "AppendToArrayVariable",
  "runAfter": {},
  "inputs": {
    "name": "resultArray",
    "value": "@item()"
  }
}
```

---

### Increment Variable

```json
"Increment_Counter": {
  "type": "IncrementVariable",
  "runAfter": {},
  "inputs": {
    "name": "counter",
    "value": 1
  }
}
```

> Use `IncrementVariable` (not `SetVariable` with `add()`) for counters inside loops —
> it is atomic and avoids expression errors when the variable is used elsewhere in the
> same iteration. `value` can be any integer or expression, e.g. `@mul(item()?['Interval'], 60)`
> to advance a Unix timestamp cursor by N minutes.

---

## Control Flow

### Condition (If/Else)

```json
"Check_Status": {
  "type": "If",
  "runAfter": {},
  "expression": {
    "and": [{ "equals": ["@item()?['Status']", "Active"] }]
  },
  "actions": {
    "Handle_Active": {
      "type": "Compose",
      "runAfter": {},
      "inputs": "Active user: @{item()?['Name']}"
    }
  },
  "else": {
    "actions": {
      "Handle_Inactive": {
        "type": "Compose",
        "runAfter": {},
        "inputs": "Inactive user"
      }
    }
  }
}
```

Comparison operators: `equals`, `not`, `greater`, `greaterOrEquals`, `less`, `lessOrEquals`, `contains`
Logical: `and: [...]`, `or: [...]`

---

### Switch

```json
"Route_By_Type": {
  "type": "Switch",
  "runAfter": {},
  "expression": "@triggerBody()?['type']",
  "cases": {
    "Case_Email": {
      "case": "email",
      "actions": { "Process_Email": { "type": "Compose", "runAfter": {}, "inputs": "email" } }
    },
    "Case_Teams": {
      "case": "teams",
      "actions": { "Process_Teams": { "type": "Compose", "runAfter": {}, "inputs": "teams" } }
    }
  },
  "default": {
    "actions": { "Unknown_Type": { "type": "Compose", "runAfter": {}, "inputs": "unknown" } }
  }
}
```

---

### Scope (Grouping / Try-Catch)

Wrap related actions in a Scope to give them a shared name, collapse them in the
designer, and — most importantly — handle their errors as a unit.

```json
"Scope_Get_Customer": {
  "type": "Scope",
  "runAfter": {},
  "actions": {
    "HTTP_Get_Customer": {
      "type": "Http",
      "runAfter": {},
      "inputs": {
        "method": "GET",
        "uri": "https://api.example.com/customers/@{variables('customerId')}"
      }
    },
    "Compose_Email": {
      "type": "Compose",
      "runAfter": { "HTTP_Get_Customer": ["Succeeded"] },
      "inputs": "@outputs('HTTP_Get_Customer')?['body/email']"
    }
  }
},
"Handle_Scope_Error": {
  "type": "Compose",
  "runAfter": { "Scope_Get_Customer": ["Failed", "TimedOut"] },
  "inputs": "Scope failed: @{result('Scope_Get_Customer')?[0]?['error']?['message']}"
}
```

> Reference scope results: `@result('Scope_Get_Customer')` returns an array of action
> outcomes. Use `runAfter: {"MyScope": ["Failed", "TimedOut"]}` on a follow-up action
> to create try/catch semantics without a Terminate.

---

### Foreach (Sequential)

```json
"Process_Each_Item": {
  "type": "Foreach",
  "runAfter": {},
  "foreach": "@outputs('Get_Items')?['body/value']",
  "operationOptions": "Sequential",
  "actions": {
    "Handle_Item": {
      "type": "Compose",
      "runAfter": {},
      "inputs": "@item()?['Title']"
    }
  }
}
```

> Always include `"operationOptions": "Sequential"` unless parallel execution is intentional.

---

### Foreach (Parallel with Concurrency Limit)

```json
"Process_Each_Item_Parallel": {
  "type": "Foreach",
  "runAfter": {},
  "foreach": "@body('Get_SP_Items')?['value']",
  "runtimeConfiguration": {
    "concurrency": {
      "repetitions": 20
    }
  },
  "actions": {
    "HTTP_Upsert": {
      "type": "Http",
      "runAfter": {},
      "inputs": {
        "method": "POST",
        "uri": "https://api.example.com/contacts/@{item()?['Email']}"
      }
    }
  }
}
```

> Set `repetitions` to control how many items are processed simultaneously.
> Practical values: `5–10` for external API calls (respect rate limits),
> `20–50` for internal/fast operations.
> Omit `runtimeConfiguration.concurrency` entirely for the platform default
> (currently 50). Do NOT use `"operationOptions": "Sequential"` and concurrency together.

---

### Wait (Delay)

```json
"Delay_10_Minutes": {
  "type": "Wait",
  "runAfter": {},
  "inputs": {
    "interval": {
      "count": 10,
      "unit": "Minute"
    }
  }
}
```

Valid `unit` values: `"Second"`, `"Minute"`, `"Hour"`, `"Day"`

> Use a Delay + re-fetch as a deduplication guard: wait for any competing process
> to complete, then re-read the record before acting. This avoids double-processing
> when multiple triggers or manual edits can race on the same item.

---

### Terminate (Success or Failure)

```json
"Terminate_Success": {
  "type": "Terminate",
  "runAfter": {},
  "inputs": {
    "runStatus": "Succeeded"
  }
},
"Terminate_Failure": {
  "type": "Terminate",
  "runAfter": { "Risky_Action": ["Failed"] },
  "inputs": {
    "runStatus": "Failed",
    "runError": {
      "code": "StepFailed",
      "message": "@{outputs('Get_Error_Message')}"
    }
  }
}
```

---

### Do Until (Loop Until Condition)

Repeats a block of actions until an exit condition becomes true.
Use when the number of iterations is not known upfront (e.g. paginating an API,
walking a time range, polling until a status changes).

```json
"Do_Until_Done": {
  "type": "Until",
  "runAfter": {},
  "expression": "@greaterOrEquals(variables('cursor'), variables('endValue'))",
  "limit": {
    "count": 5000,
    "timeout": "PT5H"
  },
  "actions": {
    "Do_Work": {
      "type": "Compose",
      "runAfter": {},
      "inputs": "@variables('cursor')"
    },
    "Advance_Cursor": {
      "type": "IncrementVariable",
      "runAfter": { "Do_Work": ["Succeeded"] },
      "inputs": {
        "name": "cursor",
        "value": 1
      }
    }
  }
}
```

> Always set `limit.count` and `limit.timeout` explicitly — the platform defaults are
> low (60 iterations, 1 hour). For time-range walkers use `limit.count: 5000` and
> `limit.timeout: "PT5H"` (an ISO 8601 duration).
>
> The exit condition is evaluated **after** each iteration — the loop body always runs
> at least once, even if the condition is already true at entry. Initialise your cursor
> variable before the loop so the condition and the first pass behave correctly.

---

### Async Polling with RequestId Correlation

When an API starts a long-running job asynchronously (e.g. Power BI dataset refresh,
report generation, batch export), the trigger call returns a request ID. Capture it
from the **response header**, then poll a status endpoint filtering by that exact ID:

```json
"Start_Job": {
  "type": "Http",
  "inputs": { "method": "POST", "uri": "https://api.example.com/jobs" }
},
"Capture_Request_ID": {
  "type": "Compose",
  "runAfter": { "Start_Job": ["Succeeded"] },
  "inputs": "@outputs('Start_Job')?['headers/X-Request-Id']"
},
"Initialize_Status": {
  "type": "InitializeVariable",
  "inputs": { "variables": [{ "name": "jobStatus", "type": "String", "value": "Running" }] }
},
"Poll_Until_Done": {
  "type": "Until",
  "expression": "@not(equals(variables('jobStatus'), 'Running'))",
  "limit": { "count": 60, "timeout": "PT30M" },
  "actions": {
    "Delay": { "type": "Wait", "inputs": { "interval": { "count": 20, "unit": "Second" } } },
    "Get_History": {
      "type": "Http",
      "runAfter": { "Delay": ["Succeeded"] },
      "inputs": { "method": "GET", "uri": "https://api.example.com/jobs/history" }
    },
    "Filter_This_Job": {
      "type": "Query",
      "runAfter": { "Get_History": ["Succeeded"] },
      "inputs": {
        "from": "@outputs('Get_History')?['body/items']",
        "where": "@equals(item()?['requestId'], outputs('Capture_Request_ID'))"
      }
    },
    "Set_Status": {
      "type": "SetVariable",
      "runAfter": { "Filter_This_Job": ["Succeeded"] },
      "inputs": {
        "name": "jobStatus",
        "value": "@first(body('Filter_This_Job'))?['status']"
      }
    }
  }
},
"Handle_Failure": {
  "type": "If",
  "runAfter": { "Poll_Until_Done": ["Succeeded"] },
  "expression": { "equals": ["@variables('jobStatus')", "Failed"] },
  "actions": { "Terminate_Failed": { "type": "Terminate", "inputs": { "runStatus": "Failed" } } },
  "else": { "actions": {} }
}
```

Access response headers: `@outputs('Start_Job')?['headers/X-Request-Id']`

> **Status variable initialisation:** set a sentinel value (`"Running"`, `"Unknown"`) before
> the loop. The exit condition tests for any value other than the sentinel.
> This way an empty poll result (job not yet in history) leaves the variable unchanged
> and the loop continues — it doesn't accidentally exit on null.
>
> **Filter before extracting:** always `Filter Array` the history to your specific
> request ID before calling `first()`. History endpoints return all jobs; without
> filtering, status from a different concurrent job can corrupt your poll.

---

### runAfter Fallback (Failed → Alternative Action)

Route to a fallback action when a primary action fails — without a Condition block.
Simply set `runAfter` on the fallback to accept `["Failed"]` from the primary:

```json
"HTTP_Get_Hi_Res": {
  "type": "Http",
  "runAfter": {},
  "inputs": { "method": "GET", "uri": "https://api.example.com/data?resolution=hi-res" }
},
"HTTP_Get_Low_Res": {
  "type": "Http",
  "runAfter": { "HTTP_Get_Hi_Res": ["Failed"] },
  "inputs": { "method": "GET", "uri": "https://api.example.com/data?resolution=low-res" }
}
```

> Actions that follow can use `runAfter` accepting both `["Succeeded", "Skipped"]` to
> handle either path — see **Fan-In Join Gate** below.

---

### Fan-In Join Gate (Merge Two Mutually Exclusive Branches)

When two branches are mutually exclusive (only one can succeed per run), use a single
downstream action that accepts `["Succeeded", "Skipped"]` from **both** branches.
The gate fires exactly once regardless of which branch ran:

```json
"Increment_Count": {
  "type": "IncrementVariable",
  "runAfter": {
    "Update_Hi_Res_Metadata": ["Succeeded", "Skipped"],
    "Update_Low_Res_Metadata": ["Succeeded", "Skipped"]
  },
  "inputs": { "name": "LoopCount", "value": 1 }
}
```

> This avoids duplicating the downstream action in each branch. The key insight:
> whichever branch was skipped reports `Skipped` — the gate accepts that state and
> fires once. Only works cleanly when the two branches are truly mutually exclusive
> (e.g. one is `runAfter: [...Failed]` of the other).

---
## Expressions

### Common Expression Patterns

```
Null-safe field access:    @item()?['FieldName']
Null guard:                @coalesce(item()?['Name'], 'Unknown')
String format:             @{variables('firstName')} @{variables('lastName')}
Date today:                @utcNow()
Formatted date:            @formatDateTime(utcNow(), 'dd/MM/yyyy')
Add days:                  @addDays(utcNow(), 7)
Array length:              @length(variables('myArray'))
Filter array:              use the "Filter array" action (no inline filter expression exists in PA)
Union (new wins):          @union(outputs('Old_Data'), body('New_Data'))
Sort:                      @sort(variables('myArray'), 'Date')
Unix timestamp → date:     @formatDateTime(addSeconds('1970-01-01', triggerBody()?['created']), 'yyyy-MM-dd')
Date → Unix milliseconds:  @div(sub(ticks(startOfDay(item()?['Created'])), ticks(formatDateTime('1970-01-01Z','o'))), 10000)
Date → Unix seconds:       @div(sub(ticks(item()?['Start']), ticks('1970-01-01T00:00:00Z')), 10000000)
Unix seconds → datetime:   @addSeconds('1970-01-01T00:00:00Z', int(variables('Unix')))
Coalesce as no-else:       @coalesce(outputs('Optional_Step'), outputs('Default_Step'))
Flow elapsed minutes:      @div(float(sub(ticks(utcNow()), ticks(outputs('Flow_Start')))), 600000000)
HH:mm time string:         @formatDateTime(outputs('Local_Datetime'), 'HH:mm')
Response header:           @outputs('HTTP_Action')?['headers/X-Request-Id']
Array max (by field):      @reverse(sort(body('Select_Items'), 'Date'))[0]
Integer day span:          @int(split(dateDifference(outputs('Start'), outputs('End')), '.')[0])
ISO week number:           @div(add(dayofyear(addDays(subtractFromTime(date, sub(dayofweek(date),1), 'Day'), 3)), 6), 7)
Join errors to string:     @if(equals(length(variables('Errors')),0), null, concat(join(variables('Errors'),', '),' not found.'))
Normalize before compare:  @replace(coalesce(outputs('Value'),''),'_',' ')
Robust non-empty check:    @greater(length(trim(coalesce(string(outputs('Val')), ''))), 0)
```
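
The tick arithmetic behind the Unix conversions can be sanity-checked off-platform. A minimal Python sketch, stdlib only, assuming `ticks()` counts 100-nanosecond intervals since 0001-01-01T00:00:00Z (the Logic Apps definition):

```python
from datetime import datetime, timezone

TICKS_PER_SECOND = 10_000_000  # ticks() counts 100 ns intervals

def ticks(dt: datetime) -> int:
    # Ticks since 0001-01-01T00:00:00Z, computed with exact integer math
    delta = dt - datetime(1, 1, 1, tzinfo=timezone.utc)
    return (delta.days * 86_400 + delta.seconds) * TICKS_PER_SECOND + delta.microseconds * 10

def to_unix_seconds(dt: datetime) -> int:
    # Mirrors: div(sub(ticks(date), ticks('1970-01-01T00:00:00Z')), 10000000)
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    return (ticks(dt) - ticks(epoch)) // TICKS_PER_SECOND

def to_unix_millis(dt: datetime) -> int:
    # Mirrors: div(sub(ticks(date), ticks('1970-01-01Z')), 10000)
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    return (ticks(dt) - ticks(epoch)) // 10_000

print(to_unix_seconds(datetime(2025, 1, 1, tzinfo=timezone.utc)))  # 1735689600
```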

### Newlines in Expressions

> **`\n` does NOT produce a newline inside Power Automate expressions.** It is
> treated as a literal backslash + `n` and will either appear verbatim or cause
> a validation error.

Use `decodeUriComponent('%0a')` wherever you need a newline character:

```
Newline (LF): decodeUriComponent('%0a')
CRLF:         decodeUriComponent('%0d%0a')
```

Example — multi-line Teams or email body via `concat()`:

```json
"Compose_Message": {
  "type": "Compose",
  "inputs": "@concat('Hi ', outputs('Get_User')?['body/displayName'], ',', decodeUriComponent('%0a%0a'), 'Your report is ready.', decodeUriComponent('%0a'), '- The Team')"
}
```

Example — `join()` with a newline separator:

```json
"Compose_List": {
  "type": "Compose",
  "inputs": "@join(body('Select_Names'), decodeUriComponent('%0a'))"
}
```

> This is the only reliable way to embed newlines in dynamically built strings
> in Power Automate flow definitions (confirmed against the Logic Apps runtime).
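
The decoded values are ordinary percent-escapes, as a quick Python check confirms (the greeting text is illustrative):

```python
from urllib.parse import unquote

# decodeUriComponent('%0a') behaves like standard percent-decoding
assert unquote("%0a") == "\n"        # LF
assert unquote("%0d%0a") == "\r\n"   # CRLF

# Building a multi-line string the same way concat(...) does in the flow
message = "Hi Dana," + unquote("%0a%0a") + "Your report is ready."
print(message)
```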

---

### Sum an array (XPath trick)

Power Automate has no native `sum()` function. Use XPath on XML instead:

```json
"Prepare_For_Sum": {
  "type": "Compose",
  "runAfter": {},
  "inputs": { "root": { "numbers": "@body('Select_Amounts')" } }
},
"Sum": {
  "type": "Compose",
  "runAfter": { "Prepare_For_Sum": ["Succeeded"] },
  "inputs": "@xpath(xml(outputs('Prepare_For_Sum')), 'sum(/root/numbers)')"
}
```

`Select_Amounts` must output a flat array of numbers (use a **Select** action to
extract a single numeric field first). The result is a number you can use directly
in conditions or calculations.

> This is the only way to aggregate (sum/min/max) an array without a loop in Power Automate.

@@ -0,0 +1,735 @@

# FlowStudio MCP — Action Patterns: Data Transforms

Array operations, HTTP calls, parsing, and data transformation patterns.

> All examples assume `"runAfter"` is set appropriately.
> `<connectionName>` is the **key** in `connectionReferences` (e.g. `shared_sharepointonline`), not the GUID.
> The GUID goes in the map value's `connectionName` property.

---

## Array Operations

### Select (Reshape / Project an Array)

Transforms each item in an array, keeping only the columns you need or renaming them.
Avoids carrying large objects through the rest of the flow.

```json
"Select_Needed_Columns": {
  "type": "Select",
  "runAfter": {},
  "inputs": {
    "from": "@outputs('HTTP_Get_Subscriptions')?['body/data']",
    "select": {
      "id": "@item()?['id']",
      "status": "@item()?['status']",
      "trial_end": "@item()?['trial_end']",
      "cancel_at": "@item()?['cancel_at']",
      "interval": "@item()?['plan']?['interval']"
    }
  }
}
```

Result reference: `@body('Select_Needed_Columns')` — returns a direct array of reshaped objects.

> Use Select before looping or filtering to reduce payload size and simplify
> downstream expressions. Works on any array — SP results, HTTP responses, variables.
>
> **Tips:**
> - **Single-to-array coercion:** when an API returns a single object but you need
>   Select (which requires an array), wrap it: `@array(body('Get_Employee')?['data'])`.
>   The output is a 1-element array — access results via `?[0]?['field']`.
> - **Null-normalize optional fields:** use `@if(empty(item()?['field']), null, item()?['field'])`
>   on every optional field to normalize empty strings, missing properties, and empty
>   objects to an explicit `null`. This ensures consistent downstream `@equals(..., null)` checks.
> - **Flatten nested objects:** project nested properties into flat fields:
>   ```
>   "manager_name": "@if(empty(item()?['manager']?['name']), null, item()?['manager']?['name'])"
>   ```
>   This enables direct field-level comparison with a flat schema from another source.
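
The same projection can be prototyped before encoding it as a Select action. A Python sketch with hypothetical field names, showing the null-normalize and flatten steps:

```python
def project(items):
    """Sketch of the Select action: keep needed columns, null-normalize, flatten."""
    out = []
    for it in items:
        manager = it.get("manager") or {}          # missing/None nested object -> {}
        out.append({
            "id": it.get("id"),
            "status": it.get("status") or None,    # '' or missing -> None
            "manager_name": manager.get("name") or None,
        })
    return out

rows = project([
    {"id": 1, "status": "active", "manager": {"name": "Sam"}},
    {"id": 2, "status": "", "manager": {}},
])
```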

---

### Filter Array (Query)

Filters an array to items matching a condition. Power Automate has no inline
`filter()` expression, so this action is the way to filter — and even complex
multi-condition logic stays readable in the `where` clause:

```json
"Filter_Active_Subscriptions": {
  "type": "Query",
  "runAfter": {},
  "inputs": {
    "from": "@body('Select_Needed_Columns')",
    "where": "@and(or(equals(item().status, 'trialing'), equals(item().status, 'active')), equals(item().cancel_at, null))"
  }
}
```

Result reference: `@body('Filter_Active_Subscriptions')` — the filtered array, directly.

> Tip: run multiple Filter Array actions on the same source array to create
> named buckets (e.g. active, being-canceled, fully-canceled), then use
> `coalesce(first(body('Filter_A')), first(body('Filter_B')), ...)` to pick
> the highest-priority match without any loops.
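
The bucket-priority trick reduces to "first item of the first non-empty bucket". A Python sketch with hypothetical bucket names:

```python
def first_match(*buckets):
    """Mirror of coalesce(first(A), first(B), ...): first item of first non-empty bucket."""
    for bucket in buckets:
        if bucket:
            return bucket[0]
    return None

active = []                          # Filter_A result (empty this run)
being_canceled = [{"id": "sub_2"}]   # Filter_B result
print(first_match(active, being_canceled))  # {'id': 'sub_2'}
```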

---

### Create CSV Table (Array → CSV String)

Converts an array of objects into a CSV-formatted string — no connector call, no code.
Use after a `Select` or `Filter Array` to export data or pass it to a file-write action.

```json
"Create_CSV": {
  "type": "Table",
  "runAfter": {},
  "inputs": {
    "from": "@body('Select_Output_Columns')",
    "format": "CSV"
  }
}
```

Result reference: `@body('Create_CSV')` — a plain string with a header row + data rows.

```json
// Custom column order / renamed headers:
"Create_CSV_Custom": {
  "type": "Table",
  "inputs": {
    "from": "@body('Select_Output_Columns')",
    "format": "CSV",
    "columns": [
      { "header": "Date", "value": "@item()?['transactionDate']" },
      { "header": "Amount", "value": "@item()?['amount']" },
      { "header": "Description", "value": "@item()?['description']" }
    ]
  }
}
```

> Without `columns`, headers are taken from the object property names in the source array.
> With `columns`, you control header names and column order explicitly.
>
> The output is a raw string. Write it to a file with `CreateFile` or `UpdateFile`
> (set `body` to `@body('Create_CSV')`), or store it in a variable with `SetVariable`.
>
> If source data came from Power BI's `ExecuteDatasetQuery`, column names will be
> wrapped in square brackets (e.g. `[Amount]`). Strip them before writing:
> `@replace(replace(body('Create_CSV'),'[',''),']','')`
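
The bracket-stripping expression is a plain global replace; its behavior is easy to check in Python:

```python
def strip_brackets(csv_text: str) -> str:
    # Mirrors: replace(replace(body('Create_CSV'),'[',''),']','')
    # Note: this removes every '[' and ']' anywhere in the string,
    # including any that appear inside data values, not just in headers.
    return csv_text.replace("[", "").replace("]", "")

print(strip_brackets("[Date],[Amount]\n2025-01-06,42"))
```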

---

### range() + Select for Array Generation

`range(0, N)` produces an integer sequence `[0, 1, 2, …, N-1]`. Pipe it through
a Select action to generate date series, index grids, or any computed array
without a loop:

```json
// Generate 14 consecutive dates starting from a base date
"Generate_Date_Series": {
  "type": "Select",
  "inputs": {
    "from": "@range(0, 14)",
    "select": "@addDays(outputs('Base_Date'), item(), 'yyyy-MM-dd')"
  }
}
```

Result: `@body('Generate_Date_Series')` → `["2025-01-06", "2025-01-07", …, "2025-01-19"]`

```json
// Flatten a 2D array (rows × cols) into 1D using arithmetic indexing
"Flatten_Grid": {
  "type": "Select",
  "inputs": {
    "from": "@range(0, mul(length(outputs('Rows')), length(outputs('Cols'))))",
    "select": {
      "row": "@outputs('Rows')[div(item(), length(outputs('Cols')))]",
      "col": "@outputs('Cols')[mod(item(), length(outputs('Cols')))]"
    }
  }
}
```

> `range()` is zero-based. The Cartesian product pattern above uses `div(i, cols)`
> for the row index and `mod(i, cols)` for the column index — equivalent to a
> nested for-loop flattened into a single pass. Useful for generating time-slot ×
> date grids, shift × location assignments, etc.
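
The index arithmetic is easy to verify off-platform; a Python sketch of the same Cartesian product:

```python
def cartesian(rows, cols):
    # Mirrors range() + Select: div(i, len(cols)) picks the row, mod(i, len(cols)) the col
    return [{"row": rows[i // len(cols)], "col": cols[i % len(cols)]}
            for i in range(len(rows) * len(cols))]

print(cartesian(["Mon", "Tue"], ["09:00", "10:00"]))
# [{'row': 'Mon', 'col': '09:00'}, {'row': 'Mon', 'col': '10:00'},
#  {'row': 'Tue', 'col': '09:00'}, {'row': 'Tue', 'col': '10:00'}]
```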
|
||||
|
||||
---

### Dynamic Dictionary via json(concat(join()))

Power Automate has no native dictionary type. When you need O(1) key→value
lookups at runtime, build one from an array using Select + join + json:

```json
"Build_Key_Value_Pairs": {
  "type": "Select",
  "inputs": {
    "from": "@body('Get_Lookup_Items')?['value']",
    "select": "@concat('\"', item()?['Key'], '\":\"', item()?['Value'], '\"')"
  }
},
"Assemble_Dictionary": {
  "type": "Compose",
  "inputs": "@json(concat('{', join(body('Build_Key_Value_Pairs'), ','), '}'))"
}
```

Lookup: `@outputs('Assemble_Dictionary')?['myKey']`

```json
// Practical example: date → rate-code lookup for business rules
"Build_Holiday_Rates": {
  "type": "Select",
  "inputs": {
    "from": "@body('Get_Holidays')?['value']",
    "select": "@concat('\"', formatDateTime(item()?['Date'], 'yyyy-MM-dd'), '\":\"', item()?['RateCode'], '\"')"
  }
},
"Holiday_Dict": {
  "type": "Compose",
  "inputs": "@json(concat('{', join(body('Build_Holiday_Rates'), ','), '}'))"
}
```

Then inside a loop: `@coalesce(outputs('Holiday_Dict')?[item()?['Date']], 'Standard')`

> The `json(concat('{', join(...), '}'))` pattern works for string values. For numeric
> or boolean values, omit the inner escaped quotes around the value portion.
> Keys must be unique — duplicate keys silently overwrite earlier ones.
> This replaces deeply nested `if(equals(key,'A'),'X', if(equals(key,'B'),'Y', ...))` chains.

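The Select → join → json chain maps directly onto ordinary string assembly plus a JSON parse. A Python sketch with illustrative lookup rows (not from a real list):

```python
import json

# Sketch of the Select -> join -> json dictionary build;
# the lookup rows below are illustrative sample data.
items = [
    {"Key": "2025-12-25", "Value": "Holiday"},
    {"Key": "2025-12-26", "Value": "Holiday"},
]

# Select: one '"key":"value"' fragment per row
pairs = ['"{}":"{}"'.format(i["Key"], i["Value"]) for i in items]

# Compose: json(concat('{', join(pairs, ','), '}'))
dictionary = json.loads("{" + ",".join(pairs) + "}")

# coalesce(dict?[key], 'Standard') equivalent
rate = dictionary.get("2025-12-24", "Standard")
```

The final `dict.get(key, default)` line plays the role of `coalesce()` in the flow: a miss falls back to the default rather than failing.
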
---

### union() for Changed-Field Detection

When you need to find records where *any* of several fields has changed, run one
`Filter Array` per field and `union()` the results. This avoids a complex
multi-condition filter and produces a clean deduplicated set:

```json
"Filter_Name_Changed": {
  "type": "Query",
  "inputs": {
    "from": "@body('Existing_Records')",
    "where": "@not(equals(item()?['name'], item()?['dest_name']))"
  }
},
"Filter_Status_Changed": {
  "type": "Query",
  "inputs": {
    "from": "@body('Existing_Records')",
    "where": "@not(equals(item()?['status'], item()?['dest_status']))"
  }
},
"All_Changed": {
  "type": "Compose",
  "inputs": "@union(body('Filter_Name_Changed'), body('Filter_Status_Changed'))"
}
```

Reference: `@outputs('All_Changed')` — deduplicated array of rows where anything changed.

> `union()` deduplicates by object identity, so a row that changed in both fields
> appears once. Add more `Filter_*_Changed` inputs to `union()` as needed:
> `@union(body('F1'), body('F2'), body('F3'))`

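The per-field filter plus deduplicating union can be sketched as plain list comprehensions. The records below are illustrative sample rows, not from a real source:

```python
# Sketch of per-field Filter Array + union() dedup; records are illustrative.
records = [
    {"key": 1, "name": "A", "dest_name": "A", "status": "open", "dest_status": "closed"},
    {"key": 2, "name": "B", "dest_name": "X", "status": "open", "dest_status": "open"},
    {"key": 3, "name": "C", "dest_name": "C", "status": "done", "dest_status": "done"},
]

name_changed = [r for r in records if r["name"] != r["dest_name"]]
status_changed = [r for r in records if r["status"] != r["dest_status"]]

# union(): keep the first occurrence of each row, drop duplicates
seen, all_changed = set(), []
for r in name_changed + status_changed:
    if r["key"] not in seen:
        seen.add(r["key"])
        all_changed.append(r)
# key 3 (nothing changed) is excluded; keys 1 and 2 appear once each
```
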
---

### File-Content Change Gate

Before running expensive processing on a file or blob, compare its current content
to a stored baseline. Skip entirely if nothing has changed — makes sync flows
idempotent and safe to re-run or schedule aggressively.

```json
"Get_File_From_Source": { ... },
"Get_Stored_Baseline": { ... },
"Condition_File_Changed": {
  "type": "If",
  "expression": {
    "not": {
      "equals": [
        "@base64(body('Get_File_From_Source'))",
        "@body('Get_Stored_Baseline')"
      ]
    }
  },
  "actions": {
    "Update_Baseline": { "...": "overwrite stored copy with new content" },
    "Process_File": { "...": "all expensive work goes here" }
  },
  "else": { "actions": {} }
}
```

> Store the baseline as a file in SharePoint or blob storage — `base64()`-encode the
> live content before comparing so binary and text files are handled uniformly.
> Write the new baseline **before** processing so a re-run after a partial failure
> does not re-process the same file again.

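The gate itself is one base64 encode and one string comparison. A minimal Python sketch with illustrative file contents:

```python
import base64

# Sketch of the change gate: base64-encode the live content and
# compare it to the stored baseline. Contents are illustrative.
baseline = base64.b64encode(b"report v1").decode()  # stored from last run

live = b"report v2"                                 # freshly fetched file
live_b64 = base64.b64encode(live).decode()

changed = live_b64 != baseline
# only update the baseline and run the expensive work when changed is True
```
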
---

### Set-Join for Sync (Update Detection without Nested Loops)

When syncing a source collection into a destination (e.g. API response → SharePoint list,
CSV → database), avoid nested `Apply to each` loops to find changed records.
Instead, **project flat key arrays** and use `contains()` to perform set operations —
zero nested loops, and the final loop only touches changed items.

**Full insert/update/delete sync pattern:**

```json
// Step 1 — Project a flat key array from the DESTINATION (e.g. SharePoint)
"Select_Dest_Keys": {
  "type": "Select",
  "inputs": {
    "from": "@outputs('Get_Dest_Items')?['body/value']",
    "select": "@item()?['Title']"
  }
},
// → ["KEY1", "KEY2", "KEY3", ...]

// Step 2 — INSERT: source rows whose key is NOT in destination
"Filter_To_Insert": {
  "type": "Query",
  "inputs": {
    "from": "@body('Source_Array')",
    "where": "@not(contains(body('Select_Dest_Keys'), item()?['key']))"
  }
},
// → Apply to each Filter_To_Insert → CreateItem

// Step 3 — INNER JOIN: source rows that exist in destination
"Filter_Already_Exists": {
  "type": "Query",
  "inputs": {
    "from": "@body('Source_Array')",
    "where": "@contains(body('Select_Dest_Keys'), item()?['key'])"
  }
},

// Step 4 — UPDATE: one Filter per tracked field, then union them
"Filter_Field1_Changed": {
  "type": "Query",
  "inputs": {
    "from": "@body('Filter_Already_Exists')",
    "where": "@not(equals(item()?['field1'], item()?['dest_field1']))"
  }
},
"Filter_Field2_Changed": {
  "type": "Query",
  "inputs": {
    "from": "@body('Filter_Already_Exists')",
    "where": "@not(equals(item()?['field2'], item()?['dest_field2']))"
  }
},
"Union_Changed": {
  "type": "Compose",
  "inputs": "@union(body('Filter_Field1_Changed'), body('Filter_Field2_Changed'))"
},
// → rows where ANY tracked field differs

// Step 5 — Resolve destination IDs for changed rows (no nested loop)
"Select_Changed_Keys": {
  "type": "Select",
  "inputs": { "from": "@outputs('Union_Changed')", "select": "@item()?['key']" }
},
"Filter_Dest_Items_To_Update": {
  "type": "Query",
  "inputs": {
    "from": "@outputs('Get_Dest_Items')?['body/value']",
    "where": "@contains(body('Select_Changed_Keys'), item()?['Title'])"
  }
},

// Step 6 — Single loop over changed items only
"Apply_to_each_Update": {
  "type": "Foreach",
  "foreach": "@body('Filter_Dest_Items_To_Update')",
  "actions": {
    "Get_Source_Row": {
      "type": "Query",
      "inputs": {
        "from": "@outputs('Union_Changed')",
        "where": "@equals(item()?['key'], items('Apply_to_each_Update')?['Title'])"
      }
    },
    "Update_Item": {
      "...": "...",
      "id": "@items('Apply_to_each_Update')?['ID']",
      "item/field1": "@first(body('Get_Source_Row'))?['field1']"
    }
  }
},

// Step 7 — DELETE: destination keys NOT in source
"Select_Source_Keys": {
  "type": "Select",
  "inputs": { "from": "@body('Source_Array')", "select": "@item()?['key']" }
},
"Filter_To_Delete": {
  "type": "Query",
  "inputs": {
    "from": "@outputs('Get_Dest_Items')?['body/value']",
    "where": "@not(contains(body('Select_Source_Keys'), item()?['Title']))"
  }
}
// → Apply to each Filter_To_Delete → DeleteItem
```

> **Why this beats nested loops**: the naive approach (for each dest item, scan source)
> is O(n × m) and hits Power Automate's 100k-action run limit fast on large lists.
> This pattern is O(n + m): one pass to build key arrays, one pass per filter.
> The update loop in Step 6 only iterates *changed* records — often a tiny fraction
> of the full collection. Run Steps 2/4/7 in **parallel Scopes** for further speed.

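A condensed sketch of the same set logic in Python, with illustrative source/destination rows. One simplification: the flow keeps destination values on the merged row (`dest_field1`), while this sketch looks them up by key instead:

```python
# Condensed sketch of the set-join sync (Steps 1-4, 7); rows are illustrative.
source = [{"key": "A", "field1": 1}, {"key": "B", "field1": 2}, {"key": "C", "field1": 3}]
dest = [
    {"Title": "B", "field1": 2, "ID": 10},
    {"Title": "C", "field1": 9, "ID": 11},
    {"Title": "D", "field1": 4, "ID": 12},
]

dest_keys = [d["Title"] for d in dest]                        # Step 1
to_insert = [s for s in source if s["key"] not in dest_keys]  # Step 2
existing = [s for s in source if s["key"] in dest_keys]       # Step 3

dest_by_key = {d["Title"]: d for d in dest}
changed = [s for s in existing
           if s["field1"] != dest_by_key[s["key"]]["field1"]]  # Step 4

source_keys = [s["key"] for s in source]
to_delete = [d for d in dest if d["Title"] not in source_keys]  # Step 7
```

Each bucket (`to_insert`, `changed`, `to_delete`) falls out of flat membership tests, so the only loops needed afterwards are over the small changed sets.
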
---

### First-or-Null Single-Row Lookup

Use `first()` on the result array to extract one record without a loop.
Then null-check the output to guard downstream actions.

```json
"Get_First_Match": {
  "type": "Compose",
  "runAfter": { "Get_SP_Items": ["Succeeded"] },
  "inputs": "@first(outputs('Get_SP_Items')?['body/value'])"
}
```

In a Condition, test for no-match with the **`@null` literal** (not `empty()`):

```json
"Condition": {
  "type": "If",
  "expression": {
    "not": {
      "equals": [
        "@outputs('Get_First_Match')",
        "@null"
      ]
    }
  }
}
```

Access fields on the matched row: `@outputs('Get_First_Match')?['FieldName']`

> Use this instead of `Apply to each` when you only need one matching record.
> `first()` on an empty array returns `null`; `empty()` is for arrays/strings,
> not scalars — using it on a `first()` result causes a runtime error.

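The guard amounts to "take the first element or null, then branch on null". A Python sketch with illustrative rows:

```python
# Sketch of the first-or-null guard; the rows are illustrative.
def first_or_none(items):
    # first() on an empty array yields null in the flow; None here
    return items[0] if items else None

rows = [{"FieldName": "Widget"}]
match = first_or_none(rows)
if match is not None:            # the not-equals-@null Condition
    value = match["FieldName"]
```
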
---

## HTTP & Parsing

### HTTP Action (External API)

```json
"Call_External_API": {
  "type": "Http",
  "runAfter": {},
  "inputs": {
    "method": "POST",
    "uri": "https://api.example.com/endpoint",
    "headers": {
      "Content-Type": "application/json",
      "Authorization": "Bearer @{variables('apiToken')}"
    },
    "body": {
      "data": "@outputs('Compose_Payload')"
    },
    "retryPolicy": {
      "type": "Fixed",
      "count": 3,
      "interval": "PT10S"
    }
  }
}
```

Response reference: `@outputs('Call_External_API')?['body']`

#### Variant: ActiveDirectoryOAuth (Service-to-Service)

For calling APIs that require Azure AD client-credentials authentication
(e.g., Microsoft Graph), use in-line OAuth instead of a Bearer token variable:

```json
"Call_Graph_API": {
  "type": "Http",
  "runAfter": {},
  "inputs": {
    "method": "GET",
    "uri": "https://graph.microsoft.com/v1.0/users?$search=\"employeeId:@{variables('Code')}\"&$select=id,displayName",
    "headers": {
      "Content-Type": "application/json",
      "ConsistencyLevel": "eventual"
    },
    "authentication": {
      "type": "ActiveDirectoryOAuth",
      "authority": "https://login.microsoftonline.com",
      "tenant": "<tenant-id>",
      "audience": "https://graph.microsoft.com",
      "clientId": "<app-registration-id>",
      "secret": "@parameters('graphClientSecret')"
    }
  }
}
```

> **When to use:** Calling Microsoft Graph, Azure Resource Manager, or any
> Azure AD-protected API from a flow without a premium connector.
>
> The `authentication` block handles the entire OAuth client-credentials flow
> transparently — no manual token acquisition step needed.
>
> `ConsistencyLevel: eventual` is required for Graph `$search` queries.
> Without it, `$search` returns 400.
>
> For PATCH/PUT writes, the same `authentication` block works — just change
> `method` and add a `body`.
>
> ⚠️ **Never hardcode `secret` inline.** Use `@parameters('graphClientSecret')`
> and declare it in the flow's `parameters` block (type `securestring`). This
> prevents the secret from appearing in run history or being readable via
> `get_live_flow`. Declare the parameter like:
> ```json
> "parameters": {
>   "graphClientSecret": { "type": "securestring", "defaultValue": "" }
> }
> ```
> Then pass the real value via the flow's connections or environment variables
> — never commit it to source control.

---

### HTTP Response (Return to Caller)

Used in HTTP-triggered flows to send a structured reply back to the caller.
Must run before the flow times out (default 2 min for synchronous HTTP).

```json
"Response": {
  "type": "Response",
  "runAfter": {},
  "inputs": {
    "statusCode": 200,
    "headers": {
      "Content-Type": "application/json"
    },
    "body": {
      "status": "success",
      "message": "@{outputs('Compose_Result')}"
    }
  }
}
```

> **PowerApps / low-code caller pattern**: always return `statusCode: 200` with a
> `status` field in the body (`"success"` / `"error"`). PowerApps HTTP actions
> do not handle non-2xx responses gracefully — the caller should inspect
> `body.status` rather than the HTTP status code.
>
> Use multiple Response actions — one per branch — so each path returns
> an appropriate message. Only one will execute per run.

---

### Child Flow Call (Parent→Child via HTTP POST)

Power Automate supports parent→child orchestration by calling a child flow's
HTTP trigger URL directly. The parent sends an HTTP POST and blocks until the
child returns a `Response` action. The child flow uses a `manual` (Request) trigger.

```json
// PARENT — call child flow and wait for its response
"Call_Child_Flow": {
  "type": "Http",
  "inputs": {
    "method": "POST",
    "uri": "https://prod-XX.australiasoutheast.logic.azure.com:443/workflows/<workflowId>/triggers/manual/paths/invoke?api-version=2016-06-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<SAS>",
    "headers": { "Content-Type": "application/json" },
    "body": {
      "ID": "@triggerBody()?['ID']",
      "WeekEnd": "@triggerBody()?['WeekEnd']",
      "Payload": "@variables('dataArray')"
    },
    "retryPolicy": { "type": "none" }
  },
  "operationOptions": "DisableAsyncPattern",
  "runtimeConfiguration": {
    "contentTransfer": { "transferMode": "Chunked" }
  },
  "limit": { "timeout": "PT2H" }
}
```

```json
// CHILD — manual trigger receives the JSON body
// (trigger definition)
"manual": {
  "type": "Request",
  "kind": "Http",
  "inputs": {
    "schema": {
      "type": "object",
      "properties": {
        "ID": { "type": "string" },
        "WeekEnd": { "type": "string" },
        "Payload": { "type": "array" }
      }
    }
  }
}

// CHILD — return result to parent
"Response_Success": {
  "type": "Response",
  "inputs": {
    "statusCode": 200,
    "headers": { "Content-Type": "application/json" },
    "body": { "Result": "Success", "Count": "@length(variables('processed'))" }
  }
}
```

> **`retryPolicy: none`** — critical on the parent's HTTP call. Without it, a child
> flow timeout triggers retries, spawning duplicate child runs.
>
> **`DisableAsyncPattern`** — prevents the parent from treating a 202 Accepted as
> completion. The parent will block until the child sends its `Response`.
>
> **`transferMode: Chunked`** — enable when passing large arrays (>100 KB) to the child;
> avoids request-size limits.
>
> **`limit.timeout: PT2H`** — raise the default 2-minute HTTP timeout for long-running
> children. Max is PT24H.
>
> The child flow's trigger URL contains a SAS token (`sig=...`) that authenticates
> the call. Copy it from the child flow's trigger properties panel. The URL changes
> if the trigger is deleted and re-created.

---

### Parse JSON

```json
"Parse_Response": {
  "type": "ParseJson",
  "runAfter": {},
  "inputs": {
    "content": "@outputs('Call_External_API')?['body']",
    "schema": {
      "type": "object",
      "properties": {
        "id": { "type": "integer" },
        "name": { "type": "string" },
        "items": {
          "type": "array",
          "items": { "type": "object" }
        }
      }
    }
  }
}
```

Access parsed values: `@body('Parse_Response')?['name']`

---

### Manual CSV → JSON (No Premium Action)

Parse a raw CSV string into an array of objects using only built-in expressions.
Avoids the premium "Parse CSV" connector action.

```json
"Delimiter": {
  "type": "Compose",
  "inputs": ","
},
"Strip_Quotes": {
  "type": "Compose",
  "inputs": "@replace(body('Get_File_Content'), '\"', '')"
},
"Detect_Line_Ending": {
  "type": "Compose",
  "inputs": "@if(equals(indexOf(outputs('Strip_Quotes'), decodeUriComponent('%0D%0A')), -1), if(equals(indexOf(outputs('Strip_Quotes'), decodeUriComponent('%0A')), -1), decodeUriComponent('%0D'), decodeUriComponent('%0A')), decodeUriComponent('%0D%0A'))"
},
"Headers": {
  "type": "Compose",
  "inputs": "@split(first(split(outputs('Strip_Quotes'), outputs('Detect_Line_Ending'))), outputs('Delimiter'))"
},
"Data_Rows": {
  "type": "Compose",
  "inputs": "@skip(split(outputs('Strip_Quotes'), outputs('Detect_Line_Ending')), 1)"
},
"Select_CSV_Body": {
  "type": "Select",
  "inputs": {
    "from": "@outputs('Data_Rows')",
    "select": {
      "@{outputs('Headers')[0]}": "@split(item(), outputs('Delimiter'))[0]",
      "@{outputs('Headers')[1]}": "@split(item(), outputs('Delimiter'))[1]",
      "@{outputs('Headers')[2]}": "@split(item(), outputs('Delimiter'))[2]"
    }
  }
},
"Filter_Empty_Rows": {
  "type": "Query",
  "inputs": {
    "from": "@body('Select_CSV_Body')",
    "where": "@not(equals(item()?[outputs('Headers')[0]], null))"
  }
}
```

Result: `@body('Filter_Empty_Rows')` — array of objects with header names as keys.

> **`Detect_Line_Ending`** handles CRLF (Windows), LF (Unix), and CR (old Mac) automatically
> using `indexOf()` with `decodeUriComponent('%0D%0A' / '%0A' / '%0D')`.
>
> **Dynamic key names in `Select`**: `@{outputs('Headers')[0]}` as a JSON key in a
> `Select` shape sets the output property name at runtime from the header row —
> this works as long as the expression is in `@{...}` interpolation syntax.
>
> **Columns with embedded commas**: if field values can contain the delimiter,
> use `length(split(row, ','))` in a Switch to detect the column count and manually
> reassemble the split fragments: `@concat(split(item(),',')[1],',',split(item(),',')[2])`

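The whole expression chain maps to a handful of string operations. A Python sketch of the same steps, with an illustrative CSV string:

```python
# Sketch of the manual CSV -> objects chain; the CSV string is illustrative.
raw = '"name","qty","site"\r\n"Widget","3","HQ"\r\n"Bolt","12","Depot"'

text = raw.replace('"', '')                          # Strip_Quotes

# Detect_Line_Ending: prefer CRLF, then LF, then CR
ending = "\r\n" if "\r\n" in text else ("\n" if "\n" in text else "\r")

lines = text.split(ending)
headers = lines[0].split(",")                        # Headers
data_rows = lines[1:]                                # Data_Rows

parsed = [dict(zip(headers, row.split(",")))         # Select_CSV_Body
          for row in data_rows if row]               # Filter_Empty_Rows
```

As in the flow version, this assumes the delimiter never appears inside a quoted field; fields with embedded commas need the reassembly workaround described above.
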
---

### ConvertTimeZone (Built-in, No Connector)

Converts a timestamp between timezones with no API call or connector licence cost.
Format string `"g"` produces a short locale date+time (`M/d/yyyy h:mm tt`).

```json
"Convert_to_Local_Time": {
  "type": "Expression",
  "kind": "ConvertTimeZone",
  "runAfter": {},
  "inputs": {
    "baseTime": "@{outputs('UTC_Timestamp')}",
    "sourceTimeZone": "UTC",
    "destinationTimeZone": "Taipei Standard Time",
    "formatString": "g"
  }
}
```

Result reference: `@body('Convert_to_Local_Time')` — **not** `outputs()`, unlike most actions.

Common `formatString` values: `"g"` (short), `"f"` (full), `"yyyy-MM-dd"`, `"HH:mm"`

Common timezone strings: `"UTC"`, `"AUS Eastern Standard Time"`, `"Taipei Standard Time"`,
`"Singapore Standard Time"`, `"GMT Standard Time"`

> This is `type: Expression, kind: ConvertTimeZone` — a built-in Logic Apps action,
> not a connector. No connection reference needed. Reference the output via
> `body()` (not `outputs()`), otherwise the expression returns null.

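For intuition, the same conversion can be sketched in Python with `zoneinfo`, which uses IANA names rather than the Windows names above ("Taipei Standard Time" corresponds to `Asia/Taipei`, UTC+8 with no DST):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Sketch of what ConvertTimeZone does, using the IANA equivalent
# of "Taipei Standard Time" (Asia/Taipei, UTC+8, no DST).
utc_ts = datetime(2026, 1, 1, 8, 0, tzinfo=ZoneInfo("UTC"))
local = utc_ts.astimezone(ZoneInfo("Asia/Taipei"))
formatted = local.strftime("%Y-%m-%d %H:%M")  # 08:00 UTC -> 16:00 local
```
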
---

# Common Build Patterns

Complete flow definition templates ready to copy and customize.

---

## Pattern: Recurrence + SharePoint list read + Teams notification

```json
{
  "triggers": {
    "Recurrence": {
      "type": "Recurrence",
      "recurrence": {
        "frequency": "Day",
        "interval": 1,
        "startTime": "2026-01-01T08:00:00Z",
        "timeZone": "AUS Eastern Standard Time"
      }
    }
  },
  "actions": {
    "Get_SP_Items": {
      "type": "OpenApiConnection",
      "runAfter": {},
      "inputs": {
        "host": {
          "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
          "connectionName": "shared_sharepointonline",
          "operationId": "GetItems"
        },
        "parameters": {
          "dataset": "https://mytenant.sharepoint.com/sites/mysite",
          "table": "MyList",
          "$filter": "Status eq 'Active'",
          "$top": 500
        }
      }
    },
    "Apply_To_Each": {
      "type": "Foreach",
      "runAfter": { "Get_SP_Items": ["Succeeded"] },
      "foreach": "@outputs('Get_SP_Items')?['body/value']",
      "actions": {
        "Post_Teams_Message": {
          "type": "OpenApiConnection",
          "runAfter": {},
          "inputs": {
            "host": {
              "apiId": "/providers/Microsoft.PowerApps/apis/shared_teams",
              "connectionName": "shared_teams",
              "operationId": "PostMessageToConversation"
            },
            "parameters": {
              "poster": "Flow bot",
              "location": "Channel",
              "body/recipient": {
                "groupId": "<team-id>",
                "channelId": "<channel-id>"
              },
              "body/messageBody": "Item: @{items('Apply_To_Each')?['Title']}"
            }
          }
        }
      },
      "operationOptions": "Sequential"
    }
  }
}
```

---

## Pattern: HTTP trigger (webhook / Power App call)

```json
{
  "triggers": {
    "manual": {
      "type": "Request",
      "kind": "Http",
      "inputs": {
        "schema": {
          "type": "object",
          "properties": {
            "name": { "type": "string" },
            "value": { "type": "number" }
          }
        }
      }
    }
  },
  "actions": {
    "Compose_Response": {
      "type": "Compose",
      "runAfter": {},
      "inputs": "Received: @{triggerBody()?['name']} = @{triggerBody()?['value']}"
    },
    "Response": {
      "type": "Response",
      "runAfter": { "Compose_Response": ["Succeeded"] },
      "inputs": {
        "statusCode": 200,
        "body": { "status": "ok", "message": "@{outputs('Compose_Response')}" }
      }
    }
  }
}
```

Access body values: `@triggerBody()?['name']`

---

# FlowStudio MCP — Flow Definition Schema

The full JSON structure expected by `update_live_flow` (and returned by `get_live_flow`).

---

## Top-Level Shape

```json
{
  "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "$connections": {
      "defaultValue": {},
      "type": "Object"
    }
  },
  "triggers": {
    "<TriggerName>": { ... }
  },
  "actions": {
    "<ActionName>": { ... }
  },
  "outputs": {}
}
```

---

## `triggers`

Exactly one trigger per flow definition. The key name is arbitrary, but
conventional names are used (e.g. `Recurrence`, `manual`, `When_a_new_email_arrives`).

See [trigger-types.md](trigger-types.md) for all trigger templates.

---

## `actions`

Dictionary of action definitions keyed by unique action name.
Key names may not contain spaces — use underscores.

Each action must include:
- `type` — action type identifier
- `runAfter` — map of upstream action names → status conditions array
- `inputs` — action-specific input configuration

See [action-patterns-core.md](action-patterns-core.md), [action-patterns-data.md](action-patterns-data.md),
and [action-patterns-connectors.md](action-patterns-connectors.md) for templates.

### Optional Action Properties

Beyond the required `type`, `runAfter`, and `inputs`, actions can include:

| Property | Purpose |
|---|---|
| `runtimeConfiguration` | Pagination, concurrency, secure data, chunked transfer |
| `operationOptions` | `"Sequential"` for Foreach, `"DisableAsyncPattern"` for HTTP |
| `limit` | Timeout override (e.g. `{"timeout": "PT2H"}`) |

#### `runtimeConfiguration` Variants

**Pagination** (SharePoint Get Items with large lists):
```json
"runtimeConfiguration": {
  "paginationPolicy": {
    "minimumItemCount": 5000
  }
}
```
> Without this, Get Items silently caps at 256 results. Set `minimumItemCount`
> to the maximum rows you expect. Required for any SharePoint list over 256 items.

**Concurrency** (parallel Foreach):
```json
"runtimeConfiguration": {
  "concurrency": {
    "repetitions": 20
  }
}
```

**Secure inputs/outputs** (mask values in run history):
```json
"runtimeConfiguration": {
  "secureData": {
    "properties": ["inputs", "outputs"]
  }
}
```
> Use on actions that handle credentials, tokens, or PII. Masked values show
> as `"<redacted>"` in the flow run history UI and API responses.

**Chunked transfer** (large HTTP payloads):
```json
"runtimeConfiguration": {
  "contentTransfer": {
    "transferMode": "Chunked"
  }
}
```
> Enable on HTTP actions sending or receiving bodies >100 KB (e.g. parent→child
> flow calls with large arrays).

---

## `runAfter` Rules

The first action in a branch has `"runAfter": {}` (empty — runs after trigger).

Subsequent actions declare their dependency:

```json
"My_Action": {
  "runAfter": {
    "Previous_Action": ["Succeeded"]
  }
}
```

Multiple upstream dependencies:
```json
"runAfter": {
  "Action_A": ["Succeeded"],
  "Action_B": ["Succeeded", "Skipped"]
}
```

Error-handling action (runs when upstream failed):
```json
"Log_Error": {
  "runAfter": {
    "Risky_Action": ["Failed"]
  }
}
```

---

## `parameters` (Flow-Level Input Parameters)

Optional. Define reusable values at the flow level:

```json
"parameters": {
  "listName": {
    "type": "string",
    "defaultValue": "MyList"
  },
  "maxItems": {
    "type": "integer",
    "defaultValue": 100
  }
}
```

Reference: `@parameters('listName')` in expression strings.

---

## `outputs`

Rarely used in cloud flows. Leave as `{}` unless the flow is called
as a child flow and needs to return values.

For child flows that return data:

```json
"outputs": {
  "resultData": {
    "type": "object",
    "value": "@outputs('Compose_Result')"
  }
}
```

---

## Scoped Actions (Inside Scope Block)

Actions that need to be grouped for error handling or clarity:

```json
"Scope_Main_Process": {
  "type": "Scope",
  "runAfter": {},
  "actions": {
    "Step_One": { ... },
    "Step_Two": { "runAfter": { "Step_One": ["Succeeded"] }, ... }
  }
}
```

---

## Full Minimal Example

```json
{
  "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
  "contentVersion": "1.0.0.0",
  "triggers": {
    "Recurrence": {
      "type": "Recurrence",
      "recurrence": {
        "frequency": "Week",
        "interval": 1,
        "schedule": { "weekDays": ["Monday"] },
        "startTime": "2026-01-05T09:00:00Z",
        "timeZone": "AUS Eastern Standard Time"
      }
    }
  },
  "actions": {
    "Compose_Greeting": {
      "type": "Compose",
      "runAfter": {},
      "inputs": "Good Monday!"
    }
  },
  "outputs": {}
}
```

---

# FlowStudio MCP — Trigger Types

Copy-paste trigger definitions for Power Automate flow definitions.

---

## Recurrence

Run on a schedule.

```json
"Recurrence": {
  "type": "Recurrence",
  "recurrence": {
    "frequency": "Day",
    "interval": 1,
    "startTime": "2026-01-01T08:00:00Z",
    "timeZone": "AUS Eastern Standard Time"
  }
}
```

Weekly on specific days:
```json
"Recurrence": {
  "type": "Recurrence",
  "recurrence": {
    "frequency": "Week",
    "interval": 1,
    "schedule": {
      "weekDays": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
    },
    "startTime": "2026-01-05T09:00:00Z",
    "timeZone": "AUS Eastern Standard Time"
  }
}
```

Common `timeZone` values:
- `"AUS Eastern Standard Time"` — Sydney/Melbourne (UTC+10/+11)
- `"UTC"` — Universal time
- `"E. Australia Standard Time"` — Brisbane (UTC+10, no DST)
- `"New Zealand Standard Time"` — Auckland (UTC+12/+13)
- `"Pacific Standard Time"` — Los Angeles (UTC-8/-7)
- `"GMT Standard Time"` — London (UTC+0/+1)

---

## Manual (HTTP Request / Power Apps)

Receive an HTTP POST with a JSON body.

```json
"manual": {
  "type": "Request",
  "kind": "Http",
  "inputs": {
    "schema": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "value": { "type": "integer" }
      },
      "required": ["name"]
    }
  }
}
```

Access values: `@triggerBody()?['name']`
Trigger URL available after saving: `@listCallbackUrl()`

#### No-Schema Variant (Accept Arbitrary JSON)

When the incoming payload structure is unknown or varies, omit the schema
to accept any valid JSON body without validation:

```json
"manual": {
  "type": "Request",
  "kind": "Http",
  "inputs": {
    "schema": {}
  }
}
```

Access any field dynamically: `@triggerBody()?['anyField']`

> Use this for external webhooks (Stripe, GitHub, Employment Hero, etc.) where the
> payload shape may change or is not fully documented. The flow accepts any
> JSON without returning 400 for unexpected properties.

---

## Automated (SharePoint Item Created)

```json
"When_an_item_is_created": {
  "type": "OpenApiConnectionNotification",
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "connectionName": "<connectionName>",
      "operationId": "OnNewItem"
    },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "table": "MyList"
    },
    "subscribe": {
      "body": { "notificationUrl": "@listCallbackUrl()" },
      "queries": {
        "dataset": "https://mytenant.sharepoint.com/sites/mysite",
        "table": "MyList"
      }
    }
  }
}
```

Access trigger data: `@triggerBody()?['ID']`, `@triggerBody()?['Title']`, etc.

---
|
||||
|
||||
## Automated (SharePoint Item Modified)

```json
"When_an_existing_item_is_modified": {
  "type": "OpenApiConnectionNotification",
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "connectionName": "<connectionName>",
      "operationId": "OnUpdatedItem"
    },
    "parameters": {
      "dataset": "https://mytenant.sharepoint.com/sites/mysite",
      "table": "MyList"
    },
    "subscribe": {
      "body": { "notificationUrl": "@listCallbackUrl()" },
      "queries": {
        "dataset": "https://mytenant.sharepoint.com/sites/mysite",
        "table": "MyList"
      }
    }
  }
}
```

---

## Automated (Outlook: When New Email Arrives)

```json
"When_a_new_email_arrives": {
  "type": "OpenApiConnectionNotification",
  "inputs": {
    "host": {
      "apiId": "/providers/Microsoft.PowerApps/apis/shared_office365",
      "connectionName": "<connectionName>",
      "operationId": "OnNewEmail"
    },
    "parameters": {
      "folderId": "Inbox",
      "to": "monitored@contoso.com",
      "isHTML": true
    },
    "subscribe": {
      "body": { "notificationUrl": "@listCallbackUrl()" }
    }
  }
}
```

---

## Child Flow (Called by Another Flow)

```json
"manual": {
  "type": "Request",
  "kind": "Button",
  "inputs": {
    "schema": {
      "type": "object",
      "properties": {
        "items": {
          "type": "array",
          "items": { "type": "object" }
        }
      }
    }
  }
}
```

Access parent-supplied data: `@triggerBody()?['items']`

To return data to the parent, add a `Response` action:
```json
"Respond_to_Parent": {
  "type": "Response",
  "runAfter": { "Compose_Result": ["Succeeded"] },
  "inputs": {
    "statusCode": 200,
    "body": "@outputs('Compose_Result')"
  }
}
```

@@ -0,0 +1,322 @@
---
name: flowstudio-power-automate-debug
description: >-
  Debug failing Power Automate cloud flows using the FlowStudio MCP server.
  Load this skill when asked to: debug a flow, investigate a failed run, why is
  this flow failing, inspect action outputs, find the root cause of a flow error,
  fix a broken Power Automate flow, diagnose a timeout, trace a DynamicOperationRequestFailure,
  check connector auth errors, read error details from a run, or troubleshoot
  expression failures. Requires a FlowStudio MCP subscription — see https://mcp.flowstudio.app
---

# Power Automate Debugging with FlowStudio MCP

A step-by-step diagnostic process for investigating failing Power Automate
cloud flows through the FlowStudio MCP server.

**Prerequisite**: A FlowStudio MCP server must be reachable with a valid JWT.
See the `flowstudio-power-automate-mcp` skill for connection setup.
Subscribe at https://mcp.flowstudio.app

---

## Source of Truth

> **Always call `tools/list` first** to confirm available tool names and their
> parameter schemas. Tool names and parameters may change between server versions.
> This skill covers response shapes, behavioral notes, and diagnostic patterns —
> things `tools/list` cannot tell you. If this document disagrees with `tools/list`
> or a real API response, the API wins.

---

## Python Helper

```python
import json, urllib.error, urllib.request

MCP_URL = "https://mcp.flowstudio.app/mcp"
MCP_TOKEN = "<YOUR_JWT_TOKEN>"

def mcp(tool, **kwargs):
    payload = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                          "params": {"name": tool, "arguments": kwargs}}).encode()
    req = urllib.request.Request(MCP_URL, data=payload,
                                 headers={"x-api-key": MCP_TOKEN,
                                          "Content-Type": "application/json",
                                          "User-Agent": "FlowStudio-MCP/1.0"})
    try:
        resp = urllib.request.urlopen(req, timeout=120)
    except urllib.error.HTTPError as e:
        body = e.read().decode("utf-8", errors="replace")
        raise RuntimeError(f"MCP HTTP {e.code}: {body[:200]}") from e
    raw = json.loads(resp.read())
    if "error" in raw:
        raise RuntimeError(f"MCP error: {json.dumps(raw['error'])}")
    return json.loads(raw["result"]["content"][0]["text"])

ENV = "<environment-id>"  # e.g. Default-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```

---

## FlowStudio for Teams: Fast-Path Diagnosis (Skip Steps 2–5)

If you have a FlowStudio for Teams subscription, `get_store_flow_errors`
returns per-run failure data including action names and remediation hints
in a single call — no need to walk through the live API steps.

```python
# Quick failure summary
summary = mcp("get_store_flow_summary", environmentName=ENV, flowName=FLOW_ID)
# {"totalRuns": 100, "failRuns": 10, "failRate": 0.1,
#  "averageDurationSeconds": 29.4, "maxDurationSeconds": 158.9,
#  "firstFailRunRemediation": "<hint or null>"}
print(f"Fail rate: {summary['failRate']:.0%} over {summary['totalRuns']} runs")

# Per-run error details (requires active monitoring to be configured)
errors = mcp("get_store_flow_errors", environmentName=ENV, flowName=FLOW_ID)
if errors:
    for r in errors[:3]:
        print(r["startTime"], "|", r.get("failedActions"), "|", r.get("remediationHint"))
    # If errors confirms the failing action → jump to Step 7 (apply the fix)
else:
    # Store doesn't have run-level detail for this flow — use live tools (Steps 2–5)
    pass
```

For the full governance record (description, complexity, tier, connector list):
```python
record = mcp("get_store_flow", environmentName=ENV, flowName=FLOW_ID)
# {"displayName": "My Flow", "state": "Started",
#  "runPeriodTotal": 100, "runPeriodFailRate": 0.1, "runPeriodFails": 10,
#  "runPeriodDurationAverage": 29410.8,        ← milliseconds
#  "runError": "{\"code\": \"EACCES\", ...}",  ← JSON string, parse it
#  "description": "...", "tier": "Premium", "complexity": "{...}"}
if record.get("runError"):
    last_err = json.loads(record["runError"])
    print("Last run error:", last_err)
```

---

## Step 1 — Locate the Flow

```python
result = mcp("list_live_flows", environmentName=ENV)
# Returns a wrapper object: {mode, flows, totalCount, error}
target = next(f for f in result["flows"] if "My Flow Name" in f["displayName"])
FLOW_ID = target["id"]  # plain UUID — use directly as flowName
print(FLOW_ID)
```

---

## Step 2 — Find the Failing Run

```python
runs = mcp("get_live_flow_runs", environmentName=ENV, flowName=FLOW_ID, top=5)
# Returns direct array (newest first):
# [{"name": "08584296068667933411438594643CU15",
#   "status": "Failed",
#   "startTime": "2026-02-25T06:13:38.6910688Z",
#   "endTime": "2026-02-25T06:15:24.1995008Z",
#   "triggerName": "manual",
#   "error": {"code": "ActionFailed", "message": "An action failed..."}},
#  {"name": "...", "status": "Succeeded", "error": null, ...}]

for r in runs:
    print(r["name"], r["status"], r["startTime"])

RUN_ID = next(r["name"] for r in runs if r["status"] == "Failed")
```

---

## Step 3 — Get the Top-Level Error

```python
err = mcp("get_live_flow_run_error",
          environmentName=ENV, flowName=FLOW_ID, runName=RUN_ID)
# Returns:
# {
#   "runName": "08584296068667933411438594643CU15",
#   "failedActions": [
#     {"actionName": "Apply_to_each_prepare_workers", "status": "Failed",
#      "error": {"code": "ActionFailed", "message": "An action failed..."},
#      "startTime": "...", "endTime": "..."},
#     {"actionName": "HTTP_find_AD_User_by_Name", "status": "Failed",
#      "code": "NotSpecified", "startTime": "...", "endTime": "..."}
#   ],
#   "allActions": [
#     {"actionName": "Apply_to_each", "status": "Skipped"},
#     {"actionName": "Compose_WeekEnd", "status": "Succeeded"},
#     ...
#   ]
# }

# failedActions is ordered outer-to-inner. The ROOT cause is the LAST entry:
root = err["failedActions"][-1]
print(f"Root action: {root['actionName']} → code: {root.get('code')}")

# allActions shows every action's status — useful for spotting what was Skipped
# See common-errors.md to decode the error code.
```

---

## Step 4 — Read the Flow Definition

```python
defn = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)
actions = defn["properties"]["definition"]["actions"]
print(list(actions.keys()))
```

Find the failing action in the definition. Inspect its `inputs` expression
to understand what data it expects.

---

## Step 5 — Inspect Action Outputs (Walk Back from Failure)

For each action **leading up to** the failure, inspect its runtime output:

```python
for action_name in ["Compose_WeekEnd", "HTTP_Get_Data", "Parse_JSON"]:
    result = mcp("get_live_flow_run_action_outputs",
                 environmentName=ENV,
                 flowName=FLOW_ID,
                 runName=RUN_ID,
                 actionName=action_name)
    # Returns an array — single-element when actionName is provided
    out = result[0] if result else {}
    print(action_name, out.get("status"))
    print(json.dumps(out.get("outputs", {}), indent=2)[:500])
```

> ⚠️ Output payloads from array-processing actions can be very large.
> Always slice (e.g. `[:500]`) before printing.

---

## Step 6 — Pinpoint the Root Cause

### Expression Errors (e.g. `split` on null)
If the error mentions `InvalidTemplate` or a function name:
1. Find the action in the definition
2. Check what upstream action/expression it reads
3. Inspect that upstream action's output for null / missing fields

```python
# Example: action uses split(item()?['Name'], ' ')
# → null Name in the source data
result = mcp("get_live_flow_run_action_outputs", ..., actionName="Compose_Names")
# Returns a single-element array; index [0] to get the action object
if not result:
    print("No outputs returned for Compose_Names")
    names = []
else:
    names = result[0].get("outputs", {}).get("body") or []
nulls = [x for x in names if x.get("Name") is None]
print(f"{len(nulls)} records with null Name")
```

### Wrong Field Path
Expression `triggerBody()?['fieldName']` returns null → `fieldName` is wrong.
Check the trigger output shape with:
```python
mcp("get_live_flow_run_action_outputs", ..., actionName="<trigger-action-name>")
```

### Connection / Auth Failures
Look for `ConnectionAuthorizationFailed` — the connection owner must match the
service account running the flow. This cannot be fixed via the API; fix it in the
Power Automate designer.

---

## Step 7 — Apply the Fix

**For expression/data issues**:
```python
defn = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)
acts = defn["properties"]["definition"]["actions"]

# Example: fix split on potentially-null Name
acts["Compose_Names"]["inputs"] = "@coalesce(item()?['Name'], 'Unknown')"

conn_refs = defn["properties"]["connectionReferences"]
result = mcp("update_live_flow",
             environmentName=ENV,
             flowName=FLOW_ID,
             definition=defn["properties"]["definition"],
             connectionReferences=conn_refs)

print(result.get("error"))  # None = success
```

> ⚠️ `update_live_flow` always returns an `error` key.
> A value of `null` (Python `None`) means success.

---

## Step 8 — Verify the Fix

```python
# Resubmit the failed run
resubmit = mcp("resubmit_live_flow_run",
               environmentName=ENV, flowName=FLOW_ID, runName=RUN_ID)
print(resubmit)

# Wait ~30 s then check
import time; time.sleep(30)
new_runs = mcp("get_live_flow_runs", environmentName=ENV, flowName=FLOW_ID, top=3)
print(new_runs[0]["status"])  # Succeeded = done
```

### Testing HTTP-Triggered Flows

For flows with a `Request` (HTTP) trigger, use `trigger_live_flow` instead
of `resubmit_live_flow_run` to test with custom payloads:

```python
# First inspect what the trigger expects
schema = mcp("get_live_flow_http_schema",
             environmentName=ENV, flowName=FLOW_ID)
print("Expected body schema:", schema.get("triggerSchema"))
print("Response schemas:", schema.get("responseSchemas"))

# Trigger with a test payload
result = mcp("trigger_live_flow",
             environmentName=ENV,
             flowName=FLOW_ID,
             body={"name": "Test User", "value": 42})
print(f"Status: {result['status']}, Body: {result.get('body')}")
```

> `trigger_live_flow` handles AAD-authenticated triggers automatically.
> It only works for flows with a `Request` (HTTP) trigger type.

---

## Quick-Reference Diagnostic Decision Tree

| Symptom | First Tool to Call | What to Look For |
|---|---|---|
| Flow shows as Failed | `get_live_flow_run_error` | `failedActions[-1]["actionName"]` = root cause |
| Expression crash | `get_live_flow_run_action_outputs` on prior action | null / wrong-type fields in output body |
| Flow never starts | `get_live_flow` | check `properties.state` = "Started" |
| Action returns wrong data | `get_live_flow_run_action_outputs` | actual output body vs expected |
| Fix applied but still fails | `get_live_flow_runs` after resubmit | new run `status` field |

---

## Reference Files

- [common-errors.md](references/common-errors.md) — Error codes, likely causes, and fixes
- [debug-workflow.md](references/debug-workflow.md) — Full decision tree for complex failures

## Related Skills

- `flowstudio-power-automate-mcp` — Core connection setup and operation reference
- `flowstudio-power-automate-build` — Build and deploy new flows

@@ -0,0 +1,188 @@
# FlowStudio MCP — Common Power Automate Errors

Reference for error codes, likely causes, and recommended fixes when debugging
Power Automate flows via the FlowStudio MCP server.

---

## Expression / Template Errors

### `InvalidTemplate` — Function Applied to Null

**Full message pattern**: `"Unable to process template language expressions... function 'split' expects its first argument 'text' to be of type string"`

**Root cause**: An expression like `@split(item()?['Name'], ' ')` received a null value.

**Diagnosis**:
1. Note the action name in the error message
2. Call `get_live_flow_run_action_outputs` on the action that produces the array
3. Find items where `Name` (or the referenced field) is `null`

**Fixes**:
```
Before: @split(item()?['Name'], ' ')
After:  @split(coalesce(item()?['Name'], ''), ' ')

Or guard the whole foreach body with a condition:
expression: "@not(empty(item()?['Name']))"
```
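
The guard from the second fix can be expressed as an `If` action wrapping the foreach body — a sketch only; the action names and inner `Compose` are illustrative, and the shape follows the standard workflow-definition schema:

```json
"Condition_Name_present": {
  "type": "If",
  "expression": "@not(empty(item()?['Name']))",
  "actions": {
    "Compose_Split_Name": {
      "type": "Compose",
      "inputs": "@split(item()?['Name'], ' ')"
    }
  },
  "runAfter": {}
}
```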

---

### `InvalidTemplate` — Wrong Expression Path

**Full message pattern**: `"Unable to process template language expressions... 'triggerBody()?['FieldName']' is of type 'Null'"`

**Root cause**: The field name in the expression doesn't match the actual payload schema.

**Diagnosis**:
```python
# Check trigger output shape
mcp("get_live_flow_run_action_outputs",
    environmentName=ENV, flowName=FLOW_ID, runName=RUN_ID,
    actionName="<trigger-name>")
# Compare actual keys vs expression
```

**Fix**: Update the expression to use the correct key name. Common mismatches:
- `triggerBody()?['body']` vs `triggerBody()?['Body']` (case-sensitive)
- `triggerBody()?['Subject']` vs `triggerOutputs()?['body/Subject']`

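
A quick way to catch the case-sensitivity variant is to diff the expression's field name against the actual output keys. The helper below is illustrative (not a FlowStudio tool); it works on the dict of keys returned by the diagnosis call above:

```python
def find_key_mismatch(expected_field, actual_outputs):
    """Return the actual key matching expected_field: exact first, then case-insensitive."""
    if expected_field in actual_outputs:
        return expected_field                      # expression path is fine
    lowered = {k.lower(): k for k in actual_outputs}
    return lowered.get(expected_field.lower())     # e.g. 'body' → 'Body'

# Example: expression reads triggerBody()?['body'] but the payload key is 'Body'
trigger_out = {"Body": "<p>hi</p>", "Subject": "Weekly report"}
print(find_key_mismatch("body", trigger_out))  # → Body
```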

---

### `InvalidTemplate` — Type Mismatch

**Full message pattern**: `"... expected type 'Array' but got type 'Object'"`

**Root cause**: Passing an object where the expression expects an array (e.g. a single-item HTTP response vs a list response).

**Fix**:
```
Before: @outputs('HTTP')?['body']
After:  @outputs('HTTP')?['body/value']           ← for OData list responses
        @createArray(outputs('HTTP')?['body'])    ← wrap single object in array
```

---

## Connection / Auth Errors

### `ConnectionAuthorizationFailed`

**Full message**: `"The API connection ... is not authorized."`

**Root cause**: The connection referenced in the flow is owned by a different
user/service account than the one whose JWT is being used.

**Diagnosis**: Check `properties.connectionReferences` — the `connectionName` GUID
identifies the owner. This cannot be fixed via the API.

**Fix options**:
1. Open the flow in the Power Automate designer → re-authenticate the connection
2. Use a connection owned by the service account whose token you hold
3. Share the connection with the service account in Power Automate admin

---

### `InvalidConnectionCredentials`

**Root cause**: The underlying OAuth token for the connection has expired or
the user's credentials changed.

**Fix**: The owner must sign in to Power Automate and refresh the connection.

---

## HTTP Action Errors

### `ActionFailed` — HTTP 4xx/5xx

**Full message pattern**: `"An HTTP request to... failed with status code '400'"`

**Diagnosis**:
```python
actions_out = mcp("get_live_flow_run_action_outputs", ..., actionName="HTTP_My_Call")
item = actions_out[0]  # first entry in the returned array
print(item["outputs"]["statusCode"])  # 400, 401, 403, 500...
print(item["outputs"]["body"])        # error details from target API
```

**Common causes**:
- 401 — missing or expired auth header
- 403 — permission denied on target resource
- 404 — wrong URL / resource deleted
- 400 — malformed JSON body (check the expression that builds the body)

---

### `ActionFailed` — HTTP Timeout

**Root cause**: Target endpoint did not respond within the connector's timeout
(default 90 s for the HTTP action).

**Fix**: Add a retry policy to the HTTP action, or split the payload into smaller
batches to reduce per-request processing time.

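
A retry policy sits inside the action's `inputs`. A sketch (the URL and action name are illustrative; the `retryPolicy` shape follows the standard workflow-definition schema):

```json
"HTTP_My_Call": {
  "type": "Http",
  "inputs": {
    "method": "GET",
    "uri": "https://api.example.com/data",
    "retryPolicy": { "type": "Fixed", "count": 3, "interval": "PT10S" }
  }
}
```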

---

## Control Flow Errors

### `ActionSkipped` Instead of Running

**Root cause**: The `runAfter` condition wasn't met. E.g. an action set to
`runAfter: { "Prev": ["Succeeded"] }` won't run if `Prev` failed or was skipped.

**Diagnosis**: Check the preceding action's status. A deliberate skip
(e.g. inside a false branch) is intentional — an unexpected skip is a logic gap.

**Fix**: Add `"Failed"` or `"Skipped"` to the `runAfter` status array if the
action should run on those outcomes too.

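
For example, a cleanup action that should fire regardless of how `Prev` finished (action names illustrative; `TimedOut` is also a valid status to list):

```json
"Always_Run_Cleanup": {
  "type": "Compose",
  "runAfter": { "Prev": ["Succeeded", "Failed", "Skipped", "TimedOut"] },
  "inputs": "Prev completed — check its status"
}
```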

---

### Foreach Runs in Wrong Order / Race Condition

**Root cause**: A `Foreach` without `operationOptions: "Sequential"` runs
iterations in parallel, causing write conflicts or undefined ordering.

**Fix**: Add `"operationOptions": "Sequential"` to the Foreach action.

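
A sketch of where the option sits on the action (the `foreach` expression and the inner action are illustrative):

```json
"Apply_to_each": {
  "type": "Foreach",
  "foreach": "@body('Get_items')?['value']",
  "operationOptions": "Sequential",
  "actions": {
    "Process_item": { "type": "Compose", "inputs": "@item()" }
  }
}
```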

---

## Update / Deploy Errors

### `update_live_flow` Returns No-Op

**Symptom**: `result["updated"]` is an empty list or `result["created"]` is empty.

**Likely cause**: Passing the wrong parameter name. The required key is `definition`
(an object), not `flowDefinition` or `body`.

---

### `update_live_flow` — `"Supply connectionReferences"`

**Root cause**: The definition contains `OpenApiConnection` or
`OpenApiConnectionWebhook` actions but `connectionReferences` was not passed.

**Fix**: Fetch the existing connection references with `get_live_flow` and pass
them as the `connectionReferences` argument.

---

## Data Logic Errors

### `union()` Overriding Correct Records with Nulls

**Symptom**: After merging two arrays, some records have null fields that existed
in one of the source arrays.

**Root cause**: `union(old_data, new_data)` — `union()` is first-wins, so old_data
values override new_data for matching records.

**Fix**: Swap the argument order: `union(new_data, old_data)`

```
Before: @sort(union(outputs('Old_Array'), body('New_Array')), 'Date')
After:  @sort(union(body('New_Array'), outputs('Old_Array')), 'Date')
```

@@ -0,0 +1,157 @@
# FlowStudio MCP — Debug Workflow

End-to-end decision tree for diagnosing Power Automate flow failures.

---

## Top-Level Decision Tree

```
Flow is failing
│
├── Flow never starts / no runs appear
│     └── ► Check flow State: get_live_flow → properties.state
│           ├── "Stopped" → flow is disabled; enable in PA designer
│           └── "Started" + no runs → trigger condition not met (check trigger config)
│
├── Flow run shows "Failed"
│     ├── Step A: get_live_flow_run_error → read error.code + error.message
│     │
│     ├── error.code = "InvalidTemplate"
│     │     └── ► Expression error (null value, wrong type, bad path)
│     │           └── See: Expression Error Workflow below
│     │
│     ├── error.code = "ConnectionAuthorizationFailed"
│     │     └── ► Connection owned by different user; fix in PA designer
│     │
│     ├── error.code = "ActionFailed" + message mentions HTTP
│     │     └── ► See: HTTP Action Workflow below
│     │
│     └── Unknown / generic error
│           └── ► Walk actions backwards (Step B below)
│
└── Flow succeeds but output is wrong
      └── ► Inspect intermediate actions with get_live_flow_run_action_outputs
            └── See: Data Quality Workflow below
```

---

## Expression Error Workflow

```
InvalidTemplate error
│
├── 1. Read error.message — identifies the action name and function
│
├── 2. Get flow definition: get_live_flow
│     └── Find that action in definition["actions"][action_name]["inputs"]
│           └── Identify what upstream value the expression reads
│
├── 3. get_live_flow_run_action_outputs for the action BEFORE the failing one
│     └── Look for null / wrong type in that action's output
│           ├── Null string field → wrap with coalesce(): @coalesce(field, '')
│           ├── Null object → add empty check condition before the action
│           └── Wrong field name → correct the key (case-sensitive)
│
└── 4. Apply fix with update_live_flow, then resubmit
```

---

## HTTP Action Workflow

```
ActionFailed on HTTP action
│
├── 1. get_live_flow_run_action_outputs on the HTTP action
│     └── Read: outputs.statusCode, outputs.body
│
├── statusCode = 401
│     └── ► Auth header missing or expired OAuth token
│           Check: action inputs.authentication block
│
├── statusCode = 403
│     └── ► Insufficient permission on target resource
│           Check: service principal / user has access
│
├── statusCode = 400
│     └── ► Malformed request body
│           Check: action inputs.body expression; parse errors often in nested JSON
│
├── statusCode = 404
│     └── ► Wrong URL or resource deleted/renamed
│           Check: action inputs.uri expression
│
└── statusCode = 500 / timeout
      └── ► Target system error; retry policy may help
            Add: "retryPolicy": {"type": "Fixed", "count": 3, "interval": "PT10S"}
```

---

## Data Quality Workflow

```
Flow succeeds but output data is wrong
│
├── 1. Identify the first "wrong" output — which action produces it?
│
├── 2. get_live_flow_run_action_outputs on that action
│     └── Compare actual output body vs expected
│
├── Source array has nulls / unexpected values
│     ├── Check the trigger data — get_live_flow_run_action_outputs on trigger
│     └── Trace forward action by action until the value corrupts
│
├── Merge/union has wrong values
│     └── Check union argument order:
│           union(NEW, old) = new wins ✓
│           union(OLD, new) = old wins ← common bug
│
├── Foreach output missing items
│     ├── Check foreach condition — filter may be too strict
│     └── Check if parallel foreach caused race condition (add Sequential)
│
└── Date/time values wrong timezone
      └── Use convertTimeZone() — utcNow() is always UTC
```

---

## Walk-Back Analysis (Unknown Failure)

When the error message doesn't clearly name a root cause:

```python
# 1. Get all action names from definition
defn = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)
actions = list(defn["properties"]["definition"]["actions"].keys())

# 2. Check status of each action in the failed run
for action in actions:
    actions_out = mcp("get_live_flow_run_action_outputs",
                      environmentName=ENV, flowName=FLOW_ID, runName=RUN_ID,
                      actionName=action)
    # Returns an array of action objects
    item = actions_out[0] if actions_out else {}
    status = item.get("status", "unknown")
    print(f"{action}: {status}")

# 3. Find the boundary between Succeeded and Failed/Skipped
#    The first Failed action is likely the root cause (unless skipped by design)
```

Actions inside Foreach / Condition branches may appear nested —
check the parent action first to confirm the branch ran at all.

---

## Post-Fix Verification Checklist

1. `update_live_flow` returns `error: null` — definition accepted
2. `resubmit_live_flow_run` confirms a new run started
3. Wait for run completion (poll `get_live_flow_runs` every 15 s)
4. Confirm the new run has `status = "Succeeded"`
5. If the flow has downstream consumers (child flows, emails, SharePoint writes),
   spot-check those too

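
Item 3's polling can be sketched as a small helper. It takes the caller as a parameter, so the names in the commented usage (`mcp`, `ENV`, `FLOW_ID`, `RUN_ID`) are the same assumptions as the Python helper above, and the timing values are illustrative:

```python
import time

def wait_for_run(call, env, flow_id, run_id, interval=15, attempts=20):
    """Poll run history until the given run leaves the Running state."""
    for _ in range(attempts):
        runs = call("get_live_flow_runs", environmentName=env,
                    flowName=flow_id, top=10)
        match = next((r for r in runs if r["name"] == run_id), None)
        if match and match["status"] != "Running":
            return match["status"]   # e.g. "Succeeded", "Failed", "Cancelled"
        time.sleep(interval)
    return "Timeout"

# final_status = wait_for_run(mcp, ENV, FLOW_ID, RUN_ID)
```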
@@ -0,0 +1,450 @@
---
name: flowstudio-power-automate-mcp
description: >-
  Connect to and operate Power Automate cloud flows via a FlowStudio MCP server.
  Use when asked to: list flows, read a flow definition, check run history, inspect
  action outputs, resubmit a run, cancel a running flow, view connections, get a
  trigger URL, validate a definition, monitor flow health, or any task that requires
  talking to the Power Automate API through an MCP tool. Also use for Power Platform
  environment discovery and connection management. Requires a FlowStudio MCP
  subscription or compatible server — see https://mcp.flowstudio.app
---

# Power Automate via FlowStudio MCP

This skill lets AI agents read, monitor, and operate Microsoft Power Automate
cloud flows programmatically through a **FlowStudio MCP server** — no browser,
no UI, no manual steps.

> **Requires:** A [FlowStudio](https://mcp.flowstudio.app) MCP subscription (or
> compatible Power Automate MCP server). You will need:
> - MCP endpoint: `https://mcp.flowstudio.app/mcp` (same for all subscribers)
> - API key / JWT token (`x-api-key` header — NOT Bearer)
> - Power Platform environment name (e.g. `Default-<tenant-guid>`)

---

## Source of Truth

| Priority | Source | Covers |
|----------|--------|--------|
| 1 | **Real API response** | Always trust what the server actually returns |
| 2 | **`tools/list`** | Tool names, parameter names, types, required flags |
| 3 | **SKILL docs & reference files** | Response shapes, behavioral notes, workflow recipes |

> **Start every new session with `tools/list`.**
> It returns the authoritative, up-to-date schema for every tool — parameter names,
> types, and required flags. The SKILL docs cover what `tools/list` cannot tell you:
> response shapes, non-obvious behaviors, and end-to-end workflow patterns.
>
> If any documentation disagrees with `tools/list` or a real API response,
> the API wins.

---

## Recommended Language: Python or Node.js

All examples in this skill and the companion build / debug skills use **Python
with `urllib.request`** (stdlib — no `pip install` needed). **Node.js** is an
equally valid choice: `fetch` is built in from Node 18+, JSON handling is
native, and the async/await model maps cleanly onto the request-response pattern
of MCP tool calls — a natural fit for teams already working in a
JavaScript/TypeScript stack.

| Language | Verdict | Notes |
|---|---|---|
| **Python** | ✅ Recommended | Clean JSON handling, no escaping issues, all skill examples use it |
| **Node.js (≥ 18)** | ✅ Recommended | Native `fetch` + `JSON.stringify`/`JSON.parse`; async/await fits MCP call patterns well; no extra packages needed |
| PowerShell | ⚠️ Avoid for flow operations | `ConvertTo-Json -Depth` silently truncates nested definitions; quoting and escaping break complex payloads. Acceptable for a quick `tools/list` discovery call but not for building or updating flows. |
| cURL / Bash | ⚠️ Possible but fragile | Shell-escaping nested JSON is error-prone; no native JSON parser |

> **TL;DR — use the Core MCP Helper (Python or Node.js) below.** Both handle
> JSON-RPC framing, auth, and response parsing in a single reusable function.

---

## What You Can Do

FlowStudio MCP has two access tiers. **FlowStudio for Teams** subscribers get
both the fast Azure-table store (cached snapshot data + governance metadata) and
full live Power Automate API access. **MCP-only subscribers** get the live tools —
more than enough to build, debug, and operate flows.

### Live Tools — Available to All MCP Subscribers

| Tool | What it does |
|---|---|
| `list_live_flows` | List flows in an environment directly from the PA API (always current) |
| `list_live_environments` | List all Power Platform environments visible to the service account |
| `list_live_connections` | List all connections in an environment from the PA API |
| `get_live_flow` | Fetch the complete flow definition (triggers, actions, parameters) |
| `get_live_flow_http_schema` | Inspect the JSON body schema and response schemas of an HTTP-triggered flow |
| `get_live_flow_trigger_url` | Get the current signed callback URL for an HTTP-triggered flow |
| `trigger_live_flow` | POST to an HTTP-triggered flow's callback URL (AAD auth handled automatically) |
| `update_live_flow` | Create a new flow or patch an existing definition in one call |
| `add_live_flow_to_solution` | Migrate a non-solution flow into a solution |
| `get_live_flow_runs` | List recent run history with status, start/end times, and errors |
| `get_live_flow_run_error` | Get structured error details (per-action) for a failed run |
| `get_live_flow_run_action_outputs` | Inspect inputs/outputs of any action (or every foreach iteration) in a run |
| `resubmit_live_flow_run` | Re-run a failed or cancelled run using its original trigger payload |
| `cancel_live_flow_run` | Cancel a currently running flow execution |

### Store Tools — FlowStudio for Teams Subscribers Only

These tools read from (and write to) the FlowStudio Azure table — a monitored
snapshot of your tenant's flows enriched with governance metadata and run statistics.

| Tool | What it does |
|---|---|
| `list_store_flows` | Search flows from the cache with governance flags, run failure rates, and owner metadata |
| `get_store_flow` | Get full cached details for a single flow including run stats and governance fields |
| `get_store_flow_trigger_url` | Get the trigger URL from the cache (instant, no PA API call) |
| `get_store_flow_runs` | Cached run history for the last N days with duration and remediation hints |
| `get_store_flow_errors` | Cached failed-only runs with failed action names and remediation hints |
| `get_store_flow_summary` | Aggregated stats: success rate, failure count, avg/max duration |
| `set_store_flow_state` | Start or stop a flow via the PA API and sync the result back to the store |
| `update_store_flow` | Update governance metadata (description, tags, monitor flag, notification rules, business impact) |
| `list_store_environments` | List all environments from the cache |
| `list_store_makers` | List all makers (citizen developers) from the cache |
| `get_store_maker` | Get a maker's flow/app counts and account status |
| `list_store_power_apps` | List all Power Apps canvas apps from the cache |
| `list_store_connections` | List all Power Platform connections from the cache |

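Store records key each flow as `envId.flowId` rather than a plain UUID. A minimal sketch — sample records with invented IDs, shaped like a `list_store_flows` response — showing how to recover the plain flow UUID so it can be passed as `flowName` to the live tools:

```python
# Sample records shaped like a `list_store_flows` response (cached store data).
# The "id" field is "<environmentId>.<flowId>" — split once to get the flow UUID.
store_flows = [
    {"id": "3991358a-f603-e49d-b1ed-a9e4f72e2dcb.0757041a-8ef2-cf74-ef06-06881916f371",
     "displayName": "Admin | Sync Template v3", "state": "Started"},
    {"id": "3991358a-f603-e49d-b1ed-a9e4f72e2dcb.9b1c2d3e-4f50-6172-8394-a5b6c7d8e9f0",
     "displayName": "Weekly Report", "state": "Stopped"},
]

flow_ids = [f["id"].split(".", 1)[1] for f in store_flows]
print(flow_ids[0])  # plain UUID — usable as flowName in the live tools
```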
---

## Which Tool Tier to Call First

| Task | Tool | Notes |
|---|---|---|
| List flows | `list_live_flows` | Always current — calls PA API directly |
| Read a definition | `get_live_flow` | Always fetched live — not cached |
| Debug a failure | `get_live_flow_runs` → `get_live_flow_run_error` | Use live run data |

> ⚠️ **`list_live_flows` returns a wrapper object** with a `flows` array — access via `result["flows"]`.
>
> Store tools (`list_store_flows`, `get_store_flow`, etc.) are available to **FlowStudio for Teams** subscribers and provide cached governance metadata. Use live tools when in doubt — they work for all subscription tiers.

---

## Step 0 — Discover Available Tools

Always start by calling `tools/list` to confirm the server is reachable and see
exactly which tool names are available (names may vary by server version):

```python
import json, urllib.error, urllib.request

TOKEN = "<YOUR_JWT_TOKEN>"
MCP = "https://mcp.flowstudio.app/mcp"

def mcp_raw(method, params=None, cid=1):
    payload = {"jsonrpc": "2.0", "method": method, "id": cid}
    if params:
        payload["params"] = params
    req = urllib.request.Request(MCP, data=json.dumps(payload).encode(),
        headers={"x-api-key": TOKEN, "Content-Type": "application/json",
                 "User-Agent": "FlowStudio-MCP/1.0"})
    try:
        resp = urllib.request.urlopen(req, timeout=30)
    except urllib.error.HTTPError as e:
        raise RuntimeError(f"MCP HTTP {e.code} — check token and endpoint") from e
    return json.loads(resp.read())

raw = mcp_raw("tools/list")
if "error" in raw:
    print("ERROR:", raw["error"]); raise SystemExit(1)
for t in raw["result"]["tools"]:
    print(t["name"], "—", t["description"][:60])
```

---

## Core MCP Helper (Python)

Use this helper throughout all subsequent operations:

```python
import json, urllib.error, urllib.request

TOKEN = "<YOUR_JWT_TOKEN>"
MCP = "https://mcp.flowstudio.app/mcp"

def mcp(tool, args, cid=1):
    payload = {"jsonrpc": "2.0", "method": "tools/call", "id": cid,
               "params": {"name": tool, "arguments": args}}
    req = urllib.request.Request(MCP, data=json.dumps(payload).encode(),
        headers={"x-api-key": TOKEN, "Content-Type": "application/json",
                 "User-Agent": "FlowStudio-MCP/1.0"})
    try:
        resp = urllib.request.urlopen(req, timeout=120)
    except urllib.error.HTTPError as e:
        body = e.read().decode("utf-8", errors="replace")
        raise RuntimeError(f"MCP HTTP {e.code}: {body[:200]}") from e
    raw = json.loads(resp.read())
    if "error" in raw:
        raise RuntimeError(f"MCP error: {json.dumps(raw['error'])}")
    text = raw["result"]["content"][0]["text"]
    return json.loads(text)
```

> **Common auth errors:**
> - HTTP 401/403 → token is missing, expired, or malformed. Get a fresh JWT from [mcp.flowstudio.app](https://mcp.flowstudio.app).
> - HTTP 400 → malformed JSON-RPC payload. Check `Content-Type: application/json` and body structure.
> - `MCP error: {"code": -32602, ...}` → wrong or missing tool arguments.

---

## Core MCP Helper (Node.js)

Equivalent helper for Node.js 18+ (built-in `fetch` — no packages required):

```js
const TOKEN = "<YOUR_JWT_TOKEN>";
const MCP = "https://mcp.flowstudio.app/mcp";

async function mcp(tool, args, cid = 1) {
  const payload = {
    jsonrpc: "2.0",
    method: "tools/call",
    id: cid,
    params: { name: tool, arguments: args },
  };
  const res = await fetch(MCP, {
    method: "POST",
    headers: {
      "x-api-key": TOKEN,
      "Content-Type": "application/json",
      "User-Agent": "FlowStudio-MCP/1.0",
    },
    body: JSON.stringify(payload),
  });
  if (!res.ok) {
    const body = await res.text();
    throw new Error(`MCP HTTP ${res.status}: ${body.slice(0, 200)}`);
  }
  const raw = await res.json();
  if (raw.error) throw new Error(`MCP error: ${JSON.stringify(raw.error)}`);
  return JSON.parse(raw.result.content[0].text);
}
```

> Requires Node.js 18+. For older Node, replace `fetch` with `https.request`
> from the stdlib or install `node-fetch`.

---

## List Flows

```python
ENV = "Default-<tenant-guid>"

result = mcp("list_live_flows", {"environmentName": ENV})
# Returns wrapper object:
# {"mode": "owner", "flows": [{"id": "0757041a-...", "displayName": "My Flow",
#   "state": "Started", "triggerType": "Request", ...}], "totalCount": 42, "error": null}
for f in result["flows"]:
    FLOW_ID = f["id"]  # plain UUID — use directly as flowName
    print(FLOW_ID, "|", f["displayName"], "|", f["state"])
```

---

## Read a Flow Definition

```python
FLOW = "<flow-uuid>"

flow = mcp("get_live_flow", {"environmentName": ENV, "flowName": FLOW})

# Display name and state
print(flow["properties"]["displayName"])
print(flow["properties"]["state"])

# List all action names
actions = flow["properties"]["definition"]["actions"]
print("Actions:", list(actions.keys()))

# Inspect one action's expression
print(actions["Compose_Filter"]["inputs"])
```

---

## Check Run History

```python
# Most recent runs (newest first)
runs = mcp("get_live_flow_runs", {"environmentName": ENV, "flowName": FLOW, "top": 5})
# Returns direct array:
# [{"name": "08584296068667933411438594643CU15",
#   "status": "Failed",
#   "startTime": "2026-02-25T06:13:38.6910688Z",
#   "endTime": "2026-02-25T06:15:24.1995008Z",
#   "triggerName": "manual",
#   "error": {"code": "ActionFailed", "message": "An action failed..."}},
#  {"name": "08584296028664130474944675379CU26",
#   "status": "Succeeded", "error": null, ...}]

for r in runs:
    print(r["name"], r["status"])

# Get the name of the first failed run
run_id = next((r["name"] for r in runs if r["status"] == "Failed"), None)
```

---

## Inspect an Action's Output

```python
run_id = runs[0]["name"]

out = mcp("get_live_flow_run_action_outputs", {
    "environmentName": ENV,
    "flowName": FLOW,
    "runName": run_id,
    "actionName": "Get_Customer_Record"  # exact action name from the definition
})
print(json.dumps(out, indent=2))
```

---

## Get a Run's Error

```python
err = mcp("get_live_flow_run_error", {
    "environmentName": ENV,
    "flowName": FLOW,
    "runName": run_id
})
# Returns:
# {"runName": "08584296068...",
#  "failedActions": [
#    {"actionName": "HTTP_find_AD_User_by_Name", "status": "Failed",
#     "code": "NotSpecified", "startTime": "...", "endTime": "..."},
#    {"actionName": "Scope_prepare_workers", "status": "Failed",
#     "error": {"code": "ActionFailed", "message": "An action failed..."}}
#  ],
#  "allActions": [
#    {"actionName": "Apply_to_each", "status": "Skipped"},
#    {"actionName": "Compose_WeekEnd", "status": "Succeeded"},
#    ...
#  ]}

# The ROOT cause is usually the deepest entry in failedActions.
# Some entries carry a top-level "code", others nest it under "error" — check both:
root = err["failedActions"][-1]
code = root.get("code") or root.get("error", {}).get("code")
print(f"Root failure: {root['actionName']} → {code}")
```

---

## Resubmit a Run

```python
result = mcp("resubmit_live_flow_run", {
    "environmentName": ENV,
    "flowName": FLOW,
    "runName": run_id
})
print(result)  # {"resubmitted": true, "triggerName": "..."}
```

---

## Cancel a Running Run

```python
mcp("cancel_live_flow_run", {
    "environmentName": ENV,
    "flowName": FLOW,
    "runName": run_id
})
```

> ⚠️ **Do NOT cancel a run that shows `Running` because it is waiting for an
> adaptive card response.** That status is normal — the flow is paused waiting
> for a human to respond in Teams. Cancelling it will discard the pending card.

---

## Full Round-Trip Example — Debug and Fix a Failing Flow

```python
# ── 1. Find the flow ─────────────────────────────────────────────────────
result = mcp("list_live_flows", {"environmentName": ENV})
target = next(f for f in result["flows"] if "My Flow Name" in f["displayName"])
FLOW_ID = target["id"]

# ── 2. Get the most recent failed run ────────────────────────────────────
runs = mcp("get_live_flow_runs", {"environmentName": ENV, "flowName": FLOW_ID, "top": 5})
# [{"name": "08584296068...", "status": "Failed", ...}, ...]
RUN_ID = next(r["name"] for r in runs if r["status"] == "Failed")

# ── 3. Get per-action failure breakdown ──────────────────────────────────
err = mcp("get_live_flow_run_error", {"environmentName": ENV, "flowName": FLOW_ID, "runName": RUN_ID})
# {"failedActions": [{"actionName": "HTTP_find_AD_User_by_Name", "code": "NotSpecified",...}], ...}
root_action = err["failedActions"][-1]["actionName"]
print(f"Root failure: {root_action}")

# ── 4. Read the definition and inspect the failing action's expression ───
defn = mcp("get_live_flow", {"environmentName": ENV, "flowName": FLOW_ID})
acts = defn["properties"]["definition"]["actions"]
print("Failing action inputs:", acts[root_action]["inputs"])

# ── 5. Inspect the prior action's output to find the null ────────────────
out = mcp("get_live_flow_run_action_outputs", {
    "environmentName": ENV, "flowName": FLOW_ID,
    "runName": RUN_ID, "actionName": "Compose_Names"
})
nulls = [x for x in out.get("body", []) if x.get("Name") is None]
print(f"{len(nulls)} records with null Name")

# ── 6. Apply the fix ─────────────────────────────────────────────────────
acts[root_action]["inputs"]["parameters"]["searchName"] = \
    "@coalesce(item()?['Name'], '')"

conn_refs = defn["properties"]["connectionReferences"]
result = mcp("update_live_flow", {
    "environmentName": ENV, "flowName": FLOW_ID,
    "definition": defn["properties"]["definition"],
    "connectionReferences": conn_refs
})
assert result.get("error") is None, f"Deploy failed: {result['error']}"
# ⚠️ error key is always present — only fail if it is NOT None

# ── 7. Resubmit and verify ───────────────────────────────────────────────
mcp("resubmit_live_flow_run", {"environmentName": ENV, "flowName": FLOW_ID, "runName": RUN_ID})

import time; time.sleep(30)
new_runs = mcp("get_live_flow_runs", {"environmentName": ENV, "flowName": FLOW_ID, "top": 1})
print(new_runs[0]["status"])  # Succeeded = done
```

---

## Auth & Connection Notes

| Field | Value |
|---|---|
| Auth header | `x-api-key: <JWT>` — **not** `Authorization: Bearer` |
| Token format | Plain JWT — do not strip, alter, or prefix it |
| Timeout | Use ≥ 120 s for `get_live_flow_run_action_outputs` (large outputs) |
| Environment name | `Default-<tenant-guid>` (find it via `list_live_environments` or `list_live_flows` response) |

---

## Reference Files

- [MCP-BOOTSTRAP.md](references/MCP-BOOTSTRAP.md) — endpoint, auth, request/response format (read this first)
- [tool-reference.md](references/tool-reference.md) — response shapes and behavioral notes (parameters are in `tools/list`)
- [action-types.md](references/action-types.md) — Power Automate action type patterns
- [connection-references.md](references/connection-references.md) — connector reference guide

---

## More Capabilities

For **diagnosing failing flows** end-to-end → load the `flowstudio-power-automate-debug` skill.

For **building and deploying new flows** → load the `flowstudio-power-automate-build` skill.

@@ -0,0 +1,53 @@
# MCP Bootstrap — Quick Reference

Everything an agent needs to start calling the FlowStudio MCP server.

```
Endpoint:  https://mcp.flowstudio.app/mcp
Protocol:  JSON-RPC 2.0 over HTTP POST
Transport: Streamable HTTP — single POST per request, no SSE, no WebSocket
Auth:      x-api-key header with JWT token (NOT Bearer)
```

## Required Headers

```
Content-Type: application/json
x-api-key: <token>
User-Agent: FlowStudio-MCP/1.0   ← required, or Cloudflare blocks you
```

## Step 1 — Discover Tools

```json
POST {"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}
```

Returns all tools with names, descriptions, and input schemas.
Free — not counted against plan limits.

## Step 2 — Call a Tool

```json
POST {"jsonrpc":"2.0","id":1,"method":"tools/call",
      "params":{"name":"<tool_name>","arguments":{...}}}
```

## Response Shape

```
Success → {"result":{"content":[{"type":"text","text":"<JSON string>"}]}}
Error   → {"result":{"content":[{"type":"text","text":"{\"error\":{...}}"}]}}
```

Always parse `result.content[0].text` as JSON to get the actual data.

## Key Tips

- Tool results are JSON strings inside the text field — **double-parse needed**
- `"error"` field in parsed body: `null` = success, object = failure
- `environmentName` is required for most tools, but **not** for:
  `list_live_environments`, `list_live_connections` (schema-optional, but see the
  tool-reference note — the live server may still reject the call without it),
  `list_store_flows`, `list_store_environments`, `list_store_makers`,
  `get_store_maker`, `list_store_power_apps`, `list_store_connections`
- When in doubt, check the `required` array in each tool's schema from `tools/list`

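The double-parse tip can be demonstrated against a canned envelope — a sample payload only, no live server involved:

```python
import json

# A canned MCP success envelope. The real data is a JSON *string* inside
# result.content[0].text, so two parses are needed: the transport layer's
# (done here by constructing the dict) and then json.loads on the text field.
raw = {
    "jsonrpc": "2.0", "id": 1,
    "result": {"content": [{"type": "text",
                            "text": "{\"flows\": [], \"totalCount\": 0, \"error\": null}"}]}
}

body = json.loads(raw["result"]["content"][0]["text"])  # second parse
assert body["error"] is None   # null error → success
print(body["totalCount"])      # → 0
```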
@@ -0,0 +1,79 @@
# FlowStudio MCP — Action Types Reference

Compact lookup for recognising action types returned by `get_live_flow`.
Use this to **read and understand** existing flow definitions.

> For full copy-paste construction patterns, see the `flowstudio-power-automate-build` skill.

---

## How to Read a Flow Definition

Every action has `"type"`, `"runAfter"`, and `"inputs"`. The `runAfter` object
declares dependencies: `{"Previous": ["Succeeded"]}`. Valid statuses:
`Succeeded`, `Failed`, `Skipped`, `TimedOut`.

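A minimal sketch of flattening those `runAfter` dependencies when reading a definition — the actions map below is a small invented sample shaped like `definition["actions"]` from `get_live_flow`:

```python
# Sample actions map shaped like definition["actions"] from get_live_flow.
actions = {
    "Get_Items": {"type": "OpenApiConnection", "runAfter": {}},
    "Filter_Rows": {"type": "Query", "runAfter": {"Get_Items": ["Succeeded"]}},
    "Notify_On_Failure": {"type": "Http",
                          "runAfter": {"Filter_Rows": ["Failed", "TimedOut"]}},
}

# Flatten the dependency graph into (action, parent, statuses) triples.
# An empty runAfter means the action runs straight after the trigger.
edges = [(name, parent, statuses)
         for name, action in actions.items()
         for parent, statuses in action["runAfter"].items()]

for name, parent, statuses in edges:
    print(f"{name} runs after {parent} on {statuses}")
```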
---

## Action Type Quick Reference

| Type | Purpose | Key fields to inspect | Output reference |
|---|---|---|---|
| `Compose` | Store/transform a value | `inputs` (any expression) | `outputs('Name')` |
| `InitializeVariable` | Declare a variable | `inputs.variables[].{name, type, value}` | `variables('name')` |
| `SetVariable` | Update a variable | `inputs.{name, value}` | `variables('name')` |
| `IncrementVariable` | Increment a numeric variable | `inputs.{name, value}` | `variables('name')` |
| `AppendToArrayVariable` | Push to an array variable | `inputs.{name, value}` | `variables('name')` |
| `If` | Conditional branch | `expression.and/or`, `actions`, `else.actions` | — |
| `Switch` | Multi-way branch | `expression`, `cases.{case, actions}`, `default` | — |
| `Foreach` | Loop over array | `foreach`, `actions`, `operationOptions` | `item()` / `items('Name')` |
| `Until` | Loop until condition | `expression`, `limit.{count, timeout}`, `actions` | — |
| `Wait` | Delay | `inputs.interval.{count, unit}` | — |
| `Scope` | Group / try-catch | `actions` (nested action map) | `result('Name')` |
| `Terminate` | End run | `inputs.{runStatus, runError}` | — |
| `OpenApiConnection` | Connector call (SP, Outlook, Teams…) | `inputs.host.{apiId, connectionName, operationId}`, `inputs.parameters` | `outputs('Name')?['body/...']` |
| `OpenApiConnectionWebhook` | Webhook wait (approvals, adaptive cards) | same as above | `body('Name')?['...']` |
| `Http` | External HTTP call | `inputs.{method, uri, headers, body}` | `outputs('Name')?['body']` |
| `Response` | Return to HTTP caller | `inputs.{statusCode, headers, body}` | — |
| `Query` | Filter array | `inputs.{from, where}` | `body('Name')` (filtered array) |
| `Select` | Reshape/project array | `inputs.{from, select}` | `body('Name')` (projected array) |
| `Table` | Array → CSV/HTML string | `inputs.{from, format, columns}` | `body('Name')` (string) |
| `ParseJson` | Parse JSON with schema | `inputs.{content, schema}` | `body('Name')?['field']` |
| `Expression` | Built-in function (e.g. ConvertTimeZone) | `kind`, `inputs` | `body('Name')` |

---

## Connector Identification

When you see `type: OpenApiConnection`, identify the connector from `host.apiId`:

| apiId suffix | Connector |
|---|---|
| `shared_sharepointonline` | SharePoint |
| `shared_office365` | Outlook / Office 365 |
| `shared_teams` | Microsoft Teams |
| `shared_approvals` | Approvals |
| `shared_office365users` | Office 365 Users |
| `shared_flowmanagement` | Flow Management |

The `operationId` tells you the specific operation (e.g. `GetItems`, `SendEmailV2`,
`PostMessageToConversation`). The `connectionName` maps to a GUID in
`properties.connectionReferences`.

---

## Common Expressions (Reading Cheat Sheet)

| Expression | Meaning |
|---|---|
| `@outputs('X')?['body/value']` | Array result from connector action X |
| `@body('X')` | Direct body of action X (Query, Select, ParseJson) |
| `@item()?['Field']` | Current loop item's field |
| `@triggerBody()?['Field']` | Trigger payload field |
| `@variables('name')` | Variable value |
| `@coalesce(a, b)` | First non-null of a, b |
| `@first(array)` | First element (null if empty) |
| `@length(array)` | Array count |
| `@empty(value)` | True if null/empty string/empty array |
| `@union(a, b)` | Merge arrays — **first wins** on duplicates |
| `@result('Scope')` | Array of action outcomes inside a Scope |

@@ -0,0 +1,115 @@
# FlowStudio MCP — Connection References

Connection references wire a flow's connector actions to real authenticated
connections in the Power Platform. They are required whenever you call
`update_live_flow` with a definition that uses connector actions.

---

## Structure in a Flow Definition

```json
{
  "properties": {
    "definition": { ... },
    "connectionReferences": {
      "shared_sharepointonline": {
        "connectionName": "shared-sharepointonl-62599557c-1f33-4aec-b4c0-a6e4afcae3be",
        "id": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
        "displayName": "SharePoint"
      },
      "shared_office365": {
        "connectionName": "shared-office365-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
        "id": "/providers/Microsoft.PowerApps/apis/shared_office365",
        "displayName": "Office 365 Outlook"
      }
    }
  }
}
```

Keys are **logical reference names** (e.g. `shared_sharepointonline`).
These match the `connectionName` field inside each action's `host` block.

---

## Finding Connection GUIDs

Call `get_live_flow` on **any existing flow** that uses the same connection
and copy the `connectionReferences` block. The GUID after the connector prefix is
the connection instance owned by the authenticating user.

```python
flow = mcp("get_live_flow", {"environmentName": ENV, "flowName": EXISTING_FLOW_ID})
conn_refs = flow["properties"]["connectionReferences"]
# conn_refs["shared_sharepointonline"]["connectionName"]
# → "shared-sharepointonl-62599557c-1f33-4aec-b4c0-a6e4afcae3be"
```

> ⚠️ Connection references are **user-scoped**. If a connection is owned
> by another account, `update_live_flow` will return 403
> `ConnectionAuthorizationFailed`. You must use a connection belonging to
> the account whose token is in the `x-api-key` header.

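A hedged sketch of assembling a `connectionReferences` block from a `list_live_connections`-shaped response — the records below are samples (GUIDs reused from the examples in these docs), and broken connections are filtered out via the `statuses` field:

```python
# Sample connection records shaped like a `list_live_connections` response.
connections = [
    {"id": "shared-sharepointonl-62599557c-1f33-4aec-b4c0-a6e4afcae3be",
     "connectorName": "shared_sharepointonline",
     "statuses": [{"status": "Connected"}]},
    {"id": "shared-office365-9f9d2c8e-55f1-49c9-9f9c-1c45d1fbbdce",
     "connectorName": "shared_office365",
     "statuses": [{"status": "Error"}]},     # broken — should not be wired in
]

# Keep only Connected instances; "id" becomes connectionName and the apiId is
# "/providers/Microsoft.PowerApps/apis/" + connectorName.
conn_refs = {
    c["connectorName"]: {
        "connectionName": c["id"],
        "id": "/providers/Microsoft.PowerApps/apis/" + c["connectorName"],
    }
    for c in connections
    if c["statuses"][0]["status"] == "Connected"
}
assert "shared_office365" not in conn_refs  # broken connection filtered out
```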
---

## Passing `connectionReferences` to `update_live_flow`

```python
result = mcp("update_live_flow", {
    "environmentName": ENV,
    "flowName": FLOW_ID,
    "definition": modified_definition,
    "connectionReferences": {
        "shared_sharepointonline": {
            "connectionName": "shared-sharepointonl-62599557c-1f33-4aec-b4c0-a6e4afcae3be",
            "id": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline"
        }
    }
})
```

Only include connections that the definition actually uses.

---

## Common Connector API IDs

| Service | API ID |
|---|---|
| SharePoint Online | `/providers/Microsoft.PowerApps/apis/shared_sharepointonline` |
| Office 365 Outlook | `/providers/Microsoft.PowerApps/apis/shared_office365` |
| Microsoft Teams | `/providers/Microsoft.PowerApps/apis/shared_teams` |
| OneDrive for Business | `/providers/Microsoft.PowerApps/apis/shared_onedriveforbusiness` |
| Azure AD | `/providers/Microsoft.PowerApps/apis/shared_azuread` |
| HTTP with Azure AD | `/providers/Microsoft.PowerApps/apis/shared_webcontents` |
| SQL Server | `/providers/Microsoft.PowerApps/apis/shared_sql` |
| Dataverse | `/providers/Microsoft.PowerApps/apis/shared_commondataserviceforapps` |
| Azure Blob Storage | `/providers/Microsoft.PowerApps/apis/shared_azureblob` |
| Approvals | `/providers/Microsoft.PowerApps/apis/shared_approvals` |
| Office 365 Users | `/providers/Microsoft.PowerApps/apis/shared_office365users` |
| Flow Management | `/providers/Microsoft.PowerApps/apis/shared_flowmanagement` |

---

## Teams Adaptive Card Dual-Connection Requirement

Flows that send adaptive cards **and** post follow-up messages require two
separate Teams connections:

```json
"connectionReferences": {
  "shared_teams": {
    "connectionName": "shared-teams-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "id": "/providers/Microsoft.PowerApps/apis/shared_teams"
  },
  "shared_teams_1": {
    "connectionName": "shared-teams-yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy",
    "id": "/providers/Microsoft.PowerApps/apis/shared_teams"
  }
}
```

Both can point to the **same underlying Teams account** but must be registered
as two distinct connection references. The webhook (`OpenApiConnectionWebhook`)
uses `shared_teams` and subsequent message actions use `shared_teams_1`.

@@ -0,0 +1,445 @@
|
||||
# FlowStudio MCP — Tool Response Catalog
|
||||
|
||||
Response shapes and behavioral notes for the FlowStudio Power Automate MCP server.
|
||||
|
||||
> **For tool names and parameters**: Always call `tools/list` on the server.
|
||||
> It returns the authoritative, up-to-date schema for every tool.
|
||||
> This document covers what `tools/list` does NOT tell you: **response shapes**
|
||||
> and **non-obvious behaviors** discovered through real usage.
|
||||
|
||||
---
|
||||
|
||||
## Source of Truth
|
||||
|
||||
| Priority | Source | Covers |
|
||||
|----------|--------|--------|
|
||||
| 1 | **Real API response** | Always trust what the server actually returns |
|
||||
| 2 | **`tools/list`** | Tool names, parameter names, types, required flags |
|
||||
| 3 | **This document** | Response shapes, behavioral notes, gotchas |
|
||||
|
||||
> If this document disagrees with `tools/list` or real API behavior,
|
||||
> the API wins. Update this document accordingly.
|
||||
|
||||
---
|
||||
|
||||
## Environment & Tenant Discovery
|
||||
|
||||
### `list_live_environments`
|
||||
|
||||
Response: direct array of environments.
|
||||
```json
|
||||
[
|
||||
{
|
||||
"id": "Default-26e65220-5561-46ef-9783-ce5f20489241",
|
||||
"displayName": "FlowStudio (default)",
|
||||
"sku": "Production",
|
||||
"location": "australia",
|
||||
"state": "Enabled",
|
||||
"isDefault": true,
|
||||
"isAdmin": true,
|
||||
"isMember": true,
|
||||
"createdTime": "2023-08-18T00:41:05Z"
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
> Use the `id` value as `environmentName` in all other tools.
|
||||
|
||||
### `list_store_environments`
|
||||
|
||||
Same shape as `list_live_environments` but read from cache (faster).
|
||||
|
||||
---
|
||||
|
||||
## Connection Discovery
|
||||
|
||||
### `list_live_connections`
|
||||
|
||||
Response: wrapper object with `connections` array.
|
||||
```json
|
||||
{
|
||||
"connections": [
|
||||
{
|
||||
"id": "shared-office365-9f9d2c8e-55f1-49c9-9f9c-1c45d1fbbdce",
|
||||
"displayName": "user@contoso.com",
|
||||
"connectorName": "shared_office365",
|
||||
"createdBy": "User Name",
|
||||
"statuses": [{"status": "Connected"}],
|
||||
"createdTime": "2024-03-12T21:23:55.206815Z"
|
||||
}
|
||||
],
|
||||
"totalCount": 56,
|
||||
"error": null
|
}
```

> **Key field**: `id` is the `connectionName` value used in `connectionReferences`.
>
> **Key field**: `connectorName` maps to apiId:
> `"/providers/Microsoft.PowerApps/apis/" + connectorName`
>
> Filter by status: `statuses[0].status == "Connected"`.
>
> **Note**: `tools/list` marks `environmentName` as optional, but the server
> returns `MissingEnvironmentFilter` (HTTP 400) if you omit it. Always pass
> `environmentName`.

### `list_store_connections`

Same connection data from cache.

---

## Flow Discovery & Listing

### `list_live_flows`

Response: wrapper object with `flows` array.
```json
{
  "mode": "owner",
  "flows": [
    {
      "id": "0757041a-8ef2-cf74-ef06-06881916f371",
      "displayName": "My Flow",
      "state": "Started",
      "triggerType": "Request",
      "triggerKind": "Http",
      "createdTime": "2023-08-18T01:18:17Z",
      "lastModifiedTime": "2023-08-18T12:47:42Z",
      "owners": "<aad-object-id>",
      "definitionAvailable": true
    }
  ],
  "totalCount": 100,
  "error": null
}
```

> Access via `result["flows"]`. `id` is a plain UUID --- use directly as `flowName`.
>
> `mode` indicates the access scope used (`"owner"` or `"admin"`).

### `list_store_flows`

Response: **direct array** (no wrapper).
```json
[
  {
    "id": "3991358a-f603-e49d-b1ed-a9e4f72e2dcb.0757041a-8ef2-cf74-ef06-06881916f371",
    "displayName": "Admin | Sync Template v3 (Solutions)",
    "state": "Started",
    "triggerType": "OpenApiConnectionWebhook",
    "environmentName": "3991358a-f603-e49d-b1ed-a9e4f72e2dcb",
    "runPeriodTotal": 100,
    "createdTime": "2023-08-18T01:18:17Z",
    "lastModifiedTime": "2023-08-18T12:47:42Z"
  }
]
```

> **`id` format**: `envId.flowId` --- split on the first `.` to extract the flow UUID:
> `flow_id = item["id"].split(".", 1)[1]`
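
For instance, a minimal helper (GUIDs contain no dots, so splitting on the first `.` is safe):

```python
def split_store_id(store_id: str) -> tuple[str, str]:
    """Split a cached flow id of the form 'envId.flowId' into its parts."""
    env_id, _, flow_id = store_id.partition(".")
    return env_id, flow_id

env_id, flow_id = split_store_id(
    "3991358a-f603-e49d-b1ed-a9e4f72e2dcb.0757041a-8ef2-cf74-ef06-06881916f371"
)
# flow_id is now usable as the flowName parameter of the live tools
```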

### `get_store_flow`

Response: single flow metadata from cache (selected fields).
```json
{
  "id": "envId.flowId",
  "displayName": "My Flow",
  "state": "Started",
  "triggerType": "Recurrence",
  "runPeriodTotal": 100,
  "runPeriodFailRate": 0.1,
  "runPeriodSuccessRate": 0.9,
  "runPeriodFails": 10,
  "runPeriodSuccess": 90,
  "runPeriodDurationAverage": 29410.8,
  "runPeriodDurationMax": 158900.0,
  "runError": "{\"code\": \"EACCES\", ...}",
  "description": "Flow description",
  "tier": "Premium",
  "complexity": "{...}",
  "actions": 42,
  "connections": ["sharepointonline", "office365"],
  "owners": ["user@contoso.com"],
  "createdBy": "user@contoso.com"
}
```

> `runPeriodDurationAverage` / `runPeriodDurationMax` are in **milliseconds** (divide by 1000).
> `runError` is a **JSON string** --- parse with `json.loads()`.
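
As a sketch (field names from the response above; the object encoded inside `runError` varies by connector, so the sample payload here is illustrative):

```python
import json

flow = {
    "runPeriodDurationAverage": 29410.8,
    "runPeriodDurationMax": 158900.0,
    "runError": "{\"code\": \"EACCES\"}",
}

avg_seconds = flow["runPeriodDurationAverage"] / 1000
max_seconds = flow["runPeriodDurationMax"] / 1000

# runError is a JSON *string*, not an object -- decode it before use
run_error = json.loads(flow["runError"]) if flow.get("runError") else None
```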

---

## Flow Definition (Live API)

### `get_live_flow`

Response: full flow definition from PA API.
```json
{
  "name": "<flow-guid>",
  "properties": {
    "displayName": "My Flow",
    "state": "Started",
    "definition": {
      "triggers": { "..." },
      "actions": { "..." },
      "parameters": { "..." }
    },
    "connectionReferences": { "..." }
  }
}
```

### `update_live_flow`

**Create mode**: Omit `flowName` --- creates a new flow. `definition` and `displayName` required.

**Update mode**: Provide `flowName` --- PATCHes existing flow.
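
For create mode, a minimal `definition` might look like the sketch below. This is a hand-written example in Power Automate's workflow definition language, not a server-provided template --- the trigger and action are placeholders to adapt to your scenario:

```json
{
  "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
  "contentVersion": "1.0.0.0",
  "triggers": {
    "Recurrence": {
      "type": "Recurrence",
      "recurrence": { "frequency": "Day", "interval": 1 }
    }
  },
  "actions": {
    "Compose_greeting": {
      "type": "Compose",
      "inputs": "Hello from FlowStudio",
      "runAfter": {}
    }
  }
}
```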

Response:
```json
{
  "created": false,
  "flowKey": "envId.flowId",
  "updated": ["definition", "connectionReferences"],
  "displayName": "My Flow",
  "state": "Started",
  "definition": { "...full definition..." },
  "error": null
}
```

> `error` is **always present** but may be `null`. Check `result.get("error") is not None`.
>
> On create: `created` is the new flow GUID (string). On update: `created` is `false`.
>
> `description` is **always required** (create and update).
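
A defensive handling pattern for the response (illustrative only; `result` stands for the parsed tool response, here stubbed with a create-mode value):

```python
result = {
    "created": "0757041a-8ef2-cf74-ef06-06881916f371",  # GUID string on create, false on update
    "flowKey": "envId.flowId",
    "error": None,
}

# "error" is always present; only a non-null value means failure
if result.get("error") is not None:
    raise RuntimeError(f"update_live_flow failed: {result['error']}")

created = result["created"]
if isinstance(created, str):
    new_flow_id = created  # create mode: the new flow's GUID
```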

### `add_live_flow_to_solution`

Migrates a non-solution flow into a solution. Returns error if already in a solution.

---

## Run History & Monitoring

### `get_live_flow_runs`

Response: direct array of runs (newest first).
```json
[{
  "name": "<run-id>",
  "status": "Succeeded|Failed|Running|Cancelled",
  "startTime": "2026-02-25T06:13:38Z",
  "endTime": "2026-02-25T06:14:02Z",
  "triggerName": "Recurrence",
  "error": null
}]
```

> `top` defaults to **30** and auto-paginates for higher values. Set `top: 300`
> for 24-hour coverage on flows running every 5 minutes.
>
> Run ID field is **`name`** (not `runName`). Use this value as the `runName`
> parameter in other tools.
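
For example, to pick the most recent failed run from the array (runs arrive newest-first, and the run identifier lives in `name`; the run IDs below are sample values):

```python
runs = [
    {"name": "08584296068667933411438594644CU15", "status": "Succeeded"},
    {"name": "08584296068667933411438594643CU15", "status": "Failed"},
]

# first match in a newest-first list == most recent failure
latest_failed = next((r for r in runs if r["status"] == "Failed"), None)
run_name = latest_failed["name"] if latest_failed else None
# pass run_name as the runName parameter to get_live_flow_run_error
```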

### `get_live_flow_run_error`

Response: structured error breakdown for a failed run.
```json
{
  "runName": "08584296068667933411438594643CU15",
  "failedActions": [
    {
      "actionName": "Apply_to_each_prepare_workers",
      "status": "Failed",
      "error": {"code": "ActionFailed", "message": "An action failed."},
      "code": "ActionFailed",
      "startTime": "2026-02-25T06:13:52Z",
      "endTime": "2026-02-25T06:15:24Z"
    },
    {
      "actionName": "HTTP_find_AD_User_by_Name",
      "status": "Failed",
      "code": "NotSpecified",
      "startTime": "2026-02-25T06:14:01Z",
      "endTime": "2026-02-25T06:14:05Z"
    }
  ],
  "allActions": [
    {"actionName": "Apply_to_each", "status": "Skipped"},
    {"actionName": "Compose_WeekEnd", "status": "Succeeded"},
    {"actionName": "HTTP_find_AD_User_by_Name", "status": "Failed"}
  ]
}
```

> `failedActions` is ordered outer-to-inner --- the **last entry is the root cause**.
> Use `failedActions[-1]["actionName"]` as the starting point for diagnosis.
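
Applied to the response above, root-cause extraction is a one-liner:

```python
error_report = {
    "failedActions": [
        {"actionName": "Apply_to_each_prepare_workers", "code": "ActionFailed"},
        {"actionName": "HTTP_find_AD_User_by_Name", "code": "NotSpecified"},
    ]
}

# outer-to-inner ordering: the last failed action is the innermost (root) failure
root_cause = error_report["failedActions"][-1]["actionName"]
# inspect this action's outputs next via get_live_flow_run_action_outputs
```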

### `get_live_flow_run_action_outputs`

Response: array of action detail objects.
```json
[
  {
    "actionName": "Compose_WeekEnd_now",
    "status": "Succeeded",
    "startTime": "2026-02-25T06:13:52Z",
    "endTime": "2026-02-25T06:13:52Z",
    "error": null,
    "inputs": "Mon, 25 Feb 2026 06:13:52 GMT",
    "outputs": "Mon, 25 Feb 2026 06:13:52 GMT"
  }
]
```

> **`actionName` is optional**: omit it to return ALL actions in the run;
> provide it to return a single-element array for that action only.
>
> Outputs can be very large (50 MB+) for bulk-data actions. Use 120s+ timeout.
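
When you fetch all actions in one call, indexing the array once avoids repeated scans (action names below are sample values):

```python
actions = [
    {"actionName": "Compose_WeekEnd_now", "status": "Succeeded"},
    {"actionName": "HTTP_find_AD_User_by_Name", "status": "Failed"},
]

# index once, then look up whichever action you need
by_name = {a["actionName"]: a for a in actions}
failed_action = by_name["HTTP_find_AD_User_by_Name"]
```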

---

## Run Control

### `resubmit_live_flow_run`

Response: `{ flowKey, resubmitted: true, runName, triggerName }`

### `cancel_live_flow_run`

Cancels a `Running` flow run.

> Do NOT cancel runs waiting for an adaptive card response --- status `Running`
> is normal while a Teams card is awaiting user input.

---

## HTTP Trigger Tools

### `get_live_flow_http_schema`

Response keys:
```
flowKey             - Flow GUID
displayName         - Flow display name
triggerName         - Trigger action name (e.g. "manual")
triggerType         - Trigger type (e.g. "Request")
triggerKind         - Trigger kind (e.g. "Http")
requestMethod       - HTTP method (e.g. "POST")
relativePath        - Relative path configured on the trigger (if any)
requestSchema       - JSON schema the trigger expects as POST body
requestHeaders      - Headers the trigger expects
responseSchemas     - Array of JSON schemas defined on Response action(s)
responseSchemaCount - Number of Response actions that define output schemas
```

> The request body schema is in `requestSchema` (not `triggerSchema`).

### `get_live_flow_trigger_url`

Returns the signed callback URL for HTTP-triggered flows. Response includes
`flowKey`, `triggerName`, `triggerType`, `triggerKind`, `triggerMethod`, `triggerUrl`.

### `trigger_live_flow`

Response keys: `flowKey`, `triggerName`, `triggerUrl`, `requiresAadAuth`, `authType`,
`responseStatus`, `responseBody`.

> **Only works for `Request` (HTTP) triggers.** Returns an error for Recurrence
> and other trigger types: `"only HTTP Request triggers can be invoked via this tool"`.
>
> `responseStatus` + `responseBody` contain the flow's Response action output.
> AAD-authenticated triggers are handled automatically.

---

## Flow State Management

### `set_store_flow_state`

Start or stop a flow. Pass `state: "Started"` or `state: "Stopped"`.

---

## Store Tools --- FlowStudio for Teams Only

### `get_store_flow_summary`

Response: aggregated run statistics.
```json
{
  "totalRuns": 100,
  "failRuns": 10,
  "failRate": 0.1,
  "averageDurationSeconds": 29.4,
  "maxDurationSeconds": 158.9,
  "firstFailRunRemediation": "<hint or null>"
}
```

### `get_store_flow_runs`

Cached run history for the last N days with duration and remediation hints.

### `get_store_flow_errors`

Cached failed-only runs with failed action names and remediation hints.

### `get_store_flow_trigger_url`

Trigger URL from cache (instant, no PA API call).

### `update_store_flow`

Update governance metadata (description, tags, monitor flag, notification rules, business impact).

### `list_store_makers` / `get_store_maker`

Maker (citizen developer) discovery and detail.

### `list_store_power_apps`

List all Power Apps canvas apps from the cache.

---

## Behavioral Notes

Non-obvious behaviors discovered through real API usage. These are things
`tools/list` cannot tell you.

### `get_live_flow_run_action_outputs`
- **`actionName` is optional**: omit to get all actions, provide to get one.
  This changes the response from N elements to 1 element (still an array).
- Outputs can be 50 MB+ for bulk-data actions --- always use 120s+ timeout.

### `update_live_flow`
- `description` is **always required** (create and update modes).
- `error` key is **always present** in response --- `null` means success.
  Do NOT check `if "error" in result`; check `result.get("error") is not None`.
- On create, `created` = new flow GUID (string). On update, `created` = `false`.

### `trigger_live_flow`
- **Only works for HTTP Request triggers.** Returns error for Recurrence, connector,
  and other trigger types.
- AAD-authenticated triggers are handled automatically (impersonated Bearer token).

### `get_live_flow_runs`
- `top` defaults to **30** with automatic pagination for higher values.
- Run ID field is `name`, not `runName`. Use this value as `runName` in other tools.
- Runs are returned newest-first.

### Teams `PostMessageToConversation` (via `update_live_flow`)
- **"Chat with Flow bot"**: `body/recipient` = `"user@domain.com;"` (string with trailing semicolon).
- **"Channel"**: `body/recipient` = `{"groupId": "...", "channelId": "..."}` (object).
- `poster`: `"Flow bot"` for Workflows bot identity, `"User"` for user identity.
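
Illustrative parameter fragments for the two recipient shapes (the wrapper keys are just labels for this sketch; IDs are placeholders, and only `poster` and `body/recipient` are confirmed above --- verify the full parameter set against an existing flow's definition):

```json
{
  "chat_with_flow_bot": {
    "poster": "Flow bot",
    "body/recipient": "user@domain.com;"
  },
  "channel": {
    "poster": "Flow bot",
    "body/recipient": { "groupId": "<team-group-guid>", "channelId": "<channel-id>" }
  }
}
```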

### `list_live_connections`
- `id` is the value you need for `connectionName` in `connectionReferences`.
- `connectorName` maps to apiId: `"/providers/Microsoft.PowerApps/apis/" + connectorName`.
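
A sketch of wiring one cached connection into a `connectionReferences` map for `update_live_flow` (the connection values are sample data; confirm the exact reference shape the server expects against a `get_live_flow` response):

```python
connection = {  # one entry from list_live_connections (sample values)
    "id": "shared-sharepointonl-12345678-aaaa-bbbb-cccc-1234567890ab",
    "connectorName": "shared_sharepointonline",
    "displayName": "SharePoint",
}

connection_references = {
    connection["connectorName"]: {
        "connectionName": connection["id"],  # the connection's `id` field
        "apiId": "/providers/Microsoft.PowerApps/apis/" + connection["connectorName"],
    }
}
```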