mirror of https://github.com/github/awesome-copilot.git — synced 2026-04-13 11:45:56 +00:00
Merge branch 'staged' into skill/lsp-setup
@@ -2,11 +2,20 @@
name: flowstudio-power-automate-build
description: >-
  Build, scaffold, and deploy Power Automate cloud flows using the FlowStudio
  MCP server. Your agent constructs flow definitions, wires connections, deploys,
  and tests — all via MCP without opening the portal.
  Load this skill when asked to: create a flow, build a new flow,
  deploy a flow definition, scaffold a Power Automate workflow, construct a flow
  JSON, update an existing flow's actions, patch a flow definition, add actions
  to a flow, wire up connections, or generate a workflow definition from scratch.
  Requires a FlowStudio MCP subscription — see https://mcp.flowstudio.app
metadata:
  openclaw:
    requires:
      env:
        - FLOWSTUDIO_MCP_TOKEN
      primaryEnv: FLOWSTUDIO_MCP_TOKEN
    homepage: https://mcp.flowstudio.app
---

# Build & Deploy Power Automate Flows with FlowStudio MCP
@@ -64,14 +73,15 @@ ENV = "<environment-id>"  # e.g. Default-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Always look before you build to avoid duplicates:

```python
results = mcp("list_live_flows", environmentName=ENV)

# list_live_flows returns { "flows": [...] }
matches = [f for f in results["flows"]
           if "My New Flow".lower() in f["displayName"].lower()]

if len(matches) > 0:
    # Flow exists — modify rather than create
    FLOW_ID = matches[0]["id"]  # plain UUID from list_live_flows
    print(f"Existing flow: {FLOW_ID}")
    defn = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)
else:
```
@@ -182,7 +192,7 @@ for connector in connectors_needed:
> connection_references = ref_flow["properties"]["connectionReferences"]
> ```

See the `flowstudio-power-automate-mcp` skill's **connection-references.md** reference
for the full connection reference structure.

---
@@ -278,6 +288,8 @@ check = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)

# Confirm state
print("State:", check["properties"]["state"])  # Should be "Started"
# If state is "Stopped", use set_live_flow_state — NOT update_live_flow
# mcp("set_live_flow_state", environmentName=ENV, flowName=FLOW_ID, state="Started")

# Confirm the action we added is there
acts = check["properties"]["definition"]["actions"]
@@ -294,38 +306,45 @@ print("Actions:", list(acts.keys()))
> flow will do and wait for explicit approval before calling `trigger_live_flow`
> or `resubmit_live_flow_run`.

### Updated flows (have prior runs) — ANY trigger type

> **Use `resubmit_live_flow_run` first.** It works for EVERY trigger type —
> Recurrence, SharePoint, connector webhooks, Button, and HTTP. It replays
> the original trigger payload. Do NOT ask the user to manually trigger the
> flow or wait for the next scheduled run.

```python
runs = mcp("get_live_flow_runs", environmentName=ENV, flowName=FLOW_ID, top=1)
if runs:
    # Works for Recurrence, SharePoint, connector triggers — not just HTTP
    result = mcp("resubmit_live_flow_run",
                 environmentName=ENV, flowName=FLOW_ID, runName=runs[0]["name"])
    print(result)  # {"resubmitted": true, "triggerName": "..."}
```

### HTTP-triggered flows — custom test payload

Only use `trigger_live_flow` when you need to send a **different** payload
than the original run. For verifying a fix, `resubmit_live_flow_run` is
better because it uses the exact data that caused the failure.

```python
schema = mcp("get_live_flow_http_schema",
             environmentName=ENV, flowName=FLOW_ID)
print("Expected body:", schema.get("requestSchema"))

result = mcp("trigger_live_flow",
             environmentName=ENV, flowName=FLOW_ID,
             body={"name": "Test", "value": 1})
print(f"Status: {result['responseStatus']}")
```

### Brand-new non-HTTP flows (Recurrence, connector triggers, etc.)

A brand-new Recurrence or connector-triggered flow has **no prior runs** to
resubmit and no HTTP endpoint to call. This is the ONLY scenario where you
need the temporary HTTP trigger approach below. **Deploy with a temporary
HTTP trigger first, test the actions, then swap to the production trigger.**

#### 7a — Save the real trigger, deploy with a temporary HTTP trigger

@@ -384,7 +403,7 @@ if run["status"] == "Failed":
    root = err["failedActions"][-1]
    print(f"Root cause: {root['actionName']} → {root.get('code')}")
    # Debug and fix the definition before proceeding
    # See flowstudio-power-automate-debug skill for full diagnosis workflow
```

#### 7c — Swap to the production trigger
@@ -428,7 +447,7 @@ else:
| `union(old_data, new_data)` | Old values override new (first-wins) | Use `union(new_data, old_data)` |
| `split()` on potentially-null string | `InvalidTemplate` crash | Wrap with `coalesce(field, '')` |
| Checking `result["error"]` exists | Always present; true error is `!= null` | Use `result.get("error") is not None` |
| Flow deployed but state is "Stopped" | Flow won't run on schedule | Call `set_live_flow_state` with `state: "Started"` — do **not** use `update_live_flow` for state changes |
| Teams "Chat with Flow bot" recipient as object | 400 `GraphUserDetailNotFound` | Use plain string with trailing semicolon (see below) |

### Teams `PostMessageToConversation` — Recipient Formats

@@ -2,11 +2,20 @@
name: flowstudio-power-automate-debug
description: >-
  Debug failing Power Automate cloud flows using the FlowStudio MCP server.
  The Graph API only shows top-level status codes. This skill gives your agent
  action-level inputs and outputs to find the actual root cause.
  Load this skill when asked to: debug a flow, investigate a failed run, why is
  this flow failing, inspect action outputs, find the root cause of a flow error,
  fix a broken Power Automate flow, diagnose a timeout, trace a DynamicOperationRequestFailure,
  check connector auth errors, read error details from a run, or troubleshoot
  expression failures. Requires a FlowStudio MCP subscription — see https://mcp.flowstudio.app
metadata:
  openclaw:
    requires:
      env:
        - FLOWSTUDIO_MCP_TOKEN
      primaryEnv: FLOWSTUDIO_MCP_TOKEN
    homepage: https://mcp.flowstudio.app
---

# Power Automate Debugging with FlowStudio MCP
@@ -14,6 +23,10 @@ description: >-
A step-by-step diagnostic process for investigating failing Power Automate
cloud flows through the FlowStudio MCP server.

> **Real debugging examples**: [Expression error in child flow](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/fix-expression-error.md) |
> [Data entry, not a flow bug](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/data-not-flow.md) |
> [Null value crashes child flow](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/null-child-flow.md)

**Prerequisite**: A FlowStudio MCP server must be reachable with a valid JWT.
See the `flowstudio-power-automate-mcp` skill for connection setup.
Subscribe at https://mcp.flowstudio.app
@@ -59,46 +72,6 @@ ENV = "<environment-id>"  # e.g. Default-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

---

## FlowStudio for Teams: Fast-Path Diagnosis (Skip Steps 2–4)

If you have a FlowStudio for Teams subscription, `get_store_flow_errors`
returns per-run failure data including action names and remediation hints
in a single call — no need to walk through the live API steps.

```python
# Quick failure summary
summary = mcp("get_store_flow_summary", environmentName=ENV, flowName=FLOW_ID)
# {"totalRuns": 100, "failRuns": 10, "failRate": 0.1,
#  "averageDurationSeconds": 29.4, "maxDurationSeconds": 158.9,
#  "firstFailRunRemediation": "<hint or null>"}
print(f"Fail rate: {summary['failRate']:.0%} over {summary['totalRuns']} runs")

# Per-run error details (requires active monitoring to be configured)
errors = mcp("get_store_flow_errors", environmentName=ENV, flowName=FLOW_ID)
if errors:
    for r in errors[:3]:
        print(r["startTime"], "|", r.get("failedActions"), "|", r.get("remediationHint"))
    # If the errors confirm the failing action → jump to Step 6 (apply fix)
else:
    # Store doesn't have run-level detail for this flow — use live tools (Steps 2–5)
    pass
```

For the full governance record (description, complexity, tier, connector list):

```python
record = mcp("get_store_flow", environmentName=ENV, flowName=FLOW_ID)
# {"displayName": "My Flow", "state": "Started",
#  "runPeriodTotal": 100, "runPeriodFailRate": 0.1, "runPeriodFails": 10,
#  "runPeriodDurationAverage": 29410.8,   ← milliseconds
#  "runError": "{\"code\": \"EACCES\", ...}",   ← JSON string, parse it
#  "description": "...", "tier": "Premium", "complexity": "{...}"}
if record.get("runError"):
    last_err = json.loads(record["runError"])
    print("Last run error:", last_err)
```

---

## Step 1 — Locate the Flow

```python
@@ -134,6 +107,13 @@ RUN_ID = next(r["name"] for r in runs if r["status"] == "Failed")

## Step 3 — Get the Top-Level Error

> **CRITICAL**: `get_live_flow_run_error` tells you **which** action failed.
> `get_live_flow_run_action_outputs` tells you **why**. You must call BOTH.
> Never stop at the error alone — error codes like `ActionFailed`,
> `NotSpecified`, and `InternalServerError` are generic wrappers. The actual
> root cause (wrong field, null value, HTTP 500 body, stack trace) is only
> visible in the action's inputs and outputs.

```python
err = mcp("get_live_flow_run_error",
          environmentName=ENV, flowName=FLOW_ID, runName=RUN_ID)
```
@@ -164,7 +144,86 @@ print(f"Root action: {root['actionName']} → code: {root.get('code')}")

---

## Step 4 — Inspect the Failing Action's Inputs and Outputs

> **This is the most important step.** `get_live_flow_run_error` only gives
> you a generic error code. The actual error detail — HTTP status codes,
> response bodies, stack traces, null values — lives in the action's runtime
> inputs and outputs. **Always inspect the failing action immediately after
> identifying it.**

```python
# Get the root failing action's full inputs and outputs
root_action = err["failedActions"][-1]["actionName"]
detail = mcp("get_live_flow_run_action_outputs",
             environmentName=ENV,
             flowName=FLOW_ID,
             runName=RUN_ID,
             actionName=root_action)

out = detail[0] if detail else {}
print(f"Action: {out.get('actionName')}")
print(f"Status: {out.get('status')}")

# For HTTP actions, the real error is in outputs.body
if isinstance(out.get("outputs"), dict):
    status_code = out["outputs"].get("statusCode")
    body = out["outputs"].get("body", {})
    print(f"HTTP {status_code}")
    print(json.dumps(body, indent=2)[:500])

    # Error bodies are often nested JSON strings — parse them
    if isinstance(body, dict) and "error" in body:
        err_detail = body["error"]
        if isinstance(err_detail, str):
            err_detail = json.loads(err_detail)
        print(f"Error: {err_detail.get('message', err_detail)}")

# For expression errors, the message is in the error field
if out.get("error"):
    print(f"Error: {out['error']}")

# Also check inputs — they show what expression/URL/body was used
if out.get("inputs"):
    print(f"Inputs: {json.dumps(out['inputs'], indent=2)[:500]}")
```

### What the action outputs reveal (that error codes don't)

| Error code from `get_live_flow_run_error` | What `get_live_flow_run_action_outputs` reveals |
|---|---|
| `ActionFailed` | Which nested action actually failed and its HTTP response |
| `NotSpecified` | The HTTP status code + response body with the real error |
| `InternalServerError` | The server's error message, stack trace, or API error JSON |
| `InvalidTemplate` | The exact expression that failed and the null/wrong-type value |
| `BadRequest` | The request body that was sent and why the server rejected it |

### Example: HTTP action returning 500

```
Error code: "InternalServerError"   ← this tells you nothing

Action outputs reveal:
  HTTP 500
  body: {"error": "Cannot read properties of undefined (reading 'toLowerCase')
         at getClientParamsFromConnectionString (storage.js:20)"}
  ← THIS tells you the Azure Function crashed because a connection string is undefined
```

### Example: Expression error on null

```
Error code: "BadRequest"   ← generic

Action outputs reveal:
  inputs:  "body('HTTP_GetTokenFromStore')?['token']?['access_token']"
  outputs: ""   ← empty string, the path resolved to null
  ← THIS tells you the response shape changed — token is at body.access_token, not body.token.access_token
```

---

## Step 5 — Read the Flow Definition

```python
defn = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)
```
@@ -177,41 +236,48 @@ to understand what data it expects.

---

## Step 6 — Walk Back from the Failure

When the failing action's inputs reference upstream actions, inspect those
too. Walk backward through the chain until you find the source of the
bad data:

```python
# Inspect multiple actions leading up to the failure
for action_name in [root_action, "Compose_WeekEnd", "HTTP_Get_Data"]:
    result = mcp("get_live_flow_run_action_outputs",
                 environmentName=ENV,
                 flowName=FLOW_ID,
                 runName=RUN_ID,
                 actionName=action_name)
    # Returns an array — single-element when actionName is provided
    out = result[0] if result else {}
    print(f"\n--- {action_name} ({out.get('status')}) ---")
    print(f"Inputs: {json.dumps(out.get('inputs', ''), indent=2)[:300]}")
    print(f"Outputs: {json.dumps(out.get('outputs', ''), indent=2)[:300]}")
```

> ⚠️ Output payloads from array-processing actions can be very large.
> Always slice (e.g. `[:500]`) before printing.

> **Tip**: Omit `actionName` to get ALL actions in a single call.
> This returns every action's inputs/outputs — useful when you're not sure
> which upstream action produced the bad data. But use a 120 s+ timeout, as
> the response can be very large.

---

## Step 7 — Pinpoint the Root Cause

### Expression Errors (e.g. `split` on null)
If the error mentions `InvalidTemplate` or a function name:
1. Find the action in the definition
2. Check what upstream action/expression it reads
3. **Inspect that upstream action's output** for null / missing fields

```python
# Example: action uses split(item()?['Name'], ' ')
# → null Name in the source data
result = mcp("get_live_flow_run_action_outputs", ..., actionName="Compose_Names")
# Returns a single-element array; index [0] to get the action object
if not result:
    print("No outputs returned for Compose_Names")
    names = []
```
@@ -223,9 +289,20 @@ print(f"{len(nulls)} records with null Name")

### Wrong Field Path
Expression `triggerBody()?['fieldName']` returns null → `fieldName` is wrong.
**Inspect the trigger output** to see the actual field names:
```python
result = mcp("get_live_flow_run_action_outputs", ..., actionName="<trigger-action-name>")
print(json.dumps(result[0].get("outputs"), indent=2)[:500])
```

### HTTP Actions Returning Errors
The error code says `InternalServerError` or `NotSpecified` — **always inspect
the action outputs** to get the actual HTTP status and response body:
```python
result = mcp("get_live_flow_run_action_outputs", ..., actionName="HTTP_Get_Data")
out = result[0]
print(f"HTTP {out['outputs']['statusCode']}")
print(json.dumps(out['outputs']['body'], indent=2)[:500])
```

### Connection / Auth Failures
@@ -234,7 +311,7 @@ service account running the flow. Cannot fix via API; fix in PA designer.

---

## Step 8 — Apply the Fix

**For expression/data issues**:
```python
@@ -260,13 +337,23 @@ print(result.get("error"))  # None = success

---

## Step 9 — Verify the Fix

> **Use `resubmit_live_flow_run` to test ANY flow — not just HTTP triggers.**
> `resubmit_live_flow_run` replays a previous run using its original trigger
> payload. This works for **every trigger type**: Recurrence, SharePoint
> "When an item is created", connector webhooks, Button triggers, and HTTP
> triggers. You do NOT need to ask the user to manually trigger the flow or
> wait for the next scheduled run.
>
> The only case where `resubmit` is not available is a **brand-new flow that
> has never run** — it has no prior run to replay.

```python
# Resubmit the failed run — works for ANY trigger type
resubmit = mcp("resubmit_live_flow_run",
               environmentName=ENV, flowName=FLOW_ID, runName=RUN_ID)
print(resubmit)  # {"resubmitted": true, "triggerName": "..."}

# Wait ~30 s, then check
import time; time.sleep(30)
new_runs = mcp("get_live_flow_runs", environmentName=ENV, flowName=FLOW_ID, top=1)
print(new_runs[0]["status"])  # Succeeded = done
```

### When to use resubmit vs trigger

| Scenario | Use | Why |
|---|---|---|
| **Testing a fix** on any flow | `resubmit_live_flow_run` | Replays the exact trigger payload that caused the failure — best way to verify |
| Recurrence / scheduled flow | `resubmit_live_flow_run` | Cannot be triggered on demand any other way |
| SharePoint / connector trigger | `resubmit_live_flow_run` | Cannot be triggered without creating a real SP item |
| HTTP trigger with **custom** test payload | `trigger_live_flow` | When you need to send different data than the original run |
| Brand-new flow, never run | `trigger_live_flow` (HTTP only) | No prior run exists to resubmit |

### Testing HTTP-Triggered Flows with custom payloads

For flows with a `Request` (HTTP) trigger, use `trigger_live_flow` when you
need to send a **different** payload than the original run:

```python
# First inspect what the trigger expects
schema = mcp("get_live_flow_http_schema",
             environmentName=ENV, flowName=FLOW_ID)
print("Expected body schema:", schema.get("requestSchema"))
print("Response schemas:", schema.get("responseSchemas"))

# Trigger with a test payload
result = mcp("trigger_live_flow",
             environmentName=ENV,
             flowName=FLOW_ID,
             body={"name": "Test User", "value": 42})
print(f"Status: {result['responseStatus']}, Body: {result.get('responseBody')}")
```

> `trigger_live_flow` handles AAD-authenticated triggers automatically.
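Instead of the fixed 30-second sleep above, a small polling loop can wait for the resubmitted run to finish. This is a sketch — `wait_for_run` and `fetch_latest_run` are illustrative names, with the latter standing in for the `get_live_flow_runs` call shown earlier:

```python
import time

def wait_for_run(fetch_latest_run, timeout=120, interval=10):
    """Poll until the latest run leaves Running/Waiting, or return None on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        run = fetch_latest_run()
        if run and run.get("status") not in ("Running", "Waiting"):
            return run
        time.sleep(interval)
    return None

# Usage with the MCP helper from the examples above (commented — needs a live server):
# run = wait_for_run(lambda: mcp("get_live_flow_runs",
#                                environmentName=ENV, flowName=FLOW_ID, top=1)[0])
# print(run["status"] if run else "timed out")
```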

@@ -301,13 +398,19 @@ print(f"Status: {result['status']}, Body: {result.get('body')}")

## Quick-Reference Diagnostic Decision Tree

| Symptom | First Tool | Then ALWAYS Call | What to Look For |
|---|---|---|---|
| Flow shows as Failed | `get_live_flow_run_error` | `get_live_flow_run_action_outputs` on the failing action | HTTP status + response body in `outputs` |
| Error code is generic (`ActionFailed`, `NotSpecified`) | — | `get_live_flow_run_action_outputs` | `outputs.body` contains the real error message, stack trace, or API error |
| HTTP action returns 500 | — | `get_live_flow_run_action_outputs` | `outputs.statusCode` + `outputs.body` with server error detail |
| Expression crash | — | `get_live_flow_run_action_outputs` on prior action | null / wrong-type fields in output body |
| Flow never starts | `get_live_flow` | — | check `properties.state` = "Started" |
| Action returns wrong data | `get_live_flow_run_action_outputs` | — | actual output body vs expected |
| Fix applied but still fails | `get_live_flow_runs` after resubmit | — | new run `status` field |

> **Rule: never diagnose from error codes alone.** `get_live_flow_run_error`
> identifies the failing action. `get_live_flow_run_action_outputs` reveals
> the actual cause. Always call both.

---
skills/flowstudio-power-automate-governance/SKILL.md — new file, 504 lines
@@ -0,0 +1,504 @@
---
name: flowstudio-power-automate-governance
description: >-
  Govern Power Automate flows and Power Apps at scale using the FlowStudio MCP
  cached store. Classify flows by business impact, detect orphaned resources,
  audit connector usage, enforce compliance standards, manage notification rules,
  and compute governance scores — all without Dataverse or the CoE Starter Kit.
  Load this skill when asked to: tag or classify flows, set business impact,
  assign ownership, detect orphans, audit connectors, check compliance, compute
  archive scores, manage notification rules, run a governance review, generate
  a compliance report, offboard a maker, or any task that involves writing
  governance metadata to flows. Requires a FlowStudio for Teams or MCP Pro+
  subscription — see https://mcp.flowstudio.app
metadata:
  openclaw:
    requires:
      env:
        - FLOWSTUDIO_MCP_TOKEN
      primaryEnv: FLOWSTUDIO_MCP_TOKEN
    homepage: https://mcp.flowstudio.app
---

# Power Automate Governance with FlowStudio MCP

Classify, tag, and govern Power Automate flows at scale through the FlowStudio
MCP **cached store** — without Dataverse, without the CoE Starter Kit, and
without the Power Automate portal.

This skill uses `update_store_flow` to write governance metadata and the
monitoring tools (`list_store_flows`, `get_store_flow`, `list_store_makers`,
etc.) to read tenant state. For monitoring and health-check workflows, see
the `flowstudio-power-automate-monitoring` skill.

> **Start every session with `tools/list`** to confirm tool names and parameters.
> This skill covers workflows and patterns — things `tools/list` cannot tell you.
> If this document disagrees with `tools/list` or a real API response, the API wins.

---
## Critical: How to Extract Flow IDs

`list_store_flows` returns `id` in the format `<environmentId>.<flowId>`. **You must split
on the first `.`** to get `environmentName` and `flowName` for all other tools:

```
id              = "Default-<envGuid>.<flowGuid>"
environmentName = "Default-<envGuid>"   (everything before the first ".")
flowName        = "<flowGuid>"          (everything after the first ".")
```
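In Python, the split above is one call with `maxsplit=1`. A sketch — `split_flow_id` is an illustrative helper name, not a FlowStudio tool:

```python
def split_flow_id(store_id: str) -> tuple[str, str]:
    """Split a list_store_flows `id` into (environmentName, flowName).

    Split on the FIRST "." only — maxsplit=1 keeps everything after the
    first dot (the flow GUID) intact.
    """
    environment_name, flow_name = store_id.split(".", 1)
    return environment_name, flow_name

# Shortened GUIDs for illustration
env, flow = split_flow_id("Default-1a2b3c4d.9f8e7d6c")
print(env)   # Default-1a2b3c4d
print(flow)  # 9f8e7d6c
```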

Also: skip entries that have no `displayName` or have `state=Deleted` —
these are sparse records or flows that no longer exist in Power Automate.
If a deleted flow has `monitor=true`, suggest disabling monitoring
(`update_store_flow` with `monitor=false`) to free up a monitoring slot
(standard plan includes 20).

---
## The Write Tool: `update_store_flow`

`update_store_flow` writes governance metadata to the **Flow Studio cache
only** — it does NOT modify the flow in Power Automate. These fields are
not visible via `get_live_flow` or the PA portal. They exist only in the
Flow Studio store and are used by Flow Studio's scanning pipeline and
notification rules.

This means:
- `ownerTeam` / `supportEmail` — set who Flow Studio considers the
  governance contact. Does NOT change the actual PA flow owner.
- `rule_notify_email` — sets who receives Flow Studio failure/missing-run
  notifications. Does NOT change Microsoft's built-in flow failure alerts.
- `monitor` / `critical` / `businessImpact` — Flow Studio classification
  only. Power Automate has no equivalent fields.

Merge semantics — only the fields you provide are updated. Returns the full
updated record (same shape as `get_store_flow`).

Required parameters: `environmentName`, `flowName`. All other fields are optional.
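The merge behavior can be illustrated in plain Python — `merge_update` below only demonstrates the semantics and is not a FlowStudio tool:

```python
def merge_update(record: dict, updates: dict) -> dict:
    """Simulate update_store_flow's merge: only the fields you provide change."""
    merged = dict(record)
    merged.update(updates)
    return merged

before = {"monitor": False, "ownerTeam": "", "description": "Payroll sync"}
after = merge_update(before, {"monitor": True, "ownerTeam": "Finance Ops"})
print(after)  # {'monitor': True, 'ownerTeam': 'Finance Ops', 'description': 'Payroll sync'}
```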

### Settable Fields

| Field | Type | Purpose |
|---|---|---|
| `monitor` | bool | Enable run-level scanning (standard plan: 20 flows included) |
| `rule_notify_onfail` | bool | Send email notification on any failed run |
| `rule_notify_onmissingdays` | number | Send notification when flow hasn't run in N days (0 = disabled) |
| `rule_notify_email` | string | Comma-separated notification recipients |
| `description` | string | What the flow does |
| `tags` | string | Classification tags (also auto-extracted from description `#hashtags`) |
| `businessImpact` | string | Low / Medium / High / Critical |
| `businessJustification` | string | Why the flow exists, what process it automates |
| `businessValue` | string | Business value statement |
| `ownerTeam` | string | Accountable team |
| `ownerBusinessUnit` | string | Business unit |
| `supportGroup` | string | Support escalation group |
| `supportEmail` | string | Support contact email |
| `critical` | bool | Designate as business-critical |
| `tier` | string | Standard or Premium |
| `security` | string | Security classification or notes |

> **Caution with `security`:** The `security` field on `get_store_flow`
> contains structured JSON (e.g. `{"triggerRequestAuthenticationType":"All"}`).
> Writing a plain string like `"reviewed"` will overwrite this. To mark a
> flow as security-reviewed, use `tags` instead.
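If you ever do need to touch `security`, read and parse the existing value first — a sketch using the JSON shape from the caution above:

```python
import json

# `security` as returned by get_store_flow — a JSON string, not plain text
raw_security = '{"triggerRequestAuthenticationType":"All"}'

existing = json.loads(raw_security) if raw_security else {}
print(existing.get("triggerRequestAuthenticationType"))  # All

# To record a review outcome, prefer tags over overwriting this structure:
# mcp("update_store_flow", environmentName=ENV, flowName=FLOW_ID,
#     tags="security-reviewed")
```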

---

## Governance Workflows

### 1. Compliance Detail Review

Identify flows missing required governance metadata — the equivalent of
the CoE Starter Kit's Developer Compliance Center.

```
1. Ask the user which compliance fields they require
   (or use their organization's existing governance policy)
2. list_store_flows
3. For each flow (skip entries without displayName or state=Deleted):
   - Split id → environmentName, flowName
   - get_store_flow(environmentName, flowName)
   - Check which required fields are missing or empty
4. Report non-compliant flows with missing fields listed
5. For each non-compliant flow:
   - Ask the user for values
   - update_store_flow(environmentName, flowName, ...provided fields)
```
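Step 3's missing-field check is plain dictionary inspection. A sketch over a `get_store_flow`-shaped record — `missing_fields` and the sample policy are illustrative, not FlowStudio tools:

```python
def missing_fields(record: dict, required: list) -> list:
    """Return the required governance fields that are absent, empty, or False."""
    return [f for f in required if record.get(f) in (None, "", False)]

record = {"displayName": "Invoice Sync", "description": "Syncs invoices",
          "ownerTeam": "", "monitor": False}
policy = ["description", "ownerTeam", "supportEmail", "monitor"]
print(missing_fields(record, policy))  # ['ownerTeam', 'supportEmail', 'monitor']
```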

**Fields available for compliance checks:**

| Field | Example policy |
|---|---|
| `description` | Every flow should be documented |
| `businessImpact` | Classify as Low / Medium / High / Critical |
| `businessJustification` | Required for High/Critical impact flows |
| `ownerTeam` | Every flow should have an accountable team |
| `supportEmail` | Required for production flows |
| `monitor` | Required for critical flows (note: standard plan includes 20 monitored flows) |
| `rule_notify_onfail` | Recommended for monitored flows |
| `critical` | Designate business-critical flows |

> Each organization defines its own compliance rules. The fields above are
> suggestions based on common Power Platform governance patterns (CoE Starter
> Kit). Ask the user what their requirements are before flagging flows as
> non-compliant.
>
> **Tip:** Flows created or updated via MCP already have `description`
> (auto-appended by `update_live_flow`). Flows created manually in the
> Power Automate portal are the ones most likely to be missing governance metadata.
|
||||
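The per-flow check in step 3 reduces to a small helper over an already-fetched `get_store_flow` record. A minimal sketch — the `REQUIRED` list here is a hypothetical example policy, not a FlowStudio default; ask the user for theirs:

```python
# Hypothetical policy — replace with the user's actual required fields.
REQUIRED = ["description", "businessImpact", "ownerTeam", "supportEmail"]

def missing_fields(record, required=REQUIRED):
    """Return the required governance fields that are absent or empty
    on a get_store_flow record."""
    return [f for f in required if not record.get(f)]

# Abridged get_store_flow record: businessImpact is empty, supportEmail absent
record = {"displayName": "Invoice sync", "description": "Syncs invoices",
          "businessImpact": "", "ownerTeam": "Finance"}
print(missing_fields(record))  # ['businessImpact', 'supportEmail']
```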
### 2. Orphaned Resource Detection

Find flows owned by deleted or disabled Azure AD accounts.

```
1. list_store_makers
2. Filter where deleted=true AND ownerFlowCount > 0
   Note: deleted makers have NO displayName/mail — record their id (AAD OID)
3. list_store_flows → collect all flows
4. For each flow (skip entries without displayName or state=Deleted):
   - Split id → environmentName, flowName
   - get_store_flow(environmentName, flowName)
   - Parse owners: json.loads(record["owners"])
   - Check if any owner principalId matches an orphaned maker id
5. Report orphaned flows: maker id, flow name, flow state
6. For each orphaned flow:
   - Reassign governance: update_store_flow(environmentName, flowName,
       ownerTeam="NewTeam", supportEmail="new-owner@contoso.com")
   - Or decommission: set_store_flow_state(environmentName, flowName,
       state="Stopped")
```

> `update_store_flow` updates governance metadata in the cache only. To
> transfer actual PA ownership, an admin must use the Power Platform admin
> center or PowerShell.
>
> **Note:** Many orphaned flows are system-generated (created by
> `DataverseSystemUser` accounts for SLA monitoring, knowledge articles,
> etc.). These were never built by a person — consider tagging them
> rather than reassigning.
>
> **Coverage:** This workflow searches the cached store only, not the
> live PA API. Flows created after the last scan won't appear.

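Steps 2 and 4 reduce to two pure functions once the maker and flow records are fetched. A sketch, assuming the record shapes shown elsewhere in this skill (`owners` arrives as a JSON string):

```python
import json

def orphaned_maker_ids(makers):
    """Step 2: AAD OIDs of deleted makers that still own flows."""
    return {m["id"] for m in makers
            if m.get("deleted") and m.get("ownerFlowCount", 0) > 0}

def is_orphaned(flow_record, orphan_ids):
    """Step 4: does any owner principalId match an orphaned maker id?"""
    owners = json.loads(flow_record["owners"])
    return any(o.get("principalId") in orphan_ids for o in owners)

makers = [{"id": "aad-123", "deleted": True, "ownerFlowCount": 4},
          {"id": "aad-456", "deleted": False, "ownerFlowCount": 9}]
flow = {"owners": '[{"principalId": "aad-123", "role": "Owner"}]'}
print(is_orphaned(flow, orphaned_maker_ids(makers)))  # True
```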
### 3. Archive Score Calculation

Compute an inactivity score (0-7) per flow to identify safe cleanup
candidates. Aligns with the CoE Starter Kit's archive scoring.

```
1. list_store_flows
2. For each flow (skip entries without displayName or state=Deleted):
   - Split id → environmentName, flowName
   - get_store_flow(environmentName, flowName)
3. Compute archive score (0-7), add 1 point for each:
   +1 lastModifiedTime within 24 hours of createdTime
   +1 displayName contains "test", "demo", "copy", "temp", or "backup"
      (case-insensitive)
   +1 createdTime is more than 12 months ago
   +1 state is "Stopped" or "Suspended"
   +1 json.loads(owners) is empty array []
   +1 runPeriodTotal = 0 (never ran or no recent runs)
   +1 parse json.loads(complexity) → actions < 5
4. Classify:
   Score 5-7: Recommend archive — report to user for confirmation
   Score 3-4: Flag for review →
      Read existing tags from get_store_flow response, append #archive-review
      update_store_flow(environmentName, flowName, tags="<existing> #archive-review")
   Score 0-2: Active, no action
5. For user-confirmed archives:
   set_store_flow_state(environmentName, flowName, state="Stopped")
   Read existing tags, append #archived
   update_store_flow(environmentName, flowName, tags="<existing> #archived")
```

> **What "archive" means:** Power Automate has no native archive feature.
> Archiving via MCP means: (1) stop the flow so it can't run, and
> (2) tag it `#archived` so it's discoverable for future cleanup.
> Actual deletion requires the Power Automate portal or admin PowerShell
> — it cannot be done via MCP tools.

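The scoring rules in step 3 translate directly into a pure function over a `get_store_flow` record. A sketch — field names follow the Field Reference table at the end of this document:

```python
import json
from datetime import datetime, timedelta, timezone

NAME_HINTS = ("test", "demo", "copy", "temp", "backup")

def _ts(iso):
    # Timestamps arrive as ISO strings with a trailing Z
    return datetime.fromisoformat(iso.replace("Z", "+00:00"))

def archive_score(r, now):
    """0-7 inactivity score per step 3; each matching rule adds one point."""
    created, modified = _ts(r["createdTime"]), _ts(r["lastModifiedTime"])
    score = 0
    score += modified - created <= timedelta(hours=24)         # never iterated on
    score += any(h in r["displayName"].lower() for h in NAME_HINTS)
    score += now - created > timedelta(days=365)               # ~12 months old
    score += r["state"] in ("Stopped", "Suspended")
    score += json.loads(r.get("owners", "[]")) == []
    score += r.get("runPeriodTotal", 0) == 0
    score += json.loads(r.get("complexity", "{}")).get("actions", 99) < 5
    return score

r = {"displayName": "Copy of Test flow", "state": "Stopped",
     "createdTime": "2023-01-01T00:00:00Z",
     "lastModifiedTime": "2023-01-01T02:00:00Z",
     "owners": "[]", "runPeriodTotal": 0, "complexity": '{"actions": 3}'}
print(archive_score(r, now=datetime(2025, 6, 1, tzinfo=timezone.utc)))  # 7
```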
### 4. Connector Audit

Audit which connectors are in use across monitored flows. Useful for DLP
impact analysis and premium license planning.

```
1. list_store_flows(monitor=true)
   (scope to monitored flows — auditing all 1000+ flows is expensive)
2. For each flow (skip entries without displayName or state=Deleted):
   - Split id → environmentName, flowName
   - get_store_flow(environmentName, flowName)
   - Parse connections: json.loads(record["connections"])
     Returns array of objects with apiName, apiId, connectionName
   - Note the flow-level tier field ("Standard" or "Premium")
3. Build connector inventory:
   - Which apiNames are used and by how many flows
   - Which flows have tier="Premium" (premium connector detected)
   - Which flows use HTTP connectors (apiName contains "http")
   - Which flows use custom connectors (apiNames without the shared_ prefix)
4. Report inventory to user
   - For DLP analysis: user provides their DLP policy connector groups,
     agent cross-references against the inventory
```

> **Scope to monitored flows.** Each flow requires a `get_store_flow` call
> to read the `connections` JSON. Standard plans have ~20 monitored flows —
> manageable. Auditing all flows in a large tenant (1000+) would be very
> expensive in API calls.
>
> **`list_store_connections`** returns connection instances (who created
> which connection) but NOT connector types per flow. Use it for connection
> counts per environment, not for the connector audit.
>
> DLP policy definitions are not available via MCP. The agent builds the
> connector inventory; the user provides the DLP classification to
> cross-reference against.

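Step 3's inventory is a straightforward aggregation once the records are fetched. A sketch over `get_store_flow`-shaped dicts (sample records are illustrative):

```python
import json
from collections import Counter

def connector_inventory(records):
    """Aggregate apiName usage plus premium/HTTP/custom flags (step 3)."""
    usage = Counter()
    premium, http, custom = [], [], []
    for r in records:
        apis = {c["apiName"] for c in json.loads(r.get("connections", "[]"))}
        usage.update(apis)
        if r.get("tier") == "Premium":
            premium.append(r["displayName"])
        if any("http" in a.lower() for a in apis):
            http.append(r["displayName"])
        if any(not a.startswith("shared_") for a in apis):
            custom.append(r["displayName"])
    return {"usage": dict(usage), "premium": premium,
            "http": http, "custom": custom}

records = [
    {"displayName": "A", "tier": "Premium",
     "connections": '[{"apiName": "shared_sharepointonline"},'
                    ' {"apiName": "shared_http"}]'},
    {"displayName": "B", "tier": "Standard",
     "connections": '[{"apiName": "cust_invoice_api"}]'},
]
inv = connector_inventory(records)
print(inv["premium"], inv["custom"])  # ['A'] ['B']
```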
### 5. Notification Rule Management

Configure monitoring and alerting for flows at scale.

```
Enable failure alerts on all critical flows:
1. list_store_flows(monitor=true)
2. For each flow (skip entries without displayName or state=Deleted):
   - Split id → environmentName, flowName
   - get_store_flow(environmentName, flowName)
   - If critical=true AND rule_notify_onfail is not true:
       update_store_flow(environmentName, flowName,
         rule_notify_onfail=true,
         rule_notify_email="oncall@contoso.com")
   - If NO flows have critical=true: this is a governance finding.
     Recommend the user designate their most important flows as critical
     using update_store_flow(critical=true) before configuring alerts.

Enable missing-run detection for scheduled flows:
1. list_store_flows(monitor=true)
2. For each flow where triggerType="Recurrence" (available on list response):
   - Skip flows with state="Stopped" or "Suspended" (not expected to run)
   - Split id → environmentName, flowName
   - get_store_flow(environmentName, flowName)
   - If rule_notify_onmissingdays is 0 or not set:
       update_store_flow(environmentName, flowName,
         rule_notify_onmissingdays=2)
```

> `critical`, `rule_notify_onfail`, and `rule_notify_onmissingdays` are only
> available from `get_store_flow`, not from `list_store_flows`. The list call
> pre-filters to monitored flows; the detail call checks the notification fields.
>
> **Monitoring limit:** The standard plan (FlowStudio for Teams / MCP Pro+)
> includes 20 monitored flows. Before bulk-enabling `monitor=true`, check
> how many flows are already monitored:
> `len(list_store_flows(monitor=true))`

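The selection logic of the first workflow — critical flows that lack a failure alert — can be sketched as a filter over fetched records. `ONCALL` is a placeholder address, not a FlowStudio default:

```python
ONCALL = "oncall@contoso.com"  # placeholder — ask the user for the real recipient

def needs_onfail_rule(records):
    """Critical flows with no failure alert configured — each of these
    would get an update_store_flow(rule_notify_onfail=true, ...) call."""
    return [r["displayName"] for r in records
            if r.get("critical") and not r.get("rule_notify_onfail")]

records = [{"displayName": "Payroll", "critical": True,
            "rule_notify_onfail": False},
           {"displayName": "Newsletter", "critical": False}]
print(needs_onfail_rule(records))  # ['Payroll']
```

An empty result when no flow has `critical=true` at all is the governance finding described above, not a success.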
### 6. Classification and Tagging

Bulk-classify flows by connector type, business function, or risk level.

```
Auto-tag by connector:
1. list_store_flows
2. For each flow (skip entries without displayName or state=Deleted):
   - Split id → environmentName, flowName
   - get_store_flow(environmentName, flowName)
   - Parse connections: json.loads(record["connections"])
   - Build tags from apiName values:
       shared_sharepointonline → #sharepoint
       shared_teams → #teams
       shared_office365 → #email
       Custom connectors → #custom-connector
       HTTP-related connectors → #http-external
   - Read existing tags from get_store_flow response, append new tags
   - update_store_flow(environmentName, flowName,
       tags="<existing tags> #sharepoint #teams")
```

> **Two tag systems:** Tags shown in `list_store_flows` are auto-extracted
> from the flow's `description` field (e.g. a maker writes `#operations` in
> the PA portal description). Tags set via `update_store_flow(tags=...)`
> write to a separate field in the Azure Table cache. They are independent —
> writing store tags does not touch the description, and editing the
> description in the portal does not affect store tags.
>
> **Tag merge:** `update_store_flow(tags=...)` overwrites the store tags
> field. To avoid losing tags from other workflows, read the current store
> tags from `get_store_flow` first, append new ones, then write back.
>
> `get_store_flow` already has a `tier` field (Standard/Premium) computed
> by the scanning pipeline. Only use `update_store_flow(tier=...)` if you
> need to override it.

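The read-merge-write pattern from the **Tag merge** note can be sketched as a small helper, assuming store tags are a single space-separated string as in the examples above:

```python
def merge_tags(existing, new_tags):
    """Append hashtags without duplicating or dropping existing ones."""
    current = existing.split() if existing else []
    merged = current + [t for t in new_tags if t not in current]
    return " ".join(merged)

print(merge_tags("#operations #sharepoint", ["#sharepoint", "#teams"]))
# #operations #sharepoint #teams
```

Pass the result as `tags=` on the `update_store_flow` call.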
### 7. Maker Offboarding

When an employee leaves, identify their flows and apps, and reassign
Flow Studio governance contacts and notification recipients.

```
1. get_store_maker(makerKey="<departing-user-aad-oid>")
   → check ownerFlowCount, ownerAppCount, deleted status
2. list_store_flows → collect all flows
3. For each flow (skip entries without displayName or state=Deleted):
   - Split id → environmentName, flowName
   - get_store_flow(environmentName, flowName)
   - Parse owners: json.loads(record["owners"])
   - If any principalId matches the departing user's OID → flag
4. list_store_power_apps → filter where ownerId matches the OID
5. For each flagged flow:
   - Check runPeriodTotal and runLast — is it still active?
   - If keeping:
       update_store_flow(environmentName, flowName,
         ownerTeam="NewTeam", supportEmail="new-owner@contoso.com")
   - If decommissioning:
       set_store_flow_state(environmentName, flowName, state="Stopped")
       Read existing tags, append #decommissioned
       update_store_flow(environmentName, flowName, tags="<existing> #decommissioned")
6. Report: flows reassigned, flows stopped, apps needing manual reassignment
```

> **What "reassign" means here:** `update_store_flow` changes who Flow
> Studio considers the governance contact and who receives Flow Studio
> notifications. It does NOT transfer the actual Power Automate flow
> ownership — that requires the Power Platform admin center or PowerShell.
> Also update `rule_notify_email` so failure notifications go to the new
> team instead of the departing employee's email.
>
> Power Apps ownership cannot be changed via MCP tools. Report them for
> manual reassignment in the Power Apps admin center.

### 8. Security Review

Review flows for potential security concerns using cached store data.

```
1. list_store_flows(monitor=true)
2. For each flow (skip entries without displayName or state=Deleted):
   - Split id → environmentName, flowName
   - get_store_flow(environmentName, flowName)
   - Parse security: json.loads(record["security"])
   - Parse connections: json.loads(record["connections"])
   - Read sharingType directly (top-level field, NOT inside security JSON)
3. Report findings to user for review
4. For reviewed flows:
   Read existing tags, append #security-reviewed
   update_store_flow(environmentName, flowName, tags="<existing> #security-reviewed")
   Do NOT overwrite the security field — it contains structured auth data
```

**Fields available for security review:**

| Field | Where | What it tells you |
|---|---|---|
| `security.triggerRequestAuthenticationType` | security JSON | `"All"` = HTTP trigger accepts unauthenticated requests |
| `sharingType` | top-level | `"Coauthor"` = shared with co-authors for editing |
| `connections` | connections JSON | Which connectors the flow uses (check for HTTP, custom) |
| `referencedResources` | JSON string | SharePoint sites, Teams channels, external URLs the flow accesses |
| `tier` | top-level | `"Premium"` = uses premium connectors |

> Each organization decides what constitutes a security concern. For example,
> an unauthenticated HTTP trigger is expected for webhook receivers (Stripe,
> GitHub) but may be a risk for internal flows. Review findings in context
> before flagging.

### 9. Environment Governance

Audit environments for compliance and sprawl.

```
1. list_store_environments
   Skip entries without displayName (tenant-level metadata rows)
2. Flag:
   - Developer environments (sku="Developer") — should be limited
   - Non-managed environments (isManagedEnvironment=false) — less governance
   - Note: isAdmin=false means the current service account lacks admin
     access to that environment, not that the environment has no admin
3. list_store_flows → group by environmentName
   - Flow count per environment
   - Failure rate analysis: runPeriodFailRate is on the list response —
     no need for per-flow get_store_flow calls
4. list_store_connections → group by environmentName
   - Connection count per environment
```

### 10. Governance Dashboard

Generate a tenant-wide governance summary.

```
Efficient metrics (list calls only):
1. total_flows = len(list_store_flows())
2. monitored = len(list_store_flows(monitor=true))
3. with_onfail = len(list_store_flows(rule_notify_onfail=true))
4. makers = list_store_makers()
   → active = count where deleted=false
   → orphan_count = count where deleted=true AND ownerFlowCount > 0
5. apps = list_store_power_apps()
   → widely_shared = count where sharedUsersCount > 3
6. envs = list_store_environments() → count, group by sku
7. conns = list_store_connections() → count

Compute from list data:
- Monitoring %: monitored / total_flows
- Notification %: with_onfail / monitored
- Orphan count: from step 4
- High-risk count: flows with runPeriodFailRate > 0.2 (on list response)

Detailed metrics (require get_store_flow per flow — expensive for large tenants):
- Compliance %: flows with businessImpact set / total active flows
- Undocumented count: flows without description
- Tier breakdown: group by tier field

For detailed metrics, iterate all flows in a single pass:
For each flow from list_store_flows (skip sparse entries):
   Split id → environmentName, flowName
   get_store_flow(environmentName, flowName)
   → accumulate businessImpact, description, tier
```
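
The "compute from list data" step is plain arithmetic over `list_store_flows`-shaped records. A sketch with a division-by-zero guard (the sample counts are illustrative):

```python
def dashboard(flows, monitored, with_onfail):
    """Tenant summary from list-call data only (no per-flow detail calls)."""
    total = len(flows)
    pct = lambda a, b: round(100 * a / b, 1) if b else 0.0
    return {
        "total": total,
        "monitoring_pct": pct(monitored, total),
        "notification_pct": pct(with_onfail, monitored),
        "high_risk": sum(1 for f in flows
                         if f.get("runPeriodFailRate", 0) > 0.2),
    }

flows = [{"runPeriodFailRate": 0.01}, {"runPeriodFailRate": 0.5},
         {"runPeriodFailRate": 0.0}, {}]  # sparse entry has no fail rate
print(dashboard(flows, monitored=2, with_onfail=1))
# {'total': 4, 'monitoring_pct': 50.0, 'notification_pct': 50.0, 'high_risk': 1}
```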

---

## Field Reference: `get_store_flow` Fields Used in Governance

All fields below are confirmed present on the `get_store_flow` response.
Fields marked with `*` are also available on `list_store_flows` (cheaper).

| Field | Type | Governance use |
|---|---|---|
| `displayName` * | string | Archive score (test/demo name detection) |
| `state` * | string | Archive score, lifecycle management |
| `tier` | string | License audit (Standard vs Premium) |
| `monitor` * | bool | Is this flow being actively monitored? |
| `critical` | bool | Business-critical designation (settable via update_store_flow) |
| `businessImpact` | string | Compliance classification |
| `businessJustification` | string | Compliance attestation |
| `ownerTeam` | string | Ownership accountability |
| `supportEmail` | string | Escalation contact |
| `rule_notify_onfail` | bool | Failure alerting configured? |
| `rule_notify_onmissingdays` | number | SLA monitoring configured? |
| `rule_notify_email` | string | Alert recipients |
| `description` | string | Documentation completeness |
| `tags` | string | Classification — `list_store_flows` shows description-extracted hashtags only; store tags written by `update_store_flow` require `get_store_flow` to read back |
| `runPeriodTotal` * | number | Activity level |
| `runPeriodFailRate` * | number | Health status |
| `runLast` | ISO string | Last run timestamp |
| `scanned` | ISO string | Data freshness |
| `deleted` | bool | Lifecycle tracking |
| `createdTime` * | ISO string | Archive score (age) |
| `lastModifiedTime` * | ISO string | Archive score (staleness) |
| `owners` | JSON string | Orphan detection, ownership audit — parse with json.loads() |
| `connections` | JSON string | Connector audit, tier — parse with json.loads() |
| `complexity` | JSON string | Archive score (simplicity) — parse with json.loads() |
| `security` | JSON string | Auth type audit — parse with json.loads(), contains `triggerRequestAuthenticationType` |
| `sharingType` | string | Oversharing detection (top-level, NOT inside security) |
| `referencedResources` | JSON string | URL audit — parse with json.loads() |

---

## Related Skills

- `flowstudio-power-automate-monitoring` — Health checks, failure rates, inventory (read-only)
- `flowstudio-power-automate-mcp` — Core connection setup, live tool reference
- `flowstudio-power-automate-debug` — Deep diagnosis with action-level inputs/outputs
- `flowstudio-power-automate-build` — Build and deploy flow definitions

@@ -1,13 +1,22 @@
---
name: flowstudio-power-automate-mcp
description: >-
  Connect to and operate Power Automate cloud flows via a FlowStudio MCP server.
  Give your AI agent the same visibility you have in the Power Automate portal — plus
  a bit more. The Graph API only returns top-level run status. Flow Studio MCP exposes
  action-level inputs, outputs, loop iterations, and nested child flow failures.
  Use when asked to: list flows, read a flow definition, check run history, inspect
  action outputs, resubmit a run, cancel a running flow, view connections, get a
  trigger URL, validate a definition, monitor flow health, or any task that requires
  talking to the Power Automate API through an MCP tool. Also use for Power Platform
  environment discovery and connection management. Requires a FlowStudio MCP
  subscription or compatible server — see https://mcp.flowstudio.app
metadata:
  openclaw:
    requires:
      env:
        - FLOWSTUDIO_MCP_TOKEN
      primaryEnv: FLOWSTUDIO_MCP_TOKEN
    homepage: https://mcp.flowstudio.app
---

# Power Automate via FlowStudio MCP

@@ -16,6 +25,10 @@ This skill lets AI agents read, monitor, and operate Microsoft Power Automate
cloud flows programmatically through a **FlowStudio MCP server** — no browser,
no UI, no manual steps.

> **Real debugging examples**: [Expression error in child flow](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/fix-expression-error.md) |
> [Data entry, not a flow bug](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/data-not-flow.md) |
> [Null value crashes child flow](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/null-child-flow.md)

> **Requires:** A [FlowStudio](https://mcp.flowstudio.app) MCP subscription (or
> compatible Power Automate MCP server). You will need:
> - MCP endpoint: `https://mcp.flowstudio.app/mcp` (same for all subscribers)
@@ -445,6 +458,6 @@ print(new_runs[0]["status"]) # Succeeded = done

 ## More Capabilities

-For **diagnosing failing flows** end-to-end → load the `power-automate-debug` skill.
+For **diagnosing failing flows** end-to-end → load the `flowstudio-power-automate-debug` skill.

-For **building and deploying new flows** → load the `power-automate-build` skill.
+For **building and deploying new flows** → load the `flowstudio-power-automate-build` skill.
@@ -3,7 +3,7 @@

Compact lookup for recognising action types returned by `get_live_flow`.
Use this to **read and understand** existing flow definitions.

-> For full copy-paste construction patterns, see the `power-automate-build` skill.
+> For full copy-paste construction patterns, see the `flowstudio-power-automate-build` skill.

---

@@ -138,7 +138,7 @@ Response: **direct array** (no wrapper).
]
```

-> **`id` format**: `envId.flowId` --- split on the first `.` to extract the flow UUID:
+> **`id` format**: `<environmentId>.<flowId>` --- split on the first `.` to extract the flow UUID:
> `flow_id = item["id"].split(".", 1)[1]`

### `get_store_flow`

@@ -146,7 +146,7 @@ Response: **direct array** (no wrapper).
Response: single flow metadata from cache (selected fields).
```json
{
-  "id": "envId.flowId",
+  "id": "<environmentId>.<flowId>",
  "displayName": "My Flow",
  "state": "Started",
  "triggerType": "Recurrence",
@@ -204,7 +204,7 @@ Response:
```json
{
  "created": false,
-  "flowKey": "envId.flowId",
+  "flowKey": "<environmentId>.<flowId>",
  "updated": ["definition", "connectionReferences"],
  "displayName": "My Flow",
  "state": "Started",
@@ -353,17 +353,69 @@ Response keys: `flowKey`, `triggerName`, `triggerUrl`, `requiresAadAuth`, `authT

> **Only works for `Request` (HTTP) triggers.** Returns an error for Recurrence
> and other trigger types: `"only HTTP Request triggers can be invoked via this tool"`.
> `Button`-kind triggers return `ListCallbackUrlOperationBlocked`.
>
> `responseStatus` + `responseBody` contain the flow's Response action output.
> AAD-authenticated triggers are handled automatically.
>
> **Content-type note**: The body is sent as `application/octet-stream` (raw),
> not `application/json`. Flows with a trigger schema that has `required` fields
> will reject the request with `InvalidRequestContent` (400) because PA validates
> `Content-Type` before parsing against the schema. Flows without a schema, or
> flows designed to accept raw input (e.g. Baker-pattern flows that parse the body
> internally), will work fine. The flow receives the JSON as base64-encoded
> `$content` with `$content-type: application/octet-stream`.

---

## Flow State Management

### `set_live_flow_state`

Start or stop a Power Automate flow via the live PA API. Does **not** require
a Power Clarity workspace — works for any flow the impersonated account can access.
Reads the current state first and only issues the start/stop call if a change is
actually needed.

Parameters: `environmentName`, `flowName`, `state` (`"Started"` | `"Stopped"`) — all required.

Response:
```json
{
  "flowName": "6321ab25-7eb0-42df-b977-e97d34bcb272",
  "environmentName": "Default-26e65220-...",
  "requestedState": "Started",
  "actualState": "Started"
}
```

> **Use this tool** — not `update_live_flow` — to start or stop a flow.
> `update_live_flow` only changes displayName/definition; the PA API ignores
> state passed through that endpoint.

### `set_store_flow_state`

-Start or stop a flow. Pass `state: "Started"` or `state: "Stopped"`.
+Start or stop a flow via the live PA API **and** persist the updated state back
+to the Power Clarity cache. Same parameters as `set_live_flow_state` but requires
+a Power Clarity workspace.

Response (different shape from `set_live_flow_state`):
```json
{
  "flowKey": "<environmentId>.<flowId>",
  "requestedState": "Stopped",
  "currentState": "Stopped",
  "flow": { /* full gFlows record, same shape as get_store_flow */ }
}
```

> Prefer `set_live_flow_state` when you only need to toggle state — it's
> simpler and has no subscription requirement.
>
> Use `set_store_flow_state` when you need the cache updated immediately
> (without waiting for the next daily scan) AND want the full updated
> governance record back in the same call — useful for workflows that
> stop a flow and immediately tag or inspect it.

---

@@ -424,6 +476,8 @@ Non-obvious behaviors discovered through real API usage. These are things
- `error` key is **always present** in response --- `null` means success.
  Do NOT check `if "error" in result`; check `result.get("error") is not None`.
- On create, `created` = new flow GUID (string). On update, `created` = `false`.
- **Cannot change flow state.** Only updates displayName, definition, and
  connectionReferences. Use `set_live_flow_state` to start/stop a flow.

### `trigger_live_flow`
- **Only works for HTTP Request triggers.** Returns error for Recurrence, connector,
skills/flowstudio-power-automate-monitoring/SKILL.md (new file, 399 lines)
@@ -0,0 +1,399 @@
---
name: flowstudio-power-automate-monitoring
description: >-
  Monitor Power Automate flow health, track failure rates, and inventory tenant
  assets using the FlowStudio MCP cached store. The live API only returns
  top-level run status. Store tools surface aggregated stats, per-run failure
  details with remediation hints, maker activity, and Power Apps inventory —
  all from a fast cache with no rate-limit pressure on the PA API.
  Load this skill when asked to: check flow health, find failing flows, get
  failure rates, review error trends, list all flows with monitoring enabled,
  check who built a flow, find inactive makers, inventory Power Apps, see
  environment or connection counts, get a flow summary, or any tenant-wide
  health overview. Requires a FlowStudio for Teams or MCP Pro+ subscription —
  see https://mcp.flowstudio.app
metadata:
  openclaw:
    requires:
      env:
        - FLOWSTUDIO_MCP_TOKEN
      primaryEnv: FLOWSTUDIO_MCP_TOKEN
    homepage: https://mcp.flowstudio.app
---

# Power Automate Monitoring with FlowStudio MCP

Monitor flow health, track failure rates, and inventory tenant assets through
the FlowStudio MCP **cached store** — fast reads, no PA API rate limits, and
enriched with governance metadata and remediation hints.

> **Requires:** A [FlowStudio for Teams or MCP Pro+](https://mcp.flowstudio.app)
> subscription.
>
> **Start every session with `tools/list`** to confirm tool names and parameters.
> This skill covers response shapes, behavioral notes, and workflow patterns —
> things `tools/list` cannot tell you. If this document disagrees with
> `tools/list` or a real API response, the API wins.

---

## How Monitoring Works

Flow Studio scans the Power Automate API daily for each subscriber and caches
the results. There are two levels:

- **All flows** get metadata scanned: definition, connections, owners, trigger
  type, and aggregate run statistics (`runPeriodTotal`, `runPeriodFailRate`,
  etc.). Environments, apps, connections, and makers are also scanned.
- **Monitored flows** (`monitor: true`) additionally get per-run detail:
  individual run records with status, duration, failed action names, and
  remediation hints. This is what populates `get_store_flow_runs`,
  `get_store_flow_errors`, and `get_store_flow_summary`.

**Data freshness:** Check the `scanned` field on `get_store_flow` to see when
a flow was last scanned. If stale, the scanning pipeline may not be running.

**Enabling monitoring:** Set `monitor: true` via `update_store_flow` or the
Flow Studio for Teams app
([how to select flows](https://learn.flowstudio.app/teams-monitoring)).

**Designating critical flows:** Use `update_store_flow` with `critical=true`
on business-critical flows. This enables the governance skill's notification
rule management to auto-configure failure alerts on critical flows.

---

## Tools

| Tool | Purpose |
|---|---|
| `list_store_flows` | List flows with failure rates and monitoring filters |
| `get_store_flow` | Full cached record: run stats, owners, tier, connections, definition |
| `get_store_flow_summary` | Aggregated run stats: success/fail rate, avg/max duration |
| `get_store_flow_runs` | Per-run history with duration, status, failed actions, remediation |
| `get_store_flow_errors` | Failed-only runs with action names and remediation hints |
| `get_store_flow_trigger_url` | Trigger URL from cache (instant, no PA API call) |
| `set_store_flow_state` | Start or stop a flow and sync state back to cache |
| `update_store_flow` | Set monitor flag, notification rules, tags, governance metadata |
| `list_store_environments` | All Power Platform environments |
| `list_store_connections` | All connections |
| `list_store_makers` | All makers (citizen developers) |
| `get_store_maker` | Maker detail: flow/app counts, licenses, account status |
| `list_store_power_apps` | All Power Apps canvas apps |

---

## Store vs Live

| Question | Use Store | Use Live |
|---|---|---|
| How many flows are failing? | `list_store_flows` | — |
| What's the fail rate over 30 days? | `get_store_flow_summary` | — |
| Show error history for a flow | `get_store_flow_errors` | — |
| Who built this flow? | `get_store_flow` → parse `owners` | — |
| Read the full flow definition | `get_store_flow` has it (JSON string) | `get_live_flow` (structured) |
| Inspect action inputs/outputs from a run | — | `get_live_flow_run_action_outputs` |
| Resubmit a failed run | — | `resubmit_live_flow_run` |

> Store tools answer "what happened?" and "how healthy is it?"
> Live tools answer "what exactly went wrong?" and "fix it now."

> If `get_store_flow_runs`, `get_store_flow_errors`, or `get_store_flow_summary`
> return empty results, check: (1) is `monitor: true` on the flow? and
> (2) is the `scanned` field recent? Use `get_store_flow` to verify both.

---

## Response Shapes

### `list_store_flows`

Direct array. Filters: `monitor` (bool), `rule_notify_onfail` (bool),
`rule_notify_onmissingdays` (bool).

```json
|
||||
[
|
||||
{
|
||||
"id": "Default-<envGuid>.<flowGuid>",
|
||||
"displayName": "Stripe subscription updated",
|
||||
"state": "Started",
|
||||
"triggerType": "Request",
|
||||
"triggerUrl": "https://...",
|
||||
"tags": ["#operations", "#sensitive"],
|
||||
"environmentName": "Default-26e65220-...",
|
||||
"monitor": true,
|
||||
"runPeriodFailRate": 0.012,
|
||||
"runPeriodTotal": 82,
|
||||
"createdTime": "2025-06-24T01:20:53Z",
|
||||
"lastModifiedTime": "2025-06-24T03:51:03Z"
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
> `id` format: `Default-<envGuid>.<flowGuid>`. Split on first `.` to get
|
||||
> `environmentName` and `flowName`.
|
||||
>
|
||||
> `triggerUrl` and `tags` are optional. Some entries are sparse (just `id` +
|
||||
> `monitor`) — skip entries without `displayName`.
|
||||
>
|
||||
> Tags on `list_store_flows` are auto-extracted from the flow's `description`
|
||||
> field (maker hashtags like `#operations`). Tags written via
|
||||
> `update_store_flow(tags=...)` are stored separately and only visible on
|
||||
> `get_store_flow` — they do NOT appear in the list response.
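
The notes above can be sketched as a small parsing helper. The sample data is hypothetical; only the shapes follow the response documented above:

```python
# Hypothetical sample of a list_store_flows response (direct array).
sample = [
    {
        "id": "Default-26e65220-0000.1f9c1d2e-7777",
        "displayName": "Stripe subscription updated",
        "monitor": True,
        "runPeriodFailRate": 0.012,
    },
    # Sparse entry: no displayName, so it gets skipped below.
    {"id": "Default-26e65220-0000.deadbeef-8888", "monitor": False},
]

def split_flow_key(flow_key):
    """Split 'environmentName.flowGuid' on the FIRST dot only."""
    environment_name, flow_name = flow_key.split(".", 1)
    return environment_name, flow_name

usable = [f for f in sample if "displayName" in f]  # skip sparse entries
env, flow_id = split_flow_key(usable[0]["id"])
```

Splitting on the first `.` only matters because the environment name itself contains hyphens and GUID segments, but never a dot.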

### `get_store_flow`

Full cached record. Key fields:

| Category | Fields |
|---|---|
| Identity | `name`, `displayName`, `environmentName`, `state`, `triggerType`, `triggerKind`, `tier`, `sharingType` |
| Run stats | `runPeriodTotal`, `runPeriodFails`, `runPeriodSuccess`, `runPeriodFailRate`, `runPeriodSuccessRate`, `runPeriodDurationAverage`/`Max`/`Min` (milliseconds), `runTotal`, `runFails`, `runFirst`, `runLast`, `runToday` |
| Governance | `monitor` (bool), `rule_notify_onfail` (bool), `rule_notify_onmissingdays` (number), `rule_notify_email` (string), `log_notify_onfail` (ISO), `description`, `tags` |
| Freshness | `scanned` (ISO), `nextScan` (ISO) |
| Lifecycle | `deleted` (bool), `deletedTime` (ISO) |
| JSON strings | `actions`, `connections`, `owners`, `complexity`, `definition`, `createdBy`, `security`, `triggers`, `referencedResources`, `runError` — all require `json.loads()` to parse |

> Duration fields (`runPeriodDurationAverage`, `Max`, `Min`) are in
> **milliseconds**. Divide by 1000 for seconds.
>
> `runError` contains the last run error as a JSON string. Parse it:
> `json.loads(record["runError"])` — returns `{}` when no error.
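
A minimal sketch of handling the JSON-string fields and millisecond durations. The record literal is illustrative; field names follow the table above:

```python
import json

# Illustrative cached record; only the fields used below are shown.
record = {
    "runPeriodDurationAverage": 2877,  # milliseconds
    "owners": '[{"displayName": "Catherine Han", "email": "catherine.han@flowstudio.app"}]',
    "runError": "{}",
}

owners = json.loads(record["owners"])        # JSON string -> list of owner dicts
last_error = json.loads(record["runError"])  # {} when there is no error
avg_seconds = record["runPeriodDurationAverage"] / 1000  # ms -> s
```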

### `get_store_flow_summary`

Aggregated stats over a time window (default: last 7 days).

```json
{
  "flowKey": "Default-<envGuid>.<flowGuid>",
  "windowStart": null,
  "windowEnd": null,
  "totalRuns": 82,
  "successRuns": 81,
  "failRuns": 1,
  "successRate": 0.988,
  "failRate": 0.012,
  "averageDurationSeconds": 2.877,
  "maxDurationSeconds": 9.433,
  "firstFailRunRemediation": null,
  "firstFailRunUrl": null
}
```

> Returns all zeros when no run data exists for this flow in the window.
> Use `startTime` and `endTime` (ISO 8601) parameters to change the window.
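
For example, a 30-day window could be built like this. This is a sketch: the `mcp()` helper is the convention used earlier in this skill, and the exact parameter names should be checked against the tool schema:

```python
from datetime import datetime, timedelta, timezone

# Fixed end date so the example is deterministic; use datetime.now(timezone.utc) in practice.
end = datetime(2025, 7, 1, tzinfo=timezone.utc)
start = end - timedelta(days=30)

params = {
    "environmentName": "Default-<envGuid>",
    "flowName": "<flowGuid>",
    "startTime": start.isoformat(),  # ISO 8601, e.g. 2025-06-01T00:00:00+00:00
    "endTime": end.isoformat(),
}
# summary = mcp("get_store_flow_summary", **params)
```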

### `get_store_flow_runs` / `get_store_flow_errors`

Direct array. `get_store_flow_errors` filters to `status=Failed` only.
Parameters: `startTime`, `endTime`, `status` (array: `["Failed"]`,
`["Succeeded"]`, etc.).

> Both return `[]` when no run data exists.

### `get_store_flow_trigger_url`

```json
{
  "flowKey": "Default-<envGuid>.<flowGuid>",
  "displayName": "Stripe subscription updated",
  "triggerType": "Request",
  "triggerKind": "Http",
  "triggerUrl": "https://..."
}
```

> `triggerUrl` is null for non-HTTP triggers.

### `set_store_flow_state`

Calls the live PA API, then syncs state to the cache and returns the
full updated record.

```json
{
  "flowKey": "Default-<envGuid>.<flowGuid>",
  "requestedState": "Stopped",
  "currentState": "Stopped",
  "flow": { /* full gFlows record, same shape as get_store_flow */ }
}
```

> The embedded `flow` object reflects the new state immediately — no
> follow-up `get_store_flow` call needed. Useful for governance workflows
> that stop a flow and then read its tags/monitor/owner metadata in the
> same turn.
>
> Functionally equivalent to `set_live_flow_state` for changing state,
> but `set_live_flow_state` only returns `{flowName, environmentName,
> requestedState, actualState}` and doesn't sync the cache. Prefer
> `set_live_flow_state` when you only need to toggle state and don't
> care about cache freshness.
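
Consuming the envelope might look like this; the literal mirrors the shape shown above:

```python
# Illustrative set_store_flow_state response envelope.
resp = {
    "flowKey": "Default-<envGuid>.<flowGuid>",
    "requestedState": "Stopped",
    "currentState": "Stopped",
    "flow": {"state": "Stopped", "monitor": True, "tags": ["#operations"]},
}

# Confirm the transition took effect, then read governance metadata from
# the embedded record without a follow-up get_store_flow call.
stopped = resp["currentState"] == resp["requestedState"]
tags = resp["flow"]["tags"]
```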

### `update_store_flow`

Updates governance metadata. Only provided fields are updated (merge).
Returns the full updated record (same shape as `get_store_flow`).

Settable fields: `monitor` (bool), `rule_notify_onfail` (bool),
`rule_notify_onmissingdays` (number, 0=disabled),
`rule_notify_email` (comma-separated), `description`, `tags`,
`businessImpact`, `businessJustification`, `businessValue`,
`ownerTeam`, `ownerBusinessUnit`, `supportGroup`, `supportEmail`,
`critical` (bool), `tier`, `security`.
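
For example, enabling monitoring with failure notifications could be sketched as follows. The `mcp()` helper and the email addresses are assumptions; field names come from the list above:

```python
# Governance fields to merge; fields not listed here stay untouched.
notify = ",".join(["ops@contoso.com", "admin@contoso.com"])  # comma-separated string

updates = {
    "environmentName": "Default-<envGuid>",
    "flowName": "<flowGuid>",
    "monitor": True,
    "rule_notify_onfail": True,
    "rule_notify_email": notify,
    "rule_notify_onmissingdays": 0,  # 0 = disabled
}
# updated = mcp("update_store_flow", **updates)  # returns the full record
```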

### `list_store_environments`

Direct array.

```json
[
  {
    "id": "Default-26e65220-...",
    "displayName": "Flow Studio (default)",
    "sku": "Default",
    "type": "NotSpecified",
    "location": "australia",
    "isDefault": true,
    "isAdmin": true,
    "isManagedEnvironment": false,
    "createdTime": "2017-01-18T01:06:46Z"
  }
]
```

> `sku` values: `Default`, `Production`, `Developer`, `Sandbox`, `Teams`.

### `list_store_connections`

Direct array. Can be very large (1500+ items).

```json
[
  {
    "id": "<environmentId>.<connectionId>",
    "displayName": "user@contoso.com",
    "createdBy": "{\"id\":\"...\",\"displayName\":\"...\",\"email\":\"...\"}",
    "environmentName": "...",
    "statuses": "[{\"status\":\"Connected\"}]"
  }
]
```

> `createdBy` and `statuses` are **JSON strings** — parse with `json.loads()`.
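
A sketch of unwrapping those fields; the record literal is illustrative:

```python
import json

# Illustrative connection record from list_store_connections.
connection = {
    "id": "<environmentId>.<connectionId>",
    "displayName": "user@contoso.com",
    "createdBy": '{"id": "u1", "displayName": "A. User", "email": "user@contoso.com"}',
    "statuses": '[{"status": "Connected"}]',
}

created_by = json.loads(connection["createdBy"])  # JSON string -> dict
statuses = json.loads(connection["statuses"])     # JSON string -> list
is_connected = any(s["status"] == "Connected" for s in statuses)
```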

### `list_store_makers`

Direct array.

```json
[
  {
    "id": "09dbe02f-...",
    "displayName": "Catherine Han",
    "mail": "catherine.han@flowstudio.app",
    "deleted": false,
    "ownerFlowCount": 199,
    "ownerAppCount": 209,
    "userIsServicePrinciple": false
  }
]
```

> Deleted makers have `deleted: true` and no `displayName`/`mail` fields.

### `get_store_maker`

Full maker record. Key fields: `displayName`, `mail`, `userPrincipalName`,
`ownerFlowCount`, `ownerAppCount`, `accountEnabled`, `deleted`, `country`,
`firstFlow`, `firstFlowCreatedTime`, `lastFlowCreatedTime`,
`firstPowerApp`, `lastPowerAppCreatedTime`,
`licenses` (JSON string of M365 SKUs).

### `list_store_power_apps`

Direct array.

```json
[
  {
    "id": "<environmentId>.<appId>",
    "displayName": "My App",
    "environmentName": "...",
    "ownerId": "09dbe02f-...",
    "ownerName": "Catherine Han",
    "appType": "Canvas",
    "sharedUsersCount": 0,
    "createdTime": "2023-08-18T01:06:22Z",
    "lastModifiedTime": "2023-08-18T01:06:22Z",
    "lastPublishTime": "2023-08-18T01:06:22Z"
  }
]
```

---

## Common Workflows

### Find unhealthy flows

```
1. list_store_flows
2. Filter where runPeriodFailRate > 0.1 and runPeriodTotal >= 5
3. Sort by runPeriodFailRate descending
4. For each: get_store_flow for full detail
```
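
The same filter, sketched in Python over a hypothetical `list_store_flows` result:

```python
# Hypothetical list_store_flows output; .get() guards against sparse entries.
flows = [
    {"displayName": "Invoice sync", "runPeriodFailRate": 0.35, "runPeriodTotal": 20},
    {"displayName": "Daily digest", "runPeriodFailRate": 0.02, "runPeriodTotal": 400},
    {"displayName": "Rarely runs", "runPeriodFailRate": 1.0, "runPeriodTotal": 2},
]

unhealthy = sorted(
    (f for f in flows
     if f.get("runPeriodFailRate", 0) > 0.1 and f.get("runPeriodTotal", 0) >= 5),
    key=lambda f: f["runPeriodFailRate"],
    reverse=True,
)
names = [f["displayName"] for f in unhealthy]
```

The `runPeriodTotal >= 5` guard keeps rarely-run flows with one or two failures from dominating the list.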

### Check a specific flow's health

```
1. get_store_flow → check scanned (freshness), runPeriodFailRate, runPeriodTotal
2. get_store_flow_summary → aggregated stats with optional time window
3. get_store_flow_errors → per-run failure detail with remediation hints
4. If deeper diagnosis needed → switch to live tools:
   get_live_flow_runs → get_live_flow_run_action_outputs
```

### Enable monitoring on a flow

```
1. update_store_flow with monitor=true
2. Optionally set rule_notify_onfail=true, rule_notify_email="user@domain.com"
3. Run data will appear after the next daily scan
```

### Daily health check

```
1. list_store_flows
2. Flag flows with runPeriodFailRate > 0.2 and runPeriodTotal >= 3
3. Flag monitored flows with state="Stopped" (may indicate auto-suspension)
4. For critical failures → get_store_flow_errors for remediation hints
```

### Maker audit

```
1. list_store_makers
2. Identify deleted accounts still owning flows (deleted=true, ownerFlowCount > 0)
3. get_store_maker for full detail on specific users
```
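
Step 2 of the audit, sketched over hypothetical `list_store_makers` data:

```python
# Hypothetical list_store_makers output.
makers = [
    {"id": "a1", "displayName": "Catherine Han", "deleted": False, "ownerFlowCount": 199},
    {"id": "b2", "deleted": True, "ownerFlowCount": 7},  # deleted account, still owns flows
    {"id": "c3", "deleted": True, "ownerFlowCount": 0},
]

orphaned = [m for m in makers if m["deleted"] and m.get("ownerFlowCount", 0) > 0]
orphan_ids = [m["id"] for m in orphaned]
# Follow up with get_store_maker on each id for full detail.
```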

### Inventory

```
1. list_store_environments → environment count, SKUs, locations
2. list_store_flows → flow count by state, trigger type, fail rate
3. list_store_power_apps → app count, owners, sharing
4. list_store_connections → connection count per environment
```

---

## Related Skills

- `power-automate-mcp` — Core connection setup, live tool reference
- `power-automate-debug` — Deep diagnosis with action-level inputs/outputs (live API)
- `power-automate-build` — Build and deploy flow definitions
- `power-automate-governance` — Governance metadata, tagging, notification rules, CoE patterns

444 skills/python-pypi-package-builder/SKILL.md Normal file
@@ -0,0 +1,444 @@
---
name: python-pypi-package-builder
description: 'End-to-end skill for building, testing, linting, versioning, and publishing a production-grade Python library to PyPI. Covers all four build backends (setuptools+setuptools_scm, hatchling, flit, poetry), PEP 440 versioning, semantic versioning, dynamic git-tag versioning, OOP/SOLID design, type hints (PEP 484/526/544/561), Trusted Publishing (OIDC), and the full PyPA packaging flow. Use for: creating Python packages, pip-installable SDKs, CLI tools, framework plugins, pyproject.toml setup, py.typed, setuptools_scm, semver, mypy, pre-commit, GitHub Actions CI/CD, or PyPI publishing.'
---

# Python PyPI Package Builder Skill

A complete, battle-tested guide for building, testing, linting, versioning, typing, and
publishing a production-grade Python library to PyPI — from first commit to community-ready
release.

> **AI Agent Instruction:** Read this entire file before writing a single line of code or
> creating any file. Every decision — layout, backend, versioning strategy, patterns, CI —
> has a decision rule here. Follow the decision trees in order. This skill applies to any
> Python package type (utility, SDK, CLI, plugin, data library). Do not skip sections.

---

## Quick Navigation

| Section in this file | What it covers |
|---|---|
| [1. Skill Trigger](#1-skill-trigger) | When to load this skill |
| [2. Package Type Decision](#2-package-type-decision) | Identify what you are building |
| [3. Folder Structure Decision](#3-folder-structure-decision) | src/ vs flat vs monorepo |
| [4. Build Backend Decision](#4-build-backend-decision) | setuptools / hatchling / flit / poetry |
| [5. PyPA Packaging Flow](#5-pypa-packaging-flow) | The canonical publish pipeline |
| [6. Project Structure Templates](#6-project-structure-templates) | Full layouts for every option |
| [7. Versioning Strategy](#7-versioning-strategy) | PEP 440, semver, dynamic vs static |

| Reference file | What it covers |
|---|---|
| `references/pyproject-toml.md` | All four backend templates, `setuptools_scm`, `py.typed`, tool configs |
| `references/library-patterns.md` | OOP/SOLID, type hints, core class design, factory, protocols, CLI |
| `references/testing-quality.md` | `conftest.py`, unit/backend/async tests, ruff/mypy/pre-commit |
| `references/ci-publishing.md` | `ci.yml`, `publish.yml`, Trusted Publishing, TestPyPI, CHANGELOG, release checklist |
| `references/community-docs.md` | README, docstrings, CONTRIBUTING, SECURITY, anti-patterns, master checklist |
| `references/architecture-patterns.md` | Backend system (plugin/strategy), config layer, transport layer, CLI, backend injection |
| `references/versioning-strategy.md` | PEP 440, SemVer, pre-release, setuptools_scm deep-dive, flit static, decision engine |
| `references/release-governance.md` | Branch strategy, branch protection, OIDC, tag author validation, prevent invalid tags |
| `references/tooling-ruff.md` | Ruff-only setup (replaces black/isort), mypy config, pre-commit, asyncio_mode=auto |

**Scaffold script:** run `python skills/python-pypi-package-builder/scripts/scaffold.py --name your-package-name`
to generate the entire directory layout, stub files, and `pyproject.toml` in one command.

---

## 1. Skill Trigger

Load this skill whenever the user wants to:

- Create, scaffold, or publish a Python package or library to PyPI
- Build a pip-installable SDK, utility, CLI tool, or framework extension
- Set up `pyproject.toml`, linting, mypy, pre-commit, or GitHub Actions for a Python project
- Understand versioning (`setuptools_scm`, PEP 440, semver, static versioning)
- Understand PyPA specs: `py.typed`, `MANIFEST.in`, `RECORD`, classifiers
- Publish to PyPI using Trusted Publishing (OIDC) or API tokens
- Refactor an existing package to follow modern Python packaging standards
- Add type hints, protocols, ABCs, or dataclasses to a Python library
- Apply OOP/SOLID design patterns to a Python package
- Choose between build backends (setuptools, hatchling, flit, poetry)

**Also trigger for phrases like:** "build a Python SDK", "publish my library", "set up PyPI CI",
"create a pip package", "how do I publish to PyPI", "pyproject.toml help", "PEP 561 typed",
"setuptools_scm version", "semver Python", "PEP 440", "git tag release", "Trusted Publishing".

---

## 2. Package Type Decision

Identify what the user is building **before** writing any code. Each type has distinct patterns.

### Decision Table

| Type | Core Pattern | Entry Point | Key Deps | Example Packages |
|---|---|---|---|---|
| **Utility library** | Module of pure functions + helpers | Import API only | Minimal | `arrow`, `humanize`, `boltons`, `more-itertools` |
| **API client / SDK** | Class with methods, auth, retry logic | Import API only | `httpx` or `requests` | `boto3`, `stripe-python`, `openai` |
| **CLI tool** | Command functions + argument parser | `[project.scripts]` or `[project.entry-points]` | `click` or `typer` | `black`, `ruff`, `httpie`, `rich` |
| **Framework plugin** | Plugin class, hook registration | `[project.entry-points."framework.plugin"]` | Framework dep | `pytest-*`, `django-*`, `flask-*` |
| **Data processing library** | Classes + functional pipeline | Import API only | Optional: `numpy`, `pandas` | `pydantic`, `marshmallow`, `cerberus` |
| **Mixed / generic** | Combination of above | Varies | Varies | Many real-world packages |

**Decision Rule:** Ask the user if unclear. A package can combine types (e.g., SDK with a CLI
entry point) — use the primary type for structural decisions and add secondary type patterns on top.

For implementation patterns of each type, see `references/library-patterns.md`.

### Package Naming Rules

- PyPI name: all lowercase, hyphens — `my-python-library`
- Python import name: underscores — `my_python_library`
- Check availability: https://pypi.org/search/ before starting
- Avoid shadowing popular packages (verify `pip install <name>` fails first)
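
The distribution-name vs import-name rule can be checked mechanically. A sketch, using PEP 503 name normalization (the regex is the one from that spec):

```python
import re

dist_name = "my-python-library"            # what users pip install
import_name = dist_name.replace("-", "_")  # what they import

def normalize(name: str) -> str:
    """PEP 503 normalization: runs of '-', '_', '.' collapse to a single '-'."""
    return re.sub(r"[-_.]+", "-", name).lower()

# 'My.Python__Library' and 'my-python-library' name the SAME project on PyPI.
same = normalize("My.Python__Library") == normalize(dist_name)
```

Normalization is why a name that differs from an existing package only in separators or case is not actually available.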

---

## 3. Folder Structure Decision

### Decision Tree

```
Does the package have 5+ internal modules OR multiple contributors OR complex sub-packages?
├── YES → Use src/ layout
│        Reason: prevents accidental import of uninstalled code during development;
│        separates source from project root files; PyPA-recommended for large projects.
│
├── NO → Is it a single-module, focused package (e.g., one file + helpers)?
│        ├── YES → Use flat layout
│        └── NO (medium complexity) → Use flat layout, migrate to src/ if it grows
│
└── Is it multiple related packages under one namespace (e.g., myorg.http, myorg.db)?
         └── YES → Use namespace/monorepo layout
```

### Quick Rule Summary

| Situation | Use |
|---|---|
| New project, unknown future size | `src/` layout (safest default) |
| Single-purpose, 1–4 modules | Flat layout |
| Large library, many contributors | `src/` layout |
| Multiple packages in one repo | Namespace / monorepo |
| Migrating old flat project | Keep flat; migrate to `src/` at next major version |
---

## 4. Build Backend Decision

### Decision Tree

```
Does the user need version derived automatically from git tags?
├── YES → Use setuptools + setuptools_scm
│        (git tag v1.0.0 → that IS your release workflow)
│
└── NO → Does the user want an all-in-one tool (deps + build + publish)?
         ├── YES → Use poetry (v2+ supports standard [project] table)
         │
         └── NO → Is the package pure Python with no C extensions?
                  ├── YES, minimal config preferred → Use flit
                  │        (zero config, auto-discovers version from __version__)
                  │
                  └── YES, modern & fast preferred → Use hatchling
                           (zero-config, plugin system, no setup.py needed)

Does the package have C/Cython/Fortran extensions?
└── YES → MUST use setuptools (only backend with full native extension support)
```

### Backend Comparison

| Backend | Version source | Config | C extensions | Best for |
|---|---|---|---|---|
| `setuptools` + `setuptools_scm` | git tags (automatic) | `pyproject.toml` + optional `setup.py` shim | Yes | Projects with git-tag releases; any complexity |
| `hatchling` | manual or plugin | `pyproject.toml` only | No | New pure-Python projects; fast, modern |
| `flit` | `__version__` in `__init__.py` | `pyproject.toml` only | No | Very simple, single-module packages |
| `poetry` | `pyproject.toml` field | `pyproject.toml` only | No | Teams wanting integrated dep management |

For all four complete `pyproject.toml` templates, see `references/pyproject-toml.md`.

---

## 5. PyPA Packaging Flow

This is the canonical end-to-end flow from source code to user install.
**Every step must be understood before publishing.**

```
1. SOURCE TREE
   Your code in version control (git)
   └── pyproject.toml describes metadata + build system

2. BUILD
   python -m build
   └── Produces two artifacts in dist/:
       ├── *.tar.gz → source distribution (sdist)
       └── *.whl → built distribution (wheel) — preferred by pip

3. VALIDATE
   twine check dist/*
   └── Checks metadata, README rendering, and PyPI compatibility

4. TEST PUBLISH (first release only)
   twine upload --repository testpypi dist/*
   └── Verify: pip install --index-url https://test.pypi.org/simple/ your-package

5. PUBLISH
   twine upload dist/*              ← manual fallback
   OR GitHub Actions publish.yml    ← recommended (Trusted Publishing / OIDC)

6. USER INSTALL
   pip install your-package
   pip install "your-package[extra]"
```

### Key PyPA Concepts

| Concept | What it means |
|---|---|
| **sdist** | Source distribution — your source + metadata; used when no wheel is available |
| **wheel (.whl)** | Pre-built binary — pip extracts directly into site-packages; no build step |
| **PEP 517/518** | Standard build system interface via `pyproject.toml [build-system]` table |
| **PEP 621** | Standard `[project]` table in `pyproject.toml`; all modern backends support it |
| **PEP 639** | `license` key as SPDX string (e.g., `"MIT"`, `"Apache-2.0"`) — not `{text = "MIT"}` |
| **PEP 561** | `py.typed` empty marker file — tells mypy/IDEs this package ships type information |

For complete CI workflow and publishing setup, see `references/ci-publishing.md`.
---

## 6. Project Structure Templates

### A. src/ Layout (Recommended default for new projects)

```
your-package/
├── src/
│   └── your_package/
│       ├── __init__.py        # Public API: __all__, __version__
│       ├── py.typed           # PEP 561 marker — EMPTY FILE
│       ├── core.py            # Primary implementation
│       ├── client.py          # (API client type) or remove
│       ├── cli.py             # (CLI type) click/typer commands, or remove
│       ├── config.py          # Settings / configuration dataclass
│       ├── exceptions.py      # Custom exception hierarchy
│       ├── models.py          # Data classes, Pydantic models, TypedDicts
│       ├── utils.py           # Internal helpers (prefix _utils if private)
│       ├── types.py           # Shared type aliases and TypeVars
│       └── backends/          # (Plugin pattern) — remove if not needed
│           ├── __init__.py    # Protocol / ABC interface definition
│           ├── memory.py      # Default zero-dep implementation
│           └── redis.py       # Optional heavy implementation
├── tests/
│   ├── __init__.py
│   ├── conftest.py            # Shared fixtures
│   ├── unit/
│   │   ├── __init__.py
│   │   ├── test_core.py
│   │   ├── test_config.py
│   │   └── test_models.py
│   ├── integration/
│   │   ├── __init__.py
│   │   └── test_backends.py
│   └── e2e/                   # Optional: end-to-end tests
│       └── __init__.py
├── docs/                      # Optional: mkdocs or sphinx
├── scripts/
│   └── scaffold.py
├── .github/
│   ├── workflows/
│   │   ├── ci.yml
│   │   └── publish.yml
│   └── ISSUE_TEMPLATE/
│       ├── bug_report.md
│       └── feature_request.md
├── .pre-commit-config.yaml
├── pyproject.toml
├── CHANGELOG.md
├── CONTRIBUTING.md
├── SECURITY.md
├── LICENSE
├── README.md
└── .gitignore
```

### B. Flat Layout (Small / focused packages)

```
your-package/
├── your_package/              # ← at root, not inside src/
│   ├── __init__.py
│   ├── py.typed
│   └── ... (same internal structure)
├── tests/
└── ... (same top-level files)
```

### C. Namespace / Monorepo Layout (Multiple related packages)

```
your-org/
├── packages/
│   ├── your-org-core/
│   │   ├── src/your_org/core/
│   │   └── pyproject.toml
│   ├── your-org-http/
│   │   ├── src/your_org/http/
│   │   └── pyproject.toml
│   └── your-org-cli/
│       ├── src/your_org/cli/
│       └── pyproject.toml
├── .github/workflows/
└── README.md
```

Each sub-package has its own `pyproject.toml`. They share the `your_org` namespace via PEP 420
implicit namespace packages (no `__init__.py` in the namespace root).

### Internal Module Guidelines

| File | Purpose | When to include |
|---|---|---|
| `__init__.py` | Public API surface; re-exports; `__version__` | Always |
| `py.typed` | PEP 561 typed-package marker (empty) | Always |
| `core.py` | Primary class / main logic | Always |
| `config.py` | Settings dataclass or Pydantic model | When configurable |
| `exceptions.py` | Exception hierarchy (`YourBaseError` → specifics) | Always |
| `models.py` | Data models / DTOs / TypedDicts | When data-heavy |
| `utils.py` | Internal helpers (not part of public API) | As needed |
| `types.py` | Shared `TypeVar`, `TypeAlias`, `Protocol` definitions | When complex typing |
| `cli.py` | CLI entry points (click/typer) | CLI type only |
| `backends/` | Plugin/strategy pattern | When swappable implementations |
| `_compat.py` | Python version compatibility shims | When 3.9–3.13 compat needed |
|
||||
|
||||
---
|
||||
|
||||
## 7. Versioning Strategy
|
||||
|
||||
### PEP 440 — The Standard
|
||||
|
||||
```
|
||||
Canonical form: N[.N]+[{a|b|rc}N][.postN][.devN]
|
||||
|
||||
Examples:
|
||||
1.0.0 Stable release
|
||||
1.0.0a1 Alpha (pre-release)
|
||||
1.0.0b2 Beta
|
||||
1.0.0rc1 Release candidate
|
||||
1.0.0.post1 Post-release (e.g., packaging fix only)
|
||||
1.0.0.dev1 Development snapshot (not for PyPI)
|
||||
```
|
||||
|
||||
### Semantic Versioning (recommended)
|
||||
|
||||
```
|
||||
MAJOR.MINOR.PATCH
|
||||
|
||||
MAJOR: Breaking API change (remove/rename public function/class/arg)
|
||||
MINOR: New feature, fully backward-compatible
|
||||
PATCH: Bug fix, no API change
|
||||
```
|
||||
|
||||
### Dynamic versioning with setuptools_scm (recommended for git-tag workflows)
|
||||
|
||||
```bash
|
||||
# How it works:
|
||||
git tag v1.0.0 → installed version = 1.0.0
|
||||
git tag v1.1.0 → installed version = 1.1.0
|
||||
(commits after tag) → version = 1.1.0.post1 (suffix stripped for PyPI)
|
||||
|
||||
# In code — NEVER hardcode when using setuptools_scm:
|
||||
from importlib.metadata import version, PackageNotFoundError
|
||||
try:
|
||||
__version__ = version("your-package")
|
||||
except PackageNotFoundError:
|
||||
__version__ = "0.0.0-dev" # Fallback for uninstalled dev checkouts
|
||||
```
|
||||
|
||||
Required `pyproject.toml` config:
|
||||
```toml
|
||||
[tool.setuptools_scm]
|
||||
version_scheme = "post-release"
|
||||
local_scheme = "no-local-version" # Prevents +g<hash> from breaking PyPI uploads
|
||||
```
|
||||
|
||||
**Critical:** always set `fetch-depth: 0` in every CI checkout step. Without full git history,
|
||||
`setuptools_scm` cannot find tags and the build version silently falls back to `0.0.0+dev`.
|
||||
|
||||
### Static versioning (flit, hatchling manual, poetry)
|
||||
|
||||
```python
|
||||
# your_package/__init__.py
|
||||
__version__ = "1.0.0" # Update this before every release
|
||||
```
|
||||
|
||||
### Version specifier best practices for dependencies
|
||||
|
||||
```toml
|
||||
# In [project] dependencies:
|
||||
"httpx>=0.24" # Minimum version — PREFERRED for libraries
|
||||
"httpx>=0.24,<1.0" # Upper bound only when a known breaking change exists
|
||||
"httpx==0.27.0" # Pin exactly ONLY in applications, NOT libraries
|
||||
|
||||
# NEVER do this in a library — it breaks dependency resolution for users:
|
||||
# "httpx~=0.24.0" # Too tight
|
||||
# "httpx==0.27.*" # Fragile
|
||||
```
|
||||
|
||||
### Version bump → release flow
|
||||
|
||||
```bash
|
||||
# 1. Update CHANGELOG.md — move [Unreleased] entries to [x.y.z] - YYYY-MM-DD
|
||||
# 2. Commit the changelog
|
||||
git add CHANGELOG.md
|
||||
git commit -m "chore: prepare release vX.Y.Z"
|
||||
# 3. Tag and push — this triggers publish.yml automatically
|
||||
git tag vX.Y.Z
|
||||
git push origin main --tags
|
||||
# 4. Monitor GitHub Actions → verify on https://pypi.org/project/your-package/
|
||||
```
|
||||
|
||||
For complete pyproject.toml templates for all four backends, see `references/pyproject-toml.md`.
|
||||
|
||||
---
|
||||
|
||||
## Where to Go Next
|
||||
|
||||
After understanding decisions and structure:
|
||||
|
||||
1. **Set up `pyproject.toml`** → `references/pyproject-toml.md`
|
||||
All four backend templates (setuptools+scm, hatchling, flit, poetry), full tool configs,
|
||||
`py.typed` setup, versioning config.
|
||||
|
||||
2. **Write your library code** → `references/library-patterns.md`
|
||||
OOP/SOLID principles, type hints (PEP 484/526/544/561), core class design, factory functions,
|
||||
`__init__.py`, plugin/backend pattern, CLI entry point.
|
||||
|
||||
3. **Add tests and code quality** → `references/testing-quality.md`
|
||||
`conftest.py`, unit/backend/async tests, parametrize, ruff/mypy/pre-commit setup.
|
||||
|
||||
4. **Set up CI/CD and publish** → `references/ci-publishing.md`
|
||||
`ci.yml`, `publish.yml` with Trusted Publishing (OIDC, no API tokens), CHANGELOG format,
|
||||
release checklist.
|
||||
|
||||
5. **Polish for community/OSS** → `references/community-docs.md`
|
||||
README sections, docstring format, CONTRIBUTING, SECURITY, issue templates, anti-patterns
|
||||
table, and master release checklist.
|
||||
|
||||
6. **Design backends, config, transport, CLI** → `references/architecture-patterns.md`
|
||||
Backend system (plugin/strategy pattern), Settings dataclass, HTTP transport layer,
|
||||
CLI with click/typer, backend injection rules.
|
||||
|
||||
7. **Choose and implement a versioning strategy** → `references/versioning-strategy.md`
|
||||
PEP 440 canonical forms, SemVer rules, pre-release identifiers, setuptools_scm deep-dive,
|
||||
flit static versioning, decision engine (DEFAULT/BEGINNER/MINIMAL).
|
||||
|
||||
8. **Govern releases and secure the publish pipeline** → `references/release-governance.md`
|
||||
Branch strategy, branch protection rules, OIDC Trusted Publishing setup, tag author
|
||||
validation in CI, tag format enforcement, full governed `publish.yml`.
|
||||
|
||||
9. **Simplify tooling with Ruff** → `references/tooling-ruff.md`
|
||||
Ruff-only setup replacing black/isort/flake8, mypy config, pre-commit hooks,
|
||||
asyncio_mode=auto (remove @pytest.mark.asyncio), migration guide.
|
||||
@@ -0,0 +1,555 @@
|
||||
# Architecture Patterns — Backend System, Config, Transport, CLI

## Table of Contents
1. [Backend System (Plugin/Strategy Pattern)](#1-backend-system-pluginstrategy-pattern)
2. [Config Layer (Settings Dataclass)](#2-config-layer-settings-dataclass)
3. [Transport Layer (HTTP Client Abstraction)](#3-transport-layer-http-client-abstraction)
4. [CLI Support](#4-cli-support)
5. [Backend Injection in Core Client](#5-backend-injection-in-core-client)
6. [Decision Rules](#6-decision-rules)

---

## 1. Backend System (Plugin/Strategy Pattern)

Structure your `backends/` sub-package with a clear base protocol, a zero-dependency default
implementation, and optional heavy implementations behind extras.

### Directory Layout

```
your_package/
    backends/
        __init__.py   # Exports BaseBackend + factory; holds the Protocol/ABC
        base.py       # Abstract base class (ABC) or Protocol definition
        memory.py     # Default, zero-dependency in-memory implementation
        redis.py      # Optional, heavier implementation (guarded by extras)
```

### `backends/base.py` — Abstract Interface

```python
# your_package/backends/base.py
from __future__ import annotations

from abc import ABC, abstractmethod


class BaseBackend(ABC):
    """Abstract storage/processing backend.

    All concrete backends must implement these methods.
    Never import heavy dependencies at module level — guard them inside the class.
    """

    @abstractmethod
    def get(self, key: str) -> str | None:
        """Retrieve a value by key. Return None when the key does not exist."""
        ...

    @abstractmethod
    def set(self, key: str, value: str, ttl: int | None = None) -> None:
        """Store a value with an optional TTL (seconds)."""
        ...

    @abstractmethod
    def delete(self, key: str) -> None:
        """Remove a key. No-op when the key does not exist."""
        ...

    def close(self) -> None:  # noqa: B027 (intentionally non-abstract)
        """Optional cleanup hook. Override in backends that hold connections."""
```

### `backends/memory.py` — Default Zero-Dep Implementation

```python
# your_package/backends/memory.py
from __future__ import annotations

import time
from threading import Lock

from .base import BaseBackend


class MemoryBackend(BaseBackend):
    """Thread-safe in-memory backend. No external dependencies required."""

    def __init__(self) -> None:
        self._store: dict[str, tuple[str, float | None]] = {}
        self._lock = Lock()

    def get(self, key: str) -> str | None:
        with self._lock:
            entry = self._store.get(key)
            if entry is None:
                return None
            value, expires_at = entry
            if expires_at is not None and time.monotonic() > expires_at:
                del self._store[key]
                return None
            return value

    def set(self, key: str, value: str, ttl: int | None = None) -> None:
        expires_at = time.monotonic() + ttl if ttl is not None else None
        with self._lock:
            self._store[key] = (value, expires_at)

    def delete(self, key: str) -> None:
        with self._lock:
            self._store.pop(key, None)
```

### `backends/redis.py` — Optional Heavy Implementation

```python
# your_package/backends/redis.py
from __future__ import annotations

from .base import BaseBackend


class RedisBackend(BaseBackend):
    """Redis-backed implementation. Requires: pip install your-package[redis]"""

    def __init__(self, url: str = "redis://localhost:6379/0") -> None:
        try:
            import redis as _redis
        except ImportError as exc:
            raise ImportError(
                "RedisBackend requires redis. "
                "Install it with: pip install your-package[redis]"
            ) from exc
        self._client = _redis.from_url(url, decode_responses=True)

    def get(self, key: str) -> str | None:
        return self._client.get(key)  # type: ignore[return-value]

    def set(self, key: str, value: str, ttl: int | None = None) -> None:
        if ttl is not None:
            self._client.setex(key, ttl, value)
        else:
            self._client.set(key, value)

    def delete(self, key: str) -> None:
        self._client.delete(key)

    def close(self) -> None:
        self._client.close()
```

### `backends/__init__.py` — Public API + Factory

```python
# your_package/backends/__init__.py
from __future__ import annotations

from .base import BaseBackend
from .memory import MemoryBackend

__all__ = ["BaseBackend", "MemoryBackend", "get_backend"]


def get_backend(backend_type: str = "memory", **kwargs: object) -> BaseBackend:
    """Factory: return the requested backend instance.

    Args:
        backend_type: "memory" (default) or "redis".
        **kwargs: Forwarded to the backend constructor.
    """
    if backend_type == "memory":
        return MemoryBackend()
    if backend_type == "redis":
        from .redis import RedisBackend  # Late import — redis is optional
        return RedisBackend(**kwargs)  # type: ignore[arg-type]
    raise ValueError(f"Unknown backend type: {backend_type!r}")
```
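The factory's late import keeps `redis` optional, but the resulting `ImportError` only surfaces when the backend is first constructed. If you want to fail earlier with a clearer message, one option is the standard library's `importlib.util.find_spec`. A sketch (the helper name `has_extra` is hypothetical, not part of the files above):

```python
import importlib.util


def has_extra(module_name: str) -> bool:
    """Return True when an optional dependency is importable, without importing it."""
    return importlib.util.find_spec(module_name) is not None


# The factory could call this before the late import to raise a friendlier error:
assert has_extra("json") is True          # stdlib module, always present
assert has_extra("surely_not_installed_pkg") is False
```

`find_spec` does not execute the module, so this check is cheap and side-effect free.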

---

## 2. Config Layer (Settings Dataclass)

Centralise all configuration in one `config.py` module. Avoid scattering magic values and
`os.environ` calls across the codebase.

### `config.py`

```python
# your_package/config.py
from __future__ import annotations

import os
from dataclasses import dataclass


@dataclass
class Settings:
    """All runtime configuration for your package.

    Attributes:
        api_key: Authentication credential. Never log or expose this.
        timeout: HTTP request timeout in seconds.
        retries: Maximum number of retry attempts on transient failures.
        base_url: API base URL. Override in tests with a local server.
    """

    api_key: str
    timeout: int = 30
    retries: int = 3
    base_url: str = "https://api.example.com/v1"

    def __post_init__(self) -> None:
        if not self.api_key:
            raise ValueError("api_key must not be empty")
        if self.timeout < 1:
            raise ValueError("timeout must be >= 1")
        if self.retries < 0:
            raise ValueError("retries must be >= 0")

    @classmethod
    def from_env(cls) -> "Settings":
        """Construct Settings from environment variables.

        Required env var: YOUR_PACKAGE_API_KEY
        Optional env vars: YOUR_PACKAGE_TIMEOUT, YOUR_PACKAGE_RETRIES
        """
        api_key = os.environ.get("YOUR_PACKAGE_API_KEY", "")
        timeout = int(os.environ.get("YOUR_PACKAGE_TIMEOUT", "30"))
        retries = int(os.environ.get("YOUR_PACKAGE_RETRIES", "3"))
        return cls(api_key=api_key, timeout=timeout, retries=retries)
```
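To see the pattern end to end, here is a condensed, self-contained variant (renamed `DemoSettings` so it does not collide with the real class; an illustrative sketch, not part of the package) showing how `from_env` and `__post_init__` validation interact:

```python
import os
from dataclasses import dataclass


@dataclass
class DemoSettings:
    api_key: str
    timeout: int = 30

    def __post_init__(self) -> None:
        if not self.api_key:
            raise ValueError("api_key must not be empty")

    @classmethod
    def from_env(cls) -> "DemoSettings":
        return cls(
            api_key=os.environ.get("DEMO_API_KEY", ""),
            timeout=int(os.environ.get("DEMO_TIMEOUT", "30")),
        )


os.environ["DEMO_API_KEY"] = "sk-test"
os.environ["DEMO_TIMEOUT"] = "10"
settings = DemoSettings.from_env()
assert settings.api_key == "sk-test"
assert settings.timeout == 10

# Validation fires even when values come from the environment:
os.environ["DEMO_API_KEY"] = ""
try:
    DemoSettings.from_env()
except ValueError:
    pass  # expected: empty api_key is rejected
```

Because `from_env` funnels through the regular constructor, environment-sourced config gets the same validation as explicitly constructed config.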

### Using Pydantic (optional, for larger projects)

```python
# your_package/config.py — Pydantic v2 variant
from __future__ import annotations

from pydantic import Field
from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    api_key: str = Field(..., min_length=1)
    timeout: int = Field(30, ge=1)
    retries: int = Field(3, ge=0)
    base_url: str = "https://api.example.com/v1"

    model_config = {"env_prefix": "YOUR_PACKAGE_"}
```

---

## 3. Transport Layer (HTTP Client Abstraction)

Isolate all HTTP concerns — headers, retries, timeouts, error parsing — in a dedicated
`transport/` sub-package. The core client depends on the transport abstraction, not on `httpx`
or `requests` directly.

### Directory Layout

```
your_package/
    transport/
        __init__.py   # Re-exports HttpTransport
        http.py       # Concrete httpx-based transport
```

### `transport/http.py`

```python
# your_package/transport/http.py
from __future__ import annotations

from typing import Any

import httpx

from ..config import Settings
from ..exceptions import YourPackageError, RateLimitError, AuthenticationError


class HttpTransport:
    """Thin httpx wrapper that centralises auth, retries, and error mapping."""

    def __init__(self, settings: Settings) -> None:
        self._settings = settings
        self._client = httpx.Client(
            base_url=settings.base_url,
            timeout=settings.timeout,
            headers={"Authorization": f"Bearer {settings.api_key}"},
        )

    def request(
        self,
        method: str,
        path: str,
        *,
        json: dict[str, Any] | None = None,
        params: dict[str, Any] | None = None,
    ) -> dict[str, Any]:
        """Send an HTTP request and return the parsed JSON body.

        Raises:
            AuthenticationError: on 401.
            RateLimitError: on 429.
            YourPackageError: on all other non-2xx responses.
        """
        response = self._client.request(method, path, json=json, params=params)
        self._raise_for_status(response)
        return response.json()

    def _raise_for_status(self, response: httpx.Response) -> None:
        if response.status_code == 401:
            raise AuthenticationError("Invalid or expired API key.")
        if response.status_code == 429:
            raise RateLimitError("Rate limit exceeded. Back off and retry.")
        if response.is_error:
            raise YourPackageError(
                f"API error {response.status_code}: {response.text[:200]}"
            )

    def close(self) -> None:
        self._client.close()

    def __enter__(self) -> "HttpTransport":
        return self

    def __exit__(self, *args: object) -> None:
        self.close()
```
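The docstring above mentions retries, and `Settings.retries` exists, but the wrapper does not yet show a retry loop. One way to add it is a small backoff helper that the transport can wrap around its `self._client.request(...)` call. A sketch with a hypothetical `with_retries` helper (the name and signature are assumptions, not part of the file above):

```python
import time
from collections.abc import Callable
from typing import TypeVar

T = TypeVar("T")


def with_retries(
    fn: Callable[[], T],
    retries: int = 3,
    base_delay: float = 0.5,
    retry_on: tuple[type[BaseException], ...] = (ConnectionError, TimeoutError),
) -> T:
    """Call fn, retrying transient failures with exponential backoff."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except retry_on:
            if attempt == retries:
                raise  # retries exhausted; propagate the last error
            time.sleep(base_delay * (2 ** attempt))
    raise AssertionError("unreachable")


# Example: a flaky callable that fails twice, then succeeds.
calls = {"n": 0}


def flaky() -> str:
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"


assert with_retries(flaky, retries=3, base_delay=0.0) == "ok"
assert calls["n"] == 3
```

In the real transport the `retries` argument would come from `settings.retries`, and `retry_on` would likely include `httpx.TransportError`.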

### Async variant

```python
# your_package/transport/async_http.py
from __future__ import annotations

from typing import Any

import httpx

from ..config import Settings
from ..exceptions import YourPackageError, RateLimitError, AuthenticationError


class AsyncHttpTransport:
    """Async httpx wrapper. Use with `async with AsyncHttpTransport(...) as t:`."""

    def __init__(self, settings: Settings) -> None:
        self._settings = settings
        self._client = httpx.AsyncClient(
            base_url=settings.base_url,
            timeout=settings.timeout,
            headers={"Authorization": f"Bearer {settings.api_key}"},
        )

    async def request(
        self,
        method: str,
        path: str,
        *,
        json: dict[str, Any] | None = None,
        params: dict[str, Any] | None = None,
    ) -> dict[str, Any]:
        response = await self._client.request(method, path, json=json, params=params)
        self._raise_for_status(response)
        return response.json()

    def _raise_for_status(self, response: httpx.Response) -> None:
        if response.status_code == 401:
            raise AuthenticationError("Invalid or expired API key.")
        if response.status_code == 429:
            raise RateLimitError("Rate limit exceeded. Back off and retry.")
        if response.is_error:
            raise YourPackageError(
                f"API error {response.status_code}: {response.text[:200]}"
            )

    async def aclose(self) -> None:
        await self._client.aclose()

    async def __aenter__(self) -> "AsyncHttpTransport":
        return self

    async def __aexit__(self, *args: object) -> None:
        await self.aclose()
```
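The async transport must be closed with `aclose()`, and the `async with` form guarantees that even when the body raises. A minimal, self-contained sketch of the same lifecycle (the stand-in `DemoResource` class is illustrative; no network involved):

```python
import asyncio


class DemoResource:
    """Stand-in mirroring AsyncHttpTransport's context-manager protocol."""

    def __init__(self) -> None:
        self.closed = False

    async def aclose(self) -> None:
        self.closed = True

    async def __aenter__(self) -> "DemoResource":
        return self

    async def __aexit__(self, *args: object) -> None:
        await self.aclose()


async def main() -> DemoResource:
    async with DemoResource() as resource:
        assert resource.closed is False  # usable inside the block
    return resource


resource = asyncio.run(main())
assert resource.closed is True  # cleanup ran on exit
```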

---

## 4. CLI Support

Add a CLI entry point via `[project.scripts]` in `pyproject.toml`.

### `pyproject.toml` entry

```toml
[project.scripts]
your-cli = "your_package.cli:main"
```

After installation, the user can run `your-cli --help` directly from the terminal.

### `cli.py` — Using Click

```python
# your_package/cli.py
from __future__ import annotations

import sys

import click

from .config import Settings
from .core import YourClient


@click.group()
@click.version_option()
def main() -> None:
    """your-package CLI — interact with the API from the command line."""


@main.command()
@click.option("--api-key", envvar="YOUR_PACKAGE_API_KEY", required=True, help="API key.")
@click.option("--timeout", default=30, show_default=True, help="Request timeout (s).")
@click.argument("query")
def search(api_key: str, timeout: int, query: str) -> None:
    """Search the API and print results."""
    settings = Settings(api_key=api_key, timeout=timeout)
    client = YourClient(settings=settings)
    try:
        results = client.search(query)
        for item in results:
            click.echo(item)
    except Exception as exc:
        click.echo(f"Error: {exc}", err=True)
        sys.exit(1)
```

### `cli.py` — Using Typer (modern alternative)

```python
# your_package/cli.py
from __future__ import annotations

import typer

from .config import Settings
from .core import YourClient

app = typer.Typer(help="your-package CLI.")


@app.command()
def search(
    query: str = typer.Argument(..., help="Search query."),
    api_key: str = typer.Option(..., envvar="YOUR_PACKAGE_API_KEY"),
    timeout: int = typer.Option(30, help="Request timeout (s)."),
) -> None:
    """Search the API and print results."""
    settings = Settings(api_key=api_key, timeout=timeout)
    client = YourClient(settings=settings)
    results = client.search(query)
    for item in results:
        typer.echo(item)


def main() -> None:
    app()
```

---

## 5. Backend Injection in Core Client

**Critical:** always accept `backend` as a constructor argument. Never instantiate the backend
inside the constructor without a fallback parameter — that makes testing impossible.

```python
# your_package/core.py
from __future__ import annotations

from .backends.base import BaseBackend
from .backends.memory import MemoryBackend
from .config import Settings


class YourClient:
    """Primary client. Accepts an injected backend for testability.

    Args:
        api_key: Convenience credential; used to build a Settings object
            when `settings` is not supplied.
        settings: Resolved configuration. Use Settings.from_env() for production.
        backend: Storage/processing backend. Defaults to MemoryBackend when None.
        timeout: Deprecated — pass a Settings object instead.
        retries: Deprecated — pass a Settings object instead.
    """

    def __init__(
        self,
        api_key: str | None = None,
        *,
        settings: Settings | None = None,
        backend: BaseBackend | None = None,
        timeout: int = 30,
        retries: int = 3,
    ) -> None:
        if settings is None:
            if api_key is None:
                raise ValueError("Provide either 'api_key' or 'settings'.")
            settings = Settings(api_key=api_key, timeout=timeout, retries=retries)
        self._settings = settings
        # CORRECT — default injected, not hardcoded
        self.backend: BaseBackend = backend if backend is not None else MemoryBackend()

    # ... methods
```

### Anti-Pattern — Never Do This

```python
# BAD: hardcodes the backend; impossible to swap in tests
class YourClient:
    def __init__(self, api_key: str) -> None:
        self.backend = MemoryBackend()  # ← no injection possible


# BAD: hardcodes the package name literal in imports
from your_package.backends.memory import MemoryBackend  # only fine in your_package itself
# use relative imports inside the package:
from .backends.memory import MemoryBackend  # ← correct
```

---

## 6. Decision Rules

```
Does the package interact with external state (cache, DB, queue)?
├── YES → Add backends/ with BaseBackend + MemoryBackend
│         Add optional heavy backends behind extras_require
│
└── NO  → Skip backends/ entirely; keep core.py simple

Does the package call an external HTTP API?
├── YES → Add transport/http.py; inject via Settings
│
└── NO  → Skip transport/

Does the package need a command-line interface?
├── YES, simple (1–3 commands) → Use argparse or click
│         Add [project.scripts] in pyproject.toml
│
├── YES, complex (sub-commands, plugins) → Use click or typer
│
└── NO  → Skip cli.py

Does runtime behaviour depend on user-supplied config?
├── YES → Add config.py with Settings dataclass
│         Expose Settings.from_env() for production use
│
└── NO  → Accept params directly in the constructor
```

---

`skills/python-pypi-package-builder/references/ci-publishing.md` (new file)

# CI/CD, Publishing, and Changelog

## Table of Contents
1. [Changelog format](#1-changelog-format)
2. [ci.yml — lint, type-check, test matrix](#2-ciyml)
3. [publish.yml — triggered on version tags](#3-publishyml)
4. [PyPI Trusted Publishing (no API tokens)](#4-pypi-trusted-publishing)
5. [Manual publish fallback](#5-manual-publish-fallback)
6. [Release checklist](#6-release-checklist)
7. [Verify py.typed ships in the wheel](#7-verify-pytyped-ships-in-the-wheel)
8. [Semver change-type guide](#8-semver-change-type-guide)

---

## 1. Changelog Format

Keep a `CHANGELOG.md` following [Keep a Changelog](https://keepachangelog.com/) conventions.
Every PR should update the `[Unreleased]` section. Before releasing, move those entries to a
new version section with the date.

```markdown
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

---

## [Unreleased]

### Added
- (in-progress features go here)

---

## [1.0.0] - 2026-04-02

### Added
- Initial stable release
- `YourMiddleware` with gradual, strict, and combined modes
- In-memory backend (no extra deps)
- Optional Redis backend (`pip install pkg[redis]`)
- Per-route override via `Depends(RouteThrottle(...))`
- `py.typed` marker — PEP 561 typed package
- GitHub Actions CI: lint, mypy, test matrix, Trusted Publishing

### Changed
### Fixed
### Removed

---

## [0.1.0] - 2026-03-01

### Added
- Initial project scaffold

[Unreleased]: https://github.com/you/your-package/compare/v1.0.0...HEAD
[1.0.0]: https://github.com/you/your-package/compare/v0.1.0...v1.0.0
[0.1.0]: https://github.com/you/your-package/releases/tag/v0.1.0
```

### Semver — what bumps what

| Change type | Bump | Example |
|---|---|---|
| Breaking API change | MAJOR | `1.0.0 → 2.0.0` |
| New feature, backward-compatible | MINOR | `1.0.0 → 1.1.0` |
| Bug fix | PATCH | `1.0.0 → 1.0.1` |

---

## 2. `ci.yml`

Runs on every push and pull request. Tests across all supported Python versions.

```yaml
# .github/workflows/ci.yml
name: CI

on:
  push:
    branches: [main, master]
  pull_request:
    branches: [main, master]

jobs:
  lint:
    name: Lint, Format & Type Check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install dev dependencies
        run: pip install -e ".[dev]"
      - name: ruff lint
        run: ruff check .
      - name: ruff format check
        run: ruff format --check .
      - name: mypy
        run: |
          if [ -d "src" ]; then
            mypy src/
          else
            mypy {mod}/
          fi

  test:
    name: Test (Python ${{ matrix.python-version }})
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.10", "3.11", "3.12", "3.13"]

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # REQUIRED for setuptools_scm to read git tags

      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install dependencies
        run: pip install -e ".[dev]"

      - name: Run tests with coverage
        run: pytest --cov --cov-report=xml

      - name: Upload coverage
        uses: codecov/codecov-action@v4
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          fail_ci_if_error: false

  test-redis:
    name: Test Redis backend
    runs-on: ubuntu-latest
    services:
      redis:
        image: redis:7-alpine
        ports: ["6379:6379"]
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Install with Redis extra
        run: pip install -e ".[dev,redis]"

      - name: Run Redis tests
        run: pytest tests/test_redis_backend.py -v
```

> **Always add `fetch-depth: 0`** to every checkout step when using `setuptools_scm`.
> Without full git history, `setuptools_scm` can't find tags and the build fails with a version
> detection error.
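For reference, the `pyproject.toml` side of the `setuptools_scm` setup that this note assumes might look like the following minimal fragment (version pins are indicative only; the `local_scheme` line matches the PR checklist later in this skill):

```toml
[build-system]
requires = ["setuptools>=64", "setuptools_scm>=8"]
build-backend = "setuptools.build_meta"

[project]
name = "your-package"
dynamic = ["version"]  # version comes from git tags, not a hardcoded string

[tool.setuptools_scm]
# Avoid +gHASH local-version suffixes, which PyPI rejects on upload
local_scheme = "no-local-version"
```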

---

## 3. `publish.yml`

Triggered automatically when you push a tag matching `v*.*.*`. Uses Trusted Publishing (OIDC) —
no API tokens in repository secrets.

```yaml
# .github/workflows/publish.yml
name: Publish to PyPI

on:
  push:
    tags:
      - "v*.*.*"

jobs:
  build:
    name: Build distribution
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Critical for setuptools_scm

      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Install build tools
        run: pip install build twine

      - name: Build package
        run: python -m build

      - name: Check distribution
        run: twine check dist/*

      - uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/

  publish:
    name: Publish to PyPI
    needs: build
    runs-on: ubuntu-latest
    environment: pypi
    permissions:
      id-token: write  # Required for Trusted Publishing (OIDC)

    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist/

      - name: Publish to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
```

---

## 4. PyPI Trusted Publishing

Trusted Publishing uses OpenID Connect (OIDC) so PyPI can verify that a publish came from your
specific GitHub Actions workflow — no long-lived API tokens required, no rotation burden.

### One-time setup

1. Create an account at https://pypi.org
2. Go to **Account → Publishing → Add a new pending publisher**
3. Fill in:
   - GitHub owner (your username or org)
   - Repository name
   - Workflow filename: `publish.yml`
   - Environment name: `pypi`
4. Create the `pypi` environment in GitHub:
   **repo → Settings → Environments → New environment → name it `pypi`**

That's it. The next time you push a `v*.*.*` tag, the workflow authenticates automatically.

---

## 5. Manual Publish Fallback

If CI isn't set up yet or you need to publish from your machine:

```bash
pip install build twine

# Build wheel + sdist
python -m build

# Validate before uploading
twine check dist/*

# Upload to PyPI
twine upload dist/*

# OR test on TestPyPI first (recommended for first release)
twine upload --repository testpypi dist/*
pip install --index-url https://test.pypi.org/simple/ your-package
python -c "import your_package; print(your_package.__version__)"
```

---

## 6. Release Checklist

```
[ ] All tests pass on main/master
[ ] CHANGELOG.md updated — move [Unreleased] items to new version section with date
[ ] Update diff comparison links at bottom of CHANGELOG
[ ] git tag vX.Y.Z
[ ] git push origin master --tags
[ ] Monitor GitHub Actions publish.yml run
[ ] Verify on PyPI: pip install your-package==X.Y.Z
[ ] Test the installed version:
    python -c "import your_package; print(your_package.__version__)"
```

---

## 7. Verify py.typed Ships in the Wheel

After every build, confirm the typed marker is included:

```bash
python -m build
unzip -l dist/your_package-*.whl | grep py.typed
# Must print: your_package/py.typed
# If missing, check [tool.setuptools.package-data] in pyproject.toml
```

If it's missing from the wheel, users won't get type information even though your code is
fully typed. This is a silent failure — always verify before releasing.
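When the marker is missing, the usual fix lives in `pyproject.toml`. A minimal fragment, assuming a setuptools build and an importable package named `your_package`:

```toml
[tool.setuptools.package-data]
# Ship the PEP 561 marker inside the wheel so type checkers see your annotations
your_package = ["py.typed"]
```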

---

## 8. Semver Change-Type Guide

| Change | Version bump | Example |
|---|---|---|
| Breaking API change (remove/rename public symbol) | MAJOR | `1.2.3 → 2.0.0` |
| New feature, fully backward-compatible | MINOR | `1.2.3 → 1.3.0` |
| Bug fix, no API change | PATCH | `1.2.3 → 1.2.4` |
| Pre-release | suffix | `2.0.0a1 → 2.0.0rc1 → 2.0.0` |
| Packaging-only fix (no code change) | post-release | `1.2.3 → 1.2.3.post1` |

---

`skills/python-pypi-package-builder/references/community-docs.md` (new file)

# Community Docs, PR Checklist, Anti-patterns, and Release Checklist

## Table of Contents
1. [README.md required sections](#1-readmemd-required-sections)
2. [Docstrings — Google style](#2-docstrings--google-style)
3. [CONTRIBUTING.md template](#3-contributingmd)
4. [SECURITY.md template](#4-securitymd)
5. [GitHub Issue Templates](#5-github-issue-templates)
6. [PR Checklist](#6-pr-checklist)
7. [Anti-patterns to avoid](#7-anti-patterns-to-avoid)
8. [Master Release Checklist](#8-master-release-checklist)

---

## 1. `README.md` Required Sections

A good README is the single most important file for adoption. Users decide in 30 seconds whether
to use your library based on the README.

```markdown
# your-package

> One-line description — what it does and why it's useful.

[![PyPI version](https://img.shields.io/pypi/v/your-package.svg)](https://pypi.org/project/your-package/)
[![Python versions](https://img.shields.io/pypi/pyversions/your-package.svg)](https://pypi.org/project/your-package/)
[![CI](https://github.com/you/your-package/actions/workflows/ci.yml/badge.svg)](https://github.com/you/your-package/actions/workflows/ci.yml)
[![Coverage](https://codecov.io/gh/you/your-package/branch/main/graph/badge.svg)](https://codecov.io/gh/you/your-package)
[![License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)

## Installation

pip install your-package

# With Redis backend:
pip install "your-package[redis]"

## Quick Start

(A copy-paste working example — no setup required to run it)

from your_package import YourClient

client = YourClient(api_key="sk-...")
result = client.process({"input": "value"})
print(result)

## Features

- Feature 1
- Feature 2

## Configuration

| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | str | required | Authentication credential |
| timeout | int | 30 | Request timeout in seconds |
| retries | int | 3 | Number of retry attempts |

## Backends

Brief comparison — in-memory vs Redis — and when to use each.

## Contributing

See [CONTRIBUTING.md](./CONTRIBUTING.md)

## Changelog

See [CHANGELOG.md](./CHANGELOG.md)

## License

MIT — see [LICENSE](./LICENSE)
```
---

## 2. Docstrings — Google Style

Use Google-style docstrings for every public class, method, and function. IDEs display these
as tooltips, mkdocs/sphinx can auto-generate documentation from them, and they convey intent
clearly to contributors.

```python
class YourClient:
    """
    Main client for <purpose>.

    Args:
        api_key: Authentication credential.
        timeout: Request timeout in seconds. Defaults to 30.
        retries: Number of retry attempts. Defaults to 3.

    Raises:
        ValueError: If api_key is empty or timeout is non-positive.

    Example:
        >>> from your_package import YourClient
        >>> client = YourClient(api_key="sk-...")
        >>> result = client.process({"input": "value"})
    """
```

---

## 3. `CONTRIBUTING.md`

```markdown
# Contributing to your-package

## Development Setup

git clone https://github.com/you/your-package
cd your-package
pip install -e ".[dev]"
pre-commit install

## Running Tests

pytest

## Running Linting

ruff check .
black . --check
mypy your_package/

## Submitting a PR

1. Fork the repository
2. Create a feature branch: `git checkout -b feat/your-feature`
3. Make changes with tests
4. Ensure CI passes: `pre-commit run --all-files && pytest`
5. Update `CHANGELOG.md` under `[Unreleased]`
6. Open a PR — use the PR template

## Commit Message Format (Conventional Commits)

- `feat: add Redis backend`
- `fix: correct retry behavior on timeout`
- `docs: update README quick start`
- `chore: bump ruff to 0.5`
- `test: add edge cases for memory backend`

## Reporting Bugs

Use the GitHub issue template. Include Python version, package version,
and a minimal reproducible example.
```

---

## 4. `SECURITY.md`

```markdown
# Security Policy

## Supported Versions

| Version | Supported |
|---|---|
| 1.x.x | Yes |
| < 1.0 | No |

## Reporting a Vulnerability

Do NOT open a public GitHub issue for security vulnerabilities.

Report via: GitHub private security reporting (preferred)
or email: security@yourdomain.com

Include:
- Description of the vulnerability
- Steps to reproduce
- Potential impact
- Suggested fix (if any)

We aim to acknowledge within 48 hours and resolve within 14 days.
```
---
|
||||
|
||||
## 5. GitHub Issue Templates
|
||||
|
||||
### `.github/ISSUE_TEMPLATE/bug_report.md`
|
||||
|
||||
```markdown
|
||||
---
|
||||
name: Bug Report
|
||||
about: Report a reproducible bug
|
||||
labels: bug
|
||||
---
|
||||
|
||||
**Python version:**
|
||||
**Package version:**
|
||||
|
||||
**Describe the bug:**
|
||||
|
||||
**Minimal reproducible example:**
|
||||
```python
|
||||
# paste code here
|
||||
```
|
||||
|
||||
**Expected behavior:**
|
||||
|
||||
**Actual behavior:**
|
||||
```

### `.github/ISSUE_TEMPLATE/feature_request.md`

```markdown
---
name: Feature Request
about: Suggest a new feature or enhancement
labels: enhancement
---

**Problem this would solve:**

**Proposed solution:**

**Alternatives considered:**
```

---

## 6. PR Checklist

All items must be checked before requesting review. CI must be fully green.

### Code Quality Gates
```
[ ] ruff check . — zero errors
[ ] black . --check — zero formatting issues
[ ] isort . --check-only — imports sorted correctly
[ ] mypy your_package/ — zero type errors
[ ] pytest — all tests pass
[ ] Coverage >= 80% (enforced by fail_under in pyproject.toml)
[ ] All GitHub Actions workflows green
```

### Structure
```
[ ] pyproject.toml: name, dynamic/version, description, requires-python, license, authors,
    keywords (10+), classifiers, dependencies, all [project.urls] filled in
[ ] dynamic = ["version"] if using setuptools_scm
[ ] [tool.setuptools_scm] with local_scheme = "no-local-version"
[ ] setup.py shim present (if using setuptools_scm)
[ ] py.typed marker file exists in the package directory (empty file)
[ ] py.typed listed in [tool.setuptools.package-data]
[ ] "Typing :: Typed" classifier in pyproject.toml
[ ] __init__.py has __all__ listing all public symbols
[ ] __version__ via importlib.metadata (not hardcoded string)
```

### Testing
```
[ ] conftest.py has shared fixtures for client and backend
[ ] Core happy path tested
[ ] Error conditions and edge cases tested
[ ] Each backend tested in isolation
[ ] Redis backend tested in separate CI job with redis service (if applicable)
[ ] asyncio_mode = "auto" in pyproject.toml (for async tests)
[ ] fetch-depth: 0 in all CI checkout steps
```

### Optional Backend (if applicable)
```
[ ] BaseBackend abstract class defines the interface
[ ] MemoryBackend works with zero extra deps
[ ] RedisBackend raises ImportError with clear pip install hint if redis not installed
[ ] Both backends unit-tested independently
[ ] redis extra declared in [project.optional-dependencies]
[ ] README shows both install paths (base and [redis])
```

### Changelog & Docs
```
[ ] CHANGELOG.md updated under [Unreleased]
[ ] README has: description, install, quick start, config table, badges, license
[ ] All public symbols have Google-style docstrings
[ ] CONTRIBUTING.md: dev setup, test/lint commands, PR instructions
[ ] SECURITY.md: supported versions, reporting process
[ ] .github/ISSUE_TEMPLATE/bug_report.md
[ ] .github/ISSUE_TEMPLATE/feature_request.md
```

### CI/CD
```
[ ] ci.yml: lint + mypy + test matrix (all supported Python versions)
[ ] ci.yml: separate job for Redis backend with redis service
[ ] publish.yml: triggered on v*.*.* tags, uses Trusted Publishing (OIDC)
[ ] fetch-depth: 0 in all workflow checkout steps
[ ] pypi environment created in GitHub repo Settings → Environments
[ ] No API tokens in repository secrets
```
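
The publish workflow the CI/CD items describe can be sketched as below — a minimal example, not a drop-in file: the environment name, action versions, and Python version are assumptions you should adapt to your repo. `pypa/gh-action-pypi-publish` picks up the OIDC token itself, so no secrets are configured anywhere.

```yaml
# .github/workflows/publish.yml — minimal Trusted Publishing sketch (assumed names)
name: publish
on:
  push:
    tags: ["v*.*.*"]

jobs:
  publish:
    runs-on: ubuntu-latest
    environment: pypi          # must match the environment created in repo Settings
    permissions:
      id-token: write          # required for OIDC Trusted Publishing
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0       # setuptools_scm needs the full tag history
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install build && python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1
```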

---

## 7. Anti-patterns to Avoid

| Anti-pattern | Why it's bad | Correct approach |
|---|---|---|
| `__version__ = "1.0.0"` hardcoded with setuptools_scm | Goes stale after first git tag | Use `importlib.metadata.version()` |
| Missing `fetch-depth: 0` in CI checkout | setuptools_scm can't find tags → version = `0.0.0+dev` | Add `fetch-depth: 0` to **every** checkout step |
| `local_scheme` not set | `+g<hash>` suffix breaks PyPI uploads (local versions rejected) | `local_scheme = "no-local-version"` |
| Missing `py.typed` file | IDEs and mypy don't see package as typed | Create empty `py.typed` in package root |
| `py.typed` not in `package-data` | File missing from installed wheel — useless | Add to `[tool.setuptools.package-data]` |
| Importing optional dep at module top | `ImportError` on `import your_package` for all users | Lazy import inside the function/class that needs it |
| Duplicating metadata in `setup.py` | Conflicts with `pyproject.toml`; drifts | Keep `setup.py` as 3-line shim only |
| No `fail_under` in coverage config | Coverage regressions go unnoticed | Set `fail_under = 80` |
| No mypy in CI | Type errors silently accumulate | Add mypy step to `ci.yml` |
| API tokens in GitHub Secrets for PyPI | Security risk, rotation burden | Use Trusted Publishing (OIDC) |
| Committing directly to `main`/`master` | Bypasses CI checks | Enforce via `no-commit-to-branch` pre-commit hook |
| Missing `[Unreleased]` section in CHANGELOG | Changes pile up and get forgotten at release time | Keep `[Unreleased]` updated every PR |
| Pinning exact dep versions in a library | Breaks dependency resolution for users | Use `>=` lower bounds only; avoid `==` |
| No `__all__` in `__init__.py` | Users can accidentally import internal helpers | Declare `__all__` with every public symbol |
| `from your_package import *` in tests | Tests pass even when imports are broken | Always use explicit imports |
| No `SECURITY.md` | No path for responsible vulnerability disclosure | Add file with response timeline |
| `Any` everywhere in type hints | Defeats mypy entirely | Use `object` for truly arbitrary values |
| `Union` return types | Forces every caller to write `isinstance()` checks | Return concrete types; use overloads |
| `setup.cfg` + `pyproject.toml` both active | Conflicting config sources; confusing for contributors | Migrate everything to `pyproject.toml` |
| Releasing on untagged commits | Version number is meaningless | Always tag before release |
| Not testing on all supported Python versions | Breakage discovered by users, not you | Matrix test in CI |
| `license = {text = "MIT"}` (old form) | Deprecated; PEP 639 uses SPDX strings | `license = "MIT"` |
| No issue templates | Bug reports are inconsistent | Add `bug_report.md` + `feature_request.md` |

---

## 8. Master Release Checklist

Run through every item before pushing a release tag. CI must be fully green.

### Code Quality
```
[ ] ruff check . — zero errors
[ ] ruff format . --check — zero formatting issues
[ ] mypy src/your_package/ — zero type errors
[ ] pytest — all tests pass
[ ] Coverage >= 80% (fail_under enforced in pyproject.toml)
[ ] All GitHub Actions CI jobs green (lint + test matrix)
```

### Project Structure
```
[ ] pyproject.toml — name, description, requires-python, license (SPDX string), authors,
    keywords (10+), classifiers (Python versions + Typing :: Typed), urls (all 5 fields)
[ ] dynamic = ["version"] set (if using setuptools_scm or hatch-vcs)
[ ] [tool.setuptools_scm] with local_scheme = "no-local-version"
[ ] setup.py shim present (if using setuptools_scm)
[ ] py.typed marker file exists (empty file in package root)
[ ] py.typed listed in [tool.setuptools.package-data]
[ ] "Typing :: Typed" classifier in pyproject.toml
[ ] __init__.py has __all__ listing all public symbols
[ ] __version__ reads from importlib.metadata (not hardcoded)
```

### Testing
```
[ ] conftest.py has shared fixtures for client and backend
[ ] Core happy path tested
[ ] Error conditions and edge cases tested
[ ] Each backend tested in isolation
[ ] asyncio_mode = "auto" in pyproject.toml (for async tests)
[ ] fetch-depth: 0 in all CI checkout steps
```

### CHANGELOG and Docs
```
[ ] CHANGELOG.md: [Unreleased] entries moved to [x.y.z] - YYYY-MM-DD
[ ] README has: description, install commands, quick start, config table, badges
[ ] All public symbols have Google-style docstrings
[ ] CONTRIBUTING.md: dev setup, test/lint commands, PR instructions
[ ] SECURITY.md: supported versions, reporting process with timeline
```

### Versioning
```
[ ] All CI checks pass on the commit you plan to tag
[ ] CHANGELOG.md updated and committed
[ ] Git tag follows format v1.2.3 (semver, v prefix)
[ ] No stale local_scheme suffixes will appear in the built wheel name
```

### CI/CD
```
[ ] ci.yml: lint + mypy + test matrix (all supported Python versions)
[ ] publish.yml: triggered on v*.*.* tags, uses Trusted Publishing (OIDC)
[ ] pypi environment created in GitHub repo Settings → Environments
[ ] No API tokens stored in repository secrets
```

### The Release Command Sequence
```bash
# 1. Run full local validation — && stops at the first failing gate
ruff check . && ruff format . --check && mypy src/your_package/ && pytest

# 2. Update CHANGELOG.md — move [Unreleased] to [x.y.z]
# 3. Commit the changelog
git add CHANGELOG.md
git commit -m "chore: prepare release vX.Y.Z"

# 4. Tag and push — this triggers publish.yml automatically
git tag vX.Y.Z
git push origin main --tags

# 5. Monitor: https://github.com/<you>/<pkg>/actions
# 6. Verify: https://pypi.org/project/your-package/
```

---

# Library Core Patterns, OOP/SOLID, and Type Hints

## Table of Contents
1. [OOP & SOLID Principles](#1-oop--solid-principles)
2. [Type Hints Best Practices](#2-type-hints-best-practices)
3. [Core Class Design](#3-core-class-design)
4. [Factory / Builder Pattern](#4-factory--builder-pattern)
5. [Configuration Pattern](#5-configuration-pattern)
6. [`__init__.py` — explicit public API](#6-__init__py--explicit-public-api)
7. [Optional Backends (Plugin Pattern)](#7-optional-backends-plugin-pattern)

---

## 1. OOP & SOLID Principles

Apply these principles to produce maintainable, testable, extensible packages.
**Do not over-engineer** — apply the principle that solves a real problem, not all of them
at once.

### S — Single Responsibility Principle

Each class/module should have **one reason to change**.

```python
# BAD: one class handles data, validation, AND persistence
class UserManager:
    def validate(self, user): ...
    def save_to_db(self, user): ...
    def send_email(self, user): ...

# GOOD: split responsibilities
class UserValidator:
    def validate(self, user: User) -> None: ...

class UserRepository:
    def save(self, user: User) -> None: ...

class UserNotifier:
    def notify(self, user: User) -> None: ...
```

### O — Open/Closed Principle

Open for extension, closed for modification. Use **protocols or ABCs** as extension points.

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Define the interface once; never modify it for new implementations."""
    @abstractmethod
    def get(self, key: str) -> str | None: ...
    @abstractmethod
    def set(self, key: str, value: str) -> None: ...

class MemoryBackend(StorageBackend):  # Extend by subclassing
    ...

class RedisBackend(StorageBackend):  # Add new impl without touching StorageBackend
    ...
```

### L — Liskov Substitution Principle

Subclasses must be substitutable for their base. Never narrow a contract in a subclass.

```python
class BaseProcessor:
    def process(self, data: dict) -> dict: ...

# BAD: raises TypeError for valid dicts — breaks substitutability
class StrictProcessor(BaseProcessor):
    def process(self, data: dict) -> dict:
        if not data:
            raise TypeError("Must have data")  # Base never raised this

# GOOD: accept what base accepts, fulfill the same contract
class StrictProcessor(BaseProcessor):
    def process(self, data: dict) -> dict:
        if not data:
            return {}  # Graceful — same return type, no new exceptions
```

### I — Interface Segregation Principle

Prefer **small, focused protocols** over large monolithic ABCs.

```python
# BAD: forces all implementers to handle read+write+delete+list
class BigStorage(ABC):
    @abstractmethod
    def read(self): ...
    @abstractmethod
    def write(self): ...
    @abstractmethod
    def delete(self): ...
    @abstractmethod
    def list_all(self): ...  # Not every backend needs this

# GOOD: separate protocols — clients depend only on what they need
from typing import Protocol

class Readable(Protocol):
    def read(self, key: str) -> str | None: ...

class Writable(Protocol):
    def write(self, key: str, value: str) -> None: ...

class Deletable(Protocol):
    def delete(self, key: str) -> None: ...
```

### D — Dependency Inversion Principle

High-level modules depend on **abstractions** (protocols/ABCs), not concrete implementations.
Pass dependencies in via `__init__` (constructor injection).

```python
# BAD: high-level class creates its own dependency
class ApiClient:
    def __init__(self) -> None:
        self._cache = RedisCache()  # Tightly coupled to Redis

# GOOD: depend on the abstraction; inject the concrete at call site
class ApiClient:
    def __init__(self, cache: CacheBackend) -> None:  # CacheBackend is a Protocol
        self._cache = cache

# User code (or tests):
client = ApiClient(cache=RedisCache())   # Real
client = ApiClient(cache=MemoryCache())  # Test
```

### Composition Over Inheritance

Prefer delegating to contained objects over deep inheritance chains.

```python
# Prefer this (composition):
class YourClient:
    def __init__(self, backend: StorageBackend, http: HttpTransport) -> None:
        self._backend = backend
        self._http = http

# Avoid this (deep inheritance):
class YourClient(BaseClient, CacheMixin, RetryMixin, LoggingMixin):
    ...  # Fragile, hard to test, MRO confusion
```

### Exception Hierarchy

Always define a base exception for your package; layer specifics below it.

```python
# your_package/exceptions.py
class YourPackageError(Exception):
    """Base exception — catch this to catch any package error."""

class ConfigurationError(YourPackageError):
    """Raised when package is misconfigured."""

class AuthenticationError(YourPackageError):
    """Raised on auth failure."""

class RateLimitError(YourPackageError):
    """Raised when rate limit is exceeded."""
    def __init__(self, retry_after: int) -> None:
        self.retry_after = retry_after
        super().__init__(f"Rate limited. Retry after {retry_after}s.")
```

---

## 2. Type Hints Best Practices

Follow PEP 484 (type hints), PEP 526 (variable annotations), PEP 544 (protocols),
PEP 561 (typed packages). These are not optional for a quality library.

```python
from __future__ import annotations  # Enables PEP 563 deferred evaluation — always add this

# For ARGUMENTS: prefer abstract / protocol types (more flexible for callers)
from collections.abc import Iterable, Mapping, Sequence, Callable

def process_items(items: Iterable[str]) -> list[int]: ...  # ✓ Accepts any iterable
def process_items(items: list[str]) -> list[int]: ...      # ✗ Too restrictive

# For RETURN TYPES: prefer concrete types (callers know exactly what they get)
def get_names() -> list[str]: ...       # ✓ Concrete
def get_names() -> Iterable[str]: ...   # ✗ Caller can't index it

# Use X | Y syntax (Python 3.10+), not Union[X, Y] or Optional[X]
def find(key: str) -> str | None: ...     # ✓ Modern
def find(key: str) -> Optional[str]: ...  # ✗ Old style

# None should be LAST in unions
def get(key: str) -> str | int | None: ...  # ✓

# Avoid Any — it disables type checking entirely
def process(data: Any) -> Any: ...                              # ✗ Loses all safety
def process(data: dict[str, object]) -> dict[str, object]: ...  # ✓

# Use object instead of Any when a param accepts literally anything
def log(value: object) -> None: ...  # ✓

# Avoid Union return types — they require isinstance() checks at every call site
def get_value() -> str | int: ...  # ✗ Forces callers to branch
```

### Protocols vs ABCs

```python
from typing import Protocol, runtime_checkable
from abc import ABC, abstractmethod

# Use Protocol when you don't control the implementer classes (duck typing)
@runtime_checkable  # Makes isinstance() checks work at runtime
class Serializable(Protocol):
    def to_dict(self) -> dict[str, object]: ...

# Use ABC when you control the class hierarchy and want default implementations
class BaseBackend(ABC):
    @abstractmethod
    async def get(self, key: str) -> str | None: ...

    async def get_or_default(self, key: str, default: str) -> str:
        result = await self.get(key)  # get() is async, so this helper must be async and await it
        return result if result is not None else default
```

### TypeVar and Generics

```python
from typing import TypeVar, Generic

T = TypeVar("T")
T_co = TypeVar("T_co", covariant=True)  # For read-only containers

class Repository(Generic[T]):
    """Type-safe generic repository."""
    def __init__(self, model_class: type[T]) -> None:
        self._model_class = model_class  # Keep the model type around for runtime use
        self._store: list[T] = []

    def add(self, item: T) -> None:
        self._store.append(item)

    def get_all(self) -> list[T]:
        return list(self._store)
```

### dataclasses for data containers

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)  # frozen=True → immutable (hashable too, but only if every field is — dicts aren't)
class Config:
    api_key: str
    timeout: int = 30
    headers: dict[str, str] = field(default_factory=dict)

    def __post_init__(self) -> None:
        if not self.api_key:
            raise ValueError("api_key must not be empty")
```

### TYPE_CHECKING guard (avoid circular imports)

```python
from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from your_package.models import HeavyModel  # Only imported during type checking

def process(model: HeavyModel) -> None:  # No quotes needed thanks to deferred evaluation
    ...
```

### Overload for multiple signatures

```python
from typing import overload

@overload
def get(key: str, default: None = ...) -> str | None: ...
@overload
def get(key: str, default: str) -> str: ...
def get(key: str, default: str | None = None) -> str | None:
    ...  # Single implementation handles both
```

---

## 3. Core Class Design

The main class of your library should have a clear, minimal `__init__`, sensible defaults for all
parameters, and raise `TypeError` / `ValueError` early for invalid inputs. Validating up front
surfaces mistakes at construction time instead of as confusing errors at some later call.

```python
# your_package/core.py
from __future__ import annotations

from your_package.exceptions import YourPackageError


class YourClient:
    """
    Main entry point for <your purpose>.

    Args:
        api_key: Required authentication credential.
        timeout: Request timeout in seconds. Defaults to 30.
        retries: Number of retry attempts. Defaults to 3.

    Raises:
        ValueError: If api_key is empty or timeout is non-positive.

    Example:
        >>> from your_package import YourClient
        >>> client = YourClient(api_key="sk-...")
        >>> result = client.process(data)
    """

    def __init__(
        self,
        api_key: str,
        timeout: int = 30,
        retries: int = 3,
    ) -> None:
        if not api_key:
            raise ValueError("api_key must not be empty")
        if timeout <= 0:
            raise ValueError("timeout must be positive")
        self._api_key = api_key
        self.timeout = timeout
        self.retries = retries

    def process(self, data: dict) -> dict:
        """
        Process data and return results.

        Args:
            data: Input dictionary to process.

        Returns:
            Processed result as a dictionary.

        Raises:
            YourPackageError: If processing fails.
        """
        ...
```

### Design rules

- Accept all config in `__init__`, not scattered across method calls.
- Validate at construction time — fail fast with a clear message.
- Keep `__init__` signatures stable. Adding new **keyword-only** args with defaults is backwards
  compatible. Removing or reordering positional args is a breaking change.
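
The compatibility rule above can be sketched concretely — adding a defaulted keyword-only parameter leaves every existing call site working (the class and `verify_ssl` option are toy stand-ins, not part of any real API):

```python
# v1.0 signature was:
#   def __init__(self, api_key: str, timeout: int = 30) -> None
# v1.1 adds a keyword-only option — old positional calls are unaffected:
class YourClient:
    def __init__(self, api_key: str, timeout: int = 30, *, verify_ssl: bool = True) -> None:
        self.api_key = api_key
        self.timeout = timeout
        self.verify_ssl = verify_ssl

old_style = YourClient("sk-test", 10)                 # v1.0-era call still valid
new_style = YourClient("sk-test", verify_ssl=False)   # new option is opt-in by name
```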

---

## 4. Factory / Builder Pattern

Use a factory function when users need to create pre-configured instances. This avoids cluttering
`__init__` with a dozen keyword arguments and keeps the common case simple.

```python
# your_package/factory.py
from __future__ import annotations

from your_package.core import YourClient
from your_package.backends.memory import MemoryBackend


def create_client(
    api_key: str,
    *,
    timeout: int = 30,
    retries: int = 3,
    backend: str = "memory",
    backend_url: str | None = None,
) -> YourClient:
    """
    Factory that returns a configured YourClient.

    Args:
        api_key: Required API key.
        timeout: Request timeout in seconds.
        retries: Number of retry attempts.
        backend: Storage backend type. One of 'memory' or 'redis'.
        backend_url: Connection URL for the chosen backend.

    Example:
        >>> client = create_client(api_key="sk-...", backend="redis", backend_url="redis://localhost")
    """
    if backend == "redis":
        from your_package.backends.redis import RedisBackend
        _backend = RedisBackend(url=backend_url or "redis://localhost:6379")
    else:
        _backend = MemoryBackend()

    return YourClient(api_key=api_key, timeout=timeout, retries=retries, backend=_backend)
```

**Why a factory, not a class method?** Both work. A standalone factory function is easier to
mock in tests and avoids coupling the factory logic into the class itself.

---

## 5. Configuration Pattern

Use a dataclass (or Pydantic `BaseModel`) to hold configuration. This gives you free validation,
helpful error messages, and a single place to document every option.

```python
# your_package/config.py
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class YourSettings:
    """
    Configuration for YourClient.

    Attributes:
        timeout: HTTP timeout in seconds.
        retries: Number of retry attempts on transient errors.
        base_url: Base API URL.
        extra_headers: Additional headers sent with every request.
    """
    timeout: int = 30
    retries: int = 3
    base_url: str = "https://api.example.com"
    extra_headers: dict[str, str] = field(default_factory=dict)

    def __post_init__(self) -> None:
        if self.timeout <= 0:
            raise ValueError("timeout must be positive")
        if self.retries < 0:
            raise ValueError("retries must be non-negative")
```

If you need environment variable loading, use `pydantic-settings` as an **optional** dependency —
declare it in `[project.optional-dependencies]`, not as a required dep.
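
If you would rather not add even that optional dependency, a stdlib-only loader over the same dataclass is easy to sketch (the `YOURPKG_` prefix and variable names are assumptions for illustration):

```python
import os
from dataclasses import dataclass

@dataclass
class YourSettings:
    timeout: int = 30
    retries: int = 3
    base_url: str = "https://api.example.com"

def settings_from_env(prefix: str = "YOURPKG_") -> YourSettings:
    """Build settings from environment variables, falling back to the defaults."""
    return YourSettings(
        timeout=int(os.environ.get(prefix + "TIMEOUT", "30")),
        retries=int(os.environ.get(prefix + "RETRIES", "3")),
        base_url=os.environ.get(prefix + "BASE_URL", "https://api.example.com"),
    )

os.environ["YOURPKG_TIMEOUT"] = "5"   # simulate a deployment override
settings = settings_from_env()
```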

---

## 6. `__init__.py` — Explicit Public API

A well-defined `__all__` is not just style — it tells users (and IDEs) exactly what's part of your
public API, and prevents accidental imports of internal helpers as part of your contract.

```python
# your_package/__init__.py
"""your-package: <one-line description>."""

from importlib.metadata import version, PackageNotFoundError

try:
    __version__ = version("your-package")
except PackageNotFoundError:
    __version__ = "0.0.0-dev"

from your_package.core import YourClient
from your_package.config import YourSettings
from your_package.exceptions import YourPackageError

__all__ = [
    "YourClient",
    "YourSettings",
    "YourPackageError",
    "__version__",
]
```

Rules:
- Only export what users are supposed to use. Internal helpers go in `_utils.py` or submodules.
- Keep imports at the top level of `__init__.py` shallow — avoid importing heavy optional deps
  (like `redis`) at module level. Import them lazily inside the class or function that needs them.
- `__version__` is always part of the public API — it enables `your_package.__version__` for
  debugging.
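
The lazy-import rule in practice: the heavy module is imported only when the feature is first used, so `import your_package` stays cheap and never fails for users who skip the feature. Here `json` stands in for a heavy optional dependency, and `export_report` is a hypothetical function:

```python
def export_report(data: dict) -> str:
    # Imported here, not at module top: users who never call export_report
    # never pay the import cost (or need the dependency installed at all).
    import json
    return json.dumps(data, sort_keys=True)

payload = export_report({"b": 2, "a": 1})
```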

---

## 7. Optional Backends (Plugin Pattern)

This pattern lets your package work out-of-the-box (no extra deps) with an in-memory backend,
while letting advanced users plug in Redis, a database, or any custom storage.

### 7.1 Abstract base class — defines the interface

```python
# your_package/backends/__init__.py
from abc import ABC, abstractmethod


class BaseBackend(ABC):
    """Abstract storage backend interface.

    Implement this to add a custom backend (database, cache, etc.).
    """

    @abstractmethod
    async def get(self, key: str) -> str | None:
        """Retrieve a value by key. Returns None if not found."""
        ...

    @abstractmethod
    async def set(self, key: str, value: str, ttl: int | None = None) -> None:
        """Store a value. Optional TTL in seconds."""
        ...

    @abstractmethod
    async def delete(self, key: str) -> None:
        """Delete a key."""
        ...
```

### 7.2 Memory backend — zero extra deps

```python
# your_package/backends/memory.py
from __future__ import annotations

import asyncio
import time
from your_package.backends import BaseBackend


class MemoryBackend(BaseBackend):
    """Async-safe in-memory backend (guarded by an asyncio.Lock). No extra dependencies."""

    def __init__(self) -> None:
        self._store: dict[str, tuple[str, float | None]] = {}
        self._lock = asyncio.Lock()

    async def get(self, key: str) -> str | None:
        async with self._lock:
            entry = self._store.get(key)
            if entry is None:
                return None
            value, expires_at = entry
            if expires_at is not None and time.time() > expires_at:
                del self._store[key]
                return None
            return value

    async def set(self, key: str, value: str, ttl: int | None = None) -> None:
        async with self._lock:
            expires_at = time.time() + ttl if ttl is not None else None
            self._store[key] = (value, expires_at)

    async def delete(self, key: str) -> None:
        async with self._lock:
            self._store.pop(key, None)
```

### 7.3 Redis backend — raises clear ImportError if not installed

The key design: the `redis` import lives at the top of this backend module (wrapped with a
helpful error), and the module itself is never imported from the package's `__init__.py`.
This way `import your_package` never fails even if `redis` isn't installed — only importing
the Redis backend does, and with a clear install hint.

```python
# your_package/backends/redis.py
from __future__ import annotations
from your_package.backends import BaseBackend

try:
    import redis.asyncio as aioredis
except ImportError as exc:
    raise ImportError(
        "Redis backend requires the redis extra:\n"
        "  pip install your-package[redis]"
    ) from exc


class RedisBackend(BaseBackend):
    """Redis-backed storage for distributed/multi-process deployments."""

    def __init__(self, url: str = "redis://localhost:6379") -> None:
        self._client = aioredis.from_url(url, decode_responses=True)

    async def get(self, key: str) -> str | None:
        return await self._client.get(key)

    async def set(self, key: str, value: str, ttl: int | None = None) -> None:
        await self._client.set(key, value, ex=ttl)

    async def delete(self, key: str) -> None:
        await self._client.delete(key)
```

### 7.4 How users choose a backend

```python
# Default: in-memory, no extra deps needed
from your_package import YourClient
client = YourClient(api_key="sk-...")

# Redis: pip install your-package[redis]
from your_package.backends.redis import RedisBackend
client = YourClient(api_key="sk-...", backend=RedisBackend(url="redis://localhost:6379"))
```

---

`skills/python-pypi-package-builder/references/pyproject-toml.md`
# pyproject.toml, Backends, Versioning, and Typed Package

## Table of Contents
1. [Complete pyproject.toml — setuptools + setuptools_scm](#1-complete-pyprojecttoml)
2. [hatchling (modern, zero-config)](#2-hatchling-modern-zero-config)
3. [flit (minimal, version from `__version__`)](#3-flit-minimal-version-from-__version__)
4. [poetry (integrated dep manager)](#4-poetry-integrated-dep-manager)
5. [Versioning Strategy — PEP 440, semver, dep specifiers](#5-versioning-strategy)
6. [setuptools_scm — dynamic version from git tags](#6-dynamic-versioning-with-setuptools_scm)
7. [setup.py shim for legacy editable installs](#7-setuppy-shim)
8. [PEP 561 typed package (py.typed)](#8-typed-package-pep-561)

---

## 1. Complete pyproject.toml

### setuptools + setuptools_scm (recommended for git-tag versioning)

```toml
[build-system]
requires = ["setuptools>=68", "wheel", "setuptools_scm"]
build-backend = "setuptools.build_meta"

[project]
name = "your-package"
dynamic = ["version"]  # Version comes from git tags via setuptools_scm
description = "<your description> — <key feature 1>, <key feature 2>"
readme = "README.md"
requires-python = ">=3.10"
license = "MIT"  # PEP 639 SPDX expression (string, not {text = "MIT"})
license-files = ["LICENSE"]
authors = [
    {name = "Your Name", email = "you@example.com"},
]
maintainers = [
    {name = "Your Name", email = "you@example.com"},
]
keywords = [
    "python",
    # Add 10-15 specific keywords that describe your library — they affect PyPI discoverability
]
classifiers = [
    "Development Status :: 3 - Alpha",  # Change to 5 at stable release
    "Intended Audience :: Developers",
    "License :: OSI Approved :: MIT License",
    "Operating System :: OS Independent",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
    "Programming Language :: Python :: 3.13",
    "Topic :: Software Development :: Libraries :: Python Modules",
    "Typing :: Typed",  # Add this when shipping py.typed
]
dependencies = [
    # List your runtime dependencies here. Keep them minimal.
    # Example: "httpx>=0.24", "pydantic>=2.0"
    # Leave empty if your library has no required runtime deps.
]

[project.optional-dependencies]
redis = [
    "redis>=4.2",  # Optional heavy backend
]
dev = [
    "pytest>=7.0",
    "pytest-asyncio>=0.21",
    "httpx>=0.24",
    "pytest-cov>=4.0",
    "ruff>=0.4",
    "black>=24.0",
    "isort>=5.13",
    "mypy>=1.0",
    "pre-commit>=3.0",
    "build",
    "twine",
]

[project.urls]
Homepage = "https://github.com/yourusername/your-package"
Documentation = "https://github.com/yourusername/your-package#readme"
Repository = "https://github.com/yourusername/your-package"
"Bug Tracker" = "https://github.com/yourusername/your-package/issues"
Changelog = "https://github.com/yourusername/your-package/blob/master/CHANGELOG.md"

# --- Setuptools configuration ---
[tool.setuptools.packages.find]
include = ["your_package*"]  # flat layout
# For src/ layout, use:
# where = ["src"]

[tool.setuptools.package-data]
your_package = ["py.typed"]  # Ship the py.typed marker in the wheel

# --- setuptools_scm: version from git tags ---
[tool.setuptools_scm]
version_scheme = "post-release"
local_scheme = "no-local-version"  # Prevents +local suffix breaking PyPI uploads

# --- Ruff (linting) ---
[tool.ruff]
target-version = "py310"
line-length = 100

[tool.ruff.lint]
select = ["E", "F", "W", "I", "N", "UP", "B", "SIM", "C4", "PTH", "RUF"]
ignore = ["E501"]  # Line length enforced by formatter

[tool.ruff.lint.per-file-ignores]
"tests/*" = ["S101", "ANN"]  # Allow assert and missing annotations in tests
"scripts/*" = ["T201"]  # Allow print in scripts

[tool.ruff.format]
quote-style = "double"

# --- Black (formatting) ---
[tool.black]
line-length = 100
target-version = ["py310", "py311", "py312", "py313"]

# --- isort (import sorting) ---
[tool.isort]
profile = "black"
line_length = 100

# --- mypy (static type checking) ---
[tool.mypy]
python_version = "3.10"
warn_return_any = true
warn_unused_configs = true
warn_unused_ignores = true
disallow_untyped_defs = true
disallow_any_generics = true
ignore_missing_imports = true
strict = false  # Set true for maximum strictness

[[tool.mypy.overrides]]
module = "tests.*"
disallow_untyped_defs = false  # Relaxed in tests

# --- pytest ---
[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]
pythonpath = ["."]  # For flat layout; remove for src/
python_files = "test_*.py"
python_classes = "Test*"
python_functions = "test_*"
addopts = "-v --tb=short --cov=your_package --cov-report=term-missing"

# --- Coverage ---
[tool.coverage.run]
source = ["your_package"]
omit = ["tests/*"]

[tool.coverage.report]
fail_under = 80
show_missing = true
exclude_lines = [
    "pragma: no cover",
    "def __repr__",
    "raise NotImplementedError",
    "if TYPE_CHECKING:",
    "@abstractmethod",
]
```

---

## 2. hatchling (Modern, Zero-Config)

Best for new pure-Python projects that don't need C extensions. No `setup.py` needed. Use `hatch-vcs` for git-tag versioning, or omit it for manual version bumps.

```toml
[build-system]
requires = ["hatchling", "hatch-vcs"]  # hatch-vcs for git-tag versioning
build-backend = "hatchling.build"

[project]
name = "your-package"
dynamic = ["version"]  # Remove and add version = "1.0.0" for manual versioning
description = "One-line description"
readme = "README.md"
requires-python = ">=3.10"
license = "MIT"
license-files = ["LICENSE"]
authors = [{name = "Your Name", email = "you@example.com"}]
keywords = ["python"]
classifiers = [
    "Development Status :: 3 - Alpha",
    "Intended Audience :: Developers",
    "License :: OSI Approved :: MIT License",
    "Operating System :: OS Independent",
    "Programming Language :: Python :: 3",
    "Typing :: Typed",
]
dependencies = []

[project.optional-dependencies]
dev = ["pytest>=8.0", "pytest-cov>=5.0", "ruff>=0.6", "mypy>=1.10"]

[project.urls]
Homepage = "https://github.com/yourusername/your-package"
Changelog = "https://github.com/yourusername/your-package/blob/master/CHANGELOG.md"

# --- Hatchling build config ---
[tool.hatch.build.targets.wheel]
packages = ["src/your_package"]  # src/ layout
# packages = ["your_package"]  # ← flat layout

[tool.hatch.version]
source = "vcs"  # git-tag versioning via hatch-vcs

[tool.hatch.version.raw-options]
local_scheme = "no-local-version"

# ruff, mypy, pytest, coverage sections — same as setuptools template above
```

---

## 3. flit (Minimal, Version from `__version__`)

Best for very simple, single-module packages. Zero config. Version is read directly from `your_package/__init__.py`. Always requires a **static string** for `__version__`.

```toml
[build-system]
requires = ["flit_core>=3.9"]
build-backend = "flit_core.buildapi"

[project]
name = "your-package"
dynamic = ["version", "description"]  # Read from __init__.py __version__ and docstring
readme = "README.md"
requires-python = ">=3.10"
license = "MIT"
authors = [{name = "Your Name", email = "you@example.com"}]
classifiers = [
    "License :: OSI Approved :: MIT License",
    "Programming Language :: Python :: 3",
    "Typing :: Typed",
]
dependencies = []

[project.urls]
Homepage = "https://github.com/yourusername/your-package"

# flit reads __version__ from your_package/__init__.py automatically.
# Ensure __init__.py has: __version__ = "1.0.0" (static string — flit does NOT support
# importlib.metadata for dynamic version discovery)
```

---

## 4. poetry (Integrated Dependency + Build Manager)

Best for teams that want a single tool to manage deps, build, and publish. Poetry v2+ supports the standard `[project]` table.

```toml
[build-system]
requires = ["poetry-core>=2.0"]
build-backend = "poetry.core.masonry.api"

[project]
name = "your-package"
version = "1.0.0"
description = "One-line description"
readme = "README.md"
requires-python = ">=3.10"
license = "MIT"
authors = [{name = "Your Name", email = "you@example.com"}]
classifiers = [
    "Programming Language :: Python :: 3",
    "Typing :: Typed",
]
dependencies = []  # poetry v2+ uses standard [project] table

[project.optional-dependencies]
dev = ["pytest>=8.0", "ruff>=0.6", "mypy>=1.10"]

# Optional: use [tool.poetry] only for poetry-specific features
[tool.poetry.group.dev.dependencies]
# Poetry-specific group syntax (alternative to [project.optional-dependencies])
pytest = ">=8.0"
```

---

## 5. Versioning Strategy

### PEP 440 — The Standard

```
Canonical form: N[.N]+[{a|b|rc}N][.postN][.devN]

Examples:
1.0.0         Stable release
1.0.0a1       Alpha (pre-release)
1.0.0b2       Beta
1.0.0rc1      Release candidate
1.0.0.post1   Post-release (e.g., packaging fix only — no code change)
1.0.0.dev1    Development snapshot (NOT for PyPI)
```
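
The ordering these forms imply can be verified with the `packaging` library — an assumption here (`pip install packaging`), not something the templates above install:

```python
from packaging.version import Version

# Dev snapshots sort first, then pre-releases in a < b < rc order,
# then the final release, then post-releases.
assert Version("1.0.0.dev1") < Version("1.0.0a1")
assert Version("1.0.0a1") < Version("1.0.0b2") < Version("1.0.0rc1") < Version("1.0.0")
assert Version("1.0.0") < Version("1.0.0.post1")
```

This is the same ordering pip uses when resolving which release of your package to install.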

### Semantic Versioning (SemVer) — use this for every library

```
MAJOR.MINOR.PATCH

MAJOR: Breaking API change (remove/rename public function/class/arg)
MINOR: New feature, fully backward-compatible
PATCH: Bug fix, no API change
```

| Change | What bumps | Example |
|---|---|---|
| Remove / rename a public function | MAJOR | `1.2.3 → 2.0.0` |
| Add new public function | MINOR | `1.2.3 → 1.3.0` |
| Bug fix, no API change | PATCH | `1.2.3 → 1.2.4` |
| New pre-release | suffix | `2.0.0a1`, `2.0.0rc1` |

### Version in code — read from package metadata

```python
# your_package/__init__.py
from importlib.metadata import version, PackageNotFoundError

try:
    __version__ = version("your-package")
except PackageNotFoundError:
    __version__ = "0.0.0-dev"  # Fallback for uninstalled dev checkouts
```

Never hardcode `__version__ = "1.0.0"` when using setuptools_scm — it goes stale after the first git tag. Use `importlib.metadata` always.

### Version specifier best practices for dependencies

```toml
# In [project] dependencies — for a LIBRARY:
"httpx>=0.24"        # Minimum version — PREFERRED for libraries
"httpx>=0.24,<1.0"   # Upper bound only when a known breaking change exists

# ONLY for applications (never for libraries):
"httpx==0.27.0"      # Pin exactly — breaks dep resolution in libraries

# NEVER do this in a library:
# "httpx~=0.24.0"    # Compatible release operator — too tight
# "httpx==0.27.*"    # Wildcard pin — fragile
```
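
You can sanity-check what a specifier actually admits with the `packaging` library (assumed installed separately; it is not part of the templates above):

```python
from packaging.specifiers import SpecifierSet

lib_spec = SpecifierSet(">=0.24,<1.0")

# 0.27.0 satisfies the library-style range; 1.0.0 is excluded by the upper bound.
assert "0.27.0" in lib_spec
assert "1.0.0" not in lib_spec

# The "too tight" compatible-release operator: ~=0.24.0 means >=0.24.0, ==0.24.*,
# so even a harmless minor bump like 0.27.0 is rejected.
assert "0.27.0" not in SpecifierSet("~=0.24.0")
```

Running this against a proposed constraint before publishing catches over-tight pins early.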

---

## 6. Dynamic Versioning with `setuptools_scm`

`setuptools_scm` reads your git tags and sets the package version automatically — no more manually editing version strings before each release.

### How it works

```
git tag v1.0.0        → package version = 1.0.0
git tag v1.1.0        → package version = 1.1.0
(commits after tag)   → version = 1.1.0.post1+g<hash> (stripped for PyPI)
```

`local_scheme = "no-local-version"` strips the `+g<hash>` suffix so PyPI uploads never fail with a "local version label not allowed" error.

### Access version at runtime

Read `__version__` from package metadata exactly as shown in section 5: `importlib.metadata.version("your-package")` with a `PackageNotFoundError` fallback. Never hardcode `__version__ = "1.0.0"` alongside setuptools_scm — it will go stale after the first tag.

### Full release flow (this is it — nothing else needed)

```bash
git tag v1.2.0
git push origin v1.2.0  # Push only the release tag, not --tags
# GitHub Actions publish.yml triggers automatically
```

---

## 7. `setup.py` Shim

Some older tools and IDEs still expect a `setup.py`. Keep it as a three-line shim — all real configuration stays in `pyproject.toml`.

```python
# setup.py — thin shim only. All config lives in pyproject.toml.
from setuptools import setup

setup()
```

Never duplicate `name`, `version`, `dependencies`, or any other metadata from `pyproject.toml` into `setup.py`. If you copy anything there it will eventually drift and cause confusing conflicts.

---

## 8. Typed Package (PEP 561)

A properly declared typed package means mypy, pyright, and IDEs automatically pick up your type hints without any extra configuration from your users.

### Step 1: Create the marker file

```bash
# The file must exist; its content doesn't matter — its presence is the signal.
touch your_package/py.typed
```

### Step 2: Include it in the wheel

Already in the template above:

```toml
[tool.setuptools.package-data]
your_package = ["py.typed"]
```

### Step 3: Add the PyPI classifier

```toml
classifiers = [
    ...
    "Typing :: Typed",
]
```

### Step 4: Type-annotate all public functions

```python
# Good — fully typed
def process(
    self,
    data: dict[str, object],
    *,
    timeout: int = 30,
) -> dict[str, object]:
    ...


# Bad — mypy will flag this, and IDEs give no completions to users
def process(self, data, timeout=30):
    ...
```

### Step 5: Verify py.typed ships in the wheel

```bash
python -m build
unzip -l dist/your_package-*.whl | grep py.typed
# Must show: your_package/py.typed
```

If it's missing, check your `[tool.setuptools.package-data]` config.

---

# Release Governance — Branching, Protection, OIDC, and Access Control

## Table of Contents
1. [Branch Strategy](#1-branch-strategy)
2. [Branch Protection Rules](#2-branch-protection-rules)
3. [Tag-Based Release Model](#3-tag-based-release-model)
4. [Role-Based Access Control](#4-role-based-access-control)
5. [Secure Publishing with OIDC (Trusted Publishing)](#5-secure-publishing-with-oidc-trusted-publishing)
6. [Validate Tag Author in CI](#6-validate-tag-author-in-ci)
7. [Prevent Invalid Release Tags](#7-prevent-invalid-release-tags)
8. [Full `publish.yml` with Governance Gates](#8-full-publishyml-with-governance-gates)

---

## 1. Branch Strategy

Use a clear branch hierarchy to separate development work from releasable code.

```
main       ← stable; only receives PRs from develop or hotfix/*
develop    ← integration branch; all feature PRs merge here first
feature/*  ← new capabilities (e.g., feature/add-redis-backend)
fix/*      ← bug fixes (e.g., fix/memory-leak-on-close)
hotfix/*   ← urgent production fixes; PR directly to main + cherry-pick to develop
release/*  ← (optional) release preparation (e.g., release/v2.0.0)
```

### Rules

| Rule | Why |
|---|---|
| No direct push to `main` | Prevent accidental breakage of the stable branch |
| All changes via PR | Enforces review + CI before merge |
| At least one approval required | Second pair of eyes on all changes |
| CI must pass | Never merge broken code |
| Only tags trigger releases | No ad-hoc publish from branch pushes |

---

## 2. Branch Protection Rules

Configure these in **GitHub → Settings → Branches → Add rule** for `main` and `develop`.

### For `main`

```yaml
# Equivalent GitHub branch protection config (for documentation)
branch: main
rules:
  - require_pull_request_reviews:
      required_approving_review_count: 1
      dismiss_stale_reviews: true
  - require_status_checks_to_pass:
      contexts:
        - "Lint, Format & Type Check"
        - "Test (Python 3.11)"  # at minimum; add all matrix versions
      strict: true  # branch must be up-to-date before merge
  - restrict_pushes:
      allowed_actors: []  # nobody — only PR merges
  - require_linear_history: true  # prevents merge commits on main
```

### For `develop`

```yaml
branch: develop
rules:
  - require_pull_request_reviews:
      required_approving_review_count: 1
  - require_status_checks_to_pass:
      contexts: ["CI"]
      strict: false  # less strict for the integration branch
```

### Via GitHub CLI

```bash
# Protect main (requires gh CLI and admin rights)
gh api repos/{owner}/{repo}/branches/main/protection \
  --method PUT \
  --input - <<'EOF'
{
  "required_status_checks": {
    "strict": true,
    "contexts": ["Lint, Format & Type Check", "Test (Python 3.11)"]
  },
  "enforce_admins": false,
  "required_pull_request_reviews": {
    "required_approving_review_count": 1,
    "dismiss_stale_reviews": true
  },
  "restrictions": null
}
EOF
```

---

## 3. Tag-Based Release Model

**Only annotated tags on `main` trigger a release.** Branch pushes and PR merges never publish.

### Tag Naming Convention

```
vMAJOR.MINOR.PATCH      # Stable:            v1.2.3
vMAJOR.MINOR.PATCHaN    # Alpha:             v2.0.0a1
vMAJOR.MINOR.PATCHbN    # Beta:              v2.0.0b1
vMAJOR.MINOR.PATCHrcN   # Release Candidate: v2.0.0rc1
```

### Release Workflow

```bash
# 1. Merge develop → main via PR (reviewed, CI green)

# 2. Update CHANGELOG.md on main
#    Move [Unreleased] entries to [vX.Y.Z] - YYYY-MM-DD

# 3. Commit the changelog
git checkout main
git pull origin main
git add CHANGELOG.md
git commit -m "chore: release v1.2.3"

# 4. Create and push an annotated tag
git tag -a v1.2.3 -m "Release v1.2.3"
git push origin v1.2.3  # ← ONLY the tag; not --tags (avoids pushing all tags)

# 5. Confirm: GitHub Actions publish.yml triggers automatically
#    Monitor: Actions tab → publish workflow
#    Verify:  https://pypi.org/project/your-package/
```

### Why annotated tags?

Annotated tags (`git tag -a`) carry a tagger identity, date, and message — lightweight tags do not. `setuptools_scm` works with both, but annotated tags are safer for release governance because they record *who* created the tag.
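
The difference is visible in git's object store. A quick sketch in a throwaway repository (the identities and tag names are illustrative):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "Your Name"
git commit --allow-empty -q -m "init"

git tag v1.0.0-light                   # lightweight: just a ref pointing at the commit
git tag -a v1.0.0 -m "Release v1.0.0"  # annotated: a real tag object

git cat-file -t v1.0.0-light           # prints "commit" — no tag object exists
git cat-file -t v1.0.0                 # prints "tag" — tagger, date, message recorded
```

`git show v1.0.0` on the annotated tag displays the tagger line you can audit later; the lightweight tag has nothing to show.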

---

## 4. Role-Based Access Control

| Role | What they can do |
|---|---|
| **Maintainer** | Create release tags, approve PRs, manage branch protection |
| **Contributor** | Open PRs to `develop`; cannot push to `main` or create release tags |
| **CI (GitHub Actions)** | Publish to PyPI via OIDC; cannot push code or create tags |

### Implement via GitHub Teams

```bash
# Create a Maintainers team and restrict tag creation to that team
gh api repos/{owner}/{repo}/tags/protection \
  --method POST \
  --field pattern="v*"
# Then set allowed actors to the Maintainers team only
```

---

## 5. Secure Publishing with OIDC (Trusted Publishing)

**Never store a PyPI API token as a GitHub secret.** Use Trusted Publishing (OIDC) instead. The PyPI project authorises a specific GitHub repository + workflow + environment — no long-lived secret is exchanged.

### One-time PyPI Setup

1. Go to https://pypi.org/manage/project/your-package/settings/publishing/
2. Click **Add a new publisher**
3. Fill in:
   - **Owner:** your-github-username
   - **Repository:** your-repo-name
   - **Workflow name:** `publish.yml`
   - **Environment name:** `release` (must match the `environment:` key in the workflow)
4. Save. No token required.

### GitHub Environment Setup

1. Go to **GitHub → Settings → Environments → New environment** → name it `release`
2. Add a protection rule: **Required reviewers** (optional but recommended for extra safety)
3. Add a deployment branch rule: **Only tags matching `v*`**

### Minimal `publish.yml` using OIDC

```yaml
# .github/workflows/publish.yml
name: Publish to PyPI

on:
  push:
    tags:
      - "v[0-9]+.[0-9]+.[0-9]+*"  # Matches v1.0.0, v2.0.0a1, v1.2.3rc1

jobs:
  publish:
    name: Build and publish
    runs-on: ubuntu-latest
    environment: release  # Must match the PyPI Trusted Publisher environment name
    permissions:
      id-token: write  # Required for OIDC — grants a short-lived token to PyPI
      contents: read

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # REQUIRED for setuptools_scm

      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Install build
        run: pip install build

      - name: Build distributions
        run: python -m build

      - name: Validate distributions
        run: pip install twine && twine check dist/*

      - name: Publish to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        # No `password:` or `user:` needed — OIDC handles authentication
```

---

## 6. Validate Tag Author in CI

Restrict who can trigger a release by checking `GITHUB_ACTOR` against an allowlist. Add this as the **first step** in your publish job to fail fast.

```yaml
- name: Validate tag author
  run: |
    ALLOWED_USERS=("your-github-username" "co-maintainer-username")
    if [[ ! " ${ALLOWED_USERS[*]} " =~ " ${GITHUB_ACTOR} " ]]; then
      echo "::error::Release blocked: ${GITHUB_ACTOR} is not an authorised releaser."
      exit 1
    fi
    echo "Release authorised for ${GITHUB_ACTOR}."
```

### Notes

- `GITHUB_ACTOR` is the GitHub username of the person who pushed the tag.
- Store the allowlist in a separate file (e.g., `.github/MAINTAINERS`) for maintainability.
- For teams: replace the username check with a GitHub API call to verify team membership.
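
A sketch of that team-membership variant, using the REST endpoint `GET /orgs/{org}/teams/{team_slug}/memberships/{username}` via the `gh` CLI. The org and team names below are placeholders, and note the default `GITHUB_TOKEN` usually cannot read org team membership, so a fine-grained PAT stored as a secret may be required:

```yaml
- name: Validate tag author is a maintainer
  env:
    GH_TOKEN: ${{ secrets.TEAM_READ_TOKEN }}  # Token with org team read access
  run: |
    # The endpoint returns state "active" for members and 404 for non-members.
    state=$(gh api "orgs/your-org/teams/maintainers/memberships/${GITHUB_ACTOR}" \
      --jq .state 2>/dev/null || true)
    if [[ "${state}" != "active" ]]; then
      echo "::error::${GITHUB_ACTOR} is not an active member of your-org/maintainers."
      exit 1
    fi
```

This keeps the releaser list in GitHub team settings instead of hardcoded in the workflow.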

---

## 7. Prevent Invalid Release Tags

Reject workflow runs triggered by tags that do not follow your versioning convention. This stops accidental publishes from tags like `test`, `backup-old`, or `v1`.

```yaml
- name: Validate release tag format
  run: |
    # Accepts: v1.0.0  v1.0.0a1  v1.0.0b2  v1.0.0rc1  v1.0.0.post1
    if [[ ! "${GITHUB_REF}" =~ ^refs/tags/v[0-9]+\.[0-9]+\.[0-9]+(a|b|rc|\.post)[0-9]*$ ]] && \
       [[ ! "${GITHUB_REF}" =~ ^refs/tags/v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
      echo "::error::Tag '${GITHUB_REF}' does not match the required format v<MAJOR>.<MINOR>.<PATCH>[pre]."
      exit 1
    fi
    echo "Tag format valid: ${GITHUB_REF}"
```

### Regex explained

| Pattern | Matches |
|---|---|
| `v[0-9]+\.[0-9]+\.[0-9]+` | `v1.0.0`, `v12.3.4` |
| `(a\|b\|rc)[0-9]*` | `v1.0.0a1`, `v2.0.0rc2` |
| `\.post[0-9]*` | `v1.0.0.post1` |

---

## 8. Full `publish.yml` with Governance Gates

Complete workflow combining tag validation, author check, TestPyPI gate, and production publish.

```yaml
# .github/workflows/publish.yml
name: Publish to PyPI

on:
  push:
    tags:
      - "v[0-9]+.[0-9]+.[0-9]+*"

jobs:
  publish:
    name: Build, validate, and publish
    runs-on: ubuntu-latest
    environment: release
    permissions:
      id-token: write
      contents: read

    steps:
      - name: Validate release tag format
        run: |
          if [[ ! "${GITHUB_REF}" =~ ^refs/tags/v[0-9]+\.[0-9]+\.[0-9]+(a[0-9]*|b[0-9]*|rc[0-9]*|\.post[0-9]*)?$ ]]; then
            echo "::error::Invalid tag format: ${GITHUB_REF}"
            exit 1
          fi

      - name: Validate tag author
        run: |
          ALLOWED_USERS=("your-github-username")
          if [[ ! " ${ALLOWED_USERS[*]} " =~ " ${GITHUB_ACTOR} " ]]; then
            echo "::error::${GITHUB_ACTOR} is not authorised to release."
            exit 1
          fi

      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Install build tooling
        run: pip install build twine

      - name: Build
        run: python -m build

      - name: Validate distributions
        run: twine check dist/*

      - name: Publish to TestPyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          repository-url: https://test.pypi.org/legacy/
        continue-on-error: true  # Non-fatal; remove if you always want this to pass

      - name: Publish to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
```

### Security checklist

- [ ] PyPI Trusted Publishing configured (no API token stored in GitHub)
- [ ] GitHub `release` environment has branch protection: tags matching `v*` only
- [ ] Tag format validation step is the first step in the job
- [ ] Allowed-users list is maintained and reviewed regularly
- [ ] No secrets printed in logs (check all `echo` and `run` steps)
- [ ] `permissions:` is scoped to `id-token: write` only — no `write-all`

---

# Testing and Code Quality

## Table of Contents
1. [conftest.py](#1-conftestpy)
2. [Unit tests](#2-unit-tests)
3. [Backend unit tests](#3-backend-unit-tests)
4. [Running tests](#4-running-tests)
5. [Code quality tools](#5-code-quality-tools)
6. [Pre-commit hooks](#6-pre-commit-hooks)

---

## 1. `conftest.py`

Use `conftest.py` to define shared fixtures. Keep fixtures focused — one fixture per concern. For async tests, use `pytest-asyncio` with `asyncio_mode = "auto"` in `pyproject.toml`.

```python
# tests/conftest.py
import pytest

from your_package.core import YourClient
from your_package.backends.memory import MemoryBackend


@pytest.fixture
def memory_backend() -> MemoryBackend:
    return MemoryBackend()


@pytest.fixture
def client(memory_backend: MemoryBackend) -> YourClient:
    return YourClient(
        api_key="test-key",
        backend=memory_backend,
    )
```

---

## 2. Unit Tests

Test both the happy path and the edge cases (e.g. invalid inputs, error conditions).

```python
# tests/test_core.py
import pytest

from your_package import YourClient
from your_package.exceptions import YourPackageError


def test_client_creates_with_valid_key():
    client = YourClient(api_key="sk-test")
    assert client is not None


def test_client_raises_on_empty_key():
    with pytest.raises(ValueError, match="api_key"):
        YourClient(api_key="")


def test_client_raises_on_invalid_timeout():
    with pytest.raises(ValueError, match="timeout"):
        YourClient(api_key="sk-test", timeout=-1)


@pytest.mark.asyncio
async def test_process_returns_expected_result(client: YourClient):
    result = await client.process({"input": "value"})
    assert "output" in result


@pytest.mark.asyncio
async def test_process_raises_on_invalid_input(client: YourClient):
    with pytest.raises(YourPackageError):
        await client.process({})  # empty input should fail
```

---

## 3. Backend Unit Tests

Test each backend independently, in isolation from the rest of the library. This makes failures easier to diagnose and ensures your abstract interface is actually implemented correctly.

```python
# tests/test_backends.py
import asyncio

import pytest

from your_package.backends.memory import MemoryBackend


@pytest.mark.asyncio
async def test_set_and_get():
    backend = MemoryBackend()
    await backend.set("key1", "value1")
    result = await backend.get("key1")
    assert result == "value1"


@pytest.mark.asyncio
async def test_get_missing_key_returns_none():
    backend = MemoryBackend()
    result = await backend.get("nonexistent")
    assert result is None


@pytest.mark.asyncio
async def test_delete_removes_key():
    backend = MemoryBackend()
    await backend.set("key1", "value1")
    await backend.delete("key1")
    result = await backend.get("key1")
    assert result is None


@pytest.mark.asyncio
async def test_ttl_expires_entry():
    backend = MemoryBackend()
    await backend.set("key1", "value1", ttl=1)
    await asyncio.sleep(1.1)
    result = await backend.get("key1")
    assert result is None


@pytest.mark.asyncio
async def test_different_keys_are_independent():
    backend = MemoryBackend()
    await backend.set("key1", "a")
    await backend.set("key2", "b")
    assert await backend.get("key1") == "a"
    assert await backend.get("key2") == "b"
    await backend.delete("key1")
    assert await backend.get("key2") == "b"
```
|
||||
|
||||
---

## 4. Running Tests

```bash
pip install -e ".[dev]"
pytest                            # All tests
pytest --cov --cov-report=html    # Coverage report written to htmlcov/; open htmlcov/index.html
pytest -k "test_middleware"       # Filter by name
pytest -x                         # Stop on first failure
pytest -v                         # Verbose output
```

Coverage config in `pyproject.toml` enforces a minimum threshold (`fail_under = 80`). CI will
fail if you drop below it, which catches coverage regressions automatically.
---

## 5. Code Quality Tools

### Ruff (linting — replaces flake8, pylint, many others)

```bash
pip install ruff
ruff check .          # Check for issues
ruff check . --fix    # Auto-fix safe issues
```

Ruff is extremely fast and replaces most of the Python linting ecosystem. Configure it in
`pyproject.toml` — see `references/pyproject-toml.md` for the full config.

### Black (formatting)

```bash
pip install black
black .           # Format all files
black . --check   # CI mode — reports issues without modifying files
```

### isort (import sorting)

```bash
pip install isort
isort .                # Sort imports
isort . --check-only   # CI mode
```

Always set `profile = "black"` in `[tool.isort]` — otherwise black and isort conflict.

### mypy (static type checking)

```bash
pip install mypy
mypy your_package/   # Type-check your package source only
```

Common fixes:

- `ignore_missing_imports = true` — ignore untyped third-party deps
- `from __future__ import annotations` — PEP 563 postponed evaluation, so modern annotation syntax parses on older Python versions
- `pip install types-redis` — type stubs for the redis library
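The effect of `from __future__ import annotations` can be seen directly: annotations are stored as strings and never evaluated at runtime, so syntax from newer Pythons parses on older ones. A small self-contained sketch:

```python
from __future__ import annotations  # PEP 563: annotations become lazy strings


def find(items: list[str], needle: str) -> str | None:
    """Without the future import, `list[str]` fails at definition time on
    Python 3.8 and `str | None` fails on anything below 3.10; with it,
    neither annotation is ever evaluated."""
    for item in items:
        if needle in item:
            return item
    return None


# Annotations are kept as plain strings, not evaluated objects:
assert find.__annotations__["return"] == "str | None"
assert find(["alpha", "beta"], "et") == "beta"
```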
### Run all at once

```bash
ruff check . && black . --check && isort . --check-only && mypy your_package/
```
---

## 6. Pre-commit Hooks

Pre-commit runs all quality tools automatically before each commit, so issues never reach CI.
Install once per clone with `pre-commit install`.

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.4
    hooks:
      - id: ruff
        args: [--fix]
      - id: ruff-format

  - repo: https://github.com/psf/black
    rev: 24.4.2
    hooks:
      - id: black

  - repo: https://github.com/pycqa/isort
    rev: 5.13.2
    hooks:
      - id: isort

  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.10.0
    hooks:
      - id: mypy
        additional_dependencies: [types-redis]  # Add stubs for typed dependencies

  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-toml
      - id: check-merge-conflict
      - id: debug-statements
      - id: no-commit-to-branch
        args: [--branch, master, --branch, main]
```

```bash
pip install pre-commit
pre-commit install           # Install once per clone
pre-commit run --all-files   # Run all hooks manually (useful before the first install)
```

The `no-commit-to-branch` hook prevents accidentally committing directly to `main`/`master`,
which would bypass CI checks. Always work on a feature branch.
skills/python-pypi-package-builder/references/tooling-ruff.md
# Tooling — Ruff-Only Setup and Code Quality

## Table of Contents
1. [Use Only Ruff (Replaces black, isort, flake8)](#1-use-only-ruff-replaces-black-isort-flake8)
2. [Ruff Configuration in pyproject.toml](#2-ruff-configuration-in-pyprojecttoml)
3. [mypy Configuration](#3-mypy-configuration)
4. [pre-commit Configuration](#4-pre-commit-configuration)
5. [pytest and Coverage Configuration](#5-pytest-and-coverage-configuration)
6. [Dev Dependencies in pyproject.toml](#6-dev-dependencies-in-pyprojecttoml)
7. [CI Lint Job — Ruff Only](#7-ci-lint-job--ruff-only)
8. [Migration Guide — Removing black and isort](#8-migration-guide--removing-black-and-isort)

---

## 1. Use Only Ruff (Replaces black, isort, flake8)

**Decision:** Use `ruff` as the single linting and formatting tool. Remove `black` and `isort`.

| Old (avoid) | New (use) | What it does |
|---|---|---|
| `black` | `ruff format` | Code formatting |
| `isort` | `ruff check --select I` | Import sorting |
| `flake8` | `ruff check` | Style and error linting |
| `pyupgrade` | `ruff check --select UP` | Upgrade syntax to modern Python |
| `bandit` | `ruff check --select S` | Security linting |
| All of the above | `ruff` | One tool, one config section |

**Why ruff?**
- 10–100× faster than the tools it replaces (written in Rust).
- Single config section in `pyproject.toml` — no `.flake8`, `.isort.cfg`, `pyproject.toml[tool.black]` sprawl.
- Actively maintained by Astral; follows the same rules as the tools it replaces.
- `ruff format` is black-compatible — existing black-formatted code passes without changes.

---
## 2. Ruff Configuration in pyproject.toml

```toml
[tool.ruff]
target-version = "py310"   # Minimum supported Python version
line-length = 88           # black-compatible default
src = ["src", "tests"]

[tool.ruff.lint]
select = [
    "E",    # pycodestyle errors
    "W",    # pycodestyle warnings
    "F",    # pyflakes
    "I",    # isort
    "B",    # flake8-bugbear (opinionated but very useful)
    "C4",   # flake8-comprehensions
    "UP",   # pyupgrade (modernise syntax)
    "SIM",  # flake8-simplify
    "TCH",  # flake8-type-checking (move imports to TYPE_CHECKING block)
    "ANN",  # flake8-annotations (enforce type hints — remove if too strict)
    "S",    # flake8-bandit (security)
    "N",    # pep8-naming
]
ignore = [
    "ANN101",  # Missing type annotation for `self`
    "ANN102",  # Missing type annotation for `cls`
    "S101",    # Use of `assert` — necessary in tests
    "S603",    # subprocess without shell=True — often intentional
    "B008",    # Do not perform function calls in default arguments (false positives in FastAPI/Typer)
]

[tool.ruff.lint.isort]
known-first-party = ["your_package"]

[tool.ruff.lint.per-file-ignores]
"tests/**" = ["S101", "ANN", "D"]  # Allow assert and skip annotations/docstrings in tests

[tool.ruff.format]
quote-style = "double"  # black-compatible
indent-style = "space"
skip-magic-trailing-comma = false
line-ending = "auto"
```

### Useful ruff commands

```bash
# Check for lint issues (no changes)
ruff check .

# Auto-fix fixable issues
ruff check --fix .

# Format code (replaces black)
ruff format .

# Check formatting without changing files (CI mode)
ruff format --check .

# Run both lint and format check in one command (for CI)
ruff check . && ruff format --check .
```

---
## 3. mypy Configuration

```toml
[tool.mypy]
python_version = "3.10"
strict = true
warn_return_any = true
warn_unused_ignores = true
warn_redundant_casts = true
disallow_untyped_defs = true
disallow_incomplete_defs = true
check_untyped_defs = true
no_implicit_optional = true
show_error_codes = true

# Ignore missing stubs for third-party packages that don't ship types
[[tool.mypy.overrides]]
module = ["redis.*", "pydantic_settings.*"]
ignore_missing_imports = true
```

### Running mypy — handle both src and flat layouts

```bash
# src layout:
mypy src/your_package/

# flat layout:
mypy your_package/
```

In CI, detect layout dynamically:

```yaml
- name: Run mypy
  run: |
    if [ -d "src" ]; then
      mypy src/
    else
      mypy your_package/
    fi
```

---
## 4. pre-commit Configuration

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.4  # Pin to a specific release; update periodically with `pre-commit autoupdate`
    hooks:
      - id: ruff
        args: [--fix]    # Auto-fix what can be fixed
      - id: ruff-format  # Format (replaces black hook)

  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.10.0
    hooks:
      - id: mypy
        additional_dependencies:
          - types-requests
          - types-redis
          # Add stubs for any typed dependency used in your package

  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-toml
      - id: check-yaml
      - id: check-merge-conflict
      - id: check-added-large-files
        args: ["--maxkb=500"]
```

### ❌ Remove these hooks (replaced by ruff)

```yaml
# DELETE or never add:
- repo: https://github.com/psf/black        # replaced by ruff-format
- repo: https://github.com/PyCQA/isort      # replaced by ruff lint I rules
- repo: https://github.com/PyCQA/flake8     # replaced by ruff check
- repo: https://github.com/PyCQA/autoflake  # replaced by ruff check F401
```

### Setup

```bash
pip install pre-commit
pre-commit install           # Installs git hook — runs on every commit
pre-commit run --all-files   # Run manually on all files
pre-commit autoupdate        # Update all hooks to latest pinned versions
```

---
## 5. pytest and Coverage Configuration

```toml
[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = "-ra -q --strict-markers --cov=your_package --cov-report=term-missing"
asyncio_mode = "auto"  # Enables async tests without @pytest.mark.asyncio decorator

[tool.coverage.run]
source = ["your_package"]
branch = true
omit = ["**/__main__.py", "**/cli.py"]  # omit entry points from coverage

[tool.coverage.report]
show_missing = true
skip_covered = false
fail_under = 85  # Fail CI if coverage drops below 85%
exclude_lines = [
    "pragma: no cover",
    "if TYPE_CHECKING:",
    "raise NotImplementedError",
    "@abstractmethod",
]
```

### asyncio_mode = "auto" — remove @pytest.mark.asyncio

With `asyncio_mode = "auto"` set in `pyproject.toml`, **do not** add `@pytest.mark.asyncio`
to test functions. The decorator is redundant and will raise a warning in modern pytest-asyncio.

```python
# WRONG — the decorator is deprecated when asyncio_mode = "auto":
@pytest.mark.asyncio
async def test_async_operation():
    result = await my_async_func()
    assert result == expected


# CORRECT — just use async def:
async def test_async_operation():
    result = await my_async_func()
    assert result == expected
```

---
## 6. Dev Dependencies in pyproject.toml

Declare all dev/test tools in a `[project.optional-dependencies]` extra named `dev`.

```toml
[project.optional-dependencies]
dev = [
    "pytest>=8",
    "pytest-asyncio>=0.23",
    "pytest-cov>=5",
    "ruff>=0.4",
    "mypy>=1.10",
    "pre-commit>=3.7",
    "httpx>=0.27",  # If testing HTTP transport
    "respx>=0.21",  # If mocking httpx in tests
]
redis = [
    "redis>=5",
]
docs = [
    "mkdocs-material>=9",
    "mkdocstrings[python]>=0.25",
]
```

Install dev dependencies:

```bash
pip install -e ".[dev]"
pip install -e ".[dev,redis]"   # Include optional extras
```

---
## 7. CI Lint Job — Ruff Only

Replace the separate `black`, `isort`, and `flake8` steps with a single `ruff` step.

```yaml
# .github/workflows/ci.yml — lint job
lint:
  name: Lint & Type Check
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4

    - uses: actions/setup-python@v5
      with:
        python-version: "3.11"

    - name: Install dev dependencies
      run: pip install -e ".[dev]"

    # Single step: ruff replaces black + isort + flake8
    - name: ruff lint
      run: ruff check .

    - name: ruff format check
      run: ruff format --check .

    - name: mypy
      run: |
        if [ -d "src" ]; then
          mypy src/
        else
          mypy $(basename $(ls -d */))/ 2>/dev/null || mypy .
        fi
```

---
## 8. Migration Guide — Removing black and isort

If you are converting an existing project that used `black` and `isort`:

```bash
# 1. Remove black and isort from dev dependencies
pip uninstall black isort

# 2. Remove black and isort config sections from pyproject.toml
#    [tool.black]  ← delete this section
#    [tool.isort]  ← delete this section

# 3. Add ruff to dev dependencies (see Section 2 for config)

# 4. Run ruff format to confirm existing code is already compatible
ruff format --check .
# ruff format is black-compatible; output should be identical

# 5. Update .pre-commit-config.yaml (see Section 4)
#    Remove black and isort hooks; add ruff and ruff-format hooks

# 6. Update CI (see Section 7)
#    Remove black, isort, flake8 steps; add ruff check + ruff format --check

# 7. Reinstall pre-commit hooks
pre-commit uninstall
pre-commit install
pre-commit run --all-files   # Verify clean
```
# Versioning Strategy — PEP 440, SemVer, and Decision Engine

## Table of Contents
1. [PEP 440 — The Standard](#1-pep-440--the-standard)
2. [Semantic Versioning (SemVer)](#2-semantic-versioning-semver)
3. [Pre-release Identifiers](#3-pre-release-identifiers)
4. [Versioning Decision Engine](#4-versioning-decision-engine)
5. [Dynamic Versioning — setuptools_scm (Recommended)](#5-dynamic-versioning--setuptools_scm-recommended)
6. [Hatchling with hatch-vcs Plugin](#6-hatchling-with-hatch-vcs-plugin)
7. [Static Versioning — flit](#7-static-versioning--flit)
8. [Static Versioning — hatchling manual](#8-static-versioning--hatchling-manual)
9. [DO NOT Hardcode Version (except flit)](#9-do-not-hardcode-version-except-flit)
10. [Dependency Version Specifiers](#10-dependency-version-specifiers)
11. [PyPA Release Commands](#11-pypa-release-commands)

---

## 1. PEP 440 — The Standard

All Python package versions must comply with [PEP 440](https://peps.python.org/pep-0440/).
Non-compliant versions (e.g., `1.0-beta`, `2023.1.1.dev`) will be rejected by PyPI.

```
Canonical form: N[.N]+[{a|b|rc}N][.postN][.devN]

1.0.0         Stable release
1.0.0a1       Alpha pre-release
1.0.0b2       Beta pre-release
1.0.0rc1      Release candidate
1.0.0.post1   Post-release (packaging fix; same codebase)
1.0.0.dev1    Development snapshot — DO NOT upload to PyPI
2.0.0         Major release (breaking changes)
```

### Epoch prefix (rare)

```
1!1.0.0   Epoch 1; used when you need to skip ahead of an old scheme
```

Use epochs only as a last resort to fix a broken version sequence.

---
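These ordering rules are implemented by the `packaging` library, which pip itself uses. A quick sanity-check sketch (assumes `packaging` is installed):

```python
from packaging.version import Version

# PEP 440 ordering: dev < alpha < beta < rc < final < post
assert Version("1.0.0.dev1") < Version("1.0.0a1")
assert Version("1.0.0a1") < Version("1.0.0b2") < Version("1.0.0rc1")
assert Version("1.0.0rc1") < Version("1.0.0") < Version("1.0.0.post1")

# Alternate spellings are normalized to one canonical form:
assert Version("1.0.0-rc1") == Version("1.0.0rc1")
```

Comparing `Version` objects rather than raw strings avoids the classic `"1.10" < "1.9"` string-comparison trap.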
## 2. Semantic Versioning (SemVer)

SemVer maps cleanly onto PEP 440. Always use `MAJOR.MINOR.PATCH`:

```
MAJOR   Increment when you make incompatible API changes (rename, remove, break)
MINOR   Increment when you add functionality backward-compatibly (new features)
PATCH   Increment when you make backward-compatible bug fixes

Examples:
1.0.0 → 1.0.1   Bug fix, no API change
1.0.0 → 1.1.0   New method added; existing API intact
1.0.0 → 2.0.0   Public method renamed or removed
```

### What counts as a breaking change?

| Change | Breaking? |
|---|---|
| Rename a public function | YES — `MAJOR` |
| Remove a parameter | YES — `MAJOR` |
| Add a required parameter | YES — `MAJOR` |
| Add an optional parameter with a default | NO — `MINOR` |
| Add a new function/class | NO — `MINOR` |
| Fix a bug | NO — `PATCH` |
| Update a dependency lower bound | NO (usually) — `PATCH` |
| Update a dependency upper bound (breaking) | YES — `MAJOR` |

---
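The bump rules in the table can be expressed as a small helper. This is only an illustrative sketch, not part of any packaging tool:

```python
def bump(version: str, change: str) -> str:
    """Return the next MAJOR.MINOR.PATCH version for a given change type."""
    major, minor, patch = (int(p) for p in version.split("."))
    if change == "major":   # incompatible API change
        return f"{major + 1}.0.0"
    if change == "minor":   # backward-compatible feature
        return f"{major}.{minor + 1}.0"
    if change == "patch":   # backward-compatible bug fix
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")


print(bump("1.0.0", "minor"))  # → 1.1.0
```

Note that `minor` and `major` bumps reset the lower components to zero, exactly as in the examples above.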
## 3. Pre-release Identifiers

Use pre-release versions to get user feedback before a stable release.
Pre-releases are **not** installed by default by pip (`pip install pkg` skips them).
Users must opt in: `pip install "pkg==2.0.0a1"` or `pip install --pre pkg`.

```
1.0.0a1    Alpha-1: very early; expect bugs; API may change
1.0.0b1    Beta-1: feature-complete; API stabilising; seek broader feedback
1.0.0rc1   Release candidate: code-frozen; final testing before stable
1.0.0      Stable: ready for production
```

### Increment rule

```
Start:          1.0.0a1
More alphas:    1.0.0a2, 1.0.0a3
Move to beta:   1.0.0b1   (reset counter)
Move to RC:     1.0.0rc1
Stable:         1.0.0
```

---
## 4. Versioning Decision Engine

Use this decision tree to pick the right versioning strategy before writing any code.

```
Is the project using git and tagging releases with version tags?
├── YES → setuptools + setuptools_scm  (DEFAULT — best for most projects)
│         Git tag v1.0.0 becomes the installed version automatically.
│         Zero manual version bumping.
│
└── NO — Is the project a simple, single-module library with infrequent releases?
    ├── YES → flit
    │         Set __version__ = "1.0.0" in __init__.py.
    │         Update manually before each release.
    │
    └── NO — Does the team want an integrated build + dep management tool?
        ├── YES → poetry
        │         Manage version in [tool.poetry] version field.
        │
        └── NO → hatchling  (modern, fast, pure-Python)
                 Use hatch-vcs plugin for dynamic versioning
                 or set version manually in [project].

Does the package have C/Cython/Fortran extensions?
└── YES (always) → setuptools  (only backend with native extension support)
```

### Summary Table

| Backend | Version source | Best for |
|---|---|---|
| `setuptools` + `setuptools_scm` | Git tags — fully automatic | DEFAULT for new projects |
| `hatchling` + `hatch-vcs` | Git tags — automatic via plugin | hatchling users |
| `flit` | `__version__` in `__init__.py` | Very simple, minimal config |
| `poetry` | `[tool.poetry] version` field | Integrated dep + build management |
| `hatchling` manual | `[project] version` field | One-off static versioning |

---
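The tree can also be encoded directly; the booleans below are the questions from the tree (an illustrative sketch, not a real tool):

```python
def pick_backend(uses_git_tags: bool, has_native_extensions: bool,
                 simple_single_module: bool, wants_integrated_tool: bool) -> str:
    """Encode the decision tree above; returns the suggested build backend."""
    if has_native_extensions:  # C/Cython/Fortran always wins: setuptools
        return "setuptools"
    if uses_git_tags:
        return "setuptools + setuptools_scm"
    if simple_single_module:
        return "flit"
    if wants_integrated_tool:
        return "poetry"
    return "hatchling"


print(pick_backend(True, False, False, False))  # → setuptools + setuptools_scm
```

Note the extension check comes first: native extensions override every other answer, matching the "YES (always)" branch of the tree.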
## 5. Dynamic Versioning — setuptools_scm (Recommended)

`setuptools_scm` reads the current git tag and computes the version at build time.
No separate `__version__` update step — just tag and push.

### `pyproject.toml` configuration

```toml
[build-system]
requires = ["setuptools>=70", "setuptools_scm>=8"]
build-backend = "setuptools.build_meta"

[project]
name = "your-package"
dynamic = ["version"]

[tool.setuptools_scm]
version_scheme = "post-release"
local_scheme = "no-local-version"  # Prevents +g<hash> from breaking PyPI
```

### `__init__.py` — correct version access

```python
# your_package/__init__.py
from importlib.metadata import version, PackageNotFoundError

try:
    __version__ = version("your-package")
except PackageNotFoundError:
    # Package is not installed (running from a source checkout without pip install -e .)
    __version__ = "0.0.0.dev0"

__all__ = ["__version__"]
```

### How the version is computed

```
git tag v1.0.0           → installed_version = "1.0.0"
3 commits after v1.0.0   → installed_version = "1.0.0.post3+g<hash>"  (dev only)
git tag v1.1.0           → installed_version = "1.1.0"
```

With `local_scheme = "no-local-version"`, the `+g<hash>` suffix is stripped for PyPI
uploads while still being visible locally.

### Critical CI requirement

```yaml
- uses: actions/checkout@v4
  with:
    fetch-depth: 0  # REQUIRED — without this, git has no tag history;
                    # setuptools_scm falls back to 0.0.0+d<date> silently
```

**Every** CI job that installs or builds the package must have `fetch-depth: 0`.

### Debugging version issues

```bash
# Check what version setuptools_scm would produce right now:
python -m setuptools_scm

# If you see 0.0.0+d... it means:
#   1. No tags reachable from HEAD, OR
#   2. fetch-depth: 0 was not set in CI
```

---
## 6. Hatchling with hatch-vcs Plugin

An alternative to setuptools_scm for teams already using hatchling.

```toml
[build-system]
requires = ["hatchling", "hatch-vcs"]
build-backend = "hatchling.build"

[project]
name = "your-package"
dynamic = ["version"]

[tool.hatch.version]
source = "vcs"

[tool.hatch.build.hooks.vcs]
version-file = "src/your_package/_version.py"
```

Access the version the same way as setuptools_scm:

```python
from importlib.metadata import version, PackageNotFoundError

try:
    __version__ = version("your-package")
except PackageNotFoundError:
    __version__ = "0.0.0.dev0"
```

---
## 7. Static Versioning — flit

Use flit only for simple, single-module packages where manual version bumping is acceptable.

### `pyproject.toml`

```toml
[build-system]
requires = ["flit_core>=3.9"]
build-backend = "flit_core.buildapi"

[project]
name = "your-package"
dynamic = ["version", "description"]
```

### `__init__.py`

```python
"""your-package — a focused, single-purpose utility."""
__version__ = "1.2.0"  # flit reads this; update manually before each release
```

**flit exception:** this is the ONLY case where hardcoding `__version__` is correct.
flit discovers the version by importing `__init__.py` and reading `__version__`.

### Release flow for flit

```bash
# 1. Bump __version__ in __init__.py
# 2. Update CHANGELOG.md
# 3. Commit
git add src/your_package/__init__.py CHANGELOG.md
git commit -m "chore: release v1.2.0"
# 4. Tag (flit can also publish directly)
git tag v1.2.0
git push origin v1.2.0
# 5. Build and publish
flit publish
# OR
python -m build && twine upload dist/*
```

---
## 8. Static Versioning — hatchling manual

```toml
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project]
name = "your-package"
version = "1.0.0"  # Manual; update before each release
```

Update `version` in `pyproject.toml` before every release. No `__version__` required
(access via `importlib.metadata.version()` as usual).

---
## 9. DO NOT Hardcode Version (except flit)

Hardcoding `__version__` in `__init__.py` when **not** using flit creates a dual source of
truth that diverges over time.

```python
# BAD — when using setuptools_scm, hatchling, or poetry:
__version__ = "1.0.0"  # gets stale; diverges from the installed package version

# GOOD — works for all backends except flit:
from importlib.metadata import version, PackageNotFoundError

try:
    __version__ = version("your-package")
except PackageNotFoundError:
    __version__ = "0.0.0.dev0"
```

---
## 10. Dependency Version Specifiers

Pick the right specifier style to avoid poisoning your users' environments.

```toml
# [project] dependencies — library best practices:

"httpx>=0.24"           # Minimum only — PREFERRED; lets users upgrade freely
"httpx>=0.24,<2.0"      # Upper bound only when a known breaking change exists in next major
"requests>=2.28,<3.0"   # Acceptable for well-known major-version breaks

# Application / CLI (pinning is fine):
"httpx==0.27.2"         # Lock exact version for reproducible deploys

# NEVER in a library:
# "httpx~=0.24.0"       # Too tight; blocks minor upgrades
# "httpx==0.27.*"       # Valid PEP 440, but pins the minor series; far too tight for a library
# "httpx"               # No constraint; fragile against future breakage
```

---
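These specifier semantics can be verified with the `packaging` library (the implementation pip uses). A sketch, assuming `packaging` is installed:

```python
from packaging.specifiers import SpecifierSet

spec = SpecifierSet(">=0.24,<2.0")

assert "1.5" in spec        # inside the range
assert "0.23" not in spec   # below the lower bound
assert "2.0" not in spec    # excluded by the upper bound

# Pre-releases never satisfy a specifier unless explicitly allowed:
assert "1.0a1" not in spec
assert spec.contains("1.0a1", prereleases=True)
```

Checking candidate versions like this before publishing catches constraints that accidentally exclude releases you meant to allow.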
## 11. PyPA Release Commands

The canonical sequence from code to user install.

```bash
# Step 1: Tag the release (triggers CI publish.yml automatically if configured)
git tag -a v1.2.3 -m "Release v1.2.3"
git push origin v1.2.3

# Step 2 (manual fallback only): Build locally
python -m build
# Produces:
#   dist/your_package-1.2.3.tar.gz             (sdist)
#   dist/your_package-1.2.3-py3-none-any.whl   (wheel)

# Step 3: Validate
twine check dist/*

# Step 4: Test on TestPyPI first (first release or major change)
twine upload --repository testpypi dist/*
pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ your-package==1.2.3

# Step 5: Publish to production PyPI
twine upload dist/*
# OR via GitHub Actions (recommended):
# push the tag → publish.yml runs → pypa/gh-action-pypi-publish handles upload via OIDC

# Step 6: Verify
pip install your-package==1.2.3
python -c "import your_package; print(your_package.__version__)"
```
```
|
||||
920
skills/python-pypi-package-builder/scripts/scaffold.py
Normal file
920
skills/python-pypi-package-builder/scripts/scaffold.py
Normal file
@@ -0,0 +1,920 @@
|
||||
#!/usr/bin/env python3
"""
scaffold.py — Generate a production-grade Python PyPI package structure.

Usage:
    python scaffold.py --name my-package
    python scaffold.py --name my-package --layout src
    python scaffold.py --name my-package --build hatchling

Options:
    --name      PyPI package name (lowercase, hyphens). Required.
    --layout    'flat' (default) or 'src'.
    --build     'setuptools' (default, uses setuptools_scm) or 'hatchling'.
    --author    Author name (default: Your Name).
    --email     Author email (default: you@example.com).
    --output    Output directory (default: current directory).
"""

import argparse
import os
import sys
import textwrap
from pathlib import Path


# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------

def pkg_name(pypi_name: str) -> str:
    """Convert 'my-pkg' → 'my_pkg'."""
    return pypi_name.replace("-", "_")


def write(path: Path, content: str) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(textwrap.dedent(content).lstrip(), encoding="utf-8")
    print(f"  created {path}")


def touch(path: Path) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    path.touch()
    print(f"  created {path}")


# ---------------------------------------------------------------------------
# File generators
# ---------------------------------------------------------------------------
def gen_pyproject_setuptools(name: str, mod: str, author: str, email: str, layout: str) -> str:
    packages_find = (
        'where = ["src"]' if layout == "src" else f'include = ["{mod}*"]'
    )
    pythonpath = "" if layout == "src" else '\npythonpath = ["."]'
    return f'''\
[build-system]
requires = ["setuptools>=68", "wheel", "setuptools_scm"]
build-backend = "setuptools.build_meta"

[project]
name = "{name}"
dynamic = ["version"]
description = "<your description>"
readme = "README.md"
requires-python = ">=3.10"
license = {{text = "MIT"}}
authors = [
    {{name = "{author}", email = "{email}"}},
]
keywords = [
    "python",
    # Add 10-15 specific keywords — they affect PyPI discoverability
]
classifiers = [
    "Development Status :: 3 - Alpha",
    "Intended Audience :: Developers",
    "License :: OSI Approved :: MIT License",
    "Operating System :: OS Independent",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
    "Programming Language :: Python :: 3.13",
    "Topic :: Software Development :: Libraries :: Python Modules",
    "Typing :: Typed",
]
dependencies = [
    # List your runtime dependencies here. Keep them minimal.
    # Example: "httpx>=0.24", "pydantic>=2.0"
]

[project.optional-dependencies]
redis = [
    "redis>=4.2",
]
dev = [
    "pytest>=7.0",
    "pytest-asyncio>=0.21",
    "httpx>=0.24",
    "pytest-cov>=4.0",
    "ruff>=0.4",
    "black>=24.0",
    "isort>=5.13",
    "mypy>=1.0",
    "pre-commit>=3.0",
    "build",
    "twine",
]

[project.urls]
Homepage = "https://github.com/yourusername/{name}"
Documentation = "https://github.com/yourusername/{name}#readme"
Repository = "https://github.com/yourusername/{name}"
"Bug Tracker" = "https://github.com/yourusername/{name}/issues"
Changelog = "https://github.com/yourusername/{name}/blob/master/CHANGELOG.md"

[tool.setuptools.packages.find]
{packages_find}

[tool.setuptools.package-data]
{mod} = ["py.typed"]

[tool.setuptools_scm]
version_scheme = "post-release"
local_scheme = "no-local-version"

[tool.ruff]
target-version = "py310"
line-length = 100

[tool.ruff.lint]
select = ["E", "F", "W", "I", "N", "UP", "B", "SIM", "C4", "PTH"]

[tool.ruff.lint.per-file-ignores]
"tests/*" = ["S101"]

[tool.black]
line-length = 100
target-version = ["py310", "py311", "py312", "py313"]

[tool.isort]
profile = "black"
line_length = 100

[tool.mypy]
python_version = "3.10"
warn_return_any = true
warn_unused_configs = true
disallow_untyped_defs = true
ignore_missing_imports = true
strict = false

[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]{pythonpath}
python_files = "test_*.py"
python_classes = "Test*"
python_functions = "test_*"
addopts = "-v --tb=short --cov={mod} --cov-report=term-missing"

[tool.coverage.run]
source = ["{mod}"]
omit = ["tests/*"]

[tool.coverage.report]
fail_under = 80
show_missing = true
'''
|
||||
def gen_pyproject_hatchling(name: str, mod: str, author: str, email: str) -> str:
    return f'''\
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project]
name = "{name}"
version = "0.1.0"
description = "<your description>"
readme = "README.md"
requires-python = ">=3.10"
license = {{text = "MIT"}}
authors = [
    {{name = "{author}", email = "{email}"}},
]
keywords = ["python"]
classifiers = [
    "Development Status :: 3 - Alpha",
    "Intended Audience :: Developers",
    "License :: OSI Approved :: MIT License",
    "Programming Language :: Python :: 3",
    "Typing :: Typed",
]
dependencies = [
    # List your runtime dependencies here.
]

[project.optional-dependencies]
dev = [
    "pytest>=7.0",
    "pytest-asyncio>=0.21",
    "httpx>=0.24",
    "pytest-cov>=4.0",
    "ruff>=0.4",
    "black>=24.0",
    "isort>=5.13",
    "mypy>=1.0",
    "pre-commit>=3.0",
    "build",
    "twine",
]

[project.urls]
Homepage = "https://github.com/yourusername/{name}"
Changelog = "https://github.com/yourusername/{name}/blob/master/CHANGELOG.md"

[tool.hatch.build.targets.wheel]
packages = ["{mod}"]

[tool.hatch.build.targets.wheel.sources]
"{mod}" = "{mod}"

[tool.ruff]
target-version = "py310"
line-length = 100

[tool.black]
line-length = 100

[tool.isort]
profile = "black"

[tool.mypy]
python_version = "3.10"
disallow_untyped_defs = true
ignore_missing_imports = true

[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]
addopts = "-v --tb=short --cov={mod} --cov-report=term-missing"

[tool.coverage.report]
fail_under = 80
show_missing = true
'''


def gen_init(name: str, mod: str) -> str:
    return f'''\
"""{name}: <one-line description>."""

from importlib.metadata import PackageNotFoundError, version

try:
    __version__ = version("{name}")
except PackageNotFoundError:
    __version__ = "0.0.0-dev"

from {mod}.core import YourClient
from {mod}.exceptions import YourPackageError

__all__ = [
    "YourClient",
    "YourPackageError",
    "__version__",
]
'''


def gen_core(mod: str) -> str:
    return f'''\
from __future__ import annotations

from {mod}.exceptions import YourPackageError


class YourClient:
    """
    Main entry point for <your purpose>.

    Args:
        api_key: Required authentication credential.
        timeout: Request timeout in seconds. Defaults to 30.
        retries: Number of retry attempts. Defaults to 3.

    Raises:
        ValueError: If api_key is empty or timeout is non-positive.

    Example:
        >>> from {mod} import YourClient
        >>> client = YourClient(api_key="sk-...")
        >>> result = client.process(data)
    """

    def __init__(
        self,
        api_key: str,
        timeout: int = 30,
        retries: int = 3,
    ) -> None:
        if not api_key:
            raise ValueError("api_key must not be empty")
        if timeout <= 0:
            raise ValueError("timeout must be positive")
        self._api_key = api_key
        self.timeout = timeout
        self.retries = retries

    def process(self, data: dict) -> dict:
        """
        Process data and return results.

        Args:
            data: Input dictionary to process.

        Returns:
            Processed result as a dictionary.

        Raises:
            YourPackageError: If processing fails.
        """
        raise NotImplementedError
'''


def gen_exceptions(mod: str) -> str:
    return f'''\
class YourPackageError(Exception):
    """Base exception for {mod}."""


class YourPackageConfigError(YourPackageError):
    """Raised on invalid configuration."""
'''


def gen_backends_init() -> str:
    return '''\
from abc import ABC, abstractmethod


class BaseBackend(ABC):
    """Abstract storage backend interface."""

    @abstractmethod
    async def get(self, key: str) -> str | None:
        """Retrieve a value by key. Returns None if not found."""
        ...

    @abstractmethod
    async def set(self, key: str, value: str, ttl: int | None = None) -> None:
        """Store a value. Optional TTL in seconds."""
        ...

    @abstractmethod
    async def delete(self, key: str) -> None:
        """Delete a key."""
        ...
'''


def gen_memory_backend() -> str:
    return '''\
from __future__ import annotations

import asyncio
import time

from . import BaseBackend


class MemoryBackend(BaseBackend):
    """Coroutine-safe in-memory backend (guarded by an asyncio.Lock). Zero extra dependencies."""

    def __init__(self) -> None:
        self._store: dict[str, tuple[str, float | None]] = {}
        self._lock = asyncio.Lock()

    async def get(self, key: str) -> str | None:
        async with self._lock:
            entry = self._store.get(key)
            if entry is None:
                return None
            value, expires_at = entry
            if expires_at is not None and time.time() > expires_at:
                del self._store[key]
                return None
            return value

    async def set(self, key: str, value: str, ttl: int | None = None) -> None:
        async with self._lock:
            expires_at = time.time() + ttl if ttl is not None else None
            self._store[key] = (value, expires_at)

    async def delete(self, key: str) -> None:
        async with self._lock:
            self._store.pop(key, None)
'''


def gen_conftest(name: str, mod: str) -> str:
    return f'''\
import pytest

from {mod}.backends.memory import MemoryBackend
from {mod}.core import YourClient


@pytest.fixture
def memory_backend() -> MemoryBackend:
    return MemoryBackend()


@pytest.fixture
def client() -> YourClient:
    # Wire in memory_backend here once YourClient grows a backend parameter.
    return YourClient(api_key="test-key")
'''


def gen_test_core(mod: str) -> str:
    return f'''\
import pytest

from {mod} import YourClient
from {mod}.exceptions import YourPackageError


def test_client_creates_with_valid_key() -> None:
    client = YourClient(api_key="sk-test")
    assert client is not None


def test_client_raises_on_empty_key() -> None:
    with pytest.raises(ValueError, match="api_key"):
        YourClient(api_key="")


def test_client_raises_on_invalid_timeout() -> None:
    with pytest.raises(ValueError, match="timeout"):
        YourClient(api_key="sk-test", timeout=-1)
'''


def gen_test_backends() -> str:
    # Uses the memory_backend fixture from the generated conftest.py so the
    # test file does not hardcode the package's module name.
    return '''\
import pytest


@pytest.mark.asyncio
async def test_set_and_get(memory_backend) -> None:
    await memory_backend.set("key1", "value1")
    result = await memory_backend.get("key1")
    assert result == "value1"


@pytest.mark.asyncio
async def test_get_missing_key_returns_none(memory_backend) -> None:
    result = await memory_backend.get("nonexistent")
    assert result is None


@pytest.mark.asyncio
async def test_delete_removes_key(memory_backend) -> None:
    await memory_backend.set("key1", "value1")
    await memory_backend.delete("key1")
    result = await memory_backend.get("key1")
    assert result is None


@pytest.mark.asyncio
async def test_different_keys_are_independent(memory_backend) -> None:
    await memory_backend.set("key1", "a")
    await memory_backend.set("key2", "b")
    assert await memory_backend.get("key1") == "a"
    assert await memory_backend.get("key2") == "b"
'''


def gen_ci_yml(name: str, mod: str) -> str:
    return f'''\
name: CI

on:
  push:
    branches: [main, master]
  pull_request:
    branches: [main, master]

jobs:
  lint:
    name: Lint, Format & Type Check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install dev dependencies
        run: pip install -e ".[dev]"
      - name: ruff
        run: ruff check .
      - name: black
        run: black . --check
      - name: isort
        run: isort . --check-only
      - name: mypy
        run: mypy {mod}/

  test:
    name: Test (Python ${{{{ matrix.python-version }}}})
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.10", "3.11", "3.12", "3.13"]
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-python@v5
        with:
          python-version: ${{{{ matrix.python-version }}}}
      - name: Install dependencies
        run: pip install -e ".[dev]"
      - name: Run tests with coverage
        run: pytest --cov --cov-report=xml
      - name: Upload coverage
        uses: codecov/codecov-action@v4
        with:
          fail_ci_if_error: false
'''


def gen_publish_yml() -> str:
    return '''\
name: Publish to PyPI

on:
  push:
    tags:
      - "v*.*.*"

jobs:
  build:
    name: Build distribution
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install build tools
        run: pip install build twine
      - name: Build package
        run: python -m build
      - name: Check distribution
        run: twine check dist/*
      - uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/

  publish:
    name: Publish to PyPI
    needs: build
    runs-on: ubuntu-latest
    environment: pypi
    permissions:
      id-token: write
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist/
      - name: Publish to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
'''


def gen_precommit() -> str:
    return '''\
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.4
    hooks:
      - id: ruff
        args: [--fix]
      - id: ruff-format

  - repo: https://github.com/psf/black
    rev: 24.4.2
    hooks:
      - id: black

  - repo: https://github.com/pycqa/isort
    rev: 5.13.2
    hooks:
      - id: isort

  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.10.0
    hooks:
      - id: mypy

  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-toml
      - id: check-merge-conflict
      - id: debug-statements
      - id: no-commit-to-branch
        args: [--branch, master, --branch, main]
'''


def gen_changelog(name: str) -> str:
    return f'''\
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

---

## [Unreleased]

### Added
- Initial project scaffold

[Unreleased]: https://github.com/yourusername/{name}/commits/master
'''


def gen_readme(name: str, mod: str) -> str:
    return f'''\
# {name}

> One-line description — what it does and why it's useful.

[![PyPI](https://img.shields.io/pypi/v/{name})](https://pypi.org/project/{name}/)
[![Python versions](https://img.shields.io/pypi/pyversions/{name})](https://pypi.org/project/{name}/)
[![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)

## Installation

```bash
pip install {name}
```

## Quick Start

```python
from {mod} import YourClient

client = YourClient(api_key="sk-...")
result = client.process({{"input": "value"}})
print(result)
```

## Configuration

| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | str | required | Authentication credential |
| timeout | int | 30 | Request timeout in seconds |
| retries | int | 3 | Number of retry attempts |

## Contributing

See [CONTRIBUTING.md](./CONTRIBUTING.md)

## Changelog

See [CHANGELOG.md](./CHANGELOG.md)

## License

MIT — see [LICENSE](./LICENSE)
'''


def gen_setup_py() -> str:
    return '''\
# Thin shim for legacy editable install compatibility.
# All configuration lives in pyproject.toml.
from setuptools import setup

setup()
'''


def gen_license(author: str) -> str:
    return f'''\
MIT License

Copyright (c) 2026 {author}

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
'''


# ---------------------------------------------------------------------------
# Main scaffold
# ---------------------------------------------------------------------------

def scaffold(
    name: str,
    layout: str,
    build: str,
    author: str,
    email: str,
    output: str,
) -> None:
    mod = pkg_name(name)
    root = Path(output) / name
    pkg_root = root / "src" / mod if layout == "src" else root / mod

    print(f"\nScaffolding {name!r} ({layout} layout, {build} build backend)\n")

    # Package source
    touch(pkg_root / "py.typed")
    write(pkg_root / "__init__.py", gen_init(name, mod))
    write(pkg_root / "core.py", gen_core(mod))
    write(pkg_root / "exceptions.py", gen_exceptions(mod))
    write(pkg_root / "backends" / "__init__.py", gen_backends_init())
    write(pkg_root / "backends" / "memory.py", gen_memory_backend())

    # Tests
    write(root / "tests" / "__init__.py", "")
    write(root / "tests" / "conftest.py", gen_conftest(name, mod))
    write(root / "tests" / "test_core.py", gen_test_core(mod))
    write(root / "tests" / "test_backends.py", gen_test_backends())

    # CI
    write(root / ".github" / "workflows" / "ci.yml", gen_ci_yml(name, mod))
    write(root / ".github" / "workflows" / "publish.yml", gen_publish_yml())
    write(
        root / ".github" / "ISSUE_TEMPLATE" / "bug_report.md",
        """\
---
name: Bug Report
about: Report a reproducible bug
labels: bug
---

**Python version:**
**Package version:**

**Describe the bug:**

**Minimal reproducible example:**
```python
# paste here
```

**Expected behavior:**

**Actual behavior:**
""",
    )
    write(
        root / ".github" / "ISSUE_TEMPLATE" / "feature_request.md",
        """\
---
name: Feature Request
about: Suggest a new feature
labels: enhancement
---

**Problem this would solve:**

**Proposed solution:**

**Alternatives considered:**
""",
    )

    # Config files
    write(root / ".pre-commit-config.yaml", gen_precommit())
    write(root / "CHANGELOG.md", gen_changelog(name))
    write(root / "README.md", gen_readme(name, mod))
    write(root / "LICENSE", gen_license(author))

    # pyproject.toml + setup.py
    if build == "setuptools":
        write(root / "pyproject.toml", gen_pyproject_setuptools(name, mod, author, email, layout))
        write(root / "setup.py", gen_setup_py())
    else:
        write(root / "pyproject.toml", gen_pyproject_hatchling(name, mod, author, email))

    # .gitignore
    write(
        root / ".gitignore",
        """\
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
dist/
*.egg-info/
.eggs/
*.egg
.env
.venv
venv/
.mypy_cache/
.ruff_cache/
.pytest_cache/
htmlcov/
.coverage
cov_annotate/
*.xml
""",
    )

    print(f"\nDone! Created {root.resolve()}")
    print("\nNext steps:")
    print(f"  cd {name}")
    print("  git init && git add .")
    print('  git commit -m "chore: initial scaffold"')
    print("  pip install -e '.[dev]'")
    print("  pre-commit install")
    print("  pytest")


# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------

def main() -> None:
    parser = argparse.ArgumentParser(
        description="Scaffold a production-grade Python PyPI package."
    )
    parser.add_argument(
        "--name",
        required=True,
        help="PyPI package name (lowercase, hyphens). Example: my-package",
    )
    parser.add_argument(
        "--layout",
        choices=["flat", "src"],
        default="flat",
        help="Project layout: 'flat' (default) or 'src'.",
    )
    parser.add_argument(
        "--build",
        choices=["setuptools", "hatchling"],
        default="setuptools",
        help="Build backend: 'setuptools' (default, uses setuptools_scm) or 'hatchling'.",
    )
    parser.add_argument("--author", default="Your Name", help="Author name.")
    parser.add_argument("--email", default="you@example.com", help="Author email.")
    parser.add_argument("--output", default=".", help="Output directory (default: .).")
    args = parser.parse_args()

    # Validate name
    import re
    if not re.match(r"^[a-z][a-z0-9\-]*$", args.name):
        print(
            f"Error: --name must be lowercase letters, digits, and hyphens only. Got: {args.name!r}",
            file=sys.stderr,
        )
        sys.exit(1)

    target = Path(args.output) / args.name
    if target.exists():
        print(f"Error: {target} already exists.", file=sys.stderr)
        sys.exit(1)

    scaffold(
        name=args.name,
        layout=args.layout,
        build=args.build,
        author=args.author,
        email=args.email,
        output=args.output,
    )


if __name__ == "__main__":
    main()
158
skills/salesforce-apex-quality/SKILL.md
Normal file
@@ -0,0 +1,158 @@
---
name: salesforce-apex-quality
description: 'Apex code quality guardrails for Salesforce development. Enforces bulk-safety rules (no SOQL/DML in loops), sharing model requirements, CRUD/FLS security, SOQL injection prevention, PNB test coverage (Positive / Negative / Bulk), and modern Apex idioms. Use this skill when reviewing or generating Apex classes, trigger handlers, batch jobs, or test classes to catch governor limit risks, security gaps, and quality issues before deployment.'
---

# Salesforce Apex Quality Guardrails

Apply these checks to every Apex class, trigger, and test file you write or review.

## Step 1 — Governor Limit Safety Check

Scan for these patterns before declaring any Apex file acceptable:

### SOQL and DML in Loops — Automatic Fail

```apex
// ❌ NEVER — causes LimitException at scale
for (Account a : accounts) {
    List<Contact> contacts = [SELECT Id FROM Contact WHERE AccountId = :a.Id]; // SOQL in loop
    update a; // DML in loop
}

// ✅ ALWAYS — collect, then query/update once
Set<Id> accountIds = new Map<Id, Account>(accounts).keySet();
Map<Id, List<Contact>> contactsByAccount = new Map<Id, List<Contact>>();
for (Contact c : [SELECT Id, AccountId FROM Contact WHERE AccountId IN :accountIds]) {
    if (!contactsByAccount.containsKey(c.AccountId)) {
        contactsByAccount.put(c.AccountId, new List<Contact>());
    }
    contactsByAccount.get(c.AccountId).add(c);
}
update accounts; // DML once, outside the loop
```

Rule: if you see `[SELECT`, `Database.query`, `insert`, `update`, `delete`, `upsert`, or `merge` inside a `for` loop body — stop and refactor before proceeding.

## Step 2 — Sharing Model Verification

Every class must declare its sharing intent explicitly. A class with no sharing declaration inherits the sharing context of its caller, which makes its behaviour unpredictable.

| Declaration | When to use |
|---|---|
| `public with sharing class Foo` | Default for all service, handler, selector, and controller classes |
| `public without sharing class Foo` | Only when the class must run elevated (e.g. system-level logging, trigger bypass). Requires a code comment explaining why. |
| `public inherited sharing class Foo` | Framework entry points that should respect the caller's sharing context |

If a class does not have one of these three declarations, **add it before writing anything else**.

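As a quick illustration of the three declarations (class names and comments here are placeholders, not from the source):

```apex
// Default: respects the running user's record access
public with sharing class AccountService { }

// Elevated: must carry a comment explaining why
// Reason: writes audit records the running user cannot see
public without sharing class AuditLogWriter { }

// Runs in the caller's sharing context; behaves as "with sharing"
// when invoked directly (e.g. from an LWC controller call)
public inherited sharing class QueryRunner { }
```
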
## Step 3 — CRUD / FLS Enforcement

Apex code that reads or writes records on behalf of a user must verify object and field access. The platform does **not** enforce FLS or CRUD automatically in Apex.

```apex
// Check before querying a field
if (!Schema.sObjectType.Contact.fields.Email.isAccessible()) {
    throw new System.NoAccessException();
}

// Or use WITH USER_MODE in SOQL (API 56.0+)
List<Contact> contacts = [SELECT Id, Email FROM Contact WHERE AccountId = :accId WITH USER_MODE];

// Or use Database.query with AccessLevel
List<Contact> safeContacts = Database.query('SELECT Id, Email FROM Contact', AccessLevel.USER_MODE);
```

Rule: any Apex method callable from a UI component, REST endpoint, or `@InvocableMethod` **must** enforce CRUD/FLS. Internal service methods called only from trusted contexts may use `with sharing` instead.

## Step 4 — SOQL Injection Prevention

```apex
// ❌ NEVER — concatenates user input into SOQL string
String soql = 'SELECT Id FROM Account WHERE Name = \'' + userInput + '\'';

// ✅ ALWAYS — bind variable
List<Account> accounts = [SELECT Id FROM Account WHERE Name = :userInput];

// ✅ For dynamic SOQL with user-controlled field names — validate against a whitelist
Set<String> allowedFields = new Set<String>{'Name', 'Industry', 'AnnualRevenue'};
if (!allowedFields.contains(userInput)) {
    throw new IllegalArgumentException('Field not permitted: ' + userInput);
}
```

## Step 5 — Modern Apex Idioms

Prefer current language features (API 62.0 / Winter '25+):

| Old pattern | Modern replacement |
|---|---|
| `if (obj != null) { x = obj.Field__c; }` | `x = obj?.Field__c;` |
| `x = (y != null) ? y : defaultVal;` | `x = y ?? defaultVal;` |
| `System.assertEquals(expected, actual)` | `Assert.areEqual(expected, actual)` |
| `System.assert(condition)` | `Assert.isTrue(condition)` |
| `[SELECT ... WHERE ...]` with no sharing context | `[SELECT ... WHERE ... WITH USER_MODE]` |

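The table rows above, as one before/after sketch (variable and field names are illustrative):

```apex
// Old style
String label;
if (account != null) {
    label = account.Name;
}
String status = (input != null) ? input : 'Default';
System.assertEquals('Default', status);

// Modern style (API 62.0+)
String label2 = account?.Name;        // safe navigation operator
String status2 = input ?? 'Default';  // null coalescing operator
Assert.areEqual('Default', status2);
```
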
## Step 6 — PNB Test Coverage Checklist

Every feature must be tested across all three paths. Missing any one of these is a quality failure:

### Positive Path
- Expected input → expected output.
- Assert the exact field values, record counts, or return values — not just that no exception was thrown.

### Negative Path
- Invalid input, null values, empty collections, and error conditions.
- Assert that exceptions are thrown with the correct type and message.
- Assert that no records were mutated when the operation should have failed cleanly.

### Bulk Path
- Insert/update/delete **200–251 records** in a single test transaction.
- Assert that all records processed correctly — no partial failures from governor limits.
- Use `Test.startTest()` / `Test.stopTest()` to isolate governor limit counters for async work.

### Test Class Rules
```apex
@isTest(SeeAllData=false) // Required — no exceptions without a documented reason
private class AccountServiceTest {

    @TestSetup
    static void makeData() {
        // Create all test data here — use a factory if one exists in the project
    }

    @isTest
    static void givenValidInput_whenProcessAccounts_thenFieldsUpdated() {
        // Positive path
        List<Account> accounts = [SELECT Id FROM Account LIMIT 10];
        Test.startTest();
        AccountService.processAccounts(accounts);
        Test.stopTest();
        // Assert meaningful outcomes — not just no exception
        List<Account> updated = [SELECT Status__c FROM Account WHERE Id IN :accounts];
        Assert.areEqual('Processed', updated[0].Status__c, 'Status should be Processed');
    }
}
```

## Step 7 — Trigger Architecture Checklist

- [ ] One trigger per object. If a second trigger exists, consolidate into the handler.
- [ ] Trigger body contains only: context checks, handler invocation, and routing logic.
- [ ] No business logic, SOQL, or DML directly in the trigger body.
- [ ] If a trigger framework (Trigger Actions Framework, ff-apex-common, custom base class) is already in use — extend it. Do not create a parallel pattern.
- [ ] Handler class is `with sharing` unless the trigger requires elevated access.

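A minimal sketch of the shape this checklist describes, assuming no existing framework (all names are illustrative; the trigger and handler live in separate files):

```apex
// AccountTrigger.trigger — routing only, no business logic
trigger AccountTrigger on Account (before insert, before update) {
    new AccountTriggerHandler().run();
}

// AccountTriggerHandler.cls (separate file) — logic lives here, with sharing
public with sharing class AccountTriggerHandler {
    public void run() {
        if (Trigger.isBefore && Trigger.isInsert) {
            AccountService.applyDefaults(Trigger.new);
        }
    }
}
```
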
## Quick Reference — Hardcoded Anti-Patterns Summary

| Pattern | Action |
|---|---|
| SOQL inside `for` loop | Refactor: query before the loop, operate on collections |
| DML inside `for` loop | Refactor: collect mutations, DML once after the loop |
| Class missing sharing declaration | Add `with sharing` (or document why `without sharing`) |
| `escape="false"` on user data (VF) | Remove it — Visualforce auto-escaping is the XSS defence |
| Empty `catch` block | Add logging and appropriate re-throw or error handling |
| String-concatenated SOQL with user input | Replace with bind variable or whitelist validation |
| Test with no assertion | Add a meaningful `Assert.*` call |
| `System.assert` / `System.assertEquals` style | Upgrade to `Assert.isTrue` / `Assert.areEqual` |
| Hardcoded record ID (`'001...'`) | Replace with queried or inserted test record ID |
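For the empty `catch` row, a sketch of the expected replacement shape (the error message and handling are illustrative, not from the source):

```apex
try {
    update accounts;
} catch (DmlException e) {
    // Log with context, then surface a safe, actionable error to the caller
    System.debug(LoggingLevel.ERROR, 'Account update failed: ' + e.getMessage());
    throw new AuraHandledException('Unable to update accounts. Please try again.');
}
```
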
182
skills/salesforce-component-standards/SKILL.md
Normal file
@@ -0,0 +1,182 @@
---
|
||||
name: salesforce-component-standards
|
||||
description: 'Quality standards for Salesforce Lightning Web Components (LWC), Aura components, and Visualforce pages. Covers SLDS 2 compliance, accessibility (WCAG 2.1 AA), data access pattern selection, component communication rules, XSS prevention, CSRF enforcement, FLS/CRUD in AuraEnabled methods, view state management, and Jest test requirements. Use this skill when building or reviewing any Salesforce UI component to enforce platform-specific security and quality standards.'
|
||||
---
|
||||
|
||||
# Salesforce Component Quality Standards
|
||||
|
||||
Apply these checks to every LWC, Aura component, and Visualforce page you write or review.
|
||||
|
||||
## Section 1 — LWC Quality Standards

### 1.1 Data Access Pattern Selection

Choose the right data access pattern before writing JavaScript controller code:

| Use case | Pattern | Why |
|---|---|---|
| Read a single record reactively (follows navigation) | `@wire(getRecord, { recordId, fields })` | Lightning Data Service — cached, reactive |
| Standard CRUD form for a single object | `<lightning-record-form>` or `<lightning-record-edit-form>` | Built-in FLS, CRUD, and accessibility |
| Complex server query or filtered list | `@wire(apexMethodName, { param })` on a `cacheable=true` method | Allows caching; the wire re-fires on param change |
| User-triggered action, DML, or non-cacheable server call | Imperative `apexMethodName(params).then(...).catch(...)` | Required for DML — only `cacheable=true` Apex can be wired, and cacheable methods cannot perform DML |
| Cross-component communication (no shared parent) | Lightning Message Service (LMS) | Decoupled, works across DOM boundaries |
| Multi-object graph relationships | GraphQL `@wire(gql, { query, variables })` | Single round-trip for complex related data |
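The imperative pattern from the table can be sketched in plain JavaScript. `saveAccount` below is a hypothetical stand-in for an imported Apex method wrapper (a real component would import it from `@salesforce/apex/...`); the point is the promise-based call with explicit success and error handling:

```javascript
// Hypothetical stand-in for an Apex method wrapper — in a real LWC:
// import saveAccount from '@salesforce/apex/AccountController.saveAccount';
function saveAccount({ name }) {
  if (!name) {
    return Promise.reject(new Error('Name is required'));
  }
  return Promise.resolve({ id: '001xx0000000001AAA', name });
}

// Imperative call — required for DML, since wired Apex must be cacheable
async function handleSave(name) {
  try {
    const record = await saveAccount({ name });
    return { status: 'success', recordId: record.id };
  } catch (e) {
    return { status: 'error', message: e.message };
  }
}
```

The `.catch(...)` / `try-catch` branch is not optional: a rejected Apex promise with no handler surfaces as an unhandled error in the component.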
### 1.2 Security Rules

| Rule | Enforcement |
|---|---|
| No raw user data in `innerHTML` | Use `{expression}` binding in the template — the framework auto-escapes. Never use `this.template.querySelector('.el').innerHTML = userValue` |
| Apex `@AuraEnabled` methods enforce CRUD/FLS | Use `WITH USER_MODE` in SOQL or explicit `Schema.sObjectType` checks |
| No hardcoded org-specific IDs in component JavaScript | Query or pass as a prop — never embed record IDs in source |
| `@api` properties from parent: validate before use | A parent can pass anything — validate type and range before using the value as a query parameter |
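The last rule can be made concrete with a small validator. This is an illustrative helper, not a platform API: it accepts only 15- or 18-character alphanumeric Salesforce IDs, rejecting anything else before the value reaches a query:

```javascript
// Illustrative guard for an @api-supplied record ID (not a platform API).
// Salesforce record IDs are 15 or 18 alphanumeric characters.
function isPlausibleRecordId(value) {
  return typeof value === 'string'
    && /^[a-zA-Z0-9]{15}([a-zA-Z0-9]{3})?$/.test(value);
}

// Usage inside a setter or before an imperative Apex call:
// if (!isPlausibleRecordId(this.recordId)) { return; }
```

A format check like this does not replace server-side CRUD/FLS enforcement — it only stops obviously malformed or malicious input at the component boundary.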
### 1.3 SLDS 2 and Styling Standards

- **Never** hardcode colours: `color: #FF3366` → use `color: var(--slds-c-button-brand-color-background)` or a semantic SLDS token.
- **Never** override SLDS classes with `!important` — compose with custom CSS properties.
- Use `<lightning-*>` base components wherever they exist: `lightning-button`, `lightning-input`, `lightning-datatable`, `lightning-card`, etc.
- Base components include built-in SLDS 2, dark mode, and accessibility — avoid reimplementing their behaviour.
- If using custom CSS, test in both **light mode** and **dark mode** before declaring done.

### 1.4 Accessibility Requirements (WCAG 2.1 AA)

Every LWC component must pass all of these before it is considered done:

- [ ] All form inputs have a `<label>` or `aria-label` — never use the placeholder as the only label
- [ ] All icon-only buttons have `alternative-text` or `aria-label` describing the action
- [ ] All interactive elements are reachable and operable by keyboard (Tab, Enter, Space, Escape)
- [ ] Colour is not the only means of conveying status — pair it with text, an icon, or `aria-*` attributes
- [ ] Error messages are associated with their input via `aria-describedby`
- [ ] Focus management is correct in modals — focus moves into the modal on open and back to the trigger on close
### 1.5 Component Communication Rules

| Direction | Mechanism |
|---|---|
| Parent → Child | `@api` property or calling an `@api` method |
| Child → Parent | `CustomEvent` — `this.dispatchEvent(new CustomEvent('eventname', { detail: data }))` |
| Sibling / unrelated components | Lightning Message Service (LMS) |
| Never use | `document.querySelector` to reach into other components, `window.*` globals, or ad-hoc pub/sub libraries |

For Flow screen components:

- Events that need to reach the Flow runtime must set `bubbles: true` and `composed: true`.
- Expose `@api value` for two-way binding with the Flow variable.
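The Flow screen wiring above can be sketched as a tiny helper that builds the event options. `buildFlowEventInit` is a hypothetical helper name, not a platform API; the `bubbles` + `composed` flags are what let the event cross the component's shadow boundary to the Flow runtime:

```javascript
// Hypothetical helper: init options for an event that must reach the
// Flow runtime. bubbles + composed let it cross shadow DOM boundaries.
function buildFlowEventInit(detail) {
  return { bubbles: true, composed: true, detail };
}

// Usage inside a Flow screen component:
// this.dispatchEvent(
//   new CustomEvent('valuechange', buildFlowEventInit({ value: this.value })));
```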
### 1.6 JavaScript Performance Rules

- **No side effects in `connectedCallback`**: it runs on every DOM attach — avoid DML, heavy computation, or state mutations that trigger re-renders here.
- **Guard `renderedCallback`**: always use a boolean guard to prevent infinite render loops.
- **Avoid reactive property traps**: setting a reactive property inside `renderedCallback` causes a re-render — do it only when necessary and behind a guard.
- **Do not store large datasets in component state** — paginate or stream large results instead.
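The `renderedCallback` guard can be shown outside the framework. The class below is a plain-JavaScript stand-in for a component (it does not extend the real LWC base class) whose one-time, DOM-dependent setup runs behind a boolean:

```javascript
// Plain-JS sketch of the guarded renderedCallback pattern (not a real LWC).
class GuardedComponent {
  constructor() {
    this.hasRendered = false;
    this.initCount = 0;
  }

  renderedCallback() {
    if (this.hasRendered) {
      return; // guard: skip all repeat renders
    }
    this.hasRendered = true;
    this.initCount += 1; // one-time DOM-dependent setup goes here
  }
}
```

Without the guard, any setup that mutates a reactive property would schedule another render, which calls `renderedCallback` again — the infinite loop named in the anti-pattern table below.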
### 1.7 Jest Test Requirements

Every component that handles user interaction or retrieves Apex data must have a Jest test:

```javascript
// Minimum test coverage expectations
it('renders the component with correct title', async () => { ... });
it('calls the apex method and displays results', async () => { ... }); // wire mock
it('dispatches an event when the button is clicked', async () => { ... });
it('shows the error state when the apex call fails', async () => { ... }); // error path
```

Use `@salesforce/sfdx-lwc-jest` mocking utilities:

- Wire adapter mocking: `emit({ data })` or `emit({ error })` from a test wire adapter, then flush pending microtasks (e.g. via `setImmediate`) before asserting
- Apex method mocking: `jest.mock('@salesforce/apex/MyClass.myMethod', ...)`
---

## Section 2 — Aura Component Standards

### 2.1 When to Use Aura vs LWC

- **New components: always LWC** unless the target context is Aura-only (e.g. extending `force:appPage`, or using Aura-specific events in a legacy managed package).
- **Migrating Aura to LWC**: prefer LWC and migrate component-by-component; LWC components can be embedded inside Aura components.

### 2.2 Aura Security Rules

- `@AuraEnabled` controller methods must declare `with sharing` and enforce CRUD/FLS — Aura does **not** enforce them automatically.
- Never render unescaped user data in raw markup — prefer components that HTML-escape output, such as `<ui:outputText value="{!v.text}" />`, over placing `{!v.something}` directly inside an unescaped context.
- Validate all inputs from component attributes before using them in SOQL or Apex logic.

### 2.3 Aura Event Design

- **Component events** for parent-child communication — the lowest scope.
- **Application events** only when component events cannot reach the target — they broadcast to the entire app and can become a performance and maintenance problem.
- For hybrid LWC + Aura stacks: use Lightning Message Service to decouple communication — do not rely on Aura application events reaching LWC components.

---
## Section 3 — Visualforce Security Standards

### 3.1 XSS Prevention

```xml
<!-- ❌ NEVER — renders raw user input as HTML -->
<apex:outputText value="{!userInput}" escape="false" />

<!-- ✅ ALWAYS — auto-escaping on -->
<apex:outputText value="{!userInput}" />
<!-- escape defaults to "true" — the platform HTML-encodes the output -->
```

Rule: `escape="false"` is never acceptable for user-controlled data. If rich text must be rendered, sanitise it server-side against a whitelist before output.
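Conceptually, the default escaping HTML-encodes every character that could open a tag, attribute, or entity. A minimal sketch of that behaviour — illustrative only; Visualforce does this for you when `escape` is left at its default:

```javascript
// Illustrative sketch of HTML output encoding, as done by escape="true".
// Ampersand must be replaced first so later entities are not double-encoded.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

Once encoded this way, `{!userInput}` containing `<script>` renders as visible text instead of executing.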
### 3.2 CSRF Protection

Use `<apex:form>` for all postback actions — the platform automatically injects a CSRF token into the form. Do **not** use raw `<form method="POST">` HTML elements, which bypass CSRF protection.

### 3.3 SOQL Injection Prevention in Controllers

```apex
// ❌ NEVER — concatenates a URL parameter into SOQL
String soql = 'SELECT Id FROM Account WHERE Name = \'' + ApexPages.currentPage().getParameters().get('name') + '\'';
List<Account> results = Database.query(soql);

// ✅ ALWAYS — bind variable
String nameParam = ApexPages.currentPage().getParameters().get('name');
List<Account> results = [SELECT Id FROM Account WHERE Name = :nameParam];
```
### 3.4 View State Management Checklist

- [ ] View state is under 135 KB (check with the View State tab, available in development mode)
- [ ] Fields used only for server-side calculations are declared `transient`
- [ ] Large collections are not persisted across postbacks unnecessarily
- [ ] `readOnly="true"` is set on `<apex:page>` for read-only pages to skip view-state serialisation

### 3.5 FLS / CRUD in Visualforce Controllers

```apex
// Before reading a field
if (!Schema.sObjectType.Account.fields.Revenue__c.isAccessible()) {
    ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.ERROR, 'You do not have access to this field.'));
    return null;
}

// Before performing DML
if (!Schema.sObjectType.Account.isDeletable()) {
    throw new System.NoAccessException();
}
```

Standard controllers enforce FLS for bound fields automatically. **Custom controllers do not** — FLS must be enforced manually.

---
## Quick Reference — Component Anti-Patterns Summary

| Anti-pattern | Technology | Risk | Fix |
|---|---|---|---|
| `innerHTML` with user data | LWC | XSS | Use template bindings `{expression}` |
| Hardcoded hex colours | LWC/Aura | Dark-mode / SLDS 2 break | Use SLDS CSS custom properties |
| Missing `aria-label` on icon buttons | LWC/Aura/VF | Accessibility failure | Add `alternative-text` or `aria-label` |
| No guard in `renderedCallback` | LWC | Infinite rerender loop | Add a `hasRendered` boolean guard |
| Application event for parent-child | Aura | Unnecessary broadcast scope | Use a component event instead |
| `escape="false"` on user data | Visualforce | XSS | Remove — use default escaping |
| Raw `<form>` postback | Visualforce | CSRF vulnerability | Use `<apex:form>` |
| No `with sharing` on custom controller | VF / Apex | Data exposure | Add a `with sharing` declaration |
| FLS not checked in custom controller | VF / Apex | Privilege escalation | Add `Schema.sObjectType` checks |
| SOQL concatenated with URL param | VF / Apex | SOQL injection | Use bind variables |
skills/salesforce-flow-design/SKILL.md (new file, 135 lines)
---
name: salesforce-flow-design
description: 'Salesforce Flow architecture decisions, flow type selection, bulk safety validation, and fault handling standards. Use this skill when designing or reviewing Record-Triggered, Screen, Autolaunched, Scheduled, or Platform Event flows to ensure correct type selection, no DML/Get Records in loops, proper fault connectors on all data-changing elements, and appropriate automation density checks before deployment.'
---

# Salesforce Flow Design and Validation

Apply these checks to every Flow you design, build, or review.
## Step 1 — Confirm Flow Is the Right Tool

Before designing a Flow, verify that a lighter-weight declarative option cannot solve the problem:

| Requirement | Best tool |
|---|---|
| Calculate a field value with no side effects | Formula field |
| Prevent a bad record save with a user message | Validation rule |
| Sum or count child records on a parent | Roll-up Summary field |
| Complex multi-object logic, callouts, or high volume | Apex (Queueable / Batch) — not Flow |
| Everything else | Flow ✓ |

If you are building a Flow that could be replaced by a formula field or validation rule, ask the user to confirm the requirement is genuinely more complex.
## Step 2 — Select the Correct Flow Type

| Use case | Flow type | Key constraint |
|---|---|---|
| Update a field on the same record before it is saved | Before-save Record-Triggered | Cannot send emails, make callouts, or change related records |
| Create/update related records, send emails, make callouts | After-save Record-Triggered | Runs after the record is saved — avoid recursion traps |
| Guide a user through a multi-step UI process | Screen Flow | Cannot be triggered automatically by a record event |
| Reusable background logic called from another Flow | Autolaunched (Subflow) | Input/output variables define the contract |
| Logic invoked from an Apex `@InvocableMethod` | Autolaunched (Invocable) | Must declare input/output variables |
| Time-based batch processing | Scheduled Flow | Runs in batch context — respect governor limits |
| Respond to events (Platform Events / CDC) | Platform Event–Triggered | Runs asynchronously — eventual consistency |

**Decision rule**: choose before-save when you only need to change the triggering record's own fields. Move to after-save the moment you need to touch related records, send emails, or make callouts.
## Step 3 — Bulk Safety Checklist

These patterns cause governor limit failures at scale. Check for all of them before the Flow is activated.

### DML in Loops — Automatic Fail

```
Loop element
└── Create Records / Update Records / Delete Records ← ❌ DML inside loop
```

Fix: collect records inside the loop into a collection variable, then run the DML element **outside** the loop.

### Get Records in Loops — Automatic Fail

```
Loop element
└── Get Records ← ❌ SOQL inside loop
```

Fix: perform the Get Records query **before** the loop, then loop over the collection variable.

### Correct Bulk Pattern

```
Get Records — collect all records in one query
└── Loop over the collection variable
    └── Decision / Assignment (no DML, no Get Records)
└── After the loop: Create/Update/Delete Records — one DML operation
```
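The bulk pattern above translates directly to code: accumulate inside the loop, issue one bulk operation after it. The sketch below is illustrative — `bulkUpdate` is a hypothetical stand-in for a single Update Records element, and a counter verifies that only one "DML" call happens no matter how many records are processed:

```javascript
// Sketch of the collect-then-single-DML pattern from the diagram above.
let dmlCalls = 0;

function bulkUpdate(records) { // stand-in for one Update Records element
  dmlCalls += 1;
  return records.length;
}

function processRecords(records) {
  const toUpdate = [];
  for (const rec of records) {                  // Loop element
    if (rec.amount > 100) {                     // Decision
      toUpdate.push({ ...rec, flagged: true }); // Assignment into collection
    }
  }
  return bulkUpdate(toUpdate);                  // one DML, outside the loop
}
```

The anti-pattern would call `bulkUpdate` inside the `for` loop — one DML per record, which is exactly what hits governor limits when the Flow runs on a 200-record batch.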
### Transform vs Loop

When the goal is reshaping a collection (e.g. mapping field values from one object to another), use the **Transform** element instead of a Loop + Assignment pattern. Transform is bulk-safe by design and produces cleaner Flow graphs.
## Step 4 — Fault Path Requirements

Every element that can fail at runtime must have a fault connector. Flows without fault paths surface raw system errors to users.

### Elements That Require Fault Connectors

- Create Records
- Update Records
- Delete Records
- Get Records (when accessing a required record that might not exist)
- Send Email
- HTTP Callout / External Service action
- Apex action (invocable)
- Subflow (if the subflow can throw a fault)

### Fault Handler Pattern

```
Fault connector → Log the error (Create Records on a logging object, or fire a Platform Event)
                → Screen element with a user-friendly message (Screen Flows)
                → Stop / End element (Record-Triggered Flows)
```

Never connect a fault path back to the same element that faulted — this creates an infinite loop.
## Step 5 — Automation Density Check

Before deploying, verify there are no overlapping automations on the same object and trigger event:

- Other active Record-Triggered Flows on the same `Object` + `When to Run` combination
- Legacy Process Builder processes still active on the same object
- Workflow Rules that fire on the same field changes
- Apex triggers that also run in the same `before insert` / `after update` context

Overlapping automations can cause unexpected ordering, recursion, and governor limit failures. Document the automation inventory for the object before activating.
## Step 6 — Screen Flow UX Guidelines

- Every path through a Screen Flow must reach an **End** element — no orphan branches.
- Provide a **Back** navigation option on multi-step flows unless back-navigation would corrupt data.
- Use `lightning-input`-based, SLDS-compliant components for all user inputs — do not use raw HTML form elements.
- Validate required inputs on the screen before the user can advance — use input validation on the screen components.
- Use the **Pause** element if the flow may need to await user action across sessions.
## Step 7 — Deployment Safety

```
Deploy as Draft → Test with 1 record → Test with 200+ records → Activate
```

- Always deploy as **Draft** first and test thoroughly before activation.
- For Record-Triggered Flows: test against the exact entry conditions (e.g. for `ISCHANGED(Status)`, ensure the test data actually changes the field).
- For Scheduled Flows: test with a small batch in a sandbox before enabling in production.
- Check the automation density for the object — more than 3 active automations on a single object increases order-of-execution risk.
## Quick Reference — Flow Anti-Patterns Summary

| Anti-pattern | Risk | Fix |
|---|---|---|
| DML element inside a Loop | Governor limit exception | Move the DML outside the loop |
| Get Records inside a Loop | SOQL governor limit exception | Query before the loop |
| No fault connector on a DML/email/callout element | Unhandled exception surfaced to the user | Add a fault path to every such element |
| Updating the triggering record in an after-save flow with no recursion guard | Infinite trigger loops | Add an entry condition or recursion guard variable |
| Looping directly over the `$Record` collection | Incorrect behaviour at scale | Assign to a collection variable first, then loop |
| Process Builder still active alongside a new Flow | Double-execution, unexpected ordering | Deactivate the Process Builder before activating the Flow |
| Screen Flow with no End element on all branches | Runtime error or stuck user | Ensure every branch resolves to an End element |