# SuperPlane Docs (Full) > Comprehensive companion to `/llms.txt` with full page content for the current docs table of contents. SuperPlane is an open source DevOps control plane for long-lived, event-driven workflows. ## Get Started - [Welcome](https://docs.superplane.com/): Get up and running with SuperPlane, the open source DevOps control plane for event-driven workflows. - [Quickstart](https://docs.superplane.com/get-started/quickstart): Build and run your first workflow on the Canvas (no integrations required). - [Example use cases](https://docs.superplane.com/get-started/example-use-cases): Six templates you can copy and adapt to your own workflows. ## Concepts - [Canvas](https://docs.superplane.com/concepts/canvas): Learn about canvases and how to use the canvas page to design and manage workflows. - [Component Nodes](https://docs.superplane.com/concepts/component-nodes): Learn about components and component nodes, and how to add, configure, and use them in your workflows. - [Data flow](https://docs.superplane.com/concepts/data-flow): How events and payloads flow between nodes in SuperPlane workflows. - [Expressions](https://docs.superplane.com/concepts/expressions): How to write expressions and access payload data in SuperPlane workflows. - [Secrets](https://docs.superplane.com/concepts/secrets): How to store and use secrets in SuperPlane workflows. - [Access Control (RBAC)](https://docs.superplane.com/concepts/access-control): Roles, permissions, groups, and member access in SuperPlane organizations. - [Service Accounts](https://docs.superplane.com/concepts/service-accounts): Use service accounts and API tokens for programmatic and automation access. - [Public API Reference](https://docs.superplane.com/concepts/api-reference): Explore the SuperPlane REST API using the interactive Swagger documentation. - [Glossary](https://docs.superplane.com/concepts/glossary): Definitions for key SuperPlane concepts and terms. 
## Installation - [Overview](https://docs.superplane.com/installation/overview): Choose the right installation path for local, single-host, or Kubernetes deployments. - [Try it on your computer](https://docs.superplane.com/installation/local): Run SuperPlane locally with Docker in less than a minute. - [EC2 on AWS](https://docs.superplane.com/installation/single-host/aws-ec2): Install SuperPlane on a single Amazon EC2 instance. - [Compute Engine on GCP](https://docs.superplane.com/installation/single-host/gcp-compute-engine): Install SuperPlane on a single Google Compute Engine VM. - [Hetzner](https://docs.superplane.com/installation/single-host/hetzner): Install SuperPlane on a single Hetzner server. - [DigitalOcean](https://docs.superplane.com/installation/single-host/digitalocean): Install SuperPlane on a single DigitalOcean Droplet. - [Linode](https://docs.superplane.com/installation/single-host/linode): Install SuperPlane on a single Linode instance. - [Generic server](https://docs.superplane.com/installation/single-host/generic-server): Install SuperPlane on a generic Linux server. - [Google Kubernetes Engine](https://docs.superplane.com/installation/kubernetes/gke): Run SuperPlane on a Google Kubernetes Engine (GKE) cluster with Cloud SQL PostgreSQL. - [Amazon Kubernetes (EKS)](https://docs.superplane.com/installation/kubernetes/amazon-eks): Run SuperPlane on an Amazon Elastic Kubernetes Service (EKS) cluster with RDS PostgreSQL. - [Beacon](https://docs.superplane.com/installation/beacon): Understand what the SuperPlane beacon sends and how to disable it. - [CLI](https://docs.superplane.com/installation/cli): Install the SuperPlane CLI and manage canvases, integrations, and runtime operations. 
## Components - [AWS](https://docs.superplane.com/components/aws): Manage resources and execute AWS commands in workflows - [Bitbucket](https://docs.superplane.com/components/bitbucket): React to events in your Bitbucket repositories - [CircleCI](https://docs.superplane.com/components/circleci): Trigger and monitor CircleCI pipelines - [Claude](https://docs.superplane.com/components/claude): Use Claude models in workflows - [Cloudflare](https://docs.superplane.com/components/cloudflare): Manage Cloudflare zones, rules, and DNS - [Core](https://docs.superplane.com/components/core): Built-in SuperPlane components. - [Cursor](https://docs.superplane.com/components/cursor): Build workflows with Cursor AI Agents and track usage - [Dash0](https://docs.superplane.com/components/dash0): Connect to Dash0 to query data using Prometheus API - [Datadog](https://docs.superplane.com/components/datadog): Create events in Datadog - [Daytona](https://docs.superplane.com/components/daytona): Execute code in isolated sandbox environments - [DigitalOcean](https://docs.superplane.com/components/digitalocean): Manage and monitor your DigitalOcean infrastructure - [Discord](https://docs.superplane.com/components/discord): Send messages to Discord channels and fetch mentions - [DockerHub](https://docs.superplane.com/components/dockerhub): Manage and react to DockerHub repositories and tags - [Elastic](https://docs.superplane.com/components/elastic): Index documents into Elasticsearch and receive Kibana webhooks - [FireHydrant](https://docs.superplane.com/components/firehydrant): Manage and react to incidents in FireHydrant - [GitHub](https://docs.superplane.com/components/github): Manage and react to changes in your GitHub repositories - [GitLab](https://docs.superplane.com/components/gitlab): Manage and react to changes in your GitLab repositories - [Google Cloud](https://docs.superplane.com/components/googlecloud): Manage and use Google Cloud resources in your workflows - 
[Grafana](https://docs.superplane.com/components/grafana): Connect Grafana alerts, alert rules, dashboards, annotations, silences, and data queries to SuperPlane workflows - [Harness](https://docs.superplane.com/components/harness): Run and monitor Harness pipelines from SuperPlane workflows - [Hetzner Cloud](https://docs.superplane.com/components/hetznercloud): Create and delete Hetzner Cloud servers/load balancers and create/delete server snapshots - [Honeycomb](https://docs.superplane.com/components/honeycomb): Monitor observability alerts and send events to Honeycomb datasets - [Incident](https://docs.superplane.com/components/incident): Manage and react to incidents in incident.io - [JFrog Artifactory](https://docs.superplane.com/components/jfrogartifactory): Manage artifacts in JFrog Artifactory repositories - [Jira](https://docs.superplane.com/components/jira): Manage and react to issues in Jira - [LaunchDarkly](https://docs.superplane.com/components/launchdarkly): Manage feature flags and react to flag changes in LaunchDarkly - [Microsoft Azure](https://docs.superplane.com/components/microsoftazure): Manage and automate Microsoft Azure resources and services - [Microsoft Teams](https://docs.superplane.com/components/microsoftteams): Send and receive messages in Microsoft Teams channels - [New Relic](https://docs.superplane.com/components/newrelic): React to alerts and query telemetry data from New Relic - [Octopus Deploy](https://docs.superplane.com/components/octopusdeploy): Deploy releases and react to deployment events in Octopus Deploy - [OpenAI](https://docs.superplane.com/components/openai): Generate text responses with OpenAI models - [PagerDuty](https://docs.superplane.com/components/pagerduty): Manage and react to incidents in PagerDuty - [Perplexity](https://docs.superplane.com/components/perplexity): Run AI agents with Perplexity - [Prometheus](https://docs.superplane.com/components/prometheus): Monitor alerts from Prometheus and Alertmanager - 
[Render](https://docs.superplane.com/components/render): Deploy and manage Render services, and react to Render deploy/build events - [Rootly](https://docs.superplane.com/components/rootly): Manage and react to incidents in Rootly - [Semaphore](https://docs.superplane.com/components/semaphore): Run and react to your Semaphore workflows - [SendGrid](https://docs.superplane.com/components/sendgrid): Send transactional and marketing email with SendGrid - [Sentry](https://docs.superplane.com/components/sentry): React to issue events and manage issues and metric alerts in Sentry - [ServiceNow](https://docs.superplane.com/components/servicenow): Manage and react to incidents in ServiceNow - [Slack](https://docs.superplane.com/components/slack): Send and react to Slack messages and interactions - [SMTP](https://docs.superplane.com/components/smtp): Send emails via any SMTP server - [Statuspage](https://docs.superplane.com/components/statuspage): Create and manage incidents on your Atlassian Statuspage - [Telegram](https://docs.superplane.com/components/telegram): Send messages and react to events via Telegram bots ## Full Content ### Get Started #### Welcome Source URL: https://docs.superplane.com/ SuperPlane is an open source DevOps control plane for running long-lived, event-driven workflows. It works across the tools teams already use such as Git, CI/CD, incident response, observability, infra, notifications, and more. Most teams end up stitching these systems together with a mix of scripts, one-off CI jobs, and manual steps. That works until the workflow needs to span multiple tools, wait for humans, or run over hours and days. SuperPlane gives you a place to model these workflows as a system: connect your tools, define how events flow, and get a complete, queryable execution history for debugging, audit, and shared understanding. 
![Run chain view showing end-to-end workflow execution history](../../assets/superplane-canvas-example.png) ## What you can build with SuperPlane Use SuperPlane when the workflow is bigger than a single pipeline or script: - **Cross-tool automation with guardrails**: coordinate releases with approvals, time windows, checks, and rollback paths. - **Human-in-the-loop operations**: pause for sign-off, collect decisions, and resume exactly where you left off. - **Incident and on-call workflows**: pull context from multiple systems, fan out notifications, and keep a work log. - **“Glue work” you don’t want to re-build**: webhooks, retries, routing, payload transforms, and a unified run history. ## Try it locally (fastest path) If you just want to click around and run a workflow, start the demo container: ```bash docker pull ghcr.io/superplanehq/superplane-demo:stable docker run --rm -p 3000:3000 -v spdata:/app/data -ti ghcr.io/superplanehq/superplane-demo:stable ``` Then open `http://localhost:3000`. For more details and options, see [installation guide](/installation/overview). ## Project status: alpha - **Self-hostable**: SuperPlane is designed to run on your own infrastructure. - **Expect rough edges**: we’re still stabilizing the core primitives and integrations. - **Breaking changes are possible**: but we'll do our best to avoid them. ## LLM and agent tooling For **agent-first engineering**, use the [SuperPlane skills repository](https://github.com/superplanehq/skills). It provides skills that help AI agents operate SuperPlane efficiently (CLI usage, canvas design, workflow debugging). Install all skills or a specific one: ```bash npx skills add superplanehq/skills # or a single skill, e.g.: npx skills add superplanehq/skills --skill superplane-cli ``` For **docs context** in your AI tooling, you can point at: - [`/llms.txt`](/llms.txt): compact docs index organized by docs sections. - [`/llms-full.txt`](/llms-full.txt): expanded companion with full page content. 
## Get help / share feedback - Submit project issues and feature requests at [github.com/superplanehq/superplane](https://github.com/superplanehq/superplane). - Submit documentation issues at [github.com/superplanehq/docs](https://github.com/superplanehq/docs). - Talk to the devs in the [Discord server](https://discord.superplane.com). #### Quickstart Source URL: https://docs.superplane.com/get-started/quickstart This quickstart guides you to your first "hello world" moment in SuperPlane: building a small workflow on a **canvas**, running it, and inspecting the resulting **run**, **run items**, and **payloads**. You won't need to connect any third-party services. ## What you’ll build A tiny workflow that: - starts from a **Manual Run** - fetches a random cat fact via **HTTP Request** - branches with **If** (True/False output channels) based on the length of the cat fact - ends with one of the **No Operation** nodes Along the way you’ll learn the core mental model: nodes emit payloads, downstream nodes subscribe, and payloads accumulate into a **message chain** you can reference in expressions. ## Prerequisites - You can access the SuperPlane UI. - You can create or open a canvas in a project/workspace. ## Your first workflow (step-by-step) ### 1) Create a canvas Create a new canvas and name it **Hello world**. ### 2) Add the trigger: Manual Run 1. Click **“+ Components”**. 2. Add **Manual Run** to the canvas. When you drop it on the canvas, it will typically show up as a `start` node with a **Run** button. This is the trigger that will start the workflow. ![Manual Run node](../../../assets/quickstart/start-node.png) ### 3) Add an action: HTTP Request 1. Add **HTTP Request** to the canvas. 2. Connect **Manual Run → HTTP Request** (create a subscription). 3. Name the node **Get cat fact**. Configure the HTTP Request: - **Method**: `GET` - **URL**: `https://catfact.ninja/fact` - Click the **Save** button at the bottom of the configuration panel. 
Next, click the **Run** button on the manual trigger node to run the first HTTP request. Downstream nodes will use the response from this run to help you write expressions. This endpoint will fetch a random cat fact and return JSON like: ```json { "fact": "A cat will tremble or shiver when it is in extreme pain.", "length": 56 } ``` ![HTTP Request node](../../../assets/quickstart/get-cat-fact.png) ### 4) Add branching: If 1. Add **If** to the canvas. 2. Connect **HTTP Request → If**. For illustration, we'll check whether the cat fact fits in an old-school tweet (160 characters or fewer). Set the If expression to branch based on the API response: ``` $['Get cat fact'].data.body.length <= 160 ``` As you type the expression, SuperPlane suggests available data attributes via autocompletion. ![Writing an expression](../../../assets/quickstart/if-expression.png) ### 5) End both paths safely: No Operation Add **two** No Operation nodes: - Connect **If / True → No Operation** and name it `fact is short`. - Connect **If / False → No Operation** and name it `fact is long`. This keeps the tutorial completely safe: the workflow does real work (an HTTP call), but has no external side effects. ### 6) Run it 1. Click the **Manual Run** node. 2. Click **Run**. Run it a couple more times. You should see nodes update with statuses as each run item finishes. Successfully running the workflow should look like this: ![Full hello world canvas](../../../assets/quickstart/full-canvas.png) ## Inspect a run (payloads, history, message chain) You can inspect exactly what happened and what data flowed between nodes. ### 1) Open a run 1. Click the last node in your workflow (for example, your `fact is short` node). 2. In the sidebar, within the **Runs** tab, click the most recent event. You’ll be taken to a run-focused view with: - **This run was triggered by** (the root trigger, e.g. 
`start`) - **Steps** (each node that executed in this run) ![Inspecting a run](../../../assets/quickstart/run-inspection.png) ### 2) Explore steps (the run chain) In **Steps**, click different nodes (e.g. `Get cat fact`, the **If** node, or one of your No Operation nodes) to see their details for this run. ### 3) Inspect payloads For the **HTTP Request** step (your `Get cat fact` node), open **Payload** and look for: - `data.status` (HTTP status code) - `data.body.fact` (the cat fact) - `data.body.length` (length of the fact) For any step, you can also open **Details** to see metadata like the event type and when it was emitted. ### 4) Understand the message chain As a run executes, each node’s output is added to a message chain. You can reference any upstream node by name in expressions: ``` $['Get cat fact'].data.body.length $['Get cat fact'].data.body.fact ``` For a deeper explanation, see [Data Flow](/concepts/data-flow) and [Expressions](/concepts/expressions). ## Troubleshooting - **HTTP Request fails**: Open the HTTP Request run item payload and check `data.status` and `data.error`. Public APIs sometimes rate-limit; re-run after a minute if needed. - **A node didn’t run**: Verify the subscription lines on the canvas (Manual Run → HTTP Request → If), and ensure the No Operation nodes are connected to the correct If output channels. ## Next steps The "hello world" was not exactly a DevOps workflow, but it was a good way to get started with the fundamentals of SuperPlane. In the next section, we'll explore some real-world use cases. 
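As an aside, the If condition used in step 4 boils down to a simple length check. The sketch below mirrors that logic in plain shell, using the sample cat fact from step 3 (illustrative only; in SuperPlane the real expression is evaluated by the Expr engine against the run's payloads):

```bash
# Sample fact from the quickstart; the real workflow reads this
# from $['Get cat fact'].data.body.fact at run time.
FACT="A cat will tremble or shiver when it is in extreme pain."

# Mirror of the If expression: $['Get cat fact'].data.body.length <= 160
if [ "${#FACT}" -le 160 ]; then
  echo "fact is short"   # the True output channel's path
else
  echo "fact is long"    # the False output channel's path
fi
```

Since this fact is 56 characters long, the run would follow the True channel into the `fact is short` node.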
#### Example use cases Source URL: https://docs.superplane.com/get-started/example-use-cases SuperPlane ships with a set of built-in templates you can copy into your workspace and adapt. This page gives you a quick tour of what each template is for, plus screenshots you can reference as you build.
- **Policy-gated deployment**: Gate deployments with a policy check, business hours, and approval.
- **Automated rollback**: Automatically trigger rollback workflows when deployments fail and verify system health with Dash0 monitoring.
- **Incident data collection**: Collect data and metrics when a new P1/P2 incident is opened and create a GitHub issue.
- **Staged release**: Gradually roll out releases through 10%, 50%, and 100% stages with health checks at each step.
- **Incident router**: Route P1 incidents from Slack mentions to PagerDuty and GitHub with AI-generated titles and descriptions.
- **Multi-repo release**: Coordinate releases across multiple repositories with unified CI builds and deployments.

### Concepts #### Canvas Source URL: https://docs.superplane.com/concepts/canvas A **canvas** is the workspace where you design and run workflows in SuperPlane. It's a visual graph of nodes connected by subscriptions that define how events flow between steps. Think of a canvas as: - **A workspace** for designing workflows visually - **A live system** where multiple runs can execute simultaneously - **A graph** that defines all possible execution paths - **A unified view** of your automation logic A single canvas can represent multiple possible workflows, depending on which paths events take through the graph. The canvas provides a place to model complex, event-driven workflows that span multiple tools, wait for human input, and run over extended periods of time. ## Visual Layout The canvas page displays nodes, connections, status indicators, and provides tools for building and managing your workflows. ![Canvas interface](../../../assets/canvas-overview-canvas.png) The canvas consists of: 1. **Nodes** — Instances of components, the core building blocks. See [Component Nodes](/concepts/component-nodes). 2. **Connections** — Indicate which node listens to which. See [Data Flow](/concepts/data-flow). 3. **Add new elements** — Add annotations and new components to the canvas. 4. **Helper toolbar** — Navigation tools, select/pan mode, search. 5. **Console** — Warnings, errors, and log of changes and events. ## Editing and Updating Canvases You can edit and update canvases in two ways: ### Visual Editor (UI) Use the visual editor to build and modify canvases interactively: - **Add nodes**: Drag components from the component palette onto the canvas - **Connect nodes**: Create subscriptions by connecting nodes together - **Configure nodes**: Click on any node to edit its configuration - **Delete elements**: Remove nodes or connections as needed Changes are saved automatically, and you can see your workflow update in real-time. 
### Command Line (CLI) Use the SuperPlane CLI to manage canvases programmatically: ```sh # Export a canvas superplane canvases get > my_canvas.yaml # Edit the YAML file # ... make your changes ... # Apply updates superplane canvases update -f my_canvas.yaml ``` ## Versioning and Change Requests Canvases can run in two modes: - **Versioning disabled**: updates apply directly to the live canvas. - **Versioning enabled**: you edit a draft, open a change request, collect approvals, then publish. Effective versioning is controlled by organization + canvas settings: - Organization-level versioning ON forces versioning ON for all canvases. - Organization-level versioning OFF allows each canvas to toggle versioning independently. In the UI, versioned canvases expose an **Edit** mode and a **Versioning** view: 1. Enter **Edit** mode to work on your draft. 2. Use **Propose Change** to open a change request. 3. Review the request and run actions (`Approve`, `Unapprove`, `Reject`, `Reopen`, `Publish`). For CLI commands, see [SuperPlane CLI](/installation/cli). ### Approval policy in Canvas Settings Canvas settings define who can approve change requests. Supported approver types: - **Any user** - **Specific user** - **Role** Publish is allowed only when approval requirements are satisfied. ### Conflict resolution Two pull-request-style change requests can conflict when they edit overlapping nodes differently. When a request is conflicted: - It remains open, but cannot be approved or published. - You can resolve conflicts by updating the request with a resolved canvas version. - Once resolved, the request can continue through the normal review and publish flow. ## The Canvas Page ### Nodes and Connections **Nodes** are instances of components. To add a node, click **"+ Components"** and drag onto the canvas. **Connections** define how events flow between nodes — drag from a source node's output channel to a target node. 
Nodes show status badges (running, succeeded, failed) and key information from their latest payload. For details on components, see [Component Nodes](/concepts/component-nodes). ### Payloads and Events Every node emits a **payload** — JSON data containing the results of its execution. Click any node and view the Payload tab to inspect it. ![Inspecting a node payload](../../../assets/canvas-overview-payload.png) When configuring nodes, type `{{` in expression fields to access payload data from upstream nodes. ![Selecting payload data in expressions](../../../assets/canvas-overview-selecting-payload.png) Use `$['Node Name'].field` to reference data from any connected node. See [Data Flow](/concepts/data-flow) for more details. ### Workflows and Runs A single canvas can express multiple workflows depending on which trigger fires and which paths events take. **Multiple runs execute simultaneously** — the canvas updates in real-time as runs execute, with each node showing its current or most recent status. Click any node to view its run history. Select a run item to see the full run chain showing all nodes that executed as part of that run. ### Console The console tracks errors, warnings, and provides a log of all changes and events on the canvas. ![Console showing logs and errors](../../../assets/canvas-overview-console.png) - **Errors and warnings** — Count indicators show issues needing attention - **Canvas changes** — Logs when components or connections are added, updated, or removed - **Run details** — Execution logs for each run - **Search** — Filter through logs to find specific events ## Best Practices - **Organize logically**: Group related nodes together visually - **Use clear node names**: Make it easy to understand what each node does - **Test incrementally**: Build and test workflows step by step - **Monitor the console**: Check for errors and review run history regularly For more details on data flow, see [Data Flow](/concepts/data-flow). 
For component details, see [Component Nodes](/concepts/component-nodes). #### Component Nodes Source URL: https://docs.superplane.com/concepts/component-nodes **Components** are available building blocks that define capabilities in SuperPlane. A **component node** is one instance of a component on the Canvas. When you add a component to your canvas, it becomes a node that can receive events, perform work, and emit payloads. ## Components vs Component Nodes - **Component**: The building block definition — what it does, what configuration it needs, what it emits - **Component Node**: A single instance of a component placed on your canvas with specific configuration Think of it like this: a component is like a blueprint, and a component node is the actual building you construct from that blueprint. ## Component Types There are two types of components: ### Trigger Components **Trigger components** start workflow executions. They listen for external events or can be invoked manually. **Examples:** Webhook, Schedule, Manual Run, GitHub onPush, Slack onAppMention ### Action Components **Action components** execute operations in response to upstream events. They subscribe to events, perform operations, and emit payloads for downstream nodes. **Examples:** HTTP Request, Filter, Approval, GitHub runWorkflow, Slack sendMessage ## Adding Component Nodes to the Canvas New component nodes can be added to the Canvas in two ways: ### From the Components Menu 1. Click the **"+ Components"** button in the top right of the canvas 2. Select a component from the list of available components 3. Drag it onto the canvas where you want it The component is now a node on your canvas, ready to be configured and connected. ### From Output Channels You can also drag an output channel from an existing node to an empty space on the canvas. This creates a new component node and automatically subscribes it to that output channel, making it faster to model workflows. 
## Node Overview on Canvas Each component node on the canvas displays key information and provides interactive elements: ![Component node on canvas](../../../assets/component-nodes-node.png) 1. **Input channel** — Drag to subscribe to events from other nodes (Action nodes only). 2. **Configuration overview** — Quick summary of key settings for this node. 3. **Latest Run Item** — Shows the last run executed or event emitted. 4. **Action menu** — On hover: manually emit, copy, collapse/expand, or delete. 5. **Output channels** — Subscribe other nodes or drag to create new components. ## Component Node Sidebar Clicking on a component node selects it and opens a component node sidebar. ![Component node sidebar](../../../assets/component-nodes-sidebar.png) 1. **Click to open** — You can click on a node to open the sidebar. 2. **Resizable sidebar** — Sidebar is resizable and contains node details. 3. **Latest runs section** — Recent executions with event ID, timestamp, and status. 4. **Configuration tab** — Settings for setting up and updating the node's configuration. Each component has its own configuration requirements (required fields, optional fields, and expression support using `{{ }}` syntax). 5. **Action menu for run item** — Cancel or push through running items. 6. **Queue** — Items waiting to execute (FIFO order). See [Expressions](/concepts/expressions) for details on writing expressions. ## Single Run Chain Select a run from the list to see the full chain of nodes it went through. ![Single run chain view](../../../assets/component-nodes-single-run.png) 1. **Run chain** — Shows all nodes in the run with current node preselected. 2. **Dimmed nodes** — Nodes not included in the run are dimmed on the canvas. 3. **Expandable details** — Expand other nodes in the chain to view their payloads. 4. **Details tab** — Execution info: start/finish time, result, duration. 5. **Payload tab** — The data this node emitted for downstream nodes. 
## Component Availability Components are provided by **integrations**. SuperPlane includes: - **Core components**: Built-in components like Webhook, Filter, HTTP Request - **Integration components**: Components from integrations like GitHub, Slack, PagerDuty To use integration components, you may need to configure authentication or connection settings for that integration first. Browse the [Components](/components/core) section to see all available components and their documentation. ## Best Practices When working with component nodes: - **Choose the right component**: Understand what each component does before using it - **Use expressions**: Make configurations dynamic by referencing upstream data - **Name nodes clearly**: Use descriptive names that indicate purpose - **Test incrementally**: Verify component behavior before building complex workflows - **Monitor run history**: Check execution history to understand behavior and debug issues For more details on how component nodes connect and how data flows between them, see [Data Flow](/concepts/data-flow). For information about the canvas where you work with component nodes, see [Canvas](/concepts/canvas). #### Data flow Source URL: https://docs.superplane.com/concepts/data-flow SuperPlane is an event-driven workflow engine. Every node on the canvas emits a payload, and other nodes subscribe to these events to create workflows. This model enables flexible, composable automation pipelines. ## How It Works When an external event occurs (like a GitHub push), it triggers a node on your canvas. That node processes the event, emits a payload, and downstream nodes that subscribe to it receive the data and continue the chain. ```mermaid flowchart LR External[External Event] --> Trigger[Trigger Node] Trigger -->|payload| Action[Action Node] Action -->|payload| Next[Next Node] ``` Each node in the workflow: 1. **Receives** an event from its subscribed sources 2. 
**Processes** the event (executes an action, transforms data, etc.) 3. **Emits** a payload for downstream nodes As the workflow executes, payloads from each node accumulate into a message chain. Any node can access data from any upstream node in this chain using expressions. ## Runs and Run Items Understanding how SuperPlane tracks execution helps when working with data flow. ### Run Items A **run item** is a single execution within a single node: - For **trigger nodes**: a single received event (e.g., a GitHub push event) - For **action nodes**: a single execution (e.g., running a GitHub workflow) Each run item produces a payload that downstream nodes can access. ### Runs A **run** is a collection of run items and the dependencies between them. It represents a complete workflow execution from start to finish. - Starts with a **root event** — the first event that triggered the workflow (usually from a trigger node) - Grows as the workflow executes and each node adds its run item to the chain - Tracks the full execution history and data flow ```mermaid flowchart LR Root[Root Event] --> Item1[Run Item 1] Item1 --> Item2[Run Item 2] Item2 --> Item3[Run Item 3] Item1 --> Item4[Run Item 4] Item4 --> Item3 ``` ## The Message Chain As a run executes, each node's output is added to a **message chain**. This chain is accessible via the `$` variable — think of it as a message bus that streams all outputs to your current node. 
### How It Works Consider this workflow: ```mermaid flowchart LR GitHub[GitHub onPush] --> Filter[Filter] --> Deploy[Deployment] ``` When the workflow executes, each node adds its output to `$`: ```json { "GitHub onPush": { "ref": "refs/heads/main", "commit": "abc123" }, "Filter": { "passed": true }, "Deployment": { "status": "success", "url": "https://app.example.com" } } ``` From the Deployment node, you can access any upstream output: ``` $['GitHub onPush'].ref // "refs/heads/main" $['Filter'].passed // true ``` You can also use `root()` to access the original event that started the run, and `previous()` to access the immediate upstream node. See the [Expressions](/concepts/expressions) page for details. ## Exploring Runs on the Canvas The workflow you see on the canvas is dynamic — it's not a single run, but a live view where multiple runs can execute simultaneously. ### Node Status Each node on the canvas shows a quick overview of its current or most recent run item. ![Node with run item status](../../../assets/data-flow-node-status.png) ### Run History Click on any node to open the sidebar. The sidebar shows the run history — all executions or events that have passed through this node, along with each execution's result. ![Run history sidebar](../../../assets/data-flow-run-history.png) ### Run Chain Click on any item in the run history to see the full run chain. This shows all run items from all nodes that executed as part of that particular run. ![Run chain view](../../../assets/data-flow-run-chain.png) ### Inspecting Run Items In the run chain view, the node you were inspecting is preselected. You can click on any other run item in the chain to explore its details and payload. ![Run item details expanded](../../../assets/data-flow-run-details.png) ## Payloads Every node emits a **payload** — a JSON object containing data from its execution. ### Trigger Components Trigger components listen to external resources and emit the event data as their payload. 
- Connect to external systems via webhooks or integrations - Emit events when something happens externally - Payload contains the raw event data from the external system **Examples:** [GitHub onPush](/components/github/#on-push), [GitHub onRelease](/components/github/#on-release), [Slack onAppMention](/components/slack/#on-app-mention) ### Action Components Action components execute operations and emit execution results as their payload. - Subscribe to events from upstream nodes - Execute operations on external systems - Payload contains execution results and any returned data **Examples:** [GitHub runWorkflow](/components/github/#run-workflow), [Slack sendMessage](/components/slack/#send-text-message), [HTTP request](/components/core/#http-request) ### Output Channels Nodes can emit through one or multiple output channels. Channels let you route data based on different outcomes. **Example: Pass/Fail Routing** ![Output channels](../../../assets/data-flow-output-channels.png) Subscribe to the `passed` channel to continue on success, or the `failed` channel to handle errors. **Output channels example** | Component | Channels | Description | | ----------------------- | ------------------------------- | ------------------------------------------- | | GitHub runWorkflow | `passed`, `failed` | Routes based on workflow success or failure | | Approval | `approved`, `rejected` | Routes based on approval decision | | Merge | `success`, `stopped`, `timeout` | Routes based on merge outcome | | Dash0 listIssues | `clear`, `degraded`, `critical` | Routes based on issue severity | | PagerDuty listIncidents | `clear`, `low`, `high` | Routes based on incident urgency | #### Expressions Source URL: https://docs.superplane.com/concepts/expressions SuperPlane uses [Expr](https://expr-lang.org) for expressions. Expressions let you access payload data, transform values, and evaluate conditions. 
### Accessing Payload Data Use `$['Node Name']` to access payload data from any upstream node in the message chain: ``` $['Node Name'].field $['Node Name'].nested.field $['Node Name'].array[0].value ``` **Examples:** ``` $['GitHub onPush'].ref // Branch ref $['GitHub onPush'].head_commit.message // Commit message $['Deploy 10%'].workflow_run.html_url // Workflow URL ``` ### SuperPlane Functions These functions are specific to SuperPlane workflows: | Function | Description | Example | | ------------- | ------------------------------------------------ | -------------------------- | | `root()` | Returns the root payload that started the run | `root().data.ref` | | `previous()` | Returns payload from the immediate upstream node | `previous().data.status` | | `previous(n)` | Walk n levels upstream | `previous(2).data.version` | ### Common Functions Expr provides a rich set of built-in functions: **String** `lower()`, `upper()`, `trim()`, `split()`, `replace()`, `indexOf()`, `hasPrefix()`, `hasSuffix()` **Array** `filter()`, `map()`, `first()`, `last()`, `len()`, `any()`, `all()`, `count()`, `join()` **Date** `now()`, `date()`, `duration()` — with methods like `.Year()`, `.Month()`, `.Day()`, `.Hour()` **Type Conversion** `int()`, `float()`, `string()`, `toJSON()`, `fromJSON()`, `toBase64()`, `fromBase64()` For the complete function reference, see the [Expr language documentation](https://expr-lang.org/docs/language-definition). ### Using Expressions in Configuration Expressions can be used in component configuration fields using double curly braces: **Dynamic message:** ``` Deployment of {{$['Listen to new Releases'].data.release.name}} has failed. 
``` **Filter expression:** ``` $['GitHub onPush'].ref == "refs/heads/main" ``` **String manipulation:** ``` indexOf(lower($['Slack Message'].data.text), "p1") != -1 ``` **Conditional logic:** ``` $['Check for alerts'].data.status != "clear" || $['Health Check'].data.body.healthy == false ``` #### Secrets Source URL: https://docs.superplane.com/concepts/secrets Secrets let you securely store sensitive credentials like API keys, passwords, and tokens for use in component configurations. ## How It Works Secrets are key-value stores scoped to your organization. Each secret contains one or more named keys that hold sensitive values. - **Organization-scoped**: Accessible to all workflows in the organization - **Encrypted at rest**: Secret data is encrypted before storage - **Referenced in configurations**: Components reference secrets by name and key, not by value - **Resolved at runtime**: Secret values are decrypted and resolved when components execute ## Creating Secrets Create secrets in **Organization Settings > Secrets**. **Example secret structure:** A secret named `production-ssh-keys` containing: - `private_key` = `-----BEGIN OPENSSH PRIVATE KEY-----...` - `passphrase` = `my-passphrase` Each secret can contain multiple key-value pairs. ## Using Secrets in Components Currently, secrets are available for the **SSH Command** component. When configuring the SSH Command component, select a secret and key from your organization's secrets for authentication credentials. Support for secrets in other components will be added in future releases. ## Secret Resolution During workflow execution, SuperPlane: 1. **Looks up the secret** by name in the organization 2. **Decrypts the secret data** using the encryption key 3. **Extracts the specific key** from the secret's key-value pairs 4. **Provides the value** to the component for execution If a secret or key doesn't exist, the component execution fails with an error. 
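The four resolution steps can be sketched as a small model. This is illustrative only: `SecretStore` is a hypothetical name, and base64 stands in for real decryption, which SuperPlane performs with a proper encryption key.

```python
import base64

class SecretStore:
    """Toy in-memory secret store mirroring the resolution steps above."""

    def __init__(self):
        self._secrets: dict[str, bytes] = {}

    def put(self, name: str, data: dict[str, str]) -> None:
        # Toy encoding of key-value pairs; values must not contain ';' or '='.
        encoded = ";".join(f"{k}={v}" for k, v in data.items())
        self._secrets[name] = base64.b64encode(encoded.encode())

    def resolve(self, name: str, key: str) -> str:
        # 1. Look up the secret by name in the organization
        if name not in self._secrets:
            raise KeyError(f"secret {name!r} does not exist")
        # 2. "Decrypt" the secret data (base64 is a stand-in here)
        decoded = base64.b64decode(self._secrets[name]).decode()
        pairs = dict(p.split("=", 1) for p in decoded.split(";"))
        # 3. Extract the specific key from the key-value pairs
        if key not in pairs:
            raise KeyError(f"key {key!r} not found in secret {name!r}")
        # 4. Provide the value to the component
        return pairs[key]

store = SecretStore()
store.put("production-ssh-keys", {"passphrase": "my-passphrase"})
print(store.resolve("production-ssh-keys", "passphrase"))  # my-passphrase
```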
## Best Practices - **Use descriptive names**: Name secrets clearly (e.g. `production-keys` and `staging-keys`) - **Organize by service**: Group related credentials in a single secret with multiple keys - **Rotate regularly**: Update secret values when credentials change - **Don't hardcode**: Always use secret key fields instead of entering values directly ## Permissions Secret management requires specific permissions: - `secrets.read` - View organization secrets (but not their values) - `secrets.create` - Create new secrets - `secrets.update` - Update existing secrets - `secrets.delete` - Delete secrets By default, only `Admin` and `Owner` roles have these permissions. See [Access Control](/concepts/access-control) for details. #### Access Control (RBAC) Source URL: https://docs.superplane.com/concepts/access-control **Overview** SuperPlane uses organization-scoped role-based access control (RBAC) to decide who can do what in an organization. Today, RBAC is only defined at the organization level. **RBAC Sections** - [Roles](#roles) - [Groups](#groups) - [Members](#members) - [Permissions Reference](#permissions-reference) **Role Model** - A member has one direct organization role at a time. Assigning a new role replaces the previous direct role. - Group membership can add additional roles. Effective permissions are the union of direct role, group roles, and inherited roles. - Default roles are `Owner`, `Admin`, and `Viewer`. They are read-only in the UI. - To change the permissions of a default role, create a custom role and assign it instead. **Role Inheritance** ```mermaid graph TD Owner --> Admin Admin --> Viewer ``` **Default Roles** | Role | Inherits | Summary | | --- | --- | --- | | Owner | Admin | Full admin access plus manage organization settings and deletion. | | Admin | Viewer | Manage members, groups, roles, canvases, integrations, secrets, and custom components (if enabled). 
| | Viewer | - | Read-only access to org settings, roles, groups, members, canvases, and custom components (if enabled). | New members are assigned the `Viewer` role by default. **Default Role Permissions** Viewer permissions: - `org.read` - `roles.read` - `groups.read` - `members.read` - `canvases.read` Admin permissions: - All Viewer permissions. - `canvases.create` - `canvases.update` - `canvases.delete` - `members.create` - `members.update` - `members.delete` - `groups.create` - `groups.update` - `groups.delete` - `integrations.create` - `integrations.read` - `integrations.update` - `integrations.delete` - `secrets.create` - `secrets.read` - `secrets.update` - `secrets.delete` - `roles.create` - `roles.update` - `roles.delete` Owner permissions: - All Admin permissions. - `org.update` - `org.delete` ## Permissions Reference Permissions are defined as resource/action pairs (for example, `members.create`). Use this list when building custom roles. **General** - `org.read` - View organization details and settings. - `org.update` - Update organization settings and configuration. - `org.delete` - Delete the organization (dangerous). **People & Groups** - `members.read` - View organization members and their details. - `members.create` - Invite or add members to the organization. - `members.update` - Update member roles and permissions. - `members.delete` - Remove members from the organization. - `groups.read` - View organization groups and their members. - `groups.create` - Create new groups within the organization. - `groups.update` - Update group settings and membership. - `groups.delete` - Delete groups from the organization. **Roles & Permissions** - `roles.read` - View organization roles and their permissions. - `roles.create` - Create new roles within the organization. - `roles.update` - Update role permissions and settings. - `roles.delete` - Delete roles from the organization. **Canvases** - `canvases.read` - View organization canvases. 
- `canvases.create` - Create new canvases within the organization. - `canvases.update` - Update canvas settings and configuration. - `canvases.delete` - Delete canvases from the organization. **Integrations** - `integrations.read` - View organization integrations. - `integrations.create` - Create new integrations. - `integrations.update` - Update integration settings and configuration. - `integrations.delete` - Delete integrations from the organization. **Secrets** - `secrets.read` - View organization secrets. - `secrets.create` - Create new secrets. - `secrets.update` - Update secrets. - `secrets.delete` - Delete secrets from the organization ## Roles Use **Organization Settings > Roles** to review roles and create custom roles. - Default roles are marked as **Default Role** and are read-only. - Custom roles can be created, edited, and deleted if you have `roles.*` permissions. ![Roles page](../../../assets/rbac-roles.png) The Create Role page lets you pick permissions by category. ![Create role page](../../../assets/rbac-create-role.png) ## Groups Groups map to a single role. When a user is added to a group, they inherit that role in addition to any direct role assignment. - Create groups in **Organization Settings > Groups**. - Change a group role from the Groups list; all group members inherit the new role immediately. ![Groups page](../../../assets/rbac-groups.png) ![Groups creation page](../../../assets/rbac-create-group.png) ## Members The Members page is where you assign a member's direct role and manage invite links. - New members start as `Viewer` by default. - Assigning a role replaces the previous direct role. - You must keep at least one `Owner` in the organization. ![Members page](../../../assets/rbac-members.png) #### Service Accounts Source URL: https://docs.superplane.com/concepts/service-accounts Service accounts are non-human identities for API access. Use them for scripts and integrations that need a dedicated set of permissions. 
## When to use - **Scripts**: Call the SuperPlane API from automation. - **Integrations**: Let external systems call the SuperPlane API with their own identity and role. ## Create a service account and token 1. In the SuperPlane UI, go to **Organization Settings > Service accounts**. 2. Create a service account and assign it a role. 3. Generate an API token and copy it (it is shown only once). ## Use the token to configure the SuperPlane CLI ```sh superplane connect ``` ## Permissions The token can only do what the service account’s role allows. Permissions are organization-scoped and governed by [RBAC](/concepts/access-control). - **Viewer**: Read-only (e.g. list canvases, read run history). - **Admin** or custom roles: Create or update canvases, integrations, or secrets when required. ## Best practices - **One service account per external system**: Create a dedicated service account per integration or script so you can revoke access or rotate credentials without impacting others. - **Rotate**: Regenerate tokens periodically and update any stored copies. - **Least privilege**: Use the minimum role that satisfies the use case (e.g. Viewer for read-only). #### Public API Reference Source URL: https://docs.superplane.com/concepts/api-reference SuperPlane exposes a public REST API that lets you manage canvases, integrations, secrets, and runtime operations programmatically. The full API reference is available as an interactive Swagger document at: **[https://app.superplane.com/api/v1/docs](https://app.superplane.com/api/v1/docs)** ## Authentication All API requests require a valid API token sent in the `Authorization` header: ``` Authorization: Bearer ``` You can obtain a token in two ways: - **Service account token** (recommended for scripts and integrations): see [Service Accounts](/concepts/service-accounts). - **Personal token** (tied to your user): go to **Profile > API token** in the SuperPlane UI. 
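As a Python sketch of an authenticated call (standard library only; `/canvases` is the same endpoint used in the quick example):

```python
import json
import urllib.request

API_URL = "https://app.superplane.com/api/v1"

def api_request(path: str, token: str) -> urllib.request.Request:
    """Build a request with the required Authorization header."""
    return urllib.request.Request(
        f"{API_URL}/{path}",
        headers={"Authorization": f"Bearer {token}"},
    )

# Example (requires a valid service-account or personal token):
# with urllib.request.urlopen(api_request("canvases", "YOUR_TOKEN")) as resp:
#     print(json.dumps(json.load(resp), indent=2))
```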
## Quick example ```sh curl -s https://app.superplane.com/api/v1/canvases \ -H "Authorization: Bearer " | jq ``` ## Using with the CLI The [SuperPlane CLI](/installation/cli) wraps the same API. If you prefer a terminal-based workflow, the CLI handles authentication and formatting for you. #### Glossary Source URL: https://docs.superplane.com/concepts/glossary This page defines the core terms used throughout the SuperPlane documentation. ## Canvas A **canvas** is the workspace where you design and run workflows. It is a graph of nodes connected by subscriptions that define how events flow between nodes. A canvas usually represents multiple possible workflows. ## Workflow A **workflow** is the behavior expressed by a canvas: what should happen when an event occurs, which steps run, and how data moves between steps. ## Node A **node** is a single step on a canvas. Each node receives an event, performs some work, and emits an event to any downstream nodes that subscribe to it. ## Component A **component** is the “type” of a node (for example, **Webhook**, **Manual Run**, **Filter**, or a GitHub action). Components define what configuration a node needs and what it emits. ## Trigger A **trigger** is a component that starts a workflow execution. Triggers typically receive external events (webhooks, schedules) or start runs manually. ## Action An **action** is a component that runs in response to an upstream event. Actions can call external systems, transform data, route events, or wait for human input. ## Integration An **integration** connects SuperPlane to an external system (for example GitHub, Slack, PagerDuty). Integrations provide triggers and actions you can use as nodes on the canvas. ## Event An **event** is the unit of work that flows between nodes. Events carry data (the payload) and are delivered to any downstream nodes that subscribe to them. ## Payload A **payload** is the JSON data associated with an event or a node execution. 
Payloads are what you inspect in run history and what you reference in expressions. ## Output channel (channel) A **channel** is a named output a node can emit on (for example `passed`/`failed`, `approved`/`rejected`). Channels let you route events to different downstream paths based on outcomes. ## Subscription A **subscription** is the connection from one node’s output (optionally a specific channel) to another node’s input. A canvas is essentially a graph of subscriptions. ## Run A **run** is a single end-to-end workflow execution, from the first triggering event through all downstream work it causes. Runs are what you use to debug “what happened?” across many steps. ## Run item A **run item** is the execution record for a single node within a run. A run is composed of many run items. ## Run history **Run history** is the UI view that lists past executions for a node or a canvas. It’s where you inspect payloads, timestamps, statuses, and errors. ## Message chain The **message chain** is the accumulated outputs from upstream nodes within a run. It allows downstream nodes to access and combine data from earlier steps. ## Expression An **expression** is a small program used to read and transform payload data (for example to build a message, compute a condition, or select an output path). See the [Expressions](/concepts/expressions) page for more details. ## Service account A **service account** is a non-human identity used to call the SuperPlane API from scripts and external systems. Access is governed by [RBAC](/concepts/access-control). See [Service Accounts](/concepts/service-accounts) for details. ### Installation #### Overview Source URL: https://docs.superplane.com/installation/overview SuperPlane can be installed in a few different ways depending on the level of scale and operational control you need. ## Try it on your computer Choose this if you want to quickly try SuperPlane on your local machine without setting up cloud resources or Kubernetes. 
- [Try it on your computer](/installation/local) ## Single-host installation Choose this if you want a production-like setup without Kubernetes. Single-host installs are simpler to operate and are ideal for smaller teams or early deployments. - [EC2 on AWS](/installation/single-host/aws-ec2) - [Compute Engine on GCP](/installation/single-host/gcp-compute-engine) - [Hetzner](/installation/single-host/hetzner) - [DigitalOcean](/installation/single-host/digitalocean) - [Linode](/installation/single-host/linode) - [Generic server](/installation/single-host/generic-server) ## Kubernetes Choose this if you want a scalable, production instance of SuperPlane. - [Google Kubernetes Engine](/installation/kubernetes/gke) - [Amazon Kubernetes (EKS)](/installation/kubernetes/amazon-eks) ## Updating SuperPlane Each installation method has its own upgrade process. See the upgrade section in your installation method's documentation: - [Local installation](/installation/local#upgrading) - Single-host installations: - [EC2 on AWS](/installation/single-host/aws-ec2#upgrading) - [Compute Engine on GCP](/installation/single-host/gcp-compute-engine#upgrading) - [Hetzner](/installation/single-host/hetzner#upgrading) - [DigitalOcean](/installation/single-host/digitalocean#upgrading) - [Linode](/installation/single-host/linode#upgrading) - [Generic server](/installation/single-host/generic-server#upgrading) - Kubernetes installations: - [Google Kubernetes Engine](/installation/kubernetes/gke#upgrading) - [Amazon Kubernetes (EKS)](/installation/kubernetes/amazon-eks#upgrading) #### Try it on your computer Source URL: https://docs.superplane.com/installation/local The fastest way to try SuperPlane is to run the latest version of the SuperPlane Docker container on your own machine. You'll have a working SuperPlane instance in less than a minute, without provisioning any cloud infrastructure. 
## Prerequisites To run SuperPlane, you need: - [Docker installed and running][docker-install] (for example Docker Desktop on macOS/Windows, or Docker Engine on Linux). - A stable internet connection (SuperPlane opens a tunnel for incoming webhooks). ## Starting SuperPlane Run the latest stable SuperPlane Docker container: ```bash docker pull ghcr.io/superplanehq/superplane-demo:stable docker run --rm -p 3000:3000 -v spdata:/app/data -ti ghcr.io/superplanehq/superplane-demo:stable ``` This pulls the stable image and starts SuperPlane in your terminal. ### Public access and localtunnel SuperPlane needs to be reachable from the public internet to receive incoming webhooks. When you run the container, it automatically starts a [localtunnel][localtunnel] to expose your local instance through a public URL for incoming webhooks. This is convenient for quick trials, but it also means: - Your local SuperPlane instance becomes accessible from the internet via the tunnel URL. - You should **not** use this setup with sensitive data, secrets, or production systems. Use this setup for exploration and evaluation only. For more controlled, production-like deployments, use one of the single-host or Kubernetes installation options instead. ## Try the beta channel To try the latest features before they land in stable, use the beta tag: ```bash docker run -ti --rm ghcr.io/superplanehq/superplane-demo:beta ``` Beta images may change more frequently and can be less stable than the `stable` channel. ## Pin a specific version If you want to run a particular version, you can pin it explicitly: ```bash docker run -ti --rm ghcr.io/superplanehq/superplane-demo:v0.4 ``` Replace `v0.4` with the version you want to run. 
## Updating SuperPlane To update to the latest version, run docker pull and restart the container: ``` docker pull ghcr.io/superplanehq/superplane-demo:stable docker run --rm -p 3000:3000 -v spdata:/app/data -ti ghcr.io/superplanehq/superplane-demo:stable ``` Replace `stable` with the specific version tag if needed. ## Removing SuperPlane To completely remove SuperPlane, remove the data volume and Docker images: ```bash docker volume rm spdata docker rmi ghcr.io/superplanehq/superplane-demo:stable ``` If you've used other tags (like `beta` or specific versions), remove those images as well: ```bash docker rmi ghcr.io/superplanehq/superplane-demo:beta docker rmi ghcr.io/superplanehq/superplane-demo:v0.4 ``` [localtunnel]: https://github.com/localtunnel/localtunnel [docker-install]: https://docs.docker.com/get-docker/ #### EC2 on AWS Source URL: https://docs.superplane.com/installation/single-host/aws-ec2 This guide walks you through setting up a new Amazon EC2 instance from scratch and installing SuperPlane using the single-host installer. ## 1. Create an EC2 instance 1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/). 2. Open the **EC2** service. 3. Click **Launch instance**. 4. Configure your instance: - Give it a name (for example `superplane-single-host`). - Under **Application and OS Images**, choose an Ubuntu LTS image (for example Ubuntu Server 22.04 LTS). - Choose an instance type such as `t3.medium` (2 vCPUs, 4 GiB memory). - Select or create an SSH key pair so you can log in securely. 5. Under **Network settings**, either create a new security group or use an existing one with inbound rules that allow: - TCP port 22 (SSH) from your IP. - TCP port 80 (HTTP) from the internet. - TCP port 443 (HTTPS) from the internet. 6. Launch the instance and note its public IPv4 address or DNS name. At this point you have a Linux server that is reachable from the internet. ## 2. Point your domain to the instance 1. 
In your DNS provider (Route 53 or another provider), create an `A` record for your domain or subdomain (for example `superplane.example.com`). 2. Point the `A` record to the public IP address of your EC2 instance. 3. Wait for DNS to propagate (usually a few minutes). SuperPlane will use this domain to issue and maintain an SSL certificate. Optionally, you can allocate an Elastic IP and associate it with your instance so its public IP does not change. ## 3. Verify security group rules In the EC2 console: 1. Go to **Instances** and select your SuperPlane instance. 2. In the **Security** tab, click the attached security group. 3. Under **Inbound rules**, ensure the following rules exist: - SSH (TCP 22) from your IP. - HTTP (TCP 80) from `0.0.0.0/0`. - HTTPS (TCP 443) from `0.0.0.0/0`. Save any changes you make to the security group. ## 4. Install Docker and Docker Compose SSH into your EC2 instance using the public DNS name or IP. For Ubuntu images, the default user is usually `ubuntu`: ```bash ssh -i /path/to/your-key.pem ubuntu@your-ec2-public-dns ``` On the instance, install Docker and Docker Compose. For example, on Ubuntu: ```bash sudo apt update sudo apt install -y ca-certificates curl gnupg sudo install -m 0755 -d /etc/apt/keyrings curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \ sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg sudo chmod a+r /etc/apt/keyrings/docker.gpg echo \ "deb [arch=$(dpkg --print-architecture) \ signed-by=/etc/apt/keyrings/docker.gpg] \ https://download.docker.com/linux/ubuntu \ $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \ sudo tee /etc/apt/sources.list.d/docker.list > /dev/null sudo apt update sudo apt install -y docker-ce docker-ce-cli containerd.io \ docker-buildx-plugin docker-compose-plugin sudo systemctl enable --now docker sudo usermod -aG docker ubuntu newgrp docker ``` You now have an EC2 instance with Docker and Docker Compose installed, reachable from the internet at your chosen domain. ## 5. 
Install SuperPlane With Docker set up, install SuperPlane using the single-host installer. First, download and unpack the installer: ```bash wget -q https://install.superplane.com/superplane-single-host.tar.gz tar -xf superplane-single-host.tar.gz cd superplane ``` Then run the installer: ```bash ./install.sh ``` This downloads the single-host bundle, extracts it, and runs the installer. The installer sets up the Docker Compose stack and starts SuperPlane on your instance. ## SSL certificates and public access Because SuperPlane needs to connect to external integrations and receive webhooks, your instance must be reachable from the public internet. During installation, SuperPlane automatically: - Issues an SSL certificate for your configured domain. - Renews the certificate so HTTPS continues to work over time. Ensure your security group allows inbound traffic on ports 80 and 443 so certificate issuance and HTTPS access can succeed. ## 6. Enable EBS snapshots To protect your SuperPlane instance, create regular snapshots of the root EBS volume. In the EC2 console: 1. Go to **Instances** and select your SuperPlane instance. 2. Open the **Storage** tab and note the root volume ID. 3. Click the volume ID to open it in the **Volumes** view. 4. Click **Actions → Create snapshot** to create a snapshot of the volume. You can use these snapshots to restore the volume, or create a new instance with the same data if something goes wrong. ## Updating SuperPlane 1. Check the [GitHub releases][github-releases] for the latest version tag. 2. Edit `docker-compose.yml` and update the `image` field with the new tag. 3. 
Restart the stack: ``` docker compose pull docker compose up -d ``` [github-releases]: https://github.com/superplanehq/superplane/releases #### Compute Engine on GCP Source URL: https://docs.superplane.com/installation/single-host/gcp-compute-engine This guide walks you through setting up a new Google Compute Engine virtual machine from scratch and installing SuperPlane using the single-host installer. ## 1. Create a Compute Engine VM 1. Sign in to the [Google Cloud Console](https://console.cloud.google.com/). 2. Select or create a project. 3. Go to **Compute Engine → VM instances** and click **Create instance**. 4. Configure your VM: - Choose a region and zone close to you. - Under **Machine configuration**, pick a machine type such as `e2-medium` (2 vCPUs, 4 GB memory). - Under **Boot disk**, select an Ubuntu LTS image (for example Ubuntu 22.04 LTS). - Under **Firewall**, check **Allow HTTP traffic** and **Allow HTTPS traffic**. 5. Click **Create** and note the external IP address of the VM. At this point you have a Linux server that is reachable from the internet. ## 2. Point your domain to the VM 1. In your DNS provider (Cloud DNS or another provider), create an `A` record for your domain or subdomain (for example `superplane.example.com`). 2. Point the `A` record to the external IP of your VM. 3. Wait for DNS to propagate (usually a few minutes). SuperPlane will use this domain to issue and maintain an SSL certificate. ## 3. Verify firewall rules In the Google Cloud Console: 1. Go to **VPC network → Firewall**. 2. Ensure there are rules that allow: - TCP port 22 (SSH) to your VM. - TCP port 80 (HTTP) to your VM. - TCP port 443 (HTTPS) to your VM. The **Allow HTTP traffic** and **Allow HTTPS traffic** options you selected when creating the VM typically create these rules automatically. ## 4. Install Docker and Docker Compose SSH into your VM using the external IP or domain.
For Ubuntu images, the default user is usually `ubuntu`: ```bash gcloud compute ssh your-instance-name --zone your-zone ``` or using plain SSH: ```bash ssh ubuntu@your-vm-external-ip-or-domain ``` On the VM, install Docker and Docker Compose. For example, on Ubuntu: ```bash sudo apt update sudo apt install -y ca-certificates curl gnupg sudo install -m 0755 -d /etc/apt/keyrings curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \ sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg sudo chmod a+r /etc/apt/keyrings/docker.gpg echo \ "deb [arch=$(dpkg --print-architecture) \ signed-by=/etc/apt/keyrings/docker.gpg] \ https://download.docker.com/linux/ubuntu \ $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \ sudo tee /etc/apt/sources.list.d/docker.list > /dev/null sudo apt update sudo apt install -y docker-ce docker-ce-cli containerd.io \ docker-buildx-plugin docker-compose-plugin sudo systemctl enable --now docker sudo usermod -aG docker ubuntu newgrp docker ``` You now have a Compute Engine VM with Docker and Docker Compose installed, reachable from the internet at your chosen domain. ## 5. Install SuperPlane With Docker set up, install SuperPlane using the single-host installer. First, download and unpack the installer: ```bash wget -q https://install.superplane.com/superplane-single-host.tar.gz tar -xf superplane-single-host.tar.gz cd superplane ``` Then run the installer: ```bash ./install.sh ``` This downloads the single-host bundle, extracts it, and runs the installer. The installer sets up the Docker Compose stack and starts SuperPlane on your VM. ## SSL certificates and public access Because SuperPlane needs to connect to external integrations and receive webhooks, your VM must be reachable from the public internet. During installation, SuperPlane automatically: - Issues an SSL certificate for your configured domain. - Renews the certificate so HTTPS continues to work over time. 
Ensure your firewall allows inbound traffic on ports 80 and 443 so certificate issuance and HTTPS access can succeed. ## 6. Enable disk snapshots To protect your SuperPlane instance, create regular snapshots of the boot disk. In the Google Cloud Console: 1. Go to **Compute Engine → Disks**. 2. Find the boot disk attached to your SuperPlane VM. 3. Click the disk name to open its details. 4. Click **Create snapshot** to create a snapshot of the disk. You can use these snapshots to restore the disk, or create a new VM with the same data if something goes wrong. ## Updating SuperPlane 1. Check the [GitHub releases][github-releases] for the latest version tag. 2. Edit `docker-compose.yml` and update the `image` field with the new tag. 3. Restart the stack: ``` docker compose pull docker compose up -d ``` [github-releases]: https://github.com/superplanehq/superplane/releases #### Hetzner Source URL: https://docs.superplane.com/installation/single-host/hetzner This guide walks you through setting up a new Hetzner cloud server from scratch and installing SuperPlane using the single-host installer. ## 1. Create a Hetzner cloud server 1. Sign in to the [Hetzner Cloud Console](https://console.hetzner.cloud/). 2. Create a new project (or use an existing one). 3. Create a server: - Choose a location close to you. - Select an Ubuntu LTS image (for example Ubuntu 22.04). - Pick a shared CPU type with 2 vCPUs and 4 GB RAM. - Add SSH keys so you can log in securely. 4. Note the server’s public IPv4 address. At this point you have a Linux server that is reachable from the internet. ## 2. Point your domain to the server 1. In your DNS provider, create an `A` record for your domain or subdomain (for example `superplane.example.com`). 2. Point the `A` record to the public IP of your Hetzner server. 3. Wait for DNS to propagate (usually a few minutes).
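You can confirm propagation from any machine with Python (or use `dig`/`nslookup`). In this illustrative snippet, `superplane.example.com` is the placeholder domain from step 1:

```python
import socket

def resolved_ip(domain: str) -> str:
    """Return the IPv4 address the domain currently resolves to."""
    return socket.gethostbyname(domain)

# Compare against your server's public IP once DNS has propagated:
# print(resolved_ip("superplane.example.com"))
```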
SuperPlane will use this domain to issue and maintain an SSL certificate. ## 3. Open required ports In the Hetzner Cloud Console: 1. Open your project and go to the **Servers** view. 2. Click your SuperPlane server to open its details. 3. Go to the **Networking** tab. 4. Under **Firewalls**, either create a new firewall or edit the one attached to the server. 5. Add inbound rules that allow: - TCP port 22 (SSH) - TCP port 80 (HTTP, for certificate issuance) - TCP port 443 (HTTPS, for SuperPlane) 6. Apply the firewall to your server if it is not already attached. ## 4. Install Docker and Docker Compose SSH into your Hetzner server using the IP or domain: ```bash ssh root@your-server-ip-or-domain ``` On the server, install Docker and Docker Compose. For example, on Ubuntu: ```bash apt update apt install -y ca-certificates curl gnupg install -m 0755 -d /etc/apt/keyrings curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \ gpg --dearmor -o /etc/apt/keyrings/docker.gpg chmod a+r /etc/apt/keyrings/docker.gpg echo \ "deb [arch=$(dpkg --print-architecture) \ signed-by=/etc/apt/keyrings/docker.gpg] \ https://download.docker.com/linux/ubuntu \ $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \ tee /etc/apt/sources.list.d/docker.list > /dev/null apt update apt install -y docker-ce docker-ce-cli containerd.io \ docker-buildx-plugin docker-compose-plugin systemctl enable --now docker ``` You now have a Hetzner Linux server with Docker and Docker Compose installed, reachable from the internet at your chosen domain. ## 5. Install SuperPlane With Docker set up, install SuperPlane using the single‑host installer. First, download and unpack the installer: ```bash wget -q https://install.superplane.com/superplane-single-host.tar.gz tar -xf superplane-single-host.tar.gz cd superplane ``` Then run the installer: ```bash ./install.sh ``` This downloads the single‑host bundle, extracts it, and runs the installer. 
The installer sets up the Docker Compose stack and starts SuperPlane on your server. ## SSL certificates and public access Because SuperPlane needs to connect to external integrations and receive webhooks, your server must be reachable from the public internet. During installation, SuperPlane automatically: - Issues an SSL certificate for your configured domain. - Renews the certificate so HTTPS continues to work over time. Ensure your firewall allows inbound traffic on ports 80 and 443 so certificate issuance and HTTPS access can succeed. ## 6. Set up full disk backups To protect your SuperPlane instance, enable full disk backups for your Hetzner server. In the Hetzner Cloud Console: 1. Open your project and go to the **Servers** view. 2. Click your SuperPlane server to open its details. 3. Go to the **Backups** section. 4. Enable backups for the server. Hetzner will now create regular full disk backups of your server's root volume. You can use these backups to restore the entire server to an earlier state if something goes wrong. ## Updating SuperPlane 1. Check the [GitHub releases][github-releases] for the latest version tag. 2. Edit `docker-compose.yml` and update the `image` field with the new tag. 3. Restart the stack: ``` docker compose pull docker compose up -d ``` [github-releases]: https://github.com/superplanehq/superplane/releases #### DigitalOcean Source URL: https://docs.superplane.com/installation/single-host/digitalocean This guide walks you through setting up a new DigitalOcean Droplet from scratch and installing SuperPlane using the single-host installer. ## 1. Create a DigitalOcean Droplet 1. Sign in to the [DigitalOcean Control Panel](https://cloud.digitalocean.com/). 2. Click **Create → Droplets**. 3. Configure your Droplet: - Choose a region close to you. - Select an Ubuntu LTS image (for example Ubuntu 22.04). - Pick a Basic Droplet with 2 vCPUs and 4 GB RAM.
- Add SSH keys so you can log in securely. 4. Create the Droplet and note its public IPv4 address. At this point you have a Linux server that is reachable from the internet. ## 2. Point your domain to the Droplet 1. In your DNS provider (DigitalOcean DNS or another provider), create an `A` record for your domain or subdomain (for example `superplane.example.com`). 2. Point the `A` record to the public IP of your Droplet. 3. Wait for DNS to propagate (usually a few minutes). SuperPlane will use this domain to issue and maintain an SSL certificate. ## 3. Open required ports with Cloud Firewalls In the DigitalOcean Control Panel: 1. Go to **Networking → Firewalls**. 2. Create a new firewall (or edit an existing one). 3. Under **Inbound rules**, allow: - TCP port 22 (SSH) - TCP port 80 (HTTP, for certificate issuance) - TCP port 443 (HTTPS, for SuperPlane) 4. Under **Apply to Droplets**, select your SuperPlane Droplet. 5. Save the firewall. ## 4. Install Docker and Docker Compose SSH into your Droplet using the IP or domain: ```bash ssh root@your-droplet-ip-or-domain ``` On the Droplet, install Docker and Docker Compose. For example, on Ubuntu: ```bash apt update apt install -y ca-certificates curl gnupg install -m 0755 -d /etc/apt/keyrings curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \ gpg --dearmor -o /etc/apt/keyrings/docker.gpg chmod a+r /etc/apt/keyrings/docker.gpg echo \ "deb [arch=$(dpkg --print-architecture) \ signed-by=/etc/apt/keyrings/docker.gpg] \ https://download.docker.com/linux/ubuntu \ $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \ tee /etc/apt/sources.list.d/docker.list > /dev/null apt update apt install -y docker-ce docker-ce-cli containerd.io \ docker-buildx-plugin docker-compose-plugin systemctl enable --now docker ``` You now have a DigitalOcean Droplet with Docker and Docker Compose installed, reachable from the internet at your chosen domain. ## 5. 
Install SuperPlane With Docker set up, install SuperPlane using the single-host installer. First, download and unpack the installer: ```bash wget -q https://install.superplane.com/superplane-single-host.tar.gz tar -xf superplane-single-host.tar.gz cd superplane ``` Then run the installer: ```bash ./install.sh ``` This downloads the single-host bundle, extracts it, and runs the installer. The installer sets up the Docker Compose stack and starts SuperPlane on your Droplet. ## SSL certificates and public access Because SuperPlane needs to connect to external integrations and receive webhooks, your Droplet must be reachable from the public internet. During installation, SuperPlane automatically: - Issues an SSL certificate for your configured domain. - Renews the certificate so HTTPS continues to work over time. Ensure your firewall allows inbound traffic on ports 80 and 443 so certificate issuance and HTTPS access can succeed. ## 6. Enable automatic backups To protect your SuperPlane instance, enable automatic backups for your Droplet. In the DigitalOcean Control Panel: 1. Go to **Droplets** and click your SuperPlane Droplet. 2. Open the **Backups** tab. 3. Enable backups for the Droplet. DigitalOcean will now create regular full disk backups of your Droplet. You can use these backups to restore the entire Droplet to an earlier state if something goes wrong. ## Updating SuperPlane 1. Check the [GitHub releases][github-releases] for the latest version tag. 2. Edit `docker-compose.yml` and update the `image` field with the new tag. 3. Restart the stack: ``` docker compose pull docker compose up -d ``` [github-releases]: https://github.com/superplanehq/superplane/releases #### Linode Source URL: https://docs.superplane.com/installation/single-host/linode This guide walks you through setting up a new Linode instance from scratch and installing SuperPlane using the single-host installer. ## 1.
Create a Linode 1. Sign in to the [Linode Cloud Manager](https://cloud.linode.com/). 2. Click **Create → Linode**. 3. Configure your Linode: - Choose a region close to you. - Select an Ubuntu LTS image (for example Ubuntu 22.04 LTS). - Pick a shared CPU plan with 2 vCPUs and 4 GB RAM (for example **Linode 4GB**). - Add SSH keys so you can log in securely. 4. Create the Linode and note its public IPv4 address. At this point you have a Linux server that is reachable from the internet. ## 2. Point your domain to the Linode 1. In your DNS provider (Linode DNS or another provider), create an `A` record for your domain or subdomain (for example `superplane.example.com`). 2. Point the `A` record to the public IP of your Linode. 3. Wait for DNS to propagate (usually a few minutes). SuperPlane will use this domain to issue and maintain an SSL certificate. ## 3. Open required ports with Cloud Firewall In the Linode Cloud Manager: 1. Go to **Firewall**. 2. Create a new firewall (or edit an existing one). 3. Under **Inbound Rules**, allow: - TCP port 22 (SSH) - TCP port 80 (HTTP, for certificate issuance) - TCP port 443 (HTTPS, for SuperPlane) 4. Under **Linodes**, attach the firewall to your SuperPlane Linode. 5. Save the firewall. ## 4. Install Docker and Docker Compose SSH into your Linode using the IP or domain: ```bash ssh root@your-linode-ip-or-domain ``` On the Linode, install Docker and Docker Compose. For example, on Ubuntu: ```bash apt update apt install -y ca-certificates curl gnupg install -m 0755 -d /etc/apt/keyrings curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \ gpg --dearmor -o /etc/apt/keyrings/docker.gpg chmod a+r /etc/apt/keyrings/docker.gpg echo \ "deb [arch=$(dpkg --print-architecture) \ signed-by=/etc/apt/keyrings/docker.gpg] \ https://download.docker.com/linux/ubuntu \ $(. 
/etc/os-release && echo "$VERSION_CODENAME") stable" | \ tee /etc/apt/sources.list.d/docker.list > /dev/null apt update apt install -y docker-ce docker-ce-cli containerd.io \ docker-buildx-plugin docker-compose-plugin systemctl enable --now docker ``` You now have a Linode with Docker and Docker Compose installed, reachable from the internet at your chosen domain. ## 5. Install SuperPlane With Docker set up, install SuperPlane using the single-host installer. First, download and unpack the installer: ```bash wget -q https://install.superplane.com/superplane-single-host.tar.gz tar -xf superplane-single-host.tar.gz cd superplane ``` Then run the installer: ```bash ./install.sh ``` This downloads the single-host bundle, extracts it, and runs the installer. The installer sets up the Docker Compose stack and starts SuperPlane on your Linode. ## SSL certificates and public access Because SuperPlane needs to connect to external integrations and receive webhooks, your Linode must be reachable from the public internet. During installation, SuperPlane automatically: - Issues an SSL certificate for your configured domain. - Renews the certificate so HTTPS continues to work over time. Ensure your firewall allows inbound traffic on ports 80 and 443 so certificate issuance and HTTPS access can succeed. ## 6. Enable backups To protect your SuperPlane instance, enable backups for your Linode. In the Linode Cloud Manager: 1. Go to **Linodes** and click your SuperPlane Linode. 2. Open the **Backups** tab. 3. Enable backups for the Linode. Linode will now create regular backups of your instance. You can use these backups to restore the Linode to an earlier state if something goes wrong. ## Updating SuperPlane 1. Check the [GitHub releases][github-releases] for the latest version tag. 2. Edit `docker-compose.yml` and update the `image` field with the new tag. 3. 
Restart the stack: ``` docker compose pull docker compose up -d ``` [github-releases]: https://github.com/superplanehq/superplane/releases #### Generic server Source URL: https://docs.superplane.com/installation/single-host/generic-server This guide describes how to install and run SuperPlane on a generic Linux server, such as a bare-metal host or virtual machine from any provider. ## Prerequisites Before you start, make sure you have: - A Linux server that is exposed to the internet (public IP or behind a public load balancer). - Docker and Docker Compose installed on the server. - A domain name that points to this server’s public IP. The single-host installation uses Docker Compose to run SuperPlane and its dependencies. It will also issue and maintain an SSL certificate for the domain you configure. ## Install Docker and Docker Compose Install Docker using Docker's official repository. On Ubuntu, run: ```bash apt update apt install -y ca-certificates curl gnupg install -m 0755 -d /etc/apt/keyrings curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \ gpg --dearmor -o /etc/apt/keyrings/docker.gpg chmod a+r /etc/apt/keyrings/docker.gpg echo \ "deb [arch=$(dpkg --print-architecture) \ signed-by=/etc/apt/keyrings/docker.gpg] \ https://download.docker.com/linux/ubuntu \ $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \ tee /etc/apt/sources.list.d/docker.list > /dev/null apt update apt install -y docker-ce docker-ce-cli containerd.io \ docker-buildx-plugin docker-compose-plugin systemctl enable --now docker ``` ## Installation steps Run the following commands on your server to download and unpack the installer: ```bash wget -q https://install.superplane.com/superplane-single-host.tar.gz tar -xf superplane-single-host.tar.gz cd superplane ``` Then run the installer: ```bash ./install.sh ``` This downloads the single-host bundle, extracts it, and runs the installer. The installer sets up the Docker Compose stack and starts SuperPlane on your server. 
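Each of these single-host guides ends with an "Updating SuperPlane" section that comes down to editing one `image:` line in `docker-compose.yml` and restarting the stack. If you script your updates, that edit can be automated with a sed one-liner; here is a minimal sketch on a throwaway file (the service layout and image name below are assumptions — match them to the line in your actual compose file):

```shell
# Sketch: bump the SuperPlane image tag in docker-compose.yml before
# running `docker compose pull && docker compose up -d`.
# Demonstrated on a throwaway file with assumed contents.
COMPOSE_FILE=/tmp/docker-compose-demo.yml
NEW_TAG="v1.2.0"   # hypothetical tag from the GitHub releases page

cat > "$COMPOSE_FILE" <<'EOF'
services:
  superplane:
    image: ghcr.io/superplanehq/superplane:v1.1.0
EOF

# Rewrite only the tag portion of the image line (GNU sed).
sed -i -E "s|(image: ghcr\.io/superplanehq/superplane:).*|\1${NEW_TAG}|" "$COMPOSE_FILE"
grep 'image:' "$COMPOSE_FILE"
```

After rewriting the tag in the real file, `docker compose pull && docker compose up -d` picks up the new image.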
## SSL certificates and public access Because SuperPlane needs to connect to external integrations and receive webhooks, your server must be reachable from the public internet. During installation, SuperPlane automatically: - Issues an SSL certificate for your configured domain. - Renews the certificate so HTTPS continues to work over time. Ensure your firewall or security group allows inbound traffic on ports 80 and 443 so certificate issuance and HTTPS access can succeed. ## Updating SuperPlane 1. Check the [GitHub releases][github-releases] for the latest version tag. 2. Edit `docker-compose.yml` and update the `image` field with the new tag. 3. Restart the stack: ``` docker compose pull docker compose up -d ``` [github-releases]: https://github.com/superplanehq/superplane/releases #### Google Kubernetes Engine Source URL: https://docs.superplane.com/installation/kubernetes/gke This guide describes how to deploy SuperPlane to Google Kubernetes Engine (GKE) using Terraform. ## Prerequisites - A GCP project with billing enabled - [Terraform][terraform-install] >= 1.5.0 - [`gcloud` CLI][gcloud-install] installed and authenticated - [`kubectl`][kubectl-install] installed ## Step 1: Authenticate with GCP ```bash gcloud auth application-default login gcloud config set project YOUR_PROJECT_ID ``` ## Step 2: Create Static IP Address Create a global static IP address for the ingress: ```bash gcloud compute addresses create superplane-ip --global --ip-version=IPV4 ``` Get the reserved IP address: ```bash gcloud compute addresses describe superplane-ip --global --format='get(address)' ``` ## Step 3: Configure DNS Create an A record in your DNS provider pointing to the static IP: - **Type:** A - **Name:** Your subdomain (e.g., `superplane`) - **Value:** The static IP address from Step 2 Wait for DNS propagation and verify: ```bash dig superplane.example.com +short ``` ## Step 4: Clone and Configure Terraform ```bash git clone 
https://github.com/superplanehq/superplane-terraform cd superplane-terraform/gke cp terraform.tfvars.example terraform.tfvars ``` Edit `terraform.tfvars`: ```hcl project_id = "my-gcp-project" domain_name = "superplane.example.com" static_ip_name = "superplane-ip" letsencrypt_email = "admin@example.com" ``` ### Configuration Options | Variable | Description | Default | | ---------------------------- | ------------------------------------- | --------------- | | `project_id` | GCP project ID | (required) | | `domain_name` | Domain name for SuperPlane | (required) | | `static_ip_name` | Name of pre-created static IP | (required) | | `letsencrypt_email` | Email for Let's Encrypt | (required) | | `region` | GCP region | `us-central1` | | `zone` | GCP zone | `us-central1-a` | | `cluster_name` | GKE cluster name | `superplane` | | `node_count` | Number of GKE nodes | `2` | | `machine_type` | GKE node machine type | `e2-medium` | | `superplane_image_tag` | SuperPlane image tag | `stable` | | `installation.beaconEnabled` | Enable anonymized beacon telemetry | `true` | ## Step 5: Deploy ```bash terraform init terraform apply ``` The deployment takes 15-20 minutes and creates: - GKE cluster - Cloud SQL PostgreSQL instance with VPC peering - Kubernetes secrets - cert-manager with Let's Encrypt - SuperPlane deployment ## Step 6: Verify Configure kubectl: ```bash gcloud container clusters get-credentials superplane --zone=us-central1-a \ --project=YOUR_PROJECT_ID ``` Check pods and ingress: ```bash kubectl get pods -n superplane kubectl get ingress -n superplane ``` Check SSL certificate status: ```bash kubectl get certificate -n superplane ``` Once the certificate shows `Ready`, access SuperPlane at `https://your-domain.com`. ## Updating SuperPlane 1. Check the [GitHub releases][github-releases] for the latest version tag. 2. Update `superplane_image_tag` in `terraform.tfvars` with the new tag. 3. 
Apply the changes: ``` terraform apply ``` ## Uninstalling ```bash # Disable deletion protection for the database and the cluster terraform apply \ -var="gke_deletion_protection=false" \ -var="sql_deletion_protection=false" # Then destroy all resources terraform destroy \ -var="gke_deletion_protection=false" \ -var="sql_deletion_protection=false" # Delete the static IP gcloud compute addresses delete superplane-ip --global ``` [terraform-install]: https://www.terraform.io/downloads [kubectl-install]: https://kubernetes.io/docs/tasks/tools/ [gcloud-install]: https://cloud.google.com/sdk/docs/install [github-releases]: https://github.com/superplanehq/superplane/releases #### Amazon Kubernetes (EKS) Source URL: https://docs.superplane.com/installation/kubernetes/amazon-eks This guide describes how to deploy SuperPlane to Amazon EKS using Terraform. ## Prerequisites - An AWS account with permissions to create EKS, RDS, and VPC resources - [Terraform][terraform-install] >= 1.5.0 - [AWS CLI][aws-cli-install] installed - [`kubectl`][kubectl-install] installed ## Step 1: Configure AWS CLI Configure the AWS CLI with your credentials: ```bash aws configure ``` You will be prompted to enter: - **AWS Access Key ID:** Your access key - **AWS Secret Access Key:** Your secret key - **Default region name:** e.g., `us-east-1` - **Default output format:** `json` (recommended) Verify the configuration: ```bash aws sts get-caller-identity ``` ## Step 2: Clone and Configure Terraform If you already have the SuperPlane repo checked out, the Terraform configuration lives in `superplane-terraform/eks`. 
Otherwise, clone it: ```bash git clone https://github.com/superplanehq/superplane-terraform ``` Then: ```bash cd superplane-terraform/eks cp terraform.tfvars.example terraform.tfvars ``` Edit `terraform.tfvars`: ```hcl domain_name = "superplane.example.com" letsencrypt_email = "admin@example.com" ``` ### Configuration Options | Variable | Description | Default | | ---------------------------- | ---------------------------------- | -------------- | | `domain_name` | Domain name for SuperPlane | (required) | | `letsencrypt_email` | Email for Let's Encrypt | (required) | | `region` | AWS region | `us-east-1` | | `cluster_name` | EKS cluster name | `superplane` | | `node_count` | Number of EKS nodes | `2` | | `instance_type` | EKS node instance type | `t3.medium` | | `db_instance_class` | RDS instance class | `db.t3.medium` | | `superplane_image_tag` | SuperPlane image tag | `stable` | | `installation.beaconEnabled` | Enable anonymized beacon telemetry | `true` | ## Step 3: Deploy ```bash terraform init terraform apply ``` The deployment takes 15-20 minutes and creates: - VPC with public and private subnets - EKS cluster with node group - RDS PostgreSQL instance - Network Load Balancer - cert-manager with Let's Encrypt - SuperPlane deployment ## Step 4: Configure kubectl ```bash aws eks update-kubeconfig --region us-east-1 --name superplane ``` ## Step 5: Configure DNS Get the Load Balancer hostname: ```bash kubectl get svc -n ingress-nginx ingress-nginx-controller \ -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' ``` Create a CNAME record in your DNS provider: - **Type:** CNAME - **Name:** Your subdomain (e.g., `superplane`) - **Value:** The hostname from the command above Wait for DNS propagation: ```bash dig superplane.example.com +short ``` ## Step 6: Verify Check pods and certificate status: ```bash kubectl get pods -n superplane kubectl get certificate -n superplane ``` Once the certificate shows `Ready`, access SuperPlane at `https://your-domain.com`. 
Note: Certificate issuance may take 5-10 minutes after DNS propagation completes. ## Updating SuperPlane 1. Check the [GitHub releases][github-releases] for the latest version tag. 2. Update `superplane_image_tag` in `terraform.tfvars` with the new tag. 3. Apply the changes: ``` terraform apply ``` ## Uninstalling ```bash # Disable deletion protection on the database aws rds modify-db-instance \ --db-instance-identifier superplane-db \ --no-deletion-protection \ --apply-immediately # Wait for modification to complete aws rds wait db-instance-available --db-instance-identifier superplane-db # Destroy all resources terraform destroy ``` [terraform-install]: https://www.terraform.io/downloads [kubectl-install]: https://kubernetes.io/docs/tasks/tools/ [aws-cli-install]: https://aws.amazon.com/cli/ [github-releases]: https://github.com/superplanehq/superplane/releases #### Beacon Source URL: https://docs.superplane.com/installation/beacon SuperPlane includes a lightweight beacon that sends a periodic ping to help the team understand installation volume and deployment types. ## What the beacon sends The beacon sends a JSON payload with: - `installation_type`: The installation type (for example `demo`, `single-host`, or `kubernetes`). - `installation_id`: A randomly generated UUID stored in the SuperPlane database No user data, workflow payloads, or secrets are included. The data is anonymized. ## How to disable the beacon You can disable the beacon by setting `SUPERPLANE_BEACON_ENABLED=no` and restarting SuperPlane. ### Local demo container Set the environment variable when starting the container: ```bash docker run --rm -p 3000:3000 -v spdata:/app/data \ -e SUPERPLANE_BEACON_ENABLED=no \ -ti ghcr.io/superplanehq/superplane-demo:stable ``` If you already have a data volume, you can also edit `/app/data/superplane.env` in that volume and restart the container. 
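The edit to `/app/data/superplane.env` mentioned above can be scripted so it works whether or not the variable is already present in the file. A sketch on a demo copy (the demo path and pre-existing contents are illustrative; the real file lives inside the data volume):

```shell
# Sketch: force SUPERPLANE_BEACON_ENABLED=no in an env file, whether or
# not the variable already exists. Shown on a demo copy; in practice
# the file is /app/data/superplane.env inside the data volume.
ENV_FILE=/tmp/superplane-demo.env
printf 'SUPERPLANE_BEACON_ENABLED=yes\n' > "$ENV_FILE"   # demo contents

if grep -q '^SUPERPLANE_BEACON_ENABLED=' "$ENV_FILE"; then
  # Variable exists: rewrite its value in place (GNU sed).
  sed -i -E 's/^SUPERPLANE_BEACON_ENABLED=.*/SUPERPLANE_BEACON_ENABLED=no/' "$ENV_FILE"
else
  # Variable missing: append it.
  echo 'SUPERPLANE_BEACON_ENABLED=no' >> "$ENV_FILE"
fi

cat "$ENV_FILE"
```

Because the script checks for the variable before writing, it is safe to re-run; restart the container afterwards for the change to take effect.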
### Single-host installer Edit `superplane.env` in the installation directory and set: ```bash SUPERPLANE_BEACON_ENABLED=no ``` Then restart the stack: ```bash docker compose up -d ``` ### Kubernetes (Helm) Set the Helm value and upgrade: ```yaml installation: beaconEnabled: false ``` Or with `--set`: ```bash helm upgrade superplane ./superplane-helm-chart \ --set installation.beaconEnabled=false ``` #### CLI Source URL: https://docs.superplane.com/installation/cli Use the SuperPlane CLI to connect to your organization and manage workflows from your terminal. ## Installation Download the latest binary for your operating system and architecture: ```bash curl -L https://install.superplane.com/superplane-cli-darwin-arm64 -o superplane chmod +x superplane sudo mv superplane /usr/local/bin/superplane ``` Or download a specific version: ```bash curl -L https://install.superplane.com/v0.1.6/superplane-cli-darwin-arm64 -o superplane chmod +x superplane sudo mv superplane /usr/local/bin/superplane ``` ## Authentication The SuperPlane CLI uses API tokens for authentication. You can use: - **Service account token** (recommended for scripts and integrations): see [Service Accounts](/concepts/service-accounts). - **Personal token** (tied to your user): go to **Profile > API token** in the SuperPlane UI. ## Connect Connect to a SuperPlane organization: ```sh superplane connect ``` Show your identity for the current context: ```sh superplane whoami ``` ### Multiple organizations You can connect to multiple SuperPlane organizations at the same time. Each connection becomes a CLI context. You can list contexts and switch interactively: ```sh superplane contexts ``` Switch directly with a context selector: ```sh superplane contexts / ``` ## Output formats All commands support `--output` (or `-o`) to choose the response format.
Possible values are: - `text` (default) - `json` - `yaml` Examples: ```sh superplane canvases list --output text superplane canvases list -o json superplane canvases get -o yaml ``` ## Managing canvases ### Create a canvas ```sh superplane canvases create ``` You can also create from a file: ```sh superplane canvases create --file my_canvas.yaml ``` ### Describe a canvas ```sh superplane canvases get ``` ### List canvases ```sh superplane canvases list ``` ### Set active canvas Since you mostly work on a single canvas at a time, you can set the active canvas with: ```sh superplane canvases active ``` This saves you from having to pass `--canvas-id` to commands that require it. ### Update a canvas Export the existing canvas, edit it, then apply your changes: ```sh superplane canvases get > my_canvas.yaml # update your YAML to reflect the changes you want to make superplane canvases update -f my_canvas.yaml ``` `superplane canvases update` applies auto-layout by default. Use explicit auto-layout flags only when you need to control scope or seed nodes. #### Check whether versioning is enabled Check the canvas metadata directly: ```sh superplane canvases get -o json | jq '.metadata.versioningEnabled' ``` Expected values: - `true`: effective versioning enabled for this canvas. Use `superplane canvases update --draft -f ...`, then create a change request. - `false`: effective versioning disabled for this canvas. Use `superplane canvases update -f ...` (no `--draft`), and do not use change-request commands. Quick behavior-based check: - If `superplane canvases update --draft ...` returns `--draft cannot be used when effective canvas versioning is disabled`, versioning is disabled for this canvas. - If `superplane canvases update ...` (without `--draft`) returns `effective canvas versioning is enabled for this canvas; use --draft`, versioning is enabled for this canvas.
- If `superplane canvases change-requests create ...` returns `effective canvas versioning is disabled for this canvas`, versioning is disabled for this canvas. ### Canvas versioning (drafts and change requests) Canvas update behavior depends on effective canvas mode: - Organization-level versioning ON forces effective versioning ON for all canvases. - Organization-level versioning OFF allows each canvas to toggle versioning independently. - If canvas versioning is enabled, `superplane canvases update` must use `--draft`. - If canvas versioning is disabled, do not use `--draft`; updates apply directly and change requests are not available. Draft workflow (versioning enabled): ```sh # Write changes to your draft version superplane canvases update --draft -f my_canvas.yaml # Create a change request from your current draft version superplane canvases change-requests create ``` You can include optional metadata when creating the change request: ```sh superplane canvases change-requests create \ --title "Add incident routing path" \ --description "Introduces PagerDuty fallback and Slack notification branch." ``` If you already set an active canvas with `superplane canvases active`, you can omit the canvas ID when running `superplane canvases change-requests create`. ### Change request review actions When versioning is enabled, use `canvases change-requests` to review and publish: ```sh # List requests for a canvas (or active canvas) superplane canvases change-requests list # Create a change request superplane canvases change-requests create \ --title "Add incident routing path" \ --description "Introduces PagerDuty fallback and Slack notification branch."
# Review actions superplane canvases change-requests approve superplane canvases change-requests unapprove superplane canvases change-requests reject superplane canvases change-requests reopen superplane canvases change-requests publish ``` Useful filters: ```sh # Open requests only superplane canvases change-requests list --status open # Conflicted requests only superplane canvases change-requests list --status conflicted # Requests created by you superplane canvases change-requests list --mine ``` ### Resolve conflicted change requests If two change requests edit overlapping nodes differently, one or both can become conflicted. Conflicted requests cannot be approved or published until resolved. Use a resolved canvas YAML and apply it to the change request: ```sh superplane canvases change-requests resolve \ --file resolved_canvas.yaml ``` Optional auto-layout can be applied during resolve: ```sh superplane canvases change-requests resolve \ --file resolved_canvas.yaml \ --auto-layout horizontal ``` ### Canvas YAML shape (minimal working example) When updating canvases via YAML, component nodes and edges must use the API field names. This example connects a `schedule` trigger to an `http` component that sends a keepalive request every minute: ```yaml apiVersion: v1 kind: Canvas metadata: id: name: Store app spec: edges: - sourceId: schedule-schedule-w3mak1 targetId: http-keepalive-ping channel: default nodes: - id: schedule-schedule-w3mak1 name: schedule type: TYPE_TRIGGER trigger: name: schedule paused: false position: x: 144 y: 0 configuration: type: minutes minutesInterval: 1 customName: Keepalive {{ now() }} - id: http-keepalive-ping name: http type: TYPE_COMPONENT component: name: http paused: false position: x: 456 y: 0 configuration: method: GET url: https://store-app-c6nr.examplepaas.com/ customName: PaaS keepalive ``` Notes: - For component nodes, `type` must be `TYPE_COMPONENT` and `component.name` is required. 
- For trigger nodes, use `type: TYPE_TRIGGER` and `trigger.name`. - Edge fields are `sourceId`, `targetId`, and optional `channel`. - Use `superplane index components` to find component keys (for example, `http`, `if`, `noop`). - Positioning guideline for agents: - Keep downstream nodes on the same row by default (`y` unchanged). - Use `x = upstream.x + 480` as the default spacing for new connected nodes. - Avoid changing positions of existing nodes unless explicitly requested. - If overlap still appears in UI, apply a small horizontal nudge (`x +/- 80..120`) before changing `y`. ## Discovery index Use `index` to discover available integration definitions, triggers, and components. ```sh superplane index integrations ``` Describe one integration definition: ```sh superplane index integrations --name ``` List core components: ```sh superplane index components ``` List components from an integration definition: ```sh superplane index components --from ``` Describe one component: ```sh superplane index components --name ``` List triggers from an integration definition: ```sh superplane index triggers --from ``` Describe one trigger: ```sh superplane index triggers --name ``` ## Managing integrations List connected integrations: ```sh superplane integrations list ``` Get details for one connected integration: ```sh superplane integrations get ``` List resources available from a connected integration: ```sh superplane integrations list-resources --id --type ``` You can pass additional query parameters when needed: ```sh superplane integrations list-resources \ --id \ --type \ --parameters key=value,key2=value2 ``` ## Managing secrets List secrets: ```sh superplane secrets list ``` Create a secret from a file: ```sh superplane secrets create --file my_secret.yaml ``` Update a secret from a file: ```sh superplane secrets update --file my_secret.yaml ``` Delete a secret: ```sh superplane secrets delete ``` ## Runtime operations List root events: ```sh superplane events 
list --canvas-id <canvas-id>
```

List executions for a root event:

```sh
superplane events list-executions --canvas-id <canvas-id> --event-id <event-id>
```

List node executions:

```sh
superplane executions list --canvas-id <canvas-id> --node-id <node-id>
```

Cancel a node execution:

```sh
superplane executions cancel --canvas-id <canvas-id> --execution-id <execution-id>
```

List queue items for a node:

```sh
superplane queue list --canvas-id <canvas-id> --node-id <node-id>
```

Delete a queue item:

```sh
superplane queue delete --canvas-id <canvas-id> --node-id <node-id> --item-id <item-id>
```

## Updating SuperPlane

To upgrade to the latest version, download the latest binary and replace the existing one:

```bash
curl -L https://install.superplane.com/superplane-cli-darwin-arm64 -o superplane
chmod +x superplane
sudo mv superplane /usr/local/bin/superplane
```

Replace `darwin-arm64` with your operating system and architecture (e.g., `linux-amd64`, `darwin-amd64`).

### Components

#### AWS

Source URL: https://docs.superplane.com/components/aws

Manage resources and execute AWS commands in workflows.

## Instructions

Initially, you can leave the **"IAM Role ARN"** field empty, as you will be guided through the identity provider and IAM role creation process.

## CloudWatch • On Alarm

The On Alarm trigger starts a workflow execution when a CloudWatch alarm transitions to the ALARM state.
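A common need downstream of this trigger is distinguishing a fresh transition into ALARM from an alarm that is already firing. A minimal Python sketch, assuming a plain JSON payload shaped like the Event Data documented below (the helper itself is illustrative, not part of SuperPlane):

```python
def is_newly_firing(event):
    """True when a CloudWatch alarm event represents a fresh transition
    into ALARM, rather than an already-firing alarm re-reporting.
    Field names follow the trigger's documented Event Data."""
    detail = event["data"]["detail"]
    return (detail["state"]["value"] == "ALARM"
            and detail["previousState"]["value"] != "ALARM")

event = {"data": {"detail": {
    "alarmName": "HighCPUUtilization",
    "previousState": {"value": "OK"},
    "state": {"value": "ALARM"},
}}}
assert is_newly_firing(event)
```

The same shape works for routing on `detail.previousState.value` when you want to react to recoveries (ALARM back to OK) instead.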
### Use Cases - **Incident response**: Notify responders and open incidents when alarms fire - **Auto-remediation**: Execute rollback or recovery workflows immediately - **Audit and reporting**: Track alarm transitions over time ### Configuration - **Region**: AWS region where alarms are evaluated - **Alarms**: Optional alarm name filters (supports equals, not-equals, and regex matches) - **State**: Only trigger for alarms in the specified state (OK, ALARM, or INSUFFICIENT_DATA) ### Event Data Each alarm event includes: - **detail.alarmName**: CloudWatch alarm name - **detail.state.value**: Current alarm state - **detail.previousState.value**: Previous alarm state ### Example Data ```json { "data": { "account": "123456789012", "detail": { "alarmName": "HighCPUUtilization", "previousState": { "reason": "Threshold Crossed: 1 datapoint [35.0 (20/11/24 20:29:00)] was not greater than or equal to the threshold (90.0).", "timestamp": "2024-11-20T20:30:33.000+0000", "value": "OK" }, "state": { "reason": "Threshold Crossed: 1 datapoint [95.0 (20/11/24 20:34:00)] was greater than or equal to the threshold (90.0).", "timestamp": "2024-11-20T20:35:33.000+0000", "value": "ALARM" } }, "detail-type": "CloudWatch Alarm State Change", "id": "2f1ecf5c-8bc9-4b7d-9e76-8df420e8e1a7", "region": "us-east-1", "resources": [ "arn:aws:cloudwatch:us-east-1:123456789012:alarm:HighCPUUtilization" ], "source": "aws.cloudwatch", "time": "2024-11-20T20:35:33Z", "version": "0" }, "timestamp": "2026-02-10T12:00:00Z", "type": "aws.cloudwatch.alarm" } ``` ## CodeArtifact • On Package Version The On Package Version trigger starts a workflow execution when a package version is created, modified, or deleted in AWS CodeArtifact. 
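Package-version events can be routed on `detail.operationType`. A hedged Python sketch; the operation names (`Created`, `Updated`, `Deleted`) are assumptions based on the example payload below, and the helper is illustrative, not part of SuperPlane:

```python
def classify_package_event(event):
    """Coarsely route a CodeArtifact package-version event by its
    detail.operationType field."""
    op = event["data"]["detail"].get("operationType")
    # Assumed operation names; "Created" appears in the Example Data below.
    return {"Created": "release", "Updated": "update", "Deleted": "cleanup"}.get(op, "ignore")

event = {"data": {"detail": {
    "operationType": "Created",
    "packageName": "@scope/example-package",
    "packageVersion": "1.2.3",
}}}
assert classify_package_event(event) == "release"
```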
### Use Cases - **Release automation**: Trigger downstream workflows when a new package version is published - **Dependency monitoring**: Notify teams about changes to shared libraries - **Compliance checks**: Validate artifacts before promotion ### Example Data ```json { "data": { "account": "123456789012", "detail": { "changes": { "assetsAdded": 1, "assetsRemoved": 0, "assetsUpdated": 0, "metadataUpdated": false, "statusChanged": true }, "domainName": "example-domain", "domainOwner": "123456789012", "eventDeduplicationId": "5f87d1a3-2c1f-4ab0-8f55-8f4c2b4a5c76", "operationType": "Created", "packageFormat": "npm", "packageName": "@scope/example-package", "packageNamespace": null, "packageVersion": "1.2.3", "packageVersionRevision": "E30D52B451F42F41", "packageVersionState": "Published", "repositoryAdministrator": "arn:aws:sts::123456789012:assumed-role/ExampleRole/example-user", "repositoryName": "example-repo", "sequenceNumber": 1 }, "detail-type": "CodeArtifact Package Version State Change", "id": "d9e9ff4a-3514-3d2c-b6b8-1fb5e0b9d3b2", "region": "us-east-1", "resources": [ "arn:aws:codeartifact:us-east-1:123456789012:repository/example-domain/example-repo" ], "source": "aws.codeartifact", "time": "2024-11-20T20:35:33Z", "version": "0" }, "timestamp": "2026-03-10T14:25:30.31254162Z", "type": "aws.codeartifact.package.version" } ``` ## CodePipeline • On Pipeline The On Pipeline trigger starts a workflow execution when AWS CodePipeline emits a pipeline execution state change event. 
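The trigger's Pipelines and Pipeline States filters can be mimicked when post-processing events. A minimal Python sketch, assuming the field names in the Event Data documented below (the helper is illustrative, not part of SuperPlane):

```python
import re

def pipeline_event_matches(event, states=None, name_pattern=None):
    """Check a pipeline execution event against optional filters:
    states       -- set of allowed detail.state values (e.g. {"FAILED"})
    name_pattern -- regex applied to detail.pipeline"""
    detail = event["data"]["detail"]
    if states and detail["state"] not in states:
        return False
    if name_pattern and not re.search(name_pattern, detail["pipeline"]):
        return False
    return True

event = {"data": {"detail": {
    "pipeline": "demo-pipeline",
    "execution-id": "00000000-0000-0000-0000-000000000001",
    "state": "STARTED",
}}}
assert pipeline_event_matches(event, states={"STARTED", "SUCCEEDED"})
assert not pipeline_event_matches(event, name_pattern=r"^prod-")
```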
### Use Cases - **Deployment visibility**: Start workflows whenever pipeline state changes - **Incident response**: Notify teams when a pipeline fails or is canceled - **Workflow orchestration**: Trigger follow-up automations on specific pipeline states ### Configuration - **Region**: AWS region where pipeline execution events are observed - **Pipelines**: Optional pipeline name filters (supports equals, not-equals, and regex matches) - **Pipeline States**: Optional list of states to match (for example STARTED, SUCCEEDED, FAILED) ### Event Data Each event includes: - **detail.pipeline**: CodePipeline pipeline name - **detail.execution-id**: Pipeline execution ID - **detail.state**: Pipeline execution state ### Example Data ```json { "data": { "account": "123456789012", "detail": { "execution-id": "00000000-0000-0000-0000-000000000001", "execution-trigger": { "trigger-detail": "arn:aws:sts::123456789012:assumed-role/superplane-demo-role/SuperPlane-00000000-0000-0000-0000-000000000000", "trigger-type": "StartPipelineExecution" }, "pipeline": "demo-pipeline", "pipeline-execution-attempt": 1, "start-time": "2026-02-24T15:21:42.016Z", "state": "STARTED", "version": 1 }, "detail-type": "CodePipeline Pipeline Execution State Change", "id": "00000000-0000-0000-0000-000000000002", "region": "us-east-1", "resources": [ "arn:aws:codepipeline:us-east-1:123456789012:demo-pipeline" ], "source": "aws.codepipeline", "time": "2026-02-24T15:21:42Z", "version": "0" }, "timestamp": "2026-02-24T15:21:52.212Z", "type": "aws.codepipeline.pipeline" } ``` ## EC2 • On Image The On Image trigger starts a workflow execution when an EC2 AMI changes state. 
### Use Cases - **Image pipeline orchestration**: Continue workflows when a new AMI becomes available - **Failure handling**: Alert and remediate when AMI creation fails - **Compliance workflows**: Run validation and distribution after image creation ### Configuration - **Region**: AWS region where AMI state changes are monitored - **Image State**: State to trigger on (pending, available, failed) ### Event Data Each AMI state event includes: - **detail.ImageId**: AMI ID (for example: ami-1234567890abcdef0) - **detail.State**: AMI state - **detail.ErrorMessage**: Error message for failed states (if available) ### Example Data ```json { "data": { "account": "123456789012", "detail": { "ImageId": "ami-07f0e4f3e9c123abc", "State": "available" }, "detail-type": "EC2 AMI State Change", "id": "f74f3de5-f9b7-4f3d-909a-531fc3ff2f14", "region": "us-east-1", "resources": [ "arn:aws:ec2:us-east-1::image/ami-07f0e4f3e9c123abc" ], "source": "aws.ec2", "time": "2026-02-10T12:10:00Z", "version": "0" }, "timestamp": "2026-02-10T12:10:01Z", "type": "aws.ec2.image" } ``` ## ECR • On Image Push The On Image Push trigger starts a workflow execution when an image is pushed to an ECR repository. 
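The Image Tags filters documented below accept patterns like `latest` or `^v[0-9]+`. A hedged Python sketch of the same matching logic, assuming the `detail.image-tag` field shown in the Event Data (the helper is illustrative, not part of SuperPlane):

```python
import re

def tag_matches(event, patterns):
    """True when the pushed tag matches any configured pattern.
    Uses regex search, so anchor patterns (^, $) for exact matches."""
    tag = event["data"]["detail"]["image-tag"]
    return any(re.search(p, tag) for p in patterns)

event = {"data": {"detail": {"image-tag": "v1.2.3", "repository-name": "my-repo"}}}
assert tag_matches(event, [r"^v[0-9]+"])
assert not tag_matches(event, [r"^latest$"])
```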
### Use Cases - **Build pipelines**: Trigger builds and deployments on container pushes - **Security automation**: Kick off scans or alerts for newly pushed images - **Release workflows**: Promote artifacts when a tag is published ### Configuration - **Repositories**: Optional filters for ECR repository names - **Image Tags**: Optional filters for image tags (for example: `latest` or `^v[0-9]+`) ### Event Data Each image push event includes: - **detail.repository-name**: ECR repository name - **detail.image-tag**: Tag that was pushed - **detail.image-digest**: Digest of the image ### Example Data ```json { "data": { "account": "123456789012", "detail": { "action-type": "PUSH", "image-digest": "sha256:2c26b46b68ffc68ff99b453c1d30413413422f1642f0e2b8c7b8a0b8a96a909e", "image-tag": "latest", "repository-arn": "arn:aws:ecr:us-east-1:123456789012:repository/my-repo", "repository-name": "my-repo", "result": "SUCCESS" }, "detail-type": "ECR Image Action", "id": "c1b45a2c-9c3f-4c52-bc98-5ea31ce17692", "region": "us-east-1", "resources": [ "arn:aws:ecr:us-east-1:123456789012:repository/my-repo" ], "source": "aws.ecr", "time": "2024-01-01T12:00:00Z", "version": "0" }, "timestamp": "2026-02-03T12:00:00Z", "type": "aws.ecr.image.push" } ``` ## ECR • On Image Scan The On Image Scan trigger starts a workflow execution when an ECR image scan completes. 
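A typical use of scan events is gating promotions on severity thresholds. A minimal Python sketch, assuming the `detail.finding-severity-counts` field shown in the Example Data below (the gating policy and helper are illustrative, not part of SuperPlane):

```python
def scan_gate(event, max_allowed=None):
    """Pass/fail an image based on per-severity finding counts.
    max_allowed maps severity name -> tolerated count; severities
    not listed tolerate zero findings."""
    counts = event["data"]["detail"].get("finding-severity-counts", {})
    max_allowed = max_allowed or {}
    return all(n <= max_allowed.get(sev, 0) for sev, n in counts.items())

event = {"data": {"detail": {
    "scan-status": "COMPLETE",
    "finding-severity-counts": {"CRITICAL": 10, "MEDIUM": 9},
}}}
assert not scan_gate(event)                              # any finding fails by default
assert scan_gate(event, {"CRITICAL": 10, "MEDIUM": 20})  # within the thresholds
```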
### Use Cases - **Security automation**: Notify teams or open issues on new findings - **Compliance checks**: Gate promotions based on severity thresholds - **Reporting**: Aggregate scan findings across repositories ### Configuration - **Repositories**: Optional filters for ECR repository names ### Notes - **Enhanced scanning**: Enhanced scanning events are sent by Amazon Inspector (aws.inspector2) ### Event Data Each image scan event includes: - **detail.scan-status**: Scan status (for example: COMPLETE) - **detail.repository-name**: ECR repository name - **detail.image-digest**: Digest of the image - **detail.image-tags**: Tags associated with the image - **detail.finding-severity-counts**: Counts per severity level (if any) ### Example Data ```json { "data": { "account": "123456789012", "detail": { "finding-severity-counts": { "CRITICAL": 10, "MEDIUM": 9 }, "image-digest": "sha256:7f5b2640fe6fb4f46592dfd3410c4a79dac4f89e4782432e0378abcd1234", "image-tags": [], "repository-name": "my-repo", "scan-status": "COMPLETE" }, "detail-type": "ECR Image Scan", "id": "df8b66c7-62c7-4b8a-9a6b-6ad7d6d8b3a2", "region": "us-east-1", "resources": [ "arn:aws:ecr:us-east-1:123456789012:repository/my-repo" ], "source": "aws.ecr", "time": "2024-01-01T12:00:00Z", "version": "0" }, "timestamp": "2026-03-10T14:25:30.31254162Z", "type": "aws.ecr.image.scan" } ``` ## SNS • On Topic Message The On Topic Message trigger starts a workflow execution when a message is published to an AWS SNS topic. ### Use Cases - **Event-driven automation**: React to messages published by external systems - **Notification processing**: Handle SNS payloads in workflow steps - **Routing and enrichment**: Trigger downstream workflows based on topic activity ### How it works During setup, SuperPlane creates a webhook endpoint for this trigger and subscribes it to the selected SNS topic using HTTPS. SNS sends notification payloads to the webhook endpoint, which then emits workflow events. 
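SNS delivers the published message body as a string, so publishers that send JSON require a second parse. A minimal Python sketch, assuming the `data.message` field shown in the Example Data below (the helper is illustrative, not part of SuperPlane):

```python
import json

def decode_sns_message(event):
    """Parse the JSON-encoded SNS message body into a dict.
    Non-JSON messages are passed through under a "raw" key."""
    raw = event["data"]["message"]
    try:
        return json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return {"raw": raw}

event = {"data": {"message": '{"orderId":"ord_123","status":"created"}'}}
assert decode_sns_message(event) == {"orderId": "ord_123", "status": "created"}
```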
### Example Data ```json { "data": { "account": "123456789012", "detail": { "message": "{\"orderId\":\"ord_123\",\"status\":\"created\"}", "messageId": "95df01b4-ee98-5cb9-9903-4c221d41eb5e", "subject": "order.created", "timestamp": "2026-01-10T10:00:00Z", "topicArn": "arn:aws:sns:us-east-1:123456789012:orders-events" }, "message": "{\"orderId\":\"ord_123\",\"status\":\"created\"}", "messageAttributes": { "eventType": { "Type": "String", "Value": "order.created" } }, "messageId": "95df01b4-ee98-5cb9-9903-4c221d41eb5e", "region": "us-east-1", "subject": "order.created", "timestamp": "2026-01-10T10:00:00Z", "topicArn": "arn:aws:sns:us-east-1:123456789012:orders-events", "type": "Notification" }, "timestamp": "2026-01-10T10:00:02.000000000Z", "type": "aws.sns.topic.message" } ``` ## CodeArtifact • Copy Package Versions The Copy Package Versions component copies one or more package versions from a source repository to a destination repository in the same domain. ### Use Cases - **Promotion**: Copy approved versions from staging to production - **Replication**: Mirror packages across repositories - **Migration**: Move versions between repos in the same domain ### Example Output ```json { "data": { "failedVersions": {}, "successfulVersions": { "1.0.0": { "revision": "REVISION1", "status": "Published" }, "1.0.1": { "revision": "REVISION2", "status": "Published" } } }, "timestamp": "2026-03-26T19:29:35.841265352Z", "type": "aws.codeartifact.package.versions.copied" } ``` ## CodeArtifact • Create Repository The Create Repository component creates a new repository in an AWS CodeArtifact domain. 
### Use Cases - **Automated setup**: Create repositories as part of onboarding or pipeline setup - **Environment replication**: Mirror repository structure across domains - **Workflow provisioning**: Create a destination repository before copying packages ### Example Output ```json { "data": { "repository": { "administratorAccount": "123456789012", "arn": "arn:aws:codeartifact:us-east-1:123456789012:repository/example-domain/my-repo", "createdTime": 1706961600, "description": "Example repository created by workflow", "domainName": "example-domain", "domainOwner": "123456789012", "name": "my-repo" } }, "timestamp": "2026-03-26T19:29:35.841265352Z", "type": "aws.codeartifact.repository" } ``` ## CodeArtifact • Delete Package Versions The Delete Package Versions component permanently removes package versions and their assets. Deleted versions cannot be restored. To remove from view but keep the option to restore later, use Update Package Versions Status to set status to Archived instead. ### Use Cases - **Cleanup**: Remove obsolete or invalid versions - **Compliance**: Permanently remove versions that must not be retained - **Storage**: Free space by deleting unused versions ### Example Output ```json { "data": { "failedVersions": {}, "successfulVersions": { "1.0.0": { "revision": "REVISION1", "status": "Deleted" } } }, "timestamp": "2026-03-26T19:29:35.841265352Z", "type": "aws.codeartifact.packageVersions" } ``` ## CodeArtifact • Delete Repository The Delete Repository component deletes a repository from an AWS CodeArtifact domain. 
### Use Cases - **Cleanup**: Remove repositories after migration or deprecation - **Environment teardown**: Delete temporary repositories created by workflows - **Lifecycle management**: Enforce retention by deleting old repositories ### Example Output ```json { "data": { "repository": { "administratorAccount": "123456789012", "arn": "arn:aws:codeartifact:us-east-1:123456789012:repository/example-domain/my-repo", "createdTime": 1706961600, "description": "Deleted repository", "domainName": "example-domain", "domainOwner": "123456789012", "name": "my-repo" } }, "timestamp": "2026-03-26T19:29:35.841265352Z", "type": "aws.codeartifact.repository" } ``` ## CodeArtifact • Dispose Package Versions The Dispose Package Versions component deletes the assets of package versions and sets their status to Disposed. The version record remains so you can still see it in ListPackageVersions with status Disposed; assets cannot be restored. ### Use Cases - **Retention**: Keep version metadata for audit while removing binary assets - **Storage**: Free asset storage while preserving version history - **Lifecycle**: Mark versions as disposed after a retention period ### Example Output ```json { "data": { "failedVersions": {}, "successfulVersions": { "1.0.0": { "revision": "REVISION1", "status": "Disposed" } } }, "timestamp": "2026-03-26T19:29:35.841265352Z", "type": "aws.codeartifact.packageVersions" } ``` ## CodeArtifact • Get Package Version The Get Package Version component retrieves metadata for a specific package version in AWS CodeArtifact. 
### Use Cases - **Release automation**: Resolve package metadata before promotion - **Audit trails**: Capture version details for reporting - **Dependency checks**: Validate status and origin of package versions ### Example Output ```json { "data": { "assets": [ { "hashes": { "sha256": "1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef" }, "name": "example-package-1.2.3.tgz", "size": 1234567890 } ], "package": { "displayName": "example-package", "format": "npm", "homePage": "https://example.com/example-package", "licenses": [ { "name": "MIT", "url": "https://opensource.org/licenses/MIT" } ], "namespace": "@scope", "origin": { "domainEntryPoint": { "externalConnectionName": "npmjs", "repositoryName": "example-repo" }, "originType": "EXTERNAL" }, "packageName": "@scope/example-package", "revision": "E30D52B451F42F41", "sourceCodeRepository": "https://github.com/example/example-package", "status": "Published", "summary": "Example package for demonstration.", "version": "1.2.3" } }, "timestamp": "2026-02-03T12:00:00Z", "type": "aws.codeartifact.package.version" } ``` ## CodeArtifact • Update Package Versions Status The Update Package Versions Status component sets the status of package versions to Archived, Published, or Unlisted. ### Use Cases - **Lifecycle management**: Archive old versions or publish after validation - **Visibility**: Unlist versions without deleting them - **Compliance**: Align version status with release policies ### Example Output ```json { "data": { "failedVersions": {}, "successfulVersions": { "1.0.0": { "revision": "REVISION1", "status": "Archived" }, "1.0.1": { "revision": "REVISION2", "status": "Archived" } } }, "timestamp": "2026-03-26T19:29:35.841265352Z", "type": "aws.codeartifact.packageVersions" } ``` ## CodePipeline • Get Pipeline The Get Pipeline component retrieves the full definition of an AWS CodePipeline pipeline. 
### Use Cases - **Pipeline inspection**: Fetch pipeline stages, actions, and configuration - **Workflow branching**: Route workflow based on pipeline structure or version - **Audit and compliance**: Retrieve pipeline definitions for auditing purposes ### Configuration - **Region**: AWS region where the pipeline exists - **Pipeline**: Pipeline name to retrieve ### Output Emits the full pipeline definition including: - Pipeline name, version, and role ARN - All stages and their actions - Pipeline metadata (ARN, creation date, last updated date) ### Example Output ```json { "data": { "metadata": { "created": "2025-01-15T10:30:00Z", "pipelineArn": "arn:aws:codepipeline:us-east-1:123456789012:my-deploy-pipeline", "updated": "2026-02-20T14:00:00Z" }, "pipeline": { "name": "my-deploy-pipeline", "roleArn": "arn:aws:iam::123456789012:role/pipeline-role", "stages": [ { "actions": [ { "actionTypeId": { "category": "Source", "owner": "AWS", "provider": "CodeStarSourceConnection", "version": "1" }, "name": "SourceAction" } ], "name": "Source" }, { "actions": [ { "actionTypeId": { "category": "Deploy", "owner": "AWS", "provider": "CodeDeploy", "version": "1" }, "name": "DeployAction" } ], "name": "Deploy" } ], "version": 3 } }, "timestamp": "2026-02-22T10:00:00.000000000Z", "type": "aws.codepipeline.pipeline" } ``` ## CodePipeline • Get Pipeline Execution The Get Pipeline Execution component retrieves the details of a specific AWS CodePipeline execution. 
### Use Cases - **Execution inspection**: Fetch the status, trigger, and artifact revisions of a pipeline run - **Post-deploy checks**: After a RunPipeline component, fetch details of that execution for logging - **Workflow branching**: Route workflow based on execution status or trigger type - **Audit and compliance**: Retrieve execution details for auditing purposes ### Configuration - **Region**: AWS region where the pipeline exists - **Pipeline**: Pipeline name - **Execution ID**: The ID of the specific execution to retrieve ### Output Emits the full pipeline execution details including: - Execution ID, status, and status summary - Pipeline name and version - Trigger type and detail - Artifact revisions (source code revisions involved) - Execution mode and type ### Example Output ```json { "data": { "artifactRevisions": [ { "name": "SourceArtifact", "revisionChangeIdentifier": "abc123def456789", "revisionId": "abc123def456789", "revisionSummary": "Merge pull request #42 from feature/add-auth", "revisionUrl": "https://github.com/example/repo/commit/abc123def456789" } ], "executionMode": "SUPERSEDED", "executionType": "STANDARD", "pipelineExecutionId": "a1b2c3d4-5678-90ab-cdef-111122223333", "pipelineName": "my-deploy-pipeline", "pipelineVersion": 3, "status": "Succeeded", "statusSummary": "Pipeline completed successfully", "trigger": { "triggerDetail": "arn:aws:iam::123456789012:user/developer", "triggerType": "StartPipelineExecution" } }, "timestamp": "2026-02-23T10:00:00.000000000Z", "type": "aws.codepipeline.pipeline.execution" } ``` ## CodePipeline • Retry Stage Execution The Retry Stage Execution component retries a stage within an existing AWS CodePipeline execution. 
### Use Cases - **Recover failed deployments**: Retry only failed actions in a failed stage - **Re-run full stage**: Retry all actions for a stage when needed - **Workflow recovery**: Continue orchestration after a transient failure ### Configuration - **Region**: AWS region where the pipeline exists - **Pipeline**: Pipeline name - **Stage**: Stage name to retry - **Pipeline Execution**: Source execution to retry from - **Retry Mode**: Choose between failed actions only or all actions ### Output Emits retry result metadata including: - Pipeline name and stage - Selected retry mode - Source execution ID - New execution ID created by the retry ### Example Output ```json { "data": { "pipeline": { "name": "my-pipeline", "newExecutionId": "4444-5555-6666", "retryMode": "FAILED_ACTIONS", "sourceExecutionId": "1111-2222-3333", "stage": "Deploy" } }, "timestamp": "2026-02-23T10:00:00.000000000Z", "type": "aws.codepipeline.stage.retry" } ``` ## CodePipeline • Run Pipeline The Run Pipeline component triggers an AWS CodePipeline execution and waits for it to complete. ### Use Cases - **CI/CD orchestration**: Trigger deployments from SuperPlane workflows - **Pipeline automation**: Run CodePipeline pipelines as part of workflow automation - **Multi-stage deployments**: Coordinate complex deployment pipelines - **Workflow chaining**: Chain multiple CodePipeline pipelines together ### How It Works 1. Starts a CodePipeline execution with the specified pipeline name 2. Waits for the pipeline to complete (monitored via EventBridge webhook and polling) 3. 
Routes execution based on pipeline result: - **Passed channel**: Pipeline completed successfully - **Failed channel**: Pipeline failed or was cancelled ### Configuration - **Region**: AWS region where the pipeline exists - **Pipeline**: Pipeline name or ARN to execute ### Output Channels - **Passed**: Emitted when pipeline completes successfully - **Failed**: Emitted when pipeline fails or is cancelled ### Notes - The component automatically sets up EventBridge monitoring for pipeline completion - Falls back to polling if the webhook doesn't arrive - Can be cancelled, which will stop the running pipeline execution ### Example Output ```json { "data": { "detail": { "execution-id": "a1b2c3d4-5678-90ab-cdef-111122223333", "pipeline": "my-deploy-pipeline", "state": "SUCCEEDED", "version": 1 }, "pipeline": { "executionId": "a1b2c3d4-5678-90ab-cdef-111122223333", "name": "my-deploy-pipeline", "state": "SUCCEEDED", "status": "Succeeded" } }, "timestamp": "2026-02-10T14:35:22.518372841Z", "type": "aws.codepipeline.pipeline.finished" } ``` ## EC2 • Copy Image The Copy Image component copies an AMI to another AWS region. ### Use Cases - **Multi-region rollouts**: Replicate golden images to deployment regions - **Disaster recovery**: Keep AMI backups in secondary regions - **Promotion workflows**: Copy validated images across environments ### Configuration - **Destination Region**: AWS region where the copied AMI is created - **Source Region**: AWS region where the source AMI exists - **Source Image ID**: AMI ID to copy - **Image Name**: Name for the copied AMI - **Description**: Optional AMI description ### Completion behavior - The component waits for EventBridge `EC2 AMI State Change` events for the copied AMI. - It completes when the AMI state becomes `available`. - It fails if the AMI state becomes `failed`.
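The completion behavior above (for Copy Image, and likewise Create Image) amounts to a small state router over `EC2 AMI State Change` events. A hedged Python sketch using the field names from the EC2 • On Image event data shown earlier; the helper itself is illustrative, not part of SuperPlane:

```python
def ami_outcome(event, image_id):
    """Map an EC2 AMI state-change event to the component outcome:
    complete on `available`, fail on `failed`, otherwise keep waiting."""
    detail = event["data"]["detail"]
    if detail["ImageId"] != image_id:
        return "ignore"   # event for a different AMI
    state = detail["State"]
    if state == "available":
        return "complete"
    if state == "failed":
        return "fail"
    return "wait"         # e.g. still `pending`

event = {"data": {"detail": {"ImageId": "ami-0c0ffee1234567890", "State": "available"}}}
assert ami_outcome(event, "ami-0c0ffee1234567890") == "complete"
assert ami_outcome(event, "ami-other") == "ignore"
```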
### Example Output ```json { "data": { "image": { "architecture": "x86_64", "creationDate": "2026-02-19T09:00:00.000Z", "description": "Copied for disaster recovery", "hypervisor": "xen", "imageId": "ami-0c0ffee1234567890", "imageType": "machine", "name": "my-app-2026-02-19", "ownerId": "123456789012", "region": "us-west-2", "rootDeviceName": "/dev/xvda", "rootDeviceType": "ebs", "state": "available", "virtualizationType": "hvm" } }, "timestamp": "2026-02-19T09:00:00Z", "type": "aws.ec2.image" } ``` ## EC2 • Create Image The Create Image component creates a new Amazon Machine Image (AMI) from an EC2 instance. ### Use Cases - **Golden image pipelines**: Build immutable infrastructure images from validated instances - **Backup workflows**: Snapshot instance state before deployments or migrations - **Release automation**: Produce versioned AMIs as part of CI/CD ### Configuration - **Region**: AWS region where the instance runs - **Instance**: EC2 instance ID to create an image from - **Image Name**: Name for the AMI - **Description**: Optional image description - **No Reboot**: If enabled, create the image without rebooting the instance ### Completion behavior - The component waits for EventBridge `EC2 AMI State Change` events for the created AMI. - It completes when the AMI state becomes `available`. - It fails if the AMI state becomes `failed`. ### Example Output ```json { "data": { "image": { "architecture": "x86_64", "creationDate": "2026-02-18T12:00:00.000Z", "description": "Golden image for production", "hypervisor": "xen", "imageId": "ami-07f0e4f3e9c123abc", "imageType": "machine", "name": "my-app-2026-02-18", "ownerId": "123456789012", "region": "us-east-1", "rootDeviceName": "/dev/xvda", "rootDeviceType": "ebs", "state": "available", "virtualizationType": "hvm" } }, "timestamp": "2026-02-18T12:00:00Z", "type": "aws.ec2.image" } ``` ## EC2 • Deregister Image The Deregister Image component removes an AMI from your account in a region. 
### Use Cases - **Image lifecycle cleanup**: Remove unused AMIs after promotion - **Compliance operations**: Retire images that should no longer be launched - **Automation rollback**: Clean up AMIs created by failed workflows ### Configuration - **Region**: AWS region where the AMI exists - **Image ID**: AMI ID to deregister - **Delete Snapshots**: If enabled, delete the snapshots associated with the AMI ### Example Output ```json { "data": { "deregistered": true, "imageId": "ami-07f0e4f3e9c123abc", "region": "us-east-1", "requestId": "req-deregister" }, "timestamp": "2026-02-19T09:10:00Z", "type": "aws.ec2.image.deregistered" } ``` ## EC2 • Disable Image The Disable Image component disables an AMI so it cannot be launched. ### Use Cases - **Risk containment**: Prevent new launches from vulnerable images - **Release control**: Temporarily block image usage during maintenance - **Lifecycle governance**: Enforce policies before image retirement ### Configuration - **Region**: AWS region where the AMI exists - **Image ID**: AMI ID to disable ### Example Output ```json { "data": { "disabled": true, "imageId": "ami-07f0e4f3e9c123abc", "region": "us-east-1", "requestId": "req-disable" }, "timestamp": "2026-02-19T09:30:00Z", "type": "aws.ec2.image.disabled" } ``` ## EC2 • Disable Image Deprecation The Disable Image Deprecation component removes the deprecation schedule from an AMI. 
### Use Cases - **Release extension**: Keep an image available longer than planned - **Rollback support**: Reopen older images for temporary use - **Policy exceptions**: Remove deprecation when operational needs change ### Configuration - **Region**: AWS region where the AMI exists - **Image ID**: AMI ID to remove deprecation from ### Example Output ```json { "data": { "deprecationEnabled": false, "imageId": "ami-07f0e4f3e9c123abc", "region": "us-east-1", "requestId": "req-disable-deprecation" }, "timestamp": "2026-02-19T09:50:00Z", "type": "aws.ec2.image.deprecationDisabled" } ``` ## EC2 • Enable Image The Enable Image component enables a previously disabled AMI. ### Use Cases - **Release promotion**: Re-enable AMIs after staged validation - **Operational recovery**: Restore image availability after temporary restrictions - **Lifecycle workflows**: Toggle image launchability based on policy checks ### Configuration - **Region**: AWS region where the AMI exists - **Image ID**: AMI ID to enable ### Example Output ```json { "data": { "enabled": true, "imageId": "ami-07f0e4f3e9c123abc", "region": "us-east-1", "requestId": "req-enable" }, "timestamp": "2026-02-19T09:20:00Z", "type": "aws.ec2.image.enabled" } ``` ## EC2 • Enable Image Deprecation The Enable Image Deprecation component sets a deprecation time for an AMI. 
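The Deprecate At field documented below takes an RFC3339 timestamp such as `2026-04-01T00:00:00Z`. A small Python sketch of producing one from a retention window; the fixed-window policy is an assumption for illustration:

```python
from datetime import datetime, timedelta, timezone

def deprecate_at(days_from_now, now=None):
    """Return an RFC3339 UTC timestamp `days_from_now` days ahead,
    suitable for the Deprecate At field."""
    now = now or datetime.now(timezone.utc)
    return (now + timedelta(days=days_from_now)).strftime("%Y-%m-%dT%H:%M:%SZ")

# Pinned `now` for a reproducible example:
fixed = datetime(2026, 2, 19, 9, 40, 0, tzinfo=timezone.utc)
assert deprecate_at(41, now=fixed) == "2026-04-01T09:40:00Z"
```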
### Use Cases - **Release lifecycle**: Schedule AMI retirement dates - **Compliance enforcement**: Ensure images expire on policy deadlines - **Operational hygiene**: Phase out outdated images in a controlled window ### Configuration - **Region**: AWS region where the AMI exists - **Image ID**: AMI ID to deprecate - **Deprecate At**: RFC3339 timestamp when deprecation takes effect ### Example Output ```json { "data": { "deprecateAt": "2026-04-01T00:00:00Z", "deprecationEnabled": true, "imageId": "ami-07f0e4f3e9c123abc", "region": "us-east-1", "requestId": "req-enable-deprecation" }, "timestamp": "2026-02-19T09:40:00Z", "type": "aws.ec2.image.deprecationEnabled" } ``` ## EC2 • Get Image The Get Image component retrieves metadata for an EC2 AMI. ### Use Cases - **Release automation**: Validate AMI metadata before deployment - **Operational checks**: Inspect AMI state and ownership in workflows - **Traceability**: Resolve AMI details by image ID ### Configuration - **Region**: AWS region of the AMI - **Image ID**: AMI ID (for example: ami-1234567890abcdef0) ### Example Output ```json { "data": { "image": { "architecture": "x86_64", "creationDate": "2026-02-18T12:00:00.000Z", "description": "Golden image for production", "hypervisor": "xen", "imageId": "ami-1234567890abcdef0", "imageType": "machine", "name": "my-app-2026-02-18", "ownerId": "123456789012", "region": "us-east-1", "rootDeviceName": "/dev/xvda", "rootDeviceType": "ebs", "state": "available", "virtualizationType": "hvm" } }, "timestamp": "2026-02-18T12:00:00Z", "type": "aws.ec2.image" } ``` ## ECR • Get Image The Get Image component retrieves image metadata from an ECR repository by digest, tag, or both. 
### Use Cases - **Release automation**: Fetch image details before deployment - **Audit trails**: Resolve digests and tags for traceability - **Security workflows**: Enrich findings with image metadata ### Configuration - **Region**: AWS region of the ECR repository - **Repository**: ECR repository name or ARN - **Image Digest**: Digest of the image (optional) - **Image Tag**: Tag of the image (optional) At least one of **Image Digest** or **Image Tag** is required. If both are provided, the request includes both. ### Example Output ```json { "data": { "artifactMediaType": "application/vnd.docker.container.image.v1+json", "imageDigest": "sha256:8f1d3e4f5a6b7c8d9e0f11121314151617181920212223242526272829303132", "imageManifestMediaType": "application/vnd.docker.distribution.manifest.v2+json", "imagePushedAt": "2026-02-03T12:00:00Z", "imageSizeInBytes": 48273912, "imageTags": [ "latest", "v1.2.3" ], "registryId": "123456789012", "repositoryName": "my-repo" }, "timestamp": "2026-02-03T12:00:00Z", "type": "aws.ecr.image" } ``` ## ECR • Get Image Scan Findings The Get Image Scan Findings component retrieves vulnerability scan results for an ECR image. ### Use Cases - **Security automation**: Pull scan findings to drive alerting or approvals - **Compliance checks**: Validate images against severity thresholds - **Reporting**: Capture scan summaries and findings for audits ### Configuration - **Region**: AWS region of the ECR repository - **Repository**: ECR repository name or ARN - **Image Digest**: Digest of the image (optional) - **Image Tag**: Tag of the image (optional) At least one of **Image Digest** or **Image Tag** is required. If both are provided, the request includes both. 
### Example Output ```json { "data": { "imageId": { "imageDigest": "sha256:8f1d3e4f5a6b7c8d9e0f11121314151617181920212223242526272829303132", "imageTag": "latest" }, "imageScanFindings": { "findingSeverityCounts": { "HIGH": 1 }, "findings": [ { "attributes": [ { "key": "package_name", "value": "openssl" }, { "key": "package_version", "value": "1.1.1k" } ], "description": "Example vulnerability in a package.", "name": "CVE-2024-12345", "severity": "HIGH", "uri": "https://example.com/cve-2024-12345" } ], "imageScanCompletedAt": "2026-02-03T12:05:00Z", "vulnerabilitySourceUpdatedAt": "2026-02-03T00:00:00Z" }, "imageScanStatus": { "description": "Scan completed", "status": "COMPLETE" }, "registryId": "123456789012", "repositoryName": "my-repo" }, "timestamp": "2026-02-03T12:05:00Z", "type": "aws.ecr.image.scanFindings" } ``` ## ECR • Scan Image The Scan Image component scans an ECR image for vulnerabilities. ### Use Cases - **Security automation**: Scan images for vulnerabilities - **Compliance checks**: Validate images against severity thresholds - **Reporting**: Capture scan summaries and findings for audits ### Configuration - **Region**: AWS region of the ECR repository - **Repository**: ECR repository name or ARN - **Image Digest**: Digest of the image (optional) - **Image Tag**: Tag of the image (optional) At least one of **Image Digest** or **Image Tag** is required. If both are provided, the request includes both. 
### Example Output ```json { "data": { "imageId": { "imageDigest": "sha256:8f1d3e4f5a6b7c8d9e0f11121314151617181920212223242526272829303132", "imageTag": "latest" }, "imageScanFindings": { "findingSeverityCounts": { "HIGH": 1 }, "findings": [ { "attributes": [ { "key": "package_name", "value": "openssl" }, { "key": "package_version", "value": "1.1.1k" } ], "description": "Example vulnerability in a package.", "name": "CVE-2024-12345", "severity": "HIGH", "uri": "https://example.com/cve-2024-12345" } ], "imageScanCompletedAt": "2026-02-03T12:05:00Z", "vulnerabilitySourceUpdatedAt": "2026-02-03T00:00:00Z" }, "imageScanStatus": { "description": "Scan completed", "status": "COMPLETE" }, "registryId": "123456789012", "repositoryName": "my-repo" }, "timestamp": "2026-02-03T12:05:00Z", "type": "aws.ecr.image.scanFindings" } ``` ## ECS • Create Service The Create Service component creates a new ECS service in a cluster. ### Use Cases - **Provisioning workflows**: Create a service during environment setup - **Deployment automation**: Roll out new workloads from workflows - **Infrastructure orchestration**: Configure ECS service settings as part of release pipelines ### Notes - You can pass advanced ECS CreateService fields through **Additional ECS API Arguments**. - Do not combine **Launch Type** with **Capacity Provider Strategy**. 
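The launch-type restriction in the notes can be checked before the ECS call is ever made; a hypothetical pre-flight validation (the config keys mirror the ECS CreateService parameters):

```python
def validate_service_config(config):
    """Reject configs that set both launchType and capacityProviderStrategy,
    a combination ECS does not allow."""
    if config.get("launchType") and config.get("capacityProviderStrategy"):
        raise ValueError("launchType cannot be combined with capacityProviderStrategy")
    return config
```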
### Example Output ```json { "data": { "service": { "clusterArn": "arn:aws:ecs:us-east-1:111122223333:cluster/superplane-demo-cluster", "createdAt": "2026-02-12T08:15:10Z", "desiredCount": 2, "enableExecuteCommand": true, "launchType": "FARGATE", "pendingCount": 0, "platformVersion": "1.4.0", "propagateTags": "SERVICE", "runningCount": 2, "schedulingStrategy": "REPLICA", "serviceArn": "arn:aws:ecs:us-east-1:111122223333:service/superplane-demo-cluster/superplane-api", "serviceName": "superplane-api", "status": "ACTIVE", "taskDefinition": "arn:aws:ecs:us-east-1:111122223333:task-definition/superplane-api:5", "taskSets": [] } }, "timestamp": "2026-02-12T08:15:32.101112131Z", "type": "aws.ecs.service" } ``` ## ECS • Describe Service The Describe Service component fetches details about a single ECS service. ### Use Cases - **Deployment checks**: Inspect running/desired task counts before or after deployment - **Operational visibility**: Fetch service status and task definition details in workflows - **Automation branching**: Route workflow execution based on ECS service state ### Example Output ```json { "data": { "service": { "clusterArn": "arn:aws:ecs:us-west-1:123456789012:cluster/production-cluster-alpha", "createdAt": "2026-01-20T10:12:33Z", "deployments": [ { "createdAt": "2026-01-20T10:12:33Z", "desiredCount": 3, "id": "ecs-svc/8473629182736450912", "pendingCount": 0, "runningCount": 3, "status": "PRIMARY", "taskDefinition": "arn:aws:ecs:us-west-1:123456789012:task-definition/api-gateway-service:7", "updatedAt": "2026-01-20T10:18:11Z" } ], "desiredCount": 3, "enableExecuteCommand": true, "events": [ { "createdAt": "2026-01-20T10:18:11Z", "id": "d91b5e4a-7a5f-4b1d-bdb4-3d4f8f8a9912", "message": "(service api-gateway-service-prod) has reached a steady state." }, { "createdAt": "2026-01-20T10:17:02Z", "id": "a12f9c47-92c3-4c9f-8d12-88d6ab3f8e72", "message": "(service api-gateway-service-prod) (deployment ecs-svc/8473629182736450912) deployment completed." 
}, { "createdAt": "2026-01-20T10:13:05Z", "id": "c7a8e3b1-11f2-4fbc-9d8e-2194bb0eaf55", "message": "(service api-gateway-service-prod) has started 3 tasks: (task 9f8e7d6c5b4a3210e1f2a3b4c5d6e7f8)." } ], "launchType": "FARGATE", "networkConfiguration": { "awsvpcConfiguration": { "assignPublicIp": "DISABLED", "securityGroups": [ "sg-0a1b2c3d4e5f6a7b8", "sg-1b2c3d4e5f6a7b8c9" ], "subnets": [ "subnet-01a2b3c4d5e6f7a8b", "subnet-09f8e7d6c5b4a3210" ] } }, "pendingCount": 0, "platformVersion": "1.4.0", "propagateTags": "SERVICE", "runningCount": 3, "schedulingStrategy": "REPLICA", "serviceArn": "arn:aws:ecs:us-west-1:123456789012:service/production-cluster-alpha/api-gateway-service-prod", "serviceName": "api-gateway-service-prod", "status": "ACTIVE", "taskDefinition": "arn:aws:ecs:us-west-1:123456789012:task-definition/api-gateway-service:7", "taskSets": [] } }, "timestamp": "2026-01-20T12:45:09.123456789Z", "type": "aws.ecs.service" } ``` ## ECS • Execute Command The Execute Command component runs ECS Exec against a running task container. ### Use Cases - **Operational debugging**: Run diagnostics inside a live task - **Runtime inspection**: Check process state or config from workflows - **Automated remediation**: Trigger one-off commands in containerized services ### Notes - ECS Exec must be enabled and properly configured for the task/service. - Interactive mode opens an ECS session and returns session connection details. 
### Example Output ```json { "data": { "command": { "clusterArn": "arn:aws:ecs:us-east-1:111122223333:cluster/superplane-demo-cluster", "containerArn": "arn:aws:ecs:us-east-1:111122223333:container/superplane-demo-cluster/aaaaaaaa11111111bbbbbbbb22222222/2d7f98c1e4d14f98a4d1f36e6f4f5d23", "containerName": "api", "interactive": false, "session": { "sessionId": "ecs-execute-command-0f2ea5a931534f9f8f37f7e706a2c100", "streamUrl": "wss://ssmmessages.us-east-1.amazonaws.com/v1/data-channel/ecs-execute-command-0f2ea5a931534f9f8f37f7e706a2c100?role=publish_subscribe", "tokenValue": "AQoDYXdzEJr//////////wEaDAi4cUd2QqXQwq2NAiD3rWkK9mYc9..." }, "taskArn": "arn:aws:ecs:us-east-1:111122223333:task/superplane-demo-cluster/aaaaaaaa11111111bbbbbbbb22222222" } }, "timestamp": "2026-02-12T09:10:21.445566778Z", "type": "aws.ecs.executeCommand" } ``` ## ECS • Run Task The Run Task component starts one or more ECS tasks and completes based on task lifecycle events. ### Use Cases - **One-off workloads**: Execute ad-hoc jobs on ECS - **Batch processing**: Trigger task runs from workflow events - **Operational automation**: Run remediation or maintenance tasks ### Completion behavior - Always waits for tasks to leave startup states (for example, PENDING) before completing. - If **Timeout (seconds)** is set, waits for all tracked tasks to reach STOPPED, or completes with timeout when that deadline is reached. ### Notes - For Fargate tasks, set **Network Configuration** using the ECS awsvpcConfiguration format. - Use **Capacity Provider Strategy** when you want ECS to choose capacity providers; it cannot be combined with **Launch Type**. 
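The completion behavior can be modeled as a check over the tracked tasks' `lastStatus` values. A sketch, assuming ECS's startup states (`PROVISIONING`, `PENDING`, `ACTIVATING`) and a simplified view of the timeout handling:

```python
STARTUP_STATES = {"PROVISIONING", "PENDING", "ACTIVATING"}  # assumed startup states

def classify_completion(tasks, deadline_reached, wait_for_stopped):
    """Decide how a Run Task execution completes from task lastStatus values.

    wait_for_stopped corresponds to a configured Timeout (seconds): when set,
    the component tracks tasks until STOPPED or until the deadline passes.
    """
    statuses = [task["lastStatus"] for task in tasks]
    if any(status in STARTUP_STATES for status in statuses):
        return "timed_out" if deadline_reached else "waiting"
    if wait_for_stopped and not all(status == "STOPPED" for status in statuses):
        return "timed_out" if deadline_reached else "waiting"
    return "completed"
```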
### Example Output ```json { "data": { "failures": [], "tasks": [ { "clusterArn": "arn:aws:ecs:us-east-1:111122223333:cluster/superplane-demo-cluster", "createdAt": "2026-02-10T14:30:01Z", "desiredStatus": "RUNNING", "group": "family:superplane-ecs-task", "lastStatus": "RUNNING", "launchType": "FARGATE", "platformVersion": "1.4.0", "startedBy": "", "stoppedReason": "", "taskArn": "arn:aws:ecs:us-east-1:111122223333:task/superplane-demo-cluster/aaaaaaaa11111111bbbbbbbb22222222", "taskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/superplane-ecs-task:1" } ], "timedOut": false }, "timestamp": "2026-02-10T14:30:37.633534466Z", "type": "aws.ecs.task" } ``` ## ECS • Stop Task The Stop Task component requests ECS to stop a running task and waits for the task to reach STOPPED. ### Use Cases - **Operational control**: Stop ad-hoc or long-running tasks from workflows - **Remediation**: Terminate unhealthy tasks during automated incident response - **Cost control**: Stop no-longer-needed background workloads ### Notes - ECS sends a SIGTERM signal and then force-stops the task if it does not exit gracefully. - **Reason** is optional and appears in ECS task stop metadata when provided. ### Example Output ```json { "data": { "task": { "clusterArn": "arn:aws:ecs:us-east-1:111122223333:cluster/superplane-demo-cluster", "createdAt": "2026-02-10T14:30:01Z", "desiredStatus": "STOPPED", "group": "family:superplane-ecs-task", "lastStatus": "STOPPED", "launchType": "FARGATE", "platformVersion": "1.4.0", "startedBy": "", "stoppedReason": "stopping", "taskArn": "arn:aws:ecs:us-east-1:111122223333:task/superplane-demo-cluster/aaaaaaaa11111111bbbbbbbb22222222", "taskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/superplane-ecs-task:1" } }, "timestamp": "2026-02-10T14:31:19.987196041Z", "type": "aws.ecs.task" } ``` ## ECS • Update Service The Update Service component updates configuration for an existing ECS service. 
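A workflow will often describe the service first and skip the update call when nothing changed. A sketch of that decision, with field names following the `aws.ecs.service` payload (the helper itself is illustrative):

```python
def build_update_args(service, desired_count=None, task_definition=None):
    """Return only the UpdateService arguments that differ from current state.

    An empty dict means no update call is needed.
    """
    args = {}
    if desired_count is not None and desired_count != service["desiredCount"]:
        args["desiredCount"] = desired_count
    if task_definition and task_definition != service["taskDefinition"]:
        args["taskDefinition"] = task_definition
    return args
```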
### Use Cases - **Deployments**: Roll out a new task definition - **Scaling workflows**: Change desired count dynamically - **Operational tuning**: Update deployment, network, or tag behavior ### Notes - You can pass advanced ECS UpdateService fields through **Additional ECS API Arguments**. - Do not combine **Launch Type** with **Capacity Provider Strategy**. ### Example Output ```json { "data": { "service": { "clusterArn": "arn:aws:ecs:us-east-1:111122223333:cluster/superplane-demo-cluster", "createdAt": "2026-02-12T08:15:10Z", "desiredCount": 3, "enableExecuteCommand": true, "launchType": "FARGATE", "pendingCount": 1, "platformVersion": "1.4.0", "propagateTags": "SERVICE", "runningCount": 2, "schedulingStrategy": "REPLICA", "serviceArn": "arn:aws:ecs:us-east-1:111122223333:service/superplane-demo-cluster/superplane-api", "serviceName": "superplane-api", "status": "ACTIVE", "taskDefinition": "arn:aws:ecs:us-east-1:111122223333:task-definition/superplane-api:6", "taskSets": [] } }, "timestamp": "2026-02-12T09:00:03.987654321Z", "type": "aws.ecs.service" } ``` ## Lambda • Run Function The Run Lambda component invokes a Lambda function. ### Use Cases - **Automated workflows**: Trigger Lambda functions from SuperPlane workflows - **Event processing**: Process events from other applications - **Data transformation**: Transform data in real-time - **API integrations**: Call Lambda functions from other applications ### How It Works 1. Invokes the specified Lambda function with the provided payload 2. Returns the function's response including status code, payload, and log output 3. 
Optionally creates a new Lambda function from inline JavaScript code ### Example Output ```json { "data": { "payload": { "message": "hello from lambda" }, "report": { "billedDuration": "100 ms", "duration": "89.81 ms", "initDuration": "160.97 ms", "maxMemoryUsed": "82 MB", "memorySize": "128 MB" }, "requestId": "9f8d2b5e-1c7a-4d62-8f1a-0f8b8e4f3a12" }, "timestamp": "2026-03-26T19:29:35.841265352Z", "type": "aws.lambda.run" } ``` ## Route 53 • Create DNS Record The Create DNS Record component creates a new DNS record in an AWS Route 53 hosted zone. ### Use Cases - **Domain management**: Create DNS records for new services or endpoints - **Automated provisioning**: Set up DNS entries as part of infrastructure workflows - **Multi-environment setup**: Create environment-specific DNS records automatically ### How It Works 1. Connects to AWS Route 53 using the integration credentials 2. Creates a new DNS record in the specified hosted zone 3. Returns the change status and submission timestamp ### Example Output ```json { "data": { "change": { "id": "/change/C1234567890ABC", "status": "INSYNC", "submittedAt": "2026-01-28T10:30:00.000Z" }, "record": { "name": "api.example.com", "type": "A" } }, "timestamp": "2026-01-28T10:30:00.000Z", "type": "aws.route53.change" } ``` ## Route 53 • Delete DNS Record The Delete DNS Record component deletes a DNS record from an AWS Route 53 hosted zone. ### Use Cases - **Cleanup**: Remove DNS records when decommissioning services - **Environment teardown**: Delete DNS entries for temporary environments - **Migration**: Remove old DNS records after migrating to new endpoints ### How It Works 1. Connects to AWS Route 53 using the integration credentials 2. Deletes the specified DNS record from the hosted zone 3. The record name, type, TTL, and values must match the existing record exactly 4. 
Returns the change status and submission timestamp ### Example Output ```json { "data": { "change": { "id": "/change/C5555555555GHI", "status": "INSYNC", "submittedAt": "2026-01-28T10:30:00.000Z" }, "record": { "name": "api.example.com", "type": "A" } }, "timestamp": "2026-01-28T10:30:00.000Z", "type": "aws.route53.change" } ``` ## Route 53 • Upsert DNS Record The Upsert DNS Record component creates or updates a DNS record in an AWS Route 53 hosted zone. ### Use Cases - **Idempotent updates**: Safely create or update DNS records without checking existence first - **Rolling deployments**: Update DNS records to point to new infrastructure - **Failover management**: Switch DNS records between primary and secondary endpoints ### How It Works 1. Connects to AWS Route 53 using the integration credentials 2. Creates the DNS record if it doesn't exist, or updates it if it does 3. Returns the change status and submission timestamp ### Example Output ```json { "data": { "change": { "id": "/change/C9876543210DEF", "status": "INSYNC", "submittedAt": "2026-01-28T10:30:00.000Z" }, "record": { "name": "api.example.com", "type": "A" } }, "timestamp": "2026-01-28T10:30:00.000Z", "type": "aws.route53.change" } ``` ## SNS • Create Topic The Create Topic component creates an AWS SNS topic and returns its metadata. 
### Use Cases - **Provisioning workflows**: Create topics as part of environment setup - **Automation bootstrap**: Prepare topics before publishing messages - **Self-service operations**: Provision messaging resources on demand ### Example Output ```json { "data": { "attributes": { "DisplayName": "Orders Events", "Owner": "123456789012", "TopicArn": "arn:aws:sns:us-east-1:123456789012:orders-events" }, "contentBasedDeduplication": false, "displayName": "Orders Events", "fifoTopic": false, "name": "orders-events", "owner": "123456789012", "topicArn": "arn:aws:sns:us-east-1:123456789012:orders-events" }, "timestamp": "2026-01-10T10:00:02.000000000Z", "type": "aws.sns.topic" } ``` ## SNS • Delete Topic The Delete Topic component deletes an AWS SNS topic. ### Use Cases - **Cleanup workflows**: Remove temporary topics after execution - **Lifecycle management**: Decommission unused messaging resources - **Rollback automation**: Remove topics created in failed provisioning runs ### Example Output ```json { "data": { "deleted": true, "topicArn": "arn:aws:sns:us-east-1:123456789012:orders-events" }, "timestamp": "2026-01-10T10:00:02.000000000Z", "type": "aws.sns.topic.deleted" } ``` ## SNS • Get Subscription The Get Subscription component retrieves metadata and attributes for an AWS SNS subscription. 
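SNS returns the raw `attributes` map with string values (for example `"RawMessageDelivery": "true"`) alongside typed top-level fields. A sketch of coercing the common boolean attributes in a downstream step (the attribute names are the standard SNS ones; treat the set as illustrative):

```python
# Assumed boolean-valued SNS attribute names.
BOOLEAN_ATTRIBUTES = {
    "RawMessageDelivery",
    "PendingConfirmation",
    "FifoTopic",
    "ContentBasedDeduplication",
}

def coerce_attributes(attributes):
    """Convert SNS string attribute values to native booleans where applicable."""
    return {
        key: (value == "true") if key in BOOLEAN_ATTRIBUTES else value
        for key, value in attributes.items()
    }
```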
### Use Cases - **Subscription audits**: Inspect endpoint and delivery configuration - **Workflow enrichment**: Load subscription metadata before downstream actions - **Validation**: Confirm subscription existence and protocol ### Example Output ```json { "data": { "attributes": { "Endpoint": "https://example.com/sns/events", "Protocol": "https", "RawMessageDelivery": "true", "TopicArn": "arn:aws:sns:us-east-1:123456789012:orders-events" }, "endpoint": "https://example.com/sns/events", "owner": "123456789012", "pendingConfirmation": false, "protocol": "https", "rawMessageDelivery": true, "subscriptionArn": "arn:aws:sns:us-east-1:123456789012:orders-events:7f8a3d50-f160-4d2d-8f8a-fb95d7f86a51", "topicArn": "arn:aws:sns:us-east-1:123456789012:orders-events" }, "timestamp": "2026-01-10T10:00:02.000000000Z", "type": "aws.sns.subscription" } ``` ## SNS • Get Topic The Get Topic component retrieves metadata and attributes for an AWS SNS topic. ### Use Cases - **Configuration audits**: Verify topic settings and attributes - **Workflow enrichment**: Load topic metadata before downstream actions - **Validation**: Confirm topic existence and ownership ### Example Output ```json { "data": { "attributes": { "DisplayName": "Orders Events", "Owner": "123456789012", "TopicArn": "arn:aws:sns:us-east-1:123456789012:orders-events" }, "contentBasedDeduplication": false, "displayName": "Orders Events", "fifoTopic": false, "name": "orders-events", "owner": "123456789012", "topicArn": "arn:aws:sns:us-east-1:123456789012:orders-events" }, "timestamp": "2026-01-10T10:00:02.000000000Z", "type": "aws.sns.topic" } ``` ## SNS • Publish Message The Publish Message component sends a message to an AWS SNS topic. 
### Use Cases - **Event fan-out**: Broadcast workflow results to multiple subscribers - **Notifications**: Send operational updates to users and systems - **Automation**: Trigger downstream subscribers through SNS delivery ### Example Output ```json { "data": { "messageId": "a730a53a-a86d-5fcb-9ad1-ff72b8d0f104", "topicArn": "arn:aws:sns:us-east-1:123456789012:orders-events" }, "timestamp": "2026-01-10T10:00:02.000000000Z", "type": "aws.sns.message.published" } ``` ## SQS • Create Queue The Create Queue component creates a new AWS SQS queue. ### Configuration - **Region**: AWS region for the SQS queue - **Queue Name**: Name of the queue to create ### Example Output ```json { "data": { "queueName": "my-created-queue", "queueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/my-created-queue" }, "timestamp": "2026-02-11T12:00:00Z", "type": "aws.sqs.queue" } ``` ## SQS • Delete Queue The Delete Queue component deletes an AWS SQS queue. ### Configuration - **Region**: AWS region of the SQS queue - **Queue**: Target SQS queue to delete ### Example Output ```json { "data": { "deleted": true, "queueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue" }, "timestamp": "2026-02-11T12:00:00Z", "type": "aws.sqs.queue.deleted" } ``` ## SQS • Get Queue The Get Queue component retrieves metadata and attributes for an AWS SQS queue. ### Configuration - **Region**: AWS region of the SQS queue - **Queue**: Target SQS queue ### Example Output ```json { "data": { "attributes": { "DelaySeconds": "0", "MaximumMessageSize": "262144", "MessageRetentionPeriod": "345600", "QueueArn": "arn:aws:sqs:us-east-1:123456789012:my-queue", "ReceiveMessageWaitTimeSeconds": "0", "VisibilityTimeout": "30" }, "queueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue" }, "timestamp": "2026-02-11T12:00:00Z", "type": "aws.sqs.queue" } ``` ## SQS • Purge Queue The Purge Queue component removes all messages from an AWS SQS queue. 
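The SQS components identify queues by URL, and the standard URL shape `https://sqs.<region>.amazonaws.com/<account-id>/<queue-name>` embeds the region, account, and name. A sketch of recovering those parts (assumes the standard public endpoint; VPC or custom endpoints differ):

```python
from urllib.parse import urlparse

def parse_queue_url(queue_url):
    """Split a standard SQS queue URL into region, account ID, and queue name."""
    parsed = urlparse(queue_url)
    region = parsed.netloc.split(".")[1]  # sqs.<region>.amazonaws.com
    account_id, queue_name = parsed.path.strip("/").split("/")
    return {"region": region, "accountId": account_id, "queueName": queue_name}
```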
### Configuration - **Region**: AWS region of the SQS queue - **Queue**: Target SQS queue to purge ### Example Output ```json { "data": { "purged": true, "queueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue" }, "timestamp": "2026-02-11T12:00:00Z", "type": "aws.sqs.queue.purged" } ``` ## SQS • Send Message The Send Message component publishes a message to an AWS SQS queue. ### Configuration - **Region**: AWS region of the SQS queue - **Queue**: Target SQS queue - **Message Body**: The message payload to send ### Example Output ```json { "data": { "messageId": "d84c1b4d-1f3b-4b6c-9d9f-5c3a2b1f9e7a", "queueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue" }, "timestamp": "2026-02-11T12:00:00Z", "type": "aws.sqs.message" } ``` #### Bitbucket Source URL: https://docs.superplane.com/components/bitbucket React to events in your Bitbucket repositories import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Instructions To configure Bitbucket with SuperPlane: - **API Token mode**: - Go to **Atlassian Settings → Security → Create API token**. - Select the **Bitbucket** app. - Create a token with the `admin:workspace:bitbucket` scope. - **Workspace Access Token mode**: - Go to **Bitbucket Workspace Settings → Security → Access tokens**. - Create a workspace access token. - **Copy the token** and enter it, along with your workspace slug (for example: `my-workspace`), below. ## On Push The On Push trigger starts a workflow execution when code is pushed to a Bitbucket repository. 
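A downstream step often needs just the pushed branches and their head commits from `push.changes`. A sketch of extracting them, with field names following the Bitbucket push payload:

```python
def pushed_branches(event_data):
    """Extract (branch name, new head commit hash) pairs from a Bitbucket push event."""
    results = []
    for change in event_data.get("push", {}).get("changes", []):
        new_ref = change.get("new")
        if new_ref and new_ref.get("type") == "branch":
            results.append((new_ref["name"], new_ref["target"]["hash"]))
    return results
```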
### Use Cases - **CI/CD automation**: Trigger builds and deployments on code pushes - **Code quality checks**: Run linting and tests on every push - **Notification workflows**: Send notifications when code is pushed ### Configuration - **Repository**: Select the Bitbucket repository to monitor - **Refs**: Configure which branches to monitor (e.g., `refs/heads/main`) ### Event Data Each push event includes: - **repository**: Repository information - **push.changes**: Array of reference changes with new/old commit details - **actor**: Information about who pushed ### Webhook Setup This trigger automatically sets up a Bitbucket webhook when configured. The webhook is managed by SuperPlane and will be cleaned up when the trigger is removed. ### Example Data ```json { "data": { "actor": { "display_name": "John Doe", "links": { "avatar": { "href": "https://bitbucket.org/account/johndoe/avatar/" }, "html": { "href": "https://bitbucket.org/johndoe/" } }, "nickname": "johndoe", "type": "user", "uuid": "{d301aafa-d676-4ee0-a3f1-8b94c681feaa}" }, "push": { "changes": [ { "closed": false, "commits": [ { "author": { "raw": "John Doe \u003cjohn@example.com\u003e", "type": "author" }, "hash": "709d658dc5b6d6afcd46049c2f332ee3f515a67d", "links": { "html": { "href": "https://bitbucket.org/my-workspace/my-repo/commits/709d658dc5b6d6afcd46049c2f332ee3f515a67d" } }, "message": "Add new feature\n", "type": "commit" } ], "created": false, "forced": false, "new": { "name": "main", "target": { "author": { "raw": "John Doe \u003cjohn@example.com\u003e", "type": "author", "user": { "display_name": "John Doe", "type": "user", "uuid": "{d301aafa-d676-4ee0-a3f1-8b94c681feaa}" } }, "date": "2024-01-15T10:30:00+00:00", "hash": "709d658dc5b6d6afcd46049c2f332ee3f515a67d", "links": { "html": { "href": "https://bitbucket.org/my-workspace/my-repo/commits/709d658dc5b6d6afcd46049c2f332ee3f515a67d" } }, "message": "Add new feature\n", "type": "commit" }, "type": "branch" }, "old": { "name": "main", 
"target": { "author": { "raw": "John Doe \u003cjohn@example.com\u003e", "type": "author" }, "date": "2024-01-14T15:00:00+00:00", "hash": "1e65c05c1d5171631d92438a13901ca7dae9618c", "message": "Previous commit\n", "type": "commit" }, "type": "branch" }, "truncated": false } ] }, "repository": { "full_name": "my-workspace/my-repo", "links": { "html": { "href": "https://bitbucket.org/my-workspace/my-repo" } }, "name": "my-repo", "type": "repository", "uuid": "{b7f10c3a-2a1e-4c36-af54-7e818f3b6e1d}" } }, "timestamp": "2024-01-15T10:30:00Z", "type": "bitbucket.push" } ``` #### CircleCI Source URL: https://docs.superplane.com/components/circleci Trigger and monitor CircleCI pipelines import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## Instructions Create a Personal API Token in CircleCI → User Settings → Personal API Tokens ## On Workflow Completed Triggers when a CircleCI workflow completes. ### Use Cases - **Workflow chaining**: Start SuperPlane workflows when CircleCI workflows complete - **Status monitoring**: Monitor CI/CD workflow results - **Notifications**: Send alerts when workflows succeed or fail - **Post-processing**: Process artifacts after workflow completion ### Configuration - **Project Slug**: The CircleCI project slug (e.g., gh/username/repo) ### Event Data Each workflow completion event includes: - **workflow**: Workflow information including ID, name, status, and URL - **pipeline**: Parent pipeline information including ID, number, and trigger details - **project**: Project information - **organization**: Organization information ### Webhook Setup This trigger automatically sets up a CircleCI webhook when configured. The webhook is managed by SuperPlane and cleaned up when the trigger is removed. 
### Example Data ```json { "data": { "happened_at": "2021-09-01T22:49:34.317Z", "id": "3888f21b-eaa7-38e3-8f3d-75a63bba8895", "pipeline": { "created_at": "2021-09-01T22:49:03.544Z", "id": "1285fe1d-d3a6-44fc-8886-8979558254c4", "number": 130 }, "project": { "id": "84996744-a854-4f5e-aea3-04e2851dc1d2", "name": "repo", "slug": "github/username/repo" }, "type": "workflow-completed", "workflow": { "created_at": "2021-09-01T22:49:03.616Z", "id": "fda08377-fe7e-46b1-8992-3a7aaecac9c3", "name": "build-test-deploy", "status": "success", "stopped_at": "2021-09-01T22:49:34.170Z", "url": "https://app.circleci.com/pipelines/github/username/repo/130/workflows/fda08377-fe7e-46b1-8992-3a7aaecac9c3" } }, "timestamp": "2021-09-01T22:49:34.317Z", "type": "circleci.workflow.completed" } ``` ## Get Flaky Tests The Get Flaky Tests component identifies flaky tests in a CircleCI project using the Insights API. ### Use Cases - **Test reliability**: Identify tests that pass and fail inconsistently - **CI stability**: Find flaky tests that cause unreliable builds - **Quality improvement**: Prioritize fixing flaky tests for better developer experience ### Configuration - **Project Slug**: CircleCI project slug (e.g., gh/username/repo) ### Output Emits a `circleci.flakyTests` payload with a list of flaky tests and their flakiness data. 
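To prioritize fixes, a downstream step might keep only tests above a flakiness count. A sketch over the `circleci.flakyTests` payload:

```python
def top_flaky_tests(payload, min_times_flaky=5):
    """Return flaky tests at or above a flakiness count, most flaky first."""
    tests = [t for t in payload["flakyTests"] if t["timesFlaky"] >= min_times_flaky]
    return sorted(tests, key=lambda t: t["timesFlaky"], reverse=True)
```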
### Example Output ```json { "data": { "flakyTests": [ { "classname": "auth_test.go", "file": "pkg/auth/auth_test.go", "jobName": "test", "pipelineName": "build-pipeline", "source": "go", "testName": "TestUserAuthentication", "timesFlaky": 8, "workflowName": "build-test-deploy" }, { "classname": "cache_test.go", "file": "pkg/cache/cache_test.go", "jobName": "test", "pipelineName": "build-pipeline", "source": "go", "testName": "TestCacheInvalidation", "timesFlaky": 3, "workflowName": "build-test-deploy" } ], "totalFlakyTests": 2 }, "timestamp": "2021-09-01T22:55:34.317Z", "type": "circleci.flakyTests" } ``` ## Get Last Workflow The Get Last Workflow component retrieves the most recent workflow for a CircleCI project. ### Use Cases - **Latest status check**: Get the most recent workflow to check project health - **Branch monitoring**: Monitor the latest workflow on a specific branch - **Status filtering**: Find the last workflow with a specific status (e.g., last successful build) ### How It Works 1. Fetches recent pipelines for the project (optionally filtered by branch) 2. Iterates through pipelines to find workflows 3. Returns the first workflow matching the optional status filter ### Configuration - **Project Slug**: CircleCI project slug (e.g., gh/username/repo) - **Branch**: Optional branch filter - **Status**: Optional workflow status filter (success, failed, etc.) ### Output Emits a `circleci.workflow` payload with the most recent matching workflow details. ### Example Output ```json { "data": { "createdAt": "2021-09-01T22:49:03.544Z", "id": "fda08377-fe7e-46b1-8992-3a7aaecac9c3", "name": "build-test-deploy", "pipelineId": "1285fe1d-d3a6-44fc-8886-8979558254c4", "status": "success", "stoppedAt": "2021-09-01T22:55:34.317Z" }, "timestamp": "2021-09-01T22:55:34.317Z", "type": "circleci.workflow" } ``` ## Get Recent Workflow Runs The Get Recent Workflow Runs component fetches recent individual runs for a named CircleCI workflow via the Insights API. 
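Runs fetched this way lend themselves to quick aggregates. A sketch computing failure rate and mean duration over a `runs` array, with field names following the `circleci.workflowRuns` payload:

```python
def summarize_runs(runs):
    """Compute failure rate and mean duration (seconds) over recent workflow runs."""
    if not runs:
        return {"failureRate": 0.0, "meanDurationSecs": 0.0}
    failures = sum(1 for run in runs if run["status"] != "success")
    mean_duration = sum(run["duration"] for run in runs) / len(runs)
    return {"failureRate": failures / len(runs), "meanDurationSecs": mean_duration}
```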
### Use Cases - **Workflow health monitoring**: See recent run statuses and durations at a glance - **Performance tracking**: Monitor how long workflow runs take over time - **Branch comparison**: Compare recent runs across branches ### How It Works 1. Calls the CircleCI Insights endpoint for the given project and workflow name 2. Returns a list of individual workflow runs (up to 90 days back) 3. Each run includes its status, duration, branch, timestamps, and credits used ### Configuration - **Project Slug**: CircleCI project slug (e.g., gh/username/repo) - **Workflow Name**: Name of the workflow to fetch runs for - **Branch**: Optional branch filter (defaults to the project's default branch) ### Output Emits a `circleci.workflowRuns` payload containing an array of recent workflow runs with fields like `id`, `status`, `duration`, `branch`, `createdAt`, `stoppedAt`, and `creditsUsed`. ### Example Output ```json { "data": { "runs": [ { "branch": "main", "createdAt": "2021-09-01T22:49:03.544Z", "creditsUsed": 150, "duration": 384, "id": "fda08377-fe7e-46b1-8992-3a7aaecac9c3", "isApproval": false, "status": "success", "stoppedAt": "2021-09-01T22:55:27.544Z" }, { "branch": "main", "createdAt": "2021-08-31T14:22:10.000Z", "creditsUsed": 160, "duration": 412, "id": "b2c3d4e5-f6a7-8901-bcde-f12345678901", "isApproval": false, "status": "failed", "stoppedAt": "2021-08-31T14:29:02.000Z" } ] }, "timestamp": "2021-09-01T22:55:34.317Z", "type": "circleci.workflowRuns" } ``` ## Get Test Metrics The Get Test Metrics component fetches test performance data from the CircleCI Insights API. 
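A quality gate can flag runs whose `successRate` falls below a cutoff. A sketch over the `circleci.testMetrics` payload (the threshold value is illustrative):

```python
def below_success_threshold(metrics, threshold=0.9):
    """Return the test runs whose success rate falls below the threshold."""
    return [run for run in metrics["testRuns"] if run["successRate"] < threshold]
```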
### Use Cases - **Test health monitoring**: Track most failed and slowest tests - **CI optimization**: Identify tests that slow down your pipeline - **Quality tracking**: Monitor test success rates across runs ### Configuration - **Project Slug**: CircleCI project slug (e.g., gh/username/repo) - **Workflow Name**: Name of the workflow to get test metrics for ### Output Emits a `circleci.testMetrics` payload with most failed tests, slowest tests, and test run summaries. ### Example Output ```json { "data": { "mostFailedTests": [ { "classname": "auth_test.go", "failedRuns": 5, "flaky": true, "testName": "TestUserAuthentication", "totalRuns": 42 } ], "slowestTests": [ { "classname": "migration_test.go", "failedRuns": 1, "flaky": false, "p50DurationSecs": 12.5, "testName": "TestDatabaseMigration", "totalRuns": 42 } ], "testRuns": [ { "pipelineNumber": 130, "successRate": 0.95, "testCounts": { "error": 0, "failure": 2, "skipped": 1, "success": 38, "total": 41 }, "workflowId": "fda08377-fe7e-46b1-8992-3a7aaecac9c3" } ], "totalTestRuns": 42 }, "timestamp": "2021-09-01T22:55:34.317Z", "type": "circleci.testMetrics" } ``` ## Get Workflow The Get Workflow component fetches details for a CircleCI workflow. ### Use Cases - **Workflow inspection**: Fetch current workflow status, jobs, and metadata - **Workflow context**: Use workflow fields to drive branching decisions in later steps ### Configuration - **Workflow ID**: The ID of the CircleCI workflow to retrieve (supports expressions) ### Output Emits a `circleci.workflow` payload containing workflow fields like `id`, `name`, `status`, `createdAt`, and `stoppedAt`. 
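The `createdAt`/`stoppedAt` pair gives a workflow's wall-clock duration. A sketch that parses the RFC3339 timestamps (the `Z` suffix is normalized so `datetime.fromisoformat` also works on Pythons before 3.11):

```python
from datetime import datetime

def workflow_duration_secs(workflow):
    """Compute workflow duration in seconds from its createdAt/stoppedAt timestamps."""
    def parse(ts):
        return datetime.fromisoformat(ts.replace("Z", "+00:00"))
    return (parse(workflow["stoppedAt"]) - parse(workflow["createdAt"])).total_seconds()
```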
### Example Output ```json { "data": { "createdAt": "2021-09-01T22:49:03.544Z", "id": "fda08377-fe7e-46b1-8992-3a7aaecac9c3", "jobs": [ { "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", "jobNumber": 42, "name": "build", "projectSlug": "gh/acme/my-app", "startedAt": "2021-09-01T22:49:05.000Z", "status": "success", "stoppedAt": "2021-09-01T22:52:10.000Z", "type": "build" }, { "id": "b2c3d4e5-f6a7-8901-bcde-f12345678901", "jobNumber": 43, "name": "test", "projectSlug": "gh/acme/my-app", "startedAt": "2021-09-01T22:52:15.000Z", "status": "success", "stoppedAt": "2021-09-01T22:54:30.000Z", "type": "build" } ], "name": "build-test-deploy", "status": "success", "stoppedAt": "2021-09-01T22:55:34.317Z" }, "timestamp": "2021-09-01T22:55:34.317Z", "type": "circleci.workflow" } ``` ## Run Pipeline The Run Pipeline component starts a CircleCI pipeline and waits for it to complete. ### Use Cases - **CI/CD orchestration**: Trigger builds and deployments from SuperPlane workflows - **Pipeline automation**: Run CircleCI pipelines as part of workflow automation - **Multi-stage deployments**: Coordinate complex deployment pipelines - **Workflow chaining**: Chain multiple CircleCI workflows together ### How It Works 1. Triggers a CircleCI pipeline with the specified location (branch or tag) and parameters 2. Waits for all workflows in the pipeline to complete (monitored via webhook) 3. Routes execution based on workflow results: - **Success channel**: All workflows completed successfully - **Failed channel**: Any workflow failed or was cancelled ### Configuration - **Project Slug**: CircleCI project slug (e.g., gh/username/repo) - **Location**: Branch or tag to run the pipeline - **Pipeline definition ID**: Find in CircleCI: Project Settings → Project Setup. 
- **Parameters**: Optional pipeline parameters as key-value pairs (supports expressions) ### Output Channels - **Success**: Emitted when all workflows complete successfully - **Failed**: Emitted when any workflow fails or is cancelled ### Example Output ```json { "data": { "pipeline": { "created_at": "2021-09-01T22:49:03.544Z", "id": "1285fe1d-d3a6-44fc-8886-8979558254c4", "number": 130 }, "workflows": [ { "id": "fda08377-fe7e-46b1-8992-3a7aaecac9c3", "name": "build-test-deploy", "status": "success" } ] }, "timestamp": "2021-09-01T22:49:34.317Z", "type": "circleci.workflow.completed" } ``` #### Claude Source URL: https://docs.superplane.com/components/claude Use Claude models in workflows import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Actions ## Instructions To get a new Claude API key, go to [platform.claude.com](https://platform.claude.com). ## Text Prompt The Text Prompt component uses Anthropic's Claude models to generate text responses. ### Use Cases - **Summarization**: Generate summaries of incidents or deployments. - **Code Analysis**: Generate code review feedback or PR comments. - **Content Generation**: Create documentation or draft communications. ### Configuration - **Model**: The Claude model to use (e.g., claude-3-5-sonnet-latest). - **Prompt**: The main user message/instruction. - **System Message**: (Optional) Context to define the assistant's behavior or persona. - **Max Tokens**: (Optional) Limit the length of the generated response. - **Temperature**: (Optional) Control randomness (0.0 to 1.0). ### Output Returns a payload containing: - **text**: The content generated by Claude. - **usage**: Input and output token counts. - **stopReason**: Why the generation ended (e.g., "end_turn", "max_tokens"). - **model**: The specific model version used.
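For reference, the configuration fields above map onto an Anthropic Messages API request body roughly as follows. This is a hedged Python sketch of what the component wraps (the request shape follows Anthropic's public API; the helper name and defaults are illustrative, not SuperPlane internals):

```python
import json

def build_text_prompt_request(model, prompt, system=None, max_tokens=1024, temperature=None):
    """Assemble a Messages API request body from Text Prompt settings."""
    body = {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    if system is not None:
        body["system"] = system          # optional persona/context
    if temperature is not None:
        body["temperature"] = temperature  # 0.0 to 1.0
    return body

req = build_text_prompt_request(
    "claude-3-5-sonnet-latest",
    "Summarize the deployment logs.",
    system="You are a concise release assistant.",
    temperature=0.2,
)
print(json.dumps(req, indent=2))
```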
### Notes - Requires a valid Claude API key configured in the integration - Response quality and speed depend on the selected model - Token usage is tracked and may incur costs based on your Claude plan ### Example Output ```json { "data": { "id": "msg_01X9JGt5...123456", "model": "claude-3-5-sonnet-latest", "response": { "content": [ { "text": "Here is the summary of the deployment logs you requested...", "type": "text" } ], "id": "msg_01X9JGt5...123456", "model": "claude-3-5-sonnet-latest", "role": "assistant", "stop_reason": "end_turn", "type": "message", "usage": { "input_tokens": 45, "output_tokens": 120 } }, "stopReason": "end_turn", "text": "Here is the summary of the deployment logs you requested...", "usage": { "input_tokens": 45, "output_tokens": 120 } }, "timestamp": "2026-02-06T12:00:00Z", "type": "claude.message" } ``` #### Cloudflare Source URL: https://docs.superplane.com/components/cloudflare Manage Cloudflare zones, rules, and DNS import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Actions ## Instructions ## Create a Cloudflare API Token 1. Open the [Cloudflare API Tokens page](https://dash.cloudflare.com/profile/api-tokens) 2. Click **Create Token** 3. Click **Get started** next to "Create Custom Token" 4. Configure the token: - **Token name**: SuperPlane Integration - **Permissions** (click "+ Add more" to add each): - Zone / Zone / Read - Zone / DNS / Edit - Zone / Dynamic Redirect / Edit - **Zone Resources**: Include / All zones _(or select specific zones)_ 5. Click **Continue to summary**, then **Create Token** 6. Copy the token and paste it below > **Note**: The token is only shown once. Store it securely if needed elsewhere. ## Create DNS Record The Create DNS Record component creates a DNS record in a Cloudflare zone.
### Use Cases - **Provisioning**: Add records when new environments are created - **Verification**: Create TXT or CNAME records for domain ownership checks - **Releases**: Add or update canary or migration records ### Configuration - **Zone**: Select the Cloudflare zone or enter a domain name - **Type**: DNS record type (A, AAAA, CNAME, MX, TXT, NS, etc.) - **Name**: Record name (e.g., `www`, `api`, or `@` for apex) - **Content**: Record value (IP, hostname, or text) - **TTL**: Time-to-live in seconds (use `1` for auto) - **Proxied**: Proxy through Cloudflare (A, AAAA, CNAME only) - **Priority**: Priority value (MX or SRV only) ### Output Emits the created DNS record on the default channel. If the zone is not found, the record is invalid, or a duplicate exists, the run fails. ### Example Output ```json { "data": { "content": "192.0.2.1", "id": "record_abc123", "name": "api.example.com", "proxied": false, "ttl": 1, "type": "A" }, "timestamp": "2026-03-26T19:29:35.841265352Z", "type": "cloudflare.dnsRecord" } ``` ## Delete DNS Record The Delete DNS Record component removes a DNS record from a Cloudflare zone. ### Use Cases - **Deprovisioning**: Remove DNS records when services or environments are torn down - **Cleanup**: Delete temporary verification records (e.g. migration or certificate validation) - **Maintenance**: Remove stale or incorrect records as part of workflow automation ### Configuration - **Record**: Select the DNS record to delete (e.g. zone-id/record-id or record name) ### Output Emits the deleted DNS record (zoneId, recordId, record) on the default channel. If the record is not found or deletion fails, the component goes to an error state and does not emit. 
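Behind the scenes this corresponds to Cloudflare's v4 API call `DELETE /zones/{zone_id}/dns_records/{record_id}`. A minimal sketch of assembling that request (illustrative only — the component makes this call for you, and the helper name here is hypothetical):

```python
API_BASE = "https://api.cloudflare.com/client/v4"

def delete_dns_record_request(zone_id: str, record_id: str, token: str):
    """Build the method, URL, and headers for Cloudflare's delete-DNS-record call."""
    url = f"{API_BASE}/zones/{zone_id}/dns_records/{record_id}"
    # API tokens are sent as a Bearer credential.
    headers = {"Authorization": f"Bearer {token}"}
    return "DELETE", url, headers

method, url, headers = delete_dns_record_request("zone_xyz789", "record_abc123", "TOKEN")
print(method, url)
```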
### Example Output ```json { "data": { "record": { "content": "example.com", "id": "record_abc123", "name": "app.example.com", "proxied": false, "ttl": 360, "type": "CNAME" }, "recordId": "record_abc123", "zoneId": "zone_xyz789" }, "timestamp": "2026-03-26T19:29:35.841265352Z", "type": "cloudflare.dnsRecord" } ``` ## Update DNS Record The Update DNS Record component updates an existing DNS record in a Cloudflare zone. ### Use Cases - **Infrastructure changes**: Update record content when an IP or target changes - **Release automation**: Switch a record to proxied or adjust TTL during a migration - **Verification**: Update TXT records for ownership verification as part of workflows ### Configuration - **Record**: DNS record to update (e.g. zone-id/record-id or app.example.com) - **Content**: New record value - **TTL**: TTL in seconds (default 360) - **Proxied**: Whether Cloudflare should proxy traffic for this record ### Output Emits the updated DNS record (id, type, name, content, proxied, ttl) on the default channel. If the update fails (e.g. record not found, invalid update), the component goes to an error state and does not emit. ### Example Output ```json { "data": { "record": { "content": "203.0.113.10", "id": "record_abc123", "name": "app.example.com", "proxied": true, "ttl": 1, "type": "A" }, "recordId": "record_abc123", "zoneId": "zone_xyz789" }, "timestamp": "2026-03-26T19:29:35.841265352Z", "type": "cloudflare.dnsRecord" } ``` ## Update Redirect Rule The Update Redirect Rule component modifies an existing redirect rule in a Cloudflare zone. 
### Use Cases - **URL management**: Update redirect rules dynamically based on workflow events - **A/B testing**: Switch redirect targets for testing purposes - **Maintenance**: Temporarily redirect traffic during maintenance - **Migration**: Update redirects as part of site migration workflows ### Configuration - **Zone**: Select the Cloudflare zone containing the redirect rule - **Rule ID**: The ID of the redirect rule to update - **Description**: Optional description for the rule - **Match Type**: How to match URLs (exact match or expression-based) - **Source URL Pattern**: URL pattern to match (for exact match type) - **Expression**: Cloudflare expression for matching (for expression type) - **Target URL**: The URL to redirect to (supports expressions) - **Status Code**: HTTP status code for redirect (301, 302, 307, 308) - **Preserve Query String**: Whether to preserve query parameters in redirect - **Enabled**: Whether the rule is active ### Output Returns the updated redirect rule with all current configuration. ### Example Output ```json { "data": { "enabled": true, "rule": { "action": "redirect", "action_parameters": { "from_value": { "preserve_query_string": false, "status_code": 301, "target_url": { "value": "https://example.com/new-path" } } }, "description": "Redirect old path to new path", "enabled": true, "expression": "http.request.uri.path eq \"/old-path\"", "id": "rule_abc123" }, "ruleId": "rule_abc123", "zoneId": "zone_xyz789" }, "timestamp": "2026-03-26T19:29:35.841265352Z", "type": "cloudflare.redirectRule" } ``` #### Core Source URL: https://docs.superplane.com/components/core Built-in SuperPlane components. import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## Schedule The Schedule trigger starts workflow executions automatically based on a configured schedule. 
### Use Cases - **Periodic tasks**: Run daily reports, backups, or maintenance tasks - **Data synchronization**: Regularly sync data between systems - **Monitoring**: Periodic health checks and monitoring - **Batch processing**: Process data on a recurring schedule ### Schedule Types - **Minutes**: Trigger every N minutes (1-59) - **Hours**: Trigger every N hours at a specific minute (1-23 hours) - **Days**: Trigger every N days at a specific time (1-31 days) - **Weeks**: Trigger every N weeks on specific weekdays at a specific time (1-52 weeks) - **Months**: Trigger every N months on a specific day and time (1-24 months) - **Cron**: Use a cron expression for advanced scheduling patterns ### Timezone Support For days, weeks, months, and cron schedules, you can specify a timezone to ensure triggers occur at the correct local time. ### Cron Expressions Supports both 5-field and 6-field cron expressions: - **5-field**: `minute hour day month dayofweek` (e.g., `30 14 * * MON-FRI`) - **6-field**: `second minute hour day month dayofweek` (e.g., `0 30 14 * * MON-FRI`) ### Event Data Each scheduled execution includes calendar information: - **calendar**: Year, month, day, hour, minute, second, week_day - **timezone**: Timezone information (for applicable schedule types) ### Examples - **Every 15 minutes**: Minutes schedule with 15-minute interval - **Daily at 9 AM**: Days schedule with hour=9, minute=0 - **Weekdays at 2 PM**: Weeks schedule with weekDays=[Monday-Friday], hour=14 - **First of every month**: Months schedule with dayOfMonth=1 ### Example Data ```json { "data": { "calendar": { "day": "1", "hour": "09", "minute": "00", "month": "January", "second": "00", "week_day": "Monday", "year": "2024" }, "timezone": "+00:00" }, "timestamp": "2024-01-01T09:00:00Z", "type": "scheduler.tick" } ``` ## Manual Run The Manual Run trigger allows you to start workflow executions manually from the SuperPlane UI. 
### Use Cases - **Testing workflows**: Manually trigger workflows during development and testing - **One-off tasks**: Run workflows on-demand for specific operations - **Debugging**: Manually execute workflows to debug issues - **Ad-hoc processing**: Process data when needed without automation ### How It Works 1. Add the Manual Run trigger as the starting node of your workflow 2. Click the "Run" button in the workflow UI to start an execution 3. The workflow begins immediately with empty event data ### Configuration The Manual Run trigger requires no configuration. It's ready to use immediately after being added to a workflow. ### Event Data Manual runs start with an empty event payload. You can use this as a starting point and add data through subsequent components. ### Example Data ```json { "foo": "bar" } ``` ## Webhook The Webhook trigger starts a new workflow execution when an HTTP request is received at the generated webhook URL. ### Use Cases - **External system integration**: Receive events from third-party services - **CI/CD pipelines**: Trigger workflows from build systems - **Form submissions**: Process data from web forms - **Event notifications**: Receive notifications from external applications ### How It Works 1. When you add a Webhook trigger to a workflow, SuperPlane generates a unique webhook URL 2. Configure the authentication method for the webhook 3. External systems can send HTTP requests to this URL 4. 
Each request starts a new workflow execution with the request data ### Authentication Methods - **Signature (HMAC)**: Verify requests using HMAC-SHA256 signature in the `X-Signature-256` header - **Bearer Token**: Require a Bearer token in the `Authorization` header - **Header Token**: Require a raw token in a custom header (default: `X-Webhook-Token`) - **None (unsafe)**: No authentication (not recommended for production) ### Request Data The webhook payload includes: - **body**: Parsed request body (JSON if possible, otherwise raw data) - **headers**: All HTTP headers from the request ### Security - Each webhook has a unique secret key for authentication - Secrets can be reset using the "Reset Authentication" action - Maximum payload size: 64KB ### Example Usage Send a POST request to the webhook URL with your payload. The workflow will receive the data and start execution. ### Example Data ```json { "data": { "body": { "event": "push", "repository": "superplanehq/superplane" }, "headers": { "X-Event": [ "push" ] } }, "timestamp": "2026-01-19T12:00:00Z", "type": "webhook" } ``` ## Add Memory The Add Memory component appends a new item to canvas-level memory storage. ### Use Cases - Persist identifiers for later cleanup paths - Store cross-run mappings (for example pull request to resource ID) - Keep structured operational context per canvas ### How It Works 1. Reads `namespace` and value fields from configuration 2. Appends a new memory row for the current canvas 3. Emits `memory.added` with the saved payload ### Example Output ```json { "data": { "data": { "namespace": "machines", "values": { "creator": "alex", "id": "1", "pull_request": "123" } } }, "timestamp": "2026-01-19T12:00:00Z", "type": "memory.added" } ``` ## Approval The Approval component pauses workflow execution and waits for manual approval from specified users, groups, or roles before continuing. 
### Use Cases - **Deployment approvals**: Require approval before deploying to production - **Financial transactions**: Get approval for high-value operations - **Content moderation**: Review content before publishing - **Compliance workflows**: Ensure regulatory approvals are obtained ### How It Works 1. When the Approval component executes, it creates approval requirements based on the configured approvers 2. The workflow pauses and waits for all required approvals 3. Approvers receive notifications and can approve or reject from the workflow UI 4. Once all approvals are collected, the workflow continues: - **Approved channel**: All required approvers approved - **Rejected channel**: At least one approver rejected ### Configuration - **Approvers**: List of users, groups, or roles who must approve - **Everyone**: Any authenticated user can approve - **Specific user**: Only the specified user can approve - **Group**: Any member of the specified group can approve - **Role**: Any user with the specified role can approve ### Output Channels - **Approved**: Emitted when all required approvers have approved - **Rejected**: Emitted when at least one approver rejects (after all have responded) ### Actions - **approve**: Approve a pending requirement (can include an optional comment) - **reject**: Reject a pending requirement (requires a reason) ### Example Output ```json { "data": { "records": [ { "approval": { "approvedAt": "2024-01-01T12:00:00Z", "comment": "Looks good" }, "index": 0, "state": "approved", "type": "user", "user": { "email": "alex@example.com", "id": "user_123", "name": "Alex Doe" } } ], "result": "approved" }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "approval.finished" } ``` ## Delete Memory The Delete Memory component removes memory rows from canvas-level memory storage. ### Use Cases - Remove stale IDs after cleanup is complete - Keep memory stores bounded over time ### How It Works 1. 
Reads `namespace` and `matchList` from configuration 2. Deletes memory rows matching all configured key/value pairs 3. Emits `memory.deleted` to the `deleted` or `notFound` channel ### Output Channels - **Deleted**: At least one matching memory row was removed - **Not Found**: No matching memory rows were removed ### Example Output ```json { "data": { "data": { "count": 1, "deleted": [ { "creator": "igor", "pull_request": 123, "sandbox_id": "sbx-001" } ], "matches": { "creator": "igor", "pull_request": 123 }, "namespace": "machines" } }, "timestamp": "2026-02-28T00:00:00Z", "type": "memory.deleted" } ``` ## Filter The Filter component evaluates a boolean expression against incoming events and only forwards events that match the condition. ### Use Cases - **Data validation**: Only process events that meet certain criteria - **Event filtering**: Filter out unwanted events before processing - **Conditional routing**: Stop processing events that don't match requirements - **Data quality**: Ensure only valid data continues through the workflow ### How It Works 1. The Filter component evaluates a boolean expression against the incoming event data 2. If the expression evaluates to `true`, the event is emitted to the default output channel 3. 
If the expression evaluates to `false`, the execution passes without emitting (effectively filtering out the event) ### Expression Environment The expression has access to: - **$**: The run context data - **root()**: Access to the root event data - **previous()**: Access to previous node outputs (optionally with depth parameter) ### Examples - `$["Node Name"].status == "active"`: Only forward events where status is "active" - `$["Node Name"].amount > 1000`: Filter events with amount greater than 1000 - `$["Node Name"].user.role == "admin" && $["Node Name"].action == "delete"`: Complex condition checking multiple fields ### Example Output ```json { "data": {}, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "filter.executed" } ``` ## HTTP Request The HTTP component allows you to make HTTP requests to external APIs and services as part of your workflow. ### Use Cases - **API integration**: Call external REST APIs - **Webhook notifications**: Send notifications to external systems - **Data fetching**: Retrieve data from external services - **Service orchestration**: Coordinate with microservices ### Supported Methods - GET, POST, PUT, DELETE, PATCH ### Request Configuration - **URL**: The endpoint to call (supports expressions) - **Method**: HTTP method to use - **Query Parameters**: Optional URL query parameters - **Headers**: Custom HTTP headers (header names cannot use expressions) - **Body**: Request body in various formats: - **JSON**: Structured JSON payload - **Form Data**: URL-encoded form data - **Plain Text**: Raw text content - **XML**: XML formatted content ### Response Handling The component emits the response with: - **status**: HTTP status code - **headers**: Response headers - **body**: Parsed response body (JSON if possible, otherwise string) ### Example Output ```json { "data": { "body": { "message": "ok" }, "error": "Error to read request body: EOF", "headers": { "Content-Type": [ "application/json" ] }, "status": 200 }, "timestamp": 
"2026-01-16T17:56:16.680755501Z", "type": "http.request.finished" } ``` ## If The If component evaluates a boolean expression and routes events to different output channels based on the result. ### Use Cases - **Conditional branching**: Route events down different paths based on conditions - **Decision logic**: Implement if-then-else logic in workflows - **Data routing**: Send events to different processing paths - **Workflow control**: Control workflow flow based on event properties ### How It Works 1. The If component evaluates a boolean expression against the incoming event data 2. If the expression evaluates to `true`, the event is emitted to the "True" output channel 3. If the expression evaluates to `false`, the event is emitted to the "False" output channel ### Output Channels - **True**: Events where the expression evaluates to `true` - **False**: Events where the expression evaluates to `false` ### Expression Environment The expression has access to: - **$**: The run context data - **root()**: Access to the root event data - **previous()**: Access to previous node outputs (optionally with depth parameter) ### Examples - `$["Node Name"].status == "approved"`: Route approved items to True channel - `$["Node Name"].amount > 1000`: Route high-value items to True channel - `$["Node Name"].user.role == "admin"`: Route admin actions to True channel ### Example Output ```json { "data": {}, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "if.executed" } ``` ## Merge The Merge component waits for events from all upstream nodes before forwarding a combined result downstream. ### Use Cases - **Parallel processing**: Wait for multiple parallel operations to complete - **Data aggregation**: Combine results from multiple sources - **Synchronization**: Synchronize multiple workflow branches - **Fan-in patterns**: Collect outputs from multiple upstream nodes ### How It Works 1. The Merge component waits for events from all distinct upstream source nodes 2. 
Once all inputs are received, it emits the combined data to the Success channel 3. Optional timeout and conditional stop features allow early completion ### Configuration Options - **Enable Timeout**: Cancel merge after a specified time if not all inputs are received - **Enable Conditional Stop**: Stop waiting early when a condition is met (e.g., if one branch fails) ### Output Channels - **Success**: Emitted when all upstream inputs are received - **Timeout**: Emitted if the timeout is reached before all inputs are received - **Fail**: Emitted if the conditional stop expression evaluates to true ### Behavior - Tracks distinct source nodes (ignoring multiple channels from the same source) - Combines all received event data into the output - Supports timeout to prevent indefinite waiting - Supports conditional early stop based on expression evaluation ### Example Output ```json { "data": { "eventIDs": [ "event_1", "event_2" ], "groupKey": "merge-group-123", "sources": [ "node_a", "node_b" ], "stopEarly": false }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "merge.finished" } ``` ## No Operation The No Operation component is a pass-through component that forwards events to downstream nodes without any modification or processing. ### Use Cases - **Testing workflows**: Use this component to test workflow connections and flow without side effects - **Placeholder nodes**: Temporarily replace components during workflow development - **Event forwarding**: Simply forward events when no processing is needed ### Behavior When executed, the No Operation component immediately emits the incoming event data to the default output channel without any transformation. It has no configuration options and requires no setup. ### Example Output ```json { "data": {}, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "noop.finished" } ``` ## Read Memory The Read Memory component looks up values from canvas-level memory storage. 
### Use Cases - Retrieve previously stored IDs before cleanup actions - Check whether related data already exists - Rehydrate context from prior runs ### How It Works 1. Reads `namespace`, `resultMode`, `emitMode`, and `matchList` from configuration 2. Finds memory rows matching all configured key/value pairs 3. Emits `memory.read` to the `found` or `notFound` channel ### Output Channels - **Found**: At least one matching memory row was found - **Not Found**: No matching memory rows were found ### Example Output ```json { "data": { "data": { "count": 1, "emitMode": "allAtOnce", "matches": { "creator": "igor", "pull_request": 123 }, "namespace": "machines", "resultMode": "latest", "values": [ { "creator": "igor", "pull_request": 123, "sandbox_id": "sbx-001" } ] } }, "timestamp": "2026-02-28T00:00:00Z", "type": "memory.read" } ``` ## Send Email Notification The Send Email Notification component sends emails through the system's configured email provider (Resend or SMTP) without requiring a separate integration setup. ### Use Cases - **Notifications**: Send email notifications for workflow events - **Alerts**: Email alerts for errors or important conditions - **Status updates**: Notify stakeholders about workflow progress - **User communications**: Send emails to users as part of automated workflows ### Recipients Select recipients from your organization's users, groups, or roles. The system resolves the actual email addresses at send time. ### Configuration - **Recipients**: List of users, groups, or roles - **Subject**: Email subject line (supports expressions) - **Body**: Email body content (supports expressions) ### Output Emits the list of recipients and the subject to the default output channel. 
### Example Output ```json { "data": { "groups": [], "roles": [], "subject": "Deployment completed", "to": [ "alice@example.com", "bob@example.com" ] }, "timestamp": "2026-03-19T12:00:00.000000000Z", "type": "sendEmail.sent" } ``` ## SSH Command Run one or more commands on a remote host via SSH. ### Authentication Choose **SSH key** or **Password**, then select the organization Secret and the key name within that secret that holds the credential. - **SSH key**: Secret key containing the private key (PEM/OpenSSH). Optionally select a second secret and key for the passphrase if the key is encrypted. - **Password**: Secret key containing the password. ### Configuration - **Host**, **Port** (default 22), **Username**: Connection details. - **Commands**: One or more commands to run, one per line (supports expressions). The output payload is based on the last command. - **Working directory**: Optional; changes to this directory before running the commands. - **Environment variables**: Optional list of key/value pairs available during command execution. - **Timeout (seconds)**: How long the commands may run (default 60). - **Connection retry** (optional): Enable to retry connecting when the host is not reachable yet (e.g. server still booting). Set number of retries and interval between attempts. ### Output - **success**: Exit code 0 - **failed**: Non-zero exit code ### Example Output ```json { "data": { "exitCode": 0, "stderr": "", "stdout": "Hello, World!\n" }, "timestamp": "2026-01-19T12:00:00Z", "type": "ssh.command.executed" } ``` ## Time Gate The Time Gate component delays event processing until the next valid day and time window, with optional excluded dates.
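As an illustration of the day, time-window, and exclude-date rule this component applies, here is a minimal Python sketch. It is an approximation for intuition only, not SuperPlane's implementation; the function name and argument shapes are hypothetical:

```python
from datetime import datetime

def gate_open(now: datetime, active_days: set, window: str,
              exclude_dates: set = frozenset()) -> bool:
    """True if `now` is on an active weekday, inside the HH:MM-HH:MM window,
    and not on an excluded MM/DD date."""
    if now.strftime("%m/%d") in exclude_dates:
        return False                      # exclude dates override everything
    if now.strftime("%A") not in active_days:
        return False
    start, end = window.split("-")
    # HH:MM strings compare correctly in lexicographic order.
    return start <= now.strftime("%H:%M") <= end

now = datetime(2026, 1, 16, 14, 30)       # a Friday, 14:30
print(gate_open(now, {"Monday", "Tuesday", "Wednesday", "Thursday", "Friday"},
                "09:00-17:00"))           # → True
```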
### Use Cases - **Business hours**: Only process events during business hours - **Scheduled releases**: Delay deployments until off-peak hours - **Holiday handling**: Exclude specific dates from processing - **Time-based routing**: Route events based on time of day or specific dates ### Configuration - **Active Days**: Days of the week when the gate can open - **Active Time**: Start and end times in HH:MM-HH:MM format (24-hour) - **Timezone**: Timezone offset for time calculations (default: current) - **Exclude Dates**: Specific MM/DD dates that override the rules above ### Behavior - Events wait until the next valid time window is reached - Exclude dates override the day/time rules - Can be manually pushed through using the "Push Through" action - Automatically schedules execution when the time window is reached ### Example Output ```json { "data": {}, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "timegate.finished" } ``` ## Update Memory The Update Memory component updates matching rows in canvas-level memory storage. ### Use Cases - Patch stored records after external state changes - Enrich existing memory rows with additional fields - Keep identifiers and status data in sync ### How It Works 1. Reads `namespace`, `matchList`, and `valueList` from configuration 2. Updates all matching memory rows in a single SQL operation 3. 
Emits `memory.updated` to the `found` or `notFound` channel ### Output Channels - **Found**: At least one matching memory row was updated - **Not Found**: No matching memory rows were updated ### Example Output ```json { "data": { "data": { "count": 1, "matches": { "creator": "igor", "pull_request": 123 }, "namespace": "machines", "updated": [ { "creator": "igor", "pull_request": 123, "sandbox_id": "sbx-001", "status": "running", "updated_by": "workflow" } ], "values": { "status": "running", "updated_by": "workflow" } } }, "timestamp": "2026-02-28T00:00:00Z", "type": "memory.updated" } ``` ## Upsert Memory The Upsert Memory component updates matching rows in canvas-level memory storage, and creates a new row when no matches are found. ### Use Cases - Keep one record per identifier (for example environment or pull request) - Replace ad-hoc update-then-add branching with one component - Persist latest status snapshots with stable matching keys ### How It Works 1. Reads `namespace`, `matchList`, and `valueList` from configuration 2. Attempts to update all matching memory rows 3. If no rows were updated, inserts a new memory row with the values 4. Emits `memory.upserted` to the default channel with `operation` set to `updated` or `created` ### Simplified Matching If `matchList` is empty, the component treats the namespace as a singleton record and upserts at namespace level. This lets you store just one field (for example `value`) without extra marker fields. ### Output Always emits to the default channel. Check `data.operation` to know whether the component updated existing rows or created a new row. 
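The update-else-create behavior described above amounts to logic like the following. This is a simplified in-memory Python sketch for intuition; the real component persists rows in the canvas memory store:

```python
def upsert(rows, matches, values):
    """Update every row matching all `matches` pairs; insert a new row if none matched."""
    matched = [r for r in rows if all(r.get(k) == v for k, v in matches.items())]
    if matched:
        for row in matched:
            row.update(values)            # patch existing rows in place
        return "updated", matched
    new_row = {**matches, **values}       # no match: create from keys + values
    rows.append(new_row)
    return "created", [new_row]

rows = [{"environment": "production", "latest_deployment": "v1.0.0"}]
op1, _ = upsert(rows, {"environment": "production"}, {"latest_deployment": "v1.0.1"})
op2, _ = upsert(rows, {"environment": "staging"}, {"latest_deployment": "v1.0.1"})
print(op1, op2)  # → updated created
```

Note that with an empty `matches` dict every row matches, which mirrors the namespace-level singleton behavior of Simplified Matching when exactly one row exists per namespace.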
### Example Output ```json { "data": { "data": { "count": 1, "matches": { "environment": "production", "latest_deployment_source": "manual_run" }, "namespace": "deployments", "operation": "updated", "records": [ { "environment": "production", "latest_deployment": "v1.0.1", "latest_deployment_source": "manual_run" } ], "values": { "environment": "production", "latest_deployment": "v1.0.1", "latest_deployment_source": "manual_run" } } }, "timestamp": "2026-02-28T00:00:00Z", "type": "memory.upserted" } ``` ## Wait The Wait component pauses workflow execution for a specified duration or until a specific time is reached. ### Use Cases - **Rate limiting**: Add delays between API calls - **Scheduled execution**: Wait until a specific time before proceeding - **Retry delays**: Wait before retrying failed operations - **Time-based workflows**: Delay processing until a specific date/time ### Wait Modes - **Interval**: Wait for a fixed duration (seconds, minutes, or hours) - Supports expressions for dynamic wait times - Example: `{{$.retry_delay}}` or `{{$.status == "urgent" ? 
0 : 30}}` - **Countdown**: Wait until a specific date/time is reached - Supports ISO 8601 date formats - Supports expressions for dynamic target times - Example: `{{$.release_date}}` or `{{$.run_time + duration("48h")}}` ### Behavior - Execution pauses until the wait period completes - Can be manually pushed through using the "Push Through" action - Automatically resumes when the wait time expires - Emits metadata including start time, finish time, and result ### Output The component emits a payload with: - **started_at**: When the wait began - **finished_at**: When the wait completed - **result**: Completion status (completed, cancelled) - **reason**: How it completed (timeout, manual_override, user_cancel) ### Example Output ```json { "data": { "actor": { "display_name": "Alex Doe", "email": "alex@example.com" }, "finished_at": "2024-01-01T12:05:00Z", "reason": "timeout", "result": "completed", "started_at": "2024-01-01T12:00:00Z" }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "wait.finished" } ``` #### Cursor Source URL: https://docs.superplane.com/components/cursor Build workflows with Cursor AI Agents and track usage import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Actions ## Instructions To get your API keys, visit the [Cursor Dashboard](https://cursor.com/dashboard). You may need separate keys for Agents and Admin features. ## Get Daily Usage Data The Get Daily Usage Data component fetches team usage metrics from Cursor's Admin API. ### Use Cases - **Usage reporting**: Track team productivity and AI usage patterns - **Cost tracking**: Monitor usage-based requests and subscription consumption - **Analytics dashboards**: Build custom dashboards with Cursor usage data ### How It Works 1. Fetches usage data for the specified date range from Cursor's Admin API 2. 
Returns detailed metrics per user including lines added/deleted, requests, and model usage ### Configuration - **Start Date**: Start of the date range (YYYY-MM-DD format, defaults to 7 days ago) - **End Date**: End of the date range (YYYY-MM-DD format, defaults to today) ### Output The output includes per-user daily metrics: - Lines added/deleted (total and accepted) - Tab completions shown/accepted - Composer, chat, and agent requests - Subscription vs usage-based request counts - Most used model and file extensions ### Notes - Requires a valid Cursor Admin API key configured in the integration - Only returns data for active users ### Example Output ```json { "data": { "data": [ { "acceptedLinesAdded": 1102, "acceptedLinesDeleted": 645, "agentRequests": 12, "chatRequests": 128, "composerRequests": 45, "date": 1710720000000, "email": "developer@company.com", "isActive": true, "mostUsedModel": "gpt-4", "subscriptionIncludedReqs": 180, "totalAccepts": 73, "totalApplies": 87, "totalLinesAdded": 1543, "totalLinesDeleted": 892, "totalRejects": 14, "totalTabsAccepted": 289, "totalTabsShown": 342, "usageBasedReqs": 5 } ], "period": { "endDate": 1710892800000, "startDate": 1710720000000 } }, "timestamp": "2026-03-26T19:29:35.841265352Z", "type": "cursor.getDailyUsageData.result" } ``` ## Get Last Message The Get Last Message component retrieves the last message from a Cursor Cloud Agent's conversation history. ### Use Cases - **Message tracking**: Get the latest response or prompt from an agent conversation - **Workflow automation**: Use the last message as input for downstream components - **Status monitoring**: Check what the agent last communicated ### How It Works 1. Fetches the conversation history for the specified agent ID 2. Extracts the last message from the conversation 3. 
Returns the message details including ID, type (user_message or assistant_message), and text ### Configuration - **Agent ID**: The unique identifier for the cloud agent (e.g., bc_abc123) ### Output The output includes: - **Agent ID**: The identifier of the agent - **Message**: The last message object containing: - **ID**: Unique message identifier - **Type**: Either "user_message" or "assistant_message" - **Text**: The message content ### Notes - Requires a valid Cursor Cloud Agent API key configured in the integration - If the agent has been deleted, the conversation cannot be accessed - Returns nil if the conversation has no messages ### Example Output ```json { "data": { "agentId": "bc_abc123", "message": { "id": "msg_005", "text": "I've added a troubleshooting section to the README.", "type": "assistant_message" } }, "timestamp": "2026-03-26T19:29:35.841265352Z", "type": "cursor.getLastMessage.result" } ``` ## Launch Cloud Agent The Launch Cloud Agent component triggers a Cursor AI coding agent and waits for it to complete. ### Use Cases - **Automated code generation**: Generate code from natural language prompts - **PR fixes**: Automatically fix issues on existing pull requests - **Code refactoring**: Refactor code based on instructions - **Feature implementation**: Implement new features from specifications ### How It Works 1. Launches a Cursor Cloud Agent with the specified prompt and configuration 2. Waits for the agent to complete (monitored via webhook and polling) 3. Emits output with the agent result (success or failure) ### Example Output ```json { "data": { "agentId": "agent_12345", "branchName": "cursor/agent-550e8400", "prUrl": "https://github.com/org/repo/pull/42", "status": "done", "summary": "Refactored login logic." 
}, "timestamp": "2026-03-26T19:29:35.841265352Z", "type": "cursor.launchAgent.finished" } ``` #### Dash0 Source URL: https://docs.superplane.com/components/dash0 Connect to Dash0 to query data using Prometheus API import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## On Alert Notification The On Alert Notification trigger starts a workflow execution when Dash0 sends an alert notification webhook. ### Setup 1. Configure the Dash0 integration in SuperPlane. 2. Copy the webhook URL shown in the integration configuration. 3. In Dash0, configure alert notifications to send HTTP POST requests to that URL. ### Event Data The trigger emits the full JSON payload received from Dash0 as `dash0.alertNotification`. ### Example Data ```json { "data": { "issue": { "checkrules": [ { "annotations": { "summary": "High error rate detected" }, "description": "Alert when API error rate is high", "expression": "sum(rate(http_requests_total{status=~\"5..\"}[5m])) \u003e 0.05", "for": "5m", "id": "check_456", "interval": "1m", "keepFiringFor": "10m", "labels": { "env": "prod", "service": "api" }, "name": "API availability", "thresholds": { "critical": 0.05 }, "url": "https://app.dash0.com/check-rules/check_456" } ], "dataset": "default", "description": "Error rate exceeded threshold for API availability check.", "end": "", "id": "issue_123", "issueIdentifier": "availability-api-high-error-rate", "labels": [ { "key": "service", "value": { "stringValue": "api" } }, { "key": "env", "value": { "stringValue": "prod" } } ], "start": "2026-02-20T12:00:00Z", "status": "critical", "summary": "High error rate on API availability check", "url": "https://app.dash0.com/issues/issue_123" } }, "timestamp": "2026-02-20T12:00:00Z", "type": "dash0.alertNotification" } ``` ## On Synthetic Check Notification The On Synthetic Check Notification trigger starts a workflow execution when Dash0 sends a synthetic check notification webhook. ### Setup 1. 
Configure the Dash0 integration in SuperPlane. 2. Copy the webhook URL shown in the integration configuration. 3. In Dash0, configure synthetic check notifications to send HTTP POST requests to that URL. ### Event Data The trigger emits the full JSON payload received from Dash0 as `dash0.syntheticCheckNotification`. ### Labels Format Synthetic check notifications use a tuple-based label format where each label is an array of `[index, {key, value}]`. The trigger normalizes these labels into a flat `{key: value}` map in the emitted payload for easier downstream consumption. ### Example Data ```json { "data": { "issue": { "checkrules": [ { "annotations": { "summary": "API health check failure detected" }, "description": "Monitor API health endpoint availability", "expression": "", "for": "5m", "id": "check_101", "interval": "1m", "keepFiringFor": "10m", "labels": { "env": "prod", "service": "api" }, "name": "API Health Check", "thresholds": {}, "url": "https://app.dash0.com/check-rules/check_101" } ], "dataset": "default", "description": "Synthetic check detected failures for API health endpoint.", "end": "", "id": "issue_789", "issueIdentifier": "synthetic-check-api-health", "labels": [ [ "0", { "key": "dash0.resource.type", "value": { "stringValue": "synthetic" } } ], [ "1", { "key": "dash0.synthetic_check.attempt_id", "value": { "stringValue": "73768e2c" } } ], [ "2", { "key": "dash0.synthetic_check.failed_critical_assertions", "value": { "stringValue": "{\"be-brussels\":[{\"actualValue\":\"503\",\"assertion\":{\"kind\":\"status_code\",\"spec\":{\"operator\":\"is\",\"value\":\"200\"}},\"explanation\":\"Expected value to be 200, but got 503\"}]}" } } ], [ "3", { "key": "dash0.synthetic_check.id", "value": { "stringValue": "api-health-check" } } ], [ "4", { "key": "dash0.synthetic_check.name", "value": { "stringValue": "API Health Check" } } ] ], "start": "2026-02-20T12:00:00Z", "status": "critical", "summary": "Synthetic check failed: API Health Check", "url": 
"https://app.dash0.com/issues/issue_789" } }, "timestamp": "2026-02-20T12:00:00Z", "type": "dash0.syntheticCheckNotification" } ``` ## Create Check Rule The Create Check Rule component creates a Prometheus-style alert check rule in Dash0 to monitor metrics and trigger alerts based on PromQL expressions. ### Use Cases - **Service health monitoring**: Create alerts for service error rates, latency, or availability - **Resource monitoring**: Alert on high CPU, memory, or disk usage - **Business metrics**: Monitor key business metrics and trigger alerts when thresholds are exceeded - **SLO enforcement**: Create alerts based on Service Level Objectives (SLOs) ### Configuration #### Name & Expression - **Name**: Human-readable name for the check rule - **Expression**: PromQL expression to evaluate. Supports $__threshold variable for dynamic thresholding #### Thresholds - **Degraded**: Threshold value for degraded state (warning) - **Critical**: Threshold value for critical state (alert) - Required when using $__threshold in the expression #### Evaluation - **Interval**: How often to evaluate the expression (1m, 5m, 10m) - **For**: Grace period before triggering (pending duration) - **Keep Firing For**: Grace period before resolving (resolution duration) #### Metadata - **Summary**: Short templatable summary (max 255 chars) - **Description**: Detailed templatable description (max 2048 chars) - **Labels**: Prometheus labels for routing and grouping - **Annotations**: Prometheus annotations for additional context #### Control - **Enabled**: Whether the check rule is active - **Dataset**: Dash0 dataset to query (defaults to "default") ### Output Returns the created check rule details from the Dash0 API, including the rule ID and full configuration. 
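The degraded/critical threshold pair can be read as a two-level comparison against the value produced by the rule's expression. A minimal Python sketch of that classification, assuming a greater-than comparison as in the `... > $__threshold` example expression (other operators would flip the checks):

```python
def classify(value, degraded, critical):
    """Classify a metric value against a degraded/critical threshold pair.

    Assumes a greater-than comparison, as in the example expression
    `... > $__threshold`; a rule with a different operator would differ.
    """
    if value > critical:
        return "critical"
    if value > degraded:
        return "degraded"
    return "clear"

# With the example thresholds (degraded=10, critical=50):
print(classify(5, 10, 50))   # clear
print(classify(30, 10, 50))  # degraded
print(classify(75, 10, 50))  # critical
```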
### Example Output ```json { "data": { "annotations": { "runbook": "https://wiki.example.com/runbooks/high-error-rate", "summary": "Error rate is {{ $value }} errors/sec" }, "dataset": "default", "description": "The error rate has exceeded the configured threshold", "enabled": true, "expression": "sum(rate(http_requests_total{status=~\"5..\"}[5m])) \u003e $__threshold", "for": "0s", "id": "high-error-rate-alert", "interval": "1m", "keepFiringFor": "0s", "labels": { "severity": "high", "team": "backend" }, "name": "High error rate alert", "summary": "Error rate is high", "thresholds": { "critical": 50, "degraded": 10 } }, "timestamp": "2026-03-06T12:00:00Z", "type": "dash0.checkRule.created" } ``` ## Create HTTP Synthetic Check The Create HTTP Synthetic Check component creates an HTTP synthetic check in Dash0 to monitor the availability and performance of your endpoints. ### Use Cases - **Uptime monitoring**: Create checks to monitor API endpoints and websites - **Performance validation**: Set response time thresholds to catch regressions - **Deployment verification**: Create synthetic checks after deployments to verify availability - **Multi-region monitoring**: Monitor endpoints from multiple global locations ### Configuration #### Name & Dataset - **Name**: Display name of the synthetic check - **Dataset**: The Dash0 dataset to create the check in (defaults to "default") #### Request - **URL**: Target URL to monitor - **Method**: HTTP method (GET, POST, PUT, PATCH, DELETE, HEAD) - **Redirects**: Whether to follow HTTP redirects - **Allow Insecure**: Skip TLS certificate validation (useful for staging environments) - **Headers**: Custom HTTP request headers - **Body**: Request body payload (for POST/PUT/PATCH) #### Schedule - **Interval**: How often the check runs (e.g.
30s, 1m, 5m, 1h, 2d) - **Locations**: Probe locations (Frankfurt, Oregon, North Virginia, London, Brussels, Melbourne) - **Strategy**: Execution strategy (all locations or round-robin) #### Assertions Each assertion has a kind, severity (critical or degraded), and kind-specific parameters: - **Status Code**: Validate the HTTP response status code - **Timing**: Set thresholds for response, request, SSL, connection, DNS, or total time - **Error Type**: Detect specific error types (DNS, connection, SSL, timeout) - **SSL Certificate Validity**: Enforce minimum days until certificate expiration - **Response Header**: Validate presence or value of a specific response header - **JSON Body**: Validate JSON response fields using JSONPath expressions - **Text Body**: Match plain-text response content #### Retries - **Attempts**: Number of retry attempts on failure - **Delay**: Delay between retries (e.g. 1s, 2s, 5s) ### Output Returns the created synthetic check details from the Dash0 API, including the check ID and full configuration. 
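To make the assertion kinds concrete, here is a hypothetical Python sketch that evaluates a status-code and a timing assertion against a simulated probe result. The probe field names (`status`, `timing_ms`) are invented for this illustration and are not the Dash0 wire format:

```python
def failed_assertions(probe, assertions):
    """Return the assertions a simulated probe result violates."""
    failed = []
    for assertion in assertions:
        kind, spec = assertion["kind"], assertion["spec"]
        if kind == "status_code":
            ok = str(probe["status"]) == spec["value"]
        elif kind == "timing":
            # Compare the measured timing against the "NNNNms" threshold.
            ok = probe["timing_ms"][spec["type"]] <= int(spec["value"].rstrip("ms"))
        else:
            ok = True  # other assertion kinds omitted from this sketch
        if not ok:
            failed.append(assertion)
    return failed

critical = [
    {"kind": "status_code", "spec": {"operator": "is", "value": "200"}},
    {"kind": "timing", "spec": {"operator": "lte", "type": "response", "value": "5000ms"}},
]
probe = {"status": 503, "timing_ms": {"response": 1200}}
print([a["kind"] for a in failed_assertions(probe, critical)])  # ['status_code']
```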
### Example Output ```json { "data": { "kind": "Dash0SyntheticCheck", "metadata": { "annotations": {}, "labels": { "dash0.com/dataset": "default", "dash0.com/id": "64617368-3073-796e-7468-abc123def456", "dash0.com/origin": "api-abc12345-6789-0123-4567-890abcdef012", "dash0.com/version": "1" }, "name": "login-api-health-check" }, "spec": { "enabled": true, "plugin": { "display": { "name": "Login API health check" }, "kind": "http", "spec": { "assertions": { "criticalAssertions": [ { "kind": "status_code", "spec": { "operator": "is", "value": "200" } }, { "kind": "timing", "spec": { "operator": "lte", "type": "response", "value": "5000ms" } } ], "degradedAssertions": [ { "kind": "timing", "spec": { "operator": "lte", "type": "response", "value": "2000ms" } } ] }, "request": { "headers": [], "method": "get", "queryParameters": [], "redirects": "follow", "tls": { "allowInsecure": false }, "tracing": { "addTracingHeaders": true }, "url": "https://api.example.com/health" }, "retries": { "kind": "fixed", "spec": { "attempts": 3, "delay": "1s" } } } }, "schedule": { "interval": "1m", "locations": [ "de-frankfurt", "us-oregon" ], "strategy": "all_locations" } } }, "timestamp": "2026-01-19T12:00:00Z", "type": "dash0.syntheticCheck.created" } ``` ## Delete Check Rule The Delete Check Rule component removes a check rule (Prometheus alert rule) from Dash0 by its ID or origin. Use the check rule ID from a Create/Get/Update output or from the Dash0 dashboard. ### Use Cases - **Cleanup**: Remove obsolete or test check rules - **Automation**: Delete check rules as part of automated workflows - **Resource management**: Clean up check rules when services are decommissioned ### Configuration - **Check Rule**: The Dash0 check rule ID or origin to delete (required) - **Dataset**: The dataset the check rule belongs to (defaults to "default") ### Output Returns a confirmation payload indicating successful deletion. 
### Example Output ```json { "data": { "deleted": true, "id": "api-b8dad545-7920-49f9-96be-df053cda312d" }, "timestamp": "2026-03-06T12:00:00Z", "type": "dash0.checkRule.deleted" } ``` ## Delete HTTP Synthetic Check The Delete HTTP Synthetic Check component removes a synthetic check from Dash0 by its ID. Use the check ID from a Create/Get/Update output (e.g. metadata.labels["dash0.com/id"]) or from the Dash0 dashboard. ### Configuration - **Check ID**: The Dash0 synthetic check ID to delete (required). - **Dataset**: The dataset the check belongs to (defaults to "default"). ### Output Returns a confirmation payload (e.g. deleted id). ### Example Output ```json { "data": { "deleted": true, "id": "64617368-3073-796e-7468-abc123def456" }, "timestamp": "2026-01-19T12:00:00Z", "type": "dash0.syntheticCheck.deleted" } ``` ## Get Check Rule The Get Check Rule component retrieves the full configuration of an existing check rule (Prometheus alert rule) from Dash0. ### Use Cases - **Configuration review**: Fetch current check rule settings for audit or documentation - **Workflow integration**: Retrieve check rule details to use in subsequent workflow steps - **Health monitoring**: Check if alert rules are properly configured ### Configuration - **Check Rule**: The ID or origin of the check rule to retrieve (from Dash0) - **Dataset**: The Dash0 dataset the check rule belongs to (defaults to "default") ### Output Returns the complete check rule configuration from the Dash0 API, including: - Name and expression (PromQL query) - Thresholds (degraded and critical) - Evaluation settings (interval, for, keepFiringFor) - Labels and annotations - Enabled status ### Example Output ```json { "data": { "annotations": { "runbook": "https://wiki.example.com/runbooks/high-error-rate", "summary": "Error rate is {{ $value }} errors/sec" }, "dataset": "default", "description": "The error rate has exceeded the configured threshold", "enabled": true, "expression": 
"sum(rate(http_requests_total{status=~\"5..\"}[5m])) \u003e $__threshold", "for": "0s", "id": "high-error-rate-alert", "interval": "1m", "keepFiringFor": "0s", "labels": { "severity": "high", "team": "backend" }, "name": "High error rate alert", "summary": "Error rate is high", "thresholds": { "critical": 50, "degraded": 10 } }, "timestamp": "2026-03-06T12:00:00Z", "type": "dash0.checkRule.fetched" } ``` ## Get HTTP Synthetic Check The Get HTTP Synthetic Check component retrieves the full configuration and operational metrics of an existing HTTP synthetic check from Dash0. ### Use Cases - **Health dashboards**: Fetch current uptime and performance metrics for display in workflows - **Audit and reporting**: Retrieve check configurations for compliance or documentation - **Incident response**: Quickly gather check status and recent performance data during incidents ### Configuration - **Check ID**: The ID of the synthetic check to retrieve (from Dash0) - **Dataset**: The Dash0 dataset the check belongs to (defaults to "default") ### Output Channels - **Healthy**: The check is passing — the most recent run outcome is "Healthy" - **Degraded**: The check is degraded — the most recent run outcome is "Degraded" - **Critical**: The check is failing — the most recent run outcome is "Critical" ### Output Returns a combined payload with: #### Configuration The full synthetic check configuration from the Dash0 API, including: - Name, URL, HTTP method - Schedule (interval, locations, strategy) - Assertions (critical and degraded thresholds) - Retry settings #### Metrics Operational metrics from the Dash0 Prometheus API: - **Healthy Runs (24h/7d)**: Number of successful check runs - **Critical Runs (24h/7d)**: Number of failed check runs - **Total Runs (24h/7d)**: Total number of check runs - **Avg Duration (24h/7d)**: Mean end-to-end response time (in milliseconds) - **Last Outcome**: Most recent run outcome (Healthy or Critical) Note: Metrics are fetched on a best-effort 
basis. If Prometheus metrics are unavailable for a check, the configuration is still returned with null metric values. ### Example Output ```json { "data": { "configuration": { "kind": "Dash0SyntheticCheck", "metadata": { "annotations": {}, "description": "", "labels": { "dash0.com/dataset": "default", "dash0.com/id": "64617368-3073-796e-7468-73599f287bf4", "dash0.com/origin": "", "dash0.com/version": "21" }, "name": "New synthetic check" }, "spec": { "display": { "name": "New synthetic check" }, "enabled": true, "labels": {}, "notifications": { "channels": [], "onlyCriticalChannels": [] }, "plugin": { "kind": "http", "spec": { "assertions": { "criticalAssertions": [ { "kind": "status_code", "spec": { "operator": "is", "value": "200" } } ], "degradedAssertions": [ { "kind": "timing", "spec": { "operator": "lte", "type": "total", "value": "2000ms" } } ] }, "request": { "headers": [], "method": "get", "queryParameters": [], "redirects": "follow", "tls": { "allowInsecure": false }, "tracing": { "addTracingHeaders": true }, "url": "https://example.com/health" } } }, "retries": { "kind": "off", "spec": {} }, "schedule": { "interval": "1m", "locations": [ "be-brussels" ], "strategy": "all_locations" } } }, "metrics": { "avgDuration24hMs": 1004, "avgDuration7dMs": 1004, "criticalRuns24h": 2275, "criticalRuns7d": 2438, "healthyRuns24h": 6, "healthyRuns7d": 37, "lastOutcome": "Healthy", "totalRuns24h": 2281, "totalRuns7d": 2475 } }, "timestamp": "2026-03-03T11:00:00Z", "type": "dash0.syntheticCheck.fetched" } ``` ## List Issues The List Issues component queries Dash0 to retrieve all current issues and routes execution based on issue severity. 
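The Clear/Degraded/Critical routing can be pictured as taking the worst status across the returned issues. A small illustrative sketch (the component does this internally):

```python
SEVERITY = {"clear": 0, "degraded": 1, "critical": 2}

def output_channel(issues):
    """Pick the output channel from the worst issue status in the list."""
    if not issues:
        return "clear"
    return max((issue["status"] for issue in issues), key=SEVERITY.__getitem__)

print(output_channel([]))                                               # clear
print(output_channel([{"status": "degraded"}, {"status": "critical"}]))  # critical
```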
### Use Cases - **Health monitoring**: Check system health and route based on issue severity - **Alert routing**: Route alerts to different channels based on issue status - **Issue tracking**: Monitor and process active issues - **Automated remediation**: Trigger remediation workflows based on issues ### Configuration - **Check Rules**: Optional list of check rules to filter issues (leave empty to get all issues) ### Output Channels - **Clear**: No active issues detected - **Degraded**: One or more degraded issues detected - **Critical**: One or more critical issues detected ### Output Returns a list of issues with: - **check_rule**: The check rule that generated the issue - **status**: Issue status (clear, degraded, critical) - **labels**: Metric labels associated with the issue - **metadata**: Additional issue metadata ### Example Output ```json { "data": { "data": { "result": [ { "metric": { "service_name": "test" }, "value": [ 1234567890, "1" ], "values": [ [ 1234567890, "1" ], [ 1234567900, "2" ] ] } ], "resultType": "vector" }, "status": "success" }, "timestamp": "2026-01-19T12:00:00Z", "type": "dash0.issues.list" } ``` ## Query Prometheus The Query Prometheus component executes PromQL queries against the Dash0 Prometheus API. 
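Downstream nodes often need just the numeric samples out of the response. A sketch of unpacking an instant-query (`vector`) result shaped like this component's example output (the helper name is made up):

```python
def vector_values(resp):
    """Map each series' labels to its latest sample value (as a float)."""
    out = {}
    for series in resp["data"]["result"]:
        labels = tuple(sorted(series["metric"].items()))
        _ts, value = series["value"]  # [timestamp, "value-as-string"]
        out[labels] = float(value)
    return out

resp = {
    "status": "success",
    "data": {"resultType": "vector",
             "result": [{"metric": {"service_name": "test"},
                         "value": [1234567890, "1"]}]},
}
print(vector_values(resp))  # {(('service_name', 'test'),): 1.0}
```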
### Use Cases - **Metrics monitoring**: Query application and infrastructure metrics - **Alerting**: Check metric thresholds and trigger alerts - **Data analysis**: Analyze time-series data from your applications - **Performance monitoring**: Monitor system performance metrics ### Configuration - **PromQL Query**: The Prometheus Query Language query to execute (supports expressions) - **Dataset**: The dataset to query (default: "default") - **Query Type**: - **Instant**: Query a single point in time - **Range**: Query a time range with optional start, end, and step parameters ### Output Returns the Prometheus query response including: - **status**: Query status (success or error) - **data**: Query results with metric labels and values - **resultType**: Result type (vector, matrix, scalar, or string) ### Notes - Requires Dash0 API token and base URL configured in application settings - Supports all standard PromQL functions and operators - Range queries require start, end, and step parameters ### Example Output ```json { "data": { "data": { "result": [ { "metric": { "service_name": "test" }, "value": [ 1234567890, "1" ], "values": [ [ 1234567890, "1" ], [ 1234567900, "2" ] ] } ], "resultType": "vector" }, "status": "success" }, "timestamp": "2026-01-19T12:00:00Z", "type": "dash0.prometheus.response" } ``` ## Send Log Event The Send Log Event component sends log records from workflows to Dash0 via OTLP HTTP ingestion. ### Use Cases - **Audit trails**: Record workflow events (deployments, approvals, alerts) as log lines - **Observability correlation**: Tie workflow activity to traces and metrics in Dash0 - **Event tracking**: Create searchable log entries for workflow milestones - **Debugging**: Send diagnostic information from workflows to Dash0 Logs Explorer ### Configuration - **Severity**: Log severity level (TRACE, DEBUG, INFO, WARN, ERROR, FATAL) - **Event Name**: Optional name for this log event (e.g.
deployment.completed) - **Service Name**: Optional service identifier (becomes OTLP resource attribute 'service.name') - **Body**: The log message content (plain text or JSON string) - **Attributes**: Optional key-value pairs for additional log metadata - **Dataset**: Optional dataset name for log organization (defaults to "default") ### Output Returns a confirmation that the log was sent along with the log record details: - **sent**: Boolean indicating success - **severityText**: The log severity level - **body**: The log message content - **eventName**: The event name (if provided) - **serviceName**: The service name (if provided) - **attributes**: Additional metadata (if provided) - **dataset**: The dataset name - **timestamp**: When the log was sent ### Notes - Requires Dash0 API token and base URL configured in application settings - Logs appear in Dash0 Logs Explorer and can be correlated with traces and metrics - Use INFO severity for normal workflow events, WARN/ERROR for issues ### Example Output ```json { "data": { "attributes": { "env": "production" }, "body": "Deployment started", "dataset": "default", "eventName": "deployment.created", "sent": true, "serviceName": "api-gateway", "severityText": "INFO" }, "timestamp": "2026-03-11T16:05:54.753430237Z", "type": "dash0.log.sent" } ``` ## Update Check Rule The Update Check Rule component updates an existing check rule (Prometheus alert rule) in Dash0. Use the check rule ID from a previous Create Check Rule output or from the Dash0 dashboard. 
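Because the full spec replaces the existing rule, a safe update is read-modify-write: fetch the current rule, merge your changes over it, and send the whole result. A minimal sketch of the merge step (shallow merge; top-level keys in `changes` replace those in `current` wholesale):

```python
def merged_spec(current, changes):
    """Merge changes over the current rule spec before sending it back."""
    spec = dict(current)
    spec.update(changes)
    return spec

current = {"name": "High error rate alert", "for": "0s",
           "thresholds": {"critical": 50, "degraded": 10}}
print(merged_spec(current, {"for": "30s",
                            "thresholds": {"critical": 75, "degraded": 15}}))
```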
### Use Cases - **Threshold adjustment**: Update alert thresholds based on changing conditions - **Expression refinement**: Modify the PromQL query to better detect issues - **Notification changes**: Update labels and annotations for better routing - **Enable/disable**: Temporarily disable check rules during maintenance ### Configuration - **Check Rule**: The Dash0 check rule ID to update (required) - **Dataset**: The dataset the check rule belongs to (defaults to "default") - **Name**, **Expression**, **Thresholds**, **Interval**, etc.: Same as Create Check Rule; the full spec is sent to replace the existing check rule ### Output Returns the updated check rule details from the Dash0 API, including the rule ID and full configuration. ### Example Output ```json { "data": { "annotations": { "runbook": "https://wiki.example.com/runbooks/high-error-rate", "summary": "Error rate is {{ $value }} errors/sec" }, "dataset": "default", "description": "The error rate has exceeded the configured threshold", "enabled": true, "expression": "sum(rate(http_requests_total{status=~\"5..\"}[5m])) \u003e $__threshold", "for": "30s", "id": "high-error-rate-alert", "interval": "1m", "keepFiringFor": "5m", "labels": { "severity": "high", "team": "backend" }, "name": "High error rate alert", "summary": "Error rate is high", "thresholds": { "critical": 75, "degraded": 15 } }, "timestamp": "2026-03-06T12:00:00Z", "type": "dash0.checkRule.updated" } ``` ## Update HTTP Synthetic Check The Update HTTP Synthetic Check component updates an existing synthetic check in Dash0. Use the check ID from a previous Create HTTP Synthetic Check output (e.g. metadata.labels["dash0.com/id"]) or from the Dash0 dashboard. ### Configuration - **Check ID**: The Dash0 synthetic check ID to update (required). - **Dataset**: The dataset the check belongs to (defaults to "default"). 
- **Name**, **Request**, **Schedule**, **Assertions**, **Retries**: Same as Create HTTP Synthetic Check; the full spec is sent to replace the existing check. ### Example Output ```json { "data": { "kind": "Dash0SyntheticCheck", "metadata": { "annotations": {}, "labels": { "dash0.com/dataset": "default", "dash0.com/id": "64617368-3073-796e-7468-abc123def456", "dash0.com/origin": "api-abc12345-6789-0123-4567-890abcdef012", "dash0.com/version": "2" }, "name": "login-api-health-check" }, "spec": { "enabled": true, "plugin": { "display": { "name": "Login API health check" }, "kind": "http", "spec": { "assertions": { "criticalAssertions": [ { "kind": "status_code", "spec": { "operator": "is", "value": "200" } } ], "degradedAssertions": [] }, "request": { "headers": [], "method": "get", "queryParameters": [], "redirects": "follow", "tls": { "allowInsecure": false }, "tracing": { "addTracingHeaders": true }, "url": "https://api.example.com/health" }, "retries": { "kind": "fixed", "spec": { "attempts": 3, "delay": "1s" } } } }, "schedule": { "interval": "1m", "locations": [ "de-frankfurt", "us-oregon" ], "strategy": "all_locations" } } }, "timestamp": "2026-01-19T12:00:00Z", "type": "dash0.syntheticCheck.updated" } ``` #### Datadog Source URL: https://docs.superplane.com/components/datadog Create events in Datadog import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Actions ## Instructions To configure Datadog to work with SuperPlane: 1. **Get API Keys**: In Datadog, go to Organization Settings > API Keys to get your API Key 2. **Get Application Key**: Go to Organization Settings > Application Keys to create an Application Key 3. **Select Site**: Choose the Datadog site that matches your account (US1, US3, US5, EU, or AP1) 4. **Enter Credentials**: Provide your API Key, Application Key, and Site in the integration configuration ## Create Event The Create Event component creates a new event in Datadog. 
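A sketch of the event body this component assembles, assuming the Datadog v1 events API field names (`title`, `text`, `tags`, `alert_type`, `priority`); treat the helper as illustrative, not the component's actual implementation:

```python
import json

def event_body(title, text, tags=None, alert_type="info", priority="normal"):
    """Build a JSON event body with the fields this component emits."""
    return json.dumps({
        "title": title,
        "text": text,
        "tags": tags or [],
        "alert_type": alert_type,
        "priority": priority,
    })

body = event_body("Deployment completed",
                  "Application v1.2.3 has been deployed successfully",
                  tags=["env:prod", "service:web"])
print(json.loads(body)["alert_type"])  # info
```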
### Use Cases - **Deployment tracking**: Log deployment events to correlate with metrics - **Incident annotation**: Add context to incidents with custom events - **Workflow notifications**: Create events to track workflow execution milestones ### Outputs The component emits an event containing: - `id`: The unique identifier of the created event - `title`: The event title - `text`: The event body - `date_happened`: Unix timestamp when the event occurred - `alert_type`: The severity level (info, warning, error, success) - `priority`: Event priority (normal, low) - `tags`: Array of tags attached to the event - `url`: Link to view the event in Datadog ### Example Output ```json { "data": { "alert_type": "info", "date_happened": 1704067200, "id": 1234567890, "priority": "normal", "tags": [ "env:prod", "service:web" ], "text": "Application v1.2.3 has been deployed successfully", "title": "Deployment completed", "url": "https://app.datadoghq.com/event/event?id=1234567890" }, "timestamp": "2026-01-19T12:00:00Z", "type": "datadog.event" } ``` #### Daytona Source URL: https://docs.superplane.com/components/daytona Execute code in isolated sandbox environments import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Actions ## Create Repository Sandbox The Create Repository Sandbox component creates a new Daytona sandbox, clones a repository, and runs a bootstrap script. 
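Clone and bootstrap run in order inside one session, and a failure stops the run. An illustrative sketch of that short-circuiting sequence (the step names and callables are stand-ins, not the component's API):

```python
def run_steps(steps):
    """Run named steps in order; stop and report the first failure."""
    completed = []
    for name, action in steps:
        if not action():
            return {"failed": name, "completed": completed}
        completed.append(name)
    return {"failed": None, "completed": completed}

result = run_steps([
    ("clone", lambda: True),
    ("bootstrap", lambda: False),  # simulate a failing bootstrap script
])
print(result)  # {'failed': 'bootstrap', 'completed': ['clone']}
```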
### Use Cases - **Ephemeral dev environments**: Spin up a fresh environment for a repository on demand - **CI-like workflows**: Clone code and run setup scripts before downstream tasks - **Automated validation**: Prepare repository state before executing tests or commands ### Notes - The component waits for the sandbox to reach the "started" state - Clone and bootstrap run sequentially in the same session - If clone or bootstrap fails, the component returns an error ### Example Output ```json { "data": { "bootstrap": { "from": "file", "path": "scripts/bootstrap.sh" }, "directory": "/home/daytona/example-app", "repository": "https://github.com/superplanehq/example-app.git", "sandboxId": "sandbox-abc123def456", "sandboxStartedAt": "2026-01-19T12:00:00Z", "secrets": [ { "name": "API_KEY", "type": "env", "value": { "key": "API_KEY", "secret": "api-key-secret" } } ], "timeout": 300 }, "timestamp": "2026-01-19T12:00:00Z", "type": "daytona.repository.sandbox" } ``` ## Create Sandbox The Create Sandbox component creates an isolated environment for executing code safely. 
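The component polls the sandbox state until it reaches "started". A simplified stand-in for that polling loop (the state-reporting callable is a placeholder for the real API call):

```python
def wait_for_started(poll_state, attempts=10):
    """Poll a state-reporting callable until it returns "started"."""
    for _ in range(attempts):
        if poll_state() == "started":
            return "started"
    raise TimeoutError("sandbox never reached 'started'")

states = iter(["pending", "starting", "started"])
print(wait_for_started(lambda: next(states)))  # started
```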
### Use Cases - **AI code execution**: Run AI-generated code in a secure sandbox - **Code testing**: Test untrusted code without affecting your infrastructure - **Development environments**: Create ephemeral development environments ### Configuration - **Snapshot**: Base environment snapshot (optional, uses default if not specified) - **Target**: Target region for the sandbox (optional) - **Auto Stop Interval**: Time in minutes before the sandbox auto-stops - **Environment Variables**: Key-value pairs to set as environment variables in the sandbox ### Output Returns the sandbox information including: - **id**: The unique sandbox identifier (use this in subsequent execute/delete operations) - **state**: The current state of the sandbox (e.g., "started") ### Notes - The component polls the sandbox state until it reaches "started" - Each sandbox is fully isolated - Remember to delete sandboxes when done to free resources ### Example Output ```json { "data": { "id": "sandbox-abc123def456", "state": "started" }, "timestamp": "2026-01-19T12:00:00Z", "type": "daytona.sandbox" } ``` ## Delete Sandbox The Delete Sandbox component removes an existing Daytona sandbox. 
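A common pattern is to guarantee deletion even when the work inside the sandbox fails. A sketch using try/finally, where the create/delete callables are placeholders for the corresponding components:

```python
def with_sandbox(create, delete, work):
    """Create a sandbox, run work in it, and always delete it afterwards."""
    sandbox_id = create()
    try:
        return work(sandbox_id)
    finally:
        delete(sandbox_id)

deleted = []
result = with_sandbox(lambda: "sandbox-abc123def456",
                      deleted.append,
                      lambda sid: f"ran in {sid}")
print(result, deleted)
```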
### Use Cases - **Resource cleanup**: Delete sandboxes after code execution is complete - **Cost management**: Remove unused sandboxes to free resources - **Workflow cleanup**: Clean up sandboxes at the end of automation workflows ### Configuration - **Sandbox**: The ID or name of the sandbox to delete (from createSandbox output) - **Force**: Optional flag to force deletion even if sandbox is running ### Output Returns deletion confirmation including: - **deleted**: Boolean indicating successful deletion - **id**: The ID of the deleted sandbox ### Notes - Always delete sandboxes when they are no longer needed - Sandboxes will auto-stop after the configured interval, but explicit deletion frees resources immediately ### Example Output ```json { "data": { "deleted": true, "id": "sandbox-abc123def456" }, "timestamp": "2026-01-19T12:00:00Z", "type": "daytona.delete.response" } ``` ## Execute Code The Execute Code component runs code in an existing Daytona sandbox. ### Use Cases - **AI code execution**: Run AI-generated code safely - **Code testing**: Execute untrusted code in isolation - **Script automation**: Run Python, TypeScript, or JavaScript scripts - **Data processing**: Execute data transformation scripts ### Configuration - **Sandbox**: The sandbox ID to execute code in (from createSandbox output). Supports expressions, e.g. 
`{{ $["daytona.createSandbox"].data.id }}` - **Code**: The code to execute (supports expressions) - **Language**: The programming language (python, typescript, javascript) - **Timeout**: Optional execution timeout in milliseconds ### Output Returns the execution result including: - **exitCode**: The process exit code (0 for success) - **result**: The stdout/output from the code execution ### Notes - The sandbox must be created first using createSandbox - Code output is captured from stdout - Non-zero exit codes indicate execution errors ### Example Output ```json { "data": { "exitCode": 0, "result": "Hello, World!\n" }, "timestamp": "2026-01-19T12:00:00Z", "type": "daytona.execute.response" } ``` ## Execute Command The Execute Command component runs shell commands in an existing Daytona sandbox. ### Use Cases - **Package installation**: Install dependencies (pip install, npm install) - **File operations**: Create, move, or delete files in the sandbox - **System commands**: Run any shell command in the isolated environment - **Build processes**: Execute build scripts or compilation commands ### Configuration - **Sandbox**: The sandbox ID to run commands in (from createSandbox output). Supports expressions, e.g. 
`{{ $["daytona.createSandbox"].data.id }}` - **Command**: The shell command to execute - **Working Directory**: Optional working directory for the command - **Environment Variables**: Optional key-value pairs exported before command execution - **Timeout**: Optional execution timeout in seconds ### Output Routes to one of two channels: - **success**: Exit code is 0 - **failed**: Exit code is non-zero The payload includes: - **exitCode**: The process exit code (0 for success) - **timeout**: Whether the command timed out - **result**: The stdout/output from the command execution ### Notes - The sandbox must be created first using createSandbox - Commands run in a shell environment - Non-zero exit codes indicate command failures ### Example Output ```json { "data": { "exitCode": 0, "result": "Successfully installed requests-2.31.0\n", "timeout": false }, "timestamp": "2026-01-19T12:00:00Z", "type": "daytona.command.response" } ``` ## Get Preview URL The Get Preview URL component generates a Daytona preview URL for a specific sandbox port. 
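The two URL shapes the component can return can be sketched as plain string templates. This is illustrative only: the component hands you a ready-to-use `url` and `token`, and the proxy domain used here is a placeholder.

```python
# Illustrative sketch of the two Daytona preview URL shapes.
# SuperPlane's Get Preview URL component builds these for you; the
# proxy domain below is a hypothetical placeholder value.

def standard_preview_url(port: int, sandbox_id: str, proxy_domain: str) -> str:
    # Standard URL: requests must carry the x-daytona-preview-token header.
    return f"https://{port}-{sandbox_id}.{proxy_domain}"

def signed_preview_url(port: int, token: str, proxy_domain: str) -> str:
    # Signed URL: authentication is embedded, so it opens directly in a browser.
    return f"https://{port}-{token}.{proxy_domain}"

print(standard_preview_url(3000, "sandbox-abc123def456", "proxy.example.com"))
print(signed_preview_url(3000, "signed-token-abc123", "proxy.example.com"))
```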
### Use Cases - **Open sandbox web apps**: Access a web app running in a sandbox from a browser - **Share previews**: Generate signed URLs that can be opened without custom headers - **Automation**: Generate preview links for downstream steps and notifications ### Configuration - **Sandbox**: Sandbox to generate the preview URL for - **Port**: Sandbox port to preview (default: 3000) - **Signed URL**: Whether to generate a signed preview URL (default: true) - **Expires In Seconds**: Signed URL expiration in seconds (default: 60, max: 86400) ### URL Formats - **Standard URL** (`signed=false`): `https://{port}-{sandboxId}.{daytonaProxyDomain}` Requires `x-daytona-preview-token` header with the returned token - **Signed URL** (`signed=true`): `https://{port}-{token}.{daytonaProxyDomain}` Authentication is embedded in the URL, no custom header required ### Output Returns preview URL information including: - **sandbox**: The sandbox ID used in the request - **port**: The target sandbox port - **signed**: Whether the generated URL is signed - **url**: Generated preview URL - **token**: Preview token (embedded token for signed URLs, header token for standard URLs) - **expiresInSeconds**: Expiration for signed URLs ### Notes - The target port must be serving HTTP traffic in the sandbox, otherwise preview access may fail - Signed URLs can be opened directly in browsers - Standard URLs require `x-daytona-preview-token` header for private sandboxes ### Example Output ```json { "data": { "expiresInSeconds": 3600, "port": 3000, "sandbox": "sandbox-abc123def456", "signed": true, "token": "signed-token-abc123", "url": "https://3000-signed-token-abc123.preview.daytona.app" }, "timestamp": "2026-01-19T12:00:00Z", "type": "daytona.preview.response" } ``` #### DigitalOcean Source URL: https://docs.superplane.com/components/digitalocean Manage and monitor your DigitalOcean infrastructure ## Instructions ## 
DigitalOcean Personal Access Token Generate a [DigitalOcean Personal Access Token](https://cloud.digitalocean.com/account/api/tokens) and copy it. - Token name: `SuperPlane Integration` - Expiration: **No expiry** (or choose an appropriate expiration) - Scopes: **Full Access** (or customize as needed) ## Access Key (optional) Only required for **Spaces Object Storage** components. Create an [Access Key ID & Secret Access Key](https://cloud.digitalocean.com/spaces/access_keys) and copy the generated pair. - Scope: **Full Access** (all buckets) or **Limited Access** (specific buckets) > **Note:** The Personal Access Token and Secret Access Key are shown only once — store them somewhere safe before continuing. ## Add Data Source The Add Data Source component adds a new data source to an existing knowledge base on the DigitalOcean Gradient AI Platform. ### How it works Adds a single data source — either a Spaces bucket or a web/sitemap URL — to a knowledge base. When **Index after adding** is enabled (the default), the component also starts an indexing job and waits for it to complete before emitting the output. ### Data Source Types - **Spaces Bucket or Folder** — indexes all supported files in a DigitalOcean Spaces bucket or folder - **Web or Sitemap URL** — crawls a public website (seed URL) or a list of URLs from a sitemap ### Chunking Strategies Each data source has its own independent chunking configuration: - **Section-based** (default) — splits on structural elements like headings and paragraphs; fast and low-cost - **Semantic** — groups sentences by meaning; slower but context-aware - **Hierarchical** — creates parent (context) and child (retrieval) chunk pairs - **Fixed-length** — splits strictly by token count; best for logs and unstructured text ### Indexing When **Index after adding** is enabled, the component starts an indexing job scoped **only to the newly added data source** and polls every 30 seconds until the job completes. 
Other existing data sources in the knowledge base are not re-indexed. Disable it if you want to add multiple data sources first and index them all at once using the Index Knowledge Base component. ### Output Returns the added data source details: - **dataSourceUUID**: UUID of the newly added data source - **knowledgeBaseUUID**: UUID of the knowledge base - **knowledgeBaseName**: Name of the knowledge base When indexing is enabled, the output also includes: - **indexingJob**: Full indexing job details (status, totalTokens, completedDataSources, totalDataSources, startedAt, finishedAt) ### Example Output ```json { "data": { "dataSourceName": "https://example.com/", "dataSourceUUID": "e374bb5e-33e6-11f1-b074-4e01edede4", "indexingJob": { "completedDataSources": 1, "finishedAt": "2026-04-09T07:37:51Z", "startedAt": "2026-04-09T07:36:58Z", "status": "INDEX_JOB_STATUS_COMPLETED", "totalDataSources": 1, "totalTokens": "21" }, "knowledgeBaseName": "ecommerce-knowledge-base", "knowledgeBaseUUID": "3b88fe18-31bb-11f1-b074-4e013hhjte4" }, "timestamp": "2026-04-09T07:38:01.304486131Z", "type": "digitalocean.data_source.added" } ``` ## Assign Reserved IP The Assign Reserved IP component assigns or unassigns a DigitalOcean Reserved IP to a droplet. 
### Use Cases - **Blue/green deployments**: Reassign a reserved IP to the new deployment with zero downtime - **Failover**: Quickly reassign a reserved IP from a failed droplet to a healthy replacement - **Maintenance**: Temporarily unassign a reserved IP while a droplet is being serviced ### Configuration - **Reserved IP**: The reserved IP address to manage (required) - **Action**: The operation to perform: assign or unassign (required) - **Droplet ID**: The target droplet for the assignment (required when action is assign) ### Output Returns the action result including: - **id**: Action ID - **status**: Final action status (completed) - **type**: Type of action performed (assign or unassign) - **started_at**: When the action started - **completed_at**: When the action completed - **resource_id**: Reserved IP resource identifier ### Important Notes - The component polls until the action completes - For **assign**, the reserved IP will be unassigned from any current droplet first - For **unassign**, the **Droplet ID** field is ignored ### Example Output ```json { "data": { "completed_at": "2026-03-13T10:10:05Z", "id": 2048576123, "region_slug": "nyc3", "resource_id": 2335912909, "resource_type": "floating_ip", "started_at": "2026-03-13T10:10:00Z", "status": "completed", "type": "assign_ip" }, "timestamp": "2026-03-13T10:10:08.000000000Z", "type": "digitalocean.reservedip.assign" } ``` ## Attach Knowledge Base The Attach Knowledge Base component connects a knowledge base to an existing Gradient AI agent, enabling the agent to use it for retrieval-augmented generation (RAG). 
### Use Cases - **Post-creation wiring**: After creating a new knowledge base, attach it to an agent to make it immediately available - **Blue/green KB deployment**: Attach a newly indexed knowledge base to an agent as part of a promotion pipeline - **Multi-KB agents**: Add additional knowledge bases to an agent that already has others attached ### Configuration - **Agent**: The agent to attach the knowledge base to (required) - **Knowledge Base**: The knowledge base to attach — only shows knowledge bases not already attached to the selected agent (required) ### Output Returns confirmation of the attachment including: - **agentUUID**: UUID of the agent - **knowledgeBaseUUID**: UUID of the attached knowledge base ### Example Output ```json { "data": { "agentUUID": "20cd8434-6ea1-11f0-bf8f-4e013e2ddde4", "knowledgeBaseUUID": "a1b2c3d4-0000-0000-0000-000000000001" }, "timestamp": "2025-01-01T00:00:00Z", "type": "digitalocean.knowledge_base.attached" } ``` ## Copy Object The Copy Object component copies an object from one location to another within DigitalOcean Spaces. When **Delete Source** is enabled, the source object is deleted after a successful copy, effectively moving the object. ### Prerequisites Spaces Access Key ID and Secret Access Key must be configured in the DigitalOcean integration settings. ### Configuration - **Source Bucket**: The bucket containing the object to copy - **Source File Path**: The path to the source object (e.g. reports/daily.csv). Supports expressions. - **Destination Bucket**: The bucket to copy the object into (can be the same bucket) - **Destination File Path**: The path for the copied object (e.g. archive/2026/daily.csv). Supports expressions. 
- **Visibility**: Access control for the destination object — Private (default) or Public - **Delete Source**: When enabled, the source object is deleted after a successful copy (move operation) ### Output - **sourceBucket**: The source bucket name - **sourceFilePath**: The source object path - **destinationBucket**: The destination bucket name - **destinationFilePath**: The destination object path - **endpoint**: The full Spaces URL of the copied object - **eTag**: MD5 hash of the copied object - **moved**: `true` if the source was deleted, `false` otherwise ### Notes - Both buckets must be in the same region - Copying an object to the same path overwrites it - Metadata and tags are copied from the source object by default ### Use Cases - **Archiving**: Move processed files to an archive bucket or folder - **Promotion**: Copy an artifact from a staging bucket to production - **Backup**: Duplicate a file before modifying it - **Renaming**: Move a file to a new path within the same bucket ### Example Output ```json { "data": { "destinationBucket": "my-company-archive", "destinationFilePath": "archive/2026/daily.csv", "eTag": "a1b2c3d4ef567890a1b2c3d4ef567890", "endpoint": "https://my-company-archive.fra1.digitaloceanspaces.com/archive/2026/daily.csv", "moved": false, "sourceBucket": "my-company-assets", "sourceFilePath": "reports/daily.csv" }, "timestamp": "2026-03-25T09:00:00Z", "type": "digitalocean.spaces.object.copied" } ``` ## Create Alert Policy The Create Alert Policy component creates a monitoring alert policy that triggers notifications when droplet metrics cross defined thresholds. > **Note:** Monitoring is only available for droplets that had monitoring enabled during creation. Droplets created without monitoring will not report metrics or trigger alerts. 
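Conceptually, a policy fires when the metric, averaged over the evaluation window, crosses the threshold in the configured direction. A toy model of that evaluation (not DigitalOcean's monitoring implementation):

```python
# Toy model of how an alert policy evaluates: average the metric samples
# collected over the evaluation window, then compare against the threshold.
# Illustrative only; DigitalOcean's real evaluation runs server-side.

def should_alert(samples: list[float], compare: str, threshold: float) -> bool:
    if not samples:
        return False  # no metrics reported, nothing to evaluate
    avg = sum(samples) / len(samples)
    if compare == "GreaterThan":
        return avg > threshold
    if compare == "LessThan":
        return avg < threshold
    raise ValueError(f"unknown comparison: {compare}")

# CPU samples (%) over a 5m window vs. a GreaterThan 20 policy:
print(should_alert([18.0, 25.0, 31.0], "GreaterThan", 20.0))  # True (avg = 24.67 > 20)
```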
### Use Cases - **Capacity management**: Get notified when CPU or memory usage consistently exceeds a safe operating level - **Performance monitoring**: Detect and respond to high load averages or network saturation - **Automated workflows**: Chain downstream actions when infrastructure metrics breach limits ### Configuration - **Description**: Human-readable name for the alert policy (required) - **Metric Type**: The droplet metric to monitor, such as CPU Usage or Memory Usage (required) - **Comparison**: Alert when the value is GreaterThan or LessThan the threshold (required) - **Threshold Value**: The numeric threshold that triggers the alert (required) - **Evaluation Window**: The rolling time window over which the metric is averaged (required) - **Droplets**: Specific droplets to scope the policy to (optional) - **Tags**: Monitor all droplets with matching tags (optional) - **Enabled**: Whether the alert policy is immediately active (default: true) - **Email Notifications**: Email addresses to notify when the alert fires (optional) - **Slack Channel**: Slack channel to post alerts to, e.g. 
#alerts (optional) - **Slack Webhook URL**: Incoming webhook URL for the Slack workspace (required when Slack Channel is set) ### Output Returns the created alert policy including: - **uuid**: Alert policy UUID for use in Get/Delete operations - **description**: Human-readable description - **type**: Metric type being monitored - **compare**: Comparison operator (GreaterThan/LessThan) - **value**: Threshold value - **window**: Evaluation window - **enabled**: Whether the policy is active - **alerts**: Configured notification channels (email and/or Slack) ### Important Notes - At least one notification channel (email or Slack) is required - **Slack Channel** and **Slack Webhook URL** must be provided together - Scoping by **Droplets** and **Tags** is independent — you can use either, both, or neither (applies to all droplets) ### Example Output ```json { "data": { "alerts": { "email": [ "sammy@digitalocean.com" ] }, "compare": "GreaterThan", "description": "High CPU Usage", "enabled": true, "entities": [ "558899681" ], "tags": [], "type": "v1/insights/droplet/cpu", "uuid": "ffcaf816-f6a5-4b4a-b4c4-e84532755e82", "value": 20, "window": "5m" }, "timestamp": "2026-03-18T09:29:00.308296519Z", "type": "digitalocean.alertpolicy.created" } ``` ## Create App The Create App component provisions a new application on DigitalOcean's App Platform from a GitHub, GitLab, or Bitbucket repository. The component requires that you have connected your Git provider in your DigitalOcean account and granted access to the repository you want to deploy. 
You can do so by creating a sample app in the DigitalOcean control panel as illustrated here: https://docs.digitalocean.com/products/app-platform/getting-started/deploy-sample-apps/ ### Use Cases - **Deploy web services**: Provision web services and APIs with configurable instance sizes and HTTP ports - **Deploy static sites**: Host static websites and single-page applications with custom build and output directories - **Deploy workers**: Run background workers for processing tasks - **Deploy jobs**: Run one-off or scheduled jobs (pre-deploy, post-deploy, or failed-deploy) - **Automated provisioning**: Create app instances as part of infrastructure automation workflows - **Multi-environment setup**: Deploy separate app instances for dev, staging, and production ### Configuration - **Name**: The name for the app (required) - **Region**: The region to deploy the app in (required) - **Component Type**: The type of component - Service, Static Site, Worker, or Job (required, defaults to Service) - **Source Provider**: The source code provider - GitHub, GitLab, or Bitbucket (required) - **Repository**: The repository in owner/repo format (required, shown based on selected provider) - **Branch**: The branch to deploy from (defaults to "main", shown based on selected provider) - **Deploy on Push**: Automatically deploy when code is pushed to the branch (default: true) - **Environment Slug**: The runtime environment/buildpack (e.g., go, node-js, python, html) - **Build Command**: Custom build command (e.g., npm install && npm run build) - **Run Command**: Custom run command for services, workers, and jobs (e.g., npm start) - **Source Directory**: Path to the source code within the repository (defaults to /) - **HTTP Port**: The port the service listens on (services only) - **Instance Size**: The instance size slug (e.g., apps-s-1vcpu-1gb) for services, workers, and jobs - **Instance Count**: Number of instances to run (services, workers, and jobs) - **Output Directory**: 
Build output directory for static sites (e.g., build, dist, public) - **Index Document**: Index document for static sites (defaults to index.html) - **Error Document**: Custom error document for static sites (e.g., 404.html) - **Catchall Document**: Catchall document for single-page applications (e.g., index.html) - **Environment Variables**: Key-value pairs for environment variables (optional) #### Ingress Configuration - **Ingress Path**: Path prefix for routing traffic to the component (e.g., /api for services, / for static sites) - **CORS Allow Origins**: Origins allowed for Cross-Origin Resource Sharing (e.g., https://example.com) - **CORS Allow Methods**: HTTP methods allowed for CORS requests (e.g., GET, POST, PUT) #### Database Configuration - **Add Database**: Attach a database to the app - **Database Component Name**: Name used to reference the database in env vars (e.g., ${db.DATABASE_URL}) - **Database Engine**: PostgreSQL, MySQL, Redis, or MongoDB - **Database Version**: Engine version (e.g., 16 for PostgreSQL) - **Use Managed Database**: Connect to an existing DigitalOcean Managed Database cluster instead of a dev database - **Database Cluster Name**: Name of the existing managed database cluster (required for managed databases) - **Database Name / User**: Optional database name and user for managed database connections #### VPC Configuration - **VPC**: ID of the VPC to deploy into. Apps in a VPC can communicate with other resources over the private network. 
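The configuration fields above roughly assemble into an App Platform app spec. A hedged sketch of what a minimal Service spec looks like (field names follow DigitalOcean's public app spec as I understand it; the app name, region, repo, and commands here are placeholder values):

```python
# Rough sketch of the App Platform app spec a minimal Service configuration
# maps to. Illustrative only; SuperPlane builds and submits the spec for you.

def service_app_spec(name: str, region: str, repo: str, branch: str = "main") -> dict:
    return {
        "name": name,
        "region": region,
        "services": [
            {
                "name": "web",
                "github": {
                    "repo": repo,            # owner/repo format
                    "branch": branch,
                    "deploy_on_push": True,  # matches the Deploy on Push option
                },
                "environment_slug": "node-js",
                "run_command": "npm start",
                "http_port": 8080,
                "instance_size_slug": "apps-s-1vcpu-1gb",
                "instance_count": 1,
            }
        ],
    }

spec = service_app_spec("my-app", "blr", "example-org/example-repo")
```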
### Output Returns the created app including: - **id**: The unique app ID - **name**: The app name - **default_ingress**: The default ingress URL - **live_url**: The live URL for the app - **region**: The region where the app is deployed - **active_deployment**: Information about the active deployment ### Notes - The app will be created with a single component of the selected type - Deployments are asynchronous and may take several minutes to complete - The component emits an output once the deployment reaches ACTIVE status - If the deployment fails, the component will report the failure - Dev databases are free and suitable for development; use managed databases for production - Use bindable variables (e.g., ${db.DATABASE_URL}) to reference database connection details in environment variables ### Example Output ```json { "data": { "defaultIngress": "https://my-app-22-b6v8c.ondigitalocean.app", "id": "6d8abe1c-7cc5-4db3-b1aa-d9cdd1c127e7", "liveURL": "https://my-app-22-b6v8c.ondigitalocean.app", "name": "my-app-22", "region": { "continent": "Asia", "data_centers": [ "blr1" ], "flag": "india", "label": "Bangalore", "slug": "blr" } }, "timestamp": "2026-03-24T07:01:24.678254095Z", "type": "digitalocean.app.created" } ``` ## Create DNS Record The Create DNS Record component creates a new DNS record for a domain managed by DigitalOcean. ### Use Cases - **Service discovery**: Add A or CNAME records when provisioning new services - **Email routing**: Create MX records for custom mail delivery - **Verification**: Add TXT records for domain ownership verification - **Subdomain management**: Dynamically create subdomains as part of provisioning workflows ### Configuration - **Domain**: The DigitalOcean-managed domain to add the record to (required) - **Type**: The DNS record type (required): A, AAAA, CNAME, MX, NS, TXT, SRV, CAA - **Name**: The subdomain name for the record (required, use @ for root) - **Data**: The record value, e.g. 
an IP address or hostname (required, supports expressions) - **TTL**: Time-to-live in seconds (optional, defaults to 1800) - **Priority**: Record priority for MX/SRV records (optional) - **Port**: Port number for SRV records (optional) - **Weight**: Weight for SRV records (optional) ### Output Returns the created DNS record including: - **id**: Record ID - **type**: Record type - **name**: Subdomain name - **data**: Record value - **ttl**: Time-to-live - **priority**: Priority (for MX/SRV) - **port**: Port (for SRV) - **weight**: Weight (for SRV) ### Example Output ```json { "data": { "data": "167.71.224.221.felixgateru2.com", "id": 1812548333, "name": "_sip._tcp", "port": 8000, "priority": 1, "ttl": 1800, "type": "SRV", "weight": 1 }, "timestamp": "2026-03-16T09:50:03.222782653Z", "type": "digitalocean.dns.record.created" } ``` ## Create Database The Create Database component adds a new database to an existing DigitalOcean Managed Database cluster. ### Use Cases - **Application bootstrap**: Create an application-specific database as part of environment setup - **Tenant provisioning**: Add a dedicated database for a new customer or workspace - **Migration workflows**: Prepare a destination database before importing data ### Configuration - **Database Cluster**: The managed database cluster that will contain the new database (required) - **Database Name**: The name of the database to create (required, supports expressions) ### Output Returns the created database including: - **name**: The created database name - **databaseClusterId**: The cluster UUID - **databaseClusterName**: The cluster name ### Important Notes - If you use custom token scopes, this action requires `database:create` and `database:read` - Database management is not supported for Caching or Valkey clusters ### Example Output ```json { "data": { "databaseClusterId": "9cc10173-e9ea-4176-9dbc-a4cee4c4ff30", "databaseClusterName": "primary-postgres", "name": "app_db" }, "timestamp": 
"2026-03-27T09:15:00Z", "type": "digitalocean.database.created" } ``` ## Create Database Cluster The Create Database Cluster component provisions a new DigitalOcean Managed Database cluster and waits until it is online. ### Use Cases - **Environment bootstrap**: Provision a managed database cluster before creating apps or databases - **Platform setup**: Create a dedicated cluster for a service, team, or customer environment - **Migration workflows**: Stand up a new cluster before importing data or cutover ### Configuration - **Name**: The database cluster name (required) - **Engine**: The database engine to provision, such as PostgreSQL or MySQL (required) - **Version**: The engine version to provision (required) - **Region**: The DigitalOcean region for the cluster (required) - **Size**: The node size slug for the cluster, for example `db-s-1vcpu-1gb` (required) - **Node Count**: The number of nodes in the cluster (required) ### Output Returns the created database cluster including: - **id**: The cluster UUID - **name**: The cluster name - **engine**: The provisioned engine - **version**: The engine version - **region**: The cluster region - **size**: The selected node size slug - **num_nodes**: The number of nodes - **status**: The current cluster status - **connection**: Connection information when available ### Important Notes - If you use custom token scopes, this action requires `database:create` and `database:read` - Valid versions, sizes, and node counts depend on the selected engine. 
Use the DigitalOcean Database Options API or dashboard values when configuring this component - The component polls until the cluster status becomes `online` ### Example Output ```json { "data": { "connection": { "host": "superplane-db-do-user-123456-0.j.db.ondigitalocean.com", "port": 25060, "ssl": true, "uri": "postgres://doadmin:[email protected]:25060/defaultdb?sslmode=require", "user": "doadmin" }, "created_at": "2026-03-27T13:00:00Z", "engine": "pg", "id": "65b497a5-1674-4b1a-a122-01aebe761ef7", "name": "superplane-db", "num_nodes": 1, "private_network_uuid": "7e6d2691-182b-4dd1-8452-529f88feb996", "region": "nyc1", "size": "db-s-1vcpu-1gb", "status": "online", "version": "18.0" }, "timestamp": "2026-03-27T13:00:05Z", "type": "digitalocean.database.cluster.created" } ``` ## Create Droplet The Create Droplet component creates a new droplet in DigitalOcean. ### Use Cases - **Infrastructure provisioning**: Automatically provision droplets from workflow events - **Scaling**: Create new instances in response to load or alerts - **Environment setup**: Spin up droplets for testing or staging environments ### Configuration - **Name**: The hostname for the droplet (required, supports expressions) - **Region**: Region slug where the droplet will be created (required) - **Size**: Size slug for the droplet (required) - **Image**: Image slug or ID for the droplet OS (required) - **SSH Keys**: SSH keys to add to the droplet. Must have been added to the DigitalOcean team. 
(optional) - **Tags**: Tags to apply to the droplet (optional) - **User Data**: Cloud-init user data script (optional) - **Backups**: Enable automated backups for the droplet (optional) - **IPv6**: Enable IPv6 networking on the droplet (optional) - **Monitoring**: Enable DigitalOcean monitoring agent on the droplet (optional) - **VPC UUID**: UUID of the VPC to create the droplet in (optional) ### Output Returns the created droplet object including: - **id**: Droplet ID - **name**: Droplet hostname - **status**: Current droplet status - **region**: Region information - **networks**: Network information including IP addresses ### Example Output ```json { "data": { "disk": 25, "id": 98765432, "image": { "id": 12345, "name": "Ubuntu 24.04 (LTS) x64", "slug": "ubuntu-24-04-x64" }, "memory": 1024, "name": "my-droplet", "networks": { "v4": [ { "ip_address": "104.131.186.241", "type": "public" } ] }, "region": { "name": "New York 3", "slug": "nyc3" }, "size_slug": "s-1vcpu-1gb", "status": "new", "tags": [ "web" ], "vcpus": 1 }, "timestamp": "2026-03-12T21:10:00.000000000Z", "type": "digitalocean.droplet.created" } ``` ## Create Knowledge Base The Create Knowledge Base component creates a new knowledge base on the DigitalOcean Gradient AI Platform, ready for use with AI agents via retrieval-augmented generation (RAG). ### How it works A knowledge base converts your data sources into vector embeddings using the selected embedding model. Those embeddings are stored in an OpenSearch database — either a newly provisioned one or one you already have. Once created, the knowledge base can be attached to any Gradient AI agent. ### Data Sources You can add multiple data sources of different types: - **Spaces Bucket or Folder** — indexes all supported files in a DigitalOcean Spaces bucket or folder - **Web or Sitemap URL** — crawls a public website (seed URL) or a list of URLs from a sitemap Each data source has its own independent chunking strategy. 
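To build intuition for what a chunking strategy does to a data source, here is a toy fixed-length splitter. The platform's real chunkers run server-side and are far more sophisticated; this only illustrates the idea of splitting strictly by token count.

```python
# Toy illustration of fixed-length chunking: split text strictly by token
# count, ignoring structure and meaning. Not the platform's implementation.

def chunk_fixed(text: str, tokens_per_chunk: int) -> list[str]:
    tokens = text.split()  # crude whitespace "tokenizer" for illustration
    return [
        " ".join(tokens[i : i + tokens_per_chunk])
        for i in range(0, len(tokens), tokens_per_chunk)
    ]

chunks = chunk_fixed("error at line 12 retrying connection to db replica", 4)
# Fixed-length splitting ignores meaning, which is why it suits logs and
# unstructured text where no section structure exists.
```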
### Chunking Strategies - **Section-based** (default) — splits on structural elements like headings and paragraphs; fast and low-cost - **Semantic** — groups sentences by meaning; slower but context-aware - **Hierarchical** — creates parent (context) and child (retrieval) chunk pairs - **Fixed-length** — splits strictly by token count; best for logs and unstructured text ### OpenSearch Database The knowledge base requires an OpenSearch database to store the vector embeddings: - **Create new** — provisions a new database automatically sized to your data - **Use existing** — connects to a database you already have by providing its ID ### Output Returns the created knowledge base including: - **uuid**: Knowledge base UUID for use in downstream components - **name**: Name of the knowledge base - **region**: Datacenter region - **embeddingModelUUID**: UUID of the embedding model used - **projectId**: Associated project ID - **databaseId**: UUID of the OpenSearch database (populated after provisioning completes for new databases) - **createdAt**: Creation timestamp ### Example Output ```json { "data": { "createdAt": "2025-01-01T00:00:00Z", "databaseId": "abf1055a-745d-4c24-a1db-1959ea819264", "embeddingModelUUID": "05700391-7aa8-11ef-bf8f-4e013e2ddde4", "name": "my-knowledge-base", "projectId": "37455431-84bd-4fa2-94cf-e8486f8f8c5e", "region": "tor1", "tags": [ "docs", "production" ], "uuid": "20cd8434-6ea1-11f0-bf8f-4e013e2ddde4" }, "timestamp": "2025-01-01T00:00:00Z", "type": "digitalocean.knowledge_base.created" } ``` ## Create Load Balancer The Create Load Balancer component creates a new load balancer in DigitalOcean and waits until it is active. 
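Each load balancer needs at least one forwarding rule mapping an entry protocol and port on the load balancer to a target protocol and port on the droplets. A small sketch of the rule shape, matching the fields in the component's example output (the helper itself is illustrative, not an official client):

```python
# A forwarding rule maps traffic arriving at the load balancer (entry) to
# the droplets behind it (target). Builds the same shape shown in the
# component's example output; illustrative helper only.

def forwarding_rule(entry_protocol: str, entry_port: int,
                    target_protocol: str, target_port: int,
                    tls_passthrough: bool = False) -> dict:
    return {
        "entry_protocol": entry_protocol,
        "entry_port": entry_port,
        "target_protocol": target_protocol,
        "target_port": target_port,
        "tls_passthrough": tls_passthrough,
    }

rules = [
    forwarding_rule("http", 80, "http", 80),
    # TLS passthrough forwards encrypted traffic untouched to the droplets:
    forwarding_rule("https", 443, "https", 443, tls_passthrough=True),
]
```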
### Use Cases - **Traffic distribution**: Distribute incoming requests across multiple droplets - **High availability**: Ensure zero-downtime deployments by routing traffic across instances - **Scalable infrastructure**: Provision load balancers as part of automated environment setup ### Configuration - **Name**: The name of the load balancer (required, only letters, numbers, and hyphens) - **Region**: Region where the load balancer will be created (required) - **Forwarding Rules**: One or more forwarding rules specifying entry/target protocol, port, and optional TLS passthrough (required) - **Droplets**: The droplets to add as targets — must be in the same region as the load balancer (optional, mutually exclusive with Tag) - **Tag**: Tag used to dynamically target droplets (optional, mutually exclusive with Droplets) ### Output Returns the created load balancer object including: - **id**: Load balancer ID (UUID) - **name**: Load balancer name - **ip**: Assigned public IP address - **status**: Current status (active) - **region**: Region information - **forwarding_rules**: Configured forwarding rules - **droplet_ids**: Targeted droplet IDs ### Important Notes - The component polls until the load balancer status becomes **active** - Specify either **Droplet IDs** or **Tag** to define targets, not both - The load balancer name must contain only letters, numbers, and hyphens - All specified droplets must be in the same region as the load balancer ### Example Output ```json { "data": { "algorithm": "round_robin", "created_at": "2026-03-13T10:00:00Z", "droplet_ids": [ 98765432, 98765433 ], "forwarding_rules": [ { "entry_port": 80, "entry_protocol": "http", "target_port": 80, "target_protocol": "http" } ], "id": "4de7ac8b-495b-4884-9a69-1050c6793cd6", "ip": "104.131.186.241", "name": "my-load-balancer", "region": { "name": "New York 3", "slug": "nyc3" }, "status": "active", "tag": "" }, "timestamp": "2026-03-13T10:01:15.000000000Z", "type": 
"digitalocean.loadbalancer.created" } ``` ## Create Snapshot The Create Snapshot component creates a point-in-time snapshot of a DigitalOcean Droplet. ### Use Cases - **Backup**: Create a backup before performing risky operations on a droplet - **Image creation**: Create a custom image from an existing droplet for reuse - **Migration**: Snapshot a droplet before migrating to a different region or size ### Configuration - **Droplet**: The ID of the droplet to snapshot (required) - **Name**: A human-readable name for the snapshot (required) ### Output Returns the snapshot details including: - **id**: Snapshot ID - **name**: Snapshot name - **created_at**: When the snapshot was created - **resource_id**: The ID of the droplet that was snapshotted - **regions**: Regions where the snapshot is available - **min_disk_size**: Minimum disk size required to use this snapshot - **size_gigabytes**: Size of the snapshot in GB ### Example Output ```json { "data": { "created_at": "2026-03-13T13:35:43Z", "id": 220464921, "min_disk_size": 10, "name": "superplane-1773328904", "regions": [ "blr1" ], "resource_id": "98145763", "resource_type": "droplet", "size_gigabytes": 2.04 }, "timestamp": "2026-03-13T13:36:14.803060936Z", "type": "digitalocean.snapshot.created" } ``` ## Delete Alert Policy The Delete Alert Policy component permanently removes a monitoring alert policy from your DigitalOcean account. 
### Use Cases - **Cleanup**: Remove alert policies that are no longer needed - **Policy rotation**: Delete old policies as part of a replace workflow - **Automated teardown**: Remove monitoring policies when decommissioning environments ### Configuration - **Alert Policy**: The alert policy to delete (required, supports expressions) ### Output Returns information about the deleted policy: - **alertPolicyUuid**: The UUID of the alert policy that was deleted ### Important Notes - This operation is **permanent** and cannot be undone - If the policy does not exist (already deleted), the component completes successfully (idempotent) ### Example Output ```json { "data": { "alertPolicyUuid": "669adfc8-d72b-4d2d-80ed-bea78d6e1562" }, "timestamp": "2026-03-17T10:00:00Z", "type": "digitalocean.alertpolicy.deleted" } ``` ## Delete App The Delete App component removes a DigitalOcean App Platform application. ### Use Cases - **Cleanup**: Remove applications that are no longer needed - **Environment teardown**: Delete temporary or test app instances - **Resource management**: Free up resources by deleting unused apps ### Configuration - **App**: The app to delete (required) ### Output Returns confirmation of the deleted app including: - **appId**: The ID of the deleted app ### Notes - This operation is idempotent - deleting an already deleted app will succeed - All deployments and associated resources will be removed - This action cannot be undone ### Example Output ```json { "data": { "appId": "20e27025-f9c1-4da4-bfc4-00a13eb9ff42" }, "timestamp": "2026-03-24T05:59:24.092664924Z", "type": "digitalocean.app.deleted" } ``` ## Delete DNS Record The Delete DNS Record component permanently removes a DNS record from a DigitalOcean-managed domain. 
### Use Cases - **Cleanup**: Remove DNS records for decommissioned services - **Rotation**: Delete old records as part of a DNS rotation workflow - **Automated teardown**: Remove service discovery records when tearing down infrastructure ### Configuration - **Domain**: The DigitalOcean-managed domain containing the record (required) - **Record ID**: The ID of the DNS record to delete (required, supports expressions) ### Output Returns information about the deleted record: - **recordId**: The ID of the record that was deleted - **domain**: The domain the record belonged to ### Important Notes - This operation is **permanent** and cannot be undone - Deleting a record that does not exist is treated as a success (idempotent) - Record IDs can be obtained from the output of createDNSRecord or upsertDNSRecord ### Example Output ```json { "data": { "deleted": true, "domain": "example.com", "recordId": 12345678 }, "timestamp": "2026-03-13T10:05:00.000000000Z", "type": "digitalocean.dns.record.deleted" } ``` ## Delete Data Source The Delete Data Source component removes a data source from an existing knowledge base on the DigitalOcean Gradient AI Platform. ### How it works Deletes a single data source from a knowledge base. DigitalOcean automatically triggers a re-indexing job after every deletion to clean up stale embeddings from the OpenSearch database. The component waits for that job to complete before emitting the output. 
### Output Returns the deleted data source details: - **dataSourceUUID**: UUID of the deleted data source - **knowledgeBaseUUID**: UUID of the knowledge base - **knowledgeBaseName**: Name of the knowledge base ### Example Output ```json { "data": { "dataSourceUUID": "650d31b0-3338-11f1-b074-4e013e2bere4", "indexingJob": { "completedDataSources": 1, "finishedAt": "2026-04-09T10:29:39Z", "startedAt": "2026-04-09T10:28:58Z", "status": "INDEX_JOB_STATUS_COMPLETED", "totalDataSources": 1, "totalTokens": "" }, "knowledgeBaseName": "ecommerce-knowledge-base-v2", "knowledgeBaseUUID": "3f3d8984-32a2-11f1-b074-4e01dffdde4" }, "timestamp": "2026-04-09T10:29:59.190938313Z", "type": "digitalocean.data_source.deleted" } ``` ## Delete Database The Delete Database component permanently removes a database from a DigitalOcean Managed Database cluster. ### Use Cases - **Cleanup**: Remove databases that are no longer needed after a workflow completes - **Environment teardown**: Delete temporary or preview-environment databases - **Tenant offboarding**: Remove customer-specific databases during deprovisioning ### Configuration - **Database Cluster**: The managed database cluster containing the database (required) - **Database**: The database to delete (required) ### Output Returns information about the deleted database: - **name**: The deleted database name - **databaseClusterId**: The cluster UUID - **databaseClusterName**: The cluster name - **deleted**: Whether the delete request succeeded ### Important Notes - If you use custom token scopes, this action requires `database:delete` and `database:read` - Database management is not supported for Caching or Valkey clusters - Deleting a database that no longer exists is treated as a success ### Example Output ```json { "data": { "databaseClusterId": "9cc10173-e9ea-4176-9dbc-a4cee4c4ff30", "databaseClusterName": "primary-postgres", "deleted": true, "name": "app_db" }, "timestamp": "2026-03-27T09:20:00Z", "type": 
"digitalocean.database.deleted" } ``` ## Delete Droplet The Delete Droplet component permanently deletes a droplet from your DigitalOcean account. ### Use Cases - **Cleanup**: Remove temporary or test droplets after use - **Cost optimization**: Automatically tear down unused infrastructure - **Automated workflows**: Delete droplets as part of deployment rollback or cleanup processes - **Environment management**: Remove ephemeral environments after testing ### Configuration - **Droplet**: The droplet to delete (required, supports expressions) ### Output Returns information about the deleted droplet: - **dropletId**: The ID of the droplet that was deleted ### Important Notes - This operation is **permanent** and cannot be undone - All data on the droplet will be lost - The droplet will be shut down if it's running before deletion - Any snapshots of the droplet will remain in your account ### Example Output ```json { "data": { "dropletId": 557784760 }, "timestamp": "2026-03-12T21:25:45.688697002Z", "type": "digitalocean.droplet.deleted" } ``` ## Delete Knowledge Base The Delete Knowledge Base component removes a knowledge base from the DigitalOcean Gradient AI Platform. ### How it works Deletes the specified knowledge base. Optionally, you can also delete the associated OpenSearch database that stores the vector embeddings. 
### Use Cases - **Cleanup**: Remove knowledge bases that are no longer needed - **Resource management**: Free up resources by deleting unused knowledge bases and their databases - **Rotation**: Delete an old knowledge base after a new one has been verified and attached ### Configuration - **Knowledge Base**: The knowledge base to delete (required) - **Delete OpenSearch Database**: Whether to also delete the associated OpenSearch database (optional, defaults to off) ### Output Returns confirmation of the deletion including: - **knowledgeBaseUUID**: UUID of the deleted knowledge base - **databaseDeleted**: Whether the OpenSearch database was also deleted - **databaseId**: UUID of the deleted database (included when the database was deleted) - **databaseName**: Name of the deleted database (included when the database was deleted) ### Notes - If the knowledge base is currently attached to any agents, it will automatically be removed from those agents upon deletion. Consider using the Detach Knowledge Base component first if you need more control over the detachment process. - Deleting the OpenSearch database is irreversible and will remove all vector embeddings ### Example Output ```json { "data": { "databaseDeleted": true, "databaseId": "9cc10173-e9ea-4176-9dbc-a4cee4c4ff30", "databaseName": "my-knowledge-base-os", "knowledgeBaseUUID": "a1b2c3d4-0000-0000-0000-000000000001" }, "timestamp": "2025-01-01T00:00:00Z", "type": "digitalocean.knowledge_base.deleted" } ``` ## Delete Load Balancer The Delete Load Balancer component permanently deletes a load balancer from your DigitalOcean account. 
### Use Cases - **Cleanup**: Remove load balancers after decommissioning a service - **Cost optimization**: Automatically tear down unused load balancers - **Environment management**: Delete load balancers as part of environment teardown workflows ### Configuration - **Load Balancer**: The load balancer to delete (required, supports expressions) ### Output Returns information about the deleted load balancer: - **loadBalancerID**: The UUID of the load balancer that was deleted ### Important Notes - This operation is **permanent** and cannot be undone - Deleting a load balancer does not delete the targeted droplets - If the load balancer does not exist (404), the component emits success (idempotent) ### Example Output ```json { "data": { "loadBalancerID": "4de7ac8b-495b-4884-9a69-1050c6793cd6" }, "timestamp": "2026-03-13T10:05:30.000000000Z", "type": "digitalocean.loadbalancer.deleted" } ``` ## Delete Object The Delete Object component permanently removes an object from a DigitalOcean Spaces bucket. ### Prerequisites Spaces Access Key ID and Secret Access Key must be configured in the DigitalOcean integration settings. ### Configuration - **Bucket**: The Spaces bucket containing the object. The dropdown lists all buckets — the region is determined automatically. - **File Path**: The full path to the object within the bucket (e.g. reports/daily.csv). Supports expressions to reference a path from an upstream component. 
### Output - **bucket**: The bucket name - **filePath**: The path of the deleted object - **deleted**: Always `true` on success ### Notes - This operation is **permanent** and cannot be undone - The operation succeeds even if the object does not exist (idempotent) ### Use Cases - **Cleanup**: Remove temporary or processed files after a workflow completes - **Rotation**: Delete old reports or artifacts as part of a file rotation policy - **Pipeline teardown**: Remove objects created during a pipeline run ### Example Output ```json { "data": { "bucket": "my-company-assets", "deleted": true, "filePath": "reports/daily.csv" }, "timestamp": "2026-03-25T09:00:00Z", "type": "digitalocean.spaces.object.deleted" } ``` ## Delete Snapshot The Delete Snapshot component deletes a DigitalOcean snapshot image. ### Use Cases - **Cleanup**: Remove old snapshots to free up storage and reduce costs - **Lifecycle management**: Automatically delete snapshots after they are no longer needed - **Rotation**: Delete older snapshots as part of a snapshot rotation policy ### Configuration - **Snapshot**: The snapshot to delete (required) ### Output Returns confirmation of the deleted snapshot including: - **snapshotId**: The ID of the deleted snapshot - **deleted**: Confirmation that the snapshot was deleted ### Example Output ```json { "data": { "deleted": true, "snapshotId": "220431883" }, "timestamp": "2026-03-13T06:27:06.208635289Z", "type": "digitalocean.snapshot.deleted" } ``` ## Detach Knowledge Base The Detach Knowledge Base component removes a knowledge base from an existing Gradient AI agent. 
### Use Cases - **Rollback**: Remove a knowledge base that is causing poor agent responses - **Cleanup**: Detach an outdated knowledge base before attaching a freshly indexed one - **Rotation**: As part of a blue/green pipeline, detach the old knowledge base after the new one is verified ### Configuration - **Agent**: The agent to detach the knowledge base from (required) - **Knowledge Base**: The knowledge base to detach — only shows knowledge bases currently attached to the selected agent (required) ### Output Returns confirmation of the detachment including: - **agentUUID**: UUID of the agent - **knowledgeBaseUUID**: UUID of the detached knowledge base ### Example Output ```json { "data": { "agentUUID": "20cd8434-6ea1-11f0-bf8f-4e013e2ddde4", "knowledgeBaseUUID": "a1b2c3d4-0000-0000-0000-000000000001" }, "timestamp": "2025-01-01T00:00:00Z", "type": "digitalocean.knowledge_base.detached" } ``` ## Get Alert Policy The Get Alert Policy component retrieves the full details of a monitoring alert policy. ### Use Cases - **Policy inspection**: Verify the current configuration of an alert policy - **Conditional logic**: Check whether a policy is enabled before modifying it downstream - **Audit workflows**: Retrieve alert policy details as part of a compliance or reporting pipeline ### Configuration - **Alert Policy**: The alert policy to retrieve (required, supports expressions) ### Output Returns the alert policy object including: - **uuid**: Alert policy UUID - **description**: Human-readable description - **type**: Metric type being monitored (e.g. 
v1/insights/droplet/cpu) - **compare**: Comparison operator (GreaterThan/LessThan) - **value**: Threshold value - **window**: Evaluation window (5m, 10m, 30m, 1h) - **entities**: Scoped droplet IDs - **tags**: Scoped droplet tags - **enabled**: Whether the policy is active - **alerts**: Configured notification channels ### Example Output ```json { "data": { "alerts": { "email": [ "sammy@digitalocean.com" ] }, "compare": "GreaterThan", "description": "High CPU Usage", "enabled": true, "entities": [ "558899681" ], "tags": [], "type": "v1/insights/droplet/cpu", "uuid": "ffcaf816-f6a5-4b4a-b4c4-e84532755e82", "value": 20, "window": "5m" }, "timestamp": "2026-03-18T09:54:58.914731251Z", "type": "digitalocean.alertpolicy.fetched" } ``` ## Get App The Get App component retrieves detailed information about a specific DigitalOcean App Platform application. ### Use Cases - **Status checks**: Verify app state and deployment status before performing operations - **Information retrieval**: Get current app configuration, URLs, and deployment details - **Pre-flight validation**: Check app exists before operations like update or delete - **Monitoring**: Track app configuration, active deployments, and ingress URLs - **Integration workflows**: Retrieve app details for use in downstream workflow steps ### Configuration - **App ID**: The unique identifier of the app to retrieve (required) ### Output Returns the app object including: - **id**: The unique app ID - **name**: The app name - **default_ingress**: The default ingress URL - **live_url**: The live URL for the app - **region**: The region where the app is deployed - **active_deployment**: Information about the active deployment - **in_progress_deployment**: Information about any in-progress deployment - **spec**: Complete app specification including services, workers, jobs, static sites, databases, and configuration ### Notes - The app ID can be obtained from the output of the Create App component or from the DigitalOcean 
dashboard - The component returns the current state of the app, including all deployed components - Use this component to verify deployment status before performing updates or other operations ### Example Output ```json { "data": { "active_deployment": { "cause": "initial deployment", "created_at": "2026-03-24T10:17:51Z", "id": "a5ad056d-523a-48e2-9715-76db183f5a15", "phase": "ACTIVE", "progress": { "steps": [ { "ended_at": "2026-03-24T10:18:56.473548704Z", "name": "build", "started_at": "2026-03-24T10:17:57.360621633Z", "status": "SUCCESS", "steps": [ { "ended_at": "2026-03-24T10:18:23.527631658Z", "name": "initialize", "started_at": "2026-03-24T10:17:57.360716517Z", "status": "SUCCESS" }, { "ended_at": "2026-03-24T10:18:54.938686340Z", "name": "components", "started_at": "2026-03-24T10:18:23.527664326Z", "status": "SUCCESS", "steps": [ { "name": "my-app", "status": "SUCCESS" } ] } ] }, { "ended_at": "2026-03-24T10:19:06.689255599Z", "name": "deploy", "started_at": "2026-03-24T10:19:02.998040282Z", "status": "SUCCESS", "steps": [ { "ended_at": "2026-03-24T10:19:04.851886393Z", "name": "initialize", "started_at": "2026-03-24T10:19:02.998063884Z", "status": "SUCCESS" }, { "ended_at": "2026-03-24T10:19:04.851934987Z", "name": "components", "started_at": "2026-03-24T10:19:04.851916324Z", "status": "SUCCESS" }, { "ended_at": "2026-03-24T10:19:06.689152591Z", "name": "finalize", "started_at": "2026-03-24T10:19:04.852021129Z", "status": "SUCCESS" } ] } ], "success_steps": 5, "total_steps": 5 }, "spec": { "ingress": { "rules": [ { "component": { "name": "my-app" }, "match": { "path": { "prefix": "/" } } } ] }, "name": "my-app", "region": "blr", "static_sites": [ { "github": { "branch": "main", "deploy_on_push": true, "repo": "digitalocean/hello-world" }, "name": "my-app" } ] }, "static_sites": [ { "name": "my-app", "source_commit_hash": "8ac84464242f1b430147319deb28abb5a20049b8" } ], "updated_at": "2026-03-24T10:19:07Z" }, "created_at": "2026-03-24T10:17:51Z", 
"default_ingress": "https://my-app-ixj6x.ondigitalocean.app", "id": "eb0fc7fa-8294-48c5-8a48-77d47fc6c89e", "last_deployment_created_at": "2026-03-24T10:17:51Z", "live_domain": "my-app-ixj6x.ondigitalocean.app", "live_url": "https://my-app-ixj6x.ondigitalocean.app", "live_url_base": "https://my-app-ixj6x.ondigitalocean.app", "pending_deployment": { "id": "" }, "region": { "continent": "Asia", "data_centers": [ "blr1" ], "flag": "india", "label": "Bangalore", "slug": "blr" }, "spec": { "ingress": { "rules": [ { "component": { "name": "my-app" }, "match": { "path": { "prefix": "/" } } } ] }, "name": "my-app", "region": "blr", "static_sites": [ { "github": { "branch": "main", "deploy_on_push": true, "repo": "digitalocean/hello-world" }, "name": "my-app" } ] }, "updated_at": "2026-03-24T10:19:13Z" }, "timestamp": "2026-03-26T08:21:30.077395918Z", "type": "digitalocean.app.fetched" } ``` ## Get Cluster Configuration The Get Cluster Configuration component retrieves the active configuration for a DigitalOcean Managed Database cluster. 
### Use Cases - **Audit workflows**: Inspect the active cluster configuration for reporting or compliance checks - **Validation**: Compare the current cluster configuration before updates or maintenance - **Operational visibility**: Retrieve engine-specific settings that affect behavior and performance ### Configuration - **Database Cluster**: The managed database cluster to inspect (required) ### Output Returns the cluster configuration including: - **databaseClusterId**: The cluster UUID - **databaseClusterName**: The cluster name - **config**: The configuration object returned by the DigitalOcean API ### Important Notes - If you use custom token scopes, this action requires `database:read` - The keys inside `config` vary by database engine ### Example Output ```json { "data": { "config": { "autovacuum_naptime": 60, "backtrack_commit_timeout": 30, "default_toast_compression": "pglz", "idle_in_transaction_session_timeout": 0, "jit": true, "max_parallel_workers": 8 }, "databaseClusterId": "65b497a5-1674-4b1a-a122-01aebe761ef7", "databaseClusterName": "superplane-db-test" }, "timestamp": "2026-03-27T11:10:00.000000000Z", "type": "digitalocean.database.cluster.config.fetched" } ``` ## Get Database The Get Database component retrieves a managed database from a DigitalOcean cluster and enriches it with cluster context. 
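The "Validation" use case for Get Cluster Configuration, comparing the active configuration before updates or maintenance, can be sketched as a simple drift check over the `config` object. Keys vary by engine, so only the keys you manage are checked:

```python
# Compare a desired configuration against the active `config` object
# returned by Get Cluster Configuration and report keys that drifted.
def config_drift(desired: dict, active: dict) -> dict:
    return {key: {"desired": value, "active": active.get(key)}
            for key, value in desired.items()
            if active.get(key) != value}

# Values below are hypothetical, shaped like the example output above.
active = {"jit": True, "max_parallel_workers": 8}
desired = {"jit": False, "max_parallel_workers": 8}
```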
### Use Cases - **Routing decisions**: Inspect the database and cluster state before directing traffic or jobs - **Operational checks**: Review engine, region, and connection details before maintenance steps - **Audit workflows**: Retrieve the current database and cluster context for reporting or validation ### Configuration - **Database Cluster**: The managed database cluster containing the database (required) - **Database**: The database to retrieve (required) ### Output Returns the requested database enriched with cluster details, including: - **name**: The database name - **databaseClusterId**: The cluster UUID - **databaseClusterName**: The cluster name - **engine**: The cluster engine - **version**: The cluster engine version - **region**: The cluster region - **status**: The cluster status - **connection**: Connection information when available - **database**: The raw database object returned by the API ### Important Notes - If you use custom token scopes, this action requires `database:read` - Database management is not supported for Caching or Valkey clusters ### Example Output ```json { "data": { "connection": { "database": "defaultdb", "host": "superplane-db-test-do-user-1234567-0.j.db.ondigitalocean.com", "port": 25060, "ssl": true, "uri": "postgresql://doadmin@superplane-db-test-do-user-1234567-0.j.db.ondigitalocean.com:25060/defaultdb?sslmode=require", "user": "doadmin" }, "database": { "name": "app_db" }, "databaseClusterId": "65b497a5-1674-4b1a-a122-01aebe761ef7", "databaseClusterName": "superplane-db-test", "engine": "pg", "name": "app_db", "region": "nyc1", "status": "online", "version": "17" }, "timestamp": "2026-03-27T11:12:00.000000000Z", "type": "digitalocean.database.fetched" } ``` ## Get Database Cluster The Get Database Cluster component retrieves the details of an existing DigitalOcean Managed Database cluster. 
### Use Cases - **Status checks**: Verify a cluster is online before creating databases or users - **Information retrieval**: Fetch connection details, sizing, engine, and region information - **Pre-flight validation**: Confirm a cluster exists before downstream operations ### Configuration - **Database Cluster**: The managed database cluster to retrieve (required) ### Output Returns the database cluster including: - **id**: The cluster UUID - **name**: The cluster name - **engine**: The configured engine - **version**: The engine version - **region**: The cluster region - **size**: The node size slug - **num_nodes**: The number of nodes - **status**: The cluster status - **connection**: Connection information when available ### Important Notes - If you use custom token scopes, this action requires `database:read` - The returned connection information depends on the cluster type and provisioning state. ### Example Output ```json { "data": { "connection": { "host": "superplane-db-do-user-123456-0.j.db.ondigitalocean.com", "port": 25060, "ssl": true, "uri": "postgres://doadmin:[email protected]:25060/defaultdb?sslmode=require", "user": "doadmin" }, "created_at": "2026-03-27T13:00:00Z", "engine": "pg", "id": "65b497a5-1674-4b1a-a122-01aebe761ef7", "name": "superplane-db", "num_nodes": 1, "private_network_uuid": "7e6d2691-182b-4dd1-8452-529f88feb996", "region": "nyc1", "size": "db-s-1vcpu-1gb", "status": "online", "version": "18.0" }, "timestamp": "2026-03-27T13:05:00Z", "type": "digitalocean.database.cluster.fetched" } ``` ## Get Droplet The Get Droplet component retrieves detailed information about a specific droplet. 
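The `connection` fields returned by Get Database Cluster can be reassembled into a DSN when only the parts are stored, with the password injected separately (for example from a SuperPlane secret) so it never appears in workflow payloads. A sketch assuming a PostgreSQL-style URI with `sslmode=require`, as in the example output above:

```python
from urllib.parse import quote

# Build a PostgreSQL DSN from the `connection` fields returned by
# Get Database Cluster. The password is injected separately (e.g. from
# a secret) rather than read from the payload.
def build_dsn(conn: dict, password: str, database: str = "defaultdb") -> str:
    user = quote(conn["user"])
    return (f"postgresql://{user}:{quote(password)}"
            f"@{conn['host']}:{conn['port']}/{database}?sslmode=require")

# Hypothetical connection fields, shaped like the example output above.
conn = {"host": "superplane-db-do-user-123456-0.j.db.ondigitalocean.com",
        "port": 25060, "user": "doadmin", "ssl": True}
```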
### Use Cases - **Status checks**: Verify droplet state before performing operations - **Information retrieval**: Get current IP addresses, configuration, and status - **Pre-flight validation**: Check droplet exists before operations like snapshot or power management - **Monitoring**: Track droplet configuration and network details ### Configuration - **Droplet**: The droplet to retrieve (required, supports expressions) ### Output Returns the droplet object including: - **id**: Droplet ID - **name**: Droplet hostname - **status**: Current droplet status (new, active, off, archive) - **memory**: RAM in MB - **vcpus**: Number of virtual CPUs - **disk**: Disk size in GB - **region**: Region information - **image**: Image information - **size_slug**: Size identifier - **networks**: Network information including IP addresses - **tags**: Applied tags ### Example Output ```json { "data": { "disk": 25, "id": 557784760, "image": { "id": 220345895, "name": "superplane-1773328904", "slug": "" }, "memory": 1024, "name": "superplane-1773328904-s-1vcpu-1gb-nyc3-01", "networks": { "v4": [ { "ip_address": "192.0.2.1", "type": "public" }, { "ip_address": "10.108.0.3", "type": "private" } ] }, "region": { "name": "New York 3", "slug": "nyc3" }, "size_slug": "s-1vcpu-1gb", "status": "active", "tags": [], "vcpus": 1 }, "timestamp": "2026-03-12T21:13:32.946693411Z", "type": "digitalocean.droplet.fetched" } ``` ## Get Droplet Metrics The Get Droplet Metrics component retrieves CPU usage, memory utilization, and network bandwidth metrics for a droplet over a specified lookback window. > **Note:** Monitoring is only available for droplets that had monitoring enabled during creation. Droplets created without monitoring will not report metrics. 
### Use Cases - **Performance monitoring**: Sample current resource utilization before scaling decisions - **Incident investigation**: Pull recent metrics when responding to an alert - **Capacity planning**: Gather trend data to inform right-sizing of infrastructure - **Automated scaling**: Use metric outputs to conditionally trigger resize or power operations ### Configuration - **Droplet**: The droplet to fetch metrics for (required, supports expressions) - **Lookback Period**: How far back to retrieve metrics — 1h, 6h, 24h, 7d, or 14d (required) ### Output Returns a combined metrics payload with averaged values over the lookback window: - **dropletId**: The ID of the queried droplet - **start**: ISO 8601 timestamp of the start of the metrics window - **end**: ISO 8601 timestamp of the end of the metrics window - **lookbackPeriod**: The selected lookback period - **avgCpuUsagePercent**: Average CPU usage percentage over the window - **avgMemoryUsagePercent**: Average memory utilization percentage, computed from (total − available) / total × 100 - **avgPublicOutboundBandwidthMbps**: Average public outbound bandwidth in Mbps (as reported by the DigitalOcean API) - **avgPublicInboundBandwidthMbps**: Average public inbound bandwidth in Mbps (as reported by the DigitalOcean API) All metric values are rounded to two decimal places. 
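The memory figure is derived rather than reported directly. A worked sketch of the stated formula, (total − available) / total × 100 per data point, averaged over the window and rounded to two decimal places (the sample values are hypothetical):

```python
# Average memory utilization over a metrics window, following the
# stated formula: (total - available) / total * 100 per data point,
# then averaged and rounded to two decimal places.
def avg_memory_usage_percent(samples: list[tuple[float, float]]) -> float:
    # each sample is (total_bytes, available_bytes)
    points = [(total - available) / total * 100 for total, available in samples]
    return round(sum(points) / len(points), 2)

samples = [(1024.0, 672.0), (1024.0, 640.0)]  # hypothetical data points
```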
### Important Notes - Metrics are only available for droplets with the DigitalOcean Monitoring Agent installed - The Monitoring Agent is pre-installed on droplets using official DigitalOcean images created after 2018 - Data point resolution varies by window: shorter windows return finer-grained data ### Example Output ```json { "data": { "avgCpuUsagePercent": 0.18, "avgMemoryUsagePercent": 34.39, "avgPublicInboundBandwidthMbps": 0.36, "avgPublicOutboundBandwidthMbps": 0.15, "dropletId": "559378149", "end": "2026-03-19T08:36:34Z", "lookbackPeriod": "1h", "start": "2026-03-19T07:36:34Z" }, "timestamp": "2026-03-19T08:36:37.115389929Z", "type": "digitalocean.droplet.metrics" } ``` ## Get Knowledge Base The Get Knowledge Base component retrieves comprehensive information about an existing knowledge base on the DigitalOcean Gradient AI Platform. ### How it works Fetches the knowledge base details including its OpenSearch database, all attached data sources, and the latest indexing job status. 
### Use Cases - **Pre-flight checks**: Inspect a knowledge base before attaching it to an agent or running evaluations - **Health monitoring**: Check indexing status and data source count on a schedule - **Teardown workflows**: Fetch the database ID before deleting the knowledge base - **Auditing**: Verify region, embedding model, and data source configuration ### Output Returns the full knowledge base object including: - **uuid**, **name**, **status**, **region**, **tags** — core properties - **embeddingModelUUID**, **embeddingModelName** — embedding model details - **projectId**, **projectName** — associated project - **database** — OpenSearch database object with id, name, and status - **dataSources** — array of all attached data sources with type and source details - **lastIndexingJob** — full indexing job details: status, phase, totalTokens, data source progress, timing, and report availability - **createdAt**, **updatedAt** — timestamps ### Example Output ```json { "data": { "createdAt": "2025-01-01T00:00:00Z", "dataSources": [ { "chunkingAlgorithm": "CHUNKING_ALGORITHM_SECTION_BASED", "createdAt": "2025-01-01T00:00:00Z", "spacesBucket": "tor1/product-data", "type": "spaces", "updatedAt": "2025-06-01T00:00:00Z", "uuid": "a1b2c3d4-0000-0000-0000-000000000001" }, { "chunkingAlgorithm": "CHUNKING_ALGORITHM_SEMANTIC", "crawlingOption": "SCOPED", "createdAt": "2025-02-01T00:00:00Z", "type": "web", "updatedAt": "2025-06-01T00:00:00Z", "uuid": "a1b2c3d4-0000-0000-0000-000000000002", "webURL": "https://docs.example.com" } ], "database": { "id": "abf1055a-745d-4c24-a1db-1959ea819264", "name": "product-catalog-os", "status": "online" }, "databaseStatus": "ONLINE", "embeddingModelName": "GTE Large EN v1.5", "embeddingModelUUID": "05700391-7aa8-11ef-bf8f-4e013e2ddde4", "lastIndexingJob": { "completedDataSources": 2, "createdAt": "2025-06-01T00:00:00Z", "finishedAt": "2025-06-01T00:05:32Z", "isReportAvailable": true, "phase": "BATCH_JOB_PHASE_COMPLETE", "startedAt": 
"2025-06-01T00:00:00Z", "status": "INDEX_JOB_STATUS_COMPLETED", "totalDataSources": 2, "totalTokens": "12345", "uuid": "b2c3d4e5-0000-0000-0000-000000000001" }, "name": "product-catalog-v2", "projectId": "37455431-84bd-4fa2-94cf-e8486f8f8c5e", "projectName": "AI Agents", "region": "tor1", "tags": [ "production", "docs" ], "updatedAt": "2025-06-01T00:00:00Z", "uuid": "20cd8434-6ea1-11f0-bf8f-4e013e2ddde4" }, "timestamp": "2025-06-01T00:05:32Z", "type": "digitalocean.knowledge_base.fetched" } ``` ## Get Object The Get Object component retrieves an object and its metadata from a DigitalOcean Spaces bucket. ### Prerequisites Spaces Access Key ID and Secret Access Key must be configured in the DigitalOcean integration settings. The integration works without them for other components (Droplets, DNS, etc.), but they are required for any Spaces operation. ### Configuration - **Bucket**: The Spaces bucket to read from. The dropdown lists all buckets across all regions — the region is determined automatically. - **File Path**: The path to the object within the bucket (e.g. reports/daily.csv) - **Include Body**: Download the object content. Only supported for text-based content types (JSON, YAML, CSV, plain text, etc.) ### Output Channels - **Found**: The object exists — metadata, tags, and optional body are returned - **Not Found**: The object does not exist in the bucket (404) ### Output - **bucket**: The bucket name - **filePath**: The path to the object within the bucket - **endpoint**: The full Spaces URL of the object - **contentType**: MIME type of the object - **size**: Human-readable file size (e.g. 
1.23 MiB) - **lastModified**: When the object was last modified (RFC1123 format) - **eTag**: MD5 hash of the object content — changes when the file changes - **metadata**: Custom metadata key-value pairs set on the object (x-amz-meta-* headers) - **tags**: Key-value tags applied to the object - **body**: Object content as a string (only present for text-based content types when Include Body is enabled) ### Use Cases - **Config reading**: Fetch a JSON or YAML config file and use its contents in downstream workflow steps - **File existence check**: Verify a file exists before triggering a process — route to notFound if it is missing - **Change detection**: Compare ETag or LastModified with a previously stored value to detect updates - **Backup verification**: Confirm a backup file was written today by checking LastModified - **Tag-based routing**: Read object tags to drive workflow logic (e.g. status=ready) ### Example Output ```json { "data": { "bucket": "my-company-assets", "contentType": "text/csv", "eTag": "a1b2c3d4ef567890a1b2c3d4ef567890", "endpoint": "https://my-company-assets.fra1.digitaloceanspaces.com/reports/daily.csv", "filePath": "reports/daily.csv", "lastModified": "Wed, 25 Mar 2026 08:45:00 GMT", "metadata": { "env": "production", "uploaded-by": "pipeline" }, "size": "41.31 KiB", "tags": { "env": "production", "status": "ready" } }, "timestamp": "2026-03-25T09:00:00Z", "type": "digitalocean.spaces.object.fetched" } ``` ## Index Knowledge Base The Index Knowledge Base component triggers a new indexing job on an existing knowledge base and polls until it completes. ### How it works Starts an indexing job that re-processes all data sources attached to the knowledge base. The component polls the job status every 30 seconds until the indexing job finishes successfully or fails. 
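The poll-until-finished behavior described above can be sketched as a loop with a pluggable status fetcher. The 30-second interval and the completed-status constant follow the text and example output; `fetch_status` and the failure constant are assumptions for illustration:

```python
import time
from typing import Callable

# Terminal statuses: the completed constant appears in the example
# output; the failed constant is an assumption.
TERMINAL = {"INDEX_JOB_STATUS_COMPLETED", "INDEX_JOB_STATUS_FAILED"}

# Poll an indexing job until it reaches a terminal status, mirroring
# the component's 30-second polling behavior. `fetch_status` is a
# hypothetical callable returning the current job status string.
def wait_for_indexing(fetch_status: Callable[[], str],
                      interval_seconds: float = 30.0,
                      max_polls: int = 120) -> str:
    for _ in range(max_polls):
        status = fetch_status()
        if status in TERMINAL:
            return status
        time.sleep(interval_seconds)
    raise TimeoutError("indexing job did not finish in time")
```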
### Use Cases - **Content refresh**: Re-index a knowledge base after updating files in a Spaces bucket or changing a website - **Scheduled re-indexing**: Combine with a Schedule trigger to re-index on a regular cadence (e.g. nightly) - **Pipeline orchestration**: Re-index after an upstream component adds or modifies data sources ### Output Returns the completed indexing job details: - **knowledgeBaseUUID**: UUID of the knowledge base - **knowledgeBaseName**: Name of the knowledge base - **jobUUID**: UUID of the indexing job - **status**: Final job status (e.g. INDEX_JOB_STATUS_COMPLETED) - **phase**: Final job phase (e.g. BATCH_JOB_PHASE_SUCCEEDED) - **totalTokens**: Total tokens consumed by the indexing job - **completedDataSources**: Number of data sources that finished indexing - **totalDataSources**: Total number of data sources - **startedAt**, **finishedAt**: Timing information - **isReportAvailable**: Whether a detailed indexing report is available ### Example Output ```json { "data": { "completedDataSources": 2, "finishedAt": "2025-06-01T00:05:32Z", "isReportAvailable": true, "jobUUID": "b2c3d4e5-0000-0000-0000-000000000001", "knowledgeBaseName": "product-catalog-v2", "knowledgeBaseUUID": "20cd8434-6ea1-11f0-bf8f-4e013e2ddde4", "phase": "BATCH_JOB_PHASE_SUCCEEDED", "startedAt": "2025-06-01T00:00:00Z", "status": "INDEX_JOB_STATUS_COMPLETED", "totalDataSources": 2, "totalTokens": "1500" }, "timestamp": "2025-06-01T00:05:32Z", "type": "digitalocean.knowledge_base.indexed" } ``` ## Manage Droplet Power The Manage Droplet Power component performs power management operations on a droplet. 
### Use Cases - **Automated restarts**: Reboot droplets on a schedule or in response to alerts - **Cost optimization**: Power off droplets during non-business hours - **Maintenance workflows**: Shut down droplets before updates, power on after completion - **Recovery procedures**: Power cycle droplets experiencing issues ### Configuration - **Droplet**: The droplet to manage (required, supports expressions) - **Operation**: The power operation to perform (required): - **power_on**: Power on a powered-off droplet - **power_off**: Power off a running droplet (forced shutdown) - **shutdown**: Gracefully shut down a running droplet - **reboot**: Gracefully reboot a running droplet - **power_cycle**: Power cycle a droplet (forced reboot) ### Output Returns the action result including: - **id**: Action ID - **status**: Final action status (completed or errored) - **type**: Type of action performed - **started_at**: When the action started - **completed_at**: When the action completed - **resource_id**: Droplet ID - **region**: Region slug ### Important Notes - **power_off** and **power_cycle** are forced operations and may cause data loss - **shutdown** and **reboot** are graceful and wait for the OS to complete the operation - The component waits for the action to complete before emitting - Actions may take several minutes depending on the droplet state ### Example Output ```json { "data": { "completed_at": "2026-03-12T22:27:14Z", "id": 3087531973, "region_slug": "fra1", "resource_id": 557861237, "resource_type": "droplet", "started_at": "2026-03-12T22:27:07Z", "status": "completed", "type": "power_off" }, "timestamp": "2026-03-12T22:27:20.613332846Z", "type": "digitalocean.droplet.power.power_off" } ``` ## Put Object The Put Object component uploads a text-based object to a DigitalOcean Spaces bucket. ### Prerequisites Spaces Access Key ID and Secret Access Key must be configured in the DigitalOcean integration settings.
### Configuration - **Bucket**: The Spaces bucket to upload to. The dropdown lists all buckets — the region is determined automatically. - **File Path**: The full path including file name and extension (e.g. reports/2024/daily.csv). The content type is detected automatically from the extension. - **Body**: The text content to upload. Supports expressions to pass content from upstream components (e.g. `{{ $['HTTP Request'].body }}`). Only text-based formats are supported (JSON, CSV, YAML, plain text, XML, etc.) - **ACL**: Access control — `private` (default) restricts access to the owner, `public-read` makes the object publicly accessible. - **Metadata**: Optional key-value pairs stored as object metadata (x-amz-meta-* headers). Supports expressions. - **Tags**: Optional key-value tags applied to the object. Can be changed later without re-uploading. Supports expressions. ### Output - **bucket**: The bucket name - **filePath**: The path to the uploaded object - **endpoint**: The full Spaces URL of the object - **eTag**: MD5 hash of the uploaded content - **contentType**: Detected MIME type based on file extension - **size**: Human-readable size of the uploaded content - **metadata**: Metadata set on the object (only present if metadata was provided) - **tags**: Tags set on the object (only present if tags were provided) ### Use Cases - **Config publishing**: Upload a generated JSON or YAML config file to Spaces for downstream services to consume - **Report storage**: Save a CSV or text report generated by an upstream component - **Object copy**: Combine with Get Object to copy an object across buckets, preserving metadata and tags ### Example Output ```json { "data": { "bucket": "my-company-assets", "contentType": "text/csv", "eTag": "a1b2c3d4ef567890a1b2c3d4ef567890", "endpoint": "https://my-company-assets.fra1.digitaloceanspaces.com/reports/daily.csv", "filePath": "reports/daily.csv", "metadata": { "uploaded-by": "pipeline" }, "size": "128 B", "tags": { "env":
"production" } }, "timestamp": "2026-03-25T09:00:00Z", "type": "digitalocean.spaces.object.uploaded" } ``` ## Run Evaluation The Run Evaluation component triggers a Gradient AI evaluation test case against an agent, waits for it to complete, and reports whether the agent passed or failed. ### How it works Runs a pre-configured evaluation test case against the selected agent. The test case already defines the prompts, metrics, and pass/fail thresholds. The component polls until the evaluation finishes, then fetches the results and routes to the appropriate output channel. ### Use Cases - **Blue/green deployments**: Evaluate a staging agent before promoting it to production - **Regression testing**: Automatically verify agent quality after knowledge base or configuration changes - **Continuous validation**: Schedule periodic evaluations to detect quality drift ### Configuration - **Test Case**: A pre-configured evaluation test case with prompts, metrics, and thresholds (required) - **Agent**: The agent to evaluate (required) - **Run Name**: A name for this evaluation run, visible in the DigitalOcean console (required, max 64 characters). Supports expressions for dynamic naming. 
### Output Channels - **Passed**: The evaluation completed and the agent met all pass criteria defined in the test case - **Failed**: The evaluation completed but the agent did not meet the pass criteria, or the evaluation run itself errored ### Output Returns the evaluation results including: - **evaluationRunUUID**: UUID of the evaluation run - **testCaseUUID / testCaseName**: The test case that was run - **agentUUID / agentName**: The agent that was evaluated - **passed**: Whether the agent passed the evaluation - **status**: Final status of the evaluation run - **starMetric**: The primary metric result (name, numberValue, stringValue) - **runLevelMetrics**: All run-level metric results (name, numberValue, stringValue) - **prompts**: Per-prompt results including input, output, ground truth, and per-prompt metric scores - **startedAt / finishedAt**: Timing information - **errorDescription**: Present on the Failed channel if the run itself errored ### Notes - The evaluation typically takes 1–5 minutes depending on the number of prompts and complexity - The component polls every 30 seconds until completion - If the evaluation run itself fails (API error, timeout, etc.), the result is emitted to the Failed channel with the error description ### Example Output ```json { "data": { "agentName": "support-agent", "agentUUID": "7d5c762a-2e66-11f1-b074-4e013e2ddde4", "evaluationRunUUID": "ba42b577-9dab-40a9-a375-315a5be1922e", "finishedAt": "2025-01-01T00:03:44Z", "passed": true, "prompts": [ { "groundTruth": "A Droplet is a virtual machine that runs on DigitalOcean's cloud infrastructure.", "input": "What is a Droplet?", "metrics": [ { "metricName": "Correctness (general hallucinations)", "numberValue": 100, "stringValue": "" }, { "metricName": "Retrieved context relevance", "numberValue": 0, "stringValue": "" } ], "output": "A Droplet is a type of virtual private server (VPS) provided by a cloud platform." 
}, { "groundTruth": "Droplets have flexible pricing based on instance type, region, and options you select.", "input": "What is the pricing for Droplet?", "metrics": [ { "metricName": "Correctness (general hallucinations)", "numberValue": 66.67, "stringValue": "" }, { "metricName": "Retrieved context relevance", "numberValue": 0, "stringValue": "" } ], "output": "The pricing for Droplet starts at $5/month for a basic plan." } ], "runLevelMetrics": [ { "metricName": "Retrieved context relevance", "numberValue": 0, "stringValue": "" }, { "metricName": "Response-context completeness", "numberValue": 0, "stringValue": "" }, { "metricName": "Correctness (general hallucinations)", "numberValue": 93.33, "stringValue": "" } ], "starMetric": { "metricName": "Correctness (general hallucinations)", "numberValue": 93.33, "stringValue": "" }, "startedAt": "2025-01-01T00:00:00Z", "status": "successful", "testCaseName": "Product Knowledge Baseline", "testCaseUUID": "c6d75370-2f3e-11f1-b074-4e013e2ddde4" }, "timestamp": "2025-01-01T00:03:44Z", "type": "digitalocean.evaluation.passed" } ``` ## Update Alert Policy The Update Alert Policy component modifies an existing monitoring alert policy with new settings. > **Note:** Monitoring is only available for droplets that had monitoring enabled during creation. Droplets created without monitoring will not report metrics or trigger alerts. 
### Use Cases - **Threshold tuning**: Adjust alert thresholds in response to changing baselines or scaling events - **Enable/disable policies**: Toggle alert policies on or off as part of maintenance windows or incident management - **Notification changes**: Update notification channels (email or Slack) without recreating the policy - **Automated policy management**: Programmatically adjust alert policies as part of infrastructure workflows ### Configuration - **Alert Policy**: The alert policy to update (required, supports expressions) - **Description**: Human-readable name for the alert policy (required) - **Metric Type**: The droplet metric to monitor, such as CPU Usage or Memory Usage (required) - **Comparison**: Alert when the value is GreaterThan or LessThan the threshold (required) - **Threshold Value**: The numeric threshold that triggers the alert (required) - **Evaluation Window**: The rolling time window over which the metric is averaged (required) - **Droplets**: Specific droplets to scope the policy to (optional) - **Tags**: Monitor all droplets with matching tags (optional) - **Enabled**: Whether the alert policy is active (default: true) - **Email Notifications**: Email addresses to notify when the alert fires (optional) - **Slack Channel**: Slack channel to post alerts to, e.g. 
#alerts (optional) - **Slack Webhook URL**: Incoming webhook URL for the Slack workspace (required when Slack Channel is set) ### Output Returns the updated alert policy including: - **uuid**: Alert policy UUID - **description**: Human-readable description - **type**: Metric type being monitored - **compare**: Comparison operator (GreaterThan/LessThan) - **value**: Threshold value - **window**: Evaluation window - **enabled**: Whether the policy is active - **alerts**: Configured notification channels (email and/or Slack) ### Important Notes - The update operation replaces the entire alert policy — all fields must be provided, not just the ones being changed - At least one notification channel (email or Slack) is required - **Slack Channel** and **Slack Webhook URL** must be provided together - Scoping by **Droplets** and **Tags** is independent — you can use either, both, or neither (in which case the policy applies to all droplets) ### Example Output ```json { "data": { "alerts": { "email": [ "sammy@digitalocean.com" ] }, "compare": "GreaterThan", "description": "High CPU Usage", "enabled": true, "entities": [ "558899681" ], "tags": [], "type": "v1/insights/droplet/cpu", "uuid": "ffcaf816-f6a5-4b4a-b4c4-e84532755e82", "value": 80, "window": "5m" }, "timestamp": "2026-03-18T09:29:00.308296519Z", "type": "digitalocean.alertpolicy.updated" } ``` ## Update App The Update App component modifies an existing DigitalOcean App Platform application.
### Use Cases - **Update configuration**: Change app settings like environment variables, branch, build commands, and more - **Rename apps**: Update the app name - **Migrate regions**: Move the app to a different region - **Inject secrets**: Add or update environment variables such as database connection strings - **Switch branches**: Change the deployed branch without recreating the app - **Scale resources**: Adjust instance size and count for services, workers, and jobs - **Configure networking**: Update ingress paths, CORS settings, and VPC connections - **Manage databases**: Add or update database attachments (dev or managed) ### Configuration - **App**: The app to update (required) - **Name**: Update the app name (optional) - **Region**: Update the region the app is deployed in (optional) - **Branch**: The branch to deploy from (optional, applies to all components' source providers) - **Deploy on Push**: Toggle automatic deployment when code is pushed to the branch - **Environment Slug**: Update the runtime environment/buildpack - **Build Command**: Update the build command - **Run Command**: Update the run command (services, workers, jobs) - **Source Directory**: Update the source directory path - **HTTP Port**: Update the service listening port - **Instance Size**: Update the instance size slug - **Instance Count**: Update the number of instances - **Output Directory**: Update the static site output directory - **Index/Error/Catchall Document**: Update static site document settings - **Environment Variables**: Key-value pairs to add or update (merges with existing) #### Ingress Configuration - **Ingress Path**: Update the path prefix for routing traffic - **CORS Allow Origins**: Update allowed origins for Cross-Origin Resource Sharing - **CORS Allow Methods**: Update HTTP methods allowed for CORS requests #### Database Configuration - **Add Database**: Attach a new database to the app - **Database Component Name, Engine, Version**: Configure the database - 
**Use Managed Database**: Connect to an existing managed database cluster #### VPC Configuration - **VPC**: Update the VPC ID for the app ### Output Returns the updated app including: - **id**: The unique app ID - **name**: The app name - **region**: The region where the app is deployed - **live_url**: The live URL for the app - **default_ingress**: The default ingress URL - **active_deployment**: Information about the updated deployment ### Notes - Environment variables are merged with existing ones (not replaced) - Build/runtime settings are applied to all matching components - Updating an app triggers a new deployment - The component emits an output once the deployment reaches ACTIVE status - If the deployment fails, the component will report the failure - Dev databases are free and suitable for development; use managed databases for production ### Example Output ```json { "data": { "defaultIngress": "https://my-app-22-b6v8c.ondigitalocean.app", "id": "6d8abe1c-7cc5-4db3-b1aa-d9cdd1c127e7", "liveURL": "https://my-app-22-b6v8c.ondigitalocean.app", "name": "my-app-22", "region": { "continent": "Asia", "data_centers": [ "blr1" ], "flag": "india", "label": "Bangalore", "slug": "blr" } }, "timestamp": "2026-03-24T07:06:01.044139416Z", "type": "digitalocean.app.updated" } ``` ## Upsert DNS Record The Upsert DNS Record component idempotently creates or updates a DNS record for a DigitalOcean-managed domain. It first looks up existing records with the same name and type. If a match is found it updates the record in-place; otherwise it creates a new one. ### Use Cases - **Idempotent provisioning**: Safely run DNS setup steps multiple times without creating duplicates - **IP updates**: Keep A/AAAA records in sync with changing IP addresses - **Dynamic configuration**: Update TXT records (e.g. 
SPF, DKIM) as part of automated workflows ### Configuration - **Domain**: The DigitalOcean-managed domain to manage the record in (required) - **Type**: The DNS record type (required): A, AAAA, CNAME, MX, NS, TXT, SRV, CAA - **Name**: The subdomain name for the record (required, use @ for root) - **Data**: The record value, e.g. an IP address or hostname (required, supports expressions) - **TTL**: Time-to-live in seconds (optional, defaults to 1800) - **Priority**: Record priority for MX/SRV records (optional) - **Port**: Port number for SRV records (optional) - **Weight**: Weight for SRV records (optional) ### Output Returns the created or updated DNS record including: - **id**: Record ID - **type**: Record type - **name**: Subdomain name - **data**: Record value - **ttl**: Time-to-live - **priority**: Priority (for MX/SRV) - **port**: Port (for SRV) - **weight**: Weight (for SRV) ### Example Output ```json { "data": { "data": "192.0.2.2", "id": 12345678, "name": "www", "port": null, "priority": null, "ttl": 1800, "type": "A", "weight": null }, "timestamp": "2026-03-13T10:10:00.000000000Z", "type": "digitalocean.dns.record.upserted" } ``` #### Discord Source URL: https://docs.superplane.com/components/discord Send messages to Discord channels and fetch mentions import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Actions ## Instructions To set up Discord integration: 1. Go to the **Discord Developer Portal** (https://discord.com/developers/applications) 2. Click **New Application** and give it a name 3. Go to **OAuth2** → **URL Generator**: - Under **Scopes**, select **bot** - Under **Bot Permissions**, select **View Channels**, **Send Messages**, and **Read Message History** - Copy the generated URL and open it in a new tab to invite the bot to your server 4. Go to the **Bot** section: - Click **Add Bot** (if not already added) - Uncheck the **Public Bot** option - Under **Token**, click **Reset Token** then **Copy** to get your bot token 5. 
Paste the **Bot Token** in the **Bot Token** field below ## Get Last Mention The Get Last Mention component fetches recent messages from a Discord channel and returns the latest one that mentions your bot. ### Use Cases - **Command polling**: Retrieve the most recent mention command in a channel - **Manual workflows**: Pull latest bot mention on demand in a workflow step - **Mention auditing**: Inspect the latest mention payload before replying ### Configuration - **Channel**: Discord channel to search for mentions - **Since**: Optional date string lower-bound for mentions (only mentions at or after this time are considered). Supports expressions. Accepted formats include ISO 8601 (recommended) and Go's default timestamp format (e.g. 2026-03-16 04:17:08.750328135 +0000 UTC). ### Output The payload includes: - **channel_id**: Channel queried - **mention**: Full message payload for the latest bot mention (when found) Output channels: - **found**: Emitted when a matching mention is found - **notFound**: Emitted when no matching mention is found ### Notes - Requires the bot permission **Read Message History** in the selected channel - Only non-bot-authored messages are considered - Datetimes without timezone in **Since** are interpreted as UTC ### Example Output ```json { "data": { "channel_id": "1381731186335651880", "mention": { "author": { "bot": false, "id": "98432147091234567", "username": "pedro" }, "channel_id": "1381731186335651880", "content": "\u003c@1372980427293554709\u003e can you run /deploy?", "guild_id": "1381731029892325376", "id": "1381731283190517770", "mentions": [ { "bot": true, "id": "1372980427293554709", "username": "superplane-bot" } ], "timestamp": "2026-03-10T15:04:05.000Z" } }, "timestamp": "2026-03-10T15:04:05.000Z", "type": "discord.getLastMention.result" } ``` ## Send Text Message The Send Text Message component sends a message to a Discord channel. 
### Use Cases - **Notifications**: Send notifications about workflow events or system status - **Alerts**: Alert teams about important events or errors - **Updates**: Provide status updates on long-running processes ### Configuration - **Channel**: Select the Discord channel to send the message to - **Content**: Plain text message content (max 2000 characters) - **Embed Title**: Optional title for a rich embed - **Embed Description**: Optional description for a rich embed - **Embed Color**: Hex color code for the embed (e.g., #5865F2) - **Embed URL**: Optional URL to link from the embed title ### Output Returns metadata about the sent message including message ID, channel ID, and author information. ### Notes - Either content or embed (title/description) must be provided - The Discord bot must be installed and have permission to post to the selected channel - Supports Discord's rich embed formatting for visually appealing messages ### Example Output ```json { "data": { "author": { "bot": true, "id": "1111111111111111111", "username": "Webhook" }, "channel_id": "9876543210987654321", "content": "Hello from SuperPlane", "id": "1234567890123456789", "timestamp": "2026-01-16T12:00:00.000Z" }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "discord.message.sent" } ``` #### DockerHub Source URL: https://docs.superplane.com/components/dockerhub Manage and react to DockerHub repositories and tags import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## Instructions To generate a DockerHub access token: - Go to **DockerHub** → **Account Settings** → **Personal Access Tokens** - Generate a new token - **Copy the token**, and enter your DockerHub username and the token below ## On Image Push The On Image Push trigger starts a workflow execution when an image tag is pushed to DockerHub. 
### Use Cases - **Build pipelines**: Trigger builds and deployments on container pushes - **Release workflows**: Promote artifacts when a new tag is published - **Security automation**: Kick off scans or alerts for newly pushed images ### Configuration - **Repository**: DockerHub repository name, in the format of `namespace/name` - **Tags**: Optional filters for image tags (for example: `latest` or `^v[0-9]+`) ### Webhook Setup This trigger generates a webhook URL in SuperPlane. Add that URL as a DockerHub webhook for the selected repository so DockerHub can deliver push events. ### Example Data ```json { "data": { "callback_url": "https://hub.docker.com/u/superplane/demo/hook/abcd/", "push_data": { "pushed_at": 1736400000, "pusher": "superplane-bot", "tag": "v1.2.3" }, "repository": { "description": "Demo image for SuperPlane workflows", "is_private": false, "name": "demo", "namespace": "superplane", "pull_count": 3456, "repo_name": "superplane/demo", "repo_url": "https://hub.docker.com/r/superplane/demo", "star_count": 12, "status": "Active" } }, "timestamp": "2026-02-03T12:00:00Z", "type": "dockerhub.image.push" } ``` ## Get Image Tag The Get Image Tag component retrieves metadata for a DockerHub image tag. 
### Use Cases - **Release automation**: Fetch tag metadata for deployments - **Audit trails**: Resolve tag details for traceability - **Insights**: Inspect image sizes, digests, and last pushed times ### Configuration - **Repository**: DockerHub repository name, in the format of `namespace/name` - **Tag**: Image tag to retrieve (for example: `latest` or `v1.2.3`) ### Example Output ```json { "data": { "full_size": 52837442, "id": 123456, "images": [ { "architecture": "amd64", "digest": "sha256:fe12ab34cd56ef78ab90cd12ef34ab56cd78ef90ab12cd34ef56ab78cd90ef12", "last_pulled": "2025-01-06T11:02:10.123456Z", "last_pushed": "2025-01-05T21:06:53.506400Z", "os": "linux", "size": 52837442, "status": "active" } ], "last_updated": "2025-01-05T21:06:53.506400Z", "last_updater": 1234, "last_updater_username": "superplane-bot", "name": "latest", "repository": 98765, "status": "active", "tag_last_pulled": "2025-01-06T11:02:10.123456Z", "tag_last_pushed": "2025-01-05T21:06:53.506400Z", "v2": "true" }, "timestamp": "2026-02-03T12:00:00Z", "type": "dockerhub.tag" } ``` #### Elastic Source URL: https://docs.superplane.com/components/elastic Index documents into Elasticsearch and receive Kibana webhooks import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## Instructions To connect Elastic to SuperPlane: 1. Paste your **Elasticsearch URL** and **Kibana URL**. In Elastic Cloud, open https://cloud.elastic.co/home and copy both endpoints from the same deployment under the manage section. 2. In Settings, create an API key and paste it into SuperPlane. The key must be able to access Elasticsearch, Kibana cases, and Kibana connectors. ## When Alert Fires The When Alert Fires trigger starts a workflow execution when a Kibana alert rule fires via a webhook connector. ### Shared Connector SuperPlane creates **one Kibana Webhook connector per integration**, shared across all triggers that use the same Kibana instance. 
Each incoming request is routed to the correct trigger using the `eventType` field in the request body — this trigger only processes requests where `eventType` is `"alert_fired"`. Requests intended for other trigger types (e.g. `"document_indexed"`) are silently ignored. ### Setup 1. Select the Kibana alert rule in SuperPlane and save the trigger. 2. SuperPlane automatically creates or reuses the shared Kibana Webhook connector and attaches it to the selected rule if it is missing. This provisioning happens when the live version is published. Autosave on a draft version does not create the connector. #### Kibana action body SuperPlane configures the rule action body with these fields: ```json { "eventType": "alert_fired", "ruleId": "{{rule.id}}", "ruleName": "{{rule.name}}", "spaceId": "{{rule.spaceId}}", "tags": {{rule.tags}}, "severity": "{{context.severity}}", "status": "{{rule.status}}" } ``` The `eventType` field is required for routing. Kibana substitutes `{{rule.id}}` and `{{rule.name}}` at delivery time. Fields omitted from the body will not be filterable in SuperPlane. ### Filtering Select at least one **Rule**. Additional filter fields are optional. When multiple values are provided in a list, any value matching is sufficient (OR). All active filter types must match simultaneously (AND across types). **Rule ID** is the most reliable selector because rule names are user-editable. Use it when you need precise per-rule routing. ### Webhook Verification SuperPlane generates a random signing secret and configures the Kibana connector to include it on every request. Requests without the correct secret are rejected automatically. ### Event Data Each received alert emits the parsed JSON body sent by Kibana directly as the event data. Use the workflow event timestamp to know when SuperPlane received it. 
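The shared-connector routing described above can be sketched as a small dispatcher. This is a hedged illustration only: the handler functions and return values are assumptions, not SuperPlane internals; only the `eventType` routing rule comes from the docs.

```python
from typing import Any, Callable, Dict, Optional

# Hypothetical dispatcher for a shared webhook endpoint. Each trigger
# type registers a handler keyed by the eventType it expects; requests
# with an unknown or missing eventType are silently ignored.
handlers: Dict[str, Callable[[dict], str]] = {
    "alert_fired": lambda body: f"alert:{body.get('ruleId')}",
    "document_indexed": lambda body: f"doc:{body.get('routeKey')}",
}

def route(body: Dict[str, Any]) -> Optional[str]:
    handler = handlers.get(body.get("eventType", ""))
    return handler(body) if handler else None  # None = silently ignored

print(route({"eventType": "alert_fired", "ruleId": "abc-123"}))  # alert:abc-123
print(route({"eventType": "case_status_changed"}))               # None (no handler here)
```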
### Example Data ```json { "data": { "context": { "message": "Error rate exceeded threshold: 15%", "threshold": 10, "value": 15 }, "eventType": "alert_fired", "ruleId": "abc-123", "ruleName": "High error rate detected", "severity": "critical", "spaceId": "default", "status": "active", "tags": [ "infrastructure", "prod" ] }, "timestamp": "2026-03-12T09:00:00Z", "type": "elastic.alert" } ``` ## When Case Status Changes The When Case Status Changes trigger starts a workflow execution when a Kibana Security case changes status. ### Shared Connector SuperPlane creates **one Kibana Webhook connector per integration**, shared across Elastic triggers that use the same Kibana instance. Each incoming request is routed to the correct trigger instance using two fields in the request body: - `eventType`: must be `"case_status_changed"`. - `routeKey`: a unique ID assigned per trigger node so multiple case-status triggers can coexist safely. ### How it works 1. When the trigger is saved, SuperPlane creates or reuses the shared Kibana Webhook connector. 2. SuperPlane automatically provisions a Kibana **Elasticsearch query** rule against `.kibana_alerting_cases` using `cases.updated_at` as the time field. 3. Every minute, that Kibana rule checks for case updates in the current window and fires the shared connector when matches are found. 4. SuperPlane receives the webhook, verifies the secret, validates the routing fields, then queries Kibana for cases updated since the stored checkpoint. 5. SuperPlane compares each returned case's current status to the last status stored in trigger metadata and only emits when the value changed. 6. SuperPlane emits one `elastic.case.status.changed` event per matching case whose status actually changed. Provisioning happens when the live version is published. Autosave on a draft version does not create the connector or rule. ### Configuration - **Cases**: Select one or more specific cases to monitor.
- **Statuses** *(optional)*: Only fire when a case has one of these statuses. Leave empty to fire for any case update. - **Severities** *(optional)*: Only fire for cases with one of these severities. Leave empty to accept all severities. - **Tags** *(optional)*: Only fire for cases that include at least one tag matching any of these predicates. Leave empty to accept all cases. ### Event Data The trigger emits the full case details including id, title, status, severity, version, tags, description, and timestamps. ### Example Data ```json { "data": { "createdAt": "2026-03-12T09:00:00.000Z", "description": "Elevated error rate detected in production.", "id": "3c0a2b10-4e5f-11ee-be56-0242ac120002", "severity": "high", "status": "in-progress", "tags": [ "production", "incident" ], "title": "Production incident", "updatedAt": "2026-03-12T10:00:00.000Z", "version": "WzE3LDFd" }, "timestamp": "2026-03-12T10:00:00Z", "type": "elastic.case.status.changed" } ``` ## On Document Indexed The On Document Indexed trigger starts a workflow execution when a new document is indexed into an Elasticsearch index. ### Shared Connector SuperPlane creates **one Kibana Webhook connector per integration**, shared across all triggers that use the same Kibana instance. Each incoming request is routed to the correct trigger instance using two fields in the request body: - `eventType`: must be `"document_indexed"` — requests with any other value are silently ignored, allowing the shared connector to serve both this trigger and others (e.g. When Alert Fires). - `routeKey`: a unique ID assigned per trigger node — allows multiple On Document Indexed nodes on the same canvas to each react only to their own Kibana rule. ### How it works 1. When the trigger is saved, SuperPlane creates or reuses the shared Kibana Webhook connector and provisions a Kibana Elasticsearch query rule for the configured index. 2. 
Every minute, the rule checks for documents with an `@timestamp` value within the current window. When matches are found, Kibana fires the connector. 3. SuperPlane receives the webhook, queries Elasticsearch for all documents newer than its stored checkpoint, and emits one event per document. Provisioning happens when the live version is published. Autosave on a draft version does not create the connector or rule. ### Configuration - **Index**: The Elasticsearch index to monitor for new documents. > **Note**: This trigger requires an `@timestamp` field mapped as `date` on indexed documents. Documents without that field will be missed. To ensure all documents are captured, configure an ingest pipeline on the index to auto-populate the field if absent: > ```json > { "set": { "field": "@timestamp", "value": "{{{_ingest.timestamp}}}", "override": false } } > ``` ### Webhook Verification SuperPlane generates a random signing secret and configures the Kibana connector to include it on every request. Requests without the correct secret are rejected automatically. ### Event Data The webhook acts as a signal. When it fires, SuperPlane queries Elasticsearch for documents newer than the stored checkpoint and emits one event per document containing its ID, index, and full source. ### Example Data ```json { "data": { "id": "doc-1", "index": "workflow-audit", "source": { "@timestamp": "2026-03-12T09:00:00Z", "message": "deployment started", "service": "api" } }, "timestamp": "2026-03-12T09:00:00Z", "type": "elastic.document.indexed" } ``` ## Create Case The Create Case component opens a new case in Kibana Security. 
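Under the hood this maps onto Kibana's Cases API. The sketch below shows what a create-case payload might look like; the field names follow recent Kibana versions and are an assumption here, not SuperPlane's exact request.

```python
import json

# Hypothetical payload for Kibana's Cases API (POST <kibana>/api/cases).
# Field names follow recent Kibana versions; exact requirements vary by
# version, so treat this as a sketch rather than the real request body.
payload = {
    "title": "Production incident",
    "description": "Elevated error rate detected in production.",
    "severity": "high",
    "owner": "cases",  # the Kibana application that owns the case
    "tags": ["production", "incident"],
    "connector": {"id": "none", "name": "none", "type": ".none", "fields": None},
    "settings": {"syncAlerts": False},
}
print(json.dumps(payload, indent=2))
```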
### Configuration - **Title**: The case title - **Severity**: Case severity (low, medium, high, or critical) - **Owner**: The Kibana application that owns the case - **Description**: A description of the case - **Tags**: Optional list of tags to attach to the case ### Outputs The component emits an event containing: - `id`: The case ID assigned by Kibana - `title`: The case title - `status`: The initial case status - `severity`: The case severity - `version`: The case version (can be provided to later updates for explicit optimistic locking) - `createdAt`: The timestamp when the case was created ### Example Output ```json { "data": { "createdAt": "2026-03-12T09:00:00.000Z", "id": "3c0a2b10-4e5f-11ee-be56-0242ac120002", "severity": "high", "status": "open", "title": "Production incident", "version": "WzE2LDFd" }, "timestamp": "2026-03-12T09:00:00Z", "type": "elastic.case.created" } ``` ## Get Case The Get Case component retrieves an existing case from Kibana Security by its ID. ### Configuration - **Case**: The Kibana case to retrieve ### Outputs The component emits an event containing: - `id`: The case ID - `title`: The case title - `description`: The case description - `status`: The case status - `severity`: The case severity - `tags`: The case tags - `version`: The current case version returned by Kibana - `createdAt`: The timestamp when the case was created - `updatedAt`: The timestamp when the case was last updated ### Example Output ```json { "data": { "createdAt": "2026-03-12T09:00:00.000Z", "description": "Elevated error rate detected in production.", "id": "3c0a2b10-4e5f-11ee-be56-0242ac120002", "severity": "high", "status": "open", "tags": [ "production", "incident" ], "title": "Production incident", "updatedAt": "2026-03-12T10:00:00.000Z", "version": "WzE2LDFd" }, "timestamp": "2026-03-12T09:00:00Z", "type": "elastic.case.retrieved" } ``` ## Get Document The Get Document component retrieves a JSON document from an Elasticsearch index by its ID. 
### Configuration - **Index**: The Elasticsearch index to read from - **Document**: The document to retrieve ### Outputs The component emits an event containing: - `id`: The document ID - `index`: The index the document was read from - `version`: The document version number - `source`: The document fields ### Example Output ```json { "data": { "id": "aB3kLmN4oPqR", "index": "workflow-audit", "source": { "env": "production", "message": "deployment started" }, "version": 3 }, "timestamp": "2026-03-12T09:00:00Z", "type": "elastic.document.retrieved" } ``` ## Index Document The Index Document component writes a JSON document to an Elasticsearch index. ### Use Cases - **Audit logging**: Record workflow actions in Elasticsearch for centralized search and dashboards - **Incident records**: Index structured incident data for analysis and alerting - **Workflow output**: Store results from any workflow step for downstream querying ### Configuration - **Index**: The Elasticsearch index name to write to (e.g. `workflow-audit`) - **Document**: The JSON object to index. The editor starts with an `@timestamp` template so documents are compatible with On Document Indexed by default. - **Document ID** *(optional)*: A stable ID for idempotent writes. If omitted, Elasticsearch generates one automatically. Providing an ID means re-runs update the existing document rather than creating a duplicate. ### Outputs The component emits an event containing: - `id`: The document ID assigned by Elasticsearch - `index`: The index the document was written to - `result`: Operation result (`created` or `updated`) - `version`: The document version number ### Example Output ```json { "data": { "id": "aB3kLmN4oPqR", "index": "workflow-audit", "result": "created", "version": 1 }, "timestamp": "2026-03-12T09:00:00Z", "type": "elastic.document.indexed" } ``` ## Update Case The Update Case component applies a partial update to an existing Kibana Security case. 
### Configuration - **Case**: The Kibana case to update - **Title**: New title for the case (optional) - **Description**: New description for the case (optional) - **Status**: New status for the case (optional) - **Severity**: New severity for the case (optional) - **Tags**: New tags for the case (optional) ### Outputs The component emits an event containing: - `id`: The case ID - `title`: The updated case title - `status`: The updated case status - `severity`: The updated case severity - `version`: The new case version - `updatedAt`: The timestamp when the case was last updated ### Example Output ```json { "data": { "id": "3c0a2b10-4e5f-11ee-be56-0242ac120002", "severity": "high", "status": "in-progress", "title": "Production incident", "updatedAt": "2026-03-12T10:00:00.000Z", "version": "WzE3LDFd" }, "timestamp": "2026-03-12T09:00:00Z", "type": "elastic.case.updated" } ``` ## Update Document The Update Document component applies a partial update to an existing document in an Elasticsearch index. ### Configuration - **Index**: The Elasticsearch index containing the document - **Document**: The document to update - **Fields**: The fields to merge into the existing document (partial update). The editor starts with an `@timestamp` template for convenience. ### Outputs The component emits an event containing: - `id`: The document ID - `index`: The index the document belongs to - `result`: Operation result (`updated`) - `version`: The new document version number ### Example Output ```json { "data": { "id": "aB3kLmN4oPqR", "index": "workflow-audit", "result": "updated", "version": 4 }, "timestamp": "2026-03-12T09:00:00Z", "type": "elastic.document.updated" } ``` #### FireHydrant Source URL: https://docs.superplane.com/components/firehydrant Manage and react to incidents in FireHydrant ## Instructions To connect FireHydrant, create an API key: 1. 
Go to **Settings → API Keys → Create API Key** in your FireHydrant account. This requires Owner permissions. 2. The API key should have **Write Access** in order to create incidents and webhooks. 3. Copy the API key and paste it into the configuration for this integration. ## On Incident The On Incident trigger starts a workflow execution when a FireHydrant incident is created or reaches a specific milestone. ### Use Cases - **Incident response**: Automatically notify Slack, update a status page, or create a Jira ticket when an incident is opened - **Alert escalation**: Trigger escalation workflows when critical incidents are created - **Milestone tracking**: React to incidents reaching specific milestones such as mitigated or resolved - **Cross-tool sync**: Sync new FireHydrant incidents to other incident management tools ### Configuration - **Current Milestone**: Select which incident milestones to trigger on (started, acknowledged, mitigated, resolved, etc.). The workflow will trigger when an incident is created or updated to match any of the selected milestones. - **Severities** (optional): Filter by severity levels. Only incidents matching the selected severities will trigger the workflow. If empty, all severities are accepted. ### Event Data Each incident event includes: - **name**: Incident name/title - **number**: Incident number - **severity**: Severity level - **priority**: Priority level - **current_milestone**: Current milestone (e.g., started, acknowledged) - **summary**: Incident summary ### Webhook Setup This trigger automatically sets up a FireHydrant webhook endpoint when configured. The endpoint is managed by SuperPlane and will be cleaned up when the trigger is removed. 
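The milestone and severity filters above can be sketched as follows; the field names match the event data, but the matching logic is illustrative rather than SuperPlane's actual implementation:

```python
def should_trigger(incident: dict, milestones: set, severities=None) -> bool:
    """Fire only when the incident's current milestone is one of the
    selected milestones and, if a severity filter is configured, its
    severity is selected too. An empty severity filter accepts all."""
    if incident["current_milestone"] not in milestones:
        return False
    if severities and incident["severity"] not in severities:
        return False
    return True

incident = {"current_milestone": "started", "severity": "SEV1"}
fires = should_trigger(incident, milestones={"started", "resolved"},
                       severities={"SEV1"})
```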
### Example Data ```json { "data": { "event": "incident.created", "incident": { "conference_bridges": [ { "id": "cb-9f1a2c3d", "name": "Zoom War Room", "provider": "zoom", "url": "https://zoom.us/j/9876543210" } ], "counts": { "starred_events": 3, "starred_messages": 12 }, "created_at": "2026-03-02 14:43:05 UTC", "current_milestone": "started", "custom_fields": [ { "display_name": "Detected by", "id": "b370d5b8-9ef1-49d5-be4a-96ee6e536c8b", "value": "Datadog Monitor: DB Health Check" } ], "customer_impact_summary": "All production API requests are failing due to database deletion. Users cannot log in or access core features.", "description": "Primary production database was accidentally deleted during maintenance.", "follow_ups": [ { "id": "fu-123", "status": "pending", "title": "Implement database deletion safeguards" } ], "id": "21fe20b5-8012-4a51-a34c-623b1389af66", "impacts": [ { "description": "Production users unable to access platform", "id": "impact-1", "type": "customer" } ], "incident_channels": [ { "id": "ic-789", "name": "#incident-001", "type": "slack", "visibility": "private" } ], "labels": [ "database", "production", "outage" ], "last_note": "Engineering team restoring latest backup from 14:30 UTC.", "milestones": [ { "created_at": "2026-03-02 14:43:05 UTC", "duration": "", "id": "d0eaad5c-6922-4f35-adfb-9cf481d39f46", "name": "Started", "occurred_at": "2026-03-02 14:43:05 UTC", "original_milestone_id": "99e55a31-894a-4e76-b680-89b3acab0d7b", "type": "started", "updated_at": "2026-03-02 14:43:06 UTC", "updated_by_id": "bd23faac-dcda-4fd7-a968-4e0b6a04bf35", "updated_by_type": "Bot" } ], "name": "Production Database Accidentally Deleted", "number": 42, "priority": "P1", "private_status_page_url": "https://app.firehydrant.io/incidents/internal/status_page/mock-url", "role_assignments": [ { "role": "Incident Commander", "user_id": "user-111", "user_name": "Alice Johnson" }, { "role": "Communications Lead", "user_id": "user-222", "user_name": "Bob Smith" 
} ], "severity": "SEV1", "started_at": "2026-03-02 14:43:05 UTC", "summary": "Production outage caused by accidental database deletion.", "tags": [ "critical", "customer-impacting" ], "tasks": [ { "description": "Restore database from latest backup", "id": "task-1", "status": "in_progress" }, { "description": "Validate data integrity after restore", "id": "task-2", "status": "pending" } ], "team_assignments": [ { "team_id": "team-ops", "team_name": "Operations" }, { "team_id": "team-eng", "team_name": "Engineering" } ], "updated_at": "2026-03-02 14:45:10 +0000" }, "operation": "CREATED", "resource_type": "incident" }, "timestamp": "2026-03-02T14:43:07.292043553Z", "type": "firehydrant.incident.created" } ``` ## Create Incident The Create Incident component creates a new incident in FireHydrant. ### Use Cases - **Alert escalation**: Create incidents from monitoring alerts or error tracking - **Cross-tool sync**: Open a FireHydrant incident from other SuperPlane triggers (e.g., PagerDuty, Rootly, GitHub) - **Manual incident creation**: Create incidents from workflow events - **Automated response**: Automatically declare incidents when thresholds are breached ### Configuration - **Name**: Incident name/title (required, supports expressions) - **Summary**: Short summary of the incident (optional, supports expressions) - **Description**: Detailed description of the incident (optional, supports expressions) - **Severity**: Severity level, e.g., SEV1, SEV2 (optional, populated from FireHydrant) - **Priority**: Priority level, e.g., P1, P2 (optional, populated from FireHydrant) ### Output Returns the created incident object including: - **id**: Incident ID - **name**: Incident name - **description**: Incident description - **summary**: Incident summary - **customer_impact_summary**: Summary of customer impact - **current_milestone**: Current milestone (e.g., started, acknowledged) - **number**: Incident number - **incident_url**: URL to the incident in FireHydrant - 
**severity**: Severity level - **priority**: Priority level - **tag_list**: List of tags associated with the incident - **impacts**: List of impacts associated with the incident - **milestones**: List of milestones associated with the incident ### Example Output ```json { "data": { "current_milestone": "started", "customer_impact_summary": "Some customers are experiencing slow performance and intermittent errors when accessing the application.", "description": "Users are experiencing slow database queries and connection timeouts.", "id": "04d9fd1a-ba9c-417d-b396-58a6e2c374de", "impacts": [], "incident_url": "https://app.firehydrant.com/incidents/04d9fd1a-ba9c-417d-b396-58a6e2c374de", "milestones": [ { "occurred_at": "2026-01-19T12:00:00Z", "type": "started" } ], "name": "Database connection issues", "number": 603, "priority": "P1", "severity": "SEV1", "summary": "Database connection pool exhausted causing cascading failures.", "tag_list": [] }, "timestamp": "2026-01-19T12:00:00Z", "type": "firehydrant.incident" } ``` #### GitHub Source URL: https://docs.superplane.com/components/github Manage and react to changes in your GitHub repositories ## On Branch Created The On Branch Created trigger starts a workflow execution when a new branch is created in a GitHub repository. 
### Use Cases - **Branch automation**: Set up environments or resources for new branches - **Branch validation**: Validate branch naming conventions - **Notification workflows**: Notify teams when important branches are created - **Branch processing**: Process or configure branches automatically ### Configuration - **Repository**: Select the GitHub repository to monitor - **Branches**: Configure which branches to listen for using predicates (e.g., equals "main", starts with "feature-") ### Event Data Each branch event includes: - **ref**: The name of the created branch (e.g., "feature/new-endpoint") - **ref_type**: Type of reference (branch) - **repository**: Repository information - **sender**: User who created the branch ### Webhook Setup This trigger automatically sets up a GitHub webhook when configured. The webhook is managed by SuperPlane and will be cleaned up when the trigger is removed. ### Example Data ```json { "data": { "description": "Example repository for webhook payloads", "master_branch": "main", "pusher_type": "user", "ref": "feature/new-endpoint", "ref_type": "branch", "repository": { "full_name": "acme/widgets", "html_url": "https://github.com/acme/widgets", "id": 123456 }, "sender": { "html_url": "https://github.com/octocat", "id": 101, "login": "octocat" } }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "github.branchCreated" } ``` ## On Issue The On Issue trigger starts a workflow execution when issue events occur in a GitHub repository. ### Use Cases - **Issue automation**: Automate responses to new or updated issues - **Notification workflows**: Send notifications when issues are created or closed - **Task management**: Sync issues with external task management systems - **Label automation**: Automatically label or categorize issues ### Configuration - **Repository**: Select the GitHub repository to monitor - **Actions**: Select which issue actions to listen for (opened, closed, reopened, etc.) 
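The branch and action filters described above amount to simple predicate checks on the incoming webhook payload; a sketch where the operator names (`equals`, `starts_with`) and predicate shape are illustrative:

```python
def branch_matches(ref: str, predicates: list) -> bool:
    """Evaluate branch predicates like those in the On Branch Created
    configuration. An empty predicate list accepts every branch."""
    if not predicates:
        return True
    for p in predicates:
        if p["op"] == "equals" and ref == p["value"]:
            return True
        if p["op"] == "starts_with" and ref.startswith(p["value"]):
            return True
    return False

def issue_action_selected(payload: dict, actions: set) -> bool:
    """On Issue fires only for the selected webhook actions."""
    return payload["action"] in actions
```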
### Event Data Each issue event includes: - **action**: The action that triggered the event (opened, closed, reopened, etc.) - **issue**: Complete issue information including title, body, state, labels, assignees - **repository**: Repository information - **sender**: User who triggered the event ### Webhook Setup This trigger automatically sets up a GitHub webhook when configured. The webhook is managed by SuperPlane and will be cleaned up when the trigger is removed. ### Example Data ```json { "data": { "action": "opened", "assignee": null, "issue": { "html_url": "https://github.com/acme/widgets/issues/42", "number": 42, "state": "open", "title": "Fix flaky build", "user": { "login": "octocat" } }, "repository": { "full_name": "acme/widgets", "html_url": "https://github.com/acme/widgets", "id": 123456 }, "sender": { "html_url": "https://github.com/octocat", "id": 101, "login": "octocat" } }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "github.issue" } ``` ## On Issue Comment The On Issue Comment trigger starts a workflow execution when comments are added to issues. ### Use Cases - **Command processing**: Process slash commands in issue comments (e.g., /assign, /close) - **Bot interactions**: Respond to comments with automated actions - **Issue automation**: Automate issue management based on comment content - **Notification systems**: Notify teams when important comments are added ### Configuration - **Repository**: Select the GitHub repository to monitor - **Content Filter**: Optional regex pattern to filter comments (e.g., `/solve` to only trigger on comments containing "/solve") ### Event Data Each comment event includes: - **comment**: Comment information including body, author, created timestamp - **issue**: Issue information the comment was added to - **repository**: Repository information - **sender**: User who added the comment ### Webhook Setup This trigger automatically sets up a GitHub webhook when configured. 
The webhook is managed by SuperPlane and will be cleaned up when the trigger is removed. ### Example Data ```json { "data": { "action": "created", "comment": { "body": "I can reproduce this", "html_url": "https://github.com/acme/widgets/issues/42#issuecomment-5001", "id": 5001 }, "issue": { "html_url": "https://github.com/acme/widgets/issues/42", "number": 42, "title": "Fix flaky build" }, "repository": { "full_name": "acme/widgets", "html_url": "https://github.com/acme/widgets", "id": 123456 }, "sender": { "html_url": "https://github.com/octocat", "id": 101, "login": "octocat" } }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "github.issueComment" } ``` ## On PR Comment The On PR Comment trigger starts a workflow execution when comments are added on a pull request conversation. ### Use Cases - **Command processing**: Process slash commands in PR conversation comments (e.g., /deploy, /test) - **Bot interactions**: Respond to comments with automated actions - **Notification systems**: Notify teams when important PR conversation comments are added ### Configuration - **Repository**: Select the GitHub repository to monitor - **Content Filter**: Optional regex pattern to filter comments (e.g., `/solve` to only trigger on comments containing "/solve") ### Event Data Each comment event includes: - **comment**: Comment information including body, author, created timestamp - **issue**: Issue/PR information; for this trigger it is always a pull request issue - **repository**: Repository information - **sender**: User who added the comment SuperPlane passes through the full GitHub webhook payload under data for the issue_comment event type. Common expression paths: - PR number: `root().data.issue.number` - PR title: `root().data.issue.title` - PR URL: `root().data.issue.pull_request.html_url` - Comment body: `root().data.comment.body` ### Webhook Setup This trigger automatically sets up a GitHub webhook when configured. 
The webhook is managed by SuperPlane and will be cleaned up when the trigger is removed. ### Example Data ```json { "data": { "action": "created", "comment": { "author_association": "CONTRIBUTOR", "body": "Looks good to me — can we also update the README example to match the new title?", "created_at": "2026-02-25T18:14:22Z", "html_url": "https://github.com/acme-labs/snaketoy/pull/42#issuecomment-4928173041", "id": 4928173041, "issue_url": "https://api.github.com/repos/acme-labs/snaketoy/issues/42", "node_id": "IC_kwDOQ7c2jM7zXq2R", "performed_via_github_app": null, "pin": null, "reactions": { "+1": 2, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/acme-labs/snaketoy/issues/comments/4928173041/reactions" }, "updated_at": "2026-02-25T18:14:29Z", "url": "https://api.github.com/repos/acme-labs/snaketoy/issues/comments/4928173041", "user": { "avatar_url": "https://avatars.githubusercontent.com/u/124578901?v=4", "events_url": "https://api.github.com/users/jules-ramirez/events{/privacy}", "followers_url": "https://api.github.com/users/jules-ramirez/followers", "following_url": "https://api.github.com/users/jules-ramirez/following{/other_user}", "gists_url": "https://api.github.com/users/jules-ramirez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jules-ramirez", "id": 124578901, "login": "jules-ramirez", "node_id": "MDQ6VXNlcjEyNDU3ODkwMQ==", "organizations_url": "https://api.github.com/users/jules-ramirez/orgs", "received_events_url": "https://api.github.com/users/jules-ramirez/received_events", "repos_url": "https://api.github.com/users/jules-ramirez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jules-ramirez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jules-ramirez/subscriptions", "type": "User", "url": "https://api.github.com/users/jules-ramirez", "user_view_type": "public" } }, "issue": { 
"active_lock_reason": null, "assignee": null, "assignees": [], "author_association": "MEMBER", "body": "This PR renames the game title shown in the UI and updates the page header accordingly.", "closed_at": null, "comments": 3, "comments_url": "https://api.github.com/repos/acme-labs/snaketoy/issues/42/comments", "created_at": "2026-02-25T18:02:11Z", "draft": false, "events_url": "https://api.github.com/repos/acme-labs/snaketoy/issues/42/events", "html_url": "https://github.com/acme-labs/snaketoy/pull/42", "id": 5129048832, "labels": [ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 889120331, "name": "enhancement", "node_id": "LA_kwDOQ7c2jM8AAAABN9qLiw", "url": "https://api.github.com/repos/acme-labs/snaketoy/labels/enhancement" } ], "labels_url": "https://api.github.com/repos/acme-labs/snaketoy/issues/42/labels{/name}", "locked": false, "milestone": null, "node_id": "PR_kwDOQ7c2jM7hY2qB", "number": 42, "performed_via_github_app": null, "pull_request": { "diff_url": "https://github.com/acme-labs/snaketoy/pull/42.diff", "html_url": "https://github.com/acme-labs/snaketoy/pull/42", "merged_at": null, "patch_url": "https://github.com/acme-labs/snaketoy/pull/42.patch", "url": "https://api.github.com/repos/acme-labs/snaketoy/pulls/42" }, "reactions": { "+1": 5, "-1": 0, "confused": 0, "eyes": 2, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 9, "url": "https://api.github.com/repos/acme-labs/snaketoy/issues/42/reactions" }, "repository_url": "https://api.github.com/repos/acme-labs/snaketoy", "state": "open", "state_reason": null, "timeline_url": "https://api.github.com/repos/acme-labs/snaketoy/issues/42/timeline", "title": "Rename title from 'Snake' to 'Snake Toy'", "updated_at": "2026-02-25T18:14:29Z", "url": "https://api.github.com/repos/acme-labs/snaketoy/issues/42", "user": { "avatar_url": "https://avatars.githubusercontent.com/u/90231477?v=4", "events_url": 
"https://api.github.com/users/renato-dev/events{/privacy}", "followers_url": "https://api.github.com/users/renato-dev/followers", "following_url": "https://api.github.com/users/renato-dev/following{/other_user}", "gists_url": "https://api.github.com/users/renato-dev/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/renato-dev", "id": 90231477, "login": "renato-dev", "node_id": "MDQ6VXNlcjkwMjMxNDc3", "organizations_url": "https://api.github.com/users/renato-dev/orgs", "received_events_url": "https://api.github.com/users/renato-dev/received_events", "repos_url": "https://api.github.com/users/renato-dev/repos", "site_admin": false, "starred_url": "https://api.github.com/users/renato-dev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/renato-dev/subscriptions", "type": "User", "url": "https://api.github.com/users/renato-dev", "user_view_type": "public" } }, "repository": { "allow_forking": true, "archive_url": "https://api.github.com/repos/acme-labs/snaketoy/{archive_format}{/ref}", "archived": false, "assignees_url": "https://api.github.com/repos/acme-labs/snaketoy/assignees{/user}", "blobs_url": "https://api.github.com/repos/acme-labs/snaketoy/git/blobs{/sha}", "branches_url": "https://api.github.com/repos/acme-labs/snaketoy/branches{/branch}", "clone_url": "https://github.com/acme-labs/snaketoy.git", "collaborators_url": "https://api.github.com/repos/acme-labs/snaketoy/collaborators{/collaborator}", "comments_url": "https://api.github.com/repos/acme-labs/snaketoy/comments{/number}", "commits_url": "https://api.github.com/repos/acme-labs/snaketoy/commits{/sha}", "compare_url": "https://api.github.com/repos/acme-labs/snaketoy/compare/{base}...{head}", "contents_url": "https://api.github.com/repos/acme-labs/snaketoy/contents/{+path}", "contributors_url": "https://api.github.com/repos/acme-labs/snaketoy/contributors", "created_at": "2024-08-11T13:09:40Z", "default_branch": "main", "deployments_url": 
"https://api.github.com/repos/acme-labs/snaketoy/deployments", "description": "A tiny snake fangame written in JavaScript + HTML5 canvas", "disabled": false, "downloads_url": "https://api.github.com/repos/acme-labs/snaketoy/downloads", "events_url": "https://api.github.com/repos/acme-labs/snaketoy/events", "fork": false, "forks": 7, "forks_count": 7, "forks_url": "https://api.github.com/repos/acme-labs/snaketoy/forks", "full_name": "acme-labs/snaketoy", "git_commits_url": "https://api.github.com/repos/acme-labs/snaketoy/git/commits{/sha}", "git_refs_url": "https://api.github.com/repos/acme-labs/snaketoy/git/refs{/sha}", "git_tags_url": "https://api.github.com/repos/acme-labs/snaketoy/git/tags{/sha}", "git_url": "git://github.com/acme-labs/snaketoy.git", "has_discussions": true, "has_downloads": true, "has_issues": true, "has_pages": false, "has_projects": true, "has_pull_requests": true, "has_wiki": true, "homepage": "https://acme.example/snaketoy", "hooks_url": "https://api.github.com/repos/acme-labs/snaketoy/hooks", "html_url": "https://github.com/acme-labs/snaketoy", "id": 712304981, "is_template": false, "issue_comment_url": "https://api.github.com/repos/acme-labs/snaketoy/issues/comments{/number}", "issue_events_url": "https://api.github.com/repos/acme-labs/snaketoy/issues/events{/number}", "issues_url": "https://api.github.com/repos/acme-labs/snaketoy/issues{/number}", "keys_url": "https://api.github.com/repos/acme-labs/snaketoy/keys{/key_id}", "labels_url": "https://api.github.com/repos/acme-labs/snaketoy/labels{/name}", "language": "JavaScript", "languages_url": "https://api.github.com/repos/acme-labs/snaketoy/languages", "license": { "key": "mit", "name": "MIT License", "node_id": "MDc6TGljZW5zZTEz", "spdx_id": "MIT", "url": "https://api.github.com/licenses/mit" }, "merges_url": "https://api.github.com/repos/acme-labs/snaketoy/merges", "milestones_url": "https://api.github.com/repos/acme-labs/snaketoy/milestones{/number}", "mirror_url": null, "name": 
"snaketoy", "node_id": "MDEwOlJlcG9zaXRvcnk3MTIzMDQ5ODE=", "notifications_url": "https://api.github.com/repos/acme-labs/snaketoy/notifications{?since,all,participating}", "open_issues": 12, "open_issues_count": 12, "owner": { "avatar_url": "https://avatars.githubusercontent.com/u/55120911?v=4", "events_url": "https://api.github.com/users/acme-labs/events{/privacy}", "followers_url": "https://api.github.com/users/acme-labs/followers", "following_url": "https://api.github.com/users/acme-labs/following{/other_user}", "gists_url": "https://api.github.com/users/acme-labs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/acme-labs", "id": 55120911, "login": "acme-labs", "node_id": "MDEyOk9yZ2FuaXphdGlvbjU1MTIwOTEx", "organizations_url": "https://api.github.com/users/acme-labs/orgs", "received_events_url": "https://api.github.com/users/acme-labs/received_events", "repos_url": "https://api.github.com/users/acme-labs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/acme-labs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/acme-labs/subscriptions", "type": "Organization", "url": "https://api.github.com/users/acme-labs", "user_view_type": "public" }, "private": false, "pull_request_creation_policy": "all", "pulls_url": "https://api.github.com/repos/acme-labs/snaketoy/pulls{/number}", "pushed_at": "2026-02-25T18:13:58Z", "releases_url": "https://api.github.com/repos/acme-labs/snaketoy/releases{/id}", "size": 1380, "ssh_url": "git@github.com:acme-labs/snaketoy.git", "stargazers_count": 24, "stargazers_url": "https://api.github.com/repos/acme-labs/snaketoy/stargazers", "statuses_url": "https://api.github.com/repos/acme-labs/snaketoy/statuses/{sha}", "subscribers_url": "https://api.github.com/repos/acme-labs/snaketoy/subscribers", "subscription_url": "https://api.github.com/repos/acme-labs/snaketoy/subscription", "svn_url": "https://github.com/acme-labs/snaketoy", "tags_url": 
"https://api.github.com/repos/acme-labs/snaketoy/tags", "teams_url": "https://api.github.com/repos/acme-labs/snaketoy/teams", "topics": [ "game", "canvas", "javascript" ], "trees_url": "https://api.github.com/repos/acme-labs/snaketoy/git/trees{/sha}", "updated_at": "2026-02-12T09:41:27Z", "url": "https://api.github.com/repos/acme-labs/snaketoy", "visibility": "public", "watchers": 24, "watchers_count": 24, "web_commit_signoff_required": false }, "sender": { "avatar_url": "https://avatars.githubusercontent.com/u/55120911?v=4", "events_url": "https://api.github.com/users/acme-ci/events{/privacy}", "followers_url": "https://api.github.com/users/acme-ci/followers", "following_url": "https://api.github.com/users/acme-ci/following{/other_user}", "gists_url": "https://api.github.com/users/acme-ci/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/acme-ci", "id": 77100412, "login": "acme-ci", "node_id": "MDQ6VXNlcjc3MTAwNDEy", "organizations_url": "https://api.github.com/users/acme-ci/orgs", "received_events_url": "https://api.github.com/users/acme-ci/received_events", "repos_url": "https://api.github.com/users/acme-ci/repos", "site_admin": false, "starred_url": "https://api.github.com/users/acme-ci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/acme-ci/subscriptions", "type": "User", "url": "https://api.github.com/users/acme-ci", "user_view_type": "public" } }, "timestamp": "2026-02-25T18:14:31.384219122Z", "type": "github.prComment" } ``` ## On PR Review Comment The On PR Review Comment trigger starts a workflow execution when review comments are added to pull requests. 
### Use Cases - **Code review automation**: React to line-level review comments - **Review workflows**: Trigger follow-up workflows when a review is submitted - **Notification systems**: Notify teams when new review comments are posted ### Configuration - **Repository**: Select the GitHub repository to monitor - **Content Filter**: Optional regex pattern to filter comment/review body (e.g., `/solve`) ### Event Data This trigger handles two GitHub webhook events: - **pull_request_review_comment**: line-level code review comments (`comment` and `pull_request`) - **pull_request_review**: submitted review comments (`review` and `pull_request`) SuperPlane passes through the full GitHub webhook payload under data. Common expression paths: - PR number: `root().data.pull_request.number` - Branch name: `root().data.pull_request.head.ref` - Head SHA: `root().data.pull_request.head.sha` - Review comment body: `root().data.comment.body` (pull_request_review_comment) - Review submission body: `root().data.review.body` (pull_request_review) ### Webhook Setup This trigger automatically sets up a GitHub webhook when configured. The webhook is managed by SuperPlane and will be cleaned up when the trigger is removed. 
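Because this trigger handles two payload shapes, expressions and content filters need to read the body from different fields (`comment.body` for line-level comments, `review.body` for submitted reviews). A sketch of that dispatch; the exact filter semantics are an assumption:

```python
import re

def review_body(payload: dict):
    """Return the text a content filter would run against: line-level
    review comments carry `comment`, submitted reviews carry `review`."""
    if "comment" in payload:
        return payload["comment"].get("body")
    if "review" in payload:
        return payload["review"].get("body")
    return None

def passes_content_filter(payload: dict, pattern) -> bool:
    """Apply an optional regex filter to the relevant body field."""
    body = review_body(payload) or ""
    return pattern is None or re.search(pattern, body) is not None
```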
### Example Data ```json { "data": { "action": "created", "comment": { "_links": { "html": { "href": "https://github.com/acme-labs/snaketoy/pull/42#discussion_r3179045528" }, "pull_request": { "href": "https://api.github.com/repos/acme-labs/snaketoy/pulls/42" }, "self": { "href": "https://api.github.com/repos/acme-labs/snaketoy/pulls/comments/3179045528" } }, "author_association": "CONTRIBUTOR", "body": "Small nit: could we also update the page heading to match the new title?", "commit_id": "d6f3c8a2e8b7f0a9c0a1f67f0c5d7b2a1d9e3f44", "created_at": "2026-02-25T18:22:13Z", "diff_hunk": "@@ -3,7 +3,7 @@\n \u003chead\u003e\n \u003cmeta charset=\"UTF-8\"\u003e\n \u003cmeta name=\"viewport\" content=\"width=device-width, user-scalable=no, initial-scale=1\"\u003e\n- \u003ctitle\u003eSnake\u003c/title\u003e\n+ \u003ctitle\u003eSnake Toy\u003c/title\u003e", "html_url": "https://github.com/acme-labs/snaketoy/pull/42#discussion_r3179045528", "id": 3179045528, "line": 6, "node_id": "PRRC_kwDOQ7c2jM6_9s1Y", "original_commit_id": "d6f3c8a2e8b7f0a9c0a1f67f0c5d7b2a1d9e3f44", "original_line": 6, "original_position": 5, "original_start_line": null, "path": "index.html", "position": 5, "pull_request_review_id": 4632189077, "pull_request_url": "https://api.github.com/repos/acme-labs/snaketoy/pulls/42", "reactions": { "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/acme-labs/snaketoy/pulls/comments/3179045528/reactions" }, "side": "RIGHT", "start_line": null, "start_side": null, "subject_type": "line", "updated_at": "2026-02-25T18:22:21Z", "url": "https://api.github.com/repos/acme-labs/snaketoy/pulls/comments/3179045528", "user": { "avatar_url": "https://avatars.githubusercontent.com/u/124578901?v=4", "events_url": "https://api.github.com/users/jules-ramirez/events{/privacy}", "followers_url": "https://api.github.com/users/jules-ramirez/followers", "following_url": 
"https://api.github.com/users/jules-ramirez/following{/other_user}", "gists_url": "https://api.github.com/users/jules-ramirez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jules-ramirez", "id": 124578901, "login": "jules-ramirez", "node_id": "MDQ6VXNlcjEyNDU3ODkwMQ==", "organizations_url": "https://api.github.com/users/jules-ramirez/orgs", "received_events_url": "https://api.github.com/users/jules-ramirez/received_events", "repos_url": "https://api.github.com/users/jules-ramirez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jules-ramirez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jules-ramirez/subscriptions", "type": "User", "url": "https://api.github.com/users/jules-ramirez", "user_view_type": "public" } }, "pull_request": { "_links": { "comments": { "href": "https://api.github.com/repos/acme-labs/snaketoy/issues/42/comments" }, "commits": { "href": "https://api.github.com/repos/acme-labs/snaketoy/pulls/42/commits" }, "html": { "href": "https://github.com/acme-labs/snaketoy/pull/42" }, "issue": { "href": "https://api.github.com/repos/acme-labs/snaketoy/issues/42" }, "review_comment": { "href": "https://api.github.com/repos/acme-labs/snaketoy/pulls/comments{/number}" }, "review_comments": { "href": "https://api.github.com/repos/acme-labs/snaketoy/pulls/42/comments" }, "self": { "href": "https://api.github.com/repos/acme-labs/snaketoy/pulls/42" }, "statuses": { "href": "https://api.github.com/repos/acme-labs/snaketoy/statuses/d6f3c8a2e8b7f0a9c0a1f67f0c5d7b2a1d9e3f44" } }, "active_lock_reason": null, "assignee": null, "assignees": [], "author_association": "MEMBER", "auto_merge": null, "base": { "label": "acme-labs:main", "ref": "main", "repo": { "allow_auto_merge": false, "allow_forking": true, "allow_merge_commit": true, "allow_rebase_merge": true, "allow_squash_merge": true, "allow_update_branch": false, "archive_url": 
"https://api.github.com/repos/acme-labs/snaketoy/{archive_format}{/ref}", "archived": false, "assignees_url": "https://api.github.com/repos/acme-labs/snaketoy/assignees{/user}", "blobs_url": "https://api.github.com/repos/acme-labs/snaketoy/git/blobs{/sha}", "branches_url": "https://api.github.com/repos/acme-labs/snaketoy/branches{/branch}", "clone_url": "https://github.com/acme-labs/snaketoy.git", "collaborators_url": "https://api.github.com/repos/acme-labs/snaketoy/collaborators{/collaborator}", "comments_url": "https://api.github.com/repos/acme-labs/snaketoy/comments{/number}", "commits_url": "https://api.github.com/repos/acme-labs/snaketoy/commits{/sha}", "compare_url": "https://api.github.com/repos/acme-labs/snaketoy/compare/{base}...{head}", "contents_url": "https://api.github.com/repos/acme-labs/snaketoy/contents/{+path}", "contributors_url": "https://api.github.com/repos/acme-labs/snaketoy/contributors", "created_at": "2024-08-11T13:09:40Z", "default_branch": "main", "delete_branch_on_merge": false, "deployments_url": "https://api.github.com/repos/acme-labs/snaketoy/deployments", "description": "A tiny snake fangame written in JavaScript + HTML5 canvas", "disabled": false, "downloads_url": "https://api.github.com/repos/acme-labs/snaketoy/downloads", "events_url": "https://api.github.com/repos/acme-labs/snaketoy/events", "fork": false, "forks": 7, "forks_count": 7, "forks_url": "https://api.github.com/repos/acme-labs/snaketoy/forks", "full_name": "acme-labs/snaketoy", "git_commits_url": "https://api.github.com/repos/acme-labs/snaketoy/git/commits{/sha}", "git_refs_url": "https://api.github.com/repos/acme-labs/snaketoy/git/refs{/sha}", "git_tags_url": "https://api.github.com/repos/acme-labs/snaketoy/git/tags{/sha}", "git_url": "git://github.com/acme-labs/snaketoy.git", "has_discussions": true, "has_downloads": true, "has_issues": true, "has_pages": false, "has_projects": true, "has_pull_requests": true, "has_wiki": true, "homepage": 
"https://acme.example/snaketoy", "hooks_url": "https://api.github.com/repos/acme-labs/snaketoy/hooks", "html_url": "https://github.com/acme-labs/snaketoy", "id": 712304981, "is_template": false, "issue_comment_url": "https://api.github.com/repos/acme-labs/snaketoy/issues/comments{/number}", "issue_events_url": "https://api.github.com/repos/acme-labs/snaketoy/issues/events{/number}", "issues_url": "https://api.github.com/repos/acme-labs/snaketoy/issues{/number}", "keys_url": "https://api.github.com/repos/acme-labs/snaketoy/keys{/key_id}", "labels_url": "https://api.github.com/repos/acme-labs/snaketoy/labels{/name}", "language": "JavaScript", "languages_url": "https://api.github.com/repos/acme-labs/snaketoy/languages", "license": { "key": "mit", "name": "MIT License", "node_id": "MDc6TGljZW5zZTEz", "spdx_id": "MIT", "url": "https://api.github.com/licenses/mit" }, "merge_commit_message": "PR_TITLE", "merge_commit_title": "MERGE_MESSAGE", "merges_url": "https://api.github.com/repos/acme-labs/snaketoy/merges", "milestones_url": "https://api.github.com/repos/acme-labs/snaketoy/milestones{/number}", "mirror_url": null, "name": "snaketoy", "node_id": "MDEwOlJlcG9zaXRvcnk3MTIzMDQ5ODE=", "notifications_url": "https://api.github.com/repos/acme-labs/snaketoy/notifications{?since,all,participating}", "open_issues": 12, "open_issues_count": 12, "owner": { "avatar_url": "https://avatars.githubusercontent.com/u/55120911?v=4", "events_url": "https://api.github.com/users/acme-labs/events{/privacy}", "followers_url": "https://api.github.com/users/acme-labs/followers", "following_url": "https://api.github.com/users/acme-labs/following{/other_user}", "gists_url": "https://api.github.com/users/acme-labs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/acme-labs", "id": 55120911, "login": "acme-labs", "node_id": "MDEyOk9yZ2FuaXphdGlvbjU1MTIwOTEx", "organizations_url": "https://api.github.com/users/acme-labs/orgs", "received_events_url": 
"https://api.github.com/users/acme-labs/received_events", "repos_url": "https://api.github.com/users/acme-labs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/acme-labs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/acme-labs/subscriptions", "type": "Organization", "url": "https://api.github.com/users/acme-labs", "user_view_type": "public" }, "private": false, "pull_request_creation_policy": "all", "pulls_url": "https://api.github.com/repos/acme-labs/snaketoy/pulls{/number}", "pushed_at": "2026-02-25T18:21:49Z", "releases_url": "https://api.github.com/repos/acme-labs/snaketoy/releases{/id}", "size": 1380, "squash_merge_commit_message": "COMMIT_MESSAGES", "squash_merge_commit_title": "COMMIT_OR_PR_TITLE", "ssh_url": "git@github.com:acme-labs/snaketoy.git", "stargazers_count": 24, "stargazers_url": "https://api.github.com/repos/acme-labs/snaketoy/stargazers", "statuses_url": "https://api.github.com/repos/acme-labs/snaketoy/statuses/{sha}", "subscribers_url": "https://api.github.com/repos/acme-labs/snaketoy/subscribers", "subscription_url": "https://api.github.com/repos/acme-labs/snaketoy/subscription", "svn_url": "https://github.com/acme-labs/snaketoy", "tags_url": "https://api.github.com/repos/acme-labs/snaketoy/tags", "teams_url": "https://api.github.com/repos/acme-labs/snaketoy/teams", "topics": [ "game", "javascript", "canvas" ], "trees_url": "https://api.github.com/repos/acme-labs/snaketoy/git/trees{/sha}", "updated_at": "2026-02-12T09:41:27Z", "url": "https://api.github.com/repos/acme-labs/snaketoy", "use_squash_pr_title_as_default": false, "visibility": "public", "watchers": 24, "watchers_count": 24, "web_commit_signoff_required": false }, "sha": "2d5a1dfbba9d0a8a7f7b14b08a1b6c7c1e2f3a4b", "user": { "avatar_url": "https://avatars.githubusercontent.com/u/90231477?v=4", "events_url": "https://api.github.com/users/renato-dev/events{/privacy}", "followers_url": 
"https://api.github.com/users/renato-dev/followers", "following_url": "https://api.github.com/users/renato-dev/following{/other_user}", "gists_url": "https://api.github.com/users/renato-dev/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/renato-dev", "id": 90231477, "login": "renato-dev", "node_id": "MDQ6VXNlcjkwMjMxNDc3", "organizations_url": "https://api.github.com/users/renato-dev/orgs", "received_events_url": "https://api.github.com/users/renato-dev/received_events", "repos_url": "https://api.github.com/users/renato-dev/repos", "site_admin": false, "starred_url": "https://api.github.com/users/renato-dev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/renato-dev/subscriptions", "type": "User", "url": "https://api.github.com/users/renato-dev", "user_view_type": "public" } }, "body": "Renames the UI title and keeps the game logic unchanged.", "closed_at": null, "comments_url": "https://api.github.com/repos/acme-labs/snaketoy/issues/42/comments", "commits_url": "https://api.github.com/repos/acme-labs/snaketoy/pulls/42/commits", "created_at": "2026-02-25T18:02:11Z", "diff_url": "https://github.com/acme-labs/snaketoy/pull/42.diff", "draft": false, "head": { "label": "renato-dev:rename-title", "ref": "rename-title", "repo": { "default_branch": "main", "description": "A tiny snake fangame written in JavaScript + HTML5 canvas", "fork": false, "forks_count": 7, "full_name": "acme-labs/snaketoy", "html_url": "https://github.com/acme-labs/snaketoy", "id": 712304981, "language": "JavaScript", "name": "snaketoy", "node_id": "MDEwOlJlcG9zaXRvcnk3MTIzMDQ5ODE=", "open_issues_count": 12, "owner": { "avatar_url": "https://avatars.githubusercontent.com/u/55120911?v=4", "html_url": "https://github.com/acme-labs", "id": 55120911, "login": "acme-labs", "node_id": "MDEyOk9yZ2FuaXphdGlvbjU1MTIwOTEx", "site_admin": false, "type": "Organization" }, "private": false, "stargazers_count": 24, "url": 
"https://api.github.com/repos/acme-labs/snaketoy", "watchers_count": 24 }, "sha": "d6f3c8a2e8b7f0a9c0a1f67f0c5d7b2a1d9e3f44", "user": { "avatar_url": "https://avatars.githubusercontent.com/u/90231477?v=4", "events_url": "https://api.github.com/users/renato-dev/events{/privacy}", "followers_url": "https://api.github.com/users/renato-dev/followers", "following_url": "https://api.github.com/users/renato-dev/following{/other_user}", "gists_url": "https://api.github.com/users/renato-dev/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/renato-dev", "id": 90231477, "login": "renato-dev", "node_id": "MDQ6VXNlcjkwMjMxNDc3", "organizations_url": "https://api.github.com/users/renato-dev/orgs", "received_events_url": "https://api.github.com/users/renato-dev/received_events", "repos_url": "https://api.github.com/users/renato-dev/repos", "site_admin": false, "starred_url": "https://api.github.com/users/renato-dev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/renato-dev/subscriptions", "type": "User", "url": "https://api.github.com/users/renato-dev", "user_view_type": "public" } }, "html_url": "https://github.com/acme-labs/snaketoy/pull/42", "id": 4982710331, "issue_url": "https://api.github.com/repos/acme-labs/snaketoy/issues/42", "labels": [], "locked": false, "merge_commit_sha": "8b1a4c7d9e0f2a3b4c5d6e7f8a9b0c1d2e3f4a5b", "merged_at": null, "milestone": null, "node_id": "PR_kwDOQ7c2jM7hY2qB", "number": 42, "patch_url": "https://github.com/acme-labs/snaketoy/pull/42.patch", "requested_reviewers": [], "requested_teams": [], "review_comment_url": "https://api.github.com/repos/acme-labs/snaketoy/pulls/comments{/number}", "review_comments_url": "https://api.github.com/repos/acme-labs/snaketoy/pulls/42/comments", "state": "open", "statuses_url": "https://api.github.com/repos/acme-labs/snaketoy/statuses/d6f3c8a2e8b7f0a9c0a1f67f0c5d7b2a1d9e3f44", "title": "Rename title from 'Snake' to 'Snake Toy'", "updated_at": "2026-02-25T18:22:21Z", 
"url": "https://api.github.com/repos/acme-labs/snaketoy/pulls/42", "user": { "avatar_url": "https://avatars.githubusercontent.com/u/90231477?v=4", "events_url": "https://api.github.com/users/renato-dev/events{/privacy}", "followers_url": "https://api.github.com/users/renato-dev/followers", "following_url": "https://api.github.com/users/renato-dev/following{/other_user}", "gists_url": "https://api.github.com/users/renato-dev/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/renato-dev", "id": 90231477, "login": "renato-dev", "node_id": "MDQ6VXNlcjkwMjMxNDc3", "organizations_url": "https://api.github.com/users/renato-dev/orgs", "received_events_url": "https://api.github.com/users/renato-dev/received_events", "repos_url": "https://api.github.com/users/renato-dev/repos", "site_admin": false, "starred_url": "https://api.github.com/users/renato-dev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/renato-dev/subscriptions", "type": "User", "url": "https://api.github.com/users/renato-dev", "user_view_type": "public" } }, "repository": { "created_at": "2024-08-11T13:09:40Z", "default_branch": "main", "description": "A tiny snake fangame written in JavaScript + HTML5 canvas", "fork": false, "forks_count": 7, "full_name": "acme-labs/snaketoy", "html_url": "https://github.com/acme-labs/snaketoy", "id": 712304981, "language": "JavaScript", "name": "snaketoy", "node_id": "MDEwOlJlcG9zaXRvcnk3MTIzMDQ5ODE=", "open_issues_count": 12, "owner": { "avatar_url": "https://avatars.githubusercontent.com/u/55120911?v=4", "html_url": "https://github.com/acme-labs", "id": 55120911, "login": "acme-labs", "node_id": "MDEyOk9yZ2FuaXphdGlvbjU1MTIwOTEx", "site_admin": false, "type": "Organization" }, "private": false, "pushed_at": "2026-02-25T18:21:49Z", "stargazers_count": 24, "updated_at": "2026-02-12T09:41:27Z", "url": "https://api.github.com/repos/acme-labs/snaketoy", "visibility": "public", "watchers_count": 24 }, "sender": { "avatar_url": 
"https://avatars.githubusercontent.com/u/77100412?v=4", "events_url": "https://api.github.com/users/acme-ci/events{/privacy}", "followers_url": "https://api.github.com/users/acme-ci/followers", "following_url": "https://api.github.com/users/acme-ci/following{/other_user}", "gists_url": "https://api.github.com/users/acme-ci/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/acme-ci", "id": 77100412, "login": "acme-ci", "node_id": "MDQ6VXNlcjc3MTAwNDEy", "organizations_url": "https://api.github.com/users/acme-ci/orgs", "received_events_url": "https://api.github.com/users/acme-ci/received_events", "repos_url": "https://api.github.com/users/acme-ci/repos", "site_admin": false, "starred_url": "https://api.github.com/users/acme-ci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/acme-ci/subscriptions", "type": "User", "url": "https://api.github.com/users/acme-ci", "user_view_type": "public" } }, "timestamp": "2026-02-25T18:22:23.912774300Z", "type": "github.prReviewComment" } ``` ## On Pull Request The On Pull Request trigger starts a workflow execution when pull request events occur in a GitHub repository. ### Use Cases - **PR automation**: Automate actions when PRs are opened, merged, or closed - **Code review workflows**: Trigger review processes or notifications - **CI/CD integration**: Run tests or builds on PR events - **Status updates**: Update systems when PR status changes ### Configuration - **Repository**: Select the GitHub repository to monitor - **Actions**: Select which PR actions to listen for (opened, closed, synchronize, etc.) ### Event Data Each PR event includes: - **action**: The action that triggered the event (opened, closed, synchronize, etc.) - **pull_request**: Complete PR information including title, body, state, labels - **repository**: Repository information - **sender**: User who triggered the event ### Webhook Setup This trigger automatically sets up a GitHub webhook when configured. 
The webhook is managed by SuperPlane and will be cleaned up when the trigger is removed.

### Example Data

```json
{
  "data": {
    "action": "opened",
    "assignee": null,
    "number": 101,
    "pull_request": {
      "html_url": "https://github.com/acme/widgets/pull/101",
      "number": 101,
      "state": "open",
      "title": "Add new endpoint",
      "user": { "login": "octocat" }
    },
    "repository": {
      "full_name": "acme/widgets",
      "html_url": "https://github.com/acme/widgets",
      "id": 123456
    },
    "sender": {
      "html_url": "https://github.com/octocat",
      "id": 101,
      "login": "octocat"
    }
  },
  "timestamp": "2026-01-16T17:56:16.680755501Z",
  "type": "github.pullRequest"
}
```

## On Push

The On Push trigger starts a workflow execution when code is pushed to a GitHub repository.

### Use Cases

- **CI/CD automation**: Trigger builds and deployments on code pushes
- **Code quality checks**: Run linting and tests on every push
- **Notification workflows**: Send notifications when code is pushed
- **Documentation updates**: Automatically update documentation on push

### Configuration

- **Repository**: Select the GitHub repository to monitor
- **Refs**: Configure which branches/tags to monitor (e.g., `refs/heads/main`, `refs/tags/*`)

### Event Data

Each push event includes:

- **repository**: Repository information
- **ref**: The branch or tag that was pushed to
- **commits**: Array of commit information
- **pusher**: Information about who pushed
- **before/after**: Commit SHAs before and after the push

### Webhook Setup

This trigger automatically sets up a GitHub webhook when configured. The webhook is managed by SuperPlane and will be cleaned up when the trigger is removed.
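As an illustration of how a Refs filter like the patterns above could behave, here is a small Python sketch using glob-style matching. SuperPlane's actual ref-matching semantics may differ; the patterns and helper name here are assumptions for the example only.

```python
from fnmatch import fnmatch

# Hypothetical ref patterns, like those shown in the Refs configuration above.
patterns = ["refs/heads/main", "refs/tags/*"]

def ref_matches(ref: str) -> bool:
    """Return True if the pushed ref matches any configured pattern (glob-style)."""
    return any(fnmatch(ref, pattern) for pattern in patterns)

print(ref_matches("refs/heads/main"))       # True  (exact match)
print(ref_matches("refs/tags/v1.2.3"))      # True  (matches refs/tags/*)
print(ref_matches("refs/heads/feature-x"))  # False (no pattern matches)
```

The pushed ref arrives in the event payload as `root().data.ref` (see the Example Data below), so the same kind of check can also be expressed as a filter on that path.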
### Example Data ```json { "data": { "after": "4f9c2e1a7b3d45c0d1e9f23456789abcdeffed01", "base_ref": null, "before": "1a2b3c4d5e6f708192a3b4c5d6e7f8090a1b2c3d", "commits": [ { "added": [], "author": { "date": "2026-03-10T14:22:11+01:00", "email": "alex.doe@example.com", "name": "Alex Doe", "username": "alexdoe" }, "committer": { "date": "2026-03-10T14:22:11+01:00", "email": "noreply@example.com", "name": "GitHub", "username": "web-flow" }, "distinct": true, "id": "4f9c2e1a7b3d45c0d1e9f23456789abcdeffed01", "message": "feat: add lightweight metrics endpoint (#42)\n\nAdds a basic /metrics handler with a minimal gauge.", "modified": [ "cmd/server/main.go", "pkg/metrics/handler.go", "docs/metrics.md" ], "removed": [], "timestamp": "2026-03-10T14:22:11+01:00", "tree_id": "7a8b9c0d1e2f3a4b5c6d7e8f90123456789abcde", "url": "https://github.com/example-org/example-repo/commit/4f9c2e1a7b3d45c0d1e9f23456789abcdeffed01" } ], "compare": "https://github.com/example-org/example-repo/compare/1a2b3c4d5e6f...4f9c2e1a7b3d", "created": false, "deleted": false, "forced": false, "head_commit": { "added": [], "author": { "date": "2026-03-10T14:22:11+01:00", "email": "alex.doe@example.com", "name": "Alex Doe", "username": "alexdoe" }, "committer": { "date": "2026-03-10T14:22:11+01:00", "email": "noreply@example.com", "name": "GitHub", "username": "web-flow" }, "distinct": true, "id": "4f9c2e1a7b3d45c0d1e9f23456789abcdeffed01", "message": "feat: add lightweight metrics endpoint (#42)\n\nAdds a basic /metrics handler with a minimal gauge.", "modified": [ "cmd/server/main.go", "pkg/metrics/handler.go", "docs/metrics.md" ], "removed": [], "timestamp": "2026-03-10T14:22:11+01:00", "tree_id": "7a8b9c0d1e2f3a4b5c6d7e8f90123456789abcde", "url": "https://github.com/example-org/example-repo/commit/4f9c2e1a7b3d45c0d1e9f23456789abcdeffed01" }, "organization": { "avatar_url": "https://avatars.githubusercontent.com/u/12345678?v=4", "description": "Example organization for demo data", "events_url": 
"https://api.github.com/orgs/example-org/events", "hooks_url": "https://api.github.com/orgs/example-org/hooks", "id": 12345678, "issues_url": "https://api.github.com/orgs/example-org/issues", "login": "example-org", "members_url": "https://api.github.com/orgs/example-org/members{/member}", "node_id": "O_kgDOBb1AaA", "public_members_url": "https://api.github.com/orgs/example-org/public_members{/member}", "repos_url": "https://api.github.com/orgs/example-org/repos", "url": "https://api.github.com/orgs/example-org" }, "pusher": { "email": "alex.doe@example.com", "name": "alexdoe" }, "ref": "refs/heads/main", "repository": { "allow_forking": true, "archive_url": "https://api.github.com/repos/example-org/example-repo/{archive_format}{/ref}", "archived": false, "assignees_url": "https://api.github.com/repos/example-org/example-repo/assignees{/user}", "blobs_url": "https://api.github.com/repos/example-org/example-repo/git/blobs{/sha}", "branches_url": "https://api.github.com/repos/example-org/example-repo/branches{/branch}", "clone_url": "https://github.com/example-org/example-repo.git", "collaborators_url": "https://api.github.com/repos/example-org/example-repo/collaborators{/collaborator}", "comments_url": "https://api.github.com/repos/example-org/example-repo/comments{/number}", "commits_url": "https://api.github.com/repos/example-org/example-repo/commits{/sha}", "compare_url": "https://api.github.com/repos/example-org/example-repo/compare/{base}...{head}", "contents_url": "https://api.github.com/repos/example-org/example-repo/contents/{+path}", "contributors_url": "https://api.github.com/repos/example-org/example-repo/contributors", "created_at": 1746900000, "custom_properties": {}, "default_branch": "main", "deployments_url": "https://api.github.com/repos/example-org/example-repo/deployments", "description": "Example repository for webhook payloads", "disabled": false, "downloads_url": "https://api.github.com/repos/example-org/example-repo/downloads", "events_url": 
"https://api.github.com/repos/example-org/example-repo/events", "fork": false, "forks": 2, "forks_count": 2, "forks_url": "https://api.github.com/repos/example-org/example-repo/forks", "full_name": "example-org/example-repo", "git_commits_url": "https://api.github.com/repos/example-org/example-repo/git/commits{/sha}", "git_refs_url": "https://api.github.com/repos/example-org/example-repo/git/refs{/sha}", "git_tags_url": "https://api.github.com/repos/example-org/example-repo/git/tags{/sha}", "git_url": "git://github.com/example-org/example-repo.git", "has_discussions": false, "has_downloads": true, "has_issues": true, "has_pages": false, "has_projects": true, "has_wiki": false, "homepage": null, "hooks_url": "https://api.github.com/repos/example-org/example-repo/hooks", "html_url": "https://github.com/example-org/example-repo", "id": 987654321, "is_template": false, "issue_comment_url": "https://api.github.com/repos/example-org/example-repo/issues/comments{/number}", "issue_events_url": "https://api.github.com/repos/example-org/example-repo/issues/events{/number}", "issues_url": "https://api.github.com/repos/example-org/example-repo/issues{/number}", "keys_url": "https://api.github.com/repos/example-org/example-repo/keys{/key_id}", "labels_url": "https://api.github.com/repos/example-org/example-repo/labels{/name}", "language": "TypeScript", "languages_url": "https://api.github.com/repos/example-org/example-repo/languages", "license": { "key": "apache-2.0", "name": "Apache License 2.0", "node_id": "MDc6TGljZW5zZTI=", "spdx_id": "Apache-2.0", "url": "https://api.github.com/licenses/apache-2.0" }, "master_branch": "main", "merges_url": "https://api.github.com/repos/example-org/example-repo/merges", "milestones_url": "https://api.github.com/repos/example-org/example-repo/milestones{/number}", "mirror_url": null, "name": "example-repo", "node_id": "R_kgDOAbCdEf", "notifications_url": 
"https://api.github.com/repos/example-org/example-repo/notifications{?since,all,participating}", "open_issues": 5, "open_issues_count": 5, "organization": "example-org", "owner": { "avatar_url": "https://avatars.githubusercontent.com/u/12345678?v=4", "email": null, "events_url": "https://api.github.com/users/example-org/events{/privacy}", "followers_url": "https://api.github.com/users/example-org/followers", "following_url": "https://api.github.com/users/example-org/following{/other_user}", "gists_url": "https://api.github.com/users/example-org/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/example-org", "id": 12345678, "login": "example-org", "name": "example-org", "node_id": "O_kgDOBb1AaA", "organizations_url": "https://api.github.com/users/example-org/orgs", "received_events_url": "https://api.github.com/users/example-org/received_events", "repos_url": "https://api.github.com/users/example-org/repos", "site_admin": false, "starred_url": "https://api.github.com/users/example-org/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/example-org/subscriptions", "type": "Organization", "url": "https://api.github.com/users/example-org", "user_view_type": "public" }, "private": false, "pulls_url": "https://api.github.com/repos/example-org/example-repo/pulls{/number}", "pushed_at": 1760000000, "releases_url": "https://api.github.com/repos/example-org/example-repo/releases{/id}", "size": 48200, "ssh_url": "git@github.com:example-org/example-repo.git", "stargazers": 3, "stargazers_count": 3, "stargazers_url": "https://api.github.com/repos/example-org/example-repo/stargazers", "statuses_url": "https://api.github.com/repos/example-org/example-repo/statuses/{sha}", "subscribers_url": "https://api.github.com/repos/example-org/example-repo/subscribers", "subscription_url": "https://api.github.com/repos/example-org/example-repo/subscription", "svn_url": "https://github.com/example-org/example-repo", "tags_url": 
"https://api.github.com/repos/example-org/example-repo/tags", "teams_url": "https://api.github.com/repos/example-org/example-repo/teams", "topics": [], "trees_url": "https://api.github.com/repos/example-org/example-repo/git/trees{/sha}", "updated_at": "2026-03-10T13:50:00Z", "url": "https://api.github.com/repos/example-org/example-repo", "visibility": "public", "watchers": 3, "watchers_count": 3, "web_commit_signoff_required": false }, "sender": { "avatar_url": "https://avatars.githubusercontent.com/u/87654321?v=4", "events_url": "https://api.github.com/users/octo-user/events{/privacy}", "followers_url": "https://api.github.com/users/octo-user/followers", "following_url": "https://api.github.com/users/octo-user/following{/other_user}", "gists_url": "https://api.github.com/users/octo-user/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/octo-user", "id": 87654321, "login": "octo-user", "node_id": "MDQ6VXNlcjg3NjU0MzIx", "organizations_url": "https://api.github.com/users/octo-user/orgs", "received_events_url": "https://api.github.com/users/octo-user/received_events", "repos_url": "https://api.github.com/users/octo-user/repos", "site_admin": false, "starred_url": "https://api.github.com/users/octo-user/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/octo-user/subscriptions", "type": "User", "url": "https://api.github.com/users/octo-user", "user_view_type": "public" } }, "timestamp": "2026-03-10T13:35:00.31254162Z", "type": "github.push" } ``` ## On Release The On Release trigger starts a workflow execution when release events occur in a GitHub repository. 
### Use Cases

- **Deployment automation**: Trigger deployments when releases are published
- **Notification workflows**: Send notifications about new releases
- **Release processing**: Process release artifacts or metadata
- **Distribution workflows**: Distribute releases to multiple systems

### Configuration

- **Repository**: Select the GitHub repository to monitor
- **Actions**: Select which release actions to listen for (published, created, etc.)

### Event Data

Each release event includes:

- **action**: The action that triggered the event (published, created, etc.)
- **release**: Complete release information including tag, name, body, and assets
- **repository**: Repository information
- **sender**: The user who triggered the event

### Webhook Setup

This trigger automatically sets up a GitHub webhook when configured. The webhook is managed by SuperPlane and will be cleaned up when the trigger is removed.

### Example Data

```json
{
  "data": {
    "action": "published",
    "release": {
      "html_url": "https://github.com/acme/widgets/releases/tag/v1.2.3",
      "id": 3001,
      "name": "Release 1.2.3",
      "tag_name": "v1.2.3"
    },
    "repository": {
      "full_name": "acme/widgets",
      "html_url": "https://github.com/acme/widgets",
      "id": 123456
    },
    "sender": {
      "html_url": "https://github.com/octocat",
      "id": 101,
      "login": "octocat"
    }
  },
  "timestamp": "2026-01-16T17:56:16.680755501Z",
  "type": "github.release"
}
```

## On Tag Created

The On Tag Created trigger starts a workflow execution when a new tag is created in a GitHub repository.
### Use Cases

- **Version tagging**: Trigger workflows when version tags are created
- **Release automation**: Automatically create releases from tags
- **Deployment triggers**: Deploy specific versions based on tags
- **Tag processing**: Process or validate tags as they're created

### Configuration

- **Repository**: Select the GitHub repository to monitor
- **Tags**: Configure which tags to listen for using predicates (e.g., equals "v*", starts with "release-")

### Event Data

Each tag event includes:

- **ref**: The tag name (e.g., "v1.0.0")
- **ref_type**: Type of reference (`tag`)
- **repository**: Repository information
- **sender**: The user who created the tag

### Webhook Setup

This trigger automatically sets up a GitHub webhook when configured. The webhook is managed by SuperPlane and will be cleaned up when the trigger is removed.

### Example Data

```json
{
  "data": {
    "description": "Example repository for webhook payloads",
    "master_branch": "main",
    "pusher_type": "user",
    "ref": "v1.2.3",
    "ref_type": "tag",
    "repository": {
      "full_name": "acme/widgets",
      "html_url": "https://github.com/acme/widgets",
      "id": 123456
    },
    "sender": {
      "html_url": "https://github.com/octocat",
      "id": 101,
      "login": "octocat"
    }
  },
  "timestamp": "2026-01-16T17:56:16.680755501Z",
  "type": "github.tagCreated"
}
```

## On Workflow Run

The On Workflow Run trigger starts a workflow execution when GitHub Actions workflow runs complete.

### Use Cases

- **Workflow orchestration**: Chain workflows together based on completion
- **Status monitoring**: Monitor CI/CD pipeline results
- **Notification workflows**: Send notifications when workflows succeed or fail
- **Post-processing**: Process artifacts or results after workflow completion

### Configuration

- **Repository**: Select the GitHub repository to monitor
- **Conclusions**: Select which workflow conclusions to listen for (success, failure, cancelled, etc.)
- **Workflow Files**: Optional list of specific workflow files to monitor (leave empty to match all workflows)

### Event Data

Each workflow run event includes:

- **action**: The action that triggered the event (completed, requested, etc.)
- **workflow_run**: Complete workflow run information including status, conclusion, and logs URL
- **repository**: Repository information
- **sender**: The user who triggered the workflow

### Webhook Setup

This trigger automatically sets up a GitHub webhook when configured. The webhook is managed by SuperPlane and will be cleaned up when the trigger is removed.

### Example Data

```json
{ "data": { "action": "completed", "organization": { "avatar_url": "https://avatars.githubusercontent.com/u/12345678?v=4", "description": "Example organization for demo data", "events_url": "https://api.github.com/orgs/example-org/events", "hooks_url": "https://api.github.com/orgs/example-org/hooks", "id": 12345678, "issues_url": "https://api.github.com/orgs/example-org/issues", "login": "example-org", "members_url": "https://api.github.com/orgs/example-org/members{/member}", "node_id": "O_kgDOBb1AaA", "public_members_url": "https://api.github.com/orgs/example-org/public_members{/member}", "repos_url": "https://api.github.com/orgs/example-org/repos", "url": "https://api.github.com/orgs/example-org" }, "repository": { "created_at": "2024-05-10T12:00:00Z", "default_branch": "main", "description": "Example repository for webhook payloads", "fork": false, "full_name": "example-org/example-repo", "html_url": "https://github.com/example-org/example-repo", "id": 987654321, "name": "example-repo", "node_id": "R_kgDOAbCdEf", "owner": { "avatar_url": "https://avatars.githubusercontent.com/u/12345678?v=4", "gravatar_id": "", "html_url": "https://github.com/example-org", "id": 12345678, "login": "example-org", "node_id": "O_kgDOBb1AaA", "site_admin": false, "type": "Organization", "url": "https://api.github.com/users/example-org" }, "private": false, "pushed_at":
"2026-03-10T14:22:11Z", "updated_at": "2026-03-10T13:50:00Z", "url": "https://api.github.com/repos/example-org/example-repo" }, "sender": { "avatar_url": "https://avatars.githubusercontent.com/u/87654321?v=4", "gravatar_id": "", "html_url": "https://github.com/alexdoe", "id": 87654321, "login": "alexdoe", "node_id": "MDQ6VXNlcjg3NjU0MzIx", "site_admin": false, "type": "User", "url": "https://api.github.com/users/alexdoe" }, "workflow": { "badge_url": "https://github.com/example-org/example-repo/workflows/CI/badge.svg", "created_at": "2024-05-10T12:00:00Z", "html_url": "https://github.com/example-org/example-repo/actions/workflows/ci.yml", "id": 9876543, "name": "CI", "node_id": "W_kwDOBb1AaA0123456", "path": ".github/workflows/ci.yml", "state": "active", "updated_at": "2026-03-01T10:00:00Z", "url": "https://api.github.com/repos/example-org/example-repo/actions/workflows/9876543" }, "workflow_run": { "actor": { "avatar_url": "https://avatars.githubusercontent.com/u/87654321?v=4", "gravatar_id": "", "html_url": "https://github.com/alexdoe", "id": 87654321, "login": "alexdoe", "node_id": "MDQ6VXNlcjg3NjU0MzIx", "site_admin": false, "type": "User", "url": "https://api.github.com/users/alexdoe" }, "artifacts_url": "https://api.github.com/repos/example-org/example-repo/actions/runs/12345678901/artifacts", "cancel_url": "https://api.github.com/repos/example-org/example-repo/actions/runs/12345678901/cancel", "check_suite_id": 11223344556, "check_suite_node_id": "CS_kwDOBb1AaA0000000", "check_suite_url": "https://api.github.com/repos/example-org/example-repo/check-suites/11223344556", "conclusion": "success", "created_at": "2026-03-10T14:22:11Z", "event": "push", "head_branch": "main", "head_commit": { "author": { "email": "alex.doe@example.com", "name": "Alex Doe" }, "committer": { "email": "noreply@example.com", "name": "GitHub" }, "id": "4f9c2e1a7b3d45c0d1e9f23456789abcdeffed01", "message": "feat: add lightweight metrics endpoint (#42)\n\nAdds a basic /metrics handler 
with a minimal gauge.", "timestamp": "2026-03-10T14:22:11+01:00", "tree_id": "7a8b9c0d1e2f3a4b5c6d7e8f90123456789abcde" }, "head_repository": { "description": "Example repository for webhook payloads", "fork": false, "full_name": "example-org/example-repo", "html_url": "https://github.com/example-org/example-repo", "id": 987654321, "name": "example-repo", "node_id": "R_kgDOAbCdEf", "owner": { "avatar_url": "https://avatars.githubusercontent.com/u/12345678?v=4", "gravatar_id": "", "html_url": "https://github.com/example-org", "id": 12345678, "login": "example-org", "node_id": "O_kgDOBb1AaA", "site_admin": false, "type": "Organization", "url": "https://api.github.com/users/example-org" }, "private": false, "url": "https://api.github.com/repos/example-org/example-repo" }, "head_sha": "4f9c2e1a7b3d45c0d1e9f23456789abcdeffed01", "html_url": "https://github.com/example-org/example-repo/actions/runs/12345678901", "id": 12345678901, "jobs_url": "https://api.github.com/repos/example-org/example-repo/actions/runs/12345678901/jobs", "logs_url": "https://api.github.com/repos/example-org/example-repo/actions/runs/12345678901/logs", "name": "CI", "node_id": "WFR_kwLOBb1AaA123456789", "path": ".github/workflows/ci.yml", "pull_requests": [], "referenced_workflows": [], "repository": { "clone_url": "https://github.com/example-org/example-repo.git", "created_at": "2024-05-10T12:00:00Z", "default_branch": "main", "description": "Example repository for webhook payloads", "fork": false, "full_name": "example-org/example-repo", "git_url": "git://github.com/example-org/example-repo.git", "html_url": "https://github.com/example-org/example-repo", "id": 987654321, "name": "example-repo", "node_id": "R_kgDOAbCdEf", "owner": { "avatar_url": "https://avatars.githubusercontent.com/u/12345678?v=4", "gravatar_id": "", "html_url": "https://github.com/example-org", "id": 12345678, "login": "example-org", "node_id": "O_kgDOBb1AaA", "site_admin": false, "type": "Organization", "url": 
"https://api.github.com/users/example-org" }, "private": false, "pushed_at": "2026-03-10T14:22:11Z", "ssh_url": "git@github.com:example-org/example-repo.git", "updated_at": "2026-03-10T13:50:00Z", "url": "https://api.github.com/repos/example-org/example-repo" }, "rerun_url": "https://api.github.com/repos/example-org/example-repo/actions/runs/12345678901/rerun", "run_attempt": 1, "run_number": 42, "run_started_at": "2026-03-10T14:22:11Z", "status": "completed", "triggering_actor": { "avatar_url": "https://avatars.githubusercontent.com/u/87654321?v=4", "gravatar_id": "", "html_url": "https://github.com/alexdoe", "id": 87654321, "login": "alexdoe", "node_id": "MDQ6VXNlcjg3NjU0MzIx", "site_admin": false, "type": "User", "url": "https://api.github.com/users/alexdoe" }, "updated_at": "2026-03-10T14:25:30Z", "url": "https://api.github.com/repos/example-org/example-repo/actions/runs/12345678901", "workflow_id": 9876543, "workflow_url": "https://api.github.com/repos/example-org/example-repo/actions/workflows/9876543" } }, "timestamp": "2026-03-10T14:25:30.31254162Z", "type": "github.workflowRun" } ``` ## Add Issue Assignee The Add Issue Assignee component adds one or more assignees to an existing GitHub issue without affecting existing assignees. ### Use Cases - **Auto-assignment**: Automatically assign issues to team members based on workflow triggers - **Escalation**: Add additional assignees when issues require attention from specific people - **On-call routing**: Assign issues to the current on-call engineer ### Configuration - **Repository**: Select the GitHub repository containing the issue - **Issue Number**: The issue number to add assignees to (supports expressions) - **Assignees**: List of GitHub usernames to assign to the issue ### Output Returns the updated issue object with all current information. 
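GitHub exposes assignee addition as an additive REST endpoint, which is why existing assignees are left untouched. A hypothetical sketch of the equivalent request (the `API_ROOT` constant and helper name are illustrative, not SuperPlane internals):

```python
# Sketch of the additive add-assignees call; this is illustrative only.
API_ROOT = "https://api.github.com"  # assumption: github.com, not GitHub Enterprise

def add_assignees_request(repo: str, issue_number: int, assignees: list) -> dict:
    """Describe the POST /repos/{repo}/issues/{number}/assignees request."""
    return {
        "method": "POST",
        "url": f"{API_ROOT}/repos/{repo}/issues/{issue_number}/assignees",
        "json": {"assignees": assignees},  # logins are appended, not replaced
    }

req = add_assignees_request("acme/widgets", 42, ["octocat"])
```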
### Example Output ```json { "data": { "assignees": [ { "id": 1, "login": "octocat" } ], "html_url": "https://github.com/acme/widgets/issues/42", "id": 101, "number": 42, "state": "open", "title": "Fix flaky build" }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "github.issue" } ``` ## Add Issue Label The Add Issue Label component adds one or more labels to an existing GitHub issue without affecting existing labels. ### Use Cases - **Triage automation**: Automatically label issues based on content or source - **Status tracking**: Add status labels as issues move through workflows - **Priority tagging**: Apply priority labels based on external signals ### Configuration - **Repository**: Select the GitHub repository containing the issue - **Issue Number**: The issue number to add labels to (supports expressions) - **Labels**: List of label names to add to the issue ### Output Returns the full list of labels currently on the issue after the addition. ### Example Output ```json { "data": [ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 208045946, "name": "bug" }, { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 208045947, "name": "enhancement" } ], "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "github.labels" } ``` ## Add Reaction The Add Reaction component adds a reaction emoji to a GitHub comment. 
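GitHub's reactions API accepts a fixed set of content values. A small hypothetical guard (not SuperPlane code) shows the allowed set for the Reaction field:

```python
# The complete set of reaction values GitHub's API accepts.
VALID_REACTIONS = {"+1", "-1", "laugh", "confused", "heart", "hooray", "rocket", "eyes"}

def validate_reaction(content: str) -> str:
    """Return content unchanged, or raise if GitHub would reject it."""
    if content not in VALID_REACTIONS:
        raise ValueError(f"unsupported reaction: {content!r}")
    return content
```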
### Use Cases - **Acknowledge commands**: Add eyes to PR comments to indicate automation saw them - **Workflow feedback**: React with +1 or rocket on success paths - **Fast triage signals**: Use reactions to show status without posting extra comments ### Configuration - **Repository**: Select the GitHub repository - **Target**: Choose PR conversation comment or PR review line comment - **Comment ID**: The GitHub comment ID to react to (supports expressions) - **Reaction**: One of GitHub's supported reaction values ### Output Returns the created GitHub reaction object, including id, content, user, and timestamp. ### Example Output ```json { "data": { "content": "eyes", "created_at": "2026-01-16T17:56:16Z", "id": 1, "user": { "login": "superplane-bot" } }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "github.reaction" } ``` ## Create Issue The Create Issue component creates a new issue in a specified GitHub repository. ### Use Cases - **Automated bug reporting**: Create issues automatically when errors are detected - **Task creation**: Generate issues from external systems or workflows - **Notification tracking**: Convert notifications into trackable issues - **Workflow automation**: Create issues as part of automated processes ### Configuration - **Repository**: Select the GitHub repository where the issue will be created - **Title**: The issue title (supports expressions) - **Body**: The issue body/description (supports markdown and expressions) - **Assignees**: Optional list of GitHub usernames to assign the issue to - **Labels**: Optional list of labels to apply to the issue ### Output Returns the created issue object with details including: - Issue number - URL - State - Created timestamp - All issue metadata ### Example Output ```json { "data": { "html_url": "https://github.com/acme/widgets/issues/42", "id": 101, "number": 42, "state": "open", "title": "Fix flaky build", "user": { "login": "octocat" } }, "timestamp": "2026-01-16T17:56:16.680755501Z", 
"type": "github.issue" } ``` ## Create Issue Comment The Create Issue Comment component adds a comment to an existing GitHub issue or pull request. Issues and pull requests share the same comment API in GitHub. ### Use Cases - **Deployment updates**: Post deployment status or remediation updates to GitHub issues - **Runbook linking**: Add runbook links, error details, or status for responders - **Cross-platform sync**: Sync Slack or PagerDuty notes into GitHub as comments - **Automated comments**: Add automated comments based on workflow events ### Configuration - **Repository**: Select the GitHub repository containing the issue - **Issue Number**: The issue or PR number to comment on (supports expressions) - **Body**: The comment text (supports Markdown and expressions) ### Output Returns the created comment object including: - Comment ID and URL - Comment body - Author information - Created timestamp ### Example Output ```json { "data": { "body": "Deployment to production completed successfully.", "created_at": "2026-01-16T17:56:16Z", "html_url": "https://github.com/acme/widgets/issues/42#issuecomment-5001", "id": 5001, "updated_at": "2026-01-16T17:56:16Z", "user": { "login": "superplane-app[bot]" } }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "github.issueComment" } ``` ## Create Release The Create Release component creates a new release in a GitHub repository. 
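The configuration fields map onto the JSON body of GitHub's create-release endpoint (`POST /repos/{owner}/{repo}/releases`). A hypothetical sketch of assembling that body, where optional fields are omitted rather than sent empty:

```python
def create_release_payload(tag_name, name=None, body=None, draft=False,
                           prerelease=False, generate_release_notes=False):
    """Build the JSON body for POST /repos/{owner}/{repo}/releases (illustrative)."""
    payload = {
        "tag_name": tag_name,
        "draft": draft,
        "prerelease": prerelease,
        "generate_release_notes": generate_release_notes,
    }
    if name is not None:
        payload["name"] = name
    if body is not None:
        payload["body"] = body  # markdown release notes
    return payload
```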
### Use Cases - **Automated releases**: Create releases automatically after successful builds - **Version management**: Tag and release new versions of software - **Deployment automation**: Create releases as part of deployment workflows - **Release notes**: Automatically generate and publish release notes ### Configuration - **Repository**: Select the GitHub repository - **Version Strategy**: How to determine the version (manual tag, auto-increment) - **Tag Name**: Git tag name for the release (supports expressions) - **Name**: Release title/name (optional, supports expressions) - **Body**: Release notes/description (optional, supports markdown and expressions) - **Draft**: Create as draft release (not published) - **Prerelease**: Mark as pre-release - **Generate Release Notes**: Automatically generate release notes from commits ### Output Returns the created release object with all release information including tag, assets, and metadata. ### Example Output ```json { "data": { "draft": false, "html_url": "https://github.com/acme/widgets/releases/tag/v1.2.3", "id": 3001, "name": "Release 1.2.3", "prerelease": false, "tag_name": "v1.2.3" }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "github.release" } ``` ## Create Review The Create Review component submits a pull request review (approve, request changes, or comment) on a GitHub pull request. ### Use Cases - **Automation**: Auto-approve when checks pass - **Quality gates**: Request changes when checks fail - **Bots**: Post review feedback ### Configuration - **Repository**: Select the GitHub repository - **Pull Number**: Pull request number - **Event**: APPROVE, REQUEST_CHANGES, or COMMENT - **Body**: Review body (optional for APPROVE; required for REQUEST_CHANGES and COMMENT) ### Output Emits the submitted review object including: - id, state, submitted_at - body and user ### Example Output ```json { "data": { "body": "LGTM. 
Approving after successful CI.", "html_url": "https://github.com/acme/widgets/pull/42#pullrequestreview-9001", "id": 9001, "state": "APPROVED", "submitted_at": "2026-01-25T12:34:56Z", "user": { "html_url": "https://github.com/octocat", "id": 1, "login": "octocat" } }, "timestamp": "2026-01-25T12:34:56.000000000Z", "type": "github.pullRequestReview" } ``` ## Delete Release The Delete Release component removes a release from a GitHub repository. ### Use Cases - **Cleanup**: Remove old or incorrect releases - **Rollback**: Delete releases that were created in error - **Maintenance**: Clean up draft or test releases - **Automated cleanup**: Remove releases as part of maintenance workflows ### Configuration - **Repository**: Select the GitHub repository - **Release Strategy**: How to find the release (by tag name or latest) - **Tag Name**: Git tag name of the release to delete (if using tag strategy) - **Delete Tag**: Also delete the associated Git tag (optional) ### Output Returns confirmation of the deletion. ### Example Output ```json { "data": { "deleted_at": "2026-01-16T17:55:00Z", "draft": false, "html_url": "https://github.com/acme/widgets/releases/tag/v1.2.3", "id": 3001, "name": "Release 1.2.3", "prerelease": false, "tag_deleted": true, "tag_name": "v1.2.3" }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "github.release" } ``` ## Get Issue The Get Issue component retrieves a specific issue from a GitHub repository by its issue number. 
### Use Cases - **Issue lookup**: Fetch issue details for processing or display - **Workflow automation**: Get issue information to make decisions in workflows - **Data enrichment**: Retrieve issue data to combine with other information - **Status checking**: Check issue status before performing actions ### Configuration - **Repository**: Select the GitHub repository containing the issue - **Issue Number**: The issue number to retrieve (supports expressions) ### Output Returns the complete issue object including: - Issue number, title, and body - State (open/closed) - Labels and assignees - Created and updated timestamps - Author information - Comments count and other metadata ### Example Output ```json { "data": { "comments": 3, "html_url": "https://github.com/acme/widgets/issues/42", "id": 101, "number": 42, "state": "open", "title": "Fix flaky build", "user": { "login": "octocat" } }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "github.issue" } ``` ## Get Release The Get Release component retrieves release information from a GitHub repository. 
### Use Cases - **Release Monitoring**: Get details about a specific release - **Deployment Pipelines**: Fetch release assets and metadata for deployment - **Version Tracking**: Monitor release status and changelog - **CI/CD Integration**: Retrieve release information for build processes ### Configuration - **Repository**: Select the GitHub repository - **Release Strategy**: How to find the release (by tag name, by ID, or latest) - **Tag Name**: Git tag name of the release (if using tag strategy) - **Release ID**: Numeric release ID (if using ID strategy) ### Output Returns release information including: - Release ID, name, and tag name - Release body/description - Draft and prerelease status - Created and published timestamps - Author information - Asset URLs ### Example Output ```json { "data": { "assets": [ { "browser_download_url": "https://github.com/acme/widgets/releases/download/v1.2.3/app-v1.2.3.zip", "content_type": "application/zip", "download_count": 42, "id": 1, "label": "Application Bundle", "name": "app-v1.2.3.zip", "size": 1024000 } ], "author": { "avatar_url": "https://github.com/images/error/octocat_happy.gif", "html_url": "https://github.com/octocat", "id": 1, "login": "octocat" }, "body": "## What's Changed\n\n- Feature A\n- Bug fix B", "created_at": "2026-01-15T10:00:00Z", "draft": false, "html_url": "https://github.com/acme/widgets/releases/tag/v1.2.3", "id": 3001, "name": "Release 1.2.3", "prerelease": false, "published_at": "2026-01-16T12:00:00Z", "tag_name": "v1.2.3" }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "github.release" } ``` ## Get Repository Permission The Get Repository Permission component retrieves a user's effective permission level for a GitHub repository. 
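GitHub reports a user's effective access as one of a few permission levels (`admin`, `write`, `read`, `none`). Ordering them lets a workflow gate on questions like "can this user push?"; a hypothetical helper:

```python
# Permission levels in increasing order of access.
LEVELS = {"none": 0, "read": 1, "write": 2, "admin": 3}

def can_push(permission: str) -> bool:
    """True if the reported permission level allows pushing to the repository."""
    return LEVELS.get(permission, 0) >= LEVELS["write"]
```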
### Use Cases - **Access checks**: Verify if a user has expected repository access - **Automation gates**: Branch workflow behavior by repository permissions - **Auditing**: Inspect repository roles in automated compliance checks - **Triage routing**: Route incidents based on whether a user can push/administer ### Example Output ```json { "data": { "permission": "admin", "role_name": "admin", "user": { "html_url": "https://github.com/octocat", "id": 1, "login": "octocat" } }, "timestamp": "2026-02-26T14:30:00Z", "type": "github.repositoryPermission" } ``` ## Get Workflow Usage The Get Workflow Usage component retrieves billable GitHub Actions usage (minutes) for the installation's organization. ### Prerequisites This action calls GitHub's **billing usage** API, which requires the GitHub App to have **Organization permission: Organization administration (read)**. **Important**: Existing installations will need to approve the new permission when prompted by GitHub. Until the permission is granted, this action will return a 403 error. **Note**: This component uses GitHub's enhanced billing usage report API, which provides detailed usage information. ### Behavior - Returns billing data for the **current billing cycle** - Only private repositories on GitHub-hosted runners accrue billable minutes - Public repositories and self-hosted runners show zero billable usage - Can filter by specific repositories when selected - Uses enhanced billing platform API for accurate reporting ### Configuration - **Repositories** (optional, multiselect): Select one or more specific repositories to track. These will be included in the output for reference (max 5) and stored in node metadata with full repository details (ID, name, URL). When repositories are selected, only usage for those repositories is included in the totals. 
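The per-SKU breakdown sums to the overall minutes figure, so a downstream node can compute totals or budget alerts from it directly. A hypothetical sketch using the same shape as the output fields:

```python
def total_minutes(breakdown: dict) -> float:
    """Sum billable minutes across all runner SKUs."""
    return sum(breakdown.values())

def over_budget(breakdown: dict, budget_minutes: float) -> bool:
    """Hypothetical quota check: True if usage exceeds the given budget."""
    return total_minutes(breakdown) > budget_minutes

# Shaped like minutes_used_breakdown in the component output.
usage = {"Actions Linux": 1200, "Actions Windows": 400.5, "Actions macOS": 240}
```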
### Output Returns usage data with: - `minutes_used`: Total billable minutes used in the current billing cycle - `minutes_used_breakdown`: Map of minutes by runner SKU (e.g., "Actions Linux": 120, "Actions Windows": 60, "Actions macOS": 30) - `included_minutes`: Always 0 (not provided by enhanced billing API) - `total_paid_minutes_used`: Estimated paid minutes based on cost data - `repositories`: List of selected repositories for tracking (max 5) **Note**: Breakdown is by runner SKU (OS and type), not by individual workflow. ### Node Metadata The component stores repository information in node metadata: - Repository ID, name, and URL for each selected repository (max 5) - This metadata is displayed in the workflow canvas for easy reference ### Use Cases - **Billing Monitoring**: Track GitHub Actions usage for billing purposes - **Quota Management**: Monitor usage to avoid exceeding billing quotas - **Cost Control**: Alert when usage approaches limits or budget thresholds - **Usage Reporting**: Generate monthly or periodic usage reports for compliance - **Resource Planning**: Analyze runner usage patterns by OS type ### References - [GitHub Billing Usage API](https://docs.github.com/rest/billing/usage) - [GitHub Enhanced Billing Platform](https://docs.github.com/billing/using-the-new-billing-platform) - [Permissions required for GitHub Apps - Organization Administration](https://docs.github.com/en/rest/overview/permissions-required-for-github-apps#organization-permissions-for-administration) - [Viewing your usage of metered products](https://docs.github.com/en/billing/managing-billing-for-github-actions/viewing-your-github-actions-usage) ### Example Output ```json { "data": { "included_minutes": 0, "minutes_used": 1840.5, "minutes_used_breakdown": { "Actions Linux": 1200, "Actions Windows": 400.5, "Actions macOS": 240 }, "repositories": [ "repo1", "repo2", "repo3" ], "total_paid_minutes_used": 150 }, "timestamp": "2026-02-17T20:00:00Z", "type": 
"github.workflowUsage" } ``` ## Publish Commit Status The Publish Commit Status component creates a status check on a GitHub commit, commonly used for CI/CD integrations. ### Use Cases - **CI/CD integration**: Report build and test results to GitHub - **Status reporting**: Update commit status from external systems - **Deployment tracking**: Mark commits as deployed or failed - **Quality gates**: Report code quality check results ### Configuration - **Repository**: Select the GitHub repository - **Commit SHA**: The full 40-character commit SHA (supports expressions) - **State**: Status state - pending, success, failure, or error - **Context**: A label to identify this status check (e.g., "ci/build", "deploy/production") - **Description**: Short description of the status (max ~140 characters, optional) - **Target URL**: Link to build logs, test results, or deployment details (optional) ### Output Returns the created status object with all status information. ### Example Output ```json { "data": { "context": "ci/build", "created_at": "2026-01-16T17:45:00Z", "description": "All checks passed", "state": "success", "target_url": "https://ci.example.com/builds/123", "updated_at": "2026-01-16T17:45:10Z" }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "github.commitStatus" } ``` ## Remove Issue Assignee The Remove Issue Assignee component removes one or more assignees from an existing GitHub issue without affecting other assignees. 
### Use Cases - **De-escalation**: Remove assignees when issues are resolved or transferred - **Rotation**: Remove previous on-call assignees when rotating responsibilities - **Cleanup**: Remove assignees who are no longer involved with an issue ### Configuration - **Repository**: Select the GitHub repository containing the issue - **Issue Number**: The issue number to remove assignees from (supports expressions) - **Assignees**: List of GitHub usernames to remove from the issue ### Output Returns the updated issue object with all current information. ### Example Output ```json { "data": { "assignees": [], "html_url": "https://github.com/acme/widgets/issues/42", "id": 101, "number": 42, "state": "open", "title": "Fix flaky build" }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "github.issue" } ``` ## Remove Issue Label The Remove Issue Label component removes one or more labels from an existing GitHub issue without affecting other labels. ### Use Cases - **Triage cleanup**: Remove temporary triage labels after processing - **Status transitions**: Remove old status labels when issues move forward - **Automated cleanup**: Strip labels that no longer apply based on workflow events ### Configuration - **Repository**: Select the GitHub repository containing the issue - **Issue Number**: The issue number to remove labels from (supports expressions) - **Labels**: List of label names to remove from the issue ### Output Returns the remaining list of labels on the issue after the removal. ### Example Output ```json { "data": [ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 208045947, "name": "enhancement" } ], "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "github.labels" } ``` ## Run Workflow The Run Workflow component triggers a GitHub Actions workflow and waits for it to complete. 
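The component dispatches the workflow, waits for the run to finish, and then routes the result to a Passed or Failed channel. A minimal sketch of that routing, assuming every non-success conclusion (failure, cancellation, and so on) goes to Failed:

```python
def route(conclusion: str) -> str:
    """Map a GitHub Actions run conclusion onto an output channel (sketch)."""
    # Assumption: anything other than "success" routes to the Failed channel.
    return "passed" if conclusion == "success" else "failed"
```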
### Use Cases - **CI/CD orchestration**: Trigger builds and deployments from SuperPlane workflows - **Automated testing**: Run test suites as part of workflow automation - **Multi-stage pipelines**: Coordinate complex deployment pipelines - **Workflow chaining**: Chain multiple GitHub Actions workflows together ### How It Works 1. Dispatches the specified GitHub Actions workflow with optional inputs 2. Waits for the workflow run to complete (monitored via webhook and polling) 3. Routes execution based on workflow conclusion: - **Passed channel**: Workflow completed successfully - **Failed channel**: Workflow failed or was cancelled ### Configuration - **Repository**: Select the GitHub repository containing the workflow - **Workflow File**: Path to the workflow file (e.g., `.github/workflows/ci.yml`) - **Branch or Tag**: Git reference to run the workflow on (default: main) - **Inputs**: Optional workflow inputs as key-value pairs (supports expressions) ### Output Channels - **Passed**: Emitted when workflow completes successfully - **Failed**: Emitted when workflow fails or is cancelled ### Notes - The component automatically sets up webhook monitoring for workflow completion - Falls back to polling if webhook doesn't arrive - Can be cancelled, which will cancel the running GitHub Actions workflow ### Example Output ```json { "data": { "workflow_run": { "conclusion": "success", "html_url": "https://github.com/acme/widgets/actions/runs/9001", "id": 9001, "status": "completed" } }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "github.workflow.finished" } ``` ## Update Issue The Update Issue component modifies an existing GitHub issue with new information. 
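Every field is optional, and only fields you set are sent, so unset attributes keep their current values. A hypothetical sketch of building such a partial update body:

```python
def update_issue_payload(title=None, body=None, state=None,
                         assignees=None, labels=None) -> dict:
    """Include only the fields the caller actually set (illustrative helper)."""
    fields = {"title": title, "body": body, "state": state,
              "assignees": assignees, "labels": labels}
    return {k: v for k, v in fields.items() if v is not None}
```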
### Use Cases - **Status updates**: Change issue state (open/closed) based on workflow results - **Label management**: Add or update labels on issues - **Assignee updates**: Assign issues to team members automatically - **Content updates**: Update issue title or body with new information ### Configuration - **Repository**: Select the GitHub repository containing the issue - **Issue Number**: The issue number to update (supports expressions) - **Title**: New title for the issue (optional, supports expressions) - **Body**: New body/description for the issue (optional, supports expressions) - **State**: Change issue state to "open" or "closed" (optional) - **Assignees**: List of GitHub usernames to assign the issue to (optional) - **Labels**: List of labels to apply to the issue (optional) ### Output Returns the updated issue object with all current information. ### Example Output ```json { "data": { "html_url": "https://github.com/acme/widgets/issues/42", "id": 101, "number": 42, "state": "closed", "title": "Fix flaky build", "updated_at": "2026-01-16T17:40:00Z" }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "github.issue" } ``` ## Update Release The Update Release component modifies an existing GitHub release. 
### Use Cases - **Release updates**: Update release notes or metadata after creation - **Draft to published**: Convert draft releases to published releases - **Metadata updates**: Update release name, description, or tags - **Prerelease management**: Change prerelease status ### Configuration - **Repository**: Select the GitHub repository - **Release Strategy**: How to find the release (by tag name or latest) - **Tag Name**: Git tag name of the release to update (if using tag strategy) - **Name**: New release title/name (optional, supports expressions) - **Body**: New release notes/description (optional, supports markdown and expressions) - **Draft**: Update draft status - **Prerelease**: Update prerelease status - **Generate Release Notes**: Regenerate release notes from commits ### Output Returns the updated release object with all current information. ### Example Output ```json { "data": { "draft": false, "html_url": "https://github.com/acme/widgets/releases/tag/v1.2.3", "id": 3001, "name": "Release 1.2.3", "prerelease": false, "tag_name": "v1.2.3", "updated_at": "2026-01-16T17:50:00Z" }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "github.release" } ``` #### GitLab Source URL: https://docs.superplane.com/components/gitlab Manage and react to changes in your GitLab repositories ## Triggers ## Actions ## Instructions When connecting using App OAuth: - Leave **Client ID** and **Secret** empty to start the setup wizard. When connecting using Personal Access Token: - Go to Preferences → Personal Access Token → Add New token - Use **Scopes**: api, read_user, read_api, write_repository, read_repository - Copy the token and paste it into the **Access Token** configuration field, then click **Save**. ## On Issue The On Issue trigger starts a workflow execution when issue events occur in a GitLab project.
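The trigger's action and label filters can be pictured with a small sketch. This is a hypothetical helper, not SuperPlane code; it reads the `object_attributes.action` and `labels[].title` fields shown in the example payload, and assumes the label filter matches on any overlap:

```python
def should_trigger(event: dict, actions: set, labels: set = None) -> bool:
    """Filter a GitLab issue webhook by action and, optionally, labels."""
    action = event.get("object_attributes", {}).get("action")
    if action not in actions:
        return False
    if labels:
        issue_labels = {l.get("title") for l in event.get("labels", [])}
        if not labels & issue_labels:  # assumption: any matching label suffices
            return False
    return True

# Minimal payload with the fields the filter inspects.
event = {"object_attributes": {"action": "open"}, "labels": [{"title": "bug"}]}
```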
### Use Cases - **Notify Slack** when an issue is created or assigned for triage - **Create a Jira issue** when a GitLab issue is created for traceability - **Update external dashboards** or close linked tickets when an issue is closed ### Configuration - **Project** (required): GitLab project to monitor - **Actions** (required): Select which issue actions to listen for (opened, closed, reopened, etc.). Default: opened. - **Labels** (optional): Only trigger for issues with specific labels ### Outputs - **Default channel**: Emits issue payload including issue IID, title, state, labels, assignees, author, and action type ### Webhook Setup This trigger automatically sets up a GitLab webhook when configured. The webhook is managed by SuperPlane and will be cleaned up when the trigger is removed. ### Example Data ```json { "data": { "assignees": [ { "avatar_url": "https://www.gravatar.com/avatar/abc123", "id": 1, "name": "John Doe", "username": "johndoe" } ], "event_type": "issue", "labels": [ { "color": "#dc3545", "created_at": "2026-01-01T00:00:00Z", "description": "Bug reports", "group_id": null, "id": 206, "project_id": 15, "template": false, "title": "bug", "type": "ProjectLabel", "updated_at": "2026-01-01T00:00:00Z" } ], "object_attributes": { "action": "open", "created_at": "2026-02-05T14:00:00Z", "description": "This is an example issue description for testing the webhook", "id": 301, "iid": 1, "state": "opened", "title": "Example Issue", "updated_at": "2026-02-05T14:00:00Z", "url": "https://gitlab.com/group/my-project/-/issues/1" }, "object_kind": "issue", "project": { "avatar_url": null, "default_branch": "main", "description": "Example project", "git_http_url": "https://gitlab.com/group/my-project.git", "git_ssh_url": "git@gitlab.com:group/my-project.git", "id": 15, "name": "my-project", "namespace": "group", "path_with_namespace": "group/my-project", "visibility_level": 20, "web_url": "https://gitlab.com/group/my-project" }, "repository": { "description": 
"Example project", "homepage": "https://gitlab.com/group/my-project", "name": "my-project", "url": "git@gitlab.com:group/my-project.git" }, "user": { "avatar_url": "https://www.gravatar.com/avatar/abc123", "email": "johndoe@example.com", "id": 1, "name": "John Doe", "username": "johndoe" } }, "timestamp": "2026-02-05T14:00:00.000000000Z", "type": "gitlab.issue" } ``` ## On Merge Request The On Merge Request trigger starts a workflow execution when merge request events occur in a GitLab project. ### Configuration - **Project** (required): GitLab project to monitor - **Actions** (required): Select which merge request actions to listen for (open, close, merge, etc.). Default: open. ### Outputs - **Default channel**: Emits merge request payload data with action, project, and object attributes ### Example Data ```json { "data": { "assignees": [ { "avatar_url": "https://www.gravatar.com/avatar/ab12cd34?s=80\u0026d=identicon", "email": "jrivera@example.com", "id": 4, "name": "Jamie Rivera", "username": "jrivera" } ], "changes": { "title": { "current": "Add merge request trigger", "previous": "Add trigger" } }, "event_type": "merge_request", "labels": [ { "id": 101, "title": "backend" } ], "object_attributes": { "action": "open", "description": "Adds support for additional GitLab webhook trigger types.", "id": 93, "iid": 12, "state": "opened", "title": "Add merge request trigger" }, "object_kind": "merge_request", "project": { "avatar_url": null, "ci_config_path": null, "default_branch": "main", "description": "Project used to demonstrate merge request webhook payloads.", "git_http_url": "https://gitlab.example.com/group/example.git", "git_ssh_url": "ssh://git@gitlab.example.com:group/example.git", "id": 1, "name": "Example Project", "namespace": "group", "path_with_namespace": "group/example", "visibility_level": 20, "web_url": "https://gitlab.example.com/group/example" }, "repository": { "description": "Project used to demonstrate merge request webhook payloads.", 
"git_http_url": "https://gitlab.example.com/group/example.git", "git_ssh_url": "ssh://git@gitlab.example.com:group/example.git", "homepage": "https://gitlab.example.com/group/example", "name": "Example Project", "url": "ssh://git@gitlab.example.com/group/example.git", "visibility_level": 20 }, "reviewers": [ { "avatar_url": "https://www.gravatar.com/avatar/ef56gh78?s=80\u0026d=identicon", "email": "mlee@example.com", "id": 6, "name": "Morgan Lee", "state": "unreviewed", "username": "mlee" } ], "user": { "avatar_url": "https://www.gravatar.com/avatar/1a29da0ccd099482194440fac762f5ccb4ec53227761d1859979367644a889a5?s=80\u0026d=identicon", "email": "agarcia@example.com", "id": 1, "name": "Alex Garcia", "username": "agarcia" } }, "timestamp": "2026-02-12T20:40:00.000000000Z", "type": "gitlab.mergeRequest" } ``` ## On Milestone The On Milestone trigger starts a workflow execution when milestone events occur in a GitLab project. ### Configuration - **Project** (required): GitLab project to monitor - **Actions** (required): Select which milestone actions to listen for. Default: create. 
### Outputs - **Default channel**: Emits milestone payload data with action, project, and object attributes ### Example Data ```json { "data": { "action": "create", "event_type": "milestone", "object_attributes": { "created_at": "2025-06-16 14:10:57 UTC", "description": "First stable release", "due_date": "2025-06-30", "group_id": null, "id": 61, "iid": 10, "project_id": 1, "start_date": "2025-06-16", "state": "active", "title": "v1.0", "updated_at": "2025-06-16 14:10:57 UTC" }, "object_kind": "milestone", "project": { "avatar_url": null, "ci_config_path": null, "default_branch": "master", "description": "Aut reprehenderit ut est.", "git_http_url": "http://example.com/gitlabhq/gitlab-test.git", "git_ssh_url": "git@example.com:gitlabhq/gitlab-test.git", "homepage": "http://example.com/gitlabhq/gitlab-test", "http_url": "http://example.com/gitlabhq/gitlab-test.git", "id": 1, "name": "Gitlab Test", "namespace": "GitlabHQ", "path_with_namespace": "gitlabhq/gitlab-test", "ssh_url": "git@example.com:gitlabhq/gitlab-test.git", "url": "http://example.com/gitlabhq/gitlab-test.git", "visibility_level": 20, "web_url": "http://example.com/gitlabhq/gitlab-test" } }, "timestamp": "2026-02-12T20:40:00.000000000Z", "type": "gitlab.milestone" } ``` ## On Pipeline The On Pipeline trigger starts a workflow execution when pipeline events occur in a GitLab project. ### Configuration - **Project** (required): GitLab project to monitor - **Statuses** (required): Select which pipeline statuses to listen for. Default: success, failed, canceled. ### Outputs - **Default channel**: Emits pipeline webhook payload data including status, ref, SHA, and project information ### Webhook Setup This trigger automatically sets up a GitLab webhook when configured. The webhook is managed by SuperPlane and will be cleaned up when the trigger is removed. 
### Example Data ```json { "data": { "merge_request": { "iid": 12, "title": "Improve CI pipeline" }, "object_attributes": { "created_at": "2026-02-10 12:00:00 UTC", "duration": 190, "finished_at": "2026-02-10 12:03:10 UTC", "id": 12345, "iid": 321, "ref": "main", "sha": "f4f6c5a0d2e5ad34be4c17c3f166f4d2ff8b0a55", "source": "push", "status": "success", "updated_at": "2026-02-10 12:03:10 UTC", "url": "https://gitlab.com/group/example-project/-/pipelines/12345" }, "object_kind": "pipeline", "project": { "id": 987, "name": "example-project", "path_with_namespace": "group/example-project", "web_url": "https://gitlab.com/group/example-project" }, "user": { "id": 22, "name": "Jamie Rivera", "username": "jrivera" } }, "timestamp": "2026-02-13T18:00:00.000000000Z", "type": "gitlab.pipeline" } ``` ## On Release The On Release trigger starts a workflow execution when release events occur in a GitLab project. ### Configuration - **Project** (required): GitLab project to monitor - **Actions** (required): Select which release actions to listen for. Default: create. 
### Outputs - **Default channel**: Emits release payload data with action and release metadata ### Example Data ```json { "data": { "action": "create", "assets": { "count": 2, "links": [ { "id": 1, "link_type": "other", "name": "Changelog", "url": "https://example.net/changelog" } ], "sources": [ { "format": "zip", "url": "https://example.com/gitlab-org/release-webhook-example/-/archive/v1.1/release-webhook-example-v1.1.zip" }, { "format": "tar.gz", "url": "https://example.com/gitlab-org/release-webhook-example/-/archive/v1.1/release-webhook-example-v1.1.tar.gz" } ] }, "commit": { "author": { "email": "user@example.com", "name": "Example User" }, "id": "ee0a3fb31ac16e11b9dbb596ad16d4af654d08f8", "message": "Release v1.1", "timestamp": "2020-10-31T14:58:32+11:00", "title": "Release v1.1", "url": "https://example.com/gitlab-org/release-webhook-example/-/commit/ee0a3fb31ac16e11b9dbb596ad16d4af654d08f8" }, "created_at": "2020-11-02 12:55:12 UTC", "description": "v1.1 has been released", "id": 1, "name": "v1.1", "object_kind": "release", "project": { "avatar_url": null, "ci_config_path": null, "default_branch": "master", "description": "", "git_http_url": "https://example.com/gitlab-org/release-webhook-example.git", "git_ssh_url": "ssh://git@example.com/gitlab-org/release-webhook-example.git", "id": 1, "name": "release-webhook-example", "namespace": "Gitlab", "path_with_namespace": "gitlab-org/release-webhook-example", "visibility_level": 0, "web_url": "https://example.com/gitlab-org/release-webhook-example" }, "released_at": "2020-11-02 12:55:12 UTC", "tag": "v1.1", "url": "https://example.com/gitlab-org/release-webhook-example/-/releases/v1.1" }, "timestamp": "2026-02-12T20:40:00.000000000Z", "type": "gitlab.release" } ``` ## On Tag The On Tag trigger starts a workflow execution when tag push events occur in a GitLab project. ### Configuration - **Project** (required): GitLab project to monitor - **Tags** (required): Configure tag filters using predicates. 
You can match full refs (refs/tags/v1.0.0) or tag names (v1.0.0). ### Outputs - **Default channel**: Emits tag push payload data including ref, before/after SHA, and project information ### Example Data ```json { "data": { "after": "82b3d5ae55f7080f1e6022629cdb57bfae7cccc7", "before": "0000000000000000000000000000000000000000", "checkout_sha": "82b3d5ae55f7080f1e6022629cdb57bfae7cccc7", "commits": [], "event_name": "tag_push", "message": "Tag message", "object_kind": "tag_push", "project": { "avatar_url": null, "ci_config_path": null, "default_branch": "master", "description": "", "git_http_url": "http://example.com/jsmith/example.git", "git_ssh_url": "git@example.com:jsmith/example.git", "id": 1, "name": "Example", "namespace": "Jsmith", "path_with_namespace": "jsmith/example", "visibility_level": 0, "web_url": "http://example.com/jsmith/example" }, "push_options": {}, "ref": "refs/tags/v1.0.0", "ref_protected": true, "repository": { "description": "", "git_http_url": "http://example.com/jsmith/example.git", "git_ssh_url": "git@example.com:jsmith/example.git", "homepage": "http://example.com/jsmith/example", "name": "Example", "url": "ssh://git@example.com/jsmith/example.git", "visibility_level": 0 }, "total_commits_count": 0, "user_email": "john@example.com", "user_id": 1, "user_name": "John Smith", "user_username": "jsmith" }, "timestamp": "2026-02-12T20:40:00.000000000Z", "type": "gitlab.tag" } ``` ## On Vulnerability The On Vulnerability trigger starts a workflow execution when vulnerability events occur in a GitLab project. 
### Configuration - **Project** (required): GitLab project to monitor ### Outputs - **Default channel**: Emits vulnerability payload data including severity, state, location, and linked issues ### Example Data ```json { "data": { "object_attributes": { "auto_resolved": false, "confidence": "unknown", "confidence_overridden": false, "confirmed_at": "2025-01-08T00:46:14.413Z", "confirmed_by_id": 1, "created_at": "2025-01-08T00:46:14.413Z", "cvss": [ { "vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H", "vendor": "NVD" } ], "dismissed_at": null, "dismissed_by_id": null, "identifiers": [ { "external_id": "29dce398-220a-4315-8c84-16cd8b6d9b05", "external_type": "gemnasium", "name": "Gemnasium-29dce398-220a-4315-8c84-16cd8b6d9b05", "url": "https://gitlab.com/gitlab-org/security-products/gemnasium-db/-/blob/master/gem/rexml/CVE-2024-41123.yml" }, { "external_id": "CVE-2024-41123", "external_type": "cve", "name": "CVE-2024-41123", "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-41123" } ], "issues": [ { "created_at": "2025-01-08T00:46:14.429Z", "title": "REXML ReDoS vulnerability", "updated_at": "2025-01-08T00:46:14.429Z", "url": "https://example.com/flightjs/Flight/-/issues/1" } ], "location": { "dependency": { "package": { "name": "rexml" }, "version": "3.3.1" }, "file": "Gemfile.lock" }, "project_id": 1, "report_type": "dependency_scanning", "resolved_at": null, "resolved_by_id": null, "resolved_on_default_branch": false, "severity": "high", "severity_overridden": false, "state": "confirmed", "title": "REXML DoS vulnerability", "updated_at": "2025-01-08T00:46:14.413Z", "url": "https://example.com/flightjs/Flight/-/security/vulnerabilities/1" }, "object_kind": "vulnerability" }, "timestamp": "2026-02-12T20:40:00.000000000Z", "type": "gitlab.vulnerability" } ``` ## Create Issue The Create Issue component creates a new issue in a specified GitLab project. 
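Under the hood, creating an issue maps to GitLab's `POST /projects/:id/issues` REST endpoint. A minimal sketch of the request parameters this component would assemble; the project ID, token placeholder, and field values are hypothetical:

```python
# Sketch of the GitLab issue-creation call this component wraps.
# Project ID, token, and field values are hypothetical examples.
def build_issue_params(title, description=None, labels=None, assignee_ids=None, due_date=None):
    """Build parameters for GitLab's POST /projects/:id/issues endpoint."""
    params = {"title": title}
    if description:
        params["description"] = description
    if labels:
        params["labels"] = ",".join(labels)   # GitLab expects comma-separated labels
    if assignee_ids:
        params["assignee_ids"] = assignee_ids
    if due_date:
        params["due_date"] = due_date         # YYYY-MM-DD
    return params

params = build_issue_params("Example Issue", labels=["bug", "urgent"])
# import requests
# r = requests.post(
#     "https://gitlab.com/api/v4/projects/3/issues",
#     headers={"PRIVATE-TOKEN": "<token>"},
#     data=params,
# )
print(params["labels"])  # bug,urgent
```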
### Use Cases - **Automated Bug Reporting**: Create issues when a monitoring system detects an error - **Task Management**: Automatically create tasks for new employee onboarding - **Feedback Loop**: Turn customer feedback into actionable issues ### Configuration - **Project** (required): The GitLab project where the issue will be created - **Title** (required): The title of the new issue - **Description** (optional): The description/body of the issue - **Assignees** (optional): Users to assign the issue to - **Labels** (optional): Labels to apply to the issue (e.g., bug, enhancement) - **Milestone** (optional): Milestone to associate with the issue - **Due Date** (optional): Date when the issue is due ### Output The component outputs the created issue object, including: - **id**: The internal ID of the issue - **iid**: The project-relative ID of the issue - **web_url**: The URL to view the issue in GitLab - **state**: The current state of the issue (opened/closed) ### Example Output ```json { "data": { "_links": { "award_emoji": "http://gitlab.example.com/api/v4/projects/1/issues/1/award_emoji", "notes": "http://gitlab.example.com/api/v4/projects/1/issues/1/notes", "project": "http://gitlab.example.com/api/v4/projects/1", "self": "http://gitlab.example.com/api/v4/projects/1/issues/1" }, "assignee": { "avatar_url": "https://www.gravatar.com/avatar/e64c7d89f26bd1972efa854d13d7dd61?s=80\u0026d=identicon", "id": 1, "name": "Administrator", "state": "active", "username": "root", "web_url": "http://gitlab.example.com/root" }, "assignees": [ { "avatar_url": "https://www.gravatar.com/avatar/e64c7d89f26bd1972efa854d13d7dd61?s=80\u0026d=identicon", "id": 1, "name": "Administrator", "state": "active", "username": "root", "web_url": "http://gitlab.example.com/root" } ], "author": { "avatar_url": "https://www.gravatar.com/avatar/e64c7d89f26bd1972efa854d13d7dd61?s=80\u0026d=identicon", "id": 1, "name": "Administrator", "state": "active", "username": "root", "web_url": 
"http://gitlab.example.com/root" }, "blocking_issues_count": 0, "closed_at": null, "closed_by": null, "confidential": false, "created_at": "2023-01-01T10:00:00.000Z", "description": "This is an example issue created via SuperPlane", "discussion_locked": null, "downvotes": 0, "due_date": null, "has_tasks": false, "id": 1, "iid": 1, "issue_type": "issue", "labels": [ "bug", "urgent" ], "merge_requests_count": 0, "milestone": null, "project_id": 3, "references": { "full": "gitlab-org/gitlab-test#1", "relative": "#1", "short": "#1" }, "state": "opened", "task_completion_status": { "completed_count": 0, "count": 0 }, "time_stats": { "human_time_estimate": null, "human_total_time_spent": null, "time_estimate": 0, "total_time_spent": 0 }, "title": "Example Issue", "type": "ISSUE", "updated_at": "2023-01-01T10:00:00.000Z", "upvotes": 0, "user_notes_count": 0, "web_url": "http://gitlab.example.com/gitlab-org/gitlab-test/issues/1", "weight": null }, "timestamp": "2023-01-01T10:00:00.000Z", "type": "gitlab.issue" } ``` ## Get Latest Pipeline The Get Latest Pipeline component retrieves the newest pipeline for a GitLab project. 
### Configuration - **Project** (required): The GitLab project to query - **Ref** (optional): Branch or tag to scope the latest pipeline search ### Example Output ```json { "data": { "before_sha": "f4f6c5a0d2e5ad34be4c17c3f166f4d2ff8b0a55", "committed_at": "2026-02-13T19:20:45.000Z", "coverage": "87.1", "created_at": "2026-02-13T19:21:00.000Z", "detailed_status": { "group": "success", "has_details": true, "icon": "status_success", "label": "passed", "text": "passed", "tooltip": "passed" }, "duration": 268, "finished_at": "2026-02-13T19:25:43.000Z", "id": 457882200, "iid": 9822, "project_id": 123456, "queued_duration": 12.6, "ref": "main", "sha": "afce89e8d28741d4f65ec71ad0a4174a801122cd", "source": "merge_request_event", "started_at": "2026-02-13T19:21:15.000Z", "status": "success", "tag": false, "updated_at": "2026-02-13T19:25:43.000Z", "user": { "avatar_url": "https://www.gravatar.com/avatar/ef56gh78", "id": 18, "name": "Alex Garcia", "username": "agarcia" }, "web_url": "https://gitlab.com/group/example-project/-/pipelines/457882200", "yaml_errors": null }, "timestamp": "2026-02-13T19:25:43.000Z", "type": "gitlab.pipeline" } ``` ## Get Pipeline The Get Pipeline component retrieves details for a specific GitLab pipeline. ### Configuration - **Project** (required): The GitLab project containing the pipeline - **Pipeline** (required): Select a pipeline from the selected project ### Output Returns pipeline data including status, ref, SHA, and pipeline URL. 
### Example Output ```json { "data": { "before_sha": "0000000000000000000000000000000000000000", "committed_at": "2026-02-13T17:59:22.000Z", "coverage": null, "created_at": "2026-02-13T18:00:00.000Z", "detailed_status": { "group": "running", "has_details": true, "icon": "status_running", "label": "running", "text": "running", "tooltip": "running" }, "duration": 0, "finished_at": null, "id": 457882113, "iid": 9821, "project_id": 123456, "queued_duration": 8.2, "ref": "main", "sha": "f4f6c5a0d2e5ad34be4c17c3f166f4d2ff8b0a55", "source": "push", "started_at": "2026-02-13T18:00:12.000Z", "status": "running", "tag": false, "updated_at": "2026-02-13T18:00:10.000Z", "user": { "avatar_url": "https://www.gravatar.com/avatar/abc123", "id": 22, "name": "Jamie Rivera", "username": "jrivera" }, "web_url": "https://gitlab.com/group/example-project/-/pipelines/457882113", "yaml_errors": null }, "timestamp": "2026-02-13T18:00:10.000Z", "type": "gitlab.pipeline" } ``` ## Get Test Report Summary The Get Test Report Summary component fetches the test report summary for a GitLab pipeline. ### Configuration - **Project** (required): The GitLab project containing the pipeline - **Pipeline** (required): Select a pipeline from the selected project ### Example Output ```json { "data": { "test_suites": [ { "build_ids": [ 8934210 ], "error_count": 0, "failed_count": 1, "name": "backend-rspec", "skipped_count": 0, "success_count": 247, "suite_error": null, "total_count": 248, "total_time": 81.27 }, { "build_ids": [ 8934211 ], "error_count": 0, "failed_count": 1, "name": "frontend-jest", "skipped_count": 1, "success_count": 162, "suite_error": null, "total_count": 164, "total_time": 71.19 } ], "total": { "count": 412, "error": 0, "failed": 2, "skipped": 1, "success": 409, "suite_error": null, "time": 152.46 } }, "timestamp": "2026-02-13T19:26:01.000Z", "type": "gitlab.testReportSummary" } ``` ## Run Pipeline The Run Pipeline component triggers a GitLab pipeline and waits for it to complete. 
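Triggering a pipeline corresponds to GitLab's `POST /projects/:id/pipeline` endpoint, which takes a `ref` and an optional list of pipeline variables. A sketch of the request body; the ref and variable values are hypothetical:

```python
# Sketch of the pipeline-creation request this component issues.
# Ref and variable values below are hypothetical.
def build_pipeline_request(ref, variables=None):
    """Body for GitLab's POST /projects/:id/pipeline."""
    body = {"ref": ref}
    if variables:
        # GitLab expects a list of {"key": ..., "value": ...} objects
        body["variables"] = [{"key": k, "value": v} for k, v in variables.items()]
    return body

body = build_pipeline_request("main", {"DEPLOY_ENV": "production"})
print(body["variables"][0]["key"])  # DEPLOY_ENV
```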
### Use Cases - **CI/CD orchestration**: Trigger GitLab pipelines from SuperPlane workflows - **Deployment automation**: Run deployment pipelines with inputs - **Pipeline chaining**: Coordinate follow-up actions after pipeline completion ### Example Output ```json { "data": { "pipeline": { "before_sha": "0000000000000000000000000000000000000000", "committed_at": "2026-02-13T17:59:22.000Z", "coverage": "86.5", "created_at": "2026-02-13T18:00:00.000Z", "detailed_status": { "group": "success", "has_details": true, "icon": "status_success", "label": "passed", "text": "passed", "tooltip": "passed" }, "duration": 240, "finished_at": "2026-02-13T18:04:12.000Z", "id": 457882113, "iid": 9821, "project_id": 123456, "queued_duration": 8.2, "ref": "main", "sha": "f4f6c5a0d2e5ad34be4c17c3f166f4d2ff8b0a55", "source": "web", "started_at": "2026-02-13T18:00:12.000Z", "status": "success", "tag": false, "updated_at": "2026-02-13T18:04:12.000Z", "url": "https://gitlab.com/group/example-project/-/pipelines/457882113", "user": { "avatar_url": "https://www.gravatar.com/avatar/abc123", "id": 22, "name": "Jamie Rivera", "username": "jrivera" }, "web_url": "https://gitlab.com/group/example-project/-/pipelines/457882113", "yaml_errors": null } }, "timestamp": "2026-02-13T18:04:12.000Z", "type": "gitlab.pipeline.finished" } ``` #### Google Cloud Source URL: https://docs.superplane.com/components/googlecloud Manage and use Google Cloud resources in your workflows ## Instructions ## Connection method ### Service Account Key 1. Go to [IAM & Admin → Service Accounts](https://console.cloud.google.com/iam-admin/serviceaccounts) in the Google Cloud Console. 2. Select a service account → **Keys** → **Add Key** → **JSON**. 3. Paste the downloaded JSON below. ### Workload Identity Federation (keyless) 1. 
Create a [Workload Identity Pool](https://cloud.google.com/iam/docs/workload-identity-federation) with an OIDC provider. 2. Set the **Issuer URL** to this SuperPlane instance's URL. 3. Set the **Audience** to the pool provider resource name. 4. Grant the federated identity permission to [impersonate a service account](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#mapping) with the roles your workflows need. 5. Enter the **pool provider resource name** and **Project ID** below. ## Required IAM roles - `roles/logging.configWriter` — create logging sinks for event triggers - `roles/pubsub.admin` — manage Pub/Sub topics, subscriptions, and IAM policies for event delivery - Additional roles depending on which components you use (e.g. `roles/compute.admin` for VM management) ## Artifact Registry • On Artifact Analysis The On Artifact Analysis trigger starts a workflow execution when Google Container Analysis publishes a new occurrence (e.g. vulnerability finding, build provenance, or attestation) for an artifact. **Trigger behavior:** SuperPlane subscribes to the `container-analysis-occurrences-v1` Pub/Sub topic that Container Analysis automatically publishes to. ### Use Cases - **Security automation**: React to new vulnerability findings for your container images - **Compliance workflows**: Trigger policy enforcement when attestations are created - **Build provenance**: React to new build provenance records ### Setup **Required GCP setup:** Ensure the **Container Analysis API** (`containeranalysis.googleapis.com`) and **Pub/Sub API** are enabled in your project. The service account must have `roles/pubsub.admin` and `roles/containeranalysis.occurrences.viewer`. ### Configuration - **Occurrence Kinds**: Filter by occurrence type. Leave empty to receive only **DISCOVERY** occurrences (one event per completed scan — recommended). Set explicitly to receive other types such as VULNERABILITY (one event per CVE found). 
- **Location / Repository / Package**: Optional filters to scope events to a specific artifact. ### Event Data Each event contains the full Container Analysis Occurrence resource, including `kind`, `resourceUri`, `noteName`, and the occurrence-specific data (e.g. `vulnerability` for vulnerability findings). ### Example Data ```json { "data": { "kind": "VULNERABILITY", "name": "projects/my-project/occurrences/vuln-001", "noteName": "projects/goog-vulnz/notes/CVE-2023-1234", "resourceUri": "https://us-central1-docker.pkg.dev/my-project/my-repo/my-image@sha256:abc123", "vulnerability": { "cvssScore": 7.5, "packageIssue": [ { "affectedPackage": "libssl1.1", "affectedVersion": { "kind": "NORMAL", "name": "1.1.1n-0+deb11u3" }, "fixedVersion": { "kind": "NORMAL", "name": "1.1.1n-0+deb11u5" } } ], "severity": "HIGH" } }, "timestamp": "2025-01-01T00:00:00Z", "type": "gcp.artifactregistry.artifact.analysis" } ``` ## Artifact Registry • On Artifact Push The On Artifact Push trigger starts a workflow execution when a Docker image or other container artifact is pushed to Artifact Registry. **Trigger behavior:** SuperPlane subscribes to the `gcr` Pub/Sub topic that Artifact Registry automatically publishes to for container image push events. ### Use Cases - **Post-push automation**: Trigger vulnerability scans, deployments, or notifications when a new image is pushed - **Release workflows**: Promote artifacts through environments when a new tag is published - **Security automation**: Kick off container analysis on every new push ### Setup **Required GCP setup:** Ensure the **Artifact Registry API** and **Pub/Sub API** are enabled in your project. The service account must have `roles/pubsub.admin` so SuperPlane can create the push subscription. ### Configuration - **Location**: Optional filter by Artifact Registry location. Leave empty to receive events for all locations. - **Repository**: Optional filter by repository name. Leave empty to receive events for all repositories. 
### Event Data Each event contains: - `action`: Always `INSERT` for pushes - `digest`: Full image digest URI (e.g. `us-central1-docker.pkg.dev/project/repo/image@sha256:abc`) - `tag`: Full image tag URI (e.g. `us-central1-docker.pkg.dev/project/repo/image:latest`) ### Example Data ```json { "data": { "action": "INSERT", "digest": "https://us-central1-docker.pkg.dev/my-project/my-repo/my-image@sha256:abc123def456", "tag": "https://us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest" }, "timestamp": "2025-01-01T00:00:00Z", "type": "gcp.artifactregistry.artifact.push" } ``` ## Cloud Build • On Build Complete The On Build Complete trigger starts a workflow execution when a GCP Cloud Build build finishes. **Trigger behavior:** SuperPlane subscribes to the `cloud-builds` Pub/Sub topic that Cloud Build automatically publishes to. Build notifications are pushed to SuperPlane and matched to this trigger. ### Use Cases - **Post-build automation**: Deploy artifacts, send notifications, or update tickets after a build succeeds - **Failure handling**: Alert teams or create incidents when builds fail - **Build pipelines**: Chain multiple build steps across different projects ### Setup **Required GCP setup:** Ensure the **Cloud Build API** and **Pub/Sub API** are enabled in your project. The service account used by the integration must have `roles/pubsub.admin` so SuperPlane can automatically create the `cloud-builds` topic and its push subscription. ### Configuration - **Statuses**: Filter by terminal Cloud Build status. - **Build Source**: Optionally limit events to trigger-based builds or direct/API builds. Leave empty to listen to both. - **Cloud Build Trigger**: Filter to a specific Cloud Build trigger. This only applies to trigger-based builds and cannot be combined with **Build Source = Direct/API Builds**. 
### Event Data Each event contains the full Cloud Build resource, including `id`, `status` (SUCCESS, FAILURE, INTERNAL_ERROR, TIMEOUT, CANCELLED, EXPIRED), `buildTriggerId`, `logUrl`, `createTime`, `finishTime`, and more. ### Example Data ```json { "data": { "buildTriggerId": "abcdefgh-1234-5678-abcd-123456789012", "createTime": "2025-01-01T00:00:00Z", "finishTime": "2025-01-01T00:05:00Z", "id": "12345678-abcd-1234-5678-abcdef012345", "logUrl": "https://console.cloud.google.com/cloud-build/builds/12345678-abcd-1234-5678-abcdef012345", "projectId": "my-project", "status": "SUCCESS" }, "timestamp": "2025-01-01T00:05:00Z", "type": "gcp.cloudbuild.build" } ``` ## Compute • On VM Instance The On VM Instance trigger starts a workflow execution when a Compute Engine VM instance lifecycle event occurs. **Trigger behavior:** SuperPlane creates a Cloud Logging sink that captures Compute Engine audit log events and routes them to a shared Pub/Sub topic. Events are pushed to SuperPlane and matched to this trigger automatically. ### Use Cases - **Post-provisioning automation**: Run configuration, monitoring, or security setup after a VM is created - **Inventory and compliance**: Record new VMs or trigger audits - **Notifications**: Notify teams or systems when new VMs appear in a project or zone ### Setup **Required GCP setup:** Ensure the **Pub/Sub API** is enabled in your project and the integration's service account has `roles/logging.configWriter` and `roles/pubsub.admin` permissions. SuperPlane automatically creates a Cloud Logging sink to capture VM instance events. ### Event Data Each event includes the audit log entry with `resourceName` (e.g. `projects/my-project/zones/us-central1-a/instances/my-vm`), `serviceName` (`compute.googleapis.com`), `methodName` (`v1.compute.instances.insert`), and the full log entry data. 
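The `resourceName` in these audit entries encodes the project, zone, and instance name. A small sketch for pulling those apart, assuming the path layout shown above:

```python
# Parse a Compute Engine audit-log resourceName of the form
# projects/<project>/zones/<zone>/instances/<name>.
def parse_vm_resource_name(resource_name):
    parts = resource_name.split("/")
    # Expected layout: ["projects", p, "zones", z, "instances", n]
    if len(parts) != 6 or parts[0] != "projects" or parts[2] != "zones" or parts[4] != "instances":
        raise ValueError(f"unexpected resourceName: {resource_name}")
    return {"project": parts[1], "zone": parts[3], "instance": parts[5]}

info = parse_vm_resource_name("projects/my-project/zones/us-central1-a/instances/my-vm")
print(info["zone"])  # us-central1-a
```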
### Example Data ```json { "data": { "data": { "protoPayload": { "methodName": "v1.compute.instances.insert", "resourceName": "projects/my-project/zones/us-central1-a/instances/my-vm", "serviceName": "compute.googleapis.com" } }, "logName": "projects/my-project/logs/cloudaudit.googleapis.com%2Factivity", "methodName": "v1.compute.instances.insert", "resourceName": "projects/my-project/zones/us-central1-a/instances/my-vm", "serviceName": "compute.googleapis.com", "timestamp": "2025-02-14T12:00:00Z" }, "timestamp": "2025-02-14T12:00:00Z", "type": "gcp.compute.vmInstance" } ``` ## Pub/Sub • On Message The On Message trigger starts a workflow execution when a message is published to a GCP Pub/Sub topic. **Trigger behavior:** SuperPlane creates a push subscription on the selected topic. Published messages are pushed to SuperPlane and delivered to this trigger. ### Use Cases - **Event-driven workflows**: React to messages published by your applications - **Queue processing**: Process tasks published to Pub/Sub topics - **System integration**: Connect Pub/Sub events to downstream workflow steps ### Setup **Required GCP setup:** Ensure the **Pub/Sub API** is enabled in your project. The service account used by the integration must have `roles/pubsub.admin` to create push subscriptions on your topics. ### Configuration - **Topic**: Select the Pub/Sub topic to listen to. - **Subscription (optional)**: Reuse an existing subscription name. Leave empty to let SuperPlane create one. 
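Pub/Sub push deliveries carry the message body base64-encoded; this trigger hands your workflow the decoded form. A sketch of that decoding step, assuming a UTF-8 JSON message body (the envelope values are illustrative):

```python
import base64
import json

# Pub/Sub push requests wrap the message body in base64; the trigger's
# `data` field is the decoded form. Envelope values are illustrative.
push_envelope = {
    "message": {
        "data": base64.b64encode(b'{"event":"order.created","orderId":"ord_abc123"}').decode(),
        "messageId": "1234567890",
        "publishTime": "2025-01-01T00:00:00Z",
        "attributes": {"eventType": "order.created"},
    }
}

raw = base64.b64decode(push_envelope["message"]["data"]).decode("utf-8")
payload = json.loads(raw)
print(payload["orderId"])  # ord_abc123
```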
### Event Data Each event contains the decoded message payload plus Pub/Sub metadata: - `data`: The decoded message body - `messageId`: The Pub/Sub message ID - `publishTime`: When the message was published - `attributes`: Any message attributes ### Example Data ```json { "data": { "attributes": { "eventType": "order.created" }, "data": "{\"event\":\"order.created\",\"orderId\":\"ord_abc123\"}", "messageId": "1234567890", "publishTime": "2025-01-01T00:00:00Z" }, "timestamp": "2025-01-01T00:00:00Z", "type": "gcp.pubsub.message" } ``` ## Artifact Registry • Get Artifact Retrieves the details of a specific artifact version from Google Artifact Registry. ### Configuration Provide either a **Resource URL** or the four fields below: - **Resource URL**: Full resource URL of the image (e.g. `https://us-central1-docker.pkg.dev/project/repo/image@sha256:abc`). Use this to pass a digest directly from an upstream event such as On Artifact Push. - **Location**: The GCP region where the repository is located. - **Repository**: The Artifact Registry repository containing the artifact. - **Package**: The package (image, library, etc.) within the repository. - **Version**: The version or tag to retrieve. ### Output The full Version resource, including `name`, `createTime`, `updateTime`, `description`, `relatedTags`, and `metadata`. ### Supported Formats Artifact Registry supports all package formats when using **Select from Registry** mode. **Resource URL** mode is intended for container image URLs (for example from On Artifact Push events). 
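A container-image Resource URL can be split into the location, repository, package, and version fields this component otherwise takes separately. A sketch, assuming the Docker-format URL shape used in the examples here:

```python
import re

# Split an Artifact Registry image URL of the form
# https://<loc>-docker.pkg.dev/<project>/<repo>/<image>@sha256:<digest>
# (or :<tag>) into its parts. The URL shape is an assumption based on
# the examples in these docs.
def parse_resource_url(url):
    m = re.match(
        r"https://(?P<location>[\w-]+)-docker\.pkg\.dev/"
        r"(?P<project>[^/]+)/(?P<repository>[^/]+)/(?P<package>[^@:]+)"
        r"(?:@(?P<digest>sha256:\w+)|:(?P<tag>[\w.-]+))?$",
        url,
    )
    if not m:
        raise ValueError(f"unrecognized resource URL: {url}")
    return m.groupdict()

f = parse_resource_url("https://us-central1-docker.pkg.dev/my-project/my-repo/my-image@sha256:abc123")
print(f["location"], f["package"])  # us-central1 my-image
```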
### Example Output ```json { "data": { "createTime": "2025-01-01T00:00:00Z", "description": "my-image:latest", "fingerprints": [ { "type": "DIRSUM_SHA256", "value": "Ac2PwnIxFXnnS6DfUou2JchB7F+krMAKv4f6sJr8VzM=" } ], "metadata": { "buildTime": "1980-01-01T00:00:01Z", "imageSizeBytes": "20971520", "mediaType": "application/vnd.docker.distribution.manifest.v2+json", "name": "projects/my-project/locations/us-central1/repositories/my-repo/dockerImages/my-image@sha256:abc123def456" }, "name": "projects/my-project/locations/us-central1/repositories/my-repo/packages/my-image/versions/sha256:abc123def456", "updateTime": "2025-01-01T00:05:00Z" }, "timestamp": "2025-01-01T00:05:00Z", "type": "gcp.artifactregistry.version" } ``` ## Artifact Registry • Get Artifact Analysis Retrieves existing Container Analysis occurrences for an artifact from Google Container Analysis. ### Configuration Provide either a **Resource URL** or the four fields below: - **Resource URL**: Full resource URL of the image (e.g. `https://us-central1-docker.pkg.dev/project/repo/image@sha256:abc`). Use this to pass a digest directly from an upstream event such as On Artifact Push. - **Location**: The GCP region where the repository is located. - **Repository**: The Artifact Registry repository containing the artifact. - **Package**: The package (image) within the repository. - **Version**: The version (digest) to query. ### Output An analysis summary for the artifact, including: - `resourceUri`: The analyzed artifact URI - `scanStatus`: Discovery scan status (if available) - Severity counts: `critical`, `high`, `medium`, `low` - `vulnerabilities`: Total vulnerability occurrences - `fixAvailable`: Count of vulnerabilities with fixes ### Notes - The **Container Analysis API** (`containeranalysis.googleapis.com`) must be enabled. - The service account needs `roles/containeranalysis.occurrences.viewer`. - This summarizes existing occurrences for the selected artifact. 
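The summary fields lend themselves to simple policy gates downstream, for example blocking a promotion when critical or high-severity findings exist. A sketch over the output shape described above; the threshold logic is illustrative, not part of SuperPlane:

```python
# Illustrative policy gate over this component's summary output:
# block when the scan did not finish cleanly, when any critical
# findings exist, or when high-severity findings exceed a threshold.
def should_block(summary, max_high=0):
    if summary.get("scanStatus") not in (None, "FINISHED_SUCCESS"):
        return True  # treat unfinished or failed scans as blocking
    return summary.get("critical", 0) > 0 or summary.get("high", 0) > max_high

summary = {"critical": 0, "high": 1, "medium": 2, "low": 0,
           "vulnerabilities": 3, "fixAvailable": 1, "scanStatus": "FINISHED_SUCCESS"}
print(should_block(summary))  # True
```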
### Example Output ```json { "data": { "critical": 0, "fixAvailable": 1, "high": 1, "low": 0, "medium": 2, "resourceUri": "https://us-central1-docker.pkg.dev/my-project/my-repo/my-image@sha256:abc123", "scanStatus": "FINISHED_SUCCESS", "vulnerabilities": 3 }, "timestamp": "2025-01-01T00:05:00Z", "type": "gcp.containeranalysis.occurrences" } ``` ## Cloud Build • Create Build Creates and starts a Google Cloud Build build, then waits for the build to reach a terminal status. ### Configuration - **Steps** (required): JSON array of build steps. Each step needs at minimum a `name` (builder image) and optional `args`. Example: `[{"name":"gcr.io/cloud-builders/docker","args":["build","-t","gcr.io/$PROJECT_ID/myapp","."]}]` - **Source**: Optional JSON object for the build source. This is the most flexible option and supports `gitSource`, `repoSource`, or `storageSource`. Example: `{"gitSource":{"url":"https://github.com/org/repo.git","revision":"main"}}` - **Connected Repository**: Optional Cloud Build 2nd-gen repository path. Select a location, connection, repository, and branch/tag/commit directly from GCP. SuperPlane sends `source.connectedRepository` and creates the build in the repository's region. - **Repository / Branch / Tag / Commit SHA**: Convenience shortcut for repository-backed builds. If the repository value looks like a Git URL (`https://...`, `ssh://...`, or `git@...`), SuperPlane creates `source.gitSource`. Otherwise it treats the value as a Cloud Source Repository name and creates `source.repoSource`. Choose exactly one revision field. - **Images**: Optional list of Docker image names to push after the build. - **Substitutions**: JSON object of substitution key-value pairs (e.g. `{"_ENV":"production"}`). - **Timeout**: Build timeout (e.g. `600s`). Defaults to Cloud Build default (10 minutes). - **Project ID Override**: Optionally run the build in a different project than the connected integration. 
### Output The terminal Build resource, including `id`, `status`, `logUrl`, `createTime`, `finishTime`, and more. ### Output Channels - **Passed**: Emitted when Cloud Build finishes with `SUCCESS`. - **Failed**: Emitted when Cloud Build finishes with any other terminal status, including `FAILURE`, `INTERNAL_ERROR`, `TIMEOUT`, `CANCELLED`, or `EXPIRED`. ### Notes - SuperPlane listens for Cloud Build notifications through the connected GCP integration and falls back to polling if an event does not arrive. - SuperPlane automatically creates the shared `cloud-builds` Pub/Sub topic and push subscription when the GCP integration has `roles/pubsub.admin` and both the **Cloud Build** and **Pub/Sub** APIs are enabled. - Cancelling the running execution from the UI sends a Cloud Build cancel request for the active build. ### Example Output ```json { "data": { "createTime": "2025-01-01T00:00:00Z", "finishTime": "2025-01-01T00:05:00Z", "id": "12345678-abcd-1234-5678-abcdef012345", "logUrl": "https://console.cloud.google.com/cloud-build/builds/12345678-abcd-1234-5678-abcdef012345", "projectId": "my-project", "status": "SUCCESS" }, "timestamp": "2025-01-01T00:05:00Z", "type": "gcp.cloudbuild.build" } ``` ## Cloud Build • Get Build Retrieves the details of a specific Google Cloud Build build. ### Configuration - **Build ID** (required): The ID or full resource name of the Cloud Build build to retrieve. - **Project ID Override**: Override the GCP project ID from the integration. ### Output The full Build resource, including `id`, `status` (SUCCESS, FAILURE, WORKING, QUEUED, etc.), `logUrl`, `steps`, `images`, `createTime`, `finishTime`, and more. 
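The build statuses above split into one success state, several terminal failure states, and in-progress states. A minimal sketch of how a workflow might branch on them — the helper is illustrative; the status names come from Cloud Build:

```python
# Sketch: branch on a Cloud Build status. Terminal statuses map onto the
# Passed/Failed output channels described above; anything else is still running.
TERMINAL_FAILURES = {"FAILURE", "INTERNAL_ERROR", "TIMEOUT", "CANCELLED", "EXPIRED"}

def classify(status: str) -> str:
    if status == "SUCCESS":
        return "passed"
    if status in TERMINAL_FAILURES:
        return "failed"
    return "running"  # e.g. QUEUED, WORKING

print(classify("SUCCESS"))  # passed
print(classify("TIMEOUT"))  # failed
print(classify("WORKING"))  # running
```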
### Example Output ```json { "data": { "createTime": "2025-01-01T00:00:00Z", "finishTime": "2025-01-01T00:05:00Z", "id": "12345678-abcd-1234-5678-abcdef012345", "logUrl": "https://console.cloud.google.com/cloud-build/builds/12345678-abcd-1234-5678-abcdef012345", "projectId": "my-project", "status": "SUCCESS" }, "timestamp": "2025-01-01T00:05:00Z", "type": "gcp.cloudbuild.build" } ``` ## Cloud Build • Run Trigger Runs an existing Cloud Build trigger and waits for the resulting build to reach a terminal status. ### Configuration - **Trigger** (required): The Cloud Build trigger to run. Select from triggers in the connected project. - **Branch or tag**: Override the branch or tag to build from. Leave empty to use the trigger's configured default. A 40-character hex string is treated as a commit SHA. - **Project ID Override**: Optionally run the trigger in a different project than the connected integration. ### Output The terminal Build resource, including `id`, `status`, `logUrl`, `createTime`, `finishTime`, and more. ### Output Channels - **Passed**: Emitted when Cloud Build finishes with `SUCCESS`. - **Failed**: Emitted when Cloud Build finishes with any other terminal status, including `FAILURE`, `INTERNAL_ERROR`, `TIMEOUT`, `CANCELLED`, or `EXPIRED`. ### Notes - SuperPlane listens for Cloud Build notifications through the connected GCP integration and falls back to polling if an event does not arrive. - SuperPlane automatically creates the shared `cloud-builds` Pub/Sub topic and push subscription when the GCP integration has `roles/pubsub.admin` and both the **Cloud Build** and **Pub/Sub** APIs are enabled. - Cancelling the running execution from the UI sends a Cloud Build cancel request for the active build. 
### Example Output ```json { "data": { "buildTriggerId": "abcdefgh-1234-5678-abcd-123456789012", "createTime": "2025-01-01T00:00:00Z", "finishTime": "2025-01-01T00:05:00Z", "id": "12345678-abcd-1234-5678-abcdef012345", "logUrl": "https://console.cloud.google.com/cloud-build/builds/12345678-abcd-1234-5678-abcdef012345", "projectId": "my-project", "status": "SUCCESS" }, "timestamp": "2025-01-01T00:05:00Z", "type": "gcp.cloudbuild.build" } ``` ## Cloud DNS • Create Record The Create Record component creates a new DNS record set in a Google Cloud DNS managed zone. ### Configuration - **Managed Zone** (required): The Cloud DNS managed zone where the record will be created. - **Record Name** (required): The DNS name for the record (e.g. `api.example.com`). A trailing dot is added automatically. - **Record Type** (required): The DNS record type (A, AAAA, CNAME, TXT, MX, etc.). - **TTL** (required): Time to live in seconds. Defaults to 300. - **Record Values** (required): The values for the record (e.g. IP addresses for A records). ### Required IAM roles The service account must have `roles/dns.admin` or `roles/dns.editor` on the project. ### Output - `change.id`: The Cloud DNS change ID. - `change.status`: The change status (`done`). - `change.startTime`: When the change was submitted. - `record.name`: The DNS record name. - `record.type`: The DNS record type. ### Example Output ```json { "data": { "change": { "id": "1", "startTime": "2026-01-28T10:30:00.000Z", "status": "done" }, "record": { "name": "api.example.com.", "type": "A" } }, "timestamp": "2026-01-28T10:30:00.000Z", "type": "gcp.clouddns.change" } ``` ## Cloud DNS • Delete Record The Delete Record component deletes a DNS record set from a Google Cloud DNS managed zone. ### Configuration - **Managed Zone** (required): The Cloud DNS managed zone containing the record. - **Record Name** (required): The DNS name of the record to delete (e.g. `api.example.com`). 
- **Record Type** (optional): The DNS record type to delete (A, AAAA, CNAME, TXT, MX, etc.). If not specified, all record sets with the given name are deleted. ### Required IAM roles The service account must have `roles/dns.admin` or `roles/dns.editor` on the project. ### Output - `change.id`: The Cloud DNS change ID. - `change.status`: The change status (`done`). - `change.startTime`: When the change was submitted. - `record.name`: The DNS record name. - `record.type`: The DNS record type (comma-separated when multiple types were deleted). ### Example Output ```json { "data": { "change": { "id": "2", "startTime": "2026-01-28T10:31:00.000Z", "status": "done" }, "record": { "name": "old.example.com.", "type": "A" } }, "timestamp": "2026-01-28T10:31:00.000Z", "type": "gcp.clouddns.change" } ``` ## Cloud DNS • Update Record The Update Record component updates an existing DNS record set in a Google Cloud DNS managed zone. ### Configuration - **Managed Zone** (required): The Cloud DNS managed zone containing the record. - **Record Name** (required): The DNS name of the record to update (e.g. `api.example.com`). - **Record Type** (required): The DNS record type (A, AAAA, CNAME, TXT, MX, etc.). - **TTL** (required): New time to live in seconds. - **Record Values** (required): The new values for the record. ### Required IAM roles The service account must have `roles/dns.admin` or `roles/dns.editor` on the project. ### Output - `change.id`: The Cloud DNS change ID. - `change.status`: The change status (`done`). - `change.startTime`: When the change was submitted. - `record.name`: The DNS record name. - `record.type`: The DNS record type. 
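Cloud DNS stores record names in fully qualified form, which is why the Create Record component adds the trailing dot automatically. A minimal sketch of that normalization (the helper name is hypothetical):

```python
# Sketch: normalize a DNS record name to the fully qualified form Cloud DNS
# expects, mirroring the "trailing dot is added automatically" behavior
# described for the Create Record component.
def to_fqdn(name: str) -> str:
    return name if name.endswith(".") else name + "."

print(to_fqdn("api.example.com"))   # api.example.com.
print(to_fqdn("api.example.com."))  # api.example.com.
```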
### Example Output ```json { "data": { "change": { "id": "3", "startTime": "2026-01-28T10:32:00.000Z", "status": "done" }, "record": { "name": "api.example.com.", "type": "A" } }, "timestamp": "2026-01-28T10:32:00.000Z", "type": "gcp.clouddns.change" } ``` ## Cloud Functions • Invoke Function Invokes a Google Cloud Function and waits for the response. ### Configuration - **Location** (required): The GCP region where the function is deployed (e.g. `us-central1`). - **Function** (required): The Cloud Function to invoke. Select from the list of deployed functions. - **Payload**: Optional JSON object sent as the function's input data. - **Project ID Override**: Override the GCP project ID from the integration. Leave empty to use the integration's project. ### Required IAM roles The service account used by the integration must have `roles/cloudfunctions.developer` (or `roles/cloudfunctions.viewer` + `roles/cloudfunctions.invoker`) on the project. - `roles/cloudfunctions.viewer` — list locations and functions (required for dropdowns) - `roles/cloudfunctions.invoker` — invoke the function - `roles/cloudfunctions.developer` — covers both of the above ### Output The invocation result, including: - `functionName`: Full resource name of the invoked function. - `executionId`: Unique ID assigned to this invocation. - `result`: The function's response, parsed as JSON when possible. - `resultRaw`: The raw string response (only present when the response is not valid JSON). ### Example Output ```json { "data": { "executionId": "h7g2k9qw3x", "functionName": "projects/my-project/locations/us-central1/functions/my-function", "result": { "message": "Hello, World!", "status": "ok" } }, "timestamp": "2025-01-01T00:00:05Z", "type": "gcp.cloudfunctions.invoke" } ``` ## Compute • Create Virtual Machine Creates a new Google Compute Engine VM. ### Steps 1. **Machine Configuration** – Region, zone, machine type, provisioning model (Spot/Standard), instance name. 2. 
**OS & Storage** – Boot disk source (public/custom image, snapshot, existing disk), disk type, size, snapshot schedule. 3. **Security** – Shielded VM (secure boot, vTPM, integrity monitoring), Confidential VM (AMD SEV/SEV-SNP, Intel TDX). 4. **Identity & API access** – VM service account, OAuth scopes, OS Login, block project-wide SSH keys. 5. **Networking** – VPC, subnet, NIC type, internal/external IP (including static), network tags, firewall rules. 6. **Management** – Metadata, startup script, automatic restart, on host maintenance, maintenance policy. 7. **Advanced** – GPU accelerators, placement policy (min node CPUs), sole-tenant/host affinity, resource policies. ### Output Emits a payload with instance details: instanceId, selfLink, internalIP, externalIP, status, zone, name, machineType. ### Example Output ```json { "data": { "externalIP": "34.1.2.3", "instanceId": "1234567890123456789", "internalIP": "10.0.0.2", "machineType": "e2-medium", "name": "my-vm", "selfLink": "https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/instances/my-vm", "status": "RUNNING", "zone": "us-central1-a" }, "timestamp": "2025-02-14T12:00:00Z", "type": "gcp.createVM.completed" } ``` ## Pub/Sub • Create Subscription The Create Subscription component creates a new GCP Pub/Sub subscription on a topic. ### Use Cases - **Provisioning workflows**: Wire up subscriptions as part of service deployment - **Pull queue setup**: Create pull subscriptions for batch processing workflows - **Push integration**: Create push subscriptions that deliver messages to an HTTP endpoint ### Example Output ```json { "data": { "name": "projects/my-project/subscriptions/my-subscription", "subscription": "my-subscription", "topic": "my-topic", "type": "pull" }, "timestamp": "2025-01-01T00:00:00Z", "type": "gcp.pubsub.subscription" } ``` ## Pub/Sub • Create Topic The Create Topic component creates a new GCP Pub/Sub topic. 
### Use Cases - **Provisioning workflows**: Create topics as part of environment setup - **Dynamic routing**: Create topics on demand for new services or tenants - **Automation bootstrap**: Prepare messaging infrastructure before publishing ### Example Output ```json { "data": { "name": "projects/my-project/topics/my-topic", "topic": "my-topic" }, "timestamp": "2025-01-01T00:00:00Z", "type": "gcp.pubsub.topic" } ``` ## Pub/Sub • Delete Subscription The Delete Subscription component deletes a GCP Pub/Sub subscription. ### Use Cases - **Cleanup workflows**: Remove subscriptions as part of service teardown - **Lifecycle management**: Decommission subscriptions that are no longer needed - **Rollback automation**: Remove subscriptions created in failed provisioning runs ### Example Output ```json { "data": { "deleted": true, "subscription": "my-subscription" }, "timestamp": "2025-01-01T00:00:00Z", "type": "gcp.pubsub.subscription.deleted" } ``` ## Pub/Sub • Delete Topic The Delete Topic component deletes a GCP Pub/Sub topic. ### Use Cases - **Cleanup workflows**: Remove temporary topics after execution - **Lifecycle management**: Decommission unused messaging resources - **Rollback automation**: Remove topics created in failed provisioning runs ### Example Output ```json { "data": { "deleted": true, "topic": "my-topic" }, "timestamp": "2025-01-01T00:00:00Z", "type": "gcp.pubsub.topic.deleted" } ``` ## Pub/Sub • Publish Message The Publish Message component sends a message to a GCP Pub/Sub topic. 
### Use Cases - **Event fan-out**: Broadcast workflow results to multiple subscribers - **Notifications**: Publish operational updates to downstream systems - **Automation**: Trigger Pub/Sub-based pipelines from workflows ### Example Output ```json { "data": { "messageId": "1234567890", "topic": "my-topic" }, "timestamp": "2025-01-01T00:00:00Z", "type": "gcp.pubsub.message.published" } ``` #### Grafana Source URL: https://docs.superplane.com/components/grafana Connect Grafana alerts, alert rules, dashboards, annotations, silences, and data queries to SuperPlane workflows ## Triggers ## Actions ## Instructions **Setup steps:** 1. In Grafana, go to **Administration → Users and access → Service Accounts** and select **Add service account**. > **Service Account Role:** when creating the service account, set **Roles → Basic roles** to **Admin**. Navigate to the created service account and select **Add service account token**. Name it, set an expiration period, then click **Generate token**. This is your **Service Account Token**. 2. Use your Grafana root URL as **Base URL** (for example `https://grafana.example.com`). 3. Fill in **Base URL** and **Service Account Token** below, then save. ## On Alert Firing The On Alert Firing trigger starts a workflow when Grafana Unified Alerting sends a firing alert webhook. ### Setup 1. SuperPlane automatically creates or updates a Grafana Webhook contact point and notification policy route for this trigger when provisioning succeeds. 2. SuperPlane manages webhook bearer authentication automatically. 3. Provisioning requires a Grafana integration with **Base URL** and **Service Account Token** and sufficient permissions for the alerting and provisioning APIs. 
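The Base URL and Service Account Token pair described in the setup steps translate into ordinary bearer-token requests against the Grafana HTTP API. A minimal sketch under that assumption — the URL and token are placeholders, and `/api/annotations` is one example Grafana endpoint:

```python
import json
import urllib.request

# Sketch: build a Grafana HTTP API request authenticated with a service
# account token, the same Base URL + token pair the integration asks for.
# BASE_URL and TOKEN are placeholder values.
BASE_URL = "https://grafana.example.com"
TOKEN = "glsa_example_token"

def build_request(path: str, payload: dict) -> urllib.request.Request:
    return urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("/api/annotations", {"text": "Deploy v1.2.3", "tags": ["deploy"]})
print(req.full_url)  # https://grafana.example.com/api/annotations
```

Sending the request (for example with `urllib.request.urlopen(req)`) is left out so the sketch stays network-free.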
### Configuration - **Alert Names**: Optional exact alert name filters ### Event Data The trigger emits the full Grafana webhook payload, including: - status (firing/resolved) - alerts array with labels and annotations - groupLabels, commonLabels, commonAnnotations - externalURL and other alerting metadata ### Example Data ```json { "data": { "alerts": [ { "annotations": { "summary": "Error rate above threshold" }, "labels": { "alertname": "HighErrorRate", "service": "api" }, "status": "firing" } ], "commonLabels": { "alertname": "HighErrorRate" }, "externalURL": "http://grafana.local", "ruleUid": "alert_rule_uid", "status": "firing", "title": "High error rate" }, "timestamp": "2026-02-12T16:18:03.362582388Z", "type": "grafana.alert.firing" } ``` ## Create Alert Rule The Create Alert Rule component creates a Grafana-managed alert rule using the Alerting Provisioning HTTP API. ### Use Cases - **Monitoring onboarding**: create baseline alerts when a new service or environment is provisioned - **Incident automation**: create temporary alert rules during an incident or validation workflow - **Policy rollout**: standardize alert coverage across teams using a shared rule definition ### Configuration - **Title**: Human-readable alert name shown in Grafana - **Folder**: Existing Grafana folder that should contain the rule - **Rule Group**: Grafana rule group to create the rule in - **Data Source**: Existing Grafana data source the query should use - **Query**: Expression Grafana evaluates when checking the alert - **Lookback Window**: How far back to query when evaluating the rule - **Reducer / Condition / Threshold(s)**: How the series is reduced, how it is compared to thresholds, and optional upper bound for range conditions - **For**: How long the condition must hold before firing - **No Data / Execution Error State**: Grafana behavior when the query returns no data or errors - **Contact Point**: Optional Grafana contact point for notifications when the rule fires - 
**Labels / Annotations**: Optional routing and context metadata attached to the rule - **Paused**: Whether the rule starts paused ### Output Returns the created Grafana alert rule object, including identifiers and evaluation metadata. ### Example Output ```json { "data": { "annotations": { "summary": "High error rate detected" }, "condition": "C", "data": [ { "datasourceUid": "prometheus-main", "model": { "editorMode": "code", "expr": "sum(rate(http_requests_total{status=~\"5..\"}[5m]))", "intervalMs": 1000, "maxDataPoints": 43200, "query": "sum(rate(http_requests_total{status=~\"5..\"}[5m]))", "refId": "A" }, "queryType": "", "refId": "A", "relativeTimeRange": { "from": 300, "to": 0 } }, { "datasourceUid": "__expr__", "model": { "expression": "A", "id": "reduce", "reducer": "last", "refId": "B", "settings": { "mode": "dropNN" }, "type": "reduce" }, "queryType": "", "refId": "B", "relativeTimeRange": { "from": 0, "to": 0 } }, { "datasourceUid": "__expr__", "model": { "conditions": [ { "evaluator": { "params": [ 1 ], "type": "gt" }, "operator": { "type": "and" }, "query": { "params": [ "C" ] }, "reducer": { "type": "last" }, "type": "query" } ], "expression": "B", "id": "threshold", "refId": "C", "type": "threshold" }, "queryType": "", "refId": "C", "relativeTimeRange": { "from": 0, "to": 0 } } ], "execErrState": "Alerting", "folderUID": "infra", "for": "5m", "id": 42, "isPaused": false, "labels": { "service": "api", "severity": "critical" }, "noDataState": "NoData", "orgID": 1, "ruleGroup": "service-health", "title": "High error rate", "uid": "cergr5pm79hj4d", "updated": "2026-03-31T10:20:30Z" }, "timestamp": "2026-03-31T10:20:30Z", "type": "grafana.alertRule" } ``` ## Create Annotation The Create Annotation component writes an annotation into Grafana, marking operational events on dashboard timelines. 
### Use Cases - **Deploy tracking**: Annotate graphs at the exact moment a deployment is triggered or completes - **Incident markers**: Place a marker when an incident is opened or resolved for post-incident correlation - **Maintenance windows**: Mark the start and end of a maintenance window as a region annotation - **Change correlation**: Record configuration changes, feature flag toggles, or rollbacks directly on the timeline ### Configuration - **Dashboard**: Optional — choose a dashboard from your Grafana instance to scope the annotation - **Panel**: Required when a dashboard is selected — choose the panel within that dashboard to attach the annotation to - **Text**: The annotation message (required) - **Tags**: Optional list of tags to label the annotation (e.g. deploy, rollback, incident) - **Time**: Optional start time value. Examples: `{{ now() }}` or `{{ now() - duration("5m") }}` - **Time End**: Optional end time value for a region annotation. Examples: `{{ now() }}` or `{{ now() + duration("24h") }}` ### Output Returns the ID and URL of the newly created annotation. ### Example Output ```json { "data": { "id": 42, "url": "https://grafana.example.com/d/production-overview/production-overview?from=1739376783362\u0026to=1739377383362" }, "timestamp": "2026-02-12T16:18:03.362582388Z", "type": "grafana.annotation.created" } ``` ## Create Silence The Create Silence component creates a new Alertmanager silence in Grafana, suppressing alert notifications that match the configured matchers during the specified time window. ### Use Cases - **Deploy window**: Suppress noisy alerts during a planned maintenance or deployment window - **Incident management**: Prevent alert storms from flooding on-call channels while an incident is being worked on - **Testing**: Silence alerts during load tests or chaos experiments ### Configuration - **Matchers**: One or more label matchers that identify which alerts to silence (required). 
Each matcher uses an operator: equal (=), not equal (!=), regex match (=~), or regex does not match (!~), matching Grafana Alertmanager semantics. - **Starts At**: The start of the silence window (required) - **Ends At**: The end of the silence window (required) - **Comment**: A description of why the silence is being created (required) - The createdBy field sent to Grafana is set automatically to SuperPlane-<org_name> and is not configurable ### Output Returns the ID of the newly created silence. ### Example Output ```json { "data": { "endsAt": "2026-04-01T10:24:30Z", "silenceId": "a3e5c2d1-8b4f-4e1a-9c7d-2f0e6b3a1d5c", "startsAt": "2026-03-31T10:24:30Z" }, "timestamp": "2026-03-31T10:24:30Z", "type": "grafana.silence.created" } ``` ## Delete Alert Rule The Delete Alert Rule component deletes a Grafana-managed alert rule using the Alerting Provisioning HTTP API. ### Use Cases - **Alert cleanup**: remove temporary or obsolete rules after a rollout or incident - **Service retirement**: delete rules that are no longer needed when an environment is decommissioned - **Controlled cleanup**: pair deletions with approvals, notifications, or audit workflows ### Configuration - **Alert Rule**: The Grafana alert rule to delete ### Output Returns a confirmation object with the deleted alert rule UID, title, and deletion status. ### Example Output ```json { "data": { "deleted": true, "title": "High error rate", "uid": "cergr5pm79hj4d" }, "timestamp": "2026-03-31T10:24:30Z", "type": "grafana.alertRuleDeleted" } ``` ## Delete Annotation The Delete Annotation component removes an annotation from Grafana by ID. ### Use Cases - **Cleanup incorrect markers**: Remove an annotation that was created with wrong text or tags - **Automated lifecycle**: Delete temporary markers (e.g. 
maintenance window start) once the event is complete - **Idempotent workflows**: Allow re-runs to clean up previously created annotations before re-creating them ### Configuration - **Annotation**: The annotation to delete, chosen from your Grafana instance (required) ### Output Returns the annotation ID and a confirmation that the annotation was deleted. ### Example Output ```json { "data": { "deleted": true, "id": 42 }, "timestamp": "2026-02-12T16:18:03.362582388Z", "type": "grafana.annotation.deleted" } ``` ## Delete Silence The Delete Silence component expires an existing silence in Grafana Alertmanager. ### Use Cases - **End a maintenance window early**: Remove a silence once deployment or maintenance completes ahead of schedule - **Automated cleanup**: Expire silences created by automation after the condition they covered has resolved ### Configuration - **Silence**: The silence to expire (required) ### Output Returns the silence ID and a confirmation that the silence was deleted. ### Example Output ```json { "data": { "deleted": true, "silenceId": "a3e5c2d1-8b4f-4e1a-9c7d-2f0e6b3a1d5c" }, "timestamp": "2026-03-31T10:24:30Z", "type": "grafana.silence.deleted" } ``` ## Get Alert Rule The Get Alert Rule component fetches a Grafana-managed alert rule using the Alerting Provisioning HTTP API. ### Use Cases - **Configuration review**: inspect the current source of truth before changing a rule - **Workflow enrichment**: include alert rule details in notifications, tickets, or approvals - **Drift checks**: compare the current Grafana rule against an expected configuration ### Configuration - **Alert Rule**: The Grafana alert rule to retrieve ### Output Returns the full Grafana alert rule object, including title, folder, group, condition, queries, labels, and annotations. 
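Grafana-managed alert rules carry their condition as a chain of query nodes (a data query, a reduce expression, and a threshold expression, each with its own `refId`). A sketch of pulling the threshold parameters out of a rule object — the `threshold_params` helper is hypothetical; the structure follows the provisioning API responses shown in these docs:

```python
# Sketch: extract the numeric threshold parameters from a Grafana-managed
# alert rule object by walking the data chain (query -> reduce -> threshold)
# that the Alerting Provisioning API returns.
def threshold_params(rule: dict):
    for node in rule.get("data", []):
        model = node.get("model", {})
        if model.get("type") == "threshold":
            for cond in model.get("conditions", []):
                return cond.get("evaluator", {}).get("params")
    return None

rule = {
    "condition": "C",
    "data": [
        {"refId": "A", "model": {"expr": "up == 0"}},
        {"refId": "B", "model": {"type": "reduce", "expression": "A"}},
        {
            "refId": "C",
            "model": {
                "type": "threshold",
                "conditions": [{"evaluator": {"type": "gt", "params": [1]}}],
            },
        },
    ],
}
print(threshold_params(rule))  # [1]
```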
### Example Output ```json { "data": { "annotations": { "summary": "High error rate detected" }, "condition": "C", "data": [ { "datasourceUid": "prometheus-main", "model": { "editorMode": "code", "expr": "sum(rate(http_requests_total{status=~\"5..\"}[5m]))", "intervalMs": 1000, "maxDataPoints": 43200, "query": "sum(rate(http_requests_total{status=~\"5..\"}[5m]))", "refId": "A" }, "queryType": "", "refId": "A", "relativeTimeRange": { "from": 300, "to": 0 } }, { "datasourceUid": "__expr__", "model": { "expression": "A", "id": "reduce", "reducer": "last", "refId": "B", "settings": { "mode": "dropNN" }, "type": "reduce" }, "queryType": "", "refId": "B", "relativeTimeRange": { "from": 0, "to": 0 } }, { "datasourceUid": "__expr__", "model": { "conditions": [ { "evaluator": { "params": [ 1 ], "type": "gt" }, "operator": { "type": "and" }, "query": { "params": [ "C" ] }, "reducer": { "type": "last" }, "type": "query" } ], "expression": "B", "id": "threshold", "refId": "C", "type": "threshold" }, "queryType": "", "refId": "C", "relativeTimeRange": { "from": 0, "to": 0 } } ], "execErrState": "Alerting", "folderUID": "infra", "for": "5m", "id": 42, "isPaused": false, "labels": { "service": "api", "severity": "critical" }, "noDataState": "NoData", "orgID": 1, "ruleGroup": "service-health", "title": "High error rate", "uid": "cergr5pm79hj4d", "updated": "2026-03-31T10:20:30Z" }, "timestamp": "2026-03-31T10:20:30Z", "type": "grafana.alertRule" } ``` ## Get Dashboard The Get Dashboard component fetches a Grafana dashboard using the Grafana Dashboards HTTP API. 
### Use Cases - **Dashboard inspection**: retrieve current dashboard configuration for review or downstream use - **Workflow enrichment**: include dashboard details in notifications, tickets, or approvals - **Panel discovery**: list panels available in a dashboard for subsequent rendering or linking ### Configuration - **Dashboard**: The Grafana dashboard UID to retrieve ### Output Returns the Grafana dashboard object, including title, slug, URL, folder, tags, and panel summaries. ### Example Output ```json { "data": { "folder": "fdg4m1rt63hj8q", "folderTitle": "Platform", "panels": [ { "id": 1, "title": "Request Rate", "type": "timeseries" }, { "id": 2, "title": "Error Rate", "type": "timeseries" }, { "id": 3, "title": "P99 Latency", "type": "gauge" } ], "slug": "production-overview", "tags": [ "production", "platform" ], "title": "Production Overview", "uid": "cIBgcSjkk", "url": "https://grafana.example.com/d/cIBgcSjkk/production-overview" }, "timestamp": "2026-03-31T10:24:30Z", "type": "grafana.dashboard" } ``` ## Get Silence The Get Silence component fetches the details of a single silence from Grafana Alertmanager using its ID. ### Use Cases - **Inspect a silence**: Retrieve full details of a silence including state, comment, matchers, and times - **Verify a silence**: Confirm a silence is still active before taking action in a workflow ### Configuration - **Silence**: The silence to retrieve (required) ### Output Returns the silence object including ID, state, comment, matchers, start/end times, and the author. 
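Before acting on a silence, a workflow can confirm it is still in effect using the state and start/end timestamps in the output above. A minimal sketch — the `is_active` helper is illustrative:

```python
from datetime import datetime, timezone

# Sketch: check whether a Grafana silence is currently active, using the
# state and the start/end window the Get Silence output includes.
def is_active(silence: dict, now: datetime) -> bool:
    if silence.get("status", {}).get("state") != "active":
        return False
    starts = datetime.fromisoformat(silence["startsAt"].replace("Z", "+00:00"))
    ends = datetime.fromisoformat(silence["endsAt"].replace("Z", "+00:00"))
    return starts <= now < ends

silence = {
    "startsAt": "2026-03-31T10:00:00.000Z",
    "endsAt": "2026-03-31T11:00:00.000Z",
    "status": {"state": "active"},
}
now = datetime(2026, 3, 31, 10, 30, tzinfo=timezone.utc)
print(is_active(silence, now))  # True
```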
### Example Output ```json { "data": { "comment": "Deploy window for v2.1.0", "createdBy": "devops-bot", "endsAt": "2026-03-31T11:00:00.000Z", "id": "a3e5c2d1-8b4f-4e1a-9c7d-2f0e6b3a1d5c", "matchers": [ { "isEqual": true, "isRegex": false, "name": "env", "value": "production" } ], "startsAt": "2026-03-31T10:00:00.000Z", "status": { "state": "active" }, "updatedAt": "2026-03-31T10:00:00.000Z" }, "timestamp": "2026-03-31T10:24:30Z", "type": "grafana.silence" } ``` ## List Alert Rules The List Alert Rules component lists Grafana-managed alert rules using the Alerting Provisioning HTTP API. ### Use Cases - **Alert audits**: review which Grafana alert rules currently exist - **Workflow enrichment**: send alert inventories to Slack, Jira, or documentation steps - **Follow-up automation**: feed alert rule summaries into downstream review or cleanup workflows ### Configuration All fields are optional: - **Folder**: When set, only alert rules in this Grafana folder are listed - **Rule Group**: When set, only rules in this Grafana rule group are listed When both are omitted, the component lists alert rules across the instance (subject to Grafana permissions). ### Output Returns an object containing the list of Grafana alert rule summaries, including each rule UID and title. ### Example Output ```json { "data": { "alertRules": [ { "title": "High error rate", "uid": "cergr5pm79hj4d" }, { "title": "High latency", "uid": "aer9k2pm71sh2b" }, { "title": "Service unavailable", "uid": "bfg4m1rt63hj8q" } ] }, "timestamp": "2026-03-31T10:24:30Z", "type": "grafana.alertRules" } ``` ## List Annotations The List Annotations component retrieves annotations from Grafana, optionally filtered by tag, dashboard, or time range. 
### Use Cases - **Audit operational events**: Review recent deploy, incident, or change markers on a timeline - **Correlate incidents**: Retrieve annotations from around an incident time window for post-incident analysis - **Workflow branching**: Check for existing markers before creating duplicate annotations ### Configuration - **Dashboard**: Optional — filter to annotations on a specific dashboard from your Grafana instance - **Panel**: Optional — filter to annotations on a specific panel within the selected dashboard - **Text**: Optional — filter annotations whose text contains this value - **Tags**: Filter to annotations matching all of the specified tags (optional) - **From / To**: Time range filter values (optional). Examples: `{{ now() - duration("1h") }}` and `{{ now() }}` - **Limit**: Maximum number of annotations to return (optional) ### Output Returns a list of annotation objects including ID, text, tags, time, and dashboard/panel references. ### Example Output ```json { "data": { "annotations": [ { "dashboardUID": "abc123", "id": 42, "panelId": 3, "tags": [ "deploy", "production" ], "text": "Deploy v1.2.3 to production", "time": 1739376000000, "timeEnd": 1739376000000, "type": "annotation" }, { "dashboardUID": "abc123", "id": 41, "panelId": 3, "tags": [ "rollback", "production" ], "text": "Rollback to v1.2.2", "time": 1739289600000, "timeEnd": 1739289600000, "type": "annotation" } ], "from": "2026-02-12T15:18:03.362582388Z", "to": "2026-02-12T16:18:03.362582388Z" }, "timestamp": "2026-02-12T16:18:03.362582388Z", "type": "grafana.annotations" } ``` ## List Silences The List Silences component retrieves silences from Grafana Alertmanager. ### Use Cases - **Audit**: Review all currently active or pending silences in your Grafana instance - **Detect if already muted**: Check whether a specific alert or label set is already silenced before creating a duplicate - **Workflow logic**: Branch on silence state — e.g. 
skip escalation if an alert is already silenced ### Configuration - **Filter**: Optional label matcher string to filter silences (e.g. `alertname=~"High.*"`) ### Output Returns a list of silence objects, each including ID, state, comment, matchers, start/end times, and the author. ### Example Output ```json { "data": { "silences": [ { "comment": "Deploy window for v2.1.0", "createdBy": "devops-bot", "endsAt": "2026-03-31T11:00:00.000Z", "id": "a3e5c2d1-8b4f-4e1a-9c7d-2f0e6b3a1d5c", "matchers": [ { "isEqual": true, "isRegex": false, "name": "env", "value": "production" } ], "startsAt": "2026-03-31T10:00:00.000Z", "status": { "state": "active" }, "updatedAt": "2026-03-31T10:00:00.000Z" } ] }, "timestamp": "2026-03-31T10:24:30Z", "type": "grafana.silences" } ``` ## Query Data Source The Query Data Source component executes a query against a Grafana data source using the Grafana Query API. ### Use Cases - **Metrics investigation**: Run PromQL or other datasource queries from workflows - **Alert validation**: Validate alert conditions before escalation - **Incident context**: Pull current metrics into incident workflows ### Configuration - **Data Source**: The Grafana data source to query - **Query**: The datasource query (PromQL, InfluxQL, etc.) - **Time From / Time To**: Optional expressions for the query range (for example `now() - duration("5m")` and `now()`); if omitted, SuperPlane defaults the query to the last 5 minutes - **Timezone**: Interprets datetime-local expression results using the selected timezone offset - **Format**: Optional query format (depends on the datasource) ### Output Returns the Grafana query API response JSON. 
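The query API returns columnar data frames rather than rows. A sketch of flattening the first frame into `(time, value)` pairs — the `frame_rows` helper is hypothetical; the `results`/`frames`/`data.values` layout follows the example response shown for this component:

```python
# Sketch: flatten the first data frame of a Grafana query API response into
# (time, value) pairs. The results/frames/data.values layout matches the
# example response in these docs; "A" is the query refId.
def frame_rows(response: dict, ref_id: str = "A"):
    frames = response["results"][ref_id]["frames"]
    times, values = frames[0]["data"]["values"]
    return list(zip(times, values))

response = {
    "results": {
        "A": {
            "frames": [
                {
                    "data": {
                        "values": [
                            ["2026-02-07T08:00:00Z", "2026-02-07T08:01:00Z"],
                            [1, 1],
                        ]
                    }
                }
            ]
        }
    }
}
print(frame_rows(response))  # [('2026-02-07T08:00:00Z', 1), ('2026-02-07T08:01:00Z', 1)]
```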
### Example Output ```json { "data": { "results": { "A": { "frames": [ { "data": { "values": [ [ "2026-02-07T08:00:00Z", "2026-02-07T08:01:00Z" ], [ 1, 1 ] ] }, "schema": { "fields": [ { "name": "time", "type": "time" }, { "name": "value", "type": "number" } ] } } ] } } }, "timestamp": "2026-02-12T16:18:03.362582388Z", "type": "grafana.query.result" } ``` ## Render Panel The Render Panel component constructs a Grafana image render URL for a dashboard panel using the Grafana Image Renderer. ### Use Cases - **Incident snapshots**: attach or link a rendered panel image in tickets or notifications - **Scheduled reports**: generate a reusable render URL for panel snapshots - **Workflow enrichment**: pass a compact panel image URL through workflow steps ### Configuration - **Dashboard**: The Grafana dashboard containing the panel to render - **Panel**: The panel to render - **Width**: Image width in pixels (default 1000) - **Height**: Image height in pixels (default 500) - **From**: Optional start of the time range. Examples: `{{ now() - duration("1h") }}` or `now-1h` - **To**: Optional end of the time range. Examples: `{{ now() }}` or `now` ### Output Returns the Grafana render URL along with the dashboard UID and panel. ### Example Output ```json { "data": { "dashboard": "cIBgcSjkk", "panel": 2, "url": "https://grafana.example.com/render/d-solo/cIBgcSjkk/production-overview?panelId=2\u0026width=1000\u0026height=500\u0026tz=UTC" }, "timestamp": "2026-03-31T10:24:30Z", "type": "grafana.panel.image" } ``` ## Update Alert Rule The Update Alert Rule component updates a Grafana-managed alert rule using the Alerting Provisioning HTTP API. 
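The update is field-selective: only the values you set are changed, and clearing a value (such as the contact point) removes it from the rule. That read-merge-write behavior can be sketched as follows, with a hypothetical helper rather than SuperPlane's actual implementation:

```python
def merge_rule_update(current, updates):
    """Return a copy of the current alert rule with only the provided fields replaced.

    A field explicitly set to None is treated as "clear this value",
    mirroring how clearing the contact point removes notification settings.
    """
    merged = dict(current)
    for key, value in updates.items():
        if value is None:
            merged.pop(key, None)
        else:
            merged[key] = value
    return merged
```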
### Use Cases - **Threshold tuning**: refine alert conditions after incidents or noisy periods - **Ownership changes**: update labels and annotations used for routing and context - **Rollout safety**: adjust alert rules during migrations or environment transitions ### Configuration - **Alert Rule**: The Grafana alert rule to update - **All other fields are optional**: only the values you provide will be changed - **Folder / Rule Group**: Optional location changes for the rule in Grafana - **Data Source / Query**: Optional query details Grafana evaluates - **Lookback / Reducer / Condition / Threshold(s)**: Optional changes to evaluation and thresholds - **Contact Point**: Set to a contact point to attach notifications; clear the value to remove notification settings from the rule - **Labels / Annotations**: Optional metadata to update alongside the rule ### Output Returns the updated Grafana alert rule object after the provisioning API applies the change. ### Example Output ```json { "data": { "annotations": { "summary": "High error rate detected" }, "condition": "C", "data": [ { "datasourceUid": "prometheus-main", "model": { "editorMode": "code", "expr": "sum(rate(http_requests_total{status=~\"5..\"}[5m]))", "intervalMs": 1000, "maxDataPoints": 43200, "query": "sum(rate(http_requests_total{status=~\"5..\"}[5m]))", "refId": "A" }, "queryType": "", "refId": "A", "relativeTimeRange": { "from": 300, "to": 0 } }, { "datasourceUid": "__expr__", "model": { "expression": "A", "id": "reduce", "reducer": "last", "refId": "B", "settings": { "mode": "dropNN" }, "type": "reduce" }, "queryType": "", "refId": "B", "relativeTimeRange": { "from": 0, "to": 0 } }, { "datasourceUid": "__expr__", "model": { "conditions": [ { "evaluator": { "params": [ 1 ], "type": "gt" }, "operator": { "type": "and" }, "query": { "params": [ "C" ] }, "reducer": { "type": "last" }, "type": "query" } ], "expression": "B", "id": "threshold", "refId": "C", "type": "threshold" }, "queryType": "", "refId": 
"C", "relativeTimeRange": { "from": 0, "to": 0 } } ], "execErrState": "Alerting", "folderUID": "infra", "for": "5m", "id": 42, "isPaused": false, "labels": { "service": "api", "severity": "critical" }, "noDataState": "NoData", "orgID": 1, "ruleGroup": "service-health", "title": "High error rate", "uid": "cergr5pm79hj4d", "updated": "2026-03-31T10:20:30Z" }, "timestamp": "2026-03-31T10:20:30Z", "type": "grafana.alertRule" } ``` #### Harness Source URL: https://docs.superplane.com/components/harness Run and monitor Harness pipelines from SuperPlane workflows import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## Instructions 1. **Create API key:** In Harness, create a service-account API key with permission to run and read pipeline executions. 2. **Connect once, then configure nodes:** Scope fields (**Org**, **Project**, **Pipeline**) are selected in each Harness node. 3. **Account ID is automatic:** SuperPlane resolves account scope from your API key. 4. **Trigger notifications are automatic:** For **On Pipeline Completed** with a selected **Pipeline**, SuperPlane provisions a pipeline notification rule for you. 5. **Auth method:** SuperPlane calls Harness APIs with `x-api-key: ` against `https://app.harness.io/gateway` unless overridden by Base URL. ## On Pipeline Completed The On Pipeline Completed trigger starts a workflow when a Harness pipeline execution finishes. ### Use Cases - **Failure notifications**: Send Slack alerts when critical pipelines fail - **Release automation**: Trigger post-deploy checks when a deployment pipeline succeeds - **Incident workflows**: Create tickets for aborted/expired pipeline runs ### Configuration - **Org**: Harness organization identifier - **Project**: Harness project identifier - **Pipeline Identifier**: Optional pipeline identifier filter. Leave empty to accept all pipeline completions. - **Statuses**: Completion statuses that should trigger the workflow. 
### Webhook Setup SuperPlane automatically provisions Harness pipeline `notificationRules` when **Pipeline** is selected. If no pipeline is selected, or webhook delivery is unavailable in your Harness account, SuperPlane falls back to polling recent executions. ### Example Data ```json { "data": { "eventType": "PIPELINE_END", "executionId": "3y9YlBC9SrOn6W7bPT5nCw", "pipelineIdentifier": "deploy_prod", "raw": { "data": { "pipelineIdentifier": "deploy_prod", "planExecutionId": "3y9YlBC9SrOn6W7bPT5nCw", "status": "FAILED" }, "eventType": "PIPELINE_END" }, "status": "failed" }, "timestamp": "2026-02-12T18:45:55Z", "type": "harness.pipeline.completed" } ``` ## Run Pipeline The Run Pipeline component starts a Harness pipeline execution and waits for it to finish. ### Use Cases - **CI/CD orchestration**: Trigger deploy pipelines from workflow events - **Approval-based releases**: Run release pipelines after manual approvals - **Scheduled automation**: Kick off recurring maintenance or validation pipelines ### How It Works 1. Starts a Harness pipeline execution 2. Stores the execution ID in node execution state 3. Watches execution completion via webhook (with polling fallback) 4. 
Routes output to: - **Success** when execution succeeds - **Failed** when execution fails, aborts, or expires ### Configuration - **Org**: Harness organization identifier - **Project**: Harness project identifier - **Pipeline**: Harness pipeline identifier - **Ref**: Optional git ref (`refs/heads/main` or `refs/tags/v1.2.3`) - **Input Set References**: Optional input set identifiers - **Runtime Input YAML**: Optional YAML override for runtime inputs ### Example Output ```json { "data": { "endedAt": "2026-02-12T18:45:55Z", "executionId": "3y9YlBC9SrOn6W7bPT5nCw", "pipelineIdentifier": "deploy_prod", "planExecutionUrl": "https://app.harness.io/ng/account/acc123/module/cd/orgs/default/projects/platform/pipelines/deploy_prod/executions/3y9YlBC9SrOn6W7bPT5nCw/pipeline", "startedAt": "2026-02-12T18:42:10Z", "status": "succeeded" }, "timestamp": "2026-02-12T18:45:55Z", "type": "harness.pipeline.finished" } ``` #### Hetzner Cloud Source URL: https://docs.superplane.com/components/hetznercloud Create and delete Hetzner Cloud servers/load balancers and create/delete server snapshots import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Actions ## Instructions **API Token:** Create a token in [Hetzner Cloud Console](https://console.hetzner.cloud/) → Project → Security → API Tokens. Use **Read & Write** scope. ## Create Load Balancer The Create Load Balancer component creates a load balancer in Hetzner Cloud. ### How It Works 1. Creates a load balancer with the specified name, type, location, and algorithm via the Hetzner API 2. Emits the created load balancer details on the default output channel ### Configuration - **Name**: The name for the new load balancer (supports expressions) - **Type**: The load balancer type (e.g. 
lb11, lb21, lb31) - **Location**: The location where the load balancer will be created - **Algorithm**: The load balancing algorithm — Round Robin (default) or Least Connections ### Example Output ```json { "data": { "id": "12345", "name": "my-load-balancer", "status": "running" }, "timestamp": "2024-01-15T10:30:00Z", "type": "hetzner.load_balancer.created" } ``` ## Create Server The Create Server component creates a new server in Hetzner Cloud and waits for the create action to complete. ### How It Works 1. Creates a server with the given name, server type, image (system image or snapshot), and optional location/SSH keys/user data 2. Polls the Hetzner API until the create action finishes 3. Emits the server details on the default output when ready. If creation fails, the execution errors. ### Configuration - **Name**: Server name (supports expressions) - **Server type**: e.g. cx11, cpx11, cax11 - **Image**: System image or snapshot image ID - **Location** (optional): e.g. fsn1, nbg1, hel1 - **SSH keys** (optional): List of SSH key names or IDs - **Firewall** (optional): Attach an existing firewall to the server - **User data** (optional): Cloud-init user data ### Example Output ```json { "data": { "created": "2024-01-15T10:30:00+00:00", "id": 42, "name": "my-server", "publicIp": "1.2.3.4", "status": "running" }, "timestamp": "2024-01-15T10:30:00Z", "type": "hetzner.server.created" } ``` ## Create Snapshot The Create Snapshot component creates a snapshot image from an existing Hetzner Cloud server and waits for completion. ### How It Works 1. Calls the Hetzner API to create a snapshot from the selected server 2. Polls the action until snapshot creation finishes 3. Emits snapshot details (including image ID) on success. If creation fails, the execution errors. 
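The create-and-poll pattern above can be sketched generically. The status callable below stands in for a real Hetzner action-status request; statuses and timings are illustrative:

```python
import time

def wait_for_action(get_status, timeout_s=300.0, interval_s=1.0):
    """Poll an action until it succeeds, raising on failure or timeout.

    get_status is a callable returning "running", "success", or "error",
    standing in for a Hetzner API call that reads the action's status.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status == "success":
            return True
        if status == "error":
            raise RuntimeError("Hetzner action failed")
        time.sleep(interval_s)
    raise TimeoutError("action did not finish within the timeout")
```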
### Configuration - **Server**: Existing server to snapshot - **Snapshot name** (optional): Snapshot description/name in Hetzner Cloud ### Example Output ```json { "data": { "actionId": "12345", "imageId": 67890, "imageType": "snapshot", "serverId": "42", "snapshotName": "workflow-snapshot" }, "timestamp": "2024-01-15T10:30:00Z", "type": "hetzner.snapshot.created" } ``` ## Delete Load Balancer The Delete Load Balancer component deletes a load balancer in Hetzner Cloud. ### How It Works 1. Deletes the selected load balancer via the Hetzner API 2. Emits on the default output when the load balancer is deleted. If deletion fails, the execution errors. ### Example Output ```json { "data": { "loadBalancerId": "12345" }, "timestamp": "2024-01-15T10:30:00Z", "type": "hetzner.load_balancer.deleted" } ``` ## Delete Server The Delete Server component deletes a server in Hetzner Cloud and waits for the delete action to complete. ### How It Works 1. Deletes the selected server via the Hetzner API 2. Polls the API until the delete action finishes 3. Emits on the default output when the server is deleted. If deletion fails, the execution errors. ### Example Output ```json { "data": { "actionId": 123, "serverId": 42 }, "timestamp": "2024-01-15T10:30:00Z", "type": "hetzner.server.deleted" } ``` ## Delete Snapshot The Delete Snapshot component deletes a snapshot image in Hetzner Cloud. ### How It Works 1. Deletes the selected snapshot via the Hetzner API 2. Emits on the default output when the snapshot is deleted. If deletion fails, the execution errors. 
### Example Output ```json { "data": { "imageId": "67890" }, "timestamp": "2024-01-15T10:30:00Z", "type": "hetzner.snapshot.deleted" } ``` #### Honeycomb Source URL: https://docs.superplane.com/components/honeycomb Monitor observability alerts and send events to Honeycomb datasets import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## Instructions Connect Honeycomb to SuperPlane using a Management Key. **Required configuration:** - **Site**: US (api.honeycomb.io) or EU (api.eu1.honeycomb.io) based on your account region. - **Management Key**: Found in Honeycomb under Team Settings > API Keys. Must be in format <keyID>:<secret>. - **Team Slug**: Your team identifier, visible in the Honeycomb URL: honeycomb.io/<team-slug>. - **Environment Slug**: The environment containing your datasets (e.g. "production"). Found under Team Settings > Environments. SuperPlane will automatically validate your credentials and manage all necessary Honeycomb resources — webhook recipients for triggers and ingest keys for actions — so no manual setup is required. ## On Alert Fired Starts a workflow execution when a Honeycomb Trigger fires. **Configuration:** - **Dataset Slug**: The slug of the dataset that contains your Honeycomb trigger. Found in the dataset URL: honeycomb.io/<team>/datasets/<dataset-slug>. - **Trigger**: The exact name of the Honeycomb trigger to listen to (case-insensitive). Found in your dataset under Triggers. **How it works:** SuperPlane automatically creates a webhook recipient in Honeycomb and attaches it to the selected trigger. No manual webhook setup is required. When the trigger fires, SuperPlane receives the webhook and starts a workflow execution with the full alert payload. 
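Downstream nodes typically branch on the delivered payload. As an illustration only, a hypothetical helper using the field names from the example data below:

```python
def summarize_alert(payload):
    """One-line summary of a Honeycomb trigger webhook payload."""
    name = payload.get("name", "unknown trigger")
    status = payload.get("status", "unknown")
    values = [g.get("Result") for g in payload.get("result_groups", [])]
    peak = max(values) if values else None
    if status == "TRIGGERED" and peak is not None:
        return f"{name}: TRIGGERED (peak {peak}, threshold {payload.get('threshold')})"
    return f"{name}: {status}"
```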
### Example Data ```json { "data": { "alert_type": "on_true", "description": "production environment:\nCurrent value greater than threshold value (5)", "id": "kQjkatCVK6M", "is_test": false, "name": "High Error Rate", "operator": "greater than", "result_groups": [ { "Group": {}, "Result": 8.5 } ], "result_groups_triggered": [], "result_url": "https://ui.honeycomb.io/myteam/environments/production/datasets/api-production/result/p3o2dvAhYxx/a/z5rYVCoNUZz?utm_content=view_graph\u0026utm_medium=Trigger\u0026utm_source=webhook", "status": "TRIGGERED", "summary": "Triggered: High Error Rate", "threshold": 5, "trigger_description": "API error rate has exceeded the acceptable threshold", "trigger_url": "https://ui.honeycomb.io/myteam/environments/production/datasets/api-production/triggers/kQjkatCVK6M?utm_content=edit_trigger\u0026utm_medium=Trigger\u0026utm_source=webhook", "version": "v0.1.0" }, "timestamp": "2024-01-15T10:30:00Z", "type": "honeycomb.alert.fired" } ``` ## Create Event Sends a JSON event to a Honeycomb dataset. Each key in the JSON object becomes a Honeycomb field. Notes: - The dataset must exist - Fields must be a valid JSON object - A timestamp is auto-added if missing ### Example Output ```json { "data": { "dataset": "example", "fields": { "deployed_by": "github-actions", "duration_seconds": 42, "environment": "production", "event_type": "deployment", "service": "billing-api", "success": true, "version": "2.4.1" }, "status": "sent" }, "timestamp": "2026-02-27T11:34:29.510313029Z", "type": "honeycomb.event.created" } ``` #### Incident Source URL: https://docs.superplane.com/components/incident Manage and react to incidents in incident.io import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## Instructions ## API integration 1. In [incident.io Settings > API keys](https://app.incident.io/settings/api-keys), click **Create API key** and give it a name. 2.
Under **Add permissions**, select exactly these (use "Find a permission" if needed): - **View data, like public incidents and organisation settings** (needed to read severities) - **Create incidents** (needed for the Create Incident action) - **View all incident data, including private incidents** (only if you use private incidents) 3. Create the key and **paste the API key** in the Configuration section below. ## On Incident The On Incident trigger starts a workflow execution when incident.io sends webhooks for incident created or updated events. ### Use Cases - **Incident automation**: Notify Slack, update a status page, or create a Jira ticket when an incident is opened or updated - **Notification workflows**: Send notifications when incidents are created or their status changes - **Integration workflows**: Sync incidents with external systems ### Configuration - **Events**: Select which events to listen for (Incident created, Incident updated) - **Webhook signing secret**: Use the **Set signing secret** action below (after creating the webhook in incident.io) to store the signing secret. It is stored securely and never in the workflow configuration. ### Webhook Setup incident.io does not provide an API to register webhook endpoints. After adding this trigger: 1. Save the canvas to generate the webhook URL, then copy it from this panel. 2. In incident.io go to **Settings > Webhooks** and create a new endpoint with that URL. 3. Subscribe to **Public incident created (v2)** and **Public incident updated (v2)**. 4. Copy the **Signing secret** from the endpoint, then use **Set signing secret** below to store it securely. 
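The signing secret stored in step 4 lets a receiver authenticate deliveries. A generic HMAC-SHA256 check is sketched below; the exact header name and signature format are defined by incident.io's webhook documentation, so treat this as illustrative:

```python
import hashlib
import hmac

def verify_signature(secret, raw_body, received_hex):
    """Recompute the HMAC-SHA256 of the raw request body and compare in constant time.

    secret: the webhook signing secret (str).
    raw_body: the unmodified request body bytes.
    received_hex: the hex-encoded signature taken from the request headers.
    """
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_hex)
```

Always verify against the raw body bytes; re-serializing parsed JSON can change whitespace and break the comparison.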
### Example Data ```json { "data": { "event_type": "public_incident.incident_created_v2", "incident": { "created_at": "2021-08-17T13:28:57.801578Z", "id": "01FDAG4SAP5TYPT98WGR2N7W91", "incident_status": { "id": "01FCNDV6P870EA6S7TK1DSYD5H", "name": "Triage" }, "name": "Our database is sad", "permalink": "https://app.incident.io/incidents/123", "reference": "INC-123", "severity": { "id": "01FCNDV6P870EA6S7TK1DSYDG0", "name": "Minor" }, "summary": "Our database is really really sad, and we don't know why yet.", "updated_at": "2021-08-17T13:28:57.801578Z", "visibility": "public" } }, "timestamp": "2026-01-19T12:00:00Z", "type": "incident.incident.created" } ``` ## Create Incident The Create Incident component creates a new incident in incident.io. ### Use Cases - **Alert escalation**: Create incidents from monitoring alerts - **Error tracking**: Automatically create incidents when errors are detected - **Manual incident creation**: Create incidents from workflow events - **Integration workflows**: Create incidents from external system events ### Configuration - **Name**: The incident name or title (required, supports expressions) - **Summary**: Additional details about the incident (optional, supports expressions) - **Severity**: Select a severity from your incident.io organization (required) - **Visibility**: Public (anyone can access) or Private (only invited users) ### Output Returns the created incident object including: - **id**: Incident ID - **name**: Incident name - **reference**: Human-readable reference (e.g. 
INC-123) - **permalink**: Link to the incident in incident.io - **severity**: Severity details if set - **visibility**: public or private - **created_at**, **updated_at**: Timestamps ### Example Output ```json { "data": { "created_at": "2021-08-17T13:28:57.801578Z", "id": "01FDAG4SAP5TYPT98WGR2N7W91", "name": "Database connectivity issues", "permalink": "https://app.incident.io/incidents/123", "reference": "INC-123", "severity": { "id": "01FCNDV6P870EA6S7TK1DSYDG0", "name": "Minor" }, "summary": "Users are experiencing slow queries and connection timeouts.", "updated_at": "2021-08-17T13:28:57.801578Z", "visibility": "public" }, "timestamp": "2026-01-19T12:00:00Z", "type": "incident.incident" } ``` #### JFrog Artifactory Source URL: https://docs.superplane.com/components/jfrogartifactory Manage artifacts in JFrog Artifactory repositories import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## Instructions To set up the JFrog Artifactory integration: 1. Log in to your JFrog Platform 2. Go to **User Menu** (top right) -> **Edit Profile** -> **Authentication Settings** 3. Click **Generate an Identity Token** 4. Copy the token and paste it in the **Access Token** field below 5. Enter your JFrog Platform URL without the /artifactory suffix (e.g. https://mycompany.jfrog.io) ## On Artifact Uploaded The On Artifact Uploaded trigger starts a workflow execution when an artifact is deployed to JFrog Artifactory. ### Configuration - **Repository** (optional): Filter events to a specific repository. Leave empty to trigger for all repositories. ### Outputs - **Default channel**: Emits artifact deploy data including repo, path, name, size, and sha256. 
### Example Data ```json { "data": { "name": "artifact-1.0.jar", "path": "com/example/artifact-1.0.jar", "repo": "libs-release-local", "sha256": "abc123def456ghi789jkl012mno345pqr678stu901vwx234yz567", "size": 12345 }, "timestamp": "2026-01-23T12:00:00Z", "type": "jfrogArtifactory.artifactUploaded" } ``` ## Delete Artifact The Delete Artifact component removes an artifact from a JFrog Artifactory repository. ### Use Cases - **Cleanup pipelines**: Remove outdated or temporary artifacts after a release - **Storage management**: Delete artifacts that are no longer needed - **Automated housekeeping**: Trigger deletions based on workflow conditions ### Configuration - **Repository**: Select the Artifactory repository containing the artifact - **Path**: The path to the artifact within the repository (supports expressions) ### Output Returns the repository and path of the deleted artifact. ### Example Output ```json { "data": { "path": "/com/example/artifact/1.0/artifact-1.0.jar", "repo": "libs-release-local" }, "timestamp": "2026-01-23T12:00:00Z", "type": "jfrogArtifactory.artifact.deleted" } ``` ## Get Artifact Info The Get Artifact Info component retrieves metadata about an artifact stored in JFrog Artifactory. ### Use Cases - **Artifact verification**: Check artifact existence and checksums before deployment - **Pipeline metadata**: Retrieve artifact details for downstream workflow steps - **Audit and tracking**: Get creation time, size, and author information ### Configuration - **Repository**: Select the Artifactory repository containing the artifact - **Path**: The path to the artifact within the repository (supports expressions) ### Output Returns artifact metadata including repository, path, size, checksums, download URI, and timestamps. 
### Example Output ```json { "data": { "checksums": { "md5": "d41d8cd98f00b204e9800998ecf8427e", "sha1": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" }, "created": "2026-01-15T10:30:00.000Z", "createdBy": "admin", "downloadUri": "https://mycompany.jfrog.io/artifactory/libs-release-local/com/example/artifact/1.0/artifact-1.0.jar", "lastModified": "2026-01-15T10:30:00.000Z", "mimeType": "application/java-archive", "modifiedBy": "admin", "path": "/com/example/artifact/1.0/artifact-1.0.jar", "repo": "libs-release-local", "size": "12345", "uri": "https://mycompany.jfrog.io/artifactory/api/storage/libs-release-local/com/example/artifact/1.0/artifact-1.0.jar" }, "timestamp": "2026-01-23T12:00:00Z", "type": "jfrogArtifactory.artifact.info" } ``` #### Jira Source URL: https://docs.superplane.com/components/jira Manage and react to issues in Jira import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Actions ## Create Issue The Create Issue component creates a new issue in Jira. ### Use Cases - **Task creation**: Automatically create tasks from workflow events - **Bug tracking**: Create bugs from error detection systems - **Feature requests**: Generate feature request issues from external inputs ### Configuration - **Project**: The Jira project to create the issue in - **Issue Type**: The type of issue (e.g. Task, Bug, Story) - **Summary**: The issue summary/title - **Description**: Optional description text ### Output Returns the created issue including: - **id**: The issue ID - **key**: The issue key (e.g. 
PROJ-123) - **self**: API URL for the issue ### Example Output ```json { "data": { "id": "10001", "key": "PROJ-123", "self": "https://your-domain.atlassian.net/rest/api/3/issue/10001" }, "timestamp": "2026-01-19T12:00:00Z", "type": "jira.issue" } ``` #### LaunchDarkly Source URL: https://docs.superplane.com/components/launchdarkly Manage feature flags and react to flag changes in LaunchDarkly import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## Instructions ## API integration 1. In the [LaunchDarkly Account settings > Authorization](https://app.launchdarkly.com/settings/authorization), click **Create token**. 2. Give the token a name and select a role with at least **Reader** permissions for feature flags. - For the **Delete Feature Flag** action, the role must also include **Writer** permissions. 3. Create the token and **paste the API access token** in the Configuration section below. ## On Feature Flag Change The On Feature Flag Change trigger starts a workflow execution when LaunchDarkly sends webhooks for feature flags in a project. ### Use Cases - **Deployment automation**: Trigger deployments or rollbacks when a feature flag changes - **Audit workflows**: Track and log changes to flags for compliance - **Notification workflows**: Send notifications when a flag is created, updated, or deleted - **Integration workflows**: Sync flag changes with external systems ### Configuration - **Project**: The LaunchDarkly project to monitor - **Environments**: Optionally filter by environment(s). Leave empty to receive events for all environments. - **Feature Flags**: Optionally filter by specific flags or patterns. Leave empty to receive events for all flags. - **Actions**: Optionally filter by specific actions (e.g. only when a flag is turned on or off). Leave empty to receive all actions. ### Webhook Setup The webhook is automatically created in LaunchDarkly when you save the canvas. No manual setup is required. 
SuperPlane uses the LaunchDarkly API (via your configured API access token) to create a signed webhook scoped to the selected project, and securely stores the auto-generated signing secret. When LaunchDarkly sends events, SuperPlane verifies the signature and filters to the configured environments, flags, and actions automatically. ### Example Data ```json { "data": { "accesses": [ { "action": "updateOn", "resource": "proj/default:env/test:flag/another-toggle-feature" } ], "date": 1771939563356, "description": "", "kind": "flag", "member": { "email": "user@example.com", "firstName": "John", "lastName": "Doe" }, "name": "Another Toggle Feature", "parent": { "name": "Test", "resource": "proj/default:env/test" }, "target": { "name": "Another Toggle Feature", "resources": [ "proj/default:env/test:flag/another-toggle-feature" ] }, "title": "John Doe turned off the flag Another Toggle Feature in Test", "titleVerb": "turned off the flag" }, "timestamp": "2026-02-24T12:00:00Z", "type": "launchdarkly.flag.updateOn" } ``` ## Delete Feature Flag The Delete Feature Flag component permanently deletes a feature flag from a LaunchDarkly project. ### Use Cases - **Flag cleanup**: Remove stale or temporary flags after rollout is complete - **Automated lifecycle**: Delete flags as part of a release workflow - **Maintenance workflows**: Clean up archived flags that are no longer needed ### Configuration - **Project Key**: The key of the LaunchDarkly project containing the flag - **Flag Key**: The key of the feature flag to delete (supports expressions) ### Output Returns a confirmation payload with the deleted flag's project and flag keys. **Warning**: This action is irreversible. Once deleted, the flag and all its targeting rules are permanently removed. 
### Example Output ```json { "data": { "deleted": true, "flagKey": "toggle-feature", "projectKey": "default" }, "timestamp": "2026-01-19T12:00:00Z", "type": "launchdarkly.flag.deleted" } ``` ## Get Feature Flag The Get Feature Flag component retrieves a specific feature flag from a LaunchDarkly project. ### Use Cases - **Flag lookup**: Fetch flag details for processing or display - **Workflow automation**: Get flag information to make decisions in workflows - **Status checking**: Check flag status before performing actions - **Audit and monitoring**: Retrieve flag data for compliance workflows ### Configuration - **Project Key**: The key of the LaunchDarkly project containing the flag - **Flag Key**: The key of the feature flag to retrieve (supports expressions) ### Output Returns the complete feature flag object including: - Flag key, name, and description - Kind (boolean, multivariate) - Creation date - Archived and temporary status - Variations, environments, and targeting rules ### Example Output ```json { "data": { "archived": false, "creationDate": 1704067200000, "description": "Controls access to the new feature", "key": "toggle-feature", "kind": "boolean", "name": "Toggle Feature", "temporary": false, "variations": [ { "name": "Enabled", "value": true }, { "name": "Disabled", "value": false } ] }, "timestamp": "2026-01-19T12:00:00Z", "type": "launchdarkly.flag" } ``` #### Microsoft Azure Source URL: https://docs.superplane.com/components/microsoftazure Manage and automate Microsoft Azure resources and services import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## Instructions ## Azure Workload Identity Federation Setup To connect SuperPlane to Microsoft Azure using Workload Identity Federation: ### 1. Create or Select an App Registration 1. Go to **Azure Portal** → **Azure Active Directory** → **App registrations** 2. Create a new registration or select an existing app 3. 
Note the **Application (client) ID** and **Directory (tenant) ID** ### 2. Complete the Connection Enter the following information below and create the integration: - **Tenant ID**: Your Azure AD tenant ID - **Client ID**: Your app registration's client ID - **Subscription ID**: Your Azure subscription ID After creation, you will be guided through configuring the Federated Identity Credential and granting the required permissions. SuperPlane will use Workload Identity Federation to authenticate without storing any credentials. ## On Blob Created The On Blob Created trigger starts a workflow execution when a blob is created or replaced in an Azure Storage Account. ### Use Cases - **Data pipelines**: Trigger processing when new files arrive in a storage container - **Image processing**: React to new images or media uploaded to blob storage - **Audit and compliance**: Record blob creation events for traceability - **ETL workflows**: Kick off data transformation when input files are uploaded ### How It Works This trigger listens to Azure Event Grid events from a Storage Account. When a blob is created or replaced, the `Microsoft.Storage.BlobCreated` event is delivered and the trigger fires with the full event payload. ### Configuration - **Resource Group** (required): The resource group containing the Storage Account. - **Storage Account** (required): The Storage Account to watch. - **Container Filter** (optional): A regex pattern to filter by container name. - **Blob Filter** (optional): A regex pattern to filter by blob path. 
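The container and blob filters match against parts of the Event Grid subject, which has the form `/blobServices/default/containers/{container}/blobs/{blob}`. A sketch of that filtering (hypothetical helper, not the actual implementation):

```python
import re

SUBJECT_RE = re.compile(
    r"^/blobServices/default/containers/(?P<container>[^/]+)/blobs/(?P<blob>.+)$"
)

def passes_filters(subject, container_filter=None, blob_filter=None):
    """Return True when the event subject matches the optional regex filters."""
    match = SUBJECT_RE.match(subject)
    if not match:
        return False
    if container_filter and not re.search(container_filter, match["container"]):
        return False
    if blob_filter and not re.search(blob_filter, match["blob"]):
        return False
    return True
```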
### Event Data Each blob created event includes: - **subject**: The full blob path in the format /blobServices/default/containers/{container}/blobs/{blob} - **data.api**: The operation that triggered the event (e.g., PutBlob, CopyBlob) - **data.contentType**: The content type of the blob - **data.contentLength**: The size of the blob in bytes - **data.blobType**: The blob type (BlockBlob, PageBlob, AppendBlob) - **data.url**: The URL of the blob ### Example Data ```json { "data": { "data": { "api": "PutBlob", "blobType": "BlockBlob", "clientRequestId": "6d6cef9a-a602-4a23-bc26-91bb68a2bf74", "contentLength": 524288, "contentType": "text/csv", "eTag": "0x8D4BCC2E4835CD0", "requestId": "d1e6b5a4-0001-0035-4a7b-2e5c4f000000", "sequencer": "00000000000004420000000000028963", "url": "https://mystorageaccount.blob.core.windows.net/mycontainer/path/to/myfile.csv" }, "dataVersion": "", "eventTime": "2026-03-16T10:00:00Z", "eventType": "Microsoft.Storage.BlobCreated", "id": "831e1650-001e-001b-66ab-eeb76e069631", "metadataVersion": "1", "subject": "/blobServices/default/containers/mycontainer/blobs/path/to/myfile.csv", "topic": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mystorageaccount" }, "timestamp": "2026-03-16T10:00:00Z", "type": "azure.blob.created" } ``` ## On Blob Deleted The On Blob Deleted trigger starts a workflow execution when a blob is deleted from an Azure Storage Account. ### Use Cases - **Cleanup workflows**: Remove associated resources or records when a blob is deleted - **Audit and compliance**: Record blob deletions for traceability - **Notification workflows**: Alert teams when important files are removed from storage ### How It Works This trigger listens to Azure Event Grid events from a Storage Account. When a blob is deleted, the `Microsoft.Storage.BlobDeleted` event is delivered and the trigger fires with the full event payload. 
### Configuration - **Resource Group** (required): The resource group containing the Storage Account. - **Storage Account** (required): The Storage Account to watch. - **Container Filter** (optional): A regex pattern to filter by container name. - **Blob Filter** (optional): A regex pattern to filter by blob path. ### Event Data Each blob deleted event includes: - **subject**: The full blob path in the format /blobServices/default/containers/{container}/blobs/{blob} - **data.api**: The operation that triggered the event (e.g., DeleteBlob) - **data.blobType**: The blob type (BlockBlob, PageBlob, AppendBlob) - **data.url**: The URL of the deleted blob ### Example Data ```json { "data": { "data": { "api": "DeleteBlob", "blobType": "BlockBlob", "clientRequestId": "6d6cef9a-a602-4a23-bc26-91bb68a2bf74", "contentLength": 0, "contentType": "text/csv", "eTag": "0x8D4BCC2E4835CD0", "requestId": "d1e6b5a4-0001-0035-4a7b-2e5c4f000000", "sequencer": "00000000000004420000000000028964", "url": "https://mystorageaccount.blob.core.windows.net/mycontainer/path/to/myfile.csv" }, "dataVersion": "", "eventTime": "2026-03-16T11:00:00Z", "eventType": "Microsoft.Storage.BlobDeleted", "id": "afc359b4-001e-001b-66ab-eeb76e069631", "metadataVersion": "1", "subject": "/blobServices/default/containers/mycontainer/blobs/path/to/myfile.csv", "topic": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mystorageaccount" }, "timestamp": "2026-03-16T11:00:00Z", "type": "azure.blob.deleted" } ``` ## On Image Deleted The On Image Deleted trigger starts a workflow execution when a container image is deleted from an Azure Container Registry. 
### Use Cases - **Cleanup workflows**: Remove associated resources when an image is deleted - **Audit trails**: Record image deletions for compliance purposes - **Notification workflows**: Alert teams when images are removed from the registry ### How It Works This trigger listens to Azure Event Grid events from an ACR registry. When an image or manifest is deleted, the `Microsoft.ContainerRegistry.ImageDeleted` event is delivered and the trigger fires with the full event payload. Note: Image deletions reference manifests by digest. Tags may be empty if the manifest itself was deleted. ### Configuration - **Resource Group** (required): The resource group containing the ACR registry. - **Registry** (required): The ACR registry to watch. - **Repository Filter** (optional): A regex pattern to filter by repository name. ### Event Data Each delete event includes: - **target.repository**: The repository name - **target.digest**: The manifest digest that was deleted - **target.tag**: The tag (may be empty for manifest deletes) - **actor.name**: The user or service principal that deleted the image ### Example Data ```json { "data": { "data": { "action": "delete", "actor": { "name": "myuser" }, "id": "afc359b4-001e-001b-66ab-eeb76e069631", "request": { "addr": "203.0.113.0:49926", "host": "myregistry.azurecr.io", "id": "6d6cef9a-a602-4a23-bc26-91bb68a2bf74", "method": "DELETE", "useragent": "docker/20.10.7" }, "source": { "addr": "myregistry.azurecr.io", "instanceID": "a29a591f-f89c-4f8d-b061-3c5d73d4756c" }, "target": { "digest": "sha256:abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234567890", "mediaType": "application/vnd.docker.distribution.manifest.v2+json", "repository": "myrepository", "tag": "", "url": "https://myregistry.azurecr.io/v2/myrepository/manifests/sha256:abcdef1234567890" }, "timestamp": "2026-03-16T11:00:00Z" }, "dataVersion": "1.0", "eventTime": "2026-03-16T11:00:00Z", "eventType": "Microsoft.ContainerRegistry.ImageDeleted", "id": 
"afc359b4-001e-001b-66ab-eeb76e069631", "metadataVersion": "1", "subject": "myregistry.azurecr.io/myrepository@sha256:abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234567890", "topic": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.ContainerRegistry/registries/myregistry" }, "timestamp": "2026-03-16T11:00:00Z", "type": "azure.image.deleted" } ``` ## On Image Pushed The On Image Pushed trigger starts a workflow execution when a container image is pushed to an Azure Container Registry. ### Use Cases - **CI/CD pipelines**: Trigger deployments when a new image version is pushed - **Image scanning**: Kick off security scans when new images arrive - **Notification workflows**: Notify teams when images are updated - **Tag tracking**: React to specific image tags being published ### How It Works This trigger listens to Azure Event Grid events from an ACR registry. When an image push succeeds, the `Microsoft.ContainerRegistry.ImagePushed` event is delivered and the trigger fires with the full event payload. ### Configuration - **Resource Group** (required): The resource group containing the ACR registry. - **Registry** (required): The ACR registry to watch. - **Repository Filter** (optional): A regex pattern to filter by repository name. - **Tag Filter** (optional): A regex pattern to filter by image tag. 
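A common use of the Tag Filter is to fire only for release tags and skip moving tags like `latest`. A sketch of what such a pattern matches — the `tag_matches` helper and `re.search` semantics are illustrative assumptions for showing the regex, not SuperPlane internals:

```python
import re

# Example Tag Filter: only fire for semver release tags like "v1.2.3",
# skipping moving tags such as "latest" or untagged manifest pushes.
TAG_FILTER = r"^v\d+\.\d+\.\d+$"

def tag_matches(tag: str, pattern: str = TAG_FILTER) -> bool:
    return re.search(pattern, tag) is not None

tag_matches("v1.2.3")  # True
tag_matches("latest")  # False
```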
### Event Data Each push event includes: - **target.repository**: The repository name - **target.tag**: The image tag - **target.digest**: The image manifest digest - **actor.name**: The user or service principal that pushed the image - **request.host**: The registry hostname ### Example Data ```json { "data": { "data": { "action": "push", "actor": { "name": "myuser" }, "id": "831e1650-001e-001b-66ab-eeb76e069631", "request": { "addr": "203.0.113.0:49926", "host": "myregistry.azurecr.io", "id": "6d6cef9a-a602-4a23-bc26-91bb68a2bf74", "method": "PUT", "useragent": "docker/20.10.7" }, "source": { "addr": "myregistry.azurecr.io", "instanceID": "a29a591f-f89c-4f8d-b061-3c5d73d4756c" }, "target": { "digest": "sha256:abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234567890", "length": 1234567, "mediaType": "application/vnd.docker.distribution.manifest.v2+json", "repository": "myrepository", "size": 1234567, "tag": "v1.2.3", "url": "https://myregistry.azurecr.io/v2/myrepository/manifests/sha256:abcdef1234567890" }, "timestamp": "2026-03-16T10:00:00Z" }, "dataVersion": "1.0", "eventTime": "2026-03-16T10:00:00Z", "eventType": "Microsoft.ContainerRegistry.ImagePushed", "id": "831e1650-001e-001b-66ab-eeb76e069631", "metadataVersion": "1", "subject": "myregistry.azurecr.io/myrepository:v1.2.3", "topic": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.ContainerRegistry/registries/myregistry" }, "timestamp": "2026-03-16T10:00:00Z", "type": "azure.image.pushed" } ``` ## On VM Deallocated The On VM Deallocated trigger starts a workflow execution when an Azure Virtual Machine is deallocated. 
### Use Cases - **Cost tracking**: Log when VMs are deallocated and compute charges stop - **Resource cleanup**: Clean up associated resources (DNS, monitoring) when VMs are deallocated - **Scheduling verification**: Confirm VMs are being deallocated according to cost-saving schedules - **Capacity planning**: Track deallocation patterns for capacity planning - **Notification workflows**: Alert teams when VMs go offline ### How It Works This trigger listens to Azure Event Grid events for Virtual Machine deallocate actions. When a VM deallocate action succeeds (`status: Succeeded`), the trigger fires and provides the full Azure Event Grid event payload. Azure fires `Microsoft.Resources.ResourceActionSuccess` with operation name `Microsoft.Compute/virtualMachines/deallocate/action` when a VM is deallocated. Deallocation stops the VM and releases compute resources — the VM no longer incurs compute charges (only storage charges remain). **Important**: This trigger fires on deallocate, not on power-off (stop without deallocation). Use the "On VM Stopped" trigger for power-off events. ### Configuration - **Resource Group** (optional): Filter events to only trigger for VMs in a specific resource group. Leave empty to trigger for all resource groups in the subscription. - **VM Name Filter** (optional): A regex pattern to filter VMs by name. Only VMs whose name matches the pattern will trigger the workflow. Leave empty to trigger for all VM names. 
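The deallocate/power-off distinction comes down to the `operationName` in the event data. A sketch of how a consumer of these events could tell the two apart — an illustrative helper, not SuperPlane's internal routing:

```python
DEALLOCATE_OP = "Microsoft.Compute/virtualMachines/deallocate/action"
POWER_OFF_OP = "Microsoft.Compute/virtualMachines/powerOff/action"

def classify_stop_event(event_data: dict) -> str:
    """Classify a Microsoft.Resources.ResourceActionSuccess payload as
    'deallocated', 'powered-off', or 'other'."""
    if event_data.get("status") != "Succeeded":
        return "other"  # failed operations never fire the trigger
    op = event_data.get("operationName", "")
    if op == DEALLOCATE_OP:
        return "deallocated"  # compute released; only storage charges remain
    if op == POWER_OFF_OP:
        return "powered-off"  # compute still allocated; charges continue
    return "other"

classify_stop_event({"operationName": DEALLOCATE_OP, "status": "Succeeded"})
# → "deallocated"
```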
### Event Data Each VM deallocate event includes the full Azure Event Grid event: - **id**: Unique event ID - **topic**: The Azure subscription topic - **subject**: The full ARM resource ID of the VM (with /deallocate appended) - **eventType**: The event type (`Microsoft.Resources.ResourceActionSuccess`) - **eventTime**: The timestamp when the event occurred - **data**: The event data including operationName, status, resourceProvider, resourceUri, subscriptionId, tenantId ### Azure Event Grid Setup Event Grid subscriptions are created automatically when the trigger is set up. SuperPlane will: 1. Create an Event Grid subscription at the Azure subscription scope 2. Configure it to forward `Microsoft.Resources.ResourceActionSuccess` events to the trigger webhook 3. Apply subject filters based on the configured resource group and resource type 4. Handle the Event Grid validation handshake automatically No manual setup is required. ### Notes - The trigger fires when a VM is deallocated (stopped and compute resources released) - After deallocation, only storage charges remain — compute charges stop - It does not fire on power-off (stop without deallocation) — use "On VM Stopped" for that - Failed deallocate operations do not trigger the workflow - The trigger processes events from Azure Event Grid in real-time - Multiple triggers can share the same Event Grid subscription if configured correctly ### Example Data ```json { "data": { "data": { "operationName": "Microsoft.Compute/virtualMachines/deallocate/action", "resourceProvider": "Microsoft.Compute", "resourceUri": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm-01", "status": "Succeeded", "subscriptionId": "12345678-1234-1234-1234-123456789abc", "tenantId": "12345678-1234-1234-1234-123456789abc" }, "dataVersion": "2", "eventTime": "2026-02-11T10:30:00Z", "eventType": "Microsoft.Resources.ResourceActionSuccess", "id": 
"c3d4e5f6-a7b8-9012-cdef-123456789012", "metadataVersion": "1", "subject": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm-01/deallocate", "topic": "/subscriptions/12345678-1234-1234-1234-123456789abc" }, "timestamp": "2026-02-11T10:30:00Z", "type": "azure.vm.deallocated" } ``` ## On VM Deleted The On VM Deleted trigger starts a workflow execution when an Azure Virtual Machine is deleted. ### Use Cases - **Cleanup workflows**: Remove DNS records, monitoring agents, or other dependent resources when a VM is deleted - **Inventory tracking**: Update external inventory systems when VMs are removed - **Notification workflows**: Send notifications to teams when VMs are deleted - **Cost tracking**: Log VM deletion events for cost analysis and reporting - **Compliance auditing**: Track and audit VM deletions for security and compliance ### How It Works This trigger listens to Azure Event Grid events for Virtual Machine resource delete operations. When a VM delete operation succeeds (`status: Succeeded`), the trigger fires and provides the full Azure Event Grid event payload. Azure fires `Microsoft.Resources.ResourceDeleteSuccess` when a VM is successfully deleted. This is a distinct event from write operations — it only fires when the VM is actually removed, not during creation or updates. ### Configuration - **Resource Group** (optional): Filter events to only trigger for VMs in a specific resource group. Leave empty to trigger for all resource groups in the subscription. - **VM Name Filter** (optional): A regex pattern to filter VMs by name. Only VMs whose name matches the pattern will trigger the workflow. Leave empty to trigger for all VM names. 
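The VM Name Filter is matched against the VM name, which is the last segment of the ARM `resourceUri` in the event data. A sketch of extracting the name and testing a filter pattern against it (the helper name is illustrative):

```python
import re

def vm_name_from_resource_uri(resource_uri: str) -> str:
    """Extract the VM name from an ARM resource URI such as
    /subscriptions/.../providers/Microsoft.Compute/virtualMachines/my-vm-01"""
    return resource_uri.rstrip("/").rsplit("/", 1)[-1]

uri = ("/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/"
       "my-rg/providers/Microsoft.Compute/virtualMachines/my-vm-01")
name = vm_name_from_resource_uri(uri)        # "my-vm-01"
re.search(r"^my-vm-\d+$", name) is not None  # True — VM Name Filter match
```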
### Event Data Each VM delete event includes the full Azure Event Grid event: - **id**: Unique event ID - **topic**: The Azure subscription topic - **subject**: The full ARM resource ID of the VM - **eventType**: The event type (`Microsoft.Resources.ResourceDeleteSuccess`) - **eventTime**: The timestamp when the event occurred - **data**: The event data including operationName, status, resourceProvider, resourceUri, subscriptionId, tenantId ### Azure Event Grid Setup Event Grid subscriptions are created automatically when the trigger is set up. SuperPlane will: 1. Create an Event Grid subscription at the Azure subscription scope 2. Configure it to forward `Microsoft.Resources.ResourceDeleteSuccess` events to the trigger webhook 3. Apply subject filters based on the configured resource group and resource type 4. Handle the Event Grid validation handshake automatically No manual setup is required. ### Notes - The trigger fires only when a VM is successfully deleted - Failed delete operations do not trigger the workflow - The trigger processes events from Azure Event Grid in real-time - Multiple triggers can share the same Event Grid subscription if configured correctly ### Example Data ```json { "data": { "data": { "operationName": "Microsoft.Compute/virtualMachines/delete", "resourceProvider": "Microsoft.Compute", "resourceUri": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm-01", "status": "Succeeded", "subscriptionId": "12345678-1234-1234-1234-123456789abc", "tenantId": "12345678-1234-1234-1234-123456789abc" }, "dataVersion": "2", "eventTime": "2026-02-11T10:30:00Z", "eventType": "Microsoft.Resources.ResourceDeleteSuccess", "id": "96257b6d-17d3-49e2-8369-fb185b29e1b5", "metadataVersion": "1", "subject": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm-01", "topic": 
"/subscriptions/12345678-1234-1234-1234-123456789abc" }, "timestamp": "2026-02-11T10:30:00Z", "type": "azure.vm.deleted" } ``` ## On VM Restarted The On VM Restarted trigger starts a workflow execution when an Azure Virtual Machine is restarted. ### Use Cases - **Post-restart validation**: Run health checks or smoke tests after a VM restarts - **Configuration reapplication**: Reapply configuration that may not persist across restarts - **Monitoring alerts**: Notify teams when VMs are restarted unexpectedly - **Audit logging**: Track VM restart events for compliance and troubleshooting ### How It Works This trigger listens to Azure Event Grid events for Virtual Machine restart actions. When a VM restart action succeeds (`status: Succeeded`), the trigger fires and provides the full Azure Event Grid event payload. Azure fires `Microsoft.Resources.ResourceActionSuccess` with operation name `Microsoft.Compute/virtualMachines/restart/action` when a VM is restarted. This reboots the VM in place without deallocating — the VM keeps its compute allocation and IP addresses. **Important**: This trigger fires only on explicit restart actions. It does not fire on start (after stop/deallocate) or power-off. Use the "On VM Started" or "On VM Stopped" triggers for those events. ### Configuration - **Resource Group** (optional): Filter events to only trigger for VMs in a specific resource group. Leave empty to trigger for all resource groups in the subscription. - **VM Name Filter** (optional): A regex pattern to filter VMs by name. Only VMs whose name matches the pattern will trigger the workflow. Leave empty to trigger for all VM names. 
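This restart trigger is one of five VM lifecycle triggers, and each corresponds to a distinct Azure operation name in the event data. A summary of that correspondence as a lookup, using the operation names and SuperPlane event types shown in the example payloads of these sections (the lookup itself is an illustrative sketch):

```python
# Azure operation name → SuperPlane event type, per the example payloads.
VM_EVENT_TYPES = {
    "Microsoft.Compute/virtualMachines/deallocate/action": "azure.vm.deallocated",
    "Microsoft.Compute/virtualMachines/delete": "azure.vm.deleted",
    "Microsoft.Compute/virtualMachines/restart/action": "azure.vm.restarted",
    "Microsoft.Compute/virtualMachines/start/action": "azure.vm.started",
    "Microsoft.Compute/virtualMachines/powerOff/action": "azure.vm.stopped",
}

def superplane_type(operation_name):
    """Return the SuperPlane event type for an Azure operation name,
    or None for operations these triggers do not cover."""
    return VM_EVENT_TYPES.get(operation_name)

superplane_type("Microsoft.Compute/virtualMachines/restart/action")
# → "azure.vm.restarted"
```

Note that `delete` has no `/action` suffix: it arrives as `Microsoft.Resources.ResourceDeleteSuccess` rather than `ResourceActionSuccess`.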
### Event Data Each VM restart event includes the full Azure Event Grid event: - **id**: Unique event ID - **topic**: The Azure subscription topic - **subject**: The full ARM resource ID of the VM (with /restart appended) - **eventType**: The event type (`Microsoft.Resources.ResourceActionSuccess`) - **eventTime**: The timestamp when the event occurred - **data**: The event data including operationName, status, resourceProvider, resourceUri, subscriptionId, tenantId ### Azure Event Grid Setup Event Grid subscriptions are created automatically when the trigger is set up. SuperPlane will: 1. Create an Event Grid subscription at the Azure subscription scope 2. Configure it to forward `Microsoft.Resources.ResourceActionSuccess` events to the trigger webhook 3. Apply subject filters based on the configured resource group and resource type 4. Handle the Event Grid validation handshake automatically No manual setup is required. ### Notes - The trigger fires when a VM is explicitly restarted (rebooted in place) - The VM keeps its compute allocation and IP addresses during a restart - It does not fire on start (after stop/deallocate) — use "On VM Started" for that - It does not fire on power-off or deallocate — use "On VM Stopped" or "On VM Deallocated" - Failed restart operations do not trigger the workflow - The trigger processes events from Azure Event Grid in real-time - Multiple triggers can share the same Event Grid subscription if configured correctly ### Example Data ```json { "data": { "data": { "operationName": "Microsoft.Compute/virtualMachines/restart/action", "resourceProvider": "Microsoft.Compute", "resourceUri": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm-01", "status": "Succeeded", "subscriptionId": "12345678-1234-1234-1234-123456789abc", "tenantId": "12345678-1234-1234-1234-123456789abc" }, "dataVersion": "2", "eventTime": "2026-02-11T10:30:00Z", "eventType": 
"Microsoft.Resources.ResourceActionSuccess", "id": "d4e5f6a7-b8c9-0123-defa-234567890123", "metadataVersion": "1", "subject": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm-01/restart", "topic": "/subscriptions/12345678-1234-1234-1234-123456789abc" }, "timestamp": "2026-02-11T10:30:00Z", "type": "azure.vm.restarted" } ``` ## On VM Started The On VM Started trigger starts a workflow execution when an Azure Virtual Machine is started. ### Use Cases - **Post-boot configuration**: Apply configuration, install agents, or run setup scripts when a VM starts - **Health checks**: Run health checks or readiness probes after a VM boots - **Notification workflows**: Notify teams when VMs come online - **Monitoring setup**: Register VMs with monitoring systems when they start - **Auto-scaling workflows**: Trigger downstream actions when new capacity comes online ### How It Works This trigger listens to Azure Event Grid events for Virtual Machine start actions. When a VM start action succeeds (`status: Succeeded`), the trigger fires and provides the full Azure Event Grid event payload. Azure fires `Microsoft.Resources.ResourceActionSuccess` with operation name `Microsoft.Compute/virtualMachines/start/action` when a VM is explicitly started. This event fires specifically when a stopped/deallocated VM is started via the Azure portal, CLI, or API. It does not fire on initial VM creation, restarts, or other VM actions. ### Configuration - **Resource Group** (optional): Filter events to only trigger for VMs in a specific resource group. Leave empty to trigger for all resource groups in the subscription. - **VM Name Filter** (optional): A regex pattern to filter VMs by name. Only VMs whose name matches the pattern will trigger the workflow. Leave empty to trigger for all VM names. 
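The Event Grid setup for these VM triggers includes a validation handshake that SuperPlane completes automatically. For context, Event Grid delivers a `Microsoft.EventGrid.SubscriptionValidationEvent` containing a `validationCode`, and the webhook must echo it back in a 200 response. A minimal sketch of that logic (the function name is illustrative; this is not SuperPlane's code):

```python
import json

def handle_event_grid_request(body: str):
    """Return the validation response Event Grid expects, or None for an
    ordinary event delivery. Event Grid POSTs a JSON list of events; a
    subscription validation event carries a validationCode that must be
    echoed back as {"validationResponse": code} with HTTP 200."""
    events = json.loads(body)
    for event in events:
        if event.get("eventType") == "Microsoft.EventGrid.SubscriptionValidationEvent":
            return {"validationResponse": event["data"]["validationCode"]}
    return None  # normal delivery: process the events, respond 200

handle_event_grid_request(json.dumps([{
    "eventType": "Microsoft.EventGrid.SubscriptionValidationEvent",
    "data": {"validationCode": "512d38b6-c7b8-40c8-89fe-f46f9e9622b6"},
}]))
# → {"validationResponse": "512d38b6-c7b8-40c8-89fe-f46f9e9622b6"}
```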
### Event Data Each VM start event includes the full Azure Event Grid event: - **id**: Unique event ID - **topic**: The Azure subscription topic - **subject**: The full ARM resource ID of the VM (with /start appended) - **eventType**: The event type (`Microsoft.Resources.ResourceActionSuccess`) - **eventTime**: The timestamp when the event occurred - **data**: The event data including operationName, status, resourceProvider, resourceUri, subscriptionId, tenantId ### Azure Event Grid Setup Event Grid subscriptions are created automatically when the trigger is set up. SuperPlane will: 1. Create an Event Grid subscription at the Azure subscription scope 2. Configure it to forward `Microsoft.Resources.ResourceActionSuccess` events to the trigger webhook 3. Apply subject filters based on the configured resource group and resource type 4. Handle the Event Grid validation handshake automatically No manual setup is required. ### Notes - The trigger fires when a stopped/deallocated VM is explicitly started - It does not fire on initial VM creation — creation emits a resource write event, not a start action - It does not fire on VM restart — restart uses a separate `restart/action` operation - Failed start operations do not trigger the workflow - The trigger processes events from Azure Event Grid in real-time - Multiple triggers can share the same Event Grid subscription if configured correctly ### Example Data ```json { "data": { "data": { "operationName": "Microsoft.Compute/virtualMachines/start/action", "resourceProvider": "Microsoft.Compute", "resourceUri": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm-01", "status": "Succeeded", "subscriptionId": "12345678-1234-1234-1234-123456789abc", "tenantId": "12345678-1234-1234-1234-123456789abc" }, "dataVersion": "2", "eventTime": "2026-02-11T10:30:00Z", "eventType": "Microsoft.Resources.ResourceActionSuccess", "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"metadataVersion": "1", "subject": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm-01/start", "topic": "/subscriptions/12345678-1234-1234-1234-123456789abc" }, "timestamp": "2026-02-11T10:30:00Z", "type": "azure.vm.started" } ``` ## On VM Stopped The On VM Stopped trigger starts a workflow execution when an Azure Virtual Machine is powered off. ### Use Cases - **Cost alerts**: Notify teams when VMs are stopped but still allocated (still incurring compute charges) - **Shutdown workflows**: Run cleanup scripts or save state when a VM is powered off - **Compliance monitoring**: Track VM power-off events for audit trails - **Scheduling validation**: Verify that VMs are being stopped according to schedule ### How It Works This trigger listens to Azure Event Grid events for Virtual Machine power-off actions. When a VM power-off action succeeds (`status: Succeeded`), the trigger fires and provides the full Azure Event Grid event payload. Azure fires `Microsoft.Resources.ResourceActionSuccess` with operation name `Microsoft.Compute/virtualMachines/powerOff/action` when a VM is powered off. This stops the VM OS but keeps the compute allocation — the VM still incurs compute charges. To fully release compute resources, use deallocate instead. **Important**: This trigger fires on power-off (stop without deallocation), not on deallocate. Use the "On VM Deallocated" trigger for deallocate events. ### Configuration - **Resource Group** (optional): Filter events to only trigger for VMs in a specific resource group. Leave empty to trigger for all resource groups in the subscription. - **VM Name Filter** (optional): A regex pattern to filter VMs by name. Only VMs whose name matches the pattern will trigger the workflow. Leave empty to trigger for all VM names. 
### Event Data Each VM stop event includes the full Azure Event Grid event: - **id**: Unique event ID - **topic**: The Azure subscription topic - **subject**: The full ARM resource ID of the VM (with /powerOff appended) - **eventType**: The event type (`Microsoft.Resources.ResourceActionSuccess`) - **eventTime**: The timestamp when the event occurred - **data**: The event data including operationName, status, resourceProvider, resourceUri, subscriptionId, tenantId ### Azure Event Grid Setup Event Grid subscriptions are created automatically when the trigger is set up. SuperPlane will: 1. Create an Event Grid subscription at the Azure subscription scope 2. Configure it to forward `Microsoft.Resources.ResourceActionSuccess` events to the trigger webhook 3. Apply subject filters based on the configured resource group and resource type 4. Handle the Event Grid validation handshake automatically No manual setup is required. ### Notes - The trigger fires when a VM is powered off (stopped without deallocation) - The VM still incurs compute charges after a power-off — use deallocate to release resources - It does not fire on deallocate — use the "On VM Deallocated" trigger for that - Failed power-off operations do not trigger the workflow - The trigger processes events from Azure Event Grid in real-time - Multiple triggers can share the same Event Grid subscription if configured correctly ### Example Data ```json { "data": { "data": { "operationName": "Microsoft.Compute/virtualMachines/powerOff/action", "resourceProvider": "Microsoft.Compute", "resourceUri": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm-01", "status": "Succeeded", "subscriptionId": "12345678-1234-1234-1234-123456789abc", "tenantId": "12345678-1234-1234-1234-123456789abc" }, "dataVersion": "2", "eventTime": "2026-02-11T10:30:00Z", "eventType": "Microsoft.Resources.ResourceActionSuccess", "id": 
"b2c3d4e5-f6a7-8901-bcde-f12345678901", "metadataVersion": "1", "subject": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm-01/powerOff", "topic": "/subscriptions/12345678-1234-1234-1234-123456789abc" }, "timestamp": "2026-02-11T10:30:00Z", "type": "azure.vm.stopped" } ``` ## Create Virtual Machine The Create Virtual Machine component creates a new Azure VM with full configuration options. ### Use Cases - **Infrastructure provisioning**: Automatically create VMs as part of deployment workflows - **Development environments**: Spin up temporary VMs for testing and development - **Auto-scaling**: Create VMs in response to load or events - **Disaster recovery**: Quickly provision replacement VMs ### How It Works 1. Validates the VM configuration parameters 2. Initiates VM creation via the Azure Compute API 3. Waits for the VM to be fully provisioned (using Azure's Long-Running Operation pattern) 4. Returns the VM details including ID, name, and provisioning state ### Configuration - **Resource Group**: The Azure resource group where the VM will be created - **Name**: The name for the new virtual machine - **Location**: The Azure region (e.g., "eastus", "westeurope") - **Size**: The VM size (e.g., "Standard_B1s", "Standard_D2s_v3") - **Admin Username**: Administrator username for the VM - **Admin Password**: Administrator password for the VM (must meet Azure complexity requirements) - **Virtual Network / Subnet**: The network for the VM - **Network Interface ID**: Optional existing NIC (overrides VNet/Subnet) - **OS Image**: Select from common presets (Ubuntu, Debian, Windows Server) ### Output Returns the created VM information including: - **id**: The Azure resource ID of the VM - **name**: The name of the VM - **provisioningState**: The provisioning state (typically "Succeeded") - **location**: The Azure region where the VM was created - **size**: The VM size ### Notes - The VM creation is a 
Long-Running Operation (LRO) that typically takes 2-5 minutes - The component waits for the VM to be fully provisioned before completing - The admin password must meet Azure's complexity requirements (12+ characters, mixed case, numbers, symbols) - If Network Interface ID is empty, a NIC is created automatically from the selected VNet/Subnet ### Example Output ```json { "data": { "id": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm", "location": "eastus", "name": "my-vm", "provisioningState": "Succeeded", "size": "Standard_B1s" }, "timestamp": "2026-03-16T10:00:00Z", "type": "azure.vm" } ``` ## Deallocate Virtual Machine The Deallocate Virtual Machine component deallocates an existing Azure VM, stopping it and releasing its compute resources. ### Use Cases - **Cost optimization**: Deallocate VMs to stop compute charges during off-hours - **Scheduled shutdown**: Deallocate VMs on a schedule to reduce costs - **Pre-resize operations**: Deallocate a VM before changing its size - **Environment teardown**: Deallocate VMs without deleting them ### How It Works 1. Validates the VM parameters 2. Initiates VM deallocation via the Azure Compute API 3. Waits for the VM to be fully deallocated (using Azure's Long-Running Operation pattern) 4. 
Returns the VM details including ID and name ### Configuration - **Resource Group**: The Azure resource group containing the VM - **VM Name**: The name of the virtual machine to deallocate ### Output Returns the deallocated VM information including: - **id**: The Azure resource ID of the VM - **name**: The name of the VM - **resourceGroup**: The resource group containing the VM ### Notes - Deallocation stops the VM and releases compute resources — **compute charges stop** - Only storage charges remain for the VM's disks - Dynamic public IP addresses are released on deallocation - The operation is a Long-Running Operation (LRO) that typically takes 1-3 minutes - The component waits for the VM to be fully deallocated before completing - To power off without releasing compute resources, use the "Stop Virtual Machine" action ### Example Output ```json { "data": { "id": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm", "name": "my-vm", "resourceGroup": "my-rg" }, "timestamp": "2026-03-16T10:00:00Z", "type": "azure.vm" } ``` ## Delete Virtual Machine The Delete Virtual Machine component deletes an existing Azure VM. ### Use Cases - **Infrastructure teardown**: Remove VMs as part of decommissioning workflows - **Cost optimization**: Delete unused or idle VMs to reduce costs - **Environment cleanup**: Remove temporary VMs after testing or development - **Auto-scaling**: Delete VMs in response to reduced load ### How It Works 1. Validates the VM deletion parameters 2. Initiates VM deletion via the Azure Compute API 3. Waits for the VM to be fully deleted (using Azure's Long-Running Operation pattern) 4. 
Returns the deleted VM details including ID and name ### Configuration - **Resource Group**: The Azure resource group containing the VM - **VM Name**: The name of the virtual machine to delete - **Delete Associated Resources**: When enabled, also deletes the VM's OS disk, data disks, network interfaces, and public IPs ### Output Returns the deleted VM information including: - **id**: The Azure resource ID of the deleted VM - **name**: The name of the deleted VM - **resourceGroup**: The resource group that contained the VM ### Notes - The VM deletion is a Long-Running Operation (LRO) that typically takes 1-3 minutes - The component waits for the VM to be fully deleted before completing - When "Delete Associated Resources" is enabled, Azure cascade-deletes OS disk, data disks, NICs, and public IPs along with the VM - Shared resources like virtual networks, subnets, and network security groups are never deleted - This operation is irreversible ### Example Output ```json { "data": { "id": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm", "name": "my-vm", "resourceGroup": "my-rg" }, "timestamp": "2026-03-16T10:00:00Z", "type": "azure.vm" } ``` ## Restart Virtual Machine The Restart Virtual Machine component restarts an existing Azure VM in place. ### Use Cases - **Apply updates**: Restart VMs after applying OS or software updates - **Troubleshooting**: Restart VMs to recover from transient issues - **Configuration changes**: Restart VMs after configuration changes that require a reboot - **Automated maintenance**: Restart VMs as part of maintenance workflows ### How It Works 1. Validates the VM parameters 2. Initiates VM restart via the Azure Compute API 3. Waits for the VM to be fully restarted (using Azure's Long-Running Operation pattern) 4. 
Returns the VM details including ID and name ### Configuration - **Resource Group**: The Azure resource group containing the VM - **VM Name**: The name of the virtual machine to restart ### Output Returns the restarted VM information including: - **id**: The Azure resource ID of the VM - **name**: The name of the VM - **resourceGroup**: The resource group containing the VM ### Notes - The VM is rebooted in place — it keeps its compute allocation and IP addresses - The operation is a Long-Running Operation (LRO) that typically takes 1-3 minutes - The component waits for the VM to be fully restarted before completing - This performs a graceful restart of the VM ### Example Output ```json { "data": { "id": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm", "name": "my-vm", "resourceGroup": "my-rg" }, "timestamp": "2026-03-16T10:00:00Z", "type": "azure.vm" } ``` ## Start Virtual Machine The Start Virtual Machine component starts a stopped or deallocated Azure VM. ### Use Cases - **Resume workloads**: Start VMs that were previously stopped or deallocated - **Scheduled startup**: Start VMs as part of scheduled workflows - **Auto-scaling**: Start pre-provisioned VMs to handle increased demand - **Disaster recovery**: Start standby VMs as part of failover workflows ### How It Works 1. Validates the VM parameters 2. Initiates VM start via the Azure Compute API 3. Waits for the VM to be fully started (using Azure's Long-Running Operation pattern) 4. 
Returns the VM details including ID and name ### Configuration - **Resource Group**: The Azure resource group containing the VM - **VM Name**: The name of the virtual machine to start ### Output Returns the started VM information including: - **id**: The Azure resource ID of the VM - **name**: The name of the VM - **resourceGroup**: The resource group containing the VM ### Notes - The VM must be in a stopped or deallocated state to be started - The operation is a Long-Running Operation (LRO) that typically takes 1-3 minutes - The component waits for the VM to be fully started before completing - Once started, the VM will begin incurring compute charges ### Example Output ```json { "data": { "id": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm", "name": "my-vm", "resourceGroup": "my-rg" }, "timestamp": "2026-03-16T10:00:00Z", "type": "azure.vm" } ``` ## Stop Virtual Machine The Stop Virtual Machine component powers off an existing Azure VM without deallocating it. ### Use Cases - **Temporary shutdown**: Stop a VM temporarily while keeping its compute allocation - **Maintenance windows**: Power off VMs during maintenance periods - **Pre-deallocate workflows**: Stop VMs before performing other operations ### How It Works 1. Validates the VM parameters 2. Initiates VM power-off via the Azure Compute API 3. Waits for the VM to be fully powered off (using Azure's Long-Running Operation pattern) 4. 
Returns the VM details including ID and name ### Configuration - **Resource Group**: The Azure resource group containing the VM - **VM Name**: The name of the virtual machine to stop ### Output Returns the stopped VM information including: - **id**: The Azure resource ID of the VM - **name**: The name of the VM - **resourceGroup**: The resource group containing the VM ### Notes - The VM is powered off but compute resources remain allocated — **you still incur compute charges** - To fully release compute resources and stop charges, use the "Deallocate Virtual Machine" action - The operation is a Long-Running Operation (LRO) that typically takes 1-2 minutes - The component waits for the VM to be fully powered off before completing ### Example Output ```json { "data": { "id": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm", "name": "my-vm", "resourceGroup": "my-rg" }, "timestamp": "2026-03-16T10:00:00Z", "type": "azure.vm" } ``` #### Microsoft Teams Source URL: https://docs.superplane.com/components/microsoftteams Send and receive messages in Microsoft Teams channels import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## Instructions ## Setup 1. **Create an Azure App Registration**: - Go to **Azure Portal** → **App registrations** → **New registration** - Name: "SuperPlane Bot" (or your preference) - Supported account types: **Accounts in any organizational directory** (multi-tenant) or single-tenant - Note the **Application (client) ID** — this is the **App ID** below - Go to **Certificates & secrets** → **New client secret** → copy the **Value** — this is the **App Password** below 2. 
**Add Graph API permissions** (required for channel listing): - Go to your **App Registration** → **API permissions** → **Add a permission** - Select **Microsoft Graph** → **Application permissions** - Add: **Team.ReadBasic.All** and **Channel.ReadBasic.All** - Click **Grant admin consent** 3. **Enter credentials** in the fields below and save ## On Mention The On Mention trigger starts a workflow execution when the Teams bot is @mentioned in a channel message. ### Use Cases - **Bot commands**: Process commands from Teams messages - **Bot interactions**: Create interactive Teams bots - **Team workflows**: Trigger workflows from Teams conversations - **Notification processing**: Process and respond to mentions ### Configuration - **Channel**: Optional channel filter — if specified, only mentions in this channel will trigger (leave empty to listen to all channels) - **Content Filter**: Optional regex pattern to filter messages by content (e.g., `/deploy` to only trigger on mentions containing "/deploy") ### Event Data Each mention event includes: - **text**: The message text containing the mention - **from**: User who mentioned the bot (ID and name) - **conversation**: Channel and team information - **timestamp**: When the mention occurred - **serviceUrl**: Bot Framework service URL for sending replies ### Setup This trigger automatically sets up a subscription for bot mention events when configured. The subscription is managed by SuperPlane and will be cleaned up when the trigger is removed. ### Note This trigger works with the default Bot Framework behavior — the bot receives messages where it is @mentioned without any additional permissions. 
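The channel and content filters above compose as a simple AND. A minimal sketch of the matching semantics (the function name and `re`-style regex matching are illustrative assumptions — SuperPlane evaluates the real filters internally):

```python
import re

def mention_matches(event, channel_id=None, content_filter=None):
    """Hypothetical filter check: a mention triggers only when it passes
    both the optional channel filter and the optional content regex."""
    if channel_id and event["conversation"]["id"] != channel_id:
        return False
    if content_filter and not re.search(content_filter, event.get("text", "")):
        return False
    return True

# Event shaped like the Example Data below.
event = {
    "conversation": {"id": "19:abc123def456@thread.tacv2"},
    "text": "<at>SuperPlane Bot</at> deploy the latest build to staging",
}
```

With no filters set, every mention triggers; with a content filter of `/deploy`, only mentions containing the literal text `/deploy` do.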
### Example Data ```json { "data": { "channelId": "msteams", "conversation": { "conversationType": "channel", "id": "19:abc123def456@thread.tacv2", "isGroup": true, "name": "General", "tenantId": "00000000-0000-0000-0000-000000000000" }, "entities": [ { "mentioned": { "id": "28:bot-id-here", "name": "SuperPlane Bot" }, "text": "\u003cat\u003eSuperPlane Bot\u003c/at\u003e", "type": "mention" } ], "from": { "aadObjectId": "00000000-0000-0000-0000-000000000001", "id": "29:1abc2def3ghi4jkl5mno6pqr7stu8vwx", "name": "John Doe" }, "id": "1700000000000", "recipient": { "id": "28:bot-id-here", "name": "SuperPlane Bot" }, "serviceUrl": "https://smba.trafficmanager.net/teams/", "text": "\u003cat\u003eSuperPlane Bot\u003c/at\u003e deploy the latest build to staging", "timestamp": "2026-01-16T12:00:00.000Z", "type": "message" }, "timestamp": "2026-01-16T12:00:00Z", "type": "teams.bot.mention" } ``` ## On Message The On Message trigger starts a workflow execution when any message is posted in a Teams channel. ### Use Cases - **Message monitoring**: React to any message in a channel - **Keyword detection**: Process messages looking for specific content - **Activity tracking**: Track channel activity for analytics - **Auto-responses**: Automatically respond to specific message patterns ### Configuration - **Channel**: Optional channel filter — if specified, only messages in this channel will trigger (leave empty to listen to all channels) - **Content Filter**: Optional regex pattern to filter messages by content (e.g., `/ci` to only trigger on messages containing "/ci") ### Event Data Each message event includes: - **text**: The message text - **from**: User who sent the message (ID and name) - **conversation**: Channel and team information - **timestamp**: When the message was sent - **serviceUrl**: Bot Framework service URL for sending replies ### Setup This trigger automatically sets up a subscription for channel message events when configured. 
### Important This trigger requires **Resource-Specific Consent (RSC)** permissions in the Teams app manifest. Specifically, the app must include the `ChannelMessage.Read.Group` permission. Without this permission, the bot will only receive messages where it is @mentioned. The generated Teams app manifest includes this permission by default. If you created the manifest manually, ensure this RSC permission is included. ### Example Data ```json { "data": { "channelId": "msteams", "conversation": { "conversationType": "channel", "id": "19:abc123def456@thread.tacv2", "isGroup": true, "name": "General", "tenantId": "00000000-0000-0000-0000-000000000000" }, "entities": [], "from": { "aadObjectId": "00000000-0000-0000-0000-000000000002", "id": "29:1abc2def3ghi4jkl5mno6pqr7stu8vwx", "name": "Jane Smith" }, "id": "1700000000001", "recipient": { "id": "28:bot-id-here", "name": "SuperPlane Bot" }, "serviceUrl": "https://smba.trafficmanager.net/teams/", "text": "The deployment to staging is complete. All tests passed.", "timestamp": "2026-01-16T12:05:00.000Z", "type": "message" }, "timestamp": "2026-01-16T12:05:00Z", "type": "teams.channel.message" } ``` ## Send Text Message The Send Text Message component sends a text message to a Microsoft Teams channel. ### Use Cases - **Notifications**: Send notifications about workflow events or system status - **Alerts**: Alert teams about important events or errors - **Updates**: Provide status updates on long-running processes - **Team communication**: Automate team communications from workflows ### Configuration - **Channel**: Select the Teams channel to send the message to - **Text**: The message text to send (supports expressions) ### Output Returns metadata about the sent message including the message ID and timestamp. 
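SuperPlane performs the send for you; for orientation, the underlying Bot Framework Connector call is roughly the sketch below. Only the URL and body construction are shown — the real request also needs an `Authorization: Bearer <bot token>` header, and the helper name is an assumption:

```python
import json

def build_send_message_request(service_url, conversation_id, text):
    """Sketch of the Bot Framework Connector 'create activity' request:
    POST a message activity to the conversation on the event's serviceUrl."""
    url = f"{service_url.rstrip('/')}/v3/conversations/{conversation_id}/activities"
    body = json.dumps({"type": "message", "text": text})
    return url, body

url, body = build_send_message_request(
    "https://smba.trafficmanager.net/teams/",
    "19:abc123def456@thread.tacv2",
    "Hello from SuperPlane",
)
```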
### Notes - The Teams bot must be installed in the team containing the target channel - Messages are sent as the configured bot user - The bot requires the appropriate permissions to post to the selected channel ### Example Output ```json { "data": { "conversationId": "19:abc123def456@thread.tacv2", "id": "1700000000002", "text": "Hello from SuperPlane", "timestamp": "2026-01-16T12:10:00.000Z" }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "teams.message.sent" } ``` #### New Relic Source URL: https://docs.superplane.com/components/newrelic React to alerts and query telemetry data from New Relic import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## Instructions ### Getting your credentials 1. **Account ID**: Click your name in the bottom-left corner of New Relic. Your Account ID is displayed under the account name. 2. **User API Key**: Go to the **API Keys** page. Click **Create a key**. Select key type **User**. Give it a name (e.g. "SuperPlane") and click **Create a key**. This key is used for NerdGraph/NRQL queries — no additional permissions are needed. 3. **License Key**: On the same **API Keys** page, find the key with type **Ingest - License** and copy it. This key is used for sending metrics. If no license key exists, click **Create a key** and select **Ingest - License**. 4. **Region**: Choose **US** if your New Relic URL is `one.newrelic.com`, or **EU** if it is `one.eu.newrelic.com`. ### Webhook Setup SuperPlane automatically creates a Webhook Notification Channel in your New Relic account when you add the **On Issue** trigger to your canvas. Just attach it to your alert workflow in New Relic to start receiving alerts. ## On Issue The On Issue trigger starts a workflow execution when a New Relic alert issue is received via webhook. 
### What this trigger does - Receives New Relic webhook payloads for alert issues - Filters by issue state (CREATED, ACTIVATED, ACKNOWLEDGED, CLOSED) - Optionally filters by priority (CRITICAL, HIGH, MEDIUM, LOW) - Emits matching issues as `newrelic.issue` events ### Configuration - **Statuses**: Required list of issue states to listen for - **Priorities**: Optional priority filter ### Webhook Setup SuperPlane automatically creates a Webhook Notification Channel in your New Relic account. Just attach it to your alert workflow to start receiving alerts. ### Example Data ```json { "data": { "accountId": 1234567, "conditionName": "CPU usage \u003e 90%", "createdAt": 1704067200000, "issueId": "MXxBSXxJU1NVRXwxMjM0NTY3ODk", "issueUrl": "https://one.newrelic.com/alerts-ai/issues/MXxBSXxJU1NVRXwxMjM0NTY3ODk", "policyName": "Production Infrastructure", "priority": "CRITICAL", "sources": [ "newrelic" ], "state": "ACTIVATED", "title": "High CPU usage on production server", "updatedAt": 1704067260000 }, "timestamp": "2026-01-19T12:00:00Z", "type": "newrelic.issue" } ``` ## Report Metric The Report Metric component sends custom metric data to New Relic's Metric API. 
### Use Cases - **Deployment metrics**: Track deployment frequency and duration - **Business metrics**: Report custom KPIs like revenue, signups, or conversion rates - **Pipeline metrics**: Measure workflow execution times and success rates ### Configuration - `metricName`: The name of the metric (e.g., custom.deployment.count) - `metricType`: The type of metric (gauge, count, or summary) - `value`: The numeric value for the metric - `attributes`: Optional key-value labels for the metric - `timestamp`: Optional Unix epoch milliseconds (defaults to now) ### Outputs The component emits a metric confirmation containing: - `metricName`: The name of the reported metric - `metricType`: The type of the metric - `value`: The reported value - `timestamp`: The timestamp used ### Example Output ```json { "data": { "metricName": "custom.deployment.count", "metricType": "count", "timestamp": 1704067200000, "value": 1 }, "timestamp": "2026-01-19T12:00:00Z", "type": "newrelic.metric" } ``` ## Run NRQL Query The Run NRQL Query component executes a NRQL query against New Relic data via the NerdGraph API. 
### Use Cases - **Health checks**: Query application error rates or response times before deployments - **Capacity planning**: Check resource utilization metrics - **Incident investigation**: Query telemetry data during incident workflows ### Configuration - `query`: The NRQL query string to execute - `timeout`: Optional query timeout in seconds (default: 30) ### Outputs The component emits query results containing: - `query`: The executed NRQL query - `results`: Array of result rows returned by the query ### Example Output ```json { "data": { "query": "SELECT count(*) FROM Transaction SINCE 1 hour ago", "results": [ { "count": 42567 } ] }, "timestamp": "2026-01-19T12:00:00Z", "type": "newrelic.nrqlResult" } ``` #### Octopus Deploy Source URL: https://docs.superplane.com/components/octopusdeploy Deploy releases and react to deployment events in Octopus Deploy import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## Instructions 1. **Server URL:** Your Octopus Deploy instance URL (e.g. `https://my-company.octopus.app`). 2. **API Key:** Create one in **Octopus Web Portal → Profile → My API Keys → New API Key**. - Enter a purpose (e.g., "SuperPlane Integration") and click **Generate New**. - **Copy the key immediately**—it cannot be viewed again. 3. **Space:** Select the Octopus Deploy space to use. Leave empty to use the default space. 4. **Auth:** SuperPlane sends requests using the `X-Octopus-ApiKey` header. 5. **Webhooks:** SuperPlane creates Octopus subscriptions automatically to receive deployment events. No manual setup is required. ## On Deployment Event The On Deployment Event trigger emits events when a deployment's status changes in Octopus Deploy. 
### Use Cases - **Deploy notifications**: Notify Slack or PagerDuty when deployments succeed or fail - **Post-deploy automation**: Trigger smoke tests after a successful deployment - **Incident creation**: Create a ticket automatically when a deployment fails - **Deployment tracking**: Log deployment events for audit or reporting ### Configuration - **Event Categories**: Deployment event types to listen for. Defaults to `DeploymentSucceeded` and `DeploymentFailed`. - **Project** (optional): Filter events to a specific Octopus Deploy project. - **Environment** (optional): Filter events to a specific deployment environment. ### Event Categories | Category | Description | |---------------------|-------------------------------------| | DeploymentQueued | A deployment has been queued | | DeploymentStarted | A deployment has started executing | | DeploymentSucceeded | A deployment completed successfully | | DeploymentFailed | A deployment has failed | ### Webhook Verification Octopus Deploy subscriptions are configured with a custom header secret. SuperPlane verifies incoming webhooks by comparing the `X-SuperPlane-Webhook-Secret` header value against the stored secret. ### Event Data Each emitted event includes the following fields: | Field | Description | |-----------------|------------------------------------------------------------| | `eventType` | The deployment event category (e.g. `DeploymentSucceeded`) | | `category` | Same as `eventType` | | `timestamp` | When the subscription payload was sent | | `occurredAt` | When the event occurred in Octopus | | `message` | Human-readable event description | | `projectId` | Octopus project ID (e.g. `Projects-123`) | | `environmentId` | Octopus environment ID (e.g. `Environments-1`) | | `releaseId` | Octopus release ID (e.g. `Releases-789`) | | `deploymentId` | Octopus deployment ID (e.g. 
`Deployments-1011`) | | `serverUri` | Your Octopus Deploy server URL | ### Example Data ```json { "data": { "category": "DeploymentSucceeded", "deploymentId": "Deployments-1011", "environmentId": "Environments-456", "eventType": "DeploymentSucceeded", "message": "Deploy to Production succeeded", "occurredAt": "2026-02-05T16:00:00.000+00:00", "projectId": "Projects-123", "releaseId": "Releases-789", "serverUri": "https://my-company.octopus.app", "timestamp": "2026-02-05T16:00:00.000+00:00" }, "timestamp": "2026-02-05T16:00:01.000+00:00", "type": "octopus.deployment.succeeded" } ``` ## Deploy Release The Deploy Release component deploys a chosen release to a chosen environment in Octopus Deploy and waits for completion. ### Use Cases - **Deploy on merge**: Trigger a deployment after code is merged - **Scheduled deployments**: Deploy to staging or production on a schedule - **Approval-based deploys**: Deploy after manual approval in a workflow - **Chained deployments**: Deploy to next environment after success in the previous one ### How It Works 1. Creates a deployment for the selected release and environment via the Octopus Deploy API 2. Waits for the deployment task to complete (via webhook and polling fallback) 3. 
Routes execution based on deployment outcome: - **Success channel**: Task completed successfully - **Failed channel**: Task failed, timed out, or was cancelled ### Configuration - **Project**: The Octopus Deploy project - **Release**: The release to deploy (filtered by the selected project) - **Environment**: The target deployment environment ### Output Channels - **Success**: Emitted when the deployment completes successfully - **Failed**: Emitted when the deployment fails, times out, or is cancelled ### Notes - Deployment status is tracked via the Octopus Deploy task associated with the deployment - Polls the task status every 5 minutes as a fallback if the webhook does not arrive - Requires an API key configured on the Octopus Deploy integration ### Example Output ```json { "data": { "completedTime": "2026-02-05T16:05:00.000+00:00", "created": "2026-02-05T16:00:00.000+00:00", "deploymentId": "Deployments-1011", "duration": "5 minutes", "environmentId": "Environments-456", "projectId": "Projects-123", "releaseId": "Releases-789", "taskState": "Success" }, "timestamp": "2026-02-05T16:05:00.000+00:00", "type": "octopus.deployment.finished" } ``` #### OpenAI Source URL: https://docs.superplane.com/components/openai Generate text responses with OpenAI models import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Actions ## Text Prompt The Text Prompt component generates text responses using OpenAI's language models. 
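Downstream nodes read this component's result through the usual event payload; the field names below follow the Example Output shown at the end of this section, and the function is purely illustrative, not a SuperPlane API:

```python
def summarize_response(event):
    """Pull the generated text and token usage out of an openai.response
    event payload (field names follow this component's Example Output)."""
    data = event["data"]
    usage = data["usage"]
    return f'{data["model"]}: {data["text"]!r} ({usage["total_tokens"]} tokens)'

event = {
    "data": {
        "id": "cmpl-1234567890",
        "model": "gpt-5.2",
        "text": "Hello, world!",
        "usage": {"input_tokens": 10, "output_tokens": 10, "total_tokens": 20},
    },
    "timestamp": "2026-01-19T12:00:00Z",
    "type": "openai.response",
}
```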
### Use Cases - **Content generation**: Generate text content, summaries, or descriptions - **Natural language processing**: Process and transform text using AI - **Automated responses**: Generate responses to user queries or events - **Data transformation**: Convert structured data into natural language ### Configuration - **Model**: Select the OpenAI model to use (e.g., gpt-4, gpt-3.5-turbo) - **Prompt**: The text prompt to send to the model (supports expressions) ### Output Returns the generated response including: - **text**: The generated text response - **model**: The model used for generation - **usage**: Token usage information (prompt tokens, completion tokens, total tokens) - **id**: Response ID for tracking ### Notes - Requires a valid OpenAI API key configured in the application settings - Response quality and speed depend on the selected model - Token usage is tracked and may incur costs based on your OpenAI plan - Supports OpenAI-compatible providers by setting a custom Base URL in the integration settings (e.g., Azure OpenAI, Ollama, vLLM) ### Example Output ```json { "data": { "id": "cmpl-1234567890", "model": "gpt-5.2", "text": "Hello, world!", "usage": { "input_tokens": 10, "output_tokens": 10, "total_tokens": 20 } }, "timestamp": "2026-01-19T12:00:00Z", "type": "openai.response" } ``` #### PagerDuty Source URL: https://docs.superplane.com/components/pagerduty Manage and react to incidents in PagerDuty import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## On Incident The On Incident trigger starts a workflow execution when PagerDuty incident events occur. 
### Use Cases - **Incident automation**: Automate responses to incident events - **Notification workflows**: Send notifications when incidents are triggered or resolved - **Integration workflows**: Sync incidents with external systems - **Escalation handling**: Handle incident escalations automatically ### Configuration - **Service**: Select the PagerDuty service to monitor - **Events**: Select which incident events to listen for (triggered, acknowledged, resolved) - **Urgencies**: Filter by urgency level (low, high) - leave empty to listen to all urgencies ### Event Data Each incident event includes: - **event**: Event type (incident.triggered, incident.acknowledged, incident.resolved) - **incident**: Complete incident information including title, description, urgency, status - **service**: Service information - **assignments**: Current incident assignments ### Webhook Setup This trigger automatically sets up a PagerDuty webhook subscription when configured. The subscription is managed by SuperPlane and will be cleaned up when the trigger is removed. 
### Example Data ```json { "data": { "agent": { "html_url": "https://acme.pagerduty.com/users/PLH1HKV", "id": "PLH1HKV", "self": "https://api.pagerduty.com/users/PLH1HKV", "summary": "Tenex Engineer", "type": "user_reference" }, "incident": { "assignees": [ { "html_url": "https://acme.pagerduty.com/users/PTUXL6G", "id": "PTUXL6G", "self": "https://api.pagerduty.com/users/PTUXL6G", "summary": "User 123", "type": "user_reference" } ], "conference_bridge": { "conference_number": "+1 1234123412,,987654321#", "conference_url": "https://example.com" }, "created_at": "2020-04-09T15:16:27Z", "escalation_policy": { "html_url": "https://acme.pagerduty.com/escalation_policies/PUS0KTE", "id": "PUS0KTE", "self": "https://api.pagerduty.com/escalation_policies/PUS0KTE", "summary": "Default", "type": "escalation_policy_reference" }, "html_url": "https://acme.pagerduty.com/incidents/PGR0VU2", "id": "PGR0VU2", "incident_key": "d3640fbd41094207a1c11e58e46b1662", "number": 2, "priority": { "html_url": "https://acme.pagerduty.com/account/incident_priorities", "id": "PSO75BM", "self": "https://api.pagerduty.com/priorities/PSO75BM", "summary": "P1", "type": "priority_reference" }, "reopened_at": "2020-10-02T18:45:22Z", "resolve_reason": null, "self": "https://api.pagerduty.com/incidents/PGR0VU2", "service": { "html_url": "https://acme.pagerduty.com/services/PF9KMXH", "id": "PF9KMXH", "self": "https://api.pagerduty.com/services/PF9KMXH", "summary": "API Service", "type": "service_reference" }, "status": "triggered", "teams": [ { "html_url": "https://acme.pagerduty.com/teams/PFCVPS0", "id": "PFCVPS0", "self": "https://api.pagerduty.com/teams/PFCVPS0", "summary": "Engineering", "type": "team_reference" } ], "title": "A little bump in the road", "type": "incident", "urgency": "high" } }, "timestamp": "2026-01-19T12:00:00Z", "type": "pagerduty.onIncident" } ``` ## On Incident Annotated The On Incident Annotated trigger starts a workflow execution when a note is added to a PagerDuty incident. 
### Use Cases - **Note tracking**: Track when notes are added to incidents - **Collaboration workflows**: Trigger actions based on incident annotations - **Audit logging**: Log all notes added to incidents - **Integration sync**: Sync notes with external ticketing systems ### Configuration - **Service**: Select the PagerDuty service to monitor for incident annotations ### Event Data Each annotation event includes: - **agent**: Information about who added the note - **incident**: Complete incident information ### Webhook Setup This trigger automatically sets up a PagerDuty webhook subscription when configured. The subscription is managed by SuperPlane and will be cleaned up when the trigger is removed. ### Example Data ```json { "data": { "agent": { "html_url": "https://acme.pagerduty.com/users/PLH1HKV", "id": "PLH1HKV", "self": "https://api.pagerduty.com/users/PLH1HKV", "summary": "Tenex Engineer", "type": "user_reference" }, "annotation": { "content": "Investigating the root cause. Initial analysis suggests a database connection timeout." 
}, "incident": { "assignees": [ { "html_url": "https://acme.pagerduty.com/users/PTUXL6G", "id": "PTUXL6G", "self": "https://api.pagerduty.com/users/PTUXL6G", "summary": "User 123", "type": "user_reference" } ], "created_at": "2020-04-09T15:16:27Z", "escalation_policy": { "html_url": "https://acme.pagerduty.com/escalation_policies/PUS0KTE", "id": "PUS0KTE", "self": "https://api.pagerduty.com/escalation_policies/PUS0KTE", "summary": "Default", "type": "escalation_policy_reference" }, "html_url": "https://acme.pagerduty.com/incidents/PGR0VU2", "id": "PGR0VU2", "incident_key": "d3640fbd41094207a1c11e58e46b1662", "number": 2, "self": "https://api.pagerduty.com/incidents/PGR0VU2", "service": { "html_url": "https://acme.pagerduty.com/services/PF9KMXH", "id": "PF9KMXH", "self": "https://api.pagerduty.com/services/PF9KMXH", "summary": "API Service", "type": "service_reference" }, "status": "acknowledged", "teams": [ { "html_url": "https://acme.pagerduty.com/teams/PFCVPS0", "id": "PFCVPS0", "self": "https://api.pagerduty.com/teams/PFCVPS0", "summary": "Engineering", "type": "team_reference" } ], "title": "A little bump in the road", "type": "incident", "urgency": "high" } }, "timestamp": "2026-01-19T12:00:00Z", "type": "pagerduty.incident.annotated" } ``` ## On Incident Status Update The On Incident Status Update trigger starts a workflow execution when PagerDuty incident status changes. 
### Use Cases - **Status tracking**: Track incident status changes and update systems - **Workflow automation**: Trigger workflows when incidents are acknowledged or resolved - **Notification systems**: Notify teams about status updates - **Integration workflows**: Sync status changes with external systems ### Configuration - **Service**: Select the PagerDuty service to monitor for status updates ### Event Data Each status update event includes: - **event**: Event type (incident.status_updated) - **incident**: Complete incident information including current status - **status**: New incident status - **service**: Service information - **assignments**: Current incident assignments ### Webhook Setup This trigger automatically sets up a PagerDuty webhook subscription when configured. The subscription is managed by SuperPlane and will be cleaned up when the trigger is removed. ### Example Data ```json { "data": { "agent": { "html_url": "https://acme.pagerduty.com/users/PLH1HKV", "id": "PLH1HKV", "self": "https://api.pagerduty.com/users/PLH1HKV", "summary": "Tenex Engineer", "type": "user_reference" }, "incident": { "html_url": "https://acme.pagerduty.com/incidents/PGR0VU2", "id": "PGR0VU2", "self": "https://api.pagerduty.com/incidents/PGR0VU2", "summary": "A little bump in the road", "type": "incident_reference" }, "status_update": { "created_at": "2026-01-19T12:30:00Z", "id": "P1234567", "message": "We have identified the issue and are working on a fix.", "sender": { "html_url": "https://acme.pagerduty.com/users/PLH1HKV", "id": "PLH1HKV", "self": "https://api.pagerduty.com/users/PLH1HKV", "summary": "Tenex Engineer", "type": "user_reference" } } }, "timestamp": "2026-01-19T12:30:00Z", "type": "pagerduty.incident.status_update_published" } ``` ## Acknowledge Incident The Acknowledge Incident component acknowledges an existing PagerDuty incident. 
### Use Cases - **Incident response**: Acknowledge an incident to stop escalations and indicate someone is working on it - **Automated acknowledgement**: Automatically acknowledge incidents based on workflow conditions - **Integration workflows**: Acknowledge incidents when related events occur in other systems ### Configuration - **Incident ID**: The ID of the incident to acknowledge (e.g., A12BC34567...) - **From Email**: Email address of a valid PagerDuty user (required for App OAuth, optional for API tokens) ### Behavior When an incident is acknowledged, escalations are paused and the incident status changes to "acknowledged". The incident will remain acknowledged until it is resolved or re-triggered. ### Output Returns the acknowledged incident object with all current information. ### Example Output ```json { "data": { "incident": { "assigned_via": "escalation_policy", "assignments": [ { "assignee": { "html_url": "https://subdomain.pagerduty.com/users/PXPGF42", "id": "PXPGF42", "self": "https://api.pagerduty.com/users/PXPGF42", "summary": "Earline Greenholt", "type": "user_reference" }, "at": "2015-11-10T00:31:52Z" } ], "created_at": "2015-10-06T21:30:42Z", "escalation_policy": { "html_url": "https://subdomain.pagerduty.com/escalation_policies/PT20YPA", "id": "PT20YPA", "self": "https://api.pagerduty.com/escalation_policies/PT20YPA", "summary": "Another Escalation Policy", "type": "escalation_policy_reference" }, "first_trigger_log_entry": { "html_url": "https://subdomain.pagerduty.com/incidents/PT4KHLK/log_entries/Q02JTSNZWHSEKV", "id": "Q02JTSNZWHSEKV", "self": "https://api.pagerduty.com/log_entries/Q02JTSNZWHSEKV?incident_id=PT4KHLK", "summary": "Triggered through the API", "type": "trigger_log_entry_reference" }, "html_url": "https://subdomain.pagerduty.com/incidents/PT4KHLK", "id": "PT4KHLK", "incident_key": "baf7cf21b1da41b4b0221008339ff357", "incident_number": 1234, "last_status_change_at": "2015-10-06T21:38:23Z", "last_status_change_by": { "html_url": 
"https://subdomain.pagerduty.com/users/PXPGF42", "id": "PXPGF42", "self": "https://api.pagerduty.com/users/PXPGF42", "summary": "Earline Greenholt", "type": "user_reference" }, "priority": { "id": "P53ZZH5", "self": "https://api.pagerduty.com/priorities/P53ZZH5", "summary": "P2", "type": "priority_reference" }, "resolved_at": null, "self": "https://api.pagerduty.com/incidents/PT4KHLK", "service": { "html_url": "https://subdomain.pagerduty.com/service-directory/PWIXJZS", "id": "PWIXJZS", "self": "https://api.pagerduty.com/services/PWIXJZS", "summary": "My Mail Service", "type": "service_reference" }, "status": "acknowledged", "summary": "[#1234] The server is on fire.", "teams": [ { "html_url": "https://subdomain.pagerduty.com/teams/PQ9K7I8", "id": "PQ9K7I8", "self": "https://api.pagerduty.com/teams/PQ9K7I8", "summary": "Engineering", "type": "team_reference" } ], "title": "The server is on fire.", "type": "incident", "updated_at": "2015-10-06T21:40:23Z", "urgency": "high" } }, "timestamp": "2026-01-19T12:00:00Z", "type": "pagerduty.incident" } ``` ## Annotate Incident The Annotate Incident component adds a note to an existing PagerDuty incident. ### Use Cases - **Status updates**: Add progress updates to an incident - **Investigation notes**: Document findings during incident investigation - **Handoff information**: Leave notes for the next responder - **Resolution details**: Document the root cause and resolution steps ### Configuration - **Incident ID**: The ID of the incident to annotate (e.g., A12BC34567...) - **Content**: The note content to add to the incident (supports expressions) - **From Email**: Email address of a valid PagerDuty user (required for App OAuth, optional for API tokens) ### Output Returns the incident object with all current information. 
### Example Output ```json { "data": { "incident": { "assigned_via": "escalation_policy", "assignments": [ { "assignee": { "html_url": "https://subdomain.pagerduty.com/users/PXPGF42", "id": "PXPGF42", "self": "https://api.pagerduty.com/users/PXPGF42", "summary": "Earline Greenholt", "type": "user_reference" }, "at": "2015-11-10T00:31:52Z" } ], "created_at": "2015-10-06T21:30:42Z", "escalation_policy": { "html_url": "https://subdomain.pagerduty.com/escalation_policies/PT20YPA", "id": "PT20YPA", "self": "https://api.pagerduty.com/escalation_policies/PT20YPA", "summary": "Another Escalation Policy", "type": "escalation_policy_reference" }, "first_trigger_log_entry": { "html_url": "https://subdomain.pagerduty.com/incidents/PT4KHLK/log_entries/Q02JTSNZWHSEKV", "id": "Q02JTSNZWHSEKV", "self": "https://api.pagerduty.com/log_entries/Q02JTSNZWHSEKV?incident_id=PT4KHLK", "summary": "Triggered through the API", "type": "trigger_log_entry_reference" }, "html_url": "https://subdomain.pagerduty.com/incidents/PT4KHLK", "id": "PT4KHLK", "incident_key": "baf7cf21b1da41b4b0221008339ff357", "incident_number": 1234, "incident_type": { "name": "major_incident" }, "last_status_change_at": "2015-10-06T21:38:23Z", "last_status_change_by": { "html_url": "https://subdomain.pagerduty.com/users/PXPGF42", "id": "PXPGF42", "self": "https://api.pagerduty.com/users/PXPGF42", "summary": "Earline Greenholt", "type": "user_reference" }, "priority": { "id": "P53ZZH5", "self": "https://api.pagerduty.com/priorities/P53ZZH5", "summary": "P2", "type": "priority_reference" }, "resolved_at": null, "self": "https://api.pagerduty.com/incidents/PT4KHLK", "service": { "html_url": "https://subdomain.pagerduty.com/service-directory/PWIXJZS", "id": "PWIXJZS", "self": "https://api.pagerduty.com/services/PWIXJZS", "summary": "My Mail Service", "type": "service_reference" }, "status": "acknowledged", "summary": "[#1234] The server is on fire.", "teams": [ { "html_url": 
"https://subdomain.pagerduty.com/teams/PQ9K7I8", "id": "PQ9K7I8", "self": "https://api.pagerduty.com/teams/PQ9K7I8", "summary": "Engineering", "type": "team_reference" } ], "title": "The server is on fire.", "type": "incident", "updated_at": "2015-10-06T21:40:23Z", "urgency": "high" } }, "timestamp": "2026-01-19T12:00:00Z", "type": "pagerduty.incident" } ``` ## Create Incident The Create Incident component creates a new incident in PagerDuty. ### Use Cases - **Alert escalation**: Create incidents from monitoring alerts - **Error tracking**: Automatically create incidents when errors are detected - **Manual incident creation**: Create incidents from workflow events - **Integration workflows**: Create incidents from external system events ### Configuration - **Title**: A succinct description of the incident (required, supports expressions) - **Description**: Additional details about the incident (optional, supports expressions) - **Urgency**: Incident urgency level (high or low) - **Service**: Select the PagerDuty service to create the incident in - **From Email**: Email address of a valid PagerDuty user (required for App OAuth, optional for API tokens) ### Output Returns the created incident object including: - **id**: Incident ID - **incident_number**: Human-readable incident number - **status**: Current incident status - **urgency**: Incident urgency - **service**: Service information - **created_at**: Incident creation timestamp ### Example Output ```json { "data": { "incident": { "assigned_via": "escalation_policy", "assignments": [ { "assignee": { "html_url": "https://subdomain.pagerduty.com/users/PXPGF42", "id": "PXPGF42", "self": "https://api.pagerduty.com/users/PXPGF42", "summary": "Earline Greenholt", "type": "user_reference" }, "at": "2015-11-10T00:31:52Z" } ], "created_at": "2015-10-06T21:30:42Z", "escalation_policy": { "html_url": "https://subdomain.pagerduty.com/escalation_policies/PT20YPA", "id": "PT20YPA", "self": 
"https://api.pagerduty.com/escalation_policies/PT20YPA", "summary": "Another Escalation Policy", "type": "escalation_policy_reference" }, "first_trigger_log_entry": { "html_url": "https://subdomain.pagerduty.com/incidents/PT4KHLK/log_entries/Q02JTSNZWHSEKV", "id": "Q02JTSNZWHSEKV", "self": "https://api.pagerduty.com/log_entries/Q02JTSNZWHSEKV?incident_id=PT4KHLK", "summary": "Triggered through the API", "type": "trigger_log_entry_reference" }, "html_url": "https://subdomain.pagerduty.com/incidents/PT4KHLK", "id": "PT4KHLK", "incident_key": "baf7cf21b1da41b4b0221008339ff357", "incident_number": 1234, "incident_type": { "name": "major_incident" }, "last_status_change_at": "2015-10-06T21:38:23Z", "last_status_change_by": { "html_url": "https://subdomain.pagerduty.com/users/PXPGF42", "id": "PXPGF42", "self": "https://api.pagerduty.com/users/PXPGF42", "summary": "Earline Greenholt", "type": "user_reference" }, "priority": { "id": "P53ZZH5", "self": "https://api.pagerduty.com/priorities/P53ZZH5", "summary": "P2", "type": "priority_reference" }, "resolved_at": null, "self": "https://api.pagerduty.com/incidents/PT4KHLK", "service": { "html_url": "https://subdomain.pagerduty.com/service-directory/PWIXJZS", "id": "PWIXJZS", "self": "https://api.pagerduty.com/services/PWIXJZS", "summary": "My Mail Service", "type": "service_reference" }, "status": "triggered", "summary": "[#1234] The server is on fire.", "teams": [ { "html_url": "https://subdomain.pagerduty.com/teams/PQ9K7I8", "id": "PQ9K7I8", "self": "https://api.pagerduty.com/teams/PQ9K7I8", "summary": "Engineering", "type": "team_reference" } ], "title": "The server is on fire.", "type": "incident", "updated_at": "2015-10-06T21:40:23Z", "urgency": "high" } }, "timestamp": "2026-01-19T12:00:00Z", "type": "pagerduty.incident" } ``` ## Escalate Incident The Escalate Incident component escalates an existing PagerDuty incident to a specific escalation level within its current escalation policy. 
### Important: High-Urgency Incidents Only **This action only works on high-urgency incidents.** Low-urgency incidents cannot be escalated in PagerDuty. If you need to reassign a low-urgency incident, use the "Reassign Escalation Policy" action instead. ### What is Escalation? In PagerDuty, an escalation policy defines a chain of responders: - **Level 1**: Primary on-call (e.g., the assigned engineer) - **Level 2**: Secondary responder (e.g., team lead) - **Level 3**: Tertiary responder (e.g., manager) - And so on... Escalating an incident moves it to a higher level, notifying the responders at that level immediately instead of waiting for the automatic escalation timeout. ### Use Cases - **Manual escalation**: Escalate when the current responder cannot resolve the issue - **Automated escalation**: Automatically escalate based on workflow conditions (e.g., incident age) - **Skip levels**: Jump directly to a higher level for critical situations ### Configuration - **Incident ID**: The ID of the incident to escalate (e.g., A12BC34567...) - **Escalation Level**: The level to escalate to (1-10). This is the level number within the incident's current escalation policy. - **From Email**: Email address of a valid PagerDuty user (required for App OAuth, optional for API tokens) ### Output Returns the escalated incident object with all current information. 
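The numbered levels above can be pictured as an ordered lookup: escalating to level N immediately notifies the responders configured at that level. A toy sketch of that idea (the policy contents are made up; only the 1-10 bound comes from this component's configuration):

```python
# Hypothetical escalation policy: ordered levels of responders (names are invented).
POLICY_LEVELS = {
    1: ["primary-oncall"],
    2: ["team-lead"],
    3: ["engineering-manager"],
}

def responders_at(level):
    """Escalating to a level notifies that level's responders immediately."""
    if not 1 <= level <= 10:
        raise ValueError("escalation level must be between 1 and 10")
    return POLICY_LEVELS.get(level, [])
```

A pre-flight check like this avoids a failed execution when the level comes from an expression rather than a hard-coded value.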
### Example Output ```json { "data": { "incident": { "assigned_via": "escalation_policy", "assignments": [ { "assignee": { "html_url": "https://subdomain.pagerduty.com/users/PUSER02", "id": "PUSER02", "self": "https://api.pagerduty.com/users/PUSER02", "summary": "Manager Smith", "type": "user_reference" }, "at": "2015-10-06T21:42:00Z" } ], "created_at": "2015-10-06T21:30:42Z", "escalation_policy": { "html_url": "https://subdomain.pagerduty.com/escalation_policies/PT20YPA", "id": "PT20YPA", "self": "https://api.pagerduty.com/escalation_policies/PT20YPA", "summary": "Another Escalation Policy", "type": "escalation_policy_reference" }, "first_trigger_log_entry": { "html_url": "https://subdomain.pagerduty.com/incidents/PT4KHLK/log_entries/Q02JTSNZWHSEKV", "id": "Q02JTSNZWHSEKV", "self": "https://api.pagerduty.com/log_entries/Q02JTSNZWHSEKV?incident_id=PT4KHLK", "summary": "Triggered through the API", "type": "trigger_log_entry_reference" }, "html_url": "https://subdomain.pagerduty.com/incidents/PT4KHLK", "id": "PT4KHLK", "incident_key": "baf7cf21b1da41b4b0221008339ff357", "incident_number": 1234, "last_status_change_at": "2015-10-06T21:42:00Z", "last_status_change_by": { "html_url": "https://subdomain.pagerduty.com/users/PXPGF42", "id": "PXPGF42", "self": "https://api.pagerduty.com/users/PXPGF42", "summary": "Earline Greenholt", "type": "user_reference" }, "priority": { "id": "P53ZZH5", "self": "https://api.pagerduty.com/priorities/P53ZZH5", "summary": "P2", "type": "priority_reference" }, "resolved_at": null, "self": "https://api.pagerduty.com/incidents/PT4KHLK", "service": { "html_url": "https://subdomain.pagerduty.com/service-directory/PWIXJZS", "id": "PWIXJZS", "self": "https://api.pagerduty.com/services/PWIXJZS", "summary": "My Mail Service", "type": "service_reference" }, "status": "triggered", "summary": "[#1234] The server is on fire.", "teams": [ { "html_url": "https://subdomain.pagerduty.com/teams/PQ9K7I8", "id": "PQ9K7I8", "self": 
"https://api.pagerduty.com/teams/PQ9K7I8", "summary": "Engineering", "type": "team_reference" } ], "title": "The server is on fire.", "type": "incident", "updated_at": "2015-10-06T21:42:00Z", "urgency": "high" } }, "timestamp": "2026-01-19T12:00:00Z", "type": "pagerduty.incident" } ``` ## List Incidents The List Incidents component queries PagerDuty for open incidents and routes execution based on urgency levels. ### Use Cases - **Health checks**: Check for active incidents and route based on severity - **Incident monitoring**: Monitor incident status across services - **Automated response**: Trigger workflows based on incident presence - **Reporting**: Collect incident data for reporting or analysis ### Configuration - **Services**: Optional list of services to filter incidents (leave empty to get incidents from all services) ### Output Channels - **Clear**: No open incidents found - **Low**: Only low urgency incidents found - **High**: One or more high urgency incidents found ### Output Returns a list of open incidents with: - **id**: Incident ID - **incident_number**: Human-readable incident number - **status**: Incident status (triggered, acknowledged) - **urgency**: Incident urgency (low, high) - **title**: Incident title - **service**: Service information - **assignments**: Current assignments ### Example Output ```json { "data": { "incidents": [ { "acknowledgements": [], "assignments": [ { "assignee": { "html_url": "https://example.pagerduty.com/users/PUSER01", "id": "PUSER01", "summary": "John Doe", "type": "user_reference" }, "at": "2024-01-15T12:00:00Z" } ], "created_at": "2024-01-15T12:00:00Z", "description": "The production server is experiencing critical issues", "escalation_policy": { "html_url": "https://example.pagerduty.com/escalation_policies/PABCDEF", "id": "PABCDEF", "summary": "Default Escalation Policy", "type": "escalation_policy_reference" }, "html_url": "https://example.pagerduty.com/incidents/PT4KHLK", "id": "PT4KHLK", "incident_number": 
1234, "priority": { "id": "P1PRIORITY", "summary": "P1", "type": "priority_reference" }, "service": { "html_url": "https://example.pagerduty.com/services/PX123456", "id": "PX123456", "summary": "Production API", "type": "service_reference" }, "status": "triggered", "title": "Server is on fire", "updated_at": "2024-01-15T12:00:00Z", "urgency": "high" }, { "acknowledgements": [ { "acknowledger": { "html_url": "https://example.pagerduty.com/users/PUSER01", "id": "PUSER01", "summary": "John Doe", "type": "user_reference" }, "at": "2024-01-15T12:35:00Z" } ], "assignments": [ { "assignee": { "html_url": "https://example.pagerduty.com/users/PUSER01", "id": "PUSER01", "summary": "John Doe", "type": "user_reference" }, "at": "2024-01-15T12:30:00Z" } ], "created_at": "2024-01-15T12:30:00Z", "description": "Database connections are timing out", "escalation_policy": { "html_url": "https://example.pagerduty.com/escalation_policies/PABCDEF", "id": "PABCDEF", "summary": "Default Escalation Policy", "type": "escalation_policy_reference" }, "html_url": "https://example.pagerduty.com/incidents/PT4KHLM", "id": "PT4KHLM", "incident_number": 1235, "priority": { "id": "P2PRIORITY", "summary": "P2", "type": "priority_reference" }, "service": { "html_url": "https://example.pagerduty.com/services/PX123456", "id": "PX123456", "summary": "Production API", "type": "service_reference" }, "status": "acknowledged", "title": "Database connection issues", "updated_at": "2024-01-15T12:35:00Z", "urgency": "high" } ], "total": 2 }, "timestamp": "2024-01-15T13:00:00Z", "type": "pagerduty.incidents.list" } ``` ## List Log Entries The List Log Entries component retrieves all log entries (audit trail) for a PagerDuty incident. 
### Use Cases - **Audit trail**: Access complete incident history for compliance or review - **Timeline reconstruction**: Build a detailed timeline of all incident activity - **Incident analysis**: Analyze escalation patterns and response times - **Forensics**: Review all actions taken during an incident ### Configuration - **Incident ID**: The ID of the incident to list log entries for (e.g., A12BC34567...) - **Limit**: Maximum number of log entries to return (default: 100) ### Output Returns a list of log entries with: - **id**: Log entry ID - **type**: The type of log entry (e.g., trigger_log_entry, acknowledge_log_entry, annotate_log_entry) - **summary**: A summary of what happened - **created_at**: When the log entry was created - **agent**: The agent (user or service) that caused the log entry - **channel**: The channel through which the action was performed ### Example Output ```json { "data": { "log_entries": [ { "agent": { "html_url": "https://acme.pagerduty.com/services/PLH1HKV", "id": "PLH1HKV", "summary": "API Service", "type": "service_reference" }, "channel": { "type": "api" }, "created_at": "2024-01-15T10:00:00Z", "id": "Q02JTSNZWHSEKV", "summary": "Triggered through the API", "type": "trigger_log_entry" }, { "agent": { "html_url": "https://acme.pagerduty.com/users/PUSER01", "id": "PUSER01", "summary": "John Smith", "type": "user_reference" }, "channel": { "type": "web_ui" }, "created_at": "2024-01-15T10:15:00Z", "id": "Q02JTSNZWHSEKW", "summary": "Acknowledged by John Smith", "type": "acknowledge_log_entry" }, { "agent": { "html_url": "https://acme.pagerduty.com/users/PUSER01", "id": "PUSER01", "summary": "John Smith", "type": "user_reference" }, "channel": { "type": "web_ui" }, "created_at": "2024-01-15T10:30:00Z", "id": "Q02JTSNZWHSEKX", "summary": "John Smith added a note", "type": "annotate_log_entry" } ], "total": 3 }, "timestamp": "2024-01-15T11:00:00Z", "type": "pagerduty.log_entries.list" } ``` ## List Notes The List Notes component 
retrieves all notes (timeline entries) for a PagerDuty incident. ### Use Cases - **Incident review**: Review all notes added to an incident - **Timeline reconstruction**: Build a timeline of incident updates - **Audit trail**: Access the history of notes for compliance or review - **Note analysis**: Process or analyze notes for patterns or keywords ### Configuration - **Incident ID**: The ID of the incident to list notes for (e.g., A12BC34567...) ### Output Returns a list of notes with: - **id**: Note ID - **content**: The note content - **created_at**: When the note was created - **user**: The user who created the note - **channel**: The channel through which the note was created ### Example Output ```json { "data": { "notes": [ { "channel": { "type": "web_ui" }, "content": "Investigation started. Checking server logs for anomalies.", "created_at": "2024-01-15T10:30:00Z", "id": "PVL9NF8", "user": { "html_url": "https://acme.pagerduty.com/users/PLH1HKV", "id": "PLH1HKV", "summary": "John Smith", "type": "user_reference" } }, { "channel": { "type": "web_ui" }, "content": "Root cause identified: memory leak in the cache service. Deploying fix now.", "created_at": "2024-01-15T10:45:00Z", "id": "PVL9NF9", "user": { "html_url": "https://acme.pagerduty.com/users/PLH1HKV", "id": "PLH1HKV", "summary": "John Smith", "type": "user_reference" } }, { "channel": { "type": "api" }, "content": "Fix deployed successfully. Monitoring for stability.", "created_at": "2024-01-15T11:00:00Z", "id": "PVL9NFA", "user": { "html_url": "https://acme.pagerduty.com/users/PLH1HKW", "id": "PLH1HKW", "summary": "Jane Doe", "type": "user_reference" } } ], "total": 3 }, "timestamp": "2024-01-15T11:05:00Z", "type": "pagerduty.notes.list" } ``` ## Resolve Incident The Resolve Incident component resolves an existing PagerDuty incident. 
### Use Cases - **Incident closure**: Resolve an incident when the issue has been fixed - **Automated resolution**: Automatically resolve incidents based on recovery signals - **Integration workflows**: Resolve incidents when related events occur in other systems ### Configuration - **Incident ID**: The ID of the incident to resolve (e.g., A12BC34567...) - **From Email**: Email address of a valid PagerDuty user (required for App OAuth, optional for API tokens) - **Resolution Notes**: Optional notes about the resolution (saved to incident description) ### Behavior When an incident is resolved, the incident status changes to "resolved" and all escalations stop. If resolution notes are provided, they will be saved to the incident description. ### Output Returns the resolved incident object with all current information. ### Example Output ```json { "data": { "incident": { "assigned_via": "escalation_policy", "assignments": [], "created_at": "2015-10-06T21:30:42Z", "escalation_policy": { "html_url": "https://subdomain.pagerduty.com/escalation_policies/PT20YPA", "id": "PT20YPA", "self": "https://api.pagerduty.com/escalation_policies/PT20YPA", "summary": "Another Escalation Policy", "type": "escalation_policy_reference" }, "first_trigger_log_entry": { "html_url": "https://subdomain.pagerduty.com/incidents/PT4KHLK/log_entries/Q02JTSNZWHSEKV", "id": "Q02JTSNZWHSEKV", "self": "https://api.pagerduty.com/log_entries/Q02JTSNZWHSEKV?incident_id=PT4KHLK", "summary": "Triggered through the API", "type": "trigger_log_entry_reference" }, "html_url": "https://subdomain.pagerduty.com/incidents/PT4KHLK", "id": "PT4KHLK", "incident_key": "baf7cf21b1da41b4b0221008339ff357", "incident_number": 1234, "last_status_change_at": "2015-10-06T21:45:00Z", "last_status_change_by": { "html_url": "https://subdomain.pagerduty.com/users/PXPGF42", "id": "PXPGF42", "self": "https://api.pagerduty.com/users/PXPGF42", "summary": "Earline Greenholt", "type": "user_reference" }, "priority": { "id": 
"P53ZZH5", "self": "https://api.pagerduty.com/priorities/P53ZZH5", "summary": "P2", "type": "priority_reference" }, "resolved_at": "2015-10-06T21:45:00Z", "self": "https://api.pagerduty.com/incidents/PT4KHLK", "service": { "html_url": "https://subdomain.pagerduty.com/service-directory/PWIXJZS", "id": "PWIXJZS", "self": "https://api.pagerduty.com/services/PWIXJZS", "summary": "My Mail Service", "type": "service_reference" }, "status": "resolved", "summary": "[#1234] The server is on fire.", "teams": [ { "html_url": "https://subdomain.pagerduty.com/teams/PQ9K7I8", "id": "PQ9K7I8", "self": "https://api.pagerduty.com/teams/PQ9K7I8", "summary": "Engineering", "type": "team_reference" } ], "title": "The server is on fire.", "type": "incident", "updated_at": "2015-10-06T21:45:00Z", "urgency": "high" } }, "timestamp": "2026-01-19T12:00:00Z", "type": "pagerduty.incident" } ``` ## Snooze Incident The Snooze Incident component temporarily pauses notifications for an acknowledged PagerDuty incident. ### Use Cases - **Temporary acknowledgement**: Snooze an incident while investigating - **Scheduled follow-up**: Re-trigger the incident after a specified time - **Avoid escalation**: Prevent escalation while work is in progress ### Configuration - **Incident ID**: The ID of the incident to snooze (must be in acknowledged state) - **Duration**: How long to snooze the incident (1 hour, 4 hours, 8 hours, or 24 hours) - **From Email**: Email address of a valid PagerDuty user (required for App OAuth, optional for API tokens) ### Behavior When an incident is snoozed, it will remain in the acknowledged state and no further notifications will be sent. After the snooze duration expires, the incident will return to a triggered state and notifications will resume. Note: Reassigning a snoozed incident will cancel the snooze timer. ### Output Returns the snoozed incident object with all current information. 
### Example Output ```json { "data": { "incident": { "assigned_via": "escalation_policy", "assignments": [ { "assignee": { "html_url": "https://subdomain.pagerduty.com/users/PXPGF42", "id": "PXPGF42", "self": "https://api.pagerduty.com/users/PXPGF42", "summary": "Earline Greenholt", "type": "user_reference" }, "at": "2015-11-10T00:31:52Z" } ], "created_at": "2015-10-06T21:30:42Z", "escalation_policy": { "html_url": "https://subdomain.pagerduty.com/escalation_policies/PT20YPA", "id": "PT20YPA", "self": "https://api.pagerduty.com/escalation_policies/PT20YPA", "summary": "Another Escalation Policy", "type": "escalation_policy_reference" }, "first_trigger_log_entry": { "html_url": "https://subdomain.pagerduty.com/incidents/PT4KHLK/log_entries/Q02JTSNZWHSEKV", "id": "Q02JTSNZWHSEKV", "self": "https://api.pagerduty.com/log_entries/Q02JTSNZWHSEKV?incident_id=PT4KHLK", "summary": "Triggered through the API", "type": "trigger_log_entry_reference" }, "html_url": "https://subdomain.pagerduty.com/incidents/PT4KHLK", "id": "PT4KHLK", "incident_key": "baf7cf21b1da41b4b0221008339ff357", "incident_number": 1234, "last_status_change_at": "2015-10-06T21:38:23Z", "last_status_change_by": { "html_url": "https://subdomain.pagerduty.com/users/PXPGF42", "id": "PXPGF42", "self": "https://api.pagerduty.com/users/PXPGF42", "summary": "Earline Greenholt", "type": "user_reference" }, "priority": { "id": "P53ZZH5", "self": "https://api.pagerduty.com/priorities/P53ZZH5", "summary": "P2", "type": "priority_reference" }, "resolved_at": null, "self": "https://api.pagerduty.com/incidents/PT4KHLK", "service": { "html_url": "https://subdomain.pagerduty.com/service-directory/PWIXJZS", "id": "PWIXJZS", "self": "https://api.pagerduty.com/services/PWIXJZS", "summary": "My Mail Service", "type": "service_reference" }, "status": "acknowledged", "summary": "[#1234] The server is on fire.", "teams": [ { "html_url": "https://subdomain.pagerduty.com/teams/PQ9K7I8", "id": "PQ9K7I8", "self": 
"https://api.pagerduty.com/teams/PQ9K7I8", "summary": "Engineering", "type": "team_reference" } ], "title": "The server is on fire.", "type": "incident", "updated_at": "2015-10-06T21:40:23Z", "urgency": "high" } }, "timestamp": "2026-01-19T12:00:00Z", "type": "pagerduty.incident" } ``` ## Update Incident The Update Incident component modifies an existing PagerDuty incident. ### Use Cases - **Status updates**: Update incident status (acknowledge, resolve) - **Priority management**: Change incident priority - **Assignment**: Assign incidents to users or escalation policies ### Configuration - **Incident ID**: The ID of the incident to update (e.g., A12BC34567...) - **From Email**: Email address of a valid PagerDuty user (required for App OAuth, optional for API tokens) - **Status**: Update incident status (acknowledged, resolved) - **Priority**: Update incident priority (select from available priorities) - **Title**: Update incident title (optional, supports expressions) - **Description**: Update incident description (optional, supports expressions) - **Escalation Policy**: Change escalation policy (optional) - **Assignees**: Assign to specific users (optional) ### Output Returns the updated incident object with all current information. 
### Example Output ```json { "data": { "incident": { "assigned_via": "escalation_policy", "assignments": [ { "assignee": { "html_url": "https://subdomain.pagerduty.com/users/PXPGF42", "id": "PXPGF42", "self": "https://api.pagerduty.com/users/PXPGF42", "summary": "Earline Greenholt", "type": "user_reference" }, "at": "2015-11-10T00:31:52Z" } ], "created_at": "2015-10-06T21:30:42Z", "escalation_policy": { "html_url": "https://subdomain.pagerduty.com/escalation_policies/PT20YPA", "id": "PT20YPA", "self": "https://api.pagerduty.com/escalation_policies/PT20YPA", "summary": "Another Escalation Policy", "type": "escalation_policy_reference" }, "first_trigger_log_entry": { "html_url": "https://subdomain.pagerduty.com/incidents/PT4KHLK/log_entries/Q02JTSNZWHSEKV", "id": "Q02JTSNZWHSEKV", "self": "https://api.pagerduty.com/log_entries/Q02JTSNZWHSEKV?incident_id=PT4KHLK", "summary": "Triggered through the API", "type": "trigger_log_entry_reference" }, "html_url": "https://subdomain.pagerduty.com/incidents/PT4KHLK", "id": "PT4KHLK", "incident_key": "baf7cf21b1da41b4b0221008339ff357", "incident_number": 1234, "incident_type": { "name": "major_incident" }, "last_status_change_at": "2015-10-06T21:38:23Z", "last_status_change_by": { "html_url": "https://subdomain.pagerduty.com/users/PXPGF42", "id": "PXPGF42", "self": "https://api.pagerduty.com/users/PXPGF42", "summary": "Earline Greenholt", "type": "user_reference" }, "priority": { "id": "P53ZZH5", "self": "https://api.pagerduty.com/priorities/P53ZZH5", "summary": "P2", "type": "priority_reference" }, "resolved_at": null, "self": "https://api.pagerduty.com/incidents/PT4KHLK", "service": { "html_url": "https://subdomain.pagerduty.com/service-directory/PWIXJZS", "id": "PWIXJZS", "self": "https://api.pagerduty.com/services/PWIXJZS", "summary": "My Mail Service", "type": "service_reference" }, "status": "acknowledged", "summary": "[#1234] The server is on fire.", "teams": [ { "html_url": 
"https://subdomain.pagerduty.com/teams/PQ9K7I8", "id": "PQ9K7I8", "self": "https://api.pagerduty.com/teams/PQ9K7I8", "summary": "Engineering", "type": "team_reference" } ], "title": "The server is on fire.", "type": "incident", "updated_at": "2015-10-06T21:40:23Z", "urgency": "high" } }, "timestamp": "2026-01-19T12:00:00Z", "type": "pagerduty.incident" } ``` #### Perplexity Source URL: https://docs.superplane.com/components/perplexity Run AI agents with Perplexity import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Actions ## Run Agent The Run Agent component uses Perplexity's Agent API to run AI agents that can search the web and fetch URLs. ### Use Cases - **Research and synthesis**: Ask complex questions that require gathering and synthesizing information from multiple sources - **Automated analysis**: Run AI-powered analysis on web content - **Content generation with citations**: Generate text grounded in real-time web sources ### Configuration - **Preset**: Agent preset to use (fast-search, pro-search, deep-research, advanced-deep-research). When set, model is ignored. 
- **Model**: Model to use when no preset is specified - **Input**: The prompt or question for the agent (supports expressions) - **Instructions**: Optional system-level instructions - **Web Search**: Enable the web_search tool (default: true) - **Fetch URL**: Enable the fetch_url tool (default: true) ### Output Returns the agent response including: - **text**: The generated text response - **citations**: Source citations from web results - **model**: The model used - **usage**: Token and cost usage information ### Example Output ```json { "data": { "citations": [ { "type": "citation", "url": "https://example.com/ai-news" }, { "type": "citation", "url": "https://example.com/research" } ], "id": "resp_1234567890", "model": "openai/gpt-5.2", "status": "completed", "text": "Recent developments in AI include significant advances in reasoning capabilities and safety research...", "usage": { "cost": { "total_cost": 0.05 }, "input_tokens": 3681, "output_tokens": 780, "total_tokens": 4461 } }, "timestamp": "2026-01-19T12:00:00Z", "type": "perplexity.agent.response" } ``` #### Prometheus Source URL: https://docs.superplane.com/components/prometheus Monitor alerts from Prometheus and Alertmanager ## Triggers ## Actions ## Instructions ### Connection Configure this integration with: - **Prometheus Base URL**: URL of your Prometheus server (e.g., `https://prometheus.example.com`) - **Alertmanager Base URL** (optional): URL of your Alertmanager instance (e.g., `https://alertmanager.example.com`). Required for Silence components. If omitted, the Prometheus Base URL is used. - **API Auth**: `none`, `basic`, or `bearer` for API requests - **Webhook Secret** (recommended): If set, Alertmanager must send the secret as a bearer token in the `Authorization` header on webhook requests ### Alertmanager Setup (manual) The trigger setup panel in SuperPlane shows the generated webhook URL.
Use the On Alert trigger setup instructions in the workflow sidebar for the exact `alertmanager.yml` snippet. After editing config, reload Alertmanager (for example `POST /-/reload` when lifecycle reload is enabled). ## On Alert The On Alert trigger starts a workflow execution when Alertmanager sends alerts to SuperPlane. ### What this trigger does - Receives Alertmanager webhook payloads - Optionally validates bearer auth when **Webhook Secret** is configured - Emits one event per matching alert as `prometheus.alert` - Filters by selected statuses (`firing` and/or `resolved`) ### Configuration - **Statuses**: Required list of alert statuses to emit - **Alert Names**: Optional exact `alertname` filters ### Alertmanager setup (manual) When the node is saved, SuperPlane generates a webhook URL shown in the trigger setup panel. Copy that URL into your Alertmanager receiver. Receiver registration in upstream Alertmanager is config-based (not API-created by SuperPlane). Use the setup instructions shown in the workflow sidebar for the exact `alertmanager.yml` snippet. After updating Alertmanager config, reload it (for example `POST /-/reload` when lifecycle reload is enabled). 
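Alertmanager batches alerts into a single webhook payload, while this trigger emits one event per matching alert. A simplified sketch of that fan-out and filtering (plain Python, not SuperPlane internals):

```python
def fan_out(webhook, statuses, alert_names=None):
    """Split one Alertmanager webhook payload into one event per matching alert."""
    events = []
    for alert in webhook.get("alerts", []):
        if alert.get("status") not in statuses:
            continue  # Statuses filter: firing and/or resolved.
        if alert_names and alert.get("labels", {}).get("alertname") not in alert_names:
            continue  # Optional exact alertname filter.
        events.append({"type": "prometheus.alert", "data": alert})
    return events

# Trimmed Alertmanager-style payload with one firing and one resolved alert.
webhook = {
    "alerts": [
        {"status": "firing", "labels": {"alertname": "HighLatency"}},
        {"status": "resolved", "labels": {"alertname": "HighLatency"}},
    ]
}
events = fan_out(webhook, statuses={"firing"})
```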
### Example Data ```json { "data": { "annotations": { "description": "Demo alert from local Prometheus setup", "summary": "SuperPlane test alert is firing" }, "commonAnnotations": { "description": "Demo alert from local Prometheus setup", "summary": "SuperPlane test alert is firing" }, "commonLabels": { "alertname": "SuperplaneTestAlert", "severity": "warning" }, "endsAt": "0001-01-01T00:00:00Z", "externalURL": "http://localhost:9093", "fingerprint": "aac3b474e2c0658c", "generatorURL": "http://fd66aa456472:9090/graph?g0.expr=vector%281%29\u0026g0.tab=1", "groupKey": "{}:{alertname=\"SuperplaneTestAlert\"}", "groupLabels": { "alertname": "SuperplaneTestAlert" }, "labels": { "alertname": "SuperplaneTestAlert", "severity": "warning" }, "receiver": "superplane", "startsAt": "2026-02-12T16:08:39Z", "status": "firing" }, "timestamp": "2026-02-12T16:18:03.362582388Z", "type": "prometheus.alert" } ``` ## Create Silence The Create Silence component creates a silence in Alertmanager (`POST /api/v2/silences`) to suppress matching alerts. ### Configuration - **Matchers**: Required list of matchers. Each matcher has: - **Name**: Label name to match - **Value**: Label value to match - **Is Regex**: Whether value is a regex pattern (default: false) - **Is Equal**: Whether to match equality (true) or inequality (false) (default: true) - **Duration**: Required duration string (e.g. `1h`, `30m`, `2h30m`) - **Created By**: Required name of who is creating the silence - **Comment**: Required reason for the silence ### Output Emits one `prometheus.silence` payload with silence ID, status, matchers, timing, and creator info. 
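The duration string follows the common Go-style format (`1h`, `30m`, `2h30m`). A sketch of turning it into the `startsAt`/`endsAt` window a silence needs (illustrative only; SuperPlane performs this conversion for you, and this parser only handles hours and minutes):

```python
import re
from datetime import datetime, timedelta, timezone

def parse_duration(s):
    """Parse duration strings like '1h', '30m', or '2h30m' into a timedelta."""
    match = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?", s)
    if not match or not any(match.groups()):
        raise ValueError(f"invalid duration: {s}")
    hours, minutes = (int(g) if g else 0 for g in match.groups())
    return timedelta(hours=hours, minutes=minutes)

# A silence suppresses matching alerts from now until now + duration.
starts_at = datetime.now(timezone.utc)
ends_at = starts_at + parse_duration("2h30m")
```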
### Example Output ```json { "data": { "comment": "Scheduled maintenance window for database migration", "createdBy": "SuperPlane", "endsAt": "2026-02-12T17:30:00Z", "matchers": [ { "isEqual": true, "isRegex": false, "name": "alertname", "value": "HighLatency" }, { "isEqual": true, "isRegex": false, "name": "severity", "value": "critical" } ], "silenceID": "a1b2c3d4-e5f6-4789-a012-3456789abcde", "startsAt": "2026-02-12T16:30:00Z", "status": "active" }, "timestamp": "2026-02-12T16:30:05.123456789Z", "type": "prometheus.silence" } ``` ## Expire Silence The Expire Silence component expires an active silence in Alertmanager (`DELETE /api/v2/silence/{silenceID}`). ### Configuration - **Silence**: Required silence to expire. Supports expressions so users can reference `$['Create Silence'].silenceID`. ### Output Emits one `prometheus.silence.expired` payload with silence ID and status. ### Example Output ```json { "data": { "silenceID": "a1b2c3d4-e5f6-4789-a012-3456789abcde", "status": "expired" }, "timestamp": "2026-02-12T17:45:10.987654321Z", "type": "prometheus.silence.expired" } ``` ## Get Alert The Get Alert component fetches active alerts from Prometheus (`/api/v1/alerts`) and returns the first alert that matches. ### Configuration - **Alert Name**: Required `labels.alertname` value to search for (supports expressions) - **State**: Optional filter (`any`, `firing`, `pending`, `inactive`) ### Output Emits one `prometheus.alert` payload with labels, annotations, state, and timing fields. 
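Selection works like a first-match scan over the alerts Prometheus returns. A simplified Python equivalent (the payload entries are trimmed stand-ins for `/api/v1/alerts` results; field names are assumptions beyond `labels.alertname`):

```python
def find_alert(alerts, alertname, state="any"):
    """Return the first alert whose labels.alertname matches, optionally filtered by state."""
    for alert in alerts:
        if alert.get("labels", {}).get("alertname") != alertname:
            continue
        if state != "any" and alert.get("state") != state:
            continue
        return alert  # First match wins.
    return None

alerts = [
    {"labels": {"alertname": "SuperplaneTestAlert"}, "state": "pending"},
    {"labels": {"alertname": "SuperplaneTestAlert"}, "state": "firing"},
]
```

With `state="any"` the pending entry above would be returned; filtering by `firing` skips past it to the second entry.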
### Example Output ```json { "data": { "annotations": { "description": "Demo alert from local Prometheus setup", "summary": "SuperPlane test alert is firing" }, "labels": { "alertname": "SuperplaneTestAlert", "severity": "warning" }, "startsAt": "2026-02-12T16:08:09.000517289Z", "status": "firing", "value": "1e+00" }, "timestamp": "2026-02-12T16:18:05.943610583Z", "type": "prometheus.alert" } ``` ## Get Silence The Get Silence component retrieves a silence from Alertmanager (`GET /api/v2/silence/{silenceID}`) by its ID. ### Configuration - **Silence**: Required silence to retrieve (supports expressions, e.g. `{{ $['Create Silence'].silenceID }}`) ### Output Emits one `prometheus.silence` payload with silence ID, status, matchers, timing, and creator info. ### Example Output ```json { "data": { "comment": "Scheduled maintenance window", "createdBy": "SuperPlane", "endsAt": "2026-02-12T17:30:00Z", "matchers": [ { "isEqual": true, "isRegex": false, "name": "alertname", "value": "HighLatency" } ], "silenceID": "a1b2c3d4-e5f6-4789-a012-3456789abcde", "startsAt": "2026-02-12T16:30:00Z", "status": "active" }, "timestamp": "2026-02-12T16:30:05.123456789Z", "type": "prometheus.silence" } ``` ## Query The Query component executes an instant PromQL query against Prometheus (`GET /api/v1/query`). ### Configuration - **Query**: Required PromQL expression to evaluate (supports expressions). Example: `up` ### Output Emits one `prometheus.query` payload with the result type and results. ### Example Output ```json { "data": { "result": [ { "metric": { "__name__": "up", "instance": "localhost:9090", "job": "prometheus" }, "value": [ 1708000000, "1" ] } ], "resultType": "vector" }, "timestamp": "2026-02-12T16:30:05.123456789Z", "type": "prometheus.query" } ``` ## Query Range The Query Range component executes a range PromQL query against Prometheus (`GET /api/v1/query_range`). ### Configuration - **Query**: Required PromQL expression to evaluate (supports expressions). 
Example: `up` - **Start**: Required start timestamp in RFC3339 or Unix format (supports expressions). Example: `2026-01-01T00:00:00Z` - **End**: Required end timestamp in RFC3339 or Unix format (supports expressions). Example: `2026-01-02T00:00:00Z` - **Step**: Required query resolution step (e.g. `15s`, `1m`) ### Output Emits one `prometheus.queryRange` payload with the result type and results. ### Example Output ```json { "data": { "result": [ { "metric": { "__name__": "up", "instance": "localhost:9090", "job": "prometheus" }, "values": [ [ 1708000000, "1" ], [ 1708000015, "1" ], [ 1708000030, "1" ] ] } ], "resultType": "matrix" }, "timestamp": "2026-02-12T16:30:05.123456789Z", "type": "prometheus.queryRange" } ``` #### Render Source URL: https://docs.superplane.com/components/render Deploy and manage Render services, and react to Render deploy/build events import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## Instructions 1. **API Key:** Create it in [Render Account Settings -> API Keys](https://dashboard.render.com/u/settings#api-keys). 2. **Workspace (optional):** Use your Render workspace ID (`usr-...` or `tea-...`) or workspace name. Leave empty to use the first workspace available to the API key. 3. **Workspace Plan:** Select **Professional** or **Organization / Enterprise** (used to choose webhook strategy). 4. **Auth:** SuperPlane sends requests to [Render API v1](https://api.render.com/v1/) using `Authorization: Bearer <API_KEY>`. 5. **Webhooks:** SuperPlane configures Render webhooks automatically via the [Render Webhooks API](https://render.com/docs/webhooks). No manual setup is required. 6. **Troubleshooting:** Check [Render Dashboard -> Integrations -> Webhooks](https://dashboard.render.com/) and the [Render webhook docs](https://render.com/docs/webhooks). Note: **Plan requirement:** Render webhooks require a Professional plan or higher. 
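Step 4 above amounts to a standard Bearer-token request. A minimal stdlib sketch — the service-listing path is taken from the Render API v1 docs, but treat the exact path and query parameters as something to verify against them:

```python
import urllib.request

API_KEY = "rnd_xxx"  # placeholder; create a real key in Render Account Settings -> API Keys

def render_request(path: str) -> urllib.request.Request:
    """Build an authenticated request against Render API v1."""
    return urllib.request.Request(
        f"https://api.render.com/v1{path}",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Accept": "application/json",
        },
    )

req = render_request("/services?limit=20")
# urllib.request.urlopen(req) would perform the call; omitted here.
```
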
## On Build The On Build trigger emits build-related Render events for one selected service. ### Use Cases - **Build failure alerts**: Notify your team when builds fail - **Build success hooks**: Trigger follow-up automation after successful builds ### Configuration - **Service**: Required Render service. - **Event Types**: Build event states to listen for. Defaults to `build_ended`. ### Webhook Verification Render webhooks are validated using the secret generated when SuperPlane creates the webhook via the Render API. Verification checks: - `webhook-id` - `webhook-timestamp` - `webhook-signature` (`v1,<signature>` format) ### Event Data The default output emits payload data fields like `buildId`, `eventId`, `serviceId`, `serviceName`, and `status` (when present). ### Example Data ```json { "data": { "buildId": "bld-cukouhrtq21c73e9scng", "createdAt": "2026-02-05T16:00:00.000000Z", "eventId": "evj-cukouhrtq21c73e9scng", "serviceId": "srv-cukouhrtq21c73e9scng", "serviceName": "backend-api", "status": "failed" }, "timestamp": "2026-02-05T16:00:01.000000Z", "type": "render.build.ended" } ``` ## On Deploy The On Deploy trigger emits deploy-related Render events for one selected service. ### Use Cases - **Deploy notifications**: Notify Slack or PagerDuty when deploys succeed/fail - **Post-deploy automation**: Trigger smoke tests after successful deploy completion events - **Release orchestration**: Trigger downstream workflows when deploy stages change ### Configuration - **Service**: Required Render service. - **Event Types**: Deploy event states to listen for. Defaults to `deploy_ended`. ### Webhook Verification Render webhooks are validated using the secret generated when SuperPlane creates the webhook via the Render API. Verification checks: - `webhook-id` - `webhook-timestamp` - `webhook-signature` (`v1,<signature>` format) ### Event Data The default output emits payload data fields like `deployId`, `eventId`, `serviceId`, `serviceName`, and `status` (when present). 
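The `webhook-id`/`webhook-timestamp`/`webhook-signature` headers listed under Webhook Verification follow the Standard Webhooks signing convention. A hedged verification sketch, assuming HMAC-SHA256 over `id.timestamp.body` keyed with the base64 portion of the webhook secret — verify these details against Render's webhook docs before relying on it:

```python
import base64
import hashlib
import hmac

def verify_signature(secret: str, webhook_id: str, timestamp: str,
                     body: bytes, signature_header: str) -> bool:
    """Check a webhook-signature header of the form 'v1,<base64 digest>'."""
    # Assumption: secrets look like 'whsec_<base64 key>' as in the Standard Webhooks spec.
    key = base64.b64decode(secret.removeprefix("whsec_"))
    signed_content = f"{webhook_id}.{timestamp}.".encode() + body
    expected = base64.b64encode(
        hmac.new(key, signed_content, hashlib.sha256).digest()
    ).decode()
    return any(
        hmac.compare_digest(sig.removeprefix("v1,"), expected)
        for sig in signature_header.split()  # header may carry several space-separated signatures
    )
```

SuperPlane performs this check for you; the sketch only illustrates what the three headers contribute to it.
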
### Example Data ```json { "data": { "createdAt": "2026-02-05T16:00:00.000000Z", "deployId": "dep-cukouhrtq21c73e9scng", "eventId": "evj-cukouhrtq21c73e9scng", "serviceId": "srv-cukouhrtq21c73e9scng", "serviceName": "backend-api", "status": "succeeded" }, "timestamp": "2026-02-05T16:00:01.000000Z", "type": "render.deploy.ended" } ``` ## Cancel Deploy The Cancel Deploy component cancels an in-progress deploy for a Render service and waits for it to complete. ### Use Cases - **Automated rollback/abort**: Cancel deploys when health checks fail - **Manual intervention**: Stop a deploy triggered earlier in a workflow ### How It Works 1. Sends a cancel request for the specified deploy via the Render API 2. Waits for the deploy to finish (via deploy_ended webhook and optional polling fallback) 3. Routes execution based on deploy outcome: - **Success channel**: Deploy was cancelled successfully (status is `canceled`) - **Failed channel**: Deploy finished with an unexpected status ### Configuration - **Service**: Render service that owns the deploy - **Deploy ID**: Deploy ID to cancel (supports expressions) ### Output Channels - **Success**: Emitted when the deploy is cancelled successfully - **Failed**: Emitted when the deploy finishes with a non-cancelled status ### Notes - Uses the existing integration webhook for deploy_ended events - Falls back to polling if the webhook does not arrive - Requires a Render API key configured on the integration ### Example Output ```json { "data": { "createdAt": "2026-02-05T16:10:00.000000Z", "deployId": "dep-cukouhrtq21c73e9scng", "finishedAt": "2026-02-05T16:12:00.000000Z", "serviceId": "srv-cukouhrtq21c73e9scng", "status": "canceled", "trigger": "api" }, "timestamp": "2026-02-05T16:12:00.000000Z", "type": "render.deploy" } ``` ## Deploy The Deploy component starts a new deploy for a Render service and waits for it to complete. 
### Use Cases - **Merge to deploy**: Trigger production deploys after a successful GitHub merge and CI pass - **Scheduled redeploys**: Redeploy staging services on schedules or external content changes - **Chained deploys**: Deploy service B when service A finishes successfully ### How It Works 1. Triggers a new deploy for the selected Render service via the Render API 2. Waits for the deploy to complete (via deploy_ended webhook and optional polling fallback) 3. Routes execution based on deploy outcome: - **Success channel**: Deploy completed successfully - **Failed channel**: Deploy failed or was cancelled ### Configuration - **Service**: Render service to deploy - **Clear Cache**: Clear build cache before deploying ### Output Channels - **Success**: Emitted when the deploy completes successfully - **Failed**: Emitted when the deploy fails or is cancelled ### Notes - Uses the existing integration webhook for deploy_ended events (same as On Deploy trigger) - Falls back to polling if the webhook does not arrive - Requires a Render API key configured on the integration ### Example Output ```json { "data": { "createdAt": "2026-02-05T16:10:00.000000Z", "deployId": "dep-cukouhrtq21c73e9scng", "finishedAt": "2026-02-05T16:15:00.000000Z", "serviceId": "srv-cukouhrtq21c73e9scng", "status": "succeeded" }, "timestamp": "2026-02-05T16:15:00.000000Z", "type": "render.deploy.finished" } ``` ## Get Deploy The Get Deploy component fetches a deploy for a Render service. ### Use Cases - **Status checks**: Inspect deploy status and timestamps - **Debugging**: Fetch deploy metadata after receiving an event ### Configuration - **Service**: Render service that owns the deploy - **Deploy ID**: Deploy ID to retrieve (supports expressions) ### Output Emits a `render.deploy` payload containing deploy fields like `deployId`, `status`, `trigger`, and timestamps when available. 
### Example Output ```json { "data": { "commit": { "createdAt": "2026-02-05T16:09:30.000000Z", "id": "1a2b3c4d5e6f", "message": "Release v1.2.3" }, "createdAt": "2026-02-05T16:10:00.000000Z", "deployId": "dep-cukouhrtq21c73e9scng", "finishedAt": "2026-02-05T16:15:00.000000Z", "image": { "ref": "registry.example.com/backend-api:1a2b3c4d", "sha": "sha256:4f7c2d7f0bb27e2f8d4d9b3d2b3a1a9a3b2c1d0e" }, "serviceId": "srv-cukouhrtq21c73e9scng", "startedAt": "2026-02-05T16:10:10.000000Z", "status": "live", "trigger": "api" }, "timestamp": "2026-02-05T16:15:00.000000Z", "type": "render.deploy" } ``` ## Get Service The Get Service component fetches details for a Render service. ### Use Cases - **Service inspection**: Fetch current service configuration and metadata - **Workflow context**: Use service fields to drive branching decisions in later steps ### Configuration - **Service**: Render service to retrieve ### Output Emits a `render.service` payload containing service fields like `serviceId`, `serviceName`, `type`, `dashboardUrl`, and `suspended`. ### Example Output ```json { "data": { "createdAt": "2026-02-05T15:00:00.000000Z", "dashboardUrl": "https://dashboard.render.com/web/srv-cukouhrtq21c73e9scng", "serviceId": "srv-cukouhrtq21c73e9scng", "serviceName": "backend-api", "suspended": "not_suspended", "type": "web_service", "updatedAt": "2026-02-05T16:00:00.000000Z" }, "timestamp": "2026-02-05T16:05:00.000000Z", "type": "render.service" } ``` ## Purge Cache The Purge Cache component requests a build cache purge for a Render service. ### Use Cases - **Cache reset**: Force a clean rebuild when you suspect stale dependencies or build artifacts - **Operational tooling**: Provide a one-click cache purge in incident response workflows ### Configuration - **Service**: Render service whose build cache should be purged ### Output Emits a `render.cache.purge.requested` payload with `serviceId` and a `status` field indicating the request was accepted. 
### Example Output ```json { "data": { "serviceId": "srv-cukouhrtq21c73e9scng", "status": "accepted" }, "timestamp": "2026-02-05T16:20:00.000000Z", "type": "render.cache.purge.requested" } ``` ## Rollback Deploy The Rollback Deploy component triggers a rollback deploy for a Render service and waits for it to complete. ### Use Cases - **Automated recovery**: Roll back after detecting errors in a new deploy - **One-click rollback**: Trigger rollbacks from an incident workflow ### How It Works 1. Triggers a rollback deploy for the selected Render service via the Render API 2. Waits for the deploy to complete (via deploy_ended webhook and optional polling fallback) 3. Routes execution based on deploy outcome: - **Success channel**: Deploy completed successfully (status is `live`) - **Failed channel**: Deploy failed or was cancelled ### Configuration - **Service**: Render service to roll back - **Deploy ID**: The deploy ID to roll back to (supports expressions) ### Output Channels - **Success**: Emitted when the rollback deploy completes successfully - **Failed**: Emitted when the rollback deploy fails or is cancelled ### Notes - Uses the existing integration webhook for deploy_ended events - Falls back to polling if the webhook does not arrive - Includes `rollbackToDeployId` in the output payload for reference - Requires a Render API key configured on the integration ### Example Output ```json { "data": { "createdAt": "2026-02-05T16:18:00.000000Z", "deployId": "dep-cukouhrtq21c73e9scng", "rollbackToDeployId": "dep-cukouhrtq21c73e9scnf", "serviceId": "srv-cukouhrtq21c73e9scng", "status": "build_in_progress", "trigger": "rollback" }, "timestamp": "2026-02-05T16:18:00.000000Z", "type": "render.deploy" } ``` ## Update Env Var The Update Env Var component updates a Render service environment variable. 
### Use Cases - **Rotate secrets**: Generate a new value for an env var (for example, API tokens) and optionally emit it - **Configuration changes**: Update non-secret environment values as part of a workflow ### Configuration - **Service**: Render service that owns the env var - **Key**: Env var key to update - **Value Strategy**: - `Set value`: provide the `Value` field - `Generate value`: request Render to generate a new value - **Value**: New env var value (sensitive). Required when using `Set value` - **Emit Value**: When enabled, include the env var `value` in output. Disabled by default to avoid leaking secrets. ### Output Emits a `render.envVar.updated` payload with `serviceId`, `key`, and a `valueGenerated` boolean. The `value` field is only included when `emitValue` is enabled. ### Example Output ```json { "data": { "key": "DATABASE_URL", "serviceId": "srv-cukouhrtq21c73e9scng", "valueGenerated": false }, "timestamp": "2026-02-05T16:25:00.000000Z", "type": "render.envVar.updated" } ``` #### Rootly Source URL: https://docs.superplane.com/components/rootly Manage and react to incidents in Rootly import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## On Incident The On Incident trigger starts a workflow execution when Rootly incident events occur. ### Use Cases - **Incident automation**: Automate responses to incident events - **Notification workflows**: Send notifications when incidents are created or resolved - **Integration workflows**: Sync incidents with external systems - **Post-incident actions**: Trigger follow-up workflows when incidents are mitigated or resolved ### Configuration - **Events**: Select which incident events to listen for (created, updated, mitigated, resolved, cancelled, deleted) ### Event Data Each incident event includes: - **event**: Event type (incident.created, incident.updated, etc.) 
- **incident**: Complete incident information including title, summary, severity, status ### Webhook Setup This trigger automatically sets up a Rootly webhook endpoint when configured. The endpoint is managed by SuperPlane and will be cleaned up when the trigger is removed. ### Example Data ```json { "data": { "event": "incident.created", "incident": { "id": "abc123-def456", "mitigated_at": null, "resolved_at": null, "severity": "sev2", "started_at": "2026-01-19T12:00:00Z", "status": "started", "summary": "The API response times have increased significantly across all endpoints.", "title": "API latency spike detected", "url": "https://app.rootly.com/incidents/abc123-def456" } }, "timestamp": "2026-01-19T12:00:00Z", "type": "rootly.onIncident" } ``` ## On Incident Timeline Event The On Incident Timeline Event trigger starts a workflow execution when Rootly incident timeline events are created or updated. Only events with kind "event" are emitted. ### Use Cases - **Note automation**: Run workflows when investigation notes are added - **Timeline sync**: Sync incident timeline events to Slack or external systems - **Annotation tracking**: Track updates to incident annotations - **Audit logging**: Capture timeline events for compliance or reporting ### Configuration - **Incident Status**: Optional filter by incident status (open, resolved, etc.) 
- **Severity**: Optional filter by incident severity - **Service**: Optional filter by service name - **Team**: Optional filter by team name - **Event Source**: Optional filter by event source (web, api, system) - **Visibility**: Optional filter by event visibility (internal or external) ### Event Data Each incident event includes: - **id**: Event ID - **event**: Event content - **event_raw**: Raw event content - **event_id**: Webhook event ID - **event_type**: Event type (incident_event.created or incident_event.updated) - **kind**: Event kind - **source**: Event source - **visibility**: Event visibility - **occurred_at**: When the event occurred - **created_at**: When the event was created - **updated_at**: When the event was last updated - **issued_at**: When the webhook was issued - **incident_id**: Incident ID - **incident**: Incident information ### Webhook Setup This trigger automatically sets up a Rootly webhook endpoint when configured. The endpoint is managed by SuperPlane and will be cleaned up when the trigger is removed. 
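The optional filters above can be read as a conjunction: an event passes only when every configured filter matches, and an unset filter matches everything. A sketch of that assumed semantics — the function is illustrative, with field names taken from the event data fields listed above:

```python
def passes_filters(event: dict, *, incident_status=None, severity=None,
                   service=None, team=None, source=None, visibility=None) -> bool:
    """Apply the trigger's optional filters; None means 'no filter'.
    Assumes all configured filters must match (conjunction)."""
    incident = event.get("incident", {})
    checks = [
        (incident_status, incident.get("status")),
        (severity, incident.get("severity")),
        (source, event.get("source")),
        (visibility, event.get("visibility")),
    ]
    if any(want is not None and want != got for want, got in checks):
        return False
    # Service/team filters match against the incident's lists of names.
    if service is not None and service not in incident.get("services", []):
        return False
    if team is not None and team not in incident.get("teams", []):
        return False
    return True
```
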
### Example Data ```json { "data": { "created_at": "2026-02-22T09:46:23.868-08:00", "event": "Investigation started, will update accordingly", "event_id": "b3065ca8-69a6-4781-b6b4-94d6f0317ccf", "event_raw": "Investigation started, will update accordingly", "event_type": "incident_event.created", "id": "56f7b488-e3c5-4091-9bb4-cf132007f98c", "incident": { "id": "64c39fde-1626-4f78-874e-9db91c0639d3", "services": [ "UI - User Profile Block" ], "severity": "sev2", "status": "mitigated", "teams": [ "Customer Relations" ], "title": "new remake from main" }, "incident_id": "64c39fde-1626-4f78-874e-9db91c0639d3", "issued_at": "2026-02-22T09:46:24.018-08:00", "kind": "event", "occurred_at": "2026-02-22T09:46:23.868-08:00", "source": "web", "updated_at": "2026-02-22T09:46:23.868-08:00", "visibility": "internal" }, "timestamp": "2026-02-22T17:46:40.603539728Z", "type": "rootly.onIncidentTimelineEvent" } ``` ## Create Event The Create Event component adds a timeline event (note/annotation) to a Rootly incident. 
### Use Cases - **Investigation notes**: Add detailed investigation notes to the incident timeline - **Status updates**: Post automated status updates as workflows progress - **Cross-system sync**: Sync comments from external tools into the incident timeline ### Configuration - **Incident ID**: The Rootly incident UUID to add the event to (required, supports expressions) - **Event**: The note/annotation text (required, supports expressions) - **Visibility**: Internal or external visibility (optional, default per Rootly) ### Output Returns the created incident event with: - **id**: Event ID - **event**: Event content - **visibility**: Event visibility - **occurred_at**: Event timestamp - **created_at**: Creation timestamp ### Example Output ```json { "data": { "created_at": "2026-02-10T07:34:35.902-08:00", "event": "Investigation update: database connections stabilized.", "id": "a2d32bb7-0417-4d0d-8483-a583c37853ab", "occurred_at": "2026-02-10T07:34:35.902-08:00", "visibility": "internal" }, "timestamp": "2026-02-10T15:34:36.09877478Z", "type": "rootly.incident.event" } ``` ## Create Incident The Create Incident component creates a new incident in Rootly. 
### Use Cases - **Alert escalation**: Create incidents from monitoring alerts - **Error tracking**: Automatically create incidents when errors are detected - **Manual incident creation**: Create incidents from workflow events - **Integration workflows**: Create incidents from external system events ### Configuration - **Title**: A succinct description of the incident (required, supports expressions) - **Summary**: Additional details about the incident (optional, supports expressions) - **Severity**: Incident severity level (optional, supports expressions) ### Output Returns the created incident object including: - **id**: Incident ID - **title**: Incident title - **status**: Current incident status - **severity**: Incident severity - **started_at**: Incident creation timestamp - **url**: Link to the incident in Rootly ### Example Output ```json { "data": { "id": "abc123-def456", "severity": "sev1", "started_at": "2026-01-19T12:00:00Z", "status": "started", "summary": "Users are experiencing slow database queries and connection timeouts.", "title": "Database connection issues", "url": "https://app.rootly.com/incidents/abc123-def456" }, "timestamp": "2026-01-19T12:00:00Z", "type": "rootly.incident" } ``` ## Get Incident The Get Incident component retrieves a single incident from Rootly by ID, including related resources. 
### Use Cases - **Incident enrichment**: Fetch full incident details including services, groups, and action items - **Status checks**: Check the current status and severity of an incident - **Post-incident analysis**: Retrieve incident timeline events and action items - **Cross-system sync**: Get incident data to sync with external systems ### Configuration - **Incident ID**: The ID of the incident to retrieve (required, supports expressions) ### Output Returns the incident object including: - **id**: Incident ID - **sequential_id**: Sequential incident number - **title**: Incident title - **slug**: URL-friendly incident identifier - **status**: Current incident status - **summary**: Incident summary - **severity**: Incident severity slug - **url**: Link to the incident in Rootly - **started_at**: When the incident started - **mitigated_at**: When the incident was mitigated - **resolved_at**: When the incident was resolved - **user**: User who created the incident - **started_by**: User who started the incident - **services**: Affected services - **groups**: Associated groups - **events**: Incident timeline events - **action_items**: Follow-up action items ### Example Output ```json { "data": { "action_items": [ { "id": "ai-001", "status": "open", "summary": "Investigate root cause of latency increase" } ], "events": [ { "created_at": "2026-01-19T12:00:00Z", "id": "evt-001", "kind": "incident_created", "visibility": "internal" } ], "groups": [ { "id": "grp-001", "name": "Backend Team", "slug": "backend-team" } ], "id": "abc123-def456", "mitigated_at": "2026-01-19T12:30:00Z", "resolved_at": null, "sequential_id": 42, "services": [ { "id": "svc-001", "name": "Production API", "slug": "production-api" } ], "severity": "sev1", "slug": "api-latency-spike-detected", "started_at": "2026-01-19T12:00:00Z", "started_by": { "email": "john@example.com", "full_name": "John Doe", "id": "user-002" }, "status": "mitigated", "summary": "The API response times have increased 
significantly across all endpoints.", "title": "API latency spike detected", "url": "https://app.rootly.com/incidents/abc123-def456", "user": { "email": "jane@example.com", "full_name": "Jane Smith", "id": "user-001" } }, "timestamp": "2026-01-19T12:05:00Z", "type": "rootly.incident" } ``` ## Update Incident The Update Incident component updates an existing incident in Rootly. ### Use Cases - **Status updates**: Update incident status when new information arrives - **Severity changes**: Adjust severity based on impact assessment - **Service association**: Attach affected services to an incident - **Team assignment**: Assign teams to respond to an incident - **Metadata updates**: Add labels to categorize incidents ### Configuration - **Incident ID**: The UUID of the incident to update (required, supports expressions) - **Title**: Update the incident title (optional, supports expressions) - **Summary**: Update the incident summary (optional, supports expressions) - **Status**: Update the incident status (optional) - **Sub-Status**: Update the incident sub-status (optional, required by some Rootly accounts when changing status) - **Severity**: Update the incident severity level (optional) - **Services**: Services to attach to the incident (optional) - **Teams**: Teams to attach to the incident (optional) - **Labels**: Key-value labels for the incident (optional) ### Output Returns the updated incident object including: - **id**: Incident UUID - **sequential_id**: Sequential incident number - **title**: Incident title - **slug**: URL-friendly slug - **status**: Current incident status - **updated_at**: Last update timestamp ### Example Output ```json { "data": { "id": "abc123-def456", "mitigated_at": "2026-01-19T13:30:00Z", "sequential_id": 42, "severity": "sev1", "slug": "database-connection-issues", "started_at": "2026-01-19T12:00:00Z", "status": "mitigated", "summary": "Root cause identified. 
Connection pool exhausted.", "title": "Database connection issues - Updated", "updated_at": "2026-01-19T13:30:00Z", "url": "https://app.rootly.com/incidents/abc123-def456" }, "timestamp": "2026-01-19T13:30:00Z", "type": "rootly.incident" } ``` #### Semaphore Source URL: https://docs.superplane.com/components/semaphore Run and react to your Semaphore workflows import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## On Pipeline Done The On Pipeline Done trigger starts a workflow execution when a Semaphore pipeline completes. ### Use Cases - **Pipeline orchestration**: Chain workflows together based on pipeline completion - **Status monitoring**: Monitor CI/CD pipeline results - **Notification workflows**: Send notifications when pipelines succeed or fail - **Post-processing**: Process artifacts or results after pipeline completion ### Configuration - **Project**: Select the Semaphore project to monitor - **Refs**: Optional ref filters (for example `refs/heads/main`) - **Results**: Optional pipeline result filters (for example `passed`, `failed`) - **Pipelines**: Optional pipeline file filters (for example `.semaphore/semaphore.yml`, `.semaphore/production/deploy.yml`) ### Event Data Each pipeline done event includes: - **pipeline**: Pipeline information including ID, state, and result - **workflow**: Workflow information including ID and URL - **project**: Project information - **result**: Pipeline result (passed, failed, stopped, etc.) - **state**: Pipeline state (done) ### Webhook Setup This trigger automatically sets up a Semaphore webhook when configured. The webhook is managed by SuperPlane and will be cleaned up when the trigger is removed. 
### Example Data ```json { "data": { "blocks": [ { "jobs": [ { "id": "00000-00000-00000-00000-00000", "index": 0, "name": "Report result to SuperPlane", "result": "passed", "status": "finished" } ], "name": "Block #1", "result": "passed", "result_reason": "test", "state": "done" } ], "organization": { "id": "00000000-0000-0000-0000-000000000000", "name": "test" }, "pipeline": { "created_at": "2026-01-19T12:00:00Z", "done_at": "2026-01-19T12:00:00Z", "error_description": "", "id": "00000000-0000-0000-0000-000000000000", "name": "Initial Pipeline", "pending_at": "2026-01-19T12:00:00Z", "queuing_at": "2026-01-19T12:00:00Z", "result": "passed", "result_reason": "test", "running_at": "2026-01-19T12:00:00Z", "state": "done", "stopping_at": "1970-01-01T00:00:00Z", "working_directory": ".semaphore", "yaml_file_name": "semaphore.yml" }, "project": { "id": "00000000-0000-0000-0000-000000000000", "name": "test" }, "repository": { "slug": "test/test", "url": "https://github.com/test/test" }, "revision": { "branch": { "commit_range": "0000000000000000000000000000000000000000^...0000000000000000000000000000000000000000", "name": "test" }, "commit_message": "Merge branch 'test' into test", "commit_sha": "0000000000000000000000000000000000000000", "pull_request": null, "reference": "refs/heads/test", "reference_type": "branch", "sender": { "avatar_url": "https://avatars2.githubusercontent.com/u/0000000000000000000000000000000000000000?s=460\u0026v=4", "email": "test@test.com", "login": "test" }, "tag": null }, "version": "1.0.0", "workflow": { "created_at": "2026-01-19T12:00:00Z", "id": "00000000-0000-0000-0000-000000000000", "initial_pipeline_id": "00000000-0000-0000-0000-000000000000" } }, "timestamp": "2026-01-19T12:00:00Z", "type": "semaphore.pipeline.done" } ``` ## Get Pipeline The Get Pipeline component fetches a Semaphore pipeline by its ID and returns its current state, result, and metadata. 
### Use Cases - **Pipeline status checking**: After Run Workflow starts a pipeline, fetch its status to decide when to proceed - **Pipeline lookup**: Look up the result of a specific pipeline from event data to get full details - **Conditional deployment**: Build a status-check step that verifies a pipeline before triggering dependent actions ### Configuration - **Pipeline ID**: The Semaphore pipeline ID (supports expressions, e.g. `{{ event.pipeline.id }}`) ### Output Returns the pipeline object including: - Pipeline ID (ppl_id) - Pipeline name - Workflow ID (wf_id) - State (e.g. running, done) - Result (e.g. passed, failed) ### Example Output ```json { "data": { "branch_name": "main", "commit_message": "feat: add new feature", "commit_sha": "a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2", "created_at": "2026-01-22T15:32:47.000000Z", "done_at": "2026-01-22T15:32:56.000000Z", "error_description": "", "name": "Initial Pipeline", "ppl_id": "00000000-0000-0000-0000-000000000000", "project_id": "22222222-2222-2222-2222-222222222222", "promotion_of": "", "result": "passed", "result_reason": "test", "running_at": "2026-01-22T15:32:48.000000Z", "state": "done", "terminated_by": "", "wf_id": "11111111-1111-1111-1111-111111111111", "working_directory": ".semaphore", "yaml_file_name": "semaphore.yml" }, "timestamp": "2026-01-22T15:32:56.061430218Z", "type": "semaphore.pipeline" } ``` ## Run Workflow The Run Workflow component triggers a Semaphore CI/CD workflow and waits for it to complete. ### Use Cases - **CI/CD orchestration**: Trigger builds and deployments from SuperPlane workflows - **Pipeline automation**: Run Semaphore pipelines as part of workflow automation - **Multi-stage deployments**: Coordinate complex deployment pipelines - **Workflow chaining**: Chain multiple Semaphore workflows together ### How It Works 1. Creates and starts a Semaphore workflow with the specified pipeline file and parameters 2. 
Waits for the pipeline to complete (monitored via webhook and polling) 3. Routes execution based on pipeline result: - **Passed channel**: Pipeline completed successfully - **Failed channel**: Pipeline failed or was cancelled ### Configuration - **Project**: Select the Semaphore project containing the workflow - **Pipeline File**: Path to the pipeline YAML file (e.g., `.semaphore/pipeline.yml`) - **Ref**: Git reference to run the workflow on (branch, tag, or commit SHA) - **Commit SHA**: Optional specific commit SHA to run (if not provided, uses latest from ref) - **Parameters**: Optional workflow parameters as key-value pairs (supports expressions) ### Output Channels - **Passed**: Emitted when pipeline completes successfully - **Failed**: Emitted when pipeline fails or is cancelled ### Notes - The component automatically sets up webhook monitoring for pipeline completion - Falls back to polling if webhook doesn't arrive - Can be cancelled, which will stop the running Semaphore workflow ### Example Output ```json { "data": { "blocks": [ { "jobs": [ { "id": "00000-00000-00000-00000-00000", "index": 0, "name": "Job #1", "result": "passed", "status": "finished" } ], "name": "Block #1", "result": "passed", "result_reason": "test", "state": "done" } ], "organization": { "id": "00000000-0000-0000-0000-000000000000", "name": "test" }, "pipeline": { "created_at": "2026-01-22T15:32:47Z", "done_at": "2026-01-22T15:32:55Z", "error_description": "", "id": "00000000-0000-0000-0000-000000000000", "name": "Initial Pipeline", "pending_at": "2026-01-22T15:32:48Z", "queuing_at": "2026-01-22T15:32:48Z", "result": "passed", "result_reason": "test", "running_at": "2026-01-22T15:32:48Z", "state": "done", "stopping_at": "1970-01-01T00:00:00Z", "working_directory": ".semaphore", "yaml_file_name": "semaphore.yml" }, "project": { "id": "00000000-0000-0000-0000-000000000000", "name": "test" }, "repository": { "slug": "test/test", "url": "https://github.com/test/test" }, "revision": { 
"branch": { "commit_range": "0000000000000000000000000000000000000000^...0000000000000000000000000000000000000000", "name": "test" }, "commit_message": "test", "commit_sha": "0000000000000000000000000000000000000000", "pull_request": null, "reference": "refs/heads/test", "reference_type": "branch", "sender": { "avatar_url": "https://example.com/avatar.png", "email": "test@test.com", "login": "test" }, "tag": null }, "version": "1.0.0", "workflow": { "created_at": "2026-01-22T15:32:47Z", "id": "00000000-0000-0000-0000-000000000000", "initial_pipeline_id": "00000000-0000-0000-0000-000000000000" } }, "timestamp": "2026-01-22T15:32:56.061430218Z", "type": "semaphore.workflow.finished" } ``` #### SendGrid Source URL: https://docs.superplane.com/components/sendgrid Send transactional and marketing email with SendGrid import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## Instructions ### Connection Configure the SendGrid integration in SuperPlane with: - **API Key**: SendGrid API key with Mail Send and Mail Settings Read scopes - **Default From Email**: Required sender email address for SendGrid actions - **Default From Name**: Optional sender name for SendGrid actions ### Actions and Triggers The SendGrid base integration establishes API access. Actions and triggers will be documented here once they are available. ## On Email Event The On Email Event trigger emits events when SendGrid posts delivery or engagement events to your webhook. ### Use Cases - **Bounce handling**: Stop sending to bounced addresses and notify your team - **Delivery confirmations**: Trigger follow-ups when critical notifications are delivered - **Engagement tracking**: Update CRM records when recipients open or click emails ### Configuration - **Event Types**: Optional filter for specific SendGrid events (processed, delivered, bounce, open, click, etc.) 
- **Category Filter**: Optional list of predicates (`equals`, `notEquals`, `matches` regex) ### Webhook Verification SuperPlane configures the SendGrid Event Webhook via API and enables Signed Event Webhook by default. The verification key is stored automatically. Verification uses: - `X-Twilio-Email-Event-Webhook-Signature` header - `X-Twilio-Email-Event-Webhook-Timestamp` header - Raw request body (no transformations) ### Event Data Each event includes fields such as `event`, `email`, `timestamp`, `sg_event_id`, `sg_message_id`, `category` and event-specific properties like `reason`, `response`, `url`, `bounce_classification`. ### Example Data ```json { "data": { "category": [ "order-confirmation" ], "email": "recipient@example.com", "event": "delivered", "response": "250 OK", "sg_event_id": "ZGVmYXVsdC1ldmVudC1pZA", "sg_message_id": "YWJjMTIzX2RlZmF1bHRfbXNnX2lk", "timestamp": 1700000000 }, "timestamp": "2026-02-04T12:00:00Z", "type": "sendgrid.email.event" } ``` ## Create or Update Contact Create or update a contact in SendGrid using the Marketing Contacts API. 
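Under the hood this maps to SendGrid's Marketing Contacts upsert endpoint (`PUT https://api.sendgrid.com/v3/marketing/contacts`). A minimal sketch of the request body the component would assemble; the helper function itself is illustrative, not SuperPlane's implementation:

```python
import json

SENDGRID_UPSERT_URL = "https://api.sendgrid.com/v3/marketing/contacts"

def build_upsert_body(email, first_name=None, last_name=None,
                      list_ids=None, custom_fields=None):
    """Build the JSON body for SendGrid's Marketing Contacts upsert."""
    contact = {"email": email}  # email is the unique identifier
    if first_name:
        contact["first_name"] = first_name
    if last_name:
        contact["last_name"] = last_name
    if custom_fields:
        # custom field IDs must already exist in SendGrid
        contact["custom_fields"] = custom_fields
    body = {"contacts": [contact]}
    if list_ids:
        body["list_ids"] = list_ids
    return body

body = build_upsert_body("recipient@example.com", first_name="Ada",
                         list_ids=["list-123"])
print(json.dumps(body))
```

SendGrid processes the upsert asynchronously and returns a job ID with a `202 Accepted` status, which matches the example output below.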
### Use Cases - **Signup sync**: Add new signups to SendGrid lists for onboarding or newsletters - **CRM sync**: Keep SendGrid contacts updated from your CRM or database - **Post-purchase follow-up**: Add buyers to follow-up campaigns ### Configuration - **Email**: Contact email address (unique identifier) - **First Name**: Optional contact first name - **Last Name**: Optional contact last name - **List IDs**: Optional SendGrid list IDs to add the contact to - **Custom Fields**: Optional custom fields map (must exist in SendGrid) ### Output Channels - **Default**: Emitted when SendGrid accepts the upsert request - **Failed**: Emitted when validation fails or the API request is rejected ### Notes - Requires a SendGrid API key with Marketing Contacts permissions ### Example Output ```json { "data": { "email": "recipient@example.com", "jobId": "job-123", "status": "202 Accepted", "statusCode": 202 }, "timestamp": "2026-02-04T12:00:00Z", "type": "sendgrid.contact.upserted" } ``` ## Send Email Send a single email via SendGrid's Mail Send API. ### Use Cases - **Notifications**: Send alert or notification emails when workflows fail or complete - **Receipts**: Send order confirmations or receipts from workflow runs - **Reports**: Deliver scheduled digests or reports to stakeholders ### Configuration - **To**: Recipient email address(es), comma-separated - **Subject**: Email subject line - **Sending Mode**: Choose text, HTML, or dynamic template - **Text Body**: Email body content (plain text) - **HTML Body**: Email body content (HTML) - **CC**: CC recipients, comma-separated - **BCC**: BCC recipients, comma-separated - **From Name**: Optional sender display name override - **From Email**: Optional sender email override (must be verified in SendGrid) - **Reply-To**: Reply-to email address - **Template ID**: SendGrid dynamic template ID (e.g. 
`d-xxxxxxxx`) - **Template Data**: JSON object of template substitution variables - **Categories**: Optional comma-separated list of category names. SendGrid attaches these to the message for tracking and filtering in the Email Activity feed and for use with the Event Webhook (e.g. to filter events by category in an On Email Event trigger). ### Output Channels - **Default**: Emitted when SendGrid accepts the message - **Failed**: Emitted when validation fails or the API request is rejected ### Notes - Requires a SendGrid API key configured on the integration - When using a template, SendGrid may override the subject and body ### Example Output ```json { "data": { "messageId": "YzQ2ODAtMTg5NC0xMWVmLTk0NGYtNTZlYjU5OGY4Y2Q3", "status": "202 Accepted", "statusCode": 202, "subject": "Your receipt is ready", "to": [ "recipient@example.com" ] }, "timestamp": "2026-02-04T12:00:00Z", "type": "sendgrid.email.sent" } ``` #### Sentry Source URL: https://docs.superplane.com/components/sentry React to issue events and manage issues and metric alerts in Sentry import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## Instructions **Setup steps:** 1. Create a [personal auth token](https://sentry.io/settings/account/api/auth-tokens/) in Sentry with the permissions below. Copy the token. > **Token Permissions:** > Project -> `Read` · Releases -> `Read & Write` (`project:releases`) · Team -> `Read` · Issue & Event -> `Read & Write` · Organization -> `Read & Write` 2. In Sentry, go to **Settings → Integrations → Custom Integrations → Create New Integration → Internal Integration**. 3. Name it (e.g. `SuperPlane`), leave **Webhook URL** empty, and save. Copy the **Client Secret** shown at the bottom of the integration page. 4. Fill in **Sentry URL**, **User Token**, **Integration Name**, and **Client Secret** below, then save. SuperPlane configures the webhook and subscribes to issue events automatically.
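SuperPlane uses the Client Secret to verify incoming webhook signatures for you. For reference, a hedged sketch of the check, assuming Sentry's standard scheme for internal integrations (HMAC-SHA256 of the raw request body keyed with the client secret, hex-encoded in the `sentry-hook-signature` header):

```python
import hmac
import hashlib

def verify_sentry_signature(raw_body: bytes, client_secret: str, signature: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw request body and compare it
    to the hex digest Sentry sends in the sentry-hook-signature header."""
    expected = hmac.new(client_secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

body = b'{"action": "created"}'
sig = hmac.new(b"my-client-secret", body, hashlib.sha256).hexdigest()
print(verify_sentry_signature(body, "my-client-secret", sig))
```

Using `compare_digest` rather than `==` avoids timing side channels when comparing signatures.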
## On Issue Event The On Issue Event trigger starts a workflow execution when Sentry sends issue webhooks for the connected organization. ### Use Cases - **Escalation workflows**: react when a new issue is created in Sentry - **Triage automation**: assign follow-up actions when issues are assigned or resolved - **Cross-tool sync**: mirror Sentry issue state changes into incident or ticketing systems ### Configuration - **Project**: Optionally limit the trigger to a single Sentry project - **Actions**: Select which issue actions should trigger the workflow ### Event Data The trigger emits the full Sentry webhook payload, including: - **action**: the issue event action - **data.issue**: the Sentry issue object - **actor**: the user or team that triggered the event when available ### Setup This trigger uses the webhook URL configured on your Sentry internal integration. SuperPlane verifies each webhook signature using your Sentry client secret before routing the event to matching triggers. ### Example Data ```json { "data": { "action": "created", "actor": { "id": "789", "name": "Person", "type": "user" }, "data": { "issue": { "assignedTo": { "id": "789", "name": "Person", "type": "user" }, "culprit": "SentryCustomError(frontend/src/util)", "firstSeen": "2022-04-04T18:17:18.320000Z", "id": "123", "lastSeen": "2022-04-04T18:17:18.320000Z", "level": "error", "permalink": "https://your-org.sentry.io/issues/123/", "project": { "id": "456", "name": "ipe", "slug": "ipe" }, "shortId": "IPE-1", "status": "unresolved", "substatus": "new", "title": "Error #1: This is a test error!", "web_url": "https://your-org.sentry.io/issues/123/" } }, "installation": { "uuid": "7a485448-a9e2-4c85-8a3c-4f44175783c9" }, "resource": "issue", "timestamp": "2022-04-04T18:17:18.320000Z" }, "timestamp": "2022-04-04T18:17:18.320000Z", "type": "sentry.issue" } ``` ## Create Alert The Create Alert component creates a Sentry metric alert rule for a selected project. 
### Use Cases - **Coverage automation**: create alert rules automatically after provisioning a service - **Policy enforcement**: ensure critical projects always have baseline metric alerts - **Release safety**: create release-specific alert rules after deploy workflows ### Configuration - **Project**: Sentry project that owns the metric alert rule - **Name**: Alert rule name shown in Sentry - **Aggregate**: Metric expression such as `count()` - **Query**: Optional event search query to narrow the alert - **Time Window**: Evaluation window in minutes - **Threshold Type**: Whether the threshold fires above or below the configured value - **Environment**: Optional environment filter - **Event Types**: Event types included in the alert evaluation - **Critical / Warning**: Thresholds and notification targets for each trigger level. Select the target type first, then choose a Sentry user or team. ### Output Returns the created Sentry metric alert rule, including triggers, actions, and project association. 
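These fields correspond roughly to Sentry's metric alert rule API (`POST /api/0/organizations/{org}/alert-rules/`). A hedged sketch of how the configuration could translate into a request body; field names are inferred from Sentry's API, and the helper is illustrative rather than SuperPlane's exact payload:

```python
def build_alert_rule(name, project_slug, aggregate="count()", query="",
                     time_window=60, threshold_type=0, critical_threshold=5,
                     environment=None, event_types=("error",)):
    """Assemble a metric alert rule body; thresholdType 0 fires above the value."""
    rule = {
        "name": name,
        "projects": [project_slug],
        "aggregate": aggregate,       # metric expression, e.g. count()
        "query": query,               # optional event search filter
        "timeWindow": time_window,    # evaluation window in minutes
        "thresholdType": threshold_type,
        "eventTypes": list(event_types),
        "triggers": [
            # notification targets (user/team) would be added under "actions"
            {"label": "critical", "alertThreshold": critical_threshold, "actions": []}
        ],
    }
    if environment:
        rule["environment"] = environment
    return rule

rule = build_alert_rule("High error rate", "go-gin",
                        query="environment:production", environment="production")
```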
### Example Output ```json { "data": { "aggregate": "count()", "dataset": "events", "dateCreated": "2026-03-26T09:15:00Z", "dateModified": "2026-03-26T09:15:00Z", "environment": "production", "eventTypes": [ "default", "error" ], "id": "19436", "name": "SuperPlane metric alert example", "organizationId": "4511070267441152", "owner": null, "projects": [ "go-gin" ], "query": "environment:production", "queryType": 0, "resolveThreshold": null, "thresholdType": 0, "timeWindow": 60, "triggers": [ { "actions": [ { "alertRuleTriggerId": "28750", "dateCreated": "2026-03-26T09:15:00Z", "id": "26870", "inputChannelId": null, "integrationId": null, "priority": null, "sentryAppId": null, "targetIdentifier": "4346509", "targetType": "user", "type": "email" } ], "alertRuleId": "19436", "alertThreshold": 5, "dateCreated": "2026-03-26T09:15:00Z", "id": "28750", "label": "critical", "resolveThreshold": 2, "thresholdType": 0 } ] }, "timestamp": "2026-03-26T09:15:00Z", "type": "sentry.alertRule" } ``` ## Create Deploy The Create Deploy component marks a deploy against an existing Sentry release. ### Use Cases - **Deployment tracking**: record when a release reaches staging or production - **Release automation**: pair with Create Release in CI/CD canvases - **Operational context**: give Sentry deployment timing and environment information for later triage ### Configuration - **Project**: Optional Sentry project to associate with the deploy - **Release**: Select the existing Sentry release to deploy - **Environment**: The target environment, such as staging or production - **Name**: Optional deploy name - **Deploy URL**: Optional URL for the deployment - **Started At**: Optional deployment start time - **Finished At**: Optional deployment finish time ### Output Returns the created deploy record, including environment, timestamps, URL, and release version. 
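A deploy like this maps to Sentry's deploys endpoint (`POST /api/0/organizations/{org}/releases/{version}/deploys/`). A minimal sketch that only assembles the call; the org slug and field names follow Sentry's REST API, and only `environment` is required:

```python
def build_deploy_request(org, version, environment, name=None, url=None,
                         date_started=None, date_finished=None):
    """Return (endpoint, body) for creating a deploy against an existing release."""
    endpoint = f"https://sentry.io/api/0/organizations/{org}/releases/{version}/deploys/"
    body = {"environment": environment}  # the only required field
    for key, value in (("name", name), ("url", url),
                       ("dateStarted", date_started), ("dateFinished", date_finished)):
        if value is not None:  # optional fields are omitted, not sent as null
            body[key] = value
    return endpoint, body

endpoint, body = build_deploy_request("acme", "2026.03.25", "production",
                                      name="Deploy #42")
```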
### Example Output ```json { "data": { "dateFinished": "2026-03-25T10:24:12.000Z", "dateStarted": "2026-03-25T10:20:00.000Z", "environment": "production", "id": "1234567", "name": "Deploy #42", "projects": [ "go-gin" ], "releaseVersion": "2026.03.25", "url": "https://example.com/deploys/42" }, "timestamp": "2026-03-25T10:24:12.000Z", "type": "sentry.deploy" } ``` ## Create Release The Create Release component registers a new release in Sentry for a selected project. ### Use Cases - **Release tracking**: create a release after a build or deploy succeeds - **Commit association**: attach commits and refs so Sentry can correlate new issues with code changes - **Post-deploy automation**: feed the created release into downstream deployment and monitoring steps ### Configuration - **Project**: Select the Sentry project this release applies to - **Version**: The release version identifier - **Ref**: Optional commit or tag reference for the release - **Release URL**: Optional URL for the release, build, or changelog - **Commits**: Optional commit metadata to associate with the release - **Refs**: Optional repository head/previous commit refs for release comparison ### Output Returns the created Sentry release object, including version, associated projects, deploy count, and release metadata. ### Example Output ```json { "data": { "commitCount": 2, "dateCreated": "2026-03-25T10:12:55.109Z", "dateReleased": "2026-03-25T10:15:00.000Z", "deployCount": 0, "id": 2, "newGroups": 0, "projects": [ { "name": "go-gin", "slug": "go-gin" } ], "ref": "6ba09a7c53235ee8a8fa5ee4c1ca8ca886e7fdbb", "shortVersion": "2026.03.25", "url": "https://github.com/superplanehq/superplane/releases/tag/2026.03.25", "version": "2026.03.25" }, "timestamp": "2026-03-25T10:15:00.000Z", "type": "sentry.release" } ``` ## Delete Alert The Delete Alert component deletes an existing Sentry metric alert rule. 
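The deletion maps to a single Sentry API call. A hedged sketch, assuming the metric alert rules endpoint (`DELETE /api/0/organizations/{org}/alert-rules/{rule_id}/`); the component then emits the ID and name, as in the example output:

```python
def delete_alert_rule(org: str, rule_id: str, name: str):
    """Sketch of the Delete Alert call: a DELETE with no body, followed by
    an output event that records which rule was removed."""
    method = "DELETE"
    url = f"https://sentry.io/api/0/organizations/{org}/alert-rules/{rule_id}/"
    # ...perform the request here with an Authorization: Bearer <token> header...
    output = {"deleted": True, "id": rule_id, "name": name}  # mirrors the example output
    return method, url, output

method, url, output = delete_alert_rule("acme", "19436",
                                        "SuperPlane metric alert example")
```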
### Use Cases - **Alert cleanup**: remove obsolete rules after a service is retired - **Policy rotation**: delete temporary alert rules after a rollout is complete - **CRUD completion**: pair with alert listing and update workflows ### Configuration - **Project**: Optional project to narrow alert rule selection - **Alert Rule**: Metric alert rule to delete ### Output Returns the deleted alert ID and name so downstream steps can record the removal. ### Example Output ```json { "data": { "deleted": true, "id": "19436", "name": "SuperPlane metric alert example" }, "timestamp": "2026-03-26T09:25:00Z", "type": "sentry.alertDeleted" } ``` ## Get Alert The Get Alert component retrieves a Sentry metric alert rule and returns its full configuration. ### Use Cases - **Conditional logic**: inspect alert thresholds and projects before taking action - **Alert enrichment**: include alert configuration in downstream notifications or tickets - **Auditing**: fetch a specific metric alert rule for verification or reporting ### Configuration - **Project**: Optional project to narrow alert selection - **Alert Rule**: The metric alert rule to retrieve ### Output Returns the selected metric alert rule, including projects, query, thresholds, triggers, and action details. 
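Downstream nodes can branch on the returned configuration with expressions. For instance, a threshold from the first trigger can be read like this (the node name and comparison are illustrative; see the Expressions doc for the full syntax):

```
{{ $['Get Alert'].data.triggers[0].alertThreshold }}
```

A filter or conditional step could then compare this value against a limit before re-tuning or deleting the rule.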
### Example Output ```json { "data": { "aggregate": "count()", "createdBy": { "email": "washington@example.com", "name": "Washington" }, "dataset": "events", "dateCreated": "2026-03-25T10:00:00Z", "dateModified": "2026-03-25T10:05:00Z", "environment": "production", "eventTypes": [ "error" ], "id": "177412243058", "name": "High error rate in production", "owner": "team:42", "projects": [ "backend" ], "query": "event.type:error environment:production", "timeWindow": 60, "triggers": [ { "actions": [ { "id": "394280", "inputChannelId": "#alerts", "targetIdentifier": "30489048931789", "targetType": "specific", "type": "slack" } ], "alertThreshold": 100, "id": "294385908", "label": "critical" } ] }, "timestamp": "2026-03-25T10:05:00Z", "type": "sentry.alertRule" } ``` ## Get Issue The Get Issue component retrieves a Sentry issue and enriches it with recent events for downstream routing and escalation. ### Use Cases - **Routing decisions**: inspect assignee, status, project, and issue frequency before branching - **Escalation context**: include recent events and tags in notifications or ticket creation - **Release correlation**: check whether an issue is already tied to a release before actioning it ### Configuration - **Issue**: Select the Sentry issue to retrieve ### Output Returns the Sentry issue object including: - issue metadata such as title, status, assignee, tags, and frequency stats - recent issue events for additional context ### Example Output ```json { "data": { "assignedTo": { "id": "42", "name": "Platform", "type": "team" }, "count": "7", "events": [ { "dateCreated": "2026-03-24T13:03:56Z", "eventID": "evt_1", "id": "evt_1", "message": "RuntimeError: SuperPlane trigger validation test 008", "platform": "other", "tags": [ { "key": "environment", "value": "production" } ], "title": "RuntimeError: SuperPlane trigger validation test 008" } ], "id": "106740394", "numComments": 0, "permalink": "https://washington-x2.sentry.io/issues/106740394/", "priority": 
"high", "project": { "id": "4511070273339472", "name": "go-gin", "slug": "go-gin" }, "shortId": "GO-GIN-G", "stats": { "24h": [ [ 1711270800, 7 ] ] }, "status": "unresolved", "tags": [ { "key": "environment", "value": "production" }, { "key": "level", "value": "error" } ], "title": "RuntimeError: SuperPlane trigger validation test 008", "userCount": 3, "web_url": "https://washington-x2.sentry.io/issues/106740394/" }, "timestamp": "2026-03-24T13:03:56Z", "type": "sentry.issue" } ``` ## List Alerts The List Alerts component lists Sentry metric alert rules for the connected organization. ### Use Cases - **Alert audits**: review metric alert coverage for an organization or project - **Conditional workflows**: branch based on whether matching alert rules already exist - **Reporting**: feed alert rule inventories into downstream notification or documentation steps ### Configuration - **Project**: Optional Sentry project to filter alert rules ### Output Returns an object containing the list of matching metric alert rules and their configuration details. ### Example Output ```json { "data": { "alerts": [ { "aggregate": "count()", "dataset": "events", "dateCreated": "2026-03-25T10:00:00Z", "dateModified": "2026-03-25T10:05:00Z", "environment": "production", "eventTypes": [ "error" ], "id": "177412243058", "name": "High error rate in production", "owner": "team:42", "projects": [ "backend" ], "query": "event.type:error environment:production", "timeWindow": 60, "triggers": [ { "actions": [ { "id": "394280", "inputChannelId": "#alerts", "targetIdentifier": "30489048931789", "targetType": "specific", "type": "slack" } ], "alertThreshold": 100, "id": "294385908", "label": "critical" } ] } ] }, "timestamp": "2026-03-25T10:05:00Z", "type": "sentry.alertRules" } ``` ## Update Alert The Update Alert component updates an existing Sentry metric alert rule. 
### Use Cases - **Threshold tuning**: adjust thresholds after an incident review - **Ownership updates**: redirect alert notifications to a different user or team - **Environment changes**: tighten or loosen alert conditions for a new rollout ### Configuration - **Project**: Optional project to narrow alert rule selection or replace the rule's project - **Alert Rule**: Existing Sentry alert rule to update - **Name / Aggregate / Query / Time Window / Threshold Type / Environment / Event Types**: Optional overrides for the existing rule - **Critical / Warning**: Optional updates to trigger thresholds and notification targets. Select the target type first, then choose a Sentry user or team. ### Output Returns the updated Sentry metric alert rule after the change is applied. ### Example Output ```json { "data": { "aggregate": "count()", "dataset": "events", "dateCreated": "2026-03-26T09:15:00Z", "dateModified": "2026-03-26T09:25:00Z", "environment": "production", "eventTypes": [ "error" ], "id": "19436", "name": "SuperPlane metric alert example", "organizationId": "4511070267441152", "owner": null, "projects": [ "go-gin" ], "query": "environment:production level:error", "queryType": 0, "resolveThreshold": null, "thresholdType": 0, "timeWindow": 30, "triggers": [ { "actions": [ { "alertRuleTriggerId": "28750", "dateCreated": "2026-03-26T09:15:00Z", "id": "26870", "inputChannelId": null, "integrationId": null, "priority": null, "sentryAppId": null, "targetIdentifier": "4346509", "targetType": "user", "type": "email" } ], "alertRuleId": "19436", "alertThreshold": 10, "dateCreated": "2026-03-26T09:15:00Z", "id": "28750", "label": "critical", "resolveThreshold": 3, "thresholdType": 0 } ] }, "timestamp": "2026-03-26T09:25:00Z", "type": "sentry.alertRule" } ``` ## Update Issue The Update Issue component updates an existing issue in Sentry. 
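Updates like these map to Sentry's issue update endpoint (`PUT /api/0/organizations/{org}/issues/{issue_id}/`). A hedged sketch of the body, sending only the fields you set; field names follow Sentry's issue API:

```python
def build_issue_update(status=None, priority=None, assigned_to=None,
                       is_public=None, is_subscribed=None, has_seen=None):
    """Assemble the JSON body for a Sentry issue update; unset fields are omitted."""
    body = {}
    if status is not None:
        body["status"] = status            # e.g. "resolved", "ignored", "unresolved"
    if priority is not None:
        body["priority"] = priority
    if assigned_to is not None:
        body["assignedTo"] = assigned_to   # a user or team actor, e.g. "team:42"
    if is_public is not None:
        body["isPublic"] = is_public
    if is_subscribed is not None:
        body["isSubscribed"] = is_subscribed
    if has_seen is not None:
        body["hasSeen"] = has_seen
    return body

body = build_issue_update(status="resolved", assigned_to="team:42")
```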
### Use Cases - **Resolve issues automatically** after a remediation workflow succeeds - **Reopen issues** when a related deployment regresses - **Route ownership** by assigning issues to a user or team - **Escalate triage** by changing issue priority - **Mark issues reviewed** after automation handles the first response - **Manage visibility and subscriptions** for follow-up workflows ### Configuration - **Issue**: Select the Sentry issue to update - **Status**: Optional new issue status - **Priority**: Optional issue priority - **Assigned To**: Optional assignee from the selected issue's Sentry project - **Seen**: Optional reviewed flag for the connected user - **Public**: Optional issue sharing visibility - **Subscribed**: Optional workflow subscription for the connected user ### Output Returns the updated Sentry issue object. ### Example Output ```json { "data": { "assignedTo": { "id": "789", "name": "Person", "type": "user" }, "culprit": "SentryCustomError(frontend/src/util)", "firstSeen": "2022-04-04T18:17:18.320000Z", "id": "123", "lastSeen": "2022-04-04T18:17:18.320000Z", "level": "error", "permalink": "https://your-org.sentry.io/issues/123/", "project": { "id": "456", "name": "ipe", "slug": "ipe" }, "shortId": "IPE-1", "status": "resolved", "substatus": "resolved", "title": "Error #1: This is a test error!", "web_url": "https://your-org.sentry.io/issues/123/" }, "timestamp": "2022-04-04T18:17:18.320000Z", "type": "sentry.issue" } ``` #### ServiceNow Source URL: https://docs.superplane.com/components/servicenow Manage and react to incidents in ServiceNow import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Actions ## Instructions Requires a ServiceNow instance with OAuth API access. 
Before creating OAuth credentials, enable client credentials grant on your instance: - Go to **System Properties** (sys_properties_list.do) and search for: - **Name**: glide.oauth.inbound.client.credential.grant_type.enabled - (Important: the property name ends with **enabled**) - If it does not exist, create it with: - **Application Scope**: Global - **Type**: true | false - **Value**: true Then configure OAuth: - Go to **System OAuth > Inbound Integrations** - Create a new integration with **OAuth - Client Credentials Grant** - Copy the generated **Client ID** and **Client Secret** - Assign required permissions to the integration account: - **itil** role (required for incident read/write) - Optionally **admin** if broader scoped access is needed - Optionally enable **Web Service Access Only** on the integration account to restrict it to API-only use. ## Create Incident The Create Incident component creates a new incident in ServiceNow using the Table API. ### Use Cases - **Alert escalation**: Create incidents from monitoring alerts - **Error tracking**: Automatically create incidents when errors are detected - **Manual incident creation**: Create incidents from workflow events - **Integration workflows**: Create incidents from external system events ### Required Permissions The ServiceNow integration account needs: - **itil** role — grants read/write access to the Incident table ### Configuration - **Short Description**: A brief summary of the incident (required, supports expressions) - **Description**: Detailed description of the incident (optional, supports expressions) - **Urgency**: Incident urgency level (1-High, 2-Medium, 3-Low) - **Impact**: Incident impact level (1-High, 2-Medium, 3-Low) - **Category**: Incident category (select from list) - **Subcategory**: Incident subcategory (depends on the selected category) - **Assignment Group**: The group responsible for resolving the incident (select from list) - **Assigned To**: The user assigned to resolve the 
incident (select from list) - **Caller**: The user reporting the incident (select from list) ### Output Returns the created incident object from the ServiceNow Table API, including: - **sys_id**: Unique identifier - **number**: Human-readable incident number (e.g. INC0010001) - **state**: Current incident state - **short_description**: Incident summary - **created_on**: Creation timestamp ### Example Output ```json { "data": { "active": "true", "activity_due": "2026-02-21 18:00:00", "additional_assignee_list": "c1a2b3d4e5f60718293a4b5c6d7e8f90", "approval": "not requested", "approval_history": "", "approval_set": "", "assigned_to": "46d44a1b2f123010a9ad2572f699b6a7", "assignment_group": "8a5055c9c61122780043563ef53438e3", "business_duration": "1970-01-01 02:15:00", "business_impact": "Moderate impact affecting internal users", "business_service": "Email Service", "business_stc": "2", "calendar_duration": "1970-01-01 03:00:00", "calendar_stc": "3", "caller_id": "6816f79cc0a8016401c5a33be04be441", "category": "inquiry", "cause": "User mailbox quota exceeded", "caused_by": "", "child_incidents": "0", "close_code": "", "close_notes": "", "closed_at": "", "closed_by": "", "cmdb_ci": "3f8a9d12db2310108f5d5e1234567890", "comments": "User reported inability to send emails.", "comments_and_work_notes": "", "company": "Acme Corp", "contact_type": "email", "contract": "Standard IT Support", "correlation_display": "", "correlation_id": "EXT-EMAIL-20260220-001", "delivery_plan": "", "delivery_task": "", "description": "User cannot send or receive emails since this morning. 
Outlook shows mailbox full error.", "due_date": "2026-02-21 18:00:00", "escalation": "0", "expected_start": "2026-02-20 17:30:00", "follow_up": "2026-02-22 10:00:00", "group_list": "", "hold_reason": "", "impact": "2", "incident_state": "2", "knowledge": "false", "location": "São Paulo - Office 3rd Floor", "made_sla": "true", "notify": "1", "number": "INC0010456", "opened_at": "2026-02-20 17:00:06", "opened_by": { "link": "https://dev194606.service-now.com/api/now/table/sys_user/6816f79cc0a8016401c5a33be04be441", "value": "6816f79cc0a8016401c5a33be04be441" }, "order": "", "origin_id": "", "origin_table": "", "parent": "", "parent_incident": "", "priority": "3", "problem_id": "", "reassignment_count": "1", "reopen_count": "0", "reopened_by": "", "reopened_time": "", "resolved_at": "", "resolved_by": "", "rfc": "", "route_reason": "", "service_offering": "Corporate Email - Exchange Online", "severity": "3", "short_description": "User unable to send emails - mailbox full", "sla_due": "2026-02-21 18:00:00", "state": "2", "subcategory": "email", "sys_class_name": "incident", "sys_created_by": "admin", "sys_created_on": "2026-02-20 17:00:06", "sys_domain": { "link": "https://dev194606.service-now.com/api/now/table/sys_user_group/global", "value": "global" }, "sys_domain_path": "/", "sys_id": "5fd8a0b983c332104005f7efeeaad999", "sys_mod_count": "3", "sys_tags": "email,quota,user-issue", "sys_updated_by": "it.support", "sys_updated_on": "2026-02-20 18:45:22", "task_effective_number": "INC0010456", "time_worked": "5400", "universal_request": "", "upon_approval": "proceed", "upon_reject": "cancel", "urgency": "2", "user_input": "Cannot send emails since 9 AM.", "watch_list": "manager@acme.com,it-team@acme.com", "work_end": "", "work_notes": "Mailbox cleaned and quota increased to 100GB. 
Issue under monitoring.", "work_notes_list": "", "work_start": "2026-02-20 17:30:00" }, "timestamp": "2026-02-20T18:45:25.123456789Z", "type": "servicenow.incident" } ``` ## Get Incident The Get Incident component fetches a single ServiceNow incident selected from the dropdown. ### Example Output ```json { "data": { "category": "Network", "impact": "2", "number": "INC0010001", "priority": "3", "short_description": "Server is unresponsive", "state": "1", "subcategory": "DNS", "sys_created_on": "2026-01-19 12:00:00", "sys_id": "a1b2c3d4e5f6g7h8i9j0", "sys_updated_on": "2026-01-19 12:00:00", "urgency": "2" }, "timestamp": "2026-01-19T12:00:00Z", "type": "servicenow.incident" } ``` #### Slack Source URL: https://docs.superplane.com/components/slack Send and react to Slack messages and interactions import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## Instructions You can install the Slack app without the **Bot Token** and **Signing Secret**. After installation, follow the setup prompt to create the Slack app and add those values. ## On App Mention The On App Mention trigger starts a workflow execution when the Slack app is mentioned in a message. ### Use Cases - **Slash commands**: Process commands from Slack messages - **Bot interactions**: Create interactive Slack bots - **Team workflows**: Trigger workflows from Slack conversations - **Notification processing**: Process and respond to mentions ### Configuration - **Channel**: Optional channel filter - if specified, only mentions in this channel will trigger (leave empty to listen to all channels) ### Event Data Each mention event includes: - **event**: Event information including message text, channel, and timestamp - **user**: User who mentioned the app - **channel**: Channel where the mention occurred - **text**: The message text containing the mention ### Setup This trigger automatically sets up a Slack event subscription when configured.
The subscription is managed by SuperPlane and will be cleaned up when the trigger is removed. ### Example Data ```json { "data": { "api_app_id": "A123ABC456", "authed_users": [ "U123ABC456", "U222222222" ], "event": { "channel": "C123ABC456", "event_ts": "1515449522000016", "text": "\u003c@U0LAN0Z89\u003e is it everything a river should be?", "ts": "1515449522.000016", "type": "app_mention", "user": "U061F7AUR" }, "event_id": "Ev123ABC456", "event_time": 123456789, "team_id": "T123ABC456", "token": "XXYYZZ", "type": "event_callback" }, "timestamp": "2026-01-19T12:00:00Z", "type": "slack.app.mention" } ``` ## Send Text Message The Send Text Message component sends a text message to a Slack channel. ### Use Cases - **Notifications**: Send notifications about workflow events or system status - **Alerts**: Alert teams about important events or errors - **Updates**: Provide status updates on long-running processes - **Team communication**: Automate team communications from workflows ### Configuration - **Channel**: Select the Slack channel to send the message to - **Text**: The message text to send (supports expressions and Slack markdown formatting) ### Output Returns metadata about the sent message including channel information. ### Notes - The Slack app must be installed and have permission to post to the selected channel - Supports Slack markdown formatting in message text - Messages are sent as the configured Slack bot user ### Example Output ```json { "data": { "channel": "C123456", "text": "Hello from SuperPlane", "ts": "1700000000.000100", "user": "U123456" }, "timestamp": "2026-01-16T17:56:16.680755501Z", "type": "slack.message.sent" } ``` ## Wait for Button Click The Wait for Button Click component sends a message to a Slack channel or DM with interactive buttons and waits for the user to click one of the configured buttons. 
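The button message this component posts can be sketched with Slack's Block Kit: a `chat.postMessage` call whose `blocks` contain an `actions` block of button elements. This is an illustrative approximation, not SuperPlane's exact payload; the `action_id` scheme is assumed:

```python
def build_button_message(channel, text, buttons):
    """Build chat.postMessage arguments with an actions block of 1-4 buttons.

    `buttons` is a list of (name, value) pairs, mirroring the component config.
    """
    if not 1 <= len(buttons) <= 4:
        raise ValueError("between 1 and 4 buttons are supported")
    elements = [
        {
            "type": "button",
            "text": {"type": "plain_text", "text": name},
            "value": value,
            "action_id": f"button_{value}",  # assumed naming scheme
        }
        for name, value in buttons
    ]
    return {
        "channel": channel,
        "text": text,  # fallback text shown in notifications
        "blocks": [
            {"type": "section", "text": {"type": "mrkdwn", "text": text}},
            {"type": "actions", "elements": elements},
        ],
    }

msg = build_button_message("C123456", "Deploy to production?",
                           [("Approve", "approve"), ("Reject", "reject")])
```

When a button is clicked, Slack sends an interaction payload containing the button's `value`, which is what the Received channel surfaces.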
### Use Cases - **Request approval or input**: Get structured input from a user in Slack before applying or deploying (e.g., Approve / Reject buttons) - **Pause a workflow**: Wait until a human selects an option (e.g., Confirm / Cancel) - **Implement slash-command style flows**: Create interactive flows that need a structured reply via buttons ### Configuration - **Channel**: Slack channel or DM channel name to post to (required) - **Message**: Message text (supports Slack formatting, required) - **Timeout**: Maximum time to wait in seconds (optional) - **Buttons**: Set of 1–4 items, each with name (label) and value (required) ### Output Channels - **Received**: Emits when the user clicks a button; payload includes the selected value and clicker info (when available) - **Timeout**: Emits when no button click is received within the configured timeout ### Behavior - The message is posted with interactive buttons - The workflow pauses until a button is clicked or timeout occurs - Only the first button click is processed; subsequent clicks are ignored - If timeout is not configured, the component waits indefinitely ### Notes - The Slack app must be installed and have permission to post to the selected channel - Supports Slack markdown formatting in message text - Button clicks are processed through Slack's interactive components API ### Example Output ```json { "data": { "clicked_at": "2026-02-10T21:00:00Z", "clicked_by": { "id": "U01234567", "username": "pedro" }, "value": "approve" }, "timestamp": "2026-02-10T21:00:00Z", "type": "slack.button.clicked" } ``` #### SMTP Source URL: https://docs.superplane.com/components/smtp Send emails via any SMTP server import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Actions ## Send Email The Send Email component sends emails through a configured SMTP server. 
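For comparison, the same kind of send can be sketched with Python's standard library; the host, port, credentials, and addresses below are placeholders:

```python
import smtplib
from email.message import EmailMessage

def build_email(to, subject, body, from_email, cc=None, reply_to=None, html=False):
    """Assemble the message; recipients are comma-separated strings as in the config."""
    msg = EmailMessage()
    msg["From"] = from_email
    msg["To"] = to
    msg["Subject"] = subject
    if cc:
        msg["Cc"] = cc
    if reply_to:
        msg["Reply-To"] = reply_to
    if html:
        # plain-text fallback plus an HTML alternative part
        msg.set_content("This message requires an HTML-capable client.")
        msg.add_alternative(body, subtype="html")
    else:
        msg.set_content(body)
    return msg

def send(msg, host="smtp.example.com", port=587, user=None, password=None):
    """Send over STARTTLS, mirroring a typical host/port/TLS/credentials setup."""
    with smtplib.SMTP(host, port) as server:
        server.starttls()
        if user:
            server.login(user, password)
        server.send_message(msg)

msg = build_email("recipient@example.com", "Hello from SuperPlane",
                  "<p>It worked.</p>", "sender@example.com", html=True)
```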
### Use Cases - **Notifications**: Send email notifications for workflow events - **Alerts**: Email alerts for errors or important events - **Reports**: Send automated reports via email - **User communications**: Send emails to users as part of workflows ### Configuration - **To**: Recipient email addresses (comma-separated for multiple recipients, supports expressions) - **CC**: Carbon copy recipients (optional, comma-separated) - **BCC**: Blind carbon copy recipients (optional, comma-separated) - **Subject**: Email subject line (supports expressions) - **Body**: Email body content (supports expressions and HTML) - **Is HTML**: Toggle to send HTML-formatted emails - **From Name**: Sender display name (optional, uses app default if not specified) - **From Email**: Sender email address (optional, uses app default if not specified) - **Reply To**: Reply-to email address (optional) ### SMTP Configuration The SMTP server must be configured in the application settings before using this component. Configure: - SMTP host and port - Authentication credentials - TLS/SSL settings - Default sender information ### Output Returns metadata about the sent email including recipients and subject. ### Example Output ```json { "data": { "cc": [], "fromEmail": "sender@example.com", "sentAt": "2025-01-21T12:00:00Z", "subject": "Hello from SuperPlane", "success": true, "to": [ "recipient@example.com" ] }, "timestamp": "2025-01-21T12:00:00.000000000Z", "type": "smtp.email.sent" } ``` #### Statuspage Source URL: https://docs.superplane.com/components/statuspage Create and manage incidents on your Atlassian Statuspage import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Actions ## Instructions To get your API key: Open your Statuspage, click the icon in the top-right corner, select API info, then create an API key. ## Create Incident The Create Incident component creates a new realtime or scheduled incident on your Atlassian Statuspage.
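The component wraps Statuspage's REST API. For orientation, a minimal Python sketch of the equivalent `POST /v1/pages/{page_id}/incidents` request; the page ID and API key are placeholders, and the component's actual request may include more fields:

```python
import json
import urllib.request

# Placeholders: both values come from your Statuspage account.
PAGE_ID = "kctbh9vrtdwd"
API_KEY = "your-api-key"

payload = {
    "incident": {
        "name": "Database Connection Issues",
        "status": "investigating",
        "body": "We are investigating reports of slow database queries.",
    }
}

req = urllib.request.Request(
    f"https://api.statuspage.io/v1/pages/{PAGE_ID}/incidents",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"OAuth {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would perform the call; not executed here.
```

A successful call returns the Incident object that the component surfaces as its output.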
### Use Cases - **Realtime incidents**: Create and notify subscribers when an unexpected outage occurs - **Scheduled maintenance**: Schedule maintenance windows with optional reminders and auto-transitions - **Integration workflows**: Create incidents from monitoring alerts or other workflow events ### Configuration - **Page** (required): The Statuspage to create the incident on. Supports expressions for workflow chaining (e.g. {{ $['Create Incident'].data.page_id }}). - **Incident type**: Realtime (active incident) or Scheduled (planned maintenance) - **Name** (required): Short title for the incident - **Body** (optional): Initial message shown as the first incident update - **Status** (realtime): investigating, identified, monitoring, or resolved - **Impact override** (realtime): none, minor, major, or critical - **Components** (optional): List of components and their status. Each item has Component ID (supports expressions) and Status (operational, degraded_performance, partial_outage, major_outage, under_maintenance) - **Scheduled For / Until** (scheduled): Start and end time for scheduled maintenance (ISO 8601, e.g. 2026-02-15T02:00) - **Scheduled timezone** (scheduled): Timezone for the scheduled times (default UTC). Output is converted to UTC for the API. - **Scheduled options** (scheduled): Remind prior, auto in-progress, auto completed - **Deliver notifications** (optional): Whether to send notifications for the initial update (default: true) ### Output Returns the full Statuspage Incident object from the API. The payload has structure { type, timestamp, data } where data is the incident. Common expression paths (use $['Node Name'].data. 
as prefix): - data.id, data.name, data.status, data.impact - data.shortlink — link to the incident - data.created_at, data.updated_at - data.components — array of affected components - data.incident_updates — array of update messages ### Example Output ```json { "data": { "auto_transition_deliver_notifications_at_end": null, "auto_transition_deliver_notifications_at_start": null, "auto_transition_to_maintenance_state": null, "auto_transition_to_operational_state": null, "components": [ { "automation_email": "component+example123...", "created_at": "2026-02-12T10:30:00.000Z", "description": null, "group": false, "group_id": null, "id": "8kbf7d35c070", "name": "API", "only_show_if_degraded": false, "page_id": "kctbh9vrtdwd", "position": 1, "showcase": true, "start_date": "2026-02-12", "status": "partial_outage", "updated_at": "2026-02-12T10:30:00.000Z" }, { "automation_email": "component+example456...", "created_at": "2026-02-12T10:30:00.000Z", "description": null, "group": false, "group_id": null, "id": "9kbf7d35c071", "name": "Management Portal", "only_show_if_degraded": false, "page_id": "kctbh9vrtdwd", "position": 2, "showcase": true, "start_date": "2026-02-12", "status": "degraded_performance", "updated_at": "2026-02-12T10:30:00.000Z" } ], "created_at": "2026-02-12T10:30:00.000Z", "id": "p31zjtct2jer", "impact": "major", "impact_override": "major", "incident_updates": [ { "affected_components": [ { "code": "8kbf7d35c070", "name": "API", "new_status": "partial_outage", "old_status": "partial_outage" }, { "code": "9kbf7d35c071", "name": "Management Portal", "new_status": "degraded_performance", "old_status": "degraded_performance" } ], "body": "We are investigating reports of slow database queries.", "created_at": "2026-02-12T10:30:00.000Z", "custom_tweet": null, "deliver_notifications": true, "display_at": "2026-02-12T10:30:00.000Z", "id": "upd1", "incident_id": "p31zjtct2jer", "status": "investigating", "tweet_id": null, "twitter_updated_at": null, "updated_at": 
"2026-02-12T10:30:00.000Z", "wants_twitter_update": false } ], "metadata": {}, "monitoring_at": null, "name": "Database Connection Issues", "page_id": "kctbh9vrtdwd", "postmortem_body": null, "postmortem_body_last_updated_at": null, "postmortem_ignored": false, "postmortem_notified_subscribers": false, "postmortem_notified_twitter": false, "postmortem_published_at": null, "reminder_intervals": null, "resolved_at": null, "scheduled_auto_completed": false, "scheduled_auto_in_progress": false, "scheduled_for": null, "scheduled_remind_prior": false, "scheduled_reminded_at": null, "scheduled_until": null, "shortlink": "https://stspg.io/p31zjtct2jer", "status": "investigating", "updated_at": "2026-02-12T10:30:00.000Z" }, "timestamp": "2026-02-12T10:30:00.000Z", "type": "statuspage.incident" } ``` ## Get Incident The Get Incident component fetches the full details of an existing incident on your Atlassian Statuspage. ### Use Cases - **Incident lookup**: Fetch incident details for processing or display - **Workflow automation**: Get incident information to make decisions in workflows - **Timeline enrichment**: Retrieve the incident timeline (incident_updates) for reporting or notifications - **Status checking**: Check incident status before performing actions ### Configuration - **Page** (required): The Statuspage containing the incident. Select from the dropdown, or switch to expression mode for workflow chaining (e.g. {{ $['Create Incident'].data.page_id }}). - **Incident** (required): Incident ID to fetch. Supports expressions for workflow chaining (e.g. {{ $['Create Incident'].data.id }}). ### Output Returns the full Statuspage Incident object from the API. The payload has structure { type, timestamp, data } where data is the incident. Common expression paths (use $['Node Name'].data. 
as prefix): - data.id, data.name, data.status, data.impact - data.shortlink — link to the incident - data.created_at, data.updated_at, data.resolved_at - data.components — array of affected components - data.incident_updates — array of update messages (timeline), in API order ### Example Output ```json { "data": { "auto_transition_deliver_notifications_at_end": null, "auto_transition_deliver_notifications_at_start": null, "auto_transition_to_maintenance_state": null, "auto_transition_to_operational_state": null, "components": [ { "automation_email": "component+example123...", "created_at": "2026-02-12T10:30:00.000Z", "description": null, "group": false, "group_id": null, "id": "8kbf7d35c070", "name": "API", "only_show_if_degraded": false, "page_id": "kctbh9vrtdwd", "position": 1, "showcase": true, "status": "partial_outage", "updated_at": "2026-02-12T10:30:00.000Z" } ], "created_at": "2026-02-12T10:30:00.000Z", "id": "p31zjtct2jer", "impact": "major", "impact_override": "major", "incident_updates": [ { "affected_components": null, "body": "We are investigating reports of slow database queries.", "created_at": "2026-02-12T10:30:00.000Z", "custom_tweet": null, "deliver_notifications": true, "display_at": "2026-02-12T10:30:00.000Z", "id": "upd1", "incident_id": "p31zjtct2jer", "status": "monitoring", "tweet_id": null, "twitter_updated_at": null, "updated_at": "2026-02-12T10:30:00.000Z", "wants_twitter_update": false }, { "affected_components": [ { "code": "8kbf7d35c070", "name": "API", "new_status": "degraded_performance", "old_status": "operational" } ], "body": "We have identified the problem and are working on a fix.", "created_at": "2026-02-12T10:30:00.000Z", "custom_tweet": null, "deliver_notifications": true, "display_at": "2026-02-12T10:30:00.000Z", "id": "upd2", "incident_id": "p31zjtct2jer", "status": "identified", "tweet_id": null, "twitter_updated_at": null, "updated_at": "2026-02-12T10:30:00.000Z", "wants_twitter_update": false } ], "metadata": {}, 
"monitoring_at": "2026-02-12T10:30:00.000Z", "name": "Database Connection Issues", "page_id": "kctbh9vrtdwd", "postmortem_body": null, "postmortem_body_last_updated_at": null, "postmortem_ignored": false, "postmortem_notified_subscribers": false, "postmortem_notified_twitter": false, "postmortem_published_at": null, "reminder_intervals": null, "resolved_at": null, "scheduled_auto_completed": false, "scheduled_auto_in_progress": false, "scheduled_for": null, "scheduled_remind_prior": false, "scheduled_reminded_at": null, "scheduled_until": null, "shortlink": "https://stspg.io/p31zjtct2jer", "status": "monitoring", "updated_at": "2026-02-12T10:30:00.000Z" }, "timestamp": "2026-02-12T10:30:00.000Z", "type": "statuspage.incident" } ``` ## Update Incident The Update Incident component updates an existing incident on your Atlassian Statuspage. ### Use Cases - **Status transitions**: Update incident status (e.g. investigating → identified → resolved) - **Maintenance updates**: Transition scheduled maintenance to in progress or completed - **Integration workflows**: Update incidents from monitoring systems or approval workflows ### Configuration - **Page** (required): The Statuspage containing the incident. Select from the dropdown, or switch to expression mode for workflow chaining (e.g. {{ $['Create Incident'].data.page_id }}). - **Incident** (required): Incident ID to update. Supports expressions for workflow chaining (e.g. {{ $['Create Incident'].data.id }}). - **Incident type**: Realtime or Scheduled — determines which status options are shown. You cannot change an incident's type. - **Status** (optional): New status. 
Options depend on incident type: realtime (investigating, identified, monitoring, resolved) or scheduled (scheduled, in progress, verifying, completed) - **Body** (optional): Update message shown as the latest incident update - **Impact override** (optional, realtime only): Override displayed severity (none, maintenance, minor, major, critical) - **Components** (optional): List of components and their status. Each item has Component ID (supports expressions) and Status (operational, degraded_performance, partial_outage, major_outage, under_maintenance) - **Deliver notifications** (optional): Whether to send notifications for this update (default: true) At least one of Status, Body, Impact override, or Components must be provided. ### Output Returns the full Statuspage Incident object from the API. The payload has structure { type, timestamp, data } where data is the incident. Common expression paths (use $['Node Name'].data. as prefix): - data.id, data.name, data.status, data.impact - data.shortlink — link to the incident - data.created_at, data.updated_at - data.components — array of affected components - data.incident_updates — array of update messages ### Example Output ```json { "data": { "auto_transition_deliver_notifications_at_end": null, "auto_transition_deliver_notifications_at_start": null, "auto_transition_to_maintenance_state": null, "auto_transition_to_operational_state": null, "components": [ { "automation_email": "component+example123...", "created_at": "2026-02-12T10:30:00.000Z", "description": null, "group": false, "group_id": null, "id": "8kbf7d35c070", "name": "API", "only_show_if_degraded": false, "page_id": "kctbh9vrtdwd", "position": 1, "showcase": true, "start_date": "2026-02-12", "status": "partial_outage", "updated_at": "2026-02-12T10:30:00.000Z" }, { "automation_email": "component+example456...", "created_at": "2026-02-12T10:30:00.000Z", "description": null, "group": false, "group_id": null, "id": "9kbf7d35c071", "name": "Management Portal",
"only_show_if_degraded": false, "page_id": "kctbh9vrtdwd", "position": 2, "showcase": true, "start_date": "2026-02-12", "status": "degraded_performance", "updated_at": "2026-02-12T10:30:00.000Z" } ], "created_at": "2026-02-12T10:30:00.000Z", "id": "p31zjtct2jer", "impact": "minor", "impact_override": "minor", "incident_updates": [ { "affected_components": null, "body": "We're tracking the problem.", "created_at": "2026-02-12T11:00:00.000Z", "custom_tweet": null, "deliver_notifications": true, "display_at": "2026-02-12T11:00:00.000Z", "id": "upd1", "incident_id": "p31zjtct2jer", "status": "monitoring", "tweet_id": null, "twitter_updated_at": null, "updated_at": "2026-02-12T11:00:00.000Z", "wants_twitter_update": false }, { "affected_components": [ { "code": "8kbf7d35c070", "name": "API", "new_status": "partial_outage", "old_status": "partial_outage" }, { "code": "9kbf7d35c071", "name": "Management Portal", "new_status": "degraded_performance", "old_status": "degraded_performance" } ], "body": "There is a problem.", "created_at": "2026-02-12T10:30:00.000Z", "custom_tweet": null, "deliver_notifications": true, "display_at": "2026-02-12T10:30:00.000Z", "id": "upd2", "incident_id": "p31zjtct2jer", "status": "investigating", "tweet_id": null, "twitter_updated_at": null, "updated_at": "2026-02-12T10:30:00.000Z", "wants_twitter_update": false } ], "metadata": {}, "monitoring_at": "2026-02-12T11:00:00.000Z", "name": "Database Connection Issues", "page_id": "kctbh9vrtdwd", "postmortem_body": null, "postmortem_body_last_updated_at": null, "postmortem_ignored": false, "postmortem_notified_subscribers": false, "postmortem_notified_twitter": false, "postmortem_published_at": null, "reminder_intervals": null, "resolved_at": null, "scheduled_auto_completed": false, "scheduled_auto_in_progress": false, "scheduled_for": null, "scheduled_remind_prior": false, "scheduled_reminded_at": null, "scheduled_until": null, "shortlink": "https://stspg.io/p31zjtct2jer", "status":
"monitoring", "updated_at": "2026-02-12T11:00:00.000Z" }, "timestamp": "2026-02-12T11:00:00.000Z", "type": "statuspage.incident" } ``` #### Telegram Source URL: https://docs.superplane.com/components/telegram Send messages and react to events via Telegram bots import { CardGrid, LinkCard } from "@astrojs/starlight/components"; ## Triggers ## Actions ## Instructions To set up Telegram integration: 1. Get a bot token from @BotFather and paste it in the field below 2. Disable privacy mode so the bot can receive messages in groups: send /setprivacy to @BotFather, select your bot, and choose Disable 3. Add the bot to your group or channel Note: if the bot was already in a group before disabling privacy mode, remove and re-add it for the change to take effect. ## On Mention The On Mention trigger starts a workflow execution when the Telegram bot is mentioned in a message. ### Use Cases - **Bot commands**: Process commands from Telegram messages - **Bot interactions**: Create interactive Telegram bots - **Team workflows**: Trigger workflows from Telegram conversations - **Notification processing**: Process and respond to mentions ### Configuration - **Chat ID**: Optional chat filter - if specified, only mentions in this chat will trigger (leave empty to listen to all chats) ### Event Data Each mention event includes: - **message_id**: The unique message identifier - **from**: User who mentioned the bot - **chat**: Chat where the mention occurred - **text**: The message text containing the mention - **date**: Unix timestamp of the message ### Setup This trigger automatically sets up a webhook subscription when configured. The subscription is managed by SuperPlane and will be cleaned up when the trigger is removed. 
### Example Data ```json { "data": { "chat": { "id": -9876543210, "title": "My Group", "type": "group" }, "date": 1768564800, "entities": [ { "length": 6, "offset": 0, "type": "mention" } ], "from": { "first_name": "John", "id": 111222333, "is_bot": false, "username": "johndoe" }, "message_id": 1234, "text": "@mybot hello!" }, "timestamp": "2026-01-16T12:00:00.000Z", "type": "telegram.message.mention" } ``` ## Send Message The Send Message component sends a text message to a Telegram chat. ### Use Cases - **Notifications**: Send notifications about workflow events or system status - **Alerts**: Alert teams about important events or errors - **Updates**: Provide status updates on long-running processes - **Bot interactions**: Send automated responses to users ### Configuration - **Chat ID**: The Telegram chat ID (can be a user, group, or channel) - **Text**: The message text to send - **Parse Mode**: Optional formatting mode (Markdown) ### Output Returns metadata about the sent message including message ID, chat ID, text, and timestamp. ### Notes - The bot must have permission to post messages in the specified chat - For groups and channels, add the bot as a member first - Use parse mode for rich text formatting in your messages - Chat ID can be negative for groups and channels ### Example Output ```json { "data": { "chat_id": -9876543210, "date": 1768564800, "message_id": 1234, "text": "Hello from SuperPlane" }, "timestamp": "2026-01-16T12:00:00.000Z", "type": "telegram.message.sent" } ``` ## Wait for Button Click The Wait for Button Click component sends a message to a Telegram chat with inline keyboard buttons and waits for the user to click one of the configured buttons.
### Use Cases - **Request approval or input**: Get structured input from a user in Telegram before applying or deploying (e.g., Approve / Reject buttons) - **Pause a workflow**: Wait until a human selects an option (e.g., Confirm / Cancel) - **Implement interactive flows**: Create interactive flows that need a structured reply via buttons ### Configuration - **Chat ID**: Telegram chat ID (user, group, or channel) to post to (required) - **Message**: Message text (required) - **Timeout**: Maximum time to wait in seconds (optional) - **Buttons**: Set of 1–4 items, each with name (label) and value (required) ### Output Channels - **Received**: Emits when the user clicks a button; payload includes the selected value and clicker info (when available) - **Timeout**: Emits when no button click is received within the configured timeout ### Behavior - The message is posted with inline keyboard buttons - The workflow pauses until a button is clicked or timeout occurs - Only the first button click is processed; subsequent clicks are ignored - If timeout is not configured, the component waits indefinitely ### Notes - The Telegram bot must be added to the chat and have permission to post messages - Button clicks are processed through Telegram's callback query API ### Example Output ```json { "data": { "clicked_at": "2026-02-10T21:00:00Z", "clicked_by": { "id": 123456789, "username": "john" }, "value": "approve" }, "timestamp": "2026-02-10T21:00:00Z", "type": "telegram.button.clicked" } ```
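The Telegram components above sit on the Telegram Bot API. For reference, a minimal Python sketch of the underlying `sendMessage` call; the bot token is a placeholder, and the components' actual requests may differ:

```python
import json
import urllib.request

BOT_TOKEN = "123456:ABC-placeholder"  # issued by @BotFather

payload = {
    "chat_id": -9876543210,           # negative IDs address groups/channels
    "text": "Hello from SuperPlane",
    "parse_mode": "Markdown",         # optional formatting mode
}

req = urllib.request.Request(
    f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send the message; not executed here.
```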