DigitalOcean
Manage and monitor your DigitalOcean infrastructure
Actions
Instructions
DigitalOcean Personal Access Token
Generate a DigitalOcean Personal Access Token and copy it.
- Token name: SuperPlane Integration
- Expiration: No expiry (or choose an appropriate expiration)
- Scopes: Full Access (or customize as needed)
Access Key (optional)
Only required for Spaces Object Storage components.
Create an Access Key ID & Secret Access Key and copy the generated pair.
- Scope: Full Access (all buckets) or Limited Access (specific buckets)
Note: The Personal Access Token and Secret Access Key are shown only once — store them somewhere safe before continuing.
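Outside SuperPlane, the token is sent as a Bearer header on every DigitalOcean API request. A minimal sketch for checking a token works (the token value is a placeholder, not a real credential):

```python
# Build the request headers the DigitalOcean REST API expects.
def auth_headers(token: str) -> dict:
    """Return headers for an authenticated DigitalOcean API request."""
    return {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }

# A GET to https://api.digitalocean.com/v2/account with these headers
# is a quick way to confirm the token is valid.
headers = auth_headers("dop_v1_example_token")
```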
Add Data Source
The Add Data Source component adds a new data source to an existing knowledge base on the DigitalOcean Gradient AI Platform.
How it works
Adds a single data source — either a Spaces bucket or a web/sitemap URL — to a knowledge base. When Index after adding is enabled (the default), the component also starts an indexing job and waits for it to complete before emitting the output.
Data Source Types
- Spaces Bucket or Folder — indexes all supported files in a DigitalOcean Spaces bucket or folder
- Web or Sitemap URL — crawls a public website (seed URL) or a list of URLs from a sitemap
Chunking Strategies
Each data source has its own independent chunking configuration:
- Section-based (default) — splits on structural elements like headings and paragraphs; fast and low-cost
- Semantic — groups sentences by meaning; slower but context-aware
- Hierarchical — creates parent (context) and child (retrieval) chunk pairs
- Fixed-length — splits strictly by token count; best for logs and unstructured text
Indexing
When Index after adding is enabled, the component starts an indexing job scoped only to the newly added data source and polls every 30 seconds until the job completes. Other existing data sources in the knowledge base are not re-indexed. Disable it if you want to add multiple data sources first and index them all at once using the Index Knowledge Base component.
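The wait behavior amounts to a simple poll loop. The sketch below stubs the status lookup; a real implementation would call the Gradient AI indexing-job API instead:

```python
import time

def wait_for_indexing(get_status, interval=30, timeout=1800, sleep=time.sleep):
    """Poll get_status() every `interval` seconds until the job completes."""
    waited = 0
    while waited <= timeout:
        if get_status() == "INDEX_JOB_STATUS_COMPLETED":
            return "INDEX_JOB_STATUS_COMPLETED"
        sleep(interval)
        waited += interval
    raise TimeoutError("indexing job did not complete within the timeout")
```

In a test, the sleep function can be stubbed out so the loop runs instantly.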
Output
Returns the added data source details:
- dataSourceUUID: UUID of the newly added data source
- knowledgeBaseUUID: UUID of the knowledge base
- knowledgeBaseName: Name of the knowledge base
When indexing is enabled, the output also includes:
- indexingJob: Full indexing job details (status, totalTokens, completedDataSources, totalDataSources, startedAt, finishedAt)
Example Output
{
  "data": {
    "dataSourceName": "https://example.com/",
    "dataSourceUUID": "e374bb5e-33e6-11f1-b074-4e01edede4",
    "indexingJob": {
      "completedDataSources": 1,
      "finishedAt": "2026-04-09T07:37:51Z",
      "startedAt": "2026-04-09T07:36:58Z",
      "status": "INDEX_JOB_STATUS_COMPLETED",
      "totalDataSources": 1,
      "totalTokens": "21"
    },
    "knowledgeBaseName": "ecommerce-knowledge-base",
    "knowledgeBaseUUID": "3b88fe18-31bb-11f1-b074-4e013hhjte4"
  },
  "timestamp": "2026-04-09T07:38:01.304486131Z",
  "type": "digitalocean.data_source.added"
}
Assign Reserved IP
The Assign Reserved IP component assigns or unassigns a DigitalOcean Reserved IP to a droplet.
Use Cases
- Blue/green deployments: Reassign a reserved IP to the new deployment with zero downtime
- Failover: Quickly reassign a reserved IP from a failed droplet to a healthy replacement
- Maintenance: Temporarily unassign a reserved IP while a droplet is being serviced
Configuration
- Reserved IP: The reserved IP address to manage (required)
- Action: The operation to perform: assign or unassign (required)
- Droplet ID: The target droplet for the assignment (required when action is assign)
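These rules map directly onto the request body of DigitalOcean's reserved IP actions endpoint (POST /v2/reserved_ips/{reserved_ip}/actions). A hypothetical helper sketching that mapping:

```python
def reserved_ip_action(action: str, droplet_id=None) -> dict:
    """Build the action body for POST /v2/reserved_ips/{reserved_ip}/actions."""
    if action == "assign":
        if droplet_id is None:
            raise ValueError("Droplet ID is required when action is assign")
        return {"type": "assign", "droplet_id": droplet_id}
    if action == "unassign":
        # Any Droplet ID is ignored for unassign
        return {"type": "unassign"}
    raise ValueError(f"unsupported action: {action}")
```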
Output
Returns the action result including:
- id: Action ID
- status: Final action status (completed)
- type: Type of action performed (assign or unassign)
- started_at: When the action started
- completed_at: When the action completed
- resource_id: Reserved IP resource identifier
Important Notes
- The component polls until the action completes
- For assign, the reserved IP will be unassigned from any current droplet first
- For unassign, the Droplet ID field is ignored
Example Output
{
  "data": {
    "completed_at": "2026-03-13T10:10:05Z",
    "id": 2048576123,
    "region_slug": "nyc3",
    "resource_id": 2335912909,
    "resource_type": "floating_ip",
    "started_at": "2026-03-13T10:10:00Z",
    "status": "completed",
    "type": "assign_ip"
  },
  "timestamp": "2026-03-13T10:10:08.000000000Z",
  "type": "digitalocean.reservedip.assign"
}
Attach Knowledge Base
The Attach Knowledge Base component connects a knowledge base to an existing Gradient AI agent, enabling the agent to use it for retrieval-augmented generation (RAG).
Use Cases
- Post-creation wiring: After creating a new knowledge base, attach it to an agent to make it immediately available
- Blue/green KB deployment: Attach a newly indexed knowledge base to an agent as part of a promotion pipeline
- Multi-KB agents: Add additional knowledge bases to an agent that already has others attached
Configuration
- Agent: The agent to attach the knowledge base to (required)
- Knowledge Base: The knowledge base to attach — only shows knowledge bases not already attached to the selected agent (required)
Output
Returns confirmation of the attachment including:
- agentUUID: UUID of the agent
- knowledgeBaseUUID: UUID of the attached knowledge base
Example Output
{
  "data": {
    "agentUUID": "20cd8434-6ea1-11f0-bf8f-4e013e2ddde4",
    "knowledgeBaseUUID": "a1b2c3d4-0000-0000-0000-000000000001"
  },
  "timestamp": "2025-01-01T00:00:00Z",
  "type": "digitalocean.knowledge_base.attached"
}
Copy Object
The Copy Object component copies an object from one location to another within DigitalOcean Spaces. When Delete Source is enabled, the source object is deleted after a successful copy, effectively moving the object.
Prerequisites
Spaces Access Key ID and Secret Access Key must be configured in the DigitalOcean integration settings.
Configuration
- Source Bucket: The bucket containing the object to copy
- Source File Path: The path to the source object (e.g. reports/daily.csv). Supports expressions.
- Destination Bucket: The bucket to copy the object into (can be the same bucket)
- Destination File Path: The path for the copied object (e.g. archive/2026/daily.csv). Supports expressions.
- Visibility: Access control for the destination object — Private (default) or Public
- Delete Source: When enabled, the source object is deleted after a successful copy (move operation)
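Because Spaces is S3-compatible, the same operation can be sketched as an S3-style CopyObject call. The helper below only builds the parameters for a client such as boto3; the bucket and path names are illustrative:

```python
def copy_object_params(src_bucket, src_path, dst_bucket, dst_path, public=False):
    """Build CopyObject parameters for an S3-compatible client pointed at Spaces."""
    return {
        "CopySource": {"Bucket": src_bucket, "Key": src_path},
        "Bucket": dst_bucket,
        "Key": dst_path,
        "ACL": "public-read" if public else "private",
    }

# When Delete Source is enabled, a follow-up DeleteObject on
# (src_bucket, src_path) after a successful copy turns this into a move.
params = copy_object_params("my-company-assets", "reports/daily.csv",
                            "my-company-archive", "archive/2026/daily.csv")
```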
Output
- sourceBucket: The source bucket name
- sourceFilePath: The source object path
- destinationBucket: The destination bucket name
- destinationFilePath: The destination object path
- endpoint: The full Spaces URL of the copied object
- eTag: MD5 hash of the copied object
- moved: true if the source was deleted, false otherwise
Important Notes
- Both buckets must be in the same region
- Copying an object to the same path overwrites it
- Metadata and tags are copied from the source object by default
Use Cases
- Archiving: Move processed files to an archive bucket or folder
- Promotion: Copy an artifact from a staging bucket to production
- Backup: Duplicate a file before modifying it
- Renaming: Move a file to a new path within the same bucket
Example Output
{
  "data": {
    "destinationBucket": "my-company-archive",
    "destinationFilePath": "archive/2026/daily.csv",
    "eTag": "a1b2c3d4ef567890a1b2c3d4ef567890",
    "endpoint": "https://my-company-archive.fra1.digitaloceanspaces.com/archive/2026/daily.csv",
    "moved": false,
    "sourceBucket": "my-company-assets",
    "sourceFilePath": "reports/daily.csv"
  },
  "timestamp": "2026-03-25T09:00:00Z",
  "type": "digitalocean.spaces.object.copied"
}
Create Alert Policy
The Create Alert Policy component creates a monitoring alert policy that triggers notifications when droplet metrics cross defined thresholds.
Note: Monitoring is only available for droplets that had monitoring enabled during creation. Droplets created without monitoring will not report metrics or trigger alerts.
Use Cases
- Capacity management: Get notified when CPU or memory usage consistently exceeds a safe operating level
- Performance monitoring: Detect and respond to high load averages or network saturation
- Automated workflows: Chain downstream actions when infrastructure metrics breach limits
Configuration
- Description: Human-readable name for the alert policy (required)
- Metric Type: The droplet metric to monitor, such as CPU Usage or Memory Usage (required)
- Comparison: Alert when the value is GreaterThan or LessThan the threshold (required)
- Threshold Value: The numeric threshold that triggers the alert (required)
- Evaluation Window: The rolling time window over which the metric is averaged (required)
- Droplets: Specific droplets to scope the policy to (optional)
- Tags: Monitor all droplets with matching tags (optional)
- Enabled: Whether the alert policy is immediately active (default: true)
- Email Notifications: Email addresses to notify when the alert fires (optional)
- Slack Channel: Slack channel to post alerts to, e.g. #alerts (optional)
- Slack Webhook URL: Incoming webhook URL for the Slack workspace (required when Slack Channel is set)
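These fields correspond to the request body of DigitalOcean's monitoring API (POST /v2/monitoring/alerts), whose field names also appear in the example output below. A hypothetical builder that enforces the notification-channel rules:

```python
def alert_policy_payload(description, metric_type, compare, value, window,
                         droplet_ids=(), tags=(), emails=(), slack_channel=None,
                         slack_url=None, enabled=True):
    """Build the request body for POST /v2/monitoring/alerts."""
    if not emails and not slack_channel:
        raise ValueError("at least one notification channel (email or Slack) is required")
    if bool(slack_channel) != bool(slack_url):
        raise ValueError("Slack Channel and Slack Webhook URL must be provided together")
    alerts = {"email": list(emails), "slack": []}
    if slack_channel:
        alerts["slack"] = [{"channel": slack_channel, "url": slack_url}]
    return {
        "description": description,
        "type": metric_type,          # e.g. "v1/insights/droplet/cpu"
        "compare": compare,           # "GreaterThan" or "LessThan"
        "value": value,
        "window": window,             # e.g. "5m"
        "entities": [str(d) for d in droplet_ids],
        "tags": list(tags),
        "enabled": enabled,
        "alerts": alerts,
    }
```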
Output
Returns the created alert policy including:
- uuid: Alert policy UUID for use in Get/Delete operations
- description: Human-readable description
- type: Metric type being monitored
- compare: Comparison operator (GreaterThan/LessThan)
- value: Threshold value
- window: Evaluation window
- enabled: Whether the policy is active
- alerts: Configured notification channels (email and/or Slack)
Important Notes
- At least one notification channel (email or Slack) is required
- Slack Channel and Slack Webhook URL must be provided together
- Scoping by Droplets and Tags is independent — you can use either, both, or neither (neither applies the policy to all droplets)
Example Output
{
  "data": {
    "alerts": {
      "email": [ "sammy@digitalocean.com" ]
    },
    "compare": "GreaterThan",
    "description": "High CPU Usage",
    "enabled": true,
    "entities": [ "558899681" ],
    "tags": [],
    "type": "v1/insights/droplet/cpu",
    "uuid": "ffcaf816-f6a5-4b4a-b4c4-e84532755e82",
    "value": 20,
    "window": "5m"
  },
  "timestamp": "2026-03-18T09:29:00.308296519Z",
  "type": "digitalocean.alertpolicy.created"
}
Create App
The Create App component provisions a new application on DigitalOcean’s App Platform from a GitHub, GitLab, or Bitbucket repository. The component requires that you have connected your Git provider in your DigitalOcean account and granted access to the repository you want to deploy. You can do so by creating a sample app in the DigitalOcean control panel, as illustrated here: https://docs.digitalocean.com/products/app-platform/getting-started/deploy-sample-apps/
Use Cases
- Deploy web services: Provision web services and APIs with configurable instance sizes and HTTP ports
- Deploy static sites: Host static websites and single-page applications with custom build and output directories
- Deploy workers: Run background workers for processing tasks
- Deploy jobs: Run one-off or scheduled jobs (pre-deploy, post-deploy, or failed-deploy)
- Automated provisioning: Create app instances as part of infrastructure automation workflows
- Multi-environment setup: Deploy separate app instances for dev, staging, and production
Configuration
- Name: The name for the app (required)
- Region: The region to deploy the app in (required)
- Component Type: The type of component - Service, Static Site, Worker, or Job (required, defaults to Service)
- Source Provider: The source code provider - GitHub, GitLab, or Bitbucket (required)
- Repository: The repository in owner/repo format (required, shown based on selected provider)
- Branch: The branch to deploy from (defaults to “main”, shown based on selected provider)
- Deploy on Push: Automatically deploy when code is pushed to the branch (default: true)
- Environment Slug: The runtime environment/buildpack (e.g., go, node-js, python, html)
- Build Command: Custom build command (e.g., npm install && npm run build)
- Run Command: Custom run command for services, workers, and jobs (e.g., npm start)
- Source Directory: Path to the source code within the repository (defaults to /)
- HTTP Port: The port the service listens on (services only)
- Instance Size: The instance size slug (e.g., apps-s-1vcpu-1gb) for services, workers, and jobs
- Instance Count: Number of instances to run (services, workers, and jobs)
- Output Directory: Build output directory for static sites (e.g., build, dist, public)
- Index Document: Index document for static sites (defaults to index.html)
- Error Document: Custom error document for static sites (e.g., 404.html)
- Catchall Document: Catchall document for single-page applications (e.g., index.html)
- Environment Variables: Key-value pairs for environment variables (optional)
Ingress Configuration
- Ingress Path: Path prefix for routing traffic to the component (e.g., /api for services, / for static sites)
- CORS Allow Origins: Origins allowed for Cross-Origin Resource Sharing (e.g., https://example.com)
- CORS Allow Methods: HTTP methods allowed for CORS requests (e.g., GET, POST, PUT)
Database Configuration
- Add Database: Attach a database to the app
- Database Component Name: Name used to reference the database in env vars (e.g., ${db.DATABASE_URL})
- Database Engine: PostgreSQL, MySQL, Redis, or MongoDB
- Database Version: Engine version (e.g., 16 for PostgreSQL)
- Use Managed Database: Connect to an existing DigitalOcean Managed Database cluster instead of a dev database
- Database Cluster Name: Name of the existing managed database cluster (required for managed databases)
- Database Name / User: Optional database name and user for managed database connections
VPC Configuration
- VPC: ID of the VPC to deploy into. Apps in a VPC can communicate with other resources over the private network.
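Under the hood, App Platform apps are described by a declarative app spec (the request body for POST /v2/apps). A sketch of a minimal one-service spec, assuming a single GitHub-backed web service; the component name "web" and default values are illustrative:

```python
def minimal_app_spec(name, region, repo, branch="main", http_port=8080,
                     instance_size="apps-s-1vcpu-1gb", instance_count=1):
    """Build a minimal one-service App Platform spec (POST /v2/apps body)."""
    return {
        "spec": {
            "name": name,
            "region": region,
            "services": [{
                "name": "web",  # illustrative component name
                "github": {"repo": repo, "branch": branch, "deploy_on_push": True},
                "http_port": http_port,
                "instance_count": instance_count,
                "instance_size_slug": instance_size,
            }],
        }
    }

spec = minimal_app_spec("my-app", "blr", "owner/repo")
```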
Output
Returns the created app including:
- id: The unique app ID
- name: The app name
- default_ingress: The default ingress URL
- live_url: The live URL for the app
- region: The region where the app is deployed
- active_deployment: Information about the active deployment
Important Notes
- The app will be created with a single component of the selected type
- Deployments are asynchronous and may take several minutes to complete
- The component emits an output once the deployment reaches ACTIVE status
- If the deployment fails, the component will report the failure
- Dev databases are free and suitable for development; use managed databases for production
- Use bindable variables (e.g., ${db.DATABASE_URL}) to reference database connection details in environment variables
Example Output
{
  "data": {
    "defaultIngress": "https://my-app-22-b6v8c.ondigitalocean.app",
    "id": "6d8abe1c-7cc5-4db3-b1aa-d9cdd1c127e7",
    "liveURL": "https://my-app-22-b6v8c.ondigitalocean.app",
    "name": "my-app-22",
    "region": {
      "continent": "Asia",
      "data_centers": [ "blr1" ],
      "flag": "india",
      "label": "Bangalore",
      "slug": "blr"
    }
  },
  "timestamp": "2026-03-24T07:01:24.678254095Z",
  "type": "digitalocean.app.created"
}
Create DNS Record
The Create DNS Record component creates a new DNS record for a domain managed by DigitalOcean.
Use Cases
- Service discovery: Add A or CNAME records when provisioning new services
- Email routing: Create MX records for custom mail delivery
- Verification: Add TXT records for domain ownership verification
- Subdomain management: Dynamically create subdomains as part of provisioning workflows
Configuration
- Domain: The DigitalOcean-managed domain to add the record to (required)
- Type: The DNS record type (required): A, AAAA, CNAME, MX, NS, TXT, SRV, CAA
- Name: The subdomain name for the record (required, use @ for root)
- Data: The record value, e.g. an IP address or hostname (required, supports expressions)
- TTL: Time-to-live in seconds (optional, defaults to 1800)
- Priority: Record priority for MX/SRV records (optional)
- Port: Port number for SRV records (optional)
- Weight: Weight for SRV records (optional)
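These fields correspond to the request body of DigitalOcean's domain records endpoint (POST /v2/domains/{domain}/records). A hypothetical builder that includes the SRV/MX-only fields only when they are set; the SRV target below is illustrative:

```python
def dns_record_payload(record_type, name, data, ttl=1800,
                       priority=None, port=None, weight=None):
    """Build the request body for POST /v2/domains/{domain}/records."""
    body = {"type": record_type, "name": name, "data": data, "ttl": ttl}
    for key, value in (("priority", priority), ("port", port), ("weight", weight)):
        if value is not None:
            body[key] = value
    return body

srv = dns_record_payload("SRV", "_sip._tcp", "sip.example.com.",
                         priority=1, port=8000, weight=1)
```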
Output
Returns the created DNS record including:
- id: Record ID
- type: Record type
- name: Subdomain name
- data: Record value
- ttl: Time-to-live
- priority: Priority (for MX/SRV)
- port: Port (for SRV)
- weight: Weight (for SRV)
Example Output
{
  "data": {
    "data": "167.71.224.221.felixgateru2.com",
    "id": 1812548333,
    "name": "_sip._tcp",
    "port": 8000,
    "priority": 1,
    "ttl": 1800,
    "type": "SRV",
    "weight": 1
  },
  "timestamp": "2026-03-16T09:50:03.222782653Z",
  "type": "digitalocean.dns.record.created"
}
Create Database
The Create Database component adds a new database to an existing DigitalOcean Managed Database cluster.
Use Cases
- Application bootstrap: Create an application-specific database as part of environment setup
- Tenant provisioning: Add a dedicated database for a new customer or workspace
- Migration workflows: Prepare a destination database before importing data
Configuration
- Database Cluster: The managed database cluster that will contain the new database (required)
- Database Name: The name of the database to create (required, supports expressions)
Output
Returns the created database including:
- name: The created database name
- databaseClusterId: The cluster UUID
- databaseClusterName: The cluster name
Important Notes
- If you use custom token scopes, this action requires database:create and database:read
- Database management is not supported for Caching or Valkey clusters
Example Output
{
  "data": {
    "databaseClusterId": "9cc10173-e9ea-4176-9dbc-a4cee4c4ff30",
    "databaseClusterName": "primary-postgres",
    "name": "app_db"
  },
  "timestamp": "2026-03-27T09:15:00Z",
  "type": "digitalocean.database.created"
}
Create Database Cluster
The Create Database Cluster component provisions a new DigitalOcean Managed Database cluster and waits until it is online.
Use Cases
- Environment bootstrap: Provision a managed database cluster before creating apps or databases
- Platform setup: Create a dedicated cluster for a service, team, or customer environment
- Migration workflows: Stand up a new cluster before importing data or cutover
Configuration
- Name: The database cluster name (required)
- Engine: The database engine to provision, such as PostgreSQL or MySQL (required)
- Version: The engine version to provision (required)
- Region: The DigitalOcean region for the cluster (required)
- Size: The node size slug for the cluster, for example db-s-1vcpu-1gb (required)
- Node Count: The number of nodes in the cluster (required)
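These fields map onto the request body of DigitalOcean's database endpoint (POST /v2/databases). A hypothetical builder, using the API's short engine slugs:

```python
def database_cluster_payload(name, engine, version, region, size, num_nodes):
    """Build the request body for POST /v2/databases."""
    return {
        "name": name,
        "engine": engine,      # API slug, e.g. "pg" for PostgreSQL, "mysql" for MySQL
        "version": version,
        "region": region,
        "size": size,          # node size slug, e.g. "db-s-1vcpu-1gb"
        "num_nodes": num_nodes,
    }

payload = database_cluster_payload("superplane-db", "pg", "18", "nyc1",
                                   "db-s-1vcpu-1gb", 1)
```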
Output
Returns the created database cluster including:
- id: The cluster UUID
- name: The cluster name
- engine: The provisioned engine
- version: The engine version
- region: The cluster region
- size: The selected node size slug
- num_nodes: The number of nodes
- status: The current cluster status
- connection: Connection information when available
Important Notes
- If you use custom token scopes, this action requires database:create and database:read
- Valid versions, sizes, and node counts depend on the selected engine. Use the DigitalOcean Database Options API or dashboard values when configuring this component
- The component polls until the cluster status becomes online
Example Output
{
  "data": {
    "connection": {
      "host": "superplane-db-do-user-123456-0.j.db.ondigitalocean.com",
      "port": 25060,
      "ssl": true,
      "uri": "postgres://doadmin:[email protected]:25060/defaultdb?sslmode=require",
      "user": "doadmin"
    },
    "created_at": "2026-03-27T13:00:00Z",
    "engine": "pg",
    "id": "65b497a5-1674-4b1a-a122-01aebe761ef7",
    "name": "superplane-db",
    "num_nodes": 1,
    "private_network_uuid": "7e6d2691-182b-4dd1-8452-529f88feb996",
    "region": "nyc1",
    "size": "db-s-1vcpu-1gb",
    "status": "online",
    "version": "18.0"
  },
  "timestamp": "2026-03-27T13:00:05Z",
  "type": "digitalocean.database.cluster.created"
}
Create Droplet
The Create Droplet component creates a new droplet in DigitalOcean.
Use Cases
- Infrastructure provisioning: Automatically provision droplets from workflow events
- Scaling: Create new instances in response to load or alerts
- Environment setup: Spin up droplets for testing or staging environments
Configuration
- Name: The hostname for the droplet (required, supports expressions)
- Region: Region slug where the droplet will be created (required)
- Size: Size slug for the droplet (required)
- Image: Image slug or ID for the droplet OS (required)
- SSH Keys: SSH keys to add to the droplet. Must have been added to the DigitalOcean team. (optional)
- Tags: Tags to apply to the droplet (optional)
- User Data: Cloud-init user data script (optional)
- Backups: Enable automated backups for the droplet (optional)
- IPv6: Enable IPv6 networking on the droplet (optional)
- Monitoring: Enable DigitalOcean monitoring agent on the droplet (optional)
- VPC UUID: UUID of the VPC to create the droplet in (optional)
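These fields correspond to the request body of DigitalOcean's droplet creation endpoint (POST /v2/droplets). A hypothetical builder that omits the optional fields when unset:

```python
def droplet_payload(name, region, size, image, ssh_keys=(), tags=(),
                    user_data=None, backups=False, ipv6=False,
                    monitoring=False, vpc_uuid=None):
    """Build the request body for POST /v2/droplets."""
    body = {
        "name": name,
        "region": region,
        "size": size,
        "image": image,
        "ssh_keys": list(ssh_keys),
        "tags": list(tags),
        "backups": backups,
        "ipv6": ipv6,
        "monitoring": monitoring,
    }
    if user_data is not None:
        body["user_data"] = user_data
    if vpc_uuid is not None:
        body["vpc_uuid"] = vpc_uuid
    return body

payload = droplet_payload("my-droplet", "nyc3", "s-1vcpu-1gb",
                          "ubuntu-24-04-x64", tags=["web"])
```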
Output
Returns the created droplet object including:
- id: Droplet ID
- name: Droplet hostname
- status: Current droplet status
- region: Region information
- networks: Network information including IP addresses
Example Output
{
  "data": {
    "disk": 25,
    "id": 98765432,
    "image": {
      "id": 12345,
      "name": "Ubuntu 24.04 (LTS) x64",
      "slug": "ubuntu-24-04-x64"
    },
    "memory": 1024,
    "name": "my-droplet",
    "networks": {
      "v4": [ { "ip_address": "104.131.186.241", "type": "public" } ]
    },
    "region": { "name": "New York 3", "slug": "nyc3" },
    "size_slug": "s-1vcpu-1gb",
    "status": "new",
    "tags": [ "web" ],
    "vcpus": 1
  },
  "timestamp": "2026-03-12T21:10:00.000000000Z",
  "type": "digitalocean.droplet.created"
}
Create Knowledge Base
The Create Knowledge Base component creates a new knowledge base on the DigitalOcean Gradient AI Platform, ready for use with AI agents via retrieval-augmented generation (RAG).
How it works
A knowledge base converts your data sources into vector embeddings using the selected embedding model. Those embeddings are stored in an OpenSearch database — either a newly provisioned one or one you already have. Once created, the knowledge base can be attached to any Gradient AI agent.
Data Sources
You can add multiple data sources of different types:
- Spaces Bucket or Folder — indexes all supported files in a DigitalOcean Spaces bucket or folder
- Web or Sitemap URL — crawls a public website (seed URL) or a list of URLs from a sitemap
Each data source has its own independent chunking strategy.
Chunking Strategies
- Section-based (default) — splits on structural elements like headings and paragraphs; fast and low-cost
- Semantic — groups sentences by meaning; slower but context-aware
- Hierarchical — creates parent (context) and child (retrieval) chunk pairs
- Fixed-length — splits strictly by token count; best for logs and unstructured text
OpenSearch Database
The knowledge base requires an OpenSearch database to store the vector embeddings:
- Create new — provisions a new database automatically sized to your data
- Use existing — connects to a database you already have by providing its ID
Output
Returns the created knowledge base including:
- uuid: Knowledge base UUID for use in downstream components
- name: Name of the knowledge base
- region: Datacenter region
- embeddingModelUUID: UUID of the embedding model used
- projectId: Associated project ID
- databaseId: UUID of the OpenSearch database (populated after provisioning completes for new databases)
- createdAt: Creation timestamp
Example Output
{
  "data": {
    "createdAt": "2025-01-01T00:00:00Z",
    "databaseId": "abf1055a-745d-4c24-a1db-1959ea819264",
    "embeddingModelUUID": "05700391-7aa8-11ef-bf8f-4e013e2ddde4",
    "name": "my-knowledge-base",
    "projectId": "37455431-84bd-4fa2-94cf-e8486f8f8c5e",
    "region": "tor1",
    "tags": [ "docs", "production" ],
    "uuid": "20cd8434-6ea1-11f0-bf8f-4e013e2ddde4"
  },
  "timestamp": "2025-01-01T00:00:00Z",
  "type": "digitalocean.knowledge_base.created"
}
Create Load Balancer
The Create Load Balancer component creates a new load balancer in DigitalOcean and waits until it is active.
Use Cases
- Traffic distribution: Distribute incoming requests across multiple droplets
- High availability: Ensure zero-downtime deployments by routing traffic across instances
- Scalable infrastructure: Provision load balancers as part of automated environment setup
Configuration
- Name: The name of the load balancer (required, only letters, numbers, and hyphens)
- Region: Region where the load balancer will be created (required)
- Forwarding Rules: One or more forwarding rules specifying entry/target protocol, port, and optional TLS passthrough (required)
- Droplets: The droplets to add as targets — must be in the same region as the load balancer (optional, mutually exclusive with Tag)
- Tag: Tag used to dynamically target droplets (optional, mutually exclusive with Droplets)
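These fields map onto the request body of DigitalOcean's load balancer endpoint (POST /v2/load_balancers). A hypothetical builder that also enforces the droplets/tag mutual exclusion described below:

```python
def load_balancer_payload(name, region, forwarding_rules, droplet_ids=None, tag=None):
    """Build the request body for POST /v2/load_balancers."""
    if droplet_ids and tag:
        raise ValueError("specify either Droplets or Tag, not both")
    body = {"name": name, "region": region,
            "forwarding_rules": list(forwarding_rules)}
    if droplet_ids:
        body["droplet_ids"] = list(droplet_ids)
    if tag:
        body["tag"] = tag
    return body

rule = {"entry_protocol": "http", "entry_port": 80,
        "target_protocol": "http", "target_port": 80}
payload = load_balancer_payload("my-load-balancer", "nyc3", [rule],
                                droplet_ids=[98765432, 98765433])
```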
Output
Returns the created load balancer object including:
- id: Load balancer ID (UUID)
- name: Load balancer name
- ip: Assigned public IP address
- status: Current status (active)
- region: Region information
- forwarding_rules: Configured forwarding rules
- droplet_ids: Targeted droplet IDs
Important Notes
- The component polls until the load balancer status becomes active
- Specify either Droplet IDs or Tag to define targets, not both
- The load balancer name must contain only letters, numbers, and hyphens
- All specified droplets must be in the same region as the load balancer
Example Output
{
  "data": {
    "algorithm": "round_robin",
    "created_at": "2026-03-13T10:00:00Z",
    "droplet_ids": [ 98765432, 98765433 ],
    "forwarding_rules": [
      {
        "entry_port": 80,
        "entry_protocol": "http",
        "target_port": 80,
        "target_protocol": "http"
      }
    ],
    "id": "4de7ac8b-495b-4884-9a69-1050c6793cd6",
    "ip": "104.131.186.241",
    "name": "my-load-balancer",
    "region": { "name": "New York 3", "slug": "nyc3" },
    "status": "active",
    "tag": ""
  },
  "timestamp": "2026-03-13T10:01:15.000000000Z",
  "type": "digitalocean.loadbalancer.created"
}
Create Snapshot
The Create Snapshot component creates a point-in-time snapshot of a DigitalOcean Droplet.
Use Cases
- Backup: Create a backup before performing risky operations on a droplet
- Image creation: Create a custom image from an existing droplet for reuse
- Migration: Snapshot a droplet before migrating to a different region or size
Configuration
- Droplet: The ID of the droplet to snapshot (required)
- Name: A human-readable name for the snapshot (required)
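Snapshots are created through DigitalOcean's droplet actions endpoint (POST /v2/droplets/{droplet_id}/actions). A sketch of the action body, with an illustrative snapshot name:

```python
def snapshot_action(name: str) -> dict:
    """Build the droplet action body for POST /v2/droplets/{droplet_id}/actions."""
    return {"type": "snapshot", "name": name}

action = snapshot_action("pre-upgrade-backup")
```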
Output
Returns the snapshot details including:
- id: Snapshot ID
- name: Snapshot name
- created_at: When the snapshot was created
- resource_id: The ID of the droplet that was snapshotted
- regions: Regions where the snapshot is available
- min_disk_size: Minimum disk size required to use this snapshot
- size_gigabytes: Size of the snapshot in GB
Example Output
{
  "data": {
    "created_at": "2026-03-13T13:35:43Z",
    "id": 220464921,
    "min_disk_size": 10,
    "name": "superplane-1773328904",
    "regions": [ "blr1" ],
    "resource_id": "98145763",
    "resource_type": "droplet",
    "size_gigabytes": 2.04
  },
  "timestamp": "2026-03-13T13:36:14.803060936Z",
  "type": "digitalocean.snapshot.created"
}
Delete Alert Policy
The Delete Alert Policy component permanently removes a monitoring alert policy from your DigitalOcean account.
Use Cases
- Cleanup: Remove alert policies that are no longer needed
- Policy rotation: Delete old policies as part of a replace workflow
- Automated teardown: Remove monitoring policies when decommissioning environments
Configuration
- Alert Policy: The alert policy to delete (required, supports expressions)
Output
Returns information about the deleted policy:
- alertPolicyUuid: The UUID of the alert policy that was deleted
Important Notes
- This operation is permanent and cannot be undone
- If the policy does not exist (already deleted), the component completes successfully (idempotent)
Example Output
{
  "data": {
    "alertPolicyUuid": "669adfc8-d72b-4d2d-80ed-bea78d6e1562"
  },
  "timestamp": "2026-03-17T10:00:00Z",
  "type": "digitalocean.alertpolicy.deleted"
}
Delete App
The Delete App component removes a DigitalOcean App Platform application.
Use Cases
- Cleanup: Remove applications that are no longer needed
- Environment teardown: Delete temporary or test app instances
- Resource management: Free up resources by deleting unused apps
Configuration
- App: The app to delete (required)
Output
Returns confirmation of the deleted app including:
- appId: The ID of the deleted app
Important Notes
- This operation is idempotent: deleting an already deleted app will succeed
- All deployments and associated resources will be removed
- This action cannot be undone
Example Output
{
  "data": {
    "appId": "20e27025-f9c1-4da4-bfc4-00a13eb9ff42"
  },
  "timestamp": "2026-03-24T05:59:24.092664924Z",
  "type": "digitalocean.app.deleted"
}
Delete DNS Record
The Delete DNS Record component permanently removes a DNS record from a DigitalOcean-managed domain.
Use Cases
- Cleanup: Remove DNS records for decommissioned services
- Rotation: Delete old records as part of a DNS rotation workflow
- Automated teardown: Remove service discovery records when tearing down infrastructure
Configuration
- Domain: The DigitalOcean-managed domain containing the record (required)
- Record ID: The ID of the DNS record to delete (required, supports expressions)
Output
Returns information about the deleted record:
- recordId: The ID of the record that was deleted
- domain: The domain the record belonged to
Important Notes
Section titled “Important Notes”- This operation is permanent and cannot be undone
- Deleting a record that does not exist is treated as a success (idempotent)
- Record IDs can be obtained from the output of createDNSRecord or upsertDNSRecord
Example Output
Section titled “Example Output”{ "data": { "deleted": true, "domain": "example.com", "recordId": 12345678 }, "timestamp": "2026-03-13T10:05:00.000000000Z", "type": "digitalocean.dns.record.deleted"}Delete Data Source
Section titled “Delete Data Source”The Delete Data Source component removes a data source from an existing knowledge base on the DigitalOcean Gradient AI Platform.
How it works
Section titled “How it works”Deletes a single data source from a knowledge base. DigitalOcean automatically triggers a re-indexing job after every deletion to clean up stale embeddings from the OpenSearch database. The component waits for that job to complete before emitting the output.
Output
Section titled “Output”Returns the deleted data source details:
- dataSourceUUID: UUID of the deleted data source
- knowledgeBaseUUID: UUID of the knowledge base
- knowledgeBaseName: Name of the knowledge base
Example Output
Section titled “Example Output”{ "data": { "dataSourceUUID": "650d31b0-3338-11f1-b074-4e013e2bere4", "indexingJob": { "completedDataSources": 1, "finishedAt": "2026-04-09T10:29:39Z", "startedAt": "2026-04-09T10:28:58Z", "status": "INDEX_JOB_STATUS_COMPLETED", "totalDataSources": 1, "totalTokens": "" }, "knowledgeBaseName": "ecommerce-knowledge-base-v2", "knowledgeBaseUUID": "3f3d8984-32a2-11f1-b074-4e01dffdde4" }, "timestamp": "2026-04-09T10:29:59.190938313Z", "type": "digitalocean.data_source.deleted"}Delete Database
Section titled “Delete Database”The Delete Database component permanently removes a database from a DigitalOcean Managed Database cluster.
Use Cases
Section titled “Use Cases”- Cleanup: Remove databases that are no longer needed after a workflow completes
- Environment teardown: Delete temporary or preview-environment databases
- Tenant offboarding: Remove customer-specific databases during deprovisioning
Configuration
Section titled “Configuration”- Database Cluster: The managed database cluster containing the database (required)
- Database: The database to delete (required)
Output
Section titled “Output”Returns information about the deleted database:
- name: The deleted database name
- databaseClusterId: The cluster UUID
- databaseClusterName: The cluster name
- deleted: Whether the delete request succeeded
Important Notes
Section titled “Important Notes”- If you use custom token scopes, this action requires database:delete and database:read
- Database management is not supported for Caching or Valkey clusters
- Deleting a database that no longer exists is treated as a success
Example Output
Section titled “Example Output”{ "data": { "databaseClusterId": "9cc10173-e9ea-4176-9dbc-a4cee4c4ff30", "databaseClusterName": "primary-postgres", "deleted": true, "name": "app_db" }, "timestamp": "2026-03-27T09:20:00Z", "type": "digitalocean.database.deleted"}Delete Droplet
Section titled “Delete Droplet”The Delete Droplet component permanently deletes a droplet from your DigitalOcean account.
Use Cases
Section titled “Use Cases”- Cleanup: Remove temporary or test droplets after use
- Cost optimization: Automatically tear down unused infrastructure
- Automated workflows: Delete droplets as part of deployment rollback or cleanup processes
- Environment management: Remove ephemeral environments after testing
Configuration
Section titled “Configuration”- Droplet: The droplet to delete (required, supports expressions)
Output
Section titled “Output”Returns information about the deleted droplet:
- dropletId: The ID of the droplet that was deleted
Important Notes
Section titled “Important Notes”- This operation is permanent and cannot be undone
- All data on the droplet will be lost
- The droplet will be shut down if it’s running before deletion
- Any snapshots of the droplet will remain in your account
Example Output
Section titled “Example Output”{ "data": { "dropletId": 557784760 }, "timestamp": "2026-03-12T21:25:45.688697002Z", "type": "digitalocean.droplet.deleted"}Delete Knowledge Base
Section titled “Delete Knowledge Base”The Delete Knowledge Base component removes a knowledge base from the DigitalOcean Gradient AI Platform.
How it works
Section titled “How it works”Deletes the specified knowledge base. Optionally, you can also delete the associated OpenSearch database that stores the vector embeddings.
Use Cases
Section titled “Use Cases”- Cleanup: Remove knowledge bases that are no longer needed
- Resource management: Free up resources by deleting unused knowledge bases and their databases
- Rotation: Delete an old knowledge base after a new one has been verified and attached
Configuration
Section titled “Configuration”- Knowledge Base: The knowledge base to delete (required)
- Delete OpenSearch Database: Whether to also delete the associated OpenSearch database (optional, defaults to off)
Output
Section titled “Output”Returns confirmation of the deletion including:
- knowledgeBaseUUID: UUID of the deleted knowledge base
- databaseDeleted: Whether the OpenSearch database was also deleted
- databaseId: UUID of the deleted database (included when the database was deleted)
- databaseName: Name of the deleted database (included when the database was deleted)
Important Notes
Section titled “Important Notes”- If the knowledge base is currently attached to any agents, it will automatically be removed from those agents upon deletion. Consider using the Detach Knowledge Base component first if you need more control over the detachment process.
- Deleting the OpenSearch database is irreversible and will remove all vector embeddings
Example Output
Section titled “Example Output”{ "data": { "databaseDeleted": true, "databaseId": "9cc10173-e9ea-4176-9dbc-a4cee4c4ff30", "databaseName": "my-knowledge-base-os", "knowledgeBaseUUID": "a1b2c3d4-0000-0000-0000-000000000001" }, "timestamp": "2025-01-01T00:00:00Z", "type": "digitalocean.knowledge_base.deleted"}Delete Load Balancer
Section titled “Delete Load Balancer”The Delete Load Balancer component permanently deletes a load balancer from your DigitalOcean account.
Use Cases
Section titled “Use Cases”- Cleanup: Remove load balancers after decommissioning a service
- Cost optimization: Automatically tear down unused load balancers
- Environment management: Delete load balancers as part of environment teardown workflows
Configuration
Section titled “Configuration”- Load Balancer: The load balancer to delete (required, supports expressions)
Output
Section titled “Output”Returns information about the deleted load balancer:
- loadBalancerID: The UUID of the load balancer that was deleted
Important Notes
Section titled “Important Notes”- This operation is permanent and cannot be undone
- Deleting a load balancer does not delete the targeted droplets
- If the load balancer does not exist (404), the component emits success (idempotent)
Example Output
Section titled “Example Output”{ "data": { "loadBalancerID": "4de7ac8b-495b-4884-9a69-1050c6793cd6" }, "timestamp": "2026-03-13T10:05:30.000000000Z", "type": "digitalocean.loadbalancer.deleted"}Delete Object
Section titled “Delete Object”The Delete Object component permanently removes an object from a DigitalOcean Spaces bucket.
Prerequisites
Section titled “Prerequisites”Spaces Access Key ID and Secret Access Key must be configured in the DigitalOcean integration settings.
Configuration
Section titled “Configuration”- Bucket: The Spaces bucket containing the object. The dropdown lists all buckets — the region is determined automatically.
- File Path: The full path to the object within the bucket (e.g. reports/daily.csv). Supports expressions to reference a path from an upstream component.
Output
Section titled “Output”- bucket: The bucket name
- filePath: The path of the deleted object
- deleted: Always true on success
Important Notes
Section titled “Important Notes”- This operation is permanent and cannot be undone
- The operation succeeds even if the object does not exist (idempotent)
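Because Spaces exposes an S3-compatible API, the underlying call can be sketched with boto3 (a sketch under assumptions: boto3 is installed and the Access Key pair comes from the integration settings; `spaces_client` and `delete_object` are illustrative helper names):

```python
def spaces_client(region: str, key_id: str, secret: str):
    """Build an S3-compatible client pointed at DigitalOcean Spaces."""
    import boto3  # assumes boto3 is available
    return boto3.client(
        "s3",
        region_name=region,
        endpoint_url=f"https://{region}.digitaloceanspaces.com",
        aws_access_key_id=key_id,
        aws_secret_access_key=secret,
    )

def delete_object(client, bucket: str, file_path: str) -> dict:
    """Delete an object and return the component-style output fields.

    S3-style DeleteObject is idempotent: it succeeds whether or not
    the key exists, matching the behavior described above.
    """
    client.delete_object(Bucket=bucket, Key=file_path)
    return {"bucket": bucket, "filePath": file_path, "deleted": True}
```

Deferring the boto3 import keeps the delete logic testable with any object that exposes a `delete_object` method.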
Use Cases
Section titled “Use Cases”- Cleanup: Remove temporary or processed files after a workflow completes
- Rotation: Delete old reports or artifacts as part of a file rotation policy
- Pipeline teardown: Remove objects created during a pipeline run
Example Output
Section titled “Example Output”{ "data": { "bucket": "my-company-assets", "deleted": true, "filePath": "reports/daily.csv" }, "timestamp": "2026-03-25T09:00:00Z", "type": "digitalocean.spaces.object.deleted"}Delete Snapshot
Section titled “Delete Snapshot”The Delete Snapshot component deletes a DigitalOcean snapshot image.
Use Cases
Section titled “Use Cases”- Cleanup: Remove old snapshots to free up storage and reduce costs
- Lifecycle management: Automatically delete snapshots after they are no longer needed
- Rotation: Delete older snapshots as part of a snapshot rotation policy
Configuration
Section titled “Configuration”- Snapshot: The snapshot to delete (required)
Output
Section titled “Output”Returns confirmation of the deleted snapshot including:
- snapshotId: The ID of the deleted snapshot
- deleted: Confirmation that the snapshot was deleted
Example Output
Section titled “Example Output”{ "data": { "deleted": true, "snapshotId": "220431883" }, "timestamp": "2026-03-13T06:27:06.208635289Z", "type": "digitalocean.snapshot.deleted"}Detach Knowledge Base
Section titled “Detach Knowledge Base”The Detach Knowledge Base component removes a knowledge base from an existing Gradient AI agent.
Use Cases
Section titled “Use Cases”- Rollback: Remove a knowledge base that is causing poor agent responses
- Cleanup: Detach an outdated knowledge base before attaching a freshly indexed one
- Rotation: As part of a blue/green pipeline, detach the old knowledge base after the new one is verified
Configuration
Section titled “Configuration”- Agent: The agent to detach the knowledge base from (required)
- Knowledge Base: The knowledge base to detach — only shows knowledge bases currently attached to the selected agent (required)
Output
Section titled “Output”Returns confirmation of the detachment including:
- agentUUID: UUID of the agent
- knowledgeBaseUUID: UUID of the detached knowledge base
Example Output
Section titled “Example Output”{ "data": { "agentUUID": "20cd8434-6ea1-11f0-bf8f-4e013e2ddde4", "knowledgeBaseUUID": "a1b2c3d4-0000-0000-0000-000000000001" }, "timestamp": "2025-01-01T00:00:00Z", "type": "digitalocean.knowledge_base.detached"}Get Alert Policy
Section titled “Get Alert Policy”The Get Alert Policy component retrieves the full details of a monitoring alert policy.
Use Cases
Section titled “Use Cases”- Policy inspection: Verify the current configuration of an alert policy
- Conditional logic: Check whether a policy is enabled before modifying it downstream
- Audit workflows: Retrieve alert policy details as part of a compliance or reporting pipeline
Configuration
Section titled “Configuration”- Alert Policy: The alert policy to retrieve (required, supports expressions)
Output
Section titled “Output”Returns the alert policy object including:
- uuid: Alert policy UUID
- description: Human-readable description
- type: Metric type being monitored (e.g. v1/insights/droplet/cpu)
- compare: Comparison operator (GreaterThan/LessThan)
- value: Threshold value
- window: Evaluation window (5m, 10m, 30m, 1h)
- entities: Scoped droplet IDs
- tags: Scoped droplet tags
- enabled: Whether the policy is active
- alerts: Configured notification channels
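The compare, value, and window fields describe a simple threshold rule. A minimal sketch of how such a rule classifies a metric reading (illustrative only, not the component's implementation):

```python
def breaches(policy: dict, metric_value: float) -> bool:
    """Return True when a metric reading violates the policy threshold."""
    compare, threshold = policy["compare"], policy["value"]
    if compare == "GreaterThan":
        return metric_value > threshold
    if compare == "LessThan":
        return metric_value < threshold
    raise ValueError(f"unknown compare operator: {compare}")
```

With the example policy below (GreaterThan, value 20), a CPU reading of 35% breaches the policy while 12% does not.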
Example Output
Section titled “Example Output”{ "data": { "alerts": { "email": [ "sammy@digitalocean.com" ] }, "compare": "GreaterThan", "description": "High CPU Usage", "enabled": true, "entities": [ "558899681" ], "tags": [], "type": "v1/insights/droplet/cpu", "uuid": "ffcaf816-f6a5-4b4a-b4c4-e84532755e82", "value": 20, "window": "5m" }, "timestamp": "2026-03-18T09:54:58.914731251Z", "type": "digitalocean.alertpolicy.fetched"}Get App
Section titled “Get App”The Get App component retrieves detailed information about a specific DigitalOcean App Platform application.
Use Cases
Section titled “Use Cases”- Status checks: Verify app state and deployment status before performing operations
- Information retrieval: Get current app configuration, URLs, and deployment details
- Pre-flight validation: Check app exists before operations like update or delete
- Monitoring: Track app configuration, active deployments, and ingress URLs
- Integration workflows: Retrieve app details for use in downstream workflow steps
Configuration
Section titled “Configuration”- App ID: The unique identifier of the app to retrieve (required)
Output
Section titled “Output”Returns the app object including:
- id: The unique app ID
- name: The app name
- default_ingress: The default ingress URL
- live_url: The live URL for the app
- region: The region where the app is deployed
- active_deployment: Information about the active deployment
- in_progress_deployment: Information about any in-progress deployment
- spec: Complete app specification including services, workers, jobs, static sites, databases, and configuration
Important Notes
Section titled “Important Notes”- The app ID can be obtained from the output of the Create App component or from the DigitalOcean dashboard
- The component returns the current state of the app, including all deployed components
- Use this component to verify deployment status before performing updates or other operations
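That pre-flight check can be sketched as a readiness test over the returned fields (a hypothetical helper, assuming the `active_deployment` and `in_progress_deployment` shapes shown in the example output):

```python
def app_is_ready(app: dict) -> bool:
    """True when the app has an ACTIVE deployment and nothing in progress."""
    active = app.get("active_deployment") or {}
    in_progress = app.get("in_progress_deployment") or {}
    return active.get("phase") == "ACTIVE" and not in_progress.get("id")
```

Gate update or delete steps on this check to avoid racing an in-flight deployment.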
Example Output
Section titled “Example Output”{ "data": { "active_deployment": { "cause": "initial deployment", "created_at": "2026-03-24T10:17:51Z", "id": "a5ad056d-523a-48e2-9715-76db183f5a15", "phase": "ACTIVE", "progress": { "steps": [ { "ended_at": "2026-03-24T10:18:56.473548704Z", "name": "build", "started_at": "2026-03-24T10:17:57.360621633Z", "status": "SUCCESS", "steps": [ { "ended_at": "2026-03-24T10:18:23.527631658Z", "name": "initialize", "started_at": "2026-03-24T10:17:57.360716517Z", "status": "SUCCESS" }, { "ended_at": "2026-03-24T10:18:54.938686340Z", "name": "components", "started_at": "2026-03-24T10:18:23.527664326Z", "status": "SUCCESS", "steps": [ { "name": "my-app", "status": "SUCCESS" } ] } ] }, { "ended_at": "2026-03-24T10:19:06.689255599Z", "name": "deploy", "started_at": "2026-03-24T10:19:02.998040282Z", "status": "SUCCESS", "steps": [ { "ended_at": "2026-03-24T10:19:04.851886393Z", "name": "initialize", "started_at": "2026-03-24T10:19:02.998063884Z", "status": "SUCCESS" }, { "ended_at": "2026-03-24T10:19:04.851934987Z", "name": "components", "started_at": "2026-03-24T10:19:04.851916324Z", "status": "SUCCESS" }, { "ended_at": "2026-03-24T10:19:06.689152591Z", "name": "finalize", "started_at": "2026-03-24T10:19:04.852021129Z", "status": "SUCCESS" } ] } ], "success_steps": 5, "total_steps": 5 }, "spec": { "ingress": { "rules": [ { "component": { "name": "my-app" }, "match": { "path": { "prefix": "/" } } } ] }, "name": "my-app", "region": "blr", "static_sites": [ { "github": { "branch": "main", "deploy_on_push": true, "repo": "digitalocean/hello-world" }, "name": "my-app" } ] }, "static_sites": [ { "name": "my-app", "source_commit_hash": "8ac84464242f1b430147319deb28abb5a20049b8" } ], "updated_at": "2026-03-24T10:19:07Z" }, "created_at": "2026-03-24T10:17:51Z", "default_ingress": "https://my-app-ixj6x.ondigitalocean.app", "id": "eb0fc7fa-8294-48c5-8a48-77d47fc6c89e", "last_deployment_created_at": "2026-03-24T10:17:51Z", "live_domain": 
"my-app-ixj6x.ondigitalocean.app", "live_url": "https://my-app-ixj6x.ondigitalocean.app", "live_url_base": "https://my-app-ixj6x.ondigitalocean.app", "pending_deployment": { "id": "" }, "region": { "continent": "Asia", "data_centers": [ "blr1" ], "flag": "india", "label": "Bangalore", "slug": "blr" }, "spec": { "ingress": { "rules": [ { "component": { "name": "my-app" }, "match": { "path": { "prefix": "/" } } } ] }, "name": "my-app", "region": "blr", "static_sites": [ { "github": { "branch": "main", "deploy_on_push": true, "repo": "digitalocean/hello-world" }, "name": "my-app" } ] }, "updated_at": "2026-03-24T10:19:13Z" }, "timestamp": "2026-03-26T08:21:30.077395918Z", "type": "digitalocean.app.fetched"}Get Cluster Configuration
Section titled “Get Cluster Configuration”The Get Cluster Configuration component retrieves the active configuration for a DigitalOcean Managed Database cluster.
Use Cases
Section titled “Use Cases”- Audit workflows: Inspect the active cluster configuration for reporting or compliance checks
- Validation: Compare the current cluster configuration before updates or maintenance
- Operational visibility: Retrieve engine-specific settings that affect behavior and performance
Configuration
Section titled “Configuration”- Database Cluster: The managed database cluster to inspect (required)
Output
Section titled “Output”Returns the cluster configuration including:
- databaseClusterId: The cluster UUID
- databaseClusterName: The cluster name
- config: The configuration object returned by the DigitalOcean API
Important Notes
Section titled “Important Notes”- If you use custom token scopes, this action requires database:read
- The keys inside config vary by database engine
Example Output
Section titled “Example Output”{ "data": { "config": { "autovacuum_naptime": 60, "backtrack_commit_timeout": 30, "default_toast_compression": "pglz", "idle_in_transaction_session_timeout": 0, "jit": true, "max_parallel_workers": 8 }, "databaseClusterId": "65b497a5-1674-4b1a-a122-01aebe761ef7", "databaseClusterName": "superplane-db-test" }, "timestamp": "2026-03-27T11:10:00.000000000Z", "type": "digitalocean.database.cluster.config.fetched"}Get Database
Section titled “Get Database”The Get Database component retrieves a managed database from a DigitalOcean cluster and enriches it with cluster context.
Use Cases
Section titled “Use Cases”- Routing decisions: Inspect the database and cluster state before directing traffic or jobs
- Operational checks: Review engine, region, and connection details before maintenance steps
- Audit workflows: Retrieve the current database and cluster context for reporting or validation
Configuration
Section titled “Configuration”- Database Cluster: The managed database cluster containing the database (required)
- Database: The database to retrieve (required)
Output
Section titled “Output”Returns the requested database enriched with cluster details, including:
- name: The database name
- databaseClusterId: The cluster UUID
- databaseClusterName: The cluster name
- engine: The cluster engine
- version: The cluster engine version
- region: The cluster region
- status: The cluster status
- connection: Connection information when available
- database: The raw database object returned by the API
Important Notes
Section titled “Important Notes”- If you use custom token scopes, this action requires database:read
- Database management is not supported for Caching or Valkey clusters
Example Output
Section titled “Example Output”{ "data": { "connection": { "database": "defaultdb", "host": "superplane-db-test-do-user-1234567-0.j.db.ondigitalocean.com", "port": 25060, "ssl": true, "uri": "postgresql://doadmin@superplane-db-test-do-user-1234567-0.j.db.ondigitalocean.com:25060/defaultdb?sslmode=require", "user": "doadmin" }, "database": { "name": "app_db" }, "databaseClusterId": "65b497a5-1674-4b1a-a122-01aebe761ef7", "databaseClusterName": "superplane-db-test", "engine": "pg", "name": "app_db", "region": "nyc1", "status": "online", "version": "17" }, "timestamp": "2026-03-27T11:12:00.000000000Z", "type": "digitalocean.database.fetched"}Get Database Cluster
Section titled “Get Database Cluster”The Get Database Cluster component retrieves the details of an existing DigitalOcean Managed Database cluster.
Use Cases
Section titled “Use Cases”- Status checks: Verify a cluster is online before creating databases or users
- Information retrieval: Fetch connection details, sizing, engine, and region information
- Pre-flight validation: Confirm a cluster exists before downstream operations
Configuration
Section titled “Configuration”- Database Cluster: The managed database cluster to retrieve (required)
Output
Section titled “Output”Returns the database cluster including:
- id: The cluster UUID
- name: The cluster name
- engine: The configured engine
- version: The engine version
- region: The cluster region
- size: The node size slug
- num_nodes: The number of nodes
- status: The cluster status
- connection: Connection information when available
Important Notes
Section titled “Important Notes”- If you use custom token scopes, this action requires database:read
- The returned connection information depends on the cluster type and provisioning state
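The connection fields compose into a client connection string like the uri in the example output. A sketch of that assembly (illustrative; the API already returns a ready-made uri, so you would normally use that directly):

```python
def postgres_uri(conn: dict) -> str:
    """Assemble a PostgreSQL connection string from the connection fields."""
    ssl = "?sslmode=require" if conn.get("ssl") else ""
    database = conn.get("database", "defaultdb")
    return (f"postgresql://{conn['user']}@{conn['host']}:{conn['port']}"
            f"/{database}{ssl}")
```

The password is omitted here on purpose; inject it from a secret rather than embedding it in workflow output.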
Example Output
Section titled “Example Output”{ "data": { "connection": { "host": "superplane-db-do-user-123456-0.j.db.ondigitalocean.com", "port": 25060, "ssl": true, "uri": "postgres://doadmin:[email protected]:25060/defaultdb?sslmode=require", "user": "doadmin" }, "created_at": "2026-03-27T13:00:00Z", "engine": "pg", "id": "65b497a5-1674-4b1a-a122-01aebe761ef7", "name": "superplane-db", "num_nodes": 1, "private_network_uuid": "7e6d2691-182b-4dd1-8452-529f88feb996", "region": "nyc1", "size": "db-s-1vcpu-1gb", "status": "online", "version": "18.0" }, "timestamp": "2026-03-27T13:05:00Z", "type": "digitalocean.database.cluster.fetched"}Get Droplet
Section titled “Get Droplet”The Get Droplet component retrieves detailed information about a specific droplet.
Use Cases
Section titled “Use Cases”- Status checks: Verify droplet state before performing operations
- Information retrieval: Get current IP addresses, configuration, and status
- Pre-flight validation: Check droplet exists before operations like snapshot or power management
- Monitoring: Track droplet configuration and network details
Configuration
Section titled “Configuration”- Droplet: The droplet to retrieve (required, supports expressions)
Output
Section titled “Output”Returns the droplet object including:
- id: Droplet ID
- name: Droplet hostname
- status: Current droplet status (new, active, off, archive)
- memory: RAM in MB
- vcpus: Number of virtual CPUs
- disk: Disk size in GB
- region: Region information
- image: Image information
- size_slug: Size identifier
- networks: Network information including IP addresses
- tags: Applied tags
Example Output
Section titled “Example Output”{ "data": { "disk": 25, "id": 557784760, "image": { "id": 220345895, "name": "superplane-1773328904", "slug": "" }, "memory": 1024, "name": "superplane-1773328904-s-1vcpu-1gb-nyc3-01", "networks": { "v4": [ { "ip_address": "192.0.2.1", "type": "public" }, { "ip_address": "10.108.0.3", "type": "private" } ] }, "region": { "name": "New York 3", "slug": "nyc3" }, "size_slug": "s-1vcpu-1gb", "status": "active", "tags": [], "vcpus": 1 }, "timestamp": "2026-03-12T21:13:32.946693411Z", "type": "digitalocean.droplet.fetched"}Get Droplet Metrics
Section titled “Get Droplet Metrics”The Get Droplet Metrics component retrieves CPU usage, memory utilization, and network bandwidth metrics for a droplet over a specified lookback window.
Note: Monitoring is only available for droplets that had monitoring enabled during creation. Droplets created without monitoring will not report metrics.
Use Cases
Section titled “Use Cases”- Performance monitoring: Sample current resource utilization before scaling decisions
- Incident investigation: Pull recent metrics when responding to an alert
- Capacity planning: Gather trend data to inform right-sizing of infrastructure
- Automated scaling: Use metric outputs to conditionally trigger resize or power operations
Configuration
Section titled “Configuration”- Droplet: The droplet to fetch metrics for (required, supports expressions)
- Lookback Period: How far back to retrieve metrics — 1h, 6h, 24h, 7d, or 14d (required)
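The lookback period translates into the start/end window that appears in the output. A sketch of that mapping (hypothetical helper; the component computes this server-side):

```python
from datetime import datetime, timedelta, timezone

_LOOKBACKS = {
    "1h": timedelta(hours=1),
    "6h": timedelta(hours=6),
    "24h": timedelta(hours=24),
    "7d": timedelta(days=7),
    "14d": timedelta(days=14),
}

def metrics_window(lookback: str, now=None):
    """Return (start, end) ISO 8601 timestamps for a lookback period."""
    end = now or datetime.now(timezone.utc)
    start = end - _LOOKBACKS[lookback]
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return start.strftime(fmt), end.strftime(fmt)
```

For a 1h lookback at 08:36:34Z, this yields the 07:36:34Z-08:36:34Z window shown in the example output.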
Output
Section titled “Output”Returns a combined metrics payload with averaged values over the lookback window:
- dropletId: The ID of the queried droplet
- start: ISO 8601 timestamp of the start of the metrics window
- end: ISO 8601 timestamp of the end of the metrics window
- lookbackPeriod: The selected lookback period
- avgCpuUsagePercent: Average CPU usage percentage over the window
- avgMemoryUsagePercent: Average memory utilization percentage, computed from (total − available) / total × 100
- avgPublicOutboundBandwidthMbps: Average public outbound bandwidth in Mbps (as reported by the DigitalOcean API)
- avgPublicInboundBandwidthMbps: Average public inbound bandwidth in Mbps (as reported by the DigitalOcean API)
All metric values are rounded to two decimal places.
Important Notes
Section titled “Important Notes”- Metrics are only available for droplets with the DigitalOcean Monitoring Agent installed
- The Monitoring Agent is pre-installed on droplets using official DigitalOcean images created after 2018
- Data point resolution varies by window: shorter windows return finer-grained data
Example Output
Section titled “Example Output”{ "data": { "avgCpuUsagePercent": 0.18, "avgMemoryUsagePercent": 34.39, "avgPublicInboundBandwidthMbps": 0.36, "avgPublicOutboundBandwidthMbps": 0.15, "dropletId": "559378149", "end": "2026-03-19T08:36:34Z", "lookbackPeriod": "1h", "start": "2026-03-19T07:36:34Z" }, "timestamp": "2026-03-19T08:36:37.115389929Z", "type": "digitalocean.droplet.metrics"}Get Knowledge Base
Section titled “Get Knowledge Base”The Get Knowledge Base component retrieves comprehensive information about an existing knowledge base on the DigitalOcean Gradient AI Platform.
How it works
Section titled “How it works”Fetches the knowledge base details including its OpenSearch database, all attached data sources, and the latest indexing job status.
Use Cases
Section titled “Use Cases”- Pre-flight checks: Inspect a knowledge base before attaching it to an agent or running evaluations
- Health monitoring: Check indexing status and data source count on a schedule
- Teardown workflows: Fetch the database ID before deleting the knowledge base
- Auditing: Verify region, embedding model, and data source configuration
Output
Section titled “Output”Returns the full knowledge base object including:
- uuid, name, status, region, tags — core properties
- embeddingModelUUID, embeddingModelName — embedding model details
- projectId, projectName — associated project
- database — OpenSearch database object with id, name, and status
- dataSources — array of all attached data sources with type and source details
- lastIndexingJob — full indexing job details: status, phase, totalTokens, data source progress, timing, and report availability
- createdAt, updatedAt — timestamps
Example Output
Section titled “Example Output”{ "data": { "createdAt": "2025-01-01T00:00:00Z", "dataSources": [ { "chunkingAlgorithm": "CHUNKING_ALGORITHM_SECTION_BASED", "createdAt": "2025-01-01T00:00:00Z", "spacesBucket": "tor1/product-data", "type": "spaces", "updatedAt": "2025-06-01T00:00:00Z", "uuid": "a1b2c3d4-0000-0000-0000-000000000001" }, { "chunkingAlgorithm": "CHUNKING_ALGORITHM_SEMANTIC", "crawlingOption": "SCOPED", "createdAt": "2025-02-01T00:00:00Z", "type": "web", "updatedAt": "2025-06-01T00:00:00Z", "uuid": "a1b2c3d4-0000-0000-0000-000000000002", "webURL": "https://docs.example.com" } ], "database": { "id": "abf1055a-745d-4c24-a1db-1959ea819264", "name": "product-catalog-os", "status": "online" }, "databaseStatus": "ONLINE", "embeddingModelName": "GTE Large EN v1.5", "embeddingModelUUID": "05700391-7aa8-11ef-bf8f-4e013e2ddde4", "lastIndexingJob": { "completedDataSources": 2, "createdAt": "2025-06-01T00:00:00Z", "finishedAt": "2025-06-01T00:05:32Z", "isReportAvailable": true, "phase": "BATCH_JOB_PHASE_COMPLETE", "startedAt": "2025-06-01T00:00:00Z", "status": "INDEX_JOB_STATUS_COMPLETED", "totalDataSources": 2, "totalTokens": "12345", "uuid": "b2c3d4e5-0000-0000-0000-000000000001" }, "name": "product-catalog-v2", "projectId": "37455431-84bd-4fa2-94cf-e8486f8f8c5e", "projectName": "AI Agents", "region": "tor1", "tags": [ "production", "docs" ], "updatedAt": "2025-06-01T00:00:00Z", "uuid": "20cd8434-6ea1-11f0-bf8f-4e013e2ddde4" }, "timestamp": "2025-06-01T00:05:32Z", "type": "digitalocean.knowledge_base.fetched"}Get Object
Section titled “Get Object”The Get Object component retrieves an object and its metadata from a DigitalOcean Spaces bucket.
Prerequisites
Section titled “Prerequisites”Spaces Access Key ID and Secret Access Key must be configured in the DigitalOcean integration settings. The integration works without them for other components (Droplets, DNS, etc.), but they are required for any Spaces operation.
Configuration
Section titled “Configuration”- Bucket: The Spaces bucket to read from. The dropdown lists all buckets across all regions — the region is determined automatically.
- File Path: The path to the object within the bucket (e.g. reports/daily.csv)
- Include Body: Download the object content. Only supported for text-based content types (JSON, YAML, CSV, plain text, etc.)
Output Channels
Section titled “Output Channels”- Found: The object exists — metadata, tags, and optional body are returned
- Not Found: The object does not exist in the bucket (404)
Output
Section titled “Output”- bucket: The bucket name
- filePath: The path to the object within the bucket
- endpoint: The full Spaces URL of the object
- contentType: MIME type of the object
- size: Human-readable file size (e.g. 1.23 MiB)
- lastModified: When the object was last modified (RFC1123 format)
- eTag: MD5 hash of the object content — changes when the file changes
- metadata: Custom metadata key-value pairs set on the object (x-amz-meta-* headers)
- tags: Key-value tags applied to the object
- body: Object content as a string (only present for text-based content types when Include Body is enabled)
Use Cases
Section titled “Use Cases”- Config reading: Fetch a JSON or YAML config file and use its contents in downstream workflow steps
- File existence check: Verify a file exists before triggering a process — route to notFound if it is missing
- Change detection: Compare ETag or LastModified with a previously stored value to detect updates
- Backup verification: Confirm a backup file was written today by checking LastModified
- Tag-based routing: Read object tags to drive workflow logic (e.g. status=ready)
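The change-detection pattern reduces to comparing the stored eTag and lastModified values against the latest fetch (hypothetical helper; the dict keys mirror this component's output fields):

```python
def object_changed(previous: dict, current: dict) -> bool:
    """Detect whether a Spaces object changed since the last run.

    eTag is a content hash, so a differing eTag or lastModified
    indicates the file was rewritten.
    """
    return (previous.get("eTag") != current.get("eTag")
            or previous.get("lastModified") != current.get("lastModified"))
```

Store the previous output between runs (for example in workflow state) and branch only when this returns True.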
Example Output
Section titled “Example Output”{ "data": { "bucket": "my-company-assets", "contentType": "text/csv", "eTag": "a1b2c3d4ef567890a1b2c3d4ef567890", "endpoint": "https://my-company-assets.fra1.digitaloceanspaces.com/reports/daily.csv", "filePath": "reports/daily.csv", "lastModified": "Wed, 25 Mar 2026 08:45:00 GMT", "metadata": { "env": "production", "uploaded-by": "pipeline" }, "size": "41.31 KiB", "tags": { "env": "production", "status": "ready" } }, "timestamp": "2026-03-25T09:00:00Z", "type": "digitalocean.spaces.object.fetched"}Index Knowledge Base
Section titled “Index Knowledge Base”The Index Knowledge Base component triggers a new indexing job on an existing knowledge base and polls until it completes.
How it works
Section titled “How it works”Starts an indexing job that re-processes all data sources attached to the knowledge base. The component polls the job status every 30 seconds until the indexing job finishes successfully or fails.
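The poll-until-terminal loop can be sketched as follows (illustrative; `fetch_status` stands in for an API call returning the job's status string, and the status constants match those shown in the example output):

```python
import time

def wait_for_indexing(fetch_status, interval_seconds=30, max_attempts=120):
    """Poll an indexing job until it reaches a terminal status.

    fetch_status is a callable returning the current status string.
    Polls every interval_seconds, mirroring the 30-second cadence
    described above, and gives up after max_attempts polls.
    """
    terminal = {"INDEX_JOB_STATUS_COMPLETED", "INDEX_JOB_STATUS_FAILED"}
    for _ in range(max_attempts):
        status = fetch_status()
        if status in terminal:
            return status
        time.sleep(interval_seconds)
    raise TimeoutError("indexing job did not finish in time")
```

Injecting the fetch callable keeps the loop testable without real API calls.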
Use Cases
Section titled “Use Cases”- Content refresh: Re-index a knowledge base after updating files in a Spaces bucket or changing a website
- Scheduled re-indexing: Combine with a Schedule trigger to re-index on a regular cadence (e.g. nightly)
- Pipeline orchestration: Re-index after an upstream component adds or modifies data sources
Output
Section titled “Output”Returns the completed indexing job details:
- knowledgeBaseUUID: UUID of the knowledge base
- knowledgeBaseName: Name of the knowledge base
- jobUUID: UUID of the indexing job
- status: Final job status (e.g. INDEX_JOB_STATUS_COMPLETED)
- phase: Final job phase (e.g. BATCH_JOB_PHASE_SUCCEEDED)
- totalTokens: Total tokens consumed by the indexing job
- completedDataSources: Number of data sources that finished indexing
- totalDataSources: Total number of data sources
- startedAt, finishedAt: Timing information
- isReportAvailable: Whether a detailed indexing report is available
Example Output
Section titled “Example Output”{ "data": { "completedDataSources": 2, "finishedAt": "2025-06-01T00:05:32Z", "isReportAvailable": true, "jobUUID": "b2c3d4e5-0000-0000-0000-000000000001", "knowledgeBaseName": "product-catalog-v2", "knowledgeBaseUUID": "20cd8434-6ea1-11f0-bf8f-4e013e2ddde4", "phase": "BATCH_JOB_PHASE_SUCCEEDED", "startedAt": "2025-06-01T00:00:00Z", "status": "INDEX_JOB_STATUS_COMPLETED", "totalDataSources": 2, "totalTokens": "1500" }, "timestamp": "2025-06-01T00:05:32Z", "type": "digitalocean.knowledge_base.indexed"}Manage Droplet Power
Section titled “Manage Droplet Power”The Manage Droplet Power component performs power management operations on a droplet.
Use Cases
Section titled “Use Cases”- Automated restarts: Reboot droplets on a schedule or in response to alerts
- Cost optimization: Power off droplets during non-business hours
- Maintenance workflows: Shutdown droplets before updates, power on after completion
- Recovery procedures: Power cycle droplets experiencing issues
Configuration
Section titled “Configuration”- Droplet: The droplet to manage (required, supports expressions)
- Operation: The power operation to perform (required):
- power_on: Power on a powered-off droplet
- power_off: Power off a running droplet (forced shutdown)
- shutdown: Gracefully shutdown a running droplet
- reboot: Gracefully reboot a running droplet
- power_cycle: Power cycle a droplet (forced reboot)
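Under the hood, the DigitalOcean v2 API exposes these operations as droplet actions posted to a per-droplet endpoint. A hedged sketch of building such a request (this shows the public API shape, not necessarily how the component constructs its calls):

```python
# Power operations map to droplet actions in the DigitalOcean v2 API:
# POST /v2/droplets/{id}/actions with a body of {"type": "<operation>"}.
VALID_OPERATIONS = {"power_on", "power_off", "shutdown", "reboot", "power_cycle"}

def power_action_request(droplet_id: int, operation: str) -> tuple[str, dict]:
    """Return the request path and JSON body for a power operation."""
    if operation not in VALID_OPERATIONS:
        raise ValueError(f"unsupported operation: {operation}")
    return (f"/v2/droplets/{droplet_id}/actions", {"type": operation})

path, body = power_action_request(557861237, "power_off")
print(path)  # /v2/droplets/557861237/actions
print(body)  # {'type': 'power_off'}
```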
Output
Section titled “Output”Returns the action result including:
- id: Action ID
- status: Final action status (completed or errored)
- type: Type of action performed
- started_at: When the action started
- completed_at: When the action completed
- resource_id: Droplet ID
- region: Region slug
Important Notes
Section titled “Important Notes”- power_off and power_cycle are forced operations and may cause data loss
- shutdown and reboot are graceful and wait for the OS to complete the operation
- The component waits for the action to complete before emitting
- Actions may take several minutes depending on the droplet state
Example Output
Section titled “Example Output”{ "data": { "completed_at": "2026-03-12T22:27:14Z", "id": 3087531973, "region_slug": "fra1", "resource_id": 557861237, "resource_type": "droplet", "started_at": "2026-03-12T22:27:07Z", "status": "completed", "type": "power_off" }, "timestamp": "2026-03-12T22:27:20.613332846Z", "type": "digitalocean.droplet.power.power_off"}Put Object
Section titled “Put Object”The Put Object component uploads a text-based object to a DigitalOcean Spaces bucket.
Prerequisites
Section titled “Prerequisites”Spaces Access Key ID and Secret Access Key must be configured in the DigitalOcean integration settings.
Configuration
Section titled “Configuration”- Bucket: The Spaces bucket to upload to. The dropdown lists all buckets — the region is determined automatically.
- File Path: The full path including file name and extension (e.g. reports/2024/daily.csv). The content type is detected automatically from the extension.
- Body: The text content to upload. Supports expressions to pass content from upstream components (e.g. {{ $['HTTP Request'].body }}). Only text-based formats are supported (JSON, CSV, YAML, plain text, XML, etc.)
- ACL: Access control — private (default) restricts access to the owner, public-read makes the object publicly accessible.
- Metadata: Optional key-value pairs stored as object metadata (x-amz-meta-* headers). Supports expressions.
- Tags: Optional key-value tags applied to the object. Can be changed later without re-uploading. Supports expressions.
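Extension-based content-type detection can be illustrated with Python's standard `mimetypes` module. This shows the general idea only; SuperPlane's exact extension-to-MIME mapping may differ:

```python
import mimetypes

# Sketch: derive a content type from the file extension, falling back to a
# generic binary type when the extension is unknown.
def detect_content_type(file_path: str) -> str:
    guessed, _ = mimetypes.guess_type(file_path)
    return guessed or "application/octet-stream"

print(detect_content_type("reports/2024/daily.csv"))  # text/csv
print(detect_content_type("config.json"))             # application/json
print(detect_content_type("README"))                  # application/octet-stream
```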
Output
Section titled “Output”- bucket: The bucket name
- filePath: The path to the uploaded object
- endpoint: The full Spaces URL of the object
- eTag: MD5 hash of the uploaded content
- contentType: Detected MIME type based on file extension
- size: Human-readable size of the uploaded content
- metadata: Metadata set on the object (only present if metadata was provided)
- tags: Tags set on the object (only present if tags were provided)
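For single-part uploads to S3-compatible storage such as Spaces, the ETag is typically the MD5 hex digest of the object body, which is why you can use it for content comparison. A minimal sketch of that relationship:

```python
import hashlib

# Sketch: the eTag output field corresponds to the MD5 hex digest of the
# uploaded content (for single-part uploads).
def etag_for(body: str) -> str:
    return hashlib.md5(body.encode("utf-8")).hexdigest()

print(etag_for("hello,world\n"))  # 32-character hex digest
```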
Use Cases
Section titled “Use Cases”- Config publishing: Upload a generated JSON or YAML config file to Spaces for downstream services to consume
- Report storage: Save a CSV or text report generated by an upstream component
- Object copy: Combine with Get Object to copy an object across buckets, preserving metadata and tags
Example Output
Section titled “Example Output”{ "data": { "bucket": "my-company-assets", "contentType": "text/csv", "eTag": "a1b2c3d4ef567890a1b2c3d4ef567890", "endpoint": "https://my-company-assets.fra1.digitaloceanspaces.com/reports/daily.csv", "filePath": "reports/daily.csv", "metadata": { "uploaded-by": "pipeline" }, "size": "128 B", "tags": { "env": "production" } }, "timestamp": "2026-03-25T09:00:00Z", "type": "digitalocean.spaces.object.uploaded"}Run Evaluation
Section titled “Run Evaluation”The Run Evaluation component triggers a Gradient AI evaluation test case against an agent, waits for it to complete, and reports whether the agent passed or failed.
How it works
Section titled “How it works”Runs a pre-configured evaluation test case against the selected agent. The test case already defines the prompts, metrics, and pass/fail thresholds. The component polls until the evaluation finishes, then fetches the results and routes to the appropriate output channel.
Use Cases
Section titled “Use Cases”- Blue/green deployments: Evaluate a staging agent before promoting it to production
- Regression testing: Automatically verify agent quality after knowledge base or configuration changes
- Continuous validation: Schedule periodic evaluations to detect quality drift
Configuration
Section titled “Configuration”- Test Case: A pre-configured evaluation test case with prompts, metrics, and thresholds (required)
- Agent: The agent to evaluate (required)
- Run Name: A name for this evaluation run, visible in the DigitalOcean console (required, max 64 characters). Supports expressions for dynamic naming.
Output Channels
Section titled “Output Channels”- Passed: The evaluation completed and the agent met all pass criteria defined in the test case
- Failed: The evaluation completed but the agent did not meet the pass criteria, or the evaluation run itself errored
Output
Section titled “Output”Returns the evaluation results including:
- evaluationRunUUID: UUID of the evaluation run
- testCaseUUID / testCaseName: The test case that was run
- agentUUID / agentName: The agent that was evaluated
- passed: Whether the agent passed the evaluation
- status: Final status of the evaluation run
- starMetric: The primary metric result (name, numberValue, stringValue)
- runLevelMetrics: All run-level metric results (name, numberValue, stringValue)
- prompts: Per-prompt results including input, output, ground truth, and per-prompt metric scores
- startedAt / finishedAt: Timing information
- errorDescription: Present on the Failed channel if the run itself errored
Important Notes
Section titled “Important Notes”- The evaluation typically takes 1–5 minutes depending on the number of prompts and complexity
- The component polls every 30 seconds until completion
- If the evaluation run itself fails (API error, timeout, etc.), the result is emitted to the Failed channel with the error description
Example Output
Section titled “Example Output”{ "data": { "agentName": "support-agent", "agentUUID": "7d5c762a-2e66-11f1-b074-4e013e2ddde4", "evaluationRunUUID": "ba42b577-9dab-40a9-a375-315a5be1922e", "finishedAt": "2025-01-01T00:03:44Z", "passed": true, "prompts": [ { "groundTruth": "A Droplet is a virtual machine that runs on DigitalOcean's cloud infrastructure.", "input": "What is a Droplet?", "metrics": [ { "metricName": "Correctness (general hallucinations)", "numberValue": 100, "stringValue": "" }, { "metricName": "Retrieved context relevance", "numberValue": 0, "stringValue": "" } ], "output": "A Droplet is a type of virtual private server (VPS) provided by a cloud platform." }, { "groundTruth": "Droplets have flexible pricing based on instance type, region, and options you select.", "input": "What is the pricing for Droplet?", "metrics": [ { "metricName": "Correctness (general hallucinations)", "numberValue": 66.67, "stringValue": "" }, { "metricName": "Retrieved context relevance", "numberValue": 0, "stringValue": "" } ], "output": "The pricing for Droplet starts at $5/month for a basic plan." } ], "runLevelMetrics": [ { "metricName": "Retrieved context relevance", "numberValue": 0, "stringValue": "" }, { "metricName": "Response-context completeness", "numberValue": 0, "stringValue": "" }, { "metricName": "Correctness (general hallucinations)", "numberValue": 93.33, "stringValue": "" } ], "starMetric": { "metricName": "Correctness (general hallucinations)", "numberValue": 93.33, "stringValue": "" }, "startedAt": "2025-01-01T00:00:00Z", "status": "successful", "testCaseName": "Product Knowledge Baseline", "testCaseUUID": "c6d75370-2f3e-11f1-b074-4e013e2ddde4" }, "timestamp": "2025-01-01T00:03:44Z", "type": "digitalocean.evaluation.passed"}Update Alert Policy
Section titled “Update Alert Policy”The Update Alert Policy component modifies an existing monitoring alert policy with new settings.
Note: Monitoring is only available for droplets that had monitoring enabled during creation. Droplets created without monitoring will not report metrics or trigger alerts.
Use Cases
Section titled “Use Cases”- Threshold tuning: Adjust alert thresholds in response to changing baselines or scaling events
- Enable/disable policies: Toggle alert policies on or off as part of maintenance windows or incident management
- Notification changes: Update notification channels (email or Slack) without recreating the policy
- Automated policy management: Programmatically adjust alert policies as part of infrastructure workflows
Configuration
Section titled “Configuration”- Alert Policy: The alert policy to update (required, supports expressions)
- Description: Human-readable name for the alert policy (required)
- Metric Type: The droplet metric to monitor, such as CPU Usage or Memory Usage (required)
- Comparison: Alert when the value is GreaterThan or LessThan the threshold (required)
- Threshold Value: The numeric threshold that triggers the alert (required)
- Evaluation Window: The rolling time window over which the metric is averaged (required)
- Droplets: Specific droplets to scope the policy to (optional)
- Tags: Monitor all droplets with matching tags (optional)
- Enabled: Whether the alert policy is active (default: true)
- Email Notifications: Email addresses to notify when the alert fires (optional)
- Slack Channel: Slack channel to post alerts to, e.g. #alerts (optional)
- Slack Webhook URL: Incoming webhook URL for the Slack workspace (required when Slack Channel is set)
Output
Section titled “Output”Returns the updated alert policy including:
- uuid: Alert policy UUID
- description: Human-readable description
- type: Metric type being monitored
- compare: Comparison operator (GreaterThan/LessThan)
- value: Threshold value
- window: Evaluation window
- enabled: Whether the policy is active
- alerts: Configured notification channels (email and/or Slack)
Important Notes
Section titled “Important Notes”- The update operation replaces the entire alert policy — all fields must be provided, not just the ones being changed
- At least one notification channel (email or Slack) is required
- Slack Channel and Slack Webhook URL must be provided together
- Scoping by Droplets and Tags is independent — you can use either, both, or neither (with neither set, the policy applies to all droplets)
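Because the update replaces the entire policy, a safe pattern is to start from the existing policy and overlay only the fields you want to change, so every required field is still present. A sketch of that merge (field names follow the example output below):

```python
# Full-replacement update: overlay changed fields onto the existing policy
# so the resulting payload still contains every field.
existing = {
    "description": "High CPU Usage",
    "type": "v1/insights/droplet/cpu",
    "compare": "GreaterThan",
    "value": 80,
    "window": "5m",
    "enabled": True,
    "entities": ["558899681"],
    "tags": [],
    "alerts": {"email": ["sammy@digitalocean.com"]},
}

changes = {"value": 90, "window": "10m"}  # tune only the threshold settings
payload = {**existing, **changes}         # later dict wins on key collisions

print(payload["value"], payload["window"])  # 90 10m
print(payload["description"])               # High CPU Usage (unchanged)
```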
Example Output
Section titled “Example Output”{ "data": { "alerts": { "email": [ "sammy@digitalocean.com" ] }, "compare": "GreaterThan", "description": "High CPU Usage", "enabled": true, "entities": [ "558899681" ], "tags": [], "type": "v1/insights/droplet/cpu", "uuid": "ffcaf816-f6a5-4b4a-b4c4-e84532755e82", "value": 80, "window": "5m" }, "timestamp": "2026-03-18T09:29:00.308296519Z", "type": "digitalocean.alertpolicy.updated"}Update App
Section titled “Update App”The Update App component modifies an existing DigitalOcean App Platform application.
Use Cases
Section titled “Use Cases”- Update configuration: Change app settings like environment variables, branch, build commands, and more
- Rename apps: Update the app name
- Migrate regions: Move the app to a different region
- Inject secrets: Add or update environment variables such as database connection strings
- Switch branches: Change the deployed branch without recreating the app
- Scale resources: Adjust instance size and count for services, workers, and jobs
- Configure networking: Update ingress paths, CORS settings, and VPC connections
- Manage databases: Add or update database attachments (dev or managed)
Configuration
Section titled “Configuration”- App: The app to update (required)
- Name: Update the app name (optional)
- Region: Update the region the app is deployed in (optional)
- Branch: The branch to deploy from (optional, applies to all components’ source providers)
- Deploy on Push: Toggle automatic deployment when code is pushed to the branch
- Environment Slug: Update the runtime environment/buildpack
- Build Command: Update the build command
- Run Command: Update the run command (services, workers, jobs)
- Source Directory: Update the source directory path
- HTTP Port: Update the service listening port
- Instance Size: Update the instance size slug
- Instance Count: Update the number of instances
- Output Directory: Update the static site output directory
- Index/Error/Catchall Document: Update static site document settings
- Environment Variables: Key-value pairs to add or update (merges with existing)
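The merge semantics for environment variables — new keys are added, existing keys are overwritten, untouched keys survive — can be sketched as a plain dictionary merge:

```python
# Sketch of the merge-not-replace behavior for environment variables.
existing_envs = {"LOG_LEVEL": "info", "DATABASE_URL": "postgres://old"}
updates = {"DATABASE_URL": "postgres://new", "FEATURE_FLAG": "on"}

merged = {**existing_envs, **updates}  # updates win on key collisions
print(merged)
# {'LOG_LEVEL': 'info', 'DATABASE_URL': 'postgres://new', 'FEATURE_FLAG': 'on'}
```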
Ingress Configuration
Section titled “Ingress Configuration”- Ingress Path: Update the path prefix for routing traffic
- CORS Allow Origins: Update allowed origins for Cross-Origin Resource Sharing
- CORS Allow Methods: Update HTTP methods allowed for CORS requests
Database Configuration
Section titled “Database Configuration”- Add Database: Attach a new database to the app
- Database Component Name, Engine, Version: Configure the database
- Use Managed Database: Connect to an existing managed database cluster
VPC Configuration
Section titled “VPC Configuration”- VPC: Update the VPC ID for the app
Output
Section titled “Output”Returns the updated app including:
- id: The unique app ID
- name: The app name
- region: The region where the app is deployed
- live_url: The live URL for the app
- default_ingress: The default ingress URL
- active_deployment: Information about the updated deployment
Important Notes
Section titled “Important Notes”- Environment variables are merged with existing ones (not replaced)
- Build/runtime settings are applied to all matching components
- Updating an app triggers a new deployment
- The component emits an output once the deployment reaches ACTIVE status
- If the deployment fails, the component will report the failure
- Dev databases are free and suitable for development; use managed databases for production
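Waiting for the deployment to reach ACTIVE can be sketched as a loop over deployment phases. The intermediate phase names below are illustrative of App Platform deployments, not a guarantee of the component's internals:

```python
# Sketch: poll deployment phases until the deployment goes ACTIVE or errors.
def wait_for_deployment(fetch_phase, sleep=lambda seconds: None):
    """Return True on ACTIVE, False on ERROR; keep polling otherwise."""
    while True:
        phase = fetch_phase()
        if phase == "ACTIVE":
            return True
        if phase == "ERROR":
            return False
        sleep(10)

# Simulated deployment that builds, deploys, then goes live.
phases = iter(["PENDING_BUILD", "BUILDING", "DEPLOYING", "ACTIVE"])
print(wait_for_deployment(lambda: next(phases)))  # True
```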
Example Output
Section titled “Example Output”{ "data": { "defaultIngress": "https://my-app-22-b6v8c.ondigitalocean.app", "id": "6d8abe1c-7cc5-4db3-b1aa-d9cdd1c127e7", "liveURL": "https://my-app-22-b6v8c.ondigitalocean.app", "name": "my-app-22", "region": { "continent": "Asia", "data_centers": [ "blr1" ], "flag": "india", "label": "Bangalore", "slug": "blr" } }, "timestamp": "2026-03-24T07:06:01.044139416Z", "type": "digitalocean.app.updated"}Upsert DNS Record
Section titled “Upsert DNS Record”The Upsert DNS Record component idempotently creates or updates a DNS record for a DigitalOcean-managed domain.
It first looks up existing records with the same name and type. If a match is found it updates the record in-place; otherwise it creates a new one.
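The lookup-then-create-or-update logic can be sketched against an in-memory record list (the real component performs these steps through the DNS API):

```python
# Sketch of the upsert: find a record matching name and type; update it
# in place if found, otherwise create a new one.
def upsert_record(records: list[dict], name: str, rtype: str, data: str) -> dict:
    for record in records:
        if record["name"] == name and record["type"] == rtype:
            record["data"] = data          # match found: update in place
            return record
    record = {"name": name, "type": rtype, "data": data}
    records.append(record)                 # no match: create a new record
    return record

zone = [{"name": "www", "type": "A", "data": "192.0.2.1"}]
upsert_record(zone, "www", "A", "192.0.2.2")  # updates the existing record
upsert_record(zone, "api", "A", "192.0.2.9")  # creates a new record
print(len(zone), zone[0]["data"])  # 2 192.0.2.2
```

Running the same upserts again leaves the zone unchanged, which is what makes the operation safe to repeat.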
Use Cases
Section titled “Use Cases”- Idempotent provisioning: Safely run DNS setup steps multiple times without creating duplicates
- IP updates: Keep A/AAAA records in sync with changing IP addresses
- Dynamic configuration: Update TXT records (e.g. SPF, DKIM) as part of automated workflows
Configuration
Section titled “Configuration”- Domain: The DigitalOcean-managed domain to manage the record in (required)
- Type: The DNS record type (required): A, AAAA, CNAME, MX, NS, TXT, SRV, CAA
- Name: The subdomain name for the record (required, use @ for root)
- Data: The record value, e.g. an IP address or hostname (required, supports expressions)
- TTL: Time-to-live in seconds (optional, defaults to 1800)
- Priority: Record priority for MX/SRV records (optional)
- Port: Port number for SRV records (optional)
- Weight: Weight for SRV records (optional)
Output
Section titled “Output”Returns the created or updated DNS record including:
- id: Record ID
- type: Record type
- name: Subdomain name
- data: Record value
- ttl: Time-to-live
- priority: Priority (for MX/SRV)
- port: Port (for SRV)
- weight: Weight (for SRV)
Example Output
Section titled “Example Output”{ "data": { "data": "192.0.2.2", "id": 12345678, "name": "www", "port": null, "priority": null, "ttl": 1800, "type": "A", "weight": null }, "timestamp": "2026-03-13T10:10:00.000000000Z", "type": "digitalocean.dns.record.upserted"}