Any Database. Any Query: Native Data Connectors in Flomation

Andy Esser
Apr 13, 2026

Part 4 of our Workflows Unleashed series. This week: why every database dialect deserves its own action.

Databases are the backbone of most automation. Read a record, transform it, write it somewhere else. Poll for changes. Archive old data. Sync between systems. If your workflow platform can't talk to your databases natively, you're writing wrapper scripts — and you're back where you started.

Flomation has dedicated actions for six database systems. Not a single "run SQL" box with a connection string field. Six separate actions, each built for its dialect's quirks, strengths, and idioms.

SQL Databases

PostgreSQL

The PostgreSQL action connects with full TLS support, connection pooling, and parameterised queries. Write a SELECT and the results arrive as a structured array of objects — each row is a map, each column is a typed field. Downstream nodes reference ${node.rows[0].email} or iterate with a For loop over ${node.rows}.
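To make the result shape concrete, here's a minimal Python sketch of rows arriving as an array of maps, with SQLite standing in for PostgreSQL (the table, columns, and values are invented for illustration):

```python
import sqlite3

# SQLite stands in for PostgreSQL here; the point is the result shape,
# not the dialect. Table and values are invented.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows become addressable by column name
conn.execute("CREATE TABLE users (email TEXT, plan TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("ada@example.com", "pro"))

# A parameterised query, as the action encourages:
cursor = conn.execute("SELECT email, plan FROM users WHERE plan = ?", ("pro",))
rows = [dict(r) for r in cursor]  # each row is a map of column -> value

print(rows[0]["email"])  # roughly what ${node.rows[0].email} resolves to
```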

INSERT, UPDATE, and DELETE return the affected row count. Errors include the PostgreSQL error code and message, routable through the On Error node for handling.

Why a dedicated action instead of a generic SQL executor? Because PostgreSQL has features the others don't — JSONB operators, array types, data-modifying CTEs, LISTEN/NOTIFY. The action understands these. A generic executor would either ignore them or expose them as untyped strings.

MySQL

Same pattern, different dialect. The MySQL action handles connection pooling, prepared statements, and the differences in type system and error codes. A query that works in the PostgreSQL action won't necessarily work here — and that's the point. Each action validates inputs against its own dialect.
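Even placeholder syntax varies by driver and dialect, which is a small, checkable instance of the same point. In Python, for example, the SQLite driver uses `?` placeholders while common PostgreSQL drivers use `%s`-style ones:

```python
import sqlite3

# A query string written for one driver's placeholder style fails
# verbatim against another — one reason per-dialect validation matters.
print(sqlite3.paramstyle)  # 'qmark': placeholders look like WHERE id = ?

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.execute("INSERT INTO t VALUES (?)", (1,))  # qmark style works here
```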

Oracle

Oracle's connection model, bind variables, and PL/SQL support are different enough to warrant their own action. The Oracle action handles TNS connection strings, Oracle-specific data types, and the particular way Oracle returns result sets.
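For a sense of why TNS strings need their own handling, here is the shape of a TNS connect descriptor — the host and service name below are invented:

```
(DESCRIPTION=
  (ADDRESS=(PROTOCOL=TCP)(HOST=db.example.com)(PORT=1521))
  (CONNECT_DATA=(SERVICE_NAME=ORCLPDB1)))
```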

NoSQL Databases

MongoDB

The MongoDB action supports the operations teams actually use: find (with query filters and projections), insertOne, updateOne, deleteOne, and aggregate pipelines. Results are JSON documents that flow directly into downstream actions.

Aggregate pipelines are particularly powerful in automation. A single MongoDB action can filter, group, sort, and transform data before it reaches the next node — reducing the number of flow nodes needed and keeping data processing close to the source.
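To see what a pipeline computes, here's a sketch that runs the equivalent of a two-stage `$match` + `$group` pipeline in plain Python — no MongoDB server involved. The pipeline structure is real MongoDB syntax; the documents and field names are invented:

```python
# Invented sample documents:
orders = [
    {"region": "eu", "amount": 40, "status": "paid"},
    {"region": "eu", "amount": 10, "status": "paid"},
    {"region": "us", "amount": 25, "status": "open"},
]

# The pipeline a MongoDB action would be given (real aggregate syntax):
pipeline = [
    {"$match": {"status": "paid"}},
    {"$group": {"_id": "$region", "total": {"$sum": "$amount"}}},
]

# Plain-Python equivalent of those two stages:
matched = [d for d in orders if d["status"] == "paid"]    # $match
totals = {}
for d in matched:                                          # $group + $sum
    totals[d["region"]] = totals.get(d["region"], 0) + d["amount"]

print(totals)  # {'eu': 50}
```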

Redis

The Redis action handles the commands that appear in automation most often: GET, SET, DEL, HGET, HSET, and key expiry. It's designed for the caching, queueing, and state management patterns that workflows commonly need.

Redis in a flow typically serves one of three roles: a fast cache layer between expensive operations, a queue for coordinating parallel workers, or a state store for tracking progress across flow executions.
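The cache-with-expiry pattern is simple enough to sketch without a Redis server. Below, an in-memory dict simulates SET-with-TTL and GET semantics; the key names and TTL are illustrative:

```python
import time

store = {}  # key -> (value, expires_at)

def set_with_ttl(key, value, ttl_seconds):
    """Like SET with an expiry: store the value and when it dies."""
    store[key] = (value, time.monotonic() + ttl_seconds)

def get(key):
    """Like GET: return the value, or None if missing or expired."""
    entry = store.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.monotonic() >= expires_at:  # past its TTL, like Redis expiry
        del store[key]
        return None
    return value

set_with_ttl("user:42:profile", {"name": "Ada"}, ttl_seconds=60)
print(get("user:42:profile"))  # {'name': 'Ada'}
```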

DynamoDB

The DynamoDB action supports Query, Scan, PutItem, GetItem, and DeleteItem with full expression syntax — KeyConditionExpression, FilterExpression, ProjectionExpression, and ExpressionAttributeValues. The action handles the serialisation between Go types and DynamoDB's attribute value format.

For teams already on AWS, DynamoDB is often the natural choice for flow state, configuration, and metadata storage. The action uses the same credential chain as the S3 and EC2 actions — IAM roles in production, local credentials in development.
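The expression parameters named above combine into a single request. The sketch below shows that request shape as a plain dict — it mirrors the AWS Query API parameters, but the table name, key names, and values are invented:

```python
# Hypothetical Query request for a flow-state table. The parameter
# names are the real DynamoDB ones; everything else is illustrative.
query_request = {
    "TableName": "flow-state",
    "KeyConditionExpression": "flow_id = :fid",
    "FilterExpression": "attribute_exists(finished_at)",
    "ProjectionExpression": "flow_id, run_id, finished_at",
    "ExpressionAttributeValues": {
        # DynamoDB's attribute value format: a type tag wraps each value
        ":fid": {"S": "nightly-sync"},
    },
}
```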

The Pattern

Every database action follows the same shape:

  1. Configure the connection — host, port, credentials (from ${secrets.DB_PASSWORD}), database name, TLS settings
  2. Write your query — native syntax for the dialect, with variable references for dynamic values
  3. Use the results — structured output accessible by field name in downstream nodes

No ORM. No query builder. No abstraction layer between your query and the database. You write the query you'd write in a terminal, and the results flow into the next step as structured data.
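Put together, the three steps might look something like the sketch below. This is illustrative only — the field names and layout are invented, not Flomation's actual flow format; only the `${secrets.DB_PASSWORD}` and `${node.rows[0].email}` references come from the steps above:

```
# Hypothetical sketch -- not Flomation's actual flow format.
action: postgresql
connection:
  host: db.internal.example      # invented host
  port: 5432
  database: crm                  # invented database name
  password: ${secrets.DB_PASSWORD}
  tls: require
query: |
  SELECT email, plan FROM users WHERE plan = $1
params: ["pro"]
# Downstream nodes then read ${node.rows[0].email}
```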

When Databases Meet Flows

The real power isn't any single database action — it's composing them.

Read from PostgreSQL. Transform with Data Extract. Write to MongoDB. Archive to S3. Notify via Slack. All in one flow. All triggered by a cron schedule, a webhook, or a database event.

Cross-database migration? PostgreSQL → MySQL with a For loop over the rows. ETL pipeline? S3 → Data Transform → DynamoDB. Real-time sync? Webhook trigger → PostgreSQL read → Redis cache update.
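The migration case is just a read, a loop, and a write. Here's that shape in Python with two in-memory SQLite databases standing in for PostgreSQL and MySQL; in a flow, each side would be its own database action and the loop a For node:

```python
import sqlite3

# Source database (stand-in for PostgreSQL), with invented rows:
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE users (id INTEGER, email TEXT)")
src.executemany("INSERT INTO users VALUES (?, ?)",
                [(1, "ada@example.com"), (2, "alan@example.com")])

# Destination database (stand-in for MySQL):
dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE users (id INTEGER, email TEXT)")

# The For-loop body: one write per source row.
for row in src.execute("SELECT id, email FROM users"):
    dst.execute("INSERT INTO users VALUES (?, ?)", row)

count = dst.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # 2
```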

The database actions are building blocks. The flow editor is the assembly line.

Next Week

We cover triggers — the ten different ways an event can start a Flomation flow, from webhooks to QR code scans.

www.flomation.co — free to start, no credit card.