Connect Snowflake to Syncaut to run SQL queries, insert data, create tables, and load data from stages — all from within your workflows. Use Snowflake as a central data warehouse to aggregate order, customer, and product data from all your e-commerce stores.
Before adding a Snowflake node to your workflow, you need the following from your Snowflake account:
Your Account Identifier
A Username
A Password
A Role (e.g. ACCOUNTADMIN or a custom role)
A Warehouse name (e.g. COMPUTE_WH)
A Database name (e.g. PRODUCTION)
All six values are required. The node will fail to connect if any of them are missing or incorrect.
Your account identifier is not your login URL — it is a specific identifier used for programmatic connections.
Log in to Snowflake
Go to Admin → Accounts
Hover over your account name — you will see a locator in the format orgname-accountname (e.g. myorg-abc12345)
Alternatively, run this query in a Snowflake worksheet:
SELECT CURRENT_ACCOUNT();
This query returns only the account locator (e.g. ABC12345), not the organisation name. For the account field in your credential, use the full orgname-accountname format shown under Admin → Accounts.
⚠️ Important: Using the wrong account identifier format is the most common cause of Snowflake connection failures. Do not use your web login URL (e.g. https://abc12345.snowflakecomputing.com) — use only the identifier portion (e.g. myorg-abc12345).
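If you are unsure whether a value is a URL or an identifier, the cleanup is mechanical: drop the scheme and the snowflakecomputing.com suffix. The helper below is an illustrative sketch of that rule — Syncaut does not perform this normalisation for you.

```python
import re

def normalize_account(value: str) -> str:
    """Strip URL parts from a Snowflake account value, keeping the identifier.

    Illustrative only; always verify the result against Admin -> Accounts.
    """
    value = value.strip()
    # Drop a scheme such as https://
    value = re.sub(r"^[a-z]+://", "", value)
    # Drop the snowflakecomputing.com suffix and any trailing path
    value = re.sub(r"\.snowflakecomputing\.com.*$", "", value)
    return value

print(normalize_account("https://myorg-abc12345.snowflakecomputing.com"))
# myorg-abc12345
```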
The role you provide must have the necessary privileges on the database and schema you intend to query.
ACCOUNTADMIN has full access but should be avoided for automation. Create a dedicated role with only the permissions Syncaut needs.
The warehouse does not need to be running: if it is suspended, Snowflake resumes it automatically on the first query, but this adds latency.
To verify your role has access, run the following in a Snowflake worksheet before saving credentials:
SHOW GRANTS TO ROLE your_role_name;
Snowflake credentials are stored as a JSON object containing all connection details.
In Syncaut, go to your Workspace → Credentials
Click Add Credential
Select Snowflake as the type
In the credential value field, paste the following JSON — replacing the placeholder values with your actual details:
{
"account": "myorg-accountname",
"username": "your_snowflake_username",
"password": "your_snowflake_password",
"role": "ACCOUNTADMIN",
"warehouse": "COMPUTE_WH",
"database": "PRODUCTION",
"schema": "PUBLIC"
}
Give it a recognisable name (e.g. "Client Data Warehouse — Snowflake")
Save
⚠️ The credential must be valid JSON with all required fields. Missing any of account, username, or password will cause the connection to fail immediately. The role, warehouse, database, and schema fields have sensible defaults but should always be set explicitly to avoid connecting to the wrong context.
⚠️ The account field must be the account identifier, not the full URL. For example, use myorg-abc12345, not https://myorg-abc12345.snowflakecomputing.com.
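Before pasting the JSON into Syncaut, you can run the same checks described above locally. This pre-flight sketch is an assumption about useful checks, not Syncaut's own validation code:

```python
import json

REQUIRED = ("account", "username", "password")
RECOMMENDED = ("role", "warehouse", "database", "schema")

def check_credential(raw: str) -> list:
    """Return a list of problems with a Snowflake credential JSON string."""
    problems = []
    try:
        cred = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    for field in REQUIRED:
        if not cred.get(field):
            problems.append(f"missing required field: {field}")
    for field in RECOMMENDED:
        if not cred.get(field):
            problems.append(f"set explicitly to avoid the wrong context: {field}")
    # Catch the most common mistake: pasting the login URL as the account
    if str(cred.get("account", "")).startswith("http"):
        problems.append("account must be an identifier, not a URL")
    return problems
```

An empty return value means the credential is at least structurally sound; it does not prove the password or role is correct.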
Your credentials are encrypted at rest and never exposed after saving.
When adding a Snowflake node to your workflow, you go through three steps:
Step name — a variable name used to reference this node's output in later steps (e.g. queryOrders). Must start with a letter or underscore, no spaces.
Credential — select the Snowflake credential you added above
Choose what you want this node to do:
Execute query — run any SQL statement and get the results back
Insert data — insert one or more rows into a table
Create table — create a new table with defined columns
Load data from stage — load data from a Snowflake stage (S3, GCS, or Azure Blob)
Each action comes with a pre-loaded JSON template. Replace the {{placeholders}} with actual values or reference outputs from previous workflow steps using {{stepName.data}}.
The database, schema, and warehouse fields in the payload override the defaults set in your credential for that specific node execution.
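The precedence rule can be pictured as a simple dictionary merge in which payload keys win. The names below are hypothetical illustration, not Syncaut internals:

```python
# Defaults from the saved credential
credential_defaults = {"database": "PRODUCTION", "schema": "PUBLIC", "warehouse": "COMPUTE_WH"}
# Per-node payload overrides for this execution only
node_payload = {"query": "SELECT 1", "schema": "ANALYTICS"}

# Later keys override earlier ones, so payload values take precedence
effective = {**credential_defaults, **node_payload}
print(effective["schema"])    # ANALYTICS (from the payload)
print(effective["database"])  # PRODUCTION (from the credential)
```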
Runs any SQL query and returns the result rows. Use this for SELECT statements, aggregations, or any custom SQL you need to run.
Key payload fields:
query — required. The SQL statement to execute. Can include Handlebars variables for dynamic values.
database — the Snowflake database to run against
schema — the schema to use
warehouse — the virtual warehouse to use for compute
Example:
{
"query": "SELECT * FROM orders WHERE status = '{{status}}' LIMIT 100",
"database": "PRODUCTION",
"schema": "PUBLIC",
"warehouse": "COMPUTE_WH"
}
The result rows are available as {{stepName.rows}}.
Inserts one or more rows into a Snowflake table using parameterised binds. Safer than string interpolation and prevents SQL injection.
Key payload fields:
table — required. The table name to insert into.
database, schema, warehouse — connection context
data — required. An array of row objects. All objects must have the same keys — the keys become the column names.
Example:
{
"table": "orders",
"database": "PRODUCTION",
"schema": "PUBLIC",
"warehouse": "COMPUTE_WH",
"data": [
{
"order_id": "{{orderId}}",
"customer_name": "{{customerName}}",
"total": {{total}},
"status": "pending",
"created_at": "{{timestamp}}"
}
]
}
The data array supports multiple rows — add more objects to the array to insert multiple rows in a single operation.
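To see why binds are safer than string interpolation, it helps to sketch how such a payload can be turned into one parameterised statement plus bind rows. This is a minimal illustration of the technique, not Syncaut's actual implementation:

```python
def build_insert(table: str, data: list) -> tuple:
    """Build a parameterised INSERT and its bind rows from an insert_data payload.

    Values never enter the SQL text, so malicious input cannot change the query.
    """
    if not data:
        raise ValueError("data must be a non-empty array of objects")
    columns = list(data[0])
    if any(set(row) != set(columns) for row in data):
        raise ValueError("all row objects must have the same keys")
    placeholders = ", ".join(["%s"] * len(columns))
    sql = f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({placeholders})"
    binds = [tuple(row[c] for c in columns) for row in data]
    return sql, binds

sql, binds = build_insert("orders", [{"order_id": "A1", "total": 9.5}])
print(sql)  # INSERT INTO orders (order_id, total) VALUES (%s, %s)
```

Each tuple in the bind list corresponds to one row, so a multi-row data array executes as a single batched insert.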
Creates a new table in Snowflake. Uses CREATE TABLE IF NOT EXISTS so it is safe to run multiple times without throwing an error if the table already exists.
Key payload fields:
table — required. The name of the table to create.
database, schema, warehouse — connection context
columns — required. Array of column definition objects, each with name and type.
Example:
{
"table": "order_summary",
"database": "PRODUCTION",
"schema": "PUBLIC",
"warehouse": "COMPUTE_WH",
"columns": [
{ "name": "order_id", "type": "VARCHAR(100)" },
{ "name": "customer_name", "type": "VARCHAR(255)" },
{ "name": "total", "type": "DECIMAL(10,2)" },
{ "name": "status", "type": "VARCHAR(50)" },
{ "name": "created_at", "type": "TIMESTAMP" }
]
}
Common Snowflake column types: VARCHAR(n), NUMBER, DECIMAL(p,s), BOOLEAN, TIMESTAMP, DATE, VARIANT (for semi-structured JSON data).
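The mapping from the columns array to DDL is direct: each object contributes "name type" to the column list. A sketch of that rendering, assuming each column object has name and type as in the example above:

```python
def build_create_table(table: str, columns: list) -> str:
    """Render idempotent CREATE TABLE DDL from a create_table payload (illustrative)."""
    if not columns:
        raise ValueError("columns must be a non-empty array")
    cols = ", ".join(f'{c["name"]} {c["type"]}' for c in columns)
    # IF NOT EXISTS makes repeated runs a no-op instead of an error
    return f"CREATE TABLE IF NOT EXISTS {table} ({cols})"

print(build_create_table("order_summary", [
    {"name": "order_id", "type": "VARCHAR(100)"},
    {"name": "total", "type": "DECIMAL(10,2)"},
]))
# CREATE TABLE IF NOT EXISTS order_summary (order_id VARCHAR(100), total DECIMAL(10,2))
```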
Loads data from a Snowflake named stage into a table using the COPY INTO command. The stage must already be configured in your Snowflake account pointing to your file source (S3, GCS, or Azure Blob).
Key payload fields:
table — required. The destination table.
stage — required. The Snowflake stage path, e.g. @my_stage/data.csv or @my_s3_stage/
fileFormat — the file format type. Options: CSV, JSON, PARQUET, AVRO, ORC
database, schema, warehouse — connection context
Before using this action, make sure your stage exists in Snowflake. You can create and manage stages in Snowflake under Data → Stages.
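Conceptually, the payload fields map onto a COPY INTO statement. The sketch below shows one plausible rendering of that mapping; it is an illustration of the command shape, not Syncaut's exact SQL:

```python
def build_copy_into(table: str, stage: str, file_format: str = "CSV") -> str:
    """Render a COPY INTO statement from a load_data payload (illustrative)."""
    if not stage.startswith("@"):
        raise ValueError("stage paths start with @, e.g. @my_stage/data.csv")
    return f"COPY INTO {table} FROM {stage} FILE_FORMAT = (TYPE = {file_format})"

print(build_copy_into("orders", "@my_stage/data.csv"))
# COPY INTO orders FROM @my_stage/data.csv FILE_FORMAT = (TYPE = CSV)
```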
Reference outputs from previous workflow steps inside the payload using Handlebars syntax:
{{stepName.data}}
For example, if a previous step named getOrders fetched orders from Shopify, reference the order ID in your insert payload as:
{{getOrders.data[0].id}}
To pass an entire object or array as a JSON string, use the {{json variable}} helper:
"metadata": "{{json getOrders.data}}"
Use the variable picker (+ variable) above the payload editor to insert common variables without typing them manually.
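A variable path like getOrders.data[0].id is just a walk through the previous step's output: each dotted segment is a key lookup and each [n] is an array index. The resolver below is a simplified stand-in for the real template engine, shown only to make the path semantics concrete:

```python
import re

def resolve(path: str, context: dict):
    """Resolve a dotted/indexed path such as "getOrders.data[0].id"."""
    value = context
    # Split on dots and brackets: "a.b[0].c" -> ["a", "b", "0", "c"]
    for part in re.findall(r"[^.\[\]]+", path):
        value = value[int(part)] if part.isdigit() else value[part]
    return value

steps = {"getOrders": {"data": [{"id": 1001, "total": 19.99}]}}
print(resolve("getOrders.data[0].id", steps))  # 1001
```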
Every Snowflake node stores its result under the step name you provide. The output structure is:
{
"data": { "rows": [] },
"action": "execute_query",
"rows": [],
"rowCount": 0
}
{{stepName.rows}} — the array of result rows (most common)
{{stepName.rowCount}} — the number of rows returned
{{stepName.data.rows}} — same as rows, alternative path
For a query that returns order data, access individual fields like:
{{stepName.rows[0].ORDER_ID}}
Note: Snowflake returns column names in uppercase by default unless you used quoted identifiers when creating the table. Reference them in uppercase in your variable paths.
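The uppercase rule matters because the lookup is an exact key match. With assumed example data shaped like the output structure above:

```python
# Hypothetical output for a step named queryOrders; keys uppercased by Snowflake
query_orders = {
    "rows": [{"ORDER_ID": "A1", "STATUS": "pending"}],
    "rowCount": 1,
}

# {{queryOrders.rows[0].ORDER_ID}} resolves against the uppercase key:
print(query_orders["rows"][0]["ORDER_ID"])  # A1
# A lowercase path would find nothing, because "order_id" is not a key:
print("order_id" in query_orders["rows"][0])  # False
```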
Failed to connect to Snowflake
The most common causes are a wrong account identifier, incorrect username or password, or a role that does not exist. Double-check all fields in your credential JSON. Make sure account is in the orgname-accountname format, not a full URL.
Cannot parse Snowflake credentials
Your credential JSON is malformed. Edit it in Workspace → Credentials and check for missing quotes, trailing commas, or extra characters. The JSON must be valid and contain all required fields.
Snowflake query failed: Object does not exist
The database, schema, or table referenced in your query does not exist, or the role you are using does not have permission to see it. Verify the names are correct and that your role has been granted access with GRANT USAGE ON DATABASE, GRANT USAGE ON SCHEMA, and GRANT SELECT ON TABLE.
Snowflake query failed: Insufficient privileges
Your role does not have the required privilege for the operation. For SELECT queries you need SELECT privilege. For INSERT you need INSERT. For CREATE TABLE you need CREATE TABLE on the schema. Grant the necessary privileges in Snowflake or switch to a role with broader access.
table and data are required for insert_data
Your Insert Data payload is missing either table or data. Make sure both fields are present and that data is a non-empty array of objects.
table and columns are required for create_table
Your Create Table payload is missing either table or columns. Make sure columns is a non-empty array with each entry having both name and type.
table and stage are required for load_data
Your Load Data payload is missing table or stage. Make sure the stage path matches a named stage that exists in your Snowflake account.
Credential not found
The credential attached to the node was deleted from your workspace. Re-add it under Workspace → Credentials and update the node.
Invalid JSON payload
The payload has a syntax error or an unresolved variable returned an unexpected value. Check the payload editor and make sure all {{placeholders}} are resolving correctly before running.
Warehouse suspended / slow first query
If your Snowflake warehouse is set to auto-suspend, the first query after a period of inactivity will take longer while the warehouse resumes. This is normal Snowflake behaviour. Consider setting a shorter auto-suspend window or a larger warehouse size for latency-sensitive workflows.