Snowflake is a fully managed SaaS data warehouse that runs on Amazon Web Services cloud infrastructure: AWS EC2 virtual compute instances handle compute, while S3 provides persistent data storage.
Pricing
Snowflake pricing is based on two factors: the volume of data stored in your Snowflake destination and the amount of compute usage (the time the server runs), measured in seconds.
Snowflake offers two types of plans, each with varying levels of access and features: On Demand plans, which are commitment-free and usage-based, and Capacity plans, which offer guaranteed price discounts. Learn more about Snowflake plans and pricing here.
Snowflake warehouse sizes
Snowflake data warehouses come in different sizes - X-Small, Large, and 3X-Large, for example - which determine how many servers comprise each cluster in a warehouse.
While the size of a warehouse can impact the time required to execute queries, bigger doesn’t always mean better. Warehouse size determines the number of credits consumed, which directly impacts your Snowflake costs. Learn more about Snowflake warehouse sizes here.
To help you select the warehouse size that fits your needs and budget, check out Snowflake’s Warehouse Considerations guide before getting started.
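The size-to-credit relationship can be sketched with a back-of-the-envelope estimate. The credits-per-hour figures below follow Snowflake's published size table; the per-credit dollar price is a placeholder, since actual rates depend on your plan, cloud provider, and region.

```python
# Credits billed per hour while a single-cluster warehouse is running.
CREDITS_PER_HOUR = {
    "X-Small": 1, "Small": 2, "Medium": 4, "Large": 8,
    "X-Large": 16, "2X-Large": 32, "3X-Large": 64,
}

def estimated_cost(size: str, hours_running: float, price_per_credit: float) -> float:
    """Credits consumed times price per credit, for one cluster."""
    return CREDITS_PER_HOUR[size] * hours_running * price_per_credit

# A Large warehouse running 2 hours at a hypothetical $3/credit:
print(estimated_cost("Large", 2, 3.0))  # 8 credits/hr * 2 hr * $3 = 48.0
```

Doubling the warehouse size doubles the hourly credit burn, so oversizing is only worthwhile when the larger warehouse finishes the work proportionally faster.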
Automated warehouse management
To reduce usage, you can automate the management of your Snowflake warehouse: the warehouse can suspend automatically after a specified period of inactivity, and then resume automatically when new queries arrive. Note that these settings apply to the entire warehouse and not to individual clusters.
Enabling these settings depends on your workload and availability needs. Learn more about the Auto Suspend and Auto Resume features here.
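As a sketch, these settings map to Snowflake's ALTER WAREHOUSE parameters, AUTO_SUSPEND (in seconds) and AUTO_RESUME. The helper below just builds the DDL string; the warehouse name is a placeholder.

```python
def auto_manage_sql(warehouse: str, suspend_after_secs: int) -> str:
    """Build the DDL that enables auto-suspend/auto-resume for a warehouse."""
    return (
        f"ALTER WAREHOUSE {warehouse} SET "
        f"AUTO_SUSPEND = {suspend_after_secs} AUTO_RESUME = TRUE;"
    )

# Suspend MY_WH after 5 idle minutes; resume on the next incoming query:
print(auto_manage_sql("MY_WH", 300))
```

A short suspend window saves credits for bursty workloads, but adds resume latency to the first query after an idle period.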
Note: Stitch will only ever impact your Snowflake usage when loading data.
Setup
High-level details about Stitch’s implementation of Snowflake, such as supported connection methods, availability on Stitch plans, etc.
Replication
Details about Stitch and replication for Snowflake destinations.
Incremental Replication - Upserts: Supported. Using a table’s Primary Keys, Stitch will perform an update operation on existing rows to overwrite the data.

Primary Keys: Supported. Snowflake destinations natively support Primary Keys. Stitch requires Primary Keys to de-dupe data during Incremental Replication. Primary Key columns are created using Primary Key constraints, which require that values be both unique and not null.

Multiple data types: Unsupported. Snowflake destinations do not support multiple data types in a single column. To accommodate this scenario, Stitch will create additional columns for each data type. Refer to the Table structural changes guide for examples and more info.

Nested data structure support: Supported. Snowflake destinations natively support nested data structures. When Stitch replicates source data containing objects or arrays, Stitch will load the data intact into a VARIANT column. You can then use Snowflake’s functions for semi-structured data to parse the data. More info can be found in Snowflake’s documentation.

Incompatible sources: No reported incompatibilities. No integration compatibility issues have been reported for Snowflake.
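The upsert behavior above can be illustrated with a minimal sketch (not Stitch's actual loader): incoming rows overwrite existing rows that share a Primary Key value, and rows with new keys are appended.

```python
def upsert(existing: list[dict], incoming: list[dict], pk: str) -> list[dict]:
    """Merge incoming rows into existing rows, de-duping on the Primary Key."""
    merged = {row[pk]: row for row in existing}
    for row in incoming:
        merged[row[pk]] = row  # overwrite on key match, insert otherwise
    return list(merged.values())

table = [{"id": 1, "name": "old"}, {"id": 2, "name": "keep"}]
batch = [{"id": 1, "name": "new"}, {"id": 3, "name": "added"}]
print(upsert(table, batch, pk="id"))
# id 1 is overwritten, id 2 is untouched, id 3 is appended
```

This is why Stitch requires a Primary Key for Incremental Replication: without one, there is no way to tell an updated row from a brand-new row.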
Limitations
In this section:
- Object name limits - Details about object naming rules imposed by Snowflake destinations.
- Data limits - Details about data limitations imposed by Snowflake destinations.
- Column naming - Details about transformations performed by Stitch to ensure source column names adhere to the object naming rules imposed by Snowflake destinations.
Object name limits
Details about object naming rules imposed by Snowflake destinations.
Case sensitivity: Insensitive. This means that Snowflake destinations do not treat upper- and lowercase characters as different characters. Refer to the Column naming section of this guide for examples.

Table name length: 255 characters. Tables with names that exceed this limit will be rejected. Rejected tables are logged in the _sdc_rejected table.

Column name length: 251 characters. Columns with names that exceed this limit will be rejected. Rejected columns are logged in the _sdc_rejected table.
Data limits
Details about data limitations imposed by Snowflake destinations.
Max record size: 4 MB. Records that exceed this limit will be rejected. Rejected records are logged in the _sdc_rejected table.

Timezone support: Unsupported. Snowflake will store the data without timezone information. More info about timestamp data types can be found in Snowflake’s documentation.

Timestamp ranges: Up to 10,000 AD. Timestamp values outside of this range will be rejected and logged in the _sdc_rejected table. More info about Snowflake time data types can be found in Snowflake’s documentation.

Integer range: Limited to 38 digits of precision. Snowflake will store integer data as NUMBER(38,0). Integer values that exceed the limit noted above will be rejected and logged in the _sdc_rejected table.

Decimal precision: Must be between 1 and 38 (38 is the default). Snowflake will store decimal data as NUMBER with the specified precision and scale. Decimal values outside of this range will be rejected and logged in the _sdc_rejected table.

VARCHAR max width: 16 MB. Note: While Snowflake’s limit is 16 MB, Stitch imposes a record limit of 4 MB. The record as a whole must be less than 4 MB or it will be rejected and logged in the _sdc_rejected table.
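The 4 MB record limit can be checked ahead of time with a rough sketch like the one below. Measuring the JSON-serialized size is an illustration of the idea, not Stitch's exact accounting of record size.

```python
import json

MAX_RECORD_BYTES = 4 * 1024 * 1024  # Stitch's 4 MB per-record limit

def within_record_limit(record: dict) -> bool:
    """Approximate a record's serialized size and compare it to the limit."""
    return len(json.dumps(record).encode("utf-8")) <= MAX_RECORD_BYTES

print(within_record_limit({"id": 1, "notes": "x" * 100}))                # True
print(within_record_limit({"id": 1, "notes": "x" * (5 * 1024 * 1024)}))  # False
```

A pre-flight check like this helps catch oversized source records (very wide text fields, large embedded documents) before they show up as rejections.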
Column naming
Details about transformations performed by Stitch to ensure source column names adhere to the object naming rules imposed by Snowflake destinations.
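As an illustrative sketch (not Stitch's exact rules), a loader targeting Snowflake has to normalize names along these lines: replace characters that aren't allowed in unquoted identifiers, prefix names that start with a digit, canonicalize case, and truncate to the 251-character column name limit.

```python
import re

def sanitize_column_name(name: str, max_len: int = 251) -> str:
    """Normalize a source column name into a Snowflake-safe identifier."""
    # Replace anything outside letters, digits, and underscores.
    name = re.sub(r"[^A-Za-z0-9_]", "_", name)
    # Identifiers can't start with a digit; prefix with an underscore.
    if name and name[0].isdigit():
        name = "_" + name
    # Uppercase (Snowflake's canonical form) and enforce the length limit.
    return name.upper()[:max_len]

print(sanitize_column_name("order-date"))  # ORDER_DATE
print(sanitize_column_name("9lives"))      # _9LIVES
```

Because Snowflake treats upper- and lowercase identifiers as equivalent, two source columns that differ only in case would collide after a transformation like this - which is exactly the scenario the Column naming rules exist to resolve.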
Compare destinations
Not sure if Snowflake is the data warehouse for you? Check out the Choosing a Stitch Destination guide to compare each of Stitch’s destination offerings.
Questions? Feedback?
Did this article help? If you have questions or feedback, feel free to submit a pull request with your suggestions, open an issue on GitHub, or reach out to us.