Snowflake Setup Guide
Follow our setup guide to connect your Snowflake data warehouse to Fivetran.
Prerequisites
To connect Snowflake to Fivetran, you need the following:
Snowflake account: A Snowflake account with a user role that has the permissions to create a user, role, and warehouse for Fivetran (such as the sysadmin or securityadmin roles).
Fivetran role: A Fivetran role with the Create Destinations or Manage Destinations permissions.
Collation requirements: The Snowflake data warehouse's collation must be capable of handling both case-sensitive and non-ASCII characters to ensure the integrity of your data.
NOTE: Avoid applying custom collation to your account, database, schema, table, or column. Instead, use collation directly in your queries. Custom collation reduces the maximum column storage for strings from 16 MB to 8 MB, which can result in a String is too long to be compared using collation error if more than 8 MB of data is loaded into a column with collation enabled.
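For example, instead of defining collation on a column, you can apply collation only where a query needs it. The following is a minimal sketch that assumes a hypothetical my_schema.customers table with a name column; 'en-ci' is one of Snowflake's built-in case-insensitive collation specifiers:

-- Sort case-insensitively at query time without storing collation on the column
SELECT name
FROM my_schema.customers
ORDER BY name COLLATE 'en-ci';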
Snowflake configuration parameters:
- The PREVENT_LOAD_FROM_INLINE_URL and REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_OPERATION parameters of the Snowflake data warehouse must be set to FALSE.
- If your database connector is configured to preserve source naming, the QUOTED_IDENTIFIERS_IGNORE_CASE parameter must be set to FALSE for Fivetran users.
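If you want to confirm or adjust these settings yourself, the following sketch shows one way to do it in a Snowflake worksheet. It assumes you have a role with the required privileges (for example, ACCOUNTADMIN for account-level parameters) and that the Fivetran user is named FIVETRAN_USER, as in the setup script later in this guide:

-- Check the current account-level values
SHOW PARAMETERS LIKE 'PREVENT_LOAD_FROM_INLINE_URL' IN ACCOUNT;
SHOW PARAMETERS LIKE 'REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_OPERATION' IN ACCOUNT;

-- Set both parameters to FALSE for the account
ALTER ACCOUNT SET PREVENT_LOAD_FROM_INLINE_URL = FALSE;
ALTER ACCOUNT SET REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_OPERATION = FALSE;

-- Set QUOTED_IDENTIFIERS_IGNORE_CASE to FALSE for the Fivetran user only
ALTER USER FIVETRAN_USER SET QUOTED_IDENTIFIERS_IGNORE_CASE = FALSE;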
IMPORTANT: In Snowflake, if you use double quotes around an identifier name, it makes the identifier name case-sensitive. We recommend using the create <identifier> <identifier_name> or the create <identifier> "IDENTIFIER_NAME" format. See Snowflake's documentation on identifiers for more information.
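As an illustrative sketch (the demo_db names below are hypothetical), the first two statements resolve to the same uppercase identifier, while the quoted lowercase name creates a separate, case-sensitive object that must always be referenced with double quotes:

CREATE DATABASE IF NOT EXISTS demo_db;     -- stored as DEMO_DB
CREATE DATABASE IF NOT EXISTS "DEMO_DB";   -- same object as the line above
CREATE DATABASE IF NOT EXISTS "demo_db";   -- a different, case-sensitive object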
Setup instructions
Choose your deployment model
Before setting up your destination, decide which deployment model best suits your organization's requirements. This destination supports both SaaS and Hybrid deployment models, offering flexibility to meet diverse compliance and data governance needs.
See our Architecture documentation to understand the use cases of each model and choose the model that aligns with your security and operational requirements.
NOTE: You must have an Enterprise or Business Critical plan to use the Hybrid Deployment model.
Choose Snowflake warehouse type
You can choose to create an exclusive warehouse for Fivetran or use an existing warehouse:
You can create and use an exclusive warehouse for Fivetran. Fivetran operations will never contend with your queries for resources. You will have to pay the cost of running the warehouse.
You can use a shared warehouse to reduce your warehouse running cost. Fivetran loads data incrementally and consumes very little compute resources. Fivetran operations may have to contend with your queries for the shared resources.
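If you opt for a shared warehouse, one approach is to grant the Fivetran role usage on that warehouse and make it the Fivetran user's default. This is a sketch only, assuming a hypothetical existing warehouse named ANALYTICS_WH and the FIVETRAN_ROLE and FIVETRAN_USER objects created by the setup script in the next section:

-- Allow the Fivetran role to use an existing shared warehouse
GRANT USAGE ON WAREHOUSE ANALYTICS_WH TO ROLE FIVETRAN_ROLE;

-- Make the shared warehouse the Fivetran user's default
ALTER USER FIVETRAN_USER SET DEFAULT_WAREHOUSE = ANALYTICS_WH;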
Run script in Snowflake warehouse
Log in to your Snowflake data warehouse.
Copy the following script to a new worksheet:
begin;

-- create variables for user / password / role / warehouse / database (needs to be uppercase for objects)
set role_name = 'FIVETRAN_ROLE';
set user_name = 'FIVETRAN_USER';
set user_password = 'password123';
set warehouse_name = 'FIVETRAN_WAREHOUSE';
set database_name = 'FIVETRAN_DATABASE';

-- change role to securityadmin for user / role steps
use role securityadmin;

-- create role for fivetran
create role if not exists identifier($role_name);
grant role identifier($role_name) to role SYSADMIN;

-- create a user for fivetran
create user if not exists identifier($user_name)
  password = $user_password
  default_role = $role_name
  default_warehouse = $warehouse_name;
grant role identifier($role_name) to user identifier($user_name);

-- set binary_input_format to BASE64
ALTER USER identifier($user_name) SET BINARY_INPUT_FORMAT = 'BASE64';

-- change role to sysadmin for warehouse / database steps
use role sysadmin;

-- create a warehouse for fivetran
create warehouse if not exists identifier($warehouse_name)
  warehouse_size = xsmall
  warehouse_type = standard
  auto_suspend = 60
  auto_resume = true
  initially_suspended = true;

-- create database for fivetran
create database if not exists identifier($database_name);

-- grant fivetran role access to warehouse
grant USAGE on warehouse identifier($warehouse_name) to role identifier($role_name);

-- grant fivetran access to database
grant CREATE SCHEMA, MONITOR, USAGE on database identifier($database_name) to role identifier($role_name);

-- change role to ACCOUNTADMIN for STORAGE INTEGRATION support (only needed for Snowflake on GCP)
use role ACCOUNTADMIN;
grant CREATE INTEGRATION on account to role identifier($role_name);
use role sysadmin;

commit;
Depending on whether you want to create a new warehouse or use a shared warehouse, do one of the following:
- If you want to create a new exclusive warehouse, don't make any changes to the FIVETRAN_WAREHOUSE value in the script.
- If you want Fivetran to use a shared warehouse to ingest data, change the FIVETRAN_WAREHOUSE value in the script to the name of the shared warehouse.
Replace the default FIVETRAN_ROLE, FIVETRAN_DATABASE, FIVETRAN_USER, and password123 values with values that conform to your specific naming conventions for those resources.
IMPORTANT: Do not use this username for any other purpose.
Make a note of the values that replace the default FIVETRAN_DATABASE, FIVETRAN_USER, and password123 values. You will need them to configure Fivetran.
Depending on the Snowflake interface you are using, do one of the following:
- Classic web interface: Select the All Queries checkbox.
- Snowsight: Select the text of all queries in the query editor.
Run the script.
(Optional) Key-pair authentication
Perform the following steps if you want to use key-pair authentication:
Open the command line in a terminal window.
Generate a private key. You can generate an encrypted version of the private key or an unencrypted version of the private key.
To generate an unencrypted version, you can execute one of the following commands:
openssl genrsa -out rsa_key.pem 2048
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8 -nocrypt
To generate an encrypted version, execute the command openssl genrsa 2048 | openssl pkcs8 -topk8 -v1 <ALGORITHM> -inform PEM -out rsa_key.p8. You can use different algorithms with the -v1 command line option. These algorithms use the PKCS#12 password-based encryption algorithm and allow you to use strong encryption algorithms like triple DES or 128-bit RC2. You can use the following encryption algorithms:
- PBE-SHA1-RC2-40
- PBE-SHA1-RC4-40
- PBE-SHA1-RC2-128
- PBE-SHA1-RC4-128
- PBE-SHA1-3DES
- PBE-SHA1-2DES
To use stronger encryption algorithms, execute the command openssl genrsa 2048 | openssl pkcs8 -topk8 -v2 <ALGORITHM> -inform PEM -out rsa_key.p8. You can use different algorithms with the -v2 command line option. You can use the following encryption algorithms:
- AES128
- AES256
- DES3
From the command line, generate the public key by referencing the private key. Execute the command openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub.
Assign the public key to the Snowflake user. In a Snowflake worksheet, execute the command alter user <USERNAME> set rsa_public_key='<PUBLIC_KEY>';.
NOTE: You must have the sysadmin or securityadmin role to execute this command.
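To confirm that the key was registered, you can describe the user in a worksheet and check that the RSA_PUBLIC_KEY_FP property is populated. This sketch assumes the username FIVETRAN_USER from the setup script; replace it if you chose a different name:

-- The RSA_PUBLIC_KEY_FP row shows the fingerprint of the registered public key
DESC USER FIVETRAN_USER;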
(Optional) Configure external storage for Hybrid Deployment
IMPORTANT: This is a mandatory step only for the Hybrid Deployment model. Skip to the next step if you want to use the SaaS Deployment model.
In the Hybrid Deployment model, we temporarily stage your data in an external storage location before writing it to your Snowflake destination.
We support the following storage services for staging your data:
- Snowflake internal stage: Stores your data in Snowflake's user stage. By default, we use this storage for staging your data. To use Snowflake's internal stage, you do not have to configure anything. We will use the default user stage associated with your Snowflake account.
- Amazon S3: Stores your data in an S3 bucket.
- Azure Blob Storage: Stores your data in an Azure Blob Storage container.
- Google Cloud Storage: Stores your data in a Google Cloud Storage bucket.
Configure Amazon S3 bucket
Create S3 bucket
Create an S3 bucket by following the instructions in AWS's documentation.
NOTE: Your S3 bucket and Snowflake account must be in the same region.
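If you are unsure which region your Snowflake account runs in, you can check it from a Snowflake worksheet before creating the bucket, for example:

-- Returns the cloud region of the current Snowflake account, for example AWS_US_EAST_1
SELECT CURRENT_REGION();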
Create IAM policy for S3 bucket
Open your Amazon IAM console.
Go to Policies, and then click Create policy.
Go to the JSON tab.
Copy the following policy and paste it in the JSON editor.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "s3:DeleteObjectTagging", "s3:ReplicateObject", "s3:PutObject", "s3:GetObjectAcl", "s3:GetObject", "s3:DeleteObjectVersion", "s3:ListBucket", "s3:PutObjectTagging", "s3:DeleteObject", "s3:PutObjectAcl" ], "Resource": [ "arn:aws:s3:::{your-bucket-name}/*", "arn:aws:s3:::{your-bucket-name}" ] } ] }
In the policy, replace {your-bucket-name} with the name of your S3 bucket.
Click Next.
Enter a Policy name.
Click Create policy.
(Optional) Configure IAM role authentication
IMPORTANT:
- Perform this step only if you want us to use AWS Identity and Access Management (IAM) to authenticate requests in your S3 bucket. Skip to the next step if you want to use IAM user credentials for authentication.
- To authenticate using IAM, your Hybrid Deployment Agent must run on an EC2 instance in the account associated with your S3 bucket.
In the Amazon IAM console, go to Roles, and then click Create role.
Select AWS service.
In the Service or use case drop-down menu, select EC2.
Click Next.
Select the checkbox for the IAM policy you created for your S3 bucket.
Click Next.
Enter the Role name and click Create role.
In the AWS Management Console, go to the EC2 service.
Go to Instances, and then select the EC2 instance hosting your Hybrid Deployment Agent.
In the top right corner, click Actions and go to Security > Modify IAM role.
In the IAM role drop-down menu, select the new IAM role you created and click Update IAM role.
(Optional) Configure IAM user authentication
IMPORTANT: Perform this step only if you want us to use IAM user credentials to authenticate requests in your S3 bucket.
In the Amazon IAM console, go to Users, and then click Create user.
Enter a User name, and then click Next.
Select Attach policies directly.
Select the checkbox next to the policy you created in the Create IAM policy for S3 bucket step, and then click Next.
In the Review and create page, click Create user.
In the Users page, open the user you created.
Click Create access key.
Select Application running outside AWS, and then click Next.
Click Create access key.
Click Download .csv file to download the Access key ID and Secret access key to your local drive. You will need them to configure Fivetran.
Configure Azure Blob storage container
Create Azure storage account
Create an Azure storage account by following the instructions in Azure Blob Storage's documentation. While creating the account, make sure you do the following:
In the Advanced tab:
- select the Require secure transfer for REST API operations and Enable storage account key access checkboxes.
- in the Permitted scope for copy operations drop-down menu, select From any storage account.
In the Networking tab, select one of the following Network access options:
- If your Snowflake destination is not hosted on Azure or if your storage container and destination are in different regions, select Enable public access from all networks.
- If your Snowflake destination is hosted on Azure and if it is in the same region as your storage container, select Enable public access from selected virtual networks and IP addresses.
In the Encryption tab, choose Microsoft-managed keys (MMK) as the Encryption type.
If you selected Enable public access from selected virtual networks and IP addresses as the Network access option in Step 1, do the following:
i. Log in to your Snowflake account and run the following commands to get your Snowflake VNet subnet ID:
USE ROLE ACCOUNTADMIN;
SELECT SYSTEM$GET_SNOWFLAKE_PLATFORM_INFO();
ii. Log in to your Azure account using Azure CLI and run the following command:
az storage account network-rule add --resource-group "<your_resource_group>" --account-name "<your_storage_account>" --subnet <your_snowflake_vnet_subnet_id>
NOTE: Before running the command, replace <your_resource_group> with your Azure resource group name, <your_storage_account> with your Azure storage account name, and <your_snowflake_vnet_subnet_id> with your Snowflake VNet subnet ID.
iii. Log in to the Azure portal.
iv. Go to your storage account.
v. On the navigation menu, click Networking under Security + networking.
vi. Go to the Firewall section.
vii. In the Address range field, enter the IP address of the machine that hosts your Hybrid Deployment Agent.
viii. Click Save.
Find storage account name and access key
Log in to the Azure portal.
Go to your storage account.
On the navigation menu, click Access keys under Security + networking.
Make a note of the Storage account name and Key. You will need them to configure Fivetran.
IMPORTANT: As a security best practice, do not save your access key and account name anywhere in plain text that is accessible to others.
Configure Google Cloud Storage bucket
Create Google Cloud Storage bucket
Create a Google Cloud Storage (GCS) bucket by following the instructions in GCS's documentation.
NOTE: Your GCS bucket and Snowflake account must be in the same region.
Make a note of the bucket name. You will need it to configure Fivetran.
Create service account and private key
Create a service account to provide Fivetran access to the GCS bucket you created in the previous step.
Create a private service account key in JSON format for the new service account you created in Step 1. The private key must be in the following format:
{ "type": "service_account", "project_id": "<project_id>", "private_key_id": "<key_id>", "private_key": "*****", "client_email": "name@project.iam.gserviceaccount.com", "client_id": "<client_id>", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://oauth2.googleapis.com/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/name%40project.iam.gserviceaccount.com" }
Keep the JSON file in a secure location. You will need it to configure Fivetran.
Assign permissions to service account
Log in to the Google Cloud console.
Go to Storage > Browser.
Select the bucket you want to use.
Go to the Permissions tab and then click Add Principals.
Specify the service account you created.
In the Select a role drop-down menu, select Storage Object Admin.
Click Save.
Add object lifecycle rule
In your Google Cloud console, go to Storage > Browser.
Find and select the bucket you are using for Fivetran.
In the Lifecycle rules column, select its rules.
Click ADD A RULE. A detail view will open.
In the Select an action section, select Delete object.
Click CONTINUE.
In the Select object conditions section, select the Age checkbox and then enter 1.
Click CONTINUE and then click CREATE.
(Optional) Connect using AWS PrivateLink, Azure Private Link, or Google Cloud Private Service Connect
IMPORTANT: Do not perform this step if you want to use Hybrid Deployment for your data pipeline. You must have a Business Critical plan to use AWS PrivateLink, Azure Private Link, or Google Cloud Private Service Connect.
You can connect Fivetran to your Snowflake destination using either AWS PrivateLink, Azure Private Link, or Google Cloud Private Service Connect. Fivetran uses your chosen service to move your data securely between our system and your Snowflake destination.
Connect using AWS PrivateLink
AWS PrivateLink allows VPCs and AWS-hosted or on-premises services to communicate with one another without exposing traffic to the public internet. PrivateLink is the most secure connection method. Learn more in AWS’ PrivateLink documentation.
Prerequisites
To set up AWS PrivateLink, you need a Business Critical Snowflake account in one of our supported regions.
Postrequisites
To use AWS PrivateLink, you must select AWS as a Cloud service provider in the Complete Fivetran configuration step.
Configure AWS PrivateLink for Snowflake destination
Contact Snowflake Support and tell them that you want to enable AWS PrivateLink for Fivetran. Provide the following information:
- Fivetran’s AWS VPC Account ID: arn:aws:iam::834469178297:root
- Your Snowflake account URL
Once Snowflake receives this information, they will allow Fivetran to establish a private link connection to your Snowflake destination.
Snowflake will provide you with a VPCe in the format com.amazonaws.vpce.<region_id>.vpce-svc-xxxxxxxxxxxxxxxxx. Make a note of this VPCe. You will need it later.
Go to your Snowflake instance and execute the following query:
SELECT SYSTEM$GET_PRIVATELINK_CONFIG();
Send the VPCe you found in Step 2 and the output of the query to your Fivetran account manager. The output will be in the following format:
{ "privatelink-account-name": "<account_name>", "privatelink-account-url": "<privatelink_account_url>", "privatelink-ocsp-url": "<privatelink_ocsp_url>", "privatelink-vpce-id": "<aws_vpce_id>" }
Notify your Fivetran account manager that you have completed these steps. We then finish setting up PrivateLink for your Snowflake destination on our side. Once the setup is complete, we send you the host address and resource ID for your PrivateLink connection.
Make a note of the host address that you received from Fivetran. You need it to configure Fivetran.
Contact Snowflake Support and provide the resource ID (also known as the vpc_endpoint_id) that you received from Fivetran.
Connect using Azure Private Link
Azure Private Link allows VNet and Azure-hosted or on-premises services to communicate with one another without exposing traffic to the public internet. Learn more in Microsoft's Azure Private Link documentation.
NOTE: Inform the Fivetran Account team if you have enabled Azure Private Endpoints for Internal Stages so we can make sure that all data is traversing through Private Link and the associated Private Endpoints.
Prerequisites
To set up Azure Private Link, you need a Snowflake account hosted in Azure.
Postrequisites
To use Azure Private Link, you must select Azure as a Cloud service provider in the Complete Fivetran configuration step.
Configure Azure Private Link for Snowflake destination
Contact Snowflake Support and tell them that you want to enable Azure Private Link for Fivetran. Provide the following information:
- Fivetran’s Azure subscription ID: 6d755170-32cd-4a50-8bf2-621c984f3528
- Your Snowflake account URL
Once Snowflake receives this information, they will allow Fivetran to establish a private link connection to your Snowflake destination.
Once Snowflake has approved your request, go to your Snowflake instance and execute the following query as a user with the Snowflake ACCOUNTADMIN role to obtain the URL that we need to access Snowflake through Azure Private Link:
SELECT SYSTEM$GET_PRIVATELINK_CONFIG();
Send the output of the query to your Fivetran account manager. The output will be in the following format:
{ "privatelink-account-name": "<account_identifier>", "privatelink-internal-stage": "<privatelink_stage_endpoint>", "privatelink-account-url":"<privatelink_account_url>", "privatelink-ocsp-url": "<privatelink_ocsp_url>", "privatelink-pls-id": "<azure_privatelink_service_id>" }
We then finish setting up Private Link for your Snowflake destination on our side. Once the setup is complete, we send you the host address and the resource ID for your Private Link connection.
Make a note of the host address that you received from Fivetran. You need it to configure Fivetran.
Contact Snowflake Support and provide the resource ID (also known as the private_endpoint_id) that you received from Fivetran.
For more information, see How to set up Privatelink to Snowflake from 3rd party Cloud Service vendors.
Connect using Google Cloud Private Service Connect
Google Cloud Private Service Connect allows VPCs and Google-hosted or on-premises services to communicate with one another without exposing traffic to the public internet. Learn more in Google Cloud's Private Service Connect documentation.
Prerequisites
To set up Google Cloud Private Service Connect, you need a Business Critical Snowflake account in one of our supported regions.
Postrequisites
To use Google Cloud Private Service Connect, you must select GCP as a Cloud service provider in the Complete Fivetran configuration step.
Configure Google Cloud Private Service Connect for Snowflake destination
Follow the instructions in Snowflake's Google Cloud Private Service Connect & Snowflake documentation. When contacting Snowflake Support, tell them that you want to enable Google Cloud Private Service Connect for Fivetran. Provide the following information:
- Fivetran’s project ID: fivetran-donkeys
- Your Snowflake account URL
Once Snowflake receives this information, they will allow auto-approval for Fivetran’s project.
Once Snowflake has approved your request, go to your Snowflake instance and execute the following query as a user with the Snowflake ACCOUNTADMIN role to obtain the URL that we need to access Snowflake through Google Cloud Private Service Connect:
SELECT SYSTEM$GET_PRIVATELINK_CONFIG();
Send the output of the query to your Fivetran account manager. The output will be in the following format:
{ "privatelink-account-name": "<account_identifier>", "privatelink-account-url":"<privatelink_account_url>", "privatelink-ocsp-url": "<privatelink_ocsp_account_url>", "privatelink-gcp-service-attachment": "<privatelink_service_attachment_id>" }
We then finish setting up Private Service Connect for your Snowflake destination on our side. Once the setup is complete, we send you the host address and the Resource ID (also known as the psc_connection_id).
Make a note of the host address that you received from Fivetran. You need it to configure Fivetran.
NOTE: If you need to troubleshoot your connection with Snowflake Support, be sure to provide your Private Service Connection ID.
(Optional) Configure Snowflake network policy
If you have defined a Snowflake Network Policy, update your Snowflake Network Policy to add Fivetran IP address CIDRs or VPC IDs from one of the following sections.
Without AWS PrivateLink, Azure Private Link, or Google Cloud Private Service Connect
If you haven't configured AWS PrivateLink, Azure Private Link, or Google Cloud Private Service Connect connectivity, add Fivetran's IP addresses to your network policy's allowed list to allow connections from Fivetran.
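For example, assuming an existing network policy with the hypothetical name fivetran_access_policy, you could update its allowed list as shown below. Note that SET ALLOWED_IP_LIST replaces the existing list, so include any CIDRs that are already allowed along with the Fivetran CIDRs (shown here as placeholders):

-- Replace the placeholders with your existing allowed CIDRs plus Fivetran's IP CIDRs
ALTER NETWORK POLICY fivetran_access_policy
  SET ALLOWED_IP_LIST = ('<existing_cidr>', '<fivetran_cidr_1>', '<fivetran_cidr_2>');

-- Apply the policy to the Fivetran user if it is not already active for the account
ALTER USER FIVETRAN_USER SET NETWORK_POLICY = 'FIVETRAN_ACCESS_POLICY';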
With AWS PrivateLink
If you have configured AWS PrivateLink, add Fivetran's internal VPC CIDR range to the Snowflake network policy's allowed list: 10.0.0.0/8.
Alternatively, add the Fivetran VPC Endpoint ID (or AWS VPCE ID) to your network policy's allowed list.
NOTE: Contact our support team and provide the name of the PrivateLink corresponding to the Fivetran VPC Endpoint ID (or AWS VPCE ID).
With Azure Private Link
If you have configured Azure Private Link, add Fivetran's internal VNet CIDR range to the Snowflake network policy's allowed list: 10.0.0.0/8.
Alternatively, add the Fivetran Resource ID (or Azure LinkID) to your network policy's allowed list.
NOTE: Contact our support team and provide the name of the Private Link corresponding to the Fivetran Resource ID (or Azure LinkID).
With Google Cloud Private Service Connect
If you have configured Google Cloud Private Service Connect, add Fivetran's internal VPC CIDR range to the Snowflake network policy's allowed list: 10.0.0.0/8.
(Optional) Configure failover
IMPORTANT: You must have a Snowflake Business Critical account or higher to use failover. Learn more in Snowflake's failover documentation.
Since Fivetran supports using Snowflake Client Redirect, you can set up database failover for Snowflake if you'd like. To configure failover in Snowflake, you must have two Snowflake accounts in two different regions. You must also have an account with the ORGADMIN role to add the accounts to a Snowflake organization and navigate between them.
Alternatively, you could have one Snowflake account and one organization account if you have the ACCOUNTADMIN role. To enable failover for the organization account, run the following command as the ORGADMIN:
USE ROLE ORGADMIN;
SELECT system$global_account_set_parameter('<org-name>.<org-account-itself>', 'ENABLE_ACCOUNT_DATABASE_REPLICATION', 'true');
In either case, do the following to set up failover:
- If you haven't already, replicate your database. Follow the setup instructions in Snowflake's Replicating Databases Across Multiple Accounts documentation.
- Enable failover for your primary database to your secondary database. Follow the setup instructions in Snowflake's Enable Replication for Accounts in the Organization documentation.
- Set up Client Redirect to generate a single global URL that automatically points to the current primary account. Follow the setup in Snowflake's Configuring Client Redirect documentation.
Provide your Snowflake Client Redirect URL in the Host section of your setup form to ensure that Fivetran always points to the active account in case of failover.
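As a rough sketch of the Client Redirect piece (assuming a hypothetical organization myorg with accounts account1 and account2 and a connection named fivetran_conn), the primary account owns the connection object and the secondary account holds a replica; the resulting global URL is what you enter in the Host field:

-- On the primary account (as ACCOUNTADMIN)
CREATE CONNECTION IF NOT EXISTS fivetran_conn;
ALTER CONNECTION fivetran_conn ENABLE FAILOVER TO ACCOUNTS myorg.account2;

-- On the secondary account (as ACCOUNTADMIN)
CREATE CONNECTION fivetran_conn AS REPLICA OF myorg.account1.fivetran_conn;

-- The resulting Client Redirect URL to use as the Fivetran Host follows the
-- <org-name>-<conn-name>.snowflakecomputing.com format described below.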
Complete Fivetran configuration
Log in to your Fivetran account.
Go to the Destinations page and click Add destination.
Enter a Destination name of your choice and then click Add.
Select Snowflake as the destination type.
(Enterprise and Business Critical accounts only) Select the deployment model of your choice:
- SaaS Deployment
- Hybrid Deployment
If you choose Hybrid Deployment, select an existing Hybrid Deployment Agent in the Select an existing agent drop-down menu or configure a new agent.
NOTE: For more information about configuring a new agent, see our Hybrid Deployment setup guides.
(Hybrid Deployment only) Select the external storage you configured to stage your data.
(Hybrid Deployment only) Depending on the external storage you selected, do the following:
i. If you selected Snowflake Internal Stage, skip to the next step.
ii. If you selected AWS S3, select the Authentication type (IAM ROLE or IAM USER) you configured for your S3 bucket, and then enter your S3 bucket's name and region.
iii. If you selected Azure Blob Storage, enter your storage account name and storage account key.
iv. If you selected Google Cloud Storage, upload the JSON file containing the service account key and enter the GCS bucket name.
(Not applicable to Hybrid Deployment) In the Connection Method drop-down menu, select how you want Fivetran to connect to your destination.
- Connect directly
- Connect via PrivateLink
NOTE: The Connect via PrivateLink option is available only for Business Critical accounts.
(Not applicable to Hybrid Deployment) If you selected Connect via PrivateLink as the connection method, select an existing PrivateLink connection, or create a new AWS PrivateLink or Azure Private Link connection.
If you selected Connect directly as the connection method or if you choose Hybrid Deployment as the deployment model, enter the host name or IP address of the database server in the Host field.
NOTE: If you're using database failover, enter the Snowflake Client Redirect URL in the Host field to ensure that Fivetran always points to the active account in case of failover. If you use AWS PrivateLink or Azure Private Link, the URL format is <org-name>-<conn-name>.privatelink.snowflakecomputing.com. If you don't, the URL format is <org-name>-<conn-name>.snowflakecomputing.com.
Enter the User and Database names you found in Step 2.
Choose your authentication mode: PASSWORD or KEY-PAIR. See the additional steps for key-pair authentication.
- If you selected PASSWORD, enter the Password you found in Step 3.
- If you selected KEY-PAIR, enter the Private key. The private key cannot have spaces and must be prefixed by -----BEGIN PRIVATE KEY----- and postfixed by -----END PRIVATE KEY-----.
IMPORTANT: If you use an encrypted private key, set the Is Private Key encrypted toggle to ON, then enter your Passphrase. Encrypted private keys must be prefixed by -----BEGIN RSA PRIVATE KEY----- and postfixed with -----END RSA PRIVATE KEY-----.
(Optional) If you want Fivetran to use a specific role instead of your default role, enter the Role name.
(Not applicable to Hybrid Deployment) Choose the Data processing location. Depending on the plan you are on and your selected cloud service provider, you may also need to choose a Cloud service provider and cloud region as described in our Destinations documentation.
IMPORTANT: If you are using AWS PrivateLink, Azure Private Link, or Google Cloud Private Service Connect, select the corresponding Cloud service provider.
Choose your Timezone.
(Optional for Business Critical accounts and not applicable to Hybrid Deployment) To enable regional failover, set the Use Failover toggle to ON, and then select your Failover Location and Failover Region. Make a note of the IP addresses of the secondary region and safelist these addresses in your firewall.
Click Save & Test.
Fivetran tests and validates the Snowflake connection. On successful completion of the setup tests, you can sync your data using Fivetran connectors to the Snowflake destination.
In addition, Fivetran automatically configures a Fivetran Platform Connector to transfer the connector logs and account metadata to a schema in this destination. The Fivetran Platform Connector enables you to monitor your connectors, track your usage, and audit changes. The connector sends all these details at the destination level.
IMPORTANT: If you are an Account Administrator, you can manually add the Fivetran Platform Connector on an account level so that it syncs all the metadata and logs for all the destinations in your account to a single destination. If an account-level Fivetran Platform Connector is already configured in a destination in your Fivetran account, then we don't add destination-level Fivetran Platform Connectors to the new destinations you create.
Setup tests
Fivetran performs the following Snowflake connection tests:
The Host Connection test checks the accessibility of the host and validates the database credentials you provided in the setup form.
The Validate Passphrase test validates your private key against the passphrase if you are using key-pair authentication.
The Default Warehouse test checks if the Snowflake data warehouse exists and if you have set it as the default warehouse.
The Database Connection test checks if we can connect to your Snowflake database.
The Permission test checks if we have the CREATE SCHEMA and CREATE TEMPORARY TABLES permission on your Snowflake database.
The Validate Privileges On Integration test checks if the default role assigned to the Fivetran user account has the required permissions on the storage integration. Fivetran performs this test only if your Snowflake data warehouse is hosted on Google Cloud Platform (GCP).
TIP: If you are setting up a new destination in the Hybrid Deployment model, the Validate Privileges On Integration test will fail for the first time. To resolve this, retrieve the cloud storage service account for your Snowflake account and grant the service account permissions to access bucket objects, and then click Save & Test again.
Related articles
Destination Overview
API Destination Configuration