Release Notes
November 2024
History mode is now in beta for the following destinations:
October 2024
We now support automatic schema migration of Delta Lake tables in Databricks. For more information about configuring automatic schema migration, see our setup instructions.
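Automatic schema migration works by comparing the incoming source schema against the existing Delta table and applying additive changes. A minimal sketch of that comparison, with hypothetical column metadata (this is illustrative, not the connector's implementation):

```python
def plan_schema_migration(existing_columns, incoming_columns):
    """Return ALTER TABLE clauses for columns present in the source
    but missing from the Delta table (additive migration only)."""
    existing = {name.lower() for name, _ in existing_columns}
    return [
        f"ALTER TABLE target ADD COLUMN {name} {dtype}"
        for name, dtype in incoming_columns
        if name.lower() not in existing
    ]

# Example: the source gained a new column since the last sync.
statements = plan_schema_migration(
    existing_columns=[("id", "BIGINT"), ("name", "STRING")],
    incoming_columns=[("id", "BIGINT"), ("name", "STRING"), ("email", "STRING")],
)
# statements == ["ALTER TABLE target ADD COLUMN email STRING"]
```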
You can no longer modify the following fields in the S3 Data Lake setup form after you create the destination:
- Bucket
- S3 Prefix Path
September 2024
To avoid naming conflicts with Iceberg's reserved field names, we now prefix the following column names with # before writing them to the Iceberg tables in your destination:
- _deleted
- _file
- _partition
- _pos
- _spec_id
- file_path
- pos
- row
We will verify the existing column names in your destination and rename the columns with these reserved names. You may observe a sync delay when we rename your existing columns.
We are gradually rolling out this change to all existing destinations.
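The renaming rule above can be sketched as a simple prefixing step, assuming the # prefix described above (illustrative only, not our connector code):

```python
# Iceberg reserved field names listed above.
RESERVED = {"_deleted", "_file", "_partition", "_pos", "_spec_id",
            "file_path", "pos", "row"}

def deconflict(column_name: str) -> str:
    """Prefix a column name with '#' when it collides with an
    Iceberg reserved field name; otherwise leave it unchanged."""
    return f"#{column_name}" if column_name in RESERVED else column_name

print(deconflict("file_path"))    # -> #file_path
print(deconflict("customer_id"))  # -> customer_id
```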
June 2024
We have significantly reduced query times on the non-primary-key columns of your destination tables. To support this enhancement, connectors created on or after June 20, 2024 now record two statistics, minimum value and maximum value, for every column in tables that contain up to 200 columns.
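Per-column minimum and maximum statistics speed up queries because the engine can skip entire data files whose value range cannot match a predicate. A minimal sketch of that pruning check (names are illustrative):

```python
def can_skip_file(file_min, file_max, pred_lo, pred_hi):
    """Return True when a file's [min, max] range for a column
    cannot overlap the predicate range [pred_lo, pred_hi]."""
    return file_max < pred_lo or file_min > pred_hi

# A file whose order_total spans 10..99 is skipped for
# WHERE order_total BETWEEN 500 AND 900:
print(can_skip_file(10, 99, 500, 900))   # -> True
print(can_skip_file(10, 600, 500, 900))  # -> False
```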
We now support the Delta Lake format for your destination tables. For more information, see our S3 Data Lake documentation.
You can now integrate Databricks Unity Catalog with your S3 Data Lake destination and create external tables for the data stored in your destination's Delta Lake tables. For more information, see our Unity Catalog setup instructions.
You can now specify how long you want us to retain your table snapshots before deleting them through table maintenance operations. To facilitate this enhancement, we have added a new drop-down menu, Snapshot Retention Period, to the destination setup form. For more information, see our setup instructions.
February 2024
We now create a file, sequence_number.txt, to track the changes made to the Iceberg tables in your destination. We create this file in each table's metadata folder. Be sure not to delete these files from your destination.
September 2023
Our S3 Data Lake destination is now generally available. Read our S3 Data Lake destination documentation.
You can now connect Fivetran to your S3 buckets using an AWS PrivateLink connection, provided your S3 bucket and destination are in the same AWS Region. This feature is available only on Business Critical plans. For more information, see our setup instructions.
July 2023
We have added support for field IDs in the data files.
We have added support for three new table maintenance operations. These operations do the following in your S3 bucket:
- Delete the snapshots that are older than 7 days
- Clean up orphan files
- Delete old metadata files
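The first of these operations, expiring snapshots older than 7 days, can be sketched as a filter over snapshot timestamps (illustrative, not the actual maintenance job):

```python
from datetime import datetime, timedelta, timezone

def snapshots_to_expire(snapshots, retention_days=7, now=None):
    """Return the IDs of snapshots older than the retention window.
    `snapshots` is a list of (snapshot_id, timestamp) pairs."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [sid for sid, ts in snapshots if ts < cutoff]

now = datetime(2023, 7, 15, tzinfo=timezone.utc)
snaps = [
    ("snap-1", datetime(2023, 7, 1, tzinfo=timezone.utc)),   # 14 days old
    ("snap-2", datetime(2023, 7, 14, tzinfo=timezone.utc)),  # 1 day old
]
print(snapshots_to_expire(snaps, now=now))  # -> ['snap-1']
```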
We are gradually rolling out this improvement to all existing destinations.
May 2023
You can now configure your S3 Data Lake destination using the Fivetran REST API.
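Destinations are created through the Fivetran REST API with a POST to the /v1/destinations endpoint. The sketch below only builds the request body; the service identifier and config keys are illustrative assumptions, so check the API reference for the exact values:

```python
import json

def build_destination_payload(group_id: str, bucket: str, prefix: str) -> dict:
    """Build a request body for POST /v1/destinations.
    The service name and config keys below are assumptions for
    illustration -- consult the Fivetran API reference."""
    return {
        "group_id": group_id,
        "service": "s3_datalake",   # assumed service identifier
        "region": "US",
        "config": {
            "bucket": bucket,       # assumed config key
            "prefix_path": prefix,  # assumed config key
        },
    }

payload = build_destination_payload("grp_123", "my-lake-bucket", "fivetran/")
print(json.dumps(payload, indent=2))
# Send with, e.g.:
# requests.post("https://api.fivetran.com/v1/destinations",
#               json=payload, auth=(API_KEY, API_SECRET))
```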
We have added support for the BINARY data type.
We have added support for two new AWS Regions, Asia Pacific (Tokyo) (ap-northeast-1) and Asia Pacific (Sydney) (ap-southeast-2).
March 2023
We now support data lakes built on Amazon S3 as destinations. We use AWS Glue as the data catalog for your destination tables.