Oracle Connector
The Fivetran Oracle connector lets you sync data (tables and materialized views) from your Oracle database to your destination. Regular views (also known as virtual tables) are not supported.
NOTE: You must have an Enterprise or Business Critical plan to use the Oracle connector.
Supported services
Fivetran supports the following Oracle database services:
Supported configurations
Fivetran supports the following Oracle configurations:
Supportability Category | Supported Values |
---|---|
Database versions | 11g R2 - 21c |
Connection limit per database | No limit |
Transport Layer Security (TLS) | TLS 1.2 |
Which Oracle instance types we can connect to depends on your database service.
Instance Types | Supported |
---|---|
Generic Oracle | |
Primary instance | check |
Active Data Guard standby instance | |
Physical standby instance | |
Real Application Cluster (RAC) | check |
Multitenant (CDBs/PDBs) | check |
Amazon RDS for Oracle | |
Primary instance | check |
Active Data Guard standby instance | |
Physical standby instance | |
Known limitation
Oracle LogMiner doesn't support object names longer than 30 characters. Consequently, for Oracle connectors, we don't sync tables with names longer than 30 characters or tables that contain columns with names longer than 30 characters.
TIP: If you are the owner of the tables, you can rename them and their columns with shorter names so that we can sync them.
Features
Feature Name | Supported | Notes |
---|---|---|
Capture deletes | check | |
History mode | check | |
Custom data | check | |
Data blocking | check | |
Column hashing | check | |
Re-sync | check | |
API configurable | check | API configuration |
Priority-first sync | | |
Fivetran data models | | |
Private networking | check | |
Authorization via API | check | |
Setup guide
Follow our step-by-step setup guides for specific instructions on how to set up your Oracle database type:
Sync overview
Once Fivetran is connected to your Oracle source database, we pull a full dump of all selected data from your database. We then pull all your new and changed data at regular intervals using your archived redo logs and LogMiner. If data in the source changes (for example, you add new tables or change a data type), Fivetran automatically detects and persists these changes to your destination.
Schema information
Fivetran tries to replicate the exact schema and tables from your Oracle source database to your destination according to our standard database update strategies. We create schemas in your destination that map directly to the schemas in your source Oracle database, ensuring that the data in your destination is in a familiar format to work with.
When you connect to Fivetran and specify a source database, you also select a schema prefix. We map the schemas we discover in your source database to your destination and prepend the destination schema name with the prefix you selected.
Fivetran-generated columns
Fivetran adds the following columns to every table in your destination:
- `_fivetran_deleted` (BOOLEAN) marks rows that were deleted in the source database.
- `_fivetran_synced` (UTC TIMESTAMP) indicates the time when Fivetran last successfully synced the row.
- `_fivetran_id` (STRING) is a unique ID that Fivetran uses to avoid duplicate rows in tables that do not have a primary key.
We add these columns to give you insight into the state of your data and the progress of your data syncs. For more information about these columns, see our System Columns and Tables documentation.
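For example, a query against a hypothetical destination table `oracle_hr.employees` (names are placeholders; exact syntax depends on your destination) might use these columns to return only rows that still exist in the source:

```sql
-- Hypothetical destination table; _fivetran_deleted marks soft-deleted rows.
SELECT *
FROM oracle_hr.employees
WHERE _fivetran_deleted = FALSE
ORDER BY _fivetran_synced DESC;
```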
Type transformations and mapping
As we extract your data, we match Oracle data types to types that Fivetran supports. If we don't support a certain data type, we automatically change that type to the closest supported type or, in some cases, don't load that data at all. Our system automatically skips columns of data types that we don't accept or transform.
The following table illustrates how we transform your Oracle data types into Fivetran supported types:
Oracle Type | Fivetran Type | Fivetran Supported | Notes |
---|---|---|---|
CHAR | STRING | True | |
NCHAR | STRING | True | |
VARCHAR | STRING | True | |
VARCHAR2 | STRING | True | |
NVARCHAR2 | STRING | True | |
NUMBER | One of the following types based on the source value's precision: - Short integers (precision < 5) to SHORT - Integers (precision < 10) to INTEGER - Long integers (precision < 19) to LONG - Integers (precision < default value) to BIGDECIMAL - Integers (precision > default value) to STRING | True | - We map floating-point, double-precision floating-point, or big decimal numbers to the BIGDECIMAL data type. - If we detect precision or scale larger than your default values, we map that data to the STRING data type. - If we detect non-specified precision or scale, we map that data to the FLOAT data type in BigQuery or Snowflake destinations. - If we detect non-specified precision or scale, we map that data to the FLOAT data type, but if the value is too big, we map it to STRING. - NUMBER types with high precision and scale may be rounded by LogMiner in Oracle 11g. This does not happen in Oracle 12c or higher. |
FLOAT | BIGDECIMAL | True | FLOAT types with high precision and scale may be rounded by LogMiner in Oracle 11g. This does not happen in Oracle 12c or higher. |
DOUBLE PRECISION | BIGDECIMAL | True | DOUBLE PRECISION types with high precision and scale may be rounded by LogMiner in Oracle 11g. This does not happen in Oracle 12c or higher. |
REAL | BIGDECIMAL | True | REAL types with high precision and scale may be rounded by LogMiner in Oracle 11g. This does not happen in Oracle 12c or higher. |
DATE | LOCALDATE | True | We map DATE to LOCALDATETIME type if it has non-zero time information. |
TIMESTAMP | LOCALDATETIME | True | |
TIMESTAMP WITH TIME ZONE | INSTANT | True | |
TIMESTAMP WITH LOCAL TIME ZONE | INSTANT | True | |
RAW | BINARY | True | |
INTERVAL YEAR TO MONTH | STRING | True | |
INTERVAL DAY TO SECOND | STRING | True | |
BINARY_FLOAT | FLOAT | True | Supported only for BFILE remote capture. However, not supported as primary keys. |
BINARY_DOUBLE | DOUBLE | True | Supported only for BFILE remote capture. However, not supported as primary keys. |
LONG | | False | |
LONG RAW | | False | |
ROWID | | True | Supported only for BFILE remote capture. |
UROWID | | False | |
BFILE | | False | |
BLOB | | False | |
CLOB | | False | |
NCLOB | | False | |
If we are missing an important type that you need, reach out to Support.
In some cases, when loading data into your destination, we may need to convert Fivetran data types into data types that are supported by the destination. For more information, see the individual data destination pages.
Excluding source data
If you don’t want to sync all the data from your source database, you can exclude schemas, tables, or columns from your syncs on your Fivetran dashboard. To do so, go to your connection details page and uncheck the objects you would like to omit from syncing. For more information, see our Data Blocking documentation.
You can also use SELECT permissions if you want to exclude specific schemas and tables from syncing. The Fivetran user can only discover schemas and tables for which it has been given explicit SELECT permission. Note that permissions will not shield confidential data, such as PII (Personally Identifiable Information), from Fivetran because they do not apply to archived redo logs. Via the archived redo log files, Fivetran has access to the full contents of any changed rows of all tables that pass through your system. However, we filter out the prohibited data at the earliest possible stage of our syncs and do not load it into your destination.
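As a sketch, granting the Fivetran user `SELECT` on only the objects you want synced might look like the following (the `fivetran_user` account and `hr` schema are placeholders):

```sql
-- Only tables with an explicit SELECT grant are discovered and synced.
GRANT SELECT ON hr.employees TO fivetran_user;
GRANT SELECT ON hr.departments TO fivetran_user;
-- Tables without a grant (for example, hr.salaries) are not discovered.
```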
Initial sync
When Fivetran connects to a new database, we first copy all rows from every table in every schema for which we have SELECT permission (except for those you excluded in your Fivetran dashboard) and add Fivetran-generated columns. We copy rows by performing a SELECT statement on each table. We import each table in full before moving on to the next one.
The duration of initial syncs can vary depending on the number and size of tables to be imported. We therefore interleave incremental updates with the table imports during the initial sync. If the import of a table takes an hour or longer, we process any accumulated incremental updates before we begin importing the next table.
Updating data
Fivetran performs incremental updates of any new or modified data from your source database. We use one of the following methods for incremental updates: LogMiner, BFILE remote capture, or Fivetran Teleport Sync.
All methods keep a record of recent data changes, which allows Fivetran to update only the data that has changed since our last sync.
WARNING: Switching the incremental sync method will require a historical resync to guarantee data integrity.
LogMiner vs BFILE remote capture vs Fivetran Teleport Sync
There are several key differences between LogMiner, BFILE remote capture, and Fivetran Teleport Sync:
Feature | LogMiner | BFILE remote capture | Fivetran Teleport Sync |
---|---|---|---|
Syncs tables with primary keys | Yes | Yes | Yes |
Syncs tables without primary keys | Yes | Yes | Yes |
Tracks DDL changes | Yes1 | Partially2 | Yes |
Tracks DML changes | Yes | Yes | Yes |
Tracks deletes | Yes | Yes | Yes |
Tracks inserts and updates | Yes | Yes | Yes |
Provides full record of changed data | Yes | Yes | Yes |
Available for all Oracle database services | Yes | No3 | No |
Available for all Oracle versions | Yes | No4 | No |
Syncs all supported data types5 | Yes | Partially | Yes6 |
Table limit per sync | No | No | 400 tables |
Row limit per table | No | No | 5 million rows |
1 Tracking certain DDL operations requires a re-sync. Learn more in our LogMiner documentation.
2 Learn which DDL changes are supported in our BFILE remote capture documentation.
3 BFILE remote capture is only supported for generic Oracle and Amazon RDS for Oracle.
4 BFILE remote capture is only supported for Oracle version 19 or higher.
5 See which data types the Oracle connector supports in our Type transformations and mapping documentation.
6 Learn which data types Fivetran Teleport Sync supports in our Fivetran Teleport Sync documentation.
LogMiner
We use LogMiner, a utility that is part of Oracle database, to detect modified rows in the source tables. We use the system change number (SCN) to keep track of our place in the logs at the end of each update to ensure a seamless hand-off between syncs. During each update, we sync all committed transactions to the destination.
For Oracle database versions earlier than 19c, we capture changes from both active redo log files and archived log files. For Oracle database 19c and later, we capture changes only from archived log files. By default, filling the current redo log file automatically triggers a log switch, which converts the file into an archived log file. The default redo log file size is 4 MB, but the size can be larger depending on the configuration. If you have a quiet database with little activity, it can take a long time to fill a log file and trigger a log switch. In this case, run the command `ALTER SYSTEM SWITCH LOGFILE;` in your database to trigger the log switch. Alternatively, you can configure a time-based automatic log switch by running the command `ALTER SYSTEM SET ARCHIVE_LAG_TARGET=900 SCOPE=BOTH;`. This configuration guarantees that a log file switch occurs no more than 15 minutes after the last log switch.
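To confirm whether a time-based log switch is configured, you can query the parameter directly; a minimal check, assuming you have query access to `V$PARAMETER`:

```sql
-- ARCHIVE_LAG_TARGET is in seconds; 900 forces a log switch at least every 15 minutes.
SELECT name, value
FROM v$parameter
WHERE name = 'archive_lag_target';
```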
The following DDL operations are handled without triggering a resync:
- `ANALYZE`
- `ASSOCIATE STATISTICS`
- `AUDIT`
- `COMMENT`
- `DISASSOCIATE STATISTICS`
- `GRANT`
- `NOAUDIT`
- `PURGE`
- `REVOKE`
- `CREATE TABLE`
- `ALTER TABLE`
  NOTE: This operation does not trigger a resync with operations like `SHRINK SPACE CHECK`, `ENABLE NOVALIDATE CONSTRAINT`, `ADD SUPPLEMENTAL LOG DATA`, parallelism changes, partition/subpartition modifications, non-primary key constraints, and virtual statements.
- `ADD/DROP COLUMN`
  NOTE: This operation is managed with schema updates unless issues arise with default values or unsupported operations.
- `TRUNCATE`
  NOTE: This operation triggers a soft delete, marking rows as deleted without requiring a manual refresh.
For tables with primary keys, the above operations are handled as expected, without triggering a resync, unless the primary key itself is modified.
For tables without primary keys, most DDL operations, including ADD/DROP COLUMN
, ALTER TABLE
, and TRUNCATE
, will trigger a full resync.
Due to the limitations of LogMiner, the following DDL operations will trigger a resync:
- `ALTER TABLE`, unless covered by the exclusions listed above
- `DROP`, `RENAME`
- `CREATE` in operations that don't involve a table
- Other unhandled DDL types
Additionally, any modifications to primary keys will trigger a resync, as primary key changes affect how data is tracked.
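For illustration, the hypothetical statements below show one change that is handled through a schema update and one that forces a resync (table and column names are placeholders):

```sql
-- Handled with a schema update: adding a column without a default value.
ALTER TABLE hr.employees ADD (middle_name VARCHAR2(50));

-- Triggers a resync: any modification to the primary key changes how rows are tracked.
ALTER TABLE hr.employees DROP PRIMARY KEY;
ALTER TABLE hr.employees ADD PRIMARY KEY (employee_id, hire_date);
```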
BFILE remote capture Private Preview
BFILE remote capture is an incremental sync method that reads Oracle redo log files directly, capturing changes without the need for an agent on the source database host. This method uses Oracle’s BFILE object type to access the redo log files remotely.
By default, when a redo log file is full, Oracle automatically switches to the next log file and archives the previous one. However, in a low-activity database, it may take a long time for a log file to fill, delaying the automatic log switch. To handle this, you can use one of the following methods:
- Manually trigger a log switch if you need an immediate switch. Run the following command:
  `ALTER SYSTEM SWITCH LOGFILE;`
- Automatically enforce regular log switches by setting a time-based interval. Run the following command:
  `ALTER SYSTEM SET ARCHIVE_LAG_TARGET=900 SCOPE=BOTH;`
This command ensures that a log switch occurs at least every 15 minutes, even if the log file is not full.
Using the second method (automatic log switch) prevents delays in log archiving without requiring manual intervention.
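If you want to verify that log switches are actually happening at the expected cadence, one option is to inspect recent switches in `V$LOG_HISTORY`; this is a sketch and assumes query access to that view:

```sql
-- List the five most recent log switches to confirm the archiving cadence.
SELECT thread#, sequence#, first_time
FROM v$log_history
ORDER BY first_time DESC
FETCH FIRST 5 ROWS ONLY;
```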
BFILE remote capture is supported only for Oracle version 19 or higher and is available for the following Fivetran connectors:
This incremental sync method handles only the following DDL operations. All other DDL operations are ignored.
- `ALTER TABLE`
  NOTE: Supports partition/subpartition modifications that trigger row movement on existing rows within the table, which triggers a table resync. Any operation that causes row movement on existing rows, such as `SHRINK SPACE`, also triggers a table resync.
- `ADD/DROP/RENAME COLUMN`
  NOTE: `ADD COLUMN` with a default value and `RENAME COLUMN` trigger a table resync. `ADD COLUMN` without a default value and `DROP COLUMN` are handled without triggering a table resync.
- `TRUNCATE`
  NOTE: This operation triggers a soft delete, marking rows as deleted in the destination.
Fivetran Teleport Sync Beta
Fivetran Teleport Sync is a proprietary incremental sync method that can incrementally replicate your database with minimal additional setup and a read-only database connection.
Use Fivetran Teleport Sync when the LogMiner method is not an option for the following reasons:
- You cannot or do not want to provide Fivetran access to your database's change logs
- You need to limit the tables or columns that Fivetran can read from
- LogMiner syncing is slow or resource intensive due to the number of updates
- You load data into the source database using a TRUNCATE/LOAD or BACKUP/RESTORE process that invalidates change logs
- You currently use snapshot syncing without incremental updates and want a faster method
Fivetran Teleport Sync's queries perform the following operations on your database:
- Do a full table scan of each synced table
- Perform calculations on all values in each synced table's rows
- Aggregate a compressed table snapshot in the database's memory
We do not support syncing more than 400 tables or tables with more than 15 million rows with Fivetran Teleport Sync for the Fivetran Oracle connector.
For optimum Fivetran Teleport Sync performance, we recommend that you make the following resources available in your database:
- 1 GB Free RAM
- 1 Free CPU Core
- IOPS (Teleport Sync times decrease linearly with an increase of available IOPS)
- A minimum of 10% of your largest table size as the temporary table space
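To check the last recommendation, you can compare your temporary tablespace capacity against your largest table; a rough sketch, assuming access to the `DBA_TEMP_FILES` and `DBA_SEGMENTS` views:

```sql
-- Total temporary tablespace capacity, in GB.
SELECT tablespace_name, ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2) AS temp_gb
FROM dba_temp_files
GROUP BY tablespace_name;

-- Largest table segment, in GB; roughly 10% of this should fit in the temp space.
SELECT owner, segment_name, ROUND(bytes / 1024 / 1024 / 1024, 2) AS size_gb
FROM dba_segments
WHERE segment_type = 'TABLE'
ORDER BY bytes DESC
FETCH FIRST 1 ROW ONLY;
```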
Fivetran Teleport Sync for Oracle supports the following data types:
Data Type | Regular Column Support | Primary Key Support | Notes |
---|---|---|---|
VARCHAR | Yes | Yes | |
VARCHAR2 | Yes | Yes | |
CHAR | Yes | No | |
NCHAR | Yes | No | |
NVARCHAR2 | Yes | No | |
NUMBER | Yes | Yes | Supports INTEGER primary keys only; i.e., NUMBER(p) or NUMBER(p,0) |
DOUBLE PRECISION | Yes | No | |
REAL | Yes | No | |
FLOAT | Yes | No | |
DATE | Yes | Yes | |
TIMESTAMP | Yes | No | |
TIMESTAMP WITH TIME ZONE | Yes | No | |
TIMESTAMP WITH LOCAL TIME ZONE | Yes | No | |
RAW | Yes | Yes | Supports RAW(16) primary keys. The values must fill the entire length of bytes with no leading 0s. For regular columns, we support up to 4000 bytes or characters if MAX_STRING_SIZE = STANDARD. |
INTERVAL YEAR TO MONTH | Yes | No | |
INTERVAL DAY TO SECOND | Yes | No | |
BINARY_FLOAT | No | No | |
BINARY_DOUBLE | No | No | |
LONG | No | No | |
LONG RAW | No | No | |
ROWID | No | No | |
UROWID | No | No | |
BFILE | No | No | |
BLOB | No | No | |
CLOB | No | No | |
NCLOB | No | No |
IMPORTANT: Teleport for Oracle also supports tables with composite primary keys and tables with no primary key.
Automatic table re-sync for Teleport
Teleport for Oracle supports `ADD COLUMN` DDL operations with null or default static values. All other table or column alterations result in an automatic table re-sync.
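The hypothetical statements below illustrate the distinction (the `hr.employees` table is a placeholder): the first two are handled in place, while the third results in an automatic re-sync:

```sql
-- Handled without a re-sync: new nullable column.
ALTER TABLE hr.employees ADD (nickname VARCHAR2(30));

-- Handled without a re-sync: new column with a static default value.
ALTER TABLE hr.employees ADD (status VARCHAR2(10) DEFAULT 'ACTIVE');

-- Triggers an automatic table re-sync: altering an existing column.
ALTER TABLE hr.employees MODIFY (nickname VARCHAR2(60));
```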
Tables without a primary key using Teleport
To sync tables without a primary key using Teleport, please refer to our Oracle Setup Guide.
ROWID as primary key for Teleport
For Oracle Teleport syncs of primary key-less tables, an alternative approach uses Oracle's `ROWID` pseudo column to populate our `_fivetran_id` column, which we designate as the primary key for these tables. This method eliminates the need to create temporary tables.
IMPORTANT: This method does not support soft deletes or tables with row movement enabled.
Contact our support team to try this approach.
Recommendations
- If you want to sync tables with primary keys, we recommend that the combined size of all the tables does not exceed 100 GB.
- If you want to sync tables without primary keys, we recommend that the combined size of all the tables does not exceed 10 GB. Contact our support team if you need any assistance with your syncs.
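To estimate the combined size of the tables you plan to sync, you can sum their segment sizes; a sketch, assuming access to `DBA_SEGMENTS` and using the `HR` schema as a placeholder:

```sql
-- Combined size of all table segments in one schema, in GB.
SELECT owner, ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2) AS total_gb
FROM dba_segments
WHERE segment_type = 'TABLE'
  AND owner = 'HR'
GROUP BY owner;
```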
ROWID-based sync for BFILE
BFILE remote capture uses `ROWID` as the replication key regardless of whether a table has a primary key. We designate our `_fivetran_id` column and populate it using Oracle's `ROWID` pseudo column. If row movement occurs in a table because a partition column value changes, we track the change and update the `ROWID` accordingly. However, if bulk row movement happens as a result of a DDL event on a partitioned table, such as `DROP PARTITION`, we trigger an automatic table resync.
Tables with a primary key
We merge changes to tables with primary keys into the corresponding tables in your destination:
- An `INSERT` in the source table generates a new row in the destination with `_fivetran_deleted = FALSE`.
- A `DELETE` in the source table updates the corresponding row in the destination with `_fivetran_deleted = TRUE`.
- An `UPDATE` in the source table updates the data in the corresponding row in the destination.
If we detect that your primary key has changed, we handle DELETEs and UPDATEs differently:
- If you have set `SUPPLEMENTAL LOG DATA (ALL)` on your source table, we update the row with the old primary key value with `_fivetran_deleted = TRUE`. We then insert a row with the new primary key value with `_fivetran_deleted = FALSE`.
- If you have not set `SUPPLEMENTAL LOG DATA (ALL)` on your source table, we do not update the row with the old primary key value because we can't identify it. We insert a row with the new primary key value with `_fivetran_deleted = FALSE`.
NOTE: If we detect that your primary key has changed and you have not set `SUPPLEMENTAL LOG DATA (ALL)` on your source table, we send you a warning telling you to set `SUPPLEMENTAL LOG DATA (ALL)` on the table.
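If you need to add it, supplemental logging of all columns can be enabled per table; for example (the table name is a placeholder, and the full Oracle syntax includes the `COLUMNS` keyword):

```sql
-- Log all columns so that primary key changes can be matched to existing destination rows.
ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

-- Verify the supplemental log groups defined on the table.
SELECT log_group_name, log_group_type
FROM dba_log_groups
WHERE owner = 'HR' AND table_name = 'EMPLOYEES';
```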
Tables without a primary key
For tables without a primary key, we designate our `_fivetran_id` column as the primary key. We populate this column in one of two ways:
- If the table is partitioned and row movement is enabled, we use a hash of the row's non-Fivetran column values to populate our `_fivetran_id` column.
- Otherwise, we use Oracle's `ROWID` pseudo column to populate our `_fivetran_id` column.
  - If you enable row movement later, we automatically re-sync the table and use a hash of the row's non-Fivetran column values to populate our `_fivetran_id` column.
  - If we detect any DDL operations (excluding `GRANT`), we automatically re-sync the table and re-import its contents with the newly assigned `ROWID`s.
Deleted rows
We do not delete rows from your destination. When a row is deleted from the source table, we set the `_fivetran_deleted` column value of the corresponding row in the destination to `TRUE`.
Deleted columns
We do not delete columns from your destination. When a column is deleted from the source table, we replace the existing values in the corresponding destination column with `NULL` values.
Excluded tables
Fivetran does not sync the following tables:
- Temporary tables
- System-generated supplemental tables (for example, IOT or MLOG$ tables)
- System tables (for example, those in the `SYSTEM` tablespace or those owned by the `SYS` user)
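If you want to see which tables in your database would fall into these categories, a rough check against the data dictionary might look like this (assuming access to `DBA_TABLES`):

```sql
-- Flags temporary tables, system-generated IOT segments, and materialized view
-- log (MLOG$) tables, which Fivetran does not sync.
SELECT owner, table_name, temporary, iot_type
FROM dba_tables
WHERE temporary = 'Y'
   OR table_name LIKE 'SYS_IOT_%'
   OR table_name LIKE 'MLOG$%';
```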
Migrating service providers
If you want to migrate service providers, we will need to do a full re-sync of your data because the new service provider won't retain the same change tracking data as your original Oracle database.
Syncing empty tables and columns
Fivetran can sync empty tables and columns for your Oracle connection. For more information, see our Features documentation.