High-Volume Agent SAP ECC on HANA Private Preview
SAP ECC on HANA is an enterprise resource planning and analytics platform built on top of the SAP HANA database. Fivetran replicates data from your SAP HANA source database and loads it into your destination using the High-Volume Agent (HVA) connector.
IMPORTANT:
- You must have an Enterprise or Business Critical plan to use the High-Volume Agent SAP ECC on HANA connector.
- In Private Preview, the connector only supports Snowflake, BigQuery, PostgreSQL, and Databricks as destinations.
NOTE: SAP has superseded SAP ECC on HANA with SAP S/4HANA. See our SAP S/4HANA connector documentation.
Features
Feature Name | Supported | Notes |
---|---|---|
Capture deletes | check | All tables and fields |
Custom data | check | All tables and fields |
Data blocking | check | Column level, table level, and schema level |
Column hashing | check | |
Re-sync | check | Table level |
History | check | |
API configurable | check | API configuration |
Priority-first sync | | |
Fivetran data models | check | Get the models: source / transform |
Private networking | | |
ODBC connection
Fivetran requires the installation of the HANA client on the machine that runs the HVA agent. The HVA agent uses the HANA ODBC driver to connect to HANA and to read and write data. For more information about the supported ODBC driver version, see the release notes (`hvr.rel`) in the `hvr_home` directory of the agent installation.
Setup guide
Follow our High-Volume Agent SAP ECC on HANA Setup Guide to connect SAP ECC on HANA to your destination using the HVA connector.
Sync overview
Once Fivetran is connected to your HANA source database, we pull a full dump of the selected data from your database during the initial sync and send it to your destination. We then capture all new and changed data using one of our proprietary capture methods. Fivetran automatically detects schema changes in the source database (for example, new tables or changed data types) and persists these changes to your destination. We offer the following capture methods:
- DirectDBMS Log Reading: This method captures changes directly from HANA's log segments and log backups. It is very fast at capturing changes from the HANA database, but it requires the High-Volume Agent to be installed on the HANA machine.
- Archive Only: This method captures changes from HANA's log backups only. The High-Volume Agent can either detect the location of the log backups automatically or use a pre-configured custom path and format. During the capture process, the High-Volume Agent is not connected to the source database; a custom path therefore allows the agent to capture from a location other than HANA's backup location(s).
NOTE: We support DirectDBMS Log Reading and Archive Only capture methods when the HVA and HANA database are located on the same node. However, we only support the Archive Only capture method if they are on separate nodes.
INFO: The Archive Only capture method generally exhibits higher latency than the DirectDBMS Log Reading method because changes can only be captured when a log backup file is created. While this capture method enables high-performance data capture with minimal operating system and database privileges, it comes at the cost of higher capture latency.
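The topology rule in the note above can be sketched as a small helper (purely illustrative; only the capture method names come from this section):

```python
def supported_capture_methods(agent_on_hana_node):
    """Which capture methods apply for a given topology (illustrative sketch).

    Both methods are available when the HVA agent runs on the same node
    as the HANA database; on a separate node, only Archive Only works.
    """
    if agent_on_hana_node:
        return ["DirectDBMS Log Reading", "Archive Only"]
    return ["Archive Only"]
```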
Schema information
Fivetran tries to replicate the exact schema and tables from your HANA source database to your destination according to our standard database update strategies. This ensures that the data in your destination is in a familiar format to work with.
When you connect to Fivetran and specify a source database, you also select a schema prefix. We map the SAP schema in your source database to your destination and prepend the destination schema name with the prefix you selected.
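As a hypothetical illustration of the prefix mapping (the exact naming convention Fivetran applies may differ):

```python
def destination_schema(prefix, source_schema):
    # Hypothetical sketch: prepend the chosen prefix to the
    # lowercased source schema name.
    return "{}_{}".format(prefix, source_schema.lower())
```

For example, a prefix of `sap_prod` and a source schema `SAPABAP1` would yield a destination schema like `sap_prod_sapabap1` under this sketch.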
Fivetran-generated columns
Fivetran adds the following columns to every table in your destination:
- `_fivetran_deleted` (BOOLEAN) marks rows that were deleted in the source database.
- `_fivetran_synced` (UTC TIMESTAMP) indicates the time when Fivetran last successfully synced the row.
- `_fivetran_rowid` (NUMBER) is the HANA internal row ID (`$rowid$` column) of the row. We also use it as a primary key if the table has no other primary key defined.
We add these columns to give you insight into the state of your data and the progress of your data syncs. For more information about these columns, see our System Columns and Tables documentation.
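For instance, a synced row might look like this in the destination (the business columns `MANDT` and `BELNR` are made-up examples; only the `_fivetran_*` columns come from this section):

```python
row = {
    "MANDT": "100",          # example business column (hypothetical)
    "BELNR": "0000012345",   # example business column (hypothetical)
    "_fivetran_deleted": False,                  # row still exists in the source
    "_fivetran_synced": "2024-01-15T08:30:00Z",  # last successful sync (UTC)
    "_fivetran_rowid": 42,                       # HANA $rowid$ value
}
```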
Type transformations and mapping
As we extract your data, we match SAP data types to types that Fivetran supports. Our system attempts to infer the types of any columns with data types that we don't recognize.
The following table illustrates how we transform your SAP data types into Fivetran supported types:
SAP Type | Fivetran Type | Fivetran Supported |
---|---|---|
D16R | BINARY | True |
D16S | BINARY | True |
D34R | BINARY | True |
D34S | BINARY | True |
RAW | BINARY | True |
ACCP | STRING | True |
CHAR | STRING | True |
CLNT | STRING | True |
CUKY | STRING | True |
DATS | STRING | True |
LANG | STRING | True |
NUMC | STRING | True |
SSTR | STRING | True |
TIMS | STRING | True |
UNIT | STRING | True |
LCHR | STRING | True |
LRAW | BINARY | True |
RSTR | BINARY | True |
STRG | STRING | True |
CURR | BIGDECIMAL | True |
DEC | BIGDECIMAL | True |
D16D | STRING | True |
D34D | STRING | True |
FLTP | DOUBLEPRECISION | True |
INT1 | SMALLINT | True |
INT2 | INTEGER | True |
INT4 | INTEGER | True |
INT8 | LONG | True |
PREC | INTEGER | True |
QUAN | BIGDECIMAL | True |
VARC | - | False |
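A few rows of the table above, expressed as a lookup for quick reference (illustrative only, not part of the connector):

```python
# Subset of the SAP-to-Fivetran type mapping table above.
SAP_TO_FIVETRAN = {
    "RAW": "BINARY",
    "CHAR": "STRING",
    "NUMC": "STRING",
    "CURR": "BIGDECIMAL",
    "QUAN": "BIGDECIMAL",
    "FLTP": "DOUBLEPRECISION",
    "INT4": "INTEGER",
    "INT8": "LONG",
}

def fivetran_type(sap_type):
    # Returns None for unsupported types such as VARC.
    return SAP_TO_FIVETRAN.get(sap_type)
```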
If we are missing an important data type that you need, please reach out to support.
In some cases, when loading data into your destination, we may need to convert Fivetran data types into data types that are supported by the destination. For more information, see the individual data destination pages.
Schema changes
Schema changes are not recognized in Private Preview.
New tables are not added automatically by default. Use the Schema tab to enable the automatic addition of new tables.
NOTE: The connector does not support the truncate table operation. If you need to truncate a table, we recommend performing a table-level re-sync instead.
Database upgrades
Fivetran may fail with the `invalid column name` DBMS error when you upgrade a HANA database (for example, from HANA 2.0 SPS 04 to HANA 2.0 SPS 05). To resolve this issue, recreate all the views in the `_HVR` schema by executing the `hvrhanaviews.sql` script after the upgrade. For detailed step-by-step instructions, refer to the High-Volume Agent SAP ECC on HANA Setup Guide.
Initial sync
When Fivetran connects to a new database, we first copy all rows from every selected table and add the Fivetran-generated columns. We copy rows by performing a SELECT statement on each table, importing up to four tables at a time until all selected tables are complete.
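The four-tables-at-a-time behavior can be sketched roughly as follows (illustrative pseudologic, not the connector's actual implementation; `copy_table` stands in for the per-table SELECT-based load):

```python
from concurrent.futures import ThreadPoolExecutor

def initial_sync(tables, copy_table, workers=4):
    # Copy each selected table, with at most `workers` tables
    # in flight at any one time.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(copy_table, tables))
```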
To keep your data up to date after the initial sync, we use log-based capture. This allows Fivetran to capture and then update only the data that has changed since the last sync.
Updating data
Fivetran performs incremental updates of any new or modified data from your source database. We use Direct Capture, a proprietary replication method, to extract the change data from your database's log files directly.
Tables with a primary key
We merge changes to tables with primary keys into the corresponding tables in your destination:
- An INSERT in the source table generates a new row in the destination with `_fivetran_deleted = FALSE`.
- A DELETE in the source table updates the corresponding row in the destination with `_fivetran_deleted = TRUE`.
- An UPDATE in the source table updates the data in the corresponding row in the destination.
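The merge rules above can be sketched as follows (illustrative only; `dest` stands in for the destination table, keyed by primary key):

```python
def apply_change(dest, op, key, row=None):
    # Apply one captured change according to the merge rules above.
    if op == "INSERT":
        dest[key] = dict(row, _fivetran_deleted=False)
    elif op == "UPDATE":
        dest[key].update(row)
    elif op == "DELETE":
        dest[key]["_fivetran_deleted"] = True  # soft delete only
```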
NOTE: Redefining the primary key will trigger a re-sync.
Duplicate rows
Duplicates may appear in the destination for rows that were updated while the initial sync was in progress. We eventually clean up these duplicates from the destination.
Tables without a primary key
We designate our `_fivetran_rowid` column as the primary key for tables without a primary key.
Deleted rows
We do not delete rows from your destination. When a row is deleted from the source table, we set the `_fivetran_deleted` column value of the corresponding row in the destination to `TRUE`.
Deleted columns
We do not delete columns from your destination.
Excluded tables
Fivetran does not sync the following tables:
- Row Store tables
- Long text (`STXL`) tables
- Database-level system tables
- Temporary tables