Db2 for LUW as Source
Capture
Fivetran HVR supports capturing changes from Db2 for Linux, UNIX and Windows. For the list of supported Db2 for LUW versions from which HVR can capture changes, see Capture changes from location in Capabilities.
If you are using Db2 for LUW as the hub repository database and as the source location, we recommend using the HVR agent to capture changes from the Db2 instance.
This is because when the hub repository database and the capture job share the same Db2 instance, two connections to that instance are opened in the same thread during capture. This can cause the db2ReadLog API to misinterpret updates and send data updates from both connections to the API caller.
Table Types
HVR supports capture from the following table types in Db2 for LUW:
- Regular Tables
- Multidimensional Clustering (MDC) Tables
- Insert Time Clustering (ITC) Tables
- Uncompressed Tables
- Row Compressed Tables (both static and adaptive)
- Value Compressed Tables (both static and adaptive)
Capture Methods
HVR allows only the Log-based Capture method for capturing (Capture) changes from Db2 for LUW.
Log-based Capture
In this capture method, HVR uses the db2ReadLog API to read the Db2 transaction logs. For this, the database user must have SYSADM or DBADM authorization. For more information about why this authorization is required, refer to the db2ReadLog API - Read log records section in the Db2 documentation.
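For example, DBADM authority can be granted with a statement along the following lines (the user name hvruser is illustrative); SYSADM membership is instead controlled through the SYSADM_GROUP instance configuration parameter:
GRANT DBADM ON DATABASE TO USER hvruser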
Supplemental Logging
HVR supports supplemental logging for log-based capture from Db2 for LUW. To enable supplemental logging, the DATA CAPTURE attribute must be set to CHANGES in Db2. So when creating a new table, ensure that this attribute is set to CHANGES. If this attribute is defined at the schema level, newly created tables automatically inherit it from the schema.
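To verify the current setting of this attribute, the DATACAPTURE column of the SYSCAT.TABLES catalog view can be inspected; a minimal sketch, assuming a schema named MYSCHEMA:
SELECT TABSCHEMA, TABNAME, DATACAPTURE FROM SYSCAT.TABLES WHERE TABSCHEMA = 'MYSCHEMA'
Here a value of N means data capture is disabled, Y means CHANGES, and L means CHANGES INCLUDE LONGVAR COLUMNS.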
Supplemental logging can be enabled by executing Activate Replication with option Supplemental Logging (hvractivate with option -ol).
Alternatively, executing the following command on replicated tables has the same effect.
ALTER TABLE tablename DATA CAPTURE CHANGES INCLUDE LONGVAR COLUMNS
To alter a table, the HVR database user must have one of the following privileges: ALTER, CONTROL, or ALTERIN. Alternatively, the user must have DBADM authority.
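For illustration, the ALTER privilege on a single table, or the ALTERIN privilege on a whole schema, could be granted as follows (the schema, table, and user names are assumptions):
GRANT ALTER ON TABLE myschema.mytable TO USER hvruser
GRANT ALTERIN ON SCHEMA myschema TO USER hvruser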
While executing Activate Replication, if supplemental logging is enabled, HVR will also execute the following (only if required by Db2 to reorganize the tables for better performance):
REORG TABLE tablename
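Whether a table is in a reorg-pending state can be checked beforehand using the SYSIBMADM.ADMINTABINFO administrative view; a sketch (the schema and table names are assumptions):
SELECT TABSCHEMA, TABNAME, REORG_PENDING FROM SYSIBMADM.ADMINTABINFO WHERE TABSCHEMA = 'MYSCHEMA' AND TABNAME = 'MYTABLE'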
To enable "archive logging" in db2 , define the database configuration parameters logarchmeth1 and logarchmeth2.
For logarchmeth1, you must set value to either logretain or disk and, for logarchmeth2, set value to either off or disk. For example:
db2 UPDATE DB CFG FOR databasename USING LOGARCHMETH1 LOGRETAIN
db2 UPDATE DB CFG FOR databasename USING LOGARCHMETH2 OFF
or
db2 UPDATE DB CFG FOR databasename USING LOGARCHMETH1 DISK:/u/dbuser/archived_logs
db2 UPDATE DB CFG FOR databasename USING LOGARCHMETH2 OFF
The user executing this command must have SYSADM, SYSCTRL, or SYSMAINT authority. This does not have to be the HVR database user.
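The resulting settings can be verified by listing the database configuration and checking the reported values of LOGARCHMETH1 and LOGARCHMETH2, for example:
db2 GET DB CFG FOR databasename
Note that switching a database from circular to archive logging places it in backup pending state, so a backup is required before the database can be used again.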
Capturing from SAP Source
HVR allows you to capture changes from a Db2 for LUW database which is used by an SAP ECC system. To enable capture using SAP dictionary, the location property SAP Source (SAP_Source_Schema) must be defined while creating a location or by editing the existing location's source and target properties. Then while adding tables to a channel, the Table Selection dialog will display the SAP tables defined in the SAP dictionaries.
For Usage-based Subscription, an additional SAP Unpack license is required to unpack the cluster and pool tables from the SAP database. Contact Fivetran Technical Support to obtain the necessary SAP Unpack license. For the Consumption-based model, a separate license is NOT required.
When SAP pool, cluster, and long text (STXL) tables are added to a channel using the Table Selection dialog, the following actions are automatically defined:
TableProperties with parameters PackedInside, CoerceErrorPolicy, and CoerceErrorType
- For each container (pool/cluster) table, a separate action TableProperties is defined.
- This action is not defined for long text (STXL) tables.
Transform with parameter SapUnpack
- Irrespective of the number of tables, only a single action Transform is defined.
SAP columns are non-nullable by default. However, they are described as nullable in the hub's repository and thus as nullable in the target. This applies only to non-key columns; key columns remain non-nullable.
SAP Data Types Conversion
Since v6.1.0/7
This option enables conversion/mapping of SAP-specific data types (available in the SAP dictionary metadata) in the source location to the corresponding data types in the target location. The SAP-specific data type is first localized to the source database's data type and then mapped to HVR's repository data type. For example, if you have an SAP system on Db2 for LUW, the DATS data type is localized as Db2 for LUW's date type and then mapped to the HVR repository type ansidate.
This feature is supported for the following SAP specific data types:
- DATS
If the SAP Data Types Conversion option is NOT selected, SAP-specific data types are mapped to various other HVR repository data types. For more information, see Data Type Mapping for SAP NetWeaver (or SAP dictionary).
If the SAP Data Types Conversion option is selected during location creation, HVR will automatically define action TableProperties with parameters CoerceErrorPolicy and CoerceErrorType.
If the SAP Source (SAP_Source_Schema) location property/option is selected during location creation, the SAP Data Types Conversion option is also selected by default.
However, to enable SAP data type conversion for an existing location, select the SAP Data Types Conversion option by editing the location's source and target properties and then manually define action TableProperties with parameters CoerceErrorPolicy and CoerceErrorType.
Capturing from Db2 DPF
To capture from Db2 Database Partitioning Feature (DPF), you need to install the HVR Agent on each node of the Db2 DPF cluster. Additionally, you need to create a separate location for each node, so that changes can be captured specifically per node.
For environments without the HVR Agent, you need to:
- Create a separate location for each node in the Db2 DPF cluster.
- On each location, add the Environment action that sets the DB2NODE value to the corresponding node number in the DPF cluster (see the sketch below).
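Conceptually, this action has the same effect as setting the DB2NODE environment variable before connecting, which directs the connection to a specific partition; an illustrative shell sketch (the node number 2 is an assumption):
export DB2NODE=2
db2 terminate
db2 CONNECT TO databasename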
Compare and Refresh from Db2 DPF
In a Db2 DPF environment, transactionality is maintained within individual nodes but not across nodes. Selecting data from one node retrieves data from all nodes. Therefore, when performing operations like Compare and Refresh, it is crucial to select only one Db2 DPF location to prevent data duplication and potential inconsistencies. This principle also applies when you Integrate changes into a Db2 DPF location.
Limitations
Truncate/load operations across multiple Db2 DPF nodes may not work, because the truncates for each node are captured/integrated at different points in time, potentially undoing work already done. For detailed information on limitations in database logging within a DPF environment, see the IBM documentation.
Latency statistics may be inaccurately reported due to the complexities introduced by the DPF environment.
Capture Limitations
This section describes the limitations for capturing changes from Db2 for Linux, UNIX and Windows using HVR.
Compare and Refresh from Db2 for LUW
HVR allows you to perform only Compare and Refresh from a Db2 for LUW database (without using Capture). This section describes the configuration requirements for performing only Compare and Refresh from a Db2 for LUW database.
Grants for Compare and Refresh from Db2 for LUW
This section lists the grants required for performing only Compare and Refresh from a Db2 for LUW database.
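At a minimum, this requires the HVR database user to be able to read the replicated tables; an illustrative grant (the schema, table, and user names are assumptions):
GRANT SELECT ON TABLE myschema.mytable TO USER hvruser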