Capture from SAP NetWeaver on Oracle
This section describes the requirements for capturing changes from SAP NetWeaver on Oracle.
Table Types
HVR supports capture from the following table types in Oracle:
- Ordinary (heap-organized) tables
- Partitioned tables
- Index-organized tables
Capture Methods
HVR allows the following methods for capturing (Capture) changes from SAP NetWeaver on Oracle:
Direct Redo Access
Capture changes directly from Oracle's redo and archive files.
In this capture method (Capture_Method=DIRECT), HVR reads transaction log records directly from the DBMS log files using file I/O. This method is very fast at capturing changes from the Oracle database. The DIRECT log read method requires that the HVR Agent is installed on the source database machine.
Log-based capture does not support the Oracle parameter setting DB_BLOCK_CHECKSUM=OFF. The values TYPICAL (the default) and FULL (unnecessarily high for HVR's purposes) are supported.
Native Access to Redo Files
HVR's capture job needs permission to read Oracle's redo and archive files at the operating system level. There are three different ways that this can be done:
- Install HVR so it runs as Oracle's user (e.g. oracle).
- Install HVR under a username (e.g. User) which is a member of Oracle's default Operating System group (typically either oinstall or dba).
- On Linux, the default group of user oracle can be seen in the 4th field of its line in /etc/passwd. The HVR user can be made a member of that group by adding its name to that group's line in file /etc/group (e.g. line oinstall:x:101:oracle,hvr).
- On Windows, right-click My Computer and select Manage ▶ Local Users and Groups ▶ Groups ▶ ora_dba ▶ Add to Group ▶ Add.
Note that adding HVR's user to group dba will also give HVR the sysdba privilege.
- Create special Access Control Lists (ACLs) on these files so that HVR's user can read them.
On Linux, the following commands can be run as user oracle to allow the HVR user (here hvr) to see redo files in $ORACLE_HOME/oradata/SID and archive files in $ORACLE_HOME/ora_arch. Note that an extra "default ACL" is needed for the archive directory, so that future archive files will also inherit the directory's permissions.
$ setfacl -R -m u:hvr:rx $ORACLE_HOME/oradata
$ setfacl -R -m u:hvr:rx,d:u:hvr:rx $ORACLE_HOME/ora_arch
Extra Grants For Accessing Redo Files Over TNS
For certain Oracle versions (11.2.0.2 and 11.2.0.3), HVR reads the redo and archive files directly through its SQL connection, provided those files are on ASM storage or the connection to the source database is over TNS.
The HVR database user must be granted the privileges mentioned in section Grants for Log-Based Capture, and additionally the select any transaction privilege, for HVR to read the redo and archive files directly through its SQL connection:
grant select any transaction to username;
Archive Only
Capture changes from Oracle's transaction log backup files.
In this capture method (Capture_Method=ARCHIVE_ONLY), HVR reads changes from redo archive files available in the directory specified in ARCHIVE WATCH DIRECTORY (Archive_Log_Path). Also, HVR can be configured to consider only the files that match the format defined in LOG ARCHIVE FORMAT (Archive_Log_Format).
HVR does not read anything from online redo files or the 'primary' archive destination. This allows the HVR process to reside on a different machine than the Oracle DBMS and read changes from files that are sent to it by some remote file copy mechanism (e.g. FTP). The capture job still needs an SQL connection to the database for accessing dictionary tables, but this can be a regular connection.
Using the Archive Only capture method may result in longer replication delays compared to the 'online' mode. To minimize these delays, Oracle can be forced to issue an archive once per predefined period (for example, by setting the initialization parameter ARCHIVE_LAG_TARGET).
HVR supports cross-platform capture of archived redo files. For example, on a Linux machine where Capture_Method=ARCHIVE_ONLY is defined, HVR can capture archived redo files coming from an AIX machine.
On RAC systems, the delay is determined by the slowest or least busy node, because archives from all threads must be merged by SCN to generate the replicated data flow.
Archive Watch Directory
HVR will search for the log archives in the directory specified in this field.
Any process such as the Oracle archiver (if another LOG_ARCHIVE_DEST_N is defined), RMAN, or a simple shell script can copy log archive files to this directory. Therefore, it is important to ensure that the files in this directory are purged periodically to prevent the directory from filling up.
Log Archive Format
If the names of the compressed archive log files differ from the original names of the archive log files, you may describe the filename format (template) of the transaction log archive files stored in the ARCHIVE WATCH DIRECTORY.
This field accepts the following format variables:
- %d - match numbers (zero or more decimal digits). Numbers matched using this variable are ignored by HVR.
- %r or %R - resetlogs ID
- %s or %S - log sequence number
- %t or %T - thread number
- %z or %Z - match alphanumeric characters. Characters matched using this variable are ignored by HVR.
Wildcard character * is not supported.
For more information about the format variables, refer to LOG_ARCHIVE_FORMAT in Oracle documentation.
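To illustrate how such a template maps to concrete file names, here is a minimal Python sketch (not HVR code; only the variable semantics from the list above are assumed) that converts a LOG ARCHIVE FORMAT template into a regular expression and matches it against an archive file name:

```python
import re

# Rough regex equivalents of the format variables listed above.
# %d and %z match digits/alphanumerics that HVR ignores;
# %r, %s and %t capture resetlogs ID, sequence and thread number.
_VARS = {
    "%d": r"\d*",
    "%r": r"(?P<resetlogs>\d+)", "%R": r"(?P<resetlogs>\d+)",
    "%s": r"(?P<sequence>\d+)",  "%S": r"(?P<sequence>\d+)",
    "%t": r"(?P<thread>\d+)",    "%T": r"(?P<thread>\d+)",
    "%z": r"[0-9A-Za-z]*",       "%Z": r"[0-9A-Za-z]*",
}

def template_to_regex(template):
    """Translate a LOG ARCHIVE FORMAT template into a compiled regex."""
    out = []
    i = 0
    while i < len(template):
        tok = template[i:i + 2]
        if tok in _VARS:
            out.append(_VARS[tok])
            i += 2
        else:
            out.append(re.escape(template[i]))
            i += 1
    return re.compile("".join(out) + r"\Z")

# Matches the compressed-archive example discussed below ('.gz' suffix).
m = template_to_regex("o1_mf_%t_%s_%z.arc.gz").match("o1_mf_1_41744_234324343.arc.gz")
print(m.group("thread"), m.group("sequence"))  # 1 41744
```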
When this field is not defined, HVR will by default query the database for Oracle's initialization parameter LOG_ARCHIVE_FORMAT.
Example 1:
Suppose an archive log file is named 'o1_mf_1_41744_234324343.arc' according to the Oracle archive log format pattern 'o1_mf_<thread>_<sequence>_<some_number>.arc'. The archive file is copied to some custom directory and compressed to 'o1_mf_1_41744_234324343.arc.gz', with the .gz extension added to its name. In such a case, specify the pattern 'o1_mf_%t_%s_%z.arc.gz' in the LOG ARCHIVE FORMAT field.
Example 2:
Suppose the compressed archive log file is named CQ1arch1_142657_1019376160.dbf.Z, with the .Z extension added to its name. In such a case, specify the pattern 'CQ1arch%t_%s_%r.dbf.Z' in the LOG ARCHIVE FORMAT field.
The following defaults are used by HVR if the LOG ARCHIVE FORMAT field is not defined:
For an Oracle ASM system, the default name pattern is 'thread_%t_seq_%s.%d.%d'.
For a non-ASM system:
- if Fast Recovery Area (FRA) is used, the default name pattern is 'o1_mf_%t_%s_%z_.arc'
- if FRA is not used, HVR uses the following SQL query to obtain the format:
SELECT value FROM v$parameter WHERE name = 'log_archive_format';
HVR picks up the first valid archive destination and then finds the format as described above.
To determine whether the destination uses FRA, HVR uses the following query:
SELECT destination FROM v$archive_dest WHERE dest_name='LOG_ARCHIVE_DEST_[n]';
For example, for destination 1, the query is as follows:
SELECT destination FROM v$archive_dest WHERE dest_name='LOG_ARCHIVE_DEST_1';
If the query returns USE_DB_RECOVERY_FILE_DEST, it indicates the destination uses FRA.
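The selection logic described above can be summarized in a short sketch (illustrative only, not HVR's actual implementation; the function name and inputs are assumptions):

```python
def default_archive_format(is_asm, uses_fra, log_archive_format):
    """Pick the default archive name pattern as described above.

    log_archive_format is the value of the database parameter
    'log_archive_format' (SELECT value FROM v$parameter ...),
    consulted only for non-ASM destinations that do not use FRA.
    """
    if is_asm:
        return "thread_%t_seq_%s.%d.%d"
    if uses_fra:
        # Destination query returned USE_DB_RECOVERY_FILE_DEST.
        return "o1_mf_%t_%s_%z_.arc"
    return log_archive_format

print(default_archive_format(False, True, "%t_%s_%r.dbf"))  # o1_mf_%t_%s_%z_.arc
```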
Capturing from Compressed Archive Log Files
HVR supports capturing changes from compressed archive log files that are moved from a 'primary' archive log directory to a custom directory. HVR automatically detects the compressed files, decompresses them, and reads data from them.
This feature is activated when the directory defined in ARCHIVE WATCH DIRECTORY is set to the custom directory.
Only gzip compressed files are supported.
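Gzip files can be recognized by their two-byte magic header (0x1f 0x8b). The following minimal Python sketch shows a detect-then-decompress approach (illustrative only, not HVR's implementation):

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"  # first two bytes of any gzip file

def read_archive(path):
    """Return the contents of an archive file, transparently
    decompressing it when it starts with the gzip magic bytes."""
    with open(path, "rb") as f:
        head = f.read(2)
    if head == GZIP_MAGIC:
        with gzip.open(path, "rb") as f:
            return f.read()
    with open(path, "rb") as f:
        return f.read()
```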
Compressed Archive Log File Format
If the names of the compressed archive log files differ from the original names of the archive log files, then the relevant naming format should be defined in LOG ARCHIVE FORMAT field.
Grants for Log-Based Capture
This section lists the grants required for capturing changes from Oracle database.
HVR can either connect to the database as the owner of the replicated tables, or it can connect as a special user (e.g. username).
The HVR database user must be granted the create session privilege:
grant create session to username;
To improve the performance of Activate Replication (hvractivate) for channels with a large number of tables (more than 150), HVR creates a temporary table (hvr_sys_table) within a schema. The HVR database user must be granted the create table privilege so that HVR can automatically create this temporary table. If you do not wish to grant the create table privilege to the HVR database user, you can instead create this temporary table manually using the SQL statement:
create global temporary table hvr_sys_table (table_name varchar(128), table_owner varchar(128));
This temporary table is not used when capturing from a Data Guard standby database.
The HVR database user must be granted the select any table privilege to replicate tables that are owned by other schemas (using the parameter Schema in action TableProperties). Otherwise, the HVR database user must be granted individual table-level select privileges.
grant select any table to username;
The HVR database user must be granted the select any dictionary privilege to read the data dictionaries in Oracle's SYS schema.
grant select any dictionary to username;
Alternatively, the HVR database user may be granted the select privilege only on the required data dictionary objects:
grant select on sys.v_$archive_dest to username;
grant select on sys.v_$archived_log to username;
grant select on sys.v_$database to username;
grant select on sys.v_$database_incarnation to username;
/* The following grant (sys.v_$dnfs_files) is required for identifying the redo files located on DirectNFS */
grant select on sys.v_$dnfs_files to username;
/* The following grant (sys.v_$encryption_wallet) is required for decryption */
grant select on sys.v_$encryption_wallet to username;
grant select on sys.v_$log to username;
grant select on sys.v_$logfile to username;
/* The following grant (sys.v_$logmnr_contents) is required for the Oracle LogMiner capture method */
grant select on sys.v_$logmnr_contents to username;
grant select on sys.v_$nls_parameters to username;
grant select on sys.v_$parameter to username;
grant select on sys.v_$pdbs to username;
/* The following grant (sys.v_$system_parameter) is required for reading the value of the 'filesystemio_options' parameter, which in turn is used for reading the redo logs */
grant select on sys.v_$system_parameter to username;
grant select on sys.all_cons_columns to username;
grant select on sys.all_constraints to username;
grant select on sys.all_ind_columns to username;
grant select on sys.all_indexes to username;
grant select on sys.all_lobs to username;
grant select on sys.all_log_groups to username;
grant select on sys.all_objects to username;
grant select on sys.all_tab_cols to username;
grant select on sys.all_tables to username;
grant select on sys.all_triggers to username;
grant select on sys.all_encrypted_columns to username;
grant select on sys.dba_tablespaces to username;
grant select on sys.obj$ to username;
/* The following grant (sys.ecol$) is required for Oracle Database 11.2 and above, since default values for added columns are stored differently */
grant select on sys.ecol$ to username;
grant select on sys.user$ to username;
grant select on sys.col$ to username;
grant select on sys.enc$ to username;
grant select on sys.tabpart$ to username;
grant select on sys.tabsubpart$ to username;
/* The following three grants are required for Refreshing Data (hvrrefresh with option -qrw) and action AdaptDDL */
grant select on sys.v_$locked_object to username;
grant select on sys.v_$transaction to username;
grant select on sys.dba_objects to username;
The HVR database user must be granted the following privilege to capture create table statements and add supplemental logging to the newly created table(s):
grant alter any table to username;
The HVR database user must be granted the following privileges to use action DbSequence:
grant select any sequence to username; grant select on sys.seq$ to username;
An alternative to all of the above grants is to provide the sysdba privilege to the HVR database user (e.g. username):
On Unix and Linux, add the user name used by HVR to the line in /etc/group for the Oracle OSDBA group (typically dba).
On Windows, right-click My Computer and select Manage ▶ Local Users and Groups ▶ Groups ▶ ora_dba ▶ Add to Group ▶ Add.
Supplemental Logging
HVR needs Oracle's supplemental logging feature enabled on the tables that it replicates. Otherwise, when an update is done, Oracle will only log the columns that changed, but HVR also needs other data (e.g. the key columns) so that it can generate a full update statement on the target database. Supplemental logging can be set at the database level and on specific tables. In certain cases, this requirement can be lifted by defining action ColumnProperties with parameter CaptureFromRowId; for more information, see section Capturing from Oracle ROWID.
The very first time that Activate Replication (hvractivate) runs, it checks whether the database allows any supplemental logging at all. If it does not, Activate Replication will attempt the statement alter database add supplemental log data (see Grants for Supplemental Logging for the privileges needed to execute this statement). Note that this statement will hang if other users are changing tables. This enables 'minimal supplemental logging'; it does not itself cause extra logging; that only happens once supplemental logging is also enabled on a specific table.
To see the supplemental logging status of a specific table, use the following query:
select log_group_type from all_log_groups where table_name='mytable'
To check the current database-level state of supplemental logging, use the following query:
select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_all from v$database
This query should return at least YES for supplemental_log_data_min.
To disable supplemental logging, use the following statement:
alter database drop supplemental log data
Activate Replication will normally only enable supplemental logging for the key columns of each replicated table, using statement alter table table1 add supplemental log data (primary key) columns. But in some cases, Activate Replication will instead perform alter table table1 add supplemental log data (all) columns. This happens if any of the following conditions is met:
- the key defined in the replication channel differs from the Oracle table's primary key or
- a table has any type of compression enabled or
- no key is defined in the replication channel and parameter NoDuplicateRows of action TableProperties is not defined on the table or
- one of the following actions with the mentioned parameters is defined on the table:
On the capture location:
Location | Action | Parameter |
---|---|---|
Source | ColumnProperties | CaptureExpression |
Source | Restrict | CaptureCondition, HorizColumn |
On the target location:
Location | Action | Parameter |
---|---|---|
Target | FileFormat | BeforeUpdateColumns |
On any location:
Location | Action | Parameter |
---|---|---|
* | CollisionDetect | |
* | ColumnProperties | IntegrateExpression, Key or TimeKey |
* | Integrate | DbProc or Resilient |
* | Integrate | RenameExpression |
* | Restrict | AddressTo or IntegrateCondition |
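The choice between (primary key) and (all) column logging can be sketched as a predicate over the conditions above (illustrative only; the function and parameter names are assumptions, not HVR internals):

```python
def needs_all_column_logging(channel_key_differs_from_pk,
                             table_is_compressed,
                             no_key_and_no_noduplicaterows,
                             has_triggering_action):
    """True when Activate Replication must use
    'add supplemental log data (all) columns' rather than
    '(primary key) columns', per the conditions listed above."""
    return (channel_key_differs_from_pk
            or table_is_compressed
            or no_key_and_no_noduplicaterows
            or has_triggering_action)
```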
Supplemental Log Data Subset Database Replication
HVR does not support the 'Supplemental Log Data Subset Database Replication' option on Oracle 19c and higher versions. This feature must be disabled for your Oracle database when using HVR for replication.
To verify whether the database is enabled for subset database replication ('YES' or 'NO'), use the following query:
select supplemental_log_data_sr from v$database;
To disable this option, use the following statement:
alter database drop supplemental log data subset database replication;
Capturing from Oracle ROWID
If none of the above requirements force HVR to enable supplemental logging on all columns, the requirement for supplemental logging on key columns can be removed by defining action ColumnProperties with parameters CaptureFromRowId and SurrogateKey on the channel. HVR will then treat the Oracle rowid column as part of the table and use it as the key column during replication. To ensure the capture of rowid values and their use as surrogate replication keys, the following two actions must be added to the channel before adding tables:
Location | Action | Parameter | Annotation |
---|---|---|---|
Source | ColumnProperties | Name=hvr_rowid CaptureFromRowId | This action should be defined for capture location(s) only. |
* | ColumnProperties | Name=hvr_rowid SurrogateKey | This action should be defined for both capture and integrate locations. |
Grants for Supplemental Logging
The HVR database user must be granted the privileges mentioned in section Grants for Log-Based Capture and additionally the following grants for using supplemental logging:
To execute alter database add supplemental log data, the HVR database user must have the sysdba privilege. Otherwise, HVR will generate an error message requesting that a different user with this privilege execute this statement.
If HVR needs to replicate tables that are owned by other schemas, then optionally the HVR user can also be granted the alter any table privilege, so that Activate Replication (hvractivate) can enable supplemental logging on each of the replicated tables. If this privilege is not granted, Activate Replication will not be able to execute the alter table … add supplemental log data statements itself; instead, it will write all the statements that it needs to execute into a file and then write an error message requesting that a different user (who does have this privilege) execute these alter table statements.
Accessing Redo and Archive
The Oracle instance must have archiving enabled. If archiving is not enabled, HVR will lose changes whenever it falls behind the online redo logs or is suspended for a time.
HVR supports capturing changes made by Oracle's Direct-Load INSERT feature (e.g. insert statements with the append hint: insert /*+ append */ into). For HVR to capture these changes:
- A table/tablespace must not be in the NOLOGGING mode, because in this mode, data is inserted without redo logging.
- Archive log mode must be enabled.
Archiving of redo log files can be enabled by running the following statement as sysdba against a mounted but unopened database:
alter database archivelog
The current state of archiving can be checked using the following query:
select log_mode from v$database
The current archive destination can be checked with the query select destination, status from v$archive_dest. By default, this query returns the values USE_DB_RECOVERY_FILE_DEST, VALID, which means that HVR will read changes from within the flashback recovery area. Alternatively, an archive destination can be defined with the following statement (followed by an instance restart): alter system set log_archive_dest_1='location=/disk1/arch'
Often Oracle's RMAN will be configured to delete archive files after a certain time. But if they are deleted too quickly, HVR may fail if it falls behind or is suspended for a time. This can be resolved by re-configuring RMAN so that archive files are guaranteed to be available for a specific, longer period (e.g. 2 days).
If HVR is restarted, it will need to go back to the start of the oldest transaction that was still open, so if the system has long-running transactions, archive files will need to be retained for a longer period.
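The retention requirement can be stated numerically: the oldest archive that must still be available is determined by the capture restart point or the start of the oldest still-open transaction, whichever is older. A minimal sketch (SCNs as plain integers; the names are assumptions for illustration):

```python
def required_log_start(capture_restart_point, open_transaction_starts):
    """Earliest log position (e.g. SCN) whose archive file must still be
    on disk for a capture restart: the capture position itself, or the
    start of the oldest still-open transaction, whichever is older."""
    return min([capture_restart_point, *open_transaction_starts])

# A long-running transaction (started at SCN 800) forces retention
# back beyond the capture restart point (SCN 1000).
print(required_log_start(1000, [800, 950]))  # 800
```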
Managing Archive or Redo Log files
If log-based capture is configured for an Oracle database, HVR may need to revert to reading Oracle's archive/redo files. However, each site typically has an existing backup/recovery regime (normally RMAN) that periodically deletes these archive files. To ensure that the archive files remain available for the HVR capture job(s), RMAN must be configured so that the archive files are retained for a sufficient amount of time. The length of this time varies with the replication environment and depends on factors such as how long replication could be interrupted, and after what period of time the system would be reset using a Refresh.