Capture from SAP NetWeaver on HANA
This section describes the requirements for capturing changes from SAP NetWeaver on HANA.
Table Types
HVR supports capture from the following table type in HANA:
- column-storage
Capture Methods
HVR allows the following methods for capturing (Capture) changes from SAP NetWeaver on HANA:
Direct DBMS Log Reading
In this capture method (Capture_Method=DIRECT), HVR reads changes directly from HANA's log segments and log backups. This method is very fast in capturing changes from the HANA database. However, it requires HVR to be installed on the HANA machine.
Archive Only
In this capture method (Capture_Method=ARCHIVE_ONLY), HVR reads changes from backup transaction log files. This capture method allows the HVR process to reside on a machine other than the one on which the HANA DBMS resides and to read changes from backup transaction log files that are delivered to it by some file transfer mechanism. HVR must be configured to find these files by defining the location properties BACKUP DIRECTORY (Archive_Log_Path) and, optionally, FILENAME FORMAT (Archive_Log_Format) while creating a location or by editing the existing location's source and target properties.
This capture method generally has higher latency than the Direct DBMS Log Reading method because changes can only be captured after the transaction log backup file is created. In return, it enables high-performance log-based capture with minimal OS privileges.
Grants for Capture
The following grants and access configurations are required for capturing changes from SAP NetWeaver on HANA:
HVR requires access to data from system dictionaries. This access is provided through views created in the HANA database by the SYSTEM user. You can create these views in HANA using one of the following methods:
Using the HVR script:
- Open the HVR script file hvrhanaviews.sql available in the HVR_HOME/dbms/hana directory.
- Update the first line in the script to specify the schema where the views are to be created. For example, to use the default SAP schema, replace
SET SCHEMA _HVR;
with
SET SCHEMA {SAPABAPSCHEMA};
- Save the changes to the script file.
- Connect to the HANA database as user SYSTEM.
- Execute the modified script to create the necessary views in HANA.
Using the SAP GUI:
- Logon to SAP.
- Start transaction DBCO.
- Create a secondary connection using HANA user SYSTEM.
This connection should only be used for creating views, never for the standard replication process.
- Run the program /HVR/SAPAPPCONNECT_HANASYSVIEW using transaction SA38.
- Populate the field SYS DB Connection with the secondary connection created earlier and select the show log option.
- Execute the program.
Views are created in the default SAP schema with names in the format /HVR/view_name and will therefore not conflict with other existing SAP standard and custom objects.
Grant for reading data from the database.
- If the SAP parameter DB Connection Name is not defined, the default SAP database user is used. This user has access to all tables in SAP’s default schema, and there is no need for additional grants.
HVR recommends using the default SAP database user for reading data from the database.
- If the SAP parameter DB Connection Name is defined with a non-default SAP database user (i.e., the HVR user), the HVR database user must be granted the select privilege to read from the default SAP schema:
grant select on schema default_SAP_schema to username;
Log Mode and Log Archive Retention
For HVR to capture changes from SAP NetWeaver on HANA, the automatic backup of transaction logs must be enabled in HANA. Normally, HVR reads changes from the 'online' transaction log file. However, after an interruption (e.g. a 2-hour outage), HVR must be able to read from transaction log backup files to capture the older changes. Full backups are not necessary because HVR only reads transaction log backup files.
To enable automatic log backup in HANA, the log mode must be set to normal. Once the log mode is changed from overwrite to normal, a full data backup must be created. For more information, search for Log Modes in SAP HANA Documentation.
The log mode can be changed using HANA Studio. For detailed steps, search for Change Log Modes in SAP HANA Documentation. Alternatively, you can execute the following SQL statement:
alter system alter configuration ('global.ini', 'SYSTEM') set ('persistence', 'log_mode') = 'normal' with reconfigure;
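To verify the current setting, you can query the configuration contents view. This is a hedged sketch: M_INIFILE_CONTENTS is a standard HANA monitoring view, and the filter values follow the ALTER SYSTEM statement above.

```sql
-- Check the configured log mode; expect 'normal' for HVR capture.
SELECT file_name, layer_name, section, key, value
FROM   m_inifile_contents
WHERE  file_name = 'global.ini'
  AND  section   = 'persistence'
  AND  key       = 'log_mode';
```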
Transaction Log (archive) Retention
If a backup process has already moved the transaction log backup files to tape and deleted them, HVR's capture will give an error, and a refresh will have to be performed before replication can be restarted. The amount of 'retention' needed (in hours or days) depends on organizational factors (how real-time must replication be?) and practical issues (does a refresh take 1 hour or 24 hours?).
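To judge how much log backup history is still available, you can inspect HANA's backup catalog. This is a sketch under the assumption that M_BACKUP_CATALOG (a standard HANA monitoring view) records transaction log backups with entry type 'log backup':

```sql
-- List the oldest and newest successful transaction log backups still in the catalog.
SELECT MIN(utc_start_time) AS oldest_log_backup,
       MAX(utc_start_time) AS newest_log_backup,
       COUNT(*)            AS log_backup_count
FROM   m_backup_catalog
WHERE  entry_type_name = 'log backup'
  AND  state_name      = 'successful';
```

If the oldest entry is more recent than the longest expected capture interruption, the retention window is too short.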
When performing log-shipping (Archive Only capture), file names must not be changed in the process because begin-sequence and timestamp are encoded in the file name and capture uses them.
OS Level Permissions or Requirements
To capture from the HANA database, HVR should be installed on the HANA database server itself, and the HVR Agent listener should be configured to accept remote connections. The operating system (OS) user under which HVR is running should have READ permission on the HANA database files. This can typically be achieved by adding this user to the sapsys user group.
Channel Setup Requirements
It is not possible to enable 'supplemental logging' on HANA. This means that the real key values are generally not available to HVR during Capture. A workaround for this limitation is to capture the Row ID values and use them as a surrogate replication key.
The following two additional actions should be defined prior to Adding Tables to a Channel to instruct HVR to capture Row ID values and to use them as surrogate replication keys.
Location | Action | Parameter(s) | Annotation |
---|---|---|---|
Source | ColumnProperties | Name=hvr_rowid CaptureFromRowId | This action should be defined for capture locations only. |
* | ColumnProperties | Name=hvr_rowid SurrogateKey | This action should be defined for both capture and integrate locations. |
HANA Encrypted Log Files and Log Backups
Since v6.1.0/36
HVR supports capturing changes from HANA encrypted log files and log backups.
If both compression and encryption are enabled for the log backups, you may randomly encounter the following error:
F_JZ0A1B: Unsupported record version code 251 encountered at seq# 520809984 in backup transaction log file.
To resolve this issue, you need to disable compression or encryption, or contact SAP HANA support.
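Before configuring capture from encrypted logs, you can check which encryption scopes are active. This is a hedged sketch assuming the standard HANA view M_ENCRYPTION_OVERVIEW is available:

```sql
-- Shows whether persistence, log, and backup encryption are active.
SELECT scope, is_encryption_active
FROM   m_encryption_overview;
```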
To capture from encrypted log files and log backups, the following must be configured:
Execute the hvrhanaviews.sql script under the schema containing the SAP tables. This script is available in the HVR_HOME/dbms/hana directory. It will create a set of views that HVR uses to read from SAP dictionaries on SAP NetWeaver on HANA.
The HVR database user must be granted the following privilege to read the encrypted log files and log backups:
grant execute on SAP_Schema."/HVR/ROOT_KEYS_EXTRACT" to username;
If using a multi-tenant configuration, the encryption configuration management should be delegated to tenants. Execute the following command on the system database to change control over to the tenant databases:
alter system encryption configuration controlled by local databases;
Set password for the encryption root keys backup in the database from which the capture is to be done:
alter system set encryption root keys backup password password;
Define the location property ROOT KEYS BACKUP PASSWORD (HANA_Root_Keys_Backup_Password) while creating a SAP HANA location or by editing the existing location's source and target properties. Set this property to the password that was set for the encryption root keys backup in the previous step.
To set this location property from the CLI, use the command hvrlocationconfig:
hvrlocationconfig hub hana_location_name HANA_Root_Keys_Backup_Password=password;
Recognizing SAP Archived Records
Since v6.1.5/7
HVR can distinguish between records manually deleted by a user and records automatically archived by SAP NetWeaver on HANA. By recognizing deletions performed by the archiving process, HVR can mark these records as archived instead of deleted, ensuring the data remains relevant for reporting purposes.
Perform the following to enable this feature:
Configure SAP NetWeaver on HANA database:
Import the latest HVR transport files available in the HVR_HOME/dbms/netweaver directory.
In the SAP NetWeaver UI, click Set archive username to add an SAP application user to the user list in /N/HVR/TRIGGERS transaction. This user will perform archive deletion in SAP NetWeaver on HANA. All deletions made by this user will be marked as archived.
The internal table /HVR/IC_ARCHBLK tracks archive deletions by creating a single record for each process. It serves only as a transactional marker and does not serve any other purposes. To prevent data accumulation, this table requires periodic cleanup. You can use the built-in functionalities of your database system to automate the cleanup of this table. Alternatively, you can manually delete old entries from the table.
Create the SAP NetWeaver on HANA location and define the required actions in HVR:
Select the Recognize SAP Archiving Process option (or define the equivalent location property SAP_Archiving) while creating a location or by editing the existing location's source and target properties.
Define the following three actions on the target location:
Location | Action | Parameter(s) |
---|---|---|
Target | ColumnProperties | Name={col_name_for_arch_deletes} ArchiveDelete Extra |
Target | Restrict | CompareCondition="{col_name_for_arch_deletes}=0" Context=!compare_deleted |
Target | Restrict | Context=refresh_keep_deletes RefreshCondition="{col_name_for_arch_deletes}=0" |
These action definitions are required to create an extra column col_name_for_arch_deletes in the target location. The value populated in this column indicates the type of record deletion.
- If a row is automatically archived by SAP, the value in this column will be set to 2.
- If parameter SoftDelete is also defined, value 1 indicates the record was manually deleted by a user (who is not registered as an SAP application user that will perform archive deletion).
- Value 0 indicates the record is not deleted.
If you use ArchiveDelete and SoftDelete for the target location simultaneously, the columns created for ArchiveDelete and SoftDelete must use the same column names. Otherwise, errors will occur.
Defining ColumnProperties action with ArchiveDelete parameter is similar to defining SoftDelete for identifying the archived deletes.
To ensure that the feature is configured correctly and functioning as intended, check the HVR logs for the target (Integrate) location. The logs should include a line similar to:
Integrated number_of_records archive deletes for table table_name
Capturing from Backint
Since v6.1.0/36
HVR supports capturing changes from log files stored in the Backint for SAP HANA interface.
Backint for SAP HANA is an API that enables direct connectivity between third-party backup agents/applications and the SAP HANA database. Backups are transferred from the SAP HANA database to the third-party backup agent, which runs on the SAP HANA database server and sends the backups to the third-party backup server.
Both capture methods (Direct DBMS Log Reading and Archive Only) are supported when the HVR and HANA database are located on the same node. However, if they are on separate nodes, only the Archive Only capture method is supported.
In order to capture data from log files stored in Backint for SAP HANA, the backup application should be installed and configured on the node where HVR is installed.
HVR uses the HANA system views (backup catalogs) to get the list of existing log files.
HVR is compatible with any third party backup application certified by SAP HANA. We have tested HVR with AWS Backint agent, IBM Tivoli Storage FlashCopy Manager, and Rubrik.
You can disable the Backint functionality in HVR by setting the environment variable ZIZ_HANA_USE_BACKUP_CATALOG to 0.
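To confirm that log backups are actually written through Backint rather than to file, you can query the backup catalog. This is a sketch assuming the standard HANA views M_BACKUP_CATALOG and M_BACKUP_CATALOG_FILES, where DESTINATION_TYPE_NAME distinguishes Backint from file destinations:

```sql
-- Count log backup pieces per destination type (e.g. 'file' vs 'backint').
SELECT f.destination_type_name, COUNT(*) AS pieces
FROM   m_backup_catalog c
JOIN   m_backup_catalog_files f
  ON   c.backup_id = f.backup_id
WHERE  c.entry_type_name = 'log backup'
GROUP BY f.destination_type_name;
```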
Configuration for Capturing from Backint
To enable HVR to retrieve data from log files stored in Backint for SAP HANA, the following configuration steps are required:
If you are upgrading from HVR 6.1.0/35 or an older version, run the script hvrhanaviews.sql available in the HVR_HOME/dbms/hana directory under the schema containing the SAP tables. The script creates a set of views that HVR uses to read from SAP dictionaries on SAP NetWeaver on HANA.
If HVR and HANA database are located on different nodes, define the location properties BACKINT EXECUTABLE PATH (Hana_Backint_Executable_Path) and BACKINT CONFIGURATION FILE PATH (Hana_Backint_Configuration_Path).
Add the Backint application as a trusted external application. By default, HVR does not trust external applications.
Copy the file hvrosaccess_example.conf from HVR_HOME/etc to HVR_CONFIG/etc and rename it to hvrosaccess.conf.
Edit the hvrosaccess.conf file to add the full path of the Backint application under Allowed_Plugin_Paths. Typically, the default path is /usr/sap/HDB/SYS/global/hdb/opt. Example:
{
  ## Following section 'safelists' two directories for command
  ## execution. Otherwise LDP will only run binaries and scripts
  ## inside '$HVR_CONFIG/plugin/agent' and '$HVR_CONFIG/plugin/transform'
  #
  Allowed_Plugin_Paths: [ /usr/sap/HDB/SYS/global/hdb/opt ]
}
If HVR and Backint agent are located on the same node, the values for the location properties BACKINT EXECUTABLE PATH (Hana_Backint_Executable_Path) and BACKINT CONFIGURATION FILE PATH (Hana_Backint_Configuration_Path) are received automatically from the SAP HANA database.
Note that specific configuration steps are required for the AWS Backint agent:
- Create a separate AWS Backint configuration file on the HANA node. For example:
cp /hana/shared/aws-backint-agent/aws-backint-agent-config.yaml \
   /hana/shared/aws-backint-agent/aws-backint-agent-config-hvr.yaml
- Modify the content of the LogFile parameter in the new file. For example:
LogFile: "/hana/shared/aws-backint-agent/aws-backint-agent-catalog-hvr.log"
- Define the file path for the new file (e.g. aws-backint-agent-config-hvr.yaml) in the location property BACKINT CONFIGURATION FILE PATH (Hana_Backint_Configuration_Path).
These configuration steps are necessary because the Backint log file is created with permissions such as 622 (read-write for the owner, write-only for the group and others). By default, the owner of the HANA database files is the hdbadm user, while the HVR instance user belongs only to the same group as hdbadm and therefore cannot read the log file. It is possible to work around this with a umask policy, but that is generally considered unsafe. As a result of this configuration, two Backint log files are generated: the first records all standard backup operations, while the second specifically contains information related to the HVR operations.
If the Backint storage is hosted on Amazon S3 and shared key authentication is used, it may be necessary to create an AWS configuration folder named $HOME/.aws and place a 'credentials' file inside it. The 'credentials' file should contain a valid section specifying the aws_access_key_id and aws_secret_access_key, which are required for authentication.
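A minimal credentials file follows the standard AWS shared-credentials format; the key values below are placeholders, not real credentials:

```ini
# $HOME/.aws/credentials
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
```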
Capture Limitations
This section describes the limitations for capturing changes from SAP NetWeaver on HANA using HVR.
HVR does not support Capture from multi-node HANA clusters.
Up to and including version 6.1.0/35, HVR does not support Capture from HANA encrypted logs.
HANA allows encryption of transaction logs and transaction log backups separately. So if only the transaction logs are encrypted and not the transaction log backups, then HVR can capture changes using the Archive Only method.
Since HANA does not support supplemental logging, HVR cannot process actions/parameters that require the value of a column when that column is not present in the HANA logs.
The following action/parameter definitions will not function due to this limitation:
- Action CollisionDetect
- Parameter TimeKey in action ColumnProperties
- Parameter DbProc in action Integrate
- Parameter Resilient in action Integrate (will not function only if a row is missing on the target)
- Parameter BeforeUpdateColumns in action FileFormat
- Parameter BeforeUpdateColumnsWhenChanged in action FileFormat
The following action/parameter definitions will not function if they require the value of a column that is not present in the HANA logs:
- Parameter CaptureExpression in action ColumnProperties (however, CaptureExpressionType=SQL_WHERE_ROW will function normally)
- Parameter IntegrateExpression in action ColumnProperties
- Parameter RenameExpression in action Integrate
- Parameter CaptureCondition in action Restrict
- Parameter IntegrateCondition in action Restrict
- Parameter HorizColumn in action Restrict
- Parameter AddressTo in action Restrict