Location Properties
This section lists and describes the location properties.
A location property specifies the characteristics/attributes of a location in Fivetran HVR. This can include location connection parameters, location/database type, database version, method of capture, etc.
A property that is automatically discovered by HVR when it connects to a database/location is called a discovered property. A user cannot specify or modify the value of a discovered property.
An array property and a map property can store multiple values. The syntax for updating them from the Command Line Interface (CLI) differs from that of single-value properties.
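For illustration, the sketch below shows the shape of scalar, array, and map property values (the property names are real, but the dictionary is a hypothetical representation, not HVR CLI or API syntax; the broker hostnames are invented):

```python
# Hypothetical sketch of scalar, array, and map property value shapes;
# not HVR CLI or API syntax.
location_properties = {
    # Scalar property: one value.
    "Database_Port": 1521,
    # Array property: an ordered list of values (see Kafka_Brokers).
    "Kafka_Brokers": ["broker1.example.com:9092", "broker2.example.com:9092"],
    # Map property: key-value pairs (see Oracle_BFile_Dirs_Mapping).
    "Oracle_BFile_Dirs_Mapping": {"/var/foo": "/yyy/zzz"},
}
```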
Property | Argument | Description |
---|---|---|
ABFS_Account | account | Name of the Azure Data Lake Storage Gen2 storage account. This property is required for connecting HVR to an ADLS Gen2 location. |
ABFS_Authentication_Method | method | Authentication method for connecting HVR to the Azure Data Lake Storage (ADLS) Gen2 server. Available options for method (as referenced by the related Azure_OAuth2_* and Azure_Shared_Secret_Key properties) are SHARED_KEY, CLIENT_CREDS, REFRESH_TOKEN, MSI, and USER_PASS. |
ABFS_Container | container | Name of the container available within the Azure Data Lake Storage Gen2 storage account (defined in ABFS_Account). |
ADL_Authentication_Method | method | Authentication method for connecting HVR to the Azure Data Lake Storage Gen1 server. Options for method include CLIENT_CREDS, MSI, and REFRESH_TOKEN (see the related Azure_OAuth2_* properties). |
Agent_Client_Kerberos_Keytab | keytabfile | Path to the Kerberos keytab file that contains a security key for identifying the hub to the agent during authentication (when connecting the hub to the agent). If defined, this keytab file is used instead of the operating system defaults. |
Agent_Client_Kerberos_Principal | principal | Kerberos principal name for identifying the hub to the agent during authentication (when connecting the hub to the agent). If defined, this principal name is used instead of the operating system defaults. |
Agent_Client_Kerberos_Ticket_Cache | file | Path to the Kerberos ticket cache file for identifying the hub to the agent during authentication (when connecting the hub to the agent). If defined, this ticket cache file is used instead of the operating system defaults. |
Agent_Fingerprint | | This is a discovered property that stores the unique identifier (fingerprint) of the server on which the HVR Agent is installed. |
Agent_Host | host | Hostname or IP-address of the server on which the HVR Agent is installed/running. |
Agent_HVR_CONFIG | | This is a discovered property that stores the directory path of HVR_CONFIG for the HVR Agent. |
Agent_HVR_HOME | | This is a discovered property that stores the directory path of HVR_HOME for the HVR Agent. |
Agent_Operating_System | | This is a discovered property that stores the name of the operating system on which the HVR Agent is installed/running. |
Agent_Oracle_RAC_Port | port | Port number of the Oracle RAC database available on the remote server. |
Agent_Oracle_RAC_Service | service | Service name of the Oracle RAC database available on the remote server. Example: HVR1900 |
Agent_Password | password | Password for the HVR Agent (defined in Agent_User). |
Agent_Platform | | This is a discovered property that stores the name of the HVR platform (e.g., linux_glibc2.12-x64-64bit, windows-x64-64bit) used for installing the HVR Agent. |
Agent_Port | port | TCP/IP port number of the HVR Agent. This is used for connecting HVR Hub to the HVR Agent. For Oracle RAC connection, this is the TCP/IP port number of the HVR Agent on the RAC nodes. |
Agent_Server_Kerberos_Principal | principal | User specified Kerberos principal name for identifying the agent to the hub during authentication (when connecting hub to the agent). |
Agent_Server_Public_Certificate | base64 | The SSL public certificate file for the HVR Agent. This property is discovered on first connection to the agent and verified for all future connections. |
Agent_User | username | Username for the HVR Agent. This property is used for connecting HVR Hub to the HVR Agent. |
Agent_Version | | This is a discovered property that stores the HVR version of the agent installation. |
Archive_Log_Format | format | Describes the filename format (template) of the transaction log archive files stored in the directory specified by the Archive_Log_Path property. The list of supported format variables and the default format string are database-specific. Oracle: for more information about the format variables, refer to the article LOG_ARCHIVE_FORMAT in the Oracle documentation. SQL Server, SAP HANA, and Sybase ASE: each accepts its own database-specific set of format variables. This property is optional; when it is not defined, by default HVR will scan all files available in the Archive_Log_Path. |
Archive_Log_Path | dir | HVR will search for the transaction log archives in the specified directory (path) dir. The behavior of this property is database-specific. Oracle: HVR will search for the log archives in the specified directory dir in addition to the primary Oracle archive directory. If the Capture_Method is set to ARCHIVE_ONLY, then HVR will search for the log archives in the directory dir only. Any process could be copying log archive files to this directory: the Oracle archiver (if another LOG_ARCHIVE_DEST_N is defined), RMAN, or a simple shell script. Ensure that the files in this directory are purged periodically, otherwise the directory will fill up. SQL Server: HVR normally locates the transaction log backup files by querying the backup history table in the msdb database. Specifying this property tells HVR to search for the log backup files in the specified directory dir instead. If this property is defined, then Archive_Log_Format must also be defined. SAP HANA and SAP NetWeaver on HANA: HVR will search for the log backups in the directory dir in addition to the default log backup location for the source database. For HVR versions prior to 6.1.1/0 and 6.1.0/1, HVR will search for the log backups only in the specified directory dir instead of the default log backup location for the source database. Sybase ASE: HVR will search for the transaction log backups in the specified directory dir. |
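To make the template idea concrete, the sketch below converts an Oracle-style Archive_Log_Format string into a regular expression for matching files found in Archive_Log_Path. The variables %t (thread number), %s (log sequence number), and %r (resetlogs ID) come from Oracle's LOG_ARCHIVE_FORMAT documentation; the matching code itself only illustrates the concept and is not HVR's implementation:

```python
import re

# Oracle LOG_ARCHIVE_FORMAT variables (per Oracle documentation):
# %t = thread number, %s = log sequence number, %r = resetlogs ID.
VARIABLES = {"%t": r"\d+", "%s": r"\d+", "%r": r"\d+"}

def format_to_regex(fmt: str) -> re.Pattern:
    """Turn an archive-log filename template into a matching regex."""
    pattern = re.escape(fmt)
    for var, regex in VARIABLES.items():
        pattern = pattern.replace(re.escape(var), regex)
    return re.compile(pattern + r"$")

rx = format_to_regex("%t_%s_%r.arc")
assert rx.match("1_4711_1122334455.arc")      # matches the template
assert not rx.match("1_4711_1122334455.tmp")  # wrong extension
```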
AWS_Access_Key_Id | keyid | Access key ID of IAM user for connecting HVR to Amazon S3. This property is used together with AWS_Secret_Access_Key when connecting HVR to Amazon S3 using IAM User Access Keys. For more information about Access Keys, refer to Understanding and Getting Your Security Credentials in AWS documentation. |
AWS_IAM_Role | role | AWS IAM role name for connecting HVR to Amazon S3. This property is used when connecting HVR to Amazon S3 using AWS Identity and Access Management (IAM) Role. This property may be used only if the HVR Agent or the HVR Hub System is running inside the AWS network on an EC2 instance and the AWS IAM role specified here should be attached to this EC2 instance. When a role is used, HVR obtains temporary Access Keys Pair from the EC2 server. For more information about IAM Role, refer to IAM Roles in AWS documentation. |
AWS_Secret_Access_Key | key | Secret access key of IAM user for connecting HVR to Amazon S3. This property is used together with AWS_Access_Key_Id when connecting HVR to Amazon S3 using IAM User Access Keys. |
Azure_Auth_Proxy_Host | host | Host name of the authentication proxy server used for connecting HVR to the Azure DLS server. |
Azure_Auth_Proxy_Password | password | Password for the Azure_Auth_Proxy_User. |
Azure_Auth_Proxy_Port | port | Port number of the authentication proxy server host used for connecting HVR to the Azure DLS server. |
Azure_Auth_Proxy_Scheme | protocol | Protocol for the authentication proxy server host used for connecting HVR to the Azure DLS server. Available option:
|
Azure_Auth_Proxy_User | username | Username for the authentication proxy server host used for connecting HVR to the Azure DLS server. |
Azure_OAuth2_Client_Id | id | Client ID (or application ID) used to obtain Microsoft Entra ID (formerly Azure Active Directory) access token. This property is required only if the authentication method (ABFS_Authentication_Method or ADL_Authentication_Method or SqlServer_Authentication_Method) is set to CLIENT_CREDS or REFRESH_TOKEN. |
Azure_OAuth2_Client_Secret | key | Secret key of the Azure_OAuth2_Client_Id. This property is required only if the authentication method (ABFS_Authentication_Method or ADL_Authentication_Method or SqlServer_Authentication_Method) is set to CLIENT_CREDS. |
Azure_OAuth2_Endpoint | url | URL used for obtaining the bearer token with the credential token. Ensure that you are using the OAuth 2.0 endpoint: the URL path should include v2.0, as in https://login.microsoftonline.com/<tenant>/oauth2/v2.0/token. This property is required only if the authentication method (ABFS_Authentication_Method or ADL_Authentication_Method or SqlServer_Authentication_Method) is set to CLIENT_CREDS. |
Azure_OAuth2_MSI_Port | port | Port number for the REST endpoint of the token service exposed to localhost by the identity extension in the Azure VM. The default value for this property is 50342. This property is required only if the authentication method (ADL_Authentication_Method) is set to MSI. |
Azure_OAuth2_MSI_Tenant | url | URL for the REST endpoint of the token service exposed to localhost by the identity extension in the Azure VM. For Azure Data Lake Storage, this property is required only if the authentication method (ABFS_Authentication_Method) is set to MSI. |
Azure_OAuth2_Password | password | Password for Azure_OAuth2_User. |
Azure_OAuth2_Refresh_Token | path | Path to the text file containing the refresh token. This property is required only if the authentication method (ABFS_Authentication_Method or ADL_Authentication_Method) is set to REFRESH_TOKEN. |
Azure_OAuth2_User | user | Username for the OAuth 2.0 authentication. This property is required only if the authentication method (ABFS_Authentication_Method) is set to USER_PASS. |
Azure_Shared_Secret_Key | account | Access key of the Azure storage account. For Azure Data Lake Storage, this property is required only if the authentication method (ABFS_Authentication_Method) is set to SHARED_KEY. |
BigQuery_Region | region | Geographic location of the dataset. For more information about dataset locations, refer to Dataset Locations in the BigQuery documentation. Examples: US, europe-west4, us-west4 |
Capture_Checkpoint_Frequency | secs | Checkpointing frequency in seconds for long-running transactions, so the capture job can recover quickly when it restarts. Value secs is the interval (in seconds) at which the capture job creates checkpoints. Without checkpoints, capture jobs must rewind back to the start of the oldest open transaction, which can take a long time and may require access to many old DBMS log files (e.g. archive files). The checkpoints are written into the HVR_CONFIG/hubs/hub/channels/channel/locs/location/capckp directory. If a transaction continues to make changes for a long period, successive checkpoints will not rewrite its same changes each time; instead, each checkpoint will only write new changes for that transaction and will reuse files written by earlier checkpoints for older changes. Checkpoints are written only for long-running transactions. For example, if the checkpoint frequency is every 5 minutes but users always do an SQL commit within 4 minutes, then checkpoints will never be written. However, if users keep transactions open for 10 minutes, then those transactions will be saved but shorter-lived ones in the same period will not. The frequency with which capture checkpoints are written is relative to the capture job's own clock, but the job decides whether a transaction has been running long enough to be checkpointed by comparing the timestamps in its DBMS logging records. As a consequence, the maximum (worst-case) time that an interrupted capture job would need to recover (rewind back over all its open transactions) is its checkpoint frequency plus the amount of time it takes to reread the amount of changes that the DBMS can write in that period of time. When a capture job is recovering, it will only use checkpoints which were written before the 'capture cycle' was completed. This means that very frequent capture checkpointing (say every 10 seconds) is wasteful and will not speed up capture job recovery time. This property is supported only for certain location types. For the list of supported location types, see Log-based capture checkpointing using location property Capture_Checkpoint_Frequency in Capabilities. |
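The worst-case recovery time described for Capture_Checkpoint_Frequency can be estimated with simple arithmetic; all throughput figures below are invented for illustration:

```python
# Worst case = checkpoint frequency + time to reread the changes the
# DBMS can write during that period (all figures are assumptions).
checkpoint_frequency_s = 300   # Capture_Checkpoint_Frequency value
dbms_log_write_mb_s = 50       # assumed DBMS log write rate
capture_reread_mb_s = 100      # assumed capture reread rate

log_written_mb = checkpoint_frequency_s * dbms_log_write_mb_s
worst_case_s = checkpoint_frequency_s + log_written_mb / capture_reread_mb_s
print(f"worst-case recovery: {worst_case_s:.0f} s")  # 450 s
```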
Capture_Checkpoint_Retention | secs | Retains capture checkpoint files up to the specified period secs (in seconds). The retained checkpoint files are saved in the HVR_CONFIG/hubs/hub/channels/channel/locs/location/capckpretain directory. Depending on the storage location defined in Capture_Checkpoint_Storage, this directory can be located either on the capture location or hub. |
Capture_Checkpoint_Storage | stor | Storage location of capture checkpoint files for quick capture recovery. The available options for stor store the checkpoints either on the capture location or on the hub (see Capture_Checkpoint_Retention). For both storage locations, the checkpoint files are saved in the HVR_CONFIG/hubs/hub/channels/channel/locs/location/capckp directory. When the capture job is restarted and it cannot find the most recent checkpoint files (perhaps the contents of that directory were lost during a failover), it will write a warning and rewind back to the start of the oldest open transaction. |
Capture_Method | method | Method of reading/capturing changes from the DBMS log file. This property is supported only for location types from which HVR can capture changes. For the list of supported location types, see Capture changes from location in Capabilities. Valid values for method include DIRECT, ARCHIVE_ONLY, and DB_TRIGGER (see Archive_Log_Path, Key_Only_Trigger_Tables, and Oracle_ASM_Home for related properties). |
Capture_Method_Unavailable | | This is a discovered property that stores information on whether the location supports a specific capture method. |
Case_Sensitive_Names | true | Normally, HVR converts DBMS table names to lowercase and treats table and column names as case insensitive. If set to true, DBMS table names and column names are treated as case sensitive by HVR. Defining this property allows the replication of tables with mixed-case names or tables whose names do not match the DBMS case convention. For example, normally an Oracle table name is held in uppercase internally (e.g. MYTAB), so this property is needed to replicate a table named mytab or MyTab. This property is supported only for certain location types. For the list of supported location types, see Treat DBMS table names and columns case sensitive in Capabilities. Columns with duplicate names that differ only in case (e.g., column1 and COLUMN1) are not supported within the same table. |
Class | class | Class of the location's database, for example, oracle or sqlserver. |
Class_Flavor | | This is a discovered property that stores the flavor of the specific database Class. The combination of Class and Class_Flavor forms the location type. Example: for Azure SQL Database, the Class is sqlserver and the Class_Flavor is azure. |
Class_Version | | This is a discovered property that stores the version of the database Class. |
Connection_Timezone_Name (since v6.1.5/5) | timezone | Time zone for the Databricks location. Specifying a value in this property ensures that the time zone of the HVR Agent and the Databricks database match. It is not required to define this property if the time zone is UTC. Example: America/Los_Angeles |
Database_Char_Encoding | | This is a discovered property that stores the character encoding of the database (defined in Database_Name). |
Database_Client_Private_Key | path | Directory path where the .pem file containing the client's SSL private key is located. This property is required for enabling two-way SSL. Defining this property along with Database_Public_Certificate, Database_Client_Public_Certificate, and Database_Client_Private_Key_Password enables two-way SSL, which means HVR authenticates the Hive server by validating the SSL certificate shared by the Hive server. |
Database_Client_Private_Key_Password | password | Password of the client's SSL private key specified in Database_Client_Private_Key. This property is required for enabling two-way SSL. Defining this property along with Database_Public_Certificate, Database_Client_Public_Certificate, and Database_Client_Private_Key enables two-way SSL, which means HVR authenticates the Hive server by validating the SSL certificate shared by the Hive server. |
Database_Client_Public_Certificate | path | Directory path where the .pem file containing the client's SSL public certificate is located. This property is required for enabling two-way SSL. Defining this property along with Database_Public_Certificate, Database_Client_Private_Key, and Database_Client_Private_Key_Password enables two-way SSL, which means HVR authenticates the Hive server by validating the SSL certificate shared by the Hive server. |
Database_Default_Case (since v6.1.5/7) | | This is a discovered property that stores the default case used in the database. |
Database_Default_Schema | | This is a discovered property that stores the name of the default schema in the database (Database_Name). |
Database_Host | host | Hostname or IP-address of the server on which the database is running. For Db2 for i, this is the hostname or IP-address of the Db2 for i system. |
Database_Name | dbname | Name of the database. For Db2 for i, this is the named database in Db2 for i. It could be on another (independent) auxiliary storage pool (IASP). The user profile's default setting will be used when no value is specified. Specifying *SYSBAS will connect a user to the SYSBAS database. For Db2 for LUW and Db2 for z/OS, multiple database-specific formats are supported for this property. For BigQuery, this is the name of the dataset in Google BigQuery. For HANA, this is the name of the specific database in a multiple-container environment. |
Database_Nchar_Encoding | | This is a discovered property that stores the national character encoding of the database (defined in Database_Name). |
Database_Password | password | Password for the Database_User. |
Database_Port | port | Port number on which the database (defined in Database_Host) server is expecting connections. |
Database_Public_Certificate | path | Directory path where the .pem file containing the server's public SSL certificate signed by a trusted CA is located. Defining this property enables (one-way) SSL, which means HVR authenticates the Hive server by validating the SSL certificate shared by the Hive server. This property is also required for enabling two-way SSL; for that, this property must be defined along with Database_Client_Public_Certificate, Database_Client_Private_Key, and Database_Client_Private_Key_Password. |
Database_Schema | schema | Name of the default schema to be used for this connection. |
Database_User | user | Username for connecting HVR to the database (defined in Database_Name). For Azure SQL Database, this is the user name and host name of the Azure SQL Database server. The format to be used is username@hostname. For Sybase ASE, this property can be used only if the Sybase_Authentication_Method is set to USER_PASS. For Teradata, this is the username for connecting HVR to the Teradata Node. |
Databricks_Authentication_Method | method | Authentication method for connecting HVR to Azure Databricks server. Available options for method are:
|
Databricks_Catalog (since v6.1.0/33) | name | Catalog name in a Unity Catalog metastore. If the target database is implemented in the Unity Catalog and this property is not defined, Databricks will use the default catalog hive_metastore. |
Databricks_HTTP_Path | url | URL for the Databricks compute resource. For more information, refer to Azure Databricks documentation. |
Databricks_Location | path | Path for the external tables in Databricks. For Databricks on AWS, this can be a mount path /mnt/... (optionally prefixed with dbfs:) or an s3:// URL. For Databricks on Azure, this can be a mount path /mnt/... (optionally prefixed with dbfs:) or an abfss:// URL. If a path is defined without a dbfs:/, abfss://, or s3:// prefix, it is assumed to be a mount path beginning with dbfs:/ (see the sketch below). |
Databricks_Location_ABFSS | url | URL (abfss://) for the external tables in Databricks. This is required only if the Databricks_Location is set to a mount path (/mnt/... or dbfs:/...). |
Databricks_Location_S3S | url | URL (s3s://) for the external tables in Databricks. This is required only if the Databricks_Location is set to a mount path (/mnt/... or dbfs:/...). |
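A minimal sketch of the resolution rule stated for Databricks_Location (a path without a dbfs:/, abfss://, or s3:// prefix is treated as a mount path under dbfs:/); the function name is hypothetical:

```python
def resolve_databricks_location(path: str) -> str:
    """Hypothetical illustration of the documented resolution rule."""
    if path.startswith(("dbfs:/", "abfss://", "s3://")):
        return path  # explicit scheme: use as-is
    # No scheme: assume a mount path beginning with dbfs:/
    return "dbfs:" + path if path.startswith("/") else "dbfs:/" + path

assert resolve_databricks_location("/mnt/data") == "dbfs:/mnt/data"
assert resolve_databricks_location("s3://bucket/tables") == "s3://bucket/tables"
```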
DB2i_Log_Journal | | Name of the Db2 for i journal from which data changes will be captured. It is mandatory to define this property when creating a Db2 for i capture location. A channel can only contain tables that share the same journal. To capture changes from tables associated with different journals, use separate channels for each journal. |
DB2i_Log_Journal_Schema | | Schema or library of the Db2 for i journal (DB2i_Log_Journal). It is mandatory to define this property when creating a Db2 for i capture location. |
DB2i_Log_Journal_SysSeq | | Capture from the journal using *SYSSEQ. This property requires DB2i_Log_Journal and DB2i_Log_Journal_Schema. |
Db2_DB2INSTANCE | instance | When using a Db2 client, Db2 server, or Db2 Connect, the name of the Db2 instance must be specified. When using the IBM Data Server Driver for ODBC and CLI, this property should not be defined. |
Db2_INSTHOME | path | When using a Db2 client, Db2 server, or Db2 Connect, the directory path of the Db2 installation must be specified. When using the IBM Data Server Driver for ODBC and CLI, the directory path of the IBM Data Server Driver for ODBC and CLI installation on the HVR machine (e.g. /distr/db2/driver/odbc_cli/clidriver) must be specified. |
Db2_Use_SSL (since v6.1.5/9) | true | Enable/disable (one-way) SSL. If set to true, HVR authenticates the location connection by validating the SSL certificate shared by the database server. For SSL connection configuration requirements on Linux, see Configuration for SSL connection on Linux. |
Description | description | Description of the location created in HVR. |
File_Host | host | Hostname or IP-address of the server on which the file server is running. |
File_Password | password | Password for the File_User. |
File_Path | path | Directory path where the files are replicated to or captured from. For Amazon S3, this is the directory path in the S3 BUCKET where the files are replicated to or captured from. For Azure Blob Storage, this is the directory path in the container (defined in WASB_Container) where the files are replicated to or captured from. For Azure Data Lake Storage Gen1, this is the directory path where the files are replicated to or captured from. For Azure Data Lake Storage Gen2, this is the directory path in container (defined in ABFS_Container) where the files are replicated to or captured from. For Google Cloud Storage, this is the directory path in the Google Cloud Storage BUCKET where the files are replicated to or captured from. |
File_Port | port | Port number on which the file server (defined in File_Host) is expecting connections. |
File_Proxy_Host | host | Host name of the proxy server used for connecting HVR to the file server (defined in File_Host). |
File_Proxy_Password | password | Password for the File_Proxy_User. |
File_Proxy_Port | port | Port number for the proxy server (defined in File_Proxy_Host) used for connecting HVR to the file server (File_Host). |
File_Proxy_Scheme | protocol | Protocol for the proxy server (defined in File_Proxy_Host) used for connecting HVR to the file server (defined in File_Host). Available options for protocol are:
|
File_Proxy_User | username | Username for the proxy server (defined in File_Proxy_Host) used for connecting HVR to the file server (defined in File_Host). |
File_Scheme | protocol | Protocol for connecting HVR to the file server (defined in File_Host). The options available/supported for protocol are location type-specific: Amazon S3, Azure Blob Storage, Azure Data Lake Storage, File/FTP/SFTP, Google Cloud Storage, and SharePoint/WebDAV each support their own set of protocols. |
File_State_Directory | path | Directory path for internal state files used by HVR during file replication. By default, these files are created in the sub-directory _hvr_state, which is created inside the file location's top directory. If path is relative (e.g. ../work), then the path used is relative to the file location's top directory. The state directory can be either a path inside the location's top directory or a path outside it. If the state directory is on the same file system as the file location's top directory, then the file move operations performed by HVR integrate will be 'atomic', so users will not be able to see partially written files. Defining this property on a SharePoint/WebDAV integrate location ensures that the SharePoint version history is preserved. |
File_State_Directory_Is_Local | true | If set to true, the directory specified in File_State_Directory is stored on the local drive of the file location's server. If this property is not set to true, then by default the internal state files are stored in the file location itself. For example, in Amazon S3, by default the state directory is stored in the S3 bucket. |
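The 'atomic' file move remark for File_State_Directory reflects a general operating system rule: a rename within one file system is atomic, while a move across file systems degrades to copy-plus-delete and can briefly expose a partially written file. A generic sketch of that rule (not HVR code):

```python
import os
import shutil

def move_into_place(tmp_path: str, final_path: str) -> None:
    """Atomic on the same file system; copy+delete across file systems."""
    try:
        os.rename(tmp_path, final_path)    # atomic within one file system
    except OSError:
        shutil.move(tmp_path, final_path)  # cross-device fallback: not atomic
```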
File_User | username | Username for connecting HVR to the file server (defined in File_Host). |
ForceCase (since v6.1.0/34) | sensitivity | Manage case sensitivity of object names created in the target DBMS tables. This property applies to Activating Replication, Refresh, or Compare. Available options for sensitivity are:
|
FTP_Encryption_Type | type | Encryption type used for connecting HVR to the file server (defined in File_Host). This is applicable only if File_Scheme is set to FTP. Available options for type are:
|
GCloud_Authentication_Method | method | Authentication method for connecting HVR to the Google Cloud server (Google Cloud Storage and Google BigQuery). The options available/supported for method are location type-specific; they include HMAC and OAUTH_FILE (see GS_HMAC_Access_Key_Id, GS_HMAC_Secret_Access_Key, and GCloud_OAuth_File). |
GCloud_Email | | Service account email for connecting HVR to the Google BigQuery server. |
GCloud_OAuth_Env | true | If set to true, enables OAuth 2.0 protocol based authentication for connecting HVR to the Google Cloud Storage. This method connects using the credentials fetched from the environment variable GOOGLE_APPLICATION_CREDENTIALS. For more information about configuring this environment variable, see Getting Started with Authentication in Google Cloud Storage documentation. |
GCloud_OAuth_File | path | Path to the service account key file (JSON) used in OAuth 2.0 protocol based authentication. This property is required only if GCloud_Authentication_Method is set to OAUTH_FILE. |
GCloud_Project | id | ID of the Google Cloud project. For more information about Google Cloud projects, refer to Creating and Managing Projects in the BigQuery documentation. |
GS_Bucket | bucket | Name or IP address of the Google Cloud Storage bucket. |
GS_Bucket_Region | region | Geographic location of the Google Cloud Storage bucket. For more information about bucket locations, refer to Bucket locations in the Google Cloud Storage documentation. |
GS_HMAC_Access_Key_Id | id | The HMAC access ID of the service account. This property is required only if GCloud_Authentication_Method is set to HMAC when connecting HVR to Google Cloud Storage. |
GS_HMAC_Secret_Access_Key | key | The HMAC secret of the service account. This property is required only if GCloud_Authentication_Method is set to HMAC when connecting HVR to Google Cloud Storage. |
GS_Storage_Integration | name | Name of the storage integration for Google Cloud Storage. |
Hana_Backint_Executable_Path | path | Directory path of the Backint application installed on the same node as HVR. |
Hana_Backint_Configuration_Path | path | Directory path of the Backint configuration on the same node as HVR. |
HANA_Root_Keys_Backup_Password | password | Password for encrypting root key backups in SAP HANA. This should be the same as the password set for encrypting root key backups in SAP HANA. |
HDFS_Kerberos_Credential_Cache | path | Path to the Kerberos ticket cache file. It is not required to define this property if a keytab file is used for authentication or if Kerberos is not used on the Hadoop cluster. For more information about using Kerberos authentication, see HDFS Authentication and Kerberos. |
HDFS_Namenode | host | Hostname of the HDFS NameNode. |
Hive_Authentication_Method | method | Authentication method for connecting HVR to Hive Server 2. This property is required only if Hive_Server_Type is set to 2. Options for method include Kerberos (see the Hive_Kerberos_* properties). |
Hive_HTTP_Path | url | The partial URL corresponding to the Hive server. This property is required only if Hive_Thrift_Transport is set to HTTP. |
Hive_Kerberos_Host | host | Fully Qualified Domain Name (FQDN) of the Hive server host. This is the host part of Kerberos principal of the Hive server. For example, if the principal is "hive/example.host@EXAMPLE.REALM" then "example.host" should be specified here. The value for this property may be set to _HOST to use the Hive server hostname as the domain name for Kerberos authentication. If Hive_Service_Discovery_Mode is set to NONE, then the driver uses the value specified in the Host connection attribute. |
Hive_Kerberos_Realm | realm | Realm of the Hive Server 2 host. It is not required to specify any value in this property if the realm of the Hive Server 2 host is defined as the default realm in Kerberos configuration. This property is required only if Hive_Authentication_Method is set to Kerberos. |
Hive_Kerberos_Service | name | Kerberos service principal name of the Hive server. This is the service name part of Kerberos principal of the Hive server. For example, if the principal is hive/example.host@EXAMPLE.REALM then "hive" should be specified here. This property is required only if Hive_Authentication_Method is set to Kerberos. |
Hive_Server_Type | type | Type of the Hive server. Available options for type are 1 (Hive Server 1) and 2 (Hive Server 2). |
Hive_Service_Discovery_Mode | mode | Mode for connecting HVR to Hive Server 2. This property is required only if Hive_Server_Type is set to 2. Available options for mode are NONE and ZooKeeper (see Hive_Zookeeper_Namespace). |
Hive_Thrift_Transport | protocol | Transport protocol to use in the Thrift layer. This property is required only if Hive_Server_Type is set to 2. Options for protocol include HTTP (see Hive_HTTP_Path). |
Hive_Zookeeper_Namespace | namespace | Namespace on ZooKeeper under which Hive Server 2 nodes are added. This property is required only if Hive_Service_Discovery_Mode is set to ZooKeeper. |
Ingres_II_SYSTEM | path | Directory path where the Actian Vector or Ingres database is installed. |
Intermediate_Directory | path | Directory path for storing 'intermediate files' that are generated during compare. Intermediate files are generated while performing a direct file or online compare. If this property is not defined, then by default the intermediate files are stored in the integratedir/_hvr_intermediate directory, where integratedir is the replication directory (File_Path) defined while creating a file location. |
Intermediate_Directory_Is_Local | true | If set to true, the directory specified in Intermediate_Directory is stored on the local drive of the file location's server. If not set to true, then by default the intermediate files are stored in the file location itself. For example, in Amazon S3, by default the intermediate directory is stored in the S3 bucket. |
Kafka_Authentication_Method | method | Authentication method for connecting HVR to the Kafka server (broker). Options for method include USER_PASS and KERBEROS (see Stream_User and the Kafka_Kerberos_* properties). |
Kafka_Brokers | list:host, ports | Hostname or IP address of the Kafka broker server(s) along with the TCP port that the Kafka server uses to listen for client connections. The default port is 9092. This is an array property that can store multiple values. |
Kafka_Default_Topic | topic | Kafka topic to which the messages are written. You can use plain strings/text or expressions as the Kafka topic name; expressions can substitute the capture location, table, or schema name into the topic name. |
Kafka_Kerberos_Client_Principal | host | Full Kerberos principal of the client connecting to the Kafka server. This property is used only on Linux/Unix and is required only if Kafka_Authentication_Method is set to KERBEROS. |
Kafka_Kerberos_Keytab | path | Directory path where the Kerberos keytab file containing key for the Kafka_Kerberos_Client_Principal is located. This property is required only if Kafka_Authentication_Method is set to KERBEROS. |
Kafka_Kerberos_Service | name | Kerberos Service Principal Name (SPN) of the Kafka server. This property is required only if Kafka_Authentication_Method is set to KERBEROS. |
Kafka_Message_Bundling | mode | Number of messages written (bundled) into a single Kafka message. Regardless of the file format chosen, each Kafka message contains one row by default. Available options for mode are ROW, TRANSACTION, and THRESHOLD (see Kafka_Message_Bundling_Threshold). Note that Confluent's Kafka Connect only allows certain message formats and does not allow any message bundling; therefore, Kafka_Message_Bundling must either be undefined or set to ROW. Bundled messages simply consist of the contents of several single-row messages concatenated together. |
Kafka_Message_Bundling_Threshold | threshold | Threshold (in bytes) for bundling rows in a Kafka message. Rows continue to be bundled into the same message until this threshold is exceeded, after which the message is sent and new rows are bundled into the next message. The default value is 800,000 bytes. This property may be defined only if Kafka_Message_Bundling is set to TRANSACTION or THRESHOLD. |
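The threshold behavior described above (rows keep being added to a message until the byte threshold is exceeded, then the message is sent) can be sketched as follows; this is an illustration, not HVR's implementation:

```python
def bundle_rows(rows: list[bytes], threshold: int = 800_000) -> list[bytes]:
    """Bundle rows into messages, sending a message once it exceeds threshold."""
    messages, current = [], b""
    for row in rows:
        current += row
        if len(current) > threshold:  # threshold exceeded: send this message
            messages.append(current)
            current = b""
    if current:
        messages.append(current)      # flush the final partial bundle
    return messages

# Three 1000-byte rows with a 1500-byte threshold yield two messages.
assert len(bundle_rows([b"x" * 1000] * 3, threshold=1500)) == 2
```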
Kafka_Message_Compress | algorithm | HVR will configure the Kafka transport protocol to compress message sets transmitted to the Kafka broker using one of the available algorithms. Compression decreases the network latency and saves disk space on the Kafka broker. Each message set can contain more than one Kafka message. For more information, see section Kafka Message Bundling and Size in Apache Kafka Requirements. |
Kafka_Schema_Registry | url | URL (http:// or https://) of the schema registry to use Confluent compatible messages in Avro format. For HVR versions until 6.1.0/27, if the basic authentication is configured for the schema registry, then the login credentials (username and password) must be specified in the URL. The format is http[s]://user:password@schemaregistry_url:port For more information, see section Kafka Message Format in Apache Kafka Requirements. |
Kafka_Schema_Registry_Password (since v6.1.0/28) | password | Password of the Kafka schema registry user (Kafka_Schema_User). |
Kafka_Schema_User (since v6.1.0/28) | username | Username for accessing the Kafka schema registry. If basic authentication is configured for the schema registry, then the login credentials (username and password) must be specified. |
Kafka_Schema_Registry_Format | format | Format of the Kafka message. For more information, see section Kafka Message Format in Apache Kafka Requirements. Available options for format are:
|
Key_Only_Trigger_Tables | true | If set to true, write only the key columns into the capture table to improve the performance of trigger-based capture (when Capture_Method is DB_TRIGGER). The non-key columns are then extracted using an outer join from the capture table to the replicated table. The disadvantage of this technique is that 'transient' column values can sometimes be replicated, for example, if a row is changed again before the capture job has extracted the non-key columns for an earlier change. |
Log_Truncater | method | Specifies who advances the SQL Server/Sybase ASE transaction log truncation point (truncates the log). Valid values for method are database-specific (SQL Server and Sybase ASE each have their own set of values). |
Log_Truncater_Unavailable | | This is a discovered property that stores information regarding HVR support for log truncation. |
MySQL_CA_Certificate_File (since v6.1.5/3) | path | Absolute path of the Certificate Authority (CA) certificate file. The value in this field must point to the same certificate used by the server. If a value is specified in this field, the server's Common Name (CN) in its certificate is verified against the hostname used for the connection. If there is a mismatch between the CN and the hostname, the connection will be rejected. This property requires MySQL_Use_SSL. This property can be defined only for Aurora MySQL, MariaDB, and MySQL. |
MySQL_Client_Pub_Cert_File (since v6.1.5/3) | path | Absolute path of the client public key certificate file. This property requires MySQL_Use_SSL. This property can be defined only for Aurora MySQL, MariaDB, and MySQL. |
MySQL_Client_Priv_Key_File (since v6.1.5/3) | path | Absolute path of the client private key file. This property requires MySQL_Use_SSL. This property can be defined only for Aurora MySQL, MariaDB, and MySQL. |
MySQL_Server_Pub_Key_File (since v6.1.5/3) | path | Absolute path to a .pem file containing the client-side copy of the public key required by the server for RSA key pair-based password exchange. This is relevant only for clients using the sha256_password authentication plugin. It is ignored for accounts using other authentication plugins or when RSA-based password exchange is not in use, such as when the client connects to the server via a secure connection. |
MySQL_SSL_Cipher (since v6.1.5/3) | comma-separated string | Encryption algorithms (ciphers) permitted for establishing a secure connection between HVR and the database server. If this field is left empty, a default set of ciphers will be used. To specify multiple ciphers, list them as comma-separated values. For the connection to succeed, both HVR and the database server must support at least one common cipher from the specified list. The SSL/TLS library will then select the highest-priority cipher compatible with the provided certificate. |
MySQL_SSL_CRL_File (since v6.1.5/3) | path | Absolute path to a file containing one or more revoked X.509 certificates to use for TLS. |
MySQL_SSL_Min_TLS_Version (since v6.1.5/3) | tls version | Minimum protocol version the client permits for a TLS connection. This property can be defined only for Aurora MySQL, MariaDB, and MySQL. |
MySQL_Use_SSL (since v6.1.5/3) | true | Enable/disable (one-way) SSL. If set to true, HVR authenticates the location connection by validating the SSL certificate shared by the database server. This property can be defined only for Aurora MySQL, MariaDB, and MySQL. |
NetWeaver_Native_DB_Dictionaries | true | If set to true, HVR will query the native database dictionaries instead of the SAP dictionaries. When this property is defined, you cannot select/add the SAP Cluster and Pool tables to the channel. |
ODBC_DM_Lib_Path | path | Directory path where the ODBC Driver Manager Library is installed. This property is applicable only to the Linux/Unix operating systems. For a default installation, the ODBC Driver Manager Library is available at /usr/lib64 and does not need to be specified. However, when unixODBC is installed in, for example, /opt/unixodbc, the value for this field would be /opt/unixodbc/lib. |
ODBC_Driver | odbcdriver | Name of the user-defined (installed) ODBC driver used for connecting HVR to the database. |
ODBC_Inst | path | Directory path where the odbcinst.ini file is located. This property is applicable only to the Linux/Unix operating systems. For Databricks, the odbcinst.ini file should contain information about the Simba Spark ODBC Driver under the heading [Simba Spark ODBC Driver 64-bit]. |
ODBC_Sysini | path | Directory path where the odbc.ini and odbcinst.ini files are located. This property is applicable only to the Linux/Unix operating systems. For a default installation, these files are available in the /etc directory and do not need to be specified using this property. However, when unixODBC is installed in, for example, /opt/unixodbc, the value for this field would be /opt/unixodbc/etc. For Azure SQL Database, the odbcinst.ini file should contain information about the Azure SQL Database ODBC driver under the heading [ODBC Driver version for SQL Server]. For Db2 for i, the odbcinst.ini file should contain information about the IBM i Access Client Solutions ODBC driver under the heading [IBM i Access ODBC Driver 64-bit]. For Redshift, the odbcinst.ini file should contain information about the Amazon Redshift ODBC driver under the heading [Amazon Redshift (x64)]. For SAP HANA, the odbcinst.ini file should contain information about the HANA ODBC driver under the heading [HDBODBC] or [HDBODBC32]. For Snowflake, the odbcinst.ini file should contain information about the Snowflake ODBC driver under the heading [SnowflakeDSIIDriver]. |
Oracle_ASM_Home | path | Directory path where the Oracle ASM instance is installed. On Linux/Unix, by default, this path is recorded in the /etc/oratab file. This property is only relevant for a source Oracle location with redo and/or archive files in ASM when Capture_Method is DIRECT. The value of this property explicitly sets the system identifier (SID) for the ASM instance. Typically the value is +ASM or +ASM[instance_number], but in some cases it may be +asm in lowercase. HVR can automatically assess what it should be. |
Oracle_ASM_Password | password | Password for Oracle_ASM_User. |
Oracle_ASM_TNS | connstring | Connection string for connecting HVR to Oracle's Automatic Storage Management (ASM) using Transparent Network Substrate (TNS). The format for the connection string is host:port/service_name. |
Oracle_ASM_User | username | Username for connecting to Oracle ASM instance. This user must have sysasm privileges. |
Oracle_BFile_Dirs_Mapping | JSON string | Set the real path to the wallet directory if the path includes symbolic links. This is necessary because Oracle does not allow access through the BFile interface to a directory that has symbolic links in its path. For example, suppose the wallet directory is /var/foo/xxx, where /var/foo is a symbolic link to /yyy/zzz; the real path to the wallet directory is then /yyy/zzz/xxx. In this case, the property should be set as follows: Oracle_BFile_Dirs_Mapping={"/var/foo":"/yyy/zzz"}. This is a map property that can store multiple values. |
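Using the paths from the example above, the required mapping can be derived from the symbolic link with Python's standard library (a sketch; it assumes the link exists on the local file system):

```python
import os

# From the example: /var/foo is a symbolic link to /yyy/zzz, so the
# wallet directory /var/foo/xxx really lives at /yyy/zzz/xxx.
wallet_dir = "/var/foo/xxx"
real_dir = os.path.realpath(wallet_dir)  # resolves to "/yyy/zzz/xxx"
mapping = {os.path.dirname(wallet_dir): os.path.dirname(real_dir)}
# mapping == {"/var/foo": "/yyy/zzz"}, the value shown for
# Oracle_BFile_Dirs_Mapping in the example above.
```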
Oracle_Container | | This is a discovered property that stores information on whether the Oracle database is a Root Container or a Pluggable Database (PDB). |
Oracle_Container_Root_Password | password | Password for the Oracle_Container_Root_User. |
Oracle_Container_Root_RAC_Service (since v6.1.0/27) | service | Service name of the Oracle database for the root container. |
Oracle_Container_Root_SID | identifier | Unique name identifier of the Oracle root container. |
Oracle_Container_Root_TNS | connstring | Connection string for connecting HVR to Oracle root container using Transparent Network Substrate (TNS). The format for the connection string is host:port/service_name. |
Oracle_Container_Root_User | username | Username for connecting to Oracle root container. |
Oracle_Dataguard_Primary_Password | password | Password for Oracle_Dataguard_Primary_User. |
Oracle_Dataguard_Primary_TNS | connstring | Connection string for connecting HVR to the Oracle Data Guard primary database using Transparent Network Substrate (TNS). The format for the connection string is host:port/service_name. |
Oracle_Dataguard_Primary_User | username | Username for connecting HVR to the primary database. |
Oracle_Home | path | Directory path where either Oracle or the Oracle client is installed. When connecting to an Oracle instance through an HVR Agent installed on the database server, this property should point to the ORACLE_HOME directory of the Oracle source database. In all other cases, it should point to the directory where an Oracle client is installed. |
Oracle_NLS_LANG | | This is a discovered property that stores the value of Oracle's NLS_LANG parameter used for connecting to the Oracle database. |
Oracle_Show_Invisible_Columns | true | Enables replication of invisible columns in Oracle tables. For example, it can be used to capture information stored by Oracle Label Security. This property should be set for the location from which you want to replicate the invisible columns. |
Oracle_SID | identifier | Unique name identifier of the Oracle instance/database. |
Oracle_TDE_Wallet_Password | password | Password for the Oracle TDE wallet. |
Oracle_TDE_Wallet_Reading_By_BFile | Boolean | Enables access to the Oracle TDE wallet through the BFile interface. This property allows HVR to access the wallet remotely. The primary usage of this property is remote capture on ASM systems. For configuration steps, see section Configuring access to TDE wallet through BFile interface. |
Oracle_TNS | connstring | Connection string for connecting to the Oracle database using TNS (Transparent Network Substrate). The format for the connection string is host:port/service_name; this method requires Easy Connect to be enabled. Alternatively, you can add the connection details into the client's tnsnames.ora file and use that net service name in this field. |
PostgreSQL_Pglib | path | Directory path of the library (lib) directory in the PostgreSQL installation. This property can be left empty to use the system default path. Example: /postgres/935/lib |
PostgreSQL_XLog | path | Directory path containing the current PostgreSQL xlog files. |
S3_Bucket | bucket | Name or IP address of the Amazon S3 bucket. |
S3_Bucket_Region | | This is a discovered property that stores the region of the S3 bucket for the connected location. |
S3_Encryption_KMS_Access_Key_Id | keyid | If client-side encryption using a CMK stored in AWS KMS is enabled (S3_Encryption_KMS_Customer_Master_Key_Id without S3_Encryption_SSE_KMS), this specifies the AWS access key id when querying KMS. By default, the credentials of the S3 connection are used. |
S3_Encryption_KMS_Customer_Master_Key_Id | keyid | If S3_Encryption_SSE_KMS is defined, this specifies the KMS CMK ID which is used for the server-side encryption. Otherwise, it enables client-side encryption using a CMK stored in AWS KMS. For client-side encryption, each object is encrypted with a unique AES256 data key obtained from KMS. This data key is stored alongside the S3 object. |
S3_Encryption_KMS_IAM_Role | role | If client-side encryption using a CMK stored in AWS KMS is enabled (S3_Encryption_KMS_Customer_Master_Key_Id without S3_Encryption_SSE_KMS), this specifies the IAM role when querying KMS. By default, the credentials of the S3 connection are used. |
S3_Encryption_KMS_Region | region | If client-side encryption using a CMK stored in AWS KMS is enabled (S3_Encryption_KMS_Customer_Master_Key_Id without S3_Encryption_SSE_KMS), this specifies the KMS region when querying KMS. By default, the region of the S3 connection is used. |
S3_Encryption_KMS_Secret_Access_Key | key | If client-side encryption using a CMK stored in AWS KMS is enabled (S3_Encryption_KMS_Customer_Master_Key_Id without S3_Encryption_SSE_KMS), this specifies the AWS secret access key when querying KMS. By default, the credentials of the S3 connection are used. |
S3_Encryption_Master_Symmetric_Key | key | Enable client-side encryption using a master symmetric key for AES. Each object is encrypted with a unique AES256 data key. This data key is encrypted using AES256 with the specified master symmetric key and then stored alongside the S3 object. |
S3_Encryption_Materials_Description | desc | Provides optional encryption materials description which is stored alongside the S3 object. If used with KMS, the value must be a JSON object containing only string values. |
S3_Encryption_SSE | true | If set to true, enables server-side encryption with Amazon S3 managed keys. |
S3_Encryption_SSE_KMS | true | If set to true, enables server-side encryption with customer master keys (CMKs) stored in AWS key management service (KMS). If S3_Encryption_KMS_Customer_Master_Key_Id is not defined, a KMS managed CMK is used. |
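The way the S3_Encryption_* properties combine, as described in the rows above, can be summarized in a small decision sketch (the property names are real; the function and its return strings are hypothetical):

```python
def s3_encryption_mode(props: dict) -> str:
    """Hypothetical summary of how the S3_Encryption_* properties combine."""
    if props.get("S3_Encryption_SSE_KMS"):
        if props.get("S3_Encryption_KMS_Customer_Master_Key_Id"):
            return "server-side encryption with the specified KMS CMK"
        return "server-side encryption with a KMS-managed CMK"
    if props.get("S3_Encryption_KMS_Customer_Master_Key_Id"):
        return "client-side encryption with a CMK stored in AWS KMS"
    if props.get("S3_Encryption_Master_Symmetric_Key"):
        return "client-side encryption with a master symmetric key (AES256)"
    if props.get("S3_Encryption_SSE"):
        return "server-side encryption with Amazon S3 managed keys"
    return "no encryption properties set"

assert "S3 managed keys" in s3_encryption_mode({"S3_Encryption_SSE": True})
```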
Salesforce_Bulk_API | true | If set to true, use the Salesforce Bulk API instead of the SOAP interface. This is more efficient for large volumes of data, because fewer round-trips are used across the network. A potential disadvantage is that some Salesforce.com licenses limit the number of bulk API operations per day. If this property is defined for any table, then it affects all tables captured from that location. |
Salesforce_Dataloader | path | Directory path where the Salesforce dataloader.jar file is located. |
Salesforce_Endpoint | url | Complete URL for connecting HVR to Salesforce. |
Salesforce_Serial_Mode | true | If set to true, force serial mode instead of parallel processing for the Bulk API. The default is parallel processing, but enabling Salesforce_Serial_Mode can avoid some problems inside Salesforce.com. If this property is defined for any table, then it affects all tables captured from that location. |
SAP_Archiving (since v6.1.5/7) | true | If set to true, HVR can recognize records manually deleted by a user and records automatically archived by SAP on a HANA database. For SAP HANA, this property requires SAP_Source_Schema to be defined. For more information about using this property with SAP HANA, see section Recognizing SAP Archived Records in SAP HANA as Source. For more information about using this property with SAP NetWeaver on HANA, see section Recognizing SAP Archived Records in Capture from SAP NetWeaver on HANA. |
SAP_Authentication_Method | method | Authentication method for connecting HVR to the SAP system. Available options for method are:
|
SAP_Client | clientid | Three digit (000-999) identifier of the SAP client, which is sent to an AS ABAP upon logon. |
SAP_Connection_Type (since v6.1.0/17) | type | Connection type for the SAP systems. |
SAP_Database_Owner | | This is a discovered property that stores information about the database schema that contains the SAP data. This property is discovered when creating or modifying an SAP NetWeaver location. When SAP dictionaries are used, HVR will add only SAP tables from the database to the channel. |
SAP_Instance_Number | number | Two digit number (00-97) of the SAP instance within its host. |
SAP_MessageServer_Group (since v6.1.0/17) | name | Name of the SAP logon group. The default value is PUBLIC. |
SAP_MessageServer_Service (since v6.1.0/17) | name or port | Port number or service name (like sapms<SID>) available in the local /etc/services file. Specify this property only if the message server does not listen on the standard service sapms<SysID>, or if this service is not defined in the services file and you need to specify the network port directly. |
SAP_MessageServer_SystemID (since v6.1.0/17) | id | Unique identifier <sapsid> of the SAP system. |
SAP_MessageServer_Use_Symbolic_Names (since v6.1.0/17) | true | If set to true, only a symbolic name can be specified in SAP_MessageServer_Service. |
SAP_SNC_Name (since v6.1.0/17) | name | Token/identifier representing the external RFC program, the client SNC name (DataStage Server SNC Name). It is also referred to as the client Personal Security Environment (PSE) name. |
SAP_SNC_Partner_Name (since v6.1.0/17) | name | Token/identifier representing the backend system, the communication partner's SNC name. |
SAP_SNC_Library_Path (since v6.1.0/17) | path | Path of the external security product's library. |
SAP_Source_Schema | schema | Name of the database schema that contains the SAP data. Defining this property enables the SAP table explore and the SAP unpack feature. If this property is defined, the SAP dictionaries are used and HVR will add only SAP tables from the database to the channel. |
SAP_NetWeaver_RFC_Library_Path | path | Directory path containing the SAP NetWeaver RFC SDK library files. For more information about the NetWeaver RFC SDK library file location, see section Install NetWeaver RFC SDK Libraries in SAP NetWeaver Requirements. |
Service_Password | password | Password for the Salesforce Service_User. |
Service_User | username | Username for connecting HVR to Salesforce. |
Snowflake_Role | role | Name of the Snowflake role. |
Snowflake_Warehouse | warehouse | Name of the Snowflake warehouse. |
SqlServer_Authentication_Method (since v6.1.0/4) | method | Authentication method for connecting HVR to the Azure SQL Database. Options for method include CLIENT_CREDS and REFRESH_TOKEN (see the Azure_OAuth2_* properties). |
SqlServer_Native_Replicator_Connection | true | If set to true, disables the firing of database triggers, foreign key constraints, and check constraints during integration, provided these objects were defined with the `NOT FOR REPLICATION` option. This is done by connecting to the database with the SQL Server replication connection capability. If using HVR version 6.1.5/8 (or older) or an MSODBC driver older than version 17.8, the database connection string format in SqlServer_Server must be server_name,port_number; the alternative connection string formats are not supported. The port_number must be configured in the Network Configuration section of the SQL Server Configuration Manager. |
SqlServer_Server | server | Server/instance name for connecting to SQL Server, Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics. Azure SQL Database: fully qualified domain name (FQDN) of the Azure SQL Database server. Example: cbiz2nhmpv.database.windows.net. Azure SQL Managed Instance: fully qualified host name of the Azure SQL Managed Instance. Example: tcp:hvr-managed-instance.public.hd6fjk12b8a9.database.windows.net,3342. Azure Synapse Analytics: fully qualified domain name (FQDN) of the Azure Synapse Analytics server. Example: tcp:hvrdw.database.windows.net. SQL Server: name of the server (host) on which SQL Server is running, together with the port number or the instance name of SQL Server; the supported formats include server_name,port_number and server_name\instance_name. |
SqlServer_TDE_Database_Certificates (since v6.1.0/10) | certificate | Certificate used to protect a database encryption key (DEK). This property is defined by a key-value pair, where the key is a certificate name (a string) and the value is the respective certificate. The certificate must be a base64-encoded string. HVR accepts DER and PEM encoded certificates. This is a map property that can store multiple values. |
SqlServer_TDE_Database_Private_Keys (since v6.1.0/10) | key | Private key associated with the certificate (defined in SqlServer_TDE_Database_Certificates). This property is defined by a key-value pair, where the key is a certificate name (a string) and the value is the respective certificate private key. The private key must be a base64-encoded string. HVR accepts PVK and PEM encoded private keys. The private key is stored encrypted in the repository database. This is a map property that can store multiple values. |
SqlServer_TDE_Database_Private_Key_Passwords (since v6.1.0/10) | password | Password of the private key (defined in SqlServer_TDE_Database_Private_Keys). This property is defined by a key-value pair, where the key is a certificate name (a string) and the value is the respective password. The password must be a string. The password is stored encrypted in the repository database. This is a map property that can store multiple values. |
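Because the three SqlServer_TDE_* properties are map properties keyed by the same certificate name, their values line up as in this sketch (the certificate name and all placeholder strings are invented, not working key material):

```python
# Map properties keyed by certificate name; every string below is an
# invented placeholder.
cert_name = "MyDEKCertificate"
tde_properties = {
    "SqlServer_TDE_Database_Certificates": {
        cert_name: "<base64-encoded DER or PEM certificate>",
    },
    "SqlServer_TDE_Database_Private_Keys": {
        cert_name: "<base64-encoded PVK or PEM private key>",
    },
    "SqlServer_TDE_Database_Private_Key_Passwords": {
        cert_name: "<private key password>",
    },
}
```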
Staging_Directory | path | Directory path for bulk load staging files. For certain databases (Redshift and Snowflake), HVR splits large amounts of data into multiple staging files to optimize performance. This property is supported only for certain location classes. For the list of supported location classes, see Bulk load requires a staging area in Capabilities. For MariaDB or MySQL, when the direct loading by the MySQL/MariaDB server option is used, this should be a directory local to the MySQL/MariaDB server on which the HVR user has write access from the server that HVR uses to connect to the DBMS. When the initial loading by the MySQL/MariaDB client option is used, this should be a local directory on the server where HVR connects to the DBMS. For Redshift and Snowflake, this should be an S3 location. |
Staging_Directory_Database | path | Directory path for the bulk load staging files visible from the database. This property should point to the same files as Staging_Directory. This property requires Staging_Directory. This property is supported only for certain location classes. For the list of supported location classes, see Bulk load requires a staging area in Capabilities. For Greenplum, this should either be a local directory on the Greenplum head-node or it should be a URL pointing to Staging_Directory, for example a path starting with gpfdist: or gpfdists:. For HANA, this should be a local directory on the HANA server which is configured for importing data by HANA. For MariaDB or MySQL, when direct loading by the MySQL/MariaDB server option is used, this should be the directory from which the MySQL/MariaDB server should load the files. And when initial loading by the MySQL/MariaDB client option is used, this should be left empty. For Redshift and Snowflake, this should be the S3 location that is used for Staging_Directory. |
Staging_Directory_Is_Local | true | If set to true, the directory specified in Staging_Directory_Database is stored on the local drive of the file location's server. If this property is not set to true, the bulk load staging files are by default stored in the bucket or container of the file location. For example, in Amazon S3, the staging directory is by default stored in the S3 bucket. |
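To show how the staging properties above combine, here is a hedged Python sketch of two hypothetical configurations; the bucket name, paths, hostname, and port are invented, and the gpfdist URL simply follows the Greenplum convention mentioned in Staging_Directory_Database.

```python
# Hypothetical Redshift target: both properties point at the same
# S3 location, as the descriptions above require.
redshift_staging = {
    "Staging_Directory": "s3://my-staging-bucket/hvr/",
    "Staging_Directory_Database": "s3://my-staging-bucket/hvr/",
}

# Hypothetical Greenplum target: a local path on the server where the
# staging files are written, exposed to the database via a gpfdist URL.
greenplum_staging = {
    "Staging_Directory": "/data/hvr/staging",
    "Staging_Directory_Database": "gpfdist://hvr-host:8080/staging",
}
```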
Stream_Client_Private_Key | path | Directory path where the .pem file containing the client's SSL private key is located. |
Stream_Client_Private_Key_Password | password | Password of the private key file that is specified in Stream_Client_Private_Key. |
Stream_Client_Public_Certificate | path | Directory path where the .pem file containing the client's SSL public certificate is located. |
Stream_Password | password | Password of the Stream_User. |
Stream_Public_Certificate | path | Directory path where the file containing the public certificate of the Kafka server is located. |
Stream_User | username | Username for connecting HVR to the Kafka server. This property is required only if Kafka_Authentication_Method is set to USER_PASS. |
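The Stream_* properties above correspond to the SSL and SASL settings that Kafka clients commonly expose. Purely as an illustration (this is not HVR's internal mechanism), the sketch below uses the third-party kafka-python package; the hostname, file names, and credentials are all hypothetical.

```python
from kafka import KafkaConsumer  # third-party kafka-python package

# Sketch of how the Stream_* properties map onto typical Kafka client
# SSL/SASL settings; every concrete value here is hypothetical.
consumer = KafkaConsumer(
    bootstrap_servers="kafka.example.com:9093",
    security_protocol="SASL_SSL",
    ssl_cafile="server_ca.pem",        # Stream_Public_Certificate
    ssl_certfile="client_cert.pem",    # Stream_Client_Public_Certificate
    ssl_keyfile="client_key.pem",      # Stream_Client_Private_Key
    ssl_password="key-password",       # Stream_Client_Private_Key_Password
    sasl_mechanism="PLAIN",
    sasl_plain_username="hvr_user",    # Stream_User
    sasl_plain_password="secret",      # Stream_Password
)
```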
Supplemental_Logging | method | Specifies the action to be performed to enable supplemental logging for tables. Supplemental logging must be enabled for HVR to perform log-based capture of updates. For more details, see section Supplemental Logging in SQL Server Requirements. Valid values for method are:
|
Supplemental_Logging_Unavailable | This is a discovered property that indicates whether the database supports supplemental logging. | |
Sybase | path | Directory path where the Sybase ASE database is installed. |
Sybase_ASE_Server_Name (Since v6.1.5/7) | name | Name of the Sybase ASE database server. The interfaces file contains an entry for each SAP Sybase server on the network, identified by a server name; this name enables the Open Client library used by HVR to locate the correct entry within the file. This property is required only if Sybase_Net_Trans_Source is set to INTERFACES_FILE. |
Sybase_Authentication_Method | method | Authentication method for connecting HVR to Sybase ASE server. Available options for method are:
|
Sybase_CT_Library | path | Directory path where the Sybase Open Client (CT library) is installed. |
Sybase_Kerberos_Keytab | Directory path where the Kerberos keytab file is located. This keytab file contains the security key for the Database_User. This property is required only if Sybase_Authentication_Method is set to KERBEROS. | |
Sybase_Kerberos_Security_Mechanism | Name of the security mechanism that performs security services for this connection. Security mechanism names are defined in the Sybase libtcl.cfg configuration file. If this property is not defined, the default mechanism defined in the libtcl.cfg file will be used. This property is required only if Sybase_Authentication_Method is set to KERBEROS. | |
Sybase_Kerberos_Security_Services | Kerberos security mechanism services. This setting only defines how the connection behaves. This property applies only if Sybase_Authentication_Method is set to KERBEROS, and defining it is optional. Available options:
| |
Sybase_Kerberos_Server_Principal | The Kerberos Service Principal Name (SPN) of the Sybase ASE server. This property is required only if Sybase_Authentication_Method is set to KERBEROS. | |
Sybase_Net_Trans_Source (Since v6.1.5/7) | source | Source of the network transport information required to connect HVR to the Sybase ASE database server. Available options for source are:
|
Sybase_SSL_Enabled (Since v6.1.5/7) | true | SSL-based authentication for the Sybase ASE location connection. If set to true, HVR authenticates the Sybase ASE database server by validating the SSL certificate shared by the Sybase ASE database server. This property can be used only if Sybase_Net_Trans_Source is set to DIRECT. |
Sybase_SSL_Common_Name (Since v6.1.5/7) | name | ASE server name for SSL certificate validation. The name specified in this property should match the Sybase ASE database server name as specified in the command used to start ASE. For more information, see section Common Name Validation in an SDC Environment in the SAP documentation. This property is required only if Sybase_SSL_Enabled is set to true. |
Teradata_TPT_Lib_Path | path | Directory path where the Teradata TPT Library is installed. Example: /opt/teradata/client/16.10/odbc_64/lib |
Trigger_Quick_Toggle | true | If set to true, allows end-user transactions to avoid locking on the toggle table. The toggle table is changed by HVR during trigger-based capture. Normally, all changes from user transactions before a toggle are put into one set of capture tables, and changes from after a toggle are put into the other set. This ensures that transactions are not split. If an end-user transaction is running when HVR changes the toggle, HVR must wait; if other end-user transactions start, they must wait behind HVR. Setting this property allows other transactions to avoid waiting, but the consequence is that their changes can be split across both sets of capture tables. During integration, these changes will be applied in separate transactions; between these transactions the target database is not consistent. This property requires Capture_Method set to DB_TRIGGER.
|
Trigger_Toggle_Frequency | secs | Instructs HVR trigger-based capture jobs to wait for a fixed interval secs (in seconds) before toggling and reselecting the capture tables. This property requires Capture_Method set to DB_TRIGGER.
|
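To make the toggle mechanism behind the two properties above concrete, here is a conceptual Python sketch (not HVR's implementation): user triggers write into one of two capture-table sets, and the capture job flips the toggle every secs seconds, then drains the set that is no longer being written to.

```python
import time

# Two sets of capture tables; `toggle` marks the set that user
# triggers currently write into. All names here are illustrative.
capture_sets = {0: [], 1: []}
toggle = 0

def capture_cycle(secs: int):
    """One capture cycle: wait, flip the toggle, drain the idle set."""
    global toggle
    time.sleep(secs)                     # Trigger_Toggle_Frequency wait
    toggle = 1 - toggle                  # triggers now write to the other set
    drained = capture_sets[1 - toggle]   # safe to read: no longer written to
    capture_sets[1 - toggle] = []
    return drained
```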
View_Class | class | Class of the database (defined in View_Database_Name) that is used for providing an SQL based view on the file location. For example, Hive External Tables. |
View_Class_Flavor | This is a discovered property that stores the flavor of the database (defined in View_Database_Name). | |
View_Class_Version | This is a discovered property that stores the version of the database (defined in View_Database_Name). HVR stores and uses this number internally to determine which Hive functionality HVR should attempt to use. For example, the value 121 indicates Hive version 1.2.1. | |
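Assuming each decimal digit of the stored number encodes one version component, as in the 121 example above, decoding could look like the following Python sketch; this is an illustration, not HVR's internal code, and multi-digit version components would need a different scheme.

```python
def decode_view_class_version(packed: int) -> str:
    """Decode a packed version such as 121 into dotted form '1.2.1'.

    Assumes one decimal digit per version component, per the
    documented example.
    """
    return ".".join(str(packed))  # str(121) -> "121" -> "1.2.1"

assert decode_view_class_version(121) == "1.2.1"
```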
View_Database_Char_Encoding | This is a discovered property that stores the character encoding of the database (defined in View_Database_Name). | |
View_Database_Client_Private_Key | path | Directory path where the .pem file containing the client's SSL private key is located. |
View_Database_Client_Private_Key_Password | password | Password for the private key file specified in View_Database_Client_Private_Key. |
View_Database_Client_Public_Certificate | path | Directory path where the .pem file containing the client's SSL public certificate is located. |
View_Database_Default_Schema | schema | This is a discovered property that stores the name of the default schema in the database (defined in View_Database_Name). |
View_Database_Host | host | Hostname or IP-address of the server on which the database (defined in View_Database_Name) is running. |
View_Database_Name | name | Name of the database used for an SQL based view on the file location. |
View_Database_Nchar_Encoding | charset | This is a discovered property that stores the national character encoding of the database (defined in View_Database_Name). |
View_Database_Password | password | Password for the View_Database_User. |
View_Database_Port | port | Port number for the database (defined in View_Database_Name). |
View_Database_Public_Certificate | path | Directory path where the .pem file containing the server's public SSL certificate signed by a trusted CA is located. |
View_Database_User | user | Username for connecting to the database (defined in View_Database_Name). |
View_Hive_Authentication_Method | method | Authentication method for connecting HVR to Hive Server 2 instance. This property is required only if View_Hive_Server_Type is set to Hive Server 2. Available options for method are:
|
View_Hive_HTTP_Path | url | The partial URL corresponding to the Hive server. This property is required only if View_Hive_Thrift_Transport is set to HTTP. |
View_Hive_Kerberos_Host | name | Fully Qualified Domain Name (FQDN) of the Hive Server 2 host. The value of this property can be set to _HOST to use the Hive server hostname as the domain name for Kerberos authentication. If View_Hive_Service_Discovery_Mode is disabled, the driver uses the value specified in the Host connection attribute. |
View_Hive_Kerberos_Realm | realm | Realm of the Hive Server 2 host. This property need not be specified if the realm of the Hive Server 2 host is defined as the default realm in the Kerberos configuration. This property is required only if View_Hive_Authentication_Method is set to Kerberos. |
View_Hive_Kerberos_Service | name | Kerberos service principal name of the Hive server. This property is required only if View_Hive_Authentication_Method is set to Kerberos. |
View_Hive_Server_Type | type | Type of the Hive server to which HVR will be connected. Available options for type are:
|
View_Hive_Service_Discovery_Mode | mode | Mode for connecting to Hive. This property is required only if View_Hive_Server_Type is set to Hive Server 2. Available options for mode are:
|
View_Hive_Thrift_Transport | protocol | Transport protocol to use in the Thrift layer. This property is required only if View_Hive_Server_Type is set to Hive Server 2. Available options for protocol are:
|
View_Hive_Zookeeper_Namespace | namespace | Namespace on ZooKeeper under which Hive Server 2 nodes are added. This property is required only if View_Hive_Service_Discovery_Mode is set to ZooKeeper. |
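To contrast the two discovery modes, here is a hypothetical Python sketch of how the View_* properties might combine. The host names, ports, the label for disabled discovery, and the assumption that host/port point at the ZooKeeper ensemble in discovery mode are all illustrative, not confirmed by this document.

```python
# Hypothetical: direct connection to a single Hive Server 2 node.
direct_connection = {
    "View_Hive_Server_Type": "Hive Server 2",
    "View_Hive_Service_Discovery_Mode": "No Service Discovery",  # assumed label
    "View_Database_Host": "hive-node1.example.com",
    "View_Database_Port": 10000,
}

# Hypothetical: ZooKeeper-based discovery, where host/port are assumed
# to address the ZooKeeper ensemble rather than a Hive node.
zookeeper_discovery = {
    "View_Hive_Server_Type": "Hive Server 2",
    "View_Hive_Service_Discovery_Mode": "ZooKeeper",
    "View_Database_Host": "zk1.example.com",
    "View_Database_Port": 2181,
    "View_Hive_Zookeeper_Namespace": "hiveserver2",
}
```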
View_ODBC_DM_Lib_Path | path | Directory path where the ODBC Driver Manager Library is installed. This property is applicable only for Linux/Unix operating systems. For a default installation, the ODBC Driver Manager Library is available at /usr/lib64 and does not need to be specified. However, when UnixODBC is installed in, for example, /opt/unixodbc, the value for this property would be /opt/unixodbc/lib. |
View_ODBC_Driver | drivername | User-defined (installed) ODBC driver for connecting HVR to the database. |
View_ODBC_Sysini | path | Directory path where the odbc.ini and odbcinst.ini files are located. This property is applicable only for Linux/Unix operating systems. For a default installation, these files are available at /etc and do not need to be specified. However, when UnixODBC is installed in, for example, /opt/unixodbc, the value for this property would be /opt/unixodbc/etc.
|
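As an illustration of how a non-default sysini path takes effect, the unixODBC driver manager honors the ODBCSYSINI environment variable, which holds the same kind of path as View_ODBC_Sysini. The Python sketch below uses the third-party pyodbc package; the DSN name and credentials are hypothetical, and this is not HVR's connection code.

```python
import os
import pyodbc  # third-party package, used here only for illustration

# Point the unixODBC driver manager at a non-default location for
# odbc.ini and odbcinst.ini (hypothetical install path):
os.environ["ODBCSYSINI"] = "/opt/unixodbc/etc"

# Connect via a DSN defined in that odbc.ini; DSN and credentials
# are hypothetical.
conn = pyodbc.connect("DSN=HiveView;UID=hvr_user;PWD=secret")
```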
WASB_Account | account | Name of the Azure Blob Storage account. |
WASB_Container | container | Name of the container available within the Azure Blob Storage account. |