Metrics for Statistics
This section lists and describes the metrics that Fivetran HVR uses to track replication performance. These metrics are represented graphically in Statistics and Topology, making replication performance easier to track.
Latency Stats Metrics
Metric Name | Unit | Remarks |
---|---|---|
Capture Latency Min | secs | Minimum latency value as reported by a capture job. This value is taken from the log message of the capture job. Therefore, the value is only available when the capture job has captured some rows. |
Capture Latency Max | secs | Maximum latency value as reported by a capture job. This value is taken from the log message of the capture job. Therefore, the value is only available when the capture job has captured some rows. |
Integrate Latency Min | secs | Minimum latency value as reported by an integrate job. This value is taken from the log message of the integrate job. Therefore, the value is only available when the integrate job has integrated some rows. |
Integrate Latency Max | secs | Maximum latency value as reported by an integrate job. This value is taken from the log message of the integrate job. Therefore, the value is only available when the integrate job has integrated some rows. |
Capture Rewind Interval | secs | Time interval describing how far the capture job has to rewind back after a restart to safely capture all transactions that have not been emitted yet. This interval can grow if there are long-running transactions. |
Stats Logs Gather Latency | secs | Latency of the logs gatherer. |
Stats Glob Gather Latency | secs | Latency of the glob gatherer. |
Router Latency Stats Metrics
Metric Name | Unit | Remarks |
---|---|---|
Router Capture Latency Times | secs | This metric can be used to calculate the current latency of the capture job. It consists of 2 timestamps separated by a comma ('t1,t2'). First value t1 is the timestamp until which all changes were captured. Second value t2 is the timestamp when t1 was generated. Hence, at timestamp t2, the capture latency is given by t2 - t1. For everything between 'now' and t2 the exact latency is uncertain. |
Router Integrate Latency Times | secs | This metric can be used to calculate the current latency of the integrate job. It consists of 1 or 2 timestamps separated by a comma ('t1[,t2]'). First value t1 is the timestamp until which all changes were captured, but not integrated yet. If there are no changes to integrate, the second value t2 is the timestamp when t1 was generated. Then, at timestamp t2, the integrate latency is given by t2 - t1. Otherwise, if there are changes to integrate, there is no second value and the integrate latency is given by the difference between the last time this metric was gathered (X-Hvr-Glob-Last-Cycle) and t1. For everything between 'now' and t2 or X-Hvr-Glob-Last-Cycle, the exact latency is uncertain. |
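The two router latency metrics above can be turned into a single current-latency number. Below is a minimal Python sketch of that calculation; the ISO timestamp format and the `last_cycle_time` argument (standing in for X-Hvr-Glob-Last-Cycle) are illustrative assumptions, not HVR's actual wire format.

```python
from datetime import datetime, timezone

def router_latency_secs(metric_value, last_cycle_time):
    """Current latency from a 'Router ... Latency Times' value ('t1' or 't1,t2').

    last_cycle_time stands in for X-Hvr-Glob-Last-Cycle and is only used when
    t2 is absent (i.e. there are still changes waiting to be integrated).
    """
    parts = [datetime.fromisoformat(p) for p in metric_value.split(",")]
    t1 = parts[0]
    t2 = parts[1] if len(parts) == 2 else last_cycle_time
    return (t2 - t1).total_seconds()

last_cycle = datetime(2024, 5, 1, 17, 0, tzinfo=timezone.utc)

# Capture: all changes up to 16:25 were captured at 16:55 -> 1800 s latency.
print(router_latency_secs("2024-05-01T16:25:00+00:00,2024-05-01T16:55:00+00:00",
                          last_cycle))
# Integrate with pending changes: no t2, so the last gather time is used -> 2100 s.
print(router_latency_secs("2024-05-01T16:25:00+00:00", last_cycle))
```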
Captured Row Counts Stats Metrics
Metric Name | Unit | Remarks |
---|---|---|
Captured Inserts | rows | Number of captured inserts. |
Captured Updates | rows | Number of captured updates. The updates are captured as 2 rows, not 1, unless parameter NoBeforeUpdate of action Capture is defined. Therefore, the total number of captured inserts, updates, and deletes can be less than the captured rows. |
Captured Deletes | rows | Number of captured deletes. |
Captured Changes | rows | Number of all captured changes. That is, the total number of captured inserts, updates, and deletes. Captured changes can be less than the captured rows. |
Captured Changes Backdated | rows | A 'backdated' version of 'Captured Changes'. Backdated means that the latency in the log message is used to assign the value to a time earlier than the message's timestamp. For example, if a message 'Scanned 100 changes from 30 mins ago' has timestamp '16:55:00', then 100 is both added to 'Captured Changes' for 16:55 and also to 'Captured Changes Backdated' for 16:25, 30 minutes earlier. The backdated version shows when the changes happened in a DBMS, not when HVR captured them. Visual comparison of the backdated metric to its regularly dated one can display a bottleneck pattern; the area under both graph lines should be identical (the total amount of work done), but if the backdated graph shows a peak that quickly subsides and the regularly dated line shows a smaller rise that subsided more slowly, then a bottleneck is visible. |
Captured Rows | rows | Number of all captured rows. Sometimes changes (e.g. updates) are captured as 2 rows, not 1. Therefore, captured changes can be less than captured rows. |
Captured Rows Backdated | rows | A 'backdated' version of 'Captured Rows'. Backdated means that the latency in the log message is used to assign the value to a time earlier than the message's timestamp. For example, if a message 'Scanned 100 changes from 30 mins ago' has timestamp '16:55:00', then 100 is both added to 'Captured Rows' for 16:55 and also to 'Captured Rows Backdated' for 16:25, 30 minutes earlier. The backdated version shows when the changes happened in a DBMS, not when HVR captured them. Visual comparison of the backdated metric to its regularly dated one can display a bottleneck pattern; the area under both graph lines should be identical (the total amount of work done), but if the backdated graph shows a peak that quickly subsides and the regularly dated line shows a smaller rise that subsided more slowly, then a bottleneck is visible. |
Captured Skipped Rows | rows | Changes that are skipped by the HVR 'controls', e.g. after an online refresh. |
Augmented Rows | rows | This counts situations where a capture job performs a database query to fetch extra column value(s) to 'augment' other column values read from DBMS logging. These operations are relatively slow (database query needed). |
SAP Augment Selects | rows | Only occurs when action Transform with parameter SapUnpack is used. Counts situations where a capture job performs a database query to fetch extra rows to 'augment' SAP cluster rows read from DBMS logging. This happens when rows from the DBMS logging do not contain all relevant information to process the SAP cluster. These operations are relatively slow (database query needed). |
Captured DDL Statements | rows | Measures the number of DDL statements processed by action AdaptDDL. |
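The 'backdated' metrics above ('Captured Changes Backdated', 'Captured Rows Backdated') shift a measured value to the time the change happened in the DBMS rather than the time HVR captured it. A minimal sketch of that bookkeeping, assuming each log message has been reduced to a (timestamp, latency, count) triple; this is illustrative only, not HVR's internal aggregation.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def dated_and_backdated(log_entries):
    """Build a regularly dated and a backdated series from capture log entries.

    Each entry is (message_time, latency, row_count), e.g. a 'Scanned 100
    changes from 30 mins ago' message at 16:55 contributes 100 rows at 16:55
    in the regular series and 100 rows at 16:25 in the backdated series.
    """
    regular, backdated = defaultdict(int), defaultdict(int)
    for message_time, latency, count in log_entries:
        regular[message_time] += count              # when HVR captured the rows
        backdated[message_time - latency] += count  # when the DBMS wrote them
    return regular, backdated

entries = [(datetime(2024, 5, 1, 16, 55), timedelta(minutes=30), 100)]
regular, backdated = dated_and_backdated(entries)
print(dict(regular))    # {datetime(2024, 5, 1, 16, 55): 100}
print(dict(backdated))  # {datetime(2024, 5, 1, 16, 25): 100}
```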
Integrated Change Counts Stats Metrics
Metric Name | Unit | Remarks |
---|---|---|
Integrated Inserts | changes | Number of integrated inserts. |
Integrated Updates | changes | Number of integrated updates. Although updates can be captured as 2 rows, not 1, this metric counts update changes, not the underlying rows. |
Integrated Deletes | changes | Number of integrated deletes. |
Integrated Changes | changes | Number of all integrated changes. That is, the total number of integrated inserts, updates, and deletes. Sometimes changes (e.g. updates) are captured as 2 rows, not 1; such an update counts as 1, not 2. Therefore, the number of changes can be less than the number of rows. This count does NOT include DDL changes (or rows refreshed due to DDL changes). |
Integrated Skipped Changes | changes | Changes that are skipped by the HVR 'controls', e.g. during an online refresh or if a job is restarted after some interrupt. |
Changes Coalesced Away | changes | Counts how many changes got optimized away by coalescing. Coalescing is an operation where HVR combines consecutive changes on the same key into a single change (e.g. 1 insert + 3 updates become just 1 insert). Action Integrate with parameters Method=Burst or Method=Coalesce will do coalescing. |
Failed Inserts Saved | changes | Failed inserts written to the tbl__f table after errors when action Integrate with parameter OnErrorSaveFailed is defined, possibly because a row already existed. |
Failed Updates Saved | changes | Failed updates written to the tbl__f table after errors when action Integrate with parameter OnErrorSaveFailed is defined, possibly because a row did not exist. |
Failed Deletes Saved | changes | Failed deletes written to the tbl__f table after errors when action Integrate with parameter OnErrorSaveFailed is defined, possibly because a row did not exist. |
Failed Changes Saved | changes | Total number of changes (inserts, updates, and deletes) which failed and were written to the tbl__f table after errors when action Integrate with parameter OnErrorSaveFailed is defined, possibly because a row did not exist. |
Collision Changes Discarded | changes | Changes that were discarded because collision detection (action CollisionDetect) decided that the change was older than the current row in the target database. |
Empty Updates Discarded | changes | Number of updates (produced by a source database) that were discarded because no replicated column was changed. |
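As an illustration of the coalescing counted by 'Changes Coalesced Away', the sketch below combines changes on the same key into a single change, so 1 insert followed by 3 updates becomes 1 insert with the latest values. It is a simplified model under assumed change tuples (operation, key, row values); HVR's Burst/Coalesce logic inside the integrate job handles more cases than shown here.

```python
def coalesce(changes):
    """Collapse changes on the same key into one change per key (illustrative)."""
    merged = {}  # key -> (operation, row_values)
    for op, key, values in changes:
        prev = merged.get(key)
        if prev is None:
            merged[key] = (op, values)
        elif op == "delete":
            if prev[0] == "insert":
                del merged[key]          # insert followed by delete cancels out
            else:
                merged[key] = ("delete", values)
        else:
            # update after insert stays an insert, but with the newest values
            merged[key] = (prev[0] if prev[0] == "insert" else op, values)
    return [(op, key, values) for key, (op, values) in merged.items()]

# 1 insert + 3 updates on key 42 coalesce into a single insert with final values;
# the 3 changes that disappear would count towards 'Changes Coalesced Away'.
ops = [("insert", 42, {"x": 1}), ("update", 42, {"x": 2}),
       ("update", 42, {"x": 3}), ("update", 42, {"x": 4})]
print(coalesce(ops))  # [('insert', 42, {'x': 4})]
```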
Transactions Stats Metrics
Metric Name | Unit | Remarks |
---|---|---|
Captured Transactions | transaction | This metric counts the number of commits of transactions which contain changes to replicated tables. Note that one can see whether a system is dominated by large (e.g. 1000 rows/commit) transactions, but comparing this field with Captured Rows only shows an average. |
Captured Transactions Backdated | transaction | A 'backdated' version of 'Captured Transactions'. Backdated means that the latency in the log message is used to assign the value to a time earlier than the message's timestamp. For example, if a message 'Scanned 100 changes from 30 mins ago' has timestamp '16:55:00', then 100 is both added to 'Captured Transactions' for 16:55 and also to 'Captured Transactions Backdated' for 16:25, 30 minutes earlier. The backdated version shows when the changes happened in a DBMS, not when HVR captured them. Visual comparison of the backdated metric to its regularly dated one can display a bottleneck pattern; the area under both graph lines should be identical (the total amount of work done), but if the backdated graph shows a peak that quickly subsides and the regularly dated line shows a smaller rise that subsided more slowly, then the bottleneck is visible. |
Integrated Transactions | transaction | Integrate bundles many smaller captured transactions into fewer integrated transactions for speed. This metric measures these bundled commits, not the original ones. |
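The average transaction size mentioned for 'Captured Transactions' is simply the ratio of two metrics over the same statistics interval. The numbers below are purely illustrative.

```python
# Illustrative values for one statistics interval (not real HVR output).
captured_rows = 50_000
captured_transactions = 50

# Only the average commit size is visible; this cannot distinguish a few huge
# transactions mixed with many tiny ones from uniformly sized transactions.
print(captured_rows / captured_transactions)  # 1000.0 rows per commit on average
```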
Durations Stats Metrics
Metric Name | Unit | Remarks |
---|---|---|
Capture Duration Total | secs | Total amount of time that capture cycles took. |
Capture Duration Max | secs | Duration of the longest capture cycle. |
Capture Duration Average | secs | Average duration of a capture cycle. |
Integrate Duration Total | secs | Total amount of time that integrate cycles took. |
Integrate Duration Max | secs | Duration of the longest integrate cycle. |
Integrate Duration Average | secs | Average duration of an integrate cycle. |
Integrate Burst Move Prepare Duration Max | secs | Maximum amount of time that a burst integrate cycle took to 'prepare' the changes before moving them into the burst table. Preparing includes reading the compressed txt file, transporting it over the network, uncompressing, and (in some situations) applying transforms. It ends at the 'tipping point' of the 'sort', just before coalesce happens. |
Integrate Burst Move Staging Duration Max | secs | Maximum amount of time that a burst integrate cycle took between the sorting 'tipping point' before coalesce and writing into staging files. The metric is not available if HVR does not use a staging file to bulk load data into a database target (e.g. it 'streams' rows into a bulk-load API instead). |
Integrate Burst Move Load Duration Max | secs | Maximum amount of time that a burst integrate cycle took to load the changes into the burst table. If staging is used, this is the duration of the bulk load from the staging files into the burst table. Otherwise, it is the duration of streaming the sorted changes directly into the burst table. |
Integrate Burst Move Prepare Duration | secs | Amount of time that a burst integrate cycle took to 'prepare' the changes before moving them into the burst table. Preparing includes reading the compressed txt file, transporting it over the network, uncompressing, and (in some situations) applying transforms. It ends at the 'tipping point' of the 'sort', just before coalesce happens. |
Integrate Burst Move Staging Duration | secs | Amount of time that a burst integrate cycle took between the sorting 'tipping point' before coalesce and writing into staging files. The metric is not available if HVR does not use a staging file to bulk load data into a database target (e.g. it 'streams' rows into a bulk-load API instead). |
Integrate Burst Move Load Duration | secs | Amount of time that a burst integrate cycle took to load the changes into the burst table. If staging is used, this is the duration of the bulk load from the staging files into the burst table. Otherwise, it is the duration of streaming the sorted changes directly into the burst table. |
Integrate Burst Apply Duration | secs | Amount of time that a burst integrate cycle took to apply changes from the burst table to the base table using set-wise SQL statements. |
Integrate Burst Apply Duration Max | secs | Maximum amount of time that a burst integrate cycle took to apply changes from the burst table to the base table using set-wise SQL statements. |
Integrate Burst Duration Average Stats Metrics
Metric Name | Unit | Remarks |
---|---|---|
Integrate Burst Move Prepare Duration Average | secs | Average amount of time that a burst integrate cycle took to 'prepare' the changes before moving them into the burst table. Preparing includes reading the compressed txt file, transporting it over the network, uncompressing, and (in some situations) applying transforms. It ends at the 'tipping point' of the 'sort', just before coalesce happens. |
Integrate Burst Move Staging Duration Average | secs | Average amount of time that a burst integrate cycle took between the sorting 'tipping point' before coalesce and writing into staging files. The metric is not available if HVR does not use a staging file to bulk load data into a database target (e.g. it 'streams' rows into a bulk-load API instead). |
Integrate Burst Move Load Duration Average | secs | Average amount of time that a burst integrate cycle took to load the changes into the burst table. If staging is used, this is the duration of the bulk load from the staging files into the burst table. Otherwise, it is the duration of streaming the sorted changes directly into the burst table. |
Integrate Burst Apply Duration Average | secs | Average amount of time that a burst integrate cycle took to apply changes from the burst table to the base table using set-wise SQL statements. |
Speed Stats Metrics
Metric Name | Unit | Remarks |
---|---|---|
Capture Speed Max | rows/sec | Maximum speed that a capture cycle reached. |
Capture Speed Average | rows/sec | Average speed of capture cycles. |
Integrate Speed Max | rows/sec | Maximum speed that an integrate cycle had through all phases. |
Integrate Speed Average | rows/sec | Average speed that an integrate cycle had through all phases. |
Integrate Burst Apply Speed Max | rows/sec | Maximum speed that a burst integrate cycle had when applying changes from the burst table to the base table using set-wise SQL statements. |
Integrate Burst Apply Speed Average | rows/sec | Average speed that a burst integrate cycle had when applying changes from the burst table to the base table using set-wise SQL statements. |
Integrate Burst Move Prepare Speed Max | rows/sec | Maximum speed that a burst integrate cycle had when 'preparing' the changes before moving them into the burst table. Preparing includes reading the compressed txt file, transporting it over the network, uncompressing, and (in some situations) applying transforms. It ends at the 'tipping point' of the 'sort', just before coalesce happens. |
Integrate Burst Move Prepare Speed Average | rows/sec | Average speed that a burst integrate cycle had when 'preparing' the changes before moving them into the burst table. Preparing includes reading the compressed txt file, transporting it over the network, uncompressing, and (in some situations) applying transforms. It ends at the 'tipping point' of the 'sort', just before coalesce happens. |
Integrate Burst Move Staging Speed Max | rows/sec | Maximum speed that a burst integrate cycle had between the sorting 'tipping point' before coalesce and writing into staging files. The metric is not available if HVR does not use a staging file to bulk load data into a database target (e.g. it 'streams' rows into a bulk-load API instead). |
Integrate Burst Move Staging Speed Average | rows/sec | Average speed that a burst integrate cycle had between the sorting 'tipping point' before coalesce and writing into staging files. The metric is not available if HVR does not use a staging file to bulk load data into a database target (e.g. it 'streams' rows into a bulk-load API instead). |
Integrate Burst Move Load Speed Max | rows/sec | Maximum speed at which a burst integrate cycle loaded the changes into the burst table. If staging is used, this is the speed of the bulk load from the staging files into the burst table. Otherwise, it is the speed of streaming the sorted changes directly into the burst table. |
Integrate Burst Move Load Speed Average | rows/sec | Average speed at which a burst integrate cycle loaded the changes into the burst table. If staging is used, this is the speed of the bulk load from the staging files into the burst table. Otherwise, it is the speed of streaming the sorted changes directly into the burst table. |
Cycles Stats Metrics
Metric Name | Unit | Remarks |
---|---|---|
Capture Cycles | cycles | Count of capture cycles that started during this time period. This does not include 'sub-cycles' or 'silent cycles', but does include 'empty cycles'. A 'sub-cycle' happens when a busy capture job emits a block of changes but has not yet caught up to the 'top'. It can be recognized as an extra 'Scanning' message which is not preceded by a 'Cycle X' line. A 'silent cycle' is when a capture job sees no change activity and progresses its 'capture state' files without writing a line in the log (about every 10 secs). An 'empty cycle' is when a capture job sees no change activity and does write a line in the log (about every 10 mins). |
Integrate Cycles | cycles | Count of integrate cycles that started during this time period. Note that the integrate activity may fall into a subsequent period. |
Performance Metrics
Performance Metric | Description |
---|---|
CPU Usage | Average CPU usage of a hub or agent machine per minute, expressed as a percentage. |
IO Wait | The time the CPU of a hub or agent machine spent waiting for IO, expressed as a percentage. |
Network Dropped Incoming/Outgoing Packets | The number of dropped incoming/outgoing network packets on a hub or agent machine. |
Network Incoming/Outgoing Errors | The number of incoming/outgoing network errors of a hub or agent machine. |
Config/Temp/HVR_HOME Disk Time Spent Reading/Writing | The time the disk spent reading/writing to the HVR_HOME, HVR_CONFIG, and TEMP directories on a hub or agent machine, measured in milliseconds. |
Config/Temp/HVR_HOME Disk Time Spent doing IO | The time the disk spent on IO operations for the HVR_HOME, HVR_CONFIG, and TEMP directories on a hub or agent machine, measured in milliseconds. |
Config/Temp/HVR_HOME Disk Weighted Time Spent doing IO | The weighted time the disk spent on IO operations for the HVR_HOME, HVR_CONFIG, and TEMP directories on a hub or agent machine in milliseconds. |
Config/Temp/HVR_HOME Disk IO Operations In Progress | The number of in-progress disk IO operations for the HVR_HOME, HVR_CONFIG, and TEMP directories on a hub or agent machine. |
Byte I/O Stats Metrics
Metric Name | Unit | Remarks |
---|---|---|
Routed Bytes Written | bytes | Amount of compressed bytes HVR routed from the capture machine to the hub machine. |
Routed Bytes Written Uncompressed | bytes | Amount of uncompressed bytes HVR routed from the capture machine to the hub machine. These routed bytes refer to HVR's representation of routed rows in memory. This is different from the DBMS's 'storage size' (DBMS's storage of that row on a disk). For example, a table has a varchar(100) column containing 'Hello World' which HVR manages to compress down to 3 bytes. In HVR, the memory representation of varchar(100) is 103 bytes, whereas the DBMS storage is 13 bytes. |
Captured File Size | bytes | In a file-to-database or file-to-file replication channel, this counts the total size of the captured files. |
Capture DbmsLog Bytes | bytes | Amount of bytes HVR captured from the database's log file. |
Capture DbmsLog Bytes Backdated | bytes | A 'backdated' version of 'Capture DbmsLog Bytes'. 'Backdated' means that the latency in HVR's log message is used to assign the value to a time earlier than the message's timestamp. This reflects when the database wrote the bytes, not when HVR captured them. |
Compression Stats Metrics
Metric Name | Unit | Remarks |
---|---|---|
Compression Ratio Max | % | Maximum memory compression ratio. When transporting table rows, HVR reports its memory compression ratio; the number of compressed bytes transmitted over the network compared with HVR's representation of that row in memory. This is different from the DBMS's storage compression ratio (the number of compressed bytes transmitted compared with the DBMS's storage of that row on a disk). For example, a table has a varchar(100) column containing 'Hello World' which HVR manages to compress down to 3 bytes. In HVR, the memory representation of varchar(100) is 103 bytes, whereas the DBMS storage is 13 bytes. In this case, HVR's memory compression ratio is 97% (1-(3/103)), whereas the storage compression ratio would be 77% (1-(3/13)). |
Compression Ratio Average | % | Average over all compression ratios. Also see 'Compression Ratio Max'. |
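A worked version of the 'Hello World' example above, showing why the same 3 transmitted bytes give a 97% memory compression ratio but only a 77% storage compression ratio:

```python
# Worked example from the table above: a varchar(100) column holding 'Hello World'.
compressed_bytes = 3    # bytes HVR sent over the network
memory_bytes = 103      # HVR's in-memory representation of varchar(100)
storage_bytes = 13      # DBMS storage size of the value on disk

memory_ratio = 1 - compressed_bytes / memory_bytes    # 1 - 3/103
storage_ratio = 1 - compressed_bytes / storage_bytes  # 1 - 3/13
print(f"{memory_ratio:.0%} {storage_ratio:.0%}")      # 97% 77%
```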
Replicated Files Stats Metrics
Metric Name | Unit | Remarks |
---|---|---|
Captured Files | files | In a file-to-database or file-to-file replication channel, this counts the number of captured files. |
Integrated Files | files | For file integration, this counts the number of integrated files. |
Failed Files Saved | files | Number of failed files saved. |
Errors/Warnings Stats Metrics
Metric Name | Unit | Remarks |
---|---|---|
Errors | lines | Counts the number of errors (lines matching F_J*) in the job log file. Such an error line could affect multiple rows or could affect one row and be repeated lots of times (so it counts as multiple errors). |
^Errors | string | Annotation: Text of the most recent error line as additional information for metric Errors. |
Errors F_J* | lines | Groups the appearing errors by up to the 2 most significant error numbers. For example, if error F_JA1234 happened, then metric Errors F_JA1234 will increase by 1. |
^Errors F_J* | string | Annotation: Groups the appearing error messages by up to the 2 most significant error numbers. For example, if error F_JA1234 happened, then metric ^Errors F_JA1234 will hold the message of F_JA1234. |
Warnings | lines | Counts the number of warnings (lines matching W_J*) in the job log file. |
^Warnings | string | Annotation: Most recent warning line. |
Warnings W_J* | lines | Groups the appearing warnings by the warning number. For example, if warning W_JA1234 happened, then metric Warnings W_JA1234 will increase by 1. |
^Warnings W_J* | string | Annotation: Groups the appearing warning messages by the warning number. For example, if warning W_JA1234 happened, then metric ^Warnings W_JA1234 will hold the message of W_JA1234. |
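Below is a minimal sketch of how such per-code metrics and annotations could be derived from job log lines. The leading 'F_JA1234:' line layout and the single error code per line are assumptions for illustration only; HVR groups by up to the 2 most significant error numbers per message.

```python
import re
from collections import Counter

def count_error_metrics(log_lines):
    """Count 'Errors' and per-code 'Errors F_J*' metrics from job log lines."""
    metrics, annotations = Counter(), {}
    for line in log_lines:
        match = re.match(r"(F_J[A-Z]\d+):", line)  # assumed line layout
        if not match:
            continue
        code = match.group(1)
        metrics["Errors"] += 1                 # total error lines
        metrics[f"Errors {code}"] += 1         # grouped by error number
        annotations["^Errors"] = line          # most recent error line
        annotations[f"^Errors {code}"] = line  # most recent line for this code
    return metrics, annotations

log = ["F_JA1234: integrate failed for table orders",
       "F_JA1234: integrate failed for table orders",
       "F_JB9999: connection lost"]
print(count_error_metrics(log)[0])
# Counter({'Errors': 3, 'Errors F_JA1234': 2, 'Errors F_JB9999': 1})
```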
Router Rows Stats Metrics
Metric Name | Unit | Remarks |
---|---|---|
Capture Router Rows | rows | Number of rows that are queued up for integration in router transaction files. This counts rows for all integrate locations that integrate changes captured from the capture location of this measurement. |
Integrate Router Rows | rows | Number of rows that are queued up for integration in router transaction files. This counts rows for all capture locations that capture changes that are integrated into the location of this measurement. |
Router Bytes Stats Metrics
Metric Name | Unit | Remarks |
---|---|---|
Capture Router Bytes | bytes | Amount of bytes that is queued up for integration in router transaction files. This counts bytes for all integrate locations that integrate changes captured from the capture location of this measurement. |
Integrate Router Bytes | bytes | Amount of bytes that is queued up for integration in router transaction files. This counts bytes for all capture locations that capture changes that are integrated into the location of this measurement. |
Router Files Stats Metrics
Metric Name | Unit | Remarks |
---|---|---|
Capture Router Files | files | Number of router transaction files that are queued up for integration. This counts files for all integrate locations that integrate changes captured from the capture location of this measurement. |
Integrate Router Files | files | Number of router transaction files that are queued up for integration. This counts files for all capture locations that capture changes that are integrated into the location of this measurement. |
Enroll Revision Files | files | Number of enroll revision files in the enroll directory. This is only measured for capture locations with action AdaptDDL. |
Router Timestamps Stats Metrics
Metric Name | Unit | Remarks |
---|---|---|
Capture Rewind Time | time | Timestamp that capture has to rewind back to after a restart to safely capture all transactions that have not been emitted yet. |
Capture Emit Time | time | Timestamp of last emitted change. |
Capture Last Cycle Time | time | Timestamp when the last capture cycle finished. |
Capture Router Timestamp | time | Earliest creation timestamp of router transaction files. This checks files for all integrate locations that integrate changes captured from the capture location of this measurement. |
Integrate Router Timestamp | time | Earliest creation timestamp of router transaction files. This checks files for all capture locations that capture changes that are integrated into the location of this measurement. |
Enroll File Timestamp | time | Timestamp when last enroll was performed. |
Job Breakdown Stats Metrics
Metric Name | Unit | Remarks |
---|---|---|
Capture Hub Disk IO HVR Config Duration | secs | Time spent by the capture job on the hub while reading/writing files in the HVR_HOME and HVR_CONFIG directories. |
Capture Agent Disk IO HVR Config Duration | secs | Time spent by the capture job on the agent while reading/writing files in the HVR_HOME and HVR_CONFIG directories. |
Capture Hub Disk IO Location Duration | secs | Time spent by the capture job on the hub while reading files from outside the HVR directories. |
Capture Agent Disk IO Location Duration | secs | Time spent by the capture job on the agent while reading files from outside the HVR directories. |
Capture Hub Disk IO Temp Duration | secs | Time spent by the capture job on the hub while reading/writing temporary files in the HVR_TMP directory. |
Capture Agent Disk IO Temp Duration | secs | Time spent by the capture job on the agent while reading/writing temporary files in the HVR_TMP directory. |
Capture Hub FPipe Duration | secs | Time spent by the capture job on the hub while executing FPipes. |
Capture Agent FPipe Duration | secs | Time spent by the capture job on the agent while executing FPipes. |
Capture Hub FProc Duration | secs | Time spent by the capture job on the hub while executing FProcs. |
Capture Agent FProc Duration | secs | Time spent by the capture job on the agent while executing FProcs. |
Capture Hub SQL Query Duration | secs | Time spent by the capture job on the hub while executing SQL queries. |
Capture Agent SQL Query Duration | secs | Time spent by the capture job on the agent while executing SQL queries. |
Capture Hub Network IO Duration | secs | Time spent by the capture job on the hub while sending or receiving data, or waiting for the network in the HVR protocol. |
Capture Agent Network IO Duration | secs | Time spent by the capture job on the agent while sending or receiving data, or waiting for the network in the HVR protocol. |
Capture Hub Log Scan Duration | secs | Time spent by the capture job on the hub while scanning redo log files. |
Capture Agent Log Scan Duration | secs | Time spent by the capture job on the agent while scanning redo log files. |
Capture Hub Idle Duration | secs | Time spent by the capture job on the hub while waiting idle. |
Capture Agent Idle Duration | secs | Time spent by the capture job on the agent while waiting idle. |
Integrate Hub Disk IO HVR Config Duration | secs | Time spent by the integrate job on the hub while reading/writing files in the HVR_HOME and HVR_CONFIG directories. |
Integrate Agent Disk IO HVR Config Duration | secs | Time spent by the integrate job on the agent while reading/writing files in the HVR_HOME and HVR_CONFIG directories. |
Integrate Hub Disk IO Location Duration | secs | Time spent by the integrate job on the hub while writing files outside the HVR directories. |
Integrate Agent Disk IO Location Duration | secs | Time spent by the integrate job on the agent while writing files outside the HVR directories. |
Integrate Hub Disk IO Temp Duration | secs | Time spent by the integrate job on the hub while reading/writing temporary files in the HVR_TMP directory. |
Integrate Agent Disk IO Temp Duration | secs | Time spent by the integrate job on the agent while reading/writing temporary files in the HVR_TMP directory. |
Integrate Hub FPipe Duration | secs | Time spent by the integrate job on the hub while executing FPipes. |
Integrate Agent FPipe Duration | secs | Time spent by the integrate job on the agent while executing FPipes. |
Integrate Hub FProc Duration | secs | Time spent by the integrate job on the hub while executing FProcs. |
Integrate Agent FProc Duration | secs | Time spent by the integrate job on the agent while executing FProcs. |
Integrate Hub SQL Query Duration | secs | Time spent by the integrate job on the hub while executing SQL queries. |
Integrate Agent SQL Query Duration | secs | Time spent by the integrate job on the agent while executing SQL queries. |
Integrate Hub Network IO Duration | secs | Time spent by the integrate job on the hub while sending or receiving data, or waiting for the network in the HVR protocol. |
Integrate Agent Network IO Duration | secs | Time spent by the integrate job on the agent while sending or receiving data, or waiting for the network in the HVR protocol. |
Integrate Hub Log Scan Duration | secs | Time spent by the integrate job on the hub while scanning redo log files. |
Integrate Agent Log Scan Duration | secs | Time spent by the integrate job on the agent while scanning redo log files. |
Integrate Hub Idle Duration | secs | Time spent by the integrate job on the hub while waiting idle. |
Integrate Agent Idle Duration | secs | Time spent by the integrate job on the agent while waiting idle. |
Description of Units
The unit of measurement differs per metric. The following table lists the units and their descriptions:
Unit | Description |
---|---|
% | Percent. When used for compression, this means the percentage of bytes removed by compression. So if 100 bytes are compressed by 70%, then 30 bytes remain. |
bytes | Amount of bytes. |
changes | Changes of tables affected by the insert, update, or delete statements. Updates are sometimes moved as 2 rows (before-update and after-update). |
cycles | A capture or integrate cycle. |
files | Number of files. |
int | Integer number. |
lines | Number of message lines written to the log file. A single line could be an error message which mentions multiple failed changes. |
rows | Rows of tables affected by the insert, update, or delete statements. |
rows/sec | Measures the average of 'rows' or 'changes' replicated per second. |
runs | Number of times a job has been run. |
secs | Seconds. |
string | String data type. |
time | Timestamp. |
transaction | Unit 'transaction' means a group of changes terminated by a commit (not just a changed row). |