What Happens If Source Database Has High Log Generation?
Question
The log generation is very high. Will replication be impacted?
Environment
HVR 5
Answer
If the source generates a large volume of log data in a short period of time, the impact on the target depends on a number of factors. An important factor is whether the log contains many small transactions or a few large ones. Here are a few considerations to help you answer the question:
HVR captures all transactions that are performed on the system, but it only propagates a transaction once its commit is seen. Database changes are written to the log continuously, and HVR continuously processes the log as it is written, trying to stay up to date. So even though a large volume of log data may be written in a relatively short period of time, HVR may still keep up with the capture and start propagating changes as soon as it sees the commit for the transaction(s) it is tracking.
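The capture-then-propagate-on-commit behavior described above can be sketched as follows. This is a conceptual illustration, not HVR internals; all names (the record tuples, the propagate callback) are hypothetical.

```python
# Conceptual sketch (not HVR code): buffer row changes per transaction as the
# log is read, and propagate them only when a commit record is seen.
from collections import defaultdict

def capture(log_records, propagate):
    """log_records: iterable of (txn_id, op, payload) tuples, where op is
    'change', 'commit', or 'rollback'. propagate: callback invoked only
    for committed work."""
    open_txns = defaultdict(list)  # txn_id -> buffered changes
    for txn_id, op, payload in log_records:
        if op == "change":
            open_txns[txn_id].append(payload)             # track, do not send yet
        elif op == "commit":
            propagate(txn_id, open_txns.pop(txn_id, []))  # send only on commit
        elif op == "rollback":
            open_txns.pop(txn_id, None)                   # discard uncommitted work

sent = []
capture(
    [("t1", "change", "row A"), ("t2", "change", "row B"),
     ("t2", "commit", None), ("t1", "rollback", None)],
    lambda txn, rows: sent.append((txn, rows)),
)
# Only t2's committed change is propagated; t1's work is discarded on rollback.
print(sent)  # -> [('t2', ['row B'])]
```

The key point the sketch shows: a large volume of log writes does not by itself produce target activity; only commits do.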
HVR only captures changes for tables that are part of the channel. There may be many database changes that are irrelevant to HVR, so even though HVR tracks all transactions, it may retain very little data per transaction.
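The filtering described above can be illustrated with a minimal sketch; the channel table set and log records here are hypothetical examples, not an HVR API.

```python
# Conceptual sketch: only changes to tables enrolled in the channel are kept.
channel_tables = {"orders", "customers"}   # hypothetical channel definition

log = [("orders", "insert"), ("audit_log", "insert"), ("customers", "update")]

# Changes to audit_log are read from the log but not retained.
kept = [rec for rec in log if rec[0] in channel_tables]
print(kept)  # -> [('orders', 'insert'), ('customers', 'update')]
```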
Transactions in the database log are written in commit order. Every database has a commit sequence number (e.g., in Oracle this is the SCN (System Change Number)). By default, when not using the /Burst option in Integrate, HVR applies the changes to the target database in the source's commit order. If a long-running transaction on the source made a lot of changes to tables that are captured by HVR, it may take some time for HVR to process that transaction's changes on the target. If that happens, you will see latency in HVR increase simply because HVR is working on a large transaction that takes time to process.
If, for example, it takes 2 minutes to apply a single transaction to the target database, then at the end of that transaction HVR will show 2 minutes of latency (which is quickly caught up afterward). Any short-running transactions that committed after the long-running transaction will only be applied after the large transaction has been applied, to keep the destination database consistent.

NOTE: You can use the /TxSplitLimit option in Integrate to split large transactions into multiple smaller transactions. However, doing so typically breaks transaction boundaries and the consistency maintained within a channel.
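The latency effect of strict commit-order apply can be worked through with a small sketch. This is an illustration of the queuing behavior only; the per-transaction apply costs are hypothetical numbers, not HVR measurements.

```python
# Conceptual sketch: transactions are applied strictly in source commit order,
# so a long-running transaction delays every later (even tiny) transaction.
def replay_in_commit_order(txns):
    """txns: list of (commit_scn, apply_cost_seconds). Returns, per
    transaction, the elapsed time at which it finishes applying."""
    clock = 0.0
    finish = []
    for scn, cost in sorted(txns, key=lambda t: t[0]):  # commit order
        clock += cost            # later transactions wait behind earlier ones
        finish.append((scn, clock))
    return finish

# A 120 s transaction committed first delays two 1 s transactions behind it:
print(replay_in_commit_order([(100, 120.0), (101, 1.0), (102, 1.0)]))
# -> [(100, 120.0), (101, 121.0), (102, 122.0)]
```

The two 1-second transactions finish only after the 120-second one, which is exactly the latency spike (and quick catch-up) described above.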
If there is a backlog of transaction files to be processed, then by default HVR will process up to 10 MB of compressed transaction files and apply them as a single transaction to the target database. Given that performing a commit is a relatively expensive database operation, this is often the best way to speed up database integration, but it may lead to relatively large transactions on the target database. The /CycleByteLimit option on the Integrate action can be used to decrease or increase this cycle limit, causing more or less frequent commits when there is a backlog of transactions to be processed on the Integrate side. Learn more about these parameters in our Integrate documentation.
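The cycle-batching idea above can be sketched as a simple planning function. This mirrors the concept behind /CycleByteLimit but is not HVR's implementation; the function name and the way files are grouped are illustrative assumptions.

```python
# Conceptual sketch: group a backlog of compressed transaction files into
# integrate cycles, where each cycle is applied as one target transaction
# (one commit). cycle_byte_limit plays the role of /CycleByteLimit.
def plan_cycles(file_sizes, cycle_byte_limit=10 * 1024 * 1024):
    """file_sizes: compressed sizes (bytes) of queued transaction files.
    Returns a list of cycles; each cycle costs one commit on the target."""
    cycles, current, total = [], [], 0
    for size in file_sizes:
        if current and total + size > cycle_byte_limit:
            cycles.append(current)       # close the cycle: one commit
            current, total = [], 0
        current.append(size)
        total += size
    if current:
        cycles.append(current)
    return cycles

mb = 1024 * 1024
# Four 4 MB files under a 10 MB limit fit in two cycles, i.e. two commits
# instead of four:
print(len(plan_cycles([4 * mb, 4 * mb, 4 * mb, 4 * mb])))  # -> 2
```

Raising the limit yields fewer, larger target transactions (fewer commits); lowering it yields more frequent, smaller ones, which is the trade-off the option controls.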