
Configuring a Downstream Capture database for Oracle GoldenGate

  • Posted by Gavin Soorma
  • On March 12, 2016
  • Downstream Capture, integrated extract, Oracle GoldenGate

Oracle GoldenGate versions 11.2 and above enable downstream capture of data from a single source or from multiple sources. This feature applies to Oracle databases only, and it helps customers meet the common IT requirement of limiting the number of new processes installed on a production source system.

This feature requires redo transport to be configured on the source system so that redo is shipped to the downstream database. It also requires the downstream database, which is where the Integrated Extract will be installed, to be open in read-write mode.
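
For illustration, the source-side redo transport setup might look like the following minimal sketch in SQL*Plus. The database unique names srcdb and dwnstrm are assumptions for the example, not values from this note:

SQL> -- Run on the SOURCE database; srcdb and dwnstrm are hypothetical names
SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(srcdb,dwnstrm)';
SQL> -- Ship online redo asynchronously to the downstream mining database
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=dwnstrm ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dwnstrm';
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;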

 


Integrated Capture Deployment Options

There are two deployment options for integrated capture, depending on where the mining database is located. The mining database is the database that hosts the logmining server.

Local deployment: For local deployment, the source database and the mining database are the same database. The source database is the database whose redo stream you want to mine to capture changes, and it is also where you deploy the logmining server. Because integrated capture is fully integrated with the database, this mode does not require any special database setup.
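
With local deployment, for example, the Extract is simply registered against the source database itself from GGSCI. A minimal sketch; the extract name exti and the ggadmin credentials are assumptions:

GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggadmin
GGSCI> REGISTER EXTRACT exti DATABASE
GGSCI> ADD EXTRACT exti, INTEGRATED TRANLOG, BEGIN NOW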

Downstream deployment: In downstream deployment, the source and mining databases are different databases. You create the logmining server at the downstream database and configure redo transport at the source database to ship the redo logs to the downstream mining database, where capture takes place. A downstream mining database is desirable when you want to offload the capture overhead, along with any overhead from transformation or other processing, from the production server, but it requires log shipping and additional configuration.

Downstream deployment allows you to offload work from the source database. The source database ships its redo logs to a downstream database, and Extract uses the logmining server at the downstream database to mine them.

When online redo logs are shipped to the downstream database, real-time capture by Extract is possible: changes are captured as though Extract were reading from the source logs. To accept online redo logs from a source database, the downstream mining database must have standby redo logs configured.
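
As a sketch, the standby redo logs are added on the downstream mining database along these lines. The file paths and sizes are assumptions; standard guidance is to size standby redo logs the same as the source online redo logs, with at least one more group than the source has:

SQL> -- Run on the DOWNSTREAM mining database; paths and sizes are hypothetical
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/u02/oradata/dwnstrm/srl04.log') SIZE 512M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/u02/oradata/dwnstrm/srl05.log') SIZE 512M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('/u02/oradata/dwnstrm/srl06.log') SIZE 512M;
SQL> -- Archive the received standby redo logs locally as foreign archived logs
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='LOCATION=/u02/foreign_archive VALID_FOR=(STANDBY_LOGFILES,PRIMARY_ROLE)';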

 

Here is a high-level overview of the process.

• Changes occurring on the source database are written to the online redo log files by the Log Writer (LGWR) background process.

• When a log switch occurs, the contents of the online redo log files are also written to archived log files.

• As each online redo log file fills up and is archived, the archived log is shipped via Redo Transport Services to the downstream database, where it is received by the Remote File Server (RFS) process.

• The downstream database can also be configured with standby redo log files, which receive redo data from the source database as it is generated rather than at each log switch. The RFS process writes these changes to the standby redo log files, which is what enables real-time capture.

• If the downstream database has not been configured with standby redo log files, the RFS process only receives changes when an online redo log file fills up and is archived on the source.

• The logmining server running on the downstream database extracts these changes in the form of Logical Change Records (LCRs), which are handed to the GoldenGate Integrated Extract process (registered as shown in the sketch after this list).

• The GoldenGate Integrated Extract process writes the changes to the GoldenGate trail files.

• The trail files are then sent to the target system, where the GoldenGate Replicat process applies the changes to the target database.
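
Tying these steps together, registering a downstream Integrated Extract might look like the sketch below. The extract name extds, the TNS aliases srcdb and dwnstrm, the trail prefix and the ggadmin credentials are all assumptions; the downstream_real_time_mine option is what enables the real-time capture described above:

GGSCI> DBLOGIN USERID ggadmin@srcdb, PASSWORD ggadmin
GGSCI> MININGDBLOGIN USERID ggadmin@dwnstrm, PASSWORD ggadmin
GGSCI> REGISTER EXTRACT extds DATABASE
GGSCI> ADD EXTRACT extds, INTEGRATED TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/ds, EXTRACT extds

In the Extract parameter file, the mining database connection and real-time mining are then specified with:

TRANLOGOPTIONS MININGUSER ggadmin@dwnstrm, MININGPASSWORD ggadmin
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)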

 

Read the note on How to Configure a Downstream Capture Database for Oracle GoldenGate

 
