Configuring Oracle Streams Downstream Capture


 

Data Replication Using Oracle Downstream Capture
The ever-increasing demand for sharing, consolidating, and synchronizing critical enterprise data, together with the need to provide secondary access to up-to-date information without compromising production system performance, calls for a technology like Oracle Streams.
Oracle Streams is versatile, robust, high-performance information distribution software for sharing data across multiple Oracle or heterogeneous database systems. It uses Oracle's LogMiner-based technology to extract database changes from the redo log files.
This article describes how to effectively leverage and set up Oracle archived-log downstream capture technology to replicate your enterprise data.
Oracle Downstream Capture
Data can be captured either locally, where the capture process runs on the source database, or remotely, where the capture process runs on a database other than the source database, also called the downstream database.
The Oracle downstream capture feature was introduced in Oracle 10g Release 1 to shift data capture from the source database to the downstream database, freeing system resources on the production database server for other critical activities.
With archived-log downstream capture, archived redo logs from the source database are shipped to the downstream database using any copy mechanism, such as redo transport services or file transfer protocol (FTP). The capture process scans the archived redo logs and captures DML/DDL changes based on a set of defined rules, formats the captured changes into events called logical change records (LCRs), and enqueues them in a staging queue. A propagation process propagates the LCRs to a destination queue on the destination database, where they are dequeued and applied by the apply process.
The disadvantage of archived-log downstream capture is that database changes on the source are not reflected immediately on the destination database: changes in the online redo logs cannot be propagated to the destination until they are archived. If this capture latency is not acceptable, explore real-time downstream capture. Real-time downstream capture, however, does not allow you to capture changes from multiple source databases on the same downstream database. Refer to http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14229/strms_capture.htm#sthref170 for a description of real-time downstream capture.
Configure Archived-Log Downstream Capture
Typically, the downstream database and the destination database are hosted on different systems. If you want to avoid the overhead of maintaining a separate downstream database and the additional database licensing costs, an alternative is to run the downstream capture process in the destination database.
This article illustrates how to set up an environment in which the downstream database is also the destination database, replicating a schema from the source database to the destination database. The steps listed in this section also apply when the downstream and destination databases are located on different systems.
An overview of the Oracle Streams environment:

Host Name             Instance Name   Global Database Name   Database Role
dhcp-cbjs05-151-183   henry           henry.sun.net          Source database
nazar                 stream1         stream1.sun.net        Downstream / destination database

Initialization parameter settings for stream1 and henry:

Parameter: compatible
  stream1: 10.2.0   henry: 10.2.0
  To use a downstream capture database, set this parameter to at least 10.1.0.

Parameter: global_names
  stream1: true   henry: true
  Database link names must match the global names of the databases they connect to.

Parameter: job_queue_processes
  stream1: 10   henry: 10
  The maximum number of job queue processes used to propagate messages. Set this parameter to at least 2.

Parameter: log_archive_dest_2
  stream1: n/a
  henry: 'service=stream1 noregister template=/export/home/oracle/arch/stream1_%t_%s_%r.arc'
  Enables the shipping of archived redo logs from the source database to the downstream database using log transport services.
  service    - the service name of the downstream database.
  noregister - the location of the archived redo log is not recorded in the downstream database control file.
  template   - the directory location and file-name template for archived redo logs at the downstream database. Make sure the specified directory exists on the downstream database system.

Parameter: log_archive_dest_state_2
  stream1: enable   henry: enable
  Enables the log_archive_dest_2 destination.

Parameter: parallel_max_servers
  stream1: 40   henry: 40
  The maximum number of parallel execution processes available to the capture and apply processes. You may encounter ORA-01372 or ORA-16081 during Streams capture and apply if the value is set too low. Set this parameter to at least 6.

Parameter: sga_target
  stream1: n/a   henry: 500M
  Enables Automatic Shared Memory Management (ASMM). In Oracle 10gR2, ASMM automatically manages streams_pool_size, which provides buffer areas for Streams processing. If this parameter is not set, you should manually set streams_pool_size to at least 200M; otherwise, memory for Streams is allocated from the shared pool. For example:

  alter system set streams_pool_size=200m;

Step 1: Configure archive log mode
Verify that both databases are in archive log mode. Archived redo logs are required by the capture process to capture changes.
SQL> select * from global_name;
GLOBAL_NAME
--------------------------------------------------------------------------------
HENRY.SUN.NET
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     1
Next log sequence to archive   3
Current log sequence           3
SQL> select * from global_name;
GLOBAL_NAME
--------------------------------------------------------------------------------
STREAM1.SUN.NET
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     4
Next log sequence to archive   6
Current log sequence           6
Step 2: Modify initialization parameters
Set the initialization parameters listed in the table above on henry and stream1; a sketch of the corresponding commands follows.
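A minimal sketch of the dynamic parameter changes, assuming an spfile is in use and using the service name (stream1) and archive directory from the parameter table above:

-- On henry (the source database): ship archived redo logs to the downstream database
alter system set log_archive_dest_2='service=stream1 noregister template=/export/home/oracle/arch/stream1_%t_%s_%r.arc' scope=both;
alter system set log_archive_dest_state_2=enable scope=both;

-- On both databases
alter system set global_names=true scope=both;
alter system set job_queue_processes=10 scope=both;
alter system set parallel_max_servers=40 scope=both;

The compatible parameter is not dynamic; if it needs to be raised, an instance restart is required.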
Step 3: Set up tnsnames.ora
Add the TNS entries below: the stream1 entry on henry (used by the log transport service) and the henry entry on stream1 (used by the database link).
STREAM1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = nazar)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = stream1.sun.net)
    )
  )

HENRY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dhcp-cbjs05-151-183)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = henry.sun.net)
    )
  )
Step 4: Create Streams administrator
Create a Streams administrator, strmadmin, with the required administrative privileges on both henry and stream1. You should create a tablespace on stream1 to exclusively hold the Streams administrator's tables and indexes. The strmadmin user on henry, however, does not create any database objects; the account is used to perform administrative tasks such as object instantiation and obtaining the first System Change Number (SCN) for the capture process at the downstream database.
create tablespace streamsts datafile '/oracle/streamsts01.dbf' size 100M;

create user strmadmin identified by strmadmin
  default tablespace streamsts
  temporary tablespace temp;

grant dba, select_catalog_role to strmadmin;

exec dbms_streams_auth.grant_admin_privilege('strmadmin', true);
Step 5: Create application schema
Assuming that there is already an existing SGREPORTS schema in henry, create the destination SGREPORTS schema on stream1.
create tablespace sg_report_data datafile '/oracle/sg_report_data01.dbf' size 100M;

create user sgreports identified by sgreports
  default tablespace sg_report_data
  temporary tablespace temp;

grant connect, resource to sgreports;
Step 7: Create database link
As the Streams administrator, create a private database link from stream1 to henry. The Streams administrator on stream1 uses this database link to perform administrative tasks against the source database, henry.
create database link henry.sun.net connect to strmadmin identified by strmadmin using 'henry';
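As a quick sanity check (not part of the original procedure), verify that the link resolves and that the global_names requirement is satisfied:

connect strmadmin/strmadmin@stream1
select global_name from global_name@henry.sun.net;

The query should return HENRY.SUN.NET.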
Step 8: Create Streams queues
Database changes are captured in queues and propagated to other databases. Create the capture queue on the downstream database and the apply queue on the destination database. In this example, since the downstream database is also the destination database, create both the capture and apply queues on stream1.
connect strmadmin/strmadmin@stream1

begin
  dbms_streams_adm.set_up_queue(
    queue_table => 'capture_stream1_table',
    queue_name  => 'capture_stream1_queue',
    queue_user  => 'strmadmin');
end;
/

begin
  dbms_streams_adm.set_up_queue(
    queue_table => 'apply_stream1_table',
    queue_name  => 'apply_stream1_queue',
    queue_user  => 'strmadmin');
end;
/
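To confirm the queues were created (a suggested check, not in the original steps):

connect strmadmin/strmadmin@stream1
select name, queue_table, queue_type from dba_queues where owner = 'STRMADMIN';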
Step 9: Create capture process
Use the create_capture procedure to create the capture process, capture_stream1, on the downstream database, stream1. Because use_database_link is set to true, the procedure connects to the source database henry through the database link to extract the data dictionary to the redo log, prepare objects for instantiation, and obtain the first SCN.
connect strmadmin/strmadmin@stream1

begin
  dbms_capture_adm.create_capture(
    queue_name        => 'capture_stream1_queue',
    capture_name      => 'capture_stream1',
    source_database   => 'henry.sun.net',
    use_database_link => true);
end;
/
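To confirm that the capture process was created and that a first SCN was obtained from the source (a suggested check, not in the original steps):

select capture_name, status, first_scn, source_database from dba_capture;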
Step 10: Configure capture process
Rules are added to positive or negative rule sets using the dbms_streams_adm package. Changes are captured using a positive rule set by specifying true for the inclusion_rule parameter. In the add_table_rules procedure below, we specify the previously created capture process name and queue name and create a positive rule set for the capture process to extract DML and DDL changes for the test table.
connect strmadmin/strmadmin@stream1

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'sgreports.test',
    streams_type    => 'capture',
    streams_name    => 'capture_stream1',
    queue_name      => 'capture_stream1_queue',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'henry.sun.net',
    inclusion_rule  => true);
END;
/
Step 11: Configure propagation process
The configuration for the propagation process is similar to the capture process. We add rules to the positive rule set and specify the source queue name and destination queue name. Changes are propagated from the source queue to the destination queue.
connect strmadmin/strmadmin@stream1

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'sgreports.test',
    streams_name           => 'henry_stream1',
    source_queue_name      => 'capture_stream1_queue',
    destination_queue_name => 'apply_stream1_queue',
    include_dml            => true,
    include_ddl            => true,
    source_database        => 'henry.sun.net',
    inclusion_rule         => true,
    queue_to_queue         => true);
END;
/
Step 12: Create objects for the destination application schema
On the source system:
  1. Create the test table to be replicated (if it does not already exist):

     create table sgreports.test as select * from dba_objects;

  2. As the system user, obtain the current SCN. This SCN is used later in the expdp command.

     select dbms_flashback.get_system_change_number() from dual;

     DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER()
     -----------------------------------------
                                        697666

  3. As the system user, create the Oracle directory for the export dump file. Make sure the physical directory exists on the filesystem.

     create directory expdir as '/export/home/oracle/ora_backup';

  4. Export the table, specifying the SCN obtained above:

     expdp system/oracle tables=sgreports.test directory=expdir logfile=test.log dumpfile=test.dmp flashback_scn=697666
On the destination system:
  1. As the system user, create the Oracle directory for the import dump file. Make sure the physical directory exists on the filesystem.

     create directory impdir as '/oracle/ora_backup';

  2. Copy the export dump file, test.dmp, from the source system to /oracle/ora_backup:

     scp test.dmp oracle@nazar.prc.sun.com:/oracle/ora_backup

  3. Import the application schema:

     impdp system/sys directory=impdir dumpfile=test.dmp logfile=test.log
Step 13: Configure apply process
Create the apply process and add rules to the positive rule set. The apply process dequeues the LCR events and applies the changes to the destination schema.
connect strmadmin/strmadmin@stream1

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'sgreports.test',
    streams_type    => 'apply',
    streams_name    => 'apply_stream1',
    queue_name      => 'apply_stream1_queue',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'henry.sun.net',
    inclusion_rule  => true);
END;
/
Configure the apply process to continue running even when there are errors during the apply process.
begin
  dbms_apply_adm.set_parameter(
    apply_name => 'apply_stream1',
    parameter  => 'disable_on_error',
    value      => 'n');
end;
/
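If the apply process does encounter errors, the failed transactions are recorded in dba_apply_error; one way (not part of the original article) to inspect and then retry them after fixing the underlying problem:

select apply_name, local_transaction_id, error_message from dba_apply_error;

exec dbms_apply_adm.execute_all_errors;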
Step 14: Start apply process
Start the apply process at the destination database.
connect strmadmin/strmadmin@stream1

begin
  dbms_apply_adm.start_apply(
    apply_name => 'apply_stream1');
end;
/
Step 15: Start capture process
Start the capture process at the downstream database.
connect strmadmin/strmadmin@stream1

begin
  dbms_capture_adm.start_capture(
    capture_name => 'capture_stream1');
end;
/
Step 16: Test Oracle Streams
The Streams environment is now ready to capture, propagate and apply changes from the source database to the destination database. Perform DML changes on the source database and then, as a privileged user, switch the log file so that the changes are archived and shipped to the downstream database:
SQL> connect sgreports/sgreports@henry.sun.net
Connected.
SQL> insert into sgreports.test select * from sgreports.test;

49769 rows created.

SQL> commit;

Commit complete.

alter system switch logfile;
Verify that the changes have been propagated and applied on the destination database:
SQL> connect sgreports/sgreports@stream1
Connected.
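For example (a suggested check, not shown in the original), compare the row count on the destination with the source; after the insert above it should be double the original row count of sgreports.test:

SQL> select count(*) from sgreports.test;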
Troubleshooting/Monitoring
Listed here are a few views for obtaining information about the capture, propagation and apply processes. They provide information such as the status of the processes, the number of messages enqueued and dequeued, and the error messages encountered during capture and apply. A few sample queries follow the list.
  • v$streams_apply_reader
  • v$streams_apply_coordinator
  • v$streams_capture
  • dba_apply
  • dba_apply_error
  • dba_capture
  • dba_propagation
  • dba_queue_schedules
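A few illustrative queries against these views (a sketch only; the column lists are trimmed to commonly used columns and may vary slightly by release):

-- capture process activity on the downstream database
select capture_name, state, total_messages_captured from v$streams_capture;

-- status of the capture and apply processes
select capture_name, status, captured_scn, applied_scn from dba_capture;
select apply_name, status from dba_apply;

-- propagation status
select propagation_name, status from dba_propagation;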
Conclusion
With Oracle Streams replication, information can be shared easily among multiple databases. This article focused on the archived-log downstream capture solution; Oracle Streams also supports local capture and real-time downstream capture, which are likewise easy to implement. Hopefully this article has presented a straightforward and concise overview of Oracle downstream capture and its capabilities.