Hi everyone, this post is a continuation of Oracle Streams - Datapump Instantiation (Part 1); please refer to it beforehand for a clear picture of the prerequisites. This post discusses setting up Streams replication between two servers. To begin with, we have the following ready:
- A PROD.EXAMPLE.COM (source) database and a REPL.EXAMPLE.COM (destination) database
- Oracle Net configured between the two, and both databases running in ARCHIVELOG mode
- A STREAMADM user set up with default tablespace STREAM_TBS and Streams administrator privileges on both the source and destination databases
- Database links created under the STREAMADM user on each database to access the STREAMADM user on the other
With the above in place, we are ready to create the Streams-related configuration in the databases.
Create ANYDATA queue
An ANYDATA queue stores messages whose payloads are of ANYDATA type; it can therefore store a message with a payload of nearly any type, as long as the payload is wrapped in an ANYDATA wrapper. Create the ANYDATA queue on both the source and destination databases by executing the command below:
EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
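With no arguments, SET_UP_QUEUE creates a queue named streams_queue (backed by queue table streams_queue_table) in the schema of the invoking user, which is what the later steps reference as streamadm.streams_queue. If you prefer to be explicit, a sketch using the procedure's named parameters:
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'streamadm.streams_queue_table', -- queue table backing the queue
queue_name  => 'streamadm.streams_queue',       -- name referenced by the rules below
queue_user  => 'streamadm');                    -- user granted enqueue/dequeue privileges
END;
/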
Create CAPTURE Process/Rules
A capture process captures changes happening in the database/schemas/tables, depending upon the rules set up for it. The procedure used for setting up the rules automatically creates a capture process if it does not exist.
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => '<SCHEMA_NAME>',
streams_type => 'capture',
streams_name => 'CAPTURE_PROD',
queue_name => 'streamadm.streams_queue',
include_dml => TRUE,
include_ddl => TRUE,
include_tagged_lcr => FALSE,
source_database => 'PROD.EXAMPLE.COM',
inclusion_rule => TRUE,
and_condition => NULL);
END;
/
- The streams_name parameter specifies the name of the capture process; if it does not exist, it is created on the first execution. I named it CAPTURE_PROD
- The include_dml and include_ddl parameters are set to TRUE since I want to capture both DML and DDL changes
- source_database must be set to the global database name of the source
Run the above procedure once for each schema, supplying the correct SCHEMA_NAME.
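To verify that the capture process and its rules were created, the standard Streams dictionary views can be queried (a quick sanity check, not part of the original setup):
SELECT capture_name, queue_name, status FROM dba_capture;
SELECT rule_name, streams_type, streams_name
FROM dba_streams_rules
WHERE streams_name = 'CAPTURE_PROD';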
Prepare Schemas for Instantiation
The procedure below enables supplemental logging for all columns of the tables in the schema being prepared for instantiation, and for any table added to this schema in the future. The columns are logged unconditionally. This ensures that all changes to tables under the schema are captured by the capture process.
BEGIN
DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(
schema_name => '<SCHEMA_NAME>',
supplemental_logging => 'all');
END;
/
Run the above procedure once for each schema, supplying the correct SCHEMA_NAME.
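Whether a schema has been prepared can be confirmed from the capture side's dictionary (again, just a sanity check):
SELECT schema_name, timestamp FROM dba_capture_prepared_schemas;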
Take Datapump Export
The Data Pump export creates a dump file for the selected schemas which are to be replicated from source to destination. As discussed earlier, the objects to be replicated must exist on both databases, and Data Pump ensures this for us.
1. Find out the current SCN of the database
SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
e.g. 6069217041919
2. Take Datapump Export
expdp streamadm/streamadm DUMPFILE=exp_meta_user.dmp LOGFILE=exp_meta_user.log DIRECTORY=STREAM_SRC CONTENT=METADATA_ONLY FULL=y INCLUDE=ROLE,PROFILE
expdp streamadm/streamadm SCHEMAS=SCHEMA_NAMES_LIST DIRECTORY=STREAM_SRC DUMPFILE=exp_schemas.dmp FLASHBACK_SCN=6069217041919
Exporting ROLE and PROFILE objects is suggested because, in my case, the import at the destination failed due to the absence of these objects. The FLASHBACK_SCN parameter is used to get a consistent export, since the schema(s) may contain foreign key constraints. After the SCN has been obtained, ensure that no DDL changes occur in the database until instantiation is complete. SCHEMA_NAMES_LIST should be a comma-separated list of schema names.
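For completeness: the STREAM_SRC directory object referenced by expdp must exist and be readable/writable by STREAMADM. If it was not already created as part of the Part 1 prerequisites, a sketch (the OS path /tmp/src_dp is illustrative):
CREATE DIRECTORY STREAM_SRC AS '/tmp/src_dp';  -- path is an assumption
GRANT READ, WRITE ON DIRECTORY STREAM_SRC TO streamadm;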
Create Required Tablespaces
Permanent tablespaces associated with the schemas exported from the source must exist at the destination database for the import to succeed. The required tablespace names and their DDL can be obtained with the queries below (note that the IN lists expect individually quoted names, e.g. IN ('SCHEMA1','SCHEMA2')):
SET PAGES 10000 LINES 150 LONG 99999999;
SELECT DISTINCT 'SELECT DBMS_METADATA.GET_DDL(''TABLESPACE'','''
|| TABLESPACE_NAME || ''') FROM DUAL;'
FROM DBA_SEGMENTS
WHERE OWNER IN ('SCHEMA_NAMES_LIST');
SELECT DISTINCT 'SELECT DBMS_METADATA.GET_DDL(''TABLESPACE'','''
|| TEMPORARY_TABLESPACE || ''') FROM DUAL;'
FROM DBA_USERS
WHERE USERNAME IN ('SCHEMA_NAMES_LIST');
After obtaining the DDL statements for these tablespaces, adjust them to match the filesystem of the destination database. Copy or spool the statements into a .sql file and create the tablespaces at the destination database.
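For illustration, a DDL statement returned by DBMS_METADATA might be trimmed down and re-pointed like this before running it at the destination (tablespace name and datafile path are hypothetical):
CREATE TABLESPACE APP_DATA
DATAFILE '/u02/oradata/REPL/app_data01.dbf' SIZE 500M  -- destination filesystem path
AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED;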
Import Schemas into Destination Database
Copy the exported dump files exp_meta_user.dmp and exp_schemas.dmp from the source server to the destination server under the /tmp/dest_dp directory, which is the path for the directory object STREAM_DEST, and initiate the imports:
impdp streamadm/streamadm DUMPFILE=exp_meta_user.dmp LOGFILE=imp_meta_user.log DIRECTORY=STREAM_DEST
impdp streamadm/streamadm SCHEMAS=SCHEMA_NAMES_LIST DIRECTORY=STREAM_DEST DUMPFILE=exp_schemas.dmp LOGFILE=imp_schemas.log
Instantiate Objects at Destination
Instantiating the objects at the destination database sets, for each schema, the SCN from which changes will be applied at the destination. The procedure below, run at the source database over the database link, is used for this operation:
DECLARE
iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN@REPL.EXAMPLE.COM(
source_schema_name => 'SCHEMA_NAME',
source_database_name => 'PROD.EXAMPLE.COM',
instantiation_scn => iscn,
recursive => TRUE);
END;
/
Run the above procedure once for each schema, supplying the correct SCHEMA_NAME.
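The recorded instantiation SCNs can then be checked at the destination database (standard dictionary view, shown only as a sanity check):
SELECT source_schema, instantiation_scn
FROM dba_apply_instantiated_schemas
WHERE source_database = 'PROD.EXAMPLE.COM';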
Create APPLY Process/Rules
The apply process is created at the destination database; it applies the changes sent by the capture process from the source database. The procedure used to set up the apply rules automatically creates an apply process if it does not exist.
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'SCHEMA_NAME',
streams_type => 'apply',
streams_name => 'APPLY_RPL',
queue_name => 'streamadm.streams_queue',
include_dml => TRUE,
include_ddl => TRUE,
include_tagged_lcr => FALSE,
source_database => 'PROD.EXAMPLE.COM',
inclusion_rule => TRUE,
and_condition => NULL);
END;
/
- The streams_name parameter specifies the name of the apply process; if it does not exist, it is created on the first execution. I named it APPLY_RPL
- The include_dml and include_ddl parameters are set to TRUE since I want to apply both DML and DDL changes
- source_database must be set to the global database name of the source
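At this point the apply process exists but has not yet been started; this can be confirmed at the destination (sanity check only):
SELECT apply_name, queue_name, status FROM dba_apply;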
Create Propagation at Source
We now have a capture process ready to capture changes, an apply process waiting to apply them, and ANYDATA queues to hold them. The propagation sets up the mechanism that ships changes captured into the queue at the source database over to the queue at the destination database. A propagation is created automatically when propagation rules are added for the first time using the procedure below:
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
schema_name => 'SCHEMA_NAME',
streams_name => 'PROD_TO_RPL',
source_queue_name => 'streamadm.streams_queue',
destination_queue_name => 'streamadm.streams_queue@REPL.EXAMPLE.COM',
include_dml => TRUE,
include_ddl => TRUE,
include_tagged_lcr => FALSE,
source_database => 'PROD.EXAMPLE.COM',
inclusion_rule => TRUE,
and_condition => NULL,
queue_to_queue => TRUE);
END;
/
Run the above procedure once for each schema, supplying the correct SCHEMA_NAME.
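The propagation and its source/destination queues can be confirmed at the source database (sanity check only):
SELECT propagation_name, source_queue_name,
destination_queue_name, destination_dblink, status
FROM dba_propagation;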
Start Apply Process
At the destination database, run the procedures below to start the apply process. Setting disable_on_error to 'N' keeps the apply process running when it encounters an unhandled error; the failed transaction goes to the error queue instead.
BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'APPLY_RPL',
parameter => 'disable_on_error',
value => 'N');
END;
/
BEGIN
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'APPLY_RPL');
END;
/
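Failed transactions that land in the error queue can be inspected with (standard dictionary view, shown as a sanity check):
SELECT apply_name, local_transaction_id, error_message
FROM dba_apply_error;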
Start Capture Process
At the source database, run the procedure below to start the capture process:
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'CAPTURE_PROD');
END;
/
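Once started, the capture process's progress can be watched from the source database (sanity check; CAPTURED_SCN and APPLIED_SCN advance as changes flow through):
SELECT capture_name, status, captured_scn, applied_scn
FROM dba_capture;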
All done! Now test replication by creating a table, inserting a few rows, and dropping it, as sketched below.
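As a quick end-to-end check, something like the following can be run at the source in one of the replicated schemas (the table name and columns are illustrative); after a short delay, the table, its row, and finally the drop should all be visible at REPL.EXAMPLE.COM:
-- At PROD.EXAMPLE.COM, in a replicated schema
CREATE TABLE streams_test (id NUMBER PRIMARY KEY, note VARCHAR2(50));
INSERT INTO streams_test VALUES (1, 'replication check');
COMMIT;
-- Verify at REPL.EXAMPLE.COM that the table and row appeared, then:
DROP TABLE streams_test;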