Friday, 20 May 2016

Setup Oracle Streams - Datapump Instantiation (Part 2) - Step by Step

Hi everyone, this post continues Oracle Streams - Datapump Instantiation (Part 1); please go through that first to get a clear picture of the prerequisites. This post discusses setting up Streams replication between two servers. To begin with, we have the following ready:

  • A PROD.EXAMPLE.COM (Source) database and REPL.EXAMPLE.COM (Destination) database
  • Oracle NET configured between these two and both are in ARCHIVELOG mode
  • STREAMADM user is set up with default tablespace STREAM_TBS and Streams Administrator privileges on both the source and destination databases
  • A database link has been created under the STREAMADM user on each database, pointing to the STREAMADM user on the other database

With the above in place, we are ready to create the Streams-related configuration in the databases.

Create ANYDATA queue 

An ANYDATA queue stores messages whose payloads are of ANYDATA type. Therefore, an ANYDATA queue can store a message with a payload of nearly any type, if the payload is wrapped in an ANYDATA wrapper.

Create ANYDATA queue on source and destination databases by executing below command:

EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
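
The no-argument call above uses the procedure's defaults, which create a queue named streams_queue (the name the capture/apply rules later in this post rely on) owned by the invoking Streams administrator. If you prefer to spell the names out explicitly, a minimal sketch with named parameters would be:

BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streamadm.streams_queue_table',  -- queue table to create
    queue_name  => 'streamadm.streams_queue',        -- ANYDATA queue used by the rules below
    queue_user  => 'STREAMADM');                     -- user granted enqueue/dequeue privileges
END;
/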

Create CAPTURE Process/Rules

A capture process captures changes happening in the database/schema/tables, depending on the rules set up for it. The procedure used to set up the rules automatically creates a capture process if one does not exist.

BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
   schema_name          => '<SCHEMA_NAME>',
   streams_type         => 'capture',
   streams_name         => 'CAPTURE_PROD',
   queue_name           => 'streamadm.streams_queue',
   include_dml          => TRUE,
   include_ddl          => TRUE,
   include_tagged_lcr   => FALSE,
   source_database      => 'PROD.EXAMPLE.COM',
   inclusion_rule       => TRUE,
   and_condition        => NULL);
END;
/

  • streams_name specifies the name of the capture process; if it does not exist, it is created on the first execution. I named it CAPTURE_PROD
  • include_dml and include_ddl are set to TRUE since I want to capture both DML and DDL changes
  • source_database must be the global database name of the source
Run the above procedure once for each schema, supplying the correct SCHEMA_NAME. A quick verification of the result is shown below.
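
To confirm that the capture process and its schema rules were created as expected, the Streams dictionary views on the source can be queried; a minimal check (view and column names as documented for 11gR2) might look like this:

-- Capture process and its status (a newly created process shows DISABLED until started)
SELECT CAPTURE_NAME, QUEUE_NAME, STATUS FROM DBA_CAPTURE;

-- Schema-level rules attached to capture processes
SELECT STREAMS_NAME, SCHEMA_NAME, RULE_TYPE, RULE_NAME
FROM DBA_STREAMS_SCHEMA_RULES
WHERE STREAMS_TYPE = 'CAPTURE';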

Prepare Schemas for Instantiation

The procedure below enables supplemental logging for all columns of the tables in the schema being prepared for instantiation, and for any table added to this schema in the future. The columns are logged unconditionally, which ensures that all changes to tables in the schema are captured by the capture process.


BEGIN
  DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(
    schema_name          => '<SCHEMA_NAME>',
    supplemental_logging => 'all'); 
END;  
/

Run the above procedure once for each schema, supplying the correct SCHEMA_NAME.
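
Optionally, the schemas already prepared for instantiation can be listed on the source with the dictionary view below:

SELECT SCHEMA_NAME, TIMESTAMP FROM DBA_CAPTURE_PREPARED_SCHEMAS;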

Take Datapump Export

The Datapump export creates a dump file for the selected schemas which are to be replicated from source to destination. As discussed earlier, the objects to be replicated must exist on both databases, and Datapump takes care of this for us.

1. Find out the current SCN of the database

SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL; 
eg: 6069217041919

2. Take Datapump Export

expdp streamadm/streamadm DUMPFILE=exp_meta_user.dmp LOGFILE=exp_meta_user.log DIRECTORY=STREAM_SRC CONTENT=METADATA_ONLY FULL=y INCLUDE=ROLE,PROFILE


expdp streamadm/streamadm SCHEMAS=SCHEMA_NAMES_LIST DIRECTORY=STREAM_SRC DUMPFILE=exp_schemas.dmp FLASHBACK_SCN=6069217041919

Exporting ROLE and PROFILE objects is suggested because, in my case, the import at the destination failed due to the absence of these objects. The FLASHBACK_SCN parameter is used since the schema(s) may contain foreign key constraints. Once the SCN has been obtained, ensure that no DDL changes occur in the database until the export completes. SCHEMA_NAMES_LIST should be a comma-separated list of schema names.

Create Required Tablespaces

Permanent tablespaces associated with the schemas exported from the source must exist at the destination database for the import to succeed. The required tablespace names and their DDL can be obtained with the queries below:

SET PAGES 10000 LINES 150 LONG 99999999;

SELECT DISTINCT 'SELECT DBMS_METADATA.GET_DDL(''TABLESPACE'','''
  || TABLESPACE_NAME || ''') FROM DUAL;'
FROM DBA_SEGMENTS
WHERE OWNER IN ('SCHEMA_NAMES_LIST');

SELECT DISTINCT 'SELECT DBMS_METADATA.GET_DDL(''TABLESPACE'','''
  || TEMPORARY_TABLESPACE || ''') FROM DUAL;'
FROM DBA_USERS
WHERE USERNAME IN ('SCHEMA_NAMES_LIST');


After obtaining the DDL statements for these tablespaces, adjust them to match the destination database's filesystem. Copy or spool the statements into a .sql file and create the tablespaces at the destination database.
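
As a purely illustrative sketch (the tablespace name APP_DATA and the file paths are hypothetical), the DDL extracted from the source would be adjusted to the destination's filesystem before being run there:

-- DDL as returned by DBMS_METADATA on the source (simplified)
CREATE TABLESPACE APP_DATA DATAFILE '/u01/oradata/PROD/app_data01.dbf' SIZE 500M AUTOEXTEND ON;

-- Same tablespace re-created at the destination with its own datafile location
CREATE TABLESPACE APP_DATA DATAFILE '/u02/oradata/REPL/app_data01.dbf' SIZE 500M AUTOEXTEND ON;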

Import Schemas into Destination Database

Copy the exported dump files exp_meta_user.dmp and exp_schemas.dmp from the source server to the destination server under the /tmp/dest_dp directory, which is the path of the directory object STREAM_DEST, and initiate the import.

impdp streamadm/streamadm DUMPFILE=exp_meta_user.dmp LOGFILE=imp_meta_user.log DIRECTORY=STREAM_DEST

impdp streamadm/streamadm SCHEMAS=SCHEMA_NAMES_LIST DIRECTORY=STREAM_DEST DUMPFILE=exp_schemas.dmp LOGFILE=imp_schemas.log
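
A simple sanity check after the import is to compare object counts per schema on source and destination; the counts should match for the replicated schemas (SCHEMA_NAMES_LIST is the same placeholder used above):

SELECT OWNER, OBJECT_TYPE, COUNT(*) AS OBJECT_COUNT
FROM DBA_OBJECTS
WHERE OWNER IN ('SCHEMA_NAMES_LIST')
GROUP BY OWNER, OBJECT_TYPE
ORDER BY OWNER, OBJECT_TYPE;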
 

Instantiate Objects at Destination

Instantiating objects at the destination database sets the SCN from which changes will be applied there. The block below is run at the source database as STREAMADM: it reads the current SCN locally and sets it as the instantiation SCN at the destination over the REPL.EXAMPLE.COM database link.

DECLARE
  iscn  NUMBER;         -- Variable to hold instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN@REPL.EXAMPLE.COM(
    source_schema_name    => 'SCHEMA_NAME',
    source_database_name  => 'PROD.EXAMPLE.COM',
    instantiation_scn     => iscn,
    recursive             => TRUE);
END;
/


Run the above procedure once for each schema, supplying the correct SCHEMA_NAME.
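
The instantiation SCNs recorded for each schema can then be verified by querying DBA_APPLY_INSTANTIATED_SCHEMAS at the destination database:

SELECT SOURCE_SCHEMA, SOURCE_DATABASE, INSTANTIATION_SCN
FROM DBA_APPLY_INSTANTIATED_SCHEMAS;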

Create APPLY Process/Rules

The apply process is created at the destination database; it applies the changes sent by the capture process on the source database. The procedure used to set up the APPLY rules automatically creates an apply process if one does not exist.

BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
   schema_name          => 'SCHEMA_NAME',
   streams_type         => 'apply',
   streams_name         => 'APPLY_RPL',
   queue_name           => 'streamadm.streams_queue',
   include_dml          => TRUE,
   include_ddl          => TRUE,
   include_tagged_lcr   => FALSE,
   source_database      => 'PROD.EXAMPLE.COM',
   inclusion_rule       => TRUE,
   and_condition        => NULL);
END;
/

  • streams_name specifies the name of the apply process; if it does not exist, it is created on the first execution. I named it APPLY_RPL
  • include_dml and include_ddl are set to TRUE since I want to apply both DML and DDL changes
  • source_database must be the global database name of the source
Run the above procedure once for each schema, supplying the correct SCHEMA_NAME.
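
As on the capture side, the apply process and its rules can be confirmed at the destination using the Streams dictionary views:

-- Apply process status (remains DISABLED until START_APPLY is run later)
SELECT APPLY_NAME, QUEUE_NAME, STATUS FROM DBA_APPLY;

-- Schema-level rules attached to apply processes
SELECT STREAMS_NAME, SCHEMA_NAME, RULE_TYPE
FROM DBA_STREAMS_SCHEMA_RULES
WHERE STREAMS_TYPE = 'APPLY';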

Create Propagation at Source

We now have a capture process ready to capture changes, an apply process waiting to apply changes, and an ANYDATA queue to hold changes. The propagation sets up the mechanism to ship changes captured into the queue at the source database to the queue at the destination database.

A propagation can be created automatically when propagation rules are added for the first time using the procedure below:

BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
   schema_name              => 'SCHEMA_NAME',
   streams_name             => 'PROD_TO_RPL',
   source_queue_name        => 'streamadm.streams_queue',
   destination_queue_name   => 'streamadm.streams_queue@REPL.EXAMPLE.COM',
   include_dml              => TRUE,
   include_ddl              => TRUE,
   include_tagged_lcr       => FALSE,
   source_database          => 'PROD.EXAMPLE.COM',
   inclusion_rule           => TRUE,
   and_condition            => NULL,
   queue_to_queue           => TRUE);
END;
/


Run the above procedure once for each schema, supplying the correct SCHEMA_NAME.
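
The propagation, its queues and its database link can be checked at the source database:

SELECT PROPAGATION_NAME, SOURCE_QUEUE_NAME, DESTINATION_QUEUE_NAME,
       DESTINATION_DBLINK, STATUS
FROM DBA_PROPAGATION;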

Start Apply Process

At the destination database, run the procedures below to start the APPLY process. Setting disable_on_error to 'N' keeps the apply process running when it hits an error; the failed transaction is placed in the error queue instead of disabling the process.

BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name  => 'APPLY_RPL',
    parameter   => 'disable_on_error',
    value       => 'N');
END;
/

BEGIN
DBMS_APPLY_ADM.START_APPLY(
    apply_name  => 'APPLY_RPL');
END;
/


Start Capture Process

At the source database, run the procedure below to start the CAPTURE process:

BEGIN
   DBMS_CAPTURE_ADM.START_CAPTURE(
      capture_name  => 'CAPTURE_PROD');
END;
/
 
All done! Now test replication by creating a table in one of the replicated schemas, inserting a few rows, and dropping it.
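
A minimal end-to-end test, assuming one of the replicated schemas is called APP_USER (a hypothetical name), could look like this:

-- On PROD.EXAMPLE.COM (source)
CREATE TABLE APP_USER.STREAMS_TEST (id NUMBER PRIMARY KEY, note VARCHAR2(50));
INSERT INTO APP_USER.STREAMS_TEST VALUES (1, 'replicated row');
COMMIT;

-- On REPL.EXAMPLE.COM (destination), after allowing a short while for capture, propagation and apply
SELECT * FROM APP_USER.STREAMS_TEST;

-- Back on the source, drop the table; the drop is replicated too since include_ddl was TRUE
DROP TABLE APP_USER.STREAMS_TEST;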

Friday, 13 May 2016

Setup Oracle Streams - Datapump Instantiation (Part 1) - Step by Step

Today, I will continue from my previous post Instantiation - Oracle Streams Basics, which covered the basics of instantiation and its role in replication using Streams. This post covers the details of setting up the environment for replication between two databases: the source containing the schemas to be replicated, and a blank destination database.
 

Prerequisites:


1. Source database: We already have it, an Oracle 11gR2 Enterprise Edition instance. Let's call it PROD.EXAMPLE.COM

2. Destination database: Install Oracle 11gR2 Standard Edition software on the destination server and set up a database with the default tablespaces (SYSTEM, SYSAUX, UNDO, etc.). Let's call it REPL.EXAMPLE.COM

3. Oracle Net: Configure Oracle Net (listener.ora and tnsnames.ora) between the two databases. Test connectivity by connecting as SYS or any database user. I configured TNS alias PROD for PROD.EXAMPLE.COM and REPL for REPL.EXAMPLE.COM

4. Update Instance Parameter File: Update the instance pfile/spfile with the parameters below set to the recommended values:


compatible           : Should be same as database version (highest possible)
global_names         : Must be true
open_links           : 4 or higher
processes            : 100 or higher
sessions             : (1.5 x processes) + 22

Setting global_names to TRUE is mandatory, as Oracle Streams uses the global names of the source and destination databases for identification.
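
For reference, a sketch of setting these on an spfile-managed instance; the values below are just the minimums listed above and should be adjusted to your environment (open_links, processes and sessions are static parameters and take effect only after a restart):

SQL> ALTER SYSTEM SET global_names = TRUE SCOPE=BOTH;
SQL> ALTER SYSTEM SET open_links = 4 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET processes = 100 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET sessions = 172 SCOPE=SPFILE;   -- (1.5 x 100) + 22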

5. Archivelog Mode: Both source and destination databases must be in ARCHIVELOG mode

$ sqlplus / as sysdba
SQL> SELECT LOG_MODE FROM SYS.V$DATABASE;
-- If the output shows "ARCHIVELOG" then move on to step 6, otherwise follow the steps below
-- First set the log_archive_dest and log_archive_format parameters (a hypothetical example follows this block)
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> SELECT LOG_MODE FROM SYS.V$DATABASE;
-- Archivelog should be now enabled
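
If the archive destination parameters are not already set, a hypothetical example is shown below (the path and format string are illustrative only; log_archive_format requires a restart to take effect):

SQL> ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u01/arch/PROD' SCOPE=BOTH;
SQL> ALTER SYSTEM SET log_archive_format = 'PROD_%t_%s_%r.arc' SCOPE=SPFILE;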

6. Schemas Excluded: Remember that SYS, SYSTEM, CTXSYS and other Oracle-supplied schemas are excluded by Streams, so make sure not to create any application objects under them.

7. Unsupported Objects: Streams does not capture changes for certain data types, column types and change types. To find such objects in the database, use the query below:

SQL> SELECT * FROM DBA_STREAMS_UNSUPPORTED;

Note: Get specific details on Oracle Streams restrictions from the Oracle documentation.
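
To narrow the check down to the schemas you actually plan to replicate, the same view can be filtered by owner; the REASON column explains why each table is skipped:

SQL> SELECT OWNER, TABLE_NAME, REASON
     FROM DBA_STREAMS_UNSUPPORTED
     WHERE OWNER IN ('<SCHEMA_NAME>');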

Setup Environment


The Streams environment uses a dedicated Streams Administrator user. This user is configured to run the capture process and propagation on the source and the apply process on the destination. The user is given its own default tablespace to hold any relevant objects, which also makes cleanup easier if Streams is ever removed from the database. The user is also granted a set of privileges, including DBA and the Streams Administrator privilege.

 

Connect to Source Database

Prepare directory for keeping datapump export from source database
$ mkdir /tmp/src_dp
$ sqlplus / as sysdba
SQL> CREATE DIRECTORY STREAM_SRC AS '/tmp/src_dp';

Create Tablespace for STREAMADM user - This user will act as Streams Administrator
SQL> CREATE TABLESPACE STREAM_TBS DATAFILE '/<location>/stream_tbs01.dbf' SIZE 25M AUTOEXTEND ON;

Create STREAMADM user
SQL> CREATE USER STREAMADM IDENTIFIED BY STREAMADM DEFAULT TABLESPACE STREAM_TBS QUOTA UNLIMITED ON STREAM_TBS;
SQL> GRANT CONNECT, RESOURCE, DBA, SELECT_CATALOG_ROLE TO STREAMADM;
SQL> BEGIN
     DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
                  GRANTEE=> 'STREAMADM',
                  GRANT_PRIVILEGES => TRUE);
     END;
     /

SQL> GRANT ALL ON DIRECTORY STREAM_SRC TO STREAMADM;

Create Database Link
SQL> CONNECT STREAMADM/STREAMADM
SQL> CREATE DATABASE LINK REPL.EXAMPLE.COM CONNECT TO STREAMADM IDENTIFIED BY STREAMADM USING 'REPL';
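
A quick sanity check that the link resolves and that global_names is satisfied (the query should return REPL.EXAMPLE.COM); the analogous check can be run at the destination once its link to PROD is created:

SQL> SELECT * FROM GLOBAL_NAME@REPL.EXAMPLE.COM;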

Connect to Destination Database

Prepare a directory for keeping the Datapump dump files copied from the source database
$ mkdir /tmp/dest_dp
$ sqlplus / as sysdba
SQL> CREATE DIRECTORY STREAM_DEST AS '/tmp/dest_dp';

Create Tablespace for STREAMADM user - This user will act as Streams Administrator
SQL> CREATE TABLESPACE STREAM_TBS DATAFILE '/<location>/stream_tbs01.dbf' SIZE 25M AUTOEXTEND ON;

Create STREAMADM user

SQL> CREATE USER STREAMADM IDENTIFIED BY STREAMADM DEFAULT TABLESPACE STREAM_TBS QUOTA UNLIMITED ON STREAM_TBS;
SQL> GRANT CONNECT, RESOURCE, DBA, SELECT_CATALOG_ROLE TO STREAMADM;
SQL> BEGIN
      DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
           GRANTEE=> 'STREAMADM',
           GRANT_PRIVILEGES=> TRUE);
      END;
      /
SQL> GRANT ALL ON DIRECTORY STREAM_DEST TO STREAMADM;

Create Database Link

SQL> CONNECT STREAMADM/STREAMADM
SQL> CREATE DATABASE LINK PROD.EXAMPLE.COM CONNECT TO STREAMADM IDENTIFIED BY STREAMADM USING 'PROD';

This completes setting up the environment for Oracle Streams. In Part 2, I will post details on setting up replication between these two databases.

Thursday, 12 May 2016

Instantiation - Oracle Streams Basics


Instantiation is perhaps the most challenging yet simplest thing I faced while trying to understand the basics of Oracle Streams. It came up when I had to do a project involving Streams replication; almost everywhere on the web I found plenty of tutorials explaining how to do one-way replication, bi-directional replication and so on, but none of them talked about setting up the environment.
Now, in my case, the client's environment had a terabyte-sized database and the requirement was to replicate a bunch of schemas out of the 50+ schemas in that database. So the requirement was:

a.  A source database with 14 schemas having several tables scattered over various tablespaces, 
b.  needs to be replicated to a target database, 
c.  source being Oracle 11gR2 Enterprise Edition and destination Oracle 11gR2 Standard Edition

If you have gone through the Streams documentation, you will know there are a lot of ways to set up replication between two databases, basically employing change capture, propagation and apply. These can be set up automatically or manually using the DBMS_STREAMS_ADM package, the Streams Replication Wizard, etc.
After some analysis, I came up with a plan to use the DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS procedure to set up the environment, but before setting up replication I needed a destination database. This is where all the confusion started, and below were the challenges in setting up the destination database:
  • Is RMAN cloning an option? Probably not, because I do not want the complete database at the destination
  • Are RMAN transportable tablespaces an option? I might miss schema objects in tablespaces that are not transported!
  • Is Datapump an option? Yes, this sounds fine. I can obviously choose the schemas I need

Thus I decided to go with Datapump; it has its own limitations that needed to be addressed, but it was the best option of all.

This is where instantiation comes into the picture. In a replication environment we cannot simply clone/copy a database and start the capture/apply processes; it is a must that the database objects are instantiated. When a database object is to be replicated, it is important to have a reference point in time from which the apply process will apply changes at the destination; this act of preparing objects for replication is called instantiation. If the database where changes to the source database objects will be applied is a different database than the source database, then the destination database must have a copy of these database objects.

In Oracle Streams, the following general steps instantiate a database object:
  • Prepare the database object for instantiation at the source database.
  • If a copy of the database object does not exist at the destination database, then create a database object physically at the destination database based on a database object at the source database. You can use export/import, transportable tablespaces, or RMAN to copy database objects for instantiation. If the database object already exists at the destination database, then this step is not necessary.
  • Set the instantiation system change number (SCN) for the database object at the destination database. An instantiation SCN instructs an apply process at the destination database to apply only changes that committed at the source database after the specified SCN.
Will post further on replication in upcoming posts.

References:
      Instantiation and Oracle Streams Replication 
[http://docs.oracle.com/cd/E11882_01/server.112/e10705/instant.htm]