Thursday, 4 August 2016

Deploy a Standard RAC using Oracle VM Templates - Part 2

Let us now start up our freshly created VMs. For those who want to review Part 1, here is the Quick Link to Part 1.

To start the VMs, log in to your OVM Manager dashboard, navigate to the server pool hosting the Oracle VMs, select Node 1 and click Start, as in the screenshot below.

Repeat the same step for Node 2.
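
If you prefer the command line over the GUI, the VMs can also be started from the Oracle VM Manager CLI. This is only a rough sketch, assuming Oracle VM Manager 3.2 or later (whose CLI listens on SSH port 10000) and that the VMs are registered as rac01 and rac02; substitute your own manager hostname and VM names:

# Connect to the Oracle VM Manager CLI (hypothetical manager hostname; port 10000 is the CLI's SSH port)
ssh admin@ovm-manager -p 10000
# At the OVM> prompt, start both nodes by their VM names:
OVM> start Vm name=rac01
OVM> start Vm name=rac02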

After both VMs have started, select Node 1 and click the Launch Console button as below:

Repeat the same step by selecting Node 2 and clicking the Launch Console button. This will open two new browser windows providing interactive access to both machines' consoles.

Upon starting up, the VM goes through its initial checks. Since this is the first startup of the Oracle VM, once these checks complete you will be asked a series of questions, as shown below. Keep pressing the ENTER key until you reach the question "Do you want to access 2-node RAC interview on console? [YES | NO]".

Type YES and hit the ENTER key.

The 2-node RAC interview starts by asking the question "Is this the first node in the cluster (YES/NO):".
Type YES and hit the ENTER key.

Now, on the second node, answer the same question by typing NO and hitting the ENTER key.

Now go back to Node 1's console. There you will find a prompt requesting IP addresses and name resolution details. My details were as below; fill in the details that suit your environment.

After you have entered the details, answer YES to the question "Do you want to configure this cluster?". If you need to review the details, just answer NO and the cursor will return to its initial position.

Important Note: It is very important that the IP address you assign as the SCAN IP has its subsequent IP addresses free to be allotted, as the SCAN may require up to 3 IP addresses. Otherwise, you might get an IP address conflict error during RAC configuration.
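
For reference, the details entered in this interview end up in /u01/racovm/netconfig.ini inside the VM. Below is roughly what mine looked like, reconstructed from the values reported later in the build log; the key names follow the sample netconfig.ini shipped with the template kit, so treat this as a sketch and verify the exact keys against your own copy before editing the file by hand:

NODE1=rac01
NODE1IP=192.168.1.151
NODE1PRIV=rac01-priv
NODE1PRIVIP=192.168.1.153
NODE1VIP=rac01-vip
NODE1VIPIP=192.168.1.155
NODE2=rac02
NODE2IP=192.168.1.152
NODE2PRIV=rac02-priv
NODE2PRIVIP=192.168.1.154
NODE2VIP=rac02-vip
NODE2VIPIP=192.168.1.156
PUBADAPTER=eth0
PUBMASK=255.255.255.0
PUBGW=192.168.1.254
PRIVADAPTER=eth1
PRIVMASK=255.255.255.0
RACCLUSTERNAME=orbrac
DOMAINNAME=example.com
SCANNAME=rac-scan
SCANIP=192.168.1.157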

Upon successful configuration, the VMs will boot up to the login screen. You may log in as "root" with the password "ovsroot" (the default password for the root user in OVM templates). You may now deploy RAC directly, or, if you are interested in changing your database name, instance name, or both, follow the steps below:

The default database name in the RAC VM is 'ORCL', which I changed to 'ORBRAC' by editing the file params.ini in the vi editor.
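
For clarity, a minimal sketch of that edit is below; DBNAME and SIDNAME are the parameter names as I recall them from the template's params.ini, so confirm them in your own copy:

[root@rac01 ~]# cd /u01/racovm
[root@rac01 racovm]# vi params.ini
# Change these two entries from the default ORCL to the new name:
DBNAME=ORBRAC
SIDNAME=ORBRAC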

Now, to deploy the cluster, just run the buildcluster.sh script as below:

[root@rac01 racovm]# ./buildcluster.sh
Are you sure you want to install RAC Cluster?
Do not run if software is already installed and/or running..  [yes|no]? yes


Answer "yes" to the question, hit ENTER, and relax while the script configures your cluster.
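
If you would rather not babysit the prompt, the kit also accepts a -s flag that skips this confirmation (you can see it reflected in the log below), so an unattended run is possible. A sketch, with the log file name taken from the end of my run (the number in the name may differ on yours):

[root@rac01 racovm]# nohup ./buildcluster.sh -s > buildcluster.out 2>&1 &
[root@rac01 racovm]# tail -f /u01/racovm/buildcluster1.log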

Below is the output from my machine:

Invoking on rac01 as root...
   Oracle DB/RAC 12c/11gR2 OneCommand (v2.1.5) for Oracle VM - (c) 2010-2016 Oracle Corporation
   Cksum: [753932160 565500 racovm.sh] at Thu Jul 14 12:37:52 EDT 2016
   Kernel: 3.8.13-118.2.2.el7uek.x86_64 (x86_64) [1 processor(s)] 2001 MB | xen | HVM
   Kit Version: 12.1.0.2.5 (RAC Mode, 2 nodes, Enterprise Edition)
   Step(s): buildcluster

INFO (node:rac01): Skipping confirmation, flag (-s) supplied on command line
2016-07-14 12:37:52:[buildcluster:Start:rac01] Building 12c RAC Cluster
2016-07-14 12:37:53:[setsshroot:Start:rac01] SSH Setup for the root user...

INFO (node:rac01): Running as root: /u01/racovm/ssh/setssh-Linux.sh -s -x -c NO -h nodelist    (setup on 2 node(s): rac01 rac02)
.............setssh-Linux.sh Done.
2016-07-14 12:38:05:[setsshroot:Done :rac01] SSH Setup for the root user completed successfully
2016-07-14 12:38:05:[setsshroot:Time :rac01] Completed successfully in 12 seconds (0h:00m:12s)
2016-07-14 12:38:05:[copykit:Start:rac01] Copy kit files to remote nodes
Kit files: buildsingle.sh buildcluster.sh netconfig.sh netconfig.ini common.sh cleanlocal.sh diskconfig.sh racovm.sh ssh params.ini doall.sh  netconfig GetSystemTimeZone.class kitversion.txt mcast

INFO (node:rac01): Copied kit to remote node rac02 as root user
2016-07-14 12:38:07:[copykit:Done :rac01] Copy kit files to (1) remote nodes
2016-07-14 12:38:07:[copykit:Time :rac01] Completed successfully in 2 seconds (0h:00m:02s)
2016-07-14 12:38:07:[usrsgrps:Start:rac01] Verifying Oracle users & groups on all nodes (create/modify mode)..
..
2016-07-14 12:38:09:[usrsgrpslocal:Start:rac01] Verifying Oracle users & groups (create/modify mode)..

INFO (node:rac01): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1031(dba)

2016-07-14 12:38:09:[usrsgrpslocal:Done :rac01] Verifying Oracle users & groups (create/modify mode)..
2016-07-14 12:38:09:[usrsgrpslocal:Time :rac01] Completed successfully in 0 seconds (0h:00m:00s)
2016-07-14 12:38:09:[usrsgrpslocal:Start:rac02] Verifying Oracle users & groups (create/modify mode)..

INFO (node:rac02): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1031(dba)

2016-07-14 12:38:09:[usrsgrpslocal:Done :rac02] Verifying Oracle users & groups (create/modify mode)..
2016-07-14 12:38:09:[usrsgrpslocal:Time :rac02] Completed successfully in 0 seconds (0h:00m:00s)

INFO (node:rac01): Setting passwords for the (oracle) user on all nodes; needed for passwordless SSH setup (setorapasswd)
Changing password for user oracle.
passwd: all authentication tokens updated successfully.
Changing password for user oracle.
passwd: all authentication tokens updated successfully.
2016-07-14 12:38:11:[usrsgrps:Done :rac01] Verifying Oracle users & groups on all nodes (create/modify mode)..
2016-07-14 12:38:11:[usrsgrps:Time :rac01] Completed successfully in 4 seconds (0h:00m:04s)

INFO (node:rac01): Parameters loaded from params.ini...
  Users & Groups:
   Role Separation: no  Running as: root
   OInstall    : oinstall       GID: 1000
   RAC Owner   : oracle         UID: 1101
    DB OSDBA   : dba            GID: 1031
    DB OSOPER  :                GID:
    DB OSKMDBA : dba            GID:
    DB OSDGDBA : dba            GID:
    DB OSBACKUP: dba            GID:
   Grid Owner  : oracle         UID: 1101
    GI OSDBA   : dba            GID: 1031
    GI OSOPER  :                GID:
    GI OSASM   : dba            GID: 1031
  Software Locations:
   Operating Mode: RAC
   Central Inventory: /u01/app/oraInventory
   Grid Home: /u01/app/12.1.0/grid  (Detected: 12c, Enterprise Edition)
   RAC Home : /u01/app/oracle/product/12.1.0/dbhome_1  (Detected: 12c, Enterprise Edition)
   RAC Base : /u01/app/oracle
   DB/RAC OVM kit : /u01/racovm
   Attach RAC Home: yes   GI Home: yes  Relink Homes: no   On OS Change: yes
   Addnode Copy: no
  Database & Storage:
   Database : yes (rac)  DBName: ORBRAC  SIDName: ORBRAC  DG: DATA   Listener Port: 1521
   Policy Managed: no
   DBExpress: no         DBExpress port: 5500
   Grid Management DB: no
   ASM Discovery String: /dev/xvd[c-g]1
   ASM diskgroup: DATA             Redundancy: EXTERNAL   Allocation Unit (au_size): 1
      Disks     : /dev/xvdc1 /dev/xvdd1 /dev/xvde1 /dev/xvdf1 /dev/xvdg1
   Persistent disknames: yes  Stamp: yes  Partition: yes  Align: yes  GPT: no Permissions: 660
   ACFS Filesystem: no

Network information loaded from netconfig.ini...
  Default Gateway: 192.168.1.254  Domain: example.com
  DNS:
  Public NIC : eth0  Mask: 255.255.255.0
  Private NIC: eth1  Mask: 255.255.255.0
  SCAN Name: rac-scan  SCAN IP: 192.168.1.157  Scan Port: 1521
  Cluster Name: orbrac
  Nodes & IP Addresses (2 of 2 nodes)
  Node  1: PubIP : 192.168.1.151   PubName : rac01
           VIPIP : 192.168.1.155   VIPName : rac01-vip
           PrivIP: 192.168.1.153   PrivName: rac01-priv
  Node  2: PubIP : 192.168.1.152   PubName : rac02
           VIPIP : 192.168.1.156   VIPName : rac02-vip
           PrivIP: 192.168.1.154   PrivName: rac02-priv
Running on rac01 as root...
   Oracle DB/RAC 12c/11gR2 OneCommand (v2.1.5) for Oracle VM - (c) 2010-2016 Oracle Corporation
   Cksum: [753932160 565500 racovm.sh] at Thu Jul 14 12:38:11 EDT 2016
   Kernel: 3.8.13-118.2.2.el7uek.x86_64 (x86_64) [1 processor(s)] 2001 MB | xen | HVM
   Kit Version: 12.1.0.2.5 (RAC Mode, 2 nodes, Enterprise Edition)
2016-07-14 12:38:11:[printparams:Time :rac01] Completed successfully in 0 seconds (0h:00m:00s)
2016-07-14 12:38:11:[setsshora:Start:rac01] SSH Setup for the Oracle user(s)...

INFO (node:rac01): Running as oracle: /u01/racovm/ssh/setssh-Linux.sh -s -x -c NO -h nodelist    (setup on 2 node(s): rac01 rac02)
.............setssh-Linux.sh Done.
2016-07-14 12:38:24:[setsshora:Done :rac01] SSH Setup for the oracle user completed successfully
2016-07-14 12:38:24:[setsshora:Time :rac01] Completed successfully in 13 seconds (0h:00m:13s)
2016-07-14 12:38:24:[diskconfig:Start:rac01] Storage Setup
2016-07-14 12:38:25:[diskconfig:Start:rac01] Running in configuration mode (local & remote nodes)
.
2016-07-14 12:38:25:[diskconfig:Disks:rac01] Verifying disks exist, are free and with no overlapping partitions (localhost)...
/dev/xvdc./dev/xvdd./dev/xvde./dev/xvdf./dev/xvdg....................OK
2016-07-14 12:38:28:[diskconfig:Disks:rac01] Checking contents of disks (localhost)...
/dev/xvdc1/dev/xvdd1/dev/xvde1/dev/xvdf1/dev/xvdg1.
2016-07-14 12:38:28:[diskconfig:Remote:rac01] Assuming persistent disk names on remote nodes with stamping (existence check)...
/dev/xvdc./dev/xvdd./dev/xvde./dev/xvdf./dev/xvdg...........OK
2016-07-14 12:38:34:[diskconfig:Remote:rac01] Verify disks are free on remote nodes...
rac02................OK
2016-07-14 12:38:56:[diskconfig:Disks:rac01] Checking contents of disks (remote nodes)...
rac02.....OK
2016-07-14 12:38:58:[diskconfig:Disks:rac01] Setting disk permissions for next startup (all nodes)...
.....OK
2016-07-14 12:39:00:[diskconfig:ClearPartTables:rac01] Clearing partition tables...
./dev/xvdc./dev/xvdd./dev/xvde./dev/xvdf./dev/xvdg.............OK
2016-07-14 12:39:06:[diskconfig:CreatePartitions:rac01] Creating 'msdos' partitions on disks (as needed)...
./dev/xvdc./dev/xvdd./dev/xvde./dev/xvdf./dev/xvdg.............OK
2016-07-14 12:39:15:[diskconfig:CleanPartitions:rac01] Cleaning new partitions...
./dev/xvdc1./dev/xvdd1./dev/xvde1./dev/xvdf1./dev/xvdg1...OK
2016-07-14 12:39:15:[diskconfig:Done :rac01] Done configuring and checking disks on all nodes
2016-07-14 12:39:15:[diskconfig:Done :rac01] Storage Setup
2016-07-14 12:39:15:[diskconfig:Time :rac01] Completed successfully in 51 seconds (0h:00m:51s)
2016-07-14 12:39:16:[clearremotelogs:Time :rac01] Completed successfully in 1 seconds (0h:00m:01s)
2016-07-14 12:39:16:[check:Start:rac01] Pre-install checks on all nodes
..

INFO (node:rac01): Check found that all (2) nodes have the following (19504744 19547013 19769480 20299023 20831110 20875898 20875943 21359755 21359758 21359761 21436941 21485069) patches applied to the Grid Infrastructure Home (/u01/app/12.1.0/grid), the following (19504744 19547013 19769480 19877336 20299023 20831110 20875898 20875943 21359755 21359758 21485069) patches applied to the RAC Home (/u01/app/oracle/product/12.1.0/dbhome_1)
.2016-07-14 12:39:20:[checklocal:Start:rac01] Pre-install checks
2016-07-14 12:39:20:[usrsgrpslocal:Start:rac01] Verifying Oracle users & groups (check only mode)..

INFO (node:rac01): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1031(dba)

2016-07-14 12:39:20:[usrsgrpslocal:Done :rac01] Verifying Oracle users & groups (check only mode)..
2016-07-14 12:39:20:[checklocal:Start:rac02] Pre-install checks
2016-07-14 12:39:21:[usrsgrpslocal:Start:rac02] Verifying Oracle users & groups (check only mode)..

INFO (node:rac02): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1031(dba)

2016-07-14 12:39:21:[usrsgrpslocal:Done :rac02] Verifying Oracle users & groups (check only mode)..

INFO (node:rac01): Node forming new RAC cluster; Kernel: 3.8.13-118.2.2.el7uek.x86_64 (x86_64) [1 processor(s)] 2001 MB | xen | HVM

INFO (node:rac02): Node forming new RAC cluster; Kernel: 3.8.13-118.2.2.el7uek.x86_64 (x86_64) [1 processor(s)] 2001 MB | xen | HVM

INFO (node:rac01): Running disk checks on all nodes, persistent disk names (/u01/racovm/diskconfig.sh -n 2 -D 1 -s)

INFO (node:rac02): Running network checks...
....2016-07-14 12:39:26:[diskconfig:Start:rac01] Running in dry-run mode (local & remote nodes, level 1), no stamping, partitioning or OS configuration files will be modified...(assuming persistent disk names)
.
2016-07-14 12:39:27:[diskconfig:Disks:rac01] Verifying disks exist, are free and with no overlapping partitions (localhost)...
/dev/xvdc./dev/xvdd./dev/xvde./dev/xvdf./dev/xvdg....................OK
2016-07-14 12:39:29:[diskconfig:Disks:rac01] Checking existence of automatically renamed disks (localhost)...
/dev/xvdc1./dev/xvdd1./dev/xvde1./dev/xvdf1./dev/xvdg1.
2016-07-14 12:39:29:[diskconfig:Disks:rac01] Checking permissions of disks (localhost)...
/dev/xvdc1/dev/xvdd1/dev/xvde1/dev/xvdf1/dev/xvdg1
2016-07-14 12:39:29:[diskconfig:Disks:rac01] Checking contents of disks (localhost)...
/dev/xvdc1/dev/xvdd1/dev/xvde1/dev/xvdf1/dev/xvdg1.
2016-07-14 12:39:29:[diskconfig:Remote:rac01] Assuming persistent disk names on remote nodes with NO stamping (existence check)...
rac02.......OK
2016-07-14 12:39:34:[diskconfig:Remote:rac01] Verify disks are free on remote nodes...
rac02............
INFO (node:rac01): Waiting for all checklocal operations to complete on all nodes (At 12:39:50, elapsed: 0h:00m:30s, 2) nodes remaining, all background pid(s): 6870 6938)...
.........OK
2016-07-14 12:39:58:[diskconfig:Remote:rac01] Checking existence of automatically renamed disks (remote nodes)...
rac02..
2016-07-14 12:40:04:[diskconfig:Remote:rac01] Checking permissions of disks (remote nodes)...
rac02.....
INFO (node:rac02): Check completed successfully
2016-07-14 12:40:05:[checklocal:Done :rac02] Pre-install checks
2016-07-14 12:40:05:[checklocal:Time :rac02] Completed successfully in 45 seconds (0h:00m:45s)

2016-07-14 12:40:07:[diskconfig:Disks:rac01] Checking contents of disks (remote nodes)...
rac02.....OK
2016-07-14 12:40:11:[diskconfig:Done :rac01] Dry-run (local & remote, level 1) completed successfully, most likely normal run will too
..
INFO (node:rac01): Running multicast check on 230.0.1.0 port 42040 for 2 nodes...

INFO (node:rac01): All nodes can multicast to all other nodes on interface eth1 multicast address 230.0.1.0 port 42040...

INFO (node:rac01): Running network checks...
...................
INFO (node:rac01): Check completed successfully
2016-07-14 12:40:51:[checklocal:Done :rac01] Pre-install checks
2016-07-14 12:40:51:[checklocal:Time :rac01] Completed successfully in 91 seconds (0h:01m:31s)

INFO (node:rac01): All checklocal operations completed on all (2) node(s) at: 12:40:53
2016-07-14 12:40:53:[check:Done :rac01] Pre-install checks on all nodes
2016-07-14 12:40:53:[check:Time :rac01] Completed successfully in 97 seconds (0h:01m:37s)
2016-07-14 12:40:53:[creategrid:Start:rac01] Creating 12c Grid Infrastructure
..
2016-07-14 12:40:55:[preparelocal:Start:rac01] Preparing node for Oracle installation

INFO (node:rac01): Resetting permissions on Oracle Homes... May take a while...
2016-07-14 12:40:55:[preparelocal:Start:rac02] Preparing node for Oracle installation

INFO (node:rac02): Resetting permissions on Oracle Homes... May take a while...

INFO (node:rac01): Attempting to adjust size of /dev/shm to (1661MB) to account for requested configuration (to disable this functionality set CLONE_TMPFS_SHM_RESIZE_NEVER=yes in params.ini); size of /dev/shm before the change (see output below):
Filesystem      Size  Used Avail Use% Mounted on
tmpfs          1001M     0 1001M   0% /dev/shm

INFO (node:rac01): Successfully modified tmpfs (shm) size in /etc/fstab due to CLONE_TMPFS_SHM_RESIZE_MODIFY_FSTAB=yes

WARNING (node:rac01): Successfully adjusted the size of /dev/shm, however, the system appears to run with only (2001MB), which is less memory than the minimum required to run 12c; may result in poor performance.

INFO (node:rac01): Configured size of /dev/shm is (see output below):
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.7G     0  1.7G   0% /dev/shm
2016-07-14 12:41:11:[preparelocal:Done :rac01] Preparing node for Oracle installation
2016-07-14 12:41:11:[preparelocal:Time :rac01] Completed successfully in 16 seconds (0h:00m:16s)

INFO (node:rac02): Attempting to adjust size of /dev/shm to (1661MB) to account for requested configuration (to disable this functionality set CLONE_TMPFS_SHM_RESIZE_NEVER=yes in params.ini); size of /dev/shm before the change (see output below):
Filesystem      Size  Used Avail Use% Mounted on
tmpfs          1001M     0 1001M   0% /dev/shm

INFO (node:rac02): Successfully modified tmpfs (shm) size in /etc/fstab due to CLONE_TMPFS_SHM_RESIZE_MODIFY_FSTAB=yes

WARNING (node:rac02): Successfully adjusted the size of /dev/shm, however, the system appears to run with only (2001MB), which is less memory than the minimum required to run 12c; may result in poor performance.

INFO (node:rac02): Configured size of /dev/shm is (see output below):
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.7G     0  1.7G   0% /dev/shm
2016-07-14 12:41:12:[preparelocal:Done :rac02] Preparing node for Oracle installation
2016-07-14 12:41:12:[preparelocal:Time :rac02] Completed successfully in 17 seconds (0h:00m:17s)
2016-07-14 12:41:12:[prepare:Time :rac01] Completed successfully in 19 seconds (0h:00m:19s)
....
2016-07-14 12:41:19:[giclonelocal:Start:rac01] Attaching 12c Grid Infrastructure Home

INFO (node:rac01): Running on: rac01 as root: /bin/chown -HRf oracle:oinstall /u01/app/12.1.0/grid 2>/dev/null
2016-07-14 12:41:19:[giattachlocal:Start:rac01] Attaching Grid Infratructure Home on node rac01

INFO (node:rac01): Running on: rac01 as oracle: /u01/app/12.1.0/grid/oui/bin/runInstaller -silent -ignoreSysPrereqs -waitforcompletion -attachHome INVENTORY_LOCATION='/u01/app/oraInventory' ORACLE_HOME='/u01/app/12.1.0/grid' ORACLE_HOME_NAME='OraGrid12c' ORACLE_BASE='/u01/app/oracle' 'CLUSTER_NODES={rac01,rac02}' LOCAL_NODE='rac01' CRS=TRUE -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
2016-07-14 12:41:23:[giclonelocal:Start:rac02] Attaching 12c Grid Infrastructure Home

INFO (node:rac02): Running on: rac02 as root: /bin/chown -HRf oracle:oinstall /u01/app/12.1.0/grid 2>/dev/null
2016-07-14 12:41:23:[giattachlocal:Start:rac02] Attaching Grid Infratructure Home on node rac02

INFO (node:rac02): Running on: rac02 as oracle: /u01/app/12.1.0/grid/oui/bin/runInstaller -silent -ignoreSysPrereqs -waitforcompletion -attachHome INVENTORY_LOCATION='/u01/app/oraInventory' ORACLE_HOME='/u01/app/12.1.0/grid' ORACLE_HOME_NAME='OraGrid12c' ORACLE_BASE='/u01/app/oracle' 'CLUSTER_NODES={rac01,rac02}' LOCAL_NODE='rac02' CRS=TRUE -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory pointer is located at /etc/oraInst.loc

INFO (node:rac01): Waiting for all giclonelocal operations to complete on all nodes (At 12:41:45, elapsed: 0h:00m:30s, 2) nodes remaining, all background pid(s): 9930 9935)...
Please execute the '/u01/app/oraInventory/orainstRoot.sh' script at the end of the session.
Please execute the '/u01/app/oraInventory/orainstRoot.sh' script at the end of the session.
'AttachHome' was successful.

INFO (node:rac02): Running on: rac02 as root: /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
2016-07-14 12:41:55:[giattachlocal:Done :rac02] Attaching Grid Infratructure Home on node rac02
2016-07-14 12:41:55:[giattachlocal:Time :rac02] Completed successfully in 32 seconds (0h:00m:32s)
'AttachHome' was successful.

INFO (node:rac01): Running on: rac01 as root: /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
2016-07-14 12:41:55:[giattachlocal:Done :rac01] Attaching Grid Infratructure Home on node rac01
2016-07-14 12:41:55:[giattachlocal:Time :rac01] Completed successfully in 36 seconds (0h:00m:36s)
2016-07-14 12:41:56:[girootlocal:Start:rac01] Running root.sh on Grid Infrastructure home
2016-07-14 12:41:56:[girootlocal:Start:rac02] Running root.sh on Grid Infrastructure home

INFO (node:rac01): Running on: rac01 as root: /u01/app/12.1.0/grid/root.sh -silent

INFO (node:rac02): Running on: rac02 as root: /u01/app/12.1.0/grid/root.sh -silent
Check /u01/app/12.1.0/grid/install/root_rac01_2016-07-14_12-41-56.log for the output of root script
Check /u01/app/12.1.0/grid/install/root_rac02_2016-07-14_12-41-56.log for the output of root script
2016-07-14 12:41:56:[girootlocal:Done :rac01] Running root.sh on Grid Infrastructure home
2016-07-14 12:41:56:[girootlocal:Done :rac02] Running root.sh on Grid Infrastructure home
2016-07-14 12:41:56:[girootlocal:Time :rac01] Completed successfully in 0 seconds (0h:00m:00s)

INFO (node:rac01): Resetting permissions on Oracle Home (/u01/app/12.1.0/grid)...
2016-07-14 12:41:56:[girootlocal:Time :rac02] Completed successfully in 0 seconds (0h:00m:00s)

INFO (node:rac02): Resetting permissions on Oracle Home (/u01/app/12.1.0/grid)...
2016-07-14 12:41:57:[giclonelocal:Done :rac01] Attaching 12c Grid Infrastructure Home
2016-07-14 12:41:57:[giclonelocal:Time :rac01] Completed successfully in 41 seconds (0h:00m:41s)
2016-07-14 12:41:57:[giclonelocal:Done :rac02] Attaching 12c Grid Infrastructure Home
2016-07-14 12:41:57:[giclonelocal:Time :rac02] Completed successfully in 39 seconds (0h:00m:39s)

INFO (node:rac01): All giclonelocal operations completed on all (2) node(s) at: 12:41:59
2016-07-14 12:41:59:[giclone:Time :rac01] Completed successfully in 47 seconds (0h:00m:47s)
....
2016-07-14 12:42:01:[girootcrslocal:Start:rac01] Running rootcrs.pl

INFO (node:rac01): rootcrs.pl log location is: /u01/app/12.1.0/grid/cfgtoollogs/crsconfig/rootcrs_rac01_<timestamp>.log

INFO (node:rac01): Running on: rac01 as root: /u01/app/12.1.0/grid/perl/bin/perl -I/u01/app/12.1.0/grid/perl/lib -I/u01/app/12.1.0/grid/crs/install /u01/app/12.1.0/grid/crs/install/rootcrs.pl -auto
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2016/07/14 12:42:04 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2016/07/14 12:43:25 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
2016/07/14 12:44:38 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'rac01'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac01'
CRS-2676: Start of 'ora.mdnsd' on 'rac01' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac01'
CRS-2676: Start of 'ora.gpnpd' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac01'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac01'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac01' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac01'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac01'
CRS-2676: Start of 'ora.diskmon' on 'rac01' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac01' succeeded

ASM created and started successfully.

Disk Group DATA created successfully.

CRS-2672: Attempting to start 'ora.crf' on 'rac01'
CRS-2672: Attempting to start 'ora.storage' on 'rac01'
CRS-2676: Start of 'ora.storage' on 'rac01' succeeded
CRS-2676: Start of 'ora.crf' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac01'
CRS-2676: Start of 'ora.crsd' on 'rac01' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 476fa12407c34ff7bf35a6400b7895e7.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   476fa12407c34ff7bf35a6400b7895e7 (/dev/xvdc1) [DATA]
Located 1 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac01'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac01'
CRS-2677: Stop of 'ora.crsd' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'rac01'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac01'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac01'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac01'
CRS-2677: Stop of 'ora.storage' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac01'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac01'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac01'
CRS-2673: Attempting to stop 'ora.asm' on 'rac01'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac01' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac01' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac01' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac01' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac01' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac01' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac01'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac01'
CRS-2677: Stop of 'ora.cssd' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac01'
CRS-2677: Stop of 'ora.gipcd' on 'rac01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac01'
CRS-2672: Attempting to start 'ora.evmd' on 'rac01'
CRS-2676: Start of 'ora.mdnsd' on 'rac01' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac01'
CRS-2676: Start of 'ora.gpnpd' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac01'
CRS-2676: Start of 'ora.gipcd' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac01'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac01'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac01'
CRS-2676: Start of 'ora.diskmon' on 'rac01' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac01'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac01'
CRS-2676: Start of 'ora.ctssd' on 'rac01' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac01'
CRS-2676: Start of 'ora.asm' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac01'
CRS-2676: Start of 'ora.storage' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac01'
CRS-2676: Start of 'ora.crf' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac01'
CRS-2676: Start of 'ora.crsd' on 'rac01' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: rac01
CRS-6016: Resource auto-start has completed for server rac01
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2016/07/14 12:52:14 CLSRSC-343: Successfully started Oracle Clusterware stack
CRS-2672: Attempting to start 'ora.asm' on 'rac01'
CRS-2676: Start of 'ora.asm' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac01'
CRS-2676: Start of 'ora.DATA.dg' on 'rac01' succeeded
Preparing packages...
cvuqdisk-1.0.9-1.x86_64
2016/07/14 12:54:00 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

INFO (node:rac01): Shutting down Tracefile Analyzer (TFA) due to (CLONE_TRACEFILE_ANALYZER=no) set or defaulted to in params.ini
.Shutting down TFA
.....2016-07-14 12:55:05:[girootcrslocal:Done :rac01] Running rootcrs.pl
2016-07-14 12:55:05:[girootcrslocal:Time :rac01] Completed successfully in 784 seconds (0h:13m:04s)
2016-07-14 12:55:27:[girootcrslocal:Start:rac02] Running rootcrs.pl

INFO (node:rac02): rootcrs.pl log location is: /u01/app/12.1.0/grid/cfgtoollogs/crsconfig/rootcrs_rac02_<timestamp>.log

INFO (node:rac02): Running on: rac02 as root: /u01/app/12.1.0/grid/perl/bin/perl -I/u01/app/12.1.0/grid/perl/lib -I/u01/app/12.1.0/grid/crs/install /u01/app/12.1.0/grid/crs/install/rootcrs.pl -auto
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2016/07/14 12:55:30 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

INFO (node:rac01): Waiting for all girootcrslocal operations to complete on all nodes (At 12:55:56, elapsed: 0h:00m:30s, 1) node remaining, all background pid(s): 29408)...
.2016/07/14 12:56:41 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
OLR initialization - successful
...2016/07/14 12:58:04 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
.CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
.
INFO (node:rac01): Waiting for all girootcrslocal operations to complete on all nodes (At 12:58:58, elapsed: 0h:03m:32s, 1) node remaining, all background pid(s): 29408)...
..CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac02'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac02'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac02' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac02' has completed
CRS-4133: Oracle High Availability Services has been stopped.
....
INFO (node:rac01): Waiting for all girootcrslocal operations to complete on all nodes (At 13:01:59, elapsed: 0h:06m:33s, 1) node remaining, all background pid(s): 29408)...
..CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac02'
CRS-2672: Attempting to start 'ora.evmd' on 'rac02'
CRS-2676: Start of 'ora.mdnsd' on 'rac02' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac02' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac02'
CRS-2676: Start of 'ora.gpnpd' on 'rac02' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac02'
CRS-2676: Start of 'ora.gipcd' on 'rac02' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac02'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac02'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac02'
CRS-2676: Start of 'ora.diskmon' on 'rac02' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac02' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac02'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac02'
CRS-2676: Start of 'ora.ctssd' on 'rac02' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac02' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac02'
CRS-2676: Start of 'ora.asm' on 'rac02' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac02'
CRS-2676: Start of 'ora.storage' on 'rac02' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac02'
CRS-2676: Start of 'ora.crf' on 'rac02' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac02'
CRS-2676: Start of 'ora.crsd' on 'rac02' succeeded
CRS-6017: Processing resource auto-start for servers: rac02
CRS-2672: Attempting to start 'ora.net1.network' on 'rac02'
CRS-2676: Start of 'ora.net1.network' on 'rac02' succeeded
CRS-2672: Attempting to start 'ora.ons' on 'rac02'
CRS-2676: Start of 'ora.ons' on 'rac02' succeeded
CRS-6016: Resource auto-start has completed for server rac02
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2016/07/14 13:03:17 CLSRSC-343: Successfully started Oracle Clusterware stack
.Preparing packages...
cvuqdisk-1.0.9-1.x86_64
2016/07/14 13:03:54 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

INFO (node:rac02): Shutting down Tracefile Analyzer (TFA) due to (CLONE_TRACEFILE_ANALYZER=no) set or defaulted to in params.ini
.Shutting down TFA
.......2016-07-14 13:04:56:[girootcrslocal:Done :rac02] Running rootcrs.pl
2016-07-14 13:04:56:[girootcrslocal:Time :rac02] Completed successfully in 569 seconds (0h:09m:29s)
.
INFO (node:rac01): Waiting for all girootcrslocal operations to complete on all nodes (At 13:05:00, elapsed: 0h:09m:34s, 1) node remaining, all background pid(s): 29408)...

INFO (node:rac01): All girootcrslocal operations completed on all (2) node(s) at: 13:05:28
2016-07-14 13:05:28:[girootcrs:Time :rac01] Completed successfully in 1409 seconds (0h:23m:29s)
2016-07-14 13:05:28:[giassist:Start:rac01] Running RAC Home assistants (netca, asmca)

INFO (node:rac01): Creating the node Listener using NETCA...

INFO (node:rac01): Running on: rac01 as oracle: export ORACLE_BASE=/u01/app/oracle; export ORACLE_HOME=/u01/app/12.1.0/grid; /u01/app/12.1.0/grid/bin/netca /orahome /u01/app/12.1.0/grid /instype typical /inscomp client,oraclenet,javavm,server,ano /insprtcl tcp /cfg local /authadp NO_VALUE /responseFile /u01/app/12.1.0/grid/network/install/netca_typ.rsp /silent /orahnam OraGrid12c

Parsing command line arguments:
    Parameter "orahome" = /u01/app/12.1.0/grid
    Parameter "instype" = typical
    Parameter "inscomp" = client,oraclenet,javavm,server,ano
    Parameter "insprtcl" = tcp
    Parameter "cfg" = local
    Parameter "authadp" = NO_VALUE
    Parameter "responsefile" = /u01/app/12.1.0/grid/network/install/netca_typ.rsp
    Parameter "silent" = true
    Parameter "orahnam" = OraGrid12c
Done parsing command line arguments.
Oracle Net Services Configuration:
Profile configuration complete.
Profile configuration complete.
Listener "LISTENER" already exists.
Oracle Net Services configuration successful. The exit code is 0

INFO (node:rac01): Running on: rac01 as oracle: export ORACLE_BASE=/u01/app/oracle; export ORACLE_HOME=/u01/app/12.1.0/grid; /u01/app/12.1.0/grid/bin/asmca -silent -postConfigureASM
PostConfiguration completed successfully

2016-07-14 13:06:10:[creatediskgroups:Start:rac01] Creating additional diskgroups

INFO (node:rac01): Not creating any additional diskgroups since DBASMGROUPNAME_RECO & RACASMGROUPNAME_RECO and DBASMGROUPNAME_EXTRA & RACASMGROUPNAME_EXTRA are unset in params.ini
2016-07-14 13:06:10:[creatediskgroups:Done :rac01] Creating additional diskgroups
2016-07-14 13:06:10:[creatediskgroups:Time :rac01] Completed successfully in 1 seconds (0h:00m:01s)
2016-07-14 13:06:10:[giassist:Done :rac01] Running RAC Home assistants (netca, asmca)
2016-07-14 13:06:10:[giassist:Time :rac01] Completed successfully in 42 seconds (0h:00m:42s)
2016-07-14 13:06:10:[creategrid:Done :rac01] Creating 12c Grid Infrastructure
2016-07-14 13:06:10:[creategrid:Time :rac01] Completed successfully in 1517 seconds (0h:25m:17s)
2016-07-14 13:06:10:[cvupostcrs:Start:rac01] Cluster Verification Utility (CVU), stage: Post crsinst

INFO (node:rac01): Running on: rac01 as oracle: export ORACLE_BASE=/u01/app/oracle; export ORACLE_HOME=/u01/app/12.1.0/grid; /u01/app/12.1.0/grid/bin/cluvfy stage -post crsinst -n rac01,rac02

Performing post-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "rac01"


Checking user equivalence...
User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) rac01,rac02
TCP connectivity check passed for subnet "192.168.1.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed.

ERROR:

PRVG-11073 : Subnet on interface "eth0" of node "rac01" is overlapping with the subnet on interface "eth1". IP address range ["192.168.1.0"-"192.168.1.255"] is overlapping with IP address range ["192.168.1.0"-"192.168.1.255"].
PRVG-11073 : Subnet on interface "eth0" of node "rac02" is overlapping with the subnet on interface "eth1". IP address range ["192.168.1.0"-"192.168.1.255"] is overlapping with IP address range ["192.168.1.0"-"192.168.1.255"].

Node connectivity check failed

Checking multicast communication...

Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed.

Check of multicast communication passed.

Checking whether the ASM filter driver is active and consistent on all nodes
ASM filter driver library is not installed on any of the cluster nodes.
ASM filter driver configuration was found consistent across all the cluster nodes.
Time zone consistency check passed

Checking Cluster manager integrity...


Checking CSS daemon...
Oracle Cluster Synchronization Services appear to be online.

Cluster manager integrity check passed


UDev attributes check for OCR locations started...
UDev attributes check passed for OCR locations


UDev attributes check for Voting Disk locations started...
UDev attributes check passed for Voting Disk locations

Default user file creation mask check passed

Checking cluster integrity...


Cluster integrity check passed


Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations


Checking daemon liveness...
Liveness check passed for "CRS daemon"

Checking OCR config file "/etc/oracle/ocr.loc"...

OCR config file "/etc/oracle/ocr.loc" check successful


Disk group for ocr location "+DATA/orbrac/OCRFILE/registry.255.917182147" is available on all the nodes


NOTE:
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.

OCR integrity check passed

Checking CRS integrity...

Clusterware version consistency passed.

CRS integrity check passed

Checking node application existence...

Checking existence of VIP node application (required)
VIP node application check passed

Checking existence of NETWORK node application (required)
NETWORK node application check passed

Checking existence of ONS node application (optional)
ONS node application check passed


Checking Single Client Access Name (SCAN)...

Checking TCP connectivity to SCAN listeners...
TCP connectivity to SCAN listeners exists on all cluster nodes

Checking name resolution setup for "rac-scan"...

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


ERROR:
PRVG-1101 : SCAN name "rac-scan" failed to resolve

ERROR:
PRVF-4657 : Name resolution setup check for "rac-scan" (IP address: 192.168.1.157) failed

ERROR:
PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan"

Checking SCAN IP addresses...
Check of SCAN IP addresses passed

Verification of SCAN VIP and listener setup failed

Checking OLR integrity...
Check of existence of OLR configuration file "/etc/oracle/olr.loc" passed
Check of attributes of OLR configuration file "/etc/oracle/olr.loc" passed

WARNING:
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.

OLR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed

User "oracle" is not part of "root" group. Check passed
Oracle Clusterware is installed on all nodes.
CTSS resource check passed
Query of CTSS for time offset passed

CTSS is in Observer state. Switching over to clock synchronization checks using NTP


Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP configuration file "/etc/ntp.conf" existence check passed
PRVF-7590 : "ntpd" is not running on node "rac02"
PRVF-7590 : "ntpd" is not running on node "rac01"
PRVG-1024 : The NTP Daemon or Service was not running on any of the cluster nodes.
PRVF-5415 : Check to see if NTP daemon or service is running failed
Clock synchronization check using Network Time Protocol(NTP) failed


PRVF-9652 : Cluster Time Synchronization Services check failed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Post-check for cluster services setup was unsuccessful on all the nodes.

INFO (node:rac01): Since no DNS Server was specified in DNSIP (in netconfig.ini) it is safe to ignore any possible errors above PRVG-1101, PRVF-4664 & PRVF-4657 generated by CVU. See Note:887471.1 for details

2016-07-14 13:08:57:[cvupostcrs:Done :rac01] Cluster Verification Utility (CVU), stage: Post crsinst
2016-07-14 13:08:57:[cvupostcrs:Time :rac01] Completed successfully in 167 seconds (0h:02m:47s)
2016-07-14 13:08:57:[racclone:Start:rac01] Cloning 12c RAC Home on all nodes
..
2016-07-14 13:09:05:[racclonelocal:Start:rac01] Attaching 12c RAC Home
2016-07-14 13:09:05:[racclonelocal:Start:rac02] Attaching 12c RAC Home

INFO (node:rac01): Running on: rac01 as root: /bin/chown -HRf oracle:oinstall /u01/app/oracle/product/12.1.0/dbhome_1 2>/dev/null

INFO (node:rac02): Running on: rac02 as root: /bin/chown -HRf oracle:oinstall /u01/app/oracle/product/12.1.0/dbhome_1 2>/dev/null
2016-07-14 13:09:14:[racattachlocal:Start:rac01] Attaching RAC Home on node rac01

INFO (node:rac01): Running on: rac01 as oracle: /u01/app/oracle/product/12.1.0/dbhome_1/oui/bin/runInstaller -silent -ignoreSysPrereqs -waitforcompletion -attachHome  ORACLE_HOME='/u01/app/oracle/product/12.1.0/dbhome_1' ORACLE_HOME_NAME='OraRAC12c' ORACLE_BASE='/u01/app/oracle' 'CLUSTER_NODES={rac01,rac02}' LOCAL_NODE='rac01' -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4054 MB    Passed
2016-07-14 13:09:15:[racattachlocal:Start:rac02] Attaching RAC Home on node rac02

INFO (node:rac02): Running on: rac02 as oracle: /u01/app/oracle/product/12.1.0/dbhome_1/oui/bin/runInstaller -silent -ignoreSysPrereqs -waitforcompletion -attachHome  ORACLE_HOME='/u01/app/oracle/product/12.1.0/dbhome_1' ORACLE_HOME_NAME='OraRAC12c' ORACLE_BASE='/u01/app/oracle' 'CLUSTER_NODES={rac01,rac02}' LOCAL_NODE='rac02' -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4089 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory pointer is located at /etc/oraInst.loc

INFO (node:rac01): Waiting for all racclonelocal operations to complete on all nodes (At 13:09:28, elapsed: 0h:00m:30s, 2) nodes remaining, all background pid(s): 3008 3013)...
'AttachHome' was successful.
'AttachHome' was successful.
2016-07-14 13:09:50:[racattachlocal:Done :rac02] Attaching RAC Home on node rac02
2016-07-14 13:09:50:[racattachlocal:Time :rac02] Completed successfully in 35 seconds (0h:00m:35s)
2016-07-14 13:09:50:[racrootlocal:Start:rac02] Running root.sh on RAC Home
2016-07-14 13:09:50:[racattachlocal:Done :rac01] Attaching RAC Home on node rac01
2016-07-14 13:09:50:[racattachlocal:Time :rac01] Completed successfully in 36 seconds (0h:00m:36s)
2016-07-14 13:09:50:[racrootlocal:Start:rac01] Running root.sh on RAC Home
Check /u01/app/oracle/product/12.1.0/dbhome_1/install/root_rac02_2016-07-14_13-09-50.log for the output of root script
Check /u01/app/oracle/product/12.1.0/dbhome_1/install/root_rac01_2016-07-14_13-09-50.log for the output of root script
2016-07-14 13:09:51:[racrootlocal:Done :rac02] Running root.sh on RAC Home
2016-07-14 13:09:51:[racrootlocal:Time :rac02] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac02): Resetting permissions on Oracle Home (/u01/app/oracle/product/12.1.0/dbhome_1)...
2016-07-14 13:09:51:[racrootlocal:Done :rac01] Running root.sh on RAC Home
2016-07-14 13:09:51:[racrootlocal:Time :rac01] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac01): Resetting permissions on Oracle Home (/u01/app/oracle/product/12.1.0/dbhome_1)...
2016-07-14 13:09:51:[racclonelocal:Done :rac02] Attaching 12c RAC Home
2016-07-14 13:09:51:[racclonelocal:Time :rac02] Completed successfully in 51 seconds (0h:00m:51s)
2016-07-14 13:09:51:[racclonelocal:Done :rac01] Attaching 12c RAC Home
2016-07-14 13:09:51:[racclonelocal:Time :rac01] Completed successfully in 51 seconds (0h:00m:51s)

INFO (node:rac01): All racclonelocal operations completed on all (2) node(s) at: 13:09:53
2016-07-14 13:09:53:[racclone:Done :rac01] Cloning 12c RAC Home on all nodes
2016-07-14 13:09:53:[racclone:Time :rac01] Completed successfully in 56 seconds (0h:00m:56s)
2016-07-14 13:09:53:[createdb:Start:rac01] Creating 12c RAC Database (ORBRAC) & Instances
....2016-07-14 13:10:03:[adjustmemlocal:Start:rac01] Adjusting memory settings

INFO (node:rac01): Not attempting to adjust size of /dev/shm since the available space (1026MB) exceeds the calculated needed space

INFO (node:rac01): Configured size of /dev/shm is (see output below):
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.7G  635M  1.1G  39% /dev/shm
2016-07-14 13:10:03:[adjustmemlocal:Done :rac01] Adjusting memory settings
2016-07-14 13:10:03:[adjustmemlocal:Time :rac01] Completed successfully in 0 seconds (0h:00m:00s)
2016-07-14 13:10:04:[adjustmemlocal:Start:rac02] Adjusting memory settings

INFO (node:rac02): Not attempting to adjust size of /dev/shm since the available space (1026MB) exceeds the calculated needed space

INFO (node:rac02): Configured size of /dev/shm is (see output below):
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.7G  635M  1.1G  39% /dev/shm
2016-07-14 13:10:04:[adjustmemlocal:Done :rac02] Adjusting memory settings
2016-07-14 13:10:04:[adjustmemlocal:Time :rac02] Completed successfully in 0 seconds (0h:00m:00s)

INFO (node:rac02): Not attempting to adjust size of /dev/shm since the available space (1026MB) exceeds the calculated needed space

INFO (node:rac02): Configured size of /dev/shm is (see output below):
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.7G  635M  1.1G  39% /dev/shm
2016-07-14 13:10:04:[adjustmemlocal:Done :rac02] Adjusting memory settings
2016-07-14 13:10:04:[adjustmemlocal:Time :rac02] Completed successfully in 0 seconds (0h:00m:00s)

INFO (node:rac01): Creating Admin Managed database (ORBRAC) on (2) cluster members: rac01,rac02

INFO (node:rac01): Running on: rac01 as oracle: export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1; export ORACLE_BASE=/u01/app/oracle; export SKIP_CVU_CHECK=true; /u01/app/oracle/product/12.1.0/dbhome_1/bin/dbca -silent -createDatabase -adminManaged -emConfiguration NONE -nodelist 'rac01,rac02' -templateName 'General_Purpose.dbc' -storageType ASM -diskGroupName 'DATA' -datafileJarLocation '/u01/app/oracle/product/12.1.0/dbhome_1/assistants/dbca/templates' -characterset 'WE8MSWIN1252' -sampleSchema false -oratabLocation /etc/oratab  -runCVUChecks false -continueOnNonFatalErrors true -gdbName 'ORBRAC' -sid 'ORBRAC'
Copying database files
1% complete
3% complete
30% complete
Creating and starting Oracle instance
32% complete
36% complete
40% complete
44% complete
45% complete
48% complete
50% complete
Creating cluster database views
52% complete
70% complete
Completing Database Creation
73% complete
76% complete
85% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/ORBRAC/ORBRAC.log" for further details.

INFO (node:rac01): Running on: rac01 as oracle: export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1; /u01/app/oracle/product/12.1.0/dbhome_1/bin/srvctl status database -d ORBRAC
Instance ORBRAC1 is running on node rac01
Instance ORBRAC2 is running on node rac02
.
INFO (node:rac01): DBCA post creation scripts are running in background (due to DBCA_POST_SQL_BG=yes) as pid: 8001... (continuing execution)
See log at: /u01/racovm/buildcluster1_createdbpostsql_2016Jul14_13_56_07.log

INFO (node:rac01): Setup oracle's environment

INFO (node:rac02): Setup oracle's environment
2016-07-14 13:56:25:[createdb:Done :rac01] Creating 12c RAC Database (ORBRAC) & Instances
2016-07-14 13:56:25:[createdb:Time :rac01] Completed successfully in 2792 seconds (0h:46m:32s)

INFO (node:rac02): Disabling passwordless ssh access for root user (from remote nodes)
2016-07-14 13:56:29:[rmsshrootlocal:Time :rac02] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac01): Disabling passwordless ssh access for root user (from remote nodes)
2016-07-14 13:56:32:[rmsshrootlocal:Time :rac01] Completed successfully in 1 seconds (0h:00m:01s)
2016-07-14 13:56:32:[rmsshroot:Time :rac01] Completed successfully in 7 seconds (0h:00m:07s)

INFO (node:rac01): Current cluster state (13:56:32)...

INFO (node:rac01): Running on: rac01 as root: /u01/app/12.1.0/grid/bin/olsnodes -n -s -t
rac01   1       Active  Hub     Unpinned
rac02   2       Active  Hub     Unpinned
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
Oracle Clusterware version on node [rac01] is [12.1.0.2.0]
CRS Administrator List: oracle root
Cluster is running in "standard" mode
ASM Flex mode disabled

INFO (node:rac01): Running on: rac01 as oracle: export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1; /u01/app/oracle/product/12.1.0/dbhome_1/bin/srvctl status database -d ORBRAC
Instance ORBRAC1 is running on node rac01
Instance ORBRAC2 is running on node rac02

INFO (node:rac01): Running on: rac01 as root: /u01/app/12.1.0/grid/bin/crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
ora.asm
               ONLINE  ONLINE       rac01                    Started,STABLE
               ONLINE  ONLINE       rac02                    Started,STABLE
ora.net1.network
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
ora.ons
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac01                    STABLE
ora.cvu
      1        ONLINE  ONLINE       rac01                    STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.orbrac.db
      1        ONLINE  ONLINE       rac01                    Open,STABLE
      2        ONLINE  ONLINE       rac02                    Open,STABLE
ora.rac01.vip
      1        ONLINE  ONLINE       rac01                    STABLE
ora.rac02.vip
      1        ONLINE  ONLINE       rac02                    STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac01                    STABLE
--------------------------------------------------------------------------------

INFO (node:rac01): For an explanation on resources in OFFLINE state, see Note:1068835.1
2016-07-14 13:56:49:[clusterstate:Time :rac01] Completed successfully in 17 seconds (0h:00m:17s)
2016-07-14 13:56:50:[buildcluster:Done :rac01] Building 12c RAC Cluster
2016-07-14 13:56:50:[buildcluster:Time :rac01] Completed successfully in 4738 seconds (1h:18m:58s)

INFO (node:rac01): This entire build was logged in logfile: /u01/racovm/buildcluster1.log
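
At any later point you can repeat the same status checks the build ran at the end, using the paths from this deployment:

[root@rac01 ~]# /u01/app/12.1.0/grid/bin/crsctl status resource -t
[root@rac01 ~]# su - oracle
[oracle@rac01 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
[oracle@rac01 ~]$ $ORACLE_HOME/bin/srvctl status database -d ORBRAC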


That completes it! Cheers. :)
