Data Guard - RAC Physical Standby (12c)
Last modified: December 2016
by ShanNura
This article explains the step-by-step process of building a 12c R1 (12.1.0.2) physical standby configuration: a 2-node RAC primary replicating to a 2-node RAC physical standby.
This article assumes the primary site is already running a 2-node RAC database, and that the Grid Infrastructure and database software are already installed at the standby site.
Protection Mode
This article illustrates Maximum Performance mode. The decision on which protection mode to use comes down to how much data loss your company can afford in the event of a failover. (A quick query to check the current mode is shown after the list below.)

An Oracle Database Data Guard configuration can run in any one of the following modes, each described in more detail below.

  • Maximum Performance
    This is the default mode. This protection mode provides the highest level of data protection that is possible without affecting the performance of a primary database. This is accomplished by allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online log. Redo data is also written to one or more standby databases, but this is done asynchronously with respect to transaction commitment, so primary database performance is unaffected by delays in writing redo data to the standby database(s).

    This protection mode offers slightly less data protection than maximum availability mode and has minimal impact on primary database performance.
  • Maximum Availability
    This protection mode provides the highest level of data protection that is possible without compromising the availability of a primary database. Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to at least one synchronized standby database. If the primary database cannot write its redo stream to at least one synchronized standby database, it operates as if it were in maximum performance mode to preserve primary database availability until it is again able to write its redo stream to a synchronized standby database.
  • Maximum Protection
    This protection mode ensures that zero data loss occurs if a primary database fails. To provide this level of protection, the redo data needed to recover a transaction must be written to both the online redo log and to at least one synchronized standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down, rather than continue processing transactions, if it cannot write its redo stream to at least one synchronized standby database.

    Because this data protection mode prioritizes data protection over primary database availability, Oracle recommends that a minimum of two standby databases be used to protect a primary database that runs in maximum protection mode to prevent a single standby database failure from causing the primary database to shut down.
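As a quick sanity check, the current protection mode and level can be queried from v$database on the primary; with the default configuration used in this article, both should report MAXIMUM PERFORMANCE.
SQL>
SYS@phoenix1> select protection_mode, protection_level from v$database;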
Environment overview:
                     Primary                                   Standby
Operating System     Oracle Linux 7.3 (x64)                    Oracle Linux 7.3 (x64)
Clusterware          12c R1 Grid Infrastructure (12.1.0.2)     12c R1 Grid Infrastructure (12.1.0.2)
Cluster node(s)      ol-alpha, ol-beta                         ol-alpha-dr, ol-beta-dr
IPs                  172.168.190.101, 172.168.190.102          172.168.190.105, 172.168.190.106
SCAN                 shannuracluster-scan                      nurashancluster-scan
SCAN Listener port   1521                                      1521
VIPs                 ol-alpha-vip, ol-beta-vip                 ol-alpha-dr-vip, ol-beta-dr-vip
VIP IPs              172.168.190.201, 172.168.190.202          172.168.190.205, 172.168.190.206
DB_NAME              phoenix                                   phoenix
DB_UNIQUE_NAME       phoenix                                   unicorn
DB instance(s)       phoenix1, phoenix2                        unicorn1, unicorn2
DB Listener/port     ol-alpha-vip, ol-beta-vip (port 1521)     ol-alpha-dr-vip, ol-beta-dr-vip (port 1521)
DB storage           ASM                                       ASM
Diskgroups           +DATADG, +FRADG                           +DATADG, +FRADG
ORACLE_HOME          /u01/app/oracle/product/12.1.0/db_1       /u01/app/oracle/product/12.1.0/db_1
Preparing the Primary site
Ensure primary database is in archive log mode
SQL>
SYS@phoenix1> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG
SQL Prompt
$ORACLE_HOME/sqlplus/admin/glogin.sql
set sqlprompt "&&_USER@&&_CONNECT_IDENTIFIER> "
Note: If your RAC database isn't running in ARCHIVELOG mode, follow these steps:
$
[oracle@ol-alpha ~]$ srvctl stop database -database phoenix
[oracle@ol-alpha ~]$ srvctl start instance -database phoenix -node ol-alpha -startoption MOUNT
SQL>
SYS@phoenix1> alter database archivelog ;
$
[oracle@ol-alpha ~]$ srvctl stop instance -database phoenix -node ol-alpha -stopoption IMMEDIATE
[oracle@ol-alpha ~]$ srvctl start database -database phoenix
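Once the database is back up, a quick check confirms archiving is now enabled:
SQL>
SYS@phoenix1> archive log list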
Enable force logging
SQL>
SYS@phoenix1> select force_logging from v$database ;

FORCE_LOGGING
---------------------------------------
NO

SYS@phoenix1> alter database force logging;

Database altered.
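Re-running the query should now return YES:
SQL>
SYS@phoenix1> select force_logging from v$database;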
Modify Data Guard-related initialization parameters

db_name='phoenix'
db_unique_name='phoenix'
log_archive_config='DG_CONFIG=(phoenix,unicorn)'
log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST arch reopen=60 max_failure=0 mandatory valid_for=(ALL_LOGFILES,ALL_ROLES) db_unique_name=phoenix'
log_archive_dest_2='service=unicorn LGWR ASYNC NOAFFIRM max_failure=10 max_connections=2 reopen=400 valid_for=(online_logfiles,primary_role) db_unique_name=unicorn'
log_archive_format='%t_%s_%r.arc'
log_archive_max_processes=8
fal_server='unicorn'
db_file_name_convert='unicorn','phoenix','UNICORN','PHOENIX'
log_file_name_convert='unicorn','phoenix','UNICORN','PHOENIX'
remote_login_passwordfile='EXCLUSIVE'
standby_file_management='AUTO'
The DB_UNIQUE_NAME parameter was already set to the appropriate value during the initial creation of the RAC database, and LOG_ARCHIVE_DEST_STATE_n and REMOTE_LOGIN_PASSWORDFILE default to ENABLE and EXCLUSIVE respectively. So only the parameters below need to be changed here.
SQL>
SYS@phoenix1> alter system set log_archive_config='DG_CONFIG=(phoenix,unicorn)' scope=both sid='*';
SYS@phoenix1> alter system set log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST arch reopen=60 max_failure=0 mandatory valid_for=(ALL_LOGFILES,ALL_ROLES) db_unique_name=phoenix' scope=both sid='*';
SYS@phoenix1> alter system set log_archive_dest_2='service=unicorn LGWR ASYNC NOAFFIRM max_failure=10 max_connections=2 reopen=400 valid_for=(online_logfiles,primary_role) db_unique_name=unicorn' scope=both sid='*';
SYS@phoenix1> alter system set log_archive_format='%t_%s_%r.arc' scope=spfile sid='*';
SYS@phoenix1> alter system set log_archive_max_processes=8 scope=both sid='*';
SYS@phoenix1> alter system set fal_server='unicorn' scope=both sid='*';
SYS@phoenix1> alter system set db_file_name_convert='unicorn','phoenix','UNICORN','PHOENIX' scope=spfile sid='*';
SYS@phoenix1> alter system set log_file_name_convert='unicorn','phoenix','UNICORN','PHOENIX' scope=spfile sid='*';
SYS@phoenix1> alter system set remote_login_passwordfile='EXCLUSIVE' scope=spfile sid='*';
SYS@phoenix1> alter system set standby_file_management='AUTO' scope=both sid='*';
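To confirm the changes, the parameters can be queried back; note that those set with scope=spfile will only show their new values after the bounce below.
SQL>
SYS@phoenix1> show parameter log_archive_config
SYS@phoenix1> show parameter log_archive_dest_2
SYS@phoenix1> show parameter fal_server
SYS@phoenix1> show parameter standby_file_management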

Bounce the database for these initialization parameters to take effect.
$
[oracle@ol-alpha ~]$ srvctl stop database -database phoenix
[oracle@ol-alpha ~]$ srvctl start database -database phoenix
Create the Standby Redo Logs (SRLs) on Primary and Standby
There should be a minimum of (threads) * (groups per thread + 1) SRLs created on the standby database. This configuration has 2 threads with 2 groups per thread on the primary side, so a minimum of 6 SRLs needs to be created.
SQL>
SYS@phoenix1> select group#, thread#, bytes/1024/1024 as mb from v$log;

    GROUP#    THREAD#         MB
---------- ---------- ----------
         1          1         50
         2          1         50
         3          2         50
         4          2         50
SQL>
SYS@phoenix1> alter database add standby logfile group  5 ('+FRADG','+DATADG') size 50m;
SYS@phoenix1> alter database add standby logfile group  6 ('+FRADG','+DATADG') size 50m;
SYS@phoenix1> alter database add standby logfile group  7 ('+FRADG','+DATADG') size 50m;
SYS@phoenix1> alter database add standby logfile group  8 ('+FRADG','+DATADG') size 50m;
SYS@phoenix1> alter database add standby logfile group  9 ('+FRADG','+DATADG') size 50m;
SYS@phoenix1> alter database add standby logfile group 10 ('+FRADG','+DATADG') size 50m;
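The new standby redo logs can be verified through v$standby_log; the THREAD# column typically shows 0 for standby logs that have not yet been assigned to a thread.
SQL>
SYS@phoenix1> select group#, thread#, bytes/1024/1024 as mb from v$standby_log;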
Backup the primary database for standby
$
[oracle@ol-alpha ~]$ mkdir -p /u02/stage/backup
[oracle@ol-alpha ~]$ rman target / nocatalog
RMAN>
RMAN> run {
sql "alter system switch logfile";
allocate channel ch1 type disk format '/u02/stage/backup/primary_bkp_for_standby_%U';
backup database;
backup current controlfile for standby;
sql "alter system archive log current";
}
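Before copying the pieces across, the backup can be confirmed from the same RMAN session:
RMAN>
RMAN> list backup summary;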
Note: Copy these backup pieces to the same directory structure in the standby node1 (e.g. ol-alpha-dr:/u02/stage/backup)
$
[oracle@ol-alpha ~]$ scp -p /u02/stage/backup/* ol-alpha-dr:/u02/stage/backup
Create PFILE for standby
SQL>
SYS@phoenix1> create pfile='/u02/stage/initunicorn1.ora' from spfile ;
Note: Copy this file across to standby node1 perhaps under the same location (ol-alpha-dr:/u02/stage/).
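For example, it can be copied the same way as the backup pieces (assuming /u02/stage already exists on ol-alpha-dr):
$
[oracle@ol-alpha ~]$ scp -p /u02/stage/initunicorn1.ora ol-alpha-dr:/u02/stage/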
TNS entries
Check the database service (in this case, phoenix.shannura.com) registered with the listener.
$
[oracle@ol-alpha stage]$ lsnrctl status listener

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 28-FEB-2017 03:46:38

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                28-FEB-2017 03:35:38
Uptime                    0 days 0 hr. 11 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/ol-alpha/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.168.190.101)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.168.190.201)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "phoenix.shannura.com" has 1 instance(s).
  Instance "phoenix1", status READY, has 1 handler(s) for this service...
Service "phoenixXDB.shannura.com" has 1 instance(s).
  Instance "phoenix1", status READY, has 1 handler(s) for this service...
The command completed successfully

$ORACLE_HOME/network/admin/tnsnames.ora


#primary
PHOENIX =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = shannuracluster-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = phoenix.shannura.com)
    )
  )

#standby
unicorn =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = nurashancluster-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = unicorn.shannura.com)
    )
  )

Note: Ensure these entries are the same on all 4 nodes (including the standby site).
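A simple way to confirm name resolution and listener reachability from each site is tnsping; this only tests the connect descriptor and the listener, not the database itself.
$
[oracle@ol-alpha ~]$ tnsping unicorn
[oracle@ol-alpha-dr ~]$ tnsping phoenix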
Password File
In the primary site, the password file is located under the ASM.
$
[oracle@ol-alpha ~]$ srvctl config database -database phoenix
Database unique name: phoenix
Database name: phoenix
Oracle home: /u01/app/oracle/product/12.1.0/db_1
Oracle user: oracle
Spfile: +DATADG/PHOENIX/PARAMETERFILE/spfile.261.937514361
Password file: +DATADG/PHOENIX/PASSWORD/pwdphoenix.256.937513941
Domain: shannura.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: FRADG,DATADG
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: oper
Database instances: phoenix1,phoenix2
Configured nodes: ol-alpha,ol-beta
Database is administrator managed
ASMCMD>
[grid@ol-alpha ~]$ asmcmd -p
ASMCMD [+] > pwget --dbuniquename phoenix
+DATADG/PHOENIX/PASSWORD/pwdphoenix.256.937513941

ASMCMD [+] > pwcopy --dbuniquename phoenix +DATADG/PHOENIX/PASSWORD/pwdphoenix.256.937513941 /u02/stage/orapwphoenix
copying +DATADG/PHOENIX/PASSWORD/pwdphoenix.256.937513941 -> /u02/stage/orapwphoenix
ASMCMD-9456: password file should be located on an ASM disk group
The ASMCMD-9456 message can be ignored here; the password file is still copied to the OS location. Copy this password file (/u02/stage/orapwphoenix) across to both standby nodes under $ORACLE_HOME/dbs.
$
[oracle@ol-alpha ~]$ scp -p /u02/stage/orapwphoenix ol-alpha-dr:$ORACLE_HOME/dbs/orapwunicorn
[oracle@ol-alpha ~]$ scp -p /u02/stage/orapwphoenix ol-beta-dr:$ORACLE_HOME/dbs/orapwunicorn
Preparing the Standby site
Ensure RMAN backup pieces from primary node1 (ol-alpha) are available under ol-alpha-dr:/u02/stage/backup.
$
ol-alpha:/u02/stage/backup/*  ->  ol-alpha-dr:/u02/stage/backup/
$
[oracle@ol-alpha-dr backup]$ ls -lrt
total 1338340
-rw-r-----. 1 oracle oinstall 1332412416 Feb 28 05:33 primary_bkp_for_standby_01rtmen5_1_1
-rw-r-----. 1 oracle oinstall   19038208 Feb 28 05:33 primary_bkp_for_standby_02rtmepr_1_1
-rw-r-----. 1 oracle oinstall   19005440 Feb 28 05:33 primary_bkp_for_standby_03rtmeq6_1_1
Ensure the password file from primary (/u02/stage/orapwphoenix) is copied into the standby nodes as below:

ol-alpha-dr:$ORACLE_HOME/dbs/orapwunicorn
ol-beta-dr:$ORACLE_HOME/dbs/orapwunicorn
Ensure TNS entries from primary ($ORACLE_HOME/network/admin/tnsnames.ora) are copied into $ORACLE_HOME/network/admin on both standby nodes.

ol-alpha-dr:$ORACLE_HOME/network/admin/tnsnames.ora
ol-beta-dr:$ORACLE_HOME/network/admin/tnsnames.ora
Ensure the initialization parameter file generated earlier is copied across from primary node1.

ol-alpha:/u02/stage/initunicorn1.ora  ->  ol-alpha-dr:/u02/stage/initunicorn1.ora
Open the initialization parameter file and modify the entries marked with #modified below for the standby.

[oracle@ol-alpha-dr stage]$ cat initunicorn1.ora
*.audit_file_dest='/u01/app/oracle/admin/unicorn/adump' #modified
*.audit_trail='db'
*.cluster_database=true
*.compatible='12.1.0.2.0'
*.control_files='+DATADG','+FRADG' #modified
*.db_block_size=8192
*.db_create_file_dest='+DATADG'
*.db_domain='shannura.com'
*.db_file_name_convert='phoenix','unicorn','PHOENIX','UNICORN' #modified
*.db_name='phoenix'
*.db_unique_name='unicorn' #modified
*.db_recovery_file_dest='+FRADG'
*.db_recovery_file_dest_size=4785m
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=phoenixXDB)'
*.fal_server='phoenix' #modified
unicorn1.instance_number=1
unicorn2.instance_number=2
*.log_archive_config='DG_CONFIG=(phoenix,unicorn)'
*.log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST arch reopen=60 max_failure=0 mandatory valid_for=(ALL_LOGFILES,ALL_ROLES) db_unique_name=unicorn' #modified
*.log_archive_dest_2='service=phoenix LGWR ASYNC NOAFFIRM max_failure=10 max_connections=2 reopen=400 valid_for=(online_logfiles,primary_role) db_unique_name=phoenix' #modified
*.log_archive_format='%t_%s_%r.arc'
*.log_archive_max_processes=8
*.log_file_name_convert='phoenix','unicorn','PHOENIX','UNICORN' #modified
*.memory_target=1024m
*.open_cursors=300
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.standby_file_management='AUTO'
unicorn2.thread=2
unicorn1.thread=1
unicorn2.undo_tablespace='UNDOTBS2'
unicorn1.undo_tablespace='UNDOTBS1'
Create this directory structure
$
[oracle@ol-alpha-dr ~]$ mkdir -p /u01/app/oracle/admin/unicorn/adump
[oracle@ol-beta-dr ~]$ mkdir -p /u01/app/oracle/admin/unicorn/adump
Start the standby instance in NOMOUNT state
$
[oracle@ol-alpha-dr ~]$ export ORACLE_SID=unicorn1
[oracle@ol-alpha-dr ~]$ export ORACLE_BASE=/u01/app/oracle
[oracle@ol-alpha-dr ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
[oracle@ol-alpha-dr ~]$ export PATH=$PATH:$ORACLE_HOME/bin:.
$
[oracle@ol-alpha-dr stage]$ sqlplus / as sysdba
SQL>
SQL> startup nomount pfile='/u02/stage/initunicorn1.ora'
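At this point the instance is started but nothing is mounted; the status should show STARTED:
SQL>
SQL> select status from v$instance;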
Start the restore using RMAN backup
$
[oracle@ol-alpha-dr backup]$ rman target sys/oracle@phoenix auxiliary /

Recovery Manager: Release 12.1.0.2.0 - Production on Sun Feb 28 06:22:53 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PHOENIX (DBID=245662841)
connected to auxiliary database: PHOENIX (not mounted)

RMAN>
RMAN>
RMAN> duplicate target database for standby nofilenamecheck ;

Starting Duplicate Db at 28-FEB-2016 06:23:19
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=237 instance=phoenix1 device type=DISK

contents of Memory Script:
{
   restore clone standby controlfile;
}
executing Memory Script

Starting restore at 28-FEB-2016 06:23:20
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u02/stage/backup/primary_bkp_for_standby_03rtmeq6_1_1
channel ORA_AUX_DISK_1: piece handle=/u02/stage/backup/primary_bkp_for_standby_03rtmeq6_1_1 tag=TAG20160228T053357
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07
output file name=+DATADG/UNICORN/CONTROLFILE/current.256.947681141
output file name=+FRADG/UNICORN/CONTROLFILE/current.256.947681141
Finished restore at 28-FEB-2016 06:23:28

contents of Memory Script:
{
   sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database
RMAN-05529: WARNING: DB_FILE_NAME_CONVERT resulted in invalid ASM names; names changed to disk group only.

contents of Memory Script:
{
   set newname for tempfile  1 to
 "+DATADG";
   switch clone tempfile all;
   set newname for datafile  1 to
 "+DATADG";
   set newname for datafile  3 to
 "+DATADG";
   set newname for datafile  4 to
 "+DATADG";
   set newname for datafile  5 to
 "+DATADG";
   set newname for datafile  6 to
 "+DATADG";
   restore
   clone database
   ;
}
executing Memory Script

executing command: SET NEWNAME

renamed tempfile 1 to +DATADG in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 28-FEB-2016 06:23:35
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to +DATADG
channel ORA_AUX_DISK_1: restoring datafile 00003 to +DATADG
channel ORA_AUX_DISK_1: restoring datafile 00004 to +DATADG
channel ORA_AUX_DISK_1: restoring datafile 00005 to +DATADG
channel ORA_AUX_DISK_1: restoring datafile 00006 to +DATADG
channel ORA_AUX_DISK_1: reading from backup piece /u02/stage/backup/primary_bkp_for_standby_01rtmen5_1_1
channel ORA_AUX_DISK_1: piece handle=/u02/stage/backup/primary_bkp_for_standby_01rtmen5_1_1 tag=TAG20160228T053221
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:55
Finished restore at 28-FEB-2016 06:25:31

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=6 STAMP=937117532 file name=+DATADG/UNICORN/DATAFILE/system.277.937117417
datafile 3 switched to datafile copy
input datafile copy RECID=7 STAMP=937117532 file name=+DATADG/UNICORN/DATAFILE/sysaux.276.937117419
datafile 4 switched to datafile copy
input datafile copy RECID=8 STAMP=937117532 file name=+DATADG/UNICORN/DATAFILE/undotbs1.274.937117419
datafile 5 switched to datafile copy
input datafile copy RECID=9 STAMP=937117532 file name=+DATADG/UNICORN/DATAFILE/undotbs2.273.937117419
datafile 6 switched to datafile copy
input datafile copy RECID=10 STAMP=937117532 file name=+DATADG/UNICORN/DATAFILE/users.272.937117419
Finished Duplicate Db at 28-FEB-2016 06:26:57

RMAN>
Start the Managed Recovery Process
Start the Managed Recovery Process (MRP) on ol-alpha-dr and verify that log transport and log application are happening. The alert log is a quick and easy way to see whether log transport, gap resolution, and log application are working as expected. Start a tail -f on the alert logs on both standby nodes before starting the MRP.
SQL>
SYS@unicorn1> alter database recover managed standby database disconnect from session ;
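Alternatively, since standby redo logs are in place, real-time apply can be enabled instead of waiting for each archived log:
SQL>
SYS@unicorn1> alter database recover managed standby database using current logfile disconnect from session;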
Perform a couple of log switches on the primary database to initiate log transport.
SQL>
SYS@phoenix1> alter system archive log current;
SYS@phoenix1> /
SYS@phoenix1> /
If everything went fine, we should start seeing the archived logs being shipped to and applied at the standby site. You can use these SQL queries to verify the sync.

select * from v$archive_gap;
select process, client_process, sequence#, status from v$managed_standby;
select sequence#, first_time, next_time, applied from v$archived_log;
select archived_thread#, archived_seq#, applied_thread#, applied_seq# from v$archive_dest_status;
select thread#, max (sequence#) from v$log_history group by thread#;
select thread#, max (sequence#) from v$archived_log where APPLIED='YES' group by thread#;
Tidy up the standby
Create spfile from pfile
Open the pfile (/u02/stage/initunicorn1.ora) and update the control_files with the correct names.

*.control_files='+DATADG/UNICORN/CONTROLFILE/current.256.947681141','+FRADG/UNICORN/CONTROLFILE/current.256.947681141'
SQL>
SYS@unicorn1> create spfile='+DATADG' from pfile='/u02/stage/initunicorn1.ora';
After creating the spfile, create the following init.ora files under $ORACLE_HOME/dbs on both standby nodes, containing just the spfile entry, so that the instances can start with the newly created spfile.
$
[oracle@ol-alpha-dr dbs]$ cat initunicorn1.ora
spfile='+DATADG/UNICORN/PARAMETERFILE/spfile.277.937547735'

[oracle@ol-beta-dr dbs]$ cat initunicorn2.ora
spfile='+DATADG/UNICORN/PARAMETERFILE/spfile.277.937547735'
Add Standby database and instances to OCR
$

[oracle@ol-alpha-dr ~]$ srvctl add database -db unicorn -oraclehome /u01/app/oracle/product/12.1.0/db_1 \
                        -spfile +DATADG/UNICORN/PARAMETERFILE/spfile.277.937547735 -dbname phoenix -role PHYSICAL_STANDBY -startoption mount
[oracle@ol-alpha-dr ~]$ srvctl add instance -database unicorn -instance unicorn1 -node ol-alpha-dr
[oracle@ol-alpha-dr ~]$ srvctl add instance -database unicorn -instance unicorn2 -node ol-beta-dr
[oracle@ol-alpha-dr ~]$ srvctl modify database -database unicorn -pwfile /u01/app/oracle/product/12.1.0/db_1/dbs/orapwunicorn
[oracle@ol-alpha-dr ~]$ srvctl config database -database unicorn
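Once registered in the OCR, the standby can be managed through srvctl like any other RAC database; because the role is PHYSICAL_STANDBY with a MOUNT start option, srvctl starts the instances in mounted state. For example (after shutting down the manually started unicorn1 instance so that Clusterware manages it from here on):
$
[oracle@ol-alpha-dr ~]$ srvctl start database -database unicorn
[oracle@ol-alpha-dr ~]$ srvctl status database -database unicorn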
Please write your comment if this article was useful.

ShanNura

You might want to read this:
Using a Physical Standby Database for Read/Write Testing