12c Server Pools in action : Instance and service failover to server pools

In the previous post we converted the orcl database to the server pool OLTP_SP, which is running on three nodes (rac01, rac03, rac04), leaving rac02 as a free server.

Okay, let's have some discussion before proceeding.

1. In pre-11gR2 (admin-managed) configurations, if an instance crashed, did it fail over to another node?

No. There is no free node available: instances run on all nodes and are bound to start on a specific node. So there is no instance failover; instead, the service fails over to an available node.

Keep in mind that with server pools we have the flexibility of failing over instances along with their services, provided we keep our databases policy managed, where Oracle Grid Infrastructure keeps the instances running on the server pool they are assigned to. That is why, when you assign the database to a server pool (like below), you specify only the server pool name rather than how many instances, which nodes, and so on.

Ex: srvctl modify database -d orcl -g OLTP_SP

Note: OLTP_SP is a server pool running with 3 nodes, so although I have four nodes, my database will run only 3 instances on those three nodes.

Read on now,

If we kill pmon or crash node rac01, the Free pool node rac02 will be assigned to server pool OLTP_SP. Let's see it in action.

Configuration check,

(BTW, many thanks to Deepak (again) for sending these wonderful screenshots.)

image

Let's kill the pmon on rac01.
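A minimal sketch of how one might do this at the OS level (the exact pmon process name depends on the policy-managed instance name, for example orcl_4 in these screenshots):

[oracle@rac01]$ ps -ef | grep pmon | grep -i orcl
[oracle@rac01]$ kill -9 <pid_of_pmon_from_above>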

 

image

Now monitor the server pool and read the comments added to the screenshots.
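A hedged sketch of the kind of commands one could use to watch the pool reconfiguration (assuming the 12c verbose srvctl syntax):

[grid@rac01]$ srvctl status srvpool -detail
[grid@rac01]$ srvctl status database -d orcl -v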

As shown above, the pmon has been killed and orcl_4 is no longer running on rac01. Initially the Free pool had 1 server; after a while the Free pool gave away that server (you can see 0 at the end) to the server pool, and finally you can see the three instances are back, but this time on rac02, rac03, rac04.

image

As we have seen, on an instance crash the instance fails over to a node from the Free pool. In the next post let's see service failover within server pools.

Thanks for reading,

Sureshgandhi

12c Database : Convert Admin Managed Database to Policy Managed Database

From 11gR2 onwards, a RAC database can be managed in two modes. The first is administrator managed, i.e. instance allocation/deletion, service allocation/deletion and evictions are handled by the administrator.

The second is policy managed (also from 11gR2 onwards), where instance allocation to hosts is based on the server pool cardinality, and the services run according to their configuration within the server pools.

To convert the database to policy managed, one must first configure the server pools, as in the previous post.

First check our database configuration.

 

7 Database configuration
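For reference, a hedged sketch of the commands behind such a check (database and service names as used in this series):

[oracle@rac01]$ srvctl config database -d orcl
[oracle@rac01]$ srvctl config service -d orcl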

Check whether you have any services. Yes, we do have a service, server_taf, as created in this post.

5. Configuration setting

CRS Profile setting

6 Profile setting -1

Here are the notes for the above screenshots:

  • I have a database orcl running on 4 nodes with instances orcl1, orcl2, orcl3 and orcl4. The service server_taf is running on orcl1 and orcl2.
  • Now we want our database to run on server pool OLTP_SP (read here), which has 3 nodes in its pool. So ideally my database should run only 3 instances rather than 4.
  • Also, if you observe, the instance names are orcl1, orcl2, orcl3, which will change once we move to policy managed.

Let's convert the database to policy managed and run it on the OLTP_SP server pool.

image

Uh oh, we got an error. The reason is that my service server_taf (here) is running on two instances, orcl1 and orcl2; with server pools a service should run either on all nodes or on one node, i.e. UNIFORM or SINGLETON.

I will have to remove the service or modify it to run either on one node or on all nodes. I will remove it; I tried modifying the service to UNIFORM, but that fails while my database is still in admin-managed mode.
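A small sketch of the removal, assuming the service and database names used in this series:

[oracle@rac01]$ srvctl stop service -d orcl -s server_taf
[oracle@rac01]$ srvctl remove service -d orcl -s server_taf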

image

image

Now I have removed the service, so let me try again to move the database to server pool OLTP_SP as a policy-managed database. The following step will stop the instances, so take note when doing this in production; the -f option will stop the instances.
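A minimal sketch of the conversion command, with the -f option the post mentions to stop the running instances:

[oracle@rac01]$ srvctl modify database -d orcl -g OLTP_SP -f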

image 

 

Check the configuration of the database now.

12

We have now successfully moved our database to policy managed. Start the database and see where the instances are running.
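A quick sketch of the start-and-check step (same names assumed):

[oracle@rac01]$ srvctl start database -d orcl
[oracle@rac01]$ srvctl status database -d orcl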

image

As you can observe above, rac02 is not running any instance, since the OLTP_SP server pool contains a maximum of 3 nodes; rac01, rac03 and rac04 have been picked by Oracle and rac02 has been left in the Free pool. In case of a failure or crash, rac02 will be given to the OLTP_SP pool.

In the next post we will initiate a failover on rac01 and see whether the free server rac02 is assigned to server pool OLTP_SP.

Thanks for reading,

12c Server Pools : Creating & Managing Server Pools

Hello All,

If you have not read about the server pool concept so far, please read it here.

In this post we are going to see:

1. Create the server pool OLTP_SP with a minimum of 2 nodes and a maximum of 2 nodes, with importance 4

2. That leaves 2 of my 4 nodes in the Free pool

3. Modify the server pool OLTP_SP to a minimum of 3 nodes and a maximum of 3 nodes

Read on,

1. Create the server pool OLTP_SP with a minimum of 2 nodes and a maximum of 2 nodes, with importance 4

[root@grid#] srvctl add srvpool -serverpool OLTP_SP -min 2 -max 2 -importance 4

2. Check the server pool configuration
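A hedged sketch of the check, assuming the 12c verbose srvctl syntax:

[root@grid#] srvctl config srvpool -serverpool OLTP_SP
[root@grid#] srvctl status srvpool -detail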

image

3. Modifying the server pool to have a minimum of 3 nodes.

[root@grid#] srvctl modify srvpool -serverpool OLTP_SP -min 3 -max 3 -importance 4

[root@grid#] srvctl config srvpool -serverpool OLTP_SP

image

 

Generic notes on server pools:

1. Server pools give you flexibility in running a pool of instances/services.

2. RAC instances can run on any server in the pool defined for the database, rather than being bound instance-to-node. Earlier we needed srvctl add instance, but now there is no requirement to add instances (in 11gR2 as well); srvctl add database takes care of the instances automatically.

3. Further, services can run on a server pool. For example, in our case OLTP_SP has three nodes running, i.e. three instances; I can create a service to run either on a single node (SINGLETON) or on all nodes (UNIFORM). This eliminates the preferred/available method (see the sketch after these notes).

4. Even the RAC instance names are assigned dynamically (orcl_1, orcl_2 and so on, rather than fixed orcl1, orcl2); instances are allocated dynamically based on pool availability.
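For example, a hedged sketch of adding a SINGLETON and a UNIFORM service on a server pool (the service names oltp_srv and batch_srv are hypothetical):

[oracle@rac01]$ srvctl add service -d orcl -s oltp_srv -g OLTP_SP -c SINGLETON
[oracle@rac01]$ srvctl add service -d orcl -s batch_srv -g OLTP_SP -c UNIFORM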

Next post: modifying the database to be server pool managed, i.e. a policy-managed database.

Managing Grid Infrastructure 12c : Creating and Testing TAF Services

Hello,

In this post we will create a TAF (Transparent Application Failover) service using the pre-11gR2 method, i.e. the preferred/available instances way.

Environment:-

1. A 4-node RAC (rac01, rac02, rac03, rac04) running the orcl database and its instances.

2. A service server_taf is created, for which orcl1 is the preferred instance and orcl2 is the available instance.

TAF Setup & Testing:-

1. Create Service

srvctl add service -d orcl -s server_taf -r orcl1 -a orcl2 -P BASIC

2. Start the service

srvctl start service -d orcl -s server_taf

3. Check the services

srvctl config service -d orcl
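Optionally, a quick status check as well (a hedged sketch with the same names):

srvctl status service -d orcl -s server_taf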

4. Modify the service to add load balancing and failover options (optional)

SQL> execute dbms_service.modify_service (service_name => 'server_taf' -
, aq_ha_notifications => true -
, failover_method => dbms_service.failover_method_basic -
, failover_type => dbms_service.failover_type_select -
, failover_retries => 180 -
, failover_delay => 5 -
, clb_goal => dbms_service.clb_goal_long);

 

5. Keep TNS entries on both the client and server side (on all nodes too)

SERVERTAF =
  (DESCRIPTION =
    (LOAD_BALANCE = yes)
    (ADDRESS = (PROTOCOL = TCP)(HOST = scan.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = server_taf.localdomain)
      (FAILOVER_MODE = (TYPE = select)(METHOD = basic))
    )
  )

6. Check the configuration

All the screenshots have been sent by Deepak Kumar from his test server. Many thanks to him for his time and effort, and for permitting us to upload them for everyone's benefit.

 

7. Check the service run state; server_taf is running on orcl1

2. Service is running on NODE01 / orcl1 currently (initially)

8. Test the TAF service

sqlplus system/manager@server_taf

Check on the node with the following query where the session is and what its failover mode status is:

SELECT MACHINE, FAILOVER_TYPE, FAILOVER_METHOD, FAILED_OVER, COUNT(*)
FROM V$SESSION
GROUP BY MACHINE, FAILOVER_TYPE, FAILOVER_METHOD, FAILED_OVER;

 

9. Run a very big query in the session you have connected

SQL> Select * from dba_objects;

This runs for approximately 5-10 minutes on my test machine.

10. Kill the pmon at the OS level to simulate a crash on node 1. The service should fail over to the available node, and so should the workload, i.e. the above dba_objects query, rather than the session being disconnected and disrupted.

 

4. On NODE01 we killed pmon at the OS level and are checking the status at the DB level or via SRVCTL

On node 2, check v$session with the query above; observe that the FAILED_OVER column shows the rac01 session has failed over to orcl2 and is marked YES.

3. Service was killed on NODE01 and is now running on NODE02 / orcl2 currently (relocated)

And you may also observe that the session that was connected on orcl1 (node 1) keeps running as if nothing had happened.

11. Start the instance orcl1 on node 1 and relocate the service back to orcl1 from orcl2.

4b. RELOCATE it back to orcl1 once the pmon or DB is up on RAC01, from RAC02
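A hedged sketch of that relocation step, assuming the instance and service names used above:

srvctl start instance -d orcl -i orcl1
srvctl relocate service -d orcl -s server_taf -oldinst orcl2 -newinst orcl1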

12. Check the configuration

5. Configuration setting

In the next post we will see how to create server pools and then add this service to a server pool. Basically, in this method the services are bound to instances (preferred and available), but with the server pool concept the services can run on any node in the pool.

-Thanks

Managing Grid Infrastructure 12c : Adding a New Node in 12c RAC

In previous posts here and here we saw deleting a node and adding the deleted node back to the cluster.

In this post we will see how to add a brand new node to the cluster.

Environment: assuming you have a new server with the OS (OEL) installed, there are two phases:

Phase 1. Install or clone the Grid home; for this you can either use the GUI to install the GI binaries or clone an existing home. We will use the latter approach.

Phase 2. Add this new node to the cluster.

Read on,

Phase 1 : Clone the Grid Home

1. Assuming GI is installed on the source node, stop the GI stack using the following command.

[root@rac2 bin]# ./crsctl stop crs

2. Create a stage directory and a tar ball of the source

[root@rac2 grid]# mkdir /u01/stageGI

[root@rac2 grid]# cp -prf /u01/app/11.2.0.3/grid /u01/stageGI

[root@rac2 grid]# pwd

/u01/stageGI/grid

[root@rac2 grid]#

[root@rac2 grid]# tar -cvf /tmp/tar11203.tar .

[root@rac2 grid]#

3. Start GI on the source node

[root@rac2 bin]# ./crsctl start crs

4. Create a software location on the new node rac3 and extract the tarball. As root, execute on the new node rac3:

mkdir -p /u01/app/11.2.0.3/grid

mkdir -p /u01/app/grid

mkdir -p /u01/app/oracle

chown grid:oinstall /u01/app/11.2.0.3/grid

chown grid:oinstall /u01/app/grid

chown oracle:oinstall /u01/app/oracle

chown -R grid:oinstall /u01

chmod -R 775 /u01/

As grid user execute on the new node rac3

cd /u01/app/11.2.0.3/grid

[grid@rac3 grid]$ tar -xvf /tmp/tar11203.tar

5. Clean the node-specific configuration details and set proper permissions and ownership. As root, execute the following:

cd /u01/app/11.2.0.3/grid

rm -rf rac2

rm -rf log/rac2

rm -rf gpnp/rac2

rm -rf crs/init

rm -rf cdata

rm -rf crf

find gpnp -type f -exec rm -f {} \;

rm -rf network/admin/*.ora

find . -name '*.ouibak' -exec rm {} \;

find . -name '*.ouibak.1' -exec rm {} \;

rm -rf root.sh*

cd cfgtoollogs

find . -type f -exec rm -f {} \;

chown -R grid:oinstall /u01/app/11.2.0.3/grid

chown -R grid:oinstall /u01

chmod -R 775 /u01/

As grid user execute

[grid@rac3 cfgtoollogs]$ chmod u+s /u01/app/11.2.0.3/grid/bin/oracle

[grid@rac3 cfgtoollogs]$ chmod g+s /u01/app/11.2.0.3/grid/bin/oracle

6. Verify the peer compatibility and node reachability. You can see the logs here and here (uploaded separately to keep the post small).
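The linked logs are presumably from cluvfy; a hedged sketch of such checks, run from an existing node:

[grid@rac2]$ cluvfy comp peer -refnode rac2 -n rac3 -verbose
[grid@rac2]$ cluvfy stage -pre nodeadd -n rac3 -verbose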

7. Run clone.pl from $GI_HOME/clone/bin on the new node rac3 as the grid user. Before running clone.pl, prepare the following information:

· ORACLE_BASE=
/u01/app/grid

· ORACLE_HOME=
/u01/app/11.2.0.3/grid

· ORACLE_HOME_NAME=
Ora11g_gridinfrahome2 – use OUI from any existing cluster node

Run Clone.pl

[grid@rac3 bin]$ perl clone.pl ORACLE_HOME=/u01/app/11.2.0.3/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome2 ORACLE_BASE=/u01/app/grid SHOW_ROOTSH_CONFIRMATION=false

Note: basically, what clone.pl does is relink the binaries on the new node, since we have already copied the whole Oracle home from another node.

Phase 2 : Add the node to cluster

Run addnode.sh from any of the existing nodes

[grid@rac2 bin]$ ./addNode.sh -silent -noCopy ORACLE_HOME=/u01/app/11.2.0.3/grid "CLUSTER_NEW_NODES={rac4}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac4-vip}" "CLUSTER_NEW_VIPS={rac4-vip}" CRS_ADDNODE=true CRS_DHCP_ENABLED=false

Note: you need not specify the VIPs here if you use GNS.

Error 1:- Node VIP already exists. We had given a VIP that was already configured for the SCAN; here is the screenshot for the same.

Error during addition of Node -1

Corrected the entries in the hosts file and then ran addNode.sh again.

Command Line addition of Node

Do not run root.sh immediately after addNode.sh completes.

First copy the gpnp profile and the files below from the node on which you ran addNode.sh:

[grid@rac2 install]$ scp /u01/app/11.2.0.3/grid/crs/install/crsconfig_params rac3:/u01/app/11.2.0.3/grid/crs/install/crsconfig_params

[grid@rac2 install]$ scp /u01/app/11.2.0.3/grid/crs/install/crsconfig_addparams rac3:/u01/app/11.2.0.3/grid/crs/install/crsconfig_addparams

[grid@rac2 install]$

Copy the content of /u01/app/11.2.0.3/grid/gpnp from any existing cluster node, for example rac2, to /u01/app/11.2.0.3/grid/gpnp on rac3.
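A minimal sketch of that copy, run from rac2:

[grid@rac2]$ scp -r /u01/app/11.2.0.3/grid/gpnp/* rac3:/u01/app/11.2.0.3/grid/gpnp/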

Finally Run the root.sh

[root@rac3 grid]# ./root.sh

Logfile

Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.1.0/grid
   Copying dbhome to /usr/local/bin …
   Copying oraenv to /usr/local/bin …
   Copying coraenv to /usr/local/bin …

Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2014/07/31 11:04:23 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2014/07/31 11:04:51 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
OLR initialization - successful
2014/07/31 11:05:45 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac04'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac04'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac04' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac04' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac04'
CRS-2672: Attempting to start 'ora.evmd' on 'rac04'
CRS-2676: Start of 'ora.evmd' on 'rac04' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac04' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac04'
CRS-2676: Start of 'ora.gpnpd' on 'rac04' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac04'
CRS-2676: Start of 'ora.gipcd' on 'rac04' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac04'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac04' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac04'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac04'
CRS-2676: Start of 'ora.diskmon' on 'rac04' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac04' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac04'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac04'
CRS-2676: Start of 'ora.ctssd' on 'rac04' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac04' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac04'
CRS-2676: Start of 'ora.asm' on 'rac04' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac04'
CRS-2676: Start of 'ora.storage' on 'rac04' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac04'
CRS-2676: Start of 'ora.crf' on 'rac04' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac04'
CRS-2676: Start of 'ora.crsd' on 'rac04' succeeded
CRS-6017: Processing resource auto-start for servers: rac04
CRS-2672: Attempting to start 'ora.net1.network' on 'rac04'
CRS-2676: Start of 'ora.net1.network' on 'rac04' succeeded
CRS-2672: Attempting to start 'ora.ons' on 'rac04'
CRS-2676: Start of 'ora.ons' on 'rac04' succeeded
CRS-6016: Resource auto-start has completed for server rac04
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2014/07/31 11:09:20 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/07/31 11:09:44 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Verify with olsnodes that the new node is now part of the cluster.

01. OLSNODE
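For example, a quick check from the Grid home on any cluster node:

[grid@rac2 bin]$ ./olsnodes -n -t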

Managing Grid Infrastructure 12c : Adding a deleted Node

Hello

This is a continuation of the previous post, based on the good work by Deepak.

In the last post we deleted node rac04, and here we will see how to add the same node back to the cluster.

 

1. Attach the home on rac04 (optional, only if you had detached the home)

If you detached the home as we did in the previous post, you must first attach the home on that node (assuming you have not run rootcrs.pl -deinstall, since that would destroy the Oracle Grid home and you would have to reinstall the binaries; for now, assume you have deleted node rac04 and detached the home, and are now adding it back).

$ORACLE_HOME/oui/bin/runInstaller -attachHome ORACLE_HOME_NAME="OraDb11g_home2" \
ORACLE_HOME="/u01/app/local/oracle/product/11.2.0/db_2" \
"CLUSTER_NODES={rac01,rac02,rac03}" \
LOCAL_NODE="rac04"

2. Run addnode.sh

To add the node in 12c, there is a separate addnode folder in the Grid home, unlike in 11gR2.

[grid@rac01] cd $GI_HOME/addnode

[grid@rac01 addnode]$ ./addnode.sh -silent -noCopy ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NEW_NODES={rac04}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac04-vip}"

Add_deleted_node_A

 

3. Execute root.sh on node rac04.
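A minimal sketch of that step (the Grid home path as used in this series):

[root@rac04]# /u01/app/12.1.0/grid/root.sh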

Add_deleted_node_2

4. Run updateNodeList on the other nodes in the cluster

      [ grid@rac01 ~   ]$ cd /u01/app/12.1.0/grid/oui/bin/ 
      [ grid@rac01 bin ]$  ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NODES={rac01,rac02,rac03,rac04}" CRS=TRUE -silent

5. Check that the inventory is updated

image

 

6. Check olsnodes

01. OLSNODE

 

 

 

Managing Grid Infrastructure 12c : Deletion of Node

Hello Readers,

I will be running through a series of Grid Infrastructure tasks such as deletion of a node, addition of a node, server pools and TAF in 12c.

I would like to thank and appreciate Mr. Deepak Kumar from Bangalore, India, for his nice work in sending these screenshots.

Without his time and effort this would not have been possible at all. Please accept my thanks for all this and for your kind thought of sharing it with the wider DBA community through my blog.

In this post we will see the deletion of a node in 12c Grid Infrastructure. Let's run olsnodes to see the node list in the cluster; here we will be deleting the rac04 node.

01. OLSNODE

1. On the node which needs to be deleted, as root:

      [root@rac04]#cd /u01/app/12.1.0/grid/crs/install

2. Deconfigure the Oracle Clusterware applications and daemons on the node (rac04)

      [root@rac04]# ./rootcrs.pl -deconfig

     

06. Deleting or deconfiguring the clusterware on the deleted node

3. From a node that will remain (rac01), delete the node

      [root@rac01 addnode]# cd /u01/app/12.1.0/grid/bin/


      [root@rac01 bin]# ./olsnodes -n -t

      [root@rac01]# crsctl delete node -n rac04

07. Remove the deleted node from an existing node

 

4. As the grid user, on the node which is being deleted:

      [ grid@rac04 bin ]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NODES={ rac04 }" CRS=TRUE -silent -local
04 detachHome

      [ grid@rac04 bin ]$ ./runInstaller -detachHome ORACLE_HOME=/u01/app/12.1.0/grid -silent -local

      Note: This is an optional step, for when you want to completely remove the software home from the node.
04 detachHome_1

Check olsnodes now to confirm the node has been successfully removed from the cluster.

    [root@rac01 bin]# ./olsnodes -n -t

08 FINAL SCREEN -2

 

5. On any node which will remain in the cluster, run the command below:

      [ grid@rac01 ~   ]$ cd /u01/app/12.1.0/grid/oui/bin/
      [ grid@rac01 bin ]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NODES={rac01,rac02,rac03}" CRS=TRUE -silent

08 FINAL SCREEN -1

Duplicate Standby : Issues & Issues all the way

Hello,

All DBAs use RMAN DUPLICATE to build a standby database, and it has proved to work most of the time. But sometimes it throws up a series of issues.

Thanks to our good old DBA Basavaraju (a very long-time DBA) for pointing out this series of issues and permitting me to post them on this blog for everyone's benefit.

Here is the environment he was building

1. 11.2.0.3 with Grid Infrastructure
2. Primary is a 2-node RAC and the standby is a single-node RAC
3. Using DUPLICATE ... FROM ACTIVE DATABASE to build the standby

Issue 1:- “RMAN-06217: not connected to auxiliary database with a net service name”

Solution:- Ensure you do not use the SCAN IPs of the production box during DR creation with the DUPLICATE command; to avoid the issue, set LOCAL_LISTENER to the local VIP on both sides.
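A hedged sketch of setting it on one primary instance (the host name prod1-vip and the SID are placeholders, not from the original post):

SQL> alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=prod1-vip)(PORT=1521))' scope=both sid='orcl1';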

Issue 2:- “RMAN-04006: error from auxiliary database: ORA-12154: TNS:could not resolve the connect identifier specified”

Solution:- Ensure tnsnames.ora has "(UR=A)" at both locations (DR & primary) to avoid the above error.
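A hedged sketch of such an entry (the alias, host and service names are placeholders):

STBY_AUX =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = stby-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = stby)
      (UR = A)
    )
  )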

Issue 3:- “Undefined subroutine &main::read_file called at /u01/gi/oragrid/grid/11.2.0.2/crs/install/crspatch.pm line 86.”

Solution:- There is a mandatory patch, 12880299, to apply to fix the "poison attack" issue. It is a bug; as a workaround, edit $GI_HOME/crs/install/crsconfig_lib.pm and make the following change.

From:
my @exp_func = qw(check_CRSConfig validate_olrconfig validateOCR

To:
my @exp_func = qw(check_CRSConfig validate_olrconfig validateOCR read_file

The above issue is fixed with this change, and it also starts the HAS services.

Issue 4:- ORA-00245: control file backup failed; target is likely on a local file system

Solution:- RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+FRA/test/snapcf_opartl2.f';

Issue 5:- PLS-00201: identifier ‘DBMS_RCVCAT.GETDBID’ must be declared

Solution:- Use nocatalog in the RMAN connection, like this: rman target sys@source auxiliary sys@target nocatalog

Issue 6:- ERROR: failed to establish dependency between database target(standby) and diskgroup resource ora.DATA2.dg

Solution:- srvctl modify database -d targetdbname -a "DATA2,RECO2"
srvctl add database -d targetdbname -o /u01/app/oracle/product/11.2.0.4/dbhome_1

Issue 7:- Once you create the standby, archives are no longer applied; it fails with "Error 1017 received logging on to the standby", Error 16197

Solution:- Even though you copy the password file from production, set the parameter sec_case_sensitive_logon = FALSE, and also use the ignorecase=y option while creating the password file.
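A hedged sketch of both pieces (the password placeholder and the standby SID are assumptions):

SQL> alter system set sec_case_sensitive_logon=FALSE scope=both;
$ orapwd file=$ORACLE_HOME/dbs/orapwstby password=<sys_password> ignorecase=y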

Oops, a very unfortunate day then: lots of issues in sequence while building a standby with the well-known duplicate feature. Hope the above helps if you fall into this trap.

-Thanks
Sureshgandhi

Difference between roothas.pl and rootcrs.pl

Hello,

I got a question during a session with my colleagues: what is the difference between roothas.pl and rootcrs.pl?

Both reside in $GRID_HOME/crs/install.

roothas.pl is used when you run Grid Infrastructure in standalone mode (Oracle Restart, a single-node setup).

rootcrs.pl is used when you run Grid Infrastructure in cluster mode (a normal cluster comprising one or more nodes).

-Thanks
Sureshgandhi

12c Release 1 – addnode.sh is missing?

Hello,

While practicing, I noticed that addnode.sh is no longer located in $GI_HOME/oui/bin; instead it is available in a separate folder, $GI_HOME/addnode/addnode.sh.

Crazy, I should also request a deletenode.sh :( And I have stumbled upon one more thing.

In 12c, rootcrs.pl and roothas.pl no longer have the -delete option; they contain only the -deconfig option. Is this because root.sh can restart from a previous failure (a feature introduced in 11.2.0.2)?

Reference:- How to Proceed from Failed 11gR2 Grid Infrastructure (CRS) Installation (Doc ID 942166.1)
excerpt:- Step 0: For 11.2.0.2 and above, root.sh is restartable.

In 11g Release 2, the following works:
"$GRID_HOME/crs/install/rootcrs.pl -verbose -delete -force"

In 12c Release 1, the following works instead:
"$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force"

-Thanks
Sureshgandhi