
ACFS-9317: No ADVM/ACFS distribution media detected at location: root.sh failed

Again, an OEL 7 (UEK) and 12c Grid Infrastructure issue. OEL 7 with UEK does not yet ship or support ACFS, but we tried anyway and stumbled :(.

Here is the log for root.sh failure.

ACFS-9317: No ADVM/ACFS distribution media detected at location: '/u02/app/11.2.0.2/grid1/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5-x86_64/bin'
[root@ol5-112-rac1 ~]# /u02/app/11.2.0.2/grid1/bin/acfsroot install
ACFS-9320: Missing file: 'advmutil'.
ACFS-9320: Missing file: 'advmutil.bin'.
ACFS-9320: Missing file: 'fsck.acfs'.
ACFS-9320: Missing file: 'fsck.acfs.bin'.
ACFS-9320: Missing file: 'mkfs.acfs'.
ACFS-9320: Missing file: 'mkfs.acfs.bin'.
ACFS-9320: Missing file: 'mount.acfs'.
ACFS-9320: Missing file: 'mount.acfs.bin'.
ACFS-9320: Missing file: 'acfsdbg'.
ACFS-9320: Missing file: 'acfsdbg.bin'.
ACFS-9320: Missing file: 'acfsutil'.
ACFS-9320: Missing file: 'acfsutil.bin'.
ACFS-9320: Missing file: 'umount.acfs'.
ACFS-9320: Missing file: 'umount.acfs.bin'.
ACFS-9301: ADVM/ACFS installation can not proceed:
ACFS-9317: No ADVM/ACFS distribution media detected at location: '/u02/app/11.2.

.. and so on. The cluster is not started, the voting disk is not formatted and is left in limbo :(. Really, trying 12c on different kernels can take a lot of your time.

Anyway, to fix this I tweaked the code in rootcrs.pl and crsconfig_lib.pm, as this is a test environment and I am really not worrying about ACFS at the moment.

To recap: root.sh calls /u02/app/11.2.0.2/grid1/crs/install/rootcrs.pl, which in turn uses /u02/app/11.2.0.2/grid1/crs/install/crsconfig_lib.pm, a library with a lot of functions that install whatever is required.

Now open the file rootcrs.pl (take a backup, of course) and comment out all the relevant lines; search for the USM keyword in the file (a sketch of this step is below).
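A minimal sketch of the backup-and-search step, assuming you work in the grid home's crs/install directory; the lines to comment out are the ones the grep turns up:

cd /u02/app/11.2.0.2/grid1/crs/install
cp rootcrs.pl rootcrs.pl.bak        # keep a backup before editing
grep -n -i usm rootcrs.pl           # list the USM-related lines to comment out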

Open /u02/app/11.2.0.2/grid1/crs/install/crsconfig_lib.pm and remove the usminstall function from the call; remove the highlighted keyword completely.

Note: USM stands for Universal Storage Management, the component that installs ACFS and ADVM as part of root.sh.


Now run root.sh again and it will work fine.

By the way, do not install 12.1.0.1 on OEL 7 UEK for now, since it seems to have a lot of issues; 12.1.0.2, however, does not appear to have the same problems.

ORA-12547: TNS: lost contact on DBCA in 12c with OEL 7 (UEK)

While running DBCA we received ORA-12547 and could no longer create a database using DBCA.

Googling and a MetaLink search turned up many notes, none of which resolved the issue; the possible causes are manifold:

  1. Environment – SID, PATH, LD_LIBRARY_PATH – not resolved
  2. Permissions on the oracle executable set to 6751 – not resolved
  3. gcc package issue (package already installed) – not resolved
  4. libaio package issue (package already installed) – not resolved
  5. Listeners up and running, reload and TNS changes – not resolved

Finally, I came to understand that the oracle and other executables were not relinked properly (we ignored some errors during installation, our bad); apparently the relink had failed.

Whoa, wait: we did not get any prerequisite failures when we ran cluvfy, so how could it be a relinking issue when all of the packages are installed?

Stuck now, we then recollected that we had tweaked some files in rdbms/lib for 12.1.0.1 on OEL 7 while installing the grid, and it seems to be the same issue.

I really want to curse our fate :(, seriously, because we cannot tell whether this is a problem with how the software is shipped (an Oracle mistake) or with our installation. Leaving that apart, the following solved our issue.

Installation log shows,

/usr/bin/ld: note: '__tls_get_addr@@GLIBC_2.3' is defined in
DSO /lib64/ld-linux-x86-64.so.2 so try adding it to the linker
command line /lib64/ld-linux-x86-64.so.2: could not read symbols:
Invalid operation

INFO: collect2: error: ld returned 1 exit status

cd $ORACLE_HOME/rdbms/lib

cp env_rdbms.mk env_rdbms.mk.bck

Make the following changes in $ORACLE_HOME/rdbms/lib/env_rdbms.mk:

modify line 176

LINKTTLIBS=$(LLIBCLNTSH) $(ORACLETTLIBS) $(LINKLDLIBS)

to

LINKTTLIBS=$(LLIBCLNTSH) $(ORACLETTLIBS) $(LINKLDLIBS) -lons

modify line 279 and 280

LINK=$(FORT_CMD) $(PURECMDS) $(ORALD) $(LDFLAGS) $(COMPSOBJS)
LINK32=$(FORT_CMD) $(PURECMDS) $(ORALD) $(LDFLAGS32) $(COMPSOBJS)

to

LINK=$(FORT_CMD) $(PURECMDS) $(ORALD) $(LDFLAGS) $(COMPSOBJS) -Wl,--no-as-needed
LINK32=$(FORT_CMD) $(PURECMDS) $(ORALD) $(LDFLAGS32) $(COMPSOBJS) -Wl,--no-as-needed

modify line 3041 and 3042

TG4PWD_LINKLINE= $(LINK) $(OPT) $(TG4PWDMAI) \
        $(LLIBTHREAD) $(LLIBCLNTSH) $(LINKLDLIBS)

to

TG4PWD_LINKLINE= $(LINK) $(OPT) $(TG4PWDMAI) \
        $(LLIBTHREAD) $(LLIBCLNTSH) $(LINKLDLIBS) -lnnz12

Now, when we tried to relink the libraries, it threw the error below:

cd $ORACLE_HOME/bin

relink all

INFO: /u01/app/oracle/product/12.1.0/db_1/rdbms/lib/config.o: file not recognized: File truncated collect2: error: ld returned 1 exit status

To resolve this, you will have to remove the config.o file and relink Oracle to create a new config.o (see the relevant MetaLink note):

 

make -f ins_rdbms.mk config.o

make -f ins_rdbms.mk ioracle

and perform relink all again; verify the logs in $ORACLE_HOME/install/relinkaction…log for any errors.
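Putting the steps together, here is a minimal sketch assuming the default locations; setting config.o aside rather than deleting it outright is my own precaution, not from the original post:

cd $ORACLE_HOME/rdbms/lib
mv config.o config.o.bad          # set the truncated object file aside
make -f ins_rdbms.mk config.o     # regenerate config.o
make -f ins_rdbms.mk ioracle      # relink the oracle binary
cd $ORACLE_HOME/bin
relink all                        # then check the relink log for errors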

And finally, that resolved it: DBCA and the other executables are now working properly.

Conclusion: I am no longer in favour of running 12.1.0.1 on OEL 7 (UEK), given the sequence of issues that come up while performing installations; rather, try OEL 6.3 (not even 6.4) to ensure a seamless installation.

-Happy Reading

Sureshgandhi

Upgrading 11gr2 to 12c using Transient logical standby – Rolling upgrade

Well, this is a fairly old topic, but I explored it only lately. This is a very big post; click on Read More to scroll down to the full page.

Ah, I was trying to explore the upgrade from 11gR2 to 12c, but not the traditional way, and I got these two excellent white papers:

http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-transientlogicalrollingu-1-131927.pdf

http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-upgrades-made-easy-131972.pdf

The high-level overview of this feature is something like the following:

  1. The environment consists of a primary and a physical standby
  2. A guaranteed restore point is created
  3. The physical standby is converted into a logical standby temporarily to allow the upgrade
  4. The logical standby is started in the 12c home (assuming you have already done the 12c software-only installation)
  5. The logical standby is upgraded
  6. A switchover is performed: the primary becomes the standby and the logical standby becomes the primary
  7. The old primary database is flashed back to the guaranteed restore point taken earlier
  8. The redo generated on the logical standby is applied to the old primary (no upgrade is required here)

To be able to back out of a failed upgrade:

  1. You should create a guaranteed restore point (see the sketch below)
  2. Do not change the COMPATIBLE parameter until all the databases (primary/standby) are upgraded
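For reference, a guaranteed restore point can be created as below. This is just a minimal sketch with a made-up restore point name; note that the physru script discussed later creates its own restore points:

sqlplus / as sysdba <<'EOF'
CREATE RESTORE POINT before_12c_upgrade GUARANTEE FLASHBACK DATABASE;
EOF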

Let's have a look at my environment:

  1. Primary: 11.2.0.2 – Prod1 (on host oinfo11g)
  2. Standby: 11.2.0.2 – Prod1DR (physical standby, on the same host oinfo11g)
  3. 12c software installed on both sides – /u01/app/oracle/product/12.1.0/db_1
  4. Flashback enabled on both sides

In order to perform all of these activities manually you would have to do a lot of work; instead, Oracle provides you a script, physru, to automate them.

Oracle11g Data Guard: Database Rolling Upgrade Shell Script (Doc ID 949322.1)

Now execute the script (no prereqs, no changes, nothing; just make sure both databases are up and running).

Step 1

Explanation: this script takes six inputs – username, primary, standby, primary TNS, standby TNS and target version (a hypothetical invocation is sketched below).
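A hypothetical invocation, assuming the input order listed above and made-up TNS aliases (prod1_tns, prod1dr_tns); check the header of the physru script in Doc ID 949322.1 for the exact argument order before running it:

./physru sys Prod1 Prod1DR prod1_tns prod1dr_tns 12.1.0.1.0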

Looking at the run below, you can see:

Stage 1: the script stops media recovery, creates restore points on both the standby and the primary, and also backs up the control file.

Stage 2: the script creates the dictionary build, converts the physical standby into a (transient) logical standby, waits until the dictionary load happens, and finally opens Prod1DR in open mode (as a logical standby).


12c Server Pools in action : Instance and service failover to server pools

In the previous post we converted the orcl database to the server pool OLTP_SP, which is running on three nodes (rac01, rac03, rac04), with a free server available in rac02.

Okay, let's have some discussion before proceeding.

1. In pre-11g versions, if an instance crashed, did it fail over to another node?

No, because there is no free node available: instances run on all nodes and are bound to start on a specific node. That means there is no instance failover; instead, the service fails over to an available node.

Keep in mind that with server pools we have the flexibility of failing over instances along with their services, provided we keep our databases policy managed, where Oracle Grid Infrastructure keeps your instances running on the server pool defined for them. That is the reason why, when you assign the database to a server pool (like below), you specify only the server pool name rather than how many instances and so on.

Ex: srvctl modify database -d orcl -g OLTP_SP

Note: OLTP_SP is a server pool with 3 nodes running, so although I have four nodes, my database will run only 3 instances on three nodes.

Read on now,

If we kill pmon or crash the node rac01, the free-pool node rac02 will be assigned to the server pool OLTP_SP. Let's see this in action.

Configuration check,
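A minimal sketch of the checks behind the screenshot, using the pool and database names from this series:

srvctl status serverpool -a      # server pools and the servers currently assigned to them
srvctl status database -d orcl   # which instances are running on which nodes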

(BTW, many thanks to Deepak (again) for sending these wonderful screenshots.)


Let's kill the pmon on rac01 (a sketch is below).
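A minimal sketch of simulating the crash, assuming the orcl instance on rac01; adjust the grep pattern to your instance name, and do this on test systems only:

# on rac01, as the oracle OS user
ps -ef | grep ora_pmon_orcl          # note the pmon process of the orcl instance
kill -9 $(pgrep -f ora_pmon_orcl)    # simulate an instance crash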

 


Now monitor the server pool and read the comments added to the screenshots.

As above, pmon has been killed and orcl_4 is no longer running on rac01. Initially the Free pool had 1 server; after a while the Free pool gave that server away to the server pool (as you can see, 0 at the end), and finally you can see the three instances are back, but this time on rac02, rac03 and rac04.


As we have seen, an instance crash fails the instance over to a node from the free pool. In the next post, let's see service failover within server pools.

Thanks for reading,

Sureshgandhi

12c Database : Convert Admin Managed Database to Policy Managed Database

From 11g onwards, a database can be managed in two modes. The first is administrator managed, i.e. instance allocation/deletion, service allocation/deletion and evictions are managed by the administrator.

From 11gR2 onwards, with policy management, instance allocation to hosts is based on the cardinality of the server pool, and services run according to their configuration within the server pools.

To convert a database to policy managed, one must first configure the server pools, as in the previous post.

First, check our database configuration (sketched below).
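A minimal sketch of the check shown in the screenshot, using the database and service names from this series:

srvctl config database -d orcl                 # admin- vs policy-managed, instance list
srvctl config service -d orcl -s server_taf    # service configuration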

 

[Screenshot: database configuration]

Check whether you have any services. Yes, we do have a service, server_taf, created as in this post.

[Screenshot: configuration settings]

CRS Profile setting

[Screenshot: CRS profile settings]

Here are the notes for the above screenshots:

  • I have a database orcl running on 4 nodes with instances orcl1, orcl2, orcl3 and orcl4, where server_taf is running on orcl1 and orcl2.
  • Now we want our database to run in the server pool OLTP_SP (read here), which has 3 nodes in its pool. So ideally my database should run only 3 instances rather than 4.
  • Also, if you observe, the instances are named orcl1, orcl2, orcl3, which will change once we move to policy managed.

Let's convert the database to policy managed and run it in the OLTP_SP server pool (see the sketch below).
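The conversion itself is the srvctl modify database command shown earlier; a minimal sketch with the same names:

srvctl modify database -d orcl -g OLTP_SP    # move orcl into the OLTP_SP server pool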

Uh-oh, we got an error. The reason is that my service server_taf (here) is running on two nodes, orcl1 and orcl2; with server pools, a service must run either on all nodes or on one node, i.e. UNIFORM or SINGLETON.

I will have to either remove the service or modify it to run on one node or all nodes. I chose to remove it; I tried modifying the service to UNIFORM, but that fails while my database is still admin managed.


Now that I have removed the service, let me try again to modify the database into the server pool OLTP_SP as a policy-managed database (sketched below). The following step will stop the instances, so take note when doing this in production; using the -f option will stop the instances.
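A minimal sketch of removing the service and retrying the conversion, with the names from this series; the -f behaviour (stopping the running instances) is as described in the post:

srvctl stop service -d orcl -s server_taf      # stop the service if it is running
srvctl remove service -d orcl -s server_taf    # remove it
srvctl modify database -d orcl -g OLTP_SP      # retry the conversion (add -f to stop running instances)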


 

Check the configuration of the database now.


We have now successfully moved our database to policy managed. Start the database and see where the instances are running (sketched below).
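A minimal sketch of that step, with the same database name:

srvctl start database -d orcl     # start the policy-managed database
srvctl status database -d orcl    # see which nodes the instances came up on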

As you can observe above, rac02 does not have any instance, since the OLTP_SP server pool contains a maximum of 3 nodes; Oracle picked rac01, rac03 and rac04, and rac02 was given to the Free pool. In case of a failure or crash, rac02 will be given to the OLTP_SP pool.

In the next post we initiate a failover on rac01 and see whether the free server rac02 is assigned to the server pool OLTP_SP.

Thanks for reading,

12c Server Pools : Creating & Managing Server Pools

Hello All,

If you have not read server pool concept so far please read it from here.

In this post we are going to:

1. Create the server pool OLTP_SP with a minimum of 2 nodes and a maximum of 2 nodes, with importance 4.

2. That leaves 2 of my 4 nodes in the Free pool.

3. Modify the server pool OLTP_SP to a minimum of 3 nodes and a maximum of 3 nodes.

Read on,

1. Create the server pool OLTP_SP with a minimum of 2 nodes and a maximum of 2 nodes, with importance 4

[root@grid#] srvctl add serverpool -g OLTP_SP -min 2 -max 2 -importance 4

2. Check the server pool configuration
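A minimal sketch of the check shown in the screenshot:

srvctl config serverpool -g OLTP_SP    # min/max size and importance of the pool
srvctl status serverpool -a            # pools and the servers currently assigned to them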

3. Modify the server pool to have a minimum of 3 nodes.

[root@grid#] srvctl modify serverpool -g OLTP_SP -min 3 -max 3 -importance 4

[root@grid#] srvctl config serverpool -g OLTP_SP


 

Generic notes on server pools:

1. Server pools give you flexibility in running a pool of instances/services.

2. RAC instances can run on any server in the pool defined for them, rather than being bound to a specific node. Earlier we needed srvctl add instance, but now there is no requirement to add instances (in 11g as well); srvctl add database adds the instances automatically.

3. Further, services can run on a server pool. For example, in our case OLTP_SP has three nodes running, i.e. three instances, and I can create a service to run either on a single node (SINGLETON) or on all nodes (UNIFORM). This eliminates the preferred/available method. (See the sketch after these notes.)

4. Even the RAC instance names are assigned dynamically rather than being fixed as orcl1, orcl2, etc.; instances are allocated dynamically based on the pool's availability.
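A minimal sketch of adding a service to a server pool, as mentioned in note 3; the service name oltp_svc is a made-up example:

srvctl add service -d orcl -s oltp_svc -g OLTP_SP -c UNIFORM         # runs on every node of the pool
# or: srvctl add service -d orcl -s oltp_svc -g OLTP_SP -c SINGLETON # runs on exactly one node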

In the next post: modifying the database to be server-pool managed, i.e. a policy-managed database.

Managing Grid Infrastructure 12c : Creating and Testing TAF Services

Hello,

In this post we will create a TAF (Transparent Application Failover) service using the pre-11gR2 method, i.e. the preferred/available instances approach.

Environment:-

1. A 4-node RAC (rac01, rac02, rac03, rac04) running the orcl database and its instances.

2. A service server_taf is created, for which orcl1 is the preferred instance and orcl2 is the available instance.

TAF Setup & Testing:-

1. Create Service

srvctl add service -d orcl -s server_taf -r orcl1 -a orcl2 -P BASIC

2. Start the service

srvctl start service -d orcl -s server_taf

3. Check the services

srvctl config service -d orcl

3. Modify the service to have load balancing options (optional)

SQL> execute dbms_service.modify_service (service_name => 'server_taf' -
, aq_ha_notifications => true -
, failover_method => dbms_service.failover_method_basic -
, failover_type => dbms_service.failover_type_select -
, failover_retries => 180 -
, failover_delay => 5 -
, clb_goal => dbms_service.clb_goal_long);

 

4. Put the TNS entries on both the client and the server side (on all nodes too):

SERVERTAF =
(DESCRIPTION =
  (LOAD_BALANCE = yes)
  (ADDRESS = (PROTOCOL = TCP)(HOST = scan.localdomain)(PORT = 1521))
  (CONNECT_DATA = (SERVICE_NAME = server_taf.localdomain)
    (FAILOVER_MODE = (TYPE = select)(METHOD = basic))
  )
)

5. Check the configuration

All the screenshots were sent by Deepak Kumar from his test server. Many thanks to him for his time and effort, and for permitting me to upload them for everyone's benefit.

 

6. Check the service run state: server_taf is running on orcl1 (sketched below).
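A minimal sketch of the check behind the screenshot:

srvctl status service -d orcl -s server_taf    # shows the instance(s) the service is running on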

[Screenshot: the service is running on NODE01/orcl1 initially]

7. Test the TAF Service

sqlplus system/manager@server_taf

Check with the following query which node the session is on and its failover status:

SELECT MACHINE, FAILOVER_TYPE, FAILOVER_METHOD, FAILED_OVER, COUNT(*)
FROM V$SESSION
GROUP BY MACHINE, FAILOVER_TYPE, FAILOVER_METHOD, FAILED_OVER;

 

8. Run a very big query in the session you have connected to:

SQL> Select * from dba_objects;

This runs for approximately 5-10 minutes on my test machine.

9. Kill pmon at the OS level to simulate a crash on node 1. The service should fail over to the available node, and so should the workload (the dba_objects query above), rather than the session being disconnected and disrupted.

 

[Screenshot: pmon killed at OS level on NODE01; checking status at the DB level and via srvctl]

On node 2, check V$SESSION with the query above; the FAILED_OVER column shows that the rac01 session has failed over to orcl2 and is marked YES. [Screenshot: the service was killed on NODE01 and is now running on NODE02/orcl2 (relocated)]

And you may also observe that the session originally connected to orcl1 (node 1) is still running as before.

10. Start the instance orcl1 on node 1 and relocate the service back to orcl1 from orcl2 (sketched below). [Screenshot: relocate it back to orcl1 once pmon/the DB is up on RAC01]
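A minimal sketch of that step, with the names used in this post:

srvctl start instance -d orcl -i orcl1                              # bring the crashed instance back
srvctl relocate service -d orcl -s server_taf -i orcl2 -t orcl1     # move the service back to orcl1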

11. Check the configuration

[Screenshot: configuration settings]

In the next post we will create server pools and then add this service to a server pool. Basically, in this method the service is bound to instances (preferred and available); with server pools, the service is bound to the pool and can run on any of its nodes.

-Thanks

Managing Grid Infrastructure 12c : Adding a New Node in 12c RAC

In previous posts, here and here, we saw deleting a node and adding the deleted node back to the cluster.

In this post we will see how to add a new node in the cluster

Environment: assuming you have a new server with the OS (OEL) installed, then:

Phase 1: install or clone the Grid home. For this you can either use the GUI to install the GI binaries or clone an existing home; we will take the latter approach.

Phase 2: add this new node to the cluster.

Read on,

Phase 1 : Clone the Grid Home

1. Assuming that GI is installed on the source node, stop the GI stack using the following command:

[root@rac2 bin]# ./crsctl stop crs

2. Create a stage directory and a tar ball of the source

[root@rac2 grid]# mkdir /u01/stageGI

[root@rac2 grid]# cp -prf /u01/app/11.2.0.3/grid /u01/stageGI

[root@rac2 grid]# pwd

/u01/stageGI/grid

[root@rac2 grid]#

[root@rac2 grid]# tar -cvf /tmp/tar11203.tar .

[root@rac2 grid]#

3. Start GI on the source node

[root@rac2 bin]# ./crsctl start crs

4. Create the software locations on the new node and extract the tar ball. As root, execute on the new node (rac3):

mkdir -p /u01/app/11.2.0.3/grid

mkdir -p /u01/app/grid

mkdir -p /u01/app/oracle

chown grid:oinstall /u01/app/11.2.0.3/grid

chown grid:oinstall /u01/app/grid

chown oracle:oinstall /u01/app/oracle

chown -R grid:oinstall /u01

mkdir -p /u01/app/oracle

chmod -R 775 /u01/

As grid user execute on the new node rac3

cd /u01/app/11.2.0.3/grid

[grid@rac3 grid]$ tar -xvf /tmp/tar11203.tar

5. Clean the node specific configuration details and set proper permissions and ownership. As root execute the following

cd /u01/app/11.2.0.3/grid

rm -rf rac2

rm -rf log/rac2

rm -rf gpnp/rac2

rm -rf crs/init

rm -rf cdata

rm -rf crf

find gpnp -type f -exec rm -f {} \;

rm -rf network/admin/*.ora

find . -name '*.ouibak' -exec rm {} \;

find . -name '*.ouibak.1' -exec rm {} \;

rm -rf root.sh*

cd cfgtoollogs

find . -type f -exec rm -f {} \;

chown -R grid:oinstall /u01/app/11.2.0.3/grid

chown -R grid:oinstall /u01

chmod -R 775 /u01/

As grid user execute

[grid@rac3 cfgtoollogs]$ chmod u+s /u01/app/11.2.0.3/grid/bin/oracle

[grid@rac3 cfgtoollogs]$ chmod g+s /u01/app/11.2.0.3/grid/bin/oracle

6. Verify peer compatibility and node reachability (sketched below). You can see the logs here and here (uploaded separately to keep the post short).
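A minimal sketch of those checks with cluvfy, run from an existing node as the grid user, with the node names used in this post:

cluvfy comp peer -refnode rac2 -n rac3 -verbose    # compare OS/kernel/package setup of rac3 against rac2
cluvfy comp nodereach -n rac3 -verbose             # confirm rac3 is reachable
cluvfy stage -pre nodeadd -n rac3                  # overall pre-node-add check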

7. Run clone.pl from $GI_HOME/clone/bin on the new node rac3 as the grid user. Before running clone.pl, prepare the following information:

· ORACLE_BASE = /u01/app/grid

· ORACLE_HOME = /u01/app/11.2.0.3/grid

· ORACLE_HOME_NAME = Ora11g_gridinfrahome2 (use the OUI from any existing cluster node to confirm)

Run Clone.pl

[grid@rac3 bin]$ perl clone.pl ORACLE_HOME=/u01/app/11.2.0.3/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome2 ORACLE_BASE=/u01/app/grid SHOW_ROOTSH_CONFIRMATION=false

Note: basically, what clone.pl does is relink the binaries on the new node, since we have already copied the whole Oracle home from another node.

Phase 2 : Add the node to cluster

Run addnode.sh from any of the existing nodes:

[grid@rac2 bin]$ ./addNode.sh -silent -noCopy ORACLE_HOME=/u01/app/11.2.0.3/grid "CLUSTER_NEW_NODES={rac4}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac4-vip}" "CLUSTER_NEW_VIPS={rac4-vip}" CRS_ADDNODE=true CRS_DHCP_ENABLED=false

Note: you need not specify the VIPs here if you use GNS.

Error 1: 'node VIP already exists'. We had given a VIP that was already configured for the SCAN; here is the screenshot for the same.

[Screenshot: error during addition of the node] We corrected the entries in the hosts file and then ran it again.

[Screenshot: command-line addition of the node]

Do not run root.sh as soon as it completes.

First copy the gpnp profile and related files from the node on which you ran addnode.sh:

[grid@rac2 install]$ scp /u01/app/11.2.0.3/grid/crs/install/crsconfig_params rac3:/u01/app/11.2.0.3/grid/crs/install/crsconfig_params

[grid@rac2 install]$ scp /u01/app/11.2.0.3/grid/crs/install/crsconfig_addparams rac3:/u01/app/11.2.0.3/grid/crs/install/crsconfig_addparams

[grid@rac2 install]$

Copy the contents of /u01/app/11.2.0.3/grid/gpnp from any existing cluster node, for example rac2, to /u01/app/11.2.0.3/grid/gpnp on rac3 (sketched below).
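A minimal sketch of that copy, run from rac2 as the grid user:

scp -r /u01/app/11.2.0.3/grid/gpnp/* rac3:/u01/app/11.2.0.3/grid/gpnp/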

Finally, run root.sh:

[root@rac3 grid]# ./root.sh

Logfile

Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.1.0/grid
   Copying dbhome to /usr/local/bin …
   Copying oraenv to /usr/local/bin …
   Copying coraenv to /usr/local/bin …

Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2014/07/31 11:04:23 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2014/07/31 11:04:51 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
OLR initialization – successful
2014/07/31 11:05:45 CLSRSC-330: Adding Clusterware entries to file ‘oracle-ohasd.conf’
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘rac04′
CRS-2673: Attempting to stop ‘ora.drivers.acfs’ on ‘rac04′
CRS-2677: Stop of ‘ora.drivers.acfs’ on ‘rac04′ succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘rac04′ has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start ‘ora.mdnsd’ on ‘rac04′
CRS-2672: Attempting to start ‘ora.evmd’ on ‘rac04′
CRS-2676: Start of ‘ora.evmd’ on ‘rac04′ succeeded
CRS-2676: Start of ‘ora.mdnsd’ on ‘rac04′ succeeded
CRS-2672: Attempting to start ‘ora.gpnpd’ on ‘rac04′
CRS-2676: Start of ‘ora.gpnpd’ on ‘rac04′ succeeded
CRS-2672: Attempting to start ‘ora.gipcd’ on ‘rac04′
CRS-2676: Start of ‘ora.gipcd’ on ‘rac04′ succeeded
CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘rac04′
CRS-2676: Start of ‘ora.cssdmonitor’ on ‘rac04′ succeeded
CRS-2672: Attempting to start ‘ora.cssd’ on ‘rac04′
CRS-2672: Attempting to start ‘ora.diskmon’ on ‘rac04′
CRS-2676: Start of ‘ora.diskmon’ on ‘rac04′ succeeded
CRS-2676: Start of ‘ora.cssd’ on ‘rac04′ succeeded
CRS-2672: Attempting to start ‘ora.cluster_interconnect.haip’ on ‘rac04′
CRS-2672: Attempting to start ‘ora.ctssd’ on ‘rac04′
CRS-2676: Start of ‘ora.ctssd’ on ‘rac04′ succeeded
CRS-2676: Start of ‘ora.cluster_interconnect.haip’ on ‘rac04′ succeeded
CRS-2672: Attempting to start ‘ora.asm’ on ‘rac04′
CRS-2676: Start of ‘ora.asm’ on ‘rac04′ succeeded
CRS-2672: Attempting to start ‘ora.storage’ on ‘rac04′
CRS-2676: Start of ‘ora.storage’ on ‘rac04′ succeeded
CRS-2672: Attempting to start ‘ora.crf’ on ‘rac04′
CRS-2676: Start of ‘ora.crf’ on ‘rac04′ succeeded
CRS-2672: Attempting to start ‘ora.crsd’ on ‘rac04′
CRS-2676: Start of ‘ora.crsd’ on ‘rac04′ succeeded
CRS-6017: Processing resource auto-start for servers: rac04
CRS-2672: Attempting to start ‘ora.net1.network’ on ‘rac04′
CRS-2676: Start of ‘ora.net1.network’ on ‘rac04′ succeeded
CRS-2672: Attempting to start ‘ora.ons’ on ‘rac04′
CRS-2676: Start of ‘ora.ons’ on ‘rac04′ succeeded
CRS-6016: Resource auto-start has completed for server rac04
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2014/07/31 11:09:20 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
2014/07/31 11:09:44 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster … succeeded

Verify with olsnodes:

[Screenshot: olsnodes output]

Managing Grid Infrastructure 12c : Adding a deleted Node

Hello

This is a continuation of the previous post, built on the good work by Deepak.

In the last post we deleted the node rac04; here we will see how to add the same node back to the cluster.

 

1. Attach the home on rac04 (if you had detached it) – optional.

If you detached the home as we did in the previous post, you must first attach the home on the node. This assumes you have not run rootcrs.pl -deinstall, since that destroys the Oracle grid home and you would have to reinstall the binaries again; for now, assume you deleted the node rac04 and detached the home, and are adding it back.

$ORACLE_HOME/oui/bin/runInstaller -attachHome ORACLE_HOME_NAME="OraDb11g_home2" \
ORACLE_HOME="/u01/app/local/oracle/product/11.2.0/db_2" \
"CLUSTER_NODES={rac01,rac02,rac03}" \
LOCAL_NODE="rac04"

2. Run addnode.sh

To add a node in 12c, there is a separate addnode folder in the grid home, unlike in 11gR2.

[grid@rac01] cd $GI_HOME/addnode

[grid@rac01 addnode]$ ./addnode.sh -silent -noCopy ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NEW_NODES={rac04}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac04-vip}"

[Screenshot: addnode.sh output]

 

3. Execute root.sh on node rac04.

[Screenshot: root.sh on rac04] 4. Run updateNodeList on the other nodes in the cluster:

      [ grid@rac01 ~   ]$ cd /u01/app/12.1.0/grid/oui/bin/ 
      [ grid@rac01 bin ]$  ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NODES={rac01,rac02,rac03,rac04}" CRS=TRUE -silent

5. Check that the inventory has been updated.

 

6. Check olsnodes.

[Screenshot: olsnodes output]

 

 

 

Managing Grid Infrastructure 12c : Deletion of Node

Hello Readers,

I will be running through a series of Grid Infrastructure tasks: deletion of a node, addition of a node, server pools, and TAF in 12c.

I would like to thank and express my appreciation to Mr. Deepak Kumar from Bangalore, India, for his nice work in sending these screenshots.

Without his time and effort this would not have been possible at all. Please accept my thanks for all this, and for your kind thought of sharing it with the wider DBA community through my blog.

In this post we will see the deletion of a node from the 12c Grid Infrastructure. Let's run olsnodes to see the node list in the cluster; here we will be deleting the rac04 node.

[Screenshot: olsnodes output]

1. On the node which is to be deleted, as root:

      [root@rac04]#cd /u01/app/12.1.0/grid/crs/install

2. Disable the Oracle Clusterware applications and resources on the node (rac04):

      [root@rac04]#./rootcrs.pl -deconfig

     

[Screenshot: deconfiguring the clusterware on the node being deleted]

3. From a node that will remain (rac01), delete the node:

      [root@rac01 addnode]# cd /u01/app/12.1.0/grid/bin/


      [root@rac01 bin]# ./olsnodes -n -t

      [root@rac01]#crsctl delete node -n rac04
[Screenshot: removing the deleted node from an existing node]

 

4. As the grid user, on the node which is being deleted:

      [ grid@rac04 bin ]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NODES={ rac04 }" CRS=TRUE -silent -local
[Screenshot: updateNodeList output on rac04]

      [ grid@rac04 bin ]$ ./runInstaller -detachHome ORACLE_HOME=/u01/app/12.1.0/grid -silent -local

      Note: this is an optional step, for when you want to completely remove the node.

[Screenshot: detachHome output]

   Check olsnodes now to confirm that the node has been successfully removed from the cluster.

    [root@rac01 bin]# ./olsnodes -n -t
[Screenshot: final olsnodes output]

 

5. On any node which will remain, run the below command:

      [ grid@rac01 ~   ]$ cd /u01/app/12.1.0/grid/oui/bin/
      [ grid@rac01 bin ]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NODES={rac01,rac02,rac03}" CRS=TRUE -silent

[Screenshot: final olsnodes output after updateNodeList]