Heterogeneous Standby Database: Primary on Windows, Standby on Linux – Different Bits Together

Quick question: Is it possible to have a standby database on a different platform, i.e. Windows (primary) –> Linux (standby)?

Answer: Yes, from 11gR2 onwards you can have different operating systems on the primary and the standby.

Note: From 10g onwards, a mixed word size within the same platform, i.e. Linux 64-bit (primary) –> Linux 32-bit (standby), was already possible, but not different platforms.

How to: Use RMAN DUPLICATE DATABASE (from an active database, image copies, backup sets, or a service in 12c); this is the only supported method.

References: Metalink Note: [Article ID 413484.1] & [Article ID 1085687.1]

Test Environment:

(I have deliberately mixed up the environments to test all the possible combinations: 32-bit DB to 64-bit DB, non-ASM to ASM, and not just the operating system combination.)

Description                       Primary Database        Standby Database
Operating System                  Windows 64-bit          Linux 64-bit
Database Version                  11.2.0.1 – 32-bit       11.2.0.1 – 64-bit
Database Name                     oinfo11g                oinfo11g_dr
Storage                           Non-ASM                 ASM
Others:
  Password feature used           redo_transport_user     redo_transport_user
  Password file case sensitivity  off                     off

Steps:-

1. Create a 32-bit database on the Windows 64-bit operating system

2. Prepare the Linux 64-bit operating system with the 64-bit database binaries (no database; we will create it with RMAN duplicate)

3. Set up tnsnames.ora and listener.ora on both sides

4. Create the pfile and password file on the standby side

5. Run the RMAN duplicate command

6. Start the MRP process and keep the standby in Active Data Guard mode

Additional tests:

1. Add a datafile on the primary and verify that it gets created on the standby; since the platform and file-naming conventions differ, I specifically want to test this

2. Switch over the primary to the standby and switch back

3. Use the flashback feature

Step 3: Tnsnames.ora and Listener.ora

OINFO11G =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = Suresh)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = oinfo11g)
    )
  )

OINFO_DR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oinfo11g)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = oinfo_dr)
    )
  )

On Primary Listener, add static entry

(SID_DESC =
          (GLOBAL_NAME = oinfo11g)
          (ORACLE_HOME = c:\oracle\app\product\11.2.0\dbhome_1)
          (SID_NAME = oinfo11g)
    )

On Standby Listener, add static entry

(SID_DESC =
          (GLOBAL_NAME = oinfo_dr)
          (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)
          (SID_NAME = oinfo_dr)
    )
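
After editing, reload the listeners and make sure each site can resolve the other's alias (this assumes both aliases above are present in each side's tnsnames.ora):

On the primary (Windows):

C:\> lsnrctl reload
C:\> tnsping oinfo_dr

On the standby (Linux):

$ lsnrctl reload
$ tnsping oinfo11g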

Step 4:  create password file on standby

orapwd file=/u01/app/oracle/product/11.2.0/db_1/dbs/orapwoinfo_dr password=admin123 entries=5 ignorecase=y

Step 4 (continued): Create a single-entry pfile on the standby and create the necessary directories

!cat initoinfo_dr.ora

*.db_name=oinfo_dr

[oracle@oinfo11g admin]$ mkdir -p /u01/oradata/oinfo_dr
[oracle@oinfo11g admin]$ mkdir -p /u01/app/oracle/admin/oinfo_dr/adump
[oracle@oinfo11g admin]$ mkdir -p /u01/oradata/oinfo_dr/arch
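
The auxiliary instance must also be started in NOMOUNT with the stub pfile before RMAN can connect to it; a minimal sketch (the pfile path is assumed to be under $ORACLE_HOME/dbs – adjust it to wherever you created the file):

[oracle@oinfo11g admin]$ export ORACLE_SID=oinfo_dr
[oracle@oinfo11g admin]$ sqlplus / as sysdba
SQL> startup nomount pfile='/u01/app/oracle/product/11.2.0/db_1/dbs/initoinfo_dr.ora'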

Step 5: Connect RMAN to the primary as target and to the standby as auxiliary

[oracle@oinfo11g admin]$ rman target sys/admin123@oinfo11g auxiliary sys/admin123@oinfo_dr

DUPLICATE TARGET DATABASE
FOR STANDBY
FROM ACTIVE DATABASE
NOFILENAMECHECK
DORECOVER
SPFILE
SET DB_UNIQUE_NAME="oinfo_dr"
SET AUDIT_FILE_DEST="/u01/app/oracle/admin/oinfo_dr/adump"
SET DIAGNOSTIC_DEST="/u01/app/oracle"
SET LOG_ARCHIVE_DEST_2="service=oinfo_dr LGWR SYNC REGISTER VALID_FOR=(online_logfile,primary_role)"
SET FAL_SERVER="oinfo_dr"
SET FAL_CLIENT="oinfo"
SET CONTROL_FILES='/u01/oradata/oinfo_dr/control01.ctl','/u01/oradata/oinfo_dr/control02.ctl','/u01/oradata/oinfo_dr/control03.ctl'
SET DB_FILE_NAME_CONVERT='C:\oracle\app\oradata\oinfo11g\','/u01/oradata/oinfo_dr/'
SET LOG_FILE_NAME_CONVERT='C:\oracle\app\oradata\oinfo11g\','/u01/oradata/oinfo_dr/';

Failure: 1

Starting Duplicate Db at 22-MAR-14
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=19 device type=DISK

contents of Memory Script:
{
   backup as copy reuse
   targetfile  ‘c:\oracle\app\product\11.2.0\dbhome_1\DATABASE\PWDoinfo11g.ORA’ auxiliary format
‘/u01/app/oracle/product/11.2.0/db_1/dbs/orapwoinfo_dr’   targetfile
‘C:\ORACLE\APP\PRODUCT\11.2.0\DBHOME_1\DATABASE\SPFILEOINFO11G.ORA’ auxiliary format
‘/u01/app/oracle/product/11.2.0/db_1/dbs/spfileoinfo_dr.ora’   ;
   sql clone "alter system set spfile= ”/u01/app/oracle/product/11.2.0/db_1/dbs/spfileoinfo_dr.ora”";
}
executing Memory Script

Starting backup at 22-MAR-14
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=8 device type=DISK
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 03/22/2014 10:25:18
ORA-17627: ORA-12577: Message 12577 not found;  product=RDBMS; facility=ORA
continuing other job steps, job failed will not be re-run
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 03/22/2014 10:25:25
RMAN-03015: error occurred in stored script Memory Script
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 03/22/2014 10:25:25
ORA-17627: ORA-12577: Message 12577 not found;  product=RDBMS; facility=ORA

The database is 32-bit on the source side and 64-bit on the target side, so duplicate from active database will not work:

    Note: 1344615.1

    Issue:- Different ‘Word Size’ between Primary/Standby

    eg:
      Primary  OS Details Redhat Linux 5 64-bits
      Standby OS Details Redhat Linux 4 32-bits

    This is a known bug:
     Bug 8406972: ACTIVE DATABASE DUPLICATION 64BITS – 32 BITS FAILS WITH ORA-12577
    There is no fix available

    Solution:- Use backup-based RMAN duplication to create the standby database.

We need to take an RMAN backup on the primary and move the backup pieces to the standby server. Ensure the backup also contains the spfile and the control file.
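
A minimal sketch of that backup on the primary (the destination C:\backup\ is made up; copy the resulting pieces to /home/oracle/backup/ on the standby host afterwards, e.g. with scp or WinSCP):

RMAN> backup device type disk format 'C:\backup\%U' database plus archivelog;
RMAN> backup device type disk format 'C:\backup\%U' current controlfile for standby;
RMAN> backup device type disk format 'C:\backup\%U' spfile;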

Okay done…. ( :))

Step 5 (second attempt): This time, backup-based duplication from the standby site.

A few things to note:

1. The backup location

2. Ensure the Windows-format paths are written in CAPITAL letters: DB_FILE_NAME_CONVERT does a case-sensitive string match against the paths recorded in the control file

RMAN> DUPLICATE TARGET DATABASE FOR STANDBY BACKUP LOCATION '/home/oracle/backup/'
NOFILENAMECHECK
DORECOVER
SPFILE
SET DB_UNIQUE_NAME="oinfo_dr"
SET AUDIT_FILE_DEST="/u01/app/oracle/admin/oinfo_dr/adump"
SET DIAGNOSTIC_DEST="/u01/app/oracle"
SET LOG_ARCHIVE_DEST_2="service=oinfo_dr LGWR SYNC REGISTER VALID_FOR=(online_logfile,primary_role)"
SET FAL_SERVER="oinfo_dr"
SET FAL_CLIENT="oinfo"
SET CONTROL_FILES='/u01/oradata/oinfo_dr/control01.ctl','/u01/oradata/oinfo_dr/control02.ctl','/u01/oradata/oinfo_dr/control03.ctl'
SET DB_FILE_NAME_CONVERT='C:\oracle\app\oradata\oinfo11g\','/u01/oradata/oinfo_dr/'
SET LOG_FILE_NAME_CONVERT='C:\oracle\app\oradata\oinfo11g\','/u01/oradata/oinfo_dr/'
SET db_recovery_file_dest='/u01/oradata/flash_recovery_area/';

Starting Duplicate Db at 22-MAR-14

contents of Memory Script:
{
   restore clone spfile to  ‘/u01/app/oracle/product/11.2.0/db_1/dbs/spfileoinfo_dr.ora’ from
‘/home/oracle/backup/O1_MF_NCSNF_TAG20140322T103934_9LT6T794_.BKP’;
   sql clone "alter system set spfile= ”/u01/app/oracle/product/11.2.0/db_1/dbs/spfileoinfo_dr.ora”";
}
executing Memory Script

Starting restore at 22-MAR-14
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=19 device type=DISK

channel ORA_AUX_DISK_1: restoring spfile from AUTOBACKUP /home/oracle/backup/O1_MF_NCSNF_TAG20140322T103934_9LT6T794_.BKP
channel ORA_AUX_DISK_1: SPFILE restore from AUTOBACKUP complete
Finished restore at 22-MAR-14

sql statement: alter system set spfile= ”/u01/app/oracle/product/11.2.0/db_1/dbs/spfileoinfo_dr.ora”

contents of Memory Script:
{
   sql clone "alter system set  db_unique_name =
”oinfo_dr” comment=
”” scope=spfile";
   sql clone "alter system set  AUDIT_FILE_DEST =
”/u01/app/oracle/admin/oinfo_dr/adump” comment=
”” scope=spfile";
   sql clone "alter system set  DIAGNOSTIC_DEST =
”/u01/app/oracle” comment=
”” scope=spfile";
   sql clone "alter system set  LOG_ARCHIVE_DEST_2 =
”service=oinfo_dr LGWR SYNC REGISTER VALID_FOR=(online_logfile,primary_role)” comment=
”” scope=spfile";
   sql clone "alter system set  FAL_SERVER =
”oinfo_dr” comment=
”” scope=spfile";
   sql clone "alter system set  FAL_CLIENT =
”oinfo” comment=
”” scope=spfile";
   sql clone "alter system set  CONTROL_FILES =
”/u01/oradata/oinfo_dr/control01.ctl”, ”/u01/oradata/oinfo_dr/control02.ctl”, ”/u01/oradata/oinfo_dr/control03.ctl” comment=
”” scope=spfile";
   sql clone "alter system set  db_file_name_convert =
”C:\oracle\app\oradata\oinfo11g\”, ”/u01/oradata/oinfo_dr/” comment=
”” scope=spfile";
   sql clone "alter system set  LOG_FILE_NAME_CONVERT =
”C:\oracle\app\oradata\oinfo11g\”, ”/u01/oradata/oinfo_dr/” comment=
”” scope=spfile";
   sql clone "alter system set  db_recovery_file_dest =
”/u01/oradata/flash_recovery_area/” comment=
”” scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

sql statement: alter system set  db_unique_name =  ”oinfo_dr” comment= ”” scope=spfile

sql statement: alter system set  AUDIT_FILE_DEST =  ”/u01/app/oracle/admin/oinfo_dr/adump” comment= ”” scope=spfile

sql statement: alter system set  DIAGNOSTIC_DEST =  ”/u01/app/oracle” comment= ”” scope=spfile

sql statement: alter system set  LOG_ARCHIVE_DEST_2 =  ”service=oinfo_dr LGWR SYNC REGISTER VALID_FOR=(online_logfile,primary_role)” comment= ”” scope=spfile

sql statement: alter system set  FAL_SERVER =  ”oinfo_dr” comment= ”” scope=spfile

sql statement: alter system set  FAL_CLIENT =  ”oinfo” comment= ”” scope=spfile

sql statement: alter system set  CONTROL_FILES =  ”/u01/oradata/oinfo_dr/control01.ctl”, ”/u01/oradata/oinfo_dr/control02.ctl”, ”/u01/oradata/oinfo_dr/control03.ctl” comment= ”” scope=spfile

sql statement: alter system set  db_file_name_convert =  ”C:\oracle\app\oradata\oinfo11g\”, ”/u01/oradata/oinfo_dr/” comment= ”” scope=spfile

sql statement: alter system set  LOG_FILE_NAME_CONVERT =  ”C:\oracle\app\oradata\oinfo11g\”, ”/u01/oradata/oinfo_dr/” comment= ”” scope=spfile

sql statement: alter system set  db_recovery_file_dest =  ”/u01/oradata/flash_recovery_area/” comment= ”” scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2230768 bytes
Variable Size                222299664 bytes
Database Buffers             574619648 bytes
Redo Buffers                   2551808 bytes

contents of Memory Script:
{
   restore clone standby controlfile from  ‘/home/oracle/backup/O1_MF_NCSNF_TAG20140322T103934_9LT6T794_.BKP’;
}
executing Memory Script

Starting restore at 22-MAR-14
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=19 device type=DISK

channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/oradata/oinfo_dr/control01.ctl
output file name=/u01/oradata/oinfo_dr/control02.ctl
output file name=/u01/oradata/oinfo_dr/control03.ctl
Finished restore at 22-MAR-14

contents of Memory Script:
{
   sql clone ‘alter database mount standby database’;
}
executing Memory Script

sql statement: alter database mount standby database
released channel: ORA_AUX_DISK_1
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=19 device type=DISK

contents of Memory Script:
{
   set until scn  959309;
   set newname for tempfile  1 to
"C:\ORACLE\APP\ORADATA\OINFO11G\TEMP01.DBF";
   switch clone tempfile all;
   set newname for datafile  1 to
"C:\ORACLE\APP\ORADATA\OINFO11G\SYSTEM01.DBF";
   set newname for datafile  2 to
"C:\ORACLE\APP\ORADATA\OINFO11G\SYSAUX01.DBF";
   set newname for datafile  3 to
"C:\ORACLE\APP\ORADATA\OINFO11G\UNDOTBS01.DBF";
   set newname for datafile  4 to
"C:\ORACLE\APP\ORADATA\OINFO11G\USERS01.DBF";
   restore
   clone database
   ;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

renamed tempfile 1 to C:\ORACLE\APP\ORADATA\OINFO11G\TEMP01.DBF in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 22-MAR-14
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to C:\ORACLE\APP\ORADATA\OINFO11G\SYSTEM01.DBF
channel ORA_AUX_DISK_1: restoring datafile 00002 to C:\ORACLE\APP\ORADATA\OINFO11G\SYSAUX01.DBF
channel ORA_AUX_DISK_1: restoring datafile 00003 to C:\ORACLE\APP\ORADATA\OINFO11G\UNDOTBS01.DBF
channel ORA_AUX_DISK_1: restoring datafile 00004 to C:\ORACLE\APP\ORADATA\OINFO11G\USERS01.DBF
channel ORA_AUX_DISK_1: reading from backup piece /home/oracle/backup/O1_MF_NNNDF_TAG20140322T103934_9LT6RH1Q_.BKP
channel ORA_AUX_DISK_1: piece handle=/home/oracle/backup/O1_MF_NNNDF_TAG20140322T103934_9LT6RH1Q_.BKP tag=TAG20140322T103934
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:46
Finished restore at 22-MAR-14

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=5 STAMP=842873062 file name=/u01/app/oracle/product/11.2.0/db_1/dbs/C:ORACLEAPPORADATAOINFO11GSYSTEM01.DBF
datafile 2 switched to datafile copy
input datafile copy RECID=6 STAMP=842873062 file name=/u01/app/oracle/product/11.2.0/db_1/dbs/C:ORACLEAPPORADATAOINFO11GSYSAUX01.DBF
datafile 3 switched to datafile copy
input datafile copy RECID=7 STAMP=842873062 file name=/u01/app/oracle/product/11.2.0/db_1/dbs/C:ORACLEAPPORADATAOINFO11GUNDOTBS01.DBF
datafile 4 switched to datafile copy
input datafile copy RECID=8 STAMP=842873062 file name=/u01/app/oracle/product/11.2.0/db_1/dbs/C:ORACLEAPPORADATAOINFO11GUSERS01.DBF

contents of Memory Script:
{
   set until scn  959309;
   recover
   standby
   clone database
    delete archivelog
   ;
}
executing Memory Script

executing command: SET until clause

Starting recover at 22-MAR-14
using channel ORA_AUX_DISK_1

starting media recovery

channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=3
channel ORA_AUX_DISK_1: reading from backup piece /home/oracle/backup/RMAN_06P3QBL1%
channel ORA_AUX_DISK_1: piece handle=/home/oracle/backup/RMAN_06P3QBL1% tag=TAG20140322T104032
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/u01/oradata/flash_recovery_area/OINFO_DR/archivelog/2014_03_22/o1_mf_1_3_9lt9dj4d_.arc thread=1 sequence=3
channel clone_default: deleting archived log(s)
archived log file name=/u01/oradata/flash_recovery_area/OINFO_DR/archivelog/2014_03_22/o1_mf_1_3_9lt9dj4d_.arc RECID=1 STAMP=842873064
media recovery complete, elapsed time: 00:00:00
Finished recover at 22-MAR-14
Finished Duplicate Db at 22-MAR-14

RMAN>

Duplicate of my database done, standby created.

Step 6: Set parameters on both sides and start managed recovery

On Primary

SQL> alter system set standby_file_management=AUTO scope=both;

System altered.

SQL> alter system set fal_server=oinfo_dr scope=both;

System altered.

SQL> alter system set fal_client=oinfo11g scope=both;

System altered.

SQL> alter system set LOG_ARCHIVE_DEST_2='SERVICE=oinfo_dr LGWR SYNC VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) NET_TIMEOUT=60 DB_UNIQUE_NAME=oinfo_dr' scope=both;

System altered.

On Standby

SQL> alter system set LOG_ARCHIVE_DEST_1='LOCATION=/u01/oradata/oinfo_dr/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oinfo_dr' scope=both;

System altered.

SQL> alter system set LOG_ARCHIVE_DEST_2='SERVICE=oinfo11g LGWR SYNC VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) NET_TIMEOUT=60 DB_UNIQUE_NAME=oinfo11g' scope=both;

System altered.

SQL>

alter system SET DB_FILE_NAME_CONVERT='/u01/oradata/oinfo_dr/','C:\ORACLE\APP\ORADATA\OINFO11G\' scope=spfile;
alter system SET LOG_FILE_NAME_CONVERT='/u01/oradata/oinfo_dr/','C:\ORACLE\APP\ORADATA\OINFO11G\' scope=spfile;

SQL> alter system set standby_file_management=AUTO scope=both;

System altered.

SQL> alter system set fal_server=oinfo11g scope=both;

System altered.

SQL> alter system set fal_client=oinfo_dr scope=both;

System altered.

SQL> shut immediate
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL> startup nomount
ORACLE instance started.

Total System Global Area  801701888 bytes
Fixed Size                  2230768 bytes
Variable Size             222299664 bytes
Database Buffers          574619648 bytes
Redo Buffers                2551808 bytes

SQL> alter database mount;

Add Standby redo logs

On Primary

ALTER DATABASE ADD STANDBY LOGFILE group 4 ('C:\oracle\app\oradata\oinfo11g\stbyredo1.log') SIZE 100M;
ALTER DATABASE ADD STANDBY LOGFILE group 5 ('C:\oracle\app\oradata\oinfo11g\stbyredo2.log') SIZE 100M;
ALTER DATABASE ADD STANDBY LOGFILE group 6 ('C:\oracle\app\oradata\oinfo11g\stbyredo3.log') SIZE 100M;

On Standby

ALTER DATABASE ADD STANDBY LOGFILE group 4 ('/u01/oradata/oinfo_dr/stbyredo1.log') SIZE 100M;
ALTER DATABASE ADD STANDBY LOGFILE group 5 ('/u01/oradata/oinfo_dr/stbyredo2.log') SIZE 100M;
ALTER DATABASE ADD STANDBY LOGFILE group 6 ('/u01/oradata/oinfo_dr/stbyredo3.log') SIZE 100M;
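
To confirm the standby redo logs exist on each side, a quick check (run on both databases) could be:

SQL> select group#, thread#, bytes/1024/1024 mb, status from v$standby_log;
SQL> select group#, type, member from v$logfile where type = 'STANDBY';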

Keep the standby in sync with real-time apply (open it read only first if you want Active Data Guard):

SQL> alter database recover managed standby database disconnect from session using current logfile;

Database altered.

SQL> select name,open_mode,switchover_status,database_role from v$database;

NAME      OPEN_MODE            SWITCHOVER_STATUS    DATABASE_ROLE
--------- -------------------- -------------------- ----------------
OINFO11G  MOUNTED              SESSIONS ACTIVE      PHYSICAL STANDBY
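
To verify that redo is being shipped and applied, a few checks I normally run (a sketch; your sequence numbers will differ):

On the standby:

SQL> select process, status, thread#, sequence#, block# from v$managed_standby;
SQL> select thread#, max(sequence#) applied_seq from v$archived_log where applied = 'YES' group by thread#;

On the primary:

SQL> select dest_id, status, error from v$archive_dest where dest_id = 2;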

Test Cases:-

1. Adding a tablespace (datafile): added on the primary and created on the standby with no problems – Success (a quick sketch of this check follows the list)

2. Switchover: while opening the database after the switchover I hit a bootstrap issue; to resolve it, open the database in upgrade mode and then switch back – Success

3. Flashback: yet to test (no time; will update)
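
A sketch of the datafile test, in case you want to repeat it (the tablespace name is made up; with STANDBY_FILE_MANAGEMENT=AUTO and DB_FILE_NAME_CONVERT in place, the file should appear under /u01/oradata/oinfo_dr/ automatically):

On the primary:

SQL> create tablespace conv_test datafile 'C:\ORACLE\APP\ORADATA\OINFO11G\CONV_TEST01.DBF' size 50m;

On the standby, once the redo is applied:

SQL> select name from v$datafile where name like '%CONV_TEST%';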

-Thanks

Sureshgandhi

Current_SCN: how is it populated?

A friend asked me this question today, as he was asked the same thing in one of his interviews…

The common answer we give is "from some base table", since v$database (current_scn) is built on top of an underlying X$ structure.

The base table is X$KCCDI, and its column DICUR_SCN holds the current SCN. You can confirm this by looking at the definition of V$DATABASE in V$FIXED_VIEW_DEFINITION.

Let's look:


	SQL> select current_scn from v$database;

	CURRENT_SCN
	-----------
	102707163


	SQL> select DICKP_SCN,DICUR_SCN from X$KCCDI;

	DICKP_SCN        DICUR_SCN
	---------------- ----------------

	102711050        102711285

	SQL> select current_scn from v$database;
	CURRENT_SCN
	-----------
	102711289
Note: DICKP_SCN is the last checkpoint SCN; when a log switch happens, a checkpoint occurs and this SCN is updated to that point.

Why this question comes up in interviews I don't know, but it also gives us a way to see how far the database has moved since the last checkpoint: take DICUR_SCN – DICKP_SCN, or map both SCNs through SCN_TO_TIMESTAMP, as sketched below.
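
A rough sketch of that mapping (it needs a SYS connection for X$KCCDI, and SCN_TO_TIMESTAMP only works for reasonably recent SCNs):

	SQL> select to_number(dicur_scn) - to_number(dickp_scn) as scn_gap,
	            scn_to_timestamp(to_number(dicur_scn)) - scn_to_timestamp(to_number(dickp_scn)) as time_since_ckpt
	       from x$kccdi;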

-:)

12c Database : Network enhancements – DEFAULT_SDU_SIZE to 2M

How many of you recognize this structure at first glance? I am sure the old DBA junkies will name it immediately, but for those who do not: this is the OSI network model as it applies to Oracle – the layers that sit between a client and a database server and the way Oracle connects across them (see the layer summary below).

Oracle can connect to the database server in different ways; when Oracle Net is used, it communicates over this network model. An Oracle client has to be installed at the other end and talks through the Oracle Net layers, and the database side uses the same Oracle Net (Net8) foundation layer to listen for requests.

The OSI model depicts the following layers:

Layer 7 – Application: Standard communication services and applications that everyone can use.

Layer 6 – Presentation: Takes data from the application layer and presents it in a format recognized by the other layers; functions such as character-set conversion are done here.

Layer 5 – Session: Establishes, maintains and ends connections.

Layer 4 – Transport: Delivers data in sequence, without errors, loss or duplication; large messages can be delivered, constrained by the message-size limits imposed by the network layer.

Layer 3 – Network: Controls the way data is sent to the recipient; chooses the physical path the data should take, translates logical addresses (names) into physical addresses, and performs routing.

Layer 2 – Data Link: Handles the transfer of data across the network; establishes and terminates the logical link between two nodes.

Layer 1 – Physical: The lowest layer of the OSI model, concerned with the transfer of data over the physical medium (optical, electrical, etc.).

Okay, back to the post now – the Session layer above.

The Session Data Unit (SDU) is the buffer Oracle Net uses at the session layer for sending data to and receiving data from the transport layer (see the layer summary above). Generally, increasing the SDU value can improve performance and network utilization when transmitting large amounts of data.

However, until Oracle 11g the SDU size was limited to 64K. Starting with Oracle 12c, the supported SDU size is increased to 2M, allowing more efficient use of high-bandwidth networks.

This is very useful in Data Guard environments where network bandwidth is a constraint.

SDU can be set at the database level by configuring the DEFAULT_SDU_SIZE parameter in sqlnet.ora, or per endpoint with the SDU parameter in listener.ora. On the client side, SDU can be set with the DEFAULT_SDU_SIZE parameter in the client sqlnet.ora or with the SDU parameter in tnsnames.ora.
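
A minimal sketch of where these settings go (the host, service name and Oracle home below are made up; 2M = 2097152 bytes):

# sqlnet.ora (server side and/or client side)
DEFAULT_SDU_SIZE=2097152

# tnsnames.ora (per-connection override on the client)
ORCL12C =
  (DESCRIPTION =
    (SDU = 2097152)
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl12c))
  )

# listener.ora (per-SID override on the server)
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SDU = 2097152)
      (GLOBAL_DBNAME = orcl12c)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME = orcl12c)
    )
  )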

12c Database : Upgrades – 460 Slides deck from Oracle

Really – about 460 slides on just upgrades: all the information about the upgrade matrix, upgrade paths, references and so on. A must-read for all.

https://blogs.oracle.com/UPGRADE/entry/new_version_of_slide_deck?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+orana+(OraNA)

I will also upload the same soon.

-Sureshgandhi

RACONENODE : convert your rac database to active/passive

Configuring RACONENODE in RAC database.

We all know that Oracle RAC is an active/active configuration. Sometimes, though, because of application compatibility (design issues such as cache-fusion-unfriendly access patterns), you need to stick to an active/passive configuration while still wanting a high-availability solution, and you do not want to use third-party clustering such as VCS. The answer is yes: use the RAC One Node feature. Here are the steps.

You could also restrict the workload to one node at the service level by specifying preferred and available instances (a sketch of that alternative follows below). RAC One Node works much like that service-level setup, except that the second node is always passive, which removes the cache fusion overhead when your application is not RAC-friendly – for example, heavy reads of the same blocks from both nodes leading to buffer busy waits and contention.
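
For reference, the service-level alternative mentioned above would look something like this (the service name APPSVC is made up; the service runs on the preferred instance and fails over to the available one):

	[RACDB1@oinfo-rac1 ] $srvctl add service -d RACDB -s APPSVC -r RACDB1 -a RACDB2
	[RACDB1@oinfo-rac1 ] $srvctl start service -d RACDB -s APPSVC
	[RACDB1@oinfo-rac1 ] $srvctl status service -d RACDB -s APPSVC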

a) Check Status of RAC database

	[RACDB1@oinfo-rac1 ] $srvctl status database -d RACDB
	Instance RACDB1 is running on node oinfo-rac1
	Instance RACDB2 is running on node oinfo-rac2

b) Shut down one of the instances before converting the database to a RAC One Node configuration.

	[RACDB1@oinfo-rac1 ] $srvctl stop instance -d RACDB -i RACDB2

c) Only one instance can remain configured in the cluster before converting the database to a RAC One Node configuration.

	[RACDB1@oinfo-rac1 ] $srvctl convert database -d RACDB -c RACONENODE -i RACDB1
	PRCD-1214 : Administrator-managed RAC database RACDB has more than one instance

	The above error means you must have only one instance of the database configured and running in order to convert it to RAC One Node.

	[RACDB1@oinfo-rac1 ] $srvctl remove instance -d RACDB -i RACDB2
	Remove instance from the database RACDB? (y/[n]) y

d) It is mandatory to have at least one service defined for the database before you convert it to RAC One Node mode; otherwise you will get an error.

	[RACDB1@oinfo-rac1 ] $srvctl convert database -d RACDB -c RACONENODE -i RACDB1
	PRCD-1242 : Unable to convert RAC database RACDB to RAC One Node database because the database had no service added

	[RACDB1@oinfo-rac1 ] $srvctl add  service -d RACDB -s TEST -r RACDB1

e) Use the srvctl convert command to change your database from RAC mode to RAC One Node.

	[RACDB1@oinfo-rac1 ] $srvctl convert database -d RACDB -c RACONENODE -i RACDB1

f) Verify One Node RAC configuration.

	[RACDB1@oinfo-rac1 ] $srvctl config database -d RACDB
	Database unique name: RACDB
	Database name: RACDB
	Oracle home: /u01/app/oracle/product/11.2.0/db_1
	Oracle user: oracle
	Spfile: +ASMDATA/RACDB/spfileRACDB.ora
	Domain:
	Start options: open
	Stop options: immediate
	Database role: PRIMARY
	Management policy: AUTOMATIC
	Server pools: RACDB
	Database instances:
	Disk Groups: DATA, FRA
	Mount point paths:
	Services: TEST
	Type: RACOneNode    <<<<<<< The database has successfully turned into 1-Node Configuration.
	Online relocation timeout: 30
	Instance name prefix: RACDB1
	Candidate servers: oinfo-rac1
	Database is administrator managed

g) In case you want to relocate a RAC One Node database from one node to another, you can use the following syntax:

	[RACDB1@oinfo-rac1 ]srvctl relocate database -d RACDB -n oinfo-rac2 

Sureshgandhi

Unix : Solaris psrset – bind CPUs to specific processes

Using psrset, we can group specific CPUs into a processor set and bind specific processes to it. This is useful in RAC environments where node evictions occur because of CPU scheduling delays or CPU bottlenecks: we can bind the critical processes such as cssd, crsd and ohasd to a dedicated processor set so that CPU is always available to the clusterware.

Here is the documentation excerpt for psrset command

The psrset utility controls the management of processor sets. Processor sets allow the binding of processes or LWPs to groups of processors, rather than just a single processor. Processors assigned to processor sets can run only LWPs that have been bound to that processor set.

For example, in a RAC cluster, on node 1 create a processor set and assign the critical processes to it.

On Node 1:

	$ psrset -c 2
	successfully created pset 1
	successfully assigned processor 2 to pset 1

	$ psrset -a 1 2
	successfully assigned processor 2 to pset 1

	$ ps -fu root | grep -i grid
	    root  6547     1  0  Jan 13  ?        49:09 /home/oracle/product/11.2.0/grid/bin/cssdagent
	    root  6647     1  0  Jan 13  ?        487:44 /home/oracle/product/11.2.0/grid/bin/crsd.bin reboot
	    root  6553     1  0  Jan 13  ?        1633:50 /home/oracle/product/11.2.0/grid/bin/orarootagent.bin
	    root  4696     1  0  Jan 13  ?        486:50 /home/oracle/product/11.2.0/grid/bin/ohasd.bin reboot
	    root  6542     1  0  Jan 13  ?        49:54 /home/oracle/product/11.2.0/grid/bin/cssdmonitor
	    root  6847     1  0  Jan 13  ?        907:03 /home/oracle/product/11.2.0/grid/bin/orarootagent.bin
	    root  6643     1  0  Jan 13  ?        414:05 /home/oracle/product/11.2.0/grid/bin/octssd.bin reboot
 
	$ psrset -b 1 6643 6847 6542 4696 6553 6647 6547
	successfully bound pid 6643 to pset 1
	successfully bound pid 6847 to pset 1
	successfully bound pid 6542 to pset 1
	successfully bound pid 4696 to pset 1
	successfully bound pid 6553 to pset 1
	successfully bound pid 6647 to pset 1
	successfully bound pid 6547 to pset 1

Similarly, On Node 2:

 
	$ psrset -c 2
	successfully created pset 1
	successfully assigned processor 2 to pset 1

	$ psrset -a 1 2
	successfully assigned processor 2 to pset 1

	$ ps -fu root | grep -i grid
	    root 12165     1  0  Jan 10  ?        725:48 /home/oracle/product/11.2.0/grid/bin/crsd.bin reboot
	    root 12248     1  0  Jan 10  ?        962:46 /home/oracle/product/11.2.0/grid/bin/orarootagent.bin
	    root 12080     1  0  Jan 10  ?        1865:43 /home/oracle/product/11.2.0/grid/bin/orarootagent.bin
	    root 11975     1  0  Jan 10  ?        613:11 /home/oracle/product/11.2.0/grid/bin/ohasd.bin reboot
	   root 12161     1  0  Jan 10  ?        467:22 /home/oracle/product/11.2.0/grid/bin/octssd.bin reboot
	    root 12072     1  0  Jan 10  ?        79:46 /home/oracle/product/11.2.0/grid/bin/cssdmonitor
	    root 12074     1  0  Jan 10  ?        82:59 /home/oracle/product/11.2.0/grid/bin/cssdagent

	$ psrset -b 1 12165 12248 12080 11975 12161 12072 12074

	successfully bound pid 12165 to pset 1
	successfully bound pid 12248 to pset 1
	successfully bound pid 12080 to pset 1
	successfully bound pid 11975 to pset 1
	successfully bound pid 12161 to pset 1
	successfully bound pid 12072 to pset 1
	successfully bound pid 12074 to pset 1
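
To double-check the set and the bindings afterwards (PIDs from the node 1 example above; psrset -i lists the processors in a set, psrset -q shows which set a process is bound to):

	$ psrset -i 1
	$ psrset -q 6643 6847 6542 4696 6553 6647 6547

Keep in mind that processor sets created with psrset do not persist across a reboot, so the creation and bindings need to be re-established (or scripted) after every restart.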

Flashback : Guaranteed vs normal restore point

The key differences:

  • A guaranteed restore point does not require flashback database logging to be explicitly enabled, whereas flashing back to a normal restore point does.
  • A guaranteed restore point generally requires less I/O than normal flashback logging, because only one before-image of each block (as of the restore point) is kept, whereas normal flashback logging keeps different block versions at different times.
  • This also means that with only a guaranteed restore point you cannot flash the database back to an arbitrary point in time; you can only flash it back to the exact point at which the guaranteed restore point was created.
  • You can have both normal and guaranteed restore points in the database. If a normal restore point is created first and a guaranteed restore point later, the database can be flashed back to different points in time.
  • NOLOGGING operations (direct-path inserts etc.) have the same effect on both normal and guaranteed restore points.
  • Flashback logs in the flash recovery area are not aged out while a guaranteed restore point exists, whereas with normal restore points space is reclaimed according to DB_FLASHBACK_RETENTION_TARGET.
  • The size of an individual flashback log is generally tied to the log buffer size in both cases, but the volume differs: the number of flashback logs can vary, because a guaranteed restore point generates less flashback logging.
  • Only FLASHBACK DATABASE uses the flashback logs and archived logs; the other flashback features (flashback query, flashback table, AS OF queries) use undo.
  • Redo logs are required in both cases, because redo must be applied (rolled forward) to the flashed-back blocks during the flashback operation. For example, in the general case when flashback is on:
    • When a block is about to be modified, its before-image is kept in the flashback buffer (an area of the SGA).
    • RVWR flushes that buffer to the flashback logs; remember that only changed blocks are recorded in the flashback buffer, while redo keeps all information about the database.
    • Say the flashback log for a block covers SCN 1 to SCN 10.
    • When you want to flash back to a particular SCN, say SCN 5,
    • the flashback operation restores the block image as of SCN 1 (i.e. flashes back to SCN 1 or earlier) and then uses redo to reapply the changes from SCN 1 up to SCN 5 for that block.
    • Hence redo is required to bring the block to a state consistent with all the other blocks.
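
A quick SQL sketch of both kinds of restore point (the names are made up; FLASHBACK DATABASE requires the database to be mounted, not open):

	SQL> create restore point before_change;
	SQL> create restore point before_release guarantee flashback database;
	SQL> select name, guarantee_flashback_database, scn from v$restore_point;

	SQL> shutdown immediate
	SQL> startup mount
	SQL> flashback database to restore point before_release;
	SQL> alter database open resetlogs;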

Thanks to my friend who asked me this question yesterday; it led me to write up these notes for everyone's benefit.

-Sureshgandhi

OffTopic post : Telugu Website by my teacher

Hello All,

This post is off-topic from Oracle, which I generally do not do on my blog, but I have to do it: I recently met my childhood teacher and was delighted to find that he has done great work by creating a website for Telugu, using the web as a medium to popularize the Telugu language. We had a great time with him, and we (his old students) had the opportunity to felicitate him at our old students' meet-up held on 12-Jan-2014 at Vijayawada.

His name is K.V.S.V Prasad garu. He is a well-known Telugu teacher who has contributed remarkably to the spread of the Telugu language. He has not only attained mastery of Telugu, but we have always admired his simple and principled living.

Here it is, http://prasadkale.webnode.in/

The website is beautiful and attractive, and it carries a lot of Telugu material that can be useful for your kids – and of course we can brush up on our own Telugu.

Thanks for reading this..

SureshGandhi

BigData Posts – Update

Hello,

You might be wondering why there have been no posts on my blog lately. The fact is that I am busy learning BigData, practising, and preparing tutorial content on my new subdomain blog: http://bigdata.oracle-info.com

So far, four lessons have been posted on BigData Hadoop fundamentals; if you missed them and are interested, here they are:

  1. Hadoop Overview
  2. HDFS Architecture – Part 1
  3. HDFS Installation & SHELL – Part 2
  4. HDFS Using JAVA API – Part 3

Happy Reading !

-Thanks

Sureshgandhi

 
