Active Data Guard 12c New Features - Oracle Day 2014

Exadata MAA Presentation - Oracle Day 2013 Istanbul

Packt Publishing e-book Promotion

I would like to let you know that, to mark Columbus Day, Packt is offering its entire catalog of eBooks and videos at 50% off. This is the last day of the promotion, so if you plan to buy some e-books, don't miss it. http://bit.ly/1bqvB29

Discount code: COL50 

The Book is Out!


After one year of hard work, it's very nice to introduce the Oracle Data Guard 11gR2 Administration book, written by me and Nassyam Basha. Speaking for myself, I can honestly say that it turned out to be a really good book and that it will help its readers a lot in learning and implementing Data Guard. We wanted to prepare a book that starts Data Guard from scratch and covers all of its important details with clear and easy-to-follow examples, and I think we succeeded. I hope readers will think the same.

The book's web page already has plenty of information about what the book covers, who it is for, and so on, so I will not repeat that here. But I did want to write about the following topics, because I think they are important for potential readers (and writers).

- Why the Beginner's Guide Format...

The "Beginner's Guide" format is one of Packt's writing templates. "Beginner's Guide" doesn't mean the book is "light" in terms of content; the format is designed to help beginner readers understand the subject easily, with more examples, hands-on exercises, and Q&As. The book still covers all Data Guard topics, from the basics to the advanced ones, as you can see when you look at the Table of Contents.

- Why 11gR2...

We did, of course, discuss whether the book should cover 12c. However, most of the DBAs who will read this book need to configure Data Guard on 11gR2 for their production systems (and the rest will configure it on earlier releases, but not on 12c for now). As we all know, a new version needs at least one or two years before it is used on production systems, so a 12c book would be informative but not practical yet. This, on the other hand, is a hands-on book meant to help DBAs build Data Guard configurations, and DBAs will mostly be working on 11gR2 for at least several more years. It's not certain, but we may also update the book for 12c in the meantime.

- Writing a Book...

First of all, I have to say that making time to write was more difficult than I expected. The last 12 months have been the busiest of my life. I had to write chapters at night and on weekends, which of course affected my social and family life. Before authoring this book, I contributed a chapter to the Oracle Press book "RMAN 11g Backup & Recovery" in 2009, so I should have known what writing a book is like, but I think the idea of authoring a whole book made me forget that :) Preparing the outline, writing the first drafts, editor reviews, technical reviews, technical editor reviews, the final review, ever-increasing draft versions in between, writing the preface, writing the Title Information Sheet, reviewing the preface, reviewing the Title Information Sheet, and so on. In the end, a year full of writing :) When I look at my mailbox, the folder containing the e-mails related to the book holds 1,200 messages. Those are only the ones I was on; I'm sure many more were exchanged within the Packt team. That gives some idea of how busy the people on this team were and how much effort went in. In the end, I'm happy to hold the book in my hands, and I can say it's a strange feeling to browse through it :) Thanks again to everyone on the team, from the four corners of the world, who made this book possible.

- Reviews...

We received some (honest) reviews from very respected Oracle professionals saying that the book is really good. Our editors and technical reviewers also made very nice comments during the writing phase. (Thanks again to Syed Jaffar Hussain, Michael Seberg, Joel Perez, and the Packt editorial team for their valuable comments and edits.) This kind of feedback was a great motivation while writing. As I said, I also trust this book, and I'm pretty sure it'll be a good resource for learning Data Guard.

I already gave thanks in the opening pages of the book, but I don't want to miss the chance here to thank my family again for their support, and my friend Nassyam Basha for all his effort and friendship.

Data Guard Queries


After all these years, my five-year-old post "How To Query Dataguard Status" still draws the most visitors to this blog, so I wanted to write a fresh one including newer queries and commands. Here it is:

  • Standby database process status: You can run the following query on the standby database to see what the MRP and RFS processes are doing, and which block of which archived log sequence is being shipped or applied.


SQL> select process, status, thread#, sequence#, block#, blocks from v$managed_standby ;

PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK#     BLOCKS
--------- ------------ ---------- ---------- ---------- ----------
ARCH      CLOSING               1      69479     932864        261
ARCH      CLOSING               1      69480     928768        670
ARCH      CLOSING               2      75336     933888        654
ARCH      CLOSING               2      78079     930816        842
ARCH      CLOSING               1      69475     943104         79
RFS       IDLE                  0          0          0          0
...
RFS       RECEIVING             1      69481     688130       1024
MRP0      WAIT_FOR_LOG          2      78080          0          0
RFS       IDLE                  2      78080     873759          3
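
In addition to v$managed_standby, a quick way to see how far behind the standby is (a standard 11g query, added here as a sketch rather than part of the original list) is to check the transport and apply lag in v$dataguard_stats:

SQL> select name, value, time_computed from v$dataguard_stats where name in ('transport lag', 'apply lag');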

TROUG - Oracle EBS Day 2013


About "Expert Oracle Database Architecture" Book


I'm reading Thomas Kyte's "Expert Oracle Database Architecture", which is a must-read for everyone who is serious about being an Oracle DBA.

Jonathan Lewis already said in the foreword of the book: "Frankly, if every DBA and developer in the world were made to work carefully through Tom Kyte’s book, I’d probably have to start offering consultancy services to SQL Server users because the number of clients needing Oracle consultancy would drop dramatically." Still, I wanted to share some paragraphs from the "Data Files" part of the book. The following are taken from only three pages. I'm sharing this to show how full of information the book is, and to encourage you to buy a copy and read it right away.

Segments
==============================
You will find many CREATE statements that create multisegment objects. The confusion lies in the fact that a single CREATE statement may ultimately create objects that consist of zero, one, or more segments! For example, CREATE TABLE T ( x int primary key, y clob ) will create four segments: one for the TABLE T, one for the index that will be created in support of the primary key, and two for the CLOB (one segment for the CLOB is the LOB index and the other segment is the LOB data itself). On the other hand, CREATE TABLE T ( x int, y date ) cluster MY_CLUSTER will create zero segments (the cluster is the segment in this case).
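
To see this for yourself, you can create the table and query USER_SEGMENTS. This is my own quick sketch, not from the book; note that with 11.2 deferred segment creation, the segments may not appear until a row is inserted:

SQL> create table t ( x int primary key, y clob );
SQL> insert into t values ( 1, 'hello' );  -- forces segment creation if it was deferred
SQL> select segment_name, segment_type from user_segments;

You should see four segments: the table T, a SYS_C... index for the primary key, and a SYS_LOB.../SYS_IL... pair for the CLOB data and its LOB index.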

Extents
==============================
Extents vary in size from one Oracle data block to 2GB. 11g Release 2 introduced the concept of a “deferred” segment (a segment that will not immediately allocate an extent), so in that release and going forward, a segment might defer allocating its initial extent until data is inserted into it.
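
A minimal demonstration of this behavior (my own sketch, assuming an 11.2 database where deferred segment creation is enabled, which is the default):

SQL> create table t_deferred ( x int ) segment creation deferred;
SQL> select count(*) from user_segments where segment_name = 'T_DEFERRED';  -- COUNT(*) is 0, no extent yet
SQL> insert into t_deferred values ( 1 );
SQL> select count(*) from user_segments where segment_name = 'T_DEFERRED';  -- now returns 1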

Blocks
==============================
Here’s a little-known fact: the default block size for a database does not have to be a power of two. Powers of two are just a commonly used convention. You can, in fact, create a database with a 5KB, 7KB, or nKB block size, where n is between 2KB and 32KB.

Most blocks, regardless of their size, have the same general format, which looks something like:

 ------------------
|Header            | -> type of block (table block, index block, and so on),
|                  |    transaction information when relevant regarding active and
|                  |    past transactions on the block; and the address (location)
|                  |    of the block on the disk.
|------------------|    The next two block components are found on HEAP-
|                  |    organized tables.
|Table Directory   | -> The table directory, if present, contains information
|                  |    about the tables that store rows in this block.
|------------------|
|Row Directory     | -> The row directory contains information describing the
|                  |    rows that are to be found on the block.
|------------------|
|                  | -> The remaining two pieces of the block are straightforward:
|Free Space        |    there may be free space on a block, and
|                  |    then there will generally be used space that is
|                  |    currently storing data.
|------------------|
|                  |
|Data              |
|                  |
|------------------|
|Tail              |
 ------------------


Exceptions to this format include LOB segment blocks and hybrid columnar compressed blocks in Exadata storage, for example, but the vast majority of blocks in your database will resemble the format shown above.

Delete Force Noprompt Backuppiece


In some cases you may not be able to delete the obsolete RMAN backup pieces listed by the “report obsolete” command. In my case, after a switchover, I was getting the following error whenever my daily backup script tried to delete obsolete backups after the crosscheck:

RMAN-06216: WARNING: db_unique_name mismatch - 334 objects could not be updated
RMAN-06218: List of objects requiring same operation on database with db_unique_name XXX

And when I tried to delete these objects by connecting to the standby (old primary) database as the RMAN target, I got this error:

RMAN-06207: WARNING: 334 objects could not be deleted for DISK channel(s) due
RMAN-06208:          to mismatched status.  Use CROSSCHECK command to fix status
RMAN-06210: List of Mismatched objects
RMAN-06211: ==========================
....

Even after re-running the crosscheck, I still wasn't able to delete the obsolete backup pieces and kept getting the same error. So I used the “delete noprompt force backuppiece” command to get rid of this obsolete backup information. Below is an example RMAN run block (the tape channel is configured for TSM):


run
{
allocate channel c1 device type disk;
allocate channel c2 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)' connect='user/passwd@bckpsrv';
delete force noprompt backuppiece 'pamahdkt_1_1';
delete force noprompt backuppiece 'p9mahdkt_1_1';
delete force noprompt backuppiece 'p2mahdkr_1_1';
...
release channel c1;
release channel c2;
}
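
To see which pieces are in a mismatched state before deleting them, you can also query the controlfile directly. This is a sketch I'm adding, not from the original post:

SQL> select handle, status, device_type, completion_time
     from v$backup_piece
     where status <> 'A'
     order by completion_time;

Status 'A' means available; the 'X' (expired) and 'D' (deleted) rows are the ones crosscheck and delete operate on.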

Moving OCR & Voting Files In 11gR1 (Using Raw Devices)


-- Query current status of OCR and Voting Files:

oracle@server1:/> cat /etc/oracle/ocr.loc 
ocrconfig_loc=/dev/ocr1_test11g
ocrmirrorconfig_loc=/dev/ocr2_test11g
local_only=FALSE

oracle@server1:/> crsctl query css votedisk
 0.     0    /dev/voting1_test11g
 1.     0    /dev/voting2_test11g
 2.     0    /dev/voting3_test11g
Located 3 voting disk(s).


-- Check OCR integrity with either ocrcheck or “cluvfy comp ocr” command:

oracle@server1:/> ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :    1048300
         Used space (kbytes)      :       4656
         Available space (kbytes) :    1043644
         ID                       :  182479978
         Device/File Name         : /dev/ocr1_test11g
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/ocr2_test11g
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user

oracle@server1:/> cluvfy comp ocr -n db1,db2,db3 -verbose
Verifying OCR integrity 
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.
Uniqueness check for OCR device passed.
Checking the version of OCR...
OCR of correct Version "2" exists.
Checking data integrity of OCR...
Data integrity check for OCR passed.
OCR integrity check passed.
Verification of OCR integrity was successful. 
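
The output above only shows the status and integrity checks; the actual move in 11gR1 is done with the ocrconfig and crsctl utilities. The following is a sketch of the remaining steps, where /dev/ocr1_new, /dev/ocr2_new and /dev/voting4_new are hypothetical new raw devices (check the 11gR1 Clusterware documentation for the exact procedure on your platform):

-- Replace the OCR and its mirror with the new raw devices (run as root):
root@server1:/> ocrconfig -replace ocr /dev/ocr1_new
root@server1:/> ocrconfig -replace ocrmirror /dev/ocr2_new

-- Add a new voting disk and remove an old one (in 11gR1 the -force option
-- is used while CRS is stopped on all nodes):
root@server1:/> crsctl add css votedisk /dev/voting4_new -force
root@server1:/> crsctl delete css votedisk /dev/voting1_test11g -force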

Basic Oracle LogMiner Steps

This is a quick note about using LogMiner to mine archived redo logs on the database itself (that is, the source and mining databases are the same).
- Specify a dictionary file with the following procedure:

EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora','/oracle/acs/logmnr', OPTIONS => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);

- Determine the minimum and maximum archived log sequences that are in the mining scope, for example with a query against v$archived_log like the sketch below (the time window is illustrative):
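
SQL> select thread#, min(sequence#) min_seq, max(sequence#) max_seq
     from v$archived_log
     where first_time >= to_date('28.01.2013 10:00', 'DD.MM.YYYY HH24:MI')
     and next_time <= to_date('28.01.2013 11:00', 'DD.MM.YYYY HH24:MI')
     group by thread#;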


- Specify the first archived log file with the DBMS_LOGMNR.ADD_LOGFILE procedure and the dbms_logmnr.NEW option. Then add the other archived log sequences with the dbms_logmnr.ADDFILE option:

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '+ARCH1/xxx/archivelog/2013_01_28/thread_2_seq_31844.932.805914077', OPTIONS => dbms_logmnr.NEW); 
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '+ARCH1/xxx/archivelog/2013_01_28/thread_2_seq_31845.1334.805914145', OPTIONS => dbms_logmnr.ADDFILE); 
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '+ARCH1/xxx/archivelog/2013_01_28/thread_2_seq_31846.1062.805914271', OPTIONS => dbms_logmnr.ADDFILE);
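
- The note above ends with adding the log files; to complete the picture, the remaining steps (standard DBMS_LOGMNR usage, added here as a sketch) are starting the LogMiner session with the dictionary file, querying v$logmnr_contents, and ending the session:

EXECUTE DBMS_LOGMNR.START_LOGMNR( DICTFILENAME => '/oracle/acs/logmnr/dictionary.ora');

SQL> select scn, timestamp, seg_owner, operation, sql_redo
     from v$logmnr_contents
     where seg_owner = 'SCOTT';  -- example filter; adjust to your schema

EXECUTE DBMS_LOGMNR.END_LOGMNR;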