Auditing on Physical Standby Database with XML

Turning on auditing on a physical standby database may be preferred when you want to audit the queries against the read-only standby database. Obviously, it's not possible to keep audit records on the standby database by setting AUDIT_TRAIL to DB or DB_EXTENDED, because the database is read-only and the AUD$ table is only being synchronized with the primary AUD$. So the DB or DB_EXTENDED setting on a physical standby database doesn't cause any auditing. When we open an 11g physical standby database in read-only mode, the AUDIT_TRAIL parameter changes to OS automatically, and you'll see the following in the alert log:

AUDIT_TRAIL initialization parameter is changed to OS, as DB is NOT compatible for database opened with read-only access

Setting AUDIT_TRAIL to OS works for a standby database, but there's no OS_EXTENDED option, so if you need SQL_TEXT and SQL_BIND information, this will not be the solution for you. The only setting that records SQL_TEXT and SQL_BIND is XML_EXTENDED. This setting produces XML files under AUDIT_FILE_DEST.

The problem at this point is querying the audit records. Here comes the V$XML_AUDIT_TRAIL view, which reads the XML files into memory on the fly when queried. The database used to maintain a text file (adx_.txt) to keep track of the XML audit files for this purpose. However, in later releases the list of XML audit files is dynamically obtained when V$XML_AUDIT_TRAIL is queried.

In the MOS note 879305.1 (ORA-00600[kzaxgfl:lowmem] when ADX file has dangling entries) this change was stated as:

Starting with this release, the usage of the ADX file has been removed. The ADX global, generic file is no longer maintained in the adump destination. Consequently, starting with this release, the manual maintenance of the ADX file is no longer needed.
More precisely: from this release on, the XML index file is dynamically created before creating the XML file list.
This file is specific to the process, as opposed to the generic index file present in older versions, and will have the PID suffix in the index filename.
Once the operation is done, this file is deleted.
This file should never be manually managed - removing or changing the process-specific file manually can result in the error investigated in this note or in ORA-600 [KZAXGFL:IDX]
(Note 1392161.1 Queries on XML Audit Trail Views Crashes With: ORA 600 [KZAXGFL:IDX])

Obviously it was not a brilliant idea to keep the XML file list in a text file, which cannot track the changes when you delete XML files from the filesystem over time and thus causes ORA-600 errors; hence this behaviour was changed.

Anyway, to get to the main point, we can query V$XML_AUDIT_TRAIL on the standby to find the audit records. In my case, I prepared a shell script running on the primary server which connects to the primary database, queries the standby's V$XML_AUDIT_TRAIL via a database link and inserts the rows into a table on the primary, then deletes the XML files on the standby server. This way it's possible to filter the audit data from the XML files while inserting into the database. It's also possible to create need-based indexes to speed up your audit queries, or partition the table for administrative purposes.
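
The core of such a script is a single INSERT over the database link, roughly like the sketch below. The AUDIT_ARCHIVE table and the STBY database link are hypothetical names, and you should check which V$XML_AUDIT_TRAIL columns exist in your version:

SQL> insert into audit_archive (db_user, audit_time, sql_text)
     select db_user, extended_timestamp, sql_text
     from v$xml_audit_trail@stby
     where extended_timestamp > sysdate - 1;
SQL> commit;

Once the rows are safely stored on the primary, the XML files under AUDIT_FILE_DEST on the standby server can be deleted.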

A Deep Dive into ASM redundancy in Exadata

I made this presentation at the Serbia Oracle User Group event in Zlatibor and at the Harmony 2015 event in Tallinn last month. Both were very nice events where I really enjoyed myself, learned a lot and met many Oracle experts.

I prepared this presentation to answer the following questions, which I think are really important for Exadata Database Machine administrators:
- To what degree are disk and cell server failures tolerated?
- How can we understand whether ASM is able to rebuild redundancy after disk or cell failures?
- What happens when multiple disks fail consecutively, and does it matter which disks failed?
- What do we need to pay attention to in terms of redundancy when we do administrative tasks such as rolling restarts of cell servers, resizing diskgroups, etc.?
- How can we interpret and track the "usable space" in Exadata?
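
As a quick pointer for the last question: the usable space figures come from V$ASM_DISKGROUP. This is a generic ASM query rather than anything Exadata-specific; for a normal redundancy diskgroup, USABLE_FILE_MB is essentially (FREE_MB - REQUIRED_MIRROR_FREE_MB) / 2:

SQL> select name, type, total_mb, free_mb,
            required_mirror_free_mb, usable_file_mb
     from v$asm_diskgroup;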

Hope you find it useful, and please comment on anything incomprehensible or anything you think should be added.

TIMESTAMP to DATE Conversion with Online Redefinition

Online redefinition is a great way to make structural changes on "big" tables having "lots of" DML. Using online redefinition, partitioning/non-partitioning, adding/dropping columns, changing column data types, moving to another tablespace and more can be done with very small unavailability of the table compared with direct operations.

Here are some online redefinition MOS notes which make life easier:
  • DBMS_REDEFINITION Online Reorganization Of Tables (Doc ID 149564.1)
  • How To Re-Organize A Table Online (Doc ID 177407.1)
  • How To Shrink A Table Using Online Redefinition (Doc ID 1357878.1)
  • How To Compress A Table While It Is Online (Doc ID 1353967.1)
  • How To Move A Table To A New / Different Tablespace While It Is Online
  • How To Convert Long Column To Clob Using DBMS_REDEFINITION Package (Doc ID 251417.1)
  • Online Redefinition Of Table Using ROWID Option (Doc ID 210407.1)
  • An Example Of A Complex Online Table Redefinition (DBMS_REDEFINITION) (Doc ID 1358236.1)

My case was to convert a TIMESTAMP column to DATE.
In order to prepare a test environment:

SQL> create table table1 (col1 number, col2 timestamp);
SQL> insert into table1 values (1,systimestamp);
SQL> commit;
SQL> create table table2 (col1 number, col2 date);
SQL> BEGIN
       DBMS_REDEFINITION.CAN_REDEF_TABLE
       (uname => 'eb'
       ,tname => 'table1'
       ,options_flag => dbms_redefinition.cons_use_rowid);
     END;
     /
PL/SQL procedure successfully completed.

When converting column data types, the "col_mapping" parameter must be defined on the DBMS_REDEFINITION.START_REDEF_TABLE procedure. If it's not, the following error is raised:

SQL> BEGIN
       DBMS_REDEFINITION.START_REDEF_TABLE
       (uname => 'eb'
       ,orig_table => 'table1'
       ,int_table => 'table2'
       ,options_flag => dbms_redefinition.cons_use_rowid);
     END;
     /
ERROR at line 1:
ORA-42016: shape of interim table does not match specified column mapping

When we specify the TO_DATE function in the col_mapping parameter in our case:

SQL> BEGIN
       DBMS_REDEFINITION.START_REDEF_TABLE
       (uname => 'eb'
       ,orig_table => 'table1'
       ,int_table => 'table2'
       ,col_mapping => 'col1 col1,to_date(col2) col2'
       ,options_flag => dbms_redefinition.cons_use_rowid);
     END;
     /
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-01830: date format picture ends before converting entire input string

As a workaround to this problem, I used TO_CHAR first and then TO_DATE, and it worked.

SQL> BEGIN
       DBMS_REDEFINITION.START_REDEF_TABLE
       (uname => 'eb'
       ,orig_table => 'table1'
       ,int_table => 'table2'
       ,col_mapping => 'col1 col1,TO_DATE(TO_CHAR(col2,''dd/MM/yyyy hh24/mi/ss''),''dd/MM/yyyy hh24/mi/ss'') col2'
       ,options_flag => dbms_redefinition.cons_use_rowid);
     END;
     /
PL/SQL procedure successfully completed.

SQL> BEGIN
       DBMS_REDEFINITION.FINISH_REDEF_TABLE
       (uname => 'eb'
       ,orig_table => 'table1'
       ,int_table => 'table2');
     END;
     /
PL/SQL procedure successfully completed.

SQL> desc table1
 Name          Null?    Type
 ------------- -------- ---------------
 COL1                   NUMBER
 COL2                   DATE
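
The reason the direct TO_DATE(col2) mapping fails is that the timestamp is first implicitly converted to a string using NLS_TIMESTAMP_FORMAT, which includes fractional seconds that the default date format cannot parse, hence the ORA-01830. The TO_CHAR round trip simply drops the fractional seconds. Assuming default NLS settings, a quick test illustrates the difference:

SQL> select to_date(systimestamp) from dual;
     -- typically fails with ORA-01830 under default NLS settings
SQL> select to_date(to_char(systimestamp,'dd/mm/yyyy hh24:mi:ss'),'dd/mm/yyyy hh24:mi:ss') from dual;
     -- works, as the fractional seconds are gone

CAST(col2 AS DATE) might be a simpler alternative in the column mapping, but I haven't verified it through the materialized view refresh path, so test it before relying on it.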

On the other hand, when converting from DATE to TIMESTAMP you can use the TO_TIMESTAMP function alone with no errors on 11gR2. On 10g & 11gR1 there's an issue and you get the "ORA-42016: shape of interim table does not match specified column mapping" error. The following note offers a workaround for this issue using a user-defined function.

  • How to Convert Date Column to Timestamp Using DBMS_REDEFINITION Package? (Doc ID 556283.1)

create or replace function convert_date_to_ts(mydate date) return timestamp as
begin
  return to_timestamp(mydate);
end;
/

SQL> exec DBMS_REDEFINITION.START_REDEF_TABLE('scott','sno_date2','sno_timestamp2','myid myid, convert_date_to_ts(mydate) mydate', dbms_redefinition.cons_use_pk);

Standby Database SCN - x$kcvfh

The case was to roll forward a physical standby database with an RMAN SCN-based incremental backup taken from the primary. The standby database had just been restored, and the necessary archived logs were missing somehow (that's another story). It was something I had already done in the past, so we set to work with my previous notes. We took the backup, copied the files to the standby server and recovered the standby database. But the problem was that the RECOVER DATABASE NOREDO statement was doing nothing, so media recovery kept asking for the same archived logs.

We cross-checked the steps with the Data Guard Concepts and Administration Guide; there was nothing we were missing.

And then, after a warning from my team-mate, I checked the My Oracle Support note "Steps to perform for Rolling forward a standby database using RMAN Incremental Backup (Doc ID 836986.1)" and voila! My notes and the Administration Guide said "check the SCN on the standby database using V$DATABASE", while the support note was saying:

"You need to use the 'lowest SCN' from the the 3 queries below"
SQL> select min(fhscn) from x$kcvfh;
SQL> select min(f.fhscn) from x$kcvfh f, v$datafile d where f.hxfil =d.file# and d.enabled != 'READ ONLY';

Again, in my case no media recovery had been run on the standby, and the control file was not in sync with the datafile headers. V$DATABASE gives information about the database from the control file, and that was not the SCN I needed for the incremental backup. I needed the minimum SCN from the datafile headers, and that information is in X$KCVFH (Kernel Cache recoVery component File Headers). Each row returned from this view is populated by reading the header of a file on disk. While researching this view, I realized it gives important information, especially in problematic recovery cases.
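
For reference, the roll-forward itself is an SCN-based incremental backup taken on the primary, starting from the lowest SCN found by the queries above. The SCN value and paths below are placeholders; Doc ID 836986.1 lists the complete steps, including refreshing the standby control file:

RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/backup/stby_roll_%U' TAG 'ROLLFWD';

Then, after copying the backup pieces to the standby server:

RMAN> CATALOG START WITH '/backup/';
RMAN> RECOVER DATABASE NOREDO;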
Happy recovering...

Auditing on Oracle Database in a Nutshell (11gR2)

In an Oracle Database we can mention the following auditing types:
  • Mandatory Auditing
  • Standard Auditing
  • Fine-Grained Auditing
  • SYS Auditing
Mandatory Auditing writes database start-up/shut-down and SYSDBA/SYSOPER login/logout information into AUDIT_FILE_DEST. This auditing cannot be turned off, and it's always written into the operating system directory specified by AUDIT_FILE_DEST.

Standard Auditing is controlled with the AUDIT_TRAIL parameter (NONE, OS, DB, DB_EXTENDED, XML, XML_EXTENDED).
When DB or DB_EXTENDED is used, audit records are written into the database (the AUD$ table). (Not the mandatory auditing records; they're always on OS.)
The XML options write to the operating system in XML format.
When an *_EXTENDED setting is used, SQLBIND and SQLTEXT information is included in the audit trail; otherwise it is not.
By default, Standard Auditing audits SQL statements which use certain system privileges.
It's possible to audit statements, privileges and objects with Standard Auditing. For example:
  • Privilege auditing: audit select any table; 
  • Statement auditing: audit select table; 
  • Object auditing: audit select on SCOTT.SALARY;
Following views give information about current Standard Auditing configuration in the database:
  • DBA_STMT_AUDIT_OPTS; ==> describes current statements being audited across the system
  • DBA_PRIV_AUDIT_OPTS; ==> describes current system privileges being audited across the system
  • DBA_OBJ_AUDIT_OPTS;   ==> describes auditing options for all objects 
You can use the SQL "AUDIT" statement to set auditing options regardless of the setting of the AUDIT_TRAIL parameter. However, Oracle Database does not generate audit records until you enable Standard Auditing with the AUDIT_TRAIL parameter.
Auditing to OS offers higher performance compared with DB.
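
For example, enabling extended database auditing on 11g looks like the sketch below. AUDIT_TRAIL is a static parameter, so an instance restart is needed (older releases used the single value DB_EXTENDED instead of the value list):

SQL> alter system set audit_trail=DB,EXTENDED scope=spfile;
     -- restart the instance, then for example:
SQL> audit select on SCOTT.SALARY;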

Fine-Grained Auditing is used to audit operations like: 
  • Accessing a table outside of normal working hours 
  • Logging in from a particular IP address 
  • Selecting or updating a particular table column
The DBMS_FGA package is used to manage Fine-Grained Auditing. The DBA_AUDIT_POLICIES view describes all fine-grained auditing policies in the database.
It's not mandatory to enable Standard Auditing in order to use Fine-Grained Auditing or SYS Auditing.
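
As an illustration, a fine-grained policy auditing SELECTs and UPDATEs that touch a particular column can be created as below (the schema, table and policy names are just examples):

SQL> BEGIN
       DBMS_FGA.ADD_POLICY(
         object_schema   => 'SCOTT',
         object_name     => 'SALARY',
         policy_name     => 'SAL_COL_AUDIT',
         audit_column    => 'SAL',
         statement_types => 'SELECT,UPDATE');
     END;
     /

The generated records can then be queried from DBA_FGA_AUDIT_TRAIL.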
SYS Auditing: the AUDIT_SYS_OPERATIONS parameter (TRUE/FALSE) enables or disables the auditing of SQL statements directly issued by users connected with SYSDBA or SYSOPER privileges (SQL statements run from within PL/SQL procedures or functions are not audited). These audit records are written to the OS.
It's not mandatory to enable Standard Auditing in order to use SYS Auditing.

Active Data Guard 12c New Features - Oracle Day 2014

Packt Publishing e-book Promotion

I would like to inform you that, to mark Columbus Day, Packt is offering its entire catalog of eBooks and videos at 50% off. This is the last day of the promotion, so if you plan to buy some e-books, don't miss it.

Discount code: COL50 

The Book is Out!

After one year of hard work, it's very nice to introduce the Oracle Data Guard 11gR2 Administration book written by me and Nassyam Basha. As for me, I can honestly say that it has become a really good book and will help its readers a lot to learn and implement Data Guard. We wanted to prepare a book that starts Data Guard from scratch and covers all its important details with clear and easy-to-follow examples, and I think we succeeded. I hope readers will think the same.

There is a lot of information about what this book covers, who it is for, etc. on the book's web page, so I will not repeat it here. But I wanted to write about the following topics, because I think they are important for potential readers (and writers).

- Why the Beginner's Guide Format...

The "Beginner's Guide" format is one of the writing templates of Packt. "Beginner's Guide" doesn't mean to be a "light" book in terms of content. This format was designed to help the "beginner" reader to understand the subject easily with more examples, hands-ons and Q&As. However the book covers all Data Guard topics from beginning to the advanced topics. You can see this when you look at the Table of Contents.

- Why 11gR2...

We of course discussed whether the book should be for 12c. However, most of the DBAs who will read this book will need to configure Data Guard 11gR2 for their production systems (and the rest will configure lower releases, but not 12c for now). As we all know, a new version needs at least 1-2 years to be used on production systems, so a new-version book would be only informative but not practical for now. On the other hand, this is a hands-on book to help DBAs build Data Guard configurations, and DBAs will mostly work on 11gR2 for at least several more years. It's not certain, but we may also update the book for 12c in the meantime.

- Writing a Book...

First of all, I have to say that it has been more difficult than I thought to make time for writing. The last 12 months have been the busiest time of my life; I had to write chapters at nights and on weekends, which of course affected my social and family life. Before authoring this book, in 2009 I contributed to the "RMAN 11g Backup & Recovery" book from Oracle Press by writing a chapter. So I should have known what writing a book is like, but I think the idea of authoring a book made me forget that :) Preparing the outline, writing the first drafts, editor reviews, technical reviews, technical editor reviews, the final review, the ever-increasing draft versions in the meantime, writing the preface, writing the Title Information Sheet, reviews of both, and so on. In the end, a year full of writing :) When I look at my mailbox, the folder containing the e-mails related to the book has 1200 e-mails inside. These are only the e-mails I'm in, and I'm sure there are many more that were sent within the Packt team. This may show how busy the people in this team were and how much effort was put in. In the end, I'm happy to hold the book in my hands, and I can say it's a strange feeling to browse through it :) Thanks again to everyone in the team, from the four corners of the world, who made this book possible.

- Reviews...

We received some (honest) reviews from very important Oracle professionals saying that the book is really good. Our editors and technical reviewers also made very nice comments during the writing phase (thanks again to Syed Jaffar Hussain, Michael Seberg, Joel Perez and the Packt editor team for their valuable comments and edits). Seeing this nice feedback was the motivation for writing. As I said, I also trust in this book, and I'm pretty sure it'll be a good resource for learning Data Guard.

I already said thanks in the opening pages of the book, but I don't want to miss the chance to thank again my family for their support and my friend Nassyam Basha for all his effort and friendship.

Data Guard Queries

After all those years, my 5-year-old post "How To Query Dataguard Status" still gets the most visitors on this blog, so I wanted to write a fresh one including newer queries and commands. Here it is:

  • Standby database process status: You can run the following query on the standby database to see what the MRP and RFS processes are doing, and which blocks of which archivelog sequences are being shipped or applied.

SQL> select process, status, thread#, sequence#, block#, blocks from v$managed_standby ;

PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK#     BLOCKS
--------- ------------ ---------- ---------- ---------- ----------
ARCH      CLOSING               1      69479     932864        261
ARCH      CLOSING               1      69480     928768        670
ARCH      CLOSING               2      75336     933888        654
ARCH      CLOSING               2      78079     930816        842
ARCH      CLOSING               1      69475     943104         79
RFS       IDLE                  0          0          0          0
RFS       RECEIVING             1      69481     688130       1024
MRP0      WAIT_FOR_LOG          2      78080          0          0
RFS       IDLE                  2      78080     873759          3
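
  • Apply and transport lag: On the standby database, V$DATAGUARD_STATS reports the lag values computed from the incoming redo stream (a quick addition; the exact columns can differ slightly between patch levels):

SQL> select name, value, time_computed from v$dataguard_stats where name in ('apply lag','transport lag');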
