Mount Clone Database in TSPITR

"alter database mount clone database" is a statement that forces all datafiles to be put OFFLINE when mounting the target instance.
You'll see it's being used in the background when you run RMAN commands like:

"transport tablespace"
"recover tablespace .. auxiliary destination .."
"recover table"  (12c feature)

In my case, I was trying to recover a dropped table on 11.2.0.4 using "Tablespace Point In Time Recovery", following the MOS note "How to Recover From a DROP / TRUNCATE / DELETE TABLE with RMAN (Doc ID 223543.1)". The basic steps are:

- create pfile
- restore controlfile
- mount with "alter database mount clone database;"
- restore and recover a subset of the database with an RMAN script similar to:

RMAN> connect target /
RMAN> run {
        allocate channel c1 device type DISK;
        set until time "to_date('13-01-2016 13:50', 'DD-MM-YYYY HH24:MI')";
        set newname for datafile 1 to "+RECO";
        set newname for datafile 3 to "+RECO";
        set newname for datafile 4 to "+RECO";
        set newname for datafile 23 to "+RECO";
        restore tablespace SYSTEM, UNDOTBS01, UNDOTBS02, TOOLS;
        switch datafile all;
        sql "alter database datafile 1,3,4,23 online";
        recover database skip forever tablespace TEMP,INDX,USERS,etc.;
        sql "alter database rename file ''+RECO/xxx/onlinelog/group_1.281.739547347'' to ''+RECO''";
        sql "alter database rename file ''+RECO/xxx/onlinelog/group_2.282.739547353'' to ''+RECO''";
        sql "alter database rename file ''+RECO/xxx/onlinelog/group_3.283.739547359'' to ''+RECO''";
        sql "alter database rename file ''+RECO/xxx/onlinelog/group_4.284.739547365'' to ''+RECO''";
        release channel c1;
}

After completing the required steps, recovery raised the following error:

ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was not restored from a sufficiently old backup

Of course, I got the error when I tried to open with resetlogs. After struggling with possible causes, I found an internal support note similar to my issue: "ORA-1547 After Recovery When Database Was Mounted As Clone (Doc ID 252181.1)". The interesting part was that this note was published on Oct 23, 2003 for versions 8.1.7.4 to 9.2.0.4, and the fix was:

Use the following workaround until the bug gets resolved:
Do not mount the database as CLONE as in:
SQL> alter database mount clone database;
but mount it as a normal database with:
SQL> alter database mount;

But there's no bug number stated in the note. Even though I thought that mysterious bug must have been fixed many years ago, desperation made me try the workaround: mounting the instance with "alter database mount" and manually taking the required datafiles offline. Surprisingly it worked, and I was able to open the database with resetlogs.
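
For reference, a minimal sketch of that workaround (file numbers are illustrative; every datafile that will not be restored must be taken offline before running the RMAN script above):

SQL> alter database mount;
SQL> alter database datafile 5 offline;
SQL> alter database datafile 6 offline;
-- repeat for each datafile excluded from the restore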

As a result, it's fine for me that RMAN uses the "mount clone database" statement internally; however, the next time I do TSPITR, you can be sure that I'll mount the instance the way that worked for me :)

Backup to ZFS Storage Appliance Best Practices for Exadata Database Machine

I made a presentation called "Best Practices for Exadata Maximum Availability" at the BGOUG autumn event last week. In addition to general Exadata best practices about backup & recovery, disaster recovery, RAC & ASM, and corruption detection features, I reserved half of my time to speak about the integration of Exadata and the ZFS Storage Appliance for the best database backup & recovery performance. This topic is very important for ZFS SA users, because applying these best practices has a huge effect on B&R performance. Here are my slides for "Backup to ZFS Storage Appliance Best Practices for Exadata Database Machine". Find more information in the papers listed on the last slide.


Exadata 12.1.2.2.0 Software is Released

Good new features were announced, such as Flash Cache and Flash Log statistics in AWR, an automatic ASM data redundancy check when shutting down a storage server by pressing the power button or through ILOM, preserving the Flash Cache population during cell-to-cell rebalance, disabling SSH on storage servers, and running CellCLI commands from compute nodes via the new ExaCLI utility.

Find details in the MOS doc: Exadata 12.1.2.2.0 release and patch (20131726) (Doc ID 2038073.1)

Highlights
  • Oracle Exadata Database Machine X5-8 Support
    • New Exadata X5-8 Database Server
      • Exadata X5-8 updates the 8-socket database server to use the latest and fastest Intel Xeon E7-8895 v3 “Haswell-EX” processors with 18 cores (vs. 15 cores in X4-8) for 20% greater performance. The HBA no longer depends on battery to retain cached data, hence eliminating the need for preventive maintenance.
  • IPv6 Support
    • Compute nodes and storage servers are now enabled to use IPv6 for the management network, ILOM, and the client access network. This works for both bare metal and virtualized deployments.
  • Disabling SSH on Storage Servers
    • By default, SSH is enabled on storage servers. If required, you can "lock" the storage servers to disable SSH access. You can still perform operations on the cell using ExaCLI, which runs on compute nodes and communicates using HTTPS and REST APIs to a web service running on the cell.
  • Running CellCLI Commands from Compute Nodes
    • The new ExaCLI utility enables you to run CellCLI commands on cell nodes remotely from compute nodes. This is useful in cases where you locked the cell nodes by disabling SSH access (see the ExaCLI sketch after this list).
  • Updating Database Nodes with patchmgr
    • Oracle Exadata database nodes (running releases later than 11.2.2.4.2), Oracle Exadata Virtual Server nodes (dom0), and Oracle Exadata Virtual Machines (domU) can be updated, rolled back, and backed up in a rolling and non-rolling fashion using patchmgr in addition to running dbnodeupdate.sh standalone.  Performing this update via patchmgr enables you to run a single command to update multiple nodes at the same time; you do not need to run dbnodeupdate.sh separately on each node. The patchmgr and dbnodeupdate.sh to use for this activity are shipped within the new dbserver.patch.zip which can be downloaded via document 1553103.1. See the maintenance guide for more details.
  • Creating Users and Roles
    • You can control which commands users can run by granting privileges to roles, and granting roles to users. For example, you can specify that a user can run the "list griddisk" command but not "alter griddisk". This level of control is useful in Cloud environments, where you might want to allow full access to the system to only a few users (see the user and role sketch after this list).
  • MTU size on database nodes not changed when updating from 12.1.2.1.x to 12.1.2.2.0
    • When updating database nodes from existing 12.1.2.1.x releases to 12.1.2.2.0, MTU settings for InfiniBand devices remain the same. However, touching a file (touch /opt/oracle/EXADATA_UPDATE_MTU) before starting the update enables you to automatically adjust the settings to the Exadata default of 65520 during the update.
  • Fixed Allocations for Databases in the Flash Cache
    • The ALTER IORMPLAN command has a new attribute called flashcachesize which enables you to allocate a fixed amount of space in the flash cache for a database. The value specified in flashcachesize is a hard limit, which means that the database cannot use more than the specified value. This is different from the flashcachelimit value, which is a "soft" maximum: databases can exceed this value if the flash cache is not full (see the IORMPLAN sketch after this list).
  • Oracle Exadata Storage Statistics in AWR Reports
    • The Exadata Flash Cache Performance Statistics sections have been enhanced in the AWR report:
      • Added support for Columnar Flash Cache and Keep Cache.
      • Added a section on Flash Cache Performance Summary to summarize Exadata storage cell statistics along with database statistics.
    • The Exadata Flash Log Statistics section in the AWR report now includes statistics for first writes to disk and flash.
  • Increased Maximum Numbers of Database Processes
    • Please see table below for the maximum number of database processes supported per database node.  These numbers are higher than in previous releases. The best practice is to keep the process count below these values. If a subset of your workload is running parallel queries, the maximum database process count will be between the "Number of Processes with No Parallel Queries" column and the "Number of Processes with All Running Parallel Queries" column.
  • Custom Diagnostic Package for Storage Server Alerts
    • Storage servers automatically collect customized diagnostic packages that include relevant logs and traces upon generating a cell alert. This applies to all cell alerts, including both hardware alerts and software alerts. The timely collection of the diagnostic information prevents rollover of critical logs.
  • kdump Operational for 8-Socket Database Nodes
    • In releases earlier than 12.1.2.2.0, kdump, a service that creates and stores kernel crash dumps, was disabled on Exadata 8-socket database nodes because generating the vmcore took too long and consumed too much space. Starting with Exadata release 12.1.2.2.0, kdump is fully operational on 8-socket database nodes thanks to further optimizations.
  • Redundancy Check When Powering Down the Storage Server
    • If you try to gracefully shut down a storage server by pressing the power button on the front or going through ILOM, the storage server performs an ASM data redundancy check. If shutting down the storage server could lead to an ASM disk group force dismount due to reduced data redundancy, the shutdown is aborted, and all three LEDs on all hard drives blink for 10 seconds to alert the user that shutting down the storage server is not safe. You should not attempt a hard reset on the storage server.
  • Specifying an IP Address for SNMP Traps
    • If the IP address associated with eth0 is not registered with ASR Manager, you can specify a different IP address using the new "fromIP" field in the "alter cell" command (for storage servers) or the "alter dbserver" command (for database servers).
  • Reverse Offload Improvements
    • Reverse offload from storage servers to database nodes is essential in providing a more uniform usage of all the database and storage CPU resources available in an Exadata environment. In most configurations, there are more database CPUs than storage CPUs, and the ratio may vary depending on the hardware generation and the number of database and cell nodes.
  • Cell to Cell Rebalance Preserves Flash Cache Population
    • When a hard disk hits a predictive failure or true failure, and data needs to be rebalanced out of it, some of the data that resides on this hard disk might have been cached on the flash disk, providing better latency and bandwidth accesses for this data. To maintain an application's current performance SLA, it is critical to rebalance the data while honoring the caching status of the different regions on the hard disk during the cell-to-cell offloaded rebalance.
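
To illustrate the SSH locking, ExaCLI, and user/role items above, here is a minimal sketch; the cell name, user name, and role name are made up, and the syntax is my reading of the 12.1.2.2.0 documentation, so verify it against your release:

# run a CellCLI command from a compute node against a (possibly SSH-locked) cell
exacli -c celladmin@dm01celadm01 -e "list griddisk"

# on the cell: a user that can run "list griddisk" but not "alter griddisk"
CellCLI> create role gd_reader
CellCLI> grant privilege list on griddisk all attributes with all options to role gd_reader
CellCLI> create user monitor1 password=*
CellCLI> grant role gd_reader to user monitor1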
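
Similarly, a hedged ALTER IORMPLAN sketch for the fixed flash cache allocation (database names and sizes are illustrative):

CellCLI> alter iormplan dbplan=((name=PROD, flashcachesize=100G), (name=TEST, flashcachelimit=20G))

With this plan, PROD can never use more than 100G of flash cache, while TEST may exceed its 20G soft limit whenever the cache is not full.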

DBMCLI for Exadata Database Servers

Monitoring database servers for hardware failures in Exadata used to be an issue. We have CELLCLI for storage servers, where we can perform many administration tasks and set the SMTP configuration to receive e-mail on hardware failures. For the DB servers, we were able to configure an SMTP client on the database servers' ILOMs for email alerts, but it was not as straightforward and easy a configuration as in CELLCLI.
With Exadata Storage Server Release 12.1.2.1.0, we now have a new command-line interface called DBMCLI to configure, monitor, and manage the database servers. As with CELLCLI, we can monitor the status of hardware components, stop/start services, set thresholds, list alert history, and configure SMTP with DBMCLI.
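
For example, a few illustrative commands (a hedged sample, assuming the documented DBMCLI syntax):

DBMCLI> list alerthistory
DBMCLI> list physicaldisk
DBMCLI> alter dbserver restart services ms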

Detail here:
http://docs.oracle.com/cd/E50790_01/doc/doc.121/e51951/app_dbmcli.htm#DBMMN22053

An example output of "list dbserver detail":

DBMCLI> list dbserver detail
         name:                   dm01dbadm01
         bbuStatus:              normal
         coreCount:              8
         cpuCount:               16
         diagHistoryDays:        7
         fanCount:               16/16
         fanStatus:              normal
         id:                     1323FML0GM
         interconnectCount:      2
         ipaddress1:             192.168.1.1/22
         kernelVersion:          2.6.39-400.248.3.el6uek.x86_64
         locatorLEDStatus:       off
         makeModel:              Oracle Corporation SUN FIRE X4170 M3
         metricHistoryDays:      7
         msVersion:              OSS_12.1.2.1.1_LINUX.X64_150316.2
         powerCount:             2/2
         powerStatus:            normal
         releaseImageStatus:     success
         releaseVersion:         12.1.2.1.1.150316.2
         releaseTrackingBug:     20240049
         status:                 online
         temperatureReading:     22.0
         temperatureStatus:      normal
         upTime:                 62 days, 2:37
         msStatus:               running
         rsStatus:               running


To configure SMTP on database server:

DBMCLI> ALTER DBSERVER smtpServer='mailserver.domain.com', smtpFromAddr='exadata@domain.com', smtpFrom='exadata', smtpToAddr='someone@domain.com,anotherone@domain.com', notificationPolicy='critical,warning,clear', notificationMethod='mail', smtpPort=25, smtpPwd=somepassword, smtpUseSSL=FALSE
DBMCLI> ALTER DBSERVER VALIDATE MAIL
DBServer dm01dbadm01 successfully altered

"list dbserver detail" output after configuring SMTP:

DBMCLI> list dbserver detail
         name:                   dm01dbadm01
         bbuStatus:              normal
         coreCount:              8
         cpuCount:               16
         diagHistoryDays:        7
         emailFormat:            text
         fanCount:               16/16
         fanStatus:              normal
         id:                     1323FML0GM
         interconnectCount:      2
         ipaddress1:             192.168.1.1/22
         kernelVersion:          2.6.39-400.248.3.el6uek.x86_64
         locatorLEDStatus:       off
         makeModel:              Oracle Corporation SUN FIRE X4170 M3
         metricHistoryDays:      7
         msVersion:              OSS_12.1.2.1.1_LINUX.X64_150316.2
         notificationMethod:     mail
         notificationPolicy:     clear
         powerCount:             2/2
         powerStatus:            normal
         releaseImageStatus:     success
         releaseVersion:         12.1.2.1.1.150316.2
         releaseTrackingBug:     20240049
         smtpFrom:               exadata
         smtpFromAddr:           exadata@domain.com
         smtpPort:               25
         smtpPwd:                ******
         smtpServer:             mailserver.domain.com
         smtpToAddr:             someone@domain.com,anotherone@domain.com
         status:                 online
         temperatureReading:     21.0
         temperatureStatus:      normal
         upTime:                 62 days, 3:05
         msStatus:               running
         rsStatus:               running

Auditing on Physical Standby Database with XML

Turning on auditing on a physical standby database may be preferred when you want to audit the queries run against the read-only standby database. Obviously, it's not possible to keep audit records in the standby database by setting audit_trail to DB or DB_EXTENDED, because the database is read-only and the aud$ table is simply synchronized with the primary's aud$. So the DB or DB_EXTENDED setting on a physical standby database doesn't cause any auditing. When we open an 11g physical standby database in read-only mode, the AUDIT_TRAIL parameter changes to OS automatically. What you'll see in the alert log is:

AUDIT_TRAIL initialization parameter is changed to OS, as DB is NOT compatible for database opened with read-only access

Setting audit_trail to OS works for a standby database, but there's no OS_EXTENDED option, so if you need sql_text and sql_bind information, this will not be the solution for you. The only setting that records sql_text and sql_bind is XML_EXTENDED, which produces XML files under audit_file_dest.
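
A minimal sketch of the setting (AUDIT_TRAIL is a static parameter, so the standby instance must be restarted for it to take effect):

SQL> alter system set audit_trail=XML,EXTENDED scope=spfile;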

The problem at this point will be querying the audit records. Here comes the view V$XML_AUDIT_TRAIL, which reads the XML files into memory on the fly when queried. The database used to maintain a text file (adx_.txt) that kept track of the XML audit files for this purpose. However, starting with 11.2.0.2, the list of XML audit files is obtained dynamically when V$XML_AUDIT_TRAIL is queried.

In the MOS note 879305.1 (ORA-00600 [kzaxgfl:lowmem] when ADX file has dangling entries), this change was stated as:

Starting with release 11.2.0.2, the usage of the ADX file has been removed. The ADX global, generic file is no longer maintained in the adump destination. Consequently, starting with this release, the manual maintenance of the ADX file is no longer needed. 
More precisely: From 11.2.0.2, the xml index file is dynamically created before creating the xml file list.
This file is specific to the process, as opposed to the generic index file present in older versions, and will have the PID suffixed to the index filename.
Once the operation is done, this file is deleted.
This file should never be manually managed - removing or changing the process specific file manually can result in the error investigated in this note or in ORA 600 [KZAXGFL:IDX]
Note 1392161.1 Queries on XML Audit Trail Views Crashes With: ORA 600 [KZAXGFL:IDX] 

Obviously it was not a brilliant idea to keep the XML file list in a text file, which cannot track changes when you delete XML files from the filesystem over time, and thus causes ORA-600 errors. Hence this behaviour was changed in 11.2.0.2.

Anyway, to get to the main point: we can query V$XML_AUDIT_TRAIL on the standby to find the audit records. In my case, I prepared a shell script running on the primary which connects to the primary database, queries the standby's V$XML_AUDIT_TRAIL via a database link, inserts the rows into a table on the primary, and then deletes the XML files on the standby server. This way it's possible to filter the audit data from the XML files while inserting into the database. It's also possible to create need-based indexes to speed up your audit queries, or to partition the table for administrative purposes.
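
A minimal sketch of the loading step (the db link name stby_link and the table standby_audit are made up, and the column list is trimmed for brevity):

SQL> insert into standby_audit
       select db_user, os_user, os_host, extended_timestamp, sql_text, sql_bind
       from   v$xml_audit_trail@stby_link
       where  db_user not in ('SYS');
SQL> commit;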


A Deep Dive into ASM redundancy in Exadata

I made this presentation at the Serbia Oracle User Group event in Zlatibor and at the Harmony 2015 event in Tallinn last month. Both were very nice events that I really enjoyed, where I learned a lot and met many Oracle experts.

I prepared this presentation to answer the following questions, which I think are really important for Exadata Database Machine administrators:
- To what degree are disk and cell server failures tolerated?
- How can we tell whether ASM is able to rebuild redundancy after disk or cell failures?
- What happens when multiple disks fail consecutively, and does it matter which disks fail?
- What do we need to pay attention to, in terms of redundancy, when we do administrative tasks such as rolling restarts of cell servers, resizing disk groups, etc.?
- How can we interpret and track the "usable space" in Exadata?

Hope you find it useful, and please comment on anything that is unclear or missing.




TIMESTAMP to DATE Conversion with Online Redefinition

Online redefinition is a great way to make structural changes on "big" tables with "lots of" DML. Using online redefinition, partitioning or non-partitioning, adding or dropping columns, changing column data types, moving to another tablespace, and more can be done with very little unavailability of the table compared with direct operations.

Here are some online redefinition MOS notes which make life easier:
  • Dbms_Redefinition Online Reorganization Of Tables (Doc ID 149564.1)
  • How To Re-Organize A Table Online (Doc ID 177407.1)
  • How To Shrink A Table Using Online Redefinition (Doc ID 1357878.1)
  • How To Compress A Table While It Is Online (Doc ID 1353967.1)
  • How To Move A Table To A New / Different Tablespace While It Is Online
  • How To Convert Long Column To Clob Using Dbms_Redefinition Package (Doc ID 251417.1)
  • Online Redefinition Of Table Using Rowid Option (Doc ID 210407.1)
  • An Example Of A Complex Online Table Redefinition (Dbms_Redefinition) (Doc ID 1358236.1)

My case was to convert a TIMESTAMP column to DATE.
In order to prepare a test environment:

SQL> create table table1 (col1 number, col2 timestamp);
SQL> insert into table1 values (1,systimestamp);
SQL> commit;
SQL> create table table2 (col1 number, col2 date);
SQL> BEGIN 
 DBMS_REDEFINITION.CAN_REDEF_TABLE
 (uname => 'eb'
 ,tname => 'table1'
 ,options_flag => dbms_redefinition.cons_use_rowid);
 END;
 /
PL/SQL procedure successfully completed.



When converting column data types, the "col_mapping" parameter must be provided to the DBMS_REDEFINITION.START_REDEF_TABLE procedure. If not, the following error is raised:

SQL> BEGIN 
DBMS_REDEFINITION.START_REDEF_TABLE 
(uname => 'eb'
,orig_table => 'table1'
,int_table => 'table2'
,options_flag => dbms_redefinition.cons_use_rowid);
 END;
 /
ERROR at line 1:
ORA-42016: shape of interim table does not match specified column mapping


When we specify the TO_DATE function in the col_mapping parameter, in our case:

SQL> BEGIN 
DBMS_REDEFINITION.START_REDEF_TABLE 
(uname => 'eb'
,orig_table => 'table1'
,int_table => 'table2'
,col_mapping => 'col1 col1,to_date(col2) col2'
,options_flag => dbms_redefinition.cons_use_rowid);
 END;
 /
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-01830: date format picture ends before converting entire input string

The TO_DATE call implicitly converts the TIMESTAMP to a string using NLS_TIMESTAMP_FORMAT, and the default date mask cannot consume the fractional seconds, hence ORA-01830. As a workaround, I used TO_CHAR first and then TO_DATE with an explicit format mask, and it worked.

SQL> BEGIN       
DBMS_REDEFINITION.START_REDEF_TABLE 
(uname => 'eb'
,orig_table => 'table1'
,int_table => 'table2'
,col_mapping => 'col1 col1,TO_DATE(TO_CHAR(col2,''dd/MM/yyyy hh24/mi/ss''),''dd/MM/yyyy hh24/mi/ss'') col2'
,options_flag => dbms_redefinition.cons_use_rowid);
 END;
 /
PL/SQL procedure successfully completed.

SQL> BEGIN   
DBMS_REDEFINITION.FINISH_REDEF_TABLE
(uname => 'eb'
,orig_table => 'table1'
,int_table => 'table2');
 END;
 /
PL/SQL procedure successfully completed.

SQL> desc table1
 Name          Null?    Type
 ------------- -------- ---------------
 COL1                   NUMBER
 COL2                   DATE

Note:
On the other hand, when converting from DATE to TIMESTAMP, you can use just the TO_TIMESTAMP function with no errors on 11gR2. For 10g & 11gR1 there's an issue, and you get the "ORA-42016: shape of interim table does not match specified column mapping" error. The following note offers a workaround for this issue by using a user-defined function.

  • How to Convert Date Column to Timestamp Using DBMS_REDEFINITION Package? (Doc ID 556283.1)

create or replace function convert_date_to_ts(mydate date) return timestamp
is
begin
  return to_timestamp(mydate);
end;
/

BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE('scott','sno_date2','sno_timestamp2',
    'myid myid, convert_date_to_ts(mydate) mydate',
    dbms_redefinition.cons_use_pk);
END;
/


Standby Database SCN - x$kcvfh

The case was to roll forward a physical standby with an RMAN SCN-based incremental backup taken from the primary. The standby database had just been restored, and the necessary archived logs were somehow missing (that's another story). It was something I had already done in the past, so we set to work with my previous notes. We took the backup, copied the files to the standby server, and recovered the standby database. But the problem was that the RECOVER DATABASE NOREDO statement did nothing, so media recovery kept asking for the same archived logs.

I cross-checked the steps with the Data Guard Concepts and Administration Guide; there was nothing we were missing.
http://docs.oracle.com/cd/E11882_01/server.112/e41134/rman.htm#SBYDB00759

And then, prompted by my team-mate, I checked the note on My Oracle Support "Steps to perform for Rolling forward a standby database using RMAN Incremental Backup. (Doc ID 836986.1)" and voila! My notes and the Administration Guide said "check the SCN on the standby database using V$DATABASE", while the support note was saying:

"You need to use the 'lowest SCN' from the the 3 queries below"
SQL> SELECT CURRENT_SCN FROM V$DATABASE;
SQL> select min(fhscn) from x$kcvfh;
SQL> select min(f.fhscn) from x$kcvfh f, v$datafile d where f.hxfil =d.file# and d.enabled != 'READ ONLY';

Again, in my case no media recovery had been run on the standby, and the control file was not in sync with the datafile headers. V$DATABASE gives information about the database from the control file, and that was not the SCN I needed for the incremental backup. I needed the minimum SCN from the datafile headers, and that information is in x$kcvfh (Kernel Cache Recovery Component File Headers). Each row returned from this view is populated by reading the header of a file on disk. While searching more on this view, I realized it gives important information, especially in problematic recovery cases.
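
For completeness, a hedged sketch of the roll-forward once the lowest SCN is in hand (the SCN value and backup location are illustrative):

# on the primary
RMAN> backup incremental from scn 1234567 database format '/tmp/stby_roll_%U' tag 'FOR_STANDBY';

# on the standby, after copying the backup pieces over
RMAN> catalog start with '/tmp/stby_roll';
RMAN> recover database noredo;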
Happy recovering...






Auditing on Oracle Database in a Nutshell (11gR2)

In an Oracle Database, we can mention the following auditing types:
  • Mandatory Auditing
  • Standard Auditing
  • Fine-Grained Auditing
  • SYS Auditing
Mandatory Auditing causes database startup/shutdown and SYSDBA/SYSOPER login/logout information to be written into AUDIT_FILE_DEST. This auditing cannot be turned off, and it's always written into the operating system directory specified by AUDIT_FILE_DEST.

Standard Auditing is controlled with the AUDIT_TRAIL parameter (NONE, OS, DB, DB_EXTENDED, XML, XML_EXTENDED).
When DB or DB_EXTENDED is used, audit records are written into the database (aud$ table). (Not the mandatory auditing records; they're always on the OS.)
The XML options write to the operating system in XML format.
When an *_EXTENDED setting is used, SQLBIND and SQLTEXT information is included in the audit trail; otherwise it is not.
By default, Standard Auditing audits SQL statements which use the following privileges:
ALTER ANY PROCEDURE | CREATE ANY LIBRARY | DROP ANY TABLE | ALTER ANY TABLE | CREATE ANY PROCEDURE | DROP PROFILE | ALTER DATABASE | CREATE ANY TABLE | DROP USER | ALTER PROFILE | CREATE EXTERNAL JOB | EXEMPT ACCESS POLICY | ALTER SYSTEM | CREATE PUBLIC DATABASE LINK | GRANT ANY OBJECT PRIVILEGE | ALTER USER | CREATE SESSION | GRANT ANY PRIVILEGE | AUDIT SYSTEM | CREATE USER | GRANT ANY ROLE | CREATE ANY JOB | DROP ANY PROCEDURE
It's possible to audit Statements, Privileges and Objects with Standard Auditing. For example:
  • Privilege auditing: audit select any table; 
  • Statement auditing: audit select table; 
  • Object auditing: audit select on SCOTT.SALARY;
Following views give information about current Standard Auditing configuration in the database:
  • DBA_STMT_AUDIT_OPTS ==> describes current statements being audited across the system
  • DBA_PRIV_AUDIT_OPTS ==> describes current system privileges being audited across the system
  • DBA_OBJ_AUDIT_OPTS ==> describes auditing options for all objects
You can use the SQL "AUDIT" statement to set auditing options regardless of the setting of AUDIT_TRAIL parameter. However, Oracle Database does not generate audit records until you enable Standard Auditing using AUDIT_TRAIL parameter.
Auditing to OS offers higher performance when compared with DB. 

Fine-Grained Auditing is used to audit operations like: 
  • Accessing a table outside of normal working hours 
  • Logging in from a particular IP address 
  • Selecting or updating a particular table column
DBMS_FGA package is used to manage Fine-Grained Auditing. DBA_AUDIT_POLICIES view describes all fine-grained auditing policies in the database.
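As an illustration, a minimal DBMS_FGA policy for the "particular table column" case above (reusing the SCOTT.SALARY example; the policy name, column, and condition are made up):

BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'SCOTT',
    object_name     => 'SALARY',
    policy_name     => 'SALARY_AMT_AUD',
    audit_column    => 'AMOUNT',
    audit_condition => 'AMOUNT > 10000',
    statement_types => 'SELECT,UPDATE');
END;
/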
It's not mandatory to enable Standard Auditing in order to use Fine-Grained Auditing or SYS Auditing.
SYS Auditing
The AUDIT_SYS_OPERATIONS parameter (TRUE/FALSE) enables or disables the auditing of SQL statements directly issued by users connected with SYSDBA or SYSOPER privileges (SQL statements run from within PL/SQL procedures or functions are not audited). These audit records are written to the OS.
It's not mandatory to enable Standard Auditing in order to use SYS Auditing.




Active Data Guard 12c New Features - Oracle Day 2014
