A cascade standby database configuration lets your Layer1 standby database transmit archived redo logs to a Layer2 standby database. This feature can be configured with simple LOG_ARCHIVE_DEST_n settings, but (as always) there are some key points you should know.


• You must use standby redo logs on the Layer1 standby database.

• A RAC primary database and the Data Guard Broker are not supported in a cascade standby configuration.

• The Layer1 standby database cannot be a logical standby, whereas the Layer2 can be. (The Oracle 10gR2 documentation mentions a Layer1 logical standby configuration and indicates that a Layer1 logical standby generates and transmits redo that is not identical to the redo shipped between the primary and the Layer1 standby; as a result, "Layer2 physical standby databases created from a logical standby database will be a block-for-block copy of the logical standby database and a logical copy of the original primary database.")
However, Metalink note 409013.1 replaces Appendix E, "Cascaded Destinations," of the Oracle Data Guard Concepts and Administration 10g Release 2 guide. Both that note and the 11gR1 documentation state that "a logical standby database cannot forward redo to another standby database," meaning a Layer1 logical standby configuration is not supported.

• A Layer2 standby database may lag behind the primary and the Layer1 standby, so the Layer1 standby database is recommended for disaster recovery purposes, not the Layer2. As Oracle puts it: "To minimize risk of data loss and ensure the fastest possible role transition, Oracle recommends that any standby databases that are primarily configured for disaster-recovery purposes receive redo data directly from the primary database."

The points above are valid for both the 10g and 11g releases.
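The first requirement above (standby redo logs on the Layer1 standby) is met with ALTER DATABASE ADD STANDBY LOGFILE. A minimal sketch, not taken from this example's setup: the group numbers, file paths, and 50M size are assumptions — size the standby redo logs the same as your online redo logs, and create one more SRL group than you have online groups per thread:

```sql
-- On chicago (Layer1 standby): SRLs must match the online redo log
-- size; create (number of online groups + 1) SRL groups per thread.
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/oradata/chicago/srl04.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/oradata/chicago/srl05.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('/oradata/chicago/srl06.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 ('/oradata/chicago/srl07.log') SIZE 50M;

-- Verify the standby redo logs exist and are unassigned or active:
SELECT group#, bytes, status FROM v$standby_log;
```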


Configuration Example:

The configuration sample below is ready for a switchover between the primary and the Layer1 physical standby database. After the switchover, archived logs will continue to ship to the Layer2 standby database. The Layer2 database (denver) is a physical standby in this example.


Boston Database (Primary Role)

DB_UNIQUE_NAME=boston
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston,denver)'
STANDBY_ARCHIVE_DEST=/arch1/boston/
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/boston/ VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2='SERVICE=denver VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=denver'
LOG_ARCHIVE_DEST_3='SERVICE=chicago VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chicago'

Chicago Database (Layer1 Standby Role)
DB_UNIQUE_NAME=chicago
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston,denver)'
STANDBY_ARCHIVE_DEST=/arch1/chicago/
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_2='SERVICE=denver VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=denver'
LOG_ARCHIVE_DEST_3='SERVICE=boston VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston'

Denver Database (Layer2 Standby Role)
DB_UNIQUE_NAME=denver
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston,denver)'
STANDBY_ARCHIVE_DEST=/arch1/denver/
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/denver/ VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=denver'


Where to use?

This configuration may be suitable if you plan one local (over LAN) and one remote (over WAN) standby. In a disaster or failure where you lose only the primary, you can fail over to the local (Layer1) standby and your data is still protected, because the remote (Layer2) standby keeps working. In a disaster where you lose both the primary and the local standby, you can fail over to the remote standby with a small amount of data loss.

In another scenario, you may want to have two remote standbys and keep the Layer1 standby for disaster recovery purposes, while using the Layer2 standby for extraction or test purposes.

You can also configure two or more destinations on your primary to transmit redo directly, but keep in mind that a cascade configuration reduces the overhead on your primary database.
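For comparison, the direct (non-cascaded) alternative just mentioned would have the primary ship redo to both standbys itself — simpler, but each extra destination adds network and archiver work on the primary. A sketch on boston, reusing the service names from this example (SCOPE=BOTH assumes an spfile):

```sql
-- Direct fan-out from the primary, no cascading:
-- boston ships online redo to both chicago and denver itself.
ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=chicago VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chicago' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_3 = 'SERVICE=denver VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=denver' SCOPE=BOTH;
```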


15 Responses so far.

  1. Philippe says:

    Hi Emre,

    thanks for this article. For a customer of mine we are probably going to use a cascading standby setup as the fallback plan for an upgrade from 10.2.0.3 to 10.2.0.5. I am testing it at the moment. They have several Layer1 physical standbys and one future cascading Layer1/Layer2 configuration.
    In case the upgrade fails, we activate the Layer1 standby of the Layer1/2 configuration. In that case we have a new primary and also, immediately, a standby.

    One question: a RAC primary is not supported? Can the standby be RAC, however?

    regards,

    Philippe

  2. Hi Philippe,
    The documentation only mentions that a RAC primary database is not supported. But my comment is that something is problematic with Data Guard & RAC here, so personally I don't recommend using this configuration before consulting Oracle.
    Regards

  3. Philippe says:

    Hi Emre,

    in the meantime I consulted Oracle Support and they strongly advised against combining cascading standby databases and RAC. Why? Because a RAC database has more than one redo thread (depending on the number of nodes in the cluster), and there is an issue with sending multiple threads. I suggested disabling a thread, but the support engineer said that development strongly insists on not using cascading standbys with RAC... if we encountered something we would be on our own...

    In normal Data Guard configurations I have had no issues so far combining RAC and non-RAC databases...

    best regards,

  4. Unknown says:

    Hi Emre,
    Thanks for sharing.
    I use this to ship the archivelogs to the remote replicas from the local replica.

    Thanks,
    Murat

  5. Hi Murat,
    As you said, it's a nice method for backing up archivelogs. But does it ever happen that it misses shipping an archivelog? I experienced such a thing. When I investigated, I learned that for "gap resolution" to work, the receiving side has to apply the archives; it is the MRP0 process that triggers re-requesting a missing archive. Since the remote instance was a datafile-less instance mounted only to fetch archives, MRP0 naturally wasn't running. I gave up on this approach because of that problem.

  6. Unknown says:

    Hi Emre,
    In my configuration, we set up the local physical standby database to send
    logs to the physical standbys in the other cities and have them applied there.
    Besides shipping the logs, the remote standby database applies them.
    Thanks to MRP0 :), so far my replication has never broken because of a missed log shipment.

    Alongside this configuration, we built a log server that collects only the archivelogs
    of all the databases. Currently we send the generated logs to the log server with scripts we put in crontab.
    We had thought of installing datafile-less Oracle software on the log server and moving the logs that way, but after your
    experience, I think we will keep moving them with the crontab jobs :))

    Thanks for sharing
    Murat

  7. Hi,

    This is a great article; I am looking forward to implementing this setup.

    But I am also looking at the real-time apply feature, from the primary to the first physical standby and then to the second physical standby.

    Any tips would be appreciated.
    RGDS,
    Sheldon

  8. Marvelous work. Just wanted to drop a comment and say I am new to your blog and really like what I am reading.

  9. John says:

    Hi
    Is it mandatory to set LOG_ARCHIVE_DEST_2 to denver on the primary database?


    LOG_ARCHIVE_DEST_2= 'SERVICE=denver VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=denver'

  10. Hi, it's not mandatory, but if you set it, the cascade configuration will continue working after a switchover between boston and chicago.

  11. Anonymous says:

    How about the parameters FAL_SERVER and FAL_CLIENT? What values should be set on the primary, standby, and cascade standby databases?



- Copyright © Emre Baransel - Oracle Blog -