
Oracle Data Guard Issues - 'APPLIED' Column Not Updated in v$archived_log

I am taking a break from my regular Beginner's Guide series because today I faced
an issue that was genuinely confusing. Confusing enough that, once I got past it, I
decided to blog about it so that if you face a similar issue in the near future you
can find a quick solution instead of pulling your hair out.

So, coming to the issue, let me briefly describe the scenario.

We have a production database with heavy OLTP activity, and to support disaster
recovery for this server we have a Data Guard (physical standby) setup. The Oracle
version is 10.2.0.4 Enterprise Edition on Sun SPARC.

Yesterday the client reported that the DR site was not in sync with production, and
that is where my job came into the picture. Logging into the database, I found that
there was indeed an archive log gap, and because of it the standby had stopped
synchronizing with production.

I ran the following commands on the DR database to verify the archive log gap.

SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;

   THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
---------- ------------- --------------
         1         65524          65533

SQL> SELECT max(sequence#) AS "STANDBY", applied
     FROM v$archived_log
     GROUP BY applied;

   STANDBY APPLIED
---------- -------
     65783 NO
     65523 YES

From the above output it is quite clear that there is an archive gap, and because
of it the recovery services had stopped. The first thing to do in such a scenario
is to check whether the missing archives are still present at production. If they
are, you are saved from a hectic recovery schedule: you just need to transfer those
archives from production to DR and apply them there. If you are not so lucky and
the archives are gone from production, you have to go the other way around and take
an incremental RMAN backup at production and restore it at the DR end. But that is
a topic for another post.
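As a quick sanity check (a minimal sketch; the sequence range is the one reported
by V$ARCHIVE_GAP above, and the DELETED column simply tells you whether RMAN has
already removed the piece), you can confirm on the production database whether the
missing archives are still registered and where they live on disk:

SQL> SELECT sequence#, name, deleted
     FROM v$archived_log
     WHERE thread# = 1
       AND sequence# BETWEEN 65524 AND 65533
     ORDER BY sequence#;

If DELETED shows YES, or the files listed under NAME are no longer on disk, you are
in the incremental-backup scenario instead.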

In my case I was lucky: the archive retention policy at production is 3 days, so
the archives were still there. What I did was simply transfer the missing archives
from production to DR, stop the media recovery process at DR, recover the standby
with RMAN, and restart media recovery. That should have solved the problem, right?
Well, I was wrong. So what went wrong? These are the steps I followed.

Used simple scp to transfer the missing archives, sequences 65524 to 65533, from
production to DR.
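(A minimal sketch of that copy; the archive path, file naming convention and DR
hostname below are illustrative, not the actual ones from this environment:)

bash$ scp /archive/1_655{24..33}_*.arc oracle@drhost:/archive/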
Stopped the media recovery process at DR:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

Recovered the DR database using a simple RMAN recovery:


bash$ rman target /
rman> RECOVER DATABASE UNTIL SEQUENCE 65533 THREAD 1;

Started the media recovery process again:


ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM CURRENT LOGFILE;

At first everything looked fine. The recovery process started, the gap was filled,
and synchronization completed. Then the real problem arose.
I was checking the DR end for synchronization using the following commands.

sql> ARCHIVE LOG LIST


Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\app\mudassar\archive
Oldest online log sequence     65783
Next log sequence to archive
Current log sequence           65794

sql> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;


no rows selected

SQL> SELECT max(sequence#) AS "STANDBY", applied
     FROM v$archived_log
     GROUP BY applied;

   STANDBY APPLIED
---------- -------
     65794 YES

This looks fine, doesn't it? But no. The client was not convinced; he ran the
following query at production, and the output always showed NO.

SQL> SELECT a.sequence#, a.applied, a.archived, b.sequence#, b.applied, b.archived
     FROM (SELECT * FROM v$archived_log
           WHERE sequence# IN (SELECT max(sequence#) - 1 FROM v$archived_log)
             AND name = 'stndby') a,
          (SELECT * FROM v$archived_log
           WHERE sequence# IN (SELECT max(sequence#) - 1 FROM v$archived_log)
             AND name LIKE '/archive/archive-log%') b
     WHERE a.sequence# != b.sequence#
        OR a.applied != 'YES'
        OR b.applied != 'NO';

 SEQUENCE# APP ARC
------------------
     65794 NO  YES
     65794 NO  YES

So what is wrong here? Why is the output different at production and DR when it
should be the same? After a lot of searching, I found that this is a bug in Oracle
10g whereby the media recovery process hangs for some reason, causing this
behaviour. I won't go into more detail about the bug, but if you have a My Oracle
Support (Metalink) account you can log in and go through Note 1369630.1; if not,
you can download the documentation here.

Bug Note: 1369630.1

Note that this error does not mean that production and DR are out of sync; rather,
because the media recovery process is hung at the production site, it is unable to
update the internal views. The solution is to upgrade to Oracle 11g or higher
(Oracle 12c has already been introduced), or you can restart the production
database, if possible, to overcome the issue temporarily.
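Before bouncing anything, it is worth confirming on the standby itself that the
apply process really is alive and progressing; these views are standard in 10.2,
though the statuses and lag values you see will obviously be site-specific:

SQL> SELECT process, status, thread#, sequence#
     FROM v$managed_standby
     WHERE process LIKE 'MRP%';

SQL> SELECT name, value
     FROM v$dataguard_stats
     WHERE name IN ('apply lag', 'transport lag');

If MRP0 shows APPLYING_LOG and the apply lag is small, the discrepancy is only in
the primary's bookkeeping (the APPLIED column), not in the actual recovery.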

Hope this has been of help to all you folks!!!! Comments are WELCOME!!!!
==============
Resolving Gaps in Data Guard Apply Using Incremental RMAN Backup
Recently, we had a glitch on a Data Guard (physical standby) database in our
infrastructure. This is not a critical database, so the monitoring was relatively
lax, and the fact that it is handled by an outsourcer does not help either. In any
case, the laxness resulted in a failure remaining undetected for quite some time;
it was eventually discovered only when the customer complained. This standby
database is usually opened for read-only access from time to time. This time,
however, the customer saw that the data was significantly out of sync with the
primary and raised a red flag. Unfortunately, by then it had become a rather
political issue.

Since the DBA in charge couldn't resolve the problem, I was called in. In this
post, I will describe the issue and how it was resolved. In summary, there are two
parts to the problem:

(1) What happened


(2) How to fix it

What Happened

Let's look at the first question: what caused the standby to lag behind? First, I
looked up the current SCNs of the primary and standby databases. On the primary:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447102

On the standby:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1301571

Clearly there is a difference, but that by itself does not indicate a problem,
since the standby is expected to lag behind the primary (this is an asynchronous,
non-real-time apply setup). The real question is how much it is lagging in terms of
wall-clock time. To find out, I used the scn_to_timestamp function to translate the
SCN to a timestamp:

SQL> select scn_to_timestamp(1447102) from dual;

SCN_TO_TIMESTAMP(1447102)
-------------------------------
18-DEC-09 08.54.28.000000000 AM

I ran the same query to find the timestamp associated with the standby database's
SCN as well (note that I ran it on the primary, since it would fail on the standby
in mounted mode):

SQL> select scn_to_timestamp(1301571) from dual;

SCN_TO_TIMESTAMP(1301571)
-------------------------------
15-DEC-09 07.19.27.000000000 PM

This shows that the standby is lagging by about two and a half days! The data at
this point is not just stale; it must be rotten.
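Incidentally, you can let the database do the arithmetic for you: subtracting the
two timestamps (again run on the primary, using the SCNs above) returns the lag as
an interval, which here works out to about 2 days and 13½ hours:

SQL> select scn_to_timestamp(1447102) - scn_to_timestamp(1301571) as time_lag
     from dual;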

The next question is why it would be lagging so far behind. This is a 10.2
database, where the FAL (fetch archived log) mechanism should automatically resolve
any gaps in archived logs. Something must have happened that caused the FAL process
to fail. To get that answer, I first checked the alert log of the standby instance
and found these lines, which showed the issue clearly:

Fri Dec 18 06:12:26 2009


Waiting for all non-current ORLs to be archived...
Media Recovery Waiting for thread 1 sequence 700
Fetching gap sequence in thread 1, gap sequence 700-700

Fri Dec 18 06:13:27 2009
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 700-700
DBID 846390698 branch 697108460
FAL[client]: All defined FAL servers have been attempted.

Going back in the alert log, I found these lines:

Tue Dec 15 17:16:15 2009


Fetching gap sequence in thread 1, gap sequence 700-700
Error 12514 received logging on to the standby
FAL[client, MRP0]: Error 12514 connecting to DEL1 for fetching gap sequence
Tue Dec 15 17:16:15 2009
Errors in file /opt/oracle/admin/DEL2/bdump/del2_mrp0_18308.trc:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor
Tue Dec 15 17:16:45 2009
Error 12514 received logging on to the standby
FAL[client, MRP0]: Error 12514 connecting to DEL1 for fetching gap sequence
Tue Dec 15 17:16:45 2009
Errors in file /opt/oracle/admin/DEL2/bdump/del2_mrp0_18308.trc:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor

This clearly showed the issue. On December 15th at 17:16:15, the managed recovery
process encountered an error while receiving the log information from the primary:
ORA-12514 "TNS:listener does not currently know of service requested in connect
descriptor". This is usually the case when the TNS connect string is incorrectly
specified. The primary is called DEL1, and there is a connect string called DEL1 on
the standby server.

The connect string works well now; in fact, right now there is no issue with the
standby getting the archived logs, and the standby is receiving log information
from the primary. There must have been some temporary hiccup that caused that
specific archived log not to travel to the standby. Even if that log was somehow
skipped (it could have been an intermittent problem), it should have been picked up
by the FAL process later on, but that never happened. And since sequence# 700 was
never applied, none of the logs received later (701, 702, and so on) were applied
either. This caused the standby to lag behind ever since.
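If you suspect the connect string at this point, a couple of quick checks from the
standby usually settle it; DEL1 here is the primary's TNS alias from this setup, so
substitute your own:

$ tnsping DEL1

SQL> show parameter fal_server
SQL> show parameter fal_client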

So, the fundamental question was why FAL did not fetch the archived log sequence#
700 from the primary. To get to that, I looked into the alert log of the primary
instance. The following lines were of interest:

Tue Dec 15 19:19:58 2009


Thread 1 advanced to log sequence 701 (LGWR switch)
Current log# 2 seq# 701 mem# 0: /u01/oradata/DEL1/onlinelog/o1_mf_2_5bhbkg92_.log
Tue Dec 15 19:20:29 2009
Errors in file /opt/oracle/product/10gR2/db1/admin/DEL1/bdump/del1_arc1_14469.trc:
ORA-00308: cannot open archived log '/u01/oraback/1_700_697108460.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
Tue Dec 15 19:20:29 2009
FAL[server, ARC1]: FAL archive failed, see trace file.
Tue Dec 15 19:20:29 2009
Errors in file /opt/oracle/product/10gR2/db1/admin/DEL1/bdump/del1_arc1_14469.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed.
Archiver continuing
Tue Dec 15 19:20:29 2009
ORACLE Instance DEL1 - Archival Error. Archiver continuing.

These lines showed everything clearly. The issue was:

ORA-00308: cannot open archived log '/u01/oraback/1_700_697108460.dbf'


ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory

The archived log simply was not available. The process could not see the file and
couldn't ship it across to the standby site.

Upon further investigation I found that the DBA had actually removed archived logs
to make some room in the filesystem, without realizing that he had removed the most
recent one, which was yet to be transmitted to the remote site. The mystery of why
FAL never got that log was finally cleared up.

Solution

Now that I knew the cause, the focus moved to the resolution. If archived log
sequence# 700 had still been available on the primary, I could simply have copied
it over to the standby, registered the log file, and let the managed recovery
process pick it up. Unfortunately, the file was gone and I couldn't just recreate
it. And until that logfile is applied, recovery cannot move forward. So, what were
my options?
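For reference, had the file survived, the fix would have been as simple as copying
it to the standby and registering it there (the path below is the one reported in
the primary's alert log; on the standby it would of course be wherever you copied
the file to):

SQL> alter database register logfile '/u01/oraback/1_700_697108460.dbf';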

One option is, of course, to recreate the standby from scratch - possible, but not
practical considering the time it would take. The other option is to apply an
incremental backup of the primary taken from that SCN onwards. That is the key: the
backup must be from a specific SCN. I have described the process here since it is
not very obvious. The following shows the step-by-step approach for resolving this
problem; I have indicated where each action must be performed, [Standby] or
[Primary].

1. [Standby] Stop the managed standby apply process:

SQL> alter database recover managed standby database cancel;

Database altered.
2. [Standby] Shut down the standby database:
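A plain immediate shutdown from SQL*Plus, as in the second walkthrough later in
this post, is enough here:

SQL> shutdown immediate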

3. [Primary] On the primary, take an incremental backup from the SCN number where
the standby has been stuck:

RMAN> run {
2> allocate channel c1 type disk format '/u01/oraback/%U.rmb';
3> backup incremental from scn 1301571 database;
4> }

using target database control file instead of recovery catalog


allocated channel: c1
channel c1: sid=139 devtype=DISK

Starting backup at 18-DEC-09


channel c1: starting full datafile backupset
channel c1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u01/oradata/DEL1/datafile/o1_mf_system_5bhbh59c_.dbf

piece handle=/u01/oraback/06l16u1q_1_1.rmb tag=TAG20091218T083619 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:06
Finished backup at 18-DEC-09
released channel: c1

4. [Primary] On the primary, create a new standby controlfile:

SQL> alter database create standby controlfile as '/u01/oraback/DEL1_standby.ctl';

Database altered.

5. [Primary] Copy these files to standby host:

oracle@oradba1 /u01/oraback# scp *.rmb *.ctl oracle@oradba2:/u01/oraback


oracle@oradba2's password:
06l16u1q_1_1.rmb 100% 43MB 10.7MB/s 00:04
DEL1_standby.ctl 100% 43MB 10.7MB/s 00:04

6. [Standby] Bring up the instance in nomount mode:

SQL> startup nomount

7. [Standby] Check the location of the controlfile:

SQL> show parameter control_files

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      /u01/oradata/standby_cntfile.ctl

8. [Standby] Replace the controlfile with the one you just created on the primary.

9. [Standby] Copy the new controlfile over the existing one:

$ cp /u01/oraback/DEL1_standby.ctl /u01/oradata/standby_cntfile.ctl

10. [Standby] Mount the standby database:

SQL> alter database mount standby database;

11. [Standby] RMAN does not know about these files yet, so you must make it aware
of them by cataloging them:
$ rman target=/

Recovery Manager: Release 10.2.0.4.0 - Production on Fri Dec 18 06:44:25 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: DEL1 (DBID=846390698, not open)


RMAN> catalog start with '/u01/oraback';

using target database control file instead of recovery catalog


searching for all files that match the pattern /u01/oraback

List of Files Unknown to the Database


=====================================
File Name: /u01/oraback/DEL1_standby.ctl
File Name: /u01/oraback/06l16u1q_1_1.rmb

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files


=======================
File Name: /u01/oraback/DEL1_standby.ctl
File Name: /u01/oraback/06l16u1q_1_1.rmb

12. [Standby] Recover the database using these files:

RMAN> recover database;

Starting recover at 18-DEC-09


using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001:
/u01/oradata/DEL2/datafile/o1_mf_system_5lptww3f_.dbf
...
channel ORA_DISK_1: reading from backup piece /u01/oraback/05l16u03_1_1.rmb
channel ORA_DISK_1: restored backup piece 1
piece handle=/u01/oraback/05l16u03_1_1.rmb tag=TAG20091218T083619
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07

starting media recovery

archive log thread 1 sequence 8012 is already on disk as file


/u01/oradata/1_8012_697108460.dbf
archive log thread 1 sequence 8013 is already on disk as file
/u01/oradata/1_8013_697108460.dbf

13. After some time, the recovery fails with the message:

archive log filename=/u01/oradata/1_8008_697108460.dbf thread=1 sequence=8009


RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 12/18/2009 06:53:02
RMAN-11003: failure during parse/execution of SQL statement: alter database recover
logfile '/u01/oradata/1_8008_697108460.dbf'
ORA-00310: archived log contains sequence 8008; sequence 8009 required
ORA-00334: archived log: '/u01/oradata/1_8008_697108460.dbf'

This happens because we have reached the end of the available archived logs. The
next expected archived log, sequence# 8009, has not been generated yet.

14. At this point, exit RMAN and start the managed recovery process:

SQL> alter database recover managed standby database disconnect from session;

Database altered.

15. Check the SCNs on the primary and the standby:

[Standby] SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447474

[Primary] SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447478

Now they are very close to each other. The standby has caught up.
=================
Resolving Gaps in Data Guard Apply Using Incremental RMAN Backup

On the primary:

SQL> select name,database_role,switchover_status from v$database;

NAME DATABASE_ROLE SWITCHOVER_STATUS


--------- ---------------- --------------------
ORCL PRIMARY UNRESOLVABLE GAP

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
2076457

On the standby:

SQL> select database_role,switchover_status from v$database;

DATABASE_ROLE SWITCHOVER_STATUS
---------------- --------------------
PHYSICAL STANDBY NOT ALLOWED

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1998045

Solution:-
1. [Standby] Stop the managed standby apply process:

SQL> alter database recover managed standby database cancel;


Database altered.

2. [Standby] Shut down the standby database:

SQL> shutdown immediate


ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL>

3. [Primary] On the primary, take an incremental backup from the SCN where the
standby has been stuck:

RMAN> backup incremental from scn 1998045 database;

Starting backup at 15-SEP-14

allocated channel: ORA_DISK_1


channel ORA_DISK_1: SID=51 device type=DISK
backup will be obsolete on date 22-SEP-14
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/u01/MONEY.DBF
input datafile file number=00001 name=/u01/app/oracle/oradata/orcl/system01.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/orcl/sysaux01.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/orcl/example01.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/orcl/undotbs01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: starting piece 1 at 15-SEP-14
channel ORA_DISK_1: finished piece 1 at 15-SEP-14
piece handle=/u01/neeraj/29pij5mb_1_1 tag=TAG20140915T181954 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:25

using channel ORA_DISK_1


backup will be obsolete on date 22-SEP-14
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 15-SEP-14
channel ORA_DISK_1: finished piece 1 at 15-SEP-14
piece handle=/u01/neeraj/2apij5p1_1_1 tag=TAG20140915T181954 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 15-SEP-14

4. [Primary] On the primary, create a new standby controlfile:

SQL> alter database create standby controlfile as '/u01/neeraj/stdb_cont.ctl';


Database altered.

5. [Primary] Copy these files to standby host:

[oracle@localhost ~]$ cd /u01/neeraj/


[oracle@localhost neeraj]$ ls
29pij5mb_1_1 2apij5p1_1_1 stdb_cont.ctl
[oracle@localhost neeraj]$
[oracle@localhost neeraj]$
[oracle@localhost neeraj]$ scp * oracle@192.168.204.129:/u01/tspr
Address 192.168.204.129 maps to localhost.localdomain, but this does not map back
to the address - POSSIBLE BREAK-IN ATTEMPT!
oracle@192.168.204.129's password:
29pij5mb_1_1 100% 59MB 19.7MB/s 00:03
2apij5p1_1_1 100% 11MB 11.1MB/s 00:00
stdb_cont.ctl 100% 11MB 11.1MB/s 00:01
[oracle@localhost neeraj]$

6. [Standby] Bring up the instance in nomount mode:

SQL> startup nomount


ORACLE instance started.

Total System Global Area 1071333376 bytes


Fixed Size 1341312 bytes
Variable Size 201328768 bytes
Database Buffers 864026624 bytes
Redo Buffers 4636672 bytes

7. [Standby] Check the location of the controlfile:

SQL> show parameter control_files

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      /u01/app/oracle/oradata/stdb/control01.ctl

(Rename the existing controlfile, for example to control01_old.ctl, or remove it
before copying the new one into place.)

8. [Standby] Replace the controlfile with the one you just created on the primary,
after renaming the existing controlfile that is already on the standby:

$ cp /u01/tspr/stdb_cont.ctl /u01/app/oracle/oradata/stdb/control01.ctl

9.[Standby] Mount the standby database:

SQL> alter database mount standby database;

Database altered.

10. [Standby] RMAN does not know about these files yet, so you must make it aware
of them by cataloging them:

RMAN> catalog start with '/u01/tspr';

searching for all files that match the pattern /u01/tspr

List of Files Unknown to the Database


=====================================
File Name: /u01/tspr/29pij5mb_1_1
File Name: /u01/tspr/2apij5p1_1_1

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done
List of Cataloged Files
=======================
File Name: /u01/tspr/29pij5mb_1_1
File Name: /u01/tspr/2apij5p1_1_1

11. [Standby] Recover the database using these files:

RMAN> recover database;

Starting recover at 15-SEP-14


using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001:
/u01/app/oracle/oradata/stdb/system01.dbf
destination for restore of datafile 00002:
/u01/app/oracle/oradata/stdb/sysaux01.dbf
destination for restore of datafile 00003:
/u01/app/oracle/oradata/stdb/undotbs01.dbf
destination for restore of datafile 00004: /u01/app/oracle/oradata/stdb/users01.dbf
destination for restore of datafile 00005:
/u01/app/oracle/oradata/stdb/example01.dbf
destination for restore of datafile 00006: /u01/MONEY.DBF
channel ORA_DISK_1: reading from backup piece /u01/tspr/29pij5mb_1_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 09/15/2014 18:37:55
ORA-19870: error while restoring backup piece /u01/tspr/29pij5mb_1_1
ORA-19573: cannot obtain exclusive enqueue for datafile 6

If you get this error, it is because the managed recovery process is still running
and holds an exclusive lock on the datafiles; the database must stay mounted, but
you need to cancel managed recovery first.

NOTE: ORA-19573: cannot obtain exclusive enqueue for datafile

SQL> alter database recover managed standby database cancel;

Database altered.

Then run the recovery command again:

RMAN> recover database;

Starting recover at 15-SEP-14


using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001:
/u01/app/oracle/oradata/stdb/system01.dbf
destination for restore of datafile 00002:
/u01/app/oracle/oradata/stdb/sysaux01.dbf
destination for restore of datafile 00003:
/u01/app/oracle/oradata/stdb/undotbs01.dbf
destination for restore of datafile 00004: /u01/app/oracle/oradata/stdb/users01.dbf
destination for restore of datafile 00005:
/u01/app/oracle/oradata/stdb/example01.dbf
destination for restore of datafile 00006: /u01/MONEY.DBF
channel ORA_DISK_1: reading from backup piece /u01/tspr/29pij5mb_1_1
channel ORA_DISK_1: piece handle=/u01/tspr/29pij5mb_1_1 tag=TAG20140915T181954
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:01:06

starting media recovery

archived log for thread 1 with sequence 373 is already on disk as file
/u01/app/oracle/flash_recovery_area/STDB/orcl_373_1_850048081.arc
archived log file
name=/u01/app/oracle/flash_recovery_area/STDB/orcl_373_1_850048081.arc thread=1
sequence=373
unable to find archived log
archived log thread=1 sequence=374
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 09/15/2014 18:50:43
RMAN-06054: media recovery requesting unknown archived log for thread 1 with
sequence 374 and starting SCN of 2078522

Note: at this point the recovery fails with the message above because we have
reached the end of the archived logs available on the standby; sequence 374 has not
been shipped there yet.

12. At this point, exit RMAN and start the managed recovery process:

SQL> alter database recover managed standby database disconnect from session;

Database altered.

On the standby database:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
2082113

On the primary database:

SQL> select database_role,switchover_status from v$database;

DATABASE_ROLE SWITCHOVER_STATUS
---------------- --------------------
PRIMARY TO STANDBY
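Optionally, as a final sanity check, you can confirm from the standby that no gap
remains; once the incremental backup has been applied and managed recovery
restarted, this query on the standby should return no rows:

SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;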

Please share your suggestions and comments on this.


all the best.... :)
