
Wednesday, March 28, 2012

Command to disable or enable firewall

netsh firewall set opmode mode=MODE
where MODE is ENABLE or DISABLE.
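For example, to disable the firewall and then re-enable it (this legacy netsh context applies to Windows XP/Server 2003; later Windows versions replace it with the netsh advfirewall context):

cmd> netsh firewall set opmode mode=DISABLE
cmd> netsh firewall set opmode mode=ENABLE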
Thank you
Regards,
Navaneeth

Rename a datafile in the primary and propagate the changes to the standby database

Rename a datafile in the primary database and manually propagate the changes to the standby database.

PRIMARY

1. Set STANDBY_FILE_MANAGEMENT=MANUAL on both Primary and Standby Database.

SQL> show parameter standby_file_manag

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      AUTO

SQL> col value format a50
SQL>  select name,value from v$parameter where name like 'db_unique_name';

NAME               VALUE
-------------  ------------------------------------------------
db_unique_name   orcl11g

SQL>
SQL> alter system set standby_file_management=MANUAL;

System altered.

SQL>  show parameter standby_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      MANUAL


2. Take the Tablespace offline on the Primary Database:

SQL> select * from v$tablespace;

       TS# NAME                           INC BIG FLA ENC
---------- ------------------------------ --- --- --- ---
         0 SYSTEM                         YES NO  YES
         1 SYSAUX                         YES NO  YES
         2 UNDOTBS1                       YES NO  YES
         4 USERS                          YES NO  YES
         3 TEMP                           NO  NO  YES
         6 FDB                            YES NO  YES

6 rows selected.


SQL> alter tablespace users offline;

Tablespace altered.

3. Move the Datafile to the new location on the Primary Site:

   Copy the file to the new location using an OS command.
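For example, on Windows, using the source and target paths that appear in step 4 (adjust to your environment):

cmd> copy D:\app\nvcob\oradata\orcl11g\USERS01.DBF D:\app\nav6cob\oradata\orcl11g_1\USERS01.DBF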

4. Rename the Datafile in the Primary Database.

SQL> alter tablespace users rename datafile 'D:\app\nvcob\oradata\orcl11g\USERS01.DBF' TO 'D:\app\nav6cob\oradata\orcl11g_1\USERS01.DBF';

Tablespace altered.

5. Bring the Tablespace back online:

SQL> ALTER TABLESPACE USERS ONLINE;

Tablespace altered.

SQL> ALTER SYSTEM SWITCH LOGFILE;

System altered.

SQL>

------------------------------------------------------

STANDBY




SQL> alter system set standby_file_management=MANUAL;

System altered.

SQL> select name,value from v$parameter where name like 'db_unique_name';

NAME                                                                             VALUE
-------------------------------------------------------------------------------- --------------------------------------------------
db_unique_name                                                                   11gstdby

SQL> show parameter standby_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      MANUAL
SQL>

1. Stop Redo Apply on Standby Database.

SQL> recover managed standby database cancel;
Media recovery complete.


2. Shutdown the Standby Database.

SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.


3. Move the Datafile to the new location at the Standby site.

Copy the file using OS command to the new location.
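For example, on Windows, using the standby paths that appear in step 5:

cmd> copy D:\app\nav6cob\oradata\11gstdby\USERS01.DBF D:\app\nav6cob\oradata\11gstdby_1\USERS01.DBF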

4. Start and mount the Standby Database.


SQL> startup mount
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area  535662592 bytes
Fixed Size                  1375792 bytes
Variable Size             209715664 bytes
Database Buffers          318767104 bytes
Redo Buffers                5804032 bytes
Database mounted.

5. Rename the Datafile in the Standby Database control file.

SQL> alter database rename file 'D:\app\nav6cob\oradata\11gstdby\USERS01.DBF' to 'D:\app\nav6cob\oradata\11gstdby_1\USERS01.DBF';

Database altered.

6. On the Standby Database, restart Redo Apply:

SQL> recover managed standby database disconnect from session;
Media recovery complete.


7. Set STANDBY_FILE_MANAGEMENT=AUTO on both Primary and Standby Database.


SQL> alter system set standby_file_management=auto;

System altered.

SQL> show parameter standby_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      AUTO
SQL>  select file#,name from v$datafile where name like '%USERS%';

     FILE# NAME
---------- ----------------------------------------------------------------------------------------------------
          4 D:\APP\NAV6COB\ORADATA\11GSTDBY_1\USERS01.DBF

SQL>

Oracle export/import from a higher to a lower version

The key is the VERSION parameter on the export: set it to the release of the lower (target) database so that the dump file is written in a format the older release can read.

Export :
[oracle@HN300949l ~]$ expdp test/test DUMPFILE=test10092011.dmp DIRECTORY=EXPDP SCHEMAS=test version=10.2.0.1
Export: Release 11.2.0.1.0 - Production on Mon Oct 10 12:52:15 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "TEST"."SYS_EXPORT_SCHEMA_01":  test/******** DUMPFILE=test10092011.dmp DIRECTORY=EXPDP SCHEMAS=test version=10.2.0.1
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 55.68 MB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "TEST"."DEDUPLICATE_TAB"                    9.552 MB    1000 rows
. . exported "TEST"."NOCOMPRESS_TAB"                     9.552 MB    1000 rows
. . exported "TEST"."KEEP_DUPLICATES_TAB"                9.552 MB    1000 rows
. . exported "TEST"."COMPRESS_TAB"                       9.552 MB    1000 rows
. . exported "TEST"."SF_TAB"                             5.335 KB       1 rows
. . exported "TEST"."BF_TAB"                             5.335 KB       1 rows
. . exported "TEST"."MY_CUSTOMERS"                       6.335 KB       1 rows
. . exported "TEST"."TEST"                               6.382 KB       1 rows
. . exported "TEST"."OUTER_NTAB"                             0 KB       0 rows
. . exported "TEST"."INNER_NTAB"                             0 KB       0 rows
. . exported "TEST"."STU_TAB"                                0 KB       0 rows
. . exported "TEST"."PER_TAB"                                0 KB       0 rows
. . exported "TEST"."TIME_TABLE"                             0 KB       0 rows
Master table "TEST"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TEST.SYS_EXPORT_SCHEMA_01 is:
  /u01/app/oracle/test10092011.dmp
Job "TEST"."SYS_EXPORT_SCHEMA_01" successfully completed at 12:57:30
------------------------------------------------------
Import :
D:\oracle\product\10.2.0\db_1\BIN>
D:\oracle\product\10.2.0\db_1\BIN>impdp 'sys/database as sysdba' directory=DATA_PUMP_DIR dumpfile=test10092011.dmp
Import: Release 10.2.0.1.0 - Production on Monday, 10 October, 2011 13:50:20
Copyright (c) 2003, 2005, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_FULL_01":  'sys/******** AS SYSDBA' directory=DATA_PUMP_DIR dumpfile=test10092011.dmp
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."DEDUPLICATE_TAB"                    9.552 MB    1000 rows
. . imported "TEST"."NOCOMPRESS_TAB"                     9.552 MB    1000 rows
. . imported "TEST"."KEEP_DUPLICATES_TAB"                9.552 MB    1000 rows
. . imported "TEST"."COMPRESS_TAB"                       9.552 MB    1000 rows
. . imported "TEST"."SF_TAB"                             5.335 KB       1 rows
. . imported "TEST"."BF_TAB"                             5.335 KB       1 rows
. . imported "TEST"."MY_CUSTOMERS"                       6.335 KB       1 rows
. . imported "TEST"."TEST"                               6.382 KB       1 rows
. . imported "TEST"."OUTER_NTAB"                             0 KB       0 rows
. . imported "TEST"."INNER_NTAB"                             0 KB       0 rows
. . imported "TEST"."STU_TAB"                                0 KB       0 rows
. . imported "TEST"."PER_TAB"                                0 KB       0 rows
. . imported "TEST"."TIME_TABLE"                             0 KB       0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at 13:51:04
Regards,
Navaneeth

Oracle DB cloning Simple steps using RMAN

1) Install the Oracle Home software on the new server, keeping the Oracle Home directory structure the same as on the old server.

2) Create an Oracle service, the same as on the old server, from the OS prompt:

cd <ORACLE_HOME>\bin
set ORACLE_SID=<SID>
oradim -new -sid %ORACLE_SID% -intpwd MYSECRETPASSWORD -startmode M

3) Configure the listener on the new server using the netca utility from the OS prompt.

4) Create/check the same directory structure on the new server for controlfiles, datafiles, redo logfiles, dump files, and networking files (sqlnet.ora, tnsnames.ora, listener.ora). This information can be obtained on the old server as follows:

conn / as sysdba
select member from v$logfile ;
select name from v$datafile ;

For the rest of the files (e.g. controlfiles, dump/trace/log files), see the parameter file on the old server.

5) Copy the backup files to the same location from which RMAN will access them.

      5.1 Start the instance in NOMOUNT using the pfile (from the backup taken on the old server).

6) Restore the controlfile from backup.
7) Restore the database.
8) Open the database.
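A minimal RMAN sketch of steps 6-8 (the controlfile backup piece name is a placeholder; the database must be recovered before it is opened, and a restored controlfile requires OPEN RESETLOGS):

cmd> rman target /
RMAN> restore controlfile from 'D:\backup\CTLFILE_BACKUP.BKP';
RMAN> alter database mount;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;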

Regards,
Navaneeth

Thursday, March 22, 2012

Simple RMAN Script for Creating Standby database

RMAN Script for creating standby database:

run {
  allocate channel prmy1 type disk;
  allocate channel prmy2 type disk;
  allocate channel prmy3 type disk;
  allocate channel prmy4 type disk;
  allocate auxiliary channel stby type disk;
  duplicate target database for standby from active database
    spfile
      parameter_value_convert 'orcl','orclsby1'
      set db_unique_name='orclsby1'
      set db_file_name_convert='/orcl/','/orclsby1/'
      set log_file_name_convert='/orcl/','/orclsby1/'
      set control_files='/u01/app/oracle/oradata/orclsby1.ctl'
      set log_archive_max_processes='5'
      set fal_client='orclsby1'
      set fal_server='orcl'
      set standby_file_management='AUTO'
      set log_archive_config='dg_config=(orcl,orclsby1)'
      set log_archive_dest_1='service=orcl ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=orcl';
}
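The block is run from RMAN connected to the primary as the target and to the standby auxiliary instance, which must already be started NOMOUNT. For example (credentials and TNS aliases are placeholders):

cmd> rman target sys/password@orcl auxiliary sys/password@orclsby1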

Adding standby redo logs in the standby database (standby redo logs should be the same size as the online redo logs, with at least one more standby log group than there are online log groups):

alter database add standby logfile
('/u01/app/oracle/oradata/orcl/srl01.log',
'/u01/app/oracle/oradata/orcl/srl02.log',
'/u01/app/oracle/oradata/orcl/srl03.log',
'/u01/app/oracle/oradata/orcl/srl04.log')
size 52428800
/
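To confirm the groups were created, query V$STANDBY_LOG (output depends on your configuration):

SQL> select group#, thread#, bytes, status from v$standby_log;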
Regards,
Navaneeth

Standby recovery done in production environment 9i



A 9i physical standby had fallen behind; the archived logs were applied manually with RECOVER STANDBY DATABASE UNTIL CANCEL (answering AUTO), and V$ARCHIVED_LOG was then polled until the APPLIED column showed YES for all sequences. Related discussion:

http://dbaspot.com/forums/oracle-server/71881-physical-standby-database-cannot-close-gap-if-any-gap-exists.html


ORA-00280: change 3220290269 for thread 1 is in sequence #17219
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17218.ARC' no longer needed for this recovery


ORA-00279: change 3220290274 generated at 04/11/2010 19:03:32 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17220.ARC
ORA-00280: change 3220290274 for thread 1 is in sequence #17220
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17219.ARC' no longer needed for this recovery


ORA-00279: change 3220300520 generated at 04/11/2010 23:27:08 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17221.ARC
ORA-00280: change 3220300520 for thread 1 is in sequence #17221
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17220.ARC' no longer needed for this recovery


ORA-00279: change 3220300533 generated at 04/11/2010 23:27:16 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17222.ARC
ORA-00280: change 3220300533 for thread 1 is in sequence #17222
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17221.ARC' no longer needed for this recovery


ORA-00279: change 3220304453 generated at 04/12/2010 01:05:52 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17223.ARC
ORA-00280: change 3220304453 for thread 1 is in sequence #17223
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17222.ARC' no longer needed for this recovery


ORA-00279: change 3220304456 generated at 04/12/2010 01:05:53 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17224.ARC
ORA-00280: change 3220304456 for thread 1 is in sequence #17224
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17223.ARC' no longer needed for this recovery


ORA-00279: change 3220429248 generated at 04/12/2010 09:49:53 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17225.ARC
ORA-00280: change 3220429248 for thread 1 is in sequence #17225
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17224.ARC' no longer needed for this recovery


ORA-00308: cannot open archived log 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17225.ARC'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 32) The process cannot access the file because it is being used by another process.


SQL>
SQL> recover standby database until cancel;
ORA-00279: change 3220429248 generated at 04/12/2010 09:49:53 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17225.ARC
ORA-00280: change 3220429248 for thread 1 is in sequence #17225


Specify log: {=suggested | filename | AUTO | CANCEL}
AUTO
ORA-00279: change 3220572194 generated at 04/12/2010 10:46:06 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17226.ARC
ORA-00280: change 3220572194 for thread 1 is in sequence #17226
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17225.ARC' no longer needed for this recovery


ORA-00279: change 3220627014 generated at 04/12/2010 11:06:36 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17227.ARC
ORA-00280: change 3220627014 for thread 1 is in sequence #17227
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17226.ARC' no longer needed for this recovery


ORA-00279: change 3220627289 generated at 04/12/2010 11:06:51 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17228.ARC
ORA-00280: change 3220627289 for thread 1 is in sequence #17228
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17227.ARC' no longer needed for this recovery


ORA-00279: change 3220763002 generated at 04/12/2010 12:27:59 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17229.ARC
ORA-00280: change 3220763002 for thread 1 is in sequence #17229
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17228.ARC' no longer needed for this recovery


ORA-00279: change 3220899620 generated at 04/12/2010 13:24:57 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17230.ARC
ORA-00280: change 3220899620 for thread 1 is in sequence #17230
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17229.ARC' no longer needed for this recovery


ORA-00279: change 3221036980 generated at 04/12/2010 14:44:50 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17231.ARC
ORA-00280: change 3221036980 for thread 1 is in sequence #17231
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17230.ARC' no longer needed for this recovery


ORA-00279: change 3221165247 generated at 04/12/2010 16:11:07 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17232.ARC
ORA-00280: change 3221165247 for thread 1 is in sequence #17232
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17231.ARC' no longer needed for this recovery


ORA-00279: change 3221302585 generated at 04/12/2010 17:30:15 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17233.ARC
ORA-00280: change 3221302585 for thread 1 is in sequence #17233
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17232.ARC' no longer needed for this recovery


ORA-00279: change 3221437356 generated at 04/12/2010 18:37:11 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17234.ARC
ORA-00280: change 3221437356 for thread 1 is in sequence #17234
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17233.ARC' no longer needed for this recovery


ORA-00279: change 3221499410 generated at 04/12/2010 19:02:44 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17235.ARC
ORA-00280: change 3221499410 for thread 1 is in sequence #17235
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17234.ARC' no longer needed for this recovery


ORA-00279: change 3221499630 generated at 04/12/2010 19:03:02 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17236.ARC
ORA-00280: change 3221499630 for thread 1 is in sequence #17236
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17235.ARC' no longer needed for this recovery


ORA-00279: change 3221663610 generated at 04/12/2010 20:36:12 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17237.ARC
ORA-00280: change 3221663610 for thread 1 is in sequence #17237
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17236.ARC' no longer needed for this recovery


ORA-00279: change 3221809532 generated at 04/12/2010 21:37:13 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17238.ARC
ORA-00280: change 3221809532 for thread 1 is in sequence #17238
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17237.ARC' no longer needed for this recovery


ORA-00279: change 3221951748 generated at 04/12/2010 22:41:44 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17239.ARC
ORA-00280: change 3221951748 for thread 1 is in sequence #17239
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17238.ARC' no longer needed for this recovery


ORA-00279: change 3222098577 generated at 04/12/2010 23:39:32 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17240.ARC
ORA-00280: change 3222098577 for thread 1 is in sequence #17240
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17239.ARC' no longer needed for this recovery


ORA-00279: change 3222248037 generated at 04/13/2010 00:55:46 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17241.ARC
ORA-00280: change 3222248037 for thread 1 is in sequence #17241
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17240.ARC' no longer needed for this recovery


ORA-00279: change 3222264966 generated at 04/13/2010 01:05:47 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17242.ARC
ORA-00280: change 3222264966 for thread 1 is in sequence #17242
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17241.ARC' no longer needed for this recovery


ORA-00279: change 3222265468 generated at 04/13/2010 01:06:02 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17243.ARC
ORA-00280: change 3222265468 for thread 1 is in sequence #17243
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17242.ARC' no longer needed for this recovery


ORA-00279: change 3222405780 generated at 04/13/2010 02:19:18 needed for thread 1
ORA-00289: suggestion : D:\ORADATA\sys8\ARCHIVE\LOG_1_17244.ARC
ORA-00280: change 3222405780 for thread 1 is in sequence #17244
ORA-00278: log file 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17243.ARC' no longer needed for this recovery


ORA-00308: cannot open archived log 'D:\ORADATA\sys8\ARCHIVE\LOG_1_17244.ARC'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.


SQL> disc
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.7.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.7.0 - Production
SQL> conn / as sysdba
Connected.
SQL>
SQL> set pages 100
SQL> set lines 1000
SQL>
SQL> select sequence#,archived,applied,deleted from v$archived_log where completion_time like '12-APR-10%';

 SEQUENCE# ARC APP DEL
---------- --- --- ---
     17222 YES YES NO
     17223 YES YES NO
     17224 YES YES NO
     17225 YES YES NO
     17226 YES YES NO
     17227 YES YES NO
     17228 YES YES NO
     17229 YES YES NO
     17230 YES YES NO
     17231 YES YES NO
     17232 YES YES NO
     17233 YES YES NO
     17234 YES YES NO
     17235 YES YES NO
     17236 YES YES NO
     17237 YES YES NO
     17238 YES YES NO
     17239 YES YES NO

18 rows selected.

SQL> select sequence#,archived,applied,deleted from v$archived_log where completion_time like '13-APR-10%';

 SEQUENCE# ARC APP DEL
---------- --- --- ---
     17240 YES YES NO
     17241 YES YES NO
     17242 YES YES NO
     17243 YES YES NO

SQL> select sequence#,archived,applied,deleted from v$archived_log where completion_time like '13-APR-10%';

 SEQUENCE# ARC APP DEL
---------- --- --- ---
     17240 YES YES NO
     17241 YES YES NO
     17242 YES YES NO
     17243 YES YES NO
     17244 YES NO  NO

SQL> select sequence#,archived,applied,deleted from v$archived_log where completion_time like '13-APR-10%';

 SEQUENCE# ARC APP DEL
---------- --- --- ---
     17240 YES YES NO
     17241 YES YES NO
     17242 YES YES NO
     17243 YES YES NO
     17244 YES NO  NO
     17245 YES NO  NO

6 rows selected.

SQL> select sequence#,archived,applied,deleted from v$archived_log where completion_time like '13-APR-10%';

 SEQUENCE# ARC APP DEL
---------- --- --- ---
     17240 YES YES NO
     17241 YES YES NO
     17242 YES YES NO
     17243 YES YES NO
     17244 YES NO  NO
     17245 YES NO  NO

6 rows selected.

SQL> select sequence#,archived,applied,deleted from v$archived_log where completion_time like '13-APR-10%';

 SEQUENCE# ARC APP DEL
---------- --- --- ---
     17240 YES YES NO
     17241 YES YES NO
     17242 YES YES NO
     17243 YES YES NO
     17244 YES NO  NO
     17245 YES NO  NO

6 rows selected.

SQL> select sequence#,archived,applied,deleted from v$archived_log where completion_time like '13-APR-10%';

 SEQUENCE# ARC APP DEL
---------- --- --- ---
     17240 YES YES NO
     17241 YES YES NO
     17242 YES YES NO
     17243 YES YES NO
     17244 YES YES NO
     17245 YES YES NO

6 rows selected.

SQL> select sequence#,archived,applied,deleted from v$archived_log where completion_time like '13-APR-10%';

 SEQUENCE# ARC APP DEL
---------- --- --- ---
     17240 YES YES NO
     17241 YES YES NO
     17242 YES YES NO
     17243 YES YES NO
     17244 YES YES NO
     17245 YES YES NO

6 rows selected.

SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.7.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.7.0 - Production

C:\Documents and Settings\Administrator>

Regards,
Navaneeth

Retention policy and Recovery Window

Retention policy:
Assume the retention policy illustrated in the figures below. The retention policy has the following aspects:
·         The recovery window is 7 days.
·         Database backups are scheduled roughly every two weeks, on these days:
o        January 1
o        January 14
o        January 28
o        February 11
·         The database runs in ARCHIVELOG mode, and archived logs are saved on disk only as long as needed for the retention policy.
Recovery Window, Part 1

As illustrated in the Part 1 figure, the current time is January 23 and the point of recoverability (the current time minus the 7-day window) is January 16. Hence, the January 14 backup is needed for recovery, and so are the archived logs from log sequence 500 through 850. The logs before sequence 500 and the January 1 backup are obsolete because they are not needed for recovery to any point within the window.
Assume the same scenario a week later, as depicted in the next figure.

Recovery Window, Part 2


In this scenario, the current time is January 30 and the point of recoverability is January 23. Note how the January 14 backup is not obsolete even though a more recent backup (January 28) exists in the recovery window. This situation occurs because restoring the January 28 backup does not enable you to recover to the earliest time in the window, January 23. To ensure recoverability to any point within the window, you must save the January 14 backup as well as all archived redo logs from log sequence 500 to 1150.
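In RMAN, a policy like this is configured and enforced as follows (REPORT OBSOLETE lists backups and logs that fall outside the window; DELETE OBSOLETE removes them):

RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
RMAN> REPORT OBSOLETE;
RMAN> DELETE OBSOLETE;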


CONTROL_FILE_RECORD_KEEP_TIME = retention period + level 0 backup interval + 1

For example:

A level 0 backup once a week with a retention policy of a recovery window of 14 days gives CONTROL_FILE_RECORD_KEEP_TIME = 14 + 7 + 1 = 22.
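The parameter can be changed online; for example, using the value computed above:

SQL> alter system set control_file_record_keep_time=22;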

Regards,
Navaneeth

Thursday, March 1, 2012

Oracle 11g Arch diag

Oracle Data Pump


Oracle Data Pump is the replacement for the original Export and Import utilities. Available starting in Oracle Database 10g, Oracle Data Pump enables very high-speed movement of data and metadata from one database to another.

NEW CONCEPTS IN ORACLE DATA PUMP

There are two new concepts in Oracle Data Pump that differ from original Export and Import.

DIRECTORY OBJECTS
Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. 



INTERACTIVE COMMAND-LINE MODE
There is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

CHANGING FROM ORIGINAL EXPORT/IMPORT TO ORACLE DATA PUMP
CREATING DIRECTORY OBJECTS
In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called DATA_PUMP_DIR is provided. The following SQL statement creates a directory object named dpumpdir1 that is mapped to the directory D:\test\.

You would log in to SQL*Plus as system and enter the following SQL command to create the directory:

1. sql> CREATE DIRECTORY dpumpdir1 AS 'D:\test\';

After a directory is created, you need to grant READ and WRITE permission on the
directory to other users. For example, to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpumpdir1, you must execute the following command:

2. sql> GRANT READ,WRITE ON DIRECTORY dpumpdir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories.
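To check which directory objects exist and where they point, you can query the data dictionary (DBA_DIRECTORIES requires DBA privileges; ALL_DIRECTORIES shows the directories you can access):

SQL> select directory_name, directory_path from dba_directories;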

Once the directory access is granted, the user scott can export his database objects with command arguments:

3. Run the command

cmd> expdp username/password DIRECTORY=dpumpdir1 dumpfile=scott.dmp

COMPARISON OF COMMAND-LINE PARAMETERS FROM ORIGINAL EXPORT AND IMPORT TO DATA PUMP

1) Example import of tables from scott’s account to Nav’s account

Original Import:

cmd> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=Nav TABLES=(*)

Data Pump Import:

cmd> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=scott.dmp
TABLES=scott.emp REMAP_SCHEMA=scott:Nav

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA
option.

2) Example export of an entire database to a dump file with all GRANTS, INDEXES,
and data

cmd> exp username/password FULL=y FILE=fulldb.dmp GRANTS=y INDEXES=y ROWS=y
 

cmd> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX
DIRECTORY=dpumpdir1 DUMPFILE=fulldb.dmp CONTENT=ALL

The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job. Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
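For example, to export the scott schema while keeping indexes and statistics out of the job (a sketch; the dump file name is arbitrary):

cmd> expdp username/password DIRECTORY=dpumpdir1 DUMPFILE=scott_noidx.dmp
SCHEMAS=scott EXCLUDE=INDEX,STATISTICS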

3) Tuning Parameters

Unlike Original Export and Import, which used the BUFFER, COMMIT, COMPRESS,
CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no
tuning to achieve maximum performance. Data Pump chooses the best method to
ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

cmd> expdp username/password TABLES=hr.employees VERSION=10.1
DIRECTORY=dpumpdir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data
Pump Export.

Note that Data Pump Import cannot read dump files produced by original Export.

11g RELEASE 1 NEW FEATURES

• Compression on data and/or metadata
• Encryption options with data and/or metadata
• Partition options with import
• Reuse dumpfiles
• Remap Tables during import
• Remap data within specified columns during an export.
• Automatically restart worker processes on certain errors

11g RELEASE 2 NEW FEATURES

• Available Legacy Mode
• Tables are able to reside on multiple
• Wildcard usage in table names expanded for tables and partitions
• Removed limitations on Real Application Cluster concurrent usage
• Support for Editions   
 
MAXIMIZING THE POWER OF ORACLE DATA PUMP
Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

PARALLELISM
 

Now Data Pump operations can take advantage of the server’s parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database 10g.)

The number of parallel processes can be changed on the fly using Data Pump’s
interactive command-line mode. You may have a certain number of processes running
during the day and decide to change that number if more system resources become
available at night (or vice versa). For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory, and I/O.

• Have at least one dump file for each degree of parallelism. If there aren’t enough dump files, performance will not be optimal because multiple threads of
execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so
multiple dump files can be automatically generated.

Example:

cmd> expdp username/password DIRECTORY=dpumpdir1 JOB_NAME=hr
DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different
tablespace from which it was originally exported. The databases have to be 10.1
or later.

Example:

cmd> impdp username/password REMAP_TABLESPACE=tb1:tb6 DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases
between platforms that have different file naming conventions. This parameter
changes the source datafile name to the target datafile name in all SQL
statements where the source datafile is referenced. Because the
REMAP_DATAFILE value uses quotation marks, it's best to specify the
parameter within a parameter file.

Example:
The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"
You can then issue the following command:

cmd> impdp username/password PARFILE=payroll.par

EVEN MORE ADVANCED FEATURES OF ORACLE DATA PUMP

Beyond the command-line and performance features of Oracle Data Pump are new
capabilities that DBAs will find invaluable. A couple of prominent features are described
here.

INTERACTIVE COMMAND-LINE MODE

You have much more control in monitoring and controlling Data Pump jobs with
interactive command-line mode. Because Data Pump jobs run entirely on the server,
you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

• See the status of the job. All of the information needed to monitor the job’s
execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it
(when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power
outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job.
(Enterprise Edition only.)

• Attach to a job from a remote site (such as from home) to monitor status.
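For example, you can attach to the export job started earlier with JOB_NAME=hr and control it from the interactive prompt (a sketch; STATUS, PARALLEL, STOP_JOB, START_JOB, and EXIT_CLIENT are among the available commands):

cmd> expdp username/password ATTACH=hr

Export> STATUS
Export> PARALLEL=8
Export> EXIT_CLIENT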

NETWORK MODE

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you’re moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, Network mode is probably going to be slower than file mode.

Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance.

Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
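A sketch of a network-mode export (source_db is a placeholder for a database link that must already exist in the database where the job runs):

cmd> expdp username/password DIRECTORY=dpumpdir1 DUMPFILE=remote.dmp
NETWORK_LINK=source_db SCHEMAS=scott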

GENERATING SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes that you could then edit to get a workable DDL script.

With Data Pump, it’s a lot easier to get a workable DDL script. When you run Data
Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new
database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring
sqlfile output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

cmd> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp
SQLFILE=dpumpdir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpumpdir2 and would include SQL DDL
that could be executed in another database to create the tables and indexes as desired.

FREQUENTLY ASKED QUESTIONS


What makes Data Pump faster than original Export and Import?

There are three main reasons that Data Pump is faster than original Export and Import. 

First, the Direct Path data access method (which permits the server to bypass SQL and go right to the data blocks on disk) has been rewritten to be much more efficient and now supports Data Pump Import and Export. 


Second, because Data Pump does its processing on the server rather than in the client, much less data has to be moved between client and server. 


Finally, Data Pump was designed from the ground up to take advantage of modern hardware and operating system architectures in ways that original Export and Import cannot. These factors combine to produce significant performance improvements for Data Pump over original Export and Import.


How much faster is Data Pump than the original Export and Import utilities?

For a single stream, Data Pump Export is approximately 2 times faster than original Export and Data Pump Import is approximately 15 to 40 times faster than original Import. Speed can be dramatically improved using the PARALLEL parameter.

Why is Data Pump slower on small jobs?

Data Pump was designed for big jobs with lots of data. Each Data Pump job has a
master table that has all the information about the job and is needed for restartability. The overhead of creating this master table makes small jobs take longer, but the speed in processing large amounts of data gives Data Pump a significant advantage in medium and larger jobs.

Are original Export and Import going away?

Original Export is being deprecated with the Oracle Database 11g release. Original
Import will always be supported so that dump files from earlier releases (release 5.0 and later) will be able to be imported. Original and Data Pump dump file formats are not compatible.

Are Data Pump dump files and original Export and Import dump files compatible?

No, the dump files are not compatible or interchangeable. If you have original Export dump files, you must use original Import to load them.

How can I monitor my Data Pump jobs to see what is going on?

In interactive mode, you can get a lot of detail through the STATUS command. In
SQL, you can query the following views (see the example query after this list):

• DBA_DATAPUMP_JOBS – all active Data Pump jobs and the state of each job
• USER_DATAPUMP_JOBS – summary of the user's active Data Pump jobs
• DBA_DATAPUMP_SESSIONS – all active user sessions that are attached to a Data Pump job
• V$SESSION_LONGOPS – shows progress on each active Data Pump job
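For example, a quick status check from SQL*Plus:

SQL> select owner_name, job_name, operation, state from dba_datapump_jobs;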

Can I use gzip with Data Pump?

You can do a Data Pump Export and then run the dump files through gzip to compress them. Data Pump itself cannot compress the table data as the dump files are written. (The COMPRESSION parameter can be used to compress metadata, and metadata compression is enabled by default in Oracle Database 10g Release 2.)
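For example, requesting metadata compression explicitly (10g Release 2 syntax; Oracle Database 11g extends the COMPRESSION parameter to table data with COMPRESSION=ALL):

cmd> expdp username/password DIRECTORY=dpumpdir1 DUMPFILE=meta.dmp
COMPRESSION=METADATA_ONLY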



Regards,
Navaneeth