Redo log files are primarily intended for recovery after a media failure.
Every Oracle database has a set of two or more redo log files. If a failure prevents the data in the buffer cache from being written to the datafiles, we always have the option of retrieving that data from the redo log files.
A redo log file stores the modifications made to data.
To guard against a failure involving the redo log itself, Oracle offers multiplexed redo logs, in which two or more copies of each redo log file are maintained on different disks.
Applying data from a redo log file during a recovery operation is known as rolling forward.
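As a sketch of how multiplexing is set up, a second member can be added to an existing redo log group with a statement like the one below (the group number and file path are illustrative placeholders, not values from any particular environment):

```sql
-- Add a second member on a different disk to redo log group 1,
-- so a failure of one disk does not lose the redo log
ALTER DATABASE ADD LOGFILE MEMBER 'd:\oradata\redo01b.log' TO GROUP 1;
```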
Friday, May 22, 2009
Wednesday, May 20, 2009
A-Z of RMAN
Reference: http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmarchi.htm#BGBJCJBF
Target database:
The control files, datafiles, and optional archived redo logs that RMAN is in charge of backing up or restoring. RMAN uses the target database control file to gather metadata about the target database and to store information about its own operations. The work of backup and recovery is performed by server sessions running on the target database.
RMAN client:
The client application that manages backup and recovery operations for a target database. The RMAN client can use Oracle Net to connect to a target database, so it can be located on any host that is connected to the target host through Oracle Net.
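For example, the RMAN client can be started from the operating-system prompt and connected to a target database through Oracle Net; the credentials and service name below are placeholders for your own environment:

```
rman target sys/password@ORCL

RMAN> backup database;
```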
To be continued-
Cloning a PeopleSoft database:
Apart from updating PSDBOWNER with the new database name, there is something else that has to be changed.
This is the value of the GUID field in the tools table PSOPTIONS. The environment would still work without this change, but it is advisable when the database and its clone reside on the same server: PSEMAgent would otherwise assume that both instances are the same environment, causing confusion.
Remedy:
The value of the GUID can be set to a space character.
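A sketch of the remedy in SQL, run against the cloned database's PeopleSoft schema (table and column names as given above):

```sql
-- Clear the GUID on the clone; PeopleTools regenerates it on next
-- sign-on, so PSEMAgent sees the clone as a distinct environment
UPDATE PSOPTIONS SET GUID = ' ';
COMMIT;
```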
Monday, May 11, 2009
Data Pump Notes
Oracle data pump documentation:
The following points may be noted about Data Pump:
1) Its dump files are not compatible with the original exp & imp utilities.
2) Self-tuning: parameters like BUFFER are no longer required.
3) Parallelism: data is accessed through multiple paths.
4) It runs on the server side rather than on the client side.
Directory Object:
This is the database object through which Oracle writes the dump files and log files.
To perform datapump operation:
1) Create directory object:
SQL> create directory datapump as 'c:\datapump';
Here a directory object named datapump is created, pointing to the folder c:\datapump. (Note that CREATE DIRECTORY only registers the path; the operating-system folder must already exist.) This directory is where the log and dump files will be stored, and it will be passed as a parameter when performing a Data Pump export or import.
One thing to note here is that a log file with the same name in the directory will be overwritten, whereas a dump file with the same name will cause an error.
2) Give Access:
SQL> grant exp_full_database to user;
SQL>grant read, write on directory datapump to user;
3) Export Command:
expdp username/pwd FULL=y DIRECTORY=datapump DUMPFILE=expdata.dmp LOGFILE=expdata.log
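The corresponding import uses impdp with the same directory object; this is a sketch in which the dump-file name simply mirrors the export above and the log-file name is an assumed example:

```
impdp username/pwd FULL=y DIRECTORY=datapump DUMPFILE=expdata.dmp LOGFILE=impdata.log
```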