Overview of database recovery techniques in DBMS

·         Database recovery is the process of restoring data in a database after it is lost or deleted, whether through a system crash, hacking, errors in a transaction, accidental damage, viruses, sudden catastrophic failure, incorrect execution of commands, or similar causes.

·         Recovery techniques depend on a special file known as the system log. The log tracks the start and end of each transaction and records every update made during the transaction. This information is needed to recover from a transaction failure via rollback.

Log-based Recovery

The log is a sequence of records that captures the actions performed by transactions. It is important that the log records are written before the actual modification and stored on stable, fail-safe storage media.

Log-based recovery works as follows −

·        The log file is kept on a stable storage media.

·        When a transaction enters the system and starts execution, it writes a log about it.

<Tn, Start>

·        When the transaction modifies an item X, it writes a log record as follows −

<Tn, X, V1, V2>

This record says that Tn has changed the value of X from V1 to V2.

·        When the transaction finishes, it logs −

<Tn, commit>
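The log records above can be sketched as a minimal write-ahead log. This is an illustration only; the class and record names are invented for the example, and a real DBMS would flush each record to stable storage rather than keep it in a list:

```python
# Minimal write-ahead-log sketch: records are appended (and, in a real
# system, forced to stable storage) BEFORE the data item is modified.
class WriteAheadLog:
    def __init__(self):
        self.records = []  # stand-in for a log file on stable storage

    def start(self, tn):
        self.records.append(("start", tn))                   # <Tn, Start>

    def update(self, tn, item, old, new):
        self.records.append(("update", tn, item, old, new))  # <Tn, X, V1, V2>

    def commit(self, tn):
        self.records.append(("commit", tn))                  # <Tn, commit>

db = {"X": 100}
log = WriteAheadLog()
log.start("T1")
log.update("T1", "X", db["X"], 150)  # write the log record first ...
db["X"] = 150                        # ... then modify the item
log.commit("T1")
```

Because the update record holds both the old value (V1) and the new value (V2), it contains enough information to either undo or redo the change during recovery.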


·         Recovery techniques based on deferred updates, immediate updates, or backing up data can therefore be used to prevent loss of data in the database.

  • Recovery is the process of restoring a database to the correct state in the event of a failure.
  • It ensures that the database is reliable and remains in a consistent state in case of failure.



Undoing – If a transaction fails, the recovery manager may undo it, i.e. reverse its operations. There are two major techniques for recovery from transaction failure: delayed (deferred) updates and immediate updates.

Delayed (deferred) update – This technique updates the database on disk only after a transaction reaches its commit point. Before that, all of the transaction's updates are recorded in its local workspace. If the transaction fails before committing, it will not have changed the database in any way, so no UNDO is needed. This is also known as the NO-UNDO/REDO algorithm.

Immediate update – In this technique, the database may be updated by some operations of a transaction before it reaches its commit point. These operations are recorded in a log on disk before they are applied to the database, so recovery is still possible. If a transaction fails, its operations must be undone, i.e. the transaction must be rolled back; hence both undo and redo are required. This is known as the UNDO/REDO algorithm.
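The undo/redo idea for immediate updates can be sketched as a single recovery pass over the log: redo the updates of committed transactions, then undo, in reverse order, the updates of transactions that have no commit record. This is a simplified illustration under assumed log-record shapes, not a production algorithm:

```python
# Simplified UNDO/REDO recovery for the immediate-update technique.
# Log records: ("start", tn), ("update", tn, item, old, new), ("commit", tn)
def recover(db, log):
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    # REDO: reapply the updates of committed transactions, in log order.
    for rec in log:
        if rec[0] == "update" and rec[1] in committed:
            _, tn, item, old, new = rec
            db[item] = new
    # UNDO: reverse the updates of uncommitted transactions, newest first.
    for rec in reversed(log):
        if rec[0] == "update" and rec[1] not in committed:
            _, tn, item, old, new = rec
            db[item] = old
    return db

log = [
    ("start", "T1"), ("update", "T1", "A", 10, 20), ("commit", "T1"),
    ("start", "T2"), ("update", "T2", "B", 5, 9),   # T2 never committed
]
# B was already updated on disk before the crash (immediate update).
db = recover({"A": 10, "B": 9}, log)
# db is now {"A": 20, "B": 5}: T1 is redone, T2 is rolled back.
```

Storing the old value in every update record is what makes the UNDO pass possible; the deferred-update technique never needs it, because uncommitted changes never reach the disk.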

Caching/Buffering – The DBMS keeps a collection of main-memory buffers, called the DBMS cache, that hold copies of the data items on disk pages; these buffers are maintained under the control of the DBMS.

Shadow paging – This technique provides atomicity and durability. A directory with n entries is constructed, where the ith entry points to the ith database page on disk. When a transaction begins executing, the current directory is copied into a shadow directory. When a page is to be modified, a new (shadow) page is allocated and the changes are made there; when the changes are ready to become durable, all entries that refer to the original page are updated to refer to the new replacement page.
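A toy version of shadow paging can make the directory-copy step concrete. The page contents and variable names here are invented for illustration:

```python
import copy

# Toy shadow paging: the directory maps page numbers to physical pages.
pages = {0: ["row-a"], 1: ["row-b"]}
current_dir = {0: 0, 1: 1}            # ith entry -> ith database page

# Transaction begins: copy the current directory into a shadow directory.
# The shadow directory is never touched while the transaction runs.
shadow_dir = dict(current_dir)

# To modify page 1, allocate a new page, make the change there, and
# point the CURRENT directory at it; the shadow directory still points
# at the unmodified original page.
pages[2] = copy.deepcopy(pages[1]) + ["row-c"]
current_dir[1] = 2

# Commit: atomically install current_dir as the on-disk directory and
# discard the shadow.  Abort: simply restore the shadow directory.
aborted = False
if aborted:
    current_dir = shadow_dir
```

Atomicity comes from the fact that abort is just a pointer restore: no modified page is ever visible through the shadow directory, so nothing needs to be undone.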


Database recovery can be classified into two parts:

1. Rolling forward applies redo records to the corresponding data blocks.
2. Rolling back applies rollback segments to the data files; rollback information is stored in transaction tables.


Classification of failure

The following points generalize failures into various classifications, to help examine the source of a problem:

  1. Transaction failure: a transaction has to abort when it reaches a point from which it cannot proceed any further, or when it fails to execute an operation.
    Reasons for transaction failure include:
    • Logical errors: errors in the code, or some fundamental error condition, that prevent a transaction from completing properly.
    • System errors: errors that occur when the database management system cannot execute an active transaction, or must abort it because of some condition in the system.
  2. System crash: external problems may stop the system unexpectedly and cause it to crash. For example, a disturbance or interruption in the power supply may cause the underlying hardware or software to fail.
  3. Disk failure: disk failures include the development of bad sectors, disk inaccessibility, a head crash, and other failures that destroy all or part of the disk storage.

Storage structure

The storage structure can be classified into two following categories,

  • Volatile (temporary) storage: volatile storage does not survive system crashes. These devices sit close to the CPU. Examples of volatile storage are main memory and cache memory.
  • Non-volatile storage: non-volatile storage is designed to survive system crashes. These devices offer large storage capacity but slower access. Examples of non-volatile storage are hard disks, magnetic tapes, and flash memory.


Recovery and Atomicity

When a DBMS recovers from a crash, it should maintain the following −

·        It should check the states of all the transactions, which were being executed.

·        A transaction may be in the middle of some operation; the DBMS must ensure the atomicity of the transaction in this case.

·        It should check whether the transaction can be completed now or if it needs to be rolled back.

·        No transaction should be allowed to leave the DBMS in an inconsistent state.

There are two types of techniques, which can help a DBMS in recovering as well as maintaining the atomicity of a transaction −

·        Maintaining the logs of each transaction, and writing them onto some stable storage before actually modifying the database.

·        Maintaining shadow paging, where the changes are made in volatile memory and the actual database is updated later.

Recovery with Concurrent Transactions

·         When multiple transactions execute concurrently, their log records are interleaved. At recovery time it would be difficult for the recovery system to reorder and replay every log record from the beginning. Modern database systems therefore use the abstraction of 'checkpoints' to simplify this situation.


  • A checkpoint acts like a benchmark.
  • Checkpoints are also called syncpoints or savepoints.
  • It is a mechanism by which all previous log records are moved out of the system and stored permanently on a storage device.
  • It declares a point before which the database management system was in a consistent state and all transactions had committed.
  • It is a point of synchronization between the database and the transaction log file.
  • It involves operations such as writing the log records in main memory to secondary storage, writing the modified blocks in the database buffers to secondary storage, and writing a checkpoint record to the log file.
  • The checkpoint record contains the identifiers of all transactions that are active at the time of the checkpoint.
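The checkpoint operations listed above can be sketched as three steps: force buffered log records to secondary storage, flush modified database buffers, and append a checkpoint record naming the active transactions. Plain lists and dicts stand in for disk storage here; this is an illustration, not a real buffer manager:

```python
# Simplified checkpointing.  log_buffer / dirty_pages model main memory;
# stable_log / disk_pages model secondary storage.
log_buffer = [("start", "T3"), ("update", "T3", "X", 1, 2)]
stable_log = []
dirty_pages = {"X": 2}
disk_pages = {"X": 1}
active = {"T3"}  # transactions running at checkpoint time

def checkpoint():
    stable_log.extend(log_buffer)          # 1. force log records to disk
    log_buffer.clear()
    disk_pages.update(dirty_pages)         # 2. flush modified buffers
    dirty_pages.clear()
    stable_log.append(("checkpoint", sorted(active)))  # 3. checkpoint record

checkpoint()
```

After the checkpoint, recovery only needs to examine log records written after the checkpoint record, plus the transactions it lists as active, rather than the entire log.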


Recovery Techniques:

  1. Recovery program: run after a crash to attempt to restore the system to a valid state. Used when all other techniques fail or were not used; useful when buffers were lost in a crash and their contents must be reconstructed.
  2. Incremental dumping: modified files are copied to archival storage after a job completes or at regular intervals.
  3. Audit trail: the sequence of actions on files is recorded; well suited to "backing out" of transactions.
  4. Differential files: a separate file is maintained to keep track of changes and is periodically merged with the main file.
  5. Backup/current version: the present files form the current version of the database; files containing previous values form a consistent backup version.
  6. Multiple copies: multiple active copies of each file are maintained during normal operation of the database. In case of failure, comparing the versions can be used to find a consistent one.
  7. Careful replacement: nothing is updated in place; the original is deleted only after the operation is complete.
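Careful replacement can be illustrated with the familiar write-to-temporary-file-then-rename idiom. The function name is invented for the example; `os.replace` performs an atomic rename on both POSIX and Windows:

```python
import os
import tempfile

def careful_write(path, data):
    """Careful replacement: never update the file in place.  Write the
    new contents to a temporary file first; only after the write has
    fully succeeded is the original replaced (and thereby deleted)."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)   # temp file on same volume
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        os.replace(tmp, path)   # atomic swap: old version removed only now
    except BaseException:
        os.remove(tmp)          # a failed write leaves the original intact
        raise
```

If a crash occurs mid-write, the original file is still present and consistent; at worst a stray temporary file is left behind to clean up.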


