
UNIT-VI

What is a transaction? - Transaction states - Implementation of atomicity and durability - Concurrency control - Serializability - Testing for serializability - Concurrency control with locking methods - Concurrency control with timestamping methods - Concurrency control with optimistic methods - Database recovery management - Validation-based protocols - Multiple granularities

What is a transaction?
Transaction processing systems collect, store, modify and retrieve the transactions of an organisation.
A transaction is an event that generates or modifies data that is eventually stored in an information system.
A transaction is a small application program, or a unit of a program, executed by the user.
Every transaction is written in a high-level data manipulation language or programming language.
Transaction states:-
For recovery purposes, the system needs to keep track of when a transaction starts, terminates, and commits or aborts.
Every transaction has 5 states, which are as follows:

Active state
Partially committed state
Committed state
Failed state
Abort state

Active state:-
The active state marks the beginning of transaction execution.
Or
The active state is the initial state of the transaction.
If there is an error in the transaction, it goes to the failed state.
If there is no error in the transaction, it goes to the partially committed state.
Partially committed state:-
A transaction moves from the active state to the partially committed state when it has executed its last operation but its results have not yet been written permanently to the database.
Failed state:-
Whenever a transaction cannot continue due to some failure, that is, a hardware or software failure or a system crash, the transaction enters the failed state.
Committed state:-
Commit signals the successful end of the transaction, so that any changes executed by the transaction can be safely made permanent in the database.
Or
If the transaction executes successfully, it enters the committed state.
Abort state:-
The abort state is also called the rollback state.
Abort signals that the transaction has ended unsuccessfully, so that any changes or effects the transaction may have applied to the database must be undone.
On a failed transaction, the abort state performs one of 2 operations, as follows:

Kill the transaction
Restart the transaction

Kill the transaction:-
The transaction is killed when it contains logical errors; recovery then means reconstructing a new transaction and executing that new transaction.
Restart the transaction:-
The transaction is restarted through a recovery mechanism after identifying and rectifying the type of error.
Examples of recovery mechanisms are:
Write-ahead logging protocol
Shadow paging technique
Or
Shadow database scheme.
In the abort state a transaction may be restarted later, either automatically or after being resubmitted by the user.
The transaction state diagram is as follows:
(Figure: transaction state diagram showing the five states above, not reproduced here.)

Types of transactions:-
Transactions are classified into 2 types, namely:
1. serial transactions
2. non-serial transactions
Serial transactions:-
A serial transaction follows the property called serializability.
A list of operations executed serially (in sequence) forms a serial transaction.
Ex:- consider an example of read and write operations performed serializably, i.e., transferring funds of 50 from account A to account B.
T1: READ (A)
A: A-50
WRITE (A)
T2: READ (B)
B: B+50
WRITE (B)
Non-serial transactions:-
Non-serial transactions are also called concurrent transactions.
Transactions executing in parallel (non-serial) order are called non-serial transactions.
Non-serial transactions follow the property called non-serializability.
Ex:- consider an example of read and write operations performed non-serializably, i.e., transferring funds between both accounts concurrently.

T1: READ (A)
A: A-50
WRITE (A)
T2: READ (B)
B: B-50
WRITE (B)
T3: READ (B)
B: B+50
WRITE (B)
T4: READ (A)
A: A+50
WRITE (A)
Schedule:-
The order in which the list of operations of a set of transactions is executed is called a schedule.
Schedules are again classified into 2 types:
serial schedules
non-serial schedules
A serial schedule corresponds to serial transactions.
A non-serial schedule corresponds to concurrent (non-serial) transactions.

Note:-
A schedule lists all the operations of its transactions between their begin and end statements.

Properties of transactions:-
Every transaction is executed by following 4 properties, called the ACID properties, namely:

A - Atomicity
C - Consistency
I - Isolation
D - Durability
Atomicity:-
Atomicity means that a transaction either executes completely and successfully, or does not execute at all; partial execution is not allowed.
Consistency:-
If the database is consistent before a transaction, then the result of the transaction must leave the database consistent as well.
Isolation:-
A DBMS allows transactions to execute concurrently, but the transactions must not know about one another. Isolation leads to complications when transactions execute concurrently.
These complications amount to a loss of data.
To recover the lost data we use recovery mechanisms.
Such mechanisms include the 2-phase locking protocol and the 2-phase commit protocol.
Durability:-
If a transaction executes successfully, its result is stored persistently, i.e., if any failure occurs later it does not affect the result of the transaction.
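As an illustration of atomicity and durability together, the following is a minimal sketch in Python using the built-in sqlite3 module; the account table and balances are hypothetical examples, not part of any particular system discussed here. Either both updates of the transfer become durable at commit, or the rollback undoes them all.

# A minimal sketch of atomicity and durability with Python's built-in
# sqlite3 module. Table name and starting balances are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES ('A', 700), ('B', 500)")
conn.commit()

try:
    # Both updates form one transaction: either both persist or neither does.
    conn.execute("UPDATE account SET balance = balance - 50 WHERE name = 'A'")
    conn.execute("UPDATE account SET balance = balance + 50 WHERE name = 'B'")
    conn.commit()          # committed state: changes become durable
except Exception:
    conn.rollback()        # abort state: all partial changes are undone

print(dict(conn.execute("SELECT name, balance FROM account")))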

Implementation of atomicity and durability:-
Problems can occur with atomicity and durability, so we implement atomicity and durability explicitly.
Atomicity problem:-
An atomicity problem occurs when, due to a hardware or software failure or a system crash, a transaction does not execute completely; this leads to an inconsistent database.
Durability problem:-
If the atomicity property fails it also breaks durability, which means the result is not stored persistently.
To overcome the atomicity and durability problems we implement atomicity and durability with the help of recovery mechanisms, namely:

Write-ahead logging protocol
Shadow paging technique
Or
Shadow database scheme

Write-ahead logging protocol:-

To be able to recover from failures that affect transactions, the system maintains a log to keep track of all transaction operations that affect the values of database items. This information may be needed to permit recovery from failures. The log is kept on disk, so it is not affected by any type of failure except a disk (or catastrophic) failure. In addition, the log is periodically backed up to archival storage (tape) to guard against such disk failures.
The log is the file used by this recovery mechanism; the write-ahead rule says a log record must reach the disk before the change it describes.
The log contains all the details of a transaction needed for the redo and undo operations.
If our transaction fails while executing, we can recover the lost data with the help of the log file after finding and rectifying the type of error.
Here the type of error means a hardware failure, system crash, software failure or improper logic.
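The following is a minimal sketch (in Python) of the write-ahead rule only: the log record describing a change is forced to stable storage before the change itself is applied. The file name and the JSON record format are illustrative assumptions, not the actual on-disk format of any DBMS.

# A minimal sketch of the write-ahead rule: log first, then change the data.
import json, os

def wal_write(db, log_path, txn, item, new_value):
    record = {"txn": txn, "item": item, "old": db.get(item), "new": new_value}
    with open(log_path, "a") as log:
        log.write(json.dumps(record) + "\n")
        log.flush()
        os.fsync(log.fileno())            # force the log record to disk first
    db[item] = new_value                  # only then apply the change itself

db = {"A": 700}
wal_write(db, "wal.log", "T1", "A", 650)
print(db)                                 # {'A': 650}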
Shadow paging technique:-
The shadow paging technique is also called the shadow database scheme.
The shadow paging technique is one of the recovery mechanisms.
The shadow paging technique maintains 2 copies of the database, namely the old copy and the shadow copy.
The shadow copy is also called the new copy, and the old copy is also called the original copy.
Every database contains a database pointer.
The database pointer is used to point to the current copy of the database.
The shadow paging technique does not work with concurrent transactions; it works only with serial transactions.
It is not suitable for large databases.
During transaction execution the old copy is never modified.
On failure, the database pointer moves back to the old copy. By doing this we can recover the lost data of a particular transaction.
In a multi-user environment with concurrent transactions, logs and checkpoints must be incorporated into the shadow paging technique.
If the operations on the new copy of the database execute successfully, we delete the old copy of the database.
The diagrammatic representation of the shadow paging technique is as follows:
(Figure: database pointer switching between the old copy and the shadow copy, not reproduced here.)
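A minimal sketch of the pointer-switching idea, assuming a simple in-memory database of pages: updates go to a fresh copy, and the database pointer is moved to that copy only on success, so a failed transaction leaves the old copy untouched. The class and names are illustrative, not a real storage engine.

# A minimal sketch of the shadow copy scheme: the pointer names the live copy.
import copy

class ShadowDB:
    def __init__(self, pages):
        self.current = pages                   # the database pointer

    def run(self, transaction):
        shadow = copy.deepcopy(self.current)   # old copy is never modified
        try:
            transaction(shadow)
            self.current = shadow              # commit: pointer moves to shadow copy
        except Exception:
            pass                               # abort: pointer still names old copy

db = ShadowDB({"A": 700, "B": 500})
db.run(lambda pages: pages.update(A=pages["A"] - 50, B=pages["B"] + 50))
print(db.current)                              # {'A': 650, 'B': 550}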

Concurrency control:-
Database management systems allow multiple transactions to execute concurrently.
Concurrent transactions have advantages compared with serial transactions.
The advantages are:
Increased processing speed and disk utilisation.
Reduced average response time for transactions.


Concurrency control schemes:-
To achieve the isolation property, that is, to control the interactions among concurrent transactions and to keep the database consistent, we use concurrency control schemes.
Eg:-
Locking protocols
Timestamp-based protocols
Optimistic methods

Concurrency control through locking techniques:-
A lock is a variable associated with a data item.
Locking is a technique for concurrency control; lock information is managed by the lock manager.
Every database management system contains a lock manager.
The main techniques used to control concurrent execution of transactions are based on the concept of locking.
Types of locks:-
Locks are classified into 3 types:
1. binary locks
2. multiple-mode locks
3. 2-phase locking
Binary locking:-
Binary locking is applied at the different levels of granularity.
In binary locking every lock has 2 states:
Locked state
Unlocked state
The locked state is denoted by the symbol 1.
The unlocked state is denoted by the symbol 0.
Locking means acquiring the lock.
Unlocking means releasing the lock.
There are 4 levels of granularity, shown below:
1. column level
2. row level
3. page level
4. table level
To control concurrent transactions we use binary locking.
A binary lock is simple to implement by representing the lock as a record of the form
<data item name, LOCK, locking transaction>, plus a queue of transactions waiting for the data item X.
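A minimal sketch of that lock record in Python: one bit of state, the holding transaction, and a queue of waiters, as in the record form above. The class and method names are illustrative only.

# A minimal sketch of a binary lock with a queue of waiting transactions.
from collections import deque

class BinaryLock:
    def __init__(self, item):
        self.item = item
        self.state = 0          # 0 = unlocked, 1 = locked
        self.holder = None      # the locking transaction
        self.queue = deque()    # transactions waiting for this item

    def lock(self, txn):
        if self.state == 0:
            self.state, self.holder = 1, txn
            return True
        self.queue.append(txn)  # caller must block until granted
        return False

    def unlock(self, txn):
        assert self.holder == txn, "only the holder may unlock"
        if self.queue:
            self.holder = self.queue.popleft()  # grant to the next waiter
        else:
            self.state, self.holder = 0, None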
Multiple-mode locking (or shared/exclusive locking):-
Multiple-mode locking is also called shared/exclusive locking or read/write locking.
Every database management system contains a lock manager.
These locks are of 2 types:
1. Shared lock
2. Exclusive lock
Shared lock:-
Consider transactions T1 and T2, where T1 holds a shared lock on a data item. When T2 wants to read the same data item, it sends a request to the lock manager; because a shared lock permits concurrent readers, the lock manager gives permission and T2 reads the data.
Exclusive lock:-
Consider transactions T1 and T2, where T1 holds an exclusive lock on a data item. When T2 wants to read the data item, it sends a request to the lock manager, but the lock manager does not release the item to T2 while T1 holds the lock. When transaction T1 completes, the lock is released automatically and T2 can then read the data. Until permission is given, T2 waits (is suspended).
If LOCK(X) = write-locked, the value of locking transactions is the single transaction that holds the exclusive (write) lock on X.
If LOCK(X) = read-locked, the value of locking transactions is a list of one or more transactions that hold the shared (read) lock on X.
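A minimal sketch of a lock table following exactly these rules: several transactions may share the read lock on an item, but the write lock excludes every other holder. This is an illustrative in-memory structure, not a real lock manager.

# A minimal sketch of shared/exclusive (read/write) locking.
class LockTable:
    def __init__(self):
        self.readers = {}   # item -> set of transactions holding read locks
        self.writer = {}    # item -> the single transaction holding the write lock

    def read_lock(self, txn, item):
        if self.writer.get(item):                     # write-locked: must wait
            return False
        self.readers.setdefault(item, set()).add(txn)
        return True

    def write_lock(self, txn, item):
        others = self.readers.get(item, set()) - {txn}
        if others or self.writer.get(item):           # any other holder blocks us
            return False
        self.writer[item] = txn
        return True

    def unlock(self, txn, item):
        self.readers.get(item, set()).discard(txn)
        if self.writer.get(item) == txn:
            del self.writer[item]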

2-phase locking:-
A transaction is said to follow the two-phase locking protocol if all its locking operations (read-lock, write-lock) precede its first unlock operation.
(Or)
To follow the two-phase locking protocol, a transaction must acquire all of its locks before it releases any lock.
Such a transaction can be divided into 2 phases:
1. growing phase
2. shrinking phase
Growing phase:-
The growing phase is also called the expanding phase.
The growing phase is the first phase, during which new locks on items can be acquired but none can be released.
(Or)
In the growing phase the transaction acquires locks, but no lock is released.
Shrinking phase:-
The shrinking phase is the second phase, during which existing locks can be released but no new locks can be acquired.
(Or)
No new locks can be granted.

(Figure: the number of locks held by the transaction grows from T-start to the lock point, then shrinks until T-end.)
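A minimal sketch of the two-phase discipline itself, with illustrative bookkeeping: once the first unlock happens, the transaction is in its shrinking phase and any further lock request raises an error, which is exactly the situation 2PL forbids.

# A minimal sketch of the 2PL discipline: acquire all, then release all.
class TwoPhaseTransaction:
    def __init__(self, name):
        self.name = name
        self.locks = set()
        self.shrinking = False

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violated: cannot lock after first unlock")
        self.locks.add(item)              # growing phase: acquire, never release

    def unlock(self, item):
        self.shrinking = True             # first unlock begins the shrinking phase
        self.locks.discard(item)          # shrinking phase: release, never acquire

t = TwoPhaseTransaction("T1")
t.lock("A"); t.lock("B")                  # growing phase
t.unlock("A"); t.unlock("B")              # shrinking phase
# t.lock("C") here would raise, because the growing phase has ended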

Note:-
Locking mechanisms for concurrency control suffer from the deadlock problem.

Deadlock:-
Deadlock occurs when each transaction T in a set of 2 or more transactions is waiting for some data item that is locked by some other transaction T' in the set.

Example:-
T1                          T2
READ LOCK(X)
                            READ LOCK(X)
WRITE LOCK(X)  (T1 waits for T2 to release its read lock)
                            WRITE LOCK(X)  (T2 waits for T1: deadlock)

Deadlock can be solved by using some other concurrency control mechanisms, such as:
1. optimistic concurrency control
2. deadlock prevention protocols
3. validation-based protocols
4. deadlock detection protocols
5. timeouts
6. deadlock avoidance protocols
Serializability:-
Serializability means that a list of instructions (or operations) executed serially or non-serially produces the same result.
Here we use the concept of schedules.
If a schedule is serializable then the database is always consistent.
Serializability is classified into 2 types:
1. Conflict serializability
2. View serializability

Conflict serializability:-
Consider 2 transactions TA and TB with instructions IA and IB that perform read and write operations on the same data item X; such operations belong to conflict serializability.
In conflict serializability, IA is dependent on IB and IB is dependent on IA.
Conflicts are of 4 types:
1. Read-read conflict (not a true conflict, so it mostly does not occur)
2. Read-write conflict
3. Write-read conflict
4. Write-write conflict
To decide conflict serializability we use the test for serializability with the precedence graph (also called the directed graph or serializability graph).

A directed graph can be represented as:

Ta -> Tb

Every directed graph contains vertices (nodes) and edges (arcs).
Transactions are represented as nodes.
Conflicting operations are represented as edges.
If the precedence graph is acyclic, the transaction dependencies can be ordered with no conflicts and the schedule is conflict serializable; if the graph contains a cycle, it is not.
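A minimal sketch of this test in Python: build the precedence graph from a schedule of (transaction, operation, item) steps, then check the graph for a cycle. The sample schedule at the end is illustrative.

# A minimal sketch of testing conflict serializability with a precedence graph.
def precedence_graph(schedule):
    edges = set()
    for i, (ti, op_i, x) in enumerate(schedule):
        for tj, op_j, y in schedule[i + 1:]:
            # conflicting ops: same item, different txns, at least one write
            if x == y and ti != tj and "write" in (op_i, op_j):
                edges.add((ti, tj))
    return edges

def has_cycle(edges):
    graph = {}
    for u, v in edges:
        graph.setdefault(u, set()).add(v)
    def reachable(start, goal, seen=()):
        return any(n == goal or (n not in seen and reachable(n, goal, seen + (n,)))
                   for n in graph.get(start, ()))
    return any(reachable(v, u) for u, v in edges)

# T1 writes A, T2 reads A, T2 writes B, T1 reads B: edges in both directions.
s = [("T1", "write", "A"), ("T2", "read", "A"),
     ("T2", "write", "B"), ("T1", "read", "B")]
edges = precedence_graph(s)
print(edges, "-> serializable" if not has_cycle(edges) else "-> NOT serializable")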
Testing for serializability:-
Testing for serializability means testing the schedule for conflict serializability.
Read-read conflict:-
Consider 2 transactions TA and TB with instructions IA and IB, both reading the same data item X. Since neither read changes X, the order of the two reads does not affect the result; this is why a read-read conflict mostly does not occur.
Read-write conflict:-
Consider 2 transactions TA and TB with instructions IA and IB. Suppose IA wants to perform a read operation on data item X; it cannot read until its order with the write operation on X by instruction IB in transaction TB is fixed, because the value IA sees depends on whether IB writes before or after it.
Write-read conflict:-
Consider 2 transactions TA and TB with instructions IA and IB. Suppose IA wants to perform a write operation on data item X; it cannot write until its order with the read operation on X by instruction IB in transaction TB is fixed, because the value IB reads depends on whether IA writes before or after it.
Write-write conflict:-
Consider 2 transactions TA and TB with instructions IA and IB. Suppose IA wants to perform a write operation on data item X; it cannot write until its order with the write operation on X by instruction IB in transaction TB is fixed, because the final value of X depends on which write executes last.
View serializability:-
View serializability is another form of serializability.
Here, for example, we first execute all the operations (instructions) of TA and then move to TB, or else we first execute all the operations of TB and then TA.
Example:-
TA              TB
Read (A)
A: A-50
Write (A)
Read (B)
B: B+50
Write (B)
                Read (A)
                A: A-50
                Write (A)
                Read (B)
                B: B+50
                Write (B)

Timestamp mechanisms to control concurrent transactions:-
The timestamp mechanism has a timestamp manager.
Timestamp managers are classified into 2 types:
1. Global clock manager
2. Local clock manager
The timestamp mechanism is used to decide what to do if transactions would become involved in a deadlock situation; to resolve it we perform operations such as:
1. Abort (or pre-empt) the transaction.
2. Kill the transaction.
The timestamp of each transaction is denoted by TS(TA).
Here,
TS is the timestamp value and
TA is transaction A.
Every timestamp has 2 properties:
1. Uniqueness
2. Monotonicity
Uniqueness:-
Every transaction has a unique timestamp value.
Monotonicity:-
The timestamp value always increases.
Example:-
If T1 started before T2, then it is noted as
TS(T1) < TS(T2).
Here T1 is the older transaction and T2 is the younger transaction.
There are 2 schemes to prevent deadlock, which are as follows:
Wait-die and
Wound-wait
Wait-die:-
Suppose transaction T1 tries to lock data item X but is not able to, because X is already locked by T2 with a conflicting lock. The rule is:
If TS(T1) < TS(T2) then T1 is allowed to wait; otherwise T1 is the younger transaction, so T1 dies (is aborted) and is restarted later with the same timestamp.
Wound-wait:-
Suppose T1 tries to lock X but is not able to, because X is locked by T2 with a conflicting lock. The rule is:
If TS(T1) < TS(T2) then abort T2 (T1 wounds T2) and restart it later with the same timestamp; otherwise T1 is allowed to wait.
In both schemes waiting is ordered by timestamp, so transactions can never wait on each other in a cycle and no deadlock can occur.
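A minimal sketch of both rules as decision functions over integer timestamps (smaller = older), returning what the requesting transaction T1 should do when T2 holds the lock. The encoding as strings is illustrative.

# A minimal sketch of the wait-die and wound-wait deadlock-prevention rules.
def wait_die(ts_requester, ts_holder):
    # Older requester waits; younger requester dies (aborts, restarts later).
    return "wait" if ts_requester < ts_holder else "die"

def wound_wait(ts_requester, ts_holder):
    # Older requester wounds (aborts) the holder; younger requester waits.
    return "wound holder" if ts_requester < ts_holder else "wait"

print(wait_die(1, 2))      # T1 older than T2 -> wait
print(wait_die(2, 1))      # T1 younger -> die
print(wound_wait(1, 2))    # T1 older -> abort T2 and take the lock
print(wound_wait(2, 1))    # T1 younger -> wait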
Optimistic concurrency control:-
The optimistic method is one mechanism to control concurrent transactions.
Optimistic concurrency control is also called the validation-based protocol.
Optimistic concurrency control does not use any locking or timestamp mechanism.
In this method every transaction is executed in 3 phases:
1. Read phase
2. Validation phase
3. Write phase

Read phase:-
In this phase we read the database (data items), perform some computations, and store the results in a temporary database (a local copy of the database). Then we move to the validation phase.
Validation phase:-
In this phase we validate the transaction whose results are stored in the temporary database.
If the validation is positive, we complete the transaction by copying the results from the temporary database to the permanent database.
If the validation is negative, we restart (reschedule) the transaction from the previous phase, that is, the read phase.
Write phase:-
In the write phase we write the transaction's results to the permanent database when the validation is positive.
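A minimal sketch of the three phases, with a deliberately simplified validation test: the transaction's read set must not overlap the items written by transactions that committed in the meantime. The class and the conflict test are illustrative assumptions, not a full validation algorithm.

# A minimal sketch of the read / validation / write phases.
class OptimisticTxn:
    def __init__(self, db):
        self.db, self.local, self.read_set = db, {}, set()

    def read(self, item):                     # 1. read phase: work on a local copy
        self.read_set.add(item)
        self.local[item] = self.db.get(item)
        return self.local[item]

    def write(self, item, value):
        self.local[item] = value

    def commit(self, committed_writes):
        if self.read_set & committed_writes:  # 2. validation phase
            return False                      # negative: restart from read phase
        self.db.update(self.local)            # 3. write phase: copy to database
        return True

db = {"A": 700, "B": 500}
t = OptimisticTxn(db)
t.write("A", t.read("A") - 50)
print(t.commit(committed_writes=set()), db)   # True {'A': 650, 'B': 500}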
Recovery management:-
Recovery management means applying recovery mechanisms to recover lost data through the shadow copy (shadow paging) technique and the write-ahead logging protocol, both of which are described above under the implementation of atomicity and durability.

Multiple granularities:-
Multiple-granularity locking is a locking mechanism for applying locks to data items of different sizes at the same time, whereas normal locking cannot lock different kinds of data items across transactions; normal locking is meant to lock a single data item at a time.
Multiple granularity is defined as applying locking at different levels.
Here, the different levels are the database, tables, columns, pages, rows, etc.
Multiple granularity is divided into 2 ways of applying locking:
1. fine granularity
2. coarse granularity
Fine granularity:-
Fine granularity applies locking from the bottom of the hierarchy to the top; because small items are locked, concurrency among concurrent transactions is high.
Coarse granularity:-
Coarse granularity applies locking from the top of the hierarchy to the bottom; because large items are locked, concurrency among concurrent transactions is low.
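A minimal sketch of the hierarchy idea: locking a node at one level (database, table, row) implicitly covers everything beneath it, so a coarse table lock covers all of the table's rows. Real systems also add intention locks; the paths below are illustrative.

# A minimal sketch of hierarchical (multiple-granularity) lock coverage.
locks = set()   # set of locked paths, e.g. ("db", "accounts") locks a whole table

def is_locked(path):
    # A node is covered if it, or any of its ancestors, is locked.
    return any(path[:i + 1] in locks for i in range(len(path)))

locks.add(("db", "accounts"))                         # coarse: lock the table
print(is_locked(("db", "accounts", "row7")))          # True: row is covered
print(is_locked(("db", "orders", "row1")))            # False: other table is free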

Unit-VII
RECOVERY SYSTEM
1. RECOVERY AND ATOMICITY
2. LOG BASED RECOVERY
3. RECOVERY WITH CONCURRENT TRANSACTIONS
4. BUFFER MANAGEMENT
5. FAILURE WITH LOSS OF NONVOLATILE STORAGE
6. ADVANCED RECOVERY TECHNIQUES
7. REMOTE BACKUP SYSTEMS

RECOVERY & ATOMICITY:

Consider 2 transactions on 2 accounts A and B, where we transfer funds from account A to account B; initially A contains 700 and B contains 500.
Now the user performs a transaction that transfers Rs.80 from account A to account B. If a power failure or any other error occurs during that transaction, the database is left in an inconsistent state: the transaction is stuck mid-way and Rs.80 has not been added to B. Whether the user re-executes the transaction or does not re-execute it, the database can still be inconsistent, because the original transaction was not committed. So we use a recovery technique to bring the database from the inconsistent state to a consistent state. Recovery techniques provide recovery mechanisms like log-based recovery, shadow paging, checkpoints, the write-ahead logging protocol, etc.
LOG-BASED RECOVERY:-
A log is a file which maintains all the details of transactions. The log is the most widely used structure for recording database modifications. The log is a sequence of log records and maintains a history of all update activities in the database. The log maintains details like:
* Transaction identifier
* Data-item identifier
* Old value
* New value
The transaction identifier refers to the transaction which executed the write operation.
The data-item identifier refers to the data item written to disk, that is, the location of the data item on disk.
The old value refers to the value of the data item before the write operation.
The new value refers to the value of the data item after the write operation.
Log records that are useful for recovery from system and disk failures must reside on stable storage. Log records record the significant activities associated with database transactions, such as the beginning of a transaction and the aborting or committing of a transaction, etc.
<T begin>
This refers to the beginning of transaction T.
<T, X, V1, V2>
This refers to transaction T performing a write operation on the data item X. The values of X before and after the write operation are V1 and V2 respectively.
<T commit>
This refers to the commit of transaction T.
The basic operations of the log are redo and undo.
Through the log we can recover lost data with 2 methods (or 2 tables):
1. Transaction table: it contains all the new values (updated operations) of a transaction.
2. Dirty page table: it contains all the old values (outdated operations) of a transaction.
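A minimal sketch of redo and undo over log records of the form (transaction, item, old value, new value): committed transactions are redone in a forward scan, uncommitted ones undone in a backward scan. The log contents below are illustrative.

# A minimal sketch of log-based recovery: redo committed, undo uncommitted.
log = [
    ("T1", "begin"), ("T1", "p", 25, 35), ("T1", "commit"),
    ("T2", "begin"), ("T2", "q", 35, 45),          # T2 never committed
]

def recover(db, log):
    committed = {rec[0] for rec in log if rec[1] == "commit"}
    for rec in log:                                 # forward scan: redo
        if len(rec) == 4 and rec[0] in committed:
            db[rec[1]] = rec[3]                     # re-apply the new value
    for rec in reversed(log):                       # backward scan: undo
        if len(rec) == 4 and rec[0] not in committed:
            db[rec[1]] = rec[2]                     # restore the old value
    return db

print(recover({"p": 25, "q": 45}, log))             # {'p': 35, 'q': 35}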
RECOVERY WITH CONCURRENT TRANSACTIONS:
1. Log-based recovery techniques are used for handling concurrent transactions.
2. The system not only deals with the execution of concurrent transactions but also maintains a log and a disk buffer.
3. In case of a transaction failure, an undo operation is performed during the recovery phase; undo is also called backward scanning (redo is also called forward scanning).
4. Consider an example: suppose the value of a data item X has been changed or modified by T1; then for restoring its value a log-based recovery technique can be used.
5. In this technique the old value of data item X is restored by the undo operation.
6. Only a single transaction should update a data item at any time, that is, another transaction may update it only after the first transaction commits.
Note:- Several transactions can perform different update operations safely by using the 2-phase locking protocol.
TRANSACTION ROLLBACK:-
Every transaction can be rolled back by using 2 operations:
1. Undo
2. Redo
Every update record in the log has the form <T, X, old value, new value>, for example <T, x, 15, 25>.
For example, consider 2 transactions T1 and T2 with the records <T1, p, 25, 35> and <T2, q, 35, 45>.
The above 2 transactions T1 and T2 have modified the values of their data items.
Hence a backward scan of the log assigns the value 25 to p, and if a forward scan is done then the value of p is 35. During the rollback of a transaction, if a data item was updated by T1 then no other transaction can perform an update operation on it.
CHECKPOINTS:-
1. A checkpoint is a recovery mechanism.
2. Checkpoints are synchronization points between the database and the log file, used for reducing the time taken to recover from a system failure.
3. After a system crash, without checkpoints the entire log has to be searched to determine which transactions need undo or redo operations.
4. Scanning the full log causes 2 major problems:
(a) It is time-consuming to search for the relevant records in the entire log.
(b) The redo operation takes a long time.
5. A checkpoint refers to a sequence of actions:
(a) Every log record currently available inside main memory is moved to stable storage.
(b) A log record of the form <checkpoint L> is written, where L is the list of transactions active at the checkpoint.
Note:- During the recovery of the system, the number of records in the log that the system must scan can be minimized by using checkpoints.
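A minimal sketch of why a checkpoint shortens recovery: instead of scanning the whole log, recovery only needs the suffix starting at the most recent <checkpoint L> record. The log layout below is illustrative.

# A minimal sketch of limiting the recovery scan to the last checkpoint.
log = [
    ("T1", "begin"), ("T1", "x", 1, 2), ("T1", "commit"),
    ("checkpoint", ["T2"]),                 # L = transactions active at checkpoint
    ("T2", "y", 3, 4), ("T2", "commit"),
]

def records_to_scan(log):
    for i in range(len(log) - 1, -1, -1):   # backward search for last checkpoint
        if log[i][0] == "checkpoint":
            return log[i:]                  # only these records need redo/undo
    return log                              # no checkpoint: scan the entire log

print(records_to_scan(log))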
BUFFER MANAGEMENT:
Buffer management uses 2 techniques to ensure consistency and to reduce the interactions of transactions with the disk in a database. The 2 techniques are:
1. Log record buffering
2. Database buffering
Log record buffering:-
1. As log records are created, they are buffered in main memory before being output to stable storage.
2. Log records are output to stable storage in units called blocks.
3. At the physical level a block is usually larger than a single log record.
4. Thus the output of a single block may carry the result of several log records.
5. Several different output operations per block could lead to confusion in the recovery mechanism.
6. So additional requirements are imposed while storing log records in stable storage.
7. The requirements are: transaction T can be regarded as committed only after its <T commit> record has been output to stable storage.
8. Before the <T commit> record is output to stable storage, all the log records associated with transaction T must first be output to stable storage.
Outputting groups of log records to stable storage under these rules is called log record buffering.

DATABASE BUFFERING:-
1. The database is stored on non-volatile storage (disk), and needed data blocks are transferred from the disk into main memory.
2. Typically the size of main memory is smaller than the entire database.
3. Applying requirements (rules) to the movement of blocks of data is called database buffering.
4. Consider 2 data blocks B1 and B2; following certain rules, the log records are output to stable storage before a block is written out.
5. These rules restrict the system's freedom to bring blocks of data into main memory: before bringing block B2 into main memory, B1 may have to be taken out of main memory onto the disk; then operations are possible on block B2.
6. In database buffering the following actions take place:
(a) The log records are output to stable storage.
(b) Block B1 is output to the disk after its operations.
(c) From the disk, block B2 is brought into main memory.
FAILURES WITH LOSS OF NON-VOLATILE STORAGE:-
1. The information present in volatile storage is lost whenever a system crash occurs.
2. The loss of information in non-volatile storage is much rarer, but it can occur.
3. To handle this type of failure, certain techniques have to be considered.
4. One technique is to dump the entire database onto stable storage periodically.
5. When non-volatile storage fails, the information present in the physical database is lost.
6. In order to bring the database back to a consistent state, the dump is used by the system for restoring the information.
7. During the processing of a dump no transaction may be in progress.
8. A dump follows these additional requirements:
All the log records currently present in main memory must be output to stable storage.
All the database information is copied onto stable storage.
A log record of the form <dump> is written to stable storage.
NOTE: If a failure results in the loss of information residing in non-volatile storage, then with the help of the dump procedure the database can be restored back to the disk by the system.
Disadvantages:
1. During the processing of the dump technique no transactions are allowed.
2. The dump procedure is expensive because a huge amount of data is copied onto stable storage.
ADVANCED RECOVERY TECHNIQUES:
An advanced recovery technique is a combination of several techniques.
These are undo-phase and redo-phase techniques like transaction rollback, the write-ahead logging protocol, checkpoints, etc.
ARIES:

ARIES is a recovery algorithm.
It is based on the write-ahead logging protocol.
It supports undo and redo operations.
It stores the related log records in stable storage.
It supports concurrency control protocols, for example optimistic concurrency control and timestamp-based concurrency control.
It maintains a dirty page table and a transaction table.
It has three phases:
a. Analysis phase, in which the following recovery steps are performed:
1. Determining the log records in stable storage.
2. Determining the checkpoints.
3. Determining the dirty page table and the transaction table.
b. Redo phase: forward scanning.
c. Undo phase: backward scanning.
Other features include support for locking at different levels by using the multiple-granularity technique.
REMOTE BACKUP SYSTEM:
1. There is a need for designing transaction processing systems that can continue even if the system fails or crashes due to natural disasters like earthquakes, floods, etc.
2. That is made possible by giving the transaction processing system the property called a high degree of availability.
3. Availability can be defined as keeping the same copy of the data at the primary site as well as at secondary sites.
4. A remote backup system is defined as maintaining the same copy of the data at several sites; even if a site fails, we can continue our transactions at the backup sites.
5. While designing a remote backup system we should consider the following major issues:
a. Failure detection
b. Recovery time
c. Control transfer
d. Commit time

a. FAILURE DETECTION:
The failure of the primary site must be detected by the remote backup system; this is possible through good network communication.
b. RECOVERY TIME:
Using checkpoints we can reduce the recovery time, because the log size at the remote system affects the recovery time: if the size of the log increases, then the time taken to perform recovery also increases.
c. CONTROL TRANSFER:
Control transfer can be defined as the remote backup site taking over and acting as the primary site.
d. COMMIT TIME:
After the completion of every transaction we must commit the transaction; otherwise it leads to an inconsistent database, meaning durability is not achieved.
