
Process Management

Thread and Mutual Exclusion


Contents

1 Mutual exclusion
1.1 Problem description
1.2 Enforcing mutual exclusion
1.2.1 Hardware solutions
1.2.2 Software solutions
1.3 Bound on the mutual exclusion problem
1.4 Recoverable mutual exclusion
1.5 Types of mutual exclusion devices
1.6 See also
1.7 Notes
1.8 References
1.9 Further reading
1.10 External links

2 Critical section
2.1 Need for Critical Sections
2.2 Implementation of critical sections
2.3 Uses of critical sections
2.3.1 Kernel-level critical sections
2.3.2 Critical sections in data-structures
2.3.3 Critical sections in computer networking
2.4 See also
2.5 References
2.6 External links

3 Thread (computing)
3.1 Single vs Multiprocessor systems
3.2 History
3.3 Threads vs. processes
3.4 Single threading
3.5 Multithreading
3.6 Scheduling
3.7 Processes, kernel threads, user threads, and fibers
3.7.1 Thread and fiber issues
3.8 Models
3.8.1 1:1 (kernel-level threading)
3.8.2 N:1 (user-level threading)
3.8.3 M:N (hybrid threading)
3.8.4 Hybrid implementation examples
3.8.5 Fiber implementation examples
3.9 Programming language support
3.10 Practical multithreading
3.11 See also
3.12 Notes
3.13 References
3.14 External links

4 Concurrency control
4.1 Concurrency control in databases
4.1.1 Database transaction and the ACID rules
4.1.2 Why is concurrency control needed?
4.1.3 Concurrency control mechanisms
4.1.4 Major goals of concurrency control mechanisms
4.1.5 See also
4.1.6 References
4.1.7 Footnotes
4.2 Concurrency control in operating systems
4.2.1 See also
4.2.2 References

5 Race condition
5.1 Electronics
5.1.1 Critical and non-critical forms
5.1.2 Static, dynamic, and essential forms
5.2 Software
5.2.1 Example
5.2.2 File systems
5.2.3 Networking
5.2.4 Life-critical systems
5.2.5 Computer security
5.3 Examples outside of computing
5.3.1 Biology
5.4 Tools
5.5 See also
5.6 References
5.7 External links
5.8 Text and image sources, contributors, and licenses
5.8.1 Text
5.8.2 Images
5.8.3 Content license
Chapter 1

Mutual exclusion

For the concept in logic and probability theory, see Mutual exclusivity.
"Mutex" redirects here. For the synchronization device commonly used to establish mutual exclusion, see lock (computer science).
In computer science, mutual exclusion is a property of concurrency control, which is instituted for the purpose
of preventing race conditions; it is the requirement that one thread of execution never enter its critical section at the
same time that another concurrent thread of execution enters its own critical section.[note 1]

Figure: Two nodes, i and i + 1, being removed simultaneously results in node i + 1 not being removed.
The requirement of mutual exclusion was first identified and solved by Edsger W. Dijkstra in his seminal 1965 paper
titled "Solution of a problem in concurrent programming control",[1][2] which is credited as the first topic in the study of
concurrent algorithms.[3]
A simple example of why mutual exclusion is important in practice can be visualized using a singly linked list of
four items, where the second and third are to be removed. The removal of a node that sits between 2 other nodes
is performed by changing the next pointer of the previous node to point to the next node (in other words, if node i
is being removed, then the next pointer of node i − 1 is changed to point to node i + 1, thereby removing from the
linked list any reference to node i). When such a linked list is being shared between multiple threads of execution, two
threads of execution may attempt to remove two different nodes simultaneously, one thread of execution changing
the next pointer of node i − 1 to point to node i + 1, while another thread of execution changes the next pointer of
node i to point to node i + 2. Although both removal operations complete successfully, the desired state of the linked
list is not achieved: node i + 1 remains in the list, because the next pointer of node i − 1 points to node i + 1.
This problem (called a race condition) can be avoided by using the requirement of mutual exclusion to ensure that
simultaneous updates to the same part of the list cannot occur.
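A minimal sketch of this fix, assuming POSIX threads and a hypothetical node type and remove_next helper: a single lock serializes removals, so the two pointer updates described above can no longer interleave.

#include <pthread.h>
#include <stdlib.h>

struct node { int value; struct node *next; };

/* One lock guards the whole list; list_lock and remove_next are
   illustrative names, not part of any standard API. */
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Remove the node that follows *prev. While the lock is held, no other
   thread can observe or modify the affected next pointers mid-update. */
void remove_next(struct node *prev)
{
    pthread_mutex_lock(&list_lock);
    struct node *victim = prev->next;
    if (victim != NULL) {
        prev->next = victim->next;   /* unlink victim under the lock */
        free(victim);
    }
    pthread_mutex_unlock(&list_lock);
}

With list_lock held across the unlink, a thread removing node i and a thread removing node i + 1 cannot both act on stale next pointers, so the lost-removal scenario above cannot occur.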

1.1 Problem description


The problem which mutual exclusion addresses is a problem of resource sharing: how can a software system control
multiple processes' access to a shared resource, when each process needs exclusive control of that resource while
doing its work? The mutual-exclusion solution to this problem makes the shared resource available only while the
process is in a specific code segment called the critical section. It controls access to the shared resource by controlling
each process's execution of that part of its program where the resource would be used.
A successful solution to this problem must have at least these two properties:

It must implement mutual exclusion: only one process can be in the critical section at a time.
It must be free of deadlocks: if processes are trying to enter the critical section, one of them must eventually
be able to do so successfully, provided no process stays in the critical section permanently.

Deadlock freedom can be expanded to implement one or both of these properties:

Lockout-freedom guarantees that any process wishing to enter the critical section will be able to do so eventually.
This is distinct from deadlock avoidance, which requires that some waiting process be able to get access to the
critical section, but does not require that every process gets a turn. If two processes continually trade a resource
between them, a third process could be locked out and experience resource starvation, even though the system
is not in deadlock. If a system is free of lockouts, it ensures that every process can get a turn at some point in
the future.
A k-bounded waiting property gives a more precise commitment than lockout-freedom. Lockout-freedom
ensures every process can access the critical section eventually: it gives no guarantee about how long the
wait will be. In practice, a process could be overtaken an arbitrary or unbounded number of times by other
higher-priority processes before it gets its turn. Under a k-bounded waiting property, each process has a finite
maximum wait time. This works by setting a limit to the number of times other processes can cut in line, so
that no process can enter the critical section more than k times while another is waiting.[4]

Every process's program can be partitioned into four sections, resulting in four states. Program execution cycles
through these four states in order:[5]

Non-Critical Section Operation is outside the critical section; the process is not using or requesting the shared
resource.

Trying The process attempts to enter the critical section.

Critical Section The process is allowed to access the shared resource in this section.

Exit The process leaves the critical section and makes the shared resource available to other processes.

If a process wishes to enter the critical section, it must first execute the trying section and wait until it acquires access
to the critical section. After the process has executed its critical section and is finished with the shared resources, it
needs to execute the exit section to release them for other processes' use. The process then returns to its non-critical
section.
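In outline, each process's code cycles through the four sections; the sketch below uses placeholder functions in place of whatever entry and exit protocol is actually chosen:

#include <stdio.h>

/* Placeholder sections; in a real program the trying and exit sections
   would run an actual mutual exclusion protocol (e.g. lock/unlock a mutex). */
static void noncritical_section(void) { puts("doing private work"); }
static void trying_section(void)      { puts("requesting access"); }
static void critical_section(void)    { puts("using the shared resource"); }
static void exit_section(void)        { puts("releasing access"); }

int main(void)
{
    for (int i = 0; i < 3; i++) {      /* each iteration is one pass through the four states */
        noncritical_section();
        trying_section();
        critical_section();
        exit_section();
    }
    return 0;
}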

Figure: the cycle of sections of a single process

1.2 Enforcing mutual exclusion


There are both software and hardware solutions for enforcing mutual exclusion. Some different solutions are discussed
below.

1.2.1 Hardware solutions

On uniprocessor systems, the simplest solution to achieve mutual exclusion is to disable interrupts during a process's
critical section. This will prevent any interrupt service routines from running (effectively preventing a process from
being preempted). Although this solution is effective, it leads to many problems. If a critical section is long, then
the system clock will drift every time a critical section is executed because the timer interrupt is no longer serviced,
so tracking time is impossible during the critical section. Also, if a process halts during its critical section, control
will never be returned to another process, effectively halting the entire system. A more elegant method for achieving
mutual exclusion is the busy-wait.
Busy-waiting is effective for both uniprocessor and multiprocessor systems. The use of shared memory and an atomic
test-and-set instruction provides the mutual exclusion. A process can test-and-set on a location in shared memory,
and since the operation is atomic, only one process can set the flag at a time. Any process that is unsuccessful in
setting the flag can either go on to do other tasks and try again later, release the processor to another process and try
again later, or continue to loop while checking the flag until it is successful in acquiring it. Preemption is still possible,
so this method allows the system to continue to function even if a process halts while holding the lock.
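A sketch of such a busy-wait lock, assuming C11 atomics (atomic_flag provides the atomic test-and-set); spin_lock and spin_unlock are illustrative names rather than a standard API:

#include <stdatomic.h>

static atomic_flag flag = ATOMIC_FLAG_INIT;

void spin_lock(void)
{
    /* Keep testing-and-setting until this thread is the one that set the flag. */
    while (atomic_flag_test_and_set_explicit(&flag, memory_order_acquire))
        ;   /* busy-wait; a process could instead yield the processor and retry later */
}

void spin_unlock(void)
{
    atomic_flag_clear_explicit(&flag, memory_order_release);
}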

Several other atomic operations can be used to provide mutual exclusion of data structures; most notable of these is
compare-and-swap (CAS). CAS can be used to achieve wait-free mutual exclusion for any shared data structure by
creating a linked list where each node represents the desired operation to be performed. CAS is then used to change
the pointers in the linked list[6] during the insertion of a new node. Only one process can be successful in its CAS; all
other processes attempting to add a node at the same time will have to try again. Each process can then keep a local
copy of the data structure, and upon traversing the linked list, can perform each operation from the list on its local
copy.
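As an illustration of the CAS idea, the following sketch (C11 atomics, illustrative names) inserts an operation node at the head of a shared list and simply retries if another thread's CAS wins the race; the retry loop makes this lock-free rather than strictly wait-free:

#include <stdatomic.h>
#include <stdlib.h>

struct op_node { int op_code; struct op_node *next; };

/* Head of a shared list of pending operations. */
static _Atomic(struct op_node *) head = NULL;

/* Insert a new node at the head with compare-and-swap. If another thread
   changed head between the load and the CAS, the CAS fails and we retry
   with the freshly observed head value. */
void push_op(int op_code)
{
    struct op_node *n = malloc(sizeof *n);
    n->op_code = op_code;
    n->next = atomic_load(&head);
    while (!atomic_compare_exchange_weak(&head, &n->next, n))
        ;   /* on failure, n->next now holds the current head; just retry */
}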

1.2.2 Software solutions


Besides hardware-supported solutions, some software solutions exist that use busy waiting to achieve mutual exclusion.
Examples of these include the following:

Dekker's algorithm;
Peterson's algorithm;
Lamport's bakery algorithm;[7]
Szymanski's algorithm;
Taubenfeld's black-white bakery algorithm.[2]

These algorithms do not work if out-of-order execution is used on the platform that executes them. Programmers
have to specify strict ordering on the memory operations within a thread.[8]
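As an illustration, here is a two-thread sketch of Peterson's algorithm using C11 atomics; the sequentially consistent atomic operations supply the strict memory ordering the algorithm needs on out-of-order hardware (peterson_lock and peterson_unlock are illustrative names):

#include <stdatomic.h>
#include <stdbool.h>

/* Peterson's algorithm for two threads with ids 0 and 1. */
static atomic_bool flag[2];
static atomic_int  turn;

void peterson_lock(int self)
{
    int other = 1 - self;
    atomic_store(&flag[self], true);   /* I want to enter */
    atomic_store(&turn, other);        /* but let the other go first if contended */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;   /* busy-wait until it is safe to enter the critical section */
}

void peterson_unlock(int self)
{
    atomic_store(&flag[self], false);
}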
It is often preferable to use synchronization facilities provided by an operating system's multithreading library, which
will take advantage of hardware solutions if possible but will use software solutions if no hardware solutions exist. For
example, when the operating system's lock library is used and a thread tries to acquire an already acquired lock, the
operating system could suspend the thread using a context switch and swap it out with another thread that is ready to
be run, or could put that processor into a low power state if there is no other thread that can be run. Therefore, most
modern mutual exclusion methods attempt to reduce latency and busy-waits by using queuing and context switches.
However, if the time that is spent suspending a thread and then restoring it can be proven to be always more than
the time that must be waited for a thread to become ready to run after being blocked in a particular situation, then
spinlocks are an acceptable solution (for that situation only).
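A minimal sketch of relying on the operating system's lock facility instead, here assuming POSIX threads: if the mutex is already held, pthread_mutex_lock lets the kernel block the calling thread (a context switch) rather than busy-waiting.

#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;   /* illustrative shared state */

void increment(void)
{
    pthread_mutex_lock(&m);
    shared_counter++;             /* critical section */
    pthread_mutex_unlock(&m);
}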

1.3 Bound on the mutual exclusion problem


One binary test&set register is sufficient to provide the deadlock-free solution to the mutual exclusion problem. But
a solution built with a test&set register can possibly lead to the starvation of some processes which become caught
in the trying section.[4] In fact, Ω(√n) distinct memory states are required to avoid lockout. To avoid unbounded
waiting, n distinct memory states are required.[9]

1.4 Recoverable mutual exclusion


Most algorithms for mutual exclusion are designed with the assumption that no failure occurs while a process is
running inside the critical section. However, in reality such failures may be commonplace. For example, a sudden
loss of power or faulty interconnect might cause a process in a critical section to experience an unrecoverable error
or otherwise be unable to continue. If such a failure occurs, conventional, non-failure-tolerant mutual exclusion
algorithms may deadlock or otherwise fail key liveness properties. To deal with this problem, several solutions using
crash-recovery mechanisms have been proposed.[10]

1.5 Types of mutual exclusion devices


The solutions explained above can be used to build the synchronization primitives below:

locks;

readers–writer locks;

recursive locks;

semaphores;

monitors;

message passing;

tuple space.

Many forms of mutual exclusion have side-effects. For example, classic semaphores permit deadlocks, in which
one process gets a semaphore, another process gets a second semaphore, and then both wait forever for the other
semaphore to be released. Other common side-effects include starvation, in which a process never gets sufficient
resources to run to completion; priority inversion, in which a higher priority thread waits for a lower-priority thread;
and high latency, in which response to interrupts is not prompt.
Much research is aimed at eliminating the above effects, often with the goal of guaranteeing non-blocking progress.
No perfect scheme is known. Blocking system calls used to sleep an entire process. Until such calls became threadsafe,
there was no proper mechanism for sleeping a single thread within a process (see polling).

1.6 See also


Atomicity (programming)

Concurrency control

Exclusive or

Mutually exclusive events

Semaphore

Dining philosophers problem

Reentrant mutex

Spinlock

1.7 Notes
[1] Here, critical section refers to an interval of time during which a thread of execution accesses a shared resource, such as
shared memory.

1.8 References
[1] Dijkstra, E. W. (1965). "Solution of a problem in concurrent programming control". Communications of the ACM. 8 (9):
569. doi:10.1145/365559.365617.

[2] Taubenfeld, The Black-White Bakery Algorithm. In Proc. Distributed Computing, 18th international conference, DISC
2004. Vol 18, 56-70, 2004

[3] PODC Influential Paper Award: 2002, ACM Symposium on Principles of Distributed Computing, retrieved 2009-08-24

[4] Attiya, Hagit; Welch, Jennifer (Mar 25, 2004). Distributed computing: fundamentals, simulations, and advanced topics.
John Wiley & Sons, Inc. ISBN 978-0-471-45324-6.

[5] Lamport, Leslie (June 26, 2000), The Mutual Exclusion Problem Part II: Statement and Solutions (PDF)

[6] https://timharris.uk/papers/2001-disc.pdf

[7] Lamport, Leslie (August 1974). "A new solution of Dijkstra's concurrent programming problem". Communications of the
ACM. 17 (8): 453–455. doi:10.1145/361082.361093.

[8] Holzmann, Gerard J.; Bosnacki, Dragan (1 October 2007). "The Design of a Multicore Extension of the SPIN Model
Checker". IEEE Transactions on Software Engineering. 33 (10): 659–674. doi:10.1109/TSE.2007.70724.

[9] Burns, James E.; Paul Jackson, Nancy A. Lynch (January 1982), Data Requirements for Implementation of N-Process
Mutual Exclusion Using a Single Shared Variable (PDF)

[10] Golab, Wojciech; Ramaraju, Aditya (July 2016), Recoverable Mutual Exclusion

1.9 Further reading


Michel Raynal: Algorithms for Mutual Exclusion, MIT Press, ISBN 0-262-18119-3

Sunil R. Das, Pradip K. Srimani: Distributed Mutual Exclusion Algorithms, IEEE Computer Society, ISBN
0-8186-3380-8

Thomas W. Christopher, George K. Thiruvathukal: High-Performance Java Platform Computing, Prentice Hall, ISBN 0-13-016164-0

Gadi Taubenfeld, Synchronization Algorithms and Concurrent Programming, Pearson/Prentice Hall, ISBN 0-
13-197259-6

1.10 External links


Article A Simple Mutex Program

"Common threads: POSIX threads explained - The little things called mutexes" by Daniel Robbins
Mutual Exclusion Petri Net

Mutual Exclusion with Locks - an Introduction


Mutual exclusion variants in OpenMP

The Black-White Bakery Algorithm


Chapter 2

Critical section

In concurrent programming, concurrent accesses to shared resources can lead to unexpected or erroneous behavior, so
parts of the program where the shared resource is accessed are protected. This protected section is the critical section
or critical region. It cannot be executed by more than one process at a time. Typically, the critical section accesses a shared
resource, such as a data structure, a peripheral device, or a network connection, that would not operate correctly in
the context of multiple concurrent accesses.[1]

2.1 Need for Critical Sections


Different codes/processes may share the same variables, which need to be read or written but whose accesses conflict
with one another. For example, a variable x is to be read by process A while process B has to write to the same variable x at the
same time.
Process A:
// Process A
.
.
b = x + 5; // instruction executes at time = Tx
.

Process B:
// Process B
.
.
x = 3 + z; // instruction executes at time = Tx
.
In cases like these, a critical section is important. In the above case, if A needs to read the updated value of x,
executing Process A and Process B at the same time may not give the required results. To prevent this, variable x is
protected by a critical section. First, B gets access to the section. Once B finishes writing the value, A gets access
to the critical section and variable x can be read.
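One possible rendering of this x example with POSIX threads (process_a, process_b, and x_ready are illustrative names): the mutex makes the accesses to x mutually exclusive, and a condition variable makes A wait until B has written the new value.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  x_written = PTHREAD_COND_INITIALIZER;
static int x = 0, z = 4;
static int x_ready = 0;

static void *process_b(void *arg)   /* writer */
{
    pthread_mutex_lock(&m);
    x = 3 + z;                      /* critical section: write x */
    x_ready = 1;
    pthread_cond_signal(&x_written);
    pthread_mutex_unlock(&m);
    return NULL;
}

static void *process_a(void *arg)   /* reader */
{
    pthread_mutex_lock(&m);
    while (!x_ready)                /* wait until B has written x */
        pthread_cond_wait(&x_written, &m);
    int b = x + 5;                  /* critical section: read x */
    pthread_mutex_unlock(&m);
    printf("b = %d\n", b);
    return NULL;
}

int main(void)
{
    pthread_t ta, tb;
    pthread_create(&ta, NULL, process_a, NULL);
    pthread_create(&tb, NULL, process_b, NULL);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    return 0;
}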
By carefully controlling which variables are modified inside and outside the critical section, concurrent access to
the shared variable is prevented. A critical section is typically used when a multi-threaded program must update
multiple related variables without a separate thread making conflicting changes to that data. In a related situation,
a critical section may be used to ensure that a shared resource, for example, a printer, can only be accessed by one
process at a time.

2.2 Implementation of critical sections


The implementation of critical sections varies among different operating systems.
A critical section will usually terminate in finite time,[2] and a thread, task, or process will have to wait for a fixed
time to enter it (bounded waiting). To ensure exclusive use of critical sections some synchronization mechanism is
required at the entry and exit of the program.
A critical section is a piece of a program that requires mutual exclusion of access.
As shown in Fig 2,[3] in the case of mutual exclusion (mutex), one thread blocks a critical section by using locking
techniques when it needs to access the shared resource and other threads have to wait to get their turn to enter into
the section. This prevents conflicts when two or more threads share the same memory space and want to access a
common resource.[2]

Fig 1: Flow graph depicting need for critical section
The simplest method to prevent any change of processor control inside the critical section is implementing a semaphore.
In uniprocessor systems, this can be done by disabling interrupts on entry into the critical section, avoiding system
calls that can cause a context switch while inside the section, and restoring interrupts to their previous state on exit.
Any thread of execution entering any critical section anywhere in the system will, with this implementation, pre-
vent any other thread, including an interrupt, from being granted processing time on the CPU, and therefore from
entering any other critical section or, indeed, any code whatsoever, until the original thread leaves its critical section.
Fig 2: Locks and critical sections in multiple threads

Fig 3: Pseudo code for implementing critical section

This brute-force approach can be improved upon by using semaphores. To enter a critical section, a thread must
obtain a semaphore, which it releases on leaving the section. Other threads are prevented from entering the critical
section at the same time as the original thread, but are free to gain control of the CPU and execute other code,
including other critical sections that are protected by different semaphores. Semaphore locking also has a time limit
to prevent a deadlock condition in which a lock is acquired by a single process for an infinite time, stalling the other
processes which need to use the shared resource protected by the critical section.
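A minimal sketch of a semaphore-guarded critical section using a POSIX unnamed semaphore; sem_timedwait could replace sem_wait if a time limit on acquiring the semaphore is wanted, as mentioned above.

#include <semaphore.h>
#include <stdio.h>

static sem_t gate;
static int shared_value = 0;

void enter_and_update(void)
{
    sem_wait(&gate);          /* obtain the semaphore before the critical section */
    shared_value++;           /* critical section */
    sem_post(&gate);          /* release it on leaving */
}

int main(void)
{
    sem_init(&gate, 0, 1);    /* initial value 1: at most one holder at a time */
    enter_and_update();
    printf("%d\n", shared_value);
    sem_destroy(&gate);
    return 0;
}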

2.3 Uses of critical sections

2.3.1 Kernel-level critical sections

Typically, critical sections prevent thread and process migration between processors and the preemption of processes
and threads by interrupts and other processes and threads.
Critical sections often allow nesting. Nesting allows multiple critical sections to be entered and exited at little cost.
If the scheduler interrupts the current process or thread in a critical section, the scheduler will either allow the currently
executing process or thread to run to completion of the critical section, or it will schedule the process or thread for
another complete quantum. The scheduler will not migrate the process or thread to another processor, and it will not
schedule another process or thread to run while the current process or thread is in a critical section.

Similarly, if an interrupt occurs in a critical section, the interrupt information is recorded for future processing, and
execution is returned to the process or thread in the critical section.[4] Once the critical section is exited, and in some
cases the scheduled quantum completed, the pending interrupt will be executed. The concept of scheduling quantum
applies to "round-robin" and similar scheduling policies.
Since critical sections may execute only on the processor on which they are entered, synchronization is only required
within the executing processor. This allows critical sections to be entered and exited at almost zero cost. No inter-
processor synchronization is required. Only instruction stream synchronization[5] is needed. Most processors provide
the required amount of synchronization by the simple act of interrupting the current execution state. This allows
critical sections in most cases to be nothing more than a per processor count of critical sections entered.
Performance enhancements include executing pending interrupts at the exit of all critical sections and allowing the
scheduler to run at the exit of all critical sections. Furthermore, pending interrupts may be transferred to other
processors for execution.
Critical sections should not be used as a long-lasting locking primitive. Critical sections should be kept short enough
that they can be entered, executed, and exited without any interrupts occurring from the hardware and the scheduler.
Kernel-level critical sections are the base of the software lockout issue.

2.3.2 Critical sections in data-structures

In parallel programming, the code is divided into threads. The read-write conflicting variables are split between
threads and each thread has a copy of them. Data structures like linked lists, trees, hash tables etc. have data variables
that are linked and cannot be split between threads and hence implementing parallelism is very difficult.[6] To improve
the efficiency of implementing data structures, multiple operations like insertion, deletion, and search need to be executed
in parallel. While performing these operations, there may be scenarios where the same element is being searched by
one thread and is being deleted by another. In such cases, the output may be erroneous. The thread searching the
element may have a hit, whereas the other thread may delete it just after that time. These scenarios will cause issues
in the program running by providing false data. To prevent this, one method is that the entire data structure can be
kept under a critical section so that only one operation is handled at a time. Another method is locking the node in use
under a critical section, so that other operations do not use the same node. Using a critical section thus ensures that the
code provides the expected outputs.[6]

2.3.3 Critical sections in computer networking

Critical sections are also needed in computer networking. When the data arrives at network sockets, it may not
arrive in an ordered format. Let's say program X running on the machine needs to collect the data from the socket,
rearrange it and check if anything is missing. While this program works on the data, no other program should access
the same socket for that particular data. Hence, the data of the socket is protected by a critical section so that program
X can use it exclusively.

2.4 See also


Lock (computer science)

Mutual exclusion

Lamport's bakery algorithm

Dekker's algorithm

Eisenberg & McGuire Algorithm

Szymanski's Algorithm

Peterson's Algorithm

2.5 References
[1] Raynal, Michel (2012). Concurrent Programming: Algorithms, Principles, and Foundations. Springer Science & Business
Media. p. 9. ISBN 3642320279.

[2] Jones, M. Tim (2008). GNU/Linux Application Programming (2nd ed.). [Hingham, Mass.] Charles River Media. p. 264.
ISBN 978-1-58450-568-6.

[3] Chen, Guancheng; Stenstrom, Per (Nov 10–16, 2012). "Critical Lock Analysis: Diagnosing Critical Section Bottlenecks
in Multithreaded Applications". High Performance Computing, Networking, Storage and Analysis (SC), 2012 International
Conference.

[4] RESEARCH PAPER ON SOFTWARE SOLUTION OF CRITICAL SECTION PROBLEM. International Journal of
Advance Technology & Engineering Research (IJATER). 1. November 2011.

[5] Dubois, Michel; Scheurich, Christoph. "Synchronization, Coherence, and Event Ordering in Multiprocessors" (PDF). Sur-
vey and Tutorial Series.

[6] Solihin, Yan. Fundamentals of Parallel Multicore Architecture. ISBN 9781482211184.

2.6 External links


Critical Section documentation on the MSDN Library web page

Tutorial on Critical Sections


Code examples for Mutex

Tutorial on Semaphores
Lock contention in Critical Sections
Chapter 3

Thread (computing)

This article is about the concurrency concept. For the multithreading in hardware, see Multithreading (computer
architecture). For the form of code consisting entirely of subroutine calls, see Threaded code. For other uses, see
Thread (disambiguation).
In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed
independently by a scheduler, which is typically a part of the operating system.[1] The implementation of threads and
processes differs between operating systems, but in most cases a thread is a component of a process. Multiple threads
can exist within one process, executing concurrently and sharing resources such as memory, while different processes
do not share these resources. In particular, the threads of a process share its executable code and the values of its
variables at any given time.

Figure: A process with two threads of execution, running on one processor

3.1 Single vs Multiprocessor systems


Systems with a single processor generally implement multithreading by time slicing: the central processing unit
(CPU) switches between different software threads. This context switching generally happens very often and rapidly
enough that users perceive the threads or tasks as running in parallel. On a multiprocessor or multi-core system,
multiple threads can execute in parallel, with every processor or core executing a separate thread simultaneously; on
a processor or core with hardware threads, separate software threads can also be executed concurrently by separate
hardware threads.

3.2 History
Threads made an early appearance in OS/360 Multiprogramming with a Variable Number of Tasks (MVT) in 1967,
in which context they were called tasks. The term thread has been attributed to Victor A. Vyssotsky.[2] Process
schedulers of many modern operating systems directly support both time-sliced and multiprocessor threading, and
the operating system kernel allows programmers to manipulate threads by exposing required functionality through
the system call interface. Some threading implementations are called kernel threads, whereas light-weight processes
(LWP) are a specific type of kernel thread that share the same state and information. Furthermore, programs can have
user-space threads when threading with timers, signals, or other methods to interrupt their own execution, performing
a sort of ad hoc time slicing.

3.3 Threads vs. processes


Threads differ from traditional multitasking operating system processes in that:

processes are typically independent, while threads exist as subsets of a process

processes carry considerably more state information than threads, whereas multiple threads within a process
share process state as well as memory and other resources

processes have separate address spaces, whereas threads share their address space

processes interact only through system-provided inter-process communication mechanisms

context switching between threads in the same process is typically faster than context switching between pro-
cesses.

Systems such as Windows NT and OS/2 are said to have cheap threads and expensive processes; in other operating
systems there is not so great a difference except the cost of an address space switch, which on some architectures
(notably x86) results in a translation lookaside buffer (TLB) flush.

3.4 Single threading


In computer programming, single threading is the processing of one command at a time.[3] The opposite of single
threading is multithreading.[4] While it has been suggested that the term single threading is misleading, the term has
been widely accepted within the functional programming community.[5]

3.5 Multithreading
Multithreading is mainly found in multitasking operating systems. Multithreading is a widespread programming and
execution model that allows multiple threads to exist within the context of one process. These threads share the
process's resources, but are able to execute independently. The threaded programming model provides developers
with a useful abstraction of concurrent execution. Multithreading can also be applied to one process to enable parallel
execution on a multiprocessing system.
Multithreaded applications have the following advantages:

Responsiveness: multithreading can allow an application to remain responsive to input. In a one-thread program,
if the main execution thread blocks on a long-running task, the entire application can appear to freeze. By
moving such long-running tasks to a worker thread that runs concurrently with the main execution thread, it is
possible for the application to remain responsive to user input while executing tasks in the background. On the
other hand, in most cases multithreading is not the only way to keep a program responsive, with non-blocking
I/O and/or Unix signals being available for gaining similar results.[6]
Faster execution: this advantage of a multithreaded program allows it to operate faster on computer systems
that have multiple central processing units (CPUs) or one or more multi-core processors, or across a cluster
of machines, because the threads of the program naturally lend themselves to parallel execution, assuming
sufficient independence (that they do not need to wait for each other).
Lower resource consumption: using threads, an application can serve multiple clients concurrently using fewer
resources than it would need when using multiple process copies of itself. For example, the Apache HTTP
server uses thread pools: a pool of listener threads for listening to incoming requests, and a pool of server
threads for processing those requests.
Better system utilization: as an example, a file system using multiple threads can achieve higher throughput
and lower latency since data in a faster medium (such as cache memory) can be retrieved by one thread while
another thread retrieves data from a slower medium (such as external storage) with neither thread waiting for
the other to finish.
Simplified sharing and communication: unlike processes, which require a message passing or shared memory
mechanism to perform inter-process communication (IPC), threads can communicate through data, code and
files they already share.
Parallelization: applications looking to use multicore or multi-CPU systems can use multithreading to split
data and tasks into parallel subtasks and let the underlying architecture manage how the threads run, either
concurrently on one core or in parallel on multiple cores. GPU computing environments like CUDA and
OpenCL use the multithreading model where dozens to hundreds of threads run in parallel across data on a
large number of cores.

Multithreading has the following drawbacks:

Synchronization: since threads share the same address space, the programmer must be careful to avoid race
conditions and other non-intuitive behaviors. In order for data to be correctly manipulated, threads will often
need to rendezvous in time in order to process the data in the correct order. Threads may also require mutually
exclusive operations (often implemented using semaphores) in order to prevent common data from being si-
multaneously modified or read while in the process of being modified. Careless use of such primitives can lead
to deadlocks.
Thread crashes a process: an illegal operation performed by a thread crashes the entire process; therefore, one
misbehaving thread can disrupt the processing of all the other threads in the application.

3.6 Scheduling
Operating systems schedule threads either preemptively or cooperatively. Preemptive multithreading is generally
considered the superior approach, as it allows the operating system to determine when a context switch should occur.
The disadvantage of preemptive multithreading is that the system may make a context switch at an inappropriate time,

causing lock convoy, priority inversion or other negative effects, which may be avoided by cooperative multithreading.
Cooperative multithreading, on the other hand, relies on the threads themselves to relinquish control once they are at
a stopping point. This can create problems if a thread is waiting for a resource to become available.
Until the early 2000s, most desktop computers had only one single-core CPU, with no support for hardware threads,
although threads were still used on such computers because switching between threads was generally still quicker
than full-process context switches. In 2002, Intel added support for simultaneous multithreading to the Pentium 4
processor, under the name hyper-threading; in 2005, they introduced the dual-core Pentium D processor and AMD
introduced the dual-core Athlon 64 X2 processor.
Processors in embedded systems, which have higher requirements for real-time behaviors, might support multithread-
ing by decreasing the thread-switch time, perhaps by allocating a dedicated register file for each thread instead of
saving/restoring a common register file.

3.7 Processes, kernel threads, user threads, and fibers


Main articles: Process (computing) and Fiber (computer science)

Scheduling can be done at the kernel level or user level, and multitasking can be done preemptively or cooperatively.
This yields a variety of related concepts.
At the kernel level, a process contains one or more kernel threads, which share the process's resources, such as memory
and file handles – a process is a unit of resources, while a thread is a unit of scheduling and execution. Kernel
scheduling is typically uniformly done preemptively or, less commonly, cooperatively. At the user level a process such
as a runtime system can itself schedule multiple threads of execution. If these do not share data, as in Erlang, they are
usually analogously called processes,[7] while if they share data they are usually called (user) threads, particularly if
preemptively scheduled. Cooperatively scheduled user threads are known as fibers; different processes may schedule
user threads differently. User threads may be executed by kernel threads in various ways (one-to-one, many-to-
one, many-to-many). The term "light-weight process" variously refers to user threads or to kernel mechanisms for
scheduling user threads onto kernel threads.
A process is a heavyweight unit of kernel scheduling, as creating, destroying, and switching processes is relatively
expensive. Processes own resources allocated by the operating system. Resources include memory (for both code and
data), file handles, sockets, device handles, windows, and a process control block. Processes are isolated by process
isolation, and do not share address spaces or file resources except through explicit methods such as inheriting file
handles or shared memory segments, or mapping the same file in a shared way – see interprocess communication.
Creating or destroying a process is relatively expensive, as resources must be acquired or released. Processes are typ-
ically preemptively multitasked, and process switching is relatively expensive, beyond basic cost of context switching,
due to issues such as cache flushing.[note 1]
A kernel thread is a lightweight unit of kernel scheduling. At least one kernel thread exists within each process. If
multiple kernel threads exist within a process, then they share the same memory and file resources. Kernel threads
are preemptively multitasked if the operating system's process scheduler is preemptive. Kernel threads do not own
resources except for a stack, a copy of the registers including the program counter, and thread-local storage (if any),
and are thus relatively cheap to create and destroy. Thread switching is also relatively cheap: it requires a context
switch (saving and restoring registers and stack pointer), but does not change virtual memory and is thus cache-friendly
(leaving TLB valid). The kernel can assign one thread to each logical core in a system (because each processor splits
itself up into multiple logical cores if it supports multithreading, or only supports one logical core per physical core if
it does not), and can swap out threads that get blocked. However, kernel threads take much longer than user threads
to be swapped.
Threads are sometimes implemented in userspace libraries, thus called user threads. The kernel is unaware of them,
so they are managed and scheduled in userspace. Some implementations base their user threads on top of several
kernel threads, to benefit from multi-processor machines (M:N model). In this article the term "thread" (without
kernel or user qualifier) defaults to referring to kernel threads. User threads as implemented by virtual machines
are also called green threads. User threads are generally fast to create and manage, but cannot take advantage of
multithreading or multiprocessing, and will get blocked if all of their associated kernel threads get blocked even if
there are some user threads that are ready to run.
Fibers are an even lighter unit of scheduling which are cooperatively scheduled: a running fiber must explicitly "yield"

to allow another fiber to run, which makes their implementation much easier than kernel or user threads. A fiber can
be scheduled to run in any thread in the same process. This permits applications to gain performance improvements
by managing scheduling themselves, instead of relying on the kernel scheduler (which may not be tuned for the
application). Parallel programming environments such as OpenMP typically implement their tasks through fibers.
Closely related to fibers are coroutines, with the distinction being that coroutines are a language-level construct, while
fibers are a system-level construct.

3.7.1 Thread and fiber issues


Concurrency and data structures

Threads in the same process share the same address space. This allows concurrently running code to couple tightly and
conveniently exchange data without the overhead or complexity of an IPC. When shared between threads, however,
even simple data structures become prone to race conditions if they require more than one CPU instruction to update:
two threads may end up attempting to update the data structure at the same time and find it unexpectedly changing
underfoot. Bugs caused by race conditions can be very difficult to reproduce and isolate.
To prevent this, threading application programming interfaces (APIs) offer synchronization primitives such as mutexes
to lock data structures against concurrent access. On uniprocessor systems, a thread running into a locked mutex
must sleep and hence trigger a context switch. On multi-processor systems, the thread may instead poll the mutex in
a spinlock. Both of these may sap performance and force processors in symmetric multiprocessing (SMP) systems
to contend for the memory bus, especially if the granularity of the locking is fine.

Although threads seem to be a small step from sequential computation, in fact, they represent a
huge step. They discard the most essential and appealing properties of sequential computation: un-
derstandability, predictability, and determinism. Threads, as a model of computation, are wildly non-
deterministic, and the job of the programmer becomes one of pruning that nondeterminism.
The Problem with Threads, Edward A. Lee, UC Berkeley, 2006[8]

I/O and scheduling

User thread or fiber implementations are typically entirely in userspace. As a result, context switching between user
threads or fibers within the same process is extremely efficient because it does not require any interaction with the
kernel at all: a context switch can be performed by locally saving the CPU registers used by the currently executing
user thread or fiber and then loading the registers required by the user thread or fiber to be executed. Since scheduling
occurs in userspace, the scheduling policy can be more easily tailored to the requirements of the program's workload.
However, the use of blocking system calls in user threads (as opposed to kernel threads) or fibers can be problematic.
If a user thread or a fiber performs a system call that blocks, the other user threads and fibers in the process are unable
to run until the system call returns. A typical example of this problem is when performing I/O: most programs are
written to perform I/O synchronously. When an I/O operation is initiated, a system call is made, and does not return
until the I/O operation has been completed. In the intervening period, the entire process is blocked by the kernel
and cannot run, which starves other user threads and fibers in the same process from executing.
A common solution to this problem is providing an I/O API that implements a synchronous interface by using non-
blocking I/O internally, and scheduling another user thread or fiber while the I/O operation is in progress. Similar
solutions can be provided for other blocking system calls. Alternatively, the program can be written to avoid the use
of synchronous I/O or other blocking system calls.
SunOS 4.x implemented light-weight processes or LWPs. NetBSD 2.x+, and DragonFly BSD implement LWPs as
kernel threads (1:1 model). SunOS 5.2 through SunOS 5.8 as well as NetBSD 2 to NetBSD 4 implemented a two
level model, multiplexing one or more user level threads on each kernel thread (M:N model). SunOS 5.9 and later, as
well as NetBSD 5 eliminated user threads support, returning to a 1:1 model.[9] FreeBSD 5 implemented M:N model.
FreeBSD 6 supported both 1:1 and M:N, users could choose which one should be used with a given program using
/etc/libmap.conf. Starting with FreeBSD 7, the 1:1 became the default. FreeBSD 8 no longer supports the M:N
model.
The use of kernel threads simplifies user code by moving some of the most complex aspects of threading into the
kernel. The program does not need to schedule threads or explicitly yield the processor. User code can be written in a

familiar procedural style, including calls to blocking APIs, without starving other threads. However, kernel threading
may force a context switch between threads at any time, and thus expose race hazards and concurrency bugs that
would otherwise lie latent. On SMP systems, this is further exacerbated because kernel threads may literally execute
on separate processors in parallel.

3.8 Models

3.8.1 1:1 (kernel-level threading)

Threads created by the user in a 1:1 correspondence with schedulable entities in the kernel[10] are the simplest possible
threading implementation. OS/2 and Win32 used this approach from the start, while on Linux the usual C library
implements this approach (via the NPTL or older LinuxThreads). This approach is also used by Solaris, NetBSD,
FreeBSD, macOS, and iOS.

3.8.2 N:1 (user-level threading)

An N:1 model implies that all application-level threads map to one kernel-level scheduled entity;[10] the kernel has no
knowledge of the application threads. With this approach, context switching can be done very quickly and, in addition,
it can be implemented even on simple kernels which do not support threading. One of the major drawbacks however
is that it cannot benet from the hardware acceleration on multithreaded processors or multi-processor computers:
there is never more than one thread being scheduled at the same time.[10] For example: If one of the threads needs to
execute an I/O request, the whole process is blocked and the threading advantage cannot be used. The GNU Portable
Threads uses User-level threading, as does State Threads.

3.8.3 M:N (hybrid threading)

M:N maps some M number of application threads onto some N number of kernel entities,[10] or virtual processors.
This is a compromise between kernel-level (1:1) and user-level (N:1) threading. In general, M:N threading
systems are more complex to implement than either kernel or user threads, because changes to both kernel and user-
space code are required. In the M:N implementation, the threading library is responsible for scheduling user threads
on the available schedulable entities; this makes context switching of threads very fast, as it avoids system calls.
However, this increases complexity and the likelihood of priority inversion, as well as suboptimal scheduling without
extensive (and expensive) coordination between the userland scheduler and the kernel scheduler.

3.8.4 Hybrid implementation examples


Scheduler activations used by the NetBSD native POSIX threads library implementation (an M:N model as
opposed to a 1:1 kernel or userspace implementation model)

Light-weight processes used by older versions of the Solaris operating system

Marcel from the PM2 project.

The OS for the Tera-Cray MTA-2

Microsoft Windows 7

The Glasgow Haskell Compiler (GHC) for the language Haskell uses lightweight threads which are scheduled
on operating system threads.

3.8.5 Fiber implementation examples

Fibers can be implemented without operating system support, although some operating systems or libraries provide
explicit support for them.

Win32 supplies a fiber API[11] (Windows NT 3.51 SP3 and later)


Ruby as Green threads
Netscape Portable Runtime (includes a user-space fibers implementation)
ribs2

3.9 Programming language support


IBM PL/I(F) included support for multithreading (called multitasking) in the late 1960s, and this was continued in
the Optimizing Compiler and later versions. The IBM Enterprise PL/I compiler introduced a new model thread
API. Neither version was part of the PL/I standard.
Many programming languages support threading in some capacity. Many implementations of C and C++ support
threading, and provide access to the native threading APIs of the operating system. Some higher level (and usually
cross-platform) programming languages, such as Java, Python, and .NET Framework languages, expose threading to
developers while abstracting the platform-specific differences in threading implementations in the runtime. Several
other programming languages and language extensions also try to abstract the concept of concurrency and thread-
ing from the developer fully (Cilk, OpenMP, Message Passing Interface (MPI)). Some languages are designed for
sequential parallelism instead (especially using GPUs), without requiring concurrency or threads (Ateji PX, CUDA).
A few interpreted programming languages have implementations (e.g., Ruby MRI for Ruby, CPython for Python)
which support threading and concurrency but not parallel execution of threads, due to a global interpreter lock (GIL).
The GIL is a mutual exclusion lock held by the interpreter that can prevent the interpreter from simultaneously
interpreting the application's code on two or more threads at once, which effectively limits the parallelism on multiple
core systems. This limits performance mostly for processor-bound threads, which require the processor, and not
much for I/O-bound or network-bound ones.
Other implementations of interpreted programming languages, such as Tcl using the Thread extension, avoid the GIL
limit by using an Apartment model where data and code must be explicitly shared between threads. In Tcl, each
thread has one or more interpreters.
Event-driven programming hardware description languages such as Verilog have a different threading model that
supports extremely large numbers of threads (for modeling hardware).

3.10 Practical multithreading


A standardized interface for thread implementation is POSIX Threads (Pthreads), which is a set of C-function library
calls. OS vendors are free to implement the interface as desired, but the application developer should be able to use the
same interface across multiple platforms. Most Unix platforms including Linux support Pthreads. Microsoft Windows
has its own set of thread functions in the process.h interface for multithreading, like beginthread. Java provides yet
another standardized interface over the host operating system using the Java concurrency library java.util.concurrent.
Multithreading libraries provide a function call to create a new thread, which takes a function as a parameter. A
concurrent thread is then created which starts running the passed function and ends when the function returns. The
thread libraries also offer synchronization functions which make it possible to implement multithreading functions
free of race-condition errors using mutexes, condition variables, critical sections, semaphores, monitors and other syn-
chronization primitives.
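A small Pthreads example along these lines (worker, counter, and NUM_THREADS are illustrative names): pthread_create takes the function to run in the new thread, and a mutex keeps the shared counter free of race conditions.

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4

static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);
        counter++;                        /* critical section */
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;                          /* thread ends when the function returns */
}

int main(void)
{
    pthread_t t[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", counter);   /* 4 * 100000 with correct locking */
    return 0;
}

Compile with the platform's pthread option (for example, -pthread with gcc).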
Another paradigm of thread usage is that of thread pools where a set number of threads are created at startup that
then wait for a task to be assigned. When a new task arrives, it wakes up, completes the task and goes back to waiting.
This avoids the relatively expensive thread creation and destruction functions for every task performed and takes
thread management out of the application developer's hands and leaves it to a library or the operating system that is
better suited to optimize thread management. Examples of such frameworks are Grand Central Dispatch and Threading
Building Blocks.
In programming models such as CUDA designed for data parallel computation, an array of threads runs the same code in parallel, each thread using only its ID to find its data in memory. In essence, the application must be designed so that each thread performs the same operation on different segments of memory, so that the threads can operate in parallel and exploit the GPU architecture.

3.11 See also


Clone (Linux system call)
Communicating sequential processes
Computer multitasking
Multi-core (computing)
Multithreading (computer hardware)
Non-blocking algorithm
Priority inversion
Protothreads
Simultaneous multithreading
Thread pool pattern
Thread safety
Win32 Thread Information Block

3.12 Notes
[1] Process switching changes virtual memory addressing, causing invalidation and thus flushing of an untagged translation lookaside buffer, notably on x86.

3.13 References
[1] Lamport, Leslie (September 1979). How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs (PDF). IEEE Transactions on Computers. C-28 (9): 690–691. doi:10.1109/tc.1979.1675439.

[2] Traffic Control in a Multiplexed Computer System, Jerome Howard Saltzer, Doctor of Science thesis, 1966, see footnote on page 20.

[3] Raúl Menéndez; Doug Lowe (2001). Murach's CICS for the COBOL Programmer. Mike Murach & Associates. p. 512. ISBN 1-890774-09-X.

[4] Stephen R. G. Fraser. Pro Visual C++/CLI and the .NET 3.5 Platform. Apress. p. 780. ISBN 1-4302-1053-2.

[5] Peter William O'Hearn; R. D. Tennent (1997). ALGOL-like Languages. 2. Birkhäuser Verlag. p. 157. ISBN 0-8176-3937-3.

[6] Single-Threading: Back to the Future? Sergey Ignatchenko, Overload #97

[7] Erlang: 3.1 Processes.

[8] "The Problem with Threads", Edward A. Lee, UC Berkeley, January 10, 2006, Technical Report No. UCB/EECS-2006-1

[9] Multithreading in the Solaris Operating Environment (PDF). 2002. Archived from the original (PDF) on February 26,
2009.

[10] Silberschatz, Abraham; Galvin, Peter Baer; Gagne, Greg (2013). Operating System Concepts (9th ed.). Hoboken, N.J.: Wiley. pp. 170–171. ISBN 9781118063330.

[11] CreateFiber, MSDN

David R. Butenhof: Programming with POSIX Threads, Addison-Wesley, ISBN 0-201-63392-2


Bradford Nichols, Dick Buttlar, Jacqueline Proulx Farell: Pthreads Programming, O'Reilly & Associates, ISBN
1-56592-115-1

Charles J. Northrup: Programming with UNIX Threads, John Wiley & Sons, ISBN 0-471-13751-0

Mark Walmsley: Multi-Threaded Programming in C++, Springer, ISBN 1-85233-146-1


Paul Hyde: Java Thread Programming, Sams, ISBN 0-672-31585-8

Bill Lewis: Threads Primer: A Guide to Multithreaded Programming, Prentice Hall, ISBN 0-13-443698-9
Steve Kleiman, Devang Shah, Bart Smaalders: Programming With Threads, SunSoft Press, ISBN 0-13-172389-
8
Pat Villani: Advanced WIN32 Programming: Files, Threads, and Process Synchronization, Harpercollins Pub-
lishers, ISBN 0-87930-563-0
Jim Beveridge, Robert Wiener: Multithreading Applications in Win32, Addison-Wesley, ISBN 0-201-44234-5

Thuan Q. Pham, Pankaj K. Garg: Multithreaded Programming with Windows NT, Prentice Hall, ISBN 0-13-
120643-5
Len Dorfman, Marc J. Neuberger: Effective Multithreading in OS/2, McGraw-Hill Osborne Media, ISBN 0-07-017841-0
Alan Burns, Andy Wellings: Concurrency in ADA, Cambridge University Press, ISBN 0-521-62911-X

Uresh Vahalia: Unix Internals: the New Frontiers, Prentice Hall, ISBN 0-13-101908-2
Alan L. Dennis: .Net Multithreading, Manning Publications Company, ISBN 1-930110-54-5

Tobin Titus, Fabio Claudio Ferracchiati, Srinivasa Sivakumar, Tejaswi Redkar, Sandra Gopikrishna: C#
Threading Handbook, Peer Information Inc, ISBN 1-86100-829-5

Tobin Titus, Fabio Claudio Ferracchiati, Srinivasa Sivakumar, Tejaswi Redkar, Sandra Gopikrishna: Visual
Basic .Net Threading Handbook, Wrox Press Inc, ISBN 1-86100-713-2

3.14 External links


Answers to frequently asked questions for comp.programming.threads
What makes multi-threaded programming hard?

Article "Query by Slice, Parallel Execute, and Join: A Thread Pool Pattern in Java" by Binildas C. A.
Article "The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software" by Herb Sutter

Article "The Problem with Threads" by Edward Lee

Concepts of Multithreading
ConTest - A Tool for Testing Multithreaded Java Applications by IBM

Debugging and Optimizing Multithreaded OpenMP Programs


Multithreading at DMOZ

Multithreading in the Solaris Operating Environment


POSIX threads explained by Daniel Robbins

The C10K problem


Chapter 4

Concurrency control

In information technology and computer science, especially in the fields of computer programming, operating systems, multiprocessors, and databases, concurrency control ensures that correct results for concurrent operations are generated, while getting those results as quickly as possible.
Computer systems, both software and hardware, consist of modules, or components. Each component is designed to operate correctly, i.e., to obey or to meet certain consistency rules. When components that operate concurrently interact by messaging or by sharing accessed data (in memory or storage), a certain component's consistency may be violated by another component. The general area of concurrency control provides rules, methods, design methodologies, and theories to maintain the consistency of components operating concurrently while interacting, and thus the consistency and correctness of the whole system. Introducing concurrency control into a system means applying operation constraints which typically result in some performance reduction. Operation consistency and correctness should be achieved with as good efficiency as possible, without reducing performance below reasonable levels. Concurrency control can require significant additional complexity and overhead in a concurrent algorithm compared to the simpler sequential algorithm.
For example, a failure in concurrency control can result in data corruption from torn read or write operations.

4.1 Concurrency control in databases


Comments:

1. This section is applicable to all transactional systems, i.e., to all systems that use database transactions (atomic
transactions; e.g., transactional objects in Systems management and in networks of smartphones which typi-
cally implement private, dedicated database systems), not only general-purpose database management systems
(DBMSs).

2. DBMSs need to deal also with concurrency control issues not typical just to database transactions but rather to
operating systems in general. These issues (e.g., see Concurrency control in operating systems below) are out
of the scope of this section.

Concurrency control in database management systems (DBMS; e.g., Bernstein et al. 1987, Weikum and Vossen 2001), other transactional objects, and related distributed applications (e.g., Grid computing and Cloud computing) ensures that database transactions are performed concurrently without violating the data integrity of the respective databases. Thus concurrency control is an essential element for correctness in any system where two or more database transactions, executed with time overlap, can access the same data, e.g., virtually any general-purpose database system. Consequently, a vast body of related research has been accumulated since database systems emerged in the early 1970s. A well-established concurrency control theory for database systems is outlined in the references mentioned above: serializability theory, which allows one to effectively design and analyze concurrency control methods and mechanisms. An alternative theory for concurrency control of atomic transactions over abstract data types is presented in (Lynch et al. 1993), and is not utilized below. That theory is more refined and complex, has a wider scope, and has been less utilized in the database literature than the classical theory above. Each theory has its pros and cons, emphasis and insight. To some extent they are complementary, and their merging may be useful.


To ensure correctness, a DBMS usually guarantees that only serializable transaction schedules are generated, unless serializability is intentionally relaxed to increase performance, but only in cases where application correctness is not harmed. For maintaining correctness in cases of failed (aborted) transactions (which can always happen for many reasons), schedules also need to have the recoverability (from abort) property. A DBMS also guarantees that no effect of committed transactions is lost, and no effect of aborted (rolled back) transactions remains in the related database. Overall transaction characterization is usually summarized by the ACID rules below. As databases have become distributed, or needed to cooperate in distributed environments (e.g., federated databases in the early 1990s, and cloud computing currently), the effective distribution of concurrency control mechanisms has received special attention.

4.1.1 Database transaction and the ACID rules


Main articles: Database transaction and ACID

The concept of a database transaction (or atomic transaction) has evolved in order to enable both a well understood database system behavior in a faulty environment where crashes can happen at any time, and recovery from a crash to a well understood database state. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading a database object, writing, acquiring a lock, etc.), an abstraction supported in database and also other systems. Each transaction has well defined boundaries in terms of which program/code executions are included in that transaction (determined by the transaction's programmer via special transaction commands). Every database transaction obeys the following rules (by support in the database system; i.e., a database system is designed to guarantee them for the transactions it runs):

Atomicity - Either the effects of all or none of its operations remain ("all or nothing" semantics) when a transaction is completed (committed or aborted respectively). In other words, to the outside world a committed transaction appears (by its effects on the database) to be indivisible (atomic), and an aborted transaction does not affect the database at all.
Consistency - Every transaction must leave the database in a consistent (correct) state, i.e., maintain the predetermined integrity rules of the database (constraints upon and among the database's objects). A transaction must transform a database from one consistent state to another consistent state (however, it is the responsibility of the transaction's programmer to make sure that the transaction itself is correct, i.e., performs correctly what it intends to perform (from the application's point of view), while the predefined integrity rules are enforced by the DBMS). Thus, since a database can normally be changed only by transactions, all the database's states are consistent.
Isolation - Transactions cannot interfere with each other (as an end result of their executions). Moreover, usually (depending on the concurrency control method) the effects of an incomplete transaction are not even visible to another transaction. Providing isolation is the main goal of concurrency control.
Durability - Effects of successful (committed) transactions must persist through crashes (typically by recording the transaction's effects and its commit event in non-volatile memory).

The concept of atomic transaction has been extended over the years to what has become business transactions, which actually implement types of workflow and are not atomic. However, such enhanced transactions also typically utilize atomic transactions as components.

4.1.2 Why is concurrency control needed?


If transactions are executed serially, i.e., sequentially with no overlap in time, no transaction concurrency exists. How-
ever, if concurrent transactions with interleaving operations are allowed in an uncontrolled manner, some unexpected,
undesirable result may occur, such as:

1. The lost update problem: A second transaction writes a second value of a data-item (datum) on top of a first value written by a first concurrent transaction, and the first value is lost to other transactions running concurrently which need, by their precedence, to read the first value. The transactions that have read the wrong value end with incorrect results.

2. The dirty read problem: Transactions read a value written by a transaction that is later aborted. This value disappears from the database upon the abort, and should not have been read by any transaction ("dirty read"). The reading transactions end with incorrect results.

3. The incorrect summary problem: While one transaction takes a summary over the values of all the instances of a repeated data-item, a second transaction updates some instances of that data-item. The resulting summary does not reflect a correct result for any (usually needed for correctness) precedence order between the two transactions (if one is executed before the other), but rather some random result, depending on the timing of the updates, and on whether certain update results have been included in the summary or not.

Most high-performance transactional systems need to run transactions concurrently to meet their performance requirements. Thus, without concurrency control such systems can neither provide correct results nor keep their databases consistent.

4.1.3 Concurrency control mechanisms


Categories

The main categories of concurrency control mechanisms are:

Optimistic - Delay the checking of whether a transaction meets the isolation and other integrity rules (e.g., serializability and recoverability) until its end, without blocking any of its (read, write) operations ("...and be optimistic about the rules being met..."), and then abort the transaction to prevent the violation, if the desired rules would be violated upon its commit. An aborted transaction is immediately restarted and re-executed, which incurs an obvious overhead (versus executing it to the end only once). If not too many transactions are aborted, then being optimistic is usually a good strategy.
Pessimistic - Block an operation of a transaction, if it may cause violation of the rules, until the possibility of violation disappears. Blocking operations typically involves performance reduction.
Semi-optimistic - Block operations in some situations, if they may cause violation of some rules, and do not block in other situations, while delaying rule checking (if needed) to the transaction's end, as done with optimistic.

Different categories provide different performance, i.e., different average transaction completion rates (throughput), depending on the mix of transaction types, computing level of parallelism, and other factors. If selection and knowledge about trade-offs are available, then the category and method should be chosen to provide the highest performance.
The mutual blocking between two or more transactions (where each one blocks the other) results in a deadlock, where the transactions involved are stalled and cannot reach completion. Most non-optimistic mechanisms (with blocking) are prone to deadlocks, which are resolved by an intentional abort of a stalled transaction (which releases the other transactions in that deadlock), and its immediate restart and re-execution. The likelihood of a deadlock is typically low.
Blocking, deadlocks, and aborts all result in performance reduction, and hence the trade-offs between the categories.

Methods

Many methods for concurrency control exist. Most of them can be implemented within either main category above. The major methods,[1] each of which has many variants, and which in some cases may overlap or be combined, are:

1. Locking (e.g., Two-phase locking - 2PL) - Controlling access to data by locks assigned to the data. Access of a transaction to a data item (database object) locked by another transaction may be blocked (depending on lock type and access operation type) until lock release.
2. Serialization graph checking (also called Serializability, Conflict, or Precedence graph checking) - Checking for cycles in the schedule's graph and breaking them by aborts.
3. Timestamp ordering (TO) - Assigning timestamps to transactions, and controlling or checking access to data by timestamp order.

4. Commitment ordering (or Commit ordering; CO) - Controlling or checking transactions' chronological order of commit events to be compatible with their respective precedence order.

Other major concurrency control types that are utilized in conjunction with the methods above include:

Multiversion concurrency control (MVCC) - Increasing concurrency and performance by generating a new version of a database object each time the object is written, and allowing transactions' read operations to access several last relevant versions (of each object), depending on the scheduling method.
Index concurrency control - Synchronizing access operations to indexes, rather than to user data. Specialized methods provide substantial performance gains.
Private workspace model (Deferred update) - Each transaction maintains a private workspace for its accessed data, and its changed data become visible outside the transaction only upon its commit (e.g., Weikum and Vossen 2001). This model provides a different concurrency control behavior with benefits in many cases.

The most common mechanism type in database systems since their early days in the 1970s has been Strong strict
Two-phase locking (SS2PL; also called Rigorous scheduling or Rigorous 2PL) which is a special case (variant) of
both Two-phase locking (2PL) and Commitment ordering (CO). It is pessimistic. In spite of its long name (for
historical reasons) the idea of the SS2PL mechanism is simple: Release all locks applied by a transaction only after
the transaction has ended. SS2PL (or Rigorousness) is also the name of the set of all schedules that can be generated by this mechanism, i.e., these are SS2PL (or Rigorous) schedules, which have the SS2PL (or Rigorousness) property.

4.1.4 Major goals of concurrency control mechanisms

Concurrency control mechanisms first need to operate correctly, i.e., to maintain each transaction's integrity rules (as related to concurrency; application-specific integrity rules are out of the scope here) while transactions are running concurrently, and thus maintain the integrity of the entire transactional system. Correctness needs to be achieved with as good performance as possible. In addition, increasingly a need exists to operate effectively while transactions are distributed over processes, computers, and computer networks. Other subjects that may affect concurrency control are recovery and replication.

Correctness

Serializability Main article: Serializability

For correctness, a common major goal of most concurrency control mechanisms is generating schedules with the serializability property. Without serializability undesirable phenomena may occur, e.g., money may disappear from accounts, or be generated from nowhere. Serializability of a schedule means equivalence (in the resulting database values) to some serial schedule with the same transactions (i.e., one in which transactions are sequential with no overlap in time, and thus completely isolated from each other: no concurrent access by any two transactions to the same data is possible). Serializability is considered the highest level of isolation among database transactions, and the major correctness criterion for concurrent transactions. In some cases compromised, relaxed forms of serializability are allowed for better performance (e.g., the popular Snapshot isolation mechanism) or to meet availability requirements in highly distributed systems (see Eventual consistency), but only if the application's correctness is not violated by the relaxation (e.g., no relaxation is allowed for money transactions, since by relaxation money can disappear, or appear from nowhere).
Almost all implemented concurrency control mechanisms achieve serializability by providing conflict serializability, a broad special case of serializability (i.e., it covers and enables most serializable schedules, and does not impose significant additional delay-causing constraints) which can be implemented efficiently.

Recoverability

See Recoverability in Serializability



Comment: While in the general area of systems the term recoverability may refer to the ability of a system to recover from failure or from an incorrect/forbidden state, within concurrency control of database systems this term has received a specific meaning.
Concurrency control typically also ensures the recoverability property of schedules for maintaining correctness in cases of aborted transactions (which can always happen for many reasons). Recoverability (from abort) means that no committed transaction in a schedule has read data written by an aborted transaction. Such data disappear from the database (upon the abort) and are parts of an incorrect database state. Reading such data violates the consistency rule of ACID. Unlike serializability, recoverability cannot be compromised or relaxed in any case, since any relaxation results in quick database integrity violation upon aborts. The major methods listed above provide serializability mechanisms. None of them in its general form automatically provides recoverability, and special considerations and mechanism enhancements are needed to support recoverability. A commonly utilized special case of recoverability is strictness, which allows efficient database recovery from failure (but excludes optimistic implementations; e.g., Strict CO (SCO) cannot have an optimistic implementation, but has semi-optimistic ones).
Comment: Note that the recoverability property is needed even if no database failure occurs and no database recovery from failure is needed. It is rather needed to correctly and automatically handle transaction aborts, which may be unrelated to database failure and recovery from it.

Distribution

With the fast technological development of computing, the difference between local and distributed computing over low-latency networks or buses is blurring. Thus the quite effective utilization of local techniques in such distributed environments is common, e.g., in computer clusters and multi-core processors. However, the local techniques have their limitations and use multi-processes (or threads) supported by multi-processors (or multi-cores) to scale. This often turns transactions into distributed ones, if they themselves need to span multi-processes. In these cases most local concurrency control techniques do not scale well.

Distributed serializability and Commitment ordering

See Distributed serializability in Serializability

Main article: Global serializability


Main article: Commitment ordering

As database systems have become distributed, or started to cooperate in distributed environments (e.g., federated databases in the early 1990s, and nowadays Grid computing, Cloud computing, and networks with smartphones), some transactions have become distributed. A distributed transaction means that the transaction spans processes, and may span computers and geographical sites. This generates a need for effective distributed concurrency control mechanisms. Achieving the serializability property of a distributed system's schedule (see Distributed serializability and Global serializability (Modular serializability)) effectively poses special challenges typically not met by most of the regular serializability mechanisms, originally designed to operate locally. This is especially due to the need for costly distribution of concurrency control information amid communication and computer latency. The only known general effective technique for distribution is Commitment ordering, which was disclosed publicly in 1991 (after being patented). Commitment ordering (Commit ordering, CO; Raz 1992) means that transactions' chronological order of commit events is kept compatible with their respective precedence order. CO does not require the distribution of concurrency control information and provides a general effective solution (reliable, high-performance, and scalable) for both distributed and global serializability, also in a heterogeneous environment with database systems (or other transactional objects) with different (any) concurrency control mechanisms.[1] CO is indifferent to which mechanism is utilized, since it does not interfere with any transaction operation scheduling (which most mechanisms control), and only determines the order of commit events. Thus, CO enables the efficient distribution of all other mechanisms, and also the distribution of a mix of different (any) local mechanisms, for achieving distributed and global serializability. The existence of such a solution was considered unlikely until 1991, and by many experts also later, due to misunderstanding of the CO solution (see Quotations in Global serializability). An important side-benefit of CO is automatic distributed deadlock resolution. Contrary to CO, virtually all other techniques (when not combined with CO) are prone to distributed deadlocks (also called global deadlocks), which need special handling. CO is also the name of the resulting schedule property: A schedule has the CO property if the chronological order of its transactions' commit events is compatible with the respective transactions' precedence (partial) order.

SS2PL mentioned above is a variant (special case) of CO and thus also effective for achieving distributed and global serializability. It also provides automatic distributed deadlock resolution (a fact overlooked in the research literature even after CO's publication), as well as strictness and thus recoverability. Possessing these desired properties together with known efficient locking-based implementations explains SS2PL's popularity. SS2PL has been utilized to efficiently achieve distributed and global serializability since the 1980s, and has become the de facto standard for it. However, SS2PL is blocking and constraining (pessimistic), and with the proliferation of distribution and utilization of systems different from traditional database systems (e.g., as in Cloud computing), less constraining types of CO (e.g., Optimistic CO) may be needed for better performance.
Comments:

1. The Distributed conflict serializability property in its general form is difficult to achieve efficiently, but it is achieved efficiently via its special case Distributed CO: Each local component (e.g., a local DBMS) needs both to provide some form of CO, and to enforce a special vote ordering strategy for the Two-phase commit protocol (2PC: utilized to commit distributed transactions). Differently from the general Distributed CO, Distributed SS2PL exists automatically when all local components are SS2PL based (in each component CO exists, implied, and the vote ordering strategy is now met automatically). This fact has been known and utilized since the 1980s (i.e., that SS2PL exists globally, without knowing about CO) for efficient Distributed SS2PL, which implies distributed serializability and strictness (e.g., see Raz 1992, page 293; it is also implied in Bernstein et al. 1987, page 78). Less constrained distributed serializability and strictness can be efficiently achieved by Distributed Strict CO (SCO), or by a mix of SS2PL-based and SCO-based local components.

2. About the references and Commitment ordering: (Bernstein et al. 1987) was published before the discovery of CO in 1990. The CO schedule property is called Dynamic atomicity in (Lynch et al. 1993, page 201). CO is described in (Weikum and Vossen 2001, pages 102, 700), but the description is partial and misses CO's essence. (Raz 1992) was the first refereed article accepted for publication about CO algorithms (however, publications about an equivalent Dynamic atomicity property can be traced to 1988). Other CO articles followed. (Bernstein and Newcomer 2009)[1] note CO as one of the four major concurrency control methods, and CO's ability to provide interoperability among other methods.

Distributed recoverability Unlike serializability, distributed recoverability and distributed strictness can be achieved efficiently in a straightforward way, similarly to the way Distributed CO is achieved: In each database system they have to be applied locally, and employ a vote ordering strategy for the Two-phase commit protocol (2PC; Raz 1992, page 307).
As has been mentioned above, Distributed SS2PL, including distributed strictness (recoverability) and distributed commitment ordering (serializability), automatically employs the needed vote ordering strategy, and is achieved (globally) when employed locally in each (local) database system (as has been known and utilized for many years; as a matter of fact locality is defined by the boundary of a 2PC participant (Raz 1992)).

Other major subjects of attention

The design of concurrency control mechanisms is often influenced by the following subjects:

Recovery Main article: Data recovery

All systems are prone to failures, and handling recovery from failure is a must. The properties of the generated schedules, which are dictated by the concurrency control mechanism, may affect the effectiveness and efficiency of recovery. For example, the Strictness property (mentioned in the section Recoverability above) is often desirable for an efficient recovery.

Replication Main article: Replication (computer science)

For high availability, database objects are often replicated. Updates of replicas of the same database object need to be kept synchronized. This may affect the way concurrency control is done (e.g., Gray et al. 1996[2]).

4.1.5 See also


Schedule

Isolation (computer science)

Distributed concurrency control

Global concurrency control

4.1.6 References
Philip A. Bernstein, Vassos Hadzilacos, Nathan Goodman (1987): Concurrency Control and Recovery in
Database Systems (free PDF download), Addison Wesley Publishing Company, 1987, ISBN 0-201-10715-
5

Gerhard Weikum, Gottfried Vossen (2001): Transactional Information Systems, Elsevier, ISBN 1-55860-508-
8

Nancy Lynch, Michael Merritt, William Weihl, Alan Fekete (1993): Atomic Transactions in Concurrent and Distributed Systems, Morgan Kaufmann (Elsevier), August 1993, ISBN 978-1-55860-104-8, ISBN 1-55860-104-X

Yoav Raz (1992): The Principle of Commitment Ordering, or Guaranteeing Serializability in a Heterogeneous
Environment of Multiple Autonomous Resource Managers Using Atomic Commitment. (PDF), Proceed-
ings of the Eighteenth International Conference on Very Large Data Bases (VLDB), pp. 292-312, Vancouver,
Canada, August 1992. (also DEC-TR 841, Digital Equipment Corporation, November 1990)

4.1.7 Footnotes
[1] Philip A. Bernstein, Eric Newcomer (2009): Principles of Transaction Processing, 2nd Edition, Morgan Kaufmann (Else-
vier), June 2009, ISBN 978-1-55860-623-4 (page 145)

[2] Gray, J.; Helland, P.; O'Neil, P.; Shasha, D. (1996). Proceedings of the 1996 ACM SIGMOD International Conference on Management of Data. The dangers of replication and a solution (PDF). pp. 173–182. doi:10.1145/233269.233330.

4.2 Concurrency control in operating systems


Multitasking operating systems, especially real-time operating systems, need to maintain the illusion that all tasks running on top of them are running at the same time, even though only one or a few tasks really are running at any given moment due to the limitations of the hardware the operating system is running on. Such multitasking is fairly simple when all tasks are independent from each other. However, when several tasks try to use the same resource, or when tasks try to share information, it can lead to confusion and inconsistency. The task of concurrent computing is to solve that problem. Some solutions involve locks similar to the locks used in databases, but they risk causing problems of their own, such as deadlock. Other solutions are non-blocking algorithms and read-copy-update.

4.2.1 See also


Linearizability

Mutual exclusion

Semaphore (programming)

Lock (computer science)

Software transactional memory

Transactional Synchronization Extensions



4.2.2 References
Andrew S. Tanenbaum, Albert S Woodhull (2006): Operating Systems Design and Implementation, 3rd Edition,
Prentice Hall, ISBN 0-13-142938-8

Silberschatz, Avi; Galvin, Peter; Gagne, Greg (2008). Operating Systems Concepts, 8th edition. John Wiley &
Sons. ISBN 0-470-12872-0.
Chapter 5

Race condition

A race condition or race hazard is the behavior of an electronic, software, or other system where the output is dependent on the sequence or timing of other uncontrollable events. It becomes a bug when events do not happen in the order the programmer intended. The term originates with the idea of two signals racing each other to influence the output first.
Race conditions can occur in electronics systems, especially logic circuits, and in computer software, especially
multithreaded or distributed programs.

5.1 Electronics
A typical example of a race condition may occur in a system of logic gates where inputs vary. If a given output depends on the state of the inputs, it may only be defined for steady-state signals. As the inputs change state, a small delay will occur before the output changes, due to the physical nature of the electronic system. The output may, for a brief period, change to an unwanted state before settling back to the designed state. Certain systems can tolerate such glitches, but if this output functions as a clock signal for further systems that contain memory, for example, the system can rapidly depart from its designed behaviour (in effect, the temporary glitch becomes a permanent glitch).
Consider, for example, a two-input AND gate fed with a logic signal A on one input and its negation, NOT A, on the other input. In theory the output (A AND NOT A) should never be true. If, however, changes in the value of A take longer to propagate to the second input than to the first when A changes from false to true, then a brief period will ensue during which both inputs are true, and so the gate's output will also be true.[1]
Design techniques such as Karnaugh maps encourage designers to recognize and eliminate race conditions before
they cause problems. Often logic redundancy can be added to eliminate some kinds of races.
As well as these problems, some logic elements can enter metastable states, which create further problems for circuit
designers.

5.1.1 Critical and non-critical forms

A critical race condition occurs when the order in which internal variables are changed determines the eventual state
that the state machine will end up in.
A non-critical race condition occurs when the order in which internal variables are changed does not determine the
eventual state that the state machine will end up in.

5.1.2 Static, dynamic, and essential forms

A static race condition occurs when a signal and its complement are combined.
A dynamic race condition occurs when a change in an input causes multiple output transitions when only one is intended. Dynamic races are due to interaction between gates and can be eliminated by using no more than two levels of gating.


Race condition in a logic circuit. Here, t1 and t2 represent the propagation delays of the logic elements. When the input value A changes from low to high, the circuit outputs a short spike of duration (t1 + t2) − t2 = t1.

An essential race condition occurs when an input has two transitions in less than the total feedback propagation time. Sometimes they are cured using inductive delay line elements to effectively increase the time duration of an input signal.

5.2 Software
Race conditions arise in software when an application depends on the sequence or timing of processes or threads for it to operate properly. As with electronics, there are critical race conditions that result in invalid execution and bugs, as well as non-critical race conditions that result in unanticipated behavior. Critical race conditions often happen when the processes or threads depend on some shared state. Operations upon shared states are critical sections that must be mutually exclusive. Failure to obey this rule opens up the possibility of corrupting the shared state.
The memory model defined in the C11 and C++11 standards uses the term data race for a critical race condition caused by concurrent reads and writes of a shared memory location. A C or C++ program containing a data race has undefined behavior.[2][3]
Race conditions have a reputation of being difficult to reproduce and debug, since the end result is nondeterministic and depends on the relative timing between interfering threads. Problems occurring in production systems can therefore disappear when running in debug mode, when additional logging is added, or when attaching a debugger, a phenomenon often referred to as a "Heisenbug". It is therefore better to avoid race conditions by careful software design rather than attempting to fix them afterwards.

5.2.1 Example

As a simple example, let us assume that two threads want to increment the value of a global integer variable by one. Ideally, the following sequence of operations would take place: the first thread reads the value (0), increments it, and writes back 1; the second thread then reads 1, increments it, and writes back 2.
In that case the final value is 2, as expected. However, if the two threads run simultaneously without locking or synchronization, the outcome of the operation could be wrong. An alternative sequence of operations demonstrates this scenario: both threads read the initial value 0, each increments its own copy to 1, and each writes back 1.
In this case, the final value is 1 instead of the expected result of 2. This occurs because here the increment operations are not mutually exclusive. Mutually exclusive operations are those that cannot be interrupted while accessing some resource such as a memory location.
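The scenario can be reproduced with the following C sketch (assuming Pthreads): two threads perform unsynchronized read-increment-write cycles on a shared counter, so on a multiprocessor the printed total is usually well below the expected value. Guarding the increment with a mutex, as in the Pthreads example in the threading chapter, restores the expected result. All names are illustrative.

#include <pthread.h>
#include <stdio.h>

#define N 1000000

static volatile long shared = 0;   /* global integer incremented by both threads */

static void *racer(void *arg)
{
    for (long i = 0; i < N; i++) {
        long tmp = shared;         /* read                                       */
        tmp = tmp + 1;             /* modify                                     */
        shared = tmp;              /* write: the other thread may have written
                                      in the meantime, so its update is lost     */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, racer, NULL);
    pthread_create(&b, NULL, racer, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Expected 2000000, but typically less because increments were lost. */
    printf("shared = %ld (expected %d)\n", shared, 2 * N);
    return 0;
}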

5.2.2 File systems

In file systems, two or more programs may collide in their attempts to modify or access a file, which could result in data corruption. File locking provides a commonly used solution. A more cumbersome remedy involves organizing the system in such a way that one unique process (running a daemon or the like) has exclusive access to the file, and all other processes that need to access the data in that file do so only via interprocess communication with that one process. This requires synchronization at the process level.
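As a hedged illustration of the file-locking remedy, the following C sketch (assuming a POSIX system) takes an exclusive advisory lock with fcntl before updating a file, so cooperating processes cannot interleave their updates; the file name is an illustrative placeholder.

#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int fd = open("counter.txt", O_RDWR | O_CREAT, 0644);  /* illustrative file name */
    if (fd < 0) { perror("open"); return 1; }

    struct flock fl;
    memset(&fl, 0, sizeof fl);
    fl.l_type = F_WRLCK;                  /* exclusive (write) lock            */
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 0;                         /* 0 means "lock the whole file"     */

    if (fcntl(fd, F_SETLKW, &fl) < 0) {   /* block until the lock is granted   */
        perror("fcntl");
        return 1;
    }

    /* ... read, modify, and write the file here; no other cooperating
       process holding fcntl locks can interleave with this update ... */

    fl.l_type = F_UNLCK;                  /* release the lock */
    fcntl(fd, F_SETLK, &fl);
    close(fd);
    return 0;
}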
A different form of race condition exists in file systems where unrelated programs may affect each other by suddenly using up available resources such as disk space, memory space or processor cycles. Software not carefully designed to anticipate and handle this race situation may then become unpredictable. Such a risk may be overlooked for a long time in a system that seems very reliable. But eventually enough data may accumulate or enough other software may be added to critically destabilize many parts of a system. An example of this occurred with the near loss of the Mars Rover Spirit not long after landing. A solution is for software to request and reserve all the resources it will need before beginning a task; if this request fails then the task is postponed, avoiding the many points where failure could have occurred. Alternatively, each of those points can be equipped with error handling, or the success of the entire task can be verified afterwards, before continuing. A more common approach is to simply verify that enough system resources are available before starting a task; however, this may not be adequate because in complex systems the actions of other running programs can be unpredictable.

5.2.3 Networking

In networking, consider a distributed chat network like IRC, where a user who starts a channel automatically acquires channel-operator privileges. If two users on different servers, on different ends of the same network, try to start the same-named channel at the same time, each user's respective server will grant channel-operator privileges to each user, since neither server will yet have received the other server's signal that it has allocated that channel. (Note that this problem has been largely solved by various IRC server implementations.)

In this case of a race condition, the concept of the "shared resource" covers the state of the network (what channels exist, as well as what users started them and therefore have what privileges), which each server can freely change as long as it signals the other servers on the network about the changes so that they can update their conception of the state of the network. However, the latency across the network makes possible the kind of race condition described. In this case, heading off race conditions by imposing a form of control over access to the shared resource (say, appointing one server to control who holds what privileges) would mean turning the distributed network into a centralized one (at least for that one part of the network operation).
Race conditions can also exist when a computer program is written with non-blocking sockets, in which case the performance of the program can be dependent on the speed of the network link.

5.2.4 Life-critical systems

Software flaws in life-critical systems can be disastrous. Race conditions were among the flaws in the Therac-25 radiation therapy machine, which led to the death of at least three patients and injuries to several more.[4]
Another example is the Energy Management System provided by GE Energy and used by Ohio-based FirstEnergy Corp (among other power facilities). A race condition existed in the alarm subsystem; when three sagging power lines were tripped simultaneously, the condition prevented alerts from being raised to the monitoring technicians, delaying their awareness of the problem. This software flaw eventually led to the North American Blackout of 2003.[5] GE Energy later developed a software patch to correct the previously undiscovered error.

5.2.5 Computer security

A specic kind of race condition involves checking for a predicate (e.g. for authentication), then acting on the
predicate, while the state can change between the time of check and the time of use. When this kind of bug exists in
security-conscious code, a security vulnerability called a time-of-check-to-time-of-use (TOCTTOU) bug is created.
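A classic instance on Unix-like systems is checking a file with access() and then opening it, which leaves a window in which the file can be replaced (for example, by a symbolic link to a sensitive file). The C sketch below shows the vulnerable check-then-use pattern; a common mitigation is to drop the separate check and open the file directly with the appropriate privileges, handling failure instead. The path is an illustrative placeholder.

#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    /* Time of check: does the real user have write access? */
    if (access("/tmp/report.txt", W_OK) == 0) {
        /* Window of vulnerability: the file can be replaced (e.g., by a
           symlink to a sensitive file) between the check and the open. */

        /* Time of use: open and write, possibly to the swapped-in target. */
        int fd = open("/tmp/report.txt", O_WRONLY);
        if (fd >= 0) {
            write(fd, "data\n", 5);
            close(fd);
        }
    } else {
        fprintf(stderr, "no access\n");
    }
    return 0;
}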

5.3 Examples outside of computing

5.3.1 Biology

Neuroscience is demonstrating that race conditions can occur in mammal (rat) brains as well.[6][7]

5.4 Tools
Many software tools exist to help detect race conditions in software. They can be largely categorized into two groups: static analysis tools and dynamic analysis tools.
Thread Safety Analysis is a static analysis tool for annotation-based intra-procedural static analysis, originally implemented as a branch of gcc, and now reimplemented in Clang, supporting PThreads.[8]
There are several dynamic analysis tools, including Intel Inspector, a memory and thread checking and debugging tool to increase the reliability, security, and accuracy of C/C++ and Fortran applications; Intel Advisor, a sampling-based SIMD vectorization optimization and shared memory threading assistance tool for C, C++, C# and Fortran software developers and architects; ThreadSanitizer, which uses binary (Valgrind-based) or source (LLVM-based) instrumentation and supports PThreads;[9] and Helgrind, a Valgrind tool for detecting synchronisation errors in C, C++ and Fortran programs that use the POSIX pthreads threading primitives.[10]

5.5 See also


Concurrency control

Deadlock

Time of check to time of use

Synchronization (computer science)


Linearizability

Racetrack problem
Call collision

5.6 References
[1] Unger, S.H. (June 1995). Hazards, Critical Races, and Metastability. IEEE Transactions on Computers. 44 (6): 754–768. doi:10.1109/12.391185.

[2] ISO/IEC 9899:2011 - Information technology - Programming languages - C

[3] ISO/IEC 14882:2011. ISO. 2 September 2011. Retrieved 3 September 2011.

[4] An Investigation of Therac-25 Accidents I. Courses.cs.vt.edu. Retrieved 2011-09-19.

[5] Kevin Poulsen (2004-04-07). Tracking the blackout bug. Securityfocus.com. Retrieved 2011-09-19.

[6] How Brains Race to Cancel Errant Movements. Discover Magazine blogs. 2013-08-03.

[7] Schmidt, Robert; Leventhal, Daniel K; Mallet, Nicolas; Chen, Fujun; Berke, Joshua D (2013). Canceling actions involves a race between basal ganglia pathways. Nature Neuroscience. 16 (8): 1118–24. doi:10.1038/nn.3456. PMC 3733500. PMID 23852117.

[8] Thread Safety Analysis.

[9] THREADSANITIZER.

[10] Helgrind: a thread error detector.

5.7 External links


Karam, G.M.; Buhr, R.J.A. (August 1990). Starvation and Critical Race Analyzers for Ada. IEEE Transactions on Software Engineering. 16 (8): 829–843. doi:10.1109/32.57622.
Fuhrer, R.M.; Lin, B.; Nowick, S.M. (March 1995). Algorithms for the optimal state assignment of asynchronous state machines. Advanced Research in VLSI, 1995. Proceedings, 16th Conference on. pp. 59–75. doi:10.1109/ARVLSI.1995.515611. ISBN 0-8186-7047-9. as PDF

Paper "A Novel Framework for Solving the State Assignment Problem for Event-Based Specications" by
Luciano Lavagno, Cho W. Moon, Robert K. Brayton and Alberto Sangiovanni-Vincentelli

Wheeler, David A. (7 October 2004). Secure programmer: Prevent race conditions - Resource contention can be used against you (PDF). IBM developerWorks. Archived from the original (pdf) on Nov 14, 2013.

Chapter "Avoid Race Conditions" (Secure Programming for Linux and Unix HOWTO)
Race conditions, security, and immutability in Java, with sample source code and comparison to C code, by
Chiral Software
Karpov, Andrey (11 April 2009). Interview with Dmitriy Vyukov the author of Relacy Race Detector
(RRD)". Intel Software Library Articles.
Microsoft Support description

5.8 Text and image sources, contributors, and licenses


5.8.1 Text
Mutual exclusion Source: https://en.wikipedia.org/wiki/Mutual_exclusion?oldid=760666241 Contributors: Aldie, SimonP, RTC, Dmd3e,
Kku, Dori, Julesd, Furrykef, Wernher, Topbanana, Raul654, Tea2min, Enochlau, Giftlite, TomViza, Leonard G., Rchandra, Alis-
tairMcMillan, Christopherlin, Neilc, Mamizou, Andreas Kaufmann, Abdull, Gazpacho, CanisRufus, Charm, Alansohn, Nealcardwell,
Poromenos, Pion, TShilo12, Mcsee, Uncle G, BD2412, Qwertyus, Kbdank71, Chris Purcell, Kri, Michael Suess, Chobot, Bgwhite,
Roboto de Ajvol, Hairy Dude, RmM, Toncek, Adicarlo, SimonMorgan, KnightRider~enwiki, SmackBot, Aim Here, McGeddon, Thumper-
ward, Nbarth, JonHarder, Melkhior, Fig wright, Aleenf1, ACA, Jive Dadson, JSoules, Jesse Viviano, Ntsimp, Mblumber, Besieged,
Ebrahim, Thijs!bot, Carewolf, JAnDbot, Maxaeran, Digitalfunda, Rich257, VolkovBot, Rponamgi, Salvar, Psychotic Spoon, Milan
Kerlger, Wykypydya, SieBot, WereSpielChequers, Drothlis, Oxymoron83, Mild Bill Hiccup, Deineka, Addbot, Mortense, Ghetto-
blaster, Yobot, SplinterOfChaos, AnomieBOT, Rubinbot, Citation bot, LilHelpa, Sandiejat, FrescoBot, Ashelly, RaulMetumtam, Coder0x2,
Mfwitten, Maggyero, Winterst, Topest1, Artelius, Yoleg, Jfmantis, EmausBot, ChrisCooper1991, ChuispastonBot, ClueBot NG, Loopy48,
Wbm1058, BendelacBOT, It-spie-nl, Silvrous, KWVisor, BattyBot, Aaron Nitro Danielson, Electricmun11, Mquinson, Aquilaria,
Rob1793, Tentinator, Vahid598, Manish181192, JaconaFrere, Monkbot, Highway 231, B2u9vm1ea8g, AnnieFromTaiwan, Jackwu502,
KaynanK and Anonymous: 111
Critical section Source: https://en.wikipedia.org/wiki/Critical_section?oldid=764517007 Contributors: Bevo, Topbanana, Raul654, Owen,
Psi36, Ryanrs, Jason Quinn, Mike Rosoft, Aranel, Pearle, Lornova~enwiki, Josephw, GregorB, Marudubshinki, BD2412, Qwertyus,
RadioActive~enwiki, Margosbot~enwiki, Gurch, BMF81, Chobot, Bgwhite, Nmondal, Phorque, Shinmawa, JLaTondre, Locke Cole,
SmackBot, AndreniW, Rocksoccer, Unyoyega, Jedikaiti, Robocoder, Aruslan, Hgrosser, Veggies, Improfane, MegaHasher, Harryboyles,
Platonides, JoeBot, Simeon, Revolus, Zurg342000, Ebrahim, Thijs!bot, Reswobslc, Dgies, Ad88110, JAnDbot, Maxaeran, Rlopez,
JmG~enwiki, GrahamDavies, Kakoui, ClueBot, PixelBot, Stickee, Deineka, Addbot, LaaknorBot, Numbo3-bot, Luckas-bot, Yobot,
SplinterOfChaos, Xqbot, Rilium, Mark Schierbecker, Zolija, Maggyero, John of Reading, Mikhail Ryazanov, ClueBot NG, BG19bot,
Aaron Nitro Danielson, Metheglyn, Shadrx, Kartikharia, Akhande5 and Anonymous: 63
Thread (computing) Source: https://en.wikipedia.org/wiki/Thread_(computing)?oldid=763098796 Contributors: Lee Daniel Crocker,
Bryan Derksen, The Anome, Ramesh, Hari, Aldie, PierreAbbat, Ghakko, Tenbaset, Liftarn, TakuyaMurata, Alo, Yaronf, Julesd, Em-
perorbma, Magnus.de, Dysprosia, Furrykef, Itai, Ed g2s, Wernher, Bevo, Toreau, Tjdw, Raul654, Jamesday, Finlay McWalter, Robbot,
RedWolf, Altenmann, Pingveno, Guy Peters, Tea2min, Netoholic, Levin, Esap, Jorend, Waxmop, Ferdinand Pienaar, AlistairMcMillan,
Edcolins, Neilc, LiDaobing, Phe, Quarl, Urhixidur, Quota, Andreas Kaufmann, Abdull, RandalSchwartz, Imroy, Discospinster, Rich
Farmbrough, Aris Katsaris, Murtasa, Pavel Vozenilek, ChrisJ, Dyl, Alanc, Violetriga, CanisRufus, Coherers, Bobo192, Cmdrjameson,
R. S. Shaw, Dungodung, Apostrophe, Nevyn, Terrycojones, Jhertel, Csabo, Foant, Guy Harris, RuiPaulo, Uogl, Vedantm, Cburnett, Suru-
ena, MIT Trekkie, Forderud, Oleg Alexandrov, Feezo, Rchrd, Dkanter, Tom W.M., Jacj, BD2412, Qwertyus, Reisio, Rjwilmsi, Lordsatri,
Bruce1ee, Raaele Megabyte, Ligulem, TSFS, Bubba73, Erkcan, FlaBot, Dirkbike, RexNL, Intgr, Subversive, Pengu, BMF81, Chobot,
Martin Hinks, Barocco, Roboto de Ajvol, Wavelength, Borgx, Laurentius, RussBot, Spl, Pi Delport, Yuhong, EngineerScotty, Dianne
Hackborn, Krystyn Dominik, ZacBowling, CLAES, BOT-Superzerocool, Sleepnomore, Elkman, EvanYares, MStraw, JLaTondre, Ty-
omitch, Owl-syme, SmackBot, FishSpeaker, Mmernex, Hydrogen Iodide, Basil.bourque, Od Mishehu, Vald, Brick Thrower, Pieleric,
Nejko, AnOddName, Ranieris, Commander Keane bot, Vkj~enwiki, Betacommand, Chris the speller, TimBentley, Computer Guru,
Thumperward, Octahedron80, Nbarth, Konstable, Gracenotes, Wynand.winterbach, Jsu~enwiki, Stiang, Frap, Zvar, Allan McInnes,
AndySimpson, Normxxx, Cybercobra, PPBlais, Bdiscoe, A5b, Jna runn, SashatoBot, Jiminikiz, Azraell, Loadmaster, Hvn0413,
Manifestation, EdC~enwiki, Alessandro57, Pvodenski, Martin Kozk, Momet, FatalError, Ahy1, Jesse Viviano, Eric Le Bigot, Asztal,
Cydebot, BudVVeezer, Zer0faults, Ebrahim, NotQuiteEXPComplete, Kubanczyk, Btball, Hervegirod, Marek69, Mattalec101, Dawkeye,
Jsaiya, Bangaram, Apantomimehorse, PhiLiP, JAnDbot, MER-C, Arch dude, Skezo, MichaelSHoman, Soulbot, SharShar, Allstare-
cho, Falcor84, Gwern, Yeetn, Silas S. Brown, Red Thrush, Scott.leishman, Je G., AlnoktaBOT, Trasz, Amarniit, Orgads, Rei-bot,
Paulka, Michael Hodgson, Vttale, MicahWedemeyer, Wykypydya, Haseo9999, Crashie, Falcon8765, AlleborgoBot, Logan, Quietbri-
tishjim, Jerryobject, Crossland~enwiki, Strife911, Mochan Shrestha, Oxymoron83, Lightmouse, Karim ElDeeb, Benglar, Ksrichand,
Wantnot, ClueBot, GorillaWarfare, The Thing That Should Not Be, Niceguyedc, Sir Anon, M4gnum0n, Shaiguitar, Echion2, Legacypac,
Diaa abdelmoneim, Frau K, Xorbyte, Groxx, Johnuniq, PretentiousSnot, Fantr, XLinkBot, Asafshelly, Bart.smaalders, Dododerek, Har-
landQPitt, Good Olfactory, Dsimic, Deineka, Addbot, Ghettoblaster, Fyrael, Scientus, MrOllie, MrVanBot, CarsracBot, Jasper Deng,
Loupeter, Zorrobot, Luckas-bot, Yobot, TaBOT-zerem, Peter Flass, AnomieBOT, Pankaj.tux, Kingpin13, Materialscientist, A123a,
LilHelpa, Xqbot, TinucherianBot II, Unixphil, SamuelThibault, Shadowjams, Nixn, FrescoBot, Sandgem Addict, Mark Renier, Mag-
gyero, Nickyus, Allen4names, Lysander89, Jesse V., Ripchip Bot, Lambdatypes, JamieHanlon, Wpdkc, Davidvandebunte, EmausBot,
WikitanvirBot, Dewritech, NotAnonymous0, Chricho, Ida Shaw, Mrmatiko, Hotspoons, Fabrictramp(public), Hughht5, L Kensington,
Ipsign, Mikhail Ryazanov, ClueBot NG, Jaanus.kalde, Ptrb, Matthiaspaul, Kejia, Snotbot, Zakblade2000, Rezabot, Widr, BG19bot,
MusikAnimal, Nsda, Ishwar Gajanan Kurwade, Usernameinmisuse, Heloo1234554321, BattyBot, Cyberbot II, Tagremover, Mjsteiner,
Dexbot, Greenstruck, Zoon van Zaal, Franois Robere, Andrey.a.mitin, Eric Corbett, Lesser Cartographies, YiFeiBot, Krz.dor, Acalycine,
Local4554, Danyc0, TienShan0, Endojelly, , Oltemative, BU Rob13, ConIntegration, Qzd, Sangam lamsal, Co z psem, GreenC
bot, Fmadd, Ishachopra1, Pololoasdfasdgafg, Home Lander and Anonymous: 402
Concurrency control Source: https://en.wikipedia.org/wiki/Concurrency_control?oldid=730186339 Contributors: The Anome, Ewen,
Jose Icaza, Bdesham, Kku, Karada, Poor Yorick, Clausen, Furrykef, Craig Stuntz, Peak, Rholton, Gdimitr, DavidCary, Mgarcia~enwiki,
TonyW, KeyStroke, Leibniz, YUL89YYZ, CanisRufus, PaulMcKenney, Nealcardwell, Mindmatrix, Ruud Koot, Nguyen Thanh Quang,
Tumble, Thoreaulylazy, CarlHewitt, Victor falk, SmackBot, Reedy, Brick Thrower, Nbarth, DHN-bot~enwiki, Malbrain, JonHarder,
Acdx, Oioisaveloy, Soumyasch, Wikidrone, Comps, GeraldH, Jesse Viviano, Christian75, DumbBOT, Thijs!bot, Touko vk, 2GooD~enwiki,
Dalliance, Jirislaby, AntiVandalBot, Magioladitis, Tikuko, Winterspan, Gerakibot, Flyer22 Reborn, JCLately, Siskus, M4gnum0n, Thingg,
Dthomsen8, Addbot, Numbo3-bot, Vincnet, Amirobot, AnomieBOT, Materialscientist, Gilo1969, Miym, Smallman12q, N3rV3, Fres-
coBot, Mark Renier, I dream of horses, Trappist the monk, John of Reading, Primefac, Donner60, Adrianmunyua, Augsod, ClueBot NG,
Helpful Pixie Bot, Wbm1058, Cyberpower678, Cyberbot II, Joeinwiki, Phamnhatkhanh, Franois Robere, Soham, FDMS4, Vieque,
Yoav Raz, MKMack and Anonymous: 70
Race condition Source: https://en.wikipedia.org/wiki/Race_condition?oldid=764417813 Contributors: Aldie, SimonP, Patrick, RTC,
Michael Hardy, Karada, Mr100percent, GRAHAMUK, Hashar, Dysprosia, Colin Marquardt, Pedant17, Furrykef, Joy, Bloodshed-
der, Benwing, Razi~enwiki, Nurg, Nilmerg, Mdrejhon, Tea2min, ManuelGR, DavidCary, Gracefool, Elmindreda, Gadum, Quarl, Si-
moneau, McCart42, Freakofnurture, Rich Farmbrough, Drano, Smyth, David Schaich, Neko-chan, Aaronbrick, Cuervo, R. S. Shaw,
5.8. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 35

Kamyar~enwiki, Daf, Pearle, Hooperbloob, Tom Yates, Liao, Walter Grlitz, Ynhockey, Alai, Forderud, Kenyon, Crosbiesmith, Daira
Hopwood, Mido, Cbdorsett, Male1979, Marudubshinki, E090, FlaBot, Bubbleboys, Intgr, Bgwhite, Pinecar, YurikBot, Bhny, Barefoot-
guru, CarlHewitt, Yahya Abdal-Aziz, Zwobot, Square87~enwiki, Lt-wiki-bot, Curpsbot-unicodify, Erik Sandberg, SmackBot, Slamb,
Unyoyega, Dbnull, Commander Keane bot, PJTraill, RDBrown, Thumperward, Nbarth, Tsca.bot, JonHarder, Allan McInnes, Pcgomes,
JonathanWakely, Soarhead77, Kuru, Moabdave, MTSbot~enwiki, Sakurambo, Cydebot, Mblumber, DumbBOT, Jamesjiao, Barticus88,
Michagal, Pietrodn, Parsiferon, EdJohnston, Jirka6, HarmonicFeather, Greensburger, .anacondabot, Andrewdolby, Stijn Vermeeren,
Madanmus, Japo, Hbent, R'n'B, Wiki Raja, Erkan Yilmaz, Szeder, Uncle Dick, AngryBear, Kyle the bot, TXiKiBoT, Softtest123, Sash-
man, Forlornturtle, ToePeu.bot, Jimmy the Snout, Psychless, Jerryobject, JCLately, HighInBC, PerryTachett, WurmWoode, The Thing
That Should Not Be, RFST, DumZiBoT, XLinkBot, Addbot, Ghettoblaster, Some jerk on the Internet, Olli Niemitalo, Tothwolf, Leszek
Jaczuk, Wikomidia, Numbo3-bot, Tide rolls, Luckas-bot, Yobot, PMLawrence, Rubinbot, Darolew, Xqbot, Miym, Erik9, Abed pa-
cino, Maggyero, Winterst, Vrenator, Msghani, Alph Bot, ToneDaBass, Lambdatypes, Moswento, ZroBot, AManWithNoPlan, Music
Sorter, Eda eng, Ego White Tray, Ipsign, ChuispastonBot, ClueBot NG, Naveenmouni, Snotbot, Zakblade2000, JagexSucks, Jorgenev,
Uwadb, Wbm1058, BG19bot, PhnomPencil, ElphiBot, Pleet, AllenZh, Musicologyman, Mr.goodbyte42, Hari.raghu, Napy65, Eric Cor-
bett, Skynorth, Mmpozulp, Cpt Wise, Godugu jaya, Scrabbler94, Bin927, Nauzilus, B2u9vm1ea8g, H1994tesh and Anonymous: 143

5.8.2 Images
File:8bit-dynamiclist_(reversed).gif Source: https://upload.wikimedia.org/wikipedia/commons/c/cc/8bit-dynamiclist_%28reversed%
29.gif License: CC-BY-SA-3.0 Contributors: This le was derived from: 8bit-dynamiclist.gif
Original artist: Seahen, User:Rezonansowy
File:Commons-logo.svg Source: https://upload.wikimedia.org/wikipedia/en/4/4a/Commons-logo.svg License: PD Contributors: ? Orig-
inal artist: ?
File:Critical_section_fg.jpg Source: https://upload.wikimedia.org/wikipedia/commons/1/1d/Critical_section_fg.jpg License: CC BY-
SA 4.0 Contributors: Own work Original artist: Kartikharia
File:Critical_section_pseudo_code.png Source: https://upload.wikimedia.org/wikipedia/commons/8/87/Critical_section_pseudo_code.
png License: CC BY-SA 4.0 Contributors: Own work Original artist: Kartikharia
File:Desktop_computer_clipart_-_Yellow_theme.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d7/Desktop_computer_
clipart_-_Yellow_theme.svg License: CC0 Contributors: https://openclipart.org/detail/17924/computer Original artist: AJ from openclipart.org
File:Folder_Hexagonal_Icon.svg Source: https://upload.wikimedia.org/wikipedia/en/4/48/Folder_Hexagonal_Icon.svg License: Cc-
by-sa-3.0 Contributors: ? Original artist: ?
File:Green_bug_and_broom.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/83/Green_bug_and_broom.svg License:
LGPL Contributors: File:Broom icon.svg, file:Green_bug.svg Original artist: Poznaniak and the other authors of the source files
File:Lock-green.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/65/Lock-green.svg License: CC0 Contributors: en:
File:Free-to-read_lock_75.svg Original artist: User:Trappist the monk
File:Locks_and_critical_sections.jpg Source: https://upload.wikimedia.org/wikipedia/commons/1/18/Locks_and_critical_sections.jpg
License: CC BY-SA 4.0 Contributors: Own work Original artist: Akhande5
File:Multithreaded_process.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a5/Multithreaded_process.svg License:
CC-BY-SA-3.0 Contributors: Own work in Inkscape Original artist: en:User:Cburnett
File:Mutual_exclusion_example_with_linked_list.png Source: https://upload.wikimedia.org/wikipedia/commons/2/2f/Mutual_exclusion_
example_with_linked_list.png License: CC BY-SA 3.0 Contributors: Own work Original artist: KWVisor
File:Question_book-new.svg Source: https://upload.wikimedia.org/wikipedia/en/9/99/Question_book-new.svg License: Cc-by-sa-3.0
Contributors:
Created from scratch in Adobe Illustrator. Based on Image:Question book.png created by User:Equazcion Original artist:
Tkgd2007
File:Race_condition.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/78/Race_condition.svg License: CC-BY-SA-3.0
Contributors: Transferred from en.wikipedia to Commons by Lampak using CommonsHelper. Original artist: The original uploader was
Sakurambo at English Wikipedia
File:State_graph.png Source: https://upload.wikimedia.org/wikipedia/commons/d/d2/State_graph.png License: CC BY-SA 4.0 Con-
tributors: Own work Original artist: Jackwu502
File:Text_document_with_red_question_mark.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a4/Text_document_
with_red_question_mark.svg License: Public domain Contributors: Created by bdesham with Inkscape; based upon Text-x-generic.svg
from the Tango project. Original artist: Benjamin D. Esham (bdesham)
File:Unbalanced_scales.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fe/Unbalanced_scales.svg License: Public do-
main Contributors: ? Original artist: ?
File:Wiki_letter_w_cropped.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1c/Wiki_letter_w_cropped.svg License:
CC-BY-SA-3.0 Contributors: This file was derived from: Wiki letter w.svg
Original artist: Derivative work by Thumperward
File:Wikiversity-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/91/Wikiversity-logo.svg License: CC BY-SA
3.0 Contributors: Snorky (optimized and cleaned up by verdy_p) Original artist: Snorky (optimized and cleaned up by verdy_p)

5.8.3 Content license


Creative Commons Attribution-Share Alike 3.0
