OS: Processes vs. Threads
CIT 595
Spring 2010

A process address space is generally organized into code, data (static/global), heap, and stack segments.

Every process in execution works with:
- Registers: program counter (PC), working registers
- Call stack (referenced by the stack pointer)
State/Activity     Description
new                Process is being created
running            Instructions are being executed on the processor
waiting/blocked    Process is waiting for some event to occur
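Java exposes an analogous life cycle for its threads, so the states above can be observed directly. A minimal sketch (the class name is illustrative; Java's `Thread.State` names differ slightly: NEW, BLOCKED, TERMINATED):

```java
public class ThreadStates {
    // Observe a thread's state at three points of its life cycle:
    // before start (new), while blocked on a lock we hold (waiting/blocked),
    // and after it finishes (terminated).
    public static String[] observe() {
        Object gate = new Object();
        Thread t = new Thread(() -> {
            synchronized (gate) { }          // blocks until main releases the gate
        });
        String before = t.getState().name(); // NEW: created but not yet started
        String during;
        try {
            synchronized (gate) {
                t.start();
                while (t.getState() != Thread.State.BLOCKED) {
                    Thread.sleep(1);         // give it time to hit the lock
                }
                during = t.getState().name(); // BLOCKED: waiting for an event
            }
            t.join();                         // wait for the thread to finish
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return new String[] { before, during, t.getState().name() };
    }

    public static void main(String[] args) {
        for (String s : observe()) System.out.println(s);
    }
}
```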
Process Creation

A process can create several other processes, a.k.a. child or sub-processes:
- Exploits the ability of the system to execute concurrently
- E.g. the gcc program invokes different processes for the compiler, assembler, and linker

Each process could get its resources directly from the OS:
- Resources can be restricted to a subset of the parent process's resources
- This prevents overloading the system with too many children

The new process gets its own space in memory:
- Parent and child process address spaces are still different
- Because parent and child are isolated, they can communicate only via system calls

Process Management

The OS maintains a data structure for each process called the Process Control Block (PCB). Information associated with each PCB:
- Process state: e.g. ready, waiting, etc.
- Program counter: address of the next instruction
- CPU registers: PC, working registers, stack pointer, condition code
- CPU scheduling information: scheduling order and priority
- Memory-management information: page table/segment table pointers
- Accounting information: bookkeeping info, e.g. amount of CPU time used
- I/O status information: list of I/O devices allocated to this process
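The parent/child relationship can be sketched in Java with `ProcessBuilder` (illustrative; this assumes a POSIX-style `echo` command is on the PATH). The child gets its own address space, and the parent talks to it only through OS-mediated channels, a pipe here, plus the system call behind `waitFor`:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class SpawnChild {
    // Start a child process running "echo", read the line it writes to its
    // pipe, then wait for it to terminate and collect its exit status.
    public static String run() {
        try {
            ProcessBuilder pb = new ProcessBuilder("echo", "hello from child");
            Process child = pb.start();             // child gets its own address space
            String line;
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(child.getInputStream()))) {
                line = r.readLine();                // parent reads via an OS pipe
            }
            int status = child.waitFor();           // parent blocks until child exits
            return line + " (exit " + status + ")";
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```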
Non-Preemptive vs. Preemptive

A process can give up the CPU in two ways.

Non-preemptive: a process voluntarily gives up the CPU:
- I/O request: the process is blocked, then when the request is ready it is put back into the ready queue
- The process creates a new child/sub-process (more later)
- The process has finished its instructions (process termination): its PCB and assigned resources are de-allocated

Communicating between processes is costly, because most communication goes through the OS:
- Inter-Process Communication (IPC)
- Overhead of system calls and copying data

Multithreading

With multithreading, a program appears to do more than one thing at a time.

The idea of multithreading is the same as multiprogramming, i.e. multitasking, but within a single process:
- Multiprogramming is multitasking across different processes

A thread shares with the other threads of the process to which it belongs:
- Code section
- Data section (static + heap)
- Permissions
- Other resources (e.g. files)

(Figure: a process with 2 threads.)
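A minimal Java sketch of this sharing (names are illustrative): two threads in one process write into the same static array, and the main thread reads the results directly; no system call is needed for them to communicate, because all threads share one address space.

```java
public class SharedData {
    // Lives in the process's data/heap segment, visible to every thread.
    private static final int[] shared = new int[2];

    // Two threads write into the same array; the main thread then reads
    // their results directly from the shared address space.
    public static int sumAfterThreads() {
        Thread t0 = new Thread(() -> shared[0] = 10);
        Thread t1 = new Thread(() -> shared[1] = 32);
        t0.start();
        t1.start();
        try {
            t0.join();   // wait until both writers are done before reading
            t1.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return shared[0] + shared[1];
    }

    public static void main(String[] args) {
        System.out.println(sumAfterThreads());
    }
}
```

Because each thread writes a distinct element and both are joined before the read, the result is deterministic; concurrent writes to the *same* element would not be (see the Counter example below).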
Difference between Single vs. Multithreaded Process

Advantages of Multithreading

Increased responsiveness to the user:
- Allows a program to continue running even if parts of it are "waiting"

Resource sharing:
- Threads share the memory and resources of the process to which they belong
- All threads run within the same address space

Economical:
- Threads can communicate through shared data and thereby eliminate the overhead of system calls
- As long as shared data is not being modified, there is no problem
- But concurrent access that modifies the value of shared data can lead to data inconsistency
- E.g. a printer driver consumes data before the print program has produced it

Java threads may be created by:
- Extending the Thread class
- Implementing the Runnable interface

In C, threads are created using functions in the <pthread.h> library (the "p" stands for POSIX).
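Both Java creation styles can be sketched as follows (class and message strings are illustrative; the threads are deliberately joined one after the other so the output order is fixed):

```java
public class CreationStyles {
    // Style 1: extend Thread and override run().
    static class Greeter extends Thread {
        private final StringBuilder out;
        Greeter(StringBuilder out) { this.out = out; }
        @Override public void run() { out.append("from Thread subclass"); }
    }

    public static String demo() {
        StringBuilder out = new StringBuilder();
        try {
            Thread t1 = new Greeter(out);
            t1.start();
            t1.join();   // run the two threads one after another for a fixed order

            // Style 2: pass a Runnable (here a lambda) to the Thread constructor.
            Thread t2 = new Thread(() -> out.append(" and from Runnable"));
            t2.start();
            t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Implementing Runnable is usually preferred, since the class stays free to extend something else.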
Threading Example

    class Counter {
        private int c = 0;
        public void increment() {
            c++;
        }
        public void decrement() {
            c--;
        }
        public int value() {
            return c;
        }
    }

If a Counter object is referenced from multiple threads, there will be interference between the threads when the two operations (increment and decrement), running in different threads but acting on the same data (i.e. c), overlap. This happens because each operation consists of multiple steps, and the sequences of steps can interleave.

Threading Example (contd.)

Remember that the single expression c++ can be decomposed into three steps:
1. Retrieve the current value of c.
2. Increment the retrieved value by 1.
3. Store the incremented value back in c.

The same applies for c--.
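The bad interleaving can be replayed deterministically by hand-executing the three steps for two "concurrent" increments (a sketch; real thread scheduling merely makes this interleaving possible, not certain):

```java
public class LostUpdate {
    // Simulate two threads, A and B, each performing c++ on a shared c,
    // with their three steps interleaved in the worst possible order.
    public static int interleavedIncrements() {
        int c = 0;
        int regA = c;        // A step 1: retrieve c (reads 0)
        int regB = c;        // B step 1: retrieve c (also reads 0)
        regA = regA + 1;     // A step 2: increment its private copy
        regB = regB + 1;     // B step 2: increment its private copy
        c = regA;            // A step 3: store 1 back into c
        c = regB;            // B step 3: store 1 back into c -- A's update is lost
        return c;            // 1, even though c++ "ran" twice
    }

    public static void main(String[] args) {
        System.out.println(interleavedIncrements());   // 1
    }
}
```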
Synchronization

To avoid race conditions, we need to guarantee "atomic" execution of a sequence of instructions:
- Execute without interruptions
- Mutual exclusion for shared data

Big picture:
- Request entry to the critical section (a region of code that may change shared data)
- Access the (shared) data in the critical section
- Exit from the critical section

Implementing Synchronization

OS support:
- Done via a system call
- Block (wait) until you have exclusive access
- Interrupts are temporarily disabled to carry out atomic execution

Low-level support:
- E.g. the Test-and-Set (TS) instruction provided in the IBM/370 ISA
- Lock/unlock mechanism:
  - The lock state is implemented by a memory location
  - The location contains the value 0 if the lock is unlocked and 1 if the lock is locked
  - If the value is 0, the lock is closed (set to 1) and the critical section is executed; after finishing the critical section, the lock is opened
- This support is usually for shared-memory multiprocessor systems:
  - A CPU executing such an instruction locks the memory bus to prohibit other CPUs from accessing the location
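Java's `AtomicBoolean.getAndSet` behaves like the test-and-set instruction described above, so a spinlock can be sketched with it (names are illustrative; production Java code would normally use `synchronized` or `java.util.concurrent` locks instead):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLockDemo {
    // false = unlocked (0), true = locked (1), as on the slide.
    private static final AtomicBoolean lockWord = new AtomicBoolean(false);
    private static int counter = 0;   // shared data guarded by the lock

    // getAndSet(true) atomically reads the old value and stores true,
    // just like test-and-set; spin while the lock was already held.
    private static void lock()   { while (lockWord.getAndSet(true)) { /* spin */ } }
    private static void unlock() { lockWord.set(false); }

    public static int runThreads(int nThreads, int incrementsEach) {
        counter = 0;
        Thread[] ts = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < incrementsEach; j++) {
                    lock();        // request entry to the critical section
                    counter++;     // critical section: modify shared data
                    unlock();      // exit the critical section
                }
            });
            ts[i].start();
        }
        try {
            for (Thread t : ts) t.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return counter;            // nThreads * incrementsEach: no lost updates
    }

    public static void main(String[] args) {
        System.out.println(runThreads(4, 10_000));
    }
}
```

With the lock in place, every increment's three steps run atomically with respect to the other threads, so no updates are lost.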