
Threads and Lightweight Processes

2006 IBM Corporation

Two limitations of the process model

1. Largely independent tasks that run concurrently and share a common address space and other resources (server-side database managers, transaction-processing monitors, and the like) are parallel in nature and need a programming model that supports parallelism. UNIX systems force such tasks to be serialized, or provide only awkward and inefficient mechanisms for managing multiple concurrent operations.

2. Traditional processes cannot take advantage of multiprocessor architectures, because a process can use only one processor at a time. An application must instead create a number of separate processes, dispatch them to different processors, and find ways of sharing memory and resources and of synchronizing the tasks.

UNIX variants tackle these limitations by adding primitives to the operating system that support concurrent processing. Each variant offers its own set of facilities: kernel threads, user threads, kernel-supported user threads, C-threads, pthreads, and lightweight processes.
BCSDCS 707 / BITDIT 707 Adv OS and OS Industry Trends

Motivation

An application often consists of several independent tasks that need not be serialized. A database server, for example, must listen for and process numerous client requests; the requests need not be serviced in any particular order and could run in parallel. Such applications perform better if the system provides mechanisms for concurrent execution of the subtasks.

On UNIX systems, such programs use multiple processes. Many server applications have a listener process that waits for client requests; when a request arrives, the listener forks a new process to service it. Since servicing a request often involves I/O operations that may block the process, this approach yields some concurrency benefits even on uniprocessor systems.

On uniprocessor machines, dividing the work among multiple processes means that when one process must block for I/O or page-fault servicing, another can make progress in the meantime. UNIX also allows users to compile several files in parallel, using a separate process for each.

Multiple processes: disadvantages

1. Creating processes adds overhead, because fork is an expensive system call.

2. IPC (message passing or shared memory) is needed, since each process has its own address space.

3. Additional work is required to dispatch processes to different machines or processors, pass information between the processes, wait for their completion, and gather the results.

4. UNIX provides no appropriate frameworks for sharing certain resources, e.g., network connections.

The model is justified only when the benefits of concurrency offset the cost of creating and managing multiple processes.

Thread abstraction

A thread is an independent computational unit that is part of the total processing work of an application. Such units have few interactions with one another and hence low synchronization requirements. An application may contain one or more such units. The traditional UNIX process is single-threaded: all computation is serialized within a single unit.

Multiple Threads and Processors

Combining multithreaded systems with multiprocessor architectures allows parallelized, compute-bound applications to achieve true parallelism by running each thread on a different processor. If the number of threads exceeds the number of processors, the threads must be multiplexed on the available processors. Ideally, an application with n threads running on n processors finishes its work in 1/n-th the time required by a single-threaded version. In practice, the overhead of creating, managing, and synchronizing threads, and the overhead of the multiprocessor operating system itself, reduce the benefit below this ideal ratio.


Single and Multithreaded Processes


Traditional UNIX: uniprocessor with single-threaded processes

With single-threaded processes executing on a uniprocessor machine, the system provides an illusion of concurrency by executing each process for a brief period of time (a time slice) before switching to the next. In the figure, the first three processes form the server side of a client-server application: the server program spawns a new process for each active client. These processes have identical address spaces and share information with one another using IPC. The lower two processes run another server application.


Multithreaded processes in a uniprocessor system

Here the two servers run on a multithreaded system. Each server runs as a single process, with multiple threads sharing a single address space. Interthread context switching is handled either by the kernel or by a user-level threads library, depending on the operating system.

This arrangement reduces the load on the memory subsystem: it eliminates the multiple, nearly identical address spaces of the one-process-per-client model, along with the copy-on-write memory sharing and the separate address translation maps the kernel would otherwise have to manage for each process. Since all threads of an application share a common address space, they can use efficient, lightweight interthread communication and synchronization mechanisms.

There are disadvantages as well. A single-threaded process does not have to protect its data from other processes, but a multithreaded process must be concerned with every object in its address space: if more than one thread can access an object, the accesses must be synchronized to avoid data corruption.


Multithreaded processes in a multiprocessor system

Here two multithreaded processes run on a multiprocessor. The threads of one process share the same address space, but each runs on a different processor, so all run concurrently. This improves performance but complicates the synchronization problems. A multiprocessor system is useful even for single-threaded applications, since several processes can run in parallel. Conversely, multithreaded applications show significant benefits even on single-processor systems: when one thread must block for I/O or some other resource, another thread can be scheduled to run, and the application continues to make progress. The thread abstraction is thus more suited to representing the intrinsic concurrency of a program than to mapping software designs onto multiprocessor hardware architectures.


Concurrency and parallelism

Parallelism is the actual degree of parallel execution achieved; it is limited by the number of physical processors available to the application.

Concurrency is the maximum parallelism the application could achieve with an unlimited number of processors. It depends on how the application is written, i.e., on how many threads of control can execute simultaneously given the proper resources. Concurrency may be provided at the system level or at the application level. The kernel provides system concurrency by recognizing multiple threads of control ("hot" threads) within a process and scheduling them independently. An application benefits from system concurrency even on a uniprocessor: if one thread blocks on an event or resource, the kernel can schedule another thread.

User-level thread libraries provide user concurrency. User threads, or coroutines ("cold" threads), are not recognized by the kernel; they are scheduled and managed by the applications themselves.

A purely user-level facility does not provide true system concurrency or parallelism, since such threads cannot actually run in parallel, but it does provide a more natural programming model for concurrent applications. Threads thus serve both as organizational tools and as a means of exploiting multiple processors: kernel-supported threads allow parallel execution on multiprocessors but are not well suited to structuring user applications, while purely user-level threads are useful for structuring applications but do not permit parallel execution of code.

A dual concurrency model combines system and user concurrency: the kernel recognizes multiple threads in a process, and libraries add user threads that are not seen by the kernel. User threads allow synchronization between concurrent routines in a program without the overhead of making system calls. Splitting the thread-support functionality between the kernel and the threads library in this way also follows the general principle of reducing the size and responsibilities of the kernel.

Fundamental Abstractions

A process is a compound entity with two components: a set of threads and a collection of resources.

1. A thread is a dynamic object that represents a control point in the process and executes a sequence of instructions. It has its own private objects: a program counter, a stack, and a register context.

2. The resources (address space, open files, user credentials, quotas, and so on) are shared by all threads in the process.

The traditional UNIX process has a single thread of control; multithreaded systems allow more than one thread of control in each process. Centralizing resource ownership in the process has drawbacks. Consider a server application that assumes the identity of the client while servicing a request: it is installed with superuser privileges and calls setuid, setgid, and setgroups to temporarily change its user credentials to match those of the client. Multithreading such a server to increase concurrency creates security problems. Because the process has a single set of credentials, it can only pretend to be one client at a time, so the server is forced to serialize (single-thread) all system calls that check for security.

There are several different types of threads, with different properties and uses. Three important types are:

kernel threads
lightweight processes
user threads



Kernel Threads

A kernel thread is not associated with a user process. It is created and destroyed internally by the kernel as needed and is responsible for executing a specific function. It shares the kernel text and global data but has its own kernel stack, is independently scheduled, and uses the standard synchronization mechanisms of the kernel, such as sleep() and wakeup().

Kernel threads are useful for performing asynchronous I/O. Rather than providing special mechanisms to handle it, the kernel can simply create a new thread to handle each such request: the request is handled synchronously by the thread, but appears asynchronous to the rest of the kernel. Kernel threads are also used to handle interrupts.

Their main advantage is that they are inexpensive to create and use: the only resources they need are a kernel stack, an area to save the register context when not running, and a data structure to hold scheduling and synchronization information. Context switching between kernel threads is quick, since the memory mappings do not have to be flushed. System processes such as the pagedaemon are equivalent to kernel threads. Daemon processes such as nfsd (the Network File System server process) are started at the user level, but once started execute entirely in the kernel; the user context is not required once they enter kernel mode.


Lightweight Processes

A lightweight process (LWP) is a kernel-supported user thread; a system must support kernel threads before it can support LWPs. Every process has one or more LWPs, each supported by a separate kernel thread. LWPs are independently scheduled, share the address space and other resources of the process, and can make system calls and block for I/O or resources. They provide the benefits of true parallelism on a multiprocessor system, since each LWP can be dispatched to run on a different processor, and benefits even on a uniprocessor, since resource and I/O waits block individual LWPs rather than the entire process.

Besides the kernel stack and register context, an LWP must also maintain some user state, notably the user register context, which must be saved when the LWP is preempted. User code is fully preemptible, and all LWPs in a process share a common address space, so any data that can be accessed concurrently by multiple LWPs must be accessed under synchronization. The kernel provides facilities to lock shared variables and to block an LWP that tries to access locked data: mutual exclusion (mutex) locks, semaphores, and condition variables.

Limitations

Creation, destruction, and synchronization of LWPs require system calls, which are expensive operations. Each system call involves two mode switches: one from user to kernel mode on invocation, and another back to user mode on completion. On each mode switch, the LWP crosses a protection boundary: the kernel must copy the system call parameters from user to kernel space and validate them to protect against malicious or buggy processes, and on return from the system call it must copy data back to user space.

When LWPs frequently access shared data, the synchronization overhead can nullify any performance benefits. On multiprocessor systems, locks can instead be placed at the user level: if a thread wants a resource that is currently unavailable, it can execute a busy-wait without kernel involvement. Busy-waiting is reasonable only for resources held briefly; otherwise the thread should block. Blocking, however, requires kernel involvement and is expensive.
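A user-level busy-wait lock of the kind described can be sketched with C11 atomics; the names and loop bounds are illustrative, and a real implementation would fall back to blocking when the lock is held for long:

```c
#include <pthread.h>
#include <stdatomic.h>

/* Minimal busy-wait lock: acquiring and releasing it involves
 * no system call at all, only atomic memory operations. */
typedef struct { atomic_flag held; } spinlock_t;

static spinlock_t lock = { ATOMIC_FLAG_INIT };
static long shared = 0;

void spin_lock(spinlock_t *l)
{
    while (atomic_flag_test_and_set_explicit(&l->held, memory_order_acquire))
        ;   /* busy-wait entirely at user level */
}

void spin_unlock(spinlock_t *l)
{
    atomic_flag_clear_explicit(&l->held, memory_order_release);
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 50000; i++) {
        spin_lock(&lock);    /* held only briefly, so spinning is cheap */
        shared++;
        spin_unlock(&lock);
    }
    return NULL;
}

/* Two threads contend for the lock; returns the final count. */
long run_demo(void)
{
    pthread_t a, b;
    shared = 0;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return shared;
}
```

The critical section here is a single increment, the "held only briefly" case where busy-waiting beats paying two mode switches per lock operation.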


Each LWP consumes significant kernel resources, including physical memory for a kernel stack. A system therefore cannot support a large number of LWPs and still remain general enough to support most reasonable applications. LWPs are not suitable for applications that use a large number of threads, or that frequently create and destroy threads or transfer control from one thread to another.

LWPs must be scheduled by the kernel, which raises a fairness issue: a user can monopolize the processor by creating a large number of LWPs. The kernel provides the mechanisms for creating, synchronizing, and managing LWPs; it is the responsibility of the programmer to use them judiciously.


User threads

User threads are implemented entirely at the user level, without the kernel knowing anything about them, through library packages such as Mach's C-threads and POSIX pthreads. The library provides all the functions for creating, synchronizing, scheduling, and managing threads, with no special assistance from the kernel. Because these interactions do not involve the kernel, they are extremely fast.

Combining user threads with lightweight processes yields a very powerful programming environment: the kernel recognizes, schedules, and manages the LWPs, while a user-level library multiplexes user threads on top of them and provides facilities for interthread scheduling, context switching, and synchronization without involving the kernel. The library acts as a miniature kernel for the threads it controls.

User threads are possible because the user-level context of a thread can be saved and restored without kernel intervention. Each user thread has its own user stack, an area to save its user-level register context, and other state information, such as its signal mask. A context switch saves the current thread's stack and registers and loads those of the newly scheduled thread.

User thread implementations

The user threads library schedules and switches context between user threads, but the kernel retains responsibility for process switching, because it alone has the privilege to modify the memory management registers. User threads are therefore not truly schedulable entities, and the kernel has no knowledge of them. It schedules the underlying process or LWP, which in turn uses library functions to schedule its threads. When the process or LWP is preempted, so are its threads. When a user thread makes a blocking system call, it blocks the underlying LWP; if the process has only one LWP (or if the user threads are implemented on a single-threaded system), all its threads are blocked.

The library provides synchronization objects to protect shared data structures. Each object comprises some type of lock variable (such as a semaphore) and a queue of threads blocked on it. Threads must acquire the lock before accessing the data structure; if the object is already locked, the library blocks the thread by linking it onto the blocked-threads queue and transferring control to another thread.

Modern UNIX systems provide asynchronous I/O mechanisms, which allow processes to perform I/O without blocking. In SVR4, a process can apply the I_SETSIG ioctl operation to any STREAMS device; a subsequent read or write on the stream simply queues the operation and returns without blocking, and when the I/O completes, the process is informed via a SIGPOLL signal.

Asynchronous I/O

Asynchronous I/O is a very useful feature, since it allows a process to perform other tasks while waiting for I/O, but it leads to a complex programming model. One solution is to restrict the asynchrony to the operating system level and give applications a synchronous programming environment. A threads library achieves this by providing a synchronous interface that uses the asynchronous mechanisms internally. Each request is synchronous with respect to the calling thread, which blocks until the I/O completes; the process as a whole continues to make progress, since the library invokes the asynchronous operation and schedules another user thread to run in the meantime. When the I/O completes, the library reschedules the blocked thread.


Benefits of user threads

User threads offer a more natural way of programming many applications, such as windowing systems. By hiding the complexities of asynchronous operations in the threads library, they give the programmer a synchronous programming paradigm. This makes them useful even on systems that lack any kernel support for threads, and it allows several threads libraries to exist, each optimized for a different class of applications. User threads are also extremely lightweight and consume no kernel resources except when bound to an LWP.

Their greatest advantage is performance. Operations on user threads run entirely at user level without system calls, avoiding the overhead of trap processing and of moving parameters and data across protection boundaries. This lowers the critical thread size, the amount of work a thread must do for it to be worthwhile as a separate entity, which depends on the overhead of creating and using a thread. Creating a user thread may take only a few hundred instructions, and with compiler support may be reduced to fewer than a hundred, so creation, destruction, and synchronization take much less time than for kernel-supported threads.

Finally, there is a total separation of information between the kernel and the threads library. The drawback is that since the kernel does not know about user threads, it cannot use its protection mechanisms to protect them from one another; each process has its own address space, which the kernel protects from unauthorized access by other processes, but the threads within a process are on their own.

User threads: limitations


The split scheduling model creates problems, because neither scheduler knows what the other is doing: the threads library schedules user threads, while the kernel schedules the underlying processes or LWPs. For example, the kernel may preempt an LWP whose user thread is holding a spin lock; if another user thread on a different LWP tries to acquire this lock, it will busy-wait until the holder of the lock runs again. Likewise, the kernel does not know the relative priorities of user threads, and may preempt an LWP running a high-priority user thread in order to schedule an LWP running a lower-priority one.

User-level synchronization mechanisms may also behave incorrectly. Applications are written on the assumption that all runnable threads are eventually scheduled. This is true when each thread is bound to a separate LWP, but may not hold when the user threads are multiplexed onto a small number of LWPs: since an LWP may block in the kernel when its user thread makes a blocking system call, a process may run out of LWPs even when there are runnable threads and available processors. The availability of an asynchronous I/O mechanism may help to mitigate this problem.

Finally, without explicit kernel support, user threads may improve concurrency, but they do not increase parallelism. Even on a multiprocessor, user threads sharing a single LWP cannot execute in parallel.


Summary

Kernel threads are primitive objects not visible to applications. Lightweight processes are user-visible threads that are recognized by the kernel and are built on kernel threads. User threads are higher-level objects not visible to the kernel; they may use LWPs if the system supports them, or they may be implemented in a standard UNIX process without special kernel support. Each facility on its own has major drawbacks that limit its usefulness, which motivates combining LWPs with user threads.


User-Level Threads Libraries

Two issues are important in designing a threads package: what kind of programming interface it presents to the user, and how it can be implemented using the primitives provided by the operating system. Several different threads packages have been built, including Chores, Topaz, and Mach's C-threads. The IEEE POSIX standard defines pthreads, and modern UNIX versions support the pthreads interface.


Programming Interface

The interface should provide a large set of operations:
- creating and terminating threads
- suspending and resuming threads
- assigning priorities to individual threads
- thread scheduling and context switching
- synchronizing activities (semaphores and mutual exclusion locks)
- sending messages from one thread to another

A threads package should minimize kernel involvement, and hence overhead. The kernel has no explicit knowledge of user threads, although the threads library may use system calls to implement some of its functionality. A thread's priority is therefore unrelated to the kernel scheduling priority, which is assigned to the underlying process or LWP; it is a process-relative priority, used by the threads scheduler to select which thread to run within the process.


Implementing Threads Libraries

The implementation depends on the facilities for multithreading provided by the kernel. Packages built on traditional UNIX kernels have no special support for threads: the threads library acts as a miniature kernel, maintaining all the state information for each thread and handling all thread operations at the user level. This effectively serializes all processing; concurrency is provided by using asynchronous I/O.

On modern systems where the kernel supports multithreaded processes through LWPs, a library has three choices:
- Bind each thread to a different LWP. This is easier to implement, but uses more kernel resources and offers little added value: it requires kernel involvement in all synchronization and thread scheduling operations.
- Multiplex user threads on a (smaller) set of LWPs. This is more efficient, as it consumes fewer kernel resources, and it works well when all threads in a process are roughly equivalent. It provides no easy way of guaranteeing resources to a particular thread.
- Allow a mixture of bound and unbound threads in the same process. This allows the application to fully exploit the concurrency and parallelism of the system. It also allows preferential handling of a bound thread, by increasing the scheduling priority of its underlying LWP or by giving that LWP exclusive ownership of a processor.

Threads library scheduling

The threads library contains a scheduling algorithm that selects which user thread to run. It maintains per-thread state and priority, which have no relation to the state or priority of the underlying LWPs. In the example, six user threads (u1-u6) are multiplexed onto two LWPs, and the library schedules one thread to run on each LWP. Threads u5 and u6 are in the running state, even though the underlying LWPs may be blocked in the middle of a system call, or preempted and waiting to be scheduled. Threads u1 and u2 are in the blocked state: a thread blocks when it tries to acquire a synchronization object locked by another thread, and when the object is released, the library unblocks the thread and puts it on the scheduler queue. Threads u3 and u4 are in the runnable state, waiting to be scheduled; the scheduler selects a thread from this queue based on priority and LWP affiliation. All this closely parallels the kernel's resource-wait and scheduling algorithms: the library acts as a miniature kernel for the threads it manages.

User thread states



In short: a thread's priority is unrelated to the kernel scheduling priority, and the threads library contains its own scheduling algorithm that selects which user thread to run. The kernel is responsible for processor allocation; the threads library is responsible for scheduling.



User Thread Implementation

Each user thread must maintain the following state information:
- Thread ID
- Saved register state
- User stack
- Signal mask
- Priority
- Thread local storage


Examples: the Mach C-threads library and the pthreads library.


PROCESS SCHEDULING


Introduction
Clock interrupt handling
Scheduler goals
Traditional UNIX scheduling
Processor affinity on AIX


Introduction

Like memory and terminals, the CPU is a shared resource for which processes contend. The scheduler is the component of the OS that determines which process to run at any given time, and for how long. AIX (UNIX) is essentially a time-sharing system, which means it allows several processes to run concurrently (an illusion on a uniprocessor machine). A scheduler has two aspects: the scheduling policy, and its implementation (data structures and algorithms). A context switch is expensive: the system must save the state of the current process in its Process Control Block (PCB) and restore that of the next.

Clock Interrupt Handling

Every machine has a hardware clock, which interrupts the system at fixed intervals. The time interval between successive clock interrupts is called a CPU tick; UNIX typically sets the tick at 10 milliseconds. The clock interrupt handler runs in response to the hardware clock interrupt and performs the following tasks:
- rearms the hardware clock, if necessary
- updates CPU usage statistics
- performs priority recomputation and time-slice expiration handling
- sends a SIGXCPU signal to the current process if it has exceeded its CPU usage quota
- updates the time-of-day clock and other related clocks
- handles callouts
- wakes up the swapper and pagedaemon when appropriate
- handles alarms

Note: some of these tasks do not need to be performed on every tick.


Callouts

A callout records a function that the kernel must invoke at a later time. To register a callout:

    int to_ID = timeout(void (*fn)(), caddr_t arg, long delta);

where fn() is the kernel function to invoke, arg is an argument to pass to fn(), and delta is the interval (in ticks) after which the callout fires. To cancel a callout:

    void untimeout(int to_ID);

On every tick, the clock handler checks if any callouts are due.


Alarms

A process can request the kernel to send it a signal after a specific amount of time, much like an alarm clock. There are three types of alarms:
- real-time: relates to actual elapsed time, and notifies the process via SIGALRM
- profiling: measures the total time the process has been executing, and notifies the process via SIGPROF
- virtual-time: monitors only the time spent by the process in user mode, and sends SIGVTALRM


Scheduler Goals

The scheduler must judiciously apportion CPU time to all processes in the system and ensure that the system delivers acceptable performance to each application. Applications can be loosely categorized into the following classes, based on their scheduling requirements and performance expectations:
- Interactive: applications such as shells, editors, and programs with graphical user interfaces.
- Batch: activities such as software builds and scientific computations that do not require user interaction and are often submitted as background jobs.
- Real-time: a catchall class of applications that are often time-critical.

The scheduler must try to balance the needs of each class. It must also ensure that kernel functions such as paging, interrupt handling, and process management can execute promptly when required. In a well-behaved system, all applications must continue to make progress, and no application should be able to prevent others from progressing unless the user has explicitly permitted it. The choice of scheduling policy has a profound effect on the system's ability to meet the requirements of different types of applications.

Traditional UNIX Scheduling

Traditional UNIX scheduling is priority-based. Each process has a scheduling priority that changes with time, and the scheduler always selects the highest-priority runnable process. It uses preemptive time-slicing to schedule processes of equal priority, and dynamically varies process priorities based on their CPU usage patterns. The UNIX kernel itself is strictly nonpreemptible, which avoids many synchronization problems associated with multiple processes accessing the same kernel data structures.

Process Priorities

A priority is an integer between 0 and 127; numerically lower values correspond to higher priorities. Priorities 0 through 49 are reserved for the kernel, while processes in user mode have priorities between 50 and 127. The proc structure contains the following priority-related fields:
- p_pri: current scheduling priority
- p_usrpri: user-mode priority
- p_cpu: measure of recent CPU usage
- p_nice: user-controllable nice factor

When a process completes a system call and is about to return to user mode, its scheduling priority is reset to its current user-mode priority. The user-mode priority depends on two factors: the nice value and the recent CPU usage. The nice value is a number between 0 and 39 with a default of 20; increasing it decreases the priority. Background processes are automatically given higher nice values, and only the superuser can decrease the nice value of a process. Every second, the kernel invokes a routine called schedcpu() (scheduled by a callout) that reduces the p_cpu value of each process by a decay factor:

    decay = (2 * load_average) / (2 * load_average + 1);

The schedcpu() routine also recomputes the user priorities of all processes using the formula

    p_usrpri = PUSER + (p_cpu / 4) + (2 * p_nice);
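These two computations can be written out as plain functions; PUSER is the base user-mode priority (50), and the helper names are illustrative:

```c
/* The schedcpu() computations above, as plain functions. */
#define PUSER 50   /* base user-mode priority */

/* Per-second decay applied to p_cpu; approaches 1 as load rises,
 * so CPU usage history is forgotten more slowly on a busy system. */
double decay_factor(double load_average)
{
    return (2.0 * load_average) / (2.0 * load_average + 1.0);
}

/* Recomputed user-mode priority: both recent CPU usage and a high
 * nice value push the number up, i.e., toward lower priority. */
int user_priority(int p_cpu, int p_nice)
{
    return PUSER + (p_cpu / 4) + (2 * p_nice);
}
```

For example, a process with no recent CPU usage and a nice value of 0 gets the best possible user priority, 50; accumulated p_cpu and a higher nice value both raise the number.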


Scheduler Implementation

The scheduler maintains an array called qs of 32 run queues; each queue corresponds to four adjacent priorities. A global variable whichqs contains a bitmask with one bit for each queue, set when the corresponding queue is nonempty. The swtch() routine, which performs the context switch, examines whichqs to find the index of the first set bit, which identifies the highest-priority nonempty queue.
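The lookup swtch() performs can be sketched with the standard ffs() routine, which returns the 1-based position of the lowest set bit; the function name here is illustrative:

```c
#include <strings.h>   /* ffs() */

/* Locate the highest-priority nonempty run queue from the whichqs
 * bitmask: bit i is set when run queue i is nonempty, and lower
 * indices correspond to higher priorities. */
int first_runnable_queue(unsigned int whichqs)
{
    if (whichqs == 0)
        return -1;                /* nothing runnable */
    return ffs((int)whichqs) - 1; /* queue index, 0..31 */
}
```

Because the bitmask fits in one word, finding the next queue to serve is a single bit-scan rather than a walk over 32 queue heads.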


Run Queue Manipulation

Every 100 milliseconds, the kernel invokes (through a callout) a routine called roundrobin() to schedule the next process from the same queue. The schedcpu() routine recomputes the priority of each process once every second. A context switch is indicated in three situations:
- The current process blocks on a resource or exits; this is a voluntary context switch.
- The priority recomputation results in the priority of another process becoming greater than that of the current one.
- The current process, or an interrupt handler, wakes up a higher-priority process.


Traditional scheduling algorithm: analysis

The traditional scheduling algorithm is simple and effective, and it is adequate for a general time-sharing system with a mixture of interactive and batch jobs. Dynamic recomputation of priorities prevents starvation of any process, and the approach favors I/O-bound jobs that require only small, infrequent bursts of CPU cycles. The scheduler nonetheless has several limitations that make it unsuitable for a wide variety of commercial applications:
- It does not scale well: if the number of processes is very large, it is inefficient to recompute all priorities every second.
- There is no way to guarantee a portion of CPU resources to a specific process or group of processes.
- There are no guarantees of response time to applications with real-time characteristics.
- Applications have little control over their priorities; the nice value mechanism is simplistic and inadequate.
- Since the kernel is nonpreemptive, higher-priority processes may have to wait a significant amount of time even after being made runnable.


Processor affinity on AIX

Binding a process to available CPUs on AIX: use either the bindprocessor command or the bindprocessor API. The bindprocessor command binds or unbinds the kernel threads of a process to a processor. Its syntax is:
bindprocessor Process [ ProcessorNum ] | -q | -u Process


bindprocessor subroutine


User Threads in AIX

2006 IBM Corporation

Understanding Threads and Process


Process Properties: A process has traditional attributes, such as: process ID, user ID, group ID; environment; working directory.


A process also provides a common address space and common system resources, as follows: File descriptors Signal actions Shared libraries Inter-process communication tools (such as message queues, pipes, semaphores, or shared memory)


Thread Properties: A thread is the schedulable entity. Its properties include the following: stack; scheduling properties (such as policy or priority); set of pending and blocked signals; some thread-specific data (such as errno).


All threads share the same address space. When a process is created, one thread is automatically created; this thread is called the initial thread, and its creation is transparent to the programmer. Threads are well-suited entities for modular programming.


User threads are mapped to kernel threads by the threads library. There are three different ways to map user threads to kernel threads: the M:1 model, the 1:1 model, and the M:N model.


Thread-Safe and Threaded Libraries in AIX


The following pairs of objects are manipulated by the threads library: Threads and thread-attributes objects Mutexes and mutex-attributes objects Condition variables and condition-attributes objects Read-write locks


The thread-safe libraries in AIX are: libbsd.a, libc.a, libm.a, libsvid.a, libtli.a, libxti.a, libnetsvc.a


Creating Threads
No parent-child relation exists between threads. When creating a thread, an entry-point routine and an argument must be specified. A thread has attributes, which specify the characteristics of the thread. A thread is created by calling the pthread_create subroutine.


The pthread_create subroutine returns the thread ID of the new thread. The current thread ID is returned by the pthread_self subroutine. A thread ID is an opaque object; its type is pthread_t (an integer in AIX). When calling the pthread_create subroutine, you may specify a thread attributes object. If you specify a NULL pointer, the created thread will have the default attributes.
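As a sketch of the calls above (portable POSIX C rather than AIX-specific code; the helper name create_and_join is illustrative), a thread is created with default attributes by passing NULL, given an entry-point routine and an argument, and its return value is collected with pthread_join:

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

/* Entry-point routine: receives the argument passed to pthread_create. */
static void *double_it(void *arg)
{
    intptr_t n = (intptr_t)arg;
    return (void *)(n * 2);      /* collected later by pthread_join */
}

/* Create a thread with default attributes (NULL), wait for it, return its result. */
int create_and_join(int n)
{
    pthread_t tid;
    void *result;

    if (pthread_create(&tid, NULL, double_it, (void *)(intptr_t)n) != 0)
        return -1;
    pthread_join(tid, &result);
    return (int)(intptr_t)result;
}
```

The pthread_t returned through the first argument identifies the new thread for all later library calls.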


Terminating Threads
A thread automatically terminates when it returns from its entry-point routine. A thread can exit at any time by calling the pthread_exit subroutine. The cancellation of a thread is requested by calling the pthread_cancel subroutine. Cleanup handlers, registered with pthread_cleanup_push, run when a thread exits or is canceled.
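A minimal sketch of cleanup handlers (portable POSIX C; the names cleanup_worker and run_with_cleanup are illustrative): a handler pushed with pthread_cleanup_push runs when the thread terminates via pthread_exit.

```c
#include <assert.h>
#include <pthread.h>

static int cleanup_ran = 0;

static void cleanup(void *arg)
{
    (void)arg;
    cleanup_ran = 1;             /* runs because the thread exits via pthread_exit */
}

static void *cleanup_worker(void *arg)
{
    (void)arg;
    pthread_cleanup_push(cleanup, NULL);
    pthread_exit(NULL);          /* pending cleanup handlers are executed */
    pthread_cleanup_pop(0);      /* not reached, but must pair with the push */
    return NULL;
}

int run_with_cleanup(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, cleanup_worker, NULL);
    pthread_join(tid, NULL);
    return cleanup_ran;          /* 1 if the handler ran */
}
```

Note that pthread_cleanup_push and pthread_cleanup_pop are macros that must appear as a pair in the same lexical scope.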


Using Mutexes
A mutex is a mutual exclusion lock; only one thread can hold the lock at a time. Mutex attributes specify the characteristics of the mutex. Like threads, mutexes are created with the help of an attributes object, which can be accessed through a variable of type pthread_mutexattr_t.


Creating and destroying mutexes


pthread_mutexattr_init pthread_mutexattr_destroy pthread_mutex_init pthread_mutex_destroy
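The create/lock/destroy cycle above can be sketched as follows (portable POSIX C; count_with_mutex is an illustrative name). Two threads increment a shared counter; the mutex guarantees the final value is exact:

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t lock;
static long counter = 0;

static void *bump(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* only one thread holds the mutex */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

long count_with_mutex(void)
{
    pthread_t t1, t2;

    pthread_mutex_init(&lock, NULL);   /* NULL => default mutex attributes */
    counter = 0;
    pthread_create(&t1, NULL, bump, NULL);
    pthread_create(&t2, NULL, bump, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_mutex_destroy(&lock);
    return counter;
}
```

Without the lock/unlock pair, the two unsynchronized increments could interleave and lose updates.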


Types of Mutexes
The type of a mutex determines how it behaves when operated on. The types are: PTHREAD_MUTEX_DEFAULT or PTHREAD_MUTEX_NORMAL, PTHREAD_MUTEX_ERRORCHECK, and PTHREAD_MUTEX_RECURSIVE.


Joining Threads
Joining a thread means waiting for it to terminate. The pthread_join subroutine allows a thread to wait for another thread to terminate. A thread cannot join itself; the resulting deadlock is detected by the library.


The pthread_join subroutine also allows a thread to return information to another thread. Any call to the pthread_join subroutine occurring before the target thread's termination blocks the calling thread. What happens if two threads try to join each other?


Scheduling Threads
Threads can be scheduled. The threads library allows the programmer to control the execution scheduling of threads in the following ways:
By setting scheduling attributes when creating a thread
By dynamically changing the scheduling attributes of a created thread
By defining the effect of a mutex on the thread's scheduling when creating a mutex (known as synchronization scheduling)
By dynamically changing the scheduling of a thread during synchronization operations (also synchronization scheduling)


Thread specific data


A thread-specific data key is an opaque object of the pthread_key_t data type. Thread-specific data are void pointers, which allows referencing any kind of data. Thread-specific data keys must be created before being used. Their values can be automatically destroyed when the corresponding threads terminate.


Routines
pthread_key_create pthread_key_delete pthread_getspecific pthread_setspecific
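The four routines above fit together as in this sketch (portable POSIX C; tsd_roundtrip is an illustrative name). Two threads store different values under the same key, and each reads back only its own copy:

```c
#include <assert.h>
#include <pthread.h>

static pthread_key_t key;

static void *tsd_worker(void *arg)
{
    pthread_setspecific(key, arg);    /* value is private to this thread */
    return pthread_getspecific(key);  /* reads back this thread's copy */
}

/* Each thread stores its own value under the one shared key. */
int tsd_roundtrip(void)
{
    pthread_t t1, t2;
    void *r1, *r2;
    static int a = 1, b = 2;

    pthread_key_create(&key, NULL);   /* NULL => no destructor routine */
    pthread_create(&t1, NULL, tsd_worker, &a);
    pthread_create(&t2, NULL, tsd_worker, &b);
    pthread_join(t1, &r1);
    pthread_join(t2, &r2);
    pthread_key_delete(key);
    return *(int *)r1 + *(int *)r2;   /* 1 + 2 */
}
```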


Signal management
Signal management in multi-threaded processes is shared by the process and thread levels, and consists of the following: Per-process signal handlers Per-thread signal masks Single delivery of each signal


Signal handlers are maintained at the process level; signal masks are maintained at the thread level. Each thread can have its own set of signals that will be blocked from delivery. The sigthreadmask subroutine must be used to get and set the calling thread's signal mask.


Signal Generation
The pthread_kill subroutine sends a signal to a thread. The kill subroutine sends a signal to a process. The raise subroutine sends a signal to the calling thread. The alarm subroutine requests that a signal be sent to the process later.


Signal handlers are called within the thread to which the signal is delivered. No pthread routines can be called from a signal handler; doing so can lead to an application deadlock. A signal is delivered to a thread unless its action is set to ignore.


Writing Reentrant and Thread-Safe Code


In multi-threaded programs, the same functions and the same resources may be accessed concurrently by several flows of control. To protect resource integrity, code written for multithreaded programs must be reentrant and thread-safe. A function can be either reentrant, thread-safe, both, or neither.


A reentrant function does not hold static data over successive calls, nor does it return a pointer to static data. A thread-safe function protects shared resources from concurrent access by locks. How to make a function Reentrant? How to make a function Thread-Safe?
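One way to make a function reentrant, as described above, is to move its state out of hidden static storage and into caller-supplied storage. A sketch (portable POSIX C; count_words is an illustrative name) using strtok_r, the reentrant counterpart of strtok:

```c
#include <assert.h>
#include <string.h>

/* Count words using the reentrant strtok_r: all parsing state lives in the
   caller-supplied saveptr, not in hidden static data as with strtok. */
int count_words(const char *text)
{
    char buf[128];
    char *saveptr, *tok;
    int n = 0;

    strncpy(buf, text, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    for (tok = strtok_r(buf, " ", &saveptr); tok != NULL;
         tok = strtok_r(NULL, " ", &saveptr))
        n++;
    return n;
}
```

Because each call chain owns its saveptr, two threads (or two interleaved parses in one thread) cannot corrupt each other's state.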


Developing Multi-Threaded Programs


All subroutine prototypes, macros, and other definitions for using the threads library are in the pthread.h header file, which is located in the /usr/include directory. The following global symbols are defined in the pthread.h file: _POSIX_REENTRANT_FUNCTIONS _POSIX_THREADS


Invoking the Compiler


When compiling a multi-threaded program, invoke the C compiler using one of the following commands: xlc_r invokes the compiler with a default language level of ansi; cc_r invokes the compiler with a default language level of extended. AIX supports up to 32768 threads in a single process.


Debugging a Multi-Threaded Program


Application programmers can use the dbx command to perform debugging. Subcommands are available for displaying thread-related objects, including attribute, condition, mutex, and thread. Kernel programmers can use the kernel debug program to debug kernel extensions and device drivers.


Several subcommands support multiple kernel threads and processors, including:
The cpu subcommand - changes the current processor
The ppd subcommand - displays per-processor data structures
The thread subcommand - displays thread table entries
The uthread subcommand - displays the uthread structure of a thread


Core File Requirements of a Multi-Threaded Program By default, processes do not generate a full core file. If an application must debug data in shared memory regions, particularly thread stacks, it is necessary to generate a full core dump. To generate full core file information, run the following command as root user: chdev -l sys0 -a fullcore=true


Benefits of Threads
Improved performance can be obtained on multiprocessor systems using threads. Inter-thread communication is far more efficient and easier to use than inter-process communication. Creating threads and controlling their execution requires fewer system resources than managing processes. On a multiprocessor system, multiple threads can run concurrently on multiple CPUs.


References:
http://publib.boulder.ibm.com/infocenter/aix/v6r1/index.jsp?topic=/com.ibm.aix. genprogc/doc/genprogc/understanding_threads.htm


Unit II: IPC, File Systems & Memory Management. Topics: IPC; File Systems; Virtual Memory; File Systems in AIX


Inter Process Communication (IPC)

Universal IPC facilities System V IPC Messages Ports Message Passing


Universal IPC facilities


UNIX provides three facilities for inter-process communication: signals, pipes, and process tracing. These are the only IPC mechanisms common to all UNIX variants.


Signals
Signals notify a process of asynchronous events. Modern UNIX variants recognize 31 or more different signals, most with predefined meanings. A process may send signals to another process (or processes) using the kill or killpg system calls. The kernel also generates signals in response to various events (for example, SIGINT for a Ctrl-C keystroke). Each signal has a default action, which can be overridden.

Uses:
Signals can be used for synchronization. Many applications have developed resource-sharing and locking protocols based on signals.

Limitations:
Expensive: the sender must make a system call; the kernel must interrupt the receiver, manipulate its stack, and arrange to resume the interrupted code.
Limited bandwidth: only 31 different signals exist, so a signal conveys only limited information.
Useful for event notification, but not for complicated interactions.


Pipes
A pipe is a unidirectional, FIFO, unstructured data stream with a fixed maximum capacity. Writers add data at the end of the pipe; readers retrieve data from the front. Once read, the data is removed. The pipe system call creates a pipe and returns two file descriptors (one for reading and one for writing). Each pipe can have several readers and writers.

A process can be a reader, a writer, or both. I/O on a pipe is much like I/O on a file; reading and writing are achieved through read and write system calls on the pipe's descriptors. Limitations: a pipe cannot be used to broadcast data (since reading removes the data from the pipe). Data in a pipe is a byte stream: if a writer sends several objects of different lengths, the reader cannot determine their boundaries. If there are multiple readers, a writer cannot direct data to a specific reader, and vice versa.
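The pipe call and its two descriptors can be sketched as follows (portable POSIX C; pipe_roundtrip is an illustrative name):

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Round-trip a message through a pipe: fd[0] is the read end, fd[1] the write end. */
int pipe_roundtrip(char *out, int outsize)
{
    int fd[2];
    ssize_t w, n;

    if (pipe(fd) != 0)
        return -1;
    w = write(fd[1], "hello", 5);          /* writer appends at the end */
    (void)w;
    n = read(fd[0], out, (size_t)outsize); /* reader consumes from the front */
    out[n] = '\0';
    close(fd[0]);
    close(fd[1]);
    return (int)n;                         /* bytes read */
}
```

Note that the five bytes arrive as an undifferentiated byte stream; nothing in the pipe records where one write ended and the next began.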


Process tracing
The ptrace system call is used by debuggers such as sdb and dbx; it allows a process to control the execution of a child process.
ptrace(cmd, pid, addr, data);
The cmd argument allows the following operations: read or write a word in the child's address space, u-area, or general-purpose registers; intercept specific signals; set or delete watchpoints in the child's address space; resume the execution of a stopped child; single-step the child; terminate the child. The kernel sets the child's traced flag (in its proc structure), which affects how the child responds to signals.

Limitations: only a direct child can be controlled; it is inefficient, requiring several context switches; and tracing setuid programs raises security problems.


System V IPC
System V UNIX provides three IPC mechanisms: semaphores, message queues, and shared memory. Each instance of an IPC resource has the following attributes: a key (to identify the instance of the resource), creator (UID and GID), owner (UID and GID), and permissions. A process acquires a resource using shmget/semget/msgget and controls the acquired resource using shmctl/semctl/msgctl.


Semaphores
Semaphores are integer-valued objects that support two atomic operations, P() and V(). P() decrements the value and blocks if the value would become less than zero. V() increments the value; if the result is >= 0, it wakes up a waiting process or thread. These operations are atomic. Semaphores can cause deadlocks if used incorrectly.
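P() and V() map onto the System V semop call as in this sketch (Linux-flavored C; sem_demo and the helper names are illustrative, and callers must define union semun themselves on most systems):

```c
#include <assert.h>
#include <sys/ipc.h>
#include <sys/sem.h>

union semun { int val; };             /* required by semctl SETVAL */

static void sem_p(int semid)          /* P(): decrement, blocking at zero */
{
    struct sembuf op = { 0, -1, 0 };
    semop(semid, &op, 1);
}

static void sem_v(int semid)          /* V(): increment, waking a waiter */
{
    struct sembuf op = { 0, +1, 0 };
    semop(semid, &op, 1);
}

int sem_demo(void)
{
    /* IPC_PRIVATE: a fresh set containing one semaphore. */
    int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
    union semun arg;
    int after_v, after_p;

    arg.val = 0;
    semctl(semid, 0, SETVAL, arg);        /* start the semaphore at 0 */
    sem_v(semid);
    after_v = semctl(semid, 0, GETVAL);   /* now 1 */
    sem_p(semid);
    after_p = semctl(semid, 0, GETVAL);   /* back to 0 */
    semctl(semid, 0, IPC_RMID);           /* remove the semaphore set */
    return after_v * 10 + after_p;
}
```

Had sem_p been called first, semop would have blocked the caller until some other process performed the matching V().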


Message Queues
A message queue is a header pointing at a linked list of messages. A message consists of a 32-bit type value plus a data area. The following functions are used:
msgqid = msgget(key, flag);
msgsnd(msgqid, msgp, count, flag);
count = msgrcv(msgqid, msgp, maxcnt, msgtype, flag);

A message queue is similar to a pipe but more versatile, and it addresses some limitations of pipes: it transmits data as discrete messages rather than an unformatted byte stream. It is effective for small amounts of data; each transfer requires two copy operations, which results in poor performance for large transfers.
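A sketch of the three calls above (Linux-flavored C; msq_roundtrip and struct msgb are illustrative names; note that the API represents the type value as a long):

```c
#include <assert.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct msgb {
    long mtype;       /* message type (a long in the API) */
    char mtext[32];   /* data area */
};

int msq_roundtrip(char *out)
{
    int msgqid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    struct msgb msg = { .mtype = 1 };
    ssize_t n;

    strcpy(msg.mtext, "ping");
    msgsnd(msgqid, &msg, sizeof(msg.mtext), 0);
    /* msgtype 0: take the first message on the queue, whatever its type */
    n = msgrcv(msgqid, &msg, sizeof(msg.mtext), 0, 0);
    strcpy(out, msg.mtext);
    msgctl(msgqid, IPC_RMID, NULL);       /* remove the queue */
    return (int)n;                        /* bytes of mtext received */
}
```

Unlike a pipe, the receiver gets the sender's message as one discrete unit, and the type field lets readers select particular classes of messages.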


Shared memory
Shared memory is a portion of physical memory shared by multiple processes. Processes may attach this region at any suitable virtual address range in their address space. Functions:
shmid = shmget(key, size, flag);
addr = shmat(shmid, shmaddr, shmflag);
shmdt(shmaddr);

Most modern UNIX variants also provide the mmap system call, which maps a file (or part of a file) into the address space of the caller.
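The shmget/shmat/shmdt sequence can be sketched as follows (Linux-flavored C; shm_demo is an illustrative name). Parent and child attach the same segment, so a store by the child is visible to the parent with no copying:

```c
#include <assert.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

int shm_demo(void)
{
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    int *addr = (int *)shmat(shmid, NULL, 0); /* NULL: kernel picks the address */
    int value;

    *addr = 0;
    if (fork() == 0) {
        *addr = 42;                  /* child writes into the shared region */
        shmdt(addr);
        _exit(0);
    }
    wait(NULL);                      /* wait for the child's store */
    value = *addr;                   /* parent sees the child's write */
    shmdt(addr);
    shmctl(shmid, IPC_RMID, NULL);   /* remove the segment */
    return value;
}
```

This is the fastest System V IPC mechanism: after setup, data moves between processes through ordinary loads and stores.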


File System


Filesystems

The User Interface to Files File Systems Special Files File System Framework The Vnode/VFS Architecture Implementation Overview Network File System


The User Interface to Files

Allows the user to organise, manipulate, and access different files.
Files and directories - links - file organisation - dirent structure
File attributes - type, size, inode number, links, device ID, user ID, group ID - sticky flag
File descriptors - file offset - open()
File I/O - read and write calls
File locking - advisory and mandatory locks


File Systems
root file system Mounting File hierarchy Logical disks - disk mirroring, striping


Special Files

Terminals and printers Symbolic links - soft and hard links Pipes and FIFOs - implementation
File System Framework

Growth of FS Need for sharing



The Vnode/VFS Architecture

Objectives Lessons from device I/O - cdevsw structure - relationship between a base class and its subclass - vnode abstraction - vfs abstraction


Implementation Overview

Objectives Vnodes and open files - file-system-dependent data structures - file object fields Vnode - vnode structure - vnode reference count Vfs object - vfs structure - relationship between vnode and vfs


Network File System


- Client/server model - remote procedure calls - user perspective: NFS export and mount - design goals and objectives - mounting NFS - NFS components - NFSv2 operations - mount protocol - statelessness


File Systems in AIX

JFS/JFS2 File System Layout Mounting Managing Filesystem


JFS File System Layout


A file system is a set of files, directories, and other structures. A JFS file system contains a boot block, a superblock, allocation bitmaps, and one or more allocation groups.
JFS Boot Block - occupies the first 4096 bytes of the file system, starting at byte offset 0 on the disk; it is available to start the operating system.
JFS Superblock - 4096 bytes in size, starting at byte offset 4096 on the disk; it maintains the size, the number of data blocks, the state of the file system, and the allocation group sizes.
JFS Allocation Bitmaps - the fragment allocation map records the allocation state of each fragment; the disk i-node allocation map records the status of each i-node.


JFS File System Layout (Contd)

JFS Fragments - Many file systems have disk blocks or data blocks; blocks divide the disk into units of equal size to store the data in a file's or directory's logical blocks. A disk block may be further divided into fixed-size allocation units called fragments. JFS provides a view of the file system as a contiguous series of fragments; fragments are the basic allocation unit, and the disk is addressed at the fragment level.


JFS File System Layout (Contd)

JFS Allocation Groups - The set of fragments making up the file system is divided into one or more fixed-size units of contiguous fragments; each unit is an allocation group. The first 4096 bytes of this area hold the boot block, and the second 4096 bytes hold the file system superblock. Disk i-nodes are 128 bytes in size and are identified by a unique disk i-node number, or i-number; the i-number maps a disk i-node to its location on the disk or to an i-node within its allocation group. Allocation groups allow the JFS resource allocation policies to use effective methods for achieving good file system I/O performance.


JFS File System Layout (Contd)

JFS Disk i-Nodes -Each file and directory has an i-node that contains access information such as file type, access permissions, owner's ID, and number of links to that file. -These i-nodes also contain "addresses" for finding the location on the disk where the data for a logical block is stored. -Each i-node has an array of numbered sections. - Each section contains an address for one of the file or directory's logical blocks.


Mounting

Mounting makes file systems, files, directories, devices, and special files available for use. The mount command instructs the operating system to attach a file system at a specified directory; write permission is needed for the mount point. A user with root authority can mount a file system arbitrarily by naming both the device and the directory on the command line. The /etc/filesystems file is used to define mounts that are to be automatic at system initialization.
Mount points - A mount point is a directory or file at which a new file system, directory, or file is made accessible. To mount a file system or a directory, the mount point must be a directory; to mount a file, the mount point must be a file.
Mounting file systems, directories, and files - There are two types of mounts: remote and local. Remote mounts are done on a remote system, with data transmitted over a telecommunication line (for example, NFS). Local mounts are mounts done on your local system. Mounts can be set to occur automatically during system initialization. Diskless workstations must be able to create and access device-special files on remote machines to have their /dev directories mounted from a server.

Managing file system


A file system is a complete directory structure, including a root directory and any subdirectories. File systems are confined to a single logical volume. Some of the most important system management tasks concern file systems, specifically: allocating space, creating file systems, monitoring space, backing up, and creating snapshots.
A set of system management commands helps manage file systems (backup, chfs, df, fsck, mkfs, mount, restore, snapshot): displaying available space on a file system (df command), file system commands, and comparing file systems on different machines.


Virtual Memory

Introduction Demand Paging Hardware requirements - IBM RS/6000, Intel 80x86 AIX Program Address Space Overview


Introduction
Memory Management Unit (MMU) Things to achieve :
Run programs larger than physical memory.
Run partially loaded programs, thus reducing program startup time.
Allow more than one program to reside in memory at one time.
Allow relocatable programs, which may be placed anywhere in memory.
Write machine-independent code: there should be no a priori correspondence between the program and the physical memory configuration.
Relieve programmers of the burden of allocating and managing memory resources.
Allow sharing, for example, shared code.

These goals are realized through the use of virtual memory The application is given the illusion that it has a large main memory at its disposal, although the computer may have a relatively small memory. The translation tables and other data structures used for memory management reduce the physical memory available to programs. The usable memory is further reduced by fragmentation.

Demand Paging
Functional Requirements
Address space management Address translation Physical memory management Memory protection Memory sharing Monitoring system load Other facilities


The Virtual Address Space


The address space of a process comprises all (virtual) memory locations that the program may reference or access. Demand-paged architectures divide this space into fixed-size pages. The pages of a program may hold several types of information:
text initialized data uninitialized data modified data stack heap shared memory shared libraries

Initial Access to a Page


A process can start running a program with none of its pages in physical memory. As it accesses each nonresident page, it generates a page fault, which the kernel handles by allocating a free page and initializing it with the appropriate data. The method of initialization is different for the first access to a page and for subsequent accesses.

The swap area


The total size of all active programs is often much greater than the physical memory, which consequently holds only some of the pages of each process. If a process needs a page that is not resident, the kernel makes room for it by appropriating another page and discarding its old contents.

Translation Maps
The paging system may use four different types of translation maps to implement virtual memory, as shown in the following figure:

Hardware address translations Address space map Physical memory map Backing store map


Hardware Requirements - The IBM RS/6000


The IBM RS/6000 is a reduced instruction set computer (RISC) machine that runs AIX, IBM's System V-based operating system. Its memory architecture has two interesting features: it uses a single, flat, system-wide address space, and it uses an inverted page table for address translation.


AIX Program Address space Overview

Tools are available to assist in allocating memory, mapping memory and files, and profiling application memory usage.

System Memory Architecture Introduction


The system employs a memory management scheme that uses software to extend the capabilities of the physical hardware. Because the address space does not correspond one-to-one with real memory, the address space (and the way the system makes it correspond to real memory) is called virtual memory. The subsystems of the kernel and the hardware that cooperate to translate the virtual address to physical addresses make up the memory management subsystem. The actions the kernel takes to ensure that processes share main memory fairly comprise the memory management policy.

The Physical Address Space of 64-bit Systems


The hardware provides a continuous range of virtual memory addresses, from 0x00000000000000000000 to 0xFFFFFFFFFFFFFFFFFFFF. The total addressable space is more than 1 trillion terabytes. Memory access instructions generate an effective address of 64 bits: 36 bits to select a segment register and 28 bits to give an offset within the segment.


Segment Register Addressing


The system kernel loads some segment registers in the conventional way for all processes, implicitly providing the memory addressability needed by most processes. These include two kernel segments, a shared-library segment, and an I/O device segment, which are shared by all processes. Some segment registers are shared by all processes, others by a subset of processes, and yet others are accessible to only one process. Sharing is achieved by allowing two or more processes to load the same segment ID.

Paging Space
A page is a unit of virtual memory that holds 4 KB of data and can be transferred between real and auxiliary storage. To accommodate the large virtual memory space with a limited real memory space, the system uses real memory as a work space and keeps inactive data and programs on disk. The area of disk that contains this data is called the paging space.


Memory Management Policy


The real-to-virtual address translation and most other virtual memory facilities are provided to the system transparently by the Virtual Memory Manager (VMM). To accomplish paging, the VMM uses page-stealing algorithms VMM uses a technique known as the clock algorithm to select pages to be replaced. This technique takes advantage of a referenced bit for each page as an indication of what pages have been recently used

Memory Allocation
Version 3 of the operating system uses a delayed paging slot technique for storage allocated to applications. This means that when storage is allocated to an application with a subroutine such as malloc, no paging space is assigned to that storage until the storage is referenced.


Q&A


Virtual and Physical Addresses


Physical addresses are provided directly by the machine. There is one physical address space per machine; addresses typically range from 0 to some maximum, though some portions of this range are usually used by the OS and/or devices and are not available for user processes.

Virtual addresses (or logical addresses) are addresses provided by the OS to processes. There is one virtual address space per process; addresses typically start at zero, but not necessarily, and the space may consist of several segments.

Address translation (or address binding) means mapping virtual addresses to physical addresses.


Simple Address Translation Mechanism


The OS divides physical memory into partitions of different sizes. Each partition is made available by the OS as a possible virtual address space for processes. Properties:
Virtual addresses are identical to physical addresses.
Address binding is performed by the compiler, linker, or loader, not the OS.
Changing partitions means changing the virtual addresses in the application program, by recompiling, or by relocating if the compiler produces relocatable output.
The degree of multiprogramming is limited by the number of partitions.
The size of programs is limited by the size of the partitions.


Address space diagram


Zombie state
A zombie is a process that has completed execution but is still in the process table. When a process ends:
All of the memory and resources associated with it are deallocated so they can be used by other processes, but the process's entry in the process table remains.
The parent can read the child's exit status by executing the wait system call, at which point the zombie is removed.
After the zombie is removed, its process ID and process-table entry can be reused.
If the parent fails to call wait, the zombie is left in the process table.
In some situations this may be desirable: for example, if the parent creates another child process, it ensures that the new child will not be allocated the same process ID.
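The lifecycle above can be sketched as follows (portable POSIX C; reap_child is an illustrative name). The child exits immediately; until the parent calls waitpid, it sits as a zombie whose process-table entry holds the exit status:

```c
#include <assert.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that exits at once, then reap it and recover its status. */
int reap_child(void)
{
    pid_t pid = fork();
    int status;

    if (pid == 0)
        _exit(7);                /* child: status is kept in the zombie entry */
    waitpid(pid, &status, 0);    /* parent: reap the zombie, freeing the entry */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The WIFEXITED/WEXITSTATUS macros decode the status word that the process-table entry preserved while the child was a zombie.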


FIVE OBSERVATIONS
1. One day, all the villagers decided to pray for rain. On the day of prayer all the people gathered, but only one boy came with an umbrella... THAT'S FAITH.
2. When you throw a baby in the air, she laughs because she knows you will catch her... THAT'S TRUST.
3. Every night we go to bed without any assurance of being alive the next morning, but still we set the alarms in our watches to wake up... THAT'S HOPE.
4. We plan big things for tomorrow in spite of zero knowledge of the future or having any certainty of uncertainties... THAT'S CONFIDENCE.
5. We see the world suffering. We know there is every possibility of the same or similar things happening to us. But still we get married?? THAT'S OVERCONFIDENCE!!

Balance sheet of life


Our Birth is our Opening Balance!
Our Death is our Closing Balance!
Our Prejudiced Views are our Liabilities
Our Creative Ideas are our Assets
Heart is our Current Asset
Soul is our Fixed Asset
Brain is our Fixed Deposit
Thinking is our Current Account
Achievements are our Capital
Character & Morals, our Stock-in-Trade
Friends are our General Reserves
Values & Behavior are our Goodwill
Patience is our Interest Earned
Love is our Dividend
Children are our Bonus Issues
Education is Brands / Patents
Knowledge is our Investment
Experience is our Premium Account
The Aim is to Tally the Balance Sheet Accurately. The Goal is to get the Best Presented Accounts Award.

Some very good and very bad things...
The most destructive habit.................. Worry
The greatest Joy............................ Giving
The greatest loss........................... Loss of self-respect
The most satisfying work.................... Helping others
The ugliest personality trait............... Selfishness
The most endangered species................. Dedicated leaders
Our greatest natural resource............... Our youth


The greatest 'shot in the arm'.............. Encouragement
The most effective sleeping pill............ Peace of mind
The greatest problem to overcome............ Fear
The most crippling failure disease.......... Excuses
The most powerful force in life............. Love
The most dangerous act...................... A gossip
The world's most incredible computer........ The brain
The worst thing to be without............... Hope
The deadliest weapon........................ The tongue
The two most power-filled words............. 'I Can'
The greatest asset.......................... Faith
The most worthless emotion.................. Self-pity
The most beautiful attire................... SMILE!
The most prized possession.................. Integrity
The most contagious spirit.................. Enthusiasm
The most powerful channel of communication.. Prayer

Life ends when you stop Dreaming,
Hope ends when you stop Believing,
Love ends when you stop Caring,
And Friendship ends when you stop Sharing...!!!

