Each UNIX variant supports some combination of kernel threads, user threads, kernel-supported user threads (lightweight processes), and user-level thread packages such as C-threads and pthreads.
Motivation
Many applications consist of several independent tasks that need not be serialized. A database server, for example, must listen for and process numerous client requests; since the requests need not be serviced in any particular order, they could run in parallel. Such applications perform better if the system provides mechanisms for concurrent execution of the subtasks.
UNIX systems
Traditionally, programs obtain concurrency by using multiple processes. Server applications have a listener process that waits for client requests; when a request arrives, the listener forks a new process to service it. Since servicing a request often involves I/O operations that may block the process, this approach yields some concurrency benefits even on uniprocessor systems.

Uniprocessor machines
Applications divide the work among multiple processes so that if one process must block for I/O or page-fault servicing, another process can make progress in the meantime. UNIX allows users to compile several files in parallel, using a separate process for each.
Drawbacks of the multiple-process model
- Additional work for the application: it must dispatch processes to different machines or processors, pass information between these processes, wait for their completion, and gather the results.
- UNIX provides no appropriate frameworks for sharing certain resources, e.g., network connections.
The model is justified only if the benefits of concurrency offset the cost of creating and managing multiple processes.
Thread abstraction
A thread is an independent computational unit that forms part of the total processing work of an application. Threads have few interactions with one another, and hence low synchronization requirements. An application may contain one or more such units. The traditional UNIX process is single-threaded: all its computation is serialized within a single unit.
Advantages of threads
- Interthread context switching is much cheaper than switching between processes.
- Reduced load on the memory subsystem: since all threads of an application share a common address space, the system avoids maintaining multiple, nearly identical address spaces (with copy-on-write memory sharing) and separate address translation maps for each task.
- Threads can use efficient, lightweight interthread communication and synchronization mechanisms.

Disadvantages
- A single-threaded process does not have to protect its data from other processes. A multithreaded process must be concerned with every object in its address space: if more than one thread can access an object, synchronization must be used to avoid data corruption.

Concurrency
An application achieves maximum parallelism when given an unlimited number of processors. How much it actually achieves depends on how the application is written: how many threads of control can execute simultaneously, with the proper resources available. Concurrency may be provided at the system or the application level. The kernel provides system concurrency by recognizing multiple threads of control ("hot threads") within a process and scheduling them independently. Applications benefit from system concurrency even on a uniprocessor: if one thread blocks on an event or resource, the kernel can schedule another thread.
Application-level (user) concurrency does not provide true concurrency or parallelism, since such threads cannot actually run in parallel, but it offers a more natural programming model for concurrent applications.

Threads serve two roles: as organizational tools and as a means of exploiting multiple processors. Kernel threads allow parallel execution on multiprocessors but are not suitable for structuring user applications; user threads are useful only for structuring applications and do not permit parallel execution of code. Combining system and user concurrency, where the kernel recognizes multiple threads in a process and libraries add user threads that the kernel does not see, captures both benefits: user threads allow synchronization between concurrent routines in a program without the overhead of making system calls, and it is always a good idea to reduce the size and responsibilities of the kernel by splitting thread-support functionality between the kernel and the threads library.
Fundamental Abstractions
A process is a compound entity with two components: a set of threads and a collection of resources. The thread is a dynamic object that represents a control point in the process and executes a sequence of instructions; it has private objects such as a program counter, a stack, and a register context. The resources (address space, open files, user credentials, quotas, and so on) are shared by all threads in the process.

The traditional UNIX process has a single thread of control; multithreaded systems allow more than one thread of control in each process. Centralizing resource ownership in the process has drawbacks. Consider a server application that assumes the identity of the client while servicing a request: it is installed with superuser privileges and calls setuid, setgid, and setgroups to temporarily change its user credentials to match those of the client. Multithreading this server to increase concurrency creates security problems: because the process has a single set of credentials, it can pretend to be only one client at a time, so the server is forced to serialize (single-thread) all system calls that check for security.

There are several different types of threads, with different properties and uses. Three important types are kernel threads, lightweight processes, and user threads.
Kernel Threads
A kernel thread is not associated with a user process. It is created and destroyed as needed internally by the kernel and is responsible for executing a specific function. It shares the kernel text and global data but has its own kernel stack. It is independently scheduled and uses the standard synchronization mechanisms of the kernel, such as sleep() and wakeup(). Kernel threads are useful for performing operations such as asynchronous I/O: rather than providing a special mechanism to handle each request, the kernel can simply create a new thread to handle it. The request is handled synchronously by the thread, but appears asynchronous to the rest of the kernel.
Lightweight Processes
A lightweight process (LWP) is a kernel-supported user thread; the system must support kernel threads before it can support LWPs. Every process has one or more LWPs, each supported by a separate kernel thread. LWPs are independently scheduled, share the address space and other resources of the process, and can make system calls and block for I/O or resources. On a multiprocessor system, LWPs give the benefits of true parallelism, since each LWP may be dispatched to run on a different processor; even on a uniprocessor, resource and I/O waits block individual LWPs rather than the entire process.

Besides the kernel stack and register context, an LWP also maintains some user state, notably the user register context, which must be saved when the LWP is preempted. User code is fully preemptible, and all LWPs in a process share a common address space. If any data can be accessed concurrently by multiple LWPs, the access must be synchronized. The kernel provides facilities to lock shared variables and to block an LWP if it tries to access locked data: mutual exclusion (mutex) locks, semaphores, and condition variables.
Limitations:
- Creation, destruction, and synchronization of LWPs require system calls, which are expensive operations: each call involves two mode switches, one from user to kernel mode on invocation, and another back to user mode on completion. When LWPs frequently access shared data, this overhead is significant; on multiprocessor systems, locks implemented at the user level are much cheaper.
- Each LWP consumes significant kernel resources, including physical memory for a kernel stack. A system therefore cannot support a large number of LWPs and still be general enough to support most reasonable applications. LWPs are not suitable for applications that use a large number of threads, or that frequently create and destroy threads or transfer control from one thread to another.
- LWPs must be scheduled by the kernel, which raises a fairness issue: a user can monopolize the processor by creating a large number of LWPs. The kernel provides mechanisms for creating, synchronizing, and managing LWPs; it is the programmer's responsibility to use them judiciously.
User threads
User threads are implemented entirely at the user level, without the kernel knowing anything about them, through library packages such as Mach's C-threads and POSIX pthreads. The library provides all the functions for creating, synchronizing, scheduling, and managing threads with no special assistance from the kernel. Because thread interactions do not involve the kernel, they are extremely fast. Combining user threads with lightweight processes yields a very powerful programming environment.

The library acts as a miniature kernel for the threads it controls. Implementing user threads is possible because the user-level context of a thread can be saved and restored without kernel intervention. Each user thread has its own user stack, an area to save user-level register context, and other state information such as signal masks. A context switch between user threads involves saving the current thread's stack and registers, then loading those of the newly scheduled thread.
The user thread library schedules and switches context between user threads; the kernel retains responsibility for process switching, because it alone has the privilege to modify the memory management registers. User threads are therefore not truly schedulable entities, since the kernel has no knowledge of them. The kernel schedules the underlying process or LWP, which in turn uses library functions to schedule its threads. When the process or LWP is preempted, so are its threads. If a user thread makes a blocking system call, it blocks the underlying LWP; if the process has only one LWP (or if the user threads are implemented on a single-threaded system), all its threads block.

The library provides synchronization objects to protect shared data structures. Each object comprises a lock variable of some type (such as a semaphore) and a queue of threads blocked on it. Threads must acquire the lock before accessing the data structure; if the object is already locked, the library blocks the thread by linking it onto the blocked-threads queue and transferring control to another thread.

Modern UNIX systems provide asynchronous I/O mechanisms, which allow processes to perform I/O without blocking. In SVR4, applying the I_SETSIG ioctl operation to any STREAMS device makes a subsequent read or write to the stream simply queue the operation and return without blocking; when the I/O completes, the process is informed via a SIGPOLL signal.

Asynchronous I/O
Asynchronous I/O is a very useful feature: it allows a process to perform other tasks while waiting for I/O. It leads, however, to a complex programming model. The alternative is to restrict asynchrony to the operating-system level and give applications a synchronous programming environment. A threads library achieves this by providing a synchronous interface that uses the asynchronous mechanisms internally. Each request is synchronous with respect to the calling thread, which blocks until the I/O completes; the process, however, continues to make progress, since the library invokes the asynchronous operation and schedules another user thread to run in the meantime. When the I/O completes, the library reschedules the blocked thread.
Without explicit kernel support, user threads may improve concurrency, but they do not increase parallelism: even on a multiprocessor, user threads sharing a single LWP cannot execute in parallel.
Summary
Kernel threads are primitive objects, not visible to applications. Lightweight processes are user-visible threads that are recognized by the kernel and are based on kernel threads. User threads are higher-level objects not visible to the kernel; they may use LWPs if the system supports them, or they may be implemented in a standard UNIX process without special kernel support. Each type has major drawbacks that limit its usefulness.
Programming Interface
The threads facilities provide a large set of operations:
- creating and terminating threads
- suspending and resuming threads
- assigning priorities to individual threads
- thread scheduling and context switching
- synchronizing activities through facilities such as semaphores and mutual exclusion locks
- sending messages from one thread to another
Threads library
The threads library contains a scheduling algorithm that selects which user thread to run. It maintains per-thread state and priority, which bear no relation to the state or priority of the underlying LWPs. Consider six user threads (u1-u6) multiplexed onto two LWPs. The library schedules one thread to run on each LWP. Threads u5 and u6 are in the running state, even though the underlying LWPs may be blocked in the middle of a system call, or preempted and waiting to be scheduled. Threads u1 and u2 are in the blocked state: a thread blocks when it tries to acquire a synchronization object locked by another thread, and when the object is released, the library unblocks the thread and puts it on the scheduler queue. Threads u3 and u4 are in the runnable state, waiting to be scheduled; the scheduler selects a thread from this queue based on priority and LWP affiliation. All of this closely parallels the kernel's resource-wait and scheduling algorithms: the library acts as a miniature kernel for the threads it manages.
A thread's library priority is unrelated to the kernel scheduling priority. The threads library contains the scheduling algorithm that selects which user thread to run: the kernel is responsible for processor allocation, and the threads library for thread scheduling.
User threads libraries have a choice of implementations:
- Bind each thread to a different LWP. This is easier to implement, but uses more kernel resources and offers little added value; it requires kernel involvement in all synchronization and thread scheduling operations.
- Multiplex user threads on a (smaller) set of LWPs. This is more efficient, as it consumes fewer kernel resources. This method works well when all threads in a process are roughly equivalent, but it provides no easy way of guaranteeing resources to a particular thread.
- Allow a mixture of bound and unbound threads in the same process. This allows the application to fully exploit the concurrency and parallelism of the system, and also allows preferential handling of a bound thread by increasing its scheduling priority.
User Thread Implementation
Each user thread must maintain the following state information:
- Thread ID
- Saved register state
- User stack
- Signal mask
- Priority
- Thread-local storage
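To make this concrete, a user-level threads library might gather this state into a per-thread control block. The following is a minimal sketch, not any particular library's layout; the field names and the use of ucontext_t for the saved registers are our assumptions:

    #include <signal.h>    /* sigset_t */
    #include <stddef.h>    /* size_t */
    #include <ucontext.h>  /* ucontext_t: saved register state */

    /* Hypothetical per-thread control block for a user-level
       threads library; one instance per user thread. */
    typedef struct uthread {
        int             tid;        /* thread ID                         */
        ucontext_t      ctx;        /* saved register state              */
        char           *stack;      /* base of the private user stack    */
        size_t          stacksize;  /* size of that stack                */
        sigset_t        sigmask;    /* per-thread signal mask            */
        int             priority;   /* library-level priority            */
        void          **tls;        /* thread-local storage slots        */
        struct uthread *next;       /* link on a run or blocked queue    */
    } uthread_t;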
PROCESS SCHEDULING
- Introduction
- Clock interrupt handling
- Scheduler goals
- Traditional UNIX scheduling
- Processor affinity on AIX
Introduction
Like memory and terminals, the CPU is a shared resource for which processes contend. The scheduler is the component of the OS that determines which process to run at any given time, and for how long. AIX (UNIX) is essentially a time-sharing system: it allows several processes to run concurrently (an illusion on a uniprocessor machine). A scheduler has two aspects:
1. Scheduling policy
2. Implementation (data structures and algorithms)
A context switch is an expensive operation.
Every machine has a hardware clock, which interrupts the system at fixed intervals. The time interval between successive clock interrupts is called a CPU tick; UNIX typically sets the tick at 10 milliseconds. The clock interrupt handler runs in response to the hardware clock interrupt and performs the following tasks:
- Rearms the hardware clock if necessary
- Updates CPU usage statistics
- Performs priority recomputation and time-slice expiration handling
- Sends a SIGXCPU signal to the current process if it has exceeded its CPU usage quota
- Updates the time-of-day clock and other related clocks
- Handles callouts
- Wakes up the swapper and pagedaemon when appropriate
- Handles alarms
Callouts
A callout records a function that the kernel must invoke at a later time. To register a callout:
    int to_ID = timeout (void (*fn)(), caddr_t arg, long delta);
where fn() is the kernel function to invoke, arg is an argument to pass to fn(), and delta is the interval (in ticks) after which to call it. To cancel a callout:
    void untimeout (int to_ID);
On every tick, the clock handler checks if any callouts are due.
Alarms
A process can request that the kernel send it a signal after a specified amount of time, much like an alarm clock. There are three types of alarms:
- Real-time: relates to actual elapsed time and notifies the process via SIGALRM
- Profiling: measures the amount of time the process has been executing and notifies the process via SIGPROF
- Virtual-time: monitors only the time spent by the process in user mode and sends SIGVTALRM
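These three alarm types correspond to the standard interval timers that user programs arm with setitimer. A minimal sketch (the handler name is ours; only the real-time timer will fire while the process sleeps in pause):

    #include <signal.h>
    #include <sys/time.h>
    #include <unistd.h>

    static void on_alarm(int sig) { (void)sig; /* async-signal-safe work only */ }

    int main(void)
    {
        struct sigaction sa = { .sa_handler = on_alarm };
        sigaction(SIGALRM,   &sa, NULL);   /* real-time alarm    */
        sigaction(SIGVTALRM, &sa, NULL);   /* virtual-time alarm */
        sigaction(SIGPROF,   &sa, NULL);   /* profiling alarm    */

        struct itimerval it = { .it_value = { .tv_sec = 1 } };
        setitimer(ITIMER_REAL,    &it, NULL);  /* elapsed wall-clock time */
        setitimer(ITIMER_VIRTUAL, &it, NULL);  /* user-mode CPU time only */
        setitimer(ITIMER_PROF,    &it, NULL);  /* user + system CPU time  */

        pause();  /* SIGALRM arrives after 1 s of real time */
        return 0;
    }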
Scheduler Goals
The scheduler must judiciously apportion CPU time among all processes in the system and ensure that the system delivers acceptable performance to each application. Applications can be loosely categorized into the following classes, based on their scheduling requirements and performance expectations:
Interactive -- Applications such as shells, editors, and programs with graphical user interfaces Batch -- Activities such as software builds and scientific computations do not require user interaction and are often submitted as background jobs. Real-time -- This is a catchall class of applications that are often time-critical.
The scheduler must try to balance the needs of each. It must also ensure that kernel functions such as paging, interrupt handling, and process management can execute promptly when required. In a well-behaved system, all applications must continue to progress. No application should be able to prevent others from progressing, unless the user has explicitly permitted it. The choice of scheduling policy has a profound effect on the system's ability to meet the requirements of different types of applications.
Process Priorities
Priority may be any integer value between 0 and 127; numerically lower values correspond to higher priorities. Priorities between 0 and 49 are reserved for the kernel, while processes in user mode have priorities between 50 and 127. The proc structure contains the following priority-related fields:
    p_pri     Current scheduling priority
    p_usrpri  User mode priority
    p_cpu     Measure of recent CPU usage
    p_nice    User-controllable nice factor
When a process completes a system call and is about to return to user mode, its scheduling priority is reset to its current user mode priority. The user mode priority depends on two factors: the nice value and the recent CPU usage. The nice value is a number between 0 and 39 with a default of 20; increasing this value decreases the priority. Background processes are automatically given higher nice values. Only the superuser can decrease the nice value of a process. Every second, the kernel invokes a routine called schedcpu() (scheduled by a callout) that reduces the p_cpu value of each process by a decay factor:

    decay = (2 * load_average) / (2 * load_average + 1);

The schedcpu() routine also recomputes the user priorities of all processes using the formula

    p_usrpri = PUSER + (p_cpu / 4) + (2 * p_nice);
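A small user-space sketch of this decay and recomputation (PUSER = 50 is the traditional base for user-mode priorities; the clamp to 127 is our addition):

    #include <stdio.h>

    #define PUSER 50   /* base priority for user mode */

    /* Decay recent CPU usage once per second, as schedcpu() does,
       then recompute the user-mode priority from the formula above. */
    int recompute(int *p_cpu, int p_nice, double load_average)
    {
        double decay = (2.0 * load_average) / (2.0 * load_average + 1.0);
        *p_cpu = (int)(*p_cpu * decay);
        int p_usrpri = PUSER + (*p_cpu / 4) + (2 * p_nice);
        if (p_usrpri > 127)
            p_usrpri = 127;   /* clamp to the lowest user priority */
        return p_usrpri;
    }

    int main(void)
    {
        int p_cpu = 80;                        /* ticks charged recently */
        int pri = recompute(&p_cpu, 20, 1.0);  /* default nice, load 1.0 */
        printf("p_cpu=%d p_usrpri=%d\n", p_cpu, pri);
        return 0;
    }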
Scheduler Implementation
The scheduler maintains an array called qs of 32 run queues; each queue corresponds to four adjacent priorities. A global variable whichqs contains a bitmask with one bit for each queue, a set bit indicating a nonempty queue. The swtch() routine, which performs the context switch, examines whichqs to find the index of the first set bit; that queue holds the highest-priority runnable processes.

Every 100 milliseconds, the kernel invokes (through a callout) a routine called roundrobin() to schedule the next process from the same queue. The schedcpu() routine recomputes the priority of each process once every second. There are three situations in which a context switch is indicated:
- The current process blocks on a resource or exits; this is a voluntary context switch.
- The priority recomputation procedure results in the priority of another process becoming greater than that of the current one.
- The current process, or an interrupt handler, wakes up a higher-priority process.
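As an illustration of the queue selection, the sketch below shows how a priority maps to one of the 32 queues and how the first set bit of whichqs is located. Real kernels do the bit scan with a find-first-set machine instruction; the helper names here are ours:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t whichqs;  /* bit i set => run queue qs[i] is nonempty */

    /* A priority of 0..127 selects one of 32 queues of 4 adjacent priorities. */
    static int queue_of(int p_pri) { return p_pri >> 2; }

    static void setrq(int p_pri)   { whichqs |= 1u << queue_of(p_pri); }

    /* Index of the first (highest-priority) nonempty queue, or -1 if idle. */
    static int first_set_bit(void)
    {
        for (int i = 0; i < 32; i++)
            if (whichqs & (1u << i))
                return i;
        return -1;
    }

    int main(void)
    {
        setrq(60);   /* a process at priority 60 goes on queue 15  */
        setrq(100);  /* a nicer process at priority 100, queue 25  */
        printf("swtch() would pick queue %d\n", first_set_bit());
        return 0;
    }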
Binding a process to available CPUs on AIX: use either the bindprocessor command or the bindprocessor API. The bindprocessor command binds or unbinds the kernel threads of a process to a processor. Its syntax is:
    bindprocessor Process [ ProcessorNum ] | -q | -u Process
The bindprocessor subroutine provides the same facility to programs.
A process also provides a common address space and common system resources:
- File descriptors
- Signal actions
- Shared libraries
- Inter-process communication tools (such as message queues, pipes, semaphores, or shared memory)

Thread Properties
A thread is the schedulable entity. Its properties include:
- Stack
- Scheduling properties (such as policy or priority)
- Set of pending and blocked signals
- Some thread-specific data (like errno)

All threads share the same address space. When a process is created, one thread is automatically created; this initial thread is not visible to the programmer. Threads are well-suited entities for modular programming.
User threads are mapped to kernel threads by the threads library. There are three ways to map user threads to kernel threads: the M:1 model, the 1:1 model, and the M:N model.
The thread-safe libraries in AIX are: libbsd.a, libc.a, libm.a, libsvid.a, libtli.a, libxti.a, and libnetsvc.a.
Creating Threads
No parent-child relationship exists between threads. When creating a thread, an entry-point routine and an argument must be specified. A thread has attributes, which specify the characteristics of the thread. A thread is created by calling the pthread_create subroutine, which returns the thread ID of the new thread. The current thread's ID is returned by the pthread_self subroutine. A thread ID is an opaque object; its type is pthread_t (an integer in AIX). When calling pthread_create, you may specify a thread attributes object; if you specify a NULL pointer, the created thread has the default attributes.
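A minimal sketch of creating a thread with the default attributes (the entry-point routine and its message are ours):

    #include <pthread.h>
    #include <stdio.h>

    /* Entry-point routine: receives the argument given to pthread_create. */
    static void *hello(void *arg)
    {
        /* pthread_t is an integer on AIX, so the cast is printable there */
        printf("thread %lu says: %s\n",
               (unsigned long)pthread_self(), (const char *)arg);
        return NULL;  /* returning terminates the thread */
    }

    int main(void)
    {
        pthread_t tid;
        /* NULL attributes pointer => default attributes */
        if (pthread_create(&tid, NULL, hello, "hi from a new thread") != 0)
            return 1;
        pthread_join(tid, NULL);  /* wait for the thread to terminate */
        return 0;
    }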
Terminating Threads
A thread automatically terminates when it returns from its entry-point routine. A thread can exit at any time by calling the pthread_exit subroutine. Cancellation of a thread is requested by calling the pthread_cancel subroutine. Cleanup handlers can be registered (with pthread_cleanup_push and pthread_cleanup_pop) to release resources when a thread exits or is canceled.
Using Mutexes
A mutex is a mutual exclusion lock: only one thread can hold the lock at a time. Mutex attributes specify the characteristics of the mutex. Like threads, mutexes are created with the help of an attributes object, which can be accessed through a variable of type pthread_mutexattr_t.
Types of Mutexes
The type of a mutex determines how it behaves when operated on. The types are:
- PTHREAD_MUTEX_DEFAULT or PTHREAD_MUTEX_NORMAL
- PTHREAD_MUTEX_ERRORCHECK
- PTHREAD_MUTEX_RECURSIVE
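A short sketch of the typical use of a mutex, protecting a shared counter; the counter, loop count, and thread count are illustrative:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* default type */
    static long counter;  /* shared data protected by 'lock' */

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* only one holder at a time */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];
        for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        printf("counter = %ld\n", counter);  /* 400000, never corrupted */
        return 0;
    }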
Joining Threads
Joining a thread means waiting for it to terminate. The pthread_join subroutine allows a thread to wait for another thread to terminate. A thread cannot join itself: the resulting deadlock is detected by the library. The pthread_join subroutine also allows a thread to return information to another thread. Any call to pthread_join occurring before the target thread's termination blocks the calling thread. What happens when two threads try to join each other? Each would block waiting for the other to terminate, producing a deadlock.
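A sketch of returning a result through pthread_join; the computation and names are illustrative:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void *square(void *arg)
    {
        long n = (long)arg;
        long *result = malloc(sizeof *result);  /* must outlive the thread */
        *result = n * n;
        return result;  /* handed back through pthread_join */
    }

    int main(void)
    {
        pthread_t tid;
        void *ret;
        pthread_create(&tid, NULL, square, (void *)7L);
        pthread_join(tid, &ret);  /* blocks until 'square' returns */
        printf("7^2 = %ld\n", *(long *)ret);
        free(ret);
        return 0;
    }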
Scheduling Threads
Threads can be scheduled. The threads library allows the programmer to control the execution scheduling of threads in the following ways:
- By setting scheduling attributes when creating a thread
- By dynamically changing the scheduling attributes of a created thread
- By defining the effect of a mutex on the thread's scheduling when creating a mutex (known as synchronization scheduling)
- By dynamically changing the scheduling of a thread during synchronization operations (also known as synchronization scheduling)
Thread-specific data routines
- pthread_key_create
- pthread_key_delete
- pthread_getspecific
- pthread_setspecific
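A sketch of per-thread data built on these routines; the key name and the values stored are ours:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_key_t name_key;  /* one key, a distinct value per thread */

    static void *worker(void *arg)
    {
        pthread_setspecific(name_key, arg);  /* visible only to this thread */
        printf("my name is %s\n",
               (const char *)pthread_getspecific(name_key));
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_key_create(&name_key, NULL);  /* NULL: no destructor */
        pthread_create(&t1, NULL, worker, "thread-A");
        pthread_create(&t2, NULL, worker, "thread-B");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        pthread_key_delete(name_key);
        return 0;
    }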
Signal management
Signal management in multithreaded processes is shared between the process and thread levels, and consists of the following:
- Per-process signal handlers
- Per-thread signal masks
- Single delivery of each signal

Signal handlers are maintained at the process level; signal masks are maintained at the thread level. Each thread can have its own set of signals that will be blocked from delivery. On AIX, the sigthreadmask subroutine must be used to get and set the calling thread's signal mask.
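On AIX the per-thread mask is set with sigthreadmask; the portable POSIX equivalent is pthread_sigmask, used in this sketch to block SIGINT in one thread only:

    #include <pthread.h>
    #include <signal.h>

    /* Block SIGINT for the calling thread only; other threads in the
       process can still have it delivered to them. */
    static void *worker(void *arg)
    {
        (void)arg;
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGINT);
        pthread_sigmask(SIG_BLOCK, &set, NULL);  /* per-thread mask */
        /* ... do work with SIGINT held off ... */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        pthread_join(tid, NULL);
        return 0;
    }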
Signal Generation
- The pthread_kill subroutine sends a signal to a thread.
- The kill subroutine sends a signal to a process.
- The raise subroutine sends a signal to the calling thread.
- The alarm subroutine requests that a signal be sent to the process later.

Signal handlers are called within the thread to which the signal is delivered. No pthread routines should be called from a signal handler; doing so can lead to an application deadlock. A signal is delivered to a thread unless its action is set to ignore.
A reentrant function does not hold static data over successive calls, nor does it return a pointer to static data. A thread-safe function protects shared resources from concurrent access with locks. How do you make a function reentrant? How do you make a function thread-safe?
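A sketch contrasting the two fixes; the function names are ours. The reentrant version keeps no static state, while the thread-safe version keeps shared state but serializes access with a lock:

    #include <pthread.h>
    #include <string.h>

    /* NOT reentrant: holds static data and returns a pointer to it. */
    char *copy_unsafe(const char *s)
    {
        static char buf[64];              /* shared by every caller  */
        strncpy(buf, s, sizeof buf - 1);
        return buf;                       /* next call overwrites it */
    }

    /* Reentrant: the caller supplies the storage; no static state. */
    char *copy_r(const char *s, char *buf, size_t len)
    {
        strncpy(buf, s, len - 1);
        buf[len - 1] = '\0';
        return buf;
    }

    /* Thread-safe: shared state remains, but a lock serializes access. */
    static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
    static long call_count;

    void count_call(void)
    {
        pthread_mutex_lock(&count_lock);
        call_count++;                     /* protected shared resource */
        pthread_mutex_unlock(&count_lock);
    }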
Several subcommands support multiple kernel threads and processors, including:
- cpu: changes the current processor
- ppd: displays per-processor data structures
- thread: displays thread table entries
- uthread: displays the uthread structure of a thread
Core File Requirements of a Multi-Threaded Program
By default, processes do not generate a full core file. If an application must debug data in shared memory regions, particularly thread stacks, it is necessary to generate a full core dump. To generate full core file information, run the following command as the root user:
    chdev -l sys0 -a fullcore=true
Benefits of Threads
- Improved performance on multiprocessor systems: multiple threads can run concurrently on multiple CPUs.
- Inter-thread communication is far more efficient and easier to use than inter-process communication.
- Creating threads and controlling their execution requires fewer system resources than managing processes.
References:
http://publib.boulder.ibm.com/infocenter/aix/v6r1/index.jsp?topic=/com.ibm.aix.genprogc/doc/genprogc/understanding_threads.htm
Unit II: IPC, Filesystems & Memory Management
- IPC
- File Systems
- Virtual Memory
- File Systems in AIX
Signals
Signals notify a process of asynchronous events. Modern UNIX systems recognize 31 or more different signals, each with a predefined meaning. A process may send signals to another process (or group of processes) using the kill or killpg system calls. The kernel also generates signals in response to various events (e.g., SIGINT for a Ctrl+C keystroke). Each signal has a default action, which can be overridden; a short example follows the limitations below.
Uses:
- Synchronization: many applications have developed resource-sharing and locking protocols based on signals.

Limitations:
- Expensive: the sender must make a system call, and the kernel must interrupt the receiver, manipulate its stack, and resume the interrupted code.
- Limited bandwidth: only 31 different signals exist, so they can convey only limited information.
- Useful for event notification, not for complicated interactions.
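A minimal sketch of overriding the default action and sending a signal; the handler and flag names are ours:

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_int;

    static void on_int(int sig) { (void)sig; got_int = 1; }

    int main(void)
    {
        struct sigaction sa = { .sa_handler = on_int };
        sigaction(SIGINT, &sa, NULL);   /* override the default action */

        kill(getpid(), SIGINT);         /* send ourselves a signal */
        while (!got_int)
            pause();
        printf("caught SIGINT\n");
        return 0;
    }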
Pipes
A pipe is a unidirectional, FIFO, unstructured data stream of fixed maximum size. Writers add data at the end of the pipe; readers retrieve data from the front, and data is removed once read. The pipe system call creates a pipe and returns two file descriptors, one for reading and one for writing. Each pipe can have several readers and writers, and a process may be a reader, a writer, or both. I/O on a pipe is much like I/O on a file: reading and writing are achieved through read and write system calls on the pipe's descriptors.
Limitations:
- A pipe cannot be used to broadcast data, since reading removes the data from the pipe.
- Data in a pipe is a byte stream; if the writer sends several objects of different lengths, the reader cannot determine their boundaries.
- If there are multiple readers, a writer cannot direct data to a specific reader, and vice versa.
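A sketch of the pipe system call in use; note that the data arrives as an undifferentiated byte stream:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];               /* fd[0]: read end, fd[1]: write end */
        char buf[32];

        if (pipe(fd) == -1)
            return 1;

        if (fork() == 0) {       /* child: the writer */
            close(fd[0]);
            write(fd[1], "hello", 6);
            _exit(0);
        }

        close(fd[1]);            /* parent: the reader */
        ssize_t n = read(fd[0], buf, sizeof buf);  /* no message boundaries */
        printf("read %zd bytes: %s\n", n, buf);
        return 0;
    }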
Process tracing
The ptrace system call, used by debuggers such as sdb and dbx, lets a process control the execution of a child process:
    ptrace (cmd, pid, addr, data);
The cmd argument allows the following operations:
- Read or write a word in the child's address space, uarea, or general-purpose registers
- Intercept specific signals
- Set or delete watchpoints in the child's address space
- Resume the execution of a stopped child
- Single-step the child
- Terminate the child
The kernel sets the child's traced flag (in its proc structure), which affects how the child responds to signals.

Limitations: only a direct child can be controlled; tracing is inefficient, requiring several context switches; and tracing setuid programs raises security problems.
System V IPC
System V UNIX provides three IPC mechanisms: semaphores, message queues, and shared memory. Each instance of an IPC resource has the following attributes:
- Key: identifies the instance of the resource
- Creator: UID & GID
- Owner: UID & GID
- Permissions
A process acquires a resource using shmget/semget/msgget and controls the acquired resource using shmctl/semctl/msgctl.
Semaphores
Semaphores are integer-valued objects that support two atomic operations, P() and V(). P() decrements the value and blocks if the result is less than zero. V() increments the value; if the result is >= 0, it wakes up a waiting process or thread. These operations are atomic. Careless use of semaphores can cause deadlocks.
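P() and V() correspond to sem_wait and sem_post in the POSIX semaphore interface; a sketch limiting concurrent access to two resource slots (the names and counts are ours):

    #include <pthread.h>
    #include <semaphore.h>

    static sem_t slots;  /* counts available resources */

    static void *user(void *arg)
    {
        (void)arg;
        sem_wait(&slots);   /* P(): decrement, block if none available */
        /* ... use the resource ... */
        sem_post(&slots);   /* V(): increment, wake a waiter if any */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[3];
        sem_init(&slots, 0, 2);  /* two resource slots, shared by threads */
        for (int i = 0; i < 3; i++) pthread_create(&t[i], NULL, user, NULL);
        for (int i = 0; i < 3; i++) pthread_join(t[i], NULL);
        sem_destroy(&slots);
        return 0;
    }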
Message Queues
A message queue is a header pointing at a linked list of messages. A message consists of a 32-bit type value plus a data area. The following functions are used:
    msgqid = msgget(key, flag);
    msgsnd(msgqid, msgp, count, flag);
    count = msgrcv(msgqid, msgp, maxcnt, msgtype, flag);
A message queue is similar to a pipe but more versatile, and it addresses several limitations of pipes: it transmits data as discrete messages rather than as an unformatted byte stream. Message queues are effective for small amounts of data; each transfer requires two copy operations, however, which results in poor performance for large transfers.
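A sketch of these calls in use; the message structure must begin with a long type field, and the structure name and text are illustrative:

    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    /* Layout required by msgsnd/msgrcv: a long type, then the data. */
    struct msgbuf_ex { long mtype; char mtext[64]; };

    int main(void)
    {
        int msgqid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
        struct msgbuf_ex m = { .mtype = 1 };
        strcpy(m.mtext, "discrete message, not a byte stream");

        msgsnd(msgqid, &m, strlen(m.mtext) + 1, 0);   /* copy in  */
        msgrcv(msgqid, &m, sizeof m.mtext, 1, 0);     /* copy out */
        printf("received: %s\n", m.mtext);

        msgctl(msgqid, IPC_RMID, NULL);               /* remove queue */
        return 0;
    }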
Shared memory
Shared memory is a portion of physical memory shared by multiple processes. Each process may attach the region at any suitable virtual address range in its address space. Functions:
    shmid = shmget (key, size, flag);
    addr = shmat (shmid, shmaddr, shmflag);
    shmdt (shmaddr);
Most modern UNIX variants also provide the mmap system call, which maps a file (or part of a file) into the address space of the caller.
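A sketch of the shared-memory calls in use; a parent and child communicate through a private segment (error checks omitted for brevity):

    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Create a private 4 KB segment; IPC_PRIVATE needs no key. */
        int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
        char *addr = shmat(shmid, NULL, 0);   /* kernel picks the address */

        if (fork() == 0) {                    /* child writes */
            strcpy(addr, "hello via shared memory");
            _exit(0);
        }
        wait(NULL);                           /* parent reads after child exits */
        printf("%s\n", addr);

        shmdt(addr);                          /* detach */
        shmctl(shmid, IPC_RMID, NULL);        /* remove the segment */
        return 0;
    }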
File Systems
- The User Interface to Files
- File Systems
- Special Files
- File System Framework
- The Vnode/VFS Architecture
- Implementation Overview
- Network File System

The User Interface to Files
- Allows the user to organise, manipulate, and access different files
- Files and directories: links, file organisation, the dirent structure
- File attributes: type, size, inode number, link count, device ID, user ID, group ID, sticky flag
- File descriptors: file offset, open()
- File I/O: read and write calls
- File locking: advisory and mandatory locks
File Systems
- Root file system
- Mounting
- File hierarchy
- Logical disks: disk mirroring, striping
Special Files
- Terminals and printers
- Symbolic links; soft and hard links
- Pipes and FIFOs: implementation
File System Framework
Objectives
- Lessons from device I/O: the cdevsw structure
- Relationship between a base class and its subclass
- The vnode abstraction
- The vfs abstraction
Implementation Overview
Objectives
- Vnodes and open files: file-system-dependent data structures, file object fields
- The vnode structure; the vnode reference count
- The vfs object: the vfs structure, and the relationship between vnode and vfs
JFS Fragments -Many file systems have disk blocks or data blocks. -blocks divide the disk into units of equal size to store the data in a file or directory's logical blocks. -The disk block may be further divided into fixed-size allocation units called fragments. -JFS provides a view of the file system as a contiguous series of fragments. -JFS fragments are the basic allocation unit and the disk is addressed at the fragment level.
JFS Allocation Groups -The set of fragments making up the file system is divided into one or more fixed-size units of contiguous fragments; each unit is an allocation group. -The first 4096 bytes of this area hold the boot block, and the second 4096 bytes hold the file system superblock. -Disk i-nodes are 128 bytes in size and are identified by a unique disk i-node number, or i-number. -The i-number maps a disk i-node to its location on the disk, or to an i-node within its allocation group. -Allocation groups allow the JFS resource allocation policies to use effective methods for achieving good file system I/O performance.
JFS Disk i-Nodes -Each file and directory has an i-node that contains access information such as file type, access permissions, owner's ID, and number of links to that file. -These i-nodes also contain "addresses" for finding the location on the disk where the data for a logical block is stored. -Each i-node has an array of numbered sections. - Each section contains an address for one of the file or directory's logical blocks.
Mounting
-Mounting makes file systems, files, directories, devices, and special files available for use. -The mount command instructs the operating system to attach a file system at a specified directory. -Write permission is needed for the mount point. -A user with root authority can mount a file system arbitrarily by naming both the device and the directory on the command line. -The /etc/filesystems file is used to define mounts to be automatic at system initialization. Mount points -A mount point is a directory or file at which a new file system, directory, or file is made accessible. -To mount a file system or a directory, the mount point must be a directory; to mount a file, the mount point must be a file. Mounting file systems, directories, and files -There are two types of mounts: remote and local. -Remote mounts are done on a remote system, with data transmitted over a communication line (e.g., NFS). -Local mounts are mounts done on your local system. -Mounts can be set to occur automatically during system initialization. -Diskless workstations must be able to create and access device-special files on remote machines to have their /dev directories mounted from a server.
System management commands that help manage file systems: backup, chfs, df, fsck, mkfs, mount, restore, snapshot.
- Displaying available space on a file system (df command)
- Comparing file systems on different machines
Virtual Memory
- Introduction
- Demand paging
- Hardware requirements: IBM RS/6000, Intel 80x86
- AIX program address space: overview
Introduction
Virtual memory is implemented with the help of the Memory Management Unit (MMU). Goals to achieve:
- Run programs larger than physical memory.
- Run partially loaded programs, thus reducing program startup time.
- Allow more than one program to reside in memory at one time.
- Allow relocatable programs, which may be placed anywhere in memory.
- Write machine-independent code: there should be no a priori correspondence between the program and the physical memory configuration.
- Relieve programmers of the burden of allocating and managing memory resources.
- Allow sharing, for example, of shared code.
These goals are realized through the use of virtual memory The application is given the illusion that it has a large main memory at its disposal, although the computer may have a relatively small memory. The translation tables and other data structures used for memory management reduce the physical memory available to programs. The usable memory is further reduced by fragmentation.
Demand Paging
Functional Requirements
- Address space management
- Address translation
- Physical memory management
- Memory protection
- Memory sharing
- Monitoring system load
- Other facilities
Translation Maps
The paging system may use four different types of translation maps to implement virtual memory:
- Hardware address translations
- Address space map
- Physical memory map
- Backing store map
Tools are available to assist in allocating memory, mapping memory and files, and profiling application memory usage.
Paging Space
A page is a unit of virtual memory that holds 4 KB of data and can be transferred between real and auxiliary storage. To accommodate a large virtual memory space with limited real memory, the system uses real memory as a work space and keeps inactive data and programs on disk. The area of disk that contains this data is called the paging space.
Memory Allocation
Version 3 of the operating system uses a delayed paging slot technique for storage allocated to applications. This means that when storage is allocated to an application with a subroutine such as malloc, no paging space is assigned to that storage until the storage is referenced.
Q&A
Virtual addresses (or logical addresses) are addresses provided by the OS to processes. There is one virtual address space per process; addresses typically start at zero, but not necessarily, and the space may consist of several segments. Address translation (or address binding) means mapping virtual addresses to physical addresses.
Zombie state
A zombie is a process that has completed execution but still has an entry in the process table. When a process ends, all of the memory and resources associated with it are deallocated so they can be used by other processes, but the process's entry in the process table remains. The parent can read the child's exit status by executing the wait system call, at which stage the zombie is removed; its process ID and process-table entry can then be reused. If the parent fails to call wait, the zombie is left in the process table. In some situations this may be desirable: for example, if the parent creates another child process, the lingering entry ensures that the new child will not be allocated the same process ID.
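A sketch of reaping a child with wait so that it does not linger as a zombie:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0)
            _exit(42);                 /* child terminates: becomes a zombie */

        int status;
        waitpid(pid, &status, 0);      /* parent reaps: zombie entry removed */
        if (WIFEXITED(status))
            printf("child %d exited with status %d\n",
                   (int)pid, WEXITSTATUS(status));
        return 0;
    }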