
A program is a passive entity; a process is an active entity. A program becomes a process when its executable is loaded into memory.

Process states: new, running, waiting, ready, terminated.

PCB (process control block): process state, program counter, CPU registers, scheduling information, memory-management information, accounting information, I/O status information.

Scheduling queues of processes: job queue, ready queue, device queues; processes migrate among them.

IPC (interprocess communication): shared memory or message passing. Blocking = synchronous: sender is blocked until the message is received; receiver is blocked until a message is available. Non-blocking = asynchronous: sender sends the message and continues; receiver receives either a message or null.

User-level threads: implemented by a user-level threads library; the kernel is oblivious to thread existence; scheduling is done at user level. Advantages: can be implemented without kernel support; faster context switch. Disadvantage: a single thread can block the entire process. Kernel threads: supported by the kernel, which schedules threads like processes; less user-level code. Thread pools: create a number of threads in a pool where they await work. Signals: used in UNIX to notify a process that a particular event has occurred; processed by a signal handler.

CPU scheduling: selects from the processes in the ready queue and allocates the CPU to one of them. Nonpreemptive: scheduling occurs only when a process switches from running to waiting, or terminates. Preemptive: scheduling also occurs when a process switches from running to ready, or from waiting to ready; must consider access to shared data, preemption while in kernel mode, and interrupts during crucial OS activities.

Dispatcher: the module that gives control of the CPU to the scheduled process; this involves switching context, switching to user mode, and jumping to the proper location in the user program to restart it.

Scheduling criteria: CPU utilization (keep the CPU as busy as possible); throughput (# of processes that complete their execution per time unit); turnaround time (amount of time to execute a particular process); waiting time (amount of time a process has been waiting in the ready queue); response time (time from when a request was submitted until the first response is produced).

FIFO: serve processes in arrival order. SJF is optimal: it gives the minimum average waiting time for a given set of processes.

Requirements for a solution to the critical-section problem: 1. mutual exclusion: if a process is in its critical section, then no other processes are in their critical sections. 2. progress: if no process is in its critical section and some processes wish to enter their critical sections, then the selection of the process to enter next cannot be postponed indefinitely. 3. bounded waiting: there is a bound on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Peterson's solution (process i): flag[i] = TRUE; turn = j; while (flag[j] && turn == j) {} <critical section> flag[i] = FALSE;
Test-and-set: while (TestAndSet(&lock)) {} <critical section> lock = FALSE;
Swap: key = TRUE; while (key == TRUE) { Swap(&lock, &key); } <critical section> lock = FALSE;

Semaphore: two operations modify S, wait() and signal(), originally called P() and V(). Counting semaphore: integer value can range over an unrestricted domain. Binary semaphore: integer value can range only between 0 and 1 (a mutex lock). With an associated waiting queue: block places the process invoking the operation on the appropriate waiting queue; wakeup removes one of the processes in the waiting queue and places it in the ready queue.
wait(semaphore *S) { S->value--; if (S->value < 0) { add this process to S->list; block(); } }
signal(semaphore *S) { S->value++; if (S->value <= 0) { remove a process P from S->list; wakeup(P); } }

Deadlock: processes waiting indefinitely for an event that can be caused only by one of the waiting processes. Starvation (indefinite blocking): a process may never be removed from the semaphore queue.

Deadlock requires: mutual exclusion (only one process at a time can use a resource); hold and wait (a process holding at least one resource is waiting to acquire additional resources held by other processes); no preemption (a resource can be released only voluntarily); circular wait. If the resource-allocation graph has no cycle -> no deadlock; if it has a cycle with one instance per resource type -> deadlock, else a possibility of deadlock.

Deadlock handling: avoidance (ensure that the system will never enter a deadlock state); recovery (allow the system to enter a deadlock state and then recover); ignorance (ignore the problem and pretend that deadlocks never occur in the system). A system is in a safe state if there exists a sequence <P1, P2, ..., Pn> of ALL the processes in the system such that for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all Pj with j < i.

Disks: access latency = average access time = average seek time + average rotational latency. Average I/O time = average access time + (amount to transfer / transfer rate) + controller overhead. Low-level (physical) formatting: dividing a disk into sectors that the disk controller can read and write. Partitioning: dividing the disk into one or more groups of cylinders, each treated as a logical disk.

Shortest Seek Time First (SSTF): selects the request with minimum seek time from the current head position; a form of SJF scheduling; may cause starvation of some requests. SCAN: the disk arm starts at one end of the disk and moves toward the other end, servicing requests until it gets to the other end, where the head movement is reversed and servicing continues. C-SCAN: the head moves from one end of the disk to the other, servicing requests as it goes; when it reaches the end, it immediately returns to the beginning of the disk without servicing requests on the return trip.

File attributes: name, identifier, type, location, size, protection (mode), time, user identification. Links: a hard link points directly to the file data; a soft link points to the file name (a new directory entry type). Boot control block: info needed to boot the OS from that volume. Volume control block (VCB): total # of blocks, # of free blocks, block size, free-block pointers or array. Directory structure: organizes the files; names and inode numbers, or a master file table.

Priority scheduling problem: starvation (low-priority processes may never execute). Solution: aging (as time progresses, increase the priority of the process). Round robin (RR): higher average turnaround than SJF, but better response; if the quantum q is large -> behaves like FIFO; q should be large compared to the context-switch time. Multilevel queue: e.g. foreground RR, background FCFS. Multilevel feedback queue: a process can move between the various queues; aging can be implemented this way; defined by the number of queues, the scheduling algorithm for each queue, when to upgrade a process, when to demote a process, and which queue a process enters when it needs service. Asymmetric multiprocessing: only one processor accesses the system data structures, so no need for data sharing. Symmetric multiprocessing (SMP): each processor is self-scheduling; all processes in a common ready queue, or each processor has its own private queue of ready processes.

Per-file File Control Block (FCB) contains many details about the file: inode number, permissions, size, dates. NTFS stores this info in the master file table, using relational-database-like structures.

Contiguous allocation: each file occupies a set of contiguous blocks; best performance in most cases and simple, but problems include finding space for a file, knowing the file size in advance, external fragmentation, and the need for compaction. Indexed allocation: each file has its own index block(s) of pointers to its data blocks. Free-space list to track available blocks/clusters: bit vector or bit map (n blocks). Log-structured (or journaling) file systems record each metadata update to the file system as a transaction.

Address binding of instructions and data to memory addresses can happen at three different stages. Compile time: if the memory location is known a priori, absolute code can be generated; must recompile if the location changes. Load time: must generate relocatable code if the memory location is not known at compile time. Execution time: binding is delayed until run time if the process can be moved during its execution; needs hardware support for address maps (e.g., base and limit registers).

First-fit: allocate the first slot that is big enough. Best-fit: allocate the smallest slot that is big enough; must search the entire list, unless it is ordered by size. Worst-fit: allocate the largest slot; must also search the entire list; produces the largest leftover slot. External fragmentation: total memory space exists to satisfy a request, but it is not contiguous. Internal fragmentation: allocated memory may be slightly larger than the requested memory.

Frames: fixed-sized blocks of physical memory; size is a power of 2, between 512 bytes and 16 MB. Pages: divide logical memory into blocks of the same size as frames. The address generated by the CPU is divided into a page number (m bits) and a page offset (n bits), giving a logical address space of 2^(m+n) with page size 2^n. The page number is used as an index into the page table, which contains the base address of each page in physical memory; the page offset is combined with that base address to define the physical memory address sent to the memory unit.

Second-chance algorithm: FIFO, plus a hardware-provided reference bit. If the page to be replaced has reference bit = 0, replace it; if the reference bit = 1, set the reference bit to 0, leave the page in memory, and consider the next page, subject to the same rules.

Page fault rate p, 0 <= p <= 1: if p = 0, no page faults; if p = 1, every reference is a fault. Effective access time (EAT) = (1 - p) * memory access + p * (page-fault overhead + swap page out + swap page in + restart overhead).

Copy-on-write (COW) allows both parent and child processes to initially share the same pages. Thrashing: a process is busy swapping pages in and out.

Associative memory / translation look-aside buffer (TLB): a special fast-lookup hardware cache. With associative lookup time ε and hit ratio α (the percentage of times a page number is found in the associative registers), measuring one memory access as 1 time unit: effective access time (EAT) = (1 + ε)α + (2 + ε)(1 - α) = 2 + ε - α.

Valid-invalid bit attached to each entry in the page table: valid indicates that the associated page is in the process's logical address space, and is thus a legal page; invalid indicates that the page is not in the process's logical address space.

Segment table maps two-dimensional user-defined addresses into one-dimensional physical addresses; each table entry has a base (the starting physical address where the segment resides in memory) and a limit (specifies the length of the segment).

Virtual memory: separation of user logical memory from physical memory. A page is needed -> reference it; invalid reference -> abort; not in memory -> bring to memory. Lazy swapper: never swaps a page into memory unless the page will be needed. A valid-invalid bit is associated with each page-table entry (v -> in memory, memory-resident; i -> not in memory); during address translation, if the valid-invalid bit in the page-table entry is i -> page fault.

Modify (dirty) bit reduces the overhead of page transfers: only modified pages are written back to disk. Reference bit: associate a bit with each page, initially = 0; when the page is referenced, the bit is set to 1; replace any page with reference bit = 0 (if one exists).