
1 Write a detailed description of the following computer operating system concepts.

A. Non-preemptive kernel

A preemptive kernel allows a process to be preempted while it is running in kernel mode.

A non-preemptive kernel does not allow a process running in kernel mode to be preempted; a
kernel-mode process runs until it exits kernel mode, blocks, or voluntarily yields control of
the CPU. Under non-preemptive scheduling, once the CPU has been allocated to a process, the
process keeps the CPU until it releases it, either by terminating or by switching to the
waiting state. A non-preemptive kernel is essentially free from race conditions on kernel
data structures, as only one process is active in the kernel at a time.

We cannot say the same about preemptive kernels, so they must be carefully designed to ensure
that shared kernel data are free from race conditions. Preemptive kernels are especially difficult
to design for SMP architectures, since in these environments it is possible for two kernel-mode
processes to run simultaneously on different processors.

A preemptive kernel is more suitable for real-time programming, as it will allow a real-time
process to preempt a process currently running in the kernel. Furthermore, a preemptive kernel
may be more responsive, since there is less risk that a kernel-mode process will run for an
arbitrarily long period before relinquishing the processor to waiting processes. Of course, this
effect can be minimized by designing kernel code that does not behave in this way.

B Memory Fragmentation
Memory fragmentation is memory that is allocated but cannot be used and is therefore wasted.
It is the state of a computer's memory (or disk) in which allocations are not contiguous and
there are unusable gaps between them. It is of two types:

 External fragmentation
 Internal fragmentation

External fragmentation
External fragmentation occurs due to repeated allocation and deallocation of memory, which
leads to holes: small free memory spaces that are not sufficient for other programs to use.

Internal fragmentation
Internal fragmentation occurs when we allocate more memory to a certain program or application
than it actually uses; the unused part inside the allocation is wasted.
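The waste from internal fragmentation is easy to quantify. A minimal sketch, assuming a hypothetical allocator that can only hand out whole fixed-size blocks (the block size and request sizes are made-up numbers):

```python
# Toy illustration of internal fragmentation: an allocator that hands out
# whole fixed-size blocks wastes the unused tail of the last block.
BLOCK = 512  # hypothetical allocation unit, in bytes

def internal_waste(request: int) -> int:
    blocks = -(-request // BLOCK)        # ceiling division: blocks needed
    return blocks * BLOCK - request      # bytes allocated but never used

print(internal_waste(700))   # 2 blocks = 1024 bytes allocated -> 324 wasted
print(internal_waste(1024))  # exact fit -> 0 wasted
```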
C. Inter-process communication

Inter process communication, also known as IPC, refers to a mechanism that allows processes
to manage and synchronize shared data. Applications that use IPC can be classified as clients or servers.
A client is an application or a process that requests for a service from another application or
process. A server is an application or a process that responds to a client request. Depending on
the situation, applications act as both a client and a server.
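The client/server pattern described above can be sketched with a pipe between two processes. This is an illustrative Python sketch, not a specific OS API; the `server` function and the message format are invented for the example:

```python
# Minimal client/server IPC sketch: the server process answers requests
# that arrive over a pipe from the client process.
from multiprocessing import Process, Pipe

def server(conn):
    request = conn.recv()        # block until the client sends a request
    conn.send(request.upper())   # respond to the request
    conn.close()

if __name__ == "__main__":
    client_end, server_end = Pipe()
    p = Process(target=server, args=(server_end,))
    p.start()
    client_end.send("ping")      # the client requests a service
    print(client_end.recv())     # the server's response: PING
    p.join()
```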


Process management
A process is a program in execution. A process must have system resources, such as memory
and the underlying CPU. The kernel supports the illusion of concurrent execution of multiple
processes by scheduling system resources among the set of processes that are ready to execute.

Program
 It is a sequence of instructions defined to perform some task.
 It is a passive entity.
Process
 It is a program in execution.
 It is an instance of a program running on a computer.
 It is an active entity.
A processor performs the actions defined by a process. A process includes:
 program counter
 stack
 data section

D Bootloader

A boot loader is a small program which runs at startup time and can load a
complete application program into a processor's memory.

The boot loader is the code that is executed before any operating system starts to run.
Virtually all smartphones, PCs, laptops and other devices that run an OS have a boot
loader. The boot loader basically includes the instructions to boot the OS kernel. Most
boot loaders have their own debugging or modification environment. Since the boot
loader runs before any other piece of software, it is processor specific, and every
motherboard has its own boot loader. This is one reason why Android
phones and tablets have their own custom ROMs for different processors and motherboards.

Since Android is an open-source operating system and all Android device manufacturers build
their devices with different hardware, each manufacturer has its own version of the bootloader
specific to its hardware. Think of your Android phone as a hard drive consisting of multiple
partitions. One partition holds system files while another holds app data. The basic
function of a bootloader on an Android phone is to instruct the OS kernel to boot normally.

2 Write a detailed description of the differences between a process and a thread.

A process is an instance of an executing computer program; in other words, it is a single
occurrence of a running computer program. Simply put, processes are running binaries that contain
one or more threads.

According to the number of threads involved, there are two types of processes.
Single-thread process: a process that has only one thread. This thread is the process itself,
and only one activity is happening.

Multi-thread process: a process that has more than one thread, so more than one activity can
happen at a time.

Two or more processes can communicate with each other using inter-process communication,
but it is quite difficult and needs more resources. When creating a new process, a programmer
has to do two things: duplicate the parent process, and allocate memory and resources for the
new process. So process creation is really expensive.

Thread
A thread is a single path of execution within a process. A thread is as powerful as a process
because a thread can do anything that a process can do. A thread is a lightweight process and
needs fewer resources.

Comparison between Process and Thread:

Process

 An executing instance of a program is called a process.


 It has its own copy of the data segment of the parent process.
 Processes must use inter-process communication to communicate with sibling processes.
 Processes have considerable overhead
 Processes are independent.
 New processes require duplication of the parent process.
 Processes can only exercise control over child processes.
 Any change in the parent process does not affect child processes.
 Run in separate memory spaces.
 Most file descriptors are not shared.
 There is no sharing of file system context.
 It does not share signal handling.
 Process is controlled by the operating system.

Thread

 A thread is a subset of the process.


 It has direct access to the data segment of its process.
 Threads can directly communicate with other threads of its process.
 Threads have almost no overhead
 Threads are dependent.
 New threads are easily created.
 Threads can exercise considerable control over threads of the same process.
 Any change in the main thread may affect the behavior of the other threads of the process.
 It shares file system context.
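The contrast between the two lists above can be seen directly in code. A short sketch (the counter, loop counts, and thread count are arbitrary choices) showing that threads of one process share its data segment, so a lock is needed to avoid a race on the shared variable:

```python
# Threads of one process share its data: all four workers update the same
# counter, and a lock keeps the shared update free of races.
import threading

counter = 0
lock = threading.Lock()

def work():
    global counter
    for _ in range(1000):
        with lock:          # serialize access to the shared variable
            counter += 1

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: every thread saw and updated the same memory
```

Separate processes would each get their own copy of `counter` and would have to use IPC to combine their results.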
3 What is the purpose of resource abstraction?

Resource abstraction is one of the major responsibilities of the operating system. Resource
abstraction is the process of hiding the details of how the hardware operates, making computer
hardware relatively easy for an application programmer to use.

An operating system can implement resource abstraction by providing a single abstract disk
interface which is the same for both a hard disk and a floppy disk. Such an abstraction
saves the programmer from needing to learn the details of both hardware interfaces. Instead, the
programmer only needs to learn the disk abstraction provided by the operating system. The
operating system provides an abstraction layer over the concrete hardware, uses the computer
hardware in an efficient manner, and hides the complexity of the underlying hardware.

The significance of resource abstraction can be illustrated with an example. Suppose a
programmer is writing an application to analyze stock market trends. For this programmer, the
effort to design and debug code to read and write information to and from a disk drive would
represent a significant fraction of the overall effort. The skill and experience required to
write software to control the disk drive are not the same as those needed to design the
stock-analysis portion of the program. While an application programmer must be aware of the
general behavior of a disk drive, it is generally preferable to avoid learning the details of
how disk input/output takes place. Abstraction is the perfect approach, since the application
programmer uses a previously implemented abstraction to read and write the disk drive. A disk
software package is an example of system software. Programmers can focus their attention on the
application programming problem rather than diverting it to tasks not specific to the
application domain. In other words, system software is generally transparent to the end user
but is of major significance to the programmer.

Resource abstraction provides an abstract model of the operation of hardware components and
generalizes hardware behavior. It also limits the flexibility with which hardware can be
manipulated. It provides a more convenient working environment for applications by hiding some
of the details of the hardware and allowing the applications to operate at a higher level of
abstraction. For example, the operating system provides the abstraction of a file system, so
applications don't need to handle raw disk interfaces directly.

Resource abstraction also provides isolation, meaning that many applications can co-exist at
the same time, using the same hardware devices, without falling over each other's feet. One can
view operating systems from two points of view: as a resource manager and as an extended
machine. From the resource-manager point of view, the operating system manages the different
parts of the system efficiently; from the extended-machine point of view, it provides users
with a virtual machine that is more convenient to use. Operating systems generally accomplish
these goals by running processes at low privilege and providing service calls that invoke the
operating system kernel at high privilege.
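The abstract disk interface described above can be sketched as follows. The class and method names (`Disk`, `read_block`, `stock_analysis`) are invented for illustration, not an actual OS interface:

```python
# One abstract "disk" interface, two concrete devices: the application code
# depends only on the abstraction, never on device details.
from abc import ABC, abstractmethod

class Disk(ABC):
    @abstractmethod
    def read_block(self, block: int) -> bytes: ...

class HardDisk(Disk):
    def read_block(self, block):
        return b"hd:%d" % block      # real driver details would hide here

class FloppyDisk(Disk):
    def read_block(self, block):
        return b"fd:%d" % block

def stock_analysis(disk: Disk) -> bytes:
    # the application only needs the abstraction, not the device
    return disk.read_block(0)

print(stock_analysis(HardDisk()))    # b'hd:0'
print(stock_analysis(FloppyDisk()))  # b'fd:0'
```

The stock-analysis code works unchanged with either device, which is exactly the point of the abstraction.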

Note

 Abstraction is the act of representing essential features without including the
background details or explanations.
 Abstraction reduces complexity and allows efficient design and implementation of complex
software systems.

5 Describe the relation between the dispatcher and the scheduler.

Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to
run.

Dispatcher –
The dispatcher is a special program which comes into play after the scheduler. When the
scheduler has completed its job of selecting a process, it is the dispatcher which takes that
process to the desired state/queue. The dispatcher is the module that gives control of the CPU
to the process selected by the short-term scheduler. This function involves the following:

 Switching context
 Switching to user mode
 Jumping to proper location in user program to restart that program
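The division of labor can be sketched as a toy simulation. The process names and ready queue are made up, and a real dispatcher manipulates hardware state rather than strings; this only illustrates who does what:

```python
# Toy split of responsibilities: the short-term scheduler picks the next
# process, and the dispatcher performs the three steps listed above.
from collections import deque

ready_queue = deque(["P1", "P2", "P3"])

def scheduler() -> str:
    return ready_queue.popleft()          # select the next ready process

def dispatcher(process: str) -> list:
    return [
        f"switch context to {process}",                 # switching context
        "switch to user mode",                          # switching to user mode
        f"jump to {process}'s saved program counter",   # restart the user program
    ]

for step in dispatcher(scheduler()):
    print(step)
```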

6 Write a brief description of the following operating system concepts.

Static Vs Dynamic Load

Static Load

 They are independent of time; the dead load on a structure can be considered a
static load.

Dynamic Load

 Time dependent loads


 Loads can be accelerating or decelerating
 Live Load, Wind Load, Earthquake Load, Snow load are the examples of dynamic
load.

Difference between static and dynamic loading

 In a static problem, the load is constant with respect to time. A dynamic problem, on the
other hand, is time-varying in nature, because both the loading and the response vary with time.
 A static problem has only one response, i.e. displacement, but a dynamic problem has
mainly three types of responses: displacement, velocity and acceleration.
 A static problem has only one solution, whereas a dynamic problem has an infinite number of
solutions which are time dependent in nature. Thus dynamic analysis is more complex
and time-consuming than static analysis.
 In static problem, the response can be calculated by the principles of force or static
equilibrium whereas in case of dynamic problem the responses depend not only on the
load but also upon inertial forces which oppose the accelerations producing them. Thus
the total responses are calculated by including inertia forces along with the static
equilibrium. Hence, the inertia forces are the most important distinguishing
characteristics of a structural dynamics problem.
 A static load bearing is weight applied without any build-up of energy, so the object
remains motionless. Force, pressure, and gravity remain static or are applied
gradually.
 A dynamic load bearing is measured by the application of rapid force or pressure to an
object.

B Static vs. dynamic linking

In static linking, functions and variables which are defined in external library files are
linked into your executable. That means that the library code is actually combined with your
code at compile/link time.

With dynamic linking, external functions that you use in your software are not linked into
your executable. Instead, they reside in external library files which are only referenced by
your software; i.e., the compiler/linker instructs the software where to find the used functions.

Static linking increases the file size of your program, and it may increase the code size in
memory if other applications are running on the system. On the other hand, a dynamically
linked program takes up less space and less virtual memory.

In static linking, libraries are linked at compile time and the code size is larger; static
linking suits the case where you have only one or two programs.

In dynamic linking, libraries are linked at run time (execution time) and the code size is
smaller; when you have many programs, dynamic linking is preferable.

Static Linking

 Static linking is the process of copying all library modules used in the program into the
final executable image.
 Static linking is performed by programs called linkers as the last step in compiling a
program. Linkers are also called link editors.
 Statically linked files are significantly larger in size because external programs are built
into the executable files.
 In static linking, if any of the external programs has changed, they have to be
recompiled and re-linked; otherwise the changes won't be reflected in the existing executable file.
 Statically linked program takes constant load time every time it is loaded into the
memory for execution.
 Programs that use statically-linked libraries are usually faster than those that use shared
libraries.

Dynamic Linking

 In dynamic linking only one copy of shared library is kept in memory. This significantly
reduces the size of executable programs, thereby saving memory and disk space.
 Dynamic linking is performed at run time by the operating system.
 In dynamic linking this is not the case and individual shared modules can be updated and
recompiled. This is one of the greatest advantages dynamic linking offers.
 Programs that use shared libraries are usually slower than those that use statically-linked
libraries.
 Dynamically linked programs are dependent on having a compatible library. If a library is
changed (for example, a new compiler release may change a library), applications might
have to be reworked to be made compatible with the new version of the library. If a
library is removed from the system, programs using that library will no longer work.
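Run-time linking can be observed directly from Python with `ctypes`, which asks the OS's dynamic linker to load a shared library and resolve a symbol at execution time. This sketch assumes a Unix-like system where the C math library can be located (the `libm.so.6` fallback name is a Linux assumption):

```python
# Load the C math library at run time and call sqrt from it: the symbol is
# resolved by the dynamic linker at execution time, not at compile time.
import ctypes
import ctypes.util

path = ctypes.util.find_library("m")     # e.g. "libm.so.6" on Linux
libm = ctypes.CDLL(path or "libm.so.6")  # ask the dynamic linker to load it
libm.sqrt.restype = ctypes.c_double      # declare the C signature
libm.sqrt.argtypes = [ctypes.c_double]
print(libm.sqrt(9.0))  # 3.0
```

If the shared library were removed from the system, this call would fail at load time, which is exactly the dependency risk noted in the last bullet above.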

C Paging

• Paging is a memory management scheme, which assigns a noncontiguous address space


to a process.

• Paging is implemented by breaking the main memory into fixed-sized blocks that are
called frames.
• The logical memory of a process is broken into blocks of the same fixed size, called
pages. The page size and frame size are defined by the hardware. As we know, the process
must be placed in main memory for execution.

• So, when a process is to be executed, the pages of the process are loaded from the source,
i.e. the backing store, into any available frames in main memory.

• The CPU generates the logical address for a process, which consists of two parts: the page
number and the page offset. The page number is used as an index into the page table.

• The page table contains the base address of each page loaded in main memory. This base
address is combined with the page offset to generate the physical address of the page in
main memory.

Paging is a memory management technique in which the memory is divided into fixed size
pages. Paging is used for faster access to data. When a program needs a page, it is available in
the main memory as the OS copies a certain number of pages from your storage device to main
memory. Paging allows the physical address space of a process to be noncontiguous.
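The two-part address translation described above can be sketched with made-up sizes (a 1 KiB page and a tiny three-entry page table, both invented for the example):

```python
# Toy paging translation: split a logical address into (page, offset),
# look the page up in the page table, and form the physical address.
PAGE_SIZE = 1024                       # assumed page/frame size, in bytes

page_table = {0: 5, 1: 2, 2: 7}        # page number -> frame number

def translate(logical: int) -> int:
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]           # page-table lookup, indexed by page number
    return frame * PAGE_SIZE + offset  # frame base address + offset

print(translate(1050))  # page 1, offset 26 -> frame 2 -> 2074
```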

Advantages

 Allocating memory is easy and cheap
 Eliminates external fragmentation
 Data (page frames) can be scattered all over physical memory; pages are mapped
appropriately anyway
 Allows demand paging and prepaging
 More efficient swapping: no need for considerations about fragmentation, just swap out
the page least likely to be used

Disadvantages

 Longer memory access times (page table lookup); can be mitigated with guarded page
tables or inverted page tables
 Memory requirements (one entry per VM page)
 Internal fragmentation

Segmentation

• Segmentation is one of the most common ways to achieve memory protection. With paging,
because internal fragmentation of pages takes place, the user's view of memory is lost.

• The user views memory as a combination of segments. In this type, memory
addresses are not contiguous.

• Each memory segment is associated with a specific length and a set of permissions.

• When a process tries to access memory, it is first checked whether the process has the
required permission to access the particular memory segment and whether the access is
within the length specified by that segment.

Memory segmentation is the division of a computer's primary memory into segments or
sections. Segments or sections are also used in object files of compiled programs when
they are linked together into a program image and when the image is loaded into
memory.

Advantages of segmentation

 Allows the use of separate memory areas for the program code and data.
 Permits a program to be placed into different areas of memory each time it is run.
 Multitasking becomes easy.
 The advantage of having separate code and data segments is that one program can work
on different sets of data.
 A logical address can be loaded into the instruction pointer (IP), and the program can
run anywhere in the segment's memory.
 Programs are relocatable, so they can be run at any location in memory.

Disadvantages of segmentation

 External fragmentation is present.
 Memory management algorithms are costly.
 Segmentation must find a free memory area big enough for each segment.
 Segments of unequal size are not as well suited for swapping.
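The length check described above can be sketched with an invented segment table (the base/limit pairs are arbitrary example values; permissions are omitted for brevity):

```python
# Toy segment translation: each access is checked against the segment's
# limit before the base address is added.
segment_table = {0: (1000, 400), 1: (2400, 100)}  # segment -> (base, limit)

def translate(segment: int, offset: int) -> int:
    base, limit = segment_table[segment]
    if not 0 <= offset < limit:
        raise MemoryError("segmentation fault: offset outside segment")
    return base + offset

print(translate(0, 50))  # 1050
print(translate(1, 20))  # 2420
```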


7 For a particular page reference string, find out how many page faults would
occur under each of the following page replacement algorithms for a given number of frames.

A. The LRU Page Replacement Policy

The optimal algorithm was designed to replace a page that will not be referenced for the longest
time. In other words, this was meant to have a low number of page faults. This algorithm may
also be approximated with another view. The idea is to predict future references based on the
past data, that is, a page that has not been referenced for a long time in the past may not be
referenced for a long time in the future either. In this way, LRU page-replacement algorithm
replaces a page that has not been used for the longest period of time in the past. Let us
understand this algorithm with an example.

Example

Calculate the number of page faults for the following reference string using the LRU
page-replacement algorithm with a frame size of 3:

5 0 2 1 0 3 0 2 4 3 0 3 2 1 3 0 1 5

Solution

The page references 5, 0, and 2 will result in page faults. The next page reference, 1, needs a
page replacement. Out of 5, 0, and 2, page number 5 is the least recently used. Therefore, it
will be replaced.

This process goes on, resulting in a total of 13 page faults, which is less than with the FIFO
page-replacement algorithm.
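The worked LRU example can be checked with a short simulation (the helper name `lru_faults` is ours, not from any library):

```python
# Count LRU page faults: keep the frames ordered by recency and, on a
# fault with full frames, evict the least recently used page.
def lru_faults(refs, nframes):
    frames = []                  # least recently used at index 0
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)  # refresh this page's recency
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)    # evict the least recently used page
        frames.append(page)      # most recently used goes to the end
    return faults

refs = [5, 0, 2, 1, 0, 3, 0, 2, 4, 3, 0, 3, 2, 1, 3, 0, 1, 5]
print(lru_faults(refs, 3))  # 13, matching the worked solution
```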

B. First-In-First-Out (FIFO) Replacement

This strategy is used very often in daily life. Let us take an analogy of a shelf, which is used to
keep several things. When there is no place in the shelf to keep a new item, the oldest item is
replaced. The same strategy known as first-in first-out (FIFO) is also used for page replacement.
According to FIFO, the oldest page among all the pages in the memory is chosen as the victim.
The question is how to know the oldest page in the memory. One approach may be to attach the
time while storing a page in the memory. However, an easier approach is to store all the pages in
the memory in a FIFO queue. The page at the head of the queue will be paged-out first and a new
page will be inserted at the tail of the queue.

Example Calculate the number of page faults for the following reference string using FIFO
algorithm with frame size as 3

5 0 2 1 0 3 0 2 4 3 0 3 2 1 3 0 1 5

Solution

Initially, all the three frames are empty. Page number 5 is first referenced, and it is a page fault.
After handling the page fault, the page is brought into the memory in one of the frames.
Similarly, Page numbers 0 and 2 occupy the other two frames. Next, Page number 1 is
referenced but there is no free frame. Here, the FIFO algorithm comes into the picture and
replaces the page that was brought first, that is Page number 5. The next referenced Page number
0 is already in the memory; therefore, it will not fault. After this, the next referenced page
number is 3, which is a page fault. Therefore, Page number 0 is replaced.

This process goes on, resulting in 15 page faults.
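The FIFO count can likewise be verified with a short simulation (the helper name `fifo_faults` is ours):

```python
# Count FIFO page faults: on a fault with full frames, evict the page
# that has been resident the longest (the head of the queue).
from collections import deque

def fifo_faults(refs, nframes):
    frames = deque()             # oldest page at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft() # evict the first-in page
            frames.append(page)  # new page joins the tail of the queue
    return faults

refs = [5, 0, 2, 1, 0, 3, 0, 2, 4, 3, 0, 3, 2, 1, 3, 0, 1, 5]
print(fifo_faults(refs, 3))  # 15, matching the worked solution
```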

7.C Optimal Replacement


Belady’s optimal algorithm cheats: it looks forward in time to see which frame to replace on
a page fault. Thus it is not a realizable replacement algorithm.
It gives us a frame of reference for a given static frame access sequence.
 The optimal policy selects for replacement the page that will not be used for the longest
period of time.

 It is impossible to implement (we would need to know the future), but it serves as a
standard against which to compare the other algorithms we study.
An algorithm is desired that produces the fewest page faults and does not suffer
from Belady’s anomaly.

This policy produces a minimal number of page faults. Let us understand this algorithm
with an example.

Example Calculate the number of page faults for the following reference string using
optimum algorithm with frame size as 3.

5 0 2 1 0 3 0 2 4 3 0 3 2 1 3 0 1 5

Solution

The Page numbers 5, 0, and 2 are page faults as shown earlier. The next page reference is
1, which is a page fault. At this moment, it is observed which page out of 5, 0, and 2 will
not be referenced for a long time in the reference string. This is Page number 5.
Therefore, Page number 5 will be replaced in the memory. The next reference Page
number 0 is not a page fault. Page number 3 is again a page fault. Again, it is observed
which page out of 1, 0, and 2 will not be referenced for a long time. This is Page number
1. Therefore, Page number 1 is replaced.

The process goes on, resulting in 9 page faults. The number of page faults with this
algorithm is much lower than with the FIFO algorithm.
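The optimal count can also be checked by simulation; since the algorithm "knows the future," the sketch simply scans ahead in the reference string (the helper name `opt_faults` is ours):

```python
# Count optimal (Belady) page faults: on a fault with full frames, evict
# the resident page whose next use lies farthest ahead, or never occurs.
def opt_faults(refs, nframes):
    frames = set()
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            def next_use(p):
                try:
                    return refs.index(p, i + 1)  # next reference after i
                except ValueError:
                    return len(refs)             # never referenced again
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [5, 0, 2, 1, 0, 3, 0, 2, 4, 3, 0, 3, 2, 1, 3, 0, 1, 5]
print(opt_faults(refs, 3))  # 9, matching the worked solution
```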

Reference:

 Principles of Operating Systems textbook
 Internet
