
Outline

1. Why We Need Operating Systems
2. Different Views of OS
3. Evolution of OS Features: performance vs. economic challenges
4. OS Functionality and Its Variants
5. Example of an Analytical Approach to OS Performance: superlinear speedup and its causes
6. Structure of OS

Why We Need Operating Systems

An OPERATING SYSTEM is a PROGRAM which acts as an interface between the user and the hardware to:
1. Facilitate convenient access to hardware by the user
2. Improve efficiency of the system
3. Provide security for the system and its programs and data
4. Emulate features not available in hardware
User Views of Operating System

1. System Calls (For Programmers)
2. System Programs (For Users)
3. Commands (For Superusers)
4. File System (For All)

Evolution of Operating Systems, Step 1

PROBLEM:
different speeds of I/O operations and computations
SOLUTION:
overlapping I/O with computation
TOOL:
asynchronous operation of I/O and CPU
Need for synchronization: INTERRUPTS and interrupt handlers

Interrupt Processing
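Interrupt processing amounts to vectoring through a table of handlers while the interrupted computation's state is saved and restored. A minimal, hypothetical sketch in C (the table layout, register_handler, and the IRQ number are illustrative, not any real kernel's API):

```c
#include <stdio.h>

#define NUM_IRQS 16                        /* hypothetical number of interrupt lines */

typedef void (*irq_handler_t)(int irq);    /* a handler receives its interrupt number */

static irq_handler_t irq_table[NUM_IRQS];  /* the interrupt vector table */

/* Register a handler for one interrupt line (illustrative API). */
static void register_handler(int irq, irq_handler_t h) {
    if (irq >= 0 && irq < NUM_IRQS)
        irq_table[irq] = h;
}

/* Invoked "by hardware" when a device raises interrupt `irq`:
   save state, run the handler, restore state (state handling elided here). */
static void dispatch_interrupt(int irq) {
    if (irq_table[irq])
        irq_table[irq](irq);
}

static void disk_done(int irq) {
    printf("IRQ %d: disk transfer complete, wake up the waiting process\n", irq);
}

int main(void) {
    register_handler(14, disk_done);   /* 14 is a made-up line for the disk */
    dispatch_interrupt(14);            /* simulate the device raising it */
    return 0;
}
```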

Evolution of Operating Systems, Step 2

PROBLEM:
underutilization of CPU
SOLUTION:
multiprogramming (multiple programs in memory at the same time)
TOOL:
context switching

Context Switching and Memory Swapping

Timesharing, Multiprogramming and Command Languages
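Context switching can be illustrated from user space with the POSIX ucontext API (obsolescent but still available on Linux). This is only an analogy for what the OS does: a real kernel switch also changes address spaces and privilege state. A minimal sketch:

```c
#include <stdio.h>
#include <ucontext.h>

/* Two user-level contexts standing in for two programs in memory. */
static ucontext_t main_ctx, prog_ctx;
static char prog_stack[64 * 1024];   /* private stack for the second context */

static void program_b(void) {
    printf("program B: running after a context switch\n");
    /* Returning resumes uc_link, i.e. switches back to main. */
}

int main(void) {
    getcontext(&prog_ctx);                     /* initialize from current state */
    prog_ctx.uc_stack.ss_sp   = prog_stack;
    prog_ctx.uc_stack.ss_size = sizeof prog_stack;
    prog_ctx.uc_link          = &main_ctx;     /* where to go when B returns */
    makecontext(&prog_ctx, program_b, 0);

    printf("program A: saving my registers, switching to B\n");
    swapcontext(&main_ctx, &prog_ctx);         /* the context switch itself */
    printf("program A: resumed where I left off\n");
    return 0;
}
```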
Evolution of Operating Systems, Step 3

PROBLEM:
protection of the system resources: I/O, memory, CPU, network ports
SOLUTION:
distinction between two modes of operation:
I. supervisor (also called monitor; the "farmer")
II. user (the "worker")

Evolution of Operating Systems, Step 3, continued

TOOLS:
Two sets of machine instructions:
1. privileged, allowed only in supervisor mode
2. non-privileged, allowed in either mode

In the user mode, Operating System Services provide "canned" (strictly limited) access to the OS through System Calls (software interrupts).
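On Linux, this "canned" access is visible at the source level: the glibc syscall() wrapper issues the trap into the kernel directly, while the usual library functions wrap the same kernel entry points. A small example, assuming a Linux system:

```c
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    const char msg[] = "hello from a raw system call\n";

    /* syscall() is the escape hatch that issues the software interrupt / trap
       itself; SYS_write names the kernel's write entry point. */
    syscall(SYS_write, 1, msg, sizeof msg - 1);   /* fd 1 = stdout */

    /* The same service through the usual library wrapper: */
    write(1, msg, sizeof msg - 1);
    return 0;
}
```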
Typical Services of Operating Systems

1. Device Drivers (I/O operations)
2. File System (permanent data storage)
3. Command Language Interpreter and Utilities Library, often through a GUI/icon-based interface (program execution)
4. Two modes of operation with privileged instructions, interrupt handlers and system calls (execution protection)

Typical Services of Operating Systems, continued

5. Resource allocation (program scheduling)
6. Error detection (fault tolerance)
7. Account and resource protection (examples: memory protection, account verification, etc.)
8. Usage monitoring (accounting)
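Service 8, usage monitoring, is visible from user space through the POSIX getrusage() call. A small sketch (field meanings follow Linux conventions, e.g. ru_maxrss in kilobytes):

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    /* Burn a little CPU so there is something to account for. */
    volatile double x = 0.0;
    for (long i = 0; i < 10 * 1000 * 1000; i++)
        x += i * 0.5;

    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        printf("user CPU time:    %ld.%06ld s\n",
               (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
        printf("system CPU time:  %ld.%06ld s\n",
               (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
        printf("max resident set: %ld kB\n", ru.ru_maxrss);  /* kB on Linux */
    }
    return 0;
}
```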
Variations of Services

A) MULTIPROCESSOR/MULTICORE SYSTEMS: complex memory allocation and CPU scheduling
B) CLUSTERS/GRIDS: communication primitives, synchronization
C) REAL-TIME/EMBEDDED SYSTEMS: time constraints, complex interrupt handling, execution predictability
D) LAPTOPS/DESKTOPS/WORKSTATIONS: security, as network interconnections make them open to remote attacks

Basic Structure of a Computer

The basic structure of a computer normally consists of one or more of the following hardware components:

• The CPU, or central processing unit, also called the processor
• RAM, or random access memory, also known as main memory
• The mass storage devices, which store large amounts of data and programs in permanent form
• The I/O devices, or input/output units
• The system bus, which provides interconnections for all components of the system
Hardware Components of a Computer

Modern Memory Hierarchy

Relative access times at each level, trading latency for bandwidth at different time scales:

PROCESSOR (1)
CACHE, up to 3 levels (1-100)
MAIN STORAGE / NETWORK (100-1,000)
DISK (100,000) / TAPE (1,000,000)
TERMINAL (1,000,000,000)
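These latency ratios can be observed from user space with a pointer-chasing microbenchmark: each load depends on the previous one, so the loop time approximates the average access latency of whatever level the array fits in. A minimal sketch, assuming a typical machine where 32 KB fits in cache and 128 MB does not:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Chase pointers through a single-cycle permutation of n slots,
   returning the average nanoseconds per dependent load. */
static double chase_ns(size_t n, size_t steps) {
    size_t *next = malloc(n * sizeof *next);
    for (size_t i = 0; i < n; i++) next[i] = i;
    srand(42);                              /* fixed seed: repeatable walk */
    for (size_t i = n - 1; i > 0; i--) {    /* Sattolo's shuffle: one big cycle, */
        size_t j = (size_t)rand() % i;      /* so the walk cannot hide in cache  */
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }
    struct timespec a, b;
    volatile size_t p = 0;                  /* volatile: keep the loads ordered */
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (size_t s = 0; s < steps; s++) p = next[p];
    clock_gettime(CLOCK_MONOTONIC, &b);
    free(next);
    return ((b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec)) / (double)steps;
}

int main(void) {
    printf("~cache-resident (32 KB):  %.1f ns/access\n", chase_ns(1 << 12, 1 << 24));
    printf("~RAM-resident  (128 MB):  %.1f ns/access\n", chase_ns(1 << 24, 1 << 24));
    return 0;
}
```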
Clusters

[Diagram: processors P1, P2, ..., P2n+1 on both sides of an interconnection NETWORK with roughly 1000x the latency and <<1x the bandwidth of local memory access]

Memory Cost vs. Speed

[Plot: access time (10^0 down to 10^-10 s) vs. cost per bit (10^-9 to 10^-2); faster memory costs more per bit. From slow and cheap to fast and expensive: optical memories, magnetic tapes, magnetic disks, magnetic bubble memories, semiconductor RAMs as main memory, semiconductor RAMs as cache]


Multi-Processor System

Homogeneous               | Heterogeneous
Parallel Processing       | Distributed Processing
• single user             | • many users
• tightly coupled         | • loosely coupled
• fine to medium grain    | • large grain
Parallelism efficient     | Parallelism possible


Transparency in Sharing Resources

[Diagram: taxonomy of parallel architectures by transparency of resource sharing]
• Shared Memory: SIMD (lock-stepped) and MIMD
• Distributed Memory: MIMD with Message Passing (e.g., Cell processors)
• Distributed Shared Memory
• Computational Grids (examples: RPI Grid and RPI Blue Gene)
(SIMD/MIMD = Single/Multiple Instruction, Single/Multiple Data streams)

Parallel Processing Efficiency

W_s - sequential work
maxW_n - maximum computational work on a single processor when n processors are used
T_1, T_n - execution times on single- and n-processor machines
o_n - overhead of parallel processing on an n-processor machine

T_1 = W_s
T_n = maxW_n + o_n ≥ W_s/n + o_n ≥ W_s/n = T_1/n
Speedup: S_n = T_1/T_n ≤ n
Efficiency: E_n = T_1/(n·T_n) ≤ 1
With perfect load balance, maxW_n = W_s/n and E_n = 1/(1 + n·o_n/W_s)
To keep efficiency above 50%, o_n < W_s/n, i.e., n < W_s/o_n, so
efficient parallelism is limited by the total work and the overhead.
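A worked instance of these formulas, assuming perfect load balance; the values of W_s and o_n are made up for illustration:

```c
#include <stdio.h>

int main(void) {
    double Ws = 1e9;   /* total sequential work (arbitrary units, assumed) */
    double on = 1e6;   /* overhead of an n-processor run (assumed)         */

    printf("%6s %14s %14s\n", "n", "speedup S_n", "efficiency E_n");
    for (int n = 1; n <= 4096; n *= 4) {
        double T1 = Ws;
        double Tn = Ws / n + on;       /* perfect load balance: maxW_n = W_s/n */
        double Sn = T1 / Tn;           /* speedup */
        double En = T1 / (n * Tn);     /* equals 1 / (1 + n*o_n/W_s) */
        printf("%6d %14.1f %14.3f\n", n, Sn, En);
    }
    /* E_n drops below 0.5 once n > W_s/o_n = 1000, matching the bound above. */
    return 0;
}
```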

Superlinear speedup - when E_n > 1 or S_n > n

Reasons: memory and cache effects (the combined memories and caches of n processors may hold a working set that a single processor cannot), or an implicit change of the algorithm (e.g., a parallel search may find the answer after less total work than the sequential one).
