
Q1. Compare the benefits and disadvantages of RAID 1, RAID 3, and RAID 5. Make sure that you address performance, reliability, and relative cost per gigabyte.
Ans.
RAID 1.
Benefits
RAID 1 offers excellent read speed and a write speed comparable to that of a single disk.
If a disk fails, data do not have to be rebuilt; they just have to be copied to the replacement disk.
RAID 1 is a very simple technology.
Disadvantages
The main disadvantage is that the effective storage capacity is only half of the total disk capacity, because all data are written twice. This makes RAID 1 the most expensive of the three levels per usable gigabyte.
Software RAID 1 solutions do not always allow a hot swap of a failed disk (meaning it cannot be replaced while the server keeps running). Ideally, a hardware controller is used.
RAID 3.
Benefits
RAID 3 provides high throughput (both read and write) for large, sequential data transfers.
Disk failures do not significantly slow down throughput, since missing data can be reconstructed on the fly from the remaining disks plus parity.
Because only one disk in the array is dedicated to parity, the cost per usable gigabyte is low (n-1 disks' worth of capacity out of n).
Disadvantages
The technology is fairly complex and too resource-intensive to be done well in software.
Performance is slower for small, random I/O operations, because every disk participates in every access.
RAID 5.
Benefits
Read data transactions are very fast, while write data transactions are somewhat slower (due to the parity that has to be calculated).
Like RAID 3, it stores only one disk's worth of parity, so the cost per usable gigabyte is low.
Disadvantages
Disk failures have an effect on throughput, although this is still acceptable.
Like RAID 3, this is a complex technology. The parity arithmetic behind RAID 3 and RAID 5 is sketched below.
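As an illustration of the parity overhead mentioned above, the following is a minimal sketch of RAID 3/5-style XOR parity: how it is computed on a write, and how a lost disk's data is reconstructed from the survivors. The three single-byte "disks" are a hypothetical toy model, not a real RAID implementation.

```python
# Hypothetical 3-data-disk array; each "disk" holds one byte of data.
d0, d1, d2 = 0b1010, 0b0110, 0b1100

# Parity is the XOR of all data blocks; updating any block requires
# recomputing it, which is the write penalty noted above.
parity = d0 ^ d1 ^ d2

# If disk 1 fails, its contents are rebuilt by XOR-ing the surviving
# data blocks with the parity block.
rebuilt_d1 = d0 ^ d2 ^ parity
assert rebuilt_d1 == d1
print(f"parity={parity:04b}, rebuilt d1={rebuilt_d1:04b}")
```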




Q2. Given an 8 KByte instruction cache with a miss rate of 2% and an 8 KByte data cache with a
miss rate of 5%, assume that 60% of the memory accesses are instruction references, that a cache
hit takes 1 clock cycle, and a cache miss takes 25 clock cycles.
Ans.
a) What is the combined miss rate?
Combined miss rate = (% instruction accesses × instruction-cache miss rate) + (% data accesses × data-cache miss rate)
Combined miss rate = (0.60 × 0.02) + (0.40 × 0.05) = 0.012 + 0.020 = 0.032.
b) What is the average memory access time?
Average memory access time = % instructions × (hit time + instruction miss rate × miss penalty) + % data × (hit time + data miss rate × miss penalty)
= 0.60 × (1 + 0.02 × 25) + 0.40 × (1 + 0.05 × 25)
= 0.60 × 1.5 + 0.40 × 2.25
= 0.9 + 0.9
= 1.8 clock cycles.
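The same arithmetic can be checked with a few lines of Python (a worked-example sketch using the numbers from the question, not a general tool):

```python
# Parameters taken from the question.
instr_frac, data_frac = 0.60, 0.40      # fraction of memory accesses
instr_miss, data_miss = 0.02, 0.05      # per-cache miss rates
hit_time, miss_penalty = 1, 25          # in clock cycles

# a) Combined miss rate: weighted average of the two miss rates.
combined_miss = instr_frac * instr_miss + data_frac * data_miss
print(f"combined miss rate = {combined_miss:.3f}")        # 0.032

# b) AMAT: every access pays the hit time, plus the miss penalty
#    weighted by its cache's miss rate.
amat = (instr_frac * (hit_time + instr_miss * miss_penalty)
        + data_frac * (hit_time + data_miss * miss_penalty))
print(f"average memory access time = {amat:.1f} cycles")  # 1.8
```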
c) Explain the difference between a write-through cache policy and a write-back cache policy,
including a brief description of the benefits of each.
Write-through caching: When the controller receives a write request from the host, it stores the data in its cache module, writes the data to the disk drives, and then notifies the host that the write operation is complete. This process is called write-through caching because the data actually passes through (and is stored in) the cache memory on its way to the disk drives.
Write-back caching: This caching technique improves the subsystem's response time to write requests by allowing the controller to declare the write operation 'complete' as soon as the data reaches its cache memory. The controller performs the slower operation of writing the data to the disk drives at a later time.
Benefit of write-through cache:
The primary advantage of write-through mode is that the cache and the backing store are updated simultaneously, which ensures that the backing store always remains consistent with the cache contents.
However, this comes at the cost of reduced performance, because every write must reach the slower backing store before the operation is considered complete.
Benefit of write-back cache:
The advantage of a write-back cache is that not all write operations need to access main memory, as they do with a write-through cache.
If a single address is frequently written to, it does not pay to keep writing that data through to main memory. For example, if a loop variable is updated 100 times before the program moves on, a write-back cache can save on the order of 100 memory writes.
If several bytes within the same cache block are modified, they force only one memory write operation at write-back time (see the sketch below).
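The sketch below makes the trade-off concrete by counting backing-store writes under each policy for the loop scenario described above. It is a toy single-line cache model; the function name and the 100-update workload are illustrative assumptions.

```python
def simulate(updates, policy):
    """Count backing-store writes for one repeatedly written address."""
    mem_writes = 0
    dirty = False
    for _ in range(updates):
        if policy == "write-through":
            mem_writes += 1          # every write goes straight to memory
        else:                        # write-back: just mark the line dirty
            dirty = True
    if dirty:                        # eviction flushes the dirty line once
        mem_writes += 1
    return mem_writes

print(simulate(100, "write-through"))  # 100
print(simulate(100, "write-back"))     # 1
```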

d) Describe two strategies for reducing the miss rate in caches.
A) Software solutions
1. Compiler-controlled prefetch (an alternative to hardware prefetching):
Some CPUs include prefetching instructions.
These instructions request that data be moved into either a register or the cache.
These special instructions can be either faulting or non-faulting.
Non-faulting instructions do nothing (a no-op) if the memory access would cause an exception.
Of course, prefetching does not help if it interferes with normal CPU memory access or operation.
Thus, the cache must be non-blocking (also called lockup-free).
This allows the CPU to overlap execution with the prefetching of data.
2. Compiler optimizations:
This method does NOT require any hardware modifications.
Yet it can be one of the most efficient ways to eliminate cache misses.
The improvement results from better code and data organization.
For example, code can be rearranged to avoid conflicts in a direct-mapped cache, and accesses to arrays can be reordered to operate on blocks of data rather than striding across the array, as sketched below.
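As one illustration, the sketch below contrasts the two loop orders for a 2-D array. It is a hypothetical example: in CPython the effect is muted because a list of lists stores pointers, but the access pattern shown is the one a compiler would arrange for a C-style row-major array.

```python
# Hypothetical loop-interchange example on a row-major 2-D array.
N = 512
a = [[0] * N for _ in range(N)]

# Column-order traversal: consecutive accesses are N elements apart,
# so each fetched cache block is barely reused before eviction.
for j in range(N):
    for i in range(N):
        a[i][j] += 1

# Row-order traversal (loops interchanged): consecutive accesses are
# adjacent in memory, so each fetched block is fully used (fewer misses).
for i in range(N):
    for j in range(N):
        a[i][j] += 1
```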
B) Hardware solutions
1. Reducing the miss rate using a victim cache:
Add a small, fully associative victim cache to hold data discarded from the regular cache.
When data are not found in the cache, check the victim cache, as sketched below.
A 4-entry victim cache removed 20% to 95% of the conflict misses for a 4 KB direct-mapped data cache.
This gives the access time of a direct-mapped cache with a reduced miss rate.
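The following is a minimal, hypothetical model of the lookup path described above: single-word blocks, a dict standing in for the direct-mapped cache, and a small FIFO standing in for the victim cache. Real hardware does this with tags and comparators; the sizes and names here are illustrative.

```python
from collections import OrderedDict

SETS = 4                      # toy direct-mapped cache with 4 sets
cache = {}                    # set index -> (tag, data)
victim = OrderedDict()        # small fully associative FIFO, 4 entries

def evict(index):
    """Move the line being displaced into the victim cache (FIFO)."""
    if index in cache:
        old_tag, _ = cache[index]
        victim[old_tag * SETS + index] = None
        if len(victim) > 4:
            victim.popitem(last=False)        # drop the oldest victim

def access(addr):
    index, tag = addr % SETS, addr // SETS
    if cache.get(index, (None,))[0] == tag:
        return "hit"                          # found in the main cache
    if addr in victim:                        # check the victim cache
        victim.pop(addr)
        evict(index)                          # swap the lines
        cache[index] = (tag, None)
        return "victim hit"
    evict(index)                              # true miss: fetch from memory
    cache[index] = (tag, None)
    return "miss"

# Addresses 0 and 4 map to the same set; the victim cache turns what
# would be repeated conflict misses into fast victim hits.
print([access(a) for a in (0, 4, 0, 4)])
# ['miss', 'miss', 'victim hit', 'victim hit']
```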
2. Reducing misses by hardware prefetching of instructions and data:
E.g., instruction prefetching:
The Alpha 21064 fetches two blocks on a miss.
The extra block is placed in a stream buffer.
On a miss, the stream buffer is checked first.
Jouppi [1990] found that one data stream buffer caught 25% of the misses from a 4 KB cache; four stream buffers caught 43%.
This works with data blocks too:
Palacharla & Kessler [1994] found that, for scientific programs, eight stream buffers caught 50% to 70% of the misses from two 64 KB, 4-way set-associative caches.
Prefetching relies on extra memory bandwidth that can be used without penalty. A simple sequential-prefetch model is sketched below.
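A minimal sketch of the idea: on each miss, fetch the block and prefetch its successor into a one-entry stream buffer. The block numbering and the sequential workload are illustrative assumptions, not taken from the papers cited above.

```python
cache, stream_buffer = set(), set()
misses = 0

def fetch(block):
    """On a miss, fetch the block and prefetch its successor."""
    global misses
    if block in cache:
        return
    if block in stream_buffer:                # prefetch paid off
        stream_buffer.discard(block)
    else:
        misses += 1                           # true miss: go to memory
    cache.add(block)
    stream_buffer.add(block + 1)              # prefetch the next block

for b in range(8):                            # sequential access pattern
    fetch(b)
print(f"misses: {misses} of 8 accesses")      # 1: only the first block misses
```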
Q3. Assume an I/O system is in equilibrium. If the average time to service an I/O request is 40 ms and the I/O system averages 100 requests per second, what is the mean number of I/O requests in the system?
Ans. Average number of arriving requests per second (R) = 100/second;
average time to service a request (T_ser) = 40 ms (0.04 s).
Because the system is in equilibrium, Little's Law applies:
Mean number of requests in the system = arrival rate × mean time per request = 100/s × 0.04 s = 4 requests.
(Note that this product is a task count, not a utilization. A utilization must lie between 0 and 1; otherwise, more tasks would arrive than could be serviced, violating the assumption that the system is in equilibrium.)
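The computation, as a quick Python check (interpreting the 40 ms as the mean time a request spends in the system, per Little's Law):

```python
arrival_rate = 100       # requests per second
time_in_system = 0.040   # seconds per request (40 ms)

# Little's Law: mean number in system = arrival rate * mean time in system.
mean_in_system = arrival_rate * time_in_system
print(f"mean I/O requests in system = {mean_in_system:.0f}")  # 4
```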
Q4. Given a system with a cache hit time of 10 ns, a miss penalty of 100 ns, and a hit rate of 95%, what is the average memory access time?
Ans. Average memory access time = hit time + miss rate × miss penalty
= 10 ns + 0.05 × 100 ns
= 15 ns
(Miss rate = 100% − hit rate = 5%, i.e., 0.05.)
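Again as a quick check, this time with the single unified AMAT formula:

```python
hit_time, miss_penalty = 10e-9, 100e-9   # seconds (10 ns, 100 ns)
miss_rate = 1.0 - 0.95                   # miss rate = 1 - hit rate

amat = hit_time + miss_rate * miss_penalty
print(f"AMAT = {amat * 1e9:.0f} ns")     # 15 ns
```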



Q5. Briefly explain the benefits of using virtual memory in a multiuser computer system.
Ans. Advantages of virtual memory in a multi-user computer system:
Translation of memory addresses:
A program can be given a consistent view of memory, even though physical memory is scrambled.
Only the most important part of the program (its working set) must be in physical memory.
Contiguous structures (like stacks) use only as much physical memory as necessary, yet can grow later.
Protection of memory:
Different threads (or processes) are protected from each other.
Different pages can be given special behavior (read-only, invisible to user programs, etc.).
Kernel data are protected from user programs.
This is very important for protection from malicious programs.
Sharing of memory:
Sharing of virtual memory is possible because the same physical page can be mapped into multiple processes' address spaces (the shared-memory approach), as sketched below.
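As a small illustration of that last point, Python's multiprocessing.shared_memory module (available since Python 3.8) asks the OS to map the same physical pages into two processes' virtual address spaces. This is a minimal sketch of the mechanism, not a full IPC example:

```python
from multiprocessing import Process, shared_memory

def reader(name):
    # Attach to the existing segment: the OS maps the *same* physical
    # pages into this process's virtual address space.
    shm = shared_memory.SharedMemory(name=name)
    print("reader sees:", bytes(shm.buf[:5]))   # b'hello'
    shm.close()

if __name__ == "__main__":
    # Create a shared segment and write into it from the parent process.
    shm = shared_memory.SharedMemory(create=True, size=16)
    shm.buf[:5] = b"hello"
    p = Process(target=reader, args=(shm.name,))
    p.start(); p.join()
    shm.close(); shm.unlink()                   # release the segment
```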
