
Q. 1. What is Computer Architecture? Ans. Computer Architecture : It is concerned with the structure and behaviour of the computer as seen by the user.

It includes the information formats, the instruction set, and techniques for addressing memory. The architectural design of a computer system is concerned with the specifications of the various functional modules, such as processors and memories, and with structuring them together into a computer system.

Q. 2. What is Computer Organisation? Ans. Computer Organisation: It is concerned with the way the hardware components operate and the way they are connected together to form the computer system. The various components are assumed to be in place, and the task is to investigate the organisational structure to verify that the computer parts operate as intended.

Q. 3. What is the concept of layers in architectural design? Ans. The concept of layers in architectural design is described below: 1. Complex problems can be segmented into smaller and more manageable form. 2. Each layer is specialized for a specific function.

3. Upper layers can share the services of a lower layer. Thus layering allows us to reuse functionality. 4. Team development is possible because of logical segmentation. A team of programmers will build the system, and the work has to be subdivided along clear boundaries.

Q. 4. Differentiate between computer architecture and computer organisation. Ans. Difference between computer architecture and computer organisation:

Q. 5. Draw a top-level view of computer components. Ans. Computer organization includes emphasis on system components, circuit design, logical design, structure of instructions, computer arithmetic, processor control, assembly programming and methods of performance enhancement.

Diagram : Top-level view of computer components

Q. 6. Write typical physical realisations of architecture. Ans. Important types of bus architecture used in a computer system are: (i) PCI bus (ii) ISA bus (iii) Universal serial bus (USB) (iv) Accelerated graphics port (AGP).

PCI bus : PCI stands for Peripheral Component Interconnect. It was developed by Intel, and today it is a widely used bus architecture. The PCI bus can operate with either a 32-bit or 64-bit data bus and a full 32-bit address bus. ISA Bus : ISA stands for Industry Standard Architecture. Most PCs contain an ISA slot on the main board to connect either an 8-bit ISA card or a 16-bit ISA card. USB : It is a high-speed serial bus. It has a higher data transfer rate than that of a serial port. Several devices can be connected to it in a daisy-chain fashion. AGP : It is a 32-bit expansion slot or bus specially designed for video cards.

Q. 7. What is a Channel? Ans. A channel is a data transfer technique. This technique has traditionally been used on mainframe computers and is also becoming more common on smaller systems. It controls multiple high-speed devices. It combines the features of multiplexer and selector channels and provides a connection to a number of high-speed devices.

Q. 8. Draw the machine architecture of 8086. Ans.

Q. 9. Explain about the computer Organisation.

Ans. Computer organisation is concerned with the way the hardware components operate and the way they are connected together to form the computer system. The various components are assumed to be in place, and the task is to investigate the organisational structure. It includes emphasis on the system components, circuit design, logical design, structure of instructions, computer arithmetic, processor control, assembly programming and methods of performance enhancement.

Q. 10. Explain the significance of layered architecture. Ans. Significance of layered architecture: In layered architecture, complex problems can be segmented into smaller and more manageable form. Each layer is specialized for a specific function. Team development is possible because of logical segmentation. A team of programmers will build the system, and the work has to be subdivided along clear boundaries.

Q. 11. How can you evaluate the performance of processor architecture?

Ans. Processor architectures can be compared by the average number of clock cycles needed per instruction. The 8086 and 8088 took an average of 12 cycles to execute a single instruction. The 286 and 386 take about 4.5 cycles per instruction. The 486 and most fourth-generation Intel-compatible processors, such as the AMD 5x86, drop the rate further, to about 2 cycles per instruction. The latest processors are the Pentium Pro, Pentium II/III/4/Celeron and Athlon/Duron: these P6 and P7 processors can execute as many as three or more instructions per cycle.

Q. 12. Explain the various types of performance metrics.

Ans. Performance metrics include availability, response time, channel capacity, latency and completion time.
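A minimal sketch of how two of these metrics can be computed; the figures, function names and the formula CPU time = instruction count x CPI / clock rate are illustrative assumptions, not values from the text:

    def cpu_time(instruction_count, cpi, clock_hz):
        """CPU (completion) time = instruction count x cycles per instruction / clock rate."""
        return instruction_count * cpi / clock_hz

    def throughput(jobs_completed, seconds):
        """Throughput = completed work per unit time."""
        return jobs_completed / seconds

    # hypothetical program: 2 million instructions, CPI of 2, 100 MHz clock
    t = cpu_time(2_000_000, 2, 100_000_000)
    print(t)                       # 0.04 s completion time
    print(throughput(50, t * 50))  # 25 jobs/second if 50 such jobs run back to back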

Q. 13. Write a short note on cost/benefit in layered architecture design. Or H/W and S/W partitioning design:

Ans. One common architectural design pattern is based on layers. Layers are an architectural design pattern in which an application is decomposed into groups of subtasks such that each group of subtasks is at a particular level of abstraction. A large system requires decomposition. One way to decompose a system is to segment it into collaborating objects. These objects are then grouped to provide related types of services, and these groups are interfaced with each other for inter-communication, which results in a layered architecture. The traditional 3-tier client-server model, which separates application functionality into three distinct abstractions, is an example of layered design. The three layers include data, business rules and the graphical user interface. Similar are the OSI seven-layer networking model and the Internet protocol stack, both based on layered architecture. The following are the benefits of layered architecture: 1. Complex problems can be segmented into smaller and more manageable form.

2. Team development is possible because of logical segmentation. A team of programmers will build the system, and the work has to be subdivided along clear boundaries.

3. Upper layers can share the services of a lower layer. Thus layering allows us to reuse functionality. 4. Each layer is specialized for a specific function.

Late source code changes should not ripple through a layered system. Similar responsibilities should be grouped to help understandability and maintainability. Layers are implemented as software to isolate each disparate concept or technology. The layers should isolate at the conceptual level. By isolating the database from the communication code, we can change one or the other with minimum impact on the other. Each layer of the system deals with only one concept. The layered architecture can have many beneficial effects on an application if it is applied in the proper way. The concept of the architecture is simple and easy to explain to team members, and so demonstrates where each object's role fits into the team. With the use of layered architecture, the potential for reuse of many objects in the system can be greatly increased.

Q. 14. Explain the machine architecture of the 8085 processor. Ans. The 8085 is an 8-bit general-purpose microprocessor that can address up to 64K words of memory. It requires a 5 V supply and can operate with a 3 MHz single-phase clock. This processor has an 8-bit data bus and a 16-bit address bus; the data bus is multiplexed with the address bus.

That means the 8-bit data bus is used as the lower 8 bits of the address bus whenever an address has to be carried on the address bus.

The arithmetic logic unit includes an 8-bit accumulator, an 8-bit temporary register, arithmetic and logic circuits and five flags. These five flags are used to indicate certain conditions, such as overflow or carry, that arise during arithmetic and logical operations. This processor has six general-purpose registers named B, C, D, E, H and L. These registers can be combined in pairs as BC, DE and HL in order to perform 16-bit operations. The accumulator is named A, and one of the operands in an instruction may reside in the accumulator register. The stack pointer and program counter are 16-bit. The stack pointer (SP) is used by the programmer to maintain a stack in memory. The program counter (PC) is used to keep track of the address of the instruction in memory that has to be executed next. The increment/decrement address latch is also 16-bit. The instructions of the processor can be classified into the following categories: 1. Data transfer 2. Arithmetic operations 3. Logical operations 4. Branching operations 5. Machine control operations 6. Assembler directives.

Q. 15. Write a note on: (a) VLIW architecture (b) Superscalar processor.

Ans. (a) Very long instruction word (VLIW) is a modification over superscalar architecture. VLIW architecture implements instruction-level parallelism (ILP). A VLIW processor fetches a very long instruction word containing several operations and dispatches them for parallel execution to different functional units. The width of the instruction varies from 128 to 1024 bits. VLIW architecture offers static scheduling, whereas superscalar architecture offers dynamic (run-time) scheduling. That means VLIW offers a defined plan of execution of instructions by the functional units. Due to static scheduling, VLIW architecture can handle up to eight operations per clock cycle, while superscalar architecture can handle up to five operations at a time. VLIW architecture needs complete knowledge of the hardware, such as the processor and its functional units. It is fast but inefficient for object-oriented and event-driven programming; in event-driven and object-oriented programming, superscalar architecture is used. Hence VLIW and superscalar architectures are important in different respects. (b) Superscalar Processor : The scalar processor executes one instruction on one set of operands at a time. The superscalar architecture allows the execution of multiple instructions at the same time in different pipelines. Here multiple processing elements are used for different instructions at the same time, and pipelining is also implemented in each processing element. The instruction fetching unit fetches multiple instructions at a time from the cache. The instruction decoding unit checks the independence of these instructions so that they can be executed in parallel. There should be multiple execution units so that multiple instructions can be executed at the same time. The slowest stage among fetch, decode and execute will determine the overall performance of the system. Ideally these three stages should be equally fast; practically, the execution stage is the slowest and drastically affects the performance of the system.

Q. 16. Write a note on following: (i) Pentium Processor - (ii) Server System

Ans. (i) Pentium processor : The Pentium processor, with a superscalar architecture, came as a modification of the 80486 and 8086. It is based on CISC and uses two pipelines for integer processing so that two instructions can be processed simultaneously. The 80486 processor had only an adder in its on-chip floating point unit; on the other side, the Pentium processor has an adder, multiplier and divider in the on-chip floating point unit. That means the Pentium processor can do multiplication and division faster. Separate data and code caches of 8 KB each exist on chip. Dual independent bus (DIB) architecture divides the bus into a front-side bus and a back-side bus. The back-side bus transfers data from the L2 cache to the CPU and vice versa. The front-side bus is used to transfer data from the CPU to main memory and to other components of the system. The Pentium processor uses a write-back policy for cache data, while the 80486 uses a write-through policy for cache data. Other common types of processors are AMD and Cyrix, although these two types of processors are less powerful compared to the Pentium processor.

(ii) Server System : A system is termed a server or a client depending upon the software used on that machine. Suppose the Windows 2003 Server operating system is installed on a machine; that machine will be termed a server. If Windows 95 is installed on the same machine, that machine is termed a client. Server machines, however, use specialised hardware meant for faster processing. A server provides services to other machines, called clients, attached to the server. Different types of servers are network servers, web servers, database servers and backup servers. Server systems have powerful computing capability, high performance and higher clock speeds. These systems have good fault-tolerance capability using disk mirroring, disk striping and RAID concepts. These systems have backup power supplies with hot swap. IBM and Sun provide different types of servers for different uses.

Q. 17. What is the principle of performance and scalability in Computer Architecture? Ans. Computer architecture aims at good performance of the computer system. Implementing concurrency can enhance the performance. The concept of concurrency can be implemented as parallelism or as multiple processors within a computer system. Computer performance is measured by the total time needed to execute an application program. Another factor that affects the performance is the speed of memory; that is the reason current-technology processors have their own cache memory. Scalability is required in the case of multiprocessors to have good performance. Scalability means that as the cost of a multiprocessor increases, the performance should also increase in proportion. The size, access time and speed of memories and buses play a major role in the performance of the system.

Q. 18. What is the evolution of computer architecture? Ans. Computer architecture involves both hardware organisation and programming/software requirements. As seen by an assembly language programmer, computer

architecture is abstracted by an instruction set, which includes opcodes (operation codes), addressing modes, registers, virtual memory, etc. From the hardware implementation point of view, the abstract machine is organised with CPUs, caches, buses, microcode, pipelines, physical memory, etc. Therefore, the study of architecture covers both instruction set architectures and machine implementation organisation. Over the past four decades, computer architecture has gone through evolutionary rather than revolutionary changes; sustaining features are those that were proven performance deliverers. We started with the von Neumann architecture built as a sequential machine executing scalar data. The sequential computer was improved from bit-serial to word-parallel operations, and from fixed-point to floating-point operations. The von Neumann architecture is slow due to sequential execution of instructions in programs.

Q. 19. What is parallelism and pipelining in computer Architecture?

Ans. LOOK-AHEAD, PARALLELISM AND PIPELINING IN COMPUTER ARCHITECTURE : Look-ahead techniques were introduced to prefetch instructions in order to overlap I/E (instruction fetch and execute) operations and to enable functional parallelism. Functional parallelism was supported by two approaches: one is to use multiple functional units simultaneously, and the other is to practice pipelining at various processing levels. The latter includes pipelined instruction execution, pipelined arithmetic computation, and memory-access operations. Pipelining has proven especially attractive in performing identical operations repeatedly over vector data strings. Vector operations were originally carried out implicitly by software-controlled looping using scalar pipeline processors.

Q. 20. How many cycles are required to execute an instruction for the 8086, 8088, Intel 286, 386, 486, Pentium, K6 series, Pentium II/III/4/Celeron, and Athlon/Athlon XP/Duron?

Ans. The time required to execute instructions for the different processors is as follows: 8086 and 8088 : They took an average of 12 cycles to execute a single instruction. 286 and 386 : They improve this rate to about 4.5 cycles per instruction. 486 : The 486 and most other fourth-generation Intel-compatible processors, such as the AMD 5x86, drop the rate further, to about 2 cycles per instruction. Pentium, K6 series : The Pentium architecture and other fifth-generation Intel-compatible processors, such as those from AMD and Cyrix, include twin instruction pipelines and other improvements that provide for operation at one or two instructions per cycle. Pentium Pro, Pentium II/III/4/Celeron, and Athlon/Athlon XP/Duron : These P6 and P7 (sixth- and seventh-generation) processors can execute as many as three or more instructions per cycle.

Q. 21. What is cost/benefit in layered architecture design?

Or Write the functional view of the computer. What are the possible computer operations? Ans. A larger system requires decomposition. One way to decompose a system is to segment it into collaborating objects. These objects are grouped to provide related types of services. Then these groups are interfaced with each other for inter-communication, and that results in a layered architecture. The following are the benefits of layered architecture:

1. Complex problems can be segmented into smaller and more manageable form. 2. Team development is possible because of logical segmentation. A team of programmers will build the system, and the work has to be subdivided along clear boundaries. 3. Upper layers can share the services of a lower layer. Thus layering allows us to reuse functionality. 4. Each layer is specialized for a specific function.

5. Late source code changes should not ripple through the system because of layered architecture. 6. Similar responsibilities should be grouped to help understandability and maintainability. 7. A message that moves downwards between layers is called a request. A client issues a request to layer I; if layer I cannot fulfil it, it delegates the request to layer I-1. 8. Messages that move upward between layers are called notifications. A notification could start at layer I; layer I then formulates and sends a message (notification) to layer I+1. 9. Layers are logical places to keep information caches. Requests that normally travel down through several layers can be cached to improve performance. 10. A system's programming interface is often implemented as a layer. Thus, if two applications or inter-application elements need to communicate, placing the interface responsibilities into dedicated layers can greatly simplify the task and make them more easily reusable.

Layers are implemented as software to isolate each disparate concept or technology. The layers should isolate at the conceptual level. By isolating the database from the communication code, we can change one or the other with minimum impact on the other. Each layer of the system deals with only one concept. The layered architecture can have many beneficial effects on an application if it is applied in the proper way. The concept of the architecture is simple and easy to explain to team members, and so demonstrates where each object's role fits into the team. With the use of layered architecture, the potential for reuse of many objects in the system can be greatly increased. The best benefit of this layering is that it makes it easy to divide work along layer boundaries; it is easy to assign different teams or individuals to the work of coding the layers in layered architectures, since the interfaces are identified and understood well in advance of coding. Performance of a system is a measure of speed and throughput. The higher the cost involved in manufacturing a computer, the higher the performance, as shown in the figure.

The personal computer is the cheapest in terms of cost among servers, mainframes and supercomputers; the supercomputer is the costliest one. The same is the hierarchy for the performance of the systems. Most simple applications can be executed on personal computers; for faster processing, servers, mainframes and supercomputers are used. Sometimes using too many I/O devices increases the cost but decreases the performance of a personal computer; that is termed diminishing of performance with increase in cost (sublinear diminishing). A SCSI adapter increases the cost of the system but also increases the performance of a server, which is termed superlinear economy in the case of a server. The ideal case is termed the linear representation, where performance increases in the same proportion as cost. These are represented in the graph shown in the figure. Q. 1. Define ASCII code.

Ans. ASCII stands for American Standard Code for Information Interchange. It is a widely accepted standard alphanumeric code used in microcomputers. The ASCII 7-bit code represents 2^7 = 128 different characters. These characters comprise 26 uppercase letters (A to Z), 26 lowercase letters (a to z), 10 numbers (0 to 9), 33 special characters and symbols, and 33 control characters. The ASCII 7-bit code is divided into two portions: the leftmost 3-bit portion is called the zone bits, and the 4-bit portion on the right is called the numeric bits. The ASCII 8-bit version can be used to represent a maximum of 256 characters.
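A small Python illustration of the zone/numeric split described above (the characters chosen are arbitrary examples):

    for ch in ("A", "a", "9"):
        code = ord(ch)                      # 7-bit ASCII value
        zone = code >> 4                    # leftmost 3 bits (zone bits)
        numeric = code & 0b1111             # rightmost 4 bits (numeric bits)
        print(ch, format(code, "07b"), format(zone, "03b"), format(numeric, "04b"))
    # 'A' -> 1000001 (zone 100, numeric 0001); '9' -> 0111001 (zone 011, numeric 1001)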

Q. 2. What is EBCDIC? Ans. EBCDIC stands for Extended Binary Coded Decimal Interchange Code. It is a standard code that uses 8 bits to represent each of 256 alphanumeric characters and is used on IBM mainframes. The EBCDIC eight-bit code is divided into two parts: the first four bits (on the left) are called the zone and represent the category of the character, and the last four bits (on the right) are called the digits and identify the specific character.

Q. 3. Write a short note on: (i) Excess-3 (ii) Gray code. Ans. Excess-3 : Excess-3 is a non-weighted code used to express decimal numbers. The code derives its name from the fact that each binary code word is the corresponding 8421 code plus 0011 (3). Excess-3 representation of decimal numbers 0 to 9 — Example:

Gray Code : Gray coding is an important code and is known for its speed. This code is relatively free from errors. In binary coding or 8421 BCD, counting from 7 (0111) to 8 (1000) requires 4 bits to be changed simultaneously. Gray coding avoids this by allowing only one bit to change between subsequent numbers.
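A short sketch of both codes; the helper names are hypothetical, but the rules follow the text: Excess-3 adds 3 (0011) to the BCD value, and a Gray code can be obtained from binary with a single XOR so that successive values differ in exactly one bit:

    def excess_3(digit):
        """Excess-3 code of a decimal digit: 8421 BCD value plus 3."""
        return format(digit + 3, "04b")

    def binary_to_gray(n):
        """Gray code of n: adjacent integers differ in exactly one bit."""
        return n ^ (n >> 1)

    print([excess_3(d) for d in range(10)])                     # 0011 ... 1100
    print([format(binary_to_gray(n), "04b") for n in (7, 8)])   # 0100, 1100 (one bit apart)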

Q. 4. What is a shift register in a digital computer? Ans. Shift registers are sequential logic circuits used to shift the data in registers in both directions. Shift registers are designed as a group of flip-flops connected together so that the output from one flip-flop becomes the input to the next flip-flop. The flip-flops are driven by a common clock signal and can be set or reset simultaneously. Shift registers can be connected to form different types of counters.
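A behavioural sketch of a serial-in shift register; the 4-bit width, initial contents and input stream are arbitrary choices for illustration:

    def shift_right(register, serial_in):
        """On each clock pulse every flip-flop takes the value of the one before it."""
        return [serial_in] + register[:-1]   # new bit enters, last bit is shifted out

    reg = [0, 0, 0, 0]
    for bit in (1, 0, 1, 1):                 # serial input stream, one bit per clock
        reg = shift_right(reg, bit)
        print(reg)
    # after 4 clock pulses the register holds the input stream: [1, 1, 0, 1]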

Q. 5. Which logic is known as universal logic? Ans. NAND logic and NOR logic gates are universal logic. It is possible to implement any logic expression using NAND and NOR gates. This is because NAND and NOR gates can be used to perform each of the Boolean operations INVERT, AND and OR. The NAND symbol is the same as the AND gate symbol except that it has a small circle at the output. This small circle represents the inversion operation.
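A quick sketch showing how INVERT, AND and OR can each be built from NAND alone, which is what makes it a universal gate (the function names are illustrative):

    def nand(a, b):
        return 1 - (a & b)

    def not_(a):            # INVERT: NAND a signal with itself
        return nand(a, a)

    def and_(a, b):         # AND: NAND followed by an inverter
        return not_(nand(a, b))

    def or_(a, b):          # OR: invert both inputs, then NAND (De Morgan)
        return nand(not_(a), not_(b))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, not_(a), and_(a, b), or_(a, b))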

Q. 6. What is the time called during which the D-input of a D flip-flop must not change after the clock is applied? Ans. This time is known as the hold time.

Q. 8. Addition of (1111)2 to a 4-bit binary number A results in: (i) incrementing A (ii) addition of (F)H (iii) no change (iv) decrementing A.

Ans. Addition of (F)H. Note that since (1111)2 is the 2's complement of 0001, adding it to A (with the carry out of the 4-bit register discarded) is also equivalent to decrementing A.

Q. 9. Register A holds the 8-bit binary 11011001. Determine the B operand and the logic micro-operation to be performed in order to change the value in A to:

Q. 13. An 8-bit register R contains the binary value 10011100. What is the register value after an arithmetic shift right? Starting from the initial number 10011100, determine the register value after an arithmetic shift left, and state whether there is an overflow.
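A hedged sketch of the two arithmetic shifts on an 8-bit value, applied to the number in the question; treating a change of the sign bit on a left shift as overflow is the usual convention assumed here:

    def ashr(value, bits=8):
        """Arithmetic shift right: the sign bit is copied into the vacated position."""
        sign = value & (1 << (bits - 1))
        return sign | (value >> 1)

    def ashl(value, bits=8):
        """Arithmetic shift left; overflow if the sign bit changes."""
        result = (value << 1) & ((1 << bits) - 1)
        overflow = ((value ^ result) >> (bits - 1)) & 1
        return result, overflow

    r = 0b10011100
    print(format(ashr(r), "08b"))                     # 11001110
    print(format(ashl(r)[0], "08b"), ashl(r)[1])      # 00111000 1  -> overflow occurs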

Q. 15. Write instructions (8085) to load 0011 into the accumulator and decrement the accumulator. Ans.

Q. 16. Draw the flowchart for the add and subtract operations. Ans. The flowchart for the hardware algorithm is presented in the figure. The two signs As and Bs are compared by an exclusive-OR gate. If the output of the gate is 0, the signs are identical; if it is 1, the signs are different. For an add operation, identical signs dictate that the magnitudes be added. For a subtract

operation, different signs dictate that the magnitudes be added. The magnitudes are added with the micro-operation EA <- A + B, where EA is a register that combines E and A. The carry in E after the addition constitutes an overflow if it is equal to 1. The value of E is transferred into the add-overflow flip-flop AVF.
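A simplified behavioural sketch of the signed-magnitude add/subtract algorithm described above, modelling the registers as plain integers (the function and parameter names are assumptions made for illustration):

    def add_sub_magnitude(As, A, Bs, B, subtract, bits=8):
        """Signed-magnitude add/subtract of (As, A) and (Bs, B); magnitudes are `bits` wide."""
        Bs_eff = Bs ^ 1 if subtract else Bs     # subtraction flips the sign of B
        if (As ^ Bs_eff) == 0:                  # identical signs -> add the magnitudes
            EA = A + B                          # EA combines carry E and register A
            AVF = EA >> bits                    # carry out of the magnitude = overflow
            return As, EA & ((1 << bits) - 1), AVF
        # different signs -> subtract the smaller magnitude from the larger
        if A >= B:
            return As, A - B, 0
        return Bs_eff, B - A, 0

    print(add_sub_magnitude(0, 90, 0, 200, subtract=False))  # (0, 34, 1): magnitude overflow
    print(add_sub_magnitude(0, 90, 0, 200, subtract=True))   # (1, 110, 0): result is -110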

Q. 17. Write the algorithm for addition and subtraction of numbers.

Ans. Addition (A + B) and subtraction (A - B) of floating-point numbers are performed according to the following algorithm: Step 1. If either of the two numbers A or B is 0, then the non-zero number is the result; normalize the result to represent it in the computer format. Step 2. Align the mantissas of both numbers so that the exponent values of both numbers become the same. Step 3. The now-common exponent part can be taken out as a common factor in order to do the addition or subtraction between the mantissas of the two numbers. Step 4. The sum is (MA + MB) x r^exponent or the difference is (MA - MB) x r^exponent. Step 5. Normalise the result.
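An illustrative model of these steps using base-10 (mantissa, exponent) pairs; real hardware uses binary normalised fractions, so this only demonstrates the align/operate/normalise flow:

    def fp_add_sub(a, b, subtract=False, r=10):
        """(mantissa, exponent) arithmetic following the steps in the text (base r)."""
        (ma, ea), (mb, eb) = a, b
        if ma == 0:                                    # step 1: a zero operand
            return (-mb, eb) if subtract else b
        if mb == 0:
            return a
        # step 2: align mantissas so both numbers share the larger exponent
        if ea > eb:
            mb, eb = mb / r ** (ea - eb), ea
        elif eb > ea:
            ma, ea = ma / r ** (eb - ea), eb
        m = ma - mb if subtract else ma + mb           # steps 3-4: operate on mantissas
        while abs(m) >= r:                             # step 5: normalise the result
            m, ea = m / r, ea + 1
        while m != 0 and abs(m) < 1:
            m, ea = m * r, ea - 1
        return m, ea

    print(fp_add_sub((5.0, 2), (4.0, 1)))                  # (5.4, 2)  i.e. 500 + 40 = 540
    print(fp_add_sub((5.0, 2), (4.0, 1), subtract=True))   # (4.6, 2)  i.e. 500 - 40 = 460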

Q. 18. Discuss Booth's algorithm for binary multiplication. Ans. Booth's multiplication algorithm is used to multiply two signed or unsigned numbers by using 2's complement. Andrew D. Booth

invented the algorithm in a quest to find a fast way to multiply numbers; in the technology of that time, shifting was faster than addition, and this algorithm is based on shifting, so it is a fast one. The algorithm works for both signed and unsigned numbers. Booth's algorithm works according to the following steps: Step 1. Convert a negative multiplier or negative multiplicand to a binary number in 2's complement representation. If the multiplier or multiplicand is positive, simply represent it in binary. Step 2. The multiplier and multiplicand are placed in registers QR and BR respectively. Step 3. Add 0s to the left of the multiplier in a quantity equal to the number of bits in the multiplicand; all these 0s are stored in register AC, which is placed logically to the left of register QR. Step 4. Flip-flop Q1 is placed logically to the right of the least significant bit Q0 (rightmost bit) of register QR. The sequence counter (SC) is set to a value equal to the number of bits in register QR (the multiplier). Initially Q1 is set to the value 0.

Step 5. Check the two rightmost bits (Q0 Q1) of the resultant binary number. If Q0 Q1 = 00 or Q0 Q1 = 11, the bits in registers AC, QR and Q1 are shifted to the right as an arithmetic shift right. The arithmetic shift right does not change the sign bit (the leftmost bit of register AC). Else if Q0 Q1 = 01, add the contents of BR to AC and store the result in AC; then the arithmetic shift right operation is applied on AC, QR and Q1. Else if Q0 Q1 = 10, subtract the contents of BR from AC and store the result in AC; subtraction of BR from AC is performed by adding BR' + 1 (the 2's complement of BR) to AC. The arithmetic shift right operation is then applied on AC, QR and Q1. Step 6. The sequence counter (SC) is decremented by 1.

Step 7. Steps 5 and 6 are repeated until the sequence counter (SC) value becomes 0. Step 8. The final result of the multiplication is stored in the AC and QR register pair in combination.
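A compact Python model of the register-level steps above, with AC, QR, Q1 and SC as described and the bit width n as a parameter. It is a sketch for checking the algorithm by hand, not a hardware description:

    def booth_multiply(multiplicand, multiplier, n):
        """Booth's algorithm for n-bit two's-complement operands; returns the signed product."""
        mask = (1 << n) - 1              # keeps each register at n bits
        BR = multiplicand & mask         # multiplicand register
        AC, QR, Q1 = 0, multiplier & mask, 0
        neg_BR = (-multiplicand) & mask  # 2's complement of BR, used for the subtract step

        for _ in range(n):               # SC counts down from n to 0
            Q0 = QR & 1                  # least significant bit of QR
            if (Q0, Q1) == (1, 0):       # 10 -> AC = AC - BR
                AC = (AC + neg_BR) & mask
            elif (Q0, Q1) == (0, 1):     # 01 -> AC = AC + BR
                AC = (AC + BR) & mask
            # arithmetic shift right of AC:QR:Q1 (sign bit of AC is preserved)
            sign = AC >> (n - 1)
            Q1 = Q0
            QR = ((AC & 1) << (n - 1)) | (QR >> 1)
            AC = (sign << (n - 1)) | (AC >> 1)

        product = (AC << n) | QR         # AC and QR together hold the 2n-bit result
        if product & (1 << (2 * n - 1)):
            product -= 1 << (2 * n)      # interpret as a signed 2n-bit value
        return product

    print(booth_multiply(4, 9, 5))       # 36
    print(booth_multiply(-7, 3, 5))      # -21

Running it for 4 x 9 with 5-bit registers reproduces the combined AC and QR value 0000100100 = 36 used in the worked example that follows.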

Q. 19. Multiply 4 x 9 by using Booth's multiplication algorithm. Ans. Solution: The multiplier 9 (01001) is stored in register QR. Register AC is initialized to zero, i.e. 00000 (AC is a 5-bit register in this example). The flip-flop Q1 is initialised to 0. The multiplicand 4 (00100) is stored in register BR. Here ASHR stands for arithmetic shift right, and the Q0 flip-flop holds the rightmost bit of register QR from the previous step. Subtraction of the BR register value is equivalent to adding BR' + 1 according to 2's complement rules. This algorithm requires the size of BR, QR and AC to be one bit more than the maximum number of bits required to represent the multiplicand and multiplier. In 4 x 9, 9 can be represented using 4 bits; that is why QR, BR and AC have a 5-bit size.

After the final step, the combined AC and QR register pair = 0000100100. This value is the binary equivalent of 36.

Q. 20. Write a short note on the fast carry adder. Ans. A fast carry adder adds two binary numbers faster by using carry look-ahead. It works by creating two signals, propagate (P) and generate (G), for each bit position. The block-based adders include the carry-skip adder, which determines P and G values for each block rather than each bit, and the carry-select adder, which pre-generates sum and carry values for either possible carry input to the block.
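A small sketch of the propagate/generate idea for a 4-bit carry look-ahead addition (the bit ordering and width are illustrative choices):

    def carry_lookahead_add(a, b, n=4):
        """n-bit addition where every carry is computed directly from P and G signals."""
        A = [(a >> i) & 1 for i in range(n)]
        B = [(b >> i) & 1 for i in range(n)]
        P = [A[i] ^ B[i] for i in range(n)]   # propagate: P(i) = A(i) XOR B(i)
        G = [A[i] & B[i] for i in range(n)]   # generate:  G(i) = A(i) AND B(i)

        carry = [0] * (n + 1)                 # carry[0] is the carry-in
        for i in range(n):
            # C(i+1) = G(i) + P(i).C(i), which expands into a two-level expression
            carry[i + 1] = G[i] | (P[i] & carry[i])

        total = sum((P[i] ^ carry[i]) << i for i in range(n)) | (carry[n] << n)
        return total

    print(carry_lookahead_add(0b1011, 0b0110))   # 11 + 6 = 17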

Q. 21. What are the different registers used in the 8085 processor, and what are the various categories of instruction in the 8085 processor? Or

What are the various issues for designing the instruction set of a processor? Ans. The 8085 is an 8-bit general-purpose microprocessor that can address up to 64K words of memory. This processor requires a 5-volt power supply and can operate with a 3 MHz single-phase clock. This processor has an 8-bit data bus and a 16-bit address bus; the data bus is multiplexed with the address bus. That means the 8-bit data bus is used as the lower 8 bits of the address bus whenever an address has to be carried on the address bus. The arithmetic logic unit includes an 8-bit accumulator, an 8-bit temporary register, arithmetic and logic circuits and five flags. These five flags are used to indicate certain conditions, such as overflow or carry, that arise during arithmetic and logical operations. This processor has six general-purpose registers named B, C, D, E, H and L. These registers can be combined in pairs as BC, DE and HL in order to perform 16-bit operations. The accumulator is named A, and one of the operands in an instruction may reside in the accumulator register. The stack pointer and program counter are 16-bit. The stack pointer (SP) is used by the programmer

to maintain a stack in memory. The program counter (PC) is used to keep track of the address of the instruction in memory that has to be executed next. The increment/decrement address latch is also 16-bit.

The instructions of the processor can be classified into the following categories: 1. Data transfer 2. Arithmetic operations 3. Logical operations 4. Branching operations

5. Machine control operations 6. Assembler directives. Each instruction contains an opcode and may have operands. The opcode is 8-bit, and an operand can be stored in an 8-bit register or a 16-bit register pair.

Q. 22. What do you mean by error detection, and what are the different techniques used for error detection? Ans. Data written to memory should be written accurately; that means errors should not be introduced while writing the data. Similarly, errors should not be introduced during the transmission of data. A single-bit error means that only one bit of the given data is changed from 0 to 1 or from 1 to 0. These types of errors are easy to detect and correct. A burst error means that two or more bits of the data unit have changed from 0 to 1 or from 1 to 0. These types of errors are difficult to detect and correct. The four most common

methods used for error detection are: Vertical redundancy check (VRC), Longitudinal redundancy check (LRC), Cyclic redundancy check (CRC) and Checksum. 1. Vertical Redundancy Check (VRC) : This method of error detection is also called a parity check. In this method an extra bit, called the parity bit, is appended to the data so that the total number of 1s becomes even. Suppose we want to transmit 10100010. The parity generator counts the 1s and appends a 1 as the parity bit so that the total number of 1s in the data becomes even; hence the data becomes 110100010. At the receiving end, the even-parity function checks that the number of 1s is even. If the number of 1s is not even, that means the data has been damaged. This method is used for single-bit error detection. The other system may be odd parity, which makes the total number of 1s odd using an odd-parity generator. 2. Longitudinal Redundancy Check (LRC) : To transmit four bytes of data, each byte is stored on one line so that the four bytes are stored on four lines, one below the other, as shown below:

Now a fifth row, called the LRC, is created by making the parity of each column even. The LRC is then appended at the end of the data and transmitted to the receiver. Similarly, the LRC of the four bytes of data is checked at the receiving side. If there is a difference between the sent and received LRC, that means an error has been introduced. The LRC technique is used for detecting burst errors. 3. Cyclic Redundancy Check (CRC) : CRC is based on binary division. In this technique, the CRC remainder is appended at the end of the data so that the resulting data is exactly divisible by a binary number. At the receiving end, the data is divided by the same binary number. If there is no remainder after the division, that means the data has correctly reached the destination, while a remainder after the division indicates that the data has been damaged during transmission. The CRC technique is used for detecting burst errors. 4. Checksum : The checksum generator subdivides the data unit into M segments

such that each segment has N bits. All the segments are added together using 1's complement arithmetic to get the sum. The sum is complemented and is called the checksum. The checksum is appended at the end of the data. The receiver subdivides the data unit, including the checksum field, into M segments so that each segment has N bits, exactly as the sender did; all segments are added using 1's complement arithmetic to get the sum, and the sum is complemented. If this value is zero, the data is correct; otherwise the data requires retransmission. This technique is also used for burst error detection.
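Two of the four methods, even parity (VRC) and the 1's-complement checksum, sketched in code; the data values and the 8-bit segment size are arbitrary examples, not taken from the text:

    def even_parity_bit(bits):
        """Return the parity bit that makes the total number of 1s even (VRC)."""
        return sum(bits) % 2

    def ones_complement_checksum(segments, n_bits=8):
        """1's-complement checksum over equal-size segments, as in the checksum method."""
        mask = (1 << n_bits) - 1
        total = 0
        for seg in segments:
            total += seg
            # wrap the end-around carry back into the sum (1's complement addition)
            total = (total & mask) + (total >> n_bits)
        return (~total) & mask          # complement of the sum is the checksum

    data = [1, 0, 1, 0, 0, 0, 1, 0]
    print(even_parity_bit(data))        # 1 -> appended so the word has an even number of 1s

    segs = [0b10110011, 0b01001100, 0b11100001]
    chk = ones_complement_checksum(segs)
    # receiver adds all segments plus the checksum; a result of 0 means no error detected
    print(ones_complement_checksum(segs + [chk]) == 0)   # True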

Q. 23. Write an algorithm for the summation of a set of numbers. Ans. The sum is a sequential operation that requires a sequence of add and shift micro-operations. Alternatively, the addition of n numbers can be done in one micro-operation by

means of a combinational circuit that performs the sum all at once. An array addition can be implemented with a combinational circuit. The augend and addends are a0, a1, a2, a3, ..., an-1. The following are the steps of summation of a set of numbers. Step 1. There is an array of n numbers a0, a1, a2, ..., an-1, and the result is placed in SUM. Step 2. The inputs a0, a1, a2, ..., an-1 are given to the combinational logic circuit, which produces the result. Step 3. The output is taken in SUM, and sometimes a carry is produced. Step 4. The carry is put in the carry flag. The total result of the sum is stored in SUM and the carry is stored in CARRY.

Q. 24. Write an algorithm for addition. Ans. Addition (A + B) of floating-point numbers is performed according to the following algorithm: Step 1. If either of the two numbers A or B is 0, then the non-zero number is the result; normalise the result to represent it in the computer format. Step 2. Align the mantissas of both numbers so that the exponent values of both numbers become the same. Step 3. The now-common exponent part can be taken out as a common factor in order to do the addition. Step 4. The sum is (MA + MB) x r^exponent. Step 5. Normalise the result.

Q. 25. Simplify the following Boolean functions using a three-variable map in sum-of-products form.

1. f(a, b, c) = Σ(1, 4, 5, 6, 7) 2. f(a, b, c) = Σ(0, 1, 5, 7) 3. f(a, b, c) = Σ(1, 2, 3, 6, 7) 4. f(a, b, c) = Σ(3, 5, 6, 7) 5. f(a, b, c) = Σ(0, 2, 3, 4, 6) Ans.

Q. 25. Simplify the Boolean function f(a, b, c, d) = Σ(0, 1, 2, 5, 8, 9, 10) using a four-variable map in sum-of-products and product-of-sums form. Verify the results of both using a truth table. Ans. Sum of Products (SOP): f(a, b, c, d) = Σ(0, 1, 2, 5, 8, 9, 10)


Q. 26. Explain De Morgan's theorems.

Ans. De Morgan's theorems are applicable to n variables, where n can have the value 2, 3, 4, etc. De Morgan's theorems for three variables are shown ahead: (A + B + C)' = A'.B'.C' and (A.B.C)' = A' + B' + C'. To prove the following identity: [(A + C).(B + D)]' = A'.C' + B'.D'. Let x = [(A + C).(B + D)]'.

Then x = (A + C)' + (B + D)' [De Morgan's theorem] = (A'.C') + (B'.D') [De Morgan's theorem] = A'.C' + B'.D'. The truth table for the second expression is given ahead. The equivalence between the entries in the columns (A + B)' and A'.B' proves the 2nd theorem.
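The theorem and the identity proved above can be checked exhaustively over all input combinations, for example:

    from itertools import product

    # Verify De Morgan's theorem and the identity proved above for all input values.
    ok_theorem = all((not (a or b or c)) == ((not a) and (not b) and (not c))
                     for a, b, c in product([False, True], repeat=3))
    ok_identity = all((not ((a or c) and (b or d))) ==
                      (((not a) and (not c)) or ((not b) and (not d)))
                      for a, b, c, d in product([False, True], repeat=4))
    print(ok_theorem, ok_identity)   # True True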

Q. 27. What is the universality of NAND and NOR gates? Ans. It is possible to implement any logic expression using only NAND gates. This is because a NAND gate can be used to perform each of the Boolean operations INVERT, AND and OR. The NAND symbol is the same as the AND gate symbol except that it has a small circle at the output. This small circle represents the inversion operation. Therefore the output expression of the two-input NAND gate is X = (A.B)'.

The INVERT, AND and OR gates have been constructed using NAND gates.

NOR is the same as the OR gate symbol except that it has a small circle at the output. The small circle represents the inversion operation. The Boolean expression and logic diagram of the two-input NOR gate are described ahead.

NAND and NOR are universal gates; either one can implement any logic gate or circuit.

Q. 28. Design a 4-bit binary incrementer circuit. Ans. The 4-bit binary incrementer adds the value 1 to its previous value. This circuit can act as a binary counter. Four half adders are cascaded serially. One input of the least significant half adder is connected to 1, and the other input is connected to the least significant bit of A (A3, A2, A1, A0), namely A0. The sum bit appears as S0, and the carry C0 of this adder is one input of the next adder. The other input of the next adder is A1; that produces sum S1 and carry C1. The complete operation is as shown ahead.
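A behavioural sketch of the cascaded half-adder incrementer described above; it models logic values only, and the bit names follow the text:

    def half_adder(a, b):
        """Half adder: returns (sum, carry)."""
        return a ^ b, a & b

    def increment_4bit(a3, a2, a1, a0):
        """4-bit incrementer built from four cascaded half adders."""
        s0, c0 = half_adder(a0, 1)      # least significant stage adds the constant 1
        s1, c1 = half_adder(a1, c0)     # each following stage adds the previous carry
        s2, c2 = half_adder(a2, c1)
        s3, c3 = half_adder(a3, c2)
        return c3, s3, s2, s1, s0       # carry-out plus the incremented 4-bit value

    print(increment_4bit(1, 0, 1, 1))   # 1011 + 1 -> (0, 1, 1, 0, 0)
    print(increment_4bit(1, 1, 1, 1))   # 1111 + 1 -> (1, 0, 0, 0, 0) with carry-out 1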

Q. 29. Register A holds the 8-bit number 11011001. Determine the operand B and the logic micro-operation to be performed in order to change the value in A to: (i) 01101101 (ii) 11111101 (iii) Starting from an initial value R = 11011101, determine the sequence of binary values in R after a logical shift left, followed by a circular shift right, followed by a logical shift right and a circular shift left. Ans. (i) 11011001 A register; 10110100 B register; 01101101 A register after the operation. The selective-complement operation complements the bits in register A where there is a 1 in the corresponding bit of register B. It does not affect the bit values in A where there is a 0 in the corresponding bit of register B. (ii) 11011001 A register; 00100100 B register;

11111101 A register after the operation. The selective-set operation sets a bit in register A to 1 where there is a 1 in the corresponding bit of register B. It does not affect the bit values in A where there is a 0 in the corresponding bit of register B. (iii) 11011101 R register; 10111010 R register after logical shift left; 01011101 R register after circular shift right; 00101110 R register after logical shift right; 01011100 R register after circular shift left.

Q. 30. Design a 4-bit common bus to transfer the contents of one register to another. Solution. A common bus is a means for transferring information from one register

to another. The 4-bit common bus is constructed with four multiplexers. The bus is used not only for transferring information from register to register but also for transferring information from register to memory, memory to register and memory to memory.

The number of multiplexers is four because there are 4 bits in each register used on the common bus. Moreover, there are four registers, named register A, register B, register C and register D. The size of each multiplexer is 4 x 1 because there are four registers. There are two selection lines, S1 and S0, in the 4 x 1 multiplexer.

These multiplexers select one of the registers, and its contents are placed on the common bus. The register is selected as shown in the function table.

Suppose the selection lines S1S0 = 00; that means the selection lines have selected register A. A0, the least significant bit of register A, is selected by MUX1; A1, the second least significant bit, by MUX2; A2, the third least significant bit, by MUX3; and A3, the most significant bit of register A, by MUX4, because the value of the selection lines at each multiplexer is S1S0 = 00. A1, A2 and A3 have not been drawn connected to MUX2, MUX3 and MUX4 because that would make the diagram visually complicated; in actual fact A1, A2 and A3 are connected to MUX2, MUX3 and MUX4. Also C1, C2 and C3 are connected to MUX2, MUX3 and MUX4.

That means one bit of data is selected by each multiplexer and is transferred to the common bus. Similarly, when the selection is S1S0 = 01, register B is selected and the contents of register B appear on the common bus. When the selection is S1S0 = 10, register C is selected and the contents of register C appear on the common bus. When the selection is S1S0 = 11, register D is selected and the contents of register D appear on the common bus.

Q. 31. Design a 4-bit arithmetic circuit that implements eight arithmetic operations. Ans. The 4-bit arithmetic circuit consists of four multiplexers of size 4 x 1 and four full adders, as shown in Figure 1.

The required arithmetic micro-operation is selected by the combination of the selection lines S1, S0 and the input carry Cin. 1. When S1S0 = 00, the I0 input of the four multiplexers is selected as output, i.e. B (B3, B2, B1, B0). If Cin = 0, the output D = A + B (add). If Cin = 1, the output D = A + B + 1 (add with carry). 2. When S1S0 = 01, the I1 input of the four multiplexers is selected as output, i.e. B' (the complement of B). If Cin = 0, the output D = A + B' (subtract with borrow). If Cin = 1, the output D = A + B' + 1 (subtract).

There are two 4-bit inputs A (A3, A2, A1, A0) and B (B3, B2, B1, B0) and a 4-bit output D (D3, D2, D1, D0). The four inputs from A are applied directly to the X (X3, X2, X1, X0) inputs of the full adders. The four inputs from B are connected to data input I0 of the four multiplexers, and their complements to data input I1. The logic input 0 is connected to data input I2 of the four multiplexers. The logic input 1 is connected to data input I3 of the four multiplexers. One of the four inputs of each multiplexer is selected as output by the two selection lines S1 and S0. The outputs from all four multiplexers are connected to the Y (Y3, Y2, Y1, Y0) inputs of the full adders. The input carry Cin is applied to the carry input of full adder FA1. The carry generated by each adder is connected to the next adder, and finally Cout is generated. The output generated by the full adders is represented by the expression D = X + Y + Cin. 3. When S1S0 = 10, the I2 input of the four multiplexers is selected as output (0000). If Cin = 0, the output D = A (transfer A); if Cin = 1, the output D = A + 1 (increment). 4. When S1S0 = 11, the I3 input of the four multiplexers is selected as output (1111), which is equivalent to the 2's complement of 1 (the 2's complement of binary 0001 is 1111). That means adding the 2's complement of 1 to A is equivalent to A - 1 (decrement) when Cin = 0. If Cin = 1, the output D = A (transfer A). The transfer A micro-operation is generated twice; hence there are only seven distinct micro-operations. Q. 1. What do you mean by memory hierarchy? Briefly discuss.

Ans. Memory is technically any form of electronic storage. Personal computer systems have a hierarchical memory structure consisting of auxiliary memory (disks), main memory (DRAM) and cache memory (SRAM). A design objective of computer system architects is to have the memory hierarchy work as though it were entirely composed of the fastest memory type in the system.

Q. 2. What is cache memory? Ans. Cache memory: If the active portions of the program and data are stored in a fast, small memory, the average memory access time can be reduced, thus reducing the total execution time of the program. Such a fast, small memory is referred to as cache memory. It is placed

between the CPU and main memory, as shown in the figure.

Q. 3. What do you mean by interleaved memory? Ans. The memory is partitioned into a number of modules connected to common memory address and data buses. A memory module is a memory array together with its own address and data registers. The figure shows a memory unit with four modules.

Q. 4. How many memory chips of (128 x 8) are needed to provide a memory capacity of 4096 x 16? Ans. The required memory capacity is 4096 x 16 and each chip is 128 x 8. Number of chips = (4096 x 16) / (128 x 8) = 32 x 2 = 64 chips: 32 rows of chips cover the 4096 words, and 2 chips per row make each word 16 bits wide.

Q. 5. Explain about main memory.

Ans. RAM is used as the main memory or primary memory in the computer. This memory is mainly used by the CPU, so it is termed primary memory; RAM is also referred to as the primary memory of the computer. RAM is volatile memory because its contents are erased after the electrical power is switched off. ROM also comes under the category of primary memory. ROM is non-volatile memory: its contents are retained even after the electrical power is switched off. ROM is read-only memory and RAM is read-write memory. Primary memory is high-speed memory; it can be accessed immediately and randomly.

Q. 6. What is meant by DMA? Ans. DMA : The transfer of data between a fast storage device such as a magnetic disk and memory is limited by the speed of the CPU. Removing the CPU from the path and letting the peripheral device manage the memory buses directly would improve the speed of transfer. This transfer technique is called Direct Memory Access (DMA). During a DMA transfer, the CPU is idle

and has no control of the memory buses. A DMA controller takes over the buses to manage the transfer directly between the I/O device and memory.

Q. 7. Write about DMA transfer. Ans. The DMA controller sits among the other components in a computer system. The CPU communicates with the DMA through the address and data buses, as with any interface unit. The DMA has its own address, which activates its DS (DMA select) and RS (register select) lines. Once the DMA receives the start control command, it can start the transfer between the peripheral device and memory.

Q. 8. Explain about interleaved memory. Ans. The memory is partitioned into a number of modules connected to common

memory address and data buses. A memory module is a memory array together with its own address and data registers. Each memory array has its own address register and data register. The address registers receive information from a common address bus and the data registers communicate with a bidirectional data bus. The two least significant bits of the address can be used to distinguish between four modules. The modular system permits one module to initiate a memory access while other modules are in the process of reading or writing a word, and each module can honor a memory request independent of the state of the other modules.

Q. 9. Differentiate between direct mapping and associative mapping. Ans. Direct mapping : The direct-mapped cache is the simplest form of cache and the easiest to check for a hit. Since there is only one possible place where any memory location can be cached, there is nothing to search. The line either

contains the memory information being looked for, or it does not. Associative mapping : An associative cache is a content-addressable memory. The cache memory is not accessed by its address; instead, this memory is accessed using its contents. Each line of cache memory accommodates the address and the contents of that address from main memory. A block of data is always transferred to cache memory instead of transferring the contents of a single memory location from main memory.

Q. 10. Define the terms: Seek time, Rotational delay, Access time.

Ans. Seek time : Seek time is the time in which the drive can position its read/write heads over any particular data track. Seek time varies for different accesses on the disk, so it is preferred to measure an average seek time. Seek time is always measured in milliseconds (ms).

Rotational delay : All drives have rotational delay. It is the time that elapses between the moment when the read/write head settles over the desired data track and the moment when the first byte of required data appears under the head. Access time : Access time is simply the sum of the seek time and the rotational latency time.

Q. 11. What do you mean by a DMA channel? Ans. DMA channel : A DMA channel is used to transfer data between main memory and a peripheral device. In order to perform the transfer of data, the DMA controller accesses the address and data buses. The DMA controller needs the usual circuits of an interface to communicate with the CPU and the I/O device. In addition, it needs an address register, a word count register, and a set of address lines. The address register and address lines are used for direct communication with memory; the word

count register specifies the number of words that must be transferred. The transfer may be done directly between the device and memory.

Figure 2 shows the block diagram of a typical DMA controller. The unit communicates with the CPU via the data bus and control lines. The registers in the DMA are selected by

the CPU through the address bus by enabling the DS (DMA Select) and RS (Register Select) inputs. The RD (Read) and WR (Write) inputs are bidirectional. When BG (Bus Grant) = 0, the CPU can communicate with the DMA registers through the data bus to read from or write to the DMA registers. When BG = 1, the CPU has relinquished the buses and the DMA can communicate directly with memory by specifying an address on the address bus and activating the RD or WR control. The DMA communicates with the external peripheral through the request and acknowledge lines using a handshaking procedure. The DMA controller has three registers: an address register, a word count register, and a control register. The address register contains an address to specify the desired location in memory; the address bits go through bus buffers onto the address bus, and the address register is incremented after each word that is transferred to memory. The word count register holds the number of words to be transferred; this register is decremented by one after each word transfer and internally tested for zero. The control register specifies the mode of transfer. All registers in the DMA appear to the CPU as I/O interface registers; thus the CPU can read from or write into the DMA registers under program control via the data bus.

Q. 13. A RAM chip of 4096 x 8 bits has two enable lines. How many pins are needed for the integrated circuit package? Draw a block diagram and label all inputs and outputs of the RAM. What is the main feature of random access memory? Ans.

(a) The total RAM capacity is 4096 x 8 and the size of each RAM chip is 1024 x 8; that means the total number of RAM chips required is 4096 / 1024 = 4, i.e. a total of 4 RAM chips of 1024 x 8 are required. The number of address lines required to address each RAM chip of size 1024 x 8 is calculated as follows: 2^n = 1024 gives n = 10, which means a 10-bit address is required to address each RAM chip of size 1024 x 8. An 8-bit data bus is required because the word size of the 1024 x 8 RAM chip is 8 bits. A 10-bit address bus addresses a 1024 x 8 RAM; the 11th and 12th bits are used to select one of the four RAM chips. Hence a 12-bit address bus is taken, with the 11th and 12th bits selecting one of the four RAM chips, as shown in the memory address diagram. Note: The RAM IC as described above is used in a

microprocessor system having 16 address lines and 8 data lines. Its enable-1 input is active when the A15 and A14 bits are 0 and 1, and its enable-2 input is active when the A13 and A12 bits are X and 0.

Q. 14. What shall be the range of addresses used by the RAM? Ans. The RAM chip is better suited for communication with the CPU if it has one or more control inputs that select the chip only when needed. In addition, there is a bidirectional data bus that allows the transfer of data either from memory to the CPU during a read operation or from the CPU to memory during a write operation. A bidirectional bus can be constructed with three-state buffers. A three-state buffer output can be placed in one of three possible states: a signal equivalent to logic 1, a signal equivalent to logic 0, or a high-impedance state. The logic 1 and 0 are normal digital signals. The high-impedance state behaves like an open circuit, which means that the output does not carry a signal and has no logic significance.

The block diagram of the RAM chip is shown in the figure. The capacity of the memory requires a 16-bit address and an 8-bit bidirectional data bus. The chip is selected through the chip-select inputs CS1 and CS2: CS1 is active when the A15 and A14 bits are 0 and 1, and CS2 is active when the A13 and A12 bits are X and 0. General functional table:

Q. 15. Design a CPU that meets the following specifications.

Ans. The CPU can access 64 words of memory, each word being 8 bits long. The CPU does this by outputting a 6-bit address on its output pins A[5..0] and reading in the 8-bit value from memory on inputs D[7..0]. It has one 8-bit accumulator, a 6-bit address register, a 6-bit program counter, a 2-bit instruction register and an 8-bit data register. The CPU must realise the following instruction set:

AC is the Accumulator.

MUX is the Multiplexer. Here the instruction register has two bits, giving the instruction codes below.

Instruction Code | Instruction | Operation
00 | ADD | AC <- AC + M[A]
01 | AND | AC <- AC AND M[A]
10 | JMP | PC <- A (jump to address A)
11 | INC | AC <- AC + 1
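A minimal behavioural sketch of this instruction set; the word layout (2-bit opcode in the upper bits, 6-bit address in the lower bits), the memory contents and the sample program are assumptions made for illustration:

    def run_cpu(memory, max_steps=100):
        """Fetch-decode-execute loop for the 2-bit-opcode CPU described above."""
        AC, PC = 0, 0
        for _ in range(max_steps):
            DR = memory[PC]              # fetch the 8-bit word into the data register
            IR = DR >> 6                 # 2-bit instruction register
            AR = DR & 0x3F               # 6-bit address register
            PC = (PC + 1) & 0x3F
            if IR == 0b00:               # ADD: AC <- AC + M[A]
                AC = (AC + memory[AR]) & 0xFF
            elif IR == 0b01:             # AND: AC <- AC AND M[A]
                AC &= memory[AR]
            elif IR == 0b10:             # JMP: PC <- A
                PC = AR
            else:                        # INC: AC <- AC + 1
                AC = (AC + 1) & 0xFF
        return AC

    # tiny program: ADD M[3]; INC; then spin on a JMP-to-self at address 2
    prog = [0b00_000011, 0b11_000000, 0b10_000010, 25] + [0] * 60
    print(run_cpu(prog))                 # 26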

Q. 16. What are the advantages you get with virtual memory? Ans. Virtual memory permits the user to construct programs as though a large memory space were available, equal to the totality of auxiliary memory. Each address that is referenced by the CPU goes through an address mapping from the so-called virtual address to a physical address in main memory.

The following are the advantages we get with virtual memory: 1. Virtual memory helps in improving processor utilization. 2. Memory allocation is also an important consideration in computer programming due to the high cost of main memory. 3. The function of the memory management unit is to translate virtual addresses to physical addresses. 4. Virtual memory enables a program to execute on a computer with less main memory than it needs. 5. Virtual memory is generally implemented by the demand-paging concept; in demand paging, pages are only loaded into main memory when they are required. 6. Virtual memory gives the user the illusion of having main memory equal to the capacity of the secondary storage media. Virtual memory is implemented by transferring data from the secondary storage media to main memory as and when necessary. The data replaced from main memory is written back to

secondary storage according to a predetermined replacement algorithm. If the data swapped is of a fixed size, the concept is called paging. If the data swapped is of variable size, such as subroutines or matrices, it is called segmentation. Some operating systems combine segmentation and paging.

Q. 17. Write about DMA transfer. Ans. The CPU communicates with the DMA through the address and data buses, as with any interface unit. The DMA has its own address, which activates the DS and RS lines. The CPU initializes the DMA through the data bus. Once the DMA receives the start control command, it can start the transfer between the peripheral device and the memory. When the peripheral device sends a DMA request, the DMA controller activates the BR line, informing the CPU to relinquish the buses. The CPU responds with its BG line, informing the DMA that its buses are disabled. The DMA then puts the current value of its address register on the address bus, initiates the RD or WR signal, and

sends a DMA acknowledge to the peripheral device. The RD and WR lines in the DMA controller are bidirectional. The direction of transfer depends on the status of the BG line. When BG = 0, the RD and WR are input lines allowing the CPU to communicate with the internal DMA registers; when BG = 1, the RD and WR are output lines from the DMA controller to the random-access memory to specify the read or write operation for the data.

When the peripheral device receives a DMA acknowledge, it puts a word on the data bus (for write) or receives a word from the data bus (for read). Thus the DMA controls the read or write operations and supplies the address for the memory. The peripheral unit can then communicate with memory through the data bus for direct transfer between the two units while the CPU is momentarily disabled.

DMA transfer is very useful in many applications. It is used for fast transfer of information between magnetic disks and memory. It is also useful for updating the display of an interactive terminal: the contents of memory are transferred to the screen periodically by means of DMA transfer.

Q 18.What is memory organization Explain various memories ?

Ans .The memory unit is an essential component in any digital computer since it is needed for storing programs and data A very small computer with a limited application may be able to fulfill its intended task without the need of additional storage capacity, Most general purpose computer is run more efficiently if it is equipped with additional

storage beyond the capacity of main memory. There is just not enough space in one memory unit to accommodate all the programs used in a typical computer. Most computer users accumulate, and continue to accumulate, large amounts of data-processing software. It is therefore more economical to use low-cost storage devices to serve as a backup for storing the information that is not currently used by the CPU. The unit that communicates directly with the CPU is called the main memory. Devices that provide backup storage are called auxiliary memory. The most common auxiliary memory devices used in computer systems are magnetic disks and tapes. They are used for storing system programs, large data files, and other backup information. Only programs and data currently needed by the processor reside in main memory. All other information is stored in auxiliary memory and transferred to main memory when needed. There are the following types of memories: 1. Main memory * RAM (Random-Access Memory) * ROM (Read-Only Memory)

2. Auxiliary memory * Magnetic disks * Magnetic tapes, etc. 1. Main Memory: The main memory is the central storage unit in a computer system. It is used to store programs and data during computer operation. The technology for main memory is based on semiconductor integrated circuits. RAM (Random Access Memory): Integrated-circuit RAM chips are available in two possible operating modes, static and dynamic. The static RAM consists of internal flip-flops that store the binary information. The dynamic RAM stores binary information in the form of electric charges that are applied to capacitors. ROM: Most of the main memory in a general-purpose computer is made up of RAM integrated chips, but a portion of the memory may be constructed with ROM chips. ROM is also random access. It is used for storing programs that are permanently resident in the computer and for tables of constants that do not change in value once the production of the computer is completed.

2. Auxiliary Memory: The most common auxiliary memory devices used in computer systems are magnetic disks and magnetic tapes. Other components used, but not as frequently, are magnetic drums, magnetic bubble memory, and optical disks. Understanding auxiliary memory devices requires knowledge of magnetics, electronics and electromechanical systems. The following are auxiliary memories. Magnetic disk: A magnetic disk is a circular plate constructed of metal or plastic coated with magnetizable material. Both sides of the disk are used, and several disks may be stacked on one spindle with read/write heads available on each surface. Bits are stored on the magnetized surface in spots along concentric circles called tracks. Tracks are commonly divided into sections called sectors. Disks that are permanently attached and cannot be removed by the occasional user are called hard disks. A disk drive with removable disks is called a floppy disk drive. Magnetic tapes: A magnetic tape transport consists of the electrical, mechanical and electronic components that provide the parts and control mechanism for a magnetic tape unit. The tape

itself is a strip of plastic coated with a magnetic recording medium. Bits are recorded as magnetic spots on the tape along several tracks. Seven or nine bits are recorded to form a character together with a parity bit. Read/write heads are mounted one on each track so that data can be recorded and read as a sequence of characters.

Q. 19. Compare interrupt I/O with DMA I/O. Ans. The comparison between interrupt I/O and DMA I/O is as follows: in interrupt-driven I/O the CPU is interrupted for every word of data and executes a service routine to move it, so every transfer still passes through the CPU; in DMA I/O the DMA controller transfers an entire block of data directly between the device and memory, and the CPU is involved only in initializing the transfer and in handling the interrupt at its completion. Interrupt I/O therefore suits slow or infrequent transfers, while DMA suits high-speed block transfers such as disk-to-memory traffic.

Q. 20. What is memory interleaving? How is it different from cache memory? Ans. Memory can be partitioned into a number of modules connected to a common memory address bus and data bus. A memory module is a memory array together with its own address and data registers. The figure shows a memory unit with four modules. Each memory array has its own address register AR and data register DR. The address registers receive information from a common address bus and the data registers communicate with a bidirectional data bus. The two least significant bits of the address can be used to distinguish between the four modules. The modular system permits one module to initiate a memory access while other modules are in the process of reading or writing a word, and each module can honor a memory request independent of the state of the other modules.

The advantage of a modular memory is that it allows the use of a technique called interleaving. In an interleaved memory, different sets of addresses are assigned to different memory modules. For example, in a two-module memory system the even addresses may be in one module and the odd addresses in the other. When the number of modules is a power of 2, the least significant bits of the address select a memory module and the remaining bits designate the specific location to be accessed within the selected module. A modular memory is useful in systems with pipeline and vector processing. A vector processor that uses an n-way interleaved memory can fetch n operands from n different

modules. By staggering the memory accesses, the effective memory cycle time can be reduced by a factor close to the number of modules. A CPU with an instruction pipeline can take advantage of multiple memory modules so that each segment in the pipeline can access memory independently of the memory accesses from other segments. Cache memory is different from memory interleaving. The processor accesses main memory for data, but the speed of the central processor is about 50 times faster than that of main memory, so the processor cannot be utilized to its full efficiency. The solution to this problem is to use a cache memory between the central processor and the main memory. Cache memory can provide data to the CPU at a faster rate than main memory. It is pronounced "cash"; it is a special high-speed storage mechanism. It can be either a reserved section of main memory or an independent high-speed memory. A cache is quite different from an interleaved memory: interleaving reorganizes main memory itself across modules on the address and data buses, whereas a cache sits between the central processor and the main memory. During any particular memory cycle, the cache is checked for the memory

address being issued by the processor. If the requested data is available in the cache, this is called a cache hit. If the requested data is not available in the cache, this is called a cache miss. A cache replacement policy determines which data will go out of the cache to accommodate the newly arriving data. The hit ratio is defined as the number of hits in the cache divided by the total number of attempts made by the CPU. A hit ratio has a value in the range 0 to 1; the higher the hit ratio, the better the performance of the system. The average access time is then easy to work out. A simple example: the access time of main memory is 100 ns, the access time of cache memory is 10 ns, and the hit ratio is 0.8. The average access time of the CPU is (8 x 10 + 2 x (100 + 10))/10 = 30 ns. Since the hit ratio is 0.8, the CPU will get the data from the cache 8 times out of 10 attempts, and in the remaining 2 attempts the CPU has to access the data from main memory. The size of cache memory is up to about 512 KB and the size of main memory is up to 512 MB in current-generation computers. So why not choose the size of the cache equal to that of main

memory? In that case no main memory would be required. That would work, but it would be incredibly expensive. The idea behind caching is to use a small amount of expensive, fast memory to speed up a large amount of slower, less expensive memory.
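A small illustrative sketch in Python follows, using the example figures quoted in the answer above (10 ns cache, 100 ns main memory, hit ratio 0.8, four interleaved modules); the function names and the hierarchical miss model are assumptions for illustration, not part of the original answer.

```python
# Sketch: low-order interleaved module selection and average access time,
# using the example values from the answer above.

def module_for_address(addr, num_modules=4):
    # In a low-order interleaved memory the least significant
    # log2(num_modules) bits of the address select the module.
    return addr % num_modules

def average_access_time(hit_ratio, t_cache_ns, t_main_ns):
    # Hierarchical model used in the answer: a miss pays the cache
    # lookup time plus the main memory access time.
    return hit_ratio * t_cache_ns + (1 - hit_ratio) * (t_cache_ns + t_main_ns)

print([module_for_address(a) for a in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
print(average_access_time(0.8, 10, 100))          # 30.0 ns
```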

Q. 21. What do you mean by initialization of the DMA controller? How does the DMA controller work? Explain with a suitable block diagram. Ans. The DMA controller needs the usual circuits of an interface to communicate with the CPU and the I/O device. In addition, it needs an address register, a word count register, and a set of address lines. The address register and address lines are used for direct communication with the memory. The word count register specifies the number of words that must be transferred. The data transfer may be done directly between the device and memory under control of the DMA.

Figure 2 shows the block diagram of a typical DMA controller. The unit communicates with the CPU via the data bus and control lines. The registers in the DMA are selected by the CPU through the address bus by enabling the DS (DMA select) and RS (register select) inputs. The RD (read) and WR (write) inputs are bidirectional. When the BG (bus grant) input is 0, the CPU can communicate with the DMA registers through the data bus to read from or write to the DMA registers. When BG = 1, the CPU has relinquished the buses and the DMA can communicate directly with the memory by specifying an address on the address bus and activating the RD or WR control. The DMA communicates with the external peripheral through the request and acknowledge lines using a handshaking procedure. The DMA controller has three registers: an address register, a word count register, and a control register. The address register contains an address to specify the desired location in memory. The address bits go through bus

buffers onto the address bus. The address register is incremented after each word that is transferred to memory. The word count register holds the number of words to be transferred. This register is decremented by one after each word transfer and internally tested for zero. The control register specifies the mode of transfer. All registers in the DMA appear to the CPU as I/O interface registers. Thus the CPU can read from or write into the DMA registers under program control via the data bus.

Block diagram of DMA controller.

The initialization process is essentially a program consisting of I/O instructions that include the addresses for selecting particular DMA registers. The CPU initializes the DMA by sending the following information through the data bus: 1. The starting address of the memory block where the data are available (for read) or where the data are to be stored (for write). 2. The word count, which is the number of words in the memory block. 3. A control word to specify the mode of transfer, such as read or write. 4. A control to start the DMA transfer. The starting address is stored in the address register, the word count in the word count register, and the control information in the control register. When the DMA is initialized, the CPU stops communicating with the DMA unless it receives an interrupt signal or wants to check how many words have been transferred.
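A minimal behavioural sketch of this initialization step in Python follows; the register names and example values are assumptions made for illustration, not a description of any specific DMA chip.

```python
# Toy model of the program-visible DMA registers and the CPU writing
# the starting address, word count and mode into them over the data bus.

class DMARegisters:
    def __init__(self):
        self.address = 0      # starting memory address for the block
        self.word_count = 0   # number of words still to transfer
        self.control = None   # mode of transfer, e.g. read or write

def cpu_initialize_dma(dma, start_address, word_count, mode):
    dma.address = start_address
    dma.word_count = word_count
    dma.control = mode        # after this the CPU leaves the DMA alone

dma = DMARegisters()
cpu_initialize_dma(dma, start_address=0x2000, word_count=256, mode="read")
print(hex(dma.address), dma.word_count, dma.control)   # 0x2000 256 read
```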

Q. 22. When a DMA module takes control of the bus, and while it retains control of the bus, what does the processor do? Ans. The CPU communicates with the DMA through the address and data buses as with any interface unit. The DMA has its own address, which activates the DS and RS lines. The CPU initializes the DMA through the data bus. Once the DMA receives the start control command, it can start the transfer between the peripheral device and the memory. When the peripheral device sends a DMA request, the DMA controller activates the BR line, informing the CPU to relinquish the buses. The CPU responds with its BG line, informing the DMA that its buses are disabled. The DMA then puts the current value of its address register onto the address bus, initiates the RD or WR signal, and sends a DMA acknowledge to the peripheral device. Note that the RD and WR lines in the DMA controller are bidirectional. The direction of transfer depends on the status of the BG line. When BG = 0, the RD and WR are input lines allowing the CPU to communicate with the internal DMA registers. When BG = 1, the RD and WR are output lines from the DMA controller to the random access memory to specify the read or write operation for the data. When the peripheral device receives a DMA acknowledge,

it puts a word in the data bus (for write) or receives a word from the data bus (for read). Thus the DMA controls the read or write operations and supplies the address for the memory. The peripheral unit can then communicate with memory through the data bus for direct transfer between the two units while the CPU is momentarily disabled.

For each word that is transferred, the DMA increments its address register and decrements its word count register. If the word

count does not reach zero, the DMA checks the request line coming from the peripheral. For a high-speed device, the line will be active as soon as the previous transfer is completed. A second transfer is then initiated, and the process continues until the entire block is transferred. If the peripheral speed is slower, the DMA request line may come somewhat later. In this case the DMA disables the bus request line so that the CPU can continue to execute its program; when the peripheral requests a transfer, the DMA requests the buses again. If the word count register reaches zero, the DMA stops any further transfer and removes its bus request. It also informs the CPU of the termination by means of an interrupt. When the CPU responds to the interrupt, it reads the content of the word count register. The zero value of this register indicates that all the words were transferred successfully. The CPU can read this register at any time to check the number of words already transferred. A DMA controller may have more than one channel. In this case, each channel has a request and acknowledge pair of control signals which are connected to separate peripheral devices. Each channel also has its own address register and word count

register within the DMA controller. A priority among the channels may be established so that channels with higher priority are serviced before channels with lower priority. DMA transfer is very useful in many applications. It is used for fast transfer of information between magnetic disks and memory. It is also useful for updating the display in an interactive terminal. The contents of memory can be transferred to the screen by means of DMA transfer.
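The per-word loop described above can be sketched as follows; this is a simple software model under assumed names (dma_block_transfer, device_words), not real bus behaviour.

```python
# Behavioural sketch of the DMA word-transfer loop: one word moves per
# request, the address register is incremented, the word count register
# is decremented, and an interrupt marks completion when it reaches zero.

def dma_block_transfer(memory, device_words, start_address):
    address = start_address
    word_count = len(device_words)
    for word in device_words:          # one DMA request per word
        memory[address] = word         # write cycle to memory
        address += 1                   # address register incremented
        word_count -= 1                # word count register decremented
    assert word_count == 0
    return "interrupt: transfer complete"

memory = {}
print(dma_block_transfer(memory, device_words=[10, 20, 30], start_address=0x100))
```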

Q. 23. (a) How many 128 x 8 RAM chips are needed to provide a memory capacity of 2048 bytes? (b) How many lines of the address bus must be used to access 2048 bytes of memory? How many of these lines will be common to all chips? (c) How many lines must be decoded for chip select? Specify the size of the decoder. Ans. (a) 2048/128 = 16 chips. (b) 2048 = 2^11, so 11 address lines must be used; 128 = 2^7, so 7 of these lines are common to all chips. (c) The remaining 11 - 7 = 4 lines must be decoded for chip select, requiring a 4 x 16 decoder.

Q. 24. A computer uses RAM chips of 1024 x 1 capacity. (a) How many chips are needed, and how should their address lines be connected, to provide a memory capacity of 1024 bytes? (b) How many chips are needed to provide a memory capacity of 16K bytes? Explain in words how the chips are to be connected to the address bus. Specify the size of the decoders. Ans. (a) A byte is 8 bits, so 8 chips of 1024 x 1 are needed; the 10 address lines are connected in parallel to all 8 chips, and each chip supplies one bit of the byte. (b) 16K bytes requires 16 x 8 = 128 chips, arranged as 16 rows of 8 chips each. The low 10 address lines go to every chip, and the high 4 address lines feed a 4 x 16 decoder whose outputs select one row of 8 chips.

Q. 26. An 8-bit computer has a 16-bit address bus. The first 15 lines of the address are used to select a bank of 32K bytes of memory. The higher-order bit of the address is used to select a register which receives the contents of the data bus. Explain how this configuration can be used to extend the memory capacity

of the system to eight banks of 32K bytes each, for a total of 256K bytes of memory. Ans. The processor selects the external register with an address of 8000 hexadecimal. Each bank of 32K bytes is selected by addresses 0000-7FFF. The processor loads an 8-bit number into the register with a single 1 and seven 0s. Each output of the register selects one of the 8 banks of 32K bytes through a chip-select input. A memory bank can be changed by changing the number in the register.

Q. 27. A hard disk with 5 platters has 2048 tracks/platter, 1024 sectors/track (a fixed number of sectors per track) and 512-byte sectors. What is its total capacity? Ans. 512 bytes x 1024 sectors = 0.5 MB/track. Multiplying by 2048 tracks/platter gives 1 GB/platter, or 5 GB capacity in the drive. (In this problem we use

the standard computer architecture definitions of MB = 2^20 bytes and GB = 2^30 bytes; many hard disk manufacturers use MB = 1,000,000 bytes and GB = 1,000,000,000 bytes. These definitions are close, but not equivalent.)
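The arithmetic in the answer above can be checked with a few lines of Python (the variable names are illustrative only; the values come from the question):

```python
# Checking the disk capacity arithmetic above.
bytes_per_sector = 512
sectors_per_track = 1024
tracks_per_platter = 2048
platters = 5

track_bytes = bytes_per_sector * sectors_per_track       # 0.5 MB per track
platter_bytes = track_bytes * tracks_per_platter          # 1 GB per platter
total_bytes = platter_bytes * platters                     # 5 GB in the drive
print(total_bytes / 2**30, "GB")                           # 5.0 GB
```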

Q. 28. A manufacturer wishes to design a hard disk with a capacity of 30 GB or more (using the standard definition of 1 GB = 2^30 bytes). If the technology used to manufacture the disks allows 1024-byte sectors, 2048 sectors/track, and 4096 tracks/platter, how many platters are required? Ans. Multiplying bytes per sector times sectors per track times tracks per platter gives a capacity of 8 GB (8 x 2^30) per platter. Therefore, 4 platters are required to give a total capacity of at least 30 GB.

Q. 29. If a disk spins at 10,000 rpm, what is the average rotational latency time of a request? If a given track on the disk has 1024 sectors, what is the transfer time for a sector? Ans. At 10,000 r/min, it takes 6 ms for a complete rotation of the disk. On average, the read/write head will have to wait half a rotation before the needed sector reaches it, so the average rotational latency will be 3 ms. Since there are 1024 sectors on the track, the transfer time will be equal to the rotation time of the disk divided by 1024, or approximately 6 microseconds.
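The same figures worked out in Python (a quick check of the answer, with assumed variable names):

```python
# Rotational latency and per-sector transfer time for the figures above.
rpm = 10_000
sectors_per_track = 1024

rotation_time_ms = 60_000 / rpm             # 6.0 ms per revolution
avg_latency_ms = rotation_time_ms / 2       # 3.0 ms on average
sector_time_us = rotation_time_ms * 1000 / sectors_per_track
print(rotation_time_ms, avg_latency_ms, round(sector_time_us, 2))  # 6.0 3.0 5.86
```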

Q. 30. In a cache with 64-byte cache lines, how many bits are used to determine which byte within a cache line an address points to? Ans. 2^6 = 64, so the low 6 bits of an address determine the address's byte within a cache line.

Q. 31. In a cache with 64-byte lines, what is the address of the first word in the cache line containing the address BEF3DE40H? Ans. Cache lines are aligned on a multiple of their size, so the address of the first word in a line can be found by setting all of the bits that determine the byte within the line to 0. In this case, 6 bits are used to select a byte within the line, so we can find the starting address of the line by setting the low 6 bits of the address to 0, giving BEF3DE40H as the address of the first word in the line.
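A short bit-masking sketch of these two answers follows; note the hex address printed in the original text is garbled, so BEF3DE40H is an assumed reading used only for illustration.

```python
# With 64-byte lines the low 6 bits of an address select the byte within
# the line; clearing those bits gives the line's starting address.
LINE_SIZE = 64
OFFSET_BITS = LINE_SIZE.bit_length() - 1    # 6

def byte_within_line(addr):
    return addr & (LINE_SIZE - 1)

def line_start(addr):
    return addr & ~(LINE_SIZE - 1)

addr = 0xBEF3DE40                 # assumed example address
print(byte_within_line(addr))     # 0 (already the first word of its line)
print(hex(line_start(addr)))      # 0xbef3de40
```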

Q. 32. In a cache with 128-byte cache lines, what is the address of the first word in the cache line containing the addresses: (a) A23847FFH (b) EEFABCD2H (c) 7245E824H?

Ans. For 128-byte cache lines, the low 7 bits of an address indicate which byte within the line the address refers to. Since lines are aligned, the address of the first word in a line can be found by setting the bits of the address that determine the byte within the line to 0. Therefore, the addresses of the first byte in the lines containing the above addresses are as follows: (a) A2384780H (b) EEFABC80H (c) 7245E800H.

Q. 33. For a cache with a capacity of 32 KB, how many lines does the cache hold for line lengths of 32, 64, or 128 bytes? Ans. The number of lines in the cache is simply the capacity divided by the line length, so the cache has 1024 lines with 32-byte lines, 512 lines with 64-byte lines, and 256 lines with 128-byte lines.

Q. 34. If a cache has a capacity of 16 KB and a line length of 128 bytes, how many sets does the cache have if it is 2-way, 4-way, or 8-way set-associative? Ans. With 128-byte lines, the cache contains a total of 128 lines. The number of sets in the cache is the number of lines divided by the associativity, so the cache has 64 sets if it is 2-way set-associative, 32 sets if 4-way set-associative, and 16 sets if 8-way set-associative.
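Both of these computations (Q. 33 and Q. 34) follow the same two divisions, shown here as a small Python check with illustrative function names:

```python
# Number of lines and number of sets, for the capacities used above.
def num_lines(capacity_bytes, line_bytes):
    return capacity_bytes // line_bytes

def num_sets(capacity_bytes, line_bytes, associativity):
    return num_lines(capacity_bytes, line_bytes) // associativity

print(num_lines(32 * 1024, 32), num_lines(32 * 1024, 64), num_lines(32 * 1024, 128))
# 1024 512 256
print([num_sets(16 * 1024, 128, ways) for ways in (2, 4, 8)])   # [64, 32, 16]
```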

Q. 35. If a cache memory has a hit rate of 75 percent, memory requests take 12 ns to complete if they hit in the cache, and memory requests that miss in the cache take 100 ns to complete, what is the average access time of the cache? Ans. Using the formula, the average access time = (T_hit x hit rate) + (T_miss x miss rate).

The average access time is (12 ns x 0.75) + (100 ns x 0.25) = 34 ns.

Q. 36. In a two-level memory hierarchy, if the cache has an access time of 8 ns and main memory has an access time of 60 ns, what is the hit rate in the cache required to give an average access time of 10 ns? Ans. Using the formula, the average access time = (T_hit x hit rate) + (T_miss x miss rate), we get 10 ns = (8 ns x hit rate) + 60 ns x (1 - hit rate), since the hit and miss rates at a given level should sum to 100 percent. Solving for the hit rate, we get a required hit rate of 96.2%.

Q. 37. A two-level memory system has an average access time of 12 ns. The top level

(cache memory) of the memory system has a hit rate of 90 percent and an access time of 5 ns. What is the access time of the lower level (main memory) of the memory system? Ans. Using the formula, the average access time = (T_hit x hit rate) + (T_miss x miss rate), we get 12 ns = (5 ns x 0.9) + (T_miss x 0.1). Solving for T_miss, we get T_miss = 75 ns, which is the access time of the main memory.
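Questions 35 to 37 all use the same formula, once forwards and twice rearranged; the following Python sketch (with assumed names t_avg, t_hit, t_miss) reproduces all three results:

```python
# T_avg = T_hit * hit_rate + T_miss * (1 - hit_rate)
def t_avg(t_hit, t_miss, hit_rate):
    return t_hit * hit_rate + t_miss * (1 - hit_rate)

print(t_avg(12, 100, 0.75))                       # 34.0 ns      (Q. 35)

# Q. 36: solve for the hit rate that gives a 10 ns average.
t_hit, t_miss, target = 8, 60, 10
hit_rate = (t_miss - target) / (t_miss - t_hit)
print(round(hit_rate * 100, 1), "%")              # 96.2 %

# Q. 37: solve for the miss (main memory) time that gives a 12 ns average.
t_hit, hit_rate, target = 5, 0.9, 12
t_miss = (target - t_hit * hit_rate) / (1 - hit_rate)
print(t_miss, "ns")                               # 75.0 ns
```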

Q. 38. If a cache has 64-byte cache lines, how long does it take to fetch a cache line if the main memory takes 20 cycles to respond to each memory request and return 2 bytes of data in response to each request? Ans. Since the main memory returns 2 bytes of data in response to each request,

32 memory requests are required to fetch the line. At 20 cycles per request, fetching a cache line will take 640 cycles.

Q. 39. In a direct-mapped cache with a capacity of 16 KB and a line length of 32 bytes, how many bits are used to determine the byte that a memory operation references within a cache line, and how many bits are used to select the line in the cache that may contain the data? Ans. 2^5 = 32, so 5 bits are required to determine which byte within a cache line is being referenced. With 32-byte lines, there are 512 lines in a 16 KB cache, so 9 bits are required to select the line that may contain the address (2^9 = 512).
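The two field widths can be computed directly, as in this short Python check (variable names are illustrative):

```python
# Address field widths for a 16 KB direct-mapped cache with 32-byte lines.
import math

capacity, line = 16 * 1024, 32
offset_bits = int(math.log2(line))              # 5: byte within the line
index_bits = int(math.log2(capacity // line))   # 9: selects one of 512 lines
print(offset_bits, index_bits)                  # 5 9
```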

Q. 40. The logical address space in a computer system consists of 128 segments. Each segment can have up to 32 pages of 4K words each. Physical memory consists of 4K blocks of 4K words each. Formulate the logical and physical address formats. Ans. Logical address: a 7-bit segment field (2^7 = 128 segments), a 5-bit page field (2^5 = 32 pages) and a 12-bit word field (2^12 = 4K words), giving a 24-bit logical address. Physical address: a 12-bit block field (2^12 = 4K blocks) and a 12-bit word field, giving a 24-bit physical address.

Q. 43. A memory system contains a cache, a main memory and a virtual memory. The

access time of the cache is 5 ns, and it has an 80 percent hit rate. The access time of the main memory is 100 ns, and it has a 99.5 percent hit rate. The access time of the virtual memory is 10 ms. What is the average access time of the hierarchy? Ans. To solve this sort of problem, we start at the bottom of the hierarchy and work up. Since the hit rate of the virtual memory is effectively 100 percent, we can compute the average access time for requests that reach the main memory as (100 ns x 0.995) + (10 ms x 0.005) = 50,099.5 ns. Given this, the average access time for requests that reach the cache (which is all requests) is (5 ns x 0.80) + (50,099.5 ns x 0.20) = 10,024 ns.
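The bottom-up calculation in the answer can be reproduced as follows (a plain numeric check, with the 10 ms virtual-memory time expressed in nanoseconds):

```python
# Working the hierarchy from the bottom up, as in the answer above.
t_cache, hit_cache = 5, 0.80           # ns
t_main, hit_main = 100, 0.995          # ns
t_virtual = 10e6                        # 10 ms expressed in ns

t_below_cache = t_main * hit_main + t_virtual * (1 - hit_main)    # 50,099.5 ns
t_average = t_cache * hit_cache + t_below_cache * (1 - hit_cache)
print(t_below_cache, round(t_average, 1))    # 50099.5 10023.9 (about 10,024 ns)
```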

Q. 44. Why does increasing the capacity of a cache tend to increase its hit rate?

Ans. Increasing the capacity of a cache allows more data to be stored in the cache. If a program references more data than the capacity of the cache, increasing the cache's capacity will increase the fraction of the program's data that can be kept in the cache. This will usually increase the hit rate of the cache. If the program references less data than the capacity of the cache, increasing the capacity of the cache generally does not affect the hit rate, unless this change causes two or more lines that conflicted for space in the cache to no longer conflict, since the program does not need the extra space.

Q. 45. Extend the memory system to 4096 bytes of RAM (using 128 x 8 RAM chips) and 4096 bytes of ROM (using 512 x 8 ROM chips). List the memory address map and indicate what size decoders are needed if the CPU address bus has 16 lines. Ans. Number of RAM chips = 4096/128 = 32.

Therefore, a 5 x 32 decoder is needed to select each of the 32 chips. Also, 128 = 2^7, so the first 7 address lines are used as address lines for the selected RAM chip. Number of ROM chips = 4096/512 = 8. Therefore, a 3 x 8 decoder is needed to select each of the 8 ROM chips. Also, 512 = 2^9, so the first 9 lines are used as address lines for the selected ROM chip. Since 4096 = 2^12, there are 12 common address lines and 1 line to select between RAM and ROM. The memory address map then assigns the RAM and the ROM to separate address ranges distinguished by that select line.

Q. 46. A computer employs RAM chips of 256 x 8 and ROM chips of 1024 x 8. The computer system needs 2K bytes of RAM, 4K bytes of ROM and four interface units, each with four registers. A memory-mapped I/O configuration is used. The two highest-order bits of the address are assigned 00 for RAM, 01 for ROM, and 10 for interface registers. (a) How many RAM and ROM chips are needed? (b) Draw a memory-address map for the system. Ans. (a) RAM chips needed = 2048/256 = 8; ROM chips needed = 4096/1024 = 4. (b) With a 16-bit address bus, the RAM occupies addresses 0000-07FF (top bits 00), the ROM occupies 4000-4FFF (top bits 01), and the 16 interface registers occupy 8000-800F (top bits 10).
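The chip-count and decoder-size arithmetic used in Q. 45 and Q. 46 (a) can be expressed compactly in Python; this sketch assumes byte-wide (x 8) chips and uses illustrative function names:

```python
# Chip counts and chip-select decoder size for byte-wide memory chips.
import math

def chips_needed(total_bytes, chip_bytes):
    return total_bytes // chip_bytes

def chip_select_decoder(num_chips):
    n = int(math.log2(num_chips))
    return f"{n} x {2 ** n} decoder"

print(chips_needed(4096, 128), chip_select_decoder(chips_needed(4096, 128)))  # 32, 5 x 32
print(chips_needed(4096, 512), chip_select_decoder(chips_needed(4096, 512)))  # 8, 3 x 8
print(chips_needed(2048, 256), chips_needed(4096, 1024))                      # 8 RAM, 4 ROM
```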

Q. 47. Which associativity techniques are used in 486 and Pentium L1 and L2 caches? Ans. The organization of the cache memory in the 486 and MMX Pentium family is called a four-way set-associative cache, which means that the cache memory is split into four blocks. Each block is also organized as 128 or 256 lines of 16 bytes each. The following table shows the associativity of various processor L1 and L2 caches.

Q. 48. Describe the cache organization in 486 and Pentium processors. Ans. The contents of the cache must always be in sync with the contents of main memory to ensure that the processor is working with current data. For this reason, the internal cache in the 486 family is a write-through cache. Write-through means that when the processor writes information out to the cache, that information is automatically written through to main memory as well. By comparison, the Pentium and later chips have an internal write-back cache, which means that both reads and writes are cached, further improving performance. Even though the internal 486 cache is write-through, the system can employ an external write-back cache for increased performance. In addition, the 486 can buffer up to 4 bytes before actually storing the data in RAM, improving efficiency in case the memory bus is busy. Another feature of improved cache designs is that they are non-blocking. This is a technique for reducing or hiding memory delays by

exploiting the overlap of processor operations with data accesses. A non-blocking cache enables program execution to proceed concurrently with cache misses as long as certain dependency constraints are observed. In other words, the cache can handle a cache miss much better and enable the processor to continue doing something non-dependent on the missing data. The cache controller built into the processor also is responsible for watching the memory bus when alternative processors, known as bus masters, are in control of the system. This process of watching the bus is referred to as bus snooping. If a bus master device writes to an area of memory that is also currently stored in the processor cache, the cache contents and memory no longer agree. The cache controller then marks this data as invalid and reloads the cache during the next memory access, preserving the integrity of the system. All PC processor designs that support cache memory include a feature known as a translation lookaside buffer (TLB) to improve recovery from cache misses. The TLB is a table inside the processor that stores information about the location of recently accessed memory addresses. The TLB speeds

up the translation of virtual addresses to physical memory addresses. To improve TLB performance, several recent processors have increased the number of entries in the TLB, as AMD did when it moved from the Athlon Thunderbird core to the Palomino core. Pentium 4 processors that support HT technology have a separate instruction TLB (ITLB) for each virtual processor thread. As clock speeds increase, cycle time decreases. Newer systems don't use cache on the motherboard any longer because the faster DDR-SDRAM or RDRAM used in modern Pentium 4/Celeron or Athlon systems can keep up with the motherboard speed. Modern processors all integrate the L2 cache into the processor die just like the L1 cache. This enables the L2 to run at full core speed because it is now a part of the core. Cache speed is always more important than size. The rule is that a smaller but faster cache is always better than a slower but bigger cache. The table illustrates the need for and function of L1 (internal) and L2 (external) caches in modern systems. Result: As you can see, having two levels of cache between the very fast CPU and the much slower main memory helps minimize any wait

states the processor might have to endure, especially those with the on-die L2. This enables the processor to keep working closer to its true speed.

Q. 49. What is segmentation? Explain in detail. Ans. Segmentation is a technique that involves having all of a program's code and data resident in RAM at run time. For a given system, this limits the number of programs that can

be run simultaneously. Segment sizes can differ from program to program, which means the operating system must spend considerable time managing the memory system. The address generated by a segmented program is called a logical address, and the space used by that segmented program is known as the logical address space.

Segmented-page Mapping: The property of a logical space is that it uses variable-length segments. The length of each segment is allowed to grow and contract according to the needs of the program being executed. One way of specifying the length of a segment is by associating with it a number of equal-size pages. Consider the logical address shown in the figure. The logical address is partitioned into three fields. 1. The segment field specifies a segment number. 2. The page field specifies the page within the segment.

3. The word field gives the specific word within the page. A page field of k bits can specify up to 2^k pages. A segment number may be associated with just one page or with as many as 2^k pages. Thus the length of a segment would vary according to the number of pages that are assigned to it. The mapping of the logical address into a physical address is done by means of two tables, shown in the figure. The segment number of a logical address specifies the address for the segment table. The entry in the segment table is a pointer address for a page table base. The page table base is added to the page number given in the logical address. The sum produces a pointer address to an entry in the page table. The value found in the page table provides the block number in physical memory. The concatenation of the block field with the word field produces the final physical mapped address.

The two mapping tables may be stored in two separate small memories or in main memory. In either case, a memory reference from the CPU will require three accesses to memory: first to the segment table, second to the page table and third to main memory. This would slow the system significantly when compared to a conventional system that requires only one reference to memory. To avoid this speed penalty, a fast associative memory is used to hold the most recently referenced table entries. (This type of memory is sometimes called a translation lookaside buffer, abbreviated TLB.) The first time a given block is referenced, its value together with the corresponding segment and page numbers is entered into the associative memory. Thus the mapping process is first attempted by associative search with the given segment and

page numbers. If a match occurs, the mapping takes only the search time of the associative memory. If no match occurs, the segment-page mapping is used and the result is transferred into the associative memory for future reference. For example, suppose the size of the secondary storage media is 16K and the size of main memory is 1K. 14 bits of the address bus are required to map the secondary storage media and 10 bits are required to map main memory. In that case the CPU will generate a 14-bit logical address. Suppose 5 bits are given for the segment number. The 5-bit segment number specifies one of 32 possible segments. The 6-bit page number can specify up to 64 pages. The remaining 3 bits (14 - (5 + 6) = 3) specify the 8 words in each page. Since the size of main memory is 1K and each page contains 8 words, main memory can have 1K/8 = 1024/8 = 128 page frames. The logical address generated by the CPU is divided into three fields. 1. Segment field: It specifies the segment number. If there are n bits in the segment field, there are 2^n segments in the secondary storage media. In our example above there are 5 bits for the segment field.

That means there are 2^5 = 32 segments in the secondary storage media. 2. Page field: It specifies the page within a particular segment. If there are n bits in the page field, a maximum of 2^n pages can be in any segment, although a particular segment may contain fewer than 2^n pages. In our example, there are 6 bits for the page field, so a segment may have a maximum of 2^6 = 64 pages. 3. Word field: It specifies a word within a specified page. If there are n bits in the word field, there are 2^n words in each page. In our example there are 3 bits for the word field, so there are 2^3 = 8 words in each page.

Suppose the logical address generated by the CPU is 14036 (octal). To map the logical address to a physical address, the segment and page tables are used: the segment number in the logical address indexes the segment table, and the segment table contains the address of the page table base. The steps for

mapping the logical address to the physical address are described below.

1. The value in the segment table at the address provided by the segment number is added to the page number given in the logical address. In the logical address generated by the CPU, 14 is the segment number and the value at this segment address is 30. This value 30 is added to the page number 03 to get the page table entry. 2. The page table entry corresponds to the page frame number of main memory where the requested page of the required segment is residing. Corresponding to address 33 in the page table, the page frame number is 2. That means the required page is lying in page frame number 2. The word address 6 from the logical address refers to the 7th word in that page, because the first word has address 0 and the last (eighth) word has address 7.
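The field split used in this example (5-bit segment, 6-bit page, 3-bit word within a 14-bit logical address) can be sketched in Python; the octal address 14036 is the reading assumed from the partially garbled text above, used here only for illustration.

```python
# Splitting the 14-bit logical address into segment, page and word fields.
def split_logical(addr):
    word = addr & 0b111                # low 3 bits: word within the page
    page = (addr >> 3) & 0b111111      # next 6 bits: page within the segment
    segment = (addr >> 9) & 0b11111    # top 5 bits: segment number
    return segment, page, word

seg, page, word = split_logical(0o14036)
print(oct(seg), oct(page), oct(word))   # 0o14 0o3 0o6
```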

Q. 50. What is virtual memory? Explain the address mapping using pages. Ans. Virtual memory was developed to automate the movement of program code and data between main memory and secondary storage to give the appearance of a single large store. 1. This technique greatly simplified the programmer's job, particularly when program code and data exceeded the main memory's size. The basic technology proved readily adaptable to modern multiprogramming

environments, which, in addition to a virtual single-level memory, also require support for large address spaces, process protection, address space organization and the execution of processes only partially resident in memory. 2. Consequently, virtual memory has become widely used, and most processors have hardware to support it. 3. Virtual memory is stored in a hard disk image; the physical memory holds a small number of virtual pages in physical page frames. Our focus is on the mechanisms and structures popular in today's operating systems and microprocessors, which are geared toward demand-paged virtual memory.

Address mapping using pages: Virtual memory stores only the most often used portions of an address space in main memory and retrieves other portions from a disk as needed. The virtual memory space is divided into pages identified by virtual page numbers, which are mapped to page frames, as shown in the figure. As the figure shows, the virtual memory space is divided into uniform virtual pages, each of which is identified by a virtual page number. The physical memory is divided into uniform page

frames, each identified by a page frame number. The page frames are so named because they frame, or hold, a page's data. At its simplest, then, virtual memory is a mapping of virtual page numbers to page frame numbers. The mapping is a function, i.e. a given virtual page can have only one physical location. However, the inverse mapping, from page frame numbers to virtual page numbers, is not necessarily a function, and thus it is possible to have several pages mapped to the same page frame. The table implementation of the address mapping is simplified if the information in the address space and the memory space are each divided into groups of fixed size. The physical memory is broken down into groups of equal size called blocks. The term page refers to groups of address space of the same size made by the programmer. Programs can also be split into pages. Portions of a program are moved from auxiliary memory to main memory in records equal to the size of a page. The term page frame is sometimes used to denote a block. A simple mapping between a virtual and a physical memory is shown in the figure.

Let us illustrate with an address space (virtual memory) of 8K and a memory space (physical memory) of 4K. If we split each into groups of 1K words, we get eight pages and four page frames, as shown in the figure. At any given time, up to four pages of virtual memory may reside in main memory, in any one of the four page frames.
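A toy Python page table for this 8-page / 4-frame example follows; the particular resident pages and frame numbers are taken from the figure described in the next paragraphs (pages 1, 3, 6, 7 in frames 1, 2, 0, 3), and the function name translate is an assumption for illustration only.

```python
# Toy page table: entries are (presence_bit, frame_number).
PAGE_SIZE = 1024
page_table = {0: (0, None), 1: (1, 1), 2: (0, None), 3: (1, 2),
              4: (0, None), 5: (0, None), 6: (1, 0), 7: (1, 3)}

def translate(virtual_addr):
    page = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE
    present, frame = page_table[page]
    if not present:
        return "page fault: operating system must load the page"
    return frame * PAGE_SIZE + offset       # physical address

print(translate(3 * PAGE_SIZE + 40))   # page 3 -> frame 2 -> 2088
print(translate(5 * PAGE_SIZE))        # page fault
```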

Address Mapping Using a Memory Page Table: The organization of the memory mapping is shown in the next figure. The memory page table in a paged system consists of eight words, one for each page. The address in the page table specifies the page number and the content of

the word gives the frame number where the page is stored in main memory. The table shows that pages 1, 3, 6 and 7 are now available in main memory, in page frames 1, 2, 0 and 3 respectively. A presence bit in each location indicates whether the page has been transferred from auxiliary memory into main memory. A 1 in the presence bit indicates that the page is available in main memory and a 0 indicates that it is not. The CPU references a word in memory with a virtual address of 13 bits. The three higher-order bits of the virtual address specify a page number and also an address for the memory page table. The content of the word in the memory page table at the page-number address is read out into the memory table buffer register. If the presence bit is 1, the frame number thus read is transferred to the two high-order bits of the main memory address register. A read signal to main memory transfers the content of the word to the main memory buffer register, ready to be used by the CPU. If the presence bit is 0, it means the content of the word referenced by the virtual address does not reside in main memory. A call to the OS (operating system) is then generated to fetch the required page

from the auxiliary memory and transfer it into main memory before resuming computation.

Q. 1. What is an I/O processor? Briefly discuss. Ans. An I/O processor is designed to handle the I/O processes of a device or the computer. This processor is separate from the main processor (CPU). It controls input/output operations only. A computer having an I/O processor relieves the CPU of the input/output burden; without one, this job would fall to the general-purpose CPU. It is a specialized device whose purpose is to take the load of I/O activity off the main CPU.

Q. 2. Write the major requirements for an I/O module. Ans. An I/O module consists of the following main components as its requirements: (a) Connection to the system bus. (b) Interface module. (c) Data buffer. (d) Control logic gates. (e) Status/control register. All of these are basic requirements of an I/O module.

Q. 3. Write the characteristics of I/O channels.

Ans. The characteristics of I/O channels are given below: 1. The I/O channel is one of the data transfer techniques adopted by peripherals. 2. An I/O channel has the ability to execute I/O instructions. These instructions are stored in the main memory and are executed by a special-purpose processor of the I/O channel. 3. A multiplexer I/O channel handles I/O with multiple devices at the same time. 4. The I/O channel is the concept where the processor is used as an I/O module with its local memory.

Q. 4. What is a channel? Ans. Channel: The channel is a path over which data is transferred. It is also a transfer technique adopted by various devices. It is a path which serves as an interface between various devices. Here the I/O channel is used with peripherals. A number of instructions are stored in main memory and are executed by the special-purpose processor of the I/O channel. There are various types of channels, i.e. the multiplexer channel, the selector channel and the block multiplexer channel. Data is transferred between devices and memory.

Q. 5. (a) Explain about I/O modes.

Ans. The CPU executes the I/O instructions and may accept the data temporarily, but the ultimate source or destination is the memory unit. Data transfer between the central computer and I/O devices is handled in a variety of I/O modes. Some I/O modes use the CPU as an intermediate path; others transfer the data directly to and from the memory unit. Data transfer to and from peripherals may be handled in the following possible I/O modes: (a) Programmed I/O mode (b) Interrupt-initiated I/O mode (c) Direct Memory Access (DMA).

Q. 5. (b) What is the basic function of an interrupt controller?

Ans. Data transfer between the CPU and an I/O device is initiated by the CPU. However, the CPU cannot start the transfer unless the device is ready to communicate with the CPU. The CPU responds to the interrupt request by storing the return address from the PC into a memory stack, and then the program branches to a service routine that processes the required transfer. Some processors also push the current PSW (program status word) onto the stack and load a new PSW for the service routine; we neglect the PSW here in order not to complicate the discussion of I/O interrupts. A priority is established over the various interrupt sources to determine which condition is to be serviced first when two or more requests arrive simultaneously. The system may also determine which conditions are permitted to interrupt the computer while another interrupt is being serviced. Higher-priority interrupt levels are assigned to requests which, if delayed or interrupted, could have serious consequences. Devices with high-speed transfers such as magnetic disks are given high priority, and slow devices such as keyboards receive low priority. When two devices interrupt the computer at the same time, the computer services the device with the higher priority first. When a device interrupt occurs, the daisy-chaining priority method is basically used to determine which device issued the interrupt.

Q. 6. Write and explain all classes of interrupts.

Ans. There are two main classes of interrupts, explained below: 1. Maskable interrupts. 2. Non-maskable interrupts. 1. Maskable interrupts: The commonly used interrupts are called maskable interrupts. The processor can be asked to temporarily ignore such interrupts. These interrupts are temporarily ignored so that the processor can finish the task under execution. The processor inhibits (blocks) these types of interrupts by use of a special interrupt mask bit. This mask bit is part of the condition code register or a special interrupt request input. While the mask bit is set the interrupt is ignored; otherwise the processor services the interrupt. When the processor is free, it will serve these types of interrupts.

2. Non-Maskable Interrupts (NMI): Some interrupts cannot be masked out or ignored by the processor. These are referred to as non-maskable interrupts. They are associated with high-priority tasks that cannot be ignored. Example: system bus faults.

The computer has a non-maskable interrupt (NMI) that can be used for serious conditions that demand the processor's attention immediately. The NMI cannot be ignored by the system unless it is shut off specifically. In general most processors support the non-maskable interrupt (NMI). This interrupt has absolute priority. When it occurs, the processor will finish the current memory cycle and then branch to a special routine written to handle the interrupt request. When an NMI signal is received, the processor immediately stops whatever it was doing and attends to it. That can lead to problems if this type of interrupt is used improperly. The NMI signal is used only for critical problem situations like hardware errors.

Q. 7. Explain about the I/O processor/information processor.

Ans. Input/Output processor/information processor: It is designed to handle input/ output processes of a device or the computer. This processor is separate from the main processor (CPU). I/O processor is similar

to the CPU but it controls input/output operations only. A computer having an I/O processor relieves the CPU of input/output operations. The CPU is the master processor of the computer and it instructs the I/O processor to handle the input/output tasks. The I/O processor cannot work independently and is controlled by the CPU. The I/O processor is composed of commercially available TTL logic circuits that generate the microinstructions necessary to implement the I/O instructions. The I/O processor is fully synchronous with the system clock and the main processor. It receives starting control from the main processor (CPU) whenever an input/output instruction is read from memory. The I/O processor makes use of the system buses after taking permission from the CPU. The CPU can instruct the I/O processor to carry out an I/O task; the I/O processor responds to the CPU by placing a status word at a prescribed location, to be checked by the CPU later on. The CPU informs the I/O processor where to find the I/O program and asks the I/O processor to transfer the data. The I/O processor can detect and correct transmission errors. The I/O processor can have its own I/O registers.

The I/O instructions require six to twelve microseconds to execute. There are I/O instructions for setting or clearing flip-flops, testing the state of flip-flops and moving data between registers in the main processor and the input/output register. I/O processors are specialized devices whose purpose is to take the load of I/O activity off the main CPU. The simplest I/O processor is a DMA controller. Complex I/O processors are full computers dedicated to one task, like NFS servers, X-terminals and terminal concentrators. Other I/O processors include graphics accelerators, channel controllers and network interfaces.

Q. 8. Explain various addressing modes in detail. Ans. The addressing mode specifies a rule for interpreting or modifying the address field of the instruction before the operand is actually referenced. Computers use addressing mode techniques for the purpose of accommodating one or both of the following provisions: (a) To give programming versatility to the user by providing such facilities as pointers to memory, counters for loop control, indexing of data, and program relocation. (b) To reduce the number of bits in the addressing field of the instruction. The following addressing modes are described below: 1. Immediate addressing mode: In this mode the operand is specified in the instruction itself. In other words, an immediate-mode instruction has an operand field rather than an address field. The operand field contains the actual operand to be used in conjunction with the operation specified in the instruction. Immediate-mode instructions are useful for initialising registers to a constant value.

2. Register mode: In this mode, the operands are in registers that reside within the CPU. The particular register is selected from a register field in the instruction. A k-bit field can specify any one of 2^k registers. 3. Register indirect mode: In this mode the instruction specifies a register in the CPU whose contents give the address of the operand in memory. In other words, the selected register contains the address of the operand rather than the operand itself. Before using a register indirect mode instruction, the programmer must ensure that the memory address of the operand is placed in the processor register with a previous instruction. A reference to the register is then equivalent to specifying a memory address. The advantage of a register indirect mode instruction is that the address field of the instruction uses fewer bits to select a register than would have been required to specify a memory address directly. 4. Autoincrement or autodecrement mode: This is similar to the register indirect mode except that the register is incremented or decremented after (or before) its value is used to access memory. When the address stored in the register refers to a table of data in memory, it is necessary to increment or decrement the

register after every access to the table. This can be achieved by using an increment or decrement instruction. However, because it is such a common requirement, some computers incorporate a special mode that automatically increments or decrements the contents of the register after the data access. 5. Direct address mode: In this mode the effective address is equal to the address part of the instruction. The operand resides in memory and its address is given directly by the address field of the instruction. In a branch-type instruction the address field specifies the actual branch address. For the following modes, the effective address is obtained from the computation: effective address = address part of instruction + content of a CPU register. 6. Relative address mode: In this mode, the content of the program counter is added to the address part of the instruction in order to obtain the effective address. The address part of the instruction is usually a signed number (in 2's complement representation) which can be either positive or negative. When this number is added to the content of the program counter, the result produces an effective address whose

position in memory is relative to the address of the next instruction. 7. Indexed addressing mode: In this mode, the content of an index register is added to the address part of the instruction to obtain the effective address. The index register is a special CPU register that contains an index value. The address field of the instruction defines the beginning address of a data array in memory. Each operand in the array is stored in memory relative to the beginning address. The distance between the beginning address and the address of the operand is the index value stored in the index register. Any operand in the array can be accessed with the same instruction provided that the index register contains the correct index value. The index register can be incremented to facilitate access to consecutive operands. 8. Base register addressing mode: In this mode, the content of a base register is added to the address part of the instruction to obtain the effective address. This is similar to the indexed addressing mode except that the register is now called a base register instead of an index register.
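A short illustrative sketch of effective-address computation for a few of these modes follows; the register values and the function effective_address are assumptions chosen for the example, not part of any particular instruction set.

```python
# Effective-address computation for the direct, relative, indexed and
# base-register modes described above.
def effective_address(mode, address_field, pc=0, index_reg=0, base_reg=0):
    if mode == "direct":
        return address_field
    if mode == "relative":
        return pc + address_field           # program-counter relative
    if mode == "indexed":
        return address_field + index_reg    # array base + index value
    if mode == "base":
        return base_reg + address_field     # base register + displacement
    raise ValueError("mode handled differently (immediate, register, ...)")

print(hex(effective_address("direct",   0x0500)))                    # 0x500
print(hex(effective_address("relative", 0x0020, pc=0x0400)))         # 0x420
print(hex(effective_address("indexed",  0x0100, index_reg=0x0008)))  # 0x108
```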

Q. 9. What is the difference between isolated-mapped I/O and memory-mapped input/output? What are the advantages and disadvantages of each? Ans. Isolated I/O: In isolated-mapped I/O transfer, there is a common address and data bus for main memory and I/O devices. The distinction between a memory transfer and an I/O transfer is made through control lines. There are separate control signals for main memory and I/O devices: memory read, memory write, I/O read and I/O write. This is the isolated I/O method of communication using a common bus. When the CPU fetches and decodes the operation code of an input or output instruction, the address associated with the instruction is placed on the address bus. If that address is meant for an I/O device, then the I/O read or I/O write control signal will be enabled depending upon whether we want to read or write the data from the I/O device. If that address is meant for main memory, then the memory read or memory write signal will be enabled depending upon whether we want to read or write the data to main memory. Memory-mapped I/O: In memory-

mapped I/O, certain address locations are not used by memory, and I/O devices use these addresses. Example: if addresses from 0 to 14 are not used by main memory, these addresses can be assigned as the addresses of I/O devices. That means, with the above example, we can connect 15 I/O devices to the system, having addresses from 0 to 14. So we can have a single set of address, data and control buses. If the address on the address bus belongs to main memory, the memory responds; otherwise the addressed I/O device responds. This reduces the available address space for main memory, but as most modern systems have large main memories, that is not normally a problem. Memory-mapped I/O treats I/O ports as memory locations; the programmer must ensure that a memory-mapped address used by an I/O device is not also used as a regular memory address. The following are the main points of difference between isolated-mapped I/O and memory-mapped I/O.

The advantage of memory-mapped I/O is that the load and store instructions used for reading and writing memory can also be used to input and output data from I/O registers. In a typical computer, there are more memory-reference instructions than I/O instructions; with memory-mapped I/O, all instructions that refer to memory are also available for I/O.
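A tiny sketch of the memory-mapped address decoding described in the example above (the address range 0-14 for I/O is the assumed example from the answer, not a fixed convention):

```python
# Memory-mapped I/O decoding: a small range of addresses is reserved for
# device registers and everything else goes to main memory.
IO_ADDRESSES = range(0, 15)

def decode(address):
    if address in IO_ADDRESSES:
        return "I/O device register"
    return "main memory location"

print(decode(7))       # I/O device register
print(decode(0x200))   # main memory location
```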

Q. 10. When a device interrupt occurs, how does the processor determine which device issued the interrupt? Ans. Data transfer between the CPU and an I/O device is initiated by the CPU. However, the CPU cannot start the transfer unless the device is ready to communicate with the CPU. The CPU responds to the interrupt request by storing the return address from the PC into a memory stack, and then the program branches to a service routine that processes the required transfer.

Some processors also push the current PSW (program status word) onto the stack and load a new PSW for the service routine; we neglect the PSW here in order not to complicate the discussion of I/O interrupts. A priority is established over the various interrupt sources to determine which condition is to be serviced first when two or more requests arrive simultaneously. The system may also determine which conditions are permitted to interrupt the computer while another interrupt is being serviced. Higher-priority interrupt levels are assigned to requests which, if delayed or interrupted, could have serious consequences. Devices with high-speed transfers such as magnetic disks are given high priority, and slow devices such as keyboards receive low priority. When two devices interrupt the computer at the same time, the computer services the device with the higher priority first. When a device interrupt occurs, the daisy-chaining priority method is basically used to determine which device issued the interrupt. The daisy-chaining method of establishing priority consists of a serial connection of all devices that request an interrupt. The device

with the highest priority is placed in the first position, followed by lower-priority devices, up to the device with the lowest priority, which is placed last in the chain. This method of connection between three devices and the CPU is shown in the figure. The interrupt request line is common to all devices and forms a wired-logic connection. If any device has its interrupt signal in the low-level state, the interrupt line goes to the low-level state and enables the interrupt input in the CPU. When no interrupts are pending, the interrupt line stays in the high-level state and no interrupts are recognized by the CPU. This is equivalent to a negative-logic OR operation. The CPU responds to an interrupt request by enabling the interrupt acknowledge line. This signal is received by device 1 at its PI (priority in) input. The acknowledge signal passes on to the next device through the PO (priority out) output only if device 1 is not requesting an interrupt. If device 1 has a pending interrupt, it blocks the acknowledge signal from the next device by placing a 0 in its PO output. It then proceeds to insert its own interrupt vector address (VAD) onto the data bus for the CPU to use during the interrupt cycle. A device with a 0 in its PI input generates a 0 in its PO output to inform the next lower-priority

device that the acknowledge signal has been blocked. A device that is requesting an interrupt and has a 1 in its PI input will intercept the acknowledge signal by placing a 0 in its PO output. If the device does not have a pending interrupt, it transmits the acknowledge signal to the next device by placing a 1 in its PO output. Thus the device with PI = 1 and PO = 0 is the one with the highest priority that is requesting an interrupt, and this device places its VAD on the data bus. The daisy-chain arrangement gives the highest priority to the device that receives the interrupt acknowledge signal from the CPU. The farther the device is from the first position, the lower is its priority.

The figure shows the internal logic that must be included within each device when connected in the daisy-chaining scheme. The device sets its RF flip-flop when it wants to interrupt the CPU. The output of the RF flip-flop goes through an open-collector inverter, a circuit that provides the wired logic for the common interrupt line. If PI = 0, both PO and the enable line to VAD are equal to 0, irrespective of the value of RF. If PI = 1 and RF = 0, then PO = 1 and the vector address is disabled. This condition passes the acknowledge signal to the next device through PO. The device is active when PI = 1 and RF = 1. This condition places a 0 in PO and enables the vector address for the data bus. It is assumed that each device has its own distinct vector address. The RF flip-flop is reset after a sufficient delay to ensure that the CPU has received the vector address.
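The per-device logic just described (PO = PI AND NOT RF, vector address enabled when PI = 1 and RF = 1) can be checked with a small truth-table sketch; the three-device scenario below is an assumed example.

```python
# Daisy-chain stage logic: pass the acknowledge along only if this
# device is not requesting; drive the VAD only if it is requesting
# and the acknowledge has reached it.
def daisy_chain_stage(pi, rf):
    po = bool(pi and not rf)
    enable_vad = bool(pi and rf)
    return po, enable_vad

# Three devices in the chain; devices 2 and 3 have RF = 1 (requesting).
rf_flags = [0, 1, 1]
pi = True                      # interrupt acknowledge arriving from the CPU
for i, rf in enumerate(rf_flags):
    pi, enable = daisy_chain_stage(pi, rf)
    print(f"device {i + 1}: PO={int(pi)} VAD enabled={enable}")
# device 1: PO=1 VAD enabled=False
# device 2: PO=0 VAD enabled=True   <- highest-priority requester wins
# device 3: PO=0 VAD enabled=False
```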

operation only. The computer having I/O processor relieves CPU from input output burden. I/O processor cannot work independently and is controlled by the CPU. If I/O processors have been removed and given its job to a general purpose CPU. It is specialized devices whose purpose is to take load of I/O activity from the main CPU.

Q. 2. Write the major requirements for an I/O module. Ans. An I/O module consists of the following main components as its requirements: (a) Connection to the system bus. (b) Interface module. (c) Data buffer. (d) Control logic gates. (e) Status/control register. All of these are basic requirements of an I/O module.

Q. 3. Write the characteristics of I/O channels.

Ans. The characteristics of I/O channels are given below: 1. An I/O channel is a data transfer technique adopted for peripherals. 2. An I/O channel has the ability to execute I/O instructions. These instructions are stored in main memory and are executed by a special-purpose processor of the I/O channel. 3. A multiplexer I/O channel handles I/O with multiple devices at the same time. 4. The I/O channel is the concept in which a processor is used as the I/O module with its own local memory.

Q. 4. What is a channel?
Ans. Channel: A channel is a path over which data is transferred. It is also a data transfer technique adopted by various devices, and it acts as an interface between those devices. An I/O channel is used with peripherals: the I/O instructions are stored in main memory and are executed by a special-purpose processor of the I/O channel. There are various types of channels, e.g., the multiplexer channel, the selector channel and the block multiplexer channel. Data is transferred between the devices and memory.

Q. 5. (a) Explain about I/O modes.
Ans. The CPU executes the I/O instructions and may accept the data temporarily, but the ultimate source or destination is the memory unit. Data transfer between the central computer and I/O devices is handled in a variety of I/O modes. Some I/O modes use the CPU as an intermediate path; others transfer the data directly to and from the memory unit. Data transfer to and from peripherals may be handled in the following I/O modes:
(a) Programmed I/O mode
(b) Interrupt-initiated I/O mode
(c) Direct Memory Access (DMA).
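To make the programmed I/O mode concrete, the following C sketch shows the CPU busy-waiting on a device status register and then moving each byte itself. The register addresses DEV_STATUS and DEV_DATA and the READY bit are hypothetical values chosen for illustration, not taken from any particular machine.

/* Minimal sketch of programmed I/O: the CPU polls a status register
 * before copying each byte to memory itself. */
#include <stdint.h>
#include <stddef.h>

#define DEV_STATUS ((volatile uint8_t *)0x4000)  /* assumed status register  */
#define DEV_DATA   ((volatile uint8_t *)0x4001)  /* assumed data register    */
#define READY      0x01                          /* assumed "data ready" bit */

void programmed_io_read(uint8_t *buf, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        while ((*DEV_STATUS & READY) == 0)
            ;                    /* CPU is tied up polling the device      */
        buf[i] = *DEV_DATA;      /* CPU itself copies the byte into memory */
    }
}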

Q. 5. (b) What is the basic function of an interrupt controller?

Ans. Data transfer between the CPU and an I/O device is initiated by the CPU. However, the CPU cannot start the transfer unless the device is ready to communicate with the CPU. The CPU responds to an interrupt request by storing the return address from the PC into a memory stack, and then the program branches to a service routine that processes the required transfer. Some processors also push the current PSW (program status word) onto the stack and load a new PSW for the service routine; we neglect the PSW here in order not to complicate the discussion of I/O interrupts. The interrupt controller establishes a priority over the various sources to determine which condition is to be serviced first when two or more requests arrive simultaneously. It may also determine which conditions are permitted to interrupt the computer while another interrupt is being serviced. Higher-priority interrupt levels are assigned to requests which, if delayed or interrupted, could have serious consequences. Devices with high-speed transfers such as magnetic disks are given high priority, and slow devices such as keyboards receive low priority. When two devices interrupt the computer at the same time, the computer services the device with the higher priority first. When a device interrupt occurs, the daisy-chaining priority method can be used to determine which device issued the interrupt.

Q. 6. Write and explain all classes of interrupts. Ans. There are two main classes of interrupts, explained below: 1. Maskable interrupts. 2. Non-maskable interrupts. 1. Maskable interrupts: The commonly used interrupts are maskable interrupts. The processor can be asked to temporarily ignore such interrupts. These interrupts are temporarily ignored so that the processor can finish the task under execution. The processor inhibits (blocks) these types of interrupts by means of a special interrupt mask bit. This mask bit is part of the condition code register or a special interrupt request input. While the mask bit is set the interrupt is ignored; otherwise the processor services the interrupt. When the processor is free, it serves these types of interrupts.

2. Non-Maskable Interrupts (NMI): Some interrupts cannot be masked out or ignored by the processor. These are referred to as non-maskable interrupts. They are associated with high-priority tasks that cannot be ignored, for example system bus faults. The computer has a non-maskable interrupt (NMI) that can be used for serious conditions that demand the processor's attention immediately. The NMI cannot be ignored by the system unless it is shut off specifically. In general most processors support the non-maskable interrupt (NMI). This interrupt has absolute priority. When it occurs the processor finishes the current memory cycle and then branches to a special routine written to handle the interrupt request. When an NMI signal is received the processor immediately stops whatever it is doing and attends to it. That can lead to problems if this type of interrupt is used improperly. The NMI signal is used only for critical problem situations such as hardware errors.
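As a rough illustration of how the two classes differ, the following C sketch models the check a processor might perform at the end of each instruction cycle. The structure, field names and the single mask bit are illustrative assumptions, not a description of any specific processor.

/* Sketch of how maskable and non-maskable requests are treated. */
#include <stdbool.h>

struct cpu_state {
    bool interrupt_mask;   /* 1 = maskable interrupts temporarily blocked */
    bool maskable_pending; /* e.g. keyboard, timer                        */
    bool nmi_pending;      /* e.g. bus fault, power failure               */
};

/* Called at the end of every instruction cycle; returns which request,
 * if any, will be serviced next. */
int pending_interrupt(const struct cpu_state *cpu)
{
    if (cpu->nmi_pending)                       /* NMI: cannot be ignored     */
        return 2;                               /* absolute priority          */
    if (cpu->maskable_pending && !cpu->interrupt_mask)
        return 1;                               /* serviced only if unmasked  */
    return 0;                                   /* keep executing the program */
}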

Q. 7. Explain about I/O processor/information processor.

Ans. Input/Output processor/information processor: It is designed to handle the input/output processes of a device or the computer. This processor is separate from the main processor (CPU). An I/O processor is similar to the CPU but it controls input/output operations only. A computer having an I/O processor relieves the CPU of the input/output burden. The CPU is the master processor of the computer and it instructs the I/O processor to handle the input/output tasks. The I/O processor cannot work independently and is controlled by the CPU. The I/O processor is composed of commercially available TTL logic circuits that generate the micro-instructions necessary to implement the I/O instructions. The I/O processor is fully synchronous with the system clock and the main processor. It receives starting control from the main processor (CPU) whenever an input/output instruction is read from memory. The I/O processor makes use of the system buses after taking permission from the CPU. The CPU can instruct the I/O processor to report its status; the I/O processor responds by placing a status word at a prescribed location to be checked by the CPU later on. The CPU informs the I/O processor where to find the I/O program and asks the I/O processor to transfer the data. The I/O processor can detect and correct transmission errors, and it can have its own I/O registers.

The I/O instructions require six to twelve microseconds to execute. There are I/O instructions for setting or clearing flip-flops, testing the state of flip-flops, and moving data between registers in the main processor and the input/output registers. I/O processors are specialized devices whose purpose is to take the load of I/O activity off the main CPU. The simplest I/O processor is a DMA controller. Complex I/O processors are full computers dedicated to one task, like NFS servers, X-terminals and terminal concentrators. Other I/O processors include graphics accelerators, channel controllers and network interfaces.

Q. 8. Explain various addressing modes in detail. Ans. The addressing mode specifies a rule for interpreting or modifying the address field of the instruction before the operand is actually referenced. Computers use addressing mode techniques for the purpose of accommodating one or both of the following provisions: (a) To give programming versatility to the user by providing such facilities as pointers to memory, counters for loop control, indexing of data, and program relocation. (b) To reduce the number of bits in the addressing field of the instruction. The addressing modes are given below:
1. Immediate addressing mode: In this mode the operand is specified in the instruction itself. In other words, an immediate-mode instruction has an operand field rather than an address field. The operand field contains the actual operand to be used in conjunction with the operation specified in the instruction. Immediate-mode instructions are useful for initialising registers to a constant value.
2. Register mode: In this mode, the operands are in registers that reside within the CPU. The particular register is selected from the register field in the instruction. A k-bit field can specify any one of 2^k registers.
3. Register indirect mode: In this mode the instruction specifies a register in the CPU whose contents give the address of the operand in memory. In other words, the selected register contains the address of the operand rather than the operand itself. Before using a register indirect mode instruction, the programmer must ensure that the memory address of the operand has been placed in the processor register by a previous instruction. A reference to the register is then equivalent to specifying a memory address. The advantage of a register indirect mode instruction is that the address field of the instruction uses fewer bits to select a register than would have been required to specify a memory address directly.
4. Auto-increment or auto-decrement mode: This is similar to the register indirect mode except that the register is incremented or decremented after (or before) its value is used to access memory. When the address stored in the register refers to a table of data in memory, it is necessary to increment or decrement the register after every access to the table. This can be achieved with separate increment or decrement instructions. However, because it is such a common requirement, some computers incorporate a special mode that automatically increments or decrements the contents of the register after the data access.
5. Direct address mode: In this mode the effective address is equal to the address part of the instruction. The operand resides in memory and its address is given directly by the address field of the instruction. In a branch-type instruction the address field specifies the actual branch address.
In the following three modes the effective address is obtained from the computation: effective address = address part of instruction + content of a CPU register.
6. Relative address mode: In this mode, the content of the program counter is added to the address part of the instruction in order to obtain the effective address. The address part of the instruction is usually a signed number (in 2's complement representation) which can be either positive or negative. When this number is added to the content of the program counter, the result produces an effective address whose position in memory is relative to the address of the next instruction.
7. Indexed addressing mode: In this mode, the content of an index register is added to the address part of the instruction to obtain the effective address. The index register is a special CPU register that contains an index value. The address field of the instruction defines the beginning address of a data array in memory. Each operand in the array is stored in memory relative to the beginning address. The distance between the beginning address and the address of the operand is the index value stored in the index register. Any operand in the array can be accessed with the same instruction provided that the index register contains the correct index value. The index register can be incremented to facilitate access to consecutive operands.
8. Base register addressing mode: In this mode, the content of a base register is added to the address part of the instruction to obtain the effective address. This is similar to the indexed addressing mode except that the register is now called a base register instead of an index register.
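A minimal C sketch of the effective-address computation for several of these modes is given below. The instruction fields, register names and 16-bit widths are illustrative assumptions rather than a real instruction set.

/* Sketch of effective-address computation for some of the modes above. */
#include <stdint.h>

enum mode { IMMEDIATE, REG_INDIRECT, DIRECT, RELATIVE, INDEXED, BASE_REG };

struct instr { enum mode mode; uint16_t addr_field; uint8_t reg; };
struct cpu   { uint16_t pc, xr, base, reg[8]; };   /* pc, index reg, base reg */

/* Returns the effective address; immediate mode has no address at all. */
uint16_t effective_address(const struct instr *i, const struct cpu *c)
{
    switch (i->mode) {
    case REG_INDIRECT: return c->reg[i->reg];                      /* EA = (Rn)       */
    case DIRECT:       return i->addr_field;                       /* EA = addr       */
    case RELATIVE:     return (uint16_t)(i->addr_field + c->pc);   /* EA = addr + PC  */
    case INDEXED:      return (uint16_t)(i->addr_field + c->xr);   /* EA = addr + XR  */
    case BASE_REG:     return (uint16_t)(i->addr_field + c->base); /* EA = addr + BR  */
    default:           return 0;  /* immediate: operand is in the instruction itself */
    }
}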

Q. 9. What is the difference between isolated mapped I/O and memory mapped input output? What are the advantages and disadvantages of each? Ans. Isolated I/O: In isolated mapped I/O transfer, there is a common address and data bus for main memory and I/O devices. The distinction between a memory transfer and an I/O transfer is made through control lines. There are separate control signals for main memory and I/O devices: memory read, memory write, I/O read and I/O write. This is the isolated I/O method of communication using a common bus. When the CPU fetches and decodes the operation code of an input or output instruction, the address associated with the instruction is placed on the address bus. If that address is meant for an I/O device then the I/O read or I/O write control signal is enabled, depending upon whether we want to read or write data from the I/O device. If that address is meant for main memory then the memory read or memory write signal is enabled, depending upon whether we want to read or write data to main memory.
Memory-mapped I/O: In memory-mapped I/O, certain address locations are not used by memory, and I/O devices use these addresses. Example: if addresses from 0 to 14 are not used by main memory, then these addresses can be assigned as the addresses of I/O devices. That means, with the above example, we can connect 15 I/O devices to the system, having addresses from 0 to 14. So we can have a single set of address, data and control buses. If the address on the address bus belongs to main memory, a memory transfer takes place; otherwise an I/O device is selected. This reduces the available address space for main memory, but as most modern systems have a large main memory this is not normally a problem. Memory-mapped I/O treats I/O ports as memory locations; the programmer must ensure that a memory-mapped address used by an I/O device is not used as a regular memory address. The main differences between isolated I/O and memory-mapped I/O follow from the above.

The advantage of memory-mapped I/O is that the load and store instructions used for reading and writing memory can also be used to input and output data from I/O registers. In a typical computer there are more memory-reference instructions than I/O instructions; with memory-mapped I/O, all instructions that refer to memory are also available for I/O.
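The practical consequence can be sketched in C as follows. With memory-mapped I/O an ordinary store instruction reaches the device register; the address 0x000E (one of the 0 to 14 addresses in the example above) and the register name are purely illustrative. Isolated I/O, by contrast, needs a dedicated I/O instruction, which is only noted in the closing comment.

/* Sketch contrasting the two schemes. */
#include <stdint.h>

#define UART_TX ((volatile uint8_t *)0x000E)   /* assumed memory-mapped device register */

void memory_mapped_write(uint8_t ch)
{
    *UART_TX = ch;              /* a plain store instruction performs the output */
}

/* With isolated I/O the same transfer requires a dedicated I/O instruction
 * (e.g. x86 OUT), which C can only reach through inline assembly, roughly:
 *     __asm__ volatile ("outb %0, %1" : : "a"(ch), "Nd"(port));
 */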

Q. 10. When a device interrupt occurs, how does the processor determine which device issued the interrupt?
Ans. Data transfer between the CPU and an I/O device is initiated by the CPU. However, the CPU cannot start the transfer unless the device is ready to communicate with the CPU. The CPU responds to the interrupt request by storing the return address from the PC into a memory stack, and then the program branches to a service routine that processes the required transfer. Some processors also push the current PSW (program status word) onto the stack and load a new PSW for the service routine; we neglect the PSW here in order not to complicate the discussion of I/O interrupts. A priority is established over the various sources to determine which condition is to be serviced first when two or more requests arrive simultaneously. The system may also determine which conditions are permitted to interrupt the computer while another interrupt is being serviced. Higher-priority interrupt levels are assigned to requests which, if delayed or interrupted, could have serious consequences. Devices with high-speed transfers such as magnetic disks are given high priority, and slow devices such as keyboards receive low priority. When two devices interrupt the computer at the same time, the computer services the device with the higher priority first. When a device interrupt occurs, the daisy-chaining priority method is commonly used to determine which device issued the interrupt.
The daisy-chaining method of establishing priority consists of a serial connection of all devices that request an interrupt. The device with the highest priority is placed in the first position, followed by lower-priority devices, up to the device with the lowest priority, which is placed last in the chain. This method of connection between three devices and the CPU is shown in the figure. The interrupt request line is common to all devices and forms a wired-logic connection. If any device has its interrupt signal in the low-level state, the interrupt line goes to the low-level state and enables the interrupt input in the CPU. When no interrupts are pending, the interrupt line stays in the high-level state and no interrupts are recognized by the CPU. This is equivalent to a negative-logic OR operation. The CPU responds to an interrupt request by enabling the interrupt acknowledge line. This signal is received by device 1 at its PI (priority in) input. The acknowledge signal passes on to the next device through the PO (priority out) output only if device 1 is not requesting an interrupt. If device 1 has a pending interrupt, it blocks the acknowledge signal from the next device by placing a 0 in its PO output. It then proceeds to insert its own interrupt vector address (VAD) onto the data bus for the CPU to use during the interrupt cycle. A device with a 0 in its PI input generates a 0 in its PO output to inform the next lower-priority device that the acknowledge signal has been blocked. A device that is requesting an interrupt and has a 1 in its PI input will intercept the acknowledge signal by placing a 0 in its PO output. If the device does not have a pending interrupt, it transmits the acknowledge signal to the next device by placing a 1 in its PO output. Thus the device with PI = 1 and PO = 0 is the one with the highest priority that is requesting an interrupt, and this device places its VAD on the data bus. The daisy-chain arrangement gives the highest priority to the device that receives the interrupt acknowledge signal from the CPU. The farther a device is from the first position, the lower its priority.
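The following C sketch models the ripple of the acknowledge signal described above in software: device 0 is closest to the CPU, and the first device found with a pending request blocks the acknowledge and supplies its vector address. The device count and vector addresses are made-up example values.

/* Software model of daisy-chain priority resolution. */
#include <stdio.h>
#include <stdbool.h>

struct device { bool request; unsigned vad; };   /* vad = vector address */

/* Returns the vector address placed on the data bus, or -1 if no request. */
int daisy_chain_ack(const struct device dev[], int n)
{
    for (int i = 0; i < n; i++) {        /* i = 0 is the device closest to the CPU */
        if (dev[i].request)              /* PI = 1 and RF = 1: intercept            */
            return (int)dev[i].vad;      /* block PO, drive VAD onto the data bus   */
        /* otherwise PO = PI = 1: pass the acknowledge on to the next device        */
    }
    return -1;                           /* no pending interrupt                    */
}

int main(void)
{
    struct device chain[3] = { {false, 0x40}, {true, 0x44}, {true, 0x48} };
    printf("VAD = 0x%X\n", daisy_chain_ack(chain, 3));  /* prints 0x44 */
    return 0;
}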

Figure shows the internal logic that must be included within each device when connected in the daisy-chaining scheme. The device sets its RF flip-flop when it wants to interrupt the CPU. The output of the RF flip-flop goes through an open-collector inverter, a circuit that provides the wired logic for the common interrupt line. If PI = 0, both PO and the enable line to VAD are equal to 0, irrespective of the value of RF. If PI = 1 and RF = 0, then PO = 1 and the vector address is disabled. This condition passes the acknowledge signal to the next device through PO. The device is active when PI = 1 and RF = 1. This condition places a 0 in PO and enables the vector address onto the data bus. It is assumed that each device has its own distinct vector address. The RF flip-flop is reset after a sufficient delay to ensure that the CPU has received the vector address.

Q. 1. Briefly write about the 8255 chip.
Ans. The 8255 is a 40-pin I/O chip programmed as a simple I/O port for connection with devices such as LCDs, stepper motors and analog-to-digital converters. This chip allows us to have both digital input and output (I/O) to the computer. The chip has three ports. Each individual port of the chip can be programmed as input or output. These ports have handshaking capability.

Q. 2. Explain about parallel interface taking the example of 8255.

Ans. The CPU programs the 8255 by writing a control byte, as shown in the figure.

The 8255 is a 40-pin I/O chip organised as a simple I/O port for connection with devices such as LCDs, stepper motors and analog-to-digital converters. The chip allows us to have both digital input and output (I/O) to the computer. The chip has three ports named A, B and C. Each individual port of the chip can be programmed as input or output. These ports have handshaking capability.
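A minimal sketch of programming the 8255 follows. It assumes the chip's three ports and control register appear at the hypothetical addresses 0x60 to 0x63, and it uses a mode-set control word with bit 7 = 1; the exact addresses and the chosen control byte are assumptions for illustration only, since the actual mapping is board-specific.

/* Sketch of configuring an 8255 by writing its control byte. */
#include <stdint.h>

#define PORT_A   ((volatile uint8_t *)0x60)   /* assumed addresses */
#define PORT_B   ((volatile uint8_t *)0x61)
#define PORT_C   ((volatile uint8_t *)0x62)
#define CTRL_REG ((volatile uint8_t *)0x63)

void init_8255(void)
{
    /* 1000 0000b: mode-set flag = 1, mode 0, ports A, B and C as outputs
     * (assumed control word for this illustration)                       */
    *CTRL_REG = 0x80;
    *PORT_A   = 0x55;          /* drive a test pattern on port A */
}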

Q. 3. Provide a typical list of the inputs and outputs of the control unit. Ans. The CPU is partitioned into the arithmetic logic unit (ALU) and the control unit (CU). The function of the control unit is to generate the relevant control signals that select and sequence the data-processing operations. The master clock pulse generator applies clock pulses to all flip-flops and registers. These clock pulses cannot change the state of the flip-flops and registers until the corresponding control signal is enabled.

The typical inputs and outputs of the control unit are: 1. Memory Read (MRD) to main memory. 2. Memory Write (MWR) to main memory. 3. I/O Read (IORD) to the I/O controller. 4. I/O Write (IOWR) to the I/O controller. 5. Select signals for the various I/O devices attached to the I/O controller.

The control unit issues signals to initiate different operations inside the computer. The main memory can receive two kinds of control signals. One is memory read (MRD), which the control unit generates when data is to be read from the memory. The second kind of control signal is memory write (MWR), which the control unit generates when data is to be written to memory. Similarly, the I/O controller can receive two kinds of control signals from the control unit: I/O read (IORD) to read data from an I/O device, and I/O write (IOWR) to write data to an I/O device. The I/O controller selects specific devices. Also, micro-operations such as AC ← R1 and PC ← PC + 1 execute only if the control unit issues the control signals for their execution and the master clock pulse generator issues the clock pulses.
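As a small illustration of the gating just described, the sketch below shows a register transfer that takes effect only when its control signal is asserted at a clock pulse; the signal names and the AC ← R1 micro-operation are illustrative, not a fixed list.

/* Sketch: a control signal gates a register transfer at each clock pulse. */
#include <stdint.h>
#include <stdbool.h>

enum ctl_signal { MRD, MWR, IORD, IOWR, LOAD_AC, NUM_SIGNALS };

uint16_t R1, AC;

/* Called once per master clock pulse with the signals the control
 * unit has asserted for this cycle. */
void clock_tick(const bool sig[NUM_SIGNALS])
{
    if (sig[LOAD_AC])          /* micro-operation AC <- R1 happens only now */
        AC = R1;
    /* MRD, MWR, IORD, IOWR would gate memory and I/O transfers similarly */
}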

Q. 4. What are the elements considered in bus design? Ans. The number of wires would be excessive if separate lines were used between each register and all other registers in the system. A more efficient scheme for transferring information between registers in a multiple-register configuration is a common bus system. A bus structure consists of a set of common lines, one for each bit of a register, through which binary information is transferred one at a time. Control signals determine which register is selected by the bus during a particular register transfer. One way of constructing a common bus system is with multiplexers. The multiplexers select the source register whose binary information is then placed on the bus. The construction of a bus system for four registers is shown in the figure. Each register has four bits, numbered 0 through 3. The bus consists of four 4 x 1 multiplexers, each having four data inputs, 0 through 3, and two selection inputs, S1 and S0. In order not to complicate the diagram with 16 lines crossing each other, labels are used to show the connections from the outputs of the registers to the inputs of the multiplexers. For example, output 1 of register A is connected to input 0 of MUX 1 because this input is labelled A1. The diagram shows that the bits in the same significant position in each register are connected to the data inputs of one multiplexer to form one line of the bus. Thus MUX 0 multiplexes the four bit-0 outputs of the registers, MUX 1 multiplexes the four bit-1 outputs, and similarly for the other two bits.
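The multiplexer-based bus can be mimicked in a few lines of C: the same pair of select lines S1 S0 feeds all four multiplexers, so they all pick bits from the same source register. The register width and names follow the four-register example above; the function itself is only a software model of the hardware.

/* Sketch of the 4-register common bus built from 4 x 1 multiplexers. */
#include <stdint.h>

/* Each register is 4 bits wide; regs[0..3] correspond to registers A..D. */
uint8_t bus_value(const uint8_t regs[4], unsigned s1, unsigned s0)
{
    unsigned select = (s1 << 1) | s0;   /* the same select lines feed all four MUXes */
    return regs[select] & 0x0F;         /* the selected register drives the bus      */
}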

Q. 5. What is an instruction cycle? Explain. Ans. A program in a computer consists of a sequence of instructions. Executing these instructions runs the program in the computer. Each instruction is further divided into a sequence of phases. The concept of execution of an instruction through different phases is called the instruction cycle. The instruction cycle is divided into sub-phases as specified ahead.
1. First of all an instruction is fetched (accessed) from memory.
2. Then that instruction is decoded.
3. A decision is made whether it is a memory, register or I/O reference instruction; in the case of a memory indirect address, the effective address is read from memory.
4. Finally the instruction is executed.
1. Fetch phase: The sequence counter (SC) is initialized to 0. The program counter (PC) contains the address of the first instruction of the program under execution. The address of the first instruction from the PC is loaded into the address register (AR) during the first clock cycle (T0). Then the instruction from the memory location given by the address register (AR) is loaded into the instruction register (IR), and the program counter is incremented to the address of the next instruction, during the second clock cycle (T1). These micro-operations in register transfer language are shown ahead:
T0: AR ← PC
T1: IR ← M[AR], PC ← PC + 1

2. Decode phase: All bits of the instruction under execution stored in IR are analysed and decoded during the third clock cycle (T2):
T2: D0, D1, ..., D7 ← decode IR(12-14), I ← IR(15), AR ← IR(0-11)
3. Decision phase: First of all the decoded bits IR(12-14) of the instruction are examined.
(i) If decoder output D7 (opcode 111) is 1, the instruction must be a register reference or I/O reference instruction. Then the last bit of the instruction register, IR(15), stored in flip-flop I, is decoded. If IR(15) = 0, the instruction is a register reference instruction; if IR(15) = 1, the instruction is an I/O reference instruction.
(ii) If decoder output D7 is not 1, then IR(12-14) decodes to one of D0 to D6, which implies that the instruction is a memory reference instruction. Then the last bit of the instruction register, IR(15), stored in flip-flop I, is decoded. If IR(15) = 0, the memory reference instruction uses a direct address; if IR(15) = 1, it uses an indirect address. The first twelve bits of the instruction are used as the address of the memory location.

4. Execution phase: Finally, the instruction is executed as a memory, register or input/output reference instruction during the fourth clock cycle (T3).
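The fetch, decode and decision phases above can be sketched as C code for a 16-bit accumulator machine with a 12-bit address field (bit 15 = I, bits 14-12 = opcode, bits 11-0 = address). The memory size and variable names are illustrative assumptions, and the sketch is a software model rather than a hardware description.

/* Sketch of the fetch, decode and decision phases of the instruction cycle. */
#include <stdint.h>

uint16_t M[4096];                 /* main memory                          */
uint16_t PC, AR, IR;              /* program counter, address reg, instr  */
unsigned I, opcode;

void fetch_decode(void)
{
    AR = PC;                      /* T0: AR <- PC                         */
    IR = M[AR];                   /* T1: IR <- M[AR], PC <- PC + 1        */
    PC = (PC + 1) & 0x0FFF;
    opcode = (IR >> 12) & 0x7;    /* T2: decode IR(12-14)                 */
    I      = (IR >> 15) & 0x1;    /*     I  <- IR(15)                     */
    AR     = IR & 0x0FFF;         /*     AR <- IR(0-11)                   */
    if (opcode != 7 && I)         /* memory reference, indirect address   */
        AR = M[AR] & 0x0FFF;      /* T3: AR <- M[AR] (effective address)  */
}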

Q. 6. Discuss three instructions each from the memory reference, register reference and I/O reference instructions in detail.

Ans. 1. Memory Reference Instructions: Three memory reference instructions are explained ahead.
(a) AND: This instruction performs the AND logic operation on the data stored in AC with an operand accessed from the memory location given by AR. The result of the operation is stored back in AC. Suppose the opcode for the AND operation is 000 (D0). The micro-operations for its execution are shown ahead.

(b) ADD: This instruction adds the contents of AC to the contents of the memory location specified by the effective address. The result of the operation is stored in AC and the carry is transferred to the E flip-flop. The fetch and decode operations are similar to those of the AND operation. Suppose the opcode for the ADD operation is 001 (D1). The micro-operations for its execution are shown ahead.
(c) LDA: This instruction transfers the contents of a memory location to AC. The memory location is specified by the effective address. Suppose the opcode for LDA is 010 (D2). The fetch and decode operations are similar to those of the AND operation. The micro-operations for its execution are shown ahead (see the sketch after this answer).

2. Register Reference Instructions: Three register reference instructions are explained ahead. The register reference instructions are executed when D7 = 1 and I = 0 at time T3. The remaining 12 bits, IR(0-11), are used for twelve register reference instructions, so each bit represents one instruction. Although these 12 bits could encode many more combinations, that many instructions are generally not required. The control signal common to all register reference instructions is r = D7 I' T3. Bi specifies the bit position of IR(0-11) that is set to 1 in order to execute one of the twelve register reference instructions.
(a) CLA: This instruction clears the contents of AC. The micro-operation for the execution of the instruction is shown ahead.

(b) CLE: This instruction clears the contents of the flip-flop E. The micro-operation for the execution of the instruction is shown ahead.
(c) CMA: This instruction complements the contents of the accumulator (AC). The micro-operation for the execution of the instruction is shown ahead.
This instruction complements each bit of the data available in the accumulator, and the result of the operation is also stored in AC.

3. Input/Output Reference Instructions: The I/O reference instructions are executed when D7 = 1 and I = 1 at time T3. The 12 bits IR(0-11) are used to represent the I/O reference instructions, one bit per instruction. The control signal common to all I/O reference instructions is p = D7 I T3. Bi specifies the bit position of IR(0-11) that is set to 1 in order to execute one of the I/O reference instructions. Three I/O reference instructions are explained ahead.
(a) INP: This instruction transfers the input information from the 8-bit input register (INPR) into the lower eight bits of AC and clears the input flag (FGI) to 0. The micro-operation for the execution of the instruction is shown ahead.
(b) OUT: This instruction transfers the lower 8 bits of AC into the output register (OUTR) and clears the output flag (FGO) to 0. The micro-operation for the execution of the instruction is shown ahead.
(c) SKI (Skip on Input Flag): If the input flag FGI = 1, the next instruction is skipped and the contents of the PC are incremented to the next value. The micro-operation for the execution of the instruction is shown ahead (see the sketch after this answer).
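Continuing the illustrative accumulator machine from the earlier sketch, the code below models the execute-phase effect of the memory reference instructions AND, ADD and LDA and of the I/O reference instructions INP, OUT and SKI described in this answer. Register widths, variable names and the treatment of the flags are assumptions consistent with those descriptions, not a definitive implementation.

/* Execute-phase semantics of the instructions discussed above (software model). */
#include <stdint.h>

uint16_t M[4096], AR, PC, AC;     /* memory, address reg, program counter, accumulator */
unsigned E;                       /* carry flip-flop                                   */
uint8_t  INPR, OUTR;              /* 8-bit input and output registers                  */
unsigned FGI, FGO;                /* input and output flags                            */

void exec_and(void) { AC &= M[AR]; }                            /* D0: AC <- AC AND M[AR]           */

void exec_add(void)                                             /* D1: AC <- AC + M[AR], E <- carry */
{
    uint32_t sum = (uint32_t)AC + M[AR];
    AC = (uint16_t)sum;
    E  = (sum >> 16) & 1;
}

void exec_lda(void) { AC = M[AR]; }                             /* D2: AC <- M[AR]                  */

void exec_inp(void) { AC = (AC & 0xFF00) | INPR; FGI = 0; }     /* AC(0-7) <- INPR, clear FGI       */
void exec_out(void) { OUTR = (uint8_t)(AC & 0x00FF); FGO = 0; } /* OUTR <- AC(0-7), clear FGO       */
void exec_ski(void) { if (FGI) PC = (PC + 1) & 0x0FFF; }        /* skip next instruction if FGI = 1 */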

Q. 7. Explain the block diagram of the 8251 chip. Ans. A programmable communication interface is used for serial data transmission. The Intel 8251 is a programmable communication interface: a Universal Synchronous/Asynchronous Receiver/Transmitter (USART). It is compatible with the 8085, 8086, 8088 and 8748 processors. The 8251 can be used to transmit and receive serial data. It accepts data in parallel format from the microprocessor and converts it into serial data for transmission. It also receives serial data, converts it into parallel format and sends the data in parallel to the CPU. The figure shows a schematic diagram of the Intel 8251. The description of some pins is as follows:
C/D (Control/Data): When it is low, data is transferred on the data bus; when it is high, control/status information is transferred on the data bus.
RD (Read): When it is low, the CPU reads data from the 8251.
WR (Write): When it is low, the CPU writes data to the 8251.
TXC (Transmitter Clock): It governs the rate of data transmission.
TXE (Transmitter Empty): TXE goes high when the 8251 has no characters to transmit.
TXRDY (Transmitter Ready).
RXRDY (Receiver Ready).
RXD: Line for receiving serial data.
TXD: Line for serial data transmission.
RXC (Receiver Clock): It governs the rate at which characters are received.

Q. 8. A program uses a memory unit with 256K words of 32 bits each. A binary instruction code is stored in one word of memory. The instruction is divided into four parts: an indirect bit, an operation code, a register code to specify one of 64 registers, and an address part. (i) How many bits are there in the operation code, the register code and the address part? (ii) Draw the instruction word format and indicate the number of bits in each part.

(iii) How many bits are there in the data and address inputs of the memory?
Solution. (i) The size of each word is 32 bits. Since 256K words = 2^18 words, the address part needs 18 bits. To specify one of 64 = 2^6 registers the register code needs 6 bits, and the indirect bit takes 1 bit. The operation code therefore has 32 - (18 + 6 + 1) = 7 bits.
(ii) Instruction word format (32 bits): a 1-bit indirect field, a 7-bit operation code, a 6-bit register code and an 18-bit address part.
(iii) The data input of the memory is 32 bits, since the size of each word is 32 bits. The address input of the memory is 18 bits, since 18 bits are required to address 256K words.
