
DEPARTMENT OF CS & ENGG
VIII SEMESTER B.E. (Autonomous)
Advanced Computer Architecture (CS801)
QUESTION BANK


Note: All the questions below are only indicative and they may or may not appear in the examination.

Questions carrying one/two marks

1. Define supercomputer. (1)
2. Mention the supercomputer for the year 2011. (1)
3. Define multicore processor. (1)
4. Mention 2 important features of an embedded computer. (2)
5. Define ISA. (1)
6. Define feature size. (1)
7. Write the formula to find die yield. (2)
8. Expand MTTF & MTTR. (2)
9. Mention 2 widely used benchmarks. (2)
10. When should one use the geometric mean. (2)
11. Explain Amdahl's law (see the formula sketch after this list). (2)
12. Give an example code for loop-level parallelism. (2)
13. How can loop-level parallelism be used to increase ILP. (2)
14. Why are WAR & WAW hazards called false dependences. (1)
15. Why is a RAW hazard known as a true dependence. (1)
16. What is loop unrolling. (2)
17. Explain any 1 static branch predictor. (2)
18. Mention the drawback of the 1-bit branch predictor. (1)
19. Give the central idea of the correlating predictor. (2)
20. Give the central idea of the tournament predictor. (2)
21. Explain register renaming, with an example. (2)
22. Mention 2 methods of dynamic scheduling. (2)
23. Give the central idea of dynamic scheduling. (2)
24. Explain the commit stage in a pipelined processor. (2)
25. How is a RAW hazard handled in Tomasulo's method. (2)
26. Mention 2 drawbacks of VLIW. (2)
27. Expand IA-64. (1)
28. What is a value predictor. (2)
29. Explain the role of the BTB in branch prediction. (2)
30. Compare register renaming & the reorder buffer. (2)
31. Define cache coherence. (2)
32. Explain any one hardware primitive used to implement synchronization. (2)
33. Bring out 2 differences between the snoopy protocol & the directory-based cache coherence protocol. (2)
34. What is memory consistency. (2)
35. Explain the principle of locality. (2)
36. Give the formula to find the average memory access time, considering 2 levels of cache (see the formula sketch after this list). (2)
37. Explain all the cache block states in the snoopy write-invalidate, write-back cache coherence protocol. (2)
38. Bring out the difference between write-update & write-invalidate cache coherence protocols. (2)
39. Explain a pair of instructions that can be used to implement synchronization. (2)
40. State 2 major challenges of parallel processing. (2)
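The formulas behind Q11 (Amdahl's law) and Q36 (average memory access time with 2 cache levels) are reproduced below purely as a study aid; the notation follows Hennessy & Patterson and is not part of the official question list.

    \text{Speedup}_{\text{overall}} = \frac{1}{\left(1 - \text{Fraction}_{\text{enhanced}}\right) + \dfrac{\text{Fraction}_{\text{enhanced}}}{\text{Speedup}_{\text{enhanced}}}}

    \text{AMAT} = \text{HitTime}_{L1} + \text{MissRate}_{L1} \times \left(\text{HitTime}_{L2} + \text{MissRate}_{L2} \times \text{MissPenalty}_{L2}\right)

As an illustrative (made-up) set of numbers: with an L1 hit time of 1 cycle, an L1 miss rate of 5%, an L2 hit time of 10 cycles, an L2 local miss rate of 20% and an L2 miss penalty of 100 cycles, AMAT = 1 + 0.05 x (10 + 0.20 x 100) = 2.5 cycles.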

41. Explain the weak memory consistency model. (2)
42. Explain any 2 types of cache misses. (2)
43. List any 2 cache block placement policies. (2)
44. What is associative memory. (2)
45. What is a TLB?
46. Explain any one method to reduce cache miss rate. (2)
47. Suggest a method to reduce cache hit time. (2)
48. What are the multilevel inclusion & multilevel exclusion properties. (2)
49. What are virtual cache & physical cache. (2)
50. What is set-associative memory. (2)
51. What are superblocks? (2)
52. What are predicated instructions. (2)
53. What is a virtual machine. (2)
54. Give an example processor for IA-64. (1)
55. Expand EPIC. (1)
56. What is a superscalar processor. (1)
57. Mention any 2 benchmark programs. (2)
58. Explain SMP.
59. Explain UMA & NUMA. (2)
60. Bring out 1 difference between the shared memory & message passing paradigms. (2)
61. A pipeline does not decrease the time to complete a task; rather, it increases throughput. State whether the statement is true or false. (1)
62. On what factor does the clock of a pipeline depend? (1)
63. What is software pipelining? (2)
64. What is the advantage of multi-banked memory. (2)
65. What is a commodity cluster? (2)
66. What is the major limitation of a shared-bus multiprocessor. (1)
67. Give any 2 features of a RISC machine. (2)
68. What is the role of dynamic power in a computer. (2)
69. Explain distributed shared memory. (2)
70. Explain the translation of a virtual address into a physical address. (2)
71. What is the role of buffers in the cache write operation. (2)
72. Give an example of a virtual machine. (1)
73. Justify how a small & simple cache reduces hit time. (2)
74. Explain false sharing in a multiprocessor (see the sketch after this list). (2)
75. Give the central idea of the branch delay slot. (2)
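For Q74, a minimal C sketch of false sharing follows. It assumes a POSIX-threads environment and a 64-byte cache line; the struct, field and function names are purely illustrative. Two threads update two different fields that happen to share a cache line, so every write by one thread invalidates the line in the other thread's cache even though no datum is actually shared; padding the struct to a cache-line boundary removes the effect. Build with something like cc -O2 -pthread falseshare.c (file name hypothetical).

    /* Illustrative only: adjacent fields of one struct, each field
     * written by a different thread -> false sharing on one line. */
    #include <pthread.h>
    #include <stdio.h>

    struct counters {
        long a;   /* written only by thread 1 */
        /* char pad[64 - sizeof(long)];  <- uncomment to separate the
         * fields onto different (assumed 64-byte) cache lines */
        long b;   /* written only by thread 2 */
    } shared;

    static void *bump_a(void *arg) {
        (void)arg;
        for (long i = 0; i < 100000000L; i++) shared.a++;
        return NULL;
    }

    static void *bump_b(void *arg) {
        (void)arg;
        for (long i = 0; i < 100000000L; i++) shared.b++;
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, bump_a, NULL);
        pthread_create(&t2, NULL, bump_b, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("a = %ld, b = %ld\n", shared.a, shared.b);
        return 0;
    }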
Descriptive Questions

Unit 1: Fundamentals of Computer Design
1. Give an account of the growth in processor performance since the mid-1980s. (6)
2. List & explain the different classes of computers. (6)
3. Define computer architecture & organization. (4)
4. Define ISA. Illustrate the 7 dimensions of ISA. (9)
5. List & explain 4 trends in technology that are critical to modern implementations of processors. (6)
6. Define feature size. Explain the impact of feature size on the performance of transistors. (5)
7. Explain the impact of the 2 types of power on processor performance. (4)
8. Explain the impact of time, volume & commoditization on the cost of a processor. (6)
9. Give the formula to find the wafer yield, and work a problem to compute it (see the formula sketch after this list). (5)
10. Define the terms MTTF, MTTR & module availability. (6)
11. Explain the need for benchmarks. Mention 2 benchmarks. (5)
12. Give the difference between the arithmetic & geometric mean. When is it better to use the geometric mean. (4)
13. Explain the principle of locality. (4)
14. State Amdahl's law. Give the formula to find the attainable speedup. (4)
15. Explain the different parameters used to find processor performance. (5)
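Q9 asks for a yield formula; one common form of the cost model (as in the 3rd/4th editions of Hennessy & Patterson) is reproduced below as a reference only. Wafer yield itself is taken as a given fraction of good wafers, and the defect-clustering parameter alpha is process dependent (roughly 4 for the processes discussed there).

    \text{Dies per wafer} = \frac{\pi \times (\text{Wafer diameter}/2)^2}{\text{Die area}} - \frac{\pi \times \text{Wafer diameter}}{\sqrt{2 \times \text{Die area}}}

    \text{Die yield} = \text{Wafer yield} \times \left(1 + \frac{\text{Defects per unit area} \times \text{Die area}}{\alpha}\right)^{-\alpha}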

Unit 2: Pipelining: Basic & Intermediate Concepts
1. Explain the concept of pipelining with a suitable example. (5)
2. Derive the maximum speedup attainable in a k-staged pipelined processor. (3)
3. What are the issues to be considered in designing a pipeline. (4)
4. Mention 5 features of RISC. (5)
5. Explain each stage in a 5-staged pipeline for MIPS. (5)
6. List & explain the major hurdles of pipelining. (6)
7. With a suitable sketch, explain how data forwarding can reduce pipeline stalls. (6)
8. Draw a suitable sketch & find the number of stalls for the code below. (4)
   LD R1,0(R2)
   DSUB R4,R1,R5
   AND R6,R1,R7
   OR R8,R1,R9
9. Explain 2 static methods to handle branch hazards. (4)
10. Explain the concept of the branch delay slot. (6)
11. Explain a simple implementation of the pipeline for MIPS. (6)
12. Why is it difficult to handle exceptions in a pipeline. (4)
13. Explain the 5 different dimensions of exceptions. (5)
14. Mention 2 synchronous & 2 terminate exceptions. (4)
15. Which instructions pose difficulty in handling exceptions in a pipeline. (3)

Unit 3: Instruction-Level Parallelism - 1
1. Define ILP, TLP & DLP. (6)
2. Explain loop-level parallelism with an example. (4)
3. Mention 2 methods to exploit ILP in loop-level parallelism. (4)
4. Mention & explain the 3 different types of dependences. (6)
5. List & explain the 3 types of data hazards, with examples. (6)
6. With a suitable example, show how loop unrolling can increase ILP (see the sketch after this unit's list). (6)
7. Mention 5 points to be considered while performing loop unrolling. (5)
8. Explain 2 static branch prediction methods. (4)
9. Explain the 1-bit & 2-bit branch predictors. (5)
10. What is the drawback of the 1-bit predictor. How is it overcome in the 2-bit predictor. (4)
11. Explain the correlating predictor with an example. (5)
12. Explain the tournament branch predictor. (6)
13. Give the central idea of dynamic scheduling. (4)
14. Explain register renaming with a suitable example. (4)
15. Explain Tomasulo's method with a block diagram.
16. Explain the 3 steps in Tomasulo's method. Clearly indicate the step in which each data hazard is resolved. (9)
17. Explain the role of the commit stage in the pipeline. (4)
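For Unit 3, Q6 and Q7, a minimal C sketch of loop unrolling follows (the array length, the unroll factor of 4 and the function names are arbitrary illustrative choices, and N is assumed to be a multiple of 4). Unrolling removes three out of every four iterations' worth of loop overhead and, by keeping four independent partial sums, breaks the single-accumulator dependence chain so that more operations can be scheduled in parallel.

    /* Illustrative only: rolled vs. 4-way unrolled dot product. */
    #include <stdio.h>

    #define N 1024   /* assumed to be a multiple of 4 */

    double dot_rolled(const double *x, const double *y) {
        double s = 0.0;
        for (int i = 0; i < N; i++)
            s += x[i] * y[i];
        return s;
    }

    double dot_unrolled(const double *x, const double *y) {
        /* four partial sums break the dependence on a single s */
        double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
        for (int i = 0; i < N; i += 4) {
            s0 += x[i]     * y[i];
            s1 += x[i + 1] * y[i + 1];
            s2 += x[i + 2] * y[i + 2];
            s3 += x[i + 3] * y[i + 3];
        }
        return (s0 + s1) + (s2 + s3);
    }

    int main(void) {
        static double x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }
        printf("%f %f\n", dot_rolled(x, y), dot_unrolled(x, y));
        return 0;
    }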

Unit 4: Instruction-Level Parallelism - 2
1. Explain 1 method of increasing instruction bandwidth. (6)
2. Write a note on return address predictors. (5)
3. Compare register renaming & re-order buffers. (6)
4. Explain speculation through multiple branches. (5)
5. Write a note on value predictors. (4)
6. With a block diagram, explain the Pentium 4 microarchitecture. (10)
7. Comment on the statement: "processors with faster clock rates will always be faster". (4)
8. Comment on the statement: "sometimes bigger & dumber is better". (4)
9. Give an analysis of the performance of the Pentium 4. (6)

Unit 5: Multiprocessors and Thread-Level Parallelism
1. Explain Flynn's classification of computers. (6)
2. List any 4 reasons which have led to the rise of multiprocessors. (4)
3. With a diagram, explain UMA & NUMA. (8)
4. Explain shared memory & distributed memory. (4)
5. Mention 2 big challenges of parallel processing. (2)
6. Define cache coherence. Give the conditions under which a memory system is said to be cache coherent. (6)
7. Briefly explain 2 schemes for enforcing cache coherence. (6)
8. Explain the snoopy protocol. (4)
9. Draw the transition diagram for the write-invalidate, write-back snoopy protocol. (7)
10. What is false sharing? Give an example. (4)
11. Explain the directory-based cache coherence protocol. (6)
12. Draw the transition diagram for the write-invalidate, write-back directory protocol. (7)
13. With an example, explain the need for synchronization. (4)
14. Explain any 2 hardware primitives used to enforce synchronization. (4)
15. What is the problem in implementing synchronization primitives? Explain how a pair of instructions can be used to enforce synchronization. (6)
16. What is memory consistency? Explain relaxed consistency models. (8)

Unit 6: Review of Memory Hierarchy
1. List & explain the 3 cache block placement policies. (6)
2. Explain how a block is identified in the cache. (3)
3. Explain the different cache write policies. (4)
4. With a neat block diagram, explain the organization of the data cache in the Opteron. (8)
5. List & explain the 3 categories into which cache misses can be categorized. (6)
6. Explain how a large block size can reduce miss rate. (5)
7. Explain how higher associativity can reduce miss rate. (5)
8. Explain the role of multilevel caches in reducing miss penalty. (6)
9. Explain physical cache & virtual cache. (4)
10. Give 3 reasons for not using a virtual cache. (3)

Unit 7: Memory Hierarchy Design
1. Explain the role of the principle of locality in the memory hierarchy. (4)
2. Explain 1 method to reduce hit time. (4)
3. Explain the role of the trace cache in improving cache performance. (4)
4. Explain pipelined cache access. (3)
5. Explain the multibanked-cache optimization to increase cache bandwidth. (5)
6. Explain any 3 compiler optimizations to reduce miss rate. (6)
7. With example code, explain how loop interchange can reduce miss rate (see the sketch after this unit's list). (5)
8. Compare SRAM & DRAM. (4)
9. Explain 3 methods to improve DRAM bandwidth. (6)
10. How can protection be enforced via a virtual machine. (5)
11. Explain the virtual machine monitor. (5)
12. Compare virtual memory & virtual machines. (6)
13. Explain the impact of virtual machines on virtual memory & I/O. (5)
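For Unit 7, Q7, a minimal C sketch of loop interchange follows (the array size and function names are arbitrary illustrative choices). C stores arrays in row-major order, so iterating with the row index in the outer loop touches memory with unit stride and exploits spatial locality; the second version strides by a full row between accesses and therefore misses far more often, which is exactly what interchanging the loops fixes.

    /* Illustrative only: same sum computed with the two loop orders. */
    #include <stdio.h>

    #define N 512

    double sum_good(double a[N][N]) {   /* row-major friendly: unit stride */
        double s = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    double sum_bad(double a[N][N]) {    /* column order: stride of N, poor locality */
        double s = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    int main(void) {
        static double a[N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] = 1.0;
        printf("%f %f\n", sum_good(a), sum_bad(a));
        return 0;
    }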

Unit 8: Hardware & Software for VLIW & EPIC
1. Explain 1 method to detect & enhance parallelism. (5)
2. Explain the copy propagation optimization method with an example. (6)
3. Explain software pipelining with a suitable example. (6)
4. Explain global code motion to optimize code. (5)
5. Explain trace scheduling, with focus on the critical path. (6)
6. Explain the concept of the superblock, with a suitable example. (6)
7. What is a predicated instruction? How can it be used to increase ILP? (see the sketch after this list)
8. Explain the hardware support for speculation. (5)
9. Explain the hardware support for preserving exception behaviour. (5)
10. With a diagram, explain the IA-64 architecture. (10)
11. Explain the instruction formats of IA-64 & their support for parallelism. (6)
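For Unit 8, Q7, the C-level analogy below hints at what predication buys (the function names are illustrative, and real predication, such as IA-64 predicate registers, applies to arbitrary instructions, not only a conditional move). Replacing the branch with a conditional select removes the control dependence, so there is no branch to mispredict and the compiler can schedule the surrounding code as one straight-line block.

    /* Illustrative only: branchy code vs. a predicated-style select. */
    #include <stdio.h>

    int abs_branchy(int x) {
        if (x < 0)          /* conventional code: control dependence */
            x = -x;
        return x;
    }

    int abs_predicated(int x) {
        int neg = -x;
        /* the conditional expression typically compiles to a
         * CMOV-style select rather than a branch */
        return (x < 0) ? neg : x;
    }

    int main(void) {
        printf("%d %d\n", abs_branchy(-7), abs_predicated(-7));
        return 0;
    }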
