
A Brief History of Computer Architecture

Computer architecture is the science and art of selecting and interconnecting hardware components to create computers that meet functional, performance, and cost goals. It refers to those attributes of a system that are visible to the programmer and have a direct impact on the execution of a program. A computer architect coordinates many levels of abstraction and translates business and technology drivers into efficient systems for computing tasks. Computer architecture concerns machine organization, interfaces, applications, technology, and measurement & simulation. It includes:

Instruction set
Data formats
Principles of operation (a textual or formal description of every operation)
Features (organization of programmable storage, registers used, interrupt mechanisms, etc.)

In short, it is the combination of the instruction set architecture, the machine organization, and the underlying hardware.

A Brief History of Computer Architecture: First Generation (1940-1950) Vacuum Tubes

ENIAC (1945): Designed by Mauchly & Eckert and built for the US Army to calculate trajectories for ballistic shells during WWII; used 18,000 vacuum tubes and 1,500 relays; programmed by manually setting switches.
UNIVAC (1951): the first commercial computer.
John von Neumann architecture: Goldstine and von Neumann took the ideas behind ENIAC and developed the concept of storing a program in memory. This became known as the von Neumann architecture and has been the basis for virtually every machine designed since. Features:

Electron-emitting devices (vacuum tubes)
Data and programs stored in a single read-write memory
Memory contents addressable by location, regardless of their content
Machine language / assembly language
Sequential execution

Second Generation (1950-1964) Transistors


William Shockley, John Bardeen, and Walter Brattain invented the transistor, which reduced the size of computers and improved reliability.
First operating systems: handled one program at a time
On-off switches controlled by electricity
High-level languages
Floating-point arithmetic

Third Generation (1964-1974) Integrated Circuits (IC)


Microprocessor chips combined thousands of transistors, placing an entire circuit on one computer chip
Semiconductor memory
Multiple computer models with different performance characteristics
Smaller computers that did not need a specialized room

Fourth Generation (1974-present) Very Large-Scale Integration (VLSI)/Ultra Large Scale Integration (ULSI)

Combines millions of transistors
Single-chip processors and single-board computers emerged
Creation of the personal computer (PC)
Widespread use of data communications
Artificial intelligence: functions & logic predicates
Object-oriented programming: objects & operations on objects
Massively parallel machines

Evolution of Instruction Sets
The instruction set architecture (ISA) is the abstract interface between the hardware and the lowest-level software.

1950: Single accumulator: EDSAC
1953: Accumulator plus index registers: Manchester Mark I, IBM 700 series
Separation of the programming model from the implementation:
1963: High-level-language based: B5000
1964: Concept of a family: IBM 360
General-purpose register machines:
1977-1980: CISC (Complex Instruction Set Computer): VAX, Intel 432
1963-1976: Load/store architecture: CDC 6600, Cray-1

1987: RISC (Reduced Instruction Set Computer): MIPS, SPARC, HP-PA, IBM RS/6000

Typical RISC:

Simple instructions, no complex addressing modes
Constant-length instructions, 32-bit fixed format
Large register file
Hardwired control unit, no need for microprogramming
Just about the opposite of CISC in nearly every respect
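To make the 32-bit fixed format concrete, here is a minimal Python sketch that packs and unpacks a hypothetical register-register instruction. The field names and bit widths are illustrative assumptions (loosely modelled on a MIPS-style R-type layout), not the encoding of any specific commercial ISA.

```python
# Minimal sketch of a 32-bit fixed-format RISC-style instruction word.
# Assumed field layout (illustrative, MIPS R-type-like):
#   opcode[31:26] | rs[25:21] | rt[20:16] | rd[15:11] | shamt[10:6] | funct[5:0]

def encode_rtype(opcode, rs, rt, rd, shamt, funct):
    """Pack the six fields into one 32-bit instruction word."""
    assert 0 <= opcode < 2**6 and 0 <= funct < 2**6
    assert all(0 <= f < 32 for f in (rs, rt, rd, shamt))
    return (opcode << 26) | (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct

def decode_rtype(word):
    """Unpack a 32-bit instruction word back into its fields."""
    return {
        "opcode": (word >> 26) & 0x3F,
        "rs":     (word >> 21) & 0x1F,
        "rt":     (word >> 16) & 0x1F,
        "rd":     (word >> 11) & 0x1F,
        "shamt":  (word >> 6)  & 0x1F,
        "funct":  word & 0x3F,
    }

# Example: an "add rd, rs, rt" style instruction (field values are illustrative).
word = encode_rtype(opcode=0, rs=8, rt=9, rd=10, shamt=0, funct=0x20)
print(hex(word))            # 0x1095020
print(decode_rtype(word))
```

Because every instruction is the same width and each field sits at a fixed bit position, decoding reduces to a few shifts and masks, which is one reason a hardwired control unit is practical for RISC designs.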

Evolution or Revolution? Major advances in computer architecture are typically associated with landmark instruction set designs. The definition of computer architecture itself has gone through significant changes. The following have been the main concerns of computer architecture in different periods:

1930-1950: Computer arithmetic, microprogramming, pipelining, caches, time-shared multiprocessors
1960s: Operating system support, especially memory management; virtual memory
1970-1980: Instruction set design, especially for compilers; vector processing and shared-memory multiprocessors; RISC
1990s: Design of the CPU, memory system, I/O system, multiprocessors, and networks; CC-UMA multiprocessors; CC-NUMA multiprocessors; non-CC-NUMA multiprocessors; message-passing multiprocessors
2000s: Special-purpose architectures, functionally reconfigurable hardware, special considerations for low-power/mobile processing, chip multiprocessors, TLP, memory systems; massive SIMD; parallel-processing multiprocessors

Course of Development
Our online high school courses are carefully developed to create a high-quality, rigorous education. Courses meet the national academic content standards and Nebraska state academic standards for all core subject-matter areas: English, mathematics, science, and social studies. Courses are evaluated annually and updated to reflect changes in learning outcomes, standards, and world views. New courses are added constantly to provide students with relevant learning experiences. Our core and AP courses are approved by the National Collegiate Athletic Association. (To view our list, visit the NCAA Eligibility Center and use our code 281316.) We have also chosen to meet, and in many cases exceed, the iNACOL National Standards of Quality for Online Courses.
Subject Matter Experts
Our high school courses are developed by a team of subject matter experts:

University professors with extensive knowledge and credentials in the content area
Instructional designers with teaching backgrounds who are experienced in distance-education course development and pedagogy
High school teachers certificated by the Nebraska Department of Education, endorsed in their subject area, and experienced in teaching at a distance

A Global Perspective
Our accredited curriculum mirrors our global student body. We have students from all 50 United States and 135 countries. Courses reflect the realization that students come from many different countries, ethnic backgrounds, and cultures. Instructional materials emphasize a global approach to content and activities, particularly relevant in our world today.

Eastern Visayas State University Tacloban City

Assignment in IT 453 (Computer Architecture)

Submitted to: Benito V. Badilla Jr. Instructor

Submitted by: Liscabo, Girlie G. BSIT - 4C

History of Computer Architecture
In computer science and engineering, computer architecture is the practical art of defining the structure and relationships of the subcomponents of a computer. As in designing the architecture of buildings, computer architecture can comprise many levels of information. The highest level of the definition conveys the concepts to be implemented; whereas in building architecture this overview is normally visual, in computer architecture it is primarily logical, positing a conceptual system that serves a particular purpose. In both instances (building and computer), many levels of detail are required to completely specify a given implementation, and some of these details are often implied as common practice. An early example of an architectural definition of a computer was John von Neumann's 1945 paper, First Draft of a Report on the EDVAC, which described an organization of logical elements. IBM used it to develop the IBM 701, the company's first commercial stored-program computer, delivered in early 1952. At a high level, for example, computer architecture is concerned with how the central processing unit (CPU) acts and how it accesses computer memory. Some currently (2011) fashionable computer architectures include cluster computing and Non-Uniform Memory Access. The art of computer architecture has three main subcategories:[1]

Instruction set architecture (ISA): the code that a central processor reads and acts upon. It is the machine language (or assembly language), including the instruction set, word size, memory addressing modes, processor registers, and address and data formats.
Microarchitecture, also known as computer organization: describes the data paths, data-processing elements, and data-storage elements, and how they should implement the ISA.[2] The size of a computer's CPU cache, for instance, is an organizational issue that generally has nothing to do with the ISA.
System design: includes all of the other hardware components within a computing system, such as data paths (computer buses and switches), memory controllers and hierarchies, data processing other than the CPU (such as direct memory access, DMA), and miscellaneous issues such as virtualization or multiprocessing.


From the early days, computers have been used to design the next generation. Programs written in a proposed instruction language can be run on a current computer via emulation. At this stage, it is now commonplace for compiler designers to collaborate, suggesting improvements to the ISA. Modern simulators normally measure time in clock cycles and give power consumption estimates in watts or, especially for mobile systems, energy consumption in joules.
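As a toy illustration of such emulation, the following Python sketch interprets a tiny, hypothetical single-accumulator instruction set in the spirit of the early accumulator machines mentioned earlier. The mnemonics and their semantics are invented for this example and do not correspond to any historical machine.

```python
# Toy emulator for a hypothetical single-accumulator ISA (illustrative only).
# Each instruction is a (mnemonic, operand) pair; memory is a simple list of ints.

def run(program, memory):
    acc = 0          # the single accumulator register
    pc = 0           # program counter: sequential execution, as in a stored-program machine
    while pc < len(program):
        op, arg = program[pc]
        pc += 1
        if op == "LOAD":      # acc <- memory[arg]
            acc = memory[arg]
        elif op == "ADD":     # acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == "STORE":   # memory[arg] <- acc
            memory[arg] = acc
        elif op == "JUMPZ":   # branch to instruction arg if acc == 0
            if acc == 0:
                pc = arg
        elif op == "HALT":
            break
        else:
            raise ValueError(f"unknown opcode {op}")
    return acc, memory

# Example: compute memory[2] = memory[0] + memory[1]
prog = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", 0)]
print(run(prog, [3, 4, 0]))   # -> (7, [3, 4, 7])
```

Running proposed programs on such an interpreter lets architects and compiler writers evaluate an instruction set long before any hardware exists.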

Once the instruction set and microarchitecture are described, a practical machine must be designed. This design process is called the implementation. Implementation is usually not considered architectural definition, but rather hardware design engineering. Implementation can be further broken down into several (not fully distinct) steps:

Logic implementation: design of the blocks defined in the microarchitecture at (primarily) the register-transfer level and the logic-gate level.
Circuit implementation: transistor-level design of basic elements (gates, multiplexers, latches, etc.) as well as of some larger blocks (ALUs, caches, etc.) that may be implemented at this level, or even (partly) at the physical level, for performance reasons.
Physical implementation: physical circuits are drawn out, the different circuit components are placed in a chip floorplan or on a board, and the wires connecting them are routed.
Design validation: the computer as a whole is tested to see whether it works in all situations and at all timings. Once implementation starts, the first design validations are simulations using logic emulators. However, this is usually too slow to run realistic programs, so after making corrections, prototypes are constructed using field-programmable gate arrays (FPGAs). Many hobby projects stop at this stage. The final step is to test prototype integrated circuits, which may require several redesigns to fix problems.
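To give a flavor of what "register-transfer level" means in the logic-implementation step, here is a conceptual Python sketch of a clocked register driving a simple counter: combinational logic computes the next value, and state changes only on a clock edge. This is an illustrative model only; real logic implementation would use a hardware description language such as Verilog or VHDL.

```python
# Conceptual register-transfer-level sketch: state changes only on a clock edge.
# (Illustrative Python model, not an HDL.)

class Register:
    """A width-bit register: holds its value between clock edges."""
    def __init__(self, width=8):
        self.width = width
        self.q = 0                      # current (registered) output
        self._d = 0                     # next value, driven by combinational logic

    def drive(self, d):                 # combinational input to the register
        self._d = d & ((1 << self.width) - 1)

    def clock(self):                    # rising clock edge: q <- d
        self.q = self._d

def counter_step(reg, enable):
    """Combinational next-state logic for a simple enabled counter."""
    reg.drive(reg.q + 1 if enable else reg.q)

# Simulate a few clock cycles of a 4-bit counter.
r = Register(width=4)
for cycle in range(6):
    counter_step(r, enable=True)        # compute next state (combinational)
    r.clock()                           # commit it on the clock edge
    print(f"cycle {cycle}: count = {r.q}")
```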

For CPUs, the entire implementation process is often called CPU design.

Course of Development
Course Description
This course focuses on digital hardware design for all major components of a modern, reduced-instruction-set computer. Topics covered include instruction set architecture; addressing modes; register-transfer notation; control circuitry; pipelining with hazard control; circuits to support interrupts and other exceptions; microprogramming; computer addition and subtraction circuits using unsigned, two's-complement, and excess notation; circuits to support multiplication using Robertson's and Booth's algorithms; circuits for implementing restoring and non-restoring division; square-root circuits; floating-point arithmetic notation and circuits; memory and cache memory systems; segmentation and paging; input/output interfaces; interrupt processing; direct memory access; and several common peripheral devices, including analog-to-digital and digital-to-analog converters.
Prerequisites
An undergraduate course in digital design.
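One of the arithmetic topics listed above, Booth's multiplication algorithm, is often prototyped in software before the circuit is designed. The following Python sketch shows the radix-2 Booth add/subtract-and-shift idea for n-bit two's-complement operands; it is meant only as an algorithmic illustration, not a gate-level design, and the register names (A, Q) follow the usual textbook presentation.

```python
def booth_multiply(multiplicand, multiplier, n=8):
    """Radix-2 Booth multiplication of two n-bit two's-complement integers.

    Returns the 2n-bit signed product as a Python int. Algorithmic sketch only:
    a hardware version would be a sequential circuit with an adder/subtractor,
    registers A and Q, and an arithmetic right shifter.
    """
    mask = (1 << n) - 1
    M = multiplicand & mask          # multiplicand register (n bits)
    A = 0                            # accumulator register (n bits)
    Q = multiplier & mask            # multiplier register (n bits)
    q_minus1 = 0                     # extra bit to the right of Q

    def to_signed(x, bits):
        return x - (1 << bits) if x & (1 << (bits - 1)) else x

    for _ in range(n):
        pair = ((Q & 1) << 1) | q_minus1
        if pair == 0b01:             # 01: A <- A + M
            A = (A + M) & mask
        elif pair == 0b10:           # 10: A <- A - M
            A = (A - M) & mask
        # Arithmetic right shift of the combined (A, Q, q_minus1)
        q_minus1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (n - 1))) & mask
        sign = A & (1 << (n - 1))
        A = ((A >> 1) | sign) & mask

    product = (A << n) | Q           # 2n-bit result held in (A, Q)
    return to_signed(product, 2 * n)

print(booth_multiply(-3, 7))   # -> -21
print(booth_multiply(6, -5))   # -> -30
```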

Course Goal
The purpose of this course is to teach students how to design and implement computers. The focus is on the development of (1) processors that interpret and execute machine instructions; (2) pipelined implementations of such processors; (3) memory subsystems; (4) input/output subsystems; and (5) arithmetic logic units.
Course Objectives

Be able to design hardware implementations of processors capable of executing machine instructions for a given computer architecture.
Be able to design pipelined implementations of such computer processors.
Be able to design memory systems for such processors using integrated memory circuits.
Be able to design input/output interfaces for such processors, both hardware and software.

When This Course Is Typically Offered
This course is typically offered in the fall term at the Applied Physics Laboratory.
Syllabus
Topics Covered

Introduction: History of Computer Architecture
Digital design
Processor design
Pipelined processor design
Computer arithmetic
Software/hardware interaction
Computer memory hierarchy
Input and output

Major Contributors to Computer Architecture
Harvard architecture: physically separate storage and signal pathways for instructions and data. (The term originated with the Harvard Mark I, a relay-based computer that stored instructions on punched tape and data in relay latches.)
Von Neumann architecture: a single storage structure holds both the instructions and the data. Such machines are also known as stored-program computers.
Von Neumann bottleneck: the bandwidth, or data transfer rate, between the CPU and memory is very small in comparison with the amount of memory.

Eastern Visayas State University Tacloban City

Assignment in IT 453 (Computer Architecture)

Submitted to: Benito V. Badilla Jr. Instructor

Submitted by: Indangan, Reyvelyn L. BSIT - 4C

Eastern Visayas State University Tacloban City

Assignment in IT 453 (Computer Architecture)

Submitted to: Benito V. Badilla Jr. Instructor

Submitted by: Pangilinan, Joey R. BSIT - 4C

Course Development of Computer Architecture
This course focuses on digital hardware design for all major components of a modern, reduced-instruction-set computer. Topics covered include instruction set architecture; addressing modes; register-transfer notation; control circuitry; pipelining with hazard control; circuits to support interrupts and other exceptions; microprogramming; computer addition and subtraction circuits using unsigned, two's-complement, and excess notation; circuits to support multiplication using Robertson's and Booth's algorithms; circuits for implementing restoring and non-restoring division; square-root circuits; floating-point arithmetic notation and circuits; memory and cache memory systems; segmentation and paging; input/output interfaces; interrupt processing; direct memory access; and several common peripheral devices, including analog-to-digital and digital-to-analog converters.

Major Contributors to the Development of Computer Architecture
First Generation (1940-1950) Vacuum Tubes

ENIAC (1945): Designed by Mauchly & Eckert and built for the US Army to calculate trajectories for ballistic shells during WWII; used 18,000 vacuum tubes and 1,500 relays; programmed by manually setting switches.
UNIVAC (1951): the first commercial computer.
John von Neumann architecture: Goldstine and von Neumann took the ideas behind ENIAC and developed the concept of storing a program in memory. This became known as the von Neumann architecture and has been the basis for virtually every machine designed since. Features:

Electron-emitting devices (vacuum tubes)
Data and programs stored in a single read-write memory
Memory contents addressable by location, regardless of their content
Machine language / assembly language
Sequential execution

Second Generation (1950-1964) Transistors

William Shockley, John Bardeen, and Walter Brattain invented the transistor, which reduced the size of computers and improved reliability.

First operating systems: handled one program at a time
On-off switches controlled by electricity
High-level languages
Floating-point arithmetic

Third Generation (1964-1974) Integrated Circuits (IC)


Microprocessor chips combined thousands of transistors, placing an entire circuit on one computer chip
Semiconductor memory
Multiple computer models with different performance characteristics
Smaller computers that did not need a specialized room

Fourth Generation (1974-present) Very Large-Scale Integration (VLSI)/Ultra Large Scale Integration (ULSI)

Combines millions of transistors
Single-chip processors and single-board computers emerged
Creation of the personal computer (PC)
Widespread use of data communications
Artificial intelligence: functions & logic predicates
Object-oriented programming: objects & operations on objects
Massively parallel machines
